Burying Huawei and ZTE
The Federal Communications Commission (FCC) voted last week to ban the import and sale of all communications equipment made by Chinese telecommunications companies Huawei and ZTE, saying these products pose “unacceptable risk” to U.S. national security. In a prepared statement, FCC Chairwoman Jessica Rosenworcel said, “The FCC is committed to protecting our national security by ensuring that untrustworthy communications equipment is not authorized for use within our borders, and we are continuing that work here.”
While government actions against Huawei, ZTE, and other Chinese technology makers have increased over the last five years, this is the first time the FCC has voted to prohibit the authorization of new equipment based on national security concerns.
Here’s what I’m thinking (HWIT):
The wisdom of this action should be obvious. This newsletter, and increasingly other media, frequently discuss the now undeniable risks associated with Chinese and Chinese-linked technology companies. The Chinese Communist Party (CCP) exercises decisive control over these entities and employs them as extensions of the government for economic, social, and political espionage. Preventing this within our country is among the most basic actions we can take for our self-preservation, and I’m glad it’s happening.
There is now a playbook for tightening the noose. In many ways, last week’s vote is the culmination of a yearslong process, one that can be repeated as we address similarly risky companies. First, you use the National Defense Authorization Act (NDAA)—the annual military budget bill that is one of a very few “must pass” bills in Congress—to ban the U.S. military from purchasing or using risky foreign technology. Next, you add the company to the Commerce Department’s Entity List, which places strict restrictions on how U.S. companies and individuals are allowed to do business with the listed entity, often denying it access to essential goods and services. Then, via legislation, you expand the defense ban to cover the entire U.S. government. The ban can then be expanded again so that no company that contracts with the U.S. government can use the prohibited services or employ subcontractors who use them. Finally, through executive or legislative action, the company or service can be banned outright from doing any business in the United States.
There’s a long list of other companies that are just as dangerous as Huawei and ZTE. The same rationale equally justifies similar actions against a host of Chinese tech companies operating in the United States. For drones, DJI and Autel come to mind. For internet-connected devices, there’s Tuya. For cloud computing, Alibaba and Tencent. And of course, there’s always my personal white whale, TikTok. These and many, many others can and should be removed from the U.S. market.
If this makes you uncomfortable, blame China. I understand those who read what I’ve said and get queasy. If this isn’t “industrial policy,” we’re certainly bumping up against it. That stinks. While I am a full-throated supporter of the free market, I am not suicidal. Washington does not unilaterally dictate everyone else’s behavior, and the U.S. has certainly given Beijing every opportunity to thrive as a responsible actor. It has chosen not to do so. Now, motivated by our own national security interests, we are forced to assume a more confrontational and aggressive posture. So be it. Providing for the common defense is part of our government’s constitutional responsibility, and that defense increasingly depends on data security.
Russian Software Sneaks Into Military and CDC Apps
Pushwoosh, a computer software company that says it’s headquartered in the United States but is actually a Russian company, has infiltrated thousands of smartphone applications, including applications used by the U.S. Army and the Centers for Disease Control and Prevention (CDC), according to Reuters.
In the United States and on social media, Pushwoosh claims it is an American company headquartered at different times in California, Maryland, or Washington, D.C. But according to company documents filed in Russia and reviewed by Reuters, Pushwoosh is headquartered in the Siberian city of Novosibirsk.
The company provides code and data-processing services for software developers, and its software was integrated into apps the Army used at its National Training Center at Fort Irwin, California, until it was removed after the Russian connection was discovered. The CDC likewise removed Pushwoosh software from several public-facing apps over the same security concerns.
A company press release responding to Reuters’ reporting says the following:
Pushwoosh Inc. is a privately held C-Corp company incorporated under the state laws of Delaware, USA. Pushwoosh Inc. was never owned by any company registered in the Russian Federation … Pushwoosh Inc. used to outsource development parts of the product to the Russian company in Novosibirsk, mentioned in the article. However, in February 2022, Pushwoosh Inc. terminated the contract.
HWIT:
Pushwoosh would have access to tons of data. The core function of the company’s services is code that lets software developers profile users based on their digital activities and then send them tailored notifications. Not every company wants to build this kind of code from scratch because it is complex and expensive, so thousands of companies buy “off the shelf” solutions like those offered by Pushwoosh. This large customer base allows the code provider to build huge databases on its almost 8,000 customers’ customers, giving it insight into an unknown, but large, number of individuals.
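To make that concrete, here is a hypothetical, heavily simplified sketch (in Python) of how this kind of “off the shelf” notification SDK typically gets wired into an app. It is not Pushwoosh’s actual API; the vendor endpoint, class, and method names are all invented for illustration. The point is simply how little developer effort it takes to ship device details and behavioral data to a third-party vendor.

```python
# A hypothetical, simplified sketch of how an "off the shelf" notification SDK
# is typically wired into an app. This is NOT Pushwoosh's actual API; the
# vendor URL, class, and method names are invented purely for illustration.
import json

VENDOR_ENDPOINT = "https://api.example-push-vendor.com/v1"  # hypothetical endpoint


class PushSDK:
    def __init__(self, app_id: str, device_id: str):
        self.app_id = app_id
        self.device_id = device_id

    def _post(self, path: str, payload: dict) -> None:
        # In a real SDK this would be an HTTPS call to the vendor's servers;
        # here we just print the payload to show what data leaves the device.
        record = {"app_id": self.app_id, "device_id": self.device_id, **payload}
        print(f"POST {VENDOR_ENDPOINT}/{path}: {json.dumps(record)}")

    def register_device(self, push_token: str, os: str, locale: str) -> None:
        # Registering the device lets the vendor target it with notifications,
        # handing over identifying device details in the process.
        self._post("register", {"push_token": push_token, "os": os, "locale": locale})

    def track_event(self, name: str, attributes: dict) -> None:
        # Behavioral events ("opened_app", "viewed_screen", etc.) accumulate into
        # the per-user profiles that drive tailored notifications.
        self._post("event", {"event": name, "attributes": attributes})


# In the host app, a developer typically adds only a few lines like these:
sdk = PushSDK(app_id="MY-APP-ID", device_id="device-1234")
sdk.register_device(push_token="abc123", os="Android 13", locale="en_US")
sdk.track_event("opened_app", {"screen": "home"})
```

Multiply those few lines by thousands of apps and the vendor ends up holding the kind of aggregate profile database described above.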
Russia is just like China: data hungry and authoritarian. Moscow has passed laws, like those in China, that compel all companies within its borders to provide the government with access to their data. Pushwoosh’s explanations all reek of the same tortured—or outright deceptive—language employed by other known hostile foreign companies. For example, it appears company executives are trying to draw a distinction between “Pushwoosh Inc.” incorporated in Delaware and “the Russian company in Novosibirsk.” Just ignore the fact that the name of that “Russian company” is also Pushwoosh. Similarly, saying Pushwoosh Inc. “was never owned” by a company “registered in the Russian Federation” is just another rhetorical smokescreen intended to obscure the truth.
This problem is deep and wide. I imagine we’ve only seen the tip of the iceberg when it comes to hostile foreign companies. The truth is that our open marketplace presupposes good-faith actors and is easily exploited by nations willing to hide their origins and intentions. What’s more, often the only thing needed to throw investigators off the scent is a name change and some new paperwork. But this is a permanent feature of the modern business and national security environment, and the U.S. private and public sectors need a reliable, repeatable process for ferreting out these bad actors before they can dig into our valuable data like technological ticks.
Meta’s CICERO AI Is a Master Strategist (Sort Of)
Facebook’s parent company, Meta, has released details about its new strategy AI, called CICERO. The new agent appears to be a breakthrough because of the way it combines natural language processing and strategic reasoning, even achieving human-level performance in the strategy game Diplomacy.
First, you should know the basics of Diplomacy:
- The goal of the game is to control a majority of the supply centers on a map of Europe.
- It starts with seven players, each controlling a European power in 1901.
- Players work to win by building alliances that are negotiated through private, one-on-one conversations.
- Agreements are not binding—meaning players can lie and double-deal.
- After a period of negotiation, players write down their moves, which are then executed simultaneously, with each player trusting that the others were honest about their intentions (a toy sketch of this simultaneous resolution follows this list).
- While lying and misdirection are allowed, the only way to win the whole game is by building trust through negotiation and cooperation with other players.
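For the curious, here is that simultaneous-move mechanic as a toy Python sketch: every player commits an order in secret, and only once all orders are locked in are they revealed and resolved together. It is heavily simplified (no supports, convoys, or real adjudication rules), and all the names are invented for illustration rather than taken from any actual Diplomacy engine.

```python
# A toy illustration of Diplomacy's simultaneous-move mechanic: every player
# submits orders in secret, and only after all orders are locked in are they
# revealed and resolved together. Heavily simplified; invented names only.

def secret_order(player):
    # Stand-in for a player's private decision. What a player *promised* in
    # negotiation and what they actually order can be totally different.
    promised = {"France": "support England", "England": "hold"}
    actual = {"France": "attack England", "England": "hold"}
    return {"promised": promised.get(player, "hold"),
            "actual": actual.get(player, "hold")}

def collect_orders(players):
    """Each player secretly commits an order; nothing is revealed yet."""
    return {p: secret_order(p) for p in players}

def resolve(orders):
    """Reveal and execute every order at once; betrayals surface only now."""
    for player, order in orders.items():
        flag = " (BETRAYAL!)" if order["actual"] != order["promised"] else ""
        print(f"{player}: promised '{order['promised']}', ordered '{order['actual']}'{flag}")

resolve(collect_orders(["France", "England"]))
```

The gap between what is promised in conversation and what is actually ordered is exactly the space CICERO has to navigate.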
According to Meta, between August and October of this year, CICERO played 40 games of Diplomacy online against anonymous human opponents. In each game, the AI agent sent an average of 130 messages to the other six players. It also scored twice the average of its human competitors and ranked in the top 10 percent of participants who played more than one game.
HWIT:
This is very impressive. The special sauce that makes CICERO remarkable isn’t that it can understand natural language communication or that it can “think” in the context of a strategy game. Both capabilities, impressive as they are, have been demonstrated by other AI agents. The magic is that CICERO develops a plan for winning, engages other players in back-and-forth conversations, uses those conversations to understand and shape others’ strategies—using “empathy” and persuasion—and then adjusts its own strategy based on these engagements and the players’ moves in the game. It is a level of complexity and performance that is truly amazing.
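To make that loop a bit more tangible, here is a conceptual sketch in Python. This is not Meta’s published CICERO code or architecture; every function name is invented, and the “models” are trivial stand-ins. It only illustrates the plan, converse, re-plan rhythm described above.

```python
# A conceptual sketch of the plan/converse/adjust loop described above. This is
# NOT Meta's published CICERO code; every name here is invented, and the
# "models" are trivial stand-ins, purely to show how the pieces fit together.

def propose_plan(state, power, assumed_intents=None):
    # Stand-in for a strategic planner (a learned planning model in practice).
    return f"{power} plan given {state} and {assumed_intents}"

def predict_intent(state, messages):
    # Stand-in for a model that infers another player's likely moves from dialogue.
    return f"intent inferred from {len(messages)} messages"

def generate_message(plan, their_intent, power):
    # Stand-in for the language model that tries to persuade a player toward
    # moves compatible with the current plan.
    return f"To {power}: let's cooperate ({plan} vs {their_intent})"

def play_turn(state, conversations, my_power):
    plan = propose_plan(state, my_power)                                   # 1. initial plan
    intents = {p: predict_intent(state, m) for p, m in conversations.items()}
    for p, intent in intents.items():                                      # 2. negotiate
        conversations[p].append(generate_message(plan, intent, p))
    intents = {p: predict_intent(state, m) for p, m in conversations.items()}
    return propose_plan(state, my_power, assumed_intents=intents)          # 3. re-plan, submit orders

print(play_turn("Spring 1901", {"England": [], "Germany": []}, "France"))
```

The notable part is the coupling: the dialogue is generated in service of the plan, and the plan is revised in light of the dialogue, turn after turn.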
Strategy is no longer an exclusively human activity. For years, we’ve had AI agents that dominate humans in checkers, chess, the Chinese game “Go,” and other strategy games. In one sense, CICERO is just the next evolution. But it’s an evolutionary leap that demonstrates how machines can exercise a type of “understanding,” “judgment,” and even “persuasion” that just a few months ago was thought to be reserved for humans. While there are lots of little details about the parameters of CICERO’s operations, how the games were set up, and so on, the broader point is that a computer proved not only that it can engage and negotiate with humans to achieve a strategic end, but that it is often better than humans at doing so. If past is prologue, the AI’s ability to play millions of real and simulated games of Diplomacy means we can expect CICERO to routinely outstrip human opponents in the very near future.
The future is AI-enabled defense and foreign policy strategy. Some skeptics will dismiss the idea that an AI will ever “replace” humans when it comes to strategy. But it’s not a question of replacing humans. What is almost inevitable is that highly capable AI agents will augment human strategists, and this type of teaming will have fascinating implications.
For example, using AIs will allow humans to generate and test strategies at a complexity and scale that are currently unimaginable. We’ll also have sophisticated models of our opponents that will allow us to more accurately anticipate their motives, capabilities, and intentions—all of which could lead to more fruitful diplomatic engagements that make conflict less likely. But things could also get dicier.
Because AIs can often process data at scales no human ever could, they also often make connections that are not immediately clear—or even understandable—to humans. These connections are not necessarily wrong; they are just “black boxes,” opaque and inscrutable to us meat puppets. So what happens when an AI agent makes or recommends a course of action that all human history says is a mistake? Maybe it is a mistake, maybe it isn’t. Maybe the AI has crunched the numbers and found a unique opportunity where breaking with settled wisdom is the best way to throw off an opponent and win the day.
Now consider being the defender in that same situation: Your opponent has done something that everyone on the planet would say is a mistake, but you know they’re using a superintelligent AI that has proven its strategic merit. Is the unconventional move a mistake you can exploit, or a masterstroke of genius you now have to parry? How do you know? What does indications-and-warning analysis even look like in a world of constantly learning machines? I don’t know, but it’s going to be epic.
Don’t get too worried, though. As cool as all of this is, we’ve got a long way to go. Our machines are certainly getting smarter, but they are still narrowly focused and highly susceptible to bad data and mistakes. Instead of running the world, the most likely future for powerful AIs is that they’ll be employed by humans to make the world better in many, many ways. That reminds me of a quote from John F. Kennedy that I’ll use to end this newsletter: “There are risks and costs to action. But they are far less than the long-range risks of comfortable inaction.”
That’s it for this edition of The Current. Be sure to comment on this post and to share this newsletter with your family, friends, and followers. You can also follow me on Twitter (@KlonKitchen). Thanks for taking the time and I’ll see you next week!