Late last month, Microsoft President Brad Smith issued a dire warning. He noted that a Chinese operation—the “Beijing Academy of Artificial Intelligence”—was only months behind his own company and Google in the AI technological race.
He then raised alarm about AI’s potential as a weapon. “We should absolutely assume, and even expect, that certain nation states will use AI to launch cyber attacks, even stronger cyber attacks and cyber influence operations than we see today,” he told Japan-based news magazine Nikkei Asia. By “certain nation states,” of course, he was referring to China.
Of the many concerned statements about AI technology, Smith’s remarks may win the prize for most ironic, or, to be more precise, most hypocritical. That’s because China’s AI success is due in no small part to its long-time partnership with a certain American company: Microsoft. The company has for decades helped China build up its AI infrastructure—a partnership the U.S. government ought to investigate.
In 1998, as a way of combating piracy of his company’s software, Microsoft founder Bill Gates invested time, money, and energy in China, culminating in the creation of Microsoft Research China, later renamed Microsoft Research Asia (MSRA).
On its surface, MSRA was a modest foreign research and development outpost; in reality, MSRA trained the developers behind the Chinese Communist Party’s digital surveillance tools. In fact, MSRA was so successful at training technical talent that it came to be called the new “Whampoa Military Academy,” a nod to the school that produced some of China’s fiercest military commanders.
Many of MSRA’s researchers, Chinese citizens, have gone on to work for China’s top surveillance companies, where they have advanced the Chinese surveillance state and invited U.S. sanctions in the process.
MSRA alumnus Sun Jian, for instance, went on to become chief scientist of Megvii, the Chinese facial recognition unicorn, a company sanctioned in December 2021 for “actively cooperating with the government’s efforts to repress members of ethnic and religious minority groups,” according to the Department of the Treasury. Another former MSRA research scientist, Tang Xiao’ou, became a billionaire in 2021 from his facial recognition company SenseTime, also sanctioned by the U.S. Department of the Treasury.
Li Shipeng, a founding member of Microsoft Research Asia, went on to voice recognition company iFlyTek to head its AI research. In 2021, iFlyTek was also sanctioned for selling surveillance technologies used in human rights abuses.
These Chinese firms are leaders in the fields of facial and voice recognition for surveillance—and each one has been sanctioned for human rights abuses. In Xinjiang, where these companies operate, Chinese authorities have detained up to 1.8 million people, mostly Uyghur Muslims, in a network of concentration camps. The camps are the largest internment of ethnic minorities since the Holocaust.
When I was reporting on Xinjiang, Uyghur refugees told me that the government had installed surveillance cameras in their homes and that the entire region was blanketed in police pillboxes and cameras, as if in a science fiction novel. Uyghurs and other minority groups were hauled away to camps for praying and reading religious texts, which signified to the Communist Party that they would carry out “pre-crimes” such as hijacking an airplane or robbing a store. It was darkly dystopian, and it happened with the help of MSRA alumni.
In 2010, MSRA alumni at Tencent built WeChat, China’s leading instant messaging app—and a CCP tool for mass surveillance. In 2012, a Microsoft team in China created a neural network to allow AI to learn more quickly—technology that became a cornerstone of China’s surveillance systems.
“Microsoft, you can say, has become the central pillar of new technologies in China,” a Chinese Microsoft employee admitted to me in 2007. “It’s difficult to express in words how far we’ve come.”
Nor are these problems confined to the past. Just four years ago, the news broke that Microsoft employees had co-published papers with researchers at a Chinese military-run university, the National University of Defense Technology. One particularly troubling paper described a novel AI process to create environmental maps by analyzing human faces, giving the system a better understanding of the surrounding environment without a camera—one of many technologies that were used to build China’s surveillance state.
The government surveillance applications of such technologies were self-evident—and so worrisome that even members of Congress spoke out about Microsoft’s involvement.
As recently as this year, ByteDance, the China-based parent company of TikTok, announced Microsoft as a new partner. The two are reportedly planning to work together on big data and artificial intelligence, a partnership that could give ByteDance even more data about Americans’ digital habits.
Perhaps because of its sordid past and suspicious present, Microsoft has said little in response to troubling reports about Chinese misbehavior, refraining from the usual corporate condemnations of China’s human rights record. Even last year, when LinkedIn, the social media service Microsoft owns, exited the Chinese market, the company cited only a “challenging operating environment.” And in May 2022, Chinese media reported that Microsoft would no longer recruit students from military-connected universities, though Microsoft has remained silent on that, too.
All of this—the past, the present, and the silence—leads to the obvious conclusion: Microsoft’s relationship with China and its assistance with AI surveillance technology is a national security issue for the United States, and the company needs to be held accountable.
Brad Smith is right: The Chinese AI problem is a problem. And officials can begin to fix it by investigating Smith’s company, if only to understand the role that Microsoft played in causing the problem he is now warning about.