Huawei tested AI software that could recognize Uighur minorities and alert police, report says

If the system detected the face of a member of the mostly Muslim minority group, the test report said, it could trigger a “Uighur alarm” — potentially flagging them for police in China, where members of the group have been detained en masse as part of a brutal government crackdown. The document, which was found on Huawei’s website, was removed shortly after The Post and IPVM asked Huawei and Megvii, the facial recognition start-up whose software was tested alongside Huawei’s hardware, for comment.

Such technology has in recent years taken on an expanding role among police departments in China, human rights activists say. But the document sheds new light on how Huawei, the world’s biggest maker of telecommunications equipment, has also contributed to its development, providing the servers, cameras, cloud-computing infrastructure and other tools undergirding the systems’ technological might.

John Honovich, the founder of IPVM, a Pennsylvania-based company that reviews and investigates video-surveillance equipment, said the document showed how “terrifying” and “totally normalized” such discriminatory technology has become.

“This is not one isolated company. This is systematic,” Honovich said. “A lot of thought went into making sure this ‘Uighur alarm’ works.”

Huawei and Megvii have announced three surveillance systems that use both companies’ technology over the past couple of years. The Post could not immediately confirm whether the system with the “Uighur alarm” tested in 2018 was one of the three currently for sale.

Both companies have acknowledged the document is real. Shortly after this story was published Tuesday morning, Huawei spokesman Glenn Schloss said the report “is simply a test and it has not seen real-world application. Huawei only supplies general-purpose products for this kind of testing. We do not provide custom algorithms or applications.”

Also after publication, a Megvii spokesman said the company’s systems are not designed to target or label ethnic groups.

Chinese officials have said such systems reflect the country’s technological advancement, and that their expanded use can help government responders and keep people safe. But to international rights advocates, they are a clear sign of China’s dream of social control — a way to identify unfavorable members of society and squash public dissent. China’s foreign ministry did not immediately respond to requests for comment.

Artificial-intelligence researchers and human-rights advocates said they worry the technology’s development and normalization could lead to its spread across the world, as government authorities elsewhere push for a fast and automated way to detect members of ethnic groups they’ve deemed undesirable or a danger to their political control.

Maya Wang, a senior China researcher at the advocacy group Human Rights Watch, said the country has increasingly used AI-assisted surveillance to closely monitor the general public and oppress minorities, protesters and others deemed threats to the state.

“China’s surveillance ambition goes way, way, way beyond minority persecution,” Wang said, but “the persecution of minorities is obviously not exclusive to China. … And these systems would lend themselves quite well to countries that want to criminalize minorities.”

Trained on immense numbers of facial photos, the systems can begin to detect certain patterns that might differentiate, for instance, the faces of Uighur minorities from those of the Han majority in China. In one 2018 paper, “Facial feature discovery for ethnicity recognition,” AI researchers in China designed algorithms that could distinguish between the “facial landmarks” of Uighur, Korean and Tibetan faces.

But the software has sparked major ethical debates among AI researchers, who say it could assist in discrimination, profiling or punishment. They also argue that such a system is bound to return inaccurate results, because its performance would vary widely based on lighting, image quality and other factors — and because people’s ethnicities and backgrounds do not break down cleanly into simple groupings.

Clare Garvie, a senior associate at Georgetown Law’s Center on Privacy & Technology who has studied facial recognition software, said the “Uighur alarm” software represents a dangerous step toward automating ethnic discrimination at a devastating scale.

“There are certain tools that quite simply have no positive application and plenty of negative applications, and an ethnic-classification tool is one of those,” Garvie said. “Name a human rights norm, and this is probably violative of that.”

Huawei and Megvii are two of China’s most prominent tech trailblazers, and officials have cast them as leaders of a national drive to reach the cutting edge of AI development. But the multibillion-dollar companies have also faced blowback from U.S. authorities, who argue they represent a security threat to the U.S. or have contributed to China’s brutal regime of ethnic oppression.

Eight Chinese companies, including Megvii, were sanctioned by the U.S. Commerce Department last year for their involvement in “human rights violations and abuses in the implementation of China’s campaign of repression, mass arbitrary detention, and high-technology surveillance” against Uighurs and other Muslim minority groups.

The U.S. government has also sanctioned Huawei, banning the export of U.S. technology to the company and lobbying other countries to exclude its systems from their telecommunications networks.

Huawei, a hardware behemoth with equipment and services used in more than 170 countries, has surpassed Apple to become the world’s second-biggest maker of smartphones and is pushing to lead an international rollout of new 5G mobile networks that could reshape the Internet.

And Megvii, the Beijing-based developer of the Face++ system and one of the world’s most highly valued facial recognition start-ups, said in a public-offering prospectus last year that its “city [Internet of Things] solutions,” which include camera systems, sensors and software that government agencies can use to monitor the public, covered 112 cities across China as of last June.

The “Uighur alarm” document obtained by the researchers, called an “interoperability test report,” offers technical information on how authorities can align the Huawei-Megvii systems with other software tools for seamless public surveillance.

The test examined how a combination of Megvii’s facial recognition software and Huawei’s cameras, servers, networking equipment, cloud-computing platform and other hardware and software performed on dozens of “basic functions,” including support for “recognition based on age, sex, ethnicity and angle of facial images,” the report states. The system passed those tests, as well as another assessing its ability to support offline “Uighur alarms.”

The test report also said the system was able to take real-time snapshots of pedestrians, analyze video files and replay the 10 seconds of footage before and after any Uighur face is detected.

The document did not provide information on where or how often the system is used. But similar systems are used by police departments across China, according to official documents reviewed last year by the New York Times, which found one city system that had scanned for Uighur faces half a million times in a single month.

Jonathan Frankle, a deep-learning researcher at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Lab, said such systems are clearly becoming a priority among developers willing to capitalize on the technical ability to classify people by ethnicity or race. The flood of facial-image data from public crowds, he added, could be used to further develop the systems’ precision and processing power.

“People don’t go to the trouble of building expensive systems like this for nothing,” Frankle said. “These aren’t people burning money for fun. If they did this, they did it for a very specific reason in mind. And that reason is very clear.”

It’s less certain whether ethnicity-detecting software could ever take off outside the borders of a surveillance state. In the U.S. and other Western-style democracies, the systems could run up against long-established laws limiting government searches and mandating equal protection under the law.

Police and federal authorities in the U.S. have shown increasing interest in facial recognition software as an investigative tool, but the systems have sparked a fierce public backlash over their potential bias and inaccuracies, and some cities and police forces have opted to ban the technology outright.

Such technologies could, however, find a market among governments that fall somewhere between Chinese and American spheres of influence. In Uganda, Huawei facial recognition cameras have already been used by police and government officials to surveil protesters and political opponents.

“If you’re willing to model your government and run your country in that way,” Frankle said, “why wouldn’t you use the best technology available to exert control over your citizens?”

Discrimination against Uighurs has long been prevalent among China’s Han majority. In the Xinjiang region of northwestern China, authorities have cited sporadic acts of terrorism as justification for a harsh crackdown starting in 2015 that has drawn condemnation from the U.S. and other Western nations. Scholars estimate that more than 1 million Uighurs have been detained in reeducation camps, and some former detainees have reported torture.

Under international pressure, Xinjiang authorities announced in December 2019 that all reeducation “students” had graduated, though some Uighurs have since reported that they were forced to agree to work in factories or risk a return to detention. Xinjiang authorities say all residents work of their own free will.

The U.S. government has banned the import of certain products from China on the basis that they could have been made by forced labor in Xinjiang.

One of the Huawei-Megvii systems offered for sale after the “Uighur alarm” test, in June 2019, is advertised as saving local governments digital storage space by consolidating images in a single place.

Two other systems, said to use Megvii’s surveillance software and Huawei’s Atlas AI computing platform, were announced for sale in September of this year. Both were described as “localization” of the products using Huawei chips and listed for sale “only by invitation.” Marketing materials for one of those systems say it was used by authorities in China’s southern Guizhou province to catch a criminal.

Source: WP