The Global Expansion of AI Surveillance
Startling developments keep emerging, from the advent of deepfake videos that blur the line between truth and falsehood, to advanced algorithms that can beat the best players in the world at multiplayer poker.
Yet a growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens to accomplish a range of policy objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground.
An important AI subfield is machine learning, a statistical process that analyzes large amounts of data in order to discern a pattern that explains the current data and can predict future outcomes.6 Several breakthroughs are making new achievements in the field possible: the maturation of machine learning and the advent of deep learning;
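The statistical idea behind machine learning can be sketched in a few lines: fit a pattern to observed data, then use the fitted pattern to predict unseen values. The example below uses a simple least-squares line fit with invented data points; real systems use far richer models (such as deep neural networks), but the principle of explaining current data to predict future outcomes is the same.

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b to (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Training data": observations the model must explain (invented values).
observations = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
slope, intercept = fit_line(observations)

# Use the learned pattern to predict an unseen input.
predicted = slope * 5 + intercept
```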
It is starting to transform basic patterns of governance, not only by providing governments with unprecedented capabilities to monitor their citizens and shape their choices but also by giving them new capacity to disrupt elections, elevate false information, and delegitimize democratic discourse across borders.
The focus of this paper is on AI surveillance and the specific ways governments are harnessing a multitude of tools—from facial recognition systems and big data platforms to predictive policing algorithms—to advance their political goals.
The index breaks down AI surveillance tools into the following subcategories: 1) smart city/safe city, 2) facial recognition systems, and 3) smart policing.
The index uses the same list of countries found in the Varieties of Democracy (V-Dem) project with two minor exceptions.8 The V-Dem country list includes all independent polities worldwide but excludes microstates with populations below 250,000.
The research collection effort combed through open-source material, country by country, in English and other languages, including news articles, websites, corporate documents, academic articles, NGO reports, expert submissions, and other public sources.
Given limited resources and staffing constraints (one full-time researcher plus volunteer research assistance), the index is only able to offer a snapshot of AI surveillance levels in a given country.
The index does not differentiate between governments that expansively deploy AI surveillance techniques versus those that use AI surveillance to a much lesser degree (for example, the index does not include a standardized interval scale correlating to levels of AI surveillance).
Because this is a nascent field and there is scant information about how different countries are using AI surveillance techniques, attempting to score a country’s relative use of AI surveillance would introduce a significant level of researcher bias.
report raised eyebrows when it found that eighteen out of sixty-five assessed countries were using AI surveillance technology from Chinese companies.9 The report’s assessment period ran from June 1, 2017 to May 31, 2018.
the region has eighteen of twenty countries with the lowest levels of internet penetration).10 Given the aggressiveness of Chinese companies to penetrate African markets via BRI, these numbers will likely rise in the coming years.
In contrast, 37 percent of closed autocratic states, 41 percent of electoral autocratic/competitive autocratic states, and 41 percent of electoral democracies/illiberal democracies deploy AI surveillance technology.
Liberal democratic governments are aggressively using AI tools to police borders, apprehend potential criminals, monitor citizens for bad behavior, and pull out suspected terrorists from crowds.
A 2016 investigation by Axios’s Kim Hart revealed, for example, that the Baltimore police had secretly deployed aerial drones to carry out daily surveillance over the city’s residents: “From a plane flying overhead, powerful cameras capture aerial images of the entire city.
Photos are snapped every second, and the plane can be circling the city for up to 10 hours a day.”11 Baltimore’s police also deployed facial recognition cameras to monitor and arrest protesters, particularly during 2018 riots in the city.12 The ACLU condemned these techniques as the “technological equivalent of putting an ankle GPS [Global Positioning System] monitor on every person in Baltimore.”13 On the U.S.-Mexico border, an array of high-tech companies also purvey advanced surveillance equipment.
Captured images “are analysed using artificial intelligence to pick out humans from wildlife and other moving objects.”14 It is unclear to what extent these surveillance deployments are covered in U.S. law, let alone whether these actions meet the necessity and proportionality standard.
The goal of the program is to reduce crime by establishing a vast public surveillance network featuring an intelligence operations center and nearly one thousand intelligent closed-circuit television (CCTV) cameras (the number will double by 2020).
The package included upgraded high definition CCTV surveillance and an intelligent command center powered by algorithms to detect unusual movements and crowd formations.16 The fact that so many democracies—as well as autocracies—are taking up this technology means that regime type is a poor predictor for determining which countries will adopt AI surveillance.
A breakdown of military expenditures in 2018 shows that forty of the top fifty military spending countries also have AI surveillance technology.17 These countries span from full democracies to dictatorial regimes (and everything in between).
If a country takes its security seriously and is willing to invest considerable resources in maintaining robust military-security capabilities, then it should come as little surprise that the country will seek the latest AI tools.
The motivations for why European democracies acquire AI surveillance (controlling migration, tracking terrorist threats) may differ from Egypt or Kazakhstan’s interests (keeping a lid on internal dissent, cracking down on activist movements before they reach critical mass), but the instruments are remarkably similar.
States are also compelling the preservation and retention of communication data to enable them to conduct historical surveillance.18 It goes without saying that such intrusions profoundly affect an individual’s right to privacy—to not be subjected to what the Office of the UN High Commissioner for Human Rights (OHCHR) called “arbitrary or unlawful interference with his or her privacy, family, home or correspondence.”19 Surveillance likewise may infringe upon an individual’s right to freedom of association and expression.
La Rue’s successor, David Kaye, issued a report in 2019 that affirmed that legal regulations should be “formulated with sufficient precision to enable an individual to regulate his or her conduct accordingly and it must be made accessible to the public.”
international legal standard, which restricts surveillance to situations that are “strictly and demonstrably necessary to achieve a legitimate aim”?21 Third, are the interests justifying the surveillance action legitimate?
that entrusts judiciaries to authorize relevant surveillance measures and provide remedies in cases of abuse is required.22 Kaye adds that legitimate surveillance should only apply when the interest of a “whole nation is at stake,”
and should exclude surveillance carried out “in the sole interest of a Government, regime or power group.”23 The legal standards required to legitimately carry out surveillance are high, and governments struggle to meet them.
Countries with weak legal enforcement or authoritarian systems “routinely shirk these obligations.”24 As the OHCHR’s inaugural report on privacy in the digital age concludes, states with “a lack of adequate national legislation and/or enforcement, weak procedural safeguards and ineffective oversight”
bring reduced accountability and heightened conditions for unlawful digital surveillance.25 AI surveillance exacerbates these conditions and makes it likelier that democratic and authoritarian governments may carry out surveillance that contravenes international human rights standards.
This brings cost efficiencies, decreases reliance on security forces, and overrides potential principal-agent loyalty problems (where the very forces operating at the behest of the regime decide to seize power for themselves).
citizens never know if an automated bot is monitoring their text messages, reading their social media posts, or geotracking their movements around town.28 This paper recognizes that AI surveillance technology is “value neutral.”
Huawei is helping the government build safe cities, but Google is establishing cloud servers, UK arms manufacturer BAE has sold mass surveillance systems, NEC is vending facial recognition cameras, and Amazon and Alibaba both have cloud computing centers in Saudi Arabia and may support a major smart city project.31 The index shows that repressive countries rarely procure such technology from a single source.
operating in Algiers.33 Uganda subsequently agreed to purchase a similar facial recognition surveillance system from Huawei costing $126 million.34 The Australian Strategic Policy Institute’s project on Mapping China’s Tech Giants indicates that Huawei is responsible for seventy-five “smart city-public security projects,”
in 2018, that reach had reportedly more than doubled to 90 countries (including 230 cities).”35 Huawei is directly pitching the safe city model to national security agencies, and China’s Exim Bank appears to be sweetening the deal with subsidized loans.
The result is that a country like Mauritius obtains long-term financing from the Chinese government, which mandates contracting with Chinese firms.36 The Mauritian government then turns to Huawei as the prime contractor or sub-awardee to set up the safe city and implement advanced surveillance controls.
Sun Yafang, for example, chairwoman of Huawei’s board from 1999 to 2018, once worked in China’s Ministry of State Security.39 Max Chafkin and Joshua Brustein reported in Bloomberg Businessweek that there are allegations that Ren may have been a “high-ranking Chinese spymaster and indeed may still be.”40 Experts maintain that the Chinese Communist Party increasingly is establishing “party ‘cells’
to any demands by the Chinese government to hand over user data.42 But this contravenes a 2015 Chinese national security law that mandates companies to allow third-party access to their networks and to turn over source code or encryption keys upon request.43 Huawei’s declared ownership structure is remarkably opaque.
which in all likelihood is a proxy for Chinese state control of the company.”44 Even if Chinese companies are making a greater push to sell advanced surveillance tech, the issue of intentionality remains perplexing—to what extent are Chinese firms like Huawei and ZTE operating out of their own economic self-interest when peddling surveillance technology versus carrying out the bidding of the Chinese state?
At least in Thailand, recent research interviews did not turn up indications that Chinese companies are pushing a concerted agenda to peddle advanced AI surveillance equipment or encourage the government to build sophisticated monitoring systems.
It forms part of a suite of digital repression tools—information and communications technologies used to surveil, intimidate, coerce, and harass opponents in order to inflict a penalty on a target and deter specific activities or beliefs that challenge the state.47 (See Appendix 2 for more information.) Table 1 summarizes each technique and its corresponding level of global deployment.
in order to facilitate improved service delivery and city management.48 They help municipal authorities manage traffic congestion, direct emergency vehicles to needed locations, foster sustainable energy use, and streamline administrative processes.
IBM, one of the original coiners of the term, designed a brain-like municipal model where information relevant to city operations could be centrally processed and analyzed.49 A key component of IBM’s smart city is public safety, which incorporates an array of sensors, tracking devices, and surveillance technology to increase police and security force capabilities.
and “address new and emerging threats.”50 In a 2016 white paper, Huawei describes a “suite of technology that includes video surveillance, emergent video communication, integrated incident command and control, big data, mobile, and secured public safety cloud”
to support local law enforcement and policing as well as the justice and corrections system.51 Huawei explicitly links its safe city technology to confronting regional security challenges, noting that in the Middle East, its platforms can prevent “extremism”;
Recently, Huawei’s safe city project in Serbia, which intends to install 1,000 high-definition (HD) cameras with facial recognition and license plate recognition capabilities in 800 locations across Belgrade, sparked national outrage.54 Huawei posted a case study (since removed) about the benefits of safe cities and described how similar surveillance technology had facilitated the apprehension of a Serbian hit-and-run perpetrator who had fled the country to a city in China: “Based on images provided by Serbian police, the . . .
[local] Public Security Bureau made an arrest within three days using new technologies.”55 Rather than applaud the efficiency of the system, Serbian commentators observed that in a country racked by endemic corruption and encroaching authoritarianism, such technology offers a powerful tool for Serbian authorities to curb dissent and perpetrate abuses.
This will allow security officials to “rapidly compare images caught by live body cameras with images from a central database.”56 Huawei is a major purveyor of facial recognition video surveillance, particularly as part of its safe city platforms.
It describes the technology’s benefits in the Kenya Safe City project: As part of this project, Huawei deployed 1,800 HD cameras and 200 HD traffic surveillance systems across the country’s capital city, Nairobi.
With Huawei’s HD video surveillance and a visualized integrated command solution, the efficiency of policing efforts as well as detention rates rose significantly.57 Experts detail several concerns associated with facial recognition.
Recent disclosures that U.S. law enforcement agencies (the Federal Bureau of Investigation and Immigration and Customs Enforcement) scanned through millions of photos in state driver’s license databases without prior knowledge or consent come as little surprise.
A recent independent report of the UK’s Metropolitan Police found that its facial recognition technology had an extraordinary error rate of 81 percent.59 Similarly, Axon, a leading supplier of police body cameras in the United States, announced that it would cease offering facial recognition on its devices.
Evaluations conducted between 2014 and 2018 of 127 algorithms from thirty-nine developers by the U.S. National Institute of Standards and Technology showed that “facial recognition software got 20 times better at searching a database to find a matching photograph.”
Facial recognition technology also has been unable to shake consistent gender and racial biases, which lead to elevated false positives for minorities and women—“the darker the skin, the more errors arise—up to nearly 35 percent for images of darker skinned women”
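The reason such disparities persist unnoticed is that aggregate accuracy figures can mask subgroup error rates. The sketch below (all group names and outcome counts are invented for illustration) shows how computing the false positive rate separately per demographic group exposes a disparity that a single overall number would hide.

```python
# Illustration with invented numbers: disaggregating facial recognition
# error rates by subgroup reveals bias that aggregate accuracy conceals.

def false_positive_rate(results):
    """results: list of (predicted_match, actually_same_person) pairs."""
    false_pos = sum(1 for pred, truth in results if pred and not truth)
    negatives = sum(1 for _, truth in results if not truth)
    return false_pos / negatives if negatives else 0.0

# Hypothetical verification outcomes for two demographic subgroups.
outcomes = {
    "group_a": [(True, True)] * 90 + [(True, False)] * 1 + [(False, False)] * 99,
    "group_b": [(True, True)] * 90 + [(True, False)] * 20 + [(False, False)] * 80,
}

rates = {group: false_positive_rate(r) for group, r in outcomes.items()}
# group_a: 1 false positive in 100 non-matches (1%); group_b: 20 in 100
# (20%) — a twentyfold disparity visible only when the data is split out.
```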
The idea behind smart policing is to feed immense quantities of data into an algorithm—geographic location, historic arrest levels, types of committed crimes, biometric data, social media feeds—in order to prevent crime, respond to criminal acts, or even to make predictions about future criminal activity.
As Privacy International notes: “With the proliferation of surveillance cameras, facial recognition, open source and social media intelligence, biometrics, and data emerging from smart cities, the police now have unprecedented access to massive amounts of data.”
Therefore, one major component to smart policing is to create automated platforms that can disaggregate immense amounts of material, facilitate data coming in from multiple sources, and permit fine-tuned collection of individual information.
It assumes that “certain crimes committed at a particular time are more likely to occur in the same place in the future.”65 PredPol reveals that “historical event datasets are used to train the algorithm for each new city (ideally 2 to 5 years of data).
New predictions are highlighted in special red boxes superimposed on Google Maps representing high-risk areas that warrant special attention from police patrols.66 A key shortcoming in PredPol’s methodology is that it generates future predictions based on data from past criminal activity and arrests.
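A grid-based hotspot predictor of the kind described can be sketched very simply (cell names and incident data below are invented): count historical incidents per map cell and flag the highest-count cells as "high-risk" areas for extra patrols. The sketch also makes the shortcoming concrete: because predictions derive solely from past arrest data, heavily patrolled cells generate still more data and keep being flagged.

```python
from collections import Counter

def predict_hotspots(historical_incidents, top_n=2):
    """Return the top_n grid cells ranked by historical incident count."""
    counts = Counter(historical_incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

# Invented history: each entry records the grid cell of a past arrest.
history = ["cell_3", "cell_3", "cell_7", "cell_3", "cell_7", "cell_1"]

# Cells flagged as tomorrow's "high-risk" boxes for police patrols.
# Added patrols there will log more incidents, reinforcing the ranking.
hotspots = predict_hotspots(history)
```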
IJOP procures additional data from license plates and identification cards scanned at checkpoints, as well as health, banking, and legal records.67 Chinese authorities are supplementing IJOP with mandatory DNA samples from all Xinjiang residents aged twelve to sixty-five.68 This information is fed into IJOP computers, and algorithms sift through troves of data looking for threatening patterns.
According to the consulting firm Accenture, ABC systems use “multi-modal biometric matching”—facial image recognition combined with e-passports or other biometric documents—to process passengers.72 The process initiates when a passenger steps in front of a multi-camera wall.
A risk assessment is then performed through automated testing of identities against an individual’s passport and certain security watch-lists.73 Those who are not cleared by the automated system must go into secondary screening with human agents.
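The decision flow just described can be sketched as a simple routing function: compare a live biometric similarity score against a threshold, check the travel document against watch-lists, and route anything not automatically cleared to a human officer. The threshold value, identifiers, and scores below are invented for illustration, not drawn from any real ABC system.

```python
MATCH_THRESHOLD = 0.90  # assumed minimum similarity for an automated clear

def abc_decision(similarity_score, passport_id, watchlist):
    """Return 'clear' or 'secondary_screening' for one passenger."""
    if passport_id in watchlist:
        return "secondary_screening"  # watch-list hit: route to human agents
    if similarity_score >= MATCH_THRESHOLD:
        return "clear"                # strong biometric match, no flags
    return "secondary_screening"      # weak match: route to human agents

# Hypothetical passengers against a hypothetical watch-list.
watchlist = {"P123456"}
decision = abc_decision(0.97, "P999999", watchlist)  # cleared automatically
flagged = abc_decision(0.97, "P123456", watchlist)   # watch-list override
```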
Psychologists have widely criticized these tools, maintaining that it is difficult to rely on facial expressions alone to accurately determine a person’s state of mind.75 Despite scientific skepticism about these techniques, governments continue to explore their use.
Governments and companies are increasingly storing data in massive off-site locations—known as the cloud—that are accessible through a network, usually the internet.76 Cloud computing is a general-use technology that includes everything from turn-by-turn GPS maps and social network and email communications to file storage and streaming content access.
The National Institute of Standards and Technology defines cloud computing as a “model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”77 In basic terms, cloud computing data centers function as the backbone of the internet, instantly storing, communicating, and transporting produced information.
leading many to question whether cloud computing companies can keep personal information, corporate secrets, classified government material, or health records safe (however they generally represent a more secure method of storage than legacy on-site data storage facilities).79 A related concern is forced data disclosures—even if cloud servers remain technically secure, governments may coerce companies into disclosing certain data (such as email communications or text messages of regime critics) held in the cloud.
and ensure device integration and data aggregation (although companies like Amazon, Apple, and Google are also setting up distinct ecosystems that have only limited interoperability with other platforms).81 While the IoT will bring greater efficiencies, it may also transform traditionally non-networked devices, such as smart speakers, into omnipresent surveillance instruments: The Internet of Things promises a new frontier for networking objects, machines, and environments in ways that we [are] just beginning to understand.
When, say, a television has a microphone and a network connection, and is reprogrammable by its vendor, it could be used to listen in to one side of a telephone conversation taking place in its room—no matter how encrypted the telephone service itself might be.
A new device was recently demonstrated that plugs into a Tesla Model S or Model 3 car and turns its built-in cameras “into a system that spots, tracks, and stores license plates and faces over time,”
and provides “another set of eyes, to help out and tell you it’s seen a license plate following you over multiple days, or even multiple turns of a single trip.”85 The spread of AI surveillance continues unabated.
Electoral autocracies hold de facto multiparty elections for the chief executive, but they fall short of democratic standards due to significant irregularities, limitations on party competition, or other violations of Dahl’s institutional requisites for democracies.
Concerns over Chinese involvement in 5G wireless networks
Concerns over Chinese involvement in 5G wireless networks stem from allegations that cellular network equipment sourced from Chinese vendors may contain backdoors enabling surveillance by the Chinese government (as part of its international intelligence activity) and from Chinese laws that compel companies to assist the state intelligence agencies in the collection of information when requested.
The allegations came against the backdrop of the rising prominence of the Chinese telecommunications vendors Huawei and ZTE in the 5G equipment market, and the controversy has led other countries to debate whether Chinese vendors should be allowed to participate in 5G deployments;
developments have focused on enabling low-latency communications, minimum peak network speeds of 20 gigabits per second (20 times faster than the equivalent on 4G LTE networks), and uses within Internet of things and smart city technology.
China's five-year plan for 2016–2020 and the Made in China 2025 initiative both identified 5G as a 'strategic emerging industry', with goals for Chinese companies to become more competitive and innovative in the global market, and avert the country's prior reputation for low-quality and counterfeit goods.
Huawei has faced various allegations of intellectual property theft and corporate espionage, including copying proprietary source code from Cisco Systems equipment, and an employee stealing a robotic arm for smartphone stress testing from a T-Mobile US laboratory.
In January 2019, U.S. authorities indicted Huawei and its vice-chairwoman and CFO Meng Wanzhou on charges of theft of trade secrets (including allegations that Huawei's Chinese division ran a program issuing bonuses to employees who successfully obtained confidential information from competitors; with regard to the aforementioned T-Mobile robotic arm, Huawei's U.S. division disavowed both the employee's actions and this program as not in line with local business practices), and of having used a shell company to mask investments in Iran that violated U.S. sanctions (including the resale of technology of U.S. origin);
The company's former security adviser Brian Shields alleged that the intrusion was a state-sponsored attack that may have benefited domestic competitors such as Huawei and ZTE, and acknowledged that there was circumstantial evidence that connected the company's downfall to the beginning of Huawei's international growth.
The 2014 Counter-Espionage law states that 'when the state security organ investigates and understands the situation of espionage and collects relevant evidence, the relevant organizations and individuals shall provide it truthfully and may not refuse.'
SoftBank CTO Miyagawa Jyunichi explained that, unlike a 4G core network (where data is encrypted and transmitted using a tunneling protocol that makes it difficult to extract communication data from the network), if technology like mobile edge computing is used in 5G, processing servers could be placed near base stations in order to enable information processing on the base station side of the carrier network.
However, the transaction faced scrutiny from the Committee on Foreign Investment in the United States, which deemed it a threat to national security due to Huawei founder Ren Zhengfei having been a former engineer for the People's Liberation Army, and concerns that China could gain access to intrusion detection technology that 3Com had developed for the U.S. government and armed forces.
Wray stating that they were 'concerned about the risks of allowing any company or entity that is beholden to foreign governments that don’t share our values to gain positions of power inside our telecommunications networks.'
On 15 May 2019, President Donald Trump signed Executive Order 13873 to declare a national emergency under the International Emergency Economic Powers Act, allowing for restrictions to be imposed on commerce with 'foreign adversaries' that involve information and communications technology.
The same day, the U.S. Department of Commerce also added Huawei and various affiliates to its entity list under the Export Administration Regulations (restricting its ability to perform commerce with U.S. companies), citing that it had been indicted for 'knowingly and willfully causing the export, reexport, sale and supply, directly and indirectly, of goods, technology and services (banking and other financial services) from the United States to Iran and the government of Iran without obtaining a license from the Department of Treasury's Office of Foreign Assets Control (OFAC)'.
In a speech at the Mobile World Congress 2019, Huawei's rotating chairman Guo Ping similarly addressed the allegations, stating that innovation 'is nothing without security', and pledging that Huawei had never placed backdoors in its equipment, would never place backdoors, and would not allow other parties to do so.
In a statement published by Chinese state-owned newspaper Global Times in response to Trump's May 2019 executive order, Huawei stated that the move would 'only force the US to use inferior and expensive alternative equipment, lagging behind other countries', and that they were willing to 'communicate with the US to ensure product security'.
The following week, Japan announced that effective 1 August 2019, the telecom, integrated circuitry, and mobile phone manufacturing industries would be added to laws allowing the government to block foreign investments within sensitive sectors for security reasons.