
The Global Expansion of AI Surveillance

Startling developments keep emerging, from the onset of deepfake videos that blur the line between truth and falsehood, to advanced algorithms that can beat the best players in the world in multiplayer poker.

Yet a growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens to accomplish a range of policy objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground.

An important AI subfield is machine learning, which is a statistical process that analyzes a large amount of information in order to discern a pattern to explain the current data and predict future uses.6 Several breakthroughs are making new achievements in the field possible, including the maturation of machine learning and the onset of deep learning.
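To make that pattern-then-predict idea concrete, the short sketch below fits a generic classifier to past observations and scores new cases. It is purely illustrative and not drawn from the report: the data is randomly generated and the scikit-learn model choice is an assumption.

```python
# Illustrative only: a minimal machine-learning workflow of the kind described
# above -- learn a pattern from existing data, then predict new cases.
# The data is randomly generated; there is no real dataset behind it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_history = rng.normal(size=(1000, 4))                            # past observations (4 features)
y_history = (X_history[:, 0] + X_history[:, 1] > 0).astype(int)   # known outcomes

model = LogisticRegression()
model.fit(X_history, y_history)          # discern a pattern in the current data

X_new = rng.normal(size=(5, 4))          # unseen cases
print(model.predict(X_new))              # predicted outcomes
print(model.predict_proba(X_new)[:, 1])  # predicted probabilities
```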

It is starting to transform basic patterns of governance, not only by providing governments with unprecedented capabilities to monitor their citizens and shape their choices but also by giving them new capacity to disrupt elections, elevate false information, and delegitimize democratic discourse across borders.

The focus of this paper is on AI surveillance and the specific ways governments are harnessing a multitude of tools—from facial recognition systems and big data platforms to predictive policing algorithms—to advance their political goals.

The index breaks down AI surveillance tools into the following subcategories: 1) smart city/safe city, 2) facial recognition systems, and 3) smart policing.

The index uses the same list of countries found in the Varieties of Democracy (V-Dem) project with two minor exceptions.8 The V-Dem country list includes all independent polities worldwide but excludes microstates with populations below 250,000.

The research collection effort combed through open-source material, country by country, in English and other languages, including news articles, websites, corporate documents, academic articles, NGO reports, expert submissions, and other public sources.

Given limited resources and staffing constraints (one full-time researcher plus volunteer research assistance), the index is only able to offer a snapshot of AI surveillance levels in a given country.

The index does not differentiate between governments that expansively deploy AI surveillance techniques versus those that use AI surveillance to a much lesser degree (for example, the index does not include a standardized interval scale correlating to levels of AI surveillance).

Because this is a nascent field and there is scant information about how different countries are using AI surveillance techniques, attempting to score a country’s relative use of AI surveillance would introduce a significant level of researcher bias.
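For readers who want to picture the coding scheme, here is a minimal, hypothetical sketch of a presence/absence record of the kind described above. The field names and the example country are invented, and the real index is richer; the point is only that the coding is binary rather than an intensity score.

```python
# Hypothetical sketch of a presence/absence coding like the one described above.
# Field names and the example record are invented for illustration; the index
# records which subcategories a country deploys, not how intensively.
from dataclasses import dataclass

@dataclass
class CountryEntry:
    country: str
    smart_city: bool          # 1) smart city / safe city platforms
    facial_recognition: bool  # 2) facial recognition systems
    smart_policing: bool      # 3) smart policing

    def deploys_ai_surveillance(self) -> bool:
        # Binary question only -- no interval scale of intensity
        return self.smart_city or self.facial_recognition or self.smart_policing

example = CountryEntry("Exampleland", smart_city=True,
                       facial_recognition=False, smart_policing=True)
print(example.deploys_ai_surveillance())  # True
```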

An earlier report raised eyebrows when it found that eighteen out of sixty-five assessed countries were using AI surveillance technology from Chinese companies.9 The report’s assessment period ran from June 1, 2017, to May 31, 2018.

The region has eighteen of the twenty countries with the lowest levels of internet penetration.10 Given the aggressiveness of Chinese companies in penetrating African markets via the Belt and Road Initiative (BRI), these numbers will likely rise in the coming years.

In contrast, 37 percent of closed autocratic states, 41 percent of electoral autocratic/competitive autocratic states, and 41 percent of electoral democracies/illiberal democracies deploy AI surveillance technology.

Liberal democratic governments are aggressively using AI tools to police borders, apprehend potential criminals, monitor citizens for bad behavior, and pull out suspected terrorists from crowds.

An investigation by Axios’s Kim Hart revealed, for example, that the Baltimore police had secretly deployed aerial drones to carry out daily surveillance over the city’s residents: “From a plane flying overhead, powerful cameras capture aerial images of the entire city. Photos are snapped every second, and the plane can be circling the city for up to 10 hours a day.”11 Baltimore’s police also deployed facial recognition cameras to monitor and arrest protesters, particularly during 2015 riots in the city.12 The ACLU condemned these techniques as the “technological equivalent of putting an ankle GPS [Global Positioning System] monitor on every person in Baltimore.”13 On the U.S.-Mexico border, an array of high-tech companies also purvey advanced surveillance equipment.

Captured images “are analysed using artificial intelligence to pick out humans from wildlife and other moving objects.”14 It is unclear to what extent these surveillance deployments are covered in U.S. law, let alone whether these actions meet the necessity and proportionality standard.

The goal of the program is to reduce crime by establishing a vast public surveillance network featuring an intelligence operations center and nearly one thousand intelligent closed-circuit television (CCTV) cameras (the number will double by 2020).

The package included upgraded high definition CCTV surveillance and an intelligent command center powered by algorithms to detect unusual movements and crowd formations.16 The fact that so many democracies—as well as autocracies—are taking up this technology means that regime type is a poor predictor for determining which countries will adopt AI surveillance.

A breakdown of military expenditures in 2018 shows that forty of the top fifty military spending countries also have AI surveillance technology.17 These countries span from full democracies to dictatorial regimes (and everything in between).

If a country takes its security seriously and is willing to invest considerable resources in maintaining robust military-security capabilities, then it should come as little surprise that the country will seek the latest AI tools.

The motivations for why European democracies acquire AI surveillance (controlling migration, tracking terrorist threats) may differ from Egypt or Kazakhstan’s interests (keeping a lid on internal dissent, cracking down on activist movements before they reach critical mass), but the instruments are remarkably similar.

States are also compelling the preservation and retention of communication data to enable them to conduct historical surveillance.18 It goes without saying that such intrusions profoundly affect an individual’s right to privacy—to not be subjected to what the Office of the UN High Commissioner for Human Rights (OHCHR) called “arbitrary or unlawful interference with his or her privacy, family, home or correspondence.”19 Surveillance likewise may infringe upon an individual’s right to freedom of association and expression.

La Rue’s successor, David Kaye, issued a report in 2019 that affirmed that legal regulations should be “formulated with sufficient precision to enable an individual to regulate his or her conduct accordingly and it must be made accessible to the public.”

Second, does the surveillance action meet the international legal standard, which restricts surveillance to situations that are “strictly and demonstrably necessary to achieve a legitimate aim”?21 Third, are the interests justifying the surveillance action legitimate?

An oversight framework that entrusts judiciaries to authorize relevant surveillance measures and provide remedies in cases of abuse is required.22 Kaye adds that legitimate surveillance should only apply when the interest of a “whole nation is at stake,” and should exclude surveillance carried out “in the sole interest of a Government, regime or power group.”23 The legal standards required to legitimately carry out surveillance are high, and governments struggle to meet them.

Countries with weak legal enforcement or authoritarian systems “routinely shirk these obligations.”24 As the OHCHR’s inaugural report on privacy in the digital age concludes, states with “a lack of adequate national legislation and/or enforcement, weak procedural safeguards and ineffective oversight” bring reduced accountability and heightened conditions for unlawful digital surveillance.25 AI surveillance exacerbates these conditions and makes it likelier that democratic and authoritarian governments may carry out surveillance that contravenes international human rights standards.

This brings cost efficiencies, decreases reliance on security forces, and overrides potential principal-agent loyalty problems (where the very forces operating at the behest of the regime decide to seize power for themselves).

Citizens never know if an automated bot is monitoring their text messages, reading their social media posts, or geotracking their movements around town.28 This paper recognizes that AI surveillance technology is “value neutral.”

Huawei is helping the government build safe cities, but Google is establishing cloud servers, UK arms manufacturer BAE has sold mass surveillance systems, NEC is vending facial recognition cameras, and Amazon and Alibaba both have cloud computing centers in Saudi Arabia and may support a major smart city project.31 The index shows that repressive countries rarely procure such technology from a single source.

operating in Algiers.33 Uganda subsequently agreed to purchase a similar facial recognition surveillance system from Huawei costing $126 million.34 The Australian Strategic Policy Institute’s project on Mapping China’s Tech Giants indicates that Huawei is responsible for seventy-five “smart city-public security projects.”

In 2018, that reach had reportedly more than doubled to 90 countries (including 230 cities).35 Huawei is directly pitching the safe city model to national security agencies, and China’s Exim Bank appears to be sweetening the deal with subsidized loans.

The result is that a country like Mauritius obtains long-term financing from the Chinese government, which mandates contracting with Chinese firms.36 The Mauritian government then turns to Huawei as the prime contractor or sub-awardee to set up the safe city and implement advanced surveillance controls.

Sun Yafang, for example, chairwoman of Huawei’s board from 1999 to 2018, once worked in China’s Ministry of State Security.39 Max Chafkin and Joshua Brustein reported in Bloomberg Businessweek that there are allegations that Ren may have been a “high-ranking Chinese spymaster and indeed may still be.”40 Experts maintain that the Chinese Communist Party increasingly is establishing “party ‘cells’

to any demands by the Chinese government to hand over user data.42 But this contravenes a 2015 Chinese national security law that mandates companies to allow third-party access to their networks and to turn over source code or encryption keys upon request.43 Huawei’s declared ownership structure is remarkably opaque.

which in all likelihood is a proxy for Chinese state control of the company.”44 Even if Chinese companies are making a greater push to sell advanced surveillance tech, the issue of intentionality remains perplexing—to what extent are Chinese firms like Huawei and ZTE operating out of their own economic self-interest when peddling surveillance technology versus carrying out the bidding of the Chinese state?

At least in Thailand, recent research interviews did not turn up indications that Chinese companies are pushing a concerted agenda to peddle advanced AI surveillance equipment or encourage the government to build sophisticated monitoring systems.

It forms part of a suite of digital repression tools—information and communications technologies used to surveil, intimidate, coerce, and harass opponents in order to inflict a penalty on a target and deter specific activities or beliefs that challenge the state.47 (See Appendix 2 for more information.) Table 1 summarizes each technique and its corresponding level of global deployment.

in order to facilitate improved service delivery and city management.48 They help municipal authorities manage traffic congestion, direct emergency vehicles to needed locations, foster sustainable energy use, and streamline administrative processes.

IBM, one of the original coiners of the term, designed a brain-like municipal model where information relevant to city operations could be centrally processed and analyzed.49 A key component of IBM’s smart city is public safety, which incorporates an array of sensors, tracking devices, and surveillance technology to increase police and security force capabilities and “address new and emerging threats.”50 In a 2016 white paper, Huawei describes a “suite of technology that includes video surveillance, emergent video communication, integrated incident command and control, big data, mobile, and secured public safety cloud” to support local law enforcement and policing as well as the justice and corrections system.51 Huawei explicitly links its safe city technology to confronting regional security challenges, noting that in the Middle East, its platforms can prevent “extremism.”

Recently, Huawei’s safe city project in Serbia, which intends to install 1,000 high-definition (HD) cameras with facial recognition and license plate recognition capabilities in 800 locations across Belgrade, sparked national outrage.54 Huawei posted a case study (since removed) about the benefits of safe cities and described how similar surveillance technology had facilitated the apprehension of a Serbian hit-and-run perpetrator who had fled the country to a city in China: “Based on images provided by Serbian police, the . . . [local] Public Security Bureau made an arrest within three days using new technologies.”55 Rather than applaud the efficiency of the system, Serbian commentators observed that in a country racked by endemic corruption and encroaching authoritarianism, such technology offers a powerful tool for Serbian authorities to curb dissent and perpetrate abuses.

This will allow security officials to “rapidly compare images caught by live body cameras with images from a central database.”56 Huawei is a major purveyor of facial recognition video surveillance, particularly as part of its safe city platforms.

It describes the technology’s benefits in the Kenya Safe City project: As part of this project, Huawei deployed 1,800 HD cameras and 200 HD traffic surveillance systems across the country’s capital city, Nairobi. With Huawei’s HD video surveillance and a visualized integrated command solution, the efficiency of policing efforts as well as detention rates rose significantly.57 Experts detail several concerns associated with facial recognition.

Recent disclosures that U.S. law enforcement agencies (the Federal Bureau of Investigation and Immigration and Customs Enforcement) scanned through millions of photos in state driver’s license databases without prior knowledge or consent come as little surprise.

A recent independent report on the UK’s Metropolitan Police found that its facial recognition technology had an extraordinary error rate of 81 percent.59 Similarly, Axon, a leading supplier of police body cameras in the United States, announced that it would cease offering facial recognition on its devices.

Evaluations conducted between 2014 and 2018 of 127 algorithms from thirty-nine developers by the U.S. National Institute of Standards and Technology showed that “facial recognition software got 20 times better at searching a database to find a matching photograph.”

Facial recognition technology also has been unable to shake consistent gender and racial biases, which lead to elevated false positives for minorities and women—“the darker the skin, the more errors arise—up to nearly 35 percent for images of darker skinned women.”
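A small, self-contained sketch (not taken from any cited audit) shows how such disparities are typically measured: compute the false positive rate separately for each demographic group from a log of match decisions. The records below are invented.

```python
# Illustrative only: computing a false-positive rate per demographic group.
# The match log is invented; real audits apply the same logic to actual logs.
from collections import defaultdict

# (group, system_flagged_match, actually_same_person)
match_log = [
    ("group_a", True,  False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True,  False), ("group_b", True,  False), ("group_b", False, False),
]

flagged_wrong = defaultdict(int)
non_matches = defaultdict(int)
for group, flagged, same_person in match_log:
    if not same_person:            # only true non-matches can become false positives
        non_matches[group] += 1
        if flagged:
            flagged_wrong[group] += 1

for group in non_matches:
    rate = flagged_wrong[group] / non_matches[group]
    print(f"{group}: false positive rate = {rate:.0%}")
```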

The idea behind smart policing is to feed immense quantities of data into an algorithm—geographic location, historic arrest levels, types of committed crimes, biometric data, social media feeds—in order to prevent crime, respond to criminal acts, or even to make predictions about future criminal activity.

As Privacy International notes: “With the proliferation of surveillance cameras, facial recognition, open source and social media intelligence, biometrics, and data emerging from smart cities, the police now have unprecedented access to massive amounts of data.”

Therefore, one major component of smart policing is to create automated platforms that can disaggregate immense amounts of material, facilitate data coming in from multiple sources, and permit fine-tuned collection of individual information.

It assumes that “certain crimes committed at a particular time are more likely to occur in the same place in the future.”65 PredPol reveals that “historical event datasets are used to train the algorithm for each new city (ideally 2 to 5 years of data).

New predictions are highlighted in special red boxes superimposed on Google Maps representing high-risk areas that warrant special attention from police patrols.66 A key shortcoming in PredPol’s methodology is that it generates future predictions based on data from past criminal activity and arrests.
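To illustrate the mechanics and the feedback problem, here is a deliberately simplified hotspot sketch; it is not PredPol’s proprietary model, and the incident coordinates are invented.

```python
# Simplified illustration of place-based "hotspot" prediction -- not PredPol's
# actual algorithm. Incidents are invented (x, y) coordinates; the grid cells
# with the most past incidents are flagged as high risk for future patrols.
from collections import Counter

past_incidents = [(3, 7), (3, 7), (3, 8), (12, 2), (3, 7), (9, 9), (12, 2)]

CELL = 5  # grid cell size
counts = Counter((x // CELL, y // CELL) for x, y in past_incidents)

top_cells = counts.most_common(2)   # the "red boxes": highest-risk cells
print(top_cells)                    # -> [((0, 1), 4), ((2, 0), 2)]

# Shortcoming noted above: if patrols concentrate on these cells, new arrests
# accumulate there, and the next round of training reinforces the same cells.
```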

The Integrated Joint Operations Platform (IJOP), the predictive policing system used in Xinjiang, procures additional data from license plates and identification cards scanned at checkpoints, as well as health, banking, and legal records.67 Chinese authorities are supplementing IJOP with mandatory DNA samples from all Xinjiang residents aged twelve to sixty-five.68 This information is fed into IJOP computers, and algorithms sift through troves of data looking for threatening patterns.

According to the consulting firm Accenture, automated border control (ABC) systems use “multi-modal biometric matching”—facial image recognition combined with e-passports or other biometric documents—to process passengers.72 The process initiates when a passenger steps in front of a multi-camera wall.

A risk assessment is then performed through automated testing of identities against an individual’s passport and certain security watch-lists.73 Those who are not cleared by the automated system must go into secondary screening with human agents.
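A hedged sketch of that two-step flow (not any vendor’s actual system) might route passengers as follows; the similarity function, threshold, and watchlist below are placeholders.

```python
# Hypothetical sketch of an automated border control (ABC) decision flow as
# described above -- not any vendor's real system. The similarity score,
# threshold, and watchlist are placeholders.
def face_similarity(live_image, passport_image) -> float:
    """Placeholder for a biometric matcher returning a 0-1 similarity score."""
    return 0.93  # stub value for illustration

def abc_decision(live_image, passport, watchlist, threshold=0.90) -> str:
    # Step 1: biometric check -- live face against the e-passport photo
    if face_similarity(live_image, passport["photo"]) < threshold:
        return "secondary screening"   # identity not confirmed automatically
    # Step 2: automated risk assessment against security watchlists
    if passport["number"] in watchlist:
        return "secondary screening"
    return "cleared"                   # gate opens without a human agent

passport = {"number": "X1234567", "photo": "passport.jpg"}
print(abc_decision("camera_frame.jpg", passport, watchlist={"Z9999999"}))
```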

Psychologists have widely criticized these emotion recognition tools, maintaining that it is difficult to rely on facial expressions alone to accurately determine a person’s state of mind.75 Despite scientific skepticism about these techniques, governments continue to explore their use.

Governments and companies are increasingly storing data in massive off-site locations—known as the cloud—that are accessible through a network, usually the internet.76 Cloud computing is a general-purpose technology that covers everything from turn-by-turn GPS maps and social network and email communications to file storage and streaming content access.

The National Institute of Standards and Technology defines cloud computing as a “model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”77 In basic terms, cloud computing data centers function as the backbone of the internet, instantly storing, communicating, and transporting produced information.

Many question whether cloud computing companies can keep personal information, corporate secrets, classified government material, or health records safe (though they generally represent a more secure method of storage than legacy on-site data storage facilities).79 A related concern is forced data disclosures—even if cloud servers remain technically secure, governments may coerce companies into disclosing certain data (such as email communications or text messages of regime critics) held in the cloud.

and ensure device integration and data aggregation (although companies like Amazon, Apple, and Google are also setting up distinct ecosystems that only have limited interoperability with other platforms).81 While the IoT will bring greater efficiencies, it may also transform traditional non-networked devices, such as smart speakers, into omnipresent surveillance instruments: The Internet of Things promises a new frontier for networking objects, machines, and environments in ways that we [are] just beginning to understand.

When, say, a television has a microphone and a network connection, and is reprogrammable by its vendor, it could be used to listen in to one side of a telephone conversation taking place in its room—no matter how encrypted the telephone service itself might be.

A new device was recently demonstrated that plugs into a Tesla Model S or Model 3 car and turns its built-in cameras “into a system that spots, tracks, and stores license plates and faces over time,” and provides “another set of eyes, to help out and tell you it’s seen a license plate following you over multiple days, or even multiple turns of a single trip.”85 The spread of AI surveillance continues unabated.

Electoral autocracies hold de facto multiparty elections for the chief executive, but they fall short of democratic standards due to significant irregularities, limitations on party competition, or other violations of Dahl’s institutional requisites for democracies.
