Algorithmic bias is found across platforms, including but not limited to search engine results and social media platforms, and can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity.
If an algorithm recommends loans to one group of users but denies loans to another set of nearly identical users on the basis of unrelated criteria, and if this behavior repeats across multiple occurrences, the algorithm can be described as biased. This bias may be intentional or unintentional; for example, it can come from biased data obtained from a worker who previously performed the job the algorithm will now do.
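The definition above can be made concrete with a small sketch: a deliberately biased toy scoring rule, applied to two groups of nearly identical applicants who differ only in an unrelated attribute. The applicant fields, the scoring rule, and the threshold are illustrative assumptions, not any real lender's model.

```python
# Toy model of a biased loan decision: scores income and debt, but also
# penalizes an unrelated attribute (postcode region). All values hypothetical.

def approve_loan(applicant):
    score = applicant["income"] / 1000 - applicant["debt"] / 1000
    if applicant["postcode_region"] == "B":  # unrelated criterion
        score -= 50
    return score > 10

def approval_rate(applicants):
    return sum(1 for a in applicants if approve_loan(a)) / len(applicants)

# Two groups of nearly identical applicants, differing only in region.
group_a = [{"income": 60000, "debt": 5000, "postcode_region": "A"} for _ in range(100)]
group_b = [{"income": 60000, "debt": 5000, "postcode_region": "B"} for _ in range(100)]

gap = approval_rate(group_a) - approval_rate(group_b)
print(f"approval-rate gap: {gap:.2f}")  # prints "approval-rate gap: 1.00"
```

Because the profiles are otherwise identical, any persistent gap in approval rates can only come from the unrelated criterion — which is exactly the repeatable disparity the definition describes.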
This requires human decisions about how data is categorized, and which data is included or discarded. Some algorithms collect their own data based on human-selected criteria, which can also reflect the bias of human designers. Other algorithms may reinforce stereotypes and preferences as they process and display 'relevant' data for human users, for example, by selecting information based on previous choices of a similar user or group of users.
For example, algorithms that determine the allocation of resources or scrutiny (such as determining school placements) may inadvertently discriminate against a category when determining risk based on similar users (as in credit scores). Meanwhile, recommendation engines that work by associating users with similar users, or that make use of inferred marketing traits, might rely on inaccurate associations that reflect broad ethnic, gender, socio-economic, or racial stereotypes.
That means the code could incorporate the programmer's imagination of how the world works, including his or her biases and expectations. While a computer program can incorporate bias in this way, Weizenbaum also noted that any data fed to a machine additionally reflects 'human decision-making processes' as data is being selected.
Finally, he noted that machines might also transfer good information with unintended consequences if users are unclear about how to interpret the results. Weizenbaum warned against trusting decisions made by computer programs that a user doesn't understand, comparing such faith to a tourist who can find his way to a hotel room exclusively by turning left or right on a coin toss.
An early example of algorithmic bias resulted in as many as 60 women and ethnic-minority candidates being denied entry to St. George's Hospital Medical School per year from 1982 to 1986, after the implementation of a new computer-guidance assessment system that denied entry to women and to men with 'foreign-sounding names' based on historical trends in admissions.
Over time, these decisions and their collective impact on the program's output may be forgotten. In theory, these biases may create new patterns of behavior, or 'scripts,' in relationship to specific technologies as the code interacts with other elements of society.
Because of their convenience and authority, algorithms are theorized as a means of delegating responsibility away from humans. This can have the effect of reducing alternative options, compromises, or flexibility. Sociologist Scott Lash has critiqued algorithms as a new form of 'generative power', in that they are a virtual means of generating actual ends.
Such prejudices can be explicit and conscious, or implicit and unconscious. Poorly selected input data, or simply data from a biased source, will influence the outcomes created by machines. Encoding pre-existing bias into software can preserve social and institutional bias, and without correction, could be replicated in all future uses of that algorithm.
An example of this form of bias is the British Nationality Act Program, designed to automate the evaluation of new UK citizens after the 1981 British Nationality Act. The program accurately reflected the tenets of the law, which stated that 'a man is the father of only his legitimate children, whereas a woman is the mother of all her children, legitimate or not.' In its attempt to transfer a particular logic into an algorithmic process, the BNAP inscribed the logic of the British Nationality Act into its algorithm, which would perpetuate it even if the act were eventually repealed.
Technical bias emerges through limitations of a program, computational power, its design, or other constraints on the system. Such bias can also be a restraint of design: for example, a search engine that shows three results per screen can be understood to privilege the top three results slightly more than the next three, as in an airline price display. Another case is software that relies on randomness for fair distributions of results.
A decontextualized algorithm uses unrelated information to sort results: for example, a flight-pricing algorithm that sorts results in alphabetical order would be biased in favor of American Airlines over United Airlines. The opposite may also apply, in which results are evaluated in contexts different from those in which they are collected.
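The alphabetical-sorting example can be sketched in a few lines: sorting flight results by carrier name systematically places American Airlines above United Airlines regardless of price. The carriers and fares below are made-up illustrations.

```python
# Presentation bias from a decontextualized sort key: alphabetical order
# privileges carriers early in the alphabet, independent of price.

flights = [
    {"carrier": "United Airlines", "price": 199},
    {"carrier": "American Airlines", "price": 249},
    {"carrier": "Delta Air Lines", "price": 179},
]

by_name = sorted(flights, key=lambda f: f["carrier"])
by_price = sorted(flights, key=lambda f: f["price"])

print([f["carrier"] for f in by_name])
# ['American Airlines', 'Delta Air Lines', 'United Airlines'] — American first, despite the highest fare
print([f["carrier"] for f in by_price])
# ['Delta Air Lines', 'United Airlines', 'American Airlines'] — price order ranks it last
```

The sort key (carrier name) is unrelated to what the user is actually comparing (price), which is precisely what makes the ordering a biased presentation rather than a neutral one.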
Data may be collected without crucial external context: for example, when facial recognition software is used by surveillance cameras, but evaluated by remote staff in another country or region, or evaluated by non-human algorithms with no awareness of what takes place beyond the camera's field of vision.
For example, software weighs data points to determine whether a defendant should accept a plea bargain, while ignoring the impact of emotion on a jury. Another unintended result of this form of bias was found in the plagiarism-detection software Turnitin, which compares student-written texts to information found online and returns a probability score that the student's work is copied.
Because the software compares long strings of text, it is more likely to flag plagiarism by non-native speakers of English than by native speakers, as the latter group might be better able to change individual words, break up strings of plagiarized text, or obscure copied passages through synonyms.
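A minimal sketch of long-string matching (not Turnitin's actual method) shows why word substitution defeats it: a detector that scores text by its longest contiguous match with a source gives a high score to verbatim copying but a much lower one once individual words are swapped. The sentences below are invented examples.

```python
from difflib import SequenceMatcher

# Hedged sketch of string-matching plagiarism detection: score a submission
# by the fraction covered by its longest contiguous match with the source.

def longest_match_ratio(source, submission):
    m = SequenceMatcher(None, source, submission).find_longest_match(
        0, len(source), 0, len(submission))
    return m.size / max(len(submission), 1)

source = "the quick brown fox jumps over the lazy dog near the river bank"
verbatim = "the quick brown fox jumps over the lazy dog near the river bank"
paraphrased = "the speedy brown fox leaps over the idle dog close to the river edge"

print(round(longest_match_ratio(source, verbatim), 2))     # a full-length match
print(round(longest_match_ratio(source, paraphrased), 2))  # only short fragments survive
```

Swapping a word every few tokens breaks every long run of matching text into short fragments, so a writer fluent enough to paraphrase evades the detector while a verbatim copier does not.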
Emergent bias is the result of the use and reliance on algorithms across new or unanticipated contexts. Algorithms may not have been adjusted to consider new forms of knowledge, such as new drugs or medical breakthroughs, new laws, business models, or shifting cultural norms. This may exclude groups through technology, without providing clear outlines to understand who is responsible for their exclusion. Similarly, problems may emerge when training data (the samples 'fed' to a machine, by which it models certain conclusions) do not align with contexts that an algorithm encounters in the real world.
By selecting according to certain behavior or browsing patterns, the end effect would be almost identical to discrimination through the use of direct race or sexual orientation data. In other cases, the algorithm draws conclusions from correlations, without being able to understand those correlations.
For example, machines may require that users can read, write, or understand numbers, or relate to an interface using metaphors that they do not understand. These exclusions can become compounded, as biased or exclusionary technology is more deeply integrated into society.
The agents administering the questions relied entirely on the software, which excluded alternative pathways to citizenship, and continued using the software even after new case law and legal interpretations had rendered the algorithm outdated.
Because the algorithm was designed for users assumed to be legally savvy about immigration law, the software indirectly led to bias in favor of applicants who fit a very narrow set of legal criteria set by the algorithm, rather than the broader criteria of UK immigration law.
In a 1998 paper describing Google, it was shown that the founders of the company adopted a policy of transparency in search results regarding paid placement, arguing that 'advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.'
For example, 'Top 25 Sexiest Women Athletes' articles displayed as first-page results in searches for 'women athletes'. In 2017, Google adjusted these results along with others that surfaced hate groups, racist views, child abuse and pornography, and other upsetting and offensive content.
However, cost is not race-neutral, as black patients incurred about $1,800 less in medical costs per year than white patients with the same number of chronic conditions, which led to the algorithm scoring white patients as equally at risk of future health problems as black patients who suffered from significantly more diseases.
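The proxy problem described above can be sketched with synthetic numbers (not the actual study data): if one group incurs systematically lower costs at the same illness burden, a cost-based risk score rates its sicker members as no riskier than healthier members of the other group.

```python
# Illustrative sketch of cost as a biased proxy for health need.
# Patients and dollar figures are invented for the example.

def cost_based_risk(annual_cost):
    # The score predicts future cost, not illness burden.
    return annual_cost / 1000.0

# Same observed annual cost, but one patient is significantly sicker
# because his group historically incurs lower cost per condition.
patient_white = {"chronic_conditions": 3, "annual_cost": 7800}
patient_black = {"chronic_conditions": 5, "annual_cost": 7800}

score_w = cost_based_risk(patient_white["annual_cost"])
score_b = cost_based_risk(patient_black["annual_cost"])
print(score_w == score_b)  # prints True: equal scores despite unequal illness burden
```

The target variable, not the model class, carries the bias here: any model trained to predict cost faithfully will reproduce the cost gap, which is why the study's proposed fix was to change what the algorithm predicts rather than how it predicts.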
A study conducted by researchers at UC Berkeley in November 2019 found that mortgage algorithms discriminated against Latino and African American applicants. The discrimination was based on 'creditworthiness,' a measure rooted in U.S. fair-lending law, which allows lenders to use measures of identification to determine whether an individual is worthy of receiving loans.
An unanticipated outcome of the algorithm was to allow hate speech against black children, because such posts denounce the 'children' subset of blacks rather than 'all blacks,' whereas 'all white men' would trigger a block, because whites and males are not considered subsets.
Surveillance camera software may be considered inherently political because it requires algorithms to distinguish normal from abnormal behaviors, and to determine who belongs in certain locations at certain times. The ability of such algorithms to recognize faces across a racial spectrum has been shown to be limited by the racial diversity of images in its training database;
The software was assessed as identifying men more frequently than women, older people more frequently than the young, and identified Asians, African-Americans and other races more often than whites. Additional studies of facial recognition software have found the opposite to be true when trained on non-criminal databases, with the software being the least accurate in identifying darker-skinned females.
As a result, some trans Uber drivers had their accounts suspended, which cost them fares and potentially cost them their jobs, all because the facial recognition software had difficulty recognizing the faces of trans drivers who were transitioning.
Although the solution to this issue would appear to be including trans individuals in training sets for machine learning models, an instance of trans YouTube videos collected for use as training data did not receive consent from the trans individuals included in the videos, creating an issue of privacy violation.
Commercial algorithms are proprietary, and may be treated as trade secrets. Treating algorithms as trade secrets protects companies, such as search engines, where a transparent algorithm might reveal tactics to manipulate search rankings. This makes it difficult for researchers to conduct interviews or analysis to discover how algorithms function. Critics suggest that such secrecy can also obscure possible unethical methods used in producing or processing algorithmic output.
Furthermore, large teams of programmers may operate in relative isolation from one another, and be unaware of the cumulative effects of small decisions within connected, elaborate algorithms. Not all code is original, and may be borrowed from other libraries, creating a complicated set of relationships between data processing and data input systems.
Furthermore, false and accidental correlations can emerge from a lack of understanding of protected categories, for example, insurance rates based on historical data of car accidents which may overlap, strictly by coincidence, with residential clusters of ethnic minorities.
that prevents, inter alia, discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or that result in measures having such an effect.
It has been argued that the Data Protection Impact Assessments for high risk data profiling (alongside other pre-emptive measures within data protection) may be a better way to tackle issues of algorithmic discrimination, as it restricts the actions of those deploying algorithms, rather than requiring consumers to file complaints or request changes.
The bill, which went into effect on January 1, 2018, required 'the creation of a task force that provides recommendations on how information on agency automated decision systems may be shared with the public, and how agencies may address instances where people are harmed by agency automated decision systems.'
Artificial Intelligence and Automated Systems Legal Update (4Q19)
If enacted, the bill would require large-scale internet platforms to provide greater transparency to consumers by providing clear notice on the use, and enabling consumers to opt out, of personalized content curated by “opaque” algorithms so that they can “engage with a platform without being manipulated by algorithms driven by user-specific data” and “simply opt out of the filter bubble.” “Filter bubble” refers to a zone of potential manipulation that exists within algorithms that curate or rank content in internet platforms based on user-specific data, potentially creating digital “echo chambers.”

Sen. John Thune, R-S.D., one of the bill’s sponsors, explained that the bill is intended to facilitate “a better understanding of how internet platforms use artificial intelligence and opaque algorithms to make inferences from the reams of personal data at their fingertips that can be used to affect behavior and influence outcomes.”

The proposed legislation covers “any public-facing website, internet application, or mobile application,” such as social network sites, video sharing services, search engines and content aggregation services, and generally would prohibit the use of opaque algorithms on platforms without those platforms having first provided notice in a “clear, conspicuous manner on the platform whenever the user interacts with an opaque algorithm for the first time.”

The term “opaque algorithm” is defined as “an algorithmic ranking system that determines the order or manner that information is furnished to a user on a covered internet platform based, in whole or part, on user-specific data that was not expressly provided by the user to the platform” in order to interact with it. Examples of “user-specific” data include the user’s history of web searches and browsing, geographical locations, physical activity, device interaction, and financial transactions.
Conversely, data that was expressly provided to the platform by the user for the purpose of interacting with the platform—such as search terms, saved preferences, an explicitly entered geographical location or the user’s social media profiles—is considered “user-supplied.” Additionally, the bill requires that users be given the option to choose to view content based on “input-transparent algorithms,” a purportedly generic algorithmic ranking system that “does not use the user-specific data of a user to determine the order or manner that information is furnished to such user on a covered platform,” and be able to easily switch between the opaque and the input-transparent versions. By way of example, Sen. Marsha Blackburn (R-TN), another co-sponsor of the bill, explained that “this legislation would give consumers the choice to decide whether they want to use the algorithm or view content in the order it was posted.” However, there is nothing in the bill that would require platforms to disclose the use of algorithms unless they are using hyper-personal “user-specific” data for customization, and even “input-transparent” algorithms using “user-supplied” data would not necessarily show content in chronological order.
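The two ranking modes the bill contemplates can be sketched as a toggle: an "opaque" ranking driven by inferred user-specific data, and an "input-transparent" fallback that ignores it. The field names, the affinity scores, and the choice of reverse-chronological order for the transparent mode are all assumptions for illustration; as noted above, the bill does not actually mandate chronological display.

```python
from datetime import datetime

# Hypothetical feed items: "inferred_affinity" stands in for a score derived
# from browsing-derived, user-specific data the user never expressly provided.
posts = [
    {"id": 1, "posted": datetime(2020, 1, 3), "inferred_affinity": 0.2},
    {"id": 2, "posted": datetime(2020, 1, 1), "inferred_affinity": 0.9},
    {"id": 3, "posted": datetime(2020, 1, 2), "inferred_affinity": 0.5},
]

def rank(posts, opaque=True):
    if opaque:
        # "Opaque" mode: order determined by inferred user-specific data.
        return sorted(posts, key=lambda p: p["inferred_affinity"], reverse=True)
    # "Input-transparent" mode: no user-specific data; here, newest first.
    return sorted(posts, key=lambda p: p["posted"], reverse=True)

print([p["id"] for p in rank(posts, opaque=True)])   # prints [2, 3, 1]
print([p["id"] for p in rank(posts, opaque=False)])  # prints [1, 3, 2]
```

The point of the sketch is the user-facing switch: the same items, two orderings, and the user can see and choose which ranking logic is applied.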
As drafted, the bill’s goals of providing transparency and protecting consumers from algorithmic manipulation by “opting out” of personalized content appear to be overstated, and lawmakers will need to grapple with the proposed definitions to clarify the scope of the bill’s provisions. Much like the Algorithmic Accountability Act, discussed in more detail in our Artificial Intelligence and Autonomous Systems Legal Update (1Q19), the bill is squarely targeted at “Big Tech” platforms—it would not apply to platforms wholly owned, controlled and operated by a person that did not employ more than 500 employees in the past six months, averaged less than $50 million in annual gross receipts, and annually collects or processes personal data of less than a million individuals. Violations of the Act would be enforced with civil penalties by the Federal Trade Commission (“FTC”) but, unlike the Algorithmic Accountability Act, the bill does not grant state attorneys general the right to bring civil suits for violations, nor expressly state that its provisions do not preempt state laws.
The commission’s preliminary conclusion is that the U.S. “is not translating broad national AI strengths and AI strategy statements into specific national security advantages.” Notably, the commission reported that federal R&D funding has not kept pace with the potential of AI technologies, noting that the requested fiscal year 2020 federal funding for core AI research outside of the defense sector grew by less than 2 percent from the estimated 2019 levels. Further, it noted that AI is not realizing its potential to execute core national security missions because agencies are failing to embrace the technology as a result of “bureaucratic impediments and inertia.” NSCAI also criticized the shortage of AI talent in government agencies, specifically in the Department of Defense (“DoD”).
On October 23, 2019, Germany’s Data Ethics Commission released a landmark 240-page report containing 75 recommendations for regulating data, algorithmic systems and AI. Consistent with EC President Ursula von der Leyen’s recent remarks discussed above, the report suggests that EU regulation of AI may mirror the approach espoused in the GDPR — broad in scope, focused on individual rights and corporate accountability, and “horizontally” applicable across industries, rather than specific sectors. Expanding on the EU’s non-binding “Ethics Guidelines for Trustworthy AI,” the commission concludes that “regulation is necessary, and cannot be replaced by ethical principles.” The commission creates a blueprint for the implementation of binding legal rules for AI—nominally both at national and EU level—on a sliding scale based on the risk of harm across five levels of algorithmic systems, with a focus on the degree of potential harm rather than differentiating between specific use cases.
The commission recommended a full or partial ban on systems with an “untenable potential for harm.” Of particular relevance to companies deploying AI software, the report recommends that measures be taken against “ethically indefensible uses of data,” such as “total surveillance, profiling that poses a threat to personal integrity, the targeted exploitation of vulnerabilities, addictive designs and dark patterns, methods of influencing political elections that are incompatible with the principle of democracy, vendor lock-in and systematic consumer detriment, and many practices that involve trading in personal data.” The commission also recommends that human operators of algorithmic systems be held vicariously liable for any harm caused by autonomous technology, and calls for an overhaul of existing product liability and strict liability laws as they pertain to algorithmic products and services. While the report’s pro-regulation approach is a counterweight to the “light-touch” regulation favored by the U.S. government, the commission takes the view that, far from impeding private sector innovation, regulation can provide much-needed certainty to companies developing, testing, and deploying innovative AI products. Certainly, the commission’s guiding principles—among them the need to ensure “the human-centred and value-oriented design of technology”—reinforce that European lawmakers are likely to regulate AI development comprehensively and decisively.
The board also suggested that humans should always be responsible for the “development, deployment, use and outcomes” of AI rather than letting AI set its own standards of use: “Governability is important because operators and users of AI systems should understand the potential consequences of deploying the system or system of systems to its full extent, which may lead to unintended harmful outcomes.” In these cases, DOD should not use that AI system because “it does not achieve mission objectives in an ethical or responsible manner.” The DIB also recommended a number of technical and organizational measures that would help lay the groundwork to ensure military artificial intelligence systems adhere to ethical standards, such as increasing investment in standards development, workforce programs and AI security applications, and formalizing channels for exploring the ethical implications of deploying AI technology across the department.
On the heels of that guidance, on August 27, 2019, the USPTO published a request for public comment on several patent-related issues regarding AI inventions. The request for comment posed twelve questions covering several topics from “patent examination policy to whether new forms of intellectual property protection are needed.” The questions included topics such as whether patent laws, which contemplate only human inventors, should be amended to allow entities other than a human being to be considered an inventor. The commenting period was extended until November 8, 2019, and many of the comments submitted argue that ownership of patent rights should remain reserved for only natural or juridical persons. On December 13, 2019, the World Intellectual Property Organization (“WIPO”) published a draft issue paper on IP policy and AI, and requested comments on several areas of IP, including patents and data, and, similarly to the USPTO before it, with regard to issues of inventorship and ownership. The commenting period is set to end on February 14, 2020.
Robustness and Explainability of Artificial Intelligence
In light of recent advances in artificial intelligence (AI), concerns about the serious negative consequences of its use for EU citizens and organisations have led to multiple initiatives from the European Commission to set up the principles of a trustworthy and secure AI.
This Technical Report by the European Commission Joint Research Centre (JRC) aims to contribute to this movement toward the establishment of a sound regulatory framework for AI, by making the connection between the principles embodied in current regulations regarding the cybersecurity of digital systems and the protection of data, the policy activities concerning AI, and the technical discussions within the scientific community of AI, in particular in the field of machine learning, which is largely at the origin of the recent advances of this technology.