AI News: A European Approach to Artificial Intelligence
Strategy for artificial intelligence
In its strategy on artificial intelligence, the European Commission put forward three strands, which go hand in hand with its vision for a European ecosystem of excellence and trust.
This vision, presented in the white paper on artificial intelligence, will be put into practice through two key documents presented by the Commission in its “AI Package” of April 2021. As part of its AI Strategy, the Commission has joined forces with all Member States, as well as Norway and Switzerland, to foster the development and use of AI in Europe.
To this end, the Commission will take further action. According to a special report on how AI and automation will transform the world of work, a major shift in economies and all related activities is currently taking place.
The High-Level Expert Group on AI (AI HLEG) built on this concept in its ethics guidelines for trustworthy AI. The goal of an ethical and legal framework for AI, along with the four deliverables the 52 experts produced during their two-year mandate, has strongly shaped the Commission’s vision on AI.
EDPB and EDPS call for a ban on the use of AI for automated recognition of human features in publicly accessible spaces, and on some other uses of AI that can lead to unfair discrimination
The EDPB and EDPS have adopted a joint opinion on the European Commission’s Proposal for a Regulation laying down harmonised rules on artificial intelligence (AI). The EDPB and the EDPS strongly welcome the aim of addressing the use of AI systems within the European Union, including the use of AI systems by EU institutions, bodies or agencies.
The EDPB and EDPS also stress the need to explicitly clarify that existing EU data protection legislation (GDPR, the EUDPR and the LED) applies to any processing of personal data falling under the scope of the draft AI Regulation. While the EDPB and the EDPS welcome the risk-based approach underpinning the Proposal, they consider that the concept of “risk to fundamental rights” should be aligned with the EU data protection framework.
Taking into account the extremely high risks posed by remote biometric identification of individuals in publicly accessible spaces, the EDPB and the EDPS call for a general ban on any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context.
However, the role and tasks of the EDPS should be further clarified, specifically when it comes to its role as market surveillance authority. The EDPB and EDPS recall that data protection authorities (DPAs) are already enforcing the GDPR and the LED on AI systems involving personal data, in order to guarantee the protection of fundamental rights and more specifically the right to data protection.
2. Hopes about developments in ethical AI
Benjamin Grosof, chief scientist at Kyndi, a Silicon Valley start-up focused on the reasoning and knowledge representation side of AI, wrote, “Some things that give me hope are the following: Most AI technical researchers (as distinguished from business or government deployers of AI) care quite a lot about the ethicality of AI.
It has tremendous potential to improve productivity economically and to save people effort even when money is not flowing directly by better automating decisions and information analysis/supply in a broad range of work processes.
I am concerned that so much of that is directed toward military purposes or controlled by military branches of governments.” Perry Hewitt, chief marketing officer at data.org, responded, “I am hopeful that ‘ethical AI’ will extend beyond the lexicon to the code by 2030.
The contribution to Linux Foundation AI, by major technology companies, of the open-source Trusted AI project code for AI fairness, AI robustness and AI explainability on which their products are based is a very positive sign.” Michael Wollowski, a professor of computer science at Rose-Hulman Institute of Technology and expert in artificial intelligence, said, “It would be unethical to develop systems that do not abide by ethical codes, if we can develop those systems to be ethical.
Since Europe is a big market, since developing systems that abide by an ethical code is not a trivial endeavor, and since the big tech companies (except for Facebook) by and large want to do good (well, their employees by and large want to work for companies that do good), they will develop their systems in ways that abide by ethical codes.
While they can be used to automate many things and while people by and large are creatures of habit, it is my fond hope that we will rediscover what it means to be human.” Paul Epping, chairman and co-founder of XponentialEQ and well-known keynote speaker on exponential change, wrote, “The power of AI and machine learning (and deep learning) is underestimated.
AI is helping to solve the world’s biggest problems, finding new materials, running simulations, digital twins (including personal digital twins that can be used to run experiments in case of treatments).
Atkinson, president of the Information Technology and Innovation Foundation, wrote, “The real question is not whether all AI developers sign up to some code of principles, but rather whether most AI applications work in ways that society expects them to, and the answer to that question is almost 100% ‘yes.’” Ben Shneiderman, distinguished professor of computer science and founder of the Human-Computer Interaction Lab at the University of Maryland, commented, “While technology raises many serious problems, efforts to limit malicious actors should eventually succeed and make these technologies safer.
True change will come when corporate business choices are designed to limit the activity of malicious actors – criminals, political operatives, hate groups and terrorists – while increasing user privacy.” Carol Smith, a senior research scientist in human–machine interaction at Carnegie Mellon University’s Software Engineering Institute, said, “There are still many lessons to be learned with regard to AI and very little in the way of regulation to support human rights and safety.
Humans must be kept in the loop with regard to decisions involving people’s lives, quality of life, health and reputation, and humans must be ultimately responsible for all AI decisions and recommendations (not the AI system).” Marvin Borisch, chief technology officer at RED Eagle Digital based in Berlin, wrote, “When used for the greater good, AI can and will help us fight a lot of human problems in the next decade.
Prediagnostics, fair ratings for insurance or similar, supporting humans in space and other exploration and giving us theoretical solutions for economic and ecological problems – these are just a few examples of how AI is already helping us and can and will help us in the future.
What worries me the most is that AI developers are trying to trump each other – not for the better use but for the most medial outcome in order to impress stakeholders and potential investors.” Tim Bray, well-known technology leader who has worked for Amazon, Google and Sun Microsystems, predicted, “Unethical AI-driven behavior will produce sufficiently painful effects that legal and regulatory frameworks will be imposed that make its production and deployment unacceptable.” Gary M.
Like everything else, it will stabilize in some type of compromised structure within the decade time frame the question anticipates.” Erhardt Graeff, a researcher expert in the design and use of digital technologies for civic and political engagement, noted, “Ethical AI is boring directly into the heart of the machine-learning community and, most importantly, influencing how it is taught in the academy.
Hopefully, smaller companies and those that don’t draw the same level of scrutiny from regulators and private citizens will adopt similar practices and join ethical AI consortia and find themselves staffed with upstanding technologists.
AI and machine learning will benefit us the most in the health context – being able to examine thousands of possibilities and variables in a few seconds, but human professionals will always have to examine the data and context to apply any results.
We need to be sure that something like insurance doesn’t affect a doctor or researcher’s readout in these contexts.” Su Sonia Herring, a Turkish-American internet policy researcher with Global Internet Policy Digital watch, said, “AI will be used in questionable ways due to companies and governments putting profit and control in front of ethical principles and the public good.
Issues related to privacy, security, accountability and transparency in AI tech concern me, while the potential of processing big data to solve global issues excites me.” Edson Prestes, a professor of computer science at Federal University of Rio Grande do Sul, Brazil, commented, “By 2030, technology in general will be developed taking into account ethical considerations.
Guterres used the panel’s recommendations to create a roadmap with concrete actions that address the digital domain in a holistic way, engaging a wide group of organisations to deal with the consequences emerging from the digital domain.” James Blodgett, futurist, author and consultant, said, “‘Focused primarily on the public good’ is not enough if the exception is a paperclip maximizer.
A futurist and managing principal for a consultancy commented, “AI offers extremely beneficial opportunities, but only if we actively address the ethical principles and regulate and work toward them. AI also has the potential to once again be a job killer, but it can also assist the practice of medicine, law enforcement, etc.
Colclough, an expert on the future of work and the politics of technology and ethics in AI, observed, “By 2030, governments will have woken up to the huge challenges AI (semi/autonomous systems, machine learning, predictive analytics, etc.) poses to democracy, legal compliance and our human and fundamental rights.
Otherwise, they risk being good intentions with little effect.” Thomas Birkland, professor of public and international affairs at North Carolina State University, wrote, “AI will be informed by ethical considerations in the coming years because the stakes for companies and organizations making investments in AI are too high.
New institutions like an ‘auditor general of algorithms’ (to oversee that algorithms and other computations actually produce the results they intend, and to offer ways to respond and correct) will inevitably arise – just like our other institutions of oversight.” James Morris, professor of computer science at Carnegie Mellon, wrote, “I had to say ‘yes.’ The hope is that engineers get control away from capitalists and rebuild technology to embody a new ‘constitution.’ I actually think that’s a longshot in the current atmosphere.
I worry that individuals will choose the code they like the best – which is why a plethora of codes is dangerous.” Nigel Cameron, president emeritus at the Center for Policy on Emerging Technologies, commented, “The social and economic shifts catalyzed by the COVID plague are going to bring increasing focus to our dependence on digital technologies, and with that focus will likely come pressure for algorithmic transparency and concerns over equity and so forth.
Reiner, professor of neuroethics at the University of British Columbia, said, “As AI-driven applications become ever more entwined in our daily lives, there will be substantial demand from the public for what might be termed ‘ethical AI.’ Precisely how that will play out is unknown, but it seems unlikely that the present business model of surveillance capitalism will hold, at least not to the degree that it does today.
An alternative is that a new regulatory regime emerges, constraining AI service providers and mandating ethical practice.” Ronnie Lowenstein, a pioneer in interactive technologies, noted, “AI and the related integration of technologies holds the potential of altering lives in profound ways.
Most people should learn informatics and have one person in the family who understands computers.” Ray Schroeder, associate vice chancellor of online learning, University of Illinois-Springfield, responded, “One of the aspects of this topic that gives me the most hope is that, while there is the possibility of unethical use of AI, the technology of AI can also be used to uncover those unethical applications.
Michelson, a professor of political science at Menlo College, commented, “Because of the concurrent rise of support for the Black Lives Matter movement, I see people taking a second look at the role of AI in our daily lives, as exemplified by the decision to stop police use of facial recognition technology.
AI may have to be used to help keep up with adaptation needed for the ethical standards needed for AI systems.” Anne Collier, editor of Net Family News and founder of The Net Safety Collaborative, responded, “Policymakers, newsmakers, users and consumers will exert and feel the pressure for ethics with regard to tech and policy because of three things: “Populism and authoritarianism in a number of countries certainly threaten that trajectory, but – though seemingly on the rise now – I don’t see this as a long-term threat (a sense of optimism that comes from watching the work of so-called ‘Gen Z’).
I believe it would give many other adults a sense of optimism similar to mine.” Eric Knorr, pioneering technology journalist and editor in chief of International Data Group, the publisher of a number of leading technology journals, commented, “First, only a tiny slice of AI touches ethics – it’s primarily an automation tool to relieve humans of performing rote tasks.
Current awareness of ethical issues offers hope that AI will either be adjusted to compensate for potential bias or sequestered from ethical judgment.” Anthony Clayton, an expert in policy analysis, futures studies, and scenario and strategic planning based at the University of the West Indies, said, “Technology firms will come under increasing regulatory pressure to introduce standards (with regard to, e.g., ethical use, error-checking and monitoring) for the use of algorithms when dealing with sensitive data.
AI will also enable, e.g., autonomous lethal weapons systems, so it will be important to develop ethical and legal frameworks to define acceptable use.” Fabrice Popineau, an expert on AI, computer intelligence and knowledge engineering based in France, responded, “I have hope that AI will follow the same path as other potentially harmful technologies before it (nuclear, bacteriological);
safety mechanisms will be put in motion to guarantee that AI use stays beneficial.” Concepcion Olavarrieta, foresight and economic consultant and president of the Mexico node of The Millennium Project, wrote, “Yes, there will be progress.” Sharon Sputz, executive director of strategic programs at The Data Science Institute at Columbia University, predicted, “In the distant future, ethical systems will prevail, but it will take time.”
Some point out that the field of bioethics has already managed to broadly embrace the concepts of beneficence, nonmaleficence, autonomy and justice in its work to encourage and support positive biotech evolution that serves the common good.
Some of these experts expect to see an expansion of the type of ethical leadership already being demonstrated by open-source AI developers, a cohort of highly principled AI builders who take the view that it should be thoughtfully created in a fairly transparent manner and be sustained and innovated in ways that serve the public well and avoid doing harm.
Micah Altman, a social and information scientist at MIT, said, “First, the good news: In the last several years, dozens of major reports and policy statements have been published by stakeholders from across all sectors arguing that the need for ethical design of AI is urgent and articulating general ethical principles that should guide such design.
“I think public agencies will take these issues very seriously, and mechanisms will be created to improve AI (although the issues pose difficult problems for legislators due to [the] highly technical nature).
Nathan Matias, an assistant professor at Cornell University and expert in digital governance and behavior change in groups and networks, noted, “Unless there is a widespread effort to halt their deployment, artificial intelligence systems will become a basic part of how people and institutions make decisions.
I foresee that issues caused by the primary use of AI will bring the community to debate about that, and we will come up with some ethical guidelines around AI by 2030.” Doris Marie Provine, emeritus professor of justice and social inquiry at Arizona State University, noted, “I am encouraged by the attention that ethical responsibilities are getting.
At a global level, I worry about AI being used as the next phase of cyber warfare, e.g., to mess up public utilities.” Judith Schoßböck, research fellow at Danube University Krems, said, “I don’t believe that most AI systems will be used in ethical ways.
Increased attention to issues of privacy, autonomy and justice in digital activities and services should lead to safeguards and regulations concerning ethical use of AI.” Michael Marien, director of Global Foresight Books, futurist and compiler of the annual list of the best futures books of the year, said, “We have too many crises right now, and many more ahead, where technology can only play a secondary role at best.
Technology should be aligned with the UN’s 17 Sustainable Development Goals and especially concerned about reducing the widening inequality gap (SDG #10), e.g., in producing and distributing nutritious food (SDG #2).” Ibon Zugasti, futurist, strategist and director with Prospektiker, wrote, “The use of technologies, ethics and privacy must be guaranteed through transparency.
There is a need to define a new governance system for the transition from current narrow AI to the future general AI.” Gerry Ellis, an accessibility and usability consultant, said, “The concepts of fairness and bias are key to ensure that AI supports the needs of all of society, particularly those who are vulnerable, such as many (but not all) persons with disabilities.
Unless there are both ethical and legal constraints with real teeth, we’ll find all manner of exploitations in finance, insurance, investing, employment, personal data harvesting, surveillance and dynamic pricing of almost everything from a head of lettuce to real estate.
I am concerned about using AI to write news stories unless the ‘news’ is a sports score, weather report or some other description of data … My greatest fear, not likely in my lifetime, is that AI eventually is deployed as our minders – telling us when to get up, what to eat, when to sleep, how much and how to exercise, how to spend our time and money, where to vacation, who to socialize with, what to watch or read and then secretly rates us for employers or others wanting to size us up.” Michael R.
I see these as ethically neutral applications.” Lee McKnight, associate professor at the Syracuse University School of Information Studies, wrote, “When we say ‘AI,’ most people really mean a wider range of systems and applications, including machine learning, neural networks and natural language processing to name a few.
I am hopeful that smart cities and communities initially – and eventually all levels of public organizations and nonprofits – will write into their procurement contracts requirements that firms commit to an ethical review process for AI applications touching on people directly, such as facial recognition.
How exactly to determine if one is truly ‘certified’ in ethics is obviously an area where the public would laugh in the faces of corporate representatives claiming their internal, not publicly disclosed, or audited, ethical training efforts are sufficient.
The real challenge to consider is how AI will be used in combination with other disruptive technologies – the Internet of Things, 3D printing, cloud computing, blockchain, genomic engineering, implantable devices, new materials, environment-friendly technologies and new ways to store energy – and how the environment and the human race will be affected by, and at the same time be part of, that change, both physically and mentally.
Artificial Intelligence Act: What Is the European Approach for AI?
Mark MacCarthy and Kenneth Propp have called the proposed regulation “a comprehensive and thoughtful start to the legislative process in Europe [that] might prove to be the basis for trans-Atlantic cooperation.” This post builds on MacCarthy and Propp’s discussion and closely examines the key elements of the proposal—the provisions most likely to shape the discussion regarding the regulation of AI on this side of the Atlantic. Before diving deep into the legislation itself, it is important to recognize the significant amount of work that the European Union has done to come up with this text.
Among these key elements was the risk-based approach suggesting that mandatory legal requirements—derived from the ethical principles—should be imposed on high-risk AI systems. The white paper was followed by a public consultation process that involved 1,200 stakeholders from various backgrounds—citizens, academics, EU member states, civil society, as well as businesses and industry.
First, it provides an expansive (and somewhat vague) definition of AI systems—one strongly influenced by the Organization for Economic Cooperation and Development’s definition—in the body of the proposed regulation at Article 3: “[A]rtificial intelligence system” (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with[.] The European Commission sought to clarify the definition and provide legal certainty surrounding the scope of the act by enumerating, in Annex I, the computer science techniques and approaches that would be regulated.
For example, some observers claimed that the EU was “proposing to regulate the use of Bayesian estimation”—Bayesianism being first and foremost a mathematical theorem—to decry the overbroadness of the proposed regulation. While the critique of the definition’s broadness is not totally without merit, because it can create uncertainty for developers of AI systems, it’s worth noting that a technology falling within the scope of the proposal does not necessarily mean it will be subject to novel legal obligations.
The AI Act has attempted to find the middle ground by adopting a risk-based approach that bans specific unacceptable uses of AI, heavily regulates some other uses that carry important risks, and says nothing—except encouraging the adoption of codes of conduct—about the uses that are of limited risk or no risk at all. The gradation of risks is represented using a four-level pyramid—an unacceptable risk, a high risk, a limited risk and a minimal risk (see Figure 1).
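The four-tier gradation described above can be sketched as a simple lookup. This is purely an illustrative model: the tier names come from the proposal, but the example use cases and the `obligations` helper are assumptions for the sake of the sketch, not text from the act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four levels of the proposed AI Act's risk pyramid."""
    UNACCEPTABLE = "unacceptable"  # banned outright (Article 5)
    HIGH = "high"                  # heavily regulated
    LIMITED = "limited"            # transparency/disclosure duties (Article 52)
    MINIMAL = "minimal"            # no new obligations

# Illustrative mapping of hypothetical example uses to tiers,
# based on the description above -- not an authoritative list.
EXAMPLE_USES = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "cv_screening_for_recruitment": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Summarise the regulatory consequence of each tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited from the EU market",
        RiskTier.HIGH: "conformity assessment and ongoing requirements",
        RiskTier.LIMITED: "special disclosure/transparency duties",
        RiskTier.MINIMAL: "no new obligations; codes of conduct encouraged",
    }[tier]
```

The point of the pyramid is visible in the mapping: regulatory burden concentrates at the top two tiers, while the vast majority of AI systems fall into the bottom tier and face no new obligations at all.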
The defining characteristic of AI systems that fall into this category is that they raise certain issues in terms of transparency and thus require special disclosure obligations. There are three types of technologies that require such special transparency requirements: deep fakes, AI systems that are intended to interact with people, and AI-powered emotion recognition systems/biometric categorization systems. Article 52 of the proposed regulation grants people living in the European Union the right to know if the video they are watching is a deep fake, if the person they’re talking to is a chatbot or a voice assistant, and if they are subject to an emotion recognition analysis or a biometric categorization made by an AI system.
The overseeing duties include surveilling for automation bias problems, spotting anomalies or signs of dysfunctions, and deciding whether to override an AI system’s decision or to pull the “kill switch” if a system poses a threat to the safety and fundamental rights of people. Accuracy, Robustness and Cybersecurity: High-risk AI systems must achieve a level of accuracy, robustness and cybersecurity proportionate to their intended purpose.
Once the compliance assessment is done, the provider of an AI system completes an EU declaration of conformity and then can affix the CE marking of conformity and enter the European market. Compliance is not static, and the proposed regulation requires high-risk AI system providers to enact a postmarket monitoring process that actively evaluates the system’s compliance throughout its life cycle. Unacceptable Risk: This category of systems is regulated by Article 5.
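The compliance sequence just described (assessment, EU declaration of conformity, CE marking, market entry, then continuous postmarket monitoring) can be pictured as a small ordered pipeline. The step names below are an illustrative sketch of that process, not terminology defined by the regulation.

```python
# Minimal sketch of a high-risk AI provider's compliance lifecycle,
# following the order described above.
LIFECYCLE = [
    "conformity_assessment",
    "eu_declaration_of_conformity",
    "ce_marking_affixed",
    "placed_on_eu_market",
    "postmarket_monitoring",  # continuous, for the system's whole life cycle
]

def next_step(current: str) -> str:
    """Return the step that follows `current`.

    Compliance is not static: once the system is on the market,
    postmarket monitoring repeats for the product's life cycle.
    """
    i = LIFECYCLE.index(current)
    return LIFECYCLE[min(i + 1, len(LIFECYCLE) - 1)]
```

For example, `next_step("ce_marking_affixed")` yields market entry, while `next_step("postmarket_monitoring")` loops back on itself, reflecting that monitoring never terminates while the system remains on the market.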
The proposal also bans what can be described as “manipulation.” The proposal describes this as AI systems that “exploit any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.” A doll with an integrated voice assistant that encourages a minor to engage in progressively dangerous behavior would be prohibited under Article 5(1)(b).
However, since the proposed regulation applies to every high-risk AI system that is put on the European market and to every high-risk AI system for which the output is used in the union, the requirements are likely to have significant impacts on American tech developers. Indeed, due to the Brussels effect—a phenomenon by which the European Union applies its own regulations to foreign actors through extraterritorial means—American tech developers will have to comply with European rules in many instances.
Instead, the commission suggested a broad European debate on the specific circumstances, if any, that might justify the use of facial recognition technologies. During this debate, France, Finland, the Czech Republic and Denmark submitted remarks arguing that using facial recognition to identify people in public spaces might be justified for important public security reasons and should therefore be allowed under strict legal conditions and safeguards.
Key voices like that of Wojciech Wiewiórowski, the European data protection supervisor, have called for a moratorium on such uses in public spaces, and members of the European Parliament went even further, asking for an outright ban of all kinds of remote biometric identification technologies. In the meantime, however, this makes the EU’s claim of being the champion of fundamental rights protection in the field of AI harder to sustain.
Various cities in the United States have taken action, with Portland, Boston and San Francisco banning the use of facial recognition in public places by city departments and agencies.
The International Influence of European Regulations
When it comes to technology, Europe has made no secret of its desire to “export its values across the world.” And as we have seen with the GDPR—the European law that has quickly become the international standard in terms of data protection—these efforts are far from in vain. Since the GDPR went into full effect in 2018, the law has significantly shaped how data protection is conducted around the world.
She also explains how users of major internet services and platforms such as Google, Netflix, Airbnb or Uber—wherever they are located in the world—end up being governed by European data protection laws, because these companies have adopted single global privacy policies that comply with the GDPR to manage all of their users’ data. With the proposed AI Act, the European Union seems to want to replicate the kind of regulatory influence it achieved with the GDPR.
The extraterritorial reach of the proposal illustrates the European Commission’s hegemonic aims: The proposed regulation covers providers of AI systems in the EU, irrespective of where the provider is located, as well as users of AI systems located within the EU, and providers and users located outside the EU “where the output produced by the system is used in the Union.” The international community should not let Europe write the rules that govern technology all by itself.