Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade
Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question: By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?
It is important to note that the responses were gathered in the summer of 2020 in a different cultural context amid the pandemic, before COVID-19 vaccines had been approved, at a time when racial justice issues were particularly prominent in the U.S. and before the conclusion of the U.S. presidential election.
For instance, in early 2021 the Stanford Institute for Human-Centered Artificial Intelligence released an updated AI Index Report, the IEEE deepened its focus on setting standards for AI systems and the U.S. National Security Commission on AI, headed by tech leaders including Eric Schmidt, Andy Jassy, Eric Horvitz, Katharina McFarland and Robert Work, released its massive report on accelerating innovation while defending against malign uses of AI.
The key themes these experts voiced in the written elaborations explaining their choices are outlined in the shaded boxes below about “worries” and “hopes.” The respondents whose insights are shared in this report focus their lives on technology and its study.
Some described their approach as a comparative one: It’s not whether AI systems alone produce questionable ethical outcomes, it’s whether the AI systems are less biased than the current human systems and their known biases.
AI systems will be used in ways that affect people’s livelihoods and well-being – their jobs, their family environment, their access to things like housing and credit, the way they move around, the things they buy, the cultural activities to which they are exposed, their leisure activities and even what they believe to be true.
Barry Chudakov, founder and principal of Sertain Research, said, “Before answering whether AI will mostly be used in ethical or questionable ways in the next decade, a key question for guidance going forward will be, What is the ethical framework for understanding and managing artificial intelligence?
Further, while we have a host of regulatory injunctions such as speed limits, tax rates, mandatory housing codes and permits, etc., we consider our devices so much a part of our bodies that we use them without a moment’s thought for their effects upon the user.
With that understanding, for AI to be used in ethical ways, and to avoid questionable approaches, we must begin by reimagining ethics itself.” Mike Godwin, former general counsel for the Wikimedia Foundation and creator of Godwin’s Law, wrote, “The most likely outcome, even in the face of increasing public discussions and convenings regarding ethical AI, will be that governments and public policy will be slow to adapt.
The most likely scenario is that some kind of public abuse of AI technologies will come to light, and this will trigger reactive limitations on the use of AI, which will either be blunt-instrument categorical restrictions on its use or (more likely) a patchwork of particular ethical limitations addressed to particular use cases, with unrestricted use occurring outside the margins of these limitations.”
Jamais Cascio, research fellow at the Institute for the Future, observed, “I expect that there will be an effort to explicitly include ethical systems in AI that have direct interaction with humans but largely in the most clear-cut, unambiguous situations.
For instance, it has been shown that judges are more lenient toward first offenders than machine learning would be: the models predict a high probability of reoffending, but judges do not take that probability into account when sentencing.
But, more generally, the algorithm only does what it is told to do: If the law that has been voted on by the public ends up throwing large fractions of poor young males in jail, then that’s what the algorithm will implement, removing the judge’s discretion to do some minor adjustment at the margin.
AI can be an opportunity to improve the ethical behavior of cars (and other apps), based on rational principles instead of knee-jerk emotional reaction.” “When it comes to AI, we should pay close attention to China, which has talked openly about its plans for cyber sovereignty.
We don’t have a definition, and that creates a strategic vulnerability.” Stowe Boyd, consulting futurist and expert in technological evolution and the future of work, noted, “I have projected a social movement that would require careful application of AI as one of several major pillars.
I’ve called this the Human Spring, conjecturing that a worldwide social movement will arise in 2023, demanding the right to work and related social justice issues, a massive effort to counter climate catastrophe, and efforts to control artificial intelligence.
AI applied in narrow domains that are really beyond the reach of human cognition – like searching for new ways to fold proteins to make new drugs or optimizing logistics to minimize the number of miles that trucks drive every day – is a sensible and safe application of AI.
But AI directed toward making us buy consumer goods we don’t need or surveilling everyone moving through public spaces to track our every move, well, that should be prohibited.” Jonathan Grudin, principal researcher with the Natural Interaction Group at Microsoft Research, said, “The past quarter-century has seen an accelerating rise of online bad actors (not all of whom would agree they are bad actors) and an astronomical rise in the costs of efforts to combat them, with AI figuring in this.
“The principal use of AI is likely to be finding ever more sophisticated ways to convince people to buy things that they don’t really need, leaving us deeper in debt with no money to contribute to efforts to combat climate change, environmental catastrophe, social injustice and inequality and so on.” Ben Shneiderman, distinguished professor of computer science and founder of the Human-Computer Interaction Lab at the University of Maryland, commented, “Ethical principles (responsibility, fairness, transparency, accountability, auditability, explainability, reliability, resilience, safety, trustworthiness) are a good starting point, but much more is needed to bridge the gap with the realities of practice in software engineering, organization management and independent oversight.
Bolles, chair for the future of work at Singularity University, responded, “I hope we will shift the mindset of engineers, product managers and marketers from treating ethics and human centricity as a tack-on after AI products are released, to a model that guarantees ethical development from inception.
If we don’t fix this, we can’t even imagine how far off the rails this can go once AI is creating AI.” Douglas Rushkoff, well-known media theorist, author and professor of media at City University of New York, wrote, “Why should AI become the very first technology whose development is dictated by moral principles?
It extracts value from us in the most ‘humane’ way possible?” David Brin, physicist, futures thinker and author of “Earth” and “Existence,” commented, “Isaac Asimov in his ‘Robots’ series conceived a future when ethical matters would be foremost in the minds of designers of AI brains, not for reasons of judiciousness, but in order to quell the fears of an anxious public, and hence Asimov’s famed ‘Three Laws of Robotics.’ No such desperate anxiety about AI seems to surge across today’s populace, perhaps because we are seeing our AI advances in more abstract ways, mostly on screens and such, not in powerful, clanking mechanical men.
Dyer, professor emeritus of computer science at UCLA, expert in natural language processing, responded, “Ethical software is an ambiguous notion and includes: “Consider that you, in the more distant future, own a robot and you ask it to get you an umbrella because you see that it might rain today.
AI systems are right now being evolved to survive (and learn) in simulated environments and such systems, if given language comprehension abilities (being developed in the AI field of natural language processing), would then achieve a form of sentience (awareness of one’s awareness and ability to communicate that awareness to others, and an ability to debate beliefs via reasoning, counterfactual and otherwise, e.g., see work of Judea Pearl).” Marjory S.
The support provided by today’s ‘smart speakers’ should become more meaningful and more useful (especially if clear attention to privacy and security comes with the increased functionality).” Ethan Zuckerman, director of MIT’s Center for Civic Media and associate professor at the MIT Media Lab, commented, “The activists and academics advocating for ethical uses of AI have been remarkably successful in having their concerns aired even as harms of misused AI are just becoming apparent.
Because these pioneers have been so active in putting AI ethics on the agenda, I think we have a rare opportunity to deploy AI in a vastly more thoughtful way than we otherwise might have.” Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google, observed, “There will be a good-faith effort, but I am skeptical that the good intentions will necessarily result in the desired outcomes.
We are, however, benefiting enormously from many ML applications, including speech recognition and language translation, search efficiency and effectiveness, medical diagnosis, exploration of massive data to find patterns, trends and unique properties (e.g., pharmaceuticals).
So, we should expect to see ethical norms around bias, governance and transparency become more common, much the same way we’ve seen the auto industry and others adopt safety measures like seatbelts, airbags and traffic signals over time.
And all bets are off when we are talking about AI-enabled weaponry, which will require a level of diplomacy, policy and global governance similar to nuclear power.” Esther Dyson, internet pioneer, journalist, entrepreneur and executive founder of Wellville, responded, “With luck, we’ll increase transparency around what AI is doing (as well as around what people are doing), because it will be easier to see the impact of decisions made by both people and algorithms.
We should be directing far more attention to research on helping people learn better, helping them interact online better and helping them make decisions better.” Beth Noveck, director, NYU Governance Lab and its MacArthur Research Network on Opening Governance, responded, “Successful AI applications depend upon the use of large quantities of data to develop algorithms.
Adams, a 24-year veteran of IBM, now working as a senior research scientist in artificial intelligence for RTI International, architecting national-scale knowledge graphs for global good, wrote, “The AI genie is completely out of the bottle already, and by 2030 there will be dramatic increases in the utility and universal access to advanced AI technology.
This may be the equivalent of a mutually assured destruction policy, but to take guns away from the good guys only means they can’t defend themselves from the bad guys anymore.” Joël Colloc, professor of computer sciences at Le Havre University, Normandy, responded, “Most researchers in the public domain have an ethical and epistemological culture and do research to find new ways to improve the lives of humanity.
When these tools are placed only in the hands of private interests, for the sole purpose of making profit and getting even more money and power, the use of science can lead to deviances and even uses against the states themselves – even though it is increasingly difficult for these companies to enforce the laws, which do not necessarily have the public interest as their concern.
What scares me most is the use of AI for personalized medicine that, under the guise of prevention, will lead to a new eugenics and all the cloning drifts, etc., that can lead to the ‘Brave New World’ of Aldous Huxley.”
Susan Crawford, a professor at Harvard Law School and former special assistant in the Obama White House for Science Technology and Innovation Policy, noted, “For AI, just substitute ‘digital processing.’ We have no basis on which to believe that the animal spirits of those designing digital processing services, bent on scale and profitability, will be restrained by some internal memory of ethics, and we have no institutions that could impose those constraints externally.”
Paul Jones, professor emeritus of information science at the University of North Carolina, Chapel Hill, observed, “Unless, as I hope happens, the idea that all tech is neutral is corrected, there is little hope or incentive to create ethical AI.
More often I hear data scientists disparaging what they consider ‘soft sciences’ and claiming that their socially agnostic engineering approach or their complex statistical approach is a ‘hard science.’ While I don’t fear an AI war, a Čapek-like robot uprising, I do fear the tendency not to ask the tough questions of AI – not just of general AI, where most of such questions are entertained, but in narrow AI, where most progress and deployment are happening quickly.
Nonetheless, even if those features are explicitly excluded from the training set, the training data might well encode the biases of human raters, and the AI could pick up on secondary features that infer the excluded ones (e.g., silently inferring a proxy variable for race from income and postal address).
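The proxy-variable mechanism described above can be made concrete with a small, self-contained simulation. Everything below (the group labels, the postal-zone correlation, the risk rates) is invented for illustration: even when the protected attribute is excluded from training, a model fit only on a correlated feature such as postal zone still reproduces the historical disparity.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical synthetic data: 'group' is the protected attribute we exclude
# from training, but postal zone correlates strongly with it.
def make_person():
    group = random.choice(["A", "B"])
    # Group membership skews which postal zone a person lives in (the proxy).
    postal_zone = (random.random() < 0.9) == (group == "A")  # True ~ zone 1
    # Historically biased label: group B was rated "high risk" far more often.
    high_risk = random.random() < (0.2 if group == "A" else 0.7)
    return group, postal_zone, high_risk

people = [make_person() for _ in range(10_000)]

# "Train" a one-feature model on postal zone only (group is excluded):
# predict the majority label observed within each zone.
by_zone = {True: Counter(), False: Counter()}
for group, zone, risk in people:
    by_zone[zone][risk] += 1
model = {zone: counts.most_common(1)[0][0] for zone, counts in by_zone.items()}

# Evaluate: does the group-blind model still produce disparate predictions?
def risk_rate(group_name):
    preds = [model[zone] for g, zone, _ in people if g == group_name]
    return sum(preds) / len(preds)

print(f"Predicted high-risk rate, group A: {risk_rate('A'):.2f}")
print(f"Predicted high-risk rate, group B: {risk_rate('B'):.2f}")
```

Because postal zone is a near-perfect proxy for group membership in this synthetic data, the nominally group-blind model still assigns high-risk predictions to the two groups at very different rates.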
“Without a doubt, AI will do great things for us, whether it’s self-driving cars that significantly reduce automotive death and injury, or whether it is computers reading radiological scans and identifying tumors earlier in their development than any human radiologist might do reliably.
AI can and will be engineered toward utopian and dystopian ends.” Shel Israel, Forbes columnist and author of many books on disruptive technologies, commented, “Most developers of AI are well-intentioned, but issues that have been around for over 50 years remain unresolved:
Calton Pu, professor and chair in the School of Computer Science at Georgia Tech, wrote, “The main worry about the development of AI and ML (machine learning) technologies is the current AI/ML practice of using fixed training data (ground truth) for experimental evaluation as proof that they work.
… To change these limitations, the ML/AI community and companies will need to face the inconvenient truth, the growing gap, and start to work on the growing gap instead of simply shutting down AI systems that no longer work (when the gap grew too wide), which has been the case of the Microsoft Tay chatbot and Google Flu Trends, among others.” Greg Sherwin, vice president for engineering and information technology at Singularity University, responded, “Explainable AI will become ever more important.
“As the world and society become more defined by VUCA [volatile, uncertain, complex, ambiguous] forces, the less AI will be useful given its complete dependency on past data, existing patterns and its ineffectiveness in novel situations.
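The “complete dependency on past data” described here is, in practice, the problem of distribution drift. A minimal sketch (with invented numbers and a deliberately trivial model) of how a deployed system might at least detect that the world has moved outside its training regime:

```python
import random
import statistics

random.seed(1)

# Hypothetical sketch: a "model" fit to yesterday's data drifts out of
# validity when the world changes, and a simple mean-shift check flags it.
train = [random.gauss(10.0, 2.0) for _ in range(5_000)]   # past regime
threshold = statistics.mean(train)  # trivial model: flag values above mean

def drift_score(window):
    # Standardised gap between incoming data and the training distribution.
    mu, sigma = statistics.mean(train), statistics.stdev(train)
    return abs(statistics.mean(window) - mu) / sigma

same_regime = [random.gauss(10.0, 2.0) for _ in range(500)]
novel_regime = [random.gauss(16.0, 2.0) for _ in range(500)]  # a novel shock

print(f"drift on familiar data: {drift_score(same_regime):.2f}")
print(f"drift on novel data:    {drift_score(novel_regime):.2f}")
```

A high drift score does not fix the model, but it tells operators that its past-data assumptions no longer hold, which is exactly the failure mode the quote warns about.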
AI will simply become much like what computers were to society a couple decades ago: algorithmic tools in the background, with inherent and many known flaws (bugs, etc.), that are no longer revered for their mythical innovative novelty but are rather understood in context within limits, within boundaries that are more popularly understood.” Kathleen M.
“AI will save time, allow for increased control over your living space, do boring tasks, help with planning, auto-park your car, fill out grocery lists, remind you to take medicines, support medical diagnosis, etc.
Tools that auto-declare messages as disinformation could be used by authoritarian states to harm individuals.” Chris Savage, a leading expert in legal and regulatory issues based in Washington, D.C., wrote, “AI is the new social network, by which I mean: Back in 2007 and 2008, it was easy to articulate the benefits of robust social networking, and people adopted the technology rapidly. But its toxic elements – cognitive and emotional echo chambers, and the platforms’ economic incentives to drive engagement via stirred-up negative emotions rather than increased awareness and acceptance (or at least tolerance) of others – took some time to play out.
“But we simply do not know enough about what ‘ethical’ or ‘public-interested’ algorithmic decision-making looks like to build those concepts into actually deployed AI (indeed, we don’t know enough about what human ‘ethical’ and ‘public-interested’ decision-making looks like to model it effectively).
But the initial decade or so will be messy.” Jeff Jarvis, director of the Tow-Knight Center and professor of journalism innovation at City University of New York, said, “AI is an overbroad label for sets of technical abilities to gather, analyze and learn from data to predict behavior, something we have done in our heads since some point in our evolution as a species.
I have trouble seeing this treated as if it is an entirely new branch of ethics, for that brings an air of mystery to what should be clear and understandable questions of responsibility.” David Krieger, director of the Institute for Communication and Leadership, based in Switzerland, commented, “It appears that, in the wake of the pandemic, we are moving faster toward the data-driven global network society than ever before.
Perhaps traditional notions of civil liberties need to be revised and updated for a world in which connectivity, flow, transparency and participation are the major values.” John Smart, foresight educator, scholar, author, consultant and speaker, predicted, “Ethical AI frameworks will be used in high-reliability and high-risk situations, but the frameworks will remain primitive and largely human-engineered (top-down) in 2030.
Truly bottom-up, evolved and selected collective ethics and empathy (affective AI), similar to what we find in our domestic animals, won’t emerge until we have truly bottom-up, evo-devo [evolutionary developmental biology] approaches to AI.
The real benefits of AI will come when we’ve moved into a truly bottom-up style of AI development, with hundreds of millions of coders using open-source AI code on GitHub, with natural language development platforms that lower the complexity of altering code, with deeply neuro-inspired commodity software and hardware, and with both evolutionary and developmental methods being used to select, test and improve AI.
1. Worries about developments in AI
A portion of these experts infused their answers with questions that amount to this overarching question: How can ethical standards be defined and applied for a global, cross-cultural, ever-evolving, ever-expanding universe of diverse black-box systems in which bad actors and misinformation thrive?
Alternatively, banning black boxes may hinder AI development, putting our economic, military or political interests at risk.” Ryan Sweeney, director of analytics for Ignite Social Media, commented, “The definition of ‘public good’ is important here.
AI offers the most promise in replacing very poor human judgment in things like facial recognition and police stops.” Marc Brenman, managing member at IDARE, a transformational training and leadership development consultancy based in Washington, D.C., wrote, “As societies, we are very weak on morality and ethics generally.
Oversight has not been our strong suit in the last few decades, and there is little reason to believe it will be instituted in human-automation interactions.” Amali De Silva-Mitchell, a futurist and consultant participating in multistakeholder, global internet governance processes, wrote, “Although there are lots of discussions, there are few standards, and those that exist are high-level or came too late for the hundreds of AI applications already rolled out.
Duplication of efforts is a waste of resources.” Glenn Edens, professor at Thunderbird School of Global Management, Arizona State University, previously a vice president at PARC, observed, “The promise: AI and ML could create a world that is more efficient, wasting less energy or resources providing health care, education, entertainment, food and shelter to more people at lower costs.
Regulation is largely reactionary, rarely proactive – typically, bad things have to happen before frameworks to guide responsible and equitable behavior are written into laws, standards emerge or usage is codified into acceptable norms.
“Credit scoring comes to mind as a major potential area of concern – while the credit-scoring firms always position their work as providing consumers more access to financial products, the reality is that we’ve created a system that unfairly penalizes the poor and dramatically limits fair access to financial products at equitable prices.
AI and ML will be used by corporations to evaluate everything they do and every transaction, to rate every customer and their potential value, and to predict demand, pricing and targeting, as well as to assess their own employees and partners. While this can lead to efficiency, productivity and the creation of economic value, much of it will lead to segmenting, segregation, discrimination, profiling and inequity.
It isn’t AI that needs ethics, it’s the owners.” Glynn Rogers, retired, previously senior principal engineer and a founding member at the CSIRO Centre for Complex Systems Science, said, “AI and its successors are potentially so powerful that we have no choice but to ensure attention to ethics.
The alternative would be to hand over control of our way of life to a class of developers and implementors who are either focused on short-term, shortsighted interests or who have some form of political agenda, particularly ‘state actors.’ The big question is how to ensure this.
Rather than developing technologies simply for the sake of it, or to publish clever papers, there needs to be a cultural environment in which developers see as an inherent part of their task to consider the potential social and economic impacts of their activities and an employment framework that does not seek to repress this.
Perhaps moral and political philosophy should be part of the education of AI developers.” Alexandra Samuel, technology writer, researcher, speaker and regular contributor to the Wall Street Journal and Harvard Business Review, wrote, “Without serious, enforceable international agreements on the appropriate use and principles for AI, we face an almost inevitable race to the bottom.
Otherwise, governments will be too worried about putting their own countries’ businesses at a disadvantage.” Valerie Bock, VCB Consulting, former Technical Services Lead at Q2 Learning, commented, “I don’t think we’ve developed the philosophical sophistication in the humans who design AI sufficiently to expect them to be able to build ethical sophistication into their software.
A little humility based on what we are learning is in order.”
An example: Robot caregivers, assistants and tutors are increasingly being used to care for the most vulnerable members of society, despite known misgivings among scientist-roboticists, ethicists and users, both potential and current.
But profit-driven systems, the hubris of inventors, humans’ innate tendency to try to relate to any objects that seem to have agency, and other forces combine to work against the human skepticism that is needed if we are to create assistive robots that preserve the freedom and dignity of the humans who receive their care.” Alan D.
How AI is used will depend on your government’s hierarchy of values among economic development, international competitiveness and social impacts.” Jim Witte, director of the Center for Social Science Research at George Mason University, responded, “The question assumes that ethics and morals are static systems.
With developments in AI, there may also be an evolution of these systems such that what is moral and ethical tomorrow may be very different from what we see as moral and ethical today.” Yves Mathieu, co-director at Missions Publiques, based in Paris, France, wrote, “Ethical AI will require legislation like the European [GDPR] legislation to protect privacy rights on the internet.
These questions should guide us in our decision-making today so that we have more hope of AI being used to improve or benefit lives in the years to come.” Dan McGarry, an independent journalist based in Vanuatu, noted, “Just like every other algorithm ever deployed, AI will be a manifestation of human bias and the perspective of its creator.
they will be products of them and recognisable as such.” Abigail De Kosnik, associate professor and director of the Center for New Media at the University of California-Berkeley, said, “I don’t see nearly enough understanding in the general public, tech workers or in STEM students about the possible dangers of AI – the ways that AI can harm and fail society.
Given that, it looks like it will take more than 10 years for ‘most of the AI systems being used by organizations of all sorts to employ ethical principles focused primarily on the public good.’ Also, many organizations are simply focused primarily on other goals – not on protecting or promoting the public good.”
A lawyer and former law school dean who specializes in technology issues wrote, “AI is an exciting new space, but it is unregulated and, at least in early stages, will evolve as investment and monetary considerations direct.
Lauriault, a professor expert in critical media studies and big data based at Carleton University, Ottawa, Canada, commented, “Automation, AI and machine learning (ML) used in traffic management as in changing the lights to improve the flow of traffic, or to search protein databases in big biochemistry analytics, or to help me sort out ideas on what show to watch next or books to read next, or to do land-classification of satellite images, or even to achieve organic and fair precision agriculture, or to detect seismic activity, the melting of polar ice caps, or to predict ocean issues are not that problematic (and its use, goodness forbid, to detect white-collar crime in a fintech context is not a problem).
So, the interesting research question is whether or not we learn in a way that overcomes our intrinsic biases.” Jean Seaton, director of the Orwell Foundation and professor of media history at the University of Westminster, responded, “The ethics question also raises the question of who would create and police such standards internationally.
People have to be terrified enough, leaders have to be wise enough, people have to be cooperative enough, tech people have to be forward thinking enough, responsibility has to be felt vividly, personally, overwhelmingly enough – to get a set of rules passed and policed.” Cliff Lynch, director at the Coalition for Networked Information, wrote, “Efforts will be made to create mostly ‘ethical’ AI applications by the end of the decade, but please understand that an ethical AI application is really just software that’s embedded in an organization that’s doing something;
There will be some obvious exceptions for research, some kinds of national security, military and intelligence applications, market trading and economic prediction systems – many of these things operate under various sorts of ‘alternative ethical norms’ such as the ‘laws of war’ or the laws of the marketplace.
It’s clear that there’s a huge problem with machine learning and pattern-recognition systems, for example, that are trained on inappropriate, incomplete or biased data (or data that reflect historical social biases) or where the domain of applicability and confidence of the classifiers or predictors aren’t well-demarcated and understood.
There’s another huge problem where organizations are relying on (often failure-prone and unreliable, or trained on biased data, or otherwise problematic) pattern recognition or prediction algorithms (again machine-learning-based, usually) and devolving too much decision-making to these.
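One standard mitigation for this over-delegation is to demarcate a model’s domain of applicability explicitly and escalate everything else to a human. A minimal sketch, in which the model, its score range and the confidence threshold are all invented for illustration:

```python
# Hypothetical sketch of "demarcating the domain of applicability": a wrapper
# that refuses to auto-decide when the model's confidence is low or the input
# lies outside the range seen in training, escalating to a human instead.

def make_guarded(predict_proba, train_min, train_max, min_conf=0.8):
    def guarded(x):
        if not (train_min <= x <= train_max):
            return "escalate: outside training range"
        p = predict_proba(x)
        if max(p, 1 - p) < min_conf:
            return "escalate: low confidence"
        return "approve" if p >= 0.5 else "deny"
    return guarded

# Toy model: probability of approval rises linearly with an input score.
model = lambda x: min(max((x - 300) / 550, 0.0), 1.0)

decide = make_guarded(model, train_min=300, train_max=850)
print(decide(840))   # well inside range, high confidence
print(decide(580))   # mid-range, ambiguous -> escalated
print(decide(1200))  # never seen in training -> escalated
```

The wrapper does not make the underlying model any better; it simply keeps the organization from devolving decisions to it in exactly the regions where it is least trustworthy.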
The (rather more theoretical and speculative) philosophical and research discussions about superintelligence and about how one might design and develop such a general-purpose AI that won’t rapidly decide to exterminate humanity are extremely useful, important and valid, but they have little to do with the rhetorical social justice critiques that confuse algorithms with the organizations that stupidly and inappropriately design, train and enshrine and apply them in today’s world.” Deirdre Williams, an independent researcher expert in global technology policy, commented, “I can’t be optimistic.
A tiny human error or bias at the very beginning can balloon into an enormous error of truth and/or justice.” Alexa Raad, co-founder and co-host of the TechSequences podcast and former chief operating officer at Farsight Security, said, “There is hope for AI in terms of applications in health care that will make a positive difference.
By then, I am afraid some of the downsides and risks of AI will already be in play.” Andrea Romaoli Garcia, an international lawyer actively involved with multistakeholder activities of the International Telecommunication Union and Internet Society, said, “I define ethics as all possible and available choices where the conscience establishes the best option.
In terms of ethics for AI, the process for discovering what is good and right means choosing among all possible and available applications to find the one that best applies to the human-centred purposes, respecting all the principles and values that make human life possible.
This justifies the concerns about ethics and also focuses on issues such as freedom of expression, privacy and surveillance, ownership of data and discrimination, manipulation of information and trust, environmental issues and global warming, and also on how power will be established in society.
This scenario should accelerate efforts at international cooperation to establish a harmonious ethical AI that supports human survival and global evolution.” Olivier MJ Crépin-Leblond, entrepreneur and longtime participant in the activities of ICANN and IGF, said, “What worries me the most is that some actors in nondemocratic regimes do not see the same ‘norm’ when it comes to ethics.
As a result, I believe that most dominant outcomes – how ‘ethical’ is defined, how ‘acceptable risk’ is perceived, how ‘optimal solutions’ will be determined – will be limited and almost certainly perpetuate and amplify existing harms.
Like the internet and globalization, the path forward is likely less about guiding such complex systems toward utopian outcomes and more about adapting to how humans wield them under the same competitive and collaborative drivers that have attended the entirety of human history.” Kenneth Cukier, senior editor at The Economist and coauthor of “Big Data,” said, “Few will set out to use AI in bad ways (though some criminals certainly will).
In health care, an AI system may identify that some people need more radiation to penetrate the pigment in their skin to get a clearer medical image, but if this means Black people are blasted with higher doses of radiation and are therefore prone to negative side effects, people will believe there is an unfair bias.
“On global economics, a ‘neocolonial’ or ‘imperial’ commercial structure will form, whereby all countries have to become customers of AI from one of the major powers, America, China and, to a lesser extent, perhaps Europe.” Bruce Mehlman, a futurist and consultant, responded, “AI is powerful and has a huge impact, but it’s only a tool like gunpowder, electricity or aviation.
Black Lives Matter and other social justice movements must ‘shame’ and force profit-focused companies to delve into the inherently biased data and information they’re feeding the AI systems – the bots and robots – and try to keep those biased ways of thinking to a minimum.
I also hope that AI will somehow ease transportation, education and health care inequities.” Ilana Schoenfeld, an expert in designing online education and knowledge-sharing systems, said, “I am frightened and at the same time excited about the possibilities of the use of AI applications in the lives of more and more people.
In considering the fuller range of the ‘goods’ and ‘bads’ of artificial intelligence, think of the implications of Masayoshi Son’s warning that: ‘Supersmart robots will outnumber humans, and more than a trillion objects will be connected to the internet within three decades.’ Researchers are creating systems that are increasingly able to teach themselves and use their new and expanding ability to improve and evolve.
“The work on technological breakthroughs such as quantum computers capable of operating at speeds that are multiple orders of magnitude beyond even the fastest current computers is still at a relatively early stage and will take time to develop beyond the laboratory context.
If scientists are successful in achieving a reliable quantum computer system, even the best exascale system will pale beside the reduced size and exponentially expanded capacity of these new machines.
When this occurs in the commercialized context, predictions about what will happen to humans and their societies are ‘off the board.’” An expert in the regulation of risk and the roles of politics within science and science within politics observed, “In my work, I use cost-benefit analysis.
I doubt there will be much hesitancy about grabbing AI as the ‘neutral, objective, fast, cheap’ way to avoid all those messy human-type complications, such as justice, empathy, etc.” Neil Davies, co-founder of Predictable Network Solutions and a pioneer of the committee that oversaw the UK’s initial networking developments, commented, “Machine learning (I refuse to call it AI, as the prerequisite intelligence behind such systems is definitely not artificial) is fundamentally about transforming a real-world issue into a numerical value system, the processing (and decisions) being performed entirely in that numerical system.
The UK government’s track record – universal credit, Windrush, EU settled status, etc. (other countries have their own examples, too) – offers examples of value-based assessment processes in which the notion of assurance against some ethical framework is absent.
… “A cautionary tale: In the mathematics that underpins all modelling of this kind (category theory), there are the notions of ‘infidelity’ and ‘junk.’ Infidelity is the failure to capture the ‘real world’ well enough to even have the appropriate values (and structure of values) in the evaluatable model;
This requires reconciling AI design principles like scalability and automation with individual and community values.” Jeff Gulati, professor of political science at Bentley University, responded, “It seems that more AI and the data coming out could be useful in increasing public safety and national security.
They noted that the public is unable to understand how the systems are built, they are not informed as to their impact and they are unable to challenge firms that try to invoke ethics in a public relations context but are not truly committed to ethics.
Joseph Turow, professor of communication at the University of Pennsylvania, wrote, “Terms such as ‘transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and nonmaleficence, freedom, trust, sustainability and dignity’ can have many definitions, so that companies (and governments) can say they espouse one or another term but then implement it algorithmically in ways that many outsiders would not find satisfactory.
My concern is that companies will define ‘ethical’ in ways that best match their interests, often with vague precepts that sound good from a PR standpoint but, when integrated into code, allow their algorithms to proceed in ways that do not constrain them from creating products that ‘work’ in a pragmatic sense.” Charlie Kaufman, a security architect with Dell EMC, said, “There may be ethical guidelines imposed on AI-based systems by legal systems in 2030, but they will have little effect – just as privacy principles have little effect today.
Let’s try to make that be a good thing!” Simeon Yates, a professor expert in digital culture and personal interaction at the University of Liverpool and the research lead for the UK government’s Digital Culture team, predicted, “Until we bring in ‘ethical-by-design’ (responsible innovation) principles to ICT [information and communications technologies] and AI/machine learning design – like attempts to create ‘secure-by-design’ systems to fight cybercrime – the majority of AI systems will remain biased and unethical in principle.
The presentation of such solutions as bias-free or more rational or often ‘cleverer’ as they are based on ‘cold computation,’ not ‘emotive human thinking,’ is itself a false and an unethical claim.” Colin Allen, a cognitive scientist and philosopher who has studied and written about AI ethics, wrote, “Corporate and government statements of ethics are often broad and nonspecific and thus vague with respect to what specifically is disallowed.
But corporations have a long history of hiding or obfuscating their true intent (it’s partly required to stay competitive, not to let everyone else know what you are doing) as well as engaging actively in public disinformation campaigns.
We cannot depend on technology companies to self-regulate, as there are too many financial incentives to employ AI systems in ways that disadvantage people or are unethical.” Jillian York, director of international freedom of expression for the Electronic Frontier Foundation, said, “There is absolutely no question that AI will be used in questionable ways.
I don’t see the positive potential, just another ethical morass, because the people running the show have no desire to build technology to benefit the 99%.” David Mussington, a senior fellow at CIGI and professor and director at the Center for Public Policy and Private Enterprise at the University of Maryland, predicted, “Most AI systems deployed by 2030 will be owned and developed in the private sector, both in the U.S. and elsewhere in the world.
As tool sets for AI development continue to empower small research groups and individuals (datasets, software-development frameworks and open-source algorithms), how is a government going to keep up – let alone maintain awareness – of AI progress?
I think that the answers to most of these questions are in the negative.” Giacomo Mazzone, head of institutional relations for the European Broadcasting Union and Eurovision, observed, “Nobody could realistically predict how ethics for AI will evolve, despite all of the efforts deployed by the UN secretary general, the UNESCO director general and many others.
Of course, many governments already do not support human rights principles, considering the preservation of the existing regime to be a priority more important than individual citizens’ rights.” Rob Frieden, a professor of telecommunications law at Penn State who previously worked with Motorola and has held senior policy positions at the FCC and the NTIA, said, “I cannot see a future scenario where governments can protect citizens from the incentives of stakeholders to violate privacy and fair-minded consumer protections.
I expect many large technology companies will make an effort to hire professional ethicists to audit their work, and that we may see companies that differentiate themselves through more ethical approaches to their work.” Ebenezer Baldwin Bowles, an advocate/activist, commented, “Altruism on the part of the designers of AI is a myth of corporate propaganda.
So, in an environment where ethically questionable behavior has been allowed or even glorified in areas such as finance, corporate governance, government itself, pharmaceuticals, education and policing, why all of a sudden are we supposed to believe that AI developers will behave in an ethical fashion?
It’s basic institutional social scientific insight.” Christine Boese, a consultant and independent scholar, wrote, “What gives me the most hope is that, by bringing together ethical AI with transparent UX, we can find ways to open the biases of perception being programmed into the black boxes, most often, not malevolently, but just because all perception is limited and biased and part of the laws of unintended consequences.
None of us have the agency to be the engines able to drive this bus, and yet the bus is being driven by all of us, collectively.” Mireille Hildebrandt, expert in cultural anthropology and the law and editor of “Law, Human Agency and Autonomic Computing,” commented, “Considering the economic incentives, we should not expect ‘ethical AI,’ unless whatever one believes to be ethical coincides with shareholder value.
I am not optimistic in the face of the enormous push of technology companies to continue taking advantage of the end-user product, an approach that is firmly supported by undemocratic governments or by those with institutions too weak to train and defend citizens against the social implications of the penetration of digital platforms.
This transformed, with the help of other technology giants, users into end-user products and the agents of their own marketing … This evolution is a threat with important repercussions in the nonvirtual world, including the weakening of the democratic foundations of our societies.
The most important step is to declare a digital emergency that motivates massive education programs that insert citizens in working to overcome the ethical challenges, identifying the potentialities of and risks for the global knowledge society and emphasizing information literacy.” Bill Woodcock, executive director at Packet Clearing House, observed, “AI is already being used principally for purposes that benefit neither the public nor anyone but a tiny handful of individuals.
Until regulators address the root issues – the automated exploitation of human psychological weaknesses – things aren’t going to get better.” Jonathan Kolber, a member of the TechCast Global panel of forecasters and author of a book about the threats of automation, commented, “I expect that, by 2030, most AIs will still primarily serve the interests of their owners, while paying lip service to the public good.
Social media platforms have long operated in a contested ethical space – between the ethics of ‘free speech’ in the public commons versus limitations on speech to ensure civil society.” Rosalie Day, policy leader and consultancy owner specializing in system approaches to data ethics, compliance and trust, observed, “In this individualistic and greed-is-still-good American society, there exist few incentives for ethical AI.
For responsible dialogue to occur, and to apply critical thinking about the risks versus the benefits, society in general needs to be data literate.” Michael Zimmer, director of data science and associate professor in the department of computer science at Marquette University, said, “While there has certainly been increased attention to applying broader ethical principles and duties to the development of AI, I feel the market pressures are such that companies will continue to deploy narrow AI over the next decade with only a passing attentiveness to ethics.
Yes, many companies are starting to hire ‘ethics officers’ and engage in other ways to bring ethics into the fold, but we’re still very early in the ability to truly integrate this kind of framework into product development and business decision processes.
We’re at the very start of this process with AI ethics, and it will take more than 10 years to realize.” David Robertson, professor and chair of political science at the University of Missouri, St. Louis, wrote, “A large share of AI administration will take place in private enterprises and in public or nonprofit agencies with an incentive to use AI for gain.
In some cases, transparency will suffer, with tragic consequences.” Dmitri Williams, a communications professor at the University of Southern California and expert in technology and society, commented, “Companies are literally bound by law to maximize profits, so to expect them to institute ethical practices is illogical.
So, despite some regulatory capture, I do expect AI to improve quality of life in some places.” Daniel Castro, vice president at the Information Technology and Innovation Foundation, noted, “The question should be: ‘Will companies and governments be ethical in the next decade?’ If they are not ethical, there will be no ‘ethical AI.’ If they are ethical, then they will pursue ethical uses of AI, much like they would with any other technology or tool.
We see evidence of racist and discriminatory mechanisms embedded in systems that will negatively impact large swaths of our population.” Art Brodsky, communications consultant and former vice president of communications for Public Knowledge, observed, “Given the record of tech companies and the government, AI like other things will be used unethically.
I am [also] concerned that AI will become a new form of [an] arms race among world powers and that AI will be used to suppress societies and employed in terrorism.” Jon Stine, executive director of the Open Voice Network, setting standards for AI-enabled vocal assistance, said, “What most concerns me: The cultural divide between technologists of engineering mindsets (asking what is possible) and technologists/ethicists of philosophical mindsets (asking what is good and right).
Movements toward self-policing have been and will likely continue to be toothless, and even frameworks like GDPR and CCPA don’t meaningfully grapple with fairness and transparency in AI systems.” Andre Popov, a principal software engineer for a large technology company, wrote, “Leaving aside the question of what ‘artificial intelligence’ means, it is difficult to discuss this question.
there is little chance we’ll do better with ‘AI.’” “Just as there is currently little incentive to avoid the expansion of surveillance and punitive technological infrastructures around the world, there is little incentive for companies to meaningfully grapple with bias and opacity in AI.
Until there is a major incident, I don’t see global governance bodies such as the UN or World Bank putting into place any ethical policy with teeth in place.” Rich Ling, professor of media technology at Nanyang Technological University, Singapore, responded, “There is the danger that, for example, capitalist interests will work out the application of AI so as to benefit their position.
But as long as we’re a world that elevates madmen and warlords to positions of power, its negative use will be prioritized.” Benjamin Shestakofsky, assistant professor of sociology at the University of Pennsylvania, commented, “It is likely that ‘ethical’ frameworks will increasingly be applied to the production of AI systems over the next decade.
Following ‘ethical’ guidelines will help tech companies shield themselves from lawsuits without forcing them to develop technologies that truly prioritize justice and the public good over profits.” Warren Yoder, longtime director of the Public Policy Center of Mississippi, now an executive coach, responded, “Widespread adoption of real, consequential ethical systems that go beyond window dressing will not happen without a fundamental change in the ownership structure of big tech.
House Judiciary – antitrust) moving into positions of influence, but the next 12 months will tell us much about what is possible in the near future.” Ben Grosser, associate professor of new media at the University of Illinois-Urbana-Champaign, said, “As long as the organizations that drive AI research and deployment are private corporations whose business models are dependent on the gathering, analysis and action from personal data, then AIs will not trend toward ethics.
We have already seen how this plays out (for example, with the use of data analysis and targeted advertising to manipulate the U.S. and UK electorate in 2016), and it will only get worse as increasing amounts of human activity move online.” Jeanne Dietsch, New Hampshire senator and former CEO of MobileRobots Inc., commented, “The problem is that AI will be used primarily to increase sales of products and services.
The downside to the above is that it is creating, and will continue to create, echo chambers that magnify ignorance and misinformation.” Patrick Larvie, global lead for the workplace user experience team at one of the world’s largest technology companies, observed, “I hope I’m wrong, but the history of the internet so far indicates that any rules around the use of artificial intelligence may be written to benefit private entities wishing to commercially exploit AI rather than the consumers such companies would serve.
But there are few (if any) rewards for doing that ethically.” Holmes Wilson, co-director of Fight for the Future, said, “Even before we figure out general artificial intelligence, AI systems will make the imposition of mass surveillance and physical force extremely cheap and effective for anyone with a large enough budget, mostly nation-states.
It’s really, really dangerous.” Susan Price, user-experience pioneer and strategist and founder of Firecat Studio, wrote, “I don’t believe that governments and regulatory agencies are poised to understand the implications of AI for ethics and consumer or voter protection.
Without strong regulation, we can’t correct that imbalance, and the processes designed to protect U.S. citizens from exploitation through elected leaders are similarly subverted by funds from these same large companies.” Craig Spiezle, managing director and trust strategist for Agelight, and chair emeritus for the Online Trust Alliance, said, “Look no further than data privacy and other related issues such as net neutrality.
Many of these same leaders have a major play in AI, and I fear they will continue to act in their own self-interests.” Sam Punnett, futurist and retired owner of FAD Research, commented, “System and application design is usually mandated by a business case, not by ethical considerations.
The most concerning applications of AI systems are those being employed for surveillance and societal control.” An ethics expert who served as an advisor on the UK’s report on “AI in Health care” responded, “I don’t think the tech companies understand ethics at all.
In a thought I saw attributed to Hannah Arendt recently, though I cannot find the source, ‘It is not that behaviourism is true, it is more that it might become true: That is the problem.’ It would be racist to say that in some parts of the world AI developers care less about ethics than in others;
But underlying all that is that the machine learning models used are antithetical to humane ethics in their mode of operation.” Nathalie Maréchal, senior research analyst at Ranking Digital Rights, observed, “Until the development and use of AI systems are grounded in an international human rights framework, and until governments regulate AI following human rights principles and develop a comprehensive system for mandating human rights impact assessments, auditing systems to ensure they work as intended, and holding violating entities to account, ‘AI for good’ will continue to be an empty slogan.” Mark Maben, a general manager at Seton Hall University, wrote, “It is simply not in the DNA of our current economic and political system to put the public good first.
This generation may prove to be the difference makers on whether we get AI that is primarily guided by ethical principles focused on the public good.” Arthur Bushkin, writer, philanthropist and social activist, said, “I worry that AI will not be driven by ethics, but rather by technological efficiency and other factors.” Dharmendra K.
They will not consider the needs of emerging economies or local communities in the developing world.” Garth Graham, a longtime leader of Telecommunities Canada, said, “The drive in governance worldwide to eradicate the public good in favour of market-based approaches is inexorable.
For example, existing Smart City initiatives are quite willing to outsource the design and operation of complex adaptive systems that learn as they operate civic functions, not recognizing that the operation of such systems is replacing the functions of governance.” A
share of these experts note that AI applications designed with little or no attention to ethical considerations are already deeply embedded across many aspects of human activity, and they are generally invisible to the people they affect.
Much of that AI will not be visible to the public – it will be employed by health insurance companies that are again free to price-discriminate based on preexisting conditions, by employers looking for employees who won’t cause trouble, by others who will want to nip any unionization efforts in the bud, by election campaigns targeting narrow subgroups.” Jeff Johnson, a professor of computer science, University of San Francisco, who previously worked at Xerox, HP Labs and Sun Microsystems, responded, “The question asks about ‘most AI systems.’ Many new applications of AI will be developed to improve business operations.
Yet, we have major problems that can’t be solved without historical grounding, functioning societies, collaboration, artistic inspiration and many other things that suffer from overfocusing on STEM or AI.” Steve Jones, professor of communication at the University of Illinois at Chicago and editor of New Media and Society, commented, “We’ll have more discussion, more debate, more principles, but it’s hard to imagine that there’ll be – in the U.S. case – a will among politicians and policymakers to establish and enforce laws based on ethical principles concerning AI.
I’d expect we’ll do the same in this case.” Andy Opel, professor of communications at Florida State University, said, “Because AI is likely to gain access to a widening gyre of personal and societal data, constraining that data to serve a narrow economic or political interest will be difficult.” Doug Schepers, a longtime expert in web technologies and founder of Fizz Studio, observed, “As today, there will be a range of deliberately ethical computing, poor-quality inadvertent unethical computing and deliberately unethical computing using AI.
My hope is that laws will rein this in.” Jay Owens, research director at pulsarplatform.com and author of HautePop, said, “Computer science education – and Silicon Valley ideology overall – focuses on ‘what can be done’ (the technical question) without much consideration of ‘should it be done’ (a social and political question).
Otherwise, we suffer from what is said by Cathy O’Neil in ‘Weapons of Math Destruction.’ Unsupervised machine learning without causal analysis is irresponsible and bad.” Michael Richardson, open-source consulting engineer, responded, “In the 1980s, ‘AI’ was called ‘expert systems,’ because we recognized that it wasn’t ‘intelligent.’ In the 2010s, we called it ‘machine learning’ for the same reason.
China is at the forefront of exporting the technologies of ‘digital authoritarianism.’ Whatever important cultural caveats may be made about a more collective society finding these technologies of surveillance and control positive as they reward pro-social behavior, the clash with the foundational assumptions of democracy – including rights to privacy, freedom of expression, etc. – remains.
“For its part, the U.S. has a miserable record (at best) of attempting to regulate these technologies – starting with computer law from the 1970s that categorizes these companies as carriers, not content providers, and thus not subject to regulation that would include attention to freedom of speech issues, etc.
But 1) what I know first-hand to be successful efforts at ethics-washing by Google (e.g., attempting to hire in some of its more severe and prominent ethical critics in the academy in order to buy their silence), and 2) given its track record of cooperation with authoritarian regimes, including China, it’s hard to be optimistic here.
But even these applications are subject to important critique, e.g., under the name of ‘the algorithmization of taste’ – the reshaping of our tastes and preferences is influenced by opaque processes driven by corporate interests in maximizing our engagement and consumption, not necessarily helping us discover liberating and empowering new possibilities.
More starkly, especially if AI and machine-learning techniques remain black-boxed and unpredictable, even to those who create them (which is what AI and ML are intended to do, after all), I mostly see a very dark and nightmarish future in which more and more of our behaviors are monitored and then nudged by algorithmic processes we cannot understand and thereby contest.
The starkest current examples are in the areas of so-called ‘predictive policing’ and related efforts to replace human judgment with machine-based ‘decision-making.’ As Mireille Hildebrandt has demonstrated, when we can no longer contest the evidence presented against us in a court of law – because it is gathered and processed by algorithmic processes even its creators cannot clarify or unpack – that is the end of the modern practices of law and democracy.
It’s clearly bad enough when these technologies are used to sort out human beings in terms of their credit ratings: Relying on these technologies for judgments/decisions about who gets into what educational institution, who does and does not deserve parole, and so on seem to me to be a staggeringly nightmarish dystopian future.
But to use a different metaphor – one perhaps unfamiliar to younger generations, unfortunately – we will remain the human equivalent of Skinner pigeons in nice and comfortable Skinner cages, wired carefully to maximize desired behaviors via positive reinforcement, if not discouraging what will be defined as undesirable behaviors via negative reinforcement (including force and violence) if need be.” Adam Clayton Powell III, senior fellow at the USC Annenberg Center on Communication Leadership and Policy, observed, “By 2030, many will use ethical AI and many won’t.
AI will enable increased flight from cities into more hospitable and healthy living areas through automation of governmental services and increased transparency of skill sets to potential employers.” Mark Perkins, an information science professional active in the Internet Society, noted, “AI will be developed by corporations (with government backing) with little respect for ethics.
The example of China will be followed by other countries – development of AI by use of citizens’ data, without effective consent, to develop products not in the interest of such citizens (surveillance, population control, predictive policing, etc.).
AI will also be developed to implement differential pricing/offers, further enlarging the ‘digital divide.’ AI will be used by both governments and corporations to take nontransparent, nonaccountable decisions regarding citizens. AI will be treated as a ‘black box,’ with citizens having little – if any – understanding of how these systems function, on what basis they make decisions, etc.” Wendell Wallach, ethicist and scholar at Yale University’s Interdisciplinary Center for Bioethics, responded, “While I applaud the proliferation of ethical principles, I remain concerned about the ability of countries to put meat on the bone.
While there are signs of a possible shift in this posture, I remain skeptical while hopeful.” Pamela McCorduck, writer, consultant and author of several books, including “Machines Who Think,” wrote, “Many efforts are underway worldwide to define ethical AI, suggesting that this is already considered a grave problem worthy of intense study and legal remedy.
In the short term, only the unwillingness of Western courts to accept evidence gathered this way (ruling it inadmissible) will protect Western citizens from this kind of thing, including the ‘social scores’ the Chinese government assigns to its citizens as a consequence of what surveillance turns up.
There is also the risk that some of the large datasets fundamental to much AI decision-making – from facial recognition, to criminal sentencing, to loan applications – are critically biased and will continue to produce biased outcomes if they are used without undergoing severe audits; issues with transparency compound these problems.
Advances to medical treatment using AI run the risk of not being fairly distributed as well.” Sam Lehman-Wilzig, professor and former chair of communication at Bar-Ilan University, Israel, said, “I am optimistic because the issue is now on the national agenda – scientific, academic and even political/legislative.
I want to believe that scientists and engineers are somewhat more open to ‘ethical’ considerations than the usual ‘businesspeople.’ The major concern is what other (nondemocratic) countries might be doing – and whether we should be involved in such an ‘arms race,’ e.g., AI-automated weaponry.
Thus, I foresee a move to international treaties dealing with the most worrisome aspects of ‘AI ethics.’” An economist who works in government responded, “Ethical principles will be developed and applied in democratic countries by 2030, focusing on the public good, global competition and cyber breaches.
This control and the impact of cybercrimes will be of great concern, and innovation will intrigue.” Ian Peter, a pioneering internet rights activist, said, “The biggest threats we face are weaponisation of AI and development of AI being restricted within geopolitical alliances.
Though mainstream AI applications will include ethical considerations, a large amount of AI will be made for profit and be applied in business systems, not visible to the users.” Marita Prandoni, linguist, freelance writer, editor, translator and research associate with the Shape of History group, predicted, “Ethical uses of AI will dominate, but it will be a constant struggle against disruptive bots and international efforts to undermine nations.
What excites me is that advertisers are rejecting platforms that allow for biased and dangerous hate speech and that increasingly there are economic drivers (i.e., corporate powers) that take the side of social justice.” Gus Hosein, executive director of Privacy International, observed, “Unless AI becomes a competition problem and gets dominated by huge American and Chinese companies, then the chances of ethical AI are low, which is a horrible reality.