
How Do We Align Artificial Intelligence with Human Values?

Javed, September 8, 2017 at 9:41 am: With any new birth of life, there are always “birth pangs as painful as they could be …” associated with it. So a new technology does give us hope of solving current problems, and sure enough there are painful “birth pangs” associated with it.

There is, I fear, the potential for this discussion to be stalled by speculative arguments regarding the potential interpretation of individual words, phrases, and even of the AI Principles themselves, as was demonstrated in the article that requested public comment.

As I began considering contributing I found myself contemplating whether or not I, a business professional with what I would consider only a cursory (though increasing) understanding of artificial intelligence, would be able to provide a meaningful or useful contribution to this dialogue.

Even though I am well aware of the fact that AI currently does, or will in the future, impact and influence nearly every facet of our global society & of our lives, I question whether or not to join this dialogue.

It was quite vexing to discover that instead of having answers to the questions that were asked, I ended up raising more and more questions, many of which cannot be answered without further definition of, discussion of, and/or real-life examples of the application of, the AI Principles.

For these reasons, I’d like to make a suggestion that could achieve not only the stated goal of receiving public comment on the AI Principles themselves, but could also serve to educate the general populace about AI, increase the number of individuals who are willing to engage in this dialogue, and potentially help define the implementation of the Asilomar AI Principles.

I recommend the creation of a comprehensive draft ‘companion document’ for the Asilomar AI Principles & inviting public comment on the companion document, with the intention of creating an end product that would include the following (with notations for sections that would only be included during the ‘creation/public comment’ stage of this document’s development, which would help to shape the final product):

• For example – what is intended to be/is to be included in the phrase “personal data” in Principle 13, ‘Liberty and Privacy’, as individuals, governments, courts, companies, and countries all have their own views regarding what constitutes ‘personal data’.

• During the initial creation/comments stage, this section could include space for questions & concerns regarding individual Principles to be raised, and a method for others to respond to the questions/concerns &/or provide potential solutions and/or clarifications.

• These examples would be more beneficial if they included individuals at multiple vocational levels & in multiple vocational/professional fields, as this will help educate and inform both professionals and the general populace that every person can contribute to the safeguarding & stewardship of AI development and implementation.

Professionals and individuals commenting on, discussing, & contributing to this evolving draft should be encouraged to add their own examples, as well as to provide recommendations, suggestions, ideas, and tips regarding their interpretation of how the individual in the example could proceed while honoring the AI Principles.

As a result, management at Black Mesa instructs one of their Computer Engineers to create an AI program that will allow the company to access every computer located within any Black Mesa-owned facility, no matter how remote, scan them for suspicious activity, and search the contents for ‘key words/phrases’ that have been identified to indicate illegal &/or subversive activity by a current employee/insider threat.

The Computer Engineer is instructed that the AI program must be capable of accessing the computers and performing the intended scans and searches – even if the employees utilizing them have taken extreme measures to mask or hide their activities.

Just prior to the completion of the AI program, the Computer Engineer accidentally discovers that another engineer will be modifying the AI program so that it can search any computer, owned by any facility or contained on any network, as Black Mesa intends to sell the AI program to a tyrannical island dictatorship, which intends to utilize the program to conduct searches of citizens’ computers for the express purpose of “identifying subversives”.

This AI system is required to have the ability to, among other things, ascertain when the occupants of the compound are in imminent danger of harm by an outside force but are incapacitated (such as following an initial chemical attack that renders the occupants unconscious), to assume command and control of the compound’s array of security and defensive systems, & to execute…

The accounts payable intern who, during a live, on-site usage test of the not-yet-armed AI system, is erroneously identified by the AI system as an ‘enemy combatant which must be neutralized’, followed by a display of code indicating that the AI system’s chosen action would have been to activate the facility security system designed to deliver a low-voltage electric ‘warning shock’ to would-be intruders when they touch any part of the all-metal entry door – a system which had been discovered to be severely malfunctioning, & which, had it been armed & under the command of the AI system, would have resulted in the intern receiving an electric shock at levels no human could possibly survive.

The AI system development team has not been able to identify why the intern was identified at the elevated level of an ‘enemy combatant’, but they are under extreme pressure from Aperture’s leadership to provide a prototype of the AI system to the agency that funded the project, with the agency intending to immediately begin live testing of the prototype.

What could each of the individuals do to forward the goal of preventing the AI system from malfunctioning in such a way that results in unnecessary harm to humans or other living beings, either within the facility or outside of it?

(For example, industry-wide, governmental, through treaties/international agreements, etc.) This draft companion document would serve a number of purposes, and facilitate the achievement of a number of implied and/or stated goals of the Future of Life Institute, as well as of the AI professional community as a whole, including:

• Empower a much wider array of individuals from varying professional fields, educational backgrounds, & societal standings to participate in the discussion, & to do so in an informed and meaningful way.

• Provide a working, referenceable document for professionals entering the field &/or working in the field of AI throughout the globe (including professionals who may be unable to attend conferences, or even communicate with the larger community of AI professionals).

• Provide actionable methods through which to implement the AI Principles.

After I completed this response, I read about how the Asilomar AI Principles were developed, and was surprised to learn that it was through a similar method as defined above, but which had occurred in person at the BAI 2017 Conference.

In essence, my suggestion is to provide a similar context, via a written/computerized method, through which the general populace can explore, discuss, better understand, & provide meaningful, informed contributions to this dialogue.

I believe that providing this context, & the opportunity to ask questions or request clarification, will increase the probability that FLI will receive input that reflects the diversity of experiences, opinions, cultures, and viewpoints that you are seeking.

And while I am now cognizant of the fact that, technically speaking, Dragon is categorized as a “non-sentient artificial intelligence that is focused on one narrow task” (utilization of computers completely by voice), there have been many, many times when the program has behaved in ways that have caused both myself and those around me to sincerely question whether or not the Dragon program has somehow attained sentience – and a wicked sense of humor to boot.

Your examples are terrific, and I think compiling more like them would, in and of itself, be a useful service to the community – people may disagree on what to do about such situations, but it’s hard to disagree that they *might* arise, and that it’s worth considering in advance how we would deal with them.

And while, when a museum hires night guards, no one asks how many people will accidentally be shot by a coworker, when an AI might be put in a scenario like you described, someone should ask ‘what if an AI doesn’t recognize someone it should?’

(One implementation I can imagine of achieving the principle discussed here is to build a model of reality, run a simulation, and see if anyone dies or is injured.) How should these ethics apply to inaction?
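For what it’s worth, here is a minimal, purely hypothetical sketch of that idea, tied to the malfunctioning-door scenario from the examples above: simulate a proposed action in a toy world model and veto it if the prediction includes harm to a person. Every name, threshold, and number here is invented for illustration; it is not anyone’s actual safety system.

```python
# A toy sketch of "simulate first, veto if anyone is predicted harmed".
# All names and numbers are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class WorldState:
    door_voltage: float          # volts delivered by the entry-door "warning shock"
    person_touching_door: bool

HARMFUL_VOLTAGE = 50.0           # illustrative threshold, not a real safety figure

def simulate(state: WorldState, action: str) -> WorldState:
    """Toy world model: predicts the next state given a proposed action."""
    if action == "disable_door_shock":
        return WorldState(door_voltage=0.0,
                          person_touching_door=state.person_touching_door)
    # "arm_door_shock" and unknown actions leave the predicted state unchanged here.
    return state

def predicts_harm(state: WorldState) -> bool:
    return state.person_touching_door and state.door_voltage > HARMFUL_VOLTAGE

def safe_to_execute(state: WorldState, action: str) -> bool:
    """Veto any action whose simulated outcome harms a person."""
    return not predicts_harm(simulate(state, action))

# The malfunctioning door from the example: the shock level is far too high and
# someone is touching the door, so the simulation-based check vetoes arming it.
now = WorldState(door_voltage=2000.0, person_touching_door=True)
print(safe_to_execute(now, "arm_door_shock"))      # False
print(safe_to_execute(now, "disable_door_shock"))  # True
```

Of course, such a check is only as good as the world model it runs on, which is exactly the limitation raised next.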

Instead, the car caught the edge of the bridge and flipped… “If you were to animate the sequence, you wouldn’t even think to make that part of what it does.” The effect was visually awesome, but it highlights the fact that there are a lot of limitations to what simulations can predict.

(This AI system is required to have the ability to, among other things, ascertain when the occupants of the compound are in imminent danger of harm by an outside force but are incapacitated (such as following an initial chemical attack that renders the occupants unconscious), to assume command and control of the compound’s array of security and defensive systems, & to execute…) Here’s what I intended the rest of that sentence to be: the compound’s extensive array of defensive, and offensive, protections – including everything from immediate-vicinity protections (including electric shocks to potential intruders, transmitted via the exterior wall, interior quarantining via sealing various areas, etc.) to longer-range, more lethal protections (including longer-range weapons mounted on exterior compound walls).

Convolution’s Q: And while, when a museum hires night guards, no one asks how many people will accidentally be shot by a coworker, when an AI might be put in a scenario like you described, someone should ask ‘what if an AI doesn’t recognize someone it should?’

Even if the HR rep hadn’t mis-keyed the intern’s termination date in scenario 2, even if it had been an electronic or automated data entry, a human had to be involved with the data at some point in the process because that data had to come from somewhere.

There’s also the possibility of programming errors (human), the potential for malicious code being embedded (human), corporate/government spies sabotaging the code for economic/power advantages (human) – the list is endless.

Even if there is zero human error/sabotage/espionage/etc., there are still other concerns – power failures/surges, hardware failures, hardware becoming obsolete, computer programming updates/patches that glitch (for whatever reason), the AI becoming outdated/outmoded & lack of funds to update hardware/software – again, the list is endless.

I had been concerned that this type of dystopian future would be set in motion, but I had honestly believed it would be because AI progressed far faster than predicted, not because power-hungry people would take control of what I had previously believed to be a democracy with actual, actionable protections in place to prevent such an occurrence from happening.

Between this & other current events [such as laws attacking freedoms from every angle (including my very right to exist)] my efforts to create example 3 have been derailed, as my time has become consumed with safety, security, & the immediate future.

Varon, M.Ed, Founder, March 2, 2017 at 9:32 am: Knowledge Integration aligns the value requirements for consistently beneficial AI. In the interest of aligning values between AI devices and humans, it is imperative to know the values essential to the preservation of humanity.

I fully agree with Roman Yampolskiy: “It is very difficult to encode human values in a programming language, but the problem is made more difficult by the fact that we as humanity do not agree on common values, and even parts we do agree on change with time.” Maybe 21st-century realities need a universal approach to human values that is not influenced by ideological, religious, ethnic, racial, etc. considerations.

The Concept of Civilizational Values

If Nobel Prize winner Albert Szent-Gyorgyi was precise in defining the brain as “… another organ of survival, like fangs, or claws” that “does not search for truth, but for advantage, and … tries to make us accept as truth what is only self-interest”, allowing our thoughts to be dominated by our desires, then its decisions on where we go and what we do determine the interplay of both the existentialist and behavioral sides of our existence through a very simple command: Chase Values!

As an example, let’s follow 15 minutes of your routine morning: your croissant for breakfast incorporates pieces of civilizational values such as farmers cultivating land, harvesting crops, milling, transportation, baking, etc.;

Just think about it: If we take for granted that Happiness is the momentary measure of self-interest, then one million people or teams of people, who lived in different centuries of human history, have worked hard to make you happy and feel comfortable in that 15 minutes of your routine morning!

No matter if a finalized civilizational value is a direct outcome of human activity, or was made by a robot that was made by a robotic plant that was designed by humans, ultimately only humans can be the source of any civilizational value.

The greater the number of healthy and well-educated people who enjoy high living standards and have successful careers, the higher the total output of civilizational values generated globally.

Today, Big Data processing practically enables analysing market information about which of us (7.4 billion people on the planet) likes what, and that will have two major cognitive (and other) consequences. First, Artificial Intelligence Deep Learning is already in a position to peel back the layers of each complex product conglomerate of civilizational values and to trace the frequency and repeated usage of every piece of civilizational value ever generated in human history;

And second, by composing a trustworthy algorithm to attach each piece of civilizational value, as indexed by the above method, to its creator, we can design, for the first time in human history, a precisely calculated quantitative assessment of how the great minds of humankind – both from history and among our contemporaries – have contributed to the long-term wellbeing of our human race.

To be rid of mindless tasks (aka the daily grind of a job) and have the freedom to create art: fine art, music, theater, literature, philosophy, human expression, etc., is really the ultimate goal.

Knowledge integrity built into robots makes evasion or denial of individuality values (independence, responsibility, purpose, and productivity) as well as vitality values (happiness, well-being, wisdom, and wealth) IMPOSSIBLE.

Dawn D, September 5, 2017 at 8:46 pm: It is unclear what you are seeking to have discussed, as the Asilomar AI Principles have already presented the approach you mention, complete with extensively debated, thoroughly delineated, and well-defined values/principles.

I, for one, recheck this page regularly expressly hoping to see someone sharing ideas regarding potential methods of AI Principle implementation within the context of the current/evolving local/regional/global political, social, economic, legal, and other environs that impact any such implementation.

Dawn D, September 5, 2017 at 9:16 pm: AIs are being created every day, perfected every day, growing larger and more encompassing every day – by everyone from people in garages with electronics arrays built on a shoestring budget, to large multinational corporations’ enormous computer divisions, to entire companies dedicated exclusively to creating AIs, to governments with gargantuan budgets, the highest-tech, most cutting-edge equipment, and an unimaginable level of staffing dedicated explicitly to creating AI.

The AIs currently in existence and the AIs currently being created have been and continue to be created based upon the values of their creators, and for the express purpose of fulfilling the creators’/controllers’/funders’ needs and purposes – be they self-serving, entity-serving, altruistic, nefarious, or somewhere between.

Some of those creating these AIs value free speech, some don’t, some seek world domination/economic domination, some don’t, some value human lives, some value only select human lives, & some value no lives, save their own.

This clear, concise set of values is helpful as a guideline & standard against which external and internal individuals/entities/stakeholders/humanity can compare existing AIs and AIs currently being created, as a reference for those creating AIs (should they so choose), and as an aid in the creation of contingency plans, laws, etc., which are/will be intended to address AIs that are created/released that are/will be harmful to humanity as a whole, or that have been/will be designed to be harmful to specific groups of humans, if not to humanity as a whole.

And that is the crux of this webpage’s questions and the discussions being sought – the very ones that brought me to this webpage, which elicited (and continue to elicit) grave concern, which intrigue me beyond belief, and which continue to vex me to no end – HOW ON EARTH can these AI Principles be effectively implemented or applied, given the numerous external factors that must be considered – i.e.

The biggest stumbling block I can see is the idea that science can provide answers to a question that isn’t scientific in nature. It is much more fundamental to human existence: the domains of art, music, literature, religion, customs, and even faith.

Concerns of an Artificial Intelligence Pioneer

Natalie Wolchover, Senior Writer, April 21, 2015. The computer scientist Stuart Russell wants to ensure that our increasingly intelligent machines remain aligned with human values.

In January, the British-American computer scientist Stuart Russell drafted and became the first signatory of an open letter calling for researchers to look beyond the goal of merely making artificial intelligence more powerful.

“Our AI systems must do what we want them to do.” Thousands of people have since signed the letter, including leading artificial intelligence researchers at Google, Facebook, Microsoft and other industry hubs along with top computer scientists, physicists and philosophers around the world.

By the end of March, about 300 research groups had applied to pursue new research into “keeping artificial intelligence beneficial” with funds contributed by the letter’s 37th signatory, the inventor-entrepreneur Elon Musk.

In a bombshell result reported recently in Nature, a simulated network of artificial neurons learned to play Atari video games better than humans in a matter of hours given only data representing the screen and the goal of increasing the score at the top — but no preprogrammed knowledge of aliens, bullets, left, right, up or down.

I think one answer is a technique called “inverse reinforcement learning.” Ordinary reinforcement learning is a process where you are given rewards and punishments as you behave, and your goal is to figure out the behavior that will get you the most rewards.

For example, your domestic robot sees you crawl out of bed in the morning and grind up some brown round things in a very noisy machine and do some complicated thing with steam and hot water and milk and so on, and then you seem to be happy.
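For readers who want to see the contrast in code, here is a minimal, hypothetical sketch of the idea behind inverse reinforcement learning – observe behaviour and fit a reward that explains it – rather than anything Russell or his group actually use. The chain world, demonstrations, horizon, and learning rate are all invented for illustration.

```python
# Toy maximum-entropy-style inverse RL on a 5-state chain:
# given demonstrations, infer a per-state reward that explains them.
import numpy as np

N_STATES, N_ACTIONS, HORIZON = 5, 2, 10   # chain world; actions: 0 = left, 1 = right
GAMMA = 0.95

def step(s, a):
    """Deterministic transition on the chain."""
    return max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)

def soft_value_iteration(reward):
    """Soft (max-entropy) value iteration; returns a stochastic policy pi(a|s)."""
    V = np.zeros(N_STATES)
    for _ in range(100):
        Q = np.array([[reward[s] + GAMMA * V[step(s, a)] for a in range(N_ACTIONS)]
                      for s in range(N_STATES)])
        V = np.log(np.exp(Q).sum(axis=1))
    policy = np.exp(Q - V[:, None])          # rows already sum to 1
    return policy / policy.sum(axis=1, keepdims=True)

def expected_visits(policy, start=0):
    """Expected state-visitation counts over the horizon under `policy`."""
    d = np.zeros(N_STATES); d[start] = 1.0
    visits = d.copy()
    for _ in range(HORIZON - 1):
        nxt = np.zeros(N_STATES)
        for s in range(N_STATES):
            for a in range(N_ACTIONS):
                nxt[step(s, a)] += d[s] * policy[s, a]
        d = nxt
        visits += d
    return visits

# "Demonstrations": the observed agent always walks to the rightmost state and stays.
demos = [[0, 1, 2, 3, 4, 4, 4, 4, 4, 4]] * 3
expert_counts = np.zeros(N_STATES)
for traj in demos:
    for s in traj:
        expert_counts[s] += 1
expert_counts /= len(demos)

theta = np.zeros(N_STATES)                  # reward weights, one per state
for _ in range(200):                        # gradient ascent: match visitation counts
    grad = expert_counts - expected_visits(soft_value_iteration(theta))
    theta += 0.05 * grad

print("Inferred reward per state:", np.round(theta, 2))  # highest at state 4
```

In ordinary reinforcement learning the reward is given and the behaviour is learned; here the loop is inverted, which is the point of Russell’s coffee-making example.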

And then when I was applying to grad school I applied to do theoretical physics at Oxford and Cambridge, and I applied to do computer science at MIT, Carnegie Mellon and Stanford, not realizing that I’d missed all the deadlines for applications to the U.S. Fortunately Stanford waived the deadline, so I went to Stanford.

Instead they look ahead a dozen moves into the future and make a guess about how useful those states are, and then they choose a move that they hope leads to one of the good states.
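As an illustration of that “look ahead a few moves, then guess how good the resulting positions are” strategy, here is a small, hypothetical sketch of depth-limited minimax with a heuristic evaluation on tic-tac-toe. The heuristic and the depth are arbitrary illustrative choices, not anything described in the interview.

```python
# Depth-limited minimax: search a few plies, then fall back on a crude guess.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def heuristic(board, player):
    """Crude guess at how good a non-terminal position is: count open lines."""
    opp = 'O' if player == 'X' else 'X'
    mine = sum(1 for line in LINES if all(board[i] in (player, '.') for i in line))
    theirs = sum(1 for line in LINES if all(board[i] in (opp, '.') for i in line))
    return mine - theirs

def minimax(board, player, depth):
    """Return (value, move) from `player`'s perspective, searching `depth` plies."""
    w = winner(board)
    if w == player: return 10, None
    if w is not None: return -10, None
    moves = [i for i, c in enumerate(board) if c == '.']
    if not moves: return 0, None                          # draw
    if depth == 0: return heuristic(board, player), None  # stop searching and guess
    opp = 'O' if player == 'X' else 'X'
    best_val, best_move = -float('inf'), None
    for m in moves:
        val, _ = minimax(board[:m] + player + board[m+1:], opp, depth - 1)
        val = -val                                        # opponent's gain is our loss
        if val > best_val:
            best_val, best_move = val, m
    return best_val, best_move

board = 'X.O' '.X.' '...'
value, move = minimax(board, 'O', depth=3)   # O looks only 3 plies ahead
print("O's move:", move, "estimated value:", value)  # O must block the 0-4-8 diagonal
```

A full game tree would be exact; the depth limit plus the heuristic is the “guess about how useful those states are” that Russell describes.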

Another thing that’s really essential is to think about the decision problem at multiple levels of abstraction, so “hierarchical decision making.” A person does roughly 20 trillion physical actions in their lifetime.

The future is spread out, with a lot of detail very close to us in time, but these big chunks where we’ve made commitments to very abstract actions, like, “get a Ph.D.,” “have children.” Are computers currently capable of hierarchical decision making?

There are some games where DQN just doesn’t get it, and the games that are difficult are the ones that require thinking many, many steps ahead in the primitive representations of actions — ones where a person would think, “Oh, what I need to do now is unlock the door,” and unlocking the door involves fetching the key, etcetera.

The basic idea of the intelligence explosion is that once machines reach a certain level of intelligence, they’ll be able to work on AI just like we do and improve their own capabilities — redesign their own hardware and so on — and their intelligence will zoom off the charts.

The most convincing argument has to do with value alignment: You build a system that’s extremely good at optimizing some utility function, but the utility function isn’t quite right.

otherwise it’s going to do pretty stupid things, like put the cat in the oven for dinner because there’s no food in the fridge and the kids are hungry.

If the machine makes these tradeoffs in ways that reveal that it just doesn’t get it — that it’s just missing some chunk of what’s obvious to humans — then you’re not going to want that thing in your house.

Then there’s the question, if we get it right such that some intelligent systems behave themselves, as you make the transition to more and more intelligent systems, does that mean you have to get better and better value functions that clean up all the loose ends, or do they still continue behaving themselves?

So, could you prove that your system is designed in such a way that it could never change the mechanism by which the score is presented to it, even though it’s within its scope of action?

With a cyber-physical system, you’ve got a bunch of bits representing an air traffic control program, and then you’ve got some real airplanes, and what you care about is that no airplanes collide.

What you would do is write a very conservative mathematical description of the physical world — airplanes can accelerate within such-and-such envelope — and your theorems would still be true in the real world as long as the real world is somewhere inside the envelope of behaviors.
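Here is a hedged sketch of what such a conservative check might look like in code, with made-up numbers that are not real air-traffic rules and not any system Russell describes: bound the worst case over every behaviour inside the envelope, so that if the check passes, separation is guaranteed for the real aircraft as long as they stay within the envelope.

```python
# Conservative "envelope" check: prove separation against worst-case accelerations.
import math

MIN_SEPARATION_KM = 9.0     # required horizontal separation (illustrative)
A_MAX_KM_S2 = 0.004         # assumed bound on relative acceleration (the envelope)

def guaranteed_min_distance(rel_pos, rel_vel, horizon_s):
    """Lower bound on the distance between two aircraft over the next
    `horizon_s` seconds, valid for ANY manoeuvre inside the envelope:
    |p(t)| >= |p(0)| - (|v(0)|*t + 0.5*a_max*t^2) by the triangle inequality.
    The bound worsens with t, so the worst case is at t = horizon_s."""
    p0 = math.hypot(*rel_pos)
    v0 = math.hypot(*rel_vel)
    return p0 - (v0 * horizon_s + 0.5 * A_MAX_KM_S2 * horizon_s ** 2)

def provably_separated(rel_pos, rel_vel, horizon_s=60.0):
    """True only if separation holds under every behaviour inside the envelope."""
    return guaranteed_min_distance(rel_pos, rel_vel, horizon_s) >= MIN_SEPARATION_KM

# Two aircraft 30 km apart, closing at 0.3 km/s: a naive single-trajectory
# simulation of their current headings would show them passing 12 km apart,
# but the conservative check refuses to certify separation.
print(provably_separated(rel_pos=(30.0, 0.0), rel_vel=(-0.3, 0.0)))  # False
```

The theorem is proved against the pessimistic bound, which is exactly why it keeps holding in the messier real world, provided the real world stays inside the envelope.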

Designing AI for human values

Title: Responsible artificial intelligence: designing AI for human values. Abstract: Artificial intelligence (AI) is increasingly affecting our lives in smaller or greater ways.

In order to ensure that systems will uphold human values, design methods are needed that incorporate ethical principles and address societal concerns.

In this paper, we explore the impact of AI in the case of the expected effects on the European labor market, and propose the accountability, responsibility and transparency (ART) design principles for the development of AI systems that are sensitive to human values.

Her research focuses on value-sensitive design of intelligent systems and multi-agent organisations, in particular on the formalisation of ethical and normative behaviours and of social interactions.

Ethics of AI @ NYU: Artificial Intelligence & Human Values

Day 2, Session 1: Artificial Intelligence & Human Values
0:00 - David Chalmers, Opening Remarks
3:30 - Stuart Russell, "Provably Beneficial AI"
37:00 - Eliezer Yudkowsky, "Difficulties of AGI Alignment...

3 principles for creating safer AI | Stuart Russell

How can we harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover? As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell...

PHILOSOPHY - René Descartes

René Descartes is perhaps the world's best-known philosopher, in large part because of his pithy statement, 'I think, therefore I am.' He stands out as an example of what intellectual...

Machine intelligence makes human morals more important | Zeynep Tufekci

Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this...

Google's Deep Mind Explained! - Self Learning A.I.


AI and Value Alignment | Jaan Tallinn

Jaan Tallinn discusses the issue of AI and value alignment at the January 2017 Asilomar conference organized by the Future of Life Institute. The Beneficial AI 2017 Conference: In our sequel...

Interactions between the AI Control Problem and the Governance Problem | Nick Bostrom

Nick Bostrom explores the likely outcomes of human-level AI and problems regarding governing AI at the January 2017 Asilomar conference organized by the Future of Life Institute. The Beneficial...

Creating Human-Level AI | Yoshua Bengio

AI pioneer Yoshua Bengio explores paths forward to human-level artificial intelligence at the January 2017 Asilomar conference organized by the Future of Life Institute. The Beneficial AI...

How artists can (finally) get paid in the digital age | Jack Conte

It's been a weird 100 years for artists and creators, says musician and entrepreneur Jack Conte. The traditional ways we've turned art into money (like record sales) have been broken by the...

Can we build AI without losing control over it? | Sam Harris

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris,...