
Self-driving cars, robots: Identifying AI 'blind spots'

The AI systems powering driverless cars are trained extensively in virtual simulations to prepare the vehicle for nearly every event on the road.

Consider a driverless car that was never trained, and more importantly lacks the sensors needed, to differentiate between distinctly different scenarios, such as large white cars and ambulances with red, flashing lights on the road.

If the car is cruising down the highway and an ambulance flicks on its sirens, the car may not know to slow down and pull over, because it does not perceive the ambulance as different from a big white car.

In a pair of papers -- presented at last year's Autonomous Agents and Multiagent Systems conference and the upcoming Association for the Advancement of Artificial Intelligence conference -- the researchers describe a model that uses human input to uncover these training 'blind spots.'

The researchers then combine the training data with the human feedback data, and use machine-learning techniques to produce a model that pinpoints situations where the system most likely needs more information about how to act correctly.
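
As a rough sketch of what such a model could look like (this is not the authors' code; the situation features and the use of scikit-learn's LogisticRegression are assumptions for illustration), one could train a classifier on simulated states labeled by whether human feedback flagged the system's action:

    # Hypothetical sketch: learn where simulation training and human feedback disagree.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row describes a driving situation; the features are invented for illustration:
    # [vehicle_size, is_white, lights_flashing, siren_on]
    sim_states = np.array([
        [0.9, 1, 0, 0],   # large white car, no siren: system acted fine
        [0.9, 1, 1, 1],   # ambulance (large, white, lights, siren): human flagged it
        [0.3, 0, 0, 0],   # small dark car: system acted fine
    ])
    human_flagged = np.array([0, 1, 0])  # 1 = a human corrected the action here

    model = LogisticRegression().fit(sim_states, human_flagged)

    # Estimate how likely a new situation is to need more information.
    new_state = np.array([[0.9, 1, 1, 1]])
    print(model.predict_proba(new_state)[0, 1])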

'At that point, the system has been given multiple contradictory signals from a human: some with a large car beside it, and it was doing fine, and one where there was an ambulance in the same exact location, but that wasn't fine.

'Because the agent is getting all these contradictory signals, the next step is compiling the information to ask, "How likely am I to make a mistake in this situation where I received these mixed signals?"'

Intelligent aggregation

The end goal is to have these ambiguous situations labeled as blind spots.

If the system performed correct actions nine times out of 10 in the ambulance situation, for instance, a simple majority vote would label that situation as safe.

In the end, the algorithm produces a type of 'heat map,' where each situation from the system's original training is assigned low-to-high probability of being a blind spot for the system.
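
A minimal sketch of that aggregation, assuming each situation is reduced to a hashable state key with a list of human acceptable/unacceptable labels (the state names are invented for illustration):

    # Hypothetical sketch: turn noisy per-situation labels into a blind-spot heat map.
    from collections import defaultdict

    # 1 = the system's action in this state was flagged as unacceptable by a human.
    feedback = [
        ("highway_large_white_vehicle", 0), ("highway_large_white_vehicle", 0),
        ("highway_large_white_vehicle", 1),   # the one ambulance encounter
        ("empty_road", 0), ("empty_road", 0),
    ]

    counts = defaultdict(lambda: [0, 0])      # state -> [ok, flagged]
    for state, flagged in feedback:
        counts[state][flagged] += 1

    # A simple majority vote would call the first state safe (2 ok vs. 1 flagged);
    # estimating a probability instead keeps the mixed signals visible.
    heat_map = {s: flagged / (ok + flagged) for s, (ok, flagged) in counts.items()}
    print(heat_map)   # {'highway_large_white_vehicle': 0.33..., 'empty_road': 0.0}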

'If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution,' Ramakrishnan says.
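
At execution time, that could look something like the loop below; this is a sketch under the assumption that the learned model exposes a per-state blind-spot probability, with the threshold and the helper functions (policy, ask_human) purely hypothetical:

    # Hypothetical sketch: defer to a human when the blind-spot risk is high.
    BLIND_SPOT_THRESHOLD = 0.5   # assumed cutoff; would be tuned in practice

    def act_safely(state, policy, blind_spot_model, ask_human):
        """Follow the learned policy unless the state looks like a blind spot."""
        if blind_spot_model(state) >= BLIND_SPOT_THRESHOLD:
            return ask_human(state)   # query a human for the acceptable action
        return policy(state)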

Artificial Intelligence Is Powerful—And Misunderstood. Here's How We Can Protect Workers

Just as Richards committed vast troves of words to memory in order to master the domain of the Scrabble board, state-of-the-art AI—or deep learning—takes in massive amounts of data from a single domain and automatically learns from the data to make specific decisions within that domain.

It can help Amazon maximize profit from recommendations or Facebook maximize minutes spent by users in its app, just as it can help banks minimize loan-default rates or an airport camera determine if a terrorist has queued up for boarding.

But the rise of AI also brings many challenges, and it’s worth taking time to sort between the genuine risks of this coming technological revolution and the misunderstandings and hype that sometimes surround the topic.

Because AI can outperform humans at routine tasks—provided the task is in one domain with a lot of data—it is technically capable of displacing hundreds of millions of white- and blue-collar jobs in the next 15 years or so.

In contrast with the U.S. and China, poorer and smaller countries will be unable to reap the economic rewards that will come with AI and less well placed to mitigate job displacement.

Facebook couldn’t resist the temptation to use AI technology to optimize usage and profit at the expense of user privacy, fostering bias and division in the process.

All of these risks require governments, businesses and technologists to work together to develop a new rule book for AI applications.

General AI requires advanced capabilities such as reasoning, conceptual learning, common sense, planning, creativity and even self-awareness and emotions, all of which remain beyond our scientific reach.

Ethics of artificial intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings.

It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).

It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to a robot's duty to serve humans, by analogy with linking human rights to human duties before society.[3]

Pamela McCorduck counters that, speaking for women and minorities, 'I'd rather take my chances with an impartial computer,' pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[14]

However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in essence, nothing more than fancy curve-fitting machines. Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases get formalized and ingrained, making them even more difficult to spot and fight against.[15]
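
The curve-fitting point is easy to make concrete: a model fit to biased historical decisions reproduces that bias for new cases. The sketch below uses synthetic data and invented feature names; it illustrates the mechanism, not any real court system:

    # Hypothetical sketch: a classifier trained on biased rulings learns the bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, n)    # a protected attribute (0 or 1)
    merit = rng.normal(size=n)       # case strength, independent of group

    # Historical rulings: driven partly by merit, partly by bias against group 1.
    ruling = ((merit - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0).astype(int)

    model = LogisticRegression().fit(np.column_stack([group, merit]), ruling)

    # Two identical cases that differ only in group membership: the learned
    # "curve" has formalized the historical bias.
    print(model.predict_proba([[0, 0.0], [1, 0.0]])[:, 1])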

'If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow', says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[29]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios 'seem potentially as important as the risks related to loss of control', but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: 'this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them'.[30]

To account for the nature of these agents, it has been suggested that certain philosophical ideas be considered, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[35]

In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[39]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions.

They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons.

In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[47]

Inevitably, this raises the question of the environment in which such robots would learn about the world, whose morality they would inherit, and whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, and so on.

Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation.

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal 'hackers'.[40]
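
The transparency claim is easy to see in code: a trained decision tree can be printed as explicit if/else rules, which a neural network's weights cannot. A minimal sketch with toy data (the feature names are invented; scikit-learn is assumed):

    # Hypothetical sketch: a decision tree's learned policy is human-readable.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy data: [lights_flashing, siren_on] -> should the car pull over?
    X = [[0, 0], [1, 0], [0, 1], [1, 1]]
    y = [0, 0, 1, 1]

    tree = DecisionTreeClassifier().fit(X, y)

    # Every decision the model can make is inspectable as an explicit rule,
    # which is the kind of predictability the transparency argument values.
    print(export_text(tree, feature_names=["lights_flashing", "siren_on"]))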

Many researchers have argued that, by way of an 'intelligence explosion' sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.[50]

However, Bostrom has also asserted that, rather than overwhelming the human race and leading to our destruction, a superintelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to 'enhance' ourselves.[52]

Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not 'common sense'.

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence.

They stated: 'This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.'[54]

The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, with the best of motives, created the system to provide medical assistance in emergencies.

In the video game series Mass Effect, this event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them.

Where Is AI Driving Us?

With chaos in the White House, worsening climate change around the globe, more wars than we can count and a wobbling economy here at home, the last thing we need is another big challenge.

AI is not only powering a metastasizing array of autonomous machines that can think, learn and even reproduce themselves, but the advanced technology of digital intelligence has also begun restructuring our economic order, social frameworks and cultural ethic.

Once the stuff of science fiction, the future is suddenly upon us, with Google starting to market a driverless taxi service, Daimler developing a line of commercial trucks that drive themselves and General Motors rolling out a car with no steering wheel or gas and brake pedals.

In graphic terms, Musk warns that profiteering humans are 'summoning the devil' by creating a new superior species of beings that will end up dominating humanity, becoming 'an immortal dictator from which we would never escape.'

Not a corporate transaction, but a literal merger: surgically implanting AI devices in human brains with 'a bunch of tiny wires' that would fuse people with superintelligence.

Populist author, public speaker and radio commentator Jim Hightower writes 'The Hightower Lowdown,' a monthly newsletter chronicling the ongoing fights by America's ordinary people against rule by plutocratic elites.

What If AI Became Self-Aware? | Alternate Reality

Artificial intelligence is without a doubt the defining scientific breakthrough of our time, making its way into nearly every technology and industry imaginable.

AI Codes its Own ‘AI Child’ - Artificial Intelligence breakthrough!

Jim Self on Artificial Intelligence

Jim Self LIVE on Artificial Intelligence. Are humans even necessary or is there more that is not seen and understood? Broadcast via Facebook on Nov. 15, 2017.

The Rise of Artificial Intelligence | Documentary HD

AI (artificial intelligence) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction.

AI Can Now Self-Reproduce—Should Humans Be Worried? | Eric Weinstein

Those among us who fear world domination at the metallic hands of super-intelligent AI have gotten a few steps ahead of themselves. We might actually be ...

ARTIFICIAL INTELLIGENCE A-Z™: LEARN HOW TO BUILD AN AI

Artificial Intelligence is reshaping your relationship with the world, and it's just getting started. Tesla's autopilot, job automation, the products you 'stumble upon' ...

Are We Approaching Robotic Consciousnesses?

Google's DeepMind AI Just Taught Itself To Walk

Google's artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance ...

AI learns to play snake using Genetic Algorithm and Deep learning

Using a neural network and a genetic algorithm, I trained an AI to play Snake.
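
In outline, that combination usually means searching the network's weights with a genetic algorithm instead of gradient descent. The sketch below is a generic version of that loop, not the video author's code; the game hook play_snake is hypothetical and would run episodes with the given policy and return the score:

    # Hypothetical sketch: evolve neural-network weights with a genetic algorithm.
    import numpy as np

    rng = np.random.default_rng(0)
    POP, GENS, MUT = 50, 100, 0.1

    def policy(weights, observation):
        """Tiny one-layer network: 8 observation inputs -> 4 move scores."""
        return int(np.argmax(observation @ weights))

    def fitness(weights):
        # play_snake is a hypothetical stand-in for the game loop: it should
        # play episodes with this policy and return the score (apples eaten).
        return play_snake(lambda obs: policy(weights, obs))

    population = [rng.normal(size=(8, 4)) for _ in range(POP)]
    for _ in range(GENS):
        scores = [fitness(w) for w in population]
        order = np.argsort(scores)[::-1]          # best first
        parents = [population[i] for i in order[:POP // 2]]
        # Refill the population with mutated copies of the parents.
        children = [p + MUT * rng.normal(size=p.shape) for p in parents]
        population = parents + children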

What role does Deep Learning play in Self Driving Cars?

Deep learning and self-driving cars: Autonomous Drive is here, and ...