
A.I. Policy Is Tricky. From Around the World, They Came to Hash It Out.

The goal was to give the policymakers from countries like France, Japan and Sweden a sense of the technology’s strengths and weaknesses, emphasizing the crucial role of human choices.

The class was part of a three-day gathering at M.I.T., including expert panels, debate and discussion, as the Organization for Economic Cooperation and Development seeks to agree on recommendations for artificial intelligence policy by this summer.

19 Artificial Intelligence Technologies To Look For In 2019

Tech decision makers are (and should keep) looking for ways to effectively implement artificial intelligence into their businesses and, therefore, drive value. And though all AI technologies most definitely have their own merits, not all of them are worth investing in.

By providing algorithms, APIs (application programming interfaces), development and training tools, big data, applications and other machines, ML platforms are gaining more and more traction every day.

The last one is actually the first and only audience management tool in the world that applies real AI and machine learning to digital advertising to find the most profitable audience or demographic group for any ad.

And if you haven’t seen them already, expect the imminent appearance and wide acceptance of AI-optimized silicon chips that can be inserted right into your portable devices and elsewhere.

Deep learning platforms use a unique form of ML that involves artificial neural circuits with various abstraction layers that can mimic the human brain, processing data and creating patterns for decision making.
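To make the stacked “abstraction layers” idea concrete, here is a minimal sketch of a small feed-forward network in PyTorch; the layer sizes, input dimension and data are illustrative assumptions rather than details of any particular platform.

```python
# Minimal sketch (illustrative only): a small feed-forward network whose
# stacked layers stand in for the "abstraction layers" described above.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),   # raw input features -> first abstraction layer
    nn.ReLU(),
    nn.Linear(64, 32),   # intermediate representation
    nn.ReLU(),
    nn.Linear(32, 2),    # final layer produces a two-class decision
)

x = torch.randn(8, 20)           # a batch of 8 hypothetical examples
logits = model(x)                # forward pass through all layers
decision = logits.argmax(dim=1)  # pick the higher-scoring class
print(decision)
```

Each Linear/ReLU pair plays the role of one layer of abstraction: early layers transform raw inputs into intermediate representations, and the final layer turns those representations into a decision.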

It allows for more natural interactions between humans and machines, including interactions related to touch, image, speech and body language recognition, and is big within the market research field.

It’s a solution that lets you make the most of your human talent and move employees into more strategic and creative positions, so their actions can really make an impact on the company's growth.

Their digital twins are mainly lines of software code, but the most elaborate versions look like 3-D computer-aided design drawings full of interactive charts, diagrams, and data points.

AI and ML are now being used to move cyberdefense into a new evolutionary phase in response to an increasingly hostile environment: Breach Level Index detected a total of over 2 billion breached records during 2017.

Recurrent neural networks, which are capable of processing sequences of inputs, can be used in combination with ML techniques to create supervised learning technologies, which uncover suspicious user activity and detect up to 85% of all cyber attacks.
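As an illustration only (the detection-rate figure above comes from the article's source, not from this sketch), a sequence model for flagging suspicious activity might look roughly like the following; the event encoding, layer sizes and labels are assumptions made for the example.

```python
# Illustrative sketch: an LSTM that scores a sequence of user-activity events
# as benign (0) or suspicious (1). Feature layout and sizes are assumptions.
import torch
import torch.nn as nn

class ActivityClassifier(nn.Module):
    def __init__(self, n_features=16, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, events):
        # events: (batch, sequence_length, n_features)
        _, (h_n, _) = self.lstm(events)   # final hidden state summarizes the sequence
        return self.head(h_n[-1])         # two logits: benign vs. suspicious

model = ActivityClassifier()
batch = torch.randn(4, 50, 16)            # 4 sessions of 50 events each
scores = model(batch).softmax(dim=1)
print(scores[:, 1])                        # probability each session is suspicious
```

In practice such a model would first be trained on labeled sessions (the supervised learning described above) before its scores mean anything.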

Startups such as Darktrace, which pairs behavioral analytics with advanced mathematics to automatically detect abnormal behavior within organizations, and Cylance, which applies AI algorithms to stop malware and mitigate damage from zero-day attacks, are both working in the area of AI-powered cyber defense.

Compliance is the certification or confirmation that a person or organization meets the requirements of accepted practices, legislation, rules and regulations, standards or the terms of a contract, and there is a significant industry that upholds it.

And the volume of transaction activities flagged as potential examples of money laundering can be reduced as deep learning is used to apply increasingly sophisticated business rules to each one.

Key players here include Merlon Intelligence, a global compliance technology company that supports the financial services industry in combating financial crimes, and Socure, whose patented predictive analytics platform boosts customer acceptance rates while reducing fraud and manual reviews.

While some are rightfully concerned about AI replacing people in the workplace, let’s not forget that AI technology also has the potential to vastly help employees in their work, especially those in knowledge work.

Content creation now includes any material people contribute to the online world, such as videos, ads, blog posts, white papers, infographics and other visual or written assets.

Nano Vision, a startup that rewards users with cryptocurrency for their molecular data, aims to change the way we approach threats to human health, such as superbugs, infectious diseases, and cancer, among others.

Another player utilizing peer-to-peer networks and AI is Presearch, a decentralized search engine that’s powered by the community and rewards members with tokens for a more transparent search system.

And Affectiva’s Emotion AI is used in the gaming, automotive, robotics, education, and healthcare industries, among other fields, to apply facial coding and emotion analytics from face and voice data.

It uses software to automate customer segmentation, customer data integration, and campaign management, and streamlines repetitive tasks, allowing strategic minds to get back to doing what they do best.

The software automates the entire process of campaign management and optimization, making more than 480 daily adjustments per ad to super-optimize campaigns and managing budgets across multiple platforms and over 20 different demographic groups per ad.
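As a rough illustration of the automated-segmentation piece described above, customers can be grouped with a standard clustering algorithm; the features, cluster count and data below are assumptions for the example, not details of any particular product.

```python
# Sketch: automated customer segmentation with k-means clustering.
# The customer features and the choice of three segments are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Hypothetical features: [monthly spend, visits per month, days since last purchase]
customers = rng.random((300, 3)) * [500, 20, 90]

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)

# Each customer now carries a segment label that a campaign tool could target.
for seg in range(3):
    print(f"segment {seg}: {np.sum(segments == seg)} customers")
```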

Artificial Intelligence and Ethics

 On March 18, 2018, at around 10 p.m., Elaine Herzberg was wheeling her bicycle across a street in Tempe, Arizona, when she was struck and killed by a self-driving car.

And beyond these larger social and economic considerations, data scientists have real concerns about bias, about ethical implementations of the technology, and about the nature of interactions between AI systems and humans if these systems are to be deployed properly and fairly in even the most mundane applications.

Artificial intelligence can aggregate and assess vast quantities of data that are sometimes beyond human capacity to analyze unaided, thereby enabling AI to make hiring recommendations, determine in seconds the creditworthiness of loan applicants, and predict the chances that criminals will re-offend.

During the past two years, self-driving cars that rely on rules and training data to operate have caused fatal accidents when confronted with unfamiliar sensory feedback or inputs their guidance systems couldn’t interpret.

Despite notable advances in areas such as data privacy (see “The Privacy Tools Project,” January-February 2017), and clear understanding of the limits of algorithmic fairness, the realization that ethical concerns must in many cases be considered before a system is deployed has led to formal integration of an ethics curriculum—taught by philosophy postdoctoral fellows and graduate students—into many computer-science classes at Harvard.

(Exploitation of this input vulnerability is the subject of “AI and Adversarial Attacks.”) In other words, AI lacks common sense and the ability to reason—even if it can also make incredible discoveries that no human could, such as detecting third- or higher-order interactions (when three or more variables must interact in order to have an effect) in complex biological networks.
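To see what a higher-order interaction looks like in miniature, consider a synthetic third-order example: the outcome depends on three variables jointly, so no single variable predicts it on its own. The data and variable names below are made up purely for illustration.

```python
# Sketch of a third-order interaction: the outcome is the parity of three
# binary variables, so the effect only appears when all three interact.
import numpy as np

rng = np.random.default_rng(1)
a, b, c = (rng.integers(0, 2, 1000) for _ in range(3))
outcome = (a + b + c) % 2   # depends on a, b and c jointly

for name, var in [("a", a), ("b", b), ("c", c)]:
    # Each variable alone is (approximately) uncorrelated with the outcome.
    print(name, round(float(np.corrcoef(var, outcome)[0, 1]), 3))
```

Individually each variable looks irrelevant; only a method that considers all three together can find the pattern, which is the kind of structure the passage attributes to complex biological networks.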

This has proved especially valuable to the field because it led her to reflect deeply on the nature of human-computer interaction, and later, in the course of imagining a future when computers and humans might work together, to propose theoretical models for collaborative AI systems designed to work on teams with people.

She envisions a future that combines the speed and statistical prowess of intelligent computers with innate human talents, not one that pits machines and humans against each other—the way the relationship is often framed in descriptions of AI systems beating world champions in chess and go, or replacing people in the workplace.

When Grosz began experimenting with team-based AI systems in health care, she and a Stanford pediatrician started a project that coordinates care for children with rare diseases who are tended by many people besides parents, including medical experts, home-care aides, physical therapists, and classroom teachers.

The care spans years, she says, and “no human being I’ve ever encountered can keep track of 15 other people and what they are doing over long periods of time.” Grosz, with doctoral student Ofra Amir (now a faculty member at the Technion) began by analyzing how the patient-care teams worked, and developed a theory of teamwork to guide interactions between the human members and an AI system designed to coordinate information about the children’s care.

“What we’re trying to do, on the theoretical end, is to understand better how to share information” in that multi-member team environment, “and then build tools, first for parents, and then for physicians.” One of the key tenets she and her colleague, Bar-Ilan University professor Sarit Kraus, developed is that team members should not take on tasks they lack the requisite knowledge or capability to accomplish.

This is a feature of good human teamwork, as well as a key characteristic of “intelligent systems that know their limits.” “The problem, not just with AI, but a lot of technology that is out in the world, is that it can’t do the job it has been assigned”—online customer service chatbots interacting via text that “are unable to understand what you want” being a case in point.

(If students from related fields such as statistics or applied mathematics are included, the total enrollment substantially exceeds that of top-ranked economics.) “Most of these ethical challenges have no single right answer,” she points out, “so just as [the students] learn fundamental computing skills, I wanted them to learn fundamental ethical-reasoning skills.” In the spring of 2017, four computer-science courses included some study of ethics.

That fall, there were five, then eight by spring 2018, and now 18 in total, spanning subjects from systems programming to machine learning and its effects on fairness and privacy, to social networks and the question of censorship, to robots and work, and human-computer interaction.

“My fantasy,” says Grosz, “is that every computer-science course, with maybe one or two exceptions, would have an ethics module,” so that by graduation, every concentrator would see that “ethics matters everywhere in the field—not just in AI.” She and her colleagues want students to learn that in order to tackle problems such as bias and the need for human interpretability in AI, they must design systems with ethical principles in mind from the start.

The problem of fairness in autonomous systems featured prominently at the inaugural Harvard Data Science Conference (HDSC) in October, where Colony professor of computer science David Parkes outlined guiding principles for the study of data science at Harvard: it should address ethical issues, including privacy (see “The Watchers,” January-February 2017, page 56);

There are lots of reasons why someone might want to “look under the hood” of an AI system to figure out how it made a particular decision: to assess the cause of biased output, to run safety checks before rollout in a hospital, or to determine accountability after an accident involving a self-driving car.

Assistant professor of computer science Finale Doshi-Velez demonstrated by projecting onscreen a relatively simple decision tree, four layers deep, that involved answering questions based on five inputs (see a slightly more complex example, above).
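A model of roughly that shape can be reproduced in a few lines; the data below is synthetic and the feature names are placeholders, so this is only a sketch of what a depth-four tree over five inputs looks like.

```python
# Sketch: a depth-four decision tree over five inputs, similar in shape to the
# example described above. The data here is random and purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((200, 5))                   # five hypothetical input features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # an arbitrary synthetic label

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)

# Printing the tree exposes every decision path, which is what makes such
# models easy to "look under the hood" of.
print(export_text(tree, feature_names=[f"input_{i}" for i in range(5)]))
```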

Whenever there is a diverse population (differing by ethnicity, religion, or race, for example), explained McKay professor of computer science Cynthia Dwork during a HDSC talk about algorithmic fairness, an algorithm that determines eligibility for, say, a loan, should treat each group the same way.
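One simple way to probe the property Dwork describes is to compare decision rates across groups; the snippet below sketches such a demographic-parity check on made-up data, and demographic parity is only one of several competing fairness criteria.

```python
# Sketch: checking whether a loan-approval rule treats two groups similarly
# (demographic parity). The decisions and group labels below are made up.
import numpy as np

approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # algorithm's decisions
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = {g: approved[group == g].mean() for g in np.unique(group)}
print(rates)  # approval rate per group

# A large gap between the rates is one signal (though not proof) that the
# algorithm treats the groups differently.
gap = abs(rates["A"] - rates["B"])
print(f"demographic parity gap: {gap:.2f}")
```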

“Say we have a labor-market disparity that persists without any sort of machine-learning help, and then here comes machine learning, and it learns to re-inscribe those inequalities.” Their solution, which uses tools from economics and sociology to understand disparities in the labor market, pushes the thinking about algorithmic fairness beyond computer science to an interdisciplinary, systems-wide view of the problem.

Because humans are “self-interested, independent, error-prone, and not predictable” enough to enable design of an algorithm that would ensure fairness in every situation, she started thinking about how to take bias out of the training data—the real-world information inputs that a hiring algorithm would use.

Suppose many of the minority group’s members don’t go to college, reasoning that “it’s expensive, and because of discrimination, even if I get a degree, the chances of my getting a job are still low.” Employers, meanwhile, may believe that “people from minority groups are less educated, and don’t perform well, because they don’t try hard.” The point Chen and Hu make is that even though a minority-group member’s decision not to attend college is rational, based on existing historical unfairness, that decision reinforces employers’ preconceived ideas about the group as a whole.

“I think that that’s a particularly naïve way of thinking of technology.” Whether the technology is meant to provide facial recognition to identify crime suspects from video footage, or education tailored to different learning styles, or medical advice, Hu stresses, “What we need to think about is how technologies embed particular values and assumptions.

Exposing that is a first step: realizing that it’s not the case that there are some ethical questions, and some non-ethical questions, but really that, in everything we design…there are always going to be normative questions at hand, every step of the way.” Integrating that awareness into existing coursework is critical to ensuring that “the world that we’re building, with ubiquitous technology, is a world that we want to live in.”  

Ethics of artificial intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings.

It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).

It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to a robot's duty to serve humans, by analogy with linking human rights to human duties before society.[3]

Pamela McCorduck counters that, speaking for women and minorities, 'I'd rather take my chances with an impartial computer,' pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[14]

However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines: Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases get formalized and ingrained, which makes them even more difficult to spot and fight against.[15]

'If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow', says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[29]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios 'seem potentially as important as the risks related to loss of control', but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: 'this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them'.[30]

To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[35]

In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[39]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions.

They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons.

In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[47]

Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit, or whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, etc.

Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation.

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g., stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal 'hackers'.[40]

Many researchers have argued that, by way of an 'intelligence explosion' sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.[50]

However, Bostrom has also asserted that, instead of overwhelming the human race and leading to our destruction, super-intelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.[52]

Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not 'common sense'.

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence.

They stated: 'This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advances the understanding of AI technologies including machine perception, learning, and automated reasoning.'[54]

The same idea can be found in the Emergency Medical Hologram of Starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, has created the system to give medical assistance in case of emergencies.

This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them.

Brian Cox presents Science Matters - Machine Learning and Artificial intelligence

We're beginning to see more and more jobs being performed by machines; even creative tasks like writing music or painting can now be carried out by a ...


Artificial intelligence is helping the fight against cancer | European CEO

We are closer than ever before to understanding the human body on a genetic level. Billions of genetic sequences can be processed every day, transforming a ...

This is why emotional artificial intelligence matters | Maja Pantic | TEDxCERN

We display more than 7,000 different facial expressions every day, and we perceive all of them very intuitively. We associate with them attitudes, emotions, ...

Ned Block: "Why AI Approaches to Cognition Won't Work for Consciousness" | Talks at Google

Professor Block's research is at the center of the vibrant academic debate about the true nature of consciousness. His work often straddles the boundary of ...

A.I. and the hype? | Martin Robbins | TEDxAberystwyth

If you take the hype and the progress made in AI over the last decade or so, and you try to match it up to real-world progress in products, innovation, productivity, ...

AI Winter is Coming

Jason Howell and Megan Morrone talk to Alex Kantrowitz from Buzzfeed about the hype around artificial intelligence. Now more than ever, it's cool to drop the A ...

Machine learning applications in e-learning: bias, risks, and mitigation

My talk at Machine Learning/AI Day with Google Cambridge.

Top programming language for Artificial Intelligence

Artificial Intelligence (AI) Approach towards Natural Language Understanding - Stephen Lernout, MIIA

This video was recorded at FTC 2016 - The problem being addressed in this paper is that using brute force in Natural Language ...