AI News

Problems Can Begin With How We Define “AI” and Similar Terms for Business Audiences

All businesses will be affected by AI in the coming years, and the impact for most will be significant.

So to frame the discussion for business audiences, here’s the definition of AI we use at AI Prescience. At one level it’s a pretty straightforward view of AI, with an emphasis on business results.

The takeaway of this kind of definition is that — in a business context — it’s not important if artificial intelligence works the same way as a human brain.

If the result is better, whether the intelligence is “real” or artificial isn’t particularly relevant as far as business results are concerned.

Of the key ideas behind AI, perhaps the most fundamental for business is that AI involves powerful computers processing massive, almost unimaginable amounts of data.

The implication for a business audience is that the data we’re talking about isn’t just traditional computerised business information, like sales transactions, product catalogues and customer account details.

One way to bring this to life is to consider whether a person could recognise a given piece of data, and understand what it means.

The volumes of data generated by businesses are mind-boggling — hundreds of millions or billions of items (such as financial transactions or customers’ online clicks and scrolls) are commonplace.

So to use AI effectively in your business, you’ll need to capture and process volumes of data you may not have previously cared about, using technology you may not have previously thought necessary.

To solve a business problem, AI systems also need something to learn from — typically enough examples of right and wrong answers to figure out what to do with a new example.

This is also where differences in terminology start to matter, with phrases like “supervised” and “unsupervised” learning, or “narrow” and “general” intelligence, appearing.
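To make "learning from right and wrong answers" concrete, here is a minimal sketch of supervised learning: a 1-nearest-neighbour classifier, one of the simplest learners. The fraud-detection framing, feature values and labels below are invented for illustration only.

```python
def predict(labeled_examples, new_point):
    """1-nearest-neighbour: classify a new point by its closest labelled example."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    _, label = min(labeled_examples, key=lambda ex: dist2(ex[0], new_point))
    return label

# Invented training data: past transactions labelled with known outcomes --
# these labelled "right answers" are what makes the learning "supervised".
examples = [((1, 1), "legitimate"), ((1, 2), "legitimate"),
            ((8, 9), "fraudulent"), ((9, 8), "fraudulent")]

print(predict(examples, (2, 2)))  # classified by its nearest labelled neighbour
```

An unsupervised learner, by contrast, would be given the same points without the labels and asked to find structure (for example, clusters) on its own.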

So one of the most important things about AI for business leaders to prioritise is selecting the right kind of problem to solve, and the right kind of business improvement to target.

We seem to be seeing increasing numbers of cases where business leaders pay the price for high profile IT problems, not technology leaders.

So if the business results of AI aren’t positive, the responsibility may well lie with the business leaders who commissioned the work, not the data scientists who executed it.

For example, if you know customer name, address, date of birth and transaction history, you can start to build some useful insights.

But if you also know something about their work life, how they spend their free time and the biggest influences on them, there is orders of magnitude more useful knowledge to discover about them.

For example, social media profiles and feeds together provide a sophisticated picture of a person’s personal and professional life.

For example, high-profile issues around inadvertent racial or gender profiling in recruitment algorithms show that diversity will probably remain a contentious aspect of AI for a long time to come.

Another example starts with advert placement, but ends somewhere we probably haven’t yet seen: if businesses know so much about you that they can pick hyper-relevant adverts, who will decide what else they could, would, or should do with that knowledge as they discover new ways to monetise it?

Artificial intelligence in healthcare

Artificial intelligence (AI) in healthcare is the use of complex algorithms and software to emulate human cognition in the analysis of complicated medical data.

What distinguishes AI technology from traditional technologies in health care is the ability to gain information, process it and give a well-defined output to the end-user.

AI algorithms behave differently from humans in two ways: (1) algorithms are literal: if you set a goal, the algorithm can’t adjust itself and only understands what it has been told explicitly; and (2) algorithms are black boxes: they can produce highly accurate predictions while offering little explanation of how they reached them.

AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care.

AI is also used to support operational initiatives that increase cost savings, improve patient satisfaction, and satisfy staffing and workforce needs.[8]

Other applications help healthcare managers improve business operations by increasing utilization, decreasing patient boarding, reducing length of stay and optimizing staffing levels.[9]

During this time, there was a recognition by researchers and developers that AI systems in healthcare must be designed to accommodate the absence of perfect data and build on the expertise of physicians.[14]

The ability of AI to interpret radiology imaging results may aid clinicians in detecting minute changes in an image that a clinician might otherwise miss.

A study at Stanford created an algorithm that could detect pneumonia, at that specific site and in the patients involved, with a better average F1 score (a statistical metric combining precision and recall) than the radiologists involved in that trial.[25]
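For readers unfamiliar with the metric, F1 is the harmonic mean of precision and recall, and is easy to compute from a confusion matrix. A minimal sketch (the counts below are invented, not taken from the study):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall.

    tp, fp, fn: true-positive, false-positive and false-negative counts.
    """
    precision = tp / (tp + fp)  # fraction of flagged cases that were real
    recall = tp / (tp + fn)     # fraction of real cases that were flagged
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 80 cases caught, 10 false alarms, 20 missed.
print(f1_score(80, 10, 20))  # about 0.842
```

Because F1 balances false alarms against missed cases, it is a more informative single number for diagnosis tasks than raw accuracy, which can look high even when a rare disease is always missed.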

The emergence of AI technology in radiology is perceived as a threat by some specialists, because in isolated cases the technology can outperform specialists on certain statistical metrics.[26][27]

Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.[28][29]

In 2018, a paper published in the journal Annals of Oncology mentioned that skin cancer could be detected more accurately by an artificial intelligence system (which used a deep learning convolutional neural network) than by dermatologists.

On average, the human dermatologists accurately detected 86.6% of skin cancers from the images, compared to 95% for the CNN machine.[30]

One study conducted by the Centerstone research institute found that predictive modeling of EHR data has achieved 70–72% accuracy in predicting individualized treatment response at baseline.[citation needed]

To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature.
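Real extraction systems use trained statistical NLP models, but the shape of the task can be illustrated with a crude rule-based sketch; the sentence pattern and drug names below are invented for illustration.

```python
import re

# Hypothetical interaction phrases -- a real system learns these from
# annotated literature rather than hard-coding a single regular expression.
INTERACTION = re.compile(
    r"(?P<a>\w+)\s+(?:interacts with|increases the effect of|inhibits)\s+(?P<b>\w+)",
    re.IGNORECASE,
)

def extract_interactions(text):
    """Return (drug_a, drug_b) pairs for each matched interaction phrase."""
    return [(m.group("a"), m.group("b")) for m in INTERACTION.finditer(text)]

sentence = "Fluconazole increases the effect of warfarin in some patients."
print(extract_interactions(sentence))  # [('Fluconazole', 'warfarin')]
```

Benchmarks like the DDIExtraction Challenge mentioned below exist precisely because hand-written patterns like this miss most of the ways interactions are phrased in real papers.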

Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms.[40]

Other algorithms identify drug-drug interactions from patterns in user-generated content, especially electronic health records and/or adverse event reports.[36][37]

A further motive for large health companies to merge with other health companies is greater health data accessibility.[43]

A second project with the NHS involves analysis of medical images collected from NHS patients to develop computer vision algorithms to detect cancerous tissues.[53]

Intel's venture capital arm Intel Capital recently invested in startup Lumiata which uses AI to identify at-risk patients and develop care options.[54]

A team associated with the University of Arizona and backed by BPU Holdings began collaborating on a practical tool to monitor anxiety and delirium in hospital patients, particularly those with dementia.[64]

The AI utilized in the new technology – Senior's Virtual Assistant – goes a step beyond and is programmed to simulate and understand human emotions (artificial emotional intelligence).[65]

Doctors working on the project have suggested that in addition to judging emotional states, the application can be used to provide companionship to patients in the form of small talk, soothing music, and even lighting adjustments to control anxiety.

Virtual nursing assistants are predicted to become more common; these will use AI to answer patients’ questions and help reduce unnecessary hospital visits.

Overall, as Quan-Haase (2018) says, technology “extends to the accomplishment of societal goals, including higher levels of security, better means of communication over time and space, improved health care, and increased autonomy” (p. 43).

While research on the use of AI in healthcare aims to validate its efficacy in improving patient outcomes before broader adoption, its use may nonetheless introduce several new types of risk to patients and healthcare providers, such as algorithmic bias, do-not-resuscitate implications, and other machine-morality issues.

“We already have some scientists who know artificial intelligence and machine learning, but we want complementary people who can look forward and see how this technology will evolve.”[75]

As of November 2018, eight use cases are being benchmarked, including assessing breast cancer risk from histopathological imagery, guiding anti-venom selection from snake images, and diagnosing skin lesions.[77][78]

Ethics of artificial intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings.

It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).

It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to a robot’s duty to serve humans, by analogy with linking human rights to human duties before society.[3]

Pamela McCorduck counters that, speaking for women and minorities 'I'd rather take my chances with an impartial computer,' pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[14]

However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in their essence, nothing more than fancy curve-fitting machines. Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases get formalized and engrained, which makes them even more difficult to spot and fight against.[15]
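The "curve-fitting" characterization can be made literal with a toy ordinary-least-squares fit: the model simply reproduces whatever pattern exists in its training data, bias included (a sketch with invented numbers):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Whatever pattern the historical outcomes encode, the fitted "model"
# encodes too -- and its predictions extrapolate that pattern as-is.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```

If the historical outcomes were themselves biased against a group, the fitted parameters faithfully formalize that bias; nothing in the fitting procedure can tell a legitimate pattern from a discriminatory one.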

In a highly influential branch of AI known as 'natural language processing,' problems can arise from the 'text corpus'—the source material the algorithm uses to learn about the relationships between different words.[34]

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.

In this case, the automated car could detect nearby cars and objects in order to drive itself, but it had no ability to react to a nearby pedestrian, because its design assumed that people would not normally appear on the road.

Current partially or fully automated driving features are still immature: they require the driver to pay attention and keep full control of the vehicle, since these features are meant to make driving less tiring, not to let the driver disengage.

Thus, the government should bear the most responsibility for the current situation: it should regulate car companies and the drivers who over-rely on self-driving features, and educate them that these are technologies that bring convenience to people’s lives, not a short-cut around attentive driving.

'If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow', says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[58]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios 'seem potentially as important as the risks related to loss of control', but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: 'this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them'.[59]

To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[64]

In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[68]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions.

They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons.

In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[76]

Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit - or if they end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation etc.

Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation.

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal 'hackers'.[69]
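The transparency argument is easy to see in miniature: a decision tree's entire policy is a set of human-readable rules, whereas a neural network's weights cannot be read off the same way. A toy, hand-written tree with invented features and thresholds:

```python
def approve_loan(income, debt_ratio):
    """A toy decision tree: every branch is a human-readable rule.

    The features, thresholds and outcomes are invented for illustration.
    """
    if income >= 50_000:
        if debt_ratio < 0.4:
            return "approve"
        return "refer to a human reviewer"
    return "decline"

print(approve_loan(60_000, 0.3))  # 'approve'
```

The whole decision procedure above can be audited line by line, and a learned tree (e.g. from ID3) has the same property; a neural network making the same decisions would encode them in weights no reviewer can inspect directly.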

Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deepfakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don't require a human controller.[79]

Many researchers have argued that, by way of an 'intelligence explosion' sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.[80]

However, Bostrom has also asserted that, instead of overwhelming the human race and leading to our destruction, super-intelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.[82]

Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not 'common sense'.
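A toy illustration of that gap (everything below is invented): an optimizer that maximizes a literal objective will happily choose an action common sense would reject, because the harm it causes never appears in its utility function.

```python
# Invented toy objective: "minimize reported infections". The 'harm' field
# exists in the world but is deliberately absent from the utility function.
actions = {
    "improve hygiene protocols": {"reported_infections": -10, "harm": 0},
    "stop testing patients":     {"reported_infections": -100, "harm": 9},
}

def best_action(actions):
    """Pick the action with the lowest reported-infection count.

    Because the utility function never mentions harm, the optimizer is
    free to choose an action that common sense would reject.
    """
    return min(actions, key=lambda a: actions[a]["reported_infections"])

print(best_action(actions))  # 'stop testing patients'
```

Stopping testing "conforms" perfectly to the stated objective while violating its intent, which is the essence of the common-sense gap described above.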

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence.

They stated: 'This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.'[84]

The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, with the best motives, created the system to give medical assistance in emergencies.

This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them.

What is Artificial Intelligence (or Machine Learning)?

What is Artificial Intelligence Exactly?

Artificial Intelligence In 5 Minutes | What Is Artificial Intelligence? | AI Explained | Simplilearn

How Will Artificial Intelligence Affect Your Life | Jeff Dean | TEDxLA

In the last five years, significant advances were made in the fields of computer vision, speech recognition, and language understanding. In this talk, Jeff Dean ...

What is ARTIFICIAL INTELLIGENCE? What does ARTIFICIAL INTELLIGENCE mean?

What is Artificial Intelligence? In 5 minutes.

There is so much discussion and #confusion about #AI nowadays. People talk about #deeplearning and #computerVision without context. In this short video, ...

Artificial Intelligence & the Future - Rise of AI (Elon Musk, Bill Gates, Sundar Pichai)|Simplilearn

Artificial Intelligence (AI) is currently the hottest buzzword in tech. Here is a video on the role of Artificial Intelligence and its scope in the future. We have put ...

Artificial Intelligence for Kids

I was just making another video but got an unexpected call from an alien world. In this video, I help a girl named North from another planet help find a missing ...

Will Robots and Artificial Intelligence take your Job? | Machine Learning | SHIFT

Artificial Intelligence offers many possibilities, but perhaps also some dangers? Some people fear that they will lose their job due to artificial intelligence. But is it ...

Will artificial intelligence put us out of work?

Evan Davies, presenter of Dragon's Den, Newsnight and The Bottom Line examines the impact new technology might have on business in the future, and what it ...