AI News: Google's DeepMind A.I. beats doctors in breast cancer screening trial

A machine-versus-doctors fixation masks important questions about artificial intelligence

Wallet-sized cards containing a person’s genetic code don’t exist. Yet they were envisioned in a 1996 Los Angeles Times article, which predicted that by 2020 the makeup of a person’s genome would drive their medical care. The idea that today we’d be basking in the fruits of ultra-personalized medicine was put forth by scientists who were promoting the Human Genome Project —

He pointed to “incentives for both biologists and journalists to tell simple stories, including the idea of relatively simple genetic causation of common, debilitating disease.” Lately, the allure of a simple story has been thwarting public understanding of another technology that’s grabbed the spotlight in the wake of the genetic data boom: artificial intelligence (AI). With AI, headlines often focus on the ability of machines to “beat” doctors at finding disease. Take coverage of a study published this month on a Google algorithm for reading mammograms: CNBC: Google’s DeepMind A.I. beats doctors in breast cancer screening trial.

At least anecdotally, Harvey said, some young doctors are eschewing the field of radiology in the UK, where there is a shortage. Harvey drew chuckles during a speech at the Radiological Society of North America meeting in December when he presented a slide showing that while about 400 AI companies have sprung up in the last five years, the number of radiologists who have lost their jobs stands at zero.

(Medium ran Harvey’s defiant explanation of why radiologists won’t easily be nudged aside by computers.) The human-versus-machine fixation distracts from questions of whether AI will benefit patients or save money.  We’ve often written about the pitfalls of reporting on drugs that have only been studied in mice.

Almost always, a computer’s “deep learning” ability is trained and tested on cleaned-up datasets that don’t necessarily predict how it will perform in actual patients. Harvey said there’s a downside to headlines “overstating the capabilities of the technology before it’s been proven.” “I think patients who read this stuff can get confused.”

In Undark, Jeremy Hsu reported on the lack of evidence for a triaging app, Babylon Health.  Harvey said journalists also need to point out “the reality of what it takes to get it into the market and into the hands of end users.” He cites lung cancer screening, for which some stories cover “how good the algorithm is at finding lung cancers and not much else.” For example, a story that appeared in the New York Post (headline: “Google’s new AI is better at detecting lung cancer than doctors”)  declared that “AI is proving itself to be an incredible tool for improving lives” without presenting any evidence.


The journal Nature published a study entitled "International evaluation of an AI system for breast cancer screening".

One of those involved with the study told Wired: "AI programmes will not solve the human staffing crisis - as radiologists and imaging teams do far more than just look at scans - but they will undoubtedly help by acting as a second pair of eyes and a safety net."

It's clear from the number of FDA-approved breast-specific algorithms, the Google study, and the media response that followed, that any technology improvement or assistance capable of improving diagnostic capability is potentially disruptive, and that breast-specific imaging and its improvement is a major priority.

The company has also established Izotropic Imaging Corp, a wholly owned Nevada-based subsidiary that will manage operations in the U.S.A.

Future of AI Part 5: The Cutting Edge of AI

In terms of opportunities, this includes the potential for an enhanced understanding of the customer and personalised marketing, in turn resulting in efficiency gains and reduced wastage, as well as improvements in healthcare and financial services offerings.

The advantage of Edge Computing is that it will allow firms to undertake near real-time analytics, with improved user experiences and lower cost, as the volume of data transmitted back and forth between the cloud and the device is reduced.

The convergence of AI, 5G and Edge Computing, alongside Augmented Reality and Mixed Reality, will enable exciting new innovations in customer experience, personalisation of services and cleaner economic development in the 2020s, while 6G combined with AI will result in the Internet of Everything (IoE), connecting anything that can be connected.

6G will offer a continuation of the technology revolution that 5G will enable. However, I believe that a longer timeline (2025 and beyond) is more probable for certain technologies, such as more advanced autonomous vehicles (with level 4 or 5 autonomy; whilst manufacturers hope to have level 4 on the road by 2022, they have tended to be overoptimistic), to make an impact and scale, as this will require sufficient scaling of 5G networks along with appropriate regulatory and legal frameworks.

An example of the positive impact of AI for humanity is provided by Fergus Walsh, who reported that 'AI 'outperforms' doctors diagnosing breast cancer' with the following statement: 'Artificial Intelligence is more accurate than doctors in diagnosing breast cancer from mammograms, a study in the journal Nature suggests.'

In Ex Machina, a female humanoid robot named Ava is assessed to test whether she can pass the Turing test, and in the process she outmanoeuvres the humans who are interrogating her by using a combination of aggression and cunning intelligence.

'Yes and No' provides two reasons why it did not pass the Turing test: the first is that the person knew it was a machine, and the more important is that the subject matter was narrow; to truly pass the Turing test, a machine should be able to answer any question.

There are those within the AI community, myself included, who believe that the type of AI that we have today can be used to help generate new business and employment opportunities by augmenting humans and making sense of the flood of data that we are creating with digital technology.

For example, Daniel Thomas, in an article entitled 'Automation is not the future, human augmentation is', quotes Paul Reader, CEO of Mind Foundry, an AI startup spun out of the University of Oxford’s Machine Learning Research Group, as stating: “Throughout history innovations have come along like electricity and steam, and they do displace jobs.

Any tech can be used for good or for evil, and we want to use it for good.”

Source for image below: Accenture, Daniel Thomas, Raconteur, 'Automation is not the future, human augmentation is'.

Sophia the Robot (image below) is an impressive piece of robotics engineering, but it is not a form of advanced Artificial General Intelligence (see below for details of this type of AI) that can match human intelligence.

Jaden Urbi covered this in an article entitled 'The complicated truth about Sophia the robot — an almost human robot or a PR stunt'. In a Facebook post, leading AI researcher Yann LeCun said Hanson’s staff members were human puppeteers who were deliberately deceiving the public.

Image above: Sophia the Robot, Hanson Robotics. Editorial credit: Anton Gvozdikov / Shutterstock.com.

It is understandable that the general public have anxieties about AI, given the lack of understanding of the technology in the wider domain, and perhaps there is a need for more in the Data Science community to explain the subject in a manner that the wider public, including many journalists and some of the social media influencer community, can understand.

I hope that this article will help play a role in enabling the business community and the wider public to understand the complexities of AI, in particular the challenges of Artificial General Intelligence, and how we can avail ourselves of the opportunities that 5G will provide to use AI to enhance our economies and everyday lives.

The Reality of AI in the 2020s

The 2020s are going to be a decade in which AI and 5G technology, working alongside each other, will result in a world where the physical and digital connect in the form of intelligent connected devices with AI inferencing on the device itself.

Whilst there will be continued and exciting research advancements in the field of AI, it is highly unlikely that the 2020s will be the period in which Artificial General Intelligence (see below for definitions) arrives. Instead, it will be a period in which we use AI technology alongside 5G as the key to the Digital Transformation of every sector of the economy, resulting in mass personalisation at scale for healthcare, retail, financial services and other sectors.

The result of the Industry 4.0 revolution that AI and 5G will usher in across the 2020s will be cleaner economic development, with the creation of new jobs, economic growth and exciting new business opportunities.

It will also give doctors and healthcare professionals access to more immediate and actionable patient monitoring using IoT connectivity within cardiac pacemakers, defibrillators and even sensors in insulin pumps.

What is Artificial Intelligence (AI)?

AI deals with the area of developing computing systems which are capable of performing tasks that humans are very good at, for example recognising objects, recognising and making sense of speech, and decision making in a constrained environment.

Classical Artificial Intelligence: algorithms and approaches including rules-based systems, search algorithms that entail uninformed search (breadth-first, depth-first, uniform cost search), and informed search such as the A and A* algorithms (a minimal A* sketch is shown below).
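As a minimal illustration of the informed-search family mentioned above, the sketch below implements A* on a small hand-written graph; the graph, edge costs and heuristic values are invented for the example and are not drawn from any particular system.

```python
import heapq

def a_star(graph, heuristic, start, goal):
    """A* search: expand nodes in order of f(n) = g(n) + h(n), where g is the
    cost accumulated so far and h is an (ideally admissible) estimate to goal."""
    frontier = [(heuristic[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + heuristic[neighbour], new_g,
                                          neighbour, path + [neighbour]))
    return None, float("inf")

# Toy graph: each edge is (neighbour, step cost); the heuristic is a rough
# lower-bound estimate of the remaining cost to reach "D".
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star(graph, heuristic, "A", "D"))  # (['A', 'B', 'C', 'D'], 4)
```

With a heuristic of zero everywhere, the same routine degrades gracefully into uniform cost search, which is one way to see the relationship between the informed and uninformed families.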

I have set out above why I strongly believe that we are not on the verge of another AI winter, due to the demand for Machine Learning and Deep Learning as the Vs of big data (Velocity, Volume, Value, Variety, and Veracity) are set to increase, in particular volume, as Edge Computing and the IoT are set for rapid growth in the 2020s.

It may arrive within the next 20 or so years, but there are challenges relating to hardware, the energy consumption required by today’s powerful machines, and the need to solve the catastrophic forgetting (memory loss) that affects even the most advanced Deep Learning algorithms of today.

– A Timeline Consensus from AI Researchers: the article contains responses from 32 PhD researchers in the AI field who were asked for their estimates of the arrival of the singularity. The following observation summarises the divergence of opinion on the subject: 'It’s interesting to note that our number one response, 2036-2060, was followed by likely never as the second most popular response.'

Deep Learning Neural Networks generated particular excitement in the 2010s (especially from 2012 onwards with the success of AlexNet), with Convolutional Neural Networks (CNNs) outperforming radiologists in diagnosing diseases from medical images, and AlphaGo's Deep Reinforcement Learning combined with tree search beating world Go champion Lee Sedol.
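For readers unfamiliar with the architecture behind those results, below is a minimal, hedged sketch of a convolutional image classifier in PyTorch; the layer sizes, the single input channel and the ten-class output are arbitrary choices for illustration and bear no relation to the clinical or game-playing systems cited above.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A deliberately tiny CNN for 1-channel 28x28 images (MNIST-sized)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
logits = model(torch.randn(8, 1, 28, 28))  # a batch of 8 dummy images
print(logits.shape)  # torch.Size([8, 10])
```

The convolution-and-pooling stack learns increasingly abstract visual features, which is the property that let much larger CNNs compete with radiologists on specific, narrowly defined imaging tasks.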

Attempts have been made by researchers to overcome this problem. For example, Thomas Macauley described a Telefónica research team in Barcelona whose approach consists of two separate components: one compacts the information that the Neural Network requires into the fewest neurons possible without compromising its accuracy, whilst the second protects the units that were essential to complete past tasks.
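The approach described is reminiscent of regularisation-based defences against catastrophic forgetting. As a hedged illustration of the general idea only (this is elastic weight consolidation, a widely cited method, and not necessarily the Telefónica team's formulation), the loss for a new task B adds a penalty that anchors the parameters that mattered for a previous task A:

```latex
\mathcal{L}(\theta) \;=\; \mathcal{L}_{B}(\theta) \;+\; \sum_{i} \frac{\lambda}{2}\, F_{i}\,\bigl(\theta_{i} - \theta^{*}_{A,i}\bigr)^{2}
```

Here \theta^{*}_{A} are the parameters learned on task A, F_{i} estimates how important each parameter was to that task (via the Fisher information), and \lambda trades off retaining old knowledge against learning the new task.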

A paper entitled 'Biologically inspired alternatives to Backpropagation through time for learning in Recurrent Neural Nets' argued that 'The gold standard for learning in Recurrent Neural Networks in Machine Learning is Backpropagation through time (BPTT), which implements stochastic gradient descent with regard to a given loss function.'
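To make the quoted definition concrete, a brief sketch: BPTT unrolls the recurrent network over the T timesteps of a sequence and then applies an ordinary stochastic-gradient update to the summed loss,

```latex
\theta \;\leftarrow\; \theta \;-\; \eta \,\frac{\partial}{\partial \theta}\sum_{t=1}^{T} \mathcal{L}\bigl(y_{t}, \hat{y}_{t}(\theta)\bigr)
```

where the gradient is accumulated backwards through every timestep of the unrolled network; it is this long backward chain through time that the biologically inspired alternatives aim to avoid.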

The paper can be found at the hyperlink and is entitled 'The HSIC Bottleneck: Deep Learning Without Back-Propagation'. Ram Sagar, in 'Is Deep Learning without Backpropagation possible?', explains the authors' claims for the method and observes: 'The successful demonstration of HSIC as a method is an indication of the growing research in exploration of Deep Learning fundamentals from an information theoretical perspective.'

HSIC represents a fascinating research initiative as an alternative to Backpropagation; however, it will require review by others, with testing and validation at scale on different (and potentially more complex) datasets for performance and replicability, to assess its potential to move beyond Backpropagation.

One article explains that, in order 'To reach AGI, computer hardware needs to increase in computational power to perform more total calculations per second (cps). Tianhe-2, a supercomputer created by China’s National University of Defense Technology, currently holds the record for cps at 33.86 petaflops (quadrillions of cps).

Some experts predict quantum computers doubling in power every six months — if correct, within 20 years, quantum computers will be a trillion times more powerful than present.'
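The arithmetic behind that figure is straightforward: doubling every six months means 40 doublings over 20 years, and

```latex
2^{40} \approx 1.1 \times 10^{12}
```

i.e. roughly a trillion-fold increase, which is where the number in the quote comes from (if, of course, the assumed doubling rate holds).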

A key landmark for Quantum Computing is quantum supremacy, defined as '...the goal of demonstrating that a programmable quantum device can solve a problem that classical computers practically cannot (irrespective of the usefulness of the problem).'

Research developments in the lab will continue to make headlines over the next few years; for example, researchers at Princeton University announced, in 'In leap for quantum computing, silicon quantum bits establish a long-distance relationship', that they have demonstrated that two quantum-computing components, known as silicon 'spin' qubits, can interact even when spaced relatively far apart on a computer chip.

David Nield in 'Physicists Just Achieved The First-Ever Quantum Teleportation Between Computer Chips' further explains this as 'Put simply, this breakthrough means that information was passed between the chips not by physical electronic connections, but through quantum entanglement – by linking two particles across a gap using the principles of quantum physics.'

'We don't yet understand everything about quantum entanglement (it's the same phenomenon Albert Einstein famously called 'spooky action'), but being able to use it to send information between computer chips is significant, even if so far we're confined to a tightly controlled lab environment.'

According to one estimate, all of the information on every computer in 2015, coded onto DNA, could “fit in the back of an SUV.” The same piece notes that 'The essence of memory, of course, lies in its durability...and hard drives decompose after 20 or 30 years.'

The models have to be rebuilt from scratch once the feature-space distribution changes. Transfer learning is the idea of overcoming the isolated learning paradigm and utilizing knowledge acquired for one task to solve related ones.'
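A common, minimal way to apply this idea in practice is to take a network pre-trained on a large dataset, freeze its feature extractor and retrain only a new output layer on the smaller target task. The sketch below does this with a torchvision ResNet-18; the five-class target task and the dummy data are invented for illustration, and this is a generic recipe rather than the method of any specific paper cited here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet (assumes a recent torchvision
# and that the weights can be downloaded).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for a hypothetical 5-class task.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head's parameters are optimised.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data.
images, labels = torch.randn(4, 3, 224, 224), torch.randint(0, 5, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Because only the small head is trained, far less labelled data is needed than training the whole network from scratch, which is precisely the appeal of transfer learning for data-poor domains.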

PathNet, A step towards AGI

Théo Szymkowiak authored 'DeepMind just published a mind blowing paper: PathNet', where the author observed: 'Since scientists started building and training Neural Networks, Transfer Learning has been the main bottleneck. Transfer Learning is the ability of an AI to learn from different tasks and apply its pre-learned knowledge to a completely new task. It is implicit that with this precedent knowledge, the AI will perform better and train faster than de novo Neural Networks on the new task.'

Developed by Silicon Valley startup Vicarious, Recursive Cortical Networks (RCNs) were recently used to solve text-based CAPTCHAs with a high accuracy rate using significantly less data than their counterparts — 300x less in the case of a scene text recognition benchmark.'

This research is interesting because there has been criticism that Deep Learning currently requires large datasets for effective training in order to show consistent and reproducible results in both in-sample and out-of-sample testing, whereas a human child can learn to tell a dog from a cat from a much smaller set of examples.

This need for large datasets has been a barrier to scaling Deep Learning into other areas of the economy, including those businesses (and areas of healthcare) that have smaller datasets, and it will be of increasing importance in the eras of 5G and 6G, when AI will increasingly sit on the edge (on devices around us such as autonomous cars and robots).

“It’s just an attempt to hang on to the view they already have, without really comprehending that they’re being swept away.” The above shows that there is an ongoing dispute in the research community about the best approach to move AI towards a generalised capability that will attain human levels of ability.

Using Deep Learning to improve our understanding of the human brain

Nathan Collins, in an article entitled 'Deep Learning comes full circle', explains how Deep Learning is being applied to help us understand our brains better: 'Although not explicitly designed to do so, certain Artificial Intelligence systems seem to mimic our brains’ inner workings more closely than previously thought, suggesting that both AI and our minds have converged on the same approach to solving problems.

'Now, Yamins, who is also a faculty scholar of the Stanford Neurosciences Institute and a member of Stanford Bio-X, and his lab are building on that connection to produce better theories of the brain – how it perceives the world, how it shifts efficiently from one task to the next and perhaps, one day, how it thinks.'

In 2014, Yamins and colleagues showed that a deep learning system that had learned to identify objects in pictures – nearly as well as humans could – did so in a way that closely mimicked the way the brain processes vision.

Peter Morgan in an article entitled 'Deep Learning and Neuromorphic Chips' explains that 'Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli.

One such project is MICrONS, or Machine Intelligence from Cortical Networks, which “seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain.” The program is expressly designed as a dialogue between data science and neuroscience with the goal to advance theories of neural computation.'

Baely Almonte notes, in an article entitled 'Engineering professor uses Machine Learning to help people with paralysis regain independence', how a brain-computer interface that uses Machine Learning to translate brain patterns into instructions for a personal computer, or even a powered wheelchair, can be used to understand the specific tasks that a person wants to achieve.
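As a purely hypothetical sketch of the kind of pipeline such a brain-computer interface relies on (the features, labels and classifier choice below are invented for illustration and are not the system described in the article), pre-extracted brain-signal features are mapped to intended commands with an ordinary supervised classifier:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: 200 trials of 16 band-power features extracted from EEG,
# labelled with the command the user intended (0 = "left", 1 = "right").
X = rng.normal(size=(200, 16))
y = rng.integers(0, 2, size=200)
X[y == 1] += 0.8  # give the two intents separable feature statistics

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# At run time, each new window of brain activity is converted into the same
# features and mapped to a computer or wheelchair command.
new_window = rng.normal(size=(1, 16)) + 0.8
command = "right" if clf.predict(new_window)[0] == 1 else "left"
print(command)
```

Real systems add careful signal cleaning, per-user calibration and safety checks, but the core step of learning a mapping from brain patterns to intended actions looks much like this.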

The Future is Humans Augmenting Ourselves with AI

Along the way, the arrival of 6G around 2030, which Marcus Weldon of Nokia Bell Labs claims '...will be a sixth sense experience for humans and machines', where biology meets AI, together with the potential for Quantum Computing to take off in the real world in the 2030s, may combine to accelerate the pathway towards stronger AI in the 2030s and beyond.

The computational complexities and other design challenges for AGI are going to take time to resolve and the period of the 2020s is going to be about maximising the benefits of cutting edge AI techniques across the sectors of the economy.

Therefore whilst we will continue to hear about breakthroughs of Deep Learning algorithms outperforming humans at specific tasks in 2020 and across the decade, we are unlikely to experience AI outperforming humans at multiple tasks in the foreseeable future.

Source for infographic below: Lawtomated, AI for Legal: ANI, AGI and ASI.

Whilst I personally do believe that AGI will arrive someday in the future (albeit not in the 2020s), I don't believe that it will occur in a vacuum, or, better stated, in isolation from humans also learning to use advanced AI to enhance ourselves.

The work being undertaken by the likes of the Stanford Neurosciences Institute into using AI to further understand how our own brains work, along with the research and development into neural brain-computer chips, means that AI will not advance in isolation. Rather, our understanding of our own brains will advance too, along with the ability of humans to augment ourselves with advanced AI, and hence there will be a partnership between AI and humans in the future rather than AGI or ASI simply replacing humans.

We will, however, need to become more enlightened and aware of the risks (control by others), and hence perhaps the real risk with AI is not so much AGI and ASI replacing humans but rather particular people seeking to use technology to influence (perhaps even control) the actions of others.

As a society we are already engaged in a debate about the influence of social media algorithms and the impact of the Cambridge Analytica scandal involving Facebook, with our politicians still playing catch-up in understanding modern technology.

Our ability to handle the challenges that face us today rather than focus anxiety on technologies that are yet to arrive will in fact be key to creating the foundations for how we handle the eventual arrival of AGI and use advanced AI to further develop humanity.

As things stand today and for the foreseeable future humanity faces genuine dangers from extreme weather events, climate change, and the challenges of how we are going to feed a world where the population is forecast to grow from 7.7 to 11 billion people this century.

Furthermore, maybe we are not ready for the huge economic transition that AGI may bring, at a time when the aftermath of the Great Recession is still felt in certain sectors of the global economy, with recovery only recently occurring, and with the resulting impact on global politics and across societies.

Furthermore, as we use Deep Learning to learn more about our own brains and transition to Industry 4.0 in the 2020s, we will have time to use ANI to augment our own capabilities and to apply AI to the challenges that we face in terms of economic growth and cleaner technologies.

In the 2020s the cutting edge of AI is going to occur increasingly at the edge (embedded, on device), resulting in exciting new opportunities across businesses, education (virtual tutors and remote classrooms) and the healthcare sector, rather than AGI / ASI replacing us.

Examples of State of the Art AI Today

We experienced an exciting time in AI research during the 2010s, and in recent times this has continued to produce exciting developments that in turn may lead to fascinating new innovative products and services in real-world applications such as the entertainment sector (including gaming), healthcare, finance, art, manufacturing and robotics.

The section below outlines some of the cutting-edge techniques used within the AI community today that we should expect to hear more about during the 2020s.

GANs

Generative Adversarial Networks (GANs) were invented by Ian Goodfellow and his colleagues in 2014 after an argument in a bar.

This way, as training progresses, the generator continuously gets better at generating fake data that looks real, while the discriminator gets better at learning the difference between fake and real, in turn helping the generator to improve itself.

For example, a GAN trained on faces can be used to generate images of faces that do not exist yet look very real. GANs are viewed as one of the most exciting areas of AI today, with applications in healthcare and the entertainment sector.
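A heavily simplified sketch of that adversarial training loop is shown below, using tiny fully connected networks on 2-D toy data rather than faces; the layer sizes, learning rates and 'real' distribution are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator maps noise to 2-D points; the discriminator scores
# whether a point looks "real" (drawn from a shifted Gaussian) or "fake".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) + 3.0   # samples from the "real" distribution
    fake = G(torch.randn(64, 8))      # generator samples from noise

    # Discriminator update: label real as 1, fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator call its samples real.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Swapping the toy networks for deep convolutional ones and the 2-D points for images is, at a high level, how face-generating GANs are built.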

The authors further claim that this is the first work to demonstrate empirically that GANs can be used to generate unseen classes of training samples in the VSR domain, thereby facilitating zero-shot learning.

Furthermore, situations may arise whereby rewards incentivise gains in the short term at the expense of long-term progress, known as deceptive rewards, with Matthew Hutson describing this as resulting in algorithms being trapped in dead ends and DRL being stuck in a rut.

Matthew Hutson also notes that AI researcher Jeff Clune published research entitled 'AI-GAs: AI-generating algorithms, an alternate paradigm for producing general Artificial Intelligence', whereby 'Clune argues that open-ended discovery is likely the fastest path toward artificial general intelligence — machines with nearly all the capabilities of humans.

For image recognition, capsnets exploit the fact that while viewpoint changes have nonlinear effects at the pixel level, they have linear effects at the part/object level. This can be compared to inverting the rendering of an object of multiple parts.

Kyle Wiggers, who authored 'AI capsule system classifies digits with state-of-the-art accuracy', stated that 'the coauthors of the study say that the SCAE’s design enables it to register industry-leading results for unsupervised image classification on two open source data sets, the SVHN (which contains images of small cropped digits) and MNIST (handwritten digits).'

Adam Conner-Simons, in 'Smarter training of Neural Networks', provided an incisive summary of the paper, noting: 'In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) have shown that Neural Networks contain subnetworks that are up to 10 times smaller, yet capable of being trained to make equally accurate predictions - and sometimes can learn to do so even faster than the originals.' 'MIT professor Michael Carbin says that his team’s findings suggest that, if we can determine precisely which part of the original network is relevant to the final prediction, scientists might one day be able to skip this expensive process altogether.'
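The result described rests on finding much smaller subnetworks inside a trained model, typically by pruning away the smallest-magnitude weights. The sketch below shows only that basic pruning step, using PyTorch's built-in pruning utility on a stand-in network; the full procedure in the work summarised above also involves retraining the surviving weights from their original initialisation, which is not shown here.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small fully connected network standing in for the trained original.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

# Zero out the 90% smallest-magnitude weights in each Linear layer, leaving a
# sparse subnetwork roughly 10x smaller in (non-zero) weight count.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)

remaining = sum(int(m.weight.count_nonzero()) for m in model if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model if isinstance(m, nn.Linear))
print(f"{remaining}/{total} weights remain")
```

Whether such a pruned subnetwork can then be trained to match the original's accuracy is exactly the question the MIT work investigates.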


AI model improves breast cancer detection on mammograms

A new Artificial Intelligence (AI) model predicts breast cancer in mammograms more accurately than radiologists, reducing false positives and false negatives, ...

Deep Neural Networks in Medical Imaging and Radiology

A Google TechTalk, 5/11/17, presented by Le Lu ABSTRACT: Deep Neural Networks in Medical Imaging and Radiology: Preventative and Precision Medicine ...

Black-Box Medicine: Legal and Ethical Issues

Black-box medicine—the use of opaque computational models to make care decisions—has the potential to shape health care by improving and aiding many ...

Computing the Future: Setting New Directions (Part 1)

MIT Chancellor Cynthia Barnhart, the Ford Foundation Professor of Engineering, offers an introduction to the session on “Computing the Future: Setting New ...