AI News: Artificial neural networks improve and simplify intensive care ...

JMIR Publications

Large amounts of medical materials are consumed in the daily routine of intensive care units (ICUs).

These include, for example, sterile disposable material (eg, venous catheters or scalpels), material for body care (eg, absorbent pads, disposable flaps), and small items (eg, needles, swabs, or spatulas).

Due to the complexity of ICU treatment, the financing of ICUs is often based on a flat-rate reimbursement scheme [3], resulting in a fixed reimbursement for each day on the unit that does not take into account the reason for admission, the disease, or the resulting expenditure.

It is largely unknown how many materials are needed for a single patient with a specific disease, so analyses in this respect are currently not possible, even though these questions are highly relevant in daily practice.

This makes retrospective data analysis (eg, in the field of machine learning) considerably more difficult, as action and reaction are often critically time-linked, and it renders the scientific evaluation of measures significantly more time-consuming [5].

An intelligent, cost-effective solution to this problem is urgently needed, and it must suit the specific needs of the ICU with the following characteristics: (1) it recognizes materials without explicit labelling (eg, barcodes, radio frequency labelling);

The aim of this work is to develop and evaluate Consumabot, a novel client-server recognition system for medical consumable materials based on a convolutional neural network, as an approach to solve the above-mentioned challenges in the sector of intensive care medicine.

Based on these considerations, the distributed concept of Consumabot was developed as follows: multiple low-cost detection units are located close to the patient bed and are wirelessly connected to a local training server with high computational power for model training.
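
To make this split concrete, a minimal sketch of a detection-unit client is shown below; the server address, endpoint name, and file paths are assumptions for illustration and the code is not taken from the published Consumabot source, it only sketches how a unit might pull the latest model from the training server and classify captured images locally.

    # Minimal sketch of a detection-unit client (server address, endpoint, and paths are assumptions).
    import requests
    import tensorflow as tf

    TRAINING_SERVER = "http://training-server.local:8000"  # assumed address of the local training server
    MODEL_PATH = "model.h5"                                 # assumed local copy of the trained model

    def pull_latest_model():
        """Download the most recently trained model from the training server."""
        response = requests.get(f"{TRAINING_SERVER}/latest-model", timeout=30)
        response.raise_for_status()
        with open(MODEL_PATH, "wb") as f:
            f.write(response.content)
        return tf.keras.models.load_model(MODEL_PATH)

    def classify(model, image_path):
        """Run local inference on a single captured image and return class probabilities."""
        img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
        x = tf.keras.utils.img_to_array(img)[None, ...] / 255.0
        return model.predict(x)[0]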

This server was equipped with an Intel Xeon Gold 6140 processor with four dedicated processor cores, 40 gigabytes of storage space on a solid-state drive, and 320 gigabytes of storage space on a conventional magnetic hard disk for the resulting training data.

One major advantage of Python is the availability of a wide range of machine learning tools, such as NumPy for data preprocessing, scikit-learn for data mining, and Keras as a high-level neural network interface.

Consumabot uses TensorFlow, a software framework that simplifies the programming of data stream–centered procedures [17]; several adapted code elements for retraining image classifiers were included in Consumabot’s source code [18].

Since the training of a full neural network is a complex and computationally intensive process, we applied a technique called transfer learning, a machine learning method in which a model developed for one task is reused as the starting point for a model on a second task [19].

In transfer learning, basic image processing steps, such as the recognition of edges, objects, and picture elements, have already been trained over many iterations, while the classification task is only assigned to the neural network in its final step; this is analogous to the training of an infant.

Thus, a neural network pretrained on images achieves better results in this domain than in a domain such as natural language processing, which makes it necessary to choose a pretrained network suited to the task.

To facilitate online learning, the stored images of correctly recognized materials were transferred to the training server as training data at regular intervals, and the neural network model was retrained.
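
A minimal sketch of this periodic transfer is shown below; the interval, endpoint, and folder layout are assumptions for illustration and do not reflect the published implementation.

    # Sketch of the periodic training-data transfer (interval, endpoint, and folder layout assumed).
    import glob
    import time
    import requests

    TRAINING_SERVER = "http://training-server.local:8000"  # assumed address of the local training server
    CONFIRMED_DIR = "confirmed"                             # assumed: one sub-folder per material class
    SYNC_INTERVAL_S = 3600                                  # assumed: synchronize once per hour

    def sync_confirmed_images():
        """Upload stored images of correctly recognized materials as new training data."""
        for path in glob.glob(f"{CONFIRMED_DIR}/*/*.jpg"):
            label = path.split("/")[-2]  # class label taken from the sub-folder name
            with open(path, "rb") as f:
                requests.post(f"{TRAINING_SERVER}/training-data",
                              files={"image": f}, data={"label": label}, timeout=30)

    while True:
        sync_confirmed_images()
        time.sleep(SYNC_INTERVAL_S)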

After comparing MobileNet, AlexNet, GoogLeNet, and VGG16, we decided to use MobileNet, a class of efficient models for mobile and embedded vision applications, as a compromise between low computational requirements and high accuracy in image classification [24].
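
The retraining script released with the manuscript is the authoritative reference; the snippet below is only a generic Keras sketch of MobileNet-based transfer learning, with placeholder class count, image size, and directory name, showing how a pretrained feature extractor can be frozen and only a new classification head trained.

    # Generic transfer-learning sketch with a frozen MobileNet base (placeholders, not the published script).
    import tensorflow as tf

    NUM_CLASSES = 20      # placeholder: number of consumable material classes
    IMG_SIZE = (224, 224)

    # One sub-folder per material class, filled with the collected training images.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "training-data", image_size=IMG_SIZE, batch_size=32)

    base = tf.keras.applications.MobileNet(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False  # reuse the pretrained feature extractor unchanged

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255),
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # only this classification head is trained
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=10)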

We observed a model prediction accuracy of >99% after 60 training steps and 150 validation steps, with accuracy defined as the number of true positives divided by the total number of occurrences.

Our model showed a desirable cross-entropy of <0.03, with asymptotic stability after approximately 170 iteration steps in the validation set and 100 steps in the training set (Figure 5).

Data generation for an entirely new consumable, or for retraining when a previously trained consumable material significantly changed its outer appearance, took approximately 100 seconds (1 second per picture, 100 pictures).

Materials with large surface areas and many distinguishable visual features (eg, a disposable bag valve mask [mean recognition accuracy 1.0] or sterile syringes [mean recognition accuracy 0.9]) had particularly good detection rates.

For materials only distinguishable by color (eg, intravenous [IV] accesses in different colors), Consumabot showed lower recognition accuracies, for example for the grey IV access (mean 0.8) and the orange IV access (mean 0.6) (Figure 6, Table 1).

This was particularly true for materials with a small surface or with less distinguishable features (eg, an oxygen tube), where the recognition accuracy dropped by 0.3 from when it was uncovered (mean 0.9) to when it was covered (mean 0.6).

For elements that were small compared to the secondary material present in a scene, this mostly resulted in a drop in recognition accuracy (eg, a medication ampoule in the noncovered scenario [mean 0.8] versus a multiple-element scenario [mean 0.6]).

In this work we developed and evaluated Consumabot, a novel contactless visual recognition system for tracking medical consumable materials in ICUs using a deep learning approach on a distributed client-server architecture.

In our proof-of-concept study in the context of a real ICU environment, we observed a high classificatory performance of the system for a selection of medical consumables, thus confirming its wide applicability in a real-world hospital setting.

The development of software for processing complex visual information no longer requires specialized hardware and software, as even a complex neural network can now be trained without specialist knowledge.

This enables researchers and medical professionals alike to expand the use of artificial intelligence beyond today's commercial applications, such as natural language processing [27,28] or intention and pattern analysis [29], within constantly growing data volumes.

Using a convolutional neural network infrastructure, the Consumabot system consistently achieved good results in the classification of consumables and is thus a feasible way to directly recognize and register medical consumables in a hospital’s electronic health record (EHR) system.

Choosing a transfer learning technique based on MobileNet ensured a fast training time and consistently high recognition rates, achieving a good compromise between high accuracy, low computational requirements, and moderate model size.

Thus, we believe that Consumabot will ultimately enable hospitals to reduce costs associated with consumable materials and consequently let them spend their resources on higher quality care (eg, by employing additional medical personnel).

Nevertheless, the on-site study showed potential for optimization, particularly for standard medical consumables (eg, venous accesses of different sizes), which did not show fully satisfactory recognition rates.

When using Consumabot in scenarios where many objects are unknown to the system (eg, if the system will only be used for detection of certain objects), the software should be adapted to display only predictions above a certain confidence threshold.
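
A minimal sketch of such a filter is given below, assuming the model returns per-class probabilities; the threshold value is an arbitrary assumption and would need tuning for the specific deployment.

    # Minimal confidence-threshold filter (threshold value is an assumption).
    CONFIDENCE_THRESHOLD = 0.8

    def filter_prediction(class_names, probabilities):
        """Return the predicted material only if the model is sufficiently confident."""
        best = max(range(len(probabilities)), key=lambda i: probabilities[i])
        if probabilities[best] < CONFIDENCE_THRESHOLD:
            return None  # treat as an unknown object and do not register a consumable
        return class_names[best]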

The full source code of the detection unit, the pretrained model, and the training script have been released under the open source Apache License, Version 2.0 [32], and detailed assembly instructions have been released with the manuscript to encourage and enable other researchers to contribute to the development of the system and to assess its usability and feasibility in other use cases without increasing the financial burden on ICU patients [33].

Surgical robots, new medicines and better care: 32 examples of AI in healthcare

Artificial intelligence simplifies the lives of patients, doctors and hospital administrators by performing tasks that are typically done by humans, but in less time and at a fraction of the cost. One of the world's highest-growth industries, the AI sector was valued at about $600 million in 2014 and is projected to reach $150 billion by 2026.

Whether it's used to find new links between genetic codes or to drive surgery-assisting robots, artificial intelligence is reinventing — and reinvigorating — modern healthcare through machines that can predict, comprehend, learn and act.

The company’s deep learning platform analyzes unstructured medical data (radiology images, blood tests, EKGs, genomics, patient medical history) to give doctors better insight into a patient’s real-time needs.

The scientists used 25,000 images of blood samples to teach the machines how to search for bacteria. The machines then learned how to identify and predict harmful bacteria in blood with 95% accuracy.

Adam scoured billions of data points in public databases to hypothesize about the functions of 19 genes within yeast, producing 9 new hypotheses that proved accurate.

BERG recently presented its findings on Parkinson’s Disease treatment —  they used AI to find links between chemicals in the human body that were previously unknown — at the Neuroscience 2018 conference.

Location: Cambridge, Massachusetts How it's using AI in healthcare: Combining AI, the cloud and quantum physics, XtalPi’s ID4 platform predicts the chemical and pharmaceutical properties of small-molecule candidates for drug design and development.

Additionally, the company claims its crystal structure prediction technology (aka polymorph prediction) predicts complex molecular systems within days rather than weeks or months.

Atomwise’s AI technology screens between 10 and 20 million genetic compounds each day and can reportedly deliver results 100 times faster than traditional pharmaceutical companies.

Location: London, England How it's using AI in healthcare: The primary goal of BenevolentAI is to get the right treatment to the right patients at the right time by using artificial intelligence to produce a better target selection and provide previously undiscovered insights through deep learning.

A 2016 study of 35,000 physician reviews revealed 96% of patient complaints are about lack of customer service, confusion over paperwork and negative front desk experiences.

New innovations in AI healthcare technology are streamlining the patient experience, helping hospital staff process millions, if not billions, of data points faster and more efficiently.

The company’s technology helps hospitals and clinics manage patient data, clinical history and payment information by using predictive analytics to intervene at critical junctures in the patient care experience.

Location: Cleveland, Ohio How it's using AI in healthcare: The Cleveland Clinic teamed up with IBM to infuse its IT capabilities with artificial intelligence.  The world-renowned hospital is using AI to gather information on trillions of administrative and health record data points to streamline the patient experience.

Since implementing the program, the facility has seen a 60% improvement in its ability to admit patients and a 21% increase in patient discharges before noon, resulting in a faster, more positive patient experience.

Additionally, the inability to connect important data points slows the development of new drugs, preventative medicine and proper diagnosis. Many in healthcare are turning to artificial intelligence as a way to stop the data hemorrhaging.

Location: Seattle, Washington How it's using AI in healthcare: KenSci combines big data and artificial intelligence to predict clinical, financial and operational risk by taking data from existing sources to foretell everything from who might get sick to what's driving up a hospital’s healthcare costs.

The company’s software helps pathology labs eliminate bottlenecks in data management and uses AI-powered image analysis to connect data points that support cancer discovery and treatment.

How it's using AI in healthcare: When IBM’s Watson isn’t competing on Jeopardy!, it's helping healthcare professionals harness their data to optimize hospital efficiency, better engage with patients and improve treatment.

Location: Shenzhen, China How it's using AI in healthcare: ICarbonX is using AI and big data to look more closely at human life characteristics in a way they describe as "digital life." By analyzing the health and actions of human beings in a "carbon cloud," the company hopes its big data will become so powerful that it can manage all aspects of health.

Robots equipped with cameras, mechanical arms and surgical instruments augment the experience, skill and knowledge of doctors to create a new kind of surgery. Surgeons control the mechanical arms while seated at a computer console, and the robot gives the doctor a three-dimensional, magnified view of the surgical site that surgeons could not get from relying on their eyes alone.

Approved by the FDA over 18 years ago as the first robotic surgery assistant, the surgical machines feature cameras, robotic arms and surgical tools to aid in minimally invasive procedures.

Under a physician’s control, the tiny robot enters the chest through a small incision, navigates to certain locations of the heart by itself, adheres to the surface of the heart and administers therapy.

Location: Eindhoven, The Netherlands How it's using AI in healthcare: MicroSure’s robots help surgeons overcome their human physical limitations.  The company's motion stabilizer system reportedly improves performance and precision during surgical procedures.

Location: Caesarea, Israel How it's using AI in healthcare: Surgeons use the Mazor Robotics' 3D tools to visualize their surgical plans, read images with AI that recognizes anatomical features and perform a more stable and precise spinal operation.

How Graph Technology Is Changing Artificial Intelligence and Machine Learning

Graph enhancements to Artificial Intelligence and Machine Learning are changing the landscape of intelligent applications. Beyond improving accuracy and ...

Bringing AI and machine learning innovations to healthcare (Google I/O '18)

Could machine learning give new insights into diseases, widen access to healthcare, and even lead to new scientific discoveries? Already we can see how ...

Leukemia Screening using Artificial Intelligence

Classification For Acute Lymphocytic Leukemia Using Feature Extraction and Neural Network In White Blood Cell Stained Images. The 3rd International ...

HPC - The Computational Foundation of Deep Learning

In this video from the 2016 Stanford HPC Conference, Brian Catanzaro from Baidu presents: HPC - The Computational Foundation of Deep Learning. "During ...

DeepMind Health – how our Streams app is used at the Royal Free (with subtitles)

Sarah Stanley, consultant nurse at the Royal Free London NHS Foundation Trust, talks about how Streams is used on the wards. Streams is a secure instant ...

Artificial Intelligence Full Course | Artificial Intelligence Tutorial for Beginners | Edureka

Machine Learning Engineer Masters Program: This Edureka video on "Artificial ..

Convolutional Neural Networks Explained | Lecture 7

An intuitive explanation of Convolutional Neural Networks. Deep Learning Crash Course playlist: ...

Practical Use Cases for AI & Machine Learning in Healthcare Organizations

Artificial intelligence (AI) and machine learning (ML) are effective tools for managing ...

Demis Hassabis: Towards General Artificial Intelligence

Dr. Demis Hassabis is the Co-Founder and CEO of DeepMind, the world's leading General Artificial Intelligence (AI) company, which was acquired by Google in ...