AI News

Machine learning predicts behavior of biological circuits

In the new study, the researchers trained a neural network to predict the circular patterns that would be created by a biological circuit embedded into a bacterial culture.

To further improve accuracy, the team devised a method of training multiple copies of the machine learning model and comparing their answers.
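
A minimal sketch of that train-several-copies-and-compare idea, assuming a generic training routine; train_model and the agreement tolerance are hypothetical stand-ins, not the study's actual code.

```python
# Minimal sketch: train the same network from different random seeds and
# trust a prediction only where the copies agree. train_model is a
# hypothetical stand-in for the study's actual training routine.
import numpy as np

def ensemble_predict(train_model, X_train, y_train, X_new, n_models=4, tol=0.05):
    models = [train_model(X_train, y_train, seed=s) for s in range(n_models)]
    preds = np.stack([m.predict(X_new) for m in models])  # (n_models, n_samples)
    agreement = preds.std(axis=0) < tol    # True where the copies agree closely
    return preds.mean(axis=0), agreement   # averaged estimate + trust mask
```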

Then they used it to solve a second biological system that is computationally demanding in a different way, showing the algorithm can work for disparate challenges.

'This work was inspired by Google showing that neural networks could learn to beat a human in the board game Go,' said Lingchong You, professor of biomedical engineering at Duke.

The challenge facing You and his postdoctoral associate Shangying Wang was determining what set of parameters could produce a specific pattern in a bacterial culture carrying an engineered gene circuit.

By controlling variables such as the size of the growth environment and the amount of nutrients provided, the researchers found they could control the ring's thickness, how long it took to appear and other characteristics.

But because a single computer simulation took five minutes, it became impractical to search any large design space for a specific result.

To skip to the end results, Wang turned to a machine learning model called a deep neural network that can effectively make predictions orders of magnitude faster than the original model.

The network takes model variables as its input, initially assigns random weights and biases, and spits out a prediction of what pattern the bacterial colony will form, completely skipping the intermediate steps leading to the final pattern.
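
A hedged sketch of such a surrogate network, assuming it maps a vector of simulation parameters directly to a few pattern descriptors; the layer sizes and the 13-input/3-output shape are illustrative assumptions, not the study's architecture.

```python
# Hedged sketch of a surrogate network: map simulation parameters
# (e.g., domain size, nutrient level) straight to pattern descriptors
# (e.g., ring thickness, time to appear), skipping the mechanistic
# simulation entirely. Shapes are illustrative, not the paper's.
import torch
import torch.nn as nn

surrogate = nn.Sequential(
    nn.Linear(13, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3),
)

params = torch.rand(64, 13)      # a batch of candidate parameter sets
patterns = surrogate(params)     # predicted descriptors, no simulation run
```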

While this is a completely different reason for long computational run times than their initial model, the researchers found their approach still worked, showing it is generalizable to many different complex biological systems.

Can Artificial Intelligence Help to Prevent Sexual Harassment?

It is difficult not to be moved the first time you visit Safecity’s website and watch the opening video with the testimonials of women who were raped in India.

The question that arises for a woman when walking in the streets, sitting in a quiet place to drink a cup of coffee, or getting on the bus to go home is: am I safe here? To answer this question, Omdena and SafeCity India organized a two-month AI challenge in which I was one of 30 collaborators building an AI solution.

Simply put: given past information about crimes against women that happened on certain dates in certain places, how can we predict which places have a high chance of an incident occurring and which places are safe?

One of the most effective ways to express a heatmap that can be used in mathematical models is through the use of matrices, where each cell represents a square portion of space in a given distance unit and the colors represent the intensity of the studied event within each mapped cell.

In figure 2 we can see an example of a heatmap predicted by a machine learning model, plotted on a grid divided into 30 by 30 cells, where each cell has a color representing the crime intensity in Delhi on August 13.

One earlier work used this technique as a regression problem to predict hourly crime counts on a grid-divided map of a region of Los Angeles; its main contribution was to apply a spatial and temporal regularization technique to the dataset.

This ANN model aggregates heatmaps according to the concepts of trend, period, and closeness, where each of these terms defines a temporal distance between heatmaps and has its own internal branch inside the model in which it learns features related to that distance, as we can see in figure 4.
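
As a rough illustration of that aggregation, the sketch below builds the three input stacks from a time-indexed pile of hourly heatmaps; the specific offsets (hours, days, weeks) are illustrative assumptions, not the paper's exact values.

```python
# Sketch of the trend/period/closeness aggregation, assuming hourly
# heatmaps stacked along axis 0. Offsets are illustrative.
import numpy as np

def temporal_inputs(heatmaps, t):
    """Build the three input stacks used to predict the map at time t."""
    closeness = heatmaps[t-3:t]               # the last few hours
    period = heatmaps[[t-24, t-48, t-72]]     # same hour on previous days
    trend = heatmaps[[t-168, t-336]]          # same hour in previous weeks
    return closeness, period, trend

maps = np.zeros((1000, 30, 30))               # dummy stack of hourly 30x30 grids
c, p, tr = temporal_inputs(maps, t=500)
```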

The SFTT model, presented in figure 5, receives as input a sequence of heatmaps aggregated in time by class and outputs a future heatmap with predicted hotspots, which, by the article’s definition, are places where at least a single crime happened, along with a category probability.

Only later, when I read Udo Schlegel’s master’s thesis, Towards Crime Forecasting Using Deep Learning, in which he uses encoder-decoders and GANs to predict heatmaps, with results that looked more similar to the ones I had found before, did I change my mind about crime prediction.

Even though I failed to use this model, I would consider revisiting its study and implementation in the future, since it could help immensely in cases where we want a clear distinction between predicting ordinary incidents and rape against women.

The model’s input is a sequence of binary maps over time, where each cell contains one if at least one incident happened there (burglary, in the article’s case) and zero otherwise; the output is a risk probability map for the next period.

Being loose about aggregation let the spatial dimension be explored even further, making it possible to increase the resolution of the heatmaps, since accumulating counts was no longer a necessity; the simple existence of a single incident inside the delimited cell square was enough.

The dataset provided was stored in a spreadsheet containing around 11,000 entries, where each row held an incident reported by a woman at a certain place on the globe, with information such as incident date, description, latitude, longitude, place, category, and so on.

From this point, the following pipeline was adopted to build a solution. Most of the open datasets used for crime studies contain large numbers of detailed samples, varying from 100,000 to 10,000,000 reports, since they are generally provided by local police departments, which have good computerized systems capable of collecting, organizing, and storing this valuable information over a delimited region.

These values seem aggressive for such a small dataset, but as stated earlier, binary maps allowed finer cells, since each cell needs only one incident, and daily granularity was used, even with many missing values between days, because the data augmentation technique described below still gave us good results.

Heatmaps were made using an aggregation technique: first we converted the latitude and longitude of our data samples into matrix coordinates, and then we summed all the samples that fell into the same coordinate within the same temporal granularity, as demonstrated in figure 9.

After synthetically creating the missing heatmaps, a threshold value was arbitrarily chosen to convert them into binary maps (figure 10), since, as stated, the model works with risky versus non-risky hotspot prediction; a minimal sketch of both steps follows.
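
The sketch below illustrates the aggregation (figure 9) and binarization (figure 10) steps just described, assuming a 30 by 30 grid; the Delhi bounding box and the threshold of one incident are illustrative assumptions, not the project's actual values.

```python
# Map each (lat, lon) report onto a 30x30 grid, count incidents per cell,
# then threshold the counts into a binary risk map. Bounds and threshold
# are illustrative.
import numpy as np

def to_heatmap(lats, lons, grid=30,
               lat_min=28.40, lat_max=28.90, lon_min=76.80, lon_max=77.40):
    rows = ((lats - lat_min) / (lat_max - lat_min) * grid).astype(int)
    cols = ((lons - lon_min) / (lon_max - lon_min) * grid).astype(int)
    heat = np.zeros((grid, grid))
    for r, c in zip(rows.clip(0, grid - 1), cols.clip(0, grid - 1)):
        heat[r, c] += 1                        # sum incidents per cell per day
    return heat

heat = to_heatmap(np.array([28.61, 28.62]), np.array([77.21, 77.21]))
binary_map = (heat >= 1).astype(int)           # threshold into a binary map
```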

For the train/test split, the decision boundary was based on the temporal range of the data, the period from 2013 to 2017: samples before the second half of 2016 were selected to train the model, and the second half onward was used to validate it.

Heatmaps are very large sparse matrices (matrices with lots of zeroes), so to balance a loss that would naturally drift toward the easy way out, learning to predict only zeros, we use a weighted cross-entropy function for backpropagation that puts more emphasis on missed ones.

To avoid penalizing predictions so harshly for near-misses that we end up adjusting the model into a totally overfitted algorithm, we also give some score to predicted cells in the neighboring areas around hotspots, since those predictions did not miss by much.
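
A hedged sketch of such a loss, assuming probability maps and binary targets as numpy arrays; the positive-class weight and the partial credit for cells adjacent to hotspots are illustrative values, not the project's.

```python
# Weighted cross-entropy: up-weight the rare "ones" and soften the
# penalty in cells bordering true hotspots. Weights are illustrative.
import numpy as np
from scipy.ndimage import binary_dilation

def weighted_bce(pred, target, pos_weight=50.0, near_weight=0.3, eps=1e-7):
    hot = target.astype(bool)
    near = binary_dilation(hot) & ~hot         # cells bordering true hotspots
    weights = np.ones_like(pred)
    weights[hot] = pos_weight                  # emphasize the sparse ones
    weights[near] = near_weight                # partial credit for near-misses
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return (weights * bce).mean()
```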

It is easy to check that the 99th-percentile prediction gives a more accurate risky area, but at the same time it has more chance of missing important cells, since it aggressively classifies only areas where we are almost certain that something can happen.

One way to improve the resolution of the output is to use bilinear interpolation to increase the heatmap size, providing a finer resolution when presenting to the final user, like the one shown in figure 15.
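
For instance, a 30 by 30 risk map can be enlarged with first-order interpolation; the 4x factor below is an illustrative choice.

```python
# Upscale a 30x30 risk map to 120x120 with first-order (bilinear-style)
# interpolation for presentation.
import numpy as np
from scipy.ndimage import zoom

risk_30 = np.random.rand(30, 30)       # stand-in for a predicted risk map
risk_120 = zoom(risk_30, 4, order=1)   # 120x120, linearly interpolated
```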

Gain insight into ESPN fantasy football with Watson

If you choose to get bitter, you might as well taste a lemon, throw a dart, and let chance guide your team management decisions.

Watson will read millions of articles, watch thousands of videos, and listen to hundreds of podcasts and distill it all into actionable evidence that can ultimately help you win.

Here in Part 1 of 4, we will describe the system architecture, discuss the hybrid cloud approach, show our monitoring strategy and introduce the fair machine learning pipeline.

Sitting between our user experience and AI components is a web acceleration tier comprising two stacked content delivery networks (CDNs).

If our active Dallas region has an issue or we need to perform maintenance, the consumer-facing experience is not impacted because our AI insights are still available across hundreds of edge servers, which provide a continuously available user experience with a highly available back end.

The Python applications handle the AI algorithms, including data cleansing, normalization, model training and testing, fairness evaluation, and multimedia management.

The JavaScript applications that are run through Node.js combine finished data artifacts to generate content in support of the user experience.

If any of the applications experience slowness or a service outage, our continuous service monitoring will send out alerts via email, Slack, and mobile.

This app pulls news articles from Watson Discovery and enrolls the articles into a custom collection with our statistical entity detector.

The incremental runs are driven by a Python app that detects if a player’s projections or actuals have changed, and if so, sends a post request to the Cloud Foundry Natural Language Container app that runs the player through the machine learning pipeline.
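
A minimal sketch of that incremental pattern: compare each player's latest projection against the previous run and POST changed players to the natural language container. The endpoint URL and payload field names are hypothetical stand-ins, not IBM's actual interfaces.

```python
# Detect changed player projections and trigger the NLP pipeline for them.
import requests

def run_incremental(players, previous):
    for player in players:
        old = previous.get(player["id"])
        if old is None or player["projection"] != old["projection"]:
            requests.post(
                "https://nl-container.example.com/pipeline",  # hypothetical endpoint
                json={"playerId": player["id"]},
                timeout=10,
            )
```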

In parallel, and on the hour, a player pre-processor updates player information, such as a trade, injury, suspension, and bye status.

All of the converted text from the video and podcast transcripts, as well as from articles, is used as the basis of the machine learning pipeline.

The textual data is turned into predictors to determine player boom, bust, likelihood to play with an injury, and likelihood to play meaningful minutes.

The open source AI Fairness 360 library identifies and mitigates bias present within the output of the boom and bust models.

Where the favorable label correlates with a team in a way that groups players unfairly, the effect is dampened through slight modifications of the boom and bust probabilities.
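
One illustrative reading of that dampening step is sketched below; the shrink-toward-the-mean rule and factor are assumptions for illustration, not the system's documented method.

```python
# Pull boom/bust probabilities for players on favored teams back toward
# the overall mean. The flagging mask and shrink factor are hypothetical.
import numpy as np

def dampen(probs, favored_mask, factor=0.9):
    mean = probs.mean()
    out = probs.copy()
    out[favored_mask] = mean + factor * (out[favored_mask] - mean)
    return out

boom = np.array([0.80, 0.55, 0.70, 0.40])
favored = np.array([True, False, True, False])  # flagged by the fairness check
adjusted = dampen(boom, favored)                # slightly shrunk probabilities
```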

As data expires by passing the duration of the time-to-live limit, request traffic will eventually go to the origin or Cloud Object Storage for data to be cached back on the content delivery networks.

As data is generated by the Node.js app, any existing data within Cloud Object Storage is replaced by the newer data and pushed through the delivery networks.

The timestamp is placed within the HTML file to prevent the social media platforms from using a cached version of the media.

The social sharing app then posts a request to the social image generator app to create snapshots of player cards.

The crawler posts a request to the Cloud Foundry natural language container to process the player through the entire machine learning pipeline.

Running our system distributed across different clouds, including third-party clouds, is critical to sustaining a large enterprise-scale AI computing system like Watson Insights.

The configuration and monitoring features are accessible on the IBM public cloud while the midgress and edge servers are provided by the Akamai cloud.

Over the course of a week, the system handled 3.65 billion hits and served 214.4 TB of data with a cache hit ratio of 82.82 percent.

The general traffic flow is as follows: The data TTL is set according to data type and put into a header by the content generation Node.js app.
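
The production content generator is a Node.js app; the Flask sketch below shows the same idea in Python: choose a TTL per data type and expose it as a Cache-Control header so the CDNs know how long to keep each artifact. Routes and TTL values are illustrative.

```python
# Set a per-data-type TTL in the Cache-Control response header.
from flask import Flask, jsonify

app = Flask(__name__)
TTL_SECONDS = {"insights": 300, "player-card": 3600}  # per data type

@app.route("/data/<data_type>")
def serve(data_type):
    resp = jsonify({"type": data_type})
    resp.headers["Cache-Control"] = f"max-age={TTL_SECONDS.get(data_type, 60)}"
    return resp
```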

With dozens of services and dependencies, instrumenting the application to find vulnerability points and generating a first response under a service failure is critical to sustaining a continuously available service to the end user.

If a service receives an error that is defined by alerting rules, such as HTTP returns that are not 200 codes or response-time lags, alerts are sent to IBM alerting.

If operators do not acknowledge a problem, secondary escalation messages can be configured and sent to second-line support staff.
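
A rough sketch of these alerting rules: flag non-200 or slow responses, notify first-line operators, and escalate if no one acknowledges. The endpoints, thresholds, and notification stub are hypothetical.

```python
# Health check plus two-stage escalation, as described above.
import time
import requests

def healthy(url, max_seconds=2.0):
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=max_seconds)
    except requests.RequestException:
        return False
    return resp.status_code == 200 and time.monotonic() - start <= max_seconds

def notify(who):
    print(f"ALERT -> {who}")                     # stand-in for email/Slack/mobile

def alert_cycle(url, acknowledged, wait=600):
    if not healthy(url):
        notify("first-line operators")
        time.sleep(wait)
        if not acknowledged():
            notify("second-line support staff")  # secondary escalation
```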

Finally, Watson learns patterns from millions of words, combined with traditional fantasy football statistics, to produce interpretable insights.

When given the keyword test between players and teams, 80 percent of the questions were correct if the answer was in the top 1 percent of ranked answers.

When given the keyword test between team and location, 75 percent were correct when the correct answer was in the top 1 percent of the results.

If the output of the model was a boom or bust, we used the AI Fairness 360 library to mitigate any bias associated with a specific team since some teams are more popular than others.

Next, the boom and bust features need to be normalized to be related to the score distribution’s shape before the 15th percentile and after the 85th percentile.
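
An illustrative reading of this normalization, assuming the 15th and 85th percentiles act as scaling boundaries for the boom and bust features; the data and the exact rule are stand-ins, not the system's documented formula.

```python
# Rescale features against the tails of the score distribution.
import numpy as np

scores = np.random.normal(10, 3, size=1000)   # stand-in fantasy point scores
lo, hi = np.percentile(scores, [15, 85])

def normalize(feature):
    return (feature - lo) / (hi - lo)         # tails fall outside [0, 1]

print(normalize(np.array([lo, (lo + hi) / 2, hi])))  # -> [0. 0.5 1.]
```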

Surgical robots, new medicines and better care: 32 examples of AI in healthcare

Artificial intelligence simplifies the lives of patients, doctors and hospital administrators by performing tasks that are typically done by humans, but in less time and at a fraction of the cost. One of the world's highest-growth industries, the AI sector was valued at about $600 million in 2014 and is projected to reach $150 billion by 2026.

Whether it's used to find new links between genetic codes or to drive surgery-assisting robots, artificial intelligence is reinventing — and reinvigorating — modern healthcare through machines that can predict, comprehend, learn and act.

The company’s deep learning platform analyzes unstructured medical data (radiology images, blood tests, EKGs, genomics, patient medical history) to give doctors better insight into a patient’s real-time needs.

The scientists used 25,000 images of blood samples to teach the machines how to search for bacteria. The machines then learned how to identify and predict harmful bacteria in blood with 95% accuracy.

Adam scoured billions of data points in public databases to hypothesize about the functions of 19 genes within yeast, producing nine new hypotheses that proved accurate.

BERG recently presented its findings on Parkinson’s Disease treatment —  they used AI to find links between chemicals in the human body that were previously unknown — at the Neuroscience 2018 conference.

Location: Cambridge, Massachusetts How it's using AI in healthcare: Combining AI, the cloud and quantum physics, XtalPi’s ID4 platform predicts the chemical and pharmaceutical properties of small-molecule candidates for drug design and development.

Additionally, the company claims its crystal structure prediction technology (aka polymorph prediction) predicts complex molecular systems within days rather than weeks or months.

Atomwise’s AI technology screens between 10 and 20 million genetic compounds each day and can reportedly deliver results 100 times faster than traditional pharmaceutical companies.

Location: London, England How it's using AI in healthcare: The primary goal of BenevolentAI is to get the right treatment to the right patients at the right time by using artificial intelligence to produce a better target selection and provide previously undiscovered insights through deep learning.

A 2016 study of 35,000 physician reviews revealed 96% of patient complaints are about lack of customer service, confusion over paperwork and negative front desk experiences.

New innovations in AI healthcare technology are streamlining the patient experience, helping hospital staff process millions, if not billions, of data points faster and more efficiently.

The company’s technology helps hospitals and clinics manage patient data, clinical history and payment information by using predictive analytics to intervene at critical junctures in the patient care experience.

Location: Cleveland, Ohio How it's using AI in healthcare: The Cleveland Clinic teamed up with IBM to infuse its IT capabilities with artificial intelligence.  The world-renowned hospital is using AI to gather information on trillions of administrative and health record data points to streamline the patient experience.

Since implementing the program, the facility has seen a 60% improvement in its ability to admit patients and a 21% increase in patient discharges before noon, resulting in a faster, more positive patient experience.

Additionally, the inability to connect important data points slows the development of new drugs, preventative medicine and proper diagnosis. Many in healthcare are turning to artificial intelligence as a way to stop the data hemorrhaging.

Location: Seattle, Washington How it's using AI in healthcare: KenSci combines big data and artificial intelligence to predict clinical, financial and operational risk by taking data from existing sources to foretell everything from who might get sick to what's driving up a hospital’s healthcare costs.

The company’s software helps pathology labs eliminate bottlenecks in data management and uses AI-powered image analysis to connect data points that support cancer discovery and treatment.

How it's using AI in healthcare: When IBM’s Watson isn’t competing on Jeopardy!, it's helping healthcare professionals harness their data to optimize hospital efficiency, better engage with patients and improve treatment.

Location: Shenzhen, China How it's using AI in healthcare: ICarbonX is using AI and big data to look more closely at human life characteristics in a way the company describes as “digital life.” By analyzing the health and actions of human beings in a “carbon cloud,” the company hopes its big data will become so powerful that it can manage all aspects of health.

Robots equipped with cameras, mechanical arms and surgical instruments augment the experience, skill and knowledge of doctors to create a new kind of surgery. Surgeons control the mechanical arms while seated at a computer console, and the robot gives the doctor a three-dimensional, magnified view of the surgical site that surgeons could not get from relying on their eyes alone.

The first robotic surgery assistant approved by the FDA, over 18 years ago, the surgical machines feature cameras, robotic arms and surgical tools to aid in minimally invasive procedures.

Under a physician’s control, the tiny robot enters the chest through a small incision, navigates to certain locations of the heart by itself, adheres to the surface of the heart and administers therapy.

Location: Eindhoven, The Netherlands How it's using AI in healthcare: MicroSure’s robots help surgeons overcome their human physical limitations.  The company's motion stabilizer system reportedly improves performance and precision during surgical procedures.

Location: Caesarea, Israel How it's using AI in healthcare: Surgeons use the Mazor Robotics' 3D tools to visualize their surgical plans, read images with AI that recognizes anatomical features and perform a more stable and precise spinal operation.

How A.I learns? - Artificial Intelligence - Episode 5

In this episode of Artificial Intelligence, we will look at how A.I learns. There are five major types of learning for Artificial Intelligence. 1. Supervised Learning 2.

The Present and Future of Machine Learning and Artificial Intelligence

Visit for more information on business intelligence and data warehousing training and education. TDWI Las Vegas Conference 2019 Keynote: The ..

Artificial Neural Networks and Deep Learning | Two Minute Papers #3

Artificial neural networks provide us with incredibly powerful tools in machine learning that are useful for a variety of tasks ranging from image classification to voice ...

OpenAI 5 Explained | Artificial Intelligence In Dota 2 | Edureka

This Edureka video will give you a fun insight into artificial intelligence and deep learning ..

Artificial intelligence’s impact on civil engineering

The NAO robot asks Arup director Tim Chapman further questions about the broader impact of artificial intelligence on the civil engineering profession. Find out ...

Ethics In AI And Artificial Narrow Intelligence | AI for Business #5

Ethics In AI And Artificial Narrow Intelligence | AI for Business #5 In this episode of AI for Business, we dive into the complex issues surrounding ethics in AI and ...

Introduction To TensorFlow | Deep Learning with TensorFlow | TensorFlow For Beginners | Edureka

AI & Deep Learning with Tensorflow Training: This Edureka video on "Introduction to TensorFlow" provides you an insight into one of the ..

AI for Supply Chain

Every product in your home is there as a result of being distributed across what's called a supply chain. The path that a commodity takes through manufacturing, ...

Neural Networks, Internet Trends Research, Google Drive Hacks & more - Growth Insights #13

The Growth Insights series is back and bigger than ever! In this episode we're going to focus on neural networks, internet trends research, Google Drive hacks ...

Artificial Intelligence Starting From a Blank Slate

Read more: A grand ..