AI News, Machine Learning and Deep Learning Day

Machine Learning and Deep Learning Day|Singapore|12 December 2019

08:40AM – 08:45AM

Registration

08:45AM – 09:00AM

09:00AM - 09:45AM

Keynote topic: Ideas for creating through machine learning

09:45AM – 10:30AM

10:30AM – 10:45AM

10:45AM - 11:30AM

Sathvik Rao, Practice Director - Digital Architecture, Asia Pacific &

Machine learning: A new wave in financial services

Machine Learning and AI trends in the logistics industry

Brand storytelling in the machine era

Combating fraud using Artificial Intelligence

Artificial intelligence in healthcare

Artificial intelligence (AI) in healthcare is the use of complex algorithms and software to emulate human cognition in the analysis of complicated medical data.

What distinguishes AI technology from traditional technologies in health care is the ability to gain information, process it and give a well-defined output to the end-user.

AI algorithms behave differently from humans in two ways: (1) algorithms are literal: once a goal is set, the algorithm cannot adjust itself and understands only what it has been told explicitly; and (2) algorithms are black boxes: they can produce predictions without revealing how they arrived at them.

AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care.

Healthcare organizations are also turning to AI software to support operational initiatives that increase cost savings, improve patient satisfaction, and satisfy their staffing and workforce needs.[8]

AI-based tools have also been developed that help healthcare managers improve business operations by increasing utilization, decreasing patient boarding, reducing length of stay, and optimizing staffing levels.[9]

Over time, researchers and developers recognized that AI systems in healthcare must be designed to accommodate the absence of perfect data and to build on the expertise of physicians.[14]

In radiology, the ability of AI to interpret imaging results may aid clinicians in detecting minute changes in an image that a clinician might otherwise miss.

A study at Stanford created an algorithm that could detect pneumonia, at that specific site and in the patients involved, with a better average F1 score (a statistical metric based on precision and recall) than the radiologists involved in that trial.[25]
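
The F1 score mentioned above is the harmonic mean of precision and recall. The short Python sketch below only illustrates the definition; the precision and recall values are hypothetical, not figures reported in the Stanford study.

def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Hypothetical values for illustration only.
print(f1_score(0.80, 0.70))  # ~0.747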

The emergence of AI technology in radiology is perceived as a threat by some specialists, because, in isolated cases, the technology can outperform specialists on certain statistical metrics.[26][27]

Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.[28][29]

In 2018, a paper published in the journal Annals of Oncology mentioned that skin cancer could be detected more accurately by an artificial intelligence system (which used a deep learning convolutional neural network) than by dermatologists.

On average, the human dermatologists accurately detected 86.6% of skin cancers from the images, compared to 95% for the CNN.[30]

One study conducted by the Centerstone research institute found that predictive modeling of EHR data has achieved 70–72% accuracy in predicting individualized treatment response at baseline.

To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature.

Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms.[40]

Other algorithms identify drug-drug interactions from patterns in user-generated content, especially electronic health records and/or adverse event reports.[36][37]
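
As a rough illustration only (not the DDIExtraction challenge systems themselves, and with invented sentences and labels), one common baseline frames drug-drug interaction extraction as binary classification of sentences that mention a drug pair:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training sentences mentioning a drug pair, labelled 1 if the
# sentence asserts an interaction and 0 otherwise.
sentences = [
    "Concomitant use of DRUG_A with DRUG_B increases the risk of bleeding.",
    "DRUG_A may potentiate the hypotensive effect of DRUG_B.",
    "DRUG_A and DRUG_B were both administered to the control group.",
    "No interaction was observed between DRUG_A and DRUG_B.",
]
labels = [1, 1, 0, 0]

# A simple n-gram + linear-model baseline; real systems use richer features
# or neural models trained on annotated corpora such as DDIExtraction 2013.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

print(model.predict(["DRUG_A significantly raises plasma levels of DRUG_B."]))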

A further motive for large health companies to merge with other health companies is that mergers allow for greater health data accessibility.[43]

A second project with the NHS involves analysis of medical images collected from NHS patients to develop computer vision algorithms to detect cancerous tissues.[53]

Intel's venture capital arm Intel Capital recently invested in startup Lumiata which uses AI to identify at-risk patients and develop care options.[54]

A team associated with the University of Arizona and backed by BPU Holdings began collaborating on a practical tool to monitor anxiety and delirium in hospital patients, particularly those with dementia.[64]

The AI utilized in the new technology – Senior's Virtual Assistant – goes a step beyond and is programmed to simulate and understand human emotions (artificial emotional intelligence).[65]

Doctors working on the project have suggested that in addition to judging emotional states, the application can be used to provide companionship to patients in the form of small talk, soothing music, and even lighting adjustments to control anxiety.

Virtual nursing assistants are predicted to become more common; these will use AI to answer patients' questions and help reduce unnecessary hospital visits.

Overall, as Quan-Haase (2018) says, technology “extends to the accomplishment of societal goals, including higher levels of security, better means of communication over time and space, improved health care, and increased autonomy” (p. 43).

While research on the use of AI in healthcare aims to validate its efficacy in improving patient outcomes before broader adoption, its use may nonetheless introduce several new types of risk to patients and healthcare providers, such as algorithmic bias, do-not-resuscitate implications, and other machine morality issues.

“We already have some scientists who know artificial intelligence and machine learning, but we want complementary people who can look forward and see how this technology will evolve.”[75]

As of November 2018, eight use cases are being benchmarked, including assessing breast cancer risk from histopathological imagery, guiding anti-venom selection from snake images, and diagnosing skin lesions.[77][78]

Machine Learning and Deep Learning Day|Bangalore|12 December 2019

Head of Design, Flipkart
09:45AM – 10:15AM  Tea/Coffee Break
10:15AM – 11:45AM  Does AI and Machine Learning create a Data Driven Organisation Culture? – Ram Kumar, Head and Senior Vice President – India, Quantium
11:45AM – 12:15PM  Big Data and AI – Nishant Goyal, SDE II, Microsoft
12:15PM – 01:15PM  Lunch Break
01:15PM – 01:45PM  Blueprint for Building AI Products – Amit Baldwa, Director Engineering (R&D), Finastra
01:45PM – 02:30PM  Machine Learning For NLP – Dr Chandrasekhar Subramanyam, Senior Professor and Director Business Analytics, IFIM Business School, Bangalore
02:30PM – 03:00PM  Tea/Coffee Break
03:00PM – 03:30PM  Knowledge Graphs (KG) – Karrtik Iyer, Data Architect, ThoughtWorks
03:30PM – 04:00PM  Anomalies Detection using Deep Learning – Neural Networks and MXNet – Krishnendu Dasgupta, Senior Consultant - AI &

Deepfake

Deepfakes (a portmanteau of 'deep learning' and 'fake'[1]) are media that take a person in an existing image or video and replace them with someone else's likeness using artificial neural networks.[2]

They often combine and superimpose existing media onto source media using machine learning techniques known as autoencoders and generative adversarial networks (GANs).[3][4]
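
A minimal sketch of the shared-encoder, per-identity-decoder autoencoder idea commonly described for face swapping follows. The architecture, image size, and tensors below are assumptions for illustration rather than any particular deepfake tool; a GAN setup would additionally train a discriminator to make the generated faces harder to distinguish from real ones.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Compresses a 64x64 RGB face into a small latent vector.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, 128),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Reconstructs a face image from the latent vector.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

# One shared encoder, one decoder per identity. Training reconstructs each
# person's faces through that person's decoder; at swap time, person A's face
# is encoded and then decoded with person B's decoder.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_a = torch.rand(1, 3, 64, 64)      # placeholder image tensor
swapped = decoder_b(encoder(face_a))   # A's pose/expression rendered as B
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])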

Deepfakes have garnered widespread attention for their uses in celebrity pornographic videos, revenge porn, fake news, hoaxes, and financial fraud.[5][6][7][8]

An early landmark project was the Video Rewrite program, published in 1997, which modified existing video footage of a person speaking to depict that person mouthing the words contained in a different audio track.[14]

It was the first system to fully automate this kind of facial reanimation, and it did so using machine learning techniques to make connections between the sounds produced by a video's subject and the shape of the subject's face.[14]

The “Synthesizing Obama” program, published in 2017, modifies video footage of former president Barack Obama to depict him mouthing the words contained in a separate audio track.[15]

The Face2Face program, published in 2016, modifies video footage of a person's face to depict them mimicking the facial expressions of another person in real time.[16]

The project lists as a main research contribution the first method for re-enacting facial expressions in real time using a camera that does not capture depth, making it possible for the technique to be performed using common consumer cameras.[16]

Other online communities remain, including Reddit communities that do not share pornography, such as r/SFWdeepfakes (short for 'safe for work deepfakes'), in which community members share deepfakes depicting celebrities, politicians, and others in non-pornographic scenarios.[24]

In June 2019, a downloadable Windows and Linux application called DeepNude was released which used neural networks, specifically generative adversarial networks, to remove clothing from images of women.

Deepfakes have begun to see use in popular social media platforms, notably through Zao, a Chinese deepfake app that allows users to substitute their own faces onto those of characters in scenes from films and television shows such as Romeo+Juliet and Game of Thrones.[53]

The app originally faced scrutiny over its invasive user data and privacy policy, after which the company put out a statement claiming it would revise the policy.[54]

In 2019, a U.K.-based energy firm's CEO was scammed over the phone when an individual used audio deepfake technology to impersonate the voice of the firm's parent company's chief executive and instructed him to transfer €220,000 into a Hungarian bank account.[56]

The perpetrator reportedly called three times and requested a second payment but was turned down when the CEO realized the phone number of the caller was Austrian and that the money was not being reimbursed as he was told it would be.[56]

Similarly, computer science associate professor Hao Li of the University of Southern California states that deepfakes created for malicious use, such as fake news, will be even more harmful if nothing is done to spread awareness of deepfake technology.[57]

Li predicts that genuine videos and deepfakes will become indistinguishable in as soon as half a year, as of October 2019, due to rapid advancement in artificial intelligence and computer graphics.[57]

In a prepared statement, she expressed that despite concerns, she would not attempt to remove any of her deepfakes, due to her belief that they do not affect her public image and that differing laws across countries and the nature of internet culture make any attempt to remove the deepfakes 'a lost cause'.[60]

While celebrities like herself are protected by their fame, she believes that deepfakes pose a grave threat to women of lesser prominence, who could have their reputations damaged by depiction in involuntary deepfake pornography or revenge porn.[60]

In September 2018, Google added 'involuntary synthetic pornographic imagery' to its ban list, allowing anyone to request that search results showing their fake nudes be blocked.[72]

California's Assembly Bill 730 prohibits the distribution of malicious deepfake audio or visual media targeting a candidate running for public office within 60 days of their election.[79]

Its plot revolves around digitally enhanced or digitally generated videos produced by skilled hackers serving unscrupulous lawyers and political figures.[81]

Jack Wodhams calls such fabricated videos picaper or mimepic—image animation based on 'the information from the presented image, and copied through choices from an infinite number of variables that a program might supply'.[81]

In the 1992 techno-thriller A Philosophical Investigation by Philip Kerr, 'Wittgenstein', the main character and a serial killer, makes use of both software similar to deepfakes and a virtual reality suit to have sex with an avatar of the female police lieutenant Isadora 'Jake' Jakowicz, who is assigned to catch him.[82]

Unsupervised Deep Learning - Google DeepMind & Facebook Artificial Intelligence NeurIPS 2018

Presented by Alex Graves (Google DeepMind) and Marc Aurelio Ranzato (Facebook) on December 3rd, 2018. This tutorial on Unsupervised Deep Learning ...

How Can Physics Inform Deep Learning Methods - Anuj Karpatne

Anuj Karpatne (University of Minnesota) Contributed Talk 6: How Can Physics Inform Deep Learning Methods in Scientific Problems? Deep Learning for ...

A friendly introduction to Deep Learning and Neural Networks

Announcement: New Book by Luis Serrano! Grokking Machine Learning. bit.ly/grokkingML A friendly introduction to neural networks and deep learning. This is a ...

Why Deep Learning Works: ICSI UC Berkeley 2018

Recent development in the Theory of Heavy Tailed Self Regularization for Deep Neural Networks. An invited talk at The International Computer Science Institute ...

Automated Deep Learning: Joint Neural Architecture and Hyperparameter Search (discussions) | AISC

Toronto Deep Learning Series, 10 December 2018 Paper: Discussion Lead: Mark Donaldson (Ryerson University) Discussion ..

Bringing AI and machine learning innovations to healthcare (Google I/O '18)

Could machine learning give new insights into diseases, widen access to healthcare, and even lead to new scientific discoveries? Already we can see how ...

Automated Deep Learning: Joint Neural Architecture and Hyperparameter Search (algorithm) | AISC

Toronto Deep Learning Series, 10 December 2018 Paper: Discussion Lead: Mark Donaldson (Ryerson University) Discussion ..

Machine Learning @ Amazon by Rajeev Rastogi

Discussion meeting: The Theoretical Basis of Machine Learning (ML). Organizers: Chiranjib Bhattacharya, Sunita Sarawagi, Ravi Sundaram ...

Demo Day Pitch - AI Category - Butterfly AI

Butterfly.ai "Executive coaching for every manager" Demo Day pitch at the A-ha! conference, December 12, 2017 by David Mandlewicz, CEO & Co-founder of ...