Qualcomm’s Scene-Detecting Smartphone System Is Almost Here
Artificial neural networks have done many cool things in recent years, including learning how to cook food by watching YouTube videos and making cars less noisy.
Qualcomm’s researchers aim to discover new machine-learning algorithms for tasks such as visual perception and audio recognition, and to develop efficient implementations of those algorithms for power-constrained devices such as smartphones.
SceneDetect relies on an emerging field of artificial intelligence called deep learning, and it is implemented by a kind of artificial neural network called a deep convolutional network.
To train the network to recognize dogs, for example, researchers feed images of many kinds of dogs into the network, and the network’s pattern of internal connections is adjusted until the system reliably recognizes dogs.
(For an explanation of deep learning by one of its inventors, see this interview with Yann LeCun.) Convolutional neural networks are widely used in image and video recognition, and SceneDetect currently recognizes between 30 and 50 categories of scenes—including birds, mountains, people, and clouds.
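A convolutional network builds its scene categories out of layers of small sliding filters. As a minimal, illustrative sketch of that core operation (this is not SceneDetect’s actual implementation), here is a valid 2-D cross-correlation in NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` and sum elementwise products at each
    position -- the basic operation of one convolutional filter."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny vertical-edge filter responds where brightness changes left to right.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
response = conv2d(image, kernel)  # peaks along the edge between columns 1 and 2
```

Stacking many such filters, interleaved with nonlinearities and pooling, is what lets a deep network learn simple edge detectors in its early layers and whole-scene detectors (dogs, mountains, clouds) in later ones.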
(That number was chosen because earlier research concluded it was a reasonable number of categories for most users.) The training of SceneDetect’s neural network was performed offline on a compute cluster; only then was the trained network deployed to Snapdragon-powered devices.
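Shipping an offline-trained network to a phone typically involves compressing its learned parameters so they fit a power-constrained chip. As an illustrative sketch (not Qualcomm’s actual pipeline), symmetric int8 quantization of trained weights looks like this:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats in
    [-max|w|, +max|w|] onto the integer range [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights on the device at inference time."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)  # toy trained weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)  # close to w, within one quantization step
```

The int8 copy is a quarter the size of float32 weights and maps well onto the fixed-point arithmetic that mobile DSPs favor, at the cost of a small, bounded rounding error per weight.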
According to Jeff Gehlhaar, vice president of technology at Qualcomm, as SceneDetect improves to include localizing objects within a video scene and counting specific types of objects, it could break into whole new categories of devices.
Qualcomm offers neural network SDK for Snapdragon processor
The intention is to allow companies in a broad range of industries, including healthcare, automotive, security, and imaging, to run their own proprietary trained neural network models on portable devices. At least one company, Nauto Inc. (Palo Alto, Calif.), a startup working on autonomous driving and automotive collision recording, has already had access to the SDK.
“The Neural Processing Engine SDK means we can quickly deploy our proprietary deep learning algorithms to our Snapdragon-based connected camera devices in the field, which can detect driver distraction and help prevent auto accidents,” said Frederick Soo, chief technology officer of Nauto, in a statement issued by Qualcomm.
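The article does not show the SDK’s API, but the on-device half of this train-offline/run-locally split reduces to a forward pass over pre-trained weights, with no training happening on the device. Here is a hedged sketch with entirely hypothetical weights and a hypothetical “distraction score” output:

```python
import numpy as np

# Hypothetical pre-trained parameters, as they might ship inside a device app.
W1 = np.array([[ 0.2, -0.5],
               [ 0.8,  0.1],
               [-0.3,  0.4]])   # 3 input features -> 2 hidden units
b1 = np.array([0.1, -0.1])
W2 = np.array([[ 1.0],
               [-1.0]])         # 2 hidden units -> 1 score
b2 = np.array([0.0])

def predict(x):
    """One forward pass: the device only runs inference over fixed weights."""
    h = np.maximum(0.0, x @ W1 + b1)     # ReLU hidden layer
    score = (h @ W2 + b2)[0]
    return 1.0 / (1.0 + np.exp(-score))  # probability-like output in (0, 1)

p = predict(np.array([1.0, 0.0, 0.0]))   # hypothetical sensor features
```

A real deployment would load a much larger converted model through the vendor’s runtime rather than hand-coded arrays, but the control flow on the device is the same: features in, fixed-weight forward pass, score out.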
Live from New York, it’s Snapdragon 820: Prepare for an immersive dive into mobile experience
Qualcomm Technologies, Inc., a wholly-owned subsidiary of Qualcomm Incorporated, operates, along with its subsidiaries, substantially all of Qualcomm's engineering, research and development functions, and substantially all of its products and services businesses.
Today, virtually all AI and deep-learning algorithms sit in the cloud: services such as Google Photos and iPhoto sort and tag photos, identifying flowers, beaches, or sunsets simply by processing image pixels.
These algorithms have made impressive performance gains in recent years, with university and corporate research teams competing each year to detect and classify images at ever-lower error rates.
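Such competitions typically score entrants by top-k error: the fraction of test images whose true label is missing from the model’s k highest-scoring guesses. A small illustrative implementation:

```python
def top_k_error(predictions, labels, k=5):
    """Fraction of examples whose true class is not among the k
    highest-scoring classes in the model's output."""
    wrong = 0
    for scores, label in zip(predictions, labels):
        top_k = sorted(range(len(scores)), key=lambda c: scores[c], reverse=True)[:k]
        if label not in top_k:
            wrong += 1
    return wrong / len(labels)

# Two 3-class examples: the first is classified correctly at k=1, the second is not.
predictions = [[0.1, 0.7, 0.2],
               [0.5, 0.3, 0.2]]
labels = [1, 2]
top1_error = top_k_error(predictions, labels, k=1)  # 0.5
```

Top-5 error is the headline number in image-classification benchmarks because it forgives a model that ranks the right class second or third among thousands of candidates.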
We will have to wait and see what developers come up with, but by moving AI processing onto the device, Apple is promoting a new local-AI paradigm, driven largely by its commitment to user privacy.
This could also have big implications for wearable cameras, both action cameras and lifelogging cameras, which could extract useful information from pixels and learn about you, including your likes and dislikes, from the images and videos you capture.
In 10 years, I believe it will become commonplace to have a local AI-based virtual digital assistant (VDA) that will be able to suggest further actions based on the images and videos taken on your phone.
All of this will become possible because of powerful AI algorithms running locally on the device, which will instantly be able to extract data and information from your smartphone camera, and then share it through APIs with other applications and services.
February 18, 2020
Computer Vision 3D Depth Reconstruction
Qualcomm Research has developed a leading 3D depth reconstruction scanning system, running on a mobile device, as part of our computer vision program. The system is very fast and highly responsive...
MediaTek Deep Learning SDK on helio X20 with Visual and Voice Recognition features
MediaTek shows its high-performance deep learning (DL) SDK for mobile devices based on the MediaTek X series, empowering device makers and users to build visual and speech recognition capabilities...