
Google unveils new tools to bolster AI hardware development

Google continues to expand its range of AI products and services with a trio of new hardware devices aimed at the development community.

They’re being introduced under a new Google Coral brand (which is itself still “in beta”), and include a development board that sells for $149.99, a USB accelerator that goes for $74.99, and a 5-megapixel camera that’s available for $24.99.

Both the dev board and the USB accelerator are powered by Google’s Edge TPU, an ASIC no bigger than a fingernail that is designed to run AI models without breaking a sweat.

The USB accelerator can boost inference on any Linux machine, while the dev board’s array of pins and ports makes it well suited to prototyping hardware and other experimental applications.
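As a rough illustration of what running a model on these devices looks like, here is a minimal Python sketch using Google’s pycoral bindings; the model and image filenames are placeholders, and the snippet assumes a model already compiled for the Edge TPU and an attached accelerator.

```python
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import make_interpreter

# Load a TFLite model compiled for the Edge TPU (filename is a placeholder).
interpreter = make_interpreter('mobilenet_v2_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the input image to whatever size the model expects.
image = Image.open('photo.jpg').resize(common.input_size(interpreter))
common.set_input(interpreter, image)

# Run inference on the Edge TPU and print the top prediction.
interpreter.invoke()
for c in classify.get_classes(interpreter, top_k=1):
    print(f'class {c.id}: score {c.score:.4f}')
```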

Artificial-intelligence hardware: New opportunities for semiconductor companies

Software has been the star of high tech over the past few decades, and it’s easy to understand why.

Although semiconductor companies’ innovations in chip design and fabrication enabled next-generation devices, those companies received only a small share of the value coming from the technology stack: about 20 to 30 percent with PCs and 10 to 20 percent with mobile.

But the story for semiconductor companies could be different with the growth of artificial intelligence (AI)—typically defined as the ability of a machine to perform cognitive functions associated with human minds, such as perceiving, reasoning, and learning.

These diverse AI solutions, along with other emerging applications, share one common feature: a reliance on hardware as a core enabler of innovation, especially for logic and memory functions.

Our analysis revealed three important findings about value creation; by keeping them in mind, semiconductor leaders can create a new road map for winning in AI.

This article begins by reviewing the opportunities that they will find across the technology stack, focusing on the impact of AI on hardware demand at data centers and the edge (computing that occurs with devices, such as self-driving cars).

AI has made significant advances since its emergence in the 1950s, but some of the most important developments have occurred recently, as developers created sophisticated machine-learning (ML) algorithms that can process large data sets, “learn” from experience, and improve over time.

The greatest leaps came in the 2010s because of advances in deep learning (DL), a type of ML that can process a wider range of data, requires less data preprocessing by human operators, and often produces more accurate results.

By providing next-generation accelerator architectures, semiconductor companies could increase computational efficiency or facilitate the transfer of large data sets through memory and storage.

For instance, specialized memory for AI has 4.5 times more bandwidth than traditional memory, making it much better suited to handling the vast stores of big data that AI applications require.

This performance improvement is so great that many customers are willing to pay the higher price of specialized memory (about $25 per gigabyte, compared with $8 for standard memory).
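A quick back-of-the-envelope check shows how that premium can still pay off; the sketch below normalizes traditional memory bandwidth to 1 and uses the per-gigabyte prices quoted above.

```python
# Bandwidth per dollar, using the figures quoted above.
# Traditional memory bandwidth is normalized to 1 unit per module.
std_price, ai_price = 8.0, 25.0   # dollars per gigabyte
std_bw, ai_bw = 1.0, 4.5          # relative bandwidth

print(f'specialized: {ai_bw / ai_price:.3f} bandwidth units per dollar')   # ~0.180
print(f'standard:    {std_bw / std_price:.3f} bandwidth units per dollar') # ~0.125
# Despite a roughly 3x price premium per gigabyte, specialized memory
# still delivers about 44 percent more bandwidth per dollar.
```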

With hardware serving as a differentiator in AI, semiconductor companies will find greater demand for their existing chips, but they could also profit by developing novel technologies, such as workload-specific AI accelerators (Exhibit 2).

We created a model to estimate how these AI opportunities would affect revenues and to determine whether AI-related chips would constitute a significant portion of future demand (see sidebar “How we estimated value”).

If this growth materializes as expected, semiconductor companies will be positioned to capture more value from the AI technology stack than they have obtained with previous innovations—about 40 to 50 percent of the total.

For instance, route-planning applications have different needs for processing speed, hardware interfaces, and other performance features than applications for autonomous driving or financial risk stratification (Exhibit 4).

After analyzing more than 150 DL use cases, looking at both inference and training requirements, we were able to identify the architectures most likely to gain ground in data centers and the edge (Exhibit 6).

AI applications have high memory-bandwidth requirements, since computing layers within deep neural networks must pass input data to thousands of cores as quickly as possible.
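For a rough sense of the scale involved, the sketch below estimates the memory traffic of a single dense layer’s forward pass; the layer dimensions, batch size, and FP16 precision are illustrative assumptions, not figures from the analysis.

```python
# Rough memory-traffic estimate for one dense layer's forward pass.
# All dimensions are illustrative assumptions (FP16 = 2 bytes per value).
batch, n_in, n_out, bytes_per_value = 64, 4096, 4096, 2

weights = n_in * n_out * bytes_per_value    # ~33.6 MB read
acts_in = batch * n_in * bytes_per_value    # ~0.5 MB read
acts_out = batch * n_out * bytes_per_value  # ~0.5 MB written

total_mb = (weights + acts_in + acts_out) / 1e6
print(f'~{total_mb:.1f} MB moved per forward pass')
# At 1,000 forward passes per second, this single layer already needs
# roughly 35 GB/s of memory bandwidth, before counting any other layers.
```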

That said, memory will see the lowest annual growth of the three accelerator categories—about 5 to 10 percent—because of efficiencies in algorithm design, such as reduced bit precision, and the easing of capacity constraints in the industry.

AI applications generate vast volumes of data: about 80 exabytes per year, a figure expected to increase to 845 exabytes by 2025.

These shifts could lead to annual growth of 25 to 30 percent from 2017 to 2025 for storage, the highest rate of all segments we examined (in exploring storage opportunities for semiconductor players, we focused on NAND).
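For context, the growth rate implied by the data volumes quoted above can be checked with a quick calculation, assuming the 80-exabyte figure corresponds to 2017, the base year used elsewhere in the analysis.

```python
# Implied compound annual growth rate of AI-generated data, using the
# 80 EB and 845 EB figures above and assuming a 2017-2025 window.
start_eb, end_eb, years = 80.0, 845.0, 8
cagr = (end_eb / start_eb) ** (1 / years) - 1
print(f'{cagr:.1%}')  # roughly 34% per year
```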

Unlike traditional storage solutions that tend to take a one-size-fits-all approach across different use cases, AI solutions must adapt to changing needs—and those depend on whether an application is used for training or inference.

For instance, AI training systems must store massive volumes of data as they refine their algorithms, but AI inference systems only store input data that might be useful in future training.
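A toy comparison makes that asymmetry concrete; every number below is an illustrative assumption rather than a figure from the analysis.

```python
# Toy storage footprints for training versus inference.
# Every number here is an illustrative assumption.
sample_kb = 100                 # size of one input record, e.g. an image
training_samples = 10_000_000   # full corpus retained for training
daily_inferences = 5_000_000    # requests served per day
retention_rate = 0.01           # fraction of inference inputs kept for retraining

training_tb = training_samples * sample_kb / 1e9
inference_gb = daily_inferences * retention_rate * sample_kb / 1e6

print(f'training corpus:           ~{training_tb:.0f} TB')  # ~1 TB
print(f'daily inference retention: ~{inference_gb:.0f} GB')  # ~5 GB
```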

To capture the value they deserve, semiconductor companies will need to focus on end-to-end solutions for specific industries (also called microvertical solutions), ecosystem development, and innovation that goes far beyond improving compute, memory, and networking technologies.

To assist with the development of software for self-driving cars, for instance, Nvidia created DriveWorks, a kit with ready-to-use software tools, including object-detection libraries that can help applications interpret data from cameras and sensors.

Only platforms that add real value to end users will be able to compete against comprehensive offerings from large high-tech players, such as Google’s TensorFlow, an open-source library of ML and DL models and algorithms (available at tensorflow.org).

TensorFlow supports Google’s core products, such as Google Translate, and also helps the company solidify its position within the AI technology stack, since TensorFlow is compatible with multiple compute accelerators. Many hardware players who want to enable AI innovation focus on improving the computation process.
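To illustrate that portability across accelerators, the short sketch below (assuming a standard TensorFlow 2.x installation) lists the compute devices TensorFlow can see and runs the same operation on an explicitly chosen one.

```python
import tensorflow as tf

# List every compute device TensorFlow can see (CPU, GPU, TPU, ...).
for device in tf.config.list_physical_devices():
    print(device.device_type, device.name)

# The same operation runs unchanged on whichever device is selected.
with tf.device('/CPU:0'):  # swap in '/GPU:0' or a TPU address if present
    x = tf.random.normal((1024, 1024))
    y = tf.matmul(x, x)
print(y.shape)  # (1024, 1024)
```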

For example, AI-based facial-recognition systems for secure authentication on smartphones were enabled by specialized software and a 3-D sensor that projects thousands of invisible dots to capture a geometric map of a user’s face.