New Silicon for the New Enterprise Stack

One of the most transformational trends in computing today is the shift from traditional CPUs to alternative chip designs engineered for emerging use cases like AI. As Moore’s Law tails off, the performance needs of next-generation applications are being met by new approaches that represent a departure from the CPU-dominated status quo.

Nowhere is this more evident than in the harnessing of GPUs for use cases beyond graphics processing. GPUs offer superior performance and cost-effectiveness over CPUs in applications that require massively parallel computation, which include today's most advanced approaches to artificial intelligence. For this reason, GPUs have already been widely adopted by research institutions and hyperscale players like Google and Facebook. We expect the next phase of adoption will see mainstream enterprises taking advantage of GPUs (and eventually more radical new chip designs), and this adoption will have ramifications across the entire enterprise technology stack.

During the first half of 2017, revenue from leading GPU designer NVIDIA's Datacenter segment was $825 million, nearly tripling from the prior year. To put that growth in context, Amazon Web Services, the most transformative business in the field of computing over the last decade, was not growing at such a rate when it was a similar size. This lends credence to the idea that, while we have been living in an era defined by the cloud, we are entering an era defined by AI. (By the way, NVIDIA's revenue from the fast-growing cryptocurrency mining and autonomous vehicle businesses is reported outside the Datacenter segment.)

AI is at the forefront of GPU computing’s remarkably rapid adoption curve, but GPUs have other powerful applications as well. In fact, we observe GPUs having an impact up and down the entire enterprise technology stack, and we expect to see a range of beneficiaries of GPU adoption beyond the GPU designers and manufacturers themselves.

At Converge we view the enterprise technology stack as composed of three layers:

  • Core Infrastructure: network, storage, and compute
  • Enabling Technologies: technologies that sit on top of core infrastructure but do not face line-of-business users directly, such as databases and machine learning infrastructure
  • Business Applications: any application facing a line-of-business user


Core infrastructure use case: network monitoring

As the throughput of data center networks has skyrocketed, network monitoring has had a difficult time keeping up. The clock speed and memory bandwidth of CPUs have not kept pace with the requirements of today's networks, even in multi-core configurations. As a result, network monitoring solutions tend to drop packets under heavy traffic, which makes it difficult to obtain real-time intelligence about the network and to secure it against threats.

GPUs, because of their massively parallel architecture, are perfectly suited for examining network packets at very high throughput rates while performing complex processing on them. They represent an off-the-shelf, ubiquitous, and flexible alternative to developing expensive purpose-built networking silicon. Bricata is an example of a vendor leveraging GPUs for network monitoring and specifically for next-generation Intrusion Detection and Prevention.
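To make the idea concrete, here is a minimal sketch of screening a batch of packet-header fields in parallel on the GPU. It assumes the CuPy library; the field names and the watch rule (a hypothetical suspicious port plus an oversized payload) are invented for illustration and have nothing to do with Bricata's actual, proprietary pipeline.

```python
# Illustrative sketch only: screen a batch of packet headers on the GPU.
# Assumes CuPy and a CUDA-capable GPU; the header fields and the detection
# rule below are hypothetical, chosen purely for illustration.
import numpy as np
import cupy as cp

def flag_suspicious(dst_ports, payload_sizes, watch_port=4444, size_threshold=1400):
    """Return indices of packets matching a simple watch rule.

    Every packet is evaluated independently, so the comparisons run across
    thousands of GPU threads at once rather than looping on the CPU.
    """
    ports = cp.asarray(dst_ports)       # copy header columns into GPU memory
    sizes = cp.asarray(payload_sizes)
    suspicious = (ports == watch_port) & (sizes > size_threshold)
    return cp.asnumpy(cp.nonzero(suspicious)[0])  # bring hits back to the host

# Toy batch of one million packet headers.
rng = np.random.default_rng(0)
dst_ports = rng.integers(0, 65536, size=1_000_000, dtype=np.int32)
payload_sizes = rng.integers(0, 1500, size=1_000_000, dtype=np.int32)
print(flag_suspicious(dst_ports, payload_sizes)[:10])
```

In a real monitoring appliance the packets would stream straight from the capture card rather than from host arrays, but the shape of the computation, one lightweight test applied to millions of items at once, is exactly what GPUs are built for.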


Enabling technologies use case: databases

Moving one level up the stack, there is a significant amount of activity around the area of GPU-accelerated databases. Kinetica, MapD, Sqream, and BlazingDB all offer SQL databases that use GPUs to accelerate queries across massive datasets. GPU databases could be considered the next technology iteration beyond in-memory databases: similar use cases, but even faster.
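As a rough sketch of why analytic queries map so well to GPUs, consider how a query such as SELECT AVG(amount) FROM orders WHERE region = 3 reduces to a filter plus an aggregation over columnar arrays. The snippet below uses CuPy to stand in for a GPU database's scan operator; the table, columns, and data are invented, and this is an illustration of the general technique rather than how Kinetica, MapD, Sqream, or BlazingDB are actually implemented.

```python
# Illustrative only: a filtered aggregation over columnar data on the GPU,
# the kind of scan a GPU database runs underneath a SQL query like
#   SELECT AVG(amount) FROM orders WHERE region = 3;
# Column names and data are made up; assumes CuPy and a CUDA-capable GPU.
import numpy as np
import cupy as cp

n_rows = 50_000_000
rng = np.random.default_rng(42)
region = cp.asarray(rng.integers(0, 10, size=n_rows, dtype=np.int8))
amount = cp.asarray(rng.random(n_rows, dtype=np.float32))

mask = region == 3                 # WHERE clause: one comparison per row, all in parallel
avg = float(amount[mask].mean())   # aggregate over the surviving rows
print(f"AVG(amount) WHERE region = 3 -> {avg:.4f}")
```

Because each row is tested and summed independently, the scan spreads across thousands of GPU cores and is limited mostly by memory bandwidth rather than by a handful of CPU cores.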

One particular advantage of GPU databases is the ability to use them as a backend for business intelligence on large and streaming data sets that would otherwise be impractical to analyze. With the performance advantages of GPUs, analysts using a tool like Tableau can navigate and filter a data set of almost any size in real time. This applies especially to emerging use cases like the analysis of streaming IoT data. Business intelligence offers a prime example of a use case outside AI that is being impacted by GPUs.


Business applications use case: anything with AI!

The application layer is where the use cases for GPU computing are really boundless. GPUs are the hardware backbone for training deep learning models, enabling a limitless number of AI-fueled enterprise applications. A prime example is medical diagnosis and prediction. Analyzing medical images to facilitate early treatment and prevention is a classic deep learning use case. Promising research is now being done, specifically with GPUs, on other sources of data such as full electronic health records, comparing a patient's risk factors against a vast population of other patients in order to predict the risk of disease.
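The mechanics of putting a GPU to work on a model like this are simple in modern frameworks. The sketch below is a generic PyTorch training loop over synthetic data; the tiny network and the random stand-in for featurized patient records are placeholders, not the research described above.

```python
# Generic sketch of GPU-accelerated model training in PyTorch.
# The small network and random "patient record" features are placeholders
# for illustration; this is not the medical research cited above.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy stand-in for featurized health-record data: 10k patients, 128 features,
# and a binary label for whether a condition later developed.
X = torch.randn(10_000, 128, device=device)
y = torch.randint(0, 2, (10_000, 1), device=device).float()

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),
).to(device)                        # parameters live in GPU memory

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)     # forward pass runs as parallel GPU kernels
    loss.backward()                 # so does backpropagation
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Moving the model and data onto the GPU (the `device` and `.to(device)` lines) is all it takes for the entire training loop to run on GPU hardware, which is where the dramatic speedup over CPU training comes from at real-world model sizes.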

Additional AI use cases include robotics, autonomous vehicles, and smart cities, among many others. Of course, applications need not be based on AI to benefit from GPUs. Financial risk modeling, for example, is often a highly parallelizable workload for which GPUs are perfectly suited. Elsen is an example of a startup offering computational finance in the cloud based on GPU-powered clusters.
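As a flavor of what that looks like in practice, the snippet below runs a toy Monte Carlo value-at-risk calculation on the GPU, where every simulated scenario is independent and can be evaluated in parallel. It is a generic illustration assuming CuPy; the three-asset portfolio and return parameters are invented, and it bears no relation to Elsen's actual platform.

```python
# Toy Monte Carlo value-at-risk on the GPU: every simulated scenario is
# independent, so all of them evaluate in parallel. The portfolio weights
# and return assumptions are invented for illustration; assumes CuPy.
import cupy as cp

n_scenarios = 5_000_000
weights = cp.asarray([0.5, 0.3, 0.2])        # hypothetical three-asset portfolio
mu = cp.asarray([0.0004, 0.0002, 0.0003])    # assumed daily mean returns
sigma = cp.asarray([0.020, 0.015, 0.025])    # assumed daily volatilities

# Draw independent daily returns for each asset in every scenario.
z = cp.random.standard_normal(size=(n_scenarios, 3))
returns = mu + sigma * z
portfolio_returns = returns @ weights        # one dot product per scenario

var_99 = -cp.percentile(portfolio_returns, 1)  # 99% one-day value-at-risk
print(f"99% one-day VaR: {float(var_99):.4%}")
```

Because the scenarios never interact, adding more of them simply gives the GPU more parallel work to chew through.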


A post-GPU future on the horizon?

Incumbents and startups alike have taken notice of the explosion in AI-focused computing, and potential rivals to GPUs for AI workloads are emerging. For example, Google has developed the Tensor Processing Unit (TPU), a proprietary chip that excels at the inferencing stage of deep learning and is credited with helping power Google's victory over the world champion at Go. The team behind the TPU has spun out into a new startup called Groq that is presumably working on a TPU successor. Intel offers its Xeon Phi manycore architecture and will be releasing AI-specific hardware based on its acquisition of Nervana Systems last year. IBM and Intel have both prototyped "neuromorphic" chips, so called because they mimic the operation of biological brains; like the TPU, these excel at the inferencing stage of deep learning.

Do these new chips herald a post-GPU era? Our belief is that the next few years will see a heterogeneous computing environment where GPUs, next-generation CPUs, and domain-specific hardware all coexist and are used in the areas at which they excel. If anything, the emergence of these new architectures signals that the world is moving on from an era in which multi-core x86 architecture dominated, to a new era of silicon engineering for AI and other next-generation workloads.

Even with new rivals to GPUs appearing, the GPU opportunity in the enterprise is still in the very early phases. We expect GPUs, as well as new rival architectures, to play a pivotal role in reshaping all layers of the enterprise stack in the coming years.