Employ SoCs for analytics

This post was originally published in Machine Design.

As the amount of data continues to grow, it is becoming increasingly apparent that the traditional, compute-centric data center architecture may not be the best configuration for many computing applications—especially analytics. Traditional servers are very energy- and space-intensive, not to mention pricey. On top of that, the majority of the energy cost in a traditional server environment comes from moving data from point A to point B, rather than from processing the raw data into value-add information (i.e., analytics). What is needed in this era of analyzing “big data” is a trusted architecture that combines the data with the high-performance compute.

Hyper-efficient, compute-dense processing nodes, or appliances, combine ultra-low power, a small form factor, and high-performance compute. They are built by integrating a system-on-a-chip (SoC) processor with DRAM (dynamic random-access memory), flash memory, and power-conversion logic using open-standards interfaces and software. These appliances are emerging as a new option for more efficient data processing per dollar spent.

Each compute node in the appliance consists of a 12-core, 24-thread SoC; 48 GB of DRAM; two SATA*, four 10Gb Ethernet, SD, and USB 2.0 interfaces. Yet it measures only 139 mm wide by 55 mm high and uses an inexpensive DIMM (dual in-line memory module) connector. Each appliance holds 128 of these nodes and consumes about 6 kW. It runs standard Fedora 20 Linux and the IBM DB2 database. IBM researcher Ronald Luijten calls their creation the “datacenter in a box.” Ronald and his co-authors, Dac Pham, Mihir Pandya, and Huy Nguyen, presented the results of this work to date at the 2015 ISSCC conference.
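The figures above imply some striking density numbers. A quick back-of-envelope calculation (using only the per-node and per-appliance specs stated in the article) shows what a single 6 kW box delivers:

```python
# Back-of-envelope density figures for the appliance described above.
# All inputs come from the article: 128 nodes per appliance, ~6 kW total,
# 12 cores and 48 GB DRAM per node.

NODES_PER_APPLIANCE = 128
APPLIANCE_POWER_W = 6_000
CORES_PER_NODE = 12
DRAM_PER_NODE_GB = 48

power_per_node_w = APPLIANCE_POWER_W / NODES_PER_APPLIANCE
total_cores = NODES_PER_APPLIANCE * CORES_PER_NODE
total_dram_tb = NODES_PER_APPLIANCE * DRAM_PER_NODE_GB / 1024

print(f"Power per node: {power_per_node_w:.1f} W")  # ~46.9 W
print(f"Total cores:    {total_cores}")             # 1536
print(f"Total DRAM:     {total_dram_tb:.1f} TB")    # 6.0 TB
```

Under 47 W per 12-core node, with over 1,500 cores and 6 TB of DRAM in one appliance, illustrates why this approach competes with racks of traditional servers on efficiency.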

QorIQ T4240

Complex systems on a chip are in growing demand as the internet of things (IoT) and machine-to-machine (M2M) markets expand. These systems gain complexity by adding cores and features, as exemplified by the QorIQ T4240.

*Exabyte: A unit of information equal to one billion gigabytes.

*SATA: Serial ATA (Serial Advanced Technology Attachment), a storage interface in which the controller is integrated into the drive, so only a simple circuit is required on the motherboard.

In another example, System Fabric Works demonstrated an implementation using the same SoC at Supercomputing 2013, calling it the “strongest candidate for low-power exascale*.” These two examples demonstrate that combining powerful, low-power compute with integrated networking infrastructure on a single SoC can enable an appliance platform to scale efficiently to exascale levels of performance.

*Exascale: A computing system capable of a billion billion calculations per second.

What are some of the use cases that compute-dense appliances are uniquely suited for?

  • In developing regions, the power and communications infrastructure is limited. Carrying physical currency can also be dangerous. Therefore, mobile payments have emerged as a safer way to conduct business, including transactions as basic as buying groceries. Unfortunately, the infrastructure doesn’t exist to support that, but kiosks supported by low-power, compute-dense appliances—powered by cheap diesel engines or another inexpensive energy source—are considered a viable option to support the need for mobile transactions, without requiring a full mobile infrastructure build-out.
  • In the Netherlands, ASTRON (The Netherlands Institute for Radio Astronomy) is collaborating with the aforementioned IBM researcher on a project called DOME, in which researchers are utilizing a very large array of radio antennas to listen in on the Big Bang from 13 billion years ago. These antennas generate 14 Exabytes of data per day. They are deployed in remote locations, such as deserts, where power and network infrastructure is limited. When IBM needed a partner to develop a prototype for these challenges, it turned to the QorIQ T4240 SoC. To further address energy efficiency, the prototype is fan-less, as it utilizes hot-water cooling.
  • Autonomous vehicles will generate huge amounts of data, which will need to be processed locally rather than in a remote data center in order to maintain the safe and efficient operation of the car. Some OEMs estimate that to be truly autonomous, these self-driving cars will require two to three server-class machines to analyze and process the data in real time. These need to be low-power, small-form-factor machines that can locally process and analyze the large amounts of data that the car will generate. Once again, these compute-dense analytic appliances perfectly fit that need.
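The DOME figures in the list above make the case for local processing concrete. Converting 14 Exabytes per day into a sustained ingest rate (illustrative arithmetic only, using the decimal definition of an exabyte) shows why shipping that stream to a remote data center is impractical:

```python
# Rough scale of the DOME data problem: 14 exabytes per day,
# expressed as a sustained ingest rate.

EXABYTE = 10**18                 # bytes (decimal definition)
DATA_PER_DAY_B = 14 * EXABYTE
SECONDS_PER_DAY = 86_400

rate_bps = DATA_PER_DAY_B / SECONDS_PER_DAY   # bytes per second
rate_tbps = rate_bps / 10**12                 # terabytes per second

# How many 10Gb Ethernet links (1.25e9 bytes/s each) would it take
# just to carry that stream, ignoring all protocol overhead?
links_10gbe = rate_bps / 1.25e9

print(f"Sustained rate: {rate_tbps:.0f} TB/s")  # ~162 TB/s
print(f"10GbE links:    {links_10gbe:,.0f}")    # ~129,630
```

A sustained rate on the order of 160 TB/s, equivalent to well over a hundred thousand 10GbE links, leaves little choice but to reduce the data close to the antennas, which is exactly the role the compute-dense appliance plays.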

Low-power, compute-dense analytic appliances have not yet fully come into their own; for now, established data center technology remains the default. But as big data continues to grow, and businesses demand fast, efficient answers rather than paying to move data around, a paradigm shift will take place. As this shift occurs, high-performance multicore processors will be needed to optimize system architectures for specific application requirements.

Projects like DOME, work being done with deployments in developing regions, and other uses will pave the way for a new generation of compute-dense appliances to meet our local, low power, higher efficiency compute needs.


Toby Foster
Toby Foster, a product marketing manager for the high-end portfolio of Digital Networking, works with sales teams and customers to design-in existing solutions and to define next-generation devices. He earned an MS and BS in electrical engineering from Harvey Mudd College. He likes old cars: Toby drives and maintains a 1977 MGB roadster, and is restoring a 1972 VW Super Beetle convertible.

