
HPE's The Machine gets a step closer

HPE has demonstrated a rack-scale prototype of The Machine, an implementation of its new memory-driven computing architecture that allows a large number of processors to share a single massive memory space.

In 2014, HPE revealed that it was developing a new computing architecture it called The Machine. According to HP Labs chief architect Kirk Bresniker, this wasn't just a research project or an attempt to make incremental improvements; rather, it was "a grand challenge" to produce a system that could operate at an unprecedented scale.

The design adopted involved a large number of processing cores, all with high-bandwidth access to a single non-volatile memory fabric.

As previously reported, the subsystems were put on show at HPE Discover in mid-2016, and by December "all the pieces had fallen into place" and the company was able to show a working implementation "at the scale of one," said Bresniker.

HPE has now implemented The Machine at rack scale, with 1280 ARMv8-A cores and 160TB of non-volatile memory in 40 nodes housed in four enclosures with a photonic interconnect, running an optimised version of Linux.

It is thought to be the largest single-memory computer ever constructed, he said.

Existing high-performance computing systems generally rely on being able to break a task up into many small parts that can be executed independently and in parallel. As a trivial example, if you need to multiply 1000 pairs of numbers, a four-core system can generate the results in about one quarter of the time taken by a single core.
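As a rough illustration of that kind of decomposition (a generic Python sketch, not anything taken from The Machine), each multiplication is independent of the others, so the work can simply be split across a pool of four workers:

```python
# Generic illustration of an embarrassingly parallel task: 1,000 independent
# multiplications split across four worker processes. For work this trivial the
# process overhead swamps any gain; the point is only how the task decomposes.
from concurrent.futures import ProcessPoolExecutor
import random

pairs = [(random.random(), random.random()) for _ in range(1000)]

def multiply(pair):
    a, b = pair
    return a * b

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        products = list(pool.map(multiply, pairs))
    print(len(products), "products computed")
```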

But as the tasks get bigger and more complicated, the amount of memory required exceeds the addressable space of a single system, and the job has to be shared between multiple computing nodes, each with its own memory. There are various ways to connect those nodes, but moving data between them is always slower than moving it between a processor and its own memory. And some problems aren't purely parallel, so they cannot be spread across systems with separate memory spaces, Bresniker said, noting that The Machine could be built with petabytes of shared memory.

Furthermore, The Machine's architecture makes it possible to assemble the right combination of resources — general-purpose processing cores, GPUs, memory, and so on — to suit the task in hand.

By keeping all the data in memory, it's possible to get answers as you think of new questions, he said.

The choice of non-volatile memory is also significant for two reasons. Firstly, it reduces the need to read and write data from and to storage devices, which means significant time savings. Secondly, unlike DRAM, it doesn't need to be refreshed and so consumes less power.

The current prototype of The Machine actually uses DRAM backed by a UPS to simulate NVRAM. Bresniker said this was "a reasonable compromise" in order to "learn as much as we could, as quickly as we could" while experimenting with various non-volatile technologies, including HPE's memristor memory and Intel's 3D XPoint.

So what does all this mean in terms of performance? The answer depends on the application, and how much effort is put into taking advantage of The Machine.

Changing just a couple of hundred lines of code in Hadoop yielded a 10 to 15x performance boost, according to Bresniker.

At the other extreme, a Monte Carlo simulation for financial analysis was 10,000x faster after being rewritten to use a completely different approach that exploited the architecture. The trick was in the realisation that the massive memory space means the large set of calculations can be performed once and left in memory. When the result is needed again it can be simply and quickly looked up instead of being recomputed. While there is an initial cost to doing all the calculations upfront, it is amortised over repeated use.

This use of "memory as a computation acceleration tool" is analogous to the mathematical tables that were in widespread use before the introduction of electronic calculators, he observed.
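As a hedged sketch of that precompute-and-look-up pattern (the toy "simulation" below is illustrative only, not HPE's financial model), the idea is to pay the cost of an expensive calculation once, keep the answers in memory, and turn every later request into a lookup:

```python
# A minimal sketch of "memory as a computation acceleration tool": do the
# expensive calculation once, keep the results in memory, and answer later
# requests with a lookup instead of a recomputation.
import functools
import random

@functools.lru_cache(maxsize=None)      # results stay in memory, keyed by scenario
def simulate(scenario_id, paths=100_000):
    rng = random.Random(scenario_id)    # deterministic per scenario for the demo
    return sum(rng.gauss(0, 1) for _ in range(paths)) / paths

# The first call pays the full Monte Carlo cost; repeating it is a near-instant
# lookup, so the upfront work is amortised over every subsequent use.
first = simulate(42)
again = simulate(42)    # served from memory, not recomputed
assert first == again
```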

Even conventional business applications stand to gain from a full implementation of The Machine. The multiple layers of API calls needed to transfer data from an application through middleware and a DBMS to physical storage take milliseconds to do what The Machine can do in nanoseconds. Significantly, these programs would need little alteration, as it would be possible to provide an equivalent API. This approach minimises risk, Bresniker said, so many organisations would be able to explore the possibilities.
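A hedged sketch of what that "equivalent API" idea might look like (the class and method names here are hypothetical, not an actual HPE interface): the application keeps calling the same get/put interface it already codes against, but reads and writes go straight to a flat in-memory pool rather than through middleware, a DBMS and physical storage.

```python
# Hypothetical sketch: the application-facing interface stays the same, but the
# backing store is a flat in-memory structure standing in for the shared
# non-volatile memory pool, with no storage stack in the path.
class RecordStore:
    """Same get/put interface an application already codes against."""
    def __init__(self):
        self._memory = {}             # stand-in for the shared memory pool

    def put(self, key, record):
        self._memory[key] = record    # memory write, no round trip to storage

    def get(self, key):
        return self._memory.get(key)  # direct lookup instead of a DBMS query

store = RecordStore()
store.put("order:1001", {"customer": "ACME", "total": 99.50})
print(store.get("order:1001"))
```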

Another early application for The Machine is a security system drawing inferences from a large-scale (as in billions of nodes, which Bresniker described as Facebook or NSA scale) graph to detect advanced persistent threats. Being able to do this in a sufficiently short time requires all the data to be held in a flat memory space. Running this application on The Machine represents a hundredfold improvement on the state of the art, he said.

A commercial implementation of The Machine is still a few years away, according to Bresniker. Apart from anything else, the current system makes extensive use of FPGAs (field programmable gate arrays) because they can be reprogrammed while the system is being developed. Once the design is settled, the task will be to build a high-performance implementation in silicon, which will take at least two or three years.

But some of the technologies from The Machine will show up in relatively conventional systems before then, he said.

Bresniker stressed that The Machine is not solely an HPE project: "it's bigger than us," he said, noting that a variety of hardware and software partners and suppliers are involved.

HPE expects to be able to easily scale this architecture to a memory space measured in exabytes, and ultimately to 4096 yottabytes, which is "250,000 times the entire digital universe today," according to the company. "With that amount of memory, it will be possible to simultaneously work with every digital health record of every person on earth; every piece of data from Facebook; every trip of Google's autonomous vehicles; and every data set from space exploration all at the same time – getting to answers and uncovering new opportunities at unprecedented speeds."

"The secrets to the next great scientific breakthrough, industry-changing innovation, or life-altering technology hide in plain sight behind the mountains of data we create every day," said HPE chief executive Meg Whitman.

"To realise this promise, we can't rely on the technologies of the past, we need a computer built for the Big Data era."

Information about memory-driven computing and The Machine, including a fact sheet, is available from HPE.
