Social Media Round-Up [February, 2019]
NonStop Insider
It isn’t every day that, scrolling through the posts to LinkedIn, you come across something quite revealing. On this occasion it was research from an unexpected source: 451 Research, LLC, promoted by HPE. What caught my attention was a simple message from Kirk Bresniker, Hewlett Packard Labs Chief Architect / HPE Fellow / VP at Hewlett Packard Enterprise. As Bresniker notes, “Just read the solid write up from 451 Research on our Memory-Driven Computing work. I really appreciate their analysis of not just the work but also how it fits into the full scope of Hewlett Packard Enterprise.”
For many members of the NonStop community, references to The Machine rarely cause a ripple – surely it’s all aimed at meeting the needs of users wanting to exploit massive scale-up architectures, not the needs of the typical NonStop user, where scale-out architectures are welcomed. And yet, with the heavy lifting going on within HPE Labs, there are surprises in store, and that is what led me to pull some highlights from this research note. As for the Research Note itself, look for the heading:
HPE invites developers to play in its memory sandbox
DANIEL BIZO
451 Research, LLC
02 JAN 2019
And it can be found on the HPE web site:
https://www.labs.hpe.com/pdf/HPE_MDC_2019.pdf
But now, for a summary, here’s what I found of interest even as a die-hard NonStop advocate …
THE 451 TAKE
Even if the commercial introduction of technologies behind the idea of MDC is still some years out, HPE’s efforts to demonstrate the underlying concepts early make perfect sense. An MDC-class system will not be an incremental improvement on existing systems, but will introduce architectural developments that will change fundamental assumptions about software engineering, server design, installation and operations. To take advantage of the expected benefits, software developers and IT buyers need to adapt their thinking – and that’s the goal HPE’s MDC Sandbox serves.
Initially, HPE dubbed this ambitious next generation of systems The Machine, a concept it first presented in 2014. It now calls this Memory-Driven Computing to refer to the new architecture and the body of technologies behind the idea. HPE says the historical trajectory of development of current general-purpose architectures is simply not sufficient to meet the growing IT challenges in performance, power efficiency and cost-effectiveness, hence the new direction. MDC extends the original Machine idea that was built on three key pillars: workload-optimized processors (accelerators), storage-class memory, and photonic system interconnects and networks.
One of the better summaries of what fueled the interest in The Machine, as well as the first reference to the all-important MDC Sandbox – the key point, however, is HPE’s premise that traditional models for computing have reached their expiry date! Computing has to change to meet the demands of IT in the future …
Realizing this concept requires nontrivial research and development efforts that require vast resources from the industry. Fortunately, numerous players are pursuing these goals, including heavyweights such as Intel, NVIDIA, Cisco and some hyperscale operators, and key players from across the ecosystem are collaborating on the fabric aspect through the Gen-Z Consortium.
Noteworthy? Key HPE partner Intel is not part of the Gen-Z Consortium, even as it looks for new ways to better exploit existing architectures. However, as 451 Research also notes, Intel “is also a proponent of server disaggregation, has productized and is commodifying two key technology components that are outwardly similar: silicon photonic networks and storage-class memory (marketed as the Optane family of products). However, HPE is not content with existing server architectures and upgrades to individual components.”
With the first generation of decoupled memory products a couple of years away if development goes to plan, HPE has started whipping up interest in MDC among the developer and user communities. What makes this possible (well ahead of having workable hardware) is HPE’s latest mission-critical system, Superdome Flex, which offers scalability and partitioning features to create an environment that resembles a future MDC infrastructure.
Superdome Flex isn’t being marketed at traditional transaction processing markets, but don’t be fooled. The intent behind Superdome Flex, shaped with the SGI team’s input, has always been to develop a commercial deployment where pioneering work based on The Machine can be tested. Yes, for the NonStop community, check out what follows next:
In order to make its high-end Superdome Flex systems accessible to developers, HPE offers a ‘sandbox’ as a service for those that qualify for the program via its Pointnext services arm.
The Sandbox can create up to 16 independent server instances and pool terabytes of memory to simulate a scale-out infrastructure in which memory amount and IO resources can be flexibly matched to compute needs.
Future configurations of the Sandbox should also help developers prepare for storage-class memory. Once the Sandbox is equipped with non-volatile memory modules (as opposed to purely DRAM), developers can not only allocate memory more freely to processors, but also, by modifying their applications, treat memory as a lightning-fast persistent storage tier. When an application uses persistent memory for writes (saving the data), it avoids the storage subsystem, which carries much higher resource overhead and is generally orders of magnitude slower. 451 Research expects HPE to announce such plans in the near future.
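To make that point concrete, here is a minimal sketch – my own illustration, not drawn from the research note – of what “avoiding the storage subsystem” looks like to an application: data is saved with ordinary CPU stores into a memory-mapped region and made durable with a flush, rather than travelling down the block I/O path. The file path /mnt/pmem/ledger is hypothetical, assuming a DAX-mounted filesystem backed by non-volatile memory.

/*
 * Illustrative sketch only: persistence through a memory mapping.
 * Assumes /mnt/pmem is a DAX-mounted filesystem on non-volatile
 * memory; the file name "ledger" is made up for this example.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE 4096

int main(void)
{
    int fd = open("/mnt/pmem/ledger", O_CREAT | O_RDWR, 0666);
    if (fd < 0 || ftruncate(fd, REGION_SIZE) != 0) {
        perror("open/ftruncate");
        return 1;
    }

    /* Map the persistent region directly into the address space. */
    char *region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* A "write" is just a CPU store; no block I/O is issued. */
    strcpy(region, "txn-42: debit 100.00");

    /* Flush the update so it is durable on the underlying media. */
    if (msync(region, REGION_SIZE, MS_SYNC) != 0)
        perror("msync");

    munmap(region, REGION_SIZE);
    close(fd);
    return 0;
}

The design point the excerpt is making falls out of the sketch: the save is a store plus a flush measured in nanoseconds to microseconds, whereas a write through a storage stack involves system calls, buffering and device latency that are orders of magnitude slower.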
Applications that will likely benefit the most from large pools of persistent memory include high-volume online transaction processing systems found in e-commerce and financial services, running analytics against massive datasets, training of deep neural networks, and even ‘classical’ scientific and technical supercomputing workloads.
Perhaps the time is rapidly approaching when the NonStop vendor community may want to become more involved with this project, as the impact of future-generation HPE systems cannot be ignored. Furthermore, it makes the move by the NonStop development team to sever ties with proprietary hardware look even more forward-thinking than we may have given it credit for. Surprising as it is to pick up on this in a social media channel like LinkedIn, it shouldn’t be ignored, as the ramifications for the NonStop community could be sizable in terms of future product R&D spend as well as potential new markets that might open up to accept NonStop solutions!
@RichardKBuckle