As I began to write this article, for some strange reason these words came to mind. First heard at the Disney pavilion at the 1964 World's Fair, the song "It's a Small World After All" continues to captivate audiences worldwide. It was back in this month, April, of 1964 that the Fair opened, with Disney supporting four exhibits, and even though I was living in Sydney, Australia, my father captured the spirit of the Fair in photos he took when visiting New York in 1965.
The Ford Mustang debuted at this fair, with Lee Iacocca performing the unveiling honors. All of this my family took in while viewing a 35 mm slide show my father put together on his return to Sydney. For many families of the 1960s, such viewings were more or less mandatory, and in a way, through these slides, we were transported to another place and even another time, entering a virtual world totally out of reach for most of us. True, that wider world had been opened up to us by those we called the "jet setters," but it felt more like fiction than anything the rest of us were experiencing.
We have come so far in the years since that World's Fair. How often do we write about simulations and about folks who spend time in simulators to gain experience in steering a ship, flying a plane, and yes, racing a car? Simulations provide virtual landscapes, be they of the sea, an airport, or a race track, and do so without ever placing the participant in harm's way. Today, however, we have virtual reality, the metaverse, and avatars, adding even more layers of abstraction between the real world and any realm we care to enter.
For computers, the value of such abstraction, and of virtualization in particular, has been with us for quite some time. I first ran into virtual machines during my early days as a programmer. To put it mildly, I was wowed by the idea that we could run many different computing environments on a single computer thanks to the speed differential between the CPU and secondary storage. While other vendors provided some level of virtualization, it was on the IBM mainframe that I gained my early experience, and that sense of wow has remained with me ever since.
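To make that speed-differential point concrete, here is a minimal back-of-the-envelope sketch in Python of my own devising; the guest counts and timings are illustrative assumptions only, not anything taken from those mainframe days. It simply shows that while one guest environment waits on slow secondary storage, the CPU is free to serve the others, which is what made time-sharing many environments on one machine practical.

    # Illustrative sketch only: why a fast CPU plus slow secondary storage
    # lets many guest environments share one computer. All timings are
    # made-up round numbers, not measurements.

    CPU_SLICE_MS = 1   # compute time a guest needs before its next I/O
    IO_WAIT_MS = 20    # time a secondary-storage I/O keeps a guest waiting
    GUESTS = 4         # guest environments sharing the one CPU
    REQUESTS = 5       # I/O-bound work items per guest

    # Run the guests one after another: the CPU idles through every I/O wait.
    serial_ms = GUESTS * REQUESTS * (CPU_SLICE_MS + IO_WAIT_MS)

    # Interleave the guests: whenever one waits on I/O, another gets the CPU,
    # so elapsed time approaches one guest's I/O waits plus everyone's CPU time.
    interleaved_ms = REQUESTS * IO_WAIT_MS + GUESTS * REQUESTS * CPU_SLICE_MS

    print(f"One guest at a time: {serial_ms} ms elapsed")
    print(f"Time-shared guests:  {interleaved_ms} ms elapsed")

With these assumed numbers the four guests finish in roughly a quarter of the serial time, which captures, in miniature, the wow factor of those early virtual machines.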
Hearing for the first time that NonStop was going to complement its presence on traditional systems with a presence on virtual machines seemed, to me, a fantasy. Surely, running atop a hypervisor would reintroduce the single points of failure NonStop has always avoided, compromising fault tolerance and the five-nines of availability. Configuring your NonStop system atop a single hypervisor residing on one computer seemed folly, and fortunately, this was addressed early on. Desktop configurations based on a pair of x86 processors, suitable for demonstration and perhaps useful to individual software developers, were just the introduction; production systems would look a lot different.
That announcement of NonStop going virtual came during the NonStop Technical Boot Camp 2015. Can you recall Martin Fink as the guest keynote speaker, at a time when he was the CTO of all of HPE? When Fink took to the stage at NonStop TBC15, he walked the audience through the moves required to execute on a strategy and then projected a vision for NonStop unlike any we had heard previously. As it turned out, many inside NonStop development were caught by surprise as well. At the time, Fink's remarks on virtualization resonated with everyone in the audience:
“Running on a virtual (environment) on a Linux?” Fink asked, before adding, for those interested in the topic, “As an important proof point, we can absolutely get there.” Furthermore, looking at it a little differently, “Wouldn’t it be cool to bring the NonStop value proposition to Linux and bring to market (more) powerful hybrids – a powerful combination.” If there were to be a future for NonStop in clouds, for instance, then there had to be a future for NonStop atop a virtualized world.
Atop a virtualized world: a world where what lies beneath becomes increasingly less relevant. Talking the way he did back in 2015, Fink presented a fantastical image of where the future would take NonStop, and while he has since moved on to other projects, those words proved prophetic. Visionary, if you prefer, as they provoked thoughts of being able to run NonStop as software almost anywhere you cared to, so long as it was on the Intel x86 architecture. And yes, utilizing InfiniBand (IB) as the fabric, then converged Ethernet, and now, HPE Slingshot, anyone? HPE Slingshot was just announced as the world’s only high-performance Ethernet fabric designed for HPC and AI solutions, and speculation has already begun that it could be a fabric to interconnect NonStop with platforms apart from x86, virtual or otherwise. Anyone for adding connectivity to HPC/Apollo in support of real-time analytics, for instance?
For now, the conversation about virtualized NonStop centers on the presence of VMware. There have been numerous evaluations, but participants at recent NonStop TBC events will have heard of enterprises turning to NonStop and VMware for production use. Did everyone hear the presentation by Dell Technologies about their deployment of NonStop and VMware on their own hardware? Of course, this is not an endorsement of going down a hardware path other than that provided by HPE, but it nevertheless opened the door to many “what if?” scenarios being openly discussed.
And that is the beauty of virtualized NonStop today: the NonStop community has many more choices than at any other time in its history. It is an important step, and one that needs a little more context. As Fink noted, wouldn’t it be cool to “bring to market (more) powerful hybrids – a powerful combination”? Enterprise IT has become Enterprise Hybrid IT, and with that transformation there are now many moving parts to deal with. However, by focusing on just one foundational server architecture, such as x86, and configuring VMware across a server farm, NonStop can be positioned just a few electrons away from servers where analysis can be performed in real time and where the algorithms of AI/ML can likewise be leveraged in real time.
This is the real story behind the virtualization of NonStop: the story of many steps taken to get us to where we are today, capable of running real or virtual. Considering where the separation of NonStop from the underlying hardware has taken us so far, it’s difficult to imagine that this story is entering its final chapters. Rather, as enterprise hybrid IT becomes even more diverse over time, catering to IoT, IIoT, V2V, and more, it becomes abundantly clear that what we are witnessing today is only the first of many baby steps to follow. Yes, baby steps that lead us to the realization that, as an operating system, NonStop is on the verge of becoming virtually ubiquitous in terms of its potential to underpin all mission-critical applications.
Excited? Suddenly, it becomes a very small world, after all. All at once, we can interact with technology in ways unimaginable in the twentieth century. Given that we gained an early insight into where NonStop was headed way back in 2015, don’t you think it’s worth participating in future NonStop TBC events? What will we hear of next? Something related to the database, or to development tools? To think, too, that when it comes to virtualization, simulations, and avatars, there is still no better way to interact with the community than in person. Of course, the only certainty is that if you aren’t at NonStop TBC 22 you will miss whatever is announced, and I certainly plan on attending. Will I see you there?