NonStop Trends & Wins: NonStop @ 50 Part I


In keeping with the 50-year theme, here are a few more early stories.  When I joined Tandem in 1982, they were already active in the stock exchanges.  The story goes that the New York Stock Exchange invested in a Tandem in an already crowded data center hosting IBM, Siemens, DEC, and other “big iron” of the time.  There was a pretty bad brown-out, and early in the morning all the various vendors’ CEs came in to repair damaged electronics and rethread sector maps affected by it.  They were all waiting for the Tandem guys to show up, and at about noon the IBM CE went over to the system manager and asked if the Tandem guys were ever going to arrive.  The manager replied, “Why?  The Tandem never went down.”  After that, Tandem started winning stock exchange deals all over the world.  At least back then, availability and data integrity were top priorities in the exchanges.

In the area of data comm, which was my specialty coming into Tandem, Expand was just magical.  The fact that a new node was ‘discovered’ and all the network maps updated automatically just seemed amazing.  The network failovers worked, as did the internal failovers.  A large banking customer had come into Cupertino for presentations and a demo.  At that time, the standard demo involved pulling a board and removing a chip (monitors showed a CPU down).  When the board was put back in, the monitors flagged the bad (missing) chip on that board.  The board was pulled again, the chip replaced, and the board reinserted, and the monitors showed “all good.”  Then the networking demo was given.  Monitors displayed traffic between \CUP (Cupertino) and \NY (New York).  We then pulled the plug on the modem connecting the systems.  The monitors showed \CUP now routing data through \CHICAGO to \NY: no data interruption, just a line switch.  The modem was plugged back in, and once it synced, traffic returned to the faster, direct route straight to \NY.  The customer said it was very impressive that Tandem had set up such an expensive test network.  Test network?  No, that was Tandem’s live network we were demoing on.  The bank was really impressed, both by the demo and by the confidence we had to demo on our own live network.

On Oct. 19, 1987, Tandem’s ‘NonStop’ computers were put to the test as a then-record 604 million shares changed hands on the NYSE in one of Wall Street’s busiest, if bleakest, trading sessions. The Dow Jones industrial average lost 508 points on what became known as Black Monday. The market crashed, but Tandem’s computers didn’t.  In fact, the exchange called the New York Tandem office early in the day for help since the system wasn’t sized for this massive load.  The New York office carried over processors and disks and installed them while the system was running to handle the load.  There was no purchase order and no management approval; the employees in the New York office decided to help a customer in need.  As Jimmy (Jimmy Treybig, CEO of Tandem) said at a presentation many years later – ‘No one was ever fired for doing the right thing’.

For those who have read some of the early papers, you are likely to recall many written by Dr. Jim Gray.  He was a wonderful man, and to say he was extremely bright is an understatement: he was a Turing Award winner and was considered one of the top ten database experts in the world.  In 1985, Jim co-authored a paper now known as “Anon et al.,” which outlined a proposed benchmark standard named DebitCredit.  This would eventually grow into the Transaction Processing Performance Council (TPC), the standards body that measures system performance for Online Transaction Processing (OLTP), among many other things now.  But more on that in a bit.  I was in Cupertino meeting with a large aerospace company on a ‘secret’ project.  They were concerned about scalability and had very specific in-memory requirements.  At a certain point, they were beyond my ability to architect what they wanted.  We were at headquarters, and Tandem folks were always eager to help, so I excused myself from the meeting, saying I’d find somebody.  As I turned one of the corners, who should I run into but Jim Gray, whom I’d met before at TOPS (Tandem Outstanding Performers – the top 2% of employees); we had sailed together in San Diego at the event.  “Hey Jim, can you help me out?”  I explained the situation, and Dr. Jim Gray, in blue jeans, a T-shirt, and flip-flops, popped into the meeting and astounded everyone.  He answered every question and even architected a solution on the whiteboard.  Toward the end, he asked what their budget was, and they said it was open-ended.  Without a moment’s hesitation, Jim said, “Well, a 16-processor NonStop should handle this easily,” with a big smile.  At that point, it appeared there was a budget after all, but Jim left, allowing them to debate what they needed.  I was and still am amazed at the talent Tandem had and how eager everyone was to help everybody else.  It was a culture like no other.

Continuing with the Jim Gray theme, the TPC was formed, and the DebitCredit proposal was formalized as the TPC-A benchmark and later superseded by the order-entry TPC-C benchmark.  TPC-C results are measured in transactions per minute, known as tpmC, along with price/performance in dollars per tpmC.  In 1994, the TPC-C leader was an HP system that delivered 2,000 tpmC at a cost of $2,000 per tpmC.  Tandem decided to blow that number out of the water and put together a 112-processor system that delivered 20,918 tpmC at $1,532 per tpmC.  It was a quantum leap forward in performance, and Tandem offered a one-million-dollar challenge, payable to a charity of their choice, to any company that could beat the number within 18 months.  No other company came close within 18 months; the record stood for over two years.
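The price/performance metric in those results is simple division: the total system cost divided by the throughput in tpmC.  As a quick illustrative sketch using only the figures quoted above (the total system costs here are back-computed assumptions, not numbers from the published reports):

```python
# TPC price/performance: total system cost divided by throughput (tpmC).
# Total costs below are back-computed from the quoted $/tpmC figures,
# purely for illustration.

def price_per_tpmc(total_cost_usd: float, tpmc: float) -> float:
    """Dollars per tpmC, the TPC price/performance metric."""
    return total_cost_usd / tpmc

hp_1994 = price_per_tpmc(2_000 * 2_000, 2_000)        # $2,000/tpmC at 2,000 tpmC
tandem_1994 = price_per_tpmc(20_918 * 1_532, 20_918)  # $1,532/tpmC at 20,918 tpmC

# Tandem's entry delivered roughly 10x the throughput at a *lower* unit cost.
throughput_ratio = 20_918 / 2_000
print(hp_1994, tandem_1994, round(throughput_ratio, 1))  # 2000.0 1532.0 10.5
```

The point of the metric is that raw throughput alone is not enough; a winning result has to scale without the cost per transaction scaling with it.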

The late 1980s and early 1990s were the era of benchmarks.  Customers wanted to buy ‘the fastest gun’, and Tandem found itself pitted against IBM and DEC most often.  One of the larger and best-known benchmarks was the California DMV’s.  It came down to IBM and Tandem, with California publishing the results (it being a government office).  It turned out that Tandem passed all seven performance benchmarks (barely), and IBM failed all seven (also barely).  The funny part of the story: back in Los Angeles, where I worked, Kaiser had asked Tandem to participate in a benchmark they were running, and we had “no-bid” the request since it seemed wired for IBM.  At the IBM benchmark center in North Carolina, it just so happened that the DMV and Kaiser teams were there at the same time.  In the lunchroom, the DMV team told the Kaiser team how well Tandem was doing and that they should definitely have Tandem compete.  Kaiser approached Tandem to reconsider, and after some modifications to the benchmark, Tandem accepted and won.  Two rather big sales for Tandem back in the day.

Jumping forward to 1997, Tandem was struggling, with revenue and growth flatlining.  Tandem had consulted with Microsoft on their “Wolfpack” clustering project and became a reseller of Windows, specifically clustered Windows systems.  In 1997, onstage with Microsoft, we demonstrated the world’s largest Microsoft® Windows NT® Server network, linking 64 Intel Pentium Pro processors in a cluster using Tandem’s ServerNet interconnect technology.  It had a 2-terabyte database (really big back then) and was code-named 2-Ton.  How successful was it?  Let me clip in some very old news articles:

May 14, 1997 — Microsoft Corp. and Tandem Computers Inc. demonstrated the world’s largest Microsoft® Windows NT® Server network operating system-based system, linking 64 Intel Pentium Pro processors in a cluster using Tandem’s ServerNet interconnect technology. The system managed a 2-terabyte database with a 30-billion-row table based on Dayton-Hudson’s data warehouse, which manages retail outlets such as Target and Mervyn’s stores.

June 23, 1997 – Hoping to bolster its position as a computing solutions provider for enterprise networks, Compaq Computer (CPQ) announced an agreement today to acquire Tandem Computers (TDM) for approximately $3 billion in stock.

So a month after the demonstration, Tandem was acquired by Compaq.  In fact, not long after the acquisition, Compaq liked the NonStop name so much that it started branding everything NonStop.  That lasted until they discovered the name was trademarked and meant a system that “has no single component that can cause the system to fail, and any failed component can be repaired or replaced while the system continues to run.”  Needless to say, Compaq had only one NonStop system and had to immediately stop the extra branding.

The next big thing NonStop demonstrated was our ZLE project and demo.  Several of us had been to the Gartner conference in 1998 and saw Roy Schulte talk about the Zero-Latency Enterprise.  This was a Gartner ‘stalking horse’ presentation about the elimination of batch processing within an enterprise.  It was really interesting.  In 1999, one of our Telco customers wanted to grow their database from 5 terabytes to 20 terabytes and asked for a demonstration that NonStop could support 20 TB.  The Cupertino lab/demonstration center delivered a working ZLE system.

The heart of the system was a 128-processor NonStop Himalaya S72000 server.  The Operational Data Store (ODS) was 110 terabytes; it held more than 100 billion call detail records (representing a rolling 90 days of telephone calls).  Two 4-processor HP ProLiant™ servers acted as rating engines, inserting an average of more than 1.2 billion call detail records per day.  That volume, at the time, represented the combined call volume of the five largest telephone companies in the world.  Two more 4-processor ProLiant servers simulated customer service representatives, generating 1,000 queries per second; this volume represented more than 40,000 customer service representatives.  The original copy of the customer data resided on the AlphaServer (DEC) platform.  A data mart kept track of corporate customers in case it became necessary to send a credit-limit notification.  A 4-processor ProLiant server was used for data mining.  A 2 x 2 ProLiant server cluster provided government compliance, with the ability to satisfy regulations involving detection of key events within a transaction stream and notification within 6 seconds.  For the demonstration, MicroStrategy was used to launch large ad hoc queries (for example, determining which customers in the database fit a particular model).  Finally, a bank of 10 flat-screen display panels showed key statistics of the demonstration (total call detail records, calls per second, customer service response time, and so on).
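Those workload figures hang together arithmetically: a rolling 90-day store of roughly 100 billion call detail records implies an ingest rate of about 1.1 billion records a day, in line with the quoted 1.2 billion inserts, and 1,000 queries per second spread across 40,000 simulated representatives works out to one query per rep every 40 seconds.  A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check on the ZLE demo workload figures quoted above.

records_in_store = 100e9   # ~100 billion call detail records in the ODS
rolling_days = 90          # rolling 90 days of calls

# Steady state: daily inserts ≈ store size / retention window
daily_ingest = records_in_store / rolling_days
print(f"{daily_ingest / 1e9:.2f} billion records/day")  # ≈ 1.11

queries_per_sec = 1_000
service_reps = 40_000
# Seconds between queries for each simulated representative
secs_per_query_per_rep = service_reps / queries_per_sec
print(f"one query per rep every {secs_per_query_per_rep:.0f} s")  # every 40 s
```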

Needless to say, NonStop proved it could handle 20 TB and a lot more.  The flat-panel displays showed the utilization of all 128 processors as bar graphs.  When the MicroStrategy query was launched, all 128 processors went to 100 percent busy, all while maintaining the call detail record inserts and the call center transactions with no impact on response time.  It was jaw-dropping.  And selling the real-time enterprise architecture known as ZLE was the NonStop strategy for several years.

We had several ZLE customer success stories, but one question that usually came up in demonstrations was how Compaq itself was using it.  Compaq did decide to use ZLE and embarked on a real-time supply-chain project that kicked off just before Compaq was acquired by HP.

After the HP acquisition, it was decided to move forward with the “iHub” project, but its scope changed from real-time supply chain to SAP integration.  Both Compaq and HP had very large SAP systems, and integrating the two very different implementations was going to be difficult.  The iHub ZLE system was to provide a unified, real-time front end to the two SAP systems until (as it turned out, well after) they were combined.  This was a very successful project and went into production in 2004.  In 2005, The Winter Group gave it several awards, including:

1st place, World’s Largest Event Store
1st place, World’s Busiest Event Store
2nd place, World’s Most Rows: Scientific/Other (not OLTP, not DW)
4th place, World’s Largest Database Overall

The iHub scope covered every HP sales order, shipment, invoice, BOM (Bill of Materials), ECM (Engineering Change Management) record, and GPG (Global Product Group) code.  After it went into production in 2004, the HP IT team never upgraded the hardware or the OS or even touched the system.  It was finally decommissioned in 2014, having run over 10 years without a second of downtime, planned or unplanned.

As ZLE was fading as a strategy, a number of venture-capital startups were selling a technology known as the database appliance.  It was all the rage, allowing users to quickly load data and start running queries on these smaller appliance systems.  NonStop noted that we already had an excellent query system that could be loaded quickly, and announced the Neoview system.  NonStop was getting quite a bit of interest, since HP was a reliable, known company as opposed to a venture-capital startup.  Every company wanted a presentation on Neoview.

Then Mark Hurd became the CEO of HP, displacing Carly Fiorina.  Mark had run Teradata (then an NCR division) and knew a lot about NonStop, since Teradata’s massively parallel architecture was based on the NonStop design.  Mark quickly got wind of Neoview and, after an internal presentation, decided it was going to be a Teradata killer.  Neoview was initially designed as a database appliance, not an Enterprise Data Warehouse (EDW), but one doesn’t argue with the CEO, so the ramp-up of Neoview began, along with HP’s hiring of Randy Mott, formerly CIO of Walmart and at the time CIO of Dell.  He was tasked with a massive transformation of HP’s IT environment, which, among many other things, meant converting over 650 data marts into one EDW built on Neoview.  The transformation was a three-year project, but the EDW was such a major component that it was put into production about a year after Randy was hired.  So NonStop (Neoview) was the EDW for HP for several years.

Mark Hurd unfortunately made some enemies and was ousted from HP (he started at Oracle as president the very next day).  Also unfortunately, Neoview was considered his pet project and was end-of-lifed soon after his departure.  This not only annoyed the Neoview customers but also put Randy Mott in a horrible position, since Neoview was the HP EDW.  It hadn’t been easy to get everything onto Neoview, but it was far harder getting it off, and the replacement took several years and multiple technologies.

We’ll continue Part II in the next issue of The Connection.


  • Justin Simonds

Justin Simonds is a Master Technologist for the Americas Enterprise Solutions and Architecture group (ESA) under the mission-critical division of Hewlett Packard Enterprise. His focus is on emerging technologies, business intelligence for major accounts and strategic business development. He has worked on Internet of Things (IoT) initiatives and integration architectures for improving the reliability of IoT offerings. He has been involved in the AI/ML HPE initiatives around financial services and fraud analysis and was an early member of the Blockchain/MC-DLT strategy. He has written articles and whitepapers for internal publication on TCO/ROI, availability, business intelligence, Internet of Things, Blockchain and Converged Infrastructure. He has been published in Connect/Converge and Connection magazine. He is a featured speaker at HPE’s Technology Forum and at HPE’s Aspire and Bootcamp conferences and at industry conferences such as the XLDB Conference at Stanford, IIBA, ISACA and the Metropolitan Solutions Conference.
