
While many are familiar with what happened on 9/11, most do not know one NonStop user's story. This customer followed the best practices of the time, having set up a Disaster Recovery operations center after the 1993 World Trade Center bombing. The customer ran its production system in a building adjacent to the World Trade Center's towers and followed classic DR principles for its data: in addition to the copy at the DR site, another backup copy was stored offsite.
However, the customer picked what was believed to be one of the most secure locations in the world, let alone New York City, to store one of its backup copies: the basement of the World Trade Center, where it was lost on 9/11. With both production and the first backup compromised, the company fell back on its second backup in Queens, New York, and was eventually able to restore operations at that facility.
“Those Who Cannot Remember the Past Are Condemned to Repeat It”
This famous quote, from "The Life of Reason, or The Phases of Human Progress" by George Santayana, reminds us of the importance of learning from mistakes, whether they were ours or someone else's.
This New York trading company learned from the 1993 World Trade Center bombing and survived 9/11 because it remembered that event and adapted its Business Continuity strategy accordingly.
With All This Talk of Disaster – How Much Data Can You Afford to Lose?
- Losing any data for a large-value funds transfer application can bankrupt companies
- Losing trading activity can crash markets
- Losing financial data can bring unwanted attention from auditors and result in regulatory fines
- Losing any data for a health care application could mean improper care, leading to patient illness or death
[Graph: Estimated Cost of Data Loss*]
*Do These Numbers Seem Too Low?
Yes, because they are.
- These numbers were calculated over a decade ago and do not account for inflation.
- The graph assumes 500 TPS; current transaction rates in the NonStop world are often at least 10x higher, and the RPOs shown are far lower than what actual customer architectures suffer. (We have seen RPOs ranging from a few seconds to a few days!)
Many NonStop Customers Have Never Experienced a Catastrophic Outage and Its Consequences
Understanding the implications of such an outage, and in particular of lost data, is critical to assessing business risk. The amount of data at risk depends on the length of the replication latency period and the replication technology in use.
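To make the exposure concrete, the data at risk is roughly the transaction rate multiplied by the replication latency (the effective RPO). The short calculation below is a back-of-the-envelope sketch only; the transaction rate, latency, and per-transaction value are illustrative assumptions, not figures taken from the graph above.

```python
# Back-of-the-envelope estimate of data at risk during a catastrophic outage.
# All inputs are illustrative assumptions, not values from the graph above.

tps = 5000                 # transactions per second (10x the graph's 500 TPS)
replication_latency_s = 2  # seconds of replication latency (the effective RPO)
avg_value_per_txn = 1000   # hypothetical average business value per transaction ($)

txns_at_risk = tps * replication_latency_s
value_at_risk = txns_at_risk * avg_value_per_txn

print(f"Transactions at risk: {txns_at_risk:,}")       # 10,000
print(f"Estimated value at risk: ${value_at_risk:,}")  # $10,000,000
```

Even a couple of seconds of replication latency at modern NonStop transaction rates leaves thousands of committed transactions unprotected; at an RPO measured in minutes or days, the exposure grows proportionally.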
HPE Shadowbase Zero Data Loss (ZDL)
Prevent Data Loss for Your Most Critical Applications
Mismatched data between systems means data corruption, jeopardizing your business and wreaking havoc on your customers.
Most NonStop customers have a backup system and a data replication engine. However, almost all are at serious risk of losing some of their critical data in a catastrophic failure, because they do not synchronize their application processing with their replication engine to ensure their data is fully delivered to the target system. Worse, almost none are aware of this risk.
In a Disaster, How Do You Know if All of Your High-value Data Actually Arrived at the Backup System?
Most data replication engines in use today rely on Asynchronous Technology to extract the data changes that the application makes at the source and deliver them to the target. The replication engine is therefore decoupled from the source application's processing and works completely independently of it.
As the application commits transactional changes, they are extracted and replicated to the target system a short time later. During this "replication latency" period, committed source application changes are at risk of being lost if a catastrophic failure makes the source system inaccessible or inoperable (the sketch below illustrates this window).
- With Asynchronous Replication, transactional data can be lost during a catastrophic source system failure because data committed on the source may not yet be replicated to the backup due to latency.
- With Synchronous Replication, no committed data loss occurs during a source system failure.
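The timing is easiest to see in a small simulation. The sketch below is purely illustrative; it does not model any Shadowbase or RDF component, and the tick-based lag is an assumption made only to show how committed-but-unreplicated changes accumulate in the asynchronous window and are lost when the source fails.

```python
# Minimal sketch of the asynchronous replication latency window (illustrative only).
# The application commits transactions to a source log; a decoupled replicator
# ships them to the target some time later. Anything committed but not yet
# shipped when the source fails is lost.

from collections import deque

source_log = deque()   # committed on the source, awaiting replication
target_log = []        # safely delivered to the backup system
REPLICATION_LAG = 3    # replicator runs 3 "ticks" behind the application (assumption)

for tick in range(10):
    # Application work: commit one transaction per tick, independent of replication.
    source_log.append(f"txn-{tick}")

    # Replication work: ships older changes only; it is decoupled from the commits.
    if tick >= REPLICATION_LAG:
        target_log.append(source_log.popleft())

# A catastrophic source failure occurs here.
print("Delivered to target:", target_log)
print("Committed but lost :", list(source_log))   # the replication latency window
```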
Key Shadowbase ZDL Features
- Resolves key challenges of Asynchronous Replication using sophisticated tokens and messaging
- Safe-stores application data changes on your backup before COMMIT is allowed to complete (see the sketch after this list)
- Avoids specialized hardware and distance limitations
- Scales for high performance
- Supports parallel operations
- Innovative, patented technology
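Conceptually, the difference is where the COMMIT is allowed to complete. The sketch below is a minimal illustration of that ordering only; it is not the Shadowbase ZDL implementation or API, and the queue, token, and function names are invented for the example.

```python
# Minimal sketch of the zero-data-loss ordering (illustrative only, not the
# Shadowbase ZDL protocol): the source COMMIT does not complete until the
# backup acknowledges that the transaction's changes are safe-stored there.

import queue
import threading

to_backup = queue.Queue()   # changes shipped to the backup system
acks = {}                   # per-transaction events signalled when safe-stored
backup_store = []           # stand-in for the backup system's durable storage

def backup_system():
    """Receives changes, safe-stores them, then acknowledges back to the source."""
    while True:
        txn_id, changes = to_backup.get()
        if txn_id is None:
            break
        backup_store.append((txn_id, changes))  # durable on the backup
        acks[txn_id].set()                      # token back to the source: safe-stored

def commit(txn_id, changes, timeout=5.0):
    """Source-side commit: block until the backup confirms the data is safe-stored."""
    acks[txn_id] = threading.Event()
    to_backup.put((txn_id, changes))
    if not acks[txn_id].wait(timeout):
        raise RuntimeError(f"{txn_id}: no safe-store acknowledgment; do not complete COMMIT")
    return "committed"   # only now is the application told the COMMIT succeeded

threading.Thread(target=backup_system, daemon=True).start()
print(commit("txn-1", {"account": 42, "amount": 100.00}))
to_backup.put((None, None))   # shut the sketch down cleanly
```

The point of the sketch is the ordering alone: the application is never told its COMMIT succeeded until the data already exists on the backup, so a source failure cannot strand committed-but-unreplicated changes.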
HPE Shadowbase DoLockStep
Avoid Data Loss by Coordinating Application Processing with Replication Activity
Gravic is also excited to announce HPE Shadowbase DoLockStep, which allows the application to synchronize its processing with the replication engine at critical points. In some environments, this synchronization may be sufficient to protect your valuable data.
At Times, It Is Important to Know if Your High-Value Data Actually Arrived at the Backup System Before Continuing Processing on the Source
For example, before performing a critical processing task on the source, it is often important to make sure that all (or certain) data recently created or updated on the source actually made it to the target and is safe-stored there.
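As a rough illustration of this pattern (not the Shadowbase DoLockStep API; the stream, marker, and function names here are assumptions made for the sketch), an application can inject a marker into the replication stream and pause until the target confirms it has applied everything up to that marker.

```python
# Minimal lock-step sketch (illustrative only, not the Shadowbase DoLockStep API):
# before a critical step, the source injects a marker into the replication stream
# and waits until the target confirms it has applied everything up to that marker.

import itertools
import queue
import threading

replication_stream = queue.Queue()   # source changes flowing to the target, in order
marker_seen = {}                     # events signalled when the target reaches a marker
marker_ids = itertools.count(1)

def target_applier():
    """Target side: applies changes in order and signals when it reaches a marker."""
    while True:
        item = replication_stream.get()
        if item is None:
            break
        kind, payload = item
        if kind == "change":
            pass                        # apply the change on the target (omitted here)
        elif kind == "marker":
            marker_seen[payload].set()  # everything before this marker has been applied

def wait_for_replication(timeout=5.0):
    """Source side: block until all changes sent so far have been applied on the target."""
    marker = next(marker_ids)
    marker_seen[marker] = threading.Event()
    replication_stream.put(("marker", marker))
    if not marker_seen[marker].wait(timeout):
        raise RuntimeError("target has not caught up; hold the critical operation")

threading.Thread(target=target_applier, daemon=True).start()
for i in range(3):
    replication_stream.put(("change", f"update-{i}"))  # normal application updates
wait_for_replication()                                 # the lock-step point
print("All prior updates confirmed on the target; safe to run the critical task.")
replication_stream.put(None)
```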
Key Shadowbase DoLockStep Features
- Coordinates your application with your replication engine
- Provides a Transaction Tracking (TRACKTX) feature so that replicated data is fully delivered and properly serialized on the target system
- Scales for high performance and parallel operations
- Coordinates notifications for key operations, for example, simplifying planned shutdowns and restarts whenever your application needs to verify that all data has arrived at the target
Replacing RDF DOLOCKSTEP with Shadowbase DoLockStep
Shadowbase DoLockStep was designed to enhance and replace the functionality provided by the RDF DOLOCKSTEP feature. Replacing the application's API calls to RDF DOLOCKSTEP with Shadowbase DoLockStep API calls is a straightforward process and allows the application designer to improve processing using the additional capabilities provided in Shadowbase DoLockStep.
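One practical way to keep such a migration straightforward is to isolate the lockstep call behind a single application-owned wrapper. The sketch below is hypothetical: neither call shown is a real API signature; both are placeholders for whichever vendor-documented call your environment actually uses.

```python
# Hypothetical wrapper illustrating how an application might isolate its lockstep
# call so that moving from RDF DOLOCKSTEP to Shadowbase DoLockStep is a one-place
# change. Neither function below is a real API; both are placeholders.

USE_SHADOWBASE = True   # flip once the Shadowbase DoLockStep facility is configured

def rdf_dolockstep_call():
    """Placeholder for the application's existing RDF DOLOCKSTEP invocation."""
    print("waiting via RDF DOLOCKSTEP (placeholder)")

def shadowbase_dolockstep_call():
    """Placeholder for the equivalent Shadowbase DoLockStep invocation."""
    print("waiting via Shadowbase DoLockStep (placeholder)")

def wait_until_replicated():
    """The only lockstep entry point the rest of the application ever calls."""
    if USE_SHADOWBASE:
        shadowbase_dolockstep_call()
    else:
        rdf_dolockstep_call()

# Business logic calls the wrapper, never the product API directly, so the
# migration does not touch the application's processing code.
wait_until_replicated()
```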
The Gravic Shadowbase Development Team has been hard at work and is excited to announce that both HPE Shadowbase Zero Data Loss and HPE Shadowbase DoLockStep Synchronization Facility are now released under controlled availability for customers looking to switch from RDF or improve their current synchronization technology by upgrading to HPE Shadowbase.
Hewlett Packard Enterprise globally sells and supports Shadowbase solutions under the name HPE Shadowbase. For more information, please contact your local HPE Shadowbase representative or visit our website. For additional information, please view our Shadowbase solution videos: https://vimeo.com/shadowbasesoftware, or follow us on LinkedIn.
NOTICE: Each user’s experiences will vary depending on its system configuration, hardware and other software compatibility, operator capability, data integrity, user procedures, backups and verification, network integrity, third-party products and services, modifications and updates to this product, and others, as well as other factors. As a result, the ZDL product does not guarantee that you will not lose any data; all user warranties are provided solely in accordance with the terms of the product License Agreement. Please consult with your supplier and review our License Agreement for more information.
Specifications subject to change without notice. Trademarks mentioned are the property of their respective owners. Copyright © Gravic, Inc. 2025.