
Shadowbase Zero Data Loss
Shadowbase version 7.000 launched this past year and contains our first official release of this patented, groundbreaking technology.
Why Does it Matter?
Because virtually all HPE NonStop data replication products available today use asynchronous data replication technology (RDF/ZLT is an exception, but it is a special case that we would be happy to discuss with you).
How Asynchronous Data Replication Works
Only after the application creates data changes does the replication engine (shortened to “replication” in this article) copy and transport those changes to the target system. This means that the source and target are not directly synchronized with each other.
This delay, called Replication Latency, is the time from when the changes are made at the source until they are safe-stored or applied on the target system.
The length of this delay represents your potential for data loss should the source system suddenly disappear due to an outage.
Replication Latency & Average Transactions per Second (TPS) Metrics are Key
What is your replication latency? (Note that most replication engines will report that information via a STATUS command.)
What is your application’s average TPS rate?
- If replication latency = 5 seconds and TPS = 1,000, then 5 × 1,000 = 5,000 transactions are at risk if the source fails.
- If replication latency = 15 seconds and TPS = 5,000, then 15 × 5,000 = 75,000 transactions could be lost.
Key takeaway: Higher replication latency or TPS increases the volume of at-risk data.
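To repeat this estimate with your own numbers, here is a minimal sketch in Python (the function name and sample values are ours, purely for illustration); substitute the replication latency reported by your replication engine’s STATUS command and your application’s average TPS:

    # Illustrative sketch only:
    # transactions at risk = replication latency (seconds) x average TPS.

    def transactions_at_risk(replication_latency_seconds, average_tps):
        # Estimated number of transactions exposed to loss if the source fails now.
        return replication_latency_seconds * average_tps

    # The latency/TPS pairs below match the worked examples above; replace them
    # with the values from your own environment.
    for latency_s, tps in [(5, 1_000), (15, 5_000)]:
        print(f"latency = {latency_s}s, TPS = {tps:,}: "
              f"{transactions_at_risk(latency_s, tps):,} transactions at risk")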
Should you care?
Does this matter to your business? Maybe, and maybe not (if you are in the latter camp, be sure to read the rest of the article).
- For ATM transactions, some data loss may be tolerated because the business feels the average monetary value of a transaction is low – it is just a cost of doing business.
- For POS transactions, the amount of potential data loss may still be below the institution’s threshold.
- These transactions may ultimately be recoverable from other sources, such as ATM or POS logs, after the outage; we discuss this aspect more below.
- For online banking transactions, these values can add up quickly.
- What about funds transfers such as money-center or inter-bank loans to meet cash margins?
- In some cases, losing any data can affect the market itself, for example, losing securities pricing data.
- Clearly, losing this type and quantity of data has potentially catastrophic consequences.
As the saying goes, “Beauty is in the eye of the beholder.” The same is true of an organization’s data and its tolerance for data loss. While the explicit amount of data loss may be low for some applications, and some data loss (or ‘temporary’ inaccessibility) can be tolerated, what about other mission-critical applications that process high-value transactions?
Consider a POS system at a local pharmacy: an outage resulting in data loss means that people are not getting their medication. What about critical healthcare services such as dosing information? Losing that information can put a patient’s life at risk.
These days, economic activity is dependent upon previous transactions, so data loss has an implicit impact on the macro environment that is not easily visible. Whether we like it or not, data loss has consequences.
Have you ever experienced a sudden catastrophic event that completely lost your source system and data (i.e., the source system was simply gone)?
For many customers, the answer is no.
The simple fact remains: most have never experienced a 9/11 type of outage.
For most of us, when the source fails or becomes inaccessible, it is often eventually recoverable, and the data remains intact (never mind that the data is unavailable until it is recovered). But what about those cases when it is not recoverable? Where do you get the data from then?
ZDL isn’t Just About Data Loss – it’s About Data Availability
While (most of) the data may be stored on a backup, understand that in today’s 24×7 environment, data availability is key for most mission-critical applications, certainly those in the finance, health care, and manufacturing sectors.
When discussing tolerance for data loss, an important question arises: how long can this data remain inaccessible?
While it may be possible to eventually recover some or all of it:
- Can we effectively start application processing given that we missed replicating data for some period?
- How much chaos, uncertainty, and mistakes will arise from not having that data?
- Will it result in regulatory fines, loss of brand reputation, customer and market unrest, or even loss of life?
RPO Can Significantly Impact RTO
It may not be clearly understood that the Recovery Point Objective (RPO) profile of an application (i.e., whether it uses asynchronous or synchronous replication) can have a significant impact on the application’s Recovery Time Objective (RTO) profile. The critical question is: how fast can the application come back online if it is missing data?
Why? Because some applications cannot (or should not) go live without full, correct, consistent, and complete information after a catastrophic failure.
If your data matters, Shadowbase Zero Data Loss software is a solution that can help. It removes the potential for loss of committed data during a catastrophic loss of the source environment and the subsequent failover to the backup.
New Shadowbase DoLockStep Solution
In a similar vein, we are working on an important new feature called Shadowbase DoLockStep. This new synchronization facility is being designed to allow the customer’s application to coordinate its processing with the Shadowbase asynchronous data replication engine to ensure that data is delivered and safe-stored at the target. This is useful for a variety of use cases, such as:
- Processing high-value transactions and needing to know that they have been delivered and safe-stored on the target system before entering a critical processing function at the source.
- Verifying all data has been successfully delivered to the target before a planned failover to the target as part of a Shadowbase Zero Downtime Migration.
- Simply knowing that all data has been successfully delivered to the target, for example, before performing a backup on the target system.
The new Shadowbase DoLockStep solution is being designed as a direct replacement for customers using RDF’s DOLOCKSTEP functionality, and includes additional enhancements.
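While the DoLockStep interface is still being designed, the coordination pattern it enables can be sketched in general terms. The Python sketch below uses entirely hypothetical function names (apply_on_source, wait_for_target_safe_store, and a stubbed replication query); it is not the actual Shadowbase API. It simply shows the shape of the pattern: the application completes its source-side work, then blocks until the replication engine confirms the change has been delivered and safe-stored on the target before entering the critical processing step.

    # Conceptual sketch only. Every name below is a hypothetical placeholder;
    # the actual Shadowbase DoLockStep interface is still being designed.
    import time

    def apply_on_source(txn):
        # Placeholder for the application's normal source-side processing.
        print(f"applied {txn} on source")
        return "commit-token-1"  # hypothetical identifier for the change

    def replication_has_safe_stored(commit_token):
        # Placeholder query to the replication engine: has this change been
        # delivered and safe-stored on the target system yet?
        return True  # stubbed out so the sketch runs standalone

    def wait_for_target_safe_store(commit_token, timeout_s=30.0, poll_s=0.5):
        # Block until the target confirms safe-store, or raise on timeout.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if replication_has_safe_stored(commit_token):
                return
            time.sleep(poll_s)
        raise TimeoutError("target did not confirm safe-store in time")

    def process_high_value_transaction(txn):
        token = apply_on_source(txn)        # 1. do the source-side work
        wait_for_target_safe_store(token)   # 2. wait for target safe-store
        print(f"entering critical processing for {txn}")  # 3. now safe to proceed

    process_high_value_transaction("funds-transfer-example")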
Is Shadowbase Zero Data Loss or DoLockStep right for me?
While there are differences in functionality, ease of configuration, and application impact to weigh when making this decision, having the choice makes it easier to redefine data integrity and eliminate data loss in a way that fits existing customer environments.
Want to learn more? Contact us today!
Hewlett Packard Enterprise globally sells and supports Shadowbase solutions under the name HPE Shadowbase. Contact your local HPE representative for further information about how HPE Shadowbase software can assist you in protecting and increasing the availability of your data.
For more information, please contact us or visit our website. For additional information, please view our Shadowbase solution videos: https://vimeo.com/shadowbasesoftware, or follow us on LinkedIn.
Specifications subject to change without notice. Trademarks mentioned are the property of their respective owners. Copyright © Gravic, Inc. 2025.