HPE recently announced the availability of a new Pathway (ACS) feature called Automatic Weights Reconfiguration (AWR). The feature is available in the L22.09 Release Version Update (RVU).
AWR significantly enhances Pathway's ability to continue providing application service availability in the event of unplanned process, CPU, or server outages.
It is therefore timely to revisit this article, which has been updated to include details about AWR and the ways in which it improves Pathway application availability when coupled with an online data replication engine to provide a comprehensive business continuity solution.
HPE NonStop Pathway is a mainstay of today’s mission-critical systems. It implements application scalability by distributing workloads across dynamic pools of application server processes (serverclasses) that in turn are spread across multiple HPE NonStop server CPUs. It offers high levels of application availability by monitoring and automatically restarting server processes in the event of failure.
The Pathway model delivers excellent availability for a single HPE NonStop server, providing local fault tolerance. But this model still necessitates application outages for certain planned system-maintenance activities, and it offers no protection against unplanned system outages.
These limitations are addressed by Pathway domains (PD), which allow the serverclasses to be distributed across multiple NonStop server systems. This capability improves application scalability and availability, protecting against some server failures. However, even though application availability is improved by PD, if the database being used by those applications becomes unavailable, then the application is still down.
To survive the failure of the critical server containing the database, the application database must be replicated on another server. The databases are kept synchronized by data replication. Thus, if the server with the database fails, then applications can continue to run using the surviving server and its database.
PD and data replication complement each other perfectly. PD maintains application availability, and HPE Shadowbase maintains data availability. Combined, they provide scalability and protection from any single point of failure, enabling advanced business continuity architectures. Pathway domains and Shadowbase data replication are designed to work perfectly together!
HPE Pathway Domains
A Pathway domain enables the configuration of up to four Pathway environments (PATHMONs) that can behave as a single, large, integrated Pathway application. The serverclasses are replicated in each PATHMON. A Pathway domain can therefore span up to four physical NonStop servers.
By replicating an application environment across the members of a Pathway domain, any of the PATHMON environments within a domain may be taken out of service (for an application software upgrade, for example), or suffer an unplanned outage, and the remaining environments within the domain will continue processing with no application service interruption. This replication is made possible by Application Cluster Services (ACS), an internal component of Pathway, which load-balances serverclass requests across the domain. If a Pathway environment becomes unavailable, ACS automatically directs requests to the remaining PATHMON environments within the domain.
In Figure 1, four PATHMONs ($PM1 through $PM4) are configured in a domain spanning two HPE NonStop server nodes (\NodeA and \NodeB). The same serverclass (SC1) is running under each PATHMON, so any PATHMON in the domain can service any request for that serverclass. Under normal conditions, requests destined for this serverclass are routed by ACS to any available PATHMON in the domain where SC1 is defined, regardless of the node.
Figure 1 – Pathway Domain Comprising Four PATHMON Environments Over Two Nodes
To use PD, the Pathway system administrator creates a Pathway domain configuration in the ACSCTL file for each node. The configuration defines the environments in the domain, specifically, the PATHMON names qualified by node name if the PATHMON is remote. In addition to the domain name and the list of PATHMONs in the domain, a weighting factor can be configured for each PATHMON in the domain. ACS uses the weighting factor to distribute serverclass links. For example, on \NodeA,
%PDOM = $PM1:40, $PM2:40, \NodeB.$PM3:10, \NodeB.$PM4:10
In this code example:
- %PDOM is the name of the Pathway Domain
- $PM1 and $PM2 are PATHMONs executing on the local node, the one on which this Pathway domain (%PDOM) is defined (\NodeA in this case). Each is weighted at 40, so each gets approximately 40% of the serverclass links created for server requests for this domain.
- $PM3 and $PM4 are remote PATHMONs on \NodeB, and each receives approximately 10% of the links created. The weighting factors must total 100%.
If PATHMON $PM1 is taken down for an upgrade, then ACS will route further requests only to PATHMONs $PM2 through $PM4 until $PM1 is returned to service. Likewise, if server node \NodeA fails, then ACS will automatically route requests only to the two remaining PATHMONs in the domain on \NodeB.
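To make the weighting and failover behavior concrete, here is a minimal illustrative model in Python of how serverclass links might be distributed. It is a sketch of the routing concept only, not actual ACS code (ACS's real algorithm may differ); the PATHMON names and weights come from the %PDOM example above.

import random

# Domain configuration mirroring the %PDOM example: two local and two
# remote PATHMONs, weighted 40/40/10/10.
domain = {"$PM1": 40, "$PM2": 40, "\\NodeB.$PM3": 10, "\\NodeB.$PM4": 10}

def choose_pathmon(weights, available):
    """Pick a PATHMON for a new serverclass link, proportionally to its
    configured weight, considering only PATHMONs that are currently up."""
    candidates = {pm: w for pm, w in weights.items() if pm in available and w > 0}
    if not candidates:
        raise RuntimeError("no available PATHMON in the domain")
    names = list(candidates)
    return random.choices(names, weights=[candidates[n] for n in names])[0]

# Normal operation: links are spread ~40/40/10/10 across the domain.
up = set(domain)
print(choose_pathmon(domain, up))

# $PM1 taken down for an upgrade: links now go only to the other three.
up.discard("$PM1")
print(choose_pathmon(domain, up))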
But What About the Data?
All mission-critical business continuity architectures involve multiple (at least two) geographically dispersed processing nodes so that one node can continue to offer business services in the event of an outage of any other node. This redundancy is the basis for continuous application availability, and several requirements must be met in order to accomplish it. Two of the most fundamental requirements are that the data the business services require to operate and the business services themselves (applications) must remain available in the event of a failure.
This availability is typically achieved via some form of application duplication across the systems and data replication between the systems. In the event of one node failing, the applications and data are readily available on the other system; and users can be quickly switched (re-routed) to it. We have discussed how PD maintain application availability. We now discuss how data replication can maintain data availability.
Improving availability via data replication depends upon having at least two nodes, each being capable of hosting the application database. As an application makes changes (inserts, updates, and deletes) to its local database (the source database), those changes are sent immediately over a communication channel to the target system, where they are applied to the target database. The target database typically resides on another independent node that may be hundreds or thousands of miles away. The facility that gathers changes made to the source database and applies them to the remote target database is known as a replication engine.
Users can pick from a variety of levels of availability across a continuum via different system architectures. These levels are characterized by the amount of data loss resulting from a failure, known as the Recovery Point Objective (RPO), and by the amount of time taken to restore service after the failure, known as the Recovery Time Objective (RTO). The architectures range from high availability (roughly an hour of downtime per year) to continuous availability (almost no downtime per year, with recovery happening in seconds).
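As a rough back-of-envelope illustration of these metrics, for asynchronous replication the RPO is bounded by the updates still in flight when the failure occurs, and the RTO by the time needed to detect the outage and switch users. All figures below are assumptions chosen for the example, not product measurements.

# Back-of-envelope RPO/RTO estimate for an asynchronous replication setup.
# Every figure here is an illustrative assumption.
replication_latency_s = 0.5   # time for a change to reach the target (assumed)
tx_rate_per_s = 1000          # source transaction rate (assumed)
detect_failure_s = 5          # time to detect the node outage (assumed)
switch_users_s = 10           # time to re-route users to the survivor (assumed)

rpo_tx = replication_latency_s * tx_rate_per_s   # transactions potentially lost
rto_s = detect_failure_s + switch_users_s        # time until service is restored

print(f"RPO ~ {rpo_tx:.0f} transactions ({replication_latency_s}s of updates)")
print(f"RTO ~ {rto_s}s")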
Active/Passive Systems
Uni-directional (one-way) replication (Figure 2) is often used to keep a passive backup (target) system synchronized with an active (source) production system. These systems are known as active/passive systems. All online users are connected to an active node that processes all transactions and replicates database changes to a geographically remote standby database, keeping the two databases in (or nearly in) synchronization. If the active node fails, the applications at the backup node can remount the database for read/write access and take over the role of the original active node.
Figure 2 – Active/Passive Uni-directional Replication
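At its core, any such replication engine is a capture-and-apply pipeline. The following minimal Python sketch shows that concept only; a real engine such as HPE Shadowbase reads the TMF audit trail and handles transaction boundaries, ordering, filtering, and recovery, none of which is shown here.

import queue

change_log = queue.Queue()   # stands in for the source's change/audit log
target_db = {}               # stands in for the remote target database

def capture(op, key, value=None):
    """Collector side: record a source-database change for shipment."""
    change_log.put((op, key, value))

def apply_changes():
    """Consumer side: apply shipped changes to the target, in order."""
    while not change_log.empty():
        op, key, value = change_log.get()
        if op in ("insert", "update"):
            target_db[key] = value
        elif op == "delete":
            target_db.pop(key, None)

capture("insert", "acct:1001", {"balance": 250})
capture("update", "acct:1001", {"balance": 300})
apply_changes()
print(target_db)   # the target now mirrors the source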
Sizzling-Hot-Takeover Systems (SZT)
A Sizzling-Hot-Takeover (SZT) system is similar to an active/passive architecture using uni-directional replication, as described above, except that it is immediately ready to start processing transactions if a failover occurs (Figure 3). In this configuration, the applications are running on the backup system, and the local copy of the application database is already open for read/write access. It is in all respects an active/active system, with the exception that all user transactions are directed to the active node (thereby avoiding data collision issues). The SZT system can take over processing very quickly because its local database is synchronized with the active database and is completely consistent and accurate. The applications are already up and running with the database open for read/write access.
Figure 3 – Sizzling-Hot-Takeover (SZT)
If the active node fails, all that is required for failover is to switch the users or their transactions to the passive node. The switch can be accomplished in seconds to sub-seconds, leading to very small RTOs. To operate in this mode, it is essential that the replication engine being used allows the application processes to also open the target database for read/write access.
Active/Active Systems
An active/active configuration takes the SZT arrangement one step further and provides continuous availability. Rather than routing all transactions to one active node as in the SZT case, all nodes in an active/active network are simultaneously processing transactions for the same application using their local copies of the application database (Figure 4).
Figure 4 – Active/Active Bi-directional Replication
Bi-directional data replication is configured between each node pair; therefore, any change an application makes to its local copy of the database is immediately replicated to the other nodes in the application network (each database is acting simultaneously both as a source and a target database). Thus, all applications have the same view of the application database; and all can process online transactions.
If a node fails, then all that needs to be done is to route all further transactions to the surviving node, which is known to be working. A consideration with active/active systems is the possibility that the same data item might be changed in each copy of the application database within the replication-latency interval. If this happens, each change is replicated to the other database, and the replicated change overwrites the original change there. As a result, the two databases differ, and both are wrong. This is called a data collision. It is important that the data replication engine is able to detect and resolve data collisions.
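To illustrate the idea, the sketch below detects a collision by comparing row versions and resolves it with a simple "later update wins" rule. This is one possible policy chosen for the example, not Shadowbase's actual implementation; real engines offer several detection and resolution methods.

def apply_replicated_update(local_db, key, new_value, new_ts, source_ts):
    """Apply an update replicated from the other node.

    source_ts is the timestamp of the row image the sending node updated.
    If the local row changed since then, both nodes updated the same data
    item within the replication-latency window: a data collision."""
    local_value, local_ts = local_db.get(key, (None, 0))
    if local_ts != source_ts and new_ts <= local_ts:
        return  # collision: the local update is later, so keep it
    local_db[key] = (new_value, new_ts)  # no collision, or remote update wins

db = {"acct:7": ("balance=100", 10)}
# A replicated update based on an older row image (source_ts=5, local_ts=10):
apply_replicated_update(db, "acct:7", "balance=150", 8, 5)
print(db)  # the local update (ts=10) wins; both nodes converge on one value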
Active/active configurations are not limited to two nodes. Rather, any number of nodes may make up the application network. Changes made to its database by one node are replicated to all of the other nodes, so that all nodes have the same copy of the application database. More specifically, each of the four nodes in a maximum Pathway domain configuration can have its own database and can process any request sent to it by using its own database copy.
Data Replication and HPE Pathway Domains – Perfect Together
It is evident that the combination of data replication and PD across nodes offers a powerful business continuity architecture to meet the requirements of continuous application and data availability. PD and data replication together provide geographically dispersed application services and database services. The data is made available on other nodes by data replication, and the application is made available on other nodes by PD. When a node outage occurs (for whatever reason), ACS automatically detects it and switches all user request traffic to the surviving node(s). Those nodes can continue processing because data replication has kept their data synchronized with the failed node. Thus, all the necessary components (applications and data) are in place to maintain business services with minimal downtime. It makes sense, therefore, to take a deeper look at the combination of data replication used with PD across nodes.
Without using PD, it is possible to have duplicate PATHMON environments up and running on both the active and passive nodes, with transaction requests routed to both nodes. But the Pathway applications on the passive node have the database on the active node open for updates, in order to avoid data collisions (no updates are made to the backup database). Of course, at the same time, the data replication engine is keeping the passive database current with the active database via uni-directional replication. Upon an active system failure and a failover to the passive node, it is necessary to switch the users connected to the active node (now defunct) to the passive node, and redirect the applications on the passive node to open the backup database. However, these manual actions and the time taken to perform them can lead to extended application outages. The use of PD and the other business continuity architectures discussed below are superior (from an availability perspective) and safer (from a risk of failover perspective).
Pathway’s Automatic Weights Reconfiguration (AWR) Feature
Using PD, it is possible to specify a weighting factor of zero for a PATHMON. A domain can therefore be configured across both nodes, with the PATHMONs on the passive node given a weighting factor of 0. Transactions would be routed to either node. When the Pathway requesters on the passive node receive these transactions and issue PATHSEND requests to servers, those requests are all routed by ACS to the PATHMONs on the active node, thereby ensuring no updates are performed on the passive database (Figure 5). If the active node becomes unavailable, ACS automatically detects that and begins routing PATHSEND requests to the local PATHMONs, which already have the local database open for read/write access, with no reconfiguration or operator intervention required. This feature is called Automatic Weights Reconfiguration (AWR).
Figure 5 – Pathway Domains in A/P and SZT modes
In a nutshell, AWR automates the process by which a passive (zero-weight) PATHMON becomes an active PATHMON in response to the unavailability of one or more active PATHMONs in a domain (i.e., it supports automatic Pathway server outage failover). With AWR, ACS periodically monitors the state of each PATHMON in the domain. When it detects that all PATHMONs in the domain with an original non-zero weight (i.e., the active PATHMONs) are unavailable, it automatically rebalances the client request load onto the original zero-weighted PATHMONs (by changing the zero weight to a non-zero weight) and moves them from the passive to the active state, thereby maintaining application availability. When the originally non-zero-weighted PATHMONs are recovered, ACS automatically detects this recovery and falls back to restore the original weight balance.
AWR introduces new configuration options for the PD ACSCTL file, enabling this mechanism to be controlled:
- AWR can be enabled or disabled;
- the frequency at which ACS checks the availability of other PATHMONs in the domain can be specified;
- how long ACS should wait after detecting a PATHMON outage before reconfiguring weights can be specified;
- how long ACS should wait before falling back to the original weighting after failed PATHMONs are recovered can be specified.
Operator commands have also been added to PDMCOM to perform the rebalancing and fallback operations manually.
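A simplified model of this monitor/failover/fallback cycle is sketched below in Python. It is purely illustrative: the function and parameter names are assumptions made for the example and do not correspond to the actual ACSCTL options or to ACS internals.

import time

def awr_monitor(weights, is_up, check_interval_s=5, failover_delay_s=10):
    """Run forever: promote zero-weighted (passive) PATHMONs when every
    PATHMON with an original non-zero weight is down; fall back on recovery."""
    original = dict(weights)
    actives = [pm for pm, w in original.items() if w > 0]
    passives = [pm for pm, w in original.items() if w == 0]
    while True:
        if not any(is_up(pm) for pm in actives):
            time.sleep(failover_delay_s)          # ride out transient outages
            if not any(is_up(pm) for pm in actives):
                for pm in passives:               # rebalance: passives go active
                    weights[pm] = 100 // max(len(passives), 1)
                for pm in actives:
                    weights[pm] = 0
        elif weights != original and all(is_up(pm) for pm in actives):
            weights.update(original)              # fall back to original weights
        time.sleep(check_interval_s)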
What Benefits Do A/P or SZT Architectures Provide?
One of the most time-consuming issues when performing a failover in an active/passive environment is switching all of the users from the active to the passive node (updating and deploying new network routing tables, etc.). With this architecture (Figure 5), some of the users are already connected to the passive system and would not see an outage at all; all that is needed is to switch the remaining users to the already up-and-running passive (now active) node, thereby reducing failover times. Failover times are further reduced because the passive (now active) node is already in a known-working state: its applications are already up and processing transactions. Failover to a known-working state eliminates failover faults. Last, but by no means least, the capacity of the backup system is better utilized for application processing, since client requests (PATHSENDs) are executed on the passive system.
One significant factor to note in this architecture is that some database replication products (such as HPE Shadowbase) allow applications on the passive node to have the local (passive) database open for read/write access, even while replication to it is actively working. This feature allows the Pathway servers to be fully up and running on this passive system in passive mode; the passive database is open for read/write access. No user involvement is required to start or switch the passive system from read-only to read/write access at takeover, further reducing takeover times.
This combination of PD, AWR, and data replication enables the automated failover of Pathway servers to a standby (passive) node in the event of a failure of the primary (active) node and eliminates passive application startup costs, thereby minimizing takeover times and reducing both complexity and the possibility of errors arising from manual intervention. It is a very powerful business continuity architecture, without the complexities of an active/active configuration.
Sizzling-Hot-Takeover (SZT)
This situation is more-or-less the same as the active/passive case described above. The difference is that in this case, the data replication engine is operating in bi-directional mode (even though all updates are only being made by the active node in Figure 6).
This way, when a takeover by the passive node occurs, it is already set up to replicate those changes back to the downed node once it is recovered (the dotted line in Figure 6), eliminating reconfiguration of the replication engine and speeding the time for putting the recovered node back into service (thereby re-establishing a backup system).
Figure 6 – Shadowbase Uni-directional (A/P) vs. Bi-directional (SZT) Replication Architectures
Active/Active Systems
This configuration is where data replication and PD really complement each other to provide significant benefits, with PD providing active/active application processing and replication providing active/active database processing. Each PATHMON in an active/active configuration (Figure 7) is included in a Pathway domain configuration on each node, with a subset or superset of the same applications (serverclasses) running in each PATHMON. Transactions can be routed to any active node, and ACS handles the distribution of workload between the PATHMONs configured in the domain. Data replication technology provides bi-directional replication to keep all active database copies synchronized. Read/write database access is enabled for all applications while replication is in process.
Figure 7 – Pathway Domains in A/A mode
When a node is taken down for maintenance or suffers an unplanned outage, ACS detects that the PATHMONs running on that node are no longer available and stops sending requests to them. The workload is thus automatically routed by ACS to the remaining node(s). Since each node also has a current copy of the online database (due to bi-directional replication), business transactions can proceed with no noticeable downtime.
Once the downed node is recovered, ACS automatically detects that the PATHMONs configured on that node as part of the Pathway domain are again available and begins routing requests to them in accordance with the load-balancing algorithm. Similarly, the data replication engines detect that the system is again available, and they begin replicating to the recovered node all of the queued updates (those updates that occurred while the system was down) as well as ongoing transactions. Likewise, updates now occurring on the newly recovered system are replicated to the other nodes. All of these steps happen without any outage and without operator action, except to recover the failed system. The failover is fast, transparent, and automatic.
As discussed above, with any active/active environment the possibility of data collisions exists. There are ways to partition data to avoid this problem, e.g., partitioning the database such that each node handles a separate subset of user accounts. This partitioning can be achieved by network-connection routing (assuming the end user is remotely connected to the HPE NonStop server systems).
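As an illustration of this approach, the following Python sketch (with hypothetical node names) routes every request for a given account to one fixed "home" node, so each data item is only ever updated on one node and collisions cannot occur; on a node outage, the partition fails over to a survivor whose database copy is current thanks to replication.

import zlib

NODES = ["\\NodeA", "\\NodeB"]   # hypothetical node names

def home_node(account_id):
    """Deterministically map an account to one fixed node."""
    return NODES[zlib.crc32(account_id.encode()) % len(NODES)]

def route(account_id, nodes_up):
    node = home_node(account_id)
    if node in nodes_up:
        return node
    # Home node is down: fail the partition over to a surviving node.
    return next(n for n in NODES if n in nodes_up)

print(route("acct:1001", {"\\NodeA", "\\NodeB"}))   # served by its home node
print(route("acct:1001", {"\\NodeB"}))              # home down: the survivor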
Even better is the so-called route anywhere active/active model, where ACS load-balancing and partitioning issues are avoided completely by allowing any request to be processed by any node. This model applies naturally to some applications. For example, in ATM transactions, it is highly unlikely the same card is being used simultaneously for two different transactions. Hence, data collisions are also highly unlikely. For other applications, the model can be achieved by using the built-in capabilities of the data replication technology for automatic data collision detection and resolution.[1]
Zero Downtime Migration (ZDM)
It is worth making a point about the use of PD and data replication architectures in the context of eliminating planned downtime for system upgrades and migrations. The two facilities are as applicable to avoiding planned outages as they are to avoiding unplanned ones. During system maintenance or an upgrade, ACS routes the online workload from the downed system to another system in the domain to maintain business services. Once the maintenance is completed, the system is brought back online and begins processing transactions. Then, if required, another system can be taken down for maintenance while the original system takes care of the online processing, and so on.
This technique is called a rolling upgrade, where each system is taken out of service in turn, on a planned schedule, while business services remain available to end users as ACS redistributes the workload to other available systems in the domain. The data replication engines bring the database of the upgraded node into synchronization once it is returned to service. The technique also can be used to migrate to a new system (for instance, to a next generation NonStop server) with zero downtime. So-called big-bang upgrades requiring production application outages are no longer required, even when performing very disruptive changes to your environments.
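The rolling-upgrade sequence can be summarized in a few steps. The sketch below is pseudologic in Python under assumed helper operations; drain, upgrade, and wait_resync are placeholders for site-specific procedures, not product commands.

def rolling_upgrade(nodes, drain, upgrade, wait_resync):
    """Upgrade each node in turn while the rest of the domain carries the load."""
    for node in nodes:
        drain(node)        # e.g., have ACS stop routing new requests here
        upgrade(node)      # apply the OS or application maintenance
        wait_resync(node)  # let replication drain queued updates so this
                           # node's database is current before it goes active

rolling_upgrade(
    ["\\NodeA", "\\NodeB"],
    drain=lambda n: print(f"draining {n}"),
    upgrade=lambda n: print(f"upgrading {n}"),
    wait_resync=lambda n: print(f"resyncing {n}"),
)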
Planned Downtime and Disaster Recovery
A common requirement while performing system maintenance is that disaster recovery capabilities are not compromised in the meantime. In a two-node architecture, while one system is offline for maintenance, a failure of the other system will bring business services to a halt, which is unacceptable for many applications. To meet this need, companies are increasingly deploying multi-node configurations, in which multiple systems located in different datacenters are capable of taking over the business services. In a Pathway domain, there can be up to four such datacenters.
A typical architecture is two local systems in each of two datacenters (Figure 8), with the local systems operating in either active/passive or SZT mode, as described above. In such a configuration, while one system is down for maintenance, the other local system takes over active processing. If either datacenter then experiences a complete outage, at least one other system remains available to continue active processing. There could be just a single active system in one datacenter, with uni-directional replication between it and the other datacenter to keep the databases synchronized. But to maximize the utility of the four systems, it makes more sense to use an active/active configuration between the two datacenters, with the systems configured as part of PD and the workload distributed between them (one system active in each datacenter, with the other used as a local backup). Bi-directional replication keeps the active databases synchronized, with uni-directional replication between the active and backup databases in each datacenter. It is up to the architects whether the front-end routing is performed on a separate system in a tiered architecture with a database back-end tier (consisting of two nodes at each datacenter); PD do not require this approach, but do support it.
Figure 8 – Pathway Domain Supporting Planned Downtime While Maintaining DR Protection
Taking this architecture all the way brings us to availability nirvana. Rather than having just two nodes running active/active, with the other two nodes passive, all four nodes can be run in an active/active configuration, with bi-directional replication keeping all four copies of the database synchronized (Figure 9). PD can be configured on each node to span the PATHMONs across all four nodes.[2] With this architecture, it does not matter which node a request lands on, or which nodes are unavailable. With the database consistent across all four nodes, ACS can route requests to any PATHMON on any node that is available. During the outage of a single local node or a whole datacenter, whether planned or unplanned, ACS routes requests to the remaining nodes, maintaining service availability without reconfiguration or operator action. It is also possible to take one of the databases within a datacenter offline for maintenance while the applications on both nodes remain active, each accessing the other local database. This architecture is truly the definition of continuous availability, and it is feasible now using PD together with sophisticated data replication software.
Figure 9 – Planned Downtime While Maintaining DR Protection, Fully Active/Active Architecture
If an active/active architecture is just not possible for a particular application (for example, because data collisions can be neither avoided nor tolerated), this multi-node setup can be run in a quasi-active/active configuration. Applications will be active on multiple nodes, and ACS will distribute requests between them. However, all database updates will be made to a single online database copy attached to one system (as discussed above), with uni-directional replication keeping the other database copies synchronized. This configuration incurs network TMF transaction overhead, but it does better utilize the multiple systems to run the online business applications.
Another possible configuration to avoid a single point of failure during maintenance is to have three systems located across three geographically distributed datacenters. Again, if the first system is down for maintenance and a simultaneous event takes out the second datacenter, the third system is still available to provide business services. Whether this configuration is better than four systems located across two datacenters depends upon the cost of an additional datacenter versus the cost of an additional system, and upon the existing IT infrastructure.
A Data Replication Engine
A data replication solution such as HPE Shadowbase software delivers not only the data availability that complements PD, as described above, but also many other capabilities. It scales applications and databases across multiple physical nodes to create one logical system that can survive the failure of a node or even an entire datacenter. Low latency, high capacity, heterogeneity, powerful message processing, flexible endpoints, and high availability are all attributes of Shadowbase data replication. Shadowbase solutions provide both uni- and bi-directional data replication, enabling all of the architectures for data sharing described in this article. Integrating homogeneous and heterogeneous distributed data resources is a formidable challenge, one that has been solved by Shadowbase technology.
Summary
The introduction of Pathway domains across nodes and of the AWR feature for HPE NonStop servers helps ensure that applications remain available even when catastrophic failures take out a system, an entire datacenter, or a geographic region. By configuring applications within a Pathway domain across multiple HPE NonStop nodes, application scalability is increased by automatic workload distribution across those systems.
In addition, in the event of planned or unplanned outages, user requests are automatically routed to the remaining systems, thereby preserving application availability. Since HPE Shadowbase technology ensures that the data is available on those remaining systems, the combination of Shadowbase software and PD provides a firm basis to achieve increased levels of availability for an enterprise’s business services.
It takes two to tango! It is of no use having data available on alternate systems if the applications required to provide the business services are not also available when needed. PD satisfy this need. HPE Shadowbase data replication solutions provide all the capabilities necessary to keep databases replicated and synchronized across multiple systems, whether it be a uni-directional (active/passive), bi-directional (SZT or active/active), heterogeneous, or homogeneous environment. In short, Pathway domains and Shadowbase data replication solutions are perfect together!
For more details, please visit: ShadowbaseSoftware.com/Publications/Pathway-Domains/.
Hewlett Packard Enterprise globally sells and supports Shadowbase solutions under the name HPE Shadowbase. For more information, please contact your local HPE Shadowbase representative, visit our website, or view our Shadowbase solution videos.
- For more information on business continuity architectures and leveraging active/active data replication, please see our white paper, Choosing a Business Continuity Solution to Match Your Business Availability Requirements.
- This architecture also benefits from the ability to specify zero weighting for certain PATHMONs unless other PATHMONs become unavailable, to better control request distribution to local nodes (when available).