APIs have seen rapid growth and adoption for many years; as of 2020, there were an estimated 24,000 APIs in use in the financial industry alone. While the number of APIs in use keeps increasing, the more important statistic is the value these APIs bring to businesses. Services offered through APIs now account for as much as 75% of revenue at many large companies. For example, Salesforce generates 50% of its revenue through RESTful APIs, eBay 60%, and Expedia 90%. Without the help of RESTful APIs, these companies might cease to exist.
APIs have become critical components of enterprise infrastructure and application architecture. They make it possible to integrate and share essential business data both inside and outside the enterprise, delivering business value that was not possible in years past. They also give businesses the flexibility to adapt to the rapidly changing needs of customers. The question for most business units has shifted from "Why do we need an API?" to "How can we leverage APIs?". This shift in mindset has made its way to the HPE NonStop industry, and many companies are now constantly developing new APIs to integrate with their NonStop applications. As the use of RESTful APIs continues to increase, the need to manage and secure them is becoming essential.
In many environments that utilize APIs, the client application makes one-to-one connections with backend services. While this provides benefits to the organization, it also introduces security and operational concerns that businesses and users must take into account.
- Backend servers and services storing critical information have countless entry points that need protection and management.
- These entry points introduce security risks and operational and development resource burdens.
- Each service has to implement its own denial of service (DoS) protection and authentication mechanisms.
- Each service has to implement its own upgrade strategy. If a change is made to a backend service, there has to be a strategy in place to upgrade clients and the backend services to the new version, which is costly and time-consuming.
- Each service has to implement its own load balancing mechanism where necessary.
The best way to leverage APIs and address these concerns is to introduce an API gateway into your REST API strategy.
An API gateway is a management tool that sits between a client and one or more backend services. With the right API gateway in place, companies not only reduce their operational overhead but they also enhance security for their entire REST API infrastructure.
NuWave Technologies’ Prizm Gateway™ makes it easy to enable, maintain, monitor, and secure API connections throughout the enterprise.
APIs act as entry points for applications to access data from backend services. With Prizm Gateway™, users can connect all of their RESTful APIs and enable real-time communication management for all applications both on and off the HPE NonStop.
Prizm Gateway™ does this by routing all API calls through a single point of entry while enabling traffic management, denial of service (DoS) protection, authorization and access control, service deployment, load balancing, and monitoring.
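To illustrate the routing idea in general terms (this is a hypothetical sketch of what any API gateway does, not Prizm Gateway™'s actual configuration), a gateway maps a single public entry point onto multiple backend services, typically by path prefix. The upstream host names below are made up for the example.

```python
# Hypothetical route table: public path prefix -> backend service.
# The upstream URLs are illustrative placeholders only.
ROUTES = {
    "/payments": "http://backend-a.example.com:8080",
    "/accounts": "http://backend-b.example.com:8080",
}

def resolve_upstream(request_path: str) -> str:
    """Pick the backend service for an incoming API call by path prefix."""
    for prefix, upstream in ROUTES.items():
        if request_path.startswith(prefix):
            return upstream + request_path
    raise LookupError(f"no route for {request_path}")
```

Because every client calls the one entry point, policies such as authentication, rate limiting, and monitoring can be applied in this single routing layer instead of in every backend service.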
Prizm Gateway™ makes it easy to manage and protect your REST infrastructure and provides:
- A single point of entry for all communication between frontend clients and backend servers.
- A central point for DoS protection and authentication.
- A central point for service load balancing.
- A central point for service upgrades, using the canary (A/B) deployment method.
- Easier lives for your security, operations, and development staff, because they don't have to implement any of these features for each individual service and microservice.
And this is all accomplished on the most scalable, robust, and fault-tolerant platform: the HPE NonStop.
Let’s take a deeper dive into the features of Prizm Gateway™.
Single point of entry – By providing a single point of entry for all APIs, Prizm Gateway™ adds security and simplifies management for enterprise applications.
Guardian-based – Runs in the Guardian environment, with no need for OSS or any additional software. This simplifies installation and management for NonStop environments.
Fault-tolerant – Eliminates the risk of downtime with fully redundant hardware and robust, self-healing system and application software.
Browser-based management console – Modeled on NuWave’s widely used LightWave management consoles, the Prizm Gateway™ console is easy to use and provides comprehensive management of the entire feature set. This allows for a short learning curve, resulting in less time and money spent getting users up to speed.
Management CLI – Command line interface for configuration and management, in addition to the browser-based interface. This allows users to manage the gateway using scripts or TACL macros.
Logging & diagnostics – Helps monitor the health of the gateway and helps resolve any functional issues that users might experience.
HTTP Basic authentication – Simple username/password authentication.
JWT authentication – JSON Web Token authentication, an open standard, is available for customers with more stringent authentication requirements.
Load balancing – Prizm automatically distributes the incoming request load over multiple backend servers.
Canary deployment – Allows for gradual deployment of new services in parallel with existing services, reducing the risk compared to rolling out a new version all at once.
Denial of Service Protection
- Rate limiting – Limits the number of requests per client passed to services within a specified period of time to prevent overloading.
- Connection count limiting – Limits the number of concurrent incoming connections to prevent overloading of services.
- Request size limiting – Limits the size of incoming request messages to prevent overloading of services.
- IP address restriction – Limits access to known good IP addresses, or prevents access from specific IP addresses.
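The JWT authentication listed above relies on the gateway validating a token's signature before forwarding the request. As a minimal sketch of how HS256 token verification works in general (this is the open JWT standard, not Prizm Gateway™'s internal implementation), using only the Python standard library:

```python
import base64, hashlib, hmac, json

def b64url_decode(data: str) -> bytes:
    # JWTs strip base64url padding; restore it before decoding.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def verify_jwt_hs256(token: str, secret: bytes) -> dict:
    """Return the token's claims if its HS256 signature is valid, else raise."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    return json.loads(b64url_decode(payload_b64))
```

A gateway performing this check centrally means no backend service has to carry its own token-validation code.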
NuWave’s LightWave Server™ allows REST client applications to communicate with NonStop application servers running as standalone processes or HPE NonStop Pathway serverclasses. LightWave Server™ is a Guardian process that accepts JSON messages via HTTP/S over TCP/IP. LightWave Server™ converts each JSON request message into an HPE NonStop interprocess message (IPM) format before forwarding to the backend application. When the NonStop application replies, LightWave Server™ converts the IPM into a JSON response, which it returns to the calling application.
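To make the JSON-to-IPM conversion concrete, here is a rough sketch of the general idea of mapping a JSON request onto a fixed-layout record and back. The field layout below (a 4-byte request code, a 20-byte space-padded account number, and an 8-byte amount) is entirely hypothetical; LightWave Server™'s actual IPM layouts are defined by the NonStop application's own message definitions.

```python
import json, struct

# Hypothetical fixed-layout IPM for illustration only:
# big-endian 4-byte int code, 20-byte account field, 8-byte int amount.
IPM_FORMAT = ">i20sq"

def json_to_ipm(body: str) -> bytes:
    """Pack a JSON request into a fixed-layout interprocess message."""
    req = json.loads(body)
    return struct.pack(IPM_FORMAT, req["code"],
                       req["account"].encode().ljust(20), req["amount"])

def ipm_to_json(ipm: bytes) -> str:
    """Unpack a fixed-layout reply back into a JSON response."""
    code, account, amount = struct.unpack(IPM_FORMAT, ipm)
    return json.dumps({"code": code,
                       "account": account.decode().rstrip(),
                       "amount": amount})
```

The NonStop server process only ever sees the fixed-layout record, so existing applications need no knowledge of JSON or HTTP.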
Can Prizm Gateway™ co-exist with LightWave Server™? Absolutely! Prizm Gateway™ can complement LightWave Server™ in the same way that it can complement any other REST Web service provider. By “frontending” LightWave Server™ with Prizm Gateway™, LightWave Server™ gains all of the benefits of Prizm Gateway™, including enhanced authentication, DoS protection, load balancing, and canary deployment. And since both Prizm Gateway™ and LightWave Server™ are fault-tolerant, combining both solutions on the NonStop platform creates an easy-to-deploy and easy-to-use, mission-critical Web service platform.
API gateways are frequently described as a “single point of entry” for APIs, but this doesn’t mean that only one instance of an API gateway can or should be deployed in the enterprise. With the speed at which business and application requirements change, service providers need to be nimble and not “lock in” on any one gateway deployment pattern. It may be appropriate to deploy a dedicated gateway instance for a single application, technology stack, or business unit. In development, test, and certification environments, multiple gateway instances will often be required. The Backends for Frontends pattern is one example of a microservices pattern that employs multiple API gateways.
An enterprise service bus (ESB) is a software architecture that provides applications with a “central nexus” for communications, allowing heterogeneous applications to communicate as either client or server. ESBs typically support a number of communications protocols, protocol translation, message routing, traffic management, security, and message processing based on business rules. When an enterprise has a large portfolio of heterogeneous applications that need to interoperate, ESBs can do the heavy lifting. But with that heavy lifting comes complexity and cost. On the other hand, API gateways are purpose-built for client-server communications using SOAP-, REST-, or RPC-based APIs. Because API gateways are focused on a single application architecture, they can be optimized for that architecture in terms of features, performance, management, and cost.
The key benefits of Prizm Gateway™ when compared to an ESB are:
- Simple configuration and management requiring less development and training
- Decentralized architecture, eliminating bottlenecks and single points of failure
- Purpose-built for higher performance, requiring less infrastructure
- Features optimized for a specific application architecture, not a “one size fits all” solution with the accompanying overhead
Depending on the application architecture, choosing an API gateway like Prizm Gateway™ may have advantages over an ESB. Companies should weigh these advantages when starting new projects, where adopting a new ESB or extending an existing one might not be the best choice.
API gateways typically provide some measure of denial of service attack protection, and Prizm Gateway™ is no exception. A denial of service attack occurs when legitimate clients of an application are denied access to that application as the result of the malicious actions of an attacker. These attacks can be “distributed”, employing multiple client systems to mount the attack. Prizm Gateway™ can blunt these attacks with its DoS protection features: the rate and size of API requests passing through the gateway, as well as the number of network connections, can all be limited using simple configuration options.
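The rate-limiting side of DoS protection is commonly implemented with a token-bucket scheme. The sketch below shows that general technique under assumed parameters; it is not Prizm Gateway™'s implementation, whose limits are set through configuration rather than code.

```python
import time

class TokenBucket:
    """Per-client rate limiter: allow `rate` requests per second on average,
    bursting up to `capacity`. A generic sketch of gateway-style limiting."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit; a gateway would reject, e.g. HTTP 429
```

Keeping one bucket per client identity (API key, IP address, etc.) lets a gateway throttle an abusive client without affecting anyone else.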
Load balancing allows API requests arriving at a single entry point to be distributed across multiple physical Web service providers or “upstream” systems. This allows for upstream systems to be added or removed based on capacity or maintenance requirements. All of this can be done with simple configuration options in Prizm Gateway™, without any impact on the client or upstream systems.
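A simple round-robin rotation is one common way to spread that load. The sketch below shows the general idea, including adding and removing upstreams at runtime; the upstream names are placeholders, and the selection strategy a real gateway uses may differ.

```python
class RoundRobinBalancer:
    """Distribute requests across upstream systems in rotation.
    Upstreams can be added or removed without touching any client."""

    def __init__(self, upstreams):
        self.upstreams = list(upstreams)
        self._next = 0

    def pick(self) -> str:
        upstream = self.upstreams[self._next % len(self.upstreams)]
        self._next += 1
        return upstream

    def add(self, upstream: str) -> None:
        self.upstreams.append(upstream)      # e.g. new capacity brought online

    def remove(self, upstream: str) -> None:
        self.upstreams.remove(upstream)      # e.g. taken down for maintenance
```

Because clients only ever address the gateway's single entry point, the upstream pool can change shape without any client-side reconfiguration.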
Canary deployment is a method of slowly rolling out a new version of an existing application. The new version is released gradually to a small subset of clients, which are carefully monitored to determine whether any issues crop up with the new release. If no issues are identified, traffic is gradually shifted to more and more clients until the entire workload is handled by the new version. This reduces the risk of deploying the new release to the entire client community at once. Once again, this can be done with Prizm Gateway™ through simple configuration, and without any impact on or changes to the API application.
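One common way to implement that gradual shift is to hash each client identity into a fixed bucket, so a given client stays pinned to one version while the canary percentage is raised. This is a generic sketch of the technique, with made-up version labels, not a description of Prizm Gateway™'s configuration model.

```python
import hashlib

def route_canary(client_id: str, canary_percent: int) -> str:
    """Send a fixed slice of clients to the new version.
    Hashing keeps each client consistently on one version between requests."""
    bucket = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < canary_percent else "v1-stable"
```

Raising `canary_percent` from, say, 5 to 50 to 100 shifts more and more clients onto the new version, and dropping it back to 0 rolls everyone back instantly if problems appear.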
API gateways are becoming a necessary component in the enterprise API management arena. With Prizm Gateway™, your API gateway can run on the fault-tolerant, scalable HPE NonStop platform. By adding LightWave Server™, you’ll have everything you need to expose your NonStop applications as secure, well-managed REST APIs. We’d be very happy to help you better understand how these solutions can work in your own environment. Please get in touch with us if you’d like to know more, and keep an eye on our website, YouTube channel, and Twitter for information and updates on all of our HPE NonStop solutions.