As artificial intelligence moves from research curiosity to enterprise imperative, organisations running Nonstop infrastructure face a pointed question: how do we participate in the AI revolution without compromising the very qualities that make our systems indispensable?
The answer is emerging through a combination of a new open standard called Model Context Protocol (MCP) and a growing ecosystem of integration tooling built specifically for the Nonstop environment. One example of this new generation of integration technology is the latest addition to Infrasoft’s uLinga product suite – uLinga Nexus – which provides native MCP connectivity for Nonstop applications and data. Together, these developments are removing the barriers that have historically kept AI at arm’s length from mission-critical systems – and in doing so, are unlocking capabilities that were simply not achievable before.
What is Model Context Protocol?
Model Context Protocol, or MCP, is an open standard, originally developed by Anthropic, that gives AI applications – particularly large language model (LLM) agents – a structured, secure, and consistent way to interact with external systems, data sources, and tools. Rather than requiring developers to build bespoke connectors for every AI application that needs to touch a backend system, MCP defines a common interface that any MCP-compatible AI client can use.
MCP does for AI integration what REST did for web APIs: it gives every compliant client and server a shared contract. An AI agent that speaks MCP can, in principle, interact with any MCP-enabled backend – whether that is a Nonstop system running Pathway, a relational database, a cloud API, or anything in between. And like REST, MCP operates over HTTP, which means it builds on infrastructure and security practices organisations already understand and trust.
In MCP terminology, a tool is a named, callable capability that an MCP server exposes to AI clients – it might represent a query against a database, an invocation of a Pathway serverclass, or any other operation the backend can perform. The protocol defines how these tools are registered and discovered, how requests are authenticated, how parameters are passed, and how responses are returned. This standardisation is significant for Nonstop environments because it means the integration work done today – exposing a Pathway server as an MCP tool, for instance – immediately becomes usable by any number of current and future AI applications, without re-engineering.
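On the wire, MCP messages are JSON-RPC 2.0 objects. The sketch below shows the shape of a `tools/call` request and its response; the tool name `pathway.get_account_summary` and its output are invented for illustration, not actual uLinga Nexus tool names.

```python
import json

# A hypothetical MCP tool invocation. MCP frames every exchange as a
# JSON-RPC 2.0 message; "tools/call" is the method an AI client uses to
# invoke a named tool with structured arguments.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "pathway.get_account_summary",   # illustrative tool name
        "arguments": {"account_id": "1234567890"},
    },
}

# The server replies with a JSON-RPC result whose "content" items carry
# the tool output back to the AI client.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": json.dumps({"balance": 1523.75})}],
        "isError": False,
    },
}
```

Because the framing is standard JSON-RPC over HTTP, any MCP-compatible client can issue this call without knowing anything about the Pathway serverclass behind it.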
Why This Matters for Nonstop
Nonstop systems carry decades of transactional history and operate at the heart of business-critical processes. That makes them uniquely valuable as a data source for AI – but that value has traditionally been locked away behind proprietary interfaces, binary data formats, and integration patterns that modern AI tooling cannot natively speak.
The challenge has never been lack of capability on the Nonstop side. These systems can process requests at extraordinary speed and with carrier-grade reliability. The challenge has been translation: getting AI applications to communicate with Nonstop in a way that is practical, secure, and maintainable at scale.
MCP-based integration addresses this in several important ways:
- Standards-based connectivity means AI frameworks and tools – including commercial AI platforms, open-source agents, and custom LLM applications – can connect to Nonstop systems using interfaces they already understand. uLinga Nexus supports both MCP for AI agent workloads and REST for conventional application integration, served from the same platform, with no duplication of integration effort.
- Data transformation bridges the gap between the JSON payloads that modern AI speaks and the DDL-defined binary structures that Nonstop applications use internally, including ISO 8583 financial messaging formats.
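As a rough illustration of this kind of transformation, the sketch below maps a JSON payload onto a hypothetical fixed-layout binary record. The field layout is invented as a stand-in for a DDL-defined structure; real Nonstop record layouts are defined by the application.

```python
import json
import struct

# Hypothetical fixed-layout record: a 10-byte account number, a 20-byte
# space-padded customer name, and a signed 64-bit balance in cents, all
# big-endian. Purely illustrative, not a real DDL definition.
RECORD_FMT = ">10s20sq"

def json_to_record(payload: str) -> bytes:
    """Transform a JSON payload into the fixed binary record layout."""
    fields = json.loads(payload)
    return struct.pack(
        RECORD_FMT,
        fields["account"].encode("ascii"),
        fields["name"].encode("ascii").ljust(20),
        int(round(fields["balance"] * 100)),
    )

def record_to_json(record: bytes) -> str:
    """Transform the binary record back into JSON for the AI client."""
    account, name, cents = struct.unpack(RECORD_FMT, record)
    return json.dumps({
        "account": account.decode("ascii").rstrip("\x00"),
        "name": name.decode("ascii").rstrip(),
        "balance": cents / 100,
    })
```

A production transformation layer drives this mapping from the DDL definitions themselves rather than hard-coding a format string, but the principle – JSON on one side, fixed binary on the other – is the same.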
- Enterprise security is maintained through OAuth 2.0 and JWT-based authentication, TLS transport encryption, and scope-based authorisation – ensuring that AI applications access only the data and operations they are permitted to use.
- Operational observability is preserved, with comprehensive tracing and audit trails for every AI-initiated interaction – meeting the compliance and monitoring requirements of regulated industries.
Key Benefits for Nonstop Organisations
Organisations that adopt MCP-based AI integration for their Nonstop environments stand to gain across multiple dimensions.
Protecting Existing Investments
Nonstop infrastructure represents significant investment – not just in hardware and licences, but in the applications, business logic, and decades of operational refinement that run on it. MCP-based integration allows this infrastructure to participate in AI-powered workflows without requiring replacement or fundamental re-architecture. Existing Pathway servers and Guardian processes become AI-accessible tools with configuration, not code rewrites.
Accelerating AI Adoption
By providing a standardised integration layer, MCP dramatically reduces the time and specialist expertise required to connect AI applications to Nonstop systems. Teams that would previously have faced months of custom integration work can now expose Nonstop capabilities as MCP tools in a fraction of the time, allowing AI initiatives to move from pilot to production faster.
Enabling Real-Time Intelligence
Many of the most powerful AI applications are those that can act on live data – not data that has been extracted, copied, and aged through ETL pipelines. MCP integration allows AI models to query Nonstop systems for real-time transactional data, enabling a class of time-sensitive intelligence applications that simply could not operate on batch data alone.
Future-Proofing the Integration Layer
Because MCP is an open, evolving standard with broad industry support, organisations that build on it today are not locking themselves into a single vendor’s integration vision. As new AI frameworks, agents, and applications emerge – and the pace of emergence is extraordinary – MCP-compatible Nonstop integrations remain usable without additional development effort.
Importantly, adopting MCP does not mean displacing existing REST integrations. uLinga Nexus supports both side by side – REST for the conventional application integrations already in production, MCP for AI agent workloads being introduced now. Organisations get a single, unified integration layer rather than a proliferation of point solutions.
Bridging Teams and Reducing Friction
AI development teams and Nonstop operations teams often operate in different worlds, with different tooling, terminology, and priorities. MCP provides a common interface language that allows AI developers to work with Nonstop capabilities without needing deep knowledge of Nonstop-specific APIs – reducing friction and enabling more productive collaboration between teams.
Security: The Non-Negotiable Foundation
Expanding access to Nonstop systems – for any purpose – will rightly invite hard questions from security and compliance teams. This is where MCP’s built-in support for OAuth 2.0 is not merely convenient – it is essential. OAuth 2.0 is the industry-standard authorisation framework used across enterprise software, cloud platforms, and financial services APIs. Its inclusion as a first-class requirement in the MCP standard means that AI access to Nonstop systems through uLinga Nexus is governed by exactly the same rigorous authentication and authorisation mechanisms that organisations already rely on for other sensitive integrations.
Every Request is Authenticated
uLinga Nexus enforces OAuth 2.0 JWT bearer token authentication on every single request – without exception. Before any AI application can invoke a tool, query data, or trigger a process on a Nonstop system, it must present a valid, signed token issued by the organisation’s identity provider. uLinga Nexus validates the token’s signature, checks its expiry, and confirms the issuer – rejecting anything that does not meet these criteria before the request proceeds any further.
This means there is no pathway for an unauthenticated AI agent – whether legitimate but misconfigured, or actively malicious – to reach Nonstop systems through the MCP layer. Authentication is not optional, and it cannot be bypassed.
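The three validation gates – signature, expiry, issuer – can be sketched as follows. To keep the example self-contained it uses an HS256 shared secret and a demo minting helper; a deployment like the one described here would instead verify asymmetric (e.g. RS256) signatures against keys published by the identity provider.

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def mint_jwt(claims: dict, secret: bytes) -> str:
    """Mint an HS256 token for demonstration only; in practice tokens are
    issued by the organisation's identity provider."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def validate_jwt(token: str, secret: bytes, expected_issuer: str) -> dict:
    """Apply the three gates described above: signature, expiry, issuer.
    Raises ValueError on any failure; returns the claims on success."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected_sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected_sig, _b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    if claims.get("iss") != expected_issuer:
        raise ValueError("unexpected issuer")
    return claims
```

A request carrying a token that fails any one of these checks is rejected before it ever reaches the backend.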
Fine-Grained Authorisation Through Scopes
Authentication confirms who is making a request. Authorisation determines what they are permitted to do. uLinga Nexus implements scope-based access control that gives organisations precise control over which AI applications can access which tools, and which tools can reach which backend systems.
Each MCP tool exposed through uLinga Nexus is associated with one or more required scopes. An AI application’s JWT token must contain the appropriate scopes for the requested tool – if it does not, the request is denied regardless of whether the application is otherwise authenticated. This means an AI agent authorised for read-only reporting tools cannot, even inadvertently, invoke tools that initiate transactions or modify data. Cross-tool access to backend resources is explicitly prohibited by design.
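A minimal sketch of this kind of scope check, with invented tool and scope names:

```python
# Hypothetical registry mapping each MCP tool to its required scopes; the
# tool and scope names are illustrative, not actual uLinga Nexus names.
TOOL_SCOPES = {
    "reporting.transaction_summary": {"nonstop.read"},
    "payments.initiate_transfer": {"nonstop.read", "nonstop.write"},
}

def authorise(tool_name: str, token_scopes: set) -> bool:
    """A request is allowed only if the token carries every scope the
    requested tool demands – authentication alone is not enough."""
    required = TOOL_SCOPES.get(tool_name)
    if required is None:
        return False          # unknown tools are denied by default
    return required <= token_scopes
```

An agent holding only `nonstop.read` can call the reporting tool but is refused the transfer tool, exactly the read-only-cannot-transact property described above.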
For organisations operating in regulated industries, this level of granularity is critical. It enables them to extend AI access to Nonstop capabilities in a controlled and auditable way – expanding what is possible without expanding risk.
Integration with Existing Identity Infrastructure
One of the most practical security advantages of uLinga Nexus’s OAuth implementation is that it integrates with the identity providers organisations already operate. Whether an organisation uses Microsoft Entra ID, Okta, Ping Identity, or another standards-compliant OAuth provider, uLinga Nexus can validate tokens issued by that provider – including fetching public keys dynamically from the provider’s JWKS endpoint.
This means there is no need to introduce a separate identity silo for AI access. AI applications authenticate through the same identity infrastructure as every other enterprise system, and the same access governance processes – provisioning, deprovisioning, audit, and review – apply automatically. When an employee leaves, or an application is decommissioned, access is revoked in one place and takes effect everywhere, including AI access to Nonstop.
Transport Security and Data Protection
Beyond authentication and authorisation, uLinga Nexus enforces TLS encryption – supporting versions 1.2 and 1.3 – for all communications between AI clients and the MCP server. Data in transit between AI applications and Nonstop systems is protected against interception and tampering, meeting the transport security requirements of financial services regulators and enterprise security standards.
The data transformation layer also applies bounds checking on array sizes and string lengths, preventing a class of malformed-input vulnerabilities that could otherwise arise when AI-generated parameters are passed to backend systems with strict binary data formats.
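A simplified sketch of that kind of bounds checking, with invented field limits standing in for limits a real deployment would derive from the DDL definitions:

```python
# Hypothetical limits for one backend record layout – illustrative only.
LIMITS = {"memo_max_len": 40, "items_max_count": 16}

def check_bounds(params: dict) -> None:
    """Reject AI-supplied parameters that would overflow the fixed-size
    fields of the target binary structure, before any packing occurs."""
    memo = params.get("memo", "")
    if len(memo) > LIMITS["memo_max_len"]:
        raise ValueError("memo exceeds fixed field length")
    items = params.get("items", [])
    if len(items) > LIMITS["items_max_count"]:
        raise ValueError("too many array elements for target structure")
```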
Comprehensive Audit Trails
Security for regulated systems is not only about prevention – it is also about accountability. uLinga Nexus generates detailed trace records for every MCP request, capturing the timestamp, the authenticated client identity extracted from the OAuth token, the tool invoked, the backend system targeted, the response status, processing duration, and error details where applicable.
This creates an unambiguous audit trail of all AI-initiated activity on Nonstop systems – who accessed what, when, and with what result. For organisations subject to financial services regulation, data protection legislation, or internal governance requirements, this level of visibility is not a nice-to-have. It is what makes AI access to mission-critical systems defensible to auditors, regulators, and risk committees.
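The fields described above might be captured in a record shaped something like the following; the field names here are assumptions for the sketch, not the product's actual trace schema.

```python
from dataclasses import dataclass, field, asdict
from typing import Optional
import time
import uuid

@dataclass
class TraceRecord:
    """Illustrative shape of a per-request trace record."""
    client_id: str                 # authenticated identity from the OAuth token
    tool: str                      # MCP tool invoked
    backend: str                   # Nonstop system targeted
    status: str                    # response status
    duration_ms: float             # processing duration
    error: Optional[str] = None    # error details, where applicable
    timestamp: float = field(default_factory=time.time)
    trace_id: str = field(default_factory=lambda: str(uuid.uuid4()))
```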
In short, uLinga Nexus does not ask organisations to choose between AI capability and security rigour. By building on OAuth 2.0 as mandated by the MCP standard, and extending it with scope-based access control, existing identity provider integration, transport encryption, and comprehensive audit logging, it delivers both – on the terms that mission-critical Nonstop environments demand.
Use Case: AI-Powered Fraud Detection
Financial institutions running Nonstop systems are custodians of some of the world’s most sensitive and voluminous transaction flows. Fraud detection is a domain where the combination of Nonstop’s real-time processing power and AI’s pattern recognition capabilities is particularly compelling.
Consider the traditional approach: transaction data is captured on Nonstop systems, periodically extracted to a data warehouse or analytics platform, and analysed by fraud models that are, by definition, working with data that is minutes or hours old. A fraudulent pattern might be underway – and growing – before the detection system sees it.
With MCP-based integration, an AI fraud detection agent can query the Nonstop transaction processing system directly, in real time. The agent can be configured with MCP tools that expose:
- Current account balance and recent transaction velocity for a given account
- Geolocation and merchant category patterns from recent authorisations
- Cross-account network patterns that may indicate coordinated fraud
- Historical baseline behaviour for comparison against current activity
When a suspicious transaction is flagged, the AI agent can call these tools in sequence – assembling a multi-dimensional picture of the account’s recent activity in milliseconds – and return a risk score and supporting rationale to the authorisation flow, all before the transaction is approved or declined.
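The sequence of tool calls described above can be sketched as follows. The tool names, canned data, and scoring thresholds are all invented for illustration; a real agent would issue MCP `tools/call` requests rather than use the stubbed client here.

```python
def call_tool(name: str, args: dict) -> dict:
    """Stand-in for an MCP client; returns canned data for illustration."""
    canned = {
        "account.velocity": {"txn_count_1h": 14, "avg_amount": 910.0},
        "account.geo_pattern": {"countries_24h": ["AU", "RO", "BR"]},
        "account.baseline": {"typical_txn_count_1h": 2, "typical_amount": 85.0},
    }
    return canned[name]

def score_transaction(account_id: str) -> dict:
    """Assemble a multi-dimensional picture of recent activity and return
    a simple risk score with supporting rationale."""
    velocity = call_tool("account.velocity", {"account": account_id})
    geo = call_tool("account.geo_pattern", {"account": account_id})
    baseline = call_tool("account.baseline", {"account": account_id})

    score = 0.0
    if velocity["txn_count_1h"] > 5 * baseline["typical_txn_count_1h"]:
        score += 0.5   # sudden burst of activity
    if len(geo["countries_24h"]) > 2:
        score += 0.3   # implausible geographic spread
    if velocity["avg_amount"] > 5 * baseline["typical_amount"]:
        score += 0.2   # amounts far above baseline
    return {
        "account": account_id,
        "risk_score": round(score, 2),
        "rationale": "velocity, geography and amount all exceed baseline",
    }
```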
Critically, this happens without duplicating data into a secondary system or accepting the latency that comes with replication. The AI is working directly with the authoritative data on the system of record. For fraud scenarios where seconds matter, this distinction is decisive.
The same architecture supports retrospective fraud investigation. An AI agent given investigative tools can autonomously traverse transaction histories, identify related accounts and transactions, and surface patterns for human review – work that previously required skilled analysts and days of effort.
Use Case: Conversational Operations and Support
Nonstop operations teams have deep expertise, but that expertise is not always accessible to the broader organisation. Business stakeholders who need to understand transaction volumes, system health, or operational metrics often must wait for reports to be run, or for operations staff to field their queries. When something goes wrong, the time taken to gather context and communicate across teams can extend the impact of an incident significantly.
MCP integration opens the door to conversational AI interfaces that allow authorised users to interact with Nonstop systems in natural language, without requiring knowledge of Nonstop-specific tooling or query languages.
A business analyst might ask: “What were the peak transaction volumes on the payments system between 9am and 11am yesterday, and how does that compare with the same period last week?” – and receive an answer drawn directly from live Nonstop data, without writing a single line of code or raising a request with the operations team.
An on-call engineer receiving an alert at 2am might ask: “Are there any Pathway servers showing elevated error rates in the last 15 minutes, and what are the most common error codes?” – and get an immediate, contextual answer that helps them triage the issue faster.
The MCP tools underpinning this kind of interface might include:
- System health and performance metrics from Nonstop monitoring subsystems
- Transaction volume and throughput queries against operational databases
- Process and server status checks for key Pathway and Guardian processes
- Recent error log retrieval and classification
- Configuration queries to confirm current system parameters
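Tools like these are advertised to AI clients through MCP's discovery mechanism (`tools/list`), each carrying a JSON Schema describing its parameters. The names and schemas below are illustrative only:

```python
# A hypothetical tools/list result advertising operational tools of the
# kind listed above; names and schemas are invented for illustration.
tools_manifest = {
    "tools": [
        {
            "name": "ops.pathway_server_status",
            "description": "Status and error rates for key Pathway serverclasses",
            "inputSchema": {
                "type": "object",
                "properties": {"window_minutes": {"type": "integer"}},
                "required": ["window_minutes"],
            },
        },
        {
            "name": "ops.transaction_volume",
            "description": "Transaction volume and throughput over a time range",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "start": {"type": "string", "format": "date-time"},
                    "end": {"type": "string", "format": "date-time"},
                },
                "required": ["start", "end"],
            },
        },
    ]
}
```

The schema is what lets an LLM agent translate "peak volumes between 9am and 11am yesterday" into a well-formed, validated tool call.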
Importantly, the same authorisation framework that governs programmatic access governs conversational access. An analyst can only query data they are authorised to see. An AI agent responding to a question about account balances will only return information that the authenticated user’s scope permits. The conversational interface does not bypass security – it operates within it.
This use case also has significant potential for support and training contexts. New team members can learn the operational landscape by asking questions of a system that can answer with live data. Documentation that might otherwise go stale can be supplemented – or replaced – by AI interfaces that always reflect current system state.
The Path Forward
The organisations best positioned to benefit from AI are those that can connect AI capabilities to their most valuable data and most critical processes. For organisations running HPE Nonstop, that connection is now achievable – not through years of custom integration work, but through standards-based tooling designed for exactly this purpose.
MCP is still a young standard, but its adoption trajectory across the AI industry has been rapid and broad. Building Nonstop integration on this foundation now means that as the AI ecosystem continues to evolve, the integration work done today continues to pay dividends. New AI models, new agent frameworks, and new enterprise AI platforms will be able to leverage Nonstop capabilities through the same interfaces – without starting from scratch.
For mission-critical systems that have always been defined by their reliability and longevity, this kind of future-proof architecture is not just an advantage. It is exactly the right foundation.
Infrasoft’s uLinga Nexus provides MCP server capabilities for HPE Nonstop systems, enabling the use cases described in this article. For more information, contact Infrasoft at info@infrasoft.com.au.