uLinga for Kafka – Now with Kafka Consumer Support


HPE NonStop users are becoming more familiar with Kafka as it becomes increasingly prevalent in the large enterprise environments where the NonStop platform is often found.  Kafka is used by thousands of companies, including 60% of the Fortune 100.

These organizations use Kafka to manage “streams” of data, which have become more common as internet usage massively boosts the amount of data being generated and needing to be processed.  Kafka allows these huge volumes of data to be processed in real time, via a combination of “producers” and “consumers” that work with a Kafka “cluster” – the central data repository.
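The producer/consumer model can be illustrated with a short sketch.  The classes below simulate a Kafka topic as an append-only log, with each consumer tracking its own read offset – they are illustrative stand-ins, not part of any real Kafka client API:

```python
# Illustrative sketch of Kafka's core model: producers append records to a
# topic (an ordered log held by the cluster), and each consumer reads
# independently by tracking its own offset into that log.

class Topic:
    """Stands in for a topic partition held by the Kafka cluster."""
    def __init__(self):
        self.log = []               # append-only record log

    def produce(self, record):
        self.log.append(record)     # producers only ever append
        return len(self.log) - 1    # offset assigned to this record

class Consumer:
    """Each consumer keeps its own offset, so many consumers can read
    the same stream independently and in real time."""
    def __init__(self, topic):
        self.topic = topic
        self.offset = 0

    def poll(self):
        records = self.topic.log[self.offset:]
        self.offset = len(self.topic.log)
        return records

payments = Topic()
payments.produce({"card": "1234", "amount": 25.00})
payments.produce({"card": "5678", "amount": 99.95})

reader = Consumer(payments)
print(reader.poll())   # both records, in production order
```

Because consumers only advance an offset, any number of them can read the same stream at their own pace – the property that lets Kafka fan huge data volumes out to many applications at once.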

Many HPE NonStop users need to integrate with Kafka in some way, be it to stream NonStop application data to Kafka, or integrate data from Kafka back into their NonStop applications.  While there are Java-based solutions to send (PRODUCE) data to and receive (CONSUME) data from Kafka, these require OSS and may not perform at the level that NonStop users require.

uLinga for Kafka, which has had Kafka Producer functionality for over a year, has recently been enhanced to add Kafka Consumer functionality.  This article explains how that functionality works and gives some potential use cases that NonStop users may find interesting.

uLinga for Kafka – Overview

uLinga for Kafka takes a unique approach to Kafka integration: it runs as a natively compiled NonStop process pair, and supports the Kafka communications protocols directly over TCP/IP.  This removes the need for Java libraries or intermediate databases, providing the best possible performance on HPE NonStop.  It also allows uLinga for Kafka to directly communicate with the Kafka cluster, getting streamed data across as quickly and reliably as possible.

The latest release of uLinga for Kafka, now available, adds Kafka Consumer functionality, bringing with it the same lightning-fast performance as the existing Producer functionality.

uLinga for Kafka as Kafka Consumer

Previous releases of uLinga for Kafka provided a range of options to stream NonStop data to Kafka, via uLinga’s Kafka Producer functionality.  These features have been outlined in our previous Connection articles.

With the addition of Kafka Consumer functionality, uLinga now supports a range of options to stream Kafka data directly from the Kafka cluster, and apply that data to a NonStop database or send it directly to a NonStop application.

Figure 1. uLinga for Kafka – Consumer to Enscribe File

The diagram above depicts uLinga for Kafka acting as a Kafka Consumer and writing the consumed data to an Enscribe file.  This file could be used, for example, as a database refresh for a NonStop application.  As events/records are streamed to the cluster by a remote producer, uLinga’s Consumer logic retrieves those records, processes them, and applies them to the Enscribe file.  As with uLinga’s Producer functionality, very high throughput (greater than 20,000 TPS) is possible, with sub-millisecond latency – enough to support the largest application requirements.
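The apply step can be sketched as follows.  The Enscribe key-sequenced file is simulated here with an in-memory keyed store, and all names are illustrative rather than uLinga’s actual configuration or interfaces:

```python
# Sketch of applying consumed Kafka records to a keyed Enscribe-style file.
# The "file" is simulated with a dict keyed the way a key-sequenced file
# would be; each consumed (key, value) record upserts by key, and a None
# value is treated as a delete (a Kafka-style tombstone).

def apply_to_keyed_file(keyed_file, records):
    """Apply a batch of consumed (key, value) records in stream order."""
    for key, value in records:
        if value is None:
            keyed_file.pop(key, None)   # tombstone -> delete the row
        else:
            keyed_file[key] = value     # insert or update by key

# Records as they might arrive from the cluster, in production order.
consumed = [
    ("CUST001", {"name": "Acme",   "limit": 5000}),
    ("CUST002", {"name": "Globex", "limit": 2500}),
    ("CUST001", {"name": "Acme",   "limit": 7500}),  # later update wins
]

customer_file = {}
apply_to_keyed_file(customer_file, consumed)
print(customer_file["CUST001"]["limit"])   # 7500 - latest value applied
```

Applying records in stream order means the file always converges on the most recent value for each key, which is what makes a consumed topic usable as a database refresh.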

Figure 2. uLinga for Kafka – Consumer direct to Process or Serverclass

Figure 2 shows another use case for Kafka Consumer functionality.  In this scenario, uLinga for Kafka forwards consumed data directly to a NonStop application, either via a Guardian WRITE/WRITEREAD to a process, or via SERVERCLASS_SEND_ directly to a Pathway serverclass.  Once again this configuration supports very high throughput, allowing any application load to be handled.  This implementation might be used, for instance, to apply online updates to an application database.
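This forwarding pattern can be sketched as below.  The Guardian WRITE/WRITEREAD or Pathway SERVERCLASS_SEND_ is simulated with a plain callable – in reality uLinga performs the interprocess call, and all names here are illustrative only:

```python
# Sketch of forwarding consumed Kafka records straight to a NonStop
# application. The interprocess call (WRITEREAD or SERVERCLASS_SEND_)
# is simulated by a callable that returns the application's reply.

def serverclass_send(request: bytes) -> bytes:
    """Stand-in for a Pathway serverclass: acknowledge each request."""
    return b"OK:" + request

def forward_consumed(records, send):
    """Forward each consumed record to the application and collect the
    replies, as a WRITEREAD-style request/reply exchange would."""
    replies = []
    for record in records:
        replies.append(send(record))
    return replies

consumed = [b"TXN100", b"TXN101"]
replies = forward_consumed(consumed, serverclass_send)
print(replies)   # [b'OK:TXN100', b'OK:TXN101']
```

The reply path matters: with WRITEREAD or SERVERCLASS_SEND_, the application’s response can confirm that each consumed record was processed before the consumer’s offset advances.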

As our previous Connection article outlined, uLinga for Kafka supports User Exits and these can be invoked in either of the above scenarios to support data manipulation or any other custom processing.

Using Kafka for Transactional Messaging

The addition of Consumer functionality to uLinga for Kafka opens up some interesting possibilities where Kafka can be used to handle large-scale transaction messaging.  When a produced request message can be linked to a consumed response, you have a model for distributed transaction processing that looks quite similar to some message queuing solutions.  Consider the following example configuration:

Figure 3. uLinga for Kafka – Produce-Consume Integration with Linux Application

The following steps take place:

  1. Guardian application issues WRITEREAD/Pathway application issues SERVERCLASS_SEND_
  2. uLinga for Kafka PRODUCEs data from WRITEREAD/SERVERCLASS_SEND_ to Kafka, including correlation data (passed either in the Key or a custom header field)
  3. Linux Consumer logic retrieves data from Kafka, including correlation data, Linux application does some processing of data
  4. Linux Producer logic sends response as a Kafka PRODUCE, including original correlation data
  5. uLinga for Kafka CONSUMEs response, including correlation data
  6. uLinga for Kafka correlates response to original request and completes the WRITEREAD/SERVERCLASS_SEND_
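The steps above can be sketched as follows.  The request and response topics are simulated with lists, and the correlation data travels as a message header, as described in step 2.  All names are illustrative, not uLinga’s or Kafka’s actual interfaces:

```python
# Sketch of the request/response correlation in steps 1-6 above.

import uuid

request_topic, response_topic = [], []
pending = {}   # correlation id -> original request awaiting its reply

def send_writeread(payload):
    """Steps 1-2: produce the request with a correlation-id header."""
    corr_id = str(uuid.uuid4())
    pending[corr_id] = payload
    request_topic.append({"headers": {"corr-id": corr_id}, "value": payload})
    return corr_id

def linux_application():
    """Steps 3-4: consume requests, process them, and produce responses
    with the original correlation id copied into the response headers."""
    while request_topic:
        msg = request_topic.pop(0)
        response_topic.append({
            "headers": msg["headers"],      # preserve corr-id
            "value": msg["value"].upper(),  # stand-in for real processing
        })

def complete_writeread():
    """Steps 5-6: consume responses and match each to its request."""
    completed = {}
    while response_topic:
        msg = response_topic.pop(0)
        corr_id = msg["headers"]["corr-id"]
        pending.pop(corr_id)                # request is now complete
        completed[corr_id] = msg["value"]
    return completed

cid = send_writeread("payment 42")
linux_application()
print(complete_writeread()[cid])   # PAYMENT 42
```

Because the correlation id round-trips untouched, the NonStop side can complete each outstanding WRITEREAD or SERVERCLASS_SEND_ even when responses arrive out of order.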

This configuration provides a high-performance, reliable messaging mechanism between HPE NonStop and (in this example) a Linux/cloud-based application, utilising the extensive Kafka support already available to Linux applications.  It is also an extremely simple method, from the point of view of the NonStop application, to communicate via Kafka – using a straightforward WRITEREAD or SERVERCLASS_SEND_.

Depending on the specific application and environmental requirements, this approach may provide better clustering/fault-tolerance than some established queuing solutions.  It is also likely to perform at least as well as, if not considerably better than, most other alternatives.  In particular, the low latency of this approach is impressive – initial testing by Infrasoft has seen overall latency as low as 1ms with this configuration.

If requests and responses need to be sent on separate “channels”, this can be achieved with a minor adjustment to the configuration depicted in Figure 3, providing completely asynchronous request/response handling.

Figure 4. Separate Channels for Request and Response Messaging

In this example, the NonStop application would handle context management between request and response messages.



uLinga for Kafka provides reliable, high-performance options for integrating NonStop applications and data with Kafka.  The addition of Kafka Consumer functionality gives customers the ability to stream data from Kafka and easily incorporate it with NonStop databases and applications.  With uLinga for Kafka’s unique NonStop interprocess communication (IPC) support, customers have the ability to use Kafka in new ways, be it as a high-performance messaging hub, a feed for application updates or any number of additional use cases.

For more information on uLinga for Kafka, speak to our partners at comforte and TIC Software, or contact us at productinfo@infrasoft.com.au.


  • Andrew Price

Andrew Price has worked on the NonStop for his entire career. He spent most of the 90s as a BASE24 developer at different banks in different countries, then many years at Insession and ACI. He’s also worked at XYPRO and most recently served as NuWave Technologies’ Chief Operating Officer. He has extensive experience in most aspects of NonStop software development and sales, having been a coder, product manager, sales support technician, and engineering manager. He has been with Infrasoft since January 2020, where he is Director of Business Operations. You can connect with him at https://www.linkedin.com/in/andrew-g-price/ or on Twitter @andrewgprice.
