As we enter the third decade of the 21st century, we see that the predictions about data proliferation were mostly true. In the 20th century, we considered terabytes of storage to be large and difficult to manage. Now it is common for enterprises to manage exabytes – six orders of magnitude more than what we considered normal just 15-20 years ago. Furthermore, predictions are that by 2030 common usage will push into zettabytes and yottabytes – another six orders of magnitude.
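The jumps described above are easy to sanity-check: each SI storage prefix step (kilo to mega, mega to giga, and so on) is three orders of magnitude, so terabytes to exabytes and exabytes to yottabytes are each a factor of a million. A quick sketch:

```python
# Sanity-check the orders-of-magnitude jumps in storage scale.
# Each SI prefix step is 10^3; storage prefixes in ascending order:
PREFIXES = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]

def order_of_magnitude(prefix):
    """Power of ten for an SI storage prefix (kilobyte = 10^3, etc.)."""
    return 3 * (PREFIXES.index(prefix) + 1)

# Terabytes -> exabytes: the jump enterprises have already made.
print(order_of_magnitude("exa") - order_of_magnitude("tera"))   # 6

# Exabytes -> yottabytes: the jump predicted by 2030.
print(order_of_magnitude("yotta") - order_of_magnitude("exa"))  # 6
```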
This so-called “Data Tsunami” is driven largely by Internet of Things (IoT) devices, which are expected to grow at a 10% rate in 2020 (Source: Computerworld/IDC) and will continue to feed data into our enterprise ecosystems at an astonishing rate.
At these data production and consumption rates, much of the data analyst’s and systems architect’s work will be separating the wheat from the chaff. One estimate is that in 2013 only 22% of the data from machines was useful; in 2020 that number will be in the same range at about 35% (Source: Computerworld/IDC), but the volume of data harvested will have increased dramatically. In other words, to mix a metaphor, the haystack keeps getting bigger and bigger, but the proportion of needles in the stack remains almost the same. It’s our job to make the most of those needles and to make sure we do not lose them. NonStop is all about the value of the data and how best to protect and share it for the enterprise.
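Those percentages can be made concrete with a small illustrative calculation (per-petabyte figures are just a convenient unit; the useful-data fractions are the ones cited above):

```python
# For every petabyte of machine data harvested, how much is useful
# at the percentages cited above? (1 PB = 1000 TB.)
PB_IN_TB = 1000

useful_2013_tb = 0.22 * PB_IN_TB          # 2013: 22% useful
useful_2020_tb = 0.35 * PB_IN_TB          # 2020: ~35% useful
chaff_2020_tb = PB_IN_TB - useful_2020_tb # the rest must be filtered out

print(useful_2013_tb)  # 220.0 TB of needles per PB in 2013
print(useful_2020_tb)  # 350.0 TB of needles per PB in 2020
print(chaff_2020_tb)   # 650.0 TB of chaff per PB in 2020
```

Even at a roughly stable useful fraction, every additional petabyte harvested brings hundreds of terabytes of chaff that the analyst and architect must filter before the needles can be put to work.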
What this does to enterprise data usage remains mostly to be seen. But aside from usage, the hardware technology is also changing significantly. The days of the mechanical spinning disk drive are finally waning after a lifespan longer than anything else in the compute-hardware toolbox. With solid-state storage and persistent memory moving in to replace spinning media, computer hardware architects are free to reduce the physical footprint of storage while increasing speed and capacity at the same time. [Did you ever stop to think about how archaic spinning disks are compared to all other components of the modern computer?]
As enterprise users, NonStop users have led the way in very sophisticated transactional data usage during this two-decade romp into data proliferation. As users, you have delivered ever-expanding relational databases leveraging SQL/MP and SQL/MX. You’ve protected the data with automated replication, and you have greatly exposed and enriched the data with open interfaces like JDBC and ODBC, along with open protocols like SOAP.
Going into the next decade, you will be pressured to continue to deliver open access and nearly unlimited expansion of your primary business asset: your data. This is not just because of the amount of data being stored, but also because the access options and data perspectives are expanding too. Business needs will evolve and will require some very creative data analysis to deliver on the promise of an estimated 450 billion internet transactions per day.
HPE offers the most scalable, open relational technology with SQL/MX. With the recent announcement of Liquibase support, and with expanded features for cross-platform portability of data definitions and SQL language compatibility, SQL/MX remains at the top for relational data access at every scale. But relational is not all that is needed these days. Many open applications simply use relative, key-sequenced, and flat data storage and internal software structures; HPE NonStop Enscribe has supported these since day one. Some applications need temporary in-memory access to key-value storage; some need key-value storage with transparent persistence. NonStop In-Memory Cache (NSIMC) addresses this need. And then there is the emerging model of streaming data, which is used for both transactions and data distribution. HPE will not rest on the existing data access technologies; research is underway to address future needs in the data space.
With Fall and Winter 2019 upon us, it is almost time to ring in the new decade with the hope and optimism that this new data age brings. With the insights and discovery that intertwine with the proliferation of internet devices and the promise of new artificial intelligence (AI) technology, we are all poised to use these data bits in ways that greatly improve the health and safety of the world. Together, you, HPE, and the HPE community will take this data tsunami and make it work for you. I am honored and delighted to be part of this community. Enjoy this issue of The Connection. I look forward to seeing you all in November at NonStop TBC 2019!
Hewlett Packard Enterprise