Despite the promise of business intelligence tools and data warehousing, enterprises have long overlooked one of the biggest internal sources of valuable data – the venerable mainframe.
The mainframe has been around for over 60 years and its demise has been prematurely declared on numerous occasions. But mainframes, many of them years or decades old, continue to support many mission-critical transactional applications at enterprises across industries. IBM, far and away the mainframe market leader, continues to develop its System z line of mainframe computers and even had its most successful quarter (albeit measured by MIPS, not revenue) less than two years ago.
Mainframes are reliable and secure workhorses that support many of the transactional workloads that power the modern economy, including financial transactions, payroll processing and inventory controls. Anyone who has ever used an automated teller machine (ATM) has used a mainframe computer.
But sorting and transforming mainframe data to perform complex analytics is difficult, expensive and forces trade-offs over scarce CPU cycles, which drive mainframe operating costs. As a result, most enterprises are not leveraging mainframe data to support data-driven decision-making as much as they should. Considering the type of data stored in mainframes – namely customer financial transaction data – enterprises are leaving an awful lot of value on the table.
With the advent of modern Big Data technologies and related tools, enterprises now have options for more easily enabling analytics on valuable mainframe data. One such option is to offload mainframe JCL batch workloads to Hadoop. This approach yields significant cost savings over processing the data on the mainframe, with the added benefit that the resulting data is available for analysis.
The challenge with this approach is the lack of native connectivity between mainframes and Hadoop. Vendors such as Syncsort have developed data ingestion techniques that significantly reduce the complexity of connecting mainframes and Hadoop. Mainframe data in Hadoop can then be integrated with other data sources and/or exposed to related analytic environments for further mining.
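Part of what makes that connectivity hard is format translation: mainframe flat files are typically fixed-width EBCDIC records, while Hadoop tools expect delimited UTF-8 text. As a minimal sketch of the kind of conversion an ingestion pipeline performs, the snippet below decodes a fixed-width EBCDIC record (code page 037) into a tab-separated line. The field names and offsets are purely illustrative assumptions, not taken from any real copybook or vendor tool.

```python
# Hypothetical field layout: (name, start offset, length) within the record.
# Real layouts come from a COBOL copybook; these are illustrative only.
FIELDS = [("account_id", 0, 10), ("amount", 10, 8), ("branch", 18, 4)]

def decode_record(raw: bytes) -> str:
    """Decode one EBCDIC (code page 037) record into a tab-separated line."""
    text = raw.decode("cp037")  # Python's built-in EBCDIC US/Canada codec
    return "\t".join(text[start:start + length].strip()
                     for _, start, length in FIELDS)

# Round-trip example: build an EBCDIC record, then decode it.
record = "AC12345678" + "00019900" + "NY01"
raw = record.encode("cp037")
print(decode_record(raw))  # -> "AC12345678	00019900	NY01"
```

Lines produced this way can be written to HDFS and consumed by downstream Hadoop tools; packed-decimal (COMP-3) numeric fields, common in real mainframe files, would need additional unpacking logic not shown here.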
There are other methods for better leveraging mainframe data for Big Data analysis, including new tools and software for extracting, sorting and analyzing mainframe data. Enterprise practitioners should examine their options with an eye towards the analytic workloads and use cases they ultimately will support. Whatever options are chosen, enterprises with heavy mainframe use cannot afford to overlook this important source of value.
Action Item: Enterprise Big Data practitioners should first conduct an inventory of their internal data resources, including mainframe data. Next, consider the potential analytic use cases for mainframe data with a focus on core business objectives. Work with the business side to align objectives while simultaneously laying out a framework for extracting and analyzing all relevant data sources.