
Blame the genius who gave us the term “cloud” as shorthand for distributed computing. Clouds, in many languages and cultures, are equated with the ephemeral, with dreamy states of mind, and with vague thinking.

Well, cloud computing is none of those “cloud things.”  It is the place where huge capital investments, the best thinking about reliability, and the leading developments in technology have come together to create a value proposition that is hard to ignore.

When it comes to reliability, as a distributed system – really a system of distributed systems – the cloud accepts the inevitability of failure in individual system elements and, to compensate, incorporates very high levels of resilience across the whole architecture.

For those counting nines (the reliability figures quoted as 99.xxx) there can be enormous comfort in the figures quoted by cloud providers. Those digging deeper may find the picture to be less perfect in ways that make the trusty mainframe seem pretty wonderful. But the vast failover capabilities built into clouds, especially those operated by the so-called hyperscalers, are so immense as to be essentially unmatchable, especially when other factors are considered.

The relevance of this for mainframe operators is not about “pro or con.”  Although some enterprises have taken the “all cloud” path, in general, few are suggesting the complete replacement of mainframe by cloud.

What is instead true is that the cloud’s immense reliability, its nearly turnkey capabilities in analytics and many other areas, and its essentially unlimited scalability make it the only really meaningful way to supplement core mainframe capabilities – and in 2021 its growth is unabated.

Whether it is providing the ultimate RAID-like storage reliability across widely distributed physical systems to protect and preserve vital data or spinning up compute power to ponder big business (or tech) questions, cloud is simply unbeatable.

So, for mainframe operations, it is futile to try to “beat” cloud but highly fruitful to join – the mainframe + cloud combination is a winner.

Indeed, Gartner analyst Jeff Vogel, in a September 2020 report, “Cloud Storage Management Is Transforming Mainframe Data,” predicts that one-third of mainframe data (typically backup and archive) will reside in the cloud by 2025 — most likely a public cloud — compared to less than 5% at present – a stunning shift.

This change is coming. And it is great news for mainframe operators because it adds new capabilities and powers to what’s already amazing about mainframe. And it opens the doors to new options that have immense potential benefits for enterprises ready to take advantage of them.


Change is good – a familiar mantra, but one not always easy to practice. When it comes to moving toward a new way of handling data, mainframe organizations, which have earned their keep by delivering the IT equivalent of corporate-wide insurance policies (rugged, reliable, and risk-averse), naturally look with caution on new concepts like ELT — extract, load and transform.

Positioned as a lighter and faster alternative to more traditional data handling procedures such as ETL (extract, transform, and load), ELT definitely invites scrutiny. And that scrutiny can be worthwhile.

A definition provided by SearchDataManagement.com says that ELT is “a data integration process for transferring raw data from a source server to a data system (such as a data warehouse or data lake) on a target server and then preparing the information for downstream uses.”  In contrast, another source defines ETL as “three database functions that are combined into one tool to pull data out of one database and place it into another database.”

[Diagram: ETL vs. ELT]

The crucial functional difference in those definitions is the exclusive focus on database-to-database transfer with ETL, while ELT is open-ended and flexible. To be sure, there are variations in ETL and ELT that might not fit those definitions but the point is that in the mainframe world ETL is a tool with a more limited focus, while ELT is focused on jump-starting the future.
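To make the contrast concrete, here is a minimal Python sketch of an ELT-style flow, with hypothetical bucket, file, and table names: the raw extract is loaded to cloud object storage untouched, and transformation is deferred to whatever engine lives alongside the data lake.

```python
# Illustrative ELT sketch (hypothetical names throughout): extract raw data,
# load it to cloud object storage as-is, and transform it later in the cloud.
import boto3

s3 = boto3.client("s3")

# Extract + Load: push the raw mainframe extract to the cloud untouched.
s3.upload_file(
    "payroll_extract.raw",               # raw extract produced on premises
    "example-raw-zone",                  # hypothetical landing bucket
    "mainframe/payroll_extract.raw",
)

# Transform: run later, in the cloud, on whatever engine suits the data lake.
# A traditional ETL flow would have to cleanse and restructure the data
# *before* it ever left the source system.
TRANSFORM_SQL = """
CREATE TABLE curated.payroll AS
SELECT employee_id,
       CAST(gross_pay AS DECIMAL(12, 2)) AS gross_pay
FROM raw_zone.payroll_extract;
"""
print("Submit to the cloud warehouse of your choice:", TRANSFORM_SQL)
```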

While each approach has its advantages and disadvantages, let’s take a look at why we think ETL is all wrong for mainframe data migration.

ETL is Too Complex  

ETL was not originally designed to handle all the tasks it is now being asked to do. In the early days it was often applied to pull data from one relational structure and get it to fit in a different relational structure. This often included cleansing the data, too. For example, a traditional RDBMS can get befuddled by numeric data where it is expecting alpha data or by the presence of obsolete address abbreviations. So, ETL is optimized for that kind of painstaking, field-by-field data checking, ‘cleaning,’ and data movement, not so much for feeding a hungry Hadoop database or modern data lake. In short, ETL wasn’t invented to take advantage of all the ways data originates and all the ways it can be used in the 21st century.

ETL is Labor Intensive 

All that RDBMS-to-RDBMS movement takes supervision and even scripting. Skilled DBAs are in high demand and may not stay at your organization, so keeping the human part of the equation going can be tricky. In many cases, someone will have to come along and recreate or replace that hand-coded work whenever something new is needed.

ETL is a Bottleneck 

Because the ETL process is built around transformation, everything is dependent on the timely completion of that transformation.  However, with larger amounts of data in play (think Big Data), the needed transformation times can become inconvenient or impractical, turning ETL into a potential functional and computational bottleneck.

ETL Demands Structure 

ETL is not really designed for unstructured data and can add complexity rather than value when asked to deal with such data. It is best for traditional databases but does not help much with the huge waves of unstructured data that companies need to process today.

ETL Has High Processing Costs 

ETL can be especially challenging for mainframes because it consumes general-purpose processor capacity that incurs MSU charges, burdening systems precisely when they need to handle real-time workloads.  This stands in contrast to ELT, which can be accomplished largely using the mainframe’s built-in zIIP engines, cutting MSU costs, with additional processing conducted in a chosen cloud destination. In response to those high costs, some customers have moved the transformation stage into the cloud to handle all kinds of data transformations, integrations, and preparations to support analytics and the creation of data lakes.

Moving Forward

It would obviously be wrong to oversimplify a decision regarding the implementation of ETL or ELT; there are too many moving parts and too many decision points to weigh. However, what is crucial is understanding that rather than being focused on legacy practices and limitations, ELT speaks to most of the evolving IT paradigms. ELT is ideal for moving massive amounts of data. Typically the desired destination is the cloud and often a data lake, built to ingest just about any and all available data so that modern analytics can get to work. That is why ELT is growing today and why it is making inroads specifically in the mainframe environment. In particular, it represents perhaps the best way to accelerate the movement of data to the cloud and to do so at scale. That’s why ELT is emerging as a key tool for IT organizations aiming at modernization and at maximizing the value of their existing investments.


One of the great revelations for those considering new or expanded cloud adoption is the cost factor – especially with regard to storage. The received wisdom has long been that nothing beats the low cost of tape for long-term and mass storage.

In fact, though tape is still cheap, cloud options such as Amazon S3 Glacier Deep Archive are getting very close on cost, and they offer tremendous advantages that tape can’t match. A case in point is Amazon S3 Intelligent-Tiering.

Tiering (also called hierarchical storage management or HSM) is not new. It’s been part of the mainframe world for a long time, but with limits imposed by the nature of the storage devices involved and the software. According to Amazon, Intelligent Tiering helps to reduce storage costs by up to 95 percent and now supports automatic data archiving. It’s a great way to modernize your mainframe environment by simply moving data to the cloud, even if you are not planning to migrate your mainframe to AWS entirely.

How does Intelligent-Tiering work? The idea is pretty simple. When objects are found to have been rarely accessed over long periods of time, they are automatically targeted for movement to less expensive storage tiers.

Migrate Mainframe to AWS

In the past (both in mainframes and in the cloud) you had to define a specific policy stating what needed to be moved to which tier and when, for example after 30 days or 60 days. The point with the new AWS tiering is that it automatically identifies what needs to be moved and when, and then moves it at the proper time. Migrating mainframe data to Amazon S3 is no problem because modern data movement technology now allows you to move both historical and active data directly from tape or virtual tape to Amazon S3. Once there, auto-tiering can transparently move cold and long-term data to less expensive tiers.

This saves the trouble of needing to specifically define the rules. By abstracting the cost issue, AWS simplifies tiering and optimizes the cost without impacting the applications that read and write the data. Those applications can continue to operate under their usual protocols while AWS takes care of selecting the optimal storage tier. According to AWS, this is the first and, at the moment, the only cloud storage that delivers this capability automatically.
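How does that look in practice? Below is a hedged boto3 sketch with a hypothetical bucket name: the “old way” spells out lifecycle rules with explicit day thresholds, while the “new way” simply writes objects to the S3 Intelligent-Tiering storage class and, optionally, opts them into its automatic archive tiers.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-mainframe-archive"  # hypothetical bucket name

# The old way: an explicit lifecycle policy that says what moves where, and when.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "manual-tiering",
            "Filter": {"Prefix": "backup/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }]
    },
)

# The new way: store objects as Intelligent-Tiering and let AWS watch access
# patterns and shift objects between access tiers automatically.
s3.put_object(
    Bucket=bucket,
    Key="backup/prod.payroll.g0001v00",
    Body=b"raw mainframe backup data",   # placeholder payload
    StorageClass="INTELLIGENT_TIERING",
)

# Optionally opt rarely accessed objects into the automatic archive tiers.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket=bucket,
    Id="auto-archive",
    IntelligentTieringConfiguration={
        "Id": "auto-archive",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```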

When reading from tape, the traditional lower tier for mainframe environments, recall times are the concern as the system has to deal with tape mount and search protocols. In contrast, Amazon S3 Intelligent-Tiering can provide a low millisecond latency as well as high throughput whether you are calling for data in the Frequent or Infrequent access tiers. In fact, Intelligent-Tiering can also automatically migrate the most infrequently used data to Glacier, the durable and extremely low-cost S3 storage class for data archiving and long-term backup. And with new technology allowing efficient and secure data movement over TCP/IP, getting mainframe data to S3 is even easier.

The potential impact on mainframe data practices

For mainframe-based organizations this automated tiering option could be an appealing alternative to tape from both a cost and a benefits perspective. However, the tape comparison is rarely that simple. For example, depending on the amount of data involved and the specific backup and/or archiving practices, any given petabyte of data needing to be protected may have to be copied and retained two or more times, which immediately makes tape seem a bit less competitive. Add overhead costs, personnel, etc., and the “traditional” economics may begin to seem even less appealing.

Tiering, in a mainframe context, is often as much about speed of access as anything else. So, in the tape world, big decisions have to be made constantly about what can be relegated to lower tiers and whether the often much-longer access times will become a problem after that decision has been made. But getting mainframe data to S3, where such concerns are no longer an issue, is now easy. Modern data movement technology means you can move your mainframe data in mainframe format directly to object storage in the cloud so it is available for restore directly from AWS.

Many mainframe organizations have years, even decades of data on tape. Knowledge of this tape data is retained only in the tape management system. Or perhaps it was just copied forward from a prior tape system upgrade.  How much of this data is really needed? Is it even usable anymore? Migrating this older data to AWS allows it to be managed in a modern way and can reduce the amount of tape data kept on premises.

And what about those tapes that today are shipped off-site for storage and recovery purposes? Why not put that data on cloud storage for recovery anywhere? 

For mainframe organizations interested in removing on-premises tape technology, reducing tape storage footprints, or creating remote backup copies, cloud options like Amazon S3 Intelligent-Tiering can offer cost optimization that is better “tuned” to an organization’s real needs than anything devised manually or implemented on-premises. Furthermore, with this cloud-based approach, there is no longer any need to know your data patterns or think about tiering; it just gets done.

Best of all, you can now perform a stand-alone restore directly from cloud. This is especially valuable with ransomware attacks on the rise because there is no dependency on a potentially compromised system.

You can even take advantage of AWS immutable copies and versioning capabilities to further protect your mainframe data.
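As a rough illustration (the bucket name is hypothetical, and note that S3 Object Lock must be enabled when the bucket is created), enabling versioning plus a default retention rule looks something like this:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-mainframe-backup"  # hypothetical; created with Object Lock enabled

# Keep every version of every backup object, so an overwrite or delete
# (accidental or malicious) never destroys the only copy.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Apply a default retention rule: objects cannot be altered or deleted for
# 30 days, even by privileged users, when COMPLIANCE mode is used.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```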

Getting there

Of course, in order to take advantage of cloud storage like Amazon S3 Intelligent Tiering, you need to find a way to get your mainframe data out of its on-premises environment. Traditionally, that has presented a big challenge. But, as with multiplying storage options, the choices in data movement technology are also improving. For a review of new movement options, take a look at a discussion of techniques and technologies for Mainframe to Cloud Migration.

We recently looked at the topic of “Why Mainframe Data Management is Crucial for BI and Analytics” in an Analytics Insight article written by our CEO, Gil Peleg. Our conclusions, in brief, are that enterprises are missing opportunities when they allow mainframe data to stay siloed. And, while that might have been acceptable in the past, today data and analytics are critical to achieving business advantage.

How did we get here? Mainframes are the rock on which many businesses built their IT infrastructure. However, while the rest of IT has galloped toward shared industry standards and even open architectures, mainframe has stood aloof and unmoved. It operates largely within a framework of proprietary hardware and software that does not readily share data. But with the revolutionary pace of change, especially in the cloud, old notions of scale and cost have been cast aside. As big and as powerful as mainframe systems are, there are things the cloud can now do better, and analytics is one of those things.

In the cloud no problem is too big. Effectively unlimited scale is available if needed, and a whole host of analytic tools like Kibana, Splunk, and Snowflake has emerged to better examine not only structured data but also unstructured data, which abounds in mainframes.

Cloud tools have proven their worth on “new” data, yielding extremely important insights. But those insights could be enhanced, often dramatically, if mainframe data, historic and current, were made available in the same way or, better yet, combined with it – for instance in modern cloud-based data lakes.

Eliminating silos

It turns out that most organizations have had a good excuse for not liberating their data: It has been a difficult and expensive task. For example, mainframe data movement, typically described as “extract, transform, and load” (ETL), requires intensive use of mainframe computing power. This can interfere with other mission-critical activities such as transaction processing, backup, and other regularly scheduled batch jobs. Moreover, mainframe software vendors typically charge in “MSUs,” which roughly correlate with CPU processing loads.

This is not a matter of “pie in the sky” thinking. Technology is available now to address and reform this process. Now, mainframe data can be exported, loaded, and transformed to any standard format in a cloud target. There, it can be analyzed using any of a number of tools. And this can be done as often as needed. What is different about this ELT process is the fact that it is no longer so dependent on the mainframe. It sharply reduces MSU charges by accomplishing most of the work on built-in zIIP engines, which are a key mainframe component and have considerable processing power. 

What does all this mean? It means data silos can be largely a thing of the past. It means an organization can finally get at all its data and can monetize that data. It means opening the door to new business insights, new business ideas, and new business applications.

An incidental impact is that there can be big cost savings in keeping data in the cloud in storage resources that are inherently flexible (data can move from deep archive to highly accessible quickly) rather than on-premises. And, of course, no capital costs – all operational expenses. Above all, though, this provides freedom. No more long contracts, mandatory upgrades, services, staff, etc. In short, it’s a much more modern way of looking at mainframe storage.


Introducing object storage terminology and concepts – and how to leverage cost-effective cloud data management for mainframe 

Object storage is coming to the mainframe.  It’s the optimal platform for demanding backup, archive, DR, and big-data analytics operations, allowing mainframe data centers to leverage scalable, cost-effective cloud infrastructures.

For mainframe personnel, object storage is a new language to speak.  It’s not complex, just a few new buzzwords to learn.  This paper was written to introduce you to object storage, and to assist in learning the relevant terminology.  Each term is compared to familiar mainframe concepts.  Let’s go!

What is Object Storage?

Object storage is a computer data architecture in which data is stored in object form – as compared to DASD, file/NAS storage and block storage.  Object storage is a cost-effective technology that makes data easily accessible for large-scale operations, such as backup, archive, DR, and big-data analytics and BI applications. 

IT departments with mainframes can use object storage to modernize their mainframe ecosystems and reduce dependence on expensive, proprietary hardware, such as tape systems and VTLs.

Basic Terms

Let’s take a look at some basic object storage terminology (and compare it to mainframe lingo):

  • Objects.  Object storage contains objects, which are also known as blobs.  These are analogous to mainframe data sets.
  • Buckets.  A bucket is a container that hosts zero or more objects.  In the mainframe realm, data sets are hosted on a volume – such as a tape or DASD device.  (A short code sketch after this list illustrates both terms.)
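To make those two terms concrete, here is a small, illustrative boto3 sketch with hypothetical bucket and object names, treating an object prefix roughly the way a mainframer would treat a high-level qualifier:

```python
import boto3

s3 = boto3.client("s3")

# A bucket plays roughly the role of a volume; an object plays roughly the
# role of a data set. Listing objects by prefix is loosely analogous to
# listing data sets by high-level qualifier.
response = s3.list_objects_v2(Bucket="example-mainframe-data", Prefix="PROD.PAYROLL.")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])

# head_object returns an object's basic metadata (size, last-modified,
# storage class), much like reading a data set's catalog attributes.
meta = s3.head_object(Bucket="example-mainframe-data", Key="PROD.PAYROLL.MASTER")
print(meta["ContentLength"], meta["LastModified"], meta.get("StorageClass", "STANDARD"))
```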

Data Sets vs. Objects – a Closer Look

As with data sets, objects contain both data and some basic metadata describing the object’s properties, such as creation date and object size.  Here is a table with a detailed comparison between data set and object attributes:

NOTE

The object attributes described below are presented as defined in AWS S3 storage systems.

[Table 1: Comparison of data set and object attributes]

Volumes vs. Buckets – a Closer Look

Buckets, which are analogous to mainframe volumes, are unlimited in size.  Separate buckets are often deployed for security reasons, and not because of performance limitations.  A bucket can be assigned a life cycle policy that includes automatic tiering, data protection, replication, and automatic at-rest encryption.

NOTE

The bucket attributes described below are presented as defined in AWS S3 storage systems.

[Table 2: Comparison of volume and bucket attributes]

Security Considerations

In the z/OS domain, a SAF user and password are required, as well as the necessary authorization level for the volume and data set.  For example, users with ALTER access to a data set can perform any action – read/write/create/delete.

In object storage, users are defined in the storage system.  Each user is granted access to specific buckets, prefixes, and objects, and separate permissions are defined for each action, for example:

  • PutObject
  • DeleteObject
  • ListBucket
  • DeleteObjectVersion

In addition, each user can be associated with a programmatic API key and API secret in order to access the bucket and the object storage via a TCP/IP-based API. When accessing data in the cloud, HTTPS is used to encrypt the in-transit stream.  When accessing data on-premises, HTTP can be used to avoid encryption overhead.  If required, the object storage platform can be configured to perform data-at-rest encryption. 
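As a rough illustration of that model, here is a hedged boto3 sketch, with a hypothetical bucket, account ID, and user, that grants one principal only PutObject and ListBucket (so it can write and list but never delete) and turns on default at-rest encryption:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-mainframe-archive"  # hypothetical

# Grant a single principal only the specific actions it needs.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BackupWriterCanPutObjects",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/backup-writer"},
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
        {
            "Sid": "BackupWriterCanListBucket",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/backup-writer"},
            "Action": ["s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{bucket}",
        },
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Default server-side encryption covers the data-at-rest requirement.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```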

Disaster Recovery Considerations

While traditional mainframe storage platforms such as tape and DASD rely on full storage replication, object storage supports both replication and erasure coding.  Erasure coding provides significant savings in storage space compared with full replication, while still spreading data across multiple failure domains. For example, on AWS, S3 data is automatically spread across a minimum of three Availability Zones, providing multi-site redundancy within a region. Erasure-coded buckets can also be fully replicated to another region, as is practiced with traditional storage. Most object storage platforms support both synchronous and asynchronous replication.
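For example, a hedged boto3 sketch of cross-region replication, assuming hypothetical bucket names, a hypothetical IAM role, and versioning already enabled on both buckets:

```python
import boto3

s3 = boto3.client("s3")

# Cross-region replication: every new object written to the source bucket is
# automatically copied to a bucket in another region. Both buckets must have
# versioning enabled; the IAM role ARN below is hypothetical.
s3.put_bucket_replication(
    Bucket="example-mainframe-backup-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::example-mainframe-backup-eu-west-1"},
        }],
    },
)
```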

Model9 – Connecting Object Storage to the Mainframe

Model9’s Cloud Data Manager for Mainframe is a software-only platform that leverages powerful, scalable cloud-based object storage capabilities for data centers that operate mainframes.

The platform runs on the mainframe’s zIIP processors, providing cost-efficient storage, backup, archive, and recovery functionalities with an easy-to-use interface that requires no object-storage knowledge or skills.


For mainframe shops that need to move data on or off the mainframe, whether to the cloud or to an alternative on-premises destination, FICON, the IBM mainstay for decades, is generally seen as the standard, and with good reason. When it was first introduced in 1998 it was a big step up from its predecessor ESCON that had been around since the early 1990s. Comparing the two was like comparing a firehose to a kitchen faucet. 

FICON is fast, in part, because it runs over Fibre Channel in an IBM proprietary form defined by the ANSI FC-SB-3 Single-Byte Command Code Sets-3 Mapping Protocol for Fibre Channel (FC). In that schema it is an FC layer 4 protocol.  As a mainframe protocol it is used on IBM Z systems to handle both DASD and tape I/O. It is also supported by other vendors of disk and tape storage and switches designed for the IBM environment.

Over time, IBM has increased speeds and added features such as High Performance FICON, without significantly enhancing the disk and tape protocols that traverse it; meaning these limitations on data movement remain. For this reason, the popularity and long history of FICON do not make it the answer for every data movement challenge.

Stuck in the Past

One challenge, of particular concern today, is that mainframe secondary storage is still being written to tape via tape protocols, whether it is real physical tape or virtual tape emulating actual tape. With tape as a central technology, that means dealing with tape mount protocols and tape management software to track where data sets reside on those miles of Mylar. The serial nature of tape and the limitations of the original hardware often required large data sets to span multiple tape images.

Though virtual tapes written to DASD improved the speed of writes and recalls, the underlying process is still constrained by tape’s serialized protocols.  This means waiting for tape mounts and waiting for I/O cycles to complete before the next data can be written. When reading back, the system must traverse the tape image to find the specific data set requested. In short, while traditional tape may have its virtues, speed – the 21st century speed of modern storage – is not among them. Even though tape and virtual tape are attached via FICON, the process of writing and recalling data relies on the underlying tape protocol for moving data, making FICON-attached tape less than ideal for many modern use cases.

Faster and Better

But there is an alternative that doesn’t rely on tape or emulate tape because it does not have to. 

Instead, software generates multiple streams of data from a source and pushes data over IBM Open Systems Adapter (OSA) cards using TCP/IP in an efficient and secure manner to an object storage target, either on premises or in the cloud. The Open Systems Adapter functions as a network controller that supports many networking transport protocols, making it a powerful helper for this efficient approach to data movement.  Importantly, because it builds on open networking standards, OSA is developing faster than FICON. For example, with the IBM z15 there is already a 25GbE OSA-Express7S card, while FICON is still at 16Gb with the FICON Express16 card.

While there is a belief common among many mainframe professionals that OSA cards are “not as good as FICON,” that is simply not true when the necessary steps are taken to optimize OSA throughput.

[Diagram: Model9 product architecture]

To achieve better overall performance, the data is captured well before tape handling, thus avoiding the overhead of tape management, tape mounts, etc. Rather than relying on serialized data movement, this approach breaks apart large datasets and sends them across the wire in simultaneous chunks, while also pushing multiple datasets at a time.  Data can be compressed prior to leaving the mainframe and beginning its journey, reducing the amount of data that would otherwise be written. Dataset recalls and restores are also compressed and use multiple streams to ensure quick recovery of data from the cloud. 

Having the ability to write multiple streams further increases throughput and reduces latency issues. In addition, compression on the mainframe side dramatically reduces the amount of data sent over the wire.  If the software is also designed to run on zIIP engines within the mainframe, data discovery and movement, as well as backup and recovery workloads, will consume fewer billable MIPS, and TCP/IP processing benefits as well.
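None of this is tied to a particular product; as a generic illustration of the underlying ideas (compress at the source, then push many parts over multiple concurrent streams), here is a hedged boto3 sketch with hypothetical file and bucket names:

```python
import gzip
import shutil
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Compress on the sending side so less data crosses the wire.
with open("prod.claims.archive", "rb") as src, gzip.open("prod.claims.archive.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Break the (compressed) data set into chunks and push several in parallel,
# rather than streaming it serially as a tape protocol would.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # split anything larger than 64 MB
    multipart_chunksize=64 * 1024 * 1024,   # ... into 64 MB parts
    max_concurrency=16,                     # ... sent over 16 concurrent streams
)
s3.upload_file(
    "prod.claims.archive.gz",
    "example-mainframe-archive",
    "archive/prod.claims.archive.gz",
    Config=config,
)
```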

This approach delivers mainframe data to cloud storage, including all dataset types and historical data, in a quick and efficient manner. And this approach can also transform mainframe data into standard open formats that can be ingested by BI and analytics tools off the mainframe itself, with a key difference: when data transformation occurs on the cloud side, no mainframe MIPS are used to transform the data. This allows for the quick and easy movement of complete datasets, tables, image copies, etc. to the cloud, and then makes all data available to open applications by transforming the data on the object store.

A modern, software-based approach to data movement means there is no longer a need to go to your mainframe team to update the complex ETL process on the mainframe side. 

To address the problem of hard-to-move mainframe data, this software-based approach provides the ability to readily move mainframe data and, if desired, readily transform it to common open formats. This data transformation is accomplished on the cloud side, after data movement is complete, which means no MF resources are required to transform the data. 

  • Dedicated software quickly discovers (or rediscovers) all data on the mainframe. Even with no prior documentation or insights, Model9 can rapidly assemble and map the data to be moved, expediting both modernization planning and data movement.
  • Policies are defined to move either selected data sets or all data sets automatically, reducing oversight and management requirements dramatically as compared to other data movement methods.
  • For the sake of simplicity, a software approach can be designed to invoke actions via a RESTful API or a management UI, as well as from the mainframe side via a traditional batch job or command line (see the illustrative sketch after this list).
  • A software approach can also work with targets both on premises and in the cloud.
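Purely as an illustration of that RESTful style, and not a description of Model9’s actual API, here is a hypothetical sketch of triggering a policy-driven data movement action over HTTPS:

```python
import requests

# Purely illustrative: the endpoint, payload fields, and token below are
# hypothetical and do not describe any vendor's actual API.
API_URL = "https://data-mover.example.com/api/v1/policies/nightly-archive/run"

response = requests.post(
    API_URL,
    headers={"Authorization": "Bearer <api-token>"},
    json={"target": "s3://example-mainframe-archive/archive/", "datasets": "PROD.**"},
    timeout=30,
)
response.raise_for_status()
print("Policy run accepted:", response.json())
```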

[Diagram: Moving mainframe data over TCP/IP]

In summary, a wide range of useful features can make data movement with a software-based approach intuitive and easy.  By avoiding older FICON and tape protocols, a software-based approach can push mainframe data over TCP/IP to object storage in a secure and efficient manner, making it the answer to modern mainframe data movement challenges!


At Model9, our position in the industry gives us a unique vantage point on what is happening across both the mainframe and cloud world. Clearly, while everyone expects the movement to the cloud to continue, with Gartner even going so far as to suggest that around one-third of mainframe storage will be in the cloud by 2025, there are some glitches and trends that we find interesting.

The unifying theme is the need to do more with data and do it at the lowest cost possible.  Cloud costs are often extremely low in comparison with traditional on-premises costs, while also offering innumerable advantages in terms of data mobility and cost-effective analytics. The big challenge is data movement “up” from on-premises repositories to those cloud opportunities. 

Rediscovering Mainframe Value

Looking ahead, it seems that mainframe organizations are finally realizing that successful digital transformation requires adopting modern technology solutions, taking some risks, and not just relying on professional services firms to force-feed the process. That means these same organizations will be looking to optimize their investment in the mainframe instead of simply migrating off the platform, maximizing the value of that investment by integrating mainframe and cloud rather than attempting the risky path of migrating completely away. Moving data and enhancing analytics is a great place to start.  The goal will be, first, to leverage cloud analytics and BI tools for their mainframe data and, second, to leverage cloud technologies to simplify daily processes and reduce IT costs.

Speeding Past ETL

Last year’s BMC survey discussed the continued support for mainframes even as cloud storage continues to grow. We have heard tales of woe from individuals at some well-known independent companies that were built around the expectation of a substantial and rapid mainframe-to-cloud transition. The problem they face is that traditional data movement (extract, transform, load – ETL) processes are slow and expensive compared with the newer extract, load, transform (ELT) approach, contributing to slower-than-expected movement to the cloud, and perhaps even driving some fear, uncertainty, and doubt among mainframe operators about the path ahead.  With Gartner pointing to the compelling case for cloud storage, we expect more mainframe shops to look beyond the “same old” movement strategies in the year ahead and try something new.  Certainly, there is no longer any question about the capabilities of these options — and again, Gartner has pointed this out in their report.

A Superior Data Lake 

Another thing we definitely see from customers is a move to create and/or grow bigger and better data lakes.  Data lakes are almost always synonymous with cloud storage.  The economics are compelling and the analytic options, equally appealing.

Analysts are predicting as much as a 29 percent compound annual growth rate (CAGR) for data lake implementation over the next five years.  We also see that organizations want all their data in the data lake, so they can run comprehensive analytics, with AI, ML, and every other tool of the trade. That means, if at all possible, they want to include mainframe data, which is often critical for understanding customers and the business as a whole. And that means wrestling with the challenges of data movement in a more effective way than in the past. It is almost like getting organizations to make the leap from “sneaker net” (the old days when companies transferred big files by moving floppy disks or portable drives) to actual high-bandwidth networks.  The leap in data movement and transformation today is equally dramatic.

The Competitive Edge Provided by Data

As more and more companies do succeed in sharing their mainframe data within a data lake, the superiority of outcomes in terms of analytic insight is likely to create a real competitive advantage that will force other companies to take the same path. It’s a sort of corollary to the data gravity theory. In this case, when others move their data, you may be forced by competitive pressure to do the same thing. 

Without that critical spectrum of mainframe information, a data lake can easily become what some are terming a Data Swamp — something that consumes resources but provides little real value.  In the year ahead, this view of data gravity should resonate more with decision-makers and will start to inform their strategic decisions.

Multicloud Becomes Mainstream  

In the early days of cloud, as competition began to ramp up and as more and more companies migrated work to the cloud, the use of multicloud became controversial. Pundits worried on the one hand about vendor lock-in and on the other about complexity. It turns out, customers tell us, that they often need multicloud.  Just as on-premises operations often depend on multiple hardware and software vendors, different clouds have different strengths, and customers seem to be simply picking the best-of-breed for a specific purpose. Fortunately, data movement between mainframe and cloud is getting easier!

That leads us to think multicloud will no longer be treated as exceptional or controversial and instead, organizations will focus on making the most of it.

No matter how the Covid crisis and the global economy evolve over the next year, the cloud-mainframe relationships will be dynamic – and interesting to watch. The “new oil” of data will continue to influence us all and will put a premium on getting storage out of static storage and into circulation where it can be monetized.  We wish our customers the best as they navigate these waters!

Everyone is under pressure to modernize their mainframe environment – keeping all the mission-critical benefits without being so tied to a crushing cost structure and a style of computing that often discourages the agility and creativity enterprises badly need. 

Several general traits of cloud can deliver attributes to a mainframe environment that are increasingly demanded and very difficult to achieve in any other way. These are:

Elasticity

Leading cloud providers have data processing assets that dwarf anything available to any other kind of organization. So, as a service, they can provide capacity and/or specific functionality that is effectively unlimited in scale but for which, roughly speaking, customers pay on an as-needed basis. For a mainframe organization this can be extremely helpful for dealing with periodic demand spikes such as the annual holiday sales period. They can also support sudden and substantial shifts in a business model, such as some of those that have emerged during the COVID pandemic.

Resilience

The same enormous scale of the cloud providers that delivers elasticity, also delivers resilience. Enormous compute and storage resources, in multiple locations, and vast data pipes guarantee data survivability.  Cloud outages can happen, but the massive redundancy makes data loss or a complete outage, highly unlikely.

OpEx model

The ‘pay only for what you need’ approach of cloud means that cloud expenses are generally tracked as operating expenses rather than capital expenditures and, in that sense, are usually much easier to fund.  If properly managed, cloud services are usually as cost-effective as on-premises operations, and sometimes much more so, though of course complex questions of how costs are accounted for factor into this.  Unlike the mainframe model, there is no single monthly peak 4-hour interval that sets the pricing for the whole month. Also, there is no need to order storage boxes, compute chassis, and other infrastructure components, nor to track shipments, match bills of materials, or rack and stack servers, as huge infrastructure is available at the click of a button.

Finally, cloud represents a cornucopia of potential solutions to problems you may be facing, with low compute and storage costs, a wide range of infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS) options – including powerful analytic capabilities. 

Experiment First

Fortunately, for those interested in exploring cloud options for mainframe environments, there are many paths forward and no need to make “bet the business” investments.  On the contrary, cloud options are typically modular and granular, meaning you can choose many routes to the functionality you want while starting small and expanding when it makes sense.

Areas most often targeted for cloud experimentation include:

  • Analytics – Mainframe environments have an abundance of data but can’t readily provide many of the most-demanded BI and analytics services. Meanwhile, all across the business, adoption of cloud-based analytics has been growing but without direct access to mainframe data, it has not reached its full potential. Data locked in the mainframe has simply not been accessible. Making mainframe data cloud-accessible is a risk-free first step for modernization that can quickly and easily multiply the options for leveraging key data, delivering rapid and meaningful rewards in the form of scalable state-of-the-art analytics. 
  • Backup – Mainframe environments know how to do backup but they often face difficult tradeoffs when mainframe resources are needed for so many critical tasks. Backup often gets relegated to narrow windows of time. Factors such as reliance on tape, or even virtual tape, can also make it even more difficult to achieve needed results. In contrast, a cloud-based backup, whether for particular applications or data or even for all applications and data, is one of the easiest use cases to get started with. Cloud-based backups can eliminate slow and bulky tape-type architecture. As a backup medium, cloud is fast and cost-effective, and comparatively easy to implement.
  • Disaster recovery (DR) – The tools and techniques for disaster recovery vary depending on the needs of an enterprise and the scale of its budget but often include a secondary site. Of course, setting up a dedicated duplicate mainframe disaster recovery site comes with a high total cost of ownership (TCO). A second, slightly more affordable option is a business continuity colocation facility, which may be shared among multiple companies and made available to one of them at a time of need.  Emerging as a viable third option is a cloud-based BCDR capability that provides essentially the same capabilities as a secondary site at a much lower cost. Predefined service level agreements for a cloud “facility” guarantee a quick recovery, saving your company both time and money.
  • Archive – Again, existing mainframe operations often rely on tape to store infrequently accessed data, typically outside the purview of regular backup activities.  Sometimes this is just a matter of retaining longitudinal corporate data, but many sectors, such as the heavily regulated financial and healthcare industries, are required to retain data for long durations of 10 years or more. As these collections of static data continue to grow, keeping them in “prime real estate” in the data center becomes less and less appealing.  At the same time, few alternatives are appealing because they often involve transporting physical media.  The cloud option, of course, is a classic “low-hanging fruit” choice that can eliminate space and equipment requirements on-premises and readily move any amount of data to low-cost and easy-to-access cloud storage.

A Painless Path for Mainframe Administrators

If an administrator of a cloud-based data center was suddenly told they needed to migrate to a mainframe environment, their first reaction would probably be panic! And with good reason. Mainframe is a complex world that requires layers of expertise. On the other hand, if a mainframe administrator chooses to experiment in the cloud or even begin to move data or functions into the cloud, the transition is likely to be smoother. That is not to say that learning isn’t required for the cloud but, in general, cloud practices are oriented toward a more modern, self-service world. Indeed, cloud growth has been driven in part by ease of use.

Odds are good that someone in your organization has had exposure to cloud, but courses and self-study options abound. Above all, cloud is typically oriented toward learning by doing, with free or affordable on-ramps that let individuals and organizations gain experience and skills at low cost.

In other words, in short order, a mainframe shop can also develop cloud competency. And, for the 2020s, that’s likely to be a very good investment of time and energy.


The COVID-19 pandemic has presented many challenges for mainframe-based organizations, in particular the need to conduct on-site hardware and software maintenance.  That has especially led to consideration of cost-effective cloud data management options as an alternative to legacy mainframe storage platforms.

Evidence is abundant.  In recent months, businesses have learned that cloud growth is largely pandemic-proof and may actually have accelerated. According to Synergy Research Group, Q1 spend on cloud infrastructure services reached $29 billion, up 37% from the first quarter of 2019.  Furthermore, according to Synergy, anecdotal evidence points to some COVID-19-related market growth as additional enterprise workloads appear to have been pushed onto public clouds.  And, according to a recent article in Economic Times, IBM Chief Executive Officer Arvind Krishna has indicated that the pandemic has also heightened interest in hybrid cloud, with transformations that were planned to last for years now being compressed into months.

For organizations built around mainframe technology, these trends underscore an opportunity that has become an urgent need during the pandemic, namely the question of how to reduce or eliminate dependence on on-premises storage assets such as physical tapes – which are still the main reason for needing personnel to access on-prem equipment.

Although essential for many daily operations as well as for routine backup or disaster recovery, these physical assets depend too heavily on having access to a facility and having trained personnel available on site.

Furthermore, they can be expensive to acquire, maintain, and operate.  Additionally, on-prem hardware depends on other physical infrastructure that requires on-site maintenance such as air conditioning and electrical power systems. That was a tolerable situation in the past, but the reality of the pandemic, with lockdowns, transportation problems and the potential health threats to staff is leading mainframe operators to consider modern alternatives to many traditional on-prem mainframe storage options.

Cloud data manager for mainframe

One industry-leading example is the Model9 Cloud Data Manager for Mainframe, which securely delivers mainframe data to any cloud or commodity storage platform, eliminating the dependency on physical and virtual tape alike. It leverages reliable and cost-effective cloud storage for mainframe backup, archive, recovery, and space management purposes.

Needless to say, in the event of emergencies ranging from natural disasters to global pandemics, having mainframe data secured in the cloud means organizations can count on continued smooth operations with few if any people required on site.  And, on an ongoing basis, Model9 also helps unlock mainframe data by transforming it to universal formats that can be used by advanced cloud-based analytics tools, providing more options for making use of corporate data.

It is a future-proof approach that enhances mainframe operations, adds resiliency, helps control costs, and provides a path to better leverage corporate information in a more flexible and cost-effective manner.

The recently posted Computer Weekly article, “Mainframe storage: Three players in a market that’s here to stay”, did a good job of describing the central players in mainframe disk storage but neglected to mention other types of mainframe storage solutions such as tapes and cloud data management.

In particular, the article omitted mention of one of the biggest opportunities for mainframe storage modernization and cost reduction, namely leveraging the cloud to reduce the footprint and cost of the petabytes of data still locked in various kinds of on-premises tape storage.  Model9 currently offers the key to this dilemma by eliminating the dependency on FICON connectivity for mainframe secondary storage.  This means, specifically, that mainframe-based organizations can finally gain real access to reliable and cost-effective on-premises and cloud storage from Cohesity, NetApp, Amazon Web Services, Microsoft Azure, Google Cloud Platform, etc. that until now could not be considered due to the proprietary nature of traditional mainframe storage.  And, while keeping mainframe as the core system that powers transactions, its data can be accessible for analytics, BI and any other cloud application.   

Surely, this is major news for such a key part of the computing market that has hitherto been essentially monopolized by the three players author Antony Adshead discussed at length. 

Mainframe professionals know that new technologies can help them achieve even more; they deserve guidance with regard to the wide options opening up for them.
