
Vendors are scrambling to deliver modern analytics that act on streams of real-time mainframe data.  There are good reasons for pursuing this, but these vendors may be missing the point, or at least missing a more tantalizing opportunity.

Real-time data in mainframes comes mostly from transaction processing. No doubt, spotting a sudden national spike in cash withdrawals from a bank’s ATM systems or an uptick in toilet paper sales in the retail world may have significance beyond the immediate “signal” to reinforce cash availability and reorder from a paper goods supplier. These are the kinds of things real-time apostles rave about when they tout the potential for running mainframe data streams through Hadoop engines and similar big data systems.

What’s missed, however, is the fact that mainframe systems have been quietly accumulating data points just like this for decades. And where mainframe data can be most valuable is in supporting analytics across the time axis. Looking at when similar demand spikes have happened over time and their duration and repetition offers the potential to predict them in the future and can hint at the optimal ways to respond and their broader meaning.

Furthermore, for most enterprises, a vast amount of real-time data exists outside the direct purview of the mainframe: think of the oceans of IoT information coming from machinery and equipment, real-time sensor data in retail, and consumer data floating around in the device universe. Little of this usually reaches the mainframe. But it is this data, combined with mainframe data that is not real-time (though sometimes near-real-time), that may have the greatest potential as a font of analytic insight, according to a recent report.

To give mainframes the power to participate in this analytics bonanza requires some of the same nostrums being promoted by the “real-time” enthusiasts but above all requires greatly improving access to older mainframe data, typically resident on tape or VTL.

The optimal pattern here should be rescuing archival and non-real-time operational data from mainframe storage and sharing it with on-prem or cloud-based big data analytics in a data lake.  This allows the mainframe to continue doing what it does best while providing a tabula rasa for analyzing the widest range and largest volume of data.

Technology today can leverage the too-often unused power of zIIP engines to facilitate data movement inside the mainframe and help that data get to new platforms for analytics (ensuring the necessary transformation to standard formats along the way).

It’s a way to make the best use of data and the best use of the mainframe in its traditional role while ensuring the very best in state-of-the-art analytics.  This is a far more profound opportunity than simply dipping into the flow of real-time data in the mainframe. It is based on a fuller appreciation of what data matters and how data can be used. And it is the path that mainframe modernizers will ultimately choose to follow.


Blame the genius who gave us the term “cloud” as shorthand for distributed computing. Clouds, in many languages and cultures, are equated with the ephemeral, with dreamy states of mind, and with vague thinking.

Well, cloud computing is none of those “cloud things.”  It is the place where huge capital investments, the best thinking about reliability, and the leading developments in technology have come together to create a value proposition that is hard to ignore.

When it comes to reliability, the cloud, as a distributed system – really a system of distributed systems – accepts the inevitability of failure in individual system elements and, in compensation, incorporates very high levels of resilience across the whole architecture.

For those counting nines (the reliability figures quoted as 99.xxx), there can be enormous comfort in the figures quoted by cloud providers. Those digging deeper may find the picture to be less perfect in ways that make the trusty mainframe seem pretty wonderful. But the failover capabilities built into clouds, especially those operated by the so-called hyperscalers, are so extensive as to be essentially unmatchable, especially when other factors are considered.

The relevance of this for mainframe operators is not about “pro or con.”  Although some enterprises have taken the “all cloud” path, in general, few are suggesting the complete replacement of mainframe by cloud.

What is instead true is that the cloud’s immense reliability, its ability to offer nearly turnkey capabilities in analytics and many other areas, and its essentially unlimited scalability make it the only really meaningful way to supplement core mainframe capabilities; and in 2021 its growth is unabated.

Whether it is providing the ultimate RAID-like storage reliability across widely distributed physical systems to protect and preserve vital data or spinning up compute power to ponder big business (or tech) questions, cloud is simply unbeatable.

So, for mainframe operations, it is futile to try to “beat” the cloud but highly fruitful to join it – the mainframe + cloud combination is a winner.

Indeed, Gartner analyst Jeff Vogel, in a September 2020 report, “Cloud Storage Management Is Transforming Mainframe Data,” predicts that one-third of mainframe data (typically backup and archive) will reside in the cloud by 2025 — most likely a public cloud — compared to less than 5% at present – a stunning shift.

This change is coming. And it is great news for mainframe operators because it adds new capabilities and powers to what’s already amazing about mainframe. And it opens the doors to new options that have immense potential benefits for enterprises ready to take advantage of them.


Introducing object storage terminology and concepts – and how to leverage cost-effective cloud data management for mainframe 

Object storage is coming to the mainframe.  It’s the optimal platform for demanding backup, archive, DR, and big-data analytics operations, allowing mainframe data centers to leverage scalable, cost-effective cloud infrastructures.

For mainframe personnel, object storage is a new language to speak.  It’s not complex, just a few new buzzwords to learn.  This paper was written to introduce you to object storage, and to assist in learning the relevant terminology.  Each term is compared to familiar mainframe concepts.  Let’s go!

What is Object Storage?

Object storage is a computer data architecture in which data is stored in object form – as compared to DASD, file/NAS storage and block storage.  Object storage is a cost-effective technology that makes data easily accessible for large-scale operations, such as backup, archive, DR, and big-data analytics and BI applications. 

IT departments with mainframes can use object storage to modernize their mainframe ecosystems and reduce dependence on expensive, proprietary hardware, such as tape systems and VTLs.

Basic Terms

Let’s take a look at some basic object storage terminology (and compare it to mainframe lingo); a short code sketch after the list makes the analogy concrete:

  • Objects.  Object storage contains objects, which are also known as blobs.  These are analogous to mainframe data sets.
  • Buckets.  A bucket is a container that hosts zero or more objects.  In the mainframe realm, data sets are hosted on a volume – such as a tape or DASD device.
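
To make the analogy concrete, here is a minimal sketch using boto3, the AWS SDK for Python, against S3-compatible object storage. The bucket and key names are illustrative placeholders, not a recommendation.

```python
import boto3

s3 = boto3.client("s3")

# Create a bucket -- roughly the counterpart of bringing a new volume online.
s3.create_bucket(Bucket="mainframe-archive")

# Write an object -- roughly the counterpart of writing a data set to a volume.
s3.put_object(
    Bucket="mainframe-archive",
    Key="prod/payroll/GL.BACKUP.G0001V00",
    Body=b"...data set contents...",
)

# List the objects in the bucket, much like listing the data sets on a volume.
for obj in s3.list_objects_v2(Bucket="mainframe-archive").get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])
```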

Data Sets vs. Objects – a Closer Look

As with data sets, objects contain both data and some basic metadata describing the object’s properties, such as creation date and object size.  Here is a table with a detailed comparison between data set and object attributes:

NOTE

The object attributes described below are presented as defined in AWS S3 storage systems.

[Table 1: Detailed comparison of data set and object attributes (image not reproduced)]

Volumes vs. Buckets – a Closer Look

Buckets, which are analogous to mainframe volumes, are unlimited in size.  Separate buckets are often deployed for security reasons, and not because of performance limitations.  A bucket can be assigned a life cycle policy that includes automatic tiering, data protection, replication, and automatic at-rest encryption.
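
As one hedged illustration of such a life cycle policy, the following boto3 call tiers objects under an "archive/" prefix to a colder storage class after 30 days and expires them after roughly ten years; the bucket name, prefix, and retention values are assumptions for this sketch.

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="mainframe-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-archive-data",
                "Filter": {"Prefix": "archive/"},
                "Status": "Enabled",
                # Move to a colder, cheaper tier after 30 days.
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                # Delete after roughly ten years of retention.
                "Expiration": {"Days": 3650},
            }
        ]
    },
)
```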

NOTE

The bucket attributes described below are presented as defined in AWS S3 storage systems.

[Table 2: Detailed comparison of volume and bucket attributes (image not reproduced)]

Security Considerations

In the z/OS domain, a SAF user and password are required, as well as the necessary authorization level for the volume and data set.  For example, users with ALTER access to a data set can perform any action – read/write/create/delete.

In object storage, users are defined in the storage system.  Each user is granted access to specific buckets, prefixes, and objects, and separate permissions are defined for each action, for example (an illustrative policy sketch follows the list):

  • PutObject
  • DeleteObject
  • ListBucket
  • DeleteObjectVersion
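
A hedged sketch of such a policy, expressed in the AWS IAM policy format as a Python dictionary, might grant exactly the actions above; the bucket name and prefix are illustrative assumptions.

```python
import json

# Illustrative only: object-level actions apply to keys under a prefix,
# while ListBucket applies to the bucket itself.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:DeleteObject", "s3:DeleteObjectVersion"],
            "Resource": "arn:aws:s3:::mainframe-archive/prod/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::mainframe-archive",
        },
    ],
}
print(json.dumps(policy, indent=2))
```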

In addition, each user can be associated with a programmatic API key and API secret in order to access the bucket and the object storage via a TCP/IP-based API. When accessing data in the cloud, HTTPS is used to encrypt the in-transit stream.  When accessing data on-premises, HTTP can be used to avoid encryption overhead.  If required, the object storage platform can be configured to perform data-at-rest encryption. 
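
A minimal sketch of that programmatic access with boto3 follows; the key, secret, and endpoint URLs are placeholders, and the on-premises endpoint is a hypothetical internal host.

```python
import boto3

# Cloud access: an HTTPS endpoint encrypts the stream in transit.
cloud = boto3.client(
    "s3",
    aws_access_key_id="EXAMPLE_API_KEY",
    aws_secret_access_key="EXAMPLE_API_SECRET",
    endpoint_url="https://s3.us-east-1.amazonaws.com",
)

# On-premises access: plain HTTP avoids encryption overhead inside the data center.
on_prem = boto3.client(
    "s3",
    aws_access_key_id="EXAMPLE_API_KEY",
    aws_secret_access_key="EXAMPLE_API_SECRET",
    endpoint_url="http://objectstore.internal.example:9000",
)
```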

Disaster Recovery Considerations

While traditional mainframe storage platforms such as tape and DASD rely on full storage replication, object storage supports both replication and erasure coding.  Erasure coding provides significant savings in storage space compared to full replication, while still allowing the data to be spread over multiple locations. For example, on AWS, data is automatically spread across a minimum of three Availability Zones, providing multi-site redundancy and disaster recovery from anywhere in the world. Erasure-coded buckets can also be fully replicated to another region, as is practiced with traditional storage. Most object storage platforms support both synchronous and asynchronous replication.
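
For readers who want to see what region-to-region replication looks like in practice, here is a hedged boto3 sketch of AWS S3 cross-region replication; the bucket names and IAM role ARN are assumptions, and both buckets must have versioning enabled.

```python
import boto3

s3 = boto3.client("s3")

# Replication requires versioning on the source (and destination) bucket.
s3.put_bucket_versioning(
    Bucket="mainframe-archive",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="mainframe-archive",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",  # assumed role
        "Rules": [
            {
                "ID": "replicate-to-dr-region",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # replicate everything
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::mainframe-archive-dr"},
            }
        ],
    },
)
```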

Model9 – Connecting Object Storage to the Mainframe

Model9’s Cloud Data Manager for Mainframe is a software-only platform that leverages powerful, scalable cloud-based object storage capabilities for data centers that operate mainframes.

The platform runs on the mainframe’s zIIP processors, providing cost-efficient storage, backup, archive, and recovery functionalities with an easy-to-use interface that requires no object-storage knowledge or skills.


For mainframe shops that need to move data on or off the mainframe, whether to the cloud or to an alternative on-premises destination, FICON, the IBM mainstay for decades, is generally seen as the standard, and with good reason. When it was first introduced in 1998 it was a big step up from its predecessor, ESCON, which had been around since the early 1990s. Comparing the two was like comparing a firehose to a kitchen faucet.

FICON is fast, in part, because it runs over Fibre Channel in an IBM proprietary form defined by the ANSI FC-SB-3 Single-Byte Command Code Sets-3 Mapping Protocol for Fibre Channel. In that schema it is an FC layer 4 protocol.  As a mainframe protocol it is used on IBM Z systems to handle both DASD and tape I/O. It is also supported by other vendors of disk and tape storage and switches designed for the IBM environment.

Over time, IBM has increased speeds and added features such as High Performance FICON without significantly enhancing the disk and tape protocols that traverse it, meaning the limitations those protocols place on data movement remain. For this reason, the popularity and long history of FICON do not make it the answer for every data movement challenge.

Stuck in the Past

One challenge, of particular concern today, is that mainframe secondary storage is still being written to tape via tape protocols, whether real physical tape or virtual tape emulating the real thing. With tape as a central technology, that means dealing with tape mount protocols and tape management software to track where datasets reside on those miles of Mylar. The serial nature of tape and the limitations of the original hardware often required large datasets to span multiple tape images.

Though virtual tapes written to DASD improved the speed of writes and recalls, the underlying protocol is still constrained by tape’s serialized nature.  This means waiting for tape mounts and waiting for I/O cycles to complete before the next data can be written. When reading back, the system must traverse the tape image to find the specific dataset requested. In short, while traditional tape may have its virtues, speed – the 21st century speed of modern storage – is not among them. Even though tape and virtual tape are attached via FICON, the process of writing and recalling data relies on the underlying tape protocol for moving data, making FICON-attached tape less than ideal for many modern use cases.

Faster and Better

But there is an alternative that doesn’t rely on tape or emulate tape because it does not have to. 

Instead, software generates multiple streams of data from a source and pushes data over IBM Open Systems Adapter (OSA) cards using TCP/IP in an efficient and secure manner to an object storage device, either on premises or in the cloud. The Open Systems Adapter functions as a network controller that supports many networking transport protocols, making it a powerful helper for this efficient approach to data movement.  Importantly, because it builds on open networking standards, OSA is developing faster than FICON. For example, with the IBM z15 there is already a 25GbE OSA-Express7S card, while FICON is still at 16Gb with the FICON Express16 card.

While there is a belief common among many mainframe professionals that OSA cards are “not as good as FICON,” that is simply not true when the necessary steps are taken to optimize OSA throughput.

[Diagram: Model9 product architecture]

To achieve better overall performance, the data is captured well before tape handling, thus avoiding the overhead of tape management, tape mounts, etc. Rather than relying on serialized data movement, this approach breaks apart large datasets and sends them across the wire in simultaneous chunks, while also pushing multiple datasets at a time.  Data can be compressed prior to leaving the mainframe and beginning its journey, reducing the amount of data that would otherwise be written. Dataset recalls and restores are also compressed and use multiple streams to ensure quick recovery of data from the cloud. 

Having the ability to write multiple streams further increases throughput and reduces latency issues. In addition, compression on the mainframe side dramatically reduces the amount of data sent over the wire.  If the software is also designed to run on zIIP engines within the mainframe, data discovery and movement, as well as backup and recovery workloads, will consume fewer billable MIPS, and TCP/IP cycles benefit as well.
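
The general pattern is easy to sketch. The following is a hedged illustration, not Model9's actual implementation, of chunking a large extract, compressing each chunk before it leaves the host, and pushing the chunks over TCP/IP in parallel streams to S3-compatible object storage; the chunk size, key naming, and stream count are assumptions.

```python
import gzip
from concurrent.futures import ThreadPoolExecutor

import boto3

CHUNK_SIZE = 16 * 1024 * 1024  # 16 MiB of raw data per chunk (illustrative)

def push_in_parallel(path, bucket, key_prefix, streams=4):
    """Compress and upload a large extract as numbered chunk objects."""
    s3 = boto3.client("s3")

    def send(numbered_chunk):
        seq, chunk = numbered_chunk
        # Compress on the sending side so less data crosses the wire.
        s3.put_object(
            Bucket=bucket,
            Key=f"{key_prefix}.part{seq:05d}.gz",
            Body=gzip.compress(chunk),
        )

    def chunks():
        with open(path, "rb") as f:
            seq = 0
            while chunk := f.read(CHUNK_SIZE):
                yield seq, chunk
                seq += 1

    # Multiple simultaneous streams keep the pipe full; recall simply
    # reverses the process (download, decompress, reassemble).
    with ThreadPoolExecutor(max_workers=streams) as pool:
        list(pool.map(send, chunks()))
```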

This approach delivers mainframe data to cloud storage, including all dataset types and historical data, in a quick and efficient manner. It can also transform mainframe data into standard open formats that can be ingested by BI and analytics tools off the mainframe itself, with a key difference: when data transformation occurs on the cloud side, no mainframe MIPS are used to transform the data. This allows for the quick and easy movement of complete datasets, tables, image copies, etc. to the cloud, and then makes all data available to open applications by transforming the data on the object store.
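
As a hedged illustration of cloud-side transformation, the sketch below runs next to the object store (consuming no mainframe MIPS), decodes a fixed-length EBCDIC extract to UTF-8, and writes a CSV that BI tools can read. The record length, field offsets, and object names are assumptions for the example.

```python
import csv
import io

import boto3

RECORD_LENGTH = 80                      # assumed fixed-length records
FIELDS = [(0, 10), (10, 40), (40, 52)]  # assumed (start, end) column offsets

def transform_object(bucket, raw_key, csv_key):
    """Read a raw EBCDIC extract from object storage and write it back as CSV."""
    s3 = boto3.client("s3")
    raw = s3.get_object(Bucket=bucket, Key=raw_key)["Body"].read()

    out = io.StringIO()
    writer = csv.writer(out)
    for i in range(0, len(raw), RECORD_LENGTH):
        record = raw[i:i + RECORD_LENGTH].decode("cp037")  # EBCDIC code page
        writer.writerow([record[start:end].strip() for start, end in FIELDS])

    s3.put_object(Bucket=bucket, Key=csv_key, Body=out.getvalue().encode("utf-8"))
```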

A modern, software-based approach to data movement means there is no longer a need to go to your mainframe team to update the complex ETL process on the mainframe side. 

To address the problem of hard-to-move mainframe data, this software-based approach provides the ability to readily move mainframe data and, if desired, readily transform it to common open formats. This data transformation is accomplished on the cloud side, after data movement is complete, which means no MF resources are required to transform the data. 

  • Dedicated software quickly discovers (or rediscovers) all data on the mainframe. Even with no prior documentation or insights, Model9 can rapidly assemble and map the data to be moved, expediting both modernization planning and data movement.
  • Policies are defined to move either selected data sets or all data sets automatically, reducing oversight and management requirements dramatically as compared to other data movement methods.
  • For the sake of simplicity, a software approach can be designed to invoke actions via a RESTful API or a management UI, as well as from the mainframe side via a traditional batch job or command line (a hypothetical REST call is sketched after this list).
  • A software approach can also work with targets both on premises and in the cloud.
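
To show what invoking such a policy engine over REST might feel like, here is a purely hypothetical sketch; the endpoint, path, token, and JSON fields are invented for illustration and do not describe a documented Model9 API.

```python
import requests

# Hypothetical call: ask the data-movement service to run an archive policy.
response = requests.post(
    "https://data-mover.example.com/api/v1/policies/nightly-archive/run",
    headers={"Authorization": "Bearer EXAMPLE_TOKEN"},
    json={"datasetMask": "PROD.ARCHIVE.**", "target": "s3://mainframe-archive"},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```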

[Diagram: Moving mainframe data over TCP/IP]

In summary, a wide range of useful features can make data movement with a software-based approach intuitive and easy.  By avoiding older FICON and tape protocols, a software-based approach can push mainframe data over TCP/IP to object storage in a secure and efficient manner, making it the answer to modern mainframe data movement challenges!


At Model9, our position in the industry gives us a unique vantage point on what is happening across both the mainframe and cloud world. Clearly, while everyone expects the movement to the cloud to continue, with Gartner even going so far as to suggest that around one-third of mainframe storage will be in the cloud by 2025, there are some glitches and trends that we find interesting.

The unifying theme is the need to do more with data and do it at the lowest cost possible.  Cloud costs are often extremely low in comparison with traditional on-premises costs, while also offering innumerable advantages in terms of data mobility and cost-effective analytics. The big challenge is data movement “up” from on-premises repositories to those cloud opportunities. 

Rediscovering Mainframe Value

Looking ahead, it seems that mainframe organizations are finally realizing that successful digital transformation requires adopting modern technology solutions, taking some risks, and not just relying on professional services firms to force-feed the process. That means these same organizations will be looking to optimize their investment in the mainframe instead of simply migrating off the platform, integrating mainframe and cloud rather than attempting the risky path of migrating completely away. Moving data and enhancing analytics is a great place to start.  The goal will be first, to leverage cloud analytics and BI tools for their mainframe data and, second, to leverage cloud technologies to simplify daily processes and reduce IT costs.

Speeding Past ETL

Last year’s BMC survey discussed the continued support for mainframes even as cloud storage continues to grow. We have heard tales of woe from individuals at some well-known independent companies that were built around the expectation of a substantial and rapid mainframe-to-cloud transition. The problem they face is that traditional data movement processes (extract, transform, load – ETL) are slow and expensive compared with newer extract, load, and transform (ELT) approaches, contributing to slower than expected movement to the cloud, and perhaps even driving some fear, uncertainty, and doubt among mainframe operators about the path ahead.  With Gartner pointing to the compelling case for cloud storage, we expect more mainframe shops to look beyond the “same old” movement strategies in the year ahead and try something new.  Certainly, there is no longer any question about the capabilities of these options; again, Gartner has pointed this out in their report.

A Superior Data Lake 

Another thing we definitely see from customers is a move to create and/or grow bigger and better data lakes.  Data lakes are almost always synonymous with cloud storage.  The economics are compelling and the analytic options, equally appealing.

Analysts are predicting as much as a 29 percent compound annual growth rate (CAGR) for data lake implementation over the next five years.  We also see that organizations want all their data in the data lake, so they can run comprehensive analytics with AI, ML, and every other tool of the trade. That means that, if at all possible, they want to include mainframe data, which is often critical for understanding customers and the business as a whole. And that means wrestling with the challenges of data movement in a more effective way than in the past. It is almost like getting organizations to make the leap from “sneaker net” (the old days when companies transferred big files by moving floppy disks or portable drives) to actual high-bandwidth networks.  The leap in data movement and transformation today is equally dramatic.

The Competitive Edge Provided by Data

As more and more companies do succeed in sharing their mainframe data within a data lake, the superiority of outcomes in terms of analytic insight is likely to create a real competitive advantage that will force other companies to take the same path. It’s a sort of corollary to the data gravity theory. In this case, when others move their data, you may be forced by competitive pressure to do the same thing. 

Without that critical spectrum of mainframe information, a data lake can easily become what some are terming a Data Swamp — something that consumes resources but provides little real value.  In the year ahead, this view of data gravity should resonate more with decision-makers and will start to inform their strategic decisions.

Multicloud Becomes Mainstream  

In the early days of cloud, as competition began to ramp up and as more and more companies migrated work to the cloud, the use of multi-cloud became controversial. Pundits worried on the one hand about vendor lock-in and on the other about complexity. It turns out, customers tell us they often need multicloud.  Just as on-premises operations often depend on multiple hardware and software vendors, different clouds have different strengths and customers seem to be simply picking the best-of-breed for a specific purpose. Fortunately, data movement between mainframe and cloud is getting easier!

That leads us to think multicloud will no longer be treated as exceptional or controversial and instead, organizations will focus on making the most of it.

No matter how the Covid crisis and the global economy evolve over the next year, the cloud-mainframe relationship will be dynamic – and interesting to watch. The “new oil” of data will continue to influence us all and will put a premium on getting data out of static storage and into circulation where it can be monetized.  We wish our customers the best as they navigate these waters!

Everyone is under pressure to modernize their mainframe environment – keeping all the mission-critical benefits without being so tied to a crushing cost structure and a style of computing that often discourages the agility and creativity enterprises badly need. 

Several general traits of cloud can deliver attributes to a mainframe environment that are increasingly demanded and very difficult to achieve in any other way. These are:

Elasticity

Leading cloud providers have data processing assets that dwarf anything available to any other kind of organization. So, as a service, they can provide capacity and/or specific functionality that is effectively unlimited in scale but for which, roughly speaking, customers pay on an as-needed basis. For a mainframe organization this can be extremely helpful for dealing with periodic demand spikes such as the annual holiday sales period. They can also support sudden and substantial shifts in a business model, such as some of those that have emerged during the COVID pandemic.

Resilience

The same enormous scale of the cloud providers that delivers elasticity also delivers resilience. Enormous compute and storage resources, in multiple locations, and vast data pipes guarantee data survivability.  Cloud outages can happen, but the massive redundancy makes data loss or a complete outage highly unlikely.

OpEx model

The ‘pay only for what you need’ approach of cloud means that cloud expenses are generally tracked as operating expenses rather than capital expenditures and, in that sense, are usually much easier to fund.  If properly managed, cloud services are usually as cost-effective as on-premises infrastructure, and sometimes much more so, though of course complex questions of how costs are logged factor into this.  Unlike the mainframe model, there is no single monthly peak 4-hour interval that sets the pricing for the whole month. Also, there is no need to order storage boxes, compute chassis, and other infrastructure components, track shipments and match the bill of materials, or rack and stack the servers, as huge infrastructure is available at the click of a button.

Finally, cloud represents a cornucopia of potential solutions to problems you may be facing, with low compute and storage costs, a wide range of infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS) options – including powerful analytic capabilities. 

Experiment First

Fortunately, for those interested in exploring cloud options for mainframe environments, there are many paths forward and no need to make “bet the business” investments.  On the contrary, cloud options are typically modular and granular, meaning you can choose many routes to the functionality you want while starting small and expanding when it makes sense.

Areas most often targeted for cloud experimentation include:

  • Analytics – Mainframe environments have an abundance of data but can’t readily provide many of the most-demanded BI and analytics services. Meanwhile, all across the business, adoption of cloud-based analytics has been growing, but without direct access to mainframe data it has not reached its full potential. Data locked in the mainframe has simply not been accessible. Making mainframe data cloud-accessible is a risk-free first step for modernization that can quickly and easily multiply the options for leveraging key data, delivering rapid and meaningful rewards in the form of scalable state-of-the-art analytics.
  • Backup – Mainframe environments know how to do backup but they often face difficult tradeoffs when mainframe resources are needed for so many critical tasks. Backup often gets relegated to narrow windows of time. Factors such as reliance on tape, or even virtual tape, can also make it even more difficult to achieve needed results. In contrast, a cloud-based backup, whether for particular applications or data or even for all applications and data, is one of the easiest use cases to get started with. Cloud-based backups can eliminate slow and bulky tape-type architecture. As a backup medium, cloud is fast and cost-effective, and comparatively easy to implement.
  • Disaster recovery (DR) – The tools and techniques for disaster recovery vary depending on the needs of an enterprise and the scale of its budget, but often include a secondary site. Of course, setting up a dedicated duplicate mainframe disaster recovery site comes with a high total cost of ownership (TCO). A second, slightly more affordable option is a business continuity colocation facility, which may be shared among multiple companies and made available to one of them at a time of need.  Emerging as a viable third option is a cloud-based BCDR capability that provides essentially the same capabilities as a secondary site at a much lower cost. Predefined service level agreements for a cloud “facility” guarantee a quick recovery, saving your company both time and money.
  • Archive – Again, existing mainframe operations often rely on tape to store infrequently accessed data, typically outside the purview of regular backup activities.  Sometimes this is just a matter of retaining longitudinal corporate data, but heavily regulated sectors such as financial services and healthcare are required to retain data for long durations of 10 years or more. As these collections of static data continue to grow, keeping them in “prime real estate” in the data center becomes less and less appealing.  At the same time, few alternatives are appealing because they often involve transporting physical media.  The cloud option, of course, is a classic “low-hanging fruit” choice that can eliminate space and equipment requirements on-premises and readily move any amount of data to low-cost and easy-to-access cloud storage (a small sketch of this follows the list).
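
As a small hedged sketch of the archive case, the call below writes a cold extract straight into a long-retention, lowest-cost storage class; the file, bucket, and key names are illustrative.

```python
import boto3

s3 = boto3.client("s3")
with open("extracts/PROD.AUDIT.Y2012.dump", "rb") as f:
    s3.put_object(
        Bucket="mainframe-archive",
        Key="regulatory/PROD.AUDIT.Y2012.dump",
        Body=f,
        StorageClass="DEEP_ARCHIVE",  # long-retention, lowest-cost tier
    )
```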

A Painless Path for Mainframe Administrators

If an administrator of a cloud-based data center was suddenly told they needed to migrate to a mainframe environment, their first reaction would probably be panic! And with good reason. Mainframe is a complex world that requires layers of expertise. On the other hand, if a mainframe administrator chooses to experiment in the cloud or even begin to move data or functions into the cloud, the transition is likely to be smoother. That is not to say that learning isn’t required for the cloud but, in general, cloud practices are oriented toward a more modern, self-service world. Indeed, cloud growth has been driven in part by ease of use.

Odds are good, someone in your organization has had exposure to cloud, but courses and self-study options abound. Above all, cloud is typically oriented toward learn-by-doing, with free or affordable on-ramps that let individuals and organizations gain experience and skills at low cost.

In other words, in short order, a mainframe shop can also develop cloud competency. And, for the 2020s, that’s likely to be a very good investment of time and energy.


The COVID-19 pandemic has presented many challenges for mainframe-based organizations, in particular the need to conduct on-site hardware and software maintenance.  That has especially led to consideration of cost-effective cloud data management options as an alternative to legacy mainframe storage platforms.

Evidence is abundant.  In recent months, businesses have learned that cloud growth is largely pandemic-proof and may actually have accelerated. According to Synergy Research Group, Q1 spend on cloud infrastructure services reached $29 billion, up 37% from the first quarter of 2019.  Furthermore, according to Synergy, anecdotal evidence points to some COVID-19-related market growth as additional enterprise workloads appear to have been pushed onto public clouds.  And, according to a recent article in Economic Times, IBM Chief Executive Officer Arvind Krishna has indicated that the pandemic has also heightened interest in hybrid cloud, with transformations that were planned to last for years now being compressed into months.

For organizations built around mainframe technology, these trends underscore an opportunity that has become an urgent need during the pandemic, namely the question of how to reduce or eliminate dependence on on-premises storage assets such as physical tapes – which are still the main reason for needing personnel to access on-prem equipment.

Although essential for many daily operations as well as for routine backup or disaster recovery, these physical assets depend too heavily on having access to a facility and having trained personnel available on site.

Furthermore, they can be expensive to acquire, maintain, and operate.  Additionally, on-prem hardware depends on other physical infrastructure that requires on-site maintenance such as air conditioning and electrical power systems. That was a tolerable situation in the past, but the reality of the pandemic, with lockdowns, transportation problems and the potential health threats to staff is leading mainframe operators to consider modern alternatives to many traditional on-prem mainframe storage options.

Cloud data manager for mainframe

One industry-leading example is the Model9 Cloud Data Manager for Mainframe, which securely delivers mainframe data to any cloud or commodity storage platform, eliminating the dependency on physical and virtual tape alike. It leverages reliable and cost-effective cloud storage for mainframe backup, archive, recovery, and space management purposes.

Needless to say, in the event of emergencies ranging from natural disasters to global pandemics, having mainframe data secured in the cloud means organizations can count on continued smooth operations with few if any people required on site.  And, on an ongoing basis, Model9 also helps unlock mainframe data by transforming it to universal formats that can be used by advanced cloud-based analytics tools, providing more options for making use of corporate data.

It is a future-proof approach that enhances mainframe operations, adds resiliency, helps control costs, and provides a path to better leverage corporate information in a more flexible and cost-effective manner.

The recently posted Computer Weekly article, “Mainframe storage: Three players in a market that’s here to stay”, did a good job of describing the central players in mainframe disk storage but neglected to mention other types of mainframe storage solutions such as tapes and cloud data management.

In particular, the article omitted mention of one of the biggest opportunities for mainframe storage modernization and cost reduction, namely leveraging the cloud to reduce the footprint and cost of the petabytes of data still locked in various kinds of on-premises tape storage.  Model9 currently offers the key to this dilemma by eliminating the dependency on FICON connectivity for mainframe secondary storage.  This means, specifically, that mainframe-based organizations can finally gain real access to reliable and cost-effective on-premises and cloud storage from Cohesity, NetApp, Amazon Web Services, Microsoft Azure, Google Cloud Platform, and others that until now could not be considered due to the proprietary nature of traditional mainframe storage.  And, while keeping the mainframe as the core system that powers transactions, organizations can make its data accessible for analytics, BI, and any other cloud application.

Surely, this is major news for such a key part of the computing market that has hitherto been essentially monopolized by the three players author Antony Adshead discussed at length. 

Mainframe professionals know that new technologies can help them achieve even more; they deserve guidance with regard to the wide options opening up for them.


New Jersey Governor Phil Murphy’s open call for COBOL programmers because of system failures in supporting unemployment benefits processing and distribution is completely missing the point.

The fact that the mainframe system could not handle the workload and increase in demand is not the fault of the app and it’s certainly not an issue of the app’s programming language. It is a matter of upgrading the mainframe’s infrastructure to sustain the increased workload.

Governments and institutions have allowed their systems to stagnate, neglecting to invest in agile, newer technologies with greater scalability to keep up with increased workloads. In fact, until now the system was working well, with most believing that if it ain’t broke, there is no need to “fix” it.

The challenges lie with a lack of maintenance and modernization of the mainframe’s infrastructure. If there’s any type of skill shortage, it is that of mainframe system programmers whose mainframe expertise is unique due to the proprietary nature of the system. Any dependency on a unique, proprietary set of skills is a risk for any organization and, therefore, the resolution lies in opening up the system to cloud-native, modern architectures.

To summarize, had organizations invested in integrating cloud technology with their mainframe infrastructure, they would have been benefiting from quick scaling and fast-paced app development on the cloud side to process their mainframe data.

In today’s crisis, the cloud serves to decrease the dependency on on-prem hardware and infrastructure, and offloads work from the mainframe infrastructure to increase capacity.
