Monday, January 30, 2012

Who Cares About Operating Systems?

A mainframe is more than just a computing device: it is a business computing platform, and the difference is its operating systems. They are highly optimized to the mainframe hardware and context, and they embody over half a century of requirements from the biggest users of business computing.

Obviously, there are many computing platforms other than the mainframe out there. Terms like "wintel" describe a generic PC with an Intel (or similar) processor running the Windows operating system. An Apple with some version of Mac OS is another example. A given hardware configuration may have multiple different operating system options (e.g. Windows and Linux, among others, for PCs), and a given operating system may have different versions that run on multiple different hardware platforms (e.g. Linux, which has versions for Intel-type PCs, but also for other hardware platforms including the mainframe).

The history of IT is rife with "holy wars" about operating systems, including what level of functionality is sufficient, whether they should be proprietary or open, and if they should be optimized to a specific hardware platform or generically available to many. Before Linux made it big and after the mainframe had become so taken-for-granted that it was already generally ignored and written off as extinct, there were big "holy wars" between supporters of Windows and of UNIX. A Dilbert comic from that era embodies this well: http://dilbert.com/strips/comic/1995-06-24/ depicts a "condescending UNIX computer user" telling Wally, likely a Windows user, "Here's a nickel, kid. Get yourself a better computer."

Of course, throughout that era, debates about such operating systems were able to proceed with more energy than urgency since the critically important work was already being handled by mainframes.

The four mainframe operating systems that have remained available over the past few decades are known today as z/TPF, z/VSE, z/VM and z/OS.

Which leads to the question: what is it that these operating systems do?

Put simply, they provide a functioning context for all the applications that run on the mainframe, letting those applications focus on what they do best while the operating systems handle everything from talking to the hardware to enabling many, many different tasks to run concurrently and safely with the best possible performance and availability.

These operating systems were written specifically for IBM's mainframe hardware and then evolved concurrently with it, responding to the demands of the biggest users on earth; the result is a platform of unparalleled performance, capacity and reliability.

In addition to the four "z/" operating systems, there is also Linux available for the mainframe (though it generally runs as a "guest" under z/VM) and an interface to z/OS known as UNIX System Services (USS) or z/OS UNIX. Because each of these relies on one of the previously-mentioned operating systems to interact with the hardware of the mainframe, I'll save specific discussion of them for future blog entries, and focus this one on the aforementioned four.

Over the years, the names of these operating systems have changed. The original operating system announced for IBM's System/360 line of computers was to be known as OS/360, but the learning curve that came with developing such a complex operating system led to significant delays in delivery (as discussed in Fred Brooks' great book, "The Mythical Man-Month: Essays on Software Engineering"). So, as a stop-gap, IBM announced the scaled-down DOS/360 (for "Disk Operating System/360" - not to be confused with any of the other operating systems also known as DOS). OS/360 was eventually delivered, and it grew and changed and went through multiple names, becoming what we know today as z/OS. DOS/360 went through many twists and turns to become z/VSE.

I like to coin epigrams (just ask my kids, who have a compilation of what they call "dad-isms"), and one of them is, "The temporary outlasts the permanent." This refers to the fact that we often adopt short-term measures without the detailed planning and perfectionism we would apply to something intended to last. Then these short-term measures, free from the obligation of perfection, grow, adapt and gain a life of their own. Meanwhile, the more carefully-planned results may see the world pass them by if we don't keep applying the same level of scrupulousness to their ongoing viability as we applied to their original development.

Interestingly enough, z/VSE and z/OS represent the two sides of viability inherent in this: the stop-gap that adapted and survived and the scrupulously-created high-quality result that continued to be maintained with great effort and attention.

Now, don't get me wrong: today's z/VSE is indeed a high-quality operating system, and over time it has accrued many of the advantages originally developed for OS/360 and its successors. And, for that matter, it's always been "good enough" - so much so that IBM's efforts to get its users to convert to OS/360's successors have never reached a complete conclusion.

In fact, that's one of the great stories of the mainframe: how IBM has tried to get the users of the "good enough" operating system to convert to the "top quality" operating system, and how those users have responded.

While my focus for this blog entry isn't to give an in-depth history of the mainframe context (I'm working on a book on that topic with my friend and colleague Dr. Stephen Guendert - stay tuned), it's worth following this thread a little way just to see a couple of noteworthy outcomes.

The first of these is z/VM. 1972 marked the arrival of a precursor to z/VM: VM/370. While this was intended to host multiple users in a time-sharing context, two aspects of it are very relevant to this discussion: 1) it was the first officially available virtual machine operating system, allowing multiple concurrent environments, including full mainframe operating system instances, to behave as though each had the entire mainframe to itself; and 2) it was employed as part of IBM's ongoing strategy to get the users of DOS/360's descendants to convert to OS/360's descendants by letting them run both operating systems concurrently on the same machine, thus enabling a smooth and gradual cutover.

The other interesting thread is the emergence of a range of non-IBM operating systems that were generally enhanced alternatives to the successors of DOS/360. One of the most well-known of these was MVT/VSE from Software Pursuits, which my friend and colleague Tim Gregerson was closely involved with. He has shared many insights with me about this turbo-charged alternative to IBM's light mainframe OS, and I look forward to including some of them in the mainframe history book I mentioned above.

Lastly, let me give a tip-of-the-hat to z/TPF (or z/Transaction Processing Facility). Descended from the Airlines Control Program (ACP - developed in the mid-1960s), it is a highly-optimized environment for serving up intensive, real-time services such as airline reservations at the greatest of volumes. While it is the least commonly-used of the big four mainframe operating systems, for those who use it, nothing else comes close to the nature and scale of performance it offers.

Because all four of these operating systems run on the same hardware platform, they are able to benefit from significant cross-pollination. That means that the RAS (Reliability, Availability, Serviceability - often extended to include Scalability and Security) features of one can be repurposed or used to model similar aspects in the others.

When I say "the same hardware platform" it should not be construed to indicate that there's only one kind of IBM mainframe, of course. Rather, since the beginning, the System/360 and its descendants have provided an extremely wide range of capacities and performance characteristics. But they're all designed to be able to run the same software and operating systems, providing the functional equivalent of an extremely open platform.

However, operating systems are just one more layer of what makes the mainframe great, and the next layer is the one I know best. Next week: all the software between the operating systems and the applications, part one!

Monday, January 23, 2012

What's so Special About Mainframe Hardware?

If a PC is like an amoeba then a mainframe is like a vertebrate.

At this point, I'm sure any biologist could be justifiably displeased with such an analogy, given the complexity of the amoeba and the fact that we continue to discover new things about this seemingly basic life form.

However, if you will indulge me, I think this is a good way to introduce the hardware of the mainframe and draw a distinction between it and the computing hardware that most of us are used to.

My first introduction to what might be called a PC was the Apple ][+ computer: a consumer-focused, self-contained computer designed to bring all of the most basic essentials together. Conceptually, this is very much like a single-celled organism, with all of its functionality bound up in a single place. And, while things such as the screen and other peripherals were external to the computer case, everything was about a single user at a time, served by a single processor and operating system instance.

Next, IBM introduced their PC, which became the genus of the wintel species that has come to dominate the consumer and small-business computing world. Like the Apple, the IBM PC was a stripped-down, affordable, consumer-oriented machine that had everything in, or attached to, a single box.

While processor speed, memory and storage capacity, and complexity of operating systems, applications and interfaces have grown over the years, the PC has continued to be the conceptual equivalent of a single-celled organism - everything self-contained in a single place serving a single user or purpose at a time, all directed by the CPU.

To which I hope you are responding, "But how about networking - home, corporate, Internet, etc. - and multitasking and virtualization?"

The first half of this question, for me, can be answered by saying that the network can be seen as just another interface, like the keyboard, screen, mouse, joystick, and disk drive: it's an outside source of input and destination of output for the PC. In our analogy, it's part of what might be called the environment.

Multitasking and virtualization are, of course, primarily software issues, so I don't want to stretch the analogy too far. Briefly, though, regarding multitasking, let's not forget that there are many processes happening concurrently inside an amoeba as well - movement, digestion, reproduction, and so on - but they're all focused on the behavior of that single cell. I should also mention that PCs are at their most functional when they have no more than one high-demand task running at a time.

Virtualization also doesn't change the essential nature of the PC - it just allows for one or more instances of the PC environment to run generically on one or more PC hardware configurations.

This is where the difference between PCs and mainframes begins to come into focus: no matter how many PCs I have, each one is, in its essence, a generic PC. I can have a million PCs connected together, but that doesn't make them into even a single mainframe any more than a million amoebae together spontaneously become a vertebrate.

The difference is a highly-ordered, functional structure that allows different parts of the mainframe to have specialized tasks that work together, much like a vertebrate with a brain, internal systems and organs, a skeleton, and limbs. As with the PC, software is an important part of how these operate. Unlike the PC, however, the mainframe hardware was designed from the very beginning according to this structure, which enabled even the earliest mainframe processors to be effective, despite having very small capacities.

Unlike a PC, then, the mainframe's brain or central processors are able to focus their power and capacity (which continue to be leading-edge) primarily on doing the work that pays for them.

Like bodily systems and internal organs, the mainframe has controllers to deal with the vast amount of data that passes through it. These "sub-computers" free the central processors from having to handle the other activities of the mainframe, and they are not simply mini-mainframes - they're functionally designed to perform support roles other than application processing.

The limbs, then, would be analogous to the actual devices attached at arm's length to the mainframe via the controllers: vast amounts of disk and tape storage, high-speed printers, and network-connected users and other computers.

There are other hardware-based aspects that are more analogous to the immune system, enabling IBM's statement of integrity, to the effect that unauthorized application programs, subsystems, and users are prevented from bypassing mainframe security. This is also a significant differentiator, because it means that mainframes are secure all the way down to the bare metal, as distinct from PCs, which were designed for simplicity and consumer pricing, with security as an afterthought at best.

This structure, with its skeletal consistency and solidity, is essential to enabling the mainframe to reliably process vast amounts of data without spending all of its CPU time just inputting and outputting data.

Of course, as part of an environment designed to be highly robust, all of this hardware is of the highest quality, not the mere commodity parts one is prone to get with PCs. While that means it costs more, it pays for itself every nanosecond of every day of every year, with a mean time between failures ("MTBF") measured in decades. And it means that the next layers on top of it can be optimized for this most reliable of environments, to bring even greater benefit from it.
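
To put numbers behind that claim, here's a back-of-the-envelope calculation - a minimal sketch in Python, where the MTBF and repair-time figures are purely illustrative assumptions on my part, not published specifications:

    # Steady-state availability from MTBF and MTTR (mean time to repair):
    #   availability = MTBF / (MTBF + MTTR)
    HOURS_PER_YEAR = 24 * 365

    mtbf_hours = 30 * HOURS_PER_YEAR  # assumption: an MTBF of 30 years
    mttr_hours = 4                    # assumption: 4 hours to repair a failure

    availability = mtbf_hours / (mtbf_hours + mttr_hours)
    downtime_minutes_per_year = (1 - availability) * HOURS_PER_YEAR * 60

    print(f"availability: {availability:.4%}")                    # ~99.9985%
    print(f"downtime: {downtime_minutes_per_year:.1f} min/year")  # ~8.0 min/year

Even with a generous repair window, a failure rate measured in decades keeps annual downtime in the single-digit minutes - which is why the hardware premium pays for itself.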

As I think of my own species of vertebrate, and all the things we've built using our bodies, such as language, culture, art and science, I can appreciate the value of being able to optimize for a highly-functional, reliable hardware platform.

Of course, the world needs all different species, including such inveterate invertebrates as the amoeba. Likewise, the world of IT needs single-purpose-oriented computers that allow for the conceptual simplicity of having a platform all to yourself. Just as there are more amoebae in the world than there are vertebrates, it makes sense that there should be more PCs than mainframes.

But when only a mainframe will do, it's nice to know that this highly-reliable, robustly-structured, leading-edge, proven hardware platform continues to be the backbone of the world of business IT.

Now, speaking of quality and reliability, I'd like to take a moment to express appreciation for the comment on last week's blog entry from Jim Michael, a friend and valued mentor of mine who continues to offer me greatly-appreciated support and guidance, and was a significant factor in my increasing involvement with SHARE.

Next week, I intend to talk about the next layer up from the hardware: the operating systems that make the mainframe great.

Monday, January 16, 2012

What Makes a Mainframe?

Oh, there are Apples and Windows and things that end in "x" and even a few last instances of other platforms out there that haven't quite finished going away. There are some new operating systems for mobile devices as well. One thing they all have in common is that they aren't sufficient to constitute a mainframe.

You could run them on a mainframe if you wanted, of course - it's already happening with Linux on z. But you'd have to do it on top of what already works - in the case of Linux that would mean running it on z/VM.

But, whatever you run on the mainframe, you're going to be using one of four mainframe operating systems underneath it to make it work. Those are:

  1. z/OS: this is IBM's premier mainframe operating system, and the one that supports most of the business applications and data that make the mainframe a mainstay of the world economy. It is descended from a 47-year-long line of operating systems, beginning with OS/360, all of which were written specifically for IBM's System/360 hardware and its descendants.
  2. z/VM: the original "virtual machine" operating system, first officially made available in 1972, this operating system allows many (or even very, very many) operating system instances to run at the same time on the same mainframe, each as if it had an entire mainframe to itself. These instances can currently be an arbitrary selection of one or more of the four mainframe operating systems in this list and/or Linux.
  3. z/VSE: the frugal mainframe operating system, introduced as a stop-gap in the early days, and always lighter in functionality and pricing than z/OS and its predecessors. Those who still use it are dedicated to it, and they have insisted that IBM continue to support it.
  4. z/TPF: a specialized operating system for such applications as airline reservations, and only used by a select few organizations.

You'll note right away that "z/" is at the beginning of the current names of each of these operating systems. That's in reference to the current mainframe hardware, System z®. IBM asserts that the "z" stands for "zero downtime" as distinct from the "i," "p" and "x" for the other hardware platforms they offer. The "i" and "p" now run on IBM's Power architecture while the "x" uses Intel x86 processors.

That's important, because you could run any of the above four mainframe operating systems on a non-mainframe hardware platform using emulation (one computer pretending to be another) but it wouldn't constitute a true mainframe. The strengths of IBM's mainframe hardware are essential to creating that optimized combination of factors resulting in today's mainframe.

On top of the hardware and operating systems are the two remaining software layers that make up today's mainframe: utility or middleware software, which manages the mainframe and makes it run better, and applications - many of which are tried-and-proven over decades.

But wait: there's more! Today's mainframe is not an island. In fact, it backstops many of the non-mainframe activities in the organizations where it runs. By relying on the mainframe for key data and processing, applications running on every other platform - even on the zBX portion of IBM's zEnterprise System - can let the mainframe take care of the details such as security, reliability, availability, massive volume, and being a single source for important data. So the mainframe is an essential part of an ecosystem that supports the entire world.

None of the above is sufficient to make up a mainframe, however. If you took someone with no mainframe experience or culture and gave them all the above hardware and software and told them to run it, they'd almost certainly begin by failing. That's because the beating heart of what makes the mainframe work is the people and their culture.

Now, clearly, the above is barely an appetizer about what makes a mainframe. So, the next Mainframe Analytics blog entries over the coming weeks will dig into each of these areas in greater depth in order to lay the groundwork for future blog entries about other things that make the mainframe work today and tomorrow, including the mainframe's constantly-improving business value, and its brilliant future.

Monday, January 9, 2012

Why Do Mainframes Matter?

As I sit here typing this blog entry on my PC, I think of the uncountably many ways I interact with computers every day, and I realize that not one of those makes me think, "I'm dealing with a mainframe right now."

That's actually intentional. Back in the 1980s, IBM came out with a strategy known as "Systems Application Architecture" (SAA), which was intended to bring together the various platforms of computing in a way that would allow workloads to run on the most suitable computer, while serving up interfaces on other suitable computers, without users knowing or caring that they were dealing with more than one computer.

That soon came to be seen as part of the larger "Client/Server" model, which allowed users to have "client computers" that only did tasks that were best handled locally, such as interfacing. A larger, remote server would handle the more intense processing.

This reminds me of when, in 1998, I asked clients of mine in Québec what the French word for mainframe was, and they said, "serveur central," which translates literally as, "central server."

In any case, we no longer need to know or care what the computer "back there" doing the main processing is. Unless, that is, we're responsible for it.

Today, with the Internet and Cloud Computing, we are more abstracted than ever from the platforms that do the core processing we need. We just sit at our browser or smartphone and talk to a graphical interface. And we trust that it will work.

And when it works, we barely notice. It's like that old saying, "Housework is something that nobody notices unless you don't do it."

Given that perspective, you would think that the computer that works best would be the most invisible - and it is. Whether you're submitting your taxes online, booking flights, or even withdrawing cash from a bank machine, you're dealing with a mainframe, and it's behaving with such reliability that you don't have to know or care that it's there.

But here's the catch: for the most serious, identity-handling, secure and data-intensive of applications, it makes a very big difference to have a computer that has proven over nearly five decades that it can be trusted to handle these requirements so well that you don't have to care.

However, in our rush to welcome the client side of computing over the last three decades, we have participated in this abstraction of the main servers by distancing ourselves and our strategies from the computers we rely on to make everything else work. We've used words like "legacy" as if they were put-downs rather than recognizing the value they imply. And we've focused much of our new development of people and applications on these distributed platforms.

The good news is that our faith was not misplaced in taking the mainframe for granted, and even treating it with the contempt that comes from the kind of familiarity you only have with someone or something you trust never to let you down, no matter how you treat it.

There is a problem, however, that needs to be addressed - or perhaps it can better be seen as an opportunity: decision makers whose decisions impact the maintenance and usage of the mainframe need a more balanced understanding of the fact that they've bet their businesses - and the economy - on the mainframe, and of how much better off they can be if they take that into account.

An illustration of this reliance - though it occurs very rarely - is the occasional anecdotal story of an organization that has had its mainframe accidentally powered off. This takes an amazing confluence of events, usually including at least one action by someone authorized to take it who should really know better, plus the failure of entire aspects of corporate culture built to prevent such things - change control, for example.

When it happens, the results are generally the complete failure of every important corporate application - even those that were apparently entirely built on non-mainframe platforms. It's like a corporate heart attack. Why? Because all those new applications are generally relying on important data and processing from "somewhere trusted" on the corporate intranet. And at the bottom of that entire upside-down pyramid of trusted data and processing is the nearly-invisible mainframe, running the up-to-date version of the most high-performing, secure, reliable, scalable legacy applications that keep the organization functioning.

Did I say scalable? Yes, and adaptable too. Sadly, many architect-level and above people in modern IT seem to think of the mainframe as inflexible and limited. In fact, the opposite is true. No other platform has such a high ceiling for one or (far more often) thousands of applications to take up as much capacity as needed on a moment-by-moment adjustable basis.

I'll be opening up the substance of this topic much more over coming blog entries. I intend to continue next week with a high-level overview of what makes the mainframe so functional and reliable.

Meanwhile, I'd also like to recognize and thank those who have already given feedback to this blog. Since launching it last Monday we've had hundreds of views from every continent but Africa and Antarctica, and I hope to see them join the audience too.

Willie, thank you for your positive comments, and for the many ways you support and encourage the mainframe community, including with your blog and your support of zNextGen.

Pandoria13, I certainly agree that there were some very important innovations, such as memory speed, on platforms that qualified as mainframes and were precursors to the System/360. However, since anything written for them had to be rewritten before it could be used on the System/360, while anything written for the System/360 could still be in use today, I've chosen to focus on that "line that survived" of mainframes.

And to other colleagues I've discussed this with: while it is sometimes appropriate to move off of an out-of-date application that happens to run on the mainframe to a newer one that might not, it is also worth weighing performance, reliability, scalability, security and other architectural considerations when looking for an ideal platform to move a "legacy" application to - even if the legacy platform is Unix or Windows, and the ideal destination turns out to be a mainframe.

Monday, January 2, 2012

What is a Mainframe?

On April 7, 1964, IBM announced their System/360 mainframe. It wasn't the first mainframe, but it was the beginning of a new era, and in some ways it did become the last mainframe.

Today, IBM's zEnterprise (and specifically either the z196 or the z114 mainframe component of it) is light years beyond its original progenitor, but you can still run programs on it that were written for System/360 mainframes - in fact, there likely are such programs still in use.

Modern IT is also light years beyond those early days of computing, with graphics, the Internet, portable computing, and many other innovations that have given the illusion that we had moved beyond the mainframe.

What actually happened was that the mainframe became the bedrock foundation for large-scale business computing for governments, financial institutions, large manufacturers and other key organizations that keep the world economy running. In fact, if you were to turn off every mainframe right now, everything I know tells me that the world economy would be grinding to a halt before the day was over.

This is not a problem, because today's mainframes are so reliable that we've been taking them for granted for decades, without even realizing that they're handling the most critical business data processing in the world, every second of every day, allowing other platforms to handle lower-criticality activities. And as long as electricity works, the mainframe looks to keep humming along, almost invisibly keeping everything else flowing smoothly as well.

But there is a problem: perception. With the exception of those who deal directly with the mainframe, it seems almost no one is aware that modern business computing is built on the approximately 10,000 mainframes and 4,000 organizations that use them to keep things running. That can lead to some poor decision making, and skewed perceptions at the highest levels - as well as everywhere else.

I intend this blog to be one source of clarity about the mainframe, its history, its current roles, and its future. I also intend to offer insights and ideas to improve the already-substantial value of mainframe computing. And, while I will write this blog as a mainframer myself, with an audience that includes other mainframers, it is my intention that it be of value and interest to everyone who is affected by mainframes - i.e. everyone.

So, let's begin with: What is a Mainframe?

Some people have claimed the mainframe has become extinct. That's clearly inaccurate.

Some have claimed that any large computer is a mainframe. I do not endorse that definition.

My definition of today's mainframe is:

A large, highly-functional business computer, descended from IBM's System/360, which is capable of running thousands (or more) of concurrent applications, serving millions (or more) of concurrent users at greater than 99% busy with 99.999% or better uptime and no degradation in service. It continues to be the computer that handles much of the world's most critical business data processing, with so much reliability, availability and security that it may seem invisible, but it is actually essential.
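
To make that last figure concrete, here is a quick calculation - a minimal sketch in Python - showing why 99.999% uptime is often called "five nines" availability:

    # Annual downtime implied by a given uptime percentage.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

    for uptime in (0.99, 0.999, 0.9999, 0.99999):
        downtime_minutes = (1 - uptime) * MINUTES_PER_YEAR
        print(f"{uptime:.3%} uptime -> {downtime_minutes:,.1f} minutes of downtime per year")

    # 99.000% uptime -> 5,256.0 minutes (about 3.7 days) per year
    # 99.999% uptime -> 5.3 minutes per year

In other words, "99.999% or better uptime" allows barely five minutes of unplanned downtime in an entire year - a budget that many of the platforms we use every day would exhaust in a single reboot.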