Hardware - S/360 and beyond

     The design of IBM mainframes has evolved over the last 40 years as user requirements have changed. The same basic functionality is still there and has expanded, but it is delivered in different ways to improve speed and availability.

     The CPUs started with a large, flexible and powerful Instruction Set, far beyond that of any other computer then available, and that Instruction Set has grown considerably over the years. While there are competitive mainframes from other companies, they duplicate or emulate IBM mainframes. Neither Intel nor Sun nor any non-mainframe computer even approaches the sophistication and power of the mainframe Instruction Set.

     Starting with single-processor machines, IBM moved to multi-processor machines, and today a customer can install a CPU with multiple processors, turn them on and off as requirements vary, and pay only for the processing power actually in use. It also means that if requirements grow, the customer may well not have to upgrade to a new computer system; they need only activate (and pay for) hardware that is already installed.

     One of the capabilities of the hardware is the ability to segment it into multiple logical computers called LPARs (Logical PARtitions) and spread the available CPU cycles and memory among them.
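     To make the idea concrete, here is a toy Python sketch of the scheme. The partition names, weights and sizes are made up, and real LPARs are defined through PR/SM configuration rather than code, but the essence is the same: each LPAR gets a weight that determines its share of the shared CPU cycles, plus its own dedicated slice of memory.

        # Toy model of LPAR resource sharing -- illustrative only.
        # Partition names, weights and sizes are hypothetical.
        machine = {"cpu_weight_total": 1000, "memory_gb": 8}

        lpars = {
            "PROD": {"weight": 700, "memory_gb": 6},
            "TEST": {"weight": 300, "memory_gb": 2},
        }

        for name, lpar in lpars.items():
            cpu_share = lpar["weight"] / machine["cpu_weight_total"]
            print(f"{name}: {cpu_share:.0%} of shared CPU cycles, "
                  f"{lpar['memory_gb']} GB of dedicated memory")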

     Memory (and the ability to address it) has also increased dramatically, from the first 24-bit addressing of 16 megabytes (2^24 bytes) through 31-bit addressing of 2 gigabytes (2^31 bytes) to 64-bit addressing of 16 exabytes (2^64 bytes).

     Sequential devices such as printers and tape drives can be accessed from multiple LPARs but can be used by only one LPAR at a time. Disk drives can be dedicated to a particular LPAR or shared between systems. Individual files (Datasets, in IBM parlance) may also be shared between systems, either on the different LPARs of the same mainframe or by software running on separate mainframes. In fact, disk (DASD in IBMland) and the datasets thereon can even be shared between different Operating Systems such as z/OS, OS/390 and VM. All the Operating Systems and the underlying hardware control circuits and microcode perform 'locking' operations to preserve data integrity during concurrent accesses.
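     As a rough illustration of what that locking buys, here is a Python sketch with hypothetical system and dataset names. The real mechanisms are hardware RESERVE/RELEASE and software ENQ/DEQ, which work across LPARs and machines, not in-process locks like these; but the effect is the same, in that two systems updating the same dataset are serialized so one finishes before the other starts.

        import threading

        # One lock per shared dataset -- a stand-in for RESERVE/ENQ.
        dataset_locks = {"PAYROLL.MASTER": threading.Lock()}

        def update_dataset(system, name):
            # Only one system at a time may hold the dataset.
            with dataset_locks[name]:
                print(f"{system} is updating {name}")
                # ... read, modify, write ...

        # Two LPARs contending for the same dataset are serialized.
        t1 = threading.Thread(target=update_dataset, args=("LPAR1", "PAYROLL.MASTER"))
        t2 = threading.Thread(target=update_dataset, args=("LPAR2", "PAYROLL.MASTER"))
        t1.start(); t2.start(); t1.join(); t2.join()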

     The basic structure of the hardware initially consisted of a CPU, Channels, Control Units and Devices. Some channels were high-speed and carried only a single dataflow at a time, although they might be connected via multiple paths. Low-speed channels were multiplexed. High-speed devices such as disks and tapes went to the high-speed (Selector) channels, and low-speed devices (teleprocessing gear, card readers and punches, printers) went to the low-speed Multiplexor channels. Some of the low-speed devices hung off Control Units and some hung directly on the channel, but high-speed devices always connected via a Control Unit. These CUs could be connected to more than one channel, so multi-threaded I/O could occur through a CU as long as the devices were unique to each I/O.
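     A toy Python model may make that multi-pathing rule clearer (the channel and tape-device names are hypothetical): an I/O can proceed through a shared CU on any free channel path, so two I/Os to different devices can thread through the same CU at once, one per channel.

        busy_channels = set()
        busy_devices = set()

        def start_io(device, paths):
            # Start an I/O on the first free channel path to the device.
            if device in busy_devices:
                return None                    # device already has an I/O in flight
            for channel in paths:
                if channel not in busy_channels:
                    busy_channels.add(channel)
                    busy_devices.add(device)
                    return channel             # I/O proceeds down this path
            return None                        # all paths to the CU are busy

        # A CU attached to two channels: two I/Os to different devices
        # thread through it concurrently, one per channel path.
        print(start_io("TAPE1", ["CHAN1", "CHAN2"]))   # CHAN1
        print(start_io("TAPE2", ["CHAN1", "CHAN2"]))   # CHAN2
        print(start_io("TAPE3", ["CHAN1", "CHAN2"]))   # None -- both paths busy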

     The initial CUs were 'dumb boxes', simply providing a physical connection, with all the intelligence residing in the CPU. Later CUs were more intelligent; the Tape Controller of a S/370, for example, had the CPU power of a S/360 Model 40. Teleprocessing controllers in particular evolved from the 'dumb' 2701/2703 to the mini-computer 3705/3725/3745 front-end processors. Eventually an I/O Subsystem evolved which took over most of the 'controller' functions, with the actual physical connection handled by what are essentially mini-processors. Connectivity between devices and external hardware is by ESCON or FICON fiber-optic cable as well as Ethernet. Our z800 connects to older bus-and-tag devices such as the printer via converters which bridge ESCON to B&T. There are free-standing front-end communications controllers to handle large networks, but these are falling out of use with the rise of IP-based communication.

     Modern DASD is on an ESS box commonly called the Shark. This is disk storage that is configurable for either mainframe use or standard network storage, arranged in fault-tolerant RAID-5 or even RAID-10 configurations. Healthserve has a small Shark, with four 8-packs of disks. Two 8-packs are formatted for network use behind a NAS server and two 8-packs are formatted for OS use, providing 292 volumes to the mainframe. Each volume has a capacity of almost 3 gigabytes. If needed, we could easily (and non-disruptively) add more 8-packs to the Shark, and we could even add an expansion chassis to support more 8-packs. The maximum capacity of a Shark soars into realms whose nomenclature reeks of science fiction: gigabytes, terabytes, petabytes, exabytes. Imagine shops with multiple full Sharks and you get some idea of how much data can be accessible to a mainframe or group of mainframes.

     And while that data could, in theory, be on network-formatted disks, a mainframe can process and deliver it with a speed that could only be matched by a sizable collection of servers. Large Internet sites such as Amazon, Yahoo, Ebay and Google have huge server farms to support a vast array of disk storage. They have grown from much smaller installations and they are probably pretty much locked into that architecture. Had they originally known how big they would become, I have no doubt they would have been better served by putting their applications up on IBM mainframes, which have full TCP/IP functionality (including under Linux), with the applications coded under WebSphere on the mainframe instead of on Intel boxes. Beyond the basic reliability of mainframes and mainframe disks, much of the processing overhead has been moved 'downstream' and does not impact the application. In particular, much of the Linux and DB2 processing has been microcoded and achieves performance levels far beyond Windows servers. I have heard that 80% of all the data stored on computers is stored on mainframes. That is particularly amazing, given the explosive growth of the Internet and the billions of web pages on hundreds of thousands of servers.
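     To bring that back down to earth, some back-of-envelope Python shows what the mainframe half of our own small Shark amounts to. The per-volume figure assumes 3390-3 volume geometry of roughly 2.8 gigabytes, an assumption consistent with the 'almost 3 gigabytes' above:

        # Hypothetical per-volume size; the exact figure depends on the
        # volume geometry the Shark emulates.
        volumes = 292
        gb_per_volume = 2.8
        total_gb = volumes * gb_per_volume
        print(f"{volumes} volumes x {gb_per_volume} GB = {total_gb:,.0f} GB "
              f"(about {total_gb / 1024:.2f} TB)")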

     The z800 at Healthserve has multiple processors, more than we are using but which we could use if necessary, and more processors could be added. We have 8 gigabytes of memory, partitioned between the two LPARs. Memory could also be expanded if necessary. Adding memory to the z800 or more disks to the Shark can be done non-disruptively, and even the software IOGEN (a 'map' of the hardware environment) can be updated dynamically and non-disruptively. Plug-and-play is alive and well on the IBM mainframe!

     When one calls mainframes 'dinosaurs', it is useful to remember that dinosaurs ruled the world for about 1000 times longer than Homo sapiens has existed. I wonder if we'll be around for 150 million years...