Unit Control Block

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Voidxor at 00:26, 12 May 2022.

In IBM mainframe operating systems of the OS/360 line and its successors, a Unit Control Block (UCB) is a memory structure, or control block, that describes a single input/output peripheral device (unit), or an exposure (alias), to the operating system. Certain data within the UCB also instructs the Input/Output Supervisor (IOS) to use specific closed subroutines, in addition to normal IOS processing, for additional physical device control.

Some other operating systems have similar structures.

Overview

During initial program load (IPL) of current[a] MVS systems, the Nucleus Initialization Program (NIP) reads the necessary information from the I/O Definition File (IODF) and uses it to build the UCBs. The UCBs are stored in system-owned memory, in the Extended System Queue Area (ESQA). After IPL completes, UCBs are owned by Input/Output Support. Among the information stored in the UCB are the device type (e.g., disk, tape, printer, terminal), the address of the device (such as 1002), the subchannel identifier and device number, the channel path ID (CHPID) defining the path to the device, for some devices the volume serial number (VOLSER), and a large amount of other information, including OS Job Management data.

While the contents of the UCB have changed as MVS evolved, the concept has not: it is a representation to the operating system of an external device. Inside every UCB are the UCBIOQ pointer to the current[1] IOS Queue Element[2] (IOQ), the UCBIOQF and UCBIOQL pointers[3] to a queue of IOQs,[b] and a subchannel number for the subchannel-identification word used in the start subchannel (SSCH) instruction to start a channel program (a chain of channel command words (CCWs)).[4]
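The queueing fields just described can be sketched as a toy model. The following Python sketch is illustrative only: the real UCB is an assembler-mapped control block inside z/OS, and everything here beyond the field names UCBIOQ, UCBIOQF and UCBIOQL (modeled as a single deque) is invented for the example.

```python
from collections import deque

class IOQ:
    """Stand-in for an IOS Queue Element: one queued I/O request."""
    def __init__(self, channel_program):
        self.channel_program = channel_program  # chain of CCWs (opaque here)

class UCB:
    """Toy model of a Unit Control Block's I/O queueing fields."""
    def __init__(self, device_number, subchannel_number):
        self.device_number = device_number          # e.g. 0x1002
        self.subchannel_number = subchannel_number  # used by SSCH
        self.ucbioq = None        # UCBIOQ: IOQ currently being serviced
        self.ioq_queue = deque()  # stands in for the UCBIOQF/UCBIOQL chain

    def start_io(self, ioq):
        """Queue an IOQ; drive it immediately if the device is idle."""
        if self.ucbioq is None:
            self.ucbioq = ioq           # device idle: start at once (SSCH)
        else:
            self.ioq_queue.append(ioq)  # device busy: wait in the queue

    def io_complete(self):
        """On I/O completion: take the next queued IOQ, if any."""
        self.ucbioq = self.ioq_queue.popleft() if self.ioq_queue else None
```

Note how the model enforces the key property of a classic UCB: one I/O at a time per device, with everything else waiting on the IOQ chain.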

The UCB evolved into an anchor holding information and states about the device. The UCB currently has five areas used for an external interface: the Device Class Extension, UCB Common Extension, UCB Prefix Stub, UCB Common Segment, and UCB Device Dependent Segment.[5] Other areas are for internal use only. The external areas can be read to determine information about the device.

In the earliest implementations of this OS, the UCBs (foundations and extensions) were assembled during SYSGEN and were located within the first 64 KB of the system area, as the I/O device lookup table consisted of 16-bit addresses. Subsequent enhancements allowed the extensions to reside above the 64-kilobyte (65,536-byte) line, saving space for additional UCB foundations below that line while preserving the architecture of the UCB lookup table (which converts a CUu device address to a UCB foundation address).

Handling parallel I/O operations

UCBs were introduced in the 1960s with OS/360. At that time, a device addressed by a UCB was typically a moving-head hard disk drive or a tape drive with no internal cache. Without a cache, the device was usually grossly outperformed by the mainframe's channel processor, so there was no reason to execute multiple input/output operations to it at the same time: the device could not physically handle them. In 1968 IBM introduced the 2305-1 and 2305-2 fixed-head disks, which had rotational position sensing (RPS) and 8 exposures (alias addresses) per disk; the OS/360 support provided a UCB per exposure in order to permit multiple concurrent channel programs. Similarly, later systems derived from OS/360 required an additional UCB for each allocated virtual volume in a 3850 Mass Storage System (MSS) and for each exposure on a 3880-11, 3880-13 and their successors.

Parallel Access Volumes (PAVs)

Only one channel program could run against a device at a time. This was acceptable in the 1960s, when CPUs were slow and generated I/O no faster than the devices could process it. As systems matured and CPU speed greatly surpassed I/O capacity, access to the device, serialized at the UCB level, became a serious bottleneck.

Parallel Access Volumes (PAVs) allow a UCB to be cloned so that multiple I/O operations can run simultaneously. With appropriate support in the DASD hardware, PAV provides for more than one I/O to a single device at a time. To maintain backward compatibility, operations are still serialized below the UCB level, but PAV allows the definition of additional UCBs for the same logical device, each using an additional alias address. For example, a DASD device at base address 1000 could have alias addresses 1001, 1002 and 1003, each with its own UCB. Since there are now four UCBs for a single device, four concurrent I/Os are possible. Writes to the same extent (an area of the disk assigned to one contiguous area of a file) are still serialized, but other reads and writes occur simultaneously. In the first version of PAV, the disk controller assigns an alias to a UCB. In the second version, Workload Manager (WLM) reassigns aliases to new UCBs from time to time. In the third version, with the IBM DS8000 series, each I/O uses any available alias with the UCB it needs.
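The base-plus-alias arrangement can be illustrated with a toy dispatcher. This Python sketch is a simplification under stated assumptions: the class and method names are invented for the example, and real dispatch happens inside IOS, not in application code.

```python
class Device:
    """A logical DASD volume with one base UCB and optional alias UCBs."""
    def __init__(self, base_address, alias_addresses=()):
        # Each address (base or alias) is one UCB, and each UCB can
        # carry one I/O at a time; None means the UCB is free.
        self.ucbs = {base_address: None}
        self.ucbs.update({a: None for a in alias_addresses})

    def start_io(self, request):
        """Dispatch a request to any free UCB; None if all are busy."""
        for addr, in_flight in self.ucbs.items():
            if in_flight is None:
                self.ucbs[addr] = request
                return addr
        return None  # all UCBs busy: the request must queue

    def io_complete(self, addr):
        """Mark the I/O on the given base or alias address as finished."""
        self.ucbs[addr] = None
```

With a base at 1000 and aliases 1001 through 1003, four requests dispatch concurrently and a fifth must wait, matching the four-UCB example in the text.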

The net effect of PAVs is to decrease the IOSQ component of disk response time, often to zero. As of 2007, the only restrictions on PAV are the number of alias addresses (255 per base address) and the overall number of devices per logical control unit (256, counting bases plus aliases).

Static versus dynamic PAVs

There are two types of PAV alias addresses: static and dynamic. A static alias address is defined, in both the DASD hardware and z/OS, to refer to one specific base address. With dynamic aliases, the number of alias addresses assigned to a specific base address fluctuates based on need. The management of these dynamic aliases is left to WLM, running in goal mode (always the case with supported levels of z/OS). Most systems that implement PAV use a mixture of both types: one, perhaps two, static aliases defined for each base UCB, plus a number of dynamic aliases for WLM to manage as it sees fit.

As WLM watches the I/O activity in the system, it determines whether a high-importance workload is being delayed by high contention for a specific PAV-enabled device; specifically, whether a disk device's base and alias UCBs are insufficient to eliminate IOS Queue time. If there is high contention, and WLM estimates that doing so would help the workload achieve its goals more readily, it tries to move aliases from another base address to this device.

WLM also reacts when performance goals, as specified by its service classes, are not being met. It looks for alias UCBs that are processing work for less important service classes and, if appropriate, re-associates those aliases with the base addresses serving the more important work.
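The kind of rebalancing decision described above can be sketched as a simple heuristic. The data layout, function name, and decision rule here are invented for illustration; WLM's actual goal-mode algorithm is far more elaborate.

```python
def rebalance_aliases(bases):
    """Move one dynamic alias from the least-important base that has
    spare aliases to the most-important base seeing IOS queue delay.

    `bases` maps a base device address to a dict with keys:
      importance  - higher means more important work
      aliases     - dynamic aliases currently assigned
      queue_delay - observed IOS queue time (arbitrary units)
    Returns (donor, target) addresses, or None if no move is made."""
    delayed = [b for b, s in bases.items() if s["queue_delay"] > 0]
    if not delayed:
        return None
    target = max(delayed, key=lambda b: bases[b]["importance"])
    donors = [b for b, s in bases.items()
              if b != target and s["aliases"] > 0
              and s["importance"] < bases[target]["importance"]]
    if not donors:
        return None
    donor = min(donors, key=lambda b: bases[b]["importance"])
    bases[donor]["aliases"] -= 1   # take an alias from the donor base
    bases[target]["aliases"] += 1  # give it to the contended base
    return donor, target
```

The sketch captures the essentials: only contended, higher-importance bases attract aliases, and donors are chosen from less important work.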

HyperPAVs

WLM's moving of aliases from one disk device to another takes a few seconds to show its effects, which for many situations is not fast enough. HyperPAVs are significantly more responsive because an alias UCB is acquired from a pool for the duration of a single I/O operation and then returned to the pool. A smaller number of UCBs can thus service the same workload, compared to dynamic PAVs, and there is no delay waiting for WLM to react.[6]
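The pool discipline can be sketched as follows. This is a minimal illustration of borrow-per-I/O pooling, with invented names; it is not the actual IOS implementation, and a real system queues the I/O rather than raising an error when the pool is empty.

```python
from contextlib import contextmanager

class HyperPAVPool:
    """Pool of alias UCB addresses shared by all base devices in a
    logical control unit; an alias is borrowed for one I/O and
    returned as soon as that I/O completes."""
    def __init__(self, alias_addresses):
        self.free = list(alias_addresses)

    @contextmanager
    def alias_for_io(self):
        if not self.free:
            raise RuntimeError("no free alias: the I/O must queue")
        alias = self.free.pop()
        try:
            yield alias              # run the single I/O on this alias
        finally:
            self.free.append(alias)  # returned to the pool immediately
```

Because the alias is bound to a base device only for one I/O, the pool adapts instantly to shifting load, which is the responsiveness advantage over WLM-managed dynamic aliases.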

In other operating systems

Digital's VMS operating system uses an identically named structure, the UCB, for similar purposes. A UCB is created for each I/O device. The data in the UCB include the device's unit number (part of the device name) and a list head on which pending I/O requests may be queued. The UCB may have a device-driver-defined extension in which the driver keeps driver-defined data instantiated for each device.[7]

Notes

  1. ^ In some older systems, the UCBs were part of the Nucleus and were assembled during the SYSGEN process.
  2. ^ In OS/360, OS/VS1 and SVS, there was a field pointing to a queue of Request Queue Elements (RQEs).

References

  1. ^ "UCB Mapping". z/OS 2.5 MVS Data Areas Volume 4 (RRP - XTL) (PDF). IBM. September 30, 2021. p. 994. GA32-0938-500. Retrieved May 9, 2022.
  2. ^ "IOQ Information". z/OS 2.5 MVS Data Areas Volume 2 (IAX - ISG) (PDF). IBM. September 30, 2021. pp. 1033–1039. GA32-0936-50. Retrieved May 8, 2022.
  3. ^ "IOSDUPFX mapping". z/OS 2.5 MVS Data Areas Volume 2 (IAX - ISG) (PDF). IBM. September 30, 2021. p. 1181. GA32-0936-50. Retrieved May 9, 2022.
  4. ^ z/Architecture Principles of Operation. IBM. May 4, 2004. p. 14.3.9. Retrieved January 3, 2017.
  5. ^ "z/OS Release 11 MVS Data Areas" (PDF). PubLibZ.Boulder.IBM.com. IBM. 2009. Retrieved January 4, 2017.
  6. ^ Rogers, Paul; Salla, Alvaro; Sousa, Livio (September 2008). "7.22 HyperPAV feature for DS8000 series". ABCs of z/OS System Programming (PDF). Vol. 10 (Fourth ed.). IBM. p. 494. SG24-6990-03. Retrieved May 5, 2022.
  7. ^ Goldenberg, Ruth; Saravanan, Sara (1994). OpenVMS AXP Internals and Data Structures. Digital Press. p. 753. ISBN 978-1555581206. The executive creates a unit control block (UCB) for each I/O device attached to the system.