CICS Transaction Server V5.3, released December 11, 2015
- Operating system: z/OS, z/VSE
- Platform: IBM System z
- Type: Mixed-language application server
Customer Information Control System (CICS®) is a family of mixed language application servers that provide online transaction management and connectivity for applications on IBM Mainframe systems under z/OS and z/VSE.
CICS is middleware designed to support rapid, high-volume online transaction processing. A CICS transaction is a unit of processing initiated by a single request that may affect one or more objects. This processing is usually interactive (screen-oriented), but background transactions are possible.
CICS provides services that extend or replace the functions of the operating system; these services are more efficient than the operating system's generalized services and simpler for programmers to use, particularly with respect to communication with diverse terminal devices.
Applications developed for CICS may be written in a variety of programming languages and use CICS-supplied language extensions to interact with resources such as files, database connections, terminals, or to invoke functions such as web services. CICS manages the entire transaction such that if for any reason a part of the transaction fails all recoverable changes can be backed out.
While CICS has its highest profile among financial institutions such as banks and insurance companies, many Fortune 500 companies are reported to run CICS along with many government entities. CICS is also widely used by many smaller organizations. CICS is used in bank-teller applications, ATM systems, industrial production control systems, insurance applications, and many other types of interactive applications.
Recent CICS Transaction Server enhancements include support for Web services, Java, event processing, Atom feeds, and RESTful interfaces. CICS Transaction Server 5.3, which became generally available on December 11, 2015, provides new and enhanced capabilities in three main areas: service agility, operational efficiency, and cloud with DevOps.
CICS was preceded by an earlier, single threaded transaction processing system, IBM MTCS. An 'MTCS-CICS bridge' was later developed to allow these transactions to execute under CICS with no change to the original application programs.
CICS was originally developed in the United States at an IBM Development Center in Des Plaines, Illinois, beginning in 1966 to address requirements from the public utility industry. The first CICS product was released in 1968, named Public Utility Customer Information Control System, or PU-CICS. It became clear immediately that it had applicability to many other industries, so the Public Utility prefix was dropped with the introduction of the first release of the CICS Program Product on July 8, 1969, not long after the IMS database management system.
In 1974, CICS development responsibility was shifted to the IBM Hursley Site in the United Kingdom, where development work continues today alongside labs in India, China, Russia, Australia, and the United States.
CICS originally supported only a few IBM-brand devices, such as the 1965 IBM 2741 Selectric ("golf ball") typewriter-based terminal. The 1964 IBM 2260 and 1972 IBM 3270 video display terminals were widely used later.
In the early days of IBM mainframes, computer software was free – bundled at no extra charge with computer hardware. The OS/360 operating system and application support software like CICS were "open" to IBM customers long before the open source software initiative. Corporations like Standard Oil of Indiana (Amoco) made major contributions to CICS.
The IBM Des Plaines team tried to add support for popular non-IBM terminals like the ASCII Teletype Model 33 ASR, but the small low-budget software development team could not afford the $100-per-month hardware to test it. IBM executives incorrectly felt that the future would be like the past with batch processing using traditional punch cards.
IBM reluctantly provided only minimal funding when public utility companies, banks and credit-card companies demanded a cost-effective interactive system (similar to the 1965 IBM Airline Control Program used by the American Airlines Sabre computer reservations system) for high-speed data access-and-update to customer information for their telephone operators (without waiting for overnight batch processing punch card systems).
When CICS was delivered to Amoco with Teletype Model 33 ASR support, it caused the entire OS/360 operating system to crash (including non-CICS application programs). The majority of the CICS Terminal Control Program (TCP - the heart of CICS) and part of OS/360 had to be laboriously redesigned and rewritten by Amoco Production Company in Tulsa, Oklahoma. It was then given back to IBM for free distribution to others.
Within a few years, CICS generated over $60 billion in new hardware revenue for IBM and became its most successful mainframe software product.
In 1972, CICS was available in three versions – DOS-ENTRY (program number 5736-XX6) for DOS/360 machines with very limited memory, DOS-STANDARD (program number 5736-XX7), for DOS/360 machines with more memory, and OS-STANDARD V2 (program number 5734-XX7) for the larger machines which ran OS/360.
In early 1970, a number of the original developers, including Ben Riggins (the principal architect of the early releases) relocated to California and continued CICS development at IBM's Palo Alto Development Center. IBM executives did not recognize value in software as a revenue-generation product until after federal law required software unbundling. In 1980, IBM executives failed to heed Ben Riggins' strong suggestions that IBM should provide their own EBCDIC-based operating system and integrated-circuit microprocessor chip for use in the IBM Personal Computer as a CICS intelligent terminal (instead of the incompatible Intel chip, and immature ASCII-based Microsoft 1980 DOS).
Because of the limited capacity of even large processors of that era every CICS installation was required to assemble the source code for all of the CICS system modules after completing a process similar to system generation (sysgen), called CICSGEN, to establish values for conditional assembly language statements. This process allowed each customer to exclude support from CICS itself for any feature they did not intend to use, such as device support for terminal types not in use.
CICS owes its early popularity to its relatively efficient implementation when hardware was very expensive, its multi-threaded processing architecture, its relative simplicity for developing terminal-based real-time transaction applications, and many open-source customer contributions, including both debugging and feature enhancement.
Part of CICS was formalized using the Z notation in the 1980s and 1990s in collaboration with the Oxford University Computing Laboratory, under the leadership of Tony Hoare. This work won a Queen's Award for Technological Achievement.
CICS as a distributed file server
In 1986, IBM announced CICS support for the record-oriented file services defined by Distributed Data Management Architecture (DDM). This enabled programs on remote, network-connected computers to create, manage, and access files that had previously been available only within the CICS/MVS and CICS/VSE transaction processing environments.
In newer versions of CICS, support for DDM has been removed. Support for the DDM component of CICS z/OS was discontinued at the end of 2003 and removed from CICS for z/OS from Version 5.2 onwards. In CICS TS for z/VSE, DDM support was stabilized at the V1.1.1 level, with an announced intention to discontinue it in a future release; from CICS TS for z/VSE 2.1 onwards, CICS/DDM is not supported.
CICS and the World Wide Web
CICS Transaction Server first introduced a native HTTP interface in Version 1.2, together with a Web Bridge technology for wrapping green-screen terminal-based programs with an HTML facade. The CICS Web and Document APIs were enhanced in CICS TS V1.3 to enable web-aware applications to be written to interact more effectively with web browsers.
CICS TS versions 2.1 through 2.3 focused on introducing CORBA and EJB technologies to CICS, offering new ways to integrate CICS assets into distributed application component models. These technologies relied on hosting Java applications in CICS. The Java hosting environment saw numerous improvements over many releases, ultimately resulting in the embedding of the WebSphere Liberty Profile into CICS Transaction Server V5.1. Numerous web-facing technologies could be hosted in CICS using Java; this ultimately resulted in the removal of the native CORBA and EJB technologies.
CICS TS V3.1 added a native implementation of the SOAP and WSDL technologies for CICS, together with client side HTTP APIs for outbound communication. These twin technologies enabled easier integration of CICS components with other Enterprise applications, and saw widespread adoption. Tools were included for taking traditional CICS programs written in languages such as COBOL, and converting them into WSDL defined Web Services, with little or no program changes. This technology saw regular enhancements over successive releases of CICS.
CICS TS V4.1 and V4.2 saw further enhancements to web connectivity, including a native implementation of the Atom Publishing Protocol.
Many of the newer web-facing technologies were made available for earlier releases of CICS using delivery models other than a traditional product release. This allowed early adopters to provide constructive feedback that could influence the final design of the integrated technology. Examples include the SOAP for CICS technology preview SupportPac for TS V2.2, and the ATOM SupportPac for TS V3.1. This approach was used to introduce JSON support for CICS TS V4.2, a technology that went on to be integrated into CICS TS V5.2.
The JSON technology in CICS is similar to earlier SOAP technology, both of which allowed programs hosted in CICS to be wrapped with a modern facade. The JSON technology was in turn enhanced in z/OS Connect Enterprise Edition, an IBM product for composing JSON APIs that can leverage assets from several mainframe subsystems.
Many partner products have also been used to interact with CICS. Popular examples include using the CICS Transaction Gateway for connecting to CICS from JCA compliant Java application servers, and IBM DataPower appliances for filtering web traffic before it reaches CICS.
Modern versions of CICS provide many ways for both existing and new software assets to be integrated into distributed application flows. CICS assets can be accessed from remote systems, and can access remote systems; user identity and transactional context can be propagated; RESTful APIs can be composed and managed; devices, users and servers can interact with CICS using standards based technologies; and the IBM WebSphere Liberty environment in CICS promotes the rapid adoption of new technologies.
Although when CICS is mentioned, people usually mean CICS Transaction Server, the CICS Family refers to a portfolio of transaction servers, connectors (called CICS Transaction Gateway) and CICS Tools.
CICS on distributed platforms (rather than mainframes) is called IBM TXSeries. TXSeries is distributed transaction processing middleware that supports C, C++, COBOL, Java, and PL/I applications in cloud environments and traditional data centers. TXSeries is available on AIX, Linux x86, Windows, Solaris, and HP-UX. CICS is also available on other operating systems, notably IBM i and OS/2. The z/OS implementation (CICS Transaction Server for z/OS) is by far the most popular and significant.
Two versions of CICS were previously available for VM/CMS, but both have since been discontinued. In 1986, IBM released CICS/CMS which was a single-user version of CICS designed for development use, the applications later being transferred to an MVS or DOS/VS system for production execution. Later, in 1988, IBM released CICS/VM.
Provisioning, management, and analysis of CICS systems and applications is provided by CICS Tools. This includes performance management as well as deployment and management of CICS resources. In 2015, the four core foundational CICS tools (and the CICS Optimization Solution Pack for z/OS) were updated with the release of CICS Transaction Server for z/OS 5.3. The four core CICS Tools are CICS Interdependency Analyzer for z/OS, CICS Deployment Assistant for z/OS, CICS Performance Analyzer for z/OS, and CICS Configuration Manager for z/OS.
Multiple-user interactive-transaction application programs were required to be quasi-reentrant in order to support multiple concurrent transaction threads. A software coding error in one application could block all users from the system. The modular design of CICS reentrant / reusable control programs meant that, with judicious "pruning," multiple users with multiple applications could be executed on a computer with just 32K of expensive magnetic core physical memory (including the operating system).
Considerable effort was required by CICS application programmers to make their transactions as efficient as possible. A common technique was to limit the size of individual programs to no more than 4,096 bytes, or 4K, so that CICS could easily reuse the memory occupied by any program not currently in use for another program or other application storage needs. When virtual memory was added to OS/360 in 1972, the 4K strategy became even more important, reducing the unproductive resource-contention overhead of paging and thrashing.
The efficiency of compiled high-level COBOL and PL/I language programs left much to be desired. Many CICS application programs continued to be written in assembler language, even after COBOL and PL/I support became available.
With 1960s-and-1970s hardware resources expensive and scarce, a competitive "game" developed among system optimization analysts. When critical path code was identified, a code snippet was passed around from one analyst to another. Each person had to either (a) reduce the number of bytes of code required, or (b) reduce the number of CPU cycles required. Younger analysts learned from what more-experienced mentors did. Eventually, when no one could do (a) or (b), the code was considered optimized, and they moved on to other snippets. Small shops with only one analyst learned CICS optimization very slowly (or not at all).
Unfortunately, many of the "rules" were frequently broken, especially by COBOL programmers who might not understand the internals of their programs or fail to use the necessary restrictive compile time options. This resulted in "non-re-entrant" code that was often unreliable, leading to spurious storage violations and entire CICS system crashes.
The entire partition, or Multiple Virtual Storage (MVS) region, operated with the same memory protection key including the CICS kernel code. Program corruption and CICS control block corruption was a frequent cause of system downtime. A software error in one application program could overwrite the memory (code or data) of one or all currently running application transactions. Locating the offending application code for complex transient timing errors could be a very-difficult operating-system analyst problem.
These serious shortcomings persisted for multiple new releases of CICS over a period of more than 20 years. CICS application transactions were often mission-critical for public utility companies, large banks, and other multibillion-dollar financial institutions. Top-quality CICS skills were in high demand and short supply. The learning curve was long and steep. Unqualified novice developers could have a major negative impact on company operations.
Eventually, it became possible to provide a measure of advance application protection by performing all testing under control of a monitoring program that also served to provide Test and Debug features.
When CICS was first released, it only supported application transaction programs written in IBM 360 Assembler. COBOL and PL/I support were added years later. Because of the initial assembler orientation, requests for CICS services were made using assembler-language macros. For example, a request to read a record from a file was made by a macro call to the "File Control Program" of CICS, and might look like this:
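The macro call itself is not reproduced above; the following is a hedged sketch of the macro-level style, using the DFHFC file-control macro. The dataset name MYFILE is illustrative, and the exact operand spellings varied across early CICS releases.

```asm
* Macro-level request to read a record for update.
* MYFILE is a hypothetical dataset name.
         DFHFC TYPE=READ,                                              X
               DATASET=MYFILE,                                         X
               TYPOPER=UPDATE
```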
This gave rise to the later terminology "Macro-level CICS."
When high-level language support was added, the macros were retained and the code was converted by a pre-compiler that expanded the macros to their COBOL or PL/I CALL statement equivalents. Thus preparing an HLL application was effectively a "two-stage" compile — output from the preprocessor fed into the HLL compiler as input.
COBOL considerations: unlike PL/I, IBM COBOL did not normally provide for the manipulation of pointers (addresses). In order to allow COBOL programmers to access CICS control blocks and dynamic storage, the designers resorted to what was essentially a hack. The COBOL Linkage Section was normally used for inter-program communication, such as parameter passing. The compiler generates a list of addresses, each called a Base Locator for Linkage (BLL), which are set on entry to the called program. The first BLL corresponds to the first item in the Linkage Section, and so on. CICS allows the programmer to access and manipulate these by passing the address of the list as the first argument to the program. The BLLs can then be dynamically set, either by CICS or by the application, to allow access to the corresponding structure in the Linkage Section.
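A minimal sketch of the BLL convention follows. The data names are hypothetical, and the exact cell layout varied by compiler and CICS release; the point is that each cell after the first addresses the corresponding 01-level Linkage Section item.

```cobol
      * Illustrative BLL usage (COBOL-74 era; names are hypothetical).
      * The first cell addresses the parameter list itself; each
      * subsequent cell addresses the next 01-level Linkage item.
       LINKAGE SECTION.
       01  BLL-CELLS.
           05  BLL-LIST-PTR    PIC S9(8) COMP.
           05  BLL-CWA-PTR     PIC S9(8) COMP.
       01  COMMON-WORK-AREA.
           05  CWA-FIELD       PIC X(100).
```

Setting BLL-CWA-PTR to a storage address makes COMMON-WORK-AREA map onto that storage, giving the COBOL program indirect pointer manipulation.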
During the 1980s, IBM at Hursley produced a "half-way house" version of CICS that supported what became known as "Command-level CICS." This release still supported the older programs but introduced a new layer of execution to the new Command level application programs.
A typical Command-level call might look like the following:
EXEC CICS SEND MAPSET('LOSMATT') MAP('LOSATT') END-EXEC
The values given in the SEND MAPSET command correspond to the names used on the first DFHMSD macro in the map definition given below for the MAPSET argument, and the DFHMDI macro for the MAP argument. This is pre-processed by a pre-compile batch translation stage, which converts the embedded commands (EXECs) into call statements to a stub subroutine. So, preparing application programs for later execution still required two stages. It was possible to write "mixed-mode" applications using both Macro-level and Command-level statements.
At execution time, the command-level commands were converted back using a run-time translator, "The EXEC Interface Program", to the old Macro-level call, which was then executed by the mostly unchanged CICS nucleus programs.
The Command-level-only CICS introduced in the early 1990s offered some advantages over earlier versions of CICS. However, IBM also dropped support for Macro-level application programs written for earlier versions. This meant that many application programs had to be converted or completely rewritten to use Command-level EXEC commands only.
By this time, there were perhaps millions of programs worldwide that had been in production for decades in many cases. Rewriting them inevitably introduced new bugs without necessarily adding new features.
New programming styles
Recent CICS Transaction Server enhancements include support for a number of modern programming styles.
CICS Transaction Server Version 2.1 introduced support for Java. CICS Transaction Server Version 2.2 supported the Software Developers Toolkit. CICS provides the same run-time container as IBM's WebSphere product family, so EJB applications are portable between CICS and WebSphere, and there is common tooling for the development and deployment of EJB applications.
A CICS transaction is a set of operations that perform a task together. Usually, the majority of transactions are relatively simple tasks such as requesting an inventory list or entering a debit or credit to an account. A primary characteristic of a transaction is that it should be atomic. On IBM System z servers, CICS easily supports thousands of transactions per second, making it a mainstay of enterprise computing.
CICS applications comprise transactions, which can be written in numerous programming languages, including COBOL, PL/I, C, C++, IBM Basic assembly language, REXX, and Java.
Each CICS program is initiated using a transaction identifier. CICS screens are usually sent as a construct called a map, a module created with Basic Mapping Support (BMS) assembler macros or third-party tools. CICS screens may contain text that is highlighted, has different colors, and/or blinks depending on the terminal type used. An example of how a map can be sent through COBOL is given below. The end user inputs data, which is made accessible to the program by receiving a map from CICS.
EXEC CICS RECEIVE MAPSET('LOSMATT') MAP('LOSATT') INTO(OUR-MAP) END-EXEC.
For technical reasons, the arguments to some command parameters must be quoted and some must not, depending on what is being referenced. Most programmers code from a reference book until they get the "hang" of which arguments are quoted, or they use a "canned template" of example code that they copy, paste, and edit to change the values.
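The distinction is between literal values and COBOL data names: literals are quoted, data names are not. In the following sketch, WS-MAPSET-NAME is a hypothetical working-storage field; both statements send the same map.

```cobol
      * Literal arguments are quoted:
           EXEC CICS SEND MAPSET('LOSMATT') MAP('LOSATT') END-EXEC.
      * Data-name arguments are not quoted (WS-MAPSET-NAME is a
      * hypothetical PIC X(7) field containing 'LOSMATT'):
           MOVE 'LOSMATT' TO WS-MAPSET-NAME.
           EXEC CICS SEND MAPSET(WS-MAPSET-NAME) MAP('LOSATT') END-EXEC.
```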
Example of BMS Map Code
Basic Mapping Support defines the screen format through assembler macros such as the following. This was assembled to generate both the physical map set – a load module in a CICS load library – and a symbolic map set – a structure definition or DSECT in PL/I, COBOL, assembler, etc. which was copied into the source program.
LOSMATT DFHMSD TYPE=MAP,                                               X
               MODE=INOUT,                                             X
               TIOAPFX=YES,                                            X
               TERM=3270-2,                                            X
               LANG=COBOL,                                             X
               MAPATTS=(COLOR,HILIGHT),                                X
               DSATTS=(COLOR,HILIGHT),                                 X
               STORAGE=AUTO,                                           X
               CTRL=(FREEKB,FRSET)
*
LOSATT  DFHMDI SIZE=(24,80),                                           X
               LINE=1,                                                 X
               COLUMN=1
*
LSSTDII DFHMDF POS=(1,01),                                             X
               LENGTH=04,                                              X
               COLOR=BLUE,                                             X
               INITIAL='MQCM',                                         X
               ATTRB=PROT
*
        DFHMDF POS=(24,01),                                            X
               LENGTH=79,                                              X
               COLOR=BLUE,                                             X
               ATTRB=ASKIP,                                            X
               INITIAL='PF7-  8-  9- 10-                               X
               11- 12-CANCEL'
*
        DFHMSD TYPE=FINAL
        END
In the z/OS environment, a CICS installation comprises one or more regions (generally referred to as a "CICS Region"), spread across one or more z/OS system images. Although it processes interactive transactions, each CICS region may be started as a batch address space with standard JCL statements: it is a job that runs indefinitely. Alternatively, each CICS region may be started as a started task. Whether a batch job or a started task, CICS regions may run for days, weeks, or even months before shutting down for maintenance (MVS or CICS).
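As an illustration, a region-startup job might look like the following sketch. The program name DFHSIP (the CICS system initialization program) is real; the job name, SIT suffix, and dataset names are hypothetical placeholders that would differ at every installation.

```jcl
//CICSRGN  JOB (ACCT),'CICS REGION',CLASS=A
//* Starts a CICS region as a batch job; it runs until shut down.
//* Library and member names below are illustrative only.
//CICS     EXEC PGM=DFHSIP,PARM='SIT=6$'
//STEPLIB  DD DSN=CICSTS.SDFHAUTH,DISP=SHR
//DFHRPL   DD DSN=CICSTS.SDFHLOAD,DISP=SHR
//SYSIN    DD DSN=CICSTS.SYSIN(SIPOVRDS),DISP=SHR
```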
Installations are divided into multiple address spaces for a wide variety of reasons, such as:
- application separation,
- function separation,
- avoiding the workload capacity limitations of a single region, or address space.
A typical installation consists of a number of distinct applications. Each application usually has its own "Terminal-Owning Region" (TOR) and one or more "Application-Owning Regions" (AORs), though other topologies are possible. For example, the AORs might not perform File I/O. Instead there would be "File-Owning Regions" (FORs) that performed the File I/O on behalf of transactions in the AOR.
The objective of recovery/restart in CICS is to minimize, and if possible eliminate, damage done to the online system when a failure occurs, so that system and data integrity are maintained.
Under CICS, the following are some of the resources that are considered recoverable. For these resources to be recoverable, special options must be specified in the relevant CICS control tables:
- VSAM files
- Intrapartition TDQ
- Temporary Storage Queue in auxiliary storage
- I/O messages from/to transactions in a VTAM network
CICS offers extensive recovery/restart facilities for users to establish their own recovery/restart capability in their CICS system. Commonly used recovery/restart facilities include:
- Dynamic Transaction Backout (DTB)
- Automatic Transaction Restart
- Resource Recovery using System Log
- Resource Recovery using Journal
- System Restart
- Extended Recovery Facility
Each CICS region comprises one major task on which every transaction runs, although certain services such as access to DB2 data use other tasks (TCBs). Within a region transactions are cooperatively multitasked — they are expected to be well-behaved and yield the CPU rather than wait. CICS services handle this automatically.
Each unique CICS "Task" or transaction is allocated its own dynamic memory at start-up, and subsequent requests for additional memory are handled by a call to the "Storage Control Program" (part of the CICS nucleus or "kernel"), which performs a role analogous to that of an operating system.
A CICS system consists of the online nucleus, batch support programs, and applications services.
The CICS nucleus consists of a number of functional modules:
- Task Control Program (KCP).
- Storage Control Program (SCP).
- Program Control Program (PCP).
- Program Interrupt Control Program (PIP).
- Interval Control Program (ICP).
- Dump Control Program (DCP).
- Terminal Control Program (TCP).
- File Control Program (FCP).
- Transient Data Control Program (TDP).
- Temporary Storage Control Program (TSP).
In addition to the online functions, CICS has several support programs that run as batch jobs (pp. 34-35):
- High level language (macro) preprocessor.
- Command language translator.
- Dump utility – prints formatted dumps generated by CICS Dump Management.
- Trace utility – formats and prints CICS trace output.
- Journal formatting utility – prints a formatted copy of CICS journals.
The following components of CICS support application development (pp. 35-37):
- Basic Mapping Support (BMS) provides device-independent terminal input and output.
- Data Interchange Program (DIP) provides support for IBM 3770 and IBM 3790 programmable devices.
- 2260 Compatibility allows programs written for IBM 2260 display devices to run on 3270 displays.
- EXEC Interface Program – the stub program that converts calls generated by EXEC CICS commands into calls to CICS functions.
- Built-in Functions – table search, phonetic conversion, field verify, field edit, bit checking, input formatting, weighted retrieval.
Different countries have differing pronunciations:
- Within IBM (specifically Tivoli) it is referred to as "kicks".
- In the US, it is more usually pronounced by reciting each letter, "C-I-C-S".
- In Australia, Belgium, Canada, Hong Kong, the UK and some other countries, it is pronounced "kicks".
- In Finland, it is pronounced [kiks].
- In France, it is pronounced [se.i.se.ɛs].
- In Germany, Austria and Hungary, it is pronounced [ˈtsɪks] and, less often, [ˈkɪks].
- In Greece, it is pronounced kiks.
- In India, it is pronounced kicks.
- In Iran, it is pronounced kicks.
- In Italy, it is pronounced [ˈtʃiks].
- In Poland, it is pronounced [ˈkʲiks].
- In Portugal and Brazil, it is pronounced [ˈsiks].
- In Russia, it is pronounced kiks.
- In Slovenia, it is pronounced kiks.
- In Spain, it is pronounced [ˈθiks].
- In Sweden, it is pronounced kicks.
- In Israel, it is pronounced kicks.
- In Uganda, it is pronounced kicks.
- In Turkey, it is pronounced kiks.
- IBM TXSeries (CICS on distributed platforms)
- IBM WebSphere
- IBM 2741
- IBM 2260
- IBM 3270
- OS/360
- IBM Corporation. "CICS Transaction Server for z/OS, Version 3.2 Glossary:T". Retrieved December 7, 2012.
- Customer Information Control System (CICS) General Information Manual (PDF). White Plains, New York: IBM. December 1972. GH20-1028-3. Retrieved 2016-04-01.
- King, Steve (1993). "The Use of Z in the Restructure of IBM CICS". In Hayes, Ian. Specification Case Studies (2nd ed.). New York: Prentice Hall. pp. 202–213. ISBN 0-13-832544-8.
- Warner, Edward (1987-02-23). "IBM Gives PC Programs Direct Mainframe Access: PC Applications Can Alter Files". InfoWorld. 9 (8): 1. Retrieved 2016-04-01.
- "IBM CICS Transaction Server for z/OS, V5.2 takes service agility, operational efficiency, and cloud enablement to a new level". IBM. 2014-04-07. Retrieved 2016-04-14.
CICS DDM is no longer available from IBM and support was discontinued, as of December 31, 2003. CICS DDM is no longer available in CICS TS from Version 5.2 onwards.
- "IBM z/VSE Central Functions Version 9.2 - z/VSE Version 5.2". IBM. April 7, 2014. Retrieved 2016-04-14.
Support for CICS Distributed Data Management (DDM) is stabilized in CICS TS for VSE/ESA V1.1.1. In a future release of CICS TS for z/VSE, IBM intends to discontinue support for CICS DDM.
- "IBM CICS Transaction Server for z/VSE V2.1 delivers enhancements for future workloads". IBM. October 5, 2015. Retrieved 2016-04-14.
CICS Distributed Data Management (CICS/DDM) is not supported with CICS TS for z/VSE V2.1.
- "CICS/CMS". IBM. Retrieved 2016-04-01.
- "CUSTOMER INFORMATION CONTROL SYSTEM/ CONVERSATIONAL MONITOR SYSTEM (CICS/CMS) RELEASE 1 ANNOUNCED AND PLANNED TO BE AVAILABLE JUNE 1986". IBM. October 15, 1985. Retrieved 2016-04-02.
- "(CICS/VM) Customer Information Control System / Virtual Machine". IBM. Retrieved 2016-04-01.
- "CUSTOMER INFORMATION CONTROL SYSTEM/VIRTUAL MACHINE (CICS/VM)". IBM. October 20, 1987. Retrieved 2016-04-02.
- IBM Corporation (1972). Customer Information Control System (CICS) Application Programmer's Reference Manual (PDF). Retrieved Jan 4, 2016.
- IBM Corporation. "Basic mapping support". CICS Information Center.
- IBM (September 13, 2010). "CICS Transaction Server glossary". CICS Transaction Server for z/OS V3.2. IBM Information Center, Boulder, Colorado. Retrieved December 12, 2010.
- IBM Corporation (1975). Customer Information Control System (CICS) System Programmer's Reference Manual (PDF).
- IBM Corporation (1977). Customer Information Control System/Virtual Storage (CICS/VS) Version 1, Release 3 Introduction to Program Logic Manual (PDF).
- "CICS - An Introduction" (PDF). IBM Corporation. July 8, 2004. Retrieved April 20, 2014.
- IBM CICS Family official website
- IBM CICS Whitepaper - Why to choose CICS Transaction Server for new IT projects
- CICS official 35th Anniversary website
- Support Forum for CICS Programming
- CICS User Community website for CICS related news, announcements and discussions
- Bob Yelavich's CICS focused website. (Note that this site uses frames, but on high-resolution screens the left-hand frame, which contains the site index, may be hidden. Scroll right within the frame to see its content.) at the Wayback Machine (archived February 5, 2005)