Transaction Processing Facility

From Wikipedia, the free encyclopedia
IBM real-time operating system
z/TPF
Developer: IBM
Written in: z/Architecture assembly language, C, C++
OS family: z/Architecture assembly language (z/TPF), ESA/390 assembly language (TPF4)
Working state: Current
Source model: Closed source (source code is available to licensed users with restrictions)
Initial release: 1979
Latest release: 1.1.0.2025[1]
Supported platforms: IBM System z (z/TPF), ESA/390 (TPF4)
Kernel type: Real-time
Default user interface: 3215, 3270
License: Proprietary monthly license charge (MLC)
Official website: z/TPF Product Page

Transaction Processing Facility (TPF)[2] is an IBM real-time operating system for mainframe computers descended from the IBM System/360 family, including zSeries and System z9.

TPF delivers fast, high-volume, high-throughput transaction processing, handling large, continuous loads of essentially simple transactions across large, geographically dispersed networks.

While there are other industrial-strength transaction processing systems, notably IBM's own CICS and IMS, TPF's specialty is extreme volume, large numbers of concurrent users, and very fast response times. For example, it handles VISA credit card transaction processing during the peak holiday shopping season.[3][2]

The TPF passenger reservation application PARS, or its international version IPARS, is used by many airlines. PARS is an application program; TPF is an operating system.

One of TPF's major optional components is a high performance, specialized database facility called TPF Database Facility (TPFDF).[4]

A close cousin of TPF, the transaction monitor ALCS, was developed by IBM to integrate TPF services into the more common mainframe operating system MVS, now z/OS.

History

TPF evolved from the Airline Control Program (ACP), a free package developed in the mid-1960s by IBM in association with major North American and European airlines. In 1979, IBM introduced TPF as a replacement for ACP, and as a priced software product. The new name reflected the product's broader scope and its expansion beyond airline-related entities.

TPF was traditionally an IBM System/370 assembly language environment for performance reasons, and many TPF assembler applications persist. However, more recent versions of TPF encourage the use of C. Another programming language called SabreTalk was born and died on TPF.

IBM announced the delivery of the current release of TPF, dubbed z/TPF V1.1, in September 2005. Most significantly, z/TPF adds 64-bit addressing and mandates use of the 64-bit GNU development tools.[5][6]

The GCC compiler and the DIGNUS Systems/C++ and Systems/C are the only supported compilers for z/TPF. The Dignus compilers offer reduced source code changes when moving from TPF 4.1 to z/TPF.

Users

Current users include Sabre (reservations), VISA Inc. (authorizations), American Airlines,[7] American Express (authorizations), DXC Technology SHARES (reservations), Amtrak, Marriott International, Travelport (Galileo, Apollo, Worldspan), Citibank, Trenitalia (reservations), Delta Air Lines (reservations and operations) and Japan Airlines.[8]

Operating environment

Tightly coupled

Although IBM's 3083 was aimed at running TPF on a "fast... uniprocessor",[9] TPF is capable of running on a multiprocessor, that is, on systems in which there is more than one CPU. Within the LPAR, the CPUs are referred to as instruction streams or simply I-streams. When running on an LPAR with more than one I-stream, TPF is said to be running tightly coupled. TPF adheres to SMP concepts; no concept of NUMA-based distinctions between memory addresses exists.

The depth of the CPU ready list is measured as each incoming transaction is received, and the transaction is queued for the I-stream with the lowest demand, thus maintaining continuous load balancing among available processors. In cases where loosely coupled configurations are populated by multiprocessor CPCs (Central Processing Complex, i.e. the physical machine packaged in one system cabinet), SMP takes place within the CPC as described here, whereas sharing of inter-CPC resources takes place as described under Loosely coupled, below.
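In C terms, this dispatch policy amounts to choosing the I-stream with the shallowest ready list. The sketch below is illustrative only; the structure names, the queue representation, and the fixed I-stream count are assumptions, not TPF internals:

```c
#include <stddef.h>

#define NUM_ISTREAMS 4          /* illustrative; the real count comes from the LPAR */

/* Hypothetical per-I-stream ready list; only its depth matters here. */
struct istream {
    unsigned ready_depth;       /* number of queued work items */
};

static struct istream istreams[NUM_ISTREAMS];

/* Pick the I-stream with the shallowest ready list. */
static size_t least_loaded_istream(void)
{
    size_t best = 0;
    for (size_t i = 1; i < NUM_ISTREAMS; i++)
        if (istreams[i].ready_depth < istreams[best].ready_depth)
            best = i;
    return best;
}

/* Queue one incoming transaction on the least-loaded I-stream. */
void dispatch_transaction(void)
{
    size_t target = least_loaded_istream();
    istreams[target].ready_depth++;     /* stand-in for the actual enqueue */
}
```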

In the TPF architecture, all memory (except for a 4KB-sized prefix area) is shared among all I-streams. In instances where memory-resident data must or should be kept separated by I-stream, the programmer typically divides an allocated storage area into a number of subsections equal to the number of I-streams, then accesses the desired I-stream-associated area by taking the base address of the allocated area and adding to it the product of the I-stream relative number and the size of each subsection.
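In C, that address computation reduces to base + n * size. A minimal sketch, where the function name and parameters are assumptions for illustration and not a TPF API:

```c
#include <stddef.h>
#include <stdint.h>

/* Return a pointer to the subsection belonging to I-stream `n`
 * (its 0-based relative number), given an area divided into equal
 * per-I-stream parts. Illustrative only, not a TPF service. */
static inline void *istream_area(void *base, size_t subsection_size, unsigned n)
{
    return (uint8_t *)base + (size_t)n * subsection_size;
}
```

For example, istream_area(counters, sizeof(struct stats), 2) would yield the private area belonging to the third I-stream.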

Loosely coupled

TPF is capable of supporting multiple mainframes (of any size themselves, whether single I-stream or multiple I-stream) connecting to and operating on a common database. Currently, 32 IBM mainframes may share the TPF database; such a system in operation would be called 32-way loosely coupled. The simplest loosely coupled system would be two IBM mainframes sharing one DASD (Direct Access Storage Device). In this case, the control program would be equally loaded into memory, and each program or record on DASD could potentially be accessed by either mainframe.

In order to serialize accesses between data records on a loosely coupled system, a practice known as record locking must be used. This means that when one mainframe processor obtains a hold on a record, the mechanism must prevent all other processors from obtaining the same hold and notify the requesting processors that they must wait. Within any tightly coupled system, this is easy to manage between I-streams via the use of the Record Hold Table. However, when the lock is obtained offboard of the TPF processor in the DASD control unit, an external process must be used. Historically, the record locking was accomplished in the DASD control unit via an RPQ known as LLF (Limited Locking Facility) and later ELLF (extended). LLF and ELLF were both replaced by the Multipathing Lock Facility (MPLF). To run clustered (loosely coupled) z/TPF requires either MPLF in all disk control units or an alternative locking device called a Coupling Facility.[10][11]
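As a minimal sketch of the tightly coupled case, a record hold table can be modeled as a table keyed by file address; every name, the table size, and the pthread-based serialization here are assumptions of the sketch, not TPF's implementation:

```c
#include <pthread.h>
#include <stdbool.h>

#define HOLD_TABLE_SIZE 1024   /* illustrative capacity */

/* One entry per held record: the file address and who holds it. */
struct hold_entry {
    unsigned long file_addr;   /* record's file address; 0 = free slot */
    unsigned      holder;      /* I-stream (or processor) holding it */
};

static struct hold_entry hold_table[HOLD_TABLE_SIZE];
static pthread_mutex_t   hold_lock = PTHREAD_MUTEX_INITIALIZER;

/* Try to obtain a hold on `file_addr`; returns false if another
 * holder already has it, in which case the caller must wait and retry. */
bool try_hold(unsigned long file_addr, unsigned holder)
{
    bool granted = false;
    int free_slot = -1;
    pthread_mutex_lock(&hold_lock);
    for (int i = 0; i < HOLD_TABLE_SIZE; i++) {
        if (hold_table[i].file_addr == file_addr)
            goto out;                     /* already held elsewhere */
        if (hold_table[i].file_addr == 0 && free_slot < 0)
            free_slot = i;
    }
    if (free_slot >= 0) {
        hold_table[free_slot].file_addr = file_addr;
        hold_table[free_slot].holder    = holder;
        granted = true;
    }
out:
    pthread_mutex_unlock(&hold_lock);
    return granted;
}

/* Release a hold so waiting processors can acquire it. */
void release_hold(unsigned long file_addr)
{
    pthread_mutex_lock(&hold_lock);
    for (int i = 0; i < HOLD_TABLE_SIZE; i++)
        if (hold_table[i].file_addr == file_addr)
            hold_table[i].file_addr = 0;
    pthread_mutex_unlock(&hold_lock);
}
```

The loosely coupled case differs in that this state must live outside any single processor, historically in the DASD control unit (MPLF) or in a Coupling Facility, as described above.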

Processor shared records

Records that absolutely must be managed by a record locking process are those which are processor shared. In TPF, most record accesses are done by using record type and ordinal. Given a record type in the TPF system of 'FRED' with 100 records or ordinals, in a processor shared scheme, record type 'FRED' ordinal '5' would resolve to exactly the same file address on DASD regardless of which processor performs the access, necessitating the use of a record locking mechanism.

All processors in the complex access a processor shared record via the same file address, which resolves to the same physical location.

Processor unique records

A processor unique record is one that is defined such that each processor expected to be in the loosely coupled complex has its own record type 'FRED' with, perhaps, 100 ordinals. However, if users on any two or more processors examine the file address that record type 'FRED', ordinal '5' resolves to, they will note that a different physical address is used on each processor.
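The distinction can be sketched in C as follows. Both functions are invented for illustration, since the article does not describe TPF's actual addressing algorithm; the point is only that the shared form excludes the processor identity while the unique form folds it in:

```c
#include <stdint.h>

/* Processor shared: every processor computes the same address for
 * ('FRED', ordinal), so access must be serialized by record locking. */
uint64_t shared_file_addr(uint64_t rec_type_base, uint32_t ordinal)
{
    return rec_type_base + ordinal;
}

/* Processor unique: the owning processor's ID is folded into the
 * address, so 'FRED' ordinal 5 lands at a different physical address
 * on each processor and no cross-processor lock is needed. */
uint64_t unique_file_addr(uint64_t rec_type_base, uint32_t ordinal,
                          uint32_t processor_id, uint32_t ordinals_per_proc)
{
    return rec_type_base
         + (uint64_t)processor_id * ordinals_per_proc
         + ordinal;
}
```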

TPF attributes

What TPF is not

TPF is not a general-purpose operating system. TPF's specialized role is to process transaction input messages, then return output messages on a 1:1 basis at extremely high volume with short maximum elapsed time limits.

TPF has no built-in graphical user interface functionality, and TPF has never offered direct graphical display facilities: to implement them on the host would be considered an unnecessary and potentially harmful diversion of real-time system resources. TPF's user interface is command-line driven with simple text display terminals that scroll upward, and there are no mouse-driven cursors, windows, or icons on a TPF Prime CRAS[12] (Computer room agent set, which is best thought of as the "operator's console"). Character messages are intended to be the mode of communication with human users. All work is accomplished via the command line, similar to UNIX without X. There are several products available which connect to Prime CRAS and provide graphical interface functions to the TPF operator, such as TPF Operations Server.[13] Graphical interfaces for end users, if desired, must be provided by external systems. Such systems perform analysis on character content (see Screen scraping) and convert the message to or from the desired graphical form, depending on its context.

Being a special-purpose operating system, TPF does not host a compiler/assembler or text editor, nor does it implement the concept of a desktop as one might expect to find in a general-purpose operating system. TPF application source code is commonly stored in external systems and likewise built "offline". Starting with z/TPF 1.1, Linux is the supported build platform; executable programs intended for z/TPF operation must observe the ELF format for s390x-ibm-linux.

Using TPF requires knowledge of its Command Guide,[14] since there is no support for an online command "directory" or "man"/help facility to which users might be accustomed. Commands created and shipped by IBM for the system administration of TPF are called "functional messages", commonly referred to as "Z-messages", as they are all prefixed with the letter "Z". Other letters are reserved so that customers may write their own commands.

TPF implements debugging in a distributed client-server mode, which is necessary because of the system's headless, multi-processing nature: pausing the entire system in order to trap a single task would be highly counterproductive. Debugger packages have been developed by third party vendors who took very different approaches to the "break/continue" operations required at the TPF host, each implementing unique communications protocols for traffic between the human developer running the debugger client and the server-side debug controller, as well as differing in the form and function of debugger operations at the client side. Two examples of third party debugger packages are Step by Step Trace from Bedford Associates[15] and the CMSTPF, TPF/GI, and zTPFGI family from TPF Software, Inc.[16] Neither package is wholly compatible with the other, nor with IBM's own offering. IBM's debugging client is packaged in an IDE called IBM TPF Toolkit.[17]

What TPF is

TPF is highly optimized to permit messages from the supported network either to be switched out to another location or routed to an application (a specific set of programs), and to permit extremely efficient accesses to database records.

Data records

Historically, all data on the TPF system had to fit in fixed record (and memory block) sizes of 381, 1055 and 4K bytes. This was due in part to the physical record sizes of blocks located on DASD. Much overhead was saved by freeing the operating system from breaking large data entities into smaller ones during file operations and from reassembling them during read operations. Since IBM hardware does I/O via the use of channels and channel programs, TPF would generate very small and efficient channel programs to do its I/O, all in the name of speed. Since the early days also placed a premium on the size of storage media, whether memory or disk, TPF applications evolved into doing very powerful things while using very little resource.

Today, most of these limitations have been removed. In fact, smaller-than-4K DASD records remain in use only for legacy support. With the advances made in DASD technology, a read/write of a 4K record is just as efficient as a read/write of a 1055-byte record. The same advances have increased the capacity of each device, so there is no longer a premium placed on packing data into the smallest possible space.

Programs and residency

TPF also had its program segments allocated as 381-, 1055- and 4K-byte records at different points in its history. Each segment consisted of a single record, with a typically comprehensive application requiring perhaps tens or even hundreds of segments. For the first forty years of TPF's history, these segments were never link-edited. Instead, the relocatable object code (direct output from the assembler) was laid out in memory, had its internal (self-referential) relocatable symbols resolved, and then the entire image was written to file for later loading into the system. This created a challenging programming environment in which segments related to one another could not directly address each other, with control transfer between them implemented via the ENTER/BACK system service.

In ACP/TPF's earliest days (circa 1965), memory space was severely limited, which gave rise to a distinction between file-resident and core-resident programs: only the most frequently used application programs were written into memory and never removed (core-residency); the rest were stored on file and read in on demand, with their backing memory buffers released after execution.

The introduction of the C language to TPF at version 3.0 was first implemented conformant to segment conventions, including the absence of linkage editing. This scheme quickly demonstrated itself to be impractical for anything other than the simplest of C programs. At TPF 4.1, truly and fully linked load modules were introduced. These were compiled with the z/OS C/C++ compiler using TPF-specific header files and linked with IEWL, resulting in a z/OS-conformant load module, which in no manner could be considered a traditional TPF segment. The TPF loader was extended to read the z/OS-unique load module file format and lay out file-resident load modules' sections into memory; meanwhile, assembly language programs remained confined to TPF's segment model, creating an obvious disparity between applications written in assembler and those written in higher level languages (HLLs).

At z/TPF 1.1, all source language types were conceptually unified and fully link-edited to conform to the ELF specification. The segment concept became obsolete, meaning that any program written in any source language, including assembler, may now be of any size. Furthermore, external references became possible, and separate source code programs that had once been segments could now be directly linked together into a shared object. Critical legacy applications can therefore benefit from improved efficiency through simple repackaging: calls made between members of a single shared object module have a much shorter pathlength at run time than calls to the system's ENTER/BACK service. Members of the same shared object may now share writeable data regions directly, thanks to copy-on-write functionality also introduced at z/TPF 1.1; this coincidentally reinforces TPF's reentrancy requirements.
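As a hedged illustration of the repackaging benefit, consider two routines that were once separate segments reachable only via ENTER/BACK, now compiled into one shared object; all names below are hypothetical:

```c
/* Formerly two separate segments, each a single record, reachable
 * only through the ENTER/BACK system service. Linked into one ELF
 * shared object, the call below compiles to a plain branch-and-link
 * resolved at link-edit time, with no trip through the control program. */

static long fare_lookup(int city_pair)     /* was segment "QF01" (hypothetical) */
{
    return city_pair * 100L;               /* placeholder computation */
}

long price_itinerary(int city_pair)        /* was segment "QF00" (hypothetical) */
{
    return fare_lookup(city_pair);         /* direct call: short pathlength */
}
```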

The concepts of file residency and memory residency were also made obsolete, due to a z/TPF design point which sought to have all programs resident in memory at all times.

Since z/TPF had to maintain a call stack for high-level language programs, which gave HLL programs the ability to benefit from stack-based memory allocation, it was deemed beneficial to extend the call stack to assembly language programs on an optional basis, which can reduce memory pressure and ease recursive programming.

All z/TPF executable programs are now packaged as ELF shared objects.

Memory usage

Historically, and in step with the above, core blocks (memory) were also 381, 1055 and 4K bytes in size. Since all memory blocks had to be of these sizes, most of the overhead for obtaining memory found in other systems was discarded. The programmer merely needed to decide what size block would fit the need and ask for it; TPF would maintain a list of blocks in use and simply hand out the first block on the available list.

Physical memory was divided into sections reserved for each size, so a 1055-byte block always came from one section and returned there; the only overhead needed was to add its address to the appropriate physical block table's list. No compaction or garbage collection was required.
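A minimal sketch of such a fixed-size pool allocator in C follows; the pool sizes mirror the historical block sizes, while the structures, function names, and the malloc-seeded backing region are assumptions for illustration:

```c
#include <stddef.h>
#include <stdlib.h>

/* One pool per historical block size. Each free block's first bytes
 * hold the link to the next free block, so handing one out or taking
 * one back is a single list-head update: no search, no compaction. */
struct block { struct block *next; };

struct pool {
    size_t        block_size;
    struct block *free_list;
};

static struct pool pools[] = {
    { 381, NULL }, { 1055, NULL }, { 4096, NULL },
};

/* Seed a pool with `count` blocks carved from one allocated region. */
static void pool_init(struct pool *p, size_t count)
{
    char *region = malloc(p->block_size * count);
    for (size_t i = 0; i < count; i++) {
        struct block *b = (struct block *)(region + i * p->block_size);
        b->next = p->free_list;
        p->free_list = b;
    }
}

/* Get a block: pick the smallest pool that fits, pop the list head. */
static void *block_get(size_t need)
{
    for (size_t i = 0; i < sizeof pools / sizeof pools[0]; i++) {
        if (need <= pools[i].block_size && pools[i].free_list) {
            struct block *b = pools[i].free_list;
            pools[i].free_list = b->next;
            return b;
        }
    }
    return NULL;   /* no block of a fitting size available */
}

/* Return a block to its pool: push it back onto the list head. */
static void block_release(void *blk, size_t pool_index)
{
    struct block *b = blk;
    b->next = pools[pool_index].free_list;
    pools[pool_index].free_list = b;
}
```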

As applications became more advanced, demands for memory increased, and once C became available, memory chunks of indeterminate or large size were required. This gave rise to the use of heap storage and memory management routines. To ease the overhead, TPF memory was broken into frames of 4 KB in size (1 MB with z/TPF). If an application needs a certain number of bytes, the number of contiguous frames required to fill that need is granted.

References

  1. ^"z/TPF, z/TPFDF, TPF Operations Server, and TPF Toolkit 4.6 for 2025". IBM.
  2. ^abSteve Lohr (October 4, 2004)."IBM Updates Old Workhorse to Use Linux".The New York Times.
  3. ^Michelle Louzoun (August 24, 1987). "Visa Is Everywhere It Wants To Be".InformationWeek. p. 19.
  4. ^IBM Corporation."TPF Database Facility (TPFDF)".z/Transaction Processing Facility. RetrievedNovember 11, 2016.
  5. ^"IBM bolsters its mainframe platform".Computerworld.
  6. ^Jennifer Mears."IBM pumps up Linux virtual machines on mainframe OS".Computerworld.
  7. ^"TPF Users Group, Job Corner". Archived fromthe original on 2000-01-15.
  8. ^"IBM News room - 2008-04-14 Japan Airlines International to Upgrade Reservation and Ticketing System With IBM Mainframe - United States".03.ibm.com. 2008-04-14. Archived fromthe original on September 24, 2009. Retrieved2017-03-15.
  9. ^Anne & Lynn Wheeler."IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))".Newsgroupalt.folklore.computers.
  10. ^"IBM Knowledge Center".Publib.boulder.ibm.com. 2014-10-24. Retrieved2017-03-15.
  11. ^"IBM z/Transaction Processing Facility Enterprise Edition V1.1 hardware requirements - United States".www-01.ibm.com. Archived fromthe original on 7 October 2012. Retrieved17 January 2022.
  12. ^IBM Corporation (19 Apr 2018)."z/TPF Glossary".IBM. Retrieved10 May 2018.
  13. ^IBM Corporation (19 April 2018)."IBM TPF Operations Server".IBM. Retrieved10 May 2018.
  14. ^IBM Corporation (29 January 2019)."z/TPF Operations Command Guide".IBM.
  15. ^Bedford Associates."Bedford Associates, Inc". RetrievedOctober 17, 2012.
  16. ^TPF Software."TPF Software, Inc". RetrievedOctober 17, 2012.
  17. ^IBM Corporation (Dec 2017)."IBM TPF Toolkit Overview".IBM. Retrieved10 May 2018.

Bibliography

  • Transaction Processing Facility: A Guide for Application Programmers (Yourdon Press Computing Series) by R. Jason Martin (Hardcover, April 1990), ISBN 978-0139281105
