Batch processing
Computerized batch processing, since the introduction of the IBM System/360 in 1964, has primarily referred to the scripted running of one or more programs, as directed by Job Control Language (JCL), with no human interaction other than, if requested by the JCL, the mounting of one or more predetermined input or output computer tapes.
The computer's operating system, which pre-scans and deciphers the JCL,[1][2] optimizes the sequencing of this and other jobs to best make use of the system.
Much of this is done overnight, in the hours called the "batch window."[3][4]
History
The term "batch processing" originates in the traditional classification of methods of production as job production (one-off production), batch production (production of a "batch" of multiple items at once, one stage at a time), and flow production (mass production, all stages in process at once).
Early history
Prior to 1964,[5][6] "operating systems" were often called monitors,[7] and typically could not run more than one program at a time.
A series of programs, including the Fortran Monitor System, SOS (SHARE Operating System) and finally IBSYS, were the state of the art for IBM's 7090/7094 and 7040/7044 systems.[8] The first 7090s were delivered in November 1959.[9]
Later history
From the late 1960s onwards, interactive computing such as via text-based computer terminal interfaces (as in Unix shells or read-eval-print loops), and later graphical user interfaces, became common. Non-interactive computation, both one-off jobs such as compilation and processing of multiple items in batches, became retrospectively referred to as batch processing, and the oxymoronic term batch job (in early use often "batch of jobs") became common. Early use is particularly found at the University of Michigan, around the Michigan Terminal System (MTS).[10]
Although time-sharing did exist, its use was not robust enough for corporate data processing. None of this was related to the earlier unit record equipment, which was human-operated.
Ongoing
Non-interactive computation remains pervasive in computing, both for general data processing and for system "housekeeping" tasks (using system software). A high-level program (executing multiple programs, with some additional "glue" logic) is today most often called a script and written in scripting languages, particularly shell scripts for system tasks; in DOS, however, such a program is instead known as a batch file. Scripting of this kind is used on UNIX-based computers, Microsoft Windows, macOS (whose foundation is the BSD Unix kernel), and even smartphones. A running script, particularly one executed from an interactive login session, is often known as a job, but that term is used very ambiguously.
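A minimal sketch of such a script (the steps and data here are invented for illustration): it executes multiple programs in sequence with a little "glue" logic, and runs with no human interaction.

```shell
#!/bin/sh
# A batch-style shell script in the sense described above: several
# programs run in sequence, tied together by "glue" logic.
set -e                          # glue logic: abort the whole job on the first failure

tmp=$(mktemp)                   # stand-in for a real input data set
printf 'beta\nalpha\ngamma\n' > "$tmp"

sorted=$(sort "$tmp")           # step 1: run one program
count=$(wc -l < "$tmp")         # step 2: run another

rm -f "$tmp"                    # housekeeping at end of job
echo "$count records processed"
```

Because nothing in the script waits on a terminal, it can equally well be launched by an interactive user or by a scheduler.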
Modern systems
Batch applications are still critical in most organizations in large part because many common business processes are amenable to batch processing. While online systems can also function when manual intervention is not desired, they are not typically optimized to perform high-volume, repetitive tasks. Therefore, even new systems usually contain one or more batch applications for updating information at the end of the day, generating reports, printing documents, and other non-interactive tasks that must complete reliably within certain business deadlines.
Some applications are amenable to flow processing, namely those that only need data from a single input at once (not totals, for instance): start the next step for each input as it completes the previous step. In this case flow processing lowers latency for individual inputs, allowing them to be completed without waiting for the entire batch to finish. However, many applications require data from all records, notably computations such as totals. In this case the entire batch must be completed before one has a usable result: partial results are not usable.
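The contrast can be sketched in shell terms (the numbers and the per-record doubling step are made up for illustration): a per-record transformation can emit each result as soon as its input arrives, while a total can emit nothing until the last record has been read.

```shell
# Flow processing: each record is transformed independently, so a
# result for one record needs no other records.
doubled=$(printf '3\n5\n7\n' | while read -r n; do echo $((n * 2)); done)

# Batch-style aggregate: a total needs every record, so no usable
# result exists until the whole input has been consumed.
total=$(printf '3\n5\n7\n' | awk '{ s += $1 } END { print s }')

echo "$doubled"
echo "total=$total"
```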
Modern batch applications make use of modern batch frameworks such as Jem The Bee, Spring Batch or implementations of JSR 352[11] written for Java, and other frameworks for other programming languages, to provide the fault tolerance and scalability required for high-volume processing. In order to ensure high-speed processing, batch applications are often integrated with grid computing solutions to partition a batch job over a large number of processors, although there are significant programming challenges in doing so. High volume batch processing places particularly heavy demands on system and application architectures as well. Architectures that feature strong input/output performance and vertical scalability, including modern mainframe computers, tend to provide better batch performance than alternatives.
Scripting languages became popular as they evolved along with batch processing.[12]
Batch window
A batch window is "a period of less-intensive online activity",[13] when the computer system is able to run batch jobs without interference from, or with, interactive online systems.
A bank's end-of-day (EOD) jobs require the concept of cutover, the point at which transactions and data for a particular day's batch activity are cut off ("deposits after 3 PM will be processed the next day").
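A cutover rule of this kind can be sketched as follows (the 3 PM cutoff matches the example above; the transaction hour is a made-up stand-in):

```shell
# Hypothetical cutover check: transactions at or after the cutoff
# hour belong to the next day's batch activity.
cutoff=15                 # 3 PM cutover, per the example above
hour=16                   # stand-in for the transaction's hour, e.g. $(date +%H)

if [ "$hour" -lt "$cutoff" ]; then
    business_day="today"
else
    business_day="next-day"
fi
echo "$business_day"
```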
As requirements for online systems uptime expanded to support globalization, the Internet, and other business needs, the batch window shrank[4][14] and increasing emphasis was placed on techniques that would require online data to be available for a maximum amount of time.
Common batch processing usage
- Efficient bulk database updates and automated transaction processing, as contrasted with interactive online transaction processing (OLTP) applications. The extract, transform, load (ETL) step in populating data warehouses is inherently a batch process in most implementations.
- Performing various operations on digital images, such as resizing, converting, watermarking, or otherwise editing image files.
- Converting computer files from one format to another. For example, a batch job may convert proprietary and legacy files to common standard formats for end-user queries and display.
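A file-conversion job of the kind listed above might look like this sketch (the directory names and the uppercase "conversion" are invented stand-ins for a real format converter):

```shell
# Hypothetical batch conversion job: convert every .txt file in an
# input directory, writing results to an output directory, with no
# human interaction per file.
indir=$(mktemp -d)
outdir=$(mktemp -d)
printf 'hello\n' > "$indir/a.txt"
printf 'world\n' > "$indir/b.txt"

for f in "$indir"/*.txt; do
    base=$(basename "$f" .txt)
    tr 'a-z' 'A-Z' < "$f" > "$outdir/$base.out"   # stand-in conversion step
done

converted=$(ls "$outdir" | wc -l)
first=$(cat "$outdir/a.out")
rm -rf "$indir" "$outdir"                          # housekeeping
echo "$converted files converted"
```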
Notable batch scheduling and execution environments
The Unix programs cron, at, and batch (today, batch is a variant of at) allow for complex scheduling of jobs. Windows has a job scheduler. Most high-performance computing clusters use batch processing to maximize cluster usage.[15]
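For example, a crontab entry can schedule a recurring job inside a typical batch window (the script path and log file here are hypothetical):

```shell
# Hypothetical crontab line: run a report job at 02:30 every night,
# logging output instead of writing to a terminal.
# 30 2 * * *  /usr/local/bin/nightly-report.sh >> /var/log/nightly.log 2>&1

# One-off deferred execution: at(1) runs a job at a given time, and
# batch(1) runs it when system load permits (commands shown, not run):
# echo "/usr/local/bin/reindex.sh" | at 02:30
# echo "/usr/local/bin/reindex.sh" | batch
```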
The IBM mainframe z/OS operating system or platform has arguably the most highly refined and evolved set of batch processing facilities owing to its origins, long history, and continuing evolution. Today such systems commonly support hundreds or even thousands of concurrent online and batch tasks within a single operating system image. Technologies that aid concurrent batch and online processing include Job Control Language (JCL), scripting languages such as REXX, Job Entry Subsystem (JES2 and JES3), Workload Manager (WLM), Automatic Restart Manager (ARM), Resource Recovery Services (RRS), DB2 data sharing, Parallel Sysplex, unique performance optimizations such as HiperDispatch, I/O channel architecture, and several others.
See also
- Background process
- Batch file
- Batch queue - schedulers that plan the execution of batch jobs
- Batch renaming - renaming many files automatically, without human intervention, to save time and effort
- Batch scheduler
- BatchPipes - a utility that increases batch performance
- Jem The Bee - Job Entry Manager, the Batch Execution Environment
- Job Processing Cycle - a detailed description of batch processing in the mainframe field
- Processing modes
- Production support - support of batch jobs, schedules, and streams
References
- ^ "TYPRUN parameter".
scan a job's JCL for syntax errors
- ^ "Source Scanner for IBM JCL".
- ^ "Batch Model". IBM.com.
- ^ a b Batch Processing: Design – Build – Run: Applied Practices and Principles. O'Reilly. ISBN 9780470257630.
- ^ M Bullynck (2017). "What is an Operating System? A historical investigation (1954–1964)".
- ^ "z/OS Basic Skills Information Center: Mainframe concepts" (PDF).
... workloads are commonly associated with the mainframe; ... marvel at the ... of the IBM® System/360 in 1964
- ^ "Before operating systems, what concept was used to make them work".
The earliest electronic digital computers had no operating systems. Machines of the time were so primitive that ... had so called "Monitor" ...
- ^ "The Direct Couple for the IBM 7090". SoftwarePreservationGroup.org.
IBSYS was an operating system for the 7090 that evolved from SOS (SHARE Operating System)
- ^ "IBM delivers 7090 mainframe computers, November 30, 1959". November 30, 2017.
Today in 1959, IBM delivered the first two 7090 mainframe computers
- ^ "The Computing Center: Coming to Terms with the IBM System/360 Model 67". Research News. University of Michigan. 20 (Nov/Dec): 10. 1969.
- ^ "Batch Applications for the Java Platform". Java Community Process. Retrieved 2015-08-03.
- ^ "JSR352". IBM.com.
JSR 352, the open standard specification for Java batch processing. ... The programming languages used evolved over time based on what was available
- ^ "Mainframes working after hours: Batch processing". Mainframe concepts. IBM Corporation. Retrieved June 20, 2013.
- ^ "Traditionally batch was an overnight activity, with jobs processing millions of ... Today the batch window is ever decreasing with 24/7 availability requirements."
- ^ "High performance computing tutorial, with checklist and tips to optimize". January 25, 2018.
a multi-user, shared and smart batch processing system improves the scale ..... Most HPC clusters are in Linux