What is a mainframe (big iron)?

By Paul Kirvan
Published: Apr 12, 2023

What is a mainframe computer?

A mainframe, also known as big iron, is a high-performance computer used for large-scale, compute-intensive purposes and tasks that require greater availability and security than smaller-scale machines. Historically, mainframes have been associated with centralized rather than distributed computing. But that distinction has blurred as smaller types of computers become more powerful and mainframes increasingly multipurpose.

Photo of IBM System z
The IBM System z provides a high level of data privacy, security and resiliency.

The original mainframes were housed in room-sized metal frames that occupied 2,000 to 10,000 square feet. Newer mainframes are about the size of a large refrigerator, so they fit in the modern data center more easily. Smaller-scale IBM mainframes serve distributed users and act as smaller servers in a computing network.

The mainframe is sometimes referred to as a dinosaur, not only because of its size but also because of predictions, going back many years, of its extinction. In the early 1990s, experts predicted the demise of the mainframe by the end of that decade. However, in February 2008, IBM released the z10 mainframe, running the z/OS, z/VM, z/VSE and z/TPF mainframe operating systems (OSes), as well as Linux. Today, companies, government agencies and other organizations continue to use mainframes for back-office transactions and data processing, as well as web-based applications.

A brief history of the mainframe

IBM is credited with building the original mainframe computer, the Harvard Mark I, which was designed by Howard Aiken. It ran its first programs in 1944.

Commercial mainframes hit the market in the 1950s, starting with Remington Rand's UNIVAC, introduced in 1951. Through the 1970s and 1980s, IBM remained the leader in the mainframe market, with large organizations as its main customers. However, many vendors sold mainframe-class machines, including Amdahl, Burroughs, Control Data, Data General, Digital Equipment, Fujitsu, Hewlett-Packard, Hitachi, Honeywell, NCR, RCA, Scientific Data Systems, Siemens and Sperry Univac.

Plug-compatible mainframes (PCMs) that competed with IBM mainframes emerged during those decades. They were cheaper yet more powerful than comparable IBM products. Vendors such as Amdahl, Fujitsu and Hitachi offered computer systems that had their own central processing units (CPUs) and could use the IBM System/370 instruction set. That meant apps configured to run in IBM mainframes could also run in PCM systems.

Plug-compatible peripheral systems, such as memory and disk drives, were less expensive than IBM equipment, but they could support IBM environments. Vendors of these peripherals included Amdahl, IPL Systems, Memorex, Storage Technology and Telex. Considering that one megabyte of memory for an IBM System/370 mainframe in the late 1970s and early 1980s could cost $100,000, competing with IBM presented many opportunities for plug-compatible manufacturers.

Most of those competing firms either no longer exist or merged to create other companies. Today, IBM still owns much of the mainframe market.

How are mainframes used today?

Mainframes aren't as ubiquitous today as in the past, but they still play a significant role in several industries. They handle large-scale, data-intensive workloads and process huge amounts of data fast. They are often used for high-volume transaction processing, batch processing, data warehousing and analytics. Modern mainframes can run multiple OSes simultaneously and support cloud computing and virtual environments.
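The batch-processing style mentioned above can be sketched in miniature: read a large run of transaction records, update master data and report totals at the end of the run. This is an illustrative sketch only; the account names and record layout are assumptions, not any real mainframe API.

```python
from decimal import Decimal

def run_batch(transactions, balances):
    """Apply a batch of (account, amount) records to account balances.

    Mimics the classic mainframe batch pattern: process many records
    against a master file in one run, then report the count processed.
    """
    processed = 0
    for account, amount in transactions:
        # Decimal avoids the rounding errors binary floats introduce
        # in monetary arithmetic.
        balances[account] = balances.get(account, Decimal("0")) + amount
        processed += 1
    return processed

# Hypothetical master data and a small transaction batch.
balances = {"A-100": Decimal("250.00")}
batch = [("A-100", Decimal("-75.50")), ("B-200", Decimal("120.00"))]
count = run_batch(batch, balances)
# count == 2; A-100 ends at 174.50, B-200 at 120.00
```

Real mainframe batch jobs run at vastly larger scale, but the shape of the work, sequential record processing against master data, is the same.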

Among the industries where mainframes continue to have a significant role are the following:

  • Banking and financial companies. These use mainframes to process large volumes of transactions and to handle high-frequency trading in the financial markets.
  • Healthcare providers. They depend on mainframes to provide the security, dependability and scalability they need to manage patient data and data storage.
  • Government agencies. These include the military and the Internal Revenue Service. They rely on mainframes to handle large databases and data processing tasks.
  • Transportation providers. They use these machines to manage traffic control, scheduling and reservation systems.
  • Retailers. Large online retailers, in particular, use mainframes to track sales and inventory data.
Photo of the Perlmutter supercomputer
The Perlmutter supercomputer, built with Nvidia GPUs, is used in astrophysics and climate science research.

Supercomputers vs. mainframe systems

In the 1960s, the term supercomputer started being used to describe the fastest, most powerful computers. Control Data's 6600, designed by Seymour Cray, was the first to get that label. Supercomputers were designed for highly compute-intensive processing tasks, such as simulating nuclear weapons and forecasting the weather.

These systems used 64-bit word sizes, as opposed to the mainframes of the era, which used 16-bit and 32-bit structures. Supercomputers used multiple CPUs operating in parallel to achieve ultra-high-speed processing. Their processing speed was significantly faster than that of personal computers, servers and mainframes.
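The word-size difference above is simply the range of values a single machine word can represent. A quick arithmetic illustration (plain math, not tied to any specific machine):

```python
def max_unsigned(bits):
    """Largest unsigned integer an n-bit machine word can hold."""
    return 2**bits - 1

for bits in (16, 32, 64):
    print(f"{bits}-bit word: max unsigned value {max_unsigned(bits):,}")
# 16-bit: 65,535; 32-bit: 4,294,967,295; 64-bit: 18,446,744,073,709,551,615
```

A 64-bit word also lets a single arithmetic operation carry far more precision, which is why scientific workloads favored it long before it became standard elsewhere.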

Learn more about where the mainframe fits in the modern data center.

Continue Reading About mainframe (big iron)

  • IBM Z as a service brings mainframes closer to hybrid clouds
  • Modernize and migrate mainframe apps to the cloud
  • To maximize potential, mainframers must reach out and engage
  • Has the cloud caught up with the mainframe?
  • COBOL application modernization tools and techniques
