A quadrillion mainframes on your lap



Every time I hear someone rhapsodize about how much more computing power we have now compared with what was available in the 1960s, around the time of Apollo, I cringe. Such comparisons generally grossly underestimate the difference.

By 1961, a few universities around the world had bought IBM 7090 mainframe computers. The 7090 was the first line of all-transistor computers, and it cost US $20 million in today's money, about 6,000 times as much as a high-end laptop today. Its early buyers typically deployed the machines as a shared resource for an entire campus, and very few users were fortunate enough to get as much as an hour of computer time per week.


The 7090 had a clock cycle of 2.18 microseconds, so its operating frequency was just under 500 kilohertz. But instructions were not pipelined in those days, so most took more than one cycle to execute. Some integer-arithmetic operations took up to 14 cycles, and a floating-point operation could monopolize up to 15. The 7090 is generally estimated to have executed about 100,000 instructions per second. Most modern computer cores can sustain a rate of 3 billion instructions per second, with much faster peak speeds. That is 30,000 times as fast, so a modern chip with four or eight cores is easily 100,000 times as fast.
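As a rough sanity check, here is a back-of-the-envelope sketch in Python using the figures above; the 3-billion-instructions-per-second sustained rate and the four-core count are the assumptions stated in the text, not measurements.

```python
# Back-of-the-envelope comparison of instruction rates, using figures from the text.
CYCLE_TIME_S = 2.18e-6               # IBM 7090 clock cycle: 2.18 microseconds
clock_hz = 1 / CYCLE_TIME_S          # about 459 kHz, "just under 500 kilohertz"

IBM_7090_IPS = 100_000               # commonly cited estimate for the 7090
MODERN_CORE_IPS = 3_000_000_000      # sustained rate assumed for one modern core
CORES = 4                            # a modest modern laptop chip

per_core_ratio = MODERN_CORE_IPS / IBM_7090_IPS     # 30,000x per core
chip_ratio = per_core_ratio * CORES                 # roughly 100,000x for the chip

print(f"7090 clock: {clock_hz / 1e3:.0f} kHz")
print(f"Per-core speedup: {per_core_ratio:,.0f}x; whole chip: ~{chip_ratio:,.0f}x")
```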

Unlike the lucky 1961 user who got an hour of computer time a week, you can use your laptop all the time, racking up the equivalent of more than 1,900 years of 7090 computer time every week. (Far be it from me to ask how many of those hours are spent on Minecraft.)
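The 1,900-year figure follows directly from the roughly 100,000x speedup; a minimal sketch, assuming a full week of continuous laptop use:

```python
# How much 7090 time does one week of continuous laptop use represent?
SPEEDUP = 100_000                 # overall instruction-rate advantage from above
HOURS_PER_WEEK = 24 * 7           # 168 hours of laptop time
HOURS_PER_YEAR = 24 * 365.25

equivalent_7090_hours = HOURS_PER_WEEK * SPEEDUP
equivalent_7090_years = equivalent_7090_hours / HOURS_PER_YEAR
print(f"{equivalent_7090_years:,.0f} years of 7090 time per laptop-week")
# roughly the 1,900 years quoted above
```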

Continuing with this comparison, consider the number of instructions needed to train the popular natural-language AI model GPT-3. Training it on cloud servers took the equivalent of 355 years of laptop time, which translates to more than 36 million years on the 7090. You would need a great deal of coffee while you waited for that job to finish.
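That figure is the same 100,000x factor applied to the 355 laptop-years; with the rounded speedup the product comes out just under the 36 million years quoted, which is within the spirit of the estimate.

```python
# Scaling the GPT-3 training estimate from laptop time to 7090 time.
LAPTOP_YEARS_FOR_GPT3 = 355       # equivalent laptop time quoted in the text
SPEEDUP = 100_000                 # laptop vs. 7090 instruction rate

years_on_7090 = LAPTOP_YEARS_FOR_GPT3 * SPEEDUP
print(f"{years_on_7090 / 1e6:.1f} million years on the 7090")
# ~35.5 million with these rounded inputs
```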


But, really, this comparison is unfair to today's computers. Your laptop probably has 16 gigabytes of main memory; the 7090 maxed out at 144 kilobytes. To run the same program, the 7090 would have had to shuffle a great deal of data in and out of memory, and that would have to be done using magnetic tape. The best tape drives of the day had a maximum data-transfer rate of 60 KB per second. Although 12 tape drives could be attached to a single 7090, that throughput had to be shared among them, and such sharing would require a team of human operators swapping tapes on the drives. Reading (or writing) 16 GB of data this way would take about three days. So data transfer, too, was slower by a factor of about 100,000 compared with today's rates.
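The three-day figure is simply 16 GB divided by the 60-KB/s tape rate; a sketch, where the 6-GB/s modern transfer rate is my assumption for an NVMe-class SSD, chosen to make the 100,000x ratio concrete:

```python
# Time to stream 16 GB through a 7090-era tape drive, and the ratio to a modern drive.
DATA_BYTES = 16e9                 # 16 GB, roughly a modern laptop's main memory
TAPE_RATE = 60e3                  # 60 KB/s, best tape drives of the early 1960s
MODERN_RATE = 6e9                 # assumed ~6 GB/s for a current NVMe SSD

tape_seconds = DATA_BYTES / TAPE_RATE
print(f"Tape time: {tape_seconds / 86400:.1f} days")            # ~3.1 days
print(f"Transfer-rate ratio: {MODERN_RATE / TAPE_RATE:,.0f}x")  # ~100,000x
```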

So now the 7090 appears to have run at about a quadrillionth (10⁻¹⁵) the speed of your 2021 laptop. A week of computing time on a modern laptop would take longer than the age of the universe on the 7090.
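Taking the combined 10⁻¹⁵ factor at face value, a week of laptop work does indeed overshoot the age of the universe; a quick check, using the standard 13.8-billion-year cosmological estimate:

```python
# One week of laptop work, slowed down by the combined ~1e15 factor.
COMBINED_FACTOR = 1e15            # compute and data-transfer penalties combined
WEEK_SECONDS = 7 * 24 * 3600
YEAR_SECONDS = 365.25 * 24 * 3600
AGE_OF_UNIVERSE_YEARS = 13.8e9    # standard cosmological estimate

years_on_7090 = WEEK_SECONDS * COMBINED_FACTOR / YEAR_SECONDS
print(f"{years_on_7090:.1e} years, vs. {AGE_OF_UNIVERSE_YEARS:.1e} for the universe")
# ~1.9e13 years, more than a thousand times the age of the universe
```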

But wait, there's more! Every core of your laptop has SIMD (single instruction, multiple data) extensions that turbocharge floating-point arithmetic, used for vector operations. The 7090 had not even a whiff of those. And then there's the GPU, originally used for graphics acceleration but now used for much of machine learning, as in the training of GPT-3. And the latest iPhone chip, the A15 Bionic, has not one but five GPUs, as well as a bonus neural engine that performs 15 trillion arithmetic operations per second, on top of all the other comparisons we've made.
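For scale, the neural engine alone outruns the 7090's instruction rate by an enormous margin; a tiny sketch comparing the 15-trillion-ops figure with the 100,000-instructions-per-second estimate from earlier (operations and instructions are not strictly comparable, so this is only an order-of-magnitude illustration):

```python
# Ratio of the A15 neural engine's arithmetic rate to the 7090's instruction rate.
NEURAL_ENGINE_OPS = 15e12         # 15 trillion operations per second (A15 Bionic)
IBM_7090_IPS = 1e5                # ~100,000 instructions per second

print(f"Neural engine vs. 7090: {NEURAL_ENGINE_OPS / IBM_7090_IPS:.1e}x")
# about 150 million times
```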

The difference in just 60 years is staggering. But I wonder: are we using all this computation to make as much of a difference as our predecessors did when they switched from pencil and paper to the 7090?

This article appears in the January 2022 print issue under the title "So Moore."


