Computing Power & the Development of Computers


1965 was a long time ago. The moon's surface was free of any footprints. Four young men started playing instruments together and named themselves Pink Floyd. Sgt. Pepper was unknown and people would have stared at you strangely had you ever shouted "They think it's all over … it is now!"

You would imagine, therefore, that any forecast made about technology back then would by now be wildly inaccurate, its naivety chuckled at fondly by today's geeks. However, in 1965 a man named Gordon E. Moore – co-founder of annoying, be-jingled computer chip manufacturer Intel – made a prediction that has held true to this day, and will probably remain accurate for at least another decade.

So what was Moore's prediction?

The fact that we still have to hoover and wash up certainly rules out the possibility that it concerned robotic household servants, or indeed anti-gravity boots, as I cannot yet scoop the mouldy gunk from my upstairs guttering without the assistance of a ladder. Although his forecast was probably viewed sceptically at the time, it has indirectly led to some of the most exciting technology available today.

Moore foretold the rate of growth of computing power; specifically, that it would double roughly every two years. It is not necessary for the gist of this post to get into a discussion of how computing power is measured (according to Moore, it was the number of transistors – semiconductor devices used to amplify and switch signals – that could be squeezed onto a square inch of integrated circuit), just to understand that computers could process more and more information per second as time passed.
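To see how quickly a fixed doubling period runs away with itself, here is a minimal sketch in Python. The starting figure – roughly 2,300 transistors on Intel's 4004 chip of 1971 – is a well-known data point, used here purely for illustration.

```python
# A minimal sketch of Moore's law as stated above: transistor counts
# doubling roughly every two years from a known starting point.

def projected_transistors(start_year, start_count, target_year, doubling_years=2):
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = (target_year - start_year) / doubling_years
    return start_count * 2 ** doublings

# Intel 4004 (1971): ~2,300 transistors
for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{projected_transistors(1971, 2300, year):,.0f}")
```

Forty years of doubling every two years multiplies that starting count by about a million – which, broadly speaking, is exactly what happened.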

So how can that be put into context? In 1965, the year of Moore's prediction, DEC introduced the first commercially successful minicomputer, the PDP-8. This computer cost nearly $20,000 (around $130,000 in today's money), was the size of a refrigerator (do bear in mind this is a mini-computer we are talking about) and was capable of one million calculations per second. This sounds like a lot, certainly enough to do some simple sums on. Today, I am writing this on a desktop PC that cost under £1,000, fits happily under my desk and can do over three billion calculations per second.
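A quick back-of-the-envelope comparison of those two machines, using the rough figures above (and assuming, purely for a like-for-like comparison, that £1,000 is about $1,500):

```python
# Rough comparison of the PDP-8 and a modern desktop, using the
# approximate figures from the text - not precise benchmarks.
pdp8_ops, pdp8_cost = 1e6, 130_000   # ~1m calcs/sec; ~$130,000 in today's money
pc_ops, pc_cost = 3e9, 1_500         # ~3bn calcs/sec; ~$1,500 (assumed conversion)

print(f"Speed-up: {pc_ops / pdp8_ops:,.0f}x")                               # ~3,000x
print(f"Dollars per million calcs/sec, 1965: {pdp8_cost / (pdp8_ops / 1e6):,.2f}")
print(f"Dollars per million calcs/sec, today: {pc_cost / (pc_ops / 1e6):.4f}")
```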

Looking at supercomputers – those at the very edge of what is physically and technically possible – the differences over time are more marked still. The fastest machine of 1964, the CDC 6600, could perform around three million calculations per second. The Cray Jaguar, today's champion, ups that slightly. To around 1.8 million billion – that is, 1.8 petaflops.
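Taken at face value, those two figures let us check the doubling claim directly – a sketch, using the approximate numbers above:

```python
import math

# Growth of peak supercomputer speed, 1964 -> ~2010.
# Figures from the text: ~3 million calcs/sec then, ~1.8 petaflops now.
old_speed, new_speed = 3e6, 1.8e15
years = 2010 - 1964

doublings = math.log2(new_speed / old_speed)
print(f"{new_speed / old_speed:,.0f}x faster")                # ~600,000,000x
print(f"one doubling every {years / doublings:.1f} years")    # ~1.6 years
```

A doubling roughly every year and a half – comfortably in line with Moore's prediction.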

What then of storage, the amount of information a machine can hold on a rotating magnetic disc? In January 1980, Morrow launched a 26 megabyte hard drive which retailed at £2,500 (over £7,000 today). That means each megabyte of your data cost you nearly £100 (around £285 today) to store.

Memory is cheap now. A few seconds on Google Product Search tells me I can purchase 2 terabytes of disc space for £300. That works out at a mere 0.015 pence per megabyte, which would appear to be something of a bargain. Look at it the other way around if you wish: had the cost of storage remained the same as in 1980, that 2 TB disc would cost you around £190 million. Not exactly something you would pop into PC World for.
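The arithmetic, for anyone who wants to check it (prices as quoted above, and treating 2 TB as two million megabytes):

```python
# Storage cost per megabyte, 1980 vs today, using the prices in the text.
mb_1980, price_1980 = 26, 2_500       # Morrow 26 MB drive, £2,500
mb_now, price_now = 2_000_000, 300    # 2 TB (~2,000,000 MB), £300

per_mb_1980 = price_1980 / mb_1980    # ~£96 per MB
per_mb_now = price_now / mb_now       # ~£0.00015 per MB (0.015p)

print(f"1980: £{per_mb_1980:.2f}/MB, now: {per_mb_now * 100:.3f}p/MB")
print(f"2 TB at 1980 prices: £{per_mb_1980 * mb_now:,.0f}")   # ~£190 million
```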

Increased storage, and indeed the massive reduction in the physical size required for that storage, have been put to most effective use in today's portable music, video and personal-organisation devices, most notably of course by Apple's massively successful suite of i-products – Touches, Pads and Pods.

The largest capacity iPod Touch you can buy today is 64 GB, which, taking an average song length of 240 seconds (4 minutes) and one megabyte per 60 seconds at a standard bitrate, can store around 16,000 songs. That makes it an awful lot easier to make your collection mobile than in days gone by, when you would have had to cart more than 700 TDK90s around with you.
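The song-count sums, for the sceptical – all figures approximate: 4-minute songs at roughly one megabyte per minute (about 128 kbps):

```python
# Songs-on-an-iPod arithmetic from the paragraph above.
capacity_mb = 64 * 1_000        # 64 GB player, ~64,000 MB
song_mb = 4 * 1                 # 4 minutes x ~1 MB per minute
songs = capacity_mb // song_mb  # ~16,000 songs

tdk90_songs = 90 // 4           # a 90-minute cassette holds ~22 songs
print(f"{songs:,} songs, or about {songs // tdk90_songs:,} TDK90s")
```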

Thoughts then of course turn to the future. Just how powerful, how fast, will computers become?

If the current model is maintained, a natural, physical speed limit will soon be reached. The components that make up the chips cannot be shrunk indefinitely, and so the time electrons take to travel around the chip and between transistors will eventually stop falling. It is already expected that by 2025 these components will be measured on an atomic scale.

This seemingly unbreakable ceiling may yet be smashed, though. Scientists are working hard on developing viable quantum computers that would be capable of many thousands of times the speed of today's fastest machines. These futuristic devices would tame and take advantage of a strange sub-atomic world in which things can be in more than one place, and exist in more than one state, all at once. This confusing property allows quantum computers to achieve parallelism – that is, to do many things at the same time, and therefore to do them much more quickly.
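To make the "two states at once" idea slightly less mystical, here is a tiny classical simulation of a single qubit – purely illustrative, and obviously not a real quantum computation:

```python
import numpy as np

# A single qubit put into an equal mix of 0 and 1 by a Hadamard gate.
ket0 = np.array([1.0, 0.0])                      # qubit in state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

state = H @ ket0                                  # now (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2                # Born rule: measurement odds
print(probabilities)                              # [0.5, 0.5] - both at once
```

With n qubits the state vector has 2^n entries, and a single gate operation acts on all of them at once – that, in essence, is the parallelism in question.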

This technology is not as far-fetched as it sounds. The first experimental quantum algorithm was demonstrated in 1998. Things have come a long way since then.



By Simon Bishop

