My first personal computer was an IBM PC-XT with an 8088 (a 16-bit, single-core CPU with an 8-bit external bus) and 640 KB of RAM, which ran at an astounding 4.77 MHz clock speed. Today, even a lowly netbook runs at speeds in excess of 1.6 GHz (or 1600 MHz). At the other end of the scale, Intel and AMD are duking it out for the title of King of the Hill when it comes to processor speed (or rather, King of the Sand Dune, as the winner keeps slipping in an incessant display of one-upmanship).
In those days, the speed of the processor really made a difference. Anyone acquainted with the "Turbo" button on later-generation PCs could see a noticeable jump in performance depending on whether the *button* was depressed or not. Why would anyone want to switch "Turbo" off? Well, if you played computer games back then, running at a slower speed made it easier to stay alive, as it was less demanding on the reflexes (technically, this is not cheating, right?).
Fast-forward to the present, the modern age of quad-core processors which, even at idle, run at hundreds of megahertz. Imagine what a game of Digger or Moonbugs would be like if you could get it to run on one of today's computers...?
The question I ask myself is, have we reached the point of diminishing returns, where outright speed no longer makes a noticeable difference to the user? Looking at the software we have today, it is really quite hard to tell if the code is written as efficiently as possible to extract every last ounce of horsepower from the CPU (like in the old days of *demos*).
These days, other system key performance indicators come into play, and with them, the bottleneck shifts to the display, memory and hard-disk sub-systems. Adding more memory now could potentially make your system much faster than getting a processor with a higher clock speed. A famous figure in IT circles was once misquoted as saying "who would ever need more than 640 KB of memory?", although he vehemently denied ever saying it.
Somehow or other, where software is concerned, the adage "Nature abhors a vacuum" comes to mind. It appears that the faster and more powerful computers become, the higher the *minimum requirements* of any software become too. Is there some notion (perhaps originating from Marketing) that software not *optimized* to run on the latest, blazingly fast CPUs will somehow be inferior? Why do we need all that computing power just so my desktop can appear semi-translucently in 16.7 million colors anyway?
So, I decided to try a little evaluation. My laptop runs on a T7800 processor, which clocks in at around 2.6 GHz max. Thanks to the technical wizardry of modern power management, playing around with the power options makes it possible to set the maximum and minimum processor state to 0%, whatever that means. Doing this does not freeze up the computer, so I guess 0% does not equal 0 GHz.
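For anyone who would rather script this than click through the Power Options dialog, here is a minimal sketch of the same idea in Python, assuming Windows and the documented powercfg aliases (the helper name is my own invention):

```python
# Sketch: cap the processor state from a script instead of the Power Options
# dialog. Assumes Windows and the standard powercfg aliases
# (SCHEME_CURRENT, SUB_PROCESSOR, PROCTHROTTLEMAX, PROCTHROTTLEMIN).
import subprocess

def set_processor_state(max_percent: int, min_percent: int = 0) -> None:
    """Set the minimum/maximum processor state (on AC power) for the active plan."""
    for alias, value in (("PROCTHROTTLEMAX", max_percent),
                         ("PROCTHROTTLEMIN", min_percent)):
        subprocess.run(["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
                        "SUB_PROCESSOR", alias, str(value)], check=True)
    # Re-apply the current scheme so the change takes effect immediately.
    subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)

if __name__ == "__main__":
    set_processor_state(0)   # "0%" -- in practice the CPU just drops to its lowest multiplier
```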
Hmm, so far so good. Now, using a combination of RealTemp version 3.60 and Prime95 V25.11 Build 2, I decided to check out the processor clock speed at 100% load. First up, 100% load on 100% processor state.
So, the clock speed registers at around 2593.51 MHz. On the upper right, "199.50 x 13.0" denotes the current FSB base clock (actually, it's the 800 MHz FSB divided by 4, as Intel CPUs use quad-rate data transfer) and the multiplier. Note that I only ran this long enough for the clock speed to stabilize. Running it for prolonged periods brings the core temperature very close to the maximum rated junction temperature of 100°C. Now, what happens when I rein in the processor to 0% maximum state?
Interesting. The clock speed is now clamped to 798 MHz, even at full load. Thanks to Dynamic FSB Frequency Switching (more Intel Marketing-speak), this comes about as half the FSB base clock (99.75 MHz) times a reduced multiplier of 8. Core temperature is also around 18-20 °C lower....
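For the curious, both the arithmetic and the load test above can be reproduced without RealTemp or Prime95. Here is a minimal sketch in Python, assuming the psutil package (not something I actually used for this post): it checks the bus-clock-times-multiplier sums, then spins up one busy worker per core and samples the reported clock speed. You would still want RealTemp for the temperatures.

```python
# Minimal sketch: reproduce the observations above without RealTemp/Prime95.
# Assumes the psutil package; how "live" the frequency reading is depends on
# the platform, and core temperatures aren't exposed this way on Windows.
import multiprocessing as mp
import time

import psutil

def burn(seconds: float) -> None:
    """Keep one core at 100% for a fixed number of seconds."""
    end = time.time() + seconds
    x = 0
    while time.time() < end:
        x = (x * 3 + 1) % 1000003          # pointless arithmetic, purely to load the core

if __name__ == "__main__":
    # Sanity check on the numbers above: core clock = FSB base clock x multiplier.
    fsb_base = 199.5                       # nominal 200 MHz (800 MT/s quad-pumped bus / 4)
    print("full speed:", fsb_base * 13.0)        # 2593.5 MHz
    print("throttled :", (fsb_base / 2) * 8)     # 798.0 MHz (half bus clock, multiplier 8)

    # Load every core, then sample the reported clock speed for ~10 seconds.
    workers = [mp.Process(target=burn, args=(30.0,)) for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()
    for _ in range(10):
        load = psutil.cpu_percent(interval=1)
        freq = psutil.cpu_freq()
        print(f"load {load:5.1f}%  clock {freq.current:7.1f} MHz")
    for w in workers:
        w.join()
```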
So, how does this translate to performance? Well, most of the time, when going through e-mails, surfing the net, blogging, or compiling data, there is hardly any difference. These activities are quite light on computing resources, and are more dependent on other things, like Internet connection speed, typing speed, and how fast ideas come to mind. In such cases, having a faster computer (at least faster than this one) doesn't really bring about any improvement. If it did, then I would definitely upgrade all my engineers' PCs, so they can "compile data" faster...! :)
A quick glance at the performance graphs in the Windows Task Manager shows that even running at the lowest possible speed, the dual-core T7800 almost never hits 100% on both cores while doing the normal, mundane tasks mentioned above. I only got the CPU cores to peak by clicking wildly from one tab to another in a multi-tab Web browser, faster than I could read the contents of the pages.
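If you would rather log this than stare at Task Manager, per-core load can be sampled from a short script; the sketch below again assumes the psutil package:

```python
# Log per-core utilisation once a second -- a scripted stand-in for watching
# the Task Manager graphs. Assumes the psutil package is installed.
import psutil

for _ in range(60):                                     # sample for about a minute
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    flag = "  <-- a core is pegged" if max(per_core) >= 99 else ""
    print("  ".join(f"{p:5.1f}%" for p in per_core) + flag)
```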
Another added benefit is that my laptop no longer heats up as much as before. Sure, it's still warm, but not so hot that it practically fries the organic matter beneath it (my lap, what were you thinking...?). Running at this speed also results in longer battery life, which is much appreciated in meetings where I use my laptop as an electronic notebook. Incidentally, a senior colleague of mine chided us (I guess I was one of them) for not bringing around a diary or book to write down notes - but that's another story for a future blog post....
Well, there are times when the full processing power of whatever CPU you have will come in handy. Such is the case when encoding video / audio, rendering something in 3D software (like AutoCAD, perhaps) or applying some filters in Photoshop. Whenever I perform some batch resizing, I notice that the CPU really does run at 100%, so every bit makes a difference. However, 95% of the time, it's quite alright to be running in an *underclocked* state, provided that all other performance bottlenecks are taken care of.
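My own batch resizing happens in Photoshop, but as a rough illustration of why that kind of job pegs the CPU, here is a sketch in Python using the Pillow library (the library choice and the folder names are my own assumptions), with one resizing worker per core:

```python
# Sketch of CPU-bound batch resizing: one worker per core, so the CPU really
# does sit at 100% until the queue is empty. Assumes the Pillow package and a
# folder of JPEGs at the (hypothetical) path below.
from multiprocessing import Pool, cpu_count
from pathlib import Path

from PIL import Image

SRC = Path("photos")          # hypothetical input folder
DST = Path("photos_small")    # hypothetical output folder

def resize_one(path: Path) -> str:
    img = Image.open(path)
    img.thumbnail((1024, 768))                 # resize in place, keeping aspect ratio
    out = DST / path.name
    img.save(out, quality=85)
    return out.name

if __name__ == "__main__":
    DST.mkdir(exist_ok=True)
    files = sorted(SRC.glob("*.jpg"))
    with Pool(cpu_count()) as pool:            # one resizing worker per core
        for name in pool.imap_unordered(resize_one, files):
            print("done:", name)
```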
Note: My T7800 system is running with 4 GB RAM and a dedicated GPU with 128 MB of onboard memory. That's probably why, even with maximum underclocking, it is still faster than an Atom-based netbook....