Once upon a time, the clock rate of a processor was considered by many to be representative of a PC's speed. Today's PC users are far more savvy, recognising that clock speed alone doesn't reflect a processor's performance, and that the processor isn't the only component that dictates the speed.
Performance depends on many factors, and it's hard to gain an accurate picture of how a particular combination of processor, motherboard, memory configuration, graphics card and hard disk will perform. Yet in choosing a new PC or assessing the impact of an upgrade, it's important to be able to measure a PC's performance.
This is where benchmarking comes in, but with so many benchmarks to choose from, it's hard to know which to pick. The bottom line is that no one benchmark - or even one type of benchmark - will fulfil all possible requirements.
Different benchmarks are designed for different purposes, and you need to understand the differences when making your choice. Here, we'll introduce you to the main types of benchmark, highlight the pros and cons of each, and guide you through using them on your PC.
Benchmarks classified
Benchmarks typically fall into two categories - synthetic and application-based. All involve executing a program while measuring the time taken for the code to complete, but that's where the similarity ends.
A synthetic benchmark is a program written for the sole purpose of obtaining a figure that sums up the performance of a PC or its components. Synthetic benchmarks can, in turn, be divided into two sub-types.
The first subdivision is the component-level benchmark, which is probably the most familiar to PC users. This is a piece of software that exercises components individually. Strictly speaking that's impossible, because the execution of software generally involves multiple components, but good software can minimise the effect of components other than the one being benchmarked.
A component-level benchmark will, as a minimum, report scores for the processor, memory, graphics card and hard disk, although these scores are often subdivided so you can distinguish a disk's read speed from its write speed, for example.
These benchmarks can play an important role in selecting components for an upgrade, so long as you're able to use a benchmark package on your prospective purchase. They are also invaluable in judging the effectiveness of system tuning or overclocking your processor.
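To make the idea more concrete, here's a rough Python sketch of what a single component-level test might look like: it times a large sequential write and read to estimate hard disk throughput. It's nothing like as thorough as a real benchmark package, and the file name, test size and block size are arbitrary choices for the example, but it shows the basic principle of exercising one component while timing it.

# A minimal, illustrative disk benchmark: time a large sequential write,
# then a sequential read, and report the throughput of each in MB/s.
import os
import time

TEST_FILE = "benchmark.tmp"   # scratch file created for the test
SIZE_MB = 256                 # total amount of data to write and read
BLOCK = 1024 * 1024           # 1MB per write/read operation

def disk_write_speed():
    data = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(TEST_FILE, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the data really reaches the disk
    return SIZE_MB / (time.perf_counter() - start)

def disk_read_speed():
    # Note: the operating system's file cache can flatter this figure,
    # because the file was written only moments earlier.
    start = time.perf_counter()
    with open(TEST_FILE, "rb") as f:
        while f.read(BLOCK):
            pass
    return SIZE_MB / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"Write: {disk_write_speed():.1f} MB/s")
    print(f"Read:  {disk_read_speed():.1f} MB/s")
    os.remove(TEST_FILE)      # tidy up the scratch file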
Other synthetic benchmarks (which we'll refer to as 'combined', although that term isn't used universally) aim to exercise the various components together in a way that's representative of real-world use. These benchmarks are intended to provide an overall measure of system performance, so would be useful in choosing a new PC, but there are a couple of issues to bear in mind.
First, it's pretty much impossible to write a general-purpose benchmark since not all people use their PC in the same way. A gamer, for example, will be highly dependent on the performance of the graphics card, while someone working on a database will be much more interested in the disk speed.
The implication is that any combined benchmark will be tailored to a specific type of application, and you should choose one that's representative of your PC use.
The second issue is that even if you choose the most suitable benchmark, the way it exercises the components in a PC will only ever approximate real-world use.
Real-world apps
Our third type of benchmark, which is based on real-world applications, provides the solution to this drawback. An application-based benchmark runs real software using simulated keystrokes and mouse actions, in much the same way that a macro can be used to drive a Microsoft Office application, while measuring the time taken to complete the process.
These benchmarking solutions can be very specific, carrying out a single type of action in a single package, or they can carry out a range of actions considered typical of real-world use.
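The principle is easy to demonstrate even without a macro tool, provided the application you care about can be driven from the command line. The Python sketch below simply times a real program carrying out a repeatable job; the 7-Zip compression task shown is purely a placeholder, so substitute an application and workload that reflect your own use of the PC.

# Time a real application performing a fixed, repeatable task.
import subprocess
import time

# Placeholder workload: compress a folder of test files with 7-Zip.
COMMAND = ["7z", "a", "test-archive.7z", "testdata"]
RUNS = 3  # repeat and average to smooth out run-to-run variation

def run_once():
    start = time.perf_counter()
    subprocess.run(COMMAND, check=True, stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

if __name__ == "__main__":
    times = [run_once() for _ in range(RUNS)]
    print(f"Average over {RUNS} runs: {sum(times) / RUNS:.2f} seconds")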
Combined synthetic benchmarks, still commonly used in Unix, are rare in the world of the Windows PC, although some component-level packages do supply various overall scores that are obtained by combining the component scores. This being the case, we'll be concentrating on component-level and application-based benchmarking.
Our chosen component-level synthetic benchmark is Sandra (System Analyser, Diagnostic and Reporting Assistant) from SiSoftware. Although more advanced versions are available, the Lite edition is remarkably powerful given that it's free.
A good place to start would be running the 'Overall Score' wizard, which you can find under the Tools tab. This measures system performance in five broad categories: arithmetic, shading, multimedia, memory and disk. If you upgrade or optimise your system you can come back here later to see the effect as hard figures.
While you're here you might also want to generate a report on how to optimise your system using the 'Analysis and Advice' wizard. Although a lot of the tips will be obvious, and not all relate to improving performance, you might pick up one or two useful pointers.
Hard disk testing
To gain some idea of the wealth of performance information that Sandra can provide, we'll take a look at the hard disk benchmark. You'll probably need to do some homework to understand all the figures, but even so, you'll be impressed by the sheer amount of data provided.
To run this benchmark, click on the 'Benchmarks' tab and then, under 'Storage devices', double-click on 'Physical disks'. Now, on the 'Physical disks' window, page back and forth using the green arrows at the bottom, choosing the category of test you want to run. You can, for example, monitor speed against position on the media, or against file size.
Having made your choice, use the drop-down lists at the top to select which drive you want to benchmark (if you have more than one, and remember that it considers USB memory sticks to be disks) and whether you want to test the read or write speed. Finally, click the 'Refresh' icon to run the benchmark; the results appear as figures at the bottom of the window, along with a graphical comparison against other popular drives.
Do it yourself
Sysmark is a popular application-based benchmark suite, but it is hugely expensive. Virtually all the cheap or free benchmarks are of the synthetic variety, but you can make your own tool for application-based benchmarking. It isn't as hard as you might think.
Eventcorder is a macro recording and playback utility, which provides a similar facility to Microsoft Office's macros, but across any application. It also has a timing facility, which makes it ideally suited to application-based benchmarking.
Start up Eventcorder and click the red 'Record' button. The Eventcorder window will be hidden, but you'll see a message in the top-left of the screen indicating that recording is in progress. At this point, perform whatever actions you want to be played back as your benchmark. When you're done, press [Ctrl]+[Esc].
The Eventcorder window will reappear so you can save your recording from the File menu, and try it out by clicking the 'Play' button. It's not yet a benchmark - we'll address that later - but it's now important to recognise the limitations of this approach.
Nothing important should change between recording and playback. Don't change the screen resolution and, if your recording involves clicking on desktop icons, make sure they haven't moved and that the wallpaper is the same. Also, if the recording results in a file being created, delete it before playback, otherwise you'll be asked if you want to overwrite it, and the script won't be expecting to provide such a confirmation.
All of this is fairly easy if you're playing back on the same machine but could be more difficult if you want to run your benchmark on a different PC.
In practice, because you can't guarantee that icons are at the same position on the desktop or that Start menu entries are in the same order, you're probably limited to using a single application that's already running before making the recording.
Run your test
To use a recording as a benchmark it's necessary to edit the recording file. It's in XML format, which means it's human-readable and easily editable using a text editor. The recording will include delays, which represent the time you waited between key presses or mouse movements.
For use as a benchmark, all those delays should be removed so that the playback time depends solely on the execution time.
To do that, replace every delay line in the file - the figure ('120', for example) varies depending on how long you waited - with the equivalent line specifying a delay of zero.
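If your recording contains a lot of delay lines, a short Python script can make the change for you. The sketch below assumes the delay element is called 'Delay' and that the recording and output files are named as shown; open your own recording first and adjust these to match what Eventcorder actually writes.

import re

DELAY_TAG = "Delay"                   # assumption: change to match your recording
INPUT_FILE = "benchmark.xml"          # your saved Eventcorder recording
OUTPUT_FILE = "benchmark-zeroed.xml"  # copy with every delay set to zero

with open(INPUT_FILE, "r", encoding="utf-8") as f:
    script = f.read()

# Replace any number between the opening and closing delay tags with zero.
script = re.sub(rf"<{DELAY_TAG}>\d+</{DELAY_TAG}>",
                f"<{DELAY_TAG}>0</{DELAY_TAG}>", script)

with open(OUTPUT_FILE, "w", encoding="utf-8") as f:
    f.write(script)

print("Delays zeroed - play back", OUTPUT_FILE, "to time the run")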
Finally, we need to tell Eventcorder to display a pop-up box showing the time taken to play back the file. This is achieved by adding the following immediately before the closing tag at the very end of the file (as opposed to the similarly named singular tag that closes each individual event):
{\rtf1\ansi\ansicpg1252\deff0\deflang1033{\fonttbl{\f0\fnil\fcharset0 Arial;}{\f1\fnil Arial;}} Elapsed time = %TIME% mS
}
Note that if you try to edit the XML code in Word, it will recognise it as XML and format it accordingly, which means the tags themselves won't be presented as ordinary text for you to find and change. Notepad, on the other hand, will display the XML code as plain text so you will be able to search for tags, but it doesn't support the wildcards you'd need to find and replace delay lines containing any number between the tags. So unless you edit every one of those lines by hand, which will normally be a huge undertaking, you'll need a fully-featured text editor - or a short script along the lines sketched earlier.