PC vs. Thin Client Performance in Typical Office Application Scenarios Executive Summary
AUGUST 2006
We focused on operations that would typically make users wait, because those operations by their nature tend to
be the ones on which users would most appreciate performance improvements. We tested the following four
scenarios, two with a single active task and two with multiple tasks running at the same time:
• Calculating subtotals in Microsoft Excel (single task)
• Compressing a PDF from within Adobe Acrobat (single task)
• Changing the view in a Microsoft PowerPoint presentation while compressing a folder in Windows Explorer (multitasking)
• Opening large XML files in Microsoft Word and Microsoft Excel (multitasking)
We tested each scenario first on a single client with exclusive access to the file server and then repeated the
scenario with 2, 3, 4, and 5 clients running concurrently. We collected the response time on each of the
participating clients. Our results graphs and tables show the effect the load of additional clients had on response
time. Figure 1 illustrates the results for a simple single-task scenario, calculating Excel subtotals.
As you can see, the performance of the PCs stayed the same as we added more simultaneously active users. By
being able to do the computation work locally, the PCs did not have to rely on the file server for more than
supplying the data.
The thin clients, by contrast, delivered dramatically worse response time as more clients worked at the same time,
with performance dipping to below 20 percent of what it was with a single active client. This performance dip
occurred because all the active thin clients had to rely on the single shared server to not only supply the data files
but also do the computation work.
[Figure 1 chart: normalized response time for 1 to 5 simultaneously active clients.]
Figure 1: Results for the Excel subtotals task for all three client platforms. All results are normalized to the one-client PC
result. Higher comparative ratings are better.
As Figure 1 also shows, the different types of clients delivered similar response time with only a single client
running the test. In this case, each thin client had the server acting as essentially a dedicated PC for that client, so
it is no surprise that the PCs and the thin clients performed about the same. The moment we added a second
active client, however, thin client performance plunged, because those two clients then had to share the server.
In our tests, all the clients performed the same task at the same time, though each had its own copies of all data
files. Though typically people are not doing exactly the same thing at exactly the same time, most networks with a
similarly capable server would be supporting a lot more than a mere 5 simultaneous users. Further, during normal
work hours a great many of those users would be working on different tasks at the same time. Our test cases are
thus probably less demanding on the server than real user networks.
In the following sections we discuss our test application scenarios (Application scenarios), examine the results of
our tests (Test results and analysis), and provide detailed information about how we actually performed the tests
(Test methodology). In the appendices, we present the configurations of the test systems, explain how to
manually execute the application functions in our scenarios, and discuss some issues in the development of the
test scripts.
We timed this task from the point she presses Enter until the Excel status bar displays Ready at the end of the
calculation.
Single task scenario: Compressing a PDF from within Adobe Acrobat
In our second scenario, Parker, the assistant to a marketing director, has a 4.01MB PDF of a white paper that he
wants to put on the company’s Web site. He plans to save download time for customers by reducing the file’s size.
He has the file open in Adobe Acrobat 7.0 Standard and selects File/Reduce File Size. When the Reduce File
Size dialog displays, Parker changes the Make Compatible with: selection to Acrobat 7.0 or later and presses OK.
In the Save As dialog, he enters compressed.pdf as the file name and presses Save. Acrobat compresses the file
and then displays a Conversion Warning saying that the PDF contained image masks that were not down-sampled. Parker presses OK, and Acrobat displays his compressed PDF.
We timed this task from the point he presses OK in the Reduce File Size dialog until the Conversion Warning
appears at the end of the compression.
Multitasking scenario: Changing the view in a Microsoft PowerPoint presentation while
compressing a folder in Windows Explorer
In the third scenario, Maya, a project manager, has a 265MB folder in her My Documents folder that she wants to
compress and copy to an FTP site for a customer. (We stored this folder locally on the PCs, because a typical PC
user would likely work on such a large amount of data locally. We necessarily stored the folder on the file server
for the thin clients.) She locates the folder in Windows Explorer, right-clicks it, and selects Send to/Compressed
(zipped) Folder from the drop-down menu that displays. She continues working while Windows Explorer
compresses the file. Her next task is to edit a PowerPoint deck for an upcoming customer presentation. The
PowerPoint file is on the file server. While the compression is still running, she opens the 30.4MB, 36-slide
PowerPoint deck and selects View/Slide Sorter so she can find the slide she wants. She then must wait for the
slides to display. She will later copy the 195MB compressed (zipped) folder to the FTP site.
We timed 3 tasks:
• the Windows Explorer task, from the time Maya starts the compression until the Compressing dialog
disappears
• the PowerPoint open task, from the time she clicks the desktop shortcut to open the file until PowerPoint
displays all the slide snapshots on the left
• the PowerPoint change view task, from the time she selects View/Slide Sorter until PowerPoint displays
all the slide images.
Multitasking scenario: Opening large XML files in Microsoft Word and Microsoft Excel
In this scenario, Akhil, a financial analyst, wants to update an 11MB Word XML file with data from a 29.9MB Excel
spreadsheet that is also in XML format. He opens Windows Explorer and locates the file server folder that holds
the files. He selects both files, presses Enter, and waits for the files to display.
We timed 2 tasks:
• the Word file open task, from the time he presses Enter until the Word document finishes loading
• the Excel file open task, from the time he presses Enter until the Excel document finishes loading
For more details on how we executed and measured these scenarios, our specific test functions, and the files the
scenarios use, see Appendix B.
For each of those results sets, we present a single time: the mean response time, in seconds, of all the
participating clients in one of the five runs of the scenario. We call that run the representative run.
We used a different process to select the representative run for single-task and multitasking scenarios. For single-task scenarios, we calculated the mean response time for all the clients participating in each test run of a script.
That process yielded one result for each of the five runs. We consider the representative run to be the one with
the median of those results.
For multitasking scenarios, we had to consider the results of all the tasks we timed. Because the foreground task
is the one on which, by definition, users are waiting, we used the foreground task to select a representative run.
So, as we did with the single-task scripts, we calculated the mean response time on the foreground task for all the
clients participating in each test run of a multitasking script. That process yielded one foreground result for each of
the five runs. We consider the representative run to be the one with the median of those results. We then
calculated the mean response time of each other task for all the clients on that run, and we report those results.
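The following Python sketch illustrates this selection logic. It is not part of our test harness; the data layout, task names, and numbers are purely illustrative.

    from statistics import mean, median

    def representative_run(runs, foreground="foreground"):
        """Pick the run whose mean foreground response time is the median
        of the per-run means, then report the mean of every timed task on
        that run. Each run maps task name -> per-client times in seconds."""
        per_run_means = [mean(run[foreground]) for run in runs]
        # With five runs, the median is always one of the computed means.
        chosen = runs[per_run_means.index(median(per_run_means))]
        return {task: round(mean(times), 1) for task, times in chosen.items()}

    # Illustrative data: five runs of a multitasking script on two clients.
    runs = [
        {"foreground": [9.9, 10.1], "background": [61.0, 62.0]},
        {"foreground": [10.3, 10.5], "background": [60.5, 63.0]},
        {"foreground": [9.8, 10.0], "background": [61.5, 62.5]},
        {"foreground": [10.0, 10.2], "background": [62.0, 61.0]},
        {"foreground": [10.4, 10.6], "background": [60.0, 62.0]},
    ]
    print(representative_run(runs))   # means of both tasks on the median run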
In the following sub-sections we explore these results in more detail. Because our goals were to compare how
well the thin clients fared against the PCs and to show how well each type of client performed as we added more
simultaneously active clients, we normalized all comparisons to the performance of the tests with a single active
PC. The result for a run with one active PC is thus always 1.00, because that run is the comparison basis. Results
higher than 1.00 indicate how much faster a given client type ran with a particular client count and script than a
single active PC with the same script. Results lower than 1.00 indicate how much slower a given client type ran
with a particular client count and script than a single active PC with the same script. Because of the normalization,
higher result numbers are better. For example, a result of 0.80 for 2 active clients of type X would mean those
clients completed the script 20 percent slower than a single PC running the same script.
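The comparative ratings are consistent with dividing the one-client PC response time by the measured response time, so lower response times yield higher ratings. A small Python illustration follows; the function name is ours, and the 13- and 68-second sample values come from the Excel subtotals discussion below.

    def comparative_rating(measured_seconds, one_client_pc_seconds):
        """Normalize a mean response time to the one-client PC baseline.
        1.00 matches the baseline; higher ratings mean faster responses."""
        return one_client_pc_seconds / measured_seconds

    # Roughly 13 s for a single client and 68 s for five thin clients
    # (see the Excel subtotals results) give a rating near 0.19.
    print(round(comparative_rating(68.0, 13.0), 2))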
We present the results for each task in each scenario in both tabular and graphical form. Each results table shows
the results of each type of client with 1, 2, 3, 4, and 5 simultaneously active clients. Each graph shows how each
type of client's response time changed as we moved from 1 active client to 5 active clients.
As all the results show, with just 5 clients simultaneously running the same script, the PC clients always
dramatically outperformed the thin clients.
For more details on each scenario, see the Application scenarios section and Appendix B.
Single task scenario: Calculating subtotals in Microsoft Excel
Figure 2 shows the response times for each of the client platforms running this Excel subtotals task. Though all
clients of all types were getting the test file from the file server, the PCs were able to perform the computation
locally, while the thin clients had to rely on sharing the server's processor to do the same work. The server was
able to handle the file requests from the PCs without slowing under load, but having to perform the computations
for the thin clients caused the server to slow as we added more clients.
The result, as you can see, is that PC performance held steady as we added clients, while with 5 active clients the
performance of both types of thin clients fell to about 20 percent that of the PCs. Consequently, PC users would have experienced the same response time as we added users, while thin client users would have experienced dramatically worse response time with only 5 of them running the test.
[Figure 2 chart: normalized response time for 1 to 5 simultaneously active clients.]
Figure 2: Results for the Excel subtotals task for all three client platforms. All results are normalized to the one-client PC result. Higher comparative ratings are better.
Figure 3 details the response times for each of the three client platforms running this task. The performance
results in the left section of the table show the mean response time, in seconds, of all the participating clients in
each run of the test. Lower performance results are better. The center column shows the number of
simultaneously active clients in the test whose results that row provides. The comparative ratings in the right
section of the table show the response time normalized to the result with 1 active PC client. Higher comparative
ratings are better.
As Figure 3 also shows, the percentage differences in performance between PCs and thin clients translate into
time differences users most definitely notice. With a single client active, all the clients completed the task in
roughly 13 seconds. With 5 clients doing the same test, PC performance stayed basically the same, while thin
client performance went to about 68 seconds--an increase of 55 seconds, or nearly a minute, in response time.
PC response time again held basically steady as we added clients, with the response time for 5 simultaneously
active clients at worst 4 percent lower than the response time with a single client. Both thin clients, by contrast,
dropped greatly as we added clients, going to 68 percent (Sun Ray 2; 67 percent for Wyse Winterm 5150SE) as
fast with only 2 clients active and dropping to 30 percent (Sun Ray 2; 29 percent for Wyse Winterm 5150SE) as
fast with 5 simultaneously active clients.
Single task scenario: Compressing a PDF from within Adobe Acrobat
[Figure 4 chart: normalized response time for 1 to 5 simultaneously active clients.]
Figure 4: Results for the Acrobat compress PDF task for all three client platforms. All results are normalized to the one-client PC result. Higher comparative ratings are better.
As the one-client results in Figure 5 show, the thin clients were actually a tiny bit (1 to 3 percent) faster than the PCs with only
one client active. The reason for this slight performance edge is that in the one-client test each thin client basically
has the full power of the server available to it.
The response-time differences for the thin clients were ones users would definitely notice, with response time
going from about 16 seconds in the one-client case to 54 to 55 seconds in the five-client case--an increase of 39
seconds.
Multitasking scenario: Changing the view in a Microsoft PowerPoint presentation while
compressing a folder in Windows Explorer
In this scenario, the users are running multiple tasks at the same time: a folder compression via Windows Explorer in the background, and opening and then changing the view of a PowerPoint presentation in the
foreground. We present the results of our tests of each of those tasks in this section.
Windows Explorer task results
Figures 6 and 7 show the performance of each of the types of clients on the background Windows Explorer file
compression task.
[Figure 6 chart: normalized response time for 1 to 5 simultaneously active clients.]
Figure 6: Results for the Windows Explorer task for all three client platforms. All results are normalized to the one-client PC result. Higher comparative ratings are better.
As Figure 6 shows, response time for the PCs held basically constant as we added test clients, while thin client performance for both models started lower than that of the PCs with 1 client active and then dropped dramatically as
we went to 5 active clients. The slight increase in PC performance is probably because as the server became
busier supplying the PowerPoint file for the file open, the PCs had to wait briefly and so had more processor
cycles available for the file compression.
As Figure 7 details, the single PC actually finished the task 4.2 seconds faster than the Sun Ray 2 thin client and
3.6 seconds faster than the Wyse Winterm 5150SE thin client. As we added clients running the test, however, this
performance lead widened dramatically, because PC performance stayed basically the same as thin client
performance plunged.
Figure 7: Results for the Windows Explorer task for all three client platforms. All results are normalized to the one-client PC
result. Higher comparative ratings are better.
With 5 clients running the test simultaneously, the PCs finished the task 108.9 seconds faster than the Sun Ray 2
thin clients and 110.2 seconds faster than the Wyse Winterm 5150SE thin clients--differences of nearly two
minutes.
Microsoft PowerPoint file open task results
The two foreground tasks in this test suffered on all platforms as the server had to supply all the clients with the
data they needed. Figures 8 and 9 illustrate this effect on the performance of all the types of clients on the
Microsoft PowerPoint file open task.
[Figure 8 chart: normalized response time for 1 to 5 simultaneously active clients.]
Figure 8: Results for the Microsoft PowerPoint file open task for all three client platforms. All results are normalized to the one-client PC result. Higher comparative ratings are better.
As you can see in Figure 8, as we added clients the performance of all three types of clients dipped. With 5 active PCs, PC performance dropped to 55 percent of that of a single PC; a single PC completed the task in 9.9 seconds, while with 5 PCs the response time was 18.1 seconds. The PCs still dramatically out-performed both types of thin clients, however: as Figure 9 shows, thin client response time dropped to 11 percent (Wyse Winterm 5150SE) or 12 percent (Sun Ray 2) of the response time of the single-PC case. Both types of thin clients thus trailed well behind the PCs in performance.
Microsoft PowerPoint change view task results
[Figure 10 chart: normalized response time for 1 to 5 simultaneously active clients.]
Figure 10: Results for the Microsoft PowerPoint change view task for all three client platforms. All results are normalized to the one-client PC result. Higher comparative ratings are better.
The actual time penalties were again ones that users would notice. As Figure 11 shows, the average response
time of the Sun Ray 2 thin clients went from 4.8 seconds with 1 active client to 22.4 seconds with 5 active clients,
while the response time of the Wyse Winterm 5150SE thin clients rose from 5.2 seconds with 1 client to 37.3
seconds with 5 clients.
In this multitasking scenario, even with the data files residing on the server the PCs were able to use their local
computing power to respond dramatically more quickly than the thin clients on the foreground tasks and to
complete the background task more than 4 times faster than the thin clients--a performance win on all fronts for
users.
Multitasking scenario: Opening large XML files in Microsoft Word and Microsoft Excel
Our last test scenario includes two tasks that both read large data files from the server and are processor-intensive: opening XML files in Microsoft Word and Microsoft Excel at the same time. The test begins both file
opens at the same time, so the two tasks begin running simultaneously.
Microsoft Excel XML file open task results
Figures 12 and 13 show the response times for each of the client platforms running the Microsoft Excel XML file open task in this scenario.
As the graph shows, performance for all the platforms dipped as we added clients and the server had to do more
work to service the additional systems. The PCs, however, stayed significantly ahead of the thin clients as we
added clients, with the 5 PCs running at 64 percent the speed of the single-PC case; by contrast, the thin clients fell much further behind.
[Figure 12 chart: normalized response time for 1 to 5 simultaneously active clients.]
Figure 12: Results for the Microsoft Excel file open task for all three client platforms. All results are normalized to the one-client PC result. Higher comparative ratings are better.
As both this graph and the detailed results in Figure 13 show, the single Sun Ray 2 thin client was actually 9 percent faster than the single PC--but the Sun Ray 2 effectively had the file server dedicated to it (and the requisite
Sun Fire V240 server also supporting it).
Microsoft Word XML file open task results
[Figure 14 chart: normalized response time for 1 to 5 simultaneously active clients.]
Figure 14: Results for the Microsoft Word XML file open task for all three client platforms. All results are normalized to the
one-client PC result. Higher comparative ratings are better.
As Figure 15 shows, with 5 active clients the PCs, which, like the thin clients, had to rely on the server to supply
the files, ran at 48 percent the speed of the single-PC case. The thin clients, by contrast, ran at only 20 percent
(Sun Ray 2) and 19 percent (Wyse Winterm 5150SE) of the speed of the single PC.
Figure 15: Results for the Microsoft Word XML file open task for all three client platforms. All results are normalized to the
one-client PC result. Higher comparative ratings are better.
In the five-client case, these percentage differences translated into time savings of more than 30 seconds for the
PCs as compared to the thin clients.
As we noted earlier, to provide an apples-to-apples comparison, we forced all the clients to store the data they
needed on the server. The PCs, of course, could have stored the data locally. Had we allowed the PCs to do so,
their performance edge in the multi-client tests would almost certainly have been much larger.
Uneven service
In all of our results discussions to this point, we have focused on average response time. In multi-user networks,
all systems of the same type should generally receive roughly the same response time when performing the same
operations on the same files. In our tests, that was certainly the case with the PCs. Consider, for example, the
Microsoft PowerPoint change view task. Figure 16 shows the range of response times for the five-client tests of
this task on each of the client types. The PCs, as we would hope and expect, showed remarkably little variance,
with the difference between the best response time a system received (5.3 seconds) and the worst (5.8 seconds)
only half a second.
Range of response times for the five-client results on the Microsoft PowerPoint change view task
                                      Dell OptiPlex 210L    Sun Ray 2    Wyse Winterm 5150SE
Minimum response time (seconds)       5.3                   14.1         16.8
Maximum response time (seconds)       5.8                   30.8         27.9
Range of response times (seconds)     0.5                   16.7         11.1
Figure 16: Range of response times for the runs of the five-client test for the PowerPoint change view task for all three
client platforms. Lower numbers are better.
As the same table shows, on some tasks the thin clients, by contrast, delivered very different response times to
each user, a phenomenon we refer to as "uneven service." In the five-client test of this PowerPoint operation, one
Sun Ray 2 thin client finished the test in 14.1 seconds, while another took 30.8 seconds--a difference of 16.7
seconds. The Wyse Winterm 5150SE thin clients ranged in completion times from 16.8 seconds to 27.9 seconds,
a difference of 11.1 seconds. This level of uneven service would result in different users having very different
computing experiences, something IT managers generally want to avoid.
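The range figures in Figure 16 follow directly from the per-client times in the representative run. The short Python illustration below shows the calculation; only the Sun Ray 2 minimum and maximum are published values, and the three middle client times are placeholders we invented for the example.

    def service_range(times_seconds):
        """Return the best, worst, and spread of per-client response times."""
        best, worst = min(times_seconds), max(times_seconds)
        return best, worst, worst - best

    # Published endpoints for the Sun Ray 2 (14.1 s and 30.8 s); middle values are placeholders.
    sun_ray_2 = [14.1, 18.0, 22.0, 26.0, 30.8]
    best, worst, spread = service_range(sun_ray_2)
    print(f"best {best:.1f} s, worst {worst:.1f} s, range {spread:.1f} s")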
We created a test network for each of the client types: Dell OptiPlex 210L PCs, Sun Ray 2 thin clients, and Wyse
Winterm 5150SE thin clients. Each test network included a file server, five client systems, and, for the Sun Ray 2
thin clients, the special Sun server they require. We used a pair of identical file servers to allow us to have two
networks under test at a time. Appendix A provides detailed configuration information on all of the different
systems we used in our test. We used a 100-Mbps network infrastructure whenever possible, because that
infrastructure is common in enterprises today.
For the Sun Ray 2 thin client test network, we set up user accounts and Windows Terminal Server on the file
server. For the Wyse Winterm 5150SE thin client test network, we set up the file server so it would have accounts
for all five Wyse Winterm 5150SE thin clients and run the Citrix Access Essentials software they required to be
able to execute the office applications in our test scripts. In all of these test networks, we assigned each system a
static IP address, with one exception: the Sun Fire V240 server automatically assigned IP addresses to the Sun
Ray 2 thin clients. The PCs required no special setup.
We installed the Microsoft Office 2003 and Adobe Acrobat 7.0 Standard applications so that they would be
available to all the clients. The test scripts run tasks in these applications. Because the thin clients do not have
disks, all their applications and data files reside on the file server. We installed the PC applications on each PC,
but to make the performance comparison as fair as possible, we stored the data files on the server except in one
case in which storing a file locally made more sense in the usage model.
We ran four test scripts on each test network in five client configurations: with 1, 2, 3, 4, and 5 simultaneously active clients.
This approach allowed us to gauge the response-time effects on end users of adding clients to each test network.
For each test script on each test network, we first performed the following script setup steps:
• reboot (in the appropriate order; more on that in the discussions below) the systems in the test network
• create a desktop shortcut for the test script
• create a desktop shortcut for the setup script that prepares the data files for testing
• create a desktop shortcut for the script that cleans up data files between runs of the script
• run the setup script
After we finished this setup process for each script, we ran that script on that network five times in each of the
above five client configurations. If any test or script failed, we discarded that test’s results and ran the test again.
We rebooted the test network systems between each run of each test script.
We refer in this paper only to the median results of each set of five runs on each test network configuration. The
scripts produce times (in milliseconds), with lower times to complete a given function indicating better
performance. We round those times to tenths of seconds in this report.
We performed the initial setup of the shared file server the same way on all the test networks. The first subsection
below outlines that process. The thin clients do not have disks, so the file server held both their applications and
the test data. For the PC test network, the file server held only the test data; we installed the applications locally,
as typical users would.
The subsequent subsections discuss each test network and the steps we took to set it up. Each of those
discussions includes three sections:
• Instructions for setting up any additional server: The Sun Ray 2 thin clients required a special Sun server.
• Test network-specific setup instructions for the file server. On each thin client test network, the file server
also ran the software necessary to support the thin clients. We outline the steps necessary to set up that
software in this section.
• Instructions for setting up the clients.
Setting up the file server for all three test networks
We followed this process to initially prepare the file server.
1. Install an OEM copy of Microsoft Windows Server 2003 Enterprise Edition, Service Pack 1.
2. Create two partitions: one for the server, and one for the test applications and files the clients use.
3. Apply the following updates from the Microsoft Windows Update site:
• Windows Server 2003 Security Update for Windows Server 2003 (KB908531)
• Windows Server 2003 Windows Malicious Software Removal Tool - April 2006 (KB890830)
• Windows Server 2003 Security Update for Windows Server 2003 (KB911562)
• Windows Server 2003 Cumulative Security Update for Internet Explorer for Windows Server 2003
(KB912812)
• Windows Server 2003 Cumulative Security Update for Outlook Express for Windows Server 2003
(KB911567)
• Windows Server 2003 Security Update for Windows Server 2003 (KB913446)
• Windows Server 2003 Security Update for Windows Server 2003 (KB911927)
• Windows Server 2003 Security Update for Windows Server 2003 (KB908519)
• Windows Server 2003 Security Update for Windows Server 2003 (KB912919)
• Windows Server 2003 Security Update for Windows Server 2003 (KB904706)
• Windows Server 2003 Update for Windows Server 2003 (KB910437)
• Windows Server 2003 Security Update for Windows Server 2003 (KB896424)
• Windows Server 2003 Security Update for Windows Server 2003 (KB900725)
• Windows Server 2003 Security Update for Windows Server 2003 (KB901017)
• Windows Server 2003 Security Update for Windows Server 2003 (KB899589)
• Windows Server 2003 Security Update for Windows Server 2003 (KB902400)
• Windows Server 2003 Security Update for Windows Server 2003 (KB905414)
• Windows Server 2003 Security Update for Windows Server 2003 (KB899591)
• Windows Server 2003 Security Update for Windows Server 2003 (KB890046)
• Windows Server 2003 Security Update for Windows Server 2003 (KB899587)
• Windows Server 2003 Security Update for Windows Server 2003 (KB896358)
• Windows Server 2003 Security Update for Windows Server 2003 (KB896422)
• Windows Server 2003 Security Update for Windows Server 2003 (KB896428)
• Windows Server 2003 Security Update for Windows Server 2003 (KB893756)
• Windows Server 2003 Security Update for Windows Server 2003 (KB899588)
• Windows Server 2003 Security Update for Windows Server 2003 (KB901214)
• Windows Server 2003 Update for Windows Server 2003 (KB898715)
Setting up any additional servers in the Sun Ray 2 thin client test network
We followed this process to set up the Sun Fire V240 server:
1. Following the instructions for the V240 on Sun’s Web site (http://www.sun.com/products-n-solutions/hardware/docs/html/819-4209-10/), set up a server with two NICs: one for the connection with
the Sun Ray 2 thin clients, and one for the connection with the file server.
2. Install the following products that the V240 needs to support the Sun Ray 2 thin clients:
• Sun Ray Server Software 3.1
• Sun Ray Connector for Windows OS 1.0
• Sun Desktop Manager 1.0
3. Using the default settings, configure the thin client NIC to have an exclusive network for the Sun Ray 2
thin clients.
4. When the installation software asks whether it should configure the Sun Fire server to have controlled
access mode, select Yes. This configuration lets the Sun Fire server directly control how the Sun Ray 2
thin clients boot.
5. Create a user account, ruser, with the password, “password”, to allow telnetting into the server.
Additional file server setup for the Sun Ray 2 thin client test network
We set up the file server so it would have accounts for all five Sun Ray 2 thin clients, run the Windows Terminal
Server software they required to be able to execute the office applications in our test scripts, and contain the
Adobe Acrobat and Visual Test software the scripts required.
1. Create five users (RUSER1 through RUSER5). Give each remote desktop privileges and the password
“password”.
2. Change the IP address of the Sun Fire V240 server to 10.41.1.80.
3. Configure the Visual Test Runtime application so that it will work with all the test scripts:
a. Copy the following five Visual Test dll files into /WINDOWS/SYSTEM32:
• IEHelper.dll
• Vtaa.dll
• VTest60.dll
• Vtres.dll
• WebDrive.dll
Setting up any additional servers in the Wyse Winterm 5150SE thin client test network
The Wyse Winterm 5150SE test network does not require any servers beyond the file server.
Additional file server setup for the Wyse Winterm 5150SE thin client test network
We set up the file server so it would have accounts for all five Wyse Winterm 5150SE thin clients, run the Citrix
Access Essentials software they required to be able to execute the office applications in our test scripts, and
contain the Adobe Acrobat and Visual Test software the scripts required.
1. Create five users (RUSER1 through RUSER5). Give each remote desktop privileges and the password
“password”.
2. Install Citrix Access Essentials using all defaults.
3. Set up RUSER1 through RUSER5 so each account has Citrix user permissions.
4. Change the Citrix connection settings to permit the Wyse Winterm 5150SE thin clients to run unpublished
applications.
a. Open the Citrix Connection Configuration tool.
b. Double-click the ica-tcp connection to open its properties.
c. Click the Advanced button in the lower left.
d. In the Initial Program group box, uncheck the Only launch Published Applications checkbox if it is
checked.
5. Configure the Visual Test Runtime application so that it will work with all the test scripts:
a. Copy the following five Visual Test dll files into /WINDOWS/SYSTEM32:
• IEHelper.dll
• Vtaa.dll
• VTest60.dll
• Vtres.dll
• WebDrive.dll
b. Open a command prompt.
c. Type cd \WINDOWS\SYSTEM32, and press Enter.
d. For each of the following three dlls, type regsvr32 [dll filename], and press Enter. (This command
registers a dll with the system.)
• IEHelper.dll
• Vtaa.dll
• WebDrive.dll
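The DLL copy and registration steps above can also be scripted. The Python sketch below is not part of our setup process; it assumes the five Visual Test DLLs are staged in a local folder named C:\vtest_dlls (our invented name) and uses regsvr32's standard /s switch for silent registration.

    import shutil
    import subprocess
    from pathlib import Path

    SOURCE = Path(r"C:\vtest_dlls")              # assumed staging folder for the DLLs
    SYSTEM32 = Path(r"C:\WINDOWS\SYSTEM32")
    ALL_DLLS = ["IEHelper.dll", "Vtaa.dll", "VTest60.dll", "Vtres.dll", "WebDrive.dll"]
    TO_REGISTER = ["IEHelper.dll", "Vtaa.dll", "WebDrive.dll"]

    for name in ALL_DLLS:
        shutil.copy2(SOURCE / name, SYSTEM32 / name)        # step a: copy into SYSTEM32

    for name in TO_REGISTER:
        # steps b-d: register the three DLLs with the system, silently
        subprocess.run(["regsvr32", "/s", str(SYSTEM32 / name)], check=True)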
Running a test script on any of the test networks involves three phases:
1. Getting the systems ready to go. In this phase, you make sure all the systems in the test network are on,
appropriately connected (e.g., clients are connected to the file server), and ready for testing.
2. Setting up the test script you want to run. Each script has a setup script that you must run once on each
client before testing on that client with that script. The setup script makes sure the data files are ready,
the application windows are where the test script expects to find them, and so on.
3. Running the test scripts and recording results. You must reboot the test network systems before each run
of each test script and start the test script at the same time on all the clients under test.
Phase 1 varies for each test network. We detail it below in the sections on the test networks. In all of these
discussions, we assume you have already completed the setup process we outlined earlier. We also assume any
client systems you do not want to include in a test will not be on.
Phase 2 is the same regardless of the type of client you are testing. Once you have readied all the systems to go
and are working on a client, follow this process to prepare the client to run a test script:
1. Double-click the desktop shortcut Shortcut to UserX at [servername], where X is the number of the client
and servername is the name you gave the file server.
2. You will see four folders, one for each script. Open the folder that contains the script you are testing.
3. Inside that folder is a folder named SC1. Double-click that folder.
4. You will see three folders: Content, Results, and Scripts. The Scripts folder contains the individual script
files. Double-click the Scripts folder.
5. In the Scripts folder, find the files SC1-Setup.pc6 and SC1main.pc6. Create desktop shortcuts to each of
them.
6. Some scripts require an additional preparation or cleanup program. If so, the Script folder will contain a
third file named SC1-Prep.pc6 or SC1cleanup.pc6, respectively. If either file exists, create a desktop
shortcut to it.
7. Run SC1-Setup.
Phase 3 is largely the same regardless of the type of client. Once you have finished the above script setup phase,
do the following for each test you want to run:
1. Reboot all the servers and the clients you will be using in the test. This process varies by client type; we
outline it for each client test network below.
2. Wait 10 seconds after the Windows hourglass has disappeared on all the clients to ensure a consistent
starting state.
3. On each client you want to test, if there is a shortcut to SC1-Prep or SC1cleanup, do the following:
a. Double-click that shortcut.
b. Wait until you see a confirmation window that prep has completed, or, in the case of SC1cleanup,
wait 30 seconds.
4. Start the script at the same time on all the clients you are testing by clicking the Shortcut to SC1main and
pressing Enter on each client.
5. When the test completes, record the results of each client.
As we discussed at the beginning of the Test methodology section, we ran each script five times on each test
configuration of each network (e.g., five times with one active PC, five times with two active PCs, and so on).
In the following three subsections, we detail the first phase for each of the three types of test networks.
Testing the PC clients
This section provides the test execution preparation steps specific to the PC test network.
File server used by both the PCs and the thin clients: HP ProLiant DL360 1U Rack Server
Server required by the Sun Ray 2 thin clients: Sun Fire V240
As the instructions below reflect, to get the most consistent possible timings and to make our hand-timed actions
more like the ones the automated scripts perform, we sometimes chose to follow procedures for launching
applications that were different from those typical users would follow. (See Appendix C for additional information
on scripting issues.) When we made such choices, we also independently verified that the typical user procedures
would still show similar results.
Consequently, we are confident that the benefits PCs delivered in these scenarios are benefits that users can
expect to realize in real work situations and are not artifacts of the measurement or scripting technology.
We ran all application scenarios five times on each of the systems under test, and we reported the median of
those runs.
The following subsections, which assume you have already completed all of the setup work in the Test
methodology section, describe how to run each of the individual scenarios.
• Open Sales2002a1.xls. (We did not time that task, because we focused on a single function.)
• Start the Excel timer, and perform the subtotal function.
• Stop the Excel timer when Excel finishes calculating the subtotals.
• Close Excel. (We did not time that task, because we focused on a single function.)
• Open Computing.pdf. (We did not time that task, because we focused on a single function.)
• Start the Acrobat timer, and tell Acrobat to compress the PDF.
• Stop the Acrobat timer when the Conversion Warning dialog displays.
• Close the Conversion Warning window.
• Close Acrobat.
• Delete Compress.pdf, the file the script just created. (We did not time these three final tasks, because we
focused on a single function.)
The manual process
To execute the test, follow these instructions. You will need one stopwatch.
• Open Windows Explorer. (We did not time this task, because it occurs outside the multitasking section of
the script.)
• Navigate to FourFiles.
• Start the timer for Explorer Compress, and start compressing FourFiles.
• Start the PowerPoint Open timer, and double-click the Content.ppt desktop shortcut.
• Stop the PowerPoint Open timer when the bottom slide in the slide viewer loads.
• Start the PowerPoint Change View timer, and select View/Slide Sorter.
• Stop the PowerPoint Change View timer when PowerPoint displays all the slide images in the Slide Sorter view.
• Stop the Explorer Compress timer when the Compressing dialog disappears.
To execute the test, follow these instructions. You will need three stopwatches to time three different tasks.
Multitasking scenario: Opening large XML files in Microsoft Word and Microsoft Excel
The applications involved
• Microsoft Office Excel 2003
• Microsoft Office Word 2003
• Microsoft Windows Explorer for Windows XP Professional (Service Pack 2)
The data files involved
• SalesSummary.xml, an 11MB XML document (on the file server)
• Excel2minlarge.xml, a 28.9MB XML document (on the file server)
The script
The script for the scenario performs the following tasks:
• Open Windows Explorer. (We did not time this task, because it occurs outside the multitasking section of
the script.)
• Navigate to the directory that contains SalesSummary.xml and Excel2minlarge.xml.
• Open SalesSummary.xml and Excel2minlarge.xml, and start one timer for the Word document open and
one timer for the Excel document open.
• Stop the Excel timer when the Excel document finishes loading and Ready appears in the lower left of the
document.
• Stop the Word timer when the Word document finishes loading and the page count on the bottom of the
page equals the actual page count of the document.
• Close all open applications. (We did not time these tasks, because they occur outside the multitasking
section of the script.)
To execute the test, follow these instructions. You will need two stopwatches.
First, the documentation for Visual Test, the tool we used to create our test scripts, notes that its primary goal is to be a tool for automating application testing,
not for benchmark development. Consequently, the granularity of some of its functions and the way some of its
functions behave are not ideal for benchmark development.
IBM also does not officially support Visual Test 6.5 for the Windows XP operating system. Because Windows XP
is the leading and most current desktop version of Windows today, we nonetheless felt it was essential to use that
operating system in our tests.
The presence of any scripting tool has the potential to affect the performance of a system. The tool unavoidably
must, for example, occupy some memory and consume some processing power. Consequently, developing a
performance-measurement script with such a tool involves maintaining a delicate balance between using the tool
to automate typical real user behavior and minimizing the effects of the tool on system performance. To make
sure the results of our scripts were accurate, we also hand-timed each of the functions we scripted.
To minimize these limitations and problems, we sometimes had to use scripting techniques that would achieve
the same results as typical user behavior but not exactly mirror that behavior. Such techniques include inserting
delays to mimic user think time and launching applications by clicking the OK button of a pre-filled Run command
line. The hand timing instructions we provide in Appendix B reflect those techniques, so following those
instructions will yield results similar to those the scripts produce. Whenever we had to use one of these alternative
techniques, we manually verified that doing so did not materially alter the way the system behaved and that real
users performing the same actions in more typical ways would see the type of performance benefits we describe.
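Stripped of the Visual Test specifics, the pattern looks roughly like the Python sketch below. It is illustrative only and not one of our test scripts: Notepad stands in for the test applications, and the fixed sleeps stand in for both user think time and the application cues (such as Excel's Ready status) that the real scripts wait on.

    import subprocess
    import time

    THINK_TIME = 2.0          # seconds of artificial user "think time"

    def timed(label, action):
        """Time one scripted action and report the result in seconds."""
        start = time.perf_counter()
        action()
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed:.1f} s")
        return elapsed

    # Launch the application, much as the scripts launch apps from a pre-filled Run line.
    app = subprocess.Popen(["notepad.exe"])
    time.sleep(THINK_TIME)    # pause before the "user's" next step

    # Stand-in for a timed operation; real scripts wait on an application cue instead.
    timed("sample operation", lambda: time.sleep(1.0))

    app.terminate()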
The timings the scripts produce also inevitably contain some variability. This variability is a result of the
combination of the tool’s limitations and the generally asynchronous nature of the many processes Windows XP
and other modern operating systems have running at any given time.
Finally, though one of the goals of this effort was to produce reliable scripts, we were not trying to build bulletproof
benchmarks for wide distribution and use. We developed the scripts to mimic user behavior on our specific test
systems; on different systems the scripts might show different levels of performance benefits or even fail to work.
So, although the scripts are as reliable, self-contained, and free of system dependencies as we could reasonably
make them within the project’s timeframe, they do sometimes fail or encounter problems. Should a problem occur,
rebooting the system and running the script again will generally yield a good result.
IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES, INC. BE LIABLE FOR INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES IN CONNECTION WITH ITS TESTING, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IN NO EVENT SHALL
PRINCIPLED TECHNOLOGIES, INC.’S LIABILITY, INCLUDING FOR DIRECT DAMAGES, EXCEED THE AMOUNTS PAID IN
CONNECTION WITH PRINCIPLED TECHNOLOGIES, INC.’S TESTING. CUSTOMER’S SOLE AND EXCLUSIVE REMEDIES ARE AS SET
FORTH HEREIN.