Sametime Meeting 3.0 vs 2.5 performance comparison
by Bill Masek and Bruce Webster
Level: Intermediate
Works with: Sametime
Updated: 03-Apr-2003
Lotus Sametime is the market-leading instant messaging and Web conferencing product. An important component of Sametime functionality is the ability to conduct online meetings that include instant messaging, streaming audio and video, a shared whiteboard, and shared applications.
This Performance Perspectives column discusses tests we performed comparing Sametime 3.0 meeting performance to Sametime 2.5. These tests simulated user workloads on a small server gradually increasing in size until server performance was impacted. As you'll see, the Sametime 3.0 server logged more users into meetings in a fixed time than the 2.5 server. In addition, Sametime 3.0 users joined meetings faster than Sametime 2.5 users.
This column assumes that you're familiar with Sametime features and terminology. For more information about Sametime 3.0 features, see the LDD Today article "A preview of Sametime 3.0."
Test setup
The following sections describe our hardware, software, and workload test setup.
Hardware and software
In our study, we compared Sametime 2.5 to Sametime 3.0 running a hotfix available from IBM Lotus Technical Support. We ran these tests on small departmental servers, deliberately choosing low-end hardware to provide a "lower scale" performance scenario. Each server was an IBM 704 PC with four PII 200 MHz processors. We ran our Sametime 2.5 tests on a server with 512 MB of memory, and our Sametime 3.0 tests on servers with 512 MB and 1 GB of memory. (This was due to the availability of lab hardware. The different amounts of memory had no measurable impact on performance; in our 3.0 tests, we saw the same results with both memory configurations.) Each server ran Windows 2000 and was connected to the network with a 100 Mbps network card.
For our tests we used a load tool developed in-house. Each simulated user connected to the server using the HTTP protocol and then started a customized meeting room client. The custom client used the standard Sametime APIs. We modified the client code to reduce the load on the test drivers. Images for the whiteboard pages were downloaded normally to the client, but immediately discarded to minimize the memory footprint on the test driver. Our client did not build or display the client GUIs. No Sametime server code was modified for this testing.
Workload
Our tests compared several key variables, the prime one being the number of users joining meetings in a fixed time frame. We measured the performance impact of 300, 600, and 750 concurrent users. Each meeting included five users sharing a presentation on the whiteboard.
We designed the test workload to mirror a typical peak busy hour for a Sametime meeting server. Our simulated users started meetings in two groups: one met on the hour and the other on the half-hour. Users joined the meetings during a 20-minute period before the test. Once a meeting started, one test user assumed the role of meeting presenter and flipped the slides of a presentation. The other test users acted as meeting participants and viewed the slides from the presenter.
Each meeting had five test users: one presenter and four listeners. (Our research indicated that this is the average size of IBM electronic meetings.)
The workload performed its tasks in three main stages: scheduling meetings, preparing the background load, and simulating an active server. The ability to capture a multi-step process like this is an advance over our previous workloads.
Stage 1: Schedule meetings
The server workload ran before the test and scheduled the meetings. Half the meetings were scheduled to start at the beginning of the test, and the rest 55 minutes later. In practice, meetings are usually scheduled well before they actually occur, so the workload scheduled all of the meetings before the test began.
When the meetings were scheduled, a 5 MB Freelance presentation with 21 slides was also uploaded to the server. Although each meeting used the same presentation, each received its own unique copy of the presentation.
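At our largest test size, 750 users translates to 150 five-person meetings, so the server stored roughly 150 × 5 MB = 750 MB of presentation copies.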
Stage 2: Prepare the background load
This stage initiated the active test and consisted of the following steps (sketched in code after the list):
Start half the test users over a 20-minute ramp-up period.
Wait five minutes. (Some test users started just before the end of the ramp-up period; the pause clearly differentiated start time from test time.)
Run the whiteboard workload for 30 minutes. This period represents pure meeting load. Each presenter flips through all of the pages of the presentation at a rate of one per minute. The actual whiteboard presentation starts as soon as the presenter joins the meeting.
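To make the staging concrete, here is a minimal sketch of the Stage 2 timing, written in Python for illustration. The load tool itself was an unpublished in-house program; the start_user and flip_page callbacks below are hypothetical stand-ins for its real client logic.

    import time
    import threading

    RAMP_UP_SECS = 20 * 60   # test users start evenly over 20 minutes
    SETTLE_SECS = 5 * 60     # separates start time from test time
    PAGES = 21               # slides in the 5 MB Freelance presentation

    def run_stage(users, start_user, flip_page):
        # Start this stage's test users evenly across the ramp-up window.
        interval = RAMP_UP_SECS / len(users)
        for user in users:
            threading.Thread(target=start_user, args=(user,)).start()
            time.sleep(interval)

        # Wait five minutes so late joiners finish before the pure meeting load.
        time.sleep(SETTLE_SECS)

        # Whiteboard workload: flip one page per minute through the deck
        # (simplified here to a single presenter) during the 30-minute run.
        for page in range(PAGES):
            flip_page(page)
            time.sleep(60)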
Stage 3: Simulate an active meeting server
This stage began 55 minutes into the test. The workload started the remaining test users over a 20-minute interval, then performed steps 2 and 3 listed in Stage 2. At this point in the test, all meetings had started, and users were either in meetings or joining them.
We used multiple drivers to run the test against the server. Each driver started its users evenly over the 20-minute ramp-up period.
Each test user performed the following tasks:
Connect to the server.
Log into the server.
View the list of active meetings. If there are too many meetings to fit into a single page, download the next page of meetings.
Open a link to the meeting.
Download the meeting applet.
Start the meeting client on the driver. (The standard applet was downloaded, but the test ran the custom client.)
(Presenter) Start turning pages on the whiteboard.
There were three- to 15-second pauses between steps to simulate user think time.
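The per-user sequence can be summarized in a short Python sketch. This is an illustration only: the step function below simply logs each action, where our real driver issued the corresponding HTTP request or Sametime API call.

    import random
    import time

    def think():
        # Simulated user think time: a 3 to 15 second pause between steps.
        time.sleep(random.uniform(3, 15))

    def step(action):
        # Placeholder: the real load tool performed the action here.
        print(time.strftime("%H:%M:%S"), action)

    def simulate_user(is_presenter=False):
        for action in ("connect to the server",
                       "log into the server",
                       "view the list of active meetings",  # page through if needed
                       "open a link to the meeting",
                       "download the meeting applet",
                       "start the custom meeting client"):
            step(action)
            think()
        if is_presenter:
            step("start turning pages on the whiteboard")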
Test results
We ran eight tests with 300, 600, and 750 test users on both Sametime 3.0 and Sametime 2.5 on our small test servers. Both servers supported 300 test users easily. The Sametime 3.0 server effectively supported 600 test users. Despite heavy loads, both servers stayed up during all of the tests.
CPU usage
The following table shows the CPU usage percentages we obtained in our Sametime 2.5 test:
Users | Create meetings (Stage 1) | Join meeting (Stage 2) | Run meeting (Stage 2) | Join meeting (Stage 3) | Run meeting (Stage 3)
300 | 39.5 | 65.0 | 32.6 | 62.0 | 31.4
600 | 54.6 | 95.6 | 55.0 | 95.2 | 37.4
In the preceding table, the Users column shows the number of simulated users in each test. All other numbers in the table represent percentages of available CPU time. For example, in our 300 user test, the workload consumed 39.5 percent of available CPU when creating meetings, 65 percent when the first half of our 300 users joined their meetings, 62 percent when the second half joined, and so on. As this table shows, at 600 users the CPU was "saturated" (over 95 percent usage) while users were joining meetings.
Our second table displays CPU usage for Sametime 3.0:
Users | Create meetings (Stage 1) | Join meeting (Stage 2) | Run meeting (Stage 2) | Join meeting (Stage 3) | Run meeting (Stage 3)
300 | 20.0 | 61.7 | 4.0 | 49.0 | 5.7
600 | 28.4 | 76.3 | 31.4 | 85.7 | 24.0
750 | 44.9 | 99.2 | 33.8 | 95.4 | 35.0
As you can see, Sametime 3.0 reached CPU saturation at 750 users compared to 600 for Sametime 2.5. Until it reached saturation, Sametime 3.0 also consumed significantly less CPU during most stages of the test run than did Sametime 2.5.
Page transactions
Another measure of performance is the number of page transactions. Here is a chart comparing Sametime 3.0 and 2.5 page transactions for the 300, 600, and 750 user runs:
This data shows that once the meetings started, the Sametime 3.0 test server could drive the meetings with 750 test users.
Joining a meeting
In a real environment, a user joining a meeting connects to the meeting server, finds the meeting, opens the meeting, and starts the meeting room applet. Our test users performed these same steps in the same sequence. This section looks at the performance of the server while users were joining a meeting.
Connecting to the meeting
The test user first connected to the Sametime server. In the browser, this shows up as the initial Sametime screen. Here is a graph of the connect times for the different tests.
During Stage 2 (150 to 375 test users), both servers maintained reasonable performance, though the Sametime 3.0 server was faster. In Stage 3 (300 to 750 test users), Sametime 3.0 supported more users with a good response time.
Finding the meeting
Next, our test user listed the active meetings on the server to find the URL for the appropriate meeting. Here is a graph of the time required to display the meetings view:
These results are similar to the connect results: Sametime 3.0 showed better performance with 600 simulated users than Sametime 2.5.
Joining the meeting
Finally, our simulated users joined the meetings. This graph shows our results:
The preceding three charts all tell the same story: our Sametime 3.0 server supported 600 test users with reasonable performance, while the Sametime 2.5 server exhibited performance issues starting at 300 users. The following table lists the differences in performance time for 300 users:
Transaction | Sametime 3.0 (sec) | Sametime 2.5 (sec) | Difference (sec) | Percent difference
Connect | 0.9 | 1.4 | 0.5 | 39
View meetings | 1.4 | 1.4 | 0 | 0
Open meetings | 3.5 | 3.3 | -0.2 | -6
This table shows the same data for the 600 user test:
Transaction | Sametime 3.0 (sec) | Sametime 2.5 (sec) | Difference (sec) | Percent difference
Connect | 1.3 | 5.8 | 4.5 | 446
View meetings | 3.0 | 7.6 | 4.6 | 253
Open meetings | 6.5 | 14.4 | 7.9 | 222
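(The percent figures in this table appear to express the Sametime 2.5 time as a multiple of the Sametime 3.0 time; for example, 5.8 divided by 1.3 is roughly 4.46, or 446 percent.)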
At 300 test users, Sametime 3.0 connects almost 40 percent faster, while view and open times are about the same. At 600 test users, the Sametime 3.0 server is much faster across all three transactions. The Sametime 3.0 server not only has a higher capacity, it also has better performance.
Meeting performance
After our simulated meetings were underway, we measured the time it took the server to deliver pages to the clients. The following table lists our results. All times are in seconds:
Users | Sametime 3.0 (Stage 2) | Sametime 3.0 (Stage 3) | Sametime 2.5 (Stage 2) | Sametime 2.5 (Stage 3)
175 | 1.79 | 2.86 | 1.53 | 13.19
300 | 1.71 | 2.84 | 1.60 |
375 | 1.66 | 2.74 | 1.56 |
The average times are marginally longer in Sametime 3.0.
Performance considerations
In this section, we share observations that suggest some best practices and help you anticipate performance in your own Sametime environment.
CPU profile
After we concluded our tests, we noticed that CPU usage followed a pattern through each stage. So we ran a separate test and obtained the following results:
This graph illustrates a "CPU profile." CPU usage increases in Stage 1 when meetings are scheduled. There is a significant CPU spike in Stage 2 when the first set of meetings actually starts. This drops off once the meetings are underway, but notice that there is still significant load on the server while the first set of users joins the meetings. Viewing the meeting requires little CPU time. A second spike occurs in Stage 3 when the rest of the meetings start.
This profile may be a good indication of CPU usage at your own Sametime site. Be aware that scheduling and joining meetings are typically the most CPU-intensive parts of Sametime usage, and these are the times when users are most likely to notice performance issues.
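If you want to capture the same kind of profile at your own site, any interval-based CPU sampler will do (Windows Performance Monitor, for example). As one option, here is a minimal Python sketch using the third-party psutil package; the 15-second sample interval and the log file name are arbitrary choices.

    import time
    import psutil

    # Log total CPU utilization every 15 seconds; plotting the file against
    # the meeting schedule reproduces a stage-by-stage profile like the above.
    with open("cpu_profile.csv", "w") as log:
        log.write("time,cpu_percent\n")
        for _ in range(480):                        # about two hours of samples
            pct = psutil.cpu_percent(interval=15)   # averaged over the interval
            log.write(time.strftime("%H:%M:%S") + "," + str(pct) + "\n")
            log.flush()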
Other tips
As with any other performance test, we've attempted to simulate a real-world environment. But no test, however carefully conducted, can exactly match every possible customer configuration. For example, our tests did not include the following factors, any of which can have a significant impact on Sametime performance:
Network
Sametime meetings are network intensive. High-latency and low-bandwidth connections impact performance. These tests were run in a lab environment over 100 Mbps connections with negligible latency.
AppShare
This workload only exercises the whiteboard, for which the server downloads each page as it is needed. In an AppShare meeting, the presenter constantly sends screen and mouse updates to the server, which then relays that data to the rest of the clients. AppShare therefore places a heavier load on the server.
Audio/Video
Audio/Video meetings require additional bandwidth.
Small local user directory
Our users were authenticated against the local server, and there were fewer than 10,000 names in the directory. Directory performance was not an issue in these tests.
Servers
As stated earlier, we ran our tests on small departmental servers rather than enterprise servers. Performance started to suffer when the server CPUs were saturated. In other tests, we have seen that bigger servers can support more users.
Conclusion
The key findings of our Sametime 3.0 versus 2.5 comparison are:
The major performance constraint in Sametime is joining meetings. Sametime 3.0 performed significantly better than Sametime 2.5 in this area.
Sametime 3.0 logged more users into meetings in a fixed time than Sametime 2.5.
After all users had joined their meetings, the load on each server dropped significantly.
In Sametime 3.0, the average time to change pages in the presentation was slightly slower.
One final note: Although 750 concurrent users was the largest community supported by our low-end Sametime 3.0 server in this simulation, a different workload model (for example, a lower "user join" density) or a more powerful hardware configuration would have supported a significantly higher number.
We hope you found this Performance Perspectives column useful. Please let us know what you think!
ABOUT THE AUTHORS
Bill Masek works for the IBM Lotus Product Engineering team. He is a software architect who develops tools and leads performance test projects.
His development experience gives him a programmer's view of performance. His accomplishments include a "smart" sample prep system designed for chemists.
Bill has a Master's in Computer Science from MIT. Originally from California, he is an avid choral singer and board gamer.
Bruce Webster is an advisory software engineer for the IBM Lotus Product Engineering team and has been doing performance work since the inception of the group. Bruce has also been a Notes application developer, working both as an independent consultant and with the Lotus Consulting Group. Bruce originally began his career with Lotus in 1984. He currently specializes in Domino, Sametime, and QuickPlace performance simulations and analysis. Bruce coaches youth baseball and basketball in his spare time.