Australian Amateur Packet Radio Association
Why ROSE?
If you have ever wondered why AAPRA chose to use ROSE in the network rather than some other system, have a read of this paper. It compares the transmission speed of ROSE with that of NET/ROM.
Performance Evaluation of Networking Firmware
by Chuck Haste KP4DJT
[email protected] or [email protected]
The following are the results of tests done to establish throughput
benchmarks, both to answer user questions and to serve as a reference
for the future. The testing was done *NOT* to pit ROSE against TheNet
X1J but rather to determine the throughput of both systems in order to
be able to respond to user questions, and also to have a set of
benchmarks for controllers running higher speed clocks.
There are a large number of PacComm TNCs in use on both TheNet and
ROSE networks. When sysops asked how fast data moves through a stack,
I was unable to give a good solid answer; now, with these tests, I can
answer the question for both systems.
In the process of collecting data I noticed differences that started
appearing as the data blocks became larger. The larger blocks
represent those a BBS such as FBB would commonly present to the
network. Those block sizes also represent the common block sizes that
IP facilities would present to the network (1k+ character IP frames
for good solid links over IP routers and ROSE, 236 character IP frames
for NetRom). The smaller blocks represent those common to
keyboard-to-keyboard or conference applications.
Equipment used:
End PCs:    Two 12 MHz 286s running a program called Pingpong. The
            serial ports ran at 9k6 baud due to a limitation of
            Pingpong.
End TNCs:   PacComm Tiny II Mk II with 10 MHz mods done. Both ports
            running 9k6 baud.
Network:    Tiny II Mk II running 10 MHz clocks. HDLC ports at 9k6
            baud and a wire LAN (TI3DJT COAXLAN) running at 38k4 baud.
            There were two units networked together over the COAXLAN.
Physical
Links:      PC to TNC: RS-232. TNC to one side of the network stack
            was wire; network stack to the other TNC was a radio link.
Radio Link: TEKK KS-900 and KS-960 radios
Timing and other parameters:
TNCs:     Frack = 1s, DWait = 0, Resptime = 0, TXDelay = 10*, MaxFrame = 4
Nodes:    Frack = 1s, DWait = 0, Resptime = 0, TXDelay = 10*, MaxFrame = 7
Switches: Frack = 1000ms**, DWait = 0, Resptime = 1000ms,
          TXDelay = 100ms*, MaxFrame = 7
*  TXDelay on the TNCs and nodes is in 10 ms units, so 10 = 100 ms,
   the same setting as the switches.
** ROSE allows much finer resolution, but I wanted the timers to be
   equal to those on X1J.
Two paclens were used. The first was 256 characters, the largest size
on most AX.25 packet radio networks. Then, to take into account
TheNet's need to put the network overhead INSIDE the 256 character
level 3 packets, I reduced the TNC paclen to 236 to allow for the 20
characters of NetRom overhead. ROSE extends the size of the level 3
frames to add its 3-5 characters of level 3 overhead, so a paclen
reduction was not needed, but the comparison was made anyway.
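As a worked illustration of that overhead arithmetic, here is a rough
sketch (in Python; the 20 character NetRom and full 256 character ROSE
payload figures are the ones given above, and the frame count is a
simplified model, not an exact trace of either protocol):

    import math

    def frames_needed(block_size, user_bytes_per_frame):
        # Number of link-layer frames needed to carry one data block.
        return math.ceil(block_size / user_bytes_per_frame)

    # NetRom carries its ~20 bytes of overhead INSIDE the 256 byte
    # level 3 frame, leaving 236 bytes per frame for user data; ROSE
    # adds its 3-5 bytes outside, leaving the full 256 bytes.
    for block in (256, 512, 1024, 2048, 4096):
        print(block, "bytes:",
              "NetRom", frames_needed(block, 236), "frames,",
              "ROSE", frames_needed(block, 256), "frames")

At a paclen of 256, NetRom needs extra frames per block (2 versus 1 at
256 bytes, 18 versus 16 at 4096), which is exactly what the second
test run avoids by reducing the TNC paclen to 236.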
TXDelay was set to 100 ms for all units. It could have run much
faster, but I wanted a reference that others could reproduce with
ease; each radio in this case had a slightly different minimum
TXDelay, one being about 3 (30 ms) and the other about 5 (50 ms).
There are six columns, as follows:
- Size
  - The size of the data block sent to the TNCs.
- TNC SRTT
  - TNC-to-TNC Smoothed Round Trip Time in seconds. This is simply the
    test done with the TNCs linked directly to each other, so it also
    includes the PC overhead.
- ROSE SRTT
  - Smoothed Round Trip Time through the system with the ROSE switches
    added.
- Transit time
  - ROSE SRTT minus TNC SRTT = transit time (time through the stack).
- X1J SRTT
  - Smoothed Round Trip Time through the system with X1J nodes in
    place of ROSE.
- Transit time
  - X1J SRTT minus TNC SRTT = transit time (time through the stack).
The Smoothed Round Trip Time is the average of 20 round trips: each
block size was tested 20 times and the average taken as the SRTT.
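As a minimal sketch of how each table entry is derived (in Python,
assuming the raw data is simply the list of 20 measured round trip
times in seconds):

    def srtt(round_trips):
        # Smoothed Round Trip Time: the plain average of the trips.
        return sum(round_trips) / len(round_trips)

    def transit_time(system_srtt, tnc_srtt):
        # Time spent in the network stack itself: the SRTT through the
        # switches or nodes minus the bare TNC-to-TNC SRTT, which
        # removes the PC and TNC overhead common to both measurements.
        return system_srtt - tnc_srtt

    # Example from the first table below: the 1024 byte ROSE entry.
    print(transit_time(9.2, 6.4))  # 2.8 seconds through the stack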
[Graphs omitted: in each, the bottom line is TNC SRTT, the middle line
ROSE SRTT, and the top line X1J SRTT. The underlying data follows in
the tables below.]
Paclen = 256 characters (all times in seconds)

Size   TNC SRTT   ROSE SRTT   Transit Time   X1J SRTT   Transit Time
  64      1.3        2.0          0.7           2.2          0.9
  80      1.3        2.2          0.9           2.8          1.5
 128      2.0        2.7          0.7           3.2          1.2
 256      3.0        3.8          0.8           3.8          0.8
 512      4.1        5.5          1.4           6.5          2.4
1024      6.4        9.2          2.8          19.4         13.4
2048     10.3       14.6          4.3          43.0         32.7
4096     19.1       24.2          5.1          97.8         78.7
Paclen = 236 characters (all times in seconds)

Size   TNC SRTT   ROSE SRTT   Transit Time   X1J SRTT   Transit Time
  64      1.3        2.0          0.7           2.1          0.8
  80      1.3        2.2          0.9           2.7          1.4
 128      2.0        2.7          0.7           3.1          1.1
 256      3.3        4.8          1.5           4.8          1.5
 512      4.1        5.4          1.3           6.8          2.7
1024      7.0        8.9          1.9          19.0         12.0
2048     10.9       14.8          3.9          31.0         20.1
4096     18.6       25.9          7.3          56.1         37.5
Notice how the jump from the 512 byte block size to the 1024 byte
block size causes a large slowdown in X1J. In an earlier test where
the actual block of data sent was 522 characters long, the difference
was ROSE 5.7 seconds and X1J 9.3 seconds, so there appears to be a
critical area somewhere just above 512 characters for TheNet.
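One way to see the break point is to normalise the transit times to
seconds per kilobyte of user data. A small sketch, with the values
copied straight from the Paclen = 256 table above:

    rose = {512: 1.4, 1024: 2.8, 2048: 4.3, 4096: 5.1}
    x1j = {512: 2.4, 1024: 13.4, 2048: 32.7, 4096: 78.7}
    for size in sorted(rose):
        kb = size / 1024
        print("%5d  ROSE %5.2f s/KB   X1J %6.2f s/KB"
              % (size, rose[size] / kb, x1j[size] / kb))

ROSE's per-kilobyte cost actually falls as the blocks grow (2.8 s/KB
at 512 bytes down to about 1.3 s/KB at 4096), while X1J's jumps from
4.8 s/KB at 512 bytes to over 13 s/KB at 1024 and keeps climbing.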
Some of the times on the shorter data blocks were corrupted by retries
on the RF link (noise from other equipment; maybe one day we will have
a screen room to test in!). This is a particular problem for ROSE:
since the TNC code thinks it is linking over a dual digipeater path, a
retry causes long delays (relative to a non-digipeated connect), which
in turn causes long waits in the test. One case is the 256 character
line in the first set of tests, where both ROSE and X1J show an SRTT
of 3.8. Watching the ROSE test, most of the RTTs were on the order of
3.2 seconds, but in order to show real world operation I have kept the
times from the retried data in, to reflect actual delays.
In X1J there appears to be a bug in which a transmission times out the
watchdog timer; about 60-70 seconds later the node comes back to life.
I removed this corruption from the two entries that had the problem,
figuring that the bug will be fixed. For those of you who would like
to know, the SRTTs for those entries with the bad RTTs included were
as follows:

Paclen 256, block 4096: SRTT with the bad RTT was 104.9 seconds.
Paclen 236, block 2048: SRTT with the bad RTT was 34 seconds.

The bug seems to strike about every 30 minutes, and NO, THIS DOES NOT
APPEAR to be a NODES broadcast; the broadcasts were clearly visible
and quite short (only 2 nodes in the table).
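A minimal sketch of that clean-up, under the assumption that the
stalled round trips were simply dropped and the remainder re-averaged
(the threshold here is an arbitrary illustrative value):

    def srtt_excluding(round_trips, ceiling=60.0):
        # Drop any round trip long enough to have hit the watchdog
        # stall, then average the rest.
        good = [t for t in round_trips if t < ceiling]
        return sum(good) / len(good)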
This is NOT the final say. I would ENCOURAGE OTHERS TO TRY THE SAME
TESTS, BUT DO NOT USE THE DEFAULTS IN EITHER EPROM; SET THE TIMERS AND
LENGTHS UP TO BE EQUAL.
Future Tests

I also plan to run these tests shipping IP over the network, using
ROSE, NetRom, and the IP routers in X1J. I will test using an IP frame
of 1500 characters; this frame will be fragmented at the AX.25 layer
and sent according to each network system. For the IP tests I will use
a PING session between the two ends, which will give me much better
resolution than the present program does.

I would also like to have a pingpong program that resolves time
better; the present one appears to be too granular.
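As a rough sketch of what such a program might look like
(hypothetical: it assumes Python with the pyserial package, a port
name of /dev/ttyS0, and a far end that echoes the block straight back;
time.perf_counter gives far finer resolution than whole seconds):

    import time
    import serial  # pyserial

    def ping_once(port, block):
        # Send one block and time the echoed copy coming back.
        start = time.perf_counter()
        port.write(block)
        port.read(len(block))
        return time.perf_counter() - start

    with serial.Serial("/dev/ttyS0", 9600, timeout=120) as port:
        trips = [ping_once(port, b"U" * 256) for _ in range(20)]
        print("SRTT: %.3f seconds" % (sum(trips) / len(trips)))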
I want to test the two systems while there is other traffic moving
through the nodes/switches, which will be a more "real world" test. To
do this I must set up a series of switches/nodes through which other
users are passing data, and I will also need to sample for longer
periods in order to get a better round trip time that takes into
account activity changes over the same circuit.
I freely admit to being biased toward ROSE and IP, but I have tried to
keep this bias out of these tests; this is the reason I made the
second run with the reduced paclen, to avoid NetRom fragmentation. As
you can see, there was a small gain for X1J and a small loss for ROSE,
but the difference is still great at the larger frame sizes.
Whatever network you are running, the key to making it run even better
is fine tuning of the timers and delays. A LAN channel is not going to
use the same parameters as a point-to-point link, nor is a 1200 baud
channel going to use the same timing as a 9k6 channel.
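As a worked example of the scale involved (the 20 byte per-frame
overhead is a rough illustrative figure, not an exact AX.25 number):

    def frame_time(paclen, baud, overhead=20):
        # Seconds just to clock one frame of `paclen` user bytes onto
        # the channel, ignoring TXDelay and turnaround time.
        return (paclen + overhead) * 8 / baud

    print(frame_time(256, 1200))  # about 1.84 s per frame at 1200 baud
    print(frame_time(256, 9600))  # about 0.23 s per frame at 9600 baud

A Frack of 1 second that is comfortable on a 9k6 link would fire
before a single full frame had even cleared a 1200 baud channel.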
Last update to this page: 22/2/00