Report for: WiFi Capacity Test







Sat May 02 14:47:49 PDT 2020


Objective

The Candela WiFi Capacity test is designed to measure the performance of an Access Point when handling different numbers of WiFi stations. The test allows the user to increase the number of stations in user-defined steps for each test iteration and measures the per-station and overall throughput for each trial. Along with throughput, other measurements include client connection times, fairness, packet loss percentage, DHCP times, and more. The expected behavior is for the AP to handle several stations (within the limits of its specifications) and give all stations a fair share of airtime, both upstream and downstream. An AP that scales well will not show a significant overall throughput decrease as more stations are added.




The Realtime graph shows the combined download and upload RX bps of the connections created by this test.
Realtime BPS


Total bits-per-second transferred. This counts only the protocol payload, so it does not include Ethernet, IP, UDP, TCP, or other header overhead. A well-behaved system will show about the same rate as stations increase; if the rate decreases significantly as stations are added, the system is not scaling well.
If selected, the Golden AP comparison graphs will be added. Those tests were done in an isolation chamber, with open encryption, a conductive (cabled) connection, and a LANforge CT525 wave-1 3x3 NIC as the stations.
Total Kbps Received vs Number of Stations Active

Text Data for Kbps Upload/Download
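The scaling expectation described above can be checked numerically. The sketch below (illustrative only, not part of the report tooling) takes the "Total" observed rates from this report's iteration summaries and flags any iteration whose aggregate throughput falls well below the single-station baseline:

```python
# Scaling sanity check using the "Total" observed rates (Mbps) taken
# from the per-iteration summaries in this report.
stations = [1, 2, 5, 10, 20, 45, 60, 100]
total_mbps = [532.766, 487.905, 460.185, 452.979,
              443.559, 431.571, 404.851, 490.734]

baseline = total_mbps[0]
for n, rate in zip(stations, total_mbps):
    drop_pct = 100.0 * (baseline - rate) / baseline
    flag = "  <-- check" if drop_pct > 20 else ""
    print(f"{n:4d} stations: {rate:8.3f} Mbps ({drop_pct:+5.1f}% vs 1 station){flag}")
```

The 20% threshold is an arbitrary choice for illustration; with these numbers only the 60-station iteration (404.851 Mbps, about 24% below the baseline) is flagged.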



Protocol Data Units received. For TCP this does not mean much, but for UDP connections it corresponds to packet size: if the PDU size is larger than what fits into a single frame, the network stack will segment it accordingly. A well-behaved system will show about the same rate as stations increase; if the rate decreases significantly as stations are added, the system is not scaling well.
Total PDU/s Received vs Number of Stations Active

Text Data for Pps Upload/Download



Station disconnect stats. These cover only the last iteration. If the 'Clear Reset Counters' option is selected, the stats are cleared after the initial association, so any re-connects reported indicate a potential stability issue. This can be used for long-term stability testing when you bring up all stations in one iteration and then run the test for a longer duration.
Port Reset Totals


Station connect time is calculated from the initial Authenticate message through the completion of Open or RSN association/authentication.
Station Connect Times


This measures the time it takes to complete the ANQP exchange used in Hotspot 2.0 (HS20) negotiation and discovery.
Station ANQP Times


This measures the time it takes to complete the 4-way handshake used by WPA encryption. If this increases as more stations are added, it may indicate scalability problems.
Station 4-Way Auth Times


This measures the time it takes to acquire a DHCP lease. The DHCP protocol broadcasts at least one discovery message and then waits a second or two before trying to acquire a lease, so longer times here are usually not a problem. If the time goes up as more stations associate, it may indicate scalability issues; it may also mean the DHCP server has run out of leases.
Station DHCP Times


This measures the one-way latency reported by LANforge. When transmitting at maximum speed, much of the latency is inside LANforge itself, because LANforge uses fairly large send buffers; you can force the send buffers smaller to decrease this. The device under test also influences overall latency. We often see multiple seconds of latency in this kind of testing, but ideally latency should not increase much as more stations are added.
Latency vs Time


This packet loss is calculated from drops detected via sequence gaps. If the device under test reorders packets, this value may be inflated. Check the Layer-3 Endpoint out-of-order column if this graph differs significantly from the send-vs-receive drop graph below.
Total Drop % vs Number of Stations Active (Sequence Gap Detected Drops)
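The sequence-gap method, and the reordering caveat above, can be sketched as follows (an illustrative simplification, not LANforge's actual implementation):

```python
def sequence_gap_drops(seq_numbers):
    """Count drops inferred from gaps in received sequence numbers.

    Reordered packets inflate this count: a PDU that arrives late is
    first counted as a gap, which is the caveat noted above.
    """
    drops = 0
    expected = seq_numbers[0]
    for seq in seq_numbers:
        if seq > expected:
            drops += seq - expected  # PDUs in the gap never arrived (or arrive later)
        expected = max(expected, seq) + 1
    return drops

# 4 and 5 are genuinely missing (2 drops); 8 arrives after 9 (reordering)
# and is miscounted as a drop, giving 3 in total.
print(sequence_gap_drops([1, 2, 3, 6, 7, 9, 8, 10]))  # 3
```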


This packet loss is calculated from the number of PDUs sent by one side versus the number received on the other. Note that TCP does not actually lose application data; it retransmits dropped frames and runs slower instead, so UDP gives more accurate packet-loss statistics.
Total Drop % vs Number of Stations Active (Send vs Receive Detected Drops)

Text Data for CX Drop Percent Upload/Download
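The send-vs-receive calculation reduces to a simple ratio; the helper below is an illustrative sketch of that arithmetic (not code from the report tooling):

```python
def drop_percent(tx_pdus, rx_pdus):
    """Drop % based on PDUs sent by one side vs received on the other."""
    if tx_pdus == 0:
        return 0.0
    return 100.0 * (tx_pdus - rx_pdus) / tx_pdus

# 100,000 PDUs sent, 98,750 received -> 1.25% loss
print(drop_percent(100_000, 98_750))  # 1.25
```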



This charts the total time it takes the stations to associate and acquire a DHCP lease (if DHCP is being used). If the system is scaling well, this time should not increase much as more stations are brought up.
Stations requested UP vs Bringup Time for Last Batch of 228 Stations

WiFi Capacity Test requested values
Station Increment: 1,2,5,10,20,45,60,100
Loop Iterations: Single (1)
Duration: 1 min (1 m)
Protocol: TCP-IPv4
Layer 4-7 Endpoint: NONE
Payload Size: AUTO
MSS: AUTO
Total Download Rate: 1G (1 Gbps)
Total Upload Rate: 1G (1 Gbps)
Percentage TCP Rate: 10% (10%)
Set Bursty Minimum Speed: Burst Mode Disabled (-1)
Randomize Rates: true
Leave Ports Up: false
Socket buffer size: OS Default
Settle Time: 5 sec (5 s)
Rpt Timer: fast (1 s)
IP ToS: Best Effort (0)
Multi-Conn: AUTO
Show-Per-Iteration-Charts: true
Show-Per-Loop-Totals: true
Hunt-Lower-Rates: false
Show Events: true
Clear Reset Counters: false
CSV Reporting Dir: - not selected -
Build Date: Sat May 2 10:22:39 PDT 2020
Build Version: 5.4.2
Git Version: 1c7aa80894052fe3b618e8f17ba7de8b47bafc45
Ports 1.1.eth0 1.1.sta00000 1.1.sta00001 1.1.sta00002 1.1.sta00003 1.1.sta00004 1.1.sta00005 1.1.sta00006 1.1.sta00007 1.1.sta00008 1.1.sta00009 1.1.sta00010 1.1.sta00011 1.1.sta00012 1.1.sta00013 1.1.sta00014 1.1.sta00015 1.1.sta00016 1.1.sta00017 1.1.sta00018 1.1.sta00019 1.1.sta00020 1.1.sta00021 1.1.sta00022 1.1.sta00023 1.1.sta00024 1.1.sta00025 1.1.sta00026 1.1.sta00027 1.1.sta00028 1.1.sta00029 1.1.sta00030 1.1.sta00031 1.1.sta00032 1.1.sta00033 1.1.sta00034 1.1.sta00035 1.1.sta00036 1.1.sta00037 1.1.sta00038 1.1.sta00039 1.1.sta00040 1.1.sta00041 1.1.sta00042 1.1.sta00043 1.1.sta00044 1.1.sta00045 1.1.sta00046 1.1.sta00047 1.1.sta00048 1.1.sta00049 1.1.sta00050 1.1.sta00051 1.1.sta00052 1.1.sta00053 1.1.sta00054 1.1.sta00055 1.1.sta00056 1.1.sta00057 1.1.sta00058 1.1.sta00059 1.1.sta00060 1.1.sta00061 1.1.sta00062 1.1.sta00063 1.1.sta00500 1.1.sta00501 1.1.sta00502 1.1.sta00503 1.1.sta00504 1.1.sta00505 1.1.sta00506 1.1.sta00507 1.1.sta00508 1.1.sta00509 1.1.sta00510 1.1.sta00511 1.1.sta00512 1.1.sta00513 1.1.sta00514 1.1.sta00515 1.1.sta00516 1.1.sta00517 1.1.sta00518 1.1.sta00519 1.1.sta00520 1.1.sta00521 1.1.sta00522 1.1.sta00523 1.1.sta00524 1.1.sta00525 1.1.sta00526 1.1.sta00527 1.1.sta00528 1.1.sta00529 1.1.sta00530 1.1.sta00531 1.1.sta00532 1.1.sta00533 1.1.sta00534 1.1.sta00535 1.1.sta00536 1.1.sta00537 1.1.sta00538 1.1.sta00539 1.1.sta00540 1.1.sta00541 1.1.sta00542 1.1.sta00543 1.1.sta00544 1.1.sta00545 1.1.sta00546 1.1.sta00547 1.1.sta00548 1.1.sta00549 1.1.sta00550 1.1.sta00551 1.1.sta00552 1.1.sta00553 1.1.sta00554 1.1.sta00555 1.1.sta00556 1.1.sta00557 1.1.sta00558 1.1.sta00559 1.1.sta00560 1.1.sta00561 1.1.sta00562 1.1.sta00563 1.1.sta00564 1.1.sta00565 1.1.sta00566 1.1.sta00567 1.1.sta00568 1.1.sta00569 1.1.sta00570 1.1.sta00571 1.1.sta00572 1.1.sta00573 1.1.sta00574 1.1.sta00575 1.1.sta00576 1.1.sta00577 1.1.sta00578 1.1.sta00579 1.1.sta00580 1.1.sta00581 1.1.sta00582 1.1.sta00583 1.1.sta00584 1.1.sta00585 1.1.sta00586 1.1.sta00587 
1.1.sta00588 1.1.sta00589 1.1.sta00590 1.1.sta00591 1.1.sta00592 1.1.sta00593 1.1.sta00594 1.1.sta00595 1.1.sta00596 1.1.sta00597 1.1.sta00598 1.1.sta00599 1.1.sta01000 1.1.sta01001 1.1.sta01002 1.1.sta01003 1.1.sta01004 1.1.sta01005 1.1.sta01006 1.1.sta01007 1.1.sta01008 1.1.sta01009 1.1.sta01010 1.1.sta01011 1.1.sta01012 1.1.sta01013 1.1.sta01014 1.1.sta01015 1.1.sta01016 1.1.sta01017 1.1.sta01018 1.1.sta01019 1.1.sta01020 1.1.sta01021 1.1.sta01022 1.1.sta01023 1.1.sta01024 1.1.sta01025 1.1.sta01026 1.1.sta01027 1.1.sta01028 1.1.sta01029 1.1.sta01030 1.1.sta01031 1.1.sta01032 1.1.sta01033 1.1.sta01034 1.1.sta01035 1.1.sta01036 1.1.sta01037 1.1.sta01038 1.1.sta01039 1.1.sta01040 1.1.sta01041 1.1.sta01042 1.1.sta01043 1.1.sta01044 1.1.sta01045 1.1.sta01046 1.1.sta01047 1.1.sta01048 1.1.sta01049 1.1.sta01050 1.1.sta01051 1.1.sta01052 1.1.sta01053 1.1.sta01054 1.1.sta01055 1.1.sta01056 1.1.sta01057 1.1.sta01058 1.1.sta01059 1.1.sta01060 1.1.sta01061 1.1.sta01062 1.1.sta01063
Firmware: N/A 10.1-ct-8x-__xtH-022-db8cfc6c 0.3-0
Machines: ben-ota-2




Requested Parameters:
Download Rate: Per station: 1000000000 (   1 Gbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 1000000000 (   1 Gbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 1   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min: 332.757 Mbps  Cx Ave: 332.757 Mbps  Cx Max: 332.757 Mbps  All Cx: 332.757 Mbps
Upload Rate:       Cx Min: 200.008 Mbps  Cx Ave: 200.008 Mbps  Cx Max: 200.008 Mbps  All Cx: 200.008 Mbps
                                                                                     Total: 532.766 Mbps
Aggregated Rate:   Min:    532.766 Mbps  Avg:    532.766 Mbps  Max:    532.766 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.
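One common way to put a number on the per-station fairness these graphs show is Jain's fairness index, where 1.0 means all stations get equal throughput and 1/n means one station gets everything. The report itself does not compute this; the sketch below is purely illustrative:

```python
def jains_index(rates):
    """Jain's fairness index over per-station throughput values.

    Returns 1.0 for perfectly equal rates, 1/n when a single
    station takes all the throughput.
    """
    n = len(rates)
    total = sum(rates)
    sum_sq = sum(r * r for r in rates)
    return (total * total) / (n * sum_sq) if sum_sq else 1.0

print(round(jains_index([10, 10, 10, 10]), 3))  # 1.0  (perfectly fair)
print(round(jains_index([40, 0, 0, 0]), 3))     # 0.25 (one station dominates)
```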

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 1000000000 (   1 Gbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 1000000000 (   1 Gbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 1   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:     2.346 GB  Cx Ave:     2.346 GB  Cx Max:     2.346 GB  All Cx:     2.346 GB
Upload Amount:     Cx Min:     1.316 GB  Cx Ave:     1.316 GB  Cx Max:     1.316 GB  All Cx:     1.316 GB
                                                                                     Total:      3.662 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 500000000 ( 500 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 500000000 ( 500 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 2   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:  129.16 Mbps  Cx Ave: 133.701 Mbps  Cx Max: 138.242 Mbps  All Cx: 267.402 Mbps
Upload Rate:       Cx Min:  98.442 Mbps  Cx Ave: 110.252 Mbps  Cx Max: 122.061 Mbps  All Cx: 220.504 Mbps
                                                                                     Total: 487.905 Mbps
Aggregated Rate:   Min:    227.602 Mbps  Avg:    243.953 Mbps  Max:    260.303 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 500000000 ( 500 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 500000000 ( 500 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 2   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:   937.481 MB  Cx Ave:   965.473 MB  Cx Max:   993.466 MB  All Cx:     1.886 GB
Upload Amount:     Cx Min:   752.814 MB  Cx Ave:    761.98 MB  Cx Max:   771.145 MB  All Cx:     1.488 GB
                                                                                     Total:      3.374 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 200000000 ( 200 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 200000000 ( 200 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 5   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:  35.198 Mbps  Cx Ave:  46.489 Mbps  Cx Max:  68.031 Mbps  All Cx: 232.445 Mbps
Upload Rate:       Cx Min:  41.722 Mbps  Cx Ave:  45.548 Mbps  Cx Max:  54.779 Mbps  All Cx:  227.74 Mbps
                                                                                     Total: 460.185 Mbps
Aggregated Rate:   Min:      76.92 Mbps  Avg:     92.037 Mbps  Max:     122.81 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 200000000 ( 200 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 200000000 ( 200 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 5   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:   252.518 MB  Cx Ave:   335.852 MB  Cx Max:   494.704 MB  All Cx:      1.64 GB
Upload Amount:     Cx Min:   318.794 MB  Cx Ave:   340.365 MB  Cx Max:    356.18 MB  All Cx:     1.662 GB
                                                                                     Total:      3.302 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 100000000 ( 100 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 100000000 ( 100 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 10   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:   9.012 Mbps  Cx Ave:  20.536 Mbps  Cx Max:  60.937 Mbps  All Cx: 205.356 Mbps
Upload Rate:       Cx Min:  21.493 Mbps  Cx Ave:  24.762 Mbps  Cx Max:  27.899 Mbps  All Cx: 247.623 Mbps
                                                                                     Total: 452.979 Mbps
Aggregated Rate:   Min:     30.505 Mbps  Avg:     45.298 Mbps  Max:     88.836 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 100000000 ( 100 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 100000000 ( 100 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 10   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:    64.667 MB  Cx Ave:   147.911 MB  Cx Max:   439.532 MB  All Cx:     1.444 GB
Upload Amount:     Cx Min:   172.772 MB  Cx Ave:   180.428 MB  Cx Max:   190.612 MB  All Cx:     1.762 GB
                                                                                     Total:      3.206 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  50000000 (  50 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  50000000 (  50 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 20   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:   2.502 Mbps  Cx Ave:   8.902 Mbps  Cx Max:  49.575 Mbps  All Cx: 178.041 Mbps
Upload Rate:       Cx Min:   9.124 Mbps  Cx Ave:  13.276 Mbps  Cx Max:  15.981 Mbps  All Cx: 265.518 Mbps
                                                                                     Total: 443.559 Mbps
Aggregated Rate:   Min:     11.625 Mbps  Avg:     22.178 Mbps  Max:     65.556 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  50000000 (  50 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  50000000 (  50 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 20   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:    18.033 MB  Cx Ave:    64.389 MB  Cx Max:   357.828 MB  All Cx:     1.258 GB
Upload Amount:     Cx Min:    87.097 MB  Cx Ave:    95.744 MB  Cx Max:    102.58 MB  All Cx:      1.87 GB
                                                                                     Total:      3.128 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  22222222 (22.222 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  22222222 (22.222 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 45   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:  86.977 Kbps  Cx Ave:   3.027 Mbps  Cx Max:  14.081 Mbps  All Cx: 136.203 Mbps
Upload Rate:       Cx Min:   2.813 Mbps  Cx Ave:   6.564 Mbps  Cx Max:  11.422 Mbps  All Cx: 295.369 Mbps
                                                                                     Total: 431.571 Mbps
Aggregated Rate:   Min:        2.9 Mbps  Avg:       9.59 Mbps  Max:     25.503 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  22222222 (22.222 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  22222222 (22.222 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 45   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:   632.062 KB  Cx Ave:    21.478 MB  Cx Max:    99.908 MB  All Cx:   966.521 MB
Upload Amount:     Cx Min:    19.055 MB  Cx Ave:    44.222 MB  Cx Max:    62.083 MB  All Cx:     1.943 GB
                                                                                     Total:      2.887 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  16666666 (16.667 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  16666666 (16.667 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 60   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:   2.206 Mbps  Cx Max:   8.347 Mbps  All Cx: 132.349 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave:   4.542 Mbps  Cx Max:   9.313 Mbps  All Cx: 272.503 Mbps
                                                                                     Total: 404.851 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      6.748 Mbps  Max:      17.66 Mbps
Non-Transmitting endpoints: (3)  tcp--1.eth0-01.sta00050-A tcp--1.eth0-01.sta00051-A tcp--1.eth0-01.sta00052-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  16666666 (16.667 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  16666666 (16.667 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 60   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:    15.681 MB  Cx Max:    59.337 MB  All Cx:   940.856 MB
Upload Amount:     Cx Min:          0 B  Cx Ave:    33.154 MB  Cx Max:      59.4 MB  All Cx:     1.943 GB
                                                                                     Total:      2.861 GB
Non-Transmitting endpoints: (3)  tcp--1.eth0-01.sta00050-A tcp--1.eth0-01.sta00051-A tcp--1.eth0-01.sta00052-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  10000000 (  10 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  10000000 (  10 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 100   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:   1.499 Mbps  Cx Max:  10.014 Mbps  All Cx: 149.856 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave:   3.409 Mbps  Cx Max:  10.696 Mbps  All Cx: 340.877 Mbps
                                                                                     Total: 490.734 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      4.907 Mbps  Max:     20.711 Mbps
Non-Transmitting endpoints: (45)  tcp--1.eth0-01.sta00050-A tcp--1.eth0-01.sta00052-A tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00057-A tcp--1.eth0-01.sta00058-A tcp--1.eth0-01.sta00059-A tcp--1.eth0-01.sta00060-A tcp--1.eth0-01.sta00061-A tcp--1.eth0-01.sta00062-A tcp--1.eth0-01.sta00063-A tcp--1.eth0-01.sta00500-A tcp--1.eth0-01.sta00501-A tcp--1.eth0-01.sta00502-A tcp--1.eth0-01.sta00503-A tcp--1.eth0-01.sta00504-A tcp--1.eth0-01.sta00505-A tcp--1.eth0-01.sta00506-A tcp--1.eth0-01.sta00507-A tcp--1.eth0-01.sta00508-A tcp--1.eth0-01.sta00509-A tcp--1.eth0-01.sta00510-A tcp--1.eth0-01.sta00511-A tcp--1.eth0-01.sta00512-A tcp--1.eth0-01.sta00513-A tcp--1.eth0-01.sta00514-A tcp--1.eth0-01.sta00515-A tcp--1.eth0-01.sta00516-A tcp--1.eth0-01.sta00517-A tcp--1.eth0-01.sta00518-A tcp--1.eth0-01.sta00519-A tcp--1.eth0-01.sta00520-A tcp--1.eth0-01.sta00521-A tcp--1.eth0-01.sta00522-A tcp--1.eth0-01.sta00525-A tcp--1.eth0-01.sta00526-A tcp--1.eth0-01.sta00527-A tcp--1.eth0-01.sta00528-A tcp--1.eth0-01.sta00529-A tcp--1.eth0-01.sta00530-A tcp--1.eth0-01.sta00531-A tcp--1.eth0-01.sta00532-A tcp--1.eth0-01.sta00533-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  10000000 (  10 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  10000000 (  10 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 100   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:    10.904 MB  Cx Max:    72.942 MB  All Cx:     1.065 GB
Upload Amount:     Cx Min:          0 B  Cx Ave:    23.896 MB  Cx Max:    73.062 MB  All Cx:     2.334 GB
                                                                                     Total:      3.398 GB
Non-Transmitting endpoints: (45)  tcp--1.eth0-01.sta00050-A tcp--1.eth0-01.sta00052-A tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00057-A tcp--1.eth0-01.sta00058-A tcp--1.eth0-01.sta00059-A tcp--1.eth0-01.sta00060-A tcp--1.eth0-01.sta00061-A tcp--1.eth0-01.sta00062-A tcp--1.eth0-01.sta00063-A tcp--1.eth0-01.sta00500-A tcp--1.eth0-01.sta00501-A tcp--1.eth0-01.sta00502-A tcp--1.eth0-01.sta00503-A tcp--1.eth0-01.sta00504-A tcp--1.eth0-01.sta00505-A tcp--1.eth0-01.sta00506-A tcp--1.eth0-01.sta00507-A tcp--1.eth0-01.sta00508-A tcp--1.eth0-01.sta00509-A tcp--1.eth0-01.sta00510-A tcp--1.eth0-01.sta00511-A tcp--1.eth0-01.sta00512-A tcp--1.eth0-01.sta00513-A tcp--1.eth0-01.sta00514-A tcp--1.eth0-01.sta00515-A tcp--1.eth0-01.sta00516-A tcp--1.eth0-01.sta00517-A tcp--1.eth0-01.sta00518-A tcp--1.eth0-01.sta00519-A tcp--1.eth0-01.sta00520-A tcp--1.eth0-01.sta00521-A tcp--1.eth0-01.sta00522-A tcp--1.eth0-01.sta00525-A tcp--1.eth0-01.sta00526-A tcp--1.eth0-01.sta00527-A tcp--1.eth0-01.sta00528-A tcp--1.eth0-01.sta00529-A tcp--1.eth0-01.sta00530-A tcp--1.eth0-01.sta00531-A tcp--1.eth0-01.sta00532-A tcp--1.eth0-01.sta00533-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   7142857 (7.143 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   7142857 (7.143 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 140   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:   1.286 Mbps  Cx Max:   2.907 Mbps  All Cx: 180.045 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave:   2.422 Mbps  Cx Max:   7.281 Mbps  All Cx:  339.12 Mbps
                                                                                     Total: 519.165 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      3.708 Mbps  Max:     10.188 Mbps
Non-Transmitting endpoints: (2)  tcp--1.eth0-01.sta00569-A tcp--1.eth0-01.sta00572-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   7142857 (7.143 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   7142857 (7.143 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 140   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:     9.372 MB  Cx Max:     21.41 MB  All Cx:     1.281 GB
Upload Amount:     Cx Min:          0 B  Cx Ave:    19.913 MB  Cx Max:    49.532 MB  All Cx:     2.723 GB
                                                                                     Total:      4.004 GB
Non-Transmitting endpoints: (2)  tcp--1.eth0-01.sta00569-A tcp--1.eth0-01.sta00572-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   5555555 (5.556 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   5555555 (5.556 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 180   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:   1.025 Mbps  Cx Max:   5.312 Mbps  All Cx: 184.541 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave:    2.31 Mbps  Cx Max:   6.592 Mbps  All Cx: 415.864 Mbps
                                                                                     Total: 600.405 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      3.336 Mbps  Max:     11.904 Mbps
Non-Transmitting endpoints: (19)  tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00514-A tcp--1.eth0-01.sta00515-A tcp--1.eth0-01.sta00516-A tcp--1.eth0-01.sta00517-A tcp--1.eth0-01.sta00518-A tcp--1.eth0-01.sta00519-A tcp--1.eth0-01.sta00520-A tcp--1.eth0-01.sta00521-A tcp--1.eth0-01.sta00522-A tcp--1.eth0-01.sta00523-A tcp--1.eth0-01.sta00524-A tcp--1.eth0-01.sta00525-A tcp--1.eth0-01.sta00526-A tcp--1.eth0-01.sta00527-A tcp--1.eth0-01.sta00533-A tcp--1.eth0-01.sta00590-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   5555555 (5.556 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   5555555 (5.556 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 180   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:     7.307 MB  Cx Max:    37.873 MB  All Cx:     1.284 GB
Upload Amount:     Cx Min:          0 B  Cx Ave:    14.852 MB  Cx Max:    38.133 MB  All Cx:     2.611 GB
                                                                                     Total:      3.895 GB
Non-Transmitting endpoints: (19)  tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00514-A tcp--1.eth0-01.sta00515-A tcp--1.eth0-01.sta00516-A tcp--1.eth0-01.sta00517-A tcp--1.eth0-01.sta00518-A tcp--1.eth0-01.sta00519-A tcp--1.eth0-01.sta00520-A tcp--1.eth0-01.sta00521-A tcp--1.eth0-01.sta00522-A tcp--1.eth0-01.sta00523-A tcp--1.eth0-01.sta00524-A tcp--1.eth0-01.sta00525-A tcp--1.eth0-01.sta00526-A tcp--1.eth0-01.sta00527-A tcp--1.eth0-01.sta00533-A tcp--1.eth0-01.sta00590-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   4545454 (4.545 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   4545454 (4.545 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 220   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave: 746.417 Kbps  Cx Max:    2.21 Mbps  All Cx: 164.212 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave:   1.922 Mbps  Cx Max:    5.92 Mbps  All Cx: 422.889 Mbps
                                                                                     Total: 587.101 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      2.669 Mbps  Max:       8.13 Mbps
Non-Transmitting endpoints: (23)  tcp--1.eth0-01.sta00049-A tcp--1.eth0-01.sta00050-A tcp--1.eth0-01.sta00051-A tcp--1.eth0-01.sta00052-A tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00057-A tcp--1.eth0-01.sta00058-A tcp--1.eth0-01.sta00059-A tcp--1.eth0-01.sta00060-A tcp--1.eth0-01.sta00061-A tcp--1.eth0-01.sta00062-A tcp--1.eth0-01.sta00063-A tcp--1.eth0-01.sta00500-A tcp--1.eth0-01.sta00501-A tcp--1.eth0-01.sta00502-A tcp--1.eth0-01.sta00503-A tcp--1.eth0-01.sta00504-A tcp--1.eth0-01.sta00505-A tcp--1.eth0-01.sta00510-A tcp--1.eth0-01.sta00590-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   4545454 (4.545 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   4545454 (4.545 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 220   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:     5.475 MB  Cx Max:     15.85 MB  All Cx:     1.176 GB
Upload Amount:     Cx Min:          0 B  Cx Ave:    13.652 MB  Cx Max:    33.597 MB  All Cx:     2.933 GB
                                                                                     Total:      4.109 GB
Non-Transmitting endpoints: (23)  tcp--1.eth0-01.sta00049-A tcp--1.eth0-01.sta00050-A tcp--1.eth0-01.sta00051-A tcp--1.eth0-01.sta00052-A tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00057-A tcp--1.eth0-01.sta00058-A tcp--1.eth0-01.sta00059-A tcp--1.eth0-01.sta00060-A tcp--1.eth0-01.sta00061-A tcp--1.eth0-01.sta00062-A tcp--1.eth0-01.sta00063-A tcp--1.eth0-01.sta00500-A tcp--1.eth0-01.sta00501-A tcp--1.eth0-01.sta00502-A tcp--1.eth0-01.sta00503-A tcp--1.eth0-01.sta00504-A tcp--1.eth0-01.sta00505-A tcp--1.eth0-01.sta00510-A tcp--1.eth0-01.sta00590-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   4385964 (4.386 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   4385964 (4.386 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 228   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave: 641.194 Kbps  Cx Max:   1.774 Mbps  All Cx: 146.192 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave:   1.963 Mbps  Cx Max:   5.861 Mbps  All Cx: 447.657 Mbps
                                                                                     Total: 593.849 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      2.605 Mbps  Max:      7.634 Mbps
Non-Transmitting endpoints: (26)  tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00062-A tcp--1.eth0-01.sta00063-A tcp--1.eth0-01.sta00506-A tcp--1.eth0-01.sta00507-A tcp--1.eth0-01.sta00508-A tcp--1.eth0-01.sta00509-A tcp--1.eth0-01.sta00511-A tcp--1.eth0-01.sta00512-A tcp--1.eth0-01.sta00513-A tcp--1.eth0-01.sta00514-A tcp--1.eth0-01.sta00515-A tcp--1.eth0-01.sta00516-A tcp--1.eth0-01.sta00517-A tcp--1.eth0-01.sta00520-A tcp--1.eth0-01.sta00521-A tcp--1.eth0-01.sta00522-A tcp--1.eth0-01.sta00523-A tcp--1.eth0-01.sta00524-A tcp--1.eth0-01.sta00525-A tcp--1.eth0-01.sta00526-A tcp--1.eth0-01.sta00527-A tcp--1.eth0-01.sta00529-A tcp--1.eth0-01.sta00532-A tcp--1.eth0-01.sta00533-A tcp--1.eth0-01.sta00590-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   4385964 (4.386 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   4385964 (4.386 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 228   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:     4.816 MB  Cx Max:    14.175 MB  All Cx:     1.072 GB
Upload Amount:     Cx Min:          0 B  Cx Ave:    14.459 MB  Cx Max:    31.918 MB  All Cx:     3.219 GB
                                                                                     Total:      4.292 GB
Non-Transmitting endpoints: (26)  tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00062-A tcp--1.eth0-01.sta00063-A tcp--1.eth0-01.sta00506-A tcp--1.eth0-01.sta00507-A tcp--1.eth0-01.sta00508-A tcp--1.eth0-01.sta00509-A tcp--1.eth0-01.sta00511-A tcp--1.eth0-01.sta00512-A tcp--1.eth0-01.sta00513-A tcp--1.eth0-01.sta00514-A tcp--1.eth0-01.sta00515-A tcp--1.eth0-01.sta00516-A tcp--1.eth0-01.sta00517-A tcp--1.eth0-01.sta00520-A tcp--1.eth0-01.sta00521-A tcp--1.eth0-01.sta00522-A tcp--1.eth0-01.sta00523-A tcp--1.eth0-01.sta00524-A tcp--1.eth0-01.sta00525-A tcp--1.eth0-01.sta00526-A tcp--1.eth0-01.sta00527-A tcp--1.eth0-01.sta00529-A tcp--1.eth0-01.sta00532-A tcp--1.eth0-01.sta00533-A tcp--1.eth0-01.sta00590-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph



Maximum Stations Connected: 228
Stations NOT connected at this time: 0
Maximum Stations with IP Address: 228
Stations without IP at this time: 0

Station Maximums


RF stats give an indication of how congested the RF environment is. Channel activity is what the WiFi radio reports as the busy-time for the RF environment. This is expected to be near 100% when LANforge is running at maximum speed; at lower speeds it should be a lower percentage unless the RF environment is busy with other systems.
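On a Linux station, a comparable busy-time figure can be read from the kernel's channel survey, e.g. the output of `iw dev <dev> survey dump`. The sketch below parses that output format for the in-use channel (exact field layout can vary by driver; this is an illustration, not part of the report's tooling):

```python
import re

def channel_busy_pct(survey_text):
    """Parse 'iw dev <dev> survey dump' style output and return channel
    busy time as a percentage of channel active time for the channel
    marked [in use]. Returns None if the fields are not found."""
    active = busy = None
    in_use = False
    for line in survey_text.splitlines():
        if line.strip().startswith('frequency:'):
            in_use = '[in use]' in line
        elif in_use:
            m = re.search(r'channel (active|busy) time:\s+(\d+) ms', line)
            if m:
                if m.group(1) == 'active':
                    active = int(m.group(2))
                else:
                    busy = int(m.group(2))
    if active and busy is not None:
        return 100.0 * busy / active
    return None

sample = """\
Survey data from wlan0
    frequency:      5180 MHz [in use]
    channel active time:    1000 ms
    channel busy time:      850 ms
"""
print(channel_busy_pct(sample))  # 85.0
```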

RF Stats for Stations

RX-Signal and Activity Data



Link rate stats give an indication of how well rate-control is working. For rate-control, the 'RX' link rate corresponds to what the device-under-test is transmitting. If all of the stations are on the same radio, then the TX and RX encoding rates should be similar for all stations. If there is a definite pattern where some stations do not get a good RX rate, then the device-under-test probably has rate-control problems. The TX rate is what LANforge is transmitting at.
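A simple way to screen for the pattern described above is to flag stations whose RX link rate falls well below the median of the group. This is a hypothetical sketch (station names and rates are made up), not a check the report itself performs:

```python
from statistics import median

def low_rx_rate_stations(rx_rates, factor=0.5):
    """Return the names of stations whose RX link rate is below
    `factor` times the median across all stations."""
    med = median(rx_rates.values())
    return sorted(name for name, r in rx_rates.items() if r < factor * med)

# Hypothetical RX link rates in Mbps:
rates = {'sta00001': 433, 'sta00002': 390, 'sta00003': 433, 'sta00004': 72}
print(low_rx_rate_stations(rates))  # ['sta00004']
```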

Link Rate for Stations

TX/RX Link Rate Data



Removing old entries to save space.

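The raw event entries below are line-oriented `key:value` records and can be broken into fields with a small script. A sketch, with field names taken from the log lines themselves:

```python
import re

# Matches the leading fields of a LANforge event line as shown below;
# the trailing free-form 'details:' text is left unparsed.
EVENT_RE = re.compile(
    r'^(?P<ts>\d+\.\d+)\s+EVENT:\s+(?P<when>[\d-]+ [\d:.]+)\s+'
    r'eventId:(?P<id>\d+)\s+eidType:\S+\s+name:(?P<name>\S+)\s+'
    r'eid:\S+\s+eventType:(?P<type>\S+)'
)

def parse_event(line):
    m = EVENT_RE.match(line)
    return m.groupdict() if m else None

line = ('1588456048.101  EVENT: 2020-05-02 14:47:28.094 eventId:1043793 '
        'eidType:Endpoint name:tcp--1.eth0-01.sta00053-A eid:1.1.59.30.2 '
        'eventType:Endp-Stop details:Stopping: ...')
print(parse_event(line)['type'])  # Endp-Stop
```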
1588456048.101  EVENT: 2020-05-02 14:47:28.094 eventId:1043793 eidType:Endpoint name:tcp--1.eth0-01.sta00053-A eid:1.1.59.30.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00053-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456048.101  EVENT: 2020-05-02 14:47:28.095 eventId:1043794 eidType:Endpoint name:tcp--1.eth0-01.sta00053-B eid:1.1.1.28.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00053-B  Reason: notifyEndpStopping.
1588456048.102  EVENT: 2020-05-02 14:47:28.096 eventId:1043795 eidType:Endpoint name:tcp--1.eth0-01.sta00062-B eid:1.1.1.64.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00062-B  Reason: accept report, TRNS_RUNNING_TO_STO
1588456048.102  EVENT: 2020-05-02 14:47:28.096 eventId:1043796 eidType:Endpoint name:tcp--1.eth0-01.sta00062-A eid:1.1.68.66.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00062-A  Reason: notifyEndpStopping.
1588456048.103  EVENT: 2020-05-02 14:47:28.097 eventId:1043797 eidType:Endpoint name:tcp--1.eth0-01.sta00063-A eid:1.1.69.70.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00063-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456048.103  EVENT: 2020-05-02 14:47:28.097 eventId:1043798 eidType:Endpoint name:tcp--1.eth0-01.sta00063-B eid:1.1.1.68.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00063-B  Reason: notifyEndpStopping.
1588456048.103  EVENT: 2020-05-02 14:47:28.098 eventId:1043799 eidType:Endpoint name:tcp--1.eth0-01.sta00053-A eid:1.1.59.30.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00053-A Reason: checkNonPhantom: restart cx.
1588456048.103  EVENT: 2020-05-02 14:47:28.098 eventId:1043800 eidType:Endpoint name:tcp--1.eth0-01.sta00053-B eid:1.1.1.28.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00053-B Reason: checkNonPhantom: restart cx.
1588456048.103  EVENT: 2020-05-02 14:47:28.099 eventId:1043801 eidType:Endpoint name:tcp--1.eth0-01.sta00062-A eid:1.1.68.66.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00062-A Reason: checkNonPhantom: restart cx.
1588456048.103  EVENT: 2020-05-02 14:47:28.099 eventId:1043802 eidType:Endpoint name:tcp--1.eth0-01.sta00062-B eid:1.1.1.64.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00062-B Reason: checkNonPhantom: restart cx.
1588456048.103  EVENT: 2020-05-02 14:47:28.099 eventId:1043803 eidType:Endpoint name:tcp--1.eth0-01.sta00063-A eid:1.1.69.70.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00063-A Reason: checkNonPhantom: restart cx.
1588456048.103  EVENT: 2020-05-02 14:47:28.099 eventId:1043804 eidType:Endpoint name:tcp--1.eth0-01.sta00063-B eid:1.1.1.68.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00063-B Reason: checkNonPhantom: restart cx.
1588456051.244  EVENT: 2020-05-02 14:47:31.241 eventId:1043805 eidType:Endpoint name:tcp--1.eth0-01.sta00046-A eid:1.1.52.2.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00046-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456051.244  EVENT: 2020-05-02 14:47:31.241 eventId:1043806 eidType:Endpoint name:tcp--1.eth0-01.sta00046-B eid:1.1.1.1.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00046-B  Reason: notifyEndpStopping.
1588456051.245  EVENT: 2020-05-02 14:47:31.242 eventId:1043807 eidType:Endpoint name:tcp--1.eth0-01.sta00047-A eid:1.1.53.6.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00047-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456051.245  EVENT: 2020-05-02 14:47:31.242 eventId:1043808 eidType:Endpoint name:tcp--1.eth0-01.sta00047-B eid:1.1.1.4.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00047-B  Reason: notifyEndpStopping.
1588456051.246  EVENT: 2020-05-02 14:47:31.243 eventId:1043809 eidType:Endpoint name:tcp--1.eth0-01.sta00048-A eid:1.1.54.10.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00048-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456051.246  EVENT: 2020-05-02 14:47:31.243 eventId:1043810 eidType:Endpoint name:tcp--1.eth0-01.sta00048-B eid:1.1.1.8.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00048-B  Reason: notifyEndpStopping.
1588456051.251  EVENT: 2020-05-02 14:47:31.246 eventId:1043811 eidType:Endpoint name:tcp--1.eth0-01.sta00046-A eid:1.1.52.2.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00046-A Reason: checkNonPhantom: restart cx.
1588456051.252  EVENT: 2020-05-02 14:47:31.246 eventId:1043812 eidType:Endpoint name:tcp--1.eth0-01.sta00046-B eid:1.1.1.1.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00046-B Reason: checkNonPhantom: restart cx.
1588456051.252  EVENT: 2020-05-02 14:47:31.247 eventId:1043813 eidType:Endpoint name:tcp--1.eth0-01.sta00047-A eid:1.1.53.6.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00047-A Reason: checkNonPhantom: restart cx.
1588456051.253  EVENT: 2020-05-02 14:47:31.247 eventId:1043814 eidType:Endpoint name:tcp--1.eth0-01.sta00047-B eid:1.1.1.4.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00047-B Reason: checkNonPhantom: restart cx.
1588456051.253  EVENT: 2020-05-02 14:47:31.249 eventId:1043815 eidType:Endpoint name:tcp--1.eth0-01.sta00048-A eid:1.1.54.10.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00048-A Reason: checkNonPhantom: restart cx.
1588456051.253  EVENT: 2020-05-02 14:47:31.249 eventId:1043816 eidType:Endpoint name:tcp--1.eth0-01.sta00048-B eid:1.1.1.8.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00048-B Reason: checkNonPhantom: restart cx.
1588456052.122  EVENT: 2020-05-02 14:47:31.589 eventId:1043817 eidType:Endpoint name:tcp--1.eth0-01.sta00506-A eid:1.1.76.98.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00506-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.122  EVENT: 2020-05-02 14:47:31.590 eventId:1043818 eidType:Endpoint name:tcp--1.eth0-01.sta00506-B eid:1.1.1.96.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00506-B  Reason: notifyEndpStopping.
1588456052.122  EVENT: 2020-05-02 14:47:31.590 eventId:1043819 eidType:Endpoint name:tcp--1.eth0-01.sta00507-A eid:1.1.77.102.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00507-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.122  EVENT: 2020-05-02 14:47:31.591 eventId:1043820 eidType:Endpoint name:tcp--1.eth0-01.sta00507-B eid:1.1.1.100.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00507-B  Reason: notifyEndpStopping.
1588456052.122  EVENT: 2020-05-02 14:47:31.592 eventId:1043821 eidType:Endpoint name:tcp--1.eth0-01.sta00508-A eid:1.1.78.106.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00508-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.123  EVENT: 2020-05-02 14:47:31.592 eventId:1043822 eidType:Endpoint name:tcp--1.eth0-01.sta00508-B eid:1.1.1.104.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00508-B  Reason: notifyEndpStopping.
1588456052.123  EVENT: 2020-05-02 14:47:31.593 eventId:1043823 eidType:Endpoint name:tcp--1.eth0-01.sta00509-A eid:1.1.79.110.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00509-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.123  EVENT: 2020-05-02 14:47:31.593 eventId:1043824 eidType:Endpoint name:tcp--1.eth0-01.sta00509-B eid:1.1.1.108.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00509-B  Reason: notifyEndpStopping.
1588456052.123  EVENT: 2020-05-02 14:47:31.595 eventId:1043825 eidType:Endpoint name:tcp--1.eth0-01.sta00511-A eid:1.1.81.118.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00511-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.123  EVENT: 2020-05-02 14:47:31.595 eventId:1043826 eidType:Endpoint name:tcp--1.eth0-01.sta00511-B eid:1.1.1.116.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00511-B  Reason: notifyEndpStopping.
1588456052.123  EVENT: 2020-05-02 14:47:31.596 eventId:1043827 eidType:Endpoint name:tcp--1.eth0-01.sta00512-A eid:1.1.82.122.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00512-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.124  EVENT: 2020-05-02 14:47:31.596 eventId:1043828 eidType:Endpoint name:tcp--1.eth0-01.sta00512-B eid:1.1.1.120.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00512-B  Reason: notifyEndpStopping.
1588456052.124  EVENT: 2020-05-02 14:47:31.596 eventId:1043829 eidType:Endpoint name:tcp--1.eth0-01.sta00513-A eid:1.1.83.126.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00513-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.124  EVENT: 2020-05-02 14:47:31.597 eventId:1043830 eidType:Endpoint name:tcp--1.eth0-01.sta00513-B eid:1.1.1.124.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00513-B  Reason: notifyEndpStopping.
1588456052.125  EVENT: 2020-05-02 14:47:31.597 eventId:1043831 eidType:Endpoint name:tcp--1.eth0-01.sta00514-A eid:1.1.84.130.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00514-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.125  EVENT: 2020-05-02 14:47:31.598 eventId:1043832 eidType:Endpoint name:tcp--1.eth0-01.sta00514-B eid:1.1.1.128.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00514-B  Reason: notifyEndpStopping.
1588456052.125  EVENT: 2020-05-02 14:47:31.599 eventId:1043833 eidType:Endpoint name:tcp--1.eth0-01.sta00515-A eid:1.1.85.134.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00515-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.125  EVENT: 2020-05-02 14:47:31.599 eventId:1043834 eidType:Endpoint name:tcp--1.eth0-01.sta00515-B eid:1.1.1.132.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00515-B  Reason: notifyEndpStopping.
1588456052.126  EVENT: 2020-05-02 14:47:31.600 eventId:1043835 eidType:Endpoint name:tcp--1.eth0-01.sta00516-A eid:1.1.86.138.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00516-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.126  EVENT: 2020-05-02 14:47:31.600 eventId:1043836 eidType:Endpoint name:tcp--1.eth0-01.sta00516-B eid:1.1.1.136.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00516-B  Reason: notifyEndpStopping.
1588456052.126  EVENT: 2020-05-02 14:47:31.602 eventId:1043837 eidType:Endpoint name:tcp--1.eth0-01.sta00517-A eid:1.1.87.142.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00517-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.126  EVENT: 2020-05-02 14:47:31.602 eventId:1043838 eidType:Endpoint name:tcp--1.eth0-01.sta00517-B eid:1.1.1.140.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00517-B  Reason: notifyEndpStopping.
1588456052.127  EVENT: 2020-05-02 14:47:31.603 eventId:1043839 eidType:Endpoint name:tcp--1.eth0-01.sta00518-B eid:1.1.1.144.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00518-B  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.127  EVENT: 2020-05-02 14:47:31.604 eventId:1043840 eidType:Endpoint name:tcp--1.eth0-01.sta00518-A eid:1.1.88.146.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00518-A  Reason: notifyEndpStopping.
1588456052.127  EVENT: 2020-05-02 14:47:31.604 eventId:1043841 eidType:Endpoint name:tcp--1.eth0-01.sta00520-A eid:1.1.90.154.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00520-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.127  EVENT: 2020-05-02 14:47:31.605 eventId:1043842 eidType:Endpoint name:tcp--1.eth0-01.sta00520-B eid:1.1.1.152.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00520-B  Reason: notifyEndpStopping.
1588456052.130  EVENT: 2020-05-02 14:47:31.605 eventId:1043843 eidType:Endpoint name:tcp--1.eth0-01.sta00521-A eid:1.1.91.158.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00521-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.130  EVENT: 2020-05-02 14:47:31.605 eventId:1043844 eidType:Endpoint name:tcp--1.eth0-01.sta00521-B eid:1.1.1.156.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00521-B  Reason: notifyEndpStopping.
1588456052.131  EVENT: 2020-05-02 14:47:31.606 eventId:1043845 eidType:Endpoint name:tcp--1.eth0-01.sta00522-A eid:1.1.92.162.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00522-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.131  EVENT: 2020-05-02 14:47:31.607 eventId:1043846 eidType:Endpoint name:tcp--1.eth0-01.sta00522-B eid:1.1.1.160.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00522-B  Reason: notifyEndpStopping.
1588456052.132  EVENT: 2020-05-02 14:47:31.609 eventId:1043847 eidType:Endpoint name:tcp--1.eth0-01.sta00523-A eid:1.1.93.166.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00523-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.132  EVENT: 2020-05-02 14:47:31.610 eventId:1043848 eidType:Endpoint name:tcp--1.eth0-01.sta00523-B eid:1.1.1.164.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00523-B  Reason: notifyEndpStopping.
1588456052.132  EVENT: 2020-05-02 14:47:31.611 eventId:1043849 eidType:Endpoint name:tcp--1.eth0-01.sta00524-B eid:1.1.1.168.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00524-B  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.132  EVENT: 2020-05-02 14:47:31.611 eventId:1043850 eidType:Endpoint name:tcp--1.eth0-01.sta00524-A eid:1.1.94.170.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00524-A  Reason: notifyEndpStopping.
1588456052.132  EVENT: 2020-05-02 14:47:31.611 eventId:1043851 eidType:Endpoint name:tcp--1.eth0-01.sta00525-A eid:1.1.95.174.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00525-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.132  EVENT: 2020-05-02 14:47:31.612 eventId:1043852 eidType:Endpoint name:tcp--1.eth0-01.sta00525-B eid:1.1.1.172.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00525-B  Reason: notifyEndpStopping.
1588456052.133  EVENT: 2020-05-02 14:47:31.612 eventId:1043853 eidType:Endpoint name:tcp--1.eth0-01.sta00526-A eid:1.1.96.178.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00526-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.133  EVENT: 2020-05-02 14:47:31.612 eventId:1043854 eidType:Endpoint name:tcp--1.eth0-01.sta00526-B eid:1.1.1.176.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00526-B  Reason: notifyEndpStopping.
1588456052.133  EVENT: 2020-05-02 14:47:31.613 eventId:1043855 eidType:Endpoint name:tcp--1.eth0-01.sta00527-A eid:1.1.97.182.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00527-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.133  EVENT: 2020-05-02 14:47:31.613 eventId:1043856 eidType:Endpoint name:tcp--1.eth0-01.sta00527-B eid:1.1.1.180.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00527-B  Reason: notifyEndpStopping.
1588456052.133  EVENT: 2020-05-02 14:47:31.613 eventId:1043857 eidType:Endpoint name:tcp--1.eth0-01.sta00532-B eid:1.1.1.200.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00532-B  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.133  EVENT: 2020-05-02 14:47:31.613 eventId:1043858 eidType:Endpoint name:tcp--1.eth0-01.sta00532-A eid:1.1.102.202.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00532-A  Reason: notifyEndpStopping.
1588456052.133  EVENT: 2020-05-02 14:47:31.614 eventId:1043859 eidType:Endpoint name:tcp--1.eth0-01.sta00533-B eid:1.1.1.204.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00533-B  Reason: accept report, TRNS_RUNNING_TO_STO
1588456052.133  EVENT: 2020-05-02 14:47:31.614 eventId:1043860 eidType:Endpoint name:tcp--1.eth0-01.sta00533-A eid:1.1.103.206.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00533-A  Reason: notifyEndpStopping.
1588456052.134  EVENT: 2020-05-02 14:47:31.617 eventId:1043861 eidType:Endpoint name:tcp--1.eth0-01.sta00506-A eid:1.1.76.98.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00506-A Reason: checkNonPhantom: restart cx.
1588456052.134  EVENT: 2020-05-02 14:47:31.617 eventId:1043862 eidType:Endpoint name:tcp--1.eth0-01.sta00506-B eid:1.1.1.96.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00506-B Reason: checkNonPhantom: restart cx.
1588456052.134  EVENT: 2020-05-02 14:47:31.618 eventId:1043863 eidType:Endpoint name:tcp--1.eth0-01.sta00507-A eid:1.1.77.102.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00507-A Reason: checkNonPhantom: restart cx.
1588456052.134  EVENT: 2020-05-02 14:47:31.618 eventId:1043864 eidType:Endpoint name:tcp--1.eth0-01.sta00507-B eid:1.1.1.100.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00507-B Reason: checkNonPhantom: restart cx.
1588456052.134  EVENT: 2020-05-02 14:47:31.619 eventId:1043865 eidType:Endpoint name:tcp--1.eth0-01.sta00508-A eid:1.1.78.106.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00508-A Reason: checkNonPhantom: restart cx.
1588456052.134  EVENT: 2020-05-02 14:47:31.619 eventId:1043866 eidType:Endpoint name:tcp--1.eth0-01.sta00508-B eid:1.1.1.104.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00508-B Reason: checkNonPhantom: restart cx.
1588456052.134  EVENT: 2020-05-02 14:47:31.619 eventId:1043867 eidType:Endpoint name:tcp--1.eth0-01.sta00509-A eid:1.1.79.110.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00509-A Reason: checkNonPhantom: restart cx.
1588456052.134  EVENT: 2020-05-02 14:47:31.619 eventId:1043868 eidType:Endpoint name:tcp--1.eth0-01.sta00509-B eid:1.1.1.108.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00509-B Reason: checkNonPhantom: restart cx.
1588456052.134  EVENT: 2020-05-02 14:47:31.620 eventId:1043869 eidType:Endpoint name:tcp--1.eth0-01.sta00511-A eid:1.1.81.118.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00511-A Reason: checkNonPhantom: restart cx.
1588456052.134  EVENT: 2020-05-02 14:47:31.620 eventId:1043870 eidType:Endpoint name:tcp--1.eth0-01.sta00511-B eid:1.1.1.116.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00511-B Reason: checkNonPhantom: restart cx.
1588456052.134  EVENT: 2020-05-02 14:47:31.620 eventId:1043871 eidType:Endpoint name:tcp--1.eth0-01.sta00512-A eid:1.1.82.122.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00512-A Reason: checkNonPhantom: restart cx.
1588456052.134  EVENT: 2020-05-02 14:47:31.620 eventId:1043872 eidType:Endpoint name:tcp--1.eth0-01.sta00512-B eid:1.1.1.120.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00512-B Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.621 eventId:1043873 eidType:Endpoint name:tcp--1.eth0-01.sta00513-A eid:1.1.83.126.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00513-A Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.621 eventId:1043874 eidType:Endpoint name:tcp--1.eth0-01.sta00513-B eid:1.1.1.124.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00513-B Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.621 eventId:1043875 eidType:Endpoint name:tcp--1.eth0-01.sta00514-A eid:1.1.84.130.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00514-A Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.621 eventId:1043876 eidType:Endpoint name:tcp--1.eth0-01.sta00514-B eid:1.1.1.128.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00514-B Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.621 eventId:1043877 eidType:Endpoint name:tcp--1.eth0-01.sta00515-A eid:1.1.85.134.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00515-A Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.621 eventId:1043878 eidType:Endpoint name:tcp--1.eth0-01.sta00515-B eid:1.1.1.132.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00515-B Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.622 eventId:1043879 eidType:Endpoint name:tcp--1.eth0-01.sta00516-A eid:1.1.86.138.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00516-A Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.622 eventId:1043880 eidType:Endpoint name:tcp--1.eth0-01.sta00516-B eid:1.1.1.136.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00516-B Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.622 eventId:1043881 eidType:Endpoint name:tcp--1.eth0-01.sta00517-A eid:1.1.87.142.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00517-A Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.622 eventId:1043882 eidType:Endpoint name:tcp--1.eth0-01.sta00517-B eid:1.1.1.140.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00517-B Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.623 eventId:1043883 eidType:Endpoint name:tcp--1.eth0-01.sta00518-A eid:1.1.88.146.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00518-A Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.623 eventId:1043884 eidType:Endpoint name:tcp--1.eth0-01.sta00518-B eid:1.1.1.144.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00518-B Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.623 eventId:1043885 eidType:Endpoint name:tcp--1.eth0-01.sta00520-A eid:1.1.90.154.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00520-A Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.623 eventId:1043886 eidType:Endpoint name:tcp--1.eth0-01.sta00520-B eid:1.1.1.152.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00520-B Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.624 eventId:1043887 eidType:Endpoint name:tcp--1.eth0-01.sta00521-A eid:1.1.91.158.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00521-A Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.624 eventId:1043888 eidType:Endpoint name:tcp--1.eth0-01.sta00521-B eid:1.1.1.156.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00521-B Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.625 eventId:1043889 eidType:Endpoint name:tcp--1.eth0-01.sta00522-A eid:1.1.92.162.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00522-A Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.625 eventId:1043890 eidType:Endpoint name:tcp--1.eth0-01.sta00522-B eid:1.1.1.160.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00522-B Reason: checkNonPhantom: restart cx.
1588456052.135  EVENT: 2020-05-02 14:47:31.626 eventId:1043891 eidType:Endpoint name:tcp--1.eth0-01.sta00523-A eid:1.1.93.166.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00523-A Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.626 eventId:1043892 eidType:Endpoint name:tcp--1.eth0-01.sta00523-B eid:1.1.1.164.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00523-B Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.627 eventId:1043893 eidType:Endpoint name:tcp--1.eth0-01.sta00524-A eid:1.1.94.170.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00524-A Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.627 eventId:1043894 eidType:Endpoint name:tcp--1.eth0-01.sta00524-B eid:1.1.1.168.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00524-B Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.629 eventId:1043895 eidType:Endpoint name:tcp--1.eth0-01.sta00525-A eid:1.1.95.174.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00525-A Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.629 eventId:1043896 eidType:Endpoint name:tcp--1.eth0-01.sta00525-B eid:1.1.1.172.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00525-B Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.629 eventId:1043897 eidType:Endpoint name:tcp--1.eth0-01.sta00526-A eid:1.1.96.178.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00526-A Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.629 eventId:1043898 eidType:Endpoint name:tcp--1.eth0-01.sta00526-B eid:1.1.1.176.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00526-B Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.630 eventId:1043899 eidType:Endpoint name:tcp--1.eth0-01.sta00527-A eid:1.1.97.182.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00527-A Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.630 eventId:1043900 eidType:Endpoint name:tcp--1.eth0-01.sta00527-B eid:1.1.1.180.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00527-B Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.630 eventId:1043901 eidType:Endpoint name:tcp--1.eth0-01.sta00532-A eid:1.1.102.202.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00532-A Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.630 eventId:1043902 eidType:Endpoint name:tcp--1.eth0-01.sta00532-B eid:1.1.1.200.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00532-B Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.631 eventId:1043903 eidType:Endpoint name:tcp--1.eth0-01.sta00533-A eid:1.1.103.206.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00533-A Reason: checkNonPhantom: restart cx.
1588456052.136  EVENT: 2020-05-02 14:47:31.631 eventId:1043904 eidType:Endpoint name:tcp--1.eth0-01.sta00533-B eid:1.1.1.204.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00533-B Reason: checkNonPhantom: restart cx.
1588456059.083  WIFI-EVENT: 1.1:  sta00000 (phy #0): scan started
1588456059.084  WIFI-EVENT: 1.1:  sta00500 (phy #1): scan started
1588456059.084  WIFI-EVENT: 1.1:  sta01000 (phy #2): scan started
1588456065.231  EVENT: 2020-05-02 14:47:45.227 eventId:1043905 eidType:Endpoint name:tcp--1.eth0-01.sta00053-A eid:1.1.59.30.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00053-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456065.231  EVENT: 2020-05-02 14:47:45.228 eventId:1043906 eidType:Endpoint name:tcp--1.eth0-01.sta00053-B eid:1.1.1.28.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00053-B  Reason: notifyEndpStopping.
1588456065.232  EVENT: 2020-05-02 14:47:45.229 eventId:1043907 eidType:Endpoint name:tcp--1.eth0-01.sta00062-B eid:1.1.1.64.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00062-B  Reason: accept report, TRNS_RUNNING_TO_STO
1588456065.232  EVENT: 2020-05-02 14:47:45.229 eventId:1043908 eidType:Endpoint name:tcp--1.eth0-01.sta00062-A eid:1.1.68.66.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00062-A  Reason: notifyEndpStopping.
1588456065.232  EVENT: 2020-05-02 14:47:45.229 eventId:1043909 eidType:Endpoint name:tcp--1.eth0-01.sta00063-A eid:1.1.69.70.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00063-A  Reason: accept report, TRNS_RUNNING_TO_STO
1588456065.232  EVENT: 2020-05-02 14:47:45.229 eventId:1043910 eidType:Endpoint name:tcp--1.eth0-01.sta00063-B eid:1.1.1.68.2 eventType:Endp-Stop details:Stopping: tcp--1.eth0-01.sta00063-B  Reason: notifyEndpStopping.
1588456065.235  EVENT: 2020-05-02 14:47:45.232 eventId:1043911 eidType:Endpoint name:tcp--1.eth0-01.sta00053-A eid:1.1.59.30.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00053-A Reason: checkNonPhantom: restart cx.
1588456065.235  EVENT: 2020-05-02 14:47:45.232 eventId:1043912 eidType:Endpoint name:tcp--1.eth0-01.sta00053-B eid:1.1.1.28.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00053-B Reason: checkNonPhantom: restart cx.
1588456065.235  EVENT: 2020-05-02 14:47:45.232 eventId:1043913 eidType:Endpoint name:tcp--1.eth0-01.sta00062-A eid:1.1.68.66.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00062-A Reason: checkNonPhantom: restart cx.
1588456065.236  EVENT: 2020-05-02 14:47:45.232 eventId:1043914 eidType:Endpoint name:tcp--1.eth0-01.sta00062-B eid:1.1.1.64.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00062-B Reason: checkNonPhantom: restart cx.
1588456065.236  EVENT: 2020-05-02 14:47:45.233 eventId:1043915 eidType:Endpoint name:tcp--1.eth0-01.sta00063-A eid:1.1.69.70.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00063-A Reason: checkNonPhantom: restart cx.
1588456065.236  EVENT: 2020-05-02 14:47:45.233 eventId:1043916 eidType:Endpoint name:tcp--1.eth0-01.sta00063-B eid:1.1.1.68.2 eventType:Endp-Start details:Starting: tcp--1.eth0-01.sta00063-B Reason: checkNonPhantom: restart cx.
1588456066.221  WIFI-EVENT: 1.1:  sta00000 (phy #0): scan finished: 2412 2417 2422 2427 2432 2437 2442 2447 2452 2457 2462 5180 5200 5220 5240 5260 5280 5300 5320 5500 5520 5540 5560 5580 5600 5620 5640 5660 5680 5700 5720 5745 5765 5785 5805 5825, ""
1588456066.274  WIFI-EVENT: 1.1:  sta01000 (phy #2): scan finished: 2412 2417 2422 2427 2432 2437 2442 2447 2452 2457 2462 5180 5200 5220 5240 5260 5280 5300 5320 5500 5520 5540 5560 5580 5600 5620 5640 5660 5680 5700 5720 5745 5765 5785 5805 5825, ""
1588456068.944  WIFI-EVENT: 1.1:  sta00500 (phy #1): scan finished: 2412 2417 2422 2427 2432 2437 2442 2447 2452 2457 2462 5180 5200 5220 5240 5260 5280 5300 5320 5500 5520 5540 5560 5580 5600 5620 5640 5660 5680 5700 5745 5765 5785 5805 5825, ""
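The EVENT lines above follow a fixed `key:value` layout (eventId, eidType, name, eid, eventType, details). A minimal sketch, assuming only that exact layout, for tallying Endp-Start versus Endp-Stop events when reviewing a capacity-test log; the function name is illustrative, not part of LANforge:

```python
import re
from collections import Counter

# Matches the space-separated key:value fields in LANforge EVENT lines, e.g.
#   eventId:1043905 eidType:Endpoint name:tcp--1.eth0-01.sta00053-A
#   eid:1.1.59.30.2 eventType:Endp-Stop details:...
EVENT_RE = re.compile(
    r"eventId:(?P<event_id>\d+)\s+"
    r"eidType:(?P<eid_type>\S+)\s+"
    r"name:(?P<name>\S+)\s+"
    r"eid:(?P<eid>\S+)\s+"
    r"eventType:(?P<event_type>\S+)"
)

def tally_event_types(lines):
    """Count occurrences of each eventType (Endp-Start, Endp-Stop, ...)."""
    counts = Counter()
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            counts[m.group("event_type")] += 1
    return counts
```

A large imbalance between Endp-Start and Endp-Stop counts over a test interval can hint at connections failing to restart cleanly.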


Key Performance Indicators CSV



Scan results for the SSIDs used in this test.

BSS 30:23:03:81:9c:28(on sta00000) -- associated
	TSF: 0 usec (0d, 00:00:00)
	freq: 5180
	beacon interval: 100 TUs
	capability: ESS (0x0001)
	signal: -21.00 dBm
	last seen: 190 ms ago
	Information elements from Probe Response frame:
	SSID: OpenWrt-5lo
	Supported rates: 6.0* 9.0 12.0* 18.0 24.0* 36.0 48.0 54.0 
	DS Parameter set: channel 36
	BSS Load:
		 * station count: 64
		 * channel utilisation: 15/255
		 * available admission capacity: 0 [*32us]
	Supported operating classes:
		 * current operating class: 128
	HT capabilities:
		Capabilities: 0x9ef
			RX LDPC
			HT20/HT40
			SM Power Save disabled
			RX HT20 SGI
			RX HT40 SGI
			TX STBC
			RX STBC 1-stream
			Max AMSDU length: 7935 bytes
			No DSSS/CCK HT40
		Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
		Minimum RX AMPDU time spacing: 8 usec (0x06)
		HT TX/RX MCS rate indexes supported: 0-15
	HT operation:
		 * primary channel: 36
		 * secondary channel offset: above
		 * STA channel width: any
		 * RIFS: 0
		 * HT protection: no
		 * non-GF present: 1
		 * OBSS non-GF present: 0
		 * dual beacon: 0
		 * dual CTS protection: 0
		 * STBC beacon: 0
		 * L-SIG TXOP Prot: 0
		 * PCO active: 0
		 * PCO phase: 0
	Extended capabilities:
		 * Extended Channel Switching
		 * UTF-8 SSID
		 * Operating Mode Notification
		 * Max Number Of MSDUs In A-MSDU is unlimited
	VHT capabilities:
		VHT Capabilities (0x338819b2):
			Max MPDU length: 11454
			Supported Channel Width: neither 160 nor 80+80
			RX LDPC
			short GI (80 MHz)
			TX STBC
			SU Beamformer
			SU Beamformee
			MU Beamformer
			RX antenna pattern consistency
			TX antenna pattern consistency
		VHT RX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT RX highest supported: 0 Mbps
		VHT TX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT TX highest supported: 0 Mbps
	VHT operation:
		 * channel width: 1 (80 MHz)
		 * center freq segment 1: 42
		 * center freq segment 2: 0
		 * VHT basic MCS set: 0xfffc
	Transmit Power Envelope:
		 * Local Maximum Transmit Power For 20 MHz: 23 dBm
		 * Local Maximum Transmit Power For 40 MHz: 23 dBm
		 * Local Maximum Transmit Power For 80 MHz: 23 dBm
	WMM:	 * Parameter version 1
		 * u-APSD
		 * BE: CW 15-1023, AIFSN 3
		 * BK: CW 15-1023, AIFSN 7
		 * VI: CW 7-15, AIFSN 2, TXOP 3008 usec
		 * VO: CW 3-7, AIFSN 2, TXOP 1504 usec


BSS 30:23:03:81:9c:27(on sta00500) -- associated
	TSF: 0 usec (0d, 00:00:00)
	freq: 2462
	beacon interval: 100 TUs
	capability: ESS ShortPreamble ShortSlotTime (0x0421)
	signal: -14.00 dBm
	last seen: 70 ms ago
	Information elements from Probe Response frame:
	SSID: OpenWrt-2
	Supported rates: 1.0* 2.0* 5.5* 11.0* 6.0 9.0 12.0 18.0 
	DS Parameter set: channel 11
	ERP: <no flags>
	Extended supported rates: 24.0 36.0 48.0 54.0 
	BSS Load:
		 * station count: 100
		 * channel utilisation: 19/255
		 * available admission capacity: 0 [*32us]
	Supported operating classes:
		 * current operating class: 81
	HT capabilities:
		Capabilities: 0x19ed
			RX LDPC
			HT20
			SM Power Save disabled
			RX HT20 SGI
			RX HT40 SGI
			TX STBC
			RX STBC 1-stream
			Max AMSDU length: 7935 bytes
			DSSS/CCK HT40
		Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
		Minimum RX AMPDU time spacing: 8 usec (0x06)
		HT TX/RX MCS rate indexes supported: 0-15
	HT operation:
		 * primary channel: 11
		 * secondary channel offset: no secondary
		 * STA channel width: 20 MHz
		 * RIFS: 0
		 * HT protection: no
		 * non-GF present: 1
		 * OBSS non-GF present: 0
		 * dual beacon: 0
		 * dual CTS protection: 0
		 * STBC beacon: 0
		 * L-SIG TXOP Prot: 0
		 * PCO active: 0
		 * PCO phase: 0
	Extended capabilities:
		 * Extended Channel Switching
		 * UTF-8 SSID
		 * Operating Mode Notification
	WMM:	 * Parameter version 1
		 * u-APSD
		 * BE: CW 15-1023, AIFSN 3
		 * BK: CW 15-1023, AIFSN 7
		 * VI: CW 7-15, AIFSN 2, TXOP 3008 usec
		 * VO: CW 3-7, AIFSN 2, TXOP 1504 usec


BSS 32:23:03:81:9c:29(on sta01000) -- associated
	TSF: 0 usec (0d, 00:00:00)
	freq: 5745
	beacon interval: 100 TUs
	capability: ESS (0x0001)
	signal: -19.00 dBm
	last seen: 32 ms ago
	Information elements from Probe Response frame:
	SSID: OpenWrt-5hi
	Supported rates: 6.0* 9.0 12.0* 18.0 24.0* 36.0 48.0 54.0 
	DS Parameter set: channel 149
	BSS Load:
		 * station count: 64
		 * channel utilisation: 145/255
		 * available admission capacity: 0 [*32us]
	Supported operating classes:
		 * current operating class: 128
	HT capabilities:
		Capabilities: 0x9ef
			RX LDPC
			HT20/HT40
			SM Power Save disabled
			RX HT20 SGI
			RX HT40 SGI
			TX STBC
			RX STBC 1-stream
			Max AMSDU length: 7935 bytes
			No DSSS/CCK HT40
		Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
		Minimum RX AMPDU time spacing: 8 usec (0x06)
		HT TX/RX MCS rate indexes supported: 0-15
	HT operation:
		 * primary channel: 149
		 * secondary channel offset: above
		 * STA channel width: any
		 * RIFS: 0
		 * HT protection: no
		 * non-GF present: 1
		 * OBSS non-GF present: 0
		 * dual beacon: 0
		 * dual CTS protection: 0
		 * STBC beacon: 0
		 * L-SIG TXOP Prot: 0
		 * PCO active: 0
		 * PCO phase: 0
	Extended capabilities:
		 * Extended Channel Switching
		 * UTF-8 SSID
		 * Operating Mode Notification
		 * Max Number Of MSDUs In A-MSDU is unlimited
	VHT capabilities:
		VHT Capabilities (0x338819b2):
			Max MPDU length: 11454
			Supported Channel Width: neither 160 nor 80+80
			RX LDPC
			short GI (80 MHz)
			TX STBC
			SU Beamformer
			SU Beamformee
			MU Beamformer
			RX antenna pattern consistency
			TX antenna pattern consistency
		VHT RX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT RX highest supported: 0 Mbps
		VHT TX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT TX highest supported: 0 Mbps
	VHT operation:
		 * channel width: 1 (80 MHz)
		 * center freq segment 1: 155
		 * center freq segment 2: 0
		 * VHT basic MCS set: 0xfffc
	Transmit Power Envelope:
		 * Local Maximum Transmit Power For 20 MHz: 30 dBm
		 * Local Maximum Transmit Power For 40 MHz: 30 dBm
		 * Local Maximum Transmit Power For 80 MHz: 30 dBm
	WMM:	 * Parameter version 1
		 * u-APSD
		 * BE: CW 15-1023, AIFSN 3
		 * BK: CW 15-1023, AIFSN 7
		 * VI: CW 7-15, AIFSN 2, TXOP 3008 usec
		 * VO: CW 3-7, AIFSN 2, TXOP 1504 usec
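The BSS dumps above use the standard `iw` scan text format, where the 802.11 BSS Load element reports channel utilisation as a fraction of 255 (255 = fully busy). A minimal sketch, assuming that format, for pulling the per-BSS signal and utilisation into a summary; the helper name is illustrative:

```python
import re

def summarize_bss(scan_text):
    """Extract signal (dBm) and channel utilisation (%) per BSSID from
    iw-style scan output, e.g. 'signal: -21.00 dBm' and
    '* channel utilisation: 15/255'."""
    summary = {}
    # Each BSS block starts at column 0 with 'BSS <bssid>'.
    for bss in re.split(r"\n(?=BSS )", scan_text.strip()):
        m_bssid = re.match(r"BSS ([0-9a-f:]{17})", bss)
        if not m_bssid:
            continue
        m_sig = re.search(r"signal: (-?\d+(?:\.\d+)?) dBm", bss)
        m_util = re.search(r"channel utilisation: (\d+)/255", bss)
        summary[m_bssid.group(1)] = {
            "signal_dbm": float(m_sig.group(1)) if m_sig else None,
            # Convert the x/255 BSS Load fraction to a percentage.
            "utilisation_pct": round(100 * int(m_util.group(1)) / 255, 1)
            if m_util else None,
        }
    return summary
```

For example, the 15/255 utilisation reported for OpenWrt-5lo is about 5.9%, while the 145/255 on OpenWrt-5hi is about 56.9%, which is worth noting when comparing per-band throughput.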



Generated by Candela Technologies LANforge network testing tool.
www.candelatech.com