PerformanceNotesArchive2

Benchmark results of nmap r11670 vs. nmap-perf r11670 vs. nmap r11204

nmap -n -d2 -r --log-errors --max-retries 1 \
     -p 1-65535 scanme.nmap.org -oA perf-scanme
nmap -n -d2 -r --log-errors --max-retries 1 \
     -sP -PS21,23,80,3389 -PE -iL perf-down-hosts-2 -oA perf-down-ping
nmap -n -d2 -r --log-errors --max-retries 1 \
     -sP -PS21,23,80,3389 -PE -iL perf-up-hosts-2 -oA perf-up-ping
nmap -n -d2 -r --log-errors --max-retries 1 \
     -F -iL perf-random-hosts-2 -oA perf-random-F
nmap -n -d2 -r --log-errors --max-retries 1 \
     -F -iL perf-up-hosts-2 -oA perf-up-F

perf-random-hosts-2 is 704 random Internet IPs, culled of DHCP blocks. perf-down-hosts-2 is 200 addresses from that list, mostly down. perf-up-hosts-2 is 200 addresses from the random list, mostly up. (Except in the ucsd scans, where the lists have the same characteristics but the numbers are different.)

This table shows the times taken in the five benchmark scans, over three or four trials, on five different machines. Each time is followed by statistics for checking accuracy, in the following form:

0:02:08  10–34/355
means that a scan took 0:02:08, had 10 hosts up, 34 open ports, and 355 closed ports.
machine/run | scanme | down-ping | up-ping | random-F | up-F
david/nmap-69 | 0:11:55 1–3/3 | 0:00:32 4–0/0 | 0:00:30 191–0/0 | 0:12:07 94–103/2558 | 0:08:49 200–6285/4907
david/nmap-70 | 0:12:06 1–3/3 | 0:00:19 2–0/0 | 0:00:15 191–0/0 | 0:09:03 94–102/2807 | 0:08:55 200–6293/4894
david/nmap-71 | 0:12:04 1–3/3 | 0:00:22 3–0/0 | 0:00:06 190–0/0 | 0:08:39 94–105/2863 | 0:08:02 200–6291/4884
david/nmap-perf-69 | 0:09:52 1–3/3 | 0:00:25 4–0/0 | 0:00:03 188–0/0 | 0:09:18 95–107/2833 | 0:08:12 200–6277/4904
david/nmap-perf-70 | 0:10:00 1–3/2 | 0:00:26 3–0/0 | 0:00:19 191–0/0 | 0:08:39 93–105/2767 | 0:08:07 200–6269/4912
david/nmap-perf-71 | 0:08:59 1–3/3 | 0:00:21 3–0/0 | 0:00:16 190–0/0 | 0:09:17 95–106/2906 | 0:08:06 200–6292/4905
david/nmap-r11204-69 | 0:09:35 1–3/3 | 0:01:35 4–0/0 | 0:00:12 190–0/0 | 0:09:17 97–78/2710 | 0:07:00 200–5067/4290
david/nmap-r11204-70 | 0:09:36 1–3/3 | 0:00:49 3–0/0 | 0:00:25 191–0/0 | 0:07:57 95–76/2700 | 0:07:39 200–5099/4331
david/nmap-r11204-71 | 0:09:34 1–3/3 | 0:01:28 2–0/0 | 0:00:14 190–0/0 | 0:08:35 95–77/2709 | 0:07:40 200–5093/4335
flog/nmap-69 | 0:09:42 1–3/3 | 0:00:53 3–0/0 | 0:00:09 190–0/0 | 0:04:34 175–157/9425 | 0:05:26 200–6287/5128
flog/nmap-70 | 0:08:15 1–3/3 | 0:00:54 5–0/0 | 0:00:27 191–0/0 | 0:04:24 176–154/9704 | 0:05:34 200–6292/5177
flog/nmap-71 | 0:08:43 1–3/3 | 0:00:54 3–0/0 | 0:00:30 190–0/0 | 0:04:50 175–156/9517 | 0:05:19 200–6273/5131
flog/nmap-perf-69 | 0:07:41 1–3/3 | 0:00:13 3–0/0 | 0:00:11 190–0/0 | 0:04:19 170–150/9326 | 0:05:16 200–6300/5143
flog/nmap-perf-70 | 0:08:05 1–3/3 | 0:00:10 5–0/0 | 0:00:19 191–0/0 | 0:04:06 175–156/9711 | 0:05:26 200–6284/5134
flog/nmap-perf-71 | 0:08:06 1–3/3 | 0:00:11 3–0/0 | 0:00:05 188–0/0 | 0:04:35 176–158/9514 | 0:05:18 200–6284/5136
flog/nmap-r11204-69 | 0:07:42 1–3/3 | 0:00:24 3–0/0 | 0:00:22 190–0/0 | 0:04:34 173–154/9421 | 0:07:27 200–6272/5140
flog/nmap-r11204-70 | 0:08:07 1–3/3 | 0:01:00 5–0/0 | 0:00:07 191–0/0 | 0:04:35 173–154/9620 | 0:06:49 200–6289/5129
flog/nmap-r11204-71 | 0:07:51 1–3/3 | 0:00:53 3–0/0 | 0:00:24 190–0/0 | 0:04:35 176–157/9524 | 0:09:01 200–6252/5122
goomba/nmap-69 | 0:07:47 1–3/3 | 0:00:21 3–0/0 | 0:00:03 190–0/0 | 0:08:09 178–169/27639 | 0:07:02 200–18418/14976
goomba/nmap-70 | 0:07:06 1–3/3 | 0:00:12 3–0/0 | 0:00:03 190–0/0 | 0:08:06 177–169/27848 | 0:12:40 200–18417/14995
goomba/nmap-71 | 0:06:46 1–3/3 | 0:00:46 4–0/0 | 0:00:02 190–0/0 | 0:07:00 177–171/27847 | 0:10:39 200–18418/14995
goomba/nmap-perf-69 | 0:06:26 1–3/3 | 0:00:14 3–0/0 | 0:00:03 191–0/0 | 0:07:37 178–169/27601 | 0:06:44 200–18418/14995
goomba/nmap-perf-70 | 0:06:10 1–3/3 | 0:00:14 3–0/0 | 0:00:03 191–0/0 | 0:06:53 176–169/27848 | 0:07:08 200–18418/14994
goomba/nmap-perf-71 | 0:06:27 1–3/3 | 0:00:13 4–0/0 | 0:00:02 191–0/0 | 0:07:24 176–169/27848 | 0:07:57 200–18416/14994
goomba/nmap-r11204-69 | 0:06:22 1–3/3 | 0:00:45 3–0/0 | 0:00:04 191–0/0 | 0:10:59 178–169/27848 | 0:14:30 200–18418/14995
goomba/nmap-r11204-70 | 0:06:10 1–3/3 | 0:02:03 3–0/0 | 0:00:04 191–0/0 | 0:14:15 178–170/28136 | 0:13:19 200–18418/14995
goomba/nmap-r11204-71 | 0:05:28 1–3/3 | 0:00:44 4–0/0 | 0:00:03 191–0/0 | 0:10:26 177–171/27848 | 0:13:51 200–18417/14993
syn/nmap-69 | 0:00:00 0–0/0 | 0:04:52 3–0/0 | 0:00:03 190–0/0 | 0:02:23 176–161/9504 | 0:02:45 200–6309/5169
syn/nmap-70 | 0:00:00 0–0/0 | 0:00:12 3–0/0 | 0:00:04 191–0/0 | 0:02:53 175–155/9504 | 0:02:21 200–6308/5171
syn/nmap-71 | 0:00:00 0–0/0 | 0:00:20 3–0/0 | 0:00:03 190–0/0 | 0:02:07 175–154/9504 | 0:02:12 200–6309/5169
syn/nmap-perf-69 | 0:00:00 0–0/0 | 0:00:12 3–0/0 | 0:00:03 190–0/0 | 0:02:01 176–161/9504 | 0:01:39 200–6309/5170
syn/nmap-perf-70 | 0:00:00 0–0/0 | 0:00:09 3–0/0 | 0:00:02 191–0/0 | 0:02:07 175–155/9504 | 0:01:37 200–6309/5139
syn/nmap-perf-71 | 0:00:01 0–0/0 | 0:00:10 3–0/0 | 0:00:02 190–0/0 | 0:01:40 175–155/9504 | 0:01:50 200–6309/5170
syn/nmap-r11204-69 | 0:00:00 0–0/0 | 0:00:17 3–0/0 | 0:00:02 190–0/0 | 0:03:44 176–161/9504 | 0:06:09 200–6309/5170
syn/nmap-r11204-70 | 0:00:00 0–0/0 | 0:01:46 3–0/0 | 0:00:02 190–0/0 | 0:03:43 176–155/9504 | 0:05:29 200–6307/5168
syn/nmap-r11204-71 | 0:00:00 0–0/0 | 0:01:06 3–0/0 | 0:00:02 190–0/0 | 0:03:39 175–155/9503 | 0:03:19 200–6309/5170
ucsd/nmap-51 | 0:02:10 1–3/3 | 0:01:07 14–0/0 | 0:00:07 558–0/0 | 0:16:20 421–1542/205476 | 0:42:35 921–2052/202989
ucsd/nmap-52 | 0:02:11 1–3/3 | 0:00:59 14–0/0 | 0:00:10 556–0/0 | 0:16:22 417–1524/203353 | 0:42:27 921–2046/202106
ucsd/nmap-53 | 0:02:10 1–3/3 | 0:00:57 14–0/0 | 0:00:07 556–0/0 | 0:16:13 417–1526/203370 | 0:45:22 921–2049/202007
ucsd/nmap-54 | 0:02:10 1–3/3 | 0:01:00 15–0/0 | 0:00:07 555–0/0 | 0:13:50 419–1529/204573 | 0:47:00 921–2099/206662
ucsd/nmap-perf-51 | 0:01:44 1–3/3 | 0:00:57 14–0/0 | 0:00:07 559–0/0 | 0:09:40 417–1528/203559 | 0:24:11 921–2058/203138
ucsd/nmap-perf-52 | 0:01:43 1–3/3 | 0:01:05 14–0/0 | 0:00:06 556–0/0 | 0:09:45 417–1528/203515 | 0:27:34 921–2049/202073
ucsd/nmap-perf-53 | 0:01:44 1–3/3 | 0:00:57 14–0/0 | 0:00:10 558–0/0 | 0:09:46 415–1521/202392 | 0:25:58 921–2057/201590
ucsd/nmap-perf-54 | 0:01:43 1–3/3 | 0:00:49 15–0/0 | 0:00:06 557–0/0 | 0:16:42 419–1532/203774 | 0:29:32 921–2114/211621
ucsd/nmap-r11204-51 | 0:01:43 1–3/3 | 0:01:00 14–0/0 | 0:00:06 558–0/0 | 0:10:25 418–1535/204432 | 0:45:07 921–2048/202843
ucsd/nmap-r11204-52 | 0:01:43 1–3/3 | 0:01:02 14–0/0 | 0:00:05 557–0/0 | 0:11:16 417–1532/203695 | 0:47:33 921–2051/201913
ucsd/nmap-r11204-53 | 0:01:43 1–3/3 | 0:00:52 14–0/0 | 0:00:06 555–0/0 | 0:11:16 418–1529/204080 | 0:42:15 921–2062/202657
ucsd/nmap-r11204-54 | 0:01:43 1–3/3 | 0:01:00 15–0/0 | 0:00:06 555–0/0 | 0:11:10 425–1551/208468 | 0:51:41 921–2155/220668

I noticed only a few accuracy-related changes in the table. david/nmap-r11204/random-F found ≈25% fewer open ports, for reasons I haven't investigated. (Maybe it went too fast? That was the only random-F scan where r11204 won.) david/nmap-perf-70/scanme missed 1 closed port, which is unusual but not unprecedented in either nmap or nmap-perf. goomba reported more total ports because it used /etc/services instead of nmap-services, so -F didn't reduce the port list.
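As a check on the ≈25% figure, here is a quick back-of-envelope computation (not part of the original benchmark), using the david random-F open-port counts as I read them from the table:

```python
# Open-port counts for the david random-F rows, read from the table above.
trunk = [103, 102, 105]    # david/nmap-69 .. nmap-71
r11204 = [78, 76, 77]      # david/nmap-r11204-69 .. nmap-r11204-71

mean = lambda xs: sum(xs) / len(xs)
drop = 1 - mean(r11204) / mean(trunk)
print(f"{drop:.1%}")  # roughly 25% fewer open ports
```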

This chart contains the timing information from the above table. The whiskers in each box show the maximum and minimum time for each machine/benchmark combination; the heavy vertical line is the median time. Farther to the left is better. The different nmaps appear in the following order and colors:

Alternate views of the graph: log scale, dots, dots log scale.
The log scale makes it easier to compare the short times. I find the dots chart easier to read but you have to know what the colors mean.

Global congestion control

I found an example where global congestion control is seriously constraining: nmap -d2 scanme.nmap.org -d2 64.13.1-30.0 -PN -F -n --min-hostgroup 64. scanme gets a few responses right away and grows its congestion window to a healthy size, but the overall sending rate drops to less than 10 packets per second. Why?

**TIMING STATS** (35.0510s): IP, probes active/freshportsleft/
  retry_stack/outstanding/retranwait/onbench,
  cwnd/ssthresh/delay, timeout/srtt/rttvar/
   Groupstats (31/31 incomplete): 60/*/*/*/*/* 60.40/55/* 8880778/2178022/1675689
   64.13.134.52: 0/90/0/0/0/8 24.00/75/0 243796/71276/43130
   64.13.1.0: 1/81/0/2/1/17 10.00/75/0 8880778/-1/-1
   64.13.2.0: 1/82/0/1/0/17 10.00/75/0 8880778/-1/-1
   64.13.3.0: 4/81/0/4/0/15 10.00/75/0 8880778/-1/-1
   64.13.4.0: 2/82/0/3/1/15 10.00/75/0 8880778/-1/-1
   64.13.5.0: 1/85/0/1/0/14 10.00/75/0 8880778/-1/-1
   64.13.6.0: 3/81/0/3/0/16 10.00/75/0 8880778/-1/-1
   64.13.7.0: 2/81/0/3/1/16 10.00/75/0 8880778/-1/-1
   64.13.8.0: 1/84/0/1/0/15 10.00/75/0 8880778/-1/-1
   64.13.9.0: 4/78/0/5/0/18 10.00/75/0 8880778/-1/-1
   64.13.10.0: 0/80/0/2/1/18 2.00/2/0 772849/222769/137520
   64.13.11.0: 0/90/0/0/0/3 27.17/75/0 6865078/3061238/950960
   64.13.12.0: 3/89/0/3/0/8 10.00/75/0 8880778/-1/-1
   64.13.13.0: 2/84/0/3/0/4 34.84/75/0 7610620/2645884/1241184
   64.13.14.0: 4/85/0/4/0/11 10.00/75/0 8880778/-1/-1
   64.13.15.0: 3/86/0/3/0/11 10.00/75/0 8880778/-1/-1
   64.13.16.0: 2/87/0/2/0/11 10.00/75/0 8880778/-1/-1
   64.13.17.0: 3/89/0/3/0/8 10.00/75/0 8880778/-1/-1
   64.13.18.0: 1/89/0/1/0/10 10.00/75/0 8880778/-1/-1
   64.13.19.0: 1/87/0/1/0/12 10.00/75/0 8880778/-1/-1
   64.13.20.0: 4/89/0/4/0/7 10.00/75/0 8880778/-1/-1
   64.13.21.0: 2/87/0/3/1/10 10.00/75/0 8880778/-1/-1
   64.13.22.0: 1/86/0/1/0/13 10.00/75/0 8880778/-1/-1
   64.13.23.0: 2/89/0/2/0/9 10.00/75/0 8880778/-1/-1
   64.13.24.0: 1/89/0/1/0/10 10.00/75/0 8880778/-1/-1
   64.13.25.0: 2/88/0/2/0/10 10.00/75/0 8880778/-1/-1
   64.13.26.0: 3/87/0/4/1/9 10.00/75/0 8880778/-1/-1
   64.13.27.0: 2/88/0/3/0/10 10.00/75/0 8880778/-1/-1
   64.13.28.0: 1/89/0/1/0/10 10.00/75/0 8880778/-1/-1
   64.13.29.0: 3/87/0/4/1/9 10.00/75/0 8880778/-1/-1
   64.13.30.0: 1/90/0/1/0/9 10.00/75/0 8880778/-1/-1

All the other non-responsive hosts use up the global congestion window (60/60.40). Most of them have never gotten a response so their timeouts are the global timeout of almost 9 seconds. The global timeout is so high because of long RTTs on destination unreachable replies. (I repeated the experiment and the timeout settled to about 1.6 s, then once again and it was up near 10 s.) Because the congestion window is perpetually filled, we almost never get to send pings to scanme to expand it.
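A back-of-envelope calculation, using the window size and global timeout from the stats dump above, shows why the rate stalls below 10 packets per second:

```python
# Figures taken from the TIMING STATS dump above.
group_cwnd = 60          # global congestion window (outstanding probes allowed)
timeout_us = 8880778     # global probe timeout, in microseconds

# With the window perpetually full of probes to unresponsive hosts, each
# window slot turns over roughly once per timeout, capping the aggregate rate:
rate = group_cwnd / (timeout_us / 1e6)
print(f"{rate:.1f} packets/s")  # about 6.8 packets/s, matching the observed slowdown
```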

Timing pings

This is a picture of the effect of nmap-perf r11735, which sends a timing ping every 50 probes or 1.25 seconds, whichever comes first. First, using nmap trunk, here is a graph of the host (not global) congestion window and threshold for the scan

nmap -p 1-5000 -r -n -PN -d4 scanme.nmap.org

open 3 closed 3 filtered 4994
Overall sending rates: 112.77 packets / s, 4961.84 bytes / s.
Raw packets sent: 15084 (663.696KB) | Rcvd: 83 (3332B)

Here is the same scan with nmap-perf r11735, on the same x axis:


open 3 closed 3 filtered 4994
Overall sending rates: 117.93 packets / s, 5188.87 bytes / s.
Raw packets sent: 10155 (446.820KB) | Rcvd: 138 (5532B)

Notice that nmap-perf sent 1/3 fewer packets even though it sent 70% more pings (161 vs. 95). This is probably because the nmap scan had max_successful_tryno = 1 while the nmap-perf scan had it at 0.

The shapes of the two curves are quite different. Most significant is that in congestion avoidance mode, the nmap graph is curved while the nmap-perf graph is straight. Growth above the slow start threshold is supposed to be linear, so nmap-perf has it right. The curved shape is in fact a square root, and it is a sign that ping probes are hitting their maximum worth, which is not proportional to the sending rate. To see why, look at the differential equations governing congestion avoidance:

According to RFC 2581, cwnd is increased by 1/cwnd with every response received. It is important to remember that, in the absence of congestion, the rate at which replies are received is itself proportional to cwnd, so we may take dcwnd/dt = 1/cwnd × cwnd = 1. Solving the differential equation gives cwnd(t) = t + C: linear growth.
But now watch what happens when the rate at which replies are received is not proportional to cwnd, but is instead some constant, call it K. Then dcwnd/dt = 1/cwnd × K = K/cwnd. The equation is separable: cwnd dcwnd = K dt. Integrating gives (1/2) cwnd² = Kt + C, or cwnd(t) = sqrt(2Kt + C′), which explains the square root shape.

So the curved shape is a result of the response-rate-scaled congestion control hitting its cap and becoming constant, no longer proportional to cwnd. Sending timing pings more often keeps the response rate below this cap, so it stays proportional, without increasing the cap itself (which would only become a problem again at even higher scan rates). You can see some more square root graphs from long ago at the performance graphs page.
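The two regimes can be illustrated with a small Euler-method simulation (a sketch with made-up parameters, not nmap's actual code): when replies scale with cwnd, growth is linear; when replies are capped at a constant K, growth follows a square root.

```python
def grow(cap=None, t_end=20.0, dt=0.01):
    """Integrate dcwnd/dt = replies/cwnd, where replies = cwnd (uncapped)
    or min(cwnd, cap) when the worth of pings hits a ceiling."""
    cwnd = 10.0
    for _ in range(int(t_end / dt)):
        replies = cwnd if cap is None else min(cwnd, cap)
        cwnd += (replies / cwnd) * dt
    return cwnd

# Uncapped: cwnd(t) = 10 + t, so cwnd(20) = 30.
print(round(grow(), 1))
# Capped at K = 5: cwnd(t) = sqrt(100 + 2*5*t), so cwnd(20) ≈ sqrt(300) ≈ 17.3.
print(round(grow(cap=5), 1))
```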

A repeat of the experiment:


open 3 closed 3 filtered 4994
Overall sending rates: 202.40 packets / s, 8905.50 bytes / s.
Raw packets sent: 10026 (441.144KB) | Rcvd: 30 (1212B)

Here is the same scan with nmap-perf r11735, on the same x axis:


open 3 closed 3 filtered 4994
Overall sending rates: 139.78 packets / s, 6150.36 bytes / s.
Raw packets sent: 10142 (446.248KB) | Rcvd: 135 (5412B)

The same shapes are visible, but this time nmap was faster. More tests are called for.

Benchmark of nmap r11737 vs. nmap-perf r11737

machine/run | scanme | down-ping | up-ping | random-F | up-F
david/nmap-1 | 0:09:56 1–3/3 | 0:00:24 3–0/0 | 0:00:18 190–0/0 | 0:09:46 96–109/2859 | 0:07:29 200–6282/4803
david/nmap-2 | 0:09:50 1–3/3 | 0:00:27 4–0/0 | 0:00:29 190–0/0 | 0:11:02 99–116/3010 | 0:07:38 200–6279/4801
david/nmap-3 | 0:09:42 1–3/2 | 0:00:23 4–0/0 | 0:00:11 190–0/0 | 0:09:17 96–112/2852 | 0:07:51 200–6289/4806
david/nmap-4 | 0:09:52 1–3/3 | 0:00:33 5–0/0 | 0:00:19 190–0/0 | 0:09:01 100–112/3147 | 0:07:50 200–6279/4797
david/nmap-5 | 0:09:56 1–3/3 | 0:00:15 4–0/0 | 0:00:11 190–0/0 | 0:08:52 96–96/3014 | 0:07:43 200–6277/4807
david/nmap-perf-1 | 0:18:24 1–3/3 | 0:00:37 4–0/0 | 0:00:28 190–0/0 | 0:09:12 95–110/2809 | 0:09:10 200–6280/4822
david/nmap-perf-2 | 0:17:36 1–3/2 | 0:00:34 4–0/0 | 0:00:24 188–0/0 | 0:10:08 95–110/2880 | 0:08:39 200–6289/4804
david/nmap-perf-3 | 0:17:24 1–3/3 | 0:00:23 4–0/0 | 0:00:09 190–0/0 | 0:09:53 95–109/3008 | 0:09:01 200–6284/4815
david/nmap-perf-4 | 0:17:18 1–3/3 | 0:00:32 4–0/0 | 0:00:21 190–0/0 | 0:10:25 100–113/3191 | 0:09:07 200–6275/4810
david/nmap-perf-5 | 0:18:29 1–3/3 | 0:00:28 4–0/0 | 0:00:26 190–0/0 | 0:10:10 98–109/3110 | 0:08:44 200–6289/4829
goomba/nmap-1 | 0:06:27 1–3/3 | 0:00:15 4–0/0 | 0:00:04 190–0/0 | 0:10:36 182–172/28697 | 0:06:18 200–18418/14712
goomba/nmap-2 | 0:06:46 1–3/3 | 0:00:14 4–0/0 | 0:00:03 190–0/0 | 0:10:14 181–172/28414 | 0:07:09 200–18416/14711
goomba/nmap-3 | 0:06:54 1–3/3 | 0:00:12 4–0/0 | 0:00:04 190–0/0 | 0:08:10 182–172/28413 | 0:05:55 200–18418/14711
goomba/nmap-4 | 0:06:29 1–3/3 | 0:00:12 4–0/0 | 0:00:03 190–0/0 | 0:07:27 182–173/28412 | 0:06:44 200–18419/14710
goomba/nmap-5 | 0:06:32 1–3/3 | 0:00:14 4–0/0 | 0:00:03 190–0/0 | 0:08:29 181–172/28412 | 0:09:18 200–18406/14711
goomba/nmap-perf-1 | 0:05:32 1–3/3 | 0:00:13 4–0/0 | 0:00:05 187–0/0 | 0:13:00 182–172/28692 | 0:08:55 200–18418/14709
goomba/nmap-perf-2 | 0:05:56 1–3/3 | 0:00:11 4–0/0 | 0:00:07 190–0/0 | 0:11:46 181–172/28413 | 0:07:03 200–18418/14711
goomba/nmap-perf-3 | 0:05:46 1–3/3 | 0:00:13 4–0/0 | 0:00:03 190–0/0 | 0:12:59 183–172/28413 | 0:06:49 200–18418/14711
goomba/nmap-perf-4 | 0:06:07 1–3/3 | 0:00:13 4–0/0 | 0:00:04 190–0/0 | 0:11:01 183–171/28201 | 0:07:11 200–18419/14675
goomba/nmap-perf-5 | 0:05:38 1–3/3 | 0:00:11 4–0/0 | 0:00:03 190–0/0 | 0:09:54 181–171/28695 | 0:07:31 200–18419/14710

Alternate view of the graph: dots log scale.

These results are disappointing. Rate-related pings make the scan take longer in almost every case. My hypothesis is that nmap-perf now detects that it is running too fast sooner than it used to: it doesn't get to coast for a little while at a rate higher than it should, while plain nmap is just a little more daring with its rate. I'll give up on this change.

Here's a graph of the time taken for nmap --pingtime X --max-retries 1 -p 1-5000 -r -n -PN -d2 scanme.nmap.org, for values of X between 100000 and 60000000 (0.1–60 s).

for n in 1 2 3 4 5 6 7 8 9 10 11 12; do
  echo >> pingtime.txt
  for a in 100000 250000 500000 1000000 1250000 1500000 2000000 \
           3000000 4000000 5000000 6000000 7000000 8000000 9000000 \
           10000000 20000000 30000000 40000000 50000000 60000000; do
    echo $n $a
    ./nmap --pingtime $a --max-retries 1 -p 1-5000 -r -n -PN -d2 scanme.nmap.org \
      > scanme-pingtime-$a-$n.nmap
    # Field 11 of the "Nmap done: ..." line is the elapsed time in seconds;
    # record "pingtime-in-seconds elapsed-seconds" pairs for plotting.
    grep ^Nmap\  scanme-pingtime-$a-$n.nmap \
      | gawk "{print $a / 1000000. \" \" \$11}" >> pingtime.txt
  done
done

The above graph is a little misleading, because it shows the requested ping interval on the x axis, not the actual interval. Here is a graph adjusted for actual ping intervals, computed by dividing the scan duration by the number of "PING SENT" lines in the log. Note, for example, that the pings that were supposed to be sent every 0.1 s were actually sent about every 0.2 s.
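The adjustment can be sketched like this (the duration is parsed from the standard "Nmap done: ... scanned in N seconds" line; the demo log is fabricated for illustration):

```python
import re

def actual_ping_interval(log_text):
    """Scan duration divided by the number of "PING SENT" lines in a -d2 log."""
    duration = float(re.search(r"scanned in ([\d.]+) seconds", log_text).group(1))
    pings = log_text.count("PING SENT")
    return duration / pings

# A fabricated log: 250 pings over a 50-second scan means a requested
# 0.1 s interval really came out to about 0.2 s.
demo = "PING SENT\n" * 250 + "Nmap done: 1 IP address (1 host up) scanned in 50.0 seconds\n"
print(actual_ping_interval(demo))  # 0.2
```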

Here are samples of the difference in time taken with various ping timing tweaks. The x axis is changed to go from 0.1 to 1.0; with probe-based pings all actual ping intervals were in that range (the --pingtime option has some effect but not much). The small black dots are from the adjusted graph above.

That last one (pings every 50 probes, loss recovery, magnifier = 3) looks pretty good. Most of the points are in the 40–50 second range, the same as nmap trunk with a 0.1 or 0.25 s requested pingtime. The outliers higher up are from high requested pingtimes (3–60 seconds). I don't know why the magnifier should have such a big effect. Two questions remain: will that last combination outperform nmap trunk, and if so, what's the optimum ping magnifier?

Benchmark of nmap r11744 vs. nmap-perf r11744.

machine/run | scanme | down-ping | up-ping | random-F | up-F
david/nmap-1 | 0:09:10 1–3/3 | 0:00:16 4–0/0 | 0:00:23 189–0/0 | 0:09:11 96–125/2804 | 0:07:36 200–6279/4703
david/nmap-2 | 0:08:58 1–3/3 | 0:00:35 4–0/0 | 0:00:15 189–0/0 | 0:09:10 97–122/3004 | 0:08:07 200–6303/4704
david/nmap-3 | 0:09:52 1–3/3 | 0:00:20 3–0/0 | 0:00:16 188–0/0 | 0:09:30 96–121/2908 | 0:07:40 200–6296/4706
david/nmap-4 | 0:09:26 1–3/3 | 0:00:15 4–0/0 | 0:00:20 189–0/0 | 0:09:15 96–120/2862 | 0:07:50 200–6318/4696
david/nmap-5 | 0:19:38 1–3/2 | 0:00:18 3–0/0 | 0:00:20 190–0/0 | 0:09:05 98–125/3095 | 0:07:40 200–6296/4800
david/nmap-perf-1 | 0:09:14 1–3/2 | 0:00:16 4–0/0 | 0:00:16 189–0/0 | 0:09:18 95–125/2706 | 0:07:58 200–6308/4707
david/nmap-perf-2 | 0:09:26 1–3/3 | 0:00:30 4–0/0 | 0:00:20 189–0/0 | 0:09:00 96–123/3002 | 0:07:49 200–6300/4680
david/nmap-perf-3 | 0:09:21 1–3/3 | 0:00:18 3–0/0 | 0:00:17 189–0/0 | 0:08:44 95–112/2906 | 0:07:40 200–6301/4701
david/nmap-perf-4 | 0:08:37 1–3/3 | 0:00:20 4–0/0 | 0:00:24 189–0/0 | 0:10:12 97–121/2918 | 0:07:54 200–6307/4674
david/nmap-perf-5 | 0:09:19 1–3/3 | 0:00:15 3–0/0 | 0:00:21 190–0/0 | 0:08:50 97–124/3092 | 0:07:47 200–6289/4789
flog/nmap-1 | 0:09:14 1–3/3 | 0:02:09 1–0/0 | 0:00:03 185–0/0 | 0:05:04 173–169/9377 | 0:04:48 200–6281/5036
flog/nmap-2 | 0:08:06 1–3/3 | 0:01:00 0–0/0 | 0:00:10 189–0/0 | 0:04:30 174–179/9458 | 0:05:16 200–6299/5027
flog/nmap-3 | 0:08:13 1–3/3 | 0:01:21 0–0/0 | 0:00:03 186–0/0 | 0:05:02 171–167/9271 | 0:04:33 200–6290/5035
flog/nmap-4 | 0:07:34 1–3/3 | 0:01:18 0–0/0 | 0:00:08 190–0/0 | 0:04:36 173–172/9368 | 0:04:27 200–6286/5036
flog/nmap-perf-1 | 0:07:38 1–3/3 | 0:01:08 1–0/0 | 0:00:14 189–0/0 | 0:04:22 174–171/9464 | 0:05:04 200–6303/5038
flog/nmap-perf-2 | 0:07:41 1–3/3 | 0:01:47 1–0/0 | 0:00:15 190–0/0 | 0:04:22 171–171/9321 | 0:05:10 200–6294/5026
flog/nmap-perf-3 | 0:08:01 1–3/3 | 0:01:32 1–0/0 | 0:00:08 190–0/0 | 0:04:27 174–167/9377 | 0:04:28 200–6286/5044
flog/nmap-perf-4 | 0:08:13 1–3/3 | 0:00:47 0–0/0 | 0:00:18 189–0/0 | 0:04:07 168–165/9269 | 0:04:34 200–6295/5038
goomba/nmap-1 | 0:06:59 1–3/3 | 0:00:13 4–0/0 | 0:00:03 189–0/0 | 0:08:33 180–185/28406 | 0:05:41 200–18432/14428
goomba/nmap-2 | 0:06:19 1–3/3 | 0:00:28 5–0/0 | 0:00:04 188–0/0 | 0:08:53 179–183/28415 | 0:06:19 200–18432/14427
goomba/nmap-3 | 0:06:38 1–3/3 | 0:00:12 4–0/0 | 0:00:03 189–0/0 | 0:07:54 178–183/28414 | 0:05:34 200–18432/14427
goomba/nmap-4 | 0:05:53 1–3/3 | 0:00:15 3–0/0 | 0:00:04 189–0/0 | 0:07:38 177–183/28129 | 0:05:52 200–18432/14428
goomba/nmap-perf-1 | 0:05:15 1–3/3 | 0:00:12 4–0/0 | 0:00:04 189–0/0 | 0:08:36 180–183/28227 | 0:05:51 200–18432/14428
goomba/nmap-perf-2 | 0:05:08 1–3/3 | 0:00:24 5–0/0 | 0:00:06 184–0/0 | 0:08:22 179–183/28415 | 0:05:38 200–18432/14428
goomba/nmap-perf-3 | 0:05:17 1–3/3 | 0:00:12 4–0/0 | 0:00:04 189–0/0 | 0:08:30 177–183/28130 | 0:06:16 200–18432/14428
goomba/nmap-perf-4 | 0:05:24 1–3/3 | 0:00:12 3–0/0 | 0:00:04 189–0/0 | 0:07:45 177–183/28130 | 0:05:55 200–18432/14427
syn/nmap-1 | 0:00:00 0–0/0 | 0:00:42 1–0/0 | 0:00:02 190–0/0 | 0:03:00 175–170/9456 | 0:01:21 200–6324/5074
syn/nmap-2 | 0:00:00 0–0/0 | 0:00:26 1–0/0 | 0:00:03 190–0/0 | 0:02:39 175–171/9455 | 0:01:26 200–6324/5074
syn/nmap-3 | 0:00:00 0–0/0 | 0:00:48 1–0/0 | 0:00:03 190–0/0 | 0:02:58 175–173/9550 | 0:01:29 200–6325/5074
syn/nmap-4 | 0:00:00 0–0/0 | 0:00:23 2–0/0 | 0:00:02 190–0/0 | 0:02:28 176–176/9454 | 0:01:21 200–6324/5074
syn/nmap-perf-1 | 0:00:00 0–0/0 | 0:00:36 1–0/0 | 0:00:03 190–0/0 | 0:03:01 175–170/9455 | 0:01:29 200–6325/4974
syn/nmap-perf-2 | 0:00:00 0–0/0 | 0:01:11 1–0/0 | 0:00:03 190–0/0 | 0:02:55 175–171/9456 | 0:01:33 200–6325/5072
syn/nmap-perf-3 | 0:00:00 0–0/0 | 0:00:50 2–0/0 | 0:00:03 190–0/0 | 0:02:28 176–173/9549 | 0:01:22 200–6325/5070
syn/nmap-perf-4 | 0:00:00 0–0/0 | 0:02:49 1–0/0 | 0:00:02 190–0/0 | 0:02:49 176–176/9455 | 0:01:25 200–6324/4990
ucsd/nmap-1 | 0:01:43 1–3/3 | 0:00:42 39–0/0 | 0:00:10 554–0/0 | 0:14:28 435–1576/212206 | 0:24:21 921–2002/198899
ucsd/nmap-2 | 0:01:44 1–3/3 | 0:00:35 40–0/0 | 0:00:15 538–0/0 | 0:14:07 423–1538/205918 | 0:27:43 921–1991/195249
ucsd/nmap-3 | 0:01:43 1–3/3 | 0:00:38 33–0/0 | 0:00:08 536–0/0 | 0:13:34 414–1526/202041 | 0:28:39 921–1987/193988
ucsd/nmap-4 | 0:01:44 1–3/3 | 0:00:38 31–0/0 | 0:00:12 535–0/0 | 0:13:25 414–1533/202036 | 0:28:55 921–1979/194386
ucsd/nmap-perf-1 | 0:00:51 1–3/3 | 0:00:34 39–0/0 | 0:00:05 554–0/0 | 0:16:12 430–1561/210576 | 0:28:42 921–1993/197337
ucsd/nmap-perf-2 | 0:00:51 1–3/3 | 0:00:42 40–0/0 | 0:00:06 538–0/0 | 0:11:06 419–1539/204425 | 0:28:21 921–1990/195397
ucsd/nmap-perf-3 | 0:00:50 1–3/3 | 0:00:36 33–0/0 | 0:00:06 538–0/0 | 0:10:10 414–1534/202664 | 0:26:50 921–1980/195001
ucsd/nmap-perf-4 | 0:00:51 1–3/3 | 0:00:50 31–0/0 | 0:00:06 535–0/0 | 0:11:27 415–1541/203275 | 0:31:37 921–1977/194197

These results are hard to analyze. In most scans they seem to have had no effect, or a very small improvement. In a few cases nmap-perf looks to be slower: the ucsd -F scans and syn/down-ping. The scanme scans overall showed improvement; in ucsd/scanme, the time taken was roughly halved (about 104 → 51 seconds). The consistency of timing in this case is strong evidence that the improvement was not due to chance.

As usual, green is nmap-perf and blue is nmap.


log scale

tryno equal tests

A test of Daniel Roethlisberger's patch from http://seclists.org/nmap-dev/2009/q1/0387.html that checks for exact tryno equality in responses. This is a repeat of the standard nmap-perf benchmark I've been doing; see above for the exact scans these represent.

0:02:08  10–34/355
means that a scan took 0:02:08, had 10 hosts up, 34 open ports, and 355 closed ports.
machine/run | scanme | down-ping | up-ping | random-F | up-F
gusto/nmap-1 | 0:10:15 1–3/3 | 0:00:36 2–0/0 | 0:00:22 187–0/0 | 0:08:48 93–114/2872 | 0:07:35 200–6278/4710
gusto/nmap-2 | 0:09:53 1–3/3 | 0:00:36 2–0/0 | 0:00:12 187–0/0 | 0:10:30 95–120/2854 | 0:07:56 200–6267/4717
gusto/nmap-3 | 0:09:51 1–3/3 | 0:00:32 1–0/0 | 0:00:22 187–0/0 | 0:08:51 93–114/2766 | 0:07:32 200–6284/4700
gusto/nmap-tryno-1 | 0:09:54 1–3/3 | 0:01:18 2–0/0 | 0:00:21 187–0/0 | 0:06:56 92–83/2627 | 0:05:58 200–5088/4171
gusto/nmap-tryno-2 | 0:10:07 1–3/3 | 0:00:33 2–0/0 | 0:00:10 187–0/0 | 0:07:01 94–83/2698 | 0:06:05 200–5110/4130
gusto/nmap-tryno-3 | 0:10:02 1–3/3 | 0:00:45 2–0/0 | 0:00:23 187–0/0 | 0:08:37 94–83/2626 | 0:05:34 200–5097/4167

With the patch Nmap misses a lot of open and closed ports. In the random-F tests the patch found 28% fewer open ports and 6% fewer closed. In the up-F tests the patch found 19% fewer open and 12% fewer closed.
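The mechanism behind the missed ports is probably late replies. Here is a toy illustration of the matching rule (not the patch itself): a response to try 0 that arrives after try 1 has been sent carries the earlier tryno, so an exact-equality check throws it away.

```python
def response_matches(current_tryno, response_tryno, exact):
    """With exact matching a response must echo the latest tryno;
    the stock behavior accepts any response_tryno <= current_tryno."""
    if exact:
        return response_tryno == current_tryno
    return response_tryno <= current_tryno

# Try 1 is already outstanding when the reply to the original try-0 probe arrives.
print(response_matches(1, 0, exact=True))   # False: the port's state is lost
print(response_matches(1, 0, exact=False))  # True: the late reply still counts
```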

Benchmark of _FORTIFY_SOURCE=2

An overdue benchmark of _FORTIFY_SOURCE=2. The nmap scans are with r11810 and the nmap-fortify scans are with r11811. Defining _FORTIFY_SOURCE doesn't appear to have any effect on performance.

machine/run | scanme | down-ping | up-ping | random-F | up-F
nmap-1 | 0:06:09 1–3/3 | 0:00:30 3–0/0 | 0:00:03 188–0/0 | 0:13:41 174–178/27625 | 0:06:38 200–18426/14702
nmap-2 | 0:06:51 1–3/3 | 0:00:41 3–0/0 | 0:00:03 188–0/0 | 0:10:11 176–181/27926 | 0:05:46 200–18425/14705
nmap-3 | 0:05:37 1–2/3 | 0:00:12 5–0/0 | 0:00:04 187–0/0 | 0:06:45 178–181/27782 | 0:05:41 200–18426/14705
nmap-fortify-1 | 0:05:26 1–3/3 | 0:00:30 2–0/0 | 0:00:04 188–0/0 | 0:07:01 174–178/27649 | 0:06:05 200–18426/14705
nmap-fortify-2 | 0:06:35 1–3/3 | 0:00:28 3–0/0 | 0:00:03 188–0/0 | 0:08:55 177–181/28054 | 0:05:52 200–18425/14704
nmap-fortify-3 | 0:06:24 1–3/3 | 0:00:13 5–0/0 | 0:00:03 188–0/0 | 0:07:41 177–178/27780 | 0:05:52 200–18426/14705
Page last modified on October 15, 2009, at 06:18 PM