
Transcript of …gdicaro/15382/additional/local-search-and-ILS.pdf

  • 2-Opt (Lin and Kernighan, 1973)


    METAHEURISTIC ALGORITHMS 6

    Examples of neighborhoods: TSP

    Travelling Salesman Problem: given a set of points (customers) and
    a weighted graph (a road network), visit all the customers exactly
    once, at minimum cost

    Construction of an initial solution:

    – Constructive heuristic

    Improvement of the initial solution via local search

    2-opt (Lin and Kernighan, 1973):

    [Figure: a 2-opt move on vertices i, j, k, l; edges (i, j) and (k, l) are removed and the tour is reconnected with (i, k) and (j, l).]

    Two edges/arcs are selected (in all possible ways), removed, and the
    circuit is closed again

    • Two edges (i, j) and (k, l) are selected, removed, and replaced by two other edges (i, k) and (j, l) (or (k, i) and (l, j)) → one of the two sub-paths always gets reversed!

    • Gain in terms of tour length: (i, k) + (j, l) − (i, j) − (k, l)

    • The exchange with the best gain is selected and executed

    • For each edge (i, j) all the other n − 1 edges must be gain-checked: n(n − 1) = O(n²) possible moves in the 2-exchange neighborhood

    • d(s, s′) = 4: in an undirected graph, the 2-exchange neighborhood consists in fact of applying four move operations: delete (i, j), insert (i, k), delete (k, l), insert (j, l)
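    A minimal sketch of one best-improvement 2-opt pass (the helper names are mine; `dist` is assumed to be a symmetric distance matrix):

      def best_two_opt_move(tour, dist):
          """Scan the O(n^2) 2-exchange moves and return the best one as
          (gain, i, k), where reversing tour[i+1:k+1] realizes the move;
          None if no improving move exists."""
          n = len(tour)
          best = None
          for i in range(n - 1):
              a, b = tour[i], tour[i + 1]            # edge (i, j) in the slide's notation
              for k in range(i + 2, n - (i == 0)):   # skip pairs sharing a city
                  c, d = tour[k], tour[(k + 1) % n]  # edge (k, l)
                  # gain = added edges - removed edges; negative = shorter tour
                  gain = dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
                  if gain < -1e-12 and (best is None or gain < best[0]):
                      best = (gain, i, k)
          return best

      def apply_two_opt(tour, i, k):
          # Reconnecting with (i, k) and (j, l) reverses one of the two sub-paths
          tour[i + 1:k + 1] = reversed(tour[i + 1:k + 1])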


  • Example 2-opt (1)


    Heuristics, Gambardella, 145

    2-opt: example (1/3)

    [Figure: Step 1 on the initial tour; the edges to be removed, the resulting subpaths, the inverted subtour, and the new edges.]


  • Example 2-opt (2)


    Heuristics, Gambardella, 146

    2 – OPT: example (2/3)

    Edges to b e removed

    Step 1

    New edges

    Step 2


  • Example 2-opt (3)


    Heuristics, Gambardella, 147

    2-opt: example (3/3)

    [Figure: Steps 2 and 3; in each step, the edges to be removed and the new edges.]


  • 3-Opt


    Algorithm 3-opt

    In a 3-opt algorithm, when removing 3 edges there are 2³ − 1 = 7 alternative feasible solutions:

    • Including the initial solution, as well as the 2-opt moves, there is a total of 2³ feasible rewirings for each selected triple of edges

    • n(n − 1)(n − 2) = O(n³) total moves

    • One 3-opt move does not revert the path → appropriate for the asymmetric TSP
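    A sketch enumerating the 2³ rewirings for a chosen triple of cut points (function name and cut-point convention are mine):

      def three_opt_rewirings(tour, i, j, k):
          """All 2^3 = 8 rewirings for cut points 0 < i < j < k <= len(tour).
          The cuts remove edges (tour[i-1], tour[i]), (tour[j-1], tour[j])
          and (tour[k-1], tour[k % len(tour)]). Segments B and C may each
          be reversed and/or swapped: one combination is the original tour,
          some are plain 2-opt moves, the rest are proper 3-opt moves."""
          A, B, C, D = tour[:i], tour[i:j], tour[j:k], tour[k:]
          rewirings = []
          for b in (B, B[::-1]):
              for c in (C, C[::-1]):
                  rewirings.append(A + b + c + D)   # keep B before C
                  rewirings.append(A + c + b + D)   # swap the two segments
          return rewirings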


  • 2-opt vs. 3-opt solution


    [Figure: a 2-opt locally optimal tour vs. a 3-opt locally optimal tour on the same instance.]

    Local Search Heuristics

    LocalSearch(ProblemInstance x)
      y := feasible solution to x;
      while ∃ z ∈ N(y) : v(z) < v(y)
        y := z;
      return y;
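    A runnable rendering of this generic scheme (the function and parameter names are placeholders, not from the slides):

      def local_search(x, initial_solution, neighbors, value):
          """Generic iterative improvement: repeatedly move to an improving
          neighbor; stop at a local optimum of the neighborhood N."""
          y = initial_solution(x)
          improved = True
          while improved:
              improved = False
              for z in neighbors(y):
                  if value(z) < value(y):   # improving neighbor found
                      y = z                 # first-improvement step
                      improved = True
                      break
          return y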

  • 4-Opt: Double bridge


    Heuristics, Gambardella, 150

    [Figure: double-bridge move on a tour through vertices a, b, c, d, e, f, g, h: the tour is cut into four segments that are reconnected in a different order.]

    • Does not invert subtours

    • Computational complexity O(n²)

    • Often used in conjunction with 2-opt and 3-opt
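    A minimal sketch of the double-bridge move (the random choice of cut points is an assumption; the slides do not fix how they are selected):

      import random

      def double_bridge(tour):
          """4-opt double bridge: cut the tour into four segments
          A, B, C, D and reconnect them as A, C, B, D. No segment is
          reversed, and applying the move costs only O(n)."""
          n = len(tour)
          i, j, k = sorted(random.sample(range(1, n), 3))
          A, B, C, D = tour[:i], tour[i:j], tour[j:k], tour[k:]
          return A + C + B + D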


  • 2-opt on TSPLIB using different heuristics for starting solution



    Problem    Random   Nearest N.   Savings
    d198        8.04     3.18*        5.29
    lin318     13.05     5.94*        8.43
    fl417      12.25     7.25         5.38*
    pcb442     12.64     7.82         7.70*
    u574       14.24     7.02*        8.82
    p654       12.40    12.37         8.66*
    rat783     12.31     8.39         8.03*
    pr1002     14.91     8.48*        9.07
    u1060      13.05     9.11         8.94*
    pcb1173    12.85     9.42         7.78*
    d1291      17.72     9.62         6.22*
    rl1323     15.89     7.88         6.56*
    fl1400     12.50     9.79         8.85*
    u1432      14.24    10.07         8.83*
    fl1577     21.42     8.15*       12.59
    d1655      16.42     8.29*       12.36
    vm1748     12.74     8.58*        9.20
    rl1889     14.22     8.64         8.55*
    u2152      19.89    10.02         9.64*
    pr2392     16.20     8.27*        9.57
    pcb3038    16.29     8.34*        8.36
    fl3795     13.52     8.57*       11.37
    fnl4461    14.09     7.77*        8.90
    rl5934     21.07     9.19*       10.98
    Average    14.67     8.42         8.75

    Table 7.8 Results of standard 2-opt for different starting tours

    [Figure 7.9: CPU times for the standard 2-opt heuristic; curves [1], [2], [3] correspond to the three starting tours.]

    • G. Reinelt, The Traveling Salesman, Springer, 1994


  • 3-opt on TSPLIB using different heuristics for starting solution



    We considered the following three different candidate sets:

    1) the 10 nearest neighbor subgraph,

    2) the 10 nearest neighbor subgraph augmented by the Delaunay graph,

    3) a subgraph generated by the candidate heuristic discussed in section 5.3 with parameters w = 15 and k = 10.

    For a given candidate subgraph GC we used the neighborhood set N(i) consisting of all neighbors of i in GC. To limit CPU time (which is cubic in the cardinality of N(i) for every node i in Step (3.1)) the number of nodes in each set N(i) was bounded by 50. We applied the restricted 3-opt heuristic to the four starting tours:

    1) random tour,

    2) nearest neighbor tour,

    3) savings tour,

    4) Christofides tour.

    The Christofides heuristic was executed in its simplified version as it is described in section 6.3. Table 7.20 shows the results with the first candidate set.

    Problem    Random   Nearest N.   Savings   Christofides
    d198        2.86     5.27         1.69      1.24*
    lin318      2.93     2.80         2.16*     4.62
    fl417       5.90     3.68         0.62*     5.24
    pcb442      5.67     1.66*        2.54      3.03
    u574        5.83     3.98         4.21      3.07*
    p654        7.69     3.80         1.06*     5.78
    rat783      4.60     3.47*        4.26      3.88
    pr1002      4.45     3.61         4.02      3.24*
    u1060       7.06     6.17         4.04      3.20*
    pcb1173     5.82     5.70         3.93*     4.29
    d1291      14.40     4.26         2.74*     6.30
    rl1323      7.61     4.51         3.43      3.35*
    fl1400     10.06     6.34         3.86*     7.20
    u1432       7.47     4.83         3.51      3.36*
    fl1577     15.85     8.40         8.71      4.89*
    d1655       9.43     4.23*        5.09      5.17
    vm1748      5.98     6.05         3.84      3.69*
    rl1889      8.22     6.91         5.54      3.78*
    u2152       9.82     5.78         4.85*     4.62
    pr2392      7.15     4.37         4.94      3.62*
    pcb3038     6.82     4.12         4.63      4.09*
    fl3795     17.48    10.27         7.35      4.53*
    fnl4461     4.77     3.45         5.13      3.40*
    rl5934     14.39     6.37         6.74      4.11*
    Average     8.01     5.00         4.12      4.15

    Table 7.20 Results of 3-opt (Variant 1)
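    A brute-force sketch of building such a 10-nearest-neighbor candidate set (assuming Euclidean coordinates; an illustration only, not Reinelt's implementation):

      import math

      def knn_candidate_sets(coords, k=10):
          """For each city i, keep its k nearest neighbors as the candidate
          set N(i); the restricted 3-opt then only gain-checks moves whose
          added edges stay inside these sets."""
          n = len(coords)
          sets = []
          for i, (xi, yi) in enumerate(coords):
              by_dist = sorted((j for j in range(n) if j != i),
                               key=lambda j: math.hypot(coords[j][0] - xi,
                                                        coords[j][1] - yi))
              sets.append(by_dist[:k])
          return sets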

    The savings and the Christofides starting tours lead to the best average results. This follows again our rule of thumb that all relatively simple heuristics profit from a starting tour that gives some guidelines on what a good tour should look like. On the other hand,


  • 2-opt and 3-opt, results for random problems


    Heuristics, Gambardella, 155

    2-opt / 3-opt: results for random problems (Bentley 1992)

    [Figure: results table for 2-opt and 3-opt on random problems.]

    • NN = Nearest Neighbour, DENN = Double-Ended NN, MF = Multiple Fragment, NA, FA, RA = Nearest, Farthest, and Random Addition, NI, FI, RI = Nearest, Farthest, and Random Insertion, MST = Min. Spanning Tree, CH = Christofides, FRP = Fast Recursive Partition


  • Performance comparison on random instances



    Average Percent Excess over the Held-Karp Lower Bound

    N =      10^2   10^2.5  10^3   10^3.5  10^4   10^4.5  10^5   10^5.5  10^6

    Random Euclidean Instances
    GR       19.5   18.8    17.0   16.8    16.6   14.7    14.9   14.5    14.2
    CW        9.2   10.7    11.3   11.8    11.9   12.0    12.1   12.1    12.2
    CHR       9.5    9.9     9.7    9.8     9.9    9.8     9.9    –       –
    2-Opt     4.5    4.8     4.9    4.9     5.0    4.8     4.9    4.8     4.9
    3-Opt     2.5    2.5     3.1    3.0     3.0    2.9     3.0    2.9     3.0

    Random Distance Matrices
    GR       100    160     170    200     250    280     –      –       –
    2-Opt     34     51      70     87     125    150     –      –       –
    3-Opt     10     20      33     46      63     80     –      –       –

    Table 3. Tour quality for 2- and 3-Opt, plus selected tour generation heuristics.


    Note that for Euclidean instances, 2-Opt averages about 5 percentage points better than Christofides, the best of our tour construction heuristics, even though it is started from tours that average 5 to 10 percentage points worse. 3-Opt is another 2 percentage points better still, and at 3% excess over the Held-Karp lower bound it may already be as good as one needs in practice. For random distance matrix instances, the story is considerably less impressive. Although both 2- and 3-Opt offer substantial improvements over the best of our tour generation heuristics on these instances, their tour quality is still decaying at a relatively steady rate as N increases, the excess over the Held-Karp lower bound seeming to grow roughly as log N, just as we observed for Greedy and NN. Results for our testbed of TSPLIB instances are once again similar to those for the random Euclidean instances, although a bit worse. They average about 6.9% above Held-Karp for 2-Opt and 4.6% for 3-Opt, although these figures can be improved a bit if we are willing to spend more time and space in looking for moves.

    There is a gross disparity between the results in Table 3 (even the ones for random distance matrices) and the worst-case bounds of Section 3.1. Given that for those bounds we allowed an adversary to choose the starting tour, it is natural to ask to what extent our experimental results are dependent on our choice of starting tour. In Table 4 we compare results for different starting heuristics (including the use of random starting tours) on the 1000-city instances in our two testbeds. The same general picture emerges for other choices of N, although in the random distance matrix case, all the numbers are bigger for larger N.

    For random Euclidean instances, the most striking observation is that the worst percentage excesses do not correspond to the worst starting heuristic but to the best. Although CW produces the best starting tours by a significant margin, the results of applying 2- and 3-Opt to such tours are worse than those for any of the other starting heuristics, including random starts. This is not an isolated property of CW. As first

    • GR = Greedy heuristic (fragmentation)

    • CW = Clarke-Wright heuristic

    • CHR = Christofides algorithm

    • Randomized GR used to generate the starting tour for 2-opt and 3-opt

    • Results from this and the following tables (including those for SA) are from:

    – D. Johnson and L. McGeoch, The Traveling Salesman Problem: a case study in local optimization, in Local Search in Combinatorial Optimization, E. H. L. Aarts and J. K. Lenstra (editors), John Wiley and Sons, Ltd., 1997


  • Performance comparison on random instances



    Average Percent Excess over the Held-Karp Lower Bound, N = 1000

                 Random Euclidean Instances    Random Distance Matrices
    Algorithm    Start    2-Opt    3-Opt       Start    2-Opt    3-Opt
    Random       2150     7.9      3.8         24500    290      55
    NN           25.9     6.6      3.6         240      96       39
    Greedy       17.6     4.9      3.1         170      70       33
    CW           11.4     8.5      5.0         980      380      56

    Table 4. Tour quality for 2- and 3-Opt using different starting heuristics.

    observed by Bentley [1992], Greedy starts provide better final results for 2- and 3-Opt than any other known starting heuristic, including those that provide better tours per se, such as Farthest Insertion and fast approximations to Christofides. It appears that if local optimization is to make substantial improvements, its starting tour must contain some number of exploitable defects, and if a tour is too good it may not have enough of these. Note, however, that among the remaining three heuristics covered in Table 4, better starting tours do yield better final tours, albeit with the range of differences substantially compressed, especially in the case of the comparison between random starts and the other two heuristics. Ignoring CW, the same remark holds for our random distance matrix instances: better starting tours yield better results (although here the range of initial differences is perhaps not compressed quite as much by local optimization). CW again is an outlier, yielding final results that are worse than those for random starts, even though its initial tours are far better than random ones.

    (Average Number of Moves Made) / N

    N =      10^2   10^2.5  10^3   10^3.5  10^4   10^4.5  10^5   10^5.5  10^6

    Random Euclidean Instances, Greedy Starts
    2-Opt    0.189  0.185   0.182  0.173   0.169  0.154   0.152  0.150   0.147
    3-Opt    0.273  0.256   0.248  0.238   0.233  0.211   0.213  0.209   0.207
    Random Starts
    2-Opt    1.87   2.36    2.67   2.87    3.01   3.14    3.26   3.37    3.48
    3-Opt    1.19   1.36    1.48   1.64    1.77   1.90    2.04   2.18    2.33

    Random Distance Matrices, Greedy Starts
    2-Opt    0.131  0.086   0.042  0.021   0.010  0.004   –      –       –
    3-Opt    0.270  0.216   0.144  0.082   0.057  0.030   –      –       –
    Random Starts
    2-Opt    1.44   1.56    1.56   1.55    1.55   1.55    –      –       –
    3-Opt    1.19   1.30    1.28   1.21    1.12   1.07    –      –       –

    Table 5. Normalized numbers of moves made by 2- and 3-Opt.


  • Running times for k-opt + initial construction heuristic



    Running Time in Seconds on a 150 Mhz SGI Challenge

    N =             10^2   10^2.5  10^3   10^3.5  10^4   10^4.5  10^5    10^5.5  10^6

    Random Euclidean Instances
    CHR             0.03   0.12    0.53   3.57    41.9   801.9   23009   –       –
    CW              0.00   0.03    0.11   0.35    1.4    6.5     31      173     670
    GR              0.00   0.02    0.08   0.29    1.1    5.5     23      90      380
    Preprocessing   0.02   0.07    0.27   0.94    2.8    10.3    41      168     673
    Starting Tour   0.01   0.01    0.04   0.15    0.6    2.7     12      50      181
    2-Opt           0.00   0.01    0.03   0.09    0.4    1.5     6       23      87
    3-Opt           0.01   0.03    0.09   0.32    1.2    4.5     16      61      230
    Total 2-Opt     0.03   0.09    0.34   1.17    3.8    14.5    59      240     940
    Total 3-Opt     0.04   0.11    0.41   1.40    4.7    17.5    69      280     1080

    Random Distance Matrices
    GR              0.02   0.12    0.98   9.3     107    1400    –       –       –
    Preprocessing   0.01   0.11    0.94   9.0     104    1370    –       –       –
    Starting Tour   0.00   0.01    0.05   0.3     3      27      –       –       –
    2-Opt           0.00   0.01    0.03   0.1     0      1       –       –       –
    3-Opt           0.01   0.04    0.16   0.6     3      15      –       –       –
    Total 2-Opt     0.02   0.13    1.02   9.4     108    1400    –       –       –
    Total 3-Opt     0.02   0.16    1.14   9.8     110    1410    –       –       –

    Table 6. Running times for 2- and 3-Opt, plus selected tour construction heuristics.

    something like Θ(N³) for 2-Opt and Θ(N⁴) for 3-Opt. The times we actually observe for the 2- and 3-Optimization phases are definitely subquadratic, and if not O(N log N) then at least no worse than O(N^1.2). As mentioned above, the time for 3-Optimization is only three times greater than that for 2-Optimization, not the factor of N that worst-case theory would imply. Overall time is also decidedly subquadratic. Indeed, as N increases 3-Opt's overall time drops to less than twice that for CW!

    For random distance matrices, the time is slightly worse than quadratic for large N. This is most likely due to memory hierarchy effects, given that it also holds true for preprocessing alone, which is theoretically a Θ(N²) algorithm. The added time for 2- or 3-Opting is itself subquadratic and indeed essentially negligible once N gets large enough. This is consistent with our observation that the number of moves made for these instances under Greedy starts is substantially sublinear.

    In the next section we describe the main implementation ideas that make these high speeds possible. A key factor we exploit is the fact that in local optimization (as opposed to such approaches as tabu search and simulated annealing) we can ignore all non-improving moves.
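    One such implementation idea, candidate neighbor lists combined with "don't look bits" (the bits reappear later in the CLK-ABCC description), can be sketched as follows; all names are mine, and real codes update positions incrementally rather than rebuilding them:

      def two_opt_dont_look(tour, dist, neighbors):
          """2-opt restricted to candidate lists, with don't-look bits:
          a city is re-examined only after an edge incident to it changes.
          Only the successor direction is scanned, for brevity."""
          n = len(tour)
          pos = {city: t for t, city in enumerate(tour)}
          dont_look = {city: False for city in tour}
          queue = list(tour)                      # cities still worth examining
          while queue:
              a = queue.pop()
              if dont_look[a]:
                  continue
              i = pos[a]
              b = tour[(i + 1) % n]               # edge (a, b) to be removed
              found = False
              for c in neighbors[a]:              # candidate new edge (a, c)
                  j = pos[c]
                  d = tour[(j + 1) % n]           # edge (c, d) to be removed
                  if c == b or d == a:
                      continue                    # adjacent edges: no real move
                  gain = dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
                  if gain < -1e-12:               # improving move found
                      lo, hi = sorted((i, j))
                      tour[lo + 1:hi + 1] = reversed(tour[lo + 1:hi + 1])
                      pos = {city: t for t, city in enumerate(tour)}
                      for city in (a, b, c, d):   # endpoints become active again
                          dont_look[city] = False
                          queue.append(city)
                      found = True
                      break
              if not found:
                  dont_look[a] = True             # skip a until its edges change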


  • Iterated Local Search meta-heuristic: Basic idea


    • A problem common to local search approaches is the risk of getting 'trapped' in a local optimum of the used neighborhood function:

    – Best and first improvement schemes do actually end in a local optimum

    – Simulated Annealing tries to escape from local optima by adopting stochastic rules for neighborhood sampling and solution acceptance, and using temperature cooling schedules

    – Tabu Search makes use of memory to adaptively prohibit the access to parts of the search space and to define intensification and diversification strategies

    – The Iterated Local Search meta-heuristic iterates two sequences of local search steps:

    · one for reaching local optima as efficiently as possible (main local search)

    · one for effectively escaping from local optima by intelligently choosing the point from which the main local search should restart


  • Iterated Local Search: pseudo-code


    procedure IteratedLocalSearch()
        Define a neighborhood function N and select a local search algorithm localSearch() for N;
        s := Generate a starting solution;      // e.g., with a construction heuristic
        s := localSearch(N(s));                 // e.g., 3-opt, for TSP
        s_best := s;
        H := (s, s_best);                       // memory of starting solutions, best solutions, ...
        while (¬ termination_criteria(s, H))
            s′ := perturb(s, N(s), H);          // from s, generate a new starting point, possibly not in N(s)
            s′′ := localSearch(N(s′));
            if (f(s′′) < f(s_best))
                s_best := s′′;
            s := accept(s, s′′, H);
            H := H ⊕ (s, s_best);
        end while
        return s_best;
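    A runnable skeleton of the same scheme (the callables and the "accept only if better" rule below are illustrative choices, not fixed by the pseudo-code):

      def iterated_local_search(build, local_search, perturb, cost,
                                max_iters=1000):
          """ILS: alternate a subsidiary local search with a perturbation
          of the current solution; keep the best local optimum seen."""
          s = local_search(build())           # starting solution + first descent
          s_best = s
          for _ in range(max_iters):          # termination criterion: budget
              s1 = perturb(s)                 # new starting point (e.g., double bridge)
              s2 = local_search(s1)           # new local optimum
              if cost(s2) < cost(s_best):
                  s_best = s2                 # best-so-far memory (a minimal H)
              if cost(s2) < cost(s):          # acceptance: monotonically improving
                  s = s2
          return s_best

    With tours as lists, the earlier double_bridge as perturb and a 2-opt based local_search, this gives a basic iterated 2-opt.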


  • Design of ILS algorithms


    • localSearch(): the subsidiary local search procedure has a considerable influence on the performance! It can be anything, even an appropriate implementation of SA or TS, or ACO ...

    • perturb() defines, from the current local optimum, the new starting solution for the local search. It aims to give the local search a good chance to escape from the (basin of the) current optimum and discover a different, possibly better, local optimum in the current iteration. The intensity of the perturbation can be adaptively modulated by using information from H.

    – Weak perturbation: usually leads to a short local search phase but at the same time increases the probability of hitting the same local optimum over and over (stagnation)

    – Strong perturbation: usually determines a long local search phase but decreases the probability of hitting the same local optimum (in the limit, it results in a random restart)

    – The objective is to find a perturbation function that can preserve the good aspects of the current solution while escaping from the current local optimum

    • acceptance() balances intensification and diversification in the space of the local optima:

    – The better of the two solutions is always accepted: the ILS algorithm performs an iterative best improvement search in the space of the local optima reached by the subsidiary local search

    – The new local optimum s′′ is always accepted: the ILS algorithm performs a random walk in the space of the local optima reached by the subsidiary local search procedure

    – Any other criterion: Metropolis rule of SA, non-Markov schemes that also take into account the history H of the search (e.g., the number of steps since the last improvement), ...


  • Basic remarks on ILS


    • ILS algorithms are easy and quick to implement once an effective problem-specific local search procedure is available

    • The perturbation and acceptance functions can be made relatively simple, yet the whole process can deliver effective performance

    • For many classes of problems ILS algorithms represent the state-of-the-art (e.g., Iterated Lin-Kernighan for TSP, Johnson and McGeoch, 1997)

    • In a general sense, the use of an effective problem-specific modification-based local search + 'some' procedure to iteratively define its starting points (e.g., ILS, ACO, GAs, PSO, ...) seems to be the right way to go to attack large combinatorial optimization problems

    • An extensive treatment and references on applications of ILS to several problems can be found in:

    – H. Hoos and T. Stützle, Stochastic Local Search: Foundations and Applications, Morgan Kaufmann, 2005


  • Examples of perturbation mechanisms


    • The double-bridge move based on the 4-exchange neighborhood has been widely used to perturb a TSP tour, with 2-opt or 3-opt with first improvement used as subsidiary localSearch() procedures (e.g., Martin et al., 1991, 1992, 1996).

    [Figure: double-bridge perturbation, before and after, on a tour through vertices a, b, c, d, e, f, g, h.]

    – The edges to be removed are selected such that the combined weight of the four added edges is k times lower than that of the edges in the original solution (k > 1)

    – The edges to be removed are selected randomly

    • Random k-exchange move: at each iteration a k value is drawn from a uniform random distribution and a k-exchange step is executed (e.g., k ∈ {2, 50} in Hong et al. 1997)

    • Perturbation on the instance data: e.g., in a Euclidean TSP the city coordinates are slightly modified, and a run of the subsidiary local search is executed on the modified instance (Codenotti, 1996)

    • Genetic transformation (Katayama and Narihisa, 1999): sub-tours common to the best tour so far and the current locally optimal one are preserved; they are first searched for in O(n) and then the fragments are reconnected using a nearest neighbor heuristic


  • Examples of acceptance criteria


    • Monotonically improving (steepest descent/ascent): the most widely used rule (for TSP)

    • Large-step Markov Chain algorithm (Martin et al., 1991, 1992): the Metropolis acceptance criterion with fixed T ≠ 0 is used (see the sketch after this list). The behavior of the algorithm can be modeled as a Markov chain on the locally optimal solutions obtained by the local search, while the search trajectory between any two subsequent local optima corresponds to a 'large step'

    • Hierarchical LSMC (Hong et al., 1997): the Metropolis criterion with T = 0 is used by default; however, when stagnation is detected (according to the used local search) the temperature is set to f(s)/200 for the following 100 iterations. That is, a deterioration of tour length by 0.5% is accepted with probability 1/e.

    • 'Complete' Simulated Annealing schemes with decreasing temperature (Rohe, 1997)

    • Dynamic random restart: always accept the best among the two candidate solutions; if no improved solution has been found in the last ir iterations, a random restart is executed

    • Fitness Distance Diversification (Stützle & Hoos, 2001) instead of random restart:

    1. From sbest, apply p times a perturbation step followed by a local search phase to obtain a set P of p new candidate solutions

    2. Let Q be the set of the 1 ≤ q ≤ p highest quality solutions from P

    3. Let s̃ be the solution in Q with the maximal distance d(s̃, sbest) from sbest. If d > dmin, return s̃, otherwise go to step 1
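    A minimal sketch of the Metropolis acceptance rule used by the LSMC schemes above (the function name is mine):

      import math
      import random

      def metropolis_accept(f_current, f_new, T):
          """Always accept an improvement; accept a deterioration delta
          with probability exp(-delta / T) (T fixed and nonzero in LSMC)."""
          delta = f_new - f_current
          return delta <= 0 or random.random() < math.exp(-delta / T)

    With the hierarchical LSMC setting T = f(s)/200, a 0.5% deterioration (delta = 0.005 f(s)) is accepted with probability exp(-1) = 1/e, matching the slide.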


  • Performance of different acceptance criteria on TSP


                 ILS-Descent               ILS-Restart               ILS-FDD
    Instance     fopt    ∆avg    tavg      fopt    ∆avg    tavg      fopt    ∆avg    tavg
    rat783       0.71    0.029   238.8     0.51    0.018   384.9     1.0     0       159.5
    pcb1173      0       0.26    461.4     0       0.040   680.2     0.56    0.011   652.9
    d1291        0.08    0.29    191.2     0.68    0.012   410.8     1.0     0       245.4
    fl1577       0.12    0.52    494.6     0.92

  • General performance of state-of-the-art ILS algorithms for TSP


                 Iterated-3-opt            CLK-ABCC                  ILK-H
    Instance     ∆min    ∆avg    ∆max      ∆min    ∆avg    ∆max      ∆min    ∆avg     ∆max
    fnl4461      0.20    0.28    0.35      0.03    0.08    0.11      0.0     0.0016   0.0027
    pla7397      0.22    0.35    0.48      0.040   0.19    0.33      0.011   0.029    0.054
    rl11849      0.44    0.66    0.76      0.16    0.25    0.39      0.044   0.062    0.095
    usa13509     0.39    0.51    0.58      0.097   0.13    0.17      0.029   0.038    0.055

    • CLK-ABCC = Chained Lin-Kernighan (Applegate et al. 1999): LK as local search, variable k-exchange neighborhood, small candidate lists, variation of the double-bridge as perturbation, Quick-Boruvka as construction heuristic, reset strategies for don't look bits, ...

    • ILK-H = Iterated LK Helsgaun (Helsgaun, 2000): the best performing TSP algorithm, LK as local search, perturbation based on a construction heuristic, improvement acceptance criterion, ...

    • Averages over 10 runs, 600 CPU seconds on an Athlon 1.2 GHz with 1 GB RAM

    • Min, max, and average percent deviation from the known optimum
