AFLB is the Algorithms for Lunch Bunch. Meetings are Thursdays at 12:30 p.m.

Talks are held in Margaret Jacks Hall, on the Main Quad of Stanford's campus.

Contents:

  1. 1 October 1987: Andrew Goldberg (Stanford). A Very Simple Algorithm for the Minimum-Cost Circulation Problem.
  2. 8 October 1987: Marek Karpinski (Univ. of Bonn, IBM Almaden). The Parallel Complexity of Perfect Matching Problems.
  3. 15 October 1987: David Johnson (AT&T Bell Labs). Near-Optimal Solutions to Very Large Traveling Salesman Problems.
  4. 22 October 1987: Thane Plambeck (Stanford). The Complexity of Nim and Taking and Breaking Games.
  5. 29 October 1987: Ramsey Haddad (Stanford University). Cyclic $A^i B C^i$ Paths in Labelled Graphs.
  6. 5 November 1987: Tomas Feder (Stanford). Optimal Algorithms for Approximate Clustering.
  7. 13 November 1987: Anton Dahbura (AT&T Bell Labs, Murray Hill). Optimization Techniques for Testing Finite-State Automata.
  8. 17 November 1987: Larry J. Stockmeyer (IBM Almaden). Interactive Proof Systems With Finite State Verifiers.
  9. 19 November 1987: Valerie King (Berkeley). Lower Bounds on the Complexity of Graph Properties.
  10. 3 December 1987: Noam Nisan (Berkeley). Probabilistic vs. Deterministic Decision Trees and CREW PRAM complexity.
  11. 10 December 1987: John Canny (Berkeley). Robot Motion Planning and Real Geometry.
  12. 14 January 1988: Nathan Linial (IBM Almaden). Distributed Graph Coloring: Global Solutions From Local Data.
  13. 21 January 1988: Nancy Amato (UC Berkeley). An O(n^2 log n) Solution to the Scientific American Train Problem.
  14. 28 January 1988: Problem Session.
  15. 4 February 1988: Nir Shavit. Toward a Non-Atomic Era: L-Exclusion as a Test Case.
  16. 11 February 1988: NO AFLB -- CSD Forum.
  17. 18 February 1988: Herbert Wilf (Stanford). Algorithms and Unique Representations.
  18. 25 February 1988: Lenore Blum (Mills College & UC Berkeley). Computing over the Reals (or any Arbitrary Ring).
  19. 10 March 1988: Shay Kutten (IBM T.J. Watson Research Ctr.). New Models and Algorithms for Future Networks.
  20. 7 April 1988: Stephen Vavasis (Stanford). A Direct Algorithm for Quadratic Local Minimization.
  21. 14 April 1988: Dan Pehoushek (Stanford). The Hamiltonian Hedge Algorithm.
  22. 21 April 1988: Joseph Naor (USC). Parallel Algorithms for Multi-source and Multi-sink Flow in Planar Graphs.
  23. 28 April 1988: Robert W. Floyd (Stanford). An Elementary Analysis of Quicksort with Median Sampling.
  24. 9 May 1988: Alistair Sinclair (University of Edinburgh). Rapidly-Mixing Markov Chains: Some Computational Applications.
  25. 12 May 1988: Richard K. Guy (University of Calgary). Turning Things Over: New Twists in Impartial Games.
  26. 19 May 1988: Ramin Zabih (Stanford). A Rearrangement Search Strategy for Determining Propositional Satisfiability.

1 October 1987

Margaret Jacks Hall 352. 12:30 p.m.

Andrew Goldberg (Stanford)

A Very Simple Algorithm for the Minimum-Cost Circulation Problem

A classical algorithm for finding a minimum-cost circulation consists of repeatedly finding a residual cycle of negative cost and canceling it by pushing enough flow around the cycle to saturate an arc. We show that a judicious choice of cycles for canceling leads to a polynomial bound on the number of iterations in this algorithm. This gives a very simple strongly polynomial algorithm that uses no scaling. A variant of the algorithm that uses dynamic trees runs in O(nm log n * min{log (nC),m log n}) time on a network of n vertices, m arcs, and arc costs of maximum absolute value C. This bound is comparable to those of the fastest previously known algorithms.

Joint work with Robert Tarjan (Princeton and Bell Labs).
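
To make the cycle-canceling loop concrete, here is a minimal Python sketch using a hypothetical arc-list representation: it finds an arbitrary negative-cost residual cycle with Bellman-Ford and saturates it. The talk's actual contribution -- the judicious choice of which cycle to cancel, which yields the polynomial bound -- is not implemented here.

    # Sketch of generic cycle canceling for min-cost circulation.
    # Hypothetical representation: arcs[i] = [u, v, cost, residual_cap],
    # with the reverse of arc i stored at index i ^ 1.

    def find_negative_cycle(n, arcs):
        """Bellman-Ford from all-zero potentials; returns the arc indices
        of some negative-cost residual cycle, or None."""
        dist = [0.0] * n
        pred = [None] * n
        last = None
        for _ in range(n):
            last = None
            for i, (u, v, cost, cap) in enumerate(arcs):
                if cap > 0 and dist[u] + cost < dist[v]:
                    dist[v] = dist[u] + cost
                    pred[v] = i
                    last = v
        if last is None:
            return None                      # no negative cycle: optimal
        for _ in range(n):                   # walk back onto the cycle
            last = arcs[pred[last]][0]
        cycle, v = [], last
        while True:
            cycle.append(pred[v])
            v = arcs[pred[v]][0]
            if v == last:
                return cycle

    def cancel_negative_cycles(n, arcs):
        while True:
            cyc = find_negative_cycle(n, arcs)
            if cyc is None:
                return arcs
            delta = min(arcs[i][3] for i in cyc)   # bottleneck residual cap
            for i in cyc:
                arcs[i][3] -= delta                # push flow around cycle
                arcs[i ^ 1][3] += delta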


8 October 1987

Margaret Jacks Hall 352. 12:30 p.m.

Marek Karpinski (Univ. of Bonn, IBM Almaden)

The Parallel Complexity of Perfect Matching Problems

We construct fast parallel algorithms for deciding, constructing, and enumerating all the perfect matchings in bipartite graphs with polynomially bounded permanents. Some implications towards the parallel complexity of general matching and counting problems are formulated.


15 October 1987

Margaret Jacks Hall 352. 12:30 p.m.

David Johnson (AT&T Bell Labs)

Near-Optimal Solutions to Very Large Traveling Salesman Problems

Most experimental studies of algorithms for the Traveling Salesman Problem (TSP) have concentrated on relatively small test cases, instances with 100 cities or less. In practice, however, much larger instances are frequently encountered, both in engineering and scientific applications. This talk begins by surveying complexity results about the TSP and the status of algorithms for finding optimal solutions to small instances. It then goes on to report the results of experiments with standard TSP heuristics on large instances, from 500 cities to 100,000, examining the trade-offs obtainable between running time and quality of solution. Most of the standard heuristics will be compared, including such new approaches as ``simulated annealing,'' but particular emphasis will be placed on the acknowledged ``champion,'' the sophisticated Lin-Kernighan algorithm. Using various programming tricks, we have implemented a version of this heuristic for the Euclidean case that remains feasible even for 10,000-city instances (8 hours on a minicomputer), and continues to find tours that are within 2% of optimal. For 20,000 or more cities, we could still obtain tours that were within 5% of optimal using Lin-Kernighan as a subroutine in a partitioning scheme suggested by Karp. If one is willing to settle for slightly worse tours, an approximate version of the Christofides heuristic seems to stay within 20% of optimal and has quite acceptable running times even for 100,000 cities.
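
As a hedged illustration of the tour-improvement pattern these heuristics share, here is a minimal 2-opt local search in Python. It is far weaker than Lin-Kernighan, which explores much deeper sequences of edge exchanges, but it shows the basic move of replacing two tour edges with two cheaper ones.

    # 2-opt local improvement for the Euclidean TSP (illustration only).
    import math, random

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def two_opt(points):
        n = len(points)
        tour = list(range(n))
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 2, n if i > 0 else n - 1):
                    a, b = tour[i], tour[i + 1]
                    c, d = tour[j], tour[(j + 1) % n]
                    # swap edges (a,b),(c,d) for (a,c),(b,d) by
                    # reversing the intermediate segment
                    if (dist(points[a], points[c]) +
                        dist(points[b], points[d]) <
                        dist(points[a], points[b]) +
                        dist(points[c], points[d])):
                        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                        improved = True
        return tour

    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(100)]
    print(two_opt(pts))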


22 October 1987

Margaret Jacks Hall 352. 12:30 p.m.

Thane Plambeck (Stanford)

The Complexity of Nim and Taking and Breaking Games

We study algorithms for a broad class of impartial two-player perfect information games known as ``taking and breaking games.'' Included in this class are the games of Nim, Kayles, Dawson's chess, Treblecross, and all finite octal and tetral games. No knowledge of these games will be assumed and the relevant theory will be developed from first principles. Our results include NC algorithms for an infinite subclass of taking and breaking games, and a P-completeness result for the recognition of winning positions of these games when the rules of the game are specified as part of the input.
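
As a concrete taste of the theory developed from first principles in the talk, the sketch below computes Sprague-Grundy values for Kayles (octal game 0.77), in which a move removes one or two adjacent pins and may split a row in two. A position is a first-player win iff its Grundy value is nonzero.

    # Sprague-Grundy values for Kayles (a taking-and-breaking game).
    from functools import lru_cache

    def mex(s):
        """Minimum excluded nonnegative integer."""
        m = 0
        while m in s:
            m += 1
        return m

    @lru_cache(maxsize=None)
    def grundy(n):
        """Grundy value of a row of n pins: remove 1 or 2 adjacent pins,
        possibly splitting the row into two independent rows."""
        moves = set()
        for take in (1, 2):
            for left in range(0, n - take + 1):
                right = n - take - left
                moves.add(grundy(left) ^ grundy(right))
        return mex(moves)

    print([grundy(n) for n in range(12)])
    # known Kayles values begin 0, 1, 2, 3, 1, 4, 3, 2, 1, 4, 2, 6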


29 October 1987

Margaret Jacks Hall 352. 12:30 p.m.

Ramsey Haddad (Stanford University)

Cyclic $A^i B C^i$ Paths in Labelled Graphs

(Joint work with Jeff Naughton, Princeton University.) This problem arises in database queries: given a graph with three types of edges, A, B, and C, and given a node x_0, find all nodes y that are reachable from x_0 by a path of the form A^i B C^i. O(ne) algorithms were previously known for the case when either the A^i or the C^i portion of the path is acyclic. We will show how to achieve O(ne) time for the case when *both* the A^i and C^i portions are cyclic. This involves a combination of numeric and graph ideas: strongly connected components, the period of a graph, intersections of semi-linear sets, and dynamic programming.
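
For intuition about the query itself, here is a hypothetical brute-force sketch that checks exponents i only up to a fixed bound. The point of the talk is precisely that the cyclic, unbounded-i case can still be handled in O(ne) time, which this sketch does not attempt.

    # Brute-force A^i B C^i reachability, for i up to max_i only.
    def aibci(edges, x0, max_i):
        """edges: dict mapping label 'A'/'B'/'C' to a set of (u, v) arcs."""
        def step(frontier, label):
            return {v for (u, v) in edges[label] if u in frontier}

        result = set()
        a_front = {x0}                     # nodes reachable from x0 by A^i
        for i in range(max_i + 1):
            c_front = step(a_front, 'B')   # ... then one B edge
            for _ in range(i):             # ... then exactly i C edges
                c_front = step(c_front, 'C')
            result |= c_front
            a_front = step(a_front, 'A')
        return result

    edges = {'A': {(0, 1), (1, 2)}, 'B': {(2, 2)}, 'C': {(2, 3), (3, 4)}}
    print(aibci(edges, 0, 5))   # {4}: A^2 B C^2 reaches node 4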


5 November 1987

Margaret Jacks Hall 352. 12:30 p.m.

Tomas Feder (Stanford)

Optimal Algorithms for Approximate Clustering

The clustering problem is that of partitioning a given set of n points into k groups, called clusters, so that points within each cluster are near each other. Two objective functions frequently used to measure the performance of a clustering algorithm are (a) the maximum distance between pairs of points in the same cluster, and (b) the maximum distance between the points and "cluster centers"; we refer to the chosen measure as the "cluster size".

We show that, while there is a polynomial time approximation scheme to estimate the number of clusters that are needed for a fixed cluster size, one cannot approximate the optimal cluster size for a fixed number of clusters within a factor close to 2 in polynomial time, unless P=NP. We present an algorithm which achieves this factor of 2 in time O(n log k), and show that this running time is optimal in the algebraic decision tree model. Our approach can be extended to provide optimal O(n log k) approximation algorithms for the related k-centers, k-suppliers, and weighted k-suppliers problems.

This talk represents joint work with Daniel Greene at Xerox PARC.
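
As background, the classical farthest-point heuristic below achieves the factor-2 guarantee in O(nk) time; the talk's algorithm improves the running time to O(n log k). A minimal sketch, assuming points in the plane:

    # Farthest-point heuristic for k-clustering (classical 2-approximation).
    import math

    def farthest_point_clustering(points, k):
        """Greedily pick k centers, each farthest from those chosen so
        far; returns the centers and the resulting cluster size."""
        centers = [points[0]]
        d = [math.dist(p, centers[0]) for p in points]
        for _ in range(k - 1):
            i = max(range(len(points)), key=lambda j: d[j])
            centers.append(points[i])
            d = [min(d[j], math.dist(points[j], points[i]))
                 for j in range(len(points))]
        return centers, max(d)   # max distance to the nearest center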


13 November 1987

Margaret Jacks Hall 352. 12:30 p.m.

Anton Dahbura (AT&T Bell Labs, Murray Hill)

Optimization Techniques for Testing Finite-State Automata

In this talk, methods are discussed for checking that a black-box implementation of a finite-state automaton is functionally equivalent to its specification. This problem, known as design checking, or testing, is a special case of the well-known machine distinguishability problem studied for over thirty years. The design checking problem has become increasingly important in communications network design for testing the conformance of a network layer protocol to its specification. It has been shown recently by the author that under certain conditions, optimization techniques can be used to design highly effective test sequences for testing a protocol which is modeled as an FSA. Extensions of these techniques to less ideal conditions are considered and some open problems are presented.
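
A minimal harness for this setting, under the simplifying assumption that both the specification and the implementation are deterministic Mealy machines given as transition tables: conformance testing applies input sequences to both and compares the output sequences. The optimization techniques in the talk concern choosing few, highly effective test sequences; the harness itself is the easy part.

    # Hypothetical encoding: machine[(state, symbol)] = (next_state, output).
    def run(machine, start, inputs):
        state, outputs = start, []
        for a in inputs:
            state, out = machine[(state, a)]
            outputs.append(out)
        return outputs

    def conforms(spec, impl, start, test_sequences):
        """True if the implementation matches the spec on every test."""
        return all(run(spec, start, seq) == run(impl, start, seq)
                   for seq in test_sequences)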


17 November 1987

Margaret Jacks Hall 352. 12:30 p.m.

Larry J. Stockmeyer (IBM Almaden)

Interactive Proof Systems With Finite State Verifiers

An Interactive Proof System (IPS) for membership in a language L is a two-party protocol whereby a ``prover'' can convince a ``verifier'' that elements x are in L. Both the prover and the verifier may make random moves (flip coins).

To date, almost all research in interactive proof systems has dealt with the case that the verifier is a probabilistic Turing machine (ptm) which runs in polynomial time. Several classes of interactive proof systems have been defined, for example, (1) the verifier may employ either public coins or private coins, defined according to whether the prover can see the outcomes of the random choices made by the verifier; (2) the IPS may or may not be zero-knowledge, meaning, roughly speaking, that no verifier can extract any information from the interactive proof that x is in L except for the fact that x is in L. Many previous results for ptm verifiers depend on unproved assumptions.

By restricting attention to verifiers which are 2-way probabilistic finite state automata (2pfa's), we obtain results about IPS's without unproved assumptions. This talk will focus on three results: (1) private coins are more powerful than public coins; (2) IPS's with public coins are more powerful than 2pfa's alone; (3) there is a language with an IPS but with no zero-knowledge IPS.

(This is joint work with Cynthia Dwork who will speak at the Nov. 12 BATS on this work. Different material will be emphasized in the two talks.)


19 November 1987

Margaret Jacks Hall 352. 12:30 p.m.

Valerie King (Berkeley)

"Lower Bounds on the Complexity of Graph Properties"

We consider a simple model in which a decision tree algorithm must determine whether an unknown graph on nodes {1,2,...,n} has a given property by asking questions of the form "Is edge {i,j} in the graph?". The complexity of a property is the number of questions which must be asked in the worst case. A property is called evasive if its complexity is equal to the number of edges in the complete graph.

In 1973, Aanderaa and Rosenberg conjectured that any property which is a graph or digraph property (invariant under graph isomorphism) and is monotone (not destroyed by the deletion of edges) and nontrivial (holds for some but not all graphs) has complexity at least c n^2 where n is the number of nodes and c is a constant. This conjecture was proved by Rivest and Vuillemin who found a constant of 1/16.

In 1984, Kahn, Saks, and Sturtevant discovered that algebraic topology could be used to prove evasiveness when n is a prime power. A consequence of this was a lower bound of almost 1/4 n^2 for graph and digraph properties and general n. Recently, their technique was used to prove evasiveness for all monotone, nontrivial bipartite graph properties (Yao, 1987) and a further improvement on the Aanderaa-Rosenberg constant, to almost 1/2 n^2 for all monotone nontrivial digraph properties (King, 1987).

This talk will describe the method developed by KSS and its new applications.
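
To make the definitions concrete, the brute-force sketch below computes the decision tree complexity of a property on 4 vertices by game-tree search: the algorithm picks the edge query, an adversary picks the answer. Connectivity, for example, is evasive, so the sketch reports that all 6 edges must be probed. This only illustrates the definition; nothing here reflects the topological method of KSS.

    # Brute-force decision tree complexity of a graph property on 4 nodes.
    from functools import lru_cache
    from itertools import combinations

    EDGES = frozenset(combinations(range(4), 2))   # the 6 possible edges

    def decision_tree_complexity(prop):
        """prop maps a frozenset of edges to True/False."""
        @lru_cache(maxsize=None)
        def d(present, absent):
            unknown = EDGES - present - absent
            # property values over all completions of the known edges
            values = {prop(present | frozenset(s))
                      for r in range(len(unknown) + 1)
                      for s in combinations(sorted(unknown), r)}
            if len(values) == 1:
                return 0                 # property already determined
            # the algorithm minimizes; the adversary's answer maximizes
            return 1 + min(max(d(present | {e}, absent),
                               d(present, absent | {e}))
                           for e in unknown)
        return d(frozenset(), frozenset())

    def connected(edges):
        reach, changed = {0}, True
        while changed:
            changed = False
            for (u, v) in edges:
                if (u in reach) != (v in reach):
                    reach |= {u, v}
                    changed = True
        return len(reach) == 4

    print(decision_tree_complexity(connected))   # 6: connectivity is evasive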


3 December 1987

Margaret Jacks Hall 352. 12:30 p.m.

Noam Nisan (Berkeley)

Probabilistic vs. Deterministic Decision Trees and CREW PRAM complexity

In the boolean decision tree model the only cost associated with an algorithm is the number of input bits examined. We compare the deterministic versus probabilistic complexities of boolean functions in this model, both for 1-sided error probabilistic computation and for 2-sided error computation. In both cases we show a polynomial relationship.

Our techniques involve the new notion of the "block sensitivity" of a function, a generalization of the critical sensitivity measure. They apply to a completely different model of computation as well: the CREW PRAM. We show that the block sensitivity completely characterizes the complexity in the CREW PRAM model (with an unbounded number of processors). We get that the parallel time it takes to compute f on a CREW PRAM with an unlimited number of processors is equal (up to a constant factor) to the logarithm of the boolean decision tree complexity of f.
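
A minimal sketch of the block sensitivity measure, computed by (exponential-time) brute force for a function on a few bits; it illustrates the definition only, not the talk's results.

    # Block sensitivity bs(f): the maximum over inputs x of the number of
    # pairwise disjoint coordinate blocks whose flipping changes f(x).
    from itertools import combinations, product

    def flip(x, block):
        return tuple(1 - x[i] if i in block else x[i] for i in range(len(x)))

    def max_disjoint(blocks):
        """Maximum number of pairwise disjoint blocks (brute force)."""
        if not blocks:
            return 0
        first, rest = blocks[0], blocks[1:]
        return max(max_disjoint(rest),                       # skip first
                   1 + max_disjoint([b for b in rest
                                     if not (first & b)]))   # take first

    def block_sensitivity(f, m):
        best = 0
        for x in product((0, 1), repeat=m):
            sensitive = [set(b) for r in range(1, m + 1)
                         for b in combinations(range(m), r)
                         if f(flip(x, b)) != f(x)]
            best = max(best, max_disjoint(sensitive))
        return best

    # OR on 4 bits: at the all-zero input every singleton is sensitive,
    # so the block sensitivity equals 4.
    print(block_sensitivity(lambda x: int(any(x)), 4))   # 4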


10 December 1987

Margaret Jacks Hall 352. 12:30 p.m.

John Canny (Berkeley)

Robot Motion Planning and Real Geometry

The goal of this work is to come up with fast motion planning algorithms, in both a formal and practical sense, which have some kind of performance guarantee. The main result is for the generalized movers' problem, which is motion planning for an arbitrary robot with kinematic constraints only. I will present an algorithm for this problem called the ``roadmap algorithm'' which equals or improves the bounds of most of the special purpose exact algorithms that have appeared over the last few years. The algorithm also involves lower constants than the others and is a plausible candidate for implementation. I will also consider motion planning with dynamic constraints, minimal motion planning, and motion planning with uncertainty, and give new lower bounds for these problems. The lower bound results are joint work with John Reif of Duke University.

The roadmap algorithm is in effect a decision algorithm for the first order existential theory of real numbers. It makes use of results in singularity theory, in particular the notion of stratified sets, which provide a very natural and efficient way to represent geometric objects. It also uses a new (actually very old but obscure) algebraic tool called the multivariate resultant. This latter tool gives a fast parallel method for solving systems of equations, and should give improvements in algorithms for inverse kinematics, CSG modeling, and geometric theorem proving.
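
The one-variable ancestor of that tool is the classical Sylvester resultant, sketched below: it vanishes exactly when the two polynomials share a common root, which hints at why resultants help eliminate variables from systems of equations. The multivariate resultant used in the talk is a substantially more involved construction.

    # Sylvester resultant of two univariate polynomials (exact arithmetic).
    from fractions import Fraction

    def sylvester_resultant(p, q):
        """p, q: coefficient lists, highest degree first."""
        m, n = len(p) - 1, len(q) - 1
        rows = []
        for i in range(n):                    # n shifted copies of p
            rows.append([0] * i + p + [0] * (n - 1 - i))
        for i in range(m):                    # m shifted copies of q
            rows.append([0] * i + q + [0] * (m - 1 - i))
        return determinant([[Fraction(x) for x in row] for row in rows])

    def determinant(a):
        """Fraction-exact Gaussian elimination."""
        n, det = len(a), Fraction(1)
        for col in range(n):
            piv = next((r for r in range(col, n) if a[r][col]), None)
            if piv is None:
                return Fraction(0)
            if piv != col:
                a[col], a[piv] = a[piv], a[col]
                det = -det
            det *= a[col][col]
            for r in range(col + 1, n):
                f = a[r][col] / a[col][col]
                for c in range(col, n):
                    a[r][c] -= f * a[col][c]
        return det

    # x^2 - 1 and x - 1 share the root x = 1, so the resultant is 0:
    print(sylvester_resultant([1, 0, -1], [1, -1]))   # 0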


14 January 1988

Margaret Jacks Hall 352. 12:30 p.m.

Nathan Linial (IBM Almaden)

Distributed Graph Coloring: Global Solutions From Local Data

We deal with the following model of distributed computing: the processors reside at the vertices of a graph G and have distinct IDs in the range 1,...,n. The computation is synchronous and reliable, there is no bound on message lengths, and computing is free. This is an artificial model which focuses only on the diameter of the neighborhood from which each processor collects data. In particular, all problems can be solved in time diameter(G).

The following three results are proved:

- The n-cycle cannot be 3-colored faster than log* n (this is tight by previous work; a sketch of the matching upper-bound step appears below) and cannot be 2-colored faster than cn.

- The d-regular tree cannot be colored with sqrt(d) colors faster than diameter/3.

- Every graph may be O(D^2)-colored in time O(log* n), where D is the largest degree.

More realistic models for distributed systems, in which computations must be local, will also be mentioned.
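
As referenced in the first result above, here is a sketch of the classic deterministic color-reduction round on the ring (due to Cole and Vishkin), the technique behind the matching O(log* n) upper bound. Each processor recolors itself using only its own color and its successor's color; after O(log* n) rounds at most 6 colors remain (a few further standard rounds reduce 6 to 3, not shown).

    # One Cole-Vishkin color-reduction round on a directed ring.
    # Assumes adjacent colors are distinct (initially true: distinct IDs)
    # and that processor i's successor is i+1 mod n.
    def reduce_colors(colors):
        n = len(colors)
        new = []
        for i in range(n):
            a, b = colors[i], colors[(i + 1) % n]
            x = a ^ b                            # nonzero since a != b
            k = (x & -x).bit_length() - 1        # lowest differing bit
            new.append(2 * k + ((a >> k) & 1))   # <bit index, my bit there>
        return new

    ring = [5, 12, 7, 3, 9, 14]        # distinct IDs as initial colors
    for _ in range(3):                 # O(log* n) rounds suffice
        ring = reduce_colors(ring)
    print(ring)                        # at most 6 colors, neighbors distinct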


21 January 1988

Margaret Jacks Hall 352. 12:30 p.m.

Nancy Amato (UC Berkeley)

An O(n^2 log n) Solution to the Scientific American Train Problem

The following problem appeared in the "Computer Recreations" column by A.K. Dewdney in the June 1987 issue of Scientific American. The problem has its origins around the turn of the century. In the October and November issues he discusses some solutions.

There is a train chugging along a straight stretch of track from station A to station B. The engineer realizes he needs to return to station A and is not very excited about the prospect of backing the train all the way. Luckily there is a short spur line off to the right side of the track. It is just large enough to hold one car and has a switch for either direction of track.

The engineer must use this spur line to turn the entire train around. This entails reversing the ordering of the cars, and the orientation of each individual car. We count one unit of work as moving one car one car-length on the track. Dewdney has an O(n^3) solution, where n is the number of cars, and he challenges his readers to do better.

We present an O(n^2 log n) algorithm for solving a variation of this problem in which all cars are taken to be locomotives, i.e., we have n self-propelled cars. We then outline a proof that the algorithm can be simulated by the original single-locomotive train with at most a constant-factor time loss.


28 January 1988

Margaret Jacks Hall 352. 12:30 p.m.

Problem Session


4 February 1988

Margaret Jacks Hall 352. 12:30 p.m.

Nir Shavit

Toward a Non-Atomic Era: L-Exclusion as a Test Case

Most of the research in concurrency control has been based on the existence of strong synchronization primitives such as test and set. Following Lamport, recent research promoting the use of weaker ``safe'' rather than ``atomic'' primitives has resulted in constructions of atomic registers from safe ones, in the belief that they would be useful tools for process synchronization. It has been shown that using such atomic registers it is impossible to create strong synchronization primitives such as test and set. We therefore advocate a different approach: skip the intermediate step of achieving atomicity, and solve problems directly from safe registers. We show how to achieve a fair solution to $\ell$-exclusion, a problem previously solved assuming a very powerful form of test and set. We do so using safe registers alone and without introducing atomicity. The solution is based on the construction of simple novel synchronization primitives that are non-atomic.


11 February 1988

Margaret Jacks Hall 352. 12:30 p.m.

NO AFLB -- CSD Forum


18 February 1988

Margaret Jacks Hall 352. 12:30 p.m.

Herbert Wilf (Stanford)

Algorithms and Unique Representations

Several years ago I worked out a graph theoretical structure in which a number of algorithms for sequencing members of combinatorial families could be carried out. Yet another category of applications of these structures has to do with unique representations of integers, and in this talk I will discuss those theorems.


25 February 1988

Margaret Jacks Hall 352. 12:30 p.m.

Lenore Blum (Mills College & UC Berkeley)

Computing over the Reals (or any Arbitrary Ring)

Classically, the theory of computation deals with discrete structures (e.g., the integers or the rationals). Here, foundational issues (such as what are appropriate models of computation and measures of complexity) are well understood. One has confidence in a natural and polynomially invariant theory.

This is far from the case in computing over continuous structures (such as the reals). To point to a concrete case, the main competing algorithms for the linear programming problem (e.g., Simplex and Karmarkar's algorithm) are defined and analyzed using incomparable models of computation and measures of complexity.

I will discuss various issues involved (e.g., the role of the "condition" of a problem), and a proposed model of computation over an arbitrary ring R. In this general setting, we get universal machines, normal forms, and R.E. sets, as well as P, NP, and NP-complete classes. While our theory reflects the classical one over Z (e.g., the computable functions are the recursive functions), it also reflects the special mathematical character of the underlying ring R (e.g., complements of Julia sets provide examples of R.E. undecidable sets over the reals) and provides a natural setting for studying foundational issues concerning algorithms in numerical analysis.

This is joint work in progress with M. Shub and S. Smale.


10 March 1988

Margaret Jacks Hall 352. 12:30 p.m.

Shay Kutten (IBM T.J. Watson Research Ctr.)

New Models and Algorithms for Future Networks

In future networks, transmission and switching capacity will dominate processing capacity. In this talk we investigate the way in which distributed algorithms should be changed in order to operate efficiently in this new environment. We introduce a new model for distributed algorithms which makes explicit the difference between switching and processing. Based on this new model we define "message" and time complexity measures which essentially consider switching and transmission to be free of cost. The price we count is for setting a switch. This model can be viewed as a compromise between the PRAM (which ignores communication) and the standard model of distributed computing (which ignores computation).

In order to demonstrate the capabilities of the new model we examine two problems that have been extensively studied in the distributed algorithm and networking literature. For the problem of maintaining network topology we devise a broadcast algorithm which takes O(n) "messages" and O(log n) time. For the problem of leader election we present a simple algorithm that uses O(n) "messages" and O(n) time. Both algorithms are better than the existing algorithms under the new network model. Some extensions will also be discussed.

(Joint work with I. Cidon and I. Gopal.)


7 April 1988

Margaret Jacks Hall 352. 12:30 p.m.

Stephen Vavasis (Stanford)

A Direct Algorithm for Quadratic Local Minimization

The quadratic knapsack problem, an optimization problem with the variables coupled by only one constraint, is the simplest nontrivial example of quadratic programming. It is NP-hard to find a global minimum for the general case, but linear-time direct algorithms (Brucker [1984] and Calamai & More [1987]) are known to find a global minimum in the positive-definite case. ``Direct'' in this context means that the algorithm finds the exact solution with a finite number of rational operations (as opposed to iterative approaches). We describe a direct algorithm to find a *local* minimum for the general case of quadratic knapsack. Our algorithm takes O(n log n) steps if the problem is negative definite or O(n (log n)^2) steps if the problem is indefinite. To achieve these running times, we call upon searching and sorting techniques and results from computational geometry.

Joint work with Jorge More of Argonne National Laboratory.
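
For the flavor of the problem's structure, here is a simple sketch of the separable positive-definite case via bisection on the KKT multiplier. Note it is iterative, not ``direct'' in the talk's sense, and far simpler than the O(n) median-search algorithms cited above or the talk's algorithms for the indefinite case; the separable form and the bracket for the multiplier are assumptions of the sketch.

    # Sketch: minimize sum_i (d_i/2) x_i^2 - a_i x_i  subject to
    # sum_i x_i = c and l_i <= x_i <= u_i, with all d_i > 0.
    # KKT: x_i(lam) = clip((a_i + lam)/d_i, l_i, u_i), and sum_i x_i(lam)
    # is nondecreasing in lam, so bisection on lam finds the solution.
    def quadratic_knapsack(d, a, l, u, c, iters=100):
        def x_of(lam):
            return [min(max((ai + lam) / di, li), ui)
                    for di, ai, li, ui in zip(d, a, l, u)]
        lo, hi = -1e9, 1e9                 # assumed bracket for lam
        for _ in range(iters):
            mid = (lo + hi) / 2
            if sum(x_of(mid)) < c:
                lo = mid                   # total too small: raise lam
            else:
                hi = mid
        return x_of((lo + hi) / 2)

    # minimize x1^2/2 + x2^2/2 - x2  s.t.  x1 + x2 = 1, 0 <= xi <= 1:
    print(quadratic_knapsack([1, 1], [0, 1], [0, 0], [1, 1], 1))
    # approximately [0.0, 1.0]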


14 April 1988

Margaret Jacks Hall 352. 12:30 p.m.

Dan Pehoushek (Stanford)

The Hamiltonian Hedge Algorithm

The Hamiltonian Hedge Algorithm solves various Hamiltonian circuit problems, yielding a class of polynomially solvable graphs. Given a graph, the algorithm either finds the minimum-weight Hamiltonian circuit or proves that no such circuit exists. The time complexity of the algorithm is a polynomial function of its space complexity. The algorithm runs in polynomial space for planar graphs which are (recursively) separable into nearly equal-sized subgraphs with at most O(log N) edges connecting the two subgraphs. For non-planar graphs, the separation requirement is O(log N / log log N).


21 April 1988

Margaret Jacks Hall 352. 12:30 p.m.

Joseph Naor (USC)

Parallel Algorithms for Multi-source and Multi-sink Flow in Planar Graphs

We show that in a planar graph with many sources and sinks, the maximum flow from the sources to the sinks can be computed in NC. The sources and sinks may lie on different faces, so that connecting them to a super source or super sink would destroy the planarity. This implies that computing a perfect matching in a planar bipartite graph is in NC.

The algorithm uses new techniques in coordinating the flows from the sources to the sinks.

Joint work with Gary Miller.
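
For background, the sketch below shows the standard sequential reduction from bipartite matching to maximum flow through a super source and super sink -- exactly the construction that destroys planarity and motivates the multi-source, multi-sink formulation of the talk. (The talk's parallel techniques are not reflected here.)

    # Bipartite matching via max flow with unit capacities (sequential).
    from collections import deque

    def bipartite_matching(left, right, edges):
        """left, right: vertex lists; edges: set of (l, r) pairs.
        Assumes 's' and 't' are not vertex names."""
        s, t = 's', 't'
        cap = {}
        def add(u, v):
            cap[(u, v)] = 1
            cap.setdefault((v, u), 0)
        for l in left:
            add(s, l)                       # super source -> left side
        for r in right:
            add(r, t)                       # right side -> super sink
        for (l, r) in edges:
            add(l, r)
        flow = 0
        while True:
            pred = {s: None}                # BFS for an augmenting path
            q = deque([s])
            while q and t not in pred:
                u = q.popleft()
                for (a, b), c in cap.items():
                    if a == u and c > 0 and b not in pred:
                        pred[b] = u
                        q.append(b)
            if t not in pred:
                return flow                 # == size of a maximum matching
            v = t
            while v != s:                   # push one unit along the path
                u = pred[v]
                cap[(u, v)] -= 1
                cap[(v, u)] += 1
                v = u
            flow += 1

    print(bipartite_matching([0, 1], ['a', 'b'],
                             {(0, 'a'), (1, 'a'), (1, 'b')}))   # 2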


28 April 1988

Margaret Jacks Hall 352. 12:30 p.m.

Robert W. Floyd (Stanford)

An Elementary Analysis of Quicksort with Median Sampling

The fancier versions of Quicksort have been analyzed with the aid of generating functions, operator methods, and top hats from which rabbits appear. My approach:

* Set up a recurrence relation.

* Eliminate a weighted summation by differencing often enough.

* Solve the (linear) recurrence for the forcing function in terms of the unknown function.

* Try a large enough repertory of unknown functions that the forcing function of interest is in the space spanned by the above solutions.
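
For plain Quicksort without sampling (a hedged reconstruction; the talk treats the median-sampled variants), the first two steps look like this, starting from the recurrence for the expected comparison count and differencing once to eliminate the summation:

    C_n = n + 1 + \frac{2}{n} \sum_{k=0}^{n-1} C_k
    \quad\Longrightarrow\quad
    n C_n - (n-1) C_{n-1} = 2n + 2 C_{n-1},
    \quad\text{so}\quad
    \frac{C_n}{n+1} = \frac{C_{n-1}}{n} + \frac{2}{n+1},

which telescopes to $C_n = 2(n+1)H_{n+1} + O(n) \approx 2n \ln n$. The remaining steps solve such recurrences for a repertory of trial functions and take the combination that matches the forcing function of interest.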


9 May 1988

Margaret Jacks Hall 352. 12:30 p.m.

Alistair Sinclair (University of Edinburgh)

Rapidly-Mixing Markov Chains: Some Computational Applications

An ergodic Markov chain is rapidly mixing if, for any choice of initial state, the final distribution is close to the stationary distribution after only a small number of transitions. Recently, following a proposal of Andrei Broder, several computational applications of rapidly mixing Markov chains have emerged. The talk will introduce some of these applications, and describe how rapid mixing can be characterised in terms of a `magnification property' of the weighted graph underlying the chain.

One direct application is to the uniform sampling of combinatorial structures. The states of the Markov chain in this instance correspond to the sample space of all structures, and the transitions to simple local perturbations. Sampling is done by simulating the chain for a certain number of steps and noting the final state. Provided the stationary distribution of the chain is uniform over states, and provided the rapid mixing property holds, experiments of this type provide a computationally efficient sampling procedure which is close to uniform. Efficient sampling procedures are of practical interest in the study of statistical physics.

Rapidly mixing Markov chains have other, less obvious, computational applications. For example, they may allow the size of a class of combinatorial structures to be estimated when exact enumeration is infeasible. In particular, it can be shown that the permanent of a sufficiently dense 0-1 matrix can be evaluated with arbitrary (fixed) accuracy in polynomial time.

The results described in this talk were the product of joint work with Mark Jerrum.
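
A minimal sketch of a random walk on the matchings of a graph in the style studied by Jerrum and Sinclair: each transition picks a random edge and adds, deletes, or slides it. The chain is symmetric, so its stationary distribution is uniform over all matchings; that it mixes rapidly is exactly the kind of fact the magnification techniques of the talk establish (not shown here).

    # Random walk on the matchings of a graph (uniform stationary dist.).
    import random

    def matching_walk(edges, steps, seed=0):
        rng = random.Random(seed)
        matched = {}                       # vertex -> partner
        M = set()                          # current matching
        for _ in range(steps):
            u, v = rng.choice(edges)
            if (u, v) in M:                # delete the chosen edge
                M.remove((u, v)); del matched[u]; del matched[v]
            elif u not in matched and v not in matched:
                M.add((u, v))              # add: both endpoints free
                matched[u] = v; matched[v] = u
            elif u in matched and v not in matched:
                w = matched[u]             # slide: re-match u to v
                M.discard((u, w)); M.discard((w, u)); del matched[w]
                M.add((u, v)); matched[u] = v; matched[v] = u
            elif v in matched and u not in matched:
                w = matched[v]             # slide: re-match v to u
                M.discard((v, w)); M.discard((w, v)); del matched[w]
                M.add((u, v)); matched[u] = v; matched[v] = u
            # both endpoints matched by other edges: stay put
        return M

    # sample a (near-)uniform matching of the 4-cycle:
    print(matching_walk([(0, 1), (1, 2), (2, 3), (3, 0)], 1000))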


12 May 1988

Margaret Jacks Hall 352. 12:30 p.m.

Richard K. Guy (University of Calgary)

Turning Things Over: New Twists in Impartial Games

H. W. Lenstra has suggested an idea for some coin-turning games, which Conway has developed into a wide range of games having surprising connexions with other parts of mathematics. Topics include: odious and evil numbers and the Mock Turtle Theorem; Mogul and the Golay code; Turnips and the scale of three; turning corners, nim multiplication, and the tartan theorem; lexicodes.
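
As one self-contained example from this circle of ideas, here is a sketch of Conway's nim-multiplication (the tool behind the turning-corners and tartan material): a*b = mex{ a'*b XOR a*b' XOR a'*b' : a' < a, b' < b }, where XOR is nim-addition and mex is the minimum excluded value.

    # Conway's nim-multiplication, computed directly from its definition.
    from functools import lru_cache

    def mex(s):
        m = 0
        while m in s:
            m += 1
        return m

    @lru_cache(maxsize=None)
    def nim_mul(a, b):
        return mex({nim_mul(x, b) ^ nim_mul(a, y) ^ nim_mul(x, y)
                    for x in range(a) for y in range(b)})

    # the integers 0..3 form a field under XOR and nim_mul:
    print([[nim_mul(a, b) for b in range(4)] for a in range(4)])
    # [[0,0,0,0],[0,1,2,3],[0,2,3,1],[0,3,1,2]]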


19 May 1988

Margaret Jacks Hall 352. 12:30 p.m.

Ramin Zabih (Stanford)

A Rearrangement Search Strategy for Determining Propositional Satisfiability

We present a simple algorithm for determining the satisfiability of propositional formulas in conjunctive normal form. As the procedure searches for a satisfying truth assignment it dynamically rearranges the order in which variables are considered. The choice of which variable to assign a truth value next is guided by an analytic upper bound on the size of the search remaining; the procedure chooses the clause which yields the smallest upper bound on the size of the remaining search. We describe several upper bound functions and discuss the tradeoff between accurate upper bound functions and the overhead required to compute the upper bounds. Experimental data shows that for one easily computed upper bound the reduction in the size of the search space more than compensates for the overhead involved in selecting the next variable. Rearrangement search based on this upper bound performs well on both the $n$-queens problem and a graph-coloring problem from the Prolog literature.

This is joint work with David McAllester.
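
A minimal sketch of the rearrangement idea, with a deliberately crude stand-in for the bound: branch on a variable of the shortest unresolved clause, the clause that most tightly constrains the remaining search. The paper's actual upper-bound functions are more refined, and the tradeoff it studies is precisely how much accuracy in the bound is worth its computational overhead.

    # Backtracking CNF satisfiability with dynamic variable rearrangement.
    def satisfiable(clauses, assignment=None):
        """clauses: list of lists of nonzero ints; -v means NOT v."""
        assignment = assignment or {}
        simplified = []
        for clause in clauses:             # simplify under the assignment
            c, sat = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    c.append(lit)
                elif (lit > 0) == val:
                    sat = True
                    break
            if sat:
                continue
            if not c:
                return False               # empty clause: dead end
            simplified.append(c)
        if not simplified:
            return True                    # every clause satisfied
        # rearrangement: branch on a variable of the shortest clause
        shortest = min(simplified, key=len)
        v = abs(shortest[0])
        for val in (shortest[0] > 0, shortest[0] < 0):
            if satisfiable(clauses, {**assignment, v: val}):
                return True
        return False

    print(satisfiable([[1, 2], [-1, 2], [-2, 3]]))   # True
    print(satisfiable([[1], [-1]]))                  # False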