
# I Demand Satisfaction

The Boolean satisfiability problem, or SAT to its friends, asks for an algorithm that takes a Boolean expression like:

\[ (a \vee b) \wedge (\neg a \vee \neg b \vee \neg c) \wedge c \]

and determines if there is a way to assign TRUE and FALSE to its variables so it evaluates to TRUE. In this simple example, one satisfying assignment is \(a = 1, b = 0, c = 1\).
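With only three variables, we can verify this by brute force; a minimal Haskell sketch (the `formula` helper and the exhaustive search are our own illustration, not any solver's method):

```haskell
import Control.Monad (replicateM)

-- The example expression: (a ∨ b) ∧ (¬a ∨ ¬b ∨ ¬c) ∧ c.
formula :: [Bool] -> Bool
formula [a, b, c] = (a || b) && (not a || not b || not c) && c
formula _ = error "expected exactly three variables"

-- Try all 2^3 assignments and keep the satisfying ones.
main :: IO ()
main = print [vs | vs <- replicateM 3 [False, True], formula vs]
-- prints [[False,True,True],[True,False,True]]
```

Exhaustive search is hopeless once variables number in the hundreds, which is the whole point of the story below.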

SAT is the most infamous member of
the *NP-complete* class, a gang of
problems that are equivalent in difficulty: if you can solve one, you can solve
them all just as efficiently (up to a polynomial factor). At the same time,
nobody knows of a fast way to solve any of them, and indeed, many suspect it is
impossible to do better than exponential time.

I learned about it in a course taught by Jeff Kingston. He said that it’s important to know of a bunch of NP-complete problems, which are diverse in appearance and sometimes bear an uncanny resemblance to problems that we do know how to solve efficiently. Forewarned is forearmed: by being aware of the usual suspects, if you encounter an NP-complete problem in your career, then you’re more likely to realize its true nature, and avoid wasting time trying to devise a fast method to solve it. He had witnessed others embark on this fool’s errand.

When proving a problem is NP-complete, typically half the work is already done for you: the celebrated Cook-Levin theorem roughly states that if a solution to a problem instance can be checked efficiently when it exists, then it can also be found efficiently with the help of a SAT solver. The other half is less clear. For example, why is it that generalized Sudoku puzzles are no easier than SAT?

Luckily, decades ago, clever researchers found clever ways to reduce SAT to seemingly unrelated problems (which Jeff said was once a good strategy for getting published). As the roster of known NP-complete problems grew, such proofs got easier, because rather than SAT, one can start from any equivalent problem. (Perhaps this is partly why NP-complete reductions have become less noteworthy?) For example, the proof for generalized Sudoku is easy because it starts from the closely related Latin square completion problem, shown to be NP-complete in 1984.

If you must tackle an NP-complete problem, then you must compromise. One of Jeff’s interests was timetable construction, a problem he knew to be NP-complete, so the best he could hope for were approximate solutions, or discovering enough heuristics and optimizations to quickly handle real-life instances.

Part of me is repulsed by this reality. My mathematical side loves neat problems with known polynomial-time algorithms that are provably optimal. In contrast, a mess of hacks might work in real life for special cases, but surely this approach can’t go that far?

A couple of decades later, I learn:

> The story of satisfiability is the tale of a triumph of software engineering, blended with rich doses of beautiful mathematics.

which is from
Knuth, *The Art of Computer Programming, Volume 4B*.
He adds:

> Revolutionary methods for solving SAT problems emerged at the beginning of the twenty-first century, and they’ve led to game-changing applications in industry. These so-called "SAT solvers" can now routinely find solutions to practical problems that involve millions of variables and were thought until very recently to be hopelessly difficult.

In other words, SAT, the granddaddy of all NP-complete problems, has been humbled by a horde of heuristics and optimizations. Many instances are still untouchable: for example, Boolean expressions corresponding to factoring or discrete log remain impervious (otherwise engineers would have stumbled upon advanced mathematics that has eluded the smartest cryptographers!). But outside of certain domains, the latest SAT solvers can power through real-life problems in no time.

An ancient custom has been upended. Showing a problem could be reduced to SAT was often a one-liner; a no-brainer. One simply states that a solution can be checked in polynomial time; Cook-Levin takes care of the rest, and we move on. But nowadays, we may want to take the time to explicitly spell out how to convert our problem to a Boolean expression for a SAT solver.

Actually, not just a Boolean expression. Engineers have gone above and beyond Boolean expressions and written solvers for satisfiability modulo theories, or SMT. These directly support integers, reals, lists, arrays, and so on, which are a hassle to convert to Boolean expressions, though we have no need for them in our example below.
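For a taste of what that support looks like, here is a toy SMT-LIB query (unrelated to the puzzle below) over the theory of linear integer arithmetic; both z3 and cvc5 accept it:

```
(set-logic QF_LIA)  ; quantifier-free linear integer arithmetic
(declare-const x Int)
(declare-const y Int)
(assert (and (> x 0) (< y 5) (= (+ x y) 7)))
(check-sat)
(get-model)
```

No hand-rolled binary encoding of the integers is needed; the solver handles the arithmetic natively.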

## Two Not Touch

I recently (July 2024) stumbled upon the Two
Not Touch puzzles. Each instance is an *n* by *n* grid divided into *n*
regions, where the goal is to place stars so that each row, column, and region
contains exactly two stars, and furthermore, no two stars are orthogonally or
diagonally adjacent.
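In coordinates, two cells touch (orthogonally or diagonally) exactly when their squared Euclidean distance is at most 2, an observation the program below also relies on. A quick sketch, with names of our own choosing:

```haskell
-- Two grid cells are adjacent, orthogonally or diagonally,
-- iff their squared distance is at most 2.
touches :: (Int, Int) -> (Int, Int) -> Bool
touches (r1, c1) (r2, c2) = (r1 - r2)^2 + (c1 - c2)^2 <= 2

main :: IO ()
main = print (touches (3, 4) (4, 5), touches (3, 4) (5, 4))
-- prints (True,False)
```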

To encode a puzzle, we assign each region a unique character:

```
AAABBBBCCD
AAABCCBCCD
AAABCCCCCD
AAAACCEEDD
FFEEEEEEDD
FFFFEGGGGD
FFHFFGGGGD
IIHHGGGGGD
IJJHHHHGGD
IJJJJHGGGG
```

In the past, I might have written a specialized solver, but today, powerful SMT solvers such as the Z3 theorem prover and cvc5 are freely available and easy to install:

```
$ sudo apt-get install z3 cvc5
```

The following Haskell program converts a Two Not Touch puzzle to SMT-LIB input format:

```haskell
{-# LANGUAGE BlockArguments, LambdaCase #-}
import Data.List
var [r, c] = "v" ++ show r ++ "_" ++ show c
neg s = "(not " ++ s ++ ")"
conj = ("(and "++) . (++")") . unwords
disj = ("(or "++) . (++")") . unwords
disjLn = ("(or "++) . (++")") . unlines
baseRules m = line var ++ line (var . reverse) -- Rows and columns.
++ unwords [notBoth [r, c] [r + 1, c1] | r <- [0..m-1], c <- [0..m],
c1 <- (c+) <$> [1, -1], c1 >= 0, c1 <= m] -- Diagonals.
where
onTwo f = [f [a, b] | a <- [0..m], b <- [a + 2..m]]
exact f x as = conj [(if elem c as then id else neg) $ f [x, c] | c <- [0..m]]
line f = concatMap (disjLn . onTwo . exact f) [0..m]
notBoth p q = neg $ conj [var p, var q]
kSubsets = \cases
0 _ -> [[]]
_ [] -> []
k (h:t) -> ((h:) <$> kSubsets (k - 1) t) <> kSubsets k t
allPairs = kSubsets 2
region cells = disjLn $ go <$> filter isFar (allPairs cells) where
go ps = conj [(if elem x ps then id else neg) $ var x | x <- cells]
isFar [p, q] = (> 2) . sum . map (^2) $ zipWith (-) p q
parseRow r = zipWith (\c ch -> (ch, [r, c])) [0..]
go s = unlines $ concat
[ ["(set-logic QF_UF)"]
, ["(declare-const " ++ var [r, c] ++ " Bool)" | r <- [0..m], c <- [0..m]]
, ["(assert (and", baseRules m]
, region <$> as
, ["))", "(check-sat) (get-model) (exit)"]
]
where
rows = zipWith parseRow [0..] $ lines s
m = length rows - 1
as = map (snd <$>) $ groupBy ((. fst) . (==) . fst) $ sort $ concat rows
main = interact go
```

This sort of code is a joy to write in Haskell, with its list comprehensions
and resemblance to mathematics. We gratuitously include a function that
enumerates all *k*-subsets of a given set even though we only ever need all
pairs of elements.
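The `kSubsets` function can be checked on its own; a standalone sketch (written with ordinary equations rather than the `\cases` syntax, which needs GHC 9.4 or later):

```haskell
-- Enumerate all k-element subsets of a list, preserving order.
kSubsets :: Int -> [a] -> [[a]]
kSubsets 0 _ = [[]]
kSubsets _ [] = []
kSubsets k (h:t) = ((h:) <$> kSubsets (k - 1) t) <> kSubsets k t

main :: IO ()
main = print (kSubsets 2 "abcd")
-- prints ["ab","ac","ad","bc","bd","cd"]
```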

(This article began as a place to store the above program and provide an example of SMT-LIB to help me next time I need it. It grew a bit longer because it was a suitable place to record some old memories.)

An excerpt of the output:

```
(set-logic QF_UF)
(declare-const v0_0 Bool)
(declare-const v0_1 Bool)
(declare-const v0_2 Bool)
(declare-const v0_3 Bool)
(declare-const v0_4 Bool)
(declare-const v0_5 Bool)
(declare-const v0_6 Bool)
(declare-const v0_7 Bool)
(declare-const v0_8 Bool)
(declare-const v0_9 Bool)
(declare-const v1_0 Bool)
(declare-const v1_1 Bool)
...
(assert (and
(or (and v0_0 (not v0_1) v0_2 (not v0_3) (not v0_4) (not v0_5) (not v0_6) (not v0_7) (not v0_8) (not v0_9))
(and v0_0 (not v0_1) (not v0_2) v0_3 (not v0_4) (not v0_5) (not v0_6) (not v0_7) (not v0_8) (not v0_9))
...
(and (not v8_1) (not v8_2) v9_1 (not v9_2) (not v9_3) v9_4)
(and (not v8_1) (not v8_2) (not v9_1) v9_2 (not v9_3) v9_4)
)
))
(check-sat) (get-model) (exit)
```

The `QF_UF` logic (quantifier-free formulas with uninterpreted functions) indicates that we only have a plain Boolean formula.

We declare a Boolean variable for every cell; the name contains the row and column separated by an underscore. They are 0-indexed, which produces nicer output for 10x10 puzzles.

Following the declarations is a giant Boolean expression encoding the basic rules of the puzzle as well as sub-expressions for each of the regions.

The last line checks satisfiability and prints the first satisfying assignment the solver finds. Ideally we should also check that there are no other solutions.
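A standard way to check uniqueness is to feed the model back in negated: assert that the variables do not all take the values the solver reported, then check satisfiability again; `unsat` would mean the first solution was the only one. A hypothetical fragment, pretending the model placed row 0's stars at columns 6 and 8:

```
; Block the model found above (abbreviated; a real blocking clause
; would mention every variable), then re-check.
(assert (not (and v0_6 v0_8 (not v0_0) (not v0_1))))
(check-sat)
```

This requires running the solver incrementally, e.g. `z3 -in`, so the earlier assertions remain in force.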

I originally wrote this program because I thought a misprint in a puzzle I tried had made it impossible (of course, much to my chagrin, I was mistaken).

To see where the stars go, we can pipe the output through:

```
z3 -in | grep -B1 true | grep -o 'v[^ ]*' | sort
```

or:

```
cvc5 --produce-models | grep true | grep -o 'v[^ ]*'
```

This yields:

```
v0_6 v0_8 v1_1 v1_3 v2_5 v2_9 v3_3 v3_7 v4_1 v4_5 v5_7 v5_9 v6_2 v6_4 v7_0 v7_8 v8_2 v8_6 v9_0 v9_4
```

*blynn@cs.stanford.edu*