Brute Force for Beginners

Let’s polish off the other recommended beginner Google Code Jam problems, which are mostly exercises in brute force. (Google has since updated that page, which is why we refer to an archived version.)

Minimum Scalar Product

We can brute force the small input by trying all permutations:

import Data.List

main = interact $ unlines . zipWith (++)
  ["Case #" ++ show t ++ ": " | t <- [1..]] . f . tail . lines

f [] = []
f (_:_xs:_ys:s) = solve (map read $ words _xs) (map read $ words _ys) : f s

solve xs ys = show $ minimum [sum $ zipWith (*) xs p | p <- permutations ys]

A little thought shows the smallest sum arises from repeatedly pairing up the smallest number of one list with the largest of the other. [If a < b and c < d, then (b - a)(d - c) > 0, that is, ad + bc < ac + bd.]

So an efficient solution is to sort one list in increasing order (sort), sort the other in decreasing order (reverse . sort), multiply them pairwise (zipWith (*)), and sum them (sum).
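
We can sanity-check this in GHCi on a couple of small, made-up vectors, comparing the brute-force minimum over all permutations with the sorted pairing:

> import Data.List
> minimum [sum $ zipWith (*) [1,3,-5] p | p <- permutations [-2,4,1]]
-25
> sum $ zipWith (*) (sort [1,3,-5]) (reverse $ sort [-2,4,1])
-25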

import Data.List

main = interact $ unlines . zipWith (++)
  ["Case #" ++ show t ++ ": " | t <- [1..]] . f . tail . lines

f [] = []
f (_:_xs:_ys:s) = solve (map read $ words _xs) (map read $ words _ys) : f s

solve xs ys = show . sum $ zipWith (*) (sort xs) (reverse $ sort ys)

Alien Language

For once, we use the first line, because it holds more than just the number of test cases: it also contains the number of words in the dictionary.

We solve this with the most obvious method involving lists. We parse each pattern into a list of lists (for instance, a(bc)(ca) becomes [[a], [b, c], [c, a]]), then count the number of words (length $ filter …) in the dictionary where each letter of the word is an element of the corresponding list of the pattern.

import Data.List

main = interact $ unlines . zipWith (++)
  ["Case #" ++ show t ++ ": " | t <- [1..]] . f . lines

f (_ldn:s) = let
  [_, d, _] = map read $ words _ldn
  (ds, ps) = splitAt d s
  match p = show . length $ filter (and . zipWith (flip elem) (parse p)) ds
  in map match ps

parse [] = []
parse ('(':s) = let (a, _:b) = break (== ')') s in a : parse b
parse (c:s) = [c] : parse s
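
For example, after loading the above in GHCi:

> parse "a(bc)(ca)"
["a","bc","ca"]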

Rope Intranet

In general, a sweep line algorithm is best for finding all intersections in a set of line segments, but this problem is small enough and special enough for a far simpler algorithm.

Because no 3 lines are concurrent and no 2 lines share an endpoint, wires i and j cross exactly when sgn(Ai - Aj) ≠ sgn(Bi - Bj), so we can simply count the pairs of wires (Ai, Bi), (Aj, Bj) satisfying this condition.
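
For example, with some made-up heights: a wire running from A = 1 to B = 10 crosses a wire from A = 5 to B = 5, whereas a wire from A = 1 to B = 3 does not:

> signum (1 - 5) /= signum (10 - 5)
True
> signum (1 - 5) /= signum (3 - 5)
False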

This is rather similar to our solution to Store Credit, so let’s use Data.Vector instead of Data.Array to practice installing a package:

$ cabal install vector

Some of the names exported by Data.Vector conflict with those defined in the Prelude, so we explicitly list the ones we want.

import Data.Vector (fromList, (!))

main = interact $ unlines . zipWith (++)
  ["Case #" ++ show t ++ ": " | t <- [1..]] . f . tail . lines

f [] = []
f (_n:s) = let
  (_abs, s1) = splitAt (read _n) s
  in solve (map (map read . words) _abs) : f s1

solve _ws = let
  ws = fromList _ws
  m = length _ws - 1
  in show $ length [undefined | i <- [0..m-1], j <- [i+1..m],
    let ([a1,b1], [a2,b2]) = (ws!i, ws!j), signum (a2 - a1) /= signum (b2 - b1)]

What does undefined mean? Since we only want the length of the list, the actual contents of the list are irrelevant and never examined. In fact, the program would crash if we tried to evaluate an element of the list, as we can verify in an interactive session:

> length [undefined, undefined, undefined]
3
> head [undefined, undefined, undefined]
*** Exception: Prelude.undefined

File Fix-It

Disappointingly, the inputs are small enough that we can get away with an obvious solution based on lists. Consider a path we wish to create. We split it into its component directory names (splitOn "/"), which we then use to generate a list of the path and all its ancestors (map (intercalate "/") . inits).

The handy splitOn function requires installing a package:

$ cabal install split

Then:

> import Data.List
> import Data.List.Split
> map (intercalate "/") . inits . splitOn "/" $ "/foo/bar/baz"
["","","/foo","/foo/bar","/foo/bar/baz"]

The output contains undesired empty strings, which we could have avoided with tail, but we’ll get rid of them soon enough anyway.

We concatenate all such lists into one monstrous list (concatMap), then remove any duplicates with union: this function removes duplicates from its second argument before computing its union with the first argument. Comparing the lengths of the list before and after adding the new directories yields the answer.
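
For instance, in GHCi:

> import Data.List
> ["", "/a"] `union` ["", "/a", "/a/b", "/a/b"]
["","/a","/a/b"]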

import Data.List
import Data.List.Split

main = interact $ unlines . zipWith (++)
  ["Case #" ++ show t ++ ": " | t <- [1..]] . f . tail . lines

f [] = []
f (_nm:s) = let
  [n, m] = map read $ words _nm
  (ns, s1) = splitAt n s
  (ms, s2) = splitAt m s1
  in solve ns ms : f s2

solve ns ms = show $ length (g ds ms) - length ds where ds = g [""] ns

g ds = union ds . concatMap (map (intercalate "/") . inits . splitOn "/")
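
For example, after loading the above, suppose /a/b already exists and we want to create /a/c/d; we must make /a/c and then /a/c/d:

> solve ["/a/b"] ["/a/c/d"]
"2"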

Once more, with trees

The above is adequate, but for training purposes, let’s write a faster solution based on trees. The idea is similar: for each path, we insert it into a tree, and count the number of times we created a new node.

In conventional languages, we might write code that traverses down a tree and inserts nodes. But this is impure; ideally, we should refrain from modifying a tree in place.

This is easier than it sounds. We’ve had plenty of practice keeping lists pure: functions like concat or union return new lists rather than modify lists in place. We can do the same for trees. One way is to use Data.Tree.Zipper:

$ cabal install rosezipper

This module introduces a TreePos type: a cursor that keeps track of where we are on a tree. We wind up with Haskell that resembles tree-updating code in other languages, but in reality creates a new tree after each operation.

Below, the ins function recursively inserts a path into a tree, starting at the position indicated by a given TreePos. Since we wish to keep track of the number of created nodes, we bundle an Int with the TreePos that records this count.

For clarity, we explicitly declare the type of ins even though Haskell’s type inference could figure it out on its own. The Full means the TreePos points to a node of the tree, and not to an empty space between two nodes (which is useful in other contexts).

How often should we declare types? About as often as we should comment code, for type declarations can be thought of as comments. (Only they’re better because compilers can enforce them, and compilers are more reliable than humans.) Thus we should declare types for functions that we expect others to use, and in general, anywhere where it makes code clearer.

To keep our solutions short and sweet, we’ve been omitting type declarations. We made an exception for ins because sooner or later we’ll have to learn how to write a type declaration, and ins is a good candidate to practice on, as it has a non-trivial type.

This is our first encounter with the Maybe data type: the functions firstChild and next return Nothing if there is no first child or next sibling, and otherwise return Just tp, where tp is the TreePos of that child.
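
For instance, on a tiny made-up tree:

> import Data.Tree
> import Data.Tree.Zipper
> label <$> firstChild (fromTree $ Node "root" [])
Nothing
> label <$> firstChild (fromTree $ Node "root" [Node "kid" []])
Just "kid"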

import Data.List.Split
import Data.Tree
import Data.Tree.Zipper

main = interact $ unlines . zipWith (++)
  ["Case #" ++ show t ++ ": " | t <- [1..]] . f . tail . lines

f [] = []
f (_nm:s) = let
  [n, m] = map read $ words _nm
  (ns, s1) = splitAt n s
  (ms, s2) = splitAt m s1
  in solve ns ms : f s2

ins :: (Int, TreePos Full String) -> [String] -> (Int, TreePos Full String)
ins (n, tp) [] = (n, root tp)
ins (n, tp) (a:s) = let
  g Nothing = ins (n + 1, insert (Node a []) $ children tp) s
  g (Just tp) = if label tp == a then ins (n, tp) s else g (next tp)
  in g $ firstChild tp

mkdirs tp = foldl ins (0, tp) . map (tail . splitOn "/")

solve ns ms = show . fst $ mkdirs (snd $ mkdirs (fromTree $ Node "" []) ns) ms
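
Loading this version and rerunning the earlier example gives the same answer:

> solve ["/a/b"] ["/a/c/d"]
"2"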

Using maps instead of lists to hold the child nodes would improve lookup times from linear to logarithmic, but this is overkill. Our tree solution runs way faster than our list solution, which already was fast enough.

Rotate

One sub-problem is simulating gravity on a single column, which is essentially moving all occurrences of "." to one end of a list. For example, "B..R.BR." becomes "....BRBR". Haskell’s partition is perfect for this:

> import Data.List
> partition (== '.') "B..R.BR."
("....","BRBR")
> uncurry (++) $ partition (== '.') "B..R.BR."
"....BRBR"

The uncurry (++) produces a function that's just like (++) (so it also concatenates two lists together), except that it takes its two input lists as a tuple instead of as separate arguments. This fits with partition like hand in glove. In other words, it’s shorthand for:

let (a, b) = partition (== '.') "B..R.BR." in a ++ b

A second sub-problem is figuring out who has won. There may be ways to do this elegantly, but we’ll just bash it out with a list comprehension: for each player, we look for a square such that, for some orthogonal or diagonal direction ([(1, 0), (0, 1), (1, 1), (1, -1)]), we can walk k steps and always see the player’s piece. As in conventional languages, or and and short-circuit, so the search stops after the first winning combination is found.

This is our first program to feature a Haskell case expression. It looks and feels like case statements in other languages, except there’s no fallthrough.

import Data.Array
import Data.List

main = interact $ unlines . zipWith (++)
  ["Case #" ++ show t ++ ": " | t <- [1..]] . f . tail . lines

f [] = []
f (_nk:s) = let
  [n, k] = map read $ words _nk
  (ns, s1) = splitAt n s
  in solve ns n k : f s1

solve ns n k = let
  bnds = ((1, 1), (n, n))
  a = listArray bnds $ concat $ uncurry (++) . partition (== '.') <$> ns
  won p = or [and [f (r + dr * m, c + dc * m) | m <- [0..k - 1]] |
    (r, c) <- range bnds, (dr, dc) <- [(1, 0), (0, 1), (1, 1), (1, -1)]]
    where f i = inRange bnds i && (a!i == p)
  in case map won "BR" of
    [False, False] -> "Neither"
    [False,  True] -> "Red"
    [ True, False] -> "Blue"
    [ True,  True] -> "Both"

Our code is wasteful (unboxed arrays might be better; we check squares multiple times; we check directions even if they obviously take us off the board), but it’s fine for the contest.
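
As a quick sanity check, here is a made-up 2×2 board with K = 2; after the pieces slide right, the two Rs line up in the rightmost column:

> solve ["R.", "BR"] 2 2
"Red"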

All Your Base

In a manner akin to Minimum Scalar Product, we attain the minimum by assigning the smallest unassigned digit to the most significant (leftmost) unassigned symbol and repeating until all symbols are assigned.

This is a job for nub, which keeps only the first occurrence of each element in a list. We use the output of nub as a dictionary: the index of a symbol in this list is the digit it represents.

To minimize the number, we minimize the base, which is at minimum the number of distinct symbols, namely, the length of the dictionary.
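
For example, nub preserves the order of first occurrences, so the first symbol of the input heads the dictionary:

> import Data.List
> nub "aabaa"
"ab"
> nub "cats"
"cats"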

There are two complications:

  1. The base must be at least 2, so we insert a bogus symbol for the digit 0 when the input string contains only one distinct symbol.

  2. The first digit cannot be 0, so we swap the first 2 symbols of the dictionary.

It’s much simpler to express all this in Haskell:

import Data.List
import Data.Maybe

main = interact $ unlines . zipWith (++)
  ["Case #" ++ show t ++ ": " | t <- [1..]] . map solve . tail . lines

solve w = let dict = case nub w of [d]        -> ['?', d]
                                   (d0:d1:ds) -> d1:d0:ds
  in show $ foldl (\n d -> n * length dict + fromJust (d `elemIndex` dict)) 0 w
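
After loading the above, we can check a couple of cases by hand: in "cats", the symbols c, a, t, s stand for the digits 1, 0, 2, 3 in base 4, giving 1·64 + 0·16 + 2·4 + 3 = 75, while a lone symbol becomes the digit 1 in base 2:

> solve "cats"
"75"
> solve "a"
"1"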

We surrounded elemIndex with backticks to turn it into an infix operator. This works on any function that takes at least two arguments. We did so here mostly for fun, but also because elemIndex is a bit like the ∈ binary operator in mathematics.

Again, we encounter the Maybe data type: elemIndex returns Nothing if the given element is not found in the given list, and otherwise returns Just i where i is the index of the element in the list. Since we know for sure that d lies in dict, we know the latter must be true, so we call fromJust to extract i from Just i.

What happens if we call fromJust on Nothing?

> import Data.Maybe
> fromJust Nothing
*** Exception: Maybe.fromJust: Nothing

You can’t get something from Nothing!


Ben Lynn blynn@cs.stanford.edu