Navigating the Universe of Z3 Theory Solvers
Nikolaj Bjørner, Lev Nachmanson
Microsoft Research

Abstract. Modular combination of theory solvers is an integral theme in engineering modern SMT solvers. The CDCL(T) architecture provides an overall setting for how theory solvers may cooperate around a SAT solver based on conflict driven clause learning. The Nelson-Oppen framework provides the interface contracts between theory solvers for disjoint signatures. This paper provides an update on theories integrated in Z3. We briefly review principles of theory integration in CDCL(T) and then examine the theory solvers available in Z3, with special emphasis on two recent solvers: a new solver for arithmetic and a pluggable user solver that allows callbacks to invoke propagations and detect conflicts.

1. Introduction

The aim of this paper is to provide an up-to-date overview of Z3's core solver engines. While some of the engines date back quite some time, others have been replaced over time, added, or removed and later re-added in a different guise. A recent addition is a theory solver for arithmetic developed from the ground up. We give an overview of the main features of this new engine and explain selected new approaches used in the solver. Another recent addition is a mechanism for pluggable propagation; we also provide an introduction to this engine.

We will apply the following taxonomy when discussing the theory solvers. It extends the reduction approach to decision procedures [17] as explained in the context of Z3 in [12].

1.1. Present and Future

Z3 now has two CDCL(T) solvers. For main SMT workloads it offers a CDCL(T) core that integrates all the theories mentioned here. This core is a continuation of the main solver of Z3 since its inception. A second core, which incorporates recent advances in SAT solving, has so far been used for workloads originating from bit-vector and Pseudo-Boolean constraints. This core is currently being updated to handle most, if not all, SMT workloads of the legacy core, with the intent of modernizing Z3's core and allowing integration of new techniques around in-processing, logging of clausal proofs, model repair, and other goodies. From the perspective of this paper, we do not distinguish these cores. Z3 exposes several other solvers that either build on top of the SMT core or bypass it entirely. For Horn clauses, Z3 contains dedicated engines: finite-domain Horn clauses are solved using finite hash-tables, and Constrained Horn Clauses (CHC) are handled by the SPACER [18] solver; for quantifier-free formulas over Tarski's fragment of polynomial arithmetic it exposes a self-contained solver; and for quantified formulas over theories that admit quantifier elimination it exposes self-contained solvers.

2. CDCL(T) - In the light of Theory Solvers

In this section, we recall the main mechanisms used in mainstream modern SAT solvers in the light of theory solving. When SAT solving, as implemented using conflict driven clause learning (CDCL), is combined with theory solving, it augments propositional satisfiability with theory reasoning. The CDCL solver maintains a set of formulas $F$ and a partial assignment to literals in $F$ that we refer to as $M$.
The solver starts in a state $\langle M, F\rangle$, where $M$ is initially empty. It then attempts to complete $M$ to a full model of $F$, to show that $F$ is satisfiable, and at the same time adds consequences to $F$ to establish that $F$ is unsatisfiable. The transition between the search for a satisfying assignment and the derivation of a consequence is handled by a conflict resolution phase. The state during conflict resolution is a triple $\langle M; F; C\rangle$, where, besides the partial model $M$ and formula $F$, there is also a conflict clause $C$ false under $M$. The auxiliary function Theory is used to advance decisions, propagate literals, and identify conflicts. If Theory determines that $F$ is conflicting with respect to the literals in $M$, it produces a conflict clause $C$ that contains a subset of conflicting literals from $M$. It can also produce a trail assignment $A$, which is either a propagation or a decision, and finally, if it determines that $F$ is satisfiable under trail $M$, it produces $SAT$.

From the point of view of the CDCL(T) solver, theory reasoning is a module that can take a state during search and produce verdicts on how the search should progress. We use the following verdicts of a theory invocation $\Theory(M, F)$:

Thus, the verdict determines whether the partial model extends to a model of the theories, identifies a subset of $M$ as an unsatisfiable core, propagates the truth assignment of a literal $\ell$, or creates a new case split $\Decision{\ell}$ for a literal $\ell$ that has not already been assigned in $M$. We write $SAT = \Theory(M,F)$ when the verdict is that $M$ extends to a valid theory model of $F$; we write $C = \Theory(M,F)$ when $C$ is a conflict clause based on negated literals from $M$; and $A = \Theory(M,F)$ when the verdict $A$ is either a propagation or a decision.

\[\begin{array}{lcll}
\langle M; F \rangle & \Rightarrow & SAT & SAT = \Theory(M, F)\\[0.5ex]
\langle M; F \rangle & \Rightarrow & \langle M; F; C \rangle & C = \Theory(M, F)\\[0.5ex]
\langle M; F \rangle & \Rightarrow & \langle M, A; F \rangle & A = \Theory(M, F)\\[0.5ex]
\langle M; F; \emptyset \rangle & \Rightarrow & UNSAT & \\[0.5ex]
\langle M, \Decision{\overline{\ell}}; F; C \rangle & \Rightarrow & \langle M, \Propagation{\ell}{C}; F \rangle & \ell \in C\\[0.5ex]
\langle M, \Propagation{\ell}{C'}; F; C \rangle & \Rightarrow & \langle M; F; (C \setminus \{\overline{\ell}\}) \cup (C' \setminus \{\ell\}) \rangle & \overline{\ell} \in C\\[0.5ex]
\langle M, A; F; C \rangle & \Rightarrow & \langle M; F; C \rangle & \mathit{otherwise}
\end{array}\]

Example 1.
Consider the formula $(x > 0 \vee y > 0) \wedge x + y < 0$. The initial state of search is

\[\langle \epsilon; (x > 0 \vee y > 0)\wedge x + y < 0\rangle \]

based on the empty partial assignment $\epsilon$ and the original formula. A possible next state is to propagate on the unit literal $x + y < 0$, producing

\[\langle \Propagation{x + y < 0}{x + y < 0}; (x > 0 \vee y > 0) \wedge x + y < 0\rangle \]

This step may be followed by a case split on $x > 0$, here deciding its negation.

\[\langle \Propagation{x + y < 0}{x + y < 0},\Decision{\neg(x > 0)}; (x > 0 \vee y > 0) \wedge x + y < 0\rangle \]

which triggers a propagation

\[\langle \Propagation{x + y < 0}{x + y < 0},\Decision{\neg(x > 0)},\Propagation{y > 0}{x > 0 \vee y > 0}; (x > 0 \vee y > 0) \wedge x + y < 0\rangle \]

The resulting state is satisfiable in the theory of arithmetic. On the other hand, if we had made the case split on $x > 0$ instead of $\neg(x > 0)$, and then guessed the assignment $y > 0$, we would have encountered a conflicting state with conflict clause $\neg (x > 0) \vee \neg (y > 0)$ (eliding the unit literal $x + y < 0$ from the clause):

\[\langle \Propagation{x + y < 0}{x + y < 0},\Decision{x > 0},\Decision{y > 0};\ (x > 0 \vee y > 0) \wedge x + y < 0; \ \neg (x > 0) \vee \neg (y > 0)\rangle \]

The last decision is then reverted to produce the satisfiable state

\[\langle \Propagation{x + y < 0}{x + y < 0},\Decision{x > 0},\Propagation{\neg(y > 0)}{\neg (x > 0) \vee \neg (y > 0)};\ (x > 0 \vee y > 0) \wedge x + y < 0\rangle \]

A third scenario uses theory propagation. In this scenario, the decision $x > 0$ is made, but instead of making a decision $y > 0$, the theory solver for arithmetic is given a chance to find opportunities for propagation. It deduces that $x + y < 0, x > 0$ implies $\neg(y > 0)$, and therefore establishes the theory propagation

\[\langle \Propagation{x + y < 0}{x + y < 0},\Decision{x > 0},\Propagation{\neg(y > 0)}{\neg(y > 0) \vee \neg (x > 0)};\ (x > 0 \vee y > 0) \wedge x + y < 0\rangle \]

We are again eliding the unit literal $x + y < 0$ from the explanation for $\neg(y > 0)$.

2.1. Invariants

To be well-behaved, we expect Theory to produce propagations only on literals that do not already appear in $M$, and, crucially, to enforce the following main invariants:

That is, each conflict clause is a consequence of $F$, each propagation is also a consequence of $F$, and the premises of a propagation are justified by the theory $T$.

3. Boolean Theories

In the current solver, bit-vectors are treated as tuples of Boolean variables, and all bit-vector operations are translated to Boolean SAT. This approach is called bit-blasting. Only mild equational reasoning is performed over bit-vectors. The benefits of the CDCL SAT engine have mostly been enjoyed by applications in symbolic execution of 32-bit arithmetic instructions. Bit-blasting has its limitations: applications that use quantifiers, and applications around smart contracts that use 256 bits per word, stress this approach. A revised bit-vector solver that integrates algebraic reasoning and lazy bit-blasting is therefore currently under development.

Cardinality and Pseudo-Boolean inequalities can also be translated into CNF clauses. The translation is often advantageous when there are very few variables in the summations, but its overhead quickly becomes impractical. Z3 therefore contains dedicated solvers for cardinality and Pseudo-Boolean constraints.

4. Base Theories

A first base theory is the theory of uninterpreted functions. This theory captures a property shared by all theories: that equality is a congruence relation. The theory is described in many places, including in [5].

4.1. Arithmetic

There are several alternative engines for arithmetical constraints in Z3. Some of the engines are engineered for fragments of arithmetic, such as difference arithmetic, where all inequalities are of the form $x - y \leq k$, for $k$ a constant, and unit-two-variable-per-inequality (UTVPI), where all inequalities are of the form $\pm x \pm y \leq k$. A new main solver for general arithmetic formulas has emerged recently, with the longer-term objective of entirely replacing Z3's legacy arithmetic solver. We describe the internals of the newer solver here in more detail.

In overview, the arithmetic solver uses a waterfall model for solving arithmetic constraints.

4.1.1. Rational linear arithmetic

The solver for rational linear inequalities uses a dual simplex solver as explained in [14]. It maintains a global set of equalities of the form $A\vec{x} = 0$, and each variable $x_j$ is assigned lower and upper bounds during search. The solver then checks for feasibility of the resulting system $A\vec{x} = 0, lo_j \leq x_j \leq hi_j, \forall j$ for dynamically changing bounds $lo_j, hi_j$. The bounds are justified by assignments in $M$.
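The invariant the dual simplex core maintains can be pictured with a minimal sketch (our own function names, not Z3's code) that checks a candidate assignment against a tableau $A\vec{x} = 0$ with bounds:

```python
from fractions import Fraction

def feasible(A, x, lo, hi):
    """Check the dual simplex invariants for a candidate assignment x:
    every row of A x = 0 holds and every variable is within its bounds."""
    rows_ok = all(sum(a * v for a, v in zip(row, x)) == 0 for row in A)
    bounds_ok = all(l <= v <= h for l, v, h in zip(lo, x, hi))
    return rows_ok and bounds_ok

# x + y - s = 0 with s <= -1 encodes x + y <= -1 via the slack variable s
A = [[1, 1, -1]]
x = [Fraction(1), Fraction(-2), Fraction(-1)]   # x = 1, y = -2, s = -1
print(feasible(A, x,
               lo=[Fraction(-10)] * 3,
               hi=[Fraction(10), Fraction(10), Fraction(-1)]))  # True
```

The solver never re-solves such a system from scratch; it pivots to restore these invariants as the bounds $lo_j, hi_j$ change with $M$.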

Finding equal variables - cheaply

The new arithmetic solver contains a novel, efficient method for finding pairs of variables that are forced to be equal. By finding such variables, the arithmetic solver can propagate equalities to other theories. Such equality propagation can make a big difference for efficiently integrating arithmetic into hybrid theories, such as the theory of sequences. In the following we outline the new idea.

A starting point is a tableau with bounds $A\vec{x} = 0, lo_j \leq x_j \leq hi_j, \forall j$. The tableau contains a sub-tableau of equalities in which at most two variables are non-fixed in every equation, and the coefficients of the non-fixed variables have the same absolute value. That is, let $S$ be a system of linear equations of the form $e_i := a_{i} x_{i} + b_{i} y_{i} = c_i$, where $x_{i}, y_{i}$ are variables, $c_i$ is a constant, and $a_i, b_i \in \{-1,1\}$ for $0 \le i < n$. It is relatively cheap to detect whether a row in a tableau corresponds to such an equation: it can have at most two non-fixed variables, whose coefficients must have the same absolute value. We assume that the variable values and the constants are rational numbers. For bounds propagation it is only relevant to consider feasible tableaux, so $S$ has a solution $V$: we have $a_{i} V(x_{i}) + b_{i} V(y_{i}) = c_i$ for every $i$. Let us call variables $u$ and $v$ equivalent if they have the same value in every solution of $S$.

Given $S$ and $V$, we provide an efficient algorithm for finding pairs of variables $(u, t)$ such that $u$ is equivalent to $t$.

Let us start from an example. Consider the following system

\[\begin{array}{lll}
x - y = 3, & y - w = 2, & w + t = 4\\
y + z = 3, & z + u = 4, & v - z = 2, \quad t - s = 1
\end{array}\]

and solution

\[ V:\ x \rightarrow 5,\ y \rightarrow 2,\ z \rightarrow 1,\ u \rightarrow 3,\ w \rightarrow 0,\ s \rightarrow 3,\ t \rightarrow 4,\ v \rightarrow 3 \]

By examining $V$, we might suspect that $u$, $v$, and $s$ are pairwise equivalent. However, if we increase $x$ by one, we obtain another solution

\[ V':\ x \rightarrow 6,\ y \rightarrow 3,\ z \rightarrow 0,\ u \rightarrow 4,\ w \rightarrow 1,\ s \rightarrow 2,\ t \rightarrow 3,\ v \rightarrow 2 \]

Since $V'(u) \ne V'(v)$, we can conclude that $u$ is equivalent to neither $s$ nor $v$. But we can prove that $v$ and $s$ are equivalent. Another observation is that each variable changed its value by $\pm 1$: it either moves with $x$ or against $x$.

We introduce our algorithm by recalling which inference rules it simulates, without literally performing the inferences. The inferences are

\[\begin{array}{ccc} \AxiomC{$x - y = c_1$} \AxiomC{$y - z = c_2$} \BinaryInfC{$x - z = c_1 + c_2$} \DisplayProof & \AxiomC{$x + y = c_1$} \AxiomC{$z - y = c_2$} \BinaryInfC{$x + z = c_1 + c_2$} \DisplayProof & \AxiomC{$x + y = c_1$} \AxiomC{$z + y = c_2$} \BinaryInfC{$x - z = c_1 - c_2$} \DisplayProof \end{array} \]

The goal of deriving implied equalities is achieved if we can infer an equality $x - y = 0$ using the above inference rules. Now, the crucial observation is that there is no need to perform arithmetic on the constant offsets $c_i$ because we have a valuation $V$ and the property for every derived equation $x - y = c$:

\[ c = 0 \Leftrightarrow V(x) = V(y) \]

So it is enough to look at the variables $x, y$ and check whether their values are the same for a derived equality. The benefit of avoiding arithmetic on the constant offsets can be significant when the constants cannot be represented in ordinary 64-bit registers and require arbitrary-precision arithmetic. Note that the same method also detects fixed variables: whenever we derive $x + x = c$, we have proved $x$ to be fixed at value $c/2$. In this case the method does need to calculate $c$. All other variables connected to $x$ through binary equalities are then naturally also fixed.

To implement the search over the derivation rules efficiently, we build a depth-first search forest whose nodes are variables annotated with signs and their values from $V$. Edges are annotated by octagon equations. The forest is built by picking a so-far unprocessed variable and growing a tree under it by traversing all equations involving the variable. During tree expansion the algorithm checks whether two nodes with the same value and the same sign are connected. To propagate a new equality, we trace back the path that was used to derive the equality and assemble the justifications from each row.
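The search can be sketched as a stand-alone procedure (a simplified rendition of ours, not Z3's internals), restricted to equations $a x + b y = c$ with $a, b \in \{-1, 1\}$ and a satisfying valuation $V$:

```python
from collections import defaultdict

def equivalent_pairs(equations, V):
    """equations: tuples (a, x, b, y, c) encoding a*x + b*y = c, a, b in {-1, 1}.
    V: a valuation satisfying every equation.
    Returns pairs of variables that are equal in every solution."""
    adj = defaultdict(list)
    for a, x, b, y, c in equations:
        adj[x].append((y, a, b))   # seen from x: coefficient a on x, b on y
        adj[y].append((x, b, a))
    visited, pairs = set(), []
    for root in V:
        if root in visited:
            continue
        groups = defaultdict(list)  # (sign, sign * V(var)) -> vars in this tree
        stack = [(root, 1)]
        visited.add(root)
        while stack:
            x, s = stack.pop()
            groups[(s, s * V[x])].append(x)
            for y, a, b in adj[x]:
                if y not in visited:
                    visited.add(y)
                    # from a*x + b*y = c: if this node is s*x,
                    # the child node is (-s*a*b)*y
                    stack.append((y, -s * a * b))
        for vs in groups.values():  # same sign and same value => equivalent
            pairs += [(vs[i], vs[j])
                      for i in range(len(vs)) for j in range(i + 1, len(vs))]
    return pairs
```

On the system from the example, the only pair reported is $\{v, s\}$; the valuation makes $u$ a candidate, but its node carries the opposite sign, which rules it out without any arithmetic on the offsets.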

Example 2.
Let us show how the procedure works on the example from above with valuation $V$. Starting with the node $\langle +t, 4 \rangle$, the equations $t - s = 1$ and $w + t = 4$ produce the children $\langle +s, 3\rangle$ and $\langle -w, 0\rangle$. The second child can be expanded in turn into $\langle -y, -2 \rangle$, $\langle +z, 1\rangle$, $\langle +v, 3\rangle$. This establishes a path from $\langle +s, 3\rangle$ to $\langle +v, 3\rangle$. The two nodes are labeled by the same value and the same sign, so $s$ and $v$ are equivalent.

4.1.2. Integer linear arithmetic

The mixed integer linear solver comprises several layers. It contains several substantial improvements over Z3's original arithmetic solver and currently outperforms the legacy solver on the main SMTLIB benchmark sets and, to our knowledge, on most user scenarios.

GCD consistency

Each row is first normalized by multiplying it with the least common multiple of the denominators of its coefficients. For each row, the solver assembles a value from the fixed variables; a variable $x_j$ is fixed if its current bounds $lo_j$, $hi_j$ are equal. It then checks that the gcd of the coefficients of the non-fixed variables divides the value contributed by the fixed variables. If it does not, the row has no integer solution.
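The test for a single tableau row $\sum_j a_j x_j = 0$ can be sketched as follows (our own rendition over exact rationals, not Z3's code):

```python
from math import gcd
from fractions import Fraction

def gcd_consistent(row, lo, hi):
    """row: dict var -> rational coefficient of a tableau row sum_j a_j x_j = 0.
    lo, hi: current bounds; a variable is fixed when lo[x] == hi[x].
    Returns False when the row provably has no integer solution."""
    # normalize: multiply by the lcm of the denominators so coefficients are integers
    m = 1
    for c in row.values():
        d = c.denominator
        m = m * d // gcd(m, d)
    fixed = Fraction(0)   # constant contributed by the fixed variables
    g = 0                 # gcd of the coefficients of the non-fixed variables
    for x, c in row.items():
        a = int(c * m)
        if lo[x] == hi[x]:
            fixed += a * lo[x]
        else:
            g = gcd(g, abs(a))
    if g == 0:
        return fixed == 0
    return fixed % g == 0
```

For example, for the row $2x + 4y + 3z = 0$ with $z$ fixed at $1$, the gcd $2$ of the non-fixed coefficients does not divide the fixed contribution $3$, so the row has no integer solution.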


Following [11], the integer solver moves non-basic variables away from their bounds in order to ensure that basic, integer, variables are assigned integer values. The process examines each non-basic variable and checks every row where it occurs to estimate a safe zone where its value can be changed without breaking any bounds. If the safe zone is sufficiently large to patch a basic integer variable it performs an update. This heuristic is highly incomplete, but is able to locally patch several variables without resorting to more expensive analyses.


One of the deciding factors in leapfrogging the previous solver was a method by Bromberger and Weidenbach [7, 8]. It detects feasibility of inequalities over integer variables by solving a stronger linear system. In addition, we observed that the default strengthening proposed by Bromberger and Weidenbach can often be avoided: integer solutions can be guaranteed from weaker systems.

We recall here the main method and our twist. In the following we let $A, A'$ range over integer matrices and $a$, $b$, $c$ over integer vectors. The 1-norm $\onenorm{A}$ of a matrix is a column vector such that each entry $i$ is the sum of the absolute values of the elements in the corresponding row $A_i$. We write $\onenorm{A_i}$ to directly access the 1-norm of row $i$.

A (unit) cube is a polyhedron that is a Cartesian product of intervals of length one, one for each variable. Since every interval of length one contains an integer, a unit cube contains an integer point; hence a convex polyhedron that contains a cube contains an integer point. The condition for a convex polyhedron to contain a cube can be recast as follows:

Example 3.
Suppose we have $3x + y \leq 9 \wedge - 3y \leq -2$ and wish to find an integer solution. By solving $3x + y \leq 9 - \frac{1}{2}(3 + 1) = 7, -3y \leq -2 - \frac{1}{2}3 = -3.5$ we find a model where $y = \frac{7}{6}, x = 0$. After rounding $y$ to $1$ and maintaining $x$ at $0$ we obtain an integer solution to the original inequalities.
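The strengthening and rounding from the example can be replayed with a short script (a sketch under our own encoding: coefficient rows `A`, bounds `b`, and a rational model of the strengthened system):

```python
from fractions import Fraction

def strengthen(A, b):
    # replace each bound b_i by b_i - ||A_i||_1 / 2
    return [bi - Fraction(sum(abs(a) for a in row), 2) for row, bi in zip(A, b)]

def nearest_int(v):
    return (v.numerator * 2 + v.denominator) // (2 * v.denominator)  # floor(v + 1/2)

def round_cube_model(A, b, model):
    """Given a rational model of the strengthened system A x <= strengthen(A, b),
    rounding each coordinate yields an integer solution of A x <= b."""
    assert all(sum(a * v for a, v in zip(row, model)) <= bi
               for row, bi in zip(A, strengthen(A, b)))
    rounded = [nearest_int(v) for v in model]
    assert all(sum(a * v for a, v in zip(row, rounded)) <= bi
               for row, bi in zip(A, b))
    return rounded

# 3x + y <= 9 and -3y <= -2, with model x = 0, y = 7/6 of the strengthened system
A, b = [[3, 1], [0, -3]], [Fraction(9), Fraction(-2)]
print(round_cube_model(A, b, [Fraction(0), Fraction(7, 6)]))  # [0, 1]
```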

Our twist on Bromberger and Weidenbach's method is to avoid strengthening selected inequalities. First, we note that difference inequalities of the form $x - y \leq k$, where $x, y$ are integer variables and $k$ is an integer offset, need not be strengthened. Octagon constraints $\pm x \pm y \leq k$ present a boundary condition: they require strengthening only if $x, y$ are assigned at mid-points between integral solutions. For example, this is the case for $x + y \leq 2$ when $V(x) = \frac{1}{2}$ and $V(y) = \frac{3}{2}$. Our approach is described in detail in [4].


Similar to traditional MIP branch-and-bound methods, the solver somewhat eagerly creates case splits on bounds of integer variables if the dual simplex solver fails to assign them integer values.

Gomory and Hermite cuts

The arithmetic solver produces Gomory cuts from rows whose basic variables have non-integer values after the non-basic variables have been pushed to their bounds. It also incorporates algorithms from [9, 13] to generate cuts after the linear systems have been transformed into Hermite normal form.

4.1.3. Non-linear arithmetic

Similar to solving for integer feasibility, the arithmetic solver solves constraints over polynomials using a waterfall model for non-linear constraints. At its base, it maintains for every monomial term $x \cdot x \cdot y$ an equation $m = x \cdot x \cdot y$, where $m$ is a variable that represents the monomial. The module for non-linear arithmetic then attempts to establish a valuation $V$ where $V(m) = V(x) \cdot V(x) \cdot V(y)$, or to derive a consequence that no such valuation exists. The stages in the waterfall model are summarized as follows:

Bounds propagation on monomials

A relatively inexpensive step is to propagate and check bounds based on non-linear constraints. For example, if $y \geq 3$ and $x \leq -2$, then $m = x\cdot x\cdot y$ has the lower bound $m \geq 12$. Bounds propagation can also flow from bounds on $m$ to bounds on the variables that make up the monomial: when $m \geq 8$, $1 \leq y \leq 2$, $x \leq 0$, we learn the stronger bound $x \leq -2$ on $x$.
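The first direction of this propagation can be pictured with plain interval arithmetic (a toy sketch with finite intervals standing in for the one-sided bounds; Z3's implementation also handles infinities and repeated occurrences of the same variable):

```python
def mul(i1, i2):
    # interval product: the extremes are among the four endpoint products
    ps = [a * b for a in i1 for b in i2]
    return (min(ps), max(ps))

# m = x*x*y with x in [-5, -2] and y in [3, 10] (stand-ins for x <= -2, y >= 3)
xx = mul((-5, -2), (-5, -2))   # (4, 25)
m = mul(xx, (3, 10))           # (12, 250)
print(m[0])  # 12, the propagated lower bound from the text
```

Note that treating $x \cdot x$ as a product of two independent intervals is sound but imprecise in general (e.g. it gives $[-1,1]$ rather than $[0,1]$ for $x \in [-1,1]$); here the interval is entirely negative, so no precision is lost.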

Bounds propagation with Horner expansions

If $x \geq 2, y \geq -1, z \geq 2$, then $y + z \geq 1$ and therefore $x\cdot (y + z) \geq 2$. We would not be able to deduce this fact by combining bounds individually for $x\cdot y$ and $x \cdot z$, because no bounds can be inferred for $x \cdot y$ in isolation. The solver therefore attempts different re-distributions of multiplication in an effort to find stronger bounds.
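A toy interval computation (finite intervals standing in for the one-sided bounds; our illustration, not Z3 code) shows the difference between the Horner form and the distributed form:

```python
def mul(i1, i2):
    # interval product: the extremes are among the four endpoint products
    ps = [a * b for a in i1 for b in i2]
    return (min(ps), max(ps))

def add(i1, i2):
    return (i1[0] + i2[0], i1[1] + i2[1])

# finite stand-ins for x >= 2, y >= -1, z >= 2
x, y, z = (2, 4), (-1, 5), (2, 6)
horner = mul(x, add(y, z))               # x*(y + z)
distributed = add(mul(x, y), mul(x, z))  # x*y + x*z
print(horner[0], distributed[0])  # 2 0: the Horner form has the stronger lower bound
```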

Gröbner reduction

We use an adaptation of ZDDs (zero-suppressed decision diagrams [19, 20]) to represent polynomials. The representation has the advantage that polynomials are stored in a shared data-structure and operations over polynomials are memoized. A polynomial over the reals is represented as an acyclic graph, where internal nodes are labeled by variables and leaf nodes by constants. For example, the polynomial $5x^2y + xy + y + x + 1$ is represented by the acyclic graph shown below.

\usetikzlibrary{shapes,arrows} \usetikzlibrary{positioning} \tikzstyle{block} = [rectangle, draw, text centered, rounded corners, minimum height=2em] \tikzstyle{line} = [draw, -latex'] \begin{tikzpicture}[node distance = 3em, scale = 0.2] \node [circle, draw] (xroot) {$x$}; \node [circle, below of = xroot, left of= xroot, draw] (xnext) {$x$}; \node [circle, below of = xnext, left of= xnext, draw] (y1) {$y$}; \node [circle, below of = xnext, right of= xnext, draw] (y2) {$y$}; \node [circle, below of = y1, left of = y1] (t1) {$5$}; \node [circle, below of = y1, right of = y1] (t2) {$0$}; \node [circle, right of = t2] (t3) {$1$}; \node [circle, below of = y2, right of = y2] (t4) {$1$}; \path [line] (xroot.west) -- (xnext.north); \path [line] (xroot.east) -- (y2.north); \path [line] (xnext.west) -- (y1.north); \path [line] (xnext.east) -- (y2.north); \path [line] (y1.west) -- (t1.north); \path [line] (y1.east) -- (t2.north); \path [line] (y2.west) -- (t3.north); \path [line] (y2.east) -- (t4.north); \end{tikzpicture}

The root node, labeled $x$, represents the polynomial $x\cdot l + r$, where $l$ is the polynomial of the left sub-graph and $r$ the polynomial of the right sub-graph. The left sub-graph is allowed to be labeled again by $x$, but the right sub-graph may only have nodes labeled by variables that are smaller in a fixed ordering. The fixed ordering used in this example places $x$ above $y$. The polynomial for the right sub-graph is $y + 1$, and the polynomial for the left sub-graph is $5xy + (y + 1)$.
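The reading of a node as $x \cdot l + r$ can be mimicked in a few lines (an illustrative encoding of ours, not Z3's data structure: a node is a tuple `(var, left, right)`, a leaf is a number, and shared sub-terms are shared tuples):

```python
def poly_eval(node, env):
    """Evaluate a shared-dag polynomial: a node (x, l, r) denotes x * l + r."""
    if not isinstance(node, tuple):
        return node
    x, l, r = node
    return env[x] * poly_eval(l, env) + poly_eval(r, env)

# 5x^2y + xy + y + x + 1; the node for y + 1 appears twice and is shared
yp1 = ('y', 1, 1)                        # y + 1
p = ('x', ('x', ('y', 5, 0), yp1), yp1)  # x*(x*5y + (y + 1)) + (y + 1)
print(poly_eval(p, {'x': 2, 'y': 3}))    # 72
```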

The Gröbner module performs a set of partial completion steps, preferring to eliminate variables that can be isolated, and expanding a bounded number of super-position steps.

Incremental linearization

Following [10] we incrementally linearize monomial constraints. For example, we include lemmas of the form $x = 0 \rightarrow m = 0$ and $x = 1 \rightarrow m = y$, for $m = x^2y$.


As an end-game attempt, the solver attempts to solve the non-linear constraints using a complete solver for Tarski's fragment, the NLSat solver [16].

5. Reducible Theories

5.1. Refinement Types

Let us illustrate a use of reduction from richer theories to base theories with a simple example based on refinement types. It encodes refinement types using auxiliary functions as explained in [15]. Abstractly, a refinement type of sort $S$ uses a predicate $p$ over $S$. At least one element of $S$ must satisfy $p$ for the construction to make sense. The refinement type $S \mid p$ represents the elements of $S$ that satisfy $p$. The properties we need to know about elements of $S\mid p$ can be encoded using two auxiliary functions: a surjection $\relaxOp$ from $S \mid p$ into $S$ with a partial inverse $\restrictOp$ that maps elements from $S$ into $S \mid p$. The properties of these functions are summarized as follows:

\[\begin{array}{l}
p : S \rightarrow \mathit{Bool}\\
\relaxOp : S \mid p \rightarrow S\\
\restrictOp : S \rightarrow S \mid p\\
\forall x : S \mid p \ . \ \restrictOp(\relaxOp(x)) = x\\
\forall s : S \ . \ p(s) \rightarrow \relaxOp(\restrictOp(s)) = s\\
\forall x : S \mid p \ . \ p(\relaxOp(x))
\end{array}\]

Let us illustrate the sort of natural numbers as a refinement type of integers:

Example 4.

\[\begin{array}{l}
\mathit{sort}\ \mathit{Nat} = \mathit{Int} \mid \lambda x \ . \ x \geq 0\\
\forall n : \mathit{Nat} \ . \ \restrictOp(\relaxOp(n)) = n \wedge \relaxOp(n) \geq 0\\
\forall i : \mathit{Int} \ . \ i \geq 0 \rightarrow \relaxOp(\restrictOp(i)) = i
\end{array}\]

We obtain a theory solver for formulas with refinement types by instantiating these axioms whenever a term $t$ of sort $S \mid p$ is introduced, either as part of the input or during search (from instantiating quantifiers). The main challenge in supporting this theory is to ensure that the set of new terms introduced from axiom instantiation is bounded. We do not want the solver to create terms $\relaxOp(\restrictOp(\relaxOp(\restrictOp(\ldots))))$.

5.2. Reducible theories in Z3

5.2.1. Arrays

Z3 reduces the theory of arrays to reasoning about uninterpreted functions. It furthermore treats arrays as function spaces. The first-order theory of arrays enjoys compactness, and so the following formula is satisfiable:

\[\forall a : Array(Int, Int) \ . \ \exists k \ . \ \forall i \geq k \ . \ a[i] = 0. \]

The same formula is not satisfiable when arrays range over function spaces. The distinction is only relevant for formulas that contain quantifiers over arrays.

The central functionality of the decision procedure for arrays is to ensure that a satisfying model under the theory of EUF translates to a satisfying model in the theory of arrays. To this end, the main service of the theory solver is to saturate the search state with $\beta$-reduction axioms for array terms that admit beta-reduction. We call these terms $\lambda$ terms; they are defined by the beta-reduction axioms:

\[\begin{array}{lcl}
\beta(\mathit{Store}(A,j,v)[i]) & = & \mathit{if}\ i = j\ \mathit{then}\ v\ \mathit{else}\ A[i]\\
\beta(\mathit{Map}(f,A,B)[i]) & = & f(A[i],B[i])\\
\beta(\mathit{AsArray}(f)[i]) & = & f(i)\\
\beta(\mathit{Const}(v)[i]) & = & v\\
\beta((\lambda x \ . \ M)[i]) & = & M[i/x]
\end{array}\]
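A ground reading of these rules can be given as a small interpreter over term tuples (our illustrative encoding, not Z3's representation):

```python
def select(a, i):
    """Evaluate a[i] for array terms a, following the beta rules above."""
    tag = a[0]
    if tag == 'store':      # Store(A, j, v)[i] = if i = j then v else A[i]
        _, A, j, v = a
        return v if i == j else select(A, i)
    if tag == 'map':        # Map(f, A, B)[i] = f(A[i], B[i])
        _, f, A, B = a
        return f(select(A, i), select(B, i))
    if tag == 'asarray':    # AsArray(f)[i] = f(i)
        return a[1](i)
    if tag == 'const':      # Const(v)[i] = v
        return a[1]
    if tag == 'lam':        # (lambda x . M)[i] = M[i/x]
        return a[1](i)

a = ('store', ('const', 0), 2, 5)                 # Store(Const(0), 2, 5)
m = ('map', lambda u, v: u + v, a, ('const', 1))  # Map(+, a, Const(1))
print(select(a, 2), select(a, 3), select(m, 2))   # 5 0 6
```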

The reduction into EUF is then, in a nutshell, an application of the following inference rule:

\[\AxiomC{$b$ is a lambda term} \AxiomC{$a[j]$ is a term} \AxiomC{$b \sim a$ ($a, b$ are equal under EUF)} \TrinaryInfC{$b[j] = \beta(b[j])$} \DisplayProof \]

5.2.2. Floating points

Floating point semantics can be defined in terms of bit-vector operations. The solver for floating points uses this connection to reduce the theory of floating points to bit-vectors.

5.2.3. Algebraic Datatypes

The theory of algebraic datatypes is compiled into uninterpreted functions. In this theory, constructors are injective functions. Injectivity is ensured by adding axioms for partial inverses. For the example of LISP S-expressions these axioms are:

\[ car(cons(x,y)) = x, cdr(cons(x,y)) = y \]

The main functionality provided by the theory solver that cannot be reduced to EUF is the occurs check.

6. Hybrid Theories

A prime example of a hybrid theory in Z3 is the theory of strings, regular expressions and sequences.

The theory of strings and regular expressions has entered mainstream SMT solving thanks to community efforts around standardization and solvers; the SMTLIB2 format standardizes a theory of unicode strings [1]. The theory integrates operations that mix equational solving over the free monoid with integer arithmetic (for string lengths and extracting sub-strings). Regular expression constraints furthermore effectively introduce constraints that require unfolding recursive relations. Z3 uses symbolic derivatives [21] to handle regular expressions; notably, complementation and intersection are also handled by derivatives.
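The derivative idea can be illustrated for concrete (non-symbolic) regular expressions in a few lines; Z3's symbolic derivatives [21] work over character predicates rather than single characters, but the recursion, including the cases for complement and intersection, has the same shape. The encoding below is ours:

```python
EPS, EMPTY = ('eps',), ('empty',)

def nullable(r):
    """Does r accept the empty string?"""
    tag = r[0]
    if tag == 'eps' or tag == 'star': return True
    if tag == 'empty' or tag == 'chr': return False
    if tag == 'cat': return nullable(r[1]) and nullable(r[2])
    if tag == 'or': return nullable(r[1]) or nullable(r[2])
    if tag == 'and': return nullable(r[1]) and nullable(r[2])
    if tag == 'not': return not nullable(r[1])

def deriv(r, c):
    """Derivative of r with respect to character c."""
    tag = r[0]
    if tag in ('eps', 'empty'): return EMPTY
    if tag == 'chr': return EPS if r[1] == c else EMPTY
    if tag == 'cat':
        d = ('cat', deriv(r[1], c), r[2])
        return ('or', d, deriv(r[2], c)) if nullable(r[1]) else d
    if tag == 'star': return ('cat', deriv(r[1], c), r)
    if tag == 'or': return ('or', deriv(r[1], c), deriv(r[2], c))
    if tag == 'and': return ('and', deriv(r[1], c), deriv(r[2], c))
    if tag == 'not': return ('not', deriv(r[1], c))   # complement commutes

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)
```

For example, with `r` encoding `a b*`, `matches(r, "abb")` holds, `matches(r, "ba")` does not, and `matches(('not', r), "ba")` holds, showing how complement needs no automaton construction.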

A second prolific example of a hybrid theory is Z3's model-based quantifier instantiation engine (MBQI). Here, a theory is encoded using a quantifier. The MBQI engine supports extensions of Bradley-Manna-Sipma's array property fragment [6] that effectively combines arithmetic with uninterpreted functions.

7. External Theories

The universe of theories is in principle unbounded. Many theories can be captured using a set of quantified axioms and can therefore be handled by the machinery for quantifiers. But encodings have their limitations; notably, it might not be possible to encode a theory using a small number of axioms. Users can of course encode theory solvers directly within Z3's core. This has been accomplished in the case of Z3Str3 [2], but it is a significant engineering effort and ties a theory to a fixed signature. A more flexible approach was exposed around 2010 in the form of user theories. A user theory plugin could be implemented outside of Z3 and integrated with the main CDCL(T) engine. Some examples of user theories were presented in [3]. User theories were subsequently removed from Z3, as it was not possible to properly integrate them in model construction. Recent experiences with solving constraint satisfaction problems suggested that a restricted version of user theories would be instrumental. A variant of user theories has therefore been resurrected under the banner of a user propagator, named for its main functionality: it allows client code to participate in propagation and to create conflicts in response to variable assignments.

We illustrate the user propagator with a simple example. The example encodes a Pseudo-Boolean constraint that, when translated directly, would be quadratic in the number of variables. The user propagator, though, does not require quadratic space overhead. We encode the constraint that

\[\begin{array}{rll}
& 3\,|\{\, (i,j) \mid i < j \wedge x_i + x_j = 42 \wedge (x_i > 30 \vee x_j > 30) \,\}| & \\
+ & |\{\, (i,j) \mid i < j \wedge x_i + x_j = 42 \wedge x_i \leq 30 \wedge x_j \leq 30 \,\}| & \leq\ 100
\end{array}\]

and, once at least half of the variables are assigned, the pairs $(i,j)$ that contribute to the sum must add up to at least 10. We first define the variables used in the problem:

from z3 import *

xs = BitVecs(["x%d" % i for i in range(10)], 10)

Then a user propagator can be initialized by sub-classing the UserPropagateBase class, which implements the main interface to Z3's user propagation functionality.

class UserPropagate(UserPropagateBase):
    def __init__(self, s):
        super(self.__class__, self).__init__(s)
        self.xvalues = {}
        self.id2x = {self.add(x) : x for x in xs}
        self.x2id = { self.id2x[id] : id for id in self.id2x }
        self.trail = []
        self.lim = []
        self.sum = 0
        # register the callbacks defined below
        self.add_fixed(lambda id, e : self.myfix(id, e))
        self.add_final(lambda : self.myfinal())

The map xvalues tracks the values of assigned variables, and the maps id2x and x2id associate the identifiers that Z3 uses for variables with the original variables. The field sum maintains the running sum according to our unusual constraint.

The class must implement methods for pushing and popping backtrackable scopes. We use a trail to record closures that are invoked to restore the previous state, and lim to maintain the size of the trail for the current scope.

    # overrides a base class method
    def push(self):
        self.lim.append(len(self.trail))

    # overrides a base class method
    def pop(self, num_scopes):
        lim_sz = len(self.lim)-num_scopes
        trail_sz = self.lim[lim_sz]
        while len(self.trail) > trail_sz:
            fn = self.trail.pop()
            fn()
        self.lim = self.lim[0:lim_sz]
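This trail-of-closures discipline can be exercised on its own. The following standalone sketch (our own illustration, independent of Z3 and its API) applies the same push/pop scheme to a simple counter:

```python
class Backtrackable:
    def __init__(self):
        self.trail = []   # closures that undo state changes
        self.lim = []     # trail sizes at each scope boundary
        self.count = 0

    def push(self):
        # remember how long the trail was when the scope opened
        self.lim.append(len(self.trail))

    def pop(self, num_scopes):
        # rewind the trail to its size at the popped scope boundary,
        # invoking each undo closure along the way
        lim_sz = len(self.lim) - num_scopes
        trail_sz = self.lim[lim_sz]
        while len(self.trail) > trail_sz:
            fn = self.trail.pop()
            fn()
        self.lim = self.lim[0:lim_sz]

    def bump(self):
        # record an undo closure before changing state
        old = self.count
        self.trail.append(lambda: setattr(self, "count", old))
        self.count += 1

b = Backtrackable()
b.bump(); b.push(); b.bump(); b.bump()
print(b.count)   # -> 3
b.pop(1)
print(b.count)   # -> 1
```

The closures capture the old values by value, so popping a scope replays the undos in reverse order of the changes, exactly as the solver expects from its push/pop callbacks.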

We can then define the main callback used when a variable tracked by identifier id is fixed to a value e. The identifier is returned by the solver when calling the function self.add(x) on term x. The solver uses this identifier to communicate the state of the term x. When terms range over bit-vectors and Booleans (but not integers or other types), a client can register a callback with self.add_fixed to pick up a state where the variable is given a full assignment. For our example, the value is going to be a bit-vector constant, from which we can extract an unsigned integer into v. The trail is augmented with a restore point to the old state, the summation is updated, and the Pseudo-Boolean inequalities are then enforced.

    def myfix(self, id, e):
        x = self.id2x[id]
        v = e.as_long()
        old_sum = self.sum        
        self.trail.append(lambda : self.undo(old_sum, x))        
        for w in self.xvalues.values():
            if v + w == 42:
                if v > 30 or w > 30:
                    self.sum += 3
                else:
                    self.sum += 1
        self.xvalues[x] = v
        if self.sum > 100:
            self.conflict([self.x2id[x] for x in self.xvalues])
        elif self.sum < 10 and len(self.xvalues) > len(xs)/2:
            self.conflict([self.x2id[x] for x in self.xvalues])
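The incremental update performed by myfix can be checked against a from-scratch recomputation of the weighted pair count. The following standalone sketch (our own test harness, independent of Z3; the helper names are ours) replays a sequence of assignments both ways:

```python
def weight(v, w):
    # pair weight from the constraint: 3 if the pair sums to 42 and
    # either element exceeds 30, 1 if it sums to 42 with both <= 30
    if v + w != 42:
        return 0
    return 3 if (v > 30 or w > 30) else 1

def incremental(values):
    # mirror myfix: fold each newly fixed value against those seen so far
    seen, total = [], 0
    for v in values:
        total += sum(weight(v, w) for w in seen)
        seen.append(v)
    return total

def from_scratch(values):
    # direct quadratic enumeration of all pairs (i, j) with i < j
    return sum(weight(values[i], values[j])
               for i in range(len(values))
               for j in range(i + 1, len(values)))

vals = [1, 41, 1, 21, 21, 11, 31]
print(incremental(vals), from_scratch(vals))  # -> 10 10
```

Both computations agree because each unordered pair is counted exactly once: the incremental pass sees the pair when its second element is fixed, which is why myfix adds x to xvalues only after the loop.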

It remains to define the last auxiliary methods for backtracking and for the final check.

    def undo(self, s, x):
        self.sum = s
        del self.xvalues[x]

    def myfinal(self):
        # all constraints are enforced eagerly in myfix, so nothing
        # remains to be checked on a full assignment
        pass

s = SimpleSolver()
for x in xs:
    s.add(x % 2 == 1)
p = UserPropagate(s)
print(s.check())

In this example, it takes relatively few conflicts to establish a satisfying assignment, one where 9 out of 10 variables are set to 1 and the last variable is set to 41. The solver is only as good as the propagation strength and the quality of the conflicts it can generate. For instance, if we change the condition $v > 30 \vee w > 30$ to $v < 30 \wedge w < 30$, the constraints are not solved with the same ease.

The user propagator also allows registering callbacks for when two added terms become known to the solver to be equal, and for when two terms become known to be distinct. Note again that the use case for the user propagator is for bit-vector and Boolean variables.

8. Conclusion

We provided an overview of solver integration in Z3 with special emphasis on a newly revised arithmetic solver and a user propagator. While most applications use a small combination of mainstream theories, there is plenty of room for expanding theory solvers both in depth, in terms of capabilities and efficiency, and in breadth, by expanding the repertoire of solvers to new theories. While applications have been enabled by current SMT solvers, they also introduce new challenges by highlighting bottlenecks and shortcomings. Current efforts center around modernizing Z3's entire core engine around up-to-date SAT solving techniques. Among many potential advantages, this would allow Z3 to apply in-processing simplifications for quantifiers and SMT theories and, for instance, to handle large bit-vectors by integrating algebraic methods, incremental bit-blasting, local search, and other recent advances in bit-vector reasoning. The universe (of solvers) is still expanding.


[1] Clark Barrett, Pascal Fontaine, and Cesare Tinelli. The Satisfiability Modulo Theories Library (SMT-LIB), 2016.
[2] Murphy Berzish, Vijay Ganesh, and Yunhui Zheng. Z3str3: A string solver with theory-aware heuristics. In Daryl Stewart and Georg Weissenbacher, editors, 2017 Formal Methods in Computer Aided Design, FMCAD 2017, Vienna, Austria, October 2-6, 2017, pages 55–59. IEEE, 2017. doi:10.23919/FMCAD.2017.8102241.
[3] Nikolaj Bjørner. Engineering theories with Z3. In Hongseok Yang, editor, Programming Languages and Systems - 9th Asian Symposium, APLAS 2011, Kenting, Taiwan, December 5-7, 2011. Proceedings, volume 7078 of Lecture Notes in Computer Science, pages 4–16. Springer, 2011. doi:10.1007/978-3-642-25318-8_3.
[4] Nikolaj Bjørner and Lev Nachmanson. Theorem recycling for theorem proving. In Laura Kovács and Andrei Voronkov, editors, Vampire 2017. Proceedings of the 4th Vampire Workshop, volume 53 of EPiC Series in Computing, pages 1–8. EasyChair, 2018. doi:10.29007/r58f.
[5] Nikolaj Bjørner, Leonardo de Moura, Lev Nachmanson, and Christoph M. Wintersteiger. Programming Z3. In Jonathan P. Bowen, Zhiming Liu, and Zili Zhang, editors, Engineering Trustworthy Software Systems - 4th International School, SETSS 2018, Chongqing, China, April 7-12, 2018, Tutorial Lectures, volume 11430 of Lecture Notes in Computer Science, pages 148–201. Springer, 2018. doi:10.1007/978-3-030-17601-3_4.
[6] Aaron R. Bradley, Zohar Manna, and Henny B. Sipma. What's decidable about arrays? In Verification, Model Checking, and Abstract Interpretation, 7th International Conference, VMCAI 2006, Charleston, SC, USA, January 8-10, 2006, Proceedings, pages 427–442, 2006. doi:10.1007/11609773_28.
[7] Martin Bromberger and Christoph Weidenbach. Fast cube tests for LIA constraint solving. In Nicola Olivetti and Ashish Tiwari, editors, Automated Reasoning - 8th International Joint Conference, IJCAR 2016, Coimbra, Portugal, June 27 - July 2, 2016, Proceedings, volume 9706 of Lecture Notes in Computer Science, pages 116–132. Springer, 2016. doi:10.1007/978-3-319-40229-1_9.
[8] Martin Bromberger and Christoph Weidenbach. New techniques for linear arithmetic: cubes and equalities. Formal Methods in System Design, 51(3):433–461, 2017. doi:10.1007/s10703-017-0278-7.
[9] Jürgen Christ and Jochen Hoenicke. Cutting the mix. In Daniel Kroening and Corina S. Pasareanu, editors, Computer Aided Verification - 27th International Conference, CAV 2015, San Francisco, CA, USA, July 18-24, 2015, Proceedings, Part II, volume 9207 of Lecture Notes in Computer Science, pages 37–52. Springer, 2015. doi:10.1007/978-3-319-21668-3_3.
[10] Alessandro Cimatti, Alberto Griggio, Ahmed Irfan, Marco Roveri, and Roberto Sebastiani. Experimenting on solving nonlinear integer arithmetic with incremental linearization. In Olaf Beyersdorff and Christoph M. Wintersteiger, editors, Theory and Applications of Satisfiability Testing - SAT 2018 - 21st International Conference, Held as Part of the Federated Logic Conference, FloC 2018, Oxford, UK, July 9-12, 2018, Proceedings, volume 10929 of Lecture Notes in Computer Science, pages 383–398. Springer, 2018. doi:10.1007/978-3-319-94144-8_23.
[11] Leonardo Mendonça de Moura and Nikolaj Bjørner. Model-based theory combination. Electron. Notes Theor. Comput. Sci., 198(2):37–49, 2008. doi:10.1016/j.entcs.2008.04.079.
[12] Leonardo Mendonça de Moura and Nikolaj Bjørner. Generalized, efficient array decision procedures. In Proceedings of 9th International Conference on Formal Methods in Computer-Aided Design, FMCAD 2009, 15-18 November 2009, Austin, Texas, USA, pages 45–52, 2009. doi:10.1109/FMCAD.2009.5351142.
[13] Isil Dillig, Thomas Dillig, and Alex Aiken. Cuts from proofs: A complete and practical technique for solving linear inequalities over integers. In Ahmed Bouajjani and Oded Maler, editors, Computer Aided Verification, 21st International Conference, CAV 2009, Grenoble, France, June 26 - July 2, 2009. Proceedings, volume 5643 of Lecture Notes in Computer Science, pages 233–247. Springer, 2009. doi:10.1007/978-3-642-02658-4_20.
[14] B. Dutertre and L. de Moura. A fast linear-arithmetic solver for DPLL(T). In CAV, 2006.
[15] Bart Jacobs. Categorical Logic and Type Theory, volume 141 of Studies in Logic and the Foundations of Mathematics. North Holland, Elsevier, 1999.
[16] Dejan Jovanovic and Leonardo Mendonça de Moura. Solving non-linear arithmetic. In Automated Reasoning - 6th International Joint Conference, IJCAR 2012, Manchester, UK, June 26-29, 2012. Proceedings, pages 339–354, 2012. doi:10.1007/978-3-642-31365-3_27.
[17] Deepak Kapur and Calogero Zarba. A reduction approach to decision procedures. Technical report, University of New Mexico, 2006. URL https://www.cs.unm.edu/~kapur/mypapers/reduction.pdf.
[18] Anvesh Komuravelli, Arie Gurfinkel, Sagar Chaki, and Edmund M. Clarke. Automatic abstraction in SMT-based unbounded software model checking. In CAV, pages 846–862, 2013.
[19] Shin-ichi Minato. Zero-suppressed BDDs for set manipulation in combinatorial problems. In Alfred E. Dunlop, editor, Proceedings of the 30th Design Automation Conference, Dallas, Texas, USA, June 14-18, 1993, pages 272–277. ACM Press, 1993. doi:10.1145/157485.164890.
[20] Masaaki Nishino, Norihito Yasuda, Shin-ichi Minato, and Masaaki Nagata. Zero-suppressed sentential decision diagrams. In Dale Schuurmans and Michael P. Wellman, editors, Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 1058–1066. AAAI Press, 2016.
[21] Caleb Stanford, Margus Veanes, and Nikolaj Bjørner. Symbolic boolean derivatives for efficiently solving extended regular expression constraints. Technical Report MSR-TR-2020-25, Microsoft, August 2020.

1. To keep the formula short we have applied a shortcut and removed the literal $\neg(x + y < 0)$ from the conflict clause. In practice, solvers automatically remove unit literals that are false from conflict clauses.

2. Thanks to Jasmin Blanchette for drawing attention to this distinction.