| { | |
| "title": "A DPLL(T) Framework for Verifying Deep Neural Networks", | |
| "abstract": "Deep Neural Networks (DNNs) have emerged as an effective approach to tackling real-world problems.\nHowever, like human-written software, DNNs can have bugs and can be attacked. To address this, research has explored a wide-range of algorithmic approaches to verify DNN behavior.\nIn this work, we introduce NeuralSAT, a new verification approach that adapts the widely-used DPLL(T) algorithm used in modern SMT solvers. A key feature of SMT solvers is the use of conflict clause learning and search restart to scale verification. Unlike prior DNN verification approaches, NeuralSAT combines an abstraction-based deductive theory solver with clause learning and an evaluation clearly demonstrates the benefits of the approach on a set of challenging verification benchmarks.", | |
| "sections": [ | |
| { | |
| "section_id": "1", | |
| "parent_section_id": null, | |
| "section_name": "1. Introduction", | |
| "text": "Deep Neural Networks (DNNs) have emerged as an effective approach for solving challenging real-world problems. However, just like traditional software, DNNs can have \u201cbugs\u201d, e.g., producing unexpected results on inputs that are different from those in training data, and be attacked, e.g., small perturbations to the inputs by a malicious adversary or even sensor imperfections can result in misclassification (Ren et al., 2020 ###reference_61###; Z\u00fcgner et al., 2018 ###reference_88###; Yang et al., 2022 ###reference_79###; Zhang et al., 2019 ###reference_85###; Isac et al., 2022 ###reference_38###).\nThese issues, which have been observed in many DNNs (Goodfellow et al., 2014 ###reference_30###; Szegedy et al., 2014 ###reference_67###) and demonstrated in the real world (Eykholt et al., 2018 ###reference_25###), naturally raise the question of how DNNs should be tested, validated, and ultimately verified\nto meet the requirements of relevant robustness or safety standards (Huang et al., 2020 ###reference_36###; Katz et al., 2017b ###reference_41###).\nTo address this question, researchers have developed a wide-variety of algorithmic techniques and supporting tools to verify properties of DNNs (e.g., (Katz et al., 2017a ###reference_40###; Ehlers, 2017 ###reference_24###; Huang et al., 2017 ###reference_37###; Katz et al., 2019 ###reference_43###; Wang et al., 2018b ###reference_73###; Singh et al., 2018a ###reference_64###, 2019b ###reference_66###; Katz et al., 2022 ###reference_42###; Urban and Min\u00e9, 2021 ###reference_71###; Liu et al., 2021 ###reference_49###; M\u00fcller et al., 2021 ###reference_55###; Wang et al., 2021 ###reference_74###)).\nRecent instances of the DNN verification tool competition (VNN-COMP) indicate that three key elements\nare common to the best performing approaches:\n(1) the use of abstraction to reason symbolically about sets of neuron output values;\n(2) the use of neuron splitting to specialize the analysis of subproblems in a form of branch-and-bound (BaB) reasoning; and\n(3) the use of fast-path optimizations that can discharge easy verification problems quickly (Bak et al., 2021 ###reference_6###; M\u00fcller et al., 2022 ###reference_56###).\nFor example, the top four performers in VNN-COMP 2022: --CROWN (Wang et al., 2021 ###reference_74###; Zhang et al., 2022 ###reference_81###), MN-BaB (Ferrari et al., 2022 ###reference_26###), VeriNet (Henriksen and Lomuscio, 2020 ###reference_35###), and nnenum (Bak, 2021 ###reference_5###), all include these features.\nThe problem of verifying non-trivial properties of DNNs with piecewise linear activation functions, such as \u201cReLU\u201d, has been shown to be reducible (Katz et al., 2017a ###reference_40###) to the Boolean satisfiability (SAT) problem (Cook, 1971 ###reference_19###).\nThus, at its core, any DNN verification algorithm must contend with worst-case exponential complexity.\nAs the fields of SAT and satisfiability modulo theory (SMT) solving have demonstrated, despite this\ncomplexity well-chosen combinations\nof algorithmic techniques can solve a wide-range of large real-world problems (Kroening and Strichman, 2016 ###reference_45###).\nIn this paper, we explore the design of an SMT-inspired DPLL(T) solver customized for DNN verification that is\ncompetitive with the state-of-the-art and that establishes a foundation for incorporating additional algorithmic\ntechniques from the broader SMT literature.\nWe are not the first to explore SMT solving for DNN verification.\nThe 
earliest techniques in the field, Planet (Ehlers, 2017 ###reference_24###) and Reluplex (Katz et al., 2017a ###reference_40###), demonstrated how the semantics of a trained DNN could be encoded as a constraint in Linear Real Arithmetic (LRA).\nIn principle, such constraints can be solved by any SMT solver equipped with an LRA\ntheory solver (T-solver) (Kroening and Strichman, 2016 ###reference_45###).\nThe DPLL(T) algorithm implemented by modern SMT solvers works by moving back and forth between solving an abstract propositional encoding of the constraint and solving a theory-specific encoding of a constraint fragment corresponding\nto a partial assignment of propositional literals.\nThe challenge in solving DNN verification constraints\nlies in the fact that each neuron gives rise to a disjunctive constraint to encode its non-linear behavior.\nIn practice, this leads to a combinatorial blowup in the space of assignments the SMT solver must consider at the abstract propositional level.\nTo resolve the exponential complexity inherent in such constraints, both Planet and Reluplex chose to push the disjunctive constraints from the propositional encoding down into the theory-specific encoding of the problem, leveraging a technique referred to as splitting-on-demand (Barrett et al., 2006 ###reference_9###).\nThis works to an extent, but it does not scale well to large DNNs (Bak et al., 2021 ###reference_6###; M\u00fcller et al., 2022 ###reference_56###).\nWe observe that the choice to pursue an aggressive splitting-on-demand strategy\nsacrifices the benefit of several of the key algorithmic techniques that make SMT solvers scale \u2013 specifically conflict-driven clause learning (CDCL) (Bayardo Jr and Schrag, 1997 ###reference_11###; Marques-Silva and Sakallah, 1999 ###reference_52###; Marques Silva and Sakallah, 1996 ###reference_51###), theory propagation (Kroening and Strichman, 2016 ###reference_45###),\nand search restart (Pipatsrisawat and Darwiche, 2009 ###reference_60###).\nWe present the NeuralSAT framework, which consists of a lazy, incremental LRA-solver that is parameterized by state-of-the-art abstractions, such as LiRPA (Xu et al., 2020b ###reference_78###; Wang et al., 2021 ###reference_74###), to efficiently perform deductive reasoning and exhaustive theory propagation (Nieuwenhuis et al., 2006 ###reference_57###), and to support restarts.\nAs prior research has demonstrated (Nieuwenhuis et al., 2006 ###reference_57###), the interplay\nbetween CDCL and restart is essential to scaling and we find that it permits NeuralSAT to increase the number of problems verified by 53% on a challenging benchmark (\u00a76.1 ###reference_###).\nMoreover, NeuralSAT significantly outperforms the best existing DPLL-based DNN verification approach \u2013 Marabou (Katz et al., 2019 ###reference_43###, 2022 ###reference_42###) \u2013 which also employs abstraction and deduction, but does not exploit clause learning (\u00a76.3 ###reference_###). 
Moreover, despite the fact that NeuralSAT is an early stage\nprototype, that does not incorporate the fast-path optimizations of other tools, it ranks second to --CROWN in solving benchmarks from the VNN-COMP competition (\u00a76.2 ###reference_###).\nThe contributions of this work lie in:\n(i) developing a domain-specific LRA-solver that allows for the benefits of clause learning to accelerate SMT-based DNN verification;\n(ii) developing a prototype NeuralSAT implementation which we release as open source; and\n(iii) empirically demonstrating that the approach compares favorably with the state-of-the-art in terms of scalability, performance, and ability to solve challenging DNN verification problems.\nThese findings collectively suggest that techniques like CDCL are advantageous, in combination with other optimizations, in scaling DNN verification to larger networks." | |
| }, | |
| { | |
| "section_id": "2", | |
| "parent_section_id": null, | |
| "section_name": "2. Background", | |
| "text": "" | |
| }, | |
| { | |
| "section_id": "2.1", | |
| "parent_section_id": "2", | |
| "section_name": "2.1. Satisfiability and DPLL(T)", | |
| "text": "The classical satisfiability (SAT) problem asks if a given propositional formula over Boolean variables can be satisfied (Biere et al., 2009 ###reference_12###). Given a formula , a SAT solver returns sat if it can find a satisfying assignment that maps truth values to variables of that makes evaluate to true, and unsat if it cannot find any satisfying assignments. The problem is NP-Complete and research into methods for efficiently solving problem instances has been ongoing for multiple decades.\nFig. 1 ###reference_### gives an overview of DPLL, a SAT solving technique introduced in 1961 by Davis, Putnam, Logemann, and Loveland (Davis et al., 1962 ###reference_22###). DPLL is an iterative algorithm that takes as input a propositional formula and (i) decides an unassigned variable and assigns it a truth value, (ii) performs Boolean constraint propagation (BCP or also called Unit Propagation), which detects single literal clauses that either force a literal to be true in a satisfying assignment or give rise to a conflict; (iii) analyzes the conflict to backtrack to a previous decision level dl; and (iv) erases assignments at levels greater than dl to try new assignments. These steps repeat until DPLL finds a satisfying assignment and returns sat, or decides that it cannot backtrack (dl=-1) and returns unsat.\n###figure_1### Modern DPLL solving improves the original version with Conflict-Driven Clause Learning (CDCL) (Bayardo Jr and Schrag, 1997 ###reference_11###; Marques-Silva and Sakallah, 1999 ###reference_52###; Marques Silva and Sakallah, 1996 ###reference_51###).\nDPLL with CDCL can learn new clauses to avoid past conflicts and backtrack more intelligently (e.g., using non-chronologically backjumping).\nDue to its ability to learn new clauses, CDCL can significantly reduce the search space and allow SAT solvers to scale to large problems.\nIn the following, whenever we refer to DPLL, we mean DPLL with CDCL.\nDPLL(T) (Nieuwenhuis et al., 2006 ###reference_57###) extends DPLL for propositional formulae to check SMT formulae involving non-Boolean variables, e.g., real numbers and data structures such as strings, arrays, lists.\nDPLL(T) combines DPLL with dedicated theory solvers to analyze formulae in those theories111SMT is Satisfiability Modulo Theories and the T in DPLL(T) stands for Theories.. For example, to check a formula involving linear arithmetic over the reals (LRA), DPLL(T) may use a theory solver that uses linear programming to check the constraints in the formula.\nModern DPLL(T)-based SMT solvers such as Z3 (Moura and Bj\u00f8rner, 2008 ###reference_54###) and CVC4 (Barrett et al., 2011 ###reference_8###)\ninclude solvers supporting a wide range of theories including linear arithmetic, nonlinear arithmetic, string, and arrays (Kroening and Strichman, 2016 ###reference_45###)." | |
| }, | |
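To make the DPLL loop above concrete, here is a minimal, self-contained Python sketch, written for this text rather than taken from any solver: it implements BCP over clauses of integer literals and chronological backtracking via recursion, and omits CDCL, non-chronological backjumping, and restarts.

```python
def dpll(clauses, assignment=None):
    """Toy DPLL: clauses are lists of non-zero ints; literal v means variable
    v is True, -v means it is False. Returns a model dict or None (unsat)."""
    assignment = dict(assignment or {})

    def value(lit):
        v = assignment.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)

    # Boolean constraint propagation (unit propagation) to a fixpoint.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            vals = [value(l) for l in clause]
            if True in vals:
                continue                       # clause already satisfied
            pending = [l for l, v in zip(clause, vals) if v is None]
            if not pending:
                return None                    # conflict: clause falsified
            if len(pending) == 1:              # unit clause forces its literal
                assignment[abs(pending[0])] = pending[0] > 0
                changed = True

    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment                      # total assignment: sat
    var = min(free)                            # Decide: pick a variable
    for val in (True, False):                  # try both truth values
        model = dpll(clauses, {**assignment, var: val})
        if model is not None:
            return model
    return None                                # both branches conflict

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))        # -> {1: True, 2: True, 3: True}
```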
| { | |
| "section_id": "2.2", | |
| "parent_section_id": "2", | |
| "section_name": "2.2. The DNN verification problem", | |
| "text": "A neural network (NN) (Goodfellow et al., 2016 ###reference_29###) consists of an input layer, multiple hidden layers, and an output layer. Each layer has a number of neurons, each connected to neurons from previous layers through a predefined set of weights (derived by training the network with data). A DNN is an NN with at least two hidden layers.\nThe output of a DNN is obtained by iteratively computing the values of neurons in each layer.\nThe value of a neuron in the input layer is the input data. The value of a neuron in the hidden layers is computed by applying an affine transformation to values of neurons in the previous layers, then followed by an activation function such as the popular Rectified Linear Unit (ReLU) activation.\nFor this activation, the value of a hidden neuron is\n, where is the bias parameter of , are the weights of , are the neuron values of preceding layer, is the affine transformation, and is the ReLU activation. The values of a neuron in the output layer is evaluated similarly but it may skip the activation function.\nA ReLU activated neuron is said to be active if its input value is greater than\nzero and inactive otherwise.\nGiven a DNN and a property , the DNN verification problem asks if is a valid property of .\nTypically, is a formula of the form , where is a property over the inputs of and is a property over the outputs of .\nA DNN verifier attempts to find a counterexample input to that satisfies but violates . If no such counterexample exists, is a valid property of . Otherwise, is not valid and the counterexample can be used to retrain or debug the DNN (Huang et al., 2017 ###reference_37###).\n###figure_2### Fig. 2 ###reference_### shows a simple DNN with two inputs , two hidden neurons , and one output . The weights of a neuron are shown on its incoming edges , and the bias is shown above or below each neuron. The outputs of the hidden neurons are computed the affine transformation and ReLU, e.g., . The output neuron is computed with just the affine transformation, i.e., .\nA valid property for this DNN is that the output is for any inputs . An invalid property for this network is that for those similar inputs.\nA counterexample showing this property violation is , from which the network evaluates to . 
Such properties can capture safety requirements (e.g., a rule in an collision avoidance system in (Kochenderfer et al., 2012 ###reference_44###; Katz et al., 2017a ###reference_40###) is \u201cif the intruder is distant and significantly slower than us, then we stay below a certain threshold\u201d) or local robustness (Katz et al., 2017b ###reference_41###) conditions (a form of adversarial robustness stating that small perturbations of a given input all yield the same output).\nReLU-based DNN verification is NP-Complete (Katz et al., 2017a ###reference_40###) and thus can be formulated as a SAT or SMT checking problem.\nDirect application of SMT solvers does not scale to the large and complex formulae encoding real-world, complex DNNs.\nWhile custom theory solvers, like Planet and Reluplex, retain the soundness, completeness,\nand termination of SMT and improve on the performance of a direct SMT encoding, they too do not scale sufficiently to handle realistic DNNs (Bak et al., 2021 ###reference_6###; Brix et al., 2023 ###reference_13###).\nApplying techniques from abstract interpretation (Cousot and Cousot, 1977 ###reference_20###),\nabstraction-based DNN verifiers overapproximate nonlinear computations (e.g., ReLU) of the network using linear abstract domains such as interval (Wang et al., 2018b ###reference_73###), zonotope (Singh et al., 2018a ###reference_64###), polytope (Singh et al., 2019b ###reference_66###; Xu et al., 2020b ###reference_78###).\nAs illustrated in Fig. 6 ###reference_### abstract domains can model nonlinearity with\nvarying degrees of precision using polyhedra that are efficient to compute with.\nThis allows abstraction-based DNN verifiers to side-step the disjunctive splitting that is the performance bottleneck\nof constraint-based DNN verifiers.\nA DNN verification technique using an approximation, e.g., the polytope abstract domain, works by (i) representing the input ranges of the DNN as polytopes, (ii) applying transformation rules to the affine and ReLU computations of the network to compute polytope regions representing values of neurons, and (iii) finally, converting the polytope results into output bounds.\nThe resulting outputs are an overapproximation of the actual outputs." | |
| }, | |
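The following self-contained sketch illustrates the definitions above: a layer-by-layer ReLU network evaluation and a property of the form φin ⇒ φout checked by sampling. The 2-2-1 network's weights are hypothetical, chosen only for illustration (they are not the network of Fig. 2, whose exact weights we do not reproduce here). Note that sampling can only hunt for counterexamples; proving the property for all inputs is the verifier's job.

```python
import numpy as np

def forward(x, layers):
    """Evaluate a fully-connected ReLU network: each hidden layer applies an
    affine transform W @ x + b followed by ReLU; the output layer skips ReLU."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:          # hidden layers only
            x = np.maximum(x, 0.0)       # ReLU activation
    return x

# Hypothetical 2-2-1 network (weights chosen for illustration only).
layers = [(np.array([[1.0, -1.0], [1.0, 1.0]]), np.array([0.0, 0.0])),
          (np.array([[1.0, -1.0]]), np.array([0.0]))]

# Property: phi_in is x in [-1,1]^2, phi_out is output <= 2. For this net
# y = ReLU(x1-x2) - ReLU(x1+x2) <= ReLU(x1-x2) <= 2, so no sample violates it.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(-1.0, 1.0, size=2)
    assert forward(x, layers)[0] <= 2.0
print("no counterexample found in 1000 samples")
```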
| { | |
| "section_id": "3", | |
| "parent_section_id": null, | |
| "section_name": "3. Overview of NeuralSAT", | |
| "text": "###figure_3### NeuralSAT is a SMT-based DNN verifier that uses abstraction in its theory solver to accelerate unsatisfiability checking and the exploration of the space of variable assignments.\nFig. 3 ###reference_### gives an overview of NeuralSAT, which follows the DPLL(T) framework (\u00a72 ###reference_###) with some modification and consists of standard DPLL components (light shades) and the theory solver (dark shade).\nNeuralSAT constructs a propositional formula over Boolean variables that represent the activation status of neurons (Boolean Abstraction). Clauses in the formula assert that each neuron, e.g., neuron , is active or inactive, e.g., .\nThis abstraction allows us to use standard DPLL components to search for truth values satisfying these clauses and a DNN-specific theory solver to check the feasibility of truth assignments with respect to the constraints encoding the DNN and the property of interest.\nNeuralSAT now enters an iterative process to find assignments satisfying the activation clauses.\nFirst, NeuralSAT assigns a truth value to an unassigned variable (Decide), detects unit clauses caused by this assignment, and infers additional assignments (Boolean Constraint Propagation).\nNext, NeuralSAT invokes the theory solver or T-solver (Deduction), which uses LP solving and abstraction to check the satisfiability of the constraints of the current assignment with the property of interest. The T-solver can also infer additional truth assignments.\nIf the T-solver confirms satisfiability, NeuralSAT continues with new assignments (Decide). Otherwise, NeuralSAT detects a conflict (Analyze Conflict) and learns clauses to remember it and backtrack to a previous decision (Backtrack).\nIf NeuralSAT detects local optima, it would restart (Restart) the search by clearing all decisions that have been made to escape and the conflict clauses learned so far would be also recorded to avoid reaching the same state in the next runs.\nAs we discuss later in \u00a76 ###reference_###, restarting especially benefits challenging DNN problems by enabling better clause learning and exploring different decision orderings.\nThis process repeats until NeuralSAT can no longer backtrack, and return unsat, indicating the DNN has the property, or it finds a total assignment for all boolean variables, and returns sat (and the user can query NeuralSAT for a counterexample)." | |
| }, | |
| { | |
| "section_id": "3.1", | |
| "parent_section_id": "3", | |
| "section_name": "3.1. Illustration", | |
| "text": "We use NeuralSAT to prove that for inputs the DNN in Fig. 2 ###reference_### produces the output .\nNeuralSAT takes as input the formula representing the DNN:\nand the formula representing the property:\nTo prove , NeuralSAT shows that no values of satisfying the input properties would result in . Thus, we want NeuralSAT to return unsat for :\nIn the following, we write to denote that the variable is assigned with a truth value . This assignment can be either decided by Decide or inferred by BCP. We also write and to indicate the respective assignments and at decision level .\nFirst, NeuralSAT creates two Boolean variables and to represent the\nactivation status of the hidden neurons and , respectively. For example, means is active and thus is the constraint . Similarly, means is inactive and therefore is . Next, NeuralSAT forms two clauses indicating these variables are either active or inactive.\nNeuralSAT searches for an assignment to satisfy the clauses and the constraints they represent.\nFor this example, NeuralSAT uses four iterations, summarized in Tab. 1 ###reference_###, to determine that no such assignment exists and the problem is thus unsat.\n###table_1### In iteration 1, as shown in Fig. 3 ###reference_###, NeuralSAT starts with BCP, which has no effects because the current clauses and (empty) assignment produce no unit clauses.\nIn Deduction, NeuralSAT uses an LP solver to determine that the current set of constraints, which contains just the initial input bounds, is feasible222We use the terms feasible, from the LP community, and satisfiable, from the SAT community, interchangeably.. NeuralSAT then uses abstraction to approximate an output upper bound and thus deduces that satisfying the output might be feasible. NeuralSAT continues with Decide, which uses a heuristic to select the unassigned variable and sets . NeuralSAT also increments the decision level () to 1 and associates to the assignment, i.e., . Note that this process of selecting and assigning (random) values to variables representing neurons is commonly called neuron splitting because it splits the search tree into subtrees corresponding into the assigned values (e.g., see \u00a73.2 ###reference_###).\nIn iteration 2, BCP again has no effect because it does not detect any unit clauses. In Deduction, NeuralSAT determines that current set of constraints, which contains due to the assignment (i.e., ), is feasible. NeuralSAT then approximates a new output upper bound , which means satisfying the output constraint is infeasible.\nNeuralSAT now enters AnalyzeConflict and determines that causes the conflict ( is the only variable assigned so far). From the assignment , NeuralSAT learns a \u201dbackjumping\u201d clause , i.e., must be . NeuralSAT now backtracks to and erases all assignments decided after this level. Thus, is now unassigned and the constraint is also removed.\nIn iteration 3, BCP determines that the learned clause is also a unit clause and infers . In Deduction, we now have the new constraint due to (i.e., ). With the new constraint, NeuralSAT\napproximates the output upper bound , which means might be satisfiable.\nAlso, NeuralSAT computes new bounds and , and deduces that must be positive because its lower bound is 0.5 Thus, NeuralSAT has a new assignment ( stays unchanged due to the implication). Note that this process of inferring new assignments from the T-solver is referred to theory propagation in DPLL(T).\nIn iteration 4, BCP has no effects because we have no new unit clauses. 
In Deduction, NeuralSAT determines that the current set of constraints, which contains the new constraint (due to ), is infeasible. Thus, NeuralSAT enters AnalyzeConflict and determines that , which was set at (by BCP in iteration 3), causes the conflict.\nNeuralSAT then learns a clause (the conflict occurs due to the assignment , but was implied and thus making the conflict).\nHowever, because was assigned at decision level 0, NeuralSAT can no longer backtrack and thus sets and returns unsat.\nThis unsat result shows that the DNN has the property because we cannot find a counterexample violating it, i.e., no inputs that results in .\nNote that this example is too simple to illustrate the use of restart, which is described in \u00a73.2 ###reference_### and 4.2.5 ###reference_SSS5### and crucial for more complicated and nontrivial problems." | |
| }, | |
| { | |
| "section_id": "3.2", | |
| "parent_section_id": "3", | |
| "section_name": "3.2. The search tree of NeuralSAT", | |
| "text": "###figure_4### ###figure_5### As mentioned in \u00a72.2 ###reference_###, ReLU-based DNN verification is NP-complete, and for difficult problem instances DNN verification tools often have to exhaustively search a very large space, making scalability a main concern for modern DNN verification.\nFig. 4 ###reference_### shows the difference between NeuralSAT and another DNN verification tool (e.g., using the popular Branch-and-Bound (BaB) approach) in how they navigate the search space. We assume both tools employ similar abstraction and neuron splitting.\nFig. 4 ###reference_###b shows that the other tool performs splitting to explore different parts of the tree (e.g., splitting and explore the branches with and and so on). Note that the other tool needs to consider the tree shown regardless if it runs sequentially or in parallel.\nIn contrast, NeuralSAT has a smaller search space shown in Fig. 4 ###reference_###a.\nNeuralSAT follows the path , and then (just like the tool on the right).\nHowever, because of the learned clause , NeuralSAT performs a BCP step that sets (and therefore prunes the branch with that needs to be considered in the other tree).\nThen NeuralSAT splits , and like the other tool, determines infeasibility for both branches. Now NeuralSAT\u2019s conflict analysis determines from learned clauses that it needs to backtrack to (yellow node) instead of . Without learned clauses and non-chronological backtracking, NeuralSAT would backtrack to decision and continues with the branch, just like the other tool in Fig. 4 ###reference_###b.\nThus, NeuralSAT was able to generate non-chronological backtracks and use BCP to prune various parts of the search tree. In contrast, the other tool would have to move through the exponential search space to eventually reach the same result as NeuralSAT." | |
| }, | |
| { | |
| "section_id": "4", | |
| "parent_section_id": null, | |
| "section_name": "4. The NeuralSAT Approach", | |
| "text": "Fig. 1 ###reference_### shows the NeuralSAT algorithm, which takes as input the formula representing the ReLU-based DNN and the formulae representing the property to be proved.\nInternally, NeuralSAT checks the satisfiability of the formula\nNeuralSAT returns unsat if the formula unsatisfiable, indicating that is a valid property of , and sat if it is satisfiable, indicating the is not a valid property of .\nNeuralSAT uses a DPLL(T)-based algorithm to check unsatisfiability.\nFirst, the input formula in Eq. 4 ###reference_### is abstracted to a propositional formula\nwith variables encoding neuron activation status (BooleanAbstraction).\nNext, NeuralSAT assign values to Boolean variables (Decide) and checks for conflicts the assignment has with the real-valued constraints of the DNN and the property of interest (BCP and Deduction).\nIf conflicts arise, NeuralSAT determines the assignment decisions causing the conflicts (AnalyzeConflict), backtracks to erase such decisions (Backtrack), and learns clauses to avoid those decisions in the future.\nNeuralSAT repeats these decisions and checking steps until it finds a total or full assignment for all Boolean variables, in which it returns sat, or until it no longer can backtrack, in which it returns unsat.\nNote that NeuralSAT also resets its search (if it thinks that it is stuck in a local optima) and tries different decision orderings to enable better clause learning and avoid similar \u201cbad\u201d decisions in the previous runs.\nWe describe these steps in more detail below." | |
| }, | |
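As a bridge between the description above and the component subsections that follow, here is a runnable miniature of a DPLL(T) loop: the toy DPLL from §2.1 extended with a pluggable theory check that is invoked on each propagated partial assignment, mirroring how Deduction (§4.3) vetoes propositionally consistent but theory-infeasible assignments. The toy theory below is purely an assumption for illustration; NeuralSAT's real T-solver uses LP solving and abstraction.

```python
def dpllt(clauses, t_check, assignment=None):
    """Toy DPLL(T): DPLL over integer literals, plus a theory check t_check
    that returns False when a partial assignment is theory-infeasible."""
    assignment = dict(assignment or {})

    def value(lit):
        v = assignment.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)

    changed = True                             # BCP to a fixpoint, as in DPLL
    while changed:
        changed = False
        for clause in clauses:
            vals = [value(l) for l in clause]
            if True in vals:
                continue
            pending = [l for l, v in zip(clause, vals) if v is None]
            if not pending:
                return None                    # propositional conflict
            if len(pending) == 1:
                assignment[abs(pending[0])] = pending[0] > 0
                changed = True

    if not t_check(assignment):                # Deduction: theory veto
        return None
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment                      # total and feasible: sat
    var = min(free)
    for val in (True, False):
        model = dpllt(clauses, t_check, {**assignment, var: val})
        if model is not None:
            return model
    return None

# Toy theory: variable 1 means "x >= 1", variable 2 means "x <= 0";
# assigning both True is infeasible over the reals.
feasible = lambda a: not (a.get(1) and a.get(2))
print(dpllt([[1, 2]], feasible))               # -> {1: True, 2: False}
```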
| { | |
| "section_id": "4.1", | |
| "parent_section_id": "4", | |
| "section_name": "4.1. Boolean Abstraction", | |
| "text": "BooleanAbstraction (Fig. 1 ###reference_### line 1 ###reference_###) encodes the DNN verification problem into a Boolean constraint to be solved by DPLL. This step creates Boolean variables to represent the activation status of hidden neurons in the DNN. Observe that when evaluating the DNN on any concrete input, the value of each hidden neuron before applying ReLU is either (the neuron is active and the input is passed through to the output) or (the neuron is inactive because the output is 0).\nThis allows partial assignments to these variables to represent neuron activation patterns within the DNN.\nFrom the given network, NeuralSAT first creates Boolean variables representing the activation status of neurons. Next, NeuralSAT forms a set of initial clauses ensuring that each status variable is either T or F, indicating that each neuron is either active or inactive, respectively.\nFor example, for the DNN in Fig. 2 ###reference_###, NeuralSAT creates two status variables for neurons , respectively, and two initial clauses and . The assignment creates the constraint ." | |
| }, | |
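A minimal sketch of the BooleanAbstraction step, assuming neurons are indexed by consecutive integers; the "active or inactive" clauses are tautologies whose only role, as in the text, is to register each status variable for the search.

```python
def boolean_abstraction(num_hidden):
    """One Boolean variable per hidden neuron: literal +i means neuron i is
    active (affine input > 0), -i means inactive (affine input <= 0)."""
    variables = list(range(1, num_hidden + 1))
    clauses = [[v, -v] for v in variables]     # "v or not v": always true, but
    return variables, clauses                  # registers v for Decide/BCP

variables, clauses = boolean_abstraction(2)    # the two-neuron DNN of Fig. 2
print(variables, clauses)                      # [1, 2] [[1, -1], [2, -2]]
```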
| { | |
| "section_id": "4.2", | |
| "parent_section_id": "4", | |
| "section_name": "4.2. DPLL", | |
| "text": "After BooleanAbstraction, NeuralSAT iteratively searches for an assignment satisfying the status clauses (Fig. 1 ###reference_###, lines 1 ###reference_###\u2013 1 ###reference_###).\nNeuralSAT combines DPLL components (e.g., Decide, BCP, AnalyzeConflict, Backtrack and Restart) to assign truth values with a theory solver (\u00a74.3 ###reference_###), consisting of abstraction and linear programming solving, to check the feasibility of the constraints implied by the assignment with respect to the network and property of interest.\nNeuralSAT maintains several variables (Fig. 1 ###reference_###, lines 1 ###reference_###\u2013 1 ###reference_###). These include clauses, a set of clauses consisting of the initial activation clauses and learned clauses; , a truth assignment mapping status variables to truth values; , an implication graph used for analyzing conflicts; and , a non-zero decision level used for assignment and backtracking.\nBCP uses an implication graph (Barrett, 2013 ###reference_10###) to represent the current assignment and the reason for each BCP implication. In this graph, a node represents the assignment and an edge means that BCP infers the assignment represented in node due to the unit clause caused by the assignment represented by node .\nThe implication graph is used by both BCP, which iteratively constructs the graph on each BCP application and uses it to determine conflict, and AnalyzeConflict (\u00a74.2.3 ###reference_SSS3###), which analyzes the conflict in the graph to learn clauses.\n###figure_6### Assume we have the clauses in Fig. 5 ###reference_###(a), the assignments and (represented in the graph in Fig. 5 ###reference_###(b) by nodes and , respectively), and are currently at decision level 6.\nBecause of assignment , BCP infers from the unit clause and captures that implication with edge .\nNext, because of assignment , BCP infers from the unit clause as shown by edge .\nSimilarly, BCP creates edges and to capture the inference from the unit clause due to assignments and .\nNow, BCP detects a conflict because clause cannot be satisfied with the assignments and (i.e., both and are ) and creates two edges to the (red) node : and to capture this conflict.\nNote that in this example BCP has the implication order (and then reaches a conflict). In the current implementation, NeuralSAT makes an arbitrary decision and thus could have a different order, e.g., .\nWe use the standard binary resolution rule to learn a new clause implied by two (resolving) clauses and containing complementary literals involving the (resolution) variable :\nThe resulting (resolvant) clause contains all the literals that do not have complements and .\nFig. 5 ###reference_###(c) demonstrates AnalyzeConflict using the example in \u00a74.2.2 ###reference_SSS2### with the BCP implication order and the conflicting clause (connecting to node in the graph in Fig. 5 ###reference_###(b)) . From , we determine the last assigned literal is , which contains the variable , and the antecedent clause containing is (from the implication graph in Fig. 5 ###reference_###(b), we determine that assignments and cause the BCP implication due to clause ). 
Now we resolve the two clauses and using the resolution variable to obtain the clause .\nNext, from the new clause, we obtain and apply resolution to get the clause .\nSimilarly, from this clause, we obtain and apply resolution to obtain the clause .\nAt this point, AnalyzeConflict determines that this is an asserting clause, which would force an immediate BCP implication after backtracking. As will be shown in \u00a74.2.4 ###reference_SSS4###, NeuralSAT will backtrack to level 3 and erases all assignments after this level (so the assignment is not erased, but assignments after level 3 are erased). Then, BCP will find that is a unit clause because and infers .\nOnce obtaining the asserting clause, AnalyzeConflict stops the search, and NeuralSAT adds as the new clause to the set of existing four clauses.\nThe process of learning clauses allows NeuralSAT to learn from its past mistakes.\nWhile such clauses are logically implied by the formula in Eq. 4 ###reference_### and therefore do not change the result, they help prune the search space and allow DPLL and therefore NeuralSAT to scale. For example, after learning the clause , together with assignment , we immediately infer through BCP instead of having to guess through Decide.\nFrom the clause learned in AnalyzeConflict, we backtrack to decision level 3, the second most recent decision level in the clause (because assignments and were decided at levels 6 and 3, respectively). Next, we erase all assignments from decision level 4 onward (i.e., the assignments to as shown in the implication graph in Fig. 5 ###reference_###). This thus makes these more recently assigned variables (after decision level 3) available for new assignments (in fact, as shown by the example in \u00a74.2.2 ###reference_SSS2###, BCP will immediately infer by noticing that is now a unit clause)." | |
| }, | |
| { | |
| "section_id": "4.2.1", | |
| "parent_section_id": "4.2", | |
| "section_name": "4.2.1. Decide", | |
| "text": "From the current assignment, Decide (Fig. 1 ###reference_###, line 1 ###reference_###) uses a heuristic to choose an unassigned variable and assigns it a random truth value at the current decision level.\nNeuralSAT applies the Filtered Smart Branching (FSB) heuristic (Bunel et al., 2018 ###reference_15###; De Palma et al., 2021 ###reference_23###). For each unassigned variable, FSB assumes that it has been decided (i.e., the corresponding neuron has been split) and computes a fast approximation of the lower and upperbounds of the network output variables. FSB then prioritizes unassigned variables with the best differences among the bounds that would help make the input formula unsatisfiable (which helps prove the property of interest).\nNote that if the current assignment is full, i.e., all variables have assigned values, Decide returns False (from which NeuralSAT returns sat)." | |
| }, | |
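A simplified sketch of an FSB-style Decide, under the assumption that a fast bound estimator is available; the stub estimator below stands in for the abstraction-based approximation, and its values are hypothetical.

```python
def decide(unassigned, output_upper_bound):
    """Pick the unassigned neuron whose split most tightens the network's
    output upper bound (a smaller bound is closer to proving unsat)."""
    def score(var):
        return min(output_upper_bound(var, True),    # neuron forced active
                   output_upper_bound(var, False))   # neuron forced inactive
    return min(unassigned, key=score)

# Stub bound estimator for two candidate neurons (values are hypothetical).
bounds = {(1, True): 0.9, (1, False): 1.5, (2, True): 1.2, (2, False): 1.1}
print(decide([1, 2], lambda v, b: bounds[(v, b)]))   # -> 1
```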
| { | |
| "section_id": "4.2.2", | |
| "parent_section_id": "4.2", | |
| "section_name": "4.2.2. Boolean Constraint Propagation (BCP)", | |
| "text": "From the current assignment and clauses, BCP (Fig. 1 ###reference_###, line 1 ###reference_###) detects unit clauses333A unit clause is a clause that has a single unassigned literal. and infers values for variables in these clauses.\nFor example, after the decision , BCP determines that the clause becomes unit, and infers that .\nMoreover, each assignment due to BCP is associated with the current decision level because instead of being \u201cguessed\u201d by Decide the chosen value is logically implied by other assignments.\nMoreover, because each BCP implication might cause other clauses to become unit, BCP is applied repeatedly until it can no longer find unit clauses.\nBCP returns False if it obtains contradictory implications (e.g., one BCP application infers while another infers ), and returns True otherwise.\nBCP uses an implication graph (Barrett, 2013 ###reference_10### ###reference_10###) to represent the current assignment and the reason for each BCP implication. In this graph, a node represents the assignment and an edge means that BCP infers the assignment represented in node due to the unit clause caused by the assignment represented by node .\nThe implication graph is used by both BCP, which iteratively constructs the graph on each BCP application and uses it to determine conflict, and AnalyzeConflict (\u00a74.2.3 ###reference_SSS3### ###reference_SSS3###), which analyzes the conflict in the graph to learn clauses.\n###figure_7### Assume we have the clauses in Fig. 5 ###reference_### ###reference_###(a), the assignments and (represented in the graph in Fig. 5 ###reference_### ###reference_###(b) by nodes and , respectively), and are currently at decision level 6.\nBecause of assignment , BCP infers from the unit clause and captures that implication with edge .\nNext, because of assignment , BCP infers from the unit clause as shown by edge .\nSimilarly, BCP creates edges and to capture the inference from the unit clause due to assignments and .\nNow, BCP detects a conflict because clause cannot be satisfied with the assignments and (i.e., both and are ) and creates two edges to the (red) node : and to capture this conflict.\nNote that in this example BCP has the implication order (and then reaches a conflict). In the current implementation, NeuralSAT makes an arbitrary decision and thus could have a different order, e.g., ." | |
| }, | |
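The sketch below extends unit propagation with the bookkeeping described above: every implied variable records the antecedent clause that forced it, which is exactly the edge information of the implication graph that conflict analysis (§4.2.3) walks backward. The clause contents are illustrative, not taken from Fig. 5.

```python
def bcp(clauses, assignment, antecedent, dl):
    """assignment: var -> (bool, decision_level); antecedent: var -> clause.
    Returns the conflicting clause if a conflict is found, else None."""
    def value(lit):
        if abs(lit) not in assignment:
            return None
        b, _ = assignment[abs(lit)]
        return b if lit > 0 else not b

    changed = True
    while changed:
        changed = False
        for clause in clauses:
            vals = [value(l) for l in clause]
            if True in vals:
                continue
            pending = [l for l, v in zip(clause, vals) if v is None]
            if not pending:
                return clause                 # conflicting clause found
            if len(pending) == 1:
                lit = pending[0]
                assignment[abs(lit)] = (lit > 0, dl)
                antecedent[abs(lit)] = clause # edge(s) of the implication graph
                changed = True
    return None                               # no conflict

assignment = {1: (True, 1)}                   # decision x1 = True at level 1
antecedent = {}
conflict = bcp([[-1, 2], [-2, 3], [-3, -2]], assignment, antecedent, 1)
print(assignment, antecedent, conflict)       # x2, x3 implied; [-3, -2] conflicts
```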
| { | |
| "section_id": "4.2.3", | |
| "parent_section_id": "4.2", | |
| "section_name": "4.2.3. Conflict Analysis", | |
| "text": "Given an implication graph with a conflict such as the one in Fig. 5 ###reference_###(b), AnalyzeConflict learns a new clause to avoid past decisions causing the conflict.\nThe algorithm traverses the implication graph backward, starting from the conflicting node , while constructing a new clause through a series of resolution steps.\nAnalyzeConflict aims to obtain an asserting clause, which is a clause that will force an immediate BCP implication after backtracking.\nAnalyzeConflict, shown in Fig. 2 ###reference_###, first extracts the conflicting clause (line 2 ###reference_###), represented by the edges connecting to the conflicting node in the implication graph.\nNext, the algorithm refines this clause to achieve an asserting clause (lines 2 ###reference_###\u2013 2 ###reference_###).\nIt obtains the literal that was assigned last in (line 2 ###reference_###), the variable associated with (line 2 ###reference_###), and the antecedent clause of that (line 2 ###reference_###), which contains as the only satisfied literal in the clause. Now, AnalyzeConflict resolves and to eliminate literals involving (line 2 ###reference_###). The result of the resolution is a clause, which is then refined in the next iteration.\nWe use the standard binary resolution rule to learn a new clause implied by two (resolving) clauses and containing complementary literals involving the (resolution) variable :\nThe resulting (resolvant) clause contains all the literals that do not have complements and .\nFig. 5 ###reference_### ###reference_###(c) demonstrates AnalyzeConflict using the example in \u00a74.2.2 ###reference_SSS2### ###reference_SSS2### with the BCP implication order and the conflicting clause (connecting to node in the graph in Fig. 5 ###reference_### ###reference_###(b)) . From , we determine the last assigned literal is , which contains the variable , and the antecedent clause containing is (from the implication graph in Fig. 5 ###reference_### ###reference_###(b), we determine that assignments and cause the BCP implication due to clause ). Now we resolve the two clauses and using the resolution variable to obtain the clause .\nNext, from the new clause, we obtain and apply resolution to get the clause .\nSimilarly, from this clause, we obtain and apply resolution to obtain the clause .\nAt this point, AnalyzeConflict determines that this is an asserting clause, which would force an immediate BCP implication after backtracking. As will be shown in \u00a74.2.4 ###reference_SSS4### ###reference_SSS4###, NeuralSAT will backtrack to level 3 and erases all assignments after this level (so the assignment is not erased, but assignments after level 3 are erased). Then, BCP will find that is a unit clause because and infers .\nOnce obtaining the asserting clause, AnalyzeConflict stops the search, and NeuralSAT adds as the new clause to the set of existing four clauses.\nThe process of learning clauses allows NeuralSAT to learn from its past mistakes.\nWhile such clauses are logically implied by the formula in Eq. 4 ###reference_### ###reference_### and therefore do not change the result, they help prune the search space and allow DPLL and therefore NeuralSAT to scale. For example, after learning the clause , together with assignment , we immediately infer through BCP instead of having to guess through Decide." | |
| }, | |
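A sketch of clause learning by binary resolution, continuing the BCP example above. Real CDCL stops at the first unique implication point; for brevity this version resolves away every implied literal, which on this tiny example yields the same (unit) learned clause. Names and clause contents are illustrative.

```python
def resolve(c1, c2, v):
    """Binary resolution on variable v: drop v and -v, union the rest."""
    return sorted(set(l for l in c1 + c2 if l not in (v, -v)), key=abs)

def analyze_conflict(conflict_clause, antecedent, trail):
    """trail: literals in the order BCP/Decide assigned them. Walk backwards,
    resolving the current clause with the antecedent of its last-assigned
    implied literal; decisions (no antecedent) are kept in the clause."""
    clause = list(conflict_clause)
    for lit in reversed(trail):
        v = abs(lit)
        if v in antecedent and any(abs(l) == v for l in clause):
            clause = resolve(clause, antecedent[v], v)
    return clause

# Continuing the BCP example: x1=True forced x2 then x3, and [-3, -2] conflicts.
trail = [1, 2, 3]
antecedent = {2: [-1, 2], 3: [-2, 3]}
print(analyze_conflict([-3, -2], antecedent, trail))   # -> [-1]: learn "not x1"
```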
| { | |
| "section_id": "4.2.4", | |
| "parent_section_id": "4.2", | |
| "section_name": "4.2.4. Backtrack", | |
| "text": "From the clause returned by AnalyzeConflict, Backtrack (Fig. 1 ###reference_###, line 1 ###reference_###) computes a backtracking level and erases all decisions and implications made after that level.\nIf the clause is unary (containing just a single literal), then we backtrack to level 0.\nCurrently, NeuralSAT uses the standard conflict-drive backtracking strategy (Barrett, 2013 ###reference_10###), which sets the backtracking level to the second most recent decision level in the clause.\nIntuitively, by backtracking to the second most recent level, which means erasing assignments made after that level, this strategy encourages trying new assignments for more recently decided variables.\nFrom the clause learned in AnalyzeConflict, we backtrack to decision level 3, the second most recent decision level in the clause (because assignments and were decided at levels 6 and 3, respectively). Next, we erase all assignments from decision level 4 onward (i.e., the assignments to as shown in the implication graph in Fig. 5 ###reference_### ###reference_###). This thus makes these more recently assigned variables (after decision level 3) available for new assignments (in fact, as shown by the example in \u00a74.2.2 ###reference_SSS2### ###reference_SSS2###, BCP will immediately infer by noticing that is now a unit clause)." | |
| }, | |
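A sketch of the backtracking-level computation and the erasure it triggers, assuming the assignment maps each variable to a (value, decision-level) pair as in the BCP sketch above; the levels and variables are illustrative.

```python
def backtrack_level(learned_clause, assignment):
    """Second most recent decision level among the clause's literals,
    or level 0 for a unary clause."""
    levels = sorted({assignment[abs(l)][1] for l in learned_clause}, reverse=True)
    return 0 if len(levels) < 2 else levels[1]

def backtrack(assignment, antecedent, level):
    """Erase every assignment (and its implication-graph edge) above level."""
    for var in [v for v, (_, dl) in assignment.items() if dl > level]:
        del assignment[var]
        antecedent.pop(var, None)

assignment = {1: (True, 3), 2: (False, 5), 3: (True, 6), 4: (True, 6)}
lvl = backtrack_level([-1, -3], assignment)    # levels {3, 6} -> jump to 3
backtrack(assignment, {}, lvl)
print(lvl, assignment)                         # 3 {1: (True, 3)}
```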
| { | |
| "section_id": "4.2.5", | |
| "parent_section_id": "4.2", | |
| "section_name": "4.2.5. Restart", | |
| "text": "As with any stochastic algorithm, NeuralSAT can perform poorly if it gets into a subspace of the search that does not quickly lead to a solution, e.g., due to choosing a bad sequence of neurons to split (Bunel et al., 2018 ###reference_15###; De Palma et al., 2021 ###reference_23###).\nThis problem, which has been recognized in early SAT solving, motivates the introduction of restarting the search (Gomes et al., 1998 ###reference_28###) to avoid being stuck in such a local optima.\nNeuralSAT uses a simple restart heuristic (Fig. 1 ###reference_###, line 1 ###reference_###) that triggers a restart when either the number of processed assignments (nodes) exceeds a pre-defined number (e.g., 300 nodes) or the current runtime exceeds a pre-defined threshold (e.g., 50 seconds).\nAfter a restart, NeuralSAT avoids using the same decision order of previous runs (i.e., it would use a different sequence of neuron splittings). It also resets all internal information (e.g., decisions and implication graph) except the learned conflict clauses, which are kept and reused as these are facts about the given constraint system.\nThis allows a restarted search to quickly prune parts of the space of assignments.\nWe found the combination of clause learning and restarts effective for DNN verification. In particular, while restart resets information it keeps learned clauses, which are facts implied by the problem, and therefore enables quicker BCP applications and non-chronological backtracking (e.g., as illustrated in Fig. 4 ###reference_###).\nIt is worth noting that while it is possible to add a restart to existing DNN verification approaches. This is unlikely to help, because these techniques do not learn conflict clauses and therefore restart will just randomize order but carry no information forward to prune the search space." | |
| }, | |
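A sketch of the restart policy described above, using the thresholds quoted in the text (300 nodes or 50 seconds, at most 3 restarts, per §6.1); the class name and interface are assumptions, and the caller is responsible for resetting decisions while retaining learned clauses.

```python
import time

class RestartPolicy:
    """Trigger a restart when the number of explored assignments or the
    elapsed time passes a threshold, up to a maximum number of restarts."""
    def __init__(self, max_nodes=300, max_seconds=50.0, max_restarts=3):
        self.max_nodes, self.max_seconds = max_nodes, max_seconds
        self.max_restarts, self.restarts = max_restarts, 0
        self.nodes, self.start = 0, time.monotonic()

    def should_restart(self):
        if self.restarts >= self.max_restarts:
            return False
        if self.nodes > self.max_nodes or \
           time.monotonic() - self.start > self.max_seconds:
            self.restarts += 1
            self.nodes, self.start = 0, time.monotonic()
            return True   # caller resets decisions but keeps learned clauses
        return False
```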
| { | |
| "section_id": "4.3", | |
| "parent_section_id": "4", | |
| "section_name": "4.3. Deduction (Theory Solving)", | |
| "text": "Deduction (Fig. 1 ###reference_###, line 1 ###reference_###) is the theory or T-solver, i.e., the T in DPLL(T). The main purpose of the T-solver is to check the feasibility of the constraints represented by the current propositional variable assignment; as shown in the formalization in \u00a7A ###reference_### this amounts to just linear equation solving for verifying piecewise linear DNNs. However, NeuralSAT is able to leverage specific information from the DNN problem, including input and output properties, for more aggressive feasibility checking. Specifically, Deduction has three tasks: (i) checking feasibility using linear programming (LP) solving, (i) further checking feasibility with input tightening and abstraction, and (iii) inferring literals that are unassigned and are implied by the abstracted constraint.\nFig. 3 ###reference_### describes Deduction, which returns False if infeasibility occurs and True otherwise.\nFirst, it creates a linear constraint system from the input assignment and , i.e., the formula in Eq. 4 ###reference_### representing the original problem (line 3 ###reference_###).\nThe key idea is that we can remove ReLU activation for hidden neurons whose activation status have been decided.\nFor constraints in associated with variables that are not in the , we ignore them and just consider the cutting planes introduced by the partial assignment.\nFor example, for the assignment , the non-linear ReLU constraints and for the DNN in Fig. 2 ###reference_### become linear constraints and , respectively.\nNext, an LP solver checks the feasibility of the linear constraints (line 3 ###reference_###).\nIf the solver returns infeasible, Deduction returns False so that NeuralSAT can analyze the assignment and backtrack.\nIf the constraints are feasible, then there are two cases to handle. First, if the assignment is total (i.e., all variables are assigned), then that means that the original problem is satisfiable (line 3 ###reference_###) and NeuralSAT returns sat.\nSecond, if the assignment is not total then Deduction applies abstraction to check satisfiability (lines 3 ###reference_###\u20133 ###reference_###).\nSpecifically, we over-approximate ReLU computations to obtain the upper and lower bounds of the output values and check if the output properties are feasible with respect to these bounds. For example, the output is not feasible if the upperbound is and might be feasible if the upperbound is (\u201cmight be\u201d because this is an upper-bound). If abstraction results in infeasibility, then Deduction returns False for NeuralSAT to analyze the current assignment (line 3 ###reference_###).\n###figure_8### ###figure_9### ###figure_10### ###figure_11### NeuralSAT uses abstraction to approximate the lower and upper bounds of hidden and output neurons.\nFig. 6 ###reference_### compares the (a) interval (Wang et al., 2018b ###reference_73###), (b) zonotope (Singh et al., 2018a ###reference_64###), and (c, d) polytope (Xu et al., 2020b ###reference_78###; Singh et al., 2019b ###reference_66###; Wang et al., 2021 ###reference_74###) abstraction domains to compute the lower and upper bounds of a ReLU computation (non-convex red line).\nNeuralSAT can employ any existing abstract domains, though currently it adopts the LiRPA polytope (Fig. 
6 ###reference_###d) (Xu et al., 2020b ###reference_78###; Wang et al., 2021 ###reference_74###) because it has a good trade-off between precision and efficiency.\nIf abstraction results in feasible constraints, Deduction next attempts to infer implied literals (lines 3 ###reference_###\u2013 3 ###reference_###). To obtain the bounds of the output neurons, abstraction also needs to compute the bounds of hidden neurons, including those with undecided activation status (i.e., not yet in ).\nThis allows us to assign the activation variable of a hidden neuron the value\nTrue if the lowerbound of that neuron is greater than 0 (the neuron is active) and\nFalse otherwise.\nSince each literal is considered, this would be considered exhaustive theory propagation. Whereas the literature (Nieuwenhuis et al., 2006 ###reference_57###; Kroening and Strichman, 2016 ###reference_45###) suggests that this is an inefficient strategy, we find that it does not incur significant overhead (average overhead is about 4% and median is 2% with outliners being large CIFAR2020 networks described in \u00a75 ###reference_###).\nFor the illustrative example in \u00a73.1 ###reference_###, in iteration 3, the current assignment is , corresponding to a constraint . With the new constraint, we optimize the input bounds and compute the new bounds for hidden neurons , and output neuron (and use this to determine that the postcondition might be feasible). We also infer because of the positive lower bound ." | |
| }, | |
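An interval-domain sketch of Deduction's abstraction step. NeuralSAT itself uses the tighter LiRPA polytope domain; intervals are the simplest stand-in, and they suffice to show both feasibility checking against an output bound and theory propagation from sign-determined neuron bounds. The weights are the same hypothetical 2-2-1 network used in the §2.2 sketch.

```python
import numpy as np

def interval_forward(lb, ub, layers):
    """Propagate elementwise input bounds [lb, ub] through affine+ReLU layers."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)   # split weight signs
        lb, ub = Wp @ lb + Wn @ ub + b, Wp @ ub + Wn @ lb + b
        if i < len(layers) - 1:
            # A hidden neuron with lower bound > 0 is provably active (and one
            # with upper bound <= 0 provably inactive): its Boolean status
            # variable can be inferred, i.e., theory propagation.
            lb, ub = np.maximum(lb, 0), np.maximum(ub, 0)
    return lb, ub

layers = [(np.array([[1.0, -1.0], [1.0, 1.0]]), np.array([0.0, 0.0])),
          (np.array([[1.0, -1.0]]), np.array([0.0]))]
lb, ub = interval_forward(np.array([-1.0, -1.0]), np.array([1.0, 1.0]), layers)
# If the negated output property asserts "output > 3", the T-solver reports
# infeasibility whenever ub < 3, letting DPLL(T) prune the current assignment.
print(lb, ub)    # [-2.] [2.]: the assertion "output > 3" is infeasible
```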
| { | |
| "section_id": "4.4", | |
| "parent_section_id": "4", | |
| "section_name": "4.4. Optimizations", | |
| "text": "Like some other verifiers (Katz et al., 2019 ###reference_43###, 2022 ###reference_42###; Bak, 2021 ###reference_5###),\nNeuralSAT implements input splitting to quickly deal with small verification problems, such as ACAS Xu discussed in \u00a75 ###reference_###. This technique divides the original verification problem into subproblems, each checking whether the DNN produces the desired output from a smaller input region and returns unsat if all subproblems are verified and sat if a counterexample is found in any subproblem.\nMoreover, like other DNN verifiers (Ferrari et al., 2022 ###reference_26###; Zhang et al., 2022 ###reference_81###), NeuralSAT tool implements a fast-path optimization that attempts to disprove or falsify the property before running DPLL(T).\nNeuralSAT uses two adversarial attack algorithms to find counterexamples to falsify properties.\nFirst, we try a randomized attack approach (Das et al., 2021 ###reference_21###), which is a derivative-free sampling-based optimization (Yu et al., 2016 ###reference_80###), to generate a potential counterexample.\nIf this approach fails, we then use a gradient-based approach (Madry et al., 2017 ###reference_50###) to create another potential counterexample.\nIf either attack algorithm gives a valid counterexample, NeuralSAT returns sat, indicating that property is invalid. If both algorithms cannot find a valid counterexample or they exceed a predefined timeout, NeuralSAT continues with its DPLL(T) search." | |
| }, | |
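A sketch of the randomized fast-path falsifier, a stand-in for the sampling-based and gradient-based attacks cited above; the function name and interface are assumptions, not NeuralSAT's API.

```python
import numpy as np

def random_attack(net, in_lb, in_ub, violates, samples=10_000, seed=0):
    """Sample inputs uniformly from the property's input box; return the first
    counterexample (an input whose output violates phi_out), else None."""
    rng = np.random.default_rng(seed)
    for _ in range(samples):
        x = rng.uniform(in_lb, in_ub)
        if violates(net(x)):
            return x        # valid counterexample: the property is falsified
    return None             # attack failed: fall through to DPLL(T)

# Demo on a toy 1-in/1-out "network": the property claims output <= 0.9 on [0,1].
cex = random_attack(lambda x: x, np.array([0.0]), np.array([1.0]),
                    lambda y: y[0] > 0.9)
print(cex)                  # some x > 0.9, so the claimed property is invalid
```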
| { | |
| "section_id": "5", | |
| "parent_section_id": null, | |
| "section_name": "5. Implementation and Experimental Settings", | |
| "text": "" | |
| }, | |
| { | |
| "section_id": "6", | |
| "parent_section_id": null, | |
| "section_name": "6. Results", | |
| "text": "We evaluate NeuralSAT to answer the following research questions:\nHow do clause-learning and restart impact NeuralSAT performance?\nHow does NeuralSAT compare to state-of-the-art DNN verifiers?\nHow does NeuralSAT compare to DPLL(T)-based DNN verifiers?\nWe note that in our experiments all tools provide correct results. If a tool was able to solve an instance, then it solves it correctly, i.e., no tool returned sat for unsat instances and vice versa." | |
| }, | |
| { | |
| "section_id": "6.1", | |
| "parent_section_id": "6", | |
| "section_name": "6.1. RQ1: Clause-learning and Restart Ablation Study", | |
| "text": "NeuralSAT\u2019s CDCL and restart functionality offer potential benefits in mitigating the exponential cost of verification. We apply two treatments to explore those benefits:\n\u201cFull\u201d corresponds to the full algorithm in Fig. 1 ###reference_###; and\n\u201cNo Restart\u201d corresponds to algorithm in Fig. 1 ###reference_### without Restart (Line 1 ###reference_###).\n###figure_12### ###figure_13### We use the 60 challenging CIFAR_GDVB instances in this study\nsince they force the verifier to explore the exponentially sized space of variable assignments; eliminating the potential for fast-path optimizations.\nOur primary metrics are the number of verification\nproblems solved and the time to solve them.\nFig. 7 ###reference_###(a) presents data on NeuralSAT with different treatments (Settings) in the table.\nFig. 7 ###reference_###(b) shows the problems solved within the 1800-second timeout for each technique sorted by runtime from fastest to slowest; problems that timeout are not shown on the plot.\nThese data clearly show the benefit of CDCL with restart.\nCompared to no restart, 20 additional problems can be verified\nwhich represents a 53% increase.\nWe note that performing restarts causes the search process\nto begin again and potentially performs redundant analysis, but\nthe search after a restart carries forward the learned clauses\nwhich serve to prune subsequent search.\nFig. 7 ###reference_###(b) illustrates the overhead incurred by\nrestart in the fastest 10 problems \u2013 before the \u201cFull\u201d and \u201cNo Restart\u201d curves diverge.\nWe note that performing a restart without the clauses learned through CDCL amounts to rerunning the verifier with a different random seed to vary search order. While this could be achieved\nwith other DNN verifiers its benefit for verification would be limited.\nTo further understand the benefits of CDCL with and without restart, we collected internal data from NeuralSAT to record\nthe number of iterations of in Fig. 1 ###reference_### and the number of decisions computed (Line 1 ###reference_###) on average across the benchmark. Note that for the \u201cNo Restart\u201d treatment the outer loop executes a single time so the number of iterations are just for the inner loop.\nFig. 6(c) ###reference_f3### plots the median and quartiles for iterations needed on the left axis and for decisions made on the right axis with box plots.\nWe annotate the median values next to the box plots.\nThese data show that restarts lead to a reduction\nin both median number of iterations and decisions, 11% and 9%, respectively.\nFor these experiments, we allowed only 3 restarts that were triggered when either\n300 branches were explored or 50 seconds had elapsed, so at most 4 iterations\nof the outer loop in in Fig. 1 ###reference_### were executed. Despite these restarts\nthe number of iterations of the inner loop was reduced indicating\nthat later restart phases were able to accelerate through the search space using\nlearned clauses. The data for decisions tell a consistent story since learned\nclauses will allow BCP to prune branches at decision points in later restart phases." | |
| }, | |
| { | |
| "section_id": "6.2", | |
| "parent_section_id": "6", | |
| "section_name": "6.2. RQ2: Comparison with State-of-the-art Verifiers", | |
| "text": "To compare verifiers,\nwe adopt the rules in VNN-COMP\u201922 to score and rank tools.\nFor each verification problem instance, a tool scores 10 points if it correctly verifies an instance, 1 point if it correctly falsifies an instance, and 0 points if it cannot solve (e.g., timeouts, has errors, or returns unknown), -150 points if it gives incorrect results (this penalty did not apply in the scope of our study).\nWe note that VNN-COMP\u201922 assigns different scores for falsification: 1 point if the tool found a counterexample using an external adversarial attack technique, and 10 points if the tool found a counterexample using its core search algorithm. The tools we compared to did not report how they\nfalsified problems, so we give a single point for a false result regardless of how it was obtained.\nWe note that NeuralSAT exhibited the best falsification performance so it is likely disadvantaged by this scoring approach.\nTab. 3 ###reference_### shows the results of NeuralSAT and the top-performing VNN-COMP verifiers:\n--CROWN, MN-BaB, and nnenum, and two versions of Marabou.\nWe report the rank (#) and score (S) of each tool using the VNN-COMP rules for each benchmark as well as the overall rank.\nTools that do not work on a benchmark are not shown under that benchmark (e.g., Marabou reports errors for all CIFAR2020 problems).\nThe last two columns break down the number of problems each verifier was able to verify (V) or falsify (F).\nAcross these benchmarks NeuralSAT ranks second to --CROWN, which was the top performer in VNN-COMP\u201922 and thus the state-of-the-art. It trails --CROWN in the number of problems verified, though it can falsify more problems than any other verifier across the benchmarks.\nBoth Marabou and nnenum outperform NeuralSAT on the MNISTFC and ACAS Xu,\nbut we observe that these are small DNNs. On the larger\nDNNs in the CIFAR2020, RESNET_A/B and CIFAR_GDVB benchmarks, which\nhave orders of magnitude more neurons, NeuralSAT significantly outperforms those techniques.\nWhile ranking second, NeuralSAT solves 95% of the problems in the large network benchmarks that are solved by --CROWN. Moreover, on the most challenging benchmark, CIFAR_GDVB,\nNeuralSAT solves 2 fewer problems than --CROWN.\nWe expect that further optimization of NeuralSAT will help close that gap and note that --CROWN has been under development for over 4 years and is highly-optimized from years of VNN-COMP participation.\nIn addition, --CROWN\u2019s developers tuned 10 parameters, on average,\nto optimize its performance for each individual benchmark.\nIn contrast, we did not tune any parameters for NeuralSAT which suggests that its performance on large models may generalize better in practice and that further improvement could come from parameter tuning." | |
| }, | |
| { | |
| "section_id": "6.3", | |
| "parent_section_id": "6", | |
| "section_name": "6.3. RQ3: Comparison with DPLL(T)-based DNN Verifiers", | |
| "text": "The state-of-the-art in DPLL(T)-based DNN verification is Marabou.\nIt improves on Reluplex by incorporating abstraction and deduction techniques, and\nhas been entered in VNN-COMP\u201922 in recent years. This makes it a reasonable point of\ncomparison for NeuralSAT especially in understanding the benefit of the addition of\nCDCL on the scalability of DNN verification.\nOverall both versions of Marabou ranked poorly, but it did outperform NeuralSAT on\nsmall DNNs. Consider the ACAS Xu networks which are small in two ways: they have very few neurons (300) and they only have 5 input dimensions.\nMarabou employs multiple optimizations to target small scale networks.\nFor example, a variant of the Split and Conquer algorithm (Wu et al., 2020 ###reference_75###)\nsubdivides the input space to generate separate verification problems.\nPartitioning a 5 dimensional input space is one thing, but the number of partitions\ngrows exponentially with input dimension and this approach is not cost\neffective for the larger networks in our study.\nMarabou could not scale to any of the larger CIFAR or RESNET problems, so a direct comparison with NeuralSAT is not possible.\nInstead, we observe that NeuralSAT performed well on these problems \u2013 ranking better than it did on the smaller problems.\nWe conjecture that this is because problems of this scale give ample time for\nclause learning and CDCL to significantly prune the search performed by DPLL(T).\nEvidence for this can be observed in data on the learned clauses recorded during\nruns of NeuralSAT on unsat problems. Since NeuralSAT\u2019s propositional\nencodings have a number of variables proportional to the number of neurons ()\nin the network the effect of a learned clause of\nsize is that it has the potential to block a space of assignments of\nsize . In other words, as problems grow the reduction through CDCL grows\ncombinatorially. In the largest problem in the benchmarks, with we see clauses on average\nof size which allows BCP to prune an enormous space of assignments \u2013 of size .\nThe ability of NeuralSAT to scale well beyond other DPLL(T) approaches to DNN verification demonstrates the benefit of CDCL." | |
| }, | |
| { | |
| "section_id": "7", | |
| "parent_section_id": null, | |
| "section_name": "7. Related Work", | |
| "text": "The literature on DNN verification is rich and is steadily growing (cf. (Urban and Min\u00e9, 2021 ###reference_71###; Liu et al., 2021 ###reference_49###)). Here we summarize well-known techniques with tool implementations.\nConstraint-based approaches such as\nDLV (Huang et al., 2017 ###reference_37###),\nPlanet (Ehlers, 2017 ###reference_24###), and\nReluplex (Katz et al., 2017a ###reference_40###) and its successor Marabou (Katz et al., 2019 ###reference_43###, 2022 ###reference_42###) transform DNN verification into a constraint problem, solvable using an SMT (Planet, DLV) or DPLL-based search with a customized simplex and MILP solver (Reluplex, Marabou) solvers.\nAbstraction-based techniques and tools\nsuch as AI (Gehr et al., 2018 ###reference_27###),\nERAN (M\u00fcller et al., 2021 ###reference_55###; Singh et al., 2019b ###reference_66###, 2018a ###reference_64###)\n(DeepZ (Singh et al., 2018a ###reference_64###),\nRefineZono (Singh et al., 2018b ###reference_65###),\nDeepPoly (Singh et al., 2019b ###reference_66###),\nK-ReLU (Singh et al., 2019a ###reference_63###)),\nMN-BaB (Ferrari et al., 2022 ###reference_26###)),\nReluval (Wang et al., 2018b ###reference_73###), Neurify (Wang et al., 2018a ###reference_72###), VeriNet (Henriksen and Lomuscio, 2020 ###reference_35###), NNV (Tran et al., 2021b ###reference_70###), nnenum (Bak, 2021 ###reference_5###; Bak et al., 2020 ###reference_7###), CROWN (Zhang et al., 2018 ###reference_82###), --CROWN (Wang et al., 2021 ###reference_74###), use abstract domains such as interval (Reluval/Neurify), zonotope (DeepZ, nnenum), polytope (DeepPoly), starset/imagestar (NNV, nnenum) to scale verification.\nOVAL (OVAL-group, 2023 ###reference_58###) and DNNV (Shriver et al., 2021 ###reference_62###) are frameworks employing various existing DNN verification tools.\nOur NeuralSAT, which is most related to Marabou, is a DPLL(T) approach that integrates clause learning and abstraction in theory solving.\nWell-known abstract domains for DNN verification include interval, zonotope, polytope, and starset/imagestar. Several top verifiers such as MN-BaB and nnenum use multiple abstract domains (e.g., MN-BaB uses deeppoly and lirpa, nnenum adopts deeppoly, zonotope and imagestar. The work in (Goubault et al., 2021 ###reference_31###) uses the general max-plus abstraction (Heidergott et al., 2006 ###reference_34###) to represent the non-convex behavior of ReLU.\nNeuralSAT currently uses polytope in its theory solver though it can also use other abstract domains.\nModern SAT solvers benefit from effective heuristics, e.g., VSIDS and DLIS strategies for decision (branching), random restart (Moskewicz et al., 2001 ###reference_53###) and shortening (Chinneck and Dravnieks, 1991 ###reference_18###) or deleting clauses (Moskewicz et al., 2001 ###reference_53###) for memory efficiency and avoiding local maxima caused by greedy strategies. 
Similarly, modern DNN verifiers such as nnenum, --CROWN, and Marabou include many optimizations to improve performance, e.g., Branch-and-Bound (Bunel et al., 2020 ###reference_14###) and Split-and-Conquer (Katz et al., 2019 ###reference_43###, 2022 ###reference_42###; Wu et al., 2020 ###reference_75###) for parallelization, and various optimizations for abstraction refinement (Singh et al., 2018b ###reference_65###; Bak, 2021 ###reference_5###)) and bound tightening (Bak, 2021 ###reference_5###; Katz et al., 2019 ###reference_43###; Wang et al., 2021 ###reference_74###).\nNeuralSAT has many opportunities for improvements such as new decision heuristics and parallel DPLL(T) search algorithms (\u00a78 ###reference_###)." | |
| }, | |
| { | |
| "section_id": "8", | |
| "parent_section_id": null, | |
| "section_name": "8. Conclusion and Future Work", | |
| "text": "We introduce NeuralSAT, a DPLL(T) approach and prototype tool for DNN verification. NeuralSAT includes the standard DPLL components such as clause learning, non-chronological backtracking, and restart heuristics in combination with a theory solver customized for DNN reasoning.\nWe evaluate the NeuralSAT prototype with standard FNNs, CNNs, and Resnets, and show that NeuralSAT is competitive to the state-of-the-art DNN verification tools.\nDespite its relatively unoptimized state, NeuralSAT already demonstrates competitive performance compared to optimized state-of-the-art DNN verification tools (\u00a76 ###reference_###).\nBy adopting the DPLL(T) framework, NeuralSAT presents an opportunity to explore how\nadditional optimizations and frameworks developed for SMT can be adapted to support DNN verification." | |
| } | |
| ], | |
| "appendix": [ | |
| { | |
| "section_id": "Appendix 1", | |
| "parent_section_id": null, | |
| "section_name": "Appendix A NeuralSAT DPLL(T) Formalization", | |
| "text": "In \u00a74 ###reference_### we describe NeuralSAT and its optimizations.\nHere we formalize the NeuralSAT DPLL(T) framework.\nBy abstracting away heuristics, optimizations, and implementation details, we can focus on the core NeuralSAT algorithm and establish its correctness and termination properties.\nNeuralSAT can be described using the states and transition rules of the standard DPLL(T) framework described in (Nieuwenhuis et al., 2006 ###reference_57###) and therefore inherits the theoretical results established there.\nWe also highlight the differences between NeuralSAT and standard DPLL(T), but these differences do not affect any main results. The section aims to be self-contained, but readers who are familiar with the work in (Nieuwenhuis et al., 2006 ###reference_57###) can quickly skim through it.\nWe formalize the NeuralSAT DPLL(T) using transition rules that move from a state to another state of the algorithm. A state is either a assignment and a CNF formula , written as , or the special state Fail, which indicates that the formula is unsatisfiable.\nWe write as a transition from state to . We write to indicate any possible transition from to (i.e., reflexive-transitive closure).\nIn a state , we say the clause is conflicting if .\nTab. 4 ###reference_### gives the conditional transition rules for NeuralSAT. Decision literals, written with suffix , are non-deterministically decided (i.e., guessed), while other literals are deduced deterministically through implication. Intuitively, mistakes can happen with decision literals and require backtracking. In contrast, rules that add non-decision literals help prune the search space.\nThe rules Decide, BCP, Fail describe transitions that do not rely on theory solving. Decide non-deterministically selects and adds an undefined literal to (i.e., is a decision literal and can be backtracked to when conflict occurs).\nBCP (or UnitPropagate) infers and adds the unit literal to to satisfy the clause , where .\nFail moves to a Fail state (i.e., is unsatisfiable) when a conflicting clause occurs and contains no decision literals to backtrack to.\nThe rules T-Learn, T-Forget, T-Backjump, TheoryPropagate describe transitions that rely on theory solving, e.g., . T-Backjump analyzes a conflicting clause to determine an \u201dincorrect\u201d decision literal and computes a \u201dbackjump\u201d clause (which will be used by T-learn to ensure that the incorrect decision literal will not be added to in the future). The rule also adds to (since ) and removes and the set of subsequent literals added to after (i.e., it backtracks and removes the \u201dincorrect\u201d decision and subsequent assignments).\nT-Learn strengthens with a clause C that is entailed by (i.e., learned clauses are lemmas of ).\nAs mentioned, clause is the \u201dbackjumping\u201d clause in T-Backjump.\nFinally, TheoryPropagate infers literals that are T-entailed by literals in (thus is a non-decision literal).\nNeuralSAT Algorithm.\nThe Decide and BCP rules align to the Decide and BCP components of NeuralSAT, respectively.\nThe other rules are also implemented in NeuralSAT through the interactions of Deduction, Analyze-Conflict, and Backtrack components. For example, the T-Backjump rule is implemented as part of Deduction and AnalyzeConflict. 
Also note that while implication graph is a common way to detect conflicts and derive backjumping clause, it is still an implementation detail and therefore not mentioned in T-Backjump (which states there exists a way to obtain a backjumping clause).\nT-Learn, which adds lemmas to existing clauses, is achieved in the main loop of the NeuralSAT algorithm (Fig. 1 ###reference_###, line 1 ###reference_###). TheoryPropagate is implemented as part of Deduction (Fig. 3 ###reference_###, lines 3 ###reference_###\u20133 ###reference_###). Finally, theory solving , i.e., , is implemented in Deduction by using LP solving and abstraction to check satisfiability of linear constraints.\nBy describing NeuralSAT DPLL(T) using transition rules, we can now establish the the formal properties NeuralSAT DPLL(T), which are similar to those of standard DPLL(T).\nBelow we summarize the main results and refer the readers to (Nieuwenhuis et al., 2006 ###reference_57###) for complete proofs.\nNote that the work in (Nieuwenhuis et al., 2006 ###reference_57###) covers multiple variants of DPLL with various rule configurations. Here we focus on just the base DPLL(T) algorithm of NeuralSAT. This significantly simplifies our presentation.\nWe first establish several invariants for the transition rules of NeuralSAT DPLL(T).\nIf , then the following hold:\nAll atoms in and all atoms in are atoms of .\nis indeed an assignment, i.e., it contains no pair of literals and .\nis equivalent to in the theory T.\nAll properties hold trivially in the initial state , so we will use induction to show the transition rules preserve them. Consider a transition . Assume the properties hold for .\nProperty 1 holds because the only atoms can be added to and are from and , all of which belong to .\nProperty 2 preserves the requirement that never shares both negative and positive literals of an atom (the condition of each rule adding a new literal ensures this).\nProperty 3 holds because only T-Learn rule can modify , but learning a clause that is a logical consequence of (i.e., ) will preserve the equivalence between and .\nIf , and is final\nstate, then is either Fail, or of the form , where is a T-model of .\nThis states that if then . This is true because and are logical equivalence by Lemma A.1 ###reference_heorem1###(3).\nNow we prove that NeuralSAT DPLL(T) terminates.\nEvery derivation is finite.\nThis proof uses a well-founded strict partial ordering on states . First, consider the case without T-Learn, in which only the assignment M is modified and the formula F remains constant. Then we can show no infinite derivation by (i) using Lemma A.1 ###reference_heorem1###(1,2) that the number of literals in M and M\u2019 are always less than or equal to the number of atoms in F and (ii) show that the number of \u201dmissing\u201d literals of M is strictly greater than those of M\u2019.\nNow, consider the case with T-learn. 
While F\u2019 can now be modified, i.e., learning new clauses, the number of possible clauses can be added to is finite as clauses are formed from a finite set of atoms and the conditions of T-learn disallow clause duplication.\nNote that if NeuralSAT involved the Restart and Forget rules, which periodically remove learned clauses, then its termination argument becomes more complicated (but still holds) as shown in the work (Nieuwenhuis et al., 2006 ###reference_57###).\nNow we prove that NeuralSAT DPLL(T) is sound and complete.\nIf where the state S is final, then\nSound: S is Fail if, and only if, F is T-unsatisfiable\nComplete: If is of the form , then is a T-model of .\nProperty 1 states that NeuralSAT DPLL(T) ends at Fail state iff the problem F is unsatisfiable. Property 2 asserts that if NeuralSAT DPLL(T) ends with an assignment , then is the model of , i.e, is satisfiable. This property requires showing that if , then , which is established in Lemma A.2 ###reference_heorem2###.\nTogether, these properties of soundness, completeness, and termination make NeuralSAT DPLL(T) a decision procedure. Note that the presented results are independent from the theory under consideration. The main requirement of T-solver is its decidability for T-satisfiability or T-consistency checking.\nNeuralSAT uses LRA, a theory of real numbers with linear constraints, including linear equalities and inequalities, which is decidable (Kroening and Strichman, 2016 ###reference_45###)." | |
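To make the rule-based reading concrete, the sketch below (our illustration, not NeuralSAT code) represents the assignment M of a state M ‖ F as a Python trail and implements the two guarded propositional rules, Decide and BCP, as transitions that either produce a new assignment or report that the rule does not apply; the theory rules follow the same guarded pattern with an added T-entailment check.

```python
from typing import List, Optional, Tuple

Literal = int                        # +v / -v, DIMACS-style
Trail = List[Tuple[Literal, bool]]   # (asserted literal, is_decision), in order

def value(trail: Trail, lit: Literal) -> Optional[bool]:
    """Truth value of `lit` under M, or None if its atom is undefined."""
    for l, _ in trail:
        if abs(l) == abs(lit):
            return l == lit
    return None

def decide(trail: Trail, formula: List[List[Literal]]) -> Optional[Trail]:
    """Decide: add some undefined literal of F to M as a decision literal l^d."""
    for clause in formula:
        for lit in clause:
            if value(trail, lit) is None:     # guard: lit undefined in M
                return trail + [(lit, True)]
    return None                               # rule not applicable

def bcp(trail: Trail, formula: List[List[Literal]]) -> Optional[Trail]:
    """BCP: if some clause C v l has M |= -C and l undefined, add l to M."""
    for clause in formula:
        undef = [l for l in clause if value(trail, l) is None]
        rest_false = all(value(trail, l) is False
                         for l in clause if l not in undef)
        if len(undef) == 1 and rest_false:
            return trail + [(undef[0], False)]  # l is a non-decision literal
    return None

# One possible derivation for F = (x1 v x2) and (-x1 v x3):
F = [[1, 2], [-1, 3]]
M: Trail = []
M = decide(M, F)   # M = [ x1^d ]
M = bcp(M, F)      # M = [ x1^d, x3 ]  (x3 is forced by the unit -x1 v x3)
print(M)           # [(1, True), (3, False)]
```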
| } | |
| ], | |
| "tables": { | |
| "1": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T1.19.1.1\" style=\"font-size:90%;\">Tab. 1</span>. </span><span class=\"ltx_text ltx_font_typewriter\" id=\"S3.T1.20.2\" style=\"font-size:90%;\">NeuralSAT<span class=\"ltx_text ltx_font_serif\" id=\"S3.T1.20.2.1\">\u2019s run producing </span>unsat<span class=\"ltx_text ltx_font_serif\" id=\"S3.T1.20.2.2\">.</span></span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.15\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.15.16.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.15.16.1.1\"><span class=\"ltx_text\" id=\"S3.T1.15.16.1.1.1\" style=\"font-size:80%;\">Iter</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.15.16.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.15.16.1.2.1\" style=\"font-size:80%;\">BCP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S3.T1.15.16.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.15.16.1.3.1\" style=\"font-size:80%;\">DEDUCTION</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.15.16.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.15.16.1.4.1\" style=\"font-size:80%;\">DECIDE</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S3.T1.15.16.1.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.15.16.1.5.1\" style=\"font-size:80%;\">ANALYZE-CONFLICT</span><span class=\"ltx_text\" id=\"S3.T1.15.16.1.5.2\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.15.17.2\">\n<td class=\"ltx_td\" id=\"S3.T1.15.17.2.1\"></td>\n<td class=\"ltx_td\" id=\"S3.T1.15.17.2.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.15.17.2.3\"><span class=\"ltx_text\" id=\"S3.T1.15.17.2.3.1\" style=\"font-size:80%;\">Constraints</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.15.17.2.4\"><span class=\"ltx_text\" id=\"S3.T1.15.17.2.4.1\" style=\"font-size:80%;\">Bounds</span></td>\n<td class=\"ltx_td\" id=\"S3.T1.15.17.2.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.15.17.2.6\"><span class=\"ltx_text\" id=\"S3.T1.15.17.2.6.1\" style=\"font-size:80%;\">Bt</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.15.17.2.7\"><span class=\"ltx_text\" id=\"S3.T1.15.17.2.7.1\" style=\"font-size:80%;\">Learned Clauses</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.4\"><span class=\"ltx_text\" id=\"S3.T1.3.3.4.1\" style=\"font-size:80%;\">Init</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.5\"><span class=\"ltx_text\" id=\"S3.T1.3.3.5.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.6\"><span class=\"ltx_text\" id=\"S3.T1.3.3.6.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.7\"><span class=\"ltx_text\" id=\"S3.T1.3.3.7.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.6.4\"><span 
class=\"ltx_text\" id=\"S3.T1.6.6.4.1\" style=\"font-size:80%;\">1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.6.5\"><span class=\"ltx_text\" id=\"S3.T1.6.6.5.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.6.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.6.6\"><span class=\"ltx_text\" id=\"S3.T1.6.6.6.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.6.7\"><span class=\"ltx_text\" id=\"S3.T1.6.6.7.1\" style=\"font-size:80%;\">-</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.9.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.9.9.4\"><span class=\"ltx_text\" id=\"S3.T1.9.9.4.1\" style=\"font-size:80%;\">2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.9.9.5\"><span class=\"ltx_text\" id=\"S3.T1.9.9.5.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.9.9.6\"><span class=\"ltx_text\" id=\"S3.T1.9.9.6.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.9.9.7\"><span class=\"ltx_text\" id=\"S3.T1.9.9.7.1\" style=\"font-size:80%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.9.9.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.13.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.13.13.5\"><span class=\"ltx_text\" id=\"S3.T1.13.13.5.1\" style=\"font-size:80%;\">3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.12.12.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.13.13.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.13.13.6\"><span class=\"ltx_text\" id=\"S3.T1.13.13.6.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.13.13.7\"><span class=\"ltx_text\" id=\"S3.T1.13.13.7.1\" style=\"font-size:80%;\">-</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.15.15.3\"><span class=\"ltx_text\" id=\"S3.T1.15.15.3.1\" style=\"font-size:80%;\">4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.15.15.4\"><span class=\"ltx_text\" id=\"S3.T1.15.15.4.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.14.14.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.15.15.5\"><span class=\"ltx_text\" id=\"S3.T1.15.15.5.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.15.15.6\"><span class=\"ltx_text\" id=\"S3.T1.15.15.6.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.15.15.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.15.15.7.1\" style=\"font-size:80%;\">-1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.15.15.2\"></td>\n</tr>\n</tbody>\n</table>\n</figure>", | |
| "capture": "Tab. 1. NeuralSAT\u2019s run producing unsat." | |
| }, | |
| "2": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.10.1.1\" style=\"font-size:113%;\">Tab. 2</span>. </span><span class=\"ltx_text\" id=\"S5.T2.11.2\" style=\"font-size:113%;\">Benchmark instances. U: <span class=\"ltx_text ltx_font_typewriter\" id=\"S5.T2.11.2.1\">unsat</span>, S: <span class=\"ltx_text ltx_font_typewriter\" id=\"S5.T2.11.2.2\">sat</span>, ?: <span class=\"ltx_text ltx_font_typewriter\" id=\"S5.T2.11.2.3\">unknown</span>.</span></figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T2.12\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.12.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.12.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.12.1.1.1.1\" style=\"font-size:80%;\">Benchmarks</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S5.T2.12.1.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.12.1.1.2.1\" style=\"font-size:80%;\">Networks</span><span class=\"ltx_text\" id=\"S5.T2.12.1.1.2.2\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S5.T2.12.1.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.12.1.1.3.1\" style=\"font-size:80%;\">Per Network</span><span class=\"ltx_text\" id=\"S5.T2.12.1.1.3.2\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S5.T2.12.1.1.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.12.1.1.4.1\" style=\"font-size:80%;\">Tasks</span><span class=\"ltx_text\" id=\"S5.T2.12.1.1.4.2\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.12.2.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.12.2.2.1\"><span class=\"ltx_text\" id=\"S5.T2.12.2.2.1.1\" style=\"font-size:80%;\">Type</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.12.2.2.2\"><span class=\"ltx_text\" id=\"S5.T2.12.2.2.2.1\" style=\"font-size:80%;\">Networks</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.12.2.2.3\"><span class=\"ltx_text\" id=\"S5.T2.12.2.2.3.1\" style=\"font-size:80%;\">Neurons</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.12.2.2.4\"><span class=\"ltx_text\" id=\"S5.T2.12.2.2.4.1\" style=\"font-size:80%;\">Parameters</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.12.2.2.5\"><span class=\"ltx_text\" id=\"S5.T2.12.2.2.5.1\" style=\"font-size:80%;\">Properties</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.12.2.2.6\"><span class=\"ltx_text\" id=\"S5.T2.12.2.2.6.1\" style=\"font-size:80%;\">Instances (U/S/?)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.12.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.12.3.3.1\"><span class=\"ltx_text\" id=\"S5.T2.12.3.3.1.1\" style=\"font-size:80%;\">ACAS Xu</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.3.3.2\"><span class=\"ltx_text\" id=\"S5.T2.12.3.3.2.1\" style=\"font-size:80%;\">FNN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.3.3.3\"><span class=\"ltx_text\" id=\"S5.T2.12.3.3.3.1\" style=\"font-size:80%;\">45</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.3.3.4\"><span class=\"ltx_text\" id=\"S5.T2.12.3.3.4.1\" style=\"font-size:80%;\">300</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.12.3.3.5\"><span class=\"ltx_text\" id=\"S5.T2.12.3.3.5.1\" style=\"font-size:80%;\">13305</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.3.3.6\"><span class=\"ltx_text\" id=\"S5.T2.12.3.3.6.1\" style=\"font-size:80%;\">10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.3.3.7\"><span class=\"ltx_text\" id=\"S5.T2.12.3.3.7.1\" style=\"font-size:80%;\">139/47/0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.12.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.12.4.4.1\"><span class=\"ltx_text\" id=\"S5.T2.12.4.4.1.1\" style=\"font-size:80%;\">MNISTFC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.4.4.2\"><span class=\"ltx_text\" id=\"S5.T2.12.4.4.2.1\" style=\"font-size:80%;\">FNN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.4.4.3\"><span class=\"ltx_text\" id=\"S5.T2.12.4.4.3.1\" style=\"font-size:80%;\">3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.4.4.4\"><span class=\"ltx_text\" id=\"S5.T2.12.4.4.4.1\" style=\"font-size:80%;\">0.5\u20131.5K</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.12.4.4.5\"><span class=\"ltx_text\" id=\"S5.T2.12.4.4.5.1\" style=\"font-size:80%;\">269\u2013532K</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.4.4.6\"><span class=\"ltx_text\" id=\"S5.T2.12.4.4.6.1\" style=\"font-size:80%;\">30</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.4.4.7\"><span class=\"ltx_text\" id=\"S5.T2.12.4.4.7.1\" style=\"font-size:80%;\">56/23/11</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.12.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.12.5.5.1\"><span class=\"ltx_text\" id=\"S5.T2.12.5.5.1.1\" style=\"font-size:80%;\">CIFAR2020</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.5.5.2\"><span class=\"ltx_text\" id=\"S5.T2.12.5.5.2.1\" style=\"font-size:80%;\">FNN+CNN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.5.5.3\"><span class=\"ltx_text\" id=\"S5.T2.12.5.5.3.1\" style=\"font-size:80%;\">3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.5.5.4\"><span class=\"ltx_text\" id=\"S5.T2.12.5.5.4.1\" style=\"font-size:80%;\">17\u201362K</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.12.5.5.5\"><span class=\"ltx_text\" id=\"S5.T2.12.5.5.5.1\" style=\"font-size:80%;\">2.1\u20132.5M</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.5.5.6\"><span class=\"ltx_text\" id=\"S5.T2.12.5.5.6.1\" style=\"font-size:80%;\">203</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.5.5.7\"><span class=\"ltx_text\" id=\"S5.T2.12.5.5.7.1\" style=\"font-size:80%;\">149/43/11</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.12.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.12.6.6.1\"><span class=\"ltx_text\" id=\"S5.T2.12.6.6.1.1\" style=\"font-size:80%;\">RESNET_A/B</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.6.6.2\"><span class=\"ltx_text\" id=\"S5.T2.12.6.6.2.1\" style=\"font-size:80%;\">CNN+ResNet</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.6.6.3\"><span class=\"ltx_text\" id=\"S5.T2.12.6.6.3.1\" 
style=\"font-size:80%;\">2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.6.6.4\"><span class=\"ltx_text\" id=\"S5.T2.12.6.6.4.1\" style=\"font-size:80%;\">11K</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.12.6.6.5\"><span class=\"ltx_text\" id=\"S5.T2.12.6.6.5.1\" style=\"font-size:80%;\">354K</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.6.6.6\"><span class=\"ltx_text\" id=\"S5.T2.12.6.6.6.1\" style=\"font-size:80%;\">144</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.6.6.7\"><span class=\"ltx_text\" id=\"S5.T2.12.6.6.7.1\" style=\"font-size:80%;\">49/23/72</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.12.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.12.7.7.1\"><span class=\"ltx_text\" id=\"S5.T2.12.7.7.1.1\" style=\"font-size:80%;\">CIFAR_GDVB</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.7.7.2\"><span class=\"ltx_text\" id=\"S5.T2.12.7.7.2.1\" style=\"font-size:80%;\">FNN+CNN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.7.7.3\"><span class=\"ltx_text\" id=\"S5.T2.12.7.7.3.1\" style=\"font-size:80%;\">42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.7.7.4\"><span class=\"ltx_text\" id=\"S5.T2.12.7.7.4.1\" style=\"font-size:80%;\">9\u201349K</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.12.7.7.5\"><span class=\"ltx_text\" id=\"S5.T2.12.7.7.5.1\" style=\"font-size:80%;\">0.08\u201358M</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.7.7.6\"><span class=\"ltx_text\" id=\"S5.T2.12.7.7.6.1\" style=\"font-size:80%;\">39</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.7.7.7\"><span class=\"ltx_text\" id=\"S5.T2.12.7.7.7.1\" style=\"font-size:80%;\">60/0/0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.12.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T2.12.8.8.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.12.8.8.1.1\" style=\"font-size:80%;\">Total</span></td>\n<td class=\"ltx_td ltx_border_bb ltx_border_t\" id=\"S5.T2.12.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.12.8.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.12.8.8.3.1\" style=\"font-size:80%;\">95</span></td>\n<td class=\"ltx_td ltx_border_bb ltx_border_t\" id=\"S5.T2.12.8.8.4\"></td>\n<td class=\"ltx_td ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T2.12.8.8.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.12.8.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.12.8.8.6.1\" style=\"font-size:80%;\">426</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.12.8.8.7\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.12.8.8.7.1\" style=\"font-size:80%;\">453/136</span><span class=\"ltx_text\" id=\"S5.T2.12.8.8.7.2\" style=\"font-size:80%;\">/94</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>", | |
| "capture": "Tab. 2. Benchmark instances. U: unsat, S: sat, ?: unknown." | |
| }, | |
| "3": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"S6.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S6.T3.9.1.1\" style=\"font-size:90%;\">Tab. 3</span>. </span><span class=\"ltx_text\" id=\"S6.T3.10.2\" style=\"font-size:90%;\">A <span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.10.2.1\">Verifier</span>\u2019s rank (<span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.10.2.2\">#</span>) is based on its VNN-COMP score (<span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.10.2.3\">S</span>) on a benchmark. For each benchmark, the number of problems verified (<span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.10.2.4\">V</span>) and falsified (<span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.10.2.5\">F</span>) are shown.</span></figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S6.T3.2\" style=\"width:433.6pt;height:175.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(38.6pt,-15.6pt) scale(1.21633000542085,1.21633000542085) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S6.T3.2.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T3.2.2.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S6.T3.2.2.3.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.3.1.1.1\">Verifier</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S6.T3.2.2.3.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.3.1.2.1\">ACAS Xu</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S6.T3.2.2.3.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.3.1.3.1\">MNISTFC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S6.T3.2.2.3.1.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.3.1.4.1\">CIFAR2020</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S6.T3.2.2.3.1.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.3.1.5.1\">RESNET_A/B</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S6.T3.2.2.3.1.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.3.1.6.1\">CIFAR_GDVB</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S6.T3.2.2.3.1.7\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.3.1.7.1\">Overall</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.2.2.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.1.1\">#</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.2.1\">S</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.3.1\">V</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.2.2.4.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.4.1\">F</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.5.1\">#</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.6.1\">S</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.7\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S6.T3.2.2.4.2.7.1\">V</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.2.2.4.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.8.1\">F</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.9.1\">#</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.10.1\">S</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.11.1\">V</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.2.2.4.2.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.12.1\">F</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.13\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.13.1\">#</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.14.1\">S</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.15\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.15.1\">V</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.2.2.4.2.16\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.16.1\">F</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.17\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.17.1\">#</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.18\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.18.1\">S</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.19\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.19.1\">V</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.2.2.4.2.20\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.20.1\">F</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.21\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.21.1\">#</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.22\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.22.1\">S</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.23\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.23.1\">V</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.2.2.4.2.24\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.4.2.24.1\">F</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.2.2\">\n<span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T3.2.2.2.2.1\">--CROWN</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.3\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.4\">1436</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.5.1\">139</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.2.6\">46</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.7.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.8.1\">582</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.9\"><span 
class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.9.1\">56</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.2.10\">22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.11.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.12.1\">1522</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.13\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.13.1\">148</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.2.14\">42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.15\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.15.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.16\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.16.1\">513</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.17\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.17.1\">49</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.2.18\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.18.1\">23</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.19\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.19.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.20\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.20.1\">600</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.21\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.21.1\">60</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.2.22\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.23\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.23.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.24\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.24.1\">4653</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.25\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.2.25.1\">452</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.2.26\">133</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.2.2.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.5.3.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T3.2.2.5.3.1.1\">NeuralSAT</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.2\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.3\">1417</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.4\">137</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.5.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.5.3.5.1\">47</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.6\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.7\">363</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.8\">34</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.5.3.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.5.3.9.1\">23</span></td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S6.T3.2.2.5.3.10\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.11\">1483</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.12\">144</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.5.3.13\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.5.3.13.1\">43</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.14\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.15\">403</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.16\">38</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.5.3.17\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.5.3.17.1\">23</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.18\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.19\">580</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.20\">58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.5.3.21\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.22\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.23\">4246</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.24\">411</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.5.3.25\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.5.3.25.1\">136</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.2.2.6.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.6.4.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T3.2.2.6.4.1.1\">MN-BaB</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.2\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.3\">1097</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.4\">105</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.6.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.6.4.5.1\">47</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.6\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.7\">370</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.8\">36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.6.4.9\">10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.10\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.11\">1486</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.12\">145</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.6.4.13\">36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.14\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.15\">363</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.16\">34</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.6.4.17\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.6.4.17.1\">23</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.18\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.19\">470</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S6.T3.2.2.6.4.20\">47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.6.4.21\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.22\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.23\">3786</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.24\">367</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.6.4.25\">116</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.2.2.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.7.5.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T3.2.2.7.5.1.1\">nnenum</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.7.5.2.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.7.5.3.1\">1437</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.7.5.4.1\">139</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.7.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.2.2.7.5.5.1\">47</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.6\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.7\">403</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.8\">39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.7.5.9\">13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.10\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.11\">518</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.12\">50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.7.5.13\">18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.14\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.15\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.16\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.7.5.17\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.18\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.19\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.20\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.7.5.21\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.22\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.23\">2358</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.24\">228</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.7.5.25\">78</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.2.2.8.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.8.6.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T3.2.2.8.6.1.1\">Marabou\u201921</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.2\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.3\">1426</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S6.T3.2.2.8.6.4\">138</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.8.6.5\">46</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.6\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.7\">370</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.8\">35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.8.6.9\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.10\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.11\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.12\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.8.6.13\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.14\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.15\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.16\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.8.6.17\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.18\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.19\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.20\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.8.6.21\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.22\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.23\">1796</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.24\">173</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.2.2.8.6.25\">66</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.2.2.9.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.9.7.1\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S6.T3.2.2.9.7.1.1\">Marabou\u201922</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.2\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.3\">1015</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.4\">97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.9.7.5\">45</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.6\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.7\">308</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.8\">29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.9.7.9\">18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.10\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.11\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.12\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.9.7.13\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.14\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.15\">-</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.16\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.9.7.17\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.18\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.19\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.20\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.9.7.21\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.22\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.23\">1323</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.24\">126</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T3.2.2.9.7.25\">63</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>", | |
| "capture": "Tab. 3. A Verifier\u2019s rank (#) is based on its VNN-COMP score (S) on a benchmark. For each benchmark, the number of problems verified (V) and falsified (F) are shown." | |
| }, | |
| "4": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"A1.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A1.T4.26.1.1\" style=\"font-size:90%;\">Tab. 4</span>. </span><span class=\"ltx_text\" id=\"A1.T4.27.2\" style=\"font-size:90%;\">Transition rules for <span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.27.2.1\">NeuralSAT</span> DPLL(T) solver.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A1.T4.23\" style=\"width:433.6pt;height:199.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(18.3pt,-8.4pt) scale(1.09193769476917,1.09193769476917) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A1.T4.23.23\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T4.23.23.24.1\">\n<td class=\"ltx_td ltx_border_r\" id=\"A1.T4.23.23.24.1.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.23.23.24.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.23.23.24.1.2.1\">Rule</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.23.23.24.1.3\">From</td>\n<td class=\"ltx_td\" id=\"A1.T4.23.23.24.1.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.23.23.24.1.5\">To</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T4.23.23.24.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.23.23.24.1.6.1\">Condition</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T4.4.4.4.5\" rowspan=\"3\">\n<span class=\"ltx_text\" id=\"A1.T4.4.4.4.5.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"A1.T4.4.4.4.5.1.1\" style=\"width:6.9pt;height:70.3pt;vertical-align:-31.7pt;\"><span class=\"ltx_transformed_inner\" style=\"width:70.3pt;transform:translate(-31.68pt,0pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"A1.T4.4.4.4.5.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.4.4.4.5.1.1.1.1\">Standard DPLL</span></span>\n</span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T4.4.4.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.4.4.4.6.1\">Decide</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T4.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T4.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T4.3.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T4.4.4.4.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.4.4.4.4.1\">if</span>\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.8.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.8.8.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.8.8.8.5.1\">BCP</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.6.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.7.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T4.8.8.8.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.8.8.8.4.1\">if</span>\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.11.11.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.11.11.11.4\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.11.11.11.4.1\">Fail</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.9.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.10.10.10.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.11.11.11.5\"><span 
class=\"ltx_text ltx_font_typewriter\" id=\"A1.T4.11.11.11.5.1\">Fail</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T4.11.11.11.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.11.11.11.3.1\">if</span>\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.15.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"A1.T4.15.15.15.5\" rowspan=\"4\"><span class=\"ltx_text\" id=\"A1.T4.15.15.15.5.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"A1.T4.15.15.15.5.1.1\" style=\"width:8.9pt;height:66.6pt;vertical-align:-30.8pt;\"><span class=\"ltx_transformed_inner\" style=\"width:66.7pt;transform:translate(-28.9pt,2.92pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"A1.T4.15.15.15.5.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.15.15.15.5.1.1.1.1\">Theory Solving</span></span>\n</span></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T4.15.15.15.6\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.15.15.15.6.1\">T-Backjump</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T4.12.12.12.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T4.13.13.13.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T4.14.14.14.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T4.15.15.15.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.15.15.15.4.1\">if</span>\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.19.19.19\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.19.19.19.5\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.19.19.19.5.1\">T-Learn</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.16.16.16.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.17.17.17.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T4.18.18.18.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T4.19.19.19.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.19.19.19.4.1\">if</span>\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.23.23.23\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T4.23.23.23.5\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.23.23.23.5.1\">TheoryPropagate</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T4.20.20.20.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T4.21.21.21.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T4.22.22.22.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A1.T4.23.23.23.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.23.23.23.4.1\">if</span>\u2003\n</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>", | |
| "capture": "Tab. 4. Transition rules for NeuralSAT DPLL(T) solver." | |
| } | |
| }, | |
| "image_paths": { | |
| "1": { | |
| "figure_path": "2307.10266v3_figure_1.png", | |
| "caption": "Fig. 1. Original DPLL Algorithm.", | |
| "url": "http://arxiv.org/html/2307.10266v3/x1.png" | |
| }, | |
| "2": { | |
| "figure_path": "2307.10266v3_figure_2.png", | |
| "caption": "Fig. 2. An FNN with ReLU.", | |
| "url": "http://arxiv.org/html/2307.10266v3/x2.png" | |
| }, | |
| "3": { | |
| "figure_path": "2307.10266v3_figure_3.png", | |
| "caption": "Fig. 3. NeuralSAT.", | |
| "url": "http://arxiv.org/html/2307.10266v3/x3.png" | |
| }, | |
| "4(a)": { | |
| "figure_path": "2307.10266v3_figure_4(a).png", | |
| "caption": "(a) NeuralSAT", | |
| "url": "http://arxiv.org/html/2307.10266v3/extracted/5358448/figure/cdcl-tree-clause.png" | |
| }, | |
| "4(b)": { | |
| "figure_path": "2307.10266v3_figure_4(b).png", | |
| "caption": "(a) NeuralSAT", | |
| "url": "http://arxiv.org/html/2307.10266v3/x4.png" | |
| }, | |
| "5": { | |
| "figure_path": "2307.10266v3_figure_5.png", | |
| "caption": "(a)", | |
| "url": "http://arxiv.org/html/2307.10266v3/x5.png" | |
| }, | |
| "6(a)": { | |
| "figure_path": "2307.10266v3_figure_6(a).png", | |
| "caption": "(a) interval", | |
| "url": "http://arxiv.org/html/2307.10266v3/x6.png" | |
| }, | |
| "6(b)": { | |
| "figure_path": "2307.10266v3_figure_6(b).png", | |
| "caption": "(a) interval", | |
| "url": "http://arxiv.org/html/2307.10266v3/x7.png" | |
| }, | |
| "6(c)": { | |
| "figure_path": "2307.10266v3_figure_6(c).png", | |
| "caption": "(a) interval", | |
| "url": "http://arxiv.org/html/2307.10266v3/x8.png" | |
| }, | |
| "6(d)": { | |
| "figure_path": "2307.10266v3_figure_6(d).png", | |
| "caption": "(a) interval", | |
| "url": "http://arxiv.org/html/2307.10266v3/x9.png" | |
| }, | |
| "7(a)": { | |
| "figure_path": "2307.10266v3_figure_7(a).png", | |
| "caption": "(b)\nFig. 7. Performance of NeuralSAT with \u201cFull\u201d CDCL settings and with \u201cNo Restart\u201d on CIFAR_GDVB benchmark using three different metrics: (a) Problems solved and solve time (s); (b) Sorted solved problems; and (c) Comparing counts of iterations and decisions.", | |
| "url": "http://arxiv.org/html/2307.10266v3/x10.png" | |
| }, | |
| "7(b)": { | |
| "figure_path": "2307.10266v3_figure_7(b).png", | |
| "caption": "(c)\nFig. 7. Performance of NeuralSAT with \u201cFull\u201d CDCL settings and with \u201cNo Restart\u201d on CIFAR_GDVB benchmark using three different metrics: (a) Problems solved and solve time (s); (b) Sorted solved problems; and (c) Comparing counts of iterations and decisions.", | |
| "url": "http://arxiv.org/html/2307.10266v3/x11.png" | |
| } | |
| }, | |
| "validation": true, | |
| "references": [ | |
| { | |
| "1": { | |
| "title": "Boosting the Performance of CDCL-Based SAT Solvers by Exploiting Backbones and Backdoors.", | |
| "author": "Tasniem Al-Yahya, Mohamed El Bachir Abdelkrim Menai, and Hassan Mathkour. 2022.", | |
| "venue": "Algorithms 15, 9 (2022), 302.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "2": { | |
| "title": "Efficient generation of unsatisfiability proofs and cores in SAT. In International Conference on Logic for Programming Artificial Intelligence and Reasoning. Springer, 16\u201330.", | |
| "author": "Roberto As\u00edn, Robert Nieuwenhuis, Albert Oliveras, and Enric Rodr\u00edguez-Carbonell. 2008.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "3": { | |
| "title": "ONNX Open neural network exchange.", | |
| "author": "Junjie Bai, Fang Lu, and Ke Zhang. 2023.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "4": { | |
| "title": "nnenum: Verification of relu neural networks with optimized abstraction refinement. In NASA Formal Methods Symposium. Springer, 19\u201336.", | |
| "author": "Stanley Bak. 2021.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "5": { | |
| "title": "The Second International verification of Neural Networks Competition (VNN-COMP 2021): Summary and Results.", | |
| "author": "Stanley Bak, Changliu Liu, and Taylor Johnson. 2021.", | |
| "venue": "arXiv preprint arXiv:2109.00498 (2021).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "6": { | |
| "title": "Improved geometric path enumeration for verifying relu neural networks. In International Conference on Computer Aided Verification. Springer, 66\u201396.", | |
| "author": "Stanley Bak, Hoang-Dung Tran, Kerianne Hobbs, and Taylor T Johnson. 2020.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "7": { | |
| "title": "Cvc4. In International Conference on Computer Aided Verification. Springer, 171\u2013177.", | |
| "author": "Clark Barrett, Christopher L Conway, Morgan Deters, Liana Hadarean, Dejan Jovanovi\u0107, Tim King, Andrew Reynolds, and Cesare Tinelli. 2011.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "8": { | |
| "title": "Splitting on demand in SAT modulo theories. In Logic for Programming, Artificial Intelligence, and Reasoning: 13th International Conference, LPAR 2006, Phnom Penh, Cambodia, November 13-17, 2006. Proceedings 13. Springer, 512\u2013526.", | |
| "author": "Clark Barrett, Robert Nieuwenhuis, Albert Oliveras, and Cesare Tinelli. 2006.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "9": { | |
| "title": "\u201d Decision Procedures: An Algorithmic Point of View,\u201d by Daniel Kroening and Ofer Strichman, Springer-Verlag, 2008.", | |
| "author": "Clark W Barrett. 2013.", | |
| "venue": "J. Autom. Reason. 51, 4 (2013), 453\u2013456.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "10": { | |
| "title": "Using CSP look-back techniques to solve real-world SAT instances. In Aaai/iaai. Providence, RI, 203\u2013208.", | |
| "author": "Roberto J Bayardo Jr and Robert Schrag. 1997.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "11": { | |
| "title": "Handbook of satisfiability. Vol. 185.", | |
| "author": "Armin Biere, Marijn Heule, and Hans van Maaren. 2009.", | |
| "venue": "IOS press.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "12": { | |
| "title": "First three years of the international verification of neural networks competition (VNN-COMP).", | |
| "author": "Christopher Brix, Mark Niklas M\u00fcller, Stanley Bak, Taylor T Johnson, and Changliu Liu. 2023.", | |
| "venue": "International Journal on Software Tools for Technology Transfer (2023), 1\u201311.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "13": { | |
| "title": "Branch and bound for piecewise linear neural network verification.", | |
| "author": "Rudy Bunel, P Mudigonda, Ilker Turkaslan, P Torr, Jingyue Lu, and Pushmeet Kohli. 2020.", | |
| "venue": "Journal of Machine Learning Research 21, 2020 (2020).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "14": { | |
| "title": "A unified view of piecewise linear neural network verification.", | |
| "author": "Rudy R Bunel, Ilker Turkaslan, Philip Torr, Pushmeet Kohli, and Pawan K Mudigonda. 2018.", | |
| "venue": "Advances in Neural Information Processing Systems 31 (2018).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "15": { | |
| "title": "Better decision heuristics in CDCL through local search and target phases.", | |
| "author": "Shaowei Cai, Xindi Zhang, Mathias Fleury, and Armin Biere. 2022.", | |
| "venue": "Journal of Artificial Intelligence Research 74 (2022), 1515\u20131563.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "16": { | |
| "title": "Linearity grafting: Relaxed neuron pruning helps certifiable robustness. In International Conference on Machine Learning. PMLR, 3760\u20133772.", | |
| "author": "Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, and Zhangyang Wang. 2022.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "17": { | |
| "title": "Locating minimal infeasible constraint sets in linear programs.", | |
| "author": "John W Chinneck and Erik W Dravnieks. 1991.", | |
| "venue": "ORSA Journal on Computing 3, 2 (1991), 157\u2013168.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "18": { | |
| "title": "The complexity of theorem-proving procedures. In Proceedings of the third annual ACM symposium on Theory of computing. 151\u2013158.", | |
| "author": "Stephen A Cook. 1971.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "19": { | |
| "title": "Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Proceedings of the 4th ACM SIGACT-SIGPLAN symposium on Principles of programming languages. 238\u2013252.", | |
| "author": "Patrick Cousot and Radhia Cousot. 1977.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "20": { | |
| "title": "Fast Falsification of Neural Networks using Property Directed Testing.", | |
| "author": "Moumita Das, Rajarshi Ray, Swarup Kumar Mohalik, and Ansuman Banerjee. 2021.", | |
| "venue": "arXiv preprint arXiv:2104.12418 (2021).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "21": { | |
| "title": "A machine program for theorem-proving.", | |
| "author": "Martin Davis, George Logemann, and Donald Loveland. 1962.", | |
| "venue": "Commun. ACM 5, 7 (1962), 394\u2013397.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "22": { | |
| "title": "Improved branch and bound for neural network verification via lagrangian decomposition.", | |
| "author": "Alessandro De Palma, Rudy Bunel, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip HS Torr, and M Pawan Kumar. 2021.", | |
| "venue": "arXiv preprint arXiv:2104.06718 (2021).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "23": { | |
| "title": "Formal verification of piece-wise linear feed-forward neural networks. In International Symposium on Automated Technology for Verification and Analysis. Springer, 269\u2013286.", | |
| "author": "Ruediger Ehlers. 2017.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "24": { | |
| "title": "Robust physical-world attacks on deep learning visual classification. In Proceedings of the IEEE conference on computer vision and pattern recognition. 1625\u20131634.", | |
| "author": "Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. 2018.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "25": { | |
| "title": "Complete verification via multi-neuron relaxation guided branch-and-bound.", | |
| "author": "Claudio Ferrari, Mark Niklas Muller, Nikola Jovanovic, and Martin Vechev. 2022.", | |
| "venue": "arXiv preprint arXiv:2205.00263 (2022).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "26": { | |
| "title": "Ai2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE symposium on security and privacy (SP). IEEE, 3\u201318.", | |
| "author": "Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. 2018.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "27": { | |
| "title": "Boosting combinatorial search through randomization.", | |
| "author": "Carla P Gomes, Bart Selman, Henry Kautz, et al. 1998.", | |
| "venue": "AAAI/IAAI 98 (1998), 431\u2013437.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "28": { | |
| "title": "Deep Learning.", | |
| "author": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016.", | |
| "venue": "MIT Press.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "29": { | |
| "title": "Explaining and harnessing adversarial examples.", | |
| "author": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014.", | |
| "venue": "arXiv preprint arXiv:1412.6572 (2014).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "30": { | |
| "title": "Static analysis of relu neural networks with tropical polyhedra. In International Static Analysis Symposium. Springer, 166\u2013190.", | |
| "author": "Eric Goubault, S\u00e9bastien Palumby, Sylvie Putot, Louis Rustenholz, and Sriram Sankaranarayanan. 2021.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "31": { | |
| "title": "A heuristic restart strategy to speed up the solving of satisfiability problem. In 2012 Fifth International Symposium on Computational Intelligence and Design, Vol. 2. IEEE, 423\u2013426.", | |
| "author": "Ying Guo, Bin Zhang, and Changsheng Zhang. 2012.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "32": { | |
| "title": "Gurobi Optimizer Reference Manual.", | |
| "author": "Gurobi Optimization, LLC. 2022.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "33": { | |
| "title": "Max Plus at work: modeling and analysis of synchronized systems: a course on Max-Plus algebra and its applications. Vol. 13.", | |
| "author": "Bernd Heidergott, Geert Jan Olsder, Jacob Van Der Woude, and JW van der Woude. 2006.", | |
| "venue": "Princeton University Press.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "34": { | |
| "title": "Efficient neural network verification via adaptive refinement and adversarial search.", | |
| "author": "Patrick Henriksen and Alessio Lomuscio. 2020.", | |
| "venue": "In ECAI 2020. IOS Press, 2513\u20132520.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "35": { | |
| "title": "A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability.", | |
| "author": "Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, and Xinping Yi. 2020.", | |
| "venue": "Computer Science Review 37 (2020), 100270.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "36": { | |
| "title": "Safety verification of deep neural networks. In International conference on computer aided verification. Springer, 3\u201329.", | |
| "author": "Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. 2017.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "37": { | |
| "title": "Neural Network Verification with Proof Production.", | |
| "author": "Omri Isac, Clark Barrett, Min Zhang, and Guy Katz. 2022.", | |
| "venue": "Proc. 22nd Int. Conf. on Formal Methods in Computer-Aided Design (FMCAD) (2022).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "38": { | |
| "title": "Alcoa: the alloy constraint analyzer. In Proceedings of the 22nd international conference on Software engineering. 730\u2013733.", | |
| "author": "Daniel Jackson, Ian Schechter, and Hya Shlyahter. 2000.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "39": { | |
| "title": "Reluplex: An efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification. Springer, 97\u2013117.", | |
| "author": "Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. 2017a.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "40": { | |
| "title": "Towards proving the adversarial robustness of deep neural networks.", | |
| "author": "Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. 2017b.", | |
| "venue": "Proc. 1st Workshop on Formal Verification of Autonomous Vehicles (FVAV), pp. 19-26 (2017).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "41": { | |
| "title": "Reluplex: a calculus for reasoning about deep neural networks.", | |
| "author": "Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. 2022.", | |
| "venue": "Formal Methods in System Design 60, 1 (2022), 87\u2013116.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "42": { | |
| "title": "The marabou framework for verification and analysis of deep neural networks. In International Conference on Computer Aided Verification. Springer, 443\u2013452.", | |
| "author": "Guy Katz, Derek A Huang, Duligur Ibeling, Kyle Julian, Christopher Lazarus, Rachel Lim, Parth Shah, Shantanu Thakoor, Haoze Wu, Aleksandar Zelji\u0107, et al. 2019.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "43": { | |
| "title": "Next-generation airborne collision avoidance system.", | |
| "author": "Mykel J Kochenderfer, Jessica E Holland, and James P Chryssanthacopoulos. 2012.", | |
| "venue": "Technical Report. Massachusetts Institute of Technology-Lincoln Laboratory Lexington United States.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "44": { | |
| "title": "Decision procedures.", | |
| "author": "Daniel Kroening and Ofer Strichman. 2016.", | |
| "venue": "Springer.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "45": { | |
| "title": "PaInleSS: a framework for parallel SAT solving. In Theory and Applications of Satisfiability Testing\u2013SAT 2017: 20th International Conference, Melbourne, VIC, Australia, August 28\u2013September 1, 2017, Proceedings 20. Springer, 233\u2013250.", | |
| "author": "Ludovic Le Frioux, Souheib Baarir, Julien Sopena, and Fabrice Kordon. 2017.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "46": { | |
| "title": "Modular and efficient divide-and-conquer sat solver on top of the painless framework. In Tools and Algorithms for the Construction and Analysis of Systems: 25th International Conference, TACAS 2019, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019, Prague, Czech Republic, April 6\u201311, 2019, Proceedings, Part I 25. Springer, 135\u2013151.", | |
| "author": "Ludovic Le Frioux, Souheib Baarir, Julien Sopena, and Fabrice Kordon. 2019.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "47": { | |
| "title": "Machine learning-based restart policy for CDCL SAT solvers. In Theory and Applications of Satisfiability Testing\u2013SAT 2018: 21st International Conference, SAT 2018, Held as Part of the Federated Logic Conference, FloC 2018, Oxford, UK, July 9\u201312, 2018, Proceedings 21. Springer, 94\u2013110.", | |
| "author": "Jia Hui Liang, Chanseok Oh, Minu Mathew, Ciza Thomas, Chunxiao Li, and Vijay Ganesh. 2018.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "48": { | |
| "title": "Algorithms for verifying deep neural networks.", | |
| "author": "Changliu Liu, Tomer Arnon, Christopher Lazarus, Christopher Strong, Clark Barrett, Mykel J Kochenderfer, et al. 2021.", | |
| "venue": "Foundations and Trends\u00ae in Optimization 4, 3-4 (2021), 244\u2013404.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "49": { | |
| "title": "Towards deep learning models resistant to adversarial attacks.", | |
| "author": "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017.", | |
| "venue": "arXiv preprint arXiv:1706.06083 (2017).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "50": { | |
| "title": "GRASP-A new search algorithm for satisfiability. In Proceedings of International Conference on Computer Aided Design. 220\u2013227.", | |
| "author": "J.P. Marques Silva and K.A. Sakallah. 1996.", | |
| "venue": "https://doi.org/10.1109/ICCAD.1996.569607", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "51": { | |
| "title": "GRASP: A search algorithm for propositional satisfiability.", | |
| "author": "Joao P Marques-Silva and Karem A Sakallah. 1999.", | |
| "venue": "IEEE Trans. Comput. 48, 5 (1999), 506\u2013521.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "52": { | |
| "title": "Chaff: Engineering an efficient SAT solver. In Proceedings of the 38th annual Design Automation Conference. 530\u2013535.", | |
| "author": "Matthew W Moskewicz, Conor F Madigan, Ying Zhao, Lintao Zhang, and Sharad Malik. 2001.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "53": { | |
| "title": "Z3: An efficient SMT solver. In International conference on Tools and Algorithms for the Construction and Analysis of Systems. Springer, 337\u2013340.", | |
| "author": "Leonardo de Moura and Nikolaj Bj\u00f8rner. 2008.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "54": { | |
| "title": "Scaling polyhedral neural network verification on gpus.", | |
| "author": "Christoph M\u00fcller, Fran\u00e7ois Serre, Gagandeep Singh, Markus P\u00fcschel, and Martin Vechev. 2021.", | |
| "venue": "Proceedings of Machine Learning and Systems 3 (2021), 733\u2013746.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "55": { | |
| "title": "The Third International Verification of Neural Networks Competition (VNN-COMP 2022): Summary and Results.", | |
| "author": "Mark Niklas M\u00fcller, Christopher Brix, Stanley Bak, Changliu Liu, and Taylor T Johnson. 2022.", | |
| "venue": "arXiv preprint arXiv:2212.10376 (2022).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "56": { | |
| "title": "Solving SAT and SAT modulo theories: From an abstract Davis\u2013Putnam\u2013Logemann\u2013Loveland procedure to DPLL (T).", | |
| "author": "Robert Nieuwenhuis, Albert Oliveras, and Cesare Tinelli. 2006.", | |
| "venue": "Journal of the ACM (JACM) 53, 6 (2006), 937\u2013977.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "57": { | |
| "title": "OVAL - Branch-and-Bound-based Neural Network Verification.", | |
| "author": "OVAL-group. 2023.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "58": { | |
| "title": "Pytorch: An imperative style, high-performance deep learning library.", | |
| "author": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019.", | |
| "venue": "Advances in neural information processing systems 32 (2019).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "59": { | |
| "title": "On the power of clause-learning SAT solvers with restarts. In International Conference on Principles and Practice of Constraint Programming. Springer, 654\u2013668.", | |
| "author": "Knot Pipatsrisawat and Adnan Darwiche. 2009.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "60": { | |
| "title": "Adversarial Attacks and Defenses in Deep Learning.", | |
| "author": "Kui Ren, Tianhang Zheng, Zhan Qin, and Xue Liu. 2020.", | |
| "venue": "Engineering 6, 3 (mar 2020), 346\u2013360.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "61": { | |
| "title": "DNNV: A framework for deep neural network verification. In International Conference on Computer Aided Verification. Springer, 137\u2013150.", | |
| "author": "David Shriver, Sebastian Elbaum, and Matthew B Dwyer. 2021.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "62": { | |
| "title": "Beyond the single neuron convex barrier for neural network certification.", | |
| "author": "Gagandeep Singh, Rupanshu Ganvir, Markus P\u00fcschel, and Martin Vechev. 2019a.", | |
| "venue": "Advances in Neural Information Processing Systems 32 (2019).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "63": { | |
| "title": "Fast and effective robustness certification.", | |
| "author": "Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus P\u00fcschel, and Martin Vechev. 2018a.", | |
| "venue": "Advances in neural information processing systems 31 (2018).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "64": { | |
| "title": "Boosting robustness certification of neural networks. In International Conference on Learning Representations.", | |
| "author": "Gagandeep Singh, Timon Gehr, Markus P\u00fcschel, and Martin Vechev. 2018b.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "65": { | |
| "title": "An abstract domain for certifying neural networks.", | |
| "author": "Gagandeep Singh, Timon Gehr, Markus P\u00fcschel, and Martin Vechev. 2019b.", | |
| "venue": "Proceedings of the ACM on Programming Languages 3, POPL (2019), 1\u201330.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "66": { | |
| "title": "Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014.", | |
| "author": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "67": { | |
| "title": "The international benchmarks standard for the Verification of Neural Networks.", | |
| "author": "Armando Tacchella, Luca Pulina, Dario Guidotti, and Stefano Demarchi. 2023.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "68": { | |
| "title": "Verification of piecewise deep neural networks: a star set approach with zonotope pre-filter.", | |
| "author": "Hoang-Dung Tran, Neelanjana Pal, Diego Manzanas Lopez, Patrick Musau, Xiaodong Yang, Luan Viet Nguyen, Weiming Xiang, Stanley Bak, and Taylor T Johnson. 2021a.", | |
| "venue": "Formal Aspects of Computing 33 (2021), 519\u2013545.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "69": { | |
| "title": "Robustness verification of semantic segmentation neural networks using relaxed reachability. In International Conference on Computer Aided Verification. Springer, 263\u2013286.", | |
| "author": "Hoang-Dung Tran, Neelanjana Pal, Patrick Musau, Diego Manzanas Lopez, Nathaniel Hamilton, Xiaodong Yang, Stanley Bak, and Taylor T Johnson. 2021b.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "70": { | |
| "title": "A review of formal methods applied to machine learning.", | |
| "author": "Caterina Urban and Antoine Min\u00e9. 2021.", | |
| "venue": "arXiv preprint arXiv:2104.02466 (2021).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "71": { | |
| "title": "Efficient formal safety analysis of neural networks.", | |
| "author": "Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. 2018a.", | |
| "venue": "Advances in Neural Information Processing Systems 31 (2018).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "72": { | |
| "title": "Formal security analysis of neural networks using symbolic intervals. In 27th USENIX Security Symposium (USENIX Security 18). 1599\u20131614.", | |
| "author": "Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. 2018b.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "73": { | |
| "title": "Beta-crown: Efficient bound propagation with per-neuron split constraints for neural network robustness verification.", | |
| "author": "Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, and J Zico Kolter. 2021.", | |
| "venue": "Advances in Neural Information Processing Systems 34 (2021), 29909\u201329921.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "74": { | |
| "title": "Parallelization techniques for verifying neural networks, Vol. 1. TU Wien Academic Press, 128\u2013137.", | |
| "author": "Haoze Wu, Alex Ozdemir, Aleksandar Zeljic, Kyle Julian, Ahmed Irfan, Divya Gopinath, Sadjad Fouladi, Guy Katz, Corina Pasareanu, and Clark Barrett. 2020.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "75": { | |
| "title": "Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.", | |
| "author": "Kai Yuanqing Xiao, Vincent Tjeng, Nur Muhammad (Mahi) Shafiullah, and Aleksander Madry. 2019.", | |
| "venue": "https://openreview.net/forum?id=BJfIVjAcKm", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "76": { | |
| "title": "Systematic generation of diverse benchmarks for dnn verification. In International Conference on Computer Aided Verification. Springer, 97\u2013121.", | |
| "author": "Dong Xu, David Shriver, Matthew B Dwyer, and Sebastian Elbaum. 2020a.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "77": { | |
| "title": "Fast and complete: Enabling complete neural network verification with rapid and massively parallel incomplete verifiers.", | |
| "author": "Kaidi Xu, Huan Zhang, Shiqi Wang, Yihan Wang, Suman Jana, Xue Lin, and Cho-Jui Hsieh. 2020b.", | |
| "venue": "arXiv preprint arXiv:2011.13824 (2020).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "78": { | |
| "title": "Natural Attack for Pre-trained Models of Code.", | |
| "author": "Zhou Yang, Jieke Shi, Junda He, and David Lo. 2022.", | |
| "venue": "Technical Track of ICSE 2022 (2022).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "79": { | |
| "title": "Derivative-free optimization via classification. In Thirtieth AAAI Conference on Artificial Intelligence.", | |
| "author": "Yang Yu, Hong Qian, and Yi-Qi Hu. 2016.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "80": { | |
| "title": "General cutting planes for bound-propagation-based neural network verification.", | |
| "author": "Huan Zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, and J Zico Kolter. 2022.", | |
| "venue": "arXiv preprint arXiv:2208.05740 (2022).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "81": { | |
| "title": "Efficient neural network robustness certification with general activation functions.", | |
| "author": "Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. 2018.", | |
| "venue": "Advances in neural information processing systems 31 (2018).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "82": { | |
| "title": "Extracting small unsatisfiable cores from unsatisfiable boolean formula.", | |
| "author": "Lintao Zhang and Sharad Malik. 2003a.", | |
| "venue": "SAT 3 (2003).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "83": { | |
| "title": "Validating SAT solvers using an independent resolution-based checker: Practical implementations and other applications. In 2003 Design, Automation and Test in Europe Conference and Exhibition. IEEE, 880\u2013885.", | |
| "author": "Lintao Zhang and Sharad Malik. 2003b.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "84": { | |
| "title": "An empirical study of common challenges in developing deep learning applications. In 2019 IEEE 30th International Symposium on Software Reliability Engineering (ISSRE). IEEE, 104\u2013115.", | |
| "author": "Tianyi Zhang, Cuiyun Gao, Lei Ma, Michael Lyu, and Miryung Kim. 2019.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "85": { | |
| "title": "Can Pruning Improve Certified Robustness of Neural Networks?", | |
| "author": "LI Zhangheng, Tianlong Chen, Linyi Li, Bo Li, and Zhangyang Wang. 2022.", | |
| "venue": "Transactions on Machine Learning Research (2022).", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "86": { | |
| "title": "FLACK: Counterexample-guided fault localization for alloy models. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 637\u2013648.", | |
| "author": "Guolong Zheng, ThanhVu Nguyen, Sim\u00f3n Guti\u00e9rrez Brida, Germ\u00e1n Regis, Marcelo F Frias, Nazareno Aguirre, and Hamid Bagheri. 2021.", | |
| "venue": "", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "87": { | |
| "title": "Adversarial Attacks on Neural Networks for Graph Data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Vol. 2019-Augus. ACM, New York, NY, USA, 2847\u20132856.", | |
| "author": "Daniel Z\u00fcgner, Amir Akbarnejad, and Stephan G\u00fcnnemann. 2018.", | |
| "venue": "", | |
| "url": null | |
| } | |
| } | |
| ], | |
| "url": "http://arxiv.org/html/2307.10266v3" | |
| } |