Understanding the powers and limitations of algorithms to solve logic formulas
The focus of the UtHoTP project was to design and study efficient algorithms, where the quality of an algorithm is measured by how its running time scales as the input data size increases. As project coordinator Prof. Jakob Nordstrom puts it, “If an algorithm was a runner’s performance during a race, ideally it shouldn’t matter how long the race is – 100 metres or a marathon – a good algorithm runs fast for all distances.”

A particularly interesting algorithmic challenge is posed by so-called NP-complete problems. This class includes some very challenging combinatorial optimisation problems, which can nonetheless often be solved surprisingly well in practice. Researchers do not yet understand when and why the algorithms for these problems work as well as they often do. The project studied the best algorithms known today and their methods of reasoning. By proving mathematical theorems delineating their power and limitations, the team has provided a better understanding of how these algorithms work.

The importance of NP-complete problems

Research in computational complexity theory has focused on problems at the limit of what is computationally feasible. Many of these problems share an intriguing characteristic: while they are very challenging to solve, once a solution is proposed, it is easy to verify. Many tasks in science and engineering have this property, which is why complexity theory has attempted to understand the difficulty of problems of this kind. It turns out, rather surprisingly, that in order to solve any computational problem with this property, it is enough to have efficient algorithms for solving logic formulas. This is why research has concentrated on the problem of solving such formulas, known as the Boolean satisfiability problem, or SAT for short – SAT is itself NP-complete, meaning that an efficient SAT algorithm would yield efficient algorithms for this entire class of problems, and studying SAT enables researchers to better understand the workings of efficient algorithms in general.
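The “hard to solve, easy to verify” asymmetry can be illustrated with a minimal sketch (a toy example, not one of the project’s algorithms): checking a proposed solution to a SAT formula takes time linear in the formula size, while the naive way of finding one enumerates all 2^n truth assignments.

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a list of nonzero
# integers, where k means "variable k is true" and -k means "variable k
# is false" (the common DIMICS-style sign convention).
FORMULA = [[1, 2], [-1, 3], [-2, -3], [1, -3]]

def verify(formula, assignment):
    """Check a proposed solution: every clause must contain at least one
    satisfied literal. This runs in time linear in the formula size."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force_solve(formula, num_vars):
    """Try all 2^n truth assignments. Each individual check is cheap,
    but the search space grows exponentially with the variable count."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {v + 1: bits[v] for v in range(num_vars)}
        if verify(formula, assignment):
            return assignment
    return None  # the formula is unsatisfiable

print(brute_force_solve(FORMULA, 3))
```

Modern SAT solvers replace this exhaustive search with much more sophisticated reasoning, but the verification step stays cheap – which is exactly the property that makes SAT a useful lens on NP-complete problems.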
The UtHoTP project studied algorithms for solving the SAT problem – so-called SAT solvers – focusing, in particular, on more advanced mathematical methods of reasoning that are exponentially stronger than the methods commonly in use today. By designing and studying new algorithms, and proving mathematical theorems about them, the project shed light on their potential. The team has also experimentally evaluated the new algorithms that they have developed, but so far only in ‘idealised, lab-like conditions’. They constructed benchmark formulas, designed to highlight the strengths and weaknesses of different methods for solving the SAT problem. This work has yielded some quite promising results, and Prof. Nordstrom says, “If these new methods could be made to work as well on formulas arising in real-life problems, then this could have a huge impact on many areas in industry that use SAT solvers, such as computer hardware and software design.”

Building bridges between theory and practice

When theoreticians and practitioners from different areas of research study the same problems, their different perspectives can often present almost insurmountable challenges for communication – researchers from different communities do not even share a common technical language. As Prof. Nordstrom explains, “This has been one of the barriers to designing and understanding really strong algorithms for the SAT problem. Although SAT has been intensely studied since the 1960s, there has been very little interaction between theory and practice. This is now starting to change, and I believe an important part of this is a series of international workshops that I have organised since 2014, with the help of this ERC grant.” The team is now looking to apply this approach to the practical performance of algorithms in neighbouring areas such as constraint programming and mixed integer linear programming.
Keywords
UtHoTP, algorithm, formula, computers, logic, mathematics, reasoning, computational, problems, algebra, geometry