# Bristol academics announce breakthroughs for fundamental computer science problems

Academics from the University of Bristol have announced breakthroughs on two fundamental problems in computer science. The results were presented earlier this week at the 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2015) in California, the University reports.

One of the most challenging questions in computer science is whether there exist problems that can be proven hard to solve. It is most famously captured by the unsolved question of whether P = NP, for which a prize of $1,000,000 awaits the person(s) who can solve it.

In the first paper, *New unconditional hardness results for dynamic and online problems*, Dr Raphaël Clifford, Reader in Algorithm Design at Bristol University’s Department of Computer Science, and colleagues from Denmark’s Aarhus University prove hardness results for versions of matrix-vector multiplication, a fundamental tool in much of applied mathematics. They go on to show further hardness results for problems where the data change dynamically.
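To make the problem concrete, the naive approach to matrix-vector multiplication computes each output entry as a dot product of one matrix row with the vector. The sketch below is purely illustrative and is not taken from the paper; the paper's lower bounds concern how much faster than this any data structure can be in the cell-probe model.

```python
# Naive matrix-vector multiplication: for an n x n matrix this takes O(n^2)
# work per vector. Each output entry is the dot product of one row with v.
def mat_vec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

M = [[1, 2],
     [3, 4]]
print(mat_vec(M, [1, 0]))  # → [1, 3]
```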

The research team studied the cell-probe complexity of two fundamental problems: matrix-vector multiplication and a version of dynamic set disjointness known as Pătrașcu’s Multiphase Problem. They presented improved unconditional lower bounds for these problems as well as introducing new proof techniques of independent interest. These include a technique capable of proving strong threshold lower bounds of the following form: if we insist on having a very fast query time, then the update time has to be slow enough to compute a lookup table with the answer to every possible query. This is the first time a lower bound of this type has been proven.
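The two extremes of that threshold can be seen in two naive strategies for the Multiphase Problem. The classes below are a toy sketch with hypothetical names, not the paper's construction: the first answers queries directly from the stored sets (fast update, slow query), while the second spends its update phase precomputing a lookup table with the answer to every possible query (slow update, instant query).

```python
class MultiphaseNaive:
    """Fast update, slow query: intersect sets on demand."""
    def __init__(self, sets):          # Phase I: receive sets S_1..S_k
        self.sets = [frozenset(s) for s in sets]
    def load_query_set(self, T):       # Phase II: O(|T|) "update"
        self.T = frozenset(T)
    def disjoint(self, i):             # Phase III: O(|S_i|) per query
        return self.sets[i].isdisjoint(self.T)

class MultiphasePrecompute:
    """Slow update, fast query: tabulate every answer up front."""
    def __init__(self, sets):
        self.sets = [frozenset(s) for s in sets]
    def load_query_set(self, T):       # Phase II: O(k * n) lookup table
        T = frozenset(T)
        self.table = [s.isdisjoint(T) for s in self.sets]
    def disjoint(self, i):             # Phase III: O(1) per query
        return self.table[i]
```

The lower bound says this trade-off is inherent: any structure with very fast queries must, in effect, pay the table-building cost at update time.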

The researchers have proved lower bounds that equal the highest ever achieved, and they give only the second example of such a mathematical proof that holds even when a potential solution is allowed to use random numbers.

In the second paper, *Constructing linear-sized spectral sparsification in almost-linear time*, Dr He Sun, Lecturer in Computer Science at Bristol University’s Department of Computer Science, and Yin Tat Lee, a PhD student at MIT, have presented the first algorithm for constructing linear-sized spectral sparsifiers that runs in almost-linear time.

More and more applications in today’s big-data settings need to deal with graphs of millions of vertices. Although traditional algorithms can in principle be applied directly to these massive graphs, they are usually too slow to be practical at that scale. Storing such massive graphs is also very expensive.

Dr He Sun said: “Over the past decade, there have been intensive studies in order to overcome these two bottlenecks. One notable approach is through the intermediate step called spectral sparsification, which is the approximation of any input graph by a very sparse graph that inherits many properties of the input graph. Since most algorithms run faster in sparse graphs, spectral sparsification is used as a key intermediate step in speeding up the runtime of many practical graph algorithms, including finding approximate maximum flows in an undirected graph, and approximately solving linear systems, among many others.”
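As a rough illustration of the idea, here is a sketch of the classic effective-resistance sampling approach to spectral sparsification (due to Spielman and Srivastava), not the linear-sized construction from the paper itself: edges are sampled with probability proportional to weight times effective resistance and reweighted so the sparse graph's Laplacian approximates the original's.

```python
import numpy as np

def laplacian(n, edges, weights):
    # Graph Laplacian L = D - A for a weighted undirected graph.
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def sparsify_by_effective_resistance(n, edges, weights, q, seed=0):
    # Effective resistances from the Moore-Penrose pseudoinverse of L.
    Lp = np.linalg.pinv(laplacian(n, edges, weights))
    R = [Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges]
    p = np.array([w * r for w, r in zip(weights, R)])
    p /= p.sum()
    # Sample q edges with replacement, reweighting so the sparsifier's
    # Laplacian is an unbiased estimator of the original Laplacian.
    rng = np.random.default_rng(seed)
    new_w = {}
    for i in rng.choice(len(edges), size=q, p=p):
        e = edges[i]
        new_w[e] = new_w.get(e, 0.0) + weights[i] / (q * p[i])
    return list(new_w), list(new_w.values())

# Example: sparsify the complete graph on 8 vertices (28 edges) to ~12 samples.
n = 8
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
sp_edges, sp_weights = sparsify_by_effective_resistance(
    n, edges, [1.0] * len(edges), q=12)
```

With enough samples (roughly proportional to the number of vertices times a logarithmic factor, in the classic analysis), the sparsifier preserves the spectral properties of the input graph.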

Using spectral sparsification, the researchers can run many algorithms on the sparse graph and still obtain approximately correct results. This general framework allows them to speed up the runtime of a wide range of algorithms by an order of magnitude. However, to make the overall approach practical, a key issue is to find faster constructions of spectral sparsification with fewer edges in the resulting sparsifiers. This area has been studied intensively over the past decade.

The researchers have proved that, for any graph, its spectral sparsifier can be constructed in almost-linear time, and in the output sparsifier every vertex has only a constant number of edges on average. This result is almost optimal with respect to both the running time of the algorithm and the number of edges in the spectral sparsifier.

If you are interested in getting your head around this subject, there is of course a Wikipedia page https://simple.wikipedia.org/wiki/P_versus_NP and next Thursday 6th at 9:00 and 21:30, Radio 4’s In Our Time, presented by Melvin Barg (sic), is also dedicated to this issue.

Many thanks for the additional information and enlightenment*, Jim.

* I always thought his name was Melvin Borg. 🙂