While reading papers on cardinality estimation I noticed something odd: the seminal paper by Flajolet and Martin on probabilistic counting gives the bias correction constant as 0.77351, while a more recent (and very useful) paper by Scheuermann and Mauve gives the constant as 0.775351. Was the newer value a mistake? Or did it correct a mistake in the original paper?
I started searching, and there are a large number of papers that use the value 0.775351, but there are also a number of papers that use the value 0.77351. Judging by the number of Google hits for "Flajolet 0.77351" vs. "Flajolet 0.775351", the 0.77351 group seems to be somewhat larger, but both camps have a significant number of publications. Interestingly, not a single paper mentions both constants, and thus no paper explains which constant is the correct one.
In the end I repeated the constant computation as explained by Flajolet, and the correct value is 0.77351. We can even derive one more digit when using double arithmetic (i.e., 0.773516), but that makes no difference in practice. Thus, the original paper was correct.
But why do so many papers use the incorrect value 0.775351 then? My guess is that at some point somebody made a typo while writing a paper, introducing the superfluous digit 5, and that other authors then copied the constant from that paper without re-checking its value. I am not 100% sure about the origin of the mistake. The incorrect value seems to appear first in the year 2007, showing up in multiple publications from that year. Judging by publication date the source seems to be this paper (which, as far as I know, does not cite any other paper with the incorrect value). And everybody else just copied the constant from somewhere else, propagating it from paper to paper.
If you find this web page because you are searching for the correct Flajolet/Martin bias correction constant, I can assure you that the original paper was correct, and that the value is 0.77351. But you do not have to trust me on this: you can repeat the computation yourself.
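For example, here is a minimal numerical sketch in Python (my own quick check, not the exact computation from Flajolet's paper). It uses the Poisson approximation, under which bit j of the FM bitmap stays empty with probability exp(-n / 2^(j+1)), independently per bit, and evaluates E[R] from that. Since E[R] ≈ log2(φ·n), the ratio 2^E[R] / n approximates φ; the tiny periodic fluctuations in Flajolet's analysis limit the accuracy to a few digits, but that is more than enough to tell 0.77351 and 0.775351 apart:

```python
import math

# Quick numerical check of the Flajolet/Martin bias correction constant.
# Sketch under the Poisson approximation (not the paper's exact analysis):
# the number of elements whose hash value has exactly j trailing zero bits
# is Poisson(n / 2^(j+1)), independently per bit, so bit j of the bitmap
# stays 0 with probability exp(-n / 2^(j+1)).

def expected_R(n):
    """Approximate E[R], where R is the position of the first 0-bit in
    the FM bitmap, using E[R] = sum_{k>=1} P(R >= k)."""
    total = 0.0
    all_set = 1.0  # running product: P(bits 0..j are all set)
    j = 0
    while all_set > 1e-18:
        all_set *= 1.0 - math.exp(-n / 2.0 ** (j + 1))
        total += all_set
        j += 1
    return total

# Flajolet/Martin: E[R] ~ log2(phi * n), hence phi ~ 2^E[R] / n.
n = 2 ** 20
print(2.0 ** expected_R(n) / n)  # roughly 0.7735..., clearly not 0.7753...
```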
Friday, June 8, 2018
Monday, April 23, 2018
Addendum to the MILP Optimization Times
In our upcoming SIGMOD paper on Adaptive Optimization of Very Large Join Queries we compared the result quality and optimization times of over a dozen approaches to join ordering, over a wide range of query sizes. This is quite a challenging problem, as the different algorithms often work under different assumptions, usually no reference implementation is available, and we had to unify them all into one framework that can handle joins of 10 to 5,000 relations.
One of the approaches that we included was Solving the Join Ordering Problem via Mixed Integer Linear Programming by Immanuel Trummer and Christoph Koch. Our implementation tries to follow the original paper faithfully, implementing the mapping from query graph to MILP problem just as described there. For some benchmarks like TPC-DS (up to 18 relations in a join, with a median of 3) that implementation worked fine. But for others, like the Join Order Benchmark (up to 17 relations, median 8) and the SQLite join set (up to 64 relations, median 34), we saw significant optimization times on our Xeon E7-4870 system: in total 290s for the JOB queries and 5,100s for the SQLite queries. (Note that the JOB times do not include queries with non-inner join edges, as these currently cannot be handled by the MILP approach.)
Immanuel pointed out to me that we can improve the optimization time quite a bit by initializing the start position of the Gurobi solver with a solution constructed by a greedy heuristic. In particular for the SQLite queries this helps a lot, as the greedy solution works very well there, and thus the start position is already very good. The optimization times for his implementation (on weaker hardware, a 2.2 GHz Intel Core i7 laptop) are 52s for the Join Order Benchmark and 44s for the SQLite queries.
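For those who want to reproduce this: Gurobi accepts such a warm start by setting the Start attribute of the variables before calling optimize(). Here is a minimal sketch in Python using gurobipy; model and greedy_solution are placeholders for the MILP encoding of the query and whatever the greedy join-ordering heuristic produced, not code from the paper:

```python
import gurobipy as gp

# Sketch only: `model` is assumed to hold the MILP encoding of the join
# ordering problem, and `greedy_solution` to map each Gurobi variable to
# its value in the greedy heuristic's join order. Both are placeholders.
def optimize_with_warm_start(model: gp.Model, greedy_solution: dict) -> None:
    for var, value in greedy_solution.items():
        var.Start = value  # MIP start: the solver begins from this solution
    model.optimize()
```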
I am glad for the hint, and thus amend the numbers here. As far as I can see, this initialization trick was not mentioned in the original SIGMOD 2017 paper. But of course a carefully tuned implementation will often contain tricks that are unfortunately not described in detail in the corresponding publication.
If there are any more comments about any of the approaches we measured, I am happy to hear them.