Sunday, June 29, 2014

Main-memory vs. disk based

Given today's RAM sizes the working set of a database is in main-memory most of the time, even for disk-based systems. However, it makes a big difference if the system knows that the data will be in main-memory or if it has to expect disk I/O.

We can illustrate that nicely in the HyPer system, which is a pure main-memory database system. Due to the way it was developed it is nearly 100% source compatible with an older project that implemented a System R-style disk-based OLAP engine aiming at very large (and thus cold cache) OLAP workloads. The disk-based engine includes a regular buffer manager, locking/latching, a column store using compressed pages, etc. This high degree of compatibility allows for an interesting experiment, namely replacing the data access of a main-memory system with that of a disk based system (thanks to Alexander Böhm for the idea).
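
To make the idea more concrete, here is a minimal sketch of what "replacing the data access" boils down to; the interface and class names are made up for illustration and are not the actual HyPer or OLAP-engine code. All operators above the scan only talk to an abstract scan interface, and only the implementation behind that interface is exchanged.

#include <cstdint>
#include <vector>

// Hypothetical scan interface; joins, aggregation, etc. only see this.
struct TableScan {
   virtual bool next(std::vector<int64_t>& tuple) = 0;
   virtual ~TableScan() = default;
};

// Main-memory variant: produces tuples directly from in-memory columns.
struct MainMemoryScan : TableScan {
   const std::vector<std::vector<int64_t>>& columns;
   size_t pos = 0;
   explicit MainMemoryScan(const std::vector<std::vector<int64_t>>& c) : columns(c) {}
   bool next(std::vector<int64_t>& tuple) override {
      if (columns.empty() || pos >= columns[0].size()) return false;
      tuple.clear();
      for (auto& col : columns) tuple.push_back(col[pos]);
      ++pos;
      return true;
   }
};

// Disk-based variant: would go through the buffer manager, pin and latch
// compressed pages, and decompress them before producing tuples.
struct DiskBasedScan : TableScan {
   bool next(std::vector<int64_t>& tuple) override {
      // sketch only: fetch page via buffer manager, decompress, emit tuples
      (void)tuple;
      return false;
   }
};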

In the following experiment we replaced the table scan operator of HyPer with the table scan of the disk-based system, but left all other operators like joins untouched. We then executed all TPC-H queries on SF1 repeatedly, using identical execution plans, and compared the performance of the disk-based scan to the original HyPer system. After the first pass all data is in main-memory, so both systems are effectively main-memory systems, but of course the disk-based system does not know this and assumes the data is coming from disk. In the experiments we disabled parallelism and index nested loop joins, as these were not supported by the older project. All runtimes are in milliseconds on SF1 (single-threaded).



Query                 1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20   21   22
main-memory [ms]     50    4   20   72   11   52   43   17  132   44    5   42  117   12   14   41   34  225  133   20  115   17
disk-based [ms]     374   40  263  208  240  185  277  271  316  202   47  275  170  182  182   53  250  461  200  204  556   39

It is evident that the main-memory system is much faster, by approx. a factor of 5 in the geometric mean. The interesting question is why. Profiling showed that a main culprit was compression. The disk-based system compresses data quite aggressively, which is a good idea if the data is indeed coming from disk, as it increases the effective throughput, but a very bad idea if the data is already in memory to begin with. For scan-heavy queries like Q1 up to nearly 70% of the time was spent on decompressing the data. Or, to phrase it differently, we could have improved the performance of these queries by nearly a factor of 4 by avoiding that expensive decompression (other queries are less affected, but pay for decompression, too).
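
As a back-of-the-envelope check of that factor (the 70% is the measured profile share for Q1, the other percentages are purely illustrative): if a fraction f of the query time goes into decompression, removing it yields a speedup of 1/(1-f).

#include <cstdio>

int main() {
   // Fraction of query time spent on decompression.
   for (double f : {0.5, 0.7, 0.75}) {
      // If decompression disappears, the remaining time is (1 - f),
      // so the speedup is 1/(1 - f): 2x at 50%, ~3.3x at 70%, 4x at 75%.
      std::printf("decompression share %.0f%% -> speedup %.1fx\n", f * 100, 1.0 / (1.0 - f));
   }
}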

Note that even though aggressive compression looks like a bad idea today (and it probably is, given today's systems), it was plausible when the system was designed. Aggressive compression saves disk I/O, and if the system is waiting for disk the only thing that matters is that decompression is faster than the disk drive, which is the case here. Today we would prefer a more lightweight compression that does not add such a high CPU overhead if the data is already in main-memory. But that of course offers worse performance if the data does indeed come directly from disk...
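
The old trade-off is easy to see with a small model calculation; all numbers below (disk bandwidth, compression ratio, decompression speed) are assumptions for illustration, not measurements from either system.

#include <cstdio>

int main() {
   double diskMBs = 150;   // assumed sequential disk bandwidth [MB/s]
   double ratio = 3;       // assumed compression ratio
   double decompMBs = 800; // assumed decompression speed [MB/s of output]

   // Time per MB of (logical) data, reading from disk.
   double uncompressed = 1.0 / diskMBs;
   double compressed = 1.0 / (diskMBs * ratio) + 1.0 / decompMBs;
   std::printf("from disk, uncompressed: %.2f ms/MB\n", uncompressed * 1000);
   std::printf("from disk, compressed:   %.2f ms/MB (wins while disk bound)\n", compressed * 1000);

   // In memory the disk term disappears and only the CPU cost remains, so the
   // compressed variant is now strictly slower than a plain memory scan.
   std::printf("in memory, compressed:   %.2f ms/MB of pure decompression\n", 1000.0 / decompMBs);
}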

The compression explains roughly a factor of 3 of the performance difference; where does the rest come from? There is no obvious hot spot, performance is lost all over the place. Buffering, latching, memory management, tuple reconstruction, data passing: all components add some overhead here and there, and the total is quite significant. All of that was largely irrelevant as long as we waited for disk I/O, but in the main-memory case the overhead is quite painful.

So what can we learn from that little experiment? First, systems should be designed with the main-memory case in mind. Tuning an originally disk-based system for the in-memory case is difficult, as it requires removing layers and overhead all over the system. Not impossible: with sufficient determination one could probably get the disk-based system within a small factor of the main-memory system, but it is a lot of work.
And second, comparisons between disk-based systems and main-memory systems are unfair. Here it looks like the main-memory system wins by a large margin, and of course it does, in most settings. If, however, the data really came from disk, without any caching, the aggressive compression of the disk-based system would have paid off, as it fetches less data from the slow disk. These are really two different use cases, and even though main memory is becoming the norm, there is still some use for disk-based systems.

Wednesday, June 4, 2014

Random Execution Plans

Query optimizers spend a lot of effort on finding the best execution plans. HyPer for example uses a fairly complex dynamic-programming strategy for finding the optimal join order. Of course all of this complex and expensive optimization is based upon the cost model and cardinality estimations. Unfortunately cardinality estimation is often wrong, in particular higher up in the execution plan. Therefore some cynics claim that databases are basically executing random execution plans.
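
For context, the core of such a dynamic-programming enumeration can be sketched in a few lines; this is the textbook DP-over-subsets formulation with a made-up C_out-style cost model, made-up cardinalities, and a single global selectivity, not HyPer's actual optimizer (which also takes the query graph into account, avoids cross products, and much more).

#include <algorithm>
#include <cstdio>
#include <limits>
#include <vector>

int main() {
   // Made-up cardinality estimates for four relations R0..R3.
   std::vector<double> card = {1000, 500, 20000, 100};
   int n = static_cast<int>(card.size());
   double sel = 0.001; // assumed selectivity of every join predicate

   int full = (1 << n) - 1;
   // cost[S] = estimated cost of the best plan joining the relation set S
   // (sum of intermediate result sizes); 0 for single relations.
   std::vector<double> cost(full + 1, 0.0);

   for (int s = 1; s <= full; ++s) {
      if ((s & (s - 1)) == 0) continue; // single relations are the base case
      // Estimated output size of joining all relations in s.
      double size = 1.0;
      int rels = 0;
      for (int i = 0; i < n; ++i)
         if (s & (1 << i)) { size *= card[i]; ++rels; }
      for (int j = 1; j < rels; ++j) size *= sel; // one predicate per join

      // Split s into two non-empty join inputs and keep the cheapest split.
      double best = std::numeric_limits<double>::infinity();
      for (int left = (s - 1) & s; left; left = (left - 1) & s)
         best = std::min(best, cost[left] + cost[s & ~left] + size);
      cost[s] = best;
   }
   std::printf("estimated cost of the best join order: %g\n", cost[full]);
}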

Now the interesting question is, do we really do that? To get an impression of what "random execution plan" means, we took regular SQL queries, generated 100 random execution plans using QuickPick, and executed all of them in HyPer. Even though they are random, the generated plans are still somewhat reasonable, as 1) they contain no cross products, 2) selections are pushed down, and 3) the smaller side is used as build input. Note that constructing these plans needed no estimations at all (except for the build/probe selection); we simply constructed plans with random join orders and executed them with a timeout of 3 seconds.
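
To make the plan generation concrete, here is a rough sketch of the QuickPick idea on a toy query graph; relation names, cardinalities, and the selectivity are made up, and the real implementation of course works on HyPer's plan representation. The sketch repeatedly picks a random join edge that still connects two different partial plans and joins them, using the smaller side as build input, so the result never contains a cross product.

#include <iostream>
#include <random>
#include <string>
#include <utility>
#include <vector>

struct Plan { std::string text; double card; };

int main() {
   // Assumed toy query graph: four relations and their join edges.
   std::vector<Plan> plans = {{"A", 1000}, {"B", 500}, {"C", 20000}, {"D", 100}};
   std::vector<std::pair<int, int>> edges = {{0, 1}, {1, 2}, {2, 3}, {0, 3}};
   double sel = 0.001; // assumed selectivity of every join predicate

   std::mt19937 rng(std::random_device{}());
   std::vector<int> part = {0, 1, 2, 3}; // partial plan each relation belongs to
   while (true) {
      // Candidate edges connect two different partial plans.
      std::vector<std::pair<int, int>> cand;
      for (auto& e : edges)
         if (part[e.first] != part[e.second]) cand.push_back(e);
      if (cand.empty()) break;
      auto e = cand[std::uniform_int_distribution<size_t>(0, cand.size() - 1)(rng)];

      int a = part[e.first], b = part[e.second];
      if (plans[a].card > plans[b].card) std::swap(a, b); // smaller side builds
      plans[a] = {"(" + plans[a].text + " join " + plans[b].text + ")",
                  plans[a].card * plans[b].card * sel};
      for (int& p : part)
         if (p == b) p = a; // merge the two partial plans
   }
   std::cout << "random plan: " << plans[part[0]].text
             << ", estimated cardinality " << plans[part[0]].card << "\n";
}
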
The results for TPC-H Query 5 (SF1) are shown below:


[Figure: the 100 random query plans sorted by runtime. X-axis: random query plans, y-axis: runtime in ms. Runtimes range from 26 ms for the best plan up to the 3,000 ms timeout, which the slowest plans hit.]


We see several interesting results. First, the best random plan is not bad at all. It is only slightly worse than the plan generated by our complex DP optimizer. On the other hand, the worst random plan is very bad; we had to kill it after 3 seconds. So picking a random plan is clearly dangerous. And even the median plan is not that good either. Roughly speaking, the median plan is a factor of 10 slower than the best random plan, and the worst random plan is more than a factor of 10 slower than the median for this query.

Therefore database optimizers are most likely not picking random plans, even though they are reasoning using noisy estimates. Truly random plans are simply too bad. This also demonstrates that query optimization is crucially important: even a fast runtime system cannot correct the mistakes made in query optimization.

Of course randomness can also be used during query optimization. Generate 1000 plans using QuickPick, pick the cheapest one, and you will most likely get a decent plan. Of course there we cannot simply execute all of the plans, so we will have to pick the cheapest plan based upon estimates, which brings us back to the original problem. But still, QuickPick is very fast, so that might be attractive for large queries.

Random execution plans are also useful for testing the quality of the cost prediction. For TPC-H that is not much of an issue, as here cardinality estimates and cost predictions are quite good, but for data sets with skew and correlations a scatter plot of expected runtime versus actual runtime is quite enlightening.
So there are indeed a lot of unresolved issues with cardinality estimation, but fortunately there is usually at least a correlation between expected costs and actual costs. This means that we might pick plans with some randomness induced by estimation errors, but at least we tend to pick them from the good end of the spectrum.