Comments on Database Architects: The Case for B-Tree Index Structures

Thomas Neumann (2018-01-16 11:31):

Well, the code is available; I have already given it out to several parties. But it is a proof-of-concept hack and in no way ready for production use. I would therefore like to add usage instructions and clarify issues as needed. If I just provided a download link, people would report that the code is hard to use and does not work as expected.
But I am happy to provide the code to you (with instructions), just drop me a mail.

Anonymous (2018-01-16 11:16):

You point out that not having any code available from the "learned index" paper makes comparison difficult, yet you decided to make your own code available "upon request" only. Both choices make the results harder to reproduce and study, and they hamper experimenting with the design space.

Alex Beutel (2017-12-28 17:33):

Thanks, Thomas. The post and comments are definitely not negative, and I appreciate the chance to dissect these ideas. I agree that interpolation search seems sufficient for precisely linear data.
Rather, I was using linear data as a simple example for which it is clear we can have constant-time lookups (in this case for both interpolation search and learned indexes), and to show that this can be extended to other cases where the data is not linear but would still precisely match our model. For updates, I agree that they are so far not as well understood as for B-Trees, but I would also argue that there are no obvious blockers. We will see what we can do.

Thomas Neumann (2017-12-27 10:23):

Hi Alex,
you are right, there is a point in trying out different functions. It is very easy to add linear approximation to b-trees, but linear approximations are not always appropriate. Trying out different schemes, for example using a learning method, is certainly a good idea.

I hope my blog post did not come across as too negative; I am not against trying different schemes. I am a bit skeptical about the learned index for the general use case, where data is changed over time and where the data distribution is not known a priori (and might even change over time). But if the data is static, like in the read-only use case that Mark asked about below, then it absolutely makes sense to adapt to the data distribution as much as possible.

I am not sure I got your point about evenly distributed integers. If the numbers are nice (e.g., dense or evenly spaced integers), then the b-tree interpolation would work perfectly, too. It would never fall back to binary search and would directly jump to the correct position. The only O(log n) cost left would be that it does the interpolation once per b-tree level, which is usually a very small number. And one could try to avoid even that by recognizing that the spline errors are very small and skipping whole b-tree levels. (But admittedly that kind of precomputation makes more sense if the data is largely static, and then one could try even better spline construction algorithms instead of regular b-tree separators, as I mentioned in my reply to Mark.)

Best

Thomas

Alex Beutel (2017-12-27 05:58):

Hi Thomas,
Thanks for the detailed comment. You are right that some of the methods for making inserts efficient apply to all types of indexes (B-Trees and learned indexes), but I think there are some details that give learned indexes some promising opportunities. Let me start by trying to explain the second point, about lookup time. The claim of constant-time lookups is not for any dataset, but for some datasets. For example, as we discuss in the paper, if our dataset consists of consecutive integer keys, then our model (a linear model with a slope of 1) will take constant time to execute, and there is no search time because the error is 0, meaning we have constant-time lookups. This of course works for other linear data, even if the keys are not consecutive integers. Generalizing, the lookup time scales with the complexity of the data (where the time includes how complex a model is required and how much additional error there is that needs to be addressed through local search). The advantage of the learned index perspective is that ML provides a broad class of models that can match a wide variety of real-world data distributions. This is in contrast to B-Trees, for which the lookup time is O(log n) for any data distribution.

As you suggest, blending these perspectives using a short tree with interpolation search may also be sufficient to approximate some functions, but it also leads to some clear gaps and inefficiencies. For example, log-normal data has a well-defined continuous CDF, and the challenge going forward is finding more flexible functions to approximate it and other common distributions.
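The constant-time-lookup claim above can be sketched in a few lines: a model predicts a position in the sorted key array, and only the window allowed by the model's maximum error is searched. This is a minimal sketch assuming a plain sorted list and a hand-fitted linear model with a known error bound, not the paper's actual implementation:

```python
import bisect

def learned_lookup(keys, key, slope, intercept, max_err):
    """Predict a position with a linear 'model' of the CDF, then
    search only within the model's worst-case error window."""
    pos = int(slope * key + intercept)        # model prediction
    lo = max(0, pos - max_err)                # error-bounded window
    hi = min(len(keys), pos + max_err + 1)
    i = bisect.bisect_left(keys, key, lo, hi)
    return i if i < len(keys) and keys[i] == key else None

# Consecutive integer keys: slope 1 and error 0, so the search window
# holds a single element and the lookup takes constant time.
keys = list(range(100, 200))
print(learned_lookup(keys, 150, 1.0, -100.0, 0))  # -> 50
```

For less regular data the same code still works; the cost of the final `bisect_left` simply grows with `max_err`, which is the "lookup time scales with the complexity of the data" point made above.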
As an example of inefficiency, a 2-piecewise linear function for which we have different amounts of data in each piece can be modeled by a small neural network but would not align with the typical branching strategy in a B-Tree.

For inserts, yes, learned indexes and B-Trees can leverage many of the same techniques (such as spacing out the underlying data), but learned indexes also provide some new avenues for updates. Because B-Trees grow with the size of the data, we need to change the branching structure as the data grows (in addition to shifting around the underlying data). Learned indexes, on the contrary, may not need to change at all as we insert data (the underlying data of course will need to change). That is, if the new data comes from the same distribution, the model will still be accurate and no updates to the model are needed. Even if the data distribution changes, the model can be updated through online learning or simple updates to sufficient statistics (as in linear models). This opens up new opportunities for adjusting an index to data growth and changes in the distribution. Again, we find that the cost of updates here corresponds to model (and thus data distribution) complexity and not to the size of the data. Of course, the paper focuses on lookups, not inserts, and we feel there are many open, interesting questions about how to best use learned indexes with workloads with many updates/inserts. Overall, for both lookups and inserts, learned indexes offer a broader set of design choices in building index structures.

Thanks,
Alex

Thomas Neumann (2017-12-26 23:46):

If your data is read only, you do not need the update capabilities of b-trees.
And in general you can probably afford to spend more time on preprocessing to get a good representation.
Tim's machine learning approach, for example, might be an interesting option, as you do not have to worry about updates. Or you use a classical function approximation approach with hard error bounds, like this one:

Michael T. Goodrich: Efficient Piecewise-Linear Function Approximation Using the Uniform Metric. Discrete & Computational Geometry 14(4): 445-462 (1995)

or similar approaches. The latter ones tend to be a bit math-heavy, but they have the advantage that they are provably optimal, not heuristics. They can be computed in reasonable time (Goodrich's algorithm runs in O(n log n)), but they cannot be updated. For read-only data sets that isn't a problem, of course.

The SILT paper uses compact but updateable data structures, e.g., tries. In general that is a good idea, of course. But if you know beforehand that you will never update your data, that might be wasteful, and direct approximation of the CDF could be an interesting option.

Mark Callaghan (2017-12-25 20:07):

I am interested in the topic for read-only index structures like the per-SST block indexes and bloom filters in an LSM. How can space and search efficiency be improved compared to what is currently done for RocksDB? The SILT paper has interesting results on that topic for the SortedStore: https://www.cs.cmu.edu/~dga/papers/silt-sosp2011.pdf

Thomas Neumann (2017-12-25 15:39):

Hi Tim,
I hope you enjoy the holidays; we will have to continue the discussion afterwards. Just some remarks: In your paper you argue that indexes are models. If we follow that argument, it holds in both directions: not only can we approximate an index with a model, we can also interpret an index as a model. This means that each and every trick you apply to improve update behavior could be applied to b-trees, too, if it made sense to do so. Because it does not really matter whether you approximate the CDF with a neural network or with a spline. They differ in accuracy, lookup performance, and updateability, but fundamentally they are interchangeable. And we can naturally interpret a b-tree as a spline if we want. (And we know how to update b-trees etc. without additional assumptions about future data distributions.)

I am also a bit skeptical about your claims of O(1) lookup in your neural network tree. Sure, the cost is fixed if the neural network tree is fixed, but a b-tree has O(1) lookup time, too, if you fix the depth of the tree. And the interesting question is whether you could live with this two-level model of yours for arbitrary data sizes. Most likely the answer is no; at some point the errors become so large that you need an additional layer of neural networks to keep the estimation errors bounded, and then you are back in the O(log n) world. And that O(1) notation of yours ignores the problem that you have to search for the ultimate tuple within the error bounds. To be truly in O(1) you would 1) have to limit the absolute error in a hard way, and 2) show that you can reach that hard error limit with a fixed-size neural network for arbitrary input sizes. Information theory makes me a bit skeptical there. Note that not even hash tables are truly O(1) if we consider all corner cases, and here we are talking about ordered data structures.
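The interpolation idea that recurs throughout this thread as the B-Tree-side counterpart to a learned model can be sketched as classic interpolation search over a sorted array: estimate the position linearly from the key's value relative to the current range endpoints, and narrow the range when the estimate misses. A generic textbook sketch, not the implementation from the blog post:

```python
def interpolation_search(keys, key):
    """Interpolation search over a sorted list of numbers."""
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= key <= keys[hi]:
        if keys[lo] == keys[hi]:
            pos = lo
        else:
            # Linear interpolation between the range endpoints.
            pos = lo + (key - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[pos] == key:
            return pos
        if keys[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return None

# For evenly spaced keys the very first interpolation lands exactly on
# the target, i.e., no fallback to range narrowing is ever needed.
keys = list(range(0, 1000, 10))
print(interpolation_search(keys, 730))  # -> 73
```

On "nice" distributions this degenerates to a single jump, which is Thomas's point that for dense or evenly spaced integers the b-tree interpolation never falls back to binary search.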
Best

Thomas

Tim Kraska (2017-12-24 18:34):

Alex Beutel just pointed out to me that the spacing argument might be misleading. An oversized (very low fill-level) BTree would also space out the available space. However, it does not do so within the page, meaning every insert would still incur a certain cost, especially for large pages, plus of course the cost of finding the page in the first place. The more interesting thing is that we can use online learning to update our index in a way btrees may not be able to for shifting distributions. Again, much more research is needed here to understand that better.

Tim Kraska (2017-12-24 16:12):

Hi Thomas,
no worries! Family always comes first, and I am also pretty busy right now with Christmas preparations. Just a "quick" answer to your two questions:

For updates, the difference between BTrees and learned indexes is that the available space is more intelligently spread out. This allows for many more O(1) inserts. Plus, it can really be O(1): in the case of the BTree you still need to search for the key, which is O(log n). The idea also better separates the processes of inserting and adding space. For example, you could insert space during the night for the best performance during the day. But you are right, if the distribution shifts, this is not yet as well understood and is a great future research direction (Alkis and I had plenty of discussions about it).

On the high log-normal error: yes, this is because of a particularity of our training process, and the std. err alone is not a good indicator here. That is the reason why we also included the variance of the std. err between buckets. However, a mean value, or better a per-bucket-size-weighted std. err/mean, would be more representative; something we can fix in the next revision of the paper. Let me send you more details on it after Christmas when I have time to dig up the numbers.
However, note that with small changes in the model search process we could (easily) achieve even much better numbers for the log-normal data than for the map data, as it is not hard to learn the often simple distributions of a data generator. We will also expand on this in the next revision of the paper.

Glad to hear that the paper achieved its main goal of offering a new tool and view on indexing. I do see a lot of potential in the idea, especially when combined with clever auto-tuning of the models and the hybrid indexing idea. The hybrid index can take advantage of the distribution where possible and degrades to a BTree where it does not make sense.
So even without GPUs/TPUs it should provide significant benefits.

Merry Christmas to you and your family, and let's catch up after the holidays,

Tim

Thomas Neumann (2017-12-24 09:47):

Hi Tim,
thanks for your long comment; I am not sure that I can do it justice, today being Christmas Eve and my kids pulling me around. Just a few short comments:

For updates, your idea of leaving reserve space for new elements works until the reserve space is full. Incidentally, the same is true for b-trees: you can insert into a fixed-size b-tree bucket in O(1) if the bucket is not full. But at some point all free space is gone, and then you must pay a price. Which is no surprise: if you could insert n elements in O(n) into a sorted data structure for arbitrarily large n, you could sort n elements in O(n), which is not possible in the general case. Plus the data distribution might change over time, you may want to support updates and deletions, etc. All of which is well understood for b-trees, but probably difficult for a learned model.

A detail that puzzled me about your experiments: if you compare Figures 5 and 6, in Figure 5 you get a lookup time of ca. 100ns for an error of 20, while in Figure 6 you get a lookup time of ca. 100ns for an error of 17,000. Do you have an idea why that happens?

Overall your paper is quite interesting and has caused a lot of discussion, which is a good thing! I even discuss it in some of my lectures.
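The reserve-space argument above (cheap inserts while a fixed-size bucket has free slots, then an unavoidable reorganization once it is full) can be illustrated with a toy bucket. This is a hypothetical sketch with invented helper names, not code from either side of the discussion:

```python
import bisect

def insert_into_bucket(bucket, capacity, key):
    """Insert a key into a fixed-capacity sorted bucket.

    While there is free space, the shift cost is bounded by the
    (constant) bucket size. Once the bucket is full, the 'price'
    (a split or a retrain) can no longer be deferred."""
    if len(bucket) >= capacity:
        raise OverflowError("bucket full: split/reorganization required")
    bisect.insort(bucket, key)  # cost bounded by the fixed bucket size
    return bucket

bucket = [10, 30, 50]
print(insert_into_bucket(bucket, 4, 20))  # -> [10, 20, 30, 50]
# A further insert would raise OverflowError: the free space is gone.
```

This applies equally to a b-tree page and to the gapped array under a learned model, which is exactly the symmetry argued for here.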
I still believe more in b-trees than in machine learning, in particular for the general use case, but I am always interested in new ideas.

Best

Thomas

William (2017-12-24 05:33):

I suspect at least some of the learned index work is designed specifically for Google's tensor units and their low-accuracy but high-parallelism computations.

Tim Kraska (2017-12-24 05:12):

Hi Thomas,
Great to see your interest in learned indexes. Still, we would like to clarify a few things:

- Why not use other models than NNs: We could not agree more. The main point of the paper is to offer a new view on how to design data structures and algorithms, and we make the case that machine learning can help. We just use neural nets because of their generality and potential for TPUs. At the same time, many other types of models can work and might be better. Ideally, the system would automatically try different types of models, from B-Trees to splines to neural nets. What works best always depends on the use case. For example, for the log-normal data set, the log-normal CDF function would probably be the smallest and fastest index structure available.

- Performance results: We tried your described approach in one of the first iterations, and it had comparable performance to our B-Tree implementation. In fact, there is another paper under submission from Brown which studies how the leaf nodes of a BTree can be merged using linear functions. However, we did find that the search between the layers of the BTree (even with interpolation search) has a negative impact on performance. In our experiments your described technique was roughly 2x slower than the best learned indexes.

The best indicator that it is an apples-to-oranges comparison can be seen in your B-Tree(10,000) case vs. our B-Tree implementation. The avg. error for your B-Tree(10k) case is 225, but the search takes only 54ns. In contrast, our most fine-grained B-Tree, with an average error of 4, takes 52ns to find the data.
With an average error of 128 (page size 512) it takes 154ns in our paper, so 3x longer than your implementation, while still having a smaller average error. (I am assuming here that the average errors between B-Tree implementations are actually comparable.)

There might be several factors contributing to this:
(1) The hardware, as you already pointed out.
(2) The record size. We always used records with a key and a payload, and we already know that the payload can have a significant impact.
(3) Our general learned index framework and other implementation details.

In addition, it would be interesting to know what the performance numbers for the map data look like. Our guess is that they are worse than the log-normal performance numbers, given the higher error. At the same time, we report even better numbers for them (under 100ns).

- On inserts: Your statement that a "machine learning model will have great difficulties if the data is updated later on" is not so clear to us. In fact, if the new data roughly follows the same trend/distribution as the existing data, even inserts could become faster, ideally O(1). To some degree the rebalancing of a B-Tree is nothing else than retraining a model, and there is more and more work on how to provide better guarantees for ML under changing conditions. But clearly more research is needed here to understand this better.

- Your final words, that we should try everything that helps, including efficient implementations: Yes and double yes! Learned indexes are just another tool, and it highly depends on the use case.
Our hope is that further research will continue to refine that tool and understand those use cases, so that learned indexes are trusted as much as B-Trees.

Best,

Tim, Alex, Alkis, Ed, Jeff

Thomas Neumann (2017-12-23 23:12):

I don't think that is the right analogy for learned indexes. There is a reason why this is called machine learning and not artificial intelligence. There is no intelligence involved here at all; this is "just" learning the cumulative distribution function.

Which is still a hard problem, but not AI-level hard. Function approximation is a well-studied field, and we know many different approaches to it. The only question is what the best approach is, considering dimensions like accuracy, performance, and updateability.

Todd Hoff (2017-12-23 21:33):

Why do I get the feeling this is the modern version of John Henry? Instead of man against machine, we have the old machines competing against the new AIs.