The ORDER BY Algorithm Is Harder Than You Think
- Uploaded July 5, 2024
- In this video I describe in detail how my implementation of the K-Way External Merge Sort algorithm works. K-Way External Merge Sort is an algorithm used to sort large datasets that don't fit in main memory (usually RAM). Therefore, this algorithm is used by databases like Postgres to process ORDER BY queries when tables don't fit in memory. The algorithm consists of a series of "passes" through one or multiple files and a number of in-memory buffers used to load and process different chunks of a file in each pass. The end result is a file that contains all the requested rows sorted by the keys given in the ORDER BY clause.
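For illustration only, here is a minimal in-memory sketch (not mkdb's actual code) of the k-way merge step that each pass repeats over pages on disk. The hard parts the video covers (files, page buffers, variable-length tuples) are deliberately left out; in the real algorithm each run is a sequence of pages on disk and only one page per run is buffered at a time.

use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Merge `k` already-sorted runs into one sorted output by repeatedly
// taking the smallest head element across all runs.
fn k_way_merge(runs: Vec<Vec<i64>>) -> Vec<i64> {
    // Min-heap of (head value, run index, position within that run).
    let mut heap = BinaryHeap::new();
    for (run, values) in runs.iter().enumerate() {
        if let Some(&first) = values.first() {
            heap.push(Reverse((first, run, 0usize)));
        }
    }
    let mut output = Vec::new();
    while let Some(Reverse((value, run, pos))) = heap.pop() {
        output.push(value);
        // Refill from the same run, like loading the next tuple of a page.
        if let Some(&next) = runs[run].get(pos + 1) {
            heap.push(Reverse((next, run, pos + 1)));
        }
    }
    output
}

fn main() {
    let runs = vec![vec![1, 4, 9], vec![2, 3, 8], vec![5, 6, 7]];
    assert_eq!(k_way_merge(runs), (1..=9).collect::<Vec<i64>>());
}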
🌐 LINKS
Algorithm Implementation:
github.com/antoniosarosi/mkdb...
✉️ CONTACT INFO
Business Email: business@antoniosarosi.io
Contact Email: sarosiantonio@gmail.com
Twitter: / antoniosarosi
Instagram: / antoniosarosi
LinkedIn: / antoniosarosi
🎵 MUSIC
• [Chillstep] Broken Ele...
• Ptr. - Genesis
• Digital Road
• Juno
📖 CHAPTERS
00:00 Introduction
00:22 The Memory Problem
01:32 Database Tables & Sorting
03:18 K-Way Data Structures
04:38 Algorithm Execution (Pass 0)
06:17 Pass 1
09:22 Pass 2
11:07 I/O Complexity
11:58 Variable Length Data
13:16 Final Thoughts
🏷️ HASHTAGS
#programming
#computerscience
#algorithm
Since you do "limit 100", can't the query planner be smarter and do some sort of limited sorting that drops off entries automatically? Like an array that holds 100 entries, and every entry is just inserted into there, but the excess goes off and is discarded automatically.
I actually wonder if you can't do this for most reasonable queries too!
Maybe it can, I didn't implement it myself though.
I also had the same thought - for any LIMIT N result that is small enough to fit in RAM, you could do this with no extra disk space required and only one pass over the data. Take the first N rows, sort them, then load more data from the file in big enough chunks (10MB?). Check each row against the last in the sorted result table. If the checked row sorts before the last one in the result table, keep it in an unsorted pile that just tracks its own last element, and move the last-pointer in the sorted result table back one entry. Continue until the last of the unsorted pile sorts after the current last of the sorted pile, then merge and sort the two together, keeping the first N again. Repeat until the whole database has been scanned.
@@treelibrarian7618 Yeah, the algorithm seems pretty obvious, but you'd have to figure out whether the limit actually fits in memory, and the problem is you're dealing with variable-length data (not all rows are the same size). It's not as easy as it seems, but it doesn't change much anyway: you still need an external sort algorithm. I don't even know why I added the limit in the example.
@@tony_saro Even variable length has a maximum length, so you can conservatively estimate the maximum for that.
@@paulstelian97 You can estimate a max size using the table schema; as long as it doesn't have TEXT or BLOB fields that's pretty reasonable. But anyway, I'm just so dumb that I didn't even think about optimizing LIMIT queries when I wrote the DB. I actually didn't even implement LIMIT, I only put it in the video example because usually you won't SELECT * FROM a giant table, so I thought it'd be more "realistic". But still, as mentioned, that doesn't change anything about the video: when applicable you still need an external sort algorithm. I might pin this comment if more people have doubts about this.
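For anyone following this thread, a minimal sketch of the top-N idea (assuming the N rows fit in memory and ignoring variable-length rows): a max-heap keeps the N smallest keys seen so far, scanning the input once.

use std::collections::BinaryHeap;

// Hypothetical top-N selection for `ORDER BY key LIMIT n` (ascending),
// assuming the n smallest keys fit comfortably in memory.
fn top_n(rows: impl Iterator<Item = String>, n: usize) -> Vec<String> {
    // Max-heap: the largest kept key sits on top, so it's the first to be
    // evicted when a smaller key shows up.
    let mut heap: BinaryHeap<String> = BinaryHeap::new();
    for row in rows {
        if heap.len() < n {
            heap.push(row);
        } else if heap.peek().map_or(false, |top| row < *top) {
            heap.pop();
            heap.push(row);
        }
    }
    // The heap now holds the n smallest keys; sort them for the final output.
    heap.into_sorted_vec()
}

fn main() {
    let names = ["carol", "alice", "eve", "bob", "dave"].map(String::from);
    // The two smallest names, in sorted order.
    assert_eq!(top_n(names.into_iter(), 2), vec!["alice", "bob"]);
}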
Something so simple ended up being so complex and you didn’t even talk about distributed DBs. Top notch
This is top tier. The people who invented these algorithms. Insane. We are taking technology for granted. You explain this really well. For a person who doesn't have a formal computer science background, this video helps me understand how the tools I use day to day work under the hood.
I have a formal Computer Science background and I still find databases very hard to understand 😂. There's a lot of research that went into them over the past 5 decades. People who came up with these algorithms are definitely insane.
@@tony_saro Well, CS is kinda orthogonal to database design. While RDBMSs are more or less based on math, you won't be able to explain with math theory alone why MongoDB, a NoSQL database, supports join operations, while PostgreSQL, an emphatically relational DBMS, supports emphatically unrelational formats like JSON and XML.
And SQL databases weren't THAT math-based to begin with. Back when PostgreSQL started there was no such thing as "SQL compliant", since every RDB had its own incompatible flavour of SQL. Considering it was also the time when applications were written expecting to talk to the DB directly, the hatred for SQL and the need for ORMs back then make sense, especially since there was no XML crutch to use as a go-to serializable format for messaging.
I'm watching this just from the perspective of someone wanting to sort tables that might become very large. This explains why it's so important to create indexes on any columns you might want to sort.
True, BTree indexes will skip this algorithm altogether.
This is what I pay my internet bills for. Btw great work, and well explained
Thank you
You are a legend. I've just watched your video coding a database and I'm very impressed. I'd really like to see you coding a compiler/interpreter. I know it takes a while, but you have awesome didactics. Keep going man, your channel is top tier!
I will at some point, now I'll focus on smaller projects because the database drained all my energy 😂
I remember my dad describing this algorithm to me - except he was doing it on an old mainframe where each of the temporary files was on tapes that needed to be manually changed between passes!
You opened a door of new kinds of algorithms for me. I never realized that there is a huge world of algorithms and data structures to work with data that don't fit in memory. Amazing!
This video is pretty good at picturing file operations as something that just works and not a clusterfuck at all.
It just works but it's not easy to implement at all 😂
What a surprise to come across your videos again! All the years I've spent learning English finally paid off 😂. Best of luck with this channel, I love your videos and I learn a lot from them
Damn! This guy is next level - insane quality. Bravo!
This is some great content, man! I really appreciate the effort that you put into it and for making it available for free on YouTube. Thanks!
Thanks man
At my work, we have a project where an application runs several queries, the results are mapped into memory via Object Relational Mapping, and then stuff like joins, limiting and SORTING is done in the application with the query results held in RAM. I pointed out the performance difference it would make, but the fact that it's not even possible to do this for bigger tables really adds some fuel to the fire :D Great video, shows why leaving as much processing as possible to the DBMS is a good idea
If you're selecting only a few megabytes of data that's probably fine, but otherwise the database will do a much better job, it's designed and optimized for that.
You do joins in the application instead of on the database???? Why???
@@TapetBart We used microservices and chose to have one database per service. Turnrd out that joining across multiple databases is very cumbersome especially if they have different credentials. So we simply did the joins with ORM in the application... and by "we" I mean my team before I joined the project. Later I suggested we used one schema per service eliminating all of that. Don't even know why we used microservices in the first place - we don't have to scale out the app for higher throughput because only a few dozen users use it at once
@@gu1581 ah, makes perfect sense then.
I am doing something a bit similar with DuckDB.
Very intuitive and concise explanation!
Very good explanation and amazing animations! Please keep up the content!
These are actually valuable lessons presented in an awesome way. Man, I just hope you blow up because we need way more of this type of content.
Working on it 📈
Really nice video man. Merge sort was my favorite sorting algorithm when I was beginning my journey into computing. Nice to see that what databases use is a variation of it (with a lot more complications hehe). Greetings from Brazil.
The explanation is so clear, I didn't have to put any effort to get the idea 👏👏👏
I subscribed, dropped a like and I hope you continue producing great content 🙏🏻
I will, thanks for the sub and like 👍
Outstanding job! So well presented and extremely clear.
Great presentation and explanation on this topic. I've happily subscribed
What a great video!! Best of luck with this channel! You have so much potential to grow. Don't give up, keep it up ❤❤
I'd heard about this kind of algorithm since university but was too lazy to dig into it. Thank you for helping me get the idea 💪
Awesome. Great work
This channel is lit, man, instant sub 🎉
Very well explained. I look forward to more videos!
Working on it 👨‍💻
Pretty good explanation! I still remember your first videos on the Spanish channel and it's just amazing to see how you're progressing as an engineer!
Thank You again! A really interesting problem with different sizes.
The animations are cool, and your explanation is right and on point. Really enjoyed your content 👏
wow such a clear explanation 👍👍
Fascinating! Subscribed.
Great video, glad to see you again
Excellent as always 💯💪
Great video. Thanks
There's nothing more beautiful than seeing your content in English too 🥹🫶
I seriously love your content, bro ❤
Just subbed, love the video. Hopefully you do many many more and gain a yuuugggee fan base
Your videos are really good. The animations make them much easier to understand. Cheers!
The king has returned, now there really is a good reason to learn English.
Hey! I think what you did in your Spanish channel was much less niche and this content definitely looks much much more polished and informative. Congratulations!
So for this you would use things like a RAM drive or buy Optane, and it becomes much easier with modern SSDs, I would guess. I never thought that databases use files and the hard disk to sort stuff. Thank you for this lesson! Very cool animations, easy to follow, and I like that you really implemented it and didn't just read some documentation! 💛 Huge respect. It may be just a basic and naive implementation but that is the best place to start with such a complex thing. Looking forward to the next video! :) Cheers
Really good content, like it
Great explanation, keep it up. You're also helping me improve my English. Greetings from Colombia
You have explained it in such a way that I have understood it even though I am an HTML programmer.
Wow, this was very well explained. Thanks for posting. Can we have more videos plz. Thank you again
Sure, I'm working on a video similar to this one focused on algorithms and then I'll start working on another #mkown episode
glad you went in-depth about this topic. will you do more like this with other sub-topics? or do you have a new big project you're working on that you will release in the future when done?
Both, next video will be similar to this one and then I'll move to another project
@@tony_saro awesome, looking forward to it. thanks a lot!
Please keep up the good job
Nice seeing someone else writing Rust code
Regarding dealing with variable-length data: during pass 0, when outputting a sorted page of tuples, the offset from the start of the file and the page size can be recorded in an in-memory 'page file', which can be just an ArrayList (Java background here; use something that can grow and has O(1) lookup by index). Later stages can use that 'page file' list to look up the position of a page in the file for the cursor to use. Their outputs will generate a new 'page file'.
Now, if we want to parallelise this algorithm, since we have the offset and size, we can let each worker thread find the location of whichever page range it's going to be working with using that 'page file'. The output will always start from the minimum offset and span the total size of the range, and since the 'page file' is an ArrayList, each thread can insert the new page values at the appropriate index.
That's more or less what I'm doing, but you can't store it in memory only. Let's say you need to sort 1TB and pages are 4KB: you'd need to store the offsets of approximately 250 million pages, and using 4 bytes per offset would already require 1GB of memory. So in case that happens you need to store what you call the "page file" on disk as well. Another very important detail is that the number of pages will change in each run, just like I showed in the example of producing 3 pages from only 2 pages; maybe in the next pass you produce 1 page using 3 pages. It's still parallelizable, but you have to store the offsets for each thread during each run.
And this is just a basic algorithm I came up with, if you take a look at the Postgres version it's much more complex than what I've explained here 😂.
@tony_saro Will take a look. Also I didn't consider such large queries where GBs or even TBs of data are involved.
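For reference, a small sketch of the "page file" / page directory idea discussed above, with hypothetical types rather than mkdb's actual layout: each run records (offset, length) per page so a cursor, or a worker thread, can seek straight to any page even when pages vary in size. As noted, for huge sorts the directory itself would have to spill to disk.

use std::io::{self, Read, Seek, SeekFrom};

// Hypothetical directory entry: where a (possibly variable-sized) page
// starts in the run file and how many bytes it spans.
#[derive(Clone, Copy)]
struct PageEntry {
    offset: u64,
    len: u32,
}

// Directory for one sorted run. Rebuilt on every pass, since the number
// and size of pages can change from one pass to the next.
struct RunDirectory {
    entries: Vec<PageEntry>, // would be spilled to disk for huge runs
}

impl RunDirectory {
    // Read page `i` of the run using the recorded offset/len, so any
    // thread can jump directly to the page range assigned to it.
    fn read_page<F: Read + Seek>(&self, file: &mut F, i: usize) -> io::Result<Vec<u8>> {
        let entry = self.entries[i];
        let mut buf = vec![0u8; entry.len as usize];
        file.seek(SeekFrom::Start(entry.offset))?;
        file.read_exact(&mut buf)?;
        Ok(buf)
    }
}

fn main() -> io::Result<()> {
    // Demo with an in-memory "file": three variable-sized pages back to back.
    let data: Vec<u8> = [vec![1u8; 4], vec![2u8; 8], vec![3u8; 2]].concat();
    let dir = RunDirectory {
        entries: vec![
            PageEntry { offset: 0, len: 4 },
            PageEntry { offset: 4, len: 8 },
            PageEntry { offset: 12, len: 2 },
        ],
    };
    let mut file = io::Cursor::new(data);
    assert_eq!(dir.read_page(&mut file, 1)?, vec![2u8; 8]);
    Ok(())
}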
The videos are great. Got me thinking on a lot of stuff around db engine design.
Looking forward to more videos 😁
Another great video. I would really like to see the code of this.
It's on GitHub, linked in the description.
Great!
I found my new role model
Great video. So for every ORDER BY query, a separate file with the sorted results will be produced? Or does it actually sort the records in place (in the db files) and keep them sorted for later queries?
Separate file unless the results fit in memory
The page size can be the size the disk reads at once (if you try to read one byte, the disk will read more than a byte), or the size the OS reads at once (if I'm not mistaken, the OS also reads more to optimize I/O)
It's at least equal to the file system page size. Otherwise it's a multiple of the FS page size.
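A tiny sketch of that constraint, assuming a 4096-byte filesystem block size: round the requested page size up to a whole number of blocks.

// Round a requested buffer size up to a whole number of filesystem blocks
// (block size assumed to be 4096 bytes here; a real system would query it).
fn page_size(requested: usize) -> usize {
    const FS_BLOCK: usize = 4096;
    ((requested + FS_BLOCK - 1) / FS_BLOCK) * FS_BLOCK
}

fn main() {
    assert_eq!(page_size(1), 4096);    // at least one block
    assert_eq!(page_size(4096), 4096); // exact fit
    assert_eq!(page_size(5000), 8192); // rounded up to the next block
}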
Amazing explanation about the sorting, but I'm actually not sure you need to sort the whole table any time someone asks for top N values...
It would make much more sense to select the top 100, and then just sort those 100
Check the pinned comment
This video is complex as is, and now I am wondering what will happen when we have to implement Isolation.
I guess, it'd simply be reading required pages during initial run, and then we can work with that dataset and sort it
Here's one more, implementing postgres statement timeout
This reminds me of the bottom-up approach to merge sort, tricky. Could this K-way sort actually be done in such an approach? Subscribed and liked, hope this channel keeps going. These are DSA topics beyond textbooks, explained well and specialized for DBs. I remember "The Art of Computer Programming" mentioned external sorting too. From random places I've bookmarked some other external sorting algorithms: polyphase merge sort, cascade merge, oscillating sort. I know I don't know them
It's similar to bottom-up; you can represent the passes as a tree
If you have a limit on the number of results, so you have to show the first k rows sorted in a particular way (basically the first k rows of an ordered data set), it's better to make k pages of size (maximum size of a row), select the first k rows into those pages, and keep going through the list until you've found the first k.
This way your algorithm has a complexity of O(n * log k), assuming you can hold the k pages in memory, and you would use a priority queue to handle the data; then you can sort it in O(k log k), which is smaller than O(n log k), whereas that ORDER BY algorithm is O(n * log n * log_k(n)), where n is the number of rows in the database.
Also, to get the rows a through b that the database would contain if it were sorted, you can do the same thing with the same algorithm and limitation, except you take k to be b.
We've already discussed the "LIMIT" thing in the pinned comment. Add your comment there if you want to contribute further to that conversation.
Perfectly explained, and that's despite my English not being very good
You could make a kind of hash for the variable-length data that preserves ordering; that way you only need to compare the actual variable-size data when the hashes are equal. You just have to not multiply the hash by a prime number
Do you know any algorithm that preserves order and doesn't cause collisions? When two hashes are equal, how do you know if the original variable-length data, for example strings, were equal or just produced the same hash value?
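For what it's worth, a collision-safe variant of that idea is to compare a cheap fixed-size, order-preserving prefix first and fall back to the full variable-length value on ties (similar in spirit to Postgres's abbreviated keys). A minimal sketch with hypothetical types:

use std::cmp::Ordering;

// Hypothetical sort key for variable-length strings: an order-preserving
// 8-byte prefix plus the full value for tie-breaking.
struct SortKey<'a> {
    prefix: [u8; 8], // first 8 bytes, zero-padded; orders the same as the full bytes
    full: &'a str,
}

impl<'a> SortKey<'a> {
    fn new(s: &'a str) -> Self {
        let mut prefix = [0u8; 8];
        let bytes = s.as_bytes();
        let n = bytes.len().min(8);
        prefix[..n].copy_from_slice(&bytes[..n]);
        SortKey { prefix, full: s }
    }

    fn compare(&self, other: &Self) -> Ordering {
        // Cheap fixed-size comparison first; only touch the
        // variable-length data when the prefixes tie.
        match self.prefix.cmp(&other.prefix) {
            Ordering::Equal => self.full.cmp(&other.full),
            ord => ord,
        }
    }
}

fn main() {
    let a = SortKey::new("alice in wonderland");
    let b = SortKey::new("alice in chains");
    // Prefixes ("alice in") tie, so the full strings decide the order.
    assert_eq!(a.compare(&b), Ordering::Greater);
}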
Hey pal! Thank you very much! I want to create my own DBMS (cloud, embedded and so on) too! If I get some results, can I post a GitHub repo link under your comments?
I think YouTube will flag it as spam if you add a link to your comment
Are there any other external sorting algorithms like this?
Yes, SQLite 3 uses something called PMA (Packed Memory Array) to sort on disk. I don't know how it works, didn't research enough.
Why do databases tend to do their own swapping to temporary files for tables that don't fit in RAM, rather than just doing everything in-memory (with a cache-friendly algorithm) and letting the OS's paging facilities handle swapping to disk? Process memory can already be much larger than RAM (with appropriate swap space configured).
Because the replacement algorithm is determined by the OS in that case. Databases don't have control over that, and the OS doesn't know anything about databases so they just roll their own optimized algorithms.
This is quality content, you're an absolute legend. If you ever decide to make these same videos in Spanish, even paid ones, you've got a customer right here! 😎
These videos are free; I make them in English because there's a bigger audience than in Spanish.
@@tony_saro It's just that I don't have a good command of English yet, maestro 😅
I didn't know you had an English channel 😮
Now you know haha
It's a shame there's no interest in this kind of topic in the Spanish-speaking community :( Most people only focus on web development and don't know the fascinating low-level world
Exactly, that's why I've switched to the English-speaking community, because this is what interests me.
@@tony_saro Best of luck with this new channel.
Just wanted to thank you for this.
In the near future, if you have time, will you implement Redis from scratch with master-replica command propagation?
I don't know, I won't touch databases any time soon after this project.
@@tony_saro Well it looks like I need to wait
How did you get so fluent in English? I've been at it for years and I'm not even half as fluent...
By listening to how native speakers talk and practicing pronunciation. The English they teach you in school or at academies is one thing, and the English spoken on the street is another 😂.
Sir, do you have a license for those guns?
They might be illegal ☠️
Will you go back to the Antonio Sarosi channel?
Not right now
I feel like this could take a while 😂
Virtual memory?
Nah, because that's too dependent on the OS
Sarosi's double?
Sarosi is an AI
Hey, there is a man named Antonio who does programming videos too, is he your twin?
Nah man he's an AI
Either way, new sub!
Did you really need to build a DB to realize you can't assume some arbitrary piece of data will necessarily fit into memory (RAM)?
Yes, I have a negative IQ.
I think you didn't really explain something well because I'm confused...
You say you need to implement k-way merge sort with paging because you can't assume the tuple set fits in memory.
But based on your explanation, the buffer seems to double each time as the pages are merged, presumably until the # of pages inside the buffer = the number of records / page size...
Meaning in the last pass, you will have the final output buffer size (in memory) = the entire result set?
Now you're back at the original problem you had at the beginning of the video. That is, for the bulk of the algorithm you are space efficient, but in the last pass the final output buffer will have the entire result set in memory before writing it to the final output file...
How do you get around this issue? Are you writing to the disk as you are merging, basically?
Look at 04:43. Pages you see at the bottom are on disk and the ones you see at the top are in memory. I think it's pretty clear what's in memory and what isn't.
You only load as many pages as input buffers you have, that's it, last pass included. The memory buffers never change their size, and in the last pass you can see with the animations that it's still loading 2 pages at a time.
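In other words, the merge streams its output: the output page is flushed to the file every time it fills, so memory stays at k input buffers plus one output buffer. A tiny hypothetical sketch (not mkdb's code):

use std::io::{self, Write};

// Streaming output buffer: tuples produced by the merge are appended here
// and flushed to the output file whenever the page fills, so the full
// sorted result never sits in memory at once.
struct OutputBuffer<W: Write> {
    page: Vec<u8>,
    page_size: usize,
    out: W,
}

impl<W: Write> OutputBuffer<W> {
    fn new(page_size: usize, out: W) -> Self {
        OutputBuffer { page: Vec::with_capacity(page_size), page_size, out }
    }

    fn push_tuple(&mut self, tuple: &[u8]) -> io::Result<()> {
        if self.page.len() + tuple.len() > self.page_size {
            self.flush_page()?; // write the full page before accepting more
        }
        self.page.extend_from_slice(tuple);
        Ok(())
    }

    fn flush_page(&mut self) -> io::Result<()> {
        self.out.write_all(&self.page)?;
        self.page.clear();
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let mut sink = Vec::new();
    {
        let mut buf = OutputBuffer::new(8, &mut sink);
        for t in [b"aaaa", b"bbbb", b"cccc"] {
            buf.push_tuple(t)?; // third tuple triggers a flush of the first page
        }
        buf.flush_page()?; // flush the last partial page
    }
    assert_eq!(sink, b"aaaabbbbcccc".to_vec());
    Ok(())
}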
You forgot about us, Toñito :c
I've already talked about that on Instagram and Twitter.
Amazing work
I felt like a way worse software engineer after I saw you...
This is leaning more towards pure Computer Science than software engineering.
What do you mean?
Wouldn't that be unnecessary if you had an index on name, which could already be kept sorted on insert, and you could just take the first 100 elements from it? This algorithm was invented when HDDs were expensive. Today 1 GB is cheaper than one minute of your time.
If there's a BTree index and the query sorts by that index only then yes. Otherwise, if there are 100 queries sorting 1GB at the same time you still need an external sort algorithm, and if there's one query sorting 5TB you still need the same algorithm. All databases have some variant implemented.
Why do you have a Greek accent?
It's not a Greek accent, I'm not Greek and I don't speak Greek
No it isn't
What isn't?
thank you
You're welcome, thank you for commenting