System Design Interview - Top K Problem (Heavy Hitters)
- Added 26 Jun 2019
- Please check out my other video courses here: www.systemdesignthinking.com
Topics mentioned in the video:
- Stream and batch processing data pipelines.
- Count-min sketch data structure.
- MapReduce paradigm.
- Various applications of the top k problem solution (Google/Twitter/YouTube trends, popular products, volatile stocks, DDoS attack prevention).
Merge N sorted lists problem: leetcode.com/problems/merge-k...
Inspired by the following interview questions:
Amazon (www.careercup.com/question?id...)
Facebook (www.careercup.com/question?id...)
Google (www.careercup.com/question?id...)
LinkedIn (www.careercup.com/question?id...)
Twitter (www.careercup.com/question?id...)
Yahoo (www.careercup.com/question?id...) - Science & Technology
Jesus christ this guy's material is amazing... and each video is so compact. He basically never wastes a single word....
I have to pause or rewind constantly, and watch every video twice to digest it.
@@antonfeng1434 me too
@@antonfeng1434 Same here
@@xordux7 Same here
A summary of questions and answers asked in the comments below.
1. Can we use hash maps but flush their content (after converting to a heap) to storage every few seconds, instead of using CMS?
For small scale it is totally fine to use hash maps. When scale grows, hash maps may become too big (use a lot of memory). To prevent this we can partition the data, so that only a subset of all the data comes to each Fast Processor service host. But this complicates the architecture. The beauty of CMS is that it consumes a limited (predefined) amount of memory and there is no need to partition the data. The drawback of CMS is that it counts only approximately. Tradeoffs, tradeoffs...
2. How do we store count-min sketch and heap into database? Like how to design the table schema?
A heap is just a one-dimensional array, and a count-min sketch is a two-dimensional array. This means both can easily be serialized into a byte array, using either a language-native serialization API or a well-regarded serialization framework (Protocol Buffers, Thrift, Avro). We can then store them in that form in the database.
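As an illustration, here is a minimal round-trip sketch. The data values are made up, and `pickle` is used only as a stand-in for a production framework like Protocol Buffers:

```python
import pickle

# Hypothetical in-memory state of a Fast Processor host:
# a count-min sketch is a 2-D array of counters, a heap is a 1-D array.
sketch = [[0, 3, 1], [2, 0, 4]]        # 2 hash functions x 3 buckets
heap = [(3, "videoA"), (5, "videoB")]  # (count, key) pairs

# Serialize both into byte arrays; these opaque blobs can be stored
# as values in almost any database.
sketch_blob = pickle.dumps(sketch)
heap_blob = pickle.dumps(heap)

# Round-trip check: deserialization restores the original structures.
assert pickle.loads(sketch_blob) == sketch
assert pickle.loads(heap_blob) == heap
```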
3. Count-min sketch is to save memory, but we still have n log k time to get top k, right?
Correct. It is n log k (for the heap) + k log k (for sorting the final list). N is typically much larger than k, so n log k is the dominant term.
4. If count-min sketch is only used for 1 min count, why wouldn't we directly use a hash table to count? After all the size of data set won't grow infinitely.
For small to medium scale, a hash table solution may work just fine. But keep in mind that if we try to create a service that needs to find top K lists for many different scenarios, there may be many such hash tables, and it will not scale well. For example: the top K most liked/disliked videos, most watched (by time) videos, most commented videos, videos with the highest number of exceptions during opening, etc. Similar statistics may be calculated at the channel level, per country/region, and so on. Long story short, there may be many different top K lists we need to calculate with our service.
5. How to merge two top k lists of one hour to obtain top k for two hours?
We need to sum up the values for the same identifiers. In other words, we sum up views for the same videos from both lists and take the top K of the merged list (either by sorting or using a heap). [This won't necessarily be a 100% accurate result, though.]
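A quick sketch of that merge (the hourly lists are illustrative; `heapq.nlargest` plays the role of the heap):

```python
from collections import Counter
import heapq

def merge_top_k(list_a, list_b, k):
    """Merge two top-k lists by summing counts per key, then re-take top k.
    Note: not guaranteed to be exact, since items just below the cut-off
    in each hourly list were already lost."""
    totals = Counter()
    for key, count in list_a + list_b:
        totals[key] += count
    # heapq.nlargest runs in O(n log k) over the merged keys.
    return heapq.nlargest(k, totals.items(), key=lambda kv: kv[1])

hour1 = [("A", 10), ("B", 7), ("C", 5)]
hour2 = [("B", 9), ("D", 8), ("A", 2)]
print(merge_top_k(hour1, hour2, 2))  # A=12, B=16 -> [('B', 16), ('A', 12)]
```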
6. How does count-min sketch work when there are different scenarios like you mentioned... most liked/disliked videos. Do we need to build multiple sketches? Do we need a designated hash for each of these categories? Either way, they need more memory, just like a hash table.
Correct. We need a separate sketch to count each event type: video views, likes, dislikes, comment submissions, etc.
7. Regarding the slow path, I am confused by the data partitioner. Can we remove the first Distributed Messaging System and the data partitioner? The API gateway would send messages directly to the second Distributed Messaging System based on its partitions. For example, the API gateway would send all B messages to partition 1, all A messages to partition 2, and all C messages to partition 3. Why do we need the first Distributed Messaging System and the data partitioner? If we use Kafka as the Distributed Messaging System, we can just create a topic per set of message types.
At large scale (e.g. YouTube scale), the API Gateway cluster will be processing a lot of requests. I assume these are thousands or even tens of thousands of CPU-heavy machines, with the main goal of serving video content and doing as little "other" work as possible. On such machines we usually want to avoid any heavy aggregations or logic. The simplest thing we can do is batch video view requests together, meaning no aggregation at all: create a single message that contains something like {A = 1, B = 1, C = 1} and send it for further processing. In the option you mentioned, we would still need to aggregate on the API Gateway side. Due to the high scale, we cannot afford to send one message to the second DMS per video view request, i.e. three separate messages like {A = 1}, {B = 1}, {C = 1}. As mentioned in the video, we want to decrease the request rate at every next stage.
8. I have a question regarding the fast path through, it seems like you store the aggregated count min sketch in the storage system, but is that enough to calculate the top k? I felt like we would need to have a list of the websites and maintain a size k heap somewhere to figure out the top k.
You are correct. We always keep two data structures in the Fast Processor: a count-min sketch and a heap. We use the count-min sketch to count, while the heap stores the top-k list. In the Storage service we may keep both, or the heap only. But the heap is always present.
9. So in summary, we still need to store the keys...count-min sketch helps achieve savings by not having to maintain counts for keys individually...when one has to find the top k elements, one has to iterate thru every single key and use count-min sketch to find the top k elements...is this understanding accurate?
We need to store the keys, but only K of them (or a bit more). Not all.
When a key arrives, we do the following:
- Add it to the count-min sketch.
- Get key count from the count-min sketch.
- Check if the current key is in the heap. If it is present in the heap, we update its count value there. If it is not present, we check whether the heap is already full. If not full, we add the key to the heap. If the heap is full, we compare the minimal heap element's value with the current key's count; if the current key's count is greater, we remove the minimal element and add the current key.
This way we only keep a predefined number of keys. This guarantees that we never exceed the memory limit, as both the count-min sketch and the heap have a bounded size.
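The steps above can be sketched as a minimal single-host implementation. The sketch dimensions, the MD5-based hashing, and the simple heap-rebuild strategy are illustrative choices, not from the video:

```python
import heapq
import hashlib

class CountMinSketch:
    def __init__(self, depth=4, width=1000):
        self.depth, self.width = depth, width
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, key):
        # One derived hash per row (illustrative; real CMS uses pairwise-independent hashes).
        for i in range(self.depth):
            h = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield i, int(h, 16) % self.width

    def add(self, key):
        for i, j in self._buckets(key):
            self.table[i][j] += 1

    def estimate(self, key):
        # Taking the minimum over rows bounds the over-count from collisions.
        return min(self.table[i][j] for i, j in self._buckets(key))

def top_k(stream, k, sketch):
    heap = []     # min-heap of (count, key), size <= k
    in_heap = {}  # key -> latest count, for membership checks
    for key in stream:
        sketch.add(key)
        count = sketch.estimate(key)
        if key in in_heap:
            # Key already tracked: update its count and restore the heap
            # invariant by rebuilding (simple, O(k); not the fastest way).
            in_heap[key] = count
            heap = [(c, x) for x, c in in_heap.items()]
            heapq.heapify(heap)
        elif len(heap) < k:
            in_heap[key] = count
            heapq.heappush(heap, (count, key))
        elif count > heap[0][0]:
            # Heap full and the new key beats the current minimum: evict it.
            _, evicted = heapq.heappop(heap)
            del in_heap[evicted]
            in_heap[key] = count
            heapq.heappush(heap, (count, key))
    return sorted(in_heap.items(), key=lambda kv: -kv[1])

print(top_k(list("AABABC"), 2, CountMinSketch()))  # [('A', 3), ('B', 2)]
```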
Video Notes by Hemant Sethi: tinyurl.com/qqkp274
Hi Saurabh. This is amazing! Thank you for collecting all these questions and answers in one place. I would like to find time to do something like this for other videos as well.
I have pinned this comment to be at the top. Thank you once again!
Thanks a lot for this Saurabh!
Need more people like you. Thank you
@@atibhiagrawal6460 glad it's helpful!
Might be worth posting a link to your notes in a standalone comment too so that everyone can see it
@@saurabhmaurya94 That is good idea ! Thank you :D
PLEASE come back and make videos again. There's no resource quite like this channel.
One of the best system design channel ive come across! great job! I particularly liked how you were able to describe a fundamental pattern that can be applied in multiple scenarios
These are by far the best videos on system design for interviews. Thanks a lot for taking the time to make and publish these!
This is the best explanation on system design I've ever seen. Thanks Mikhail, that helps A LOT!
I wish all sys interview tutorials are like yours, with so much information precisely and carefully explained in a clear manner, with diff trade offs and topics to discuss interviewers along the way! Thank you so much
As luck would have it i had a similar question for make or break round in google and I nailed it since I watched it several times over before the interview. Got a L6 role offered at Google. Thanks for making my dream come true.
i feel bad that im not paying for this video! the quality is beyond amazing
You shouldn't feel bad. With this much knowledge, he must be making at least $500k+ at his current job. And by now he must be looking beyond money, toward making a meaningful contribution to society.
Very clear solution and something that can actually be used in an interview! Please keep making more of these.
You're amazing, by far the most detailed and deeply analysed solution I've seen on any design channel. Please never stop making videos.
I love Mikhail's content, the video is so interactive that it looks like he is talking to you and he knows what is going inside your head :)
Your accent was hard to understand initially, but now I've fallen in love with it.
This is an excellent video, but I am left with these questions:
1. Count min-sketch does not really keep track of video IDs in its cells. Each cell in the table could be from several collisions from different videos. So once we have our final aggregated min-sketch table, we pick the top k frequencies, but we can't tell which video ID each cell corresponds to. So how would it work? I haven't come up with an answer for this.
2. What would be type of database used to store the top k lists?
I would just use a simple MySQL database, since the number of rows would not be very large if we only have to retain top k lists for a short window of time (say, 1 week) and k is not too big. We can always add new instances of the db for each week of data if we need to preserve data for older weeks. We would have to create an index on the time range column to search efficiently.
Thanks Mikhail. I can bet... this is the best channel on YouTube. Just binge-watch all the videos from this channel and you will learn so much.
This is one of the best pieces of system design content I have come across. Thanks a lot.
Awesome videos Mikhail... thanks a lot for sharing! That last part showing other problems with similar solutions was the cherry on top.
I had an interview step with AWS a couple of days ago and they asked me exactly this question. Thank you for your videos.
Thank you very much!! I had gone over all your videos multiple times to understand it well. I had 2 interviews with FAANG in the last week and was offered a job in both! I have to say a lot of the credit goes to you!
Misha,
Loved the structure as well as depth and breadth of the topics you touched on!
All videos in this channel are the best on YT in this category even to this date. You can find many other channels which may give similar data divided into more than 5 videos with a lot of fluff. Mikhael's video touches upon every important part without beating around the bush and also gives great pointers in identifying what the interviewer may be looking for. Kudos to all the videos in this channel !
This s one of the best system design video I came across in long time .. keep up the good work !
Thank you, Sourav. Appreciate the feedback.
Nicely structured ! covering both depth and breadth of the concepts as much as possible.
The best system design answer I have seen on YouTube. Thank you!
Thank you, Hugh, for the feedback.
This is one of the best system design videos on this topic I have come across. Thanks & keep up the great work, Mikhail!
The amount of info you have covered here is amazing! Thank you so much!
These are the best videos on system design I've seen, thanks so much!
Excellent video. A key thing that you did at the end (and is very useful IMHO) is that you identified many other interview questions that are really the same problem in disguise. That is very good thinking that we all probably need to learn and develop. I encourage you to do that in your other design solutions as well. Thank you for another excellent video.
Very clean explanation, which is rare nowadays, why did you stop ? It would be nice to see your new videos , good luck man!
I agree. Can you please continue doing this?
PLEASE MAKE MORE VIDEOS. WE WILL PAY FOR IT (ADD JOIN BUTTON)!
Hands down the best system design videos so far !! and I have watched lots of the system design videos. Love how you start from simple and work all the way to complex structure and how it can applies to different situations.
You are too kind to me, Joy! Thank you for the feedback!
couldn't solve this problem in an interview. found this gem of a video a month after. will get them next time!
Please do more of them as your videos are very good from a content perspective :) Extremely informative ...
Your content is PURE GOLD. Hats off! :)
Sr your videos are gold, I got no interview but it’s rare to find architecture so well explained, thanks
How can someone even downvote this? This is just so amazing. Have not learnt so much in 30 minutes in my whole life.
THIS GUY is SO COOL. Who else feel that when he's speaking, explaining difficult concepts in the most concise way possible - and also touching on what we really need to hear about?!
I'm devastated.
I just got out of a last round interview, it was my first time ever being asked a system design question.
I used this channel, among others, to study, and this video is the ONLY video I didn't have time to watch.
My interview question was exactly this, word for word.
I made up a functional and relatively scalable solution on the fly, and the interview felt conversational + it lasted 10 minutes more than it should have, so I think I did alright, but I still struggled a lot in the beginning and needed some help.
Life is cruel sometimes.
This is just incredible! Please do publish more videos.
OMG, this is still the best system design video i've ever seen. it's not only for interview, but also for actual system solution design.
Excellent video! Has depth and breadth that isn’t seen elsewhere. Keep it up!
Appreciate the feedback, Abbas! Thanks.
Among all the materials I have seen in youtube, this is really the top one. Keep up the good work and thanks for sharing
Awesome video. Discussion of various approach (with code snippet) and the drawback is the highlight. Thanks a lot!
So far I am loving it. Keeps me glued to ur channel. Fantastic job I must say
Wow! This is the best system design review video I've ever seen.
I think it is admirable that you explained all the inner workings. In a real interview you can probably skip the single host solution with the heap, that's good for an explanation on youtube. What I think is more valuable is to also propose some actual technologies for the various components to make it clear that you are not proposing building this from scratch. I'm surprised that Kafka Streams was not mentioned. Also for the long path, it is worth discussing the option to store the raw or pre-aggregated requests in an OLAP db like Redshift. The olap can do the top k efficiently for you with a simple sql query (all the map reduce magic will be handled under the hood), can act as main storage, and will also make you flexible to other analytics queries. Integrates directly with various dashboarding products and one rarely wants to do just top k.
Great work. I am a senior engineer at a big tech company and I'm still learning a lot from your videos.
This is the most tech intense 30min video I've ever seen :) Thank you!
Awesome. Simply awesome. You killed it completely!
and huge thank you for all your videos! They are the best I could find on system design!
The best that I have seen so far!
Phenomenal. We do something very similar with hot and cold path in microsoft. Instead of countmin sketch we use hyperloglog
Thank you for such a detailed explanation. Awesome as usual!
Thank you @Memfis for providing consistent feedback!
I wish I could give this video a thousand likes instead of just 1 !!! these contents are fantastic!!!
Amazing video. Thank you! The way you structured it is commendable.
Thank you, Algorithm Implementer. Glad to hear that!
Bonus on mentioning using Spark and Kafka as I was thinking that during the video. Great stuff as usual!
Thank you, @Collected Reader. Glad to see you again!
Thank you very much Sir, excellent demonstration of coherent design thinking. I feel more equipped than ever to solve system design problems.
Hey, Thank you so much all your knowledge sharing. I am able to perform very nice in all my interviews. Keep up the good work. More power to you.
Keep rocking!!!
one of the best technical discussions I have seen
Thanks, Stefan. Appreciate the feedback!
Great stuff!! Thanks a ton for such in depth explanation of these concepts and correlation.
19:05 slow path
22:00 faster than map reduce but more accurate than countmin
22:43 fast path
25:38 Data partitioner is basically Kafka, which reads messages (logs, processed logs with counts, etc.) and stores them in topics
Please, make more videos! Absolutely amazing explanation!!!!!!!!!!!
Ohhhh why I did not find this channel before.... The way you approach the problem and take it forward it make it so easy else the realm of system design concepts are huge.... We need more videos like this.... This is design pattern of system design.... Good Job!!!!
Glad to have you aboard, coolgoose8555! Thank you for the feedback!
This is by far the best content I have found on the System Design. I am addicted to this content.
Keep up the good work, waiting for more videos .. :)
Glad you enjoy it, Dinkar! Sure, more videos to come. I feel very busy these days. But I try to use whatever time is left to work on more content.
Excellent explanation ! I really appreciate your work!
Appreciate the feedback! Thanks.
The system design video to beat. PERIOD!!!
Thank you, Anubhav!
Cant thank you enough for your efforts in sharing such a high quality content for us!
Hi Harish. Thanks!
watched other videos before this.. so liking this before starting...
this channel has the best System design explanations ... thank you so much and keep up the good work!!
Thank you for the feedback, Soubhagyasri! Glad you like the channel!
OMG. I love these videos. Thank you so much for creating these. Please write a book or open a course, it may fund you to focus much time on very helpful content like this. I am very happy today.
Appreciate your feedback, Karthik!
It is not enough to send the count-min sketch matrix to storage alone; you also need to send the list of all the event types that were processed, otherwise you have no way of going from the matrix data back to the actual keys (before hashing). The only advantage over the hash map solution is that you don't need to keep all of it in memory at once; you can stream it as you go, from disk for example.
Calculating the min for each key is O(H), where H is the number of hash functions, and you need to do that for all E event types, so O(E·H). Then you use the priority queue to get the top K in O(E·log K), so the total time complexity is O(E·(H + log K)).
Well, you are right. But I think the video is more about one of a general design for a single event type. Then we can start from here based on the functional requirement.
Awesome and detailed explanation. Hats off
Thank you, Saurabh.
Thanks for all the efforts you put here to describe. This is a great material.
So funny, I found this channel yesterday, watched this video, and was asked pretty much the same question at my interview at LinkedIn today. Thanks a lot.
Funny, indeed )) This world is so small ))
Thanks for sharing!
Actually got an offer from Amazon, LinkedIn, Roku and probably Google as well. A lot of it because of this channel. Can’t recommend it enough! Thanks again!
I was asked this same question at my interview last Friday and found out your video today :( Didn't nail it though, hope I can do better next time. Thank you Mikhail, hope you can spend time to create more video like this.
Wow, Sergey. You rock!
And thank you for the praise.
Time will come, Hugh. Just keep pushing!
This is really the best tutorial, and I hope there is article like this content!
It's really helpful. I have already watched each video so many times, and I learned a lot. Initially, I was frustrated with the accent (I am not a native English speaker either), but now I am okay watching without CC.
That's awesome learning material! I hope you can keep publishing new video about system design
Glad you liked. Thanks for sharing the feedback!
amazing, i was like wtf you talking about at the beginning. It all makes sense now after the data retrieval part.
This is pure Gem!.. Take a bow ....
Thank you for making this video. It was very helpful. It will be great if you can post more such videos.
For people wondering why the heap complexity is O(n log k) for the single-host top k: we apply a simple optimization, popping the least frequent item once the heap size reaches k, so we have n operations each taking O(log k).
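That optimization looks roughly like this, assuming exact per-key counts are already available in a dictionary (the counts below are made up):

```python
import heapq

def top_k_frequent(counts, k):
    """O(n log k): keep a min-heap of size k; each of the n items costs
    at most one push/pop on a k-sized heap."""
    heap = []
    for key, count in counts.items():
        if len(heap) < k:
            heapq.heappush(heap, (count, key))
        elif count > heap[0][0]:
            # Evict the least frequent tracked item in one O(log k) step.
            heapq.heappushpop(heap, (count, key))
    return sorted(heap, reverse=True)  # final k log k sort

print(top_k_frequent({"A": 5, "B": 3, "C": 8, "D": 1}, 2))  # [(8, 'C'), (5, 'A')]
```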
Please upload more content! Awesome content for the viewers!! Great, great stuff
Thank you for the feedback, oopsywtf. More videos to come.
I think the open question on this video is how the fast path stores and retrieves data. It's not really answered clearly in any of the comments I could find.
It seems like we are missing an "aggregator" component, which combines the count-mins/heaps from all the fast processors. The video seems to imply we'd have a single count-sketch / heap per time interval. But this will put a huge contention on the database - every fast processor will have to lock the count-sketch and heap, add its local count-sketch/update heap, and store it back. So we will have a large contention on the DB. In addition, like others pointed out, we need the list of all video IDs to do this - so we can rebuild the heap. But that becomes impractical at large volumes.
Only things I can think of are :
1) Each fast processor stores its heap into the db (local heap) for the time interval. On query, we aggregate all the local heaps for the interval and build a new global top K heap. The query component can then store this in a cache like redis, so it doesn't need to be recalculated. This approach however requires we partition by video_id all views that are sent to the fast processor. Otherwise we can't accurately merge the local Ks. The problem with this, though, is we can get hot videos and those video counts will be handled entirely by a single processor.
2) Use a DB with built-in top-K support, like Redis. In this case, we don't need to partition views at all and can balance across all fast processors. Each fast processor then stores a local map of video counts for a short period of time (like 5s) and periodically flushes the counts to Redis. Redis takes care of storing the top K in its own probabilistic data structure. Redis should be able to handle 10k RPS like this. If we need to scale further, we have to partition Redis on video_id, for example. And again, our query component will have to aggregate the partitioned local top-Ks on read and merge-sort them.
For option 1, if fast processors sends their local top-k to the aggregator, that should be enough to calculate global top-k for 1-minute. I don't think there's any need to send CMS to the aggregator. The aggregator creates 1-minute top-k by merging the local heaps, and the query service can simply read the value.
Amazing Video and in detail great explanation. Thanks a lot for creating this in-depth video. Please keep creating more awesome stuff.
Thank you, Ashish, for the feedback!
Awesome video. Well explained. Waiting for your next video :) Please upload soon.
These videos are gem for System design noob like me.
Great video! Nice work!! Thank you!
Great video. Request you to cover couple of popular System Design questions when get chance: (1) recommendation of celebrity on Instagram or Song Recommendation (2) Real time coding competition and display 10 top winners.
All your videos are really amazing. I hope you would post it more often.
Thank you, Nikhil. I will surely come back with more regular video postings.
I have seen lot of system design videos but this content's quality is way above rest. Really appreciate the effort. Please keep posting new topics. Or you can pick top k heavy hitters system design problem requests from comments :)
Thank you for the feedback Mohit! Much appreciated.
this is really good stuff, keep up the good work thanks
super helpful and pretty on point. appreciate the video.
This is Terrific stuff, keep these coming.
Thank you for sharing your feedback, Nataraja.
@@SystemDesignInterview at 13:53, how did min val for A become 4 ?
My mistake. It should be 3, of course. Thanks for pointing out.
@@SystemDesignInterview I am sorry, I didn't mean to point mistake, i was just inquisitive. You have done a tremendous job (i don't have a better word) in explaining these so beautifully. I keep looking into this every week !
Thank you, Nataraja, for the kind words.
I think your great coverage of the topic show how you really know it and understand it compared to other guys who just share what they read last night. Thank you
awesome content! learnt a lot, many thanks !
Pretty good tech video as usual, Carry on, Bro!!!!!!
Thank you, Zhi Guo Qin. Good to see you coming back and providing the feedback! Carry on, Bro!
Thank you so much Mikhail for adding top quality system design videos. I find the content very useful not only for preparing system design interviews but also applying them in my daily work.
I have a question regarding slow path: What if some certain message keys become hot? In other words how should we rebalance the partitions if most of the messages go to the same partition?
As far as I know, Kafka does not support increasing the partitions of a topic dynamically. Here, it seems to me that we should use a different approach than the distributed cache design to solve hot partitions.
Thanks
Hi erol serbest,
Good question. I talk about hot partitions a little bit in this video: czcams.com/video/bUHFg8CZFws/video.html (step by step interview guide). One of the ideas mentioned there is to include the event time, for example in minutes, in the partition key. All events within the current minute interval are forwarded to some partition. The next minute, all events go to a different partition. Within one minute interval a single partition gets a lot of data, but over several minutes the data is spread more evenly among partitions.
In general, the hot partition problem is a tough one, and there is no ideal solution. People try to choose partition keys and strategies carefully to achieve a more or less even distribution. Typically, systems rely heavily on monitoring to identify hot partitions in time and react: for example, split partitions if the high load is consistent, or use a fallback mechanism to handle excessive traffic if it is temporary.
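A toy illustration of the first idea, mixing the event minute into the partition key. The hash function and partition count here are arbitrary choices for the sketch:

```python
import hashlib
import time

NUM_PARTITIONS = 8

def partition_for(key, minute=None):
    """Mix the current minute into the partition key: within one minute a
    single partition still takes the hot key's full load, but over many
    minutes the load spreads across partitions."""
    minute = int(time.time() // 60) if minute is None else minute
    h = hashlib.md5(f"{key}:{minute}".encode()).hexdigest()
    return int(h, 16) % NUM_PARTITIONS

# The same hot key lands on different partitions in different minutes.
parts = {partition_for("hot-video", m) for m in range(60)}
print(sorted(parts))  # typically several distinct partitions out of 8
```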
Thanks a lot for detailed explanation. much appreciated! ❤
I got an offer from an interview I did the day after binge-watching all your videos (looking forward to your distributed counter video!), on top of studying and reviewing all my previous notes on networking and algorithms. This really bridges a knowledge gap for those of us who have experience in specific areas but not enough to put a whole system together or think about it this way. When I used your videos as part of my review material, I always felt mentally prepared and confident to be in the driver's seat!
Hi SupremePancakes. Really glad for you! Thanks for sharing. Always nice to hear feedback like this!
Even though you passed the interview already, please come back to the channel from time to time. I want this channel not only help with interviews, but even more important, help to improve system design skill for your daily job.
Helping someone to become a better engineer is what makes this all worthwhile for me.
System Design Interview Of course!!! I look forward to more videos and how this channel grows
System Design Interview In the fast path, how is the heap constructed from the count-min sketch table?
Hi Tej. Please take a look at this comment and let me know if more details are needed: czcams.com/video/kx-XDoPjoHw/video.html&lc=UgzcpyPR8nmCoaxTV3Z4AaABAg.8xFD1xe1cgU91u3EpZgosP
Awesome session! Another super effective way to prepare: Do mock interviews with FAANG engineers at Meetapro.
You explained it very well, thank you!