Paste bin system design | Software architecture for paste bin
- published 9 Jul 2024
- Pastebin is a service that allows users to share text over the internet by generating a unique URL. In this video, let's learn how to design Pastebin.
Scale correction: it's 13 million / 24 / 3600 ≈ 150 requests per second
System design: imgur.com/a/15E9eNa
#pastebinsystemdesign #pastebin #systemdesign
Hi Naren, this is one of the greatest system design overviews for this particular problem. A good approach in both LLD and HLD, well justified, with each trade-off of the implementation explained. Thank you!
Hands down the best channel I've found for system design. Kudos bro!
Hi Narendra. I listened to your system design videos and practised. Now I got the job I was trying to get. Thank you so much!
very underrated design video that covers most use cases, thx a lot
Outstanding, clear and very deep solution for this problem. Really enjoyed it. Very well presented.
Very good approach. Thank you. Wish me luck in my systems design interview today!
I loved this design! Thanks a lot! I'd like to add one point not covered in the capacity estimation: based on the input, users can add up to 10 MB per paste, at 100k pastes per day, which works out to 1000 GB per day.
I think it's good to mention that we can apply compression (manually, or let the DB do it) to the text; that way we can save up to ~60% of the initially calculated storage, reducing costs quite a lot.
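To put rough numbers on the compression idea, here's a quick sketch using Python's zlib; the actual ratio depends heavily on the content (the repetitive sample below compresses far better than a typical paste would):

```python
import zlib

# A deliberately repetitive sample paste; real savings depend on the content.
paste = ("Pastebin lets users share plain text over the internet. " * 200).encode()

compressed = zlib.compress(paste, level=6)
ratio = len(compressed) / len(paste)

print(f"original: {len(paste)} bytes, compressed: {len(compressed)} bytes")
print(f"ratio: {ratio:.2%}")

# Compression must be lossless: decompressing recovers the exact paste.
assert zlib.decompress(compressed) == paste
```

Whether the ~60% figure holds in practice would have to be measured on a real sample of pastes; source code and English text usually compress well, random data barely at all.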
This guy is truly a genius. Keep doing good work
Happy happieee birthday champ 😉
Your dedication of posting video even on your bday is just amazing.
Looking forward to learn more from you 😊
Thanks :)
just want you to know your videos are great, appreciate your efforts!
Thanks! Your videos are always top quality
A couple of things: 1. Using serverless will not give you a predictable SLA. 2. The cleanup service needs to delete the entry from the cache as well, else an expired paste will still be accessible. 3. Rather than DKGS, you could use UUIDs (at double the size, 128-bit), but then you would not need Redis or the DKGS.
If you use UUIDs inside the write-paste lambdas or containers, then there is a chance of duplication, as there are multiple instances. I don't think that will work well.
@@kartikvaidyanathan1237 A collision is when the same UUID is generated more than one time and is assigned to different objects. Even though it is possible, the 128-bit value is extremely unlikely to be repeated by any other UUID. The possibility is close enough to zero, for all practical purposes, that it is negligible.
@@rabindrapatra7151 yes, but what is stopping multiple lambdas from generating the same UUID? I understand it if the ID gen service is centralised and the UUID comes from there, but my understanding is that each lambda generates a UUID internally.
"The cleanup service needs to delete the entry from the cache as well, else an expired paste will still be accessible" -> Can't we add it to the cache with a TTL? That should solve the issue automatically.
@@TheCosmique11 According to Wikipedia, the number of random version-4 UUIDs which need to be generated in order to have a 50% probability of at least one collision is 2.71 quintillion.
en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates:~:text=22%5D%5B23%5D-,Collisions,-%5Bedit%5D
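The birthday-bound figure quoted above can be reproduced in a couple of lines, assuming the standard 122 random bits of a version-4 UUID:

```python
import math

# Version-4 UUIDs carry 122 random bits (6 of the 128 bits are fixed).
random_bits = 122

# Birthday-bound approximation: number of IDs needed for a 50% chance
# of at least one collision is ~ sqrt(2 * 2^122 * ln 2).
n_50 = math.sqrt(2 * 2**random_bits * math.log(2))
print(f"{n_50:.2e}")  # ~2.71e18, i.e. 2.71 quintillion
```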
Whenever I get a job I'm definitely joining your channel
Thank you, Narendra, for your exceptional content! I learned a lot from you.
Hi Narendra - Thanks for making these system design videos. It's useful for all engineers, irrespective of whether they are interviewing or not. Please make a detailed video on the following topics: a) System design for a heatmap, let's say a heatmap of Uber drivers. b) ZooKeeper functionality. c) DevOps best practices or a DevOps series.
sir, you are doing a great job, please don't stop making such videos
Great work. I really appreciate the explanation. It was really well done.
If anyone is wondering where 64 comes from at 20:40: [A-Z, a-z, 0-9] sums to [26+26+10] = 62, and special characters like '+' and '/' bring it up to 64, as in Base64 encoding.
Thanks Narian for the great videos! I have a question regarding the DKGS: isn't it an overhead? I mean, the key generation formula (timestamp + node_id + counter) already seems enough to cover the uniqueness requirement. And even if there were a collision, is it really worse to call the DB directly than to call the DKGS service? Thanks!
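As a rough illustration of the timestamp + node_id + counter scheme discussed here, a simplified Snowflake-style generator could look like the sketch below (the 41/10/12 bit split is an assumption borrowed from Twitter Snowflake, not necessarily the video's exact layout):

```python
import threading
import time

class SnowflakeLike:
    """Simplified Snowflake-style ID: 41-bit timestamp | 10-bit node | 12-bit counter.
    A sketch of the timestamp + node_id + counter idea, not a production generator."""

    def __init__(self, node_id: int):
        assert 0 <= node_id < 1024  # must fit in 10 bits
        self.node_id = node_id
        self.counter = 0
        self.last_ms = -1
        self.lock = threading.Lock()

    def next_id(self) -> int:
        with self.lock:
            now_ms = int(time.time() * 1000)
            if now_ms == self.last_ms:
                self.counter = (self.counter + 1) & 0xFFF  # 12-bit sequence
                if self.counter == 0:  # sequence exhausted: wait for the next ms
                    while now_ms <= self.last_ms:
                        now_ms = int(time.time() * 1000)
            else:
                self.counter = 0
            self.last_ms = now_ms
            return (now_ms << 22) | (self.node_id << 12) | self.counter

gen = SnowflakeLike(node_id=7)
ids = [gen.next_id() for _ in range(1000)]
assert len(set(ids)) == 1000  # unique and monotonically increasing per node
```

Uniqueness holds as long as each node has a distinct node_id, which is exactly the coordination problem a DKGS (or ZooKeeper) is often brought in to solve.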
Really cool idea to show the preview of 100KB while fetching data from S3.
Very nicely explained thank you.
Simply awesome!!😎😎
Your videos keep getting better
Great explanation!!
Thank you for great content :)
This is good. Thank you :)
Please make a video on github design! :)
yes make a video on this.
yes Tech Dummies please make a video on this.
I would love to watch this!
hey,
thanks for the tutorial. This channel is really awesome.
I have a couple of doubts:
1. When you mention the database comparison at ~13:00, why did you choose partitioning in an RDBMS? I think partitioning would be easier in the NoSQL world, as some databases provide an in-built solution for it, and an RDBMS is often easier to scale vertically than horizontally. Isn't it better to have NoSQL if we want to scale later?
2. And if we use partitioning, the way you described it, once one DB is filled we go to the next. Shouldn't we be using consistent hashing in this case instead of "one range filled, go to the next"? Range-based partitioning has multiple DB nodes running in parallel, each handling different data.
Please let me know your thoughts
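For anyone curious about point 2, a minimal consistent-hash ring looks something like this (node names are hypothetical, and a real deployment would tune the virtual-node count):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring, to contrast with range-based partitioning:
    adding or removing a node remaps only a slice of the keys, not all of them."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []  # sorted list of (hash, node)
        for node in nodes:
            self.add_node(node, vnodes)

    def _hash(self, key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str, vnodes=100):
        # Virtual nodes spread each physical node around the ring.
        for i in range(vnodes):
            bisect.insort(self.ring, (self._hash(f"{node}#{i}"), node))

    def get_node(self, key: str) -> str:
        # Walk clockwise to the first vnode at or after the key's hash.
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, chr(0x10FFFF)))
        return self.ring[idx % len(self.ring)][1]

ring = ConsistentHashRing(["db1", "db2", "db3"])
print(ring.get_node("paste:abc123"))  # same key always maps to the same node
```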
Happy birthday 🎂 Dear bhaiya😘
Wishing you a very happy and prosperous birthday, full of love, joy and happiness...🌷🌺🏵️🌸💐
.....
.....
It's belated but yeah.....🥰
🌸🌿 Stay blessed , safe and protected always ❣️🌿🌸
hi. some really important points for decision making. thanks
I love your videos. you taught me a lot.
I hope you make a video of e-learning system design.
Great Job👍
Waiting for Token based authentication system design.
This is super good, bro... thanks a lot for your great work! Could you do a video about calculating those commonly seen volumes, e.g. how many bytes one video takes, etc.? Thx :D
How is "user can decide on the pastebin link" solved? It seems the user can keep trying for a unique ID of length 10 (as constrained) and keep hitting already-used ones, going back and forth until they find a unique key. Does the Snowflake generator, or just a normal key generator, help with this?
Hi, I think the paste table needs a 'usr_id' field so you can find out who created the paste.
@29, if we know the algorithm to generate keys (timestamp + node_id + counter), then why can't the write service generate them on its own? It would take almost negligible time.
Hi Naren, your channel is full of resources... thanks for sharing. Could you do a video on designing a dashboard for efficient data center monitoring, using a data model?
Excellent explanation Narendra, Wish you a very happy birthday dear !!!
Thank you so much 🙂
Hi Narendra, maybe I missed something, but why can't we store everything in S3 and use a geo-specific CDN around it to get the data very fast? In that case we don't need the actual data in a DB.
Thanks for the explanation!
In the async clean-up step, should we also clear the value in the memcache? For long texts, we'd end up with a preview (from the cache), but when it tries to fetch the full text from the DB it'll error out?
Mostly we will cache the data with some expiry time, like 2 days or 7 days, so it will expire automatically.
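For illustration, the TTL-expiry idea discussed in this thread boils down to something like the tiny in-memory sketch below; Redis (EXPIRE) and memcached (exptime) provide the same behaviour natively:

```python
import time

class TTLCache:
    """Tiny in-memory sketch of TTL-based expiry. Entries past their deadline
    are dropped lazily on read, so expired pastes are never served."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # lazily drop expired entries
            del self._store[key]
            return None
        return value

cache = TTLCache()
cache.set("paste:xyz", "hello world", ttl_seconds=0.05)
assert cache.get("paste:xyz") == "hello world"
time.sleep(0.1)
assert cache.get("paste:xyz") is None  # expired automatically
```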
Hi Narain, if we are using Twitter Snowflake to generate unique IDs and derive the key from the ID using base62 conversion, then there is no chance of collisions. So I think we don't need the KGS service: the write service can handle the key generation part, and there's no need to check the DB for whether a key exists, as keys are always unique. What do you think?
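For reference, the base62 conversion mentioned here is just repeated division; a minimal sketch:

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62_encode(n: int) -> str:
    """Convert a numeric ID (e.g. a Snowflake ID) into a short base62 key."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

# A 64-bit ID encodes to at most 11 base62 characters.
print(base62_encode(1234567890123456789))
```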
Hi Narain. Great video. However, there seems to be a mistake at 3:26 in the estimation part. 100k/(24*3600) = ~1.5 not 1.5k writes/sec.
Feels stupid, why did I make that mistake😁
@@TechDummiesNarendraL You are doing thousands of smart things. No prob!!!
Your comment confused me further. You said "not 1.5k writes/sec" but in video he said "150 writes/sec". Did you mean it should be 1.5 writes/sec and not 150 writes/sec?
Also to note: here, "per hour" and "per second" are not really used in the design later on.
@@IC-kf4mz Yes, it should be 1.5 writes every second.
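The back-of-envelope arithmetic behind this thread, spelled out (using the 100k writes/day and 13M reads/day figures from the video):

```python
writes_per_day = 100_000
reads_per_day = 13_000_000
seconds_per_day = 24 * 3600  # 86,400

writes_per_sec = writes_per_day / seconds_per_day  # ~1.16, roughly one write/sec
reads_per_sec = reads_per_day / seconds_per_day    # ~150 reads/sec

print(f"{writes_per_sec:.2f} writes/sec, {reads_per_sec:.0f} reads/sec")
```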
good video sir
Hey Narendra, what should be the initial approach to any system design? E.g., should we split the requirements into functional and non-functional?
Also, which videos should we go through first to get the fundamentals before the full-scale designs?
Any recommendations on materials to understand the basic modules of system design?
Let's see how many legends will watch this incredible video to the end🧡
for unique keys, can't we use same zookeeper process as we used in the url shortening design?
Can you please make a video about Notion? I'm working on my final year project at university, measuring the impact of various software on the planet - your videos are so helpful! Thank you
Happy Birthday! 🎉🎂
Can't we use the range-based approach with ZooKeeper, as you explained in the URL shortener design video, to generate the keys?
Please try to include a food delivery system (real-time tracking, multiple apps for each type of user (buyer/delivery/hotel), sync between the apps, sending device info when the app is web-based/SDK-based, etc.) and an e-commerce site (e.g. best practices for a Big Billion Day sale with low latency (how we can optimize locking), order management services where millions of customers want to view orders in real time, order tracking, cancelling orders, and similar design challenges in such systems).
Yeah, please make a video on Swiggy or Zomato's system design.
lol - in other words "please design my business idea"
Your video is great !!
Had one small doubt though:
How can a 10-character key correspond to only 100 GB?
Ideally, total storage for 10-character keys can be (62^10) * 10 bytes [assuming a character takes 1 byte], which is a lot more than 100 GB.
If we assume we are using a 5-character key, we get (62^5) * 5 bytes, which is approx 4.6 GB.
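The keyspace arithmetic in this thread can be checked directly:

```python
def keyspace_bytes(alphabet_size: int, key_len: int) -> int:
    # Total bytes if every possible key of this length were stored,
    # at one byte per character.
    return (alphabet_size ** key_len) * key_len

GB = 10**9
print(f"62^10 keys: {keyspace_bytes(62, 10) / GB:.3g} GB")  # ~8.4e9 GB, i.e. exabytes
print(f"62^5 keys:  {keyspace_bytes(62, 5) / GB:.3g} GB")   # ~4.6 GB
```

In other words, a 10-character keyspace is astronomically larger than 100 GB; the 100 GB figure only makes sense as the storage for the keys actually issued, not the full keyspace.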
12:30, isn't 65 KB the max row size in DBs like MySQL? How are we going to store 100 KB of data in the content field then?
Thanks Narendra! I learned lots of great ideas from you! One question: there can be a race condition where two write APIs get the same key from the DKGS. How can we prevent such a race condition?
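One common answer is to make the DKGS hand out keys (or whole key ranges) through a single atomic operation, e.g. a Redis INCRBY, so no two write services can ever receive the same key. A thread-safe sketch of the idea:

```python
import threading

class KeyRangeDispenser:
    """Sketch of a DKGS handing out non-overlapping key ranges atomically.
    In Redis this would typically be a single atomic INCRBY."""

    def __init__(self, range_size: int = 1000):
        self._next = 0
        self._range_size = range_size
        self._lock = threading.Lock()

    def acquire_range(self):
        with self._lock:  # the atomic step: no two callers see the same start
            start = self._next
            self._next += self._range_size
        return start, start + self._range_size

dispenser = KeyRangeDispenser()
ranges = []

def worker():
    ranges.append(dispenser.acquire_range())

threads = [threading.Thread(target=worker) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

starts = [r[0] for r in ranges]
assert len(set(starts)) == 50  # no two workers received the same range
```

Handing out ranges rather than single keys also cuts the number of round trips to the DKGS.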
first one!
Thanks for the video. What about adding a CDN between the end user and the read path? This would cache and offload a large portion of the read requests from ever hitting the API; only a small percentage of pages would need to hit the origin. You could introduce a queue for expiring cached pages upon edit or deletion.
I did not understand the 64^10 part.
I understand that we have a 64-bit (8-byte) unique ID that can be used in our URL, right? How are we generating a 10-character URL with 64 possibilities at each character?
Thanks! I think the cleanup service has to delete the record from Memcache if the record is present in the cache.
Awesome video. The distributed key generation service could use a bloom filter instead of Redis.
Why? A bloom filter is not deterministic; it is a probabilistic data structure. It gives true negatives but can give false positives.
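To make the trade-off concrete, here is a minimal Bloom filter sketch: lookups for added keys always return true (no false negatives), while lookups for absent keys are usually, but not always, false:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: answers 'definitely not present' or 'probably present'.
    Never a false negative; false positive rate tunable via m_bits and k."""

    def __init__(self, m_bits: int = 1 << 16, k: int = 4):
        self.m = m_bits
        self.k = k
        self.bits = 0  # big int used as a bit array

    def _positions(self, item: str):
        # Derive k bit positions from k independent-ish hashes.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: str) -> bool:
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
bf.add("key-abc")
assert bf.might_contain("key-abc")      # added keys: always true
print(bf.might_contain("never-added"))  # usually False, occasionally a false positive
```

For a "has this key been used?" check, a false positive only wastes one key, which may be an acceptable trade for the memory savings.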
This is grade A stuff!
Hi, very nice video. Why are you saying that memcached is faster than redis for this usage pattern? Are there some benchmarks or white papers? Or what are you basing this assumption on?
Great video, thanks for your effort. One doubt though: how do we allow a custom string provided by the user? I understand that we can store a mapping between the key and the custom string for each user. However, how do we do that for an anonymous user? Wouldn't we be overwriting the paste or exposing the name of the paste in that case?
Hi Narain, good video and explanation.
A question on S3 blob retrieval: do you share the blob link with the client, and the client downloads the content? In that case, how does authentication to S3 happen?
Or
Does the web server behind the gateway retrieve the content from S3 with authentication and then send the downloaded data to the client?
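For the first option, the usual pattern is a presigned URL: the server signs the object key plus an expiry, so the client can fetch the blob directly without ever holding storage credentials. Real S3 presigning uses SigV4 (e.g. boto3's generate_presigned_url); the sketch below only illustrates the idea with a plain HMAC, and the secret and domain are made-up placeholders:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"server-side-secret"  # hypothetical signing key, never sent to clients

def make_presigned_url(object_key: str, ttl_seconds: int = 300) -> str:
    """Sign (object_key, expiry) so the bearer of this URL can fetch the blob
    until the link expires. Illustrative only; real S3 presigning is SigV4."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{object_key}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    query = urlencode({"expires": expires, "signature": sig})
    return f"https://blobs.example.com/{object_key}?{query}"

def verify(object_key: str, expires: int, signature: str) -> bool:
    """What the blob front-end checks before serving the object."""
    if time.time() > expires:
        return False  # link has expired
    payload = f"{object_key}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

print(make_presigned_url("paste/abc123"))
```

This keeps large downloads off the web servers while still letting them control who gets access and for how long.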
Shouldn't the cleanup service also clean up the used keys in Redis?
I am wondering why blob storage reads are not cached?
The traffic estimate for writes is incorrect: 100K/(24*3600) ≈ 1.2 pastes/sec. Am I missing something here?
interesting thanks
Why were both redis and memcached used? Wouldn't either solve the job?
When a key expires, should we clear Memcache as well? Or will it be cleared automatically once the DB entry is cleared? Thanks.
Does Memcached support replication? I guess not, and it isn't great for distributed systems. I would like to know more about why Memcached is a better choice here than Redis.
Hey Naren, where are you these days?? I really miss your videos on SAD. Please come back..
Hi Naren, thank you so much for the amazing work of teaching system design that you are doing. Over the last few minutes I just kept wondering: should we not flush the paste out of the cache as well? What happens if my highly popular paste just never leaves the cache, even though it's long dead in the DB?
Yup, you're right! Also, most caching services have a built-in expiry feature, so those services can work independently.
Do we really need to keep track of used token ranges? Threads in the key generation service will always increment and return anyway; the question of handing out already-used keys will never arise.
One small correction: there should be a userId field in the paste table (13:48).
For the key generation service, can we use a bloom filter to reduce the Redis memory usage?
Yes
@@TechDummiesNarendraL Hmm... a bloom filter may return true when the key isn't actually there, wouldn't it? I suppose since we have quite a few IDs to spare in this case, that could do. :)
(64^10)*10 ≈ 1e+19, so shouldn't this be more like 10000000 TB and not 100 GB?
Hi Narendra, can you please discuss the feature of sharing a paste with other users, and how to handle it at scale?
Why are we using SQL for storing data here?
For keys can't we use UUID V4?
Thanks Narendra for the great videos. I see you corrected the traffic estimation; shouldn't that also change the storage estimation?
Hi Naren, great content as always. I noticed you have not posted since last year. Are you going to put out more great videos like you used to, or are you taking a break?
The paste schema also needs a user_id, and the user schema needs a list of all pastes created, to fulfill the functional requirements.
I have a suggestion/question: could we not create a GUID instead of using the DKGS?
Put a queue/message broker between the DKGS and the consumers.
Please describe socket connections in a distributed, multi-server system.
I think the redis and memcached are both performant enough.
1. Is the schema important to discuss in detail in a system design?
2. If serverless here, can we also use serverless in the URL shortener?
3. Why do we use Memcached and not Redis - just cost? High performance and cost?
4. Is ZooKeeper a DKGS?
Why couldn't ZooKeeper be used to generate unique IDs in this case too, just like in the URL shortener video, instead of a KGS?
Hi sir, I have one question. I am new to all this (I just passed 12th), so it might be a dumb question for you, but please reply with the answer.
If a pastebin entry expires, how do we delete it from the database? Do we scan the database daily for which texts have expired and remove them, or is there another technique? Which one is best and most optimized?
And if a bin expires after 10 mins, 1 hr, etc., how do we implement that?
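One common approach to the cleanup question is exactly the periodic scan-and-delete, made cheap with an index on the expiry column so the job never scans the whole table. A sketch with SQLite (table and column names are assumptions):

```python
import sqlite3
import time

# Each paste stores its own expiry timestamp; an index on expires_at lets the
# cleanup job find expired rows without a full table scan.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pastes (key TEXT PRIMARY KEY, content TEXT, expires_at REAL)")
db.execute("CREATE INDEX idx_expires ON pastes(expires_at)")

now = time.time()
db.execute("INSERT INTO pastes VALUES ('old', 'expired paste', ?)", (now - 60,))
db.execute("INSERT INTO pastes VALUES ('new', 'live paste', ?)", (now + 3600,))

# The statement a scheduler (cron, a worker loop) would run every few minutes:
db.execute("DELETE FROM pastes WHERE expires_at < ?", (time.time(),))
db.commit()

remaining = [row[0] for row in db.execute("SELECT key FROM pastes")]
print(remaining)  # ['new']
```

For short TTLs like 10 minutes, the read path should also check expires_at before serving, so a paste whose deadline has passed is never returned even if the cleanup job hasn't reached it yet.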
Thank you so much for uploading great content. Can you please explain what happens to the local counter when it recovers from a failure? Will it be reset to its initial value?
Why can't we use Zookeeper for key generation just like we used for url shortener?
You can; the reason why I didn't use it is to show different ways of doing the same thing.
Redis vs Memcached: when to use which?
Make a video on tradingview system design....
could you do a video on WAZE system design? Or is that similar to UBER.
Sir make a video on charting website like tradingview system design.....
Bro, can you make a video on the Zerodha Kite system design?
Do you still make videos in this channel?
hey bro,
if you have time, can you make a video on the Brave browser's blockchain-based system design and architecture? thanks
Why have you used serverless for this problem? It will cost more than using a full server.
Please make a video on Skip list
Aren't the initial back-of-the-envelope calculations wrong?