I built data pipelines at Netflix that ran 2000 TBs per day, here’s what I learned about huge data!
- added 23 Mar 2024
- Check out my academy at www.DataExpert.io where you can learn all this in much more detail!
Use code EARLYSUB30 at checkout to be one of my first 100 paid academy subscribers!
#dataengineering
#netflix - Science & Technology
I’m so glad I found this video, I was just sitting here with 60 million gigabytes and was figuring out what joins to use so this was perfect timing.
if all u registered was 60 mil gb & joins ur not flowing
You're kidding, but somehow I just started a data analysis project of two terabytes and this video shows up.
@@aripapas1098 if you think comments must indicate a user registered every aspect of a video, ur not following
@@aripapas1098this is a sad comment
Sarcasm ???? 😂
Can't wait to build hyperscale pipelines for my startup with 0 users
But it sounds powerful when you say it, like you mean business.
Based
1 user (me)
If you build it, they will come.
I have 1k TB of data just sitting around in my backyard. Glad your video came up to get me started on at least something.
What I absolutely love about your videos is that, as a beginner in the data engineering field, you often talk about things I had no conception of. In this video, for example, I had never heard of SMBs or broadcast joins. This gives me an opportunity to learn these things, even just hearing them mentioned by someone as widely experienced as you.
You don't even necessarily have to go into detail; these short-form videos act as beacons of knowledge that I can throw myself into learning about.
Thanks a lot, and keep these coming Zach!
Really appreciate this comment! It reminds me that the value I'm putting out there is important!
@@EcZachly_ ✌
Great summation! I was thinking the exact same thing while watching. It's nice hearing even the specialized lingo from technical experts in their fields; it piques my curiosity.
@@EcZachly_thanks
@@EcZachly_did you already know the importance of these two before Netflix or did you learn that while working at Netflix?
In the future a wrist watch will have a little blinking light that will have 60 million gigabytes of data in it
You mean an Electron app?
yeah okay crack smoker
And it will still lag and hit 99% singularities
@@dhillaz that will just show current time
@@Ivan-Bagrintsev Yes it will show the time, but with full DRM. Unless you have a license to view certain minutes it will be denied.
Boyfriend simulator: you sit with your bf and he starts talking about this nerdy stuff you have no idea about but need to keep listening because you love him
This is exactly correct
aww 🥰
After marriage they no longer pretend to listen
If only a girl would fall for me when I speak nerdy stuff 🫠
@@rajns8643 are you kidding me? This is what most people like the most! Intelligent people are extremely attractive
I love that you kept it short and to the point.
He sure wanted to save some data… 😅
Thanks Zach, hopefully one day I will understand what all of that means
😂😂😂, I’m starting now
Thank you Zach for taking the time to give us the hard truth and hand down your experience. It helps a lot of enthusiastic students/people to know how we can in some way support or help others in the subjects we like. I don't imagine myself processing 2000 TBs per day, but it helps give a bigger picture. Once again, appreciate the short video and thank you for sharing
2 pita bites a day, the same as me when I’m on a diet.😊
Holy crap. I’m currently learning about data science, the various roles, etc. -with the hope of one day switching careers. But the current state of learning is all about the languages and software used etc, not about the infrastructure and what to do with massive datasets. So this just 🤯
its really about math but no one talks about it. Get at least 1 year of university-level math comprehension and then get into the Python and tech tools. The most competent and successful data engineers are always people with a good STEM background. For example, Zach has a Bachelor's Degree in Applied Mathematics and a Bachelor's Degree in Computer Science, so he is a heavy numbers guy. That's what most Data Science / Engineering YouTubers don't tell their viewers, because it would cause them to lose viewers.
learning the tools can be very different from solving real world problems.
@@samuelisaacs7557 True asf
@@samuelisaacs7557 Yep, even a business administration bachelor's will have a lot of math, and it's nowhere near data science, which is 3x that.
Great content, an honour to be able to listen to someone who has handled that volume of data.
literally 🎉
Have ChatGPT or some other LLM explain it to you.
Just started following you. Really appreciate you for sharing your knowledge with the community.
Half of what you said I had no idea what you were talking about, but I was very engaged and now I'm gonna look all this stuff up for centering my div!
Informative and straight to the point, great stuff as usual
instant subscribe - really appreciate the concise explanation and clear examples
Hey Zach, your content is consistently amazing! As a newcomer to the field, I'm considering diving into data engineering. What roadmap would you recommend, and are there any certifications that could enhance my journey? I already have a solid grasp of Python and SQL in data analysis.
Excellent video, thanks Zach!
Thanks for the info Zach. Could you please make a more elaborate video on the SMB join?
I am a regional IT installer who runs Cat6 Ethernet pipelines for managing 1gb loads on HP laptops, this video is really awesome and breaks down your workflow and mindset in a complicated field really efficiently. I would love to get more short videos about the industry like this.
I'll keep them coming. I make much more on Tiktok and Instagram since I like making vertical content!
@@EcZachly_ Ill check it out! Keep it up!
Thanks Zach, but I have a question: a broadcast join is used when we have a small dimension joined with a big table. Was this your case? Or did you use a hash join with two large tables?
I've never heard of these terms, thank you for sharing your real-case scenarios (the FB notification example)
In the 37 years I’ve been working in data, I’ve never heard anyone call it Peter 😂. PETA
What's wrong with a Peter bite?
Heya Peeda
Could be an accent or a slip 😂
I just found ur stuff but thanks for the content mang keep it up 🙏
Very important concept in such short time.. thank u so very much ❤
Thanks, looking forward to more such content
The amount of knowledge you shared here is astonishing
Great points to remember!
There are a lot more underlying abstraction layers you can add at these different points to further optimize the second network hop. Caching is a simple one.
Can you implement an efficient snapshot system with delta encoding of entities and compress the message? Would be a cool video for you to implement!
Did Facebook use Databricks or did they have HPC Clusters for you to run Spark on?
Thanks for sharing, now I can finally put some good numbers on my resume 🎉
If you come across a scenario where you need to join 2 large datasets, you could do an iterative broadcast join. Basically you break one of the dataframes into multiple dfs and join them against the other dataframe in a loop until all of the chunks have been joined.
You’ll require a lot of memory and have long start times, no?
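The iterative broadcast join described in the comment above can be sketched in plain Python. This is a toy stand-in for what a Spark job would do, not real Spark API; the function names (`chunked`, `iterative_broadcast_join`) are made up for illustration.

```python
# Toy sketch of an iterative broadcast join: the "too big to broadcast"
# side is split into broadcast-sized chunks, each chunk is turned into a
# hash map (standing in for a broadcast variable), joined, and the
# per-chunk results are unioned together.

def chunked(rows, size):
    """Split the oversized side into broadcast-sized chunks."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def iterative_broadcast_join(big, too_big_to_broadcast, chunk_size):
    """Join `big` against a table too large for a single broadcast."""
    results = []
    for chunk in chunked(too_big_to_broadcast, chunk_size):
        # In Spark this chunk would be broadcast to every executor;
        # here we just build an in-memory lookup per chunk.
        lookup = dict(chunk)
        results.extend(
            (key, payload, lookup[key])
            for key, payload in big
            if key in lookup
        )
    return results

big = [(1, "a"), (2, "b"), (3, "c")]
dim = [(1, "x"), (3, "z")]
print(iterative_broadcast_join(big, dim, chunk_size=1))
```

As the reply points out, each iteration still materializes a full lookup in memory, so this trades driver memory and job startup time for avoiding a shuffle.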
I would be interested in the architecture and content delivery for pre and post cdn from a network design perspective. Are there any examples or presentations regarding networking at netflix?
Insightful as always.💯
Appreciate that!
I love how you acronym Sorted Bucket Merge as SMB. Think you may have had Super Mario Bros on the mind 😂
Dude has beef with Bezos😂
Please more data stuff!!! I hardly understood what you said, but it’s sounds interesting
Hi, what about replacing torrents with IPFS? That's data pipelining, right ?
you can make a BIOS optimized for throughput and without interrupts, to speed it up 67x and more
Please keep up the great content!
Never thought broadcast join is a Netflix saviour
Damn I just wanted to shuffle like there’s no tomorrow and then I found this video.
Subscribing just for the britto. One of my favourite hoods
I love that I’m only a software engineer but I can understand all of this
Hey are you familiar with cosmosDB from azure? Its a db like mongo but claims to be able to scale infinitely... What are your thoughts on that?
Imma wait for Primeagen to confirm this as well when he reacts to this video inevitably 😁
optimizing selling personal data to minimize cost is something i never thought about
Thanks for the video
Insanely valuable content
I just felt like drinking from the fountain of knowledge and instantly drowning. Definitely haven't had to deal with these kinds of volumes yet...
Very useful and interesting, even to a layman
No idea what this guy is talking about, but thankful YouTube sent me this
My medical science clients called, they need an 800tb imaging data set parsed by end of day (thank you kubernetes)
Thank you Tony Hawk, very cool!
Managing retention, storage and flow is always important. Im sitting on a toilet as im writing this.
Thanks Zach for the video
That's cool bro. Will it fix the Netflix app where it shows the title of one show but the preview and description of another?
It was to look at network traffic to keep your credit card data secure
I'd like to learn more about these pitabytes. What are they? What do they taste like?
Pretty interesting, even though I had no idea about most of what he was talking about.
My problem is how do people even find out about the careers that they go into?
Are those joins available in MySQl or specific to dbms at meta you worked?
I think they're not available on MySQL because it's an OLTP database. Those joins are used for analytics
These are not database joins, they are processing joins. Frameworks such as Flink and Spark would leverage broadcasts.
It basically boils down to a single coordinator instance that publishes a small, often changing dataset to all parallel processors. Usually used to enrich, prune, or map the main dataset.
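That enrichment pattern can be sketched in a few lines of Python. This is an assumption-laden toy, not real Flink/Spark code: the small dimension table is shipped whole to every partition, so the big fact table is enriched in place with no shuffle.

```python
# Toy sketch of a broadcast join used for enrichment: every partition of
# the large fact table gets its own full copy of the small dimension
# table (the "broadcast"), so partitions join independently.

def broadcast_join(fact_partitions, dim_rows):
    """Enrich each fact partition with the broadcast dimension table."""
    dim = dict(dim_rows)  # the copy every parallel worker would receive
    for partition in fact_partitions:
        # Each partition is processed on its own; no network hop between
        # fact rows is needed because the lookup is fully local.
        yield [(user, event, dim.get(user, "unknown"))
               for user, event in partition]

facts = [[(1, "click"), (2, "view")], [(3, "click")]]
dims = [(1, "US"), (2, "CZ")]
print(list(broadcast_join(facts, dims)))
```

The trade-off is that the broadcast side must fit comfortably in each worker's memory, which is why it only works for small dimension tables.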
I'm trying to get into data analytics and most of this went over my head, but this still sounds lit 🔥
Sir this is a Wendy's.
Bro can figure out how to send my entire homework folder in 1/500th of a second but can’t flip the camera sideways
Whenever I hold on to more than 60 petabytes I just call the assistant to the regional manager and he runs a fix from his mainframe.
gotta love a good pita byte
I still bite my gigas when my man hustling meta in peta
Very nice. Short and sweet.
Glad you enjoyed it
Ah yes, data structures and sorting… but with the “can you even scale bro” tick enabled.
How do you deal with log data
"FNA developer"
I'm sorry, my brain couldn't let go of it
I might get 5 users on my site this month so this will come in handy
my data pipeline usually processes one pitabyte every other day and one shawarmabyte every week
What engine were you using to do these massive joins? Spark?
Yep!
Quality content!
Would you say that using bucketing, basically constraining yourself to an "acceptable" throughput and risking creating a gazillion files in the process, is a more acceptable approach than more ad hoc ones like Z-ordering and Bloom filters?
Bro is the PewDiePie of data Engineering
Is this only available with sparksql?
No, broadcasts can be leveraged in any processing framework that has two sets of processing logic: your highly parallelized logic, plus what is commonly a single process. The single process "broadcasts" data to all of the parallel instances. It can be implemented in other ways, but that is the most common.
When I was hired to do data engineering, it was always data that could fit on a single hard drive and it was boring af. I hated it. This sounds way more challenging and interesting.
I love technology and I know more than your average user, yet I have no IT qualifications and I am light years away from this knowledge, but for some reason, I love watching these videos as if I was ever going to use the information 😂
Interesting! I would have thought something like sharding (or partitioning and clustering) so data processing and access can scale horizontally.
Bucketing and clustering are similar
: multiple streams across entire ddrs directly accessible
I really wonder how Netflix reaches 100 TB/hr with just streaming videos.
Very informative. I wanna ask you, which certification can help me as a fresh graduate? Is the AWS Data Engineer certification worth it or not? And thanks a lot Zach
It’s pretty great!
Interviewer: name 5 data types
Me:
Oh yeah that’s really great and insightful, now what’s a join?
The Venn diagram of people who use TikTok and data scientists is two circles my dude lol
I have 66k followers on TikTok and this video did 375k views there.
I love pita bites as much as the next guy, but I don't think I can take more than 35 before I'm full
Hey, absolutely curious about the content you are doing.
In my company we are working with dbt and Snowflake. I can't find a way to use broadcast joins there. Do you see a possibility to replicate this process?
Snowflake isn't suitable for volumes >100 TBs in my opinion.
Clustering is an option in Snowflake that helps, though.
I use just a database with just value as field (long string) and nothing else
Bucketing is a one-time process, but what if new data comes in every day?
For example, if our bucketing takes say 2 hrs per day for say 10 GB of data (right table), and it grows by another 10 GB every day, won't it take more and more time as data accumulates?
You have to partition your data. Unless your data is genuinely doubling every day (which I doubt it is)
The bucket joins should only be between events for that day and dimensions for that day. Not all the data going back
As the business grows, this can still get bigger because 10 GB/day might become 50 after some time and you need to account for that
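The partition-then-bucket advice in this thread can be sketched as a toy in-memory model. This is an assumption, not anyone's production code: real pipelines would partition files on disk (e.g. by date in Hive/Spark tables), but the point is the same: bucket and join only one day's partition, so the cost tracks daily volume, not total history.

```python
# Toy sketch of a daily-partitioned bucketed join: only today's events
# and today's dimensions are bucketed by join key and joined, bucket by
# bucket, independently of all historical partitions.
from collections import defaultdict

def bucket(rows, num_buckets):
    """Hash rows into buckets by join key, like a bucketed table layout."""
    buckets = defaultdict(list)
    for key, value in rows:
        buckets[hash(key) % num_buckets].append((key, value))
    return buckets

def bucketed_join_for_day(events_by_day, dims_by_day, day, num_buckets=4):
    """Join only the given day's partition; matching buckets join
    independently (and could run in parallel)."""
    left = bucket(events_by_day[day], num_buckets)
    right = bucket(dims_by_day[day], num_buckets)
    out = []
    for b in left:
        lookup = dict(right.get(b, []))
        out.extend((k, v, lookup[k]) for k, v in left[b] if k in lookup)
    return out

events = {"2024-03-23": [(1, "click"), (2, "view")]}
dims = {"2024-03-23": [(1, "US")]}
print(bucketed_join_for_day(events, dims, "2024-03-23"))
```

Because each day's join only touches that day's partition, a day that grows from 10 GB to 50 GB makes that day's job slower, but never re-processes the accumulated history.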
Short and informative
Thank you! What other videos would you like to see from me?
Wow, didn't know Owen Wilson was working on data
Wow, if I knew all this, it's pretty amazing content...
If only...
Wait, i have 200TB/hr what do I do? Please help!
1. Are you a data engineer?
2. What tech is this? AWS, Snowflake?
I suddenly feel like pita bread...
Love the way you tried to make it sound more complicated than it actually is and failed.
ok, so how to do that ...can you make a screencast and show us how to do it!
I'm a senior-year software engineering intern. I didn't understand anything you said except "joins", not even the variants. Where can I learn things like this? Please
Me watching this not knowing anything hes talking about makes me feel like starting a big tech company 😀
He is channeling a young William Binney over here, isn't he