Hadoop Tutorial - Architecture
- added 22 Feb 2017
- Spark Programming and Azure Databricks ILT Master Class by Prashant Kumar Pandey - Fill out the google form for Course inquiry.
forms.gle/Nxk8dQUPq4o4XsA47
-------------------------------------------------------------------
Data Engineering is one of the highest-paid jobs of today.
It is going to remain among the top IT skills for years to come.
Are you in database development, data warehousing, ETL tools, data analysis, SQL, or PL/SQL development?
I have a well-crafted success path for you.
I will help you get prepared for the data engineer and solution architect role depending on your profile and experience.
We created a course that takes you deep into core data engineering technology and helps you master it.
If you are a working professional aspiring to:
1. Become a data engineer.
2. Change your career to data engineering.
3. Grow your data engineering career.
4. Get the Databricks Spark Certification.
5. Crack Spark Data Engineering interviews.
ScholarNest is offering a one-stop integrated Learning Path.
The course is open for registration.
The course delivers an example-driven approach and project-based learning.
You will practice the skills using MCQs, coding exercises, and capstone projects.
The course comes with the following integrated services.
1. Technical support and Doubt Clarification
2. Live Project Discussion
3. Resume Building
4. Interview Preparation
5. Mock Interviews
Course Duration: 6 Months
Course Prerequisite: Programming and SQL Knowledge
Target Audience: Working Professionals
Batch start: Registration Started
Fill out the below form for more details and course inquiries.
forms.gle/Nxk8dQUPq4o4XsA47
--------------------------------------------------------------------------
Learn more at www.scholarnest.com/
Best place to learn Data engineering, Bigdata, Apache Spark, Databricks, Apache Kafka, Confluent Cloud, AWS Cloud Computing, Azure Cloud, Google Cloud - Self-paced, Instructor-led, Certification courses, and practice tests.
========================================================
SPARK COURSES
-----------------------------
www.scholarnest.com/courses/s...
www.scholarnest.com/courses/s...
www.scholarnest.com/courses/s...
www.scholarnest.com/courses/s...
www.scholarnest.com/courses/d...
KAFKA COURSES
--------------------------------
www.scholarnest.com/courses/a...
www.scholarnest.com/courses/k...
www.scholarnest.com/courses/s...
AWS CLOUD
------------------------
www.scholarnest.com/courses/a...
www.scholarnest.com/courses/a...
PYTHON
------------------
www.scholarnest.com/courses/p...
========================================
We are also available on the Udemy Platform
Check out the below link for our Courses on Udemy
www.learningjournal.guru/cour...
=======================================
You can also find us on Oreilly Learning
www.oreilly.com/library/view/...
www.oreilly.com/videos/apache...
www.oreilly.com/videos/kafka-...
www.oreilly.com/videos/spark-...
www.oreilly.com/videos/spark-...
www.oreilly.com/videos/apache...
www.oreilly.com/videos/real-t...
www.oreilly.com/videos/real-t...
=========================================
Follow us on Social Media
/ scholarnest
/ scholarnesttechnologies
/ scholarnest
/ scholarnest
github.com/ScholarNest
github.com/learningJournal/
========================================
Want to learn more Big Data technology courses? You can get lifetime access to our courses on the Udemy platform. Visit the link below for discounts and coupon codes.
www.learningjournal.guru/courses/
What an explanation!!!! GREAT GREAT GREAT explanation... going straight to the concept, covering all the features in a short time... a BIG BIG BIG THANK YOU...
Wonderful instructor! I watched many videos on Hadoop and got confused by all of them, but this one is amazing:)
You have all the instructor's skills and deep knowledge of Hadoop.
great tutorials! You make complex things look simpler
Very clear and well presented. A good use of graphics and did not make the all too frequent mistake of simply reading out the text already on the slide
Very clear explaining ! Thank you so much
Crystal clear explanation of concept. Thank you sir . God bless you .
Really Amazing mate! your way of delivery is awesome...keep teaching
I disabled Adblocker just to support you :)
@Learning Journal sir, your videos are exemplary, simple - straight forward and highly informative. Keep up this good work to the society. God Bless :)
Clear, concise, goes straight to the point! thanks!!!
I'm really happy with your explanation. Thank you very much, sir...
Best Video to understand the core concepts. I saw almost 20 other videos but this one is the best
Your video training is the best!!!
Your unique way of explanation made complex architecture easy to understand and follow. Thank you and keep it up Sir.. :)
Very nicely explained. Things were complex, but they have been explained easily. Thanks a lot!!!
Great explanation, going directly to the point. Thanks
One of the best explanations I have come across. Thanks
Thank you for great explanation!
Fantastic explanation of the architecture.
Thanks! Great explanation!!!
Sir, I like your way of teaching... lots and lots of thanks
I'm a beginner at Hadoop and big data platforms. I watched a lot of tutorials, but yours is clearly the best. Keep going, you are a great teacher!!
Great... You made harder thing simpler ...
Great video, thank you!
What a wonderful video!
WOW!!! What a wonder! Thank you so much
you are a great teacher. I love you man.
Excellent and clear explanation
Thanks very well explained.
Thank you so much, sir. This is a well and clear explanation.
excellent lecture, thank you very much sir
very nice explanation. many thanks.
clear explanation. thank you so much!!
Excellent. Thank you.
You explained it very well.
Great Video
beautifully explained...
Amazing!
Great video.
Excellent!
BEST OF THE BEST, thank you very much
Excellent explanation 👍
Awesome video, sir. Keep making them, keep helping us.
Well explained
Thank you!
Thank you
You have beautifully explained the write operation in hdfs, could you also explain how read happens?
Very good...
wonderful
Very good explanation!
Thank you, sir
Thank you very much for your encouragement
Nice
Why a local buffer? What is streaming read/write capability? How does it help us? Please explain, sir.
In the summary you wrote that the client can interact directly with the DataNode. Isn't it required to go to the NameNode to learn the file/block location before reading/writing data? Please explain.
Question 1: Around 4:10, "the NameNode will create an entry for a new file" — does it actually create a new file 'myfile.txt' on, say, DataNode 3 as per this picture?
Question 2: Can one actually log in to a DataNode (let's say #3 in this case) and do an 'ls -l' to see the files that got created? I know one can use 'hadoop fs -ls' to see files across the whole Hadoop cluster, but is it possible to look at a file on a particular DataNode?
Question 3: When the multiple 128 MB blocks of 'myfile.txt' are stored on different DataNodes, do we see a partial 'myfile.txt' (when doing 'ls -l') on each of those DataNodes, with just part of the original file's content?
Ans 1: The NameNode creates only metadata for the file and allocates DataNodes to hold the data blocks.
Ans 2: You can log in to the DataNode and do 'ls -l', but you won't see your file by name because only a few blocks are stored there.
Ans 3: Yes, you will see the block files at the appropriate location.
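The answers above can be sketched in a small Python model (the class and function names here are illustrative assumptions, not real Hadoop APIs): the NameNode keeps only file-to-block metadata and decides which DataNodes hold each block; the actual bytes never pass through it.

```python
# Hypothetical sketch of NameNode-style metadata management; names and the
# round-robin placement are simplifications, not real Hadoop internals.

BLOCK_SIZE = 128 * 1024 * 1024  # HDFS default block size: 128 MB

class NameNode:
    def __init__(self):
        # file path -> list of (block_id, [datanode_ids]); metadata only,
        # no file content is ever stored here.
        self.metadata = {}

    def create_file(self, path, size_bytes, datanodes, replication=3):
        """Create only an entry for the file and allocate DataNodes."""
        num_blocks = max(1, -(-size_bytes // BLOCK_SIZE))  # ceiling division
        blocks = []
        for i in range(num_blocks):
            # Simplified placement: spread replicas round-robin.
            targets = [datanodes[(i + r) % len(datanodes)]
                       for r in range(replication)]
            blocks.append((f"{path}#blk_{i}", targets))
        self.metadata[path] = blocks
        return blocks

nn = NameNode()
blocks = nn.create_file("/user/demo/myfile.txt", 300 * 1024 * 1024,
                        ["dn1", "dn2", "dn3", "dn4"])
print(len(blocks))    # 300 MB -> 3 blocks of up to 128 MB each
print(blocks[0][1])   # each block placed on 3 DataNodes
```

Logging in to any single DataNode would show only the block files it holds, never the whole 'myfile.txt' — exactly as in Ans 2 and Ans 3 above.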
This tutorial is so nice...
What happens if the streamer fails to copy the data to the DataNode? The directory entry was already created on the NameNode, so what happens to that entry, given that it doesn't contain anything because the streamer failed to copy the data to the DataNode?
Sir, I wonder how one can make concepts as crystal clear as you do. By the way, sir, may I know your name?
Please upload videos about the block cache, cache pools, HDFS federation, and NFS. Please, and thank you.
Sure, will add these topics to my list.
Suppose we maintain 3 copies of the data (1 on rack A, 2 on rack B), and rack B fails due to some network problem. Hadoop can still access the data from rack A, which is fine. But my doubt is: before we fix rack B, what if rack A also fails? How do we get the data? Is there a mechanism that maintains the replication factor of 3 when some copies fail — i.e., does it create those 2 missing copies from the rack A copy before the rack B problem is fixed?
Yes, it does maintain the copies when it identifies that a block is under-replicated.
@@ScholarNest So it creates those 2 copies on another rack (or the same rack), and we end up with 5 copies of the file — 3 working and 2 unreachable. Am I right? If so, when the rack B problem is resolved and those nodes rejoin the network, what does Hadoop do with the extra 2 copies? Does it delete them to maintain the replication factor of 3? If they aren't deleted, and this situation arises frequently, we would unnecessarily consume storage.
It does cleanup as well :-)
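The exchange above can be illustrated with a short Python sketch (the function and node names are hypothetical, not Hadoop's actual replica-management code): when a rack goes down, under-replicated blocks get new copies on live nodes; when the rack returns and the block is briefly over-replicated, the surplus copies are trimmed back to the replication factor.

```python
# Hypothetical sketch of HDFS-style replica reconciliation; names and the
# selection policy are illustrative assumptions, not real Hadoop internals.

def reconcile(replicas, live_nodes, target=3):
    """Keep exactly `target` replicas: drop copies on dead nodes,
    re-replicate onto live nodes, and trim any surplus."""
    live = [n for n in replicas if n in live_nodes]
    if len(live) < target:
        # Pick new homes from live nodes that don't already hold a copy.
        candidates = [n for n in live_nodes if n not in live]
        live += candidates[: target - len(live)]
    return live[:target]  # extras beyond `target` are cleaned up

# Rack B ("B1", "B2") is down: only the rack-A copy survives,
# so two new replicas are created on other live nodes.
after_failure = reconcile(["A1", "B1", "B2"],
                          live_nodes=["A1", "A2", "A3"])
print(after_failure)  # ['A1', 'A2', 'A3']

# Rack B returns: there are briefly 5 copies; the surplus 2 are deleted.
after_recovery = reconcile(["A1", "A2", "A3", "B1", "B2"],
                           live_nodes=["A1", "A2", "A3", "B1", "B2"])
print(after_recovery)  # ['A1', 'A2', 'A3']
```

Real HDFS additionally applies rack-awareness when choosing which replicas to add or delete; this sketch only shows the count being restored and the cleanup of extras.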
I have a question regarding a DataNode. Assume that the DataNode is broken. How is the broken DataNode detected by Hadoop? What actions does HDFS take in this case, and what about access to that DataNode?
The DataNode sends heartbeats to the NameNode. The absence of heartbeats indicates a lost DataNode.
ok
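A minimal sketch of that heartbeat check in Python (the function name and data layout are assumptions; the 630-second figure comes from HDFS's defaults, where a DataNode is marked dead after roughly 10.5 minutes of silence):

```python
# Hypothetical heartbeat monitor, not real NameNode code. HDFS's default
# dead-node timeout is 2 * heartbeat recheck (5 min) + 10 * heartbeat
# interval (3 s) = 630 s, i.e. about 10.5 minutes.

def dead_datanodes(last_heartbeat, now, timeout_secs=630):
    """Return DataNodes whose last heartbeat is older than the timeout."""
    return sorted(dn for dn, t in last_heartbeat.items()
                  if now - t > timeout_secs)

# dn2 has been silent for 703 seconds, past the timeout.
heartbeats = {"dn1": 1000.0, "dn2": 300.0, "dn3": 995.0}
print(dead_datanodes(heartbeats, now=1003.0))  # ['dn2']
```

Once a DataNode is marked dead, the NameNode stops routing reads to it and schedules re-replication of the blocks it held, as discussed in the rack-failure thread above.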
Thank you, sir... can we have the learning PPTs, please?
What do you want to do with the PPTs?
Your unique method of teaching helps in understanding the topic well. However, to remember or quickly recollect it later during interview preparation, the same reading material would help.
Since we understand the content through your videos, we can easily recollect and revise with the PPTs during preparation.
Very
Thank you, sir... could you take my class, please?
No private tuitions.
Sir, can you please provide notes?
yes.
Thank you