73 - Image Segmentation using U-Net - Part1 (What is U-net?)

  • Published 28 June 2024
  • Many deep learning architectures have been proposed to solve various image processing challenges. Some of the well-known architectures include LeNet, AlexNet, VGG, and Inception. U-Net is a relatively new architecture, proposed by Ronneberger et al. for semantic image segmentation. This video explains the U-Net architecture; a good understanding is essential before coding.
    Link to the original U-Net paper: arxiv.org/abs/1505.04597
    The code from this video is available at: github.com/bnsreenu/python_fo...
  • Science & Technology

Comments • 192

  • @burakkahveci4123
    @burakkahveci4123 4 years ago +22

    Thank you for the video. I think this is the best video for basic and intermediate levels.

  • @iamadarshmohanty
    @iamadarshmohanty 2 years ago +1

    the best explanation I found on the internet. Thank you

  • @shafagh_projects
    @shafagh_projects 9 months ago

    I am speechless. Your tutorials are beyond amazing. Thank you so much for all you have done!

  • @Rocky-xb3vc
    @Rocky-xb3vc 3 years ago

    This is the first video I'm watching on this channel, and I need to say huge THANK YOU. You helped me connect so many dots that were all over the place in understanding this. Amazing.

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +1

      Thank you very much for your kind feedback. I hope you’ll watch other videos on my channel and find them useful too.

    • @Rocky-xb3vc
      @Rocky-xb3vc 3 years ago

      @@DigitalSreeni Of course, I've already watched the full course and the next thing is time series forecasting. Thanks for your reply and everything you do!

  • @zeeshankhanyousafzai5229

    I cannot express my wishes for you in words.
    You are more than the best.
    Thank you so much.

  • @azamatjonmalikov9553
    @azamatjonmalikov9553 2 years ago

    Amazing content as usual, well done :)

  • @andresbergsneider6644
    @andresbergsneider6644 3 years ago

    Thanks for sharing! Very well presented and super informative. Saving this video

  • @brunospfc8511
    @brunospfc8511 2 years ago +5

    Thanks Professor, there's so much knowledge on your channel, I'll need months to go through it, as it seems to be right in the deep learning area I want to focus on. As a Computer Engineering student going through a Veterinary course, blood sample analysis may be my final project. Thanks from Brazil

    • @DigitalSreeni
      @DigitalSreeni 2 years ago +2

      I am sure you'll benefit from my tutorials if your goal is to analyze images by writing code in python.

  • @tonihullzer1611
    @tonihullzer1611 2 years ago

    First of all, thanks for your work here on YouTube; when I'm done with your series I will definitely support you. One question: I thought that in the upward path you add the upsampled features to the corresponding ones from the contracting path, but in your code you concatenate?

    • @MrAmgadHasan
      @MrAmgadHasan 1 year ago

      He's concatenating and then uses a convolution layer. This has a similar effect to adding, since the convolution operation adds the results after multiplication.
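
      A minimal Keras sketch of the two options discussed here (layer shapes are made up for illustration; this is not the code from the video): concatenating the skip features and then convolving lets the network learn an addition-like combination, while Add() sums the maps directly.

        import tensorflow as tf
        from tensorflow.keras import layers

        # Two feature maps of the same spatial size, e.g. a decoder tensor and a skip connection.
        decoder = layers.Input(shape=(64, 64, 128))
        skip = layers.Input(shape=(64, 64, 128))

        # Option used in the code: concatenate along channels, then convolve.
        # The 3x3 convolution mixes both inputs, so it can learn a weighted sum of them.
        concat = layers.Concatenate(axis=-1)([decoder, skip])            # (64, 64, 256)
        mixed = layers.Conv2D(128, 3, padding="same", activation="relu")(concat)

        # Alternative raised in the question: element-wise addition of the two maps.
        summed = layers.Add()([decoder, skip])                            # (64, 64, 128)

        model = tf.keras.Model([decoder, skip], [mixed, summed])
        model.summary()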

  • @user-gy8km4km6y
    @user-gy8km4km6y 7 months ago

    Thank you, professor, this helps a lot in my understanding of deep learning.

  • @sarahs.3395
    @sarahs.3395 4 years ago +2

    Good explanation, thank you.

  • @mincasurong
    @mincasurong 15 days ago

    Thanks for your amazing presentation!

  • @Vibertex
    @Vibertex 2 years ago

    Great Video! Really helped me understand U-Nets for my own use!

  • @hanfeng32
    @hanfeng32 4 years ago +2

    thank you, this video is the best

  • @VLM234
    @VLM234 3 years ago

    Great explanation. Please keep on posting such high-value videos.
    If we have less data, should we go for a transfer learning or a machine learning approach?

  • @victorcahui732
    @victorcahui732 3 years ago

    Thank you for your explanation.

  • @varungoel185
    @varungoel185 3 years ago +1

    Nice video, thanks! One question - this architecture is for semantic segmentation, right? How would the final layer (or layers) differ for instance segmentation, where the output would be bounding boxes or coordinates of the instances?

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +2

      Instance segmentation requires a different architecture; you cannot just swap the final layer to convert one application into another. I only wish life were that easy!!!

  • @MrChudhi
    @MrChudhi 1 year ago

    Hi Sreeni, nice explanation, and I managed to clear my doubts. Thanks. Do you have any videos on image segmentation with pretrained models?

  • @boy1190
    @boy1190 3 years ago +26

    I wish YouTube gave us an option to like a video after every minute; this idea came to my mind for the first time during this video. I really want to give this video a like for every small bit of concept, because it is explained so well. Respect, Sir.

  • @BareqRaad
    @BareqRaad 3 years ago

    Great demonstration thank you so much

  • @shanisssss5906
    @shanisssss5906 4 years ago

    Fantastic video!

  • @lazotteliquide
    @lazotteliquide 6 months ago

    Incredible that someone as dedicated as you gave access to such great knowledge. Thank you, you help create better science.

  • @tamerius1
    @tamerius1 3 years ago +3

    Why does the feature space, and thus the depth, increase as we go down? Is this a design choice or a consequence?
    It's confusing to me that the first convolutional operation in each block increases the depth while the second one, which seems identical, does not.

  • @pratheeeeeesh4839
    @pratheeeeeesh4839 4 years ago +1

    classy explanation!

  • @matthewchung74
    @matthewchung74 4 years ago +3

    Thank you for this very helpful video. In the U-Net diagram there are 3 output features, but your implementation only has one. I'm confused as to why?

    • @julianwittmann7302
      @julianwittmann7302 3 years ago

      As I'm just starting to dig into this field I'm not quite sure, but my suggestion would be that the output has to be a segmented image. Segmented images have value 1 for the segmented part and value 0 for the remaining non-segmented part of the picture. Usually grey values are considered for segmentation, and for grey values only one channel is needed.
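
      As a hedged illustration of why one channel suffices (not the author's code; shapes are examples): a 1x1 convolution with a sigmoid gives one foreground probability per pixel for a binary mask, while a multi-class mask would instead use one channel per class with softmax.

        from tensorflow.keras import layers, Model

        inp = layers.Input(shape=(128, 128, 16))       # last decoder feature map (example shape)

        # Binary segmentation: one channel, sigmoid -> per-pixel foreground probability in [0, 1].
        binary_mask = layers.Conv2D(1, kernel_size=1, activation="sigmoid")(inp)

        # For N classes one would instead use N channels with softmax (one probability per class per pixel):
        # multi_mask = layers.Conv2D(num_classes, 1, activation="softmax")(inp)

        Model(inp, binary_mask).summary()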

  • @vikaskarade5585
    @vikaskarade5585 3 years ago +3

    Amazing lecture. You could also create one on U-Net++ and Attention U-Net. I was looking for these topics and I wish you had videos on them... :)

  • @josemiguelc.tasayco4028

    Very well !!! more videos please

  • @kebabsharif9627
    @kebabsharif9627 2 years ago

    Can you make a video in which your code detects the orientation of a page from a photograph of it, for example when the page is upside down or rotated 90° left/right?

  • @ramanjaneyuluthanniru1428

    Well explained, Sreeni.
    You have amazing teaching skills and your explanation is pretty good.
    I have watched many, many videos on YouTube, and you are one of the best.
    Thanks for sharing the information.

  • @Tomerkad
    @Tomerkad 6 months ago +1

    Thank you. Can you please explain what it means to add C4 to U6 in the first upsampling step?

  • @BiswajitJena_chandu
    @BiswajitJena_chandu 3 years ago +2

    Sir, please do a video on segmentation of the BraTS dataset

  • @carolinchensita
    @carolinchensita 1 year ago +1

    Thank you very much for this explanation. I have one question, could I use this same method on an RGB image? Or does it have to be grayscale? Thanks!

    • @rohanaggarwal8718
      @rohanaggarwal8718 6 months ago

      This is a late reply, but yes, you have to expand your thinking... You can't assume that just because someone made a tutorial, that is exactly what you have to do. Ask yourself these questions instead of trying to get help: What is a grayscale image? (1 is white, 0 is black, in between is gray.) Can I apply this concept to RGB? (Three color channels, same principle for each.) How does my code change? (The input channel count should be three; maybe I need to flatten differently.) Etc. Good luck learning!
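
      A tiny sketch of the channel point (illustrative only, not code from the video): switching from grayscale to RGB only changes the input shape; the first convolution then simply learns filters with three input channels.

        from tensorflow.keras import layers

        # Grayscale input: one channel per pixel.
        gray_input = layers.Input(shape=(256, 256, 1))

        # RGB input: three channels per pixel; the rest of the U-Net stays the same.
        rgb_input = layers.Input(shape=(256, 256, 3))

        # The first convolution works either way and produces the same number of feature maps.
        x = layers.Conv2D(16, 3, padding="same", activation="relu")(rgb_input)   # -> (256, 256, 16)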

  • @maciejkolodziejczyk4136

    Many thanks, well done !

  • @icomment4692
    @icomment4692 3 years ago

    What implication do the cross-links have for backpropagation in the U-net architecture?

  •  4 years ago +1

    Thanks for video

  • @DAYYAN294
    @DAYYAN294 29 days ago

    Great job, sir, salute to you ❤

  • @mqfk3151985
    @mqfk3151985 3 years ago +1

    As usual, an amazing tutorial! I just want to confirm: in the training phase, all images have to be of the same shape (width, height and depth), right? What if my training data varies in shape? Do I need to resize the images?
    Also, I will be really thankful if you can give a tutorial on Mask R-CNN. It's also a very good algorithm that can be used for semantic segmentation.
    Thanks a lot for your time.

    • @manishsharma2211
      @manishsharma2211 3 years ago +2

      Yes. Always apply transformations to the images (like resizing, rotation, etc.)

    • @mqfk3151985
      @mqfk3151985 3 years ago +1

      I see, thanks for the reply. Image rotation will be performed for data augmentation, but regarding image resizing, I think it's a requirement of the algorithm.

    • @manishsharma2211
      @manishsharma2211 3 years ago +1

      @@mqfk3151985 Yes, you will rarely find images that are all the same size, unless you use a standard competition dataset.
      So better resize :)

    • @DigitalSreeni
      @DigitalSreeni 3 years ago

      You will represent your data as a numpy array, so you need all images to be the same size. Yes, it is customary to resize images to a predefined shape in machine learning.
      I will consider making Mask R-CNN videos.
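
      A hedged sketch of the usual preprocessing (folder path and target size are placeholders, not from the video): resize every image to one shape, then stack them into a single numpy array.

        import glob
        import cv2            # opencv-python
        import numpy as np

        SIZE = 256            # example target size; choose to match the network input
        images = []
        for path in sorted(glob.glob("images/*.png")):       # placeholder folder
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            img = cv2.resize(img, (SIZE, SIZE))              # force a common shape
            images.append(img)

        X = np.array(images, dtype="float32") / 255.0        # (n_images, 256, 256)
        X = np.expand_dims(X, axis=-1)                       # (n_images, 256, 256, 1)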

  • @Irfankhan-jt9ug
    @Irfankhan-jt9ug 3 years ago

    Great work... which tool creates image masks?

  • @saifeddinebarkia7186
    @saifeddinebarkia7186 2 years ago

    Thanks for the video. So is it transposed convolution or upsampling in the expansive path? Because they are two different things.

    • @DigitalSreeni
      @DigitalSreeni 2 years ago +1

      It can be either. Please watch the following video if interested in learning about the differences between the two. But, you can use either as the idea is to get back to the large resolution image from a smaller size.
      czcams.com/video/fMwti6zFcYY/video.html
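
      Both options side by side in a minimal Keras sketch (shapes assumed for illustration): a learnable transposed convolution versus plain upsampling followed by a convolution; either way the spatial resolution is recovered.

        from tensorflow.keras import layers, Input, Model

        x = Input(shape=(8, 8, 256))

        # Learnable upsampling: transposed convolution.
        up_a = layers.Conv2DTranspose(128, kernel_size=2, strides=2, padding="same")(x)    # (16, 16, 128)

        # Non-learnable upsampling: repeat pixels, then optionally convolve to reduce channels.
        up_b = layers.UpSampling2D(size=(2, 2))(x)                                          # (16, 16, 256)
        up_b = layers.Conv2D(128, kernel_size=2, padding="same", activation="relu")(up_b)   # (16, 16, 128)

        Model(x, [up_a, up_b]).summary()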

  • @CristhianSanchez
    @CristhianSanchez 3 years ago

    Great explanation!

  • @tonix1993
    @tonix1993 2 years ago

    Very helpful video thank you!

  • @mohamedelbshier2818
    @mohamedelbshier2818 1 year ago

    Thank you and Respect Sir

  • @ioannisgkan8930
    @ioannisgkan8930 2 years ago

    Great explanation, Sir.
    You made it simple for us.

  • @ApPillon
    @ApPillon 1 year ago

    Thanks bro. Cheers!

  • @doraadventurer9933
    @doraadventurer9933 3 years ago

    Thank you for sharing. However, do you have the training part?

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +1

      Please keep watching videos on this playlist, I have training and segmentation part covered.

  • @anishjain3663
    @anishjain3663 3 years ago +1

    Sir, I am doing image segmentation with a COCO-like dataset. I have already seen your tutorials, but I am still not able to implement it.

  • @-arabsoccer1553
    @-arabsoccer1553 4 years ago +1

    Thanks for your video, but I have a question regarding U-Net and I hope that you can answer it.
    From my understanding, U-Net ends with an image of the same size as the input, but how can we predict the class of each pixel?
    I understand the classification problem, where the last convolution is followed by flattening and a fully-connected layer with n classes as outputs, but I don't understand how we get the result in segmentation.

    • @DigitalSreeni
      @DigitalSreeni 4 years ago +1

      The convolution and pooling operations (downsampling) understand the 'what' information in the image but have no information on the 'where' aspect, which is required for semantic segmentation (pixel level). In order to get the 'where' information, U-Net uses upsampling (decoder), converting low resolution back to high resolution. Please read the original paper for more information: arxiv.org/abs/1505.04597
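
      A toy sketch of this idea, not the author's implementation: the encoder shrinks the grid while growing the features (the 'what'), and the decoder upsamples back so the prediction is made per pixel location (the 'where').

        from tensorflow.keras import layers, Input, Model

        inp = Input(shape=(128, 128, 1))

        # Encoder: convolutions + pooling shrink the spatial grid and grow the feature channels.
        c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)   # (128, 128, 16)
        p1 = layers.MaxPooling2D()(c1)                                      # (64, 64, 16)  -- 'what'
        c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)    # (64, 64, 32)

        # Decoder: upsampling restores the grid so predictions are made per pixel -- 'where'.
        u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c2)   # (128, 128, 16)
        u1 = layers.Concatenate()([u1, c1])                                 # skip connection keeps fine detail
        out = layers.Conv2D(1, 1, activation="sigmoid")(u1)                 # per-pixel class probability

        Model(inp, out).summary()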

  • @zeeshanahmed3997
    @zeeshanahmed3997 4 years ago

    Hello! I want to ask something: can I train my U-Net model with input training images having only a single channel, like (img_height, img_width, 1) or (img_height, img_width)?

    • @DigitalSreeni
      @DigitalSreeni 4 years ago +3

      Yes. Please watch my other videos on U-net. Every network expects certain dimensions and you can reshape your arrays to fit those dimensions. For example if you have grey images with dimensions (x, y, 1) and if the network takes 3 channels then just copy the image 2 more times to convert to (x, y, 3).
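
      A small numpy sketch of that trick: duplicate the single grey channel three times so a 3-channel network accepts the image.

        import numpy as np

        grey = np.random.rand(256, 256, 1).astype("float32")   # stand-in for a (x, y, 1) grey image

        rgb_like = np.repeat(grey, 3, axis=-1)                  # copy the channel twice more -> (256, 256, 3)
        print(rgb_like.shape)                                   # (256, 256, 3)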

  • @NH-gl8do
    @NH-gl8do 3 years ago

    Very excellent explanation

  • @siddharthmagadum16
    @siddharthmagadum16 2 years ago

    5:12 . which architecture would be good for cassava leaf disease detection dataset?

  • @4MyStudents
    @4MyStudents 2 years ago

    basically, ReLU is used to prevent overfitting and to maintain non-linearity

  • @mohammadkarami8984
    @mohammadkarami8984 4 years ago

    Thanks a lot for your video

  • @joshizic6917
    @joshizic6917 1 year ago

    Hi sir, I was wondering if you could help me train my model. I am trying to create a dataset where only the element of interest is visible and the rest is blacked out with a transparent background. Will this work, or should I create a binary mask by coloring the element of interest white and keeping the background black?

  • @efremyohannes2334
    @efremyohannes2334 3 years ago

    Thank you sir, very nice video.

  • @temurochilov
    @temurochilov 2 years ago

    Thank you, a very informative tutorial.

  • @bhavanigarrepally4164
    @bhavanigarrepally4164 2 years ago

    Can you give the implementation for unsupervised semantic segmentation also?

  • @haythammagdi3956
    @haythammagdi3956 7 months ago

    Hi everyone. It is a really amazing video on U-Net.
    But what about U2-Net? Is it better?

  • @NeverTrustTheMarmot
    @NeverTrustTheMarmot 1 year ago +2

    Pick up line for data scientists:
    Why is U-Net architecture so beautiful?
    Cause it looks like U

  • @Abhisingh-cl9xm
    @Abhisingh-cl9xm 4 years ago +1

    Best resource

  • @chitti1120
    @chitti1120 3 years ago

    Can someone tell me, with examples, why the U-Net architecture uses 'copy and crop' for every block?

  • @lorizoli
    @lorizoli 2 years ago

    Great video!

  • @Julian-ri9od
    @Julian-ri9od 2 years ago +2

    Is there a reason why two convolutions are always applied after the max pooling step? Is it a convention to always use two?

    • @DigitalSreeni
      @DigitalSreeni 2 years ago

      No reason. It may appear that 2 convolutions are added after maxpool on some architectures but that is not the general case.
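
      For reference, a sketch of the commonly seen two-convolutions-per-block pattern from the original U-Net paper; as the reply notes, it is a convention in many implementations rather than a requirement.

        from tensorflow.keras import layers

        def double_conv_block(x, n_filters):
            """Two 3x3 conv + ReLU layers, the block repeated at every level of many U-Net variants."""
            x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
            x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
            return x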

  • @TheedonCritic
    @TheedonCritic 2 years ago +1

    Awesome!
    I'm trying to use GAN for augmenting my images and masks which I will use as input to my semantic segmentation models, but I can't find any tutorials online.
    Most of them are for classification datasets, any advice, please?

  • @adityagoel237
    @adityagoel237 2 years ago

    14:25 In upsampling (before adding C4), why did the 8x8x256 get transformed to 16x16x128? Why not 16x16x256?

  • @Shadow-pn2us
    @Shadow-pn2us 4 years ago +1

    Still confused about how the concatenation operation works, such as combining a 16x16x128 feature map with the upsampled 8x8x256; the dimensions are different.

    • @DigitalSreeni
      @DigitalSreeni 4 years ago

      You'll be concatenating data with the same spatial dimensions, not different dimensions. Please have a second look at the graphic describing the architecture: the two layers fused together are being concatenated to form a dataset with a combined channel dimension.

  • @poopenfarten4222
    @poopenfarten4222 1 year ago

    What are the numbers above the layers? For example, 16 is written above the first layer. What does it signify? Could someone please explain?

  • @ahmedhafez3758
    @ahmedhafez3758 3 years ago

    I want to do 3D medical image segmentation; can you tell me how to start? I want the input to be an .obj file and the output to be either .dcm files (for each segment) or .obj files.

  • @nourhanelsayedelaraby4271

    First of all, thank you for the great explanation. I wanted to ask about the slides, whether they are available.

    • @DigitalSreeni
      @DigitalSreeni 2 years ago

      Sorry, I wasn't very organized with my presentation slides, so unfortunately I cannot share them. Also, I often use images and content from Google searches that come with copyright; I cannot legally distribute them.

  • @RizwanAli-jy9ub
    @RizwanAli-jy9ub 4 years ago +1

    salute

  • @bicyclingmartian1873
    @bicyclingmartian1873 4 years ago +1

    Web application that uses uNet model: backdrop.vercel.app/
    Source: github.com/Prottoy2938/backdrop

  • @alessioandreoli2145
    @alessioandreoli2145 4 years ago

    Hi! Which is the best segmentation technique I can use in Python for cell image counting / object detection / size measurement?

    • @DigitalSreeni
      @DigitalSreeni 4 years ago +1

      The best method is always the traditional approach of using a histogram for thresholding and then some operators like open/close to clean up. If that is not possible, then the next best option is to use traditional machine learning (extract features and then Random Forest or SVM). I covered that topic on my channel. Finally, if you have the luxury of thousands of labeled images, then use deep learning.

    • @alessioandreoli2145
      @alessioandreoli2145 4 years ago

      @@DigitalSreeni, please let me ask one more question. My purpose is to avoid manual settings and to use macros or Python over a large number of cell images taken across a wide microscale range. Any suggestions there? Do you have any references for deep learning?

  • @chouchou2445
    @chouchou2445 3 years ago +1

    Thank you again.
    Would you please tell me, is it possible to use data augmentation before semantic segmentation, and how do I apply the same transformation to both the image and the mask?
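
    One simple way to guarantee the image and its mask receive the identical transform is to draw the random parameters once and apply them to both; a minimal numpy sketch (flip plus 90° rotations, dummy arrays) follows.

      import numpy as np

      def augment_pair(image, mask, rng=np.random.default_rng()):
          """Apply the identical random flip/rotation to an image and its mask."""
          if rng.random() < 0.5:                    # horizontal flip, decided once for both
              image, mask = np.fliplr(image), np.fliplr(mask)
          k = rng.integers(0, 4)                    # 0-3 quarter turns, same k for both
          image, mask = np.rot90(image, k), np.rot90(mask, k)
          return image, mask

      # Example usage with dummy data:
      img = np.random.rand(128, 128, 3)
      msk = np.random.randint(0, 2, (128, 128, 1))
      aug_img, aug_msk = augment_pair(img, msk)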

  • @jijiqueen5823
    @jijiqueen5823 3 years ago

    thanks

  • @chouchou2445
    @chouchou2445 3 years ago

    Thank you, this is how you know what you are doing :)

  • @mstozdag
    @mstozdag 4 years ago

    Hello, great content! Where is the code for U-Net? Can you post the link here please?

    • @DigitalSreeni
      @DigitalSreeni 4 years ago +1

      github.com/bnsreenu/python_for_microscopists

  • @HenrikSahlinPettersen
    @HenrikSahlinPettersen 2 years ago +1

    For a tutorial on how to do deep learning based segmentation without the need to write any code using only open-source free software, we have recently published an arXiv preprint of this pipeline with a tutorial video here: czcams.com/video/9dTfUwnL6zY/video.html (especially suited for histopathological whole slide images).

  • @soumyadrip
    @soumyadrip 4 years ago +1

    ❤❤❤

  • @codebeings
    @codebeings 3 years ago +1

    13:54 Do check: the second-to-last layer on the decoder side has wrong connections!

    • @kunalsuri8316
      @kunalsuri8316 2 years ago

      How is it wrong?

    • @codebeings
      @codebeings 2 years ago

      @@kunalsuri8316 In the second-to-last layer of the decoder (corresponding to P1), its input to the last layer of the decoder is incorrect. Just check the original paper; one can easily notice it.

  • @talha_anwar
    @talha_anwar 3 years ago +1

    Thanks first of all. Can you provide the image you have used, the architecture image?

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +1

      You can search for U-net on Google. I did the same and created my own, to make sure I do not infringe on copyright.

    • @sourabhsingh4895
      @sourabhsingh4895 3 years ago

      @@DigitalSreeni Sir, you are great. It would be a great help if you could upload a video on semantic segmentation using the double U-Net model.

  • @amaniayadi9591
    @amaniayadi9591 3 years ago

    So useful, thanks :*

  • @muhammadzubairbaloch3224

    Depth estimation using neural networks: please make a lecture on it.

  • @kethusnehalatha6091
    @kethusnehalatha6091 3 years ago

    For better results, what changes do we have to make to the U-Net, sir?

    • @DigitalSreeni
      @DigitalSreeni 3 years ago

      Many things. For example, you can try replacing the generic encoder (downsampling) part with something sophisticated like EfficientNet.

  • @NS-te8jx
    @NS-te8jx 2 years ago

    do you have slides for all these videos?

  • @nailashah6918
    @nailashah6918 3 years ago

    Very good lecture.
    Just one thing I am unable to understand: the feature space, or dimension? Please reply with an answer.
    Thanks

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +1

      Not sure where your confusion is.... I am referring to the filtered results (after convolutional filtering) as feature space. This is where you will have multiple responses for every input image and these responses contain the information about features in the image.

    • @nailashah6918
      @nailashah6918 3 years ago

      I wanted to ask about the feature space that was 64 at the start and then 128 in the 2nd block of the U-Net.
      Does 64 mean 64 output filtered results? Is that true?
      Or can we say 64 filters were applied, then 128 filters, and so on?

  • @tangallen2008
    @tangallen2008 4 years ago +1

    Make a video about V-Net please!

  • @snehalwagh2283
    @snehalwagh2283 2 years ago

    Question: what happens if the input is 128x128x1? Will it still become 128x128x16?

  • @nickpgr10
    @nickpgr10 3 years ago

    @14:11 Can anyone please explain how the size changes from 8x8x256 to 16x16x128 due to upsampling? Why does the number of channels get reduced in this step?

    • @zhenxingzhang6429
      @zhenxingzhang6429 3 years ago

      If you check out Part 2 of this video, you can see that it uses Conv2DTranspose (transposed convolutions) for upsampling instead of simply UpSampling2D (which repeats values to match the desired dimensions). Because the filter number is set to 128, we end up with 8x8x256 -> 16x16x128. Check this for more details: www.jeremyjordan.me/semantic-segmentation/#upsampling
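
      A quick shape check of what this reply describes (a sketch, not the exact code from Part 2): a transposed convolution with 128 filters and stride 2 turns 8x8x256 into 16x16x128, because the output channel count equals the number of filters.

        import tensorflow as tf
        from tensorflow.keras import layers

        x = tf.zeros((1, 8, 8, 256))                                       # batch of one 8x8x256 feature map
        up = layers.Conv2DTranspose(128, kernel_size=2, strides=2, padding="same")(x)
        print(up.shape)                                                    # (1, 16, 16, 128)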

  • @mimo-wx9mc
    @mimo-wx9mc 4 years ago

    Why don't the first parameters work very well, and how can we determine the best parameters?

    • @DigitalSreeni
      @DigitalSreeni 4 years ago

      Not sure what you mean by parameters. If you are asking about the hyperparameters that go into defining your network, then there is no easy answer. People are still researching the effect of parameters for various applications.

  • @MadharapuKavyaP
    @MadharapuKavyaP 2 years ago

    Hello sir, can you please make a video on brain tumor segmentation using the U-Net architecture integrated with a correlation model and fusion mechanism.

  • @RAZZKIRAN
    @RAZZKIRAN 2 years ago

    Input size for U-Net?

  • @ExV6120
    @ExV6120 4 years ago

    I still don't get it: what exactly are the 16, 32, 64, 128, 256 that are called features in each pair of layers?

    • @DigitalSreeni
      @DigitalSreeni 4 years ago +1

      Think of it as applying 16 different digital filters, and then 32, and then 64, and so on. Therefore, if you take a single image of size 256x256 and apply 16 different filters to it, you will end up with 16 responses from this single image --> 256x256x16 data points.
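
      The same idea as a runnable sketch (dummy input): 16 learned filters applied to one 256x256 grayscale image give 16 filtered responses stacked along the channel axis.

        import tensorflow as tf
        from tensorflow.keras import layers

        image = tf.zeros((1, 256, 256, 1))                                 # a single 256x256 grayscale image
        responses = layers.Conv2D(16, kernel_size=3, padding="same", activation="relu")(image)
        print(responses.shape)                                             # (1, 256, 256, 16): one map per filter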

    • @andresbergsneider6644
      @andresbergsneider6644 3 years ago

      @@DigitalSreeni What is the design principle behind these filters, any rules of thumb? Are they generated at random, or are they manually configured?
      Thanks again for sharing this video!

  • @mrityunjaykumar2893
    @mrityunjaykumar2893 3 years ago

    Respect 🎓

  • @ariouathanane
    @ariouathanane 1 year ago

    Hello, I have RGB masks; is it possible to do image segmentation? Thanks in advance

    • @DigitalSreeni
      @DigitalSreeni 1 year ago

      Yes. I have done that here. czcams.com/video/jvZm8REF2KY/video.html

  • @akainu3668
    @akainu3668 2 years ago

    Hi, can you also create a tutorial on U-Net based segmentation for the ISBI 2012 dataset or the BraTS dataset?

    • @DigitalSreeni
      @DigitalSreeni 2 years ago +1

      I already did BraTS. Please check my videos 231 to 234.

  • @xxxtj3679
    @xxxtj3679 1 year ago

    Please do a W-net tutorial

  • @pearlmarysamuel4809
    @pearlmarysamuel4809 3 years ago

    How much memory does the original unet require?

    • @DigitalSreeni
      @DigitalSreeni 3 years ago

      Not a simple answer. Here is some good reading material on this topic. imatge-upc.github.io/telecombcn-2016-dlcv/slides/D2L1-memory.pdf

  • @leo46728
    @leo46728 2 years ago

    17:56 Does the model need to be trained after compiling?

    • @DigitalSreeni
      @DigitalSreeni 2 years ago

      Compiling just defines the model; you need to train it on real data to update the weights and customize it for a specific job, for example identifying cats and dogs.
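
      A hedged sketch of the distinction (tiny stand-in model and random data, not the U-Net from this series): compile only attaches the optimizer and loss; fit is where the weights are actually updated.

        import numpy as np
        from tensorflow.keras import layers, Input, Model

        # A tiny stand-in model (placeholder for the U-Net built earlier in the series).
        inp = Input(shape=(64, 64, 1))
        out = layers.Conv2D(1, 1, activation="sigmoid")(inp)
        model = Model(inp, out)

        # Step 1: compile -- this only attaches the optimizer and loss; no learning happens yet.
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

        # Step 2: fit -- training on (image, mask) pairs is what actually updates the weights.
        X_train = np.random.rand(8, 64, 64, 1).astype("float32")             # dummy images
        y_train = np.random.randint(0, 2, (8, 64, 64, 1)).astype("float32")  # dummy binary masks
        model.fit(X_train, y_train, batch_size=4, epochs=1)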

    • @leo46728
      @leo46728 2 years ago

      @@DigitalSreeni ok thanks

  • @shreearmygirl9878
    @shreearmygirl9878 2 years ago

    Hello sir, can you please provide links to videos on creating our own dataset from scratch for satellite images? Please sir, it's very important. I hope you will...

    • @DigitalSreeni
      @DigitalSreeni 2 years ago

      You just need to annotate your images using any of the image annotation tools out there. I use www.apeer.com as that is what our team does at work.

  • @deepMOOC
    @deepMOOC 4 years ago

    Thank you, but how can I get the code?

    • @DigitalSreeni
      @DigitalSreeni 4 years ago

      You can get the code from my GitHub page. The link is provided under my channel description.