204 - U-Net for semantic segmentation of mitochondria

  • Added Sep 4, 2024
  • Code generated in the video can be downloaded from here:
    github.com/bns...
    Dataset info: Electron microscopy (EM) dataset from
    www.epfl.ch/la...
    To annotate images and generate labels, you can use APEER (for free):
    www.apeer.com

Comments • 116

  • @DigitalSreeni
    @DigitalSreeni  3 years ago +1

    Just want to let you guys know that I love Kite's AI-powered coding assistant. It works great, giving smart completions and documentation as we type. Check it out if you are looking for smart completion tools while coding.
    www.kite.com/get-kite/?downloadstart=false

  • @kannanv9304
    @kannanv9304  3 years ago +3

    Dear Ajarn, my telepathy was telling me to expect this U-Net video from you, as I was about to go the "Convolutional + RF" way for a segmentation task... I can't blink while learning through your tutorials... Another in-depth and informative tutorial... Awaiting the sequels to it, on instance segmentation... And as always, my humble pranams to you...

  • @ankurgupta3749
    @ankurgupta3749  3 years ago +2

    Trying to apply U-Net for glaucoma detection.
    This helped a lot sir, thank you so much
    🙏🏽

  • @amarug
    @amarug  2 years ago +1

    The quality of your videos is insane, thank you so much!!

  • @DrRubidium
    @DrRubidium  3 years ago

    That is a superb application of U-Net. Thank you.

  • @yogitasawant3017
    @yogitasawant3017  3 months ago

    Thank you so much for all your informative videos, sir!

  • @dimane7631
    @dimane7631  2 years ago +2

    To get image and label patches: czcams.com/video/7IL7LKSLb9I/video.html

  • @NopeYup-i5f
    @NopeYup-i5f  6 days ago

    Thank you very much for your work, it has proven really helpful! Is it possible to use image augmentation in this simple model (without having to use flow_from_directory)?

  • @linameghouche1392
    @linameghouche1392  6 months ago

    Hi, I really liked the way you explain the network! I have a question about normalization of images: should we normalize even for RGB images, for example the ISIC dataset?

  • @willberger96
    @willberger96  a year ago

    Hey Sreeni, first thank you for all your training sessions. Truly appreciate them and enjoy the way you present it!
    I did notice a small bug. I followed your steps when extracting the images from the tiff file, where you break them out into 256x256 images and save them to the images and masks folders. The issue is that on Linux, when you retrieve them using os.listdir, they are not returned in a consistent order, meaning mask_dataset[0] will likely not correspond to image_dataset[0]. To fix it I did the following:
    import os
    # sort both listings so image and mask filenames line up
    images = sorted(os.listdir(image_directory))
    masks = sorted(os.listdir(mask_directory))
    Hope that helps someone.

    • @DigitalSreeni
      @DigitalSreeni  a year ago

      Thanks for pointing this out. I should have mentioned it in my videos. I sort the files when I work on Colab, where images and masks may not be lined up by file name by default.

  • @roby1251
    @roby1251  2 years ago

    Hello Sreeni sir, I'm so happy that I discovered your channel and videos on neural networks and cell image classification; they're really a gold mine of information!
    May I know in which video exactly you first talked about how to save and load models and their results? (I'm referring to 17:36.) I've watched your first 60 videos and lost track of them. Thank you in advance and cheers!

  • @meysamakbari6523
    @meysamakbari6523  21 days ago

    Hello,
    I have a set of images about 1024x1024 in size, but the objects to be semantically segmented are big, almost 600x600. My dataset is quite small; I'm not sure if patchify makes sense here?

  • @amitkumar-od1ui
    @amitkumar-od1ui  a year ago

    Hey man, you are doing a great job! Thanks for these video lessons.

  • @khondokermirazulmumenin8201

    You are the best.

  • @Hartvig5k
    @Hartvig5k  2 years ago

    Amazing, as always! A general question regarding input channels: considering H&E-stained images, do you think color deconvolution as a preprocessing step would help segmentation? Having tried it, it doesn't seem to make much difference. I am assuming network training would focus on the decisive color ranges regardless?

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      Color deconvolution may not make a difference if you are using deep learning for segmentation. This is because deep learning learns from the raw data much better than other techniques. Color deconvolution helps when you perform traditional image analysis.

  • @bijoyalala5685
    @bijoyalala5685  3 years ago

    Hello Sreeni sir, thanks for an informative video about semantic segmentation. I have a request: how about explaining the learning curve of the segmentation you have done here?
    I have seen your tutorials about the learning curve, loss function, and accuracy metric; those videos are also very informative. In addition, I want to learn, for this semantic segmentation, what the resulting loss and validation accuracy values are, and what impact the learning curve has for segmentation purposes.
    You have explained everything so precisely so far. It would be a great help if you consider my request.
    Thank you.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      The only point of the learning (loss) curve is for us to keep an eye on it to make sure it is trending downward, and that the training and validation curves stay close together. For semantic segmentation, where you are tracking IoU, you can monitor IoU instead of accuracy to understand the general trend in segmentation quality. Please stay tuned for the upcoming videos where we monitor IoU instead of accuracy.
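
      For reference, a minimal sketch of monitoring IoU after training (assuming a trained binary-segmentation Keras model named model, plus test arrays X_test and y_test; all names are placeholders):

          import numpy as np
          from tensorflow.keras.metrics import MeanIoU

          # Threshold the sigmoid outputs to hard labels before computing IoU
          y_pred = (model.predict(X_test) > 0.5).astype(np.uint8)
          iou = MeanIoU(num_classes=2)
          iou.update_state(y_test, y_pred)
          print("Mean IoU =", iou.result().numpy())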

  • @ShahidKhan-jp9mt
    @ShahidKhan-jp9mt  4 months ago

    Please make a video on the HuBMAP competition: hacking the human vasculature, i.e., segmenting vascular structures from 2D PAS-stained human kidney images.

  • @deepakraj008
    @deepakraj008  3 years ago

    A little correction on reading images: instead of cv2, you can open them with PIL and convert to a NumPy array:
    numpy.asarray(PIL.Image.open('test.jpg'))

  • @alokchauhan6653
    @alokchauhan6653  a year ago

    Great explanation! A question though, @DigitalSreeni: would it be possible to extract the region of interest from the predicted image and store the pixel coordinates in CSV or JSON format, so that we can import the point coordinates into ImageJ, in case one wants to correct/adjust the ROIs manually? Cheers!

  • @rachelbj3840
    @rachelbj3840  2 years ago

    Hello Sreeni, loads of thanks for the wonderful video lecture. Am I allowed to use your code for research purposes?

  • @hosniboughanmi4130
    @hosniboughanmi4130  3 years ago +1

    Dear Ajarn, thank you for this video. Could you please tell me how you generated the patches from the data?

    • @user-xk8pv6bz6r
      @user-xk8pv6bz6r  3 years ago

      You can use any online resource to split one .tif file into many (or google some Python code, like I did :) ), and the Python module image_slicer to generate patches from the source data (something like image_slicer.slice('file.tif', n_tiles), where n_tiles is the number of tiles you want).

  • @agnarrenolen1336
    @agnarrenolen1336  4 months ago

    I'm a Python newbie, and I am unable to set up my Python environment to work with your code. I almost got there with Miniconda, but couldn't find a way to install patchify. Would you please give some tips on how to set up Python and Spyder with the correct environment and all the needed libraries? (I'm stuck on Windows.)

  • @minipc123
    @minipc123  a year ago

    Hello sir, may I know why you chose to normalize axis 1 instead of -1, which is the default axis to be normalized?

  • @akashdebnath3631
    @akashdebnath3631  3 years ago +1

    Great, sir.

  • @nouraalmusaynid751
    @nouraalmusaynid751  a year ago

    Keep going 👏🏻👏🏻👏🏻

  • @tapansharma460
    @tapansharma460  3 years ago

    Great work, sir.

  • @venkatesanr9455
    @venkatesanr9455  3 years ago

    Hi Sreeni sir, thanks for the highly informative content as usual. I have one doubt: can U-Net perform well when applied to a small dataset of around 100 images, or should we go with a classical ML approach instead? Kindly share your experience.

  • @kctsui4351
    @kctsui4351  a year ago

    Thanks very much, it is very useful. I encountered an error when I ran the code in Google Colab: "NotImplementedError: Cannot convert a symbolic tf.Tensor (dice_loss_plus_1focal_loss/truediv:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported." I have tried changing the versions of tensorflow and numpy but the issue persists. How can I solve it?

  • @farizasiddiqua17
    @farizasiddiqua17  a year ago

    Thank you so much!

  • @anitadalla
    @anitadalla  3 years ago

    Thank you very much, I have learned a lot from your methods. Can you please apply one or two pretrained models like ResNet50/EfficientNetB7, or an ensemble of 3-4 models, on HAM10000? Please please please make a video on that also.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      Another thing to add to my list. But it is easy, as I showed in my ensemble videos, so I hope you don't have to wait for me to make these videos; they take time.

  • @samarafroz9852
    @samarafroz9852  3 years ago

    Superb sir

  • @vikashkumar-cr7ee
    @vikashkumar-cr7ee  2 years ago

    Dear Sreeni,
    I request you to make a tutorial on how to run this lecture, or any lecture, on Google Colab, since Colab provides compatibility and a GPU, as well as how to import a dataset directly from websites (without uploading it to a local drive or Google Drive).

  • @RuchiTripathi-gu3hu
    @RuchiTripathi-gu3hu  a year ago

    Sir, when I try to load the images and masks, they do not come in the proper sequence during image and mask directory creation. What do I need to do?

  • @harrishvar7677
    @harrishvar7677  6 months ago

    Does this work only on 256x256 patches?

  • @padmavathiv2429
    @padmavathiv2429  2 years ago

    Hello sir, your videos are very helpful for me. Can U-Net be applied to lung segmentation too? Will it give better accuracy?
    Thanks

  • @joaosacadura6097
    @joaosacadura6097  3 years ago

    I applied U-Net to some high-resolution orthophoto datasets (RGB + IR, 9-30 cm per pixel) with ground truth masks corresponding to vehicles, roofs, roads, etc.
    I prepare all the images before inputting them into the model: I cut the orthophotos of size (10000, 10000, 4) into pieces of (512, 512, 4), and before feeding them to the model I have to resize them to (128, 128, 4), because at larger sizes the kernel just implodes and I can't run the model. I'm afraid that with all this resizing I'm losing the form of the objects I want to predict. I thought about applying Gaussian filters just to smooth the resolution. However, since you said you will talk about this in upcoming videos, I will wait and see. I wonder if it improves the results.
    Keep up the great videos!

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      This is a common problem when working with limited computing resources. Big companies invest in big hardware that lets them handle large datasets, but as individuals we need to work with resources we can access, such as Google Colab. In general it is recommended to work with batch sizes of 32 or 64; this makes sure enough data is provided in a batch, and smaller batch sizes also help the model generalize. So you need to find the smallest image size that you can handle in a batch of 32. You cannot fit 512x512 images at batch size 32 on typical hardware available to us, so you need to crop them down to maybe 128x128, which you seem to have done. Unfortunately, when you crop images too small you may be cropping large features and they lose context, which is important for segmentation. In such cases you can consider using 256x256 with a smaller batch size. Once you train the model you can apply it to large images by cropping them into smaller patches and combining the predictions.
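
      For reference, a minimal sketch of that patch-then-stitch idea at inference time (assuming the patchify library and a trained 256x256 single-channel Keras model named model; the random array is a stand-in for a real large image):

          import numpy as np
          from patchify import patchify, unpatchify

          large_image = np.random.rand(1024, 1024).astype(np.float32)  # stand-in image
          patches = patchify(large_image, (256, 256), step=256)        # (4, 4, 256, 256), no overlap
          pred_patches = np.zeros_like(patches)
          for i in range(patches.shape[0]):
              for j in range(patches.shape[1]):
                  patch = patches[i, j][np.newaxis, :, :, np.newaxis]  # (1, 256, 256, 1)
                  pred_patches[i, j] = model.predict(patch)[0, :, :, 0]
          stitched = unpatchify(pred_patches, large_image.shape)       # back to (1024, 1024)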

    • @joaosacadura6097
      @joaosacadura6097  3 years ago

      @@DigitalSreeni Yup, that's exactly what I did. With 256x256 I set the batch size to 16 or 32, with 2136 images, and for the car class it reached an IoU score of 70% after 10 epochs. I was wondering if there's a way of changing the model's parameters to input these images at a larger size and train without losing object features, or if I just need to try with better resources.
      Thanks for the reply!

    • @matancadeporco
      @matancadeporco  3 years ago

      @@joaosacadura6097 Hey João, I'm trying to do multiclass semantic segmentation of a pest symptom on tomato leaves, but I'm having difficulties. Could you help me?

  • @moisesdesouzafeitosa3364

    Hi, thank you for the amazing video.
    Is there a video on your channel using U-Net with data augmentation?

  • @almag4810
    @almag4810  a year ago

    I tried making a similar project, but for some reason my predicted masks are all black, and I'm not sure how to fix this. I double-checked, triple-checked: the model is correct, my dataset is also correct, and my images match their ground-truth masks... I used the Dice score as a metric, but it's always 0.

    • @edenvelascohernandez7633
      @edenvelascohernandez7633  a year ago

      I haven't been able to get it working with my own dataset either :C Did you ever solve it?

  • @vanshikahari4746
    @vanshikahari4746  2 years ago

    Hi. I wanted to know what should be done if my images do not match the corresponding masks when I do the sanity check. Please let me know.

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      You may find this tutorial useful... czcams.com/video/XNf1ATR9OSk/video.html

  • @sohailmalic
    @sohailmalic  a year ago

    Hey Sreeni, I am facing "IndexError: list index out of range" in the Spyder interface while applying both of your code files, after downloading the dataset through the mentioned link. I downloaded one image stack containing all 1600 images.
    How do I overcome this problem?

    • @DigitalSreeni
      @DigitalSreeni  a year ago

      Sohail miya, assalam alaikum, is everything well?
      This video shows U-Net based segmentation where the input needs to be 256x256x1. You need to get the data into the shape (N, 256, 256, 1), as shown in the video. In my case N was 1600, as I had that many images. If you are working with the same dataset and follow all the steps accordingly, you will end up with shape (1600, 256, 256, 1). If not, you may be making some mistake that is not possible for me to guess. Please go through the code line by line and execute one line at a time. Look at the variable explorer to see if the result is what you expect. If things are still confusing, you may have to take a step back and learn a bit more about numpy, or wherever you are getting stuck. Good luck.
      Khuda Hafiz
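
      For anyone stuck at this step, a minimal sketch of getting data into shape (N, 256, 256, 1) (assuming 256x256 grayscale patches already saved to a folder; the path is a placeholder):

          import os
          import numpy as np
          import cv2

          image_directory = 'images/'  # hypothetical folder of 256x256 patches
          image_dataset = []
          for fname in sorted(os.listdir(image_directory)):  # sorted so images and masks line up
              img = cv2.imread(os.path.join(image_directory, fname), cv2.IMREAD_GRAYSCALE)
              image_dataset.append(img)
          image_dataset = np.expand_dims(np.array(image_dataset), axis=-1)
          print(image_dataset.shape)  # expect (N, 256, 256, 1); N = 1600 for this dataset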

  • @sherrishah
    @sherrishah  2 years ago

    Does this U-Net work with different input sizes, e.g. 1024 as well?

  • @talha_anwar
    @talha_anwar  3 years ago

    Is the decoder in U-Net always exactly the mirror of the encoder?

  • @sohailmaqsood381
    @sohailmaqsood381  a year ago

    I have a question, can anyone help me? How do I split a single stack of 1600 images into 1600 separate images? I am stuck there... I downloaded the dataset as one image stack. Can anyone help me with this issue?

  • @saqibqamar9270
    @saqibqamar9270  2 years ago

    Hello Sreeni sir,
    Your videos are very interesting and helpful for deep learning folks. I want to do instance segmentation of the overlapped region of two tissues in my microscopy images. Which method is better for measuring the overlapped region of the same tissue for instance segmentation? You mentioned U-Net + watershed for your next lecture.

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      When you have overlapping objects that you'd like to segment, you have to find a method that incorporates shape into the training process. If the overlapped region comes in many shapes, the problem gets challenging. If you do not care about the overlapped region underneath another region and only want to separate the boundary between objects, you can try U-Net followed by watershed.
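
      For reference, a minimal sketch of the U-Net + watershed idea, following the standard scikit-image recipe (unet_prob is assumed to be the model's sigmoid output for one image):

          import numpy as np
          from scipy import ndimage
          from skimage.feature import peak_local_max
          from skimage.segmentation import watershed

          binary = unet_prob > 0.5                            # binarize the U-Net output
          distance = ndimage.distance_transform_edt(binary)   # distance to background
          coords = peak_local_max(distance, min_distance=10, labels=binary)
          peaks = np.zeros(distance.shape, dtype=bool)
          peaks[tuple(coords.T)] = True
          markers, _ = ndimage.label(peaks)                   # one marker per object center
          labels = watershed(-distance, markers, mask=binary) # split touching objects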

    • @saqibqamar9270
      @saqibqamar9270  2 years ago

      @@DigitalSreeni Thank you so much for your response. Would you suggest methods to incorporate object shape during the training process? I want to segment overlapped regions that come in many shapes. I would be highly grateful for your suggestion or for particular videos on it... I can send sample images...

  • @Brickkzz
    @Brickkzz  3 years ago

    Thanks for this amazing vid. How big is your training set (i.e., how many labelled mitochondria)? I'm thinking of doing something similar, but labelling thousands of pictures may be tedious.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      The training dataset is 165 large images with about 10 mitochondria per image on average, so about 1600 total mitochondria. Labeling is time consuming, but if your research relies on it then you find a way to get your dataset labeled. If not, try augmentation, but the results will not be as accurate.

    • @kaydee6328
      @kaydee6328  3 years ago

      @@DigitalSreeni Thanks a lot for your videos, I learned quite a lot from them. Could you recommend some efficient tools/software that could speed up image labeling or annotation? Thanks!

  • @surflaweb
    @surflaweb  3 years ago

    Hi, I want to run this code, but where can I label my images? I want to use RGB images. Another question: if I have 3 classes, should I change the last layer to a softmax classifier?
    Thanks so much.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      You can label/annotate your images here: www.apeer.com (it is free).
      Regarding multiclass U-Net - please stay tuned...

    • @surflaweb
      @surflaweb  3 years ago

      @@DigitalSreeni Last question, sir: does U-Net work with RGB images, or only with one channel?

    • @surflaweb
      @surflaweb  3 years ago

      Hi @DigitalSreeni, I tried to use apeer.com but I don't know which "annotations as image" option to download. I have two options: binary mask or labeled image. Which of these two annotations should I choose for U-Net?
      Thanks so much.

  • @user-xp7um9lg8h
    @user-xp7um9lg8h  8 months ago

    Hello, thank you for your channel. I'm interested in extracting and selecting features during the image-processing phase; if you have any code, please share it. Thank you.

  • @hadeerabdellatif2335
    @hadeerabdellatif2335  3 years ago

    I applied the same steps as in the video but got this result: "loss: 0.0477 - accuracy: 0.0022 - val_loss: 0.0600 - val_accuracy: 0.0022". What am I doing wrong? Please reply.

  • @chandrakanthats2523
    @chandrakanthats2523  2 years ago

    Please let me know the TensorFlow and Keras version requirements for this.

  • @alirajabi2388
    @alirajabi2388  a year ago

    Hi Sreeni, can you upload the dataset to your GitHub page?

  • @khaleddawoud363
    @khaleddawoud363  2 years ago

    Hi sir, first of all thank you for your work. I want to use 3 RGB channels in the model; I tried changing this in the list but it did not work. Can you please provide the RGB version of the code for the images and masks part?

  • @ism_9648
    @ism_9648  3 years ago

    These early studies proposed hybrid solutions based on independent component analysis (ICA) [3], wavelet transform [4-6], support vector machine (SVM) [5] and principal component analysis (PCA) [6]. In [6], features are extracted from MR images with the discrete wavelet transform (DWT); then PCA is employed for feature reduction, and finally feed-forward backpropagation artificial neural network (FP-ANN) and k-nearest neighbor (k-NN) classifiers are used to classify normal and abnormal brain MR images. Can you make a video to help me with this?

    • @user-xp7um9lg8h
      @user-xp7um9lg8h  8 months ago

      Hello, would you share the references for the articles you mentioned above? I'm interested in extracting and selecting features during the image-processing phase; if you have any code, please share it. Thank you.

  • @rehmanayounis8429
    @rehmanayounis8429  3 years ago

    Hi Sreeni, thank you for the amazing videos. At the start of the video you mentioned a script for dividing large images into small chunks. Could you please elaborate or share the code for it?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      You can use the patchify library to do this task; it is very easy to use and works for 2D and 3D images. pypi.org/project/patchify/
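
      For reference, a minimal usage sketch (the random array is a stand-in for one EM slice):

          import numpy as np
          from patchify import patchify

          img = np.random.randint(0, 255, (768, 1024), dtype=np.uint8)  # stand-in image
          patches = patchify(img, (256, 256), step=256)  # step = patch size, so no overlap
          print(patches.shape)  # (3, 4, 256, 256): a 3x4 grid of 256x256 patches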

    • @GodsOwn4142
      @GodsOwn4142  2 years ago

      The link to the tutorial on how to divide images of large size is here: czcams.com/video/7IL7LKSLb9I/video.html

  • @JS-tk4ku
    @JS-tk4ku  3 years ago

    Suppose we train with 280x280; how can I apply this model to something like 4000x4000?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +3

      That video is coming soon.... a video on applying models trained on small patches to segment large images. Please stay tuned.

    • @JS-tk4ku
      @JS-tk4ku  3 years ago

      @@DigitalSreeni Since I've followed you this far, I now feel familiar with deep learning. Thanks for your contributions!

  • @منةالرحمن
    @منةالرحمن  3 years ago

    Hi sir,
    Is there any solution? I tried this with a 45-image dataset to do nuclei segmentation.
    There was no error, but the result image is black; no nuclei were detected!!!

  • @tuyenlevan6804
    @tuyenlevan6804  3 years ago

    Hi, thank you for the amazing video.
    How do I create a new dataset?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      If you are inquiring about annotating your images to generate labels then I use APEER for that purpose. (www.apeer.com)

  • @marcusbranch2100
    @marcusbranch2100  3 years ago

    Awesome video, Sreeni! Thanks a lot. How can I do online augmentation in that case? My dataset has two folders like yours (images and masks) and I want to apply online augmentation, feeding directly to the network. Can you help me with this?

    • @surflaweb
      @surflaweb  3 years ago +1

      Hi Marcus, I have a question about U-Net. Where did you label your images, and what tool did you use?
      Is it compatible with the code in this video?
      Please share your knowledge.
      Thanks so much.

    • @marcusbranch2100
      @marcusbranch2100  3 years ago +1

      @@surflaweb Hey, I didn't need to label my images because the dataset I'm using comes ready with two folders (images and masks, as seen in the video); the only difference is that the images are .png, super easy to manipulate and deal with. So I didn't need any tools to label them, and yes, it is very compatible with the code in this video. Now I want to know how I can do online data augmentation and feed it directly to the network.

    • @surflaweb
      @surflaweb  3 years ago +1

      @@marcusbranch2100 OK man, thanks. Remember, if you do data augmentation you will need labels for those new images.

    • @marcusbranch2100
      @marcusbranch2100  3 years ago

      @@surflaweb Yeah, for sure. But the data augmentation is already applied to both the images and masks.

    • @surflaweb
      @surflaweb  3 years ago +1

      @@marcusbranch2100 Do you know a tool to do that? Does the code in this video work only for binary classification?

  • @ahmedgaber8819
    @ahmedgaber8819  2 years ago

    Thanks sir for this video. I have simple questions:
    1. test_img_other_norm = test_img_other_norm[:,:,0][:,:,None]
       What does [:,:,0][:,:,None] mean?
    2. prediction_other = (model.predict(test_img_other_input)[0,:,:,0] > 0.2).astype(np.uint8)
       What does (test_img_other_input)[0,:,:,0] mean?
    Thanks

    • @DigitalSreeni
      @DigitalSreeni  2 years ago +1

      In both cases I am just choosing the appropriate slice from a larger array. Please print the results and shapes without the [:,:,:] part to understand how it looks before and after.
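
      To make that concrete, a minimal sketch of what those slices do (shapes are illustrative):

          import numpy as np

          img = np.zeros((256, 256, 3))          # H x W x C image
          first_channel = img[:, :, 0]           # keep channel 0 -> shape (256, 256)
          with_axis = first_channel[:, :, None]  # add the channel axis back -> (256, 256, 1)

          preds = np.zeros((1, 256, 256, 1))     # model.predict output: N x H x W x C
          first_pred = preds[0, :, :, 0]         # first image, channel 0 -> (256, 256)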

    • @ahmedgaber8819
      @ahmedgaber8819  2 years ago

      @@DigitalSreeni Thank you, sir.

  • @manjaripalanichamy9800

    Can we use kernel initializers other than he_normal?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      For ReLU activation layers it is recommended to use he_normal, which makes sure the variance is appropriate based on your input data.
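
      For reference, a minimal sketch of setting the initializer on a layer (Keras; sizes are illustrative):

          import tensorflow as tf

          inputs = tf.keras.Input(shape=(256, 256, 1))
          c1 = tf.keras.layers.Conv2D(
              16, (3, 3), activation='relu',
              kernel_initializer='he_normal',  # He init keeps variance sensible for ReLU
              padding='same')(inputs)

      Other initializers (such as the Keras default, glorot_uniform) will also run; he_normal is simply the usual recommendation for ReLU layers.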

    • @manjaripalanichamy9800
      @manjaripalanichamy9800  3 years ago

      @@DigitalSreeni Thank you so much for the explanation.

  • @sudhakumaravel8277
    @sudhakumaravel8277  2 years ago

    Why am I getting my output as a black image? Anyone, please reply.

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      Black screen for what? Do you mean segmented images?

    • @sudhakumaravel8277
      @sudhakumaravel8277  2 years ago

      @@DigitalSreeni Yes sir, my input is an X-ray image (RGB) and the mask is also RGB, but the output is only a plain black image. 70 images are enough to train the model.

    • @edenvelascohernandez7633
      @edenvelascohernandez7633  a year ago

      @@sudhakumaravel8277 Were you able to solve it??

  • @AhmedKhaled-qr7vc
    @AhmedKhaled-qr7vc  2 years ago

    Where is the data, please?

  • @jizhang02
    @jizhang02  3 years ago

    Hello, what's the difference between the 'concatenate' and 'add' operations in Keras? Thanks.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      The Add operation adds two tensors element-wise, while Concatenate, as the name suggests, puts the two tensors together along the defined axis. keras.io/api/layers/merging_layers/
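
      A minimal sketch of the difference in output shapes (Keras; shapes are illustrative):

          import tensorflow as tf

          a = tf.keras.Input(shape=(128, 128, 32))
          b = tf.keras.Input(shape=(128, 128, 32))
          added = tf.keras.layers.Add()([a, b])                   # -> (None, 128, 128, 32)
          stacked = tf.keras.layers.Concatenate(axis=-1)([a, b])  # -> (None, 128, 128, 64)

      U-Net's skip connections use Concatenate, so the decoder sees the encoder features as extra channels.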

  • @matancadeporco
    @matancadeporco  3 years ago

    Anyone with experience in multiclass semantic segmentation who could help me? Truly appreciated.