Object Detection Using YOLOv4-tiny | Part 1

  • Published 13. 09. 2024

Comments • 44

  • @arnavthakur5409
    @arnavthakur5409 6 months ago

    Wonderful, watching again.

  • @pifordtechnologiespvtltd5698

    Perfect

  • @JGamonalLara
    @JGamonalLara 3 years ago

    Nice video. I've watched tons of tutorials about YOLO and I think this is the most comprehensive one, thanks a lot!

  • @freecode.ai-
    @freecode.ai- 3 years ago

    Thank you. I always learn something new from your videos.

  • @Sunil-ez1hx
    @Sunil-ez1hx 1 year ago

    Awesome video mam

  • @nugratasik4137
    @nugratasik4137 1 year ago

    Great video! You got my sub!!!

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago

      Welcome aboard!

    • @nugratasik4137
      @nugratasik4137 1 year ago

      @@CodeWithAarohi How do I handle it when my training stops too early? It says "Tensor Cores are disabled until the first 3000 iterations are reached".

  • @vidhyashree8202
    @vidhyashree8202 3 months ago

    How do I restart the training process once it is interrupted? Can anyone please suggest a solution?
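
    One common approach (a sketch, not from the video, so treat every path below as a placeholder): the AlexeyAB darknet build periodically writes a *_last.weights checkpoint to the backup folder, and training can be restarted from it.

      # Sketch: resume darknet training from the last saved checkpoint.
      # Assumes the AlexeyAB darknet build; the data/cfg/weights paths are placeholders.
      import subprocess

      subprocess.run([
          "./darknet", "detector", "train",
          "data/obj.data",                           # dataset description file
          "cfg/yolov4-tiny-custom.cfg",              # training cfg
          "backup/yolov4-tiny-custom_last.weights",  # checkpoint written during training
          "-dont_show",
      ])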

  • @eljacruz
    @eljacruz 4 months ago

    Need help: everything worked well until the last part, where the image and the bounding boxes should be shown. Only the image is displayed and no objects were "detected".

    • @CodeWithAarohi
      @CodeWithAarohi  4 months ago +1

      Try training your model for more epochs; no bounding boxes means your model hasn't learned yet. Before that, you can try lowering the confidence threshold and check whether your model detects objects at the lower value.

    • @eljacruz
      @eljacruz 4 months ago

      @@CodeWithAarohi I trained it for 6000 iterations and still get no bounding boxes. How do I lower the confidence threshold? Is it the ignore_thresh variable in the test cfg file?
      For additional context, my custom dataset is of rice panicles, and the model should detect each grain in the panicle image. Thanks in advance for the reply.
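
      A note on the threshold question: ignore_thresh in the cfg is a training-time IoU parameter, not the detection confidence cutoff. The confidence threshold is passed at inference time via darknet's -thresh flag (default 0.25). A minimal sketch with placeholder paths:

        # Sketch: rerun detection with a lower confidence threshold.
        # Assumes the AlexeyAB darknet build; all file names below are placeholders.
        import subprocess

        subprocess.run([
            "./darknet", "detector", "test",
            "data/obj.data",
            "cfg/yolov4-tiny-custom.cfg",
            "backup/yolov4-tiny-custom_best.weights",
            "panicle.jpg",          # test image
            "-thresh", "0.1",       # confidence threshold; darknet's default is 0.25
            "-dont_show",
        ])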

  • @soravsingla6574
    @soravsingla6574 11 months ago

    Best YouTube channel / platform to learn Artificial Intelligence, Data Science, Data Analysis, Machine Learning.
    #BestChannel #YouTubeChannel #ArtificialIntelligence #CodeWithAarohi #DataScience #Engineering #MachineLearning #DataAnalysis #BestLearning #LearnDataScience #DataScienceCourse #ArtificialIntelligenceCourse #Codewithaarohi #CodeWithAarohi Code with Aarohi

  • @onah9613
    @onah9613 2 years ago

    Firstly, thank you for the awesome and well-explained video. Secondly, may I ask how you trained your YOLOv4-tiny? And could you provide the link?
    Thank you so much

    • @CodeWithAarohi
      @CodeWithAarohi  2 years ago

      github.com/AarohiSingla/Object-Detection-Using-yolov4-tiny/blob/main/yolov4_tinyimplementation.ipynb

  • @betulsahin5419
    @betulsahin5419 2 years ago

    Thank you for the video. I want to ask: can I use YOLOv4 with OpenCV? My Raspberry Pi is a 3B+ and I don't want to use TensorFlow; because of the size I want to use YOLOv4-tiny. Can you show me the way?
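
    One possible route (a sketch, not from the video): OpenCV's DNN module can read darknet .cfg/.weights files directly, so YOLOv4-tiny can run on a Raspberry Pi without TensorFlow. The file names below are placeholders.

      # Sketch: YOLOv4-tiny inference with OpenCV's DNN module (no TensorFlow needed).
      import cv2

      net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
      model = cv2.dnn_DetectionModel(net)
      model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

      frame = cv2.imread("test.jpg")
      classes, confidences, boxes = model.detect(frame, confThreshold=0.4, nmsThreshold=0.4)
      for (x, y, w, h) in boxes:
          # Draw each detected box on the image.
          cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
      cv2.imwrite("result.jpg", frame)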

  • @rahulrock9750
    @rahulrock9750 1 year ago

    Mam, I got an error at the end while training custom object detection with the YOLOv4-tiny model: it can't open the weight file in the respective location. What might be the reason?

  • @ayeshakhatun3114
    @ayeshakhatun3114 1 year ago

    Where is the dataset?

  • @hinainam5037
    @hinainam5037 2 years ago

    How can I run YOLOv4-tiny for 300 epochs?

  • @sagargu618
    @sagargu618 1 year ago

    What do the values given for the anchor boxes suggest?

    • @CodeWithAarohi
      @CodeWithAarohi  1 year ago +1

      In the yolov4.cfg configuration file, the anchor boxes are defined as a list of comma-separated values.
      Each pair of values is the width and height (in pixels at the network input resolution) of one anchor box.
      For example, the default anchor boxes for YOLOv4 are:
      anchors = 12,16, 19,36, 40,28, 36,75, 76,55, 72,146, 142,110, 192,243, 459,401
      These anchor boxes are used to detect objects at different scales and aspect ratios in the image. The first three pairs (12,16  19,36  40,28) are the smallest anchors and are assigned, via the mask parameter of each [yolo] layer, to the highest-resolution feature map (stride 8) for small objects; the next three pairs are used on the stride-16 map, and the last three pairs on the stride-32 map for the largest objects.
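
      If the anchors need to be tuned to a custom dataset, the AlexeyAB darknet build can recompute them with k-means; a sketch with placeholder paths (YOLOv4-tiny uses 6 anchors, full YOLOv4 uses 9):

        # Sketch: recompute anchors for a custom dataset with darknet's calc_anchors tool.
        # Assumes the AlexeyAB darknet build; data/obj.data is a placeholder.
        import subprocess

        subprocess.run([
            "./darknet", "detector", "calc_anchors",
            "data/obj.data",
            "-num_of_clusters", "6",   # 6 for yolov4-tiny, 9 for yolov4
            "-width", "416",
            "-height", "416",
        ])
        # Copy the printed "anchors = ..." line into each [yolo] section of the cfg.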

    • @sagargu618
      @sagargu618 1 year ago

      @@CodeWithAarohi Thank you for clearing my doubt

  • @shraddhagupta7384
    @shraddhagupta7384 1 year ago

    Could you please help me figure out how we can create a YOLOv4-tiny model for a C# application?

  • @JOHN-vb5bh
    @JOHN-vb5bh 3 years ago

    Mam, how can we use the Detectron2 ResNeXt weights on our mobile phones?

    • @CodeWithAarohi
      @CodeWithAarohi  3 years ago +1

      Hi, I've never tried it, but I think the process will be similar: convert your model into a TensorFlow model and then to a TFLite model, because TFLite models are lightweight models that work on mobiles.
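
      For the TensorFlow-to-TFLite step mentioned above, a minimal sketch (getting from Detectron2/PyTorch to a SavedModel is a separate step; the directory name below is a placeholder):

        # Sketch: convert a TensorFlow SavedModel to a .tflite file for mobile use.
        import tensorflow as tf

        converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
        tflite_model = converter.convert()

        with open("model.tflite", "wb") as f:
            f.write(tflite_model)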

  • @freecode.ai-
    @freecode.ai- 3 years ago

    Would you be willing to discuss YOLOR with DeepSORT on OpenCV? Thanks

  • @abhishekkumarsrivastava7677

    What if I want to create a model with an image size of 224x224? Will specifying this size in the cfg file make darknet resize the input images to the given size and build the model accordingly?

    • @CodeWithAarohi
      @CodeWithAarohi  2 years ago

      Yes, you are correct. You can choose any image size that is a multiple of 32.
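
      Setting a 224x224 input is just a matter of editing width and height in the [net] section of the cfg; a small sketch with a placeholder cfg path:

        # Sketch: set the network input size in the [net] section of a darknet cfg.
        # 224 is valid because it is a multiple of 32; darknet resizes inputs to this size.
        import re

        cfg_path = "cfg/yolov4-tiny-custom.cfg"   # placeholder path
        with open(cfg_path) as f:
            cfg = f.read()

        cfg = re.sub(r"^width=\d+", "width=224", cfg, count=1, flags=re.M)
        cfg = re.sub(r"^height=\d+", "height=224", cfg, count=1, flags=re.M)

        with open(cfg_path, "w") as f:
            f.write(cfg)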

  • @abhishekkumarsrivastava7677

    After training with darknet on a traffic-sign dataset, it generates weight files of approx. 244 MB. This size is not suitable for embedded applications. How can I reduce the weight size to less than 5 MB?

    • @CodeWithAarohi
      @CodeWithAarohi  2 years ago

      Convert your darknet model into a TensorFlow model and then to TFLite; you will get a lightweight model.

    • @abhishekkumarsrivastava7677
      @abhishekkumarsrivastava7677 2 years ago

      @@CodeWithAarohi Will the 244 MB model be reduced to less than 5 MB after conversion, or do I need to modify my input image size, dataset size, etc.? I am asking because in your second video your weight file was 22 MB, but in my case the weight file is 224 MB, so I am doubtful that TFLite could do this much reduction in model size!

    • @abhishekkumarsrivastava7677
      @abhishekkumarsrivastava7677 2 years ago

      @@CodeWithAarohi How come the converted TensorFlow model is approx. 2 MB and the TFLite model is 23 MB? It should have been the other way around, I guess! I wanted to use TFLite on microcontrollers but this model size is pretty heavy. Please comment.

    • @CodeWithAarohi
      @CodeWithAarohi  2 years ago

      @@abhishekkumarsrivastava7677 The TensorFlow model should be greater in size compared to the TFLite model. Second, you can reduce the size of the TFLite model through quantization.
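
      A sketch of that quantization step during TFLite conversion (the SavedModel directory is a placeholder). Float16 typically roughly halves the file size, and dynamic-range quantization cuts it to roughly a quarter of float32:

        # Sketch: post-training quantization while converting to TFLite.
        import tensorflow as tf

        converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        converter.target_spec.supported_types = [tf.float16]  # omit for dynamic-range quantization
        tflite_model = converter.convert()

        with open("model_quant.tflite", "wb") as f:
            f.write(tflite_model)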

    • @abhishekkumarsrivastava7677
      @abhishekkumarsrivastava7677 2 years ago

      @@CodeWithAarohi I have rechecked my process multiple times: I am creating YOLOv4-tiny model weights using darknet and then converting to a TFLite model using the instructions given in the video. For some strange reason, my TFLite model file is about the same size as the YOLOv4-tiny weights. Even after quantization to float16, the TFLite model is larger than the YOLOv4-tiny weights. Can you please direct me to where I might be going wrong, or where I should look to resolve this issue?