DTU RoboCup 2022 - Wild Willy - First run with onboard camera overlay

  • Published 28 Apr 2022
  • Data from the robot's onboard camera are shown as an overlay. The red bar at the bottom shows the detected line used for line following. The camera operates at 10 fps at 320×240.
    The robot is built on a rocker-bogie design, which makes stair climbing quite easy. Unfortunately, the robot didn't go up the stairs this year because the wooden floor was too slippery.
    All parts of the robot rover are 3D printed. A model of the robot can be found here: www.printables.com/model/1942...
  • Science & Technology

Comments • 7

  • @McGivrer
    2 years ago +1

    Your robot is incredible! Which kind of image analysis software do you use for white line detection? (Based on the TensorFlow Google software or something else?)

    • @wildwillyrobots
      2 years ago

      It is very basic thresholding with some tweaks to make it independent of the overall light in the image. I would like to do something more advanced, because it has some problems; for example, in the video from 2:22-2:30 it actually does not detect the line all the time due to the flare on the floor.
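
      The reply only says "basic thresholding with some tweaks"; one common way to make a threshold independent of overall lighting is to set it relative to the row's own mean brightness. A minimal sketch (the function name, the factor `k`, and the whole approach are assumptions, not the author's actual code):

      ```python
      import numpy as np

      def detect_line(row, k=1.3):
          """Find a white line in one image row by thresholding relative
          to the row's own mean brightness, so the result is roughly
          independent of overall lighting (hypothetical sketch; the
          real RoboCup code is not published)."""
          thresh = k * row.mean()          # adaptive threshold per row
          mask = row > thresh
          if not mask.any():
              return None                  # line lost, e.g. washed out by flare
          return np.flatnonzero(mask).mean()  # line centre in pixels

      # Example: a bright 20-pixel line on a darker floor in a 320-pixel row
      row = np.full(320, 50.0)
      row[150:170] = 200.0
      print(detect_line(row))  # centre of columns 150..169 -> 159.5
      ```

      A glare patch that lifts the whole row's brightness also raises the threshold, which is the point of the tweak; a flare that is itself brighter than the line (as in the 2:22-2:30 clip) would still defeat it.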

  • @archerlee8091
    a year ago

    WOW

  • @barknvar3902
    a year ago

    Hi, I'm a big fan of your robot designs and the algorithms. Is your code publicly available? I would really like to read it and understand the control mechanisms.

    • @wildwillyrobots
      a year ago +1

      The code is currently a mess, so I haven't shared it. It also has various dependencies, so it is a bit hard to get running. But overall I have low-level control on a Raspberry Pi Pico, which controls the speed of the wheels and the servos. Then I have a higher-level program running on a Raspberry Pi 4. It does the image analysis for detecting obstacles and calculates the required speed and position of the wheels, which is then sent to the Pico.
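
      The split described here is a common two-tier setup: the Pi 4 computes wheel targets and sends them to the Pico for low-level motor control. A sketch of what the Pi-side message could look like (the message format, field order, and class name are pure assumptions; the actual protocol is not published):

      ```python
      from dataclasses import dataclass

      @dataclass
      class WheelCommand:
          """Hypothetical command from the Pi 4 to the Pico:
          per-wheel speeds (m/s) and steering-servo angles (deg)."""
          speeds: list
          angles: list

          def encode(self) -> bytes:
              # Simple line-based format the Pico could parse
              # field by field (assumed, for illustration only).
              fields = [f"{v:.3f}" for v in self.speeds + self.angles]
              return (",".join(fields) + "\n").encode()

      # Example: two wheel speeds plus two servo angles
      cmd = WheelCommand(speeds=[0.2, 0.2], angles=[5.0, -5.0])
      print(cmd.encode())  # b'0.200,0.200,5.000,-5.000\n'
      ```

      In practice such bytes would go over UART or USB serial to the Pico, which would keep a tight motor-control loop independent of the slower vision pipeline on the Pi 4.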

    • @barknvar3902
      a year ago

      @@wildwillyrobots Is it just simple feedback from vision that controls the motor velocities, or do you calculate position data to drive the robot to a position?

    • @wildwillyrobots
      a year ago

      @@barknvar3902 Image analysis finds the line at the bottom of the image. Then a P-controller is used to calculate the steering direction. It is just the error (deviation from the centre of the image) times a constant. This gives a steering direction, and from this the speed and orientation of all wheels are calculated. So it is just a controller trying to keep the line in the centre of the image.
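
      The P-controller described here is a one-liner: steering is the line's deviation from the image centre times a gain. A minimal sketch using the 320-pixel width mentioned in the description (the gain value and function name are illustrative assumptions):

      ```python
      IMG_WIDTH = 320  # camera resolution stated in the video description

      def steering_from_line(line_x, kp=0.01):
          """P-controller: steering command proportional to the line's
          deviation from the image centre. kp is an illustrative gain,
          not the value used on the robot."""
          error = line_x - IMG_WIDTH / 2  # pixels off-centre
          return kp * error               # steering direction (arbitrary units)

      print(steering_from_line(160))  # line centred -> 0.0
      print(steering_from_line(210))  # line 50 px right -> 0.5
      ```

      With only a P-term there is no integral or derivative action, which matches the reply: the robot simply steers harder the further the line drifts from the centre of the image.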