
Triangulation for Image Pairs (Cyrill Stachniss)

  • Uploaded 17. 08. 2024

Comments • 28

  • @nigelpluto3443 · 1 year ago · +4

    thanks for making this nice explanation public and freely accessible

  • @letatanu · 3 years ago · +6

    I have watched almost all the videos from Prof. Stachniss. Thank you for your lecture.

  • @SpatialAIKR · 3 years ago · +8

    I think the equations at 9:01 are incorrect. Even though Professor Cyrill indicated that the equation (f-g)⋅r should be (g-f)⋅r, the following equations are still off. I believe they should be (q + μs - p - λr)⋅s = 0 and (q + μs - p - λr)⋅r = 0.

    • @eigenb6455 · 3 years ago · +2

      You're right, although the left vector should still be (f - g) to stay consistent with the two equations on slide 11, where the real parameters are substituted in; those two equations are correct. The left vectors in the slide-9 equations should have been (f - g) = (p + λr - (q + μs)).

  • @eigenb6455 · 3 years ago · +9

    Shouldn't the right-hand side of the matrix form of the equation at 12:02 be one column vector? The "] [" brackets between the transposed vectors and r, s shouldn't be there.

    • @CyrillStachniss · 1 year ago · +2

      Well spotted. This is a mistake on the slides. On the right-hand side of the lower equations, the "] [" must be removed; otherwise we would not get the desired 2D vector. Thanks for pointing out this mistake.
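
(The construction debated in this thread can be sketched in code. Assuming rays f(λ) = p + λr and g(μ) = q + μs, as on the slides, the orthogonality conditions (f - g)⋅r = 0 and (f - g)⋅s = 0 expand to exactly the 2×2 linear system of the corrected matrix form. This is a minimal illustrative NumPy sketch, not the lecture's implementation.)

```python
import numpy as np

def midpoint_triangulation(p, r, q, s):
    """Closest-point (midpoint) triangulation of two rays.

    Rays: f(lam) = p + lam*r and g(mu) = q + mu*s.
    The conditions (f - g).r = 0 and (f - g).s = 0 expand to the
    2x2 system  A [lam, mu]^T = b  solved below.
    """
    A = np.array([[r @ r, -(s @ r)],
                  [r @ s, -(s @ s)]])
    b = np.array([(q - p) @ r,
                  (q - p) @ s])
    lam, mu = np.linalg.solve(A, b)
    f = p + lam * r          # closest point on the first ray
    g = q + mu * s           # closest point on the second ray
    return 0.5 * (f + g)     # midpoint between the two rays
```

For intersecting rays the midpoint coincides with the intersection; for skew rays (the usual noisy case) it is the point with minimal distance to both rays.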

  • @byynee · 3 years ago · +1

    Finally it’s here. Was waiting for this!

  • @CyrillStachniss · 8 months ago

    Corrections:
    11:56 Mistake in the brackets in the last row, right hand side. Remove inner ][

  • @durandthibaud9445 · 3 years ago · +1

    Again, this comes exactly when I need it 👌

  • @vikasshetty6725 · 3 years ago · +1

    Very well explained. Waiting for a video on sensor fusion of camera images and 3D point clouds.

  • @senthilpalanisamy151 · 7 months ago

    @CyrillStachniss Thanks for the great video, professor. I have a question on the quality of triangulation: is there a way I can estimate the uncertainty, or the covariance matrix, of the triangulated point? The lines may not intersect perfectly (due to noise in the relative poses), and the pixel sizes could define a larger unprojected area. Is there any source where I can learn how to encode this uncertainty as a covariance matrix? You show this for the two-view case; is there a way to estimate it for the multiview case?

  • @afaqsaeed622 · 2 years ago · +2

    How can one find the camera constant c for a real camera during the calibration process? It would be a great help if anyone could answer that.

    • @CyrillStachniss · 2 years ago · +1

      See the video on camera calibration (Zhang's method) in my list of videos

  • @AliDeeb-wh3il · 4 months ago

    Thank you, Cyrill, for this streamlined explanation. May I ask for the name of the reference or paper you took this from?

  • @childhoodgames1712 · 3 years ago

    D. Cyrill, is this algorithm used to generate the DSM (dense surface model) as a point cloud? If not, which one do photogrammetric software packages such as PhotoModeler use?

  • @nazaninsafavian1026 · 2 years ago

    Thank you for the great lecture. I think the Matlab implementation of triangulation uses SVD, which is a linear solution. Do you know of any other implementation that offers a non-linear solution for triangulation, one you have perhaps used in your lab?
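
(For readers wondering what the SVD-based linear solution mentioned above typically looks like: a common form is the direct linear transform (DLT), which stacks two equations per view and takes the right singular vector belonging to the smallest singular value. This is a generic, hedged sketch assuming 3×4 projection matrices P1, P2 and pixel coordinates x1, x2, not the Matlab code in question; a non-linear alternative would refine this estimate by minimizing the reprojection error.)

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear triangulation via SVD (DLT).

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Each view contributes two rows, u*P[2] - P[0] and v*P[2] - P[1];
    the homogeneous 3D point is the null vector of the stacked system.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]          # right singular vector of the smallest singular value
    return X[:3] / X[3]  # dehomogenize
```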

  • @nazaninsafavian1026 · 2 years ago

    Another question I had is about hand-eye calibration. I tried to capture images of a pattern and record the position of the robot at the same time. I expected to get one fixed result, but that is not the case! Obviously the transformation between the robot and camera coordinate systems is fixed, but I think it can differ slightly in the x element of the translation because it depends on the focal length. I captured images over a range of movements (1-4 cm from the pattern), but the estimated transformation seems to give the best result only in the middle of the range. Would you please shed some light on this? I cannot end up with an estimated transformation that gives good results at different distances!

  • @janghopark4637 · 1 year ago

    I have a question on "absolute orientation". If we can estimate 3D points from a stereo camera (i.e., we know the baseline) and have control points w.r.t. the global frame, then we don't need to estimate the "scale" parameter, right? In this case, 6 DoF?

    • @CyrillStachniss · 1 year ago

      If the baseline is perfect, no. Otherwise a scale correction can be useful.

  • @davidbowman8285 · 3 years ago

    Thank you so much Professor.

  • @sags · 11 months ago

    Here you mention that we get the 3D points in the local frame: czcams.com/video/UZlRhEUWSas/video.html. However, we don't have the scale information from the essential matrix estimation until we get the control points. Am I missing something?

    • @CyrillStachniss · 8 months ago

      For the photogrammetric model, we do not have the scale. If we use a stereo setup with a known baseline, we have a good estimate. Thus, it depends on the precise camera setup.

  • @fabricenoreils3907 · 3 years ago

    I really like these courses, but can someone tell me why there are advertisements every 3 to 4 minutes? It is really annoying and was not the case before...

  • @darkside3ng · 3 years ago

    Amazing!!!

  • @AmirSepasi · 3 years ago

    I didn't like this lecture. Many points in it were not as clear as they were in other lectures.