Dyson Robotics Laboratory at Imperial College
COMO: Compact Mapping and Odometry
Project page: edexheim.github.io/como/
Paper: edexheim.github.io/como/pdf/como.pdf
Video for "COMO: Compact Mapping and Odometry" by Eric Dexheimer and Andrew J. Davison.
Dyson Robotics Lab, Imperial College London.
503 views

Videos

U-ARE-ME: Uncertainty-Aware Rotation Estimation in Manhattan Environments
519 views, a month ago
Accompanying video for U-ARE-ME: Uncertainty-Aware Rotation Estimation in Manhattan Environments Aalok Patwardhan*, Callum Rhodes*, Gwangbin Bae, and Andrew J. Davison. * denotes equal contribution Project page: callum-rhodes.github.io/U-ARE-ME/ Abstract: Camera rotation estimation from a single image is a challenging task, often requiring depth data and/or camera intrinsics, which are generall...
[CVPR 2024] Rethinking Inductive Biases for Surface Normal Estimation
1.7K views, 2 months ago
IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024 Project Page: baegwangbin.github.io/DSINE/ Paper Link: github.com/baegwangbin/DSINE/raw/main/paper.pdf Authors: Gwangbin Bae and Andrew J. Davison Organisation: Dyson Robotics Laboratory, Imperial College London
Fit-NGP: Fitting Object Models to Neural Graphics Primitives
378 views, 3 months ago
Project page: marwan99.github.io/Fit-NGP/ Paper link: arxiv.org/abs/2401.02357 ICRA 2024 Authors: Marwan Taher, Ignacio Alzugaray, Andrew J. Davison Dyson Robotics Lab, Imperial College London
ICRA 2023 collection
461 views, 3 months ago
A collection of videos from 2014-2023
[CVPR 2024] SuperPrimitive: Scene Reconstruction at a Primitive Level
833 views, 3 months ago
IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024 Project Page: makezur.github.io/SuperPrimitive/ Code: github.com/makezur/super_primitive Paper Link: makezur.github.io/SuperPrimitive/assets/pdf/SuperPrimitive.pdf Authors: Kirill Mazur, Gwangbin Bae, Andrew J. Davison Organisation: Dyson Robotics Laboratory, Imperial College London
[CVPR'24 Highlight] Gaussian Splatting SLAM
19K views, 5 months ago
IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024 (Highlight) Project Page: rmurai.co.uk/projects/GaussianSplattingSLAM/ Paper Link: www.imperial.ac.uk/media/imperial-college/research-centres-and-groups/dyson-robotics-lab/hide-et-al_GaussianSplattingSLAM_Dec2023.pdf Code: github.com/muskie82/MonoGS Authors: Hidenobu Matsuki*, Riku Murai*, Paul H.J. Kelly, Andrew J. Davison (*e...
GBP Planner Code Tutorial
525 views, 7 months ago
Tutorial on running the code for the GBP Planner Project page: aalpatya.github.io/gbpplanner Code available at github.com/aalpatya/gbpplanner! This video covers: 00:00:00 Introduction 00:01:15 Code download and installation 00:04:22 Running and Interacting with the Simulation 00:06:38 Overview of relevant parts of the code 00:09:53 Create your own scenario/formation 00:20:26 Create your own obs...
A Distributed Multi-Robot Framework for Exploration, Information Acquisition and Consensus
499 views, 7 months ago
Authors: Aalok Patwardhan, Andrew J. Davison. Dyson Robotics Lab, Imperial College London. Under review for ICRA 2024
vMAP: Vectorised Object Mapping for Neural Field SLAM
1.9K views, a year ago
Project Page: kxhit.github.io/vMAP vMAP: Vectorised Object Mapping for Neural Field SLAM Authors: Xin Kong, Shikun Liu, Marwan Taher, Andrew J. Davison Organisation: Dyson Robotics Lab, Imperial College London CVPR 2023 Paper Link: arxiv.org/abs/2302.01838 Code Link: github.com/kxhit/vMAP
Learning a Depth Covariance Function
503 views, a year ago
Project page: edexheim.github.io/depth_cov/ Paper: arxiv.org/abs/2303.12157 Video for "Learning a Depth Covariance Function" by Eric Dexheimer and Andrew J. Davison. Dyson Robotics Lab, Imperial College London. To be presented at CVPR 2023.
The GBP Planner || Distributing Collaborative Multi-Robot Planning with Gaussian Belief Propagation
1.1K views, a year ago
Authors: Aalok Patwardhan, Riku Murai, Andrew J. Davison Dyson Robotics Lab, Imperial College London Published in Robotics and Automation Letters (RA-L) (doi: 10.1109/LRA.2022.3227858) Preprint Paper: arxiv.org/abs/2203.11618
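
The planner above is built on Gaussian Belief Propagation (GBP). As a rough illustration of the message-passing pattern only (not the planner's actual implementation, which is the C++ code linked from the project page), here is a toy sketch of GBP on a chain of scalar variables in information form; the chain length, priors, and precisions are invented:

```python
# Toy Gaussian Belief Propagation on a chain of scalar variables.
# Priors anchor the two endpoints; pairwise "smoothness" factors with
# precision w link neighbours. Messages are Gaussians stored in
# information form (eta = precision * mean, lam = precision).

N = 5
prior = {0: (0.0, 10.0), N - 1: (4.0, 10.0)}  # variable -> (mean, precision)
w = 1.0                                       # pairwise factor precision

def neighbours(i):
    return [j for j in (i - 1, i + 1) if 0 <= j < N]

# msgs[(i, j)]: message sent towards variable j along edge (i, j)
msgs = {(i, j): (0.0, 0.0) for i in range(N) for j in neighbours(i)}

def belief(i):
    """Prior (if any) combined with all incoming messages."""
    mu0, lam0 = prior.get(i, (0.0, 0.0))
    eta = mu0 * lam0 + sum(msgs[(j, i)][0] for j in neighbours(i))
    lam = lam0 + sum(msgs[(j, i)][1] for j in neighbours(i))
    return eta, lam

for _ in range(20):  # synchronous message-passing sweeps
    new = {}
    for (i, j) in msgs:
        # "Cavity": everything i knows except what j previously told it.
        eta_i, lam_i = belief(i)
        eta_i -= msgs[(j, i)][0]
        lam_i -= msgs[(j, i)][1]
        # Pass through the factor w * (x_j - x_i)^2, marginalising x_i out.
        new[(i, j)] = (w * eta_i / (w + lam_i), w - w**2 / (w + lam_i))
    msgs = new

for i in range(N):
    eta, lam = belief(i)
    print(f"x{i}: mean {eta / lam:.3f}, precision {lam:.3f}")
```

On this tree-structured toy graph the sweeps converge to the exact marginals, with the interior means interpolating between the two anchored endpoints; the appeal for multi-robot planning is that every update above is local to a single edge, so the computation can be distributed across robots.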
Real-time Mapping of Physical Scene Properties with an Autonomous Robot Experimenter
635 views, a year ago
An autonomous robot experimenter discovers and maps dense physical scene properties by providing the outcomes of sparse experiments (a poke, a spectroscopy measurement or a lateral push) to a 3D neural field. CoRL 2022 (oral) ihaughton.github.io/RobE/ arxiv.org/abs/2210.17325
Feature-Realistic Neural Fusion for Real-Time, Open Set Scene Understanding
1.3K views, a year ago
Project Page: makezur.github.io/FeatureRealisticFusion/ Paper: arxiv.org/abs/2210.03043 Authors: Kirill Mazur, Edgar Sucar, Andrew Davison Organisation: Dyson Robotics Lab, Imperial College London
BodySLAM: Joint Camera Localisation, Mapping, and Human Motion Tracking
1.3K views, a year ago
BodySLAM: Joint Camera Localisation, Mapping, and Human Motion Tracking Authors: Dorian F. Henning, Tristan Laidlow and Stefan Leutenegger To appear in: European Conference on Computer Vision (ECCV) 2022 Paper: arxiv.org/abs/2205.02301 Abstract: Estimating human motion from video is an active research area due to its many potential applications. Most state-of-the-art methods predict human shape...
From Scene Flow to Visual Odometry through Local and Global Regularisation in Markov Random Fields
731 views, 2 years ago
iLabel: Interactive Neural Scene Labelling
2.9K views, 2 years ago
ReorientBot: Learning Object Reorientation for Specific-Posed Placement
927 views, 2 years ago
SafePicking: Learning Safe Object Extraction via Object-Level Mapping
564 views, 2 years ago
CodeMapping: Real-Time Dense Mapping for Sparse SLAM using Compact Scene Representations
3K views, 2 years ago
SIMstack: A Generative Shape and Instance Model for Unordered Object Stacks
587 views, 3 years ago
In-Place Scene Labelling and Understanding with Implicit Scene Representation
7K views, 3 years ago
iMAP: Implicit Mapping and Positioning in Real-Time
9K views, 3 years ago
End-to-End Egospheric Spatial Memory
538 views, 3 years ago
NodeSLAM: Neural Object Descriptors for Multi-View Shape Reconstruction
4.3K views, 4 years ago
MoreFusion: Multi-object Reasoning for 6D Pose Estimation from Volumetric Fusion
4.3K views, 4 years ago
Comparing View-Based and Map-Based Semantic Labelling in Real-Time SLAM
698 views, 4 years ago
RLBench: The Robot Learning Benchmark
340 views, 4 years ago
DeepFactors: Real-Time Probabilistic Dense Monocular SLAM
8K views, 4 years ago
Learning One-Shot Imitation from Humans without Humans
255 views, 4 years ago

Comments

  • @torquebiker9959
    @torquebiker9959 8 days ago

    crazy!!!

  • @mg4340
    @mg4340 a month ago

    God among gods!

  • @JustFor-dq5wc
    @JustFor-dq5wc a month ago

    Great job! Is there a way to remove the blue and red glow on the sides? Something like an "indoor, outdoor, object" option for the light source. Edit: Never mind. For Unity Engine I have to use a DepthMap and create a NormalMap from grayscale in the Editor for best results.

  • @dibbidydoo4318
    @dibbidydoo4318 2 months ago

    so... where's the code?

    • @hidenobumatsuki6981
      @hidenobumatsuki6981 2 months ago

      Thank you for your interest in our work. We are finalising the code now; it will be released in the next few days.

    • @TheDozman
      @TheDozman a month ago

      For now you get Pringles.

  • @callumrhodes3026
    @callumrhodes3026 2 months ago

    These might be the crispiest normals I have ever seen!

  • @HoiDooLi
    @HoiDooLi 2 months ago

    Will the code be released?

  • @Bekabai
    @Bekabai 3 months ago

    no sound

  • @al-to2sx
    @al-to2sx 4 months ago

    Can't watch this right after waking up. Saw a face in the thumbnail.

  • @pedroantonio5031
    @pedroantonio5031 4 months ago

    How much better does the method work with binocular vision (camera glasses)?

  • @xyeB
    @xyeB 4 months ago

    Nice!

  • @railgap
    @railgap 4 months ago

    You should talk to some SAR people; you have very similar challenges and methods. ;)

  • @pooip2169
    @pooip2169 5 months ago

    Cool

  • @haikeye1425
    @haikeye1425 5 months ago

    Welcome to our New UE5 Plugin: "UEGaussianSplatting: 3D Gaussian Splatting Rendering Feature For UnrealEngine 5" czcams.com/video/4xTEyz9bx5E/video.html

  • @Instant_Nerf
    @Instant_Nerf 5 months ago

    Time to render the entire planet

    • @colinhoek
      @colinhoek 2 months ago

      In the future, when everyone has VR glasses with this technique built in, we will slowly render the entire world.

  • @Instant_Nerf
    @Instant_Nerf 5 months ago

    So splats in real time? Wow

  • @synchro-dentally1965
    @synchro-dentally1965 5 months ago

    Nice. How well does it work with reflections?

  • @ChangLiu
    @ChangLiu 5 months ago

    Is the code planned to be released soon?

  • @manu.vision
    @manu.vision 5 months ago

    Incredible!

  • @drakefruit
    @drakefruit 5 months ago

    the cool stuff never has a demo link

  • @jurandfantom
    @jurandfantom 8 months ago

    Hi there, any updates on the project? Thank you :)

  • @Galbizzim
    @Galbizzim a year ago

    so impressive!!

  • @repositorytutorial3d50

    Wow! Is there any official publication that describes the math behind it? PS: what approach did you use to produce an accurate height map from the normal map? The ones I find always flatten high-frequency details or give locally valid but globally wrong results.
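
On the height-from-normals question above: one classical, globally consistent option is Frankot-Chellappa least-squares integration in the Fourier domain. Below is a minimal numpy sketch of that generic technique, assuming unit normals with positive z; it is not code from this channel:

```python
import numpy as np

def integrate_normals(normals):
    """Frankot-Chellappa: least-squares height map from a unit normal map.

    normals: (H, W, 3) array of unit normals (nx, ny, nz) with nz > 0.
    Returns a height map defined up to an unknown constant offset.
    """
    nz = np.clip(normals[..., 2], 1e-6, None)
    p = -normals[..., 0] / nz           # surface gradient z_x
    q = -normals[..., 1] / nz           # surface gradient z_y
    h, w = p.shape
    u = 2 * np.pi * np.fft.fftfreq(w)[None, :]
    v = 2 * np.pi * np.fft.fftfreq(h)[:, None]
    denom = u**2 + v**2
    denom[0, 0] = 1.0                   # avoid 0/0 at the DC term
    Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                       # absolute height is unobservable
    return np.real(np.fft.ifft2(Z))
```

Because the fit is solved jointly over all frequencies, the result is globally consistent rather than locally integrated, which addresses the "locally valid but globally wrong" failure mode the comment mentions; the trade-off is that sharp depth discontinuities get smoothed, since the normals are assumed to come from a continuous surface.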

  • @O_A_Koroleva
    @O_A_Koroleva a year ago

    Good day. Did you build the robot or not?

  • @daasfaas1116
    @daasfaas1116 a year ago

    Hey, can you describe the algorithm, or at least point to the algorithm's sources, please?

  • @user-zd1jk7cl3y
    @user-zd1jk7cl3y a year ago

    Is the code available?

  • @synapticaxon9303
    @synapticaxon9303 2 years ago

    Object detection on the keyframes would probably integrate nicely to form 3D (or 4D!) semantic bounding boxes...

  • @guanbinhuang3027
    @guanbinhuang3027 2 years ago

    Is the code available?

  • @mattanimation
    @mattanimation 2 years ago

    vacuum: $600, arm: $15k, reorientation of cheezit boxes: priceless

  • @francoisrameau4900
    @francoisrameau4900 2 years ago

    Very impressive work! Congratulations to the team

  • @user-do3do7qr7p
    @user-do3do7qr7p 2 years ago

    Great work! Is the code open, and where can I find it?

  • @kunzhang7654
    @kunzhang7654 2 years ago

    Dear Dr. Zhi, great work, and congratulations on being accepted to ICCV 2021 with an oral presentation! I was trying to contact you by e-mail, but it seems that your address could not be reached. Could you provide the camera trajectories you used in the Replica dataset? Meanwhile, any plan for releasing the code? Thanks a lot, and looking forward to your reply!

    • @zhishuaifeng3342
      @zhishuaifeng3342 2 years ago

      Hi Kun. Thank you for your interest in our work. I am sorry, I have been busy writing my thesis. My email address should work well right now; I am not sure if it was some weird server issue. If you cannot contact me via my Imperial email, you can also drop me a message at z.shuaifeng@foxmail.com if you like. I will release the rendered Replica sequences after the upcoming thesis deadline, and sorry for the delay.

  • @longvuong2600
    @longvuong2600 2 years ago

    Very Impressive!!!

  • @kotai2003
    @kotai2003 3 years ago

    Very fast.

  • @kwea123
    @kwea123 3 years ago

    Too much information on each slide, and the slides are switched too quickly... it makes the reader have to constantly stop the video to read...

    1. The pixel denoising and region denoising results are counter-intuitive to me. With a 90% chance of corruption, the same 3D point has so little chance of being "consistent" across views. How can the model fuse the information, which is totally random in each view? Region-wise denoising is much more reasonable, because only a few images are perturbed, so the same chair has a higher probability of having the same label across views. The quantitative results for pixel-wise denoising are therefore intriguing: how can it be better than region-wise denoising, despite having more noise? With 90% pixel noise I'd expect the chairs to be 90% wrong as well, resulting in a lot more noise than in the region-wise noise experiment...

    2. The results for super-resolution and label propagation are also confusing. Sparse labels with S=16 basically mean 1/256 ≈ 0.4% of pixels per frame, and in this case the ground class is likely to be dominant, and some small classes might not be sampled at all. Why is the mIoU better than for label propagation, where at least all classes are sampled once, with 1% of pixels?

    Did I misunderstand anything? Thank you

    • @zhishuaifeng3342
      @zhishuaifeng3342 3 years ago

      Hi kwea123 (AI葵), thank you for your interest and feedback. I have also learned a lot from your NeRF tutorial videos, which are very helpful. I agree that the information in this video is a bit dense; we have tried to keep a good balance between video length and presentation experience. I could possibly put a longer version on the project page so that people can better follow the details.

    • @zhishuaifeng3342
      @zhishuaifeng3342 3 years ago

      About pixel-wise denoising: the performance of the pixel-denoising task is quite surprising at first glance, especially since some fine structures can be well preserved. In the denoising task, we randomly vary the labels of a randomly selected 90% of the pixels in each training label image. In my opinion, several factors make this happen: (1) Coherent consistency and smoothness within NeRF, together with the view-invariant property of semantics, are the key. (2) The underlying geometry and appearance play a very important role, so pixels with similar texture and geometry tend to get the same class. The photometric loss is important here as an auxiliary loss. I personally think the denoising task here is a "NeRF-CRF", given that a CRF also refines semantics by modelling similarity in geometry and appearance in an explicit way. (3) There are still, on average, 10% of pixels unchanged per frame, and in addition a 3D position may have a corrupted label in one view but a correct label in another view. I also tried a 95% or even higher noise ratio, and as expected the fine structures become much harder to recover, with less accurate boundaries, etc. The quantitative results do not aim to show which task is easier or harder in any sense, but mainly to show that Semantic-NeRF has the ability to recover from these noisy labels. Note that the evaluation is computed using full label frames, including chairs and all other classes.
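
A toy numpy sketch of point (3) above: corrupt 90% of the pixel labels independently per view, then fuse with a per-pixel majority vote, a crude stand-in for the fusion Semantic-NeRF performs implicitly. The class count, view count, and image size here are made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(labels, num_classes, ratio=0.9):
    """Reassign a random `ratio` of the pixel labels, as described above."""
    noisy = labels.copy()
    idx = rng.choice(labels.size, size=int(ratio * labels.size), replace=False)
    noisy.flat[idx] = rng.integers(0, num_classes, size=idx.size)
    return noisy

C, V, H, W = 28, 30, 64, 64             # classes, views, image size (made up)
gt = rng.integers(0, C, size=(H, W))    # stand-in "true" semantic labels
views = np.stack([corrupt(gt, C) for _ in range(V)])

# Per-pixel majority vote across views (crude proxy for multi-view fusion).
counts = np.apply_along_axis(np.bincount, 0, views, minlength=C)  # (C, H, W)
fused = counts.argmax(axis=0)

print("single-view accuracy:", (views[0] == gt).mean())
print(f"majority vote over {V} views:", (fused == gt).mean())
```

Even this naive vote recovers far more than the roughly 13% single-view accuracy, which is the intuition behind factor (3); the NeRF additionally exploits the geometry and appearance cues of factors (1) and (2).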

    • @zhishuaifeng3342
      @zhishuaifeng3342 3 years ago

      It is true that a larger scaling factor (x16, x32) risks missing tiny structures. And we do indeed observe, for example, that the prediction of window frames (red) around blinds (purple) in SPx8 is more accurate than in SPx16. Again, the tables are not meant to compare these two tasks, but to show the capability of Semantic-NeRF. A better way to think about super-resolution and propagation is how they sample the sparse/partial labels. Super-resolution (e.g., SPx16) sparsely decimates label maps following a regular grid pattern with a spacing of 16 pixels, while label propagation (LP) selects a "seed" randomly from each class per frame.

    • @zhishuaifeng3342
      @zhishuaifeng3342 3 years ago

      In SP, a class/instance larger than 16 pixels is very likely to be sampled at least once (i.e., to have one or more seeds on it). I therefore think the main difference is the coverage of seeds: SP spreads the seeds within a class, while LP learns from more labels within a local proximity. This is also one of the reasons why the prediction of the light (pink) on the ceiling (yellow) in SP has better quality (Figs. 7 and 10) than in LP (Fig. 8), partly because the appearance and geometry of the light and the ceiling are too similar for LP to interpolate, and the spread of seeds in SP helps.
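
The two sampling schemes contrasted above are easy to mimic. A small numpy sketch; the function names and sizes are mine, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def sp_mask(labels, stride=16):
    """SPx16-style sampling: keep labelled pixels on a regular grid."""
    mask = np.zeros(labels.shape, dtype=bool)
    mask[::stride, ::stride] = True
    return mask

def lp_mask(labels):
    """LP-style sampling: keep one random 'seed' pixel per class per frame."""
    mask = np.zeros(labels.shape, dtype=bool)
    for c in np.unique(labels):
        ys, xs = np.nonzero(labels == c)
        k = rng.integers(len(ys))
        mask[ys[k], xs[k]] = True
    return mask

labels = rng.integers(0, 28, size=(480, 640))  # toy label map
print("SP keeps", 100 * sp_mask(labels).mean(), "% of pixels")  # ~0.39%
print("LP keeps", lp_mask(labels).sum(), "seed pixels")
```

SP's grid spreads seeds across the whole frame regardless of class, while LP guarantees one seed per class; that difference in seed coverage is exactly the contrast drawn in the reply above.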

    • @zhishuaifeng3342
      @zhishuaifeng3342 3 years ago

      I hope this information and my understanding are helpful. If you have any further questions, please feel free to discuss via email.

  • @vanwespe2165
    @vanwespe2165 3 years ago

    Great stuff!

  • @lucillaberwick2439
    @lucillaberwick2439 3 years ago

    Impressive work. What visualiser do you use?

  • @iraklistaxiarchis6980

    Amazing work!

  • @abdelrahmanwaelhelaly1871

    Any updates on the code release?

  • @luckyrevenge8211
    @luckyrevenge8211 3 years ago

    What program do you use for the simulation? Thanks!

  • @zebulonliu5607
    @zebulonliu5607 3 years ago

    Where is the source code?

  • @xinyangzhao8057
    @xinyangzhao8057 3 years ago

    Very impressive!!!

  • @user-zt8xy4gs4u
    @user-zt8xy4gs4u 3 years ago

    Awesome!

  • @ArghyaChatterjeeJony
    @ArghyaChatterjeeJony 3 years ago

    Great work.

  • @xingxingzuo7028
    @xingxingzuo7028 3 years ago

    So promising!

  • @mattanimation
    @mattanimation 4 years ago

    quite impressive.

  • @BhargavBardipurkar42
    @BhargavBardipurkar42 4 years ago

    wow👏👏

  • @ymxlzgy
    @ymxlzgy 4 years ago

    Nice work!!

  • @Ruhgtfo
    @Ruhgtfo 4 years ago

    Tutorial?

  • @Abd0z
    @Abd0z 4 years ago

    Excellent work! Should we expect a public implementation or release of this approach?

    • @dysonroboticslaboratoryati9846
      @dysonroboticslaboratoryati9846 4 years ago

      Thank you! We will start working on releasing the code as soon as we finalise the RA-L journal publication.

    • @Abd0z
      @Abd0z 4 years ago

      @dysonroboticslaboratoryati9846 Great! Wish you the best of luck (y)