Image to Mesh using ComfyUI + Texture Projector

  • Published Jun 9, 2024
  • In today's video, we will talk about the image-to-mesh workflow, including 3D reconstruction from a single image or from multiple images.
    00:00 Introduction
    00:46 ComfyUI Layer Diffuse
    01:52 3D reconstruction solutions
    04:54 CRM introduction
    05:52 CRM diagram
    07:19 ComfyUI 3D Pack
    07:41 CRM Image to Mesh workflow in ComfyUI
    12:26 Wonder3D Image to Mesh workflow in ComfyUI
    13:24 Import CRM mesh in 3ds Max
    14:42 Mesh comparison of CRM, TripoSR, Wonder3D+NeuS
    15:38 Mesh optimization
    16:56 Comparison of retopology and optimization process
    22:02 ZBrush ZRemesher
    24:13 UV
    24:52 Create outline texture for mesh using Texture Projector in UE
    27:22 Texture refinement
    29:33 Project textures to mesh using Texture Projector in UE
    30:59 Bake texture using Property Baker in UE
    32:06 Single image to mesh final results
    32:48 Why the reference image must be close to the frontal view
    34:43 Use Depth ControlNet to control the view angle
    37:56 Multi-view images to mesh workflow in ComfyUI
    42:18 Gaussian Splatting + DMTet
    43:02 Restriction of Multi-view images to mesh workflow in ComfyUI
    44:04 Summary
    Music: Sunny Skies (by Suno)
    Create various textures using Texture Projector and Stable Diffusion
    • Create various texture...
    -----------------------------------
    MARS Texture Projector:
    www.unrealengine.com/marketpl...
    MARS Property Baker:
    www.unrealengine.com/marketpl...
    MARS Master Material:
    www.unrealengine.com/marketpl...
    -----------------------------------
    Houdini Lego Mesh
    • Legoize geometry & RBD...
    • brickini - procedural ...
    • Procedural Lego Bricks...
    Gaussian Splatting
    • 3D Gaussian Splatting ...
    • Photogrammetry / NeRF ...
    • What is 3D Gaussian Sp...
    • Step-by-Step Unreal En...
    • Gaussian Splatting exp...
    -----------------------------------
    #imagetomesh #3dreconstruction #sv3d #triposr #unreal #textureprojector #stablediffusion #comfyui
  • Science & Technology

Comments • 69

  • @michaelmurrillus915 • 25 days ago +2

    methodical presentation. Very well done

  • @soma78 • a month ago +2

    impressive. the amount of work you put in this video...well done. subscribed.

  • @vivigomez5960 • a month ago

    Beautiful!! Great video. A lot of work and time went into this great explanation.

  • @Meteotrance • 17 days ago

    They could use metaballs instead of voxels to rebuild the mesh from the point cloud; they're super light and fast for generated volumes, and Blender handles the metaball-to-polygon conversion very well...
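    A minimal sketch of that suggestion using Blender's bpy API (the sample coordinates below are placeholders, not from the video): drop one metaball element per point-cloud sample, then convert the merged implicit surface to a polygon mesh.

    import bpy

    # Hypothetical point-cloud samples; in practice these would come from
    # the reconstruction output.
    points = [(0.0, 0.0, 0.0), (0.4, 0.0, 0.0), (0.0, 0.4, 0.0)]

    mball = bpy.data.metaballs.new("cloud")
    obj = bpy.data.objects.new("cloud", mball)
    bpy.context.collection.objects.link(obj)
    for x, y, z in points:
        elem = mball.elements.new()   # default 'BALL' element
        elem.co = (x, y, z)
        elem.radius = 0.5             # element radius controls how samples blend
    bpy.context.view_layer.objects.active = obj
    obj.select_set(True)
    bpy.ops.object.convert(target='MESH')  # implicit surface -> polygons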

  • @EqualToBen • 2 months ago +4

    Awesome topology comparison! This video is gold

  • @artmosphereID • 2 months ago +2

    Good for hard-surface/static/prop assets. For organic and animated models it's a big no-no; it will be a nightmare for animators.

  • @brianmcquain3384 • 2 months ago

    cool song, totally unexpected out of the blue!

  • @teambellavsteamalice • 2 months ago +1

    I have a feeling this has way more potential.
    Is there any way to deconstruct the image into parts, then compare those parts to a set of variants, pick the closest, and build a composition out of them? Like a reference model to help the process?
    I imagine you'd need a few basic head shapes, ears, chins, eyebrows, and perhaps even hairdos. Then have sets of images (angles, or a LoRA model?) for archetype heads (or complete bodies).
    Like a bland base model, one with extreme elvish ears, one with a pronounced chin, one with exaggerated brows, etc. Then you'd reconstruct the mix you want from these archetype sets. I'm not sure you can use interpolation that easily (IIRC ControlNet had options?), but if you run the same process on each image of these consistent sets, the resulting set should be consistent too, right?
    Then, if each archetype has a nicely fixed 3D model, you could also generate one for the mixed composition.
    Would such a process for creating an approximation or base model be doable?
    Could you use this and the actual image (in iterations?) to create a consistent 3D model without any manual fixes?

    • @kefuchai5995 • 2 months ago +1

      Great idea! That will be the next generation of AI 3D Mesh. SD AI should learn this.

  • @MaxSMoke777 • 2 months ago +4

    You could do all of those dozens of steps... OR... just use the front and side images for reference and simply build the model like you would any other. You've put so much work into saving time, you've definitely made it harder.

    • @kefuchai5995 • 2 months ago

      You are right. AI has dragged me into a lot of confusing behavior and has complicated some simple problems.

    • @IS0JLantis • 2 months ago +5

      No, it's like spending 5 hours writing a script to automate a task that only takes 30 minutes to do. It is not intended for single use. Once you find a reliable workflow leveraging AI, productivity will skyrocket; old modelling techniques simply won't be able to keep up.
      We need tests like these to learn from.

  • @catparadise950 • 2 months ago

    Could you share how to install the ComfyUI 3D Pack? I've tried the custom nodes for some of the other models, but I'm just not sure how to install this integration pack. I'm using a Python virtual environment.

    • @kefuchai5995 • 2 months ago

      You can take a look at this first; it's the list of caveats I wrote up earlier:
      www.bilibili.com/read/cv33521683

  • @MrGATOR1980 • 2 months ago +3

    TBH I could sculpt and paint this myself faster than going through all those shenanigans.

  • @mikerhinos • 2 months ago

    Personally, I'm getting this error when trying to run the CRM-to-multiview-to-CCM example workflow (it happens at the mesh construction node, which is quite frustrating because the generated images look good):
    "RuntimeError: Error building extension 'nvdiffrast_plugin': ninja: error: build.ninja:3: lexing error
    nvcc = D:\pinokio\bin\miniconda\bin
    vcc.exe"
    It may be a path problem I guess, but I can't find how to resolve it yet :(

    • @kefuchai5995 • 2 months ago

      It is an installation problem with VS or CUDA. Maybe it is the CUDA path?
      github.com/MrForExample/ComfyUI-3D-Pack?tab=readme-ov-file#install
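      As a quick sanity check (a sketch, under the assumption that the pasted log reflects a bad nvcc path): CUDA_HOME/CUDA_PATH must point at a real CUDA toolkit containing bin/nvcc, not at a conda directory, for the nvdiffrast ninja build to work.

      import os, shutil

      # Print the environment the nvdiffrast ninja build will pick up.
      cuda_home = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH")
      print("CUDA_HOME/CUDA_PATH:", cuda_home)
      print("nvcc on PATH:", shutil.which("nvcc"))  # should not be None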

  • @Zamundani • 2 months ago +7

    Basically it's an overglorified base mesh.

  • @Rahviel80 • 2 months ago +4

    Baked lighting and no PBR textures are showstoppers for game dev; the results also have that AI look and an unoptimised mesh. That's far from useful.

    • @kefuchai5995 • 2 months ago +1

      Some checkpoints can generate a diffuse (albedo) texture if you use a lighting-environment prompt like "soft ambient light". You can then generate PBR textures based on that diffuse texture.

  • @linnkoln11 • a month ago

    Hey! About the img2img support for layer diffusion: you need to make the background 50% grey. For me that did the trick! By the way, I'm not done with the video yet, but what I've seen so far is awesome!
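    A minimal sketch of that preprocessing step with Pillow (the file names are placeholders): composite the transparent subject onto a 50%-grey background before feeding it to img2img.

    from PIL import Image

    fg = Image.open("subject.png").convert("RGBA")         # transparent subject
    bg = Image.new("RGBA", fg.size, (128, 128, 128, 255))  # 50% grey backdrop
    Image.alpha_composite(bg, fg).convert("RGB").save("subject_grey_bg.png")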

  • @RoN43wwq • 2 months ago +2

    nice. Thanks

  • @cj5787 • 2 months ago +3

    "looks cool and effective" for the untrained eye.. in reality this is like 20 times more complicated and time consuming than a regular 3d workflow getting a result that it's not even usable...

    • @kefuchai5995 • 2 months ago +1

      It's because we're used to the old workflow.

  • @USBEN. • 2 months ago +1

    Now we just have to automate all this.

  • @masterkarlzon • 2 months ago

    Really cool!

  • @samwalker4442 • 2 months ago

    THANK YOU!

  • @Keji839 • 2 months ago

    This would be good for hard-surface objects. Organic is a no-go.

  • @shiccup • 2 months ago

    sick

  • @AX-032 • a month ago

    Can you share your JSON file please?

    • @kefuchai5995 • a month ago

      The ComfyUI workflow? It is from 3D Pack with some customization. github.com/MrForExample/ComfyUI-3D-Pack/tree/main/_Example_Workflows

  • @arberstudio • 2 months ago +5

    or you can learn to 3D model lol

  • @user-li7ce3fc3z • a month ago

    A ton of effort and money spent on software, and the output is complete junk. It's far simpler to create everything from scratch and use the AI images as a reference.

  • @piotrek7633 • 2 months ago +2

    Even if AI makes you Witcher 3-quality models from thin air in the future, there's no sense of fulfillment in that. AI is taking our jobs, and our ways to have fun, while at it. If AI makes making games ridiculously easy, then game dev will be even more competitive than it already is today.

    • @kefuchai5995 • 2 months ago

      I would rather think of AI as an assistant.

    • @piotrek7633 • 2 months ago

      @@kefuchai5995 Yeah, but for how long? Artists are already taking a hit because Midjourney is literally better quality than most of them, and it can pick up the styles of top artists. AI-generated ads are already popping up, so that's less work for employees.
      So my question is when it will hit 3D modeling and game dev as a whole, since it's clearly going in that direction, looking at Meshy, and Altman tweeted 2 days ago that "movies are going to become video games and video games are going to become something unimaginably better". Doesn't this mean trouble for game devs? If we can't do something "unimaginably better" now in big teams like CD Projekt Red or Rockstar, what does he mean? AI generation, of course.
      If he's not yapping and his team really is cooking something hot for games, then oh lord have mercy; I will have zero fun in life if they take game dev. Although AI-generated movies that let you interact probably won't hurt the video game market, since it will be like traditional art versus digital art: people will want to consume both.

    • @kefuchai5995 • 2 months ago +1

      @@piotrek7633 For me, I won't think about it for now. I will keep following along and looking forward to the day when everyone can make games. I hope what Altman says is true: not just imagined, but experienced.

    • @vivigomez5960 • a month ago

      Your comment seems more typical of a person from the 15th century in front of the printing press.

  • @scrutch666 • 2 months ago +12

    Any mediocre artist would have completed that task in half the time, and successfully. This is not even a mesh you could use in game production for a human character. It would cost more time to fix it than to do it yourself by hand. Nobody would hire you with such topology.

    • @kefuchai5995 • 2 months ago +1

      Yes, that's why it still needs retopology and remeshing.

    • @AIJOBSFORTHEFUTURE • 2 months ago +3

      @scrutch666 Do not fear the death of an industry; celebrate the birth of a new reality... or cope.

    • @2slick4u. • a month ago +4

      That's where UE 5.3 comes in clutch with its new Nanite skeletal mesh support.
      It's the future, and it's gonna hit you hard in the toolbox.

    • @TheSleepfight • 22 days ago

      @@2slick4u. scrutch is correct... I'm a Principal Character and Hard Surface Artist with 18 years in the industry and over 15 shipped titles. What do you think Nanite will change?

    • @2slick4u. • 22 days ago

      @@TheSleepfight With bandwidth and internal storage constantly increasing, it's becoming realistic to dump in high-poly, unoptimized models. UE5 almost completely removes the relevance of retopo.

  • @GradeMADE • 2 months ago

    Hey bro, do you have the workflow you created for us to use?

    • @kefuchai5995 • 2 months ago +1

      The workflow I used is copied from the ComfyUI 3D Pack examples:
      github.com/MrForExample/ComfyUI-3D-Pack/tree/main/_Example_Workflows

    • @samsilva7209 • 2 months ago

      @@kefuchai5995 When I open the "Multi-View-Images_to_Instant-NGP_to_3DMesh" workflow, for example, even after I install the missing nodes in the Manager panel, there are still many nodes with this message: "When loading the graph, the following node types were not found:
      [Comfy3D] Preview 3DMesh 🔗
      [Comfy3D] Gaussian Splatting Orbit Renderer 🔗
      [Comfy3D] Stack Orbit Camera Poses 🔗
      [Comfy3D] Switch 3DGS Axis 🔗
      [Comfy3D] Load 3DGS 🔗
      [Comfy3D] Save 3D Mesh 🔗
      [Comfy3D] Instant NGP 🔗
      [Comfy3D] Fitting Mesh With Multiview Images 🔗
      Nodes that have failed to load will show as red on the graph."
      Do you have any idea what I might be doing wrong, or not doing?
      Thank you in advance.

    • @GradeMADE • 2 months ago

      @@kefuchai5995 Ty Bruv

    • @user-bl8lb7yy1l • a month ago

      @@kefuchai5995 Hey bro. The CRM example workflow does not include the image-upscale part. I tried to load an upscale model, but it hits an error in Python: "Input type (struct c10::Half) and bias type (float) should be the same". Could you help with this? Many thanks.

    • @kefuchai5995 • a month ago

      @@user-bl8lb7yy1l czcams.com/video/Y6-JGi_ksos/video.htmlsi=1qBaFHsGtPETxU87&t=611
      Check out the video from that timestamp. The format of the CRM-generated images requires manual conversion before they can be used for upscaling.
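      For context, that error is a float16/float32 mismatch. Here is a hypothetical PyTorch reproduction (not from the video) showing that casting the image tensor back to float32 avoids it:

      import torch

      conv = torch.nn.Conv2d(3, 3, 3)        # upscaler weights/bias: float32
      img = torch.rand(1, 3, 64, 64).half()  # CRM output arrives as Half (float16)
      # conv(img) raises the dtype-mismatch RuntimeError quoted above
      out = conv(img.float())                # cast back to float32 first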

  • @Mr3Dmutt • a month ago +2

    Alright... I watched the whole video and can confidently say this is useless at any level of production, indie or big-budget. Also, if you're going to invest that much time into something, why not have fun and sculpt and paint it?

    • @kefuchai5995 • a month ago

      That's right, I only use this when sculpting becomes boring sometimes.

    • @DimensionDoorTeam • a month ago

      "I only use this when Sculpting becomes boring sometimes"🤞Technically, that's the best part of creating a character.