PBR Video for 3D!
- added 13. 09. 2024
- ❤️ ➟ / cg_matter
🌐 ➟ www.cgmatter.com
the thing is called SwitchLight www.switchligh...
============================
Cool Blender addons I made👇
============================
🔍Live details about your scene!🔍
➟ blendermarket....
-----------------------------------------------
🧑🤝🧑Share .blends and renders!🧑🤝🧑
➟ blendermarket....
-----------------------------------------------
🖼️Render gifs straight out of Blender!🖼️
➟ blendermarket....
-----------------------------------------------
🛜Text/email yourself when renders finish!🛜
➟ blendermarket....
-----------------------------------------------
📷Camera settings where they actually make sense!📷
➟ blendermarket....
============================
Cool Blender addons I didn't make👇
============================
🐍Serpens - Make addons with nodes!🐍
➟ blendermarket....
-----------------------------------------------
🎯Flaxtrax - Automatic camera tracking🎯
➟ blendermarket....
-----------------------------------------------
📺Node Preview - See what you're making!📺
➟ blendermarket....
Rule of thumb in any 3D package: everything except Diffuse, Albedo, or Color is RAW, Non-Color, or the equivalent. The only things that should ever be affected by any kind of color processing are things providing actual color information. Anything providing variables to a calculation, like a normal or roughness map, should remain unaltered by any color space conversion. So depth maps = Non-Color, RAW, or equivalent. This also applies to most game engines.
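The rule above can be shown numerically. Here's a minimal sketch of the standard sRGB decode a renderer applies to color textures, illustrating how it would silently corrupt a data value like roughness if the map weren't flagged Non-Color (the 0.5 roughness value is just an example):

```python
# Sketch: why data maps (roughness, depth, normals) must stay "Non-Color".
# The sRGB decode below is the standard transfer function a renderer applies
# to color textures; applied to a data value, it silently changes the number.

def srgb_to_linear(v: float) -> float:
    """Standard sRGB electro-optical transfer function."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

roughness = 0.5  # authored value, meant to be used as-is

decoded = srgb_to_linear(roughness)
print(f"authored roughness: {roughness}")
print(f"after wrong sRGB decode: {decoded:.4f}")  # ~0.2140 -- a much glossier surface
```

A 0.5 roughness read as sRGB comes out around 0.21, which is exactly the kind of "everything looks too shiny" bug a wrong color space setting causes.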
TGA and PNG still are the main ones for game textures.
@@19n42 He isn't talking about image formats.
@@fabiocoelho1679 yeah, but can a 0-255 range turn into data?
I'm about to download this app and lose it on a random hard drive, only to find it in a few years when it's no longer free
great now i see myself in this exact scenario
So you are one of those... heroes?
smart idea, but only if it weren't an app that needs to connect to the internet to work; then you'd be fine
And hope it will always be open source n shit😂😂😂
Kind of funny how many programs have followed this exact route: launching for free, empowering smaller creators blablabla, then becoming a subscription-based tool acquired by industry giants and ditching the fanbase that gave them all the data necessary to expand. So many off the top of my head!
they will not succeed. There's too much competition; an awful lot of AIs are already available free and open source
good thing though is afaik open source alternatives are quick to rise
Well the bills have to be paid somehow, dont they
@@AlexOfCR These are services that are made from open-source papers
I think this will solve some issues Ian didn't know he had until he sees this tool
i messaged him
@@CGMatter that's nice of you
As he's previously mentioned, the key to making greenscreen shots of people look good is to put all the effort into the actual recording itself rather than trying to figure it out later, when the lighting and camera angles are basically baked into the footage. This could help in cases where you didn't shoot something correctly and don't have the ability to reshoot. Resolve already has a built-in Relight feature that does something similar, but for something directly integrated into Blender, having the actual data could help dial in the look a bit better if it comes to that.
@@jaym2267 Yeah, this is right. A big cause of "bad greenscreen" is actually just bad on-set lighting, or lighting that is completely mismatched with the background. The other simple thing people forget is matching the black levels (the darkest thing in the footage) against your CG. You should never see anything 100% pure black, yet render engines often make dark things overly black. Other things also lift the black level; for example, if a light near a person causes diffusion or glow "over" the character, that means you need to add that back over the top of your CG as well.
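The black-level matching described above can be sketched as a simple remap. The floor value and pixel values here are hypothetical; a real compositor would apply this per channel on the full image:

```python
import numpy as np

# Sketch (hypothetical values): lift a CG render's black level so its darkest
# pixels sit at the footage's measured floor instead of crushing to pure 0.0.

def lift_blacks(img: np.ndarray, floor: float) -> np.ndarray:
    """Remap [0, 1] so that 0 maps to `floor` and 1 stays at 1."""
    return floor + img * (1.0 - floor)

cg = np.array([0.0, 0.02, 0.5, 1.0])   # CG pass with crushed blacks
footage_floor = 0.06                   # darkest value measured in the plate

matched = lift_blacks(cg, footage_floor)
print(matched)  # blacks now sit at 0.06, matching the plate
```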
HOLY DESKTOP ORGANIZATION
innit arrange by type?
what could the "beard" blender file be
You call it organization? LOL
I want to see people like Corridor Crew making something fun out of this. Thank you for sharing this with us, looks amazing! Didn't think that AI could extract pbr maps for us.
Somebody show this to Ian Hubert.
Ian has his own style, and he has probably found the easiest way to add good lighting and detail to greenscreen characters. Last time I saw, he's now using real 3D-scanned characters
@@Ismailoff_eth Yeah, but here's the thing: 3D scanned people are pretty limited. They can handle basic stuff like idling or walking, but when it comes to main characters or anyone who's front and center, 3D scans just don't hold up. They end up looking fake, so you really need green screen footage in those cases.
If you plan the scene right, you can match the CG lighting to what you used on set (or vice versa). But let’s be real, things don’t always go as planned. Whether it’s a tight deadline, missing gear, or just not knowing exactly how the scene will look, you might not get the lighting exactly as needed.
That's where this AI relight comes in: it's a great backup for those moments when the lighting isn't quite right. It's not a replacement for the real workflow, but it's nice to know you've got an alternative if things don't go as expected.
He probably did use this software
Exactly my thought as well
Seen this a few times; I need to play around with it and give it a try. The main thing that always seems off with the result of these relights is that the skin always gets very flat. One technique I like: do a frequency separation (blur & divide the footage against itself) and multiply the texture detail you get from that divided result against this lighting pass; you could probably get a more realistic result. The shininess/roughness texture pass this thing gives isn't 100% there yet, but with that trick I think it could produce a good result.
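The frequency-separation trick described above can be sketched in a few lines. This is a 1-D toy with made-up pixel values, standing in for the blur/divide/multiply node setup in a compositor:

```python
import numpy as np

# Sketch of the frequency-separation trick: blur the footage (low frequency),
# divide the footage by the blur (high-frequency texture detail), then
# multiply that detail back over the flat relit pass.

def box_blur(x: np.ndarray, k: int) -> np.ndarray:
    """Simple box blur; stands in for a gaussian blur node."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

footage = np.array([0.40, 0.42, 0.38, 0.41, 0.80, 0.43])  # skin with a highlight
low = box_blur(footage, 3)                     # low frequency: the lighting
detail = footage / np.maximum(low, 1e-6)       # divide: high-frequency texture

relit = np.full_like(footage, 0.55)            # flat AI relighting pass
result = relit * detail                        # multiply texture back over it
```

By construction, `detail * low` reconstructs the original footage, so `result` carries the relit brightness with the original skin texture restored on top.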
glad to see you here, amazing tutorials!
So glad you covered this! I found out about this software from Joshua M Kerr’s video and joined the beta when it first launched, I even have videos on my channel that utilize this relighting method (street lights passing by characters on highway for instance). It’s been incredibly useful and the development team has been really interested in improving this technology, so the more eyes on it the better!
8:23 "Not anything" So, it's something?? I'll get hacked and sue you to the ground.
I've been using it since it dropped. I had a VFX scene where I had to fake lightning around the actor, and it turned out amazingly realistic after compositing.
Cool. Tutorial?
IMAGINE THE POSSIBILITIES!!
imagine the possibilities...
In three to five years this technology will be 1000% better, and that blows my mind to think about.
these videos are fire, keep up man
Hey, if you guys are looking for another way to generate depth & normal maps, you can actually use a local version of Stable Diffusion with a model for depth maps and normals to generate an image sequence.
I've been really fiddling with normal, specular, and roughness mapping for game development recently, and with great timing, this really cool bit of knowledge showed up!
Best use of AI. We need more stuff like this!!! Tools that HELP artists instead of hindering them.
finally AI that i can actually see myself using
There's no way that k for knife tool works in video editing view... it's so clever and I totally missed it up till now! I love watching blender videos and discovering more and more shortcuts! Can't wait for them to come back to shader nodes, shift+a, n, r is still ingrained in my muscle memory
I got really excited until I realized you weren't talking about the true champagne of beers
Magic Mask is insane, how did I not know about this 🤯
This is a really good tool for small studios
This is so cool! Thank you for showing us.
each one of these videos really shows me that I have to learn Blender nodes
Thank you for showing all the process!
Non-Color just means it doesn't apply any color transformation. Pretty much everything is Non-Color unless it's a color texture. That doesn't mean you can't apply a transformation for a certain effect, but with normal maps it would likely ruin the directions.
so now I can make fake 3d mountains instead of using image planes as background, noice!!
Ian Hubert is about to go crazy with this
Color and Non-Color (or, in engines like Unreal Engine, Linear Color) is about gamma correction. Any texture where you're storing data (e.g. a number outside the 0-255 range in a 32-bit HDR) you'll want in Non-Color. In the case of your roughness texture, you want the value 'as is' without any adjustments, so Non-Color is correct here, same for depth. Essentially it treats the data from the texture 'as is' without doing any gamma correction.
cg matter literally looks like the smartest hobo
The superimposed images remind me of the style of those old point and click adventure games after MYST that added (live action) characters that would walk out in frame and talk to you or do something (I used to play an old Goosebumps game where you were trapped in a haunted theme park that did this). This would be a neat way to bring back that kind of retro style, but with more flexibility with the environments you could put the characters in, and better visuals with the lighting of the scene affecting the actors, etc.
looking very cool! lots of possibility
switchlight is finally getting some recognition
YouTube unsubbed me from this amazing channel, not sure why.
the ending was funny and to the point!
Lucky the free open beta is still here. A few months ago they said free access would be closed, but luckily there's still time to use it for free, as long as your GPU is strong enough...
God damn, this is cool!
I'm pretty sure the tech behind SwitchLight is IC-Light by lllyasviel. It's completely open source, though I guess harder to get working than SwitchLight.
It just works 👏
It is cool! Thanks for sharing.
I'm not a fan of generative AI, but this actually seems really cool and I'd love to use it for CGI and stuff
This isn't really generative AI. You need something already usable as input for it to actually output
thx a lot! 🙏
super-cool-thanks
thats crazy
backrooms vids bouta go crazy
We know what you did to Default cube.
Kinda cursed, but if (a) the occlusion was added in and (b) a way to introduce rim lighting when back-lit was found, this could be quite a workflow
this guy is good
7:48 Didn't know you had a cameo in Deadpool & Wolverine...
Would love to see a video of you going over pulling a random (stylized) character into a facial rig for motion capture
i like the Normals one
bro turned into caseoh
The Technique is very cool!!! Thanks!
dope
the only reason color spaces exist is to match colors of the real world to colors on a display, so any texture that wasn't captured in the real world should be treated as non-color data.
Pabst Blue Ribbon FTW ✊🏾
The 🐐
Sensei is back
Ayo we got real-life G buffer before GTA 6!
thats awesome
i have a project coming up and this will be so essential to get the result i need
cool
This is epic. Thanks!
Thanks for demonstrating nerds can have girlfriends too!
Great sharing all jokes aside.
this thumbnail is the one.
The app looks cool but needs a login, like Autodesk or Creative Cloud; I thought it was worth mentioning. Maybe someone could build an AI workflow in ComfyUI.
hope so - i gotta look into comfyui
@@CGMatter you will love it
Cool
if you can make all the maps from a single image, then there's the possibility of making a 3D model from it
Wow you released this right when Meta's new tool just released lol.
As long as these models can be downloaded and the app works offline, this current version can stay free forever!
videos are going to be crazy; forget CapCut, all my bros edit in free software without ads
I would like to see that bottom lit image.
What's the point of this, though? To make more realistic cheap crowd scenes? That's the only thing I can think of.
just better greenscreen 3d integration basically
You can change the lighting on people without having full 3D models of those people. That's gotta be useful for something.
i think it can help cheap films make cheap and easy lighting adjustments etc
*COMPOSITING*
*IT'S COMPOSITING*
When you're compositing someone into another shot, whether it be green-screened or roto work, you're going to do your typical color correction passes. One big issue is light in general. You can make something have the same values as another shot, but you're going to have to do extra work to make sure the light matches. In a normal production, where everyone has at least a _slight_ understanding of what they're doing, the shot that's going to be comped either has flat light for editing later, or an attempt to match the light to the shot they're comping into.
_This is not always a thing that happens._
So the alternative is having to do some very painful tweaks to the video. And if it's lit flat you're still going to do a ton of manual editing.
Or do something like this:
Start at around 20 seconds into the video.
czcams.com/video/ZU_NOoug-hQ/video.htmlsi=qBatuYj3n3iCGVcl&t=20
That's a digital double being animated and comped onto Black Widow for lighting.
Comparing this to DaVinci's Relight, which is better?
Pretty 😎
amazing. so the issue you're having is: don't use normals and displacement together (hence the glitch), you just need one or the other. this is coming from Blender Guru; I'm just the messenger
it's the same principle as DaVinci Relight; it uses the same passes on the background.
I did this in ComfyUI with a controlnet node.
this seems very similar to nvidia's lumos paper (2022) or google's total relight (2021) (neither of which have released code, or demos afaik, nvidia links one that doesn't work, and the github account is deleted)
I’ve discovered this long ago
I'm still new to this. Great video, by the way. At 6:47, when you're in settings and you click on Displacement and Bump, the displacement dropdown doesn't show up. What am I doing wrong? Thanks
Material displacement is a Cycles-only feature; you need to set Cycles in the render properties. Let me know ;)
Can you do depth and Normal based edge detection on the footage now?
Dam
👑
Anyone else thinking of adding lightsaber glow to pre-LED shots??
Soon with a few clicks you’ll be able to turn a video of someone directly into a 3d model 😅
The thumbnail looks pretty normal to me.
This can all be done with ComfyUI and some open source models.
You do realize that they use your footage to make their software better before it becomes paid for
I mean. Most stuff does that nowadays. Internet Privacy is a long-gone thing.
K in compositor and Y in the NLA! Y is it different?
Did you just video edit in blender?
Ah yes, another AI software that thinks having a proper installer is too 1990s and we should just dump it straight into the C: along with its massive dependency install.
Oh yeah
my range on a depth map that I got is from 300 to 3000 and it's looking pure white in a viewer. I don't know how to shrink it
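A depth map with values from 300 to 3000 clips to white because viewers expect a 0-1 range. Remapping it is one divide away; here's a minimal sketch (the 2x2 array is just example data):

```python
import numpy as np

# Sketch: remap a depth map whose values run 300-3000 into 0-1 so a
# viewer displays a gradient instead of clipping everything to white.

def normalize_depth(depth: np.ndarray) -> np.ndarray:
    """Linearly rescale depth values to the 0-1 range."""
    d_min, d_max = depth.min(), depth.max()
    return (depth - d_min) / (d_max - d_min)

depth = np.array([[300.0, 1650.0],
                  [2400.0, 3000.0]])  # example metric depth values

viewable = normalize_depth(depth)
print(viewable)  # values now span 0.0 to 1.0
```

In Blender the same remap can be done with a Map Range node; the math is identical.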
We just finished using SwitchLight for a 10-minute greenscreen episode of our series Cradle in the Waves: czcams.com/video/Luk7j6ghwGA/video.html Once the series premieres we'll post some clips of what we ran into, how it worked in our workflow, etc. Overall it was extraordinarily valuable, even letting us use the depth maps to create animated collision maps for an episode where people are swarmed by mosquitos. A few flaws, but mainly in their 'background removal' tool, which only got used a few times because it wasn't any better than the roto brush or our other matte generating tools.
The largest issue we ran into was actually the eyeballs going pure black. Losing eye-light is bad! The solution we ended up using was to add the original footage with de-spill into the emission input at .1 or .025 (depending on the shot) so we could get the highlights back. But that was only ideal on shots where we had 'sort of' replicated the lighting on set anyway.
And the greatest unexpected use of this was to make instant digi-doubles on set. One of the actors was only available for a few minutes between takes for one of the wardrobe changes. Instead of doing a photoscan like we had done for the other wardrobe changes, they just did a T pose in front of a longer lens, and spun around on the spot. The depth map, normal map, de-lit albedo, and roughness etc was perfect for a medium-long distance digi double, and it was easy to just apply the depth map and make almost a cake-mould of them. Obviously not perfect, but impressive in a pinch!
can do this with comfyui and controlnets that will extract stuff
Welp, the download for the beta is gone already, a little over a week after this video 😅
is this a use of AI that doesn't steal creative hobbies/jobs? that's not allowed wtf!
Doesn't that site steal the images that you upload and violate your privacy?
no clue
All sites do it nowadays 🤷
Anyone who has tried this tool, did you find it useful and/or is it worth it?
stereoscopy go brrr
you have too many .blends on your desktop.