This AI deepfake is next level: Control expressions & motion

  • Published 25 Jul 2024
  • LivePortrait tutorial: Free, open-source AI deepfake tool for animating a single photo. It can even handle tricky expressions & movements
    #aitools #aivideo #deepfake #ainews #ai #agi #singularity
    TurboType is a Chrome extension that helps you type faster with keyboard shortcuts. Try it today - they have a FREE forever plan.
    www.turbotype.app/
    Live Portrait:
    liveportrait.github.io/
    github.com/KwaiVGI/LivePortrait
    Newsletter: aisearch.substack.com/
    Find AI tools & jobs: ai-search.io/
    Support: ko-fi.com/aisearch
    Here's my equipment, in case you're wondering:
    Dell Precision 5690 www.dell.com/en-us/dt/ai-tech...
    GPU: Nvidia RTX 5000 Ada nvda.ws/3zfqGqS
    Mouse/Keyboard: ALOGIC Echelon bit.ly/alogic-echelon
    Mic: Shure SM7B amzn.to/3DErjt1
    Audio interface: Scarlett Solo amzn.to/3qELMeu
    0:00 Scope and examples
    6:25 Installation
    7:20 Git
    8:18 Installation (continued)
    9:19 Conda
    12:12 Installation (continued)
    15:37 Running LivePortrait
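
    For reference, the install walkthrough in the chapters above boils down to roughly these commands (a sketch based on the LivePortrait repo's README; the app.py entry script and exact versions are assumptions, so defer to the repo if anything differs):

        REM Clone the repository (requires Git)
        git clone https://github.com/KwaiVGI/LivePortrait
        cd LivePortrait
        REM Create and activate an isolated environment (requires Miniconda)
        conda create -n LivePortrait python==3.9.18
        conda activate LivePortrait
        REM Install the dependencies
        pip install -r requirements.txt
        REM The README also covers downloading the pretrained weights before running
        REM Launch the Gradio web UI (entry script assumed to be app.py)
        python app.py
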
  • Science & Technology

Comments • 852

  • @theAIsearch
    @theAIsearch  Před 17 dny +20

    TurboType is a Chrome extension that helps you type faster with keyboard shortcuts. Try it today and start saving time. They have a free forever plan!
    www.turbotype.app/

    • @nartrab1
      @nartrab1 Před 17 dny

      Thanks man, amazing video and I will try this hotkey manager.

    • @KryzysX
      @KryzysX Před 17 dny +3

      Lowkey one of the best AI youtubers

    • @laylasmart
      @laylasmart Před 17 dny

      Free and open source? No such thing. If it was so, installing it wouldn't be needed. Double click and it would work.

    • @minhuang8848
      @minhuang8848 Před 17 dny +3

      @@laylasmart lmao nonsense

    • @laylasmart
      @laylasmart Před 17 dny

      @@minhuang8848
      Harvesting info is the goal today. For that Microsoft made Copilot AI and Recall. A spyware within the operating system.

  • @gunnarswank
    @gunnarswank Před 11 dny +22

    How soon can I get a Hogwarts painting of my dead grandmother to tell me to wash my hands every hour?

  • @filipr3336
    @filipr3336 Před 17 dny +35

    First thing I think of is a potential VR application. This could run in real time, with your whole expression projected onto your VR avatar.

    • @iankrasnow5383
      @iankrasnow5383 Před 11 dny

      It looks like it still takes some processing time to run the software, so at least currently you couldn't animate an avatar in real time on a local recording. So for V-tubers, at least a few months off. But if it's run by a cloud service, maybe they could make it work.

    • @Metapharsical
      @Metapharsical Před 11 dny +1

      It's not new; Meta has already demo'd this type of real-time avatar. Zuck showed it off on Lex Fridman's podcast.
      It's real-time and very high quality, it just requires some pre-processing of a full face scan

    • @jeff_clayton
      @jeff_clayton Před 11 dny

      @@iankrasnow5383 with the speed of tech innovations in the last few years, if they decide to work toward this it won't be that long

    • @artificiyal
      @artificiyal Před 11 dny +1

      video calling

    • @tusharbhatnagar8143
      @tusharbhatnagar8143 Před 8 dny

      @@iankrasnow5383 A lot of time. Still not a consumer friendly implementation. Might take some more time to be realtime ready. Fingers crossed.

  • @user-wh2cn3cx5j
    @user-wh2cn3cx5j Před 15 dny +2

    I subscribed to you when you had 2K subscribers, and today you have 150K. Bro, you totally deserve it, and your content is worth much more than 150K. Soon you will reach 1M. Love you bro, from India ❤

  • @bmoviecreature1507
    @bmoviecreature1507 Před 16 dny +1

    I always appreciate that you show every single step of the install. It's very helpful for people who aren't familiar with code

  • @FriscoFatseas
    @FriscoFatseas Před 17 dny +22

    i normally dont mess with this stuff until there is an involved interface, super excited for this stuff to be working open source

    • @SVAFnemesis
      @SVAFnemesis Před 17 dny +7

      you're right. For reasons I could not fathom, very few AI tools were pushed to the point where they get a user-friendly client like Midjourney. Even a programmer like me struggles to pull a repository from git and build it up myself; I don't think any regular artist out there can do AI without at least a client side.

    • @English_Lessons_Pre-Int_Interm
      @English_Lessons_Pre-Int_Interm Před 15 dny

      @@SVAFnemesis what, you don't like typing in Discord? heretic.

    • @SVAFnemesis
      @SVAFnemesis Před 15 dny +1

      @@English_Lessons_Pre-Int_Interm can you please carefully read and understand my comment.

  • @pepsico815
    @pepsico815 Před 17 dny +139

    What happens if the source sticks out a tongue?

    • @mikezooper
      @mikezooper Před 17 dny +103

      The entire internet crashes.

    • @42ndMoose
      @42ndMoose Před 17 dny +12

      there are already artifacts with the teeth, which stay static the way the hair does with the background; I'd imagine the same will happen with the tongue.
      Huge step up for open source nonetheless!

    • @just_a_person-z1m
      @just_a_person-z1m Před 17 dny +18

      Harambe is resurrected

    • @drmarioschannel
      @drmarioschannel Před 17 dny +2

      doesnt work

    • @lol_09.
      @lol_09. Před 17 dny +29

      What is bro planning to do

  • @VintageForYou
    @VintageForYou Před 12 dny +2

    This is insanely fantastic for controlling expressions. Keep making your great videos. 💯 Top notch. 😁

  • @AlphaProto
    @AlphaProto Před 17 dny +87

    I could fix the lip sync of the Teenage Mutant Ninja Turtles movie with this!

    • @eccentricballad9039
      @eccentricballad9039 Před 17 dny +5

      I can think of a hundred ways to use this creatively but i have only got 4GB VRAM.

    • @GameOver-qk2ys
      @GameOver-qk2ys Před 17 dny

      ​@@eccentricballad9039 Use Google colab👀

    • @matthewmcneill5320
      @matthewmcneill5320 Před 16 dny

      You mean the original 1990 movie? Never noticed

    • @RaptureMusicOfficial
      @RaptureMusicOfficial Před 14 dny +1

      The 1990 movie is ace! You can correct especially the 3rd one! ;)

  • @ECHOPULSENEWS
    @ECHOPULSENEWS Před 17 dny +1

    This is probably the best instructional video I have seen lately, going through the steps in great detail. Thank you for that!! Once installed, it runs smoothly.

    • @theAIsearch
      @theAIsearch  Před 17 dny

      You're very welcome!

    • @abhishekpatwal8576
      @abhishekpatwal8576 Před 16 dny

      were you able to run it on a group photo to animate multiple faces? i was unable to

    • @ECHOPULSENEWS
      @ECHOPULSENEWS Před 16 dny

      @@abhishekpatwal8576 you can uncheck 'do crop' and it does try, but the result isn't there yet

  • @Dina_tankar_mina_ord
    @Dina_tankar_mina_ord Před 17 dny +10

    Create a source video from the movie "The Mask". When Jim's character becomes freaky with his eyes and mouth.

  • @DanMalandragem
    @DanMalandragem Před 17 dny

    How do I create a file, like an exe, to just open it up later? I could use it the first time, but when I closed the tab and cmd I wasn't able to open it again...

    • @joseavilasg
      @joseavilasg Před 17 dny

      You could create a .bat instead and run the commands there (a rough example follows after these replies).

    • @nnkaz1k856
      @nnkaz1k856 Před 17 dny

      Maybe auto-py-to-exe
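
      For a one-click relaunch, a minimal launcher .bat along those lines might look like this (the install path and the app.py entry script are assumptions; point them at wherever you cloned the repo):

          @echo off
          REM Activate the conda environment created during install
          call conda activate LivePortrait
          REM Change to the folder where the LivePortrait repo was cloned (placeholder path)
          cd /d C:\path\to\LivePortrait
          REM Launch the web UI (assumes the repo's app.py Gradio script)
          python app.py
          REM Keep the window open so any errors stay visible
          pause

      Double-clicking the .bat then behaves like the "exe" asked about above, as long as conda has been initialized for cmd.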

  • @Crisisdarkness
    @Crisisdarkness Před 17 dny

    Wow, you always present great advances in AI, and open source, I am very grateful for your channel, I will try this soon

  • @parallaxworld
    @parallaxworld Před 17 dny +6

    love your videos, keep up the good work :D

  • @AIShipped
    @AIShipped Před 17 dny +2

    Very thorough tutorial and a very good project to cover!

  • @Azguilianify
    @Azguilianify Před 17 dny +2

    Thanks! It's absolutely awesome as nodes in ComfyUI!

  • @PINKALIMBA
    @PINKALIMBA Před 14 dny +1

    I have a basic laptop without an Nvidia graphics card; can I use this as well?

  • @thays182
    @thays182 Před 16 dny

    Amazing walkthrough! Thank you! I'm having an issue where the output video bobbles around and shakes a bit. I'm not seeing this in your examples. Any ideas on why or how a shake is happening on the output?

  • @Enjoykian
    @Enjoykian Před 9 dny

    I just want to say you are the best, thx a lot dude

  • @heartshinemusic
    @heartshinemusic Před 17 dny

    Wow, great stuff. This is the next step I was waiting for. All the puzzle pieces are falling together... moving characters and then making them talk or sing. It just all needs to come together in one platform to combine different features to create movies or music videos or virtual avatars. Thanks for sharing! EDIT: Can you make this work with 16:9 ratio images? I see a lot of lip-sync programs that are just square.

  • @AdvantestInc
    @AdvantestInc Před 16 dny

    Amazing demo! What are the potential security implications of using AI deepfake technology like Live Portrait? Are there measures in place to prevent misuse?

  • @elyakimlev
    @elyakimlev Před 17 dny +13

    Wow, the option to animate a face on a source video has great potential. I can already see people creating scenes of people interacting with each other with Runway Gen-3 or another video generator and then editing the video so that the people in the scene actually talk!
    We're one step closer to creating movie scenes.

    • @theAIsearch
      @theAIsearch  Před 17 dny +1

      exactly!

    • @user-cz9bl6jp8b
      @user-cz9bl6jp8b Před 17 dny +5

      Am I missing something? I don't have the option to use a source video as the input to animate, only an image option. Is that a different program than this?

    • @elyakimlev
      @elyakimlev Před 17 dny

      @@user-cz9bl6jp8b I don't know. I never tried it. I was only commenting on what I saw in the video.

    • @jaywv1981
      @jaywv1981 Před 16 dny

      @@user-cz9bl6jp8b I'd like to know this too.

  • @juggernautknight2749
    @juggernautknight2749 Před 17 dny +25

    Absolutely incredible!

  • @BIPPITYYIPYIP
    @BIPPITYYIPYIP Před 8 dny

    I've been using this since the beta, starting about 3 years ago. Their current version is proprietary, does so much more, and is lifelike in every way.

  • @issa0013
    @issa0013 Před 17 dny +1

    Can you make a video on your hardware? That setup you have looks cool

  • @BUY_YOUTUB_VIEWS_6
    @BUY_YOUTUB_VIEWS_6 Před 17 dny

    We really need more posts from you; we are bored without you

  • @rainy.aesthetics
    @rainy.aesthetics Před 17 dny +21

    YOU ARE REALLY GIVING ME ALL THESE THINGS whenever I NEEDED THIS TO Make my animation!!!!

  • @AlejandroGuerrero
    @AlejandroGuerrero Před 11 dny

    This tutorial was great. Thanks. Question: is there any tool (like this, to run locally) to upload an mp3 voiceover and generate the mouth and eye movements to later use in this process? Thanks!

    • @theAIsearch
      @theAIsearch  Před 11 dny

      yes, is this what you're looking for? czcams.com/video/rlnjcRP4oVc/video.html

  • @HariWiguna
    @HariWiguna Před 5 dny

    Thanks to your very detailed patient step by step instruction, I was able to generate my own live portraits.
    My results are not as perfect as the examples, but amazing nonetheless. Thank you! Thank you! Thank you!

  • @lamanchatecno9684
    @lamanchatecno9684 Před 10 dny

    Thank you. Excellent Video, very detailed and well explained. New subscriber

  • @MrDanINSANE
    @MrDanINSANE Před 17 dny +2

    Very cool, thank you for sharing ❤
    Too bad it won't work on VIDEOS or Multiple Faces like in their examples.

  • @legacylee
    @legacylee Před 17 dny +10

    Working on a cartoon about a mischievous young girl named Yumi, I've been using AI since the beginning and finally found a way to make her consistently with apps that effectively use Character Reference techniques; I've even trained a model with my character. Being a creative partner with Pika, Leonardo AI, and finally Runway ML, I am able to create a ton of content, but I will need to add the character animation. While I actually turned my AI character into a fully rigged Metahuman, it's nice to know that if and when I need a quick shot and don't have the time to set it up in Unreal, I can quickly generate my character in the scene; then I or my niece, who will do most of the performance stuff for Yumi, can act out and voice her, and I can use that footage and audio to animate the clip. This is an amazing time to be in. As someone who uses AI as a tool, I can see several use cases for stuff like this, and it's going to make life easier for me, as I am a studio of one and I have zero budget to make most of my stuff. So using a combination of free tools and my natural resourcefulness I am starting to make headway. The one-man film studio era is here now.

    • @jmg9509
      @jmg9509 Před 14 dny

      I am right alongside you brother!

    • @kliersheed
      @kliersheed Před 9 dny

      Can you give a short list of how you would go about character consistency with free tools? I found it to be either extremely lacking (wasn't consistent) or paid models I couldn't try.
      For example:
      1. AI x: it's free and has no tokens, I use it to do this and that, then I can move on; this is step 2 for ..
      2. AI x2: also free and has no tokens, now you can...

  • @simplereport8040
    @simplereport8040 Před 17 dny +1

    Man this seems insane. I love your findings. Will test it tonight!
    Just one thing! If you could add the timer or how long it took to process that would be very much appreciated 🙏

    • @theAIsearch
      @theAIsearch  Před 17 dny +1

      Thanks. For a 10s video, it took maybe 1-2min to generate. Very quick compared to other tools

    • @simplereport8040
      @simplereport8040 Před 17 dny

      @@theAIsearch thank you very much! That’s waaay faster than I expected! 🤯

  • @looooool3145
    @looooool3145 Před 17 dny +1

    It looks weird in motion, but if you pause the expressions at any point during the animation, it looks good and natural.

  • @johncappt
    @johncappt Před 17 dny

    How long should it take? With a 3070 I'm at 1280 seconds of processing time. Wondering if it's actually processing or has stopped at this point.

  • @smartduck904
    @smartduck904 Před 17 dny +1

    This is going to be so great for video editing. Instead of having to animate facial expressions for characters by hand, we could just use this software.

  • @zakaroonetwork777
    @zakaroonetwork777 Před 8 dny

    Does it run from the cloud, or will this run offline? Is there a way to make a quick launch package to simplify it for the user?

  • @ParvathyKapoor
    @ParvathyKapoor Před 15 dny

    Available in pinokio?

  • @kukukachu
    @kukukachu Před 16 dny +2

    Dell and Nvidia huh? Were one of those your Chinese friends that gave you the code :D

  • @marcus8451
    @marcus8451 Před 13 dny +1

    I need to recant and say that everything worked out in the end. It is necessary to install all the components first, and only at the end of it all can you install the platform. Thanks for the video.

    • @theAIsearch
      @theAIsearch  Před 10 dny +1

      glad you got it to work!

    • @User-Mtw-j4n
      @User-Mtw-j4n Před 9 dny

      which Python version did you use?

    • @marcus8451
      @marcus8451 Před 8 dny

      @@User-Mtw-j4n What I did was follow the tutorial in this video and after four or five failed attempts it ended up working.

  • @sosameta
    @sosameta Před 17 dny +1

    insane use cases are coming

  • @patnor7354
    @patnor7354 Před 17 dny

    This is awesome. AI is really advancing fast.

  • @datadrivenschool_
    @datadrivenschool_ Před 13 dny

    Incredible! Thanks for sharing!

  • @davimak4671
    @davimak4671 Před 15 dny

    Bro, a good follow-up would be to explain how to use it vid-to-vid. Can you explain this? I saw examples of guys doing vid-to-vid with LivePortrait and it's awesome

  • @jencodeit
    @jencodeit Před 17 dny

    Amazing feature 🤩Greatly appreciate doing this super simple video guide on how to use this tool. Game changer! Thanks so muchhhh

  • @phantasiaentertainment2170

    thats crazy! i wanted to create a yt channel for so long but didnt want to use my own voice nor face. i can do it now :)

    • @monday304
      @monday304 Před 17 dny +1

      That's a great idea. Good luck to you and your channel! Did you need a Chinese phone number to run this app?

    • @theAIsearch
      @theAIsearch  Před 17 dny +5

      @monday304 LivePortrait doesn't require any number

  • @PINKALIMBA
    @PINKALIMBA Před 15 dny +3

    I got a problem when entering the line "conda activate LivePortrait". It returns "CondaError: Run 'conda init' before 'conda activate'". What shall I do?

    • @bandaniji9061
      @bandaniji9061 Před 14 dny +4

      Type this: conda init
      It will ask you to close the cmd window.
      Then restart it and repeat the same instructions.
      After that, type: conda activate LivePortrait (the full sequence is summarized after this thread)

    • @PINKALIMBA
      @PINKALIMBA Před 14 dny +1

      @@bandaniji9061 Thank you! It worked. 🤝

    • @bandaniji9061
      @bandaniji9061 Před 13 dny +1

      @@PINKALIMBA Good to know
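
      Putting that fix together, the whole sequence looks roughly like this (a sketch assuming the Windows cmd prompt used in the tutorial):

          REM Initialize conda for this shell, then close and reopen the cmd window when prompted
          conda init
          REM In the fresh window, activate the environment created during install
          conda activate LivePortrait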

  • @LoFiChillandBeatsVibe
    @LoFiChillandBeatsVibe Před 11 dny +1

    Great demo! Curious as to your CPU / GPU / ram configuration that you ran this on?

    • @theAIsearch
      @theAIsearch  Před 10 dny

      Thanks. RTX 5000 Ada, 16 GB VRAM. The CPU is an Intel i7, but I don't think that matters

  • @shabadooshabadoo4918
    @shabadooshabadoo4918 Před 14 dny

    Great info! TY

  • @MrGui203
    @MrGui203 Před 17 dny

    Thanks for the update 😊

  • @noop-chair
    @noop-chair Před 16 dny

    Thx man the ad was smooth

  • @anshikasrivastava3385
    @anshikasrivastava3385 Před 14 dny

    Does it only generate animations? Can I change actions on static images

  • @francom121ify
    @francom121ify Před 16 dny +1

    Great video! Just wondering how you use a video as the source and a video as the driving input; I only see an image option for the source. If you can let us know. Thanks!

    • @theAIsearch
      @theAIsearch  Před 16 dny +2

      it will be released soon: github.com/KwaiVGI/LivePortrait/issues/27

  • @ai-bokki
    @ai-bokki Před 17 dny

    TurboType is great! I was looking for something like that.
    Btw, your input files are very small. How long does it take to render? Can we use a 1080p video that is 30 seconds long? The input would be around 100 MB

  • @yessinjarraya6076
    @yessinjarraya6076 Před 17 dny +68

    This is gonna be a nightmare soon enough ...

    • @nicktaylor5264
      @nicktaylor5264 Před 17 dny +11

      Yea - one of my favourite (and somewhat fringe) concepts is that "Everything comes true in the end" - because the context around it changes.
      My fav example of this is "primitive tribes in 1970s National Geographic Magazine" being afraid of cameras because they thought they could steal your soul.
      Well.
      Here we are. We are within days of there being a browser extension that with a single click can superimpose any photo into any porn video.... to take any YouTube video of anyone and turn it into a kind of voodoo doll or golem that can be made to perform any action imaginable, including ringing up your family, friends, enemies and doing such a perfect impersonation of you that it is actually more realistic than you are yourself.
      In a way, the Algos already have voodoo dolls of you... a lifetime of clicks and comments etc, rows in a database tied by a single user_id.
      I think "Text" was a massive massive revolution in what it meant to be human because it collapsed the time dimension. Memories became something that took zero energy to maintain. I think AI is a process towards collapsing some other dimension, although I've yet to figure out what it is, and I might of course be talking bollocks.

    • @killerx4123
      @killerx4123 Před 17 dny +8

      @@nicktaylor5264 i want what youre having

    • @CPB4444
      @CPB4444 Před 17 dny +2

      @@nicktaylor5264 Beautifully written, I too need my overlord my one true leader, a god not to worship but to follow, the basilisk, one of our own making.

    • @rakinrahman890
      @rakinrahman890 Před 17 dny

      ​@@nicktaylor5264☠️☠️☠️

    • @rakinrahman890
      @rakinrahman890 Před 17 dny

      Sure, if you have no idea about tech. This AI is amazing and it's only gonna get better.

  • @TrigFX
    @TrigFX Před 2 dny

    Does the app have to be installed on the same drive as miniconda? Cos I keep getting an error message: CondaError: Run 'conda init' before 'conda activate'

  • @s11-informationatyourservi44

    Exciting functionality! I'm gonna do a deep scan with some security tools like Wireshark and such. I suggest y'all do the same, just in case. There are quite a few repos with zero-day backdoors that specifically target Windows. Happy prompting, y'all

  • @__ZANE__
    @__ZANE__ Před 3 dny

    so how much disk space does it take up total?

  • @abhishekpatwal8576
    @abhishekpatwal8576 Před 16 dny

    were you able to run it on a group photo to animate multiple faces? i was unable to

  • @wesleyworkman2372
    @wesleyworkman2372 Před 8 dny

    The cat chopping chicken was awesome!

  • @osm369
    @osm369 Před 6 dny

    That's good, but do you have any tools like this to animate hands and the rest of the body ?

  • @1murkeybadmayn
    @1murkeybadmayn Před 15 dny

    what's the maximum length of AI video you can generate with this?

  • @Kjxperience1
    @Kjxperience1 Před 2 dny

    THIS IS INSANE... All wanna be yahoo boys from Nigeria Say Hi. Your work is so easy now haha

  • @guillermosepulvedaf
    @guillermosepulvedaf Před 4 dny

    Thanks!! Now video-2-video is supported. Can you update this guide please? Is it better to do a clean install or just re-download the entire repository?

  • @UnmotivatedTechToober
    @UnmotivatedTechToober Před 17 dny +12

    Part of the webui is a file (frpc_windows_amd64_v0.2) which is a reverse proxy utility. Looks extremely untrustworthy to me. Running under a virtual environment mitigates some of the risk but I'm still skeptical. You should really be running this in an extremely sandboxed operating system.

  • @bagradbadalian6191
    @bagradbadalian6191 Před 17 dny

    Hey, thanks for all your help! I have a GeForce GTX 770; is that enough to run this? I followed your tutorial and arrived at the point of installing Miniconda, but when I run the code "conda create -n LivePortrait python==3.9.18" it just says the value is not recognized... even asking for the conda version with "conda --version" says unrecognized command, and I did add the path to environment variables 🙃

    • @theAIsearch
      @theAIsearch  Před 17 dny

      hmm, exit out of everything and try again. also, I'm not sure about the 770 gtx. if it has cuda, then it should be ok

  • @giuseppedaizzole7025
    @giuseppedaizzole7025 Před 11 dny

    and what about the aspect ratio crop thing? thanks

  • @islandofmisfityoutubers6734

    How long can the videos be?

  • @sigitpoerwoto
    @sigitpoerwoto Před 17 dny +1

    I use AMD Radeon graphics and it seems this application cannot work on my device; please help

  • @PredictAnythingSoftware
    @PredictAnythingSoftware Před 17 dny +3

    How about using videos instead of images as the source file, just like the samples you show? Please show us how we can do that as well, thanks. Anyway, I have successfully installed this on my computer using a Python-only env. And you're right, it generates so fast, unlike other video generators like Hallo, which I have installed as well.

    • @theAIsearch
      @theAIsearch  Před 17 dny +1

      glad you got it to work. they will release the video feature soon github.com/KwaiVGI/LivePortrait/issues/27

  • @Puppetgate
    @Puppetgate Před 4 dny +1

    Any idea why I am getting this error, everything else worked up until this point:
    C:\Users\funky\Desktop\liveportrait\LivePortrait>conda create -n LivePortrait python==3.9
    'conda' is not recognized as an internal or external command,
    operable program or batch file.
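
    That error usually means the cmd session can't find conda on its PATH. A rough check, assuming a default Miniconda install on Windows:

        REM See whether cmd can locate conda at all
        where conda
        REM If nothing is found, either launch the "Anaconda Prompt (miniconda3)" shortcut from the
        REM Start menu (it sets conda up for you), or re-run the Miniconda installer and enable the
        REM "add to PATH" option, then retry:
        conda --version
        conda create -n LivePortrait python==3.9.18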

  • @sopriojang
    @sopriojang Před 14 dny

    Can you help me? When I paste "conda activate LivePortrait" it says "CondaError: Run 'conda init' before 'conda activate'"... how do I solve this?

    • @sopriojang
      @sopriojang Před 14 dny

      Solved, hehe...
      I tried: conda init bash > then reopen > conda init > conda activate LivePortrait > done

  • @lorenakademar5267
    @lorenakademar5267 Před 16 dny

    It's very interesting, I think I'll try it

  • @adeusexmachina
    @adeusexmachina Před 12 dny

    You are guiding like a god. Thanks for instructions

  • @ankitpandey24feb
    @ankitpandey24feb Před 12 dny

    The examples you showed are for still images; how do you generate for videos?

    • @theAIsearch
      @theAIsearch  Před 10 dny

      they haven't released that part yet. i'll keep you posted

  • @High-Tech-Geek
    @High-Tech-Geek Před 17 dny +4

    I love that you walk through the installation. Thank you!

  • @MatthewMS.
    @MatthewMS. Před 17 dny

    Live Portrait is wild wow

  • @paralucent3653
    @paralucent3653 Před 15 dny

    It works very well when the source photo and input video are at the same angle, but there is obvious warping when the angle is different. Best to keep the angles the same.

  • @adilsiddiqui7207
    @adilsiddiqui7207 Před 11 dny

    Just a question: if the source video is already talking, and then you put in a driving video full of talking and expressions as well, what would be the outcome?
    Will it mask the driving video's talking over the source video's talking?
    And thanks for the video and information. 👍👍👍

    • @theAIsearch
      @theAIsearch  Před 10 dny

      good question. they haven't released the video feature yet, but thats a good thing to test out

  • @RandomBros88
    @RandomBros88 Před 17 dny

    Does this work with cartoon drawn faces?

  • @behrampatel4872
    @behrampatel4872 Před 16 dny

    Thank you for stepping us through the Miniconda installation. Is it not possible to install this using virtual environments via Python itself? What is conda doing for this install that can't be done via a regular venv setup?
    Thanks,
    b

    • @theAIsearch
      @theAIsearch  Před 16 dny +1

      it's likely possible to use venv; I just followed the install instructions in the repo, which used conda (a rough venv sketch follows after this thread)

    • @behrampatel4872
      @behrampatel4872 Před 16 dny

      @@theAIsearch Thanks for the clarification. Cheers
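
      A rough sketch of the venv route, assuming your system Python is a version the project supports (the tutorial uses 3.9) and that the repo ships a requirements.txt and an app.py Gradio script as its README describes:

          REM Clone the repo and create a standard virtual environment instead of conda
          git clone https://github.com/KwaiVGI/LivePortrait
          cd LivePortrait
          python -m venv venv
          REM Activate it (Windows cmd)
          venv\Scripts\activate
          REM Install dependencies and launch the web UI (app.py entry point assumed)
          pip install -r requirements.txt
          python app.py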

  • @Kingside88
    @Kingside88 Před 14 dny

    Very cool. Imagine watching a foreign movie where the lips move to match the translation. Or a video game sequence

  • @Littlefighter1911
    @Littlefighter1911 Před 17 dny +1

    24:50 More impressively it also does the shadow in a believable fashion.

  • @sird135
    @sird135 Před 6 dny

    The numa numa songs are gonna upgrade with this

  • @unagisama5476
    @unagisama5476 Před 16 dny

    I've seen tons and tons of songs with these, I think even during pandemic LOL. I didn't know it was this accessible.

  • @birdcallprrr5837
    @birdcallprrr5837 Před 17 dny

    are there web interface editions of this stuff???

  • @sleepy_dobe
    @sleepy_dobe Před 16 dny

    That's it, I'm never ever gonna believe what I see online or on digital media anymore.

  • @mrk-ism
    @mrk-ism Před dnem

    What’s the resolution/quality like?

  • @MrPer4illo
    @MrPer4illo Před 16 dny

    Great stuff!
    How long does it take to generate a 5 sec animation?

    • @theAIsearch
      @theAIsearch  Před 16 dny +1

      it depends on your gpu, but for me, less than 5min

  • @andreimade3261
    @andreimade3261 Před 16 dny

    The only thing left is to make this all real-time and holographic. Thank you for the video, and for keeping it beginner friendly

  • @N1ghtR1der666
    @N1ghtR1der666 Před 15 dny

    It's still not mapping much of the eye expression. Not sure if it's a setting or something, but when the source goes cross-eyed it's not conveyed at all in the result, and the expressions are much more muted than the source. Hopefully this is configurable

  • @menamariano
    @menamariano Před 17 dny

    Are there plans for real-time performance?

  • @michaelemerson570
    @michaelemerson570 Před 16 dny

    I get stuck installing Miniconda. Apparently it's a known issue where it doesn't link some dependencies, and therefore I am unable to even set up LivePortrait :( Why couldn't they just have a regular installer instead of having to do it through the command line?

  • @Ai-Art-42
    @Ai-Art-42 Před 17 dny

    Interesting, thank you

  • @Maika-Man
    @Maika-Man Před 2 dny

    I tried to do everything as in the video, step by step, but when it was almost finished I started getting different errors, to the point where I don't know how to fix them anymore. So I wasn't able to finish it

  • @Nakul-f1rq
    @Nakul-f1rq Před 17 dny +1

    Best channel 🔥🔥🔥

  • @gojoooooooooooo91
    @gojoooooooooooo91 Před 8 dny

    Can I run this locally with an Intel Arc A750 GPU?

  • @nathanrunda6053
    @nathanrunda6053 Před 17 dny +2

    What is the output resolution of these videos?

    • @user-cz9bl6jp8b
      @user-cz9bl6jp8b Před 17 dny +1

      For the few I tried, It is based on the size of the image you use as an input.

  • @CraigUKgames
    @CraigUKgames Před 9 dny +1

    At 3:58 you start talking about how you can use LivePortrait on not just stationary images, but moving videos too. Then in none of the examples at the end did you show how to use it on videos. I have tried uploading videos but they are not a supported format.
    What is going on?

  • @TomiTom1234
    @TomiTom1234 Před 17 dny +2

    I tried to upload a video instead of a picture, but it doesn't allow any extension other than a photo extension. But you showed in your video that it is also possible to add a video 🤔

    • @PredictAnythingSoftware
      @PredictAnythingSoftware Před 17 dny

      I hope someone else will show us how to do it. I want to know how to do that as well.

    • @theAIsearch
      @theAIsearch  Před 17 dny +1

      they will release the video feature 'in a few days' github.com/KwaiVGI/LivePortrait/issues/27