@iamkylebalmer - I agree 100%. For a still-shot manipulation it is quite impressive, and it will clearly find its niche rapidly. I'm curious whether they have a proposal for increasing the angle coverage of the vapp 3-dimensional canonical representation and the zdyn system's vector output. It could be that the training data is limited, or perhaps they are currently bottlenecked by complexity vs. performance (hence the teeth stretching, ear morphs, and the interior of the mouth).
I think if it’s remote anyways, there’s a time and a place to use an avatar. This is kinda creepy but not as terrifying as the whole FaceTime with an Apple Vision Pro 😂
It still can't compute parallax, which makes the hair, ears, and teeth look like some form of shifting monstrosity.
Very true. There’s still something uncanny about it. The fact that it runs in real time is very impressive, though.
Yeah, the Vision Pro FaceTime is really off-looking. Very uncanny.