The Visibility Problem - Computerphile

  • Published 2 Jan 2014
  • Which triangles should be in front and which should be behind? The problems computers face when collapsing 3D graphics down to 2 dimensions.
    Graphics series with John Chapman:
    1/ Universe of Triangles : • A Universe of Triangle...
    2/ Power of the Matrix : • The True Power of the ...
    3/ Triangles to Pixels : • Triangles to Pixels - ...
    4/ Visibility Problem : • The Visibility Problem...
    5/ Lights and Shadows in Computer Graphics: • Lights and Shadows in ...
    John Chapman is a graphics programmer who blogs here: www.john-chapman.net
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computerphile is a sister project to Brady Haran's Numberphile. See the full list of Brady's video projects at: bit.ly/bradychannels

Comments • 220

  • @budulududu1 · 10 years ago · +111

    I now respect my gpu for all the hard work it does for me.

  • @deektedrgg · 10 years ago · +75

    That certainly explains a lot of weirdness that happens in videogames.

  • @tobortine · 10 years ago · +91

    Brilliant. I could listen to this chap explain things for hours.

  • @frollard · 10 years ago · +49

    I would love to see a discussion on the nightmare that must be rendering scenes in the game Portal where you can dynamically place a texture on a wall that acts as a camera to wherever the other portal is, and it can withstand dozens of recursions if the portals face one another.

    • @kaitlyn__L · 10 years ago · +9

      i think they artificially limit the recursions to a smallish number, i want to say 8 or 10 but best to look it up. if you make an "infinite mirror" type dealy you can see that a few times in (if you have a high enough resolution) it's just opaque. of course, you usually don't see that at all~

    • @kaitlyn__L · 10 years ago

      Mikhail Malakhov that's where i got the idea from, i just don't remember the specifics~

  • 10 years ago · +14

    So THIS is why Minecraft has so many problems with transparent blocks not rendering behind other transparent blocks! I never understood that. Thank you :D

    • @DFX2KX · 10 years ago · +2

      yup! Minecraft uses a very basic implementation of OpenGL (no per-pixel lighting at first either, if you recall). The bare-bones use of either D3D or OpenGL doesn't always help with these cases unless you get creative. Game Maker's 3D mode did the same thing if you weren't careful.

  • @ollpu · 10 years ago · +40

    Now I understand why you couldn't see water through ice before in Minecraft!

    • @ollpu · 10 years ago

      ***** That is almost exactly what he explained in the video, using the term "window".

  • @CareyPortnoyBeauford · 10 years ago · +14

    Specialization runs/rules the world. So to me it's fascinating to get a deeper look into a subject I wouldn't normally study in depth. Thanks for sharing

  • @NikolajLepka · 10 years ago · +20

    I was just about to study, and here computerphile goes "lolnope" and slaps me in the face with a new video...

    • @HassanSelim0 · 10 years ago · +5

      I know someone who will be watching this video to help her study :D (she has a bad graphics course lecturer :S)

  • @Yoni0505Blogspot · 10 years ago · +11

    The Z-buffer also has a precision problem: when two surfaces are very close to each other in depth, their pixels may get the same depth value.

  • @EliteTester · 10 years ago · +31

    In older versions of the game Minecraft, if you put ICE in front of other ICE or WATER, the object at the back would not be visible/rendered.

    • @HassanSelim0 · 10 years ago · +10

      yup I was thinking about that same thing when he said "objects behind the window will fail the depth test" :)

  • @furrball · 9 years ago · +3

    Hmmm, already at the visibility problem? Looks like we're missing rasterization. It isn't trivial to make it efficient; you might want to at least explore off-screen Bresenham for scan-line limits, and then dive a bit into edge equations for perfect meshing. Or would it be too mind-blowing for a crash course?

  • @D1rtyraver · 10 years ago · +30

    His hands are in front of his face a lot. His face fails the depth-test.

  • @slpk · 10 years ago · +12

    I've gained a lot of appreciation towards gaming during this series. I have one question though: Generally, who would do this kind of coding? Would it be the GPU manufacturers, while building the video card drivers, or the game library/game engine developers?

    • @Kwauhn. · 10 years ago · +15

      Typically, native graphics libraries like OpenGL and DirectX handle all the buffers and whatnot under a layer of abstraction. The developers can still access this stuff, but by default depth testing and the color buffer are handled by the native library.
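      Conceptually, the depth test those libraries run for you boils down to a per-fragment comparison against a stored depth value. Here is a minimal software sketch of that idea in C++ (the tiny framebuffer, the packed colours and the function names are invented for illustration, not any particular API):

        #include <array>
        #include <cstdint>
        #include <iostream>
        #include <limits>

        // A tiny 4x4 software framebuffer: one depth and one packed RGBA colour per pixel.
        constexpr int W = 4, H = 4;
        std::array<float, W * H> depthBuffer;
        std::array<std::uint32_t, W * H> colorBuffer;

        // Write a fragment only if it is closer than whatever is already stored there.
        void writeFragment(int x, int y, float depth, std::uint32_t color) {
            int i = y * W + x;
            if (depth < depthBuffer[i]) {   // the depth test (a GL_LESS-style comparison)
                depthBuffer[i] = depth;     // depth write
                colorBuffer[i] = color;     // colour write
            }
        }

        int main() {
            depthBuffer.fill(std::numeric_limits<float>::infinity()); // clear to "far"
            colorBuffer.fill(0x000000FFu);                            // clear to black

            writeFragment(1, 1, 0.8f, 0xFF0000FFu); // red fragment, far away
            writeFragment(1, 1, 0.3f, 0x00FF00FFu); // green fragment, closer: replaces red
            writeFragment(1, 1, 0.5f, 0x0000FFFFu); // blue fragment, behind green: rejected

            std::cout << std::hex << colorBuffer[1 * W + 1] << "\n"; // prints ff00ff (green RGBA)
        }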

    • @niedelou · 10 years ago · +1

      I'd imagine both, in a way: on the one hand it seems like the game devs, as they have to decide which method to use and how they want to do it, but on the other hand the GPU manufacturers have to put in code to support those methods.

    • @satellite964 · 10 years ago · +1

      I'm guessing the engine devs together with the API coders.

  • @WoestijnArts · 10 years ago · +6

    Your videos teach me a lot, I only ever listen to the ones with John in them.
    John is the best!

  • @klopo333 · 10 years ago · +2

    Thank you for uploading this! It helped me a lot. A few days ago I tried to make a simple CPU-based 3D render engine (although I know basic OpenGL) using only basic Java. I managed to make a cube mesh, rotate it, and project it onto the screen. The projection is simple and just orthographic: I just took the 3D coords and removed the z-component. It all renders very nicely, but only from some angles. This is because I don't have depth calculation, so sometimes the back face is in front of the actual face in the front. This was three days ago, and I'm so glad you made this video. I really love these videos with John Chapman about 3D rendering, and I would be very happy if you made more of these.

  • @TheWeepingCorpse · 10 years ago · +13

    He didn't mention this so I'll add the following: even with Z-buffer testing it's still important to sort the geometry front to back (a rough painter's-style sort using each object's origin is fine), otherwise you'll waste massive amounts of GPU pixel-shader bandwidth due to overdraw.
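    As a rough illustration of that coarse front-to-back sort, here is a small C++ sketch (the Object and Vec3 types are stand-ins invented for the example; a real engine would sort its own draw-call list):

      #include <algorithm>
      #include <vector>

      struct Vec3 { float x, y, z; };
      struct Object { Vec3 origin; /* mesh, material, ... */ };

      float distanceSquared(const Vec3& a, const Vec3& b) {
          float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
          return dx * dx + dy * dy + dz * dz;
      }

      // Sort opaque objects nearest-first so the early depth test can reject hidden
      // fragments before the pixel shader ever runs for them.
      void sortFrontToBack(std::vector<Object>& objects, const Vec3& cameraPos) {
          std::sort(objects.begin(), objects.end(),
                    [&](const Object& a, const Object& b) {
                        return distanceSquared(a.origin, cameraPos) <
                               distanceSquared(b.origin, cameraPos);
                    });
      }

      int main() {
          std::vector<Object> scene = { {{0, 0, 10}}, {{0, 0, 2}}, {{0, 0, 5}} };
          sortFrontToBack(scene, {0, 0, 0});
          // scene is now ordered z = 2, 5, 10: nearest first, ready to submit for drawing.
      }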

    • @antiHUMANDesigns · 10 years ago · +1

      That is, assuming you're using advanced shaders? If you're drawing simple colored triangles with no textures and the default shader, I dare assume you'll lose performance by sorting them, no?
      However, if what I'm saying is true, I wonder at what point you start gaining performance from z-sorting. Just curious, I suppose I could do some tests. And perhaps it's useless to generalize, and you should test on a per-case basis.

    • @TheWeepingCorpse · 10 years ago · +2

      *****
      Why have GPU power just to waste it? Better to be a good engineer and use that power to create higher-quality games.

  • @bobbobkilu · 10 years ago · +8

    This is very interesting, because it explains why, in minecraft, you could previously see through water and other translucent objects without the color filter when looking at it through ice.

    • @VictorZamanian · 10 years ago · +7

      The One What's so funny? :P bobbobkilu is most definitely right. Minecraft has been messing up for a long time, regarding depth-buffer testing, blending and z-sorting when it comes to transparent objects. In fact, in the very latest version (1.7.4 as of now), take a look at how the potion particle effects are drawn in your own view. The particles aren't transparent, but if you look down onto water, you'll notice the water is drawn on top of the particles, but the particles are drawn on top of the rest of the terrain. Transparent ice blocks also seem to be drawn on top of the particles. This makes it look very weird when all three are combined together in a scene: If the "o" is your eye, and you are looking at this:
      o -> [particle] | [ice] | [water] | [opaque terrain]
      you'll notice some weird things. It's kind of difficult to explain, but a lot of these problems were recently fixed. Just not all of them. :P Not sure if trolling, etc.

  • @geordonworley5618 · 10 years ago

    This is, by far, the most informative video on Computerphile (to me). Thanks!

  • @lvachon · 10 years ago · +1

    I love these videos, I hope there will be videos for lighting, texturing, and post processing as well.

  • @squidcaps4308 · 10 years ago

    Again, just golden stuff. I now know why gMotor2 can only draw two semi-transparent objects behind each other... It has only two buffers, so the third transparent object overwrites the first transparent pixels. It looks EXACTLY like that on screen: the furthest transparency is suddenly at the front, the fourth is second, the fifth is again at the front, and so on.
    Keep em coming. PS: Shaders, pretty please :)

  • @ErikScott128 · 10 years ago · +2

    This reflects older and real-time render engines more than, say, unbiased/physically accurate render engines and ray tracers. Ray-tracing render engines, for example, work differently and are much more complicated: distances must be calculated not only from the camera to the objects, but from the surface of one object to other objects, repeatedly, for every simulated light beam and bounce.
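    For contrast, a ray tracer answers the visibility question per ray by keeping the smallest positive hit distance rather than rasterizing into a depth buffer. A minimal C++ ray-sphere sketch, with an invented two-sphere scene:

      #include <cmath>
      #include <iostream>
      #include <optional>
      #include <vector>

      struct Vec3 { double x, y, z; };
      Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
      double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

      struct Sphere { Vec3 center; double radius; const char* name; };

      // Return the distance t along the ray (origin + t*dir) to the sphere, if it is hit.
      std::optional<double> intersect(Vec3 origin, Vec3 dir, const Sphere& s) {
          Vec3 oc = origin - s.center;                 // dir is assumed normalized
          double b = 2.0 * dot(oc, dir);
          double c = dot(oc, oc) - s.radius * s.radius;
          double disc = b * b - 4.0 * c;
          if (disc < 0.0) return std::nullopt;
          double t = (-b - std::sqrt(disc)) / 2.0;
          return t > 0.0 ? std::optional<double>(t) : std::nullopt;
      }

      int main() {
          std::vector<Sphere> scene = { {{0, 0, 5}, 1.0, "near"}, {{0, 0, 9}, 1.0, "far"} };
          Vec3 origin{0, 0, 0}, dir{0, 0, 1};          // one ray straight down +z

          const Sphere* visible = nullptr;             // visibility = smallest positive t
          double nearest = 1e30;
          for (const auto& s : scene) {
              if (auto t = intersect(origin, dir, s); t && *t < nearest) {
                  nearest = *t;
                  visible = &s;
              }
          }
          if (visible) std::cout << visible->name << " is visible at t=" << nearest << "\n";
      }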

  • @angeldude101 · 10 years ago · +4

    The transparency + depth buffer problem is probably what caused Minecraft to not render transparent objects behind other transparent objects. Making it impossible to see the ice you're trapped under. :(

  • @danhorus · 9 years ago

    These videos are great! They're gonna be very helpful in around 2 years when we start learning CG. :)

  • @smegskull · 10 years ago · +6

    And now I know why Shepard's hair disappears when I angle the camera through a transparent object.

  • @meyakabrown4725 · 8 years ago · +2

    This helped a lot. Thank you for awesome uploads. ^_^

  • @haos4574 · 6 years ago · +1

    My teacher explained the OpenGL depth buffer a little bit, but I wasn't quite sure what it does. Now it's much clearer.

  • @NeilRoy · 8 years ago · +1

    Nicely done, thanks.

  • @TechLaboratories · 10 years ago

    I think it's important to note that z-buffering only matters for rasterization-style rendering, not for ray tracing. It is a common technique in computer UIs, graphics and games these days, while cinema, computer-aided design, and digital visual effects usually rely on ray tracing, which completely avoids the z-buffering issues described here, but at a much higher processing cost (even on the fastest consumer or workstation GPUs).

  • @Zauviir · 10 years ago

    This is such a fun and complex problem, I am glad to see a basic video on it.
    How about a video on pathing? It's something people might think about a lot, but it's really hard to make progress beyond the common strategies that dominate the industry today.

  • @chibishadowgod4504 · 10 years ago · +5

    What do you do for a reflective object? For example, a mirror in a bathroom.

  • @GameDevSPS · 10 years ago · +2

    And so you should draw your scene front to back to take advantage of the early z-test (z-culling), which discards the pixels of hidden surfaces and in turn increases performance.

  • @ricodelta1 · 10 years ago

    a very good elementary set of videos
    cheers

  • @dzaima4737 · 10 years ago

    This REALLY helped me! I was about to write a 3D renderer :D

  • @skizzworld · 10 years ago

    I'm only watching cause of the awesome way he speaks. Also the awesome content.

  • @Alex-Lay · 10 years ago

    So that's how it's done. Fascinating.

  • @dominiccasts · 10 years ago · +1

    Order-independent transparency is a fairly new and not-inexpensive set of techniques, which are primarily useful for industrial viewing applications. My guess is that the speaker figured that it wasn't worth bringing up since the audience is unlikely to even know about those applications, let alone care. Granted, it wouldn't have been a bad idea to mention it.

    • @Kram1032 · 10 years ago · +1

      Seeing what pace these videos are coming at and how much more *could* be said on any given issue, I'm sure we'll get to the more advanced stuff eventually.

    • @dominiccasts · 10 years ago · +1

      Given that he seems to be roughly following a 300-level intro to graphics programming course layout, my guess is that the next step will be shading, followed by textures, and possibly a digression on framebuffers and framebuffer objects and how one can use the final rendered image as a texture (he hinted at it in this video).

  • @RupeeRhod · 10 years ago

    Hope to see an extension on this explaining Culling, as that can be closely related to the z-writing.

  • @alb4599 · 9 years ago · +1

    It seems to me like testing for intersection of triangles and cutting them appropriately might actually improve quality, and solve the issues with alpha, though I could see that it might be more computationally expensive.

  • @sarowie · 10 years ago

    perfect - thank you.
    It also explains why there is a funny interference pattern when two surfaces share the exact same coordinates. I use that phenomenon to check the size of an unknown object with complicated shapes against a known cube.

    • @TheWeepingCorpse · 10 years ago · +1

      That phenomenon is called z-fighting lol.

    • @sarowie · 10 years ago

      Thank you for that word.
      English is not my primary language but even in my first language I did not know the proper word for it.
      Thank you.

  • @TheLostSorcerer · 10 years ago

    Could one of the future videos be on bar-codes and their uses in information communication? I've always wondered how they work as you can scan them right side up or upside down and it still works. Maybe for a follow up you could talk about QR codes.

  • @markus031098 · 10 years ago

    What about, to fix the transparency problem... When a pixel is going to be rendered, rather than drawing it and then later drawing over it if a new pixel's depth is further forward, just add the pixel to an array of all pixels that are at that position, e.g. point x0y0 = [rgba(255, 0, 0, 1), depth: 2, rgba(0, 255, 0, 0.5), depth: 1].
    Then when all pixels have been added, sort all the arrays according to depth with the furthest back first in the array, then proceed to draw each pixel in the array one by one starting at the last opaque one. If one of them is transparent, draw it blended with the current pixel like you would normally.
    This would be quite resource intensive because of the arrays but I'm sure it could be optimized, e.g. by wiping the array whenever a new opaque pixel closer than any others is added or not using an array if there are no transparent objects at that location. It would work though wouldn't it?
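    What this comment describes is essentially a per-pixel fragment list (an A-buffer), which is roughly how some order-independent transparency schemes work. A small C++ sketch of resolving one pixel that way, with made-up fragment values and a standard "over" blend:

      #include <algorithm>
      #include <iostream>
      #include <vector>

      // One fragment that landed on a pixel: its depth plus an RGBA colour.
      struct Fragment { float depth; float r, g, b, a; }; // a = 1.0 means fully opaque
      struct Color { float r, g, b; };

      // Resolve a single pixel: sort its fragment list far-to-near, then alpha-blend
      // back to front; an opaque fragment (a = 1) simply overwrites what is behind it.
      Color resolvePixel(std::vector<Fragment> frags, Color background) {
          std::sort(frags.begin(), frags.end(),
                    [](const Fragment& a, const Fragment& b) { return a.depth > b.depth; });
          Color out = background;
          for (const Fragment& f : frags) {
              out.r = f.r * f.a + out.r * (1.0f - f.a); // standard "over" blend
              out.g = f.g * f.a + out.g * (1.0f - f.a);
              out.b = f.b * f.a + out.b * (1.0f - f.a);
          }
          return out;
      }

      int main() {
          // Opaque red wall at depth 2, half-transparent green glass at depth 1.
          std::vector<Fragment> pixel = { {1.0f, 0, 1, 0, 0.5f}, {2.0f, 1, 0, 0, 1.0f} };
          Color c = resolvePixel(pixel, {0, 0, 0});
          std::cout << c.r << " " << c.g << " " << c.b << "\n"; // prints 0.5 0.5 0
      }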

  • @affablegiraffable · 10 years ago

    this guys voice/emphasis is just like so sensual

  • @MatthewHolevinski · 9 years ago · +9

    raytrace everything :) that's how I roll

  • @aaronjensen7355 · 9 years ago · +4

    I'd make an alpha buffer to fix the window problem.

  • @Holobrine · 10 years ago

    So if the origin moves with the camera, can't you use the z axis as a measure of depth and say "don't draw the pixel if it has a greater z than another one", or draw the pixels in the back first and let the front ones override them?

  • @JayMannStuff · 10 years ago · +1

    Can't you use an Alpha color channel alongside Red, Blue, Green and Gamma? And just multiply the difference between the opaque background and the semi-transparent layer by the alpha value, so that the transparency value can be used instead? Doesn't that also help solve the problem with fog layers, smoke, glass, clouds and so forth?

    • @Tupster · 10 years ago

      Graphics hardware only has RGB and alpha; there is no "gamma" channel. Also, I cannot quite figure out what algorithm you are proposing.
      There is no shortcut solution for order-independent transparency, since the number of layers of 'transparency' and how complex it is to combine them is unbounded.
      Generally, the mathematical formula for combining 'transparent' layers could be almost anything, and it changes depending on your viewpoint.
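      To see concretely why the order matters, here is the standard "over" blend applied to the same two translucent layers in both orders (single channel, made-up numbers):

        #include <cstdio>

        // "Over" blend of a translucent source onto a destination, one channel.
        float over(float src, float srcAlpha, float dst) {
            return src * srcAlpha + dst * (1.0f - srcAlpha);
        }

        int main() {
            // Two 50%-transparent layers, one white (1.0) and one black (0.0),
            // composited onto a mid-grey (0.5) background in both orders.
            float whiteInFront = over(1.0f, 0.5f, over(0.0f, 0.5f, 0.5f)); // 0.625
            float blackInFront = over(0.0f, 0.5f, over(1.0f, 0.5f, 0.5f)); // 0.375
            std::printf("%.3f vs %.3f\n", whiteInFront, blackInFront);     // order changes the result
        }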

  • @SendyTheEndless · 10 years ago · +1

    Clearly the most elegant solution is just to make it a sidescroller :>

  • @mkaatr · 10 years ago · +7

    I am wondering if "Alpha Blending" combined with ZBuffer could help solve the two transparent windows problem

    • @SprocketWatchclock · 10 years ago · +4

      Well, it's not really a problem if you're using OpenGL or DirectX; they both have solutions coded into their renderers. Not to mention, modern graphics hardware contains hardware solutions to most classic rendering problems. Nobody renders graphics the old-fashioned software way anymore, that's just inefficient and limited.

    • @Kram1032 · 10 years ago

      It could and if you want to special-case all the way through, it comes down to repeatedly applying the same algorithm with varying depth-buffers over and over until all problems are resolved.
      I wonder, though, if there is a better method for storing per-pixel-order.

    • @TheWeepingCorpse · 10 years ago · +6

      Most engines render transparent objects in a separate pass after all the opaque objects have been drawn. They use a combination of painter's-style z-sorting, z-buffer testing and then alpha blending in the pixel shader.
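      A sketch of that two-pass frame structure in C++, with invented objects and printf standing in for the actual draw submissions:

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        // Hypothetical scene object; "depth" stands in for distance from the camera.
        struct Object { const char* name; float depth; bool transparent; };

        int main() {
            std::vector<Object> scene = {
                {"wall", 8.0f, false}, {"window", 5.0f, true},
                {"tree", 12.0f, false}, {"smoke", 3.0f, true},
            };

            std::vector<Object> opaque, transparent;
            for (const Object& o : scene) (o.transparent ? transparent : opaque).push_back(o);

            // Pass 1: opaque geometry, front to back, depth test and depth write on.
            std::sort(opaque.begin(), opaque.end(),
                      [](const Object& a, const Object& b) { return a.depth < b.depth; });
            for (const Object& o : opaque)
                std::printf("draw opaque      %-6s (depth write ON)\n", o.name);

            // Pass 2: transparent geometry, back to front (painter's order); depth test
            // stays on so opaque geometry still occludes it, but depth write is off and
            // alpha blending is on.
            std::sort(transparent.begin(), transparent.end(),
                      [](const Object& a, const Object& b) { return a.depth > b.depth; });
            for (const Object& o : transparent)
                std::printf("draw transparent %-6s (depth write OFF, blending ON)\n", o.name);
        }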

    • @Kram1032 · 10 years ago

      you can't completely discard the opaque objects though: If you have three layers, the middle one being opaque, you'll end up overwriting that middle layer if you then only consider the two transparent ones, even if you take the individual z-values of both transparent layers.

    • @PSNB92 · 10 years ago · +8

      ***** Rendering translucent objects is still an issue that the programmer needs to work around, even with OpenGL and DirectX.
      As far as I know, the most modern method for rendering translucent geometry is via Depth Peeling (developer.download.nvidia.com/SDK/10/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf).

  • @theycallme_nightmaster · 10 years ago

    I like the way this guy talks

  • @coloneldookie7222 · 10 years ago

    I may be misunderstanding the explanation of the window problem, but would it not be best to calculate the opaque parts as dominant overlay, and basically cut out the section that is the window? As I understood it, you're creating an object, and it's possible to define a coded space as empty versus zero.

  • @unvergebeneid · 10 years ago

    I'm really curious how transparency is handled in the upcoming applications of the z-buffer.

  • @haha71687 · 10 years ago

    Can you get John to do a video on raytracing? It allows for very realistic lighting with reflection, refraction and diffusion, but is very computationally expensive as far as I know.

  • @JZL003 · 10 years ago

    Was this all shot at one time and then split up? That must have been quite a shoot.

  • @JackDander · 10 years ago

    They also call it depth sorting. If you have odd triangles like that example, then you have other problems with your objects. You can't properly blend colors in different orders with different transparencies if you don't depth-sort first. It is pretty expensive, though; you can see that a lot of games don't do it.

  • @therasmustrew · 10 years ago

    In the window situation, couldn't you have a secondary depth buffer for the transparent objects, rather than use the painter's algorithm?

  • @lasharn07 · 10 years ago

    So how do reflective surfaces get rendered? What's the solution for something like that?

  • @theonlyari · 10 years ago

    So, how do you handle looking through a portal which is in line with another portal?

  • @GBFU2016 · 10 years ago · +1

    MORE VIDEOOS. MORE!!

  • @sjcwoor · 10 years ago

    Are you able to do a video on how anti-aliasing works with respect to overlapping 3D objects?

  • @karangupta4615 · 5 years ago

    For transparent objects, can we not projectively transform the opaque objects behind them onto the transparent object and change its appearance? Then we don't have to selectively switch off the depth buffer.

  • @PsychoticusRex · 10 years ago

    Can you next tie in how engines like Unity (used by Kerbal Space Program), among others, implement these 3D rendering techniques?

  • @sogwatchman · 10 years ago

    I was curious about how this worked... Minecraft still has problems with transparent objects. If you stand in front of, say, a waterfall and eat something, the animation for the food being eaten will appear as if it's behind the waterfall.

  • @Uterr · 10 years ago

    Hey cool, I now understand what a depth buffer is %:)

  • @BejayWaddell · 10 years ago

    Why wouldn't you treat the window as a point-of-view plane? It would have its own depth map for the items behind it, but for the primary point of view it would be a flat surface representing the pane of glass.

  • @BrainSeepsOut · 10 years ago

    So this is the mythical Z-Buffer that was a toggleable option in a lot of older games...

  • @luisff7030 · 10 years ago

    But if the program draws transparency based only on the depth value, it may not capture the physics of transparency needed to make things look more real. I mean, all transparent objects change the direction of the light.

  • @dhuyd · 10 years ago

    To solve the window problem I'd make an alpha buffer or an object priority buffer.

  • @bigflytrap · 10 years ago

    Insane how programmers figured out how to put a 3d world into a computer

  • @ChrisKramins · 10 years ago

    Excuse my ignorance, but in regards to opaque/transparent object rendering, wouldn't ray tracing fix that problem?

  • @MissPiggyM976 · 8 years ago · +1

    I.e. the z-buffer. If windows are simply holes in the walls there's no rendering problem.

  • @stopfidgetting · 8 years ago

    Couldn't you make separate buffers for every polygon then use an algorithm to combine them?

  • @AdeonWriter · 10 years ago · +1

    Has there ever been a game engine that could render three mutually overlapping transparent triangles correctly?

  • @QuazarGamesYT · 10 years ago

    What about the efficiency of these solutions? I'm not too keen on computer algorithms, but it seems like going through and drawing every single pixel, and handling every transformation calculation, every single frame that a 3D graphics program is running (like a game, for example), would lead to a significant loss of computing speed, no?

    • @20electric · 10 years ago · +3

      No, this is how games deal with depth.

    • @SprocketWatchclock · 10 years ago · +4

      That's why modern machines have dedicated graphics rendering hardware that have hardware shortcuts to do this stuff quite quickly.

    • @Kram1032 · 10 years ago · +1

      per pixel operations are actually pretty darn efficient nowadays. It's a lot of special-casing which might reduce parallelizability that really bogs things down, as far as I know.
      If you were content with pure single-color materials, like flat shades of green or blue or what ever, without shadows or lights or transparency of any kind, drawing trillions of triangles would probably actually be easy enough.
      Of course, though, that means that there will be huge performance differences between 640p (by now ancient for PCs, but you might deal with that on phones), 1080p or the somewhat soonish upcoming 4K screens. More pixels can really slow you down quite a bit.
      However, since you'll, in the end, need the solution for each pixel, you'll not really get around that problem any time soon. You'll eventually need to calculate per pixel.
      One solution that's sometimes done is to do, say, 720p and scale it up to full HD. Especially when decent and consistent framerate is more important than ultra high resolution.

    • 10 years ago · +2

      Almost all modern machines handle z-buffering on a hardware level with amazing optimization tricks. Using z-buffer efficiently also reduces the amount of rendering needed, effectively cutting down on costs. Especially in scenes where there aren't many big surfaces(i.e. many pixels that need checking), z-buffer is quite efficient. Still, it takes up quite a lot of processing power, and many researchers are working on ways to make it more efficient.

    • @Borednesss · 10 years ago

      I have no idea honestly, I'm in the same boat you are in asking this question... but computers do like 500 billion calculations a second so they handle it somehow

  • @GegoXaren · 10 years ago

    Can we have *Carmack's Reverse* explained next?

  • @ThomasKole · 10 years ago · +1

    What about z-fighting? What's going on there? Is the zbuffer not precise enough?

    • @20electric · 10 years ago · +2

      Partly. There are 8-, 16-, 24- and 32-bit z-buffers, but z-fighting can't be completely removed with the z-buffer alone; you would need to implement more algorithms. It's rare to see z-fighting, though, and it's easier to stop it manually than to add a more complex method of rendering.

    • @Kram1032 · 10 years ago · +2

      Well, you can store the z-buffer with any bit-depth you like. You could go for 8 bit like colors typically are, which would leave you with only 256 different distances. Typically, afaik, you'd take floats, which give you a greater range and accuracy. However, even with floats, if you have two triangles that are so close to each other (for instance when intersecting but they don't even need to intersect), that the resolution of float (e.g. the smallest possible float difference) isn't high enough to differentiate the two distances (both get assigned the same value), it'll come down to essentially rounding as well as order of comparisons, which one of the two triangles shines through. - And that might vary per pixel.
      Because of the way floats work, this problem worsens for things that are further away (have a higher z-buffer value), since the bigger a number is, the less accurately it will be stored, so things could be further apart to be assigned the same value. - However, usually, things that are further away also are smaller, so this isn't a frequent problem.
      In some cases, z-fighting might happen because whoever made the model didn't remove duplicate triangles, i.e. triangles that are in the exact same positions. In that case no drawing algorithm in the world can resolve this perfectly.
      (However, there usually are algorithms that either warn about or automatically fix triangles that are very close to each other pre drawing to help reduce z-fighting)
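      A toy C++ illustration of that precision collapse, using a 16-bit buffer and made-up depth values (real depth buffers store a non-linear post-projection depth, which makes the far range even worse):

        #include <cstdint>
        #include <cstdio>

        // Quantize a depth in [0, 1] to a 16-bit depth-buffer value.
        std::uint16_t toDepth16(double z) { return static_cast<std::uint16_t>(z * 65535.0); }

        int main() {
            // Two surfaces 0.000005 apart in normalized depth, e.g. a poster
            // floating just in front of a wall, far from the camera.
            double wall = 0.9000050, poster = 0.9000000;
            std::uint16_t a = toDepth16(wall), b = toDepth16(poster);
            std::printf("wall -> %u, poster -> %u, %s\n",
                        (unsigned)a, (unsigned)b,
                        a == b ? "same stored depth: z-fighting" : "distinct depths");
        }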

    • 10 years ago · +3

      Exactly. When the distance between two objects is less than the precision of the buffer at that point, they get assigned the same depth and z-fighting occurs. The renderer thinks that the objects are intersecting (or overlapping) at the pixels where the depths are equal.

    • @antiHUMANDesigns · 10 years ago

      You set a near and far plane to try to compress the area and increase the precision of the z buffer.
      When you've got really long distances that you need to represent, you may get Z-fighting issues. One way to solve this is to use multi-pass rendering, if everything else fails. That simply means you render a section of the distance at a time, giving good Z buffer precision to each section. This way, there will be no Z-fighting no matter the distance.
      And yes, it's really, really sad to see games like Battlefield 3 and 4 suffer from major Z-fighting. I'm pretty disappointed in DICE.

    • @ThomasKole · 10 years ago

      Ah, makes sense, got it.

  • @retepaskab · 10 years ago

    Couldn't we just add a transparency layer to the z-buffer, and render pixels that pass either the z-test or the alpha test?

  • @noseman123 · 10 years ago

    Is this more or less ray tracing?

  • @Curixq · 10 years ago

    What about reflections? (e.g a mirror or a curved metal surface)

  • @L4Vo5 · 8 years ago

    What if there's an object in front of a window? wouldn't the window be drawn over it?

  • @Brejla · 10 years ago

    DX11 linked lists solve the transparent object rendering order, right? :)

  • @MrThompsoon · 10 years ago

    What would happen if two triangles had the same z value?

  • @Boomshicleafaunda · 10 years ago

    Couldn't you just create a second depth buffer for transparent objects?

  • @jdgrahamo · 10 years ago

    Now I know why I'm useless at lobbing grenades through windows.
    How do you make ballistics work, by the way?

  • @HassanSelim0 · 10 years ago · +2

    Can you make a video specifically about how the perspective projection matrix is generated? I'm having some trouble with them right now :D (spent hours yesterday trying to make depth values from the Kinect match the depth of some 3D objects that are supposed to be in the same scene, but I can't get the view and projection matrices right :S)

    • @Salabar_ · 10 years ago

      Easiest solution is to download GLM library.

    • @HassanSelim0 · 10 years ago

      This doesn't help me; I already have functions that generate it for me, but they don't generate it the way I expect. I thought that using the same field of view as the Kinect camera would be enough to make the resulting Z value (after projection) match the ones coming from the Kinect depth sensor, but it's like one is linear and the other isn't!

    • @Salabar_ · 10 years ago

      You are probably using functions designed for OpenGL. Try to a) transpose your matrix, b) invert the z coordinate of any data you apply your matrix to, or c) both. Those are the only mathematical differences between the OpenGL and DirectX specs I'm aware of. And by the way, don't forget to use homogeneous coordinates: each vector has to have a 4th coordinate equal to 1. Good luck!

    • @HassanSelim0 · 10 years ago

      Sa1abar I'm using XNA which has its own functions that work well with DirectX (the underlying technology behind XNA), and it works great and looks fine for making normal games, I'm having trouble matching depth of rendered graphics with actual depth info captured by the Kinect :S
      last time I did this I did orthographic projection then added some scaling, it worked fine but it wasn't clean and was hard to maintain, I just wanted to do it correctly this time with clear mathematics and not just magic numbers based on trial and error :S

  • @gracicot42 · 10 years ago

    Not all kinds of rendering need to test the depth of the pixel; pure ray tracing does not need it.

  • @DanielPCline · 10 years ago

    It would be nice to hear some of the mathematics behind ray tracing.

  • @CjqNslXUcM · 10 years ago

    the guy legitimately looks insane

  • @newsoupvialt · 10 years ago

    What if something goes in front of the window?

  • @kevind814 · 10 years ago

    The depth is basically just a Z axis for a 3D grid, right?

    • @nikelf1 · 10 years ago · +1

      As far as I understand it, yes.

    • @matsv201 · 10 years ago

      Well, it's the transformed Z axis for the viewport. It's not the same as the X/Y/Z axes of the terrain.
      Also, it has to be calculated separately anyway, because the surface depth isn't produced directly by the triangle transform; in the transform calculation only the edges are calculated.

    • @ThisNameIsBanned · 10 years ago

      Depth depends on the viewer. In general that means that if you take a point as the viewer, everything away from that point already has depth; go further and there's more depth. Anything around the viewer in a circle will have the same depth.
      If you are rendering in 3D it can be even more fun, at least if you do "proper" 3D with two viewpoints a short separation apart (cheap 3D is done with just one viewpoint, which gives some people a headache because it's so unnatural).

  • @Animuldok · 10 years ago

    Could someone explain "z-fighting"? It's a term that gets thrown around often when overlapping textures seem to flicker back and forth. I think I understand what's happening, but would like an actual educated explanation. Thanks =)
    (BTW it's a not-infrequent problem in Minecraft)

  • @victorjatem2 · 10 years ago

    Why is it rare to have mirrors in games, and when there is one, why is its rendering slow?

  • @EvanHotwingsCorgiat · 10 years ago

    You should do a video on anti aliasing!!!

  • @bouldersky2906 · 10 years ago

    so then where does ray tracing enter in to this?

  • @GoldphishAnimation · 10 years ago

    Agnostic
    I personally like the behaviors. It'd be boring if he wasn't talking like he was about to snap.

  • @onwul · 10 years ago

    Please make a video about z-fighting,

  • @TheBcoolGuy · 10 years ago

    This is why you can't see ice, glass, water and such through either in Minecraft.

  • @CDBelfer4 · 10 years ago

    BSP trees?

  • @stevecummins324 · 9 years ago

    aren't windows just viewports?

  • @figloalds · 10 years ago

    DirectX uses the term Z-BUFFER for that technique.

  • @coldmirrorfan · 10 years ago

    At first I was like WTH am I watching but this is really Interesting.
    Subscribe!