3D Rendering with Binary Space Partitions

  • Published 27. 04. 2014
  • Another project I did for school. There are no code examples here, but this illustrates the basic concepts behind a first-person 3D rendering engine.
  • Science & Technology

Comments • 148

  • @SireSquish
    @SireSquish Před 5 lety +202

    Despite your very detailed and well described presentation, I still can't get my head around it.

    • @PASTRAMIKick
      @PASTRAMIKick Před 4 lety +9

      you probably don't know how Binary Trees work

    • @meanmole3212
      @meanmole3212 Před 3 lety +29

      I was watching this video some time ago and had the same feeling. It gives a general picture of BSP trees and one way to use them, but it does not go into all of the important details.
      The "map file" described in the video would probably be, in most cases, just a list of convex 3D polygons. You would then build the BSP tree based on these polygons, possibly splitting some of the original polygons along the way (1 polygon for each side of the divided space). This means you need to know much more than just the top-level algorithms for constructing and using a BSP tree. At the very least you need to know how to construct a mathematical representation of a plane from a set of vertices and use its orientation and normal to test whether other 3D points are on the plane, in front of it, or behind it. In addition, you need to know how to use this kind of plane representation to split a convex polygon in half.
      The information is certainly out there on the web, it's just a bit hard to find and connect all the dots to make sense of it. It also requires a lot of work. A great place to start is by googling "opengl bsp FAQ". That little text file is a great resource for beginners.
      Good luck!
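
      As a rough illustration of the plane math described above, here is a minimal sketch in C (the struct and function names are invented for illustration, not taken from the video): build a plane from three polygon vertices, then classify points against it.

      ```c
      typedef struct { double x, y, z; } Vec3;

      static Vec3 sub(Vec3 a, Vec3 b)   { return (Vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
      static Vec3 cross(Vec3 a, Vec3 b) { return (Vec3){a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
      static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

      /* Plane stored as normal n and offset d, so that dot(n, p) + d == 0 on the plane. */
      typedef struct { Vec3 n; double d; } Plane;

      /* Build a plane from three non-collinear vertices of a polygon. */
      static Plane plane_from_points(Vec3 a, Vec3 b, Vec3 c) {
          Plane pl;
          pl.n = cross(sub(b, a), sub(c, a));
          pl.d = -dot(pl.n, a);
          return pl;
      }

      /* Classify a point: 1 = in front of the plane, -1 = behind, 0 = on the plane. */
      static int classify_point(Plane pl, Vec3 p, double eps) {
          double s = dot(pl.n, p) + pl.d;
          if (s >  eps) return  1;
          if (s < -eps) return -1;
          return 0;
      }
      ```

      Splitting a convex polygon is then a matter of walking its edges, classifying each vertex, and inserting an interpolated vertex wherever an edge crosses the plane.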

    • @Doom2pro
      @Doom2pro Před 3 lety +6

      The lines part is confusing; they look random...

    • @SerBallister
      @SerBallister Před 2 lety +2

      @@Doom2pro Half of the complexity of this algorithm is how to build the tree in the first place, and that involves picking "optimal" splitting planes

    • @Catonator
      @Catonator Před 2 lety +7

      This video skips over explaining some of the fundamental parts of what the BSP tree is even used for, which seems to be kind of a running theme with videos explaining the algorithm.
      The partitioning algorithm picks a plane and divides the space given to it into two lists of polygons (or objects of any kind): those behind and those in front of the chosen plane. Generally the plane is put along a wall. That way, if you can find which wall the camera is standing in front of, you can always find a complete list of things in front of that wall by simply stepping 'rightward' in the binary tree. It achieves a pretty efficient method of depth ordering and view culling.
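
      A minimal sketch of that traversal in C, reusing the Vec3, Plane and classify_point helpers from the earlier sketch (the node layout, the Polygon type, and draw_polygon are placeholders I made up): rendering the far side of each splitting plane first, then the polygons on the plane, then the near side yields back-to-front (painter's) order.

      ```c
      typedef struct Polygon Polygon;          /* opaque: vertex list, texture, ... */
      void draw_polygon(const Polygon *p);     /* provided elsewhere by the engine  */

      typedef struct BspNode {
          Plane splitter;                      /* plane chosen along one of the walls */
          struct BspNode *front, *back;        /* subtrees on each side (may be NULL) */
          Polygon **polys;                     /* polygons lying on the splitter      */
          int npolys;
      } BspNode;

      /* Draw the whole tree back-to-front relative to the camera position 'eye'. */
      void bsp_render(const BspNode *node, Vec3 eye) {
          if (!node) return;
          if (classify_point(node->splitter, eye, 1e-6) >= 0) {
              bsp_render(node->back, eye);                 /* far side first      */
              for (int i = 0; i < node->npolys; i++)
                  draw_polygon(node->polys[i]);            /* walls on the plane  */
              bsp_render(node->front, eye);                /* near side last      */
          } else {
              bsp_render(node->front, eye);
              for (int i = 0; i < node->npolys; i++)
                  draw_polygon(node->polys[i]);
              bsp_render(node->back, eye);
          }
      }
      ```

      Swapping the two recursive calls gives front-to-back order instead, which is what you want when combining the traversal with occlusion tracking.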

  • @vanlitespeed693
    @vanlitespeed693 Před 6 lety +83

    All I've needed to create the next revolutionary game, thanks!

    • @franz.hildenburg
      @franz.hildenburg Před 5 lety +8

      You could create the next Wolfenstein 3D!

    • @JonesCrimson
      @JonesCrimson Před 5 lety +24

      Good luck and have fun in 1993!

    • @fabricebacquart
      @fabricebacquart Před 3 lety +6

      @@franz.hildenburg Wolfenstein 3D doesn't use BSP.

    • @R3KS9476
      @R3KS9476 Před 3 lety

      @@franz.hildenburg Actually he could make the new Doom; Wolfenstein used raycasting, not BSP.

    • @OpenGL4ever
      @OpenGL4ever Před 4 měsíci

      @@fabricebacquart That's not quite true. The SNES version of Wolfenstein 3D does use BSP. It was in fact the first game at id Software that used the BSP algorithm.
      And the result was so convincing that they then used the BSP algorithm on the PC for DOOM and put the performance gain into drawing the floors and ceilings.

  • @kevito666
    @kevito666 Před 3 lety +13

    I watched many videos about BSP theory, and this is by far the clearest and most concise, with the most real-world examples.

  • @OwenMiller9825
    @OwenMiller9825 Před 10 lety +83

    It wasn't clear to me how you chose the partitions for the original tree

    • @totallynuts7595
      @totallynuts7595 Před 5 lety +1

      Maybe an algorithm that splits the level map into triangle halves that are not larger than a certain surface area? So you start with a rectangle which becomes two triangles, and each triangle becomes another two triangles?

    • @Lattamonsteri
      @Lattamonsteri Před 5 lety +4

      @@totallynuts7595 I didn't understand much of this video, but I've heard that it's at least important to get those partitions to be convex, so that there won't be any weird drawing errors caused by partitions that curve back onto themselves like a horseshoe. Any triangle is convex, so I think that's enough splitting :D

    • @totallynuts7595
      @totallynuts7595 Před 5 lety +2

      After watching more videos I think as long as you break down all concave spaces into convex ones it should work.
      PS: Bisqwit has a nice video that also delves into the rendering method. "Doom style C++ engine" I believe it's called (or something like that).

    • @Lattamonsteri
      @Lattamonsteri Před 5 lety

      @@totallynuts7595 Yes :D I was considering linking that video to you too ^^

    • @storerestore
      @storerestore Před 4 lety +2

      @@totallynuts7595 Bisqwit's video covers a portal rendering engine, where walls in convex rooms can, for the sake of rendering, be "portals" through to another room. This way, the drawing order is natural and you don't need binary space partitioning, because each room links to its immediately adjacent rooms. So it's not quite like the Doom engine and doesn't use BSP.
      The game Descent used such an engine, I believe, but in 3D. The Build engine is also a portal-based rendering engine, and thus Build engine maps don't need a BSP baking step (like Doom levels) and can have dynamic changes along the xy plane. Somehow the rooms in Build also don't need to be convex. I don't understand how that is achieved.

  • @frankynakamoto2308
    @frankynakamoto2308 Před 4 lety +18

    They need raycasting for video game characters to be used as an A.I. field of vision, because when competing against A.I. software it usually detects the location of the user no matter where the user is, so it fails to represent a human opponent because it does not have a raycasting field of view.

    • @OpenGL4ever
      @OpenGL4ever Před 4 měsíci

      It doesn't stop there. Essentially, you would also need to consider the effect of camouflage that textured polygons can provide.

  • @jsgoller1
    @jsgoller1 Před 7 lety +10

    Excellent explanation - thank you for posting this!

  • @skilz8098
    @skilz8098 Před 4 lety +20

    The painter's algorithm can work efficiently provided that you have all of your objects stored in a structure sorted by depth. However, you would only draw the visible regions of the farthest objects first; if a section of that object happens to be overlapped by a closer object, you would cull that portion of the object. So in essence, you would have to sort all of your objects by Z-depth, then implement a culling function that culls all renderable objects that don't need to be drawn. These include objects outside of the view frustum, all primitives that are back-facing (the opposite of a wall facing the camera's view direction), and all objects that would be occluded.
    This is only one half of the story and one half of the complete algorithm. You would need a second render pass for any object that contains transparency. If the depth ordering isn't correct and you end up drawing transparent objects in the wrong order, the total additive color will be wrong! This is kind of like the painter's algorithm without implementing either a BSP tree or a raycaster. It is kind of the inverse of the painter's algorithm, as it will only draw visible portions of the farthest objects first.
    The raycaster algorithm can be highly optimized by using grid cells with a specified 2D grid size and having each cell node either contain an object or be null. You'd have to search through all the grid cells, and you'd have to do a neighbor-searching algorithm that can be optimized by using a 3x3 array kernel and bit manipulation. Instead of casting out to every square grid cell that has 4 vertices each, the algorithm checks whether a neighbor exists in specific directions; it then creates a hash table of labeled edges for that node in the x- or y-direction. This continues until all cells have been checked. This is a pre-rendering phase. Once it is completed, all of the polygonal geometry will have been created, reducing the number of vertices to cast rays out to. Let's say you have 4 adjacent grid cells that are 2x2 pixels in size in the X-direction: instead of having 16 vertices to trace for intersections by the raycaster, where many of these vertices are shared, these adjacent cells would be combined into 4 labeled edges that only use the outer 4 vertices. This in effect creates a geometric object by combining adjacent cells that have shared boundaries.
    Then when you perform the raycasting, you would only send 3 rays for each vertex that exists for each hashed labeled edge. So if a constructed geometry has 4 edges with 4 vertices, there will be at most 12 rays cast out. This can be optimized even further by omitting the rays that would be cast to a vertex that is hidden by a closer edge; rays only need to travel as far as the closest edge, and if one of the vertices is hidden behind an edge that's already been detected, there is no need to cast those 3 rays for the buried or hidden vertex. The reason you cast 3 rays is that the initial ray lines up directly from the camera's or player's view direction to the vertex in question. The other two rays are offset to either side by a small equidistant angle, which could be about 1 degree on each side. The reason for doing this is to properly determine both interior and exterior corners of the edges, depending on whether the polygons formed are convex or concave. This also helps with lighting calculations.
    Then when you are doing your ray tracing for collision detection, the mechanism is already there and you can just reuse the functions or methods that are needed. So there are multiple ways to make 3D rendering algorithms efficient. Another method that isn't mentioned here is octrees! They give you more flexibility and power than a basic BSP tree. An octree is similar in concept to how the BSP tree works, but closer to the cell-node structure I described above: instead of being a 2D object, octrees work with a 3D volumetric cube, and the entire 3D scene is partitioned similarly to the BSP tree. Then your depth ordering and transparency handling come into effect. This can all be done on the CPU! Now step away from those and we get into sending vertex and pixel data to the GPU via shaders. That's an entirely different approach, as you have multiple render targets and you get into concepts like cube and environment maps, and pre- and post-rendering techniques! Building a working 3D graphics rendering engine is a lot of work and a lot of math, but it is truly enjoyable once you have a working application!
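
    A tiny sketch of the depth-sort-then-cull idea in C (Renderable, its fields, and draw_object are made-up names; the frustum and back-face tests are assumed to have run already):

    ```c
    #include <stdlib.h>

    typedef struct {
        double depth;      /* distance from the camera along the view axis */
        int    visible;    /* result of frustum / back-face culling        */
        /* ... mesh data ... */
    } Renderable;

    void draw_object(const Renderable *o);   /* assumed to exist elsewhere */

    /* Sort far-to-near so closer objects are painted over farther ones. */
    static int by_depth_desc(const void *a, const void *b) {
        double da = ((const Renderable *)a)->depth;
        double db = ((const Renderable *)b)->depth;
        return (da < db) - (da > db);
    }

    void painters_pass(Renderable *objs, size_t n) {
        qsort(objs, n, sizeof *objs, by_depth_desc);
        for (size_t i = 0; i < n; i++)
            if (objs[i].visible)             /* culled objects are skipped */
                draw_object(&objs[i]);
    }
    ```

    A second, back-to-front pass over the transparent objects would follow this one, as the comment above points out.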

  • @mohhingman
    @mohhingman Před 10 lety +9

    i really enjoyed this, thanks for uploading :)

  • @thisjointisloose
    @thisjointisloose Před 8 lety +143

    Doom baby

    • @deathpick2
      @deathpick2 Před 8 lety +12

      Doom guy

    • @lethalbroccoli01
      @lethalbroccoli01 Před 7 lety +6

      RomeroFanboy Roblox isn't bad, it's the people that play it that makes an impact.
      It's the youtubers and their fans that make the game un-attractive.
      I'm sure if you try it out and build something, you might like it.
      Roblox is quite easy to use.

    • @clankill3r
      @clankill3r Před 6 lety +2

      Doom was not 3d.

    • @alkolove1
      @alkolove1 Před 5 lety +3

      @@clankill3r Where the hell did you get that conclusion? Lol. The polygon vertices there are in three dimensions, exactly like the sprite positioning. The BSP trees, yes, are two-dimensional. But that's a separate optimization algorithm for culling invisible polygons. The geometry itself is 3D.

    • @sweetberries4611
      @sweetberries4611 Před 5 lety +3

      @@clankill3r it was

  • @akianaray651
    @akianaray651 Před 6 lety +9

    This was a really clear explanation!

  • @heacac3729
    @heacac3729 Před 20 dny

    This is by far the best video explaining how BSP works that I've ever found, thanks a lot!

  • @latestGreen
    @latestGreen Před 3 lety

    Thank you so much! I've finally got my head around the idea!

  • @maxis_scott_engie_maximov_jr

    Thanks for the explanation. I was always confused about why the Doom engine renders everything in such a weird order: one time the wall closer to the player, another time one further away.

  • @kamranki
    @kamranki Před 4 lety +1

    Lovely explanation sir. Love these kind of vids.

  • @eyithings
    @eyithings Před 2 lety

    Thank you for the description. I'm currently looking for this concept in 2D.

  • @Slamy4096
    @Slamy4096 Před 6 lety +1

    A very helpful video. Thank you for sharing.

  • @KuraIthys
    @KuraIthys Před 5 lety +33

    My favourite method is portal rendering; though it is not all that well suited to large open spaces or exteriors, it's got a lot of amusing side benefits.
    Your raycasting example glosses over the fact that raycasting imposes a lot of limitations.
    Also, most implementations of it in practice use data structures designed so it shouldn't be necessary to check against every possible wall in the field of view.
    After all, you only care which wall is closest, for which there are a bunch of possible shortcuts that don't require such haphazard methods...
    Still, BSP and its 3D equivalent, the octree, have a lot of advantages, though they do require an extensive pre-computation step.

    • @sebastianmestre8971
      @sebastianmestre8971 Před 3 lety

      An octree splits space using axis-aligned planes, where a set of 3 consecutive planes with different alignments all converge on a point (effectively splitting each subspace into 8).
      A 3D k-d tree is similar: it splits space on axis-aligned planes, but it has no requirement for planes converging on a single point.
      A 3D BSP is similar to a 3D k-d tree: it splits space on planes, but they don't need to be axis-aligned.
      The point of BSP rendering is that we can make a cutting plane match up with a surface, which is not generally possible using octrees.
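
      One compact way to see that difference is in the node type each tree would use (a sketch; the field names are invented):

      ```c
      /* k-d tree / octree style: split on an axis-aligned plane, e.g. x = 12.5.  */
      typedef struct KdNode {
          int    axis;                 /* 0 = x, 1 = y, 2 = z                     */
          double offset;               /* plane position along that axis          */
          struct KdNode *low, *high;
      } KdNode;

      /* BSP style: split on an arbitrary plane a*x + b*y + c*z + d = 0, which is
         what lets the splitter coincide with an actual wall or polygon surface.  */
      typedef struct BspSplitNode {
          double a, b, c, d;
          struct BspSplitNode *front, *back;
      } BspSplitNode;
      ```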

    • @zzador
      @zzador Před 2 lety +2

      Portal rendering is best if the map changes a lot during gameplay... like complete rooms that move through the map and stuff like that. I too like portal rendering more than BSP rendering; BSP is just too static. The portal algorithm is simpler to understand and simpler to implement. It is, however, only really useful in indoor environments where you have many portals (like doors etc.) and where complete sections of the map are simply skipped while rendering because their portals aren't visible. In an outdoor environment there are just no portals present, but if you create giant portals for complete terrain sections you can at least benefit from the reduced overdraw of front-to-back drawing. And to be honest: BSP also doesn't really speed up outdoor terrain rendering that much.

    • @SerBallister
      @SerBallister Před 2 lety

      @@zzador Quad/Oct trees are maybe better for outdoors. BVH is also viable for outdoor scenes, possibly better, given they handle uneven polygon density better than Quad/Oct trees

    • @TastyPurpleGum
      @TastyPurpleGum Před 2 lety

      @@zzador Can't it be both? I seem to remember that the Quake engine, the subsequent Quake games, and the Half-Life engine use both BSP and portal rendering: there's a Potentially Visible Set for each VIS group, and if the camera hits a VIS group portal, its geometry gets cached in memory for the BSP rendering algorithm to search.

    • @zzador
      @zzador Před 2 lety

      @@TastyPurpleGum No. Quake and all id games up to the point when Carmack left had just a BSP. One should not confuse in-game portal effects like in Prey (2006) with a real portal engine. It is, however, possible to combine both algorithms as suggested. The map would be segmented into big chunks, each of which would be a BSP tree. All the chunks are then connected by portals.

  • @FPChris
    @FPChris Před 2 lety +1

    I still feel like, if you have walls as 2D lines from a top-down view, you could project the endpoints into screen space from the camera view along the X center line across the width, Z-buffer them, and do YTop and YBottom as you go. 1/z is linear in screen space, so you could interpolate from Zstart to Zend for each projected wall.
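
    That 1/z observation in sketch form (a made-up helper, assuming the wall endpoints have already been projected to screen columns x0 and x1 with view-space depths z0 and z1):

    ```c
    /* Perspective-correct depth at screen column x.  Because 1/z (not z) varies
       linearly across the screen, interpolate the reciprocals and invert.       */
    double wall_depth_at(int x, int x0, double z0, int x1, double z1) {
        if (x1 == x0) return z0;                    /* degenerate vertical span  */
        double t    = (double)(x - x0) / (double)(x1 - x0);
        double invz = (1.0 - t) / z0 + t / z1;
        return 1.0 / invz;
    }
    ```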

  • @Jinnsy
    @Jinnsy Před 9 měsíci

    This helped me dearly
    Thanks!

  • @MuhammadHamza-lf5vu
    @MuhammadHamza-lf5vu Před 5 lety

    Literally Amazing.. :)

  • @seacook
    @seacook Před 3 lety

    Great explanation! thanks

  • @sashakoshka
    @sashakoshka Před 3 lety

    this is really cool

  • @jimmea6317
    @jimmea6317 Před 3 lety +2

    This was almost perfectly explained in a way that made sense but you should consider including z-buffering techniques to clear away any ambiguity

    • @SerBallister
      @SerBallister Před 2 lety +1

      One of the side effects of BSP rendering is that it can render the scene from front-to-back (or back-to-front), so no need for a ZBuffer.

    • @OpenGL4ever
      @OpenGL4ever Před 4 měsíci

      In the old days, Z-buffers had high memory requirements. This was still a big problem back when Wolfenstein 3d and DOOM were released. There were even a few of the first 2d/3d accelerator cards that didn't have a Z-buffer, precisely because you would have needed extra VRAM for that.

  • @tearlach47
    @tearlach47 Před 8 lety +4

    Thank you.

  • @SevenDeMagnus
    @SevenDeMagnus Před 4 lety +2

    Cool, Doom's technique. Wolfenstein was raycasting.

  • @PeteRoyJackson
    @PeteRoyJackson Před 3 lety

    Excellent!

  • @intron9
    @intron9 Před 2 lety +2

    (Wrong comment, correction below.) The part at 5:55 is not actually applicable; not even Doom does that, because often, on the same vertical line, there is more than one wall. For example, in a staircase.

    • @mastermax7777
      @mastermax7777 Před rokem

      Yeah... I don't understand that part. And also, when he explained the raycasting part at 2:35, "Must repeat for every x coordinate on the screen": yeah, that's true, but what about every y coordinate? Because usually games have objects other than walls, like lamps, ceilings, floors, etc...

    • @intron9
      @intron9 Před rokem

      @@mastermax7777 In that part he's talking about raycasting games like Wolfenstein 3D, which doesn't have vertically stacked things, so it's not necessary.

    • @intron9
      @intron9 Před rokem

      @@mastermax7777 It turns out that I was wrong in my original comment. Doom does do this, but there is an important detail: the walls that, when rendered, mark the end of rendering for that vertical part of the screen are only the one-sided walls. The edges of the stairs on a staircase would be two-sided walls.

  • @puppergump4117
    @puppergump4117 Před 2 lety +1

    He's good at leading people on

  • @aussieexpat
    @aussieexpat Před 4 lety +2

    Excellent. This along with one other lecture made the process clear... however... floor and ceiling heights also occlude farther-away walls and closer floors and ceilings... it's unclear to me how this was also done in Doom.

    • @crungushakooter
      @crungushakooter Před 4 lety +1

      Using this rendering method, the color value of each pixel in any column of pixels is determined as a group, split into floor, wall, and ceiling. Where these values actually start and stop within that column is defined by a depth calculation on the assigned floor/ceiling heights.
      In Doom, BSP splits were not limited to walls only, but could also be defined by a section with a different floor/ceiling height. I'm fairly certain that sloped floors/ceilings were not supported in the original Doom engine, just flat values. For the purpose of demonstration, consider such sections to also have invisible walls.
      So, when the renderer hits one of these invisible walls, it knows how high up the column to draw the floor, and how far down the column to draw the ceiling. Boom, you have those pixels rendered; don't touch them again. You now have a column that renders only the closest floor and closest ceiling in that direction, and you simply fill the remaining pixels in that column by repeating the BSP traversal, considering only the still-unfilled pixels, again and again, until eventually you do hit a wall, or the floor and ceiling heights meet each other somewhere in the middle.
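
      A heavily simplified sketch of that per-column bookkeeping in C (the names and layout are my own, not the actual Doom source): each screen column keeps a still-open vertical range, slices drawn front-to-back shrink it, and the column is finished once the range closes.

      ```c
      #define SCREEN_W 320
      #define SCREEN_H 200

      void put_pixel(int x, int y, int color);   /* assumed to exist */

      /* Per-column open range: only pixels y with clip_top[x] <= y <= clip_bottom[x]
         may still be drawn in column x.  Geometry is rendered front-to-back.        */
      static int clip_top[SCREEN_W], clip_bottom[SCREEN_W];

      void reset_clips(void) {
          for (int x = 0; x < SCREEN_W; x++) { clip_top[x] = 0; clip_bottom[x] = SCREEN_H - 1; }
      }

      /* Fill whatever part of [y0, y1] is still open in column x. */
      static void fill_open(int x, int y0, int y1, int color) {
          if (y0 < clip_top[x])    y0 = clip_top[x];
          if (y1 > clip_bottom[x]) y1 = clip_bottom[x];
          for (int y = y0; y <= y1; y++) put_pixel(x, y, color);
      }

      /* A solid one-sided wall: draw it and close the column for good. */
      void draw_solid_wall(int x, int y0, int y1, int color) {
          fill_open(x, y0, y1, color);
          clip_top[x] = SCREEN_H; clip_bottom[x] = -1;
      }

      /* A height step ("invisible wall"): draw the visible ceiling and floor bands
         and only shrink the open range, so farther geometry can still show through. */
      void draw_height_step(int x, int ceil_y, int floor_y, int ceil_col, int floor_col) {
          fill_open(x, clip_top[x], ceil_y, ceil_col);
          fill_open(x, floor_y, clip_bottom[x], floor_col);
          if (ceil_y  + 1 > clip_top[x])    clip_top[x]    = ceil_y + 1;
          if (floor_y - 1 < clip_bottom[x]) clip_bottom[x] = floor_y - 1;
      }
      ```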

  • @frogge3978
    @frogge3978 Před rokem

    Thank you!

  • @cesteres
    @cesteres Před 5 lety

    I love the 321 resolution

  • @nicholasoliveira6363
    @nicholasoliveira6363 Před 4 lety

    Well, I know the Quake games use BSP (Q1, Q2, Q3), but the game is hella fast, so how does this algorithm always get the right tree to render while strafe jumping?
    There are some commands that limit the tree, like skydistance; the default is 3000.
    If you put in a low value it just cuts the leaves and prevents them from being rendered; if you put in a high value it tries to use all the leaves in the tree.
    PS: I'm a Quake 2 player.

  • @BudgiePanic
    @BudgiePanic Před rokem +1

    So you always have to traverse the entire tree each frame?

  • @YXalr
    @YXalr Před 5 lety +8

    If the painters algorithm knows the distance of the walls, why not just reverse the draw order and not draw a wall where something's already been drawn?

    • @YXalr
      @YXalr Před 5 lety +1

      Doesn't the original painter's algorithm have to do the same kind of sorting to determine the distances of the walls?

    • @sebastianmestre8971
      @sebastianmestre8971 Před 5 lety +12

      The thing with the painter's algorithm is that you have to re-sort the walls on every frame,
      whereas the BSP is only built once, and afterwards you only traverse it.

    • @sebastianmestre8971
      @sebastianmestre8971 Před 5 lety +1

      Since the wall order probably doesn't change much between frames, you can probably get significant gains if you use an adaptive sorting algorithm in your implementation of the painter's algorithm.

    • @KuraIthys
      @KuraIthys Před 5 lety +3

      It's because your drawing algorithm typically only understands how to draw very simple shapes, like say a rectangle or a triangle.
      Imagine you have a small wall in front of a bigger one, such that the smaller wall only blocks a small chunk of the wall behind it.
      If you draw from back to front, you guarantee that you'll draw the entire scene because such a small wall will be drawn in front of the larger wall behind it, but only block part of it.
      If you reverse the logic, then you need to keep track of what you've already drawn.
      If you don't then the moment you draw the larger wall that's behind the smaller one, it will erase the smaller one, and thus everything will look wrong.
      However, drawing front to back is a common tactic as well, it just requires a bunch of extra stuff to happen.
      The most common method of doing front to back drawing is the Z-buffer.
      This is a depth map of the scene, where for each pixel you keep track of what the closest thing was that was drawn to that pixel.
      It starts out with each pixel set to infinity, then you sort the objects/walls/polygons you want to draw front to back.
      You then draw each item, and as you draw each pixel, you first check the z-buffer. If that pixel is nearer than what is already in the z-buffer, you draw the pixel, otherwise you don't. If you drew a new pixel, you record its depth into the z-buffer.
      This requires being able to determine the depth of every pixel as you draw it, and requires extra memory and some extra steps (the Z-buffer itself, and the comparisons you have to do against it every time you draw something.)
      It also doesn't work on transparent surfaces. (if you try to draw a transparent object, the Z-buffer will mean all the objects behind the transparent one will be invisible.)
      The most common solution for transparency with a z-buffer is to draw all the non-transparent surfaces front-to-back first, then draw the transparent surfaces back to front over the top.
      Anyway, so your idea isn't wrong as such, but it's got a bunch of extra complications and requirements. (Even so, z-buffering is now the norm, and the painter's algorithm isn't used that much.)
      Really, the painter's algorithm is a fairly terrible method for performance. You're sorting stuff multiple times, drawing to a single pixel many times...
      On really old computers fill rate (the number of pixels that can be drawn in a second) was extremely limited, so anything that wasted it was exceptionally bad. - modern systems can afford to be more wasteful, but it still remains a wasteful way of doing it.
      The one virtue of the painter's algorithm is simplicity.
      All you have to be able to do to correctly draw a scene with the painter's algorithm is to sort whatever's visible by depth, then draw it back to front, and the result will look correct.
      This is terribly inefficient, but it's vastly simpler to implement than nearly any other solution you can think of.
      It also takes less memory than most other solutions.
      Z-buffering requires a z-buffer the same size as your whole screen.
      A BSP tree requires a decent chunk of memory to store the tree, and so on...
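
      The per-pixel test described above is short in code (a sketch; the buffer names and sizes are illustrative):

      ```c
      #include <float.h>

      #define W 320
      #define H 200

      static float    zbuf[W * H];       /* depth of the nearest thing drawn so far */
      static unsigned framebuf[W * H];

      void clear_zbuffer(void) {
          for (int i = 0; i < W * H; i++) zbuf[i] = FLT_MAX;  /* "infinitely far" */
      }

      /* Plot a pixel only if it is nearer than whatever is already there. */
      void plot(int x, int y, float depth, unsigned color) {
          int i = y * W + x;
          if (depth < zbuf[i]) {
              zbuf[i]     = depth;
              framebuf[i] = color;
          }
      }
      ```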

    • @TheFlynCow
      @TheFlynCow Před 5 lety

      @@KuraIthys I guess the painter's algorithm can't handle intersecting faces as well? Also, in my opinion depth testing is at least as easy to implement as the painter's algorithm.

  • @BinaryReader
    @BinaryReader Před 3 lety +3

    The scan-line discard algorithm wouldn't translate to 3D so well.

  • @putrescentvermin
    @putrescentvermin Před rokem

    I came here because I was curious as to what the bsp extension meant for half life 1 maps. I haven't taken computer graphics yet but with my little knowledge of data structures and algorithms, this interests me.

    • @watchm4ker
      @watchm4ker Před rokem

      HL1 uses a Quake engine derivative, which likewise uses precalculated trees for polygon culling. It's a bit trickier, and involves tracing paths through common surfaces - or 'portals' - but the principle is similar: tracing through tree branches to map out what parts of a level *could* be visible from the viewpoint, and which ones can't.

  • @Julio_Rats
    @Julio_Rats Před 2 lety +1

    I just don't understand how the wall size is calculated, because it's exactly in this part that ray casting is not optimized, but I didn't understand how the sizes would be calculated using BSP

    • @themeangene
      @themeangene Před rokem

      Once you know the distance, you can use very basic trigonometry to find the vertical wall size, for example.
      Ray casting requires going pixel by pixel.
      Imagine you had a screen with a width of 1024 pixels and a wall taking up the entire screen, with another wall hidden behind it. Raycasting would have to be performed 1024 times unless the camera was fixed. A tree traversal is never going to come near that many cycles.
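
      The trigonometry step itself is tiny (a sketch with made-up names; with a simple perspective projection, on-screen size scales with 1/distance):

      ```c
      /* Projected on-screen height (in pixels) of a wall with real height wall_h
         at perpendicular distance dist from the camera.  focal is the projection
         scale, e.g. half the screen width divided by tan(fov / 2).               */
      int projected_wall_height(double wall_h, double dist, double focal) {
          if (dist < 0.01) dist = 0.01;   /* avoid blowing up right at the camera */
          return (int)(wall_h * focal / dist);
      }
      ```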

  • @dickheadrecs
    @dickheadrecs Před rokem

    this video is so old that youtube needed to BSP the render buffer to play it back then

  • @TonyBraun
    @TonyBraun Před 4 lety

    All I can say is......wow.

  • @samowarow
    @samowarow Před 3 lety +3

    Are BSPs also good for true 3d where you have a world varying across z-axis?

    • @caspianahlberg9027
      @caspianahlberg9027 Před 3 lety +1

      Yes. The Quake engine used BSPs to predetermine polygon visibility.

    • @watchm4ker
      @watchm4ker Před rokem +2

      The problem with BSP trees is twofold: first, how big they can get, since a lot of this is based on fairly simple level geometry. Some modern character models have more triangles than an average Quake level. Second, all of this is precalculated, so anything dynamic or moving can't use this approach.

    • @OpenGL4ever
      @OpenGL4ever Před 4 měsíci

      @@watchm4ker And these are exactly the two main reasons why BSP algorithms are no longer used for modern 3D games today.
      The BSP algorithm was okay when the game world with all objects still consisted of a few polygons. So that would have been the period from around 1993 to 2005.

    • @watchm4ker
      @watchm4ker Před 4 měsíci

      @@OpenGL4ever The key point is when 3D hardware came into the picture. With a true Z buffer, you can cull a lot from the rendering pipeline at runtime, as an example.

    • @OpenGL4ever
      @OpenGL4ever Před 4 měsíci

      @@watchm4ker I know, but especially with the first 3D hardware for consumers, not every 3D hardware had a Z-buffer. The reason was that the Z-buffer requires a lot of memory, which was still expensive at the time. That's why Z-buffer support was omitted from a few of the first 3D accelerators. Voodoo Graphics is one of the honorable exceptions, it had one.
      Also, I'm just wondering what happened to my first comment, it's no longer there.

  • @solsticeprojekt1937
    @solsticeprojekt1937 Před 4 lety +1

    Haven't found a single explanation of BSP that allowed me to reproduce it in my head. This is no different..

    • @Ganibbasald
      @Ganibbasald Před 3 lety

      The easiest way to understand it is to think of the goal we are trying to accomplish. We want to fill up the player's screen with everything they can see, without drawing extra stuff (stuff behind walls). A BSP tree sounds complicated to us, but to a computer it is very easy to understand.
      Let's say you are in your bedroom. Your bedroom has a door to the hallway, and a door to a bathroom. The only door to the bathroom is from your bedroom. That means if we are standing in your bedroom, the only two nodes we can connect to are the hallway and the bathroom. If we are in the living room, there is no chance at needing to render the bathroom. That is the purpose of the tree. It's an elegant way to tell the computer based on which node we are standing in, which nodes could possibly need to also be drawn. In practice it's slightly more complicated because a single room can be split multiple times by the BSP algorithm depending on the size, but that's the simplest way to look at it.
      When Doom was released, the requirements were a 386 processor with 4 MB of RAM. It sounds silly compared to today's hardware, but back then a PC simply couldn't render true 3D the way we do now. There were no 3D accelerator cards, and processing power was at a premium. This was a way to draw everything you were looking at quickly and efficiently. Since the tree and the calculations are precomputed when you create the level, much of the legwork is already done.

    • @SerBallister
      @SerBallister Před 2 lety

      It's mostly useful in "hidden surface removal" - You can quickly discard giant sub-trees of your scene (e.g. off camera geometry). Like someone once said, the fastest polygons are those you don't draw.

    • @solsticeprojekt1937
      @solsticeprojekt1937 Před 2 lety

      @@SerBallister Reread my post. You're not explaining it, you're just stating what it does. Massive difference.

    • @SerBallister
      @SerBallister Před 2 lety +1

      @@solsticeprojekt1937 Sorry, I misread. A BSP tree arranges geometry in a spatially subdivided tree, where each branch node is a splitting plane and the leaf nodes contain diced-up sections of your world. It works by walking recursively down the tree in a specific manner (typically based on which side of a plane a reference point is on), such that it hits the leaf nodes in a special order (such as ordering nodes by distance to the reference point). So you can do things like depth-sort the scene very quickly; the tree never changes during rendering, so it's far faster than sorting a list. Because it's hierarchical you can also reject big chunks of the tree cheaply (most spatial hierarchies are good at this, see quad trees, etc.), so that's another bonus. There is some explanation needed for how to construct trees too, which is to keep halving sections of your world using some chosen plane each time. Some trees are more efficient than others, so you need a clever plane-choosing algorithm too.
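
      As a rough sketch of that construction step in C (2D wall segments and invented names; a naive splitter choice, with the straddling-segment split left out for brevity):

      ```c
      #include <stdlib.h>

      typedef struct { double x0, y0, x1, y1; } Seg;     /* a wall as a 2D line segment */

      typedef struct Node {
          Seg splitter;
          struct Node *front, *back;
          /* a full build also keeps the segs lying on the splitter here */
      } Node;

      /* Signed side of point (px,py) relative to the splitter's infinite line. */
      static double side(const Seg *s, double px, double py) {
          return (s->x1 - s->x0) * (py - s->y0) - (s->y1 - s->y0) * (px - s->x0);
      }

      /* Very naive build: take the first seg as the splitter, partition the rest.
         A real builder scores candidate splitters (tree balance vs. number of
         splits) and cuts any seg that straddles the line into two pieces.        */
      Node *build_bsp(Seg *segs, int n) {
          if (n == 0) return NULL;
          Node *node = malloc(sizeof *node);
          node->splitter = segs[0];

          Seg *front = malloc(n * sizeof *front);
          Seg *back  = malloc(n * sizeof *back);
          int nf = 0, nb = 0;
          for (int i = 1; i < n; i++) {
              double s0 = side(&segs[0], segs[i].x0, segs[i].y0);
              double s1 = side(&segs[0], segs[i].x1, segs[i].y1);
              if (s0 >= 0 && s1 >= 0)      front[nf++] = segs[i];
              else if (s0 <= 0 && s1 <= 0) back[nb++]  = segs[i];
              else {
                  /* straddles the splitter: split at the intersection point
                     (omitted here to keep the sketch short)                 */
              }
          }
          node->front = build_bsp(front, nf);
          node->back  = build_bsp(back, nb);
          free(front); free(back);
          return node;
      }
      ```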

    • @solsticeprojekt1937
      @solsticeprojekt1937 Před 2 lety

      @@SerBallister Man, that really wasn't necessary. You're telling me all I can gather anywhere else. I wrote "I can't get it to work out in my head" and just repeating the same thing everyone else says, while still omitting important details, isn't helpful. I appreciate the effort, but you're wasting your time repeating past efforts.

  • @Brain_quench
    @Brain_quench Před 3 lety

    Does this mean that the room I have just left "disappears" until I re-enter? Then how come in Halo the dead bodies sometimes remain in that room after I have left and returned? Please clarify. Thanks.

    • @igorthelight
      @igorthelight Před 3 lety +2

      Rooms just won't be rendered anymore.
      All information (about dead bodies' positions and poses) is still intact.

    • @NERDSAUCE
      @NERDSAUCE Před 3 lety +1

      visual rendering and physics calculations are separate. many 3D objects in games have a separate, simplified, non-rendered mesh used for physics / collision / shot registration. it's a similar concept, the systems are separate.

    • @igorthelight
      @igorthelight Před 3 lety

      @@NERDSAUCE True.

    • @clonkex
      @clonkex Před 2 lety +1

      Yes absolutely. An efficient rendering engine will only render what's actually visible and not anything that's behind the player or outside their view. However the information about what's in the room (entities and their states) is not deleted. In fact AI might still be processing in the room even if it's not being drawn to the player.

  • @GeorgeTsiros
    @GeorgeTsiros Před 4 měsíci

    this is quite late, but, as long as the viewer's position stays within the same leaf-region, is there some calculation that can be skipped? Some invariant? Perhaps which walls are to be drawn?

  • @ohnocoder
    @ohnocoder Před 2 lety

    Best!

  • @dmitrykim3096
    @dmitrykim3096 Před 4 lety

    Horizontal depth buffer

  • @TheBoringEdward
    @TheBoringEdward Před 5 lety +1

    4:17 I understand how the computer is locating the player, but I do not understand how it's projecting the wall into the shape of the view.

    • @Torqueyeel
      @Torqueyeel Před 4 lety +1

      If the wall is 50 feet long and 10 feet tall all the way across, you calculate how far the back corner is. Depending on the distance, you multiply the 10 foot height by a certain ratio and you get the total height. You can then take those four coordinates and render them. I hope I explained this well enough.

    • @TheBoringEdward
      @TheBoringEdward Před 4 lety

      @@Torqueyeel, thanks for the info. I'm pretty new to this.

    • @TheBoringEdward
      @TheBoringEdward Před 4 lety

      Thanks!

    • @eyezaryvideos
      @eyezaryvideos Před 2 lety +1

      1) Translate XYZ from world 3D coords to player view 3D coords (subtract position).
      2) Rotate XYZ by player view direction (making final Z as depth).
      3) Multiply X and Y by the screen fov coefficient and divide by Z distance.
      4) And add screen centering. For both X and Y.
      screenX = (fovCoef * X / Z) + screenCenterX
      screenY = (fovCoef * Y / Z) + screenCenterY
      Or with some optimization:
      fovDistCoef = fovCoef / Z
      screenX = (fovDistCoef * X) + screenCenterX
      screenY = (fovDistCoef * Y) + screenCenterY
      And for wall height (instead of two points):
      screenH = fovDistCoef * H
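
      Those steps, written out as a sketch in C (the function and parameter names are illustrative; yaw is the player's view angle around the vertical axis):

      ```c
      #include <math.h>

      /* Project a world-space point onto the screen following the steps above.
         Returns 0 if the point is behind the camera, 1 otherwise.               */
      int project(double wx, double wy, double wz,          /* world point       */
                  double px, double py, double pz,          /* player position   */
                  double yaw,                               /* view direction    */
                  double fovCoef, double screenCenterX, double screenCenterY,
                  double *screenX, double *screenY) {
          /* 1) translate into player-relative coordinates */
          double x = wx - px, y = wy - py, z = wz - pz;

          /* 2) rotate around the vertical axis so the final Z becomes view depth */
          double rx =  x * cos(yaw) + z * sin(yaw);
          double rz = -x * sin(yaw) + z * cos(yaw);
          if (rz <= 0.0) return 0;

          /* 3) + 4) scale by the fov coefficient, divide by depth, re-centre */
          double fovDistCoef = fovCoef / rz;
          *screenX = fovDistCoef * rx + screenCenterX;
          *screenY = fovDistCoef * y  + screenCenterY;
          return 1;
      }
      ```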

    • @OpenGL4ever
      @OpenGL4ever Před 4 měsíci +1

      All walls here are vertical. There are no slants.
      Therefore, you only need the distance to the wall for each line drawn vertically on the screen.
      And then you draw n steps up and n steps down from the center point, depending on how far away the wall is, i.e. how high you want it to appear.
      You do this for every vertical line from right to left.

  • @junkokonno
    @junkokonno Před 3 lety

    aight thanks

  • @pajeetsingh
    @pajeetsingh Před 3 lety

    Don't DirectX and OpenGL already take care of this? As an application developer, do you ever need to implement this?

    • @Hopsonn
      @Hopsonn Před 2 lety +2

      Yes, modern rendering APIs store a depth buffer to determine what is closer to the camera - however, the objects that are hidden would still be processed for drawing, and so an optimised engine would still need to determine what is close before sending it to the draw buffers.
      The BSP technique in particular is seldom used these days; it was more common in the 90s, when you didn't have the luxury of a GPU to handle these things, so people had to come up with clever techniques to "fake" 3D. For example, Doom used the BSP method and Duke Nukem 3D used a "portal rendering" technique, which is similar but a lot more flexible, e.g. it allows floors to be drawn on top of each other, whereas BSP does not.

    • @SerBallister
      @SerBallister Před 2 lety +1

      @@Hopsonn For something like CPU-side 3D collision detection, BSPs/k-d trees are still useful, even though most of those algorithms are now obsolete for rendering.

  • @EvilStreaks
    @EvilStreaks Před 2 lety

    That "scan line" that keeps track of the ranges of screen that have been drawn... if you use the painter's algorithm but render closest walls first, and use that "scan line"... you can omit a lot of obscured geometry rendering. You can completely ignore walls that are within an obscured screen region, and partially draw others. That's how I'd do it since there's no way I'll have the brains to implement binary space partitioning in my lifetime. Not that I'll ever need my own software renderer.

    • @eyezaryvideos
      @eyezaryvideos Před 2 lety +1

      You forgot about skipping entire branches if done by tree. With just "scan line" you will be checking all objects, even many of those in entire regions that already should not be seen.

    • @themeangene
      @themeangene Před rokem

      What you're describing is called a reverse painter's algorithm. That works for fixed camera and very predictable on rails games but its efficiency declines rapidly as the camera takes on more degrees of freedom.
      The computer needs to send a ray to every single object at the start. Before sending the draw instructions it needs to know the distance to each wall.
      It then needs to know how wide the wall is (pretty simple - you can hard code it) so you can make perception adjustments.
      Then once that wall is drawn (or stored), you still have the rest of the screen real estate. You are reducing the amount of work but it's barely more efficient

  • @mtalhakhalid1679
    @mtalhakhalid1679 Před rokem

    BSP rendering is like DFS: it goes to the end of each branch of nodes.

  • @AgnaktoreX
    @AgnaktoreX Před rokem

    How does this algorithm make sure that polygons are rendered front to back? 3:30 The narrator does not say a word about this :( But I got a good understanding of what is roughly going on :)

  • @arsnakehert
    @arsnakehert Před 3 lety +1

    0:53 Metal Gear Solid 1 vibes

  • @TheSkyGamez
    @TheSkyGamez Před 4 lety

    holy heck

  • @iindii
    @iindii Před 5 lety

    👍

  • @lexibigcheese
    @lexibigcheese Před 3 lety

    doooooom

  • @cesteres
    @cesteres Před 5 lety +1

    And then there is depth buffering

    • @Cryssball
      @Cryssball Před 5 lety

      in this case depth buffering is not used

  • @DhavalChaudhary
    @DhavalChaudhary Před 9 lety

    That was so much fun...!!!

  • @ImaginaryNumb3r
    @ImaginaryNumb3r Před 3 lety

    Too bad there was never a follow-up

  • @chinopuertorico
    @chinopuertorico Před 3 měsíci

    ❤ this video is based on real life. Old technology hologram. We are the game. 1's and 0's. So. Make your dreams come true. No wrong or right. Just on and off.

  • @johneygd
    @johneygd Před 8 měsíci

    What I found interesting is that both PC versions of Doom & Wolf3D use raycasting, while the mimicked SNES versions of Doom & Wolf3D just use a BSP engine. The BSP engine is similar to raycasting except that everything is already pre-calculated beforehand and only the visible parts of the screen are calculated, which saves RAM and processing power BUT requires more ROM data.
    Sadly though, even the Super FX 2 chip for SNES Doom is just not fast enough, and lots and lots of concessions had to be made to make Doom barely run on the SNES, but it was a nice attempt nonetheless.

  • @DarkRedman31
    @DarkRedman31 Před 3 lety +4

    In my opinion your explanation is bad. I don't see how, with classic raycasting, you can overdraw, since you only draw what the camera sees, not what's behind. So what's the point of using binary space partitions then?

  • @lechu1113
    @lechu1113 Před 5 lety

    use hint and skip dude...

  • @yaroslavpanych2067
    @yaroslavpanych2067 Před 5 lety +2

    So despite the title and numerous repetitions in the video, it is not 3D rendering at all. Well, it is rendering of 3D objects into 1D space, a horizontal line. So-called "pseudo 3D planes". It's been used in cheap console hardware to imitate perspective and rip people off.

    • @cesteres
      @cesteres Před 5 lety

      Even cell phones have advanced hw acceleration now so these techniques are history.

    • @clonkex
      @clonkex Před 2 lety

      It depends how you define 3D. It creates the illusion of viewing a realistic three-dimensional room, the same as modern rendering techniques that rely on powerful GPUs. Just because it's limited (you can't have rooms over rooms or arbitrary polygons) doesn't make it not 3D.