Prey vs Predators - preparing bigger simulation
- uploaded May 1, 2024
- Optimizing my prey vs predators project for future bigger simulations.
00:00 Introduction
02:00 Data optimization
04:00 Neural Network optimization
05:40 Space partitioning
06:30 Multithreading
- Science & Technology
I want to suggest something: adding objects and obstacles into the "arena" so that the "agents" can evolve to use them to their advantage, similar to how animals have evolved to use certain land features for cover or nesting
yessss pls
That's indeed interesting, but it also sounds like another type of object that needs recognizing. I have no idea whether the added difficulty would scale the performance cost linearly; in principle you could probably get away with less impact after changing the architecture the first time, but I'm not sure ^^
@@EliasMheart As far as I understand, it's using a neural network for vision. This means it would not be much of a problem to implement physical objects in code, but the network would have to learn to recognize them
@@francobernaldodequiros9509 yep just make it a boundary they bonk off of and see what they do with that
As a partially colorblind person, I find the new colors harder to differentiate than the old ones. I would recommend that you use light orange and dark blue, as they are the most easily distinguished colors across all forms of color blindness. I'm still glad that you bothered to think about colorblind people in the first place, though :)
As a person who can see very well, I also found the previous colors easier to see. Maybe a different design for the two groups would solve this.
There are free online utilities which transform a regular photo into how it would look to someone with color blindness. A quick search should find them.
Hey Jack, my brain always registers green and orange as the same, what do you think that means? Haha, I'm not colorblind, but I often say green when I look at orange and vice versa. Maybe you have a wild perspective that I don't see. But I also agree, for some reason the colors were off-putting for me compared to the first video.
@@Al-tg7ok I also see green and orange as similar sometimes, so you might have protanopia/protanomaly (difficulty perceiving red light), or deuteranopia/deuteranomaly (difficulty perceiving green light). These two forms of color deficiency, especially protanopia, are surprisingly common among men specifically, because the biology of the eye is slightly different between genders. I myself have mild protanomaly, which causes me to mix up lots of colors. People seem to think that when someone is red-green color deficient they simply see all shades of red and green as identical, but in reality having a color deficiency is much more complicated and affects many more colors than people seem to realize, and it can vary even between people who have the same type of deficiency. Another big factor of color deficiency that people often don't realize is that anyone, even someone who is entirely colorblind, can differentiate between dark and light colors. Even though I often mix up red and green, I can easily differentiate between a light green and a dark red.
I'm also partially red/green colorblind and I agree, the previous video's colors also had a lot of contrast in brightness in comparison with these colors which are a similar brightness.
I really really liked the visualization of the concepts you discussed in this video, these videos are amazing.
This video is awesome! The explanatory animations are icing on the cake. What software did you use to make them?
Thank you very much! I wrote a little lib myself to do these animations because I couldn't find a nice software for this
@@PezzzasWork That's so cool.
@@PezzzasWork Amazing! Do you plan on releasing or open-sourcing it?
@@PezzzasWork Plz plz upload the source code. Even the last video's code is enough.
@@valet_noir I think I will but I need to improve the setup and interface and it’s currently quite hard to use
*I must consume more pezza videos*
Same
Yes
This
As ml engeener, I don't think that for such small networks topology is crucial for interesting behaviour. Even with fixed topology (but with mutating weights) you can get impressive results in supervised or reinforcement learning tasks (see hide and seek multi-agent project from OpenAI, not an evolution tho). But! With fixed topology you can store weights simply as matrices and forward pass as matrix multiply. With synchronized step for all agents, you can even step on all env at once (concatenate weights from all agents, matrix multiply on gpu). Usually with such setup, MASSIVE simulations are possible.
Does mutating weights more or less simulate topology changes?
@@WsprWndrr Yeah, but the evolutionary community usually also advocates for network plasticity, like in the NEAT algorithm, where you can add neurons, remove them, add connections, that sort of thing. In conventional deep learning the topology is usually fixed, since that makes a lot of optimizations possible (GPUs, autodiff frameworks like PyTorch, JAX, etc.)
an ML engineer that does not know how to spell "engineer" 🤨
That said, heck yeah, using a matrix for the connections and using a compute shader would be a huge win.
@@paulpach Well, I'm not a native speaker and I don't even work in English on a daily basis (I know enough to read papers and docs, not to write without typos), lol, pathetic
I would really like to use matrices, but I don't really see how to do that, considering that there is no notion of a layer in the approach I am using
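For what it's worth, the batched-matrix idea from this thread can be sketched in NumPy, assuming every agent shares a fixed layered topology (which, as noted above, the author's graph-based networks don't have; all names and sizes here are invented). One einsum advances every agent at once instead of looping per agent:

```python
import numpy as np

def batched_forward(inputs, weights, biases):
    """Advance every agent's network in a single call.

    inputs:  (n_agents, n_in)        one observation row per agent
    weights: (n_agents, n_in, n_out) one weight matrix per agent
    biases:  (n_agents, n_out)
    """
    # One einsum replaces n_agents separate matrix multiplies.
    z = np.einsum("ai,aio->ao", inputs, weights) + biases
    return np.tanh(z)  # shared activation across agents

# Tiny demo: 3 agents, 2 inputs, 2 outputs.
rng = np.random.default_rng(0)
obs = rng.normal(size=(3, 2))
w = rng.normal(size=(3, 2, 2))
b = np.zeros((3, 2))
out = batched_forward(obs, w, b)
print(out.shape)  # (3, 2)
```

On a GPU (CuPy, JAX, PyTorch) the same einsum runs as one kernel launch, which is where the "MASSIVE simulations" claim comes from.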
YESSSS WOOOO BEST CHANNEL ON YouTube
Really cool to see such a concrete example of optimization done at the right time -- when it's needed. You could've easily just hand waved this in the next video, but it's awesome that you took the time to make this interstitial video 🙌.
what a tease, shows us all that work and then doesn't even run it lol.
anyway can't wait to see what you do next with this project.
Yes, I am sorry about that. Optimizing took me quite some time, and I don't know how much more I will need to run and tweak the simulation, so I preferred to do it this way
Love this stuff, helps me with my coding.
Many people will take the animated explanations for granted, but they are amazing. They do a really good job of helping to explain the concepts
So glad you're continuing this project
It seems like the optimization of the neural networks was needed due to their sparse nature. But since today's GPUs are heavily optimized to perform matrix multiplications, I wonder whether it would be faster to make the network fully connected, with the unwanted connections' weights set to 0 and frozen during training, so that the weights of each layer become a 2D array and the multiplication can be done on the GPU. Then again, I don't think the neural network is the bottleneck here anyway.
I know that NVIDIA's tensor cores are optimised for sparse matrices
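A minimal sketch of the masked dense-weights idea from the comment above (mask and sizes invented; in a real trainer you would also zero the masked gradients so those weights stay frozen):

```python
import numpy as np

# Hypothetical sparse topology encoded as a dense mask:
# 1 where a connection exists, 0 where it is absent/frozen.
mask = np.array([[1, 0, 1],
                 [0, 1, 0]], dtype=float)          # (n_in=2, n_out=3)
weights = np.random.default_rng(1).normal(size=mask.shape)

def dense_forward(x, w, m):
    # Multiplying by the mask keeps absent connections at exactly 0,
    # so a plain dense matmul reproduces the sparse graph's result.
    return x @ (w * m)

x = np.array([[0.5, -1.0]])
y = dense_forward(x, weights, mask)
```

The trade-off: you spend FLOPs on zeros, but the dense matmul maps directly onto GPU/BLAS kernels, which is usually a net win for small-to-medium networks.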
You can optimize your multithreading even further by taking into account a "complexity rating" while queueing up tasks: Long tasks being executed at the end would currently block the frame until the last long task finishes. If you can rate how long tasks will take, assigning the longer tasks to workers first will improve consistency and speed of frames. You can do this either by hand "guessing", or dynamically using some sort of profiler and then assigning the tasks that took long on one frame a higher priority on the next.
I think the creatures are evenly distributed enough that each thread will execute in about the same time. (You'd need a large population in one grid cell to get a long pole, which is unlikely.)
@@TheRainHarvester With the last simulations, there were tons of units bunched up in the same areas, so it might be necessary now
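The "complexity rating" idea above is essentially longest-processing-time-first scheduling. A toy sketch with invented task costs, comparing it against assigning tasks in arrival order:

```python
import heapq

def schedule_in_order(task_costs, n_workers):
    """Baseline: assign each task, in arrival order, to the
    currently least-loaded worker. Returns the makespan."""
    loads = [0] * n_workers          # min-heap of per-worker load
    heapq.heapify(loads)
    for cost in task_costs:
        heapq.heappush(loads, heapq.heappop(loads) + cost)
    return max(loads)

def schedule_lpt(task_costs, n_workers):
    """Longest-processing-time-first: sort costs descending, then
    greedily assign each to the least-loaded worker."""
    return schedule_in_order(sorted(task_costs, reverse=True), n_workers)

tasks = [1, 2, 3, 4, 5, 6, 7, 8]     # invented per-task cost estimates
print(schedule_in_order(tasks, 2))   # 20: a long task lands at the end
print(schedule_lpt(tasks, 2))        # 18: matches the 36/2 lower bound
```

In practice the costs come from profiling the previous frame, as the comment suggests, rather than being known up front.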
love these predator v prey vids
I know nothing about coding or programming, but your explanations are very clear and easy to understand!
Also big props for the production quality. Those graphics are really nice and help a lot in conveying what you are doing.
Keep it up!
I can’t even imagine how much work went into animating this video. Awesome job! Your videos are each masterpieces.
The visuals of the data structures is gorgeous!
Wonderful! Hope to see more of this! There are so many possibilities. Good luck with the project.
Inspiring! Optimization, when it works, is probably the most satisfying part of programming
I really liked the visualizations of the concepts in this video. Keep up the amazing work!
Wow, the quality of explanations and the video itself is insane!
this stuff is amazing! I cannot believe I missed the upload. I love that you're making the simulation larger.
I loved the explanations of the optimizations. So informative and concise! Your voice is very soothing. I wish you had videos simply explaining different algorithms, computer science students around the world would eat that up with the quality of these animations and the production quality.
Hi Pezzza,
Your first video was really great, it got me motivated to play a bit with evolving agents too. I did notice the exact same problem you have here: It gets slow with a lot of agents, and the majority of the time is spent on calculating the networks.
The solution that worked for me was to do all network calculations on the GPU; this allowed 60k+ agents in real time (depending on net complexity, of course). It adds complication with the memory management, but I would assume it is the only realistic way to get a high agent count in real time; otherwise just the number of floating-point operations required for the networks will probably hit the limit of the CPU.
Warning: neural networks are only fast on Nvidia GPUs. AMD is slow and dumb and inferior.
@@puppergump4117 And AMD has no API to even program their GPU!! How are game devs getting their API for AMD?!
You could use fixed point numbers instead.
I can't explain how much I love optimisation, it's so satisfying.
Your channel is becoming amazing! Great French accent btw! And the animations are ON POINT! 👌
Thank you for having more colorblind friendly colors, you're like the first I've seen do this actually
cant wait to see your next vids love your content
Finally another masterpiece, colorblind friendly, Awesome
Awesome, these videos are always bangers
Very cool! Loving the work so far!
Dunno if this will help, but there's been a breakthrough in neurology at the University of Tokyo where they appear to have identified how the brain achieves self-awareness. This may be worth investigating for the development of better neural networks.
In short, most neural networks are monodirectional, which was believed to be how synapses work. But what has been found is that along the network there are clusters of bidirectional synaptic nodes that compare the inputs from multiple monodirectional inputs and create a self-contained loop with one output.
This appears to be a weighting system whereby the final output that is fed into the rest of the network is the one which didn't get cancelled out by the cross-connections within these bidirectional nodes. When you look at this from experience, this is how it is possible for you not to notice a headache when you stub your toe as it generates a stronger reaction. Or how a room can be so noisy that it's not possible to focus on a particular task or thought.
The current neuroscience equation for brain activity is r = f(s), but this discovery has them investigating an additional theory of sentience, C = g(r), where r is brain activity and C is a measurement of consciousness.
I love these videos also your voice is really nice to listen to
Really compliment for the quality of your animations!! Good job ;)
Thank you!
I love the animation! I did one of these years ago using the Qt Mouse Sprite demo as a base. One thing my kids loved in elementary school was choosing a mouse tribe to follow, so I gave the mice different colours from a small palette such that there were at least 10 mice of each colour. They had different colour ears for boy, girl, diseased (green), and old (white), then different sizes for child and adult, and different rules for interactions between all characteristics. They would watch an initial world-building, with different colours dying out or thriving, and then the game would stop and they could type in their name to choose which colour from the remaining mice they thought would win by surviving longest. Some runs lasted hours!
That's awesome, it reminded me of my own hunt for performance improvements in my projects at work)))) Event processing is sometimes an interesting task) The conveyor with parallelized stages rules!))))
Wow! You really stepped up the animation in this video. Nice work dude.
Also nice voice
Classic Mendez, and superbly explained on top of that! Bravo!
very good explanation, underrated channel.
That was really impressive optimization!
Cool can't wait to see the results
Very interesting and educative content, keep up the good work!
I don't know a lot about code optimisation but this was impressive as hell!
I can't wait to see the results
Nice insights ! Lovely smooth animations as well ;)
Thank youuuu :D
The animation in this video is really nice!
Even though we didn't get more results, this video was VERY interesting.
These were most of the optimizations I also ran through when I was doing my version of this in Rust. My NNs are just forward-pass matrix mults with a ReLU activation, though
I used to watch your ant sim vids and I loved them, but this is on another level! The video is really well made and feels really professional. Honestly, you're one of my favorite coding channels, keep it up
Sounds great
You're a total madman dude, keep it up 👌
Yes!! I like simulations like these, there you are again
Amazing!
Storing the NN as a matrix will be much more efficient, since computing the next layer becomes as simple as activation(input x weights), which is way faster than manual looping if performed on a GPU. That alone might give you a significant performance boost.
Also, if you wish to optimize the k-nearest neighbor queries, you can look into Quad-Trees
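A quadtree like the one suggested above fits in a few dozen lines. This is an illustrative Python sketch (class and method names invented): points are stored in leaf buckets, and radius queries prune any subtree whose square cannot intersect the query circle:

```python
class QuadTree:
    """Minimal point quadtree for neighbour queries (illustrative)."""

    def __init__(self, x, y, half, cap=4):
        # Node covers the square [x-half, x+half] x [y-half, y+half].
        self.x, self.y, self.half, self.cap = x, y, half, cap
        self.points, self.children = [], None

    def insert(self, px, py):
        if abs(px - self.x) > self.half or abs(py - self.y) > self.half:
            return False                       # outside this node
        if self.children is None and len(self.points) < self.cap:
            self.points.append((px, py))
            return True
        if self.children is None:
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        h = self.half / 2
        self.children = [QuadTree(self.x + dx, self.y + dy, h, self.cap)
                         for dx in (-h, h) for dy in (-h, h)]
        for p in self.points:                  # push points down a level
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query_radius(self, qx, qy, r, out=None):
        out = [] if out is None else out
        # Prune subtrees whose square cannot touch the query circle.
        if abs(qx - self.x) > self.half + r or abs(qy - self.y) > self.half + r:
            return out
        out += [p for p in self.points
                if (p[0] - qx) ** 2 + (p[1] - qy) ** 2 <= r * r]
        for c in self.children or []:
            c.query_radius(qx, qy, r, out)
        return out

qt = QuadTree(0, 0, half=10)                   # covers [-10,10] x [-10,10]
for p in [(1, 1), (2, 2), (9, 9), (-5, -5), (3, 3)]:
    qt.insert(*p)
print(sorted(qt.query_radius(0, 0, 3)))        # [(1, 1), (2, 2)]
```

For roughly uniform agent densities, a fixed grid (as shown in the video) is usually simpler and faster; the quadtree pays off when agents cluster heavily.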
this is giving me the idea of my own evolution simulator
amazing animations
Can't wait to see it. :D
Awesome, great video!
Very interesting!
Beautiful video
Very interesting video!
Wow, very interesting.
wow this is so impressive
Your tasks & threads representation at 7:11 is beautiful, what language did you use to write this little lib? I could see myself implementing something similar in CSS/JS
I love it
Does anyone know if there is a github repo on this, or if the author has stated what tools they used to create this simulation?
I've just been talking (I'm a noob) about caching and lowering detection range, and then this video appears. Great noob-friendly video. You should create a learning program and sell it!
Very interesting, most of the time people don't speak about their optimisations ^^
Didn't notice your French lmao
(jk, I love what you do, you're a genius, keep it up handsome)
Thanks, you're the handsome one ;)
Cool project! A couple of suggestions, reading a bit between the lines: it sounds like you did a lot of guesswork on the optimizations; using a profiler would reveal the actual hot paths easily. Your reasoning for a graph representation sounds odd: a matrix representation is faster and, if anything, easier to update. I'm not aware of any reason to use pointers besides saving memory on sparse graphs with a lot of nodes. Reorganizing agents from AoS to SoA to ease threading also seems odd, since your problem domain looks trivially data parallel: just split the agents into (number of threads) chunks and proceed normally. Use a separate output buffer if data races are a concern, and flip the input and output buffers for the next frame.
Next step: Cuda/OpenCL ^^
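The chunked, double-buffered scheme described above might look like this sketch (NumPy with a thread pool; sizes and names are invented):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

N, THREADS, DT = 1000, 4, 0.1
# Structure-of-arrays layout: one contiguous array per field,
# instead of an array of per-agent objects.
pos = np.zeros((N, 2))
vel = np.ones((N, 2))
pos_next = np.empty_like(pos)    # separate output buffer: no data races

def update_chunk(lo, hi):
    # Each worker reads the shared input buffer but writes only its
    # own slice of the output buffer, so no locks are needed.
    pos_next[lo:hi] = pos[lo:hi] + vel[lo:hi] * DT

bounds = np.linspace(0, N, THREADS + 1, dtype=int)
with ThreadPoolExecutor(max_workers=THREADS) as ex:
    list(ex.map(update_chunk, bounds[:-1], bounds[1:]))

pos, pos_next = pos_next, pos    # flip buffers for the next frame
```

The buffer flip is just a pointer swap, so there is no copy between frames; in C++ the same pattern works with two vectors and `std::swap`.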
I didn't understand that object-oriented storage worked like that at such a high level. I assumed it would only load what you needed, and I figured object storage was optimal, since I assumed it would be optimized in languages like JavaScript which are designed around objects.
Love this! Can you download or buy this simulation?
As others have said, the visualizations are great, and the project too : )
I was wondering, does the rendering have any reasonable impact on performance?
Thank you! The rendering is really fast compared to update time, around 2ms. But it could certainly be optimized
Really nice video.
I have a little question: since you already implemented multithreading in the simulation, why didn't you just use the GPU instead of the CPU, given that the GPU is made for parallel processing?
A few ideas:
1) Instanced rendering; ideally push data to a command buffer in parallel if you can, depending on what language you're using.
2) For spatial partitioning, you don't need a full-blown physics solution. Use a quadtree, or even simpler, fixed cells, where you hash entities to a cell id via position. Say a cell is 10 x 10 units and an entity is at x: 9.5, y: 45 -> cell 1,5. With a fixed grid size this can be mapped to a 1-dimensional array. Honestly, a multi-value hashmap is all you need for your simulation.
3) You don't need to raycast. Detect entities nearby, then use the dot product; a ratio using dot products and distance will resolve line of sight.
4) Obstacles can be navigated, once detected, by shifting the steering direction toward the obstacle's perimeter. 2D line-polygon intersection is pretty simple.
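Point 2's fixed-cell hash might look like this sketch (names invented; cells zero-indexed here, whereas the example above counts from 1):

```python
from collections import defaultdict

CELL = 10.0  # cell size in world units

def cell_id(x, y):
    # Hash a world position to an integer cell coordinate (zero-indexed).
    return (int(x // CELL), int(y // CELL))

grid = defaultdict(list)            # multi-value map: cell -> entity ids

def insert(eid, x, y):
    grid[cell_id(x, y)].append(eid)

def nearby(x, y):
    # All entities in the 3x3 block of cells around the query position;
    # only these need exact distance checks.
    cx, cy = cell_id(x, y)
    return [e for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            for e in grid[(cx + dx, cy + dy)]]

insert(1, 9.5, 45.0)
insert(2, 12.0, 48.0)
insert(3, 200.0, 200.0)
print(sorted(nearby(9.5, 45.0)))    # [1, 2]
```

As the comment notes, with a fixed world size the `(cx, cy)` pair can be flattened to `cy * n_cols + cx` and the hashmap replaced by a plain array.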
Can you explain 3?
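One possible reading of point 3 (my interpretation, not necessarily the original commenter's): instead of casting rays, test whether a nearby entity falls inside the agent's view cone by comparing the dot product of the facing direction and the direction to the target against a field-of-view threshold:

```python
import math

def in_view(agent_pos, facing, target_pos, fov_deg, max_dist):
    """True if target is within max_dist and inside the view cone.

    `facing` is assumed to be a unit vector.
    """
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > max_dist:
        return False
    # Cosine of the angle between facing and the target direction.
    cos_a = (facing[0] * dx + facing[1] * dy) / dist
    return cos_a >= math.cos(math.radians(fov_deg / 2))

# Agent at the origin, facing +x, 90-degree cone, range 10.
print(in_view((0, 0), (1, 0), (5, 1), fov_deg=90, max_dist=10))  # True
```

This replaces many ray-vs-sphere tests with one normalized dot product per neighbour, though unlike raycasting it does not account for occlusion by obstacles.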
Its amazing what you are doing! can you make a tutorial how to create such a simulation? It would be great
I had the same performance issue in my simulation project. One big problem was that creatures seeing each other leads to a quadratic number of interactions, since each creature has to check its distance to every other creature. The solution I thought of was internal square blocks which creatures can enter temporarily, so they only have to iterate through creatures in neighbouring blocks, which keeps the total number low. When they move, they enter a new block. But I haven't tested that out yet.
@@miquellluch1928 Can you explain how distance checks would become O(n)? I'm picturing a sort on distances, but then I'd need to create distance lists for every object.
@@TheRainHarvester You don't check every creature; you have a preprocessing step that eliminates most pairs, as explained in the video: separating the world into small sub-worlds which only perform collision checks among themselves.
Or create buckets that only check themselves and neighboring ones.
There must be thorough explanations of this online, as it's quite a common problem and can be used outside collision checking as well, as long as you have some other information that tells you which checks you can discard.
@@someonespotatohmm9513 Yeah, I use grids in my pps videos, check them out. Do you use OpenMP? That's an easy speedup.
What did you use for the ray casting? Are you checking for collisions between the vision rays and all the spheres in the environment?
Wow, impressive that you got it down from 570 ms to 19 ms!
cool video
Make agents have hearing so they evolve to be more quiet.
Also add a day-night cycle, and when it's night give each agent a stealth attribute, so that even if a ray hits a stealthy agent, it would not be seen
This is a cool channel
@Pezzza's Work
5:25 How do you store the information about which connection belongs to which node, then? In other words, what is the update iteration?
Is this going to be open source? I love this and think it would be cool for us to run our own simulations
This is very well presented, I love your style and I love this video concept, can’t wait to see what else you do with it!
What are you using for your animations while you are explaining?
Thank you very much!
For the animations I am using a tool I made for this purpose
@@PezzzasWork I somehow knew you would say that 😎. Very professional! Thanks for sharing
If you're familiar with the bibites, there's a lot of features you could add from there
Great video, and amazing visuals! Did you use a library or make your own?
Thank you! I made my own library for these animations
@@PezzzasWork That's amazing! Are you going to make them open source, because they're pretty damn good and I'd love to use it :D
I don't know if it was an oversimplification or the actual implementation, but you showed agents being updated one by one. Isn't there any way to use vectorization and batch processing to update a bunch of them simultaneously?
Hi, it might be interesting to show the progress of both populations as a curve on a single graph, with the axes representing the prey and predator populations respectively, and compare it to theoretical models such as the Lotka-Volterra equations
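For reference, the Lotka-Volterra model mentioned above can be integrated in a few lines; this sketch uses forward Euler with invented parameters, just to show the characteristic oscillation:

```python
# Lotka-Volterra predator-prey model (one common parameterization):
#   prey:      dx/dt = a*x - b*x*y
#   predators: dy/dt = d*b*x*y - c*y
def lotka_volterra(x, y, a, b, c, d, dt, steps):
    """Forward-Euler integration; returns both population histories."""
    prey, pred = [x], [y]
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (d * b * x * y - c * y) * dt
        x, y = x + dx, y + dy
        prey.append(x)
        pred.append(y)
    return prey, pred

prey, pred = lotka_volterra(10.0, 5.0, a=1.0, b=0.1, c=1.5, d=0.75,
                            dt=0.01, steps=1000)
```

Plotting `pred` against `prey` gives the closed orbit the comment describes; an evolved simulation will deviate from it, which is exactly what would make the comparison interesting.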
Did you think of using your GPU to increase performance? Or is it not possible at all?
Good
he
is
ALIVE
Hi, I'm slowly getting into machine learning. I'd like to ask, which language are you using to make these simulations?
I think instead of having a capacity, add a floor with food for the prey to eat, which runs out. That'll make things a lot more complicated. There might also need to be some number changes to make sure one team doesn't win immediately
I imagine a simulation of the evolution of predator eaters. As a predator scans and fails to see its preferred food, it has an x/20 chance of evolving to eat others of its own kind, where x is the number of units of its own kind that it can see.
Maybe even throw in a special case for the "plants" in this, like if a plant sticks around long enough it will increase the chance of another plant growing around there eventually. This would give a simulation of how plants reproduce and grow over the years.
Honestly, respect to you and your work. Your first video awakened a passion for neural network theory in me, and I even tried to reproduce it in Unity. By the way, what do you use for your simulation? Is it from scratch, or do you use a game engine?
Thank you! For my simulations I made my own framework from scratch
@@PezzzasWork oh my god... 😂
The amount of work is so huge.
I'm impressed, and I'm eagerly awaiting your next video!
Good luck
Really fantastic visuals, awesome video. Is there a limit on the range of the raycasting?
There isn't any "hard" limit, but a bigger range would be very costly; that's why I chose this one, as a balance between range and performance
What approach did you use for space partitioning?
What AI library are you using, and what language?