I think one giveaway for all of them was the incorrect exposure of the sky, especially in the last 2 rounds: the imposter images felt like the sky should be bright white to match the exposure of the road/foliage
I tried to play along, here were my results (honestly I "locked down" my answer before he revealed his thought process):
1. Genuinely didn't know, all looked very good. Though once he pointed out the difference in grass on both sides of the road for pic 2, I saw it.
2. This one was fairly easy actually. Pic 4: the wires and the telephone poles don't match up, and the road has the smoothing effect, a typical fabric of AI art.
3. Like Rainbolt says, fairly obvious 3. Smoothing effect, road width inconsistent, weird lines, pavement on the left abruptly stops.
4. Pic 3 was also semi-easy: the wires on the telephone pole didn't continue and the picture is weirdly slanted. I didn't catch pic 2 though! So 1/2.
5. I was struggling a lot, but ultimately went for 1. Towards the end of the road the greenery "blends" into the road, again a typical artefact of AI art, where distant objects start randomly melting into each other. Also the shadow of the coniferous tree on the left wasn't reflected on the road surface.
4/6. Decent, but defo room for improvement. (The fact that I only got one point below the geoguessr genius is already a giant validation tho lmao)
Wait, are you saying that at 2:48 no. 2 is fake and no. 3 is real? I thought 3 was fake because the border between the road and the grass is insanely clean, like someone must have cut it recently or something.
Haha, was waiting for this one! However - huge nerd talk incoming - there are definitely some things AI just can't replicate... For example, I'm from the UK and can (pretty much) recognise all National Grid standard electrical infrastructure. AI really can't produce anything close to, for me, genuine British electrical "pylons" and substations - and it likely won't even as the tech progresses! Guessing you can apply some similar logic to your thinking as well... Anything from streetlights, housing, cell towers, road signs, cars, etc... In short - scanning an image not for looks, but for genuine realism - if you get what I'm saying!
For the first one I noticed that the clouds appeared to be covering the sky, which should make the lighting overcast, yet the trees at the far point in the road had hard shadows, which you would not get on an overcast day.
It's actually really easy to guess most of the time: just look at the photo with the most dynamic range and the best-looking exposure, and look for photos without blown-out skies.

To explain further: the Google cameras aren't like human eyes, of course, nor are they Hollywood-level cinema cameras, so it is safe to assume they have less than 10 stops of dynamic range (after post-processing increases shadows and decreases highlights) rather than 12-18. In other words, if you can see a sky that is blown out while the detail in the shadows and midtones is fine, that is likely real. If you can see noise in the shadows and darker parts of the image, again, likely real.

On the other hand, if it looks "too real" and "too perfect", and both the white sky and the shadows are perfectly exposed (on a sunny day of course; that won't work on an overcast or cloudy day), that generally means it's AI generated, since the Google cameras aren't really that good and the AI is trained not to leave stupid white blobs in the sky where a Google cam might have lost detail, but instead to put a high-quality, perfectly exposed sky there.
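(Not from the video, just to make the heuristic above concrete: the "blown-out sky" and "noise in the shadows" signals can be approximated numerically on a grayscale image. A rough illustrative sketch with made-up toy data, not a real detector:)

```python
import numpy as np

def blown_and_noisy_stats(img):
    """Return (fraction of fully clipped 255 pixels, std dev of the
    darkest quartile) -- crude proxies for a blown-out sky and for
    shadow noise/grain, per the heuristic above."""
    img = np.asarray(img, dtype=np.uint8)
    blown_frac = float(np.mean(img == 255))       # clipped highlights
    shadows = img[img <= np.percentile(img, 25)]  # darkest quartile
    shadow_noise = float(shadows.std())           # grain in dark areas
    return blown_frac, shadow_noise

# Toy "photo": a fully clipped sky over grainy dark ground.
rng = np.random.default_rng(0)
sky = np.full((50, 100), 255, dtype=np.uint8)
ground = rng.integers(10, 60, size=(50, 100), dtype=np.uint8)
photo = np.vstack([sky, ground])

blown, noise = blown_and_noisy_stats(photo)  # blown == 0.5, noise > 0
```

High `blown_frac` together with nonzero `shadow_noise` would lean "real camera" under this commenter's logic; a perfectly exposed sky with noiseless shadows would lean "AI".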
I don't even know how to determine dynamic range and exposure in a picture. All I know is that those are important factors for a high-quality image. As a layman I'm not able to identify that in an image. "It is safe to assume they wouldn't have 12-18 stops of dynamic range but rather less than 10" - you cannot expect me to know this kind of information. What you are saying is that it is easy for an expert or photography enthusiast to spot the AI image, assuming they know what to look out for.
Was playing along myself and only guessed pic 1 of the last set incorrectly, without any GeoGuessr knowledge. Knowing what AI likes to do really helps you spot things that are likely generated.
Yeah, I think GeoGuessr knowledge only takes you so far on something like this. I've worked with a fair bit of AI generation, and while I've barely played the game, I only missed round 5 while watching on a small tablet. It's going to get harder and harder in the coming months and years, though, to pick up on the small things that don't really pass the smell test, as it's getting so much more right than wrong compared to even 12 months ago.
@@unorevers7160 I'll try to explain better. Exposure is literally how bright the photo is; it's measured in stops, which isn't very important info for a non-enthusiast.

What you need to know is that a camera capturing high detail and great color in both the shadows and the highlights of the same image is the sign of high dynamic range (generally cameras can either expose for the shadows and likely get a fully white sky, or expose for the highlights and get fully black shadows). So in a forest on a sunny day, if the sky is perfectly visible AND there's full detail in the shadows of the forest without any noise (literal multicolored grain in the image, common in low light; high ISO causes it - not important right now), the photo is very likely AI generated, because Google Street View cameras aren't that good. Don't get me wrong, they can get perfect exposure in most conditions, but when you have both a very dark and a very light area in the same image, they're likely to expose for one of them, since they cannot get high detail in both - hence a lower dynamic range.

The number of stops of dynamic range is the difference in exposure, measured in stops, between the darkest and lightest areas in a given photo. It isn't important how that's measured; what you need to know is roughly: human eye, 18-21 stops; Hollywood-level cinema camera, 13-16 stops; pro DSLR, 10-13 stops max; phone, 7-12 stops; old camcorder, 5-9 stops; and crucially, Google Street View gen 3/4 is probably 9-13 stops. AI is likely to go for something like 16 stops at least, so it's obvious if the example is easy. As I stated previously, detail in both very dark and very light areas of the image at the same time on a sunny day would likely, but not 100%, indicate an AI-generated image.
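(Side note, not from the comment itself: "stops" are just a base-2 log scale, so the arithmetic here is simple. A quick sketch with illustrative, made-up luminance numbers:)

```python
import math

def dynamic_range_stops(brightest, darkest):
    """Dynamic range in stops: log2 of the brightest/darkest
    luminance ratio (each stop is a doubling of light)."""
    return math.log2(brightest / darkest)

# A sunny scene where the sky is ~4000x brighter than the deep shadows:
scene = dynamic_range_stops(4000, 1)    # ~11.97 stops

# A camera limited to ~10 stops has to give up the difference
# somewhere: either a blown-out sky or crushed, noisy shadows.
camera = 10
clipped_stops = scene - camera          # ~1.97 stops lost to clipping
```

This is why "full detail in both the sky and the shadows" is suspicious: the scene's range exceeds what the camera can record, so a real photo should show clipping at one end.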
At 3:20 the thing that gave #4 away for me was the shadow of the snowbank on the right. There is a little dip in the middle of the shadow closest to the camera but there is no dip in the actual snowbank to create it.
I actually got 4/5, only getting the one with two wrong. But if I didn't know they were AI then maybe the only one I would've thought was off was number 3
Rainbolt in the first set: Hmm, this grass is mowed on one side but not on the other, the camera doesn't look like one from streetview... me: hmm... the sky looks all grey, but also like it shouldn't be all grey...
The thing that sold me for 1 being real on the last round was because on the right if you look at the edge of the image at the bush you will see a duplication of the bush, which I doubt AI would replicate.
You were on the right track with the leaves on the last one. The pattern with which they were spread out was way too perfect too, they were almost in a single file line lol, definitely AI generated
In the first one, the road doing a wavy thing as it goes into the distance gave it away for me. The other roads look practical; the wavy road looks like the road to a cartoon castle on a hill.
In general, I would have struggled to spot these, but when you made the 4th image in that last round black and white, I suddenly noticed this weird tangent of shadow that seemed to "continue" the road up into the sky, which I didn't notice in color at ALL. On the one where the trees on the right gave it away for you too - the thing I noticed there was how the upper branches all formed a line in a way that looked like a climbing vine hanging on a wire, but without any poles.
for set 4, image top-left: why are there repetitions in the lower-right area of the foliage? there are clearly some branches and leaves which are copy-pasted. are those street view image stitching artefacts?
This competition is unfair, we all know that rainbolt is an ai
R AI NBOLT
@@manjilmanjil3003 is he cooking??!!!🗣🗣🗣🗣🔥🔥🔥
pin pls
1.9k likes and 4 replies? -lemme change that- changed.
Rainbot
7:50 “I don’t know what this pole is doing here”
8:04 “this pole makes a lot of sense”
Oh ok understandable
You must be new around here 😂
The Polish are important in geoguesser. Especially our trees.
@@harstar12345 yep I am 💀
At 7:50 he was talking about the 2nd image and at 8:04 he was talking about the 4th image. That's why there's always a red arrow that shows what picture he's talking about
@@FaeTheMf poles are used constantly to determine where you are. A lot of places look very, very similar, so if you see a certain pole you might be able to eliminate somewhere, or know which country or province you're in by that alone or with other clues
7:40 "These leaves don't make sense"
Ok bro, if you say so lol
They look like they're floating in midair
AI foliage is really obvious, i looked at the trees of each round and had 100% win rate, got all of them correct, even identified the double AI round
@@petrkdn8224 I'd assume it's because foliage looking correct relies on the complex 3D structure of the trees being correct, and branches are often obscured: hidden by bloom, blending with the background, or getting messed up by image compression due to being too small.
And sometimes you need to get details right which are not represented by anything in the image, like wind direction.
That, and AI does fine texture worse when it's not the focus.
@@petrkdn8224 Yeah, any sort of pattern generally throws the AI off. With trees and leaves, however, you just need to ask yourself: what's the logic behind that branch, and where is it coming from?
It usually ends up looking like how fever dreams feel. In this case tho, I couldn’t see anything through my phone screen. Couldn’t see the details in a thumbnail resolution.
Yea, I agree with previous replies. AI is terrible at leaves and other coordinated or semi-coordinated structures (like candles, fires, lights and shadows).
Branches out in the open will all lean in the same direction under the effect of the sun and the wind. Individual leaves will always face the direction that gives them the most sunlight; top leaves usually face east, while bottom leaves face any opening to the sky they can find. A pile of big-leafed plants is always enough to thwart any AI.
I found the difference in how a GeoGuessr player vs a "normal" person approaches this very interesting. You used mostly your game knowledge to find the right images, while I searched for mistakes in the details, like wrong shadows etc.
For me, the difficult part about looking for shadows and distortions is that it's hard to tell if it's due to the camera/blurring or if it's an AI mistake
Personally I could tell because it felt off, and looking a bit longer I noticed some details that seemed weird.
But crazy that this gut feeling can tell that something is off
@@thegreendude2086 It was pretty obvious, stuff like the distorted piles of dirt and oddly formed tree branches gave it away for the first one
exactly my thought, last round picture 3 the shadow of the tree was way too on point to be AI, didn't need to see anything else.
yeah, for example in the first quiz, number 3 looks totally off to me: it looks like they put together bushy greens, a too-clean accentuated road with a too-maintained roadside for that part of the world, and the background sky pops way too much. But for him it "made sense" somehow..
I think the craziest part about them is that if nobody told you any were AI generated, then aside from the obvious "3"s, they'd fool just about anyone.
Yeah it's horrifying. Objectivity is gonna vanish
That's the issue with Deceptive Imagery: You caption it enough, just make it "look real", and people are going to believe it.
I went 5/5, but not because I knew it was AI like him. I just found distinctive differences in color grading and FOV compared to real cameras; it feels like art more than an actual picture. It's scary how accurate some were tho.
@@ABC-lm9do
Yeah, that's another scary thing.
When you have to be an expert in a relevant field to have much of a chance when you *know* something AI is present, pretty much no one will have any hope spotting these things in the wild.
And it's only gonna get better, with video and eventually synced audio (by eventually I mean like 2 or 3 years tops) that to most people will appear completely seamless.
I went 5/5. I'm not a geoguessr pro, I'm not an expert in any relevant field at all; AI just looks uncanny to any human being with functioning eyes (no, I wasn't just guessing)
It’s crazy how many “felt like AI” that weren’t.
That's the effect that AI images give. Once you start questioning reality, everything looks fake.
Yeah whoever picked the real images did a great job at that, it's very intentional.
This kinda reminded me of that one game, Exit 8 I believe, where your mind starts playing tricks on you and you start overthinking way too much, so that even the most normal things start feeling like they don't belong there
Google Streetview does have a lot of weird distortion and such that looks AI generated because it kind of is, with the way 360 views are spliced.
"I'm not gonna sit here and cope"
Cuts video short to sit here and cope 😂
you forgot to put what other pros are choosing on the last round
messed up the last like 20 sec on the export, sorry!
4/13 chose 4 was AI (31%)
7/13 chose 1 was AI (54%)
2/13 chose 2 was AI (15%)
Thank you so much!
@@georainbolt the abrupt ending makes a lot more sense now
@@georainbolt LETS GO🐗
did anyone get perfect on all of them? @@georainbolt
It would have been funny if the last slide was a trick question with 4 AI pics, just to see them sweat for 20 minutes for no reason
We're going to need experts in very specific things to help us spot AI images in the future
I know nothing but I imagine you could use tools like how there is software that can be used to detect if a picture has been photoshopped
@@lpharmer3496 At that point it just becomes an arms race so those also won't hold forever.
@@cameron7374 true. Better AI image detection = better AI images, until maybe an AI image can be created that is indistinguishable from a real image
@@lpharmer3496 Thing is, the human brain has vastly more neurons than our image generating AI right now. Them having fewer neurons fundamentally limits their maximum performance even if you train them in a loop like that, i.e. they just don't have the brainpower. Normal adversarial networks like you describe are equal in network size and so they eventually are able to fool the discriminator (the other AI checking if the output is real or fake), but if the discriminator (a human in this case) is much bigger, it may not ever get higher than a certain success rate. True human/superhuman performance might be a long way off (until we can make much bigger neural networks).
@@lpharmer3496 AI is often trained by using that tech, back and forth, informing itself with a pass/fail nonstop several thousand times a second.
You can't spell RAINBOLT without AI.
d”A” n”I”nja
Let him cook
Recaptcha be Like:
Image 2 at 6:24 is Cheddar Gorge, taken in the June 2023 images, at 51.284166, -2.762760
Google lens
I actually knew this one! If you do rock climbing in the UK, it's pretty easy. And something about that rock by the parking also screams UK to me, idk why lol
but yeah, does not look like UK to a normal person, sky too blue lmao
@@Pavel-yp2je Can you confirm my theory that rock is there to stop cars from pulling out at an angle and prevent crashes with cars coming around the corner?
@@ArthurB26 can't confirm but wouldn't be surprised if that's the case
you might be AI
Hey editor, I got a pro tip: If you have information on screen like at 3:17, a good rule of thumb is to have it there long enough that you personally can read it twice. I get that it would likely mess with the pacing if you're dedicating time to it, but having it on screen for not even 2 seconds is very easy to outright miss for casual viewers like myself.
pause the video.
Someone teach bro how to pause a video 😭 🙏
I doubt this was the intention of the editor, but flashes of text probably help the algorithm because viewers either need to pause or rewind to read it, which increases the watch time per click.
people who are saying pause the video are missing the point 😂
@@shmooveyea It was on for such a short time that I literally couldn't even pause it in time. If I have to rewind just to not miss kind of important information, it sucks :/
I've been wondering who gets to keep their job when AI takes over... Just him.
E
For now
wait until there are AIs that can detect AI photos
AI won’t replace you, a person who uses AI will.
8:19 "But like actually, you have to use my... brain" you can feel the struggle not to say "use my noggin" :D
he cut right after, so probably did say noggin and then deleted it
JHK working overtime to match the AI locations and select 10 troll daily challenge locations for two videos posted on the same day damn haha
🧑🏭
What bothers me these days with all the AI stuff is those arrogant people who keep saying how "extremely easy" it is to tell real from AI, especially in drawings.
No bruh, that's not easy; it's becoming very hard for common folks to differentiate certain stuff. And then people come in with all the technical knowledge about design/art/photography, and I'm like, bro, you know most people don't have said knowledge, right? lol
And it will also only become more difficult. (Saying this as an artist)
it was easy not that long ago, the technology is moving faster than people can update the information in their brains about said technology
"And then people come with all the technical knowledge about design/art/photography" so for some people it is easy?
@@anikinmartinez4726 even with the knowledge it's not a given, considering the scores the pros got
It's still pretty easy most of the time
2:54 picture 2 was super easy, just from a quick glance I could see the trees were casting a shadow on a cloudy day.
You're so damn smart
For round 2 no roads towards the houses
oh yeah the shadows under the trees were suspiciously darker than they should be
good eye
On round 1, i thought 3 would be AI because that road looks too smooth without any texture
Same
The last one had some clear giveaways if you spotted them. The trees on the center-left have gaps in them, they are just floating. Also the trees on the right are super squiggly
No, those are not gaps. Those are leaves in front of the trunks.
Clear giveaway on number 2 as well, the snow one: the poles don't have wires. The electric wires are hanging in midair
My favorite video you have ever done, this is just incredible
This was an awesome video & concept. You've been dropping constant bangers lately keep it up!
3:00 I thought it was 3 so bad, the road looks so shiny
I'll be honest I'm surprised you missed the last one; if you look at the center-right of the trees, there's clear artifacting from panoramic/stitched imaging just on the border of it. I don't think AI would have mimicked that sort of artifact.
I can't see what ur talking about at all :(
@@colecube8251 top left pic on the last challenge: above where the road ends you can see the sky between the trees; on the left and right side of the road there is a horizontal line
@@colecube8251 Look at the top-left image, dead center but on the right edge - there's like a "bone" shape in a branch. Directly to its side, there's an identical copy of that bone shape. If you look more closely, you'll notice there's an entire smeared clone/copy of that whole area, which is pretty common in panoramic imaging when it's stitching images together. Essentially the whole right edge (like the last 30 columns or so) is just copied from directly to the left of it.
@@colecube8251 i.imgur.com/7Q8C0yt.png here's a little diagram of what I mean
6:17 that's cheddar gorge, went there as a kid and remember that vantage point
This is CRAZY. And to think this is basically the bottom of the exponential development curve.
I'd guess middle. He got a dude that does AI and a GeoGuessr pro.
@@Speed001 Now, this is the bottom. This AI stuff has barely been out for a few years. It is nowhere near completion or its full potential.
Might not actually be the bottom
@@luka188 Its full potential is based on what information we use to train it. Most AIs are already trained on billions if not trillions of inputs; I doubt we can get much more than that, tbh. For now the big difference is how well you can train the AI and how much of its potential we can get out
@@luka188 It hasn't been just "a few years", it has been literally decades of research and development at this point.
It's been only "a few years" since AI image generation got commercially viable, which for many products is close to the final form (though not for AI of course), but still it definitely isn't "bottom".
I think with a few small changes you could make this a lot harder. If you limited it to one country (Brazil) then the AI would more consistently generate realistic images. Also if you had a pro work with the person generating them to select the best images you could filter out some of the easier tells of these images.
AI imagery is definitely harder to identify when there's a lot of visual noise and few actual landmarks to compare. This test is one of the first times I've actually had to struggle to tell it apart, even got two of them wrong
Thank you so much for the link through to my video Rainbolt! You are amazing. Thanks for all of your brilliant work - you're an inspiration to many! 🙌
ngl, you might benefit from an eye tracker. You could just show what you're looking at when you talk about stuff instead of having to explain or edit in the red arrow.
great video. would love to see a part 2
The scary part will be that when someone is malicious with AI, it won't be in a challenge where you know it's present, and there won't be a reveal to show that you were right and wrong.
It will all slowly seed itself unannounced into our images, art, libraries, and legal evidence.
Just want to say this is an awesome concept for a video, would watch more of these for sure. It's a novel idea to have you guessing something other than location. I think the viewer learns things about the thought process of a GeoGuessr that likely wouldn't come up in a normal video.
This is really fun to play along to the video ✌🏻
Time for a weekly series of this type of video, super interesting and entertaining!
Ooo the first one I was like #3 looks too crispy it has to be that-nope that’s just normal gen 4
Same, but I thought the road was on top of the right-side foliage. The gap between the road and the plants seemed to be missing to me
"as someone with 10 thousand hours on street view"
wo.
love this type of content
Who knew rainbolt was just 2 facebook moms in a trenchcoat
His face's so smooth he looks like he's AI generated
rAInbolt moment
haha
7:22 there's a face in the top right of the 3rd image
There's also one in the right of the second picture
Great idea!! Please do another one!
bro dont have to work for FBI, FBI needs to work for him 💀
Edit: i woke up and saw i was famous 🗿
im gonna touch u
100 likes and no comment let me fix it
@@himanshucubing7541 Corny asf
the on/off vocal fry 😬😬😬 so hard to watch but i love the content
Rainbolt trying to ride a bike in autumn: “Meh, these leaves don't make sense!”
It's really impressive how accurate Rainbolt was, but like he said, it's not easy. He's spending a long time analyzing each image and goes in knowing that one of them is fake; the average person is not going to spend more than a minute looking at photos while scrutinizing every detail.
What you said about the leaves made a lot of sense, surprised you didn't go with that one for the last choice
0:04 I literally remember my mom thinking that image on the left is real when there was a huge snow storm 😂
This just goes again to show, you gotta lock in!!!! On that last one you had it and it was just a matter of locking in...
Love how ghk definitely put the 2 on round four for the round four blunder
I love every time he says "this pole makes sense, yes" and I'm sitting here like "...alright bro, if you say so"
I think the clouds for the last one gave it away
Great content!
The fact "Ryanbrawl" claims he is a human shows how dangerous AI is
You noticing the leaves staying on the road is big brain, I didn't even think of that. You really use your noggin.
I think one giveaway for all of them was the incorrect exposure of the sky. especially the last 2 rounds, the imposter images felt like the sky should be bright white to match the exposure of the road/foliage
Our last hope in the robot uprising.
Who needs Terminator-warning dogs, we'll just have Rainbolt, fiercely smacking his mouse behind a CCTV.
I tried to play along, here were my results (honestly I "locked down" my answer before he revealed his thought process):
1. genuinely didn't know, all looked very good. Though once he pointed out the difference in grass on both sides of the road for pic 2 I saw it.
2. This one was fairly easy actually. Pic 4: the wires and the telephone poles don't match up, and the road has the smoothing effect, a typical artifact of AI art.
3. Like rainbolt says, fairly obvious 3. Smoothing effect, road width inconsistent, weird lines, pavement on the left abruptly stops.
4. Pic 3 was also semi-easy - the wires on the telephone pole didn't continue and the picture is weirdly slanted. I didn't catch Pic 2 though! So 1/2.
5. I was struggling a lot, but ultimately went for 1. Towards the end of the road the greenery "blends" into the road, again a typical artefact of AI art, where distant objects start randomly melting into each other. Also the shadow of the coniferous tree on the left wasn't reflected on the road surface.
4/6. Decent, but defo room for improvement. (the fact that I only got one point below the geoguessr genius is already a giant validation tho lmao)
well it's not as impressive considering you were also listening to his thought process, but yeah, pretty good all things considered.
Wait, you're saying that at 2:48 no. 2 is fake and no. 3 is real? I thought 3 was fake cuz the border between the road and the grass is insanely clean, someone must have cut it recently or something.
I don't know man... There's no way I'm getting these without your insights. But I don't get why you suspected #1 as hard as you did.
Great video
No one break this guys heart....
Haha, was waiting for this one! However - huge nerd talk incoming - there are definitely some things AI just can't replicate... For example, I'm from the UK and can (pretty much) recognise all National Grid standard electrical infrastructure. AI really can't produce anything close to, for me, genuine British electrical "pylons" and substations - and it likely won't even as the tech progresses! Guessing you can apply some similar logic to your thinking as well... Anything from streetlights, housing, cell towers, road signs, cars, etc...
In short - scanning an image not for looks, but for genuine realism - if you get what I'm saying!
Bro "specialises in creating realistic AI images" bruh 💀💀💀
bold of me to assume i would be able to point out the difference like him
Round 3 Picture 2 is Cheddar Gorge in the UK for anyone wondering.
It's a lovely place but ruined by people with loud cars doing races up it
bro is the one who makes maps from memory
For the first one, I noticed that the clouds appeared to cover the sky, which should make the lighting overcast, yet the trees at the far point in the road had hard shadows, which you would not get on an overcast day.
Even as a non-pro, you can get pretty far by simply checking shadow consistency.
A lot of times it's just paying attention to things you wouldn't, like tree branches not floating in air lol
It's actually really easy to guess most of the time: just look for the photo with the most dynamic range and the best-looking exposure, and look for photos without blown-out skies... To explain further - the Google cameras aren't like human eyes, of course, nor are they Hollywood-level cinema cameras, so it's safe to assume they have less than 10 stops of dynamic range (after post-processing lifts the shadows and pulls down the highlights) rather than 12-18. In other words, if you can see a blown-out sky while the detail in the shadows and midtones is fine, that's likely real. If you can see noise in the shadows and darker parts of the image, again, likely real. On the other hand, if it looks "too real" and "too perfect", with both the white sky and the shadows perfectly exposed (on a sunny day of course, that won't work on an overcast or cloudy day), that generally means it's AI generated, since the Google cameras aren't really that good and the AI is trained not to leave stupid white blobs in the sky where a Google cam might have lost detail, but instead to put a high-quality, perfectly exposed sky there.
i dont even know how to determine dynamic range and exposure in a picture. All I know is that those are important factors for a high quality image.
As a layman I'm not able to identify that in an image.
" it is safe to assume they wouldn't have 12-18 stops of dynamic range but rather less than 10"
You cannot expect me to know this kind of information. What you're saying is that it's easy for an expert or photography enthusiast to spot the AI image, assuming they know what to look out for.
was playing along myself and, without any geoguesser knowledge, only guessed pic 1 of the last set incorrectly. Knowing what AI likes to do really helps you spot things that are likely generated
Yeah I think Geoguessr knowledge only takes you so far on something like this. I've worked with a fair bit of AI generation and while I've barely played the game I only missed the round 5 while watching on a small tablet. It's going to get harder and harder in the coming months and years though to pick up on the small things that don't really pass the smell test as its getting so much more right than wrong compared to even 12 months ago.
@@semi-automaticchickennugge6417 the last one was the only one I got wrong, that's why I didn't say it works every time, cuz it doesn't
@@unorevers7160 I'll try to explain better. Exposure is literally how bright the photo is; it's measured in stops, which isn't very important info for a non-enthusiast. What you need to know is that when, in the same image, a camera captured high detail and great color in both the shadows and the highlights, that's the sign of high dynamic range (generally cameras can either expose for the shadows and likely get a fully white sky, or expose for the highlights and get fully black shadows). So in a forest on a sunny day, if the sky is perfectly visible and there's full detail in the forest shadows without any noise (the literal multicolored grain in an image, common in low light, caused by high ISO - not important rn), such high dynamic range makes it very likely the photo is AI generated, cuz Google Street View cameras aren't that good. Don't get me wrong, they can get fine exposure in most conditions, but when there's both a very dark and a very light area in the same frame they're likely to expose for one of them, since they cannot get high detail in both; hence a lower dynamic range. The number of stops of dynamic range is the difference in exposure, measured in stops, between the darkest and lightest areas in a given photo. How that's measured isn't important; what you need to know is roughly: human eye, 18-21 stops; Hollywood-level cinema camera, 13-16 stops; pro DSLR, 10-13 stops max; phone, 7-12 stops; old camcorder, 5-9 stops; and crucially, Google Street View gen 3/4 is probably 9-13 stops. AI is likely to go for at least ~16 stops, so in the easy cases it's obvious: as I stated previously, detail in both very dark and very light areas of the same image on a sunny day would likely, but not 100%, indicate an AI-generated image.
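To make the "stops" idea in the comment above concrete: a stop is a doubling of light, so the dynamic range of a scene in stops is log2(brightest luminance / darkest usable luminance). A minimal Python sketch (the function name and the toy luminance values are illustrative, not from the comment):

```python
import math

def dynamic_range_stops(luminances):
    """Estimate dynamic range in stops: each stop is a doubling of light,
    so the range is log2(brightest / darkest) over the usable pixels."""
    lit = [v for v in luminances if v > 0]  # ignore fully black pixels
    return math.log2(max(lit) / min(lit))

# Toy example: darkest usable luminance 1, brightest 1024 (relative units)
print(round(dynamic_range_stops([1, 8, 64, 512, 1024]), 1))  # 10.0 stops
```

On this scale a 1:1024 brightness ratio is 10 stops, which is why a camera limited to ~10 stops has to sacrifice either the sky or the shadows in a high-contrast forest scene.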
At 3:20 the thing that gave #4 away for me was the shadow of the snowbank on the right. There is a little dip in the middle of the shadow closest to the camera but there is no dip in the actual snowbank to create it.
I actually got 4/5, only getting the one with two wrong. But if I didn't know they were AI then maybe the only one I would've thought was off was number 3
07:08 picture 2 is cheddar gorge in the UK
Rainbolt in the first set: Hmm, this grass is mowed on one side but not on the other, the camera doesn't look like one from streetview...
me: hmm... the sky looks all grey, but also like it shouldn't be all grey...
On the 4th round #2 and #3 had ditches beside the roads which just felt weird to me.
We need Rainbolt to review evidence in court. This is the only way to determine the authenticity of images anymore.
the fear of being a facebook mom is scary
The thing that sold me on 1 being real in the last round was that on the right, if you look at the edge of the image at the bush, you'll see a duplication of the bush, which I doubt AI would replicate.
You were on the right track with the leaves on the last one. The pattern with which they were spread out was way too perfect too, they were almost in a single file line lol, definitely AI generated
Bruh i already lost from the first round, 3 road looks so weird
ive noticed that AI always makes long roads
the clouds in #4 of the last round are the giveaway
Lmao on the very first example you did.. the trees, look at the trees! they're floating, the branches lead nowhere, that was the dead giveaway really
Round one I legit immediately guessed 2. Will take that as a W
We need a game like this, seems insanely hard.
on round 5, #4 had a missing part of one of the large evergreens' trunks, which gave it away
in the first one the road doing a wavy thing as it goes into the distance gave it away for me. the other roads look practical. the wavy road looks like the road to a cartoon castle on a hill
7:05 the "double line" could have been just a line that got worn out in the middle - i see that all the time on the less maintained roads.
Man the shadows really give them away
And we never get to know what the other pros picked for the last one
This was crazy to watch!
Prateek collab is crazy
I was going for "Nah, this road is slightly tilted, must be an AI", and it worked 4/5 times
In general, I would have struggled to spot these, but when you made the 4th image in that last round black and white, I suddenly noticed this weird tangent of shadow that seemed to "continue" the road up into the sky, which I didn't notice in color at ALL. On the one where the trees on the right gave it away for you, the thing I noticed was how the upper branches all formed a line, in a way that looked like a climbing vine hanging on a wire but without any poles.
for set 4, image top-left:
why are there repetitions in the lower-right area of the foliage?
there are clearly some branches and leaves which are copy-pasted. are those street view image stitching artefacts?
the last round, one of the large trees to the left.. had a chunk missing out of it lol, dead giveaway
with this mic quality i didnt expect you to look like the most attractive guy i've ever seen wtf
Third Round No.2 is Cheddar Gorge in England. Been there. Brilliant hike.