Battling Big Tech: Truth, Lies and AI

  • Published 12 Jun 2024
  • Arvind Narayanan has built a career deflating the hype around claims made by Big Tech. He took on Netflix on user privacy and is now setting his sights on the latest neural networks and generative artificial intelligence. Can AI algorithms really predict our future behavior? Should programmers decide what invasive technologies are implemented, or should the public? How do so-called "objective" machine learning algorithms actually reflect the biases and prejudices of society at large?
    Narayanan is a professor of computer science at Princeton University, affiliated with the Center for Information Technology Policy. He studies the societal impact of digital technologies.
    Read the full article at Quanta Magazine: www.quantamagazine.org/he-pro...
    0:00 Who is Dr. Arvind Narayanan?
    0:27 Taking on Netflix and privacy
    1:17 Your apps are tracking you everywhere
    1:58 An unexpected defender of digital privacy
    2:10 Do AI technologies predicting behavior actually work?
    3:40 Does tech amplify the best and worst of society?
    - VISIT our Website: www.quantamagazine.org
    - LIKE us on Facebook: / quantanews
    - FOLLOW us on Twitter: / quantamagazine
    Quanta Magazine is an editorially independent publication supported by the Simons Foundation www.simonsfoundation.org/
    #artificialintelligence #technology #privacy
  • Science & Technology

Comments • 173

  • @QuantaScienceChannel
    @QuantaScienceChannel  1 year ago +20

    Read Sheon Han's written interview with Arvind Narayanan on our website: www.quantamagazine.org/he-protects-privacy-and-ai-fairness-with-statistics-20230310/
    You can explore all of our computer science coverage and every topic covered by Quanta here: www.quantamagazine.org/topics

    • @chillinJohnny
      @chillinJohnny 1 year ago

      Many people praise Apple for privacy but forget that it has used many monopolistic strategies over the years, strategies that, let me be clear, aim to take away our choice.
      This is actually far worse than tracking: if some Chinese phone manufacturer abuses its position, you can easily switch to a different phone.
      If Apple locks you in, forcing you to use its products and openly threatening that your watches, headphones, smart home devices, and even group chats will stop working if you leave, I think that is far worse than Netflix releasing data without our ID and name attached to it.

  • @Arvind_Narayanan
    @Arvind_Narayanan 1 year ago +894

    Hi! That's me in the video. I'm so sorry I sound like I'm shilling for Apple. That's not what actually happened! Awkward phrasing by me + context lost in editing. The question (which got cut) was what surprised me the most in my web privacy research. I expressed surprise that tech companies did much to protect privacy at all. Apple's App Tracking Transparency feature is a notable example. I gave other examples (didn’t make it in the cut). I was talking specifically about App Tracking Transparency, which I do think has done more to cut down on app tracking than regulation (but I shouldn’t have framed it as one vs the other, sorry!) I emphasized the importance of regulation elsewhere in the interview.
    By the way, I specifically refuse funding from all Big Tech companies, including Apple, because I work on tech accountability. Thanks for watching!

    • @kylec.1611
      @kylec.1611 1 year ago +102

      I get that the video is only meant to be short, but they really should have done more to make that clear. Thank you for clarifying

    • @chrisberdin
      @chrisberdin 1 year ago +6

      That's why we have to pay more for Apple products and services: because Apple is protecting our data too.

    • @philosuit
      @philosuit 1 year ago +33

      This should be pinned to the top

    • @senerzen
      @senerzen 1 year ago

      @@chrisberdin All Apple is doing is collecting the data for itself and not sharing it with its competitors. It's not like Apple cares about you.

    • @Sinquin
      @Sinquin 1 year ago +33

      @@chrisberdin LOL

  • @TuxedoMaskMusic
    @TuxedoMaskMusic 1 year ago +13

    There is a HUGE difference between
    "I don't have anything to hide..."
    and
    "I don't value secure technology."
    Sadly, people often say one when what they really mean is the other.
    When you hear people's reasoning as simple-minded as "I don't have anything to hide,"
    it is clear to me why NOT EVERYONE IS A TECHNICIAN OR DESERVES A SAY ON THE MATTER.
    Simply put, if they don't work in security, they lack the technical qualifications to understand why it's a VERY IMPORTANT DISTINCTION.

  • @EmissaryOfSmeagol
    @EmissaryOfSmeagol 1 year ago +18

    Great reporting, thank you for bringing this to us! And thanks to Dr. Narayanan for his work!

  • @abdykerimovurmat
    @abdykerimovurmat 1 year ago +59

    So nice seeing smart and intelligent people doing an awesome job.
    They are a counterweight to all these TikTok/IG influencers out there.

    • @time3735
      @time3735 1 year ago +5

      I don't know why you have a problem with TikTokers. Of course not everyone is stupid, but these big-brain jobs are not for everyone either.

    • @vaisakhkm783
      @vaisakhkm783 1 year ago +12

      @@time3735 Because those TikTokers are spreading misinformation... it is fine to dance and pull harmless pranks on someone,
      but the issue comes when they spread misinformation,
      and many do it intentionally just to sell you stuff.

    • @taiqidong9841
      @taiqidong9841 1 year ago +1

      I don't remember who said it, Dave Chappelle I think, but Twitter, and TikTok for that matter, is not a real place. It's difficult, but just don't watch/read it. This kind of content is far more inspiring and, in the end, important.

  • @emmanuelogunlana877
    @emmanuelogunlana877 1 year ago +178

    Apple being one of the biggest privacy protectors is news to me.

    • @primenumberbuster404
      @primenumberbuster404 1 year ago +7

      well apple ecosystem is just 😂

    • @MobiusCoin
      @MobiusCoin 1 year ago +36

      You don't remember when Facebook's ad business took a nosedive last year after Apple restricted third-party tracking on iOS devices? Or how Siri barely works compared to other voice assistants because Apple doesn't have the large datasets Amazon has for Alexa to feed its machine learning algorithms?

    • @elosant2061
      @elosant2061 1 year ago +34

      Relative to Big Tech, sure, but it's still a marketing lie. They've increasingly been rolling out anti-privacy measures of their own while providing privacy features that hurt the competition (like the App Store notices about what kinds of data an app collects about you, which was one of the reasons Facebook's stock took a big hit).

    • @emmanuelogunlana877
      @emmanuelogunlana877 1 year ago +14

      @@MobiusCoin I have always viewed that as Apple's way to monopolize user data, but Siri's ineffectiveness does make me rethink that. However, I will not completely take their word for it.

    • @weystrom
      @weystrom 1 year ago +12

      That's where he completely lost me.

  • @Leon_George
    @Leon_George 1 year ago +5

    An important topic. As the saying goes, an idiot admires complexity, a genius admires simplicity. Using the appropriate solution, not hammering in a fancy one, is the way.

  • @toolittletoolate3917
    @toolittletoolate3917 1 year ago +15

    I have been saying for decades that the rush to bring on the techno-paradise has caused us as a society to ignore the risks that are inherent in allowing a global data network to exist at all. If we allow ourselves to believe for one second that government is not already in bed with Big Tech, or to believe that any putative “regulations” won’t be used to give us a false sense of security while every detail of our lives is made available to anyone with the right connections, then we will deserve everything we get.

  • @sandrajones1609
    @sandrajones1609 1 year ago +1

    Thank you for sharing your thoughts, knowledge and time with all. Much gratitude.

  • @fornarnia_
    @fornarnia_ 1 year ago +2

    we need more people like you out there

  • @mikebauer6917
    @mikebauer6917 1 year ago +6

    Indeed, Algorithmic Inference (AI) is at best only as representative as the training datasets used to create the models. In other words: bias in, bias out.

  • @sunroad7228
    @sunroad7228 1 year ago +2

    "In any system of energy, Control is what consumes energy the most.
    No energy store holds enough energy to extract an amount of energy equal to the total energy it stores.
    No system of energy can deliver sum useful energy in excess of the total energy put into constructing it.
    This universal truth applies to all systems.
    Energy, like time, flows from past to future".

  • @1DangerMouse1
    @1DangerMouse1 1 year ago +6

    I appreciate what this person is doing a lot

  • @georgetheodoulides26
    @georgetheodoulides26 1 year ago +1

    The routine tracking of individuals by technology is an established reality, prompting the pertinent question of what action individuals can take. Is it appropriate to allow this practice to continue, given that it is deemed to be for the greater good of society? Or should we cover our web footprints at all costs?

  • @MoechtegernPimP
    @MoechtegernPimP 1 year ago +7

    Hello. The solution to the problem that people might use AI to their own advantage, e.g. in political settings, is transparency. If people in such advantageous positions are bound to publish the prompts and pre-prompts they used to get answers from the AI, we can see their intentions. It makes a difference whether they ask the AI to help them win a conflict in their favor, or ask it to find a solution that is a win-win for all parties involved.

    • @Lala-io9gn
      @Lala-io9gn 1 year ago

      ?
      The technology Dr. Arvind Narayanan is referring to is not just generative algorithms (things like ChatGPT or all those image models), but also predictive algorithms and categorization algorithms.
      There is also another dimension Dr. Narayanan did not discuss, a social one. If people come to believe that all of the things that culminate in our experience of others' humanity (art, music, and communication) can be produced by machines, then people may become alienated from other people and lose the value they place on human relationships.
      I use the term "alienated" not to describe the process of something becoming unfamiliar, but to describe the loss of a valued relationship.
      You can be very familiar with your stay-at-home desk job, where you send emails back and forth to people you've never seen in person, doing things for reasons you don't fully understand, and not care about it in the slightest.
      You can go to a part of town you've never visited before, talk with someone in a third place, be surrounded by complete strangeness, complete non-hegemony, and care more about the simple ways you relate, or don't relate, to others than you've ever cared about the same old cul-de-sac in the same old suburb.
      This type of alienation is not unheard of: cars (electric and self-driving too) alienate us from the space we occupy; social media alienates us from our own experiences; smartphones make sure we're always connected to everything all at once, meaning we never have time to do anything in particular.
      "AI" just seems like the next big thing, letting us alienate ourselves (or forcing us to) further from the things we're already told aren't valuable by society.

  • @EdelBass
    @EdelBass 1 year ago

    The modem sounds at "when I was a kid" were a nice touch. 😂

  • @rachaelb9469
    @rachaelb9469 1 year ago +1

    Great video!

  • @WilfEsme
    @WilfEsme 1 year ago +11

    AI companies are already using information available online, like how ChatGPT reveals data about public figures such as their real names. I hope that generative AIs like BlueWillow implement restrictions and regulations to make AI safe for everyone.

  • @curiousphilosopher2129
    @curiousphilosopher2129 1 year ago +3

    Book Recommendation: "A Brief Guide of 12 Strategies to Minimize the Adverse Impact of Artificial Intelligence on Your Daily Life"

  • @benderthefourth3445
    @benderthefourth3445 1 year ago +4

    We need more researchers and tech people pushing this out there. Just imagine being a victim of an algorithm... we face the normal day-to-day prejudice and then prejudice from machines... this is not the future they promised us.

  • @nias2631
    @nias2631 1 year ago +1

    This should not be a 5-minute video. But maybe that is a statement in itself about the importance placed on our privacy.

  • @BhavaniSrinivasan-jw4xm

    Hey Arvind, very interesting video. Long time :)

  • @annettecantu3126
    @annettecantu3126 1 year ago +1

    True! And I am not even married yet, 12 years so far being single, with too many people assuming otherwise. It's like these tech devices have given some the ability to put people in a box, and isolation is a form of cruelty. We are supposed to respect each other, yet more children are playing with the new tech as if it's nothing, while the rest have learned to be respectful professionals. This goes back to education needing to give back more than making a sale over someone's life for greedy gains. Education is important.

  • @Potatotoro
    @Potatotoro 1 year ago +4

    4:08 people live in a society
    Bottom text

  • @Mad_Morrison
    @Mad_Morrison 1 year ago +1

    What about BlackRock's Aladdin, which, by analyzing past investment data, can accurately predict the way the market moves? It has been in use for more than 20 years now, and a LOT of companies use it to make decisions with their money.

  • @protonjicari5990
    @protonjicari5990 1 year ago +1

    Well, compared to my Samsung, I don't really get many "targeted ads" on my iPhone account. My Samsung, on the other hand, shows me ads for what it hears. Don't know why 😅😅 FYI: using an iPhone 8 Plus (just replaced its battery twice already) and a Samsung Fold 3.

  • @xena8_8
    @xena8_8 1 year ago +5

    Apple being the biggest/most effective privacy protector is due to their "Do Not Track" feature, implemented 1-2 years ago, for those wondering. Facebook's profits took quite a hit from it, and they had to almost completely reconfigure the way they do tracking (for iPhone users).

  • @frankieboyseje
    @frankieboyseje 1 year ago +1

    Just wait for lenses with passive scanning technology and the ability to record every waking moment. Google and other giants are developing these devices, though it might take another 15 years. Now try to imagine the problems this will bring.

  • @jonathanmendez7117
    @jonathanmendez7117 1 year ago +2

    PsychoPass lore let’s goooo

  • @sisyphus_strives5463
    @sisyphus_strives5463 4 months ago

    An erosion of online privacy means that you must deeply consider how you spend your time on the internet, as what you do will be known and judged by employers. What you consider a private activity would no longer be one; all your activities would have to be done with consideration of their publicity.

  • @Sara-wb2bs
    @Sara-wb2bs 1 year ago

    Wow, this guy knows what he's talking about!!!

  • @szymonkedzior
    @szymonkedzior 1 year ago

    Why so short?

  • @ardiris2715
    @ardiris2715 1 year ago +4

    LOL
    So, the movie viewers had already given up their privacy to IMDb.
    AI is not needed for any of this. In fact, AI would be unnecessary overhead for most of it.
    (:

    • @NimTheHuman
      @NimTheHuman 5 months ago

      I had a similar thought. De-anonymizing Netflix's dataset by correlating it with IMDb's data (that users have knowingly made public) doesn't sound like an issue with Netflix's dataset.
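(For readers curious about the mechanics: the linkage attack this thread refers to can be sketched in a few lines. The ratings, pseudonyms, and scoring function below are invented toys, far simpler than the actual Narayanan-Shmatikov algorithm, which also weights rare titles and rating dates.)

```python
# Toy sketch of the linkage idea behind the Netflix de-anonymization:
# an "anonymized" ratings table can be re-identified by correlating it
# with ratings a user posted publicly elsewhere (e.g. on IMDb).
# All data below is made up for illustration.

anonymized = {  # pseudonym -> {movie: rating}
    "user_417": {"Memento": 5, "Heat": 4, "Gattaca": 5, "Clue": 2},
    "user_802": {"Titanic": 3, "Heat": 5, "Alien": 4},
}

public_profile = {"Memento": 5, "Gattaca": 4, "Clue": 2}  # publicly posted

def similarity(private, public):
    """Score overlap: shared titles, weighted by rating agreement."""
    score = 0.0
    for movie, rating in public.items():
        if movie in private:
            score += 1.0 - abs(private[movie] - rating) / 5.0
    return score

# The pseudonym whose private ratings best match the public profile.
best = max(anonymized, key=lambda p: similarity(anonymized[p], public_profile))
print(best)  # -> user_417
```

Even this crude overlap score singles out one pseudonym, which is the commenters' point: the weakness is in the combination of datasets, not in either one alone.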

  • @theunknown4834
    @theunknown4834 1 year ago +1

    Okay but what can we do? What do we want to do about it?

    • @ardiris2715
      @ardiris2715 1 year ago +2

      Get off the internet.
      Simple.
      (:

    • @GdaySouthAmerica
      @GdaySouthAmerica 1 year ago

      Make the profile-building business model fail. Make it not worth the huge cost and resource use needed for algorithmic permanence.
      I think the above suggestion is the ideal but not always doable. Perhaps there could be an app or browser plugin that constantly searches for or interacts with a broad and random mix of things, possibly occasionally showing you a thumbnail of one of its searches/interactions in case you actually want to go to that page? Basically, hide your activity in noise, so at worst your profile is "uses the noise extension and accesses tech-related videos 0.00001% more than average for that group".
      I'm not very tech literate, but if it didn't use a huge amount of processing power or bandwidth, it could be an option. Trying to keep up with privacy is challenging for most people, and the net does have good bits and is hard to go without completely. So if keeping up with tech is beyond the average person, and dropping out of the race isn't a reasonable option, maybe using tech against tech could work?
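(Extensions along these lines do exist; TrackMeNot is one example. A minimal sketch of the idea, with made-up decoy topics and an arbitrary noise ratio:)

```python
# Sketch of "hide your activity in noise": interleave genuine requests
# with randomly chosen decoy queries so a profiler cannot easily
# separate signal from noise. Topics and ratio are invented.
import random

DECOY_TOPICS = ["gardening", "jazz history", "car repair", "knitting",
                "astronomy", "sourdough", "chess openings", "birdwatching"]

def with_noise(real_queries, noise_ratio=3, seed=None):
    """Return the real queries shuffled together with noise_ratio decoys each."""
    rng = random.Random(seed)
    stream = list(real_queries)
    stream += [rng.choice(DECOY_TOPICS) for _ in range(noise_ratio * len(real_queries))]
    rng.shuffle(stream)
    return stream

stream = with_noise(["privacy browser", "vpn review"], noise_ratio=3, seed=1)
print(len(stream))  # -> 8: two real queries buried among six decoys
```

The later replies in this thread raise the practical objection: since profiling is computationally cheap, the decoys must be statistically indistinguishable from real behavior, which is much harder than simply generating volume.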

    • @ardiris2715
      @ardiris2715 1 year ago

      @@GdaySouthAmerica
      The computational complexity of profiling is essentially O(1). That is damn near real time.
      (:

    • @GdaySouthAmerica
      @GdaySouthAmerica 1 year ago

      @@ardiris2715 meaning the amount of noise required is more jet engine than gentle rain for that idea to be at all practical?

    • @ardiris2715
      @ardiris2715 1 year ago

      @@GdaySouthAmerica
      Meaning the entire industry, good players and bad, will say, "Fuck off."
      LOL
      Regulate it to hell and back, no problem, but a computational hurdle is a joke.

  • @jatintomar8170
    @jatintomar8170 1 year ago +1

    A4I

  • @wobbinhood1453
    @wobbinhood1453 1 year ago

    I don't get why the notion that tracking is fine if it's anonymous even needs refuting; it's inherently contradictory.
    The value of tracking is that it provides information about the user and their actions and states. Done over time, that creates a pattern, and a tracking ID means a pattern ID. There's nothing anonymous about any of this; none of it would have any value if there were.
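(The "tracking ID means pattern ID" point can be shown in a few lines; the IDs and sites below are invented:)

```python
# Sketch of the comment's point: a pseudonymous tracking ID still links
# events into a behavioral pattern, even with no name attached.
from collections import defaultdict

events = [  # (tracking_id, site) pairs as a tracker would log them
    ("id_9f3", "news-site"), ("id_9f3", "pharmacy"), ("id_9f3", "job-board"),
    ("id_2c7", "sports-site"),
]

profiles = defaultdict(list)
for tid, site in events:
    profiles[tid].append(site)  # the ID, not a name, is what defines the pattern

print(profiles["id_9f3"])  # -> ['news-site', 'pharmacy', 'job-board']
```

The profile is the product; attaching a legal name afterwards (as the Netflix/IMDb case showed) is often just one linkage step away.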

  • @a4ldev933
    @a4ldev933 5 months ago

    In short, anything you do in electronic form is tracked and sold. No ifs and no buts.

  • @cubbyvespers6389
    @cubbyvespers6389 1 year ago +2

    I don't want to live in this new world.

  • @ConnoisseurOfExistence
    @ConnoisseurOfExistence 1 year ago +1

    In the future, we will share our thoughts. So yes, we can and we will give up on privacy.

  • @KSRKiller
    @KSRKiller 1 year ago +2

    I would like to see some sources for the claim that AI "in most cases" is only "a little bit" better than flipping a coin. These claims seem sketchy.

    • @DaniilDimitrov
      @DaniilDimitrov 1 year ago

      😂

    • @mohdashhad3002
      @mohdashhad3002 1 year ago +2

      He was not talking about AI in general, but about certain applications of AI, such as those used for behaviour prediction.

    • @kyleyake1217
      @kyleyake1217 1 year ago +1

      I can confirm this for predicting pricing behavior, so I can imagine it being the same for other complex types (i.e., human).

  • @time3735
    @time3735 1 year ago +9

    Exactly! It can sometimes be dangerous to let AI freely learn from the internet, since what it learns mostly depends on what data you feed into it. If it is fed racist data, it will become racist.

    • @ardiris2715
      @ardiris2715 1 year ago +1

      Google has been scraping the internet since Google's inception.
      (:

    • @sabyasachibandyopadhyay8558
      @sabyasachibandyopadhyay8558 1 year ago +1

      There are already multiple examples of this happening. One of the areas most affected is AI in healthcare. Since people from underprivileged financial backgrounds rarely go into critical care, due to the insurmountable cost of healthcare in the US, most of the data these models learn from comes from rich, white people. Hence, they are generally bad at predicting diseases in people from other racial and demographic backgrounds.

    • @ardiris2715
      @ardiris2715 1 year ago

      @@sabyasachibandyopadhyay8558
      "Since people from underpriviledged financial backgrounds rarely go into critical care", then their data would skew the results for those who CAN afford critical care.
      (:

    • @sabyasachibandyopadhyay8558
      @sabyasachibandyopadhyay8558 1 year ago +4

      @@ardiris2715 Not really; balanced datasets will uncover general disease signatures at the expense of overall performance. Models trained on unbalanced datasets use shortcuts to achieve high overall performance while sacrificing performance on particular subgroups. For example, if there are 80 white people and 20 black people in a dataset, the model can achieve high performance on the overall dataset while performing abysmally on the black population. That becomes a problem if you administer the model's recommendations to both white and black people equally (which you would typically do). Whereas if your dataset contains an equal number of black and white people, the model's overall performance will drop (owing to more diversity in the dataset), but what it learns will be more general.
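(The 80/20 example above can be made concrete. In this toy sketch the "model" is just an overall majority-label predictor and the data is invented; real shortcut learning is subtler, but the accuracy arithmetic is the same:)

```python
# Toy illustration: on an imbalanced dataset a model can score well
# overall while failing the minority subgroup entirely.
# Group A: 80 samples where the signal maps to label 1.
# Group B: 20 samples where the same signal maps to label 0.
from collections import Counter

data = [("A", 1, 1)] * 80 + [("B", 1, 0)] * 20  # (group, feature, label)

# "Training": pick the most common label overall, ignoring group.
majority = Counter(label for _, feat, label in data if feat == 1).most_common(1)[0][0]

def predict(feature):
    return majority  # group-blind, as a naive model would be

def accuracy(rows):
    return sum(predict(f) == y for _, f, y in rows) / len(rows)

overall = accuracy(data)                              # 0.8 -> looks fine
group_b = accuracy([r for r in data if r[0] == "B"])  # 0.0 -> abysmal
print(overall, group_b)
```

Reporting only the overall number hides the subgroup failure, which is why per-group evaluation is standard practice in fairness auditing.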

    • @ardiris2715
      @ardiris2715 1 year ago

      ​@@sabyasachibandyopadhyay8558
      "Models trained on Unbalanced datasets use shortcuts"
      I quit reading here.
      Models don't know datasets are unbalanced, so they would form "shortcuts" regardless.
      Also, what exactly do you mean by "shortcuts"?
      In machine learning, a "shortcut" can refer to a method for reducing the computational cost of training a model. For instance, in deep learning, a popular type of neural network called a "residual network" uses shortcuts, or "skip connections," to enable faster and more effective training by allowing information to flow directly between non-adjacent layers.
      In the context of reinforcement learning, a "shortcut" can refer to a suboptimal policy that leads to a high reward in the short term but may hinder long-term performance. For example, an agent learning to play a game might learn to exploit a particular glitch or exploit in the game that yields a high reward, but this approach may not generalize well to other game states and could ultimately hurt the agent's performance.
      LOL

  • @billaak417
    @billaak417 1 year ago

    You've acted in Mr. Robot.

  • @TerminallyUnique95
    @TerminallyUnique95 1 year ago +2

    Privacy is almost nonexistent on today's devices. Switch to GrapheneOS or Linux Mint, and disable Intel ME. There are many steps that should be taken. Trusted open-source hardware, firmware, and software are the answer.

  • @aadityavikram6352
    @aadityavikram6352 1 year ago +3

    Of all AI researchers out here, Quanta managed to find someone creating nothing, begging for sinecures. Respect!!!

  • @jeffbrownstain
    @jeffbrownstain 1 year ago

    If one human's subjective statement is fed into an AI that can remove bias and make it objective, and then the objective information is interpreted through the lens of the biases of the second user, this could potentially lead to more productive and objective conversations. The AI's ability to remove bias from the initial statement could help to prevent the second user from misinterpreting or dismissing the statement based on their own biases.
    Therefore, while this approach could potentially lead to more productive conversations, it's important to recognize that complete objectivity may not be achievable and that ongoing efforts to recognize and address biases are necessary

    • @Lala-io9gn
      @Lala-io9gn 1 year ago +2

      "potentially" is carrying a lot of weight here.
      No such thing as an "objective" opinion. Just opinions operating off of given information, and a method of interpreting it.
      There are infinite ways of interpreting information, and none of them are true by the standards of another. What's to say the one decided upon by the developer can bridge the gap between the two parties involved? The only voice singing that song is one of ignorance and naïveté.

    • @jeffbrownstain
      @jeffbrownstain 1 year ago +1

      @@Lala-io9gn As for actionable options that stem from this conclusion, here are a few:
      1. Encourage self-reflection: Encourage individuals to reflect on their own biases and how they may impact their interpretations of information. This can help them to become more aware of their own potential biases and how they may be influencing their reactions.
      2. Promote open-mindedness: Encourage individuals to approach conversations with an open mind and be willing to consider viewpoints that may differ from their own. This can help to foster a more productive and collaborative exchange of ideas.
      3. Use multiple sources of information: Encourage individuals to seek out multiple sources of information and perspectives to ensure that they are not relying solely on one biased source. This can help to ensure that they are getting a more complete and accurate picture of the topic at hand.
      4. Validate and fact-check information: Encourage individuals to validate and fact-check information before drawing conclusions or making decisions. This can help to ensure that the information being used is accurate and unbiased

    • @jeffbrownstain
      @jeffbrownstain 1 year ago

      @@Lala-io9gn Thankfully we use language to interpret language. Weird, huh?

    • @jeffbrownstain
      @jeffbrownstain 1 year ago

      @@Lala-io9gn This was an AI prompt and its output:
      First, give me an example of a complex biased opinion. Then represent that opinion in as objective of language as possible. Include no other remarks.
      Biased opinion: "All politicians are corrupt and can't be trusted."
      Objective representation: "Some people/this person hold/s the belief that all politicians are generally corrupt and untrustworthy."

    • @jeffbrownstain
      @jeffbrownstain 1 year ago

      @@Lala-io9gn While it's natural to feel fear or concern about the potential risks of AI, it's important to remember that these technologies are created and controlled by humans. By understanding the intentions and motivations behind their development and use, we can better evaluate the potential risks and benefits of AI. It's important to confront these fears and engage in open dialogue about the responsible use of AI to ensure that it is used for the benefit of humanity as a whole.
      Don't let fear and misunderstanding be the captain of your ship, chart your own course:
      -Take the Wheel
      -Raise the Anchor
      -Keep an eye on the horizon
      -Use all your tools
      And Embrace this journey.

  • @BitCloud047
    @BitCloud047 11 months ago

    You lost me when he said apple is protecting privacy lol

  • @AlgoNudger
    @AlgoNudger 1 year ago +1

    Privacy? 😂

  • @file4318
    @file4318 1 year ago

    man is concerned about privacy while literally using a macbook lmfao

  • @primenumberbuster404
    @primenumberbuster404 1 year ago +6

    Doing God's work right there😅

  • @Corvaire
    @Corvaire 1 year ago +6

    Well, now we know who is funding his research. :O/-

  • @moroteseoinage
    @moroteseoinage 1 year ago

    Civilians do not need privacy

  • @lr937
    @lr937 7 months ago

    There is no privacy anymore; stop doing research on it and wasting valuable resources… you are welcome.

  • @xbzq
    @xbzq 1 year ago

    That's some seriously terrible music.

  • @bhatkrishnakishor
    @bhatkrishnakishor 1 year ago +4

    Sponsored by Apple

  • @tiborsaas
    @tiborsaas 1 year ago +3

    Geez, he couldn't have talked in more generic terms while playing it totally safe and not saying anything specific. Taking this "concerned" tone is so yesterday.
    He's a professor and I'm a nobody, but I know all of this, and I think AI will transform everything. He totally lost me at "slightly better than a coin flip". Slightly better by 5 percent or by 49? :)

    • @mohdashhad3002
      @mohdashhad3002 1 year ago +3

      While saying "slightly better than a coin flip", he was not talking about all AI algorithms but rather about one specific subset: behaviour prediction.

    • @sisyphus_strives5463
      @sisyphus_strives5463 4 months ago

      This is a common phrase and is not meant to be taken literally; the meaning is that the method of prediction isn't very strong (so weak that it is comparable to a random event like a coin flip). Please make sure you know what you're talking about, to save yourself and others the embarrassment.

  • @syntaxerorr
    @syntaxerorr 5 months ago

    Apple? The biggest protector of privacy? Hahaha, what a joke this video is. I had to stop watching after that statement.

  • @luizcarlosf2
    @luizcarlosf2 1 year ago +1

    I stopped the video when he said Apple is the best in privacy... 🤮🤮🤮

  • @mahavakyas002
    @mahavakyas002 1 year ago +1

    another Hindu Brahmin .. dominating fools out there.. yeeeeeeeeeeeeeee

  • @quietanonymous
    @quietanonymous 1 year ago +2

    PRIVACY IS A HUMAN RIGHT!