
‘Social Dilemma’ Star Tristan Harris Responds to Criticisms of the Film, Netflix’s Algorithm, and More - OneZero


Big Technology

In a new interview, the former Google design ethicist weighs in on the backlash to the popular Netflix doc

Tristan Harris

OneZero is partnering with the Big Technology Podcast from Alex Kantrowitz to bring readers exclusive access to interview transcripts with notable figures in and around the tech industry.

This week, Kantrowitz sits down with Tristan Harris, the star of The Social Dilemma on Netflix. This interview has been edited for length and clarity.

To subscribe to the podcast and hear the interview for yourself, you can check it out on Apple Podcasts, Spotify, and Overcast.

You won’t find a more controversial film in Silicon Valley than The Social Dilemma. The film, now available on Netflix, features confessions from early consumer internet employees who rue the destruction their inventions have wrought.

The film’s portrayal of social media causing conflict, isolation, nationalism, and disaster has resonated with a broad audience. But tech insiders say it’s guilty of many of the practices it decries, stoking fear and outrage in exchange for mass appeal.

To address the film and its critiques, Tristan Harris, its star and the co-founder of the Center for Humane Technology, sat down for an interview on the Big Technology Podcast with no questions off limits.

Kantrowitz: Let’s start with your life. Your house burned down in the middle of the California fires. Is that right?

Harris: Yeah, that just happened a few days ago. We’re reeling from it. It’s a pretty, pretty significant event.

How are you holding up and where are you staying?

Luckily we have a lot of different friends. It was our family’s house and we lost basically everything that we own, so it’s a good exercise in impermanence, in non-attachment. We’re just taking it day by day and figuring out soon what the future is going to look like.

Living in the Bay Area, it’s so crazy to see all these natural events happening, and it really brings it home when you speak with someone it’s happened to. I hope you hang in there on that front, and I appreciate you still hopping on the line to speak with me.

Yeah, thanks.

Let’s talk about your main thesis in the Social Dilemma movie. You say tech companies are controlling our lives through algorithms. Is that right?

The major point of the film is that a business model that is infused in the social communications infrastructure that 3 billion people live by, and are dependent on, is misaligned with the fabric of society and specifically poses a kind of existential threat to democracy and a functioning society. If we don’t have a common conversation or a capacity to trust and to have shared faith in the same information and to reach agreement, then nothing else works in a society.

While we’ve had polarized and hyperpartisan media on television and radio before, social media has become the upstream background: even television and radio, Fox News and MSNBC, now get their information from Twitter, et cetera. I think that this business model of doing whatever is best for engagement will always privilege giving each person their own reality.

The way that I interpret the movie is that tech companies are using algorithms to inflame tensions so that we spend more time on their platforms. Does that sum it up the right way?

These companies kind of profit off of our own self-destruction, because the more conflict there is, the more people die, the more attention-grabbing stuff there is. The more tribalism there is, the more outrage and conspiracy thinking there is degrading the epistemic and information ecology, the more money they make. The truth is quite boring and usually not nearly as interesting as being able to assert that Trump does or doesn’t have Covid and it’s a conspiracy, or that Biden was wearing an earpiece and it’s a conspiracy.

Social media companies would say: We are holding up a mirror to society. If you have tribal conflict or racism or people disagreeing about climate change, we are just holding up a mirror to the fact that those fault lines and divisions already exist.

This is an incomplete and misleading thing for them to say because they are in fact holding up a mirror to society, but that mirror is a fun house mirror that warps and distorts the image that we see in return. Specifically, it amplifies bully-like behavior, harassment, hate speech, conspiracy thinking, addiction, outrage.

One issue people have brought up with the film is it replaces one conspiracy theory with another. Kevin Roose from the New York Times said, “I can see how someone who believes in QAnon could effectively replace one conspiracy theory (the cabal controls the media) with another (the cabal controls the media…in California).” Platformer’s Casey Newton said “This cartoon super villain view of the world strikes me as a kind of mirror image of the right-wing conspiracy theories which hold that a cabal of elites are manipulating every world event in secret.” What do you think about those claims?

I really respect Casey and Kevin’s work a lot, but what’s interesting is that this seems to be a misrepresentation of what the film says. The film doesn’t say there’s a group of 10 tech insiders who are deliberately and maliciously mustache-twirling all the way to the bank, bringing out the worst in society or trying to control the media. It doesn’t say that at all. In fact, it says these platforms have a mind of their own, that they’ve become a digital Frankenstein that no one fully understands, and all we know is that it tends to reward the worst aspects of society. It’s the insiders coming to say, “Look, I helped build this thing,” and there’s more authority in that in terms of what’s persuasive.

It’s one thing if you have many researchers sounding the alarm, and by the way, there are so many researchers, especially Black researchers and women of color, who’ve been sounding the alarm on the social impacts of technology for a long time and on how it affects marginalized communities. But the film is rhetorically powerful because it’s the first time, I think, that the insiders who were there at the time can say, “Here are some of the harms that are emerging from these decisions that were made innocuously, and no one knew that it would lead to this harm.”

So I don’t actually think that you could draw the conclusion from the film that there’s a secret cabal of insiders trying to manipulate us. Maybe there’s some extra marketing for getting you to watch the film saying they’re all trying to do this to you, but really if you look at the full content-

Yes. But in the film you see three guys who are standing there and saying, “How are we going to manipulate this person?” Maybe you didn’t say this explicitly, but it certainly seems like these dramatizations are sort of part of the problem that the film is trying to address.

Well, that’s interesting. Let’s make sure we meet it head on, because I really care about authentic debate here. So you’ve got those three AI characters played by the guy from Mad Men, right? There are three AIs. There’s the growth AI that’s trying to figure out how do I get you to invite more people, tag more people, recommend more people, things like that. You’ve got the engagement AI that’s trying to figure out what it can show you that’s going to keep you scrolling, eliminating the bottom of the bowl and removing the stopping cues, things like that.

Then you’ve got the advertising AI that’s trying to figure out how to make sure that each session is as profitable as possible. This is really not far from the truth at all. There is in fact a growth team, which in Facebook’s case built something called PYMK, People You May Know. I actually talked to a Facebook insider from very early on who was proud of the fact that when you just let people add their friends on Facebook autonomously, on their own, they would end up hovering around an average of about 150 friends, which sort of replicates the Dunbar number, the fact that we generally lived in tribes on the savanna and would end up with about 150 close relationships.

But that wasn’t enough. When you run a growth team, you have an AI, and you need to figure out how to get people using the platform a lot. As Chamath, Facebook’s former head of growth, says in the film, “What was the key to addicting a user and getting you for life? It was very simple. All we had to do was get you to seven friends in 10 days, and then we had you for life.” That’s literally what that playbook said. So how did they grow the number of friends that you had?

Well, they kind of injected the user with social growth hormone, almost like we inject cows to make them produce more milk. They said, “What if we could get you to invite and grow to more friends?” And the way they did that is by literally building an AI that asks, “Who are the friends who, if you got them to join, would likely get you to use the platform the most?” So for example, on Twitter: you followed those first 10 users, maybe Ashton Kutcher, Demi Moore, or whoever the first celebrities they had you follow were.

But then when it recommends more users you might want to follow, it picks those users based on which of them would be the most engaging, so that you would come back the most often; it has models of which accounts, if you were to follow them, would keep you coming back more often. So that’s actually a fair and pretty accurate representation.
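(To make that mechanism concrete, here is a minimal sketch of the engagement-first ranking being described. This is not any platform’s actual code; every name and number below is hypothetical. The point is simply that the objective ranks candidates by predicted future engagement, not by relevance or quality.)

```python
# Hypothetical sketch of an engagement-first follow recommender.
from dataclasses import dataclass

@dataclass
class Candidate:
    account_id: str
    predicted_extra_sessions: float  # model's forecast of added visits/week
    topical_relevance: float         # known to the system but unused here

def recommend_follows(candidates: list[Candidate], k: int = 10) -> list[str]:
    # Rank purely by the engagement forecast; nothing in this objective
    # asks whether following the account is good for the viewer.
    ranked = sorted(candidates,
                    key=lambda c: c.predicted_extra_sessions,
                    reverse=True)
    return [c.account_id for c in ranked[:k]]

print(recommend_follows([
    Candidate("calm_gardener", 0.2, 0.9),
    Candidate("outrage_pundit", 3.1, 0.1),
], k=1))  # ['outrage_pundit']: engagement wins, relevance is ignored
```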

We talked earlier about how there’s no cabal, it’s these systems that no one knows how they work. But then isn’t it a disconnect to portray it as these three guys standing there?

I see that point, and I’m speaking to you as a subject in the film, not the director or filmmaker.

Of course.

They have creative control over what they choose to do here, and I think that’s a fair critique. In the cuts of the film that I saw, these are meant to be systems that are simply tracking various features and then recommending things to you, which I think is also what the dialogue represents. If there’s a little bit of mustache twirling in the way the characters are shown, maybe that’s something I’m not picking up as much, because the way the script was written, they’re supposed to be just sort of amoral algorithms, each maximizing for its own goal.

The film brings up some really important points that we ought to be thinking about and maybe the questions are in terms of style. There’s the rise in self-harm among teenage girls. There’s a rise of nationalism. Maybe the rise of loneliness and withdrawal, which are these other impacts.

Do we really think that the Facebook algorithm is responsible for all this, or aren’t there other things going on in society, outside the algorithms, responsible for these?

There’s always going to be a “Yes, and…” here, because of the trend toward loneliness and isolated, atomized individuality (per Robert Putnam’s book Bowling Alone), the elimination of shared public spaces, fewer parks, the hollowing out of Main Street, inequality, more drug use, opioid addiction, various forms of addiction, less meaning… There are multiple overlapping crises that we find ourselves in.

However, if I ask myself, okay, there’s an industry operating on a business model of addiction and engagement, let’s just say engagement, right? It needs you to use that platform for 45 minutes a day, and by the way, everyone else also needs to use it for 45 minutes a day. So in general, having you sit there by yourself on a screen is way more profitable to that entire industry than having you spend more time in community with friends over dinner tables with candles. That’s just built into it. In other words, loneliness and isolation are definitely exacerbated by that background effect that subtly wants to atomize us and pull us apart.

This is not meant to vilify all technology at all. If YouTube were a library, like the Library of Alexandria, for how to make improvements in your life, learn skills, learn musical instruments, do your own medical care, things like this, that would be amazing, right? These are the kinds of things people do find valuable on YouTube. Actually, at our house in Santa Rosa, before it unfortunately burned down in the fires, we used YouTube to figure out how we would hook ourselves up to the generator, so that when the power went out we would know how to connect it and mix it with the gas pipeline and all that kind of stuff.

Now, that’s fine, but the problem is when there’s a business model built on automating where 3 billion people’s attention goes, in languages that the engineers don’t speak. You had cases two years ago where a teen girl would go to watch a dieting video, and what did YouTube recommend on the right-hand side for all those teen girls? Thinspiration anorexia videos, because those were better at keeping attention. These trends have to do with this business model that is subtly influencing the way that all of us are thinking, and feeling, and believing on a daily basis.

These are problems. My only critique is that you watch some of these montages and you come away thinking social media is the root of all evil in our society. I would have loved a little more nuance there. I mean, obviously social media is a problem. But the question is, when you look at the percentages, what degree of responsibility does this stuff actually have?

Yeah. Well, I appreciate you bringing up this nuance. I mean, the film isn’t, I think, ever claiming that all the problems in society are coming from social media…

Yeah. It sometimes gives that impression. I know it’s trying to make a point. And there have been people involved in the film who’ve said it needed to be a little bit more simplistic to get the point across. But that’s sort of the reason we want to have this discussion here: to get a little more nuanced and dig in.

Of course, I completely appreciate it, and black-and-white thinking is one of the externalities of the attention economy, because it rewards simpler, shorter, black-and-white metaphors for the problem as opposed to longer, complex, nuanced, higher cognitive chunk sizes for dealing with these issues.

I think if we made a list of the film’s claims about specific harms: addiction, loneliness, teenage mental health problems, conspiracy thinking, breakdown of shared truth. The film is very specific, I think, about the harms. If we ask which claims the film is actually making, then for each one of them we can find clear evidence of an asymmetric responsibility.

I’ll give you a clear example. In Facebook’s own leaked documents from that Wall Street Journal piece, there’s what has now become a famous stat. They found that 64% of the extremist groups that people joined were due to Facebook’s own recommendation system. In other words, I don’t know if you know this, but back in 2018, they changed their mission statement from “making the world more open and connected” to-

Oh yes, I’ve covered them.

… to “bringing the world closer together,” and the way they were going to do that was with Facebook Groups. We said this in our 2019 SFJazz presentation that’s in the film: they said, “So what did we do? We built an AI that would recommend groups for people to join.” Then Zuckerberg claims in this blog post, “And it works! We were able to get people to join 50 more groups than they would have if we hadn’t built this AI to recommend them.”

Renee DiResta, one of our colleagues who’s in the film, studies Russian disinformation and conspiracy groups. She talks about her own experience as a mom, where she had joined a “make your own baby food” group on Facebook. So, organic, do-it-yourself baby food. You can imagine: what was the most recommended Facebook group for her once she joined that group? Anti-vaccine conspiracy groups, which are another, related, do-it-yourself-medicine approach to being a mother.

But then of course, once you join those groups, you’re recommended Pizzagate, Flat Earth, Chemtrails, et cetera. And there you have that stat: 64% of the extremist groups that people joined were due to Facebook’s own recommendation systems. In the case of YouTube, we know that of the billion hours watched daily, at least 70%… By the way, the stat is two years old, because I think they stopped wanting to brag about how good their recommendation system is after this pushback. But they briefly claimed that more than 70% of all the watch time, so 700 million hours out of that billion, is due to the YouTube recommendation system.

And we know that they recommended flat earth videos hundreds of millions of times. They recommended Alex Jones InfoWars conspiracy theory videos 15 billion times. That’s more than the combined traffic of the Washington Post, BBC, Guardian, and Fox News. So if you make a list of these claims on addiction, loneliness, mental health, conspiracy thinking, there’s clear evidence for each one of those claims specifically.
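(As an aside on the mechanism behind those group-recommendation numbers, here is a minimal, hypothetical sketch of co-membership recommendation. This is not Facebook’s actual system, and the groups and member IDs are invented; the point is that a recommender scoring only overlapping membership will surface adjacent fringe groups, with no notion of content quality.)

```python
# Hypothetical co-membership group recommender on toy data.
from collections import Counter

MEMBERS = {  # group -> set of member ids (all invented)
    "diy_baby_food": {1, 2, 3, 4, 5},
    "anti_vaccine":  {3, 4, 5, 6, 7},
    "flat_earth":    {5, 6, 7, 8},
    "gardening":     {1, 9, 10},
}

def recommend_groups(joined: set[str], k: int = 2) -> list[str]:
    # Rank every other group purely by how many members it shares
    # with the groups the user already joined.
    user_ids = set().union(*(MEMBERS[g] for g in joined))
    overlap = Counter({g: len(m & user_ids)
                       for g, m in MEMBERS.items() if g not in joined})
    return [g for g, _ in overlap.most_common(k)]

print(recommend_groups({"diy_baby_food"}))  # ['anti_vaccine', 'flat_earth']
```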

I’m going to go back to the main question: is social media mostly responsible for this, or is it one factor of many? Why don’t you give us your personal opinion on how you would contextualize its responsibility here?

Okay. So let’s take a look at conspiracy thinking, and before I say this, I want to mention that COINTELPRO and MKUltra were real conspiracies. So I don’t want to suggest that conspiracies aren’t real things. And if I use the phrase conspiracy theory, we also know the CIA created the term conspiracy theory to dismiss things that might have been legitimate. So I want to make sure that we’re all self-aware that this is not meant to vilify any questioning of the establishment narrative.

But if you were to ask, okay: we have a third of the Republican Party inside of the kind of QAnon movement. We’ve got flat earth conferences that are very well attended. We’ve got more 5G-coronavirus-Bill Gates-satanic-cult conspiracy theory stuff than we’ve ever had before. I think we’ve seen more of a rise in this thinking in the last two to three years than we’ve seen in the last 30, I would say. I studied cults earlier in my career, so I’m very familiar with the dynamics of groupthink and self-enclosed belief systems, the ways that evidence is used to further justify that we were right.

Leon Festinger’s work on why prophecies fail shows that when I say the world is going to end on May 22nd at 2:00 a.m. because the stars are aligning this way or that way, and then it doesn’t happen, we just re-justify and double down and say, “We got the math wrong. It’s the same formula, but we were using the BC calendar instead of the AD calendar,” or whatever you want to say. When you think about conspiracy thinking and you have Facebook doing these group recommendations, each of those groups is a self-enclosed echo chamber. And we know from Facebook’s own research that when more than 50% of the group joins come from Facebook’s recommendations, as opposed to users going around and searching for groups to join, we have clear responsibility on the side of Facebook.

We know from the research that the best predictor of whether you’ll believe in a new conspiracy theory is whether you already believe in one.

Okay. So I think what you’re saying is that social media is the main factor here.

Yeah, though I’d probably phrase it a little more delicately: I would say social media has been a dominant force in the rise of conspiracy-oriented thinking, paranoia, and distrust in the last five years. And we have to remember that we’re actually about 10 years into these recommendation systems warping society. Think of YouTube. If you remember, in 2015, 2016, just how toxic YouTube used to feel. I don’t know if you remember the background radiation of hate.

We had a former YouTube recommendations engineer, Guillaume Chaslot, who’s in the film, who built a website called algotransparency.org. He actually monitored the most common verbs that showed up in the right-hand sidebar, meaning, if you look at all the recommended videos across YouTube in English, which words were used the most. I think the list was hates, obliterates, destroys, owns, right?

Yup.

So it’s like “Jordan Peterson DESTROYS social justice warrior in debate,” right? This is the background radiation of hate that we were dosing our population with, again, more than 3 billion people, and we were doing that for years. So I think we have to look at those consequences over time.

If it’s not the majority, not the main factor, but a dominant force, how do you square that with some of the comments you made in the film? One thing that struck me is when you said this is “checkmate on humanity,” tools to destabilize every country, everywhere. Is that putting it in the proper context?

Well, that quote they use in the film, when I say it’s checkmate on humanity, was in reference to a specific thing from that presentation that didn’t quite make it into the film. It had to do with what we diagnosed as the inversion point. Previously, people in AI, futurism, effective altruism, and AI-safety circles have all been worried about the singularity point, the point when AI outcompetes human intelligence and strengths, because that’s when it takes our jobs and takes off and all that. But we missed the much earlier point when technology undermines human weaknesses.

Yeah. That was in the movie. That was a new thought that was interesting to me.

Yeah. But the checkmate point actually followed a different part of the presentation that wasn’t there, which was around deepfakes and the ability to completely break the heuristics our minds use to know whether to trust information.

Trust, okay.

So how do I know that you’re trustworthy? Well, maybe you squint your eyebrows in a trustworthy way. Maybe you use your voice in a trustworthy way. Maybe you have a Stanford shirt on that says you went to Stanford, and I’m the kind of person who appeals to authority, so if you went to Stanford, clearly you must be smart or thoughtful or ethical. Whatever it is that we use as a basis for whether something is trustworthy, it is being reduced down to a simpler and simpler set of signals. On Twitter, it’s how many followers you have. Does it look like you’re in Kansas? Does the tweet timeline look real, and does your photo look authentic?

That’s a small number of discrete signals that are increasingly fakeable, and the checkmate on humanity was the point at which I could completely undermine your faith that something was either human-generated or machine-generated. When that point gets crossed, there’s a sort of checkmate on humanity, in addition to the fact that these systems have controlled our information environment and, by doing that, are kind of controlling human behavior. So there’s a bigger point there too.

Are you upset that the film seems to have taken that line out of context, if it was referring to something else? We talked a little bit about the way these algorithms prey on fear, for instance. Isn’t that sort of a case of the filmmakers doing some of the same stuff they’re decrying?

Yeah. I mean, films have to do editing to compress information down, and they thought it was possible to make the checkmate-on-humanity point from there, because it’s really an extension of the points already being made: if you continue to undermine more and more of human weaknesses, and thereby erode the kind of life-support systems that make up a social fabric, you get to checkmate on humanity from there. I think that’s what they were referring to. But I take the point that, you know, the film has music that is maybe exaggerating or setting the tone.

It’s not just music on that front, it’s the fact that you’re talking about deepfakes, which is a totally different technology from the social media algorithms and engagement machine. And just to juxtapose that, I feel like there should have been more context on that one. I’m glad we’re discussing it, but I’m also scratching my head to see why they would use that without the context that you just delivered.

Yeah, that may be fair. I mean, really, again, it’s through hacking more and more of human weaknesses that you arrive at that checkmate point. The Marshall Islands of technology hacking human weaknesses was when it overloaded our short-term memory. We can hold seven plus or minus two things in short-term working memory, as we know from cognitive science, and we feel that. That was our first felt sense of technology overriding human weaknesses, and we felt it as information overload: I have too many tabs open, or, what was I doing? I came here to open that tab and now I can’t remember why.

That was kind of the first point. Then you can map each of the other points: polarization, giving each of us our own filter bubble, hacking the weaknesses in how we perceive other people’s reality. These are all on a continuous landscape of hacking more and more of human weaknesses until you arrive at checkmate. But maybe that wasn’t as clearly presented in the film, so it’s a fair critique.

I’m giving you a hard time here, but this stuff was rarely talked about before you started speaking out about it, and it’s not all going to be perfect. That’s why I’m glad you’re doing the work that you’re doing.

I agree with you, and we encourage people, after they watch the film, to really educate themselves and go deeper on our podcast, Your Undivided Attention. We interview many of the subjects who are in the film, and they go into detail. It’s not exaggerated at all; it’s just an honest reflection of what they found on Russian disinformation or YouTube recommendations.

I’m laughing because they did have the website up at the end of the movie, but before anyone could take down the URL, Netflix already started auto-playing the next thing, probably based on an algorithmic recommendation.

And that’s Netflix for you.

How much of this comes down to the actions of the platforms versus the actions of the people? Is there some level of responsibility that we ourselves have, and can we blame all this stuff going on in our lives on Facebook and on YouTube?

I don’t blame the consequences of my life, or the entire world, on Facebook or YouTube. I think where responsibility lies has to do with where there is asymmetric power. If I have more than 50% influence over your actions, and you’re choosing from menus that I am providing… Look at the surface area of choices that you make in your life, and ask what percentage of that surface area occurs on a smartphone, through a handful of user interfaces designed by a handful of 21- to 40-year-olds in California, mostly in the Bay Area.

Well, whatever that percentage is, it has been going up over time by a lot, right? One of the things we talked about in the film: my background as a kid was in magic, and what astonishes me in magic is how many people think they make genuinely free choices when magicians are constantly manipulating the basis of those choices. I mean, the simplest thing is, by controlling the menu, you control the outcome, right?

In the field of rhetoric, we know from Bertrand Russell’s “Russell conjugation” that you can conjugate the emotional feeling you want someone to have about something before they hear it. So you can say “the embattled leader” or “the strong leader” of a company… I’m trying to think of an example. I don’t know who’s an embattled leader. Travis Kalanick from Uber.

So you could say “embattled CEO Travis Kalanick,” and I’ve told you how I want you to feel about Travis before you even think for yourself: do I feel good about Travis or do I feel bad about Travis? In general, our choices, our thinking, our feeling are being pre-conjugated by interfaces that we don’t always see or control. The work of George Lakoff is really good on this in terms of language. In magic, and in design generally, we use the phrase “choice architecture”: we live inside a choice architecture, a menu. And that choice architecture privileges certain features of choices. When I buy food at Starbucks, does the menu privilege the price with a dollar sign, or does it privilege the calorie count? That shifts the basis of the choices I make by privileging one piece of information over another.

Or I could change the social psychology, where everyone is rushing to get the McMuffin sandwich illuminated on a shiny sign because everyone else is going for it. Or maybe that’s not the best example, but using social proof by saying 3,400 other people like this post, don’t you also want to like it? This also influences our psychology. In fact, social proof is one of the most powerful ways of making something seem legitimate. I mean, take conspiracy theories.

If everyone believes it, then how can it be false? Usually, if a majority believes something, we take that to mean it must be true. But if I’m a deceptive actor, it’s not very hard for me to slowly grow a population to get more than 50% of people to believe in something. So we don’t like to admit to ourselves the extent to which our sense-making and choice-making are driven by factors in our environment that are outside of our control. And the degree to which we are not aware of those things is the degree to which we are controlled.

Meaning, if I know about social proof, then when I see that 3,000 other people like something, or that the majority of people believe it, and I say to myself, on what basis would I know that that was true, that’s one micro-degree of free will, because I’ve created an awareness of one of the things that would otherwise turn me into an automatic believing machine.

So that’s a big, complex answer. But when you ask about free will, think of the level of asymmetry between the people designing technology and everyone else: structuring the choices, and the features, and the colors, and the notifications, and the appearance of news feeds. The fact that news feeds don’t show you whether an article was posted five years ago or yesterday allows people to post not fake news but fake recent news, news that was not actually recent. Those are all decisions made by designers in California, and they have big consequences.

I remember the opening scene is you sitting down and immediately checking your phone.

Yeah, exactly.

You’re aware of this stuff and you can’t help it.

This is actually really important for people to get. No one should feel bad when we’re using technology and feel like we got sucked in or… I have been studying these things for such a long time and do I think that I’m immune when I post something about The Social Dilemma and I get lots of likes versus a bunch of angry comments? I mean, social approval is one of the things we are most evolved to care about. If 99 comments on a post are positive and one is negative, which does our-

That negative sticks with you.

Yeah. Does our mind remember the 99, or does it loop, and loop, and loop on the negative, right? In general, we’ve evolved to focus on the negative because that’s helpful for us evolutionarily. But with social media, it’s never been easier to see a tree of people who don’t like you. And if you’re Black or LGBTQ, people who are more harassed and discriminated against on social media, you have infinite hate trails that you can keep clicking through for hours. There’s so much vitriol that can so easily take us over that I think we have to be aware.

One thing the film didn’t discuss is the share button and the retweet button. I think they have a profound effect on the type of information we share and the type of information that populates these platforms, because people write for retweets and shares. So I’m kind of dubious that it’s the algorithms; I think the share and the retweet buttons are much worse, but I never hear about them. So what’s your take on that?

It’s interesting because in my mind, the film does include that, but maybe it’s not as evident.

There’s a real, clear issue with the fact that when we don’t think before we share, we’ll pass along stuff that’s fake or sensationalized or confirms our emotions without a second thought, whereas when we pause to think, we are often much less susceptible. Even Twitter is running an experiment right now, prompting people to read an article before they retweet it, or asking if they really want to. And that’s been shown to improve the information ecosystem on the platform. I wish that had been foregrounded a little more versus algorithms and engagement.

Yeah. I mean, I think the reason my mind includes the share button in this is that those algorithms wouldn’t work if people weren’t hitting share buttons and retweet buttons, so they’re included in the premise. But I understand what you’re saying. And the example of what Twitter is doing, showing you a prompt saying, “Have you read this before you share it?” I mean, these were things that were being advocated five or six years ago, frankly, among people in our community.

So it’s taken a long time to get some of these things in. I would use the metaphors of epidemiology. What’s interesting about the coronavirus is that it has imbued the culture with a new way of understanding the world: in terms of infection, superspreaders, shedding, who is an asymptomatic carrier and who’s a symptomatic carrier.

So each of us is spreading information and infecting others with beliefs and biases, and some of us are superspreaders. Some of us are shedding biases whenever we like things and retweet things; we’re shedding biases for how other people should see things. Some of us are doing that symptomatically, very obviously polluting the information environment. Some of us are doing it more asymptomatically, maybe boosting things in more subtle ways by clicking on them. And the fact that we click on them actually makes the algorithm up-regulate them to other people, even though we don’t explicitly share them.

So when you think about it that way, the share button and those pauses are like loading vitamin C into each carrier and saying, “Maybe we’re going to put a mask on, and so we’re not going to share everything with everyone else.” That’s what that share interstitial on Twitter is.
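(A minimal, hypothetical sketch of that “asymptomatic spreading” idea in code: implicit signals like clicks and watch time can up-regulate an item’s distribution even when nobody explicitly shares it. The weighting function and its numbers are invented for illustration.)

```python
# Hypothetical distribution scorer: explicit shares weigh most heavily,
# but implicit engagement alone can still push an item into more feeds.
def distribution_score(shares: int, clicks: int, dwell_seconds: float) -> float:
    return 3.0 * shares + 1.0 * clicks + 0.1 * dwell_seconds

# An item nobody shared can outrank one that was shared:
print(distribution_score(shares=0, clicks=50, dwell_seconds=400))  # 90.0
print(distribution_score(shares=10, clicks=5, dwell_seconds=30))   # 38.0
```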

But is it more profound than that? Because one of the things people have talked about, especially when you look at algorithms, is: what do you make of the fact that some of these same problems occur on WhatsApp? WhatsApp has the forward button, but it doesn’t have ads, and it doesn’t have an algorithm that shows you stuff, and you still see people spreading conspiracy theories. WhatsApp isn’t dependent on the time that you spend there. So isn’t that a fairly compelling counterargument to the one you’re advancing?

Well, WhatsApp’s business model is still dependent on its parent company, which is based on an engagement-driven model. That’s why Chamath, the former VP of growth at Facebook, says, “How do we make these things work? We hack into human vulnerabilities. WhatsApp has done it. LinkedIn has done it. Facebook has done it.” He uses that line, and he includes WhatsApp because it is hacking that same thing. It is delivering reasons, excuses, for you to go back and check messages. It is creating social signals for us to respond to.

It makes us feel guilty when we don’t respond to them, whether that’s feeling guilty that we didn’t like our partner’s post back, or feeling guilty that we didn’t respond to a message. With read receipts, you now know that I saw that message, and if it was a big message from you, about your house burning down or something like that, and I don’t respond, now I feel really guilty.

This is all tapping into really deep human psychology and vulnerability. So it actually is driven by that same business model, just not explicitly advertisement-based. I mean, WhatsApp is essentially part of an advertising-based business model; the advertising just isn’t happening on WhatsApp. It’s happening on the other platforms that subsidize it.

So is the idea that basically it’s driving all these engagement methods so that you end up going to Facebook and Instagram?

Yeah. I mean, I think…

Yeah, there are no ads there. So it’s interesting that the same stuff is happening.

Yeah. I mean, that’s why the ads themselves, the rectangles of the advertisements, are not what the critique of the film is about. It’s about a business model that is dependent upon the zombification of human beings, domesticating people into addicted, distracted, outraged, polarized, and disinformed humans writ large. That business model powers WhatsApp too, and it still benefits from turning us into addicted, distracted, responsive human beings.

So again, if we want a society that’s addicted, distracted, and hyper-responsive to others, then maybe that business model is aligned with the social fabric. But in general, these things weren’t designed by social theorists asking what makes a healthy social fabric, or by child psychologists asking what’s good for children. They were just designed based on, “Hey, did we get a flywheel turning and get engagement and growth going up and to the right?”

It’s been interesting being in the Valley for a while, and this might have been in the film too, but seeing the caution executives take when they hand this stuff to their own kids, whereas none of this stuff comes with a user manual for anybody when it comes out of the box.

I mean, the point to make there is: would you trust a doctor if you asked, “Would you get this surgery for yourself or for your own kids?” and they said, “Hell, no”? Would you take their advice? If you go to a lawyer and ask, “Would you argue the case this way if it were your own children?” and they say, “Oh my god, I would never do that,” you’d say, “Well, why would you give that advice to me?” The CEO of Lunchables didn’t give his own kids Lunchables.

I think that’s all you need to know if you’re a parent. One of the basic principles of ethics is the ethics of symmetry: doing unto others as we would do to ourselves, or even to the most vulnerable, our own children. If we lived by that ethical protocol, I’m sure we would live in a much healthier, better society. And the unit test for whether we had made humane technology for kids would be that parents in the tech industry were giving that technology to their own children and didn’t feel bad about it.

Why did the film go with Netflix? Netflix is sort of ground zero for all these problems, right? It says it’s competing against sleep. It uses a recommendation algorithm to make sure that you’re watching the next thing. And you guys, or the film, also created accounts, I think, on Facebook, Instagram, and Twitter. What’s going on there?

I hear you asking multiple questions, and one is about the hypocrisy of starting social accounts on the very services that you’re criticizing. That critique is answered by the fact that if you want to change public perception and create the only thing that will actually change these systems, which is government regulation driven by a massive cultural movement of shared understanding about the problem, kind of the climate movement of culture, how are you going to reach millions of people except through one of these limited platforms, whether it’s YouTube, Facebook, Twitter, et cetera?

So I think the fact that we have to use these platforms to critique them and to build support for changing them speaks to their monopoly power. And the fact that it would be seen as hypocritical actually empowers the antitrust arguments that are proceeding right now on Capitol Hill. The fact that we don’t have another place to go, the fact that children who want to opt out of Instagram don’t have anywhere to go but another addictive, manipulative system like TikTok, speaks to the problem. So we actually use that as further ammunition: that is exactly right, that’s what the problem is here.

Now, in terms of why the film went with Netflix: I think one thing that gets confused about these platforms is the belief that Netflix is a video site and therefore only competing with other video sites like YouTube, or that, back in 2009, Facebook was only competing against other social networks like Twitter. I remember one day I was at a cafe in San Francisco, Samovar Tea Lounge, and I was talking with a friend who was deep in the growth team at Facebook. And by the way, that’s where this work comes from, because I know so many people who’ve told me about these decisions over and over again, what the calculus was.

He said to me, “People think that our biggest competitor at Facebook is Twitter or one of these other social networks, or Myspace or something. It might have been. But actually, our biggest competitor is probably YouTube,” because they’re not competing for social networks, they’re competing for time spent. So it doesn’t really matter whether it’s Netflix or YouTube or whatever; everyone is competing in a finite attention economy. That’s the problem we have to face, because it is a commons, an attention commons, a consciousness commons. We are all sharing one airspace with a finite amount of attention.

Much like in the climate movement, there’s something called Earth Overshoot Day, the day in a year by which we have overshot the Earth’s resources, gone past the replenishment rate. That day moves earlier and earlier because we keep consuming way beyond our means with an infinite-growth economy. The same thing is true for the attention economy. We have a shared commons, and we are overshooting the limited capacity of attention that we have as a culture with, essentially, trivia.

And this is where I think Aldous Huxley’s book Brave New World, or Neil Postman’s book Amusing Ourselves to Death, really comes in: instead of the Orwellian dystopia of censorship and Big Brother and surveillance, we have this other, Huxleyan dystopia, the Brave New World one, where we give people so much amusement that they amuse themselves to death. We give people so much trivia and triviality, egoism and passivity, that we devolve not by restricting ourselves but by overwhelming ourselves with our vices. And I think that’s the dystopia we’re actually living in, strangely simultaneous with a kind of Orwellian surveillance dystopia. So we’ve kind of gotten both.

I think what we have to fundamentally change is to build a healthier, more humane attention economy, one that respects the finite commons, the airspace we have to share, and that reflects the things we would want our common attention landscape to reflect.
