This is how Facebook's CEO is thinking about democracy, speech, and racial justice at a critical moment.
On Tuesday morning, Facebook CEO Mark Zuckerberg led a tense video meeting with 25,000 of his employees to address the issue that's divided his company and the public over the past week: how to handle Trump's controversial Facebook posts that some see as a glorification of violence against American protesters.
Recode obtained leaked audio of the meeting and transcribed it.
In recent days, Facebook employees have shown unprecedented levels of open dissent against Zuckerberg by criticizing his decision not to take down or moderate Trump's recent posts that referred to the ongoing protests in the US against racism and police brutality by saying, "when the looting starts, the shooting starts." They've also taken issue with the company's decision to not fact-check the president's posts that shared misleading information about voting by mail.
In this nearly hour-and-a-half-long meeting, Zuckerberg first explained his process and rationale for leaving the post up, including saying that he didn't see Trump's apparent reference to civil rights era segregationist police rhetoric as being "read as a dog whistle for vigilante supporters to take justice into their own hands." In the latter Q&A portion of the meeting, employees pressed Zuckerberg on how he came to that conclusion and whether there's enough diversity in the company's upper ranks. The dialogue in its entirety offers fascinating insights into the thinking of one of the most important business leaders of our time on issues of democracy, speech, and racial justice at a critical moment in US history, and in the face of sharp criticism from many of the people who work for him.
This transcript was edited for clarity, and parts where the audio was not audible have been omitted. This transcript doesn't include around 10 minutes of the beginning of the meeting, when Zuckerberg called for "unity and calmness and empathy for people who are struggling" during this time, acknowledging the history of racism in the US and current pain around that.
On his process of initially determining what to do with Trump's controversial posts
Mark Zuckerberg
I do just want to acknowledge up front that, you know, this isn't like 100 percent clear-cut of a decision, even though I do think that the underlying principle of the platform and our policies and the evidence weighed strongly in favor of making the decision.
So, this stuff is difficult. But let me go through quickly what the process is because I know a lot of people have asked questions about this. And some of the things that we weighed during this. I'll try to go through this quickly because I think I talked about this on Friday, and I know other people have done Q&As, and I have written about this as well.
But the basic process was the president tweeted early in the morning on Friday, and I was asleep. The policy team saw it, started working across the East Coast and the London teams to pull together an analysis that would be in my inbox when I woke up in the morning. So I could see, basically, the analysis of the post and the policies that bore on the recommendation. And I got that when I woke up around 7:30 in the morning, and it basically outlined three categories of how one might interpret this and what each would lead to.
So, the first category would be this was a discussion of state use of force, which is something that we allow. States are legally allowed to use force, in many cases, and discussion about that, and even threats around it, are things that our policies allow, and the team concluded that that was the most likely reading of this, the most reasonable reading.
The second category would be a prediction of violence in the future. If someone's saying, "If this happens, then this will happen," not necessarily encouraging it or calling for it in any way. That also would be allowed across our services. And the team basically suggested that that would be the second most likely way to read this.
And then there's a third category, which is incitement of violence, which is someone directly calling for violence, you know, and if we decided it was incitement of violence, then we would take it down. And just to be clear on this, there is no newsworthiness or politician exception to our policies on an incitement of violence. We'll get into that in a little more detail in a minute. But this is kind of an important nuance because if anyone is calling for violence, you know, it's not clear that the right thing to do would be to put that behind a flag but keep it up.
You know, if somebody is actually going to encourage violence, I think in general, you just — you just don't want that content up. But our policies around incitement of violence, you know, have pretty — have some clear precedents right around if people have to be calling for violence or targeting specific individuals. There have been examples of government officials around the world, we've taken them down. There was a legislator in Hong Kong who called for the police to come in and clear out and kill the protesters to restore order in society. You know, that was — that's obviously inciting and calling for violence. We took that down. And there have been cases in India, for example, where someone said, "Hey, if the police don't take care of this, our supporters will get in there and clear the streets." That is kind of encouraging supporters to go do that in a more direct way, and we took that down. So we have a precedent for that.
And by the way, while we're on that, we also have a precedent for taking down Trump's stuff. You know, this is something that I think a lot of people haven't necessarily really focused on, but earlier in the year he ran — or his campaign — a bunch of ads that we ruled were census misinformation and took them down. So this isn't a case where he's allowed to say anything he wants or that we let government officials or policymakers say anything they want. And we have rules around what is incitement of violence. We looked at both — the basic interpretation was that this did not clearly fall into those rules.
On why he believes Trump's reference to looting and shooting at protests has no history of being read as a "dog whistle" for vigilante violence
So then after basically getting that, I spent the rest of the day talking to the team and talking to different people and getting different people's opinions, including calling a diverse set of folks across the company and factoring in, you know, a lot of different people's opinions went into the initial policy analysis. We can get into more detail on that in a minute.
But, it was a discussion where, even if the initial assessment was we shouldn't take this down, I spent a lot of time trying to wrestle with, "What are the best possible arguments for why this would be incitement to violence?" So we're getting into the history of the comment around "when the looting starts, the shooting starts," and it's clearly a troubling historical statement and reference, whether or not it's inciting supporters to go to violence, and we basically concluded after the research and after everything I've read and all the different folks that I've talked to, that that reference is clearly to aggressive policing — maybe excessive policing — but has no history of being read as a dog whistle for vigilante supporters to take justice into their own hands. But I spent a lot of time basically going through all these different arguments about why this could potentially be over the line and thought very carefully about it and knew that the stakes were very high on this. And I knew that a lot of people would be upset if we made the decision to leave it up.
But then after that period, basically, I couldn't get there. I couldn't get to that even with my personal feelings about the content and even knowing that a lot of employees would disagree with this. I think the principles that we have and how we run the platform, the policies that we have, and the evidence here, overall, on balance by quite a bit, would suggest that the right action for where we are right now is to leave this up.
On changing Facebook's policies
Now that gets to this question, which is, well, are the policies correct? And how do we act on this going forward? And one of the things, by the way, that I explored is what a lot of other people's first reaction is, which is, you know, "Why does it have to be binary?" Right? Why does this have to be, "Take it down or leave it up there"? In general, I don't think anyone else has taken it down. Twitter didn't take it down. Most people, I don't think, think that it should come down.
On speaking with Trump about his decision
And sorry, there was one other really important point on the process that I wanted to make sure that I talked through, which is that there's been a lot of talk and that there was some press around us reaching out to the White House. And I want to clarify what happened here. You know, earlier in the morning, our policy team reached out to make sure that the White House understood the policies and to express concern about the post and whether or not it violated our policies. What basically happened here was, they got escalated on their side and later in the day, after the decision had basically been made about how we should handle the content, the president called me. And I used that opportunity to make sure that he understood directly how I felt about the content. And that I felt that it was divisive and inflammatory and unhelpful and pushed on that. But I wanted to make sure that that detail of how the process worked was kind of addressed upfront in this and I'm happy to talk about that more as well.
What he will do moving forward
So let me talk a bit about what I think we should do going forward. And I know that there's a lot of discussion around this, so don't consider what I'm going to say here to be the final word by any means, but I just want to share some reflections on what I've learned in terms of where our systems and processes can be better and areas for improvement.
So, the first, and I'm gonna go through seven things that I think we can improve, starting with things that were specific to this decision. And the second category is around how we can improve decision-making overall. And then the third is around proactive things that we can do to work on racial justice and civic discourse. So I'm starting with kind of the most tactical things around this policy specifically.
On Facebook's policies about use of state violence
I know there's a real question that we need to address in our policies, which is that right now, you know, as I mentioned, discussion on state use of force is allowed in our policies. I know there's a lot of good reasons for that. But I think what we're seeing right now is that excessive use of force, by police especially, is a huge part of the problem. And that I want to make sure that we're not somehow creating an environment where the policies allow for discussion of police going in and taking an enforcement action, but not somehow unequally allowing people to talk about the other sides of that.
I also want to make sure that we are balanced on the discussion around state use of force because clearly not all state use of force is legitimate. And I think that we need to make sure that, that there are policies around that, especially going into a period where there may be a somewhat prolonged period of civil unrest here in the US, that we can … that either we're basically forward-looking and can have the policies in place that reflect the fullness of where the country is right now.
On how Facebook's voter suppression policies aren't ready for new challenges the pandemic has introduced to voting
The second is kind of related to the fact that we — is another policy question which is related to the fact that we're kind of in a different situation now than just a few months ago before Covid, before these murders, before the protest. And this one, rather than being about violence, touches on another post that was controversial, which is around voting and the election.
And I think at this point, it's really clear that we still have a pandemic, and come November voting is going to have to look a little bit different this year, and we have done a huge amount of work on elections and election integrity over the last several years since 2016 to basically get to a much better place on this.
Going into this election, I felt very good about where we are. But I think Covid really changes things. And it means that there's going to be new dynamics around how people are voting, and a lot of fear and uncertainty.
And I just want to make sure that our voter suppression policies are fully up to date with the new reality that the world is in so we can make sure that our policies encompass all of the things that could be potentially harmful or suppressing voting during this somewhat extraordinary election. So that's the second area.
On "nonbinary" labels that Facebook could place on contentious posts from world leaders like Trump
The third area which we talked about a bit is exploring nonbinary labels for content. I went through a bit why it wasn't obvious to me that this is the right thing to do. But I know that a lot of people — there's a lot of energy around this internally. I know a lot of smart people are thinking about this, which means that we'll probably get some new ideas that we hadn't had before. That, to me, at least is a signal that I want to hear those ideas. And I want to have a chance to engage with them and see what people are thinking, to see if there may be a different way to look at this going forward than we have to date.
And I want to be clear, I don't want to over-promise here because I do think that the current stance that we have is reasonable and principled. But I also know that there's a lot of smart people thinking about this right now and that's a good area to follow up on as well.
On improving decision-making
In terms of improving decision-making, there are a few things that I've heard pretty clearly through this process and that I think we can try to improve on going forward. The second is basically transparency and in making sure that the procedures are clear around decision-making. Right? So what goes into the briefing email that I get for escalations like this outlining — basically outlining — the different issues? What different groups are involved in that? I actually think internally, most people would feel pretty good if they understood what that process was, you know, like the outcome of the decisions. But we do incorporate a wide diversity of perspectives and roles into this. And a bunch of them I also generally seek out directly myself, but it's also included in the process. And I think just making sure that that is codified and more transparent will be helpful.
On what he would have done differently
For example, one thing that I wish that I had done on Friday morning was just sending out a quick note to the company telling everyone that we're looking at this, that they should expect to hear from me in five or six hours because I was going to take a bunch of time to think about this and make sure I understood all the arguments and read a bunch of historical context and get input from the external civil rights advisers and different folks like that, who were sending notes about how we should take this. So I think more transparency so that that way people can get a better sense of the procedural fairness on the specific decisions.
On having a diverse set of opinions in process-making decisions
The next one is — more broadly than a specific decision itself, I think in some ways it's even more important that we have the right structures around inclusion, to make sure that you know when the precedents are being set, that you have the right diversity, and everyone is helping to be involved in setting those precedents, so that when you come to a decision like this, it's not just a matter of having diverse views in the room for this decision. It's — we want to make sure that the org is set up and that we have the right points of view in every decision before this.
So that's something that I'm gonna follow up on as well. And there may be some org changes that we want to make around that. Just to elevate some of the work that we're doing around inclusion, to make sure that it's not just these decisions that they should be involved in — it should be stuff more broadly.
Now, again, I actually from a process perspective think we do pretty well on the process around that today. But I do think that there are probably going to be areas for improvement and areas where it may make sense to change some of the orgs around in some way to improve this and elevate inclusion a bit more.
Employee question on how employees can trust leadership on these issues
Facebook employee
We've been told by internal PR that you have a call with Trump only after it was on the news, not before Friday's Q&A. On top of that we are getting … PR-focused posts from Mark sharing the head of PR and director of PR, posts that say that you're listening to us but at the same time with no action. And more importantly, that you don't understand why it is not just about the events from last week but the everyday lives from people of color inside and outside this company. Very quick examples: Sheryl saying the meeting with civil rights [groups] went well, when they said the meeting was a disaster, and that Facebook doesn't understand the current situation.
One of the directors of PR tried to show [it's] like a positive thing that Facebook shows billions of African Americans being killed by police on our platform. Legit posts from black activists are being incorrectly flagged by Facebook. My question is, how can we trust Facebook leadership if you show us a lack of transparency and lack of understanding on the world outside of their privilege?
Response to employee question on trust
Mark Zuckerberg
Yeah, I understand this question. I mean, I, I mean, generally, what I would say — I would say three things. One is: We're trying to be as transparent as possible. And the decision came up on Friday. It was a tough one. So I spent most of the day working on it. And then we made the decision and then worked on communicating it and writing up the explanation and getting in front of employees to talk about it by Friday afternoon. So I think on that, we tried to move as quickly as we can, but there's a tension between this being a difficult decision and being able to communicate what we're thinking about it as quickly as possible. I think we could have done a little bit better. I could have been clearer that I was working on it, that I'd address folks in a few hours. But we did that.
Different leaders have stepped up to do Q&As. Schrep [Facebook CTO Mike Schroepfer] did a Q&A yesterday. I'm doing this today. I'm sure we'll continue the conversation.
We're trying to be as transparent as possible here, even though I get that there's a lot of questions. And in terms of the decision-making process, I think that's one of the things that we want to follow up on to make sure that there's transparency about what goes into the policy briefings. Whose input is in that? Who is around when I make the decisions? Things like that.
I generally think that when we provide more transparency, there will be — it's not that there won't be areas for improvement, but I think that people probably feel pretty good about some of those processes, even if the outcomes aren't exactly what they want. Um … sorry.
So then you were asking about how can we trust the intent of leadership, and I mean, on this, I would, you know, I understand that a lot of people don't agree with this decision. And I think that — I understand that.
You know, I'm not disappointed that not everyone agrees with the decision. I think people come at things with a somewhat different approach and could reasonably make different assessments.
And I think that that is fine and good — that there's a diversity of opinions. I also know that this isn't completely one-sided and that there are a lot of people who agree with the decision as well. Even if they don't feel like they want to express that loudly inside the company right now.
In terms of the intent of leadership, I mean, look, I do think that it's one thing for, you're kind of seeing like every corporate CEO across the country right now just stand up and say, "All right, yeah, Black Lives Matter. We stand with our black community." And it's like — that stuff — I think it's important to say and remind people to say it, but I don't think it takes any particular courage to say those things, like, when there's a huge crisis.
I think what I would hope that people would look at is the track record that I and the other leaders have of focusing on these issues before it was in the news and going from the day that I founded the Chan Zuckerberg Initiative, justice and opportunity was one of the central pillars. And in terms of racial justice, I think the interactions with the criminal justice system are one of the areas where racism has the biggest disparate impact and desperately needs to be fixed. And there's so many aspects of that.
And I do feel like there's something to that where, you know, the fact that this is a thing that I engage with a lot of people in the company and outside for a long period of time, there are a lot of conversations we've had. This is not the first policy decision that we've taken around this. And I think the overall impact that the company has had has been pretty positive for giving underrepresented groups a voice — I think should give a — hopefully — should give people some context. And that it's not like — I'm not new to this. I may not understand it as much as, you know, someone in one of the affected communities who lives it every day, but it's not like I just showed up or started caring about this, either. So that is one piece just on people and how you think about leadership.
And then in terms of how you think about the company and the values and the kind of overall moral impact of what we're doing. I think a lot of people — there are a lot of people who want to push on us to do more in different directions, and the reality here is that we're an incredibly important communication platform.
And the incentive for almost any community is going to be to push us to do — to basically try to do as much as possible to help a cause. And that's good. And they should, they should be doing that, it makes sense. But it kind of sets us up in this dynamic where it's very difficult for us to kind of ever do everything that anyone, that any specific party, wants. So basically, all different groups on all sides of issues end up being quite upset with us. And because of that, they air the things that are negative that we didn't do more than they celebrate the things that we did do.
And I think we've seen an example of that this week, which I've mentioned here a couple of times, which is, you know, the discussion, even just in this set of episodes, has been overwhelmingly about the impact of whether we added a flag to Trump's post. And while I get that that's obviously an important thing in the world. And I understand that it's really important for employees and a lot of people in the community to understand, you know, whose side are we on? Taking this as a signal — even I don't agree that that's how that should be interpreted.
I actually think the fact that the video of the murder was posted through giving people a voice on our service, something that becomes enabled that way, has just had an immense impact. And I just — I would urge people to not look at the moral impact of what we do just through the lens of harm and mitigation. That's clearly — that's a huge part of what we have to do. I'm not downplaying that, and we spend massive resources, thousands of people working on this and billions of dollars a year.
But it's also good to remember the upside and the good and the giving people a voice who wouldn't have previously been able to get into the news and talk about stuff and having painful things be visible. And I think that matters, too. So I guess that's kind of, that's how I think about all that stuff.
Employee question [seemingly read out by comms person]: How many black and other people of color employees were involved in the decision around whether or not to take action on Trump's "looting … shooting" post?
I don't know the exact number. But I mean, here's what I can tell you. There's the initial policy briefing. I know that there are several black employees who are part of the group that both … just as black employees who happen to be doing a role and institutional roles around diversity — and focusing on making sure that we institutionally are representing different perspectives and the process — that goes into the initial process. Then when I get together the team, it's a small group because we're trying to have a productive conversation with, you know, eight or nine people.
Maxine was there for hours as we worked through this, even before we got on that call. I personally care a lot about the opinion that she had and what she was hearing so I actually called her directly myself one on one, just to make sure that I had that view. And all the while we were also getting briefs and kind of people sending emails and opinions from the external civil rights advisers and folks like that — which include a number of people of color — so it's, um, so it's, I don't know the exact number but — and there may be ways to improve this further. But I actually think if people had transparency into the process, I think that they wouldn't necessarily feel bad about that part of it.
The part where I actually think that we might go to do somewhat better is there were clearly a set of precedents and decisions that were made that led to this being the right thing to do in this case. And while I believe that there were, I'm sure that there were black employees and people representing specific institutional interests around diversity and things like that included in those policy-making efforts as well — I just think that this whole set of events is a call to elevate that organization to work a little bit more and make sure that it's really on all of the other things around this. So that way it's not just by the time you get to the specific escalation question — it's kind of the framework of the infrastructure that you've built. That all is including inclusion at the appropriate level. Because like I'm saying, I mean, giving people a voice is a huge priority and principle for us, but so is serving everyone. And that's really important. We take that very seriously, too.
Employee question on Facebook considering labeling posts like Trump's in the future and who was involved in the decision-making process
Facebook employee
And you kind of already answered my question, which is like, why don't we have this — why do we just like pick between this binary when we don't have anything in between? So I just want to confirm before moving on to my follow-up that you are considering all the informed treatments [such as Facebook labeling posts that contain violent state speech] that are going on right now that people have done and posted about.
Mark Zuckerberg
Yeah, and to be clear, what I've heard so far is that a lot of thoughtful people are working on this. And I'm planning time to go through those. So I don't want to say that I've sat down and already looked at all the informed treatments and the ideas because I actually haven't had a chance to do that yet. Right now, I'm just going off the fact that there's been a lot of energy around this, I think it's a reasonable question. A lot of smart people are looking at it. So I am imagining that there are going to be ideas that we're going to want to consider and that I want to learn those things.
Facebook employee
Awesome. So my follow-up is kind of adjacent to how you made that decision. And kind of also ties into the previous question, where I still feel like you're being a little bit vague on who exactly was involved in this decision and whether or not you're the ultimate person making the decision. So I would love for you to say exactly which execs are involved in these meetings, who the small circle is composed of, which teams they come from, who they represent, and where they voted on this issue, because I would really love to hear exactly who is involved in this. And like you said, more transparency would be good for this process. So I think employees would love to hear that.
Mark Zuckerberg
Yeah, sure. It's, um, you know, I think it's, it's, it's basically who you'd expect, and I'll make sure I'm not getting anyone wrong. I'll follow up if I'm missing this. It's who you would expect: It's Sheryl. It's Nick. It's Maxine. It's Joel. On issues that are going to be sensitive for employees, it's Laurie. On issues that might be sensitive legally, it's Jennifer Newstead, who is the general counsel. You know, it's — and I don't know if I'm missing anyone or — Monika Bickert, who runs the actual team that sets the content policies.
Facebook employee
I don't know. Correct me if I'm wrong. Besides Maxine, everyone you've listed is white, correct?
Mark Zuckerberg
That's correct.
Facebook employee
And these are the small circle groups that are making the decisions, of which you only included one black woman, and that was it. And you also have spurred up such an amazing initiative to fund my work directly, which I'm very proud to work on — the integrity team — and it doesn't seem like that team, which specifically works on voter suppression, societal violence, and …
Mark Zuckerberg
I'm sorry. I believe Guy was included, too. Sorry. I think that's it.
Facebook employee
So I attended Guy's Q&A this morning …
Mark Zuckerberg
Maybe he wasn't. I actually, I'm not sure if he was.
Facebook employee
So I don't think it's probably great that we're not super clear on whether or not the VP of integrity was included on an integrity decision involving civic matters of voter suppression and societal violence, right?
Mark Zuckerberg
Um, yeah, I think you want to make sure that you have people's viewpoints in this. I think I would say that his view, that his role is probably more to enforce — like build these systems in place to make sure that we enforce this well than to specifically weigh in on a content policy decision. But I mean, Guy is a very thoughtful person and he's definitely someone whose opinion I'd want to make sure is included in the process.
Facebook employee
Okay, so in the future with escalations like this, Guy would be more involved or someone representing from my team, or societal violence, or our misinformation, or our voter suppression orgs, would be more involved in a decision that directly affects this?
Mark Zuckerberg
Yeah, although what I'd say is, I think [it's] Monika Bickert's team that does the policy analysis. Sorry, I lost you there. I don't know if you're still on. But Monika Bickert's team is the one that basically is charged with defining the content policies that weigh these different equities, giving people a voice, preventing harm, and safety and all the different types of harm that we talked about, and making sure that we serve everyone. So that voice is certainly there. And then she also gets opinions from a lot of different sets of people, as do Sheryl and all the other folks who are in the room, who are soliciting a lot of opinions and bringing them to the table when we talk about this.
So I feel like I was able to hear all the different opinions. You know, the thing that I would look back and say, okay, we really didn't do the process correctly — if this were the case — would be if after the fact that I made a decision, someone raised some context or question where I thought, "Hey, I hadn't considered that before." Or, "Wow, if I considered that then maybe we would have made this decision differently."
And that's certainly not how I felt here. I do feel like all the folks in the process were pretty rigorous. I feel like all the arguments were considered. Things that people are asking now are not — I don't think there's been a whole lot of things where someone said, "Wow, I really didn't consider that before."
So I think in this case, the decision-making process was pretty rigorous. I think we could do better on the transparency around it and making sure that people have a sense of the procedures around this, but that part, I think, is different from kind of the judgment that people would make given all the information at the end.
Employee question about the limits of acceptable state violence per Facebook's rules
Facebook employee
Okay. I'm super pleased to hear that there's room to review and revise this rule that posts by state actors about state-sponsored use of force should stay up. I'm super curious about what you think the limit of this should be and how you're thinking about the global implications. For example, police are state actors. So under current rules, if police chiefs use their platform to, say, you know, send squads out into black neighborhoods to shoot them — that would still under our rules be use of state force? Similarly, in Turkey, if Erdogan directed forces to go out and shoot Kurds, that would be a legitimate use of state force? And so it would stay up?
So I'm curious, given what we did in Myanmar, where we removed the generals from our platform, what you see as the differences there. And then finally, history shows that violence by state actors targeting vulnerable communities — even if it's directed to soldiers or police — has always resulted in vigilante action, because it creates a vulnerable group that everyone can act against: from the Holocaust, to Erdogan, to the genocide in Myanmar. So my secondary question is whether you think the amplified danger to vulnerable communities should inform a review of the rule.
Mark Zuckerberg
Yeah, those are good questions. So I think we'll need to think through. I want to be careful not to … I think this is an area that we need to think through, especially given that a lot of the concerns here are around excessive policing. It just strikes me that this is an area that we may need — I think there were actually two reasons why it makes sense to think about where we are here.
One is because the concerns were about excessive policing. The second is that we have a somewhat different set of policies around countries like a number of the ones that you mentioned, that we view either at-risk or in conflict situations. And if we were entering a period where there may be a prolonged period of civil unrest, then that might suggest that we need different policies, even if just temporarily, in the United States for some period, compared to where we were before. And we have some precedents for what that might look like, from some of the places that you've mentioned around the world where there have been ongoing violent conflict.
So clearly, we're in a situation where there is an ongoing, violent conflict. It's certainly more of a tinderbox, and, you know, the stakes are higher if people are circulating information — sort of like we saw with Covid, where it's this emergency situation.
So there's more content that would be classified as harmful misinformation that we might want to take down. I think that there may be something like that here. That's an analogy that we should consider.
But part of why I just — you know, I felt uncomfortable changing the policy or the lines on Friday, given where we are, is, one, the situation is fluid, and the civil unrest is continuing and escalating. But two, it's that these policies have to be developed.
And it's just — you mentioned a bunch of examples from a lot of countries around the world. And the cultural and historical context in all these places is so different. And you want to get perspectives, from lots of diverse groups and international and all of this. And there's just no way that we can do that on the fly and put something into place that is going to not have more downstream negative consequences than positive.
So kind of the way that we handle this is we try to rigorously and continuously update the policies. But when something comes up, we try to enforce within the framework and the infrastructure that we have with a constant reevaluation and kind of enhancement of what we have.
So that's, I guess, kind of a long way of saying I don't actually know where this is going to land. But this is why I think this is the thing that needs to be reconsidered at this point — how we would change this — because I just think we're moving into a new reality in the United States.
Or sorry, let me clarify that. My last point is, I think one aspect of it — a potentially ongoing conflict — is a new reality. The excessive use of police force is unfortunately not a new reality, and it's something that our policies should probably address — something that I want to make sure we have another think on.
Employee question on why Facebook seems to be "contorting its policies" to "avoid antagonizing Trump"
Facebook employee
The catalyst for the original Trump post was about who is eligible to get ballots in California. I'll try to keep it short, but I have some quotes from policy so it might go on a little bit. So a quote from policy is that we disallow "misrepresentation of who can vote … whether a vote will be counted and what information and or materials must be provided in order to vote." We also disallow misrepresentation of the methods for voting or voter registration.
And then Trump posted on Facebook — I'm not sure if it's on Twitter — that the governor of California is sending ballots to millions of people; anyone living in the state, no matter who they are or how they got there, will get one. So to me, you know, it pretty plainly misrepresents methods for voting and who can vote, because it indicates that anyone in the state can vote, regardless of their voter registration status.
If I'm a person that's on the fence about registering, I may not bother to register to vote since Trump says that anyone gets one anyway. So this can result in the harm of suppressing voter registration.
So, all that being said, my question is, you know, why are the smartest people in the world focused on contorting or sort of twisting our policies to avoid antagonizing Trump instead of driving social issue progress?
Mark Zuckerberg
Well, I'll spend most of the time addressing the policy question you have, which I think is a real question. One of the things that I think we need to raise going forward is that voting by mail is clearly going to be a more contentious and important thing, given the context that we're in around having a pandemic. And if you'd predicted three months ago, given everything we've seen with different folks and election integrity over the last several years, that the debate was going to be around vote by mail — I think the answer would be no, that's not where most of the tactics from different folks trying to interfere in the election would be focused.
So now, given the new reality around the pandemic, I think we just need to be clear around it, given that there is going to be an unprecedented amount of fear about how to go vote because a lot of people are just gonna be worried that if they go to their polling place, they're going to get sick. I think we should take another rev on the voter suppression policies. And there are kind of two cases that immediately come to mind that I'm pretty focused on, but there may be others that we need to take into account, too.
So one is basically the debate that you're flagging here. On the one hand, I think there's an ongoing political debate around what the vote-by-mail policies should be in different states. But it clearly varies by state — today, everyone can have an absentee ballot, but the laws around how governors send them out or distribute them, or how exactly it works in different places, vary — and our current consideration would be that that's a political debate.
And you can certainly engage in that. And if the president or anyone else is accusing a governor of doing the wrong thing, I mean, the politicians accuse each other of different things all the time, we generally try to not get into legislative — like making a legal judgment on whether what he's saying is true or whether the governor actually is doing something illegal or not. And that's kind of one set of things.
But anything that's said that would sort of give people the impression that if they voted by mail they would be committing fraud, or that they shouldn't vote, or that they don't need to register — like you're saying, those are the things that we'd look at and be worried about.
I think given the heightened importance of vote-by-mail in this election, it would be helpful to have specifically clear guidance on — like, what are the parameters on what we're going to enable in terms of discourse around it? What is debate around the policy and where vote-by-mail should be applied? And where are you crossing the line into, "No, now you're not having a conversation with the governor of California or debating about policy. You're talking to individuals and potentially doing something that might confuse them."
Our read on this, given where things currently stand and the context that we've had, is that this shouldn't be read as bearing on an individual's decision — that it was not likely to encourage anyone to not register or not vote — and that it was more about the policy decision around how to vote by mail. But that's something that I think we should consider.
The other thing on voter suppression that I'm somewhat worried about is just that, as we get closer to the election, I worry it may be hard to distinguish between folks who are writing about the health issues of Covid still existing in November, or a big resurgence of it, as a health concern, versus folks writing about that to discourage specific populations in specific areas from going to the polls.
And I think that that is going to be a very difficult one, where I'm not sure what's going to be possible on the policy side in terms of distinguishing between those two cases. But it's something that I'm fairly worried about: that we're basically going to have a somewhat targeted effort by different folks in different areas to be talking about, "Hey, there's a big health risk if you go vote here." So they're not even explicitly encouraging or discouraging people to do something — but just by putting that confusion and fear out there, that would create concern.
So again, I guess one final thing on this: we're going to revisit and have another thought on what the policy should be around that area. But I think equally as important, if not more, will be the voter hub that I think we should go build and that we're currently scoping out. It's going to be sort of like the Covid hub that we've had for authoritative information — to make sure that, regardless of what people are saying back and forth, there's one place that people can trust and can go to get really accurate information: to know how to register, to know whether they have registered, whatever we're going to be able to do. We want to enable as much civic engagement as possible.
Employee question on how to show support for leadership without seeming insensitive to colleagues who disagree
Facebook employee
Hi, Mark. I'm [redacted]. I wanted to thank you for everything. I also want to observe that there's a lot of turmoil in our workplace right now. And I wanted to ask, how can employees express that they stand with you and they stand with Sheryl and they stand by M-team and they stand by the incredibly difficult decisions that you have had to make, without seeming insensitive to the incredibly real concerns of our colleagues?
Mark Zuckerberg
I think this is a good question. Because, you know, this is clearly, I think, a moment where we should be focused on seeing what we can do to push forward the … anything that we can do to advance the work on racial justice. But I also want people who think that we're doing the right thing on voice and expression, and balancing the equities in the right place, to feel like this is a safe place for them, too, right. And they should be able to express those views, because I do think that there are a lot of reasons why giving people a voice has been valuable and will continue to be valuable for a long time.
And, you know, over time, in general, we just tend to add more policies to restrict things more and more. And while each one is thoughtful and good, and we're articulating specific harms — and I think that's important — I do think that expression and voice is also a thing that routinely needs to be stood up for, because it has the property that, you know, when something is uniformly positive, no one argues for taking it down. It's only ever when something is controversial. And if every time there's something controversial your instinct is, "Okay, let's restrict a lot," then you end up restricting a lot of things that I think will eventually be good for everyone. So thanks. Thanks for raising this.
Facebook employee
Thank you, thank you.
Employee question on polarization
Facebook employee
Hi Mark, last question. You spoke a little about "do no harm." And you also spoke about freedom of speech. I was wondering if you could speak about the intersection with a third vector, which is polarization on the platform. Specifically, what is your view on the sort of competing goals of free speech and concerns about polarization — what you've seen, and what your position is on the intersection of the two?
Mark Zuckerberg
Yeah, so [redacted], thanks for raising this. I mean, this is … you're right, this is as central a part of our mission as giving people a voice. I basically read our mission as having three parts. There's, you know, "give people the power," right, which is basically about individual voice and empowering individuals; "to build community," right, to bring groups of people together in ways that they care about on a day-to-day basis; "in order to bring the world closer together." So: give people the power to build community and bring the world closer together.
And the last point — bring the world closer together — clearly bears on this point that you're talking about, which is reducing polarization. Right. If people are super divided and very negative towards each other, that's kind of the opposite of bringing the world closer together. And so that's clearly something that we care about. And it speaks to kind of the end state of what we're trying to work on. And the reality is that there's a lot of work that we're doing — I can give a kind of a longer answer to this question because I think it's the last one.
But, um, I know there's this article in the Wall Street Journal from last week saying that a bunch of internal researchers came up with some research showing some ways that our products might be increasing polarization, and we didn't do anything about it. And I just have to say, I think that piece of journalism is one that I strongly disagree with. And we tried to give the reporter all the examples of things that we've done specifically to address polarization: the newsfeed ranking changes to make sure that the news we showed was more broadly trusted, and to overall reduce the prevalence of news, because we found that news was driving polarization more than people connecting with each other. The work that we do on group recommendations, to make sure that things that are fringe or conspiracy theories are not the things that we go recommend to people — if something doesn't violate our policies, then you can go seek out that group, but we're not going to try to grow it.
I can go on and on. And we've hurt the engagement of our products by taking these positions, but we care about this deeply. And we will continue studying it. And that doesn't mean, you know, that if you're an individual researcher or an individual engineer, every idea or issue that you've come up with, or every mitigation that you propose, is one that we're going to conclude is the right one to do. You know, some of them are more effective than others, and some have bad side effects. So we'll kind of prioritize them. But clearly, this is definitely a top priority.
I think it might be worth — just because this is a big question on everyone's mind — summarizing some of the research that we've seen recently. The first aspect of the research is that there are different aspects of polarization: some are actually kind of healthy and normal, and some are negative. Healthy polarization would be, okay — you have a jury deliberating, and they're deciding on something. Initially, what happens is you have the nine people and they all kind of have different views, and then they polarize into a couple of different views or a few different views. That kind of congeals, and then they argue it out, and then hopefully there's a consensus. It's part of the normal process of coming together — you have that kind of polarization before you have the coming together. I think that's a normal thing that society, and groups of all sizes, do before finding a way to come together.
And a lot of scholars who have studied this don't necessarily think that that's negative. What they think is negative is basically when groups polarize in such a way that they start hating each other, or having very negative feelings about each other. And what's been measured there — there's this measure that academics call affective polarization, which is basically negative feelings towards a different group. Academics measure this with something like, "Would you let your kid, or be happy if your kid, married someone of X group?" — someone of a different race, a different gender, a different ethnicity or country.
And a bunch of the research that we've seen on this internally has actually concluded that, if anything, on a number of those fronts, usage of social media has correlated positively with people being more tolerant on a number of dimensions. So that is not personally surprising to me. But it's certainly counter to a lot of the narratives that people have externally.
On a national scale, there is a piece of research that some Stanford researchers did recently — I think Gentzkow and a couple of other folks at Stanford — that studied polarization by country. And what they basically found was that polarization was trending very differently in different countries. Across Europe, there were some countries that were flat and some that were down; the US is up, especially in political polarization. And one of the conclusions that they come to is that because social media and the internet are present in all these places, and the impact on all these places is different, it is highly unlikely that social media or the internet are the primary cause of that polarization, regardless of what a lot of people have wanted to say over time.
So all I'm saying, just to go to this point, is that we care a lot about this. Our mission at the end of the day is to empower individuals to make their voices heard, to come together in community, to knit society together, and ultimately to bring the world closer together. So I really care about this, and we're going to work hard on it — and we already are. I'm sure that there are areas in our products that have a more positive impact on this, and other areas where we may be having a negative impact that we should be working to mitigate, even if the overall effect is positive or neutral. I really do care about that. So there's a lot more to do on that.
But I don't know if I addressed the specific part of this that you wanted me to. But I appreciate the ability to talk about that part of the mission more broadly, because I think it's something that a lot of people outside the company question right now. And, frankly, I think a lot of the narratives are not backed up by a lot of the research that I've seen or the work that we've done.
Facebook employee
That answers my question. And I also appreciate the time that you shared over the weekend to discuss getting additional viewpoints into the room, which was one of the earlier questions. So thank you for that as well.
Mark Zuckerberg
All right. Well, thank you all for tuning in for, like, an hour and a half. I know we're gonna keep talking about this. Some of these issues are deep, and they're not going to go away anytime soon. And we do have a big role to play. And I get that not everyone is going to agree with everything that we do. But there are a lot of things that I think we can do.
And I hope that we can find ways to positively engage, to make sure that even if every decision doesn't go the way that everyone wants — which will be impossible — we find ways to make sure that everyone, here and outside, feels like the net impact of the different things that we're doing in the world is positive. And I really believe it is.
I believe that we've given a lot of people a voice today that they wouldn't have had otherwise. I think defending the ability to do that is often controversial and means standing up sometimes for things that you disagree with personally. But I do think, over time, it's served our community well, and I appreciate all of you for the dialogue on this, and we will continue it. So I'm thinking of all of you and I hope you all stay safe and I will see you soon.
Source: vox.com