Mark Zuckerberg Stands for Voice and Free Expression | Facebook Newsroom

Today, Mark Zuckerberg spoke at Georgetown University about the importance of protecting free expression. He underscored his belief that giving everyone a voice empowers the powerless and pushes society to be better over time — a belief that’s at the core of Facebook.

In front of hundreds of students at the school’s Gaston Hall, Mark warned that we’re increasingly seeing laws and regulations around the world that undermine free expression and human rights. He argued that in order to make sure people can continue to have a voice, we should: 1) write policy that helps the values of voice and expression triumph around the world, 2) fend off the urge to define speech we don’t like as dangerous, and 3) build new institutions so companies like Facebook aren’t making so many important decisions about speech on our own. 

Read Mark’s full speech below.

Standing For Voice and Free Expression

Hey everyone. It’s great to be here at Georgetown with all of you today.

Before we get started, I want to acknowledge that today we lost an icon, Elijah Cummings. He was a powerful voice for equality, social progress and bringing people together.

When I was in college, our country had just gone to war in Iraq. The mood on campus was disbelief. It felt like we were acting without hearing a lot of important perspectives. The toll on soldiers, families and our national psyche was severe, and most of us felt powerless to stop it. I remember feeling that if more people had a voice to share their experiences, maybe things would have gone differently. Those early years shaped my belief that giving everyone a voice empowers the powerless and pushes society to be better over time.

Back then, I was building an early version of Facebook for my community, and I got to see my beliefs play out at smaller scale. When students got to express who they were and what mattered to them, they organized more social events, started more businesses, and even challenged some established ways of doing things on campus. It taught me that while the world’s attention focuses on major events and institutions, the bigger story is that most progress in our lives comes from regular people having more of a voice.

Since then, I’ve focused on building services to do two things: give people voice, and bring people together. These two simple ideas — voice and inclusion — go hand in hand. We’ve seen this throughout history, even if it doesn’t feel that way today. More people being able to share their perspectives has always been necessary to build a more inclusive society. And our mutual commitment to each other — that we hold each other’s right to express our views and be heard above our own desire to always get the outcomes we want — is how we make progress together.

But this view is increasingly being challenged. Some people believe giving more people a voice is driving division rather than bringing us together. More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that’s dangerous. Today I want to talk about why, and some important choices we face around free expression.

Throughout history, we’ve seen how being able to use your voice helps people come together. We’ve seen this in the civil rights movement. Frederick Douglass once called free expression “the great moral renovator of society”. He said “slavery cannot tolerate free speech”. Civil rights leaders argued time and again that their protests were protected free expression, and one noted: “nearly all the cases involving the civil rights movement were decided on First Amendment grounds”.

We’ve seen this globally too, where the ability to speak freely has been central in the fight for democracy worldwide. The most repressive societies have always restricted speech the most — and when people are finally able to speak, they often call for change. This year alone, people have used their voices to end multiple long-running dictatorships in Northern Africa. And we’re already hearing from voices in those countries that had been excluded just because they were women, or they believed in democracy.

Our idea of free expression has become much broader over even the last 100 years. Many Americans know about the Enlightenment history and how we enshrined the First Amendment in our constitution, but fewer know how dramatically our cultural norms and legal protections have expanded, even in recent history.

The first Supreme Court case to seriously consider free speech and the First Amendment was in 1919, Schenck v. United States. Back then, the First Amendment only applied to the federal government, and states could and often did restrict your right to speak. Our ability to call out things we felt were wrong also used to be much more restricted. Libel laws used to impose damages if you wrote something negative about someone, even if it was true. The standard later shifted to make truth a defense: criticism was acceptable as long as you could prove it was true. We didn’t get the broad free speech protections we have now until the 1960s, when the Supreme Court ruled in opinions like New York Times v. Sullivan that you can criticize public figures as long as you’re not doing so with actual malice, even if what you’re saying is false.

We now have significantly broader power to call out things we feel are unjust and share our own personal experiences. Movements like #BlackLivesMatter and #MeToo went viral on Facebook — the hashtag #BlackLivesMatter was actually first used on Facebook — and this just wouldn’t have been possible in the same way before. 100 years back, many of the stories people have shared would have been against the law to even write down. And without the internet giving people the power to share them directly, they certainly wouldn’t have reached as many people. With Facebook, more than 2 billion people now have a greater opportunity to express themselves and help others.

While it’s easy to focus on major social movements, it’s important to remember that most progress happens in our everyday lives. It’s the Air Force moms who started a Facebook group so their children and other service members who can’t get home for the holidays have a place to go. It’s the church group that came together during a hurricane to provide food and volunteer to help with recovery. It’s the small business on the corner that now has access to the same sophisticated tools only the big guys used to, and now they can get their voice out and reach more customers, create jobs and become a hub in their local community. Progress and social cohesion come from billions of stories like this around the world.

People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences. I understand the concerns about how tech platforms have centralized power, but I actually believe the much bigger story is how much these platforms have decentralized power by putting it directly into people’s hands. It’s part of this amazing expansion of voice through law, culture and technology.

So giving people a voice and broader inclusion go hand in hand, and the trend has been towards greater voice over time. But there’s also a counter-trend. In times of social turmoil, our impulse is often to pull back on free expression. We want the progress that comes from free expression, but not the tension.

We saw this when Martin Luther King Jr. wrote his famous letter from Birmingham Jail, where he was unconstitutionally jailed for protesting peacefully. We saw this in the efforts to shut down campus protests against the Vietnam War. We saw this way back when America was deeply polarized about its role in World War I, and the Supreme Court ruled that socialist leader Eugene Debs could be imprisoned for making an anti-war speech.

In the end, all of these decisions were wrong. Pulling back on free expression wasn’t the answer and, in fact, it often ended up hurting the minority views we seek to protect. From where we are now, it seems obvious that, of course, protests for civil rights or against wars should be allowed. Yet the desire to suppress this expression was felt deeply by much of society at the time.

Today, we are in another time of social tension. We face real issues that will take a long time to work through — massive economic transitions from globalization and technology, fallout from the 2008 financial crisis, and polarized reactions to greater migration. Many of our issues flow from these changes.

In the face of these tensions, once again a popular impulse is to pull back from free expression. We’re at another crossroads. We can continue to stand for free expression, understanding its messiness, but believing that the long journey towards greater progress requires confronting ideas that challenge us. Or we can decide the cost is simply too great. I’m here today because I believe we must continue to stand for free expression.

At the same time, I know that free expression has never been absolute. Some people argue internet platforms should allow all expression protected by the First Amendment, even though the First Amendment explicitly doesn’t apply to companies. I’m proud that our values at Facebook are inspired by the American tradition, which is more supportive of free expression than anywhere else. But even the American tradition recognizes that some speech infringes on others’ rights. And still, a strict First Amendment standard might require us to allow terrorist propaganda, the bullying of young people, and other content that almost everyone agrees we should stop — and I certainly do — as well as content like pornography that would make people uncomfortable using our platforms.

So once we’re taking this content down, the question is: where do you draw the line? Most people agree with the principles that you should be able to say things other people don’t like, but you shouldn’t be able to say things that put people in danger. The shift over the past several years is that many people now consider more kinds of speech dangerous than they would have before. This raises the question of exactly what counts as dangerous speech online. It’s worth examining this in detail.

Many arguments about online speech are related to new properties of the internet itself. If you believe the internet is completely different from everything before it, then it doesn’t make sense to focus on historical precedent. But we should be careful of overly broad arguments since they’ve been made about almost every new technology, from the printing press to radio to TV. Instead, let’s consider the specific ways the internet is different and how internet services like ours might address those risks while protecting free expression.

One clear difference is that a lot more people now have a voice — almost half the world. That’s dramatically empowering for all the reasons I’ve mentioned. But inevitably some people will use their voice to organize violence, undermine elections or hurt others, and we have a responsibility to address these risks. When you’re serving billions of people, even if a very small percent cause harm, that can still be a lot of harm.

We build specific systems to address each type of harmful content — from incitement of violence to child exploitation to other harms like intellectual property violations — about 20 categories in total. We judge ourselves by the prevalence of harmful content and what percent we find proactively before anyone reports it to us. For example, our AI systems identify 99% of the terrorist content we take down before anyone even sees it. This is a massive investment. We now have over 35,000 people working on security, and our security budget today is greater than the entire revenue of our company at the time of our IPO earlier this decade.

All of this work is about enforcing our existing policies, not broadening our definition of what is dangerous. If we do this well, we should be able to stop a lot of harm while fighting back against putting additional restrictions on speech.

Another important difference is how quickly ideas can spread online. Most people can now get much more reach than they ever could before. This is at the heart of a lot of the positive uses of the internet. It’s empowering that anyone can start a fundraiser, share an idea, build a business, or create a movement that can grow quickly. But we’ve seen this go the other way too — most notably when Russia’s Internet Research Agency (IRA) tried to interfere in the 2016 elections, but also when misinformation has gone viral. Some people argue that virality itself is dangerous, and we need tighter filters on what content can spread quickly.

For misinformation, we focus on making sure complete hoaxes don’t go viral. We especially focus on misinformation that could lead to imminent physical harm, like misleading health advice saying if you’re having a stroke, no need to go to the hospital.

More broadly though, we’ve found a different strategy works best: focusing on the authenticity of the speaker rather than the content itself. Much of the content the Russian accounts shared was distasteful but would have been considered permissible political discourse if it were shared by Americans — the real issue was that it was posted by fake accounts coordinating together and pretending to be someone else. We’ve seen a similar issue with these groups that pump out misinformation like spam just to make money.

The solution is to verify the identities of accounts getting wide distribution and get better at removing fake accounts. We now require you to provide a government ID and prove your location if you want to run political ads or a large page. You can still say controversial things, but you have to stand behind them with your real identity and face accountability. Our AI systems have also gotten more advanced at detecting clusters of fake accounts that aren’t behaving like humans. We now remove billions of fake accounts a year — most within minutes of registering and before they do much. Focusing on authenticity and verifying accounts is a much better solution than an ever-expanding definition of what speech is harmful.

Another qualitative difference is the internet lets people form communities that wouldn’t have been possible before. This is good because it helps people find groups where they belong and share interests. But the flip side is this has the potential to lead to polarization. I care a lot about this — after all, our goal is to bring people together.

Much of the research I’ve seen is mixed and suggests the internet could actually decrease aspects of polarization. The most polarized voters in the last presidential election were the people least likely to use the internet. Research from the Reuters Institute also shows people who get their news online actually have a much more diverse media diet than people who don’t, and they’re exposed to a broader range of viewpoints. This is because most people watch only a couple of cable news stations or read only a couple of newspapers, but even if most of your friends online have similar views, you usually have some that are different, and you get exposed to different perspectives through them. Still, we have an important role in designing our systems to show a diversity of ideas and not encourage polarizing content.

One last difference with the internet is it lets people share things that would have been impossible before. Take live-streaming, for example. This allows families to be together for moments like birthdays and even weddings, schoolteachers to read bedtime stories to kids who might not be read to, and people to witness some very important events. But we’ve also seen people broadcast self-harm, suicide, and terrible violence. These are new challenges and our responsibility is to build systems that can respond quickly.

We’re particularly focused on well-being, especially for young people. We built a team of thousands of people and AI systems that can detect risks of self-harm within minutes so we can reach out when people need help most. In the last year, we’ve helped first responders reach people who needed help thousands of times.

For each of these issues, I believe we have two responsibilities: to remove content when it could cause real danger as effectively as we can, and to fight to uphold as wide a definition of freedom of expression as possible — and not allow the definition of what is considered dangerous to expand beyond what is absolutely necessary. That’s what I’m committed to.

But beyond these new properties of the internet, there are also shifting cultural sensitivities and diverging views on what people consider dangerous content.

Take misinformation. No one tells us they want to see misinformation. That’s why we work with independent fact checkers to stop hoaxes that are going viral from spreading. But misinformation is a pretty broad category. A lot of people like satire, which isn’t necessarily true. A lot of people talk about their experiences through stories that may be exaggerated or have inaccuracies, but speak to a deeper truth in their lived experience. We need to be careful about restricting that. Even when there is a common set of facts, different media outlets tell very different stories emphasizing different angles. There’s a lot of nuance here. And while I worry about an erosion of truth, I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100% true.

We recently clarified our policies to ensure people can see primary source speech from political figures that shapes civic discourse. Political advertising is more transparent on Facebook than anywhere else — we keep all political and issue ads in an archive so everyone can scrutinize them, and no TV or print does that. We don’t fact-check political ads. We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won’t take it down even if it would otherwise conflict with many of our standards.

I know many people disagree, but, in general, I don’t think it’s right for a private company to censor politicians or the news in a democracy. And we’re not an outlier here. The other major internet platforms and the vast majority of media also run these same ads.

American tradition also has some precedent here. The Supreme Court case I mentioned earlier that gave us our current broad speech rights, New York Times v. Sullivan, was actually about an ad with misinformation, supporting Martin Luther King Jr. and criticizing an Alabama police department. The police commissioner sued the Times for running the ad, the jury in Alabama found against the Times, and the Supreme Court unanimously reversed the decision, creating today’s speech standard.

As a principle, in a democracy, I believe people should decide what is credible, not tech companies. Of course there are exceptions, and even for politicians we don’t allow content that incites violence or risks imminent harm — and of course we don’t allow voter suppression. Voting is voice. Fighting voter suppression may be as important for the civil rights movement as free expression has been. Just as we’re inspired by the First Amendment, we’re inspired by the 15th Amendment too.

Given the sensitivity around political ads, I’ve considered whether we should stop allowing them altogether. From a business perspective, the controversy certainly isn’t worth the small part of our business they make up. But political ads are an important part of voice — especially for local candidates, up-and-coming challengers, and advocacy groups that may not get much media attention otherwise. Banning political ads favors incumbents and whoever the media covers.

Even if we wanted to ban political ads, it’s not clear where we’d draw the line. There are many more ads about issues than there are directly about elections. Would we ban all ads about healthcare or immigration or women’s empowerment? If we banned candidates’ ads but not these, would it really make sense to give everyone except the candidates themselves a voice in political debates? There are issues any way you cut this, and when it’s not absolutely clear what to do, I believe we should err on the side of greater expression.

Or take hate speech, which we define as someone directly attacking a person or group based on a characteristic like race, gender or religion. We take down content that could lead to real world violence. In countries at risk of conflict, that includes anything that could lead to imminent violence or genocide. And we know from history that dehumanizing people is the first step towards inciting violence. If you say immigrants are vermin, or all Muslims are terrorists — that makes others feel they can escalate and attack that group without consequences. So we don’t allow that. I take this incredibly seriously, and we work hard to get this off our platform.

American free speech tradition recognizes that some speech can have the effect of restricting others’ right to speak. While American law doesn’t recognize “hate speech” as a category, it does prohibit racial harassment and sexual harassment. We still have a strong culture of free expression even while our laws prohibit discrimination.

But still, people disagree broadly over what qualifies as hate and shouldn’t be allowed. Some people think our policies don’t prohibit content they consider hateful, while others think what we take down should be a protected form of expression. This area is one of the hardest to get right.

I believe people should be able to use our services to discuss issues they feel strongly about — from religion and immigration to foreign policy and crime. You should even be able to be critical of groups without dehumanizing them. But even this isn’t always straightforward to judge at scale, and it often leads to enforcement mistakes. Is someone re-posting a video of a racist attack because they’re condemning it, or glorifying and encouraging people to copy it? Are they using normal slang, or using an innocent word in a new way to incite violence? Now multiply those linguistic challenges by more than 100 languages around the world.

Rules about what you can and can’t say often have unintended consequences. When speech restrictions were implemented in the UK in the last century, parliament noted they were applied more heavily to citizens from poorer backgrounds because the way they expressed things didn’t match the elite Oxbridge style. In everything we do, we need to make sure we’re empowering people, not simply reinforcing existing institutions and power structures.

That brings us back to the crossroads we all find ourselves at today. Will we continue fighting to give more people a voice to be heard, or will we pull back from free expression?

I see three major threats ahead:

The first is legal. We’re increasingly seeing laws and regulations around the world that undermine free expression and people’s human rights. These local laws are each individually troubling, especially when they shut down speech in places where there isn’t democracy or freedom of the press. But it’s even worse when countries try to impose their speech restrictions on the rest of the world.

This raises a larger question about the future of the global internet. China is building its own internet focused on very different values, and is now exporting their vision of the internet to other countries. Until recently, the internet in almost every country outside China has been defined by American platforms with strong free expression values. There’s no guarantee these values will win out. A decade ago, almost all of the major internet platforms were American. Today, six of the top ten are Chinese.

We’re beginning to see this in social media. While our services, like WhatsApp, are used by protesters and activists everywhere due to strong encryption and privacy protections, on TikTok, the Chinese app growing quickly around the world, mentions of these protests are censored, even in the US.

Is that the internet we want?

It’s one of the reasons we don’t operate Facebook, Instagram or our other services in China. I wanted our services in China because I believe in connecting the whole world and I thought we might help create a more open society. I worked hard to make this happen. But we could never come to agreement on what it would take for us to operate there, and they never let us in. And now we have more freedom to speak out and stand up for the values we believe in and fight for free expression around the world.

This question of which nation’s values will determine what speech is allowed for decades to come really puts into perspective our debates about the content issues of the day. While we may disagree on exactly where to draw the line on specific issues, we at least can disagree. That’s what free expression is. And the fact that we can even have this conversation means that we’re at least debating from some common values. If another nation’s platforms set the rules, our discourse will be defined by a completely different set of values.

To push back against this, as we all work to define internet policy and regulation to address public safety, we should also be proactive and write policy that helps the values of voice and expression triumph around the world.

The second challenge to expression is the platforms themselves — including us. Because the reality is we make a lot of decisions that affect people’s ability to speak.

I’m committed to the values we’re discussing today, but we won’t always get it right. I understand people are concerned that we have so much control over how they communicate on our services. And I understand people are concerned about bias and making sure their ideas are treated fairly. Frankly, I don’t think we should be making so many important decisions about speech on our own either. We’d benefit from a more democratic process, clearer rules for the internet, and new institutions.

That’s why we’re establishing an independent Oversight Board for people to appeal our content decisions. The board will have the power to make final binding decisions about whether content stays up or comes down on our services — decisions that our team and I can’t overturn. We’re going to appoint members to this board who have a diversity of views and backgrounds, but who each hold free expression as their paramount value.

Building this institution is important to me personally because I’m not always going to be here, and I want to ensure the values of voice and free expression are enshrined deeply into how this company is governed.

The third challenge to expression is the hardest because it comes from our culture. We’re at a moment of particular tension here and around the world — and we’re seeing the impulse to restrict speech and enforce new norms around what people can say.

Increasingly, we’re seeing people try to define more speech as dangerous because it may lead to political outcomes they see as unacceptable. Some hold the view that since the stakes are so high, they can no longer trust their fellow citizens with the power to communicate and decide what to believe for themselves.

I personally believe this is more dangerous for democracy over the long term than almost any speech. Democracy depends on the idea that we hold each other’s right to express ourselves and be heard above our own desire to always get the outcomes we want. You can’t impose tolerance top-down. It has to come from people opening up, sharing experiences, and developing a shared story for society that we all feel we’re a part of. That’s how we make progress together.

So how do we turn the tide? Someone once told me our founding fathers thought free expression was like air. You don’t miss it until it’s gone. When people don’t feel they can express themselves, they lose faith in democracy and they’re more likely to support populist parties that prioritize specific policy goals over the health of our democratic norms.

I’m a little more optimistic. I don’t think we need to lose our freedom of expression to realize how important it is. I think people understand and appreciate the voice they have now. At some fundamental level, I think most people believe in their fellow people too.

As long as our governments respect people’s right to express themselves, as long as our platforms live up to their responsibilities to support expression and prevent harm, and as long as we all commit to being open and making space for more perspectives, I think we’ll make progress. It’ll take time, but we’ll work through this moment. We overcame deep polarization after World War I, and intense political violence in the 1960s. Progress isn’t linear. Sometimes we take two steps forward and one step back. But if we can’t agree to let each other talk about the issues, we can’t take the first step. Even when it’s hard, this is how we build a shared understanding.

So yes, we have big disagreements. Maybe more now than at any time in recent history. But part of that is because we’re getting our issues out on the table — issues that for a long time weren’t talked about. More people from more parts of our society have a voice than ever before, and it will take time to hear these voices and knit them together into a coherent narrative. Sometimes we hope for a singular event to resolve these conflicts, but that’s never been how it works. We focus on the major institutions — from governments to large companies — but the bigger story has always been regular people using their voice to take billions of individual steps forward to make our lives and our communities better.

The future depends on all of us. Whether you like Facebook or not, we need to recognize what is at stake and come together to stand for free expression at this critical moment.

I believe in giving people a voice because, at the end of the day, I believe in people. And as long as enough of us keep fighting for this, I believe that more people’s voices will eventually help us work through these issues together and write a new chapter in our history — where from all of our individual voices and perspectives, we can bring the world closer together.




European Court Ruling Raises Questions About Policing Speech | Facebook Newsroom

By Monika Bickert, VP, Global Policy Management 

Imagine something you wrote and shared on Facebook was taken down, not because it violated our rules, and not because it broke the law in your country, but because someone was able to use different laws in another country to have it removed. Imagine as well that your speech was deemed illegal not by a judge who carefully weighed the facts, but by automated tools and technology. 

This scenario became much more likely last week when the Court of Justice of the European Union ruled that European Union countries can order the removal of content not only in their own country, but all over the world. The ruling also opened the door for courts to order the removal of content that is similar to the illegal speech, meaning that something you posted might be removed even if you knew nothing about the earlier post that a European country had deemed illegal. 

The ruling arose from a personal defamation case brought by an Austrian politician. The post in question shared a news article in which the Austrian politician had outlined her and her party’s views on immigration, together with a comment from a Facebook user strongly critiquing the Austrian politician. 

Although some people might find the post unwarranted or upsetting, it was not against our rules. We prohibit threats of violence against politicians, as well as harassment and hate speech, but we allow people to criticize elected officials and their policies. We believe this is an important part of the right to freedom of expression afforded under Article 19 of the Universal Declaration of Human Rights. Nevertheless, we respect local laws when their limits on free expression meet the legitimacy, necessity and proportionality tests required by human rights standards, so when a court in Austria found that the post violated Austrian law, we made the post unavailable in Austria.

This was not enough for the Austrian court, which asked that we remove this post worldwide and also remove similar content criticising this politician. The matter was referred to the Court of Justice of the European Union. 

The Court’s ruling last week raises critical questions for freedom of expression, in two key respects. 

First, it undermines the long-standing principle that one country does not have the right to impose its laws on another country. This is especially important with laws governing speech, because what is legally acceptable varies considerably in different parts of the world and even within the EU. The ruling also opens the door for other countries around the world, including non-democratic countries that severely limit speech, to demand the same power.

Second, the ruling might lead to a situation in which private internet companies could be forced to rely on automated technologies to police and remove “equivalent” illegal speech. This is especially troubling for situations, like this one, where the speech is political in nature. 

While our automated tools have come a long way, they are still a blunt instrument and unable to interpret the context and intent associated with a particular piece of content. Determining a post’s message is often complicated, requiring complex assessments around intent and an understanding of how certain words are being used. A person might share a news article to indicate agreement, while another might share it to condemn it. Context is critical and automated tools wouldn’t know the difference, which is why relying on automated tools to identify identical or “equivalent” content may well result in the removal of perfectly legitimate and legal speech. 
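
To make this limitation concrete, here is a minimal sketch in Python (the posts and the matching rule are invented for illustration; this is not Facebook’s actual tooling) showing how matching on content alone flags condemnation and endorsement alike:

```python
# Toy illustration only: naive "equivalent content" matching ignores intent.

BANNED_TEXT = "politician x is corrupt"

posts = [
    "Politician X is corrupt",  # the original post deemed illegal
    "Someone posted 'Politician X is corrupt' and it is defamatory and wrong",  # condemnation
    "I agree: Politician X is corrupt",  # endorsement
]

def naive_equivalent(post: str, banned: str) -> bool:
    """Flag any post containing the banned phrase, regardless of context."""
    return banned in post.lower()

for post in posts:
    print(naive_equivalent(post, BANNED_TEXT), "->", post)
# Prints True for all three posts, including the one condemning the speech,
# which is exactly the over-removal risk described above.
```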

Organizations around the world have expressed fears about this ruling and its impact on freedom of speech, including Article 19, CCIA, Access Now and EDRi. Many people have also voiced their concerns about private companies standing in the place of courts to police content and determine what is legal or illegal, particularly when it comes to speech criticising a public figure. 

National courts will play the primary role in implementing this ruling. We hope that in doing so, they weigh the effects of their injunctions on free expression rights and set clear definitions of “identical” and “equivalent” speech. We also hope that in the interest of respecting the rights of people in other countries, they will limit their injunctions or blocking of access to information to their own geographical boundaries.

See Mark Zuckerberg’s comments on this issue below from last week’s public Q&A.

“There was this European Court of Justice ruling on content and speech which basically said that one country in Europe presumably can enforce its speech rules outside of the country itself which, I think, is just a very troubling precedent to set. A lot of what we do internally is focused on giving people a voice, on enabling more freedom of expression, allowing people to express all the things they want and there are a lot of challenges to that. Some are cultural. There are safety issues. We want to make a welcoming community. Some of the stuff people want to post there are real issues and we need to deal with that, but there are a lot of policy and legal issues around the world and that’s an area where we are constantly engaging with different governments and pushing back. The current set of things that we do are: when a government is democratic and has good rule of law, we generally will follow the local laws in that country. So if someone posts something we won’t show it in that country if it is against the laws in that country, but we haven’t had any precedent where any country has tried to say ‘hey, you can’t do that outside of our country.’ We have had precedents but we have successfully fought them. This is one where a lot of the details of exactly how this gets implemented are going to depend on national courts across Europe, and what they define as the same content versus roughly equivalent content. This is something we and other services will be litigating and getting clarity on what this means. I know we talk about free expression as a value and I thought this was a fairly troubling development.”



Opening Our Offices to Small Businesses Around the World | Facebook Newsroom

By Michelle Klein, VP of Global Business Marketing

We know the holidays are one of the busiest times of the year for many businesses, so having the right resources and skills to manage your business during this time is critical. According to a Facebook-commissioned Ipsos study, in 2018, nearly half of US shoppers had started shopping for the holidays in November or earlier. That’s why today, we’re opening our doors to small businesses to help them prepare for the holiday season and introducing new tools to help them manage their business across our apps more efficiently.

We estimate more than 140 million businesses use our apps every month to find new customers, hire employees or engage with their communities. And today, we’re opening up 17 of our largest offices and hubs around the world to host Boost with Facebook Holiday Bootcamp — a training program designed to help small businesses and nonprofits learn how to grow their business and get ready for the holiday shopping season.

Over a 24-hour period, the Boost with Facebook Holiday Bootcamp is welcoming small businesses to our offices in New York City, Menlo Park, Austin, Chicago, London, Dublin, Berlin, Madrid, Warsaw, Istanbul, Lagos, Johannesburg, São Paulo, Mexico City, Buenos Aires, Singapore and the Philippines.

Introducing Customizable Templates to Boost Your Holiday Creative

We know businesses have limited resources and time, and it may not always be possible to create new assets for ad campaigns. So we’re making it easier for businesses of all sizes to create vertical, full-screen assets by introducing customizable templates for Stories, available across Facebook, Instagram and Messenger. 

New Features to Manage Customer Communications

Earlier this year, we enabled businesses to manage their messages from Messenger and Instagram Direct in a single location from their Facebook Page Inbox. And during the holiday season, we’ll introduce new features to Instagram Direct messages to help businesses manage customer communications more seamlessly and efficiently across our apps.

From fulfilling orders to keeping up with customer requests, we know staying on top of customer communications is important, so we also created new messaging tools like labels, search and folders to help businesses stay organized.

Since businesses may not always be available to respond to customers right away, in the coming weeks we’ll be rolling out tools like instant replies to let businesses automatically respond to initial messages and give people more information about their business or let them know their typical response time. Businesses can also set up an away message for when their business is closed or on vacation and create saved replies to answer commonly asked questions.

More Tips and Training to Make Your Holiday Marketing Stand Out

We’re also sharing new tips to help businesses get ready for the holiday season.

This year, we’re hosting over 200 free training events for small businesses and nonprofits around the world. We do this because we believe growth benefits everyone: every day, people launch and grow businesses, which help strengthen their communities and grow their local economies. 



A Conversation with Mark Zuckerberg, Joe DeRisi and Steve Quake | Facebook Newsroom

As part of his series of conversations on tech and society, Mark Zuckerberg sat down with Dr. Joe DeRisi and Dr. Steve Quake, who lead the Chan Zuckerberg Biohub, a nonprofit research center that brings together scientists and engineers from Stanford, Berkeley and UCSF. They talked about how technology is accelerating health research, the new advancements they’re most excited about, how to restore faith in science and what wearables will mean for the future of health.

See all of Mark’s challenge videos here.



An Update on Our App Developer Investigation | Facebook Newsroom

By Ime Archibong, VP of Product Partnerships

We wanted to provide an update on our ongoing App Developer Investigation, which we began in March of 2018 as part of our response to the episode involving Cambridge Analytica.

We promised then that we would review all of the apps that had access to large amounts of information before we changed our platform policies in 2014. The review has involved hundreds of people: attorneys, external investigators, data scientists, engineers, policy specialists, platform partners and other teams across the company. It helps us to better understand patterns of abuse in order to root out bad actors among developers.

We initially identified apps for investigation based on how many users they had and how much data they could access. Now, we also identify apps based on signals associated with an app’s potential to abuse our policies. Where we have concerns, we conduct a more intensive examination. This includes a background investigation of the developer and a technical analysis of the app’s activity on the platform. Depending on the results, a range of actions can be taken, from requiring developers to submit to in-depth questioning, to conducting inspections, to banning an app from the platform.

Our App Developer Investigation is by no means finished. But there is meaningful progress to report so far. To date, this investigation has addressed millions of apps. Of those, tens of thousands have been suspended for a variety of reasons while we continue to investigate.

It is important to understand that the apps that have been suspended are associated with about 400 developers. This is not necessarily an indication that these apps were posing a threat to people. Many were not live but were still in their testing phase when we suspended them. It is not unusual for developers to have multiple test apps that never get rolled out. And in many cases, the developers did not respond to our request for information so we suspended them, honoring our commitment to take action.

In a few cases, we have banned apps completely. That can happen for any number of reasons, including inappropriately sharing data obtained from us, making data publicly available without protecting people’s identity, or something else that was in clear violation of our policies. To date, we have not confirmed any instances of misuse beyond those we have already notified the public about, but our investigation is not yet complete. We have been in touch with regulators and policymakers on these issues, and we’ll continue working with them as our investigation continues. One app we banned was called myPersonality, which shared information with researchers and companies with only limited protections in place, and then refused our request to participate in an audit.

We’ve also taken legal action when necessary. In May, we filed a lawsuit in California against Rankwave, a South Korean data analytics company that failed to cooperate with our investigation. We’ve also taken legal action against developers in other contexts. For example, we filed an action against LionMobi and JediMobi, two companies that used their apps to infect users’ phones with malware in a profit-generating scheme. This lawsuit is one of the first of its kind against this practice. We detected the fraud, stopped the abuse and refunded advertisers. In another case, we sued two Ukrainian men, Gleb Sluchevsky and Andrey Gorbachov, for using quiz apps to scrape users’ data off our platform.

And we are far from finished. With each month that goes by, we have incorporated what we’ve learned and reexamined the ways that developers can build using our platforms. We’ve also improved the ways we investigate and enforce against potential policy violations that we find.

Beyond this investigation, we’ve made widespread improvements to how we evaluate and set policies for all developers that build on our platforms. We’ve removed a number of APIs, the channels that developers use to access various types of data. We’ve grown our teams dedicated to investigating and enforcing against bad actors. This will allow us to, on an annual basis, review every active app with access to more than basic user information. And when we find violators, we’ll take a range of enforcement actions.

We have also developed new rules to more strictly control a developer’s access to user data. Apps that provide minimal utility for users, like personality quizzes, may not be allowed on Facebook. Apps may not request a person’s data unless the developer uses it to meaningfully improve the quality of a person’s experience. They must also clearly demonstrate to people how their data would be used to provide them that experience.

We have clarified that we can suspend or revoke a developer’s access to any API that it has not used in the past 90 days. And we will not allow apps on Facebook that request a disproportionate amount of information from users relative to the value they provide.
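
As a rough sketch of how a dormancy rule like this could be checked (hypothetical code and names, not Facebook’s actual enforcement system):

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the 90-day dormant-API rule described above.
DORMANCY_LIMIT = timedelta(days=90)

def should_revoke(last_api_call: datetime, now: datetime) -> bool:
    """Flag a developer's access to an API for revocation if that API
    hasn't been called in the past 90 days."""
    return now - last_api_call > DORMANCY_LIMIT

# Usage: an API last called in June is dormant by October.
print(should_revoke(datetime(2019, 6, 1), datetime(2019, 10, 1)))   # True
print(should_revoke(datetime(2019, 9, 15), datetime(2019, 10, 1)))  # False
```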

The Path Forward

Our new agreement with the FTC will bring its own set of requirements for oversight of app developers. It requires developers to annually certify compliance with our policies. Any developer that doesn’t comply with these requirements will be held accountable.

App developers remain a vital part of the Facebook ecosystem. They help to make our world more social and more engaging. But people need to know we’re protecting their privacy. And across the board, we’re making progress. We won’t catch everything, and some of what we do catch will be with help from others outside Facebook. Our goal is to bring problems to light so we can address them quickly, stay ahead of bad actors and make sure that people can continue to enjoy engaging social experiences on Facebook while knowing their data will remain safe.



Next Steps for the Global Internet Forum to Counter Terrorism | Facebook Newsroom

Today, members of the Global Internet Forum to Counter Terrorism (GIFCT) are meeting with government leaders, led by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron at the United Nations General Assembly to share progress on the steps taken to implement the Christchurch Call to Action. At this important convening, GIFCT is announcing it will become an independent organization led by an Executive Director and supported by dedicated technology, counterterrorism and operations teams. Evolving and institutionalizing GIFCT’s structure from a consortium of member companies will build on our early achievements and deepen industry collaboration with experts, partners and government stakeholders – all in an effort to thwart increasingly sophisticated efforts by terrorists and violent extremists to abuse digital platforms.

The new, independent GIFCT will integrate its existing work to develop technology, cultivate strong corporate policies and sponsor research with efforts to fulfill commitments in the nine-point action plan released after the Christchurch Call. More importantly, it will institutionalize the spirit of shared purpose that the Call represents. GIFCT has made significant achievements since it was founded in 2017, and worked closely with a range of governments, particularly under the auspices of the European Union Internet Forum, but the horrific terrorist attack in Christchurch and the extraordinary virality of the attacker’s video online illustrated the need to do even more. We believe these next steps are best executed within an industry-led framework with deep input from both civil society and governments.

Progress on Our Commitments to the Christchurch Call to Action

In addition to restructuring GIFCT to carry collaboration forward, we have made significant progress on some of the Christchurch Call’s core initiatives:

  • Introduced industry’s Content Incident Protocol to guide a coordinated response among GIFCT members to terrorist attacks like the one we saw in Christchurch and to combat the spread of terrorist content across platforms
  • Published a cross-platform, countering violent extremism toolkit, developed with the Institute for Strategic Dialogue, to help civil society organizations build online campaigns that challenge extremist ideologies, while prioritizing safety
  • Released algorithms for our hashing technology to help additional companies build their capacity to use and contribute to the hash sharing consortium
  • Published the first GIFCT Transparency Report to shine a light on our efforts as an industry

Adopting a New Vision for an Independent Institution

As an independent organization, GIFCT will adopt a new mission statement: “Prevent terrorists and violent extremists from exploiting digital platforms” to guide its work across four foundational goals:

  1. Empower a broad range of technology companies, independently and collectively, with processes and tools to prevent and respond to abuse of their platforms by terrorists and violent extremists.
  2. Enable multi-stakeholder engagement around terrorist and violent extremist misuse of the Internet and encourage stakeholders to meet key commitments consistent with the GIFCT mission.
  3. Promote civil dialogue online and empower efforts to direct positive alternatives to the messages of terrorists and violent extremists.
  4. Advance broad understanding of terrorist and violent extremist operations and their evolution, including the intersection of online and offline activities.

GIFCT was formally established by Facebook, Microsoft, Twitter and YouTube with the objective of disrupting terrorist abuse on their respective platforms. Since then, the consortium has grown as new global technology companies have joined; Amazon, LinkedIn and WhatsApp are the latest to come aboard. An even broader group collaborates closely on critical initiatives focused on tech innovation, knowledge-sharing and research. Most recently, we reached our 2019 goal of collectively contributing more than 200,000 hashes, or unique digital fingerprints, of known terrorist content into our shared database, enabling each of us to quickly identify and take action on potential terrorist content on our respective platforms.
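
For readers unfamiliar with hash sharing, here is a minimal sketch of the mechanism in Python. It is illustrative only: it uses an exact cryptographic hash for simplicity, whereas production systems typically use perceptual hashes so that visually similar media also match, and all names are invented.

```python
import hashlib

# Illustrative sketch of a shared hash database: members contribute
# fingerprints of known terrorist content, and each member can check
# uploads against the shared set without exchanging the media itself.

shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Digest of the raw bytes. Real systems hash perceptual features
    of images or video frames so near-duplicates also match."""
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """A member platform adds a known violating item's fingerprint."""
    shared_hash_db.add(fingerprint(content))

def check_upload(content: bytes) -> bool:
    """Return True if an upload matches previously shared content."""
    return fingerprint(content) in shared_hash_db

# Usage: one platform contributes a fingerprint; another detects a re-upload.
contribute(b"<bytes of a known violating video>")
print(check_upload(b"<bytes of a known violating video>"))  # True
print(check_upload(b"<bytes of an unrelated upload>"))      # False
```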

Establishing the Structure for an Independent Institution

The updated GIFCT will be led by an independent Executive Director, who will be responsible for leading and coordinating all operations, including core management, program implementation and fundraising, and engagement with the Operating Board and Advisory Committee.

GIFCT’s efforts will be organized into three key pillars of work:

  1. “Prevent” to equip digital platforms and civil society groups with awareness, knowledge and tools, including technology, to develop sustainable programs in their core business operations to disrupt terrorist and violent extremist activity online.
  2. “Respond” will develop tools and capacity, including via regular multi-stakeholder exercises, for platforms to cooperate with one another and with other stakeholders to mitigate the impact of a terrorist or violent extremist attack.
  3. “Learn” will empower researchers to study terrorism and counterterrorism, including creating and evaluating best practices for multi-stakeholder cooperation and preventing abuse of digital platforms.

GIFCT will establish working groups to engage stakeholders from government and civil society on specific projects and to advise GIFCT’s efforts. These working groups will be able to coordinate multistakeholder funding for specific programmatic efforts supported by GIFCT. Initial working groups are expected to address topics such as positive interventions with respect to radicalization, algorithmic outcomes, improving the multistakeholder Crisis Response Protocol and legal challenges to data sharing.

GIFCT governance will reside with the industry-led Operating Board, which will work closely with a multistakeholder Independent Advisory Committee and a broad Multistakeholder Forum. The Independent Advisory Committee will be chaired by a non-governmental representative and include members from civil society, government and inter-governmental entities. So far, the United States, United Kingdom, France, Canada, New Zealand, Japan, the United Nations Counter-Terrorism Committee Executive Directorate and the European Commission have signed on to the Advisory Committee, and we look forward to sharing additional members, including advocacy groups, human rights specialists, foundations, researchers and technical experts, soon. The Multistakeholder Forum is designed as a broader community of dedicated parties interested in regular updates from GIFCT and in engaging in events designed to funnel broad feedback to the industry Operating Board and Executive Director. Here is more information about the new structure.

Since its founding in 2017, GIFCT has focused its efforts on innovative and emerging technology solutions, knowledge sharing and supporting research into terrorists’ use of digital platforms. We are grateful for the support of, and collaboration with, our member companies, governments and civil society organizations that share our commitment to prevent and disrupt terrorists and violent extremists from exploiting technology. Most recently, we conducted 11 separate workshops in partnership with our UN CTED-backed partner Tech Against Terrorism to facilitate outreach, knowledge sharing and technology capacity building with smaller tech platforms, government and non-governmental organizations and academic experts. We also invested in the Global Research Network on Terrorism and Technology (GRNTT) to develop research and policy recommendations designed to prevent terrorist exploitation of technology. But there’s more to do. We are confident that this new chapter will provide greater resources and capacity for our collective long-term success, and we look forward to sharing further progress.



Facebook, Elections and Political Speech | Facebook Newsroom

By Nick Clegg, VP of Global Affairs and Communications

Speaking at the Atlantic Festival in Washington DC today, I set out the measures that Facebook is taking to prevent outside interference in elections and Facebook’s attitude towards political speech on the platform. This is grounded in Facebook’s fundamental belief in free expression and respect for the democratic process, as well as the fact that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is.

You can read the full text of my speech below, but as I know there are often lots of questions about our policies and the way we enforce them, I thought I’d share the key details.

We rely on third-party fact-checkers to help reduce the spread of false news and other types of viral misinformation, like memes or manipulated photos and videos. We don’t believe, however, that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny. That’s why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here.

Facebook has had a newsworthiness exemption since 2016. This means that if someone makes a statement or shares a post which breaks our community standards, we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads – if someone chooses to run an ad on Facebook, the ad must still fall within our Community Standards and our advertising policies.

When we make a determination as to newsworthiness, we evaluate the public interest value of the piece of speech against the risk of harm. When balancing these interests, we take a number of factors into consideration, including country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards. 
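
As a rough illustration of that balancing test, the sketch below encodes the factors named above; the numeric weights and threshold are invented for exposition and are not Facebook’s actual scoring.

```python
from dataclasses import dataclass

# Illustrative only: factor names follow the post, weights are invented.
@dataclass
class SpeechContext:
    election_underway: bool       # country-specific circumstance
    relates_to_governance: bool   # nature of the speech
    free_press: bool              # political structure of the country
    incites_violence: bool        # severity of the potential harm

def newsworthiness_allows(ctx: SpeechContext, public_interest: float,
                          harm_risk: float) -> bool:
    """Allow a violating post only if public interest outweighs harm."""
    if ctx.incites_violence:
        # A safety risk of this severity outweighs public interest value.
        return False
    score = public_interest
    score += 0.2 if ctx.election_underway else 0.0
    score += 0.2 if ctx.relates_to_governance else 0.0
    score += 0.1 if ctx.free_press else 0.0
    return score > harm_risk
```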

Read the full speech below.

Facebook

For those of you who don’t know me, which I suspect is most of you, I used to be a politician – I spent two decades in European politics, including as Deputy Prime Minister in the UK for five years.

And perhaps because I acquired a taste for controversy in my time in politics, a year ago I came to work for Facebook.

I don’t have long with you, so I just want to touch on three things: I want to say a little about Facebook; about how we are getting ourselves ready for the 2020 election; and about our basic attitude towards political speech.

So…Facebook. 

As a European, I’m struck by the tone of the debate in the US around Facebook. Here you have this global success story, invented in America, based on American values, that is used by a third of the world’s population.

A company that has created 40,000 US jobs in the last two years, is set to create 40,000 more in the coming years, and contributes tens of billions of dollars to the economy – with plans to spend more than $250 billion in the US in the next four years.

And while Facebook is subject to a lot of criticism in Europe, in India where I was earlier this month, and in many other places, the only place where it is being proposed that Facebook and other big Silicon Valley companies should be dismembered is here.

And whilst it might surprise you to hear me say this, I understand the underlying motive which leads people to call for that remedy – even if I don’t agree with the remedy itself.

Because what people want is that there should be proper competition, diversity, and accountability in how big tech companies operate – with success comes responsibility, and with power comes accountability.

But chopping up successful American businesses is not the best way to instill responsibility and accountability. For a start, Facebook and other US tech companies not only face fierce competition from each other for every service they provide – for photo and video sharing and messaging there are rival apps with millions or billions of users – but they also face increasingly fierce competition from their Chinese rivals. Giants like Alibaba, TikTok and WeChat.

More importantly, pulling apart globally successful American businesses won’t actually do anything to solve the big issues we are all grappling with – privacy, the use of data, harmful content and the integrity of our elections. 

Those things can and will only be addressed by creating new rules for the internet, new regulations to make sure companies like Facebook are accountable for the role they play and the decisions they take.

That is why we argue in favor of better regulation of big tech, not the break-up of successful American companies. 

Elections

Now, elections. It is no secret that Facebook made mistakes in 2016, and that Russia tried to use Facebook to interfere with the election by spreading division and misinformation. But we’ve learned the lessons of 2016. Facebook has spent the three years since then building its defenses to stop that happening again.

  • Cracking down on fake accounts – the main source of fake news and malicious content – preventing millions from being created every day;
  • Bringing in independent fact-checkers to verify content;
  • Recruiting an army of people – now 30,000 – and investing hugely in artificial intelligence systems to take down harmful content.

And we are seeing results. Last year, a Stanford report found that interactions with fake news on Facebook were down by two-thirds since 2016.

I know there’s also a lot of concern about so-called deepfake videos. We’ve recently launched an initiative called the Deepfake Detection Challenge, working with the Partnership on AI, companies like Microsoft and universities like MIT, Berkeley and Oxford, to find ways to detect this new form of manipulated content so that we can identify it and take action.

But even when the videos aren’t as sophisticated – such as the now infamous Speaker Pelosi video – we know that we need to do more.

As Mark Zuckerberg has acknowledged publicly, we didn’t get to that video quickly enough and too many people saw it before we took action. We must and we will get better at identifying lightly manipulated content before it goes viral and provide users with much more forceful information when they do see it.

We will be making further announcements in this area in the near future.

Crucially, we have also tightened our rules on political ads. Political advertising on Facebook is now far more transparent than anywhere else – including TV, radio and print advertising.

People who want to run these ads now need to submit ID and information about their organization. We label the ads and let you know who’s paid for them. And we put these ads in a library for seven years so that anyone can see them.
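
That ad library is also queryable programmatically. The sketch below assumes the Graph API’s ads_archive endpoint and a handful of its documented fields; the API version, field names and the ACCESS_TOKEN placeholder should be checked against the current Ad Library API documentation.

```python
import requests  # pip install requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # requires a verified developer account

resp = requests.get(
    "https://graph.facebook.com/v5.0/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['US']",
        "search_terms": "election",
        "fields": "page_name,funding_entity,ad_creative_body,"
                  "ad_delivery_start_time",
        "limit": 10,
    },
)
for ad in resp.json().get("data", []):
    # funding_entity is the "Paid for by" disclaimer shown on each ad
    print(ad.get("page_name"), "|", ad.get("funding_entity"))
```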

Political speech

Of course, stopping election interference is only part of the story when it comes to Facebook’s role in elections. Which brings me to political speech.

Freedom of expression is an absolute founding principle for Facebook. Since day one, giving people a voice to express themselves has been at the heart of everything we do. We are champions of free speech and defend it in the face of attempts to restrict it. Censoring or stifling political discourse would be at odds with what we are about.

In a mature democracy with a free press, political speech is a crucial part of how democracy functions. And it is arguably the most scrutinized form of speech that exists.

 In newspapers, on network and cable TV, and on social media, journalists, pundits, satirists, talk show hosts and cartoonists – not to mention rival campaigns – analyze, ridicule, rebut and amplify the statements made by politicians.

At Facebook, our role is to make sure there is a level playing field, not to be a political participant ourselves.

To use tennis as an analogy, our job is to make sure the court is ready – the surface is flat, the lines painted, the net at the correct height. But we don’t pick up a racket and start playing. How the players play the game is up to them, not us.

We have a responsibility to protect the platform from outside interference, and to make sure that when people pay us for political ads we make it as transparent as possible. But it is not our role to intervene when politicians speak.

That’s why I want to be really clear today – we do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.

Of course, there are exceptions. Broadly speaking they are two-fold: where speech endangers people; and where we take money, which is why we have more stringent rules on advertising than we do for ordinary speech and rhetoric.

I was an elected politician for many years. I’ve had both words and objects thrown at me, I’ve been on the receiving end of all manner of accusations and insults.

It’s not new that politicians say nasty things about each other – that wasn’t invented by Facebook. What is new is that now they can reach people with far greater speed and at a far greater scale. That’s why we draw the line at any speech which can lead to real world violence and harm.

I know some people will say we should go further. That we are wrong to allow politicians to use our platform to say nasty things or make false claims. But imagine the reverse.

Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be. In open democracies, voters rightly believe that, as a general rule, they should be able to judge for themselves what politicians say.

Conclusion

So, in conclusion, I understand the debate about big tech companies and how to tackle the real concerns that exist about data, privacy, content and election integrity. But I firmly believe that simply breaking them up will not make the problems go away. The real solutions will only come through new, smart regulation instead.

And I hope I have given you some reassurance about our approach to preventing election interference, and some clarity over how we will treat political speech in the run up to 2020 and beyond.

Thank you.




Oculus Connect 6: Introducing Hand Tracking on Oculus Quest, Facebook Horizon and More | Facebook Newsroom

Today, at our sixth annual Oculus Connect conference, we shared our vision for VR and our plans to build the future of computing with people at the center. VR helps give people the freedom to be wherever they want and still have the power to connect to the people, places and opportunities that matter most. During today’s keynote, we revealed innovations that will change the way we interact in VR, and ultimately AR. 

Visit the Oculus blog for a full recap of today’s announcements. Here’s a look at the highlights.

Building a Sustainable VR Ecosystem

In order for VR to truly transform the way we live, work and connect with each other, we need to build a sustainable ecosystem — from action-packed games and artistic experiences, to cutting-edge training apps and beyond. Today, we announced that people have spent $100 million USD on the Oculus Store and 20% of that is from Quest alone, which is a testament to the health of the ecosystem, as well as the passion and commitment of the developers and content creators designing and building for VR today. 

Oculus Quest Just Got Better

Oculus Link is a new way for people who own Quest and a gaming PC to access content from the Rift Platform. We’ll release the software in beta this November, and it will work with most high-quality USB-C cables. Later this year, we’ll also release a premium cable with maximum throughput to run Rift content and a longer cord so you can move easily in VR. Click here to learn more.

Hand Tracking on Oculus Quest

When Oculus Touch controllers launched in 2016, they ushered in a new era of VR by introducing hand presence: the sensation that your own hands are actually there with you in a virtual environment. Today, we’re marking another important milestone with the announcement of hand tracking on Oculus Quest, enabling natural interaction in VR using your own hands on an all-in-one device — no extra hardware required. 

This is an important step, not just for VR, but for AR as well. Hand tracking on Quest will be released as an experimental feature for Quest owners and a developer SDK in early 2020. Click here to learn more.

Introducing Facebook Horizon: An Ever-Expanding VR World

Our goal is to put people at the center of computing, not just with great hardware, but with amazing software experiences as well. Today, we announced Facebook Horizon, a new social experience in VR where you can build your own worlds with easy-to-use tools (no coding skills required). Click here to learn more or sign up for the beta scheduled to begin early next year at oculus.com/facebookhorizon.

These innovations help us continue the journey toward putting people at the center of AR and VR, giving them the power to create and connect with each other. 




Privacy Matters: Threads | Facebook Newsroom

By Karina Newton, Head of Policy, Instagram

We know people want to stay connected with their close friends online. That’s why Facebook is launching Threads, a new camera-first messaging app from Instagram for keeping up with your close friends in a dedicated space. We built Threads with privacy in mind, so that you can feel comfortable using the app to communicate with your close friends.

What choices and controls do I have?

  • Close Friends: Threads is designed for you to share what you’re up to with the people on your close friends list. We introduced Close Friends last year, so you can choose to share your Stories with a smaller group of people. Your close friends list is private and totally in your control – only you can see it, no one can request to be added to it and no one will be notified if you add or remove them from your list – so you can feel comfortable adjusting it when you need. On Threads, there is no pending inbox with message requests, so only the people you choose to put on your list can send you a message.
  • Status: Status is an opt-in feature on Threads for sharing what you’re up to with your close friends. You can choose from suggested statuses like “📚 Studying,” create your own such as “😅 Procrastinating” or turn on Auto Status, which automatically lets your close friends know what you’re up to without your having to actively send them a message. If you turn it on, Auto Status is informed by your device, like the charging state of your phone (“🔌 Low battery”), or by location services, which you’ll need to allow in your phone settings. Location services must be turned on for Auto Status to identify and share the general category of the place you’re at, like “🏖 At the beach,” “🚗 On the move” or “🏠 At home.” For example, the Auto Status “☕ At a cafe” is set when you’re at a coffee shop, but it won’t share the name of the cafe or the address. Auto Status is completely opt-in, and you can change it or turn it off at any time.

How does this impact data collection and the ads I see?

If you enable Auto Status, Threads will request your location, movement, battery level and network connection from your phone in order to determine what context to share. For example, Auto Status might use your precise location to show your friends that you’re “☕ At a cafe.”  Or Auto Status might detect that you’re biking and set your status to “🚲 On the Move.” Before this is enabled, you’ll be told what information Auto Status is requesting and will be asked to specifically agree.  Auto Status will not share your precise location with your friends, and when Threads sends location information to our server to look up locations, it’s not stored there – this information is only stored on your device for a limited time. It is also deleted if you remove Threads. 
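
The privacy pattern described here, deriving a coarse shareable category while precise signals stay on the device, can be sketched as follows. The signal names, thresholds and categories are illustrative, not the Threads implementation.

```python
from dataclasses import dataclass

@dataclass
class DeviceSignals:
    latitude: float    # precise location is never shared with friends
    longitude: float
    speed_mps: float   # movement
    battery_pct: int   # battery level / charging state

def lookup_place_category(lat: float, lon: float) -> str:
    # Stand-in for the server lookup: per the post, the server resolves
    # only a general category and does not store the coordinates it sees.
    return "cafe"

def auto_status(signals: DeviceSignals) -> str:
    """Return the coarse status shared with close friends."""
    if signals.battery_pct <= 15:
        return "🔌 Low battery"
    if signals.speed_mps > 2.0:
        return "🚗 On the move"
    if lookup_place_category(signals.latitude, signals.longitude) == "cafe":
        return "☕ At a cafe"  # category only, never the name or address
    return "🏠 At home"

print(auto_status(DeviceSignals(40.7, -74.0, 0.1, 80)))
```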

The way we use data from other parts of Facebook and Instagram to deliver relevant ads to you remains the same. Precise location information collected for Auto Status is a new feature specific to Threads and will not be used for ads.

Who sees my information and activity?

Threads is for connecting with your close friends list on Instagram, which you control. Your conversations are between you and the people you’re talking to, and only your close friends will see your status. 

As with the main Instagram app, you can easily and anonymously report any message you feel violates our Community Guidelines. We also have resources for parents that highlight some of the tools we have in place to keep teens safe. We’re committed to protecting people’s well-being and privacy, so people can feel comfortable connecting with their close friends.




Introducing Threads | Facebook Newsroom

By Robby Stein, Director of Product, Instagram

Today, Facebook is launching Threads from Instagram, a new camera-first messaging app that helps you stay connected to your close friends.

Over the last few years, we’ve introduced several new ways to share visually on Instagram and connect with the people you care about, from sharing everyday moments on Stories to visual messages on Direct. But for your smaller circle of friends, we saw the need to stay more connected throughout the day, so you can communicate what you’re doing and how you’re feeling through photos and videos. That’s why we built Threads, a new way to message with close friends in a dedicated, private space.

Threads is a standalone app designed with privacy, speed and your close connections in mind. You can share photos, videos, messages, Stories and more with your Instagram close friends list. You are in control of who can reach you on Threads, and you can customize the experience around the people who matter most. 

Message Only Your Close Friends

Last year, we introduced Close Friends for sharing more personal moments with a select group of people you choose. Now, you can use Threads to message those people on your Instagram close friends list and you’ll have a dedicated inbox and notifications just for them. If you don’t have a list set up yet, you can make one directly from Threads when you download the app.

Share Photos and Videos Instantly

Threads is the fastest way to share a photo or video with your close friends on Instagram. It opens directly to the camera and allows you to add shortcuts, so you can share what you’re doing in just two taps.

Find Out What Friends Are Up To With Status

We’ve heard that you want an easier way to keep up with your friends throughout the day – especially when you don’t have the time to send a photo or have a conversation. That’s why we created status. You can choose from a suggested status (📚 Studying), create your own (😅 Procrastinating) or turn on Auto Status (🚗 On the move), which automatically shares little bits of context on where you are without giving away your coordinates. Only your close friends will see your status, and it’s completely opt-in.

Status was created with your privacy in mind – you control whether you share your status and with whom. Learn more about privacy and Threads here. 

Continue Using Instagram Direct As You Do Today

Threads offers a new, dedicated home for your favorite conversations. Messages from your close friends list will appear in both Threads and Direct, so you have full control over how and with whom you want to interact.

Threads will begin rolling out globally today. We hope that Threads can bring you a little closer to the people you care about. 




Removing Coordinated Inauthentic Behavior in UAE, Nigeria, Indonesia and Egypt | Facebook Newsroom

By Nathaniel Gleicher, Head of Cybersecurity Policy

We removed multiple Pages, Groups and accounts that were involved in coordinated inauthentic behavior on Facebook and Instagram. We found three separate operations: one originated in the United Arab Emirates, Egypt and Nigeria; the other two originated in Indonesia and Egypt, respectively. The three campaigns were unconnected, but each created networks of accounts to mislead others about who they were and what they were doing. We have shared information about our findings with industry partners.

We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people. We’re taking down these Pages, Groups and accounts based on their behavior, not the content they posted. In each of these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.

We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing challenge. We’re committed to continually improving to stay ahead. That means building better technology, hiring more people and working more closely with law enforcement, security experts and other companies.

What We’ve Found So Far

We removed 211 Facebook accounts, 107 Pages, 43 Groups and 87 Instagram accounts for engaging in coordinated inauthentic behavior that originated in the UAE, Egypt and Nigeria. There were multiple sets of activity, each localized for a specific country or region, primarily in the Middle East and Africa, with some in Europe, North and South America, South Asia, East Asia and Australia. The people behind this network used fake accounts – some of which had already been disabled by our automated systems – to run Pages, post in Groups, disseminate their content and artificially increase engagement. They managed Pages – some of which changed names over time – sharing local news in targeted countries and promoting content about the UAE. The Page admins and account owners primarily posted videos, photos and web links related to local events and issues in a particular country, along with some content on topics including elections and candidates; the UAE’s activity in Yemen; the first Emirati astronaut; criticism of Qatar, Turkey and Iran; the Iran nuclear deal; and criticism of the Muslim Brotherhood. Although the people behind this activity attempted to conceal their identities, our investigation found links to three marketing firms: Charles Communications in the UAE, MintReach in Nigeria and Flexell in Egypt.

  • Presence on Facebook and Instagram: 211 Facebook accounts, 107 Pages, 43 Groups and 87 Instagram accounts.
  • Followers: Less than 1.4 million accounts followed one or more of these Pages, less than 100 accounts joined one or more of the Groups and less than 70,000 accounts followed at least one of these Instagram accounts.
  • Advertising: Less than $150,000 spent on Facebook ads paid for primarily in US dollars, Emirati dirham and Indian rupee.

We identified these accounts as part of our follow-on investigation into coordinated inauthentic behavior in the region that we had previously removed.

Below is a sample of the content posted by some of these Pages:

We also removed 69 Facebook accounts, 42 Pages and 34 Instagram accounts that were involved in domestic-focused coordinated inauthentic behavior in Indonesia. The people behind this network used fake accounts to manage Pages, disseminate their content and drive people to off-platform sites. They primarily posted in English and Bahasa Indonesia about West Papua, with some Pages sharing content in support of the independence movement and others posting criticism of it. Although the people behind this activity attempted to conceal their identities, our investigation found links to InsightID, an Indonesian media firm.

  • Presence on Facebook and Instagram: 69 Facebook accounts, 42 Pages and 34 Instagram accounts.
  • Followers: About 410,000 accounts followed one or more of these Pages and around 120,000 accounts followed at least one of these Instagram accounts.
  • Advertising: About $300,000 spent on Facebook ads paid for primarily in Indonesian rupiah.

We identified these accounts through ongoing investigations into suspected coordinated inauthentic behavior in the region.

Below is a sample of the content posted by some of these Pages:

Finally, we removed 163 Facebook accounts, 51 Pages, 33 Groups and 4 Instagram accounts that were involved in coordinated inauthentic behavior originating from Egypt and focused on Somalia, Yemen, Saudi Arabia, Sudan, Tunisia, Iran, Turkey, Lebanon and Qatar. The people behind this activity used fake accounts – some of which had previously been disabled by our automated systems – to manage Pages posing as independent local news organizations, post in Groups, amplify their content and drive people to off-platform domains. Some of these Pages appear to have been purchased, and some changed names over time. The Page admins and account owners typically posted about domestic news and political topics, including content in support of the United Arab Emirates, Saudi Arabia and Egypt; criticism of Qatar, Iran and Turkey; and Yemen’s southern separatist movement. Although the people behind this activity attempted to conceal their identities, our investigation found links to the Egyptian newspaper El Fagr.

  • Presence on Facebook and Instagram: 163 Facebook accounts, 51 Pages, 33 Groups and 4 Instagram accounts.
  • Followers: Around 5.6 million accounts followed one or more of these Pages, less than 3,000 accounts joined one or more of the Groups and less than 3,000 accounts followed at least one of these Instagram accounts.
  • Advertising: Less than $31,000 spent on Facebook ads paid for primarily in US dollars and Egyptian pounds.

We identified these accounts through ongoing investigations into suspected coordinated inauthentic behavior in the region.

Below is a sample of the content posted by some of these Pages:

Translation: Saudi interior minister meets with the Afghan president in Jeddah

Translation: Today … press conference to announce details on the third edition of the Elgouna Film Festival

Translation: #Radio_Arab_Yemen | Between deescalation and escalation … statements reveal ambivalence in Iranian rhetoric

Translation: #QatarTampersWithSomaliasSecurity (Text below): The helping hand of the Emirates in Somalia: – The Emirates has taken responsibility for building Somali state institutions – [The Emirates has] trained security and aid organizations and trained their workers – [The Emirates has] worked to modernize the army, police, hospitals, schools, and infrastructure in several regions




Let’s Talk About Mental Health | Facebook Newsroom

By Antigone Davis, Global Head of Safety

Social media is where people can turn to celebrate life’s most joyful moments and seek support in some of the hardest. While an online community can provide invaluable support, we know that many find it uncomfortable to share personal feelings in a broad public setting.

Private messaging, on the other hand, can make it easier to talk about emotional or serious subjects, according to a survey Facebook conducted in the UK, US and Australia. Respondents said they could communicate more clearly and be more open when messaging versus in person. In fact, 80% of people surveyed said they felt they could be completely honest when messaging.

In honor of World Mental Health Day, and to help people have important conversations around mental health, we’re releasing a “Let’s Talk” Stories filter on Facebook and Messenger. Developed with input from the World Health Organization (WHO), the filter acts as an invitation for friends who might be struggling to reach out for support through Messenger.

We’re also releasing a “Let’s Talk” sticker pack on Messenger with 16 stickers that can help when words are hard to find. Each time a sticker is sent, Facebook will donate $1 to a group of mental health organizations, up to $1 million USD. It’s our hope that these tools will make it easier for people to begin conversations that can lead to support.

Let's Talk Sticker Pack

It takes less than a minute to show someone you care. This year for World Mental Health Day, the World Health Organization is encouraging people to take 40 seconds of action to let people who are struggling know they’re not alone. Sharing your “Let’s Talk” selfie is an easy way to do that. To use the World Mental Health Day “Let’s Talk” camera filter, open the camera in Facebook or Messenger and tap on the filter on the bottom of your screen. You can also download the sticker pack by clicking on the smiley face in the text box of any Messenger conversation.

Showing you’re available to help a friend is the first step, but what should you do next? According to mental health experts, it’s important to show that you care and are really listening. Express your concern, allow them to open up and help them find resources. Facebook offers some resources through our safety and well-being center. You can also find support there for yourself if you are struggling. Facebook Groups can be another place to go to find supportive connections. More than 2.5 million people in the US, UK and Australia are members of at least one of the 7,000 groups dedicated to supporting people’s mental health.

We encourage people to look for more resources from local, state, federal or international organizations. A list of organizations Facebook is donating to can be found below. 


