By Ime Archibong, VP of Product Partnerships
We wanted to provide an update on our ongoing App Developer Investigation, which we began in March 2018 as part of our response to the Cambridge Analytica incident.
We promised then that we would review all of the apps that had access to large amounts of information before we changed our platform policies in 2014. The investigation has involved hundreds of people: attorneys, external investigators, data scientists, engineers, policy specialists, platform partners and other teams across the company. This review helps us better understand patterns of abuse so that we can root out bad actors among developers.
We initially identified apps for investigation based on how many users they had and how much data they could access. Now, we also identify apps based on signals associated with an app’s potential to abuse our policies. Where we have concerns, we conduct a more intensive examination. This includes a background investigation of the developer and a technical analysis of the app’s activity on the platform. Depending on the results, a range of actions could be taken, from requiring developers to submit to in-depth questioning, to conducting inspections, to banning an app from the platform.
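The escalation logic described above — flag apps by reach and data access, weigh abuse signals, then route to an appropriate action — can be sketched as a simple scoring function. This is purely illustrative: the field names, thresholds, and signals below are invented for the example, not Facebook's actual criteria.

```python
def triage(app):
    """Hypothetical triage: score an app by reach, data access, and
    abuse-related signals, then route it to a review tier."""
    score = 0
    if app["monthly_users"] > 100_000:          # large user base
        score += 2
    if "extended_data_access" in app["permissions"]:  # broad data access
        score += 2
    score += len(app["abuse_signals"])          # one point per signal

    if score >= 4:
        return "intensive_examination"  # background check + technical analysis
    if score >= 2:
        return "request_information"    # in-depth questioning of the developer
    return "monitor"

# Example: a large app with broad data access and one abuse signal.
app = {"monthly_users": 500_000,
       "permissions": ["extended_data_access"],
       "abuse_signals": ["spike_in_api_calls"]}
print(triage(app))  # intensive_examination
```

A real pipeline would combine many more signals and human review; the point here is only the tiered routing from signals to actions.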
Our App Developer Investigation is by no means finished. But there is meaningful progress to report so far. To date, this investigation has addressed millions of apps. Of those, tens of thousands have been suspended for a variety of reasons while we continue to investigate.
It is important to understand that the apps that have been suspended are associated with about 400 developers. This is not necessarily an indication that these apps were posing a threat to people. Many were not live but were still in their testing phase when we suspended them. It is not unusual for developers to have multiple test apps that never get rolled out. And in many cases, the developers did not respond to our request for information so we suspended them, honoring our commitment to take action.
In a few cases, we have banned apps completely. That can happen for any number of reasons, including inappropriately sharing data obtained from us, making data publicly available without protecting people’s identities, or other clear violations of our policies. One app we banned, myPersonality, shared information with researchers and companies with only limited protections in place, and then refused our request to participate in an audit. To date, we have not confirmed any instances of misuse beyond those we have already notified the public about, but our investigation is not yet complete. We have been in touch with regulators and policymakers on these issues, and we will continue working with them as our investigation continues.
We’ve also taken legal action when necessary. In May, we filed a lawsuit in California against Rankwave, a South Korean data analytics company that failed to cooperate with our investigation. We’ve also taken legal action against developers in other contexts. For example, we filed an action against LionMobi and JediMobi, two companies that used their apps to infect users’ phones with malware in a profit-generating scheme. This lawsuit is one of the first of its kind against this practice. We detected the fraud, stopped the abuse and refunded advertisers. In another case, we sued two Ukrainian men, Gleb Sluchevsky and Andrey Gorbachov, for using quiz apps to scrape users’ data off our platform.
And we are far from finished. Each month, we incorporate what we have learned and reexamine the ways that developers can build using our platforms. We have also improved the ways we investigate and enforce against potential policy violations that we find.
Beyond this investigation, we’ve made widespread improvements to how we evaluate and set policies for all developers that build on our platforms. We’ve removed a number of APIs, the channels that developers use to access various types of data. We’ve grown our teams dedicated to investigating and enforcing against bad actors. This will allow us to review, on an annual basis, every active app with access to more than basic user information. And when we find violators, we’ll take a range of enforcement actions.
We have also developed new rules to more strictly control a developer’s access to user data. Apps that provide minimal utility for users, like personality quizzes, may not be allowed on Facebook. Apps may not request a person’s data unless the developer uses it to meaningfully improve the quality of that person’s experience. Developers must also clearly demonstrate to people how their data will be used to provide that experience.
We have clarified that we can suspend or revoke a developer’s access to any API that it has not used in the past 90 days. And we will not allow apps on Facebook that request a disproportionate amount of information from users relative to the value they provide.
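The 90-day inactivity rule above is mechanical enough to sketch in code. This is a minimal, hypothetical illustration of the policy check — the function and field names are invented, and a real system would operate on platform audit logs rather than an in-memory dictionary.

```python
from datetime import datetime, timedelta

# Hypothetical policy window: access unused for this long may be revoked.
REVOCATION_WINDOW = timedelta(days=90)

def unused_permissions(granted, last_used, now):
    """Return the granted API permissions a developer has not called
    within the revocation window (or has never called at all)."""
    stale = []
    for api in granted:
        last = last_used.get(api)
        if last is None or now - last > REVOCATION_WINDOW:
            stale.append(api)
    return stale

# Example: one permission unused for 120 days, one used yesterday.
now = datetime(2019, 9, 20)
granted = ["user_friends", "user_likes"]
last_used = {"user_friends": now - timedelta(days=120),
             "user_likes": now - timedelta(days=1)}
print(unused_permissions(granted, last_used, now))  # ['user_friends']
```

The never-used case (`last is None`) is treated the same as a stale one, which matches the spirit of revoking access a developer is not actively using.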
The Path Forward
Our new agreement with the FTC introduces its own set of requirements for overseeing app developers. It requires developers to certify compliance with our policies annually. Any developer that does not meet these requirements will be held accountable.
App developers remain a vital part of the Facebook ecosystem. They help to make our world more social and more engaging. But people need to know we’re protecting their privacy. And across the board, we’re making progress. We won’t catch everything, and some of what we do catch will be with help from others outside Facebook. Our goal is to bring problems to light so we can address them quickly, stay ahead of bad actors and make sure that people can continue to enjoy engaging social experiences on Facebook while knowing their data will remain safe.