A picture illustration shows a Facebook logo reflected in a person’s eye, in Zenica, March 13, 2015. REUTERS/Dado Ruvic
(Reuters) – Facebook Inc (FB.O) will buy back an additional $9 billion of its shares, as it looks to pacify investors following a slump in its stock.
The social media giant’s shares, which have tumbled more than 22 percent this year, rose nearly 1 percent in extended trading.
The new program is in addition to a share buyback plan of up to $15 billion announced by the company last year.
Facebook is being investigated by lawmakers in Britain after consultancy Cambridge Analytica, which worked on Donald Trump’s U.S. presidential campaign, obtained personal data of 87 million Facebook users from a researcher.
Concerns over the social media giant’s practices, the role of political adverts and possible foreign interference in the 2016 Brexit vote and U.S. elections are among the topics being investigated by British and European regulators.
Reporting by Vibhuti Sharma in Bengaluru; Editing by Anil D’Silva
Mark Zuckerberg would like you to know that despite a scathing report in The New York Times, which depicts Facebook as a ruthless, self-concerned corporate behemoth, things are getting better—at least, the way he sees it.
In a lengthy call with reporters Thursday, and an equally lengthy “note” published on Facebook, the company’s CEO laid out a litany of changes Facebook is making, designed to curb toxic content on the platform and provide more transparency into its content decisions. But perhaps the most consequential update is that the Facebook News Feed algorithm will now try to limit the spread of sensationalist content on the platform, a major change from how the company has traditionally approached moderation. All of it is in service of restoring trust in a company whose public reputation—and that of its leaders—has taken near-constant body blows over the past two years.
“When you have setbacks like we’ve had this year that’s a big issue, and it does erode trust, and it takes time to build that back,” Zuckerberg said on the call. “Certainly our job is not only to have this stuff at a good level and to continually improve, but to be ahead of new issues. I think over the last couple of years that’s been one of the areas where we’ve been most behind, especially around the election issues.”
Zuckerberg’s words come a day after the Times published a damning report that portrays Facebook as not merely behind on issues of election interference, as Zuckerberg suggests, but actively working to downplay what it knew about that interference. It suggests that Facebook’s executives, wary of picking sides in a partisan battle over Russian interference in the 2016 election, aimed to minimize Russia’s role in spreading propaganda on the platform. The story states that Facebook’s former head of cybersecurity, Alex Stamos, was chastised by the company’s chief operating officer, Sheryl Sandberg, for investigating Russian actions without the company’s approval and berated again for divulging too much information about it to members of Facebook’s board.
In his remarks, Zuckerberg flatly denied this allegation. “We’ve certainly stumbled along the way, but to suggest that we weren’t interested in knowing the truth or that we wanted to hide what we knew or that we tried to prevent investigations is simply untrue,” he said. (Stamos, for his part, tweeted earlier on Thursday that he was “never told by Mark, Sheryl or any other executives not to investigate.”)
The Times story also alleges that Facebook waged a smear campaign against its competitors through an opposition research firm called Definers Public Relations. The firm repeatedly worked to tie Facebook’s detractors, including groups like the Open Markets Institute and Freedom from Facebook, to billionaire George Soros. Critics say that in doing so, Facebook engaged with the same anti-Semitic tropes that have been used by white nationalists and other hate groups that regularly villainize Soros.
Zuckerberg denied having any personal knowledge of Definers’ work with Facebook, and said he and Sandberg only heard about the relationship yesterday. That’s despite the fact that Definers often coordinated large-scale press calls on behalf of Facebook and its employees and, in at least one case, sat in on meetings between Facebook and the media.
After Zuckerberg read the story in the Times, he says Facebook promptly ended its relationship with the firm. “This type of firm might be normal in Washington, but it’s not the type of thing I want Facebook associated with, which is why we’re no longer going to be working with them.”
But while Zuckerberg said he had no knowledge of Definers’ work or its messaging, he defended Facebook’s criticism of activist groups like Freedom from Facebook. He said the intention was not to attack Soros, for whom Zuckerberg said he has “tremendous respect,” but show that Freedom from Facebook “was not a spontaneous grassroots effort.”
Zuckerberg declined to assign blame for the tactics allegedly employed by Definers, or to comment on broader personnel issues within Facebook itself. He said only that Sandberg, who has been overseeing Facebook’s lobbying efforts and who is portrayed unfavorably throughout the Times story, is “doing great work for the company.” “She’s been an important partner to me and continues to be and will continue to be,” Zuckerberg said. (Sandberg was not on the call.)
For the umpteenth time this year, Zuckerberg found himself working overtime to clean up Facebook’s mess, even as he wanted desperately to tout the progress the company’s been making. And it has made important progress. In Myanmar, where fake news on Facebook has animated a brutal ethnic cleansing campaign against the Rohingya people, the company has hired 100 Burmese speakers to moderate content there and is now automatically identifying 63 percent of the hate speech it takes down, up from just 13 percent at the end of last year. Facebook has expanded its safety and security team to 30,000 people globally, more than the 20,000 people the company initially set out to hire this year. It’s also changed its content takedown process, allowing people to appeal the company’s decisions about content they post or report. On Thursday, Facebook announced that within the next year, it will create an independent oversight body to handle content appeals.
But by far the biggest news to come out of Thursday’s announcements is the change coming to Facebook’s News Feed algorithm. Zuckerberg acknowledged what most observers already know to be one of Facebook’s most fundamental problems: that sensationalist, provocative content, even content that doesn’t explicitly violate Facebook’s policies, tends to get the most engagement on the platform. “As content gets closer to the line of what is prohibited by our community standards, we see people tend to engage with it more,” he said. “This seems to be true regardless of where we set our policy lines.”
This issue is arguably what undergirds most of Facebook’s problems the past few years. It’s why divisive political propaganda was so successful during the 2016 campaign and why fake news has been able to flourish. Until now, Facebook has operated in a black-and-white environment, where content either violates the rules or it doesn’t, and if it doesn’t, it’s free to amass millions of clicks, even if the poster’s intention is to mislead and stoke outrage. Now Facebook is saying that even content that doesn’t explicitly violate Facebook’s rules might see its reach reduced. According to Zuckerberg’s post, that includes, among other things, “photos close to the line of nudity” and “posts that don’t come within our definition of hate speech but are still offensive.”
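The demotion Zuckerberg describes can be sketched as a ranking penalty that grows as a post nears the policy line. Below is a minimal Python sketch; the function name, the quadratic penalty curve, and the "borderline" score are all hypothetical, since Facebook has not published its actual model:

```python
def distribution_score(predicted_engagement: float, borderline: float) -> float:
    """Rank a post under a borderline-content demotion policy.

    predicted_engagement: expected clicks/shares/comments, >= 0.
    borderline: a model's estimate in [0, 1] of how close the post
    is to violating policy (1.0 = right at the line).

    Engagement historically RISES as content nears the line; the
    multiplier below counteracts that by shrinking distribution the
    closer a post gets to the line, without removing the post.
    """
    if not 0.0 <= borderline <= 1.0:
        raise ValueError("borderline must be in [0, 1]")
    demotion = 1.0 - borderline ** 2  # hypothetical penalty curve
    return predicted_engagement * demotion

# A clearly benign post keeps nearly all its reach; a near-the-line
# post is throttled even though neither violates the rules outright.
print(distribution_score(1000, 0.1))
print(distribution_score(1000, 0.9))
```

The point of the inverted curve is that being provocative stops paying: the closer a post sits to the line, the less distribution it earns, regardless of how much raw engagement it would otherwise attract.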
Zuckerberg called the shift “a big part of the solution for making sure polarizing or sensational content isn’t spreading in the system, and we’re having a positive effect on the world.”
With this move, Facebook is taking a risk. Curbing engagement on the most popular content will likely cost the company money. And such a dramatic change no doubt opens Facebook up to even more accusations of censorship, at a time when the company is fending off constant criticism from all angles.
But Facebook is betting big on the upside. If outrage is no longer rewarded with ever more clicks, the thinking goes, maybe people will be better behaved. That Facebook is prepared to take such a chance says a lot about the public pressure that’s been placed on the company these last two years. After all of that, what does Facebook have to lose?
Hackers have tried to convince potential buyers—and the BBC Russian Service—that they had cracked Facebook’s security and extracted private messages from 120 million accounts. According to an outside expert consulted by the BBC, however, it appears that at least 81,000 Facebook accounts genuinely had their privacy breached. And according to Facebook, the breach stemmed from malware-containing browser extensions rather than a compromise of its own systems.
“We have contacted browser makers to ensure that known malicious extensions are no longer available to download in their stores and to share information that could help identify additional extensions that may be related,” Facebook’s vice president of product management, Guy Rosen, said in a statement.
The hackers originally published an offer in September for personal information related to 120 million Facebook accounts on an English-language forum. The offer included a sample of data, which an expert examined for the BBC, confirming that the private messages of more than 81,000 profiles were genuine. An additional 176,000 accounts had data that could have been scraped from public Facebook pages.
Facebook’s Rosen said that the company’s own security wasn’t compromised, and urged people to remove any plug-ins they don’t fully trust. Rosen said the social network had notified law enforcement and that the website hosting the Facebook account data had been taken down.
Depending on the browser, plug-in extensions may be able to monitor a user’s activity on any web page. This typically doesn’t include keystrokes, but extensions can sweep in anything rendered on a page for a user to see, such as public and private messages.
Plug-ins that add toolbars or insert e-commerce coupon links are common. However, with so many extensions available, malicious parties have many options: compromise existing software through insiders or poor developer security; release their own seemingly benign plug-ins that provide a useful function alongside the snooping; or buy extensions from developers and then update them to include malware.
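One practical defense against the risk described above is to inspect what an extension asks for before installing it. Here is a minimal Python sketch that flags broad permissions in a Chrome-style manifest.json; the set of "broad" permissions is my own illustrative choice, not an official risk taxonomy:

```python
import json

# Permissions that let an extension read or modify arbitrary pages
# (illustrative list, not an official taxonomy).
BROAD_PERMISSIONS = {"<all_urls>", "tabs", "webRequest", "http://*/*", "https://*/*"}

def risky_permissions(manifest_json: str) -> list[str]:
    """Return the broad permissions a Chrome-style manifest requests."""
    manifest = json.loads(manifest_json)
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))  # Manifest V3 field
    return sorted(requested & BROAD_PERMISSIONS)

# A hypothetical coupon toolbar that quietly asks for access to every page.
coupon_bar = json.dumps({
    "name": "Coupon Toolbar",
    "permissions": ["storage", "tabs"],
    "host_permissions": ["<all_urls>"],
})
print(risky_permissions(coupon_bar))  # ['<all_urls>', 'tabs']
```

A non-empty result doesn’t prove an extension is malicious—many legitimate tools need page access—but it marks exactly the extensions capable of the message scraping described in this story.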
So, install at your own risk.
New charges against a Russian national for allegedly trying to influence the 2016 U.S. presidential elections and upcoming midterms reveal the creative techniques that Kremlin-linked groups have used to sow discontent among Americans.
The Department of Justice said Friday that it filed criminal charges against Elena Alekseevna Khusyaynova for her alleged role with the Russian propaganda operation “Project Lakhta.” This operation, according to the complaint, oversaw multiple Russian-linked entities like the Internet Research Agency that lawmakers say spread fake news and ginned up controversy on Twitter and Facebook.
Here are some notable takeaways from the complaint:
Capitalizing on polarized topics of national interest
The complaint alleges that the Russian groups seized on polarizing issues like gun control, race relations, and immigration to further divide the U.S. populace. They spread both liberal and conservative viewpoints to various groups on social media, tailoring the message to each one, down to which publications’ stories to share with each audience.
One unnamed Russian cited in the complaint allegedly said, “If you write posts in a liberal group,…you must not use Breitbart titles. On the contrary, if you write posts in a conservative group, do not use Washington Post or Buzzfeed’s titles.”
The Russian groups appeared to practice their own form of racism, with one member reportedly saying “Colored LGTB are less sophisticated than white; therefore, complicated phrases and messages do not work.”
The groups apparently discovered that “infographics work well among LGTB and their liberal allies,” while conservatives appeared to be indifferent to graphics.
Spinning the news
Members of the Russian entities were well versed in summarizing popular news stories and spinning them in a way that would antagonize Americans. The entities created a Facebook group dubbed “Secure Borders” that would aggregate news stories and then sensationalize them to draw emotional responses.
The complaint includes an example of how the Russian groups discussed spinning a news story about the late John McCain’s criticism of President Donald Trump’s immigration policies.
Creating fake user accounts on Facebook and Twitter
The Russian groups couldn’t have spread propaganda as effectively if they had used their real identities. Instead, they created fake profiles on social media to do things like promote protests and rallies and post divisive and hateful content.
For instance, the fictitious New York City resident “Bertha Malone” created 400 Facebook posts that allegedly contained “inflammatory political and social content focused primarily on immigration and Islam.”
Get Data Sheet, Fortune’s technology newsletter.
The “Malone” persona also communicated with an unnamed real Facebook user to assist in posting content and managing a Facebook group called “Stop A.I.”
On March 9, 2018, a fake Twitter user named @JohnCopper16 attempted to influence Twitter users by commenting on President Trump’s planned summit with North Korean leader Kim Jong Un.
Facebook’s ongoing crusade against election-related abuse of its platform will involve the removal of some (but not all) disinformation that’s designed to suppress voting.
The company told Reuters that it would ban false information about voting requirements. It will also flag for moderation reports that may aim to keep people away from polling stations by alleging violence or long queues—if the reports are shown to be false, they will be suppressed in people’s news feeds, but they won’t be deleted.
Facebook (fb) generally does not remove falsehoods, even when they are demonstrably false, so nixing false information about voting requirements is a notable step. It banned lies about voting locations a couple of years ago, but this latest move extends the ban to misrepresentations of voter identification requirements.
In the wake of the mass disinformation campaigns that accompanied the 2016 election, Facebook has come under a great deal of pressure over its role in combatting the problem. It has partnered with think tanks in an attempt to better spot propaganda; it has removed “inauthentic” profiles that were aiming to spread misinformation; and it has sponsored research on the overall problem.
But, while the company is willing to suppress certain kinds of “fake news,” it won’t delete the vast majority of it.
“We don’t believe we should remove things from Facebook that are shared by authentic people if they don’t violate those community standards, even if they are false,” News Feed product manager Tessa Lyons told Reuters.
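The tiered policy described here—remove debunked lies about voting logistics, demote but keep other debunked content—can be summarized as a small decision function. This is my own framing of the reported policy, not Facebook’s implementation; all names are illustrative:

```python
from enum import Enum

class Action(Enum):
    KEEP = "no action"
    DEMOTE = "suppress in News Feed"
    REMOVE = "delete from platform"

def moderate(shown_false: bool, misstates_voting_logistics: bool) -> Action:
    """Sketch of the two-tier policy reported by Reuters.

    Debunked falsehoods about voting requirements or locations are
    removed outright; other content shown to be false by fact-checkers
    is demoted in the News Feed but left up.
    """
    if not shown_false:
        return Action.KEEP
    if misstates_voting_logistics:
        return Action.REMOVE
    return Action.DEMOTE

# An ordinary false story is demoted, not deleted.
print(moderate(True, False).name)  # DEMOTE
```

The structure makes the distinction explicit: deletion is reserved for a narrow category of harm (voter suppression), while everything else false but "authentic" is merely starved of distribution.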
Facebook’s cybersecurity policy chief, Nathaniel Gleicher, also told the news service that the company is considering banning posts that linked to hacked material, as Twitter (twtr) recently did. As shown with the hacking of the Democratic National Committee in 2016 by Russian operatives, this technique can form part of a coordinated effort to sway elections. However, the dissemination of some hacked materials is in the public interest, making this a tricky tightrope for social media firms to negotiate.
Facebook responded to a New York Times investigation on Oct. 15 into ethnic violence incited by members of the Myanmar military by banning 13 additional pages with a combined 1.35 million followers. The pages represented themselves as offering beauty, entertainment, and general information, but the Times report said the military controlled the pages, rather than fans of pop stars and national heroes, as the pages alleged.
Facebook announced the move in an update to a blog post about its previous actions to combat the spread of false information via its service in Myanmar that have led to killings and widespread violence against ethnic minorities. Accounts and pages previously banned had about 12 million followers, and included the account for the general in charge of the country’s armed forces.
Facebook received heavy criticism for its slow response to its platform being used for violence, and said as much in August in the initial version of its Myanmar blog post, when it admitted “we have been too slow to prevent misinformation and hate.”
The Times reported that the Myanmar military had engaged in a systematic, five-year campaign on Facebook to target the Rohingya, a stateless, mostly Muslim minority population in the country. As many as 700 military personnel were involved, the Times said. Facebook confirmed many details for the newspaper. The company did not immediately respond to a request from Fortune for comment.
The spread of fake information ranging from specific false accounts about rapes and murders by Muslims to blanket condemnations of Islam are seen as leading directly to a large-scale campaign of ethnic cleansing. Over 700,000 Rohingya have left Myanmar since August 2017, joining more than 300,000 who had already departed the country, according to the United Nations Refugee Agency. A report from the agency in August estimates at least 10,000 Rohingya people had been killed in violence, but other observers believe the number could be far higher.
The social networking giant said in a blog post on Thursday that it is removing 559 pages (the public profiles of firms and celebrities) and 251 accounts, claiming that they were intentionally misleading people, engaging in “inauthentic behavior,” and posting spam.
Although a lot of the accounts and pages that Facebook typically removes involve scams intended to sell “fake sunglasses or weight loss ‘remedies,’” Facebook said that shady actors are increasingly sharing “sensational political content.”
This questionable political content, which Facebook said covers the gamut of the political spectrum, helps these bad actors “build an audience and drive traffic to their websites, earning money for every visitor to the site,” the company said.
“And like the politically motivated activity we’ve seen, the ‘news’ stories or opinions these accounts and pages share are often indistinguishable from legitimate political debate,” Facebook executives said in the post. “This is why it’s so important we look at these actors’ behavior, such as whether they’re using fake accounts or repeatedly posting spam, rather than their content when deciding which of these accounts, Pages or Groups to remove.”
Facebook did not name any of the removed pages and accounts in its blog post, but several media outlets reported that they included groups with names like “Reverb Press,” “Nation in Distress,” “Right Wing News,” and “Snowflakes.”
The progressive non-profit Media Matters reported over the summer that Nation in Distress was one of the biggest spreaders of conspiracy theories and propaganda on Facebook.
Nation In Distress is one of the most popular (and extreme: https://t.co/F5drgCUm5x) right-wing meme pages on Facebook. In the past 72 hours, they have:
– pushed a conspiracy from fake news sites
– resurfaced an old Clinton conspiracy
- re-posted Russian propaganda
— Natalie Martinez (@natijomartinez) August 21, 2018
Some of the other misleading pages previously highlighted by Media Matters included “Dean James III%”, “USA in Distress,” and “The voice of the people.” These pages are currently inaccessible, suggesting that Facebook removed them as well.
Fortune contacted Facebook for more information and will update this post if it responds.
Are you on Facebook? If you are, you’ve most likely received a repetitive, canned note (or 100) from your friends/family that is driving you into a fit of rage. If you haven’t, consider yourself lucky. However, there’s indeed an irritating hoax going around that has grabbed some serious attention. Here’s what the message says:
Hi….I actually got another friend request from you yesterday…which I ignored so you may want to check your account. Hold your finger on the message until the forward button appears…then hit forward and all the people you want to forward too….I had to do the people individually. Good Luck!
Spoiler: there’s no ‘clone’ account. This is just a hoax, so delete the message and rest easy: neither your account nor a duplicate of it has been compromised.
We’re all familiar with this kind of chain mail, so what makes this round so different? The obvious answer could be any of the following:
- It’s coming from friends & family — so you can trust it
- There’s clear instruction on what to do
- It doesn’t contain a link
- You’re doing it through Messenger (it’s more novel), vs. a status update
However, it goes deeper than that.
We need to remember that Facebook has had its fair share of ‘bad press’ (yes, there is such a thing) over the past couple of years, stemming from the Cambridge Analytica scandal, which affected 87 million accounts. Then, all 2.2 billion Facebook users received a notice informing them how to protect their information. Add to this that on September 28th, hackers exploited a flaw that compromised data for 50 million accounts. Yikes.
And what do you get when you mix that all together?
A user constantly on high-alert due to the endless loop of security & privacy concerns
The decision to forward is almost an involuntary one, an instinctive reaction to Facebook’s shaky history and very recent breach. All of that creates an uncomfortable level of ‘unknown’ when it comes to privacy, and, at the end of the day, your friends & family are really just trying to flag a potential concern.
So, the next time you receive one of these messages, maybe take a deep breath and if you feel like a good Samaritan, let them know that they don’t need to forward the message out to anyone else–the clones aren’t here (yet).
Published on: Oct 7, 2018
Over the past year, Facebook has faced a reckoning over the way its plan to connect the next billion users to the internet has sown division, including spreading hate speech that incited ethnic violence in Myanmar and disseminating propaganda for a violent dictator in the Philippines. But even as the company admits that it was “too slow to prevent misinformation and hate” in Myanmar and makes promises to be more proactive about policing content “where false news has had life or death consequences,” Facebook’s efforts in the developing world appear to be speeding up rather than pausing to ensure that history doesn’t repeat itself.
In mid-August, Facebook said it was making progress in Myanmar by adding more Burmese speakers and changing its content-moderation policies to make it easier to report bad conduct and root out hate speech. By the end of this year, Facebook says, it expects to hire 100 Burmese speakers to review content. The changes come more than two years after Facebook pushed into Myanmar. During that time civil-society groups have repeatedly asked the company to do a better job patrolling hate speech, and UN investigators said Facebook had played a “determining role” in the killing of Rohingya Muslims.
But Facebook’s lack of preparedness in Myanmar has not halted its efforts to expand access to the internet—and to Facebook—in the Global South. A couple of weeks after touting its progress in Myanmar, Facebook quietly celebrated plans to expand Wi-Fi access in India, Indonesia, Kenya, Nigeria, and Tanzania through partnerships with companies that sell Wi-Fi hardware. In Tanzania, for example, the World Bank estimates that only 13 percent of people use the internet, roughly the same internet penetration as Myanmar had in November 2015.
From Free Basics to Express Wi-Fi
Facebook’s efforts to connect the next billion fall under internet.org, which the company describes as an effort to get 4.5 billion unconnected people on the internet. Initially, Facebook’s preferred vehicle to spread connectivity was Free Basics, an app that provided free access to a limited number of websites. Amid criticism of that approach in India and elsewhere, Facebook in the past year has instead promoted a program called Express Wi-Fi, where local merchants or business owners offer affordable access to Wi-Fi hot spots, using Facebook’s Express software as a platform for billing and managing accounts.
Express Wi-Fi has raised fewer red flags because, unlike Free Basics, users can access the full internet. Facebook provides financial support to set up the hot spots, but the company says Express Wi-Fi is not supposed to be a profit center. Rather, Facebook wants partners to get enough financial return to keep expanding connectivity efforts.
Facebook would not disclose how many Express Wi-Fi hot spots there are or how the program has grown, but it is clearly part of Facebook’s larger push into Africa. Three of the five countries where Express Wi-Fi has launched are in Africa. In March, Facebook launched an Express Wi-Fi app in the Google Play store in Kenya and Indonesia. Facebook’s ISP partner in Kenya, Surf, says it has 1,100 Express Wi-Fi hot spots in the country, up from 100 in February 2017. In September, Facebook announced a partnership with The Internet Society, an American nonprofit, to improve internet connections throughout Africa.
Digital rights advocates in Africa say Facebook has evolved its approach after the problems in Myanmar. Facebook is working more closely with civil society groups, sending more delegations, recruiting native language speakers, planning for contentious elections, and hosting digital literacy efforts.
Ephraim Percy Kenyanito, a digital program officer at the East Africa office for Article 19, a nonprofit that defends freedom of expression, says Facebook’s decision to hire more Africans, especially from civil society groups, has made it easier for concerns to be heard, if not always addressed. During the 2017 presidential election in Kenya, for example, Facebook responded when advocates reported hate speech or fake news, but the company did not always protect female journalists who became targets for harassment on the platform after writing critical stories about politicians. “They’re trying to get there, but they need to do better,” he says.
Still, some of the civil society groups say Facebook’s efforts often fall short. Advocates say it’s hard to get straight answers from Facebook about its content-moderation process, plans for hiring native language speakers, meetings with the government, or the goal of its connectivity efforts, leaving some to suspect that Facebook’s recent overtures are more of a public relations campaign. As governments elsewhere crack down on Facebook and Free Basics, they worry Facebook is targeting Africa because there are fewer protections for user privacy and freedom of online expression. (Kenyanito says only about half of the 50 countries in the African Union have data protection and privacy laws.) What’s more, some critics also suspect that Express Wi-Fi is just a way for Facebook to rebrand its connectivity efforts as something less controversial.
Julie Owono, executive director of Internet Without Borders, says Facebook is facing the same explosive ingredients in Africa that it encountered in Myanmar, including unstable regimes, ethnic tensions, and a flood of new users. She fears that Facebook’s reliance “on algorithms to solve complex issues” means that the brunt of preventing abuse may once again fall on nonprofits. Facebook has pledged to hire 20,000 content moderators in 2018, but will not disclose where those people will be located, partly to protect them.
The need for real transparency became clear during Facebook’s recent activities in Cameroon, which holds elections on Sunday. In September, Facebook helped sponsor a symposium on digital rights and election safety in Yaoundé, Cameroon’s capital. Facebook’s presence shows that the company is “a bit more humble than a few years ago, when they thought they had the solutions to every problem,” Owono says. But just one month earlier, civil society groups were blindsided by news that Facebook met with government officials about fighting fake news during the election. Activists feared Facebook might be planning to censor accounts at the government’s behest. Although concerns were eventually assuaged, Facebook’s initial scripted statements only fueled confusion.
Negotiating with the government becomes fraught in repressive regimes where political parties can manipulate Facebook’s platforms—and may shut down internet access during elections or to silence dissent. “During political moments, the same political actors are the ones fueling misinformation and memes,” says Grace Bomu, a tech policy advocate based in Kenya.
Facebook says it has met with a range of stakeholders in Cameroon, including civil society groups and human rights activists, and made no agreements with the government.
In a statement to WIRED, a Facebook spokesperson said, “We know we were initially too idealistic” about connecting people worldwide, “and didn’t focus enough on preventing abuse or thinking through all the ways people could use the tools on the Facebook platform to do harm. That’s why we have invested in people and technology to build better safeguards. This includes the roll out of third party fact-checking, better detection of bad content, improved enforcement of our policies, and deeper support for digital literacy efforts. There is always more to do, and that’s why we have a dedicated team of product, policy, and partnerships experts who are focused on helping to keep the platform safe.”
But Tessa Wandia, who works at iHub, a hackerspace for technologists and entrepreneurs in Kenya, says Facebook’s connectivity efforts steer users toward choosing Facebook. In Africa, for instance, the Express Wi-Fi app can feature a prominent link to Free Basics, with the tagline “See popular websites for free,” a tempting offer for users in a region where data plans can be relatively expensive. Wandia believes Facebook may be using Express Wi-Fi “to make people quiet down” about Free Basics, and “convince us that they really do have a philanthropic angle.” Facebook says it offers partners the option of including Free Basics in Express Wi-Fi, but it’s not required.
Concerns about social media’s influence are not theoretical. In March, for example, Cambridge Analytica executives were caught on tape bragging about influencing Kenya’s presidential elections in 2017 and 2013. The controversial political consultancy reportedly experimented in Africa in part because of lax privacy rules and access to government data from willing politicians. A case study on Cambridge Analytica’s website says polling data was used to target social media ads to youth voters. Wandia says she reported some inflammatory ads that spread on social media, which contained misinformation and were used to psychologically manipulate citizens. “We have to be worried about how Kenyans are influenced, how they are making decisions,” she says.
To be sure, many of these worries stem from Facebook’s staggering popularity, and would likely exist even without efforts like Express Wi-Fi or Free Basics. Telecom operators in Africa, for instance, often include free use of WhatsApp or Facebook as part of a data bundle to entice users who want to use those services.
But Facebook’s continued push to connect the globe raises questions about who bears responsibility for unintended consequences, which have disproportionately affected people in the Global South. After the violence in Myanmar, we now know how Facebook’s promises to help the developing world can play out.
On Wednesday, Bloomberg reported that a former government official in Sri Lanka had been warning Facebook about abuse on its platform by the Sri Lankan government since 2014. Facebook began to address concerns after the government shut down access, but won’t disclose how many content moderators it has hired.
Mark Zuckerberg’s plan to connect the next billion has been greeted with suspicion since it was announced in 2015, but tensions boiled over when Facebook tried to push Free Basics in India as a philanthropic act. “This isn’t about Facebook’s commercial interests—there aren’t even any ads in the version of Facebook in Free Basics,” Zuckerberg wrote in an op-ed in the Times of India. Eventually, the Indian government banned programs like Free Basics, which favored some content over others.
Some of the skepticism toward Express Wi-Fi is residual distrust from those days. For example, Zuckerberg said Free Basics was for people who had never accessed the internet before and that all content providers were welcome to apply. But an August 2017 study of Free Basics published by Global Voices, a media organization of advocates and journalists from 170 countries, found that it was often marketed to urban millennials, who used it as a way to access Facebook for free. Within the app, users may have a harder time identifying fake news. The report found that the only local news sites prominently displayed in Kenya and Ghana had either faced pressure to fire journalists or were “known for sensational coverage” and questionable standards.
Facebook says the report reflects the experience of Global Voices volunteers in a limited number of countries, not the people benefitting from the program.
Facebook says it does not track whether expanding Express Wi-Fi has led to more Free Basics users because the programs are separate. But Mark Summer, CEO of Surf, the Kenyan ISP working with Facebook, says Free Basics is “very popular” with Surf’s Express Wi-Fi users. Although Express Wi-Fi is billed as a way to connect communities with limited access, Surf has placed hot spots in major towns and focused on lower-class to middle-class users, who typically already have other, more expensive options, Summer says. “It’s not super low-end users like slum areas or refugee camps and very much not the high end areas where upper class to high income people,” he says. “We provide it in the neighborhoods where the people go and work and shop, where they go on and buy food and go to restaurants and cafes where people sit out and congregate.”
Ellery Biddle, advocacy director of Global Voices, says the availability of Free Basics through Express Wi-Fi can influence users’ media choices. “If you have the one thing that is cheaper than everyone else, it makes it really easy to spread a lot of information quickly,” she says. Facebook successfully neutralized many Internet.org critics by positioning its work as a choice between bringing affordable internet access to the neediest members of society or elite concerns about the purity of internet access. But Nikhil Pahwa, founder of news site MediaNama and a key voice during the fight over Free Basics in India, says Facebook does not have to play a central role in expanding access. Fostering competition can lower data prices. Since regulators passed a net neutrality rule in India, he says, prices have dropped roughly 90 percent. “There is no need for Free Basics lately,” he says.
(Reuters) – Facebook Inc on Monday said a technical problem prevented some users from accessing and posting on the social network as well as the messaging apps WhatsApp and Instagram, and that it had mostly fixed the issue.
FILE PHOTO: A Facebook panel is seen during the Cannes Lions International Festival of Creativity, in Cannes, France, June 20, 2018. REUTERS/Eric Gaillard/File Photo
“Earlier today, a networking issue caused some people to have trouble accessing or posting to various Facebook services. We quickly investigated and started restoring access, and we have nearly fixed the issue for everyone. We’re sorry for the inconvenience,” Facebook spokesman Jay Nancarrow said.
Most affected users experienced problems for less than 90 minutes and the problem was not specific to a particular region.
Reporting by Nikhil Subba in Bengaluru; Editing by Cynthia Osterman