People take pictures on a pedestrian bridge, illuminated with colors of New Zealand’s national flag as a tribute to victims of the mosque shootings in Christchurch, in Jakarta, Indonesia, March 17, 2019. REUTERS/Willy Kurniawan
(Reuters) – Facebook Inc said it removed 1.5 million videos of the New Zealand mosque attack globally in the first 24 hours after the shooting.
“In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload…,” Facebook said in a tweet bit.ly/2HDJtPM late Saturday.
The company said it is also removing all edited versions of the video that do not show graphic content out of respect for the people affected by the mosque shooting and the concerns of local authorities.
The death toll in the New Zealand mosque shootings rose to 50 on Sunday. The gunman who attacked two mosques on Friday live-streamed the attacks on Facebook for 17 minutes using an app designed for extreme sports enthusiasts, with copies still being shared on social media hours later.
New Zealand Prime Minister Jacinda Ardern has said she wants to discuss live streaming with Facebook.
Reporting by Bhanu Pratap in Bengaluru; Editing by Richard Borsuk
The EU’s crusade against hate speech is a long-running issue that has involved threats of new regulation, as happened in Germany, if the big social media firms don’t do more to tackle the problem. As things stand, the companies have signed up to a voluntary code of conduct.
On Monday the European Commission—the bloc’s executive body—said its clean-up drive is bearing fruit. According to its statistics, 89% of suspect content is being evaluated within a day of someone flagging it up, and 72% of the content that is found to be illegal is removed.
For comparison’s sake, those figures were 40% and 28% respectively, back in 2016. The stats come from civil society organizations that monitor take-downs across various EU countries.
“Illegal hate speech online is not only a crime, it represents a threat to free speech and democratic engagement,” said Justice Commissioner Vĕra Jourová in a statement. “In May 2016, I initiated the Code of conduct on online hate speech, because we urgently needed to do something about this phenomenon. Today, after two and a half years, we can say that we found the right approach and established a standard throughout Europe on how to tackle this serious issue, while fully protecting freedom of speech.”
The European Commission claimed “there is no sign of over-removal,” and noted that content using racial slurs and degrading images in reference to certain groups is only removed in 58.5% of cases.
Facebook is apparently being especially cooperative, assessing 92.6% of hate-speech notifications within 24 hours. This is notable, as CEO Mark Zuckerberg recently pledged to make Facebook—whose AI isn’t yet fully up to the task—better at properly adjudicating what qualifies as hate speech and what doesn’t. The company’s nature as a conduit for hate speech was highlighted last year by its reported role in the Myanmar genocide.
The Commission’s one gripe on Monday was the lack of transparency and feedback to users when content gets flagged up and removed—the levels of feedback actually fell over the last year, from 68.9% to 65.4%. Again, Facebook did well here, while YouTube failed quite badly, offering feedback less than a quarter of the time.
Facebook knew children were spending money in games without getting parental consent and the company did nothing about it, according to newly unsealed court documents from a 2012 lawsuit.
More than 100 pages of private Facebook documents were released following a request by the Center for Investigative Reporting and shed light on Facebook’s tactics. For years, the company was aware that children were playing games on accounts tied to a credit card and were, in some cases, unknowingly racking up thousands of dollars in bills by simply clicking within a game to get new abilities or upgrades.
The company ignored a plan developed by an employee in 2011 that would have prevented children from spending money without a parent’s permission.
The more games children played, the more Facebook’s revenue grew. When angry parents saw their credit card bills and in some cases reported not even receiving a receipt, they found it difficult to get their money back from Facebook, so they turned to credit card companies, the Better Business Bureau and finally, a lawsuit.
While the documents are old, they shed light on Facebook’s past business practices as the company continues to be under immense scrutiny for its numerous privacy breaches. Facebook changed its refund policy around games in 2016 and now has a detailed site about how to handle payment disputes with developers. Additionally, a Parents Portal offers tips for parents about how their kids can stay safe online.
“Facebook works with parents and experts to offer tools for families navigating Facebook and the web. As part of that work, we routinely examine our own practices, and in 2016 agreed to update our terms and provide dedicated resources for refund requests related to purchases made by minors on Facebook,” the company said in a statement.
(Reuters) – Facebook Inc Chief Executive Mark Zuckerberg is planning to unify the underlying messaging infrastructure of the WhatsApp, Instagram and Facebook Messenger services and incorporate end-to-end encryption into these apps, the New York Times reported on Friday.
WhatsApp and Facebook messenger icons are seen on an iPhone in Manchester , Britain March 27, 2017. REUTERS/Phil Noble
The three services will, however, continue as stand-alone apps, the report said, citing four people involved in the effort.
Facebook said it is working on adding end-to-end encryption, which protects messages from being viewed by anyone except the participants in a conversation, to more of its messaging products, and considering ways to make it easier for users to connect across networks.
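The end-to-end property described here means the decryption key lives only on the participants’ devices, so any relay server in the middle carries ciphertext it cannot read. A toy sketch of that idea follows — deliberately NOT real cryptography (real messengers use vetted protocols such as the Signal protocol); it only illustrates why the middleman sees nothing useful:

```python
# Toy illustration of the end-to-end principle (NOT real cryptography):
# a shared key known only to the two endpoints is used to scramble the
# message, so the relay server in between sees only ciphertext.
import hashlib
from itertools import cycle

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Derive a fixed keystream from the key and XOR it with the data.
    # XOR is its own inverse, so the same call encrypts and decrypts.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(stream)))

shared_key = b"known only to Alice and Bob"   # never sent to the server
plaintext = b"meet at noon"

ciphertext = keystream_xor(shared_key, plaintext)  # what the server relays
assert ciphertext != plaintext                     # server cannot read it
assert keystream_xor(shared_key, ciphertext) == plaintext  # Bob decrypts
```

The names and the scheme are illustrative only; the point is that decryption requires the key, which the server never holds.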
“There is a lot of discussion and debate as we begin the long process of figuring out all the details of how this will work,” a spokesperson said.
After the changes, a Facebook user, for instance, will be able to send an encrypted message to someone who has only a WhatsApp account, according to the New York Times report.
Integrating the messaging services could make it harder for antitrust regulators to break up Facebook by undoing its acquisitions of WhatsApp and Instagram, said Sam Weinstein, a professor at the Benjamin N. Cardozo School of Law.
“If Facebook is worried about that then one way it can defend itself is to integrate those services,” Weinstein said.
But Weinstein said breaking up Facebook is viewed as an “extreme remedy” by regulators, particularly in the United States, so concerns over antitrust scrutiny may not have been a factor behind the integration.
Some former Facebook security engineers and an outside encryption expert said the plan could be good news for user privacy, in particular by extending end-to-end encryption.
“I’m cautiously optimistic it’s a good thing,” said former Facebook Chief Security Officer Alex Stamos, who now teaches at Stanford University. “My fear was that they were going to drop end-to-end encryption.”
However, the technology does not always conceal metadata – information about who is talking to whom – sparking concern among some researchers that the data might be shared.
Any metadata integration likely will let Facebook learn more about users, linking identifiers such as phone numbers and email addresses for those using the services independently of each other.
Facebook could use that data to charge more for advertising and targeted services, although it also would have to forgo ads based on message content in Messenger and Instagram.
Other major tradeoffs will have to be made too, Stamos and others said.
Messenger allows strangers to contact people without knowing their phone numbers, for example, increasing the risk of stalking and of unwanted approaches to children.
Systems based on phone numbers have additional privacy concerns, because governments and other entities can easily extract location information from them.
Stamos said he hoped Facebook would get public input from terrorism experts, child safety officers, privacy advocates and others and be transparent in its reasoning when it makes decisions on the details.
“It should be an open process, because you can’t have it all,” Stamos said.
Reporting by Munsif Vengattil in Bengaluru, Jan Wolfe in Washington and Joseph Menn in San Francisco; Writing by Katie Paul; Editing by Tom Brown
A picture illustration shows a Facebook logo reflected in a person’s eye, in Zenica, March 13, 2015. REUTERS/Dado Ruvic
(Reuters) – Facebook Inc (FB.O) will buy back an additional $9 billion of its shares, as it looks to pacify investors following a slump in its stock.
The social media giant’s shares, which have tumbled more than 22 percent this year, rose nearly 1 percent in extended trading.
The new program is in addition to a share buyback plan of up to $15 billion announced by the company last year.
Facebook is being investigated by lawmakers in Britain after consultancy Cambridge Analytica, which worked on Donald Trump’s U.S. presidential campaign, obtained personal data of 87 million Facebook users from a researcher.
Concerns over the social media giant’s practices, the role of political adverts and possible foreign interference in the 2016 Brexit vote and U.S. elections are among the topics being investigated by British and European regulators.
Reporting by Vibhuti Sharma in Bengaluru; Editing by Anil D’Silva
Mark Zuckerberg would like you to know that despite a scathing report in The New York Times, which depicts Facebook as a ruthless, self-concerned corporate behemoth, things are getting better—at least, the way he sees it.
In a lengthy call with reporters Thursday, and an equally lengthy “note” published on Facebook, the company’s CEO laid out a litany of changes Facebook is making, designed to curb toxic content on the platform and provide more transparency into the decisions Facebook makes on content. But perhaps the most consequential update is that the Facebook News Feed algorithm will now try to limit the spread of sensationalist content on the platform, which represents a major change from how the company has traditionally approached moderation. All of it is in service of restoring trust in a company whose public reputation—and the reputation of its leaders—have taken near constant body blows over the past two years.
“When you have setbacks like we’ve had this year that’s a big issue, and it does erode trust, and it takes time to build that back,” Zuckerberg said on the call. “Certainly our job is not only to have this stuff at a good level and to continually improve, but to be ahead of new issues. I think over the last couple of years that’s been one of the areas where we’ve been most behind, especially around the election issues.”
Zuckerberg’s words come a day after the Times published a damning report that portrays Facebook as not merely behind on issues of election interference, as Zuckerberg suggests, but actively working to downplay what it knew about that interference. It suggests that Facebook’s executives, wary of picking sides in a partisan battle over Russian interference in the 2016 election, aimed to minimize Russia’s role in spreading propaganda on the platform. The story states that Facebook’s former head of cybersecurity, Alex Stamos, was chastised by the company’s chief operating officer, Sheryl Sandberg, for investigating Russian actions without the company’s approval and berated again for divulging too much information about it to members of Facebook’s board.
In his remarks, Zuckerberg flatly denied this allegation. “We’ve certainly stumbled along the way, but to suggest that we weren’t interested in knowing the truth or that we wanted to hide what we knew or that we tried to prevent investigations is simply untrue,” he said. (Stamos, for his part, tweeted earlier on Thursday that he was “never told by Mark, Sheryl or any other executives not to investigate.”)
The Times story also alleges that Facebook waged a smear campaign against its competitors through an opposition research firm called Definers Public Relations. The firm repeatedly worked to tie Facebook’s detractors, including groups like the Open Markets Institute and Freedom from Facebook, to billionaire George Soros. Critics say that in doing so, Facebook engaged with the same anti-Semitic tropes that have been used by white nationalists and other hate groups that regularly villainize Soros.
Zuckerberg denied having any personal knowledge of Definers’ work with Facebook, and said he and Sandberg only heard about the relationship yesterday. That’s despite the fact that Definers often coordinated large-scale calls with the press on behalf of Facebook and its employees and, in at least one case, sat in on meetings between Facebook and the media.
After Zuckerberg read the story in the Times, he says Facebook promptly ended its relationship with the firm. “This type of firm might be normal in Washington, but it’s not the type of thing I want Facebook associated with, which is why we’re no longer going to be working with them.”
But while Zuckerberg said he had no knowledge of Definers’ work or its messaging, he defended Facebook’s criticism of activist groups like Freedom from Facebook. He said the intention was not to attack Soros, for whom Zuckerberg said he has “tremendous respect,” but show that Freedom from Facebook “was not a spontaneous grassroots effort.”
Zuckerberg declined to assign blame for the tactics allegedly employed by Definers, or to comment on broader personnel issues within Facebook itself. He said only that Sandberg, who has been overseeing Facebook’s lobbying efforts and who is portrayed unfavorably throughout the Times story, is “doing great work for the company.” “She’s been an important partner to me and continues to be and will continue to be,” Zuckerberg said. (Sandberg was not on the call.)
For the umpteenth time this year, Zuckerberg found himself working overtime to clean up Facebook’s mess, even as he wanted desperately to tout the progress the company’s been making. And it has made important progress. In Myanmar, where fake news on Facebook has animated a brutal ethnic cleansing campaign against the Rohingya people, the company has hired 100 Burmese speakers to moderate content there and is now automatically identifying 63 percent of the hate speech it takes down, up from just 13 percent at the end of last year. Facebook has expanded its safety and security team to 30,000 people globally, more than the 20,000 people the company initially set out to hire this year. It’s also changed its content takedown process, allowing people to appeal the company’s decisions about content they post or report. On Thursday, Facebook announced that within the next year, it will create an independent oversight body to handle content appeals.
But by far the biggest news to come out of Thursday’s announcements is the change coming to Facebook’s News Feed algorithm. Zuckerberg acknowledged what most observers already know to be one of Facebook’s most fundamental problems: That sensationalist, provocative content, even content that doesn’t explicitly violate Facebook’s policies, tends to get the most engagement on the platform. “As content gets closer to the line of what is prohibited by our community standards, we see people tend to engage with it more,” he said. “This seems to be true regardless of where we set our policy lines.”
This issue is arguably what undergirds most of Facebook’s problems the past few years. It’s why divisive political propaganda was so successful during the 2016 campaign and why fake news has been able to flourish. Until now, Facebook has operated in a black-and-white environment, where content either violates the rules or it doesn’t, and if it doesn’t, it’s free to amass millions of clicks, even if the poster’s intention is to mislead and stoke outrage. Now Facebook is saying that even content that doesn’t explicitly violate Facebook’s rules might see its reach reduced. According to Zuckerberg’s post, that includes, among other things, “photos close to the line of nudity” and “posts that don’t come within our definition of hate speech but are still offensive.”
Zuckerberg called the shift “a big part of the solution for making sure polarizing or sensational content isn’t spreading in the system, and we’re having a positive effect on the world.”
With this move, Facebook is taking a risk. Curbing engagement on the most popular content will likely cost the company money. And such a dramatic change no doubt opens Facebook up to even more accusations of censorship, at a time when the company is fending off constant criticism from all angles.
But Facebook is betting big on the upside. If outrage is no longer rewarded with ever more clicks, the thinking goes, maybe people will be better behaved. That Facebook is prepared to take such a chance says a lot about the public pressure that’s been placed on the company these last two years. After all of that, what does Facebook have to lose?
Hackers have tried to convince potential buyers—and the BBC Russian Service—that they had cracked Facebook’s security and extracted private messages from 120 million accounts. However, according to an outside expert consulted by the BBC, it appears that at least 81,000 Facebook accounts had their privacy breached. And according to Facebook, the breach stemmed from malware-laden browser extensions.
“We have contacted browser makers to ensure that known malicious extensions are no longer available to download in their stores and to share information that could help identify additional extensions that may be related,” Facebook’s vice president of product management, Guy Rosen, said in a statement.
The hackers originally published an offer in September for personal information related to 120 million Facebook accounts on an English-language forum. This included a sample of data that the BBC had an expert examine, confirming that over 81,000 profiles’ private messages were included. An additional 176,000 accounts had data that could have been scraped from public Facebook pages.
Facebook’s Rosen said that its security wasn’t compromised, and urged people to remove any plug-ins they don’t fully trust. Rosen said the social network had notified law enforcement and that the website hosting the Facebook account data had been taken down.
Depending on the browser, plug-in extensions may be able to monitor a user’s activity on any web page. This typically doesn’t include keystrokes, but extensions can sweep in anything rendered on a page for a user to see, such as public and private messages.
Plug-ins that provide toolbars or insert links for coupons for e-commerce are common. However, with so many extensions available, malicious parties have many options: compromise existing software through insiders or poor developer security; release their own seemingly benign plug-ins that provide a useful function alongside snooping; or buy extensions from developers and then update them to include malware.
So, install at your own risk.
New charges against a Russian national for allegedly trying to influence the 2016 U.S. presidential elections and upcoming midterms reveal the creative techniques that Kremlin-linked groups have used to sow discontent among Americans.
The Department of Justice said Friday that it filed criminal charges against Elena Alekseevna Khusyaynova for her alleged role with the Russian propaganda operation “Project Lakhta.” This operation, according to the complaint, oversaw multiple Russian-linked entities like the Internet Research Agency that lawmakers say spread fake news and ginned up controversy on Twitter and Facebook.
Here are some interesting takeaways from the complaint:
Capitalizing on polarized topics of national interest
The complaint alleges that the Russian groups seized on polarizing issues like gun control, race relations, and immigration to further divide the U.S. populace. They spread both liberal and conservative viewpoints to various groups on social media, tailoring the message to each audience, down to choosing which publications to cite.
One unnamed Russian cited in the complaint allegedly said, “If you write posts in a liberal group,…you must not use Breitbart titles. On the contrary, if you write posts in a conservative group, do not use Washington Post or Buzzfeed’s titles.”
The Russian groups appeared to practice their own form of racism, with one member reportedly saying “Colored LGTB are less sophisticated than white; therefore, complicated phrases and messages do not work.”
The groups apparently discovered that “infographics work well among LGTB and their liberal allies,” while conservatives appeared to be indifferent to graphics.
Spinning the news
Members of the Russian entities were well versed in summarizing popular news stories and spinning them in a way that would antagonize Americans. The entities created a Facebook group dubbed “Secure Borders” that would aggregate news stories and then sensationalize them to draw emotional responses.
The complaint includes an example of how the Russian groups discussed among themselves the best way to spin a news story about the late John McCain’s criticism of President Donald Trump’s immigration policies.
Creating fake user accounts on Facebook and Twitter
The Russian groups couldn’t have spread propaganda as effectively if they used their real identities. Instead, they created fake profiles on the social media to do things like promote protests and rallies and to post divisive and hateful content.
For instance, the fictitious New York City resident “Bertha Malone” created 400 Facebook posts that allegedly contained “inflammatory political and social content focused primarily on immigration and Islam.”
The “Malone” persona also communicated with an unnamed real Facebook user to assist in posting content and managing a Facebook group called “Stop A.I.”
On March 9, 2018, a fake Twitter user named @JohnCopper16 attempted to influence Twitter users by commenting on President Trump’s then-planned summit with North Korean leader Kim Jong Un.
Facebook‘s ongoing crusade against election-related abuse of its platform will involve the removal of some (but not all) disinformation that’s designed to suppress voting.
The company told Reuters that it would ban false information about voting requirements. It will also flag for moderation reports that may aim to keep people away from polling stations by alleging violence or long queues—if the reports are shown to be false, they will be suppressed in people’s news feeds, but they won’t be deleted.
Facebook (fb) generally does not remove falsehoods, even when they are demonstrated, so nixing false information about voting requirements is a notable step. It banned lies about voting locations a couple of years back, but this latest move extends the ban to misrepresentations of voter identification requirements.
In the wake of the mass disinformation campaigns that accompanied the 2016 election, Facebook has come under a great deal of pressure over its role in combatting the problem. It has partnered with think tanks in an attempt to better spot propaganda; it has removed “inauthentic” profiles that were aiming to spread misinformation; and it has sponsored research on the overall problem.
But, while the company is willing to suppress certain kinds of “fake news,” it won’t delete the vast majority of it.
“We don’t believe we should remove things from Facebook that are shared by authentic people if they don’t violate those community standards, even if they are false,” News Feed product manager Tessa Lyons told Reuters.
Facebook’s cybersecurity policy chief, Nathaniel Gleicher, also told the news service that the company is considering banning posts that linked to hacked material, as Twitter (twtr) recently did. As shown with the hacking of the Democratic National Committee in 2016 by Russian operatives, this technique can form part of a coordinated effort to sway elections. However, the dissemination of some hacked materials is in the public interest, making this a tricky tightrope for social media firms to negotiate.
Facebook responded to a New York Times investigation on Oct. 15 into ethnic violence incited by members of the Myanmar military by banning 13 additional pages with a combined 1.35 million followers. The pages represented themselves as offering beauty, entertainment, and general information, but the Times report said the military controlled the pages, rather than fans of pop stars and national heroes, as the pages alleged.
Facebook announced the move in an update to a blog post about its previous actions to combat the spread of false information via its service in Myanmar that have led to killings and widespread violence against ethnic minorities. Accounts and pages previously banned had about 12 million followers, and included the account for the general in charge of the country’s armed forces.
Facebook received heavy criticism for its slow response to its platform being used for violence, and said as much in August in the initial version of its Myanmar blog post, when it admitted “we have been too slow to prevent misinformation and hate.”
The Times reported that the Myanmar military has engaged in a systematic campaign for five years on Facebook to target the Rohingya, a stateless minority population in the country comprising mostly Muslims. As many as 700 military personnel were involved, the Times said. Facebook confirmed many details for the newspaper. The company did not immediately respond to a request from Fortune for comment.
The spread of fake information, ranging from specific false accounts of rapes and murders committed by Muslims to blanket condemnations of Islam, is seen as leading directly to a large-scale campaign of ethnic cleansing. Over 700,000 Rohingya have left Myanmar since August 2017, joining more than 300,000 who had already departed the country, according to the United Nations Refugee Agency. A report from the agency in August estimates at least 10,000 Rohingya people had been killed in violence, but other observers believe the number could be far higher.