Mark Zuckerberg would like you to know that despite a scathing report in The New York Times, which depicts Facebook as a ruthless, self-concerned corporate behemoth, things are getting better—at least, the way he sees it.
In a lengthy call with reporters Thursday, and an equally lengthy “note” published on Facebook, the company’s CEO laid out a litany of changes Facebook is making, designed to curb toxic content on the platform and provide more transparency into its moderation decisions. But perhaps the most consequential update is that the Facebook News Feed algorithm will now try to limit the spread of sensationalist content on the platform, a major change from how the company has traditionally approached moderation. All of it is in service of restoring trust in a company whose public reputation—and the reputation of its leaders—has taken near-constant body blows over the past two years.
“When you have setbacks like we’ve had this year that’s a big issue, and it does erode trust, and it takes time to build that back,” Zuckerberg said on the call. “Certainly our job is not only to have this stuff at a good level and to continually improve, but to be ahead of new issues. I think over the last couple of years that’s been one of the areas where we’ve been most behind, especially around the election issues.”
Zuckerberg’s words come a day after the Times published a damning report that portrays Facebook as not merely behind on issues of election interference, as Zuckerberg suggests, but actively working to downplay what it knew about that interference. It suggests that Facebook’s executives, wary of picking sides in a partisan battle over Russian interference in the 2016 election, aimed to minimize Russia’s role in spreading propaganda on the platform. The story states that Facebook’s former head of cybersecurity, Alex Stamos, was chastised by the company’s chief operating officer, Sheryl Sandberg, for investigating Russian actions without the company’s approval and berated again for divulging too much information about it to members of Facebook’s board.
In his remarks, Zuckerberg flatly denied this allegation. “We’ve certainly stumbled along the way, but to suggest that we weren’t interested in knowing the truth or that we wanted to hide what we knew or that we tried to prevent investigations is simply untrue,” he said. (Stamos, for his part, tweeted earlier on Thursday that he was “never told by Mark, Sheryl or any other executives not to investigate.”)
The Times story also alleges that Facebook waged a smear campaign against its competitors through an opposition research firm called Definers Public Relations. The firm repeatedly worked to tie Facebook’s detractors, including groups like the Open Markets Institute and Freedom from Facebook, to billionaire George Soros. Critics say that in doing so, Facebook engaged with the same anti-Semitic tropes that have been used by white nationalists and other hate groups that regularly villainize Soros.
Zuckerberg denied having any personal knowledge of Definers’ work with Facebook, and said he and Sandberg only heard about the relationship yesterday. That’s despite the fact that Definers often coordinated large-scale calls with the press on behalf of Facebook and its employees and, in at least one case, sat in on meetings between Facebook and the media.
After Zuckerberg read the story in the Times, he says Facebook promptly ended its relationship with the firm. “This type of firm might be normal in Washington, but it’s not the type of thing I want Facebook associated with, which is why we’re no longer going to be working with them.”
But while Zuckerberg said he had no knowledge of Definers’ work or its messaging, he defended Facebook’s criticism of activist groups like Freedom from Facebook. He said the intention was not to attack Soros, for whom Zuckerberg said he has “tremendous respect,” but show that Freedom from Facebook “was not a spontaneous grassroots effort.”
Zuckerberg declined to assign blame for the tactics allegedly employed by Definers, or to comment on broader personnel issues within Facebook itself. He said only that Sandberg, who has been overseeing Facebook’s lobbying efforts and who is portrayed unfavorably throughout the Times story, is “doing great work for the company.” “She’s been an important partner to me and continues to be and will continue to be,” Zuckerberg said. (Sandberg was not on the call.)
For the umpteenth time this year, Zuckerberg found himself working overtime to clean up Facebook’s mess, even as he wanted desperately to tout the progress the company’s been making. And it has made important progress. In Myanmar, where fake news on Facebook has animated a brutal ethnic cleansing campaign against the Rohingya people, the company has hired 100 Burmese speakers to moderate content there and is now automatically identifying 63 percent of the hate speech it takes down, up from just 13 percent at the end of last year. Facebook has expanded its safety and security team to 30,000 people globally, more than the 20,000 people the company initially set out to hire this year. It’s also changed its content takedown process, allowing people to appeal the company’s decisions about content they post or report. On Thursday, Facebook announced that within the next year, it will create an independent oversight body to handle content appeals.
But by far the biggest news to come out of Thursday’s announcements is the change coming to Facebook’s News Feed algorithm. Zuckerberg acknowledged what most observers already know to be one of Facebook’s most fundamental problems: That sensationalist, provocative content, even content that doesn’t explicitly violate Facebook’s policies, tends to get the most engagement on the platform. “As content gets closer to the line of what is prohibited by our community standards, we see people tend to engage with it more,” he said. “This seems to be true regardless of where we set our policy lines.”
This issue is arguably what undergirds most of Facebook’s problems the past few years. It’s why divisive political propaganda was so successful during the 2016 campaign and why fake news has been able to flourish. Until now, Facebook has operated in a black-and-white environment, where content either violates the rules or it doesn’t, and if it doesn’t, it’s free to amass millions of clicks, even if the poster’s intention is to mislead and stoke outrage. Now Facebook is saying that even content that doesn’t explicitly violate Facebook’s rules might see its reach reduced. According to Zuckerberg’s post, that includes, among other things, “photos close to the line of nudity” and “posts that don’t come within our definition of hate speech but are still offensive.”
Zuckerberg called the shift “a big part of the solution for making sure polarizing or sensational content isn’t spreading in the system, and we’re having a positive effect on the world.”
With this move, Facebook is taking a risk. Curbing engagement on the most popular content will likely cost the company money. And such a dramatic change no doubt opens Facebook up to even more accusations of censorship, at a time when the company is fending off constant criticism from all angles.
But Facebook is betting big on the upside. If outrage is no longer rewarded with ever more clicks, the thinking goes, maybe people will be better behaved. That Facebook is prepared to take such a chance says a lot about the public pressure that’s been placed on the company these last two years. After all of that, what does Facebook have to lose?
LONDON (Reuters) – A researcher at the center of a scandal over the alleged misuse of the data of nearly 100 million Facebook users said on Tuesday the work he did was useless for the sort of targeted adverts that would be needed to sway an election.
Aleksandr Kogan, who worked for the University of Cambridge, is at the center of a controversy over Cambridge Analytica’s use of millions of users’ data without their permission after it was hired by Donald Trump for his 2016 election campaign.
Kogan said it was unlikely Cambridge Analytica had used the data in the Trump campaign, although he also said that its suspended CEO Alexander Nix had lied to a committee of British lawmakers about how the two worked together.
Kogan said that even if the dataset he compiled was used in a political campaign, it would be little use for targeted advertising.
“Quite frankly, if the goal is micro-targeting using Facebook ads, (the project) makes no sense. It’s not what you would do,” he told a parliamentary committee, adding that Facebook itself had better tools for such adverts and that the work was worth “literally nothing”.
“If the use case you have is Facebook ads, it’s just incompetent to do it this way.”
Facebook has said that the personal information of about 87 million users may have been improperly shared with political consultancy Cambridge Analytica, after Kogan created a personality quiz app to collect the data.
Facebook and Cambridge Analytica have blamed Kogan for alleged data misuse, but he has said that he was being made a scapegoat by the companies for the scandal.
Kogan said that former Cambridge Analytica CEO Alexander Nix, who was also a director of the consultancy’s parent firm SCL Group, had previously lied to lawmakers when he said he had not received data from Kogan.
“We certainly gave them data, that’s indisputable,” Kogan told lawmakers. Asked if Nix had lied, Kogan answered: “Absolutely.” A spokesman for Cambridge Analytica declined to comment on Nix’s testimony, noting that he was suspended pending an investigation.
Kogan said Facebook provided him data in an email, and he had not needed to sign an agreement to use it. However, he said that he did not sell the data provided to him by Facebook.
Instead, Kogan said he collected new data through an app for work with SCL, Cambridge Analytica’s parent company.
He hired a market research firm called Qualtrics to recruit 200,000 to 300,000 people to take the quiz, resulting in expenses of $600,000 to $800,000. Kogan’s company was paid 230,000 pounds ($320,643) by SCL for its predictive analysis based on the findings, Kogan said.
In written evidence to parliament, Kogan said that all of his academic work was reviewed and approved by the University’s ethics committees.
However, a 2015 letter published by the Guardian shows that the ethics committee rejected one of Kogan’s projects that year, saying that Facebook’s privacy policy was “not sufficient protection” to address concerns.
Kogan said that the data he collected had now all been deleted, to the best of his knowledge, but he would double check that none remained. Cambridge Analytica also said that it had deleted the data when asked to by Facebook.
“We’re extremely sorry that we ended up in possession of data that clearly had breached Facebook’s terms of service,” spokesman Clarence Mitchell told reporters.
“That’s something that we wouldn’t have wanted to happen. But we have put in place the procedures to begin to rectify it.”
Mitchell also said that the data was not used in the Trump campaign after it had been demonstrated to be ineffective.
“Any suggestion that the GSR Kogan data was used in that campaign is utterly incorrect. Its effective uselessness had already been identified by then,” he said.
Kogan said that he never drew a salary from GSR, the company that he founded to do the research that was wound up last year. Most of the money received from SCL was spent on coding work, acquiring data and legal fees. He was allowed to keep the data he gathered on the project.
Kogan said that GSR had a close relationship with Facebook, and one of his partners at the firm, Joseph Chancellor, now worked for the social media giant.
“This has been a very painful experience, because when I entered into all of this, Facebook was a close ally,” Kogan said.
“I was thinking this would be helpful to my academic career and my relationship with Facebook. It has very clearly done the complete opposite.”
($1 = 0.7173 pounds)
Reporting by Alistair Smout and Douglas Busvine; Editing by Guy Faulconbridge and Matthew Mpoke Bigg
On Wednesday, Apple confirmed what many customers have long suspected: The company has been slowing the performance of older iPhones. Apple says it started the practice a year ago, to compensate for battery degradation, rather than push people to upgrade their smartphones faster. But even giving that benefit of the doubt, there are plenty of better ways Apple could have accomplished the same goal without betraying customer trust.
Earlier this week, John Poole, a developer at Geekbench, published a blog post indicating that a change in iOS is slowing down performance on older devices. According to Apple, factors like low charge, cold climates, and natural battery degradation can all affect the performance of its mobile devices, and the company confirmed that this policy was implemented last year to counteract these effects.
As much sense as that explanation may make, Apple could have made plenty of choices that would have benefited consumers instead of penalizing them. These same choices could have also saved the company from the public shaming it suffered this week.
In a statement to WIRED, Apple confirmed Poole’s findings, saying it was purposely slowing down older iPhones to compensate for the effects of age on their batteries. “Lithium-ion batteries become less capable of supplying peak current demands when in cold conditions, have a low battery charge or as they age over time, which can result in the device unexpectedly shutting down to protect its electronic components,” the company says.
While many have speculated that the company has been doing this for years, Apple says the feature was implemented last year for the iPhone 6, iPhone 6 Plus, and iPhone SE. Now, with iOS 11.2, the iPhone 7 and 7 Plus are getting the same treatment, and the company intends to bring other devices into the fold down the road.
Rather than secretly hamstring the iPhone’s CPU, though, Apple could have simply educated users about the limitations of lithium-ion batteries, says Kyle Wiens, CEO of iFixit, a company that sells repair kits and posts repair guides for consumer electronics. While Apple does say in the iPhone user manual that batteries degrade over time and should be replaced, you’d have to dig through a few links outside of the manual to learn that by 500 charge cycles, your phone’s battery will hold a charge of about 80 percent.
Another tactic Apple could employ is selling battery replacement kits to consumers, letting them pop a fresh battery into their aging iPhone. It would be an easily understandable solution to an easily understandable problem, rather than software manipulation that feeds into a long-running, planned obsolescence conspiracy theory. But Apple has actively fought against laws that would require it to provide a way for users to repair their devices. According to a report from HuffPost, Apple argues that allowing consumers to replace the battery could make the iPhone more vulnerable to hacks, and that letting people peek inside would make the iPhone easier to counterfeit.
“Apple won’t sell batteries to consumers, people should be furious about that,” Wiens says. “Your battery is a maintenance item, and everyone should expect to replace their battery fairly frequently.”
Apple does cover one battery replacement under its one-year warranty program, but only for “defective batteries,” a term that isn’t clearly defined on the company’s site. If your phone is out of warranty and you don’t have an AppleCare+ plan, the company offers a battery replacement for $79 plus a $6.95 shipping charge. The problem, Wiens says, is that Apple doesn’t advertise this policy to consumers, leaving iPhone users to believe that the only solution is to buy a costly new iPhone.
Direct battery fixes certainly would have made the most sense. But even allowing that a software tweak was the only way Apple could have proceeded—untrue, but just for argument’s sake—it had a much better option than making its software solution covert.
Rather than quietly push out an update that crimped older iPhones, it should have made that throttling opt-in. As it stands, there’s no way to avoid having your phone slowed down once the battery reaches its limits. By giving users the choice, and giving them the information necessary to make their own decision, Apple could avoid the frustrations many have expressed over the policy.
While making the throttling opt-in could cause performance issues for users who opt out, it would give users a sense of control over the situation and avoid making them feel like they’re being tricked into buying a new phone. As it stands, Apple’s move comes off as deceptive.
Instead of leaving users confused about why their phones are suddenly slowing to a crawl, Apple could take user education a step further by providing a battery health monitor in the Settings app. That way, an iPhone owner could figure out if the battery is the issue, or if something else is going on.
Lay Down the Law
The damage, unfortunately, is already done. But it’s also unlikely that Apple will behave differently going forward. At the very least, the company almost certainly won’t shift gears and start selling battery replacement kits to consumers. For starters, the iPhone’s casing uses proprietary Pentalobe screws, which make it hard for average users to get inside to swap the battery.
Apple has also lobbied against right-to-repair legislation, which would allow third-party repair shops and typical consumers to more easily fix their broken phones. Proposed right-to-repair laws typically require companies to publish their repair manuals, as well as make the necessary repair tools available for purchase rather than requiring a specialist to make these repairs.
Wiens says that, ideally, right-to-repair legislation would pass and ensure consumers have the ability to fix their devices on their own terms without having to deal with warranties or acquire difficult-to-find tools.
Apple’s throttling is misleading, and it’s far from the best way the company could have handled the situation. Still, lithium-ion batteries are riddled with problems users should be aware of. The company isn’t likely to change its stance on the matter, but if you’ve noticed your iPhone getting slower over the last year, at least you know it wasn’t all in your head—and that a battery fix might bring your iPhone back up to speed.
The fallout from the Volkswagen diesel scandal is spreading fast to the company’s other famous brands, including Porsche and Audi, and across the Atlantic to the U.S.
The scandal has reached into the company’s engineering corps: the CEO of Volkswagen’s US business, Audi’s research and development chief, and Porsche’s engine chief, all part of the Volkswagen Group, are said to be following Volkswagen’s CEO out the door, according to multiple reports Thursday.
The impending departures are a sign that the Volkswagen scandal is poised to grow to much larger proportions.