Elon Musk will not go quietly. On Monday night, lawyers representing the Tesla CEO submitted a filing to a federal judge in New York arguing that she should deny the Securities and Exchange Commission’s request to hold Musk in contempt of court for—what else?—a tweet. Musk’s legal team argued the SEC overreached in its request, and claimed the agency is trying to violate his First Amendment right to free speech.
If the judge, Alison Nathan of the US District Court for the Southern District of New York, does hold Musk in contempt of court, she would decide the penalty. “If the SEC prevails, there is a good likelihood that the District Court will fine Mr. Musk and that it will put him on a short leash, with a strong warning that further violations could result in Mr. Musk being banned for some period of time as an officer or director of a public company,” Peter Haveles, a trial lawyer with the law firm Pepper Hamilton, told WIRED last month.
This latest chapter in Musk’s ongoing legal spat with the SEC dates back to the evening of February 19, 7:15 pm Eastern Time to be exact, when Musk wrote on Twitter, “Tesla made 0 cars in 2011, but will make around 500k in 2019.” About four and a half hours later—at 11:41 pm ET—Musk corrected himself, tweeting, “Meant to say annualized production rate at the end of 2019 probably around 500k, i.e. 10k cars/week. Deliveries for the year still estimated to be around 400k.”
Musk is the head of a publicly traded company, so making a mistake about his business on Twitter—which investors treat as a valid source of news like any other—is already less than ideal. But Musk and Tesla also reached a settlement with the SEC in September over another tweet containing misinformation about the electric carmaker’s operations. That was after Musk tweeted that he planned on taking Tesla private, and that he had the “funding secured.” He soon revealed he did not have that funding secured, and Tesla announced it would stay public.
In the ensuing deal with the SEC, Musk gave up his role as Tesla’s chairman for at least three years. He and Tesla each paid a $20 million fine. And Musk and Tesla agreed that the CEO’s tweets about the carmaker would be truthful, and reviewed by a team of Tesla lawyers before being sent. According to the filing, Tesla’s general counsel and an assigned “disclosure counsel” are in charge of approving Musk’s Tesla tweets. The lawyers write that “the disclosure counsel and other members of Tesla’s legal department have reviewed the updated controls and procedures with Musk on multiple occasions.”
In December, Musk said on CBS’s 60 Minutes that he does not respect the SEC, and that the only tweets of his that require pre-approval are those that can affect Tesla’s stock price. Asked how Tesla could know which tweets would do that, Musk said, “Well, I guess we might make some mistakes. Who knows?” The SEC cited that interview in its motion for a contempt of court charge, writing that “Musk has not made a diligent or good faith effort to comply” with the terms of his settlement.
Now, though, Musk and the SEC are debating what that “pre-approval” actually means. Tesla’s lawyers say nobody pre-approved the tweet in question, but argue that it shouldn’t matter, because the company had already made the information about those production numbers public: in an earnings call, in end-of-year financial results, and in an SEC filing submitted on the day Musk sent out the tweets in question. Musk did not receive pre-approval before sending that tweet because it “was simply Musk’s shorthand gloss on and entirely consistent with prior public disclosures detailing Tesla’s anticipated production volume,” according to the filing.
Moreover, the Musk team argues, the SEC’s attempt to limit Musk’s tweeting is a violation of his First Amendment rights to free speech.
The Musk legal team also argues that the CEO has worked diligently since the SEC settlement to be careful about his tweeting behavior. It wrote that Musk’s less frequent tweeting about Tesla “is a reflection of his commitment to adhering to the Order and avoiding unnecessary disputes with the SEC.” In fact, it says the correction tweet, the one sent four-and-a-half hours later, “is precisely the kind of diligence that one would expect from someone who is endeavoring to comply with the Order.”
The EU’s crusade against hate speech is a long-running issue that has involved threats of new regulation, as happened in Germany, if the big social media firms don’t do more to tackle the problem. As things stand, the companies have signed up to a voluntary code of conduct.
On Monday the European Commission—the bloc’s executive body—said its clean-up drive is bearing fruit. According to its statistics, 89% of suspect content is being evaluated within a day of someone flagging it up, and 72% of the content that is found to be illegal is removed.
For comparison’s sake, those figures were 40% and 28% respectively, back in 2016. The stats come from civil society organizations that monitor take-downs across various EU countries.
“Illegal hate speech online is not only a crime, it represents a threat to free speech and democratic engagement,” said Justice Commissioner Vĕra Jourová in a statement. “In May 2016, I initiated the Code of conduct on online hate speech, because we urgently needed to do something about this phenomenon. Today, after two and a half years, we can say that we found the right approach and established a standard throughout Europe on how to tackle this serious issue, while fully protecting freedom of speech.”
The European Commission claimed “there is no sign of over-removal,” and noted that content using racial slurs and degrading images in reference to certain groups is only removed in 58.5% of cases.
Facebook is apparently being especially cooperative, assessing 92.6% of hate-speech notifications within 24 hours. This is notable, as CEO Mark Zuckerberg recently pledged to make Facebook—whose AI isn’t yet fully up to the task—better at properly adjudicating what qualifies as hate speech and what doesn’t. The company’s nature as a conduit for hate speech was highlighted last year by its reported role in the Myanmar genocide.
The Commission’s one gripe on Monday was the lack of transparency and feedback to users when content gets flagged up and removed—the levels of feedback actually fell over the last year, from 68.9% to 65.4%. Again, Facebook did well here, while YouTube failed quite badly, offering feedback less than a quarter of the time.
Mark Zuckerberg would like you to know that despite a scathing report in The New York Times, which depicts Facebook as a ruthless, self-concerned corporate behemoth, things are getting better—at least, the way he sees it.
In a lengthy call with reporters Thursday, and an equally lengthy “note” published on Facebook, the company’s CEO laid out a litany of changes Facebook is making, designed to curb toxic content on the platform and provide more transparency into the decisions Facebook makes on content. But perhaps the most consequential update is that the Facebook News Feed algorithm will now try to limit the spread of sensationalist content on the platform, which represents a major change from how the company has traditionally approached moderation. All of it is in service of restoring trust in a company whose public reputation—and the reputation of its leaders—have taken near constant body blows over the past two years.
“When you have setbacks like we’ve had this year that’s a big issue, and it does erode trust, and it takes time to build that back,” Zuckerberg said on the call. “Certainly our job is not only to have this stuff at a good level and to continually improve, but to be ahead of new issues. I think over the last couple of years that’s been one of the areas where we’ve been most behind, especially around the election issues.”
Zuckerberg’s words come a day after the Times published a damning report that portrays Facebook as not merely behind on issues of election interference, as Zuckerberg suggests, but actively working to downplay what it knew about that interference. It suggests that Facebook’s executives, wary of picking sides in a partisan battle over Russian interference in the 2016 election, aimed to minimize Russia’s role in spreading propaganda on the platform. The story states that Facebook’s former head of cybersecurity, Alex Stamos, was chastised by the company’s chief operating officer, Sheryl Sandberg, for investigating Russian actions without the company’s approval and berated again for divulging too much information about it to members of Facebook’s board.
In his remarks, Zuckerberg flatly denied this allegation. “We’ve certainly stumbled along the way, but to suggest that we weren’t interested in knowing the truth or that we wanted to hide what we knew or that we tried to prevent investigations is simply untrue,” he said. (Stamos, for his part, tweeted earlier on Thursday that he was “never told by Mark, Sheryl or any other executives not to investigate.”)
The Times story also alleges that Facebook waged a smear campaign against its competitors through an opposition research firm called Definers Public Relations. The firm repeatedly worked to tie Facebook’s detractors, including groups like the Open Markets Institute and Freedom from Facebook, to billionaire George Soros. Critics say that in doing so, Facebook engaged with the same anti-Semitic tropes that have been used by white nationalists and other hate groups that regularly villainize Soros.
Zuckerberg denied having any personal knowledge of Definers’ work with Facebook, and said he and Sheryl Sandberg, Facebook’s chief operating officer, only heard about the relationship yesterday. That’s despite the fact that Definers often coordinated large-scale calls with the press on behalf of Facebook and its employees and, in at least one case, sat in on meetings between Facebook and the media.
After Zuckerberg read the story in the Times, he says Facebook promptly ended its relationship with the firm. “This type of firm might be normal in Washington, but it’s not the type of thing I want Facebook associated with, which is why we’re no longer going to be working with them.”
But while Zuckerberg said he had no knowledge of Definers’ work or its messaging, he defended Facebook’s criticism of activist groups like Freedom from Facebook. He said the intention was not to attack Soros, for whom Zuckerberg said he has “tremendous respect,” but show that Freedom from Facebook “was not a spontaneous grassroots effort.”
Zuckerberg declined to assign blame for the tactics allegedly employed by Definers, or to comment on broader personnel issues within Facebook itself. He said only that Sandberg, who has been overseeing Facebook’s lobbying efforts and who is portrayed unfavorably throughout the Times story, is “doing great work for the company.” “She’s been an important partner to me and continues to be and will continue to be,” Zuckerberg said. (Sandberg was not on the call.)
For the umpteenth time this year, Zuckerberg found himself working overtime to clean up Facebook’s mess, even as he wanted desperately to tout the progress the company’s been making. And it has made important progress. In Myanmar, where fake news on Facebook has animated a brutal ethnic cleansing campaign against the Rohingya people, the company has hired 100 Burmese speakers to moderate content there and is now automatically identifying 63 percent of the hate speech it takes down, up from just 13 percent at the end of last year. Facebook has expanded its safety and security team to 30,000 people globally, more than the 20,000 people the company initially set out to hire this year. It’s also changed its content takedown process, allowing people to appeal the company’s decisions about content they post or report. On Thursday, Facebook announced that within the next year, it will create an independent oversight body to handle content appeals.
But by far the biggest news to come out of Thursday’s announcements is the change coming to Facebook’s News Feed algorithm. Zuckerberg acknowledged what most observers already know to be one of Facebook’s most fundamental problems: That sensationalist, provocative content, even content that doesn’t explicitly violate Facebook’s policies, tends to get the most engagement on the platform. “As content gets closer to the line of what is prohibited by our community standards, we see people tend to engage with it more,” he said. “This seems to be true regardless of where we set our policy lines.”
This issue is arguably what undergirds most of Facebook’s problems the past few years. It’s why divisive political propaganda was so successful during the 2016 campaign and why fake news has been able to flourish. Until now, Facebook has operated in a black-and-white environment, where content either violates the rules or it doesn’t, and if it doesn’t, it’s free to amass millions of clicks, even if the poster’s intention is to mislead and stoke outrage. Now Facebook is saying that even content that doesn’t explicitly violate Facebook’s rules might see its reach reduced. According to Zuckerberg’s post, that includes, among other things, “photos close to the line of nudity” and “posts that don’t come within our definition of hate speech but are still offensive.”
Zuckerberg called the shift “a big part of the solution for making sure polarizing or sensational content isn’t spreading in the system, and we’re having a positive effect on the world.”
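The mechanics of the shift can be illustrated with a toy ranking function. This is purely a hypothetical sketch—Facebook has not published its actual ranking model, and the `borderline_score` signal here is an assumption—but it captures the stated idea: instead of engagement alone determining reach, distribution is penalized as content approaches the policy line.

```python
def adjusted_reach(engagement, borderline_score):
    """Toy illustration of down-ranking borderline content.

    engagement: raw engagement signal (clicks, shares, comments).
    borderline_score: hypothetical 0..1 estimate of how close a post
    is to violating policy (1.0 = right at the line).
    """
    # The closer content sits to the policy line, the more its
    # distribution is reduced, inverting the natural pattern in which
    # borderline content spreads the most.
    penalty = 1.0 - borderline_score
    return engagement * penalty

# Content far from the line keeps nearly all of its reach;
# content near the line is heavily demoted despite high engagement.
print(adjusted_reach(1000, 0.1))
print(adjusted_reach(1000, 0.9))
```

The point of the inversion is that under the old model, both posts above would have been ranked purely by their engagement score.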
With this move, Facebook is taking a risk. Curbing engagement on the most popular content will likely cost the company money. And such a dramatic change no doubt opens Facebook up to even more accusations of censorship, at a time when the company is fending off constant criticism from all angles.
But Facebook is betting big on the upside. If outrage is no longer rewarded with ever more clicks, the thinking goes, maybe people will be better behaved. That Facebook is prepared to take such a chance says a lot about the public pressure that’s been placed on the company these last two years. After all of that, what does Facebook have to lose?
WASHINGTON (Reuters) – The U.S. Department of Justice and state attorneys general will meet this month to discuss concerns that social media platforms are “intentionally stifling the free exchange of ideas,” the department said on Wednesday.
Its statement did not name Facebook Inc (FB.O) and Twitter Inc (TWTR.N), whose executives testified in Congress on Wednesday, but the firms have been harshly criticized by President Donald Trump and some of his fellow Republicans for what they see as an effort to repress conservative voices.
The companies deny any such bias.
U.S. Attorney General Jeff Sessions convened the meeting, set for Sept. 25, “to discuss a growing concern that these companies may be hurting competition and intentionally stifling the free exchange of ideas on their platforms,” Justice Department spokesman Devin O’Malley said.
It was not known which state attorneys general would attend. Representatives for the attorneys general in New York, Connecticut and Iowa said that they had not been contacted.
Shares of social media companies slipped on Wednesday as the executives met skeptical lawmakers, with Twitter off 6.1 percent and Facebook around 2.3 percent lower in late afternoon trading. Shares of Google parent Alphabet Inc (GOOGL.O) sank about 1 percent.
In the morning, Facebook Chief Operating Officer Sheryl Sandberg and Twitter Chief Executive Jack Dorsey testified at a Senate Intelligence Committee hearing on efforts to counteract foreign efforts to influence U.S. elections and political discourse.
The Senate panel has been examining reported Russian efforts to influence U.S. public opinion throughout Trump’s presidency, after U.S. intelligence agencies concluded that entities backed by the Kremlin had sought to boost his chances of winning the White House in 2016.
Sandberg and Dorsey said the companies had stepped up efforts to fight such influence operations, but lawmakers said there was far more to be done and suggested Congress might have to take legislative action.
“Clearly, this problem is not going away. I’m not even sure it’s trending in the right direction,” said Senator Richard Burr, the committee’s Republican chairman.
Senator Mark Warner, the committee’s top Democrat, said, “I’m skeptical that, ultimately, you’ll be able to truly address this challenge on your own. Congress is going to have to take action here.”
Legislation addressing the use of social media for political disinformation could resemble a bill passed earlier this year – and signed into law by Trump – that made it easier for state prosecutors and sex-trafficking victims to sue social media companies, advertisers and others who failed to keep exploitative material off their sites.
Committee members also criticized Google for refusing to send top executives to testify at the Senate hearing, with just weeks before the Nov. 6 congressional elections.
Republican Senator Marco Rubio said the company might have skipped the hearing because it was “arrogant.”
Dorsey then testified at a House of Representatives Energy and Commerce Committee hearing focused on the bias issue.
Representative Greg Walden, the committee’s Republican chairman, said Twitter had made “mistakes” that, he said, minimized Republicans’ presence on the social media site, a practice conservatives have labeled “shadow banning.”
“Multiple members of Congress and the chairwoman of the Republican Party have seen their Twitter presences temporarily minimized in recent months, due to what you have claimed was a mistake in the algorithm,” he said.
Dorsey denied any deliberate attempt to target conservatives, or promote liberals, during more than four hours of questioning.
“Recently we failed our intended impartiality. Our algorithms were unfairly filtering 600,000 accounts, including some members of Congress, from our search auto-complete and latest results. We fixed it,” he said.
Ahead of Wednesday’s hearings, Trump, without offering evidence, accused social media companies of interfering in the November elections, telling the Daily Caller conservative website that social media firms are “super liberal.”
Trump was quoted as saying in the interview on Tuesday that “I think they already have” interfered.
Democratic House committee members accused Republicans of calling the hearing for political reasons, noting that Trump had featured accusations of bias in fundraising letters. The mid-terms will decide whether Republicans will keep their majorities in the House and Senate.
“Over the past weeks, President Trump and many Republicans have peddled conspiracy theories about Twitter and other social media platforms to whip up their base and fundraise,” said Representative Frank Pallone, the committee’s top Democrat.
Wednesday’s hearings were attended by conspiracy theorists and Trump supporters who have faced bans on social media.
The conspiracy theorist Alex Jones, who was temporarily suspended from Twitter, sat in the front row of the Senate hearing, and interrupted Rubio.
The House hearing was interrupted by Laura Loomer, a conspiracy theorist who has been banned from major social media sites. She shouted that Dorsey was lying, accusing him of banning conservatives and saying Twitter was going to help Democrats “steal” the November elections.
Loomer was removed from the room as Republican Representative Billy Long used the droning cadence of his former career as an auctioneer to drown her out.
Reporting by Patricia Zengerle; Additional reporting by Diane Bartz in Washington and Shreyashi Sanyal in Bangalore; Editing by Susan Thomas and Grant McCool
IBM has announced that its cognitive intelligence platform Watson has been upgraded with speech, vision and language capabilities, allowing developers to build smarter apps. On the language side of things, IBM says Watson can now understand ambiguous language in text through a few different modules. The IBM Watson Natural Language Classifier understands meaning, while IBM Watson Dialog makes for more natural app interactions by tailoring language to the style used by the person asking a question. Perhaps more interesting, though, the new Visual Insights capabilities promise to allow developers to glean insights from images and videos on…
This story continues at The Next Web
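For context, services like the Natural Language Classifier mentioned above were exposed to developers as REST APIs. The sketch below only assembles the URL and JSON body such a classify call would use; the base URL and classifier ID are illustrative assumptions (real calls require an IBM Cloud service instance and credentials, and the endpoint has changed across Watson API generations).

```python
import json

# Illustrative base URL -- an assumption for this sketch, not the
# authoritative endpoint; real values come from your service instance.
NLC_BASE = "https://gateway.watsonplatform.net/natural-language-classifier/api/v1"

def build_classify_request(classifier_id: str, text: str):
    """Assemble the URL and JSON body for a hypothetical classify call."""
    url = f"{NLC_BASE}/classifiers/{classifier_id}/classify"
    body = json.dumps({"text": text})
    return url, body

# "demo-classifier" is a placeholder ID used purely for illustration.
url, body = build_classify_request("demo-classifier", "Will it rain tomorrow?")
print(url)
print(body)
```

The request body is a single JSON object with a `text` field; the service would respond with the top matching classes and confidence scores for that text.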