Russian Foreign Minister Sergei Lavrov arrives for a news conference on the sidelines of the Organization for Security and Co-operation in Europe (OSCE) summit in Milan, Italy, December 7, 2018. REUTERS/Alessandro Garofalo
MILAN (Reuters) – Russian Foreign Minister Sergei Lavrov said on Friday that the detention of Chinese technology giant Huawei’s chief financial officer in Canada was an example of “arrogant” U.S. policy abroad.
Speaking at a news conference in Milan, Lavrov said the detention showed how Washington imposes its laws beyond its jurisdiction.
Huawei CFO Meng Wanzhou, 46, who is also the daughter of the company founder, was arrested on Dec. 1 at the request of the United States. The arrest, revealed by Canadian authorities late on Wednesday, was part of a U.S. investigation into an alleged scheme to use the global banking system to evade U.S. sanctions against Iran, people familiar with the probe told Reuters.
Reporting by Crispian Balmer; writing by Tom Balmforth and Maria Kiselyova; Editing by Peter Graff
Over the last two days, Facebook CEO Mark Zuckerberg was questioned for more than 10 hours by two different Congressional committees. There was granular focus on privacy definitions and data collection, and quick footwork by Zuckerberg—backed by a phalanx of lawyers, consultants, and coaches—to craft a narrative that users “control” their data. (They don’t.) But the gaping hole at the center of both hearings was the virtual absence of questions on the tactics and purpose of Russian information operations conducted against Americans on Facebook during the 2016 elections.
Here are five of the biggest questions about Russia that Zuckerberg wasn’t asked or didn’t answer—and why it’s important for Facebook to provide clear information on these issues.
1. What were the tools and tactics used by Russian entities to execute information operations against American citizens, and what were the narratives pursued?
In both hearings, in answering unrelated questions, Zuckerberg began to describe “large networks of fake accounts” established by Russian entities. In both instances, he was cut off. This was a significant missed opportunity to pull back the curtain on the mechanisms of Russian information operations against the American public.
The vast majority of information made available by Facebook—and the focus of questions in response—has been about ads and promoted content from Russian entities like the Internet Research Agency. In fact, this was not the primary means of distributing content, collecting information, identifying potential supporters, and promoting narratives. The main tool for this was fake accounts posting “native” content—plain old Facebook posts—and building relationships with real users.
In Wednesday’s hearing before the House Energy & Commerce Committee, for example, Zuckerberg said that tens of thousands of fake accounts were taken down to prevent interference in elections in 2017, implying that these mostly related to Russia. But this wholesale removal of accounts obviously went way beyond the 470 accounts that have been identified as buying ads on behalf of the IRA. Zuckerberg focused only on ads bought by Russian accounts, not the regular Facebook posts that were so much more numerous. He testified that the Russian accounts were primarily using “issues ads”—aimed at influencing people’s views on issues rather than promoting specific candidates or political messaging. Asked about the content, though, Zuckerberg said he had no specific knowledge.
In the indictment of the IRA, prosecutors highlighted the fact that the agency had used false IDs to verify false personas. So, while Facebook has announced that group pages will now require verification with a government ID and a physical address that can be validated, fake IDs and US-registered shell corporations (a point raised by Senator Sheldon Whitehouse) can still be used to bypass these security protocols—albeit at a much more significant expenditure of resources.
Zuckerberg said Facebook only identified Russian information operations being conducted on their platform right before the 2016 elections. But in his written testimony, he says they saw and addressed activity relating to Russian intelligence agencies earlier. And from 2014 onward, Facebook was made aware of the aggressive information campaigns being run against Ukraine by Russia.
It wasn’t an accident that Zuckerberg used the term “sophisticated adversaries” in his prepared statement. Facebook, more than anyone, has visibility into what Russia does and why it works. Apparently, no one was interested in hearing what he had to say.
2. What personal data does Facebook make available to the Russian state media monitoring agency Roskomnadzor or other Russian agencies? Is this only from accounts located in or operated from Russia, or does this include Facebook’s global data?
These questions were asked by former fighter pilot and Russia hawk Rep. Adam Kinzinger—and answered evasively by Zuckerberg, who did not address the fact that the Russian government requires companies like Facebook to store their data in Russia precisely so it can access that data (and the Russians say that Facebook has agreed to comply). Very few companies—including Twitter and YouTube—have provided much transparency about what data they share with the Russian government. This is important because, depending on the scale, Russia doesn’t need to rely on data harvesters if it can simply take the data itself. In another case, the Russian government forced data sharing through a corporate partnership with Uber.
This is also important because Zuckerberg expressed extreme skepticism about sharing data with the US government. Does he feel the same way about foreign entities? When law enforcement or intelligence agencies from more aggressive foreign governments ask for information, does Facebook comply? Is there any instance where they have complied with a foreign government request that they would deny the United States?
In both hearings, Zuckerberg was also asked if Russia or China scrape Facebook data, or used apps like the one used by Aleksandr Kogan, the data scientist who provided Facebook data to Cambridge Analytica. Zuckerberg responded that he didn’t have specific knowledge of that—but, as Rep. Jan Schakowsky pointed out, there were 9 million apps scraping data, so how can they possibly begin to know where the data and all its derivative copies went?
Zuckerberg called Chinese internet companies a “strategic and technological threat”—and whoever asked the question just moved on. This is a huge admission from one of the people best positioned to understand how AI and data tech can be weaponized by adversaries. Next time, maybe let the man talk about what he sees and the threats we are up against?
3. Did Facebook delete data related to Russian information operations conducted against American citizens? Will it agree to make this material available for researchers?
In the House hearing, there was one question relating to data preservation in connection to the Cambridge Analytica case. But not a single member asked if Facebook has preserved all of the data and content connected to Russian information operations conducted against American citizens, or whether that data and content would be made available to researchers or intelligence agencies for evaluation.
Many accounts have been pulled down and deleted, and while some of the advertising clients have been exposed, many of the fake accounts and false identities are not known to the public. It is vital that this information be analyzed by people who understand what the Russians were trying to achieve so we can evaluate how to limit computational propaganda from hostile entities and assess the impact these operations had on our population. Without this kind of analysis, we will never unravel the damage or build realistic defenses against these capabilities.
Zuckerberg got no questions about mitigating the psychological impact of these operations. There were no questions about Facebook’s own internal research and evaluation of these tools and tactics. And no one asked what Facebook knows about their broader effectiveness or impact on the public.
4. What assistance do Facebook employees embedded with advertising clients provide? Did any Facebook employees provide support to the Internet Research Agency or any other business or agency in Russia targeting content to American citizens?
Facebook dodged a major bullet because this entire line of questioning was left unexplored. There was one question about Facebook employees embedded in 2016 political campaigns, which Zuckerberg largely answered sideways. But there are extremely important questions to be raised about the way in which Facebook employees aided and enabled harvesters of data and the targeting of hostile information operations—not only against the American public, but in other countries as well.
If Facebook employees worked with the Russians to define more effective audience targeting, for example, then they had vastly more knowledge than they admit and are vastly more complicit. The same would be true if Facebook embeds were working with third parties like Cambridge Analytica and other companies that help governments and ruling parties target their oppositions and win elections. For example, Cambridge Analytica/SCL’s work in Africa shows how aggressively Facebook was used in elections. Did Facebook know? Were they involved? Do their employees have direct knowledge of, or do they aid, “black PR” and coercive psychological operations?
5. Does Facebook have copies of data uploaded to “custom audiences” by any Russian entity?
In many ways, the data will be the fingerprints of the Russian operations in the 2016 elections. As part of Facebook’s “custom audiences” feature, advertisers can upload datasets to target Facebook users. If there is overlapping targeting data, or instances in which similar data was used by different advertising clients, that can show potential coordination between separate entities—for example, between the IRA and the NRA, or the dark money PACs running ads against Clinton. Does Facebook have any known Russian datasets from 2016 that could be compared to Cambridge Analytica and/or Trump campaign data?
Senator Amy Klobuchar highlighted the fact that 126 million people saw IRA content and asked if these people overlapped with the 87 million who had their data scraped by Cambridge Analytica. Zuckerberg said it was “entirely possible” that they overlapped. If this can be documented, it would make it likely that the Cambridge Analytica data was used both by the Russians and by the Trump campaign—and that would mean coordination between the two. The question then would be: who knew about the shared data?
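The comparison described here—checking whether different advertisers uploaded overlapping audience lists—can be sketched in a few lines. This is purely illustrative: the sample emails and advertiser names are hypothetical, and the SHA-256 normalization is an assumption modeled on how ad platforms commonly match custom audiences, not a description of Facebook’s actual internal implementation.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    # Ad platforms typically match custom audiences on hashed, normalized
    # identifiers (assumed here: SHA-256 of a trimmed, lowercased email).
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def audience_overlap(list_a, list_b):
    """Return the shared hashed identifiers and a Jaccard similarity
    score for two uploaded audience lists."""
    set_a = {normalize_and_hash(e) for e in list_a}
    set_b = {normalize_and_hash(e) for e in list_b}
    shared = set_a & set_b
    union = set_a | set_b
    jaccard = len(shared) / len(union) if union else 0.0
    return shared, jaccard

# Hypothetical audience uploads from two unrelated advertising clients
advertiser_1 = ["voter1@example.com", "voter2@example.com", "voter3@example.com"]
advertiser_2 = ["Voter2@Example.com ", "voter4@example.com"]

shared, score = audience_overlap(advertiser_1, advertiser_2)
print(len(shared), round(score, 2))  # one match survives case/whitespace differences
```

A high Jaccard score between datasets uploaded by nominally separate entities is exactly the kind of fingerprint that would suggest a shared upstream source for the data.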
American privacy is important. But gaining a more expansive understanding of the information operations being targeted against our population by hostile foreign actors like Russia is also critical. In that respect, the Zuckerberg hearings were a huge missed opportunity. We do not have a lot of time to assess and evaluate what happened in 2016 before the 2018 elections are upon us. This is not merely a cybersecurity challenge; it’s not just about protecting voting machines or email servers. There is an information component that is not being addressed, and doing so gets harder when companies like Facebook are erasing and suppressing the data that can help us become more informed and help us develop a new kind of human-led deterrence that will prevent these campaigns from being as effective in the future.
Zuckerberg repeatedly referred to the idea of data “control” that was completely nonsensical to anybody who actually speaks English as a first language. We don’t control our data. Especially not when Facebook is aggressively harvesting data on everyone, not just their 2 billion users, and building internet access globally so they can get even more data. It doesn’t matter that Facebook isn’t “selling data”—an oft-repeated theme. They are using psychographics to profile you and selling advertisers access to the products of those algorithms. This is why there was evasion on questions about predictive profiling—the entire backend of adtech. Facebook knows it works. They use it every day—and they understand exactly how effective it can be for hostile actors like Russia.
Mr. Zuck Goes to Washington
Molly K. McKew (@MollyMcKew) is an expert on information warfare and the narrative architect at New Media Frontier. She advised Georgian President Mikheil Saakashvili’s government from 2009 to 2013 and former Moldovan Prime Minister Vlad Filat in 2014-15.
For some time, there has been a conflation of issues—the hacking and leaking of illegally obtained information versus propaganda and disinformation; cyber-security issues and the hacking of elections systems versus information operations and information warfare; paid advertising versus coercive messaging or psychological operations—when discussing “Russian meddling” in the 2016 US elections. The refrain has become: “There is no evidence that Russian efforts changed any votes.”
But the bombshell 37-page indictment issued Friday by Robert Mueller against Russia’s Internet Research Agency and its leadership and affiliates provides considerable detail on the Russian information warfare targeting the American public during the elections. And this information makes it increasingly difficult to say that the Kremlin’s effort to impact the American mind did not succeed.
The indictment pulls the curtain back on four big questions that have swirled around the Russian influence operation, which, it turns out, began in 2014: What was the scope of the Russian effort? What kind of content did it rely on? Who or what was it targeting, and what did it aim to achieve? And finally, what impact did it have?
Most of the discussion to date has focused on political advertising and the reach of a handful of ads—and that discussion has completely missed the point.
So let’s take these questions one at a time.
1. What was the scope of the Russian effort?
The Mueller indictment permanently demolishes the idea that the scale of the Russian campaign was not significant enough to have any impact on the American public. We are no longer talking about the approximately $100,000 (paid in rubles, no less) of advertising grudgingly disclosed by Facebook, but tens of millions of dollars spent over several years to build a broad, sophisticated system that can influence American opinion.
The Russian efforts described in the indictment focused on establishing deep, authenticated, long-term identities for individuals and groups within specific communities. This was underlaid by the establishment of servers and VPNs based in the US to mask the location of the individuals involved. US-based email accounts linked to fake or stolen US identity documents (driver’s licenses, Social Security numbers, and more) were used to back the online identities. These identities were also used to launder payments through PayPal and cryptocurrency accounts. All of this deception was designed to make it appear that these activities were being carried out by Americans.
Additionally, the indictment mentions that the IRA had a department whose job was gaming algorithms. This is important because information warfare—the term used in the indictment itself—is not about “fake news” and “bots.” It is about creating an information environment and a narrative—specific storytelling vehicles used to achieve goals of subversion and activation, amplified and promoted through a variety of means.
2. What kind of content did it rely on?
As the indictment lays out in thorough detail, the content pumped out by the Russians was not paid or promoted ads; it was so-called native content—including video, visual, memetic, and text elements designed to push narrative themes, conspiracies, and character attacks. All of it was designed to look like it was coming from authentic American voices and interest groups. And the IRA wasn’t just guessing about what worked. They used data-driven targeting and analysis to assess how the content was received, and they used that information to refine their messages and make them more effective.
3. Who or what was the operation targeting, and what did it aim to achieve?
The indictment mentions that the Russian accounts were meant to embed with and emulate “radical” groups. The content was not designed to persuade people to change their views, but to harden those views. Confirmation bias is powerful and commonly employed in these kinds of psychological operations (a related Soviet concept is “reflexive control”—applying pressure in ways to elicit a specific, known response). The intention of these campaigns was to activate—or suppress—target groups. Not to change their views, but to change their behavior.
4. What impact did it have?
We’re only at the beginning of having an answer to this question because we’ve only just begun to ask some of the right questions. But Mueller’s indictment shows that Russian accounts and agents accomplished more than just stoking divisions and tensions with sloppy propaganda memes. The messaging was more sophisticated, and some Americans took action. For example, the indictment recounts a number of instances where events and demonstrations were organized by Russians posing as Americans on social media. These accounts aimed to get people to do specific things. And it turns out—some people did.
Changing or activating behavior in this way is difficult; it’s easier to create awareness of a narrative. Consistent exposure over a period of time has a complex impact on a person’s cognitive environment. If groups were activated, then the narrative being pushed by the IRA certainly penetrated people’s minds. And sure enough, the themes identified in the indictment were topics frequently raised during the election, and they were frequently echoed and promoted across social media and by conservative outlets. A key goal of these campaigns was “mainstreaming” an idea—moving it from the fringe to the mainstream and thus making it appear more widely held than it actually is.
This points to another impact that can be extracted from the indictment: It is now much more difficult to separate what is “Russian” or “American” information architecture in the US information environment. This will make it far harder to assess where stories and narratives are coming from, whether they are real or propaganda, whether they represent the views of our neighbors or not.
This corrosive effect is real and significant. Which part of the fear of “sharia law in America” came from Russian accounts versus readers of InfoWars? How much did the Russian campaigns targeting black voters impact the low turnout, versus the character attacks run against Clinton by the Trump campaign itself? For now, all we can know is that there is a shared narrative, and shared responsibility. But if, as the indictment says, Russian information warriors were instructed to support “Sanders and Trump,” and those two campaigns appeared to have the most aggressive and effective online outreach, what piece of that is us, and what is them?
Persuasion and influence via social media cannot be estimated in linear terms; it requires looking at network effects. It is about the impact of a complex media environment with many layers, inputs, voices, amplifiers, and personalities. All of these elements change over time and interact with each other.
So anyone trying to tell you there was little impact on political views from the tools the Russians used doesn’t know. Because none of us knows. No one has looked. Social media companies don’t want us to know, and they obfuscate and drag their feet rather than disclosing information. The analytical tools to quantify the impact don’t readily exist. But we know what we saw and what we heard—and the narratives pushed by the Russian information operation reached all of our ears and eyes.
The groups and narratives identified in the indictment were integral parts of the frenzied election circus that built momentum, shaped perceptions, and activated a core base of support for now-President Trump—just as they helped disgust and dismay other groups, making them less likely to vote (or to vote for marginal candidates in protest).
In the indictment, Trump campaign officials are referred to as “unwitting” participants in Russian information warfare. This gives the White House an out—and a chance to finally act against what the Kremlin did. But the evidence presented in the indictment makes it increasingly hard to say Russian efforts to influence the American mind were a failure.
Molly K. McKew (@MollyMcKew) is an expert on information warfare and the narrative architect at New Media Frontier. She advised Georgian President Mikheil Saakashvili’s government from 2009 to 2013 and former Moldovan Prime Minister Vlad Filat in 2014-15.
President Trump has approved a plan to send Javelin anti-tank missile systems to Ukraine to help the U.S.-backed government there fight Russian-allied forces. Russian military and allied forces have been active in Ukraine since the 2014 ouster of pro-Russian president Viktor Yanukovych.
The sale, reported by the Wall Street Journal, would put a uniquely effective weapon into play in the conflict. The Javelin, developed by Raytheon and Lockheed Martin and first put into service in 1996, is a shoulder-fired missile designed to track targets by infrared. But rather than hitting a tank in the front or sides, where its armor is thickest, the Javelin projectile flies along a high arc to strike a tank’s roof, where the armor on most models is thinnest.
The Javelin is more powerful, more expensive, and more tightly controlled than other anti-tank weapons, such as the older BGM-71 TOW system. According to an in-depth overview by The National Interest, the Javelin had a major showing in the 2003 invasion of Iraq. In one battle, it enabled a small group of U.S. special operations troops with four Javelin launchers to destroy a substantially larger Iraqi tank unit.
The Javelin is said to be effective against most tanks in the Russian arsenal, though it has not been battle-tested against the most modern tanks. The State Department also recently approved the sale of Javelins to Georgia, which has had its own recent clashes with Russia, and has also sent units to Lithuania and Estonia.
Russian tanks have been instrumental in some victories by pro-Russian forces in the Ukrainian conflict. However, commentators have also described tank battles as relatively rare. That has led some to speculate that the decision is primarily political rather than tactical, intended to signal deeper American support for anti-Russian forces. Ukraine expert Michael Kofman told the Washington Post that Russia would “see this as a premise of the U.S. wanting to kill Russians,” pointing to a possible escalation of both the conflict and broader U.S.-Russia tensions.
MOSCOW (Reuters) – Uber [UBER.UL] and Yandex’s ride-sharing businesses can merge in Russia, anti-monopoly regulator FAS ruled on Friday, but stipulated that the combined company not bar drivers from working for competitors.
Uber and Yandex, often referred to as the “Google of Russia”, announced plans in July to combine operations in 127 cities in Russia, Armenia, Azerbaijan, Belarus, Georgia and Kazakhstan.
San Francisco-based Uber has agreed to invest $225 million while Yandex will contribute $100 million into a new joint company in which Yandex will own 59.3 percent.
The two companies must allow their partners, drivers and passengers to work for or use competitors’ services and fully inform users of the legal entity providing the service, the FAS said in a statement.
Yandex said consumers would be able to use both Yandex.Taxi and Uber apps, while their driver apps will be integrated, leading to shorter passenger wait times, increased driver utilization rates, and higher service reliability.
The companies aim to close the deal in January 2018, after the New Year holidays in Russia, Yandex said in a statement.
Moscow-listed Yandex was up 3.47 percent as of 1123 GMT.
It said the anti-monopoly regulator in Belarus had also approved the deal while a decision by the Kazakh regulator was pending.
Reporting by Maria Kiselyova; editing by Jason Neely