Tag Archives: Phishing

Cyber Saturday—Apple iPhone Phishing Trick, Zscaler as Best Tech IPO, Facebook Fails
June 9, 2018 6:14 pm

Good morning, Cyber Saturday readers.

A month ago I was milling about a hotel room in New Orleans, procrastinating my prep for on-stage sessions at a tech conference, when I received a startling iMessage. “It’s Alan Murray,” the note said, referring to my boss’ boss’ boss.

Not in the habit of having Mr. Murray text my phone, I sat up straighter. “Please post your latest story here,” he wrote, including a link to a site purporting to be related to Microsoft 365, replete with Microsoft’s official corporate logo and everything. In the header of the iMessage thread, Apple’s virtual assistant Siri offered a suggestion: “Maybe: Alan Murray.”

The sight made me stagger, if momentarily. Then I remembered: A week or so earlier I had granted a cybersecurity startup, Wandera, permission to demonstrate a phishing attack on me. They called it “Call Me Maybe.”

Alan Murray had not messaged me. The culprit was James Mack, a wily sales engineer at Wandera. When Mack rang me from a phone number that Siri presented as “Maybe: Bob Marley,” all doubt subsided. Jig, up.

There are two ways to pull off this social engineering trick, Mack told me. The first involves an attacker sending someone a spoofed email from a fake or impersonated account, like “Acme Financial.” The note must include a phone number, say, in the signature of the email. If the target responds—even with an automatic out-of-office reply—then that contact should appear as “Maybe: Acme Financial” whenever the fraudster texts or calls.

The subterfuge is even simpler via text messaging. If an unknown entity identifies itself as Some Proper Noun in an iMessage, then the iPhone’s suggested contacts feature should show the entity as “Maybe: [Whoever].” Attackers can use this disguise to their advantage when phishing for sensitive information. The next step: either call a target to supposedly “confirm account details,” or send along a phishing link. If a victim takes the bait, the swindler is in.
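
To see why this works, consider a minimal, hypothetical sketch of a suggested-contact heuristic of the kind the trick exploits. Apple has not disclosed how Siri's suggestions are actually generated (Wandera's researchers describe it as a black box below), so the function, regular expression, and labels here are invented purely for illustration; the point is only that a label inferred from attacker-controlled message text says whatever the attacker wants it to say.

```python
import re

# Hypothetical sketch, NOT Apple's implementation: a naive rule that infers a
# sender's name from the opening of a message when the number is unknown.
CLAIMED_NAME = re.compile(r"^(?:It's|It is|This is)\s+([A-Z][\w'-]*(?:\s+[A-Z][\w'-]*)*)")

def suggest_label(sender: str, message: str, contacts: dict) -> str:
    """Pick a display label for an incoming message."""
    if sender in contacts:          # known number: use the address book
        return contacts[sender]
    match = CLAIMED_NAME.match(message.strip())
    if match:                       # unknown number: trust whatever the text claims
        return "Maybe: " + match.group(1)
    return sender

# The attacker controls the message body, so the attacker controls the label.
print(suggest_label("+1 555 0100",
                    "It's Alan Murray. Please post your latest story here.",
                    {}))
# -> Maybe: Alan Murray
```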

The tactic apparently does not work with certain phrases, like “bank” or “credit union.” However, other terms, like “Wells Fargo,” “Acme Financial,” the names of various dead celebrities—or my topmost boss—have worked in Wandera’s tests, Mack said. Wandera reported the problem as a security issue to Apple on April 25th. Apple sent a preliminary response a week later, and a few days after that said it did not consider the issue to be a “security vulnerability,” and that it had reclassified the bug as a software issue “to help get it resolved.”

What’s alarming about the ploy is how little effort it takes to pull off. “We didn’t do anything crazy here like jailbreak a phone or a Hollywood style attack—we’re not hacking into cell towers,” said Dan Cuddeford, Wandera’s director of engineering. “But it’s something that your layman hacker or social engineer might be able to do.”

To Cuddeford, the research exposes two bigger issues. The first is that Apple doesn’t reveal enough about how its software works. “This is a huge black box system,” he said. “Unless you work for Apple, no one knows how or why Siri does what it does.”

The second concern is more philosophical. “We’re not Elon Musk saying AI is about to take over the world, but it’s one example of how AI itself is not being evil, but can be abused by someone with malicious intent,” Cuddeford said. As we continue to let machines guide our lives, we should be sure we’re aware of how they’re making decisions.

Have a great weekend—and watch out for imposters.

Maybe: Robert Hackett

@rhhackett

robert.hackett@fortune.com

Welcome to the Cyber Saturday edition of Data Sheet, Fortune’s daily tech newsletter. Fortune reporter Robert Hackett here. You may reach Robert Hackett via Twitter, Cryptocat, Jabber (see OTR fingerprint on my about.me), PGP encrypted email (see public key on my Keybase.io), Wickr, Signal, or however you (securely) prefer. Feedback welcome.

Posted in: Cloud Computing
It Is Mind-Bogglingly Easy to Rope Apple’s Siri into Phishing Scams
June 9, 2018 6:12 pm

This article first appeared in Cyber Saturday, the weekend edition of Fortune’s tech newsletter. Sign up here.

Posted in: Cloud Computing
The next target for phishing and fraud: ChatOps
November 19, 2016 7:05 pm

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Enterprise chat applications have surged in popularity, driven in large part by Slack, which now claims to serve more than three million users daily. What’s more, the popularity of these apps has given rise to a new phenomenon known as ChatOps, which is what happens when these new messaging systems are used to automate operational tasks.

The ChatOps term was coined by GitHub to describe a collaboration model that connects people, tools, processes and automation into a transparent workflow. According to Sean Regan, Atlassian’s Head of Product Marketing for HipChat, this flow connects the work needed, the work happening and the work done in a consistent location staffed by people, bots and related tools. Its transparent nature hastens the feedback loop, facilitates information sharing, and enhances team collaboration, but also ushers in a new set of challenges for security and risk professionals.
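
To make the risk concrete, here is a minimal, hypothetical ChatOps-style sketch; the bot command, names, and authorization check are invented and do not correspond to any real Slack or HipChat API. It shows the pattern the article describes: chat messages that drive operational automation, gated only by the chat identity that a phished account would inherit.

```python
# Hypothetical ChatOps sketch: a bot handler that turns chat messages into
# operational actions. Not a real chat platform's API; for illustration only.

ALLOWED_DEPLOYERS = {"alice", "bob"}

def run_deploy(service: str) -> str:
    # Stand-in for the real automation (CI trigger, rollout script, etc.).
    return "deploy started for " + service

def handle_message(user: str, text: str) -> str:
    """Dispatch a chat message such as '!deploy billing' to an operational task."""
    if not text.startswith("!deploy "):
        return ""                        # not a command, ignore it
    if user not in ALLOWED_DEPLOYERS:    # authorization rests on the chat identity alone
        return "sorry " + user + ", you are not allowed to deploy"
    service = text[len("!deploy "):].strip()
    return run_deploy(service)

# A compromised or convincingly spoofed "alice" account passes the same check,
# which is why phishing a chat login can now reach production systems.
print(handle_message("alice", "!deploy billing"))    # deploy started for billing
print(handle_message("mallory", "!deploy billing"))  # sorry mallory, ...
```

The design choice that makes ChatOps convenient, treating messages as commands, is the same one that makes a stolen or spoofed chat account directly actionable, which is what puts it in the sights of phishing and fraud.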

To read this article in full or to leave a comment, please click here


Posted in: Web Hosting News