Earlier this month, I woke up to a disastrous security bug in Apple’s FaceTime that could let anyone easily eavesdrop on iOS and macOS devices. In case you haven’t heard about it yet, FaceTime, the audio and video conferencing app that comes preinstalled on all iPhones, iPads, and Mac computers, had a major security flaw that could let a caller hear the audio from the device they were calling before the person on the other end accepted or rejected the call.
The bug could easily be reproduced and exploited. All you needed to do was initiate a FaceTime call and use its group feature to add your own number while the recipient’s device was still ringing, and you would hear what they were saying. In fact, the flaw was reportedly discovered by a 14-year-old a week before it was reported by 9to5mac.com and acknowledged by Apple.
(In case you’re hearing about this for the first time, don’t worry. Apple has already disabled Group FaceTime, the feature that contained the bug, on its servers. But you should disable FaceTime on your devices for good measure until Apple fixes the flaw.)
But while Apple struggles with the technical and legal ramifications of the FaceTime vulnerability, let’s briefly consider the broader implications of this latest revelation.
Security through obscurity sucks
In the aftermath of the FaceTime eavesdropping disclosure, there has been a lot of speculation over how such an obvious security flaw could slip past Apple’s defenses.
Apple has one of the most robust software development processes in the industry. Each of its products goes through extensive testing to make sure it doesn’t have functional or security flaws. Its iOS App Store is one of the most secure online software markets.
But Apple is also a walled garden. All its applications are closed-source, which means only its own developers and security experts analyze and test them. There’s no independent review and testing of its applications.
Since the security bug made headlines, I’ve heard any number of explanations, ranging from government-implanted backdoors to disgruntled employees sabotaging the software to take revenge on their employer. For all we know, it could have been an honest mistake. We’ll never find out, because Apple has opted for security through obscurity: we have no visibility into FaceTime’s development process or the versioning software that controls and documents changes made to the source code.
We also don’t know when the bug was introduced. According to reports, the bug existed on any device running iOS 12.1 or later. Released on October 30, 2018, iOS 12.1 added Group FaceTime, the feature that contained the eavesdropping bug.
Therefore, if the flaw was in the client FaceTime app, it had been in the wild for almost three months. It’s hard to imagine such an evident bug remaining undiscovered for so long. And if it had, malicious actors would probably have found it long before the rest of us and been quietly exploiting it all along.
But the security flaw could have also been part of an update Apple made to its FaceTime servers a few days or even hours before it was first discovered. Again, unless Apple decides to make the information public, we’ll never know.
Had FaceTime been an open-source application, independent experts could have examined and vetted its code. While open-source applications are far from perfectly secure, at least they’re perfectly transparent. If the source code lives on GitHub or some other publicly available repository, anyone can track the history of code changes and commits. A transparent and open process makes it extremely difficult to sabotage source code or intentionally introduce backdoors into applications. At the very least, we would know whether the FaceTime bug was an honest mistake.
But since we can’t see behind Apple’s walled garden, we’ll just have to speculate about the reasons behind the disastrous FaceTime security bug while we wait for the iOS update coming later this week.
Complexity is hard to secure
In retrospect, we can always criticize Apple and opine on how it could have discovered and fixed the FaceTime security bug before it found its way into hundreds of millions of consumer handsets. But no matter how hard it tries, Apple can’t ensure the absolute security of its phones and their applications. Neither can Google or Samsung or other manufacturers of smartphones.
[Image: an old mechanical telephone.] In contrast, with this phone, you can rest assured that until you pick up the receiver, no one will be able to hear your voice.
Since their invention, telephones have evolved from simple mechanical devices into full-featured computers. In a 2016 hearing held by the House Committee on Energy and Commerce, security researcher and cryptography pioneer Bruce Schneier said, “Everything is now a computer.” Then he held up his iPhone and added, “This is not a phone. This is a computer that makes phone calls.”
Schneier also said, “complexity is the worst enemy of security. Complex systems are hard to secure for an hour’s worth of reasons.”
Computers and computer software are among the most complex systems we build. An old telephone consists of a few dozen parts. An application like FaceTime likely consists of hundreds of thousands of lines of code spread across hundreds of modules. Finding and fixing flaws in a video-calling application is orders of magnitude harder than troubleshooting an old telephone.
Schneier also touched upon another important point: “There are new vulnerabilities in the interconnections. The more we connect things to each other, the more vulnerabilities in one thing affect other things.” FaceTime consists of a client application and server-side software that enables different users to call each other with their iPhones.
Also, the FaceTime development team probably uses general-purpose libraries written by other programmers at Apple. Code and component reuse is common practice in software companies: it saves time and money and makes the entire development process much more efficient. Google, for example, maintains one of the largest code bases in the world, some 2 billion lines of code in a modular repository that allows thousands of developers to update separate parts of the tech giant’s huge line of software and simultaneously roll out new features across the entire ecosystem of online applications the company runs.
But if an update to any of those components introduces a security flaw, then all the applications that use that component will suddenly become vulnerable. So at the end of the day, the FaceTime security flaw might not even have been the fault of the app’s developers. It might have been the result of an update introduced in a general-purpose iOS library or a change to one of the server components.
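The dynamic is easy to sketch. The hypothetical Python below (none of these names or states come from Apple’s actual code; they are purely illustrative) shows two pieces of an app gating audio streaming on one shared helper; a single “update” to that helper silently changes the behavior of every caller at once:

```python
# Illustrative only: a hypothetical shared helper used by multiple components.
# A seemingly harmless update to the shared code changes the behavior of
# every caller at once -- the dynamic behind shared-component security bugs.

def is_call_answered(state: str) -> bool:
    """Shared helper, v1: only an explicit 'accepted' state counts as live."""
    return state == "accepted"

def is_call_answered_v2(state: str) -> bool:
    """Shared helper after an 'update': also treats 'ringing' as live,
    effectively opening the audio stream before the callee picks up."""
    return state in ("accepted", "ringing")

def should_stream_audio(state: str, answered_check) -> bool:
    """Every component gates streaming on whichever shared check it's given."""
    return answered_check(state)

# Before the update: a ringing call streams no audio.
print(should_stream_audio("ringing", is_call_answered))     # False
# After the update: every component using the helper now leaks audio.
print(should_stream_audio("ringing", is_call_answered_v2))  # True
```

No individual component changed its own code, yet all of them inherited the flaw — which is why auditing a shared library’s change history matters as much as auditing the apps built on it.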
If such security flaws can occur at companies with the history and integrity of Apple, think of the vulnerabilities that smaller, inexperienced companies that are turning our home appliances, offices, cars and cities into computers can introduce. That’s the bigger and creepier threat that is looming large on the horizon.
What must we do? First, we must acknowledge that we’re living in a digital world that is becoming more complex and, as a consequence, less secure. Next, we must take measures to ensure the security of our products. In his latest book, “Click Here to Kill Everybody: Security and Survival in a Hyper-connected World,” Schneier stresses that more active involvement by government and legislators will be a vital component of securing the internet and all internet-connected devices.
With the FaceTime security flaw behind us, we must look toward the future and more serious threats that we’ll be facing.