Do smartphones pose a danger to corporate security and well-being? To believe some recent analysts and commentators, smartphones carry an unseen threat of chaos, disruption, and financial loss into any company naive enough to tolerate their employees using them.
Allegedly, smartphones can be the vehicle for viruses and other malware to penetrate corporate defences. Equally worrying (it is said), smartphones can be the vehicle for sensitive business information to leak out of the company and into the hands of competitors and ne’er-do-wells. Finally, the purported headaches of coping with multiple different kinds of smartphones, each with its own distinct protocols and complications, mean that any benefits from the use of smartphones are outweighed by the cost of managing and supporting these disparate devices.
I understand the concerns that lie behind such beliefs. But I reject – strongly – the conclusions.
Yes, there are good reasons for the industry to be concerned with the issues of smartphone security. And yes, businesses need to think ahead when deploying enterprise applications onto the smartphones used by their employees. But provided people follow some basic rules, there’s no real threat from smartphones. Rather than smartphones being an Achilles heel of security, they will increasingly be its keystone.
The first point to appreciate is that smartphones are not all equal. To be specific, the operating systems underlying smartphones are not all equal. Just because operating system A has known security problems, it does not follow that operating system B has these same problems. Any such line of reasoning is an example of the (sadly widespread) fallacy that smartphone operating systems are a commodity, lacking any real distinctiveness from a technology or product point of view. On the contrary, security against malware is an example where one operating system – Symbian OS – has consistently adopted a very different approach from the pack.
As I’ll explain later in this article, there are some important developments, security-wise, in the latest major upgrade to Symbian OS, v9. But the underlying difference in approach to security dates back to the design of the earliest version of Symbian OS. To explain this point, I need to take a slight technical detour.
When you look at the innards of a large software system, you’ll find many instances of pieces of software copying data from one memory location to another. For example, an application might copy a piece of text from the contacts database, reformat it, and then pass the new data to the graphics software to display on the screen. Or data could be copied from a plug-in networking component into the contacts database.
Now consider what happens when a piece of software receives more data than it expects. Imagine if software has set aside 256 bytes of memory (in a so-called “buffer”) to receive information from a networking component, but the networking component actually pumps in 300 bytes. What happens to the extra 44 bytes? By default, the extra bytes get placed into whatever memory location lies next to the original buffer, overwriting the previous contents. This kind of issue arises so often in software that it has its own name: “buffer overflow” (or “buffer overrun”).
The outcome of a buffer overflow depends on the meaning given by the software to the data in the adjacent piece of memory. In essence, this data can sometimes be interpreted as a new set of software instructions to be executed. In this case, the buffer overflow can result in program control passing sooner or later to an alien piece of software. Usually, the alien software won’t make much sense, and the system will lock up fairly soon afterwards. In maliciously engineered cases, however, this alien software has been specially crafted by an ill-intentioned writer – and that’s how malware seizes control of the device, via a so-called “backdoor” or “side-window”.
This sets the scene for a short historical note. The 16-bit precursor to Symbian OS was a software system known as EPOC, used in a succession of popular 1990s handheld computing devices such as the Psion Series 3. The development of EPOC was often held up for days as my colleagues and I tried to diagnose various system lockups. Over the course of many all-night debugging sessions, we gained an intense appreciation of the significance of buffer overflows as the root causes of many of these tiresome bug hunts. Colly Myers, who would later become Symbian’s first CEO, made the bold determination that Symbian OS would not suffer from the same problem. As Colly defined the fundamental building blocks of Symbian OS (then known as “EPOC32”) during the fourth quarter of 1994, he gave special attention to the low-level software to be used for storing and copying text. He had two main objectives in mind:
- Efficient memory usage – in contrast to the alternative class libraries Colly considered (found on the fledgling Internet), which frequently made profligate use of memory and/or CPU cycles when storing or manipulating text;
- Robustness and security – to eliminate the insidious consequences of buffer overflows, by terminating application execution immediately if any software tried to copy data beyond the end of a buffer. This leaves no scope for alien software to take control of the phone – and it lets debugging tools identify the root causes of system lockups much more quickly.
The outcome was what is called the Symbian “descriptor” class hierarchy. Generations of software engineers learning Symbian OS have suffered a culture shock when encountering descriptors for the first time: they’re very different from how text is handled in other operating systems or class libraries.
The designers of other smartphone operating systems have been aware of the drawbacks of buffer overflows, but they lacked the courage (or the ability) to impose a systematic low-level solution akin to Symbian’s descriptors. For these designers, it was a higher priority to provide a programming system that was similar to what people were already accustomed to using. In effect, they took the view that security problems could be patched individually, as and when they were discovered. However, given the millions of lines of code that exist in modern computing systems, this outcome is far from satisfactory.
Symbian’s adoption of descriptors is but one example of what is called a “defensive” approach to software: each software component has to be ready to deal with malformed data passed to it by other components. It cannot take it for granted that the data conforms to the expected structure. This attitude is reinforced by various development tools and the general development culture in the Symbian ecosystem. As a result, I’m pleased to report that there are (as yet) no known cases of “backdoor” or “side-window” security flaws in Symbian OS. Unlike with other operating systems, malware cannot exploit buffer overflows or similar programming bugs to install itself onto a Symbian smartphone and cause damage. Instead, the only known route for malware onto a Symbian smartphone is via the “front door” – the installation dialog which requires the user’s consent to add new software into the smartphone.
The key thing about the installation dialog is that the user is warned when an application comes from an untrusted source. Even if the application tells the user that it is (for example) a “software update from Symbian” or an “important piece of news from your company IT department”, the phone itself will know better, and will make this fact clear to the user. The common sense rule that people need to follow is: don’t install software that you don’t trust. Consider the following analogy. If a complete stranger comes up to you in a pub and asks you if he can borrow your phone to take it away for thirty minutes, to install some extra software on it, would you hand over your phone? Or how about handing over your wallet? Would you expect to see it back? In such a case, before handing anything over, you would need to be mighty sure that the stranger had proper ID, and that you could trust him. Well it’s the same with allowing unknown software onto your phone. Users have to learn only to install software from trusted sources.
Analysts and commentators sometimes get over-excited about so-called “Bluetooth viruses” or “MMS viruses”. Readers are left with the (false!) impression that, merely by having Bluetooth turned on in their smartphone, or merely by accepting an MMS message, they are vulnerable to their phones being hijacked. If there were side-window security problems with Symbian OS, there might be a reason to worry. However, any malware that arrives on your phone via Bluetooth or MMS – or indeed via any other route – remains impotent unless it manages to persuade the user to press ‘Yes’ in the front door dialog to install new software, and ‘Yes’ again in the dialog warning the user that the application is untrusted.
Here’s another source of confusion. Just because there are an increasing number of viruses that target Symbian OS, it does not mean that Symbian OS is intrinsically insecure. It just means that:
- Symbian OS is running on the majority of advanced, programmable mobile phones;
- Symbian OS phones therefore present a numerically attractive target for people interested in mobile malware.
However, the same common-sense rule stops each and every one of these viruses: users should avoid installing untrusted software.
So here are the first three rules for companies wishing to take advantage of the potential for smartphones to boost the effectiveness of their staff:
1. Ensure that the smartphones are running an operating system that puts a high priority on security;
2. Remind all users about the drawbacks of installing untrusted software;
3. For extra confidence, consider installing a virus scanner on the phone – or activating any that is built in: this provides a second line of protection.
Note that even if one employee misguidedly installs a malicious piece of software, there’s very little risk of this spreading throughout the company. Unlike the case with networked desktop computers, malware cannot leap across from one smartphone to another, without the active assistance of human users.
The fourth rule is that companies should pay at least as much attention to the “device management” aspects of smartphone deployment as they do to protection against malware. As described on the partnering pages of Symbian’s website, a range of third-party companies already provide comprehensive advice and solutions. These solutions include:
- “Remote wipe” – whereby the data on a smartphone can be removed from it, using OTA (over-the-air) wireless instructions, in case the smartphone becomes lost or stolen;
- “Remote maintenance” – whereby problems being experienced by smartphone users can be diagnosed OTA using remote viewers and, if necessary, solved using OTA software patches.
The fifth and final rule arises from the observation that companies will have to invest considerable energy and resources to ensure that their adoption of smartphones goes well and delivers good results on a sustained basis. The good news is that the payback has the potential to grow larger and larger over time – provided the investment is done in a way that:
- Maximises the freedom of choice of devices, user interfaces, and applications;
- Is ready to adapt to the inevitable surprise new trends in technology and business practice.
Symbian OS fares admirably on both these criteria – supporting a wide variety of different devices, all using the same software platform, and with a healthy and active roadmap of forthcoming new features.
In closing, this brings me back to the significant incremental improvements made to the security system in v9 of Symbian OS. These improvements – known as “Platform Security” – address the only residual point of vulnerability with pre-v9 phones, which is none other than the end user of the phone. As mentioned, users sometimes install software which ends up misbehaving on the phone. But from v9 onwards, all add-on software is checked at run-time before it is allowed to use any sensitive smartphone functionality. An application is allowed to use such functionality only if it has been verified as trusted for it. Here, “sensitive” functionality includes:
- Anything that will cause the user to incur a phone bill;
- Access to contacts information, call log information, agenda information, or location information;
- Access to any data files created by another application.
It’s as if you let someone into your house, through the front door, having checked that he is a certified plumber. He would be allowed to access the waterworks of the house, but if he slyly tries to take a look in your jewellery box, or in your bedside diary, the operating system of the house would prevent it (unless he also happens, for example, to be an authorised jeweller). That’s how Platform Security protects mobile phones. And the real beauty is that users don’t need to understand any of the details of what’s happening: the security works regardless of what users do.