This article was published on October 2, 2014

The network admin’s dilemma: Balancing security and productivity in the age of BYOD



The modern network swims in a sea of Wi-Fi and 3G dongle signals. Into it are slotted innumerable USB devices, tablets, telephones, even internet-capable printers. Whilst the necessity of such attachments might be debatable, what isn’t is that network management today is HARD.

Managing a corporate network, with its myriad attached devices and internet-facing facets, is a daily compromise. It’s a delicate balancing act between legitimate access and opportunity for malicious exploit. Unfortunately, finding that balance is only going to get harder.

As we find ever more ways to work locally and remotely, so we expose the infrastructure supporting those methods to new risks. That infrastructure has to contend with the fact that smartphones and tablets are such a part of our everyday lives that they have become an extension of our digital selves.

Our desire to retain that identity and familiarity has pushed us to attach a wide range of devices to the corporate network. Unfortunately that means opening potentially sensitive areas of the business to an equally wide range of operating systems, software applications and security practices.

Ye olden times


I remember (and I’m probably giving quite a lot away here) working on a ring topology network. The biggest danger to the admin of that network was an unexpected foot/coaxial cable interface. Policing those types of networks wasn’t difficult, because accessing them would have been quite a lot of real-world work (it had no external connections, and the 5¼-inch floppy drives were secured with screws). The attacker would have had to be present in the room to interact with the devices in it.

The mantra was largely “if there’s an attacker on site, we’ve got bigger problems than the network”. It was probably also the case that the (tiny by modern standards) amount of data stored on the monolithic mainframe was worth considerably less to any third party than the hardware of the mainframe or dumb terminals themselves. The largest attack vector for that network was, therefore, physical.

Roll forward a few years and that network is now on Ethernet, and every machine has a floppy drive for saving work and moving it between sites. Now, in addition to worrying about a human attacker on site, the network admin has to be concerned about what’s being taken offsite, and what’s being brought in from outside.

If that admin is unlucky, it’s 1981, he’s on an Apple II network, and someone brings in a disk infected with Elk Cloner (the first malware as we’d recognise it today), loads it up onto a machine, and it spreads. Five years later, Brain.A (the first PC virus) appears.

Dial-up days and remote threats

A few years more and the network is connected to others via the internet, a 56k modem caterwauling in the corner. Now, in addition to physical attack and infected media on site, threats to the network can come in from remote sources.

I remember sitting on Microsoft’s helpdesk when the ILOVEYOU worm broke. In particular I remember speaking to someone who’d been infected, had had their machine cleaned and had then become infected again after opening a second worm attachment (because how were they to know THIS one wasn’t genuine?).

More time passes, more exploit vectors. Laptops. Wi-Fi. Outsourcing. BYOD. Today it’s clear that any route into a network is a possible route for attack, and that trend will continue.

Of course, the flip side is that every route into the network has also increased the potential for productivity. Where we were chained to workstations, we developed laptops. Where there were cubicles and offices, now there are hotdesks and homeworking. How we work is changing as fast as the technologies we use to enable that work.

As early as 1991, RFCs were created to help the site admin understand and implement “best practice” in securing their network. RFC 1244 (and later 2196) began to outline a “site security” handbook. Some of RFC 1244 seems quite archaic to today’s reader (“computers behind locked doors, and accounting for all resources”), while some of it seems more than a little familiar. For example: “Three days later, you learn that there was a break in. The center director had his wife’s name as a password.”

It did, however, establish a series of basic ground rules for network management that are as relevant today as they were 23 years ago:

  • Look at what you are trying to protect.
  • Look at what you need to protect it from.
  • Determine how likely the threats are.
  • Implement measures which will protect your assets in a cost-effective manner.
  • Review the process continuously, and improve things every time a weakness is found.

This basic analyse > secure > review process is still at the core of how networks are secured to this day. It’s sufficiently open that it doesn’t care whether the threat is a hacker on another continent taking advantage of a flaw in your website’s theme, or an employee bringing a virus in on his phone after some dubious personal browsing at the weekend.
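To make the loop concrete, here’s a minimal sketch of that analyse > secure > review cycle in Python. Everything in it – the asset, the threats, the likelihoods and the cost figures – is invented for illustration; a real assessment would start from an actual inventory and real incident data.

```python
# A minimal sketch of the RFC 1244 analyse > secure > review loop.
# Every asset, threat, likelihood and cost below is invented for
# illustration; a real assessment would start from a real inventory.

from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: float   # rough probability of occurrence per year, 0.0-1.0
    impact: int         # rough cost of a successful attack

@dataclass
class Asset:
    name: str
    threats: list

def risk(threat: Threat) -> float:
    # Classic expected-loss estimate: likelihood x impact.
    return threat.likelihood * threat.impact

def review(assets: list) -> None:
    # 1. Look at what you are trying to protect.
    for asset in assets:
        # 2-3. Look at what you need to protect it from, and how likely it is.
        for threat in sorted(asset.threats, key=risk, reverse=True):
            exposure = risk(threat)
            # 4. Implement measures cost-effectively: never spend more
            #    protecting an asset than the expected loss it faces.
            print(f"{asset.name}: {threat.name} - expected loss {exposure:,.0f}")
    # 5. Review continuously: re-run this whenever a weakness is found.

if __name__ == "__main__":
    mail_server = Asset("mail server", [
        Threat("remote exploit via unpatched service", 0.30, 40_000),
        Threat("BYOD-borne malware", 0.15, 20_000),
    ])
    review([mail_server])
```

The point isn’t the arithmetic, which is deliberately crude; it’s that the five ground rules reduce to a loop you keep running, not a box you tick once.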

As well written as it was, it didn’t matter. Unfortunately, the rate of application of “best practice” didn’t keep pace with the rate at which new networks sprang up. Throughout the late 1990s and early 2000s there were a number of high-profile data breaches of various forms – stolen laptops with unencrypted customer data, identity thefts, unintended disclosure of medical data, stolen credit card info.

This prompted a number of Information Security Management System (ISMS) standards to be proposed and adopted globally, because if there’s one thing guaranteed to ensure that people stop ignoring long-winded advice documentation, it’s more long-winded advice documentation (I jest, of course).

Given these measures and the attention security gets, why then is the risk so great? Why is the frequency of breaches trending upwards? There are a number of factors at play, and unfortunately for the secure network most of them are proliferating. These include:

  • The growing penetration of internet use into daily lives.
  • The increasing number of devices which could be used in the office environment.
  • The development/support cycle of devices brought into the office.

It’s not controversial to say that internet use is increasing. The number of connected users grows year on year, and new sites spring up, along with new devices, modules, themes, scripts and so on. The average smartphone now has more computational power than was required to send people to the moon, and those smartphones are in the hands of nine-year-olds.

Bring Your Own… headache for network admins

We’ve got tablets in schools; even our TVs are now network capable. For every person joining a network, a myriad of networked devices could join with them. As those devices move between networks, they are exposed to different security practices and standards. Their use changes too.

The BYOD concept is really a modern one (computers being less portable the further back you go, as a rule), but the problems it engenders are the same as they were in the days of Elk Cloner: things of uncertain provenance being brought in from outside and placed into a secure environment. Floppy disks became USB drives. Secured company laptops became personal ones, which in turn became tablets and phones. All are capable of harbouring infections, or of opening windows to the outside world.


No matter how well behaved your employees and their devices are whilst attached to the company network, come clocking-off time those devices become social tools, visiting unvetted sites and installing who knows what. How secure is the state-of-the-art network, firewalled away from direct attacks, when the compromise vector walks through the door with a staff pass and a smartphone? “OMG BRO. JUST UPLOADED MALWARE 2 NETWRK. OOPS, LOL”.

This wouldn’t be so bad (OK, it’s pretty bad) if everything were patched up to date and protected from exploits in the wild, but of course that’s not the case. Smartphones have an average lifecycle of about 18 months. Tablets are kept around for longer. Nearly one in every 100 Android devices in the wild is still running version 2.2 (Froyo). Software installed on those devices has to be compatible with that OS. Older versions mean more discovered vulnerabilities. Even if those vulnerabilities are known about, there’s no guarantee the employee will have secured against them.

These are not zero-day compromises, more like four-year ones. On top of that, there’s all the ancillary software and all the different builds. Samsung, HTC and the like all have their own layers on top of the core OS and install their own privileged apps, and as the number of connected users grows, so does the number of organisations seeking to make a business of supplying hardware.


Now you can pick up a tablet with proprietary software from your supermarket. All of that variant hardware and software is passing through and attaching to the corporate network, the security of which is vital to the survival of the business.
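As a sketch of how an admin might push back on that fragmentation, here’s a hedged example of a BYOD admission check. It assumes the network can learn a device’s OS name and version at connection time (via an MDM agent, say); the platform names and the minimum-version floor are invented for illustration.

```python
# A minimal sketch of a BYOD admission check, assuming the network can
# learn each device's OS name and version at connection time (e.g. via
# an MDM agent). The minimum-version floor is invented for illustration.

MINIMUM_OS_VERSION = {
    "android": (4, 4),   # hypothetical floor: keeps Froyo-era devices out
    "ios": (7, 0),
}

def parse_version(version: str) -> tuple:
    # "2.2.1" -> (2, 2, 1), so versions compare numerically, not as strings.
    return tuple(int(part) for part in version.split("."))

def admit(os_name: str, os_version: str) -> bool:
    """Return True only if the device meets the minimum patch floor."""
    floor = MINIMUM_OS_VERSION.get(os_name.lower())
    if floor is None:
        return False   # unknown platform: deny by default
    return parse_version(os_version)[:len(floor)] >= floor

print(admit("Android", "2.2"))   # False - the four-year-old OS stays off the LAN
print(admit("Android", "5.0"))   # True
```

Denying by default when the platform is unrecognised is the important design choice here: with hardware this varied, an allow-list ages far more gracefully than a block-list.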

All this is assuming that the corporate network itself is secure, and you know the expression about assuming (it makes an ass of u and…Ming the Merciless?). If that were the case then the penetration testing industry wouldn’t be necessary. Yet still we see reports of open Wi-Fi hotspots being attached to the core of the corporate network. We hear of devices phoning home for instructions when they’re safely secreted inside the network. I remember a time when even the Sega Dreamcast got in on this action.

Sensible security

Of course, the development of BYOD and the associated risks it poses aren’t happening in a vacuum. The good news is that those organisations supplying the network infrastructure and those making the devices that will be brought in are working to minimise the threat, up to and including remote wiping of a BYOD device.

The problem reduces to a couple of core concepts for network security – permissions and confidence, the one springing from the other. A device’s permissions dictate what it can and can’t access: they give it a level of privilege in a system, but in doing so they prevent it from accessing or making changes to anything that lies beyond that level of privilege.

Think of it like handing the device a pass card. Swiping that card will allow it to go where it likes on the ground floor, but swiping at the doors to the lifts to the floors above produces a red light and angry buzzing sound. Consequently, you can be more confident of the sanctity of those floors above – parts of the network needing escalated permissions are secured away from those ground floor devices.
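In code, that pass card might look something like the following minimal sketch. The segment names and privilege levels are invented for illustration; a real network would derive them from its own topology and policy.

```python
# A minimal sketch of the pass-card analogy: each device carries a
# privilege level and each network segment demands one. The segment
# names and levels are invented for illustration.

from enum import IntEnum

class Privilege(IntEnum):
    GROUND_FLOOR = 1   # guest Wi-Fi, printers
    UPPER_FLOORS = 2   # file shares, internal apps
    SERVER_ROOM = 3    # core infrastructure

# What each part of the network demands of a visiting device.
SEGMENT_REQUIREMENTS = {
    "guest-wifi": Privilege.GROUND_FLOOR,
    "file-share": Privilege.UPPER_FLOORS,
    "core-switch": Privilege.SERVER_ROOM,
}

def swipe(device_level: Privilege, segment: str) -> bool:
    """Green light if the device's pass covers the segment, red otherwise."""
    return device_level >= SEGMENT_REQUIREMENTS[segment]

byod_phone = Privilege.GROUND_FLOOR
print(swipe(byod_phone, "guest-wifi"))   # True - free to roam the ground floor
print(swipe(byod_phone, "file-share"))   # False - angry buzzing sound
```

Escalating a device’s level then becomes an explicit, auditable decision, rather than a side effect of someone plugging something in.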

As long as you keep checking that permissions fit requirements, you can be relatively confident that the network is less likely to be compromised from within. If this process sounds vaguely familiar, there’s a reason for that. It’s the essence of RFC 1244 and 2196, our 1990s site security handbook.

To conclude: what, then, is the modern network admin to do? Compliance with the law goes without saying. Compliance with regulations and standards necessary to conduct business online should also be similarly obvious. Then come the harder decisions – who and what to give access and permissions to.

Where to draw the line between the safety of the business and the productivity of its workforce? To mix metaphors as seamlessly as a World Cup pundit, which greater good is the lesser of two evils? Personally, I’d suggest going back to the heady days of 1991 and reading the RFCs, which are as valuable and applicable today as they were back then.

