
Managing the risk of everything else (and there’s about to be more of everything else)

see me, feel me, touch me, heal me

In our organizations, whether companies, government, or education, when we talk about managing information risk, the conversation tends to be about desktops and laptops, web and application servers, and mobile devices like tablets and smartphones. Often, it's challenging enough to set aside time to talk about even those. However, there is a new, rapidly emerging risk that generally hasn't made it into the discussion yet. It's the everything else part.

The problem is that the everything else might become the biggest part.

 

Everything else

This everything else includes networked devices and systems that are generally not workstations, servers, or smartphones. It includes things like networked video cameras, HVAC and other building control systems, wearable computing like Google Glass, personal medical devices like glucose monitors and pacemakers, home/business security and energy management, and others. The popular term for these has become the Internet of Things (IoT), with some portions also referred to as Industrial Control Systems (ICS).

There are a couple of reasons for this lack of awareness. One is simply the relative newness of this sort of networked computing. It just hasn't been around that long in large numbers (but it is growing fast). Another reason is that it is hard to define. It doesn't fit well with historical descriptions of technology devices and systems. These devices and systems have attributes and issues unlike what we are used to.

Gotta name it to manage it

So what do we call this 'everything else', and how do we wrap our heads around it to assess the risk it brings to our organizations? As mentioned, the devices and systems in this group can have some unique attributes and issues. Beyond the unsatisfying approach of defining them by what they are not (workstations, application and infrastructure servers, and phones/tablets), here are some of their attributes:

  • difficult to patch/update software (and, more likely, many or most will never be patched)
  • inexpensive — there can be little barrier to entry to putting these devices/systems on our networks, e.g. easy-setup network cameras for $50 at your local drugstore
  • large variety/variability — many different types of devices from many different manufacturers with many different versions, another long tail
  • greater mystery to hardware/software provenance (where did they come from? how many different people/companies participated in the manufacture? who are they?)
  • large numbers of devices — because they're inexpensive, it's easy to deploy a lot of them. Difficult or impossible to feasibly count, much less inventory (see the sketch after this list)
  • identity — devices might not have the traditional notion of identity, such as having a device 'owner'
  • little precedent — not much in the way of helpful existing risk management models, and few policies or guidelines for their use
  • everywhere — out-ubiquitizes (you can quote me on that) the PC's famed Bill Gatesian ubiquity
  • most are not hidden behind corporate or other firewalls (see Shodan)
  • environmental sensing & interacting (Tommy, can you hear me?)
  • comprises a growing fraction of Industrial Control and Critical Infrastructure systems
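
To make the counting/inventory point above concrete, here is a minimal, hypothetical sketch of what even a naive device sweep looks like: probe a single assumed /24 subnet for a few ports commonly left open on cameras and other embedded gear (telnet, HTTP, RTSP). The subnet, the port list, and the names are illustrative assumptions, not a real tool or process. And notice what it doesn't tell you: even when a host answers, you still don't know what it is, who 'owns' it, or whether it can ever be patched.

```python
# Hypothetical sketch: naive sweep of one assumed subnet for ports commonly
# exposed by cheap networked devices. Illustrative only; values are made up.
import ipaddress
import socket
from concurrent.futures import ThreadPoolExecutor

SUBNET = "192.168.1.0/24"   # assumed example subnet
PORTS = [23, 80, 554]       # telnet, HTTP, RTSP -- common on embedded gear

def probe(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(subnet):
    """Map each responding host to the list of probed ports that answered."""
    hosts = [str(ip) for ip in ipaddress.ip_network(subnet).hosts()]
    results = {}
    with ThreadPoolExecutor(max_workers=64) as pool:
        futures = {(h, p): pool.submit(probe, h, p) for h in hosts for p in PORTS}
        for (host, port), fut in futures.items():
            if fut.result():
                results.setdefault(host, []).append(port)
    return results

if __name__ == "__main__":
    for host, open_ports in sorted(sweep(SUBNET).items()):
        print(f"{host}: responds on {open_ports} -- but what is it, and whose is it?")
```

And that's one subnet. Multiply by every network segment and every $50 camera someone plugs in, and the inventory problem in the list above becomes clear.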

So, after all that, I'm still kind of stuck with 'everything else' as a description at this point. But, clearly, that description won't last long. Another option, though it might have a slightly creepy quality, could be the phrase 'human operator independent' devices and systems. (But the acronym 'HOI' sounds a bit like Oy!, and that could be fun.)

I’m open to ideas here. Managing the risks associated with these devices and systems will continue to be elusive if it’s hard to even talk about them. If you’ve got ideas about language for this space, I’m all ears.

 

Managed risk is not the only risk


Just because we choose to measure these doesn’t mean that we choose what can go wrong

While we might reflect carefully on which risks to track and manage in our information systems, it is important to remember that just because we've chosen to measure or track certain risks doesn't mean that the other unmeasured and unmanaged risks have gone away. They're still there — we've just made a choice to look elsewhere.

In an aircraft, a choice has been made about which systems are monitored and then which aspects of those systems are monitored. In the aircraft that I flew in the Marine Corps some years ago, a CH-53 helicopter, the systems monitored were typically gearboxes, hydraulic systems, and engines. Then, within each of those systems, particular attributes were monitored — temperatures and pressures in gearboxes, pressures in hydraulic systems, and turbine speeds and temperatures in engines.

Good to know

These were all good things to know. They were reasonable choices of things to be aware of. I liked knowing that the pressure in the hydraulics system was not too high and not too low. I liked knowing that the engine temperature might be getting a little high, and that maybe I was climbing too fast or a load was heavier than I had calculated. Turbine speed in the ballpark of where it's supposed to be? Good to know too.

These were all good things to monitor, but they were not the only places that things could go wrong. And you couldn’t monitor everything — otherwise you’d spend all of your time monitoring systems when your primary job was to fly. (And there wouldn’t be enough room in the cockpit to have an indicator for everything!)

Subset of risks

So a selection of things to measure and monitor is made, and certain sensors and indicators are built into the aircraft. Of all of the things that can go wrong, the aircraft cockpit only gave me a subset of things to monitor. But again, this is by design, because the job was to get people and stuff from Point A to Point B. The job wasn't to attempt to identify, measure, monitor, and mitigate every conceivable risk. It's just not possible to do both.

Much like a movie director chooses what the viewer will see and not see in a movie scene, in our information systems we choose what risks we will be most aware of by adding them to our risk management plan. It is important for us to keep in mind, though, that the other risks with all of their individual and cumulative probabilities and impacts have not gone away. They’re still there — we’ve just chosen not to look at them.