Category Archives: IT Risk Management

Shodan creator opens up tools and services to higher ed

[Image: Beecham Research Internet of Things. Credit: Cisco/Beecham]

The Shodan database and website, famous for scanning the Internet and cataloging exposed Industrial Control Systems (ICS) and Internet of Things (IoT) devices and systems, is now providing free tools to educational institutions. Shodan creator John Matherly says that “by making the information about what is on their [universities] network more accessible they will start fixing/discussing some of the systemic issues.”

The .edu package includes over 100 export credits (for large data/report exports), access to the new Shodan Maps feature, which correlates results with geographic maps, and the Small Business API plan, which provides programmatic access to the data (vs. web access or exports).
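With the API plan, even a few lines of code can start answering the question, "what does the outside world see on our network?" Here's a minimal sketch using Shodan's official Python library; the API key and the org: query are placeholders, so substitute your own institution's name or a net: query for your address space.

    # Minimal sketch: survey what Shodan has indexed for a campus network.
    # Uses the official 'shodan' Python library (pip install shodan).
    import shodan

    API_KEY = "YOUR_EDU_API_KEY"  # placeholder, not a real key
    api = shodan.Shodan(API_KEY)

    # Search by organization; a CIDR query like 'net:192.0.2.0/24' also works.
    results = api.search('org:"Example University"')

    print("Services indexed by Shodan:", results['total'])
    for match in results['matches'][:10]:
        print(match['ip_str'], match['port'], match.get('product', 'unknown'))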

Higher ed faces unique and substantial risks, due in part to intellectual property derived from research and to the Personally Identifiable Information (PII) of students, faculty, and staff. In fact, a recent report states that US higher education institutions are at higher risk of security breach than retail or healthcare. The FBI has documented multiple attack avenues on universities in its white paper, Higher Education and National Security: The Targeting of Sensitive, Proprietary and Classified Information on Campuses of Higher Education.

The openness, sharing, and knowledge-propagation mindset of universities can itself be a significant component of the risk they face.

Data breaches at universities have clear financial and reputational impacts. Reputation damage not only affects a university's ability to attract students, it also likely affects its ability to recruit and retain productive, highly visible faculty.

The combined realm of Industrial Control Systems and Internet of Things risk is a rapidly growing and little understood sector of exposure for universities. In addition to holding research data and intellectual property, PII from students, faculty, and staff, and PHI if the university has a medical facility, universities can also resemble small to medium sized cities. These ‘cities’ might provide electric, gas, and water services and run their own HVAC, fire alarm, building access, and other ICS/IoT systems. As in other organizations, these can provide substantial points of attack for malicious actors.

Use of tools such as Shodan to identify, analyze, prioritize, and develop mitigation plans is important for any higher education organization. Even if the resources are not immediately available to mitigate an identified risk, at least university leadership knows it is there and has the opportunity to weigh it along with all of the other risks that universities face. We can rest assured that bad guys, whatever their respective motivations, are looking at exposure and attack avenues at higher education institutions — higher ed institutions might as well have the same information as the bad guys.

Managing the risk of everything else (and there’s about to be more of everything else)

see me, feel me, touch me, heal me

When we talk about managing information risk in our organizations, whether companies, government, or education, it tends to be about desktops and laptops, web and application servers, and mobile devices like tablets and smartphones. Often, it's challenging enough to set aside time to talk about even those. However, there is a new, rapidly emerging risk that generally hasn't made it into the discussion yet. It's the everything else part.

The problem is that the everything else might become the biggest part.

 

Everything else

This everything else includes networked devices and systems that are generally not workstations, servers, and smart phones. It includes things like networked video cameras, HVAC and other building control, wearable computing like Google Glass, personal medical devices like glucose monitors and pacemakers, home/business security and energy management, and others. The popular term for these has become Internet of Things (IoT) with some portions also sometimes referred to as Industrial Control Systems (ICS).

There are a couple of reasons for this lack of awareness. One is simply the relative newness of this sort of networked computing. It just hasn't been around that long in large numbers (but it is growing fast). Another is that it is hard to define. It doesn't fit well with historical descriptions of technology devices and systems. These devices and systems have attributes and issues unlike what we are used to.

Gotta name it to manage it

So what do we call this ‘everything else’ and how do we wrap our heads around it to assess the risk it brings to our organizations? As mentioned, devices/systems in this group of everything else can have some unique attributes and issues. In addition to using the unsatisfying approach of defining these systems/devices by what they are not (workstations, application & infrastructure servers, and phones/tablets), here are some of the attributes of these devices and systems (a sketch of how they might be captured in an inventory follows the list):

  • difficult to patch/update software (and more likely, many or most will never be patched)
  • inexpensive — there can be little barrier to entry to putting these devices/systems on our networks, e.g., easy-setup network cameras for $50 at your local drugstore
  • large variety/variability — many different types of devices from many different manufacturers with many different versions, another long tail
  • greater mystery to hardware/software provenance (where did they come from? how many different people/companies participated in the manufacture? who are they?)
  • large numbers of devices — because they're inexpensive, it's easy to deploy a lot of them, making them difficult or impossible to feasibly count, much less inventory
  • identity — devices might not have the traditional notion of identity, such as having a device ‘owner’
  • little precedent — not much in the way of helpful existing risk management models, and few policies or guidelines for their use
  • everywhere — out-ubiquitizes (you can quote me on that) the PC's famed Bill Gatesian ubiquity
  • most are not hidden behind corporate or other firewalls (see Shodan)
  • environmental sensing & interacting (Tommy, can you hear me?)
  • comprise a growing fraction of Industrial Control and Critical Infrastructure systems
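To make that list a little more concrete, here's one illustrative way to capture these attributes in a lightweight inventory record, so the 'everything else' can at least be enumerated and roughly risk-ranked. Every field name here is hypothetical, not a standard.

    # Illustrative only: a minimal inventory record for the 'everything else'.
    from dataclasses import dataclass, field

    @dataclass
    class NetworkedThing:
        address: str                      # IP or MAC, if discoverable
        category: str                     # e.g. "camera", "HVAC", "badge reader"
        vendor: str = "unknown"           # provenance is often murky
        firmware_version: str = "unknown"
        patchable: bool = False           # many will never be patched
        behind_firewall: bool = False     # most are not (see Shodan)
        owner: str = "unassigned"         # the traditional 'owner' may not exist
        notes: list = field(default_factory=list)

    things = [
        NetworkedThing("198.51.100.23", "camera", vendor="Acme"),
        NetworkedThing("198.51.100.40", "HVAC controller", behind_firewall=True),
    ]

    # Even a partial inventory lets you surface the riskiest combination:
    exposed_unpatched = [t for t in things if not t.patchable and not t.behind_firewall]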

So, after all that, I'm still kind of stuck with ‘everything else’ as a description at this point. But, clearly, that description won't last long. Another option, though it might have a slightly creepy quality, could be ‘human operator independent’ devices and systems. (But the acronym ‘HOI’ sounds a bit like Oy! and that could be fun.)

I’m open to ideas here. Managing the risks associated with these devices and systems will continue to be elusive if it’s hard to even talk about them. If you’ve got ideas about language for this space, I’m all ears.

 

Managed risk is not the only risk

[Image: cockpit instrumentation]

Just because we choose to measure these doesn’t mean that we choose what can go wrong

While we might do careful reflection on what risks to track and manage in our information systems, it is important for us to remember that just because we’ve chosen to measure or track certain risks, that doesn’t mean that the other unmeasured and unmanaged risks have gone away. They’re still there — we’ve just made a choice to look elsewhere.

In an aircraft, a choice has been made about which systems are monitored and which aspects of those systems are monitored. In the aircraft that I flew in the Marine Corps some years ago, a CH-53 helicopter, the systems monitored were typically gearboxes, hydraulic systems, and engines. Within each of those systems, particular attributes were monitored: temperatures and pressures in gearboxes, pressures in hydraulic systems, and turbine speeds and temperatures in engines.

Good to know

These were all good things to know. They were reasonable choices of things to be aware of. I liked knowing that the pressure in the hydraulics system was not too high and not too low. I liked knowing that the engine temperature might be getting a little high and maybe I was climbing too fast or a load might have been heavier than I had calculated. Turbine speed in the ball park of where it’s supposed to be? Good to know too.

These were all good things to monitor, but they were not the only places that things could go wrong. And you couldn’t monitor everything — otherwise you’d spend all of your time monitoring systems when your primary job was to fly. (And there wouldn’t be enough room in the cockpit to have an indicator for everything!)

Subset of risks

So a selection of things to measure and monitor is made, and certain sensors and indicators are built into the aircraft. Of all of the things that can go wrong, the aircraft cockpit only gave me a subset of things to monitor. But again, this is by design, because the job was to get people and stuff from Point A to Point B. The job wasn't to identify, measure, monitor, and mitigate every conceivable risk. It's just not possible to do both.

Much like a movie director chooses what the viewer will see and not see in a movie scene, in our information systems we choose what risks we will be most aware of by adding them to our risk management plan. It is important for us to keep in mind, though, that the other risks with all of their individual and cumulative probabilities and impacts have not gone away. They’re still there — we’ve just chosen not to look at them.

Does trust scale?

In this age where scale is king; where government-sanctioned pension defaults occur; where disparities between executive compensation and line-worker pay continue to grow; and where some will shed trust for a few moments of attention, what does trust mean to us? Is there a limit to how large a business can grow and still be trusted, both internally (employee to business) and externally (business to customer)?

Many, if not most, of our information systems rely on trust. Prime examples are banking systems, healthcare systems, and Industrial Control Systems (ICS). We expect banking and healthcare systems to have technical protections in place to keep our information from ‘getting out’. We expect that the people who operate these systems won’t reveal our data or the secrets and mechanisms that protect them.

Similarly, critical infrastructure ICS, such as power generation and distribution systems, must deliver essential services to the public, government, and businesses. To prevent misuse, whether from ignorance or malicious intent, they must do so without revealing to all how it is done. Again, we expect there to be sufficient protective technologies in place and trusted people who, in turn, protect these systems.

The problem is that I’m not sure that trust scales at the same rate as other aspects of the business.

British anthropologist Robin Dunbar’s research suggests that the maximum number of stable relationships a person can maintain is in the ball park of 150. After that number, the ability to recognize faces, trust others in the organization, and other attributes of a stable group begin to roll off.
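As a back-of-the-envelope illustration (my arithmetic, not Dunbar's): the potential pairwise relationships in an organization grow roughly with the square of headcount, while the relationships its people can actually sustain grow only linearly. The crossover lands right around Dunbar's number.

    # Potential pairwise links grow as n*(n-1)/2, while sustainable links are
    # capped near n*150/2 (each link draws on the ~150-relationship capacity
    # of two people). The crossover sits near n = 151.
    for n in (50, 150, 1000, 10000):
        potential = n * (n - 1) // 2
        sustainable = n * 150 // 2
        status = "trust gap" if potential > sustainable else "within capacity"
        print(f"n={n:>6}: potential={potential:>12,} sustainable={sustainable:>10,} -> {status}")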

Exacerbating this numerical analysis are the recent phenomena mentioned above: pension defaults, unprecedented compensation disparities, and selling trust for attention. We don't trust our employers like we used to. That idealized 1950s corporate loyalty image is simply not there.

No data centers for trust

So as critical information systems such as healthcare, banking, and ICS scale to optimize efficiency and profit margins, while the trust those systems require doesn't scale with them, what does that mean?

It means there is a gap. There are no data centers for trust amongst people. The popular business model implies that trust scales as the business scales, but trust doesn’t scale that way, and then we’re surprised when things go awry.

I think it's reasonable to assert that in an environment of diminishing trust in business and corporations (society today), the likelihood goes up of one or more constituents violating that trust and possibly disclosing data or the secrets of the mechanisms that protect that data.

Can we fix it?

I don’t think so. It’s a pleasant thought and it’s tidy math, but it’s just that — pleasant and tidy and not real. However, the next best thing is to recognize and acknowledge this. Recognize and plan for the fact that the average trust level across 100 large businesses is probably measurably less than the average trust level across 100 small businesses.

With globalization and mingling of nationalities in a single business entity, there is talk of misplaced loyalties as a source of “insider threat” or other trust leakage or violation. That may be, but I don’t know that it’s worse than the changes in perception of loyalty in any one country stemming from changes in trust perception over the past couple of decades.

So what do we do — Resilience

It gets back to resilience. If we scale beyond a certain point, we’re going to incur more risk — so plan for it. Set aside resources to respond to data breach costs, reputation damage, and other unpleasantness. Or plan to stop scaling fairly early on. Businesses that choose this route are probably fairly atypical, but not unheard of.

We can’t control what happens to us, but we can plan for a little more arbitrariness and a few more surprises. This doesn’t mean the check is in the mail, but it increases the likelihood that our business can make it to another day.

A trash can, a credit card, & a trip to the computer store

“A trash can, credit card, and a trip to the computer store” is how Bruce Schneier recently described the software update process (patch management) for networked consumer devices, aka Internet of Things devices. This category of devices already includes home/small business routers and cable modems, and is quickly growing to include home energy management devices, home health devices and systems, and a plethora of automation devices and systems.

I believe he is spot on. There may be a few people who consistently download, reprogram, and reconfigure their devices, but I would estimate that it's well under 1%.

The problem of software updates/patch management for Internet of Things devices, both consumer and enterprise, is a significant issue on its own. The bigger issue, though, is that we largely assume we're going to manage these updates in a traditional way, such as Microsoft's famous Patch Tuesday. That simply won't happen, given the raw number of Internet of Things devices as well as the variability of device types.

The work before us then is twofold: 1) develop automated patch management solutions that can detect outdated software and update/patch it for at least a subset of the devices on the network, and 2) find a way to formally acknowledge and document the risk of the larger group of devices that remain forever unpatched.
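To make Option 1 a little more concrete, here's a minimal sketch of the detection step: compare each device's reported firmware version against a table of known-latest versions, and shunt everything unidentifiable into the Option 2 bucket. The inventory and version table here are hypothetical; real devices expose version information inconsistently (banners, SNMP, web interfaces), if at all.

    # Hypothetical known-latest versions, keyed by (vendor, model).
    LATEST = {
        ("Acme", "ipcam-2000"): "3.1.4",
        ("Contoso", "hvac-ctl"): "9.0.2",
    }

    # Hypothetical device inventory with whatever versions we could collect.
    inventory = [
        {"ip": "198.51.100.23", "vendor": "Acme", "model": "ipcam-2000", "version": "2.9.0"},
        {"ip": "198.51.100.40", "vendor": "Contoso", "model": "hvac-ctl", "version": "9.0.2"},
        {"ip": "198.51.100.77", "vendor": "NoName", "model": "unknown", "version": None},
    ]

    for dev in inventory:
        latest = LATEST.get((dev["vendor"], dev["model"]))
        if latest is None or dev["version"] is None:
            # Option 2 bucket: can't even assess it; document as standing risk.
            print(dev["ip"], "-> log as acknowledged, unpatched risk")
        elif dev["version"] != latest:
            print(dev["ip"], "-> outdated:", dev["version"], "vs", latest)
        else:
            print(dev["ip"], "-> current")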

Option 1 has a cost. Option 2 has a cost. I think it will turn out that wrapping our heads around Option 2, the risk, will prove to be more difficult than creating some automated patching solutions.

Use Heartbleed response to help profile your vendor relationships

[Image: Heartbleed logo]

Heartbleed vulnerability announced on 4/7/14

To paraphrase R.E.M., whether Heartbleed is the end of the world as we know it (11 on a scale of 10) or we feel fine (or at least not much different), how our vendors respond or don't respond gives us the opportunity to learn a little more about our relationship with them.

I’ve only seen one unsolicited vendor response that proactively addressed the Heartbleed discovery. In effect, the email said that they (the vendor) knew there was a newly identified vulnerability, they analyzed the risk for their particular product, took action on their analysis, and communicated the effort to their customers. This was great. But it was only one vendor.

Other vendors responded to questions that I had, but I had to reach out to them. And from some vendors, it has been crickets (whether there was an explicit Heartbleed vulnerability in their product/service or not).

Ostensibly, when we purchase a vendor’s product or service, we partner with them. They provide a critical asset or service and often an ongoing maintenance contract along with that product/service. The picture that we typically have in our heads is that we are partners; that we’re in it together. Generally, that’s also how the vendor wants us to feel about it.

What does it mean then, if we have little or no communication from our ‘partner’ when a major vulnerability such as Heartbleed is announced? Where this is the case, the partner concept breaks down. And if it breaks down here, where else might it break down?

Because of this, we can use the Heartbleed event as a mechanism to revisit how we view our vendor relationships. A simple table documenting each vendor's response to Heartbleed can give us broader and deeper perspective on those relationships.

[Table: vendor profile by Heartbleed response. Vendor A: proactive, unsolicited notification; Vendors C and Z: responded after outreach, with some delay; Vendor B: no response.]

For this example, because of their quick communication that did not require me to reach out, I might send a thank-you email to Vendor A to further tighten that relationship. Vendor C and Vendor Z are in the same ball park, but I might want to follow up on the delay. I'll definitely be keeping Vendor B's complete lack of response in mind the next time the sales guy calls.

Again, some vendor responses might be great. However, I think vendor and partner relationships aren't as tight as we may like to tell ourselves, and we can use vendor response to Heartbleed as an opportunity to reflect on that.

 

[Heartbleed image/logo: Creative Commons]

Some Heartbleed vendor notifications from SANS

Catastronomics

Mega-disruptive economic phenomena, or ‘catastronomics’, were the topic of a Centre for Risk Studies seminar at the University of Cambridge last month. Large-scale cyberattacks and logic bombs were included with other relatively-unlikely-but-high-impact events like flu pandemics, all-out war between China and Japan, large-scale bioterrorism, and other general unpleasantness.

So, we’ve got that going for us.

 

[Image: http://supremecourtjester.blogspot.com/2012/12/friday-final-weather-forecast-wear.html]