Monthly Archives: January 2014

In search of a denominator


Winston Churchill 1942

Winston Churchill is frequently quoted as saying,  ‘Democracy is the worst form of government, except for all the other ones.’  I feel a similar thing might be said about risk management: ‘Risk management is the worst form of predictive analysis for making IT, Information Management, and cybersecurity decisions, except for all the other ones.’

Because there is so much complexity and so many factors and systems to consider in IT, Information Management, and cybersecurity, we resort to risk management as a technique to help us make decisions. As we consider options and problem-solve, we can’t logically follow every possible approach and solution through and analyze each one on its relative merits.  There are simply way too many.  So we try to group things into buckets of like types and analyze them very generally with estimated likelihoods and estimated impacts.

We ‘resort to’ risk management methods because we know so little about the merits of future options; it is, in essence, a guessing game. That said, it’s the best game in town. We have to remind ourselves that the reason we are doing risk management in the first place is that we know so little.

In some ways, it is a system of reasoned guesses — we take our best stabs at the likelihood of events and at what we think their impacts might be. We accept the imprecision and approximations of risk management because we have no better alternative.

Approximate approximations

It gets worse. In the practice of managing IT and cybersecurity risk, we find that we have trouble doing the very things that produce the approximations meant to guide us. Our approximations are themselves approximations.

Managing risk is about establishing fractions. In theory we establish the domain over which we intend to manage risk, much like a denominator, and we consider various subsets of that domain as we group things into buckets for simpler analysis, the numerator.  The denominator might be all of the possible events in a particular domain, or it might be all of the pieces of equipment upon which we want to reflect, or maybe it’s all of the users or another measure.

When we manage risk, we want to do something like this (a toy scoring sketch follows the list):

  1. Identify the stuff, the domain — what’s the scope of things that we’re concerned about? How many things are we talking about? Where are these things? Count these things.
  2. Figure out how we want to quantify the risk of that stuff — how do we figure out the likelihood of an event, how do we want to quantify the impact?
  3. Do that work — quantify it, prioritize it
  4. Come up with a plan for mitigating the risks that we’ve identified, quantified, & prioritized
  5. Execute that mitigation (do that work)
  6. Measure the outcome (we did some work to make things less bad; were we successful at making things less bad?)
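
To make steps 2 and 3 concrete, here is a minimal sketch of the common likelihood × impact scoring. Every asset name and 1-5 score below is invented for illustration; real programs argue endlessly about the scales, but the mechanics look something like this:

```python
# A toy pass at steps 2-3: guess likelihood and impact on a 1-5 scale,
# score each item, and rank. Every asset and number here is invented.

assets = [
    {"name": "payroll server",   "likelihood": 2, "impact": 5},
    {"name": "public web app",   "likelihood": 4, "impact": 3},
    {"name": "employee laptops", "likelihood": 5, "impact": 2},
]

for asset in assets:
    # The crudest common quantification: risk = likelihood x impact.
    asset["score"] = asset["likelihood"] * asset["impact"]

# Step 4 (the mitigation plan) starts from this prioritized list.
for asset in sorted(assets, key=lambda a: a["score"], reverse=True):
    print(asset["name"], "->", asset["score"])
```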

The problem is that in this realm of increasingly complex problems, the first step, the one in which we establish the denominator, doesn’t get done. We’re having a heck of a time with #1 — identifying and counting the stuff. Often we’re surprised by this; in fact, sometimes we don’t even realize that we’re failing at step #1, precisely because it’s supposed to be the easy part of the process. The effects of BYOD, hundreds of new SaaS services that don’t require substantial capital investment, regular fluctuations in users, evolving and unclear organizational hierarchies, and other factors all contribute to this inability to establish the denominator. A robust denominator, one that fully establishes our domain, is missing.

Solar flares and cyberattacks

Unlike industries such as finance, shipping, and some health sectors, which can have hundreds of years of data to analyze, the IT, Information Management, and cybersecurity industries are very new and evolving very rapidly. Substantial historical data to assist with predictive analysis simply isn’t available.

However, even firms such as Lloyd’s of London must deal with emerging risks for which there is little to no historical data. Speaking on attempts to underwrite risk from solar flare damage (of all things), emerging risks manager Neil Smith says, “In the future types of cover may be developed but the challenge is the size of the risk and getting suitable data to be able to quantify it … The challenge for underwriters of any new risk is being able to quantify and price the risk. This typically requires loss data, but for new risks that doesn’t really exist.” While Smith is talking about analyzing solar flare risk, the point applies readily to the risks we have in cybersecurity and Information Management.

In an interview with Lloyds.com, Oliver Dlugosch, head of global markets at reinsurer Swiss Re Corporate Solutions, says that he asks the following questions when evaluating an emerging risk:

  1. Is the risk insurable?
  2. Does the company have the risk appetite for this particular emerging risk?
  3. Can the risk be quantified (particularly in the absence of large amounts of historical data)?

Whence the denominator?

So back to the missing denominator. What to do? How do we do our risk analysis?  Some options for addressing this missing denominator might be:

  1. Go no further until the denominator is established (not likely to be fruitful, as we may never get to do any analysis if we get hung up on step 1); OR
  2. Make the problem space very small so that we have a tractable denominator (but then our risk analysis is probably so narrow that it is not useful to us); OR
  3. Move forward with only a partial denominator and be ready to continually backfill that denominator as we learn more about our evolving environment.

I think there is something to approach #3. The downside is that we may feel we are departing from what we perceive as intellectual honesty, the scientific method, or possibly even ‘truthiness.’ However, it is better than #1 or #2, and better than doing nothing. I think we can square the intellectual honesty part with ourselves by 1) acknowledging that this approach will take more work (i.e., constantly updating the quantity and quality of the denominator) and 2) acknowledging that we’re going to know even less than we were hoping for.

So, more work to know less than we were hoping for. It doesn’t sound very appetizing, but as the world becomes increasingly complex, I think we have to accept that we will know less about the future and that this approach is our best shot.
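
As a rough illustration of approach #3 (my sketch, not a prescribed method), here is what a deliberately partial, continually backfilled denominator can look like; the discovery sources and asset names are hypothetical:

```python
# Approach #3 in miniature: keep a known-partial denominator and backfill
# it as discovery improves. Sources and asset names are hypothetical.

known_assets = set()

def ingest(source, discovered):
    """Fold a new discovery source into the denominator."""
    before = len(known_assets)
    known_assets.update(discovered)
    print(f"{source}: denominator grew {before} -> {len(known_assets)}")

ingest("network scan", {"host-01", "host-02", "host-03"})
ingest("SaaS audit",   {"crm-app", "hr-app"})
ingest("BYOD survey",  {"host-02", "phone-17"})  # overlap is harmless: it's a set

# Label every risk figure with the denominator it was computed over,
# because that denominator will be different next month.
print(f"risk figures computed over {len(known_assets)} known assets (partial)")
```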

 

“Not everything that can be counted counts. Not everything that counts can be counted.”

William Bruce Cameron (probably)

 

[Images: Wikimedia Commons]

Corporations better than government for your data? (!!??!!)


Hudson concurs with Ripley’s proposal

Do you remember, in the movie Aliens, Hudson’s (played by Bill Paxton) enthusiastic concurrence with Ripley’s (Sigourney Weaver) proposal, “I say we take off and nuke the entire site from orbit. It’s the only way to be sure”? I have similar enthusiasm for the point of this Guardian article — why in the world do we think that leaving our private data with megacorporations instead of with the government is better? I’m not advocating giving it to the government, but I don’t understand all of the excitement over the idea that leaving it with arguably less transparent corporations is better.

 

We need to be careful to not throw the baby out with the bath water.

 

Automation — another long tail


Descending frequency of different tasks moving left to right — and the who and what performs these tasks

Jason Kingdon illustrates what types of tasks are done by different parts of an organization and argues that more use can be made of “Software Robots” to handle the many different types of tasks that occur with low frequency, aka in the long tail — stretching out to the right. (That is, each particular type of task in this region occurs with low frequency, but there are many of these low-frequency task types.)

The chart shows how often particular tasks are carried out, ranked in descending order left to right, and who (or what) does the task — an organization’s core IT group, its support IT group, or end users.

  • Core IT has taken on tasks such as payroll, accounting, finance, and HR
  • Support IT has taken on roles such as CRM systems, business analytics, web support, and process management
  • People still staff call centers to work with customers, and people are still used to correct errors, handle invoices, and monitor regulation/compliance implementation.

Kingdon says that Software Robots mimic humans and work via the same interfaces that humans do, thereby making them forever compatible. That is, they don’t work at some deeper, abstracted systems logic, but rather via the same interface as people. Software Robots are trained, not programmed, and they work in teams to solve problems.
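
As a toy illustration of that idea (my example, not Kingdon’s product), here is what “working via the same interface as a person” can look like, using the pyautogui screen-automation library; the screenshot file and form contents are hypothetical:

```python
# A toy "software robot": drive the GUI through the same screen, mouse, and
# keyboard a person would use. pyautogui is a real library; the screenshot
# file and the form contents below are hypothetical.
import pyautogui

# Find the "New Invoice" button by its on-screen appearance, as a human would.
button = pyautogui.locateCenterOnScreen("new_invoice_button.png")

if button is not None:
    pyautogui.click(button)                          # click it like a person
    pyautogui.typewrite("ACME Corp", interval=0.05)  # vendor field
    pyautogui.press("tab")
    pyautogui.typewrite("1249.00", interval=0.05)    # amount field
    pyautogui.press("enter")                         # submit the form
```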

I think there is something to this and look forward to hearing more.

Highlights of Cisco 2014 Annual Security Report

cybercrime hierarchy

  • Report focuses on exploiting trust as thematic attack vector
  • Botnets are maturing capability & targeting significant Internet resources such as web hosting servers & DNS servers
  • Attacks on electronics manufacturing, agriculture, and mining are occurring at 6 times the rate of other industries
  • Spam trends downward, though malicious spam remains constant
  • Java at heart of over 90% of web exploits
  • Watering hole attacks continue targeting specific industries
  • 99% of all mobile malware targeted Android devices
  • Brute-force login attempts up 300% in early 2013
  • DDoS threat on the rise again, to include integrating DDoS with other criminal activity such as wire fraud
  • Shortage of security talent contributes to problem
  • Security organizations need data analysis capacity

Top themes for spam messages:

  1. Bank deposit/payment notifications
  2. Online product purchase
  3. Attached photo
  4. Shipping notices
  5. Online dating
  6. Taxes
  7. Facebook
  8. Gift card/voucher
  9. PayPal

Cisco 2014 Annual Security Report

Satellite communication systems vulnerable

Similar to other Industrial Control System (ICS) and Internet of Things (IoT) devices and systems, small satellite dishes called VSATs (very small aperture terminals) are also exposed to compromise. These systems are often used in critical infrastructure and by banks and news media, and are widely used in the maritime shipping industry. Some of the same problems exist with these systems as with other IoT and ICS devices:

  • default password in use
  • no password
  • unnecessary communications services turned on (e.g., telnet)

According to this Christian Science Monitor article, cybersecurity group IntelCrawler reports 10,500 of these devices exposed globally. Indeed, a quick Shodan search just now for ‘VSAT’ in the banner returns over 1,200 devices.
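
For the curious, a banner search like that can be scripted with Shodan’s Python library; a rough sketch, with a placeholder API key:

```python
# Reproducing the banner search with Shodan's Python library (a sketch;
# "YOUR_API_KEY" is a placeholder for a real Shodan API key).
import shodan

api = shodan.Shodan("YOUR_API_KEY")
try:
    results = api.search("VSAT")               # keyword search over banners
    print("devices matching 'VSAT':", results["total"])
    for match in results["matches"][:5]:       # peek at a few results
        print(match["ip_str"], match.get("port"), match["data"][:60])
except shodan.APIError as e:
    print("Shodan error:", e)
```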

The deployment of VSAT devices continues to grow rapidly, with 16% growth in 2012 (345,000 terminals sold) and 1.7 million sites in global service that year. For 2012-2016, market growth is expected to be almost 8% for the maritime market alone.

Clearly, another area in need of buttoning up.

[Image: Wikimedia Commons]

Don’t forget the water


Grand Coulee Dam

Changes in water temperature and water availability will lead to more power disruptions in the coming decades.

Ernie Hayden points out several often overlooked facts regarding electrical power generation in his Infrastructure Security blog.

Water is critical in electricity generation. Fueled power sources, whether combustion engines, coal-fired plants, nuclear reactors, or others, generate the heat that ultimately spins the generators, and that heat has to go somewhere. The primary coolant used in industrial power generation is water. Warmer water and diminished water flow reduce the ability to carry that heat away, which in turn reduces power generation.

Hayden points out a few examples of where warmer water or reduced water flow caused power degradation or complete shutdown:

  • Millstone Nuclear Plant, Connecticut, 2012 — natural cooling water source (Long Island Sound) became too warm (almost a 3 degree F ambient increase since the plant’s inception in 1975); plant shut down for 12 days
  • Browns Ferry Nuclear Plant, Alabama, 2011 — shut down multiple times because water from the Tennessee River was too warm
  • Corette Power Plant, Montana, 2001 — shut down several times due to reduced water flow from the Yellowstone River

Thermoelectric power generating capability is estimated to drop by as much as 19% due to lack of cooling water. Further, incidents of extreme drops in generation capability, i.e., complete disruption, are expected to almost triple.

Keep up your scan


An Information Risk Management ‘scan’ can be similar to a cockpit scan

Much like flying an aircraft, we have to ‘keep up our scan’ when analyzing these system risks. These are complex, interconnecting systems. We are becoming increasingly concerned about cyberattacks on electrical and smart grid systems. That attention is good and overdue, but it is only part of the puzzle. We have to train ourselves to constantly scan the whole system — just because there’s a big fire in front of us doesn’t mean that there’s not another one burning somewhere else.
 
 

For want of a nail the shoe was lost.
For want of a shoe the horse was lost.
For want of a horse the rider was lost.
For want of a rider the message was lost.
For want of a message the battle was lost.
For want of a battle the kingdom was lost.
And all for the want of a horseshoe nail.

 

[Image 1: Wikimedia Commons: Farwestern / Gregg M. Erickson. Image 2: author’s]

More ghosts in the machine

It turns out that the microprocessors on almost every SD memory device produced are accessible and fully programmable. This has significant implications for security and data assurance and is an archetypal example of the new risks we face from the Internet of Things (IoT).

Two researchers, Sean Cross, who goes by xobs, and Andrew Huang, aka bunnie, presented at the Chaos Communication Congress last week and simultaneously released a blog post on their discovery and approach.


SD devices are ubiquitous and found in smart phones, digital cameras, GPS devices, routers and a rapidly growing number of new devices.  Some of these are soldered onto circuit boards and others, probably more familiar to us, are in removable memory devices developed by Kingston, SanDisk and a plethora of others.

 

Microprocessors built into memory devices?

According to the presentation and bunnie’s blog post, the quality of memory in these chips varies greatly. There is variance within a single manufacturer, even a highly reputable one like SanDisk. There is also variance because there are many, many SD device manufacturers, and a lot of them don’t have the quality control or reputation of the larger companies. In his presentation, Huang gives a funny anecdote about a tour of a fabrication factory in China that was complete with chickens running across the floor.

So, on any memory device produced, there are bad sections. Some devices have far more than others; on some, up to 80% of the memory real estate is unusable. That’s where the onboard microprocessors come in. These microprocessors run algorithms that identify bad memory blocks and perform complex error correction. Those processes sit between the actual physical memory and the data presented at the output of the card. On a good day, the data that the user sees via the application on the smartphone, camera, or whatever is an error-corrected representation of the actual ones and zeros on the chip. Ideally, the data that ultimately reaches you as the user appears to be what you think your data should be.
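
As a toy model of one piece of that controller’s job (a simplification; real firmware also handles ECC, wear leveling, and more), here is a sketch of bad-block remapping, the process that lets flawed physical media present a clean logical surface:

```python
# Toy model of the controller's job: hide bad physical blocks behind a
# remapping table so the host sees only "perfect" logical blocks. Real
# controllers also do ECC and wear leveling; this shows remapping alone.

PHYSICAL_BLOCKS = 8
bad_blocks = {2, 5}  # as found by the controller's own tests (hypothetical)

# Build the logical -> physical map, skipping the bad blocks.
logical_to_physical = {}
logical = 0
for physical in range(PHYSICAL_BLOCKS):
    if physical not in bad_blocks:
        logical_to_physical[logical] = physical
        logical += 1

flash = ["?"] * PHYSICAL_BLOCKS  # the raw medium, flaws and all

def write_block(addr, data):
    flash[logical_to_physical[addr]] = data

def read_block(addr):
    # The host never sees the raw medium, only this remapped view.
    return flash[logical_to_physical[addr]]

write_block(0, "hello")
print(read_block(0))  # "hello" -- the illusion of clean media
print(f"{len(logical_to_physical)} usable of {PHYSICAL_BLOCKS} physical blocks")
```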

“Mother nature’s propensity for entropy”

Huang explains that as business pressures demand smaller and smaller memory boards, the level of uncertainty in the data stored in the memory portion of the chip increases. This in turn increases the demand on error correction processes and the processor running them. He suggests that it is probably cheaper to make the chips in bulk (with wide quality variance) and put an error-correcting processor on each chip than it is to produce a chip and then fully test and profile it. In their research, it was not uncommon to find a chip labelled with a significantly smaller capacity than was actually available on the chip.

What Cross and Huang were able to do is actually access that microprocessor and run their own code on it. They did this by developing their own hardware hacking platform (Novena), physically cracking open a lot of devices (breaking plastic), lots of trial and error, and research on Internet search sites, particularly Google and China’s Baidu.

The fact that there is a way to get to the processes that manipulate the data sitting on the physical memory has significant implications. Ostensibly, these processors are there to support error correction so that your data appears to be what you want it to be.  However, if (when) those processors are accessed by a bad guy:

  • the bad guy can read your data while it’s on its way to you, i.e., a Man-in-the-Middle (MITM) attack
  • the bad guy can modify your data in place — to include modifying encryption keys to make them less secure
  • the bad guy can modify your data while it’s on its way to you
  • others

Illusion of perfect data

At about 7:28 in the video, Huang makes a statement that I believe captures much of the essence of the risk associated with IoT:

“… throw the algorithm in the controller and present the illusion of perfect data to the user …”

While this specific example of IoT vulnerability is eye-opening and scary, to me it is a way of showcasing a much bigger issue. Namely, the data/information presented to us as a user is not the same data/information that is stored somewhere else. That physically stored data has gone through many processes by the time it reaches the user in whatever software/hardware application that they are using.  Those intermediate processes where the data is being manipulated, ostensibly for error correction, are opportunities for misdirection and attack.  Even when the processes are well-intentioned, we really don’t have a way of knowing what they are trying to do or if they are being successful at what they are trying to do.

What to do

So what can we do? Much like watching a news story on TV or reading it online, the data/information that we see and use has been highly manipulated — shortened, lengthened, cropped, edited, optimized, metadata’d, and on and on — from its original source.


Can we know all of the processes in all of the devices? Unlikely. Not to get too Matrix-y, but how can we know what is real? I’m not sure that we can. We can look to certifications of some components and products regarding these intermediate processes, similar to what the Department of Defense and other government agencies wrestle with regarding supply chain issues. However, there will be a cost that the consumer feels, and the competitive pressure from lower-cost, non-certified products will be high. Maybe it’s in the public interest that products with certified components be publicly (government) subsidized in some way?

An unpleasant pill to swallow is that some, probably substantial, portion of the solution is to accept that we simply don’t know.  And knowing this, modify our behavior — choose what information we save, what we write down, what we communicate. It’s not the idyllic privacy and personal freedom place that we’d prefer, but I think at the end of the day, we won’t be able to get away from it.

 

slides from conference 
Bunnie blog
Xobs blog
[MicroSD image from Bunnie blog]