Hacking the Internet of Things

A research botnet was created to detect and compromise unprotected embedded control devices.  These embedded control devices can be found in industrial control systems, medical devices, home appliances, and similar systems.  A botnet is created when multiple devices are attacked, compromised (aka ‘owned’), and subsequently controlled by the attacker.  This botnet was named the Carna Botnet after the Roman goddess of the protection of vital organs and health.

Carna botnet global distribution

  • Unprotected embedded devices (tiny computers that typically control things) detected at a rate of 1 every 5 minutes
  • Discovered millions of unprotected devices, eg with no or only trivial username & password
  • Carna Botnet now 1.2 million compromised devices
  • Of that 1.2 million, 420,000 have sufficient functionality & resources to continue to propagate the botnet
Distribution of hacked devices by country

The initial report is here.  Presentation here.

As the world continues to control more and more with networked embedded devices, ie the Internet of Things, we can expect a rapidly growing global platform for malicious behavior that we will need to attend to.

 

Internet Trends – Mary Meeker @ All Things Digital Conference Today

Some select slides from Mary Meeker/KPCB presentation at All Things Digital Conference:

China’s smartphone subscriber growth over 50% faster than US

Reaching for the phone 150 times a day …

Internet user growth – emerging markets dwarf others

A zettabyte?? (it’s the new terabyte — 1 zettabyte = 1 billion terabytes.) yowza.

Currently over 100 hours per minute of video being uploaded to YouTube alone

Some surprising data on online social sharing: emerging markets seem to share a lot more

Managing Small Business Rival Online Trash Talk

Per a recent Wall Street Journal article, rival small and medium-sized businesses with questionable ethics can hurt your business via negative online posts.  A 2011 survey found that of consumers surveyed:

  • 80% changed purchasing decisions based on negative online reviews
  • 87% did the same based on positive reviews
  • 69% did online research before buying
  • 64% read consumer/user reviews
  • 42% read articles and blogs

WSJ suggests the following:

  • Be alert for rival activity — negative messages are often the same or similar across multiple sites
  • Take your suspicions to site administrators
  • Once posts are gone, follow up on forums to remove any lingering suspicions (old posts can still show up on Google searches)
  • If any attacks were effective, supply a Q&A on your web site or social media page
  • When countering claims, keep cool — keep your tone helpful & neutral
  • Develop a presence in relevant online forums early, before an attack happens — this will give you credibility when you have to respond to attacks

More here.

Horseshoe Irony & Uncertainty

As we manage the online existence of our enterprises, we have discussed that we must rid ourselves of the illusion of total control, certainty, and predictability.  Ironically, however, we often must move forward as if we’re certain or near-certain of the outcome.  No shortage of irony, contradiction, and paradox here.

While playing horseshoes over Memorial Day Weekend with neighbors, I had a minor epiphany regarding uncertainty.  I realized that once I tossed my horseshoe and it landed in the other pit (a shallow three-sided box made up of sand, dirt, some tree roots, and the pin itself) that

I had no idea about where that horseshoe was going to bounce.

I had some control over the initial toss — I could usually get it in the box — but once it hit the ground, I had zero certainty about which direction it was heading. It might bury in the sand, bounce high on the dirt, or hit a tree root and fly off to parts unknown.

Unpredictable bounce

Unpredictable bounce

Because I couldn’t control that bounce, my best opportunity for winning was to land the horseshoe in the immediate vicinity of the pin as often as possible, creating the most chances for that arbitrary bounce to land on the pin.

The more often I placed that horseshoe near the pin on the first bounce, the more likely it was to land on, near, or around the pin for points or even the highly coveted ringer.

Here’s the irony.  One of the best ways to achieve this goal of landing the shoe near the pin as often as possible, is to aim for the pin every time.  That is, toss the horseshoe like you expect to get a ringer with every throw.

So, we’re simultaneously admitting to ourselves that we can’t control the outcome while proceeding as if we can control the outcome.  

This logical incongruity is precisely the sort of thing that can make addressing uncertainty and managing risk so challenging.

Similar things happen with the management of our information systems.  We want to earnestly move forward with objectives like 100% accurate inventory and 100% of devices on a configuration management plan.  Most of us know that’s not going to happen in practice with our limited resources.  However,

    • by choosing good management objectives (historically known as ‘controls’) 
    • executing earnestly towards those objectives while
    • thoughtfully managing resources,

we increase the chances of things going our way even while living in the middle of a world that we ultimately can’t control.

Those enterprises that do this well not only increase survivability, but also increase competitive advantage because they will use resources where they are most needed and not waste them where they don’t add value.

Ringer

So, yes, I’m trying to get a ringer with every throw, but at the same time, I know that is unlikely. But while shooting for the ringer every time, I increase the opportunity for that arbitrary bounce to go in a way that’s helpful to me.

Much like horseshoes, in Information Risk Management, I can’t control what is going to happen on every single IT asset, be it workstation, server, or router, but I can do things to increase the chances for things to move in a helpful way.

The opportunity for growth before us, then, is to have that self-awareness of genuinely moving towards a goal, simultaneously knowing that it is unlikely that we will reach it, and being ready to adjust when and if we don’t.

 

How do you create opportunity for taking advantage of uncertainty in your organization?

Inverting Sun Tzu – Know Yourself 1st

While Sun Tzu implores us in The Art of War to “know your enemy, know yourself” in order to win 100 battles, in information risk management for small and medium-sized businesses, we need to invert that priority: “Know yourself and then know your enemy.”

Sun Tzu

Actually, for those of us in resource-constrained organizations trying to protect ourselves and manage our information risk, we need to add a middle piece to that phrase, “Know your environment.”  Knowing ourselves, though, is the fundamental foundation.  So it looks something like this:

Fundamentals — know yourself

The trick, though, is that it’s not so easy to know ourselves.  In this time of BYOD and indeterminate interconnectivity, unknown devices in unknown configurations with unknown operating parameters enter the business every day.  Just getting a device inventory is difficult, and that’s really just counting.  So if counting is hard, we know we’ve got a challenge.

Knowing ourselves does not enable us to predict or control the future, but it does allow us to make better decisions when unpredictable things happen

As we work towards mastery of knowing ourselves, we then begin to endeavor to better understand the environment in which we work.  How rough is the online neighborhood in which we’re doing business? (pretty rough).  Who are we connected to? Who’s connected to us? Who might be trying to connect to us?

BYOD increases the complexity of knowing yourself …

The “Know your enemy” part is important, and intriguing, and sexy, but we can’t get there without first knowing ourselves and the environment in which we work and defend ourselves.

 

Do you know your company’s business objectives, its assets, its capabilities, its vulnerabilities?  What techniques do you use to know yourself, your business? Do you use a risk register to do this? Informal focus groups? Something else?

2012 Internet Crime Report

Some snippets from the Internet Crime Complaint Center 2012 Report released last week —

  • Almost 290,000 complaints received
  • Total Loss over $500 million
  • 8.3% increase in total reported loss since 2011
  • Average dollar loss for those reporting loss over $4,500
  • 43% of complainants in 40 – 59 age group
  • Top states by complaint count:
from Internet Crime Complaint Center 2012 Report

  • Top 5 countries filing complaints by count:

US
Canada
UK
Australia
India

  • FBI Impersonation E-mail Scams — over 14,000 complaints totaling over $4.6 million in losses (40% more men reported complaints than women)
  • Intimidation/Extortion Scams — over 8,000 complaints totaling almost $11 million in losses (60% more women reported complaints than men)
  • Hit man scams continue (where emailer portrays him/herself as a hit man demanding money not to execute contract) — over 1,300 complaints.  Uptick in social media use in executing this scam

More on Internet Crime Complaint Center here.

 

Lions and Tigers and Bears

In this age of exponentially growing information risk, we can become like Dorothy was early in her journey and focus only on the things that can go wrong. We can get so caught up in what can go wrong that we forget to take inventory of what needs to go right.

Lions and tigers and bears …

Over lunch recently, a friend of mine with a career in risk management shared a helpful perspective on this.  Instead of always approaching risk as trying to think of everything that can go wrong, think of what must go right first. That might sound like two sides of the same coin, but I think it is more than that. This approach helps to prioritize efforts and resources.

It’s easy to get caught up in trying to create an exhaustive list of everything that can go wrong. A problem with this is that it can:

      1. be overwhelming to the point of analysis paralysis, and
      2. tend to identify risk that may not be relevant to your situation. 

There are some risks that may not be immediately pertinent to you. For example, the latest specification for encryption for data at rest for DOD contractors might not be at the top of your list.  However, having an always-on internet connection so that you can make company website updates might be.

Take the hypothetical of a bike shop with three stores.  Some things that must go right for the owner might be:

  • Internet connection constant for running credit cards
  • Customer information (to include personally identifiable information) retained for billing and marketing and only accessible by authorized employees
  • Bookkeeper has secure connection to financials from outside the stores
  • Safe, secure workstations available 6:00 am – 6:00 pm for employees
  • 24/7 access to current inventory across all stores

These are some pretty basic requirements, but they help to prioritize need. By looking at these requirements for things to go right, what are things that can prevent this from happening? What’s the risk of loss of internet connection? Are we sure that customer information is available to only authorized employees?   What kind of connection is the bookkeeper using? Do the workstations have regular anti-virus updates? Are there policies/guidelines on workstation use by employees? If the computer with the store inventory fails, is there backup? How quickly does it need to be recovered?
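The bike-shop exercise above can be sketched as a simple data structure: start from what must go right, then attach to each requirement the questions that expose what could prevent it. This is only an illustrative sketch — the requirements and questions below are the hypothetical examples from the post, not a standard checklist.

```python
# Hypothetical bike-shop example: map each "must go right" requirement
# to the risk questions that threaten it.
must_go_right = {
    "Constant internet connection for running credit cards": [
        "What is the risk of losing the connection?",
        "Is there a fallback (cellular, paper slips)?",
    ],
    "Customer PII accessible only to authorized employees": [
        "Who currently has access, and is access reviewed?",
        "Is the data encrypted at rest?",
    ],
    "Bookkeeper has a secure remote connection to financials": [
        "What kind of connection (VPN, TLS) is in use?",
    ],
    "24/7 access to current inventory across all stores": [
        "If the inventory server fails, is there a backup?",
        "How quickly does it need to be recovered?",
    ],
}

for requirement, questions in must_go_right.items():
    print(requirement)
    for question in questions:
        print(f"  - {question}")
```

Walking this structure regularly keeps the analysis anchored to business objectives rather than to an open-ended list of everything that could go wrong.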

In spite of the risks of lions, and tigers, and bears, Dorothy was able to return to her mission and seek the Wizard. We must do the same and not lose sight of our business objectives in our analysis of the lions and tigers and bears.

 

Can you name 5 things in your organization that must go right for you to be successful? What vulnerabilities do these objectives have? What threats do they face?

 

Wrestling With Risk & Uncertainty

We have a love-hate relationship with risk and uncertainty.  We hate uncertainty when it interferes with getting things done, such as projects we are trying to complete, programs we are developing, or things that break.  We love it for the excitement it brings when we go to Las Vegas, enjoy a ball game, or watch a suspenseful movie.  There’s uncertainty and risk in all of these, but that is what makes them fun and attracts us to them.

When we think of risk and uncertainty in terms of our IT and Information Management projects, programs, and objectives, and the possibility of unpleasant surprises, we are less enamored with uncertainty.

Why do we have a hard time managing uncertainty and incorporating risk-based analysis into our day-to-day work? We’ve already talked some about resource constraints that can make information risk management challenging, particularly for small and medium-sized businesses.  However, there is a deeper issue that stems from over 300 years of human thought. In a nutshell, we just don’t think that way.  Why is this?  Who got this ball rolling?  Isaac Newton.

In 1687 Isaac Newton lowers uncertainty and increases predictability with the publication of Principia

Isaac Newton (1642 – 1727) published Principia in 1687. In this treatise, Newton presented his three laws of motion as well as the law of universal gravitation.  This was a huge step forward in human knowledge and capability. Application of these laws provided unprecedented predictive power. The laws split physics and philosophy into distinct professions and formed the basis of modern engineering. Without the predictive power provided by Principia (say that three times real fast), the industrial and information revolutions would not have occurred.

Now, for the sneaky part.  The increased ability to predict motion and other physical phenomena was so dramatic, and the rate of knowledge change so rapid, that the seeds were planted for the belief that we must be able to predict anything.  Further, the universe must be deterministic, because we need only apply Newton’s powerful laws and we can always figure out where we’re going to be.  Some refer to this uber-ordered, mechanical view as Newton’s Clock.

A little over a century later, Pierre-Simon Laplace (1749-1827) takes this a step further.

Pierre-Simon Laplace ups the ante in 1814 and says that we can predict anything if we know where we started

In his publication, A Philosophical Essay on Probabilities (1814), he states:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

In other words, Laplace is saying:

      • if you know where you started
      • if you have enough intellectual horsepower
      • then if you apply Newton’s laws
      • you’ll always be able to figure out where you’ll end up

Now, almost 200 years later, we’ve got enough experience to know that things don’t always work out as planned. That is, as much as one may desire certainty, you can’t always get what you want.

In 1969 Jagger-Richards revisit uncertainty & remind us that we can’t always get what we want

The problem is that we largely still think the Newtonian/Laplacian way.

Even after the practical and theoretical developments of the last century, eg Heisenberg’s Uncertainty Principle, the unknowability in Schrödinger’s Wave Equation, and Gödel’s Incompleteness Theorems, we still think the old school way on a day-to-day basis.

This gets reaffirmed in our historical IT service experience as well.  We are used to systems having strong dependencies.  For example, to set up a traditional content management system, there had to be supporting hardware, there had to be a functional OS on top of that, there had to be a functional database, and there had to be a current CMS framework with supporting languages.  This is a very linear, dependent approach and configuration. And it was appropriate for simpler systems of the past.

Complexity Rising

The issue now is that our systems are so complex, with ultimately indeterminate interconnectivity, and systems are changing so rapidly that we can’t manage them with hard dependency mapping.  Yet our basic instinct is to keep trying to do so.  This is where we wrestle with risk management. It is not natural for us.  Newton and Laplace still have a big hold on our thinking.

Our goal is to train ourselves to think in terms of risk, of uncertainty.  One way to do this is to force ourselves to regularly apply risk management techniques such as management of a Risk Register. Here we write down things that could happen — possibilities — and also estimate how likely we think they are to occur and estimate their impact to us. When we add to and regularly review this list, we begin to get familiar with thinking and planning in terms of uncertainty.
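A Risk Register like the one described above can be sketched as a small data structure. This is a minimal, illustrative sketch — the 1–5 likelihood/impact scales and the sample entries are assumptions for the example, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a simple risk register."""
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- a subjective estimate
    impact: int      # 1 (negligible) .. 5 (severe)   -- a subjective estimate

    @property
    def score(self) -> int:
        # Classic likelihood x impact ranking: crude, but it forces
        # us to write down explicit estimates instead of vague worry
        return self.likelihood * self.impact

register = [
    Risk("Loss of internet connection at a store", likelihood=3, impact=4),
    Risk("Unauthorized access to customer PII", likelihood=2, impact=5),
    Risk("Inventory server fails with stale backup", likelihood=2, impact=4),
]

# Regular review: walk the register in priority order
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:2d}] {risk.description}")
```

The value is less in the arithmetic than in the habit: adding entries and re-reviewing the list regularly trains us to think and plan in terms of uncertainty.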

We have to allow for and plan for uncertainty.  We need to create bounds for our systems as opposed to attempting explicit system control.  The days of deterministic control, or perceived control, are past.  Our path forward, of managing the complexity before us, is to learn to accept and manage uncertainty.

 

How do you think about uncertainty in your planning? Do you have examples of dependency management that you can no longer keep up with?

Developing Your Scan in Information Risk Management

Cockpit Scan

Learning to scan is one of the most important skills any pilot develops.  In flying, the scan is the act of keeping your eyes moving in a methodical, systematic, and smooth way so that you take in information as efficiently as possible, while leaving yourself mental bandwidth for processing that information.

While flying the aircraft, as you look out across the horizon, your scan might start near the left of the horizon and scan across to the right.  Then, upon approaching the right of the horizon, you drop your eyes down a little bit and start scanning back right to left and pick up some instrument readings along the way.  As you approach the left side, you might drop your eyes a little again, change directions, scanning left to right again.  You might repeat this one more time and then finally return back to where you started.  Then do it again.

The exact pattern doesn’t matter, but having a pattern and method does.

One possible cockpit scan

At first, this is all easier said than done.  When you’re learning to fly, so much is uncertain.  You really don’t know much.  You want to lower your uncertainty.  There is a real tendency to want to know everything about everything.  But you’ll never know everything about everything.  Slowly, a budding pilot begins to learn that.

When I was learning to fly and while mistakenly trying to know every detail about every flight parameter, ironically I would end up (unhelpfully) having what I now call my “instrument of the day”.  I would get so focused on one thing, say the engine power setting, that I would disregard most of everything else.  (This is why there are instructor pilots.)  On the subsequent flight, the instrument might have been the altimeter where I’d be thinking, “By God, I’m going to hold 3,000 feet no matter what! I’m going to own this altitude!” Now the fact that I wasn’t watching anything else and we may have been slowing to stall speed was lost on me because I was so intent on complete knowledge of that one indicator or flight parameter.  (This is why there are instructor pilots.)

To develop an effective scan, you slowly learn that you can’t know everything.  You learn that you have to work with partial information.

You have to live with uncertainty in order to fly.  

By accepting partial information about many things and then slowly integrating that information through repetitive, ongoing scans, you gain what you need to fly competently and safely.  Conversely, if you focus solely on one or two parameters and really ‘know’ them, you’ve given up your ability to have some knowledge on the other things that you need to fly.  In my example above with engine power setting, I could ‘know’ the heck out of what that power setting was, but that told me nothing about my airspeed, altitude, turn rate, etc.  It doesn’t even tell me if I’m right side up. While we can’t know everything about everything, we do want to know a little bit about a lot of things.

“Scan” even becomes a noun.  “Develop your scan.”  “Get back to your scan.” “Don’t let your scan break down.”

Your flying becomes better as your scan becomes better.

After a while, that scan becomes second nature.  As a pilot gains in experience, the trick then becomes to keep your scan moving even when something interesting is happening, eg some indicator is starting to look abnormal (and you might be starting to get a little nervous).  You still want to keep your scan moving and not fixate on any one parameter —

Just because something bad is happening in one place does not mean that something bad (or worse) is not happening somewhere else.

The pilot’s scan objectives are:

  • Continually scan
  • Don’t get overly distracted when anomalies/potential problems appear — keep your scan moving 
  • Don’t fixate on any one parameter — force yourself to work with partial information on that parameter so that you have bandwidth to collect & integrate information from other parameters and other resources
  • If disrupting your scan is unavoidable, return to your scan as soon as possible

Maintaining Your Scan In Information Risk Management

This applies to Information Risk Management as well.  We want to continually review our information system health, status, and indicators. If an indicator starts to appear abnormal, we want to take note but continue our scan.  Again, just because an indicator appears abnormal doesn’t preclude there being a problem, possibly bigger, somewhere else.

An Information Risk Management ‘scan’ can be similar to a cockpit scan

A great example is the recent rise in two-pronged information attacks against industry and government.  Increasingly, sophisticated attackers use an initial attack as a diversion and then launch a secondary attack while a company’s resources are tied up with the first.  A recent example of this approach is when hackers disrupted the online services of the Dutch bank ING Group and then followed with a phishing attack on ING banking customers.

This one-two punch is also an approach that we have seen terrorists use over the years where an initial bomb explosion is followed by a second bomb explosion in an attempt to target first-responders.

(As an aside, we know Boston Marathon-related phishing emails were received within minutes of news of the explosions.  I don’t know whether these were automated or manual phishing attacks, but either way, someone was waiting for a disaster or other big news event to exploit or leverage.)

I believe that we will continue to see more of these combination attacks.  Further, it is likely that not just one, but rather multiple, incidents will serve as distractions while the real damage is being done elsewhere.

To address this, we must continue to develop and hone our scan skills.  We must:

  • Develop the maturity and confidence to operate with partial information
  • Practice our scan, our methodical and continual monitoring, so that it becomes second nature to us
  • Have the presence of mind and resilience to return to our scan if disrupted
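The scan discipline in the bullets above can be sketched in code: visit every indicator in a fixed order, note anomalies as you go, but never stop the pass to fixate on one reading. The indicator names, readings, and thresholds below are made-up examples, not real monitoring metrics.

```python
# Illustrative round-robin "scan" over system health indicators.
# Each entry is (current reading, alert threshold) -- hypothetical values.
indicators = {
    "failed_logins_per_min": (12, 20),
    "outbound_traffic_mbps": (310, 500),
    "patch_backlog_days":    (45, 30),
    "disk_usage_pct":        (72, 90),
}

def scan(indicators):
    """Visit every indicator in a fixed order; note anomalies but keep moving."""
    anomalies = []
    for name, (reading, threshold) in indicators.items():
        if reading > threshold:
            # Note the anomaly and continue the scan --
            # don't fixate on any one parameter
            anomalies.append(name)
    # Only after completing the full pass do we turn to the anomaly list
    return anomalies

print(scan(indicators))  # -> ['patch_backlog_days']
```

The design point is that the anomaly list is reviewed only after the full pass completes: an abnormal reading in one place never prevents the scan from reaching the others.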

 

Do you regularly review your risk posture? What techniques do you deploy in your scan? What are your indicators of a successful scan?

 

Chuck Benson’s Information Risk Management Lectures on Coursera.org

Slide from lectures — Building an Information Risk Management Toolkit — Week 9

Check out my Information Risk Management lecture videos on Coursera.org — Building an Information Risk Management Toolkit — Week 9.  (Click on Video Lectures on left & then go to Week 9.  The video “Bounded Rationality” is a good place to start).