
Can we manage what we own? — IoT in smart cities & institutions

The rate of growth of IoT devices and systems is rapidly outpacing the ability of an institution or city to manage those same devices and systems. The tools, capacities, and skill sets currently in place in institutions and cities were built and staffed for different information systems and technologies — centralized mail servers, file sharing, business applications, network infrastructure support, and the like. Some of these systems still exist within the enterprise and still need robust, effective support, while others have moved to the cloud. The important consideration is not to assume that toolsets developed for traditional enterprise implementations are appropriate or sufficient for IoT system implementations.

[Figure: things are increasing faster than the ability to manage those things]

Working from the outside in

Starting with the outer ring, the number of ‘things’ — the T in IoT — is rapidly growing within institutions and cities. From my perspective, an IoT ‘thing’ is a device that computes in some way, is networked, and interacts with its local environment in some way. Further, these systems may be acquired via non-traditional methods. For example, a city’s transportation department may seek and acquire a sensor, data aggregation, and analysis system for predictive maintenance on a particular roadway. This system might have been selected, procured, implemented, and subsequently managed independently of the organization’s traditional central IT organization and processes. Complex, high-data-producing systems are entering the institution/city from a variety of sources and with little formal vetting or analysis.

Can we even count them?

Because of the rapid growth of IoT devices and systems, in concert with alternative entry points into the city/institution, even counting (enumerating) these devices — which compute with growing ability and are networked — is increasingly difficult. This lack of countability is not in itself so bad; it’s just a fact of life. The trouble comes when we base our management systems on the assumption that we can count, inventory, much less manage, all of our devices.
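The enumeration gap can be made concrete with a small sketch: compare a maintained asset inventory against the device addresses actually observed on the network (say, from ARP tables or DHCP leases). All addresses here are hypothetical, and real enumeration is of course much messier than two sets.

```python
# Sketch: the gap between what we think we own and what we can actually see.
# "inventory" is the official asset list; "observed" is what the network shows.

def enumeration_gap(inventory: set, observed: set) -> dict:
    """Return devices we think we own but can't find, and devices we can
    see on the network but never formally recorded."""
    return {
        "missing": inventory - observed,   # inventoried, but not responding
        "unknown": observed - inventory,   # on the network, never vetted
    }

inventory = {"10.0.1.10", "10.0.1.11", "10.0.1.12"}
observed = {"10.0.1.11", "10.0.1.12", "10.0.1.99"}  # .99 arrived via a side door

gap = enumeration_gap(inventory, observed)
print(gap["missing"])  # {'10.0.1.10'}
print(gap["unknown"])  # {'10.0.1.99'}
```

The "unknown" bucket is the one that grows fastest when systems enter through non-traditional procurement paths.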

What do we know about the devices?

Do we have documentation and clarity of support for the tens, hundreds, thousands (or more) of devices? What do they do? How are they configured? Have we set a standard for configuration? How do we know that standard is being met? What services do we think should be running on the devices? Are those services indeed running on them? Are more services running than required? Are there processes for sampling and auditing those device services over the next 12–36 months? Or did we install them, or have them installed, and simply move on to the next thing?

We can borrow from the construction industry and ask for as-built documentation. What actually got installed? What are the documents that we have to work with to support this system? Drawings? IP addresses? Configuration documents for logins, passwords, open ports/services?
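One way to make the as-built idea operational is to treat each installed device as a structured record and flag the documentation we never received. The field names below are illustrative assumptions, not a standard.

```python
# Sketch: a minimal "as-built" record for an installed IoT device,
# with a check for fields that were never filled in.
from dataclasses import dataclass, fields

@dataclass
class AsBuiltRecord:
    device_id: str = ""
    ip_address: str = ""
    location: str = ""
    firmware_version: str = ""
    open_ports: tuple = ()       # ports/services left open at install time
    credential_owner: str = ""   # who holds the logins/passwords

    def missing_fields(self) -> list:
        """Names of fields still empty — documentation we never got."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# A typical handoff: we got an ID and an address, and little else.
record = AsBuiltRecord(device_id="sensor-042", ip_address="10.0.5.17")
print(record.missing_fields())
# ['location', 'firmware_version', 'open_ports', 'credential_owner']
```

Even a thin record like this turns "what actually got installed?" from a shrug into a checklist.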

What is manageable?

If we are in the fortunate position of being able to count these computing/networked/sensing devices with reasonable accuracy, and we know some (enough) things about them, then the next question is — do we have the resources — staffing, time, skill sets, opportunity cost, etc. — to actually support the devices? Suddenly, in smart cities, smart institutions, and smart campuses, we’re installing things — endpoints — in the field that may require regular updating (yearly, monthly, …), and this updating occurs at the boundary between the customer’s network, with its protocols and processes, and the vendor’s proposed system. Not all device updating — possibly a substantial portion — can be accomplished effectively from a remote location.

Another challenge is that the organizations charged with staffing, installing, and supporting these deployed IoT devices, such as smart energy meters or environmental monitoring systems, are often more accustomed to supporting machines that last for years or decades. Such facilities management organizations have naturally built their planning, repair, and preventative maintenance cycles around longer periods. For example, centrifugal fans, soft-start electric motors, and variable air volume (VAV) boxes in a building might all have projected lifespans of approximately 25 years.

Similarly, central IT organizations are generally not accustomed to running out into the field with trucks and ladders to support hundreds, thousands, or more computing, networked devices in a city or institution. So the question of who’s going to do the actual support work in the field is unresolved in terms of capacity, skill sets, and costs.

[Figure: device count vs. management ability]

Actually managing the things

So, if we have all of the above — and that subset gets smaller and smaller — have the decisions been made and priorities established to actually manage the devices? That is, to prioritize, risk-manage, and develop processes to manage the devices in practice? There’s a good chance that manageable things won’t actually be managed, due to lack of knowledge of owned things, competing priorities, and other factors.

On not managing the things

It is my opinion that we will not be able to manage all of the ‘things’ in the manner that we have historically managed networked, computing things. While that’s a change, that’s not all bad either. However we do have to realize, acknowledge, and adjust for the fact that we’re not managing all of these things like we thought we could. Thinking we’re managing something we’re not is the biggest risk.

We’re moving into a world of potentially greater benefit to the populace via technology and information systems. However, we will have to do the hard work of being thoughtful about it across multiple populations and realize that we’re bringing in new risks with some known — and unknown — consequences.

More ghosts in the machine

It turns out that the microprocessors that are on almost every SD memory device produced are accessible and fully programmable. This has significant implications for security and data assurance and is an archetypal example of what we are facing regarding new risks from the Internet of Things (IoT).

Two researchers, Sean Cross, who goes by xobs, and Andrew Huang, aka bunnie, presented at the Chaos Communication Congress last week and simultaneously released a blog post on their discovery and approach.

[Image: SD card]

SD devices are ubiquitous, found in smart phones, digital cameras, GPS devices, routers, and a rapidly growing number of new devices. Some are soldered onto circuit boards; others, probably more familiar to us, are the removable memory devices made by Kingston, SanDisk, and a plethora of others.


Microprocessors built into memory devices?

According to the presentation and bunnie’s blog post, the quality of memory in these chips varies greatly. There is variance within a single manufacturer, even a highly reputable one like SanDisk. There is also variance because there are many, many SD device manufacturers, and a lot of them don’t have the quality control or reputation of the larger companies. In his presentation, Huang gives a funny anecdote about touring a fabrication factory in China that was complete with chickens running across the floor.

So, on any memory device produced, there are bad sections. Some devices have many more than others — e.g., up to 80% of the memory real estate may be unusable. That’s where the onboard microprocessors come in. These microprocessors run algorithms that identify bad memory blocks and perform complex error correction. Those processes sit between the actual physical memory and the data that is presented at the output of the card. On a good day, the data that the user sees via the application on the smart phone, camera, or whatever is an error-corrected representation of the actual ones and zeros on the chip. Ideally, the data that ultimately reaches you as the user appears to be what you think your data should be.
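A toy model can illustrate the remapping idea conceptually: the controller hides bad physical blocks behind a logical-to-physical map, so the host only ever sees "perfect" storage. Real controllers also do ECC and wear leveling; this hypothetical sketch shows only the remapping, and the block counts are invented.

```python
# Toy model of an SD controller: logical block addresses are silently
# remapped around bad physical blocks, so the host never sees them.

class ToyController:
    def __init__(self, physical_blocks: int, bad_blocks: set):
        good = [b for b in range(physical_blocks) if b not in bad_blocks]
        # logical address -> good physical address
        self.mapping = {logical: phys for logical, phys in enumerate(good)}
        self.storage = {}

    @property
    def advertised_capacity(self) -> int:
        # The card advertises only the usable count, not the raw count.
        return len(self.mapping)

    def write(self, logical: int, data: bytes):
        self.storage[self.mapping[logical]] = data

    def read(self, logical: int) -> bytes:
        return self.storage[self.mapping[logical]]

# 10 raw blocks, 3 of them bad — the host sees a clean 7-block device.
card = ToyController(physical_blocks=10, bad_blocks={2, 5, 6})
card.write(0, b"hello")
print(card.advertised_capacity)  # 7
print(card.read(0))              # b'hello'
```

Note that from the host’s side there is no way to tell how much remapping is going on underneath — which is exactly the point.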

“Mother nature’s propensity for entropy”

Huang explains that as business pressures demand smaller and smaller memory boards, the level of uncertainty of the data in the memory portion of the chip increases. This in turn increases the demand on error correction processes and the processor running them. He suggests that it is probably cheaper to make the chips in bulk (with wide quality variance) and put an error-correcting processor on each chip than it is to produce a chip and then fully test and profile it. In their research, it was not uncommon to find a chip labelled with significantly smaller capacity than that available on the actual chip.

What Cross and Huang were able to do is access that microprocessor and run their own code on it. They did this by developing their own hardware hacking platform (Novena), physically cracking open a lot of devices (breaking plastic), lots of trial and error, and research on Internet search engines, particularly Google and China’s Baidu.

The fact that there is a way to get to the processes that manipulate the data sitting on the physical memory has significant implications. Ostensibly, these processors are there to support error correction so that your data appears to be what you want it to be. However, if (when) those processors are accessed by a bad guy:

  • the bad guy can read your data while it’s on its way to you, i.e., a Man-in-the-Middle (MITM) attack
  • the bad guy can modify your data in place — including modifying encryption keys to make them less secure
  • the bad guy can modify your data while it’s on its way to you
  • other attacks
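The attack classes above can be sketched in a few lines: the same in-path controller that is supposed to fix errors can silently alter data on its way back to the host. This is entirely hypothetical illustration code, not the researchers' actual exploit.

```python
# Sketch: an in-path storage controller that tampers with reads.
# The host wrote one thing; a compromised controller returns another.

class HonestController:
    def __init__(self):
        self.storage = {}
    def write(self, addr: int, data: bytes):
        self.storage[addr] = data
    def read(self, addr: int) -> bytes:
        return self.storage[addr]

class CompromisedController(HonestController):
    def read(self, addr: int) -> bytes:
        data = super().read(addr)
        # Man-in-the-middle: silently swap anything that looks like
        # key material for a weaker value of the same length.
        return data.replace(b"strong-key", b"weak-key00")

honest, evil = HonestController(), CompromisedController()
for c in (honest, evil):
    c.write(0, b"strong-key")

print(honest.read(0))  # b'strong-key'
print(evil.read(0))    # b'weak-key00' — same write, different read
```

The host has no visibility into which `read` it is talking to; both controllers present the same interface.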

Illusion of perfect data

At about 7:28 in the video, Huang makes a statement that I believe captures much of the essence of the risk associated with IoT:

“… throw the algorithm in the controller and present the illusion of perfect data to the user …”

While this specific example of IoT vulnerability is eye-opening and scary, to me it showcases a much bigger issue: the data/information presented to us as users is not the same data/information that is stored somewhere else. That physically stored data has gone through many processes by the time it reaches the user in whatever software/hardware application they are using. Those intermediate processes, where the data is being manipulated, ostensibly for error correction, are opportunities for misdirection and attack. Even when the processes are well-intentioned, we really don’t have a way of knowing what they are trying to do or whether they are succeeding at it.

What to do

So what can we do? Much like a news story watched on TV or read online, the data/information that we see and use has been highly manipulated — shortened, lengthened, cropped, edited, optimized, metadata’d, and on and on — from its original source.


Can we know all of the processes in all of the devices? Unlikely. Not to get too Matrix-y, but how can we know what is real? I’m not sure that we can. I think we can look to certifications of some components and products regarding these intermediate processes, similar to what the Department of Defense and other government agencies wrestle with regarding supply chain issues. However, there will be a cost that the consumer feels, and the competitive pressure from lower-cost, non-certified products will be high. Maybe it’s in the public interest that products with certified components be publicly (government) subsidized in some way?

An unpleasant pill to swallow is that some, probably substantial, portion of the solution is to accept that we simply don’t know. And, knowing this, to modify our behavior — to choose what information we save, what we write down, what we communicate. It’s not the idyllic privacy and personal freedom place that we’d prefer, but I think, at the end of the day, we won’t be able to get away from it.


slides from conference 
Bunnie blog
Xobs blog
[MicroSD image from Bunnie blog]


Risk Managing Residual Old-School Devices

I’ve encountered the risk discussed in this Forbes article more than once when taking over an organization and doing an initial information risk assessment. People tend to forget that these embedded devices can have simple or full-fledged Linux distributions (or other OSes) in the firmware. Also, the default ports that are left open can be eye-opening.
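A first-pass triage of those forgotten devices can be as simple as checking scan results against a list of risky legacy defaults, such as Telnet and unauthenticated serial-over-TCP consoles. The port list and scan data below are illustrative assumptions, not taken from the article.

```python
# Sketch: flag hosts exposing risky legacy services, given port-scan
# results. Port-to-service mapping here is a simplified assumption.

RISKY_DEFAULTS = {
    23: "telnet",
    2323: "telnet-alt",
    9100: "raw-print",
    4001: "serial-over-tcp",
}

def flag_risky(scan: dict) -> dict:
    """Map each host to the risky services it exposes; clean hosts omitted."""
    return {
        host: [RISKY_DEFAULTS[p] for p in ports if p in RISKY_DEFAULTS]
        for host, ports in scan.items()
        if any(p in RISKY_DEFAULTS for p in ports)
    }

scan = {"10.0.9.4": [22, 443], "10.0.9.7": [23, 4001]}
print(flag_risky(scan))  # {'10.0.9.7': ['telnet', 'serial-over-tcp']}
```

In practice the scan input would come from a tool like nmap; the point is that even a crude check surfaces the old-school devices nobody remembered owning.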

An image from H.D. Moore’s presentation on serial server security vulnerabilities, showing an oil and gas infrastructure setup networked with serial port connections. (via Forbes.com)

Researcher’s Serial Port Scans Find More Than 100,000 Hackable Devices, Including Traffic Lights And Fuel Pumps