It turns out that the microprocessors that are on almost every SD memory device produced are accessible and fully programmable. This has significant implications for security and data assurance and is an archetypal example of what we are facing regarding new risks from the Internet of Things (IoT).
SD devices are ubiquitous and found in smart phones, digital cameras, GPS devices, routers and a rapidly growing number of new devices. Some of these are soldered onto circuit boards and others, probably more familiar to us, are in removable memory devices developed by Kingston, SanDisk and a plethora of others.
Microprocessors built into memory devices?
According to the presentation and Bunnie’s blog post, the quality of memory in these chips varies greatly. There is variance within a single manufacturer, even a highly reputable one like SanDisk, and there is variance across the market, because there are many, many SD device manufacturers and many of them lack the quality control or reputation of the larger companies. In his presentation, Huang gives a funny anecdote about touring a fabrication plant in China, complete with chickens running across the floor.
So, on any memory device produced, there are bad sections. Some devices have far more than others; in extreme cases, up to 80% of the memory real estate is unusable. That’s where the onboard microprocessors come in. These microprocessors run algorithms that identify bad memory blocks and perform complex error correction, and those processes sit between the actual physical memory and the data that is presented at the output of the card. On a good day, the data the user sees via the application on a smartphone, camera or whatever is an error-corrected representation of the actual ones and zeros on the chip: the data that ultimately reaches you appears to be what you think your data should be.
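To make that layering concrete, here is a purely illustrative toy (not the actual firmware, and all names are invented): a controller that hides bad physical blocks behind a logical-to-physical remap table and attaches a simple checksum to each block, so the host only ever sees the "good" portion of the medium.

```python
# Toy model of what an SD controller does: remap bad blocks and
# error-check reads. Real firmware uses far stronger ECC (e.g. BCH codes);
# this sketch only shows where the controller sits in the data path.

class ToyFlashController:
    BLOCK_SIZE = 16  # bytes per block (toy value)

    def __init__(self, physical_blocks, bad_blocks):
        # The raw medium: some physical blocks are unusable.
        self.medium = [bytearray(self.BLOCK_SIZE) for _ in range(physical_blocks)]
        self.checksums = [0] * physical_blocks
        good = [i for i in range(physical_blocks) if i not in bad_blocks]
        # The remap table is all the host ever sees: bad blocks vanish.
        self.remap = dict(enumerate(good))

    def capacity(self):
        # Advertised capacity counts only the good blocks.
        return len(self.remap) * self.BLOCK_SIZE

    def write(self, logical, data):
        phys = self.remap[logical]
        self.medium[phys][:len(data)] = data
        self.checksums[phys] = self._xor(self.medium[phys])

    def read(self, logical):
        phys = self.remap[logical]
        block = bytes(self.medium[phys])
        if self._xor(block) != self.checksums[phys]:
            raise IOError(f"uncorrectable error in physical block {phys}")
        return block

    @staticmethod
    def _xor(block):
        # Trivial XOR checksum standing in for real error correction.
        acc = 0
        for b in block:
            acc ^= b
        return acc
```

A card built from 10 physical blocks with 3 bad ones simply advertises 7 blocks of capacity; the host never learns the bad blocks existed.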
“Mother nature’s propensity for entropy”
Huang explains that as business pressures push flash memory toward smaller and smaller geometries, the level of uncertainty of the data in the memory portion of the chip increases. This in turn increases the demand on the error correction processes and the processor running them. He suggests that it is probably cheaper to make the chips in bulk (with wide quality variance) and put an error-correcting processor on each chip than it is to produce a chip and then fully test and profile it. In their research, Cross and Huang found it was not uncommon for a chip to be labelled with significantly less capacity than was actually present on it.
What Cross and Huang were able to do is actually access that microprocessor and run their own code on it. They did this by developing their own hardware hacking platform (Novena), physically cracking open a lot of devices (breaking plastic), a lot of trial and error, and research on Internet search engines, particularly Google and China’s Baidu.
The fact that there is a way to get to the processes that manipulate the data sitting on the physical memory has significant implications. Ostensibly, these processors are there to support error correction so that your data appears to be what you want it to be. However, if (when) those processors are accessed by a bad guy:
- the bad guy can read your data while it’s on its way to you, i.e. a man-in-the-middle (MITM) attack
- the bad guy can modify your data in place, including modifying encryption keys to make them weaker
- the bad guy can modify your data while it’s on its way to you
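As a purely illustrative sketch (all function names here are invented), the three attacks above amount to inserting a hostile transform into the controller’s read path, where the host has no way to notice:

```python
# Toy model of an SD card's read path; not real firmware.

def honest_read(stored_block: bytes) -> bytes:
    # A well-behaved controller returns the (error-corrected) data unchanged.
    return stored_block

def hostile_read(stored_block: bytes, exfiltrate) -> bytes:
    # MITM: a compromised controller can copy the data somewhere else first.
    exfiltrate(stored_block)
    # Modification (in place or in transit): it can also rewrite what the
    # host receives, e.g. swapping strong key material for weak material.
    return stored_block.replace(b"STRONG-KEY", b"WEAK-KEY!!")
```

The point is that both functions look identical from the host side: the application just issues a read and trusts whatever comes back.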
Illusion of perfect data
At about 7:28 in the video, Huang makes a statement that I believe captures much of the essence of the risk associated with IoT:
“… throw the algorithm in the controller and present the illusion of perfect data to the user …”
While this specific example of IoT vulnerability is eye-opening and scary, to me it showcases a much bigger issue. Namely, the data/information presented to us as users is not the same data/information that is stored somewhere else. That physically stored data has gone through many processes by the time it reaches the user, in whatever software/hardware application they are using. Those intermediate processes where the data is being manipulated, ostensibly for error correction, are opportunities for misdirection and attack. Even when the processes are well-intentioned, we really don’t have a way of knowing what they are trying to do or whether they are succeeding at it.
What to do
So what can we do? Much like a news story we watch on TV or read online, the data/information that we see and use has been heavily manipulated — shortened, lengthened, cropped, edited, optimized, metadata’d, and on and on — from its original source.
Can we know all of the processes in all of the devices? Unlikely. Not to get too Matrix-y, but how can we know what is real? I’m not sure that we can. We could look to certification of components and products with respect to these intermediate processes, similar to what the Department of Defense and other government agencies wrestle with in supply chain security. However, there will be a cost that the consumer feels for that, and the competitive pressure from cheaper, non-certified products will be intense. Maybe it’s in the public interest for products with certified components to be publicly (government) subsidized in some way?
An unpleasant pill to swallow is that some, probably substantial, portion of the solution is to accept that we simply don’t know. And knowing this, we modify our behavior: we choose what information we save, what we write down, what we communicate. It’s not the idyllic place for privacy and personal freedom that we’d prefer, but at the end of the day, I don’t think we’ll be able to get away from it.