Changes in water temperature and water availability will lead to more power disruptions in the coming decades.
Ernie Hayden points out several often overlooked facts regarding electrical power generation in his Infrastructure Security blog.
Water is critical to electricity generation. Fueled power sources, whether combustion engines, coal-fired, nuclear, or other, generate heat that spins the generators. That heat has to go somewhere, and the primary coolant used in industrial power generation is water. Warmer water and diminished water flow reduce the ability to carry that heat away, which in turn reduces power generation.
Hayden points out a few examples of where warmer water or reduced water flow caused power degradation or complete shutdown:
* Millstone Nuclear Plant, Connecticut, 2012 — the natural cooling water source (Long Island Sound) became too warm (an ambient increase of almost 3 °F since the plant’s inception in 1975); the plant shut down for 12 days
* Browns Ferry Nuclear Plant, Alabama, 2011 — shut down multiple times because water from the Tennessee River was too warm
* Corette Power Plant, Montana, 2001 — shut down several times due to reduced water flow from the Yellowstone River
Thermoelectric power generating capacity is expected to drop by as much as 19% due to lack of cooling water. Further, incidents of extreme drops in generating capability, i.e., complete disruptions, are expected to almost triple.
Keep up your scan
An Information Risk Management ‘scan’ can be similar to a cockpit scan
Much like flying an aircraft, we have to ‘keep up our scan’ when analyzing these system risks. These are complex, interconnected systems. We are becoming increasingly concerned about cyberattacks on electrical and smart grid systems. That attention is good and overdue, but it is only part of the puzzle. We have to train ourselves to constantly scan the whole system — just because there’s a big fire in front of us doesn’t mean there isn’t another one burning somewhere else.
For want of a nail the shoe was lost.
For want of a shoe the horse was lost.
For want of a horse the rider was lost.
For want of a rider the message was lost.
For want of a message the battle was lost.
For want of a battle the kingdom was lost.
And all for the want of a horseshoe nail.
It turns out that the microprocessors that are on almost every SD memory device produced are accessible and fully programmable. This has significant implications for security and data assurance and is an archetypal example of what we are facing regarding new risks from the Internet of Things (IoT).
Two researchers, Sean Cross (aka xobs) and Andrew Huang (aka bunnie), presented their findings at the Chaos Communication Congress last week and simultaneously released a blog post on their discovery and approach.
SD devices are ubiquitous and found in smart phones, digital cameras, GPS devices, routers and a rapidly growing number of new devices. Some of these are soldered onto circuit boards and others, probably more familiar to us, are in removable memory devices developed by Kingston, SanDisk and a plethora of others.
Microprocessors built into memory devices?
According to the presentation and Huang’s blog post, the quality of memory in these chips varies greatly. There is variance within a single manufacturer, even a highly reputable one like SanDisk. There is also variance because there are many, many SD device manufacturers, and a lot of those don’t have the quality control or reputation of the larger companies. In his presentation, Huang gives a funny anecdote about touring a fabrication factory in China, complete with chickens running across the floor.
So, on any memory device produced, there are bad sections. Some devices have far more than others (e.g., up to 80% of the memory real estate can be unusable). That’s where the onboard microprocessors come in. These microprocessors run algorithms that identify bad memory blocks and perform complex error correction. Those processes sit between the actual physical memory and the data presented at the output of the card. On a good day, the data the user sees via the application on the smart phone, camera, or whatever is an error-corrected representation of the actual ones and zeros on the chip. Ideally, the data that ultimately reaches you as the user appears to be what you think your data should be.
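The actual firmware on these controllers is proprietary, but the idea can be sketched in a few lines. This is a toy model only; the class name, block layout, and mapping scheme are invented for illustration, and real controllers add wear leveling and error-correcting codes on top:

```python
# Toy model of what an SD card's onboard controller does: hide bad
# physical blocks behind a clean, contiguous logical address space.
class ToySDController:
    def __init__(self, physical_blocks, bad_blocks):
        self.storage = {i: b"" for i in range(physical_blocks)}
        self.bad = set(bad_blocks)
        # Build a logical->physical map that silently skips bad blocks.
        good = [i for i in range(physical_blocks) if i not in self.bad]
        self.l2p = dict(enumerate(good))

    def capacity(self):
        # The advertised capacity counts only the good blocks.
        return len(self.l2p)

    def write(self, logical, data):
        self.storage[self.l2p[logical]] = data

    def read(self, logical):
        # The host only ever sees the remapped, "perfect" view.
        return self.storage[self.l2p[logical]]

card = ToySDController(physical_blocks=10, bad_blocks={2, 5, 7})
card.write(0, b"hello")
print(card.capacity())  # 7 usable blocks out of 10 physical
print(card.read(0))     # b'hello'
```

The host never learns which physical blocks were bad, or that a remapping happened at all; that opacity is exactly what makes the controller such an interesting attack surface.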
“Mother nature’s propensity for entropy”
Huang explains that as business pressures demand smaller and smaller memory chips, the level of uncertainty of the data in the memory portion of the chip increases. This in turn increases the demand on error correction processes and the processor running them. He suggests that it is probably cheaper to make the chips in bulk (with wide quality variance) and put an error-correcting processor on each chip than it is to produce a chip and then fully test and profile it. In their research, it was not uncommon to find a chip labelled with a significantly smaller capacity than was actually available on the chip.
What Cross and Huang were able to do is actually access that microprocessor and run their own code on it. They did this by developing their own hardware hacking platform (Novena), physically cracking open a lot of devices (breaking plastic), lots of trial and error, and research on Internet search engines, particularly Google and China’s Baidu.
The fact that there is a way to get to the processes that manipulate the data sitting on the physical memory has significant implications. Ostensibly, these processors are there to support error correction so that your data appears to be what you want it to be. However, if (when) those processors are accessed by a bad guy:
* the bad guy can read your data while it’s on its way to you, i.e., a Man-in-the-Middle (MITM) attack
* the bad guy can modify your data in place, including modifying encryption keys to make them less secure
* the bad guy can modify your data while it’s on its way to you
* and others
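Continuing the toy-model theme from above, here is a minimal sketch of why a programmable controller in the data path is so dangerous. Everything here (class names, the payload, the tampering rule) is invented for illustration; the point is only that the host has no way to tell a faithful controller from a compromised one:

```python
# An honest controller just stores and returns blocks.
class Controller:
    def __init__(self):
        self.blocks = {}

    def write(self, addr, data):
        self.blocks[addr] = data

    def read(self, addr):
        return self.blocks[addr]

# A compromised controller tampers with data on the way out:
# a man-in-the-middle sitting between physical memory and the host.
class CompromisedController(Controller):
    def read(self, addr):
        data = super().read(addr)
        return data.replace(b"PAY alice", b"PAY mallory")

card = CompromisedController()
card.write(0, b"PAY alice 100")
print(card.read(0))  # b'PAY mallory 100' -- the host sees tampered data
```

From the host’s perspective, the read succeeded and the data looks perfectly valid; that is the whole problem.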
Illusion of perfect data
At about 7:28 in the video, Huang makes a statement that I believe captures much of the essence of the risk associated with IoT:
“… throw the algorithm in the controller and present the illusion of perfect data to the user …”
While this specific example of IoT vulnerability is eye-opening and scary, to me it showcases a much bigger issue. Namely, the data/information presented to us as users is not the same data/information that is stored somewhere else. That physically stored data has gone through many processes by the time it reaches the user in whatever software/hardware application they are using. Those intermediate processes where the data is being manipulated, ostensibly for error correction, are opportunities for misdirection and attack. Even when the processes are well-intentioned, we really don’t have a way of knowing what they are trying to do or whether they are succeeding at it.
What to do
So what can we do? Much like watching a news story on TV or reading it online, the data/information that we see and use has been highly manipulated — shortened, lengthened, cropped, edited, optimized, metadata’d, and on and on — from its original source.
Can we know all of the processes in all of the devices? Unlikely. Not to get too Matrix-y, but how can we know what is real? I’m not sure that we can. We can look at certifications of some components and products regarding these intermediate processes, similar to what the Department of Defense and other government agencies wrestle with regarding supply chain issues. However, there will be a cost that the consumer feels, and the competition from lower-cost non-certified products will be stiff. Maybe it’s in the public interest for products with certified components to be publicly (government) subsidized in some way?
An unpleasant pill to swallow is that some, probably substantial, portion of the solution is to accept that we simply don’t know. And knowing this, modify our behavior — choose what information we save, what we write down, what we communicate. It’s not the idyllic privacy and personal freedom place that we’d prefer, but I think at the end of the day, we won’t be able to get away from it.
I missed this when it came out last fall, but it is a step in the right direction. An IoT manufacturer has settled with the FTC for “failure to reasonably secure IP cameras against unauthorized access” for its cloud-connected IoT video cameras. According to the complaint, the FTC went after the manufacturer, TRENDnet, for several issues with its SecurView cameras, including:
* transmitted user login credentials in the clear
* stored user credentials in the clear
* failed to implement a process to actively monitor security vulnerability reports from third-party researchers
* lacked a security architecture review
* lacked security review and testing during software development
The FTC alleged that these, among others, contributed to putting users at “significant risk.”
Also, according to this article at ReadWrite.com, a security researcher discovered that one of the Internet domain names the manufacturer had listed as a secure host for video streams was not even registered. The researcher, Craig Heffner, now with Tactical Network Solutions, was able to acquire that domain name. Had he wanted to, he could then have picked up all of the video streams from users pointing their devices at that domain. Since users were advised by the manufacturer to use that domain name, they would have had no idea that their data could be streaming to someone else.
And the network-based video surveillance business is booming. Per the FTC complaint, the company’s IP (Internet Protocol) video camera sales were $6.3 million in 2010, $5.8 million in 2011, and $7.4 million in 2012. Remember, this is just one company’s products. There are many others, and the number of manufacturers will continue to grow.
Without such challenges by an agency like the FTC, there is little to motivate manufacturers to supply products that have been developed with reasonable security oversight in the process. That said, with tens of billions of IoT devices expected in the next few years, I don’t think that the FTC, as it stands, is going to be able to make a huge dent. It seems to me that an entirely new way of viewing and enforcing privacy law is required and I don’t see that coming in the near future.
As Brody observed in the movie Jaws in 1975, “We’re gonna need a bigger boat.”
A team of researchers has identified a way to extract full 4096-bit RSA decryption keys just by listening to (detecting) the sounds generated by a computer. Sound patterns can be associated with particular processes occurring on the computer. Of special interest are the unique sound patterns generated when ciphertext (text that has been encrypted) is being decrypted. The researchers claim that in less than an hour a decryption key can be identified by analyzing the sound patterns generated by the decryption of particular ciphertexts. Interestingly, this is not sound generated by fans, hard drives, or speakers, but rather sound generated by electronic components such as inductors and capacitors.
Handling interference
Most of the information-yielding acoustics occur above 10 kHz. Fan noise and typical room noise generally occur at lower frequencies and can be filtered out.
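That frequency separation is what makes the filtering tractable. As a rough sketch (not the researchers’ actual signal processing, which is far more sophisticated), a simple first-order high-pass filter already suppresses a low-frequency "fan" tone while letting a quiet above-10-kHz component through; the frequencies and amplitudes below are made up for illustration:

```python
import math

def high_pass(samples, cutoff_hz, sample_rate):
    """First-order RC-style high-pass filter: attenuates content
    below cutoff_hz, passes content above it."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

rate = 96_000                                  # sample rate, Hz
t = [i / rate for i in range(rate // 10)]      # 100 ms of audio
fan = [math.sin(2 * math.pi * 300 * x) for x in t]            # loud 300 Hz "fan" noise
leak = [0.1 * math.sin(2 * math.pi * 20_000 * x) for x in t]  # quiet 20 kHz "leak"
mixed = [a + b for a, b in zip(fan, leak)]

filtered = high_pass(mixed, cutoff_hz=10_000, sample_rate=rate)

# Skip the filter's start-up transient, then measure what's left.
residual = max(abs(s) for s in filtered[len(filtered) // 10:])
print(residual)  # far below the fan tone's original amplitude of ~1.0
```

The surviving signal is dominated by the 20 kHz component; in the real attack, that residual high-frequency band is what carries the key-dependent patterns.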
Depending on the environment, some keys can be extracted using a smart phone placed within approximately 30 cm of the target. Ranges of up to 4 meters have been achieved using specialized equipment such as parabolic microphones.
Different computers have different acoustic signatures, but distinct core computing operations, such as the HLT (CPU sleep), MUL (integer multiplication), and FMUL (floating-point multiplication) x86 instructions, can be identified in each.
“Magic-touch” attack
Another variant is what the authors call a magic-touch attack. In this scenario, instead of detecting patterns in sound coming from the computer, variations in the ground potential of the device are analyzed. As with the acoustic analysis, these voltage variations in the device’s ground can also be correlated to specific processing patterns. These ground-potential changes can be measured directly, or even by simply touching the chassis with one’s hand and then measuring the variation in body potential. Another approach is to measure the ground potential at the far end of a grounded cable, such as a VGA cable.
A couple of years ago, $300 might have bought you, if you’re a cybercriminal, the online credentials to access a bank account with maybe $7,000 in it. Today, $300 can get you access (username and password) to an account with well over $100,000 in it, according to research from Dell SecureWorks. Prices are dropping, which means that more bad guys can get into the game.
Speculation is that the price drop reflects a market glut following several large-scale data breaches over the past year. This condition is expected to last for some time.
Personal identities comprising information such as name, SSN, date of birth, etc. are known as ‘fullz’. European fullz seem to sell for more than US citizen fullz. Maybe there is less availability of stolen European identities?
I was Googling ‘fullz’ to find a couple of different definitions, but I kept coming across advertisements to sell fullz, complete with price lists. At first, I thought I had stumbled across some secret cybercriminal stash of online identities, but they’re everywhere. Here are snippets of some of the ones that I ran across (i.e., the first page of a Google search — I didn’t have to dig for these).
Joe Stewart with Dell SecureWorks and independent researcher David Shear also report these prices for purchasing botnets (networks of pre-compromised computers from which the buyer can deliver a wide variety of malware options):
1,000 bots = $20
5,000 bots = $90
10,000 bots = $160
15,000 bots = $250
Customers shopping for Distributed Denial of Service Attacks can expect rates similar to these:
DDoS attacks per hour = $3-$5
DDoS attacks per day = $90-$100
DDoS attacks per week = $400-$600
These prices kind of bum me out because they are llloowww.
We hear all of the time that cybercrime and cyberattacks are terrible and getting worse. These numbers, though, drive that point home — it’s just not hard to buy into this game.
The just-released 2013 ENISA (European Union Agency for Network and Information Security) Threat Landscape report is consistent with Mick Jagger’s prescient 1978 prediction of the state of cybersecurity, captured here:
Don’t you know the crime rate
Is going up, up, up, up, up
To live in this town you must be
Tough, tough, tough, tough, tough
A number of known threats continue, attack tools are increasingly sophisticated, more nation-states are becoming proficient with these tools, and the mobile ecosystem is a ripe new battlefield. On the upside, reporting and information sharing between organizations have increased, and vendor turnaround in response to new vulnerabilities is faster.
I can’t give it away on 7th avenue — cheap and plentiful devices
While known to be a factor for some time, a newcomer to the threat list is the Internet of Things (IoT): networked devices that move, control, sense, surveil, capture video and audio, and otherwise collect and share information from and with the environment. Development tools and production for these networked devices and systems are cheap, and billions more devices are expected in the next couple of years. (There’s even a conference preparing a road map for a trillion sensors over the next several years.)
Low security is the rule rather than exception for these devices and large amounts of data are being generated. The ENISA report says, “smart environments are considered the ultimate target for cyber criminals.” For example, preliminary work for phishing attacks can be augmented by gaining information about where a victim’s smart home is, picking up information leakage from their integrated media devices (Xbox One is doing more than just playing Halo), accessing what a user’s energy usage profile might be, etc. ENISA calls out the following top emerging threats in the Internet of Things space:
Other threats identified include:
* Differences among the many different smart appliances lead to large variances in the context and content of transmitted data, opening avenues for cybercriminals.
* Devices are built on embedded systems, some of which have not yet been widely deployed. Some of these embedded cores (of many different types and manufacturers) will have unknown and unpublished functions, and many will be difficult to maintain (keep patched). Look at the recent D-Link saga.
* Many devices built on embedded systems do not communicate operational status to the user, e.g., “I am working,” “I am actively collecting data on your environment,” “I am behaving erratically,” “I am off,” etc.
* Increased data creation leads to increased data storage amounts, data concentration, and corresponding increased bandwidth requirements/loads. Even a little bit of analysis can result in a significant increase in resources. Remember the basic database join (or even simpler, the Cartesian product)? You start with three elements in one list (A, B, C) but want to relate them to data in another list (D, E, F), so you relate them in a third table and get (AD, AE, AF, BD, BE, BF, CD, CE, CF). If each element uses, say, 1 MB of space, your storage and bandwidth requirement quadruples from 6 MB (A + B + C + D + E + F) to 24 MB (A + B + C + D + E + F + AD + AE + AF + BD + BE + BF + CD + CE + CF).
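The blow-up in that Cartesian-product example is easy to verify in a few lines (the 1 MB per element is the same illustrative figure as above, with each pair storing both of its elements):

```python
from itertools import product

MB_PER_ELEMENT = 1  # illustrative storage cost per element, in MB

left = ["A", "B", "C"]
right = ["D", "E", "F"]

# Cartesian product: every element of `left` paired with every element of `right`.
pairs = ["".join(p) for p in product(left, right)]
print(pairs)  # ['AD', 'AE', 'AF', 'BD', 'BE', 'BF', 'CD', 'CE', 'CF']

original = (len(left) + len(right)) * MB_PER_ELEMENT   # 6 MB
with_pairs = original + len(pairs) * 2 * MB_PER_ELEMENT  # each pair holds 2 elements
print(original, with_pairs)  # 6 24
```

Two three-element lists are a best case; with realistic table sizes the product term dominates everything else, which is exactly the bandwidth and storage pressure the report is warning about.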
For me, the other thing about Internet of Things (IoT) devices is that we often don’t really think of them as sensing, computing, analyzing, data collecting and transmitting devices. Many seem innocuous and, often, we don’t even know they’re there.
Life’s just a cocktail party
Finally, assuming that these IoT devices have already been vetted by somebody else (like the store we bought them from) is, unfortunately, flawed logic. Businesses large and small will be rushing to market with typically insecure devices, and they won’t be taking the time to analyze all of the ways their products could be misused. As consumers, we need to develop the skill of asking, ‘How could this device be misused?’ Most of us aren’t used to thinking like that. A family in Texas learned that the hard way a few months ago with their baby monitor. In general, if a device operates over the network and we can see it, then somebody else can see it too.
Shadoobie.
[chart images from http://www.enisa.europa.eu/activities/risk-management/evolving-threat-environment/enisa-threat-landscape-2013-overview-of-current-and-emerging-cyber-threats]
Like word distributions and company sizes, the frequency of use of particular passwords seems to follow a Zipf (power law) distribution. That is, a lot of people pick from a small common pool of passwords, and the number of people using a particular password drops off quickly once you step away from that common pool.
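A toy sketch of what that rank-frequency pattern looks like. The password "dump" below is entirely made up, with counts invented to mimic the shape of real leaks; under an ideal Zipf distribution, frequency is proportional to 1/rank, so rank times frequency stays roughly constant for the top entries:

```python
from collections import Counter

# Hypothetical password dump: a few dominant passwords plus a long
# tail of one-off unique passwords.
dump = (["123456"] * 1000 + ["password"] * 500 + ["12345678"] * 333
        + ["qwerty"] * 250 + ["abc123"] * 200
        + [f"unique-pw-{i}" for i in range(300)])

counts = Counter(dump)
ranked = counts.most_common()

# For Zipf-distributed data, rank * frequency is roughly constant.
for rank, (pw, freq) in enumerate(ranked[:5], start=1):
    print(rank, pw, freq, rank * freq)
```

Real leaked datasets are noisier than this, but the overall shape (a few passwords covering a huge share of accounts, then a rapid drop-off) is the same, and it is what makes small dictionary attacks so effective.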
Ponemon Institute has released its Risk of an Uncertain Security Strategy study, which surveyed over 2,000 IT professionals overseeing the security role in their respective organizations. The study identified seven risks that follow from uncertainty in security strategy:
1. Cyber attacks go undetected
2. Data breach root causes are not determined
3. Intelligence to stop exploits is not actionable
4. Cybersecurity is not a priority
5. Weak business case for investing in cyber security
6. Mobility and BYOD security ambiguity
7. Financial impact of cyber crime is unknown
Most respondents did not believe that compliance efforts enhanced security posture, agreeing with the survey statement that “compliance standards do not lead to a stronger security posture.”
Types of attacks that respondents reported are summarized as:
Notably, 31% of respondents said that no one person or role was in charge of establishing security priorities, and 58% said that management does not see cybersecurity as a significant risk. Finally, the study indicated that the further up one goes in an organization’s hierarchy, the more distant people are from understanding the organization’s cyber risk and related strategy. While not surprising, this is discouraging.
I keep coming back to the idea of force protection that the military had to develop 30 years ago. In response to world events, including attacks on bases and personnel, the military realized that it needed to explicitly move resources, funds, and capacity away from the operational (pointy) end and use them to protect and resource the rear if it were to be survivable and sustainable. Over time, I think the market will bear this out for most SMBs too. That is, I believe the businesses that remain successful over several years will tend to be the ones that have made some investment in cybersecurity and resilience, and the businesses that disappear after a short time will correlate highly with those that did not.
Even though these conclusions might be fairly obvious, it’s not going to be pretty to watch.
This infographic from Matt McKeon steps you through the decreased privacy in Facebook default settings between 2005 and 2010. (Click on the picture to get the years in between).