Brian M. Waters’ Unencrypted

More Details are Needed About the Burlington Electric Hack

On Friday evening the Washington Post published a problematic story saying that a Vermont electric utility's computer systems had been compromised as part of a Russian hacking campaign called “Grizzly Steppe.” The piece cited anonymous U.S. government officials, but left out important details like how the intrusion had been linked to the Russian campaign and which agencies the cited officials might have worked for. The story has since been edited, but the original contained this gem:

It is unclear which utility reported the incident. Officials from two major Vermont utilities, Green Mountain Power and Burlington Electric, could not be immediately reached for comment Friday.

It's reasonable that the utilities might have been difficult to contact that evening. Many businesses wrap up around 4:30 in Chittenden County, especially on Fridays. (Especially before a holiday weekend.) So it sounds like the authors rang a few unattended phones and then just gave up on verification, instead publishing a story based solely on a tip from an anonymous source. But that story should have waited, because it was botched, badly.

Later that evening, April McCullum from the Free Press managed to get in touch with a spokesperson from Burlington Electric, who filled in some of the missing details: first, which utility had been compromised (you guessed it, BED), and second, some actual information about the breach itself. Apparently, whatever malware or compromise had been detected had been contained to a single laptop, which was not connected to the electric grid. (The original Post headline had contained language which could have been read to suggest otherwise, but it was later edited.)

Grizzly Steppe

A day before the Post jumped the gun with its scoop, US-CERT had released a short report about Grizzly Steppe, a Russian intelligence-linked hacking campaign that supposedly targeted the DNC this past summer. The report contained a few hundred indicators of compromise, or IoCs, intended to help network defenders identify and contain attacks associated with the campaign. The Post story mentioned the Grizzly Steppe report, and the tight timing suggested that the BED-Grizzly Steppe attribution might have been based on information therein.

However, the Post also wrote,

This week, officials from the Department of Homeland Security, FBI and the Office of the Director of National Intelligence shared the Grizzly Steppe malware code with executives from 16 sectors nationwide, including the financial, utility and transportation industries, a senior administration official said. Vermont utility officials identified the code within their operations and reported it to federal officials Friday, the official said.

It's unclear to me whether this means those industries received the same report, perhaps a few days earlier than everyone else, or if there is additional declassified information floating around the private sector. However, on New Year's Eve, a (not anonymous) official from DHS confirmed to Eric Geller of Politico that the attribution was in fact based on information in the public report:

It is particularly worth noting that it appears that indicators found on a single laptop appear to match those in the Joint Analysis Report released on the 29th of December.

Details matter

Feel free to skip this section if you don't care to read the technical stuff.

The reason this matters is that most of the IoCs in the Grizzly Steppe report are vague and lack context. An analyst given those indicators would have a hard time distinguishing a legitimate Russian intelligence operation from run-of-the-mill malware and normal network activity. (Many folks more respected than I have arrived at similar conclusions; their accounts are worth a read.)

The report and corresponding data from US-CERT contain one URL, 10 FQDNs (domain names), 876 IPv4 addresses, 24 hashes, and one Yara signature. However, only five of these indicators (discussed below) are listed with additional information that could suggest how an attacker might have used them, where in a network a defender might expect to find them, or what their presence might mean. The vast majority are listed beside vague comments like, “It is recommended that network administrators review traffic to/from the IP address to determine possible malicious activity.” Another 248 IP addresses fare slightly better, each listed with geolocation information to the country level (which is about as coarse-grained as IP geolocation data can get).

Network indicators

The list of network indicators is especially troubling. IP addresses and domain names are reusable and change hands over time, but the report provides no dating information. Without it, it is difficult to determine how an address or domain might have been used, or by whom, during the period when the indicator was relevant to Grizzly Steppe (because it's unclear which slice of the available historical data to examine).

Furthermore, around 21% of the listed addresses were in use by Tor exit nodes on the day the report was released, while a full 426 — very nearly 50% of them — have served as exits at one point or another since 2010, according to data from the Tor Project. (If you'd like to know how I arrived at these figures, please contact me.) This seriously calls into question the provenance of the Grizzly Steppe IP indicators. Tor is a public service. The Grizzly Steppe attackers are known to have used it for malicious purposes before, but many people use Tor for benign reasons. The nature of the protocol is that a given exit node could be in use simultaneously by a malicious APT and someone just trying to check Facebook from China. So even if the exits were added to the report because of legitimate Grizzly Steppe activity, they are almost completely useless to network defenders and contribute nothing to attribution.
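For the curious, a cross-reference like this can be sketched in a few lines of Python. This is my own illustration, not the author's actual methodology, and it assumes you've already collected the report's IP indicators and a historical list of Tor exit addresses; the addresses below are documentation-range placeholders, not real indicators:

```python
def exit_overlap(indicator_ips, exit_ips):
    """Return the indicator IPs that also appear in the Tor exit list."""
    return set(indicator_ips) & set(exit_ips)

# Placeholder data from the TEST-NET documentation ranges, not the report:
indicators = {"192.0.2.1", "198.51.100.7", "203.0.113.9"}
exits = {"198.51.100.7", "203.0.113.9", "203.0.113.10"}

overlap = exit_overlap(indicators, exits)
print("%d of %d indicators are known exits (%.0f%%)"
      % (len(overlap), len(indicators), 100.0 * len(overlap) / len(indicators)))
```

Run against the real data sets, a script along these lines would produce the kind of percentages quoted above.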

Hashes and signatures

The malware indicators, provided as MD5, SHA-1, and SHA-256 digests (and one Yara signature), also lack important contextual information. A few list file names (which are not terribly meaningful), and three hashes, discussed further below, have genuinely interesting metadata. But fully 17 of the 24 hashes are completely devoid of any contextual information whatsoever.
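To make the gap concrete: a bare hash is still mechanically checkable, it just tells a defender nothing about what a match means. A minimal sweep against a digest list might look like this (the indicator set below is a placeholder of my own, not a hash from the report):

```python
import hashlib

# Placeholder digest, NOT an indicator from the report
indicator_hashes = {
    hashlib.sha256(b"placeholder, not real malware").hexdigest(),
}

def sha256_of(path):
    """Stream a file through SHA-256 so large files needn't fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_indicated(path):
    return sha256_of(path) in indicator_hashes
```

A hit from a sweep like this says only “this exact file was on the list”; what the file does and how it got there is exactly the context the report fails to supply.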

Almost three quarters of the malware indicators, including the singular Yara signature, appear to be PHP web shells. These are implants that can be uploaded to a compromised web server to give an attacker persistent backdoor access. They're probably not what was found on the laptop at Burlington Electric.

The remaining handful of samples are Windows malware of various sorts. VirusTotal's AV engines detect most of these as generic Trojan droppers, with first-seen dates throughout 2016. Without more information, it's hard to draw conclusions from these.

Rob Graham pointed out that at least one of the web shells is a generic tool called P.A.S., which can be downloaded by anyone (try it!) and is supposedly popular among Russian and Ukrainian hackers. Given that, it seems like a stretch to attribute a P.A.S. web shell finding to a specific Russian intelligence operation, or even to Russian intelligence at all. The inclusion of a common tool like P.A.S. in the report is a red flag that calls into question the quality of the Windows indicators as well.

Some interesting IoCs

A small number of indicators are provided with contextual information beyond what is described above. One is another web shell; I won't describe it here. Two are hashes associated with the OnionDuke malware. One of these even comes with a short sample of network traffic observed between the malware and its command and control, or C2, server. Two IP addresses are listed as C2 servers for these malware samples.

The first time either of these samples was uploaded to VirusTotal was the same day the report was released. Although the extra metadata is far from a full teardown of the malware and its C2 infrastructure, I'm willing to give the government the benefit of the doubt and assume they're on to something with these IoCs.

What we need to find out

The problem is that most of the indicators in the US-CERT report are fairly generic and are presented with little context. All we know is that at least one of them, probably one of the Windows-based malware samples, was found on a computer at Burlington Electric. Two of those samples look interesting; the rest look benign. Importantly, we don't know which indicator was found on that laptop. Only a few people know that at this point. It's hard to ignore the very likely possibility that whatever was found falls into the benign category, something that's been floating around the Internet for months — the type of threat that IT departments deal with every single day.

And the American people deserve to know the answer, because this attribution will invariably be used to inform policy decisions in the future. Senator Leahy, Representative Welch, and Governor Shumlin have each released aggressive statements about the story. The attribution in the DNC case was used to justify sanctions and the expulsion of Russian diplomats from the U.S. Some are already citing this as a possible precursor to actual electrical grid hacking. And we will soon have a recklessly aggressive President who claims to know “a lot about hacking.” If you thought major news outlets mouthing the unverified claims of anonymous government employees was an issue in 2016, it will be a complete disaster under the new administration. This cyber thing isn't going away. Americans need to know what the hell is going on.

On Learning how to “Hack”

So I just returned from DEF CON 23, which, if you’re not already aware, is basically a days-long drunken hacker party disguised as a conference. I ended up spending a lot of time at the “Internet of Things” or “IoT” village, where attendees try their hand at compromising crappy Best Buy-grade consumer electronics like SOHO (that’s small office/home office) routers, storage devices, baby monitors, and other mostly useless crap like “smart fridges” and Internet-connected doorknobs (seriously - what!?!?!). Needless to say, the security situation for most of this stuff is not good. The slogan for the area was “SOHOpelessly broken,” and the organizers were encouraging random people to try and find 0-days in various products they had brought along. I managed to find one in a very small surveillance camera advertised as “Simple. Smart. Secure.” (Good one.)

The flaw was a shockingly obvious authentication bypass via cross-site request forgery; one so ridiculous that several friends later suggested it had to be some sort of intentional back door. (I think it has more to do with a market segment that prioritizes price and time-to-market above all else, but that’s another story.) I managed to recruit a few other helpful folks, and we set forth trying to escalate a simple authentication bypass to a root shell. We ended up with full, remotely exploitable, firmware flashing pwnage. (I’ll post the details here after we notify the vendor.)

During this session, a guy came over who seemed to think I was some kind of 1337 w!z0rd h4xn0r (I’m not), asking how I learned to “hack” and how the exploit worked. He proceeded to whip out a pen and paper and take notes, as if the details of an obscure vulnerability in a cheap home camera were something worth committing to memory.

I think by “hack” he meant “gain unintended access,” and for some reason he made me think about technical education and how I got here to begin with. There are a lot of books out there with black covers and titles like “[insert system here] Hacking Secrets” and “The [insert system here] Hacker’s Field Manual” that claim to teach these kinds of skills. I have a number of these on my shelf, and I haven’t read any of them. They are mostly tutorials on specific (albeit useful) tools, or documentation on well-known bugs that are probably mostly patched by now. A few explain generally-useful techniques like ARP cache poisoning, and none contain any information that could be used outside the scope of “hacking” or defending.

The problem with this approach is that, in focusing on specific techniques, these books simultaneously avoid explaining the total context of whatever system they’re about, and deprive the reader of any chance at creativity by giving away all the answers on how to “hack in.” Rather than reading a book on “hacking networks,” for example, consider reading a book on networks. You will probably re-invent a few well-known techniques yourself (and feel good about yourself for doing so); moreover, you’ll have a much deeper and more generally useful understanding when you’re done. Reading a tutorial on using yersinia to become the spanning tree root bridge might earn you some serious pwnage, but it doesn’t teach you anything about running an STP network or why STP is important.

More importantly, the ground-up approach teaches research skills and critical thinking, and can give learners the confidence they need to find new flaws that aren’t in any of those books yet.

So, future 1337 h4xn0r man I met at DEF CON, if you’re out there and reading this, head to your local university library, grab a crusty old book on network protocols or UNIX programming (or a shiny new one on web development), and get to work. You might even enjoy yourself.

Rules of DEF CON

  1. The first rule of DEF CON is don’t connect to the WiFi.
  2. The second rule of DEF CON is seriously, don’t connect to the WiFi.
  3. The third rule of DEF CON is no, shut the fuck up and listen to me, don’t connect to the goddamn WiFi.
  4. The fourth rule of DEF CON is drink! (Unless you don’t, then that’s cool too.)
  5. The fifth rule of DEF CON is go to the villages.
  6. The sixth rule of DEF CON is talk to all the other cool people and stay away from the computer.
  7. The seventh rule of DEF CON is be nice to the Goons. They are working their ass off so you can have a good time.
  8. The eighth rule of DEF CON is something about taking a shower.
  9. The ninth rule of DEF CON is bring a lot of money because Vegas is a fucking ripoff.
  10. The last rule of DEF CON is all rules are guidelines.

“Switches Get Stitches,”

or Spamming CAM Tables for Fun and Profit

Layer 2 switching is one of the most common services found on modern networks. For the most part, it just works; but as with any technology, there are pathological cases that can cause things to go awry. Most layer 2 attacks work by poisoning hosts’ ARP tables, which works well on simple networks but is easily detectable. Instead, an attacker can make limited progress by targeting a switch’s CAM table.

First, it might be a good idea to brush up on what a switch is and how it forwards traffic. Roughly, a switch is a piece of hardware or software that connects a number of physical or virtual ports (or “interfaces”) and forwards traffic between them at the data link layer. For example, if Alice, who is connected to port 1 and has MAC address aa:aa:aa:aa:aa:aa, wants to send a message to Bob, who is connected to port 2 and has MAC address bb:bb:bb:bb:bb:bb, she would craft an Ethernet frame with its destination field set to bb:bb:bb:bb:bb:bb and put it on the wire. When the switch receives the frame on port 1 (where Alice is connected), it inspects the destination field and looks up Bob’s MAC address in its CAM table. In this case, it finds a match: Bob’s MAC address corresponds to port 2. The switch puts the frame on the wire at port 2, and everybody is happy.

But how does the switch know Bob is on port 2? It keeps a table of mappings from MAC address to port numbers, and populates it using something called the “backwards learning algorithm.” You see, when Bob first connects, the switch doesn’t know who is there, because Bob hasn’t sent anything yet. However, when he sends his first frame, he will invariably set the source address to his own MAC (unless he is doing something evil). When the switch receives the frame, it will inspect the source field, notice that it doesn’t have a mapping for bb:bb:bb:bb:bb:bb, and create one that maps to port 2.

Of course, being physical objects, switches are limited to a finite number of mappings. Hardware switches use a technology called content-addressable memory, or CAM, which you can think of as a hardware-based associative array or hash table. As such, they do a great job of solving the MAC address-to-port problem illustrated above. (If you’re interested, their implementation has a lot in common with CPU caches. Think of CAM as a fully associative cache, but with MAC addresses where the tag bits would go, and port numbers instead of data blocks. On a lookup, the stored addresses are compared to the search key in parallel, and the corresponding port number is returned.) If that sounds a bit over your head, there’s one important thing to remember here: CAMs map a key to a single value; there can never be a situation in which the CAM table maps a MAC address to multiple different ports.
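The learning-and-flooding behavior is easy to model. Here’s a toy switch in a dozen lines of Python, using a plain dict in place of real CAM hardware (my own illustration, not code from any real switch OS):

```python
class ToySwitch:
    """Minimal model of a learning switch: a dict maps MAC -> port."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.cam = {}

    def receive(self, in_port, src, dst):
        """Return the list of ports the frame is forwarded out of."""
        # Backwards learning: bind the source MAC to the ingress port.
        # A dict key holds exactly one value, so a MAC can never map to
        # two ports at once, just like a real CAM.
        self.cam[src] = in_port
        if dst in self.cam:          # known destination: unicast
            return [self.cam[dst]]
        return [p for p in range(self.num_ports)  # unknown: flood, like a hub
                if p != in_port]

sw = ToySwitch(num_ports=4)
# Alice (port 1) sends to Bob before the switch has heard from him: flood.
print(sw.receive(1, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # [0, 2, 3]
# Bob replies from port 2; the switch learns him and unicasts thereafter.
print(sw.receive(2, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # [1]
```

The one place the model cheats is capacity: the dict is unbounded, whereas real CAM is finite, and that finiteness is what makes the table-filling attack described below possible.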

Source address spoofing

So what happens when Mallory, who is evil and connected on port 13, puts a frame on the wire with the source field set to Bob’s MAC address (bb:bb:bb:bb:bb:bb)? The switch will look up the address in its CAM table, find that it’s already got an entry that maps to port 2, and unquestioningly replace the mapping with a new one for port 13. Now, when Alice tries to send Bob a message, Mallory will get it instead.
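In dict terms, the hijack is just an unauthenticated overwrite (again a toy model of my own, not real switch code):

```python
cam = {"bb:bb:bb:bb:bb:bb": 2}   # the switch already knows Bob is on port 2

def learn(cam, in_port, src_mac):
    # Backwards learning has no notion of ownership: last writer wins.
    cam[src_mac] = in_port

learn(cam, 13, "bb:bb:bb:bb:bb:bb")  # Mallory spoofs Bob's MAC from port 13
print(cam["bb:bb:bb:bb:bb:bb"])      # 13: Alice's frames now reach Mallory
```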

Of course, the next time Bob sends, the attack will be undone, and Mallory will have to repeat it. If she wants to keep up, she could try to flood the network with something like this (using scapy):

#!/usr/bin/env python

import sys

from scapy.all import *

victim_mac = sys.argv[1]
broadcast_mac = "ff:ff:ff:ff:ff:ff" # Flood all switches
frame = Ether(src=victim_mac, dst=broadcast_mac) # No payload is necessary

while True:
    sendp(frame) # Re-poison the CAM table as fast as we can

There are limits to this approach, however. It’s not an effective man-in-the-middle attack because it completely denies Bob service. And Mallory can’t try to cover her tracks by forwarding the intercepted frames on to Bob, because the switch will just forward those frames back to her.

(Actually, it might be possible to develop an implementation that works, at least partially. Mallory could collect Bob’s incoming traffic for several milliseconds at a time, and periodically suspend her attack while she dumps the frames back to him, modified or not. I haven’t tried it, though.)

Filling the CAM table

There’s something I didn’t tell you earlier. We already know that when a switch receives a frame with a source address that’s in its CAM table, it just adds the entry and kicks out whatever was there before it. But what happens when a switch receives a frame with a destination address that’s not in the table? In this case, it has only one choice: forward the frame onto all ports, just like a hub. Hopefully, whoever the frame was destined for will respond; then, the switch can add their MAC address to its CAM table and start forwarding frames just to them.

On some switches, Mallory can take advantage of this by knocking all the legitimate addresses out of the switch’s CAM table. Then, the next time a frame comes in, it won’t be in the table, and the switch will broadcast it to all ports. In other words, Mallory can turn the switch into a hub by spamming it with lots of random MAC addresses:

#!/usr/bin/env python

import random

from scapy.all import *

def gen_rand_mac():
    mac = [random.randint(0, 255) for _ in range(6)]
    mac[0] |= 0x02 # Set the locally-administered bit
    mac[0] &= ~0x01 # Unset the multicast bit
    return ":".join("%02x" % octet for octet in mac)

while True:
    frame = Ether(src=gen_rand_mac()) # Destination defaults to broadcast
    sendp(frame)

If Mallory’s attack succeeds, she can use her capability to spy on other users.


Mid- to high-end switches offer a feature, which Cisco calls “port security,” that is designed to prevent exactly this kind of CAM table mischief. Port security allows an admin to lock down exactly which MAC addresses are allowed to communicate over which ports, or to limit the number of unknown MACs that a given port can map to. Obviously, this won't always be practical, but it's one more tool in the bag.
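For reference, here’s roughly what that looks like in Cisco’s IOS syntax. Exact commands vary by platform and software version, so treat this as a sketch rather than a copy-paste config:

```
interface GigabitEthernet0/13
 switchport mode access
 switchport port-security
 switchport port-security maximum 2
 switchport port-security violation shutdown
```

With `violation shutdown`, a port that sees more MAC addresses than allowed is err-disabled until an admin intervenes; `restrict` and `protect` are the gentler alternatives.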

There are other options too, like VLANs. But before you go feature-crazy, it's useful to take a step back and remember that switched Ethernet was an upgrade to an older technology, which people now call “classic” Ethernet. In classic Ethernet, everything was broadcast: if you wanted to send a message to someone, you sent it to everybody; receivers got everyone’s traffic, but ignored it unless it was addressed to them. Switches were introduced to add some intelligence to noisy Ethernet networks, but that basic broadcast behavior lives on in them.

If you want to be able to control traffic between two hosts, make sure they’re not on the same LAN, full stop. Even if you can prevent the kind of CAM table smashing I describe here, there are a ton of other fun tricks an attacker can try if they're on a LAN with a potential target. Maybe I will explore some of those in the future. Thanks for reading.

A rackmount switch, a Raspberry Pi, and a laptop on a desk in my home lab
The “lab” I used for these experiments. It’s an old PowerConnect I got at ReSOURCE in Burlington, Vermont for $10, a Raspberry Pi, and my ThinkPad.