InputOutput.io - The free-thinkin' free-speakin' rabble-rousin' geek.

A quick tutorial on getting USB EVDO working on BackTrack {3, 4}

I recently attended Shmoocon 2009, and was surprised to find a few attendees asking me how I got my Sprint Novatel u727 EVDO modem working in BackTrack 3. The process should be the same for BT4, which was just released on Friday at Shmoocon. So for convenience's sake, here are the script I use to connect and the configuration file for kppp.

evdoconnect.sh:

#!/bin/bash
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root" 1>&2
    exit 1
fi
eject /dev/sr0 2>/dev/null
sleep 1
modprobe -r usbserial
modprobe usbserial vendor=0x1410 product=0x4100
sleep 1
nohup kppp -c "Sprint Wireless" > /dev/null 2> /dev/null &

Note that the vendor and product codes will probably need to be changed if you’re using a different provider or card. Contact your provider for this information.
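If you’re not sure what IDs your card uses, lsusb will show them. Here’s a quick sketch of pulling the vendor and product IDs out of an lsusb line; the sample line below is illustrative (yours will differ), and the parsing logic is mine, not anything BackTrack ships:

```shell
# Sample line of `lsusb` output; substitute the line for your own device.
line='Bus 001 Device 004: ID 1410:4100 Novatel Wireless, Inc. U727'
# The IDs are the colon-separated hex pair after "ID".
vendor=$(echo "$line" | sed -n 's/.*ID \([0-9a-f]\{4\}\):.*/\1/p')
product=$(echo "$line" | sed -n 's/.*ID [0-9a-f]\{4\}:\([0-9a-f]\{4\}\).*/\1/p')
echo "modprobe usbserial vendor=0x$vendor product=0x$product"
```

The printed command is exactly what the connect script runs for my u727.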

.kde/share/config/kppprc:

pppdArguments=

[Account0]
AccountingEnabled=0
AccountingFile=
Authentication=4
AutoDNS=1
AutoName=0
BeforeConnect=
BeforeDisconnect=
CallbackPhone=
CallbackType=0
Command=
DNS=
DefaultRoute=1
DisconnectCommand=
Domain=
ExDNSDisabled=0
Gateway=0.0.0.0
IPAddr=0.0.0.0
Name=Sprint Wireless
Password=user
Phonenumber=#777
ScriptArguments=
ScriptCommands=
StorePassword=1
SubnetMask=0.0.0.0
Username=user
VolumeAccountingEnabled=0
pppdArguments=

[General]
DefaultAccount=Sprint Wireless
DefaultModem=EVDO Modem
NumberOfAccounts=1
NumberOfModems=1
PPPDebug=0

[Graph]
Background=255,255,255
Enabled=true
InBytes=0,0,255
OutBytes=255,0,0
Text=0,0,0

[Modem0]
BusyWait=0
Device=/dev/ttyUSB0
Enter=CR
FlowControl=Hardware [CRTSCTS]
Name=EVDO Modem
Speed=921600
Timeout=60
UseLockFile=1
Volume=1
WaitForDialTone=1

[WindowPosition]
WindowPositionConWinX=283
WindowPositionConWinY=215

Again, this is simply my personal configuration. It should work much the same on other distributions as well, provided that your card is connected via USB.

BackTrack 3, the EEE 701, and Disk Encryption

Explanation and Advantages

I recently decided to make BackTrack 3 the primary OS on my pearly EEE 701.  Given my EEE’s whopping 4GB of solid-state storage, I decided that rather than installing BackTrack directly onto the SSD, I would instead install the live distro to an 8GB SDHC card I had lying around, and use the internal 4GB SSD as an encrypted /root partition using cryptsetup.  There are a few distinct advantages to such a setup.  Firstly, since the OS is installed as a live distro on a removable device, portability is not sacrificed – I am still able to boot into BackTrack from the same SDHC card plugged into another machine (assuming, of course, that the machine’s BIOS supports booting from SD).  Secondly, by overriding the default /root partition created by root.lzm, any changes I make to /root are persistent and do not require a recompression of root.lzm.  This allows me to store application settings and files in a much more convenient manner.  Thirdly, since /root is encrypted, saving settings or files containing passwords or other sensitive information is less of a security risk.

Implementation

To install BackTrack onto the SDHC card, we use the same method as a USB install.  Format the SDHC to contain a vfat filesystem.  Extract the BackTrack 3 USB .iso file into the filesystem mount point, and run boot/bootinst.sh.  I tried this in Ubuntu 8.10 and had some trouble: the device was recognized as /dev/mmcblk0 and the partition as /dev/mmcblk0p1, a naming scheme that the bootinst.sh shell script got mixed up on.  Running the script on the EEE’s previous OS, Xubuntu 8.04, the device and partition were recognized as /dev/sda and /dev/sda1, and I encountered no further problems.
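The naming hiccup above comes down to how partition names are derived: mmcblk-style devices insert a p before the partition number, while sdX devices don’t. A tiny sketch of the kind of check the script would need (the logic here is mine, not from the actual bootinst.sh):

```shell
# Derive the first-partition node from a disk node, handling both
# sdX and mmcblkN naming schemes (illustrative; not from bootinst.sh).
first_partition() {
  case "$1" in
    *[0-9]) echo "${1}p1" ;;  # device names ending in a digit use a 'p' separator
    *)      echo "${1}1"  ;;
  esac
}
first_partition /dev/sda       # -> /dev/sda1
first_partition /dev/mmcblk0   # -> /dev/mmcblk0p1
```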

Once we boot into BackTrack, we configure and install cryptsetup:

cd ~
wget http://luks.endorphin.org/source/cryptsetup-1.0.5.tar.bz2
tar -xvf cryptsetup-1.0.5.tar.bz2
cd cryptsetup-1.0.5
./configure
make
make install

Next, we create a .lzm file for cryptsetup to ensure that it will be available each time we boot:

mkdir -p usr/include usr/lib usr/man/man8 usr/sbin usr/share/locale/de/LC_MESSAGES
cp /usr/include/libcryptsetup.h usr/include/
cp /usr/lib/cryptsetup usr/lib/
cp /usr/lib/libcryptsetup.* usr/lib/
cp /usr/man/man8/cryptsetup.8 usr/man/man8/
cp /usr/sbin/cryptsetup usr/sbin/
cp /usr/share/locale/de/LC_MESSAGES/cryptsetup.mo usr/share/locale/de/LC_MESSAGES/
tar -zcvf cryptsetup.tgz usr/
tgz2lzm cryptsetup.tgz cryptsetup.lzm
cp cryptsetup.lzm /mnt/sda1/BT3/modules/ # my mountpoint was /mnt/sda1, yours probably is too

Now we have cryptsetup available in the live environment.  The next step is to format the EEE’s internal SSD.  I set up one primary partition, recognized as /dev/hdc1.  We’ll be formatting this with cryptsetup using a secure passphrase.

cfdisk /dev/hdc # to set up the partition
umount /dev/hdc1
cryptsetup luksFormat /dev/hdc1
cryptsetup luksOpen /dev/hdc1 root_dir
mkfs.ext2 /dev/mapper/root_dir

And now we have an encrypted partition on the SSD.  Next, mount it and copy over the existing BackTrack /root files.

mkdir /mnt/root_dir
mount /dev/mapper/root_dir /mnt/root_dir
cp -a /root /mnt/root_dir
mv /mnt/root_dir/root/* /mnt/root_dir/root/.* /mnt/root_dir/
rmdir /mnt/root_dir/root

And we’re almost done.  We’ll create a script to make it easy to mount our /root every time we boot.  Create a file at /root/root/decrypt_root.sh (the extra root/ directory is what makes the module we build below unpack to /root at boot) with the following contents:

#!/bin/bash
cryptsetup luksOpen /dev/hdc1 root_dir
mount /dev/mapper/root_dir /root

Finally, create an .lzm file for the script.

cd ~
chmod +x root/decrypt_root.sh
tar -zcvf decrypt_root.tgz root/
tgz2lzm decrypt_root.tgz decrypt_root.lzm
cp decrypt_root.lzm /mnt/sda1/BT3/modules/

And we’re finished.  If all goes well, when you restart your machine you will have this script in your /root directory, and once run it will mount your encrypted SSD partition to /root.  From this point, you can issue a ctrl-alt-backspace and re-login, and startx if you’d like.  Welcome to a world of BackTrack possibilities!

How to Subvert Deep Packet Inspection, the Right Way.

Note: I was first inspired to write this post based on the great coverage of deep packet inspection by the Security Now (SN) podcast.  For more detailed information than I could ever provide, please listen to Security Now, especially episodes 149, 151, and 153.

What is deep packet inspection?

In bygone days, the role of an Internet Service Provider (ISP) was that of a passive carrier of content.  The ISP provided the necessary infrastructure to connect your computer with the larger global network of computers, the internet.  When you paid your monthly bill to the ISP, you were paying a usage charge, effectively ‘renting’ their infrastructure to get yourself on the grid.  They did not disrupt, filter, or sell your private information, and life was good.
In 1994, Congress passed the Communications Assistance for Law Enforcement Act (CALEA).  This act required all digital telecommunications carriers to enable wiretapping of their digital switches.  In 2005, CALEA was extended, at the behest of the DOJ, FBI, and DEA, to include the tapping of all ISP traffic.  Prior to this extension, the FBI had relied on court order or voluntary cooperation of individual ISPs, engaging in packet sniffing with programs such as Carnivore.  So the government spying on your net usage is nothing new.
Recently, however, a very disturbing friendship has developed between advertising agencies and ISPs.  Particularly nefarious advertising companies such as NebuAd have been approaching ISPs and offering them sums of money to install devices that monitor, and even modify on the fly, your communications, placing ads on websites that you visit.  Earlier on, these ads were just crudely inserted JavaScript, sometimes causing a rendering error in the page.  Recently, some companies such as Phorm in Britain have gotten smarter and are using the devices they’ve bribed the ISPs into installing to monitor each and every website you visit.  This is frightening because your browsing habits reveal an enormous amount of information about the type of person you are, and in many cases pinpoint exactly who you are.  So these advertising agencies are effectively logging and storing everything that you do across the web, building a profile of you for the supposed intent of providing more targeted advertising.  Luckily, there is a way to protect yourself from these invasive practices.

Using SSH to create a secure SOCKS proxy tunnel

Note: My experience in subverting these practices is largely based on using SSH within a bash environment.  You can perform the same actions on Windows using PuTTY as well, but the syntax is quite different, and not my area of expertise.  I suggest using the howto on this page if you are using PuTTY.

Requirements:

  • Remote shell account on a machine running an SSH server
  • An SSH client
  • Programs that allow you to proxy traffic with SOCKS

One very effective way to subvert deep packet inspection is to create a secure, encrypted tunnel that you send all of your traffic through.  The ISPs cannot modify the content of the packets if they are point-to-point encrypted, since they have no way of seeing what the content actually is.  The idea is to wrap the packets with encryption only so far as to get them out of the reach of your ISP, and once they arrive at a remote server that you have shell access to, that server unwraps the traffic and sends it out on its merry way.  Be sure that the remote server that you have access to is secure and trusted.  If it is not, you may effectively be opening yourself to a man-in-the-middle attack or packet sniffing.  If you have access to a remote shell, you can use SSH to create a secure SOCKS proxy on a specific port of your local machine, which forwards all traffic to the remote machine before reaching its final destination.  Simply type:

ssh -D localhost:port -f -C -q -N user@host.tld

where port is the local port you want to open.  When this command is issued for the first time, make a note of the hex fingerprint that is displayed.  If at any time in the future you get a warning stating that there is a fingerprint mismatch, or that the fingerprint does not match your known_hosts file, your traffic may be being intercepted.  This fingerprint acts as verification that you are indeed opening a connection to the remote server you intend to communicate with.  Now issue the command “netstat -antp”; if everything went well, you will see a new local port being listened on by ssh.  If the ‘local address’ field reads “127.0.0.1:port”, then the port is only accessible locally.  You can now configure programs such as x-chat, pidgin, and firefox to use the IP address “127.0.0.1” with the port you have specified to enable this proxy.
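As a quick sanity check, you can script that loopback test against a line of the netstat output; the sample line below is illustrative (your port and PID will differ):

```shell
# A sample `netstat -antp` line for the ssh listener; substitute your own.
line='tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      1234/ssh'
local_addr=$(echo "$line" | awk '{print $4}')   # the 'local address' field
case "$local_addr" in
  127.0.0.1:*) echo "proxy is loopback-only ($local_addr)" ;;
  *)           echo "warning: proxy reachable from other hosts ($local_addr)" ;;
esac
```

Anything other than a 127.0.0.1 binding means other machines on your network could use your tunnel.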

Word of warning #1: What you gain in privacy on the ISP side, you may lose in anonymity on the remote server side.  For instance, if your remote server has a static IP and your ISP doesn’t, it may be easier for the websites you access to track your browsing habits over time.  One way to counter this is to have a multitude of people use this server as their primary proxy; that way there is no way of pinpointing who exactly is accessing what.

Word of warning #2: When configuring certain programs to use the SOCKS proxy, there is a potential for DNS leakage.  This means that even though the traffic between you and the remote server is encrypted, the name resolution may not be.  Certain programs such as firefox allow you to ensure that there is no DNS leakage: browse to “about:config” and make sure “network.proxy.socks_remote_dns” is set to true.  Certain firefox extensions such as FoxyProxy take care of this for you in their settings.

Complete SSH encapsulation: the tun module

Requirements:

  • Root access to a remote machine running an ssh server
  • An SSH client
  • The tun module installed both locally and remotely

The problem that I had with the solution in the previous section is as follows.  There are plenty of programs using my network that do not have the ability to use a SOCKS proxy.  Given the track record of the worst of the advertising companies, I wouldn’t put it past them to start intercepting and modifying all sorts of traffic, not only the traffic with the highest volume or visibility such as web traffic.  What is really needed is an all-encompassing proxy, one that just takes all outgoing and incoming traffic and sends it over that secure encrypted link.  SSH is such a wonderfully flexible and versatile program, and it has built-in support for creating a secure VPN to do just that.  The idea is to make it so that all your traffic is routed through the remote server using a secure VPN link.  This section requires a basic grasp of routing tables, kernel modules, and iptables.

So our first task is to establish the secure VPN.  To do this, both machines must have the ‘tun’ kernel module installed and loaded.  Just issue the command

modprobe tun

on both the local and remote machines from their respective root shells.  Locally, issue the command:

ssh -w 0:0 -f -C -q -N root@host.tld

This command establishes a new network interface on both sides of the connection, tun0.  Then, type:

ifconfig tun0 10.0.0.200 pointopoint 10.0.0.100

SSH into the remote machine, and issue these commands:

ifconfig tun0 10.0.0.100 pointopoint 10.0.0.200
ping -c 3 10.0.0.200

If you get a ping response, you’ve successfully set up a secure VPN!  This is good, but in order to route your traffic through the remote machine, it must be set up to enable packet forwarding, and also to have iptables configured so that it acts as a gateway.  I’ve modified a small shell script for this purpose.  You may need to modify it further to suit your needs:

wget http://www.inputoutput.io/shareconnection.sh
chmod +x shareconnection.sh
./shareconnection.sh
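I won’t reproduce shareconnection.sh here, but the gateway setup it performs boils down to enabling packet forwarding and NATing the tunnel’s traffic.  The sketch below prints the commands instead of running them – this is my own reconstruction, not the actual script, and the eth0/tun0 interface names are assumptions; you’d run the printed commands as root on the remote server:

```shell
# Dry-run sketch of a VPN-gateway setup (not the actual shareconnection.sh;
# eth0 as the server's outbound interface is an assumption).
run() { echo "$@"; }   # swap echo for direct execution when you're ready

run sysctl -w net.ipv4.ip_forward=1                        # enable packet forwarding
run iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # NAT tunnel traffic outbound
run iptables -A FORWARD -i tun0 -o eth0 -j ACCEPT
run iptables -A FORWARD -i eth0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```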

Your remote server is now configured to act as a gateway.  Locally, you must now set up your routing tables to direct all traffic (except the traffic that is still needed to keep the tun0 interface alive!) through the tun0 interface.  The following commands assume that your default local gateway is a router with the IP address 192.168.0.1, and your default interface is eth0:

route add -host host.tld gw 192.168.0.1 eth0
route del default gw 192.168.0.1 eth0
route add default gw 10.0.0.100 tun0

And presto!  All outgoing and incoming traffic is now routed through your remote machine.  The same concerns from the section above regarding anonymity and verification of the remote fingerprint apply here as well.  Since the remote server is now acting as your gateway, there is no reason to fear DNS leakage, and no programs to configure.  Now you can rest assured that your connectivity is secure from the prying eyes of the ISPs and their sneaky cohorts, the traffic-shaping advertising companies.

My next article will detail how to connect this tunneled interface with programs such as hostap and dhcpd to create a wireless access point providing an automatically proxified connection to wireless clients.

Just took the OSCP exam

Alright, so as the topic suggests, I just finished the Offensive Security 101 exam yesterday, and oh man.  I can’t disclose much information about the test itself, but let me tell you this: it was frustrating, exciting, and triumphant all at once.  Well, only triumphant if you pass, I suppose.  Okay, so this is the first exam I’ve taken since college, and I have to admit, I was pretty nervous for it.  Alright, you could consider my post-college sociology-degree job searching a test of some kind.  *Insert inaudible mutterings about the job market here.*

OS101 is unique in its field: it teaches you about software security holes from the perspective of the attacker.  It explains common vulnerabilities in network security, and the attack vectors involved in exploiting them.  It also teaches, among other things, enumeration techniques, Google h4x, and tunneling services through SSL proxies.  And it’s fun!  The Offensive Security team has built a lab environment that you VPN into, with a wide array of machines running different unpatched services.  And they give you access to a Windows machine with OllyDbg, a Windows debugger that allows you to develop exploits at a very low level using 32-bit assembly language.  Don’t be put off if you’re not familiar with assembly – I don’t even really know it myself, but nonetheless it was a blast learning how the things that wind up on milw0rm actually get developed.

The lectures that the course provides are very straightforward and explain things in an easy-to-understand manner, so even if you haven’t coded before, it’s definitely worth it to give it a try.  OS101 assumes a basic understanding of the Linux command line and the bash environment.

This is somewhat tangential, but I have to make another recommendation here.  If you are interested in network security, cryptography, and electronic privacy and want to keep up to date on these and other things, I highly recommend listening to the Security Now weekly podcast.  Security Now features a maverick in the industry and the creator of the data recovery tool SpinRite, Steve Gibson.

Anyway, I really do feel like a walking billboard now, so I’ll leave it at that!

Cory Doctorow’s Little Brother

I just finished Cory Doctorow‘s Little Brother. And oh. My. God. Soooo good. Sooooooo good.

First let me note: Cory Doctorow is a sci-fi author, but this novel doesn’t read like sci-fi. Sure, it hinges on technology that doesn’t yet exist. But we’re talking about the near future, the very near future, no more than 4-5 years down the line. So there are no robots with plasma spheres for heads screaming “Danger Will Robinson!,” no faster-than-light travel, none of those elements that have given the genre an unfair reputation. Instead, it’s tech that we can see developing before our very eyes in real time. In every chapter there is an explanation of real or conceivable computer systems, cryptographic systems, or mathematics that are relevant to the story in some way. And that’s the exciting part: the innovativeness and imagination embodied in the not-so-far-off world that Doctorow describes is believable because it comes from the author’s understanding of how the technology really works, and how it is evolving in the present. As William Gibson explains, sci-fi “can’t be about the future. It’s about where the person who wrote it thought their present was, because you can’t envision a future without having some sort of conviction, whether you express it or not in the text, about where your present is.” And our present is a very exciting time indeed.

That being said, even without an understanding of the underlying technology, it makes for a great read. Basically, it’s about a teenage hacker in San Francisco and how he deals with the Department of Homeland Security (DHS) taking over the Bay Area after the next terrorist attack. The DHS sets up random checkpoints throughout the city, extending the surveillance measures already in place, and tracks the movements of every citizen through the RFID tags they use when they take the BART (subway), or go through FasTrak (the RFID-enabled toll lanes in SF). Furthermore, our protagonist and his friends are taken in and tortured by the DHS for days on end, in a secret prison the department has set up offshore. With this imagery, you can see how Doctorow’s vision of the near future is also informed by the political realities of our time. Just as he projects the technologies of the near future based on the technological dynamics of the present, the stark political realities of today are extended into the near future in a way that seems not just believable, but inevitable. I’m not going to give away too much of the plot, but here’s the long and short of it. Just as these technologies can be used against the people, those same technologies can be used to promote and extend people’s rights and freedoms, and to subvert the government’s attempts to take those freedoms away. A movement evolves, and at the center of it is the Xnet – an encrypted network of hacked Xbox Universals, using Paranoid Linux as its operating system.

Doctorow does such a wonderful job of interweaving the political, cultural, and technological strains of our current society and projecting them into the near future, with an elegance that is truly visionary. Anyone who is interested in cryptography, hacking, or activism should immediately drop whatever they’re doing, run to their nearest independent bookstore, and pick up a copy. Well? Go!

Wikiverify

The Problem:

Wikipedia is a great resource. In fact, it’s such a great resource that it’s the de-facto source of information for – well – most things that you want to know about. And it’s so all-encompassing, so expansive and extensive, that you can find just about anything on it, from the frighteningly large rodent the “capybara” to what exactly a “femme fatale” is to the intricate workings of the Rijndael encryption standard. Just a few years ago, you couldn’t find all this information in one place. After all, wikipedia.org as a domain was only registered in January 2001. A decade ago, you’d have to rely on the old textbooks, the infomercials that try to sell you the words of old white guys, to get even close to compiling the information that is accessible within a few keystrokes today. And the beautiful thing about it is that it’s all community driven. It’s not just a few old white guys anymore, it’s people from all across the globe collaborating on a masterpiece of information and accessibility. It’s an amorphous entity that is always growing, always changing, constantly reshaping itself. It’s the closest thing we have to a real live Hitchhiker’s Guide to the Galaxy. Its democratic nature is great, but its unaccountable nature is where the whole thing gets tricky.

And that’s the real problem. People add stuff they’re not supposed to. Stupid stuff, silly stuff, misconceptions, inaccuracies, rumors and downright lies. The entries get cluttered with things they’re not supposed to contain. I can’t claim I’m not guilty of it. I’ve edited wikipedia entries in the past for kicks *cough*electromagneticpulse*cough* – but this just underlines the whole structural problem. Stephen Colbert, a personal hero of mine, caused havoc on wikipedia in a single 10-minute segment by urging viewers to edit the “elephant” entry. The result was entry locks, warnings of bias, needed citations, and a general site-wide loss of credibility. People don’t trust wikipedia the way they trust good ol’ fashioned ink and paper. It’s enough to cause nostalgia, a longing for the halcyon days when information was sparse, but at least it had some bite behind its bark. What’s an honest obscure-information-seeker to do? Where can we turn?

There is one solution, always turned to in times of despair. “Check your sources!” It’s the time-cherished response of every undergrad college professor you’ve ever had. “Wikipedia may not be cited, not now, not ever!” they cry out in unison. If I didn’t know any better, I’d say it was an affront to our information-sharing culture. A way to make the information they possess seem a precious commodity in light of the rubbish of our wikimedia commons. After all, are they not the arbiters of accuracy? Still, what they say does make sense: you have no way of distinguishing the trash from the treasure for any given entry. Someone may have edited it to add one of the aforementioned inaccuracies just 10 minutes ago. Hell, you may have edited it, just to get in the few extra footnotes you’ve been looking for and to get a chuckle out of quoting Cleopatra saying “I know kung-fu.” So there has to be some greater source of authority to refer to when trying to speak authoritatively yourself. You can’t just reference any odd article you like, and expect it to be listened to like the wisdom of the ancients.

The Solution:

“But I’m so lazy!” you complain. “I don’t want to spider across twenty different newspapers, essays and electronic journals just to quote something I already know as fact!” At this point a few months ago, I would have said “Well, too bad! Life is hard, suck it up! Get off, er, on your lethargic ass, do some fancy google voodoo, and find that Australian PM quote you were looking for!” But in all honesty, the complaints do make sense. The information is out there, in all its cross-hyper-linked glory, but to get to it you have to spend inordinate amounts of time hunting it down from credible sources. Oh, pity me, I wish that wikipedia had both the credibility I’m looking for and the vast resources it already has! If only there were a way to combine the best of both worlds!

But then I had this crazy idea. What if, instead of wikipedia referring to sources of higher authority, wikipedia brought those sources of higher authority to itself? What if wikipedia set aside a corner of itself, however small it may initially be, and said “we swear by the hair on our chinny-chin-chin that this information is verifiably true”? Let’s call it verified.wikipedia.org. Now how would they verify the quality of content on this subdomain? Fact-checkers. That’s right – every major newspaper has them. They go around, looking at the articles about to go to print, and, well, check their facts. To make sure the quality of the article that is about to hit the newsstands lives up to the time-honored tradition of the Sun Times Press Journal Herald. And somehow, magically, this works. So bring fact-checkers to wikipedia. Make them paid staff-people on the largest source of information of all time.

Sure, there are some logistical problems. For instance, wikipedia is coming up with crazy schemes to make ends meet as it is, but no one claimed this would be easy. One possibility is getting the folks over at the Science Commons on board. That organization is certainly starting to address some of the problems of reliability and access to information. Another avenue is getting the universities themselves to fund such a project, and improve their public image besides. With enough of a starting nudge, we could even get professors and voices of authority to volunteer and offer their words of wisdom. Wikipedia could become a platform for the collaboration of experts in various fields, pooling their knowledge into a central, open network.

The great thing about this is that there is really nothing to lose. If it doesn’t catch on, then oh well, we tried. No harm done. But if this does catch on, it could be something really interesting. As the saying goes, the sky is the limit.