Bl@ckTo\/\/3r
In 2004, Fyodor wrote a chapter for the security fiction best-seller *Stealing the Network: How to Own a Continent*. His chapter is available free online. In 2005, Syngress released a sequel, named *Stealing the Network: How to Own an Identity*. The distinguished author list consists of Jeff "Dark Tangent" Moss, Jay Beale, Johnny "Google Hacker" Long, Riley "Caezar" Eller, Raven "Elevator Ninja" Alder, Tom Parker, Timothy "Thor" Mullen, Chris Hurley, Brian Hatch, and Ryan "Blue Boar" Russell. Syngress has generously allowed Fyodor to post his favorite chapter, which is Bl@ckTo\/\/3r (Chapter 5) by Nmap contributor Brian Hatch. This chapter is full of wry humor and creative security conundrums to keep the experts entertained, while it also offers informative security lessons on the finer points of SSH, SSL, and X Windows authentication and encryption. It stands on its own, so you don't need to read chapters one through four first. Enjoy!
Bl@ckTo\/\/3r
I have no idea if Charles is a hacker. Or rather, I know he's a hacker; I just don't know if he wears a white hat or a black hat.

Anyone with mad skills is a hacker -- hacker is a good word: it describes an intimate familiarity with how computers work. But it doesn't describe how you apply that knowledge, which is where the old white-hat / black-hat bit comes from. I still prefer using “hacker” and “cracker,” rather than hat color. If you're hacking, you're doing something cool, ingenious, for the purposes of doing it. If you're cracking, then you're trying to get access to resources that aren't yours. Good versus bad. Honorable versus dishonest.

Unfortunately, I am not a hacker. Nor am I a cracker. I've got a lot of Unix knowledge, but it has all been gained the legitimate, more bookish way. No hanging out in IRC channels, no illicit conversations with people who use sexy handles and alter egos, no trading secrets with folks on the other side of the globe. I'm a programmer, and sometimes a systems administrator. I work for the IT department of my alma mater; that's what you do when you are too lazy to go looking for a 'real job' after graduation. I had a work-study job, which turned into full-time employment when I was done with school. After all, there's not much work out there for philosophy majors.

Charles went a different route. Or should I call him Bl@ckTo\/\/3r? Yesterday he was just Charles Keyes. But yesterday I wasn't being held hostage in my own apartment. Our own apartment.

In fact, I don't know if I should even be speaking about him in the present tense.
He vanished a week ago. Not that disappearing without letting me know is unusual for him; he never lets me know anything he does. But this was the first time he's been gone that I've been visited by a gentleman who gave me a job and locked me in my apartment.
When I got home from work Friday night, the stranger was drinking a Starbucks and studying the photographs on the wall. He seemed completely comfortable; this wasn't the first unannounced house call in his line of work, whatever that is.

He was also efficient. Every time I thought of a question, he was already on his way to answering it. I didn't say a thing the entire time he was here.

“Good evening, Glenn,” he began. “Sorry to startle you; please sit down. No, there's no need to worry; although my arrival may be a surprise, you are in no trouble. I'd prefer not to disclose my affiliation, but suffice it to say I am not from the University, the local police, or the Recording Industry Association of America. What I need is a bit of your help: your roommate has some data stored on his systems, data to which my organization requires access. He came by this data through the course of contract actions on our behalf. Unfortunately, we are currently unable to find your co-tenant, and we need to re-acquire the data.

“He downloaded it to one of his Internet-connected servers, and stopped communicating with us immediately afterward. We do not know his location in the real world, or on the Internet. We did not cause his disappearance; that would not be in our interest. We have attempted to gain access to the server, but he seems to have invested significant time building defenses, which we have unfortunately triggered. The data is completely lost; that is certain.

“We were accustomed to him falling out of communication periodically, so we did not worry until a few days after the data acquisition occurred and we still had not heard from him. However, we have a strong suspicion that he uploaded some or all of the data to servers he kept here.
“In short, we need you to retrieve this data.
“We have network security experts, but what we lack is an understanding of how Bl@ckTo\/\/er thinks.

“His defenses lured our expert down a false path that led to the server wiping out the data quite thoroughly, and we believe that your long acquaintance with him should provide you with better results.

“You will be well compensated for your time on this task, but it will require your undivided attention. To that end, we have set your voicemail message to indicate you are out on short notice until Monday night. At that point, either you will have succeeded, or our opportunity to use the data will have passed. Either way, your participation will be complete.

“We'll provide for all your needs. You need to stay here. Do not contact anyone about what you are doing. We obviously cannot remove your Internet access, because it will likely be required as you are working for us, but we will be monitoring it. Do not make us annoyed with you.

“Take some time to absorb the situation before you attempt anything. A clear head will be required.

“If you need to communicate with us, just give us a call, we'll be there.
“Good hunting.”
And out the door he walked.

Not sure what to do, I sat down to think. Actually, to freak out was more like it: this was the first I had ever heard of what sounded like a 'hacker handle' for Charles. I've got to admit, there's nothing sexy about Charles as a name, especially in 'l33t h&x0r' circles.

Maybe a bit of research will get me more in the mood, I thought. At the least, it might take my mind off the implied threat. Won't need the data after Monday, eh? Probably won't need me either, if I fail. Some data that caused Charles to go underground? Get kidnapped? Killed? I had no idea.

Of course, I couldn't actually trust anything they said. The only thing I knew for sure was that I hadn't seen Charles for a week and, like I said, that's not terribly unusual.

Google, oh Google my friend: let's see what we can see, I thought.

Charles never told me what he was working on, what he had done, or what he was going to do. Those were uninteresting details. Uninteresting details that I assume provided him with employment of one sort or another. But he needed attention, accolades, someone to tell him that he did cool things. I often felt as though the only reason he came to live in my place was because I humored him, gave him someone safe who he could regale with his cool hacks. He never told me where they were used, or even if they were used. If he discovered a flaw that would let him take over the entire Internet, it would be just as interesting to him as the device driver tweak he wrote to speed up the rate at which he could download the pictures from his camera phone. And he never even took pictures, so what was the bloody point?

It didn't matter; they were both hacks in the traditional sense, and that was what drove him. I had no idea how he used any of them. Not my problem, not my worry.

Well, I thought, I guess this weekend it is my worry. Fuck you, Charles. Bl@ckTo\/\/er. Bastard.
That's right, let's get back to Google.
No results on it at all until my fifth l337 spelling. Blackt0wer - nada. Bl&ckt0wer, zip. Thank goodness Google is case insensitive, or it would have taken even longer.

Looks like Charles has been busy out there: wrote several frequently-referenced Phrack articles, back when it didn't suck. Some low-level packet generation tools. Nice stuff.

Of course, I don't know if that handle really belongs to Charles at all. How much can I trust my captor? Hell, what was my captor's name? ‘The stranger’ doesn't cut it. Gotta call him something else. How 'bout Agent Smith, from the Matrix? Neo killed him in the end, right? Actually, I'm not sure; the third movie didn't make much sense. And I'm not the uber hacker / cracker, or The One. Delusions of grandeur are not the way to start the weekend. Nevertheless, I thought, Smith it is.
I took stock of the situation:
Charles had probably ten servers in the closet off of the computer room. We each had a desk. His faced the door, with his back to the wall, probably because he was paranoid. Never let me look at what he was doing. When he wanted to show me something, he popped it up on my screen.

That's not a terribly sophisticated trick. X11, the foundation of any Linux graphical environment, has a very simple security model: if a machine can connect to the X11 server -- my screen, in this case -- which typically listens on TCP port 6000, and if it has the correct magic cookie, the remote machine can create a window on the screen. If you have your mouse in the window, it will send events, such as mouse movements, clicks, and key-presses, to the application running on the remote machine.

This is useful when you want to run a graphical app on a remote machine but interact with it on your desktop. A good example is how I run Nessus scans on our University network. The Nessus box, vulture, only has ssh open, so I ssh to it with X11 forwarding. That sets up all the necessary cookies, sets my $DISPLAY variable to the port on vulture where /usr/sbin/sshd is listening, and tunnels everything needed for the Nessus GUI to appear on my desktop. Wonderful little setup. Slow though, so don't try it without compression: if you run ssh -X, don't forget to add -C too.
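A forwarded session looks roughly like this -- the display number and hostnames here are only illustrative, not my exact setup:

```
desktop$ ssh -X -C glenn@vulture     # -X forwards X11, -C compresses the tunnel
vulture$ echo $DISPLAY
localhost:10.0                       # a stand-in display that tunnels back to my desktop's X server
vulture$ nessus &                    # the GUI pops up on my desktop; its traffic rides the ssh channel
```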
The problem with X11 is that it's all or nothing: if an application can connect to your display (your X11 server, on your desktop) then it can read any X11 event, or manage any X11 window. It can dump windows (xwd), send input to them (rm -rf /, right into an xterm), or read your keystrokes (xkey). If Charles was able to display stuff on my screen, he could get access to everything I typed, or run new commands on my behalf. Of course, he probably didn't need to; the only way he should have been able to get an authorized MIT magic cookie was to read or modify my .Xauthority file, and he could only do that if he was able to log in as me, or had root permissions on my desktop.
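To give a rough idea of how little it takes once a display will talk to you (a generic illustration -- the hostname is a placeholder, and this isn't something I ran against anyone):

```
# If host:0 accepts the connection, xdpyinfo succeeds...
desktop$ xdpyinfo -display victim.example.com:0 >/dev/null && echo "display is open"
# ...and xwd can quietly dump the entire root window for later viewing with xwud.
desktop$ xwd -display victim.example.com:0 -root -out screendump.xwd
```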
Neither of these would have been a surprise. Unlike him, I didn't spend much energy trying to secure my systems from a determined attacker. I knew he could break into anything I have here. Sure, I had a BIOS password that prevented anyone from booting off CD, mounting my disks, and doing anything he pleased. The boot-loader, grub, is password protected, so nobody can boot into single-user mode (which is protected with sulogin and thus requires the root password anyway) or change arguments to the kernel, such as adding "init=/bin/bash" or other trickery.
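None of that was exotic; it was roughly the stock recipe, something like the following (paraphrased from memory -- the hash is elided and the paths are the usual Debian-era defaults, not necessarily my exact files):

```
# /boot/grub/menu.lst (grub legacy): require a password before editing boot entries
password --md5 $1$...                # hash generated with grub-md5-crypt

# /etc/inittab: make single-user mode run sulogin, which demands the root password
~~:S:wait:/sbin/sulogin
```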
But he was better than I am, so those barriers were for others. Nothing stopped anyone from pulling out the drive, mounting it in his tower, and modifying it that way.

That's where Charles was far more paranoid than I. We had an extended power outage a few months ago, and the UPS wasn't large enough to keep his desktop powered the whole time, so it shut down. The server room machines are on a bigger UPS, so they lasted through the blackout. When the power was back, it took him about twenty minutes to get his desktop back online, whereas I was up and running in about three. Though he grumbled about all the things he needed to do to bring up his box, he still took it as an opportunity to show his greatness, his security know-how, his paranoia.

"Fail closed, man. Damned inconvenient, but when something bad is afoot, there's nothing better to do than fail closed. I dare anyone to try to get into this box, even with physical access. Where the hell is that black CD case? The one with all the CDs in it?"

This was how he thought. He had about 4000 CDs, all in black 40-slot cases. Some had black Sharpie lines drawn on them. I had a feeling that those CDs had no real data on them at all; they were just there to indicate that he'd found the right CD case. He pulled out a CD that had a piece of clear tape on it, pulled off the tape, and stuck it in his CD drive. As it booted, he checked every connector, every cable, the screws on the case, the tamper-proof stickers, the case lock, everything.

"Custom boot CD. Hard drive doesn't have a boot loader at all. CD requires a passphrase that, when combined with the CPU ID, the NIC's MAC, and other hardware info, is able to decrypt the initrd."

He began to type; it sounded like his passphrase was more than sixty characters. I'd bet that he hashed his passphrase and the hardware bits, so the effective decryption key was probably 128, 256, or 512 bits. Maybe more. But it'd need to be something standard to work with standard cryptographic algorithms. Then again, maybe his passphrase was just the right size, and random enough to fill out a standard key length; I wouldn't put it past him. Once he gave me a throwaway shell account on a server he knew, and the password was absolute gibberish, which he apparently generated with something like this:
```
$ cat ~/bin/randpw
#!/usr/bin/perl
use strict;
use warnings;

# All printable ascii characters
my @chars = (32..126);
my $num_chars = @chars;

# Passwords must be 50 chars long, unless specified otherwise
my $length = $ARGV[0] || 50;

while (1) {
    my $password;
    foreach (1..$length) {
        $password .= chr($chars[int(rand($num_chars))]);
    }
    # Password must have lower, upper, numeric, and 'other'
    if (    $password =~ /[a-z]/
        and $password =~ /[A-Z]/
        and $password =~ /[0-9]/
        and $password =~ /[^a-zA-Z0-9]/ ) {
        print $password, "\n";
        exit;
    }
}

$ randpw 10
(8;|vf4>7X
$ randpw
]'|ZJ{.iQo3(H4vA&c;Q?[hI8QN9Q@h-^G8$>n^`3I@gQOj/-(
$ randpw
Q(gUfqqKi2II96Km)kO&hUr,`,oL_Ohi)29v&[' Y^Mx{J-i(]
```
He muttered as he typed the CD boot passphrase (wouldn't you, if your passwords looked like so much modem line noise?), one of the few times I've ever seen that happen. He must type passwords all day long, but this was the first time I ever saw him think about it. Then again, we hadn't had a power outage for a year, and he was religiously opposed to rebooting Linux machines. Any time I rebooted my desktop, which was only when a kernel security update was required, he called me a Windows administrator, and it wasn't a compliment. How he updated his machines without rebooting I don't know, but I wouldn't put it past him to modify /dev/kmem directly, to patch the holes without ever actually rebooting into a patched kernel. It would seem more efficient to him.

He proceeded to describe some of his precautions: the (decrypted) initrd loaded up custom modules. Apparently he didn't like the default filesystems available with Linux, so he tweaked ReiserFS 3, incorporating some of his favorite Reiser4 features and completely changing the layout on disk. Naturally, even that needed to be mounted via an encrypted loopback with another hundred-character passphrase and the use of a USB key fob that went back into a box with 40 identical unlabelled fobs as soon as that step was complete. He pulled out the CD, put a new piece of clear tape on it, and back it went. Twenty minutes of work, just to get his machine booted.

So some folks tried to get access to one of his servers on the Internet. His built-in defenses figured out what they were doing and wiped the server clean, which led them to me. Even if his server hadn't wiped its own drives, I doubted that they could have found what they were looking for on the drive. He customized things so much that they benefited not only from security through encryption, but also from security through obscurity. His custom Reiser4 filesystem was not built for security reasons, only because he has to tinker with everything he touches. But it did mean that no one could mount it up on their box unless they knew the new inode layout.

I felt overwhelmed. I had to break into these boxes to find some data, without triggering anything. But I did have something those guys didn't: five-plus years of living with the guy who set up the defenses. The Honeynet team's motto is "Know your Enemy," and in that regard I've got a great advantage. Charles may not be my enemy, I thought -- I had no idea what I was doing, or for whom! -- but his defenses were my adversary, and I had a window into how he operated.

The back doorbell rang. I was a bit startled. Should I answer it? I wondered. I didn't know if my captors would consider that a breach of my imposed silence. But no one ever comes to the back door.

I left the computer room, headed through the kitchen, and peered out the back door. Nobody was there. I figured it was safe enough to check; maybe it was the bad guys, and they left a note. I didn't know if my captors were good or bad: were they law enforcement using unorthodox methods? Organized crime? Didn't really matter: anyone keeping me imprisoned in my own house qualified as the bad guys.

I opened the door. There on the mat were two large double pepperoni, green olive, no sauce pizzas, and four two-liter bottles of Mr. Pibb. I laughed: Charles' order. I never saw him eat anything else. When he was working, and he was almost always working, he sat there with one hand on the keyboard, the other hand with the pizza or the Pibb. It was amazing how fast he typed with only one hand. Lots of practice. Guess you get a lot of practice when you stop going to any college classes after your first month.

That was how we met: we were freshman roommates. He was already very skilled in UNIX and networking, but once he had access to the Internet at Ethernet speeds, he didn't do anything else. I don't know if he dropped out of school, technically, but they didn't kick him out of housing. Back then, he knew I was a budding UNIX geek, whereas he was well past the guru stage, so he enjoyed taunting me with his knowledge. Or maybe it was his need to show off, which has always been there. He confided in me all the cool things he could do, because he knew I was never a threat, and he needed to tell someone or he'd burst.

My senior year, he went away and I didn't see him again until the summer after graduation, when he moved into my apartment. He didn't actually ask. He just showed up and took over the small bedroom, and of course the computer room. Installed an AC unit in the closet and UPS units. Got us a T1, and some time later upgraded to something faster, not sure what. He never asked permission.
Early on, I asked how long he was staying and what we were going to do about splitting the rent. He said, "Don't worry about it." Soon the phone bill showed a $5000 credit balance, the cable was suddenly free, and we had every channel. I got a receipt for the full payment of the five-year rental agreement on the apartment, which was odd, given that I'd only signed on for a year. A sticky note on my monitor had a username/password for Amazon, which always seemed to have exactly enough gift certificate credit to cover my total.
I stopped asking any questions.
I sat with two pizzas that weren't exactly my favorite. I'd never seen Charles call the pizza place; I figured he must have done it online, but he'd never had any delivered when he wasn't here. I decided to give the pizza place a call, to see how they got the order, in case it could help track him down -- I didn't think the bad guys would be angry if they could find Charles, and I really just wanted to hear someone else's voice right now, so I could pretend everything was normal.
“Hello, Glenn. What can we do for you?”
I had picked up the phone, but I hadn't started looking for the number for the pizza place. I hadn't even dialed yet...
“Hello? Is this Pizza Time?”
“No. We had that sent to you. We figured that you're supposed to be getting into Bl@ckTo\/\/er's head, and it would be good to immerse yourself in the role. Don't worry; the tab is on us. Enjoy. We're getting some materials together for you, which we'll give you in a while. You should start thinking about your plan of attack. It's starting to get dark out, and we don't want you missing your beauty sleep, nor do we want any sleep-deprived slip-ups. That would make things hard for everyone.”

I remembered our meeting: Smith told me that I should call if I needed to talk, but he never gave me a phone number. They've played with the phone network, I thought, to make my house ring directly to them. I didn't know if they had done some phreaking at the central office, or if they had just rewired the pairs coming out of my house directly. Probably the former, I decided: after their problems with Charles' defenses, I doubted they would want to mess with something here that could possibly be noticed.

Planning, planning: what the hell was my plan? I knew physical access to the servers was right out. The desktop-reboot escapade proved that it would be futile without a team of top-notch cryptographers, and maybe Hans Reiser himself. That, and the fact that the servers were locked in the closet, which was protected with sensors that would shut all of the systems down if the door was opened or if anything moved, which would catch any attempt to break through the wall. I found that out when we had the earthquake up here in Seattle that shook things up. Charles was pissed, but at least he was amused by the video of Bill Gates running for cover; he watched that again and again for weeks, and giggled every time. I assumed there was something he could do to turn off the sensors, but I had no idea what that would be.

I needed to get into the systems while they were on. I needed to find a back door, an access method. I wondered how to think like him: cryptography would be used in everything; obscurity would be used in equal measure, to make things more annoying.

His remote server wiped itself when it saw a threat, which meant he assumed it would have data that should never be recoverable. However, I knew the servers here didn't wipe themselves clean. He had them well protected, but he wanted them as his pristine last-ditch backup copy. It was pretty stupid to keep them here: if someone was after him specifically -- and now somebody was -- that person would know where to go -- and he did. If he had spread things out on servers all over the place, it would have been more robust, and I wouldn't have been in this jam. Hell, he could have used the Google file system on a bunch of compromised hosts just for fun; that was a hack he hadn't played with, and I bet it would have kept him interested for a week. Until he found out how to make it more robust and obfuscate it to oblivion.

So what was my status? I was effectively locked in at home. The phone was monitored, if it could be used to make outside calls at all. They claimed they were watching my network access; I needed to test that.

I went to hushmail.com and created a new account. I was using HTTPS for everything, so I knew it should all be encrypted. I sent myself an email, which asked the bad guys when they were going to pony up their ‘materials.’ I built about half of the email by copy/pasting letters using the mouse, so that a keystroke logger, either a physical one or an X11 hack, wouldn't help them any.

I waited. Nothing happened. I read the last week of User Friendly; I was behind and needed a laugh. What would Pitr do? He would probably plug a laptop into the switch port where Charles' desktop was, in hope of having greater access from the VLAN Charles used.
Charles didn't share the same physical segment of the network in the closet or in the room. I thought that there could be more permissive firewall rules on Charles' network, or that perhaps I could sniff traffic from his other servers to get an idea about exactly what they were or weren't communicating on the wire. A bit of MAC poisoning would allow me to look like the machines I wanted to monitor, and act as a router for them. But I knew it would be fruitless. Charles would have nothing but cryptographic transactions, so all I'd get would be host and port information, not any of the actual data being transferred. And he probably had the MAC addresses hard-coded on the switch, so ARP poisoning wouldn't work, anyway.
But the main reason it wouldn't work was that the switch enforced port-based access control using IEEE 802.1x authentication. 802.1x is infrequently used on a wired LAN -- it's more common on wireless networks -- but it can be used to deny the ability to use layer 2 networking at all prior to authentication.

If I wanted to plug into the port where Charles had his computer, I'd need to unplug his box and plug mine in. As soon as the switch saw the link go away, it would disable the port. Then, when I plugged in, it would send an authentication request using EAP, the Extensible Authentication Protocol. In order for the switch to process my packets at all, I would need to authenticate using Charles' passphrase.
When I tried to authenticate, the switch would forward my attempt to the authentication server, a RADIUS box he had in the closet. Based on the user I authenticated as, the RADIUS server would put me on the right VLAN. Which meant that the only way I could get access to his port, in the way he would access it, would be to know his layer 2 passphrase. And probably spoof his MAC address, which I didn't know. I'd probably need to set up my networking configuration completely blind: I was sure he wouldn't have a DHCP server, and I bet every port had its own network range, so I wouldn't even see broadcasts that might help me discover the router's address.
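Speaking 802.1x from a Linux box is not hard in itself -- a wpa_supplicant setup along these lines would do it on an ordinary wired network (a sketch with made-up credentials and an arbitrary EAP method; I had no idea which one Charles actually used):

```
# /etc/wpa_supplicant.conf -- wired 802.1x; EAP-MD5 shown purely as an example
ap_scan=0
network={
    key_mgmt=IEEE8021X
    eap=MD5
    identity="charles"            # placeholder
    password="the-part-I-lacked"  # placeholder
    eapol_flags=0
}

# ...then run the supplicant against the wired driver:
#   wpa_supplicant -D wired -i eth0 -c /etc/wpa_supplicant.conf
```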
How depressing: I was sitting there, coming up with a million ways in which my task was impossible, without even trying anything.

I was awakened from my self-loathing when I received an email in my personal mailbox. It was PGP encrypted, but not signed, and included all the text of my Hushmail test message. Following that, it read, “We appreciate your test message, and its show of confidence in our ability to monitor you. However, we are employing you to get access to the data in the closet servers, not explore your boundaries. Below are instructions on how to download tcpdump captures from several hosts that seem to be part of a large distributed network which seems to be controlled from your apartment. This may or may not help in accessing the servers at your location.”

It was clear that they could decode even my SSL-encrypted traffic. Not good in general, pretty damned scary if they could do it in near real-time. 128-bit SSL should take even big three-letter agencies a week or so, given most estimates. This did not bode well.

If there was one thing I learned from living with Charles, it was that you always need to question your assumptions, especially about security. When you program, you need to assume that the user who is inputting data is a moron and types the wrong thing: a decimal number where an integer is required, a number where a name belongs. Validating all the input and being sure it exactly matches what you require, as opposed to barring what you think is bad, is the way to program securely. It stops the problem of the moron at the keyboard, and also stops the attacker who tries to trick you, say with an SQL injection attack. If you expect a string with just letters, and sanitize the input to match before using it, it's not possible for an attacker to slip in metacharacters you hadn't thought about that could be used to subvert your queries.
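The classic whitelist pattern looks something like this (a generic illustration, not code from anything I actually maintain):

```
#!/usr/bin/perl
# Whitelist validation: accept only the exact shape of input we expect,
# instead of trying to enumerate everything that might be "bad".
use strict;
use warnings;

my $username = shift @ARGV || '';

# Letters only, 1 to 32 of them; anything else is rejected outright, so SQL
# or shell metacharacters never make it into a query or a command line.
if ( $username =~ /^[A-Za-z]{1,32}$/ ) {
    print "OK: $username\n";
} else {
    die "Rejected: input does not match the expected pattern\n";
}
```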
Although it would seem these guys had infinite computing power, that was pretty unlikely. More likely, my desktop had been compromised. Perhaps they were watching my X11 session, in the same manner Charles used to display stuff on my screen. I sniffed my own network traffic using tcpdump to see if there was any unexpected traffic, but I knew that wasn't reliable if they'd installed a kernel module to hide their packets from user-space tools. None of the standard investigative tools helped: no strange connections visible by running netstat -nap, no strange logins via last, nothing helpful.

But I didn't think I was looking for something I'd be able to find, at least not if these guys were as good as I imagined. They were a step below Charles, but certainly beyond me.

If I wanted to really sniff the network, I needed to snag my laptop, assuming it wasn't compromised as well, then put it on a span port off the switch. I could sniff my desktop from there. Plenty of time to do that later, if I felt the need while my other deadline loomed. I had a different theory.

I tried to log into vulture, my Nessus box at the university, using my ssh keys. I run an ssh agent, a process that you launch when you log in, to which you can add your private keys. Whenever you ssh to a machine, the /usr/bin/ssh program contacts the agent to get a list of keys it has stored in memory. If any key is acceptable to the remote server, the ssh program lets the agent authenticate using that key. This allows a user to keep an ssh key passphrase-protected on disk but loaded into the agent and decrypted in memory, where it can authenticate without requiring the user to type a passphrase each time ssh connects to a system.

When I started ssh-agent, and when I added keys to it with ssh-add, I never used the -t flag to specify a lifetime. That meant my keys stayed in there forever, until I manually removed them, or until my ssh-agent process died. Had I set a lifetime, I would have had to re-add them when that lifetime expired. It was a good setting for users who worried that someone might get onto their machine as themselves or as root. Root can always contact your agent, because root can read any file, including the socket ssh-agent creates in /tmp. Anyone who can communicate with a given agent can use it to authenticate to any server that trusts those keys.

If Smith and his gang had compromised my machine, they could use it to log on to any of my shell accounts. But at least they wouldn't be able to take the keys with them trivially. The agent can actively log someone in by performing asymmetric cryptography (RSA or DSA algorithms) with the server itself, but it won't ever spit out the decrypted private key. You can't force the agent to output a passphrase-free copy of the key; you'd need to read ssh-agent's memory and extract it somehow. Unless my captors had that ability, they'd need to log into my machine in order to log into any of my shell accounts via my agent.
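Had I been more careful when I set things up, it would have looked more like this (the hour-long lifetime and the key path are just example values):

```
desktop$ eval `ssh-agent`                 # start the agent and point this shell at its socket
desktop$ ssh-add -t 3600 ~/.ssh/id_rsa    # load the key, but have the agent forget it after an hour
Enter passphrase for /home/glenn/.ssh/id_rsa:
desktop$ ssh-add -l                       # list what the agent is currently holding
```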
At the moment, I was just glad I could avoid typing my actual passwords anywhere they might have been able to get them.

I connected into vulture via ssh without incident, which was actually a surprise. No warnings meant that I was using secure end-to-end crypto, at least theoretically. I was betting on a proxy of some kind, given their ability to read my email. Just to be anal, I checked vulture's ssh public key, which lived in /etc/ssh/ssh_host_rsa_key.pub, as it does on many systems.
```
vulture$ cat /etc/ssh/ssh_host_rsa_key.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAcu0AjgGBKc2Iu[...]G38= root@vulture
```
This was the public part of the host key, converted to a human-readable form. When a user connects to an ssh server with the standard UNIX client, the client compares the host key the server presents against the user's local host key lists, which are in /etc/ssh/ssh_known_hosts and ~/.ssh/known_hosts. If the keys match, ssh will log the user in without any warnings. If they don't match, the user gets a security alert, and in some cases may not even be permitted to log in. If the user has no local entry, the client asks permission to add the key presented by the remote host to ~/.ssh/known_hosts.
I compared vulture's real key, which I had just printed, to the value I had in my local and global cache files:
```
desktop$ grep vulture ~/.ssh/known_hosts /etc/ssh/ssh_known_hosts
/etc/ssh/ssh_known_hosts: vulture ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAvPCH9IMinzL[...]0=
```
The two entries should have matched, but they didn't. After the first 23 characters, they weren't even close. Another idea popped into my head: Dug Song's dsniff had an ssh man-in-the-middle attack, but it would always cause clients to generate host key errors when they attempted to log into a machine for which the user had already accepted the host key earlier: the keys would never match. But someone else had come up with a tool that generated keys with fingerprints that looked similar to a cracker-supplied fingerprint. The theory was that most people only looked at part of the fingerprint, and if it looked close enough, they'd accept the compromised key.

Checking the fingerprints of vulture's host key and the one in my known_hosts file, I could see they were similar but not quite identical:
```
# Find the fingerprint of the host key on vulture
vulture$ ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub
1024 cb:b9:6d:10:de:54:01:ea:92:1e:d4:ff:15:ad:e9:fb vulture

# Copy just the vulture key from my local file into a new file
desktop$ grep vulture /etc/ssh/ssh_known_hosts > /tmp/vulture.key

# Find the fingerprint of that key
desktop$ ssh-keygen -l -f /tmp/vulture.key
1024 cb:b8:6d:0e:be:c5:12:ae:8e:ee:f7:1f:ab:6d:e9:fb vulture
```
So what was going on? I bet they had a transparent crypto-aware proxy of some kind between me and the Internet. Probably between me and the closet, if they could manage. If I made a TCP connection, the proxy would pick it up and connect to the actual target. If that target looked like an ssh server, it would generate a key that had a similar fingerprint for use with this session. It acted as an ssh client to the server, and an ssh server to me. When they compromised my desktop, they must have replaced the /etc/ssh/ssh_known_hosts entries with new ones they had pre-generated for the proxy. No secure ssh for me; it would all be intercepted.

SSL was probably even easier for them to intercept. I checked the X.509 certificate chain of my connection to that Hushmail account:
```
desktop$ openssl s_client -verify 0 -host www.hushmail.com -port 443 </dev/null >/dev/null
depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=thawte SGC CA
verify return:1
depth=0 /C=CA/2.5.4.17=V6G 1T1/ST=BC/L=Vancouver/2.5.4.9=Suite 203 455 Granville St./O=Hush Communications Canada, Inc./OU=Issued through Hush Communications Canada, Inc. E-PKI Manager/OU=PremiumSSL/CN=www.hushmail.com
verify return:1
DONE
```
Here were the results of performing an SSL certificate verification. The openssl s_client command opened up a TCP socket to www.hushmail.com on port 443, then read and verified the complete certificate chain that the server presented. By piping to /dev/null, I stripped out a lot of s_client certificate noise. By having it read from </dev/null, I convinced s_client to ‘hang up’, rather than wait for me to actually send a GET request to the Web server.

What bothered me was that the certificate chain was not a chain at all: it was composed of one server certificate and one root certificate. Usually you would have at least one intermediate certificate. Back last week, before my network had been taken over, it would have looked more like this:
```
desktop$ openssl s_client -verify -showcerts -host www.hushmail.com -port 443 </dev/null >/dev/null
depth=2 /C=US/O=GTE Corporation/OU=GTE CyberTrust Solutions, Inc./CN=GTE CyberTrust Global Root
verify return:1
depth=1 /C=GB/O=Comodo Limited/OU=Comodo Trust Network/OU=Terms and Conditions of use: http://www.comodo.net/repository/OU=(c)2002 Comodo Limited/CN=Comodo Class 3 Security Services CA
verify return:1
depth=0 /C=CA/2.5.4.17=V6G 1T1/ST=BC/L=Vancouver/2.5.4.9=Suite 203 455 Granville St./O=Hush Communications Canada, Inc./OU=Issued through Hush Communications Canada, Inc. E-PKI Manager/OU=PremiumSSL/CN=www.hushmail.com
verify return:1
DONE
```
In this case, depth 0, the Web server itself, was signed by depth 1, the intermediate CA, a company named Comodo, and Comodo's certificate was signed by the top-level CA, GTE CyberTrust. I hit a bunch of unrelated SSL-protected websites; all of them had their server key signed by the same Thawte certificate, with no intermediates at all. No other root CA, like Verisign or OpenCA, seemed to have signed any cert. Even Verisign's website was signed by Thawte!

It seemed that my captors had generated a new CA certificate, which they used to sign the certificates of all the Web servers I contacted. As Thawte is a well-known CA, they chose that name for their new CA, in the hope that I wouldn't notice. It looked as though they had set it as a trusted CA in Mozilla Firefox, and had also added it to my /etc/ssl/certs directory, which meant that it would be trusted by w3m and other text-only SSL tools. They generated each fake server certificate with the exact same name as the real website, too. My captors were certainly thorough.
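If I had wanted more proof, the planted root would not have been hard to dig up -- something along these lines (the file name is a guess; whatever they dropped in would simply be the newest thing in the directory):

```
# Recently modified files in the system-wide CA directory stand out
desktop$ ls -lt /etc/ssl/certs | head -5

# Then dump the suspect certificate's subject, issuer, and fingerprint
desktop$ openssl x509 -in /etc/ssl/certs/suspect-ca.pem -noout -subject -issuer -fingerprint
```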
Just as with the ssh proxy, the SSL proxy must have acted as a man-in-the-middle. In this case, they didn't even need to fake fingerprints: they just generated a key (caching it for later use, presumably) and signed it with their CA key, which they had forcibly made trusted on my desktop, so that it always looked legitimate.

So here I am, I thought, well and truly monitored. Crap. Well, time to look at this traffic dump they've got for me.

The email they sent provided me with the location of an ftp site, which hosted the tcpdump logs. It was approximately two gigabytes' worth of data, gathered from ten different machines. I pulled up each file in a different window of Ethereal, the slickest packet analyzer out there.

I could see why they thought the machine here served as the controller: each machine in the dumps talked to two or three other machines, but one of Charles' hosts here communicated with all of them. The communication that originated from the apartment was infrequent, but it seemed to set off a lot of communication between the other nodes. The traffic all occurred in what appeared to be standard IRC protocol.

I looked at the actual content inside the IRC data, but it was gibberish. Encrypted, certainly. I caught the occasional cleartext string that looked like an IP address, but these IPs were not being contacted by the slave machines, at least not according to these logs.

The most confusing part was that the traffic appeared almost completely unidirectional: the master sent commands to the slaves, and they acknowledged that the command was received, but they never communicated back to the master. Perhaps they were attacking or analyzing other hosts, and saving the data locally. If that was what was going on, I couldn't see it from these dumps. But a command from the server certainly triggered a lot of communication between the slave nodes.

I needed to ask them more about this, so I created two instant messaging accounts and started a conversation between them, figuring that my owners would be watching. I didn't feel like talking to them on the phone. Unsurprisingly, and annoyingly, they answered me right away.

-> Hey, about these logs, all I see is the IRC traffic. What's missing? What else are these boxes doing? Who are they attacking? Where did you capture these dumps from? Do I have everything here?

<- The traffic was captured at the next hop. It contains all traffic.

-> All traffic? What about the attacks they're coordinating? Or the ssh traffic? Anything?

<- The dumps contain all the traffic. We did not miss anything. Deal.
Now they were getting pissy; great.
I returned to analyzing the data. Extracting the data segment of each packet, I couldn't see anything helpful. I stayed up until 3 A.M.
Suffice it to say, my dreams were not pleasant.
I woke in the morning, showered (which deviated from the “live in Charles' shoes” model, but I've got standards!), and got back to the network traffic dumps.
For several hours I continued to pore over the communications. I dumped out all the data and tried various cryptographic techniques to analyze it. There were no appreciable repeating patterns, the characters seemed evenly distributed, and the full 0-255 byte range was represented. In short, it all looked as though it had been encrypted with a strong cipher. I didn't think I would get anywhere with the data.
The thing that continued to bug me was that these machines were talking over IRC and nothing else. Perhaps there were attacks occurring, or they were sharing information. I messaged my captors again:

-> What was running on the machines? Were they writing to disk? Anything there that helps?

<- The machines seemed to be standard webservers, administered by folks without any security knowledge. Our forensics indicate that he compromised the machines, patched them up, turned off the original services, and ran a single daemon that is not present on the hard drive -- when they were rebooted, the machines did not have any of the communication seen previously.

-> Is there nothing? Just this traffic? Why do you think this is related to the data that is here?

<- The data we're looking for was stored on 102.292.28.10, which is one of the units in your dump logs. We have no proof of it being received back at his home systems, but in previous cases where he has acquired data that was lost from off-site servers, he was able to recover it from backups, presumably here.

I still don't see how that could be: the servers here would send packets to the remote machines, but they did not receive any data from them, save the ACK packets.
Actually, that might be it, I thought: could Charles be hiding data in the ACKs themselves? If he put data inside otherwise unused bits of the TCP headers, he could slowly accumulate the bits and reassemble them.
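Eyeballing the header fields was easy enough with tcpdump (the capture file name below is a stand-in for whichever of their dumps I was poking at):

```
# Print the bare ACKs (no payload pushed) from one capture, verbosely and in
# hex, so fields like the IP ID, sequence numbers, window size, and urgent
# pointer can be inspected for smuggled bits.
desktop$ tcpdump -r slave01.dump -nn -vv -x 'tcp[tcpflags] & tcp-ack != 0 and tcp[tcpflags] & tcp-push == 0'
```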
So, rather than analyzing the data segments, I looked at the bits in the ACKs, and applied more cryptanalysis. A headache started. Another damned pizza showed up at the door, and I snacked on it, my stomach turning all the while.

I came to the conclusion that I absolutely hated IRC. It was the stupidest protocol in the world. I've never been a fan of dual-channel protocols -- they're not clean, they're harder to firewall, and they just annoy me. What really surprised me was that Charles was using it: he always professed a hatred of it, too.

At that point, I realized this was insane. There was no way he'd have written this for actual communications. Given the small number of ACK packets being sent, it couldn't possibly be transferring data back here at a decent rate. The outbound commands did trigger something, but it seemed completely nonsensical. I refused to believe this was anything but a red herring, a practical joke, a way to force someone -- me, in this case -- to waste time. I needed to take a different tack.

Okay, I thought, let's look at something more direct. There's got to be a way to get in. Think like him.

Charles obsessed about not losing anything. He had boatloads of disk space in the closet, so he could keep a month or two of backups from his numerous remote systems. He didn't want to lose anything. I didn't see why he would allow himself to be locked out of what he had in the apartment. When he was out and about, he must have had remote access.

He kept everything in his head. In a pinch, without his desktop tools, without his laptop, he'd have a way to get in. Maybe not if he were stuck on a Windows box, but if he had vanilla user shell access on a UNIX box, he'd be able to do whatever was necessary to get in here. And that meant a little obfuscation and trickery, plus a boatload of passwords and secrets.

Forget this IRC bullshit, I thought, I bet he's got ssh access, one way or another.

I performed a portscan from vulture -- after all, I'm still logged in -- on the whole IP range. Almost everything was filtered. Filtering always makes things take longer, which is a royal pain. I ate some more pizza -- I had to get in his head, you know.

I considered port knocking, a method wherein packets sent to predetermined ports will trigger a relaxation of firewall rules. This would allow Charles to open up access to the ssh port from an otherwise untrusted host. I doubted that he would use port knocking: either he'd need to memorize a boatload of ports and manually connect to them all, or he'd want a tool that included crypto as part of the port-choosing process. I didn't think he would want either of those: they were known systems, not home-grown. Certainly he'd never stand for downloading someone else's code in order to get emergency access into his boxes. Writing his own code on the fly was one thing; using someone else's was anathema.
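For reference, the usual iptables flavor of port knocking looks something like this bare-bones illustration using the 'recent' match -- not anything I expected to find on Charles' boxes:

```
# Hitting closed port 1234 records the source address for a short while...
iptables -A INPUT -p tcp --dport 1234 -m recent --name KNOCK --set -j DROP

# ...and only recorded addresses may then open an ssh connection.
iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCK --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```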
Port scans came up with one open port, 8741. Nothing I'd heard of lived on that port. I ran nmap -sV, nmap's version fingerprinting, which works like OS fingerprinting, but for network services. It came up with zilch. The TCP three-way handshake succeeded, but as soon as I sent data to the port, it sent back a RST (reset) and closed the connection.
This was his last ditch back door. It had to be.
I wrote a Perl script to see what response I could get from the back door. My script connected, sent a single 0 byte (0x00), and printed out any response. Next, it would reconnect, send a single 1 byte (0x01), and print any response. Once it got up to 255 (0xFF), it would start sending two-byte sequences: 0x00 0x00, 0x00 0x01, 0x00 0x02, and so on. Lather, rinse, repeat.
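The script was nothing fancy -- roughly along these lines, shown here for single bytes only, with a placeholder address rather than the real target:

```
#!/usr/bin/perl
# Probe the mystery port: connect, send one candidate byte, and print
# whatever (if anything) comes back.
use strict;
use warnings;
use IO::Socket::INET;

my ( $host, $port ) = ( '198.285.22.10', 8741 );    # placeholder target

for my $byte ( 0 .. 255 ) {
    my $sock = IO::Socket::INET->new(
        PeerAddr => $host,
        PeerPort => $port,
        Proto    => 'tcp',
        Timeout  => 5,
    ) or next;

    syswrite $sock, chr($byte);
    my $reply = '';
    my $got = sysread $sock, $reply, 8192;
    printf "0x%02x -> %s\n", $byte, $got ? unpack( "H*", $reply ) : "(nothing)";
    close $sock;
}
```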
Unfortunately, I wasn't getting anything from the socket at all. My plan was to enumerate every possible string from 1 to 20 or so bytes. Watching the debug output, it became clear that this was not feasible: there are 2^(8*20) different twenty-character strings, which works out to a 49-digit number (and we're talking decimal, not binary). If I limited my tests to just lowercase letters, which carry less than 5 bits of entropy each instead of the full 8 bits of an arbitrary byte, that would still be 2^(5*20), a 31-digit number. There was no way I could get even close to trying them all; I didn't know what I had been thinking.
So, instead of trying to hit all strings, I just sent in variations of my /usr/share/dict/words file, which contained about 100,000 English words, as well as a bunch of combinations of two words from the file. While it ran, I took the opportunity to emulate my favorite hacker/cracker for a while, surfing Groklaw with my right hand and munching on the revolting pizza, which I held in my left hand. Reading the latest SCO stories always brought a bit of reality back for me.

My brute force attempt using /usr/share/dict/words finally completed. Total bytes received from Charles' host: zilch, zero, nothing. Was this another thing he left to annoy people? A tripwire that, once hit, automatically added the offender to a block list for any actual services? Had I completely wasted my time?

I decided to look at the dumps in Ethereal, in case I was wrong and there had been data sent by his server that I hadn't been reading correctly. Looking at the dumps, which were extremely large, I noticed something odd.

First, I wasn't smoking crack: the server never sent back any data, it just closed the connection. However, it closed the connection in two distinct ways. The most common disconnect occurred when the server sent me a RST packet. This was the equivalent of saying “This connection is closed, don't send me anything at all any more in it, I don't even care if you get this packet, so don't bother letting me know you got it.” A RST is a rude way of closing a connection, because the system never verifies that the other machine got the RST; that host may think the connection is still open.

The infrequent connection close I saw in the packet dumps was a normal TCP teardown: the server sent a FIN|ACK, and waited for the peer to acknowledge, resending the FIN|ACK if necessary. This polite teardown is more akin to saying “I'm shutting down this connection, can you please confirm that you heard me?”

I couldn't think of a normal reason this would occur, so I investigated. It seemed that every connection I established that sent either 1 or 8 data characters received the polite teardown.
Every connection in which I sent a one- or eight-character string was shut down politely, regardless of the data the string contained. So, rather than worrying about the actual data, I tried sending random packets of 1 to 500 bytes. The string lengths 1, 8, 27, 64, 125, 216, and 343 were all met with a polite TCP/IP teardown, and the rest were shut down with RST packets.

Now I knew I was on to something. He was playing number games. All the connections with a proper TCP shutdown had data lengths that were cubes! 1^3, 2^3, 3^3, and so on. I had been thinking about my data length, but more likely Charles had something that sent resets when incoming packet lengths weren't on his approved list. I vaguely remembered a '--length' option for iptables -- maybe he used that. More likely he patched his kernel for it, just because he could.

I got out my bible, W. Richard Stevens' *TCP/IP Illustrated*. Add the Ethernet, IP, and TCP headers together, and you get 54 bytes. Any packet sent from a client will also carry some TCP options, such as a timestamp, maximum segment size, windowing, and so on. These are typically 12 or 20 bytes long from the client, raising the effective minimum size to 66 bytes; that's without actually sending any data in the packet.
For every byte of data, you add one more byte to the total frame. Charles had something in his kernel that blocked any packets that weren't 66 + (x^3) bytes long.
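With a stock netfilter setup, the effect could be approximated with the '-m length' match -- though that match counts the IP packet rather than the whole Ethernet frame, so the magic numbers become 52 + (x^3) instead of 66 + (x^3). A rough sketch of the idea, not what Charles actually ran:

```
# 52 = IP + TCP headers with 12 bytes of options, i.e. zero data bytes
iptables -A INPUT -p tcp --dport 8741 -m length --length 52 -j ACCEPT   # x = 0
iptables -A INPUT -p tcp --dport 8741 -m length --length 53 -j ACCEPT   # x = 1
iptables -A INPUT -p tcp --dport 8741 -m length --length 60 -j ACCEPT   # x = 2 (a typical SYN)
iptables -A INPUT -p tcp --dport 8741 -m length --length 79 -j ACCEPT   # x = 3
iptables -A INPUT -p tcp --dport 8741 -m length --length 116 -j ACCEPT  # x = 4
# ...and so on; everything else on the port gets a reset.
iptables -A INPUT -p tcp --dport 8741 -j REJECT --reject-with tcp-reset
```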
If I could control the amount of data sent in any packet, I could be sure to send packets that wouldn't reset the connection. Every decent programming language has a 'send immediately, without buffering' option. Unix has the write(2) system call, for example, and Perl calls that via syswrite. But what about packets sent by the client's kernel itself? I never manually sent SYN or ACK packets at connection initiation time; that was the kernel's job.

Again, with Stevens at the ready, I saw that the 66 + (x^3) rule already handled this. A lone ACK, without any other data, would be exactly 66 bytes long -- in other words, x == 0. A SYN packet was always 74 bytes long -- x == 2. Everything else could be controlled by using as many packets with one data byte as necessary. A user-space tool that intercepted the client's outgoing data and broke it up into the right chunks would be able to work on any random computer, without any alterations to its TCP/IP stack.
This is too mathematical -- a sick and twisted mind might say elegant -- to be coincidence. I drew up a chart.
Charles’ Acceptable Packet Lengths
Data length | Data length significance | Total Ethernet Packet Length | Special matching packets
---|---|---|---
0 | 0 cubed | 66 | ACK packets (ACK, RST\|ACK, FIN\|ACK)
1 | 1 cubed | 67 |
8 | 2 cubed | 74 | SYN (connection initiation) packets
27 | 3 cubed | 93 |
64 | 4 cubed | 130 |
125 | 5 cubed | 191 |
216 | 6 cubed | 282 |
343 | 7 cubed | 409 |
Where he came up with the idea for this shit, I didn't know. But I was feeling good: this had his signature all over it. This was a number game that he could remember, and software he could recreate in a time of need.

I needed to write a proxy that would break up data I sent into packets of appropriate size. The ACKs created by my stack would automatically be accepted; no worries there.

Still, I felt certain this was an ssh server, but I realized that an ssh server should be sending a banner to my client socket, and this connection never sent anything.
Unless he's obfuscating again, I thought.
I realized that I needed to whip up a Perl script, which would read in as much data as it could, and then send out the data in acceptably-sized chunks. I could have my ssh client connect to it using a ProxyCommand. After a bit of writing, I came up with something:
```
desktop$ cat chunkssh.pl
#!/usr/bin/perl
use warnings;
use strict;
use IO::Socket;

my $debug = shift @ARGV if $ARGV[0] eq '-d';
my $ssh_server = shift @ARGV;
die "Usage: $0 ip.ad.dr.es\n" unless $ssh_server and not @ARGV;

my $ssh_socket = IO::Socket::INET->new(
    Proto    => "tcp",
    PeerAddr => $ssh_server,
    PeerPort => 22,
) or die "cannot connect to $ssh_server\n";

# The data 'chunk' sizes that are allowed by Charles' kernel
my @sendable = qw( 1331 1000 729 512 343 216 125 64 27 8 1 0 );

# Parent will read from SSH server, and send to STDOUT,
# the SSH client process.
if ( fork ) {
    my $data;
    while ( 1 ) {
        my $bytes_read = sysread $ssh_socket, $data, 9999;
        if ( not $bytes_read ) {
            warn "No more data from ssh server - exiting.\n";
            exit 0;
        }
        syswrite STDOUT, $data, $bytes_read;
    }

# Child will read from STDIN, the SSH client process, and
# send to the SSH server socket only in appropriately-sized
# chunks. Will write chunk sizes to STDERR to prove it's working.
} else {
    while ( 1 ) {
        my $data;

        # Read in as much as I can send in a chunk
        my $bytes_left = sysread STDIN, $data, 625;

        # Exit if the connection has closed.
        if ( not $bytes_left ) {
            warn "No more data from client - exiting.\n" if $debug;
            exit 0;
        }

        # Find biggest chunk we can send, send as many of them
        # as we can.
        for my $index ( 0 .. $#sendable ) {
            while ( 1 ) {
                if ( $bytes_left >= $sendable[$index] ) {
                    my $send_bytes = $sendable[$index];
                    warn "Sending $send_bytes bytes\n" if $debug;
                    syswrite $ssh_socket, $data, $send_bytes;
                    # Chop off our string
                    substr( $data, 0, $send_bytes, '' );
                    $bytes_left -= $send_bytes;
                } else {
                    last;    # Let's try a different chunk size
                }
            }
            last unless $bytes_left;
        }
    }
}
```
I ran it against my local machine to see if it was generating the right packet data sizes:
```
desktop$ ssh -o "proxycommand chunkssh.pl -d %h" 127.0.0.1 'cat /etc/motd'
Sending 216 bytes
Sending 216 bytes
Sending 64 bytes
Sending 8 bytes
Sending 8 bytes
Sending 8 bytes
Sending 1 bytes
...
Sending 27 bytes
Sending 1 bytes
#####################################
####  Glenn's Desk. Go Away...  ####
#####################################
Sending 8 bytes
Sending 1 bytes
No more data from client - exiting.
```
I used an SSH ProxyCommand, via the -o flag. This told /usr/bin/ssh to run the chunkssh.pl program, rather than actually initiate a TCP connection to the ssh server. My script connected to the actual ssh server, getting the IP address from the %h macro, and shuttled data back and forth. A ProxyCommand could do anything, for example routing through an HTTP tunnel, bouncing off an intermediate ssh server, you name it. All I had here was something to send data to the server only in predetermined packet lengths.

So, with debug on, I saw all the byte counts being sent, and they adhered to the values I had reverse engineered. Without debug on, I would just see a normal ssh session.

I've still got the slight problem that the server isn't sending a normal ssh banner -- usually the server sends its version number when you connect:
```
desktop$ nc localhost 22
SSH-2.0-OpenSSH_3.8.1p1 Debian-8.sarge.4
```
My Perl script needed to output an ssh banner for my client. I didn't know what ssh daemon version Charles ran, but recent OpenSSH servers were all close enough that I hoped it wouldn't matter. I added the following line to my code to present a faked ssh banner to my /usr/bin/ssh client:
```
if ( fork ) {
    my $data;
    # Present the faked banner to the ssh client once, before relaying server data
    print "SSH-1.99-OpenSSH_3.8.1p1 Debian-8.sarge.4\n";
    while ( 1 ) {
        my $bytes_read = sysread $ssh_socket, $data, 9999;
        ...
```
That would advertise the server as supporting SSH protocol 1 and 2 for maximum compatibility. Now it was time to see if I was right -- if this was indeed an ssh server:
```
desktop$ ssh -v -lroot -o "ProxyCommand chunkssh.pl %h" 198.285.22.10
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
...
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-md5 none
debug1: kex: client->server aes128-cbc hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
root@198.285.22.10's password:
```
Yes!
I had only one problem: what the hell was his password? And did he use root, or a user account? I could have brute-forced this all year without any luck. I needed to call on someone with more resources.
-> "I need his username and password."
<- "Yes, we see you made great progress. We don't havehis passwords though. If we did, we'd take it from here."
-> "You're completely up to speed on my progress? Ihaven't even told you what I've done! Are you monitoring from the network?Have you seen him use this before? Give mesomething to work withhere!"
<- "We told you we're monitoring everything. Here's whatwe do have. Uploaded to the same ftp site are results from a keystroke loggerinstalled on his system. Unfortunately, he's found some way to encrypt thedata."
-> "No way could you have broken into his computer andinstalled a software keystroke logger. That means you've installed hardware.But he checks the keyboard cables most every time he comes in - if you'dinstalled Keyghost or something, he'd have noticed -- it's small, but it'snoticeable to someone with his paranoia. No way."
<- "Would you like the files or not? "
-> "Yeah, fuck you too, and send them over."
I was getting more hostile, and I knew that was not good. There was no way they could have installed a hardware logger on his keyboard: those things are discreet, but if you knew what you were looking for, they were easy to spot. I wouldn't have been surprised if he had something that detected when the keyboard was unplugged, to defeat that attack vector. I downloaded the logs...
01/21 23:43:10 x
01/21 23:43:10 8 1p2g1lfgj23g2/ [cio
01/21 23:43:11 ,uFeRW95@694:l|ItwXn
01/21 23:43:13 cc
01/21 23:43:13 x ggg
01/21 23:43:13 o. x,9a [ F | 8 xi@x.7xdqz -x7o Goe9-
01/21 23:43:14 a g [n7wq rysv7.q[,q.r{b7ouqno [b.uno
01/21 23:43:15 .w U 6yscz h7,q 8oybbqz cyne 7eyg
01/21 23:43:19 qxhy oh7nd8 ay. cu8oqnuneg
The text was completely garbled. It included timestamps, which was helpful. Actually, it was rather frightening: they had been monitoring him for the last two months. More interesting was the fact that the latest entries were from that morning. When I knocked over the pop bottle on his keyboard.
Hardware keyloggers, at least the ones I was familiar with, had a magic password: go into an editor and type the password, and the logger would dump out its contents. But you needed to have the logger inline with the keyboard for that to work. If they had retrieved the keylogger while I slept, I was sleeping more soundly than I'd thought.
Or perhaps it was still there. I went under the desk and looked around, but the keyboard cable was completely normal, with nothing attached to it. But how else could they have seen my klutz maneuver last night? Did someone make a wireless keylogger? I had no idea. How would I know?
They'd been monitoring for two months, from the looks of it. On a hunch, I went to our MRTG graphs. Charles was obsessed with his bandwidth (though I was sure he didn't pay for it), so he liked to take measurements via SNMP and have MRTG graph traffic usage. One of the devices he monitored was the wireless AP he built for the apartment. He only used it for surfing Slashdot while watching Sci-Fi episodes in the living room. On my laptop, naturally.
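MRTG hides the plumbing, but the poll underneath is a single SNMP read of the interface's octet counters every five minutes. A rough sketch, assuming the AP answers SNMP v1 at 192.168.1.1 with the default 'public' community and the wireless interface sits at ifIndex 2 (all placeholder guesses, not Charles' actual setup):

#!/usr/bin/perl
# Poll an interface's traffic counters over SNMP -- the same raw numbers
# MRTG graphs.  Address, community string, and ifIndex are placeholders.
use strict;
use Net::SNMP;

my $if_index = 2;                                   # assumed wireless interface
my $in_oid   = "1.3.6.1.2.1.2.2.1.10.$if_index";    # IF-MIB::ifInOctets
my $out_oid  = "1.3.6.1.2.1.2.2.1.16.$if_index";    # IF-MIB::ifOutOctets

my ( $session, $error ) = Net::SNMP->session(
    -hostname  => '192.168.1.1',
    -community => 'public',
);
die "SNMP session failed: $error\n" unless $session;

my $result = $session->get_request( -varbindlist => [ $in_oid, $out_oid ] );
die "SNMP query failed: " . $session->error . "\n" unless $result;

print "in:  $result->{$in_oid} octets\n";
print "out: $result->{$out_oid} octets\n";
$session->close();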
Going back to the date when the keystroke logs started, there was a dip of approximately five percent in the bandwidth we'd been able to use on the wireless network. Not enough of a hit to worry about, but I bet the interference came from them sending keystroke information wirelessly. Probably from my keyboard too. They must have hooked into the keys themselves, somewhere inside the keyboard case rather than at the end of the keyboard's cable. I'd never heard of such a device, which made me worry more.
Of course I could just be paranoid again, I thought, but at this point, I'd call that completely justified.
So now the puzzle: if Charles knew about the keystroke logger, why did he leave it there? And if he didn't know, how did he manage to encrypt it?
I went over to Charles' keyboard. His screen was locked, so the system wouldn't care about what I typed. I typed the phrase "Pack my box with five dozen liquor jugs," the shortest sentence I knew that used all 26 English letters.
-> "Hey, what did I just type on Charles' machine?"
<- "Sounds like you want to embark on a drinking binge,why?"
I didn't bother to answer.
Keyboard keys worked normally when he wasn't logged in, so whatever he did to encrypt it didn't occur until he logged in. I bet that these guys tried using the screensaver password to unlock it. They must not have known that you needed to have one of the USB fobs from the drawer, and the one he kept with him. Without them, the screensaver wouldn't even try to authenticate your password. Another one of his customizations.
Looking at their keystroke log, I saw that the keyboard output was all garbled -- but garbled within the printable ASCII range. If it were really encrypted, you would expect an equal probability of any byte from 0 through 255. I ran the output through a simple character counter, and discovered that the letters were not evenly distributed at all!
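The counter was a throwaway few lines of Perl. Something along these lines, reading the log on standard input, does the job (the regex assumes the MM/DD HH:MM:SS timestamp format shown above):

#!/usr/bin/perl
# Tally how often each character shows up in the keystroke log.
use strict;

my %count;
while ( my $line = <STDIN> ) {
    $line =~ s/^\d\d\/\d\d \d\d:\d\d:\d\d //;    # strip the leading timestamp
    $count{$_}++ for split //, $line;
}

# most frequent characters first
for my $char ( sort { $count{$b} <=> $count{$a} } keys %count ) {
    my $label = $char eq ' '  ? 'SPACE'
              : $char eq "\n" ? 'NEWLINE'
              :                 $char;
    printf "%-8s %d\n", $label, $count{$char};
}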
Ignoring the letters themselves, it almost looked like someone working at a command line. Lots of short words (UNIX commands like ls, cd, and mv?), lots of newlines, and spaces about as frequent as in my own bash sessions.
But that implied a simple substitution cipher, like the good old-fashioned ROT-13 cipher, which rotates every letter 13 characters down the alphabet: "A" becomes "N", "B" becomes "O", and so on. If this was a substitution cipher, and I knew the context was going to be lots of shell commands, I could do this.
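ROT-13 is a one-liner in Perl: a single tr/// does the rotation, and since 13 is half the alphabet, the same code both encodes and decodes. The sample string here is just an illustration:

#!/usr/bin/perl
# ROT-13: rotate every letter 13 places; applying it twice gets you back.
my $text = "Uryyb, jbeyq";
( my $plain = $text ) =~ tr/A-Za-z/N-ZA-Mn-za-m/;
print "$plain\n";    # prints "Hello, world"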
First, what properties did the shell have? Unlike English, where I would try to figure out common short words like "a," "on," and "the," I knew I should look for Linux command names at the beginning of lines. And commands take arguments, which meant I should be able to quickly identify the dash character: it would appear once or twice at the beginning of many 'words' in the output, as in -v or --debug. Instead of looking for "I" and "a" as single-character English words, I hoped to find the "|" between commands, piping the output of one program into another, and the "&" at the end that puts commands in the background.
Time for some more pizza, I thought; this stuff grows on you.
Resting there, pizza in the left hand, right hand on the keyboard, I thought: this is how he works. He uses two hands no more than half the time. He's either holding food, on the phone, or turning the pages of a technical book with his left hand.
Typing one-handed.
One-handed typing.
It couldn't be that simple.
I went back to my machine, and opened up a new xterm. I set the "secure keyboard" option, so no standard X11 hacks could see my keystrokes. I took quite some time to copy and paste the command setxkbmap Dvorak-r, so as to avoid using the keyboard itself. I prefixed it with a space, to make sure it wouldn't enter my command history. This was all probably futile, but I thought I was on the home stretch, and I didn't want to give that fact away to my jailers. They may have been able to see my Hushmail email that first night, even when I copied and pasted, but that was because the email went across the network, which was compromised. These cut/paste characters were never leaving my machine, so I figured they shouldn't be able to figure out what I was doing.
I picked a line that read " o. x,9a [ F | 8 xi@x.7xdqz -x7o Goe9-" -- it looked like an average-sized command. I typed it on my keyboard, which had the letters in the standard QWERTY locations. As I did so, my new X11 keyboard mapping, set via the setxkbmap command, translated them to the right-handed Dvorak keyboard layout. On my screen appeared an intelligible UNIX command:
tr cvzf - * | s cb@cracked 'cat >tgz'
No encryption at all. Charles wasn't using a QWERTY keyboard. The keystroke logger logged the actual keyboard keys, but he had them re-mapped in software.
The Dvorak keyboard layouts, unlike the QWERTY layouts, were built to be faster and easier on the hands: no stretching to reach common letters, which sit on the home row. The left- and right-handed Dvorak layouts were designed for people with only one hand: modifications of Dvorak that try to put all the most important keys under that hand. You'd need to stretch a long way to reach the percent key, but the alphabetic characters were right under your fingers. I'd known a lot of geeks who switched to Dvorak to save their wrists -- carpal tunnel is a bad way to end a career -- but I'd never known anyone with two hands to switch to a single-hand layout. I don't know if Charles switched because of his need to multitask with work and food, or for some other reason, but that was the answer. And I certainly didn't want to think about what he'd be doing with his free hand if he didn't have food in it. But it was too late: that image was in my mind.
Dvorak Keyboard
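Scripting the translation would do the same job as my setxkbmap trick: strip the timestamp, then push each logged character -- the QWERTY label on the physical key -- through a table of what that key produces under the remapped layout. The sketch below is filled in with the ordinary two-handed Dvorak rows purely for illustration; a log like Charles' would need the right-hand Dvorak rows instead, and shifted keys are passed through untouched:

#!/usr/bin/perl
# Translate keystroke-logger output (QWERTY key labels) back into what the
# typist actually saw under a remapped layout.  The layout strings below are
# the ordinary two-handed Dvorak rows, for illustration only.
use strict;

my $qwerty   = q{qwertyuiop[]asdfghjkl;'zxcvbnm,./};
my $remapped = q{',.pyfgcrl/=aoeuidhtns-;qjkxbmwvz};

my %map;
@map{ split //, $qwerty } = split //, $remapped;

while ( my $line = <STDIN> ) {
    my $stamp = '';
    $stamp = $1 if $line =~ s/^(\d\d\/\d\d \d\d:\d\d:\d\d )//;
    my $decoded = join '', map { exists $map{$_} ? $map{$_} : $_ } split //, $line;
    print $stamp, $decoded;
}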
One of the things that probably defeated most of the 'decryption' attempts was that he seemed to have a boatload of aliases. Common UNIX commands like cd, tar, ssh, and find were shortened to c, tr, s, and f somewhere in his .bashrc or equivalent. Man, Charles was either efficient or extremely lazy. Probably both.
Now I was stuck with an ethical dilemma. I knew I could look through that log and find a screensaver password; it would be a very long string, typed after a long period of inactivity. That would get me into his desktop, which might have ssh keys in memory. Sometime in the last two months, he must have typed the password to some of the closet servers, and now I had the secret to his ssh security.
I got this far because I'd known Charles a long time. Knew how he thought, how he worked. Now I was faced with how much I didn't know him.
I had been so focused on getting into these machines that I hadn't thought about what I'd do once I got in. What did Charles have stashed away in there? Were these guys the good guys or the bad guys? And what would they do if I helped them -- or stopped them?
I couldn't sit there, cutting and pasting letters all evening so they couldn't see my keystrokes. They would get suspicious. Actually, that wouldn't work anyway: I would need to type on the keyboard to translate the logs. I would need to download a picture of the layout, and then they'd see me doing so, and know what I'd discovered.
Charles, I thought, I wish I knew what the hell you've gotten me into.
The End
If you enjoyed this chapter, you may enjoy the other nine story chapters. My next-favorite chapter is Tom Parker's Ch9 (especially the Bluetooth hacking details). Many of the others are great as well. Here is the Table of Contents. You can buy the whole book at Amazon and save $14. You might also enjoy Return on Investment, Fyodor's free chapter from Stealing the Network: How to Own a Continent.
