Alkahest my heroes have always died at the end

December 18, 2009

“Hacking” predator drones

Filed under: Security,Technical — cec @ 1:05 pm

This just makes me sad.  Two articles, one in the WSJ, the other on CNN, describing how insurgents in Iraq are hacking predator drones and receiving the video feeds that the drones are sending back to U.S. ground stations.   First things first, let’s fix the headlines.  Both are running something like “Iraqi insurgents hacked Predator drone feeds.”  That should more clearly read:  “Iraqi insurgents watching the videos that the Predator drone sends out unencrypted.”  Or maybe “Iraqi insurgents watch Predator drone feeds on TV.”

If you look into the article, you find that the insurgents are apparently using a $26 piece of software that takes satellite data and saves parts of it that might not be intended for your computer.  Essentially, it monitors the data being sent, and when it sees a file transferred, it saves the file to your hard drive, regardless of whether or not your computer was the intended destination.

Now, I’ve been doing computer security work for over a decade.  I was the first person at my university to implement anti-virus in email, I was the first to require a department to use all-encrypted communication for transmitting passwords.  I discovered one of the earliest IRC-based botnets.  I’ve found vulnerabilities in financial systems.  I’ve seen … [a]ttack ships on fire off the shoulder of Orion. I’ve watched C-beams glitter in the dark near the Tannhauser Gate.  Er, some of that last bit may have been someone else, but you get the idea.

This stuff isn’t that hard.  SSL is over 15 years old; we know how to do encryption.  Hell, back in the 90s when we were developing the Predator, the U.S. was treating encryption as a munition – you had to get the government’s blessing to use decent encryption.  Is it too much to ask that an actual weapon include the munition that is encryption?  And this, from the WSJ article, strikes me as BS:

Predator drones are built by General Atomics Aeronautical Systems Inc. of San Diego. Some of its communications technology is proprietary, so widely used encryption systems aren’t readily compatible, said people familiar with the matter.

In an email, a spokeswoman said that for security reasons, the company couldn’t comment on “specific data link capabilities and limitations.”

Or more to the point, entirely irrelevant.  First, the communication system can’t be *that* proprietary, since the commercial (if somewhat sketchy) SkyGrabber software can read the transmissions.  Second, you developed a proprietary communication system in the mid-to-late 90s and didn’t include encryption?  That’s the sort of thing that makes the baby Bruce Schneier cry.

On the other hand, this from CNN seems far more likely:

A senior defense official who was not authorized to speak about the security breach said, “This was an old issue for us and it has been taken care of,” but he would not elaborate on what specifically had been taken care of.

The official said that many of the UAV feeds need to be sent out live to numerous people at one time, and encryption was found to slow the real-time link. The encryption therefore was removed from many feeds.

Removing the encryption, however, allowed outsiders with the correct tools to gain unauthorized access to these feeds.

I’ll buy that.   There are certainly a few encryption schemes that will send encrypted data to multiple parties; hell, at the very least, you could use symmetric encryption with shared keys.  But that kinda sucks.  Most commercial communication encryption technology assumes point-to-point transfers.  If you want to send the same data to many people… you send it multiple times.
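The shared-key option above is easy to sketch.  This is a conceptual illustration only – a hash-derived keystream stands in for a real cipher like AES-GCM, and the key-distribution problem is waved away – so treat it as a sketch of the idea, not anything you’d fly on a drone:

```python
import hashlib
import secrets

def keystream(key, nonce, length):
    # Derive a keystream by hashing key+nonce+counter (CTR-style).
    # A real system would use a vetted cipher; this just shows the shape.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = secrets.token_bytes(16)          # fresh nonce per message
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key, blob):
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

# One shared key provisioned to every ground station ahead of time:
shared_key = secrets.token_bytes(32)
frame = b"UAV video frame 0042"
broadcast = encrypt(shared_key, frame)       # sent once, to everyone
# Every receiver holding the key recovers the same frame:
assert decrypt(shared_key, broadcast) == frame
```

The point being: you encrypt once and broadcast once, and anyone *without* the key gets noise – which is exactly the property the drone feeds were missing.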

Regardless, this is just embarrassing.  These days I’m doing security modelling work and if this is the sort of thing that we’ll have to consider, I’m going to sink into

November 11, 2009

Great moments in . . .

Filed under: Random,Security,Technical — cec @ 8:39 pm

Minor notes, none worth their own post.

  • Traffic management: I get a call from K around 5:30. She’s stuck behind an accident and the cops on the scene, a) don’t tell people to take a detour until they’ve been there for a half hour; and b) once the ambulance has left the scene, don’t direct traffic around the one remaining open lane. So, after waiting a half hour, K has to take a 20+ minute detour home.
  • Memory: Once she gets in, K and I are fixing leftovers for dinner. C: “Hey, where are the mashed potatoes?” K: “Where did you put them?” “In the fridge, but I can’t find them.” “Maybe they’re in the freezer.” “Nope, not there either.” Ten minutes of looking for the potatoes. Did we throw them out on Sunday? Nope, not in the trash. Did C put them in the pantry? Nope. Can’t find ’em, can’t find ’em. Finally, K says, “wait, we fixed rice on Sunday.” There weren’t any potatoes. I would attribute it to getting old, but I’ve always been this way.
  • FUD (fear, uncertainty and doubt): we’re testing some things at the office – will our authentication system (Active Directory) honor password-failure lockouts when using LDAP authentication? I ask our Windows consultant to either a) answer the question, or b) enable an account lockout policy so we can test. He responds that he can do that, but with the warning that “many Linux services aren’t well-designed for this, and repeatedly try a cached or user-provided password, so that users or service accounts may be mysteriously locked out after one attempt or at some future time when passwords change.” Which is complete and utter B.S. Signs that it’s B.S.? He references Linux services as opposed to open source, i.e., an attempted Linux dig. And I used to “own” identity management services, including authentication, at a large university – if this were the case, things would have blown up within 10 minutes. I thanked him for the advice and noted that I’ve never seen this, but that it’s why we test.
  • OS Performance: we’re looking into some new ideas at the office. Things that could be useful as a preprocessor for a host based intrusion detection system. As part of my testing, I told my laptop to audit all syscalls made to the kernel, by all processes on the system. CPU load spiked, system performance went through the floor, the windowing system became almost completely non-responsive. In the two minutes it took to get access to a terminal, I logged 150 MB of audit logs. On the plus side, all of the information we need can be collected. Now I just need to figure out how to keep a usable system.
  • Self aggrandizement: talking to my technical manager, we need to write up two journal papers based on our recent work. Cool!

I hope everyone had a good Veterans Day and remembered to thank the veterans in their lives.

November 5, 2009

Facebook security vulnerabilities

Filed under: Security,Technical — cec @ 10:32 am

and this is why I like cross-posting to facebook from my blog.  It’s a healthy reminder that nothing on fb is actually private.  If it’s online – it’ll be exposed eventually, whether through a new exploit, or just because you “friend” someone in the future that you had written about in the past.

h/t hsarik

August 27, 2009

So, this is important

Filed under: Security,Social,Technical — cec @ 11:14 pm

I’m not a big baseball fan.  For that matter, there are few ball sports that interest me.  But, this is important.  If you recall, a few years ago (2004), there was a big furor over steroids in baseball.  The government searched BALCO and found evidence of rampant steroid use by baseball players.  Now I hadn’t been paying attention to this, but there has been an ongoing legal dispute over that search and how it was conducted.

Yesterday, the 9th Circuit Court of Appeals issued a 9-2 decision that restores a great portion of the 4th Amendment’s right to protection against unreasonable search and seizure in an electronic context.

Caveat lector, I am not a lawyer and I’ve never played one on TV.  Moreover, I haven’t finished reading the dissenting opinions and I’m almost certainly missing some of the nuances here.  In a nutshell, the government had evidence, sufficient to obtain a warrant, against 10 players.  Based on this evidence and the warrant, the prosecutors were able to search BALCO for information about those 10 players.  BALCO maintains all records on their computers, of course.

Now, I’ve had experience with these types of searches.  The government never takes just what’s in the warrant.  The defined search *process* always allows them to take the whole computer or the whole hard drive, or more often than not, an image of the whole hard drive.  The reasoning is that information pertaining to the search could be hidden, or there could be some form of booby trap, or the data could be encrypted or …

So, the prosecutor in the steroids case took the whole directory in which there was a file containing drug tests of MLB players.  The file itself contained information about far more than the 10 players named in the warrant.  So, rather than taking the 10 rows of the spreadsheet, rather than taking just the one file, the prosecutor took a directory containing the results of thousands of drug tests.

The prosecutor then (as I understand it) went jurisdiction shopping until he found a judge willing to grant a new warrant for information about 104 players, based on the information found in the spreadsheet.  The argument being that once they had access to the spreadsheet, or the directory, or even the computer, the additional information was in plain sight.  Several judges believed that the prosecutor intentionally wrote the process for executing the search warrant in such a way that he could *expand* the scope of the investigation by introducing evidence based on this plain sight doctrine in order to find new players to prosecute.

What’s interesting is that this seems fairly normal to many of us.  Of course the prosecutor will search your whole hard drive, of course they will bring new charges, etc.  The problem is that a) BALCO itself was not the subject of the prosecution, and b) this IS NOT the way things work in the tangible world.  Prosecutors are exploiting the new(ish) electronic domain to gain access to information they wouldn’t have if files were stored on paper.

Apparently (I need to look into this), the relevant doctrine in the physical world is United States v. Tamura (1982).  In this case, the object of a search was stored in a file cabinet.  It was not feasible to search that file cabinet in the office, so the prosecutors obtained access to it, with the requirement that they only pull information relevant to their warrant – even if they stumbled across additional criminal information.

The majority in the 9th Circuit decision believe that a sensible application of Tamura to an electronic domain means that information/documents stored in proximity to the information sought in the warrant is *not* in plain view.  And they are correct.  If information in adjacent files in a file cabinet are not in plain view, then neither is information stored electronically in adjacent files, folders or computers.

Explicitly, the justices stated:

In general, we adopt Tamura’s solution to the problem of necessary over-seizing of evidence: When the government wishes to obtain a warrant to examine a computer hard drive or electronic storage medium in searching for certain incriminating files, or when a search for evidence could result in the seizure of a computer, see, e.g., United States v. Giberson, 527 F.3d 882 (9th Cir. 2008), magistrate judges must be vigilant in observing the guidance we have set out throughout our opinion, which can be summed up as follows:

1. Magistrates should insist that the government waive reliance upon the plain view doctrine in digital evidence cases. See p. 11876 supra.

2. Segregation and redaction must be either done by specialized personnel or an independent third party. See pp. 11880-81 supra. If the segregation is to be done by government computer personnel, it must agree in the warrant application that the computer personnel will not disclose to the investigators any information other than that which is the target of the warrant.

3. Warrants and subpoenas must disclose the actual risks of destruction of information as well as prior efforts to seize that information in other judicial fora. See pp. 11877-78, 11886-87 supra.

4. The government’s search protocol must be designed to uncover only the information for which it has probable cause, and only that information may be examined by the case agents. See pp. 11878, 11880-81 supra.

5. The government must destroy or, if the recipient may lawfully possess it, return non-responsive data, keeping the issuing magistrate informed about when it has done so and what it has kept. See p. 11881-82 supra.

As someone who has participated in prosecutorial searches, these strike me as eminently sensible guidelines.  The first states that there’s no such thing as plain view in computer cases – each piece of information is in its own separate space.  To consider otherwise is to allow every piece of electronic equipment in the world to be searched, since they are all connected via the Internet.  The second states that the prosecutor shouldn’t be the one doing the search, b/c the searching personnel *will* wind up seeing information that isn’t related to the warrant.  The problem is that since nothing is in plain view (can you tell what a hard drive contains by looking at the physical device?), an in-depth search is required to fulfill the warrant, but that search will violate the terms of the warrant if all of the information is shared with the prosecutor.  The third states that prosecutors can’t *overestimate* the risk of booby traps, deadfalls, etc. that would destroy data.  There was no reason to think there were such in the BALCO computers, and therefore a full copy of their hard drives was not required.  The fourth is pretty plain – the process/protocol must be restricted to what the government is allowed to find.  And the fifth says that the prosecutor can’t keep things it found that it wasn’t supposed to have.

All in all, a very reasonable balance of 4th Amendment rights in a digital context – no matter what Orin Kerr might say. Good news on the electronic privacy front… for once.

March 28, 2008

Digital Amway

Filed under: Security,Social,Technical — cec @ 9:22 pm

A few years ago, I was accused of using the word “interesting” in subtle ways.  Sometimes it means a truly novel idea that I would like to learn more about, other times, it’s a novel idea of which I’m more than a little skeptical.  In both cases, I stand by the description, to me, both are interesting – but it can make it a little hard to know what I’m really thinking.  So take it with a grain of salt that I just read an interesting article in the February 2008 issue of IEEE Computer on how to turn music lovers (particularly teenagers) into music distributors.

The idea assumed a secure hardware architecture using digital certificates (for an idea of how this might work, read the novel “Rainbows End” by Vernor Vinge). Customers would buy music directly from the industry and would have the option of buying redistribution rights (at say a 10% discount).  The authors imagined that in addition to buying the song for personal use, customers could buy a 10 pack of redistribution licenses for maybe $8.99.  This 10 pack could be resold either as an end user license or a redistribution license so that the customer’s customer could resell it too.  Unsold licenses could be returned to the industry distributor for credit.

Having dealt with Microsoft Windows Server licensing at the office, I’m a little skeptical that any end user would want to get involved in such a scheme.  But then again, the office is paying MS, so what do I know.  The biggest problem that I see with the redistribution scheme is that customers have to pre-purchase redistribution licenses without knowing whether or not they can be resold.  Here’s my suggestion (perhaps I should get it published in IEEE Computer 🙂 ): the redistribution should be Amway-style.  For example, person A purchases the song for full price (say $0.99).  Person A can give a copy of the song to a friend, person B, who can play the song only a limited number of times.  If B wants to keep it, B doesn’t go and buy it from the original retailer – B activates it instead.  B pays the retailer the full amount ($0.99), but person A receives 10%, maybe in credit, maybe in an account that pays out on occasion.  If person B distributes to person C, then both A and B get paid (A gets less than B, being one removed).
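The payout chain described above is simple to sketch.  The numbers below – a 10% referral pool and each step removed getting half as much – are my own illustrative guesses, not figures from the article or the IEEE Computer paper:

```python
def referral_credits(chain, price=0.99, rate=0.10, decay=0.5):
    """Credit each referrer up the chain when a song is activated.

    chain lists referrers from most recent to oldest, e.g. ["B", "A"]
    when A passed the song to B and B passed it to C (the buyer).
    The direct referrer gets the full referral cut; each step further
    removed gets a decayed share (rate and decay are assumptions).
    """
    credits = {}
    share = price * rate
    for person in chain:
        credits[person] = round(share, 4)
        share *= decay
    return credits

# C activates a $0.99 song that reached them via A -> B -> C:
print(referral_credits(["B", "A"]))
```

With these made-up parameters, B (one hop from the sale) earns roughly $0.099 and A roughly $0.0495 – matching the post’s rule that A, being one removed, gets less than B.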

The industry would go along with this due to the significantly reduced bandwidth costs for distribution.  Users (might) go along with it because it’s a more natural distribution method and there’s a direct payment with low effort overhead.

Don’t get me wrong, I’m not advocating this, I’m not a huge Digital Rights Management (DRM) fan – too much potential to restrict fair use; however, it does seem like a more natural approach to turning consumers into distributors.

March 6, 2008

Who could have guessed?

Filed under: Security,Social — cec @ 7:27 am

Gee, nobody could have predicted this I suppose:

The FBI improperly used national security letters in 2006 to obtain personal data on Americans during terror and spy investigations, Director Robert Mueller said Wednesday.

Admittedly, Mueller goes on to say that the reports were prior to new policies being put into place, but somehow that doesn’t make me feel much better.  It’s things like this that have always made me very nervous about partnerships between law enforcement and industry.  I’ll try to post something about InfraGard one of these days.  It’s a little scary in its own right.

January 15, 2008

Is this thing on?

Filed under: Guitar,Personal,Security — cec @ 9:25 pm

<thump> <thump> Is this thing still on?

Okay, it’s been about three weeks since I’ve blogged anything.  As I’ve stated before, this tends to happen when I’m too involved in living life to actually write about it.  Fortunately (unfortunately?) it’s nothing terribly exciting.  Let’s see:

  • Guitar: my guitar playing has been scientifically shown to have 10% less suck than it did a month ago.  However, with such a large amount of suck to begin with, we’re still not at anything that looks like good.  I’m getting more fluent with the open chords and can switch between them reasonably well.  I’m just starting to learn barre chords – the E barre chord to start with.  There are still vast tracts of untouched suck in the barre chords.  Also, I’m actually thinking of picking up some lessons – the ones at seem pretty good.
  • Work:  still going well.  There’s enough to do.  I’m still not entirely used to billing by the (tenth of an) hour.  Also, not really looking forward to flying to Ohio next week.  I’ll only be gone for a day, but, ugh – who wants a 6am flight to Dayton!
  • Break-in update:  nothing much new here.  Still looking to make it harder for someone to break in.   Had a neighborhood watch meeting last weekend – that’ll be good.  Turns out this may be neither contractors nor kids.  There are apparently some professional (stretching that word a bit) thieves working this area.  There have been some eight different break-ins near by.
  • Non-profit work: I’m convinced that whoever coined the phrase “academic politics are so vicious because the stakes are so small” never worked with a non-profit. It’s just amazing the degree to which politics enters into the smallest damned thing.
  • New year’s resolutions: didn’t make any – never do.  That said, I am trying to exercise more and cut down on my use of vulgarities.  Profanity and cursing can wait until another year 🙂

I think that’s about it for now.

November 6, 2007

Two factor authentication

Filed under: Security,Technical — cec @ 9:05 pm

A couple of weeks ago, Hunter and I were talking about passwords. More to the point, the inadequacy of passwords and why we haven’t moved beyond them yet. This touches on several points that I made last year. Specifically, that a password that is secure enough starts to restrict its usability.

In a nutshell, authentication is proving that you are who you claim to be. The standard ways of authenticating yourself are through: something you know (e.g., a password), something you have (e.g., a token) or something you are (e.g., biometrics, facial recognition, etc.). So the claim here is that the human brain is not good enough at remembering things to make “something you know” secure. Unfortunately, it’s cheap and easy to implement. Two things which are always important.

Our other options are something you are or something you have. Something you are can be complicated and expensive. At the very least, it requires a something-you-are-reader anywhere you want to authenticate yourself. Want to use your computer at home to access the one at work? Make sure you have your trusted, secure something-you-are reader set up (finger print scanner, iris reader, etc.). Want to authenticate from an Internet cafe? Good luck. Besides that, there’s some argument that many of the approaches used to date are not secure; and there’s the creepiness factor.

So, something you have. This one can also get expensive, but is potentially cheaper than the rest, which is why you see it being used by banks for access to online accounts. Here we have some sort of hardware “token.” Most traditionally, these tokens have a simple processor, a clock and an LED display. The display shows a pseudo-random number. At a regular interval, the number changes. To log into a service, you key in the random number and maybe a password. Since the service you access knows the pseudo-random number generating algorithm for your device and the time, it can validate the number you entered. Allow a little bit of logic to deal with clock skew and you are set. Several companies will sell you something like this. Of course, you pay for the devices, pay for the authentication server and then, in some cases, pay for each service.
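The time-based token scheme can be sketched with nothing but the standard library.  This is a TOTP-style construction in the spirit of RFC 6238, simplified for illustration – it doesn’t match any particular vendor’s device:

```python
import hmac
import hashlib
import struct
import time

def token_code(secret, t=None, step=30, digits=6):
    # HMAC the current 30-second time window, then truncate the digest
    # to a short decimal code the user can key in.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def verify(secret, code, t=None, step=30, skew=1):
    # A little bit of logic to deal with clock skew: accept codes
    # from the adjacent time windows as well.
    now = time.time() if t is None else t
    return any(token_code(secret, now + d * step, step) == code
               for d in range(-skew, skew + 1))

secret = b"shared-with-auth-server"   # provisioned in the token at manufacture
code = token_code(secret)
assert verify(secret, code)
```

Both sides only need the shared secret and a roughly synchronized clock – which is exactly why you pay for the authentication server: it’s the thing holding all those secrets.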

So, what about an open source solution?  This is, in part, what Hunter and I were talking about. Imagine if you had an encrypted private certificate stored on a thumb drive. You could fairly easily write up a challenge-response protocol to validate the certificate. Since it’s certificate based, you could authenticate without a centralized authentication server – the ability of the certificate, signed by your (private) certificate authority, to participate in the response authenticates the certificate holder. You could create PAM modules for unix/linux and the equivalent for Windows and Mac. On the client side, stored on the same drive, you would have software to mediate the authentication.
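A minimal challenge-response exchange along these lines might look like the following.  The post envisions certificates signed by a private CA; to keep this sketch standard-library-only, an HMAC over a pre-shared secret stands in for the certificate operations, so it shows the shape of the protocol rather than the actual design:

```python
import hmac
import hashlib
import secrets

# Server side: issue a one-time random challenge.
def make_challenge():
    return secrets.token_bytes(32)

# Client side: this would normally be a signature computed with the
# private key on the thumb drive; an HMAC secret stands in here.
def respond(token_secret, challenge):
    return hmac.new(token_secret, challenge, hashlib.sha256).hexdigest()

# Server side: recompute the expected response, compare in constant time.
def check(token_secret, challenge, response):
    expected = hmac.new(token_secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

token_secret = secrets.token_bytes(32)   # provisioned onto the drive
challenge = make_challenge()
assert check(token_secret, challenge, respond(token_secret, challenge))
# Replaying a response against a fresh challenge fails:
assert not check(token_secret, make_challenge(), respond(token_secret, challenge))
```

Because each challenge is fresh and random, a sniffed response is useless later – though, as discussed below, nothing here stops an attacker who copies the secret off the drive itself.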

I could see two ways for the client to do this. 1) a separate process that connects to the service’s server and essentially allows access for this IP. The service then needs to talk to the server-side piece to see if a user is allowed to access from the IP. That plus a password and you’re in pretty good shape. No connection to the authentication service means that you can’t log in. 2) Try to create a service along the lines of stunnel that mediates all communication between the client and the service. This is extremely ugly and I wouldn’t recommend it.

So, what are the advantages/disadvantages?

  1. Advantage: low hardware cost. Most every computer has a USB reader
  2. Advantage: relatively simple to implement
  3. Disadvantage: even the cheapest thumb drives are on the order of $5 each
  4. Advantage: many people already have one and they could be used for this purpose without wasting too much space
  5. Disadvantage: to a certain extent, this is not secure. Specifically, there’s no proof that the user actually has the key as opposed to a copy of the certificate and the algorithm required.

#5 seems like the biggest problem. As an open source product, all one needs is the certificate to spoof the token. Okay, we could incorporate the USB serial number, but that can also be copied. Ideally, all the processing would occur on the thumb drive, but that takes us out of the realm of commodity. So, the risk here is that using your token on a compromised computer compromises the token in the same way that using your password on a compromised computer compromises your password.

This is definitely not a hypothetical problem, but I don’t know how to resolve it. Is it still worth implementing something like this? If folks have thoughts or suggestions, I would love to hear them.

March 14, 2007

Bob Ross

Filed under: Funny,Security — cec @ 3:54 pm

Created by “the Robot Economist,” and for hsarik, it’s Bob Ross and the Joy of Painting Missiles.  Click the picture for the full sized image with rotating scenery.

March 9, 2007

The FBI’s national security letters

Filed under: Security,Social — cec @ 9:55 pm

Sometime in late September or early October of 2001, I received a call from an individual identifying himself as an agent of the FBI and asking for information about the owner of an email account from the place I worked.  He stated that he believed the account was relevant to a terrorist investigation. Of course, this was in the immediate aftermath of September 11th and everyone had security concerns, but I was also certain that I didn’t want to give away information to someone who shouldn’t have it. Following a fairly standard procedure, I requested his phone number, badge number and locale so that I could contact the FBI to confirm his identity. The agent gave me a lot of grief about this, noted that I was putting lives at risk by not immediately complying, etc., but I assured him that I would call right back.

I contacted the FBI and after quite a bit of checking, they confirmed he was an agent. The reason for the delay is that he was actually an ATF agent on loan to the FBI. So I called him back and asked what information he was looking for. It turned out that he was investigating an arms sale online. I was surprised that someone dealing in illegal weapons for terrorism would use their personal email account, but sure. I told the agent that if he would provide a subpoena or court order, I would be happy to respond. This generated another round of everyone’s favorite game, “do you want the terrorists to kill people?!” I apologized, but explained that it was my job to do otherwise. I never heard from him again.

Given the nature of the crime he described, the fact that I never received any valid order, that this seemed like a small issue relative to the claim of terrorism, and that he was on loan from the ATF, I can only conclude that, with the tint not even dried on his shiny new FBI sunglasses, he was overstepping his authority, and claiming a terrorism investigation, in order to pursue a standard, probably pre-existing, case. With that in mind, today’s report by the Inspector General regarding errors in the FBI’s use of National Security Letters (NSLs) comes as no surprise.

The USA PATRIOT Act removed any judicial oversight required for NSLs in order to ensure that they could be executed in a timely fashion. The law then prevented anyone receiving a NSL from mentioning it to anyone. So you have a secret, self-issued warrant for information, creating a situation ripe for abuse. The IG’s report indicates that these abuses were errors and lack of internal oversight. That may be, but it is also clear that there were many cases of over-aggressive investigators issuing NSLs (which are intended for investigations into terrorism) in cases which had nothing to do with terrorism. I guarantee that if the USA PATRIOT Act had been law when I spoke to my ATF agent, I would have received an NSL, turned over the information, and been unable to discuss the demand with anyone.

When the USA PATRIOT Act was passed, the administration basically asked the country to trust the executive branch by allowing it a surveillance tool that had no oversight, was self issued and would remain entirely secret. The IG’s report demonstrates that our trust was abused. However, I’ll go further and say that the concept of trusting the executive branch for activities undertaken without oversight (either judicial or congressional) is fundamentally un-American and a violation of constitutional principles as espoused in the Federalist Papers and other writings of the founders of this country. I hope that the IG’s report will encourage congress to rethink their blind trust in the executive branch under this, or any, administration.

