Alkahest – my heroes have always died at the end

January 30, 2008

Diagnostics

Filed under: Photography,Technical — cec @ 9:26 pm

An interesting conversation at work today. I was talking with the guy, L (no – a different one), who has been responsible for our IT for the past 3.5 years. L isn’t the guy that always does the work, although he can. His real job is as an engineer in the company and that’s his primary focus. However, he does handle the technology strategy and monitors the folks that do the IT work. (He and I are talking quite a bit since I’m trying to take some of the load off in this area).

Anyway, L made the observation that he is often frustrated by IT people's approach to diagnosing a problem. His experience, both with our ISP and sometimes with our desktop support, is that IT people will poke at a problem system in a non-systematic way, replacing components, etc., until the problem disappears. They may never identify the actual problem, but they do make it go away.

In my experience, this is not true across the board: there are many IT folks who do post-mortems on problems and who diagnose issues methodically to pinpoint the cause. But he's right – there are many who don't.

The funny thing is that the methodical approach comes naturally to L, with his engineering background, but doesn’t come naturally to many other people and particularly people without any training in methodical testing. I recognized some of this back in high school. One thing that separated fair computer users from the great (and yes, this was back in the 80s) was diagnostic ability. My college roommate, for example, was lousy at diagnostics. He knew a fair amount about computers, but didn’t have a methodical approach for diagnosing problems.

L was suggesting that people should identify experiments in advance, knowing what the different results would indicate. As an engineer, he calls them experiments; many people with natural diagnostic ability do the same thing instinctively. For example, the symptoms of a problem are known and could point to either hardware or software. Some people naturally recognize this and run tests to rule out one or the other. Okay, the problem is in the hardware. Can we tell if it's the system or the network? And so on.
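To make that concrete, here is a toy sketch in Python of the experiments-with-known-interpretations idea. The probes and fault categories are invented for illustration; they are not L's actual procedure.

```python
# Toy sketch: each "experiment" is chosen so that either outcome rules out
# a whole class of causes. These probes and categories are invented.

def diagnose(can_ping_gateway: bool, other_host_reaches_service: bool) -> str:
    if not can_ping_gateway:
        # Failing this rules out the remote service; the fault is local.
        return "local hardware, cabling, or network stack"
    if not other_host_reaches_service:
        # We can get out, but nobody can reach the service, so the problem is remote.
        return "remote service or its software"
    return "local software configuration"

# Example: our network path is fine, but nobody can reach the service.
print(diagnose(can_ping_gateway=True, other_host_reaches_service=False))
```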

The funny thing is that I don’t think universities ever teach this skill. Computer Science does in a sense: if you can’t diagnose problems in your own programs in a rapid, methodical way, then you’ll probably fail out. But that seems to be more weeding out than teaching. IS courses and Engineering courses aren’t any better, to my knowledge. I don’t think I’ve ever seen a “Debugging” course offered. So we’re left with people who can perform diagnostics instinctively and people who can’t, and who replace and test parts randomly.

Am I missing something – are there courses in debugging? If not, should there be? I tend to think of debugging/diagnostics as a skill separate from coding or engineering. If that’s the case, then it can and should be taught. Hell, there’s a whole television show (“House”) based on medical diagnostics – the least we can do is teach future programmers, engineers and IT people the same skills.

January 27, 2008

Work blogging

Filed under: Personal,Technical — cec @ 10:37 pm

Last week at work was pretty interesting. On Tuesday, L and I flew up to Dayton for the day to meet with our sponsors. That meant a 6am flight out and originally an 11pm return flight. Fortunately, we wrapped things up a bit early and landed back home around 8:30pm. The meeting went well. We were presenting some ideas of mine to the sponsors and they seemed to like them. Strangely enough, in talking to L, he said that he hadn’t really understood what I was proposing until the meeting. Not bad for a guy who wrote a short paper for the sponsors describing what he didn’t understand! But then, I guess that’s why L’s a full professor 🙂

Wednesday was largely recovery. On Thursday, I met with the other modeling guy on the team, who was in from New York. We discussed what I wanted to do and he seemed to have a good handle on it, which was good because later in the afternoon L came in and wanted to discuss it in more detail. He had been stuck in an airport and had time to think through some things. The bad news is that he didn’t think my proposed modeling would work. The good news is that he thought he had a fix.

It turned out that I had forgotten that the underlying model we are using is a Markov model. The basic requirement of the model is that the software agent can only be in one state at a time, and I was essentially proposing that the agent could be in multiple states simultaneously. As we talked more, I realized that L’s proposal generalized what I was getting at: I had described the special case of m=1, and before my proposal we had been working with m=n. L’s proposal was to let m vary between 1 and n. This bugged me at first since I thought m>1 wasn’t necessary. I was also concerned since L and the other modeler, N, had theoretical objections to m=1. I pointed out that their theoretical objections really held for any m<n, so nothing was more wrong with m=1 than with any other choice. Everyone agreed with that, and we further agreed that a varying m would still have some theoretical problems, but would be more effective, more realistic and more tractable than m=n or m=1.

Wrapping that up, I received a short email from L on Friday thanking me for the work, saying that it was a good whiteboard session and that he could see how my thinking was influencing his for the better. That was nice 🙂

p.s. if y’all are really nice, I promise not to write a blog post on Markov decision processes

December 23, 2007

Information markets

Filed under: Technical — cec @ 5:32 pm

One of my projects at work involves information markets: tools to extract aggregate knowledge from groups of people. For example, at Intrade, you can buy and sell contracts on questions like, “who will be the Democratic nominee for president?” or “how likely is it that the US economy will slip into recession?” The market price reflects the participants’ aggregate belief about how likely a given event is.
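Markets like these can be run as an ordinary order book, where traders bet against each other, or with an automated market maker. A common market-maker choice is Hanson's logarithmic market scoring rule (LMSR); here is a rough sketch of how LMSR turns outstanding shares into a price, i.e., a probability estimate. The liquidity parameter b and the example numbers are arbitrary.

```python
import math

def lmsr_cost(q, b=100.0):
    """Hanson's LMSR cost function; q holds outstanding shares for each outcome."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i, interpretable as the market's probability."""
    return math.exp(q[i] / b) / sum(math.exp(qj / b) for qj in q)

def trade_cost(q, i, shares, b=100.0):
    """What a trader pays to buy `shares` of outcome i at the current state."""
    q_after = list(q)
    q_after[i] += shares
    return lmsr_cost(q_after, b) - lmsr_cost(q, b)

# Two-outcome market, e.g. "will the economy slip into recession?" (yes/no)
q = [20.0, 0.0]
print(round(lmsr_price(q, 0), 2))        # ~0.55: the market "thinks" yes is ~55% likely
print(round(trade_cost(q, 0, 10.0), 2))  # cost of buying 10 more "yes" shares
```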

IEEE Spectrum also recently did a piece on information markets which talked about Microsoft’s use of a market in 2004 to predict the likelihood of an internal product meeting its production schedule.  By opening the market to its employees, who presumably had knowledge of the issues with the product, Microsoft predicted that it would ship three months late – which it did.

All of that got me to thinking about how much fun (and subversive 😉 ) an employee-operated information market would be. The questions are great: “how likely is it that we’ll be re-organized by March?” “will the next good job go to the boss’s friend, neighbor or gardener?” “which product will be selected in the next RFP?” “how likely is this project to fail (or succeed)?”

If anyone wants to set up something like that, I’ve got some code, or you can use Zocalo.

December 4, 2007

It lives

Filed under: Technical — cec @ 11:02 pm

Right before Thanksgiving my iPod went to that great electronics superstore in the sky.  More specifically, the cute little 1.8″ hard drive died.  Having looked around, I figured the best (or at least most interesting) fix was to replace the drive with a compact flash card.  The connector part finally came in today and we had a lovely (and to spare you the suspense, successful) operation.

We started with a 30 GB 5th generation iPod video showing the unhappy iPod symbol:

[photo: dsc_2091_m.jpg]

The first step was to open it up.  They make special tools for this, but I didn’t have any, so I used my old stand-by, a pocket knife:

[photos: dsc_2093_m.jpg, dsc_2094_m.jpg]

I got the parts ready:

[photo: dsc_2095_m.jpg]

and did the surgery.  The result was the same iPod, with no hard drive but a 16 GB CF card in its place:

[photo: dsc_2096_m.jpg]

At this point, I connected the power back up but did not completely reseal the iPod (in case I had accidentally disconnected the audio out line).  Total time for the surgery: 15 minutes.  I turned the iPod on and saw:

[photo: dsc_2097_m.jpg]

Okay, it knows that it’s got a new drive and wants me to connect it to iTunes.  Hrm, problem.  iTunes doesn’t seem to have a linux version.  Okay, let’s boot the computer upstairs into windows mode – an ancient win2k install.  Run an old version of iTunes.  No go.  I had removed QuickTime to make space for ArcGIS and now iTunes is unhappy.  I tried to explain that I don’t really want to listen to the iPod on windows.  It doesn’t care.

I download iTunes.  It claims to be a 47 MB download – it’s actually 57 MB.  Fine.  Run the installer.  Nope – this version requires XP or Vista.  Crap.  Go back to Apple, find the older Win2k version.  Claims to be a 47 MB download – seems to be 37MB.  I don’t think Apple engineers understand file size.

Install iTunes, reboot the computer, run iTunes.  It won’t fix the iPod because it can’t find the network.  The network is working; iTunes just can’t find it.  Search the web, find the solution and re-run iTunes.  It downloads new firmware and fixes the device.  Time spent screwing with iTunes: 40 minutes.  Result:

[photo: dsc_2098_m.jpg]

Okay, now we’re in business.  I take a break for dinner and fire up GtkPod.  Too many mp3s to fit on the new card.  Fine, I get the collection down to the right size and start the sync.  Two and a half hours later, we have music!

[photo: dsc_2099_m.jpg]

Not much space left, but that’s okay:

[photo: dsc_2100_m.jpg]

First impressions?  It’s nice.  There’s no disk noise, no disk vibration and, best of all, no disk spin-up delay.  You can jump between songs without waiting for a drive to spin up and seek.  Overall, it was definitely worth it.  And when 32 GB flash cards become affordable, I’ll have a good upgrade path 🙂

November 25, 2007

Rebuilding an iPod

Filed under: Technical — cec @ 11:08 pm

Over the holiday (I hope everyone had a good Thanksgiving!) I spent some time figuring out what to do about my broken iPod hard drive. The simplest/cheapest thing is to replace the drive with another 30 GB 1.8″ drive. The problem is that this is boring. Okay, next thought – up the drive size. I can get a 40 GB drive that’s the same size and is a drop-in replacement. Only problem is that this isn’t very cost effective in terms of dollars per GB. Apparently the wide-spread use of 30 GB drives in iPods has lowered the price point here. What about a 60 GB? It looks like that’s doable too and at a good price point. Downside is that I need a new back to accommodate the thicker drive. Hrm, that’s no good either.

In looking around online, I ran across Tarkan Akdam’s website. He had the same issue with a dead iPod drive and resolved it with a very cool hack: a small adapter board that connects a compact flash card to the iPod’s ZIF drive connector. This let him put a 4 GB CF card in place of the drive. As an electronics engineer, he did this right – not a cheap connector with random wires (like I would do), but a custom board. He’s now selling these, and I’ve ordered one along with a 16 GB CF card. Hopefully by next weekend everything will be here and I can put it all back together.

Cons:

  • less space
  • slightly slower data transfer speed

Pros:

  • exceedingly cool
  • hardier – no moving parts to break the next time I drop it
  • no spin-up time when I change songs – i.e., better ipod response times
  • better battery life (Tarkan’s done some tests and the results are impressive)

I would have preferred not to have broken the ipod, but this will at least be interesting and cheaper than a replacement 🙂

November 6, 2007

Two factor authentication

Filed under: Security,Technical — cec @ 9:05 pm

A couple of weeks ago, Hunter and I were talking about passwords. More to the point, the inadequacy of passwords and why we haven’t moved beyond them yet. This touches on several points that I made last year. Specifically, that a password secure enough to be worth anything starts to restrict its usability.

In a nutshell, authentication is proving that you are who you claim to be. The standard ways of authenticating yourself are through: something you know (e.g., a password), something you have (e.g., a token) or something you are (e.g., biometrics, facial recognition, etc.). So the claim here is that the human brain is not good enough at remembering things to make “something you know” secure. Unfortunately, it’s cheap and easy to implement. Two things which are always important.

Our other options are something you are or something you have. Something you are can be complicated and expensive. At the very least, it requires a something-you-are-reader anywhere you want to authenticate yourself. Want to use your computer at home to access the one at work? Make sure you have your trusted, secure something-you-are reader set up (finger print scanner, iris reader, etc.). Want to authenticate from an Internet cafe? Good luck. Besides that, there’s some argument that many of the approaches used to date are not secure; and there’s the creepiness factor.

So, something you have. This one can also get expensive, but it’s potentially cheaper than the other options, which is why you see banks using it for access to online accounts. Here we have some sort of hardware “token.” Most traditionally, these tokens have a simple processor, a clock and an LED display. The display shows a pseudo-random number. At a regular interval, the number changes. To log into a service, you key in the random number and maybe a password. Since the service you access knows the pseudo-random number generating algorithm for your device, and the time, it can validate the number you entered. Allow a little bit of logic to deal with clock skew and you are set. Several companies will sell you something like this. Of course, you pay for the devices, pay for the authentication server and then, in some cases, pay for each service.
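As a sketch of how that kind of time-based token can work (this is not any particular vendor's algorithm; the HMAC construction, 60-second interval, and six digits are just illustrative choices):

```python
import hashlib
import hmac
import struct
import time

def token(secret, t=None, interval=60, digits=6):
    """Derive a short numeric code from a shared secret and the current time slot."""
    slot = int((time.time() if t is None else t) // interval)
    mac = hmac.new(secret, struct.pack(">Q", slot), hashlib.sha1).digest()
    return str(int.from_bytes(mac[-4:], "big") % 10**digits).zfill(digits)

def validate(secret, code, skew=1, interval=60):
    """Accept the current slot or +/- `skew` slots to tolerate clock drift."""
    now = time.time()
    return any(hmac.compare_digest(token(secret, now + d * interval), code)
               for d in range(-skew, skew + 1))

secret = b"shared-with-the-auth-server"
print(token(secret))                     # what the LED display would show
print(validate(secret, token(secret)))   # True
```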

So, what about an open source solution?  This is in part what Hunter and I were talking about. Imagine if you had an encrypted private certificate stored on a thumb drive. You could fairly easily write up a challenge-response protocol to validate the certificate. Since it’s certificate based, you could authenticate without a centralized authentication server – the fact that a certificate signed by your (private) certificate authority can participate in the response is what authenticates the holder. You could create PAM modules for unix/linux and the equivalent for Windows and Mac. On the client side, stored on the same drive, you would have software to mediate the authentication.
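A rough sketch of that challenge-response step, using the Python cryptography package. The key type (RSA), the thumb-drive path, and the passphrase handling are assumptions, and a real implementation would also verify the certificate's chain back to the private CA:

```python
import os
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def make_challenge():
    # Server side: a fresh random nonce for each authentication attempt.
    return os.urandom(32)

def sign_challenge(challenge, key_path, passphrase):
    # Client side: the encrypted private key lives on the thumb drive,
    # e.g. key_path = "/media/usbkey/id_rsa.pem" (hypothetical location).
    with open(key_path, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=passphrase)
    return key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

def verify_response(challenge, signature, cert_pem):
    # Server side: verify the signature against the user's certificate.
    # Checking that the certificate chains to the private CA is omitted here.
    cert = x509.load_pem_x509_certificate(cert_pem)
    try:
        cert.public_key().verify(signature, challenge,
                                 padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```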

I could see two ways for the client to do this. 1) A separate process that connects to the server-side piece at the service and essentially allows access from the client’s current IP. The service then asks that piece whether a user is allowed to access it from that IP. That plus a password and you’re in pretty good shape. No connection to the authentication service means that you can’t log in. 2) Try to create something along the lines of stunnel that mediates all communication between the client and the service. This is extremely ugly and I wouldn’t recommend it.

So, what are the advantages/disadvantages?

  1. Advantage: low hardware cost. Almost every computer has a USB port
  2. Advantage: relatively simple to implement
  3. Disadvantage: even the cheapest thumb drives are on the order of $5 each
  4. Advantage: many people already have one and they could be used for this purpose without wasting too much space
  5. Disadvantage: to a certain extent, this is not secure. Specifically, there’s no proof that the user actually has the key as opposed to a copy of the certificate and the algorithm required.

#5 seems like the biggest problem. As an open source product, all one needs to spoof the token is a copy of the certificate. Okay, we could incorporate the USB drive’s serial number, but that can also be copied. Ideally, all the processing would occur on the thumb drive itself, but that takes us out of the realm of commodity hardware. So the risk is that using your token on a compromised computer compromises the token, in the same way that using your password on a compromised computer compromises your password.

This is definitely not a hypothetical problem, but I don’t know how to resolve it. Is it still worth implementing something like this? If folks have thoughts or suggestions, I would love to hear them.

September 17, 2007

are there no good ISPs?

Filed under: Personal,Technical — cec @ 7:19 pm

I’m starting to think that there aren’t any good Internet service providers.  hsarik had troubles with Rimu Hosting.  My own ISP seems to be far more focused on (their own version of) security than on usability.  At work we’ve been using Pair, and I’d been pretty happy with them until this afternoon around 5pm, when our project went dead while some 50 people were using it.

When we first developed the project, Pair was using php4 (yeah, I know – it’s my own fault for using php 🙂 ).  Fortunately, they also provided phpwrap to allow cgi access.  Okay, that’s not great, but it at least let our project, which required php5, go live.  Sometime recently, they made php5 the default, without phpwrap.  If I had known about it in advance, fine – but I didn’t see any mention of it.  Then today, right after I went home, they broke phpwrap.  Easy enough to fix, but still irritating.

August 16, 2007

power management and Linux

Filed under: Personal,Technical — cec @ 8:12 pm

From skvidal.

If you haven’t seen powertop yet, you’ve got to look into it.  Arjan van de Ven, one of the linux kernel hackers and an employee of Intel, released powertop back in May.  What is powertop?  Think of the Unix “top,” but monitoring power, not CPU, usage.  It tells you how long you are spending in different CPU states, how long at different CPU frequencies, what is waking up the CPU most frequently, and best of all, it makes recommendations for saving power.
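powertop itself pulls from kernel timer statistics, ACPI data and a lot more, but to give a crude taste of the what-is-waking-the-CPU part, here is a little Linux-only sketch that diffs /proc/interrupts over one second (nowhere near what powertop actually does):

```python
# Crude illustration only: count which interrupt sources fired over one second.
import time

def read_interrupts():
    counts = {}
    with open("/proc/interrupts") as f:
        next(f)  # skip the header row of CPU column labels
        for line in f:
            fields = line.split()
            name = fields[0].rstrip(":")
            counts[name] = sum(int(x) for x in fields[1:] if x.isdigit())
    return counts

before = read_interrupts()
time.sleep(1)
after = read_interrupts()
top = sorted(after, key=lambda k: after[k] - before.get(k, 0), reverse=True)
for name in top[:5]:
    print(name, after[name] - before.get(name, 0), "interrupts/sec")
```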

Powertop has already identified a number of issues in various software packages and these problems are now being addressed.  In my personal case, I found that powertop didn’t help much since I was running an old kernel on an old distribution (FC4 – I know, I know).  So I upgraded my laptop to FC7.  Before the upgrade, I got about 2 – 2.5 hours on battery.  With the upgrade and following the recommendations, I can now get 3.5 hours!

very cool.

June 24, 2007

It’s official…

Filed under: Personal,Technical — cec @ 9:27 pm

K thinks I’m a freak. In her defense, I’m not certain that she would have ordered a $70 keyboard, so she may be right.  In my defense, I bought my first IBM-clone in 1990 and have kept the keyboard I chose since then. The company I bought it from, Formosa Computing (for those of you in NC, think Intrex), assembled parts and sold white-box computers. The keyboard that they originally gave me was awful. I didn’t like the mushy keys, so I went back, tried a dozen different keyboards and finally exchanged it for something with a bit more tactile response. With every new computer I bought, I used this keyboard. At one point, it had an AT to PS/2 to USB adapter in order to get it working.

Unfortunately, it finally died a couple of months ago. I bought a replacement, ergonomic, wireless keyboard that I promptly hated. I did try, I gave it a chance, but I kept mistyping. Tonight, I finally broke down and bought a new keyboard from Unicomp, the people that own the license to the original IBM buckling spring keyboard patent used on the Model M keyboards.

Does all of this make me a freak? Probably, but it could be worse. I could be as obsessive as these guys.

June 17, 2007

A mental sigh of relief

Filed under: Personal,Technical — cec @ 9:40 pm

Working in IT, one of the things that is often on my mind is backing up my data. It’s often on my mind, but I seldom do anything about it. A few years ago, K and I bought an external USB drive to which we occasionally sync our desktop. That made me a bit more comfortable: we’re now covered if a drive crashes. Of course, I then started worrying about a fire in the house. Since the external drive sits on top of the computer itself, anything that destroys the one will likely destroy the other.

At work, I was recently on the “storage advisory team.” Sounds more grandiose than it was – we were basically a bunch of folks on a death march project to make recommendations for things dealing with data storage. One of the projects we were tasked with was data backup and recovery. None of the outsourced solutions we looked at were going to support linux, but the project did increase my backup anxiety.

Last week, I finally did something about it. I signed up for Amazon’s Simple Storage Service (S3) and downloaded a copy of s3sync. I took the USB drive up to the office (for the fast networking) and backed everything up to S3. This weekend, I tested the backup service by backing up the home computer – essentially, anything that had changed since last weekend. Overall, it went well. The original upload took maybe 8 hours for 17GB. The updates took about 1 hour, but that is in part because of all the deletion requests needed (I cleaned up quite a bit) and my slow desktop hard drive.
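s3sync itself is a Ruby tool; purely as an illustration of the idea, here is a sketch of a one-way upload using today's boto3 library. The bucket name and paths are made up, and there is no incremental-sync or deletion logic, so it is nowhere near a replacement for s3sync:

```python
# Rough sketch of a one-way "back everything up to S3" pass.
# Every file is uploaded; a real sync would compare timestamps/ETags first.
import os
import boto3

def backup_dir(local_root, bucket, prefix=""):
    s3 = boto3.client("s3")
    for dirpath, _dirnames, filenames in os.walk(local_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            key = prefix + os.path.relpath(path, local_root).replace(os.sep, "/")
            s3.upload_file(path, bucket, key)  # one PUT request per file
            print("uploaded", key)

# backup_dir("/home/cec", "my-backup-bucket", "desktop/")
```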

This isn’t something I would have recommended on the storage team, but it’s effective and cheap. Rather than being ~$20/month for 17GB, it cost ~$2.60 to upload the data and will cost ~$2.50/month to store it. Plus, I think I can stop worrying about backups 🙂

