Alkahest
my heroes have always died at the end

March 30, 2010

In which my denial of free will receives support from Science [tm]

Filed under: Random — cec @ 4:56 pm

I’ve long maintained that free will is an illusion: the mind arises from the physicality of the brain, and there is no room for an active will separate from the brain’s physical processes.  That’s not to say that I’m a fatalist.  I don’t believe that thoughts are deterministic, let alone subject to perfect prediction.  My position boils down to the brain as a (gigantic) black box containing an astronomical number of states.  Input from the senses changes the state of the brain and occasionally results in actions.

Even assuming (which I don’t) that the processes of the brain were completely deterministic, chaos theory tells us that they would not be predictable (what’s the solution to the three-body problem? [other than a king-sized bed]).  Moreover, the processes themselves are dependent on physical structures small enough that quantum effects are relevant, and therefore the state changes contain a strong stochastic component.

Philosophically speaking, none of this affects the way we should live.  Education, punishment, and personal interaction all affect the state of the brain and are therefore worthwhile [and inform my position that the real purpose of the criminal code should be rehabilitation, not warehousing, punishment, or societal retribution… but that’s a post for another time].

Yesterday, I heard the coolest story on NPR.  In a nutshell, moral judgements apparently involve the right temporoparietal junction, so a magnetic pulse disrupting that region of the brain affects those judgements.  In the study published in the Proceedings of the National Academy of Sciences (PNAS), the researchers told each participant one of four stories.  The stories covered the permutations of effect (neutral or negative) and intent (neutral or negative).  So, one story described an unintentional (neutral intent) poisoning resulting in death (negative effect), another described an intentional (negative intent) failed poisoning (neutral effect), etc.

When making moral judgements, adults generally consider intention.  So if you didn’t intend to poison someone and did, it’s understandable; whereas if you intended to poison and failed, you are still morally culpable.  This is exactly what the researchers found in their controls.  However, after disrupting the right temporoparietal junction, participants started making moral judgements based on the effect.  You intended to kill someone, but failed?  No problem, the person is still alive.  You didn’t mean to kill someone and did?  You bad person, someone died.  Apparently, this is common in children before they learn to make moral judgements based on intention.

So, in a nutshell, temporarily altering the physicality of the brain affects people’s moral judgements.  I’ll consider that support for my position that the mind arises from the physicality of the brain, and that any belief in a free will separate from those physical processes is an illusion.

March 22, 2010

Model update (updated!, updated again)

Filed under: Random,Technical — cec @ 9:09 pm

Earlier, I posted my current model for predicting the NCAA tournament.  Since the whole thing is probabilistic, I figured that I would test it out against the current NCAA standings.  I considered four models:

  1. The one that I described
  2. A random selection of which team would win (50/50 chance)
  3. Always picking the top seeded team
  4. A model suggested by a colleague at work

For each model, I ran 10,000 tests and compared them to the current NCAA tournament results, counting the scores for each test.  Results are:

The X axis is the score (0-64 at this point); the Y axis is the number of test runs (out of 10k) that achieved that score.  The number in the legend is the expected value (score) for each model.  As you can see, my model had the [second] highest expected value.  Choosing the top seeded team was ~~the worst (guaranteed 10 points)~~ the best [see the 2nd update].  Choosing randomly was ~~better than selecting the top seed~~ the worst [see update], and my colleague’s model (cyan) was between my model and the random model.  Not bad.  I’ll update after the next two rounds of the tournament.

Update: one interesting thing is that this suggests that there was still a lot of luck in my ESPN pick.  Only about 0.5% of my model runs were as good as that one.

Update 2: So, I’m lying in bed when it occurs to me that I’m an idiot… the team with the *lowest* seed wins a game in Model 3.  This is why I say I don’t really know basketball.
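For the curious, here’s a rough sketch of the test harness for Model 1 (the Beta model described in the next post down).  All names are mine, and the scoring is simplified to one point per correctly predicted game rather than the round-doubling bracket scores in the figure:

```python
# Rough sketch of the model comparison (assumes numpy; all names are mine).
# Each completed game is (d1, d2, team1_won).
import numpy as np

def score_runs(games, n_runs=10_000, rng=None):
    """Score n_runs simulated brackets of the Beta model against reality."""
    rng = rng or np.random.default_rng()
    scores = np.zeros(n_runs, dtype=int)
    for d1, d2, team1_won in games:
        p = rng.beta(d1, d2, size=n_runs)     # each run's drawn win probability
        picks = rng.uniform(size=n_runs) < p  # each run's pick for this game
        scores += picks == team1_won          # a point for each correct run
    return scores                             # histogram these, as in the figure
```

A real comparison also has to propagate each run’s picks through later rounds; this sketch scores completed games independently.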

They laughed at my theories!

Filed under: Random,Technical — cec @ 3:09 pm

They laughed at my theories.  They threw tomatoes when I presented my paper at the academy!  Tomatoes I tell you!  My minions cower in terror, shrinking in fright from the very ideas contained herein!  But I will show them!  I will PROVE IT TO THEM ONCE AND FOR ALL.  The FOOLS, I WILL DESTROY THEM!! MWAHAHAHAAAA! (ask me how)

Oh, sorry.  Where was I?  Apparently, there’s this basketball thing going on.  Some sort of NCAA tournament that will prove who has the best basketball team.  But what if it doesn’t?  What if it’s all just arbitrary?  Could it be that the outcome of any game is not deterministic, but stochastic?  I’ll admit that I don’t know that much about basketball.  I mean, I played the sport in junior high.  I do know the rules.  And I even think that it’s a pretty game.  But I don’t follow the ins and outs of a particular season.

So what’s a guy to do when he doesn’t really follow basketball, but lives in NC, where bball is life, and it’s bracket time?

You model it.   Which is exactly what I did.

The basic model:

  1. Compute a team’s wins minus its losses.  I’m sure there’s a word for this, but let’s call it demonstrated strength (D)
  2. For a given match-up, take a draw from a Beta distribution parameterized by each team’s demonstrated strength (D1 and D2)
  3. The resulting draw is the probability that the team representing the first parameter wins
  4. Draw from a uniform random variable to predict whether that team actually wins
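Concretely, steps 2-4 fit in a couple of lines.  Here’s a minimal sketch in Python, assuming numpy; the function name is mine, and the alpha factor anticipates the second flaw below:

```python
import numpy as np

def predict_game(d1, d2, alpha=1.0, rng=None):
    """Predict one game from demonstrated strengths d1, d2 (assumed > 0,
    which holds for tournament teams). Returns True if team 1 wins."""
    rng = rng or np.random.default_rng()
    p = rng.beta(alpha * d1, alpha * d2)  # step 2: draw team 1's win probability
    return rng.uniform() < p              # steps 3-4: flip a p-weighted coin
```

Filling out a bracket is then just a matter of applying this round by round.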

There are some flaws with the model; the two obvious ones:

  1. Different teams have different schedules, so one team with a 30-5 record might be a lot better than another with a 30-5 record in a different conference (I’m looking at you, SEC)
  2. It’s not clear that you should parameterize directly on the demonstrated strengths; there should probably be a scaling factor in there, so that rather than drawing from Beta(D1, D2), you draw from Beta(alpha*D1, alpha*D2)

But this is close enough.  The nice features of the model are:

  1. The expected probability that the first team wins is D1/(D1+D2).  So a team whose wins outnumber its losses by 10 has an expected probability of winning of 50% against another team with D2=10, and only a 33% chance against a team with D2=20
  2. The closer the two teams’ demonstrated strengths are to zero, the broader the probability distribution.  This reflects added uncertainty about two teams that win only slightly more often than they lose.
  3. The larger the two teams’ demonstrated strengths are, the narrower the probability distribution.  For example, D1=20, D2=40 has the same expected probability as D1=10, D2=20, but the larger parameters represent more evidence about the teams, so the variance is smaller.
  4. This is actually pretty rigorous in Bayesian terms: the Beta distribution is the conjugate prior for each game’s win/loss outcome, so throughout the season we can update the posterior distribution of the probability of winning from the prior distribution and the most recent game.
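That last point is just Beta-Bernoulli conjugacy.  A sketch of the update (the helper name is mine, not from my actual code):

```python
def update_beta(a, b, won):
    """Posterior Beta parameters after one observed game: a win adds 1 to
    the first parameter, a loss adds 1 to the second."""
    return (a + 1, b) if won else (a, b + 1)
```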

So, how well does the model work?  Good question.  I used it on ESPN, and it’s currently ranked in the 92.9th percentile, i.e., better than almost 93% of all ESPN brackets.  All of my final four teams are still alive, and in general, the model predicted several of the biggest upsets in the tournament (e.g., Murray State vs Vanderbilt!).  That said, this is just one random draw from the model.  To test it further, I would like to go through a whole season of games and figure out if the probabilities of winning correspond to the statistics of a Beta distribution for the game’s D1 and D2.  Moreover, I would like to infer the alpha parameter that I mention above.

If the model appears accurate, and we can properly infer alpha, then we get a probabilistic assessment of how feasible it is to even pick tournament champions.  It may just be that at the end of the day, it comes down to luck.

November 24, 2009

Allman Brothers Band

Filed under: Random — cec @ 3:31 pm

It’s a good day for the Allman Brothers. Great southern rock without the confederate overtones and segregationist dog whistles of Lynyrd Skynyrd.

November 11, 2009

Great moments in . . .

Filed under: Random,Security,Technical — cec @ 8:39 pm

Minor notes, none worth their own post.

  • Traffic management: I get a call from K around 5:30. She’s stuck behind an accident and the cops on the scene, a) don’t tell people to take a detour until they’ve been there for a half hour; and b) once the ambulance has left the scene, don’t direct traffic around the one remaining open lane. So, after waiting a half hour, K has to take a 20+ minute detour home.
  • Memory: Once she gets in, K and I are fixing leftovers for dinner. C: “Hey, where are the mashed potatoes?” K: “Where did you put them?” “In the fridge, but I can’t find them.” “Maybe they’re in the freezer.” “Nope, not there either.” Ten minutes of looking for the potatoes. Did we throw them out on Sunday? Nope, not in the trash. Did C put them in the pantry? Nope. Can’t find ’em, can’t find ’em. Finally, K says, “wait, we fixed rice on Sunday.” There weren’t any potatoes. I would attribute it to getting old, but I’ve always been this way.
  • FUD (fear, uncertainty and doubt): we’re testing some things at the office – will our authentication system (Active Directory) honor password failure lockouts when using LDAP authentication? I ask our Windows consultant to either a) answer the question, or b) enable an account lockout policy so we can test (a sketch of the test follows this list). He responds that he can do that, but warns that “many Linux services aren’t well-designed for this, and repeatedly try a cached or user-provided password, so that users or service accounts may be mysteriously locked out after one attempt or at some future time when passwords change.” Which is complete and utter B.S. Signs that it’s BS? He references Linux services as opposed to open source, i.e., an attempted Linux dig. And I used to “own” identity management services, including authentication, at a large university – if this were the case, things would have blown up within 10 minutes. I thanked him for the advice and noted that I’ve never seen this, but that it’s why we test.
  • OS performance: we’re looking into some new ideas at the office – things that could be useful as a preprocessor for a host-based intrusion detection system. As part of my testing, I told my laptop to audit all syscalls made to the kernel by all processes on the system. CPU load spiked, system performance went through the floor, and the windowing system became almost completely non-responsive. In the two minutes it took to get access to a terminal, I logged 150 MB of audit logs. On the plus side, all of the information we need can be collected. Now I just need to figure out how to keep a usable system.
  • Self aggrandizement: talking to my technical manager, we need to write up two journal papers based on our recent work. Cool!
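The lockout test from the FUD item is quick to script.  A minimal sketch, assuming the Python ldap3 library (the host and DN are made up):

```python
# Hypothetical lockout test; the host and DN are made up.
from ldap3 import Server, Connection

def lockout_engages(host, user_dn, good_password, attempts=5):
    """Burn `attempts` binds with a bad password, then try the good one.
    If the good bind now fails, the lockout policy is honored over LDAP."""
    server = Server(host)
    for _ in range(attempts):
        Connection(server, user=user_dn, password="wrong-password").bind()
    return not Connection(server, user=user_dn, password=good_password).bind()
```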

I hope everyone had a good Veterans Day and remembered to thank the veterans in their lives.

November 20, 2008

robert johnson

Filed under: Random — cec @ 2:57 pm

Dear Amazon.com:

After buying the collected works of Robert Johnson, blues guitarist from the 1930s, the man who was reputed to have sold his soul to the devil to become the greatest blues guitarist ever, I really don’t think I want albums by Robert Johnson the 70s power pop musician.  I know they have the same name, but trust me, they aren’t the same person.

kthxbai

November 24, 2006

Poindexter

Filed under: Random,Security,Social,Technical,University Life — cec @ 12:55 pm

It’s taken me a bit to write about Admiral Poindexter’s visit and the small group talk we had with him. Let me start by reminding folks that here’s a guy who was convicted of lying to Congress. The conviction was later overturned on a technicality. He’s also very politically savvy. I once asked my father if he would ever pursue becoming a general in the army. He told me that he was hoping to make full colonel (he later retired as a lt. colonel), but that becoming a general required a literal act of Congress and that you needed to become a politician. I would assume the same is true of an admiral, and doubly so in the case of Poindexter, who managed to become the highest ranking geek in government. All of which is to say: take my impressions with a grain of salt.

When I met Poindexter, he came across as a very kind, gentle and grandfatherly figure. He smokes a pipe and was more than willing to tell stories about his career. It seems that he started in the Navy in college, finishing up with a degree in engineering (w00t!). This was around the time the Soviets sent up Sputnik. The first Russian satellite caused something of a panic in the US and, arguably, did more to encourage investment in science and engineering than any other event. The military’s response was to select 5 men from the Army and 5 from the Navy to pursue graduate degrees in science or engineering, anywhere in the country. Poindexter chose physics at Caltech. After discussing his trials getting into and then through grad school, he noted that he’s never taught physics, never been in a lab, never really used his degree, but it did give him a solid understanding of the scientific method.

After grad school, he held several different positions, and in each he played the role of technology evangelist: one of the first to use computers in the Navy, set up the first video conferencing system among the National Security Council offices, first to use email (on a mainframe!) in the White House, etc. Like I said, the highest ranking geek in government.

Shortly after September 11, Poindexter was asked to head up the DARPA Information Awareness Office (IAO) projects. In talking with him, I definitely have the sense of a man who loves his country and truly believes that terrorism is the greatest threat it has ever encountered. I disagree with him regarding the extent of the threat that terrorism presents, and so he and I may disagree on the appropriateness of the IAO’s work, but unlike many politicians, I don’t think that he’s using terrorism to advance other goals. I don’t believe that he’s hypocritical about his work.

So, what is his work? One of Poindexter’s chief complaints is that he (and TIA) were unfairly maligned in the media. If you recall, TIA was presented as a giant “Hoover” of a database: the government would collect information from a number of private sources and perform data mining on it in order to identify (potential) terrorists amongst us. Lots of us who are concerned with security and privacy were worried about this. The privacy angle is disturbing enough, but from the security standpoint, you are creating an attractive nuisance: the first hacker who comes along and can get through the government’s security measures is going to have a huge amount of data. Consolidating databases also increases the likelihood that the businesses involved will use the information. For example, can you be denied insurance if you are overweight, but grocery records indicate you buy junk food?

Beyond the privacy and security concerns was the very real question of how this was going to work, i.e., would it really keep us safer? Traditional data mining techniques find statistically significant patterns in large data sets. Terrorists (one hopes) are not statistically significant – unless there are a lot more of them. This is actually another of Poindexter’s complaints – that his proposal should never have been called data mining; data mining won’t work. He was working on a “data analysis” system.
In his presentation, Poindexter tells us that the media got it wrong. He never planned a single huge database. Instead, he planned to leave the data where it was and to build a distributed database on top. Each participating database would make use of a “privacy appliance.” The privacy appliance would be connected to a query system and would anonymize the data before sending it to the query system.

To detect terrorists, he would have a “Red Team.” This is the group that is intended to think like terrorists. Their job is to hatch plots and to determine what it would take to implement the plots. For example, blowing up a building might require large amounts of fertilizer and fuel oil. Purchasing these supplies would leave a footprint in “information space.” The Red Team would pass this step along to the analysts who would then query the system with this pattern to find anonymous individuals matching it. Of course, purchasing fuel oil and fertilizer would flag every small farmer in the country. So the Red Team would go back and look at step two, perhaps renting a large van. New query pattern, new search. Repeat until you either don’t find anyone, or until you are specific enough to get a legally authorized search warrant.
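In code terms, the analysts’ loop sounds something like this toy sketch (every name here is hypothetical, not from TIA):

```python
# Toy sketch of the iterative Red Team / analyst query loop described above.
def find_candidates(plot_steps, query, warrant_threshold=100):
    """Intersect anonymized hit sets for successive plot steps until the
    pool is empty or small enough to justify seeking a warrant."""
    candidates = None
    for step in plot_steps:                  # e.g. "bought fertilizer and fuel oil"
        hits = set(query(step))              # anonymized IDs matching this step
        candidates = hits if candidates is None else candidates & hits
        if not candidates:
            return set()                     # don't find anyone: stop
        if len(candidates) <= warrant_threshold:
            return candidates                # specific enough: seek a warrant
    return candidates
```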

Poindexter also notes that this was a research program, not an operational one; that the “total” in TIA was meant to encourage researchers to think broadly; and finally, that the reason the privacy part did not get off the ground sooner is that none of the researchers were interested in this aspect – they only received two privacy proposals.

Interesting idea. A few problems:

  1. I’ve gone back through the documentation available at the time, and I see nothing about red teams, distributed databases, or privacy appliances. The early architecture diagrams all seem to indicate a monolithic database.
  2. It’s still not clear to me that this would work. The red teams would have to come up with millions of patterns, and even then you are not guaranteed to cover everything.
  3. Regarding research vs. operational: this is a lovely thought, but at the time, iirc, there were reports of TIA receiving real data. In fact, even as a research project, it would have needed real data in order to test.
  4. Regarding the “total” in TIA – that was a pretty scary logo if that was the case.

So, it may be that this is a refinement of the original ideas. In which case, it seems like a good refinement: from the privacy and security standpoint, it is better suited than the original ideas. However, I don’t think that Poindexter was being entirely forthcoming.

All in all, a very interesting day and a very interesting man.

November 12, 2006

“A plan, a plan, my kingdom for a plan!”

Filed under: Random — cec @ 1:47 pm

Now that the Democratic Party has won Congress, a result I am thrilled to see, there will be increasing cries for them to present “A Plan” for Iraq. Leaving aside the constitutional issues, i.e., that the executive branch is responsible for military operations while Congress controls the purse strings and provides oversight, asking for a plan at this point is a complete exercise in futility. There were a number of plans presented or endorsed by Democrats over the past four years, and all of them were ignored. At this point, Republicans are basically saying, “we’ve ignored you for years, but now that it’s completely fubar’d, what do you intend to do about it?”

Unfortunately, there is probably nothing that can be done about Iraq – or at least very little that is palatable. We’ll see what the Baker commission comes back with, but in the meantime let’s review the things that we could have done:

1) Most obvious, we could have avoided the whole war to begin with. As was known before and proved after the invasion, Saddam had no WMDs, was not a threat to us, was not involved in 9/11, and was cooperating with the IAEA. I remember the state of the union speech that Bush gave in the run-up to the war. I remember screaming at the television that the man was lying. Aluminum tubes? They were of insufficient strength to separate uranium. Uranium from Niger? Complete fabrication. I could go on and on about what was known before the SOTU address. The only question left in my mind is whether Bush was lying or completely ignorant.

2) Sufficient troops. General Eric Shinseki suggested that we needed several hundred thousand troops to pacify Iraq. For this, he had his authority ripped out from under him when a report was released stating that Rumsfeld had named Shinseki’s successor 14 months before his term expired. In any event, Shinseki’s estimate of several hundred thousand troops seems to have come straight out of the Army War College’s journal, Parameters. As this article notes, necessary force ratios are proportional to population size and are affected by the level of violence. A peaceful country like the U.S. might maintain a police force with a force ratio of around 3 per thousand, but pacifying a country might take a force ratio between 10 and 20 per thousand. The article notes that the British had around 20 per thousand in Northern Ireland during the height of the unrest there. Right now, our force ratio in Iraq is in the neighborhood of 5 to 6 per thousand, and we’re in the midst of a civil war. At 26 million people, 15 to 20 per thousand works out to roughly 400-500,000 troops, which is what bringing order to Iraq could easily require. As the 1995 article notes, “we must finally acknowledge that many countries are simply too big to be plausible candidates for stabilization by external forces.” At 26 million people, some 6 million in Baghdad alone, Iraq may be one of them.

3) Of course, at the outset, a lower force ratio could have worked, and some of those troops could have been Iraqi. So, a third plan: maintain, don’t disband, the army. Unfortunately, the administration, through the CPA, did disband the army, and its discharged soldiers formed the nucleus of the insurgency. So, instead of having a stabilizing force, we gave a leg up to our enemies and handed them weapons with which to attack us.

4) Along the lines of disbanding the army, many of us opposed de-baathification. This policy essentially drove out of government anyone who had belonged to the Baath party, i.e., all of the people that actually knew how to work the government and to provide services. In order to have any position of responsibility in Iraq under Saddam, you had to belong to the Baath party. You didn’t have to kill people, you didn’t have to believe in it, you just had to officially sign up. A much smarter policy would have been to get rid of the killers and abusers and leave the engineers, military officers and teachers.

5) As the situation started to deteriorate, we needed to train the Iraqis to get them in a position to stabilize their own country, i.e., to get the overall effective force ratio up around 20 per thousand. The problem is that the soldiers, police officers and recruits are a) afraid to leave their families; and b) getting killed before they can be trained to defend themselves – in many cases, killed before they even join up. Addressing this problem by moving recruits to another country, or to a safer location within Iraq, for training could have helped.

I’m sure I’m missing other “plans” from the past 4 years; these are just the ones I can come up with off the top of my head.  At this point, there aren’t too many options left. Stabilizing the country would take a massive increase in troops – troops the U.S. doesn’t have. Most of our allies have either bailed out or are in the process of bailing. This leaves us with only a few other sources of troops: one could imagine that we would have to ~~bribe~~ provide incentives to the Egyptians, or more distastefully to the Iranians and/or Syrians. None of these countries will willingly put forward the hundreds of thousands of troops needed, and their price for doing so would be incredibly steep, but still cheaper than our participation in an ongoing civil war.

The moral is that preventing a problem is far easier than correcting it. The Republicans didn’t prevent the problem, but now they want someone else to take the responsibility for a lack of options for fixing it.

October 9, 2006

Free hugs!

Filed under: Random — cec @ 10:15 pm

and if you haven’t seen this, you should – it’s very good.

[embedded YouTube video: Free Hugs]

October 1, 2006

workplace culture

Filed under: Random — cec @ 9:51 pm

Caveat lector – this is not a fully formed thought; I’m just trying to get some ideas down.

I worked with a guy (not a friend) who was obsessed by workplace culture. As near as I could tell, he came from an environment with an entirely different culture and didn’t care for the one we had. At one point, he and another person (a friend) gave a talk on culture. They described different aspects of culture, including appearance and behavior.

Workplace culture is important. Studies have shown that companies with a “positive” culture are more productive. What I don’t have a good sense of is whether the culture causes the productivity, the productivity causes the positive culture, or the two are correlated without either causing the other.

I tend to think that workplace culture is an evolved group response to circumstances. Changing the external manifestations of culture, without changing the circumstances the culture evolved in response to, is pointless. Cultures (workplace and otherwise) seem to be “evolutionarily stable strategies” (ESSs). In evolutionary biology, an ESS is a strategy that, *if* adopted by all members of the population, cannot be beaten by any alternative strategy. Essentially, deviations from the norm are penalized, driving the population back to the norm.
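As a toy illustration of the ESS condition (my example, not from the talk): with payoff[s][t] the payoff to strategy s played against strategy t, Maynard Smith’s definition can be checked mechanically:

```python
# Toy ESS check. payoff[s][t] is the payoff to strategy s against strategy t.
def is_ess(payoff, s):
    """s is an ESS if, for every mutant t != s, either E(s,s) > E(t,s),
    or E(s,s) == E(t,s) and E(s,t) > E(t,t)."""
    for t in range(len(payoff)):
        if t == s:
            continue
        if payoff[t][s] > payoff[s][s]:
            return False   # the mutant invades outright
        if payoff[t][s] == payoff[s][s] and payoff[t][t] >= payoff[s][t]:
            return False   # the mutant drifts in on ties
    return True
```

The tie-breaking clause is what makes a norm self-policing: even a mutant that does no worse against the incumbents loses ground once it starts meeting copies of itself.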

I suspect that what makes cultures so difficult to change is that they are ESSs. Trying to change a culture by changing specific behaviors, or by bringing in a new employee exhibiting the desired behavior, is destined to fail, because success within the culture depends on behaving according to the cultural norms. I think that the only way to change a culture is to identify the underlying issues that caused it to evolve and to address those issues.

For example, I know people who would like to see their work environment become more innovative – like (they perceive) Apple’s or Google’s. To get there, they try to encourage people to be more creative in their jobs, or they propose that we all dress in a cool, hip way, like the Apple “geniuses” (seriously, I’m not making this up). The problem is that if you perceive that the culture is not creative, it is likely because the workplace does not reward creativity. For example, as I understand it, Google allows employees to spend up to one day a week working on a personally chosen, pie-in-the-sky project. But to do that, it has had to overstaff; if it didn’t, people spending time on random projects would cause other things to break.

So, if you want your culture to be more like Google’s, you may have to consider overstaffing to get there. You can’t just expect that encouraging people to change their behavior and be innovative will work.
