I spent most of my day today helping a friend in the lab reinstall his computer. I tried to do something sophisticated and beautiful, which of course failed miserably, but that’s not a very entertaining story.

Eventually we gave up and just installed a standard, basic Windows XP Pro, from the original install CD. The ethernet driver was not included, so I had to download it on another machine and then copy it over, but it installed correctly. We planned to download the rest of the drivers using Internet Explorer, which is included in the default installation.

We opened IE, which went to its default homepage, www.msn.com … and immediately crashed. It threw up one of those dialogs: “Do you want to report an error to Microsoft”. We tried it again, and got the same result every time. Sometimes a bit of the page would load in the second or two before the crash occurred, but it always crashed and never finished loading. Eventually, I managed to resolve the issue by launching it from the command line, “explorer http://google.com”, after which we changed the homepage to Google.

Anyway, the point is that Microsoft, in the process of updating their own website (msn.com), has rendered the default installation of their own browser completely unusable. If I hadn’t known how to invoke the thing from the command prompt, or had another computer handy, we would have been completely stuck.


Nothing too exciting. Experiments in lab, dinner with friends, life as it is meant to be, but not remarkably so.

I spent today writing a pretty nifty data structure, but then, I’m working on a library of nifty data structures, so that’s a fairly common occurrence these days. This one is just your basic rooted tree, but represented as a pseudotime command stream so that multiple non-synchronized users can edit the tree simultaneously. Some work is required to maintain the tree invariants…
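To give a flavor of the idea, here is a heavily simplified sketch, entirely my own reconstruction rather than the library’s actual API: represent the tree as a parent map, treat edits as a stream of commands, and reject any command that would violate the tree invariants (such as a move that would create a cycle).

```python
class CmdTree:
    """Hypothetical sketch: a rooted tree edited via a command stream.
    Invalid commands are dropped so the structure always remains a tree."""

    def __init__(self):
        self.parent = {"root": None}

    def _would_cycle(self, node, new_parent):
        # Walking up from new_parent must never pass through node,
        # otherwise the move would detach a cycle from the root.
        p = new_parent
        while p is not None:
            if p == node:
                return True
            p = self.parent[p]
        return False

    def apply(self, cmd):
        op, node, parent = cmd
        if op == "add" and node not in self.parent and parent in self.parent:
            self.parent[node] = parent
            return True
        if (op == "move" and node in self.parent and parent in self.parent
                and node != "root" and not self._would_cycle(node, parent)):
            self.parent[node] = parent
            return True
        return False  # invalid command: dropped, invariants preserved
```

With non-synchronized users, two valid-looking commands can conflict once interleaved; dropping (or reordering) the losing command is one simple way to keep every replica a genuine tree.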


Yesterday was the inauguration. It seemed like it would be a sufficiently historic occasion to be worth watching live, even though it came in the middle of the workday, so I started making plans with friends. I even baked some cookies in preparation (dark chocolate dough with white-chocolate chips or crushed candy canes). We decided to meet up before the main event in a lounge at the med school, find an open classroom with a big screen for presentations, and plug in a laptop, streaming the speeches over the internet.

This turned out not to be necessary. The Dean’s Office had helpfully put the multimedia team on the task in advance, and so there was already an enormous plasma screen TV, hooked up to an ATSC antenna and big concert-type speakers on tripods, in that very lounge. We watched the proceedings in a crowd of at least 100 people, at just one of five locations at the medical school alone.

It was quite an experience. During the speech, there were waves of applause after compelling lines (and particularly after Obama said “we will restore science to its rightful place”). It is quite astonishing when a group of scientists and doctors, cerebral folk not easily lost in the moment, start emitting spontaneous cheers while watching television.

Let’s check back in a year and see how we’re doing.

Mystery Hunt

I spent the weekend at Mystery Hunt 2009, with team Codex (they change the name a bit every year, it seems; this year it was Codex Magliabechiano). I’ve participated for a few years, but never like this.

Codex is a relatively typical top-tier Mystery Hunt team: over 50 people, most with strong social ties to MIT though fewer with an academic history here, and many who’ve participated in Mystery Hunt for years, if not decades. I shouldn’t reveal too many details, but Codex also brings to bear a wide array of collaborative technologies, prepared in advance, to let groups work together on puzzles as efficiently as possible, even when, as was the case this year, many of the best solvers are physically in Mountain View, CA or Zurich.

The puzzles were hard, as always, but maybe harder than I had previously realized. The puzzles are typically multi-layered, requiring hours to solve with a typical subteam of about 5 people. Some are solved much faster than others, and the system generally allows a team to win without solving every puzzle.

Let me give you an example: Space Madness. This puzzle at first appeared to be a typical word-find, of the sort you might give to a third-grade class. The puzzle was a 13×13 grid of letters, with columns marked a-m and rows n-z. After staring at the page for a few seconds, one would quickly notice a major change from the standard word-find: the letters are changing in time, fading from one board to another in an endless cycle. The second change one might notice is that the words are not provided: instead, we have two sets of clues, also labeled a-m and n-z. Each clue has a number next to it, in a standard form to indicate the number of letters in the correct answer. Also, many of the letters in the grid seemed to be “X”.

Based on this information, we concluded that the problem was a word-find in three dimensions, with time supplying the third, and so a group of people downloaded the animation, split it into its component frames (there were in fact 13, forming a 13×13×13 cube of letters), and started transcribing them. I wrote a word-search program in Python that could read in this text, form it into a cube, and then search for any word that can be formed by stepping between adjacent cells.
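The core of such a search is simple enough to sketch. This is an after-the-fact reconstruction, not the original program, and it assumes face-adjacent steps with no cell reused within a word:

```python
def find_word(cube, word):
    """Depth-first search for `word` as a path of face-adjacent cells
    in an n*n*n cube of letters (adjacency rule is an assumption)."""
    n = len(cube)

    def step(x, y, z, i, seen):
        if cube[x][y][z] != word[i]:
            return False
        if i == len(word) - 1:
            return True
        seen = seen | {(x, y, z)}  # no revisiting cells within one word
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nx, ny, nz = x + dx, y + dy, z + dz
            if (0 <= nx < n and 0 <= ny < n and 0 <= nz < n
                    and (nx, ny, nz) not in seen
                    and step(nx, ny, nz, i + 1, seen)):
                return True
        return False

    return any(step(x, y, z, 0, frozenset())
               for x in range(n) for y in range(n) for z in range(n))
```

For the real puzzle the cube would be the 13 transcribed frames stacked in time order.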

A second group, possibly in Zurich, started work deciphering the clues. They eventually discovered the following: the clues each contained one extra word that, when removed, resulted in a more sensible clue. For example, one clue was “Cruise ship feature 1988 (8)”, but the word “ship” was extraneous, leaving “Cruise feature 1988 (8)” as the correct clue, with the answer “COCKTAIL”, the only Tom Cruise movie from 1988 with 8 letters in its name. With this in mind, we determined the extraneous word and correct solution for each of the 26 clues. The first letters of the 26 extra words, spelled out in the order in which the clues were given, read “INEACHPHASEREADBETWEENTHEEXES” (“in each phase, read between the X’s”).

I punched the correct words into my search routine, and while most were present, a few were not. I tried many variations on the search method, including a method in which the letter “X” was ignored, and some words were still not present. We concluded that this was not simply a 3-D word search.

We noticed something odd about our answers. The first 13 were mostly adjectives, and the last 13 were mostly animal names. Someone punched the names into Google, and discovered that these words represent the episode titles of the short-lived 2004 TV show “Wonderfalls”, of which there are exactly 13 episodes, each with a two-word name. The second word is always an animal (the show has an animal theme). This allowed us to match up one clue from a-m (the first word), one clue from n-z (the second word), and a number 1-13 (the number of the episode whose title they create). These three things form a three-dimensional coordinate, specifying a letter in the cube. We looked at these letters, but saw only gibberish when we combined them.

The next step was made overnight, by Codex hunters in some other timezone. They discovered that for each “phase” of the grid, a few letters would be “X” in both the preceding and following phases. Reading between the X’s in this way, they considered only these letters, and found that they spelled out a sentence in each phase. Each sentence comes from a single episode of Wonderfalls. They performed the same 3-dimensional indexing described previously, but this time reordered the letters according to the number of the episode corresponding to the quote in each phase. The result was “XHOCKEYPLAYER”.
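Their extraction rule is easy to express in code. Here is a hypothetical sketch (the function name is mine, and it uses a toy 2×2 example with three phases rather than the real 13×13 grids), keeping each phase’s letters only where the preceding and following phases both show “X”:

```python
def between_the_exes(phases):
    """For each phase, keep the letters at positions that are 'X' in both
    the preceding and the following phase (the cycle wraps around)."""
    n = len(phases)
    out = []
    for i, grid in enumerate(phases):
        prev, nxt = phases[(i - 1) % n], phases[(i + 1) % n]
        msg = "".join(grid[r][c]
                      for r in range(len(grid))
                      for c in range(len(grid[r]))
                      if prev[r][c] == "X" and nxt[r][c] == "X")
        out.append(msg)
    return out
```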

We agonized over XHOCKEYPLAYER for hours. We called it in as the answer, but it was wrong. We called in EXCHEQUER, but that was wrong too. Eventually, after other puzzles in this round were solved, we began to suspect a pattern: the answers were all types of saws (other elements were Band(saw) and Jig(saw)). Another puzzler found that the Buffalo Sabres had once used a logo consisting of two crossed swords, forming an X. Wonderfalls is set in Buffalo. We called in SABRE, which was the correct answer.

There were about 108 puzzles in total, not including challenges involving cooking, costumes, robot-building, metapuzzles, or meta-meta-puzzles. Codex had upwards of 50 puzzlers, I suspect, and we worked for 63 hours. We didn’t win the race, though we were respectably close. That’s not a bad result in Mystery Hunt, especially since the winning team is duty-bound to write the next year’s Hunt.


Last night was the only time I could join in with OLPC’s XOCamp, which has been scheduled during work hours all this week. We settled on dinner at a restaurant in Central Square, a few blocks from my apartment, so I invited people back afterwards. It was probably the most international gathering I’ve ever had, with guests from at least Guatemala, Denmark, Uruguay, Sweden, and more. We sat around and chatted loudly about technical stuff for a few hours. It was nice.

Today I had the MRI scanner reserved from 6 PM onwards (it’s mostly booked during the day for scanning patients). I managed to record some datasets that could mark a significant milestone in my project… but I have no idea if the files were actually saved, or automatically deleted. It’s only my second time scanning alone on a GE machine, though, so I don’t feel too bad.

Making the acquisitions was a bit of a dance. I had wheeled down a cart into the MRI control room, on which rested an entire desktop PC, including monitor and keyboard, for ultrasound data acquisition. Each dataset required a five-step procedure: pre-scan on the MRI control computer, launch my acquisition program on the ultrasound acquisition computer, run back to the MRI controller and start the scan within 15 seconds, check back on the ultrasound machine to see that data is flowing (Windows is a bit unreliable for this), and then run over to the open door of the MRI room and start pulling on the string in a vinyl tube that’s connected to the motion phantom in the bore. The process is a bit of a mess, and my contraption looks like a middle-school class project… but I guess that’s how science works.

Tomorrow, and all weekend, is MIT Mystery Hunt. It’s basically the world’s largest and most difficult puzzle challenge. I’ve participated about four times, and never managed to solve a single puzzle. To give you a flavor, here are a few from past years.


Tonight, once the clinical studies were done for the day, I ran my first test of MRI-ultrasound interference. The results were encouraging.

Among the many problems with doing ultrasound inside an MRI machine is the interference problem. My experiment requires an ultrasound transducer to be placed inside the bore of an MRI machine. The equipment to pulse and receive from the transducer, being metal, has to live outside of the room altogether. They are connected by a cable, about a meter of which is inside the MRI machine. This length of cable acts as an antenna, picking up the electromagnetic disturbances created by the scanner.

One thing that’s (deliberately) underemphasized to patients entering an MRI machine is just how much power is being sent through their bodies. I’ve never hooked up an ammeter to an MRI, but my impression is that the imaging process typically uses kilowatts of peak power. That power passes through the patient but is hopefully not absorbed, and instead bounces out of the machine altogether.

My cables, being coaxial, are theoretically self-shielding, but in the intense environment inside the bore, they still pick up huge signals from the MRI, big enough to totally swamp any ultrasound measurements I might be trying to make. Luckily for me, there is one saving grace: frequency filtering.

MRI uses two different kinds of time-varying magnetic fields, called the gradients and the B1 field. The gradients are operated like a light switch, switched on when needed and then back off, so they run at a very low effective frequency of a few tens of Hz. The B1 field is also switched, but while it’s on, it oscillates at 63 MHz. My ultrasound transducer runs at about 5 MHz.

I checked today, and sure enough, setting up a bandpass filter from 1-10 MHz almost perfectly eliminates the MRI interference, leaving behind only the signal I’m looking for. It’s nice when the world behaves the way they told you it would in freshman electrical engineering.
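The actual filter is analog hardware, but the principle, that the interference and the signal live at well-separated frequencies, is easy to demonstrate digitally. A toy sketch (all numbers illustrative; the moving average here is a crude stand-in for the real bandpass, chosen because its null lands exactly on 63 MHz at this sample rate):

```python
import math

FS = 630e6   # sample rate: exactly 10 samples per 63 MHz cycle
N = 1260     # a whole number of cycles of both tones

def tone(freq, amp=1.0):
    return [amp * math.sin(2 * math.pi * freq * k / FS) for k in range(N)]

# Received signal: a small 5 MHz "ultrasound echo" buried under a
# 20x larger 63 MHz "B1 pickup".
signal = [u + r for u, r in zip(tone(5e6), tone(63e6, amp=20.0))]

# 10-tap moving average: summing one full 63 MHz cycle cancels it exactly,
# while 5 MHz (126 samples per cycle) passes nearly untouched.
filtered = [sum(signal[k - 9:k + 1]) / 10 for k in range(9, N)]

def amplitude(x, freq):
    # Quadrature (lock-in style) estimate of the component at `freq`.
    s = sum(v * math.sin(2 * math.pi * freq * k / FS) for k, v in enumerate(x))
    c = sum(v * math.cos(2 * math.pi * freq * k / FS) for k, v in enumerate(x))
    return 2 * math.hypot(s, c) / len(x)
```

Before filtering, the 63 MHz component dominates; afterwards it is gone and the 5 MHz component survives almost unchanged, which is the same separation the hardware bandpass provides.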


It’s snowing beautifully outside. I got home just as it was starting. I looked out my window, at the amber color of the flakes beneath the sodium lamps, and wondered: how much does a snowstorm weigh?

A really serious snowstorm might drop a foot of snow on the entire northeast US (CT, MA, VT, NH, and much of NY, say). A foot of snow, if you melt it, is about 3 cm of water, which weighs about 30 kg per square meter. That means about 30 m^2 of snow weighs a ton. A km^2 is 10^6 m^2, so the amount of snow that falls on a square kilometer is about 30,000 tons. That’s a lot.

Very roughly, the area of a typical large storm might be about 300 km by 300 km (180 miles by 180 miles). That’s about 100,000 km^2, for a total weight of 3 billion tons.
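A few lines of Python check the arithmetic (assuming the usual rough 10:1 snow-to-water ratio):

```python
# Back-of-the-envelope check of the storm-weight estimate.
foot_of_snow_m = 0.3048
water_equiv_m = foot_of_snow_m / 10       # ~10:1 snow:water -> ~3 cm of water
kg_per_m2 = water_equiv_m * 1000          # water is 1000 kg per m^3
tons_per_km2 = kg_per_m2 * 1e6 / 1000     # 10^6 m^2 per km^2, 1000 kg per ton
storm_area_km2 = 300 * 300                # roughly 100,000 km^2
total_tons = tons_per_km2 * storm_area_km2
print(f"{tons_per_km2:,.0f} tons per km^2, {total_tons:.1e} tons total")
```

The numbers come out around 30,000 tons per square kilometer and about 3 billion tons for the whole storm, matching the estimate above.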

Blizzards are heavy.


Today I’m at the Fedora Users and Developers Conference, mostly to talk about Sugar/OLPC stuff. We scheduled a conference for this weekend several months ago… and things have changed quite a bit since then. We have a lot to talk about, though not quite the things we were planning on talking about.

These sorts of conferences are always cool, because I end up meeting people whom I’ve previously only read about on the internet. It really feels like hanging out with celebrities. The last time I went to one of these things I saw my first OLPC machine. Who knows what could happen this time…

The weather has cleared up substantially, and is now the classic Boston January day: crystalline cold, with a pure cloudless sky and the remaining snow deep-frozen into solid rock. It is a harsh landscape, but beautiful.

Bad Weather

The moment I wrote my first line of code for OLPC, I discovered that the project was deeply, unexpectedly fulfilling to me.  The work had a combination of instant feedback, an important cause, a friendly community of truly brilliant people, and intriguing technical challenges.  At a time when my commitments in graduate school were a bit underspecified, I found myself spending many hours every day working on various projects for OLPC, a few of which even came to fruition.  By luck, I happen to live a few minutes’ walk from the headquarters, so it was easy for me to meet the engineers there, who soon became some of my closest friends.

OLPC was hiring during all of last year, and so I considered applying for a job there.  Every time, I concluded that it was more important to me to finish grad school first.   I know too many stories of graduate students who left to work for a startup and never came back.  Once I had graduated, in a few years, then maybe I could work for them.

That now seems unlikely.  OLPC has laid off most of their software engineers.  As a charitable organization, they are dependent on donations from their corporate backers, who have become less and less generous for a variety of reasons.

I spent some time with the staff last night, before skating home across the ice-sheet sidewalks of Cambridge.  The people around the table had largely just lost their jobs, and yet they shared with me an unexpected optimism.  The OLPC Foundation has been financially strapped for months, and the resulting internal tensions have slowed progress tremendously by causing the organization to focus on short-term concerns just to stay afloat.  Their ability to maintain momentum and focus on long-term technical improvements had been lost months ago.  The resulting divergence between the goals of Sugar Labs and OLPC created friction that wasted a great deal of energy.

I foresee, or at least hope for, a renewed relationship between Sugar Labs and OLPC, and also between Sugar Labs and the hundreds of thousands of children who are using XOs and Sugar every day.  There are many challenges ahead, and principal among them is the problem of finding the hundreds of thousands of dollars we would need to pay the salaries of developers to build this system.

Even without any immediate source for that funding, though, I am still optimistic.  A company is hard to maintain, requiring millions of dollars a year and a great deal of management overhead.  An open coalition of volunteers, engineering firms, support companies, teachers, students… that’s much harder to kill.

So here’s to a new year of Sugar and OLPC.  May it be a good year for all concerned.

EDIT: modified to be less inaccurate regarding the number of engineers.

Back to Work

I’m back in Boston, and back at work.  Life is pretty much back to normal, except that with the bike lanes covered in packed snow I am unable (or at least unwilling) to pedal back and forth.

In the life of Ben, it’s a slow news day.