The Clock

I watched The Clock at the Boston Museum of Fine Arts from about 4:45 to 6:30 on Friday. My thoughts on The Clock:

The experiment works in part because scenes with clocks in them are usually frenetic. In a movie, the presence of a clock usually means someone is in a rush, and so most of the sequences convey urgency.

It’s often hard to spot the clock in each scene. In less exciting sequences, this serves as a game to pass the time. In many cases the clock in question is never in focus, or is moving too fast for the viewer to notice. The editors must have done careful freeze-frames and zoomed in on wristwatches to work out the indicated time.

The selected films are mostly in English, with a fair number in French and very few in any other languages. This feels fairly arbitrary to me.

Scenes from multiple films are often mixed within each segment. It seems like the editors adopted a relaxed rule, maybe something like: “if a clock appeared in an original, then a one minute window around the moment of appearance is fair game to include during that minute of The Clock, spliced together with other clips in any order”.

The editing makes heavy use of L cuts and audio crossfades to make the fairly random assortment of sources feel more cohesive.

I swear I saw a young Michael Caine at least twice in two different roles.

Some of the sources were distinctly low-fidelity, often due to framerate matching issues. I think this might be the first production I’ve seen that would really have benefited from a full Variable Frame Rate render and display pipeline.

I started to wonder about connections to deep learning. Could we train an image captioning network to identify images of clocks and watches, then run it on a massive video corpus to generate The Clock automatically?
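The assembly step of that idea is mostly bookkeeping, and easy to sketch. In the toy code below, the `detections` list stands in for the output of a hypothetical clock-reading model (film, clip offset, and the hour and minute shown on screen); the names and numbers are all made up. Given such detections, building the minute-by-minute index is trivial:

```python
from collections import defaultdict

def build_clock_index(detections):
    """Group detected clips by the minute their on-screen clock shows.

    `detections` is a list of (film, clip_start_seconds, shown_hour,
    shown_minute) tuples, as a hypothetical clock-reading model might emit.
    """
    index = defaultdict(list)
    for film, start, hour, minute in detections:
        index[(hour, minute)].append((film, start))
    return index

# Made-up detections standing in for model output.
detections = [
    ("Film A", 512.0, 16, 45),
    ("Film B", 90.5, 16, 45),
    ("Film C", 1301.0, 18, 30),
]

index = build_clock_index(detections)
print(index[(16, 45)])  # clips eligible to splice in at 4:45 PM
```

The hard part, of course, is the detector itself, and the editorial taste in choosing among candidate clips for each minute.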

Or, could we construct a spatial analogue to The Clock’s time-for-time conceit? How about a service that notifies you of a film clip shot at your current location? With a large GPS-tagged corpus (or a location-finder neural network) it might be possible to do this with pretty broad coverage.
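The core of such a service is just a nearest-neighbor query over shooting locations. Here is a minimal sketch under stated assumptions: the corpus is a small dict of clip names mapped to invented (latitude, longitude) pairs, and great-circle distance is computed with the standard haversine formula. A real service would need a huge GPS-tagged corpus and a spatial index rather than a linear scan.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_clip(corpus, lat, lon):
    """Return the name of the corpus clip shot closest to (lat, lon)."""
    return min(corpus, key=lambda name: haversine_km(lat, lon, *corpus[name]))

# Toy corpus with made-up names and coordinates.
corpus = {
    "clip shot near MFA Boston": (42.3394, -71.0940),
    "clip shot in Harvard Square": (42.3732, -71.1190),
}

print(nearest_clip(corpus, 42.34, -71.09))  # the nearer of the two toy clips
```

Swapping the linear `min` for a k-d tree or geohash index would make this viable at corpus scale.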

The SONY hack is the NSA’s fault, not North Korea’s

New York Times anonymous source notwithstanding, it seems unlikely that North Korea is really responsible for the “Guardians of Peace” hack at SONY. It’s an interesting puzzle who exactly executed the attack, but never mind the facts. Who should we blame?

I say blame the NSA. Here’s why.

The NSA is chartered both to spy on our enemies and to protect ourselves. The offensive* mission traditionally produced espionage operations like ECHELON, while the defensive mission resulted in improved security standards like DES, SHA-1, and SELinux. The defensive mission also produced corporate partnerships to secure our commercial infrastructure.

(Which of these two missions sounds like Security to you?)

The problem is, these two missions are sometimes in conflict with each other. For example, companies around the world use the same networking equipment, so fixing flaws in that equipment helps make the US more secure, but it also makes it harder to spy on companies in enemy countries. Before 9/11, the balance seems to have been relatively stable, but under the Bush administration it swung radically, as the NSA’s mission shifted from foreign espionage to domestic surveillance, presumably motivated by a desire to discover and monitor all the world’s angry young men. When your mission is to break into American companies’ computer systems inside the USA, there’s not much sense in claiming to care about domestic security.

The result was a breakdown of the NSA’s traditional defensive role. Most pointedly, in 2004, the NSA invented and promoted the Dual_EC_DRBG standard, which was later discovered to have a mathematical “back door” that made systems that use it less secure, a stark contrast to the NSA’s previous standards proposals, which improved security.

More broadly, it seems that the NSA has been collecting exploits for widely used computer systems, and not informing the users or manufacturers. It’s like discovering a way to crack your hotel room’s safe, and not telling the hotel or the maker because if they fix the safes, you won’t be able to rob the other guests. This seems like a great strategy until someone else figures out the flaw, and robs you.

That’s what happened to SONY.

The NSA’s annual budget is estimated to be about $11 billion. Try, if you can, to imagine what it would be like if half of that budget were spent on making the computer security of all companies and citizens of the US and our allies better … and the other half were not spent on making that security worse.

Half the NSA’s budget is close to the entire budget of the National Science Foundation, which funds about 10,000 different scientific initiatives in the US every year. If that half of the NSA were wholeheartedly devoted to defense, I imagine we would see military-scale red-team efforts to find (and report, and fix!) holes in corporate infrastructure, deep funding of efforts to produce and invent suitable, secure consumer software and hardware, and thousands more things to strengthen our society’s digital infrastructure.

In that world, I don’t think SONY would be getting hacked so easily.

And I think to myself:
“What a wonderful world”

* I think this qualifies as a pun.

Daylight hours

Theory of the day: unscheduled employees tend to work longest hours in late spring, and shortest hours in late fall, because their sense of how much daylight there ought to be at the end of the workday lags the actual changes in sunset times by several weeks. At least, that seems to be the case for me.

Maybe I should get my watch repaired.


Anyone who loves video software has probably caught more than one glimpse of the Blender Foundation’s short films: Elephants Dream, Big Buck Bunny, Sintel, and Tears of Steel. I’ve enjoyed them from the beginning, and never paid a dime, on account of their impeccable Creative Commons licensing.

I always hoped that the little open source project would one day grow up enough to make a full-length feature film. Now they’ve decided to try, and they’ve raised more than half their funding target … with only two days to go. You can donate here. I think of it like buying a movie ticket, except that what you get is not just the right to watch the movie, but actually ownership of the movie itself.


My home for the next two nights is the Hotel 309, right by the office.  It’s the stingiest hotel I’ve ever stayed in.  Nothing is free: the wi-fi is $8/night and the “business center” is 25 cents/minute after the first 20.  There’s no soap by the bathroom sink, just the soap dispenser over the tub.  Even in the middle of winter, there is no box of tissues.  Its status as a 2-star hotel is well-deserved.

The rooms are also very stylish.  There’s a high-contrast color scheme that spans from the dark wood floors and rug to the boldly matted posters and high-concept lamps.  The furniture has high design value, or at least it did before it got all beat up.

These two themes come together beautifully for me in the (custom printed?) shower curtain, which features a repeating pattern of peacocks and crowns … with severe JPEG artifacts!  The luma blocks are almost two centimeters across.  Someone should tell the artist that bits are cheap these days.



So you’re trying to build a DVD player using Debian Jessie and an Atom D2700 on a Poulsbo board, and you’ve even biked down to the used DVD warehouse and picked up a few $3 ’90s classics for test materials.  Here’s what will happen next:

  1. Gnome 3 at 1920×1080.  The interface is sluggish even on static graphics.  Video is right out, since the graphics is unaccelerated, so every pixel has to be pushed around by the CPU.
  2. Reduce mode to 1280×720 (half the pixels to push), and try VLC in Gnome 3.  Playback is totally choppy.  Sigh.  Not really surprising, since Gnome is running in composited mode via OpenGL commands, which are then being faked on the low-power CPU using llvmpipe.  God only knows how many times each pixel is getting copied.  top shows half the CPU time is spent inside the gnome-shell process.
  3. Switch to XFCE.  Now VLC runs, and nothing else is stealing CPU time.  Still, VLC runs out of CPU when expanded to full screen.  top shows it using 330% of CPU time, which is pretty impressive for a dual-core system.
  4. Switch to Gnome-mplayer, because someone says it’s faster.  Aspect is initially wrong; switch to “x11” output mode to fix it.  Video playback finally runs smooth, even at full screen.  OK, there’s a little bit of tearing, but just pretend that it’s 1999.  top shows … wait for it … 67% CPU utilization, or about one fifth of VLC’s.  (Less, actually, since at that usage VLC was dropping frames.)  Too bad Gnome-mplayer is buggy as heck: buttons like “pause” and “stop” do nothing, and the rest of the user interface is a crapshoot at best.

On a system like this, efficiency makes a big difference.  Now if only we could get efficiency and functionality together…

Proposal: book club for freetarians

There are a lot of people who, for various reasons, prefer to consume food whose creation did not harm animals. (This is not always clear, but for the most part there is consensus on which foods are harmful.) Much (maybe most) food in our society doesn’t meet this standard, so seeking out food that does requires some effort, but it is not tremendously difficult. Many people succeed in eating their preferred food some of the time, when the effort they are willing to expend is greater than the amount required. Some people (vegans) are willing to make enough of an effort that their meals will always meet the criterion.

There are a lot of people who prefer to consume media that are licensed to users on terms that do not harm their freedoms. (This is not always clear, but there is a surprising amount of consensus on the matter.) Certainly most popular media don’t meet the standard; most recorded media, printed works, and live performances are licensed in ways that prevent us from embracing and adapting the work, so seeking out freely licensed works requires some effort.

But that’s where the parallel ends. There are no* free culture vegans (let’s call them “freetarians”). In my opinion, the reason for this is pretty simple: artistic works are cultural works. They are made to be shared. We watch movies, read books, and listen to music with the expectation that we will be able to share the experience with our friends. This shared experience is not just enjoyable in the obvious sense; it’s also crucial for maintaining our membership and status in the social networks that lay the foundation of our lives. Reading the right novel or watching the right movie provides common ground for dropping or catching the right offhand reference that marks you, to the right person, as the right kind of “cultured”. At US high-tech companies, that means science fiction, from Asimov to Stephenson and Star [Trek|Wars]. Among urban hipsters it’s an ever-rotating cast of up-and-coming musicians. Among the devout these works of art are religious, in which case they may actually be freely licensed!

The desire to share a cultural base with others is what makes media marketing so powerful. The purpose of the marketing is not really to convince us that, say, a movie, will be enjoyable on its own merits. No one goes to the movies alone! The purpose of the marketing is to give us the feeling that everyone else will go and see the film, and so we have to go in order to keep our cultural capital up to date. In fact, we should go now, because it’s going to be way popular and our investment will pay off (Ponzi-style) if we see it first.

Hollywood-scale marketing effectively crowds out alternative works. In my free time (and with my free dollars), I will of course choose the work that represents the best investment in cultural capital. That almost always means the one that I can most easily discuss with my friends, which in turn means the one that they’re most likely to consume as well.

I haven’t written all this blather to prove that centralized control of culture is inevitable, although it might sound that way. There’s an escape hatch: if a social network is willing to join together and select collectively which works they will consume, then global popularity becomes much less relevant. As long as several of my friends also consume the work, I can safely consume it myself, knowing that it will be a good investment in building foundations for my relationships. This is especially true if I know that we will have an occasion on which to discuss the work, in which case the discussion is actually my entree into a scene that I could not otherwise access.

This kind of collectively directed consumption has a name: it’s called a book club.

If you want to support the creation of free culture, the easiest way would be to start a free culture book club. There are plenty of works to choose from: the 7 novels of Cory Doctorow, the magnum opus of Jonathan Zittrain, and countless others. That’s just books. We could also mention feature-length films like Sita Sings the Blues and TV series like Pioneer One.

I tried to watch Pioneer One, but found that despite the engaging storyline, I simply couldn’t muster the effort to spend time on something that I would never be able to share. In a sense I was prototyping the free culture book club when I convinced Sarah to watch it with me. Together we watched the entire 6-episode series and were sorely disappointed that there wasn’t more to see.

There are a million ways you could set up such a club. Here’s one: get some friends together and put together a list of works, then vote (by Selectricity, of course). People who like the winner enough to participate also agree to make a non-zero payment to the author, but the amount need not be disclosed. Set a date to meet at a public establishment (quiet enough for a good discussion) to talk about the work. Do it a little and you’ll live in a happy bubble of free cultural works. Do it enough, and that bubble might grow.

So who’s in? Oops, BTW I live in Seattle.

*: ok, maybe not none. Exactly who would qualify is a sticky question.

It’s Google

I’m normally reluctant to talk about the future; most of my posts are in the past tense. But now the plane tickets are purchased, the apartment is booked, and my room is gradually emptying itself of my furniture and belongings. The point of no return is long past.

A few days after Independence Day, I’ll be flying to Mountain View for a week at the Googleplex, and from there to Seattle (or Kirkland), to start work as a software engineer on Google’s WebRTC team, within the larger Chromium development effort. The exact project I’ll be working on initially isn’t yet decided, but a few very exciting ideas have floated by since I was offered the position in March.

Last summer I told a friend that I had no idea where I would be in a year’s time, and when I listed places I might be — Boston, Madrid, San Francisco, Schenectady — Seattle wasn’t even on the list. It still wasn’t in March, when I was offered this position in the Cambridge (MA) office. It was an unfortunate coincidence that the team I’d planned to join was relocated to Seattle shortly after I’d accepted the offer.

My recruiters and managers were helpful and gracious in two key ways. First, they arranged for me to meet with ~5 different leaders in the Cambridge office whose teams I might be able to join instead of moving. Second, they flew me out to Seattle (I’d never been to the city, nor the state, nor any of the states or provinces that it borders) and arranged for meetings with various managers and developers in the Kirkland office, just so I could learn more about the office and the city. I spent the afternoon wandering the city and (with help from a friend of a friend) looking at as many districts as I could squeeze between lunch and sleep.

The visit made all the difference. It made the city real to me … and it seemed like a place that I could live. It also confirmed an impressive pattern: every single Google employee I met, at whichever office, seemed like someone I would be happy to work alongside.

When I returned there were yet more meetings scheduled, but I began to perceive that the move was essentially inevitable. The hiring committee had done their job well, and assigned me to the best fitting position. Everything else was second best at best.

It’s been an up and down experience, with the drudgery of packing and schlepping an unwelcome reminder of the feeling of loss that accompanies leaving history, family, and friends behind. I am learning in the process that, having never really moved, I have no idea how to move.

But there’s also sometimes a sense of joy in it. I am going to be an independent, free adult, in a way that cannot be achieved by even the happiest potted plant.

After signing the same lease on the same student apartment for the seventh time, I worried about getting stuck, in some metaphysical sense: a failure to launch from my too-comfortable cocoon. It was time for a grand adventure.

This is it.

Ethics in an unethical world: Ethics Offsets

The recent hubbub regarding the (admirably public) debate within Mozilla about codec support has set me thinking about how to deal with untenable situations. After rightly railing against H.264 on the web for several years, and pushing free codecs with the full thrust of the organization, Mozilla may now be approaching consensus that they cannot win, and that continued refusal to capitulate to the cartel is tantamount to organizational suicide.

So what can you do, when you find yourself compelled to do something that goes against your ethics? To make a choice that you feel is wrong on its own because it benefits you in other ways, a choice you would like to make only when really necessary and never otherwise? Any thinking person will have this problem, to greater and lesser degrees, throughout their lives. We are not martyrs, so we do what we have to do to survive and try to keep in mind our need to escape from the trap.

Organizations cannot simply keep something in mind, but they can adopt structures that remind their members of their values even when those values are compromised. A common structure of this type is the sin tax, a tax designed (in a democracy) by members of a state to help them break or prevent their own bad habits. Sin taxes work by countering the locally perceived benefit of some action that’s harmful in a larger way, by reminding us of less visible but still important negative considerations. Some of their effect is straightforwardly economic, but some is psychological, to help us remember the bigger picture.

Sin taxes are more or less involuntary, but when the government does not impose these reminders, we often choose to remind ourselves. One currently popular implementation of this concept is the Carbon offset, a payment typically made when burning fuel to counter the effect of global warming. Organizations that buy carbon offsets for their fuel consumption do so to send a message, both internally and externally, that they place real value on minimizing carbon emissions. They may send this message both explicitly (by publicizing the purchase) and implicitly (by its effect on internal and external economic incentives).

Carbon offsets may be in fashion this decade, but there are many older forms of this concept. Maybe the most quotidian is the Curse Jar*, traditionally a place in a home or small office where individuals may make a small payment when using discouraged vocabulary. The Curse Jar provides a disincentive to coarse language despite being strictly voluntary, and despite not purchasing any effect on the linguistic environment (although the coffee fund may help for some). The Curse Jar works simply by reminding group members which behaviors are accepted and which are not.

For Mozilla, the difficulty is not emissions, verbal or vaporous, but ethical behavior. How can Mozilla publicly commit to a standard of behavior while violating it? I humbly submit that the answer is to balance its karmic books, by introducing an Ethics Offset**. When Mozilla finds itself cornered, it may take the necessary unfortunate action … and introduce a proportionate positive action as a reminder about its real values.

In the case at hand, a reasonable Ethics Offset might look like an internal “tax” on all uses of patented codecs. For example, for every Boot2Gecko device that is sold, Mozilla could commit to an offset equal to double the amount spent on patent licenses for the device. The offset could be donated to relevant worthy causes, like organizations that oppose software patents or contribute to the development of patent-free multimedia … but the actual recipient matters much less than the commitment. By accumulating and periodically (and publicly) “losing” this money, Mozilla would remind us all about its commitment to freedom in the multimedia realm. A similar scheme may be appropriate for Firefox Mobile if it is also configured for H.264 support.

Without a reminder of this kind, Mozilla risks becoming dangerously complacent about, and complicit in, the cartel-controlled multimedia monopolies. As long as H.264 support appears to serve Mozilla’s other goals, Mozilla’s commitment to multimedia freedom will remain uncomfortable, inconvenient, and tempting to forget. Greater organizations have slid down off their ethical peaks, on paths paved all along with good intentions.

Most companies would not even consider a public and persistent admission of compromise, but Mozilla is not most companies. Neither are the companies that produce free operating systems, and many other components of the free software ecosystem. None of them should be ashamed to admit when they are forced to compromise their values and support enterprises that, on ethical grounds, they despise … but they should make their position clear, by committing to an Ethics Offset until they can escape from the compromise entirely.

*: Why is there no Wikipedia entry for “Curse Jar”!?
**: Let’s not call it an indulgence.


I was really impressed by Michael Bebenita’s Broadway.js, the recent port of an H.264 decoder to pure Javascript using Emscripten, an LLVM-based C-to-JS converter … but of course this is the opposite of what we want! Who needs H.264? We want WebM!

I’ve spent the past few weekends digging into Broadway.js, stripping out the H.264 bits and replacing them with libvpx and libnestegg. Now it’s working, to a degree. You can see it for yourself at the demo page (so far tested only in Firefox 7…).

I’m not going to be able to take this much further … at least not right now. It’s been a fun exercise though. I invite all interested comers to read some more details and then fork the repo.

Take this thing, and make it your own.