Short: click here if you have Chrome or Firefox. (Only tested in Chrome though; Ubuntu LTS appears to be stuck on Firefox 17.)
So I read all the tutorials and put together the simplest demo page I could manage for live video in WebGL: just display the video feed, with some kind of trivial GPU effect. It seemed like a good Hello World for dipping a toe into WebGL. I didn’t know the first thing about GL, but it looks really exciting and this seemed like a good way to learn.
It wasn’t easy. Writing WebGL for the first time today felt like writing assembler for a machine with only one register of each type. Want to operate on a buffer? Bind it to the target first; only then can the next line run the instruction that does whatever you’re actually trying to do. I have no idea why the API works this way, but I have to imagine it’s decades of legacy cruft accumulated from desktop OpenGL.
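To make the "one register per type" complaint concrete, here's a minimal sketch of the bind-then-operate pattern. The function name and vertex data are mine, not from the demo; only the `gl` calls are real WebGL API:

```javascript
// Uploading vertex data takes three steps, and the middle one is the
// "load it into the register" step: bindBuffer sets the implicit
// ARRAY_BUFFER target, and bufferData then operates on whatever is bound.
function uploadVertices(gl, vertices) {
  const buffer = gl.createBuffer();
  // Step 1: bind the buffer to the ARRAY_BUFFER target ("register").
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  // Step 2: this call never names the buffer — it acts on the bound one.
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
  return buffer;
}
```

Nothing about `bufferData` says which buffer it fills; that's carried entirely by hidden state set on the previous line.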
There’s no new-frame event, so you have to poll for new video frames, which makes me sad. This simple implementation is even worse than polling: it recalculates every tick even if nothing has changed. Maybe you could avoid that by inspecting the video timestamp …
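The timestamp idea might look something like this — a sketch, assuming you poll on every animation frame and treat `video.currentTime` as a stand-in for a frame counter (the function names are mine):

```javascript
// Returns a gate function that answers "has the video advanced since
// the last time I asked?" — upload the texture only when it says yes.
function makeFrameGate() {
  let lastTime = -1;
  return function frameChanged(video) {
    if (video.currentTime === lastTime) return false; // same frame, skip work
    lastTime = video.currentTime;
    return true;
  };
}

// In the render loop it would be used roughly like this (browser-only):
// const frameChanged = makeFrameGate();
// function tick() {
//   if (frameChanged(video)) uploadTextureAndDraw(video);
//   requestAnimationFrame(tick);
// }
```

It's still polling, but at least the redundant texture upload and redraw go away when the video hasn't produced a new frame.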
Also, there’s no support for rectangles, only triangles. So to display a rectangular video, you have to define two triangles by manually specifying all six vertices, which happen to be at the corners of the unit square. This makes sense, I guess, but it also feels faintly ridiculous.
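Spelled out, those six vertices look like this. The exact winding order in the demo isn't shown here, so this is just one common choice:

```javascript
// Two triangles covering the unit square, as flat (x, y) pairs.
// Note that two of the four corners — (1,0) and (0,1) — have to be
// listed twice, once per triangle.
const quadVertices = new Float32Array([
  0, 0,  1, 0,  0, 1,   // first triangle
  1, 0,  1, 1,  0, 1,   // second triangle
]);
// Drawn with something like: gl.drawArrays(gl.TRIANGLES, 0, 6);
```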
I think my favorite niggle is that texture coordinates span the unit square [0, 1], but clip space — where the canvas drawing ends up — spans [-1, 1] on each axis. This convention mismatch means that even if all you want to do is load an image and display it, you have to write a coordinate conversion in your vertex shader. As Hello Worlds go, this one’s a doozy.
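The conversion itself is just a linear remap, clip = tex × 2 − 1. Here it is both as plain JS and as it might appear in the vertex shader — a GLSL sketch with names of my choosing, not the demo's actual source:

```javascript
// Map a texture-space coordinate in [0, 1] to clip space in [-1, 1].
function texToClip(t) {
  return t * 2 - 1;
}

// The same remap inside a vertex shader (GLSL, held as a JS string):
const vertexShaderSource = `
  attribute vec2 aTexCoord;
  varying vec2 vTexCoord;
  void main() {
    vTexCoord = aTexCoord;                           // pass [0,1] through to the fragment shader
    gl_Position = vec4(aTexCoord * 2.0 - 1.0, 0.0, 1.0); // remap to [-1,1] clip space
  }
`;
```

So the single set of unit-square vertices can double as texture coordinates, with the shader stretching them out to cover the canvas.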
It’s pretty though. Now I just have to think of a fragment shader effect that you couldn’t do (fast enough) on the CPU.