Review

Four years ago, in a feature for The Register, I wrote about the latest technologies for three-dimensional photography and videography. At the time, the tech required an array of tens to hundreds of cameras, all pointed inward at a subject, gathering reams of two-dimensional data that was immediately uploaded to the cloud for hours of post-processing, image recognition, feature extraction, and assembly into three- or four-dimensional media.
Today, I can fire up an app on my whizzy new iPhone 13 Pro, point its onboard LiDAR sensor at a subject, and record in four dimensions, in real time. That’s enormous progress – a real revolution in sensors that gives our devices the capacity to capture depth. But, as I noted in the closing paragraphs of that feature, capturing depth does not mean that you can display it. All of our screens live in Flatland – everything projected onto a surface of zero depth. Even the lively four-dimensional worlds of computer gaming still squash themselves against the screen. There may be depth within those virtual worlds, but it’s not presented that way to our eyes.
Three-dimensional displays have long been a holy grail of computing. Virtual- and augmented-reality systems use stereo pairs, projecting a slightly different image into each eye, but they’re still too big and clumsy to be widely adopted, even for professional uses. Far better to use something that looks like a screen – but with depth. 3DTV had a go at that a decade ago, but fell into the chicken-and-egg abyss of little content meaning slow adoption, meaning even less content, meaning … extinction.