On July 31, photographers will finally be able to wrap their hands around the Lytro Illum, the world’s first pro-level light field camera. Unlike the DSLRs that everyone is now used to, the Illum lets you create what Lytro calls “living pictures,” derived from the camera’s sensor and lens, specialized software and 3D graphics. Together, these create a novel, immersive visual experience.
The most compelling aspect of living pictures is their interactivity. Because the camera shoots images with information on the direction, color and intensity of the light rays, viewers can later shift the focus, tilt, perspective, and depth of field to different subjects in a frame on the fly. Photographers can concentrate on the overall scene and the story it tells without having to fret about capturing a single, perfectly focused shot.
The computational aspect of light field photography pushes photographers to compose their shots with depth in mind. The integration of the fixed-lens hardware, post-processing software and an open-source WebGL browser-based player creates a unique, all-encompassing ecosystem.
The Lytro Illum presupposes that people will primarily be viewing photos via an electronic medium such as a smartphone, tablet, or their desktop computer, so they can also view images as dynamic animations or in 3D.
The Lytro Illum costs $1599. It’s currently available for preorder for $1499.
We chatted with Lytro’s CEO Jason Rosenthal to get his take on where light field cameras will take photographers and artists now and in the future.
TNW: What is the significance of the Lytro Illum for photography?
Rosenthal: If you look at the history of photography you can look at film-based cameras as Camera 1.0. The transition from film to digital is Camera 2.0. What we’re working on is what we think will be the next major revolution in imaging and photography and we call it Camera 3.0. We feel like Camera 3.0 is going to be based on light field cameras and computational photography.
Historically, since photography was invented 175 years ago, whenever you took a picture — it didn't matter whether it was a still picture or a video — you were capturing two data points about your scene. You have the brightness and the color and the photon capture on some kind of sensor. That sensor for a long time was film-based, and for the last few years it's been digital.
In a light field camera, you’re getting all the three dimensional information about how every ray of light is flowing through every point in space of a given picture. The reason why this is important — and we think transformational — is that it lets you transform physical components of the camera like lenses and optics and shutters into software computation, which is going to lead to tremendous advances in the power of cameras in terms of resolution and the images that we can capture as well as the dramatic reduction in size and weight and cost of how everything with a sensor and a lens is built.
The second thing is that we can take aspects of imaging that you always had to get right at the time you take the picture — like focus and perspective and depth of field — and we can move all that into after-the-fact adjustments after you take the picture. The way that this works going forward is that you capture this rich data and then you make the picture look exactly how you want it to after the fact through software. You’ll be able to do this with much more powerful, much less expensive and much more compact systems than you can today.
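The after-the-fact refocusing Rosenthal describes is, at its core, a computational trick: because a light field sensor records many slightly offset views of the scene, software can shift those views relative to one another and average them, bringing a chosen depth into focus. Here is a minimal sketch of that shift-and-add idea in Python with NumPy — the 4D array layout, function name, and `alpha` parameter are illustrative assumptions, not Lytro's actual pipeline:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing over a toy 4D light field.

    light_field: array of shape (U, V, H, W) holding one grayscale
    sub-aperture image per (u, v) lens position.
    alpha: refocus parameter; changing it moves the plane of focus.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # central aperture, then accumulate.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 0` the views are simply averaged (the original focal plane); other values of `alpha` re-converge the rays at a different depth, which is why focus can become an editing decision rather than a capture-time one.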
So, practically speaking, does that mean that with an Illum you always have to go into a post-production mode to do major things to the picture before printing it — and that assumes you're printing it at all?
No, not really. You optionally can do all that, but basically you can just get better pictures at the time that you take them. The primary environment in which you'll view the image is actually the Web or mobile devices. You can certainly still print the picture, but 90-plus percent of picture consumption today happens electronically vs. through printing, so we're really optimized for the online and electronic experience.
Lytro’s first light field camera, launched in 2012.
That makes sense.
Just think of Lytro as a new platform for imaging that's going to make pictures better and clearer, give you perfect focus every time and let consumers essentially buy better cameras in their smartphones and standalone devices, integrated into more things than you can possibly get today, because we're applying software computation to solving the problem, which has not really been done before.
That’s what we’re doing with the Illum. The first generation Lytro we built more as a consumer gadget, but we found that everyone who bought it was a pretty experienced photographer and owned at least one high-end camera already. They understood things like focus and depth of field, and they really liked the way you could make pictures look and behave differently with a Lytro camera than you could with any traditional camera. That was the main selling point.
So what we ended up doing with the Lytro Illum is we took that audience and we got together with that market and the capabilities that these people really like and we put that on steroids. There’s a bigger, much higher resolution sensor. A brand new light field lens designed from the ground up gives you optical capabilities that you can’t get in any other camera and then we put massive computational power inside the camera. It’s the equivalent of taking a third-generation iPad Air and putting it right inside a high-end camera.
The things that people are excited about and the problem that we're solving is that with the rise of smartphones and Instagram we're inundated with images in a way we've never been before in the history of photography. There have been more pictures taken and shared online in the last three years than in the entire 175-year history of photography.
There’s been an explosion of pictures and on the one hand it’s great, and more people are exposed to photography and picture taking than ever before. On the other hand, if you’re an enthusiast or a professional photographer it’s become increasingly difficult to make your images stand out from the crowd and do something different. And that’s really the problem we’re solving with Lytro Illum. We’re giving people a richer more immersive way to tell visual stories.
The way we do that is to bring cinematic elements like focus and 3D and depth of field into our living pictures.
So do you envision the Illum replacing a Canon or Nikon DSLR, or what do you see as the relationship?
Everyone who had the original Lytro already had at least one DSLR or high-end digital camera. The Illum will allow you to take pictures that you could never get, and have online experiences that you could never view, with a Canon or Nikon or any traditional camera, and it really enables people to get those shots.
At this point in the evolution of the technology there are still things that traditional cameras are great for and we think that people will keep using them. Over time we believe that in the same way that digital replaced film we feel that light field will replace digital because you’ll be able to get better pictures at a lower cost more easily.
Will that be an interchangeable lens thing or will it be all computational in a single lens?
There will be multiple options for the Illum, and there will be options for people who want interchangeable lenses. We're able to support both. The trajectory is that the more computational power we drive inside the camera, and the higher-resolution sensors we bring to market, will continue to mature the technology.
What about the software? Is Lytro going to take charge of the software or will it have hooks into software that’s already out there? Seems like that would be a major point.
That’s really one of the more exciting things about Lytro. There’s really a connection between the hardware and the software. It’s a brand new kind of image sensor in the form of a light field sensor and it’s the entire imaging pipeline and iOS apps and desktop software and Web software that goes behind it that makes the whole solution end-to-end.
That said, there’s a big mature photo ecosystem out there that we’re also deeply into. With Lytro Illum you’ll be able to export your images and edit them in products like Adobe Lightroom and Adobe Photoshop and Aperture, so we’re already embracing the broader ecosystem.
Plus, we've open-sourced our core player technology, which lets you display a Lytro picture on a website, and 500px, a big community for professional photographers, has become the first photo-sharing site on the Web to natively integrate support for our format.
We are very much building our own end-to-end workflow so we can deliver the absolute best and most integrated experience for our customers, but also working with the broader ecosystem so that light field is really pervasive and available.
Talk about the Android operating system. Will that make it more attractive to people who have Android smartphones as opposed to people who have an iPhone?
From the chipset perspective we use the Qualcomm Snapdragon that powers about 80 percent of the Android phones out there, and we built our own UI and imaging pipeline on top of the open-source version of Android.
It will be completely independent. We use Android as a completely embedded operating system. Think of it as a version of Linux with a lot of mobile and open-source components built in; we built on top of it for the OS that runs on the camera itself. There's also a Wi-Fi chip in the Illum that you can (use to) connect to your iPhone.
There’s an iPhone app that goes with it. We will, in the future, have an Android app, but we don’t right now, which is funny because even though we’re built on Android we have a tighter connection with iOS, but we’ll support both platforms going forward.
We're different than Samsung. They took a phone and put it on a camera so you could download things from the Google Play Store. That's not what we're doing. We're using Android as an embedded OS. It's Android that we're using, but we've written it to be custom software to create a Lytro experience. But you wouldn't go to the Play Store and put Angry Birds on it.