Interview: Lytro's Jon Karafin on the new Lytro Cinema light field camera

Lytro's attempt to revolutionize mass-market camera technology with its light field approach has failed; now the American company wants to conquer the world of film and TV production with a new light field camera called Lytro Cinema. It offers a resolution of 755 megapixels, a dynamic range of 16 stops, 40K video and frame rates of up to 300 frames per second. The amounts of data it generates are, in every sense, hard to grasp: the light field camera produces up to 300 GB of image data per second. Stefan Möllenhoff from our partner portal TechStage met Jon Karafin, Head of Light Field Video at Lytro, for an interview at CineEurope 2016 in Barcelona.

Here is the interview transcript:

Stefan Möllenhoff: There are some huge numbers circulating on the internet right now: 755 megapixels at up to 300 frames per second, and 300 GB of data generated per second, and all that from a video camera. Why would anyone do that?

Jon Karafin: Well, that's a lot of questions at once. Okay, so these data rates are of course the upper limit when you record at frame rates of 300 fps. If, on the other hand, you shoot at 24 frames per second, the data rate is significantly lower, but it is still enormous compared to what traditional film cameras deliver.
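To put those figures in perspective, here is a back-of-envelope calculation in Python. It assumes only what is stated above, roughly 300 GB per second at 300 fps, which works out to about 1 GB per captured light field frame; the lower frame rates are simply scaled from that assumption.

```python
# Rough data-rate estimate based on the figures quoted in the interview (assumptions, not official specs).
GB_PER_SECOND_AT_300FPS = 300                        # stated upper limit
FPS_MAX = 300
GB_PER_FRAME = GB_PER_SECOND_AT_300FPS / FPS_MAX     # about 1 GB per light field frame

for fps in (24, 48, 300):
    rate_gb_s = GB_PER_FRAME * fps
    per_minute_tb = rate_gb_s * 60 / 1024
    print(f"{fps:3d} fps -> ~{rate_gb_s:5.0f} GB/s, ~{per_minute_tb:.1f} TB per minute")
```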

Let's go back to the basics of what a light field actually is. A light field is the collection of all viewpoints from which you can see the light rays reflected by an object. That is a somewhat convenient way of saying that it is, in principle, the ability to capture a holographic image.

Among the things you can now do with a light field camera are refocusing shots after they have been recorded, or adjusting the aperture, the frame rate or the shutter angle, all in post-processing on the computer. And those are just the optical and camera-side parameters you can still adjust in post-production, on top of all the visual effects capabilities a light field enables.
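As a rough illustration of what refocusing after the fact means computationally, the sketch below implements the textbook shift-and-average refocus over a grid of sub-aperture views of a light field. This is a standard technique, not Lytro's actual pipeline; the array layout and the `alpha` parameter are assumptions made for the example.

```python
import numpy as np

def refocus(subapertures, alpha):
    """Synthetic refocus by shifting and averaging sub-aperture views.

    subapertures: array of shape (U, V, H, W) with one grayscale image per
                  viewpoint on a U x V grid (an assumed toy layout).
    alpha:        refocus parameter; 0 keeps the original focal plane,
                  other values shift focus to other depths.
    """
    U, V, H, W = subapertures.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view proportionally to its offset from the centre view.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(subapertures[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy usage: a random "light field" of 5x5 views, 64x64 pixels each.
lf = np.random.rand(5, 5, 64, 64)
refocused = refocus(lf, alpha=1.5)
print(refocused.shape)  # (64, 64)
```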

One example we showed at NAB and here at CineEurope is the so-called depth screen. The depth screen is the ability to know where all the light rays are located in space. This makes it possible to separate the foreground from the background as if you were working with a green screen, only without a green screen.
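A minimal sketch of the idea behind such a depth-based matte: once every pixel carries a depth value, a foreground alpha can be pulled simply by thresholding depth, with no green screen involved. The arrays and threshold values below are invented for illustration and are not Lytro's actual Depth Screen implementation.

```python
import numpy as np

def depth_key(rgb, depth, near, far):
    """Return an RGBA image whose alpha keeps only pixels between near and far.

    rgb:   (H, W, 3) float image in [0, 1]
    depth: (H, W) per-pixel depth in metres (assumed to come from the light field)
    """
    alpha = ((depth >= near) & (depth <= far)).astype(np.float32)
    return np.dstack([rgb, alpha])

# Toy example: keep everything between 1 m and 3 m as "foreground".
h, w = 4, 4
rgb = np.random.rand(h, w, 3)
depth = np.random.uniform(0.5, 6.0, size=(h, w))
fg = depth_key(rgb, depth, near=1.0, far=3.0)
print(fg.shape)  # (4, 4, 4)
```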

That's just the tip of the iceberg. I'll leave it at that for now and see whether that answers your question.

So that means you are essentially not capturing a 2D image, but rather a 3D image of a scene.

Karafin: Yes, that's exactly right. You capture a volume of the scene. That preserves all the light rays, and you can re-project them into space and thereby create a new two-dimensional image, or rather any two-dimensional image within the refocusable range.

What does that mean for movies? How will it change the way films are made, the kinds of images we see and the things that happen in movies? How will all of that change with this technology?

Karafin: The easiest way to look at this is that you now have the ability to adjust the camera settings while editing the film. Those used to be baked firmly into the images at the moment of recording. Some directors and cinematographers are absolutely perfect at that. But now you can reconsider a focus decision that was made on the film set. Instead of focusing on a person's eye, you might focus on the back of their shoulder to tell the story of that particular scene.

At that point you have more creative latitude, and the end result is a better story. And that's exactly what we want: to build better tools for telling stories and to do things that simply weren't possible before with traditional cameras. What you can also do on the output side is master the content for different types of playback and formats. That, too, is something you couldn't do with traditional cameras until now. For example, to go from cinema to a broadcast format you had to perform a pull-down conversion, whereas with the Lytro Cinema system you can output any format directly.
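For context on the pull-down conversion mentioned above, here is a small sketch of the classic 3:2 cadence used to stretch 24 fps film to roughly 30 fps (60 interlaced fields) for broadcast. With a light field master you would instead simply re-render at the target frame rate; the frame labels here are purely illustrative.

```python
def three_two_pulldown(frames):
    """Map 24 fps frames to 60i fields using the classic 3:2 cadence.

    frames: list of frame labels, e.g. ["A", "B", "C", "D"]
    Returns the field sequence A A A B B C C C D D (3-2-3-2 ...).
    """
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

print(three_two_pulldown(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```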

The next interesting area concerns CGI, that is, visual effects that are built into a scene. What changes here when you have already captured a volume of space and want to insert three-dimensional, computer-generated imagery? Are there completely new possibilities here?

Karafin: We like to think of this as the complete virtualization of the camera. Among the things the system delivers automatically is the camera tracking information. On top of that you also get the dense point projection of every light ray in the space, which then allows you to integrate computer-generated models into the scenery without having to reconstruct the underlying environment first. So it's a really straightforward way to approach this integration process.
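To make the idea of camera tracking plus point projection concrete, here is a minimal pinhole-camera sketch that projects a 3D world point into a tracked camera. The intrinsics and pose values are invented for the example; the real system obviously works with far denser data than a single point.

```python
import numpy as np

def project_point(point_3d, K, R, t):
    """Project a 3D world point into pixel coordinates of a tracked camera.

    K: 3x3 intrinsic matrix; R: 3x3 rotation and t: 3-vector translation,
    such as a camera tracking solve would provide.
    """
    cam = R @ point_3d + t            # world -> camera coordinates
    uvw = K @ cam                     # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]           # perspective divide

# Invented example values.
K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])
print(project_point(np.array([0.5, 0.2, 4.0]), K, R, t))
```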

You can also think of it like a deep image, if you're familiar with that format. A deep image means that you get the 2D pixels from a CG renderer, but with multiple samples per pixel. So I can have a particle cloud where each pixel carries a Z value alongside its RGBA values. With the light field it's a very similar concept, except that you now receive all the different light rays that enter the system through the entrance pupil. That allows you to place objects directly at the different depth levels and use all the alpha information of the recorded image to composite the scene. And it's the depth screen in combination with the camera tracking and the point projection that yields this complete dense point projection and a representation of the volume from which to create the environment.
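A minimal sketch of the deep-image idea described here: each pixel holds several depth-sorted samples with RGBA values, a CG sample can be merged in at its correct depth, and the result is flattened with the standard over operator. The data layout is invented for illustration and is neither the OpenEXR deep format nor Lytro's internal representation.

```python
# One "deep pixel": a list of (depth, r, g, b, a) samples, assumed premultiplied alpha.
deep_pixel = [
    (2.0, 0.1, 0.1, 0.1, 0.4),   # captured sample at 2 m
    (5.0, 0.5, 0.4, 0.3, 1.0),   # captured sample at 5 m (opaque background)
]

def insert_cg_sample(samples, cg_sample):
    """Insert a CG sample at its own depth and keep the list depth-sorted."""
    return sorted(samples + [cg_sample], key=lambda s: s[0])

def flatten(samples):
    """Composite front-to-back with the over operator to get one flat RGBA pixel."""
    out_rgb, out_a = [0.0, 0.0, 0.0], 0.0
    for _, r, g, b, a in samples:            # samples are ordered near to far
        for i, c in enumerate((r, g, b)):
            out_rgb[i] += (1.0 - out_a) * c
        out_a += (1.0 - out_a) * a
    return (*out_rgb, out_a)

merged = insert_cg_sample(deep_pixel, (3.5, 0.0, 0.3, 0.0, 0.8))  # CG element at 3.5 m
print(flatten(merged))
```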

The bottom line is that this makes it considerably less complicated to introduce computer-generated effects, simplifies the process and probably also improves the quality of the results. And perhaps it also allows more filmmakers to use more complex computer-generated effects in their films?

Karafin: Yes, that's absolutely right. The ability to turn a scene that previously had no visual effects into a visual effects shot is exactly where this system is at its strongest.

And since we're talking about affordability and about the system simplifying certain processes: who can actually afford to use Lytro Cinema at the moment?

Karafin: That's a good question. We are convinced that the efficiency gains this technology brings will play a role in the future. Today, however, we are primarily concerned with the creative advantages and the new possibilities, and less with the metrics of how much time and money you can save.

But as soon as we have wrapped up a few productions, we will certainly learn more about what those numbers look like; today we simply don't have that information yet. So we are really focusing on the creative advantages and on all the things you can now do that previously just weren't possible at all.

So what does such a camera look like? I suspect it looks less like a traditional camera and probably doesn't fit in a suitcase, does it?

Karafin: It's more like a couch than something that would fit in a suitcase. The first generation, which we introduced at NAB and also had on show there, is about six feet (1.83 m) long, just to give you an idea of the size. To put that further into perspective: the image sensor inside is more than half a meter wide. Instead of a 35-millimeter sensor, which is about 24 to 28 millimeters wide depending on the system used, this sensor is over 550 millimeters wide.

With that I just wanted to make clear how many light rays we really capture and how large the pixels are. And that is exactly what we need to guarantee the level of image quality required for the cinema. We communicate this information so that it is clear why the system has to be this large today.
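A quick sanity check on that sensor comparison, taking the widths quoted in the interview at face value and assuming, purely for illustration, similar aspect ratios:

```python
# Widths as quoted above; the aspect-ratio equivalence is an assumption for the comparison.
super35_width_mm = 24.9     # roughly the 24-28 mm range mentioned
lytro_width_mm = 550.0

linear_factor = lytro_width_mm / super35_width_mm
print(f"~{linear_factor:.0f}x wider, ~{linear_factor**2:.0f}x the area at a similar aspect ratio")
```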

At the same time, however, we are also working on miniaturizing the technology, which requires quite a lot of feedback from the set. But we have already completed the design studies needed to make sure we understand how the system has to be built, and that work is already underway today. The final product will be portable, wireless and even lightweight. So that's the next generation we are already working on. However, it is still several years away.

How does the process of image processing and post-production change, on the one hand in terms of the video formats you have to work with, and on the other hand in terms of the amounts of data that are generated for a film?

Karafin: Good questions. The simple answer is: you don't have to worry about it. There are two different workflows here, and I'd like to start with the simpler one. In the first workflow you capture and store the light field data and then render out the desired two-dimensional result after what is essentially a light field editing session. Basically, you go through the results in the dailies together with the director and, where appropriate, the other technical staff, produce a final result, render it out and feed it into the existing workflow without any further changes. After all, it is just a normal two-dimensional image or a stereoscopic image pair. The long-term workflow, on the other hand, means that the light field raw data is kept throughout the entire post-production. We support this workflow by building a complete cloud architecture into the system, so that the effects studios, the post-production houses and the studio itself don't have to worry about any changes to the architecture of their own backend systems.

In other words: we take the data from the set, and we are setting up locations around the globe that make it possible to upload the raw data into the cloud. That is where all the data processing happens, where the necessary storage is available and where the software required for light field processing runs. We have implemented it in Nuke and are working with both Google and The Foundry to develop this software solution.

The ultimate goal is to avoid shuffling the data around. It should sit in a central storage architecture from which only the user interface is ever streamed. This means you can also work from a notebook or smartphone and still deliver high-end visual effects. That is also something we showed for the first time at NAB, although unfortunately we ran out of time in that session. But it is something you can really do with very low bandwidth, because at that point, as I said, you are accessing remote computers.
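As a sketch of that "keep the data central, stream only the result" idea: a thin client might request a small rendered preview from a remote render service instead of ever pulling the raw light field data. The endpoint, parameters and helper below are hypothetical and stand in for whatever interface such a service would expose; only Python's standard library is used.

```python
import json
import urllib.request

# Hypothetical endpoint of a remote light field render service; the raw data never
# leaves central storage, the client only receives a small rendered preview.
RENDER_URL = "https://example.com/api/render"   # placeholder, not a real Lytro API

def request_preview(shot_id, frame, focus_m, f_stop):
    params = {"shot": shot_id, "frame": frame, "focus": focus_m, "fstop": f_stop}
    req = urllib.request.Request(
        RENDER_URL,
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()          # e.g. a JPEG preview of a few hundred kilobytes

# Usage (would require the hypothetical service to exist):
# preview_bytes = request_preview("shot_042", frame=120, focus_m=2.4, f_stop=2.8)
```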

That's really big and really fascinating. My last question: when will we see the first movie shot with this technology?

Karafin: Good question. We are still in the alpha and beta stage today, and that will last until the end of this year. Our official commercial launch will take place at the beginning of next year. That will be the point at which we give an official preview of what we are working on. Between now and then we are bound by various confidentiality agreements, but there are really a lot of exciting things we are working on.
