Light Field and Lumigraph are two names for the same idea: the light field.

The plenoptic function

The Plenoptic function is used to describe everything that a person can see.

Dimensions of the plenoptic function

Assume the observer stands at a fixed position and can look in every direction. Each viewing direction can be written in polar coordinates, giving a two-dimensional function P(θ, φ).

We can go further and introduce a new parameter, the wavelength λ, which encodes the color of the light: P(θ, φ, λ) describes the world in color.

Extending with one more parameter, time t, gives P(θ, φ, λ, t), and that is a movie.

If the observer's position (or the camera's position) can also be moved arbitrarily, we get a holographic movie. Think of this function not as a movie but as a description of everything: at any place, looking in any direction, at any time, I see a different color. This seven-dimensional function P(x, y, z, θ, φ, λ, t) is the plenoptic function we want.
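As a sketch, the seven parameters can be read off as a function signature. The body below is a made-up stand-in (a toy direction- and wavelength-dependent brightness), not a real scene:

```python
import math

def plenoptic(x, y, z, theta, phi, lam, t):
    """Toy 7D plenoptic function: intensity seen at position (x, y, z),
    in direction (theta, phi), at wavelength lam (meters), at time t.
    The formula is illustrative only."""
    # Brightness falls off with viewing angle and peaks near green (550 nm).
    return max(0.0, math.cos(theta)) * math.exp(-((lam - 550e-9) / 50e-9) ** 2)

# Fixing position and time leaves P(theta, phi, lam): a color panorama.
# Fixing the direction and wavelength too leaves a single value.
sample = plenoptic(0.0, 0.0, 0.0, 0.0, 0.0, 550e-9, 0.0)
```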

We can extract part of the information in the plenoptic function to represent light more compactly. At any position, looking in any direction, what we record is the light arriving from that direction. In this sense the light field is a small slice of the full plenoptic function.

Definition of a ray

Let's start with a ray in three dimensions. To define it we need its starting point and its direction: three dimensions of position plus two dimensions of direction, five parameters in all.

Alternatively, two points on the ray also define it. In other words, a ray can be described with two dimensions of position and two dimensions of direction, four parameters in total.
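The two-points-define-a-ray idea can be sketched directly; the helper below is illustrative:

```python
def ray_from_two_points(p, q):
    """Return the origin and unit direction of the ray from p through q."""
    d = [qi - pi for pi, qi in zip(p, q)]
    norm = sum(di * di for di in d) ** 0.5
    return p, [di / norm for di in d]

# A ray through the origin and (0, 0, 2) points straight along +z.
origin, direction = ray_from_two_points((0.0, 0.0, 0.0), (0.0, 0.0, 2.0))
```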

Parameterizing object surfaces

To describe the surface of an object, we can enclose the object in a bounding box. Looking at the object from any position, in any direction, the reversibility of light tells us that everything we could ever see of the object corresponds to a ray leaving the bounding box. If we record, for every point on the bounding box, the light leaving in every direction, we can reconstruct what the object looks like from any position and any direction.

When we look at the object, a point on the box together with the observation position identifies a ray and a direction, and we can query the recorded function along that ray. This function records how much light leaves each point of the surface in each direction; recording this information is what we call a light field.

A surface in three dimensions is really a two-dimensional domain, just like a texture map, so we can use UV coordinates for position and polar coordinates θ, φ for direction. Two two-dimensional parameters, four dimensions in total, are enough to describe the light.

Black boxes instead of surfaces

If the light field is known, then looking at the object from anywhere tells us what we will see. Because the light field is defined on the object's surface, the object itself becomes a black box: we do not need to know what is inside it, we only care how much energy a particular ray carries to the observation point.

A plane instead of the black box

Going further, we can replace the black box with a plane, assuming that somewhere to the right of the plane there is a luminous object whose light passes through the plane.

Defining the light field with two planes

We can also define a light field with two parallel planes. Instead of a position plus a direction θ, φ, each ray is described by its intersection (u, v) with one plane and (s, t) with the other; the line connecting the two points gives the ray's direction.
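A minimal sketch of the (u, v, s, t) parameterization, assuming for illustration that the uv plane sits at z = 0 and the st plane at z = 1 (the placements are our own choice, not fixed by the definition):

```python
def ray_from_uvst(u, v, s, t):
    """Ray through (u, v, 0) on the uv plane and (s, t, 1) on the st plane.
    Returns origin and unit direction."""
    d = (s - u, t - v, 1.0)
    norm = (d[0] ** 2 + d[1] ** 2 + d[2] ** 2) ** 0.5
    return (u, v, 0.0), tuple(di / norm for di in d)

# Equal (u, v) and (s, t) gives a ray straight along +z.
origin, direction = ray_from_uvst(0.0, 0.0, 0.0, 0.0)
```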

Light field camera

Decomposing the light field: irradiance and radiance

If we fix a point on the st plane and look at it from every point of the uv plane, we get one uv image; different points on st give different uv images.

If instead we fix a point on the uv plane and look at the whole st plane, it is as if we had no light field at all: the result is just the image of a pinhole camera placed at that uv point.

If we look from the whole uv plane at a single point on the st plane, we see what the same point looks like from different directions. In an ordinary photograph, a pixel records irradiance: all the light arriving at it, summed over directions. Looking from the uv plane at one st point therefore decomposes that irradiance into radiance, one value per direction.
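The relation "a pixel's irradiance is radiance gathered over all incoming directions" can be checked numerically with a coarse sum over the hemisphere. The constant radiance() below is an assumption for illustration; for constant unit radiance the exact answer is π:

```python
import math

def radiance(theta, phi):
    # Stand-in: the same radiance arrives from every direction.
    return 1.0

def irradiance(n_theta=64, n_phi=128):
    """Midpoint-rule approximation of E = integral of L * cos(theta)
    over the hemisphere, with solid angle element sin(theta) dtheta dphi."""
    total = 0.0
    d_theta = (math.pi / 2) / n_theta
    d_phi = (2 * math.pi) / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        for j in range(n_phi):
            phi = (j + 0.5) * d_phi
            total += (radiance(theta, phi) * math.cos(theta)
                      * math.sin(theta) * d_theta * d_phi)
    return total

E = irradiance()  # should be close to pi for unit radiance
```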

Principle of light field camera imaging

The compound eye of a fly in nature works like a light field camera. Each sub-eye acts as a small camera: the light collected from the world is not recorded directly through one lens as a single value, but separated and recorded as several values. What is recorded at a pixel is then no longer irradiance but the radiance arriving from each individual direction.

In a light field camera, pixels are replaced by microlenses that separate light by direction and then record it. The most important benefit is that it supports refocusing in post-processing: parameters such as aperture size and virtual focal length can be changed after the picture is taken.

The pixels of a normal camera are replaced by microlenses that spread light of different directions onto different sensor pixels behind them; where the sensor used to record one pixel, it now needs a whole patch of pixels. Look closely at one microlens and inside it is the radiance recorded from each direction. Viewed from the lens side, the whole sensor is a light field: at different positions we can look in different directions.

To recover an ordinary photograph from a light field camera, simply select the sample in the same direction under each microlens, as if the photograph had been taken from that direction. With the light field, we can even move the camera virtually.
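Selecting the same direction under every lens can be sketched as indexing a 4D array; the L[s][t][u][v] layout, where (s, t) indexes the microlens and (u, v) the direction under it, and the toy values are both assumptions:

```python
def sub_aperture_image(L, u, v):
    """Pick the sample for direction (u, v) from every microlens (s, t),
    yielding one ordinary photograph (a 'sub-aperture' view)."""
    return [[L[s][t][u][v] for t in range(len(L[0]))] for s in range(len(L))]

# Toy 2x2-lens light field with 2x2 directions per lens; the value
# encodes (s, t, u, v) so we can see which samples were selected.
L = [[[[10 * s + t + 0.1 * u + 0.01 * v for v in range(2)]
       for u in range(2)] for t in range(2)] for s in range(2)]

photo = sub_aperture_image(L, 0, 0)  # the view from direction (0, 0)
```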

It is precisely because a light field camera records the entire light field that virtual camera movement and refocusing are possible.

Problems with light field cameras

Light field cameras usually suffer from insufficient resolution. On the same sensor, one pixel used to record one spatial sample; a light field camera instead spends, say, a hundred pixels to record a hundred directional samples of a single spatial position. The sensor therefore needs very high resolution, the design and manufacturing must be more precise, and the cost multiplies accordingly.
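The trade-off is simple arithmetic; the numbers below are illustrative, not the specifications of any real camera:

```python
# Hypothetical sensor: 40 MP, with each microlens covering a 10 x 10
# patch of sensor pixels to record 100 directional samples.
sensor_pixels = 40_000_000
directions_per_lens = 10 * 10

# Effective spatial resolution of each refocused photograph.
spatial_pixels = sensor_pixels // directions_per_lens
```

With these assumed numbers, a 40 MP sensor yields only 0.4 MP per refocused image, which is why light field cameras demand very dense sensors.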