# Introduction

We are trying to reproduce inside a computer how light works. Or maybe not: perhaps we are going for a completely non-realistic style... but the issue is still there, as the tool we are going to use tries to simulate light, and the viewers will receive the information as light, interpreting it in relation to what they already know. So we have to understand some basic principles to help us control the tool and communicate with the viewer.

# A Bit of Physics

Visible light is part of the electromagnetic spectrum, so get your physics book ready... just kidding. We are not going into details, but you have probably heard that the particles of light are named photons, and also that light is formed of waves. We do not care about the formulas or the reasoning behind this particle-wave duality, nor will we get into other theories that try to explain light.

On the other hand, knowing that you can think of light as particles, it is easy to understand that light sources emit photons, which travel through the media in which they can move, and bounce off or get absorbed by other media. When these particles end up in our eyes, they are absorbed by cones and rods and transformed into neural signals sent to your brain, where they are interpreted. But what if we take them as waves instead?

Waves, in general terms, have some characteristics that describe them, like amplitude (intensity) and frequency (how often the peaks repeat in a given time). For light, you have to think of a system in which the norm is multiple waves at the same time. If you have problems visualizing this idea, think about vibrations (sound) in a material: it can vibrate as the sum of multiple pure waves (notes).
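The idea of several pure waves adding up can be sketched numerically. This is a minimal illustration only; the amplitudes and frequencies are arbitrary values chosen for the example:

```python
import math

def superpose(t, waves):
    """Amplitude at time t of several pure sine waves added together.
    waves is a list of (amplitude, frequency_hz) pairs."""
    return sum(a * math.sin(2 * math.pi * f * t) for a, f in waves)

# Two "notes": a 1 Hz wave of amplitude 1.0 and a 2 Hz wave of amplitude 0.5.
waves = [(1.0, 1.0), (0.5, 2.0)]

# At t = 0.25 s the 1 Hz wave is at its peak (sin(pi/2) = 1)
# and the 2 Hz wave is at zero (sin(pi) = 0), so the sum is 1.0.
print(superpose(0.25, waves))
```

The same principle holds for light: what reaches the eye is usually the sum of many frequencies at once.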

We can see only a rather limited range of frequencies, the ones our eyes are capable of sensing: those whose peaks repeat between 4.3x10^14 and 7.5x10^14 times per second. A given frequency is what we perceive as colour, and the amplitude of the wave as intensity. Of course, as we said, waves can mix: when we see a colour, it hardly ever means we are receiving waves of one exact frequency, but rather a sum of multiple frequencies.
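To put those numbers in more familiar terms, frequency and wavelength are related by lambda = c / f, with c the speed of light. A quick check (the exact cut-off frequencies of human vision vary slightly between sources):

```python
C = 2.998e8  # speed of light in m/s

def wavelength_nm(frequency_hz):
    """Wavelength in nanometres for a given light frequency."""
    return C / frequency_hz * 1e9

# The visible range quoted above:
print(wavelength_nm(4.3e14))  # ~697 nm, deep red
print(wavelength_nm(7.5e14))  # ~400 nm, violet
```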

In our eyes, rods are in charge of detecting intensity and cones of detecting colour. Rods do not care about frequency: they report any they can handle. Cones, on the other hand, can distinguish frequencies because we have three kinds, each sensitive to a limited and different range of frequencies that the other cone types cannot sense, or sense only weakly. What makes the colours is our brain, by interpreting all these readings.

The three kinds of cones could be described as the red, green and blue ranges. If you look into a medical book, you will see how the sensitivity forms three mountains, with peaks around those colours. This is one reason RGB is a typical way of expressing a colour.

# Description of Colour: the Wheel

Another typical way to describe light is by means of a colour wheel. This is the range of all visible colours (remember, that means all the visible frequencies), from red to violet, but with the highest and lowest frequencies touching, forming a full circle.

A more complete representation is two cones joined at their bases, with the colour wheel of pure colours at the joint, so we have white at the tip of one cone and black at the tip of the other.

When we describe a colour we can do it by many means. One would be the red, green and blue intensities, matching our cones or computer systems, as pointed out earlier. One of the simplest to understand is by means of the wheel/cones above and the terms Hue, Saturation and Value. Hue refers to the point on the circle: we say something is of blue or yellow hue because it is located in the blue or yellow part of the wheel.

Saturation refers to how pure a colour is. The purer it is, the more focused in a limited range of frequencies and the nearer the border of the wheel; the less pure, the more spread over many frequencies and the nearer the centre.

Finally we have Value, which refers to the intensity, and thus requires the two-cone representation. A colour with low Value is placed near the black tip, one with medium Value near the bases, and one with high Value near the white tip.
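Python's standard colorsys module converts between the RGB and Hue/Saturation/Value descriptions, which makes the relationship easy to explore (all components are in the 0.0 to 1.0 range):

```python
import colorsys

# Pure red: hue 0.0 (red's position on the wheel), fully saturated, full value.
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))   # (0.0, 1.0, 1.0)

# Mid grey: zero saturation (centre of the wheel), medium value.
print(colorsys.rgb_to_hsv(0.5, 0.5, 0.5))   # (0.0, 0.0, 0.5)

# Dark, washed-out blue: blue hue (~0.67), low saturation, low value
# (near the black tip of the two-cone representation).
print(colorsys.rgb_to_hsv(0.2, 0.2, 0.3))
```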

# Some Interactions

Light interacts with things, as demonstrated by our first black-vs-cubes test image. Some short notes about the basic types of interaction will help us create better images.

## Basic Interaction

Let's get more into what happens in the simplest case: a lamp in the void emits photons, which travel, hit something, bounce, and finally end up in the eyes. Do all of them bounce? No, some are absorbed, so what we see is what is left.

Imagine the original photons cover all the visible frequencies, but the object only bounces back the ones that cover the red range. It is not hard to realize that what we will see is red. So things have colours because they bounce back some frequencies and absorb others.

What happens when we have a light composed of the blue-to-green range and an object that bounces everything back? We could quickly say that, as it bounces everything, we see white... but there is no red hitting the object, so we are not going to get any red back, and thus we will see the mix of blue and green: cyan. A red ball, meanwhile, will appear black.

A trickier case is the same blue and green light with a yellow object (which means it absorbs blue and bounces red and green). Think about it... correct, we will see it as just green. No red is provided by the lamp, and the blue is absorbed, so only green bounces to our eyes. In broader terms, no matter what light we use, the material also takes part in the final result.
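The examples above boil down to a simple rule: per frequency range (or, in RGB terms, per channel), the light we see is the light emitted multiplied by what the material bounces back. A sketch of that rule, using the cases from the text:

```python
def seen_colour(light_rgb, reflectance_rgb):
    """What reaches the eye: each channel of the light scaled by how much
    of that channel the material bounces back (0.0 = absorbs, 1.0 = bounces)."""
    return tuple(l * r for l, r in zip(light_rgb, reflectance_rgb))

cyan_light = (0.0, 1.0, 1.0)   # blue-to-green lamp: no red emitted

print(seen_colour(cyan_light, (1.0, 1.0, 1.0)))  # white object  -> (0.0, 1.0, 1.0), cyan
print(seen_colour(cyan_light, (1.0, 0.0, 0.0)))  # red ball      -> (0.0, 0.0, 0.0), black
print(seen_colour(cyan_light, (1.0, 1.0, 0.0)))  # yellow object -> (0.0, 1.0, 0.0), green
```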

## Multiple Interaction

Photons do not have to go from the source (light) to the destination (eye) directly, or with just one bounce off a given item. That is how Blender works, but in reality photons travel all over the place until they are absorbed.

Photographers take advantage of this with reflectors. On a sunny day, the light will be rather strong and mostly from above. By placing a reflector (those white, silver, gold, grey... surfaces you see in photo sessions) they bounce some light onto the subject, making the light less hard and contrasted. In other cases you will also see them point the flash at the ceiling, using it as a bounce surface, or place diffusers in front of lights to spread the photons and produce softer light.

## Reality vs 3D

Blender's basic work mode is not even a simplification of reality. It first calculates what is visible from the camera, creating the z-buffer (a surface that describes the items nearest to the camera), and then checks which lights affect those items. So all it is doing is following the path of real photons in reverse, failing to take into account the photons that bounce more than once.
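That "reverse path, one bounce only" idea can be sketched as the classic direct-illumination loop: for each surface point the camera can see, sum the contribution of every lamp, with no light carried over from other surfaces. This is a simplified sketch (Lambert diffuse only, no shadows, made-up scene data), not Blender's actual code:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def direct_light(normal, lights):
    """Diffuse intensity at a visible surface point: one contribution per
    lamp, and nothing arriving from other surfaces. lights is a list of
    (direction_to_lamp, intensity) pairs; directions are unit vectors."""
    return sum(max(0.0, dot(normal, d)) * i for d, i in lights)

up = (0.0, 0.0, 1.0)                  # surface facing straight up
lamp_above = ((0.0, 0.0, 1.0), 1.0)   # lamp directly overhead
lamp_below = ((0.0, 0.0, -1.0), 1.0)  # lamp underneath: contributes nothing

print(direct_light(up, [lamp_above]))              # 1.0
print(direct_light(up, [lamp_above, lamp_below]))  # still 1.0: no bounced light
```

A real renderer with bounces would also gather light reflected off nearby surfaces, which is exactly what this loop leaves out.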

So when we use Blender we have to remember these principles, and add lights to compensate for the lack of automatic bounces, or change the light settings to create the diffusion. Of course, in other cases we will be happy, as we will be free of the problems photographers and film people have. You have probably seen black surfaces in photo sessions used to keep light from reaching a model, or the headaches that green screens cause, requiring complex methods to remove the green that spills onto things.

Do not worry too much about it in any case, since we can always get some bounces calculated for us by means of Radiosity. This technique makes Blender cast light from surfaces with Emit above 0.0 and bounce it around the scene, absorbing a bit at each bounce. As it is slow we should avoid it, or at least use a baked version, but if that is not possible, it will always be in our bag of tricks.

## Interaction with Medium

Until now we have always assumed that photons just bounce from one thing to another, with nothing affecting them in the middle of the travel from hit to hit. In reality, except in the void of space, photons have to travel through things like air or water, and they will hit things along the way, just nothing as "radical" as a stone.

These media allow photons to travel, but cause some perturbations, as not all photons can move freely: some will hit particles and bounce or be absorbed. And, what is probably more obvious in everyone's life, moving from one medium to another causes the photons to change path in some cases.

The important detail to remember is that air, and even more so water, will scatter the light trying to pass through it. In other words, it will absorb part of the light and diffuse another part, so faraway things will appear with lower contrast. And I am not talking about mist or dirty water, where the effect is even clearer: clean air does it too, just with less strength. If you want to test this, look at some mountains while you travel and compare how they look from far away with how they look up close. They will look muted from a distance and regain their normal colour as you move toward them.
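One common way to fake this scattering in an image is to blend each surface colour towards a haze colour as distance grows, for example with an exponential falloff. A minimal sketch, where the density value and colours are arbitrary choices for the example:

```python
import math

def apply_haze(surface, haze, distance, density=0.05):
    """Blend each channel towards the haze colour; the farther away the
    surface, the more the haze dominates and the lower the contrast."""
    t = 1.0 - math.exp(-density * distance)   # 0 near, approaches 1 far away
    return tuple(s * (1.0 - t) + h * t for s, h in zip(surface, haze))

dark_rock = (0.2, 0.15, 0.1)
sky_haze = (0.7, 0.75, 0.8)

print(apply_haze(dark_rock, sky_haze, 1.0))    # nearly the rock's own colour
print(apply_haze(dark_rock, sky_haze, 100.0))  # almost pure haze: muted, low contrast
```

This is essentially what "mist" or depth-fog settings in a renderer compute for you.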

In cases where the photons are allowed to pass but change direction, we will see things distorted, and will also notice how light focuses in some areas, creating what we call caustics. Try putting a strong light near a glass, or a lens under the Sun. You will notice how they create patterns.

To simulate these effects we will need to use materials (refraction, etc.) but also lights (creating our own fake caustic "light sources").

# Illusions

Now that we have seen how light behaves, let's see how the brain plays tricks on us.

Illusions can be about one colour near another: you think you see one thing, but in reality it is a different colour, and the neighbouring area is what makes you think differently. This is a good reason to use a mid-grey interface (and even wallpaper), so you have a neutral, medium-intensity background around your work subject. In other words, you do not introduce any bias towards a particular colour or intensity.

You have probably seen a famous case of this around the Internet: the image in which a cylinder projects a shadow over a checkerboard and you are asked whether two squares are the same grey or not.

Other examples can be seen here, in which a fixed colour band looks different depending on which background it is over. The middle band is constant, but each end looks a bit different, just because it is near a different colour.

*Hue comparison. Saturation comparison. Value comparison.*

Here you can compare the same black-and-white image (a crop from Elephants Dream) surrounded by a white or a black area. The white frame makes the dark image look bad: you have problems seeing the details in the jacket shadows, while the black frame gives a better representation of those details. If the image were of a lighter style, the reverse would be true. So, for a better background, we should use neutral grey. And yes, the default theme in Blender is not exactly a good default: it is too light. You can create your own theme, or maybe try one from the K-Rich theme repository.

*ED crop over black background. ED crop over white background. ED crop over grey background.*

# Conclusions

You can avoid all the maths behind light and still make 3D images, but you cannot avoid understanding its effects, both in the scene and in the viewer's mind. As artists we can bend the physics, but we must pay attention to the results.