Throughout the centuries, the effect of the atmosphere on colour and object visibility in open outdoor scenery has interested many groups: first artists, later physicists and mathematicians, and in recent decades signal processing and machine vision engineers.

In natural scenery, as the distance between the observer and an object increases, the colour of the atmosphere tends to replace the object's colour, producing a gradual loss of contrast until the object finally disappears into the horizon. This effect, known as aerial perspective, was studied as far back as Leonardo da Vinci, who exploited it in his painting compositions. It is also a visual cue that the human eye uses to better distinguish distant objects, by comparing the blue hue intensities between them.

Both the distance and the composition of the atmosphere influence this loss of visibility. The effect is known as haze, and it can be seen when photographing, or simply looking at, distant areas.

Distant objects become less sharp and much more attenuated, so their visual quality is reduced. This makes the analysis of distant areas in an image a complex task, and it even degrades the images obtained by ground-based astronomy equipment.

Specific methods developed in recent years make it possible to recover images degraded by this atmospheric effect. This post explains the method known as DarkChannel Dehazing.

#### The Mathematical Model:

When trying to restore an image that has undergone a physical degradation process such as this one, it is important to have a mathematical model that accurately imitates the process, so that its effect can be reversed. One of the most popular is the dichromatic light model. First, let's look at the equation, and then we will explain it:
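The original equation image is missing here; reconstructed from the term-by-term description that follows, it reads:

```latex
\mathbf{I}(x) = \mathbf{J}(x)\, t(x) + \mathbf{A}\,\bigl(1 - t(x)\bigr)
```

where **I** is the observed (hazy) image, **J** the undegraded image, **t** the transmission and **A** the colour of the fog.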

In this equation, the undegraded image is labelled **J** (a vector, since we take three image channels into account, R(ed), G(reen) and B(lue); we will gloss over this detail from here on). It is multiplied by **t** (which ranges between 0 and 1), a quantity known as the transmission. The transmission of light decreases exponentially with the distance to the observer, in accordance with the Beer-Lambert law, modulated by a coefficient that accounts for the atmospheric composition, known as the attenuation coefficient:
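That is, writing the attenuation coefficient as a scalar (a common simplification of the Beer-Lambert law):

```latex
t(x) = e^{-\beta\, d(x)}
```

where β is the attenuation coefficient and d(x) the distance from the observer to the scene point seen at pixel x.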

so **t** behaves as a kind of exponential inverse of the distance in the scene: the greater the value of **t**, the smaller the distance from the object to the observer. This models the fact that observation gets "worse" at pixels that are farther away. To complete the model, an additive degradation term is added that uses **A**, the colour of the fog, multiplied by (1 - **t**). So that:

- When an object is at distance 0 from the observer, **t** is close to its maximum value of 1, the fog term vanishes and we only have **I = J**.
- When the object is very distant from the observer, **t** can reach 0, the scene disappears and only fog remains: **I = A**.

#### How to solve this model:

The problem with the above equation is that we cannot simply subtract the fog colour **A** and divide by the transmission **t**, because we know neither of them. Somehow they have to be estimated, and this is where the **DarkChannel Prior** comes into play.

This expression refers to a simple statistical observation: in an outdoor natural scene that is not degraded by fog, if we choose a random pixel, almost always somewhere in its surroundings, due to shadows, textures (trees, bushes, mountains, soil) and other elements, at least one of the three channels takes a very low value. Remember that the lower a pixel value, the darker the colour: black has RGB coordinates (0,0,0) and white (1,1,1). That is to say, there will be at least one nearby pixel containing very little red, green or blue. From the original image we therefore construct a new image, which we call the DarkChannel.

To build the DarkChannel, for each pixel we take the minimum value over its neighbourhood and over the three channels. This can be mathematically described as:
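The original formula image is missing; reconstructed from the description above, with Ω(x) a patch centred on pixel x, it is:

```latex
J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \Bigl( \min_{c \in \{r,g,b\}} J^{c}(y) \Bigr)
```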

In an ideal situation where an image has no fog, its DarkChannel will be practically black:
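As a concrete illustration, this construction can be sketched in a few lines of NumPy. The function name and the 7-pixel patch size are choices made here for the example (patch sizes around 15 are common in the literature); this is a straightforward, unoptimised sketch:

```python
import numpy as np

def dark_channel(image, patch=7):
    """DarkChannel of an H x W x 3 image with values in [0, 1]:
    minimum over the R, G, B channels, then a minimum filter
    over a local square patch around each pixel."""
    m = image.min(axis=2)                      # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(m, pad, mode="edge")       # replicate borders
    out = np.empty_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

On a fog-free outdoor image, where every neighbourhood contains some shadow or dark texture, the result is close to an all-black image.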

The DarkChannel provides the information missing from the first equation to solve the model. The idea is as follows: an image with fog will have a DarkChannel that is not black but whitish, and this whitish colour is a good estimate of the fog colour **A**.

On the other hand, fog becomes denser as the distance to the observer grows. Therefore, we can also obtain an estimate of the transmission **t**.
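Concretely, the standard DarkChannel estimate normalises the image by the estimated fog colour and applies the DarkChannel operator to it; the factor ω, slightly below 1, deliberately keeps a trace of fog so the result looks natural (its exact value is a tuning choice):

```latex
\tilde{t}(x) = 1 - \omega \min_{y \in \Omega(x)} \Bigl( \min_{c \in \{r,g,b\}} \frac{I^{c}(y)}{A^{c}} \Bigr)
```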

As can be seen in the image above, the transmission map estimated from the DarkChannel gives a good reconstruction of the distances in the scene: the blacker a region, the more distant the object. The trees that were originally covered by fog appear at a slightly greater depth than the house, but not as deep as the sky.

Once we have enough information to invert the model, a few simple arithmetic operations yield an image **J** free of fog degradation. Let's take a look at some results of this processing:
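The whole pipeline can be sketched end to end. The function names and the parameter values (`omega`, the floor `t0` that avoids dividing by near-zero transmission, the patch size, and the 0.1% fraction used to pick the haziest pixels for **A**) are assumptions for this example, not a definitive implementation:

```python
import numpy as np

def dark_channel(image, patch=7):
    """Minimum over R, G, B, then a local minimum filter."""
    m = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(m, pad, mode="edge")
    out = np.empty_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(image, omega=0.95, t0=0.1, patch=7):
    """Invert I = J*t + A*(1 - t) for an H x W x 3 image in [0, 1].
    omega, t0 and patch are illustrative choices."""
    dark = dark_channel(image, patch)
    # Fog colour A: mean of the haziest ~0.1% of pixels
    # (those with the largest DarkChannel values).
    n = max(1, dark.size // 1000)
    idx = np.argsort(dark.ravel())[-n:]
    A = image.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate: t = 1 - omega * DarkChannel(I / A).
    t = 1.0 - omega * dark_channel(image / A, patch)
    t = np.maximum(t, t0)                      # avoid dividing by ~0
    # Recover J = (I - A) / t + A, clipped to the valid range.
    J = (image - A[None, None, :]) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

A hazy image can then be restored with `J = dehaze(I)`, where `I` is a float array with values in [0, 1].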

At present, we are applying these techniques to image enhancement in complex situations such as fires, underwater scenes, rain, fog, snow and complex environments (factories), for diverse applications such as traffic control, safety and quality control. These techniques allow computer vision to be used in extreme conditions.

Future posts will look at how to adapt this efficient strategy to eliminating the degradation caused by marine environments. The conditions are quite different, since under water red light disappears before green or blue, but with some modifications the process will also allow us to recover visibility in these types of images.