The spatial scales (from meters to kilometers) and temporal scales (hours, days, years) that characterize coastal dynamics make classical measurement techniques limited and very expensive for studying the behavior of coastal systems. The introduction of measurement techniques based on video images (commonly called coastal videometry) now makes it possible to describe physical processes over a wide range of spatial and temporal scales, something unthinkable until very recently.
A coastal videometry system consists of cameras installed on the coast that capture images and spatially reference them. The products derived from image processing provide valuable information for the different activities carried out in coastal areas that depend on waves, currents and tide (hydrodynamic conditions) as well as on the configuration of the beach, dunes, channels and bars (sedimentary elements). Going a step further, in recent years these tools have enabled important advances in the ability to make reliable measurements of sea conditions on the coast based on these images.
When sea storms hit the coast, knowing the characteristics of the wave dynamics is a determining factor. The system currently operational at DAEM / Euskalmet for determining the risk of coastal impact is based on the so-called overtopping indexes.
Knowing these indexes in advance is crucial to anticipate the potential impact that an adverse event may have on the Basque coast and the degree of flooding that may be generated in areas sensitive to these situations. Currently, the value of future coastal impacts is estimated by means of meteorological and oceanographic prediction tools, but it is not monitored in real time.
In addition, for subsequent validation, the DAEM has information from buoys and ocean-meteorological platforms, as well as, more recently, from coastal videometry systems. Among the usual products of videometry systems are the so-called timestack images, obtained by successively accumulating the pixels of a predefined line over a time interval and at a defined frequency. An example image is shown below.
Analysis of these images allows, among other applications, the tracking of wave run-up processes on the slope of the coast.
The POCTEFA MAREA project, “Modeling and Aid in Decision-making to Face Coastal Risks in the Euskal Atlantic Area”, has among its objectives the implementation and sharing of real-time observation and monitoring systems for the coast (video systems, meteorological stations, current meters, sensors for extreme water-surface levels, …), in order to study the key processes behind the problems that storms with extreme waves cause on our coast in a context of climate change and sea-level rise (wave overtopping, port agitation, damage to infrastructure, beach erosion).
In this project, in which DAEM and AZTI are partners, Zarautz and Biarritz have been chosen as pilot areas for the development of tools that improve knowledge and monitoring of flood and erosion processes associated with waves. Specifically, the database of the Zarautz videometry station, installed in 2010 within the framework of a collaboration agreement between AZTI and the City of Zarautz, is a key tool for this purpose. In parallel, during the development of the MAREA project, a new videometry station has been installed on the beach of Biarritz.
The Computer Vision group of Tecnalia Research & Innovation is expert in image enhancement and pre-processing, as well as in automatic image interpretation through machine learning techniques. Within the framework of the POCTEFA MAREA project, the Computer Vision group has contributed to the development of algorithms that provide, in real time and automatically, information on the characterization of wave-related processes by means of timestack image processing.
The coastal videometry system (KostaSystem) developed by AZTI and used by the DAEM is made up of several fixed cameras covering different coastal environments (beaches and ports). The information obtained is processed into timestack images: compositions obtained by successively accumulating the pixels located on a predefined line over a given interval and at a given frequency.
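The accumulation step behind a timestack can be sketched as follows. This is a minimal NumPy illustration, not the KostaSystem implementation; the frame shape and the way the line is defined are assumptions for the example:

```python
import numpy as np

def build_timestack(frames, line_coords):
    """Stack the pixels of one predefined image line over time.

    frames      : iterable of H x W x 3 uint8 video frames
    line_coords : (rows, cols) index arrays defining the sampled line
    Returns a T x L x 3 array: one row per frame, one column per line pixel.
    """
    rows, cols = line_coords
    return np.stack([frame[rows, cols] for frame in frames])

# Toy usage: 5 synthetic 4x6 frames, sampling the cross-shore line at row 2.
frames = [np.full((4, 6, 3), t, dtype=np.uint8) for t in range(5)]
line = (np.full(6, 2), np.arange(6))
stack = build_timestack(frames, line)
print(stack.shape)  # (5, 6, 3): time x line-pixels x RGB
```

Each new frame simply appends one row, so in an operational setting the timestack grows with the acquisition frequency over the chosen time interval.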
The objective of the collaboration between the Tecnalia Computer Vision group and the DAEM in the POCTEFA MAREA project has been the study and implementation of a software application that provides, in real time, information on the characterization of wave-related processes by processing timestack images.
The main goal is to provide knowledge of image analysis techniques for isolating, from the timestack images, the different signals related to sea level variations.
The specific objectives were to:
- Develop and implement an algorithm to determine the run-up from the timestack images, tracking the rise and fall of waves on the slope of the coast.
- Develop and implement an algorithm to count wave overtopping events from the timestack images.
Timestack images from the videometry stations of Zarautz and Biarritz were used for these objectives.
For the development of the run-up algorithm, a dataset of 35 timestack images of the beaches of Zarautz and Biarritz, with sufficient variability, was used. These annotated images served as the ground truth for the subsequent validation of the selected algorithms.
The Almar and Otsu algorithms were selected, implemented and validated to test which gave the best results on the 35-image dataset. Both methods are coded in Matlab. Tecnalia implemented the Almar method, as the Otsu method had already been implemented for this purpose by AZTI prior to this work.
The Otsu method (1979) is a well-established technique that gives good results, especially on sloping beaches. However, its results are less reliable on other types of beaches, for example dissipative ones. In those cases, the Almar et al. (2017) method, based on the Radon transform, is more widely accepted by the scientific community.
The Almar method has three input parameters (image, max, min) and requires setting a variable called “smoothing length” (a number of pixels on the time axis); results were optimal with a value of 3 pixels. The maximum and minimum must be defined manually for each image, so automating the search for these values is essential in order to apply the method automatically.
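A run-up extraction in the spirit of the Otsu approach can be sketched as follows. This is a hedged NumPy illustration (the project code is in Matlab), assuming a grayscale timestack where the wet region appears brighter than the sand:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Classic Otsu (1979) threshold: maximize between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                      # weight of the "dark" class
    w1 = 1 - w0
    mu0 = np.cumsum(p * centers)
    mu_t = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu0) ** 2 / (w0 * w1)
    between[np.isnan(between)] = 0
    return centers[np.argmax(between)]

def runup_line(timestack_gray):
    """Per time row, the cross-shore extent of the wet (bright) region,
    taken as the swash-edge position in pixels."""
    t = otsu_threshold(timestack_gray.ravel())
    wet = timestack_gray > t
    return wet.sum(axis=1)

# Toy timestack: bright water (200) on the seaward side, dark sand (50)
# beyond, with the edge oscillating over time like a swash signal.
T, L = 6, 20
edges = np.array([5, 7, 9, 8, 6, 4])
stack = np.full((T, L), 200.0)
for i, e in enumerate(edges):
    stack[i, e:] = 50.0
print(runup_line(stack).tolist())  # [5, 7, 9, 8, 6, 4]
```

The per-row edge positions form the run-up time series; on real images, smoothing along the time axis (the “smoothing length” above) would stabilize this signal.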
For the overtopping algorithm, after a preliminary analysis of the images, it was decided to classify overtopping events by their extent, or level of impact, into two types: partial and complete. It has been observed that during flood events on the Zarautz promenade there are overtoppings that, although they flood the promenade, are not intense enough to cover its entire width (on the order of 15-20 m). The more energetic overtoppings, however, can cover the entire promenade, hitting hard against the structures and buildings that border it.
The application user manually defines the position of two virtual barriers in the timestack image to be analyzed (see Image 4): an orange barrier and a blue barrier. If an overtopping exceeds the orange barrier, it is a complete overtopping; if it exceeds the blue barrier but not the orange one, it is a partial overtopping.
The algorithm that has been developed is based on detecting points where the intensity changes, and it is independent of the time of day (day, night) and of artefacts such as glare. As a first step, the image is transformed from RGB to LAB and only the L channel is processed, so that in the end the intensity of the image is analysed.
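The RGB-to-LAB lightness step can be sketched as below: a minimal NumPy version of the standard sRGB-to-CIELAB L* conversion (D65 white point), shown for illustration rather than as the project's Matlab code:

```python
import numpy as np

def rgb_to_lab_l(rgb):
    """L* (lightness) channel of CIELAB from an sRGB uint8 image.
    Only lightness is needed for the intensity analysis, so the
    a* and b* chroma channels are skipped."""
    c = rgb.astype(float) / 255.0
    # undo the sRGB gamma
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # relative luminance Y (Rec. 709 primaries), normalized so Y_white = 1
    y = lin @ np.array([0.2126, 0.7152, 0.0722])
    f = np.where(y > (6 / 29) ** 3, np.cbrt(y), y / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116 * f - 16

# Sanity check: white maps to L* = 100, black to L* = 0.
l = rgb_to_lab_l(np.array([[[255, 255, 255], [0, 0, 0]]], dtype=np.uint8))
print(l.round(2))  # white -> 100, black -> 0
```

Working on L* rather than raw RGB makes the subsequent change detection depend only on luminosity, which is what makes the method robust to day/night differences.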
The algorithm does not allow a complete and a partial overtopping to be counted on the same pixel line, since that would mean both occur at the same time, which is considered impossible. Also, if the algorithm counts more complete overtoppings than partial ones, it applies a correction that looks for a preceding partial overtopping that must correspond to each complete one for it to be counted.
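The barrier-based counting can be sketched as follows. This is a simplified illustration, not the operational algorithm: the jump threshold is an assumed value, events are detected as abrupt luminosity rises at the two barrier columns, and the consistency rule (a complete event must also cross the blue barrier) stands in for the correction described above:

```python
import numpy as np

def crossing_times(signal, jump=30.0):
    """Times where the luminosity rises abruptly: water crossing the barrier."""
    return set(np.flatnonzero(np.diff(signal) > jump) + 1)

def count_overtoppings(timestack_l, blue_col, orange_col, jump=30.0):
    """Count partial/complete overtoppings on an L-channel timestack.

    blue_col / orange_col: pixel columns of the two virtual barriers
    (blue at the seaward edge of the promenade, orange at the landward edge).
    An event seen at both barriers is complete; at the blue one only, partial.
    An orange-only event is discarded, since a wave cannot reach the far
    barrier without first crossing the near one.
    """
    blue = crossing_times(timestack_l[:, blue_col], jump)
    orange = crossing_times(timestack_l[:, orange_col], jump)
    complete = blue & orange
    partial = blue - orange
    return len(partial), len(complete)

# Toy timestack: dark promenade (40), with a wave reaching the blue barrier
# (column 5) at t=3 and t=7, and also the orange barrier (column 15) at t=7.
stack = np.full((10, 20), 40.0)
stack[3, 5] = stack[7, 5] = 120.0
stack[7, 15] = 120.0
print(count_overtoppings(stack, blue_col=5, orange_col=15))  # (1, 1)
```

Note that by construction no time instant is counted as both partial and complete, matching the rule that both cannot occur on the same pixel line.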
The output of the algorithm is:
- Number of partial overtoppings and their times.
- Number of complete overtoppings and their times.
- Complete overtoppings, in units [v] = pixel x / pixel y.
- Graph of the overtopping positions and graphs of the luminosity on the two pixel lines.
For the validation of the overtopping algorithm, 48 images with sufficient variability were considered (day images, night images, images with many overtoppings and images with few overtoppings).
Neither method (Almar or Otsu) works satisfactorily on its own for the dataset, considering the different types of beach, waves and lighting conditions. Although in absolute terms the Otsu method with a specific parameter configuration is the best, the most appropriate implementation is a combination of both methods, so that the input parameters for the Almar method are set from the results of the Otsu method.
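This combination can be sketched with a hypothetical helper that turns an Otsu-based run-up estimate into the min/max window that the Almar method otherwise requires manually; the margin value here is an assumption for illustration:

```python
import numpy as np

def almar_bounds_from_otsu(runup_positions, margin=10):
    """Derive the (min, max) cross-shore search window for the Almar method
    from a per-row run-up estimate (e.g. Otsu-based), padded with a safety
    margin in pixels. Hypothetical helper, not the project's code."""
    lo = max(int(runup_positions.min()) - margin, 0)
    hi = int(runup_positions.max()) + margin
    return lo, hi

# Toy Otsu run-up estimate (cross-shore pixel positions over time):
runup = np.array([120, 135, 150, 142, 128])
print(almar_bounds_from_otsu(runup))  # (110, 160)
```

Seeding Almar with these bounds removes the manual per-image step noted earlier and is what makes a fully automatic pipeline feasible.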
These results can be improved in the future by incorporating metadata such as the estimated location of the rise-and-fall (swash) zone on the beach profile and its amplitude, based on the expected tide level and wave intensity.
The algorithm is a tool capable of working properly under the different lighting conditions (night/day, clouds/sun, etc.) and overtopping types in the Zarautz station database.
60% of the 48 images in the dataset are processed very well: the count made by the algorithm differs from the ground truth by at most 1 overtopping. 35% of the images have a maximum difference of 5 overtoppings with respect to the ground truth, and in only 5% of the images does the maximum difference reach 10.
The 60% of images processed well are mostly images with few overtoppings, but it should be noted that the tool never generated false positives on images with 0 overtoppings. This is very convenient, since in practice it is better not to raise false alerts.
In the 35% of images where the difference between the algorithm and the ground truth did not exceed 5 overtoppings, the difference was in the partial overtoppings, while the number of complete overtoppings was correct. These images had between 10 and 20 partial overtoppings.
Finally, the images (5%) with a maximum difference of 10 overtoppings, both partial and complete, are images with many overtoppings, with at least 25 partial ones. In these situations the system undercounts because of the difficulty of differentiating overtoppings that are very close in time.