alfonso.medela

About Alfonso Medela Ayarzaguena

This author has not yet filled in any details.
So far Alfonso Medela Ayarzaguena has created 4 blog entries.
11 Jul 2019

ISBI 2019

Categories: Deep Learning | 0 Comments

The IEEE International Symposium on Biomedical Imaging (ISBI), a scientific conference dedicated to mathematical, algorithmic, and computational aspects of biological and biomedical imaging across all scales of observation, took place last April 8th-11th. Alfonso Medela, a member of the piccolo team and Tecnalia Research & Innovation, presented the paper “Few-shot learning in histopathological images: reducing the need of labelled data on biological datasets”. The team has been working on a few-shot approach in parallel with the acquisition of the datasets. To overcome the problem of scarce data in new imaging modalities such as OCT and MPT, few-shot techniques make it possible to build algorithms from only a small number of images. The results showed that the proposed method can beat the classical transfer-learning approach when only a few images per class are available. The results encouraged the team to continue working along the same track, and as [...]
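
As a rough illustration of the few-shot idea (a generic baseline, not necessarily the method of the paper): with only a handful of labelled images per class, one can average their feature embeddings (for example, taken from a pre-trained CNN) into a single prototype per class and classify new images by nearest prototype. A minimal sketch in Python/NumPy, with illustrative embedding sizes and class counts:

```python
import numpy as np

# Hypothetical few-shot setup: a few labelled embeddings per class are
# averaged into one "prototype" per class; queries are assigned to the
# class whose prototype is closest. This is a generic illustration only.

def build_prototypes(support_embeddings, support_labels):
    """Average the few labelled embeddings of each class into a prototype."""
    classes = np.unique(support_labels)
    return {c: support_embeddings[support_labels == c].mean(axis=0) for c in classes}

def classify(query_embedding, prototypes):
    """Assign the query to the class whose prototype is closest (Euclidean distance)."""
    distances = {c: np.linalg.norm(query_embedding - p) for c, p in prototypes.items()}
    return min(distances, key=distances.get)

# Toy example: 2 classes, 3 labelled images each, 64-dimensional embeddings
rng = np.random.default_rng(0)
support = rng.normal(size=(6, 64))
labels = np.array([0, 0, 0, 1, 1, 1])
prototypes = build_prototypes(support, labels)
print(classify(rng.normal(size=64), prototypes))
```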

8 Aug 2018

Siamese Neural Networks

Categories: Deep Learning | 1 Comment

Siamese networks were first introduced by Bromley and LeCun [1] in the early 1990s to solve signature verification as an image-matching problem. A similar Siamese architecture was independently proposed for fingerprint identification by Baldi and Chauvin [2] in 1992. Later, in 2015, Gregory Koch et al. [3] proposed using Siamese neural networks for one-shot image recognition. Siamese neural networks are designed as two twin networks connected at their final layers by a distance layer, which is trained to predict whether two images belong to the same category or not. The networks that compose the Siamese architecture are called twins because all the weights and biases are tied, which means that both networks are identical and the architecture is symmetric. Symmetry is important because the network should be invariant to switching the input images. Moreover, this characteristic makes the network much faster to train since the number of [...]
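
To make the tied-weights idea concrete, below is a minimal sketch in PyTorch (the framework and layer sizes are assumptions for illustration): a single shared encoder processes both images, so the weights and biases are tied by construction, and a small head on the element-wise distance between the two embeddings predicts whether the images show the same class.

```python
import torch
import torch.nn as nn

class SiameseNetwork(nn.Module):
    """Twin encoders with tied weights, joined by a distance layer.

    A minimal sketch of the architecture described above: the same encoder
    is applied to both inputs (so all weights and biases are shared), and a
    final layer on |f(x1) - f(x2)| outputs a same/different probability.
    Input size (1x105x105) and layer widths are illustrative assumptions.
    """

    def __init__(self):
        super().__init__()
        # Shared encoder: reusing one module for both inputs ties the weights
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),
        )
        # Distance layer: maps the element-wise distance to a probability
        self.head = nn.Sequential(nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, x1, x2):
        f1, f2 = self.encoder(x1), self.encoder(x2)
        return self.head(torch.abs(f1 - f2))

# Usage: probability that the two images belong to the same class
model = SiameseNetwork()
x1 = torch.randn(4, 1, 105, 105)
x2 = torch.randn(4, 1, 105, 105)
p_same = model(x1, x2)                           # shape (4, 1)
loss = nn.BCELoss()(p_same, torch.ones(4, 1))    # trained with binary cross-entropy
```

Because the head only sees the element-wise distance between the two embeddings produced by the same encoder, swapping the input images leaves the prediction unchanged, which is exactly the symmetry property described above.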