Preface
For the past few months I have been working on my master's thesis. The topic is Image Similarity Assessment: the goal is to evaluate different algorithms for comparing and retrieving similar images from a given dataset. Basically the thing that Google does so well.
A few years back I did similar research based on feature extraction and comparison. The process was to run a wavelet transform, extract the interesting coefficients, and use them as a key for searching. Well, it turned out that this approach isn't very useful. In the end I was able to get over 98% accuracy for corrupted images (changed aspect ratio, added blur), but for finding similar images it didn't work well.
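To give a rough idea of what that kind of approach looks like, here is a minimal sketch: a one-level 2D Haar transform done by hand in NumPy, keeping only the positions and signs of the largest coefficients as a compact search key. This is my illustration of the general technique, not the actual code or parameters from that research.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform.

    Averages (approximation) end up in the top-left block,
    differences (details) in the other three blocks."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # pairwise row average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # pairwise row difference
    rows = np.vstack([a, d])
    a2 = (rows[:, 0::2] + rows[:, 1::2]) / 2.0
    d2 = (rows[:, 0::2] - rows[:, 1::2]) / 2.0
    return np.hstack([a2, d2])

def signature(img, k=8):
    """Keep only the k largest-magnitude coefficients as a search key,
    storing just their positions and signs."""
    coeffs = haar2d(img.astype(float)).ravel()
    idx = np.argsort(np.abs(coeffs))[-k:]
    sig = np.zeros_like(coeffs)
    sig[idx] = np.sign(coeffs[idx])
    return sig

# Two tiny 4x4 "images": a gradient vs. random noise.
img = np.arange(16, dtype=float).reshape(4, 4)
noise = np.random.default_rng(0).random((4, 4))

# The dot product of two signatures counts matching coefficients:
# an image overlaps perfectly with itself, much less with noise.
same = np.dot(signature(img), signature(img))
diff = np.dot(signature(img), signature(noise))
```

Comparing images then reduces to comparing these sparse signatures, which is cheap even for a large database.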
When I started working on my thesis, my supervisor asked me to try a different approach: neural networks, more specifically deep neural networks. Researchers from the LISA Lab in Montreal have found new ways to effectively train deep neural networks, and my goal is to try this new stuff. In short, I want to train a deep autoencoder that generates low-dimensional codes used for image retrieval.
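For readers unfamiliar with the idea, here is a toy sketch of what "low-dimensional codes for retrieval" means. It trains a shallow *linear* autoencoder in plain NumPy (the thesis targets deep, nonlinear ones) and then retrieves the stored image whose code is nearest to the query's code. All sizes and data here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "images": 20 flattened 8x8 patches as 64-dim vectors, zero-centered.
X = rng.random((20, 64))
X -= X.mean(axis=0)

# A shallow linear autoencoder: 64 -> 2 -> 64.
W_enc = rng.normal(0.0, 0.1, (64, 2))   # encoder: image -> 2-D code
W_dec = rng.normal(0.0, 0.1, (2, 64))   # decoder: code -> reconstruction

def loss():
    recon = (X @ W_enc) @ W_dec
    return float(((recon - X) ** 2).mean())

before = loss()
lr = 0.05
for _ in range(3000):
    codes = X @ W_enc                    # low-dimensional codes
    err = codes @ W_dec - X              # reconstruction error
    # Plain gradient descent on the squared reconstruction error.
    g_dec = codes.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
after = loss()

def retrieve(query):
    """Encode the query and return the index of the nearest stored code."""
    q = query @ W_enc
    return int(np.argmin(np.linalg.norm(X @ W_enc - q, axis=1)))
```

The point is that retrieval happens entirely in the tiny code space, so searching stays fast no matter how big the original images are; the hard part, and the subject of the thesis, is training a deep network whose codes put genuinely similar images close together.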
The idea behind my work is that the user provides some image as input and the software returns images that are in some way similar. I have to add a word of warning here: the algorithm "sees" differently than humans do. To prove my point, look at the following images:
[comparison images]
As you can see, this is quite a challenging task, and I'm really curious what results my research will yield.