CNN, Deep Learning, TensorFlow
Quoting *1: Georeferencing is the process of taking a digital image, it could be an airphoto, a scanned geologic map, or a picture of a topographic map, and adding geographic information to the image so that GIS or mapping software can ‘place’ the image in its appropriate real world location.
There are a few ways of achieving this, though we can split them into two categories: “manual” and “automated”. Manual means that, for two images (one being part of the other), one has to pick the points of intersection (the same points found in both images) by hand in a GIS application and then run the georeferencing process based on them.
This is how it looks for images A and B:
These particular images are an exception from the norm: they were obtained by two different satellites (European and Asian) at different resolutions (10 m and 5 m), so their pixel and RGB distributions differ, which makes the problem even more complex. The purpose of this trial, though, is to observe the results of the available automated tools. 🧰 If you want to try this on your own, I suggest you use images from the same source.
and the result obtained with QGIS (more or less accurate depending on the number and accuracy of the points you picked as matching between the two images):
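The same manual GCP workflow can be scripted with the GDAL command-line tools, which QGIS’s georeferencer uses under the hood. This is only a sketch: the filenames and the four pixel/line → lon/lat pairs below are made-up placeholders standing in for the points you would pick yourself.

```shell
# Attach four hypothetical ground control points (pixel, line, lon, lat)
# to the unreferenced image; values here are placeholders, not real GCPs.
gdal_translate -of GTiff \
  -gcp 100 150 24.71 45.93 \
  -gcp 820 140 24.98 45.94 \
  -gcp 790 610 24.97 45.71 \
  -gcp 120 640 24.72 45.70 \
  image_b.tif image_b_gcp.tif

# Warp into the target CRS using a first-order polynomial fit of the GCPs
gdalwarp -order 1 -t_srs EPSG:4326 image_b_gcp.tif image_b_georef.tif
```

More (and better-distributed) GCPs generally give a more accurate warp, which is why the QGIS result above depends so much on the points you picked.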
Now, let’s consider the automated way of doing this:
1. using a computer vision library, and
2. using machine learning
First, let’s see how the method finds points of interest in both images:
Now let’s have a look at how points are matched between the same images:
Now, the points above connected by the lines should be the same; the fact that some clearly are not means the parameters need further tuning, which we will leave for another time.
Finally, we can check how a deep learning algorithm would work:
Keep in mind that we are using a bit of transfer learning: the model was trained on images with a different pixel distribution, and our two images also differ from one another. With that in mind, we can consider this first attempt quite promising. To verify whether this works better on images with the same distribution, we crop a smaller part from the first image; the better results are shown below:
and, based on the points and lines/matches detected by the algorithm, we can also render a general mosaic preview:
We can see that the second image, the one that needs to be georeferenced, was warped accordingly: slightly off, but closer to what we expect. Further tuning and image pre-processing could get you to a 💯 match. So far, we are happy to discover new deep learning tools that can help GIS.
The imperfect examples above just show a few of the steps that can be taken when it comes to georeferencing satellite imagery of different resolutions and origins. A better example might have helped, but limited time on my hands brings you only a proof of concept.
I hope you’ve enjoyed it.
- *1: Georeferencing, SERC / Carleton College, https://serc.carleton.edu;
- *2: OpenCV: Point Feature Matching;
- *3: CNN papers: “Learning local features from images”, “A semi-automatic tool to georeference historical landscape images”, “Detecting Ground Control Points via Convolutional Neural Network”.