Spatial Data Science
Beyond "I make maps": how to make your research understood (and why you should!)
Most of the maps we use in daily life are based on information captured in Earth Observation data, such as aerial photographs or satellite images. Elements you find in those images include buildings, roads, water bodies, and trees. The more detailed the map, the higher the resolution of the images needs to be. Unfortunately, finding and mapping all those detailed objects automatically is difficult.
In this article, we focus on the procedure for producing a map of the objects visible in high-resolution images and height data captured by cameras and laser scanners mounted on airplanes. A human operator brings implicit knowledge of how the objects should be generalized in a map. The question is whether we can automate this process: how do we teach the computer that a group of pixels in the images should be labelled as, for example, “building”, “bare ground”, “cycle lane” or “bridge”? How do we teach the algorithm to draw the boundaries between objects?
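Labelling every pixel with a map class is what the computer-vision community calls semantic segmentation. The sketch below illustrates the idea only; the class list, tile size, and tiny network are illustrative assumptions, not the actual pipeline described here.

```python
# Minimal sketch (assumed setup, not the actual ITC pipeline): assign a map
# class to every pixel of an aerial image tile with a small convolutional net.
import torch
import torch.nn as nn

CLASSES = ["building", "bare ground", "cycle lane", "bridge"]  # example labels

# Toy fully convolutional network; in practice a U-Net-style model is typical.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, len(CLASSES), kernel_size=1),  # per-pixel class scores
)

tile = torch.rand(1, 3, 256, 256)   # one RGB aerial image tile (dummy data)
scores = model(tile)                # shape: (1, n_classes, 256, 256)
label_map = scores.argmax(dim=1)    # most likely class for each pixel
print(label_map.shape)              # torch.Size([1, 256, 256])
```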
We make use of big geodata to train deep learning networks on how the maps should be produced. To be precise, we use existing maps, together with aerial images and height data, to train the network. For this, we rely on nationwide open data: a huge dataset containing billions of polygons and even more image pixels and height measurements.
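A minimal training sketch under stated assumptions: the aerial image (RGB), a height raster, and labels rasterised from an existing map are combined to supervise the network. The four-channel input, array shapes, and optimiser settings are assumptions for illustration only.

```python
# Sketch: fuse image and height channels, supervise with map-derived labels.
import torch
import torch.nn as nn

n_classes = 4
model = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),  # 4 = RGB + height
    nn.Conv2d(32, n_classes, kernel_size=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for real tiles: image, height, and map labels.
rgb = torch.rand(8, 3, 128, 128)
height = torch.rand(8, 1, 128, 128)
labels = torch.randint(0, n_classes, (8, 128, 128))

inputs = torch.cat([rgb, height], dim=1)   # data fusion: stack input channels
loss = loss_fn(model(inputs), labels)      # compare prediction with the map
loss.backward()                            # one training step
optimiser.step()
```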
In our education at ITC, we cover cartographic rules for map production, image analysis, point cloud processing, data fusion, deep learning, and quality analysis of the produced results.