Semantic understanding of urban data (e.g. buildings, streets, neighborhoods) is critical for urban sensing, as well as for many commercial applications such as accurate antenna placement for cellular networks, flood planning, and architectural urban visualisations. Without knowing the surface properties of urban models, it is impossible to calculate, for example, the thermal properties of buildings or to simulate window visibility. In this project, the goal is to use deep neural network architectures to fuse and understand noisy urban data from multiple sources. The project will study the space of urban sensors, including their competencies, errors, and failure cases, resulting in a robust framework for semantic urban reconstruction. Unlike many rigid urban modelling pipelines, the desired outcome is a system that is entirely modular in its selection of sensors, allowing data sources to be added or removed to suit the many different situations facing real-world urban planners.
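As a minimal, illustrative sketch of the modularity idea described above (plain Python with hypothetical class and function names; an actual system would use learned encoders, e.g. PyTorch modules, rather than the toy functions here), each sensor can register a feature encoder, and fusion operates over whichever sources are currently present:

```python
# Sketch of a modular sensor-fusion pipeline: every registered source maps its
# raw observation to a feature vector, and fusion concatenates the features of
# the sources that supplied data. Sensors can therefore be added or removed
# without changing the fusion step. All names here are hypothetical.

class FusionPipeline:
    def __init__(self):
        self.encoders = {}  # sensor name -> feature-extraction function

    def add_source(self, name, encoder):
        self.encoders[name] = encoder

    def remove_source(self, name):
        self.encoders.pop(name, None)

    def fuse(self, observations):
        # Concatenate features from every registered source that provided data.
        fused = []
        for name, encoder in self.encoders.items():
            if name in observations:
                fused.extend(encoder(observations[name]))
        return fused

pipeline = FusionPipeline()
# Toy stand-ins for learned encoders:
pipeline.add_source("lidar", lambda pts: [len(pts), max(pts)])
pipeline.add_source("image", lambda px: [sum(px) / len(px)])

observations = {"lidar": [1.0, 2.0, 3.0], "image": [0.2, 0.4]}
features = pipeline.fuse(observations)  # 2 lidar features + 1 image feature
```

In a learned version, the concatenated features would feed a shared decoder that predicts per-surface semantic labels; the key design point is that removing a source only shrinks the fused representation, it does not break the pipeline.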
Basic knowledge of computer vision and deep learning. Programming skills: Python; TensorFlow (optional), PyTorch (optional).
The successful candidate will design and train novel deep neural networks on new synthetic datasets to fuse disparate data sources and create a semantically labelled 3D model at urban scale.
Expected deliverables: final report, trained networks, and code base.