AdapNet: Adaptive Semantic Segmentation in Adverse Environmental Conditions

Abstract

IEEE International Conference on Robotics and Automation, Singapore 2017

project page + web demo | videos [1] [2] [3]

Robust scene understanding of outdoor environments using passive optical sensors is an onerous and essential task for autonomous navigation. The problem is heavily characterized by changing environmental conditions throughout the day and across seasons. Robots should be equipped with models that are impervious to these factors in order to remain operable and, more importantly, to ensure safety in the real world. In this paper, we propose a novel semantic segmentation architecture and the Convoluted Mixture of Deep Experts (CMoDE) fusion technique, which enables a multi-stream deep neural network to learn features from complementary modalities and spectra, each of which is specialized in a subset of the input space. We present results from experiments on three publicly available datasets that contain diverse conditions including rain, summer, winter, dusk, fall, night and sunset, and show that our approach exceeds the state of the art. In addition, we evaluate the performance of autonomously traversing several kilometers of a forested environment (see the video links above) using only the segmentation output for perception.
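The core fusion idea can be illustrated with a minimal sketch: two modality-specific expert streams produce per-pixel class scores, and a gating network predicts per-pixel weights that combine them. The sketch below is an assumption-laden simplification in PyTorch; the class `TwoStreamMoEFusion`, its layer choices, and all variable names are hypothetical and do not reproduce the actual CMoDE architecture or training procedure described in the paper.

```python
# Minimal, illustrative two-stream mixture-of-experts fusion (not the paper's CMoDE).
# All names and layer choices here are hypothetical simplifications.
import torch
import torch.nn as nn


class TwoStreamMoEFusion(nn.Module):
    def __init__(self, in_channels_a, in_channels_b, num_classes):
        super().__init__()
        # One "expert" per modality, kept trivially shallow for illustration.
        self.expert_a = nn.Conv2d(in_channels_a, num_classes, kernel_size=3, padding=1)
        self.expert_b = nn.Conv2d(in_channels_b, num_classes, kernel_size=3, padding=1)
        # Gating network: per-pixel weights over the two experts,
        # predicted from the concatenated inputs.
        self.gate = nn.Sequential(
            nn.Conv2d(in_channels_a + in_channels_b, 2, kernel_size=1),
            nn.Softmax(dim=1),
        )

    def forward(self, x_a, x_b):
        logits_a = self.expert_a(x_a)  # e.g. RGB stream
        logits_b = self.expert_b(x_b)  # e.g. depth or NIR stream
        weights = self.gate(torch.cat([x_a, x_b], dim=1))  # shape (N, 2, H, W)
        # Per-pixel weighted combination of the expert predictions.
        return weights[:, 0:1] * logits_a + weights[:, 1:2] * logits_b


if __name__ == "__main__":
    model = TwoStreamMoEFusion(in_channels_a=3, in_channels_b=1, num_classes=12)
    rgb = torch.randn(1, 3, 384, 768)
    depth = torch.randn(1, 1, 384, 768)
    print(model(rgb, depth).shape)  # torch.Size([1, 12, 384, 768])
```

The design point this sketch captures is that the gating weights are spatially varying, so each expert can dominate in the regions of the input space it handles best (e.g. one modality under low light, another in daylight).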