Deep Multispectral Semantic Scene Understanding of Forested Environments using Multimodal Fusion
Abhinav Valada, Gabriel L. Oliveira, Thomas Brox, Wolfram Burgard
The 2016 International Symposium on Experimental Robotics (ISER 2016)
2016
valada16iser.pdf



Notes:
Semantic scene understanding of unstructured environments is a highly challenging task for robots operating in the real world. Deep Convolutional Neural Network architectures define the state of the art in various segmentation tasks. So far, researchers have focused on segmentation with RGB data. In this paper, we study the use of multispectral and multimodal images for semantic segmentation and develop fusion architectures that learn from RGB, Near-InfraRed channels, and depth data. We introduce a first-of-its-kind multispectral segmentation benchmark that contains 15,000 images and 366 pixel-wise ground truth annotations of unstructured forest environments. We identify new data augmentation strategies that enable training of very deep models using relatively small datasets. We show that our UpNet architecture exceeds the state of the art both qualitatively and quantitatively on our benchmark. In addition, we present experimental results for segmentation under challenging real-world conditions. The benchmark and a demo are publicly available at http://deepscene.cs.uni-freiburg.de.
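
Example (illustrative sketch only): the abstract describes fusion architectures that combine RGB with additional modalities such as Near-InfraRed for pixel-wise segmentation. The Python/PyTorch snippet below sketches the general idea of a two-stream late-fusion segmentation network; the layer sizes, fusion point, class count, and the class name LateFusionSegNet are assumptions for illustration and do not reproduce the paper's UpNet architecture.

# Minimal sketch (not the authors' UpNet): late fusion of an RGB stream and an
# NIR stream for pixel-wise segmentation. Channel counts, depths, and the
# fusion point are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


def encoder(in_channels: int) -> nn.Sequential:
    """Small convolutional encoder that downsamples the input by 4x."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )


class LateFusionSegNet(nn.Module):
    """Two modality-specific encoders whose features are concatenated and
    decoded into a per-pixel class map (hypothetical stand-in for UpNet)."""

    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.rgb_encoder = encoder(in_channels=3)   # RGB stream
        self.nir_encoder = encoder(in_channels=1)   # single NIR channel
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Upsample back to the input resolution and predict class scores.
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, rgb: torch.Tensor, nir: torch.Tensor) -> torch.Tensor:
        # Concatenate the two feature maps along the channel dimension (late fusion).
        fused = torch.cat([self.rgb_encoder(rgb), self.nir_encoder(nir)], dim=1)
        return self.decoder(fused)  # (N, num_classes, H, W) logits


if __name__ == "__main__":
    net = LateFusionSegNet(num_classes=6)
    rgb = torch.randn(1, 3, 128, 128)
    nir = torch.randn(1, 1, 128, 128)
    print(net(rgb, nir).shape)  # torch.Size([1, 6, 128, 128])

Per-pixel logits from such a network would then be trained with a standard cross-entropy loss against the pixel-wise ground truth annotations mentioned in the abstract.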


BibTeX:
@inproceedings{valada16iser,
  author = {Abhinav Valada and Gabriel L. Oliveira and Thomas Brox and Wolfram Burgard},
  title = {Deep Multispectral Semantic Scene Understanding of Forested Environments using Multimodal Fusion},
  booktitle = {The 2016 International Symposium on Experimental Robotics (ISER 2016)},
  year = 2016,
  month = oct,
  url = {http://ais.informatik.uni-freiburg.de/publications/papers/valada16iser.pdf},
  address = {Tokyo, Japan}
}