Publications

2016.12

Automatic feature extraction using CNN for robust active one-shot scanning

Ryusuke Sagawa, Yuki Shiba, Takuto Hirukawa, Satoshi Ono, Hiroshi Kawasaki, Ryo Furukawa

Abstract

Active one-shot scanning techniques have been widely used for various applications. Stereo-based active one-shot scanning embeds positional information about the image plane of a projector into a projected pattern so that correspondences can be retrieved entirely from a captured image. Many combinations of patterns and decoding algorithms for active one-shot scanning have been proposed. If the capturing environment does not satisfy the assumed conditions, such as the absence of strong external light, reconstruction with those methods degrades because pattern decoding fails. In this paper, we propose a general reconstruction algorithm that can be applied to any kind of pattern without strict assumptions. The technique is based on an efficient feature extraction function that drastically reduces the redundant information in the raw pixel values of patches of captured images. Shapes are reconstructed by efficiently finding correspondences between a captured image and the pattern using low-dimensional feature vectors. Such a function is created automatically by a convolutional neural network trained on a large database of pattern images that are efficiently synthesized on a GPU with a wide variation of depths and surface orientations. Experimental results show that our technique can be applied to several existing patterns without any ad hoc algorithm or prior information about the scene or the sensor.
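The abstract does not include code, so the following is only a minimal sketch of the core idea it describes: a small CNN compresses local image patches into low-dimensional feature vectors, and correspondences between the captured image and the projected pattern are then found by nearest-neighbour search in that feature space. The class and function names (PatchEncoder, match_patches), the patch size, and all layer sizes are assumptions for illustration, not the authors' actual architecture or training setup.

```python
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Hypothetical CNN that maps a small grayscale patch to a
    low-dimensional feature vector (layer sizes are illustrative only)."""
    def __init__(self, patch_size=16, feat_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # patch_size / 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # patch_size / 4
        )
        self.fc = nn.Linear(32 * (patch_size // 4) ** 2, feat_dim)

    def forward(self, x):                         # x: (N, 1, patch, patch)
        h = self.conv(x)
        return self.fc(h.flatten(1))              # (N, feat_dim)

def match_patches(encoder, camera_patches, pattern_patches):
    """For each camera patch, return the index of the most similar
    projector-pattern patch via nearest-neighbour search in feature space."""
    with torch.no_grad():
        f_cam = encoder(camera_patches)           # (Nc, feat_dim)
        f_pat = encoder(pattern_patches)          # (Np, feat_dim)
        dists = torch.cdist(f_cam, f_pat)         # pairwise Euclidean distances
        return dists.argmin(dim=1)                # best pattern index per camera patch

# Usage with random stand-in data (real patches would be cut from the
# captured image and from the projected pattern):
enc = PatchEncoder()
cam = torch.rand(100, 1, 16, 16)
pat = torch.rand(200, 1, 16, 16)
idx = match_patches(enc, cam, pat)                # (100,) correspondence indices
```

In the paper's setting, the encoder would be trained on synthetically rendered pattern images covering many depths and surface orientations; the sketch above omits that training step and simply shows how such an encoder would be used for correspondence search.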