AISL-TUT
Active Intelligent Systems Laboratory (Miura Laboratory) in
Toyohashi University of Technology
Qualification video
Team description paper
Active Intelligent Systems Laboratory (Miura Laboratory) at TUT conducts research on various intelligent systems, such as intelligent robots that can operate autonomously in complex real environments. We focus on sensory data analysis and action planning to develop such sophisticated intelligent systems. Toward RoboCup@Home 2017, we have integrated our technologies on the HSR (TOYOTA) so that it can provide users with various services in daily life. A key to such an assistive robot system is sophisticated information processing for grasping the situation and planning appropriate actions.
For person tracking, we first extract leg-like clusters in laser range data by finding local minima in the distance histogram. Leg clusters are then detected among them by computing features, such as the cluster length, the mean curvature, and the variance ratio obtained by PCA, and classifying them with a Support Vector Machine (SVM). These two steps are applied to each laser scan, and the robot tracks the target person's leg positions with an Unscented Kalman Filter (UKF). The position is published as a ROS tf frame.
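The classification step above can be sketched as follows. The exact feature definitions and SVM parameters used by the team are not specified, so this is an illustrative approximation: chord length, mean deviation from the chord as a curvature proxy, and the PCA eigenvalue ratio, fed to an RBF-kernel SVM trained on synthetic arc-shaped (leg-like) versus straight (wall-like) clusters.

```python
import numpy as np
from sklearn.svm import SVC

def leg_features(cluster):
    """Shape features for a 2-D laser point cluster of shape (N, 2).

    Illustrative stand-ins for the paper's features: chord length,
    a mean-curvature proxy, and the PCA variance ratio.
    """
    chord = cluster[-1] - cluster[0]
    length = np.linalg.norm(chord)
    chord_dir = chord / (length + 1e-9)
    rel = cluster - cluster[0]
    # Mean perpendicular deviation from the chord approximates curvature.
    deviation = np.abs(rel[:, 0] * chord_dir[1] - rel[:, 1] * chord_dir[0])
    mean_curv = deviation.mean()
    # Ratio of PCA eigenvalues measures how elongated the cluster is.
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(cluster.T)))
    var_ratio = eigvals[0] / (eigvals[1] + 1e-9)
    return np.array([length, mean_curv, var_ratio])

rng = np.random.default_rng(0)

def arc_cluster():
    # A noisy circular arc of ~6 cm radius, as a leg cross-section.
    t = np.linspace(0.3, np.pi - 0.3, 20)
    pts = 0.06 * np.stack([np.cos(t), np.sin(t)], axis=1)
    return pts + rng.normal(scale=0.003, size=pts.shape)

def line_cluster():
    # A noisy 40 cm straight segment, as a wall fragment.
    t = np.linspace(0.0, 0.4, 20)
    pts = np.stack([t, np.zeros_like(t)], axis=1)
    return pts + rng.normal(scale=0.003, size=pts.shape)

X = np.array([leg_features(arc_cluster()) for _ in range(50)]
             + [leg_features(line_cluster()) for _ in range(50)])
y = np.array([1] * 50 + [0] * 50)  # 1 = leg, 0 = non-leg
clf = SVC(kernel="rbf").fit(X, y)
```

In the real pipeline this classifier would run on every cluster of each laser scan, and only the clusters labelled as legs would be passed to the UKF tracker.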
We have implemented object recognition functions on HSR. For general object recognition, we have developed a YOLO-based recognition system. For specific objects, such as particular bottles and cans, we use image matching with features such as Scale-Invariant Feature Transform (SIFT) and Binary Robust Independent Elementary Features (BRIEF). The 3D position of each detected object is broadcast so that HSR can grasp and handle it.
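The matching step behind the specific-object recognition can be sketched as below. This shows only brute-force matching of BRIEF-style binary descriptors with a Lowe ratio test; keypoint detection and descriptor extraction (e.g. with OpenCV) are assumed to have happened upstream, and the 0.8 ratio threshold is an assumption, not the team's tuned value.

```python
import numpy as np

def hamming_match(desc_query, desc_db, ratio=0.8):
    """Match binary descriptors by Hamming distance with a ratio test.

    desc_query, desc_db: uint8 arrays of shape (N, 32), each row a
    256-bit BRIEF-style descriptor. Returns (query_idx, db_idx) pairs.
    """
    # XOR then popcount gives the Hamming distance between descriptors.
    xor = desc_query[:, None, :] ^ desc_db[None, :, :]
    dist = np.unpackbits(xor, axis=2).sum(axis=2)
    matches = []
    for i, row in enumerate(dist):
        order = np.argsort(row)
        best, second = order[0], order[1]
        # Lowe's ratio test rejects ambiguous matches: the best match
        # must be clearly closer than the second-best one.
        if row[best] < ratio * row[second]:
            matches.append((i, int(best)))
    return matches
```

Counting the accepted matches against each model image gives a score for deciding which specific object (if any) is present in the current view.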
For recognizing human voice, we first use HARK to perform sound source localization and separation. The separated sound is then recognized with the Google Speech API. Morphological analysis is applied to the transcribed text with Stanford NLP to obtain its meaning. By extracting the words and their relationships in the recognized speech, HSR understands what the target person means and executes the corresponding actions.
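The final step, mapping parsed words to an action, might look like the sketch below. The real system uses Stanford NLP for parsing; here a minimal keyword-based stand-in extracts an action verb and its object from the transcribed command, and the verb lexicon is purely illustrative.

```python
import re

# Illustrative verb lexicon mapping spoken verbs to robot actions.
ACTIONS = {"bring": "fetch", "take": "fetch", "go": "navigate", "follow": "follow"}
STOPWORDS = {"me", "the", "a", "to", "please"}

def parse_command(text):
    """Extract an (action, object) pair from a transcribed command.

    A stand-in for the dependency-parsing step: find the first known
    verb, then take the last non-stopword after it as the object.
    """
    tokens = re.findall(r"[a-z]+", text.lower())
    for i, tok in enumerate(tokens):
        if tok in ACTIONS:
            rest = [t for t in tokens[i + 1:] if t not in STOPWORDS]
            return {"action": ACTIONS[tok], "object": rest[-1] if rest else None}
    return None
```

For example, "Please bring me the bottle" yields a fetch action with "bottle" as its object, which the task planner can then dispatch.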
We estimate the state of the target person for adaptive attendance, so as to provide appropriate service. The robot estimates the target person's body orientation from torso shape data by extending the people tracking described above. Based on the position and orientation information, the robot judges the target person's state. We trained a Hidden Conditional Random Field (HCRF) to learn the best discriminative structure from 5-frame windows of consecutive features consisting of the walking speed and the distance and orientation to the nearest chair. The HCRF recognizes the person's state in real time from windows of the same length. Note that the robot shown in the qualification video has two laser range finders at different heights to measure leg and torso shapes, respectively; on HSR, the proposed activity recognition method uses the onboard Xtion as an alternative to the upper laser range finder.
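The windowing of per-frame features that feeds the HCRF can be sketched as follows. The HCRF itself is omitted; this shows only how consecutive per-frame feature vectors are stacked into fixed-length windows, and the feature ordering (walking speed, distance to chair, orientation to chair) is an assumption, as the paper does not give the exact layout.

```python
import numpy as np

def window_features(frames, window=5):
    """Stack consecutive per-frame features into fixed-length windows.

    frames: (T, 3) array of assumed [walking speed, distance to
    nearest chair, orientation to nearest chair] per frame.
    Returns a (T - window + 1, window, 3) array of sliding windows,
    the input shape a sequence classifier such as an HCRF consumes.
    """
    T = len(frames)
    # Index matrix: row k selects frames k .. k + window - 1.
    idx = np.arange(window)[None, :] + np.arange(T - window + 1)[:, None]
    return frames[idx]
```

At runtime the most recent window is classified on every new frame, which is what allows the state estimate to be updated in real time.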
Active Intelligent Systems Laboratory (AISL, Miura Laboratory)
Department of Computer Science and Engineering, Toyohashi University of Technology
C2-503 Building C, 1-1 Hibarigaoka, Tempaku-cho, Toyohashi, Aichi 441-8580, Japan
oishi@cs.tut.ac.jp