About us


The Active Intelligent Systems Laboratory (Miura Laboratory) at TUT conducts research on various intelligent systems, such as intelligent robots that can operate autonomously in complex real-world environments. We focus on sensory data analysis and action planning to develop such sophisticated intelligent systems.

Toward RoboCup@Home 2017, we have integrated our technologies on HSR (TOYOTA) so that it can provide users with various services in daily life. The key to such an assistive robot system is sophisticated information processing for grasping the situation and planning appropriate actions.

Videos

People detection



For person tracking, we first extract leg-like clusters in laser range data by finding local minima in the distance histogram. Leg clusters are then detected among them by computing features, such as the cluster length, the mean curvature, and the variance ratio obtained by PCA, and classifying them with a Support Vector Machine (SVM). These two steps are applied to each laser scan, and the robot tracks the positions of the target person's legs with an Unscented Kalman Filter (UKF). The tracked position is published as a ROS tf frame.
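As a concrete illustration, here is a minimal Python sketch of the leg-cluster classification and UKF tracking steps, assuming clusters of 2D scan points have already been extracted; the exact feature definitions, the RBF-kernel SVM, and the constant-velocity motion model are illustrative assumptions rather than our exact implementation.

```python
import numpy as np
from sklearn.svm import SVC
from filterpy.kalman import MerweScaledSigmaPoints, UnscentedKalmanFilter

def cluster_features(points):
    """Features for one cluster: length, mean curvature, PCA variance ratio.

    points: (N, 2) array of consecutive scan points in the sensor frame.
    """
    # Cluster length: end-to-end extent of the scan segment.
    length = np.linalg.norm(points[-1] - points[0])

    # Mean curvature: average absolute turning angle along the contour.
    diffs = np.diff(points, axis=0)
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])
    turn = np.diff(angles)
    turn = np.arctan2(np.sin(turn), np.cos(turn))  # wrap to [-pi, pi]
    curvature = np.abs(turn).mean()

    # Variance ratio: ratio of the two eigenvalues of the point covariance.
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(points.T)))
    ratio = eigvals[0] / max(eigvals[1], 1e-9)

    return np.array([length, curvature, ratio])

def train_leg_classifier(clusters, labels):
    """Train the SVM on labeled clusters (1 = leg, 0 = not leg)."""
    X = np.stack([cluster_features(c) for c in clusters])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf

def make_leg_tracker(dt=0.1):
    """UKF with a constant-velocity model: state [x, y, vx, vy]."""
    def fx(x, dt):
        return np.array([x[0] + x[2] * dt, x[1] + x[3] * dt, x[2], x[3]])
    def hx(x):
        return x[:2]  # we observe only the 2D leg position
    sigmas = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
    return UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt,
                                 fx=fx, hx=hx, points=sigmas)
```

Each new scan would then call ukf.predict() followed by ukf.update(z) with the detected leg position z, and the filtered state would be broadcast as the tf frame.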


Object detection & manipulation



We have implemented object recognition functions on HSR. For general object recognition, we have developed a YOLO-based object recognition system. For specific object recognition, such as particular bottles and cans, we have developed a method based on image matching using several features, such as Scale-Invariant Feature Transform (SIFT) and Binary Robust Independent Elementary Features (BRIEF). The 3D position of each detected object is broadcast so that HSR can grasp and handle it.
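For the specific-object case, the matching step can be sketched with OpenCV as follows; the ratio-test threshold and minimum match count are illustrative assumptions, and BRIEF matching would follow the same pattern with a Hamming-distance matcher.

```python
import cv2

def match_specific_object(template_gray, scene_gray, min_matches=10):
    """Match a known object template against the camera image with SIFT."""
    sift = cv2.SIFT_create()  # cv2.xfeatures2d.SIFT_create() on older OpenCV
    kp_t, des_t = sift.detectAndCompute(template_gray, None)
    kp_s, des_s = sift.detectAndCompute(scene_gray, None)

    # FLANN-based matching with Lowe's ratio test to reject ambiguous matches.
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    pairs = flann.knnMatch(des_t, des_s, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

    # Report a detection only if enough consistent matches are found.
    return good if len(good) >= min_matches else []
```

In the full system, the matched keypoints would be combined with depth data to obtain the 3D position that is broadcast for grasping.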


Speech recognition



For recognizing human speech, we first use HARK to perform sound source localization and separation. The separated sound is then recognized with the Google Speech API. Morphological analysis is applied to the extracted text with Stanford NLP to obtain its meaning. By extracting the words and the relationships between them in the recognized speech, HSR understands what the target person means and executes the corresponding actions.
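As a sketch of the final understanding step, the snippet below uses stanza (the Stanford NLP Group's Python library) as a stand-in for the Stanford NLP tools; the verb-object extraction rule and the example command are illustrative assumptions.

```python
import stanza

# Requires a one-time model download: stanza.download("en")
nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,lemma,depparse")

def parse_command(text):
    """Extract (action verb, target object) pairs from a transcribed command."""
    doc = nlp(text)
    actions = []
    for sent in doc.sentences:
        for word in sent.words:
            # A direct object attached to a verb identifies the action target.
            if word.deprel == "obj":
                head = sent.words[word.head - 1]  # word.head is 1-indexed
                actions.append((head.lemma, word.lemma))
    return actions

print(parse_command("bring me the bottle on the table"))
# -> [('bring', 'bottle')]  (illustrative output)
```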


Activity recognition




We estimate the state of the target person so that the robot can attend to them adaptively and provide appropriate services. By extending the people tracking described above, the robot estimates the body orientation of the target person from torso shape data. Based on the position and orientation information, the robot judges the target person's state. We trained a Hidden Conditional Random Field (HCRF) to learn the best discriminative structure from five-frame sequences of features consisting of the walking speed and the distance and orientation to the nearest chair. The trained HCRF recognizes the person's state in real time from feature sequences of the same length. Note that the robot shown in the qualification video had two laser range finders at different heights to measure leg and torso shapes, respectively; the proposed activity recognition method also works on HSR by using its Xtion as an alternative to the upper laser range finder.
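The observation sequences fed to the HCRF could be assembled as in the hypothetical sketch below; the tracker outputs (position, velocity, body yaw) and the chair map are placeholder inputs.

```python
import numpy as np

WINDOW = 5  # consecutive frames per observation sequence

def frame_feature(person_pos, person_vel, body_yaw, chair_positions):
    """Per-frame feature: walking speed plus distance and relative
    orientation to the nearest chair."""
    speed = np.linalg.norm(person_vel)
    offsets = chair_positions - person_pos          # (M, 2) chair offsets
    dists = np.linalg.norm(offsets, axis=1)
    i = int(np.argmin(dists))
    bearing = np.arctan2(offsets[i, 1], offsets[i, 0]) - body_yaw
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))  # wrap to [-pi, pi]
    return np.array([speed, dists[i], bearing])

def sliding_windows(frames):
    """Group per-frame features into overlapping 5-frame sequences
    that the classifier evaluates in real time."""
    return [np.stack(frames[i:i + WINDOW])
            for i in range(len(frames) - WINDOW + 1)]
```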

Members

Shuji Oishi

Team leader
Assistant professor

Jun Miura

Director
Professor


Kenji Koide

Developer
Doctoral student

Yoshiki Kohari

Developer
Master course student

Mitsuhiro Demura

Developer
Master course student

Seiichiro Une

Developer
Master course student

Liliana Villamar Gomez

Developer
Master course student

Tsubasa Kato

Developer
Undergraduate student

Motoki Kojima

Developer
Undergraduate student

Kazuhi Morohashi

Developer
Undergraduate student

Publications


Person detection, tracking, and identification

  • K.Kidono, T.Miyasaka, A.Watanabe, T.Naito, and J.Miura. Pedestrian recognition using high-definition lidar. In 2011 IEEE Intelligent Vehicles Symp., pages 405-410, 2011.
  • K.Koide and J.Miura. Identification of a specific person using color, height, and gait features for a person following robot. Robotics and Autonomous Systems, 84(10):76-87, 2016.
  • K.Misu and J.Miura. Specific person tracking using 3d lidar and espar antenna for mobile service robots. Advanced Robotics, 29(22):1483-1495, 2015.
  • I.Ardiyanto and J.Miura. Partial least squares-based human upper body orientation estimation with combined detection and tracking. Image and Vision Computing, 32(11):904-915, 2014.
  • M.Shimizu, K.Koide, I.Ardiyanto, J.Miura, and S.Oishi. Lidar-based body orientation estimation by integrating shape and motion information. In IEEE Int. Conf. on Robotics and Biomimetics, pages 1948-1953, 2016. 
  • B.S.B. Dewantara and J.Miura. Optifuzz: A robust illumination invariant face recognition system and its implementation. Machine Vision and Applications, 27(6):877-891, 2016. 

Planning a safe and efficient robot motion

  • I.Ardiyanto and J.Miura. Real-time navigation using randomized kinodynamic planning with arrival time field. Robotics and Autonomous Systems, 60(12):1579-1591, 2012.
  • M.Chiba, J.Satake, and J.Miura. Visual person identification using a distance-dependent appearance model for a person following robot. Int. J. of Automation and Computing, 10(5):438-446, 2013.
  • I.Ardiyanto and J.Miura. Visibility-based viewpoint planning for guard robot using skeletonization and geodesic motion model. In IEEE Int. Conf. on Robotics and Automation, pages 652-658, 2013.  
  • S.Oishi, Y.Kohari, and J.Miura. Toward a robotic attendant adaptively behaving according to human state. In Int. Symp. on Robot and Human Interactive Communication, pages 1038-1043, 2016.

SLAM and place recognition

  • Y.Okada and J.Miura. Exploration and observation planning for 3d indoor mapping. In IEEE/SICE Int. Symp. on System Integration, pages 599-604, 2015.  
  • S.Kani and J.Miura. Mobile monitoring of physical states of indoor environments for personal support. In IEEE/SICE Int. Symp. on System Integration, pages 393-398, 2015. 

Human robot collaboration

  • K.Chikaarashi, J.Miura, S.Kadekawa, and J.Sugiyama. Human-robot collaborative remote object search. In Int. Conf. on Intelligent Autonomous Systems, 2014.
  • J.Miura, H.Goto, and J.Sugiyama. Human-robot collaborative assembly by on-line human action recognition based on an fsm task model. In HRI2013 Workshop on Collaborative Manipulation: New Challenges for Robotics and HRI, 2013.
  • H.Goto, T.Hamabe, and J.Miura. A programming by demonstration system for human-robot collaborative assembly tasks. In IEEE Int. Conf. on Robotics and Biomimetics, pages 1195-1201, 2015.
  • K.Yamada and J.Miura. Ambiguity-driven interaction in robot-to-human teaching. In Int. Conf. on Human-Agent Interaction, pages 257-260, 2016.