Name

MATSUMARU Takafumi

松丸 隆文

Title

Professor (https://researchmap.jp/read0060093/)

Affiliation

Graduate School of Information, Production and Systems

Contact

Email address

matsumaru@waseda.jp

URLs

Web page URL

http://www.waseda.jp/sem-matsumaru/ (laboratory HP)

http://www.f.waseda.jp/matsumaru/ (personal HP)

Researcher number
10313933
ORCID iD
http://orcid.org/0000-0001-7776-2984

Other on-campus affiliations

On-campus research institutes

Comprehensive Research Organization, Faculty of Science and Engineering (Research Institute for Science and Engineering)

Adjunct researcher, 2018-

Education and Degrees

Education

-1985 Waseda University, School of Science and Engineering, Department of Mechanical Engineering
-1987 Waseda University, Graduate School of Science and Engineering, Major in Mechanical Engineering, Specialization in Biological Control Engineering

Degree

Doctor of Engineering (by dissertation), Waseda University, Intelligent Mechanics / Mechanical Systems

Career

April 1987- Joined Toshiba Corporation; Corporate Research and Development Center, Mechanical Engineering Laboratory
April 1994- ditto, Research and Development Center, Mechanical Systems and Energy Laboratory, Senior Researcher
April 1998- ditto, Research and Development Center, Mechanical Systems Laboratory, Senior Researcher (after reorganization)
April 1999 - August 2010: Shizuoka University, Associate Professor (Faculty of Engineering, Department of Mechanical Engineering)
April 2002 - March 2003: Shizuoka Industrial Technology Center, Shizuoka Prefecture, Visiting Researcher
April 2003 - December 2003: LSC (Laboratoire Systemes Complexes) - CNRS (Centre National de la Recherche Scientifique), France, Visiting Professor (JSPS program for researchers dispatched to specific countries)
April 2004 - March 2005: Shizuoka Institute of Science and Technology, Part-time Lecturer (Faculty of Science and Technology, Department of Mechanical Engineering)
September 2010 - March 2011: Shizuoka University, Visiting Professor (Faculty of Engineering and Graduate School of Engineering)
September 2010-: Waseda University, Professor (Faculty of Science and Engineering, Graduate School of Information, Production and Systems)

Academic Societies

IAENG (the International Association of Engineers)

IEEE (Institute of Electrical and Electronics Engineers, Inc.)

Society of Automotive Engineers of Japan

The Virtual Reality Society of Japan

Human Interface Society

The Japan Society of Mechanical Engineers (JSME), Review Committee Member

The Society of Instrument and Control Engineers (SICE), Councilor

The Robotics Society of Japan

Society of Biomechanisms Japan, Director

Awards

Excellent Oral Presentation Award (ISIPS 2018)

November 2018; awarded by: ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)

Title: "Intuitive Control of Virtual Robots using Leap Motion Sensor"

Recipients (group): Rajeevlochana G. Chittawadigi, Subir Kumar Saha, Takafumi Matsumaru

Excellent Poster Presentation Award (ISIPS 2018)

November 2018; awarded by: ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018)

Title: "Fingertip pointing interface by hand detection using Short range depth camera"

Recipients (group): Kazuki Horiuchi, Takafumi Matsumaru

Excellent Paper Award (ISIPS 2017)

November 2017; awarded by: 11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan]

Title: "Usability Study of Aerial Projection of 3D Hologram Object in 3D Workspace"

Recipients (group): Septiana Asyifa I., Jiono Mahfud, Kazuki Horiuchi, Takafumi Matsumaru

Excellent Paper Award (ISIPS 2016)

November 2016; awarded by: 10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)

Recipients (group): R. P. Joshi, P. Sharma, R. A. Boby, S. K. Saha, T. Matsumaru

Excellent Paper Award (ISIPS 2015)

November 2015; awarded by: ISIPS 2015

Excellent Poster Presentation Award (ISIPS 2015)

November 2015; awarded by: ISIPS 2015

Excellent Paper Award (ISIPS 2014)

November 2014; awarded by: ISIPS 2014

Excellent Paper Award (7th IPS-ICS)

November 2013; awarded by: 7th IPS-ICS

Excellent Poster Presentation Award (7th IPS-ICS)

November 2013; awarded by: 7th IPS-ICS

Excellent Poster Presentation Award (6th IPS-ICS)

November 2012; awarded by: 6th IPS-ICS

JSME Robotics and Mechatronics Division: nomination for Excellent Presentation at the Robotics and Mechatronics Conference 2009 (78 of approx. 350 eligible out of 1,035 presentations)

May 2009; awarded by: JSME ROBOMEC 2009

JSME Robotics and Mechatronics Division: nomination for Excellent Presentation at the Robotics and Mechatronics Conference 2008 (59 of 1,054 presentations)

June 2008; awarded by: JSME ROBOMEC 2008

SICE 6th Annual Conference on System Integration (SI2005): Best Session Presentation Award

December 2005; awarded by: SICE SI2005

JSME Robotics and Mechatronics Division: certified Excellent Presentation at the Robotics and Mechatronics Conference '00 (169 of 653 presentations)

May 2000; awarded by: JSME ROBOMEC 2000

Media Guide

Category
Engineering
Specialized fields
Robotics, mechatronics, human-robot interaction

Research Fields

Keywords

Bio-robotics & human mechatronics, robotics, bioengineering

KAKENHI (Grants-in-Aid for Scientific Research) classification

Engineering / Mechanical engineering / Intelligent mechanics and mechanical systems

Informatics / Human informatics / Intelligent robotics

Interdisciplinary fields / Human medical engineering / Rehabilitation science and welfare engineering

Informatics / Computing infrastructure / Multimedia and databases

Desired Joint-Research Themes

Human-robot interaction information systems

Desired partners: industry-academia collaboration; joint research with other organizations, including the private sector

Purpose: technical consultation, contract research, joint research, etc.

Human-robot interaction power systems

Desired partners: industry-academia collaboration; joint research with other organizations, including the private sector

Purpose: technical consultation, contract research, joint research, etc.

Research Theme History

1999- Construction of a hands-on mechatronics learning system

Keywords: LEGO Mindstorms, Sony AIBO

Individual research

1999- Measurement, analysis, and evaluation of human movement and motion

Keywords: horizontal-bar exercises, heavy-load lifting motion

Individual research

1999- Human-robot interaction of human-symbiotic robots

Keywords: step-on interface (SOI), friendly amusing mobile (FAM) function; application: light-stepping play

Individual research

1999- Form and motion of human-symbiotic robots (informatics of motion in human-machine systems)

Keywords: emotional motion, informative motion, informatics of motion in human-machine systems

Individual research

1999- Preliminary announcement of a mobile robot's upcoming motion to people nearby

Keywords: preliminary indication of speed and direction, simulated human-interaction experiments, indication-based and voice-based announcement

Individual research

1999- Networked remote operation system for mobile robots

Keywords: fusion control, environment maps, operation interfaces

Individual research

Papers

An Object Model and Interaction Method for a Simulated Experience of Pottery on a Potter's Wheel

Takafumi Matsumaru, Ami Morikawa

Sensors (MDPI AG (Multidisciplinary Digital Publishing Institute)) (ISSN 1424-8220; CODEN: SENSC9), peer-reviewed, 20(11), May 2020

Publication type: research article (academic journal); ISSN: 1424-8220

Abstract: This paper introduces an object model and an interaction method for a simulated experience of pottery on a potter's wheel. Firstly, we propose a layered cylinder model for a 3D object of the pottery on a potter's wheel. Secondly, we set three kinds of deformation functions to form the object model from an initial state to a bowl shape: shaping the external surface, forming the inner shape (deepening the opening and widening the opening), and reducing the total height. Next, as the interaction method between a user and the model, we prepare a simple method similar to actual hand-finger operations on pottery on a potter's wheel, in which the index-finger movement takes care of the external surface and the total height, and the thumb movement makes the inner shape. These are implemented in the three-dimensional aerial image interface (3DAII) developed in our laboratory to build a simulated experience system. We confirm the operation of the proposed object model (layered cylinder model) and the functions of the prepared interaction method (a simple method similar to actual hand-finger operations) through a preliminary evaluation with participants. The participants were asked to make three kinds of bowl shapes (cylindrical, dome-shaped, and flat-type) and then answered a survey (maneuverability, visibility, and satisfaction). All participants could make something like the three kinds of bowl shapes in less than 30 min from their first touch.
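As a rough illustration of the layered cylinder model described in the abstract, the object can be sketched as a stack of discs whose radii the deformation functions adjust. All class and method names below are hypothetical, not the paper's implementation; this is a minimal sketch, assuming a fixed layer height and a simple linear falloff when shaping the surface.

```python
# Hypothetical sketch of a "layered cylinder" pottery model:
# a stack of discs, one radius per layer, plus two of the
# deformation operations named in the abstract.

class LayeredCylinder:
    def __init__(self, n_layers=20, radius=5.0, layer_height=0.5):
        self.radii = [radius] * n_layers   # one radius per disc layer
        self.layer_height = layer_height

    @property
    def total_height(self):
        return len(self.radii) * self.layer_height

    def shape_external(self, height, target_radius, spread=2):
        """Pull the external surface toward target_radius around `height`;
        neighbouring layers are affected less (crude linear falloff)."""
        center = int(height / self.layer_height)
        for i, r in enumerate(self.radii):
            w = max(0.0, 1.0 - abs(i - center) / (spread + 1))
            self.radii[i] = r + w * (target_radius - r)

    def reduce_height(self, n=1):
        """Drop the top n layers (the 'reducing total height' operation)."""
        self.radii = self.radii[:max(1, len(self.radii) - n)]

pot = LayeredCylinder()
pot.shape_external(height=5.0, target_radius=3.0)  # pinch the wall inward
pot.reduce_height(4)                               # trim the rim
```

A real implementation would also need the inner-shape functions and a rendering step, but the data structure itself stays this simple: deformation is just a per-layer radius update.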

Dynamic Hand Gesture Recognition for Robot Arm Teaching based on Improved LRCN Model

Kaixiang Luan, Takafumi Matsumaru

2019 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2019) [Dali, Yunnan, China] (December 6-8, 2019), peer-reviewed, pp.1268-1273, December 2019

Publication type: research article (international conference proceedings)

Abstract: In this research, we focus on finding a new method of human-robot interaction in industrial environments. A vision-based dynamic hand gesture recognition system is proposed for a robot arm picking task. Eight dynamic hand gestures are captured for this task with a 100 fps high-speed camera. Based on the LRCN model, we combine MobileNets (V2) and LSTM for this task: MobileNets (V2) extracts the image features and recognizes the gestures, and a Long Short-Term Memory (LSTM) architecture interprets the features across time steps. Around 100 samples are taken for each gesture for training at first; then the samples are augmented to 200 samples per gesture by data augmentation. Results show that the model is able to learn gestures varying in duration and complexity, and gestures can be recognized in 88 ms with 90.62% accuracy in experiments on our hand gesture dataset.

Intuitive Control of Virtual Robots using Transformed Objects as Multiple Viewports

Rajeevlochana G. Chittawadigi, Takafumi Matsumaru, Subir Kumar Saha

2019 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2019) [Dali, Yunnan, China] (December 6-8, 2019), peer-reviewed, pp.822-827, December 2019

Publication type: research article (international conference proceedings)

Abstract: In this paper, the integration of the Leap Motion controller with the RoboAnalyzer software is reported. Leap Motion is a vision-based device that tracks the motion of human hands; its output was processed and used to control a virtual robot model in RoboAnalyzer, an established robot simulation software. For intuitive control, the robot model was copied and transformed to be placed at four different locations such that the user watches four different views in the same graphics environment. This novel method was used to avoid multiple windows or viewports and was observed to have a marginally better rendering rate. Several trials of picking up cylindrical objects (pegs), moving them, and placing them in cylindrical holes were carried out, and it was found that the manipulation was intuitive, even for a novice user.

Brand Recognition with Partial Visible Image in the Bottle Random Picking Task based on Inception V3

Chen Zhu, Takafumi Matsumaru

IEEE RO-MAN 2019 (The 28th IEEE International Conference on Robot & Human Interactive Communication) [Le Meridien, Windsor Place, New Delhi, India] (14-18 October, 2019), peer-reviewed, pp.1-6, October 2019

Publication type: research article (international conference proceedings)

Abstract: In the brand-wise random-ordered drinking PET bottle picking task, overlapping and viewing-angle problems lead to low brand recognition accuracy. In this paper, we set out to increase brand recognition accuracy and to find out how the overlapping rate affects recognition accuracy. Using a stepping motor and a transparent fixture, training images were taken automatically from the bottles over 360 degrees to simulate pictures taken from any viewing angle. After that, the images were augmented with random cropping and rotation to simulate the overlapping and rotation found in a real application. Using the automatically constructed dataset, Inception V3, transfer-learned from ImageNet, was trained for brand recognition. By generating a random mask with a specific overlapping rate on the original image, Inception V3 achieves 80% accuracy when 45% of the object in the image is visible, and 86% accuracy when the overlapping rate is lower than 30%.

Three-dimensional Aerial Image Interface, 3DAII

Takafumi MATSUMARU, Asyifa Imanda SEPTIANA, Kazuki HORIUCHI

Journal of Robotics and Mechatronics (JRM) (Fuji Technology Press Ltd.), peer-reviewed, 31(5), pp.657-670, October 2019

Publication type: research article (academic journal); ISSN: 0915-3942 (Print) / 1883-8049 (Online)

Abstract: In this paper, we introduce the three-dimensional aerial image interface, 3DAII. This interface reconstructs and aerially projects a three-dimensional object image, which can be simultaneously observed from various viewpoints or by multiple users with the naked eye. A pyramid reflector is used to reconstruct the object image, and a pair of parabolic mirrors is used to aerially project the image. A user can directly manipulate the three-dimensional object image by superimposing a hand-finger or a rod on the image. A motion capture sensor detects the user's hand-finger that manipulates the projected image, and the system immediately exhibits a reaction such as deformation, displacement, or discoloration of the object image, including sound effects. A performance test was executed to confirm the functions of 3DAII. The execution time of end-tip positioning of a robotic arm was compared among four operating devices: touchscreen, gamepad, joystick, and 3DAII. The results exhibit the advantages of 3DAII: we can directly instruct the movement direction and movement speed of the end-tip of the robotic arm using the three-dimensional Euclidean vector outputs of 3DAII, with which we can intuitively make the end-tip of the robotic arm move in three-dimensional space. Therefore, 3DAII could be an important alternative for an intuitive spatial user interface, e.g., an operation device for aerial robots, a center console for automobiles, or a 3D modelling system. A survey was conducted to evaluate comfort and fatigue based on ISO/TS 9241-411, and ease of learning and satisfaction based on the USE questionnaire. We have identified several challenges related to visibility, workspace, and sensory feedback to users that we would like to address in the future.

Pre-Robotic Navigation Identification of Pedestrian Crossings & Their Orientations

Ahmed Farid, Takafumi Matsumaru

12th Conference on Field and Service Robotics (FSR 2019) [Tokyo, Japan], (August 29-31, 2019), peer-reviewed, August 2019

Publication type: research article (international conference proceedings)

Path Planning in Outdoor Pedestrian Settings Using 2D Digital Maps

Ahmed Farid, Takafumi Matsumaru

Journal of Robotics and Mechatronics (JRM) (Fuji Technology Press) (ISSN: 0915-3942 (Print) / 1883-8049 (Online)), peer-reviewed, 31(3), pp.464-473, June 2019

Publication type: research article (academic journal); ISSN: 0915-3942 (Print) / 1883-8049 (Online)

Abstract: This article presents a framework for planning sidewalk-wise paths in data-limited pedestrian environments by visually recognizing city blocks in 2D digital maps (e.g., Google Maps, OpenStreetMap) using contour detection, then applying graph theory to infer a pedestrian path from start to finish. There are two main targeted problems. Firstly, several locations around the world (e.g., suburban/rural areas) do not have recorded data on street crossings and pedestrian walkways. Secondly, the continuous process of recording maps (i.e., digital cartography) is, to our current knowledge, manual and not yet fully automated in practice. Both issues contribute to a scaling problem in which it becomes time- and effort-consuming to continuously monitor and record such data on a global scale. As a result, the framework's purpose is to produce path plans that do not depend on prerecorded (e.g., using SLAM) or data-rich pedestrian maps, thus facilitating navigation for mobile robots and people with visual impairments alike. The framework was able to produce pedestrian paths for most locations where data on sidewalks and street crossings were indeed limited, but some challenges remain. In this article, the framework's structure, output, and challenges are explained. Additionally, we mention some works in the literature on how to make use of such path plans effectively.
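The contour-detection-plus-graph-theory pipeline in the abstract can be hinted at with a toy sketch: once city blocks have been recognized, their corners become graph nodes and shared sidewalk segments or street crossings become edges, so a route is a shortest-path query. The graph below is invented purely for illustration, not taken from the paper.

```python
# Toy illustration: city-block corners as nodes, sidewalks/crossings as
# edges; breadth-first search finds one shortest pedestrian route.
from collections import deque

def shortest_path(graph, start, goal):
    """BFS returning one shortest node sequence from start to goal."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:       # walk predecessors back to start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in prev:           # first visit fixes the shortest hop count
                prev[nxt] = node
                queue.append(nxt)
    return None                           # goal unreachable

# Four hypothetical city-block corners A-D; A and D have no direct crossing.
sidewalks = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(shortest_path(sidewalks, "A", "D"))  # ['A', 'B', 'D']
```

In the actual framework the edges would carry geometric lengths and crossing penalties, which would call for weighted search (e.g., Dijkstra) rather than plain BFS.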

Image Processing for Picking Task of Random Ordered PET Drinking Bottles

Chen Zhu, Takafumi Matsumaru

Journal of Robotics, Networking and Artificial Life (JRNAL) (Atlantis Press), peer-reviewed, 6(1), pp.38-41, June 2019

Publication type: research article (academic journal); ISSN: 2405-9021 (Print) / 2352-6386 (Online)

Abstract: In this research, six brands of soft drinks are to be picked up by a robot with a monocular red-green-blue (RGB) camera. The drinking bottles need to be located and classified by brand before being picked up. The Mask Regional Convolutional Neural Network (Mask R-CNN), a mask-generation network improved from Faster R-CNN, is trained with the Common Objects in Context (COCO) dataset to detect and generate masks on the bottles in the image. Inception v3 is selected for the brand classification task. Around 200 images are taken or found at first; then the images are augmented to 1,500 images per brand by using random cropping and perspective transform. The results show that the masked image can be labeled with its brand name with at least 85% accuracy in the experiment.

Image Processing for Picking Task of Random Ordered PET Drinking Bottles

Chen Zhu, Takafumi Matsumaru

The 2019 International Conference on Artificial Life and Robotics (ICAROB 2019) [B-Con PLAZA, Beppu, Japan], (January 10-13, 2019), GS2-4, peer-reviewed, pp.634-637, January 2019

Publication type: research article (international conference proceedings)

Abstract: In this research, six brands of soft drinks are to be picked up by a robot with a monocular RGB camera. The drinking bottles need to be located and classified by brand before being picked up. A Mask R-CNN is pretrained with the COCO dataset to detect and generate masks on the bottles in the image. Inception v3 is selected for the brand classification task. Around 200 images are taken; then the images are augmented to 1,500 images per brand by using random cropping and perspective transform. The results show that the masked image can be labeled with its brand name with at least 85% accuracy in the experiment.
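The random-cropping half of the augmentation step described above can be sketched generically. This is a minimal stand-in, not the authors' code: image sizes and crop parameters are illustrative, and a real pipeline would operate on image arrays and add the perspective transform as well.

```python
# Generic sketch of random-crop augmentation on a 2D "image"
# represented as a list of rows (illustrative only).
import random

def random_crop(image, crop_h, crop_w, rng=random):
    """Return one random crop_h x crop_w sub-image of a 2D list-of-lists."""
    h, w = len(image), len(image[0])
    top = rng.randrange(h - crop_h + 1)    # random top-left corner
    left = rng.randrange(w - crop_w + 1)
    return [row[left:left + crop_w] for row in image[top:top + crop_h]]

img = [[r * 10 + c for c in range(6)] for r in range(6)]  # 6x6 test pattern
crops = [random_crop(img, 4, 4) for _ in range(5)]        # 5 augmented views
```

Each call yields a different sub-window of the original, which is how a couple of hundred source photos can be expanded to 1,500 training samples per brand.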

Short Range Fingertip Pointing Operation Interface by Depth Camera

Kazuki Horiuchi, Takafumi Matsumaru

2018 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2018), [Kuala Lumpur, Malaysia], (December 12-15, 2018), Thu1-4 (4), peer-reviewed, pp.132-137, December 2018

Publication type: research article (international conference proceedings)

Abstract: In this research, we proposed and implemented a finger-pointing detection system with a short-range depth camera. The most widely used pointing device for a computer interface is a mouse, and one alternative is a depth sensor that detects hand and finger movement. However, in the present literature, since the user must operate the cursor at a relatively large distance from the detection device, the user needs to raise the arm and make wide movements, which is inconvenient over long periods of time. To solve this usability problem, we proposed a comfortable and easy-to-use method that narrows the distance between the keyboard and the pointing device. Next, we compared various depth sensors and selected one that can recognize even small movements. Additionally, we proposed a mapping method between the user's perceived cursor position and the real one indicated by the index-finger direction. Furthermore, we compared our pointing method with a mouse and a touch pad for usability, accuracy, and working speed. The results showed that users have better performance in continuous operation of character input from the keyboard combined with cursor pointing.

Proposing Camera Calibration Method using PPO (Proximal Policy Optimization) for Improving Camera Pose Estimations

Haitham K. Al-Jabri, Takafumi Matsumaru

2018 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2018), [Kuala Lumpur, Malaysia], (December 12-15, 2018), Thu1-8 (2), peer-reviewed, pp.790-795, December 2018

Publication type: research article (international conference proceedings)

Abstract: This paper highlights camera orientation estimation accuracy and precision, and proposes a new camera calibration technique using a reinforcement learning method named PPO (Proximal Policy Optimization) in offline mode. The offline mode is used just for extracting the camera geometry parameters that are then used to improve accuracy in real-time camera pose estimation techniques. We experiment with and compare two popular techniques using 2D vision feedback and evaluate their accuracy alongside other considerations relevant to real applications, such as disturbances from the surrounding environment and pose data stability. First, we use ORB (Oriented FAST and Rotated BRIEF) feature-point detection and the BF (Brute-Force) matcher to detect and match points in different frames, respectively. Second, we use FAST (Features from Accelerated Segment Test) corners and the LK (Lucas-Kanade) optical flow method to detect corners and track their flow across frames. Those points and corners are then used for pose estimation through an optimization process with: (a) the calibration method of Zhang using a chessboard pattern, and (b) our proposed method using PPO. The results using our proposed calibration method show significant accuracy improvements and easier deployment for end-users compared to the previously used methods.

Integration of Leap Motion Controller with Virtual Robot Module of RoboAnalyzer

Rajeevlochana G. Chittawadigi, Subir Kumar Saha, Takafumi Matsumaru

9th Asian Conference on Multibody Dynamics (ACMD 2018), [Xi'an, China] (August 19-23, 2018), peer-reviewed, August 2018

Publication type: research article (international conference proceedings)

Abstract: In this paper, an integration of the Leap Motion controller with the RoboAnalyzer software is proposed. Leap Motion is an inexpensive sensor which has three infrared (IR) emitters and two IR cameras and can track the 10 fingers of two human hands. The device sends data to the computer connected to it through its controller software, which can be accessed in a variety of programming languages. In the proposed setup, the position of the index finger of the right hand is tracked in a Visual C# server application, and the coordinates are extracted accurately with respect to a frame attached at the center of the device. The coordinates are then sent to the Virtual Robot Module (client application), in which a coordinate system (marker) is mapped to the input it receives. Based on the movement of the index finger, the marker moves in the Virtual Robot Module (VRM). When the marker moves closer to the end-effector of the robot, the server application attaches the marker to the end-effector. Thereafter, any incremental Cartesian motion of the index finger is mapped to an equivalent Cartesian motion of the robot end-effector. This is achieved by finding the inverse kinematics solution for the new pose of the robot end-effector; of the eight solutions obtained through inverse kinematics, the one closest to the current pose is selected as the appropriate solution for the new pose. The server application then updates the VRM with the new joint angles, and accordingly the robot moves instantly in the software. The setup connected to a laptop running the Leap Motion controller and VRM is shown in Figure 2(a), and the workflow is illustrated in Figure 2(b).

Path Planning of Sidewalks & Street Crossings in Pedestrian Environments Using 2D Map Visual Inference

Ahmed Farid, Takafumi Matsumaru

Vigen Arakelian, Philippe Wenger (eds): "ROMANSY 22 - Robot Design, Dynamics and Control", CISM International Centre for Mechanical Sciences (Courses and Lectures), peer-reviewed, vol.584, pp.247-255, June 2018

Publication type: research article (international conference proceedings); ISSN: 0254-1971

Abstract: This paper describes a path planning framework for processing 2D maps of given pedestrian locations to provide sidewalk paths and street crossing information. The intention is to allow mobile robot platforms to navigate in pedestrian environments without previous knowledge, using only the current location and destination as inputs. Depending on location, current path planning solutions on known 2D maps (e.g., Google Maps and OpenStreetMap) from both research and industry do not always provide explicit information on sidewalk paths and street crossings, which is a common problem in suburban/rural areas. The framework's goal is to provide path planning by means of visual inference on 2D map images and search queries through downloadable map data. The results have shown both successes and challenges in estimating viable city-block paths and street crossings.

Measuring Performance of Aerial Projection of 3D Hologram Object (3DHO)

Asyifa I. Septiana, Jiono Mahfud, Takafumi Matsumaru

2017 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2017), [Macau SAR, China], (December 5-8, 2017), peer-reviewed, pp.2081-2086, December 2017

Publication type: research article (international conference proceedings)

Abstract: The Aerial Projection of 3D Hologram Object (3DHO), which we have proposed, is a hand gesture-based control system with an interactive 3D hologram object floating in mid-air as the hand movement reference. The system mainly consists of a pyramid-shaped reflector which produces the 3D hologram object, a parabolic mirror, and a Leap Motion controller for capturing hand gesture commands. The user can control or interact with the 3DHO by using a finger or a baton-shaped object. This paper focuses on the evaluation of the 3DHO by comparing it to other 3D input devices (a joystick with a slider, a joystick without a slider, and a gamepad) on five different positioning tasks. We also assess comfort, ease of learning, and user satisfaction through a questionnaire survey. From the experiments, we learn that the 3DHO is not well suited to one-dimensional workspace tasks, but performs well in two-dimensional and three-dimensional workspace tasks. From the questionnaire results, we found that the 3DHO is on average comfortable but may cause some fatigue around the arm and shoulder. It is also easy to learn but not yet satisfying enough to use.

Calligraphy-Stroke Learning Support System Using Projector and Motion Sensor

Takafumi Matsumaru, Masashi Narita

Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII) (Fuji Technology Press) (ISSN: 1343-0130 (Print) / 1883-8014 (Online)), peer-reviewed, 21(4), pp.697-708, July 2017

Publication type: research article (academic journal); ISSN: 1343-0130 (Print) / 1883-8014 (Online)

Abstract: This paper presents a newly developed calligraphy-stroke learning support system. The system has the following functions: (a) displaying brushwork, trajectory, and handwriting; (b) recording and playing back an expert's calligraphy stroke; and (c) teaching a learner a calligraphy stroke. The system has the following features, which show our contributions: (1) It is simple and compact, built from a sensor and a projector, so as to be easy to introduce into ordinary educational settings and practical learning situations. (2) A three-dimensional calligraphy stroke is instructed by presenting two-dimensional visual information. (3) A trajectory region is generated as continuous squares calculated using a brush model based on brush position information measured by the sensor. (4) Handwriting is expressed by mapping a handwriting texture image according to the ink concentration and the brush handling state. The results of a trial experiment suggest the effectiveness of the learning support function for letter form and calligraphy stroke.

Calibration and Statistical Techniques for Building an Interactive Screen for Learning of Alphabets by Children

Riby Abraham Boby, Ravi Joshi Prakash, Subir Kumar Saha, Takafumi Matsumaru, Pratyusha Sharma, Siddhartha Jaitly

International Journal of Advanced Robotic Systems (IJARS) (SAGE Publishing) (ISSN: 1729-8814), peer-reviewed, 14(3), pp.1-17, May 2017

Publication type: research article (academic journal); ISSN: 1729-8814

Abstract: This article focuses on the implementation details of a portable interactive device called the Image-projective Desktop Varnamala Trainer. The device uses a projector to produce a virtual display on a flat surface. For enabling interaction, information about a user's hand movement is obtained from a single two-dimensional scanning laser range finder, in contrast with the camera sensor used in many earlier applications. A generalized calibration process to obtain the exact transformation from the projected screen coordinate system to the sensor coordinate system is proposed in this article and implemented for enabling interaction. This permits production of large interactive displays at minimal cost. Additionally, it makes the entire system portable; that is, the display can be produced on any planar surface such as a floor or tabletop. The calibration and its performance have been evaluated by varying screen sizes and the number of points used for calibration. The device was successfully calibrated for different screens. A novel learning-based methodology for predicting a user's behaviour was then realized to improve the system's performance. This was experimentally evaluated, and the overall accuracy of prediction was about 96%. An application was then designed for this set-up to improve children's learning of alphabets through an interactive audiovisual feedback system. It uses a game-based methodology to help students learn in a fun way. Currently, it has a bilingual (Hindi and English) user interface for learning alphabets and elementary mathematics. A user survey was conducted after demonstrating it to school children, and the survey results are very encouraging. Additionally, a study to ascertain the improvement in the children's learning outcomes was done. The results clearly indicate an improvement in the learning outcomes of the children who used the device over those who did not.

ORB-SHOT SLAM: Trajectory Correction by 3D Loop Closing based on Bag of Visual Words (BoVW) Model for RGB-D Visual SLAM

Zheng Chai, Takafumi Matsumaru

Journal of Robotics and Mechatronics (Fuji Technology Press) (ISSN: 0915-3942 (Print) / 1883-8049 (Online)), peer-reviewed, 29(2), pp.365-380, April 2017

Publication type: research article (academic journal); ISSN: 0915-3942 (Print) / 1883-8049 (Online)

Abstract: This paper proposes ORB-SHOT SLAM, or OS-SLAM, a novel method of 3D loop closing that plays an important role in trajectory correction for RGB-D visual SLAM. We obtain point clouds from an RGB-D sensor such as Kinect or Xtion, and use 3D SHOT descriptors to describe ORB corners. We then train an offline 3D vocabulary containing more than six hundred thousand words, using two million 3D descriptors from a large number of images provided by a public TUM dataset. We convert new images to BoVW (Bag of Visual Words) vectors and push these vectors into an incremental database. We query the database for new images to detect corresponding 3D loop candidates, and compute similarity scores between the new image and each corresponding 3D loop candidate. After detecting 2D loop closures with the ORB-SLAM2 system, we accept those loop closures that are also included in the 3D loop candidates, and give them corresponding weights according to the scores stored earlier. In the final graph-based optimization, we create edges with different weights for loop closures and correct the trajectory by solving a nonlinear least-squares optimization problem. We compare our results with several state-of-the-art systems, such as ORB-SLAM2 and RGBD-SLAM, using the TUM public RGB-D dataset, and find that accurate loop closures and suitable weights reduce the error in trajectory estimation more effectively than other systems. The performance of ORB-SHOT SLAM is demonstrated by a 3D reconstruction application.
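The BoVW scoring idea in the abstract can be illustrated with a toy example (not the paper's code): each image becomes a count vector over the visual vocabulary, and loop-closure candidates are ranked by cosine similarity. The word IDs and frame names below are invented for illustration.

```python
# Toy Bag-of-Visual-Words similarity: images as visual-word count
# vectors, candidates ranked by cosine similarity.
import math
from collections import Counter

def bovw_vector(word_ids):
    """Quantized descriptors -> sparse word-count vector."""
    return Counter(word_ids)

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# A new image and two stored frames (word IDs are hypothetical).
new_image = bovw_vector([3, 3, 17, 42, 99])
candidates = {"frame_120": bovw_vector([3, 17, 42, 99]),
              "frame_455": bovw_vector([1, 2, 5, 8])}
best = max(candidates, key=lambda k: cosine(new_image, candidates[k]))
print(best)  # frame_120
```

In the actual system the vectors are typically tf-idf weighted and the database query is inverted-index based rather than a brute-force scan, but the ranking principle is the same.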

Touchless human-mobile robot interaction using a projectable interactive surface

Agarwal, R.; Sharma, P.; Saha, S. K.; Matsumaru, T.

SII 2016 - 2016 IEEE/SICE International Symposium on System Integration, pp.723-728, February 2017

Abstract: This paper showcases the development of a mobile robot integrated with a Projectable Interactive Surface to facilitate its interaction with human users. The system was designed to interact with users of any physical attributes, such as height and arm span, without re-calibration, and in such a way that the human need not come into physical contact with the robot to give it instructions. The system uses a projector to render a virtual display on the ground, allowing large displays to be projected. A Microsoft Kinect integrated into the system performs the dual functions of tracking user movements and mapping the surrounding environment. The gestures of the tracked user are interpreted, and an audiovisual signal is projected by the robot in response.

Interactive aerial projection of 3D hologram object

Mahfud, Jiono; Matsumaru, Takafumi

2016 IEEE International Conference on Robotics and Biomimetics, ROBIO 2016, pp.1930-1935, February 2017

Abstract: In this paper we present an interactive aerial projection of 3D hologram objects using a pyramid hologram and parabolic mirror system (for 3D hologram object reconstruction) and the Leap Motion sensor (as a detector of the user's finger movement). The system not only reconstructs and projects the 3D hologram object in mid-air, but also provides a way to interact with it by moving a finger. There are three main steps: reconstruction of the 3D object, projection of the 3D hologram object in mid-air, and interactive manipulation of the 3D hologram object. The first step is realized by using a pyramid hologram with an LCD display; the second is achieved with the parabolic mirror hologram; and the Leap Motion sensor is used for the last step to detect the user's finger movement. This paper traces the design concept and confirms the system functions of an interactive aerial projection of 3D hologram objects through a prototype demonstration.

Interactive Aerial Projection of 3D Hologram Object

Mahfud Jiono, Takafumi Matsumaru

2016 IEEE International Conference on Robotics and Biomimetics (IEEE-ROBIO 2016), [Qingdao, China], (December 3-7, 2016), peer-reviewed, TuC05.3, p.1930-1935, Dec. 2016

Type: Research paper (international conference proceedings)

Abstract: In this paper we present an interactive aerial projection of 3D hologram objects using a pyramid-hologram and parabolic-mirror system (for 3D hologram object reconstruction) and a Leap Motion sensor (as a detector of the user's finger movement). The system not only reconstructs and projects the 3D hologram object in mid-air but also provides a way to interact with it by moving a finger. There are three main steps: reconstruction of the 3D object, projection of the 3D hologram object in mid-air, and interactive manipulation of the 3D hologram object. The first step is realized using a pyramid hologram with an LCD display, the second is achieved with the parabolic-mirror hologram, and the Leap Motion sensor is used in the last step to detect the user's finger movement. This paper traces the design concept and confirms the system function of an interactive aerial projection of 3D hologram objects by a prototype demonstration.

Touchless Human-Mobile Robot Interaction using a Projectable Interactive Surface

Rishabh Agarwal, Pratyusha Sharma, Subir Kumar Saha, Takafumi Matsumaru

2016 IEEE/SICE International Symposium on System Integration (SII 2016), [Sapporo, Japan], (December 13-15, 2016), peer-reviewed, WeP1E.6, p.723-728, Dec. 2016

Type: Research paper (international conference proceedings)

Abstract: This paper showcases the development of a mobile robot integrated with a Projectable Interactive Surface to facilitate its interaction with human users. The system was designed to interact with users of any physical attributes, such as height and arm span, without re-calibration. It was also designed so that the human need not come into physical contact with the robot to give it instructions. The system uses a projector to render a virtual display on the ground, allowing large displays to be projected. A Microsoft Kinect integrated in the system performs the dual function of tracking the user's movements and mapping the surrounding environment. The gestures of the tracked user are interpreted, and an audio-visual signal is projected by the robot in response.

Near-field touch interface using time-of-flight camera

Zhang, Lixing; Matsumaru, Takafumi

Journal of Robotics and Mechatronics, peer-reviewed, 28(5), p.759-775, Oct. 2016

Type: Research paper (academic journal), ISSN: 0915-3942

Abstract: © 2016, Journal of Robotics and Mechatronics. All rights reserved. The purpose of this study is to realize a near-field touch interface that is compact, flexible, and highly accurate. We applied a 3-dimensional image sensor (time-of-flight camera) to achieve the basic functions of conventional touch interfaces, such as clicking, dragging, and sliding, and we designed a complete projector-sensor system. Unlike conventional touch interfaces, such as those on tablet PCs, the system can sense the 3-dimensional positions of fingertips and 3-dimensional directions of fingers. Moreover, it does not require a real touch screen but instead utilizes a mobile projector for display. Nonetheless, the system is compact, with a working distance as short as around 30 cm. Our methods solve the shadow and reflection problems of the time-of-flight camera and can provide robust detection results. Tests have shown that our approach has a high success rate (98.4%) on touch/hover detection and a small standard error (2.21 mm) on position detection on average across different participants, which is the best performance we have achieved. Some applications, such as a virtual keyboard and virtual joystick, are also realized based on the proposed projector-sensor system.

Feature Tracking and Synchronous Scene Generation with a Single Camera

Zheng Chai, Takafumi Matsumaru

International Journal of Image, Graphics and Signal Processing (IJIGSP), peer-reviewed, 8(6), p.1-12, Jun. 2016

Type: Research paper (academic journal), ISSN: 2074-9074 (Print), 2074-9082 (Online)

Abstract: This paper shows a method of tracking feature points to update the camera pose and generating a synchronous map for an AR (Augmented Reality) system. First we select the ORB (Oriented FAST and Rotated BRIEF) [1] detection algorithm to detect feature points that have depth information to serve as markers, and we use the LK (Lucas-Kanade) optical flow [2] algorithm to track four of them. Then we compute the rotation and translation of the moving camera from the relationship matrix between the 2D image coordinates and 3D world coordinates, and update the camera pose. Lastly we generate the map and draw some AR objects on it. If the feature points are lost, we can compute the same world coordinates as before the loss to recover tracking, using new corresponding 2D/3D feature points and the camera poses at that time. There are three novelties in this study: an improved ORB detection that can obtain depth information, a rapid update of the camera pose, and tracking recovery. Referring to PTAM (Parallel Tracking and Mapping) [3], we also divide the process into two parallel sub-processes: detecting and tracking the feature points (including recovery when necessary) and updating the camera pose is one thread; generating the map and drawing objects is another. This parallel method saves time for the AR system and makes the process work in real time.
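The two-thread split described in the abstract (tracking in one thread, map generation in another) can be sketched as a producer-consumer pair; the per-frame motion values, queue-based hand-off, and map structure below are illustrative placeholders, not the paper's actual data or implementation:

```python
import threading
import queue

def tracking_thread(pose_queue, frames):
    # Producer: per frame, update the camera pose (here simulated as a
    # running translation) and hand it to the mapping thread.
    pose = [0.0, 0.0, 0.0]
    for dx, dy, dz in frames:
        pose = [pose[0] + dx, pose[1] + dy, pose[2] + dz]
        pose_queue.put(tuple(pose))
    pose_queue.put(None)  # sentinel: no more frames

def mapping_thread(pose_queue, world_map):
    # Consumer: extend the map each time a new pose arrives.
    while True:
        pose = pose_queue.get()
        if pose is None:
            break
        world_map.append(pose)

frames = [(0.1, 0.0, 0.0)] * 5  # simulated per-frame camera motion
pose_queue = queue.Queue()
world_map = []
t1 = threading.Thread(target=tracking_thread, args=(pose_queue, frames))
t2 = threading.Thread(target=mapping_thread, args=(pose_queue, world_map))
t1.start(); t2.start()
t1.join(); t2.join()
```

The queue decouples the two rates: the tracker never waits for map drawing, which is the point of the PTAM-style split.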

Contour-based binary image orientation detection by orientation context and roulette distance

Jian Zhou, Takafumi Matsumaru

IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, peer-reviewed, E99-A(2), p.621-633, Feb. 2016

Type: Research paper (academic journal), ISSN: 0916-8508

Abstract: © 2016 The Institute of Electronics, Information and Communication Engineers. This paper proposes a novel technique to detect the orientation of an image relying on its contour, which is noised to varying degrees. Most image orientation detection methods deal with landscape images or images of a single object, in which cases the contours are assumed to be immune to noise. This paper focuses on contours noised after image segmentation. A polar orientation descriptor, Orientation Context, is used as a feature to describe the coarse distribution of the contour points. This descriptor is verified, by theory and experiment, to be independent of translation, isotropic scaling, and rotation. The relative orientation is given by the minimum distance, Roulette Distance, between the descriptor of a template image and that of a test image. The proposed method is capable of detecting direction on the interval from 0 to 359 degrees, which is wider than a previous contour-based method (Distance Phase [1], 0 to 179 degrees). Moreover, the experimental results show that not only does the normal binary image (Noise-0, Accuracy-1: 84.8%) (defined later) achieve more accurate orientation, but the binary image with slight contour noise (Noise-1, Accuracy-1: 73.5%) also obtains more precise orientation compared with Distance Phase (Noise-0, Accuracy-1: 56.3%; Noise-1, Accuracy-1: 27.5%). Although the proposed method (O(op2)) takes more time to detect the orientation than Distance Phase (O(st)), it can be realized, including preprocessing, in a real-time test at a frame rate of 30.
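A minimal sketch of the descriptor idea, under our own simplifying assumption (not necessarily the paper's exact formulation) that Orientation Context is an angular histogram of contour points around their centroid, and Roulette Distance is the minimum distance over cyclic shifts of that histogram, with the best shift giving the relative orientation:

```python
import math

def orientation_context(points, bins=36):
    # Coarse polar histogram of contour points around their centroid.
    # Centroid-relative angles make it translation-invariant, the
    # normalized counts make it scale-invariant, and a rotation of the
    # contour appears as a cyclic shift of the histogram.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    hist = [0.0] * bins
    for x, y in points:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        hist[int(ang / (2 * math.pi) * bins) % bins] += 1
    total = sum(hist)
    return [h / total for h in hist]

def roulette_distance(h1, h2):
    # Minimum L1 distance over all cyclic shifts of h2; the best shift
    # converts to a relative orientation in [0, 360) degrees.
    bins = len(h1)
    best, best_shift = None, 0
    for s in range(bins):
        d = sum(abs(h1[i] - h2[(i + s) % bins]) for i in range(bins))
        if best is None or d < best:
            best, best_shift = d, s
    return best, best_shift * 360 // bins
```

Rotating a point set by 90 degrees shifts its histogram by nine 10-degree bins, so the minimum-distance shift recovers the rotation.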

Projectable interactive surface using microsoft kinect V2: Recovering information from coarse data to detect touch

Sharma, P.; Joshi, R. P.; Boby, R. A.; Saha, S. K.; Matsumaru, T.

2015 IEEE/SICE International Symposium on System Integration, SII 2015, p.795-800, Feb. 2016

Abstract: © 2015 IEEE. An Image-projective Desktop Varnamala Trainer (IDVT) called SAKSHAR has been designed to improve learning by children through an interactive audio-visual feedback system. The device uses a projector to render a virtual display, which permits the production of large interactive displays at minimal cost. The user's hand is recognized with the help of a Microsoft Kinect Version 2. The entire system is portable, i.e., it can be projected on any planar surface. Since the Kinect does not give precise 3D coordinates of points for detecting a touch, a recognition model based purely on the contact of the user's hand with the surface would not yield accurate results. We have instead modeled the touch action by using multiple points along the trajectory of the tracked point of the user's hand while the hand makes contact with the surface. Fitting a curve through these points and analyzing the errors makes the detection of touch accurate.
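The fit-and-analyze-errors idea can be illustrated with a simplified sketch; the straight-line fit, threshold values, and function names here are our assumptions for illustration, not the paper's calibrated model:

```python
def fit_line(ts, zs):
    # Ordinary least squares for z = a*t + b.
    n = len(ts)
    mt = sum(ts) / n
    mz = sum(zs) / n
    sxx = sum((t - mt) ** 2 for t in ts)
    sxz = sum((t - mt) * (z - mz) for t, z in zip(ts, zs))
    a = sxz / sxx
    b = mz - a * mt
    return a, b

def detect_touch(ts, zs, z_touch=5.0, max_residual=3.0):
    # Fit the recent fingertip-height trajectory and analyze the fit
    # error: declare a touch only when the fitted height at the last
    # sample is near the surface AND the samples follow the trend well
    # (large residuals indicate depth-sensor noise or outliers).
    a, b = fit_line(ts, zs)
    residual = max(abs(z - (a * t + b)) for t, z in zip(ts, zs))
    z_now = a * ts[-1] + b
    return z_now <= z_touch and residual <= max_residual
```

Using the fitted value instead of the last raw sample is what lets coarse, noisy depth data still yield a stable touch decision.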

Simulating and displaying of puck motion in virtual air hockey based on projective interface

Dai, Chuankai; Matsumaru, Takafumi

2015 IEEE International Conference on Robotics and Biomimetics, IEEE-ROBIO 2015, p.320-325, Feb. 2016

Abstract: © 2015 IEEE. This paper shows the simulation and display of puck motion in virtual air hockey based on a projective interface for multi-user HRI. The Image-projective Desktop Arm Trainer (IDAT) is an upper-limb rehabilitation system for hand-eye coordination training based on a projective interface. To add entertainment functions and realize multi-user HRI, we developed a virtual air hockey application for IDAT. For this application, the puck motion should be simulated correctly and displayed smoothly on the screen. There are three main problems: the data updating rate, virtual collision calculation, and misrecognition of individuals' hands. Our method addresses them in three parts. In part 1, we use multiple timers with shared memory to deal with the unsynchronized data updating rates. In part 2, an original physics engine is designed to calculate the puck's velocity and detect virtual collisions. In part 3, to deal with the hand misrecognition problem, a history-based hand-owner recognition algorithm is implemented to distinguish users' hands.
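As one concrete piece of such a physics engine, elastic reflection of the puck off the table edges might look like the following sketch (the function name, table dimensions, and units are our illustrative choices, not the paper's code):

```python
def step_puck(x, y, vx, vy, dt, width, height, r):
    # Integrate one time step, then reflect the puck of radius r
    # elastically off the table edges: mirror the overshoot in position
    # and flip the corresponding velocity component.
    x += vx * dt
    y += vy * dt
    if x < r:
        x, vx = 2 * r - x, -vx
    elif x > width - r:
        x, vx = 2 * (width - r) - x, -vx
    if y < r:
        y, vy = 2 * r - y, -vy
    elif y > height - r:
        y, vy = 2 * (height - r) - y, -vy
    return x, y, vx, vy
```

Mirroring the overshoot (rather than just clamping) keeps the simulated speed constant across a bounce, which matters for smooth display at a fixed timer rate.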

Extraction of representative point from hand contour data based on laser range scanner for hand motion estimation

Dai, Chuankai; Matsumaru, Takafumi

2015 IEEE International Conference on Robotics and Biomimetics, IEEE-ROBIO 2015, p.2139-2144, Feb. 2016

Abstract: © 2015 IEEE. This paper shows a novel method to extract a hand Representative Point (RP) based on a 2-dimensional laser range scanner for hand motion estimation in a projection system. The Image-projecting Desktop Arm Trainer (IDAT) is a projection system for hand-eye coordination training, in which a projector displays an exercise screen on the desktop and a laser range scanner detects the trainee's hand motion. To realize multi-user HMI and add entertainment functions to the IDAT system, an air hockey application was developed in which the hand RP requires high precision. To generate the hand RP precisely, our method has two parts, addressing the data error problem and the changeable hand contour problem. In part one, a data modifier is proposed, and a sub-experiment is carried out to establish a modifying function for correcting the sensor's original data. In part two, we propose three RP algorithms and carry out an evaluation experiment to estimate their reliability under different conditions. From the results, we obtain the most reliable algorithm for each situation, with a hand RP error of less than 9.6 mm.

Projectable Interactive Surface Using Microsoft Kinect V2: Recovering Information from Coarse Data to Detect Touch

Pratyusha Sharma, Ravi Prakash Joshi, Riby Abraham Boby, Subir Kumar Saha, Takafumi Matsumaru

2015 IEEE/SICE International Symposium on System Integration (SII 2015), [Meijo University, Nagoya, Japan], (December 11-13, 2015), peer-reviewed, SuD4.3, p.795-800, Dec. 2015

Type: Research paper (international conference proceedings)

Simulating and Displaying of Puck Motion in Virtual Air Hockey based on Projective Interface

Chuankai Dai, Takafumi Matsumaru

IEEE International Conference on Robotics and Biomimetics (IEEE-ROBIO 2015), [Zhuhai, China], (December 6-9, 2015), peer-reviewed, MoM02.6, p.320-325, Dec. 2015

Type: Research paper (international conference proceedings)

Extraction of Representative Point from Hand Contour Data Based on Laser Range Scanner for Hand Motion Estimation

Chuankai Dai, Takafumi Matsumaru

IEEE International Conference on Robotics and Biomimetics (IEEE-ROBIO 2015), [Zhuhai, China], (December 6-9, 2015), peer-reviewed, WeA06.5, p.2139-2144, Dec. 2015

Type: Research paper (international conference proceedings)

Calligraphy-Stroke Learning Support System Using Projection

Masashi Narita, Takafumi Matsumaru

The 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2015), [Kobe International Conference Center, Japan], (Aug. 31-Sep. 4, 2015), peer-reviewed, 2015-November, p.640-645, Aug.-Nov. 2015

Type: Research paper (international conference proceedings)

Abstract: In this paper, a calligraphy learning support system is presented for supporting brushwork learning using a projector. The system was designed to provide three kinds of training according to the learner's ability: copying training, tracing training, and a combination of the two. In order to instruct three-dimensional brushwork, such as the writing speed, pressure, and orientation of the brush, we proposed an instruction method that presents the information only at the brush tip. This method can visualize the brush position and orientation. In addition, a copying experiment was performed using the proposed method, and its efficiency was examined through the experiment.

SAKSHAR An Image-projective Desktop Varnamala Trainer (IDVT) for Interactive Learning of Alphabets

Ravi Prakash Joshi, Riby Abraham Boby, Subir Kumar Saha, Takafumi Matsumaru

Developing Countries Forum - ICRA 2015, [Seattle, Washington, USA], (May 26-30, 2015), peer-reviewed, 2(3), May 2015

Type: Research paper (international conference proceedings)

Real-time gesture recognition with finger naming by RGB camera and IR depth sensor

Phonpatchara Chochai, Thanapat Mekrungroj, Takafumi Matsumaru

2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014), [Bali, Indonesia], (December 5-10, 2014), peer-reviewed, p.931-936, Dec. 2014

Type: Research paper (international conference proceedings)

Abstract: This paper introduces a real-time finger-naming method and illustrates how hand signs can be recognized from hand gestures and the relations between fingertips. Moreover, it describes how to locate and name each finger, using the arm to dynamically adjust and improve stability while the hand is moving. Supported by the study of the relations between fingertips, palms, and arms, the proposed method can recognize hand gestures and translate these signs into numbers according to standard sign language. The approach relies on the depth image and the RGB image to identify hands, arms, and fingertips. Then the relation between each part is used to recognize a finger name regardless of the direction of movement. This paper also describes how to implement the proposed method with the ASUS Xtion as a sensor.

A Walking Training System with Customizable Trajectory Designing

Shiyang Dong, Takafumi Matsumaru

Paladyn, Journal of Behavioral Robotics, peer-reviewed, 5(1), p.35-52, Jun. 2014

Type: Research paper (academic journal), ISSN: 2081-4836 (Online)

Abstract: This paper shows a novel walking training system for foot-eye coordination. To conveniently design customizable trajectories for different users in walking training, we developed a new system that can track and record the actual walking trajectories of a tutor and use these trajectories for walking training by a trainee. We set four items as its human-robot interaction design concept: feedback, synchronization, ingenuity, and adaptability. A foot model is proposed to define the position and direction of a foot. The errors of the detection method used in the system are less than 40 mm in position and 15 deg in direction. On this basis, three parts are structured to achieve the system functions: Trajectory Designer, Trajectory Viewer, and Mobile Walking Trainer. The experimental results confirmed that the system works as intended: the steps recorded in Trajectory Designer could be used successfully as the footmarks projected in Mobile Walking Trainer, and foot-eye coordination training was conducted smoothly.

Human detecting and following mobile robot using a laser range sensor

Cai, Jianzhao; Matsumaru, Takafumi

Journal of Robotics and Mechatronics, peer-reviewed, 26(6), p.718-734, Jan. 2014

Type: Research paper (academic journal), ISSN: 0915-3942

Abstract: © 2014, Fuji Technology Press. All rights reserved. To meet the higher requirements of human-machine interface technology, a robot with human-following capability, a classic but significant problem, is discussed in this paper. We first propose a human detection method that uses only a single laser range scanner to detect the waist of the target person. Second, owing to the limited speed of a robot and the potential risk of obstructions, a new human-following algorithm is proposed. The speed and acceleration of the robot are adaptive to the human walking speed and the distance between the human and the robot. Finally, the performance of the proposed control system is successfully verified through a set of experimental results obtained using a two-wheel mobile robot working in a real environment under different scenarios.
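The abstract does not give the adaptive law in closed form, but a plausible sketch of a speed command that tracks the human's speed plus a distance-error term, with an acceleration limit that grows with the gap, is the following (all gains, names, and defaults are illustrative assumptions, not the paper's tuned values):

```python
def follow_speed(v_prev, v_human, dist, dt,
                 d_des=1.0, kp=0.8, v_max=1.2, a_base=0.3, ka=0.4):
    # Target speed: match the human and close the distance error,
    # clamped to the robot's limited top speed.
    v_target = v_human + kp * (dist - d_des)
    v_target = max(0.0, min(v_max, v_target))
    # Acceleration limit adapts to the gap: far behind -> speed up harder.
    a_max = a_base + ka * max(0.0, dist - d_des)
    dv = v_target - v_prev
    dv = max(-a_max * dt, min(a_max * dt, dv))
    return v_prev + dv
```

The two clamps encode the abstract's two constraints: the commanded speed never exceeds the robot's limit, and the rate of change of speed scales with the human-robot distance.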

Image-projecting desktop arm trainer for hand-eye coordination training

Matsumaru, Takafumi; Liu, Yang; Jiang, Y.; Dai, Chuankai

Journal of Robotics and Mechatronics, peer-reviewed, 26(6), p.704-717, Jan. 2014

Type: Research paper (academic journal), ISSN: 0915-3942

Abstract: © 2014, Fuji Technology Press. All rights reserved. This paper presents a novel arm-training system, known as the Image-projecting Desktop Arm Trainer (IDAT), which is aimed at hand-eye coordination training. The projector displays an exercise image on a desktop in front of a seated patient, and the scanning range finder measures the behavior of the patient as he/she performs the exercise. IDAT is non-invasive and does not constrain the patient. Its efficiency is based on the voluntary movements of the patient, although it offers neither the physical assistance nor the tactile feedback of some conventional systems. Three kinds of training content have been developed: "mole-hitting," "balloon-bursting," and "fish-catching." These games were designed for training hand-eye coordination in different directions. A patient and/or medical professional can set a suitable training level, that is, the training time, speed of movement of the objects, and number of objects to appear at any one time, based on the patient's condition and ability. A questionnaire survey was carried out to evaluate IDAT-3, and the results showed that it was highly acclaimed in terms of user-friendliness, fun, and usefulness.

Human-machine interaction using the projection screen and light spots from multiple laser pointers

Zhou, Jian; Matsumaru, Takafumi

2014 IEEE/SICE International Symposium on System Integration, SII 2014, peer-reviewed, p.16-21, Jan. 2014

Type: Research paper (international conference proceedings)

Abstract: © 2014 IEEE. A multi-user laser pointer system, in which more than one person can use laser pointers concurrently, has promising applications such as group discussion, appliance control by several handicapped persons, and control of large displays by a few users. Conventional alternatives for such applications are the common mouse, gesture control using the Kinect sensor or Leap Motion, and the single laser pointer. The common mouse and a single laser pointer cannot let several users operate simultaneously, and gesture control is limited at far distances. Most research focuses on the single laser pointer; multiple users then face the dilemma that not everyone can point at a target on the screen at any time, from a distance, because the laser pointer is controlled by only one person. This paper proposes a novel way to allow three laser pointers to point at the screen without interfering with each other. First, the foreground with all the dynamic spots is extracted, and the spots are continuously located by grasping their contours; thus various information about the spots, such as coordinates, pixel values, and area, is obtained. Second, a square image containing the whole spot is used as the input of a back-propagation neural network (BPNN) whose output indicates the category to which the laser pointer belongs. An experiment verifies that the system works well under certain light conditions (12-727 lux) when green laser pointers are used.

Image-projective Desktop Arm Trainer IDAT for therapy

Matsumaru, Takafumi; Jiang, Yi; Liu, Yang

Proceedings - IEEE International Workshop on Robot and Human Interactive Communication査読有りThA2T2.3p.801 - 8062013年12月-2013年12月 

DOIlinkScopus

詳細

掲載種別:研究論文(国際会議プロシーディングス)

概要:The Image-projective Desktop Arm Trainer IDAT is designed to improve the effect of upper-limb therapy. This paper focuses on its design concept. The most important structural feature of IDAT is that it is non-invasive and unconstrained: it is set on the desk apart from the seated trainee, with no body contact. It works based on the trainee's voluntary movement, although it has neither the physical assistance nor the tactile feedback of conventional systems. IDAT is developed on the design concept that direct interaction is critical for increasing motivation. Instead of the joystick, handle, robotic arm, or display screen of conventional systems, IDAT uses a screen projected on a desktop and produces the visual reaction at the time and on the spot where the trainee operates, which inspires a vivid, lively, and real feeling in the trainee. © 2013 IEEE.

Robot human-following limited speed control

Cai, Jianzhao; Matsumaru, Takafumi

Proceedings - IEEE International Workshop on Robot and Human Interactive Communication査読有りTuA1.1P.2p.81 - 862013年12月-2013年12月 

DOIlinkScopus

詳細

掲載種別:研究論文(国際会議プロシーディングス)

概要:Human-following is an important part of interaction between robots and people. Generally, the speed of mobile robots is limited and far slower than the natural walking speed of human beings. In order to catch up with the human rapidly, this paper introduces a control method that uses adaptive acceleration of the robot's speed. The robot's speed depends mainly on the human's speed and on the distance between the human and the robot, and its acceleration adapts to that distance. The proposed control is successfully verified through experiments. © 2013 IEEE.
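A minimal sketch of the kind of controller the abstract describes, where the speed command tracks the human's speed with a correction that grows with the distance gap. The controller form, gain, and limits here are assumptions for illustration, not values from the paper.

```python
def following_speed(v_human, distance, d_target=1.0, v_max=1.2, k_gain=0.8):
    """Speed command (m/s) for a human-following robot (illustrative sketch).

    Tracks the human's walking speed plus a correction proportional to
    the distance error, clamped to the robot's speed limit and to zero.
    d_target, v_max, and k_gain are assumed values.
    """
    v_cmd = v_human + k_gain * (distance - d_target)
    return max(0.0, min(v_max, v_cmd))
```

For example, when the robot is far behind a fast walker the command saturates at the robot's limit, and when it is too close the correction slows it down.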

巻頭言 半世紀を生きて,次の半世紀へ

松丸隆文

バイオメカニズム学会誌37(3)p.1512013年08月-

Measuring the performance of laser spot clicking techniques

Widodo, Romy Budhi; Matsumaru, Takafumi

2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013査読有りp.1270 - 12752013年01月-2013年01月 

DOIlinkScopus

詳細

掲載種別:研究論文(国際会議プロシーディングス)

概要:Laser spot clicking technique is the term for a remote interaction technique between human and computer using a laser pointer as a pointing device. This paper focuses on the performance test of two laser spot clicking techniques. An off-the-shelf laser pointer has a toggle switch to generate a laser spot; the presence (ON) or absence (OFF) of this spot and their combinations are the candidate interaction techniques. We conducted an empirical study that compared remote pointing performed using two combinations of ON and OFF of the laser spot, ON-OFF and ON-OFF-ON, with a desktop mouse as the baseline. We present a quantitative performance test based on Fitts' law, using the one-direction tapping test of the ISO/TS 9241-411 procedure, and an assessment of comfort using a questionnaire. We hope these results contribute to interaction techniques that use a laser pointer as a pointing device, especially in selecting the appropriate clicking technique for real applications. Our results suggest that the ON-OFF technique has advantages over the ON-OFF-ON technique in both throughput and comfort. © 2013 IEEE.
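The throughput measure used in such Fitts'-law comparisons can be sketched as below, using the Shannon formulation of the index of difficulty. This is the basic form only; ISO/TS 9241-411 additionally prescribes an effective-width correction, which is omitted here for brevity.

```python
import math

def fitts_throughput(distance, width, movement_times):
    """Fitts'-law throughput (bits/s) for one condition of a
    one-direction tapping test, with the Shannon formulation
    ID = log2(D/W + 1). Effective-width adjustment omitted."""
    index_of_difficulty = math.log2(distance / width + 1.0)
    mean_time = sum(movement_times) / len(movement_times)
    return index_of_difficulty / mean_time
```

For instance, targets of width 1 at distance 7 give ID = 3 bits, so a mean movement time of 1.5 s corresponds to a throughput of 2 bits/s.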

Design and Evaluation of Throw-over Movement Informing a Receiver of Object Landing Distance

Takafumi Matsumaru

Calin Ciufudean and Lino Garcia (ed.): "Advances in Robotics - Modeling, Control and Applications", ISBN 978-1-922227-05-8 (Hardcover) 978-1-461108-44-3 (Paperback), iConcept Press査読有り9p.171 - 1942013年01月-

link

詳細

掲載種別:研究論文(学術雑誌)

Development and Evaluation of Operational Interface Using Touch Screen for Remote Operation of Mobile Robot

Takafumi Matsumaru

Calin Ciufudean and Lino Garcia (ed.): "Advances in Robotics - Modeling, Control and Applications", ISBN 978-1-922227-05-8 (Hardcover) 978-1-461108-44-3 (Paperback), iConcept Press査読有り10p.195 - 2172013年01月-

link

詳細

掲載種別:研究論文(学術雑誌)

Comparison of Displaying with Vocalizing on Preliminary-Announcement of Mobile Robot Upcoming Operation

Takafumi Matsumaru

Calin Ciufudean and Lino Garcia (ed.): "Advances in Robotics - Modeling, Control and Applications", ISBN 978-1-922227-05-8 (Hardcover) / 978-1-461108-44-3 (Paperback), iConcept Press査読有り7p.133 - 1472013年01月-

link

詳細

掲載種別:研究論文(学術雑誌)

Laser spotlight detection and interpretation of its movement behavior in laser pointer interface

Widodo, Romy Budhi; Chen, Weijen; Matsumaru, Takafumi

2012 IEEE/SICE International Symposium on System Integration, SII 2012査読有りMP1-C.4p.780 - 7852012年12月-2012年12月 

DOIlinkScopus

詳細

掲載種別:研究論文(国際会議プロシーディングス)

概要:A laser pointer can be used as an input interface in human-machine interaction. Such use, however, can be problematic; one of the main issues is the lack of reliability in laser spotlight detection. Another problem is how to interpret the user's movement of the spotlight as commands for the application. This paper proposes a method for laser spotlight detection that aims to improve the practicality and reliability of previous approaches. We use the maximum pixel value, obtained from the environment brightness at a specified time, as a multiplier in determining the threshold. For the second problem, we propose a simple interpretation of incidents that allows the user to use the application, with three main events: laser-move, hover, and single-click. There is no need for the user and the program to wait a specified time span in order to interact with each other, and the user can give commands to the application directly after the single-click event. These approaches result in better reliability and easier operation of the application, and open the opportunity to develop systems for rehabilitation, recreation, and input interface devices in the future. © 2012 IEEE.
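The two ideas in the abstract — a detection threshold scaled by the current maximum pixel value, and a three-event interpretation of spot movement — can be sketched as follows. The multiplier and movement tolerance are assumed values, and the event rules are a simplified reconstruction, not the paper's exact logic.

```python
def spot_threshold(max_pixel_value, multiplier=0.8):
    """Detection threshold scaled by the brightest pixel currently in
    view, so detection adapts to ambient brightness
    (multiplier is an assumed value)."""
    return multiplier * max_pixel_value

def classify_event(prev_pos, cur_pos, move_eps=3.0):
    """Interpret spot motion between two frames as one of the three
    events named in the abstract: 'laser-move', 'hover', or
    'single-click' (spot switched off). Simplified sketch."""
    if cur_pos is None:
        return "single-click"        # spot disappeared
    if prev_pos is None:
        return "laser-move"          # spot just appeared
    dy = cur_pos[0] - prev_pos[0]
    dx = cur_pos[1] - prev_pos[1]
    moved = (dx * dx + dy * dy) ** 0.5 > move_eps
    return "laser-move" if moved else "hover"
```

Because the click is keyed to the spot turning off, no fixed dwell period is needed before the next command, matching the abstract's point about not waiting a specified time span.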

Applying infrared radiation image sensor to step-on interface: Touched point detection and tracking

Jiang, Yi; Liu, Yang; Matsumaru, Takafumi

2012 IEEE/SICE International Symposium on System Integration, SII 2012査読有りMP1-B.4p.752 - 7572012年12月-2012年12月 

DOIlinkScopus

詳細

掲載種別:研究論文(国際会議プロシーディングス)

概要:We propose and implement a solution for applying an infrared radiation (IR) image sensor to the step-on interface (SOI). The SOI is a kind of natural human-robot interface that consists of a projector and a laser range scanner (LRS), and it enables interactive touch applications on a desktop or floor. We attempt to introduce an IR image sensor such as the ASUS Xtion into the SOI in place of the LRS. This paper describes the procedure for using the Xtion to detect touched points. We distinguish the user's hand from the background (surface) based on the depth data from the Xtion, and detect a touching action when a finger comes very close to the background. The proposed process involves IR depth image acquisition, finding the hand and its contours by thresholding, recognition of touched areas, and computation of their center positions. This research enables the ASUS Xtion to be applied to the SOI in a simple way; moreover, the system can realize touch interaction on any surface. © 2012 IEEE.
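The core thresholding step — marking a pixel as touched when the finger's depth comes within a small margin of the stored background surface — can be sketched as below. The millimeter threshold is an assumed value, and contour extraction and center computation are omitted.

```python
def touched_points(depth, background_depth, touch_mm=10):
    """Touch detection on an arbitrary surface from a depth image
    (illustrative sketch of the thresholding step).

    A pixel counts as 'touched' when it is above the stored background
    surface by at most touch_mm, i.e. the fingertip is nearly resting
    on the surface. touch_mm is an assumed value."""
    touches = []
    for y, row in enumerate(depth):
        for x, d in enumerate(row):
            gap = background_depth[y][x] - d   # height above the surface
            if 0 < gap <= touch_mm:
                touches.append((y, x))
    return touches
```

Pixels far above the surface (the rest of the hand or arm) are excluded by the same test, which is why a separate hand mask is only needed for contour drawing, not for the touch decision itself.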

Development of image-projective desktop arm trainer, IDAT

Liu, Yang; Jiang, Yi; Matsumaru, Takafumi

2012 IEEE/SICE International Symposium on System Integration, SII 2012査読有りSP2-A.5p.355 - 3602012年12月-2012年12月 

DOIlinkScopus

詳細

掲載種別:研究論文(国際会議プロシーディングス)

概要:Aiming to improve the effect of upper-limb rehabilitation, we design and develop an Image-projective Desktop Arm Trainer (IDAT). Compared with conventional therapy, IDAT provides a more effective and interesting training method. Its goal is to maintain and improve patients' upper-limb function by training their eye-hand coordination. We select the step-on interface (SOI) as the input system, which lets trainees operate IDAT directly with a hand. Trainees can make customized training settings, which provides motivation and a sense of accomplishment and maintains their enthusiasm and interest. IDAT thus offers a human-robot interaction quite different from previous upper-limb rehabilitation robots, which are equipped with a joystick or controller and operated remotely. We proposed this idea in 2007 and have applied the SOI on several mobile robots; now we apply it to IDAT as a new approach to upper-limb rehabilitation. © 2012 IEEE.

Interaction using the projector screen and spot-light from a laser pointer: Handling some fundamentals requirements

Widodo, Romy Budhi; Chen, Weijen; Matsumaru, Takafumi

Proceedings of the SICE Annual Conference査読有りWeA10-04p.1392 - 13972012年11月-2012年11月 

linkScopus

詳細

掲載種別:研究論文(国際会議プロシーディングス)

概要:This paper presents an interaction model between humans and machines using a camera, a projector, and the spotlight from a laser pointer device. The camera was attached on top of the projector, the projector projected a direction screen display on the wall, and the user pointed the laser pointer at the desired location on the display. It is confirmed that this system can handle several distortion conditions of the direction screen display, such as an oblique rectangle, horizontal trapezoid distortion, and vertical trapezoid distortion, as well as several surface illuminances (127, 425, 630, and 1100 lux); the system is designed to be used for static and moving objects. The coordinates obtained from the distorted screen can be used to give commands to a specific machine, robot, or application. © 2012 SICE.

Application of Step-on Interface to Therapy

Takafumi Matsumaru, Shiyang Dong, and Yang Liu

IEEE/RSJ IROS 2012 Workshop on Motivational Aspects of Robotics in Physical Therapy, [Vilamoura, Algarve, Portugal], (October 7-12, 2012)査読有りp.6 pages2012年10月-

link

詳細

掲載種別:研究論文(国際会議プロシーディングス)

概要:This paper describes the application of the step-on interface (SOI) to therapy. The SOI consists of a projector and a sensor such as a range scanner; its special feature is the use of a projected screen as a bidirectional interface through which information is presented from a robot to a user and the user's instructions are delivered to the robot. The human-friendly amusing mobile robot HFAMRO, which carries the SOI on a mobile platform, is a tag-playing robot that can be used for gait training. The image-projective desktop arm trainer IDAT is designed to be set on a desk in front of a seated trainee. This kind of system can be adjusted and customized to each individual by setting parameters, using multimedia channels, or uploading programs, so it can provide motivation and a sense of accomplishment to a trainee and maintain his/her enthusiasm and interest.

リレーエッセイ マイフェイバリット 18 夢をもって実現するために

松丸隆文

機械設計(日刊工業新聞社)56(7)p.142012年06月-

link

詳細

掲載種別:研究論文(その他学術会議資料等)ISSN:0387-1045

Design and Evaluation of Handover Movement Informing Receiver of Weight Load

Takafumi Matsumaru

15th National Conference on Machines and Mechanisms (NaCoMM 2011), [IIT Madras, India]査読有りB-5-52011年11月-

link

詳細

掲載種別:研究論文(国際会議プロシーディングス)

W021003 生物の運動に学ぶロボット技術([W02100](バイオエンジニアリング部門,機械力学・計測制御部門,流体工学部門企画),生物に学ぶ機械工学 -生物の仕組みを機械に生かす-)

年次大会 : Mechanical Engineering Congress, Japan2011p."W021003 - 1"-"W021003-8"2011年09月-2011年09月 

CiNii

詳細

概要:This paper presents an informative-motion study to make a human-coexistence robot useful. First, the usage, design, and marketing of a human-coexistence robot are considered. Next, the informative-motion study that we are tackling to improve the human-coexistence robot's personal affinity is explained, advocating informative kinesics for human-machine systems. As an example of application deployment, a study on continuous movement (usual movement) and preliminary operation (prior operation) is shown.

第31回バイオメカニズム学術講演会 SOBIM 2010 in Hamamatsu

バイオメカニズム学会誌 = Journal of the Society of Biomechanisms35(1)p.81 - 832011年02月-2011年02月 

CiNii

詳細

ISSN:02850885

Design and evaluation of handover movement informing receiver of weight load

Matsumaru, Takafumi

15th National Conference on Machines and Mechanisms, NaCoMM 20112011年01月-2011年01月 

Scopus

詳細

概要:This paper presents study results on a handover movement that informs the receiver of the weight load, as an example of informative motion for a human-synergetic robot. To design and generate the movement depending on the weight load, human movement is measured and analyzed, and four items are selected as the parameters to vary: the distance between the target point and the transfer point (in the front-back direction), the distance between the highest point and the transfer point (in the vertical direction), the elbow rotation angle, and the waist joint angle. A fitted curve of each parameter's variation with the weight load is obtained from the tendency of the subjects' movement data. The movement data for an arbitrary weight load are generated by processing the standard data at 0 kg so that each parameter follows the fitted curve. The questionnaire survey shows that, although it is difficult for a receiver to estimate the exact weight load, he/she can distinguish a heavy load from a light one, so that the package can be received safely and reliably.

ステップ・オン・インタフェース技術を応用したユーザー・ロボット・インタラクション

松丸隆文, 斎藤渉, 伊藤祐一

日本バーチャルリアリティ学会論文誌 (ISSN:1344-011X)査読有り15(3)p.335 - 3452010年09月-

CiNiilink

詳細

掲載種別:研究論文(学術雑誌)ISSN:1344-011X

概要:The friendly amusing mobile (FAM) function, in which a robot and a user can interact through motion, is proposed by applying the mobile-robot step-on interface (SOI), in which the user directs the robot's movement or operation by stepping or pointing on a button showing the desired content on an operation screen projected on the floor. The HFAMRO-2 mobile robot is developed to realize playing tag, and three applications (animal-tail stepping, bomb-fuse stamping, and footprint stepping) are produced as a trial following the design policy.

Friendly Amusing Mobile Function for Human-Robot Interaction

Takafumi Matsumaru

The 19th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 10), [Principe di Piemonte, Viareggio, Italy] (September 13-15, 2010)査読有りp.88 - 932010年09月-

DOIlink

詳細

掲載種別:研究論文(国際会議プロシーディングス)ISSN:1944-9445

概要:This paper introduces a tag-playing robot as a “Friendly Amusing Mobile” (FAM) function to expand the use of the mobile-robot Step-On Interface (SOI) and to promote human-robot interaction through motion, targeting the maintenance, training, and recovery of physical and cognitive functions in the elderly, the physically challenged, the injured, etc. Displaying a previously prepared bitmap (BMP) image makes the display update rate faster and makes it easy to design button placement and other screen parameters using a painting software package. The scope matrix generated from the BMP image simplifies step detection and recognition, and the motion trajectory design editor facilitates the design of robot behavior.

Truly-Tender-Tailed Tag-Playing Robot Through Friendly Amusing Mobile Function

Takafumi Matsumaru, Yasutada Horiuchi, Kosuke Akai and Yuichi Ito

Journal of Robotics and Mechatronics査読有り22(3)p.301 - 3072010年06月-

DOIlink

詳細

掲載種別:研究論文(学術雑誌)ISSN:0915-3942(Print) / 1883-8049(Online)

概要:To expand use of the mobile robot Step-On Interface (SOI), originally targeting maintenance, training, and recovery of human physical and cognitive functions, we introduce a “Truly-Tender-Tailed” (T3, pronounced tee-cube) tag-playing robot as a “Friendly Amusing Mobile” (FAM) function. Displaying a previously prepared bitmap (BMP) image and speeding up display make it easy to design button placement and other screen parameters using a painting software package. The BMP-image scope matrix simplifies step detection and recognition and the motion trajectory design editor facilitates robot behavior design.

The Step-on Interface (SOI) on a Mobile Platform - Rehabilitation of the Physically Challenged -

Takafumi Matsumaru, Yuichi Ito and Wataru Saitou

5th ACM/IEEE International Conference on Human-Robot Interaction (HRI2010), [2-5 Mar 2010, Osaka, Japan]査読有りp.345 - 3462010年03月-

DOIlink

詳細

掲載種別:研究論文(国際会議プロシーディングス)

The Step-on Interface (SOI) on a Mobile Platform - Basic Functions -

Takafumi Matsumaru, Yuichi Ito and Wataru Saitou

5th ACM/IEEE International Conference on Human-Robot Interaction (HRI2010), [2-5 Mar 2010, Osaka, Japan]査読有りp.343 - 3442010年03月-

DOIlink

詳細

掲載種別:研究論文(国際会議プロシーディングス)

Discrimination and Implementation of Emotions on Zoomorphic Robot Movements

Takafumi Matsumaru

SICE Journal of Control, Measurement, and System Integration査読有り2(6)p.365 - 3722009年11月-

DOIlink

詳細

掲載種別:研究論文(学術雑誌)ISSN:1882-4889

概要:This paper discusses the discrimination and implementation of emotions on a zoomorphic robot, aiming at designing emotional robot movements and improving robot friendliness. We consider four emotions: joy, anger, sadness, and fear. Two opposite viewpoints, performer and observer, are taken to acquire emotional movement data for analysis. Performer subjects produce emotional movements, and the movements that are easy to recognize as expressions of the intended emotion are selected from the measured movements by observer subjects. Discrimination of the emotion embedded in a movement is attempted using feature values based on the Laban movement analysis (LMA) and principal component analysis (PCA). Using PCA, the resulting rates of correct discrimination are about 70% for all four emotions. The features of emotional movements are inferred from the coefficients of the discrimination function obtained in the PCA-based discrimination. Emotions are implemented by converting a setup for basic movements according to a design principle based on the movement features. The result of the verification experiment suggests that the four emotional movements divide into two groups: the joy and anger group and the sadness and fear group. The emotional movements of joy or anger are dynamic, large-scale, and frequent, making the intended emotion relatively easy to interpret, while those of sadness or fear are static, small-scale, and slight, making the features of the movements difficult to grasp.

Step-On Interface on Mobile Robot to Operate by Stepping on Projected Button

Takafumi Matsumaru and Kosuke Akai

The Open Automation and Control Systems Journal査読有り2p.85 - 952009年11月-

DOIlink

詳細

掲載種別:研究論文(学術雑誌)ISSN:1874-4443

概要:This paper proposes the step-on interface (SOI) for operating a mobile robot, in which a projector displays a direction screen on the floor and a user specifies a button showing the selected movement by stepping or pointing. The HFAMRO-1 mobile robot has been developed to demonstrate the SOI's potential (HFAMRO: “human-friendly amusing” mobile robot). The SOI of HFAMRO-1 consists of a projector and a range scanner on an omni-directional mobile platform. An operational comparison with other input interfaces confirmed that we can direct the robot's movement using our own feet. We had several students who do not specialize in robotic systems try to operate HFAMRO-1 with their shoes; all of them could specify the buttons to operate the robot satisfactorily, and everyone mastered the SOI immediately.

Discrimination of Emotion from Movement and Addition of Emotion in Movement to Improve Human-Coexistence Robot's Personal Affinity

Takafumi Matsumaru

The 18th IEEE International Symposium on Robot and Human Interactive Communication, (RO-MAN 09), [Toyama International Conference Center, Japan]査読有りp.387 - 3942009年09月-

DOIlink

Handover Movement Informing Receiver of Weight Load as Informative Motion Study for Human-friendly Robot

Takafumi Matsumaru

The 18th IEEE International Symposium on Robot and Human Interactive Communication, (RO-MAN 09), [Toyama International Conference Center, Japan]査読有りp.299 - 3052009年09月-

DOIlink

Informative Motion Study to Improve Human-Coexistence Robot’s Personal Affinity

Takafumi Matsumaru

IEEE RO-MAN 2009 Workshop on Robot Human Synergetics, [Toyama International Conference Center, Japan]2009年09月-

Study on Handover Movement Informing Receiver of Weight Load as Informative Motion of Human-friendly Robot

Takafumi Matsumaru and Shigehisa Suzuki

International Journal of Factory Automation, Robotics and Soft Computing査読有り2009(3)p.11 - 192009年07月-

詳細

ISSN:1828-6984

Dynamic Remodeling of Environmental Map using Range Data for Remote Operation of Mobile Robot

Takafumi Matsuamru, Hiroshi Yamamori and Takumi Fujita

Journal of Robotics and Mechatronics査読有り21(3)p.332 - 3412009年06月-

DOIlink

詳細

掲載種別:研究論文(学術雑誌)ISSN:0915-3942(Print) / 1883-8049(Online)

概要:Studying dynamic remodeling of the environmental map around a remotely operated mobile robot, in which data measured by the robot's range sensor is sent from the robot to the operator, we introduce the Line & Hollow method and the Cell & Hollow method for environmental mapping. Results for three types of environmental situation clarify the features and effectiveness of our approach. In the Line & Hollow method, an isosceles triangle is set based on the range data: the base line is drawn to express the obstacle shape, and the inside is hollowed out to express vacant space. In the Cell & Hollow method, the cell value corresponding to the range data is incremented, and an obstacle is assumed to exist if the cell value exceeds the ascending threshold. The cell value is decremented on the line between the cell indicated by the measured data and the cell at the sensor position, and the obstacle is deleted if the value drops below the descending threshold. We confirmed that the environmental map produced by either method reflects dynamic environmental change.
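One Cell & Hollow update as described in the abstract — increment the cell the range reading indicates, decrement the free cells on the ray back to the sensor, and apply the ascending/descending thresholds — can be sketched as below. The threshold values and the Bresenham ray-tracing are assumptions for illustration.

```python
def bresenham(y0, x0, y1, x1):
    """Integer cells on the line from (y0,x0) to (y1,x1), endpoint excluded."""
    cells = []
    dy, dx = abs(y1 - y0), abs(x1 - x0)
    sy = 1 if y1 > y0 else -1
    sx = 1 if x1 > x0 else -1
    err = dx - dy
    y, x = y0, x0
    while (y, x) != (y1, x1):
        cells.append((y, x))
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

def cell_and_hollow_update(grid, sensor, hit, up_th=3, down_th=0):
    """One Cell & Hollow update (illustrative sketch): decrement the free
    cells on the ray from the sensor to the measured cell, increment the
    measured cell, and report whether its count has reached the ascending
    threshold (obstacle present). up_th and down_th are assumed values."""
    for (y, x) in bresenham(sensor[0], sensor[1], hit[0], hit[1]):
        grid[y][x] = max(down_th, grid[y][x] - 1)   # hollow out free space
    grid[hit[0]][hit[1]] += 1                       # reinforce the obstacle cell
    return grid[hit[0]][hit[1]] >= up_th
```

Because cells decay along every ray that passes through them, an obstacle that disappears from the scene is erased from the map after a few scans, which is how the method tracks dynamic change.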

荷物重量を受け手に伝える手渡し動作の検討—インフォマティブ・モーションの研究事例—

松丸隆文

計測と制御査読有り48(6)p.508 - 5122009年06月-

CiNiilink

詳細

ISSN:04534662

モーション・メディアとインフォマティブ・モーション—モーションを基軸にしたシステム・インテグレーション—

岩城敏, 松丸隆文

計測と制御査読有り48(6)p.443 - 4472009年06月-

CiNiilink

詳細

掲載種別:研究論文(学術雑誌)ISSN:04534662

A Characteristics Measurement of Two-dimensional Range Scanner and its Application

Takafumi Matsumaru

The Open Automation and Control Systems Journal査読有り2p.21 - 302009年05月-

DOIlink

詳細

掲載種別:研究論文(学術雑誌)ISSN:1874-4443

概要:This paper shows the results of a characteristics measurement of a two-dimensional active range scanner, the URG made by Hokuyo Automatic Co., Ltd., which was released in 2004 and is spreading widely as an external sensor for mobile robots. The following items were clarified from the characteristics measurement of the URG-X002S under various conditions. (1) When the object has a glossy or black surface, the error rate (the frequency with which the scanner judges measurement to be impossible and outputs an error code) rises as the oblique angle of the object becomes large and the distance to the object becomes long. (2) When the object has a white or rough surface, the error rate is zero, the margin of error is within dozens of millimeters, and the variation is small, provided the oblique angle is smaller than 60 deg and the distance is shorter than 1 m. (3) The lateral error is negligibly small if the detection distance is shorter than 1 m. The paper also examines applying the range scanner to the Step-On Interface (SOI), in which the scanner detects and measures an operator's stepping. Based on the measured results, we designed the stepping judgment method, the installation position of the scanner, and the placement of the buttons on the direction screen to apply the range scanner to the SOI for operating a mobile robot.

Functions of Mobile-Robot Step-On Interface

Takafumi Matsumaru and Kosuke Akai

Journal of Robotics and Mechatronics査読有り21(2)p.267 - 2762009年04月-

DOIlink

詳細

掲載種別:研究論文(学術雑誌)ISSN:0915-3942(Print) / 1883-8049(Online)

概要:To improve the maneuverability and safety of the HFAMRO-1 mobile robot, we added a step-on interface (SOI) for directing robotic or mechatronic tasks and operations (HFAMRO: “human-friendly amusing” mobile robot). A projector displays a direction screen on a floor or other surface, and an operator specifies a button showing the selected movement by stepping or pointing. We modified the direction screen so that, among the buttons displayed on two lines, the stepped-on buttons directing movement appear only on the lower line. We also shortened the retention time and had the selected movement executed only when the foot was removed from the stepped-on button. The robot has two SOIs and multiple projection screens, and can be controlled from either direction with the same functions. We synchronized the direction and preliminary-announcement screens to inform passersby which way the robot would move. Using range scanner data, the robot distinguishes feet from other objects based on size, and fusion control with autonomous movement to avoid obstacles is implemented.

Mobile Robot with Preliminary-announcement and Indication Function of Upcoming Operation just after the Present

Takafumi Matsumaru

International Journal of Factory Automation, Robotics and Soft Computing査読有り2009(1)p.102 - 1102009年01月-

詳細

ISSN:1828-6984

Experimental Examination in Simulated Interactive Situation between People and Mobile Robot with Preliminary-Announcement and Indication Function of Upcoming Operation

Takafumi Matsumaru

2008 IEEE International Conference on Robotics and Automation (ICRA-08), [Pasadena, CA, USA]査読有りp.3487 - 34932008年05月-

DOIlink

これからの動作の予告表示機能をもつ移動ロボットの対人模擬環境での評価実験

松丸隆文

ヒューマンインタフェース学会論文誌査読有り10(1)p.11 - 202008年02月-

CiNiilink

詳細

掲載種別:研究論文(学術雑誌)ISSN:1344-7262 (print) 2186-8271 (online)

Development of Four Kinds of Mobile Robot with Preliminary-Announcement and Indication Function of Upcoming Operation

Takafumi Matsumaru

Journal of Robotics and Mechatronics,査読有り19(2)p.148 - 1592007年04月-

DOIlink

詳細

掲載種別:研究論文(学術雑誌)ISSN:0915-3942(Print) / 1883-8049(Online)

概要:We propose approaches and equipment for preliminarily announcing and indicating to people the speed and direction of movement of mobile robots moving on a two-dimensional plane. We introduce four approaches, categorized into (1) announcing the state just after the present and (2) indicating operations from the present to some future time continuously. To realize the approaches, we use an omni-directional display (PMR-2), a flat-panel display (PMR-6), a laser pointer (PMR-1), and projection equipment (PMR-5) as the announcement units of the prototype robots. The four prototype robots were exhibited at the 2005 International Robot Exhibition (iREX05), where we had visitors answer questionnaires in a 5-stage evaluation. The projector robot PMR-5 received the highest evaluation score of the four. An examination of differences by gender and age suggested that some people prefer simple information, friendly expressions, and a minimum of information presented at one time.

プロジェクタを用いて次の動作を予告表示する機能をもつ移動ロボットの開発

松丸隆文, 干場祐, 平岩慎司, 宮田康広

日本ロボット学会誌査読有り25(3)p.410 - 4212007年04月-

DOICiNiilink

詳細

掲載種別:研究論文(学術雑誌)ISSN:0289-1824

概要:This paper discusses the mobile robot PMR-5, which has a preliminary-announcement and display function that indicates forthcoming operations to people near the robot by using a projector. The projector is set on the mobile robot, and a 2-dimensional frame is projected on the running surface. In the frame, not only the scheduled course but also the states of operation can be announced clearly as information about movement. We examine the presentation of states of operation such as stopping or going back, including the time information of the scheduled course, on the developed robot. The scheduled course is expressed as arrows, considering intelligibility at sight: an arrow expresses the direction of motion directly, and its length announces the speed of motion. Operation up to 3 seconds ahead is indicated, with three arrows, color-coded for each second, connected and displayed so that they show the change of speed during the 3-second period. A sign for spot revolution and characters for stopping and going back are also displayed. We exhibited the robot, and about 200 visitors completed a questionnaire evaluation. The averages of the 5-stage evaluation were 4.5 points for the direction of motion and 3.9 points for the speed of motion, so we obtained the evaluation that the display is intelligible in general.

Mobile Robot with Preliminary-announcement and Indication Function of Forthcoming Operation using Flat-panel Display

Takafumi Matsumaru

2007 IEEE International Conference on Robotics and Automation (ICRA'07), [Rome, Italy]査読有りp.1774 - 17812007年04月-

DOIlink

詳細

掲載種別:研究論文(国際会議プロシーディングス)

重量物挙上動作におけるValsalva効果による腹圧増加分を考慮した解析モデルの提案

松丸隆文, 福山聡, 佐藤智祐

日本機械学会論文集C編査読有り72(724)p.3863 - 38702006年12月-

CiNiilink

詳細

掲載種別:研究論文(学術雑誌)ISSN:0387-5024

概要:This paper proposes a model to estimate the load on the lumbar vertebrae, considering not only the value presumed from posture but also the increase in abdominal pressure caused by the Valsalva maneuver, extrapolated from the vital capacity. Using the proposed model, the compressive force is estimated to be reduced by about 30% by the effect of the abdominal pressure. From the error of the presumed ground reaction force against the actual measurement, the estimation accuracy of the lumbar vertebra load using the proposed model is thought to be within 10%. Furthermore, two operations with extreme starting postures were compared in terms of the transition of the compressive and shear forces on the lumbar vertebrae. It was shown that the starting posture significantly affects the maximum load on the lumbar vertebrae, suggesting that the optimal starting posture may lie between the two postures.

光線を用いて予定経路を表示する機能をもつ移動ロボットの開発

松丸隆文, 草田享, 岩瀬和也

日本ロボット学会誌 (refereed), 24(8), pp.976-984, November 2006

Publication type: Research paper (academic journal), ISSN: 0289-1824

Abstract: This paper discusses the design and basic characteristics of the mobile robot PMR-1, which has a preliminary-announcement and display function for its forthcoming operation (the direction and speed of motion), announced to the people around the robot by drawing the scheduled course on the running surface using a light-ray. A laser pointer is used as the light source, and its light is reflected by a mirror. The light-ray is projected on the running surface, and the scheduled course is drawn by rotating the reflector around the pan and tilt axes. The preliminary-announcement and display unit of the developed robot can indicate the operation up to 3 seconds ahead, so the robot moves while drawing the scheduled course from the present to 3 seconds later. An experiment on coordination between the preliminary announcement and the movement was carried out, and we confirmed the correspondence of the announced course with the robot trajectory both when the movement path is given beforehand and when the robot is operated in real time by manual input from a joystick. We have thus validated the coordination algorithm between the preliminary announcement and the actual movement.
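The pan/tilt geometry of steering a light spot to a point on the floor, as in the unit described above, can be sketched roughly. This is an illustrative assumption about the mounting geometry (a source at a given height, tilt measured from straight down); the paper's PMR-1 steers the beam with a rotating mirror, whose mechanical rotation is half the beam deflection.

```python
import math

def beam_angles(target_xy, source_height):
    """Pan/tilt beam direction needed to place a light spot at a
    floor point (x, y), for a light source mounted source_height
    above the floor (hypothetical geometry for illustration)."""
    x, y = target_xy
    pan = math.atan2(y, x)               # heading toward the spot
    r = math.hypot(x, y)                 # ground distance to the spot
    tilt = math.atan2(r, source_height)  # tilt from straight down
    return pan, tilt

pan, tilt = beam_angles((1.0, 1.0), 1.0)
```

Sweeping the target point along the scheduled course and replaying these angles fast enough makes the spot trace the announced path.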

Mobile Robot with Preliminary-Announcement Function of Following Motion using Light-ray

Takafumi Matsumaru, Takashi Kusada and Kazuya Iwase

The 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), [Beijing International Convention Center (BICC), Beijing, China] (refereed), pp.1516-1523, October 2006

Mobile Robot with Preliminary-announcement and Display Function of Forthcoming Motion using Projection Equipment

Takafumi Matsumaru

The 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 06), [University of Hertfordshire, Hatfield, United Kingdom] (refereed), pp.443-450, September 2006

Examination on Lifting Motion with Different Start-on Posture, and Study on the Proper Operation using Minimum Jerk Model [in Japanese]

松丸隆文, 福山聡, 島和義, 伊藤友孝

日本機械学会論文集C編 (refereed), 72(720), pp.2554-2561, August 2006

Publication type: Research paper (academic journal), ISSN: 0387-5024

Abstract: The aim of this research is to establish a quantitative evaluation method for lifting tasks and to clarify the safe posture and optimal motion. This paper first examines the operation from three different start-on postures using four criteria: the maximum compressive force and shear force on the L5/S1 joint, the energy efficiency index, and the degree of contribution of each joint. As a result, it was shown that the load not only on the lumbar vertebrae but also on the knee joint should be emphasized, and that pose-C, in which the knee is flexed almost at a right angle and the upper body is raised, is an appropriate start-on posture. Moreover, from the simulation using the minimum jerk model we found a proper operation, but the actual operation extends the lower body earlier than presumed. Therefore, not only criteria on the whole operation or relative criteria but also absolute criteria paying attention to a specific portion, such as the knee joint moment, seem necessary to study the optimal lifting motion.

ヒューマノイド・ロボットの体形・動作の設計に関する一考察

松丸隆文

日本バーチャルリアリティ学会論文誌 (refereed), 11(2), pp.283-292, June 2006

Publication type: Research paper (academic journal), ISSN: 1344-011X

Abstract: This paper discusses a design method for the bodily shape and motion of a humanoid robot that raises not only its emotional but also its informative interpersonal affinity. Concrete knowledge and opinions are classified into movement prediction from configuration and movement prediction from continuous or preliminary motion, and they are discussed with reference to application and usage. Specifically, we consider bodily shapes and motions that make it easier for the surrounding people watching a humanoid robot to predict and understand its capability and performance as well as its following action and intention.

重量物挙上動作における動作姿勢の受容率を用いた評価

松丸隆文, 島和義, 福山聡, 伊藤友孝

計測自動制御学会論文集 (refereed), 42(2), pp.174-182, February 2006

Publication type: Research paper (academic journal), ISSN: 0453-4654

Abstract: The aim of this research is to establish a quantitative evaluation method for lifting tasks and to clarify the safe posture and optimal motion. This paper examines the motion from three kinds of start-on posture using the acceptance rate. Operation-A starts from pose-A, in which the knee is extended to the maximum. Operation-B starts from pose-B, in which the knee is flexed to the maximum and the upper body is raised as much as possible. Operation-C starts from pose-C, in which the knee is flexed almost at a right angle and the upper body is raised. The acceptance rate is the estimated rate of the population who can tolerate the joint moment during the lifting operation, based on the presumed moment and the coefficient of variation of the acceptable marginal moment of each joint. The compressive force on the lumbar vertebrae computed from the L5/S1 joint moment at 85% acceptance rate (about Av.-1SD) did not exceed the standard value of a previous study. We therefore set 85% as the safety-standard acceptance rate, and defined judging-A (over 95%, recommended), judging-B (85-95%, should note), and judging-C (under 85%, should modify). Although the ankle joint in operation-A and the knee joint in operation-B are judged as rank C, every joint in operation-C shows a high acceptance rate, so the validity of operation-C has been clarified quantitatively.
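The acceptance rate described above (the share of the population whose acceptable marginal joint moment exceeds the presumed moment) can be sketched under a normal-distribution assumption with a given coefficient of variation; the function and its parameters are illustrative, not the paper's exact formulation.

```python
import math

def acceptance_rate(presumed_moment, mean_limit, cv):
    """Estimated fraction of the population whose acceptable marginal
    joint moment exceeds the presumed moment, assuming the limit is
    normally distributed with the given mean and coefficient of
    variation (illustrative model)."""
    sd = cv * mean_limit
    z = (presumed_moment - mean_limit) / sd
    # P(limit > presumed) = 1 - Phi(z), via the error function
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# a presumed moment one SD below the mean limit is acceptable to ~84%,
# consistent with the abstract's "85% (about Av.-1SD)" threshold
rate = acceptance_rate(80.0, 100.0, 0.2)  # sd = 20
```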

人間機械系の情報動作学の応用展開

松丸隆文

バイオメカニズム学会誌 (refereed), 29(3), pp.139-145, August 2005

Publication type: Research paper (academic journal), ISSN: 0285-0885

Abstract: To eliminate fear and discomfort toward artificial machine systems, we believe it is not enough to give them a humanoid or animal-like appearance; their shape and motion must also match human common sense and experiential/tacit knowledge, for example in the relation between size and performance/function. Specifically, for mobile robots and humanoid robots, we consider shapes and motions that make the robot's next action and intention easy to predict for the surrounding people watching it. This paper first explains the research topics and procedure in applying the study of informative motion in human-machine systems, then classifies some findings obtained from the material collected so far into (1) motion prediction from configuration and (2) motion prediction from continuous motion and preliminary motion, and discusses them with reference to methods of application.

Mobile Robot with Eyeball Expression as the Preliminary-announcement and Display of the Robot's Following Motion

Takafumi Matsumaru, Kazuya Iwase, Kyohei Akiyama, Takashi Kusada and Tomotaka Ito

Autonomous Robots (refereed), 18(2), pp.231-248, March 2005

Publication type: Research paper (academic journal)

移動ロボットの遠隔操作における手動操作と自律動作の融合制御手法のシミュレーションによる検討

松丸隆文, 萩原潔, 伊藤友孝

計測自動制御学会論文集 (refereed), 41(2), pp.157-166, February 2005

Publication type: Research paper (academic journal), ISSN: 1883-8189 (online), 0453-4654 (print)

Abstract: This paper examines the combined control of manual operation and autonomous motion to improve maneuverability and safety in the teleoperation of a mobile robot. The autonomous motion, which supports the manual operation by processing information from simple range sensors on the mobile robot, is examined using a computer simulation. Three types of autonomous motion are proposed, revolution (RV), following (FL), and slowdown (SD), and the ways to equip them on the system are examined. In revolution, the robot turns autonomously when it approaches an obstacle too closely. In following, the robot translates in parallel, keeping its orientation so as to go along the form of the obstacle. In slowdown, the robot's translation speed is restricted according to the distance to the obstacle and the robot's translation speed. The features of each autonomous motion are clarified: transit time and mileage become shorter with revolution or following, and contact between robot and obstacle is almost entirely avoided with slowdown. When the distance at which an autonomous motion is applied is adjusted depending on the robot's translation speed, excessive addition of autonomous motion against the operator's intention is reduced, so maneuverability can be improved. When revolution or following is combined with slowdown, even though the transit time is prolonged, the number of near misses between the robot and obstacles is reduced, so safety can be improved.
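The slowdown (SD) motion described above, restricting translation speed by the distance to the nearest obstacle, can be sketched with a simple linear ramp; the thresholds `d_stop` and `d_slow` are invented for illustration and are not values from the paper.

```python
def slowdown(v_cmd, distance, d_stop=0.3, d_slow=1.5):
    """Scale a commanded translation speed by the distance to the
    nearest obstacle: full speed beyond d_slow, zero at or inside
    d_stop, linear in between (illustrative sketch of the SD motion;
    thresholds are hypothetical)."""
    if distance <= d_stop:
        return 0.0
    if distance >= d_slow:
        return v_cmd
    return v_cmd * (distance - d_stop) / (d_slow - d_stop)
```

As the abstract notes, the distance at which such a limit kicks in can itself be adjusted with the robot's current speed so the limit is not applied against the operator's intention.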

移動ロボットの遠隔操作における手動操作と自律動作の融合制御手法−状況に適した自律動作の検討−

松丸隆文, 萩原潔, 伊藤友孝

計測自動制御学会論文集 (refereed), 40(9), pp.958-967, September 2004

Publication type: Research paper (academic journal), ISSN: 0453-4654

Abstract: This paper examines the combined control of manual operation and autonomous motion in the teleoperation of a mobile robot. The autonomous motion suitable for the situation in which the robot passes through a passage with bends is examined by computer simulation to improve maneuverability. The situations in which the manually operated robot contacts the sidewall of a passage were investigated experimentally. Contact tends to occur around the entrance and exit of bends: a robot tends to contact the inside near the entrance of a bend and the outside around its exit. The situations in which the operator uses the autonomous motion under the combined control were also investigated experimentally. The operator tends to use autonomous following (FL) near the entrance, and autonomous revolution (RV) is effective for returning the robot to the center of the passage around the exit. From these situation analyses, selective-revolution/following (S-R/F) was developed, in which the situation is estimated from the direction of the operator's manual command and the direction of the obstacle from the robot, and autonomous revolution or autonomous following is then selected and applied according to the estimated situation. The new technique was implemented in the simulation system, and it was confirmed that autonomous revolution is no longer applied against the operator's intention, so maneuverability can be improved.

人間共存型移動ロボットの行動を予告表示する方法と有効性のシミュレーションによる検討

松丸隆文, 工藤新之介, 遠藤久嗣, 伊藤友孝

計測自動制御学会論文集 (refereed), 40(2), pp.189-198, February 2004

Publication type: Research paper (academic journal), ISSN: 0453-4654

Abstract: This paper examines the preliminary-announcement and display function of a human-friendly robot's following action and intention, especially the direction and speed of motion for a mobile robot moving on a 2-dimensional plane. We proposed two types of method: methods indicating the state just after the present moment (lamp and party-blowouts) and a method displaying the course from the present to some future time continuously (beam of light and projector). A simulation system was developed to confirm the effectiveness of the preliminary announcement and display. Effectiveness is evaluated by mobile-robot chasing: the mobile robot moves about at a random speed in a random direction, and the subject moves the operation robot by joystick while looking at the preliminary announcement on the mobile robot. The display method (lamp/blowout/beam) and the announcement timing (0.5-3.0 s before the actual motion of the robot) are thereby evaluated numerically by the position/direction gap. The data are processed not only as averages and standard deviations but also with two-way ANOVA and t-tests. Translation and rotation were examined separately and then together on a 2-dimensional plane.
We found that the method displaying the course from the present to some future time continuously (beam) is easy to understand, but some length of the displayed path is necessary, which implies an appropriate timing depending on conditions. The optimal timing for indicating a state is about 1.0-1.5 s before. If the time difference is too long, the position/direction gap becomes large owing to poor memory and operational mistakes; if it is too short, the operation becomes late owing to reaction delay. Moreover, humans seem to understand information more readily from a transforming object than from a color-changing one, and continuous change is easier to understand than discrete change.

人間・機械・情報系とロボティックバーチャルシステム

松丸隆文

計測と制御 (refereed), 43(2), pp.116-121, February 2004

ISSN: 1883-8170 (online), 0453-4662 (print)

Examination by Software Simulation on Preliminary-Announcement and Display of Mobile Robot's Following Action by Lamp or Blowouts

Takafumi Matsumaru, Hisashi Endo, and Tomotaka Ito

2003 IEEE International Conference on Robotics and Automation (2003 IEEE ICRA), [The Grand Hotel, Taipei, Taiwan] (refereed), pp.362-367, September 2003

Synchronization of Mobile Robot’s Movement and Preliminary-announcement using Omni-directional Display

Takafumi Matsumaru, Kazuya Iwase, Takashi Kusada, Kyouhei Akiyama, Hirotoshi Gomi, and Tomotaka Ito

IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), [Kobe, Japan] (refereed), pp.246-253, July 2003

Publication type: Research paper (international conference proceedings)

Robot-to-Human Communication of Mobile Robot’s Following Motion using Eyeball Expression on Omni-directional Display

Takafumi Matsumaru, Kyouhei Akiyama, Kazuya Iwase, Takashi Kusada, Hirotoshi Gomi, and Tomotaka Ito

IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), [Kobe, Japan] (refereed), pp.790-796, July 2003

Publication type: Research paper (international conference proceedings)

Simulation of Preliminary-Announcement and Display of Mobile Robot's Following Action by Lamp, Party-blowouts, or Beam-light

Takafumi Matsumaru, Shinnosuke Kudo, Takashi Kusada, Kazuya Iwase, Kyouhei Akiyama, and Tomotaka Ito

IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), [Kobe, Japan] (refereed), pp.771-777, July 2003

Publication type: Research paper (international conference proceedings)

Eyeball Expression for Preliminary-Announcement of Mobile Robot’s Following Motion

Takafumi Matsumaru, Kyouhei Akiyama, Kazuya Iwase, Takashi Kusada, Hirotoshi Gomi, and Tomotaka Ito

Proceedings of The 11th International Conference on Advanced Robotics (ICAR 2003), [University of Coimbra, Portugal], pp.797-803, July 2003

Preliminary-Announcement Function of Mobile Robot's Following Motion by using Omni-directional Display

Takafumi Matsumaru, Kazuya Iwase, Takashi Kusada, Kyouhei Akiyama, Hirotoshi Gomi, and Tomotaka Ito

Proceedings of The 11th International Conference on Advanced Robotics (ICAR 2003), [University of Coimbra, Portugal], pp.650-657, July 2003

Incorporation of Autonomous Control Elements in Combination Control of Remote Operation and Autonomous Control

Takafumi Matsumaru, Kiyoshi Hagiwara, and Tomotaka Ito

Proceedings of The 28th Annual Conference of the IEEE Industrial Electronics Society (IEEE-IECON 02) (ISBN 0-7803-7474-6), [Melia Lebreros Hotel, Sevilla, Spain] (refereed), pp.2311-2316, November 2002

Publication type: Research paper (international conference proceedings)

Advanced Autonomous Action Elements in Combination Control of Remote Operation and Autonomous Control

Takafumi Matsumaru, Kiyoshi Hagiwara, and Tomotaka Ito

Proceedings of 11th IEEE International Workshop on Robot and Human Interactive Communication (IEEE ROMAN 2002), [Japanese-German Center Berlin (JDZB), Berlin, Germany] (refereed), pp.29-34, September 2002

Examination on Virtual Environment for Combination Control of Human-Friendly Remote-Operated Mobile Robot

Takafumi Matsumaru and Shin’ichi Ichikawa

Proceedings of 6th Joint International Conference on Advanced Science and Technology (JICAST 2001), [Zhejiang University, Hangzhou, China], pp.177-180, December 2001

Examination on Virtual Environment for Preliminary-Announcement and Display of Human-Friendly Mobile Robot

Takafumi Matsumaru and Kiyoshi Hagiwara

Proceedings of 6th Joint International Conference on Advanced Science and Technology (JICAST 2001), [Zhejiang University, Hangzhou, China], pp.169-172, December 2001

Preliminary-Announcement and Display for Translation and Rotation of Human-Friendly Mobile Robot

Takafumi Matsumaru, Kiyoshi Hagiwara

Proceedings of 10th IEEE International Workshop on Robot and Human Communication (ROMAN 2001), [Bordeaux (ENSEIRB) and Paris (INSTN), France] (refereed), pp.213-218, September 2001

Publication type: Research paper (international conference proceedings)

Method and Effect of Preliminary-Announcement and Display for Translation of Mobile Robot

Takafumi Matsumaru and Kiyoshi Hagiwara

Proceedings of the 10th International Conference on Advanced Robotics (ICAR 2001), [Hotel Mercure Buda, Budapest, Hungary] (refereed), pp.573-578, August 2001

Publication type: Research paper (international conference proceedings)

Combination Control of Remote Operation with Autonomous Behavior in Human-Friendly Mobile Robot

Takafumi Matsumaru and Shin’ichi Ichikawa

Proceedings of the 10th International Conference on Advanced Robotics (ICAR 2001), [Hotel Mercure Buda, Budapest, Hungary] (refereed), pp.567-572, August 2001

Publication type: Research paper (international conference proceedings)

Preliminary Announcement and Display for Human-Friendly Mobile Robot

Takafumi Matsumaru and Yoshinori Terasawa

Preprints of IFAC Workshop on Mobile Robot Technology, [Jejudo, Korea] (refereed), pp.226-231, May 2001

Publication type: Research paper (international conference proceedings)

Dynamic Brief-to-Precise Strategy for Human-Friendly NeuRobot

Takafumi Matsumaru

Proceedings of the 32nd ISR (International Symposium on Robotics), [Seoul, Korea] (refereed), pp.526-531, April 2001

Publication type: Research paper (international conference proceedings)

Action Strategy for Remote Operation of Mobile Robot in Human Coexistence Environment

Takafumi Matsumaru, Tetsuo Kotoku, Atsushi Fujimori, and Kiyoshi Komoriya

2000 IEEE International Conference on Industrial Electronics, Control and Instrumentation (IECON-2000), [Nagoya Congress Center, Nagoya, Japan] (refereed), pp.1-6, October 2000

Publication type: Research paper (international conference proceedings)

Trial Experiment of the Learning by Experience System on Mechatronics using LEGO MindStorms

Takafumi Matsumaru, Chieko Komatsu, and Toshikazu Minoshima

Proceedings of International Conference on Machine Automation (ICMA2000), [Osaka Institute of Technology, Osaka, Japan] (refereed), pp.207-212, September 2000

Publication type: Research paper (international conference proceedings)

通信回線を介したロボットの遠隔操作におけるタスク規範型データ伝送手法

松丸隆文, 川端俊一, 神徳徹雄, 松日楽信人, 小森谷清, 谷江和雄, 高瀬國克

日本ロボット学会誌 (refereed), 17(8), pp.1114-1125, November 1999

Publication type: Research paper (academic journal), ISSN: 0289-1824

Abstract: This paper proposes task-based data exchange for teleoperation systems over a communication network as an efficient method of transmitting data between an operation device and a remote robot. In task-based data exchange, the information that is more important for the contents and conditions of the task the robot performs is given priority in transmission, for example by altering the contents of the transmitted data. We built an experimental system in which a master arm in Tsukuba and a slave arm in Kawasaki are connected through N-ISDN, using standard techniques such as TCP/IP, sockets, and JPEG. A series of experimental tasks, namely a crank operation consisting of grasping and revolution, was carried out effectively with task-based data exchange. The capacity-limited communication network was used effectively, and high real-time maneuverability with bilateral servo control was realized, confirming the effectiveness of task-based data exchange.
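The prioritization idea above, giving the data most relevant to the current task phase precedence on a bandwidth-limited link, can be sketched as a simple priority-ordered frame packer; the item names, sizes, and priorities are hypothetical and not taken from the paper's protocol.

```python
def pack_frame(items, budget_bytes):
    """Fill one transmission frame from (name, size_bytes, priority)
    items, highest priority first, until the byte budget is exhausted
    (illustrative of task-based data exchange, where the data mix is
    changed according to the current task phase)."""
    chosen = []
    used = 0
    for name, size, _prio in sorted(items, key=lambda it: -it[2]):
        if used + size <= budget_bytes:
            chosen.append(name)
            used += size
    return chosen

# during a grasp phase, force feedback might outrank video quality
frame = pack_frame([("video", 800, 1), ("force", 64, 3), ("pose", 48, 2)], 900)
```

Re-running the packer with different priorities per task phase (approach, grasp, revolution) changes which data fills the limited frame.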

Workability Estimation of Remote Operation through Communication Circuit

Takafumi Matsumaru, Shin’ichi Kawabata, Tetsuo Kotoku, Nobuto Matsuhira, Kiyoshi Komoriya, Kazuo Tanie, and Kunikatsu Takase

Proceedings of The 9th International Conference on Advanced Robotics ('99ICAR), [Keidanren Hall, Tokyo, Japan], pp.231-238, October 1999

Remote Collaboration Through Time Delay in Multiple Teleoperation

Kohtaro Ohba, Shun'ichi Kawabata, Nak Young Chong, Kiyoshi Komoriya, Takafumi Matsumaru, Nobuto Matsuhira, Kunikatsu Takase, and Kazuo Tanie

Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'99), [Kyongju, Korea] (refereed), pp.1866-1871, October 1999

Publication type: Research paper (international conference proceedings)

通信回線ISDNを介したロボットの遠隔操作

松丸隆文

日本ロボット学会誌 (refereed), 17(4), pp.481-485, May 1999

Publication type: Research paper (academic journal), ISSN: 1884-7145 (online), 0289-1824 (print)

Task-based data exchange for remote operation system through a communication network

Takafumi Matsumaru, Shun'ichi Kawabata, Tetsuo Kotoku, Nobuto Matsuhira, Kiyoshi Komoriya, Kazuo Tanie, and Kunikatsu Takase

Proceedings of the 1999 IEEE International Conference on Robotics and Automation (ICRA'99), [Detroit, Michigan, USA] (refereed), pp.557-564, May 1999

Publication type: Research paper (international conference proceedings)

人間共存型ロボットシステムにおける技術課題

人間共存型ロボット研究専門委員会(野崎武敏, 山田陽滋, 小笠原司, 菅野重樹, 藤江正克, 松丸隆文)

日本ロボット学会誌 (refereed), 16(3), pp.288-294, April 1998

ISSN: 1884-7145 (online), 0289-1824 (print)

モジュラー・マニピュレータの構成・形状認識と作業遂行可/不可判定の方法に関する検討

松丸隆文, 松日楽信人

日本ロボット学会誌 (refereed), 15(3), pp.408-416, April 1997

Publication type: Research paper (academic journal), ISSN: 0289-1824

Abstract: A design concept for TOMMS, the TOshiba Modular Manipulator System, has already been proposed to realize a modular manipulator system that can be assembled into any desired configuration to adapt to tasks, using as few kinds and types of modules as possible, without special handling such as modification of the control software. To realize the concept, we developed a simple, practical, and reliable method for recognizing the constitution and configuration of the assembled manipulator using electric resistance. Moreover, to realize a system that can offer the manipulator constitution and configuration best suited to the desired task, we developed a workability judgment method considering the degeneracy of the manipulator's degrees of freedom (d.o.f.) and the conditions of the desired task. These methods were applied to the trial system TOMMS-1, and their efficiency and practicality were confirmed.

Design Disquisition on Modular Robot Systems

Takafumi Matsumaru

Journal of Robotics and Mechatronics (refereed), 8(5), pp.408-419, October 1996

Publication type: Research paper (academic journal)

Modular Design Scheme for Robot Manipulator Systems

Takafumi Matsumaru

The 3rd International Symposium on Distributed Autonomous Robotic Systems (DARS'96), [Wakoh, Japan] (refereed), October 1996

Publication type: Research paper (international conference proceedings)

Corresponding-to-Operation-Motion Type Control Method for Remote Master-Slave Manipulator System

Takafumi Matsumaru

Proceedings of the 3rd International Conference on Motion and Vibration Control: MOVIC, [Makuhari, Japan], pp.204-208, September 1996

Recognition of constitution/configuration and workability judgement for the modular manipulator system, TOMMS

Takafumi Matsumaru

The 22nd Annual International Conference of the IEEE Industrial Electronics Society (IECON'96), [Taipei, Taiwan] (refereed), pp.493-500, August 1996

Publication type: Research paper (international conference proceedings)

モジュラー・マニピュレータTOMMSの設計と制御

松丸隆文, 松日楽信人

日本ロボット学会誌 (refereed), 14(3), pp.428-435, April 1996

Publication type: Research paper (academic journal), ISSN: 0289-1824

Abstract: The TOshiba Modular Manipulator System, TOMMS, consists of joint modules, link modules, and a control unit with a joystick. In the trial manufacture, a manipulator with 3 d.o.f. is assembled from three joint modules and optional link modules into any desired configuration and shape, for instance a horizontal type and a vertical type. The assembled manipulator is connected to the control unit, and the position of the manipulator's end tip is controlled with the joystick without special handling. There is only one type of joint module and one type of link module. The joint module has three input ports and two output ports, and the distance between the fore side and back side of the link module is adjustable. The Jacobian matrix is applied in the control software. Control experiments were carried out, and the efficiency of the TOMMS design concept for mechanical hardware and control software was confirmed.
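The Jacobian-based joystick control mentioned above, mapping a commanded tip velocity to joint velocities, can be sketched on a planar 2-link arm; this is a minimal stand-in for the 3-d.o.f. TOMMS controller, and the link lengths and function names are assumptions.

```python
import math

def jacobian_2link(t1, t2, l1=0.3, l2=0.25):
    """Jacobian of a planar 2-link arm's tip position with respect to
    its joint angles (hypothetical link lengths, for illustration)."""
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def joint_rates(t1, t2, vx, vy):
    """Map a joystick tip velocity (vx, vy) to joint velocities via
    the inverse Jacobian (2x2 case, solved in closed form; assumes a
    nonsingular configuration, i.e. t2 away from 0 and pi)."""
    (a, b), (c, d) = jacobian_2link(t1, t2)
    det = a * d - b * c
    return ((d * vx - b * vy) / det, (-c * vx + a * vy) / det)

qd = joint_rates(0.3, 0.5, 0.02, -0.01)
```

Multiplying the Jacobian back onto the returned joint rates reproduces the commanded tip velocity, which is the property the controller relies on.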

押し付け力制御マニピュレータの遠隔操作手法

松丸隆文, 松日楽信人

日本ロボット学会誌 (refereed), 14(2), pp.255-262, March 1996

Publication type: Research paper (academic journal), ISSN: 0289-1824

Abstract: This paper describes a control method for manipulators that work by pressing the end-effector onto a workpiece under constant force (e.g., grinding and cleaning) while positioning the end tip anywhere on the workpiece using an operation device. Based on ergonomics, "the operator coordinate system" is introduced, determined from the operator's line of sight to the workpiece and both eyes. Further, "the corresponding-to-operational-motion type control method" is proposed, in which the direction of motion of the operation device and the resulting direction of motion of the end-effector are made to correspond in the operator coordinate system. Especially for workpieces with a wave shape, "the corresponding-to-objective-shape type control method" is designed, in which the winding line and the valley line of the wave are recognized during the work and the directions of motion of the operation device are made to correspond to these lines. These methods were applied to a remote control system including the joystick and a lightweight manipulator, and their efficiency was confirmed.

Design and Control of the Modular Robot System: TOMMS

Takafumi Matsumaru

Proceedings of the 1995 IEEE International Conference on Robotics and Automation (ICRA 95), [Nagoya, Japan] (refereed), pp.2125-2131, June 1995

Publication type: Research paper (international conference proceedings)

Windshield Cleaning Robot System: WSC

Takafumi Matsumaru, Nobuto Matsuhira, Makoto Jinno

Proceedings of the IEEE/RSJ/GI International Conference on Intelligent Robots and Systems '94 (IROS 94), [Munich, Germany] (refereed), pp.1964-1971, September 1994

Publication type: Research paper (international conference proceedings)

風防ガラス・クリーニング・ロボットの開発

松丸隆文, 松日楽信人

日本ロボット学会誌 (refereed), 12(5), pp.743-750, July 1994

Publication type: Research paper (academic journal), ISSN: 0289-1824

Abstract: This paper describes the development of the Windshield Cleaning Robot System (WSC). The system is intended for Boeing 747 aircraft, commonly called jumbo jets, parked at airports prior to service. The objects to be cleaned are spots on the windshields caused by collisions with dust, insects, and birds during takeoff and landing. The intention of the new system is that one operator performs the whole work in 10 minutes, so the system consists of the manipulator (the arm and the cleaning device), the installation unit, the control unit, and the operation unit. A position and force control method is applied: the target position of the arm tip is modified using the signals from the force sensor and the joystick, so that the pressing force is kept constant and the tip moves so as to follow the shape of the windshields. The various safety features include an interference limit to restrict the area of movement. System experiments were carried out, and the effectiveness of applying a lightweight long-arm manipulator to this work was confirmed.

Books and Other Publications

"Design and Evaluation of Handover Movement Informing Receiver of Weight Load", in S. Bandyopadhyay, G. Saravana Kumar, et al (Eds): "Machines and Mechanisms"

Takafumi Matsumaru

Narosa Publishing House (New Delhi, India), November 2011

ISBN: 978-81-8487-192-0

"Study on Handover Movement Informing Receiver of Weight Load as Informative Motion of Human-friendly Robot", in Salvatore Pennacchio (ed.): "Emerging Technologies, Robotics and Control Systems - Third edition"

Takafumi Matsumaru and Shigehisa Suzuki

INTERNATIONALSAR (Palermo, Italy, EU), June 2009

ISBN: 978-88-901928-8-3

"Mobile Robot with Preliminary-announcement and Indication Function of Upcoming Operation just after the Present", in Salvatore Pennacchio (ed.): "Recent Advances in Control Systems, Robotics and Automation - Third edition, Volume 2"

Takafumi Matsumaru

INTERNATIONALSAR (Palermo, Italy, EU), January 2009

ISBN: 978-88-901928-7-6

"生体機械機能工学(バイオメカニズム学会編 バイオメカニズム・ライブラリー)"

松丸隆文

東京電機大学出版局, October 2008

ISBN: 978-4-501-41750-5

"Chapter 18 - Mobile Robot with Preliminary-Announcement and Indication of Scheduled Route and Occupied Area using Projector", in Aleksandar Lazinica (ed.): "Mobile Robots Motion Planning, New Challenges"

Takafumi Matsumaru

I-Tech Education and Publishing (Vienna, Austria, EU), July 2008

ISBN: 978-3-902613-35-6

"Chapter 4 - Preliminary-Announcement Function of Mobile Robots’ Upcoming Operation", in Xing P. Guô (ed.): "Robotics Research Trends"

Takafumi Matsumaru

Nova Science Publishers (Hauppauge, NY, USA), May 2008

ISBN: 1-60021-997-7

"Lesson10 遠隔操作システム", in "Webラーニングプラザ「事例に学ぶロボティックス」"

松丸隆文, 伊藤友孝

(独)科学技術振興機構 (JST), March 2002

"Granularity and Scaling in Modularity Design for Manipulator Systems", in H.Asama, T.Fukuda, T.Arai, I.Endo (Eds.): "Distributed Autonomous Robotic Systems 2"

Takafumi Matsumaru

Springer-Verlag, November 1996

ISBN-10: 4431701907

Lectures and Oral Presentations

Fingertip pointing interface by hand detection using Short range depth camera

Kazuki Horiuchi, Takafumi Matsumaru

ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems), [Kitakyushu, Japan], 14-16 November 2018 (Waseda-IPS); presented 15 November 2018

International conference, poster presentation; venue: Kitakyushu

Abstract: Computer mice and keyboards are the most widely used pointing and typing devices for working with personal computers. As alternatives to a computer mouse, there are pointing devices that use a depth sensor or camera for pointing detection. However, current implementations are uncomfortable to use because the user must raise the arm(s) to control pointing, at a relatively long distance from the sensing device. To solve these usability problems, we propose a pointing device that shortens the distance over which users move their wrists between the keyboard and the pointing device. For this system, we compared the performance of various depth sensors and eventually chose the Intel RealSense sensor. Additionally, we performed a comparative study of our proposed system's performance against conventional input devices. Although the total time to complete the experimental tasks with our system was longer than with conventional input devices, our proposed system was fastest when switching between pointing and typing (i.e., moving the hand from mouse to keyboard).

On the Applicability of Mobile Robot Traversal in Pedestrian Settings Without Utilizing Pre-prepared Maps

Ahmed Farid, Takafumi Matsumaru

ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems), [Kitakyushu, Japan], 14-16 November 2018 (Waseda-IPS); presented 15 November 2018

International conference, oral presentation (general); venue: Kitakyushu

Abstract: This paper discusses the prospect of mobile robots' navigational ability in pedestrian settings (e.g., sidewalks and street crossings) without using feature-rich maps provided beforehand (e.g., from SLAM-based algorithms). The main motivation is to mimic the human way of interpreting 2D maps (e.g., the widely available Google Maps), which would negate the need for pre-mapping a given location. The paper first summarizes previous literature on robotic navigation in pedestrian settings, leading up to the outcomes of our own research. We aim to present results in path planning and real-world scene interpretation, and finally address remaining problems and future prospects.

Intuitive Control of Virtual Robots using Leap Motion Sensor

Rajeevlochana Chittawadigi, Subir Kumar Saha, Takafumi Matsumaru

ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems), [Kitakyushu, Japan], 14-16 November 2018 (Waseda-IPS); presented 15 November 2018

International conference, oral presentation (general); venue: Kitakyushu

Abstract: Serial robots used in industry can be controlled by various means, such as joint and Cartesian jogging with dedicated teach pendants, or offline and online programming in a software environment. They can also be controlled by master-slave manipulation, where an exoskeleton acts as the master and the robot as the slave, mimicking the motion of the exoskeleton worn by a user; however, the exoskeleton must be worn by the user, and excessive usage may cause fatigue. A recently developed sensor named Leap Motion can detect the motion of a user's hands with sub-millimeter accuracy. In the proposed work, a Leap Motion sensor is used to track one of the user's hands, and the incremental motion of the tip of the index finger is used as the Cartesian increment of the end-effector of a virtual robot in the RoboAnalyzer software. Work is currently underway to detect the orientation of two or three fingers and accordingly control the orientation of the robot's end-effector. The proposed application of Leap Motion relieves the user of fatigue yet has good accuracy, and hence can serve as an intuitive control method.

Control Mobile Robot using Single 2d-Camera with New Proposed Camera Calibration Method

Haitham Al Jabri, Takafumi Matsumaru

ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018), Waseda-IPS(Waseda-IPS)2018年11月15日

詳細

国際会議口頭発表(一般)開催地:北九州

概要: This paper presents a summary of mobile robot control using 2D vision feedback with a newly proposed camera calibration method. The main goal is to show that 2D vision feedback can be analyzed and used as the main pose feedback of a mobile robot. We use a mobile robot with two active omni-wheels and a single 2D webcam in our experiments. Our main approach is to tackle the present limitations of using feature points from a single 2D camera for pose estimation, such as non-static environments and data stability. The results discuss these issues and point out the strengths and weaknesses of the techniques used. First, we use ORB (Oriented FAST and Rotated BRIEF) feature-point detection and a BF (Brute-Force) matcher to detect and match points across frames. Second, we use FAST (Features from Accelerated Segment Test) corners and LK (Lucas–Kanade) optical flow to detect corners and track their flow across frames. The points and corners are then used for pose estimation through an optimization process with (a) Zhang's calibration method using a chessboard pattern and (b) our proposed method using reinforcement learning in offline mode.
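
The brute-force matching step named in this abstract (as used with ORB's binary descriptors) can be illustrated with a toy Hamming-distance matcher. The 8-bit descriptors below are stand-ins for real 256-bit ORB descriptors, and this is not the authors' implementation:

```python
# BF matching of binary descriptors: pair each descriptor in one frame with
# the smallest-Hamming-distance descriptor in the next frame.

def hamming(a, b):
    """Hamming distance between two equal-length bit patterns (as ints)."""
    return bin(a ^ b).count("1")

def bf_match(desc_a, desc_b):
    """For each descriptor in desc_a, return (index_in_b, distance)."""
    matches = []
    for da in desc_a:
        best = min(range(len(desc_b)), key=lambda j: hamming(da, desc_b[j]))
        matches.append((best, hamming(da, desc_b[best])))
    return matches

frame1 = [0b10110010, 0b00001111]
frame2 = [0b00001011, 0b10110011]  # slightly perturbed copies, reordered
print(bf_match(frame1, frame2))    # → [(1, 1), (0, 1)]
```

In practice a ratio test or cross-check is usually added on top of this nearest-neighbor step to reject ambiguous matches before pose estimation.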

Image Segmentation and Brand Recognition in the Robot Picking Task

Chen Zhu, Takafumi Matsumaru

ISIPS 2018 (12th International Collaboration Symposium on Information, Production and Systems) [Kitakyushu, Japan], (14-16 November, 2018), Waseda-IPS(Waseda-IPS)2018年11月15日

詳細

国際会議口頭発表(一般)開催地:北九州

概要: Computer-vision-guided picking systems have been developed for decades; however, the picking of multiple randomly ordered objects is still not handled well by existing software. In this research, drinking bottles of 6 brands, placed at random, are to be picked up by a 6-degree-of-freedom robot. The bottles need to be classified by brand before being picked from the container. In this article, Mask R-CNN, a deep-learning-based image segmentation network, is used to process the image taken by a normal camera, and Inception v3 is used for the brand recognition task. Mask R-CNN is trained on the COCO dataset to detect the bottles and generate a mask for each. For the brand recognition task, 150-200 images are first taken or found for each brand and augmented to 1,000 images per brand. As a result, the segmented images can be labeled with the brand name with at least 80% accuracy in the experimental environment.
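
One hedged sketch of the glue between the two stages described above — converting a segmentation mask into a crop that is handed to the brand classifier — might look like this. The 0/1 mask and nested-list "image" are illustrative stand-ins for real tensors, not the paper's code:

```python
# After a Mask-R-CNN-style detector yields a binary mask per bottle, crop the
# masked region out of the image and pass the patch to the brand classifier.

def mask_to_bbox(mask):
    """Bounding box (top, left, bottom, right) of the 1-pixels in a mask."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    return rows[0], cols[0], rows[-1], cols[-1]

def crop(image, bbox):
    """Extract the rectangular patch covered by the bounding box."""
    t, l, b, r = bbox
    return [row[l:r + 1] for row in image[t:b + 1]]

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
image = [[10 * i + j for j in range(4)] for i in range(4)]
patch = crop(image, mask_to_bbox(mask))
print(patch)  # → [[11, 12], [21, 22]]
```

In a real pipeline the patch would then be resized to the classifier's input resolution (299×299 for Inception v3) before inference.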

3次元空中ホログラフィック画像インターフェースを用いたリアルタイム遠隔投影

堀内一希, セプティアナ アシファイ マンダ, 松丸隆文

日本機械学会 ロボティクス・メカトロニクス講演会2018 (ROBOMECH 2018 in Kitakyushu), [北九州国際コンベンションゾーン] (2018.06.02-06.05), 2P2-H17 (4pages), (2018.06.05).(日本機械学会ロボティクスメカトロニクス部門)2018年06月05日

詳細

国内会議ポスター発表開催地:北九州

概要: The three-dimensional aerial holographic image interface (3DAHII) is a system for the aerial projection of varying 3D objects. The projected objects can be viewed from multiple directions without special devices such as eyeglasses, gas or vapor for projection, rotating parts, and so on. In this research, we propose a system that captures video of a human body from different perspectives in real time and projects the result using 3DAHII. Because capture and projection happen in real time, users can interact and communicate directly. To realize this goal, we used a smaller prototype system in which four cameras capture video of a physical object inside a frame, and the result is remotely projected in real time. The smaller prototype was developed to evaluate and confirm the proposed system's operation.

手検出深度カメラを用いた近距離指先ポインティング

堀内一希, 松丸隆文

日本機械学会 ロボティクス・メカトロニクス講演会2018 (ROBOMECH 2018 in Kitakyushu), [北九州国際コンベンションゾーン] (2018.06.02-06.05), 主催:日本機械学会ロボティクスメカトロニクス部門, 1P2-G11 (4pages), (2018.06.04).(日本機械学会ロボティクスメカトロニクス部門)2018年06月04日

詳細

国内会議ポスター発表開催地:北九州

概要: Computer mice and keyboards are the most widely used pointing and typing devices for personal computers. As alternatives to a computer mouse, there are other pointing devices that use a depth sensor or camera for pointing detection. However, current implementations are uncomfortable to use because a user must raise his or her arm(s) to control pointing, and must keep a relatively long distance from the sensing device. To solve those usability problems, we propose a pointing device that narrows the distance over which users must move their wrists between the keyboard and the pointing device itself. For this system, we compared the performance of various depth sensors and eventually chose the Intel RealSense sensor. Additionally, we performed a comparative study of our proposed system's performance against other conventional input devices. Although the total time to complete the experimental tasks with our system was longer than with the conventional input devices, our proposed system was the fastest when switching between pointing and typing (i.e. moving the hand from mouse to keyboard). Key Words: Hand gesture recognition, Motion capture camera, Pointing device

Dynamic Precise Localization of Multi-Mobile Robots in 2D Plane

Haitham Khamis Mohammed Al Jabri, Takafumi Matsumaru

11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], P2E-3:(#070), pp.255-257, 2017年11月15日

詳細

国際会議ポスター発表開催地:Kitakyushu

概要: A Small Mobile Robot (SMR) moves inside a wider Frame Mobile Robot (FMR), and one of them stops at a time to serve as a reference for the other using multiple laser pointers fixed on the FMR. The robots are designed to form a closed system that minimizes the errors accumulated while a robot moves, since the robots' positions are corrected regularly. Several applications call for highly precise and accurate mobile-robot motion. For example, printers and plotters of various sizes are available today, but each can print only on a limited paper size. This limitation could be overcome by a mobile robot that moves precisely. However, can such a robot perform printing over an unlimited area? What about the errors that inevitably accumulate while the mobile robot moves? At what resolution can it print? The system proposed in this paper is a closed system in which different robots collaborate, introducing an efficient method for localizing mobile robots that exploits the straightness of laser beams. The two main strengths of the research are as follows: • precise and accurate mobile-robot coordinates are achieved by minimizing accumulated errors • the robots cooperate in localization while performing different tasks at a given time. The robots' movements are checked discretely by the multiple laser pointers and corrected accordingly, which contributes to accurate and precise coordinates and thus minimizes accumulated errors. In addition, collaboration among robots can enhance the system; for instance, adding more SMRs or even FMRs can divide a large job among the available robots and finish it sooner.

Dynamic Precise Localization of Multi-Mobile Robots in 2D Plane

Haitham Khamis Mohammed Al Jabri, Takafumi Matsumaru

11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], O3C-2:(#020), pp.68-70, 2017年11月15日

詳細

国際会議口頭発表(一般)開催地:Kitakyushu

概要: A Small Mobile Robot (SMR) moves inside a wider Frame Mobile Robot (FMR), and one of them stops at a time to serve as a reference for the other using multiple laser pointers fixed on the FMR. The robots are designed to form a closed system that minimizes the errors accumulated while a robot moves, since the robots' positions are corrected regularly. Several applications call for highly precise and accurate mobile-robot motion. For example, printers and plotters of various sizes are available today, but each can print only on a limited paper size. This limitation could be overcome by a mobile robot that moves precisely. However, can such a robot perform printing over an unlimited area? What about the errors that inevitably accumulate while the mobile robot moves? At what resolution can it print? The system proposed in this paper is a closed system in which different robots collaborate, introducing an efficient method for localizing mobile robots that exploits the straightness of laser beams. The two main strengths of the research are as follows: • precise and accurate mobile-robot coordinates are achieved by minimizing accumulated errors • the robots cooperate in localization while performing different tasks at a given time. The robots' movements are checked discretely by the multiple laser pointers and corrected accordingly, which contributes to accurate and precise coordinates and thus minimizes accumulated errors. In addition, collaboration among robots can enhance the system; for instance, adding more SMRs or even FMRs can divide a large job among the available robots and finish it sooner.

Outdoor Navigation for Mobile Robot Platforms with Path Planning of Sidewalk Motion Using Internet Maps

Ahmed Farid, Takafumi Matsumaru

11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], P2E-4:(#085), pp.298-301, (2017.11.15.Wed).2017年11月15日

詳細

国際会議ポスター発表開催地:Kitakyushu

概要: This paper describes a path planning system that processes 2D color maps of a given location to provide sidewalk paths, street-crossing landmarks, and navigation instructions for a robot platform or user, using only the current location and the destination as inputs. For the navigation of robots, and especially of disabled people, in outdoor pedestrian environments, path planning that explicitly takes sidewalks and street crossings into account is of great importance. Current path planning solutions on known 2D maps (e.g. Google Maps), from both research and industry, do not always provide explicit information on sidewalk paths and street crossings, which is a common problem in suburban/rural areas. Path planner test results are shown for the location of our campus.
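
The kind of sidewalk-aware planning described in this abstract can be illustrated by a breadth-first search over a toy grid in which street cells are traversable only at marked crossings. The 0/1/2 encoding and the BFS itself are assumptions for illustration, not the paper's actual map representation or planner:

```python
# Shortest 4-connected path over sidewalk/crossing cells; street cells block.
from collections import deque

SIDEWALK, STREET, CROSSING = 0, 1, 2

def plan(grid, start, goal):
    """Return the shortest path from start to goal as a list of cells, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != STREET and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [
    [SIDEWALK, STREET, SIDEWALK],
    [SIDEWALK, CROSSING, SIDEWALK],
    [SIDEWALK, STREET, SIDEWALK],
]
path = plan(grid, (0, 0), (0, 2))
print(path)  # the route crosses the street only at the marked crossing cell
```

The interesting part is the constraint, not the search: because `STREET` cells are excluded, any returned route reaches the far sidewalk only through a `CROSSING` cell, which is exactly the behavior the abstract asks of a pedestrian-setting planner.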

Outdoor Navigation for Mobile Robot Platforms with Path Planning of Sidewalk Motion Using Internet Maps

Ahmed Farid, Takafumi Matsumaru

11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], O3C-3:(#046), pp.175-178, (2017.11.15.Wed).2017年11月15日

詳細

国際会議口頭発表(一般)開催地:Kitakyushu

概要: This paper describes a path planning system that processes 2D color maps of a given location to provide sidewalk paths, street-crossing landmarks, and navigation instructions for a robot platform or user, using only the current location and the destination as inputs. For the navigation of robots, and especially of disabled people, in outdoor pedestrian environments, path planning that explicitly takes sidewalks and street crossings into account is of great importance. Current path planning solutions on known 2D maps (e.g. Google Maps), from both research and industry, do not always provide explicit information on sidewalk paths and street crossings, which is a common problem in suburban/rural areas. Path planner test results are shown for the location of our campus.

A Progressively Adaptive Approach for Tactile Robotic Hand to Identify Object Handling Operations

Duc Anh Than, Takafumi Matsumaru

11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], P2C-2:(#019), pp.65-67, (2017.11.15.Wed).2017年11月15日

詳細

国際会議ポスター発表開催地:Kitakyushu

概要: As humans, we learn the operations that typically belong to an object by touching it, and from sensory feedback we form particular object-handling skills, such as which hand pose to grasp the object with, how much force to use, and which hand movement to make. During human development in babyhood, we all start learning how to handle objects with our hands through repetitive interactions, from which we learn our own skills for what we can do with an object and how we handle it. By equipping a touch-oriented robot hand with tactile sensors and using the collected sensory feedback data, our research aims to form the underlying link between a particular object and the specific operations acted on it that help accomplish a task ordered by a human. In addition, a classification of those operations on the object is presented, from which the optimal ones that best suit the human task requirements are determined. Specifically, in this paper we propose a machine-learning-based approach, combined with an evolutionary method, to progressively build up hand-based object cognitive intelligence. Overall, the proposed scheme exploits and reveals the robot hand's potential touch-based abilities for object interaction and manipulation, apart from the existing visually built-up cognition.

Active Secondary Suspension of a Railway Vehicle for Improving Ride Comfort using LQG Optimal Control Technique

Kaushalendra K Khadanga, Takafumi Matsumaru

11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], O1C-2:(#061), pp.226-229, 2017年11月14日

詳細

国際会議口頭発表(一般)開催地:Kitakyushu

概要: Passenger comfort is paramount in the design of suspension systems for high-speed cars. The main objective of this paper is to reduce the vertical and pitch accelerations of a half-car rail model. A rigid half-car high-speed passenger vehicle with 10 degrees of freedom has been modelled to study the vertical and pitch accelerations. A state-space mathematical approach is used to model the rail input, which takes in track vibrations. An augmented track-and-vehicle model is then designed, together with an active secondary suspension system based on the Linear Quadratic Gaussian (LQG) optimal control method. Vehicle performance measures such as vertical and pitch accelerations, front and rear suspension travel, and control forces have been studied and compared with those of a passive system.
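
The LQR half of the LQG design named in this abstract can be illustrated on a scalar discrete-time system x[k+1] = a·x[k] + b·u[k], rather than the paper's 10-degree-of-freedom half-car model. The values a = b = q = r = 1 are purely illustrative; for them the converged gain is the analytic 1/golden-ratio value:

```python
# Iterate the discrete algebraic Riccati equation to the steady-state LQR gain.

def lqr_gain(a, b, q, r, iters=200):
    """Steady-state gain K for cost sum(q*x^2 + r*u^2) under u = -K*x."""
    p = q
    for _ in range(iters):
        k = a * b * p / (r + b * b * p)      # optimal feedback gain
        p = q + a * a * p - a * b * p * k    # Riccati recursion
    return k

k = lqr_gain(1.0, 1.0, 1.0, 1.0)
print(round(k, 4))  # → 0.618
```

A full LQG controller pairs such a gain with a Kalman-filter state estimate of the (unmeasured) suspension states; the same Riccati recursion, run on the dual system, yields the filter gain.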

Usability Study of Aerial Projection of 3D Hologram Object in 3D Workspace

Septiana Asyifa I., Jiono Mahfud, Kazuki Horiuchi, Takafumi Matsumaru

11th International Collaboration Symposium on Information, Production and Systems (ISIPS 2017), [Kitakyushu, Japan], O2C-1:(#033), pp.115-118, (2017.11.14.Tue).2017年11月14日

詳細

国際会議口頭発表(一般)開催地:Kitakyushu

概要: We have proposed a hand-gesture-based control system with an interactive 3D hologram object floating in mid-air as the hand-movement reference, named the Aerial Projection of 3D Hologram Object (3DHO). The system consists of the hologram projector and a sensor to capture hand gestures. A Leap Motion is used to capture hand-gesture commands while the user manipulates the 3DHO, which is produced by a pyramid-shaped reflector and two parabolic mirrors. We evaluated the 3DHO's performance by comparing it with other 3D input devices, such as a joystick with a slider, a joystick without a slider, and a gamepad, on several pointing tasks. Comfort and user satisfaction were also assessed by questionnaire survey. We found that the 3DHO is well suited to tasks in a three-dimensional workspace. From the experiments, participants found the 3DHO easy to learn, but they felt some fatigue in their hands, so the 3DHO is not yet satisfying enough to use. Adding haptic feedback and making a wider 3DHO should improve its performance.

Use of Kinect Sensor for Building an Interactive Device

R. P. Joshi, P. Sharma, R. A. Boby, S. K. Saha, T. Matsumaru

10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)2016年11月10日

詳細

国際会議口頭発表(一般)開催地:Kitakyushu

3D Hologram Object Manipulation

Jiono Mahfud, Takafumi Matsumaru

10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)2016年11月10日

詳細

国際会議ポスター発表開催地:Kitakyushu

3D Hologram Object Manipulation

Jiono Mahfud, Takafumi Matsumaru

10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)2016年11月10日

詳細

国際会議口頭発表(一般)開催地:Kitakyushu

Use of Kinect Sensor for Building an Interactive Device

R. P. Joshi, P. Sharma, R. A. Boby, S. K. Saha, T. Matsumaru

10th International Collaboration Symposium on Information, Production and Systems (ISIPS 2016), [Kitakyushu, Japan], (09-11 November, 2016)2016年11月09日

詳細

国際会議ポスター発表開催地:Kitakyushu

Introduction to Robotics and Mechatronics

松丸隆文

『北九州ゆめみらいワーク2016』 みらい教室(主催: 北九州市,運営: (株)マイナビ)招待有り2016年08月27日

詳細

国内会議口頭発表(招待・特別)開催地:西日本総合展示場新館, 北九州市

Introduction to Robotics and Mechatronics

松丸隆文

『北九州ゆめみらいワーク2016』 みらい教室(主催: 北九州市,運営: (株)マイナビ)招待有り2016年08月26日

詳細

国内会議口頭発表(招待・特別)開催地:西日本総合展示場新館, 北九州市

移動ロボット接近時における動作予告を用いた恐怖感低減に関する検討

廣井富(大阪工大), 前田彰大, 田中佑季, 松丸隆文(早大), 伊藤彰則(東北大)

日本機械学会 ロボティクス・メカトロニクス講演会2016(ROBOMECH 2016 in Yokohama), [パシフィコ横浜] (2016.06.08-06.11), 2P1-11b2 (3pages)2016年06月10日

詳細

国内会議ポスター発表

プロジェクタを用いた書道のための筆遣い学習支援システムの開発(第2報) ―毛筆の位置情報からの軌跡生成方法の提案―

成田昌史, 松丸隆文

日本機械学会 ロボティクス・メカトロニクス講演会2016(ROBOMECH 2016 in Yokohama), [パシフィコ横浜] (2016.06.08-06.11), 2P1-12a3 (2pages)2016年06月10日

詳細

国内会議ポスター発表

Obstacle avoidance based on improved Artificial Potential Field for mobile robot

Sihui Zhou, Takafumi Matsumaru

9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.242-243, 2015年11月16日

詳細

国際会議ポスター発表

Facial Expression Recognition based on Neural Network using Extended Curvature Gabor Filter Bank

Pengcheng Fang, Takafumi Matsumaru

9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.68-69, 2015年11月16日

詳細

国際会議口頭発表(一般)

Real-time hand side discrimination based on hand orientation and wrist point localization sensing by RGB-D sensor

Thanapat Mekrungroj, Takafumi Matsumaru

9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.67, 2015年11月16日

詳細

国際会議口頭発表(一般)

Feature Tracking and Synchronous Scene Generation with a Single Camera

Zheng Chai, Takafumi Matsumaru

9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.38-39, 2015年11月16日

詳細

国際会議口頭発表(一般)

Brushwork Learning Support System Using Projection

Masashi Narita, Takafumi Matsumaru

9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.9, 2015年11月16日

詳細

国際会議口頭発表(一般)

An Economical Version of SAKSHAR-IDVT: Image-projective Desktop Varnamala Trainer

Pratyusha Sharma, Vinoth Venkatesan, Ravi Prakash Joshi, Riby Abraham Boby, Takafumi Matsumaru, Subir Kumar Saha

9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.5-8, 2015年11月16日

詳細

国際会議口頭発表(一般)

Extraction of Representative Point from Hand Contour Based on Laser Range Scanner

Chuankai Dai, Kaito Yano, Takafumi Matsumaru

9th International Collaboration Symposium on Information, Production and Systems (ISIPS 2015), [Kitakyushu, Japan], (16-18 November, 2015), pp.1-4, 2015年11月16日

詳細

国際会議口頭発表(一般)

プロジェクタを用いた書道のための筆遣い学習支援システムの開発(第1報) —システムの提案と構築—

日本機械学会 ロボティクス・メカトロニクス講演会2015(ROBOMECH 2015 in Kyoto), [京都市勧業館「みやこめっせ」]2015年05月19日

詳細

口頭発表(一般)

Screen-Camera Position Calibration and Projected Screen Detection

8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), PS6-10, (2014.11.13)2014年11月13日

詳細

国際会議ポスター発表

Virtual Musical Instruments based on Interactive Multi-Touch system sensing by RGB Camera and IR Sensor

8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), PS6-7, (2014.11.13)2014年11月13日

詳細

国際会議ポスター発表

Development of calligraphy-stroke learning support system using projection

8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), PS5-6, (2014.11.13)2014年11月13日

詳細

国際会議ポスター発表

Human-Machine Interaction using Projection Screen and Multiple Light Spots

8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS9-2, (2014.11.13), PS6-2, (2014.11.13)2014年11月13日

詳細

国際会議口頭発表(一般)

Multi-finger Touch Interface based on ToF camera and webcam

8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS3-3, (2014.11.12), PS5-3, (2014.11.13)2014年11月12日

詳細

国際会議口頭発表(一般)

SAKSHAR : An Image-projective Desktop Varnamala Trainer (IDVT) for Interactive Learning of Alphabets

8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS3-2, (2014.11.12), PS5-2, (2014.11.13)2014年11月12日

詳細

国際会議口頭発表(一般)

Touching Accuracy Improvement and New Application Development for IDAT

8th International Collaboration Symposium on Information, Production and Systems (ICIPS 2014), [Kitakyushu, Japan], (12-13 November, 2014), OS3-1, (2014.11.12), PS5-1, (2014.11.13)2014年11月12日

詳細

国際会議口頭発表(一般)

画像投射式卓上型上肢訓練装置(IDAT)を用いた片麻痺患者上肢の訓練−予備的研究−

第39回日本脳卒中学会総会 (STROKE2014),[大阪国際会議場], (2014.3.13-15)2014年03月14日

詳細

国内会議口頭発表(一般)

Introduction of Bio-Robotics and Human-Mechatronics Laboratory

Takafumi MATSUMARU

Invited Talk(Beijing Institute of Technology)招待有り2014年02月21日

詳細

国際会議口頭発表(招待・特別)開催地:Beijing Institute of Technology

Human Detection and Following Mobile Robot Control System Using Laser Range Sensor

Takafumi MATSUMARU

Joint meeting of Peking University and Waseda University(Peking University)招待有り2014年02月19日

詳細

国際会議口頭発表(招待・特別)開催地:Peking University

パンチルトズームカメラによる投影画面とのサイズ調整と追従制御

第14回 計測自動制御学会 システムインテグレーション部門講演会 (SICE SI2013), [神戸国際会議場], (2013.12.18-20), 1J1-2, pp.703-708.2013年12月18日

詳細

国内会議口頭発表(一般)

Contact/non-contact Interaction System using Camera and Depth Sensors

7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 October, 2013), PS-504, 2013年11月11日

詳細

国際会議ポスター発表

Kinect Sensor Application to Control Mobile Robot by Gesture, Facial movement and Speech

7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 October, 2013), PS-515, 2013年11月11日

詳細

国際会議ポスター発表

Human Detection and Following Mobile Robot Control System Using Range Sensor

7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 October, 2013), OS4-2, PS-511, 2013年11月11日

詳細

国際会議口頭発表(一般)

Relative Position Calibration using Pan-tilt-zoom Camera for Projection Interface

7th IPS International Collaboration Symposium (IPS-ICS 2013), [Kitakyushu, Japan], (11-13 October, 2013), PS-502, 2013年11月11日

詳細

国際会議ポスター発表

Human-robot interaction based on contact/non-contact sensing by camera and depth sensors

International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), P017, p.120, 2013年10月20日

詳細

国際会議ポスター発表

Mobile robot control system based on gesture, speech and face track using RGB-D-S sensor

International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), P016, p.119, 2013年10月20日

詳細

国際会議ポスター発表

Robot Human-following Limited Speed Control

International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), P015, p.118, 2013年10月20日

詳細

国際会議ポスター発表

Automatic Adjustment and Tracking of Screen Projected by Using Pan-tilt-zoom Camera

International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), P014, p.117, 2013年10月20日

詳細

国際会議ポスター発表

Using Laser Pointer for Human-Computer Interaction

International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), P013, p.116, 2013年10月20日

詳細

国際会議ポスター発表

Human-robot interaction on training

International Workshop on Machine Vision for Industrial Innovation (MVII2013), [Kitakyushu, Japan], (20-21 October, 2013), O003, pp.41-46, 2013年10月20日

詳細

国際会議口頭発表(一般)

Plenary talk on Image-projective Desktop Arm Trainer and Touch Interaction Based on IR Image Sensor

Takafumi MATSUMARU

Joint Seminar of Peking University and Waseda University(Peking University)招待有り2013年02月25日

詳細

国際会議口頭発表(招待・特別)開催地:Peking University

Development of Image-Projective Desktop Arm Trainer and Measurement of Trainee’s Performance

6th IPS International Collaboration Symposium (IPS-ICS), [Kitakyushu, Fukuoka, Japan], (14-16 November, 2012), PS-23, 2012年11月15日

詳細

国際会議ポスター発表

Development of Walking Training Robot with Customizable Trajectory Design System

6th IPS International Collaboration Symposium (IPS-ICS), [Kitakyushu, Fukuoka, Japan], (14-16 November, 2012), PS-22, 2012年11月15日

詳細

国際会議ポスター発表

Development of Image-Projective Desktop Arm Trainer and Measurement of Trainee’s Performance

International Workshop on Image & Signal Processing and Retrieval (IWISPR2012), [Kitakyushu]2012年10月18日

詳細

国際会議ポスター発表

Development of Walking Training Robot with Customizable Trajectory Design System

International Workshop on Image & Signal Processing and Retrieval (IWISPR2012), [Kitakyushu]2012年10月18日

詳細

国際会議ポスター発表

Recent Advance on SOI (Step-On Interface) Applications

2012年10月18日

詳細

国際会議口頭発表(一般)

どこでも楽しく上肢機能訓練ー画像投射式卓上型上肢訓練装置 Image-projective Desktop Arm Trainer,IDATの開発ー

第19回産業医科大学リハビリテーション医療研究会 [産業医科大学2号館 2305講義室]招待有り2012年07月20日

詳細

国内会議口頭発表(一般)

バイオ・ロボティクス&ヒューマン・メカトロニクス研究室

旭興産グループ奨学金授与式 [旭興産門司工場]招待有り2012年03月09日

詳細

国内会議口頭発表(一般)

Introduction of Bio-Robotics and Human-Mechatronics Laboratory

Workshop co-hosted by Waseda University and Peking University, [Peking University]2012年02月29日

詳細

国際会議口頭発表(一般)

Prototype of Touch Game Application on Development of Step-On Interface

5th IPS International Collaboration Symposium 2011, [Kitakyushu, Japan]2011年11月10日

詳細

国際会議ポスター発表

Human-Robot Interaction Design on Mobile Robot with Step-On Interface

5th IPS International Collaboration Symposium 2011, [Kitakyushu, Japan]2011年11月10日

詳細

国際会議ポスター発表

Prototype of Touch Game Application on Development of Step-On Interface

International Workshop on Target Recognition and Tracking (IWTRT2011), [Kitakyushu]2011年10月20日

詳細

国際会議ポスター発表

Human-Robot Interaction Design on Mobile Robot with Step-On Interface

International Workshop on Target Recognition and Tracking (IWTRT2011), [Kitakyushu]2011年10月20日

詳細

国際会議ポスター発表

Human-Robot Interaction using Projection Interface

International Workshop on Target Recognition and Tracking, [Kitakyushu, Japan]2011年10月20日

詳細

国際会議口頭発表(一般)

生物の運動に学ぶロボット技術

日本機械学会 2011年度年次大会 [東京工業大学大岡山キャンパス]2011年09月04日

詳細

国内会議口頭発表(一般)

人間共存型ロボットのインフォマティブ・モーションに関する研究(第15報) —到達距離を受け手に伝える物体投げ渡し動作の評価—

日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]2011年05月

詳細

国内会議ポスター発表

人間共存型ロボットのインフォマティブ・モーションに関する研究(第14報) —到達距離を受け手に伝える物体投げ渡し動作の生成—

日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]2011年05月

詳細

国内会議ポスター発表

人間共存型ロボットの操作手法に関する研究(第11報) −レーザー・ポインタによるステップ・オン・インタフェースの操作の検討—

日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]2011年05月

詳細

国内会議ポスター発表

人間共存型ロボットの操作手法に関する研究(第10報) —ステップ・オン・インタフェースを利用したタッチゲーム・アプリケーションの開発—

日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]2011年05月

詳細

国内会議ポスター発表

人間共存型ロボットの遠隔操作に関する研究(第56報) −タッチパネルを用いた操作インタフェースによる二輪移動機構の操作—

日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]2011年05月

詳細

国内会議ポスター発表

人間共存型ロボットの遠隔操作に関する研究(第55報) −ジョイスティックの操作と二輪機構の動作の関係の検討—

日本機械学会 ロボティクス・メカトロニクス講演会2011(ROBOMEC2011)[岡山コンベンションセンター]2011年05月

詳細

国内会議ポスター発表

人間共存型ロボットのインフォマティブ・モーションに関する研究(第13報) —到達距離を受け手に伝える物体投げ渡し動作の作成—

第31回バイオメカニズム学術講演会(SOBIM2010)2010年11月

詳細

国内会議口頭発表(一般)

人間共存型ロボットのインフォマティブ・モーションに関する研究(第12報) —到達距離を受け手に伝える物体投げ渡し動作の解析—

第31回バイオメカニズム学術講演会(SOBIM2010)2010年11月

詳細

国内会議口頭発表(一般)

人間共存型ロボットのインフォマティブ・モーションに関する研究(第11報) —到達距離を受け手に伝える物体投げ渡し動作の実験・評価—

日本機械学会 ロボティクス・メカトロニクス講演会2010(ROBOMEC2010)2010年06月

詳細

国内会議ポスター発表

人間共存型ロボットのインフォマティブ・モーションに関する研究(第10報) —到達距離を受け手に伝える物体投げ渡し動作の計測・解析—

日本機械学会 ロボティクス・メカトロニクス講演会2010(ROBOMEC2010)2010年06月

詳細

国内会議ポスター発表

人間共存型ロボットの操作手法に関する研究(第9報) —ステップ・オン・インタフェースにおける制御機能の検討—

日本機械学会 ロボティクス・メカトロニクス講演会2010(ROBOMEC2010)2010年06月

詳細

国内会議ポスター発表

人間共存型ロボットの遠隔操作に関する研究(第54報) —タッチパネルを用いた操作インタフェースの開発と評価—

日本機械学会 ロボティクス・メカトロニクス講演会2010(ROBOMEC2010)2010年06月

詳細

国内会議ポスター発表

ステップ・オン・インタフェースを搭載した移動ロボットと人とのインタラクションの設計

第59回ヒュ-マンインタフェ-ス学会研究会「インタラクションのデザイン(特集:デザインとア-ト)および一般」2010年03月

詳細

口頭発表(一般)

人間共存型ロボットのインフォマティブ・モーションに関する研究(第9報) —到達距離を受け手に伝える物体投げ渡し動作の特徴—

計測自動制御学会 第10回システムインテグレーション部門講演会(SI2009)2009年12月

詳細

口頭発表(一般)

人間共存型ロボットの操作手法に関する研究(第8報) —ステップ・オン・インタフェースを搭載した移動ロボットの応用検討—

計測自動制御学会 第10回システムインテグレーション部門講演会(SI2009)2009年12月

詳細

口頭発表(一般)

荷物重量を受け手に伝える手渡し動作の生成と評価

第21回バイオメカニズムシンポジウム2009年08月

詳細

口頭発表(一般)

人間共存型ロボットのインフォマティブ・モーションに関する研究(第8報) —到達位置を受け手に伝える物体投げ渡し動作の特徴抽出—

日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)2009年05月

詳細

ポスター発表

人間共存型ロボットのインフォマティブ・モーションに関する研究(第7報) —荷物重量を受け手に伝える手渡し動作の作成—

日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)2009年05月

詳細

ポスター発表

人間共存型ロボットの操作手法に関する研究(第7報) —ステップ・オン・インタフェースを搭載した移動ロボットの利用方法—

日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)2009年05月

詳細

ポスター発表

人間共存型ロボットの操作手法に関する研究(第6報) —ステップ・オン・インタフェースを搭載した移動ロボットの制御機能向上—

日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)2009年05月

詳細

ポスター発表

人間共存型ロボットの動作予告に関する研究(第3報) —音声予告と表示予告の比較実験—

日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)2009年05月

詳細

ポスター発表

人間共存型ロボットの遠隔操作に関する研究(第53報) —タッチパネルを用いた移動ロボットの操作インタフェース—

日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)2009年05月

詳細

ポスター発表

人間共存型ロボットの遠隔操作に関する研究(第52報) —音声認識を用いた移動ロボットの操作インタフェース—

日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)2009年05月

詳細

ポスター発表

人間共存型ロボットの遠隔操作に関する研究(第51報) —レンジスキャナを用いた環境地図の作成—

日本機械学会 ロボティクス・メカトロニクス講演会2009(ROBOMEC2009)2009年05月

詳細

ポスター発表

作品・ソフトウェア・教材・フィールドワーク等

2017国際ロボット展・RT産学共創マッチング補助事業 RT交流プラザ(主催:(一社)日本ロボット工業会/日刊工業新聞社) [東京国際展示場 東1~6ホール], RT09, (2017.11.29(水)~12.02(土) 10h00-17h00).

早稲田大学大学院 松丸隆文研究室

芸術活動2017年11月-2017年12月

詳細

発表場所:東京国際展示場 東1~6ホール

発表内容:(1) 3DAHII (3D Aerial Holographic Image Interface: 三次元空中ホログラフィック画像インタフェース) . [*] Asyifa Imanda SEPTIANA (M2), Duc THAN (M2), Ahmed FARID (M2), 堀内 一希 (M1), 松丸 隆文.

2015国際ロボット展・RT産学共創マッチング補助事業 RT交流プラザ(主催:(一社)日本ロボット工業会/日刊工業新聞社) [東京国際展示場 東ホール], RT-04, (2015.12.02(水)~05(土) 10h00-17h00).

早稲田大学大学院情報生産システム研究科 松丸隆文研究室

芸術活動2015年12月-2015年12月

詳細

発表場所:東京国際展示場 東ホール

発表内容:(1) IDAT-3 (Image-projective Desktop Arm Trainer: 画像投射式卓上型上肢訓練装置) . (2) CSLSS-1 (Calligraphy-Stroke Learning Support System: 書道運筆学習支援システム).

第41回国際福祉機器展H.C.R.2014(主催:(社福)全国社会福祉協議会,(一財)保健福祉広報協会) [東京国際展示場 東ホール], (2014.10.01(水)〜03(金) 10h00-17h00).

芸術活動2014年10月-

2013国際ロボット展・RT産学共創マッチング補助事業 RT交流プラザ(主催:(一社)日本ロボット工業会,共催:日刊工業新聞社) [東京国際展示場], SRT-21, (2013.11.06(水)〜09(土) 10h00-17h00).

芸術活動2013年11月-

北九州学研都市第13回産学連携フェア(主催:北九州学研都市産学連携フェア実行委員会,公益財団法人北九州産業学術推進機構)[北九州学術研究都市](2013.10.23(水)〜25(金) 10h00-17h00)

芸術活動2013年10月-

北九州学術研究都市第12回産学連携フェア(主催:北九州学研都市産学連携フェア実行委員会,公益財団法人北九州産業学術推進機構)[北九州学研都市](2012.10.17(水)〜19(金) 10h00-17h00)

芸術活動2012年10月-

第49回 日本リハビリテーション医学会学術集会(主催:(社)日本リハビリテーション医学会) [福岡国際会議場], (2012.05.31(木)〜06.02(土))

芸術活動2012年05月-

北九州学術研究都市 報道記者見学ツアー(北九州産業学術推進機構FAIS) [北九州学術研究都市会議場]

芸術活動2012年02月-

2011国際ロボット展 RT交流プラザ(主催:(社)日本ロボット工業会,共催:日刊工業新聞社) [東京国際展示場], SRT-5, (2011.11.09(水)〜11.12(土) 10h00-17h00)

芸術活動2011年11月-

北九州学研都市第11回産学連携フェア(主催:北九州学研都市産学連携フェア実行委員会,北九州産業学術推進機構)[北九州学研都市](2011.10.19(水)〜21(金) 10h00-17h00)

芸術活動2011年10月-

第51回西日本総合機械展 ロボット産業マッチングフェア北九州(主催:(財)西日本産業貿易コンベンション協会) [西日本総合展示場 新館], (2011.06.23-06.25)

芸術活動2011年06月-

2009国際ロボット展 RT交流プラザ(主催:(社)日本ロボット工業会,日刊工業新聞社) [東京国際展示場], (2009.11.25(水)〜11.28(土))

芸術活動2009年11月-

第4回モーションメディアコンテンツコンテスト (主催: (社)計測自動制御学会SI部門モーションメディア部会) [東京 家の光会館], (2008.07.04(金)).

芸術活動2008年07月-

2007国際ロボット展 RT交流プラザ(主催:(社)日本ロボット工業会,日刊工業新聞社) [東京国際展示場], (2007.11.28(水)〜12.01(土))

芸術活動2007年11月-

イノベーション・ジャパン2007−大学見本市(展示会) (主催:(独)科学技術振興機構+(独)新エネルギー・産業技術総合開発機構) [東京国際フォーラム], (2007.09.12(水)〜14(金))

芸術活動2007年09月-

高校生のためのハイテクイベント (主催: (社)日本機械学会東海支部) [産業技術記念館 大ホール], (2007.08.03(金)).

芸術活動2007年08月-

第2回モーションメディアコンテンツコンテスト (主催: (社)計測自動制御学会SI部門モーションメディア調査研究会) [電気通信大学 総合研究棟], (2006.06.28(水)).

芸術活動2006年06月-

2005国際ロボット展 RT交流プラザ (主催: (社)日本ロボット工業会, 共催:日刊工業新聞社) [東京国際展示場東1,2ホール], (2005.11.30(水)〜12.03(土)).

芸術活動2005年11月-

キャラロボ2005 (主催: (社)大阪国際見本市委員会) [インテックス大阪], (2005.07.16(土)〜07.17(日)).

芸術活動2005年07月-

第1回モーションメディアコンテンツコンテスト(主催: (社)計測自動制御学会SI部門モーションメディア調査研究会) [NTT武蔵野研究開発センター], (2005.06.22)

芸術活動2005年06月-

国際新技術フェア2002 RT交流プラザ (主催: (社)日本ロボット工業会&日刊工業新聞社) [東京ビッグサイト東4ホール], (2002.09.25(水)〜09.27(金)).

芸術活動2002年09月-

ロボット・ステーション2002(主催: (財)日本玩具文化財団&NHKサービスセンター) [松坂屋静岡店], (2002.08.01(木)〜08.06(火)).

芸術活動2002年08月-

外部研究資金

科学研究費採択状況

研究種別:

ステップ・オン・インタフェースの高機能化とさまざまな形態での実現

2011年-2014年

配分額:¥5200000

研究種別:

ALS患者のための視線検出機能付HMDの開発と双方向コミュニケーションの実現

配分額:¥15000000

研究種別:

遠隔体験鑑賞システムを目的とした人間共存型移動ロボットの遠隔と自律の融合制御手法

配分額:¥3600000

研究種別:

ヒューマン・ロボット・インタラクションのための2D映像-3D動作マッピング手法

2017年-2020年

配分額:¥4810000

研究資金の受入れ状況

実施形態:受託研究

ステップ・オン・インタフェースを用いたロボットに関する先行技術調査(科学技術振興機構)2007年-

実施形態:受託研究

ALS患者のための視線検出機能付HMDの開発と高度双方向コミュニケーションの実現(静岡大学工学部)2005年-2006年

実施形態:受託研究

無軌道走行移動体に関する研究(矢崎化工(株))2004年-

実施形態:共同研究

MERG(マルチメディア教育に関する研究グループ)(東京農工大学工学部)2003年-

実施形態:共同研究

温室メロン遠隔制御に関する研究(沢田工業(株),静岡県農業試験場)2002年-

実施形態:共同研究

無線式台車型ロボットに関する研究(静岡県静岡工業技術センター,(有)サンテクニカル)2002年-

実施形態:共同研究

インテリジェント移動ロボットの誘導制御に関する研究(静岡大学工学部機械工学科)2000年-

実施形態:共同研究

通信ネットワーク接続したロボットの遠隔操作に関する研究(通商産業省工業技術院機械技術研究所)1999年-2000年

学内研究制度

特定課題研究

投影式インターフェースを用いた高齢者・身障者用機能維持・回復訓練システムの研究開発

2011年度

研究成果概要:本課題は,上体の運動および認知機能の維持・回復訓練を目的として,SOI(ステップ・オン・インタフェース:プロジェクタの投影画面を,機器から人への情報提示だけでなく,人から機器への指示入力としても利用する新しいインタフェース)とPCのみで構成する単純な可搬型機器を製作するとともに,訓練コンテンツとしてのゲーム・アプリケーションを開発するものである.

まず2011年度の前半は,SOIを二輪型移動機構の前後に2組搭載した移動ロボットHFAMRO-2上でアプリケーションの開発をすすめ,「風船割り(対象が上方に動く)」,「魚つかみ(対象が左右に動く)」,「もぐらたたき(対象がランダムな位置に出現)」について目処を立てた.2011年度の後半からは,北九州ロボットフォーラムの事業である市内発ロボット創生プロジェクトにも採択され,(財)北九州産業学術推進機構を事務局として,産業医科大学リハビリテーション医学講座,九州産業大学工学部,リーフ(株),(株)コア九州カンパニーをメンバーとする共同研究としても実施できた.これにより,機器装置としてこれまでに,1号機(反射型:普及型プロジェクタを用い,投影距離を稼ぎながら装置の高さを低く抑えるために反射鏡を用いるもの),2号機(直映型:比較的低い位置に設置した少し高価な短焦点型プロジェクタで机上に直接投影するもの),3号機(臨床試用プロトタイプ:最新型のプロジェクタを用いて収納性と可搬性を狙ったもの)を試作し,産業医科大学リハビリテーション科や特別養護老人ホームもじみ苑に持ち込んで意見をうかがう機会を持つこともできた.一方,画面デザインの追加・変更の容易性や詳細な調整を可能とするために,アプリケーション・プログラムのアルゴリズムと実現手法を全面的に改めた新規開発を実施しているが,いまだ完全には動作しておらず,不十分な状態である.

2011年度末には産業医科大学と早稲田大学の双方における「人を対象とする研究に関する倫理審査委員会」の承認を得ており,2012年度には,リハビリテーション用(医大リハ科)およびレクリエーション用(特養施設)としての臨床試用を開始することになっているため,引き続き何らかの助成を必要としている.

画像投射式卓上型上肢訓練装置IDATの機能向上と試用調査

2013年度

研究成果概要:画像投射式卓上型上肢訓練装置IDAT-3のアンケート調査による評価(2014年04月01日)

1. 目的
さまざまな年代の男女から次の3項目について評価を得ること.(1) 操作性(使いやすさ,使い勝手,直感性など),(2) 面白さ(楽しさ,充実感,達成感),(3) 有用性(有効性,実用性,利便性).また意見や所感を得ることで,現在のIDAT-3の欠点や不足を理解し,今後の性能の向上,新機能の開発の検討材料とすること.

2. 方法と手順
次の2つの展示会にIDAT-3を出展した.
(1) 展示会1:北九州学術研究都市第13回産学連携フェア(http://fair.ksrp.or.jp/).期日:2013年10月23日(水)~10月25日(金) 10h00-17h00.場所:北九州学術研究都市 体育館.対象:専門家,職業人,学生,市民.
(2) 展示会2:2013国際ロボット展(http://www.nikkan.co.jp/eve/irex/).期日:2013年11月06日(水)~11月09日(土) 10h00-17h00.場所:東京国際展示場 東1・2・3ホール.対象:専門家,職業人,学生,市民.
アンケートの実施手順は次のとおり:1) 研究開発の背景と目的,IDATの構成と機能を来訪者に説明し,デモンストレーションする.2) 来訪者にトレーニングプログラム(モグラたたき,風船割り,さかな捕り)を体験してもらう.3) 好きなだけ試してもらったのちに,アンケートへ記入してもらう.
アンケートの内容は次のとおり:
1. Gender: □male □female
2. Age: □-10s □20s □30s □40s □50s □60s □70s □80s-
3. Country: □Japan □Other ( )
4. User-friendliness: (good)9・8・7・6・5(average)・4・3・2・1(bad) Comment: ( )
5. Amusingness: (good)9・8・7・6・5(average)・4・3・2・1(bad) Comment: ( )
6. Usefulness: (good)9・8・7・6・5(average)・4・3・2・1(bad) Comment: ( )
7. Feedback and opinions: ( )
3つの評価項目によるIDAT-3の評価は,相対的な値ではなく主観的な絶対値でお願いした.合計7日間の展示より88通の回答を得た.性別では男性67名,女性21名.年齢別では10代以下40名,20代30名,30代4名,40代4名,50代以上6名となった.

3. 評価値
3つの評価項目における平均値(標準偏差)は次のとおり:
(1) 操作性:10代以下7.25(1.80),20代7.23(1.45),30代7.38(1.32),40代7.75(0.83),50代9.00(0.00),60代6.33(0.94),70代7.00(0.00),男性7.28(1.54),女性7.29(1.72),合計7.28(1.59)
(2) 面白さ:10代以下8.25(1.34),20代7.93(1.18),30代8.25(0.97),40代6.75(1.09),50代9.00(0.00),60代5.67(0.94),70代7.00(0.00),男性7.88(1.37),女性8.33(1.17),合計7.99(1.34)
(3) 有用性:10代以下7.60(1.50),20代7.67(1.45),30代7.38(1.32),40代8.00(0.71),50代9.00(0.00),60代8.33(0.94),70代7.00(0.00),男性7.60(1.51),女性7.95(1.27),合計7.67(1.46)
すべての回答者による平均値は,3つの評価項目(操作性,面白さ,有用性)のいずれも7と8の間にある.“操作性”が,3項目の中で最小の平均値(7.28)と最大の標準偏差(1.59)を得た.“面白さ”が,3項目の中で最大の平均値(7.99)と最小の標準偏差(1.34)を得た.女性の回答者が少ないため,男女間の結果の差異の検討はむずかしい.そこで以下では,年齢による結果の差異を検討する.

3.1 操作性
年齢を若年(-10s),壮年(20s-40s),後年(50s-)の3群に分けてまとめた得点はそれぞれ7.25(1.80),7.31(1.39),7.33(1.37)となる.年齢群間での得点の有意な差は認められない.すなわち,どの年齢群でもほぼ同じ得点(7.25, 7.31, 7.33)であり,IDATの操作感への年齢による影響は小さい.しかし他の2項目に比べてあまり高く評価されていない.問題点はあとで回答者からの意見に基づいて検討する.

3.2 面白さ
同じく3群に分けてまとめた得点はそれぞれ8.25(1.34),7.89(1.20),7.00(1.63)となる.年齢群間での得点の有意な差は認められない.しかし,年齢が上がると面白さの得点が下がる傾向がある.若年者による得点は8.25と高く,興味を引くことができたと考えられる.ゲームのようなトレーニング内容により若者の関心を得ることは容易だが,高齢者を引き付けるのはそれほど簡単ではない.

3.3 有用性
同じく3群に分けてまとめた得点はそれぞれ7.60(1.50),7.64(1.46),8.33(0.94)となる.年齢群間での得点の有意な差は認められない.しかし,年齢が上がると有用性を高く評価する傾向がある.これは,このアンケート調査において回答者の一人一人に,IDAT訓練のねらいである手と眼の協調訓練の重要性を説明しているからだと思われる.高齢者は身体および認知機能の低下にともなう危機意識をもっており,このIDAT訓練の意義を認識してもらいやすい.

4. 意見や所感
意見や所感に寄せられたコメントを項目に分けて紹介し考察する.

4.1 表示
・もぐらは,平面的な絵が出てくるより立体的なのが出てきたら面白いかも.あとは壁に映してみるとか.
・投影面が斜面のときはどうなるでしょう?
リハビリテーション訓練装置の映像にどの程度の臨場感が必要かは,よく検討するべきである.一方,家庭用ゲーム機を応用対象としたリアルタイム3次元コンピュータグラフィックスを実現するLSIの性能は向上し続けている.コスト・パフォーマンスを考慮しながら,高精細で高品質の3次元画像を導入することも将来的にはあり得る.また本体が投影面に垂直に設置されていれば,その面が傾いていても動作に問題はない.机の上でなく壁での実施も,機器構成の工夫により可能になる.

4.2 反応感度
・タッチ反応が少し遅い.・反応が鈍い.・反応速度を向上してほしい.・タイムラグが気になる.
障害物を検知するための探索区域は,作業面から約1cm離れた平面上である.したがって,システムは面に触れていない手にも反応する.これは反応感度に対してよく働くはずであるが,確かに応答の時間差を感じる.これには,ハードウェアとソフトウェアの両面で原因が考えられる.まず信号伝達である:動く物体を検出するにはセンサ情報の伝達速度が遅い.センサデータを取得するサイクルタイムを短縮するために,コンピュータの処理能力を含めた見直しが必要である.次にプログラミングである:当たり判定の計算に時間がかかりすぎる.アルゴリズムを見直して最適化する必要がある.

4.3 判定精度
・当たり判定の精度を良くして欲しい.・連続的な当たり判定が遅いときがある.
ある程度の反応感度を得るために,障害物の正確な大きさや形状は認識していない.手の大きさ(子供/大人,人種など)だけでなく,手の状態(握った手,開いた手など)やトイハンマの使用などにより,算出される障害物の位置が多少異なる.あらゆる状況により正確に対応するためには,訓練をはじめる前に訓練者の叩いた状態を計測して補正値を算出しておくなどの方法を考える必要がある.

4.4 小型化・低コスト化
・小型化を楽しみにしている.・コストが高い.
現在は研究開発のために汎用コンピュータを使っている.しかし商品化の際にはワンチップマイクロコンピュータを使用して本体と一体化することもできる.また,高価なレーザレンジファインダ(URG)を置き換えて同じ機能を提供する低価格のRGB-Dセンサについても別途検討している.

4.5 全体的な意見
多くの肯定的で励みになる回答を得た.
・これなら楽しくリハビリが進むと思いました.・操作しやすい,いろんな人が楽しめる,ぜひまたやりたい.・リハビリに最も良い,楽しさを感じる,実用性あり.・未来の発展を期待しています.
このように,多くの来場者は我々の研究開発の重要性と必要性を認め,高く評価してくれた.しかし同時に,改善の余地があることを知らせてくれる貴重な意見を得ることができた.これらの提言に基づいて,さらに良いシステムにするべく研究を続ける.
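上記の集計(年齢群ごとの平均値と標準偏差)の計算手順は,たとえば次のような最小のPythonスケッチで再現できる.データは説明用の仮の値であり,本文の実際の回答データではない.

```python
# 年齢群別のアンケート得点を平均値と標準偏差にまとめるスケッチ(仮データ使用).
# 本文の「若年(-10s)/壮年(20s-40s)/後年(50s-)」の3群分けと同じ要領で集計する.
from statistics import mean, pstdev

def summarize(scores_by_group):
    """群名 -> 得点リスト の辞書から,群ごとの (平均, 母標準偏差) を返す."""
    return {g: (round(mean(s), 2), round(pstdev(s), 2))
            for g, s in scores_by_group.items()}

# 説明用の仮データ(実際のアンケート結果ではない)
sample = {
    "young (-10s)":    [9, 8, 7, 8],
    "middle (20s-40s)": [7, 8, 7, 7],
    "senior (50s-)":   [9, 8, 8],
}
print(summarize(sample))
```

本文の括弧内の値と同様に,標本数が小さい群も含めて一律に母標準偏差(pstdev)で表している点に注意.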

投射画面インタフェースの高度・高機能化に関する研究

2014年度

研究成果概要:壁面などに投射した操作パネルの画面をロボット・メカトロニクス機器の操作に利用する,投射画面インタフェースの高度・高機能化として,任意の面に投影する双方向インタフェース(ステップ・オン・インタフェース Step-on Interface, SOI)の汎用性の向上に関する研究開発を実施した.具体的には,プロジェクタで壁に投影した操作画面の操作ボタンを,レーザ・ポインタで特定して機器を動作させるシステムの実現を目指した.本課題では,3つのレーザ・ポインタが互いに干渉することなく,ひとつの画面上で操作することができる新しい方法を提案した.

画像投射式インタフェースの高性能・高機能化及び応用展開

2014年度

研究成果概要:本課題では,複数の物体が同時に接触でき,接触する物体を識別する機能を持つ仮想タッチ・スクリーンの研究開発を実施した.具体的には,Xtionセンサによる測定データに基づき,接触/非接触センシングだけでなく,手指/道具の区別および接触速度の検出をする,仮想キーボード(鍵盤)および仮想シロフォン(木琴)を実現した.Xtionセンサを機器に実装する手法,および各種アルゴリズム:(1)手指の認識,(2)接触の認識,(3)接触位置のマッピング,(4)手指/道具(マレット=ばち)の区別,(5)接触速度の検出の手法,を検討した.これらの機能は,エデュテインメントとエンターテイメントの両方に役立つものだと考える.
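上記の「接触/非接触センシング」の考え方は,たとえば作業面までの距離との差を閾値判定する次のようなスケッチで示せる.深度値や閾値はすべて説明用の仮の値であり,Xtionセンサの実際のAPIや本研究のアルゴリズムそのものではない.

```python
# 仮想タッチ・スクリーンの接触判定の最小スケッチ(深度閾値による一般的な方法).
# surface_mm(面までの距離)や touch_band_mm(接触とみなす帯域)は仮の値.

def detect_touch(depth_map, surface_mm, touch_band_mm=10):
    """深度画像(mm単位の2次元リスト)から,作業面 surface_mm より手前
    touch_band_mm 以内にある画素を接触点 (row, col) として返す."""
    touches = []
    for r, row in enumerate(depth_map):
        for c, d in enumerate(row):
            # 面そのもの(差0)は除外し,面より少しだけ手前の画素を接触とする
            if 0 < surface_mm - d <= touch_band_mm:
                touches.append((r, c))
    return touches

# 面までの距離が800mmのとき,795mmの画素(面から5mm手前)だけが接触と判定される
depth = [[800, 795],
         [600, 800]]   # 600mm は面から遠いのでホバー扱い
print(detect_touch(depth, surface_mm=800))  # → [(0, 1)]
```

実際のシステムでは,こうした画素単位の判定に加えて,連結領域のまとめ上げや手指/道具の識別,接触速度の算出が必要になる.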

距離画像情報を利用した映像投影下でのタッチ・インタラクション技術

2015年度

研究成果概要:
<1> Research and Development of Near-field Touch Interface using Time-of-flight Camera
The purpose of this study is to apply a 3-dimensional image sensor, a time-of-flight camera, to a projector-sensor system, to achieve the basic functions (clicking, dragging and sliding) of a conventional touch interface and to expand it with new functions such as finger-direction detection. The research items are as follows: (1) near-field hand extraction method, (2) high-accuracy touch/hover detection, (3) an integrated and complete projector-sensor system, (4) evaluation experiments.
<2> Research and Development of Calligraphy-Brushwork Learning Support System
A calligraphy learning support system was proposed for supporting brushwork learning by using a projector. The system was designed to provide three kinds of training style according to the learner's ability: copying training, tracing training, and a combination of both. In order to instruct three-dimensional brushwork such as writing speed, pressure force, and orientation of the brush, we proposed an instruction method that presents information about the brush tip. This method visualizes the position, orientation and moving direction of the brush. A preliminary experiment on learning was performed and the efficiency of the proposed method was examined through the experimental results.

二次元投影映像を用いた三次元動作の人への教示手法に関する研究

2016年度

研究成果概要:Research and development activities have been continued in order to deepen human-robot interaction technology in the field of robotics and mechatronics, and the following results have been obtained.
(1) Visual SLAM using RGB-D sensor: ORB-SHOT SLAM, a trajectory correction by 3D loop closing based on the Bag-of-Visual-Words (BoVW) model for RGB-D Visual SLAM, has newly been proposed. --> JRM article (to appear).
(2) Learning system using RGB-D sensor and projector: Calibration and statistical learning techniques for building an interactive screen for children have been proposed, and a trial in a school has been conducted. --> IEEE/SICE SII conference paper (presented) and IJARS article (to appear).
(3) Learning system using LM sensor and projector: A calligraphy-stroke learning support system using a projector and a motion sensor has been proposed. --> JACIII article (to appear).
(4) Interactive interface using LM sensor and 3D hologram: Interactive aerial projection of a 3D hologram object has been proposed. --> IEEE/ROBIO conference paper (presented).

歩道通行や交差点横断を特徴とする屋外における歩行者誘導ロボティック・システム

2018年度

研究成果概要:Outdoor pedestrian-navigation robotic-system characterized by passing sidewalks and crossing intersections: We studied path planning for pedestrians in an outdoor environment using a two-dimensional digital map. The two-dimensional digital map is obtained over the network, for example from Google Maps or OpenStreetMap. However, such maps do not record all the data about crosswalks and pedestrian paths, for example outside urban areas. Therefore, a path planning method for pedestrians should be realized that does not depend on preliminarily-recorded map data or on a large amount of data, such as that needed to execute SLAM (simultaneous localization and mapping). Once given the departure and the destination, the system gets the map data around and between them. First, it performs image processing (contour detection) and visually recognizes city blocks. Next, graph theory is applied to deduce the pedestrian path from the departure to the destination. In a trial using actual map data, it was possible to plan a reasonable path with a 70-80% success rate, including which side of the road to go through. In the future, we plan to detect pedestrian crossings and footbridges from satellite images and merge them into the graph data, and to guide pedestrians using a mobile robot in actual environments.
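上記の「街区を認識したのちグラフ理論で経路を導出する」手順のうち,グラフ探索の部分だけを切り出すと,たとえば次のダイクストラ法のスケッチになる.ノードと辺の重みは説明用の仮の値であり,本研究の実装そのものではない.

```python
# 街区の角などを通行可能点(ノード),その間の距離[m]を辺の重みとした
# 小さな道路網グラフ上で,出発地から目的地までの最短経路を求めるスケッチ.
import heapq

def dijkstra(graph, start, goal):
    """隣接辞書 graph[u] = {v: 距離, ...} 上で start→goal の
    (総距離, 経路ノード列) を返す.到達不能なら (inf, [])."""
    pq = [(0.0, start, [start])]
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in visited:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# 説明用の仮の道路網:交差点 A-B-C-D
roads = {"A": {"B": 50, "C": 80},
         "B": {"A": 50, "D": 60},
         "C": {"A": 80, "D": 20},
         "D": {"B": 60, "C": 20}}
print(dijkstra(roads, "A", "D"))  # → (100.0, ['A', 'C', 'D'])
```

実際のシステムでは,輪郭検出で得た街区境界からこの隣接辞書を構成し,道のどちら側を通るかもノードの分割で表現することになる.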

現在担当している科目

科目名 / 開講学部・研究科 / 開講年度 / 学期
バイオ・ロボティクス&ヒューマン・メカトロニクス研究(修士論文) / 大学院情報生産システム研究科 / 2020 / 通年
バイオ・ロボティクス&ヒューマン・メカトロニクス研究(修士) 春 / 大学院情報生産システム研究科 / 2020 / 春学期
バイオ・ロボティクス&ヒューマン・メカトロニクス研究(修士) 秋 / 大学院情報生産システム研究科 / 2020 / 秋学期
ロボティクス・メカトロニクス特論 / 大学院情報生産システム研究科 / 2020 / 秋学期
生体工学 / 大学院情報生産システム研究科 / 2020 / 秋学期
ヒューマン・ロボット・インタラクション / 大学院情報生産システム研究科 / 2020 / 春学期
ロボット工学 / 大学院情報生産システム研究科 / 2020 / 秋学期
ロボティクス・メカトロニクス演習D / 大学院情報生産システム研究科 / 2020 / 秋学期
ロボティクス・メカトロニクス演習B / 大学院情報生産システム研究科 / 2020 / 春学期
ロボティクス・メカトロニクス演習C / 大学院情報生産システム研究科 / 2020 / 春学期
ロボティクス・メカトロニクス演習A / 大学院情報生産システム研究科 / 2020 / 秋学期
バイオ・ロボティクス&ヒューマン・メカトロニクス研究(博士論文) / 大学院情報生産システム研究科 / 2020 / 通年
バイオ・ロボティクス&ヒューマン・メカトロニクス研究(博士) 春 / 大学院情報生産システム研究科 / 2020 / 春学期
バイオ・ロボティクス&ヒューマン・メカトロニクス研究(博士) 秋 / 大学院情報生産システム研究科 / 2020 / 秋学期

作成した教科書・教材・参考書

生体機械機能工学

2008年10月

詳細

概要:ISBN: 978-4-501-41750-5, 東京電機大学出版局

"Lesson10 遠隔操作システム", in "Webラーニングプラザ「事例に学ぶロボティックス」"

2002年03月

詳細

概要:(独)科学技術振興機構JST