KOMEI SUGIURA

Google Scholar page


Journal Articles

  1. D. Yashima, R. Korekata, and K. Sugiura, "Open-Vocabulary Mobile Manipulation Based on Double Relaxed Contrastive Learning with Dense Labeling", IEEE Robotics and Automation Letters, 2025, to appear.
  2. N. Hosomi, Y. Iioka, S. Hatanaka, T. Misu, K. Yamada, N. Tsukamoto, S. Kobayashi, and K. Sugiura, "Multimodal Target Localization with Landmark-Aware Positioning for Urban Mobility", IEEE Robotics and Automation Letters, Vol. 10, Issue 1, pp. 716-723, 2025.
    DOI: 10.1109/LRA.2024.3511404 pdf
  3. T. Komatsu, M. Kambara, S. Hatanaka, H. Matsuo, T. Hirakawa, T. Yamashita, H. Fujiyoshi, and K. Sugiura, "Nearest Neighbor Future Captioning: Generating Descriptions for Possible Collisions in Object Placement Tasks", Advanced Robotics, Vol. 38, Issue 18, pp. 1265-1276, 2024.
    DOI: 10.1080/01691864.2024.2388114 pdf
  4. H. Matsuo, S. Ishikawa, and K. Sugiura, "Co-Scale Cross-Attentional Transformer for Rearrangement Target Detection", Advanced Robotics, Vol. 38, Issue 18, pp. 1277-1286, 2024.
    DOI: 10.1080/01691864.2024.2381812 pdf
  5. H. Itaya, T. Hirakawa, T. Yamashita, H. Fujiyoshi, and K. Sugiura, "Mask-Attention A3C: Visual Explanation of Action-State Value in Deep Reinforcement Learning", IEEE Access, Vol. 12, pp. 86553-86571, 2024.
    DOI: 10.1109/ACCESS.2024.3416179
  6. N. Hosomi, S. Hatanaka, Y. Iioka, W. Yang, K. Kuyo, T. Misu, K. Yamada, K. Sugiura, "Trimodal Navigable Region Segmentation Model: Grounding Navigation Instructions in Urban Areas", IEEE Robotics and Automation Letters, Vol. 9, No. 5, 2024.
    DOI: 10.1109/LRA.2024.3376957 pdf
  7. K. Kaneda, S. Nagashima, R. Korekata, M. Kambara, K. Sugiura, "Learning-To-Rank Approach for Identifying Everyday Objects Using a Physical-World Search Engine", IEEE Robotics and Automation Letters, Vol. 9, No. 3, 2024.
    DOI: 10.1109/LRA.2024.3352363 pdf
  8. A. Ueda, W. Yang and K. Sugiura, "Switching Text-based Image Encoders for Captioning Images with Text", IEEE Access, Vol. 11, pp. 55706-55715, 2023.
    DOI: 10.1109/ACCESS.2023.3282444 pdf
  9. S. Ishikawa and K. Sugiura, "Affective Image Captioning for Visual Artworks using Emotion-based Cross-Attention Mechanisms", IEEE Access, Vol. 11, pp. 24527-24534, 2023.
    DOI: 10.1109/ACCESS.2023.3255887 pdf
  10. S. Matsumori, Y. Abe, K. Shingyouchi, K. Sugiura, and M. Imai, "LatteGAN: Visually Guided Language Attention for Multi-Turn Text-Conditioned Image Manipulation", IEEE Access, Vol. 9, pp. 160521-160532, 2021.
    DOI: 10.1109/ACCESS.2021.3129215 pdf
  11. M. Kambara and K. Sugiura, "Case Relation Transformer: A Crossmodal Language Generation Model for Fetching Instructions", IEEE Robotics and Automation Letters, Vol. 6, Issue 4, pp. 8371-8378, 2021.
    DOI: 10.1109/LRA.2021.3107026 pdf
  12. S. Ishikawa and K. Sugiura, "Target-dependent UNITER: A Transformer-Based Multimodal Language Comprehension Model for Domestic Service Robots", IEEE Robotics and Automation Letters, Vol. 6, Issue 4, pp. 8401-8408, 2021.
    DOI: 10.1109/LRA.2021.3108500 pdf
  13. A. Magassouba, K. Sugiura, and H. Kawai, "CrossMap Transformer: A Crossmodal Masked Path Transformer Using Double Back-Translation for Vision-and-Language Navigation", IEEE Robotics and Automation Letters, Vol. 6, Issue 4, pp. 6258-6265, 2021.
    DOI: 10.1109/LRA.2021.3092686 pdf
  14. A. Magassouba, K. Sugiura, A. Nakayama, T. Hirakawa, T. Yamashita, H. Fujiyoshi, and H. Kawai, "Predicting and Attending to Damaging Collisions for Placing Everyday Objects in Photo-Realistic Simulations", Advanced Robotics, Vol. 35, Issue 12, pp. 787-799, 2021. pdf
  15. N. Nishizuka, Y. Kubo, K. Sugiura, M. Den, M. Ishii, "Operational Solar Flare Prediction Model Using Deep Flare Net", Earth, Planets and Space, Vol. 73, Article 64, pp. 1-12, 2021.
    DOI: 10.1186/s40623-021-01381-9
    Journal's Highlighted Paper 2021
    pdf
  16. T. Ogura, A. Magassouba, K. Sugiura, T. Hirakawa, T. Yamashita, H. Fujiyoshi, H. Kawai, "Alleviating the Burden of Labeling: Sentence Generation by Attention Branch Encoder-Decoder Network", IEEE Robotics and Automation Letters, Vol. 5, Issue 4, pp. 5945-5952, 2020.
    DOI: 10.1109/LRA.2020.3010735 pdf
  17. N. Nishizuka, Y. Kubo, K. Sugiura, M. Den, M. Ishii, "Reliable Probability Prediction Model of Solar Flares: Deep Flare Net-Reliable (DeFN-R)", The Astrophysical Journal, Vol. 899, No. 2, 150 (8pp), 2020.
    DOI: 10.3847/1538-4357/aba2f2 pdf
  18. A. Magassouba, K. Sugiura, H. Kawai, "A Multimodal Target-Source Classifier with Attention Branches to Understand Ambiguous Instructions for Fetching Daily Objects", IEEE Robotics and Automation Letters, Vol. 5, Issue 2, pp. 532-539, 2020.
    DOI: 10.1109/LRA.2019.2963649 pdf
  19. A. Magassouba, K. Sugiura, A. Trinh Quoc, H. Kawai, "Understanding Natural Language Instructions for Fetching Daily Objects Using GAN-Based Multimodal Target-Source Classification", IEEE Robotics and Automation Letters, Vol. 4, Issue 4, pp. 3884-3891, 2019.
    DOI: 10.1109/LRA.2019.2926223 pdf
  20. A. Magassouba, K. Sugiura, H. Kawai, "A Multimodal Classifier Generative Adversarial Network for Carry and Place Tasks from Ambiguous Language Instructions", IEEE Robotics and Automation Letters, Vol. 3, Issue 4, pp. 3113-3120, 2018.
    DOI: 10.1109/LRA.2018.2849607 pdf Slides
  21. K. Sugiura, "SuMo-SS: Submodular Optimization Sensor Scattering for Deploying Sensor Networks by Drones", IEEE Robotics and Automation Letters, Vol. 3, Issue 4, pp. 2963-2970, 2018.
    DOI: 10.1109/LRA.2018.2849604 pdf Slides
  22. N. Nishizuka, K. Sugiura, Y. Kubo, M. Den, and M. Ishii, "Deep Flare Net (DeFN) Model for Solar Flare Prediction", The Astrophysical Journal, Vol. 858, Issue 2, 113 (8pp), 2018.
    DOI: 10.3847/1538-4357/aab9a7 pdf
  23. M. Okugawa, N. Ito, H. Okada, W. Uemura, T. Takahashi, and K. Sugiura: "RoboCup: Toward the Year 2050", Journal of Japan Society for Fuzzy Theory and Intelligent Informatics, Vol. 29, No. 2, pp. 42-54, 2017. (in Japanese)
  24. T. Nose, Y. Arao, T. Kobayashi, K. Sugiura, and Y. Shiga: "Sentence Selection Based on Extended Entropy Using Phonetic and Prosodic Contexts for Statistical Parametric Speech Synthesis", IEEE Transactions on Audio, Speech, and Language Processing, Vol. 25, Issue 5, pp. 1107-1116, 2017. pdf
  25. N. Nishizuka, K. Sugiura, Y. Kubo, M. Den, S. Watari and M. Ishii: "Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetogram", The Astrophysical Journal, Vol. 835, Issue 2, 156 (10pp), 2017. pdf
  26. S. Takeuchi, K. Sugiura, Y. Akahoshi, and K. Zettsu: "Spatio-Temporal Pseudo Relevance Feedback for Scientific Data Retrieval," IEEJ Trans., Vol. 12, Issue 1, pp. 124-131, 2017. pdf
  27. K. Lwin, K. Sugiura, and K. Zettsu: "Space-Time Multiple Regression Model for Grid-Based Population Estimation in Urban Areas," International Journal of Geographical Information Science, Vol. 30, No. 8, pp. 1579-1593, 2016.
  28. K. Sugiura: "New Developments in Probabilistic Robotics for Imitation Learning", Systems, Control and Information, Vol. 60, No. 12, pp. 521-527, 2016. (in Japanese) pdf
  29. K. Sugiura: "Toward Large-Scale Language Learning by Robots: Utilizing Real-World Knowledge and Building a Cloud Robotics Platform", Journal of the Society of Instrument and Control Engineers, Vol. 55, No. 10, pp. 884-889, 2016. (in Japanese) pdf
  30. K. Sugiura: "Building a Speech Communication Platform for Robots by Utilizing Big Data", Journal of the Institute of Electronics, Information and Communication Engineers, Vol. 99, No. 6, pp. 500-504, 2016. (in Japanese) pdf
  31. B. T. Ong, K. Sugiura, and K. Zettsu: "Dynamically Pre-trained Deep Recurrent Neural Networks using Environmental Monitoring Data for Predicting PM2.5," Neural Computing and Applications, Vol. 27, Issue 6, pp. 1553-1566, 2016. pdf
  32. K. Sugiura: "RoboCup@Home: A Benchmark Test for Robots Coexisting with Humans", Journal of the Japanese Society for Artificial Intelligence, Vol. 31, No. 2, pp. 230-236, 2016. (in Japanese) pdf
  33. L. Iocchi, D. Holz, J. Ruiz-del-Solar, K. Sugiura, and T. van der Zant: "RoboCup@Home: Analysis and Results of Evolving Competitions for Domestic and Service Robots," Artificial Intelligence, Vol. 229, pp. 258-281, 2015.
  34. K. Sugiura, Y. Shiga, H. Kawai, T. Misu, and C. Hori: "A Cloud Robotics Approach towards Dialogue-Oriented Robot Speech," Advanced Robotics, Vol. 29, Issue 7, pp. 449-456, 2015. pdf
  35. K. Sugiura, N. Iwahashi, M. Haga, and C. Hori: "A Long-Term Field Experiment Using the Sightseeing Spot Recommendation App 'Kyo no Osusume'", Tourism and Information, Vol. 10, No. 1, pp. 15-24, 2014. (in Japanese) pdf
  36. M. Dong, T. Kimata, K. Sugiura, and K. Zettsu: "Quality-of-Experience (QoE) in Emerging Mobile Social Networks," IEICE Transactions on Information and Systems, Vol. E97-D, No. 10, pp. 2606-2612, 2014. pdf
  37. T. Inamura, J. T. C. Tan, Y. Hagiwara, K. Sugiura, T. Nagai, and H. Okada: "Framework and Base Technology of RoboCup@Home Simulation toward Long-term Large Scale Human-Robot Interaction," JSOFT, Vol. 26, No. 3, pp. 698-709, 2014.
  38. K. Sugiura and T. Nagai: "Manipulation of Everyday Objects in RoboCup@Home", Journal of the Robotics Society of Japan, Vol. 31, No. 4, pp. 370-375, 2013. (in Japanese) pdf
  39. K. Sugiura: "Robot Dialogue: Learning Communication Using Real-World Information", Journal of the Japanese Society for Artificial Intelligence, Vol. 27, No. 6, pp. 580-586, 2012. (in Japanese) pdf
  40. H. Kashioka, T. Misu, E. Mizukami, K. Sugiura, N. Iwahashi, and C. Hori: "Utilizing Spoken Dialogue Systems for Tourist Guidance", IPSJ Digital Practice, Vol. 3, No. 4, pp. 254-261, 2012. (in Japanese) pdf
  41. K. Sugiura: "RoboCup Guideposts (8): The RoboCup@Home League", IPSJ Magazine, Vol. 53, No. 3, pp. 250-261, 2012. (in Japanese) pdf
  42. T. Nakamura, M. Attamimi, K. Sugiura, T. Nagai, N. Iwahashi, T. Toda, H. Okada, T. Omori, "An Extended Mobile Manipulation Robot Learning Novel Objects," Journal of the Japanese Society for Artificial Intelligence, Vol. 30, No. 2, pp. 213-224, 2012. abstract
    It is convenient for users to teach novel objects to a domestic service robot with a simple procedure. In this paper, we propose a method for learning the images and names of these objects shown by the users. The object images are segmented out from cluttered scenes by using motion attention. Phoneme recognition and voice conversion are used for the speech recognition and synthesis of the object names that are out of vocabulary. In experiments conducted with 120 everyday objects, we obtained an accuracy of 91% for object recognition and 82% for word recognition. Furthermore, we implemented the proposed method on a physical robot, DiGORO, and evaluated its performance using RoboCup@Home's "Supermarket" task. The results show that DiGORO exceeded the highest score obtained in the RoboCup@Home 2009 competition.
  43. T. Nakamura, K. Sugiura, T. Nagai, N. Iwahashi, T. Toda, H. Okada, T. Omori, "Learning Novel Objects for Extended Mobile Manipulation", Journal of Intelligent and Robotic Systems, Vol. 66, Issue 1-2, pp. 187-204, 2012. pdf abstract
    We propose a method for learning novel objects from audio-visual input. The proposed method is based on two techniques: out-of-vocabulary (OOV) word segmentation and foreground object detection in complex environments. A voice conversion technique is also involved in the proposed method so that the robot can pronounce the acquired OOV word intelligibly. We also implemented a robotic system that carries out interactive mobile manipulation tasks, which we call "extended mobile manipulation", using the proposed method. To evaluate the robot as a whole, we conducted the "Supermarket" task, adopted from the RoboCup@Home league, as a standard task for real-world applications. The results reveal that our integrated system works well in real-world applications.
  44. K. Sugiura, N. Iwahashi, H. Kawai, S. Nakamura, "Situated Spoken Dialogue with Robots Using Active Learning", Advanced Robotics, Vol. 25, No. 17, pp. 2207-2232, 2011. pdf abstract
    In a human-robot spoken dialogue, a robot may misunderstand an ambiguous command from a user, such as "Place the cup down (on the table)," potentially resulting in an accident. Although asking a confirmation question before every motion execution would decrease the risk of such failures, users find it more convenient if confirmation questions are not asked in trivial situations. This paper proposes a method for estimating the ambiguity in commands by introducing an active learning scheme with Bayesian logistic regression into human-robot spoken dialogue. We conducted physical experiments in which a user and a manipulator-based robot communicated using spoken language to manipulate objects.
  45. T. Misu, K. Sugiura, T. Kawahara, K. Ohtake, C. Hori, H. Kashioka, H. Kawai and S. Nakamura: "Modeling Spoken Decision Support Dialogue and Optimization of its Dialogue Strategy", ACM Transactions on Speech and Language Processing, Vol. 7, Issue 3, pp. 10:1-10:18, 2011. abstract
    This paper addresses a user model for user simulation in spoken decision-making dialogue systems. When selecting from a set of alternatives, users apply various decision criteria. Users often do not have a definite goal or criteria for selection, and thus they may discover not only what kind of information the system can provide but also their own preferences and the factors they should emphasize. In this paper, we present a user model and dialogue state expression that consider the user's knowledge and preferences in spoken decision-making dialogue. To estimate the parameters of the user model, we implemented a trial sightseeing guidance system and collected dialogue data. We then model the dialogue as a partially observable Markov decision process (POMDP) and optimize its dialogue strategy so that users can make better choices.
  46. K. Sugiura, N. Iwahashi, H. Kashioka, and S. Nakamura: "Learning, Generation, and Recognition of Motions by Reference-Point-Dependent Probabilistic Models", Advanced Robotics, Vol. 25, No. 6-7, pp. 825-848, 2011. pdf abstract
    This paper presents a novel method for learning object manipulation such as rotating an object or placing one object on another. In this method, motions are learned using reference-point-dependent probabilistic models, which can be used for the generation and recognition of motions. The method estimates (1) the reference point, (2) the intrinsic coordinate system type, which is the type of coordinate system intrinsic to a motion, and (3) the probabilistic model parameters of the motion that is considered in the intrinsic coordinate system. Motion trajectories are modeled by a hidden Markov model (HMM), and an HMM-based method using static and dynamic features is used for trajectory generation. The method was evaluated in physical experiments in terms of motion generation and recognition. In the experiments, users demonstrated the manipulation of puppets and toys so that the motions could be learned. A recognition accuracy of 90% was obtained for a test set of motions performed by three subjects. Furthermore, the results showed that appropriate motions were generated even if the object placement was changed.
  47. X. Zuo, N. Iwahashi, K. Funakoshi, M. Nakano, R. Taguchi, S. Matsuda, K. Sugiura, and N. Oka: "Detecting Robot-Directed Speech by Situated Understanding in Physical Interaction", Journal of the Japanese Society for Artificial Intelligence, Vol. 25, No. 6, pp. 670-682, 2010. pdf abstract
    In this paper, we propose a novel method for a robot to detect robot-directed speech: to distinguish speech that users speak to a robot from speech that users speak to other people or to themselves. The originality of this work is the introduction of a multimodal semantic confidence (MSC) measure, which is used for domain classification of input speech based on the decision on whether the speech can be interpreted as a feasible action under the current physical situation in an object manipulation task. This measure is calculated by integrating speech, object, and motion confidence with weightings that are optimized by logistic regression. Then we integrate this measure with gaze tracking and conduct experiments under conditions of natural human-robot interactions. Experimental results show that the proposed method achieves a high performance of 94% and 96% in average recall and precision rates, respectively, for robot-directed speech detection.
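    (A minimal illustrative code sketch of this confidence-integration step is given after this list.)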
  48. K. Sugiura, N. Iwahashi, H. Kashioka, and S. Nakamura: "Object Manipulation Dialogue by Estimating Utterance Understanding Probability in a Robot Language Acquisition Framework", Journal of the Robotics Society of Japan, Vol. 28, No. 8, pp. 978-988, 2010. pdf abstract
    This paper proposes a method for generating motions and utterances in an object manipulation dialogue task. The user's utterances are understood by using a confidence function that integrates speech, vision, and motion within a statistical learning framework. When the user's utterance has little ambiguity, the method generates the motion trajectory most appropriate to the situation by using hidden Markov models. For highly ambiguous utterances, the method generates a natural confirmation utterance to ask the user for confirmation, which makes it possible to cancel an inappropriate motion before it is executed.
  49. K. Sugiura, H. Kawakami, and O. Katai: "Simultaneous Design Method of the Sensory Morphology and Controller of Mobile Robots", Electrical Engineering in Japan, Vol. 172, Issue 1, pp. 48-57, 2010. abstract
    This paper proposes a method for automatic design of the sensory morphology of a mobile robot. The proposed method employs two types of adaptations, ontogenetic and phylogenetic, to optimize the sensory morphology of the robot. In ontogenetic adaptation, reinforcement learning searches for the optimal policy, which is highly dependent on the sensory morphology. In phylogenetic adaptation, a genetic algorithm is used to select morphologies with which the robot can learn tasks faster. Our proposed method was applied to the design of the sensory morphology of a line-following robot. We performed simulation experiments to compare the design solution with a hand-coded robot. The results of the experiments revealed that our robot outperformed the hand-coded robot in terms of the following accuracy and learning speed, although our robot had fewer sensors than the hand-coded one. We also built a physical robot using the design solution. The experimental results revealed that this physical robot used its morphology effectively and outperformed the hand-coded robot.
  50. T. Taniguchi, N. Iwahashi, K. Sugiura, and T. Sawaragi: "Constructive Approach to Role-Reversal Imitation Through Unsegmented Interactions", Journal of Robotics and Mechatronics, Vol. 20, No. 4, pp. 567-577, 2008. abstract
    This paper presents a novel method by which a robot learns a user's key motions automatically through imitation. The learning architecture mainly consists of three learning modules: a switching autoregressive model (SARM), a keyword extractor without a dictionary, and a keyword selection filter that references the tutor's reactions.
  51. K. Sugiura, H. Kawakami, and O. Katai: "Simultaneous Design Method of the Sensory Morphology and Controller of Mobile Robots", IEEJ Trans. EIS, Vol. 128-C, No. 7, pp. 1154-1161, 2008. pdf abstract
    This paper proposes a method that automatically designs the sensory morphology of a mobile robot. The proposed method employs two types of adaptations - ontogenetic and phylogenetic - to optimize the sensory morphology of the robot. In ontogenetic adaptation, reinforcement learning searches for the optimal policy which is highly dependent on the sensory morphology.
  52. K. Sugiura, T. Shiose, H. Kawakami, and O. Katai: "Co-evolution of Sensors and Controllers", IEEJ Trans. EIS, Vol. 124-C, pp. 1938-1943, 2004. abstract
    The paper describes the evolutionary development of embodied agents that evolve the parameters of their controllers and sensors. The experimental results show that the physical characteristics of the agents and the task environment affect the temporal resolution of the sensors.
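Note: entries 44, 47, and 48 above share a common technical idea: per-modality confidence scores (e.g., speech, image, and motion) are integrated by logistic regression into a single measure of whether an utterance can be interpreted as a feasible action. The following is a minimal illustrative sketch of that integration step in Python; the toy data, variable names, and the use of scikit-learn are assumptions made for illustration, not taken from the papers.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def integrated_confidence(c_speech, c_image, c_motion, w):
        # Weighted integration of per-modality confidences;
        # w = (bias, w_speech, w_image, w_motion), as fitted below.
        x = np.array([1.0, c_speech, c_image, c_motion])
        return sigmoid(w @ x)

    # Toy training data (illustrative): rows are per-modality confidence
    # scores in [0, 1]; labels mark whether the utterance corresponded
    # to a feasible robot-directed action.
    X = np.array([[0.9, 0.8, 0.7],
                  [0.4, 0.2, 0.1],
                  [0.8, 0.9, 0.6],
                  [0.3, 0.1, 0.2]])
    y = np.array([1, 0, 1, 0])
    clf = LogisticRegression().fit(X, y)
    w = np.concatenate(([clf.intercept_[0]], clf.coef_[0]))
    print(integrated_confidence(0.85, 0.7, 0.6, w))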

Invited Talks

The full list of 44 invited talks is shown here.

  1. K. Sugiura, "Cloud Robotics for Building Conversational Robots", IROS 2016 Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics, Oct. 14, 2016. Slides
  2. K. Sugiura, "A New Challenge in RoboCup 2017 Nagoya", IROS 2016 RoboCup Tutorial: Multi-Robot Autonomy in Robot Soccer as an Adversarial Domain, Oct. 10, 2016. Slides
  3. K. Sugiura, "Statistical Imitation Learning and Human-Robot Communication", The 2nd International Workshop on Cognitive Neuroscience Robotics, Sankei Conference Osaka, Feb. 21, 2016.
  4. K. Sugiura, "Cloud Robotics for Human-Robot Dialogues", Japan-UK Robotics and Artificial Intelligence Seminar 2016, Embassy of Japan in the UK, Feb. 18, 2016.
  5. K. Sugiura, "Data-Driven Robotics", Kyoto University, Jan. 21, 2016.
  6. K. Sugiura, "RST-Seminar: Speech Communication Technology for Robots," Chubu University, June 17, 2015.
  7. K. Sugiura, "Toward Large-Scale Robot Language Learning," Doshisha University, Mar. 26, 2015.
  8. K. Sugiura, "Speech Processing and Cloud Robotics for Service Robots," Japan Robot Week 2014 Keihanna Robot Forum, Tokyo Big Site, Oct. 17, 2014.
  9. K. Sugiura, "Grounded Spoken Dialogues with Robots: Cloud Robotics Tools and Service Robot Applications," SIG AI Challenge, Kyoto University, Mar. 18, 2014.
  10. K. Sugiura, "Towards Robots that learn communications", Keihanna Plaza, Feb. 22, 2014.
  11. K. Sugiura, "Artificial Intelligence Systems 2: Machine Learning in Human-Robot Dialogs", Nara Institute of Science and Technology, Dec. 13, 2013.
  12. K. Sugiura, "Special Lecture on Multimedia: Multimodal Dialogues with Robots", Osaka University, Dec. 12, 2013.
  13. K. Sugiura, "Grounded Spoken Dialogue Systems and Robotic Applications", The 80th Robotics Seminar, Oct. 9, 2013.
  14. K. Sugiura, "Grounded Human-Robot Dialogues", Nagoya-area NLP Seminar, Oct. 31, 2012.
  15. K. Sugiura, "Intelligent System Design 2: Machine Learning in Human-Robot Dialogs", Nara Institute of Science and Technology, Oct. 16, 2012.
  16. K. Sugiura, "Bayes Inferences," Nara Institute of Science and Technology, May 23, 2012.
  17. K. Sugiura, "RoboCup@Home: A Benchmark Test for Domestic Service Robots," Okayama University, Mar. 21, 2012.
  18. K. Sugiura, "Intelligent System Design 2: Machine Learning in Human-Robot Dialogs", Nara Institute of Science and Technology, Oct. 17, 2011.
  19. K. Sugiura, "The Cutting Edge of Robot Dialogue Research", Summer School of Japanese Cognitive Science Society, September 5, 2011.
  20. K. Sugiura, "A Real-World Dialogue System Using Sensory-Motor Information", The Young Researchers' Roundtable on Spoken Dialog Systems, September 23, 2010.
  21. K. Sugiura, "The Cutting Edge of Robotics Research", Doshisha University, September 11, 2010.

International Conference Articles

  1. K. Matsuda, Y. Wada, and K. Sugiura, "DENEB: A Hallucination-Robust Automatic Evaluation Metric for Image Captioning", ACCV, 2024, to appear. (acceptance rate = 32%)
  2. M. Kambara and K. Sugiura, "Future Success Prediction in Open-Vocabulary Object Manipulation Tasks Based on End-Effector Trajectories", 3rd Workshop on Language and Robot Learning @CoRL, 2024.
  3. M. Goko, M. Kambara, S. Otsuki, D. Saito, and K. Sugiura, "Task Success Prediction for Open-Vocabulary Manipulation Based on Multi-Level Aligned Representations", CoRL, 2024. (acceptance rate = 38.2%)
  4. K. Kaneda, S. Nagashima, R. Korekata, M. Kambara, and K. Sugiura, "Learning-To-Rank Approach for Identifying Everyday Objects Using a Physical-World Search Engine", IEEE RAL presented at IEEE/RSJ IROS, 2024.
  5. T. Nishimura, K. Kuyo, M. Kambara, and K. Sugiura, "Object Segmentation from Open-Vocabulary Manipulation Instructions Based on Optimal Transport Polygon Matching with Multimodal Foundation Models", IEEE/RSJ IROS, 2024.
  6. S. Otsuki, T. Iida, F. Doublet, T. Hirakawa, T. Yamashita, H. Fujiyoshi, and K. Sugiura, "Layer-Wise Relevance Propagation with Conservation Property for ResNet", ECCV, 2024. (acceptance rate = 27.9%)
  7. Y. Wada, K. Kaneda, D. Saito, and K. Sugiura, "Polos: Multimodal Metric Learning from Human Feedback for Image Captioning", CVPR, pp. 13559-13568, 2024. (acceptance rate = 23.6%)
    Poster (highlight): Top 3.6% out of 11,532 paper submissions
    pdf
  8. N. Hosomi, Y. Iioka, S. Hatanaka, T. Misu, K. Yamada, and K. Sugiura: "Target Position Regression from Navigation Instructions", IEEE ICRA, 2024 [poster].
  9. R. Korekata, K. Kanda, S. Nagashima, Y. Imai, and K. Sugiura: "Multimodal Ranking for Target Objects and Receptacles Based on Open-Vocabulary Instructions", IEEE ICRA, 2024 [poster].
  10. Y. Wada, K. Kaneda, and K. Sugiura, "JaSPICE: Automatic Evaluation Metric Using Predicate-Argument Structures for Image Captioning Models", CoNLL, 2023. (acceptance rate = 28%) pdf
  11. Y. Iioka, Y. Yoshida, Y. Wada, S. Hatanaka and K. Sugiura, "Multimodal Diffusion Segmentation Model for Object Segmentation from Manipulation Instructions", IEEE/RSJ IROS, 2023, to appear. pdf
  12. S. Otsuki, S. Ishikawa and K. Sugiura, "Prototypical Contrastive Transfer Learning for Multimodal Language Understanding", IEEE/RSJ IROS, 2023, to appear. pdf
  13. R. Korekata, M. Kambara, Y. Yoshida, S. Ishikawa, Y. Kawasaki, M. Takahashi and K. Sugiura, "Switching Head–Tail Funnel UNITER for Dual Referring Expression Comprehension with Fetch-and-Carry Tasks", IEEE/RSJ IROS, 2023, to appear. pdf
  14. K. Kaneda, R. Korekata, Y. Wada, S. Nagashima, M. Kambara, Y. Iioka, H. Matsuo, Y. Imai, T. Nishimura, and K. Sugiura, "DialMAT: Dialogue-Enabled Transformer with Moment-Based Adversarial Training", CVPR 2023 Embodied AI Workshop, 2023. (1st Place in DialFRED Challenge) pdf
  15. M. Kambara and K. Sugiura, "Fully Automated Task Management for Generation, Execution, and Evaluation: A Framework for Fetch-and-Carry Tasks with Natural Language Instructions in Continuous Space", CVPR 2023 Embodied AI Workshop, 2023. pdf
  16. K. Kaneda, Y. Wada, T. Iida, N. Nishizuka, Y. Kubo, K. Sugiura, "Flare Transformer: Solar Flare Prediction using Magnetograms and Sunspot Physical Features", ACCV, pp. 1488-1503, 2022. (acceptance rate = 33.4%) pdf
  17. T. Iida, T. Komatsu, K. Kaneda, T. Hirakawa, T. Yamashita, H. Fujiyoshi, K. Sugiura, "Visual Explanation Generation Based on Lambda Attention Branch Networks", ACCV, pp. 3536-3551, 2022. (acceptance rate = 33.4%) pdf
  18. H. Matsuo, S. Hatanaka, A. Ueda, T. Hirakawa, T. Yamashita, H. Fujiyoshi, K. Sugiura, "Collision Prediction and Visual Explanation Generation Using Structural Knowledge in Object Placement Tasks", IEEE/RSJ IROS, 2022 [poster].
  19. R. Korekata, Y. Yoshida, S. Ishikawa, K. Sugiura, "Switching Funnel UNITER: Multimodal Instruction Comprehension for Object Manipulation Tasks", IEEE/RSJ IROS, 2022 [poster].
  20. M. Kambara and K. Sugiura, "Relational Future Captioning Model for Explaining Likely Collisions in Daily Tasks", IEEE ICIP, 2022. pdf
  21. S. Ishikawa, K. Sugiura, "Moment-based Adversarial Training for Embodied Language Comprehension", IEEE ICPR, 2022. pdf
  22. T. Matsubara, S. Otsuki, Y. Wada, H. Matsuo, T. Komatsu, Y. Iioka, K. Sugiura and H. Saito, "Shared Transformer Encoder with Mask-Based 3D Model Estimation for Container Mass Estimation", IEEE ICASSP, pp. 9142-9146, 2022. pdf
  23. S. Matsumori, K. Shingyouchi, Y. Abe, Y. Fukuchi, K. Sugiura, M. Imai, "Unified Questioner Transformer for Descriptive Question Generation in Goal-Oriented Visual Dialogue", ICCV, pp. 1898-1907, 2021. (acceptance rate = 25.9%) pdf
  24. M. Kambara and K. Sugiura, "Case Relation Transformer: A Crossmodal Language Generation Model for Fetching Instructions", IEEE RAL presented at IEEE/RSJ IROS, 2021. pdf
  25. S. Ishikawa and K. Sugiura, "Target-dependent UNITER: A Transformer-Based Multimodal Language Comprehension Model for Domestic Service Robots", IEEE RAL presented at IEEE/RSJ IROS, 2021. pdf
  26. A. Magassouba, K. Sugiura, and H. Kawai, "CrossMap Transformer: A Crossmodal Masked Path Transformer Using Double Back-Translation for Vision-and-Language Navigation", IEEE RAL presented at IEEE/RSJ IROS, 2021.
  27. H. Itaya, T. Hirakawa, T. Yamashita, H. Fujiyoshi and K. Sugiura, "Visual Explanation using Attention Mechanism in Actor-Critic-based Deep Reinforcement Learning", IJCNN, 2021.
  28. T. Ogura, A. Magassouba, K. Sugiura, T. Hirakawa, T. Yamashita, H. Fujiyoshi, H. Kawai, "Alleviating the Burden of Labeling: Sentence Generation by Attention Branch Encoder-Decoder Network", IEEE RAL presented at IEEE/RSJ IROS, 2020.
  29. P. Shen, X. Lu, K. Sugiura, S. Li, H. Kawai, "Compensation on x-vector for Short Utterance Spoken Language Identification", Odyssey 2020 The Speaker and Language Recognition Workshop, pp. 47-52, Tokyo, Japan, 2020. pdf
  30. A. Magassouba, K. Sugiura, H. Kawai, "A Multimodal Target-Source Classifier with Attention Branches to Understand Ambiguous Instructions for Fetching Daily Objects", IEEE RAL presented at IEEE ICRA, 2020.
  31. A. Magassouba, K. Sugiura, A. Trinh Quoc, H. Kawai, "Understanding Natural Language Instructions for Fetching Daily Objects Using GAN-Based Multimodal Target-Source Classification", IEEE Robotics and Automation Letters presented at IEEE/RSJ IROS, Macau, China, 2019.
  32. A. Magassouba, K. Sugiura, H. Kawai, "Multimodal Attention Branch Network for Perspective-Free Sentence Generation", Conference on Robot Learning (CoRL), Osaka, Japan, 2019. (acceptance rate = 27.6%) pdf
  33. A. Nakayama, A. Magassouba, K. Sugiura, H. Kawai: "PonNet: Object Placeability Classifier for Domestic Service Robots," Third International Workshop on Symbolic-Neural Learning (SNL-2019), Tokyo, Japan, July 11-12, 2019 [poster].
  34. A. Magassouba, K. Sugiura, H. Kawai, "A Multimodal Classifier Generative Adversarial Network for Carry and Place Tasks from Ambiguous Language Instructions", IEEE Robotics and Automation Letters presented at IEEE/RSJ IROS, Madrid, Spain, 2018.
    IROS 2018 RoboCup Best Paper Award
    pdf Slides
  35. K. Sugiura, "SuMo-SS: Submodular Optimization Sensor Scattering for Deploying Sensor Networks by Drones", IEEE Robotics and Automation Letters presented at IEEE/RSJ IROS, Madrid, Spain, 2018. pdf Slides
  36. N. Nishizuka, K. Sugiura, Y. Kubo, M. Den, S. Watari and M. Ishii, "Solar Flare Prediction Using Machine Learning with Multiwavelength Observations", In Proc. IAU Symposium 335, Exeter, UK, Vol. 13, pp. 310-313, 2018.
  37. K. Sugiura and H. Kawai, "Grounded Language Understanding for Manipulation Instructions Using GAN-Based Classification", In Proc. IEEE ASRU, Okinawa, Japan, pp. 519-524, 2017. pdf
  38. K. Sugiura and K. Zettsu: "Analysis of Long-Term and Large-Scale Experiments on Robot Dialogues Using a Cloud Robotics Platform", In Proc. ACM/IEEE HRI, Christchurch, New Zealand, pp. 525-526, 2016. pdf
  39. S. Takeuchi, K. Sugiura, Y. Akahoshi, and K. Zettsu: "Constrained Region Selection Method Based on Configuration Space for Visualization in Scientific Dataset Search," In Proc. IEEE Big Data, vol. 2, pp. 2191-2200, 2015.
  40. K. Sugiura and K. Zettsu: "Rospeex: A Cloud Robotics Platform for Human-Robot Spoken Dialogues", In Proc. IEEE/RSJ IROS, pp. 6155-6160, Hamburg, Germany, Oct 1, 2015. pdf
  41. T. Nose, Y. Arao, T. Kobayashi, K. Sugiura, Y. Shiga, and A. Ito: "Entropy-Based Sentence Selection for Speech Synthesis Using Phonetic and Prosodic Contexts", In Proc. Interspeech, pp. 3491-3495, Dresden, Germany, Sep. 2015.
  42. K. Lwin, K. Zettsu, and K. Sugiura: "Geovisualization and Correlation Analysis between Geotagged Twitter and JMA Rainfall Data: Case of Heavy Rain Disaster in Hiroshima", In Proc. Second IEEE International Conference on Spatial Data Mining and Geographical Knowledge Services, Fuzhou, China, July 2015.
  43. B. T. Ong, K. Sugiura, and K. Zettsu: "Dynamic Pre-training of Deep Recurrent Neural Networks for Predicting Environmental Monitoring Data," In Proc. IEEE Big Data 2014, pp. 760-765, Washington DC, USA, Oct 30, 2014. (acceptance rate = 18.5%)
  44. B. T. Ong, K. Sugiura, and K. Zettsu: "Predicting PM2.5 Concentrations Using Deep Recurrent Neural Networks with Open Data," In Proc. iDB Workshop 2014, Fukuoka, Japan, July 31, 2014.
  45. D. Holz, J. Ruiz-del-Solar, K. Sugiura, S. Wachsmuth: "On RoboCup@Home - Past, Present and Future of a Scientific Competition for Service Robots", In Proc. RoboCup Symposium, pp. 686-697, João Pessoa, Brazil, July 25, 2014.
  46. D. Holz, L. Iocchi, J. Ruiz-del-Solar, K. Sugiura, and T. van der Zant: "RoboCup@Home: A Competition as a Testbed for Domestic Service Robots," In Proc. 1st International Workshop on Intelligent Robot Assistants, Padova, Italy, July 15, 2014.
  47. S. Takeuchi, Y. Akahoshi, B. T. Ong, K. Sugiura, and K. Zettsu: "Spatio-Temporal Pseudo Relevance Feedback for Large-Scale and Heterogeneous Scientific Repositories," In Proc. 2014 IEEE International Congress on Big Data, pp. 669-676, Anchorage, USA, July 1, 2014.
  48. K. Sugiura, Y. Shiga, H. Kawai, T. Misu and C. Hori: "Non-Monologue HMM-Based Speech Synthesis for Service Robots: A Cloud Robotics Approach," In Proc. IEEE ICRA, pp. 2237-2242, Hong Kong, China, June 3, 2014.
  49. J. Tan, T. Inamura, K. Sugiura, T. Nagai, and H. Okada: "Human-Robot Interaction between Virtual and Real Worlds: Motivation from RoboCup@Home," In Proc. International Conference on Social Robotics, pp. 239-248, Bristol, UK, Oct 27, 2013.
  50. T. Inamura, J. Tan, K. Sugiura, T. Nagai, and H. Okada: "Development of RoboCup@Home Simulation towards Long-term Large Scale HRI," In Proc. RoboCup Symposium, Eindhoven, The Netherlands, July 1, 2013.
  51. R. Lee, K. Kim, K. Sugiura, K. Zettsu, Y. Kidawara: "Complementary Integration of Heterogeneous Crowd-sourced Datasets for Enhanced Social Analytics," In Proc. IEEE MDM, vol. 2, pp. 234-243, Milan, Italy, June 3, 2013.
  52. K. Sugiura, R. Lee, H. Kashioka, K. Zettsu, and Y. Kidawara: "Utterance Classification Using Linguistic and Non-Linguistic Information for Network-Based Speech-To-Speech Translation Systems," In Proc. IEEE MDM, vol. 2, pp. 212-216, Milan, Italy, June 3, 2013.
  53. K. Sugiura, Y. Shiga, H. Kawai, T. Misu and C. Hori: "Non-Monologue Speech Synthesis for Service Robots," In Proc. Fifth Workshop on Gaze in HRI, Tokyo, Japan, March 3, 2013.
  54. K. Sugiura, N. Iwahashi and H. Kashioka: "Motion Generation by Reference-Point-Dependent Trajectory HMMs," In Proc. IEEE/RSJ IROS, pp.350-356, San Francisco, USA, September 25-30, 2011.
    IROS 2011 RoboCup Best Paper Award
    pdf
  55. T. Misu, K. Sugiura, K. Ohtake, C. Hori, H. Kashioka, H. Kawai and S. Nakamura: "Modeling Spoken Decision Making Dialogue and Optimization of its Dialogue Strategy", In Proc. SIGDIAL, pp. 221-224, 2011.
  56. T. Misu, K. Sugiura, K. Ohtake, C. Hori, H. Kashioka, H. Kawai and S. Nakamura: "Dialogue Strategy Optimization to Assist User's Decision for Spoken Consulting Dialogue Systems", In Proc. IEEE-SLT, pp. 342-347, 2010.
  57. N. Iwahashi, K. Sugiura, R. Taguchi, T. Nagai, and T. Taniguchi: "Robots That Learn to Communicate: A Developmental Approach to Personally and Physically Situated Human-Robot Conversations", In Proc. The 2010 AAAI Fall Symposium on Dialog with Robots, pp. 38-43, Arlington, Virginia, USA, November 11-13, 2010. pdf
  58. K. Sugiura, N. Iwahashi, H. Kawai, and S. Nakamura: "Active Learning for Generating Motion and Utterances in Object Manipulation Dialogue Tasks", In Proc. The 2010 AAAI Fall Symposium on Dialog with Robots, pp. 115-120, Arlington, Virginia, USA, November 11-13, 2010. pdf
  59. K. Sugiura, N. Iwahashi, H. Kashioka, and S. Nakamura: "Active Learning of Confidence Measure Function in Robot Language Acquisition Framework", In Proc. IEEE/RSJ IROS, pp. 1774-1779, Taipei, Taiwan, Oct 18-22, 2010. pdf
  60. X. Zuo, N. Iwahashi, R. Taguchi, S. Matsuda, K. Sugiura, K. Funakoshi, M. Nakano, and N. Oka: "Detecting Robot-Directed Speech by Situated Understanding in Physical Interaction", In Proc. IEEE RO-MAN, pp. 643-648, 2010.
  61. M. Attamimi, A. Mizutani, T. Nakamura, K. Sugiura, T. Nagai, N. Iwahashi, H. Okada, and T. Omori: "Learning Novel Objects Using Out-of-Vocabulary Word Segmentation and Object Extraction for Home Assistant Robots", In Proc. IEEE ICRA, pp. 745-750, Anchorage, Alaska, USA, May 3-8, 2010.
    2011 RoboCup Research Award (RoboCup Japanese National Committee)
    pdf abstract
    This paper presents a method for learning novel objects from audio-visual input. Objects are learned using out-of-vocabulary word segmentation and object extraction. The latter half of this paper is devoted to evaluations. We propose the use of a task adopted from the RoboCup@Home league as a standard evaluation for real-world applications. We implemented the proposed method on a real humanoid robot and evaluated it through a task called "Supermarket". The results reveal that our integrated system works well in real applications. In fact, our robot exceeded the highest score obtained in the RoboCup@Home 2009 competition.
  62. X. Zuo, N. Iwahashi, R. Taguchi, S. Matsuda, K. Sugiura, K. Funakoshi, M. Nakano, and N. Oka: "Robot-Directed Speech Detection Using Multimodal Semantic Confidence Based on Speech, Image, and Motion", In Proc. IEEE ICASSP, pp. 2458-2461, Dallas, Texas, USA, March 14-19, 2010. abstract
    In this paper, we propose a novel method to detect robot-directed (RD) speech that adopts the Multimodal Semantic Confidence (MSC) measure. The MSC measure is used to decide whether the speech can be interpreted as a feasible action under the current physical situation in an object manipulation task. This measure is calculated by integrating speech, image, and motion confidence measures with weightings that are optimized by logistic regression. Experimental results show that, compared with a baseline method that uses speech confidence only, MSC achieved an absolute increase of 5% for clean speech and 12% for noisy speech in terms of average maximum F-measure.
  63. T. Misu, K. Sugiura, T. Kawahara, K. Ohtake, C. Hori, H. Kashioka, and S. Nakamura: "Online Learning of Bayes Risk-Based Optimization of Dialogue Management for Document Retrieval Systems with Speech Interface", In Proc. IWSDS, 2009.
  64. K. Sugiura, N. Iwahashi, H. Kashioka, and S. Nakamura: "Bayesian Learning of Confidence Measure Function for Generation of Utterances and Motions in Object Manipulation Dialogue Task", In Proc. Interspeech, pp. 2483-2486, Brighton, UK, September, 2009. pdf abstract
    This paper proposes a method that generates motions and utterances in an object manipulation dialogue task. The proposed method integrates belief modules for speech, vision, and motions into a probabilistic framework so that a user's utterances can be understood based on multimodal information. Responses to the utterances are optimized based on an integrated confidence measure function for the integrated belief modules. Bayesian logistic regression is used for the learning of the confidence measure function. The experimental results revealed that the proposed method reduced the failure rate from 12% down to 2.6% while the rejection rate was less than 24%.
  65. N. Iwahashi, R. Taguchi, K. Sugiura, K. Funakoshi, and M. Nakano: "Robots that Learn to Converse: Developmental Approach to Situated Language Processing", In Proc. International Symposium on Speech and Language Processing, pp. 532-537, China, August, 2009.
  66. K. Sugiura and N. Iwahashi: "Motion Recognition and Generation by Combining Reference-Point-Dependent Probabilistic Models", In Proc. IEEE/RSJ IROS, pp. 852-857, Nice, France, September, 2008. pdf abstract
    This paper presents a method to recognize and generate sequential motions for object manipulation such as placing one object on another or rotating it. Motions are learned using reference-point-dependent probabilistic models, which are then transformed to the same coordinate system and combined for motion recognition/generation. We conducted physical experiments in which a user demonstrated the manipulation of puppets and toys, and obtained a recognition accuracy of 63% for the sequential motions. Furthermore, the results of motion generation experiments performed with a robot arm are presented.
  67. K. Sugiura and N. Iwahashi: "Learning Object-Manipulation Verbs for Human-Robot Communication", In Proc. Workshop on Multimodal Interfaces in Semantic Interaction, pp. 32-38, Nagoya, Japan, November, 2007. pdf abstract
    This paper proposes a machine learning method for mapping object-manipulation verbs with sensory inputs and motor outputs that are grounded in the real world. The method learns motion concepts demonstrated by a user and generates a sequence of motions, using reference-point-dependent probability models. Here, the motion concepts are learned by using hidden Markov models (HMMs). In the motion generation phase, our method transforms and combines HMMs to generate trajectories.
  68. K. Sugiura, T. Nishikawa, M. Akahane, and O. Katai: "Autonomous Design of a Line-Following Robot by Exploiting Interaction between Sensory Morphology and Learning Controller", In Proc. the 2nd Biomimetics International Conference, Doshisha, pp. 23-24, Kyoto, Japan, December 2006. abstract
    In this paper, we propose a system that automatically designs the sensory morphology of an adaptive robot. This system designs the sensory morphology in simulation with two kinds of adaptation, ontogenetic adaptation and phylogenetic adaptation, to optimize the learning ability of the robot.
  69. K. Sugiura, D. Matsubara, and O. Katai: "Construction of Robotic Body Schema by Extracting Temporal Information from Sensory Inputs", In Proc. SICE-ICASE, pp. 302-307, Busan, Korea, October, 2006. pdf abstract
    This paper proposes a method that incrementally develops the "body schema" of a robot. The method has three features: 1) estimation of light-sensor positions based on the Time Difference of Arrival (TDOA) of signals and multidimensional scaling (MDS); 2) incremental update of the estimation; and 3) no additional equipment.
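    (A minimal illustrative code sketch of the MDS step is given after this list.)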
  70. K. Sugiura, M. Akahane, T. Shiose, K. Shimohara, and O. Katai: "Exploiting Interaction between Sensory Morphology and Learning", In Proc. IEEE-SMC, Hawaii, USA, pp. 883-888, 2005. pdf abstract
    This paper proposes a system that automatically designs the sensory morphology of a line-following robot. The designed robot outperforms hand-coded designs in learning speed and accuracy.
  71. M. Akahane, K. Sugiura, T. Shiose, H. Kawakami, and O. Katai: "Autonomous Design of Robot Morphology for Learning Behavior Using Evolutionary Computation", In Proc. 2005 Japan-Australia Workshop on Intelligent and Evolutionary Systems, Hakodate, Japan, CD-ROM, 2005.
  72. K. Sugiura, T. Shiose, H. Kawakami, and O. Katai: "Co-evolution of Sensors and Controllers", In Proc. 2003 Asia Pacific Symposium on Intelligent and Evolutionary Systems (IES2003), Kitakyushu, Japan, pp. 145-150, 2003. pdf abstract
    In this paper we investigate the evolutionary development of embodied agents that are allowed to evolve not only control mechanisms but also the sensitivity and temporal resolution of their sensors. The experimental results indicate that the sensors and controller co-evolve in an agent through interaction with the environment.
  73. K. Sugiura, H. Suzuki, T. Shiose, H. Kawakami, and O. Katai: "Evolution of Rewriting Rule Sets Using String-Based Tierra", In Proc. ECAL, Dortmund, Germany, pp. 69-77, 2003. pdf abstract
    We have studied a string rewriting system to improve the basic design of an artificial life system named String-based Tierra. The instruction set used in String-based Tierra is converted into a set of rewriting rules using regular expressions.
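Note: entry 69 above reconstructs sensor positions from pairwise timing information using multidimensional scaling (MDS). As a minimal sketch under simplifying assumptions (pairwise distances between sensors are already available, and classical MDS is used), the reconstruction step might look as follows in Python; the function name and toy layout are illustrative, not from the paper:

    import numpy as np

    def classical_mds(D, dim=2):
        # D: symmetric matrix of pairwise distances between sensors.
        # Returns coordinates recovered up to rotation/reflection/translation.
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
        eigval, eigvec = np.linalg.eigh(B)
        idx = np.argsort(eigval)[::-1][:dim]     # top-dim eigenpairs
        return eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0.0))

    # Toy example: recover a unit-square layout of four sensors
    # from their pairwise Euclidean distances.
    P = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    print(classical_mds(D))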

Book Chapters

  1. T. Nagai, T. Nakamura, K. Sugiura, T. Taniguchi, Y. Suzuki, and M. Hirata: "Cooperative Control of Multiple CAs", Cybernetic Avatar, H. Ishiguro, F. Ueno, E. Tachibana (Eds.), Springer, 2024.
  2. A. Magassouba, K. Sugiura, and H. Kawai: "Latent-Space Data Augmentation for Visually-Grounded Language Understanding", Advances in Artificial Intelligence, Ohsawa, Y., Yada, K., Ito, T., Takama, Y., Sato-Shimokawara, E., Abe, A., Mori, J., Matsumura, N. (Eds.), Springer, to appear. download
  3. K. Sugiura, S. Behnke, D. Kulic, and K. Yamazaki (eds): "Preface: Special Issue on Machine Learning and Data Engineering in Robotics", Advanced Robotics, Vol. 30, Issue 11-12, May 10, 2016. pdf
  4. R. A. C. Bianchi, H. Levent Akin, S. Ramamoorthy, and K. Sugiura (eds): "RoboCup 2014: Robot World Cup XVIII", Lecture Notes in Computer Science 8992 (ISBN 978-3-319-18614-6), Springer, July 15, 2014.
  5. M. Attamimi, T. Nakamura, K. Sugiura, T. Nagai, and N. Iwahashi, "Learning Novel Objects for Domestic Service Robots", The Future of Humanoid Robots: Research and Applications (ISBN 978-953-307-951-6), InTech, pp. 257-276, 2011.
  6. T. Misu, K. Sugiura, T. Kawahara, K. Ohtake, C. Hori, H. Kashioka and S. Nakamura: "Online learning of Bayes Risk-based Optimization of Dialogue Management for Document Retrieval Systems with Speech Interface (Chapter 2)", Spoken Dialogue Systems Technology and Design (ISBN 978-1441979339), Springer, pp. 29-62, 2010.
  7. K. Sugiura, N. Iwahashi, H. Kashioka, and S. Nakamura: "Statistical Imitation Learning in Sequential Object Manipulation Tasks", Advances in Robot Manipulators, Ernest Hall (Ed.), InTech, pp. 589-606, 2010. download

Domestic Conferences

Komei Sugiura has published 131 papers in domestic conferences.

Full list is shown here.


Patents

Komei Sugiura has published 19 patents.

Full list is shown here.
