Invited Speakers

Keynote 1

A Smart Vision System for Robot Navigation

Wen Gao
Peking University, Beijing, China


    Using a smart vision system to guide robot navigation in complex environments, as humans and other animals do, has long been a dream of robotics researchers. In this talk, I will discuss the latest technologies ready to be used in this field, including real-time video processing systems, low-delay and robust video transmission, mobile visual search, scene analysis, etc. With the development of machine learning and hardware, we seem to be getting closer to this final goal, but several problems remain to be solved. This talk will address current technical challenges as well as future research directions.


Short biographical note:

    Wen Gao received his Ph.D. degree in electronics engineering from the University of Tokyo in 1991. He has been a professor of computer science at Peking University since 2006. He was with the Harbin Institute of Technology from 1991 to 1995 as a professor and head of the Department of Computer Science, and with the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), from 1996 to 2005. During his career at CAS, he served as the managing director of ICT from 1998 to 1999, the executive vice president of the Graduate School of CAS from 2000 to 2004, and the vice president of the University of Science and Technology of China from 2000 to 2003. Dr. Gao works in the areas of video coding, video processing, computer vision, and multimedia. He is a Member of the Chinese Academy of Engineering, a Fellow of the IEEE, and a Fellow of the ACM.


Keynote 2

Force Management for Articulated Robots: Working with Robots

Yuki Nakagawa
RT Corporation, Tokyo, Japan


    "Life with Robots", the corporate mission of RT, is not in the distant future. In 2009, we developed RIC90, a 120 cm tall humanoid robot, to explore how such robots can collaborate with humans in daily life. Some of you might remember a humanoid robot in a cat suit walking around the competition site at RoboCup 2011 in Turkey. The RIC in RIC90 stands for "Robot Inside Character", and 90 is the height of its shoulder. It can wear costumes, such as a cat suit. While RIC90 runs on various operating systems, the Android version received attention due to its presence at Google I/O (2011 and 2012) as a demonstration of future cloud robotics. After that, we realized that a force management system had to be developed for humanoid and articulated robots to work with humans safely and perform smart behaviors. This talk describes why we need such torque-command motors and how they can be applied to broader robotics applications.


Short biographical note:

    Yuki Nakagawa, the CEO of RT Corporation, headquartered in the center of 'Akihabara Electric Town', Tokyo, Japan, is a well-established specialist in robotics. She has developed various intelligent robots, such as autonomous and collaborative mobile robots for the RoboCup small-size robot soccer league and the cutting-edge humanoid robot RIC90. Prior to her venture at RT Corporation, she participated as a researcher in the Kitano Symbiotic Systems Project, part of the Exploratory Research for Advanced Technology program within the Japan Science and Technology Agency. She also served as a lead curator at the National Museum of Emerging Science and Innovation, and as an assistant professor at the Interdisciplinary Graduate School of Science and Engineering at the Tokyo Institute of Technology. She has been commended for numerous significant contributions to robotics by the Japanese Society for Artificial Intelligence, the Japan Society for Fuzzy Theory and Intelligent Informatics, and RoboCup Japan Open. She earned an M.Sc. in System Engineering and a B.E. in Measurement and Control Engineering from Hosei University, Japan.


Keynote 3

Machine Learning for Robots: Perception, Planning and Motor Control

Daniel Lee
University of Pennsylvania, Philadelphia, USA


    Machines today excel at seemingly complex games such as chess and trivia contests, yet still struggle with basic perceptual, planning, and motor tasks in the physical world.  What are the appropriate representations needed to execute and adapt robust behaviors in real-time? I will present some examples of learning algorithms from my group that have been applied to robots for monocular visual odometry, high-dimensional trajectory planning, and legged locomotion. These algorithms employ a variety of techniques central to machine learning: dimensionality reduction, online learning, and reinforcement learning.  I will show and discuss applications of these algorithms to autonomous vehicles and humanoid robots.


Short biographical note:

    Daniel Lee is the Evan C. Thompson Term Chair, Raymond S. Markowitz Faculty Fellow, and Professor in the School of Engineering and Applied Science at the University of Pennsylvania. He received his B.A. summa cum laude in Physics from Harvard University in 1990 and his Ph.D. in Condensed Matter Physics from the Massachusetts Institute of Technology in 1995. Before coming to Penn, he was a researcher at AT&T and Lucent Bell Laboratories in the Theoretical Physics and Biological Computation departments. He is a Fellow of the IEEE and has received the National Science Foundation CAREER award and the University of Pennsylvania Lindback award for distinguished teaching. He was also a fellow of the Hebrew University Institute of Advanced Studies in Jerusalem and an affiliate of the Korea Advanced Institute of Science and Technology, and he organized the US-Japan National Academy of Engineering Frontiers of Engineering symposium. He leads the Penn RoboCup team and directs the GRASP Robotics Laboratory, where his group focuses on understanding general computational principles in biological systems and on applying that knowledge to build autonomous systems.


Invited talk from AAAI-15

From Goals to Behaviors: Automating the Search for Goal-Achieving Finite State Machines

Siddharth Srivastava
United Technologies Research Center


    Information about the high-level strategy for solving a problem is widely used as an input for increasing the performance and utility of autonomous agents. Such a strategy captures vital insights for solving broad classes of problems in an application domain and is typically compiled by domain experts. In this talk I will discuss some of the recent progress made by the automated planning community towards computing such strategies in the form of generalized plans. I will present a brief overview of research on the topic while focusing on the notions of correctness and on efficient methods for computing generalized plans by utilizing existing planners. I will also illustrate some of our recent results on computing generalized plans under numeric uncertainty and on their applications in solving real-world mobile manipulation tasks, such as doing the laundry using the PR2 robot.