Invited Speakers



Sachin Chitta

Motion Planning with Constraints

Abstract:

Robots executing tasks in real-world environments are subject to multiple constraints, e.g., torque constraints and visibility constraints. In this talk, I will present some of our recent efforts in motion planning with constraints using sampling-based and search-based planners. The first approach focuses on the sampling step, which we implement as the drawing of samples from a set that has been computed in advance. We show how this approach can lead to significant improvements in planning with different types of constraints. In the second approach, we address the problem of search-based planning for dual-arm manipulation. Here, we exploit the underlying structure of the problem to define a lower-dimensional space where the search-based planning method is more tractable. Our approach systematically constructs a graph in task space and generates consistent, low-cost motion trajectories while providing guarantees on completeness and bounds on the sub-optimality of the solution. We show how this approach can be used to generate consistent paths for dual-arm manipulation. I will conclude with recent work on constructing a benchmarking framework for motion planners.
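
To make the precomputed-sample idea concrete, here is a minimal, hypothetical Python sketch (not the speaker's implementation): constraint-satisfying configurations are generated offline into a database, and the planner's online sampling step simply draws from that set. The arm model and the constraint are toy placeholders.

```python
import random
import math

# Toy stand-in for the robot: a planar 2-link arm with unit link lengths.
def end_effector(q):
    x = math.cos(q[0]) + math.cos(q[0] + q[1])
    y = math.sin(q[0]) + math.sin(q[0] + q[1])
    return x, y

def satisfies_constraint(q):
    # Example (hypothetical) constraint: end-effector stays above the plane y >= 0.
    return end_effector(q)[1] >= 0.0

# Offline phase: precompute a database of constraint-satisfying configurations.
def build_sample_set(n=10000, seed=0):
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        q = (rng.uniform(-math.pi, math.pi), rng.uniform(-math.pi, math.pi))
        if satisfies_constraint(q):
            samples.append(q)
    return samples

# Online phase: the planner's sampling step draws from the precomputed set,
# so every candidate configuration already satisfies the constraint.
def sample_configuration(sample_set, rng=random):
    return rng.choice(sample_set)

if __name__ == "__main__":
    db = build_sample_set()
    print("example constraint-satisfying sample:", sample_configuration(db))
```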

Bio:

Sachin Chitta is a research scientist at Willow Garage. He received a PhD from the GRASP Lab at the University of Pennsylvania in 2005 and was a post-doctoral fellow in the GRASP Lab prior to joining Willow Garage in 2007. He received his B.Tech. from IIT, Mumbai in 1999. As a graduate student, he worked on the dynamics and control of modular locomotion systems. As a post-doc, he was part of the LittleDog project at Penn, which aimed to design controllers for robotic walking over extremely rough terrain. Sachin also worked in the Modular Robotics Lab at Penn, developing interesting dynamic gaits for modular robots.


Ludovic Righetti

Going into Contact: Strategies for Robust Manipulation

Abstract:

Touching the world is at the core of robotic manipulation. It is therefore crucial to understand how to exert and exploit contact forces in order to endow robots with useful manipulation skills. How are these forces controlled? How does a robot acquire force or impedance policies? And how does a robot quickly adapt to unexpected events in order to robustly achieve manipulation tasks? In this talk, I will present our approach to manipulation that emphasizes the importance of contact interactions. More specifically, I will discuss three different but complementary aspects of our work. First, I will show how the use of model-based and force controllers can dramatically improve manipulation task performance while simplifying the problem of finding initial grasp poses. Second, building on this control architecture, I will show why it is interesting to control both end-effector positions and contact forces during a complex manipulation task and how such control policies can be quickly acquired using reinforcement learning. Finally, I will show how we can use sensory information from past experiences together with movement primitives to generate motions that can quickly adapt in an uncertain environment and significantly increase the robustness of grasping and manipulation skills.
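
As a rough illustration of the kind of controller that mixes position and contact-force objectives, the sketch below combines an impedance (spring-damper) term for the end-effector position in the directions tangent to the contact with explicit tracking of a desired force along the contact normal. The gains, signals, and the simple 3-D wrench representation are illustrative assumptions, not the speaker's controllers.

```python
import numpy as np

def hybrid_force_position_command(x, xd, x_des, f_meas, f_des, normal,
                                  kp=200.0, kd=20.0, kf=0.05):
    """Toy commanded end-effector force: impedance tracking of a desired position
    in the subspace tangent to the contact, plus proportional tracking of a
    desired contact force along the contact normal."""
    normal = normal / np.linalg.norm(normal)
    # Projector onto the tangent (motion-controlled) subspace.
    P_motion = np.eye(3) - np.outer(normal, normal)
    # Impedance (spring-damper) term for position tracking.
    f_imp = kp * (x_des - x) - kd * xd
    # Proportional correction of the measured contact force along the normal.
    f_force = (f_des + kf * (f_des - f_meas)) * normal
    return P_motion @ f_imp + f_force

if __name__ == "__main__":
    cmd = hybrid_force_position_command(
        x=np.array([0.4, 0.0, 0.3]), xd=np.zeros(3),
        x_des=np.array([0.4, 0.1, 0.3]),
        f_meas=2.0, f_des=5.0, normal=np.array([0.0, 0.0, 1.0]))
    print("commanded wrench:", cmd)
```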

Bio:

Ludovic Righetti has been a postdoctoral researcher at the Computational Learning and Motor Control Lab (University of Southern California) since March 2009 and at the Max Planck Institute for Intelligent Systems (Tübingen, Germany) since September 2011. He studied at the Ecole Polytechnique Fédérale de Lausanne, where he received a diploma in Computer Science (eq. MSc) in 2004 and a Doctorate in Science in 2008. His doctoral thesis was awarded the 2010 Georges Giralt PhD Award given by the European Robotics Research Network (EURON) for the best robotics thesis in Europe. His research focuses on the generation and control of movements for autonomous robots, with a special emphasis on legged locomotion and manipulation.


Marc Toussaint

Symbol Learning from a Relational RL Perspective

Abstract:

Where do the symbols come from? This is perhaps the most common question people ask in response to our work on relational Reinforcement Learning in Robotics. (In fact, the question of which symbols are appropriate for describing state features and actions arises throughout robotics.) The relational Reinforcement Learning perspective implies a concrete definition of what good symbols are: we want to learn symbols such that model-based relational RL using these symbols leads to high-reward behavior. This objective is rather indirect. Given a set of candidate symbols, we can learn a relational world model and reward model in terms of these symbols, use this for planning, and evaluate the choice of symbols based on the success of these plans. In this talk I will report on our efforts to realize these ideas for symbol learning and focus on the many interesting questions that arise within this approach.
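
The evaluation loop the abstract describes can be written down schematically; the sketch below is an illustrative outline only, with `learn_model`, `plan`, and `execute` as hypothetical callbacks standing in for the relational model learner, the planner, and execution on the robot or in simulation.

```python
def evaluate_symbol_set(symbols, experience, learn_model, plan, execute, n_trials=10):
    """Score a candidate symbol set by how well model-based planning with those
    symbols performs: learn a relational world and reward model in terms of the
    symbols, plan with it, and measure the reward the resulting plans achieve."""
    model = learn_model(symbols, experience)   # relational world + reward model
    total_reward = 0.0
    for _ in range(n_trials):
        policy = plan(model)                   # model-based relational planning
        total_reward += execute(policy)        # empirical return of the plan
    return total_reward / n_trials

def select_symbols(candidate_symbol_sets, experience, learn_model, plan, execute):
    """Pick the candidate symbol set whose learned model yields the best plans."""
    return max(candidate_symbol_sets,
               key=lambda s: evaluate_symbol_set(s, experience,
                                                 learn_model, plan, execute))
```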

Bio:

Marc Toussaint has been an assistant professor for Machine Learning and Robotics at the Free University Berlin since 2010. From 2007 to 2010 he led an Emmy Noether research group on the same topic at the Berlin University of Technology and, before that, spent two years as a post-doc at the University of Edinburgh with Prof. Chris Williams and Prof. Sethu Vijayakumar. His recent research focuses on Machine Learning and probabilistic AI methods (in particular (model-based) Reinforcement Learning and probabilistic inference) and their application in robotics. The core motivating questions of his research are: What are appropriate representations (e.g., symbols, temporal abstractions, relational representations) to enable planning in real-world environments; how can continuous geometric and symbolic logic representations be coupled; and how might they be learnt from experience? He is currently the coordinator of the German research priority programme Autonomous Learning, a member of the editorial board of the Journal of AI Research (JAIR), a grant reviewer for the German Research Foundation, and a programme committee member of several top conferences in the field (UAI, RSS, ICRA, IROS, AIStats, ICML). His work was awarded best paper at ICMLA 2007 and second best paper at UAI 2008.


Sertac Karaman

Bio:

Sertac Karaman holds B.S. degrees in Mechanical Engineering and in Computer Engineering, both from Istanbul Technical University, and an S.M. degree in Mechanical Engineering from the Massachusetts Institute of Technology (MIT). Currently, he is a Ph.D. candidate in the Department of Electrical Engineering and Computer Science at MIT. He is the recipient of the AIAA Wright Brothers Graduate Award in 2011, the NVIDIA Graduate Fellowship in 2011, and the Willow Garage Best Open-source Code Award (with Emilio Frazzoli) in 2010.


Aude Billard

Kinematically Feasible Grasp Synthesis and Robust Motion Generation for Real-World Mobile Manipulation

Abstract:

Adaptability and fast reactivity to constantly changing environments have been identified as major challenges of mobile manipulation in the real world. A mobile manipulation agent must possess these properties while "performing physical work in the environment, other than self motion". This calls for flexible learning techniques which, apart from handling noise, must be able to recover quickly from unseen perturbations. This talk will present recent advances by our group in the field of grasp synthesis and robust motion generation in the light of the additional challenges posed by mobile manipulation.

Optimal grasp synthesis has traditionally been solved independently of the hand kinematics. However, optimal grasps depend on the configuration of the robot hand as much as they do on the contact points; it would hence be desirable to solve both in a single step. We show, by taking advantage of new developments in non-linear optimization, that this problem can be formulated and solved in a one-shot framework. However, implementing these solutions on a real platform presents further challenges due to inherent noise and hardware imperfections. Closing the control loop is therefore needed even when optimal solutions are obtained using the above technique. We use state-of-the-art sensing technology provided by SynTouch BioTac fingertip sensors to modify the obtained solutions through online sensing of touch and slippage.
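
As a toy illustration of formulating grasp synthesis directly over the hand's joint variables, the following sketch uses a standard nonlinear solver (SciPy's SLSQP) to place two single-joint planar fingers on a circular object while favoring roughly antipodal contacts. The kinematic model and objective are invented stand-ins, not the optimization used in this work.

```python
import numpy as np
from scipy.optimize import minimize

# Toy planar setup: two 1-DOF "fingers" hinged at fixed bases; optimizing their
# joint angles directly keeps the grasp kinematically feasible by construction.
OBJECT_CENTER = np.array([0.0, 0.15])
OBJECT_RADIUS = 0.05
FINGER_BASES = [np.array([-0.08, 0.0]), np.array([0.08, 0.0])]
LINK_LENGTH = 0.18

def fingertip(base, angle):
    return base + LINK_LENGTH * np.array([np.cos(angle), np.sin(angle)])

def contacts(q):
    return [fingertip(b, a) for b, a in zip(FINGER_BASES, q)]

def grasp_cost(q):
    # Reward antipodal contacts: the two contact normals should roughly cancel.
    p0, p1 = contacts(q)
    return float(np.linalg.norm((p0 - OBJECT_CENTER) + (p1 - OBJECT_CENTER)) ** 2)

def on_surface(q):
    # Equality constraints: each fingertip must lie on the object surface.
    return np.array([np.linalg.norm(p - OBJECT_CENTER) - OBJECT_RADIUS
                     for p in contacts(q)])

result = minimize(grasp_cost, x0=np.array([1.2, 1.9]), method="SLSQP",
                  constraints=[{"type": "eq", "fun": on_surface}])
print("joint angles:", result.x, "cost:", result.fun)
```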

Second, we focus on the problem of motion generation for acquiring the desired grasps using Programming by Demonstration. We take inspiration from the way humans reach to grasp everyday objects. We use the Coupled Dynamical System (CDS) model to encode the coupled dynamics of hand transport and finger closure. This results in a human-like approach motion and ensures proper closure of the fingers even in the presence of unseen perturbations to the target object, e.g., when the fingers need to reopen as the object is moved further away during the motion. Furthermore, we combine the dynamics for the different grasping locations obtained from the above grasp synthesis technique. We modify the standard SVM formulation to encode a space partitioning, where each partition encloses separate reach-and-grasp motion dynamics directed toward an individual grasping point. This provides a fast and reliable method to select one out of many possible grasping points and, if required, switch between them upon sudden perturbations.
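
The coupling idea can be illustrated with a toy simulation (not the learned CDS model): hand transport follows an attractor toward the target, and the finger aperture tracks a desired value computed from the remaining hand-target distance, so a target that jumps away mid-reach automatically causes the fingers to reopen. All dynamics and parameters below are illustrative.

```python
import numpy as np

def simulate_reach_and_grasp(x0, target, duration=3.0, dt=0.01,
                             alpha=4.0, beta=6.0, open_aperture=1.0):
    """Toy coupled dynamics: hand position x is attracted to the target, and the
    finger aperture g tracks a desired value that depends on the remaining
    hand-target distance, so moving the target away reopens the hand."""
    x = np.array(x0, dtype=float)
    g = open_aperture                      # 1.0 = fully open, 0.0 = closed
    trace = []
    for step in range(int(duration / dt)):
        t = step * dt
        if abs(t - 1.5) < dt / 2:          # perturbation: target jumps away mid-reach
            target = target + np.array([0.2, 0.0, 0.0])
        dist = np.linalg.norm(target - x)
        x = x + dt * alpha * (target - x)            # transport dynamics (attractor)
        g_des = min(open_aperture, dist / 0.2)       # desired aperture from distance
        g = g + dt * beta * (g_des - g)              # finger dynamics coupled via dist
        trace.append((t, dist, g))
    return trace

if __name__ == "__main__":
    trace = simulate_reach_and_grasp([0.0, 0.0, 0.0], np.array([0.3, 0.1, 0.2]))
    for t, dist, g in trace[::50]:
        print(f"t={t:.2f}s  distance={dist:.3f}  aperture={g:.2f}")
```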

Overall, the above approaches give an end-to-end solution for a) detecting good-quality grasps that take the hand kinematics into account, b) selecting one among many grasps depending on the relative robot and object positions, and c) moving toward the selected grasping point in a smooth, human-like and robust manner.

Bio:

Prof. Aude Billard obtained a B.Sc. and an M.Sc. in Physics from EPFL in 1994 and 1995, respectively, and a PhD in Artificial Intelligence from the University of Edinburgh in 1998. She was a research assistant professor at the University of Southern California until 2002 and then an assistant professor at EPFL until 2005. In 2006, she became Associate Professor and head of the LASA laboratory at EPFL.


Mike Stilman

Planning with Movable Obstacles in the Real World

Abstract:

Real mobile manipulators that perform task-level objectives should not get stuck just because there is a chair in their path or a box in the way of reaching the target object. Rather than using motion planning to avoid obstacles, we allow the robot to autonomously plan to move obstacles out of the way! The resulting motion planning problem is exponentially complex, and its real-world implementation requires further consideration of limited sensor range and uncertainty about the environment.

In this talk, I will give an overview of the last decade of development in the field of Navigation Among Movable Obstacles (NAMO). First, we will discuss representations and algorithms such as free-space decomposition and backward chaining that made it possible to handle the exponential complexity of deterministic NAMO problems. Second, we will look at extensions of these algorithms to 3D environments and alternative actions such as pushing. Finally, I will introduce our latest developments in NAMO that allow the robot to make provably efficient, rational real-time decisions to move obstacles in spaces where it has limited sensor range and significant uncertainty about the objects with which it interacts.
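
For flavor, the skeleton below sketches a simplified recursion over blocking obstacles in the spirit of deterministic NAMO: find a navigation path that ignores movable obstacles, plan an action that clears the first obstacle blocking it, and recurse from the resulting state. It is an illustrative outline with hypothetical callbacks, not the free-space decomposition or backward-chaining algorithms discussed in the talk.

```python
def namo_plan(state, goal, find_path, blocking_obstacles, plan_obstacle_move,
              apply_move, depth=5):
    """Simplified NAMO planning skeleton. `find_path`, `blocking_obstacles`,
    `plan_obstacle_move`, and `apply_move` are hypothetical callbacks for the
    underlying geometric planners and world model."""
    if depth == 0:
        return None                        # depth bound: give up on this branch
    path = find_path(state, goal)          # navigation path ignoring movable obstacles
    if path is None:
        return None
    blockers = blocking_obstacles(state, path)
    if not blockers:
        return [("navigate", path)]        # free path: just drive it
    obstacle = blockers[0]                 # clear the first blocking obstacle
    move = plan_obstacle_move(state, obstacle, path)   # e.g. a push or pull action
    if move is None:
        return None                        # cannot clear this obstacle: backtrack
    rest = namo_plan(apply_move(state, move), goal, find_path,
                     blocking_obstacles, plan_obstacle_move, apply_move, depth - 1)
    return [move] + rest if rest is not None else None
```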

Bio:

Mike Stilman is an Assistant Professor of Robotics and Interactive Computing at the Georgia Institute of Technology. His Humanoid Robotics Lab (http://www.golems.org) develops novel algorithms for optimal planning and control of humanoids, robot manipulators and robots that interact with their environments. Stilman completed his BA in Mathematics and BS in Computer Science at Stanford University. He received a PhD from Carnegie Mellon University for his work on NAMO (Navigation Among Movable Obstacles). As a visiting researcher at the Digital Human Laboratory (AIST) in Japan, he implemented this work on humanoid robots, collaboratively developing the first humanoid that autonomously manipulated unspecified objects to navigate its environment. His research now extends this work to unknown and uncertain environments and incorporates whole-body motion and the design of a novel humanoid robot, Golem Krang, with human and even super-human physical capabilities.