- Oliver Brock (TU Berlin): Interactive Perception in Biological and Artificial Systems
- Florian Pokorny (KTH): Grasping and Topology
- Matt Mason (CMU): You Know What You Did
- Aaron Bobick (GaTech): Perception
- Stan Birchfield (Clemson): Interactive Perception for Non-Rigid and Articulated Objects
- Dieter Fox (UW): Learning to Control Manipulators
In this introductory talk, I will attempt to set the stage for the workshop and its presentations, discussions, and debates. As this is the first robotics workshop on interactive perception (as far as I know), it is worth asking the question: what is interactive perception? Is it something new? Roboticists have always combined action and perception, so what is the deal here? To answer these questions, I will take a look at some striking examples of interactive perception in biological systems. Analyzing these examples, one can make claims about why interactive perception systems have been so successful in evolution. Interactive perception, to put it in the words of philosopher Andy Clark, serves “to reduce the initially high-dimensional available dynamics to a much lower dimensional structure”. I follow up on his statement by arguing that interactive perception can make many robotics problems easier by revealing the “right” sensorimotor subspaces. If I can convince you of this, it should have some implications for our research, which we should discuss during the workshop. I hope to put forward some criteria for interactive perception that can help evaluate the work presented at the workshop and hopefully will lead to some interesting discussions and ideas for future research.
Oliver Brock is the Alexander von Humboldt-Professor of Robotics in the Faculty of Electrical Engineering and Computer Science at the Technische Universität Berlin in Germany. He received his Diploma in Computer Science from the Technische Universität Berlin and his Master's and Ph.D. in Computer Science from Stanford University. He also held post-doctoral positions at Rice University and Stanford University. He was an Assistant Professor and Associate Professor in the Department of Computer Science at the University of Massachusetts Amherst, prior to moving back to the Technische Universität Berlin. The research of Brock's lab, the Robotics and Biology Laboratory, focuses on autonomous mobile manipulation, interactive perception, manipulation, and the application of algorithms and concepts from robotics to computational problems in structural molecular biology.
In everyday life, we humans routinely adopt an approach of “manipulation-guided sensing,” or "interactive perception." For example, we shuffle through papers on a desk or sift through objects in a drawer to more quickly and efficiently identify items of interest. In such cases, it is our interaction with the environment that increases our understanding of the surroundings, in order to more effectively guide our actions to achieve the desired goal. In a similar manner, it is becoming increasingly clear that for robotics systems to operate robustly in the real world they will need to rely, at least in part, on this interleaving of perception and manipulation. In this talk I will describe some of the work we have been doing in our lab to use interactive perception to more effectively deal with two challenging categories of items: non-rigid and articulated objects. In the context of automated laundry handling, I will present research aimed at isolating, classifying, unfolding, and pose estimation of highly non-rigid articles of clothing. In the context of domestic robotics, I will present research for recovering the 3D geometry of articulated objects in order to better facilitate real-world interaction. Throughout the presentation we will observe numerous instances where a problem that would be nearly impossible with purely passive sensing (especially with currently known technology) becomes tractable once manipulation is included in the loop.
Stan Birchfield received a Ph.D. from Stanford University in 1999, an M.S. from Stanford in 1996, and a B.S. from Clemson in 1993, all in Electrical Engineering. While at Stanford, his research was supported by a National Science Foundation Graduate Research Fellowship, and he was part of the team that won first place at the AAAI Mobile Robotics Competition of 1994. After graduating from Stanford, he was a research engineer with Quindi Corporation, a startup company in Palo Alto, California, where he developed algorithms for intelligent audio and video and was the lead engineer and principal architect of a commercial product. Over the years he has worked with or consulted for various companies, including Sun Microsystems, SRI International, Canon, Microsoft, and Autodesk. More recently, he has been instrumental in co-founding TrafficVision, which uses computer vision to automatically collect aggregate traffic parameters from live video feeds. Since 2003 he has been with the Electrical and Computer Engineering Department of Clemson University, where he is currently on a leave of absence, having recently joined the robotics group of Microsoft in Redmond, Washington. Dr. Birchfield has authored or co-authored more than 60 publications in the areas of computer vision, stereo correspondence, visual tracking, spatial acoustics, and mobile robotics; and his open-source Kanade-Lucas-Tomasi (KLT) feature tracker has been used by thousands of researchers around the world.
In this short talk, I will discuss some of the current research directions we are investigating at the Centre for Autonomous Systems at KTH Royal Institute of Technology, Sweden. At our lab, which is headed by Prof. Danica Kragic, we have recently started to investigate methods inspired by topology for the purpose of grasping and manipulation. Our key motivation is the need for novel continuous representations for grasping that allow us to carry out motion planning in a new set of coordinates parametrizing concepts such as "winding around" an object. In our current ICRA paper, "Grasping Objects with Holes: A Topological Approach", we investigated approaches using winding numbers and Gauss linking integrals, while our recently accepted RSS 2013 paper, "Grasp Moduli Spaces", proposes a new continuous representation of both grasps and shapes in a single space, allowing for gradient-based optimization. In this talk, I will give an overview of some of these ideas.
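To give a concrete feel for the topological quantities mentioned above, the following is a minimal, illustrative sketch (not the authors' implementation) of the planar winding number: the signed count of how many times a closed curve, here a polygon, wraps around a query point, computed by accumulating signed angle increments.

```python
import math

def winding_number(curve, point):
    """Signed number of times a closed planar polygonal curve
    winds around a point not lying on the curve.

    curve: list of (x, y) vertices, traversed in order (closed implicitly).
    point: (x, y) query point.
    """
    px, py = point
    total = 0.0
    n = len(curve)
    for i in range(n):
        x1, y1 = curve[i]
        x2, y2 = curve[(i + 1) % n]
        # angle swept at `point` by the edge (x1, y1) -> (x2, y2)
        a1 = math.atan2(y1 - py, x1 - px)
        a2 = math.atan2(y2 - py, x2 - px)
        d = a2 - a1
        # normalize the increment to (-pi, pi] so the signed sweep accumulates
        while d > math.pi:
            d -= 2.0 * math.pi
        while d <= -math.pi:
            d += 2.0 * math.pi
        total += d
    # total sweep is an integer multiple of 2*pi
    return round(total / (2.0 * math.pi))

# A unit square traversed counter-clockwise winds once around its center,
# and zero times around any point outside it.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(winding_number(square, (0.5, 0.5)))  # 1
print(winding_number(square, (2.0, 2.0)))  # 0
```

In grasping terms, such invariants let one ask whether a finger or cable configuration "encircles" part of an object (e.g. a handle), a property that persists under continuous deformation; the Gauss linking integral mentioned in the abstract plays the analogous role for pairs of curves in 3D.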
Florian Pokorny received a BSc (Honours) in Mathematics from the University of Edinburgh in 2005. He then obtained a Master of Advanced Studies in Mathematics (Part III of the Mathematical Tripos) from the University of Cambridge before completing his PhD in pure mathematics under the supervision of Prof. Michael Singer at the University of Edinburgh in 2011. His PhD thesis, entitled "The Bergman Kernel on Toric Kähler Manifolds", investigated the relationship between algebraic, topological, and geometric properties of toric Kähler manifolds. In 2011, he changed subjects and joined the Centre for Autonomous Systems, KTH Royal Institute of Technology, Sweden (headed by Prof. Danica Kragic) as a postdoc. He is interested in developing new ideas influenced by approaches from topology and Riemannian geometry for robotics and machine learning. Florian is involved in two of Prof. Kragic's ongoing projects, namely the EU FP7 project Topology Based Motion Synthesis for Dexterous Manipulation (TOMSY) and her ERC grant FLEXBOT. Website: http://www.csc.kth.se/~fpokorny