Andrew H. Fagg: Robotics and Machine Learning

Manipulation Skills in Humans and Robots

Skills involving reaching, grasping, and manipulation are a rich focus of inquiry because they enable both humans and monkeys to affect their environments in a flexible manner. By studying these motor skills, I hope to build robots that will be able to perform tasks within unstructured human environments, as well as environments that are inhospitable to humans, including space. I am particularly interested in drawing inspiration for robot control systems from the study of biological control, and in the use of robots as a mechanism with which to test biological theories of motor control.

One of my projects has been the design and construction of the UMass Torso robot. As with many humanoid-form robots, the UMass Torso consists of many controllable degrees of freedom and sensors. Thus, there are often many ways in which a task may be accomplished with the available sensor and actuator set. Although this design increases the complexity of the control and sensing problem, this redundancy can be exploited to allow the robot to perform a wide range of tasks while optimizing for a variety of task criteria. The research challenge is to manage these complexities and to provide layers of abstraction that 1) enable a programmer to work at an intuitive task level, and 2) allow planning and machine learning algorithms to be used in a practical manner to automatically improve motor skills (or, more specifically, control policies).

Recent movies: whole body manipulation and learning from demonstration

Movie archive: Grasping, juggling, visual tracking, and writing

Grasping as a Haptic Control Problem

One class of closed-loop controllers that we have been developing is aimed at the formation of stable grasps. Rather than starting with a detailed model of the object to be grasped (e.g., one derived from a vision system), the first step in our approach is to haptically explore the object. At each contact with the object, the controller estimates the total force and torque applied to the object by the set of contacts. Given a simple model of the local object geometry, the controller computes movements of the fingers and arm that attempt to reduce this total force and torque. In particular, we formulated the problem as one of simultaneously satisfying two objectives: minimizing the net force and minimizing the net torque applied to the object. The power of this approach to grasp formation is that the controller can be assigned a variety of different physical resources, including fingertips, palms, multiple hands, and even "virtual contacts" such as gravity.
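The core quantity the controller monitors can be illustrated with a minimal sketch: the net force is the sum of the contact forces, and the net torque is the sum of the moments those forces produce about a reference point. This is not our controller implementation; the contact positions, forces, and the `net_wrench` helper below are hypothetical, chosen only to show the residual that the grasp controller drives toward zero.

```python
import numpy as np

def net_wrench(contacts, forces, ref=np.zeros(3)):
    """Net force and net torque applied to an object by a set of contacts.

    contacts: (N, 3) contact positions in the object frame.
    forces:   (N, 3) forces applied at those contacts.
    ref:      point about which torques are computed.
    """
    f_net = forces.sum(axis=0)
    # Torque of each contact is r x f, taken about the reference point.
    tau_net = np.cross(contacts - ref, forces).sum(axis=0)
    return f_net, tau_net

# Two opposing fingertip contacts squeezing a block along the x axis:
# the forces cancel and produce no moment, so the wrench residual is zero
# and both grasp objectives are satisfied.
contacts = np.array([[0.05, 0.0, 0.0],
                     [-0.05, 0.0, 0.0]])
forces = np.array([[-1.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0]])
f, tau = net_wrench(contacts, forces)
```

A controller of this style moves the contacts so as to reduce the magnitudes of `f` and `tau`; any resource that applies a force to the object (a palm, a second hand, or gravity as a "virtual contact") simply contributes additional rows to the two arrays.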
  • Platt, Jr., R., Fagg, A. H., Grupen, R. A. (2004), Manipulation Gaits: Sequences of Grasp Control Tasks, to appear in the Proceedings of the International Conference on Robotics and Automation (ICRA'04)

  • Platt, Jr., R., Fagg, A. H., Grupen, R. A. (2003), Extending Fingertip Grasping to Whole Body Grasping, Proceedings of the International Conference on Robotics and Automation (ICRA'03), pp. 2677-2682

  • Platt, Jr., R., Fagg, A. H., Grupen, R. A. (2002), Nullspace Composition of Control Laws for Grasping, Proceedings of the International Conference on Intelligent Robots and Systems (IROS'02), electronically published

Learning to Prospectively Select Grasps

The problem of grasping an object and moving it to another location has long been studied in robotics. One approach to this problem is to explicitly compute "pick-and-place" constraints and then perform a search within the constrained space. In contrast, humans are capable of robustly planning and executing grasps of objects about which their knowledge is incomplete. Furthermore, it appears that grasping strategies are acquired incrementally as a function of experience with different objects.

We have applied a reinforcement learning technique to the problem of discovering an appropriate sequence of grasp and place actions. Rather than starting with a model of which grasp was appropriate for a given final object configuration, the robot learned, through interaction with the environment, to select a grip in anticipation of how the grasped object was to be used in future actions. The behavior the robot exhibited over the course of learning showed interesting qualitative similarities to the grip-selection behavior observed in children performing a similar task.
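The flavor of this learning problem can be sketched with a tiny tabular example. This is not the system described above; the goal orientations, grasp names, and reward function below are hypothetical stand-ins, chosen only to show how a reward delivered at the *end* of a pick-and-place episode can teach an agent to choose its grip prospectively, in anticipation of the final object configuration.

```python
import random

GRASPS = ["overhand", "underhand"]
GOALS = ["upright", "inverted"]

def reward(grasp, goal):
    # Hypothetical task outcome: a grasp succeeds only if it anticipates
    # the goal orientation (e.g., an underhand grip for an inverted placement).
    good = (grasp == "overhand") == (goal == "upright")
    return 1.0 if good else -1.0

# Action-value table over (goal, grasp) pairs, learned from experience.
Q = {(goal, grasp): 0.0 for goal in GOALS for grasp in GRASPS}
alpha, epsilon = 0.1, 0.2
rng = random.Random(0)

for episode in range(2000):
    goal = rng.choice(GOALS)            # where the object must end up
    if rng.random() < epsilon:          # epsilon-greedy exploration
        grasp = rng.choice(GRASPS)
    else:
        grasp = max(GRASPS, key=lambda g: Q[(goal, g)])
    # One-step update toward the reward observed at the end of the episode.
    Q[(goal, grasp)] += alpha * (reward(grasp, goal) - Q[(goal, grasp)])

# After learning, the greedy policy selects its grip based on the
# anticipated final configuration, not the object's current pose.
policy = {goal: max(GRASPS, key=lambda g: Q[(goal, g)]) for goal in GOALS}
```

The key point the sketch shares with the robot experiment is that nothing tells the agent which grasp is "correct" at grasp time; the grip choice is shaped entirely by the consequences observed when the object is later placed.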

Movies: Learning to grasp prospectively


fagg[at]ou.edu

Last modified: Wed Mar 17 23:14:52 2004