The focus of our research is intelligent control in biology and engineering.
We believe that the key to achieving dynamic intelligence is optimization.
In biology, motor behavior is shaped by processes (evolution, learning, adaptation)
that resemble iterative optimization.
In engineering, perhaps the best way to build a truly complex
controller that actually works is to specify a high-level performance criterion,
and leave the details of the design process to numerical optimization.
We are pursuing multiple lines of research spanning many traditional disciplines:
control engineering, computer science, robotics, neuroscience, psychology, (bio)mechanics, and applied mathematics.
Despite their interdisciplinary nature, all these efforts are aimed at a common goal:
understanding and synthesizing dynamic intelligence through learning and optimization.
CONTROL THEORY THAT ENABLES FASTER ALGORITHMS
The trouble with control optimization is that it is easier said than done. For a system with
many degrees of freedom (such as a modern robot or a human body) the space of possible
control strategies is vast, and finding a sensible (let alone optimal) solution automatically
requires a staggering amount of computation. Computers have gotten really fast, and the multi-core
revolution is great news because the necessary computations are inherently parallel.
Nevertheless we need equally fast algorithms if we are to apply optimal control methodology to
complex dynamical systems. Developing such algorithms as well as the underlying control theory
has been a major focus of our work. This includes local trajectory-based methods,
global function-approximation methods,
hierarchical control methods, and a new framework for
stochastic optimal control which makes the problem linear even though the system being controlled
is non-linear. We are now starting to apply our
algorithms to hard control problems in robotics and biomechanics, namely legged locomotion and hand manipulation.
At the same time we will continue to develop new theory and algorithms tailored to these application domains.
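To sketch the idea behind the linearly-solvable framework mentioned above (notation here is generic, not tied to any one formulation): suppose the state cost is q(x), the passive dynamics are p(x'|x), and the action cost penalizes the KL divergence between the chosen transition distribution and the passive dynamics. Then the exponentiated value function, the "desirability" z(x) = e^{-v(x)}, satisfies a linear equation:

```latex
z(x) \;=\; e^{-q(x)} \sum_{x'} p(x' \mid x)\, z(x'),
\qquad
u^*(x' \mid x) \;\propto\; p(x' \mid x)\, z(x')
```

So instead of a nonlinear Bellman recursion, the optimal value function is obtained by solving a linear system (or an eigenvalue problem, depending on the problem formulation), which is what makes the approach fast.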
Here are some movies illustrating the rich behaviors that can be generated fully automatically
using our algorithms:
swimming. The only thing that is designed manually here
is an intuitive cost function, which prescribes spatial targets for the end-effector or center of mass and
penalizes control energy. The details of the behavior then emerge from the optimization procedure.
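As a concrete illustration, a cost function of the kind described above might look like the following minimal sketch (the weights, dimensions, and function name are hypothetical, purely for illustration; this is not the code used for the movies):

```python
import numpy as np

def cost(x_effector, x_target, u, w_target=1.0, w_energy=1e-3):
    """Intuitive cost of the kind described above: squared distance of the
    end-effector (or center of mass) to a spatial target, plus a small
    quadratic penalty on control energy. The weights are hypothetical."""
    tracking = w_target * np.sum((x_effector - x_target) ** 2)
    energy = w_energy * np.sum(u ** 2)
    return tracking + energy
```

Everything else, such as which joints to move and in what sequence, is left to the optimizer; the cost only says what counts as success and that effort is mildly expensive.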
ROBOT DESIGN AND CONTROL
In order to do interesting robotics one needs interesting robots - in particular robots
that have many controllable degrees of freedom along with sufficient sensing capabilities,
and are fast and compliant enough so that they can interact with the world the way we do. To meet these
requirements, we have designed and built 3-dof modular legs and fingers (ModBots) that can be assembled
into various walkers and manipulators. The finger modules shown in the figure are equipped with 3-axis
force sensors in the fingertips and potentiometers in the joints, and can move substantially faster than a human finger.
We have also acquired some of the most advanced pneumatic robots available (ShadowHand and Kokoro). We found
that pneumatic actuators are easy to work with,
contrary to popular belief. See movies of
full-body tracking on the humanoid robot, and
high-performance tracking on a simpler pneumatic robot.
On the control side, in addition to applying and customizing our latest algorithms,
we are excited about the idea of online optimization or model-predictive control.
This involves re-optimizing the movement plan at every time step of the real-time control loop, always starting
from the current state. See a movie of our robot
juggling two balls
using online optimization. The above
swimming behavior was also generated
using a similar approach. A big open question is what happens
when the controller is optimized with respect to an inaccurate model of the robot. Our ball-bouncing results
indicate that online optimization is surprisingly robust to model errors, but nevertheless a lot more work
along these lines is needed.
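The receding-horizon loop described above can be sketched on a toy problem. Here a point mass on a line stands in for the robot, and best-of-N random shooting stands in for the trajectory optimizers we actually use; all names and constants are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, u, dt=0.1):
    """Toy point-mass dynamics: state is (position, velocity), u is acceleration."""
    pos, vel = x
    return np.array([pos + dt * vel, vel + dt * u])

def rollout_cost(x, controls, target=1.0):
    """Cost of an open-loop control sequence simulated from state x."""
    c = 0.0
    for u in controls:
        x = step(x, u)
        c += (x[0] - target) ** 2 + 1e-3 * u ** 2
    return c

def mpc_step(x, horizon=10, samples=200):
    """Re-optimize the plan from the current state and return only the
    first control (random shooting here; a real system would use a
    proper trajectory optimizer)."""
    candidates = rng.uniform(-2.0, 2.0, size=(samples, horizon))
    costs = [rollout_cost(x, c) for c in candidates]
    return candidates[int(np.argmin(costs))][0]

# Closed loop: at every time step, re-plan from the current state.
x = np.array([0.0, 0.0])
for _ in range(50):
    x = step(x, mpc_step(x))
```

The key property is that the plan is never trusted for more than one step: whatever perturbation or model error changed the state, the next iteration re-optimizes from wherever the system actually is, which is the source of the robustness noted above.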
SIMULATION OF MULTI-JOINT DYNAMICS WITH CONTACT
Applying control optimization directly to a physical system is both slow and risky. Instead controllers
are usually optimized in simulation, and then fine-tuned on the physical system. This requires an accurate
simulation model that runs orders-of-magnitude faster than real time. Contact dynamics are particularly hard
to simulate accurately and efficiently. We are developing new algorithms to make this possible,
which go beyond the linear complementarity approach used
in existing engines such as ODE and PhysX. We are also implementing a new physics engine (MuJoCo)
which is designed from the ground-up for the purpose of control optimization, and exploits the
latest advances in parallel processing hardware. It combines our new algorithms for contact simulation
with the fastest recursive methods for multi-joint dynamics. The above controllers for
swimming were optimized using MuJoCo.
Here is a movie of dancing-like behavior
arising from an experimental modification of the equations of motion, without any control. MuJoCo
will soon be made publicly available.
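For reference, the linear complementarity approach used by engines such as ODE can be written as follows: at each step, solve for contact impulses $\lambda$ satisfying

```latex
0 \;\le\; \lambda \;\perp\; A\lambda + b \;\ge\; 0
```

where $A$ is the contact-space inverse inertia (Delassus) matrix and $b$ collects the contact-point velocities before impulses are applied. The complementarity condition states that an impulse is non-zero only where the contact would otherwise penetrate, which is what makes the problem hard to solve accurately at scale.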
REVERSE ENGINEERING THE BRAIN'S CONTROL MECHANISMS
It would be great to understand how the brain works, yet this goal remains distant and elusive.
We have developed computational theories of sensorimotor function on the single-neuron
level as well as on the system level,
which are now mainstream. We have also performed a range of psychophysical experiments
testing the predictions and helping refine the theories.
While such developments remain a significant part of our agenda, we do not feel that the current trends in
sensorimotor control are leading towards algorithmic understanding, of the kind that can enable
artificial systems to match the brain's performance. Thus we are initiating a different
type of experiment and data analysis, designed not to test hypotheses about isolated
features of the brain's controller, but to directly reveal what that controller is. Instead of following
the tradition of studying many repetitions of a simple movement, we will be studying complex movements
executed under a wide variety of task conditions as well as random perturbations. We will then use
machine learning and inverse optimal control techniques to
discover the structure in the data, and infer how humans would have acted in any possible situation.
The ability to make such inferences is equivalent to having an automatic controller.
The low-dimensionality typically observed
in motor behavior, along with the regularization afforded
by inverse optimal control, will hopefully mitigate the curse of dimensionality.
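To illustrate the inverse optimal control idea on a deliberately tiny example (the posited cost, weights, and function names are all hypothetical): suppose a subject minimizes J(u) = w_task‖u − t‖² + w_effort‖u‖² for task target t. The optimum is u* = k·t with k = w_task/(w_task + w_effort), so observing behavior across tasks lets us recover the effort/task weight ratio:

```python
import numpy as np

def infer_effort_ratio(targets, actions):
    """Least-squares fit of k = w_task / (w_task + w_effort) from observed
    (target, action) pairs, then return the ratio w_effort / w_task."""
    t = np.asarray(targets).ravel()
    u = np.asarray(actions).ravel()
    k = (t @ u) / (t @ t)
    return (1.0 - k) / k

# Synthetic demonstrations from a "subject" with a known ratio of 0.25:
true_ratio = 0.25
k_true = 1.0 / (1.0 + true_ratio)
targets = np.array([1.0, 2.0, -1.5, 0.5])
actions = k_true * targets
```

Real behavior of course involves far richer cost features and dynamics, but the logic is the same: posit a parametric cost, assume the observed behavior is (near-)optimal for it, and fit the parameters, after which the recovered cost predicts behavior in situations never observed.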
The specific experiments currently planned are recording hand kinematics and EMG from large numbers of channels,
as well as recording full-body kinematics and ground reaction forces during walking, while subjects are being
pushed unexpectedly. We are also adapting methods from computer graphics, which has a tradition of building
elaborate controllers based on motion capture data.
CONTROL FOR BRAIN-MACHINE INTERFACES
We are beginning to work with functional electrical stimulation (FES) of muscles,
as well as prosthetic and assistive robot arms that must perform daily tasks under the control of
a disabled user. We believe that the best approach to brain-machine interfaces is to obtain
high-level user commands, either from brain activity 
or from eye movements and speech, and put enough intelligence in the device itself so as
to translate these commands into actual movements. Optimal control is well-suited for such translation.
For example, our arm movement controller
maps spatial targets to muscle activations for a detailed biomechanical model of the human arm,
similar to what is needed in FES.