PROBABILISTIC MOTION ESTIMATION
This Matlab toolbox implements the algorithms developed here.
Given a database of motion capture data, the toolbox
estimates multi-joint movement trajectories as well as constant parameters
such as limb sizes, axes of joint rotation, marker positions and orientations
relative to the underlying limb segments. The computation is fully
probabilistic and yields not only the most likely values but also their
confidence intervals. It is based on an extension of the extended Kalman filter.
The user must provide sensible initial estimates of the uncertainty
in all the parameters to be estimated.
The entire toolbox, along with a user's manual and
numerical examples, can be downloaded as a single archive.
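The following is a minimal sketch of the general idea behind such an estimator, not code from the toolbox itself: the constant parameters are appended to the state vector and estimated jointly, so the filter covariance provides confidence intervals for both. All names below (f, fjac, h, hjac, the noise covariances, the marker data z) are illustrative.

    x = [q0; theta0];            % augmented state: trajectory variables + constant parameters
    P = blkdiag(Pq0, Ptheta0);   % user-supplied initial uncertainty
    for t = 1:T
        x = f(x);                         % nonlinear motion model (parameters have identity dynamics)
        F = fjac(x);                      % Jacobian of f
        P = F*P*F' + Qnoise;
        H = hjac(x);                      % Jacobian of the marker measurement model h
        K = P*H' / (H*P*H' + Rnoise);     % Kalman gain
        x = x + K*(z(:,t) - h(x));        % update with marker data z(:,t)
        P = (eye(length(x)) - K*H) * P;
    end
    ci = 1.96 * sqrt(diag(P));   % approximate 95% confidence intervals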
ITERATIVE LQG CONTROL OF NONLINEAR SYSTEMS
The Matlab function ilqg_det.m implements the
deterministic case of the algorithm developed
here. Given a dynamical system and a cost function, it constructs a
locally-optimal feedback control law via iterated LQG approximations. Its use
in the context of a center-out reaching task (formulated in joint space) is
illustrated in the script test_ilqg_det.m.
An implementation for stochastic systems will be available soon.
Among the arguments of ilqg_det are handles to Matlab functions computing the
dynamics, cost, and their derivatives with respect to the state and control
variables. If the derivatives cannot be computed analytically, one should
implement a finite-difference approximation (see the sketch below). The format
of the dynamics function is illustrated in arm_dyn.m, which computes the
dynamics of a 2-link torque-controlled arm moving in the horizontal plane. The
format of the cost function is illustrated in arm_cost.m, which computes a
weighted sum of control energy and endpoint error. Both of these functions are
needed to run test_ilqg_det.
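As a sketch of the finite-difference fallback mentioned above, the following central-difference routine approximates the Jacobians of a dynamics function xdot = dyn(x, u). The function name, the step size 1e-6, and the calling convention are illustrative choices, not part of ilqg_det's interface.

    function [fx, fu] = fd_jacobians(dyn, x, u)
    % Central finite differences for df/dx and df/du.
    h  = 1e-6;                       % perturbation size (illustrative)
    f0 = dyn(x, u);                  % used only to size the outputs
    fx = zeros(length(f0), length(x));
    fu = zeros(length(f0), length(u));
    for i = 1:length(x)
        dx = zeros(size(x));  dx(i) = h;
        fx(:,i) = (dyn(x+dx, u) - dyn(x-dx, u)) / (2*h);
    end
    for i = 1:length(u)
        du = zeros(size(u));  du(i) = h;
        fu(:,i) = (dyn(x, u+du) - dyn(x, u-du)) / (2*h);
    end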
ESTIMATION AND CONTROL WITH SIGNAL-DEPENDENT NOISE
The Matlab function kalman_lqg.m implements
the algorithm developed here.
It constructs a modified Kalman filter + LQG controller pair for a
linear-quadratic system subject to a combination of additive, state-dependent,
control-dependent and internal noise. The function can also simulate a
specified number of noisy trajectories. Its application in the context of
reaching movements is illustrated in an accompanying test script.
The present algorithm is a generalization of the algorithm we used to construct
our optimal feedback control models of
motor coordination. The earlier version of the algorithm did not allow
state-dependent and internal noise; these are useful for modeling active
sensors and memory decay, respectively.
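To make the noise model concrete, here is a standalone toy simulation of control-dependent (signal-dependent) noise, in which the noise standard deviation scales with the control signal. It does not call kalman_lqg.m, and all names are made up for the example.

    A = 1;  B = 1;          % scalar dynamics x(t+1) = A*x + B*u + noise
    c = 0.5;                % scaling of the control-dependent noise
    T = 100;  x = zeros(1,T);  x(1) = 1;
    for t = 1:T-1
        u = -0.8 * x(t);                 % an arbitrary fixed feedback law
        eps_ctrl = c * u * randn;        % noise std grows with |u|
        x(t+1) = A*x(t) + B*(u + eps_ctrl) + 0.01*randn;   % plus additive noise
    end
    plot(x)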
MINIMUM JERK TRAJECTORIES
The Matlab function min_jerk.m implements the
algorithm developed here.
Given a sequence of points in 2D or 3D, the velocities and accelerations at the
two endpoints, and the movement duration, the function computes the
minimum-jerk trajectory (that is, the trajectory minimizing the integral of the
squared derivative of acceleration). When the passage times through the
intermediate points are not specified the function optimizes over them using
the nonlinear simplex method. This is particularly useful when a movement path
is given and one wants to compute a reasonable speed profile.
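For the simplest case, a point-to-point movement with zero boundary velocities and accelerations, the minimum-jerk trajectory has the well-known closed form below; min_jerk.m handles the more general multi-point case, and the names here are illustrative.

    x0 = [0 0];  xf = [10 5];                      % 2D start and end points
    T  = 1;  t = linspace(0, T, 100)';             % duration and time grid
    s  = 10*(t/T).^3 - 15*(t/T).^4 + 6*(t/T).^5;   % minimum-jerk time scaling
    traj = ones(size(s))*x0 + s*(xf - x0);         % straight path, bell-shaped speed
    plot(traj(:,1), traj(:,2))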
The script test_min_jerk.m illustrates
the use of the function in 2D. It waits for the user to enter a sequence of
points by clicking over a figure. Pressing Enter signals the end of the
sequence. Note that the points are not visible while they are being entered.
BACKPROPAGATION IN FEEDFORWARD NEURAL NETWORKS
The Matlab function backprop.m is an efficient
implementation of the backpropagation algorithm for computing gradients in
feedforward neural networks. It avoids loops over the dataset and handles
arbitrary network topologies. The topology is defined by a square matrix whose
entries indicate the connectivity between all pairs of neurons. The transfer
functions of the neurons can be set independently (via a vector of flags) to
sigmoid, tanh, soft-threshold-linear, or linear.
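As an illustration of the topology description, the snippet below builds a square connectivity matrix and a flag vector for a 2-input, 3-hidden, 1-output network. The orientation of the matrix and the specific flag values are assumptions made here for the example; the conventions actually expected by backprop.m are defined in the toolbox.

    n = 6;                 % neurons 1-2: inputs, 3-5: hidden, 6: output
    C = zeros(n);          % square connectivity matrix; C(i,j) = 1 is taken
                           % to mean "neuron j feeds neuron i" (assumed)
    C(3:5, 1:2) = 1;       % input layer -> hidden layer
    C(6, 3:5)   = 1;       % hidden layer -> output
    tf = [0 0 1 1 1 0];    % per-neuron transfer-function flags (values assumed,
                           % e.g. 0 = linear, 1 = tanh)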
Backpropagation computes the gradient of the error with respect to the weights,
but in itself is not a learning algorithm capable of improving the weights. To
implement learning one must couple backpropagation with gradient descent. This
is illustrated in the script test_backprop.m.
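Independent of backprop.m's actual interface, the following self-contained toy example shows this coupling of a hand-coded backpropagation gradient with plain gradient descent, training a one-hidden-layer tanh network on XOR; everything in it is illustrative.

    X = [0 0; 0 1; 1 0; 1 1]';   % inputs, one column per example
    Y = [0 1 1 0];               % XOR targets
    W1 = 0.5*randn(3,2);  b1 = zeros(3,1);   % 2 inputs -> 3 hidden units
    W2 = 0.5*randn(1,3);  b2 = 0;            % 3 hidden units -> 1 output
    eta = 0.5;                   % learning rate
    for iter = 1:5000
        H  = tanh(W1*X + b1);    % forward pass (implicit expansion, R2016b+)
        Yh = W2*H + b2;
        E  = Yh - Y;             % derivative of squared error w.r.t. output
        gW2 = E*H';      gb2 = sum(E);        % output-layer gradients
        D   = (W2'*E) .* (1 - H.^2);          % backpropagate through tanh
        gW1 = D*X';      gb1 = sum(D,2);      % hidden-layer gradients
        W2 = W2 - eta*gW2/4;  b2 = b2 - eta*gb2/4;   % gradient descent step,
        W1 = W1 - eta*gW1/4;  b1 = b1 - eta*gb1/4;   % averaged over 4 examples
    end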
For more efficient optimization we recommend the function minimize.m written by
Carl Rasmussen and available on his website.