(Note: since this picture was taken, a significant amount of hair has migrated from the top of my head to under my chin.)

Galen Andrew


About me:

I have defended my dissertation as a PhD student in the CSE department at the University of Washington. Recently I have been working with Emo Todorov in the Movement Control Laboratory on deep reinforcement learning for control of complex movements. I am also interested in deep representation learning in general.

Contact:

You can email me at [my first name] at cs.washington.edu.

Publications:

Tutorials/Notes:

I wrote these notes on L-BFGS quasi-Newton optimization for a tutorial at MSR.
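For a flavor of what the notes cover, here is a minimal NumPy sketch (mine, not taken from the notes) of the standard L-BFGS two-loop recursion, which builds the quasi-Newton search direction from a short history of curvature pairs:

    import numpy as np

    def lbfgs_direction(grad, s_list, y_list):
        """Two-loop recursion: approximately multiply the inverse Hessian
        by the gradient using the stored curvature pairs
        s_k = x_{k+1} - x_k and y_k = grad_{k+1} - grad_k."""
        q = grad.copy()
        rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
        alphas = []
        # First loop: walk the history from newest pair to oldest.
        for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
            alpha = rho * s.dot(q)
            alphas.append(alpha)
            q = q - alpha * y
        # Scale by a diagonal guess for the initial inverse Hessian.
        if s_list:
            q = q * (s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1]))
        # Second loop: walk back from oldest pair to newest.
        for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
            beta = rho * y.dot(q)
            q = q + (alpha - beta) * s
        return -q  # quasi-Newton descent direction

With an empty history the recursion reduces to steepest descent, which is why L-BFGS needs no special-case startup.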

Software/Data downloads:

Source code, documentation, and examples for DCCA (deep canonical correlation analysis) are here. The C++ code relies on the Boost libraries and Intel MKL. You can also download the acoustic and articulatory features we computed from the Wisconsin X-ray Microbeam Database: first fold only (150MB) or all folds (916MB).
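For reference, the quantity DCCA maximizes is the total correlation between the two views at the outputs of its two networks. Here is a minimal NumPy sketch of that objective; the function name and the ridge term reg are illustrative, not taken from the release:

    import numpy as np

    def total_correlation(H1, H2, reg=1e-4):
        """Sum of canonical correlations between two views (n x d arrays).
        `reg` is a small ridge term added for numerical stability."""
        n = H1.shape[0]
        H1 = H1 - H1.mean(axis=0)
        H2 = H2 - H2.mean(axis=0)
        S11 = H1.T @ H1 / (n - 1) + reg * np.eye(H1.shape[1])
        S22 = H2.T @ H2 / (n - 1) + reg * np.eye(H2.shape[1])
        S12 = H1.T @ H2 / (n - 1)

        def inv_sqrt(S):
            w, V = np.linalg.eigh(S)
            return V @ np.diag(w ** -0.5) @ V.T

        # The singular values of the whitened cross-covariance matrix
        # are the canonical correlations between the two views.
        T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
        return np.linalg.svd(T, compute_uv=False).sum()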

Source code for OWL-QN, an algorithm for L1-regularized optimization, is available for download. You can define your own differentiable loss to optimize with L1 regularization, or use the standalone trainer to train L1-regularized least-squares or logistic regression models.
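The heart of OWL-QN is a pseudo-gradient that extends the gradient of the differentiable loss across the nondifferentiable L1 term. A minimal NumPy sketch of that computation (variable names are illustrative, not the release's API):

    import numpy as np

    def pseudo_gradient(x, grad, C):
        """Pseudo-gradient of f(x) + C * ||x||_1, where `grad` is the
        gradient of the differentiable loss f at x. Where x_i != 0 the
        L1 term is differentiable; at x_i == 0 we take the one-sided
        derivative that points downhill, or 0 if neither does."""
        pg = np.zeros_like(x)
        nz = x != 0
        pg[nz] = grad[nz] + C * np.sign(x[nz])
        right = grad[~nz] + C  # directional derivative toward x_i > 0
        left = grad[~nz] - C   # directional derivative toward x_i < 0
        pg[~nz] = np.where(right < 0, right, np.where(left > 0, left, 0.0))
        return pg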

At Stanford, I wrote Tregex, a utility for matching patterns in syntax trees, similar to Tgrep-2. It offers essentially the same functionality as Tgrep-2, plus several relations that are extremely useful for natural language trees, for example "A is the lexical head of B". Because it does not index the treebank, it is somewhat slower than Tgrep-2 when searching over large treebanks, but not prohibitively so. It is publicly available (under the GNU GPL) from the Stanford NLP group website.
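As a taste of the pattern language (the full relation inventory is in the Tregex documentation), a pattern along these lines uses the headship relation to match any NP whose lexical head is tagged with a label beginning with NN:

    NP <<# /^NN/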