(Note: since this picture was taken, a significant amount of hair has migrated from the top of my head to under my chin.)
I have defended my dissertation as a PhD student in the CSE
department at the University of Washington. Recently I have been working
with Emo Todorov in the Movement Control Laboratory on deep reinforcement learning for control of complex movements. I am also interested in deep representation learning in general.
You can email me
at [my first name] at cs.washington.edu.
- Igor Mordatch, Kendall Lowrey, Galen Andrew, Zoran Popović and Emanuel Todorov, Interactive Control of Diverse Complex Characters with Neural Networks, selected for oral presentation at NIPS, 2015.
- Galen Andrew and Jeff Bilmes, Backpropagation in Sequential Deep Neural Networks, presented as a poster at the NIPS Deep Learning Workshop, 2013.
- Galen Andrew, Raman Arora, Jeff Bilmes and Karen Livescu, Deep Canonical Correlation Analysis, in International Conference on Machine Learning, 2013. Here you can download the source code for DCCA.
- Galen Andrew and Jeff Bilmes, Memory-efficient inference in dynamic graphical models using multiple cores, in AISTATS, 2012. (This animation may be helpful for understanding the algorithm.)
- Galen Andrew and Jeff Bilmes, Sequential Deep Belief Networks, in International Conference on Acoustics, Speech and Signal Processing, 2012.
- Galen Andrew and Jianfeng Gao, Scalable Training of L1-Regularized Log-Linear Models, in International Conference on Machine Learning, 2007.
- Jianfeng Gao, Galen Andrew, Mark Johnson, and Kristina Toutanova, A Comparative Study of Parameter Estimation Methods for Statistical Natural Language Processing, in Annual Meeting of the Association for Computational Linguistics (ACL), Association for Computational Linguistics, 2007.
- Galen Andrew, A Hybrid Markov/Semi-Markov Conditional Random Field for Sequence Segmentation, in Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, 2006.
- Roger Levy and Galen Andrew, Tregex and Tsurgeon: Tools for Querying and Manipulating Tree Data Structures, in International Conference on Language Resources and Evaluation (LREC), 2006.
- Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning, A Conditional Random Field Word Segmenter for Sighan Bakeoff 2005, in SIGHAN Workshop on Chinese Language Processing, Association for Computational Linguistics, 2005.
- Galen Andrew, Trond Grenager, and Christopher Manning, Verb Sense and Subcategorization: Using Joint Inference to Improve Performance on Complementary Tasks, in Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, 2004.
- Kevin Leyton-Brown, Eugene Nudelman, Galen Andrew, Jim McFadden, and Yoav Shoham, A Portfolio Approach to Algorithm Selection, in International Joint Conference on Artificial Intelligence, 2003.
- Kevin Leyton-Brown, Eugene Nudelman, Galen Andrew, Jim McFadden, and Yoav Shoham, Boosting as a Metaphor for Algorithm Design, in International Conference on Principles and Practice of Constraint Programming (CP), 2003. [The longer unpublished version can be found here.]
Source code, documentation, and examples for DCCA
are here. The C++ code relies on the Boost libraries and Intel MKL. You can
also download the acoustic and articulatory features we computed from
the Wisconsin X-ray Microbeam Database: first fold only (150MB)
or all folds (916MB).
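If you just want the idea before downloading: DCCA trains two networks so that the total canonical correlation between their outputs is as large as possible. Here is a rough NumPy sketch of the quantity being maximized, in the linear case that the deep version generalizes (the function name and regularization constant are my own, for illustration; this is not the interface of the released C++ code):

    import numpy as np

    def total_canonical_correlation(X, Y, reg=1e-4):
        # X: (n, dx) and Y: (n, dy), one sample per row.
        n = X.shape[0]
        Xc = X - X.mean(axis=0)
        Yc = Y - Y.mean(axis=0)
        # Regularized covariance and cross-covariance estimates.
        Cxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
        Cyy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
        Cxy = Xc.T @ Yc / (n - 1)
        # Whiten each view; the singular values of the whitened
        # cross-covariance are the canonical correlations.
        Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
        Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
        T = Wx @ Cxy @ Wy.T
        return np.linalg.svd(T, compute_uv=False).sum()

In DCCA this value is computed on the outputs of two networks applied to the two views, and its gradient is backpropagated to train both networks jointly.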
Source code for OWL-QN, an algorithm for optimizing L1-regularized
objectives, is available for download. You can define your own differentiable loss
to optimize with L1 regularization, or use the standalone trainer for
training L1-regularized least-squares or logistic regression models.
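At the core of OWL-QN is a "pseudo-gradient" that extends the gradient of the loss to the points where the L1 term is not differentiable. A rough NumPy sketch of that one computation (the naming is mine; the downloadable C++ implementation is organized differently):

    import numpy as np

    def pseudo_gradient(w, grad_loss, C):
        # Pseudo-gradient of loss(w) + C * ||w||_1.
        right = grad_loss + C  # one-sided derivative moving w_i upward
        left = grad_loss - C   # one-sided derivative moving w_i downward
        pg = np.where(w > 0, right, np.where(w < 0, left, 0.0))
        # At w_i == 0, take the one-sided derivative that permits
        # descent, or zero if neither direction decreases the objective.
        at_zero = (w == 0)
        pg = np.where(at_zero & (right < 0), right, pg)
        pg = np.where(at_zero & (left > 0), left, pg)
        return pg

OWL-QN then takes L-BFGS-style steps restricted to an orthant consistent with the pseudo-gradient, clipping to zero any coordinate that would change sign during the step; this is what produces sparse solutions.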
At Stanford, I wrote
Tregex, a utility for
matching patterns in syntax trees, similar to Tgrep-2. It provides
essentially the same functionality as Tgrep-2, plus several relations
that are especially useful for natural language trees, such as "A is the
lexical head of B". Because it does no indexing, it is somewhat slower than
Tgrep-2 when searching over large treebanks, but not prohibitively so.
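For example, the pattern NP <<# /^NN/ finds noun phrases whose lexical head is tagged as a noun; the "headed by" relation <<# is one of the additions over Tgrep-2.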
It is available publicly (under the GNU GPL) from the Stanford NLP group website.