Email: tqchen at cs dot washington dot edu

I am a Ph.D. student in the Department of Computer Science and Engineering at the University of Washington, working with Carlos Guestrin and Ben Taskar. I am a member of the MODE Lab. Before coming to UW, I was a Master's student in the Apex Data and Knowledge Management Lab at Shanghai Jiao Tong University, China. I received my Bachelor's degree in Computer Science from Shanghai Jiao Tong University, where I was a member of the ACM Class, now part of Zhiyuan College at SJTU.
A Parallel and Efficient Algorithm for Learning to Match. Jingbo Shang, Tianqi Chen, Hang Li, Zhengdong Lu, Yong Yu. ICDM 2014
General Functional Matrix Factorization using Gradient Boosting. Tianqi Chen, Hang Li, Qiang Yang, Yong Yu. ICML 2013 [code][bibtex]
SVDFeature: A Toolkit for Feature-based Collaborative Filtering. Tianqi Chen, Weinan Zhang, Qiuxia Lu, Kailong Chen, Zhao Zheng, Yong Yu. JMLR 13:3619-3622, 2012 [Project][bibtex]
Combining Factorization Model and Additive Forest for Collaborative Followee Recommendation. Tianqi Chen, Linpeng Tang, Qin Liu, Diyi Yang, Saining Xie, Xuezhi Cao, Chunyang Wu, Enpeng Yao, Zhengyang Liu, Zhansheng Jiang, Cheng Chen, Weihao Kong, Yong Yu. KDD-Cup Workshop 2012 (1st place in Track 1)
Local Implicit Feedback Mining for Music Recommendation. Diyi Yang, Tianqi Chen, Weinan Zhang, Yong Yu. RecSys 2012 [bibtex]
Discriminative Factor Alignment across Heterogeneous Feature Space. Fangwei Hu, Tianqi Chen, Nathan Nan Liu, Qiang Yang, Yong Yu. ECML/PKDD 2012
Relation of a New Interpretation of Stochastic Differential Equations to Ito Process. Jianghong Shi, Tianqi Chen, Ruoshi Yuan, Bo Yuan, Ping Ao. Journal of Statistical Physics, Vol. 148, Issue 3 [arXiv][bibtex]
Dynamical Decomposition of Markov Processes without Detailed Balance. Ping Ao, Tianqi Chen, Jianghong Shi. Chinese Physics Letters, 2013, 30(7): 070201
SVDFeature: A Scalable and Flexible Toolkit for Collaborative Filtering. This project provides an abstract framework for building new matrix factorization variants simply by defining features.
XGBoost: eXtreme Gradient Boosting (Tree) Library. A general-purpose parallel gradient boosting (tree) library, notable for its speed (with 4 threads it can be about 20 times faster than sklearn's GBM), its easy-to-use Python wrapper, and its wide range of supported objective functions (classification, regression, ranking).
MShadow: A Unified CPU/GPU Matrix Template Library in C++/CUDA. MShadow provides a compact and easy-to-use template library for common matrix operations used in machine learning. It allows you to write code once and run it on both CPU and GPU.
CXXNET: A Concise and Fast Implementation of (Convolutional) Neural Networks. A concise and fast implementation of (convolutional) neural networks built on MShadow, with fewer than 1000 lines of core code, achieving state-of-the-art performance.
Introduction to Boosted Trees. This is a lecture on gradient boosting and boosted trees that I gave on Oct. 22 while TAing the machine learning course at UW. I take the statistical view following the idea of LogitBoost, presenting the boosted trees algorithm as optimization of training loss plus regularization in function space. This is the model used in the xgboost package.
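The functional-space view can be sketched in a few lines: each round fits a weak learner to the negative gradient of the training loss, and the model takes a shrunken step in that direction. The sketch below is a toy illustration of that idea, not the xgboost implementation; it uses squared loss for simplicity, depth-1 regression stumps in place of full trees, and hypothetical helper names (`fit_stump`, `boost`):

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression tree (stump) to targets r by brute-force
    search over split thresholds, minimizing squared error."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        lv, rv = left.mean(), right.mean()
        err = ((left - lv) ** 2).sum() + ((right - rv) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def boost(x, y, rounds=50, eta=0.3):
    """Gradient boosting with squared loss: each stump is fit to the
    residual y - F, i.e. the negative gradient of 0.5*(y - F)^2 in F."""
    F = np.zeros_like(y)
    stumps = []
    for _ in range(rounds):
        g = y - F                  # negative gradient (residual)
        h = fit_stump(x, g)
        F = F + eta * h(x)         # functional gradient step with shrinkage
        stumps.append(h)
    return lambda z: eta * sum(h(z) for h in stumps)

# Toy data: noisy step function (synthetic, for illustration only).
rng = np.random.RandomState(0)
x = rng.rand(100)
y = (x > 0.5).astype(float) + 0.05 * rng.randn(100)
model = boost(x, y)
mse = np.mean((model(x) - y) ** 2)
```

In the lecture's framing, xgboost further replaces the first-order gradient step with a second-order (Newton) step and adds an explicit regularization term on tree complexity; the shrinkage factor `eta` above plays the role of the regularizing step size.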