Email: tqchen at cs dot washington dot edu
I am a Ph.D. student in the Department of Computer Science and Engineering at the University of Washington, working with Carlos Guestrin and Ben Taskar. I am a member of the MODE Lab. Before coming to UW, I was a Master's student in the Apex Data and Knowledge Management Lab at Shanghai Jiao Tong University, China. I received my Bachelor's degree in Computer Science from Shanghai Jiao Tong University, where I was a member of the ACM Class, now part of Zhiyuan College at SJTU.
Efficient Second-Order Gradient Boosting for Conditional Random Fields Tianqi Chen, Sameer Singh, Ben Taskar, Carlos Guestrin. AISTATS 2015 (to appear)
A Parallel and Efficient Algorithm for Learning to Match Jingbo Shang, Tianqi Chen, Hang Li, Zhengdong Lu, Yong Yu. ICDM 2014 [code] [bibtex]
General Functional Matrix Factorization using Gradient Boosting Tianqi Chen, Hang Li, Qiang Yang, Yong Yu. ICML 2013 [code] [bibtex]
SVDFeature: A Toolkit for Feature-based Collaborative Filtering Tianqi Chen, Weinan Zhang, Qiuxia Lu, Kailong Chen, Zhao Zheng, Yong Yu. JMLR 13:3619-3622, 2012 [Project][bibtex]
Combining Factorization Model and Additive Forest for Collaborative Followee Recommendation Tianqi Chen, Linpeng Tang, Qin Liu, Diyi Yang, Saining Xie, Xuezhi Cao, Chunyang Wu, Enpeng Yao, Zhengyang Liu, Zhansheng Jiang, Cheng Chen, Weihao Kong, Yong Yu. KDD-Cup Workshop 2012 (1st in track1)
Local Implicit Feedback Mining for Music Recommendation Diyi Yang, Tianqi Chen, Weinan Zhang and Yong Yu. RecSys 2012 [bibtex]
Discriminative Factor Alignment across Heterogeneous Feature Space Fangwei Hu, Tianqi Chen, Nathan Nan Liu, Qiang Yang and Yong Yu. ECML/PKDD 2012
Relation of a New Interpretation of Stochastic Differential Equations to Ito Process Jianghong Shi, Tianqi Chen, Ruoshi Yuan, Bo Yuan and Ping Ao. Journal of Statistical Physics, Vol. 148, Issue 3 [arXiv][bibtex]
Dynamical Decomposition of Markov Processes without Detailed Balance Ping Ao, Tianqi Chen, Jianghong Shi. Chinese Physics Letters, 2013, 30 (7), 070201
The paper "Efficient Second-Order Gradient Boosting for Conditional Random Fields" listed under machine learning makes use of a bound on the mixing rate of Markov chains for CRF learning.
Rabit: Reliable Allreduce and Broadcast Interface A lightweight library that provides a fault-tolerant interface of Allreduce and Broadcast for portable, scalable and reliable distributed machine learning programs. Rabit programs can run on various platforms such as Hadoop and MPI, and no installation is needed.
XGBoost: Gradient Boosting (GBDT, GBRT or GBM) Library for large-scale and distributed learning A distributed gradient boosting (tree) library, notable for its speed and scalability, available in Python, R and Julia. The distributed version runs on MPI, SGE and YARN, and the toolkit can scale to terascale problems.
MShadow: A unified CPU/GPU matrix template library in C++/CUDA MShadow is an efficient, device-invariant and simple tensor library for machine learning projects, aiming for both simplicity and performance.
CXXNET: fast, concise and distributed deep learning A concise and fast implementation of convolutional networks built on MShadow.
SVDFeature: A scalable and flexible toolkit for collaborative filtering This project provides an abstract framework for building new matrix factorization variants simply by defining features.
Introduction to Boosted Trees This is a lecture on gradient boosting and boosted trees that I gave on Oct. 22 while TAing the machine learning course at UW. Following the statistical view of LogitBoost, I present the boosted tree algorithm as optimization of training loss plus regularization in function space. This is the model used in the xgboost package.
Introduction to Expression Templates This is a tutorial about expression templates, the trick that enables efficient and simple machine learning code in C++.