I’m interested in understanding the fundamental theoretical questions behind tools in modern machine learning and in using them to develop new practical methods. My current interests center on how to evaluate sources of supervision (e.g., weak, semi-, and self-supervision) throughout the ML pipeline, and on the role of misspecified inductive biases.
Previously, I graduated summa cum laude from Princeton University in 2019 with a concentration in Operations Research and Financial Engineering (ORFE) and a certificate in Applications of Computing. I wrote my senior thesis on quantum machine learning with Prof. Elad Hazan and completed junior independent work on modeling misinformation in social networks with Prof. Miklos Racz.
Publications and Preprints
Comparing the Value of Labeled and Unlabeled Data in Method-of-Moments Latent Variable Estimation.
Mayee F. Chen*, Benjamin Cohen-Wang*, Steve Mussmann, Frederic Sala, and Christopher Ré. AISTATS, 2021.
paper | slides from Google summit
Train and You’ll Miss It: Interactive Model Iteration with Weak Supervision and Pre-Trained Embeddings.
Mayee F. Chen*, Daniel Y. Fu*, Frederic Sala, Sen Wu, Ravi Teja Mullapudi, Fait Poms, Kayvon Fatahalian, and Christopher Ré. arXiv preprint arXiv:2006.15168, 2020.
paper | code | video
Fast and Three-rious: Speeding Up Weak Supervision with Triplet Methods.
Mayee F. Chen*, Daniel Y. Fu*, Frederic Sala, Sarah M. Hooper, Kayvon Fatahalian, and Christopher Ré. International Conference on Machine Learning (ICML), 2020.
paper | code | video | blog
Effect of Rotational Grazing on Plant and Animal Production.
Mayee F. Chen and Junping Shi. Mathematical Biosciences and Engineering, vol. 15, no. 2, 2018.
paper | slides
Efficient GCD Computation for Big Integers on Xeon Phi Coprocessor.
Jie Chen, William Watson, and Mayee F. Chen. IEEE Conference on Networking, Architecture, and Storage (NAS), 2014.
paper | slides