Hello! I am a software engineer at Google working on the Keras and TensorFlow teams. I am currently developing KerasNLP, a new open source library for NLP workflows within the Keras ecosystem.
Almost an English major, I'm interested in fuzzy human problems and their intersection with technology. I am deeply interested in improving our understanding of language models and their mechanisms of success, and in making sure the NLP tools we develop are aligned with our values.
Outside of work, I’m an avid climber and once spent three months in the woods hiking on the Pacific Crest Trail.
KerasNLP is a new library for natural language processing workflows within the Keras ecosystem. I have been the primary contributor to this library since its inception.
Keras is deep learning for humans! It is the high-level API packaged with TensorFlow for building, training, and saving models. I have been a developer and maintainer of the Keras library since 2020.
A tech talk for TensorFlow on Keras Preprocessing Layers.
A walkthrough of NLP workflows in Keras, co-presented with Chen Qian.
Pretraining a Transformer from scratch with KerasNLP
A guide on pre-training a transformer with a masked language modeling loss that also serves as an introduction to the lower-level building blocks in the KerasNLP library.
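As a hedged illustration of the masked language modeling objective the guide is built around (this is a plain-Python sketch of the BERT-style masking step, not KerasNLP's actual API), the idea is to corrupt random input tokens and train the model to recover them:

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Randomly select positions to mask, BERT-style.

    Returns the corrupted token list plus the (position, original token)
    pairs the model would be trained to predict at those positions.
    """
    rng = random.Random(seed)
    masked = list(tokens)
    targets = []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked[i] = MASK        # hide the token from the model
            targets.append((i, tok))  # but keep it as a training label
    return masked, targets

corrupted, targets = mask_tokens(
    ["the", "quick", "brown", "fox", "jumps"], mask_rate=0.5
)
```

The pretraining loss is then a cross-entropy over the vocabulary, computed only at the masked positions; in a real pipeline the masking would run on integer token ids inside the data pipeline rather than on strings.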
An Introduction to Keras Preprocessing Layers
A post on the TensorFlow blog detailing the uses of Keras Preprocessing Layers.
Synthesizing Open Worlds with Constraints using Locally Annealed Reversible Jump MCMC
During my master's I worked in the Stanford Graphics Lab. This paper lays out a system in which users specify a set of constraints and then randomly sample layouts that satisfy those constraints, using a type of probabilistic model called a factor graph.
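As a hedged, toy sketch of the idea (not the paper's actual system, which uses locally annealed reversible jump MCMC over full scene layouts): each constraint becomes a factor contributing a penalty to an energy, and a Metropolis-Hastings loop samples layouts that keep that energy low. Here the "layout" is just 1-D object positions and the single constraint is a minimum spacing:

```python
import math
import random

def energy(xs, min_gap=1.0):
    """Sum of factor penalties: one factor per pair of objects,
    penalizing pairs that sit closer together than min_gap."""
    e = 0.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            gap = abs(xs[i] - xs[j])
            if gap < min_gap:
                e += (min_gap - gap) ** 2
    return e

def sample_layout(n=4, steps=5000, step_size=0.5, seed=0):
    """Metropolis-Hastings: perturb one object at a time and accept
    the move with probability min(1, exp(-(E_new - E_old)))."""
    rng = random.Random(seed)
    xs = [rng.uniform(0, n) for _ in range(n)]
    e = energy(xs)
    best_xs, best_e = list(xs), e
    for _ in range(steps):
        i = rng.randrange(n)
        proposal = list(xs)
        proposal[i] += rng.gauss(0, step_size)
        e_new = energy(proposal)
        if e_new <= e or rng.random() < math.exp(-(e_new - e)):
            xs, e = proposal, e_new
            if e < best_e:
                best_xs, best_e = list(xs), e
    return best_xs, best_e
```

The paper's contribution goes well beyond this sketch: reversible jump moves let the sampler add and remove objects (changing the dimension of the state), and local annealing sharpens the distribution around constraint-satisfying layouts.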