My main research project aims to improve the accuracy of a classification system used across the federal government. I apply natural language processing techniques (TF-IDF, word embeddings, etc.) within predictive models (currently logistic regression and random forests). This work uses data sets that are, by Census standards, novel, and it yields promising results. Future work involves diagnosing why XGBoost yields lower accuracy and exploring neural networks.
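The core approach can be sketched as a TF-IDF pipeline feeding a logistic regression classifier. This is a minimal illustration with made-up toy records and labels, not the actual Census data or model configuration:

```python
# Minimal sketch of a TF-IDF + logistic regression text classifier.
# The texts and labels below are hypothetical stand-ins, not real data.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy free-text records and their (invented) classification codes.
texts = [
    "registered nurse in a hospital",
    "software developer at a tech firm",
    "nurse practitioner, outpatient clinic",
    "backend engineer writing software",
]
labels = ["health", "tech", "health", "tech"]

# Chain TF-IDF vectorization with a logistic regression model.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)

prediction = clf.predict(["clinic nurse"])[0]
print(prediction)
```

Swapping the final step for a `RandomForestClassifier` (or XGBoost) requires changing only the `"model"` entry in the pipeline, which is what makes this structure convenient for comparing classifiers.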
My secondary project similarly combines NLP with predictive modeling, in this case to build a propensity model.
I worked as a data analyst consultant for TubeScience, a Facebook/Instagram video advertising company in Los Angeles. Most of this work involved contributing to reports and dashboards, along with some light predictive modeling and using data to inform ad-buying decisions.
I’m open to new consulting work on the side.
Details coming soon