Google Scholar  /  GitHub  /  CV
Hey there! I am an undergraduate researcher at Stanford University, studying computer science with minors in psychology and creative writing. I work in Chelsea Finn's IRIS lab on robot learning.
My research and project experience spans reinforcement learning, imitation learning, natural language processing, computer vision, and real-world robotics. I am interested in creating robots that approach problems the way humans do. In past work, I've looked at how robots can learn from diverse data, leverage multimodal sensory inputs, and adapt through online interaction. I am also interested in the psychological mechanisms of learning and teaching.
In my free time, I love to write short fiction and creative non-fiction. I'm working on a book that looks at the human-animal connection through the stories of animal trainers.
Scroll down to see my research, other projects, teaching, writing, and course work. Feeling adventurous? Check these out as well:
Whale Book  /  Other Writing
When faced with a difficult task, a human may try different learned strategies until one succeeds. For example, a person might first push on a door; if it does not move, they will try pulling. In this project, we create a similar try-retry paradigm for robots. The proposed paradigm leverages a set of strategies learned from expert data, plus a time-to-success model that helps the agent decide when and how to switch strategies.
Often, robot data isn't shared across projects. We present a new way to use past project data to improve downstream learning: we select relevant data from a large dataset of robot interactions and use it to augment a small set of task demonstrations for behavior cloning. Relevance is measured with a state-action embedding trained directly on the past project data. In simulation and on real robots, our Behavior Retrieval method meaningfully outperforms baselines.
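As a rough sketch of the retrieval idea, assuming state-action pairs have already been mapped into an embedding space: keep the prior-data points whose similarity to some demonstration embedding is high. The function names, the cosine-similarity choice, and the threshold below are all illustrative, not the paper's actual implementation.

```python
import numpy as np

def retrieve_relevant(prior_emb, demo_emb, threshold=0.8):
    """Select rows of prior_emb whose cosine similarity to the
    closest demonstration embedding exceeds a threshold."""
    # Normalize rows so dot products become cosine similarities.
    prior_n = prior_emb / np.linalg.norm(prior_emb, axis=1, keepdims=True)
    demo_n = demo_emb / np.linalg.norm(demo_emb, axis=1, keepdims=True)
    # Similarity of each prior point to its nearest demonstration.
    best_sim = (prior_n @ demo_n.T).max(axis=1)
    return np.nonzero(best_sim >= threshold)[0]

rng = np.random.default_rng(0)
demos = rng.normal(size=(10, 32))   # embedded demonstration (s, a) pairs
# Prior data: 5 near-duplicates of demonstrations, then unrelated noise.
prior = np.vstack([demos[:5] + 0.01 * rng.normal(size=(5, 32)),
                   rng.normal(size=(100, 32))])
idx = retrieve_relevant(prior, demos)
```

The retrieved subset (here, the near-duplicate rows) would then be mixed with the demonstrations as extra training data for the behavior cloning policy.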
When accomplishing certain tasks, we benefit from modalities beyond vision, like audio. In this project, we show that robots can likewise benefit from audio when accomplishing visually occluded tasks. We learn policies end-to-end from RGB vision and audio captured by a gripper-mounted microphone. The learned policies can accomplish difficult tasks, like extracting keys from a bag when the keys are not initially visible.
Ultra-short-term wind power predictions can play an important role in the stability of a renewable energy grid. In this project, I used auxiliary weather forecast data to improve such predictions.
As Large Language Models (LLMs) become more capable, there is a strong push to create effective detection algorithms. In this work, we examined DetectGPT, a detection algorithm originally proposed by Mitchell et al. We proposed and tested ways of improving DetectGPT by focusing on parts of speech, demonstrated that it can be partially fooled by an adversarial prompt, and finally tested it on ChatGPT.
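At its core, DetectGPT scores a passage by its perturbation discrepancy: the log-probability of the passage under a scoring model, minus the mean log-probability of slightly perturbed rewrites. Machine-generated text tends to sit near a local maximum of the model's log-probability, so perturbing it drops the probability sharply. Below is a minimal sketch with toy numbers standing in for real model log-probabilities (the actual method generates perturbations with a mask-filling model such as T5).

```python
import numpy as np

def detectgpt_score(log_p_original, log_p_perturbed):
    """Perturbation discrepancy: log-probability of the candidate
    passage minus the mean log-probability of its perturbations.
    Larger values suggest machine-generated text."""
    return log_p_original - np.mean(log_p_perturbed)

# Toy log-probabilities: perturbing machine text drops probability a lot,
# perturbing human text barely changes it.
machine = detectgpt_score(-42.0, [-47.0, -46.5, -48.2])
human = detectgpt_score(-55.0, [-55.4, -54.8, -55.1])
```

A threshold on this score then separates the two classes; our part-of-speech variants changed which tokens get perturbed, not this scoring rule.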
Phone scans of documents can suffer from distortions when the paper is crumpled. We created a decrumpling model that takes in an image of a crumpled document and smooths it out. We find that an adversarial paradigm with a small PatchGAN yields the most realistic results and the best quantitative scores.
Thompson sampling is one way of balancing exploration and exploitation: you sample from belief distributions over the options and test those samples in the real world. In this simulation of Thompson sampling, we visualize ants finding a good location for a nest. We also implement Tandem Running, which allows ants to "persuade" other ants, resulting in faster convergence.
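The ant simulation itself isn't reproduced here, but the underlying idea can be sketched as a Beta-Bernoulli bandit, with arms standing in for candidate nest sites (the rates and counts below are illustrative):

```python
import random

def thompson_step(successes, failures):
    """One Thompson-sampling step over Bernoulli arms: sample a belief
    about each arm's quality from its Beta posterior, then commit to
    the arm whose sampled quality is highest."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda i: samples[i])

random.seed(0)
true_rates = [0.2, 0.5, 0.8]          # hidden quality of each nest site
wins, losses = [0, 0, 0], [0, 0, 0]
for _ in range(2000):
    arm = thompson_step(wins, losses)
    if random.random() < true_rates[arm]:
        wins[arm] += 1
    else:
        losses[arm] += 1
```

Early on, wide posteriors make the sampled qualities noisy, so all sites get visited; as evidence accumulates, the best site's posterior concentrates and gets chosen almost every step.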
This simple Python program lets you use keyboard shortcuts to annotate audio, video, and live events with timestamped comments. It exports your annotations as copy-and-paste text that you can add to any literature review notes. I rely heavily on this tool to review hours of video for my book.
Recently, I've also made a book annotator that supports page-specific notes, letting you digitize your annotations for physical books.
As a researcher, I often find myself reusing the same code across applications. To help, I've been building a large repository of code basics: a collection of snippets for model debugging, plot making, basic PyTorch models, and more.
This web-scraping software searches the RSS feeds, YouTube channels, and Twitter accounts of your choice for new content matching relevant keywords, saving results to a SQL database for later perusal. I've also included a suite of specialty website parsers for my whale book research; while you probably won't need these, they demonstrate the modularity of the software.
Reinforcement learning in AI has its origins in the animal training of the early 1900s. While much has been abstracted away since then, I take another look at this original connection between CS and psychology. In this 90-minute talk, I explore the interplay between challenges faced in robot labs and dolphin stadiums, covering behavior cloning, reinforcement techniques, models of learning, and more.
I give this talk twice a year at Stanford Splash, an educational outreach program that serves students in the Bay Area, especially students from underrepresented and disadvantaged backgrounds.
Slide Deck Sample  /  Stanford Splash Program
I was a section leader for Stanford's introductory CS106A and intermediate CS106B computer science courses. As a section leader, I hosted weekly sessions to review lecture content in a small-group setting. I also held office hours, graded homework, provided interactive feedback, and helped grade exams.
I will be a mentor for the Deep Learning Portal outreach program starting Fall 2023. The Portal, created by the IRIS lab and the Stanford CS department, helps disadvantaged students learn AI by providing access to existing online courses and hosting live office hours. I will be helping with office hours, debugging code and answering conceptual questions.
Research and paper-writing are already rich with their own narratives. But outside of my day job, I am a creative writer, working on short fiction and creative non-fiction. Currently, I'm mostly interested in the human-animal relationship and the immigrant identity. I draw inspiration from weird places: AI papers, nighttime fishing, SeaWorld, dead skunks. Find my writing here. One of my works received a Stanford Creative Writing Prize.
I'm currently working on a non-fiction book that dives deeper into this human-animal connection through the stories of whale trainers. I have been interviewed by VICE for my work and advocacy with trainers.
CS 224R Deep Reinforcement Learning
CS 234 Reinforcement Learning
CS 224N Natural Language Processing with Deep Learning
CS 330 Deep Multi-task and Meta Learning
CS 231N Deep Learning for Computer Vision
CS 229 Machine Learning
CS 285 Deep Reinforcement Learning (Berkeley, self-study)
CS 161 Design and Analysis of Algorithms
CS 110 Principles of Computer Systems
CS 107E Systems from the Ground Up
CS 106B Programming Abstractions in C++
EE 276 Information Theory
CS 228 Probabilistic Graphical Models
MATH 115 Real Analysis
MATH 113 Linear Algebra and Matrix Theory
MATH 51 Linear Algebra and Multivariable Calculus
CS 109 Probability for Computer Science
CS 103 Mathematical Foundations of Computing
PSYCH 45 Introduction to Learning and Memory
PSYCH 30 Introduction to Perception
PSYCH 50 Cognitive Neuroscience
PSYCH 1 Intro to Psychology
ENGLISH 290 Advanced Fiction
ENGLISH 191 Intermediate Creative Non-Fiction
ENGLISH 190 Intermediate Fiction
ENGLISH 92 Introduction to Poetry
ENGLISH 91 Introduction to Creative Non-Fiction
PSYCH 118F Literature and the Brain
PHIL 2 Moral Philosophy