Hello, I am Evan Pu


Postdoc under Prof. Armando Solar-Lezama
at the Massachusetts Institute of Technology

My Research


My research focuses on the interplay between AI and Programming Languages. In particular, I'm interested in designing computations that can be learned by machines, as human-learnability and machine-learnability are often incompatible. Specifically, my research covers the topics of active learning and program synthesis. Here is my Thesis Presentation

Research Topics


Write, Execute, Assess: Program Synthesis with a REPL (NeurIPS 2019)

Kevin Ellis*, Maxwell Nye*, Yewen Pu*, Felix Sosa*, Josh Tenenbaum, Armando Solar-Lezama

We present a neural program synthesis approach integrating components which write, execute, and assess code to navigate the search space of possible programs. We equip the search process with an interpreter, or read-eval-print loop (REPL), which immediately executes partially written programs, exposing their semantics. The REPL addresses a basic challenge of program synthesis: tiny changes in syntax can lead to huge changes in semantics. We train a pair of models: a policy that proposes the new piece of code to write, and a value function that assesses the prospects of the code written so far. At test time we combine these models with a Sequential Monte Carlo algorithm. We apply our approach to two domains: synthesizing text-editing programs and inferring 2D and 3D graphics programs.

arXiv
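
To give a flavor of the approach, here is a minimal Python sketch of the write-execute-assess loop on a toy string-editing domain. The four-op DSL, the hand-coded policy and value heuristics, and all names (repl_execute, smc_synthesize, and so on) are illustrative stand-ins of my own, not the paper's actual models or code.

import random

# Toy DSL: a program is a list of ops, applied left to right to the input string.
OPS = {
    "upper": lambda s: s.upper(),
    "lower": lambda s: s.lower(),
    "drop1": lambda s: s[1:],
    "dup":   lambda s: s + s,
}

def repl_execute(partial_program, inp):
    """The 'REPL': run a partially written program to expose its semantics."""
    out = inp
    for op in partial_program:
        out = OPS[op](out)
    return out

def value(state, target):
    """Stand-in for the learned value network: score a partial program by the
    longest common prefix between its current output and the target."""
    n = 0
    for a, b in zip(state, target):
        if a != b:
            break
        n += 1
    return (n + 1.0) / (len(target) + 1.0)

def policy(state, target):
    """Stand-in for the learned policy network: propose the next op uniformly."""
    return random.choice(list(OPS))

def smc_synthesize(inp, target, n_particles=100, max_steps=4):
    """Sequential Monte Carlo over partial programs: write one op per particle,
    execute it in the REPL, assess it with the value, then resample."""
    particles = [[] for _ in range(n_particles)]
    for _ in range(max_steps):
        # write: every particle proposes one more op
        particles = [p + [policy(repl_execute(p, inp), target)] for p in particles]
        # execute + assess: return any finished solution, weight the rest
        for p in particles:
            if repl_execute(p, inp) == target:
                return p
        weights = [value(repl_execute(p, inp), target) for p in particles]
        particles = random.choices(particles, weights=weights, k=n_particles)
    return None

print(smc_synthesize("abc", "ABCABC"))  # typically finds ['upper', 'dup']

In the paper, the policy and value are learned networks conditioned on the REPL's intermediate outputs; the resampling step is what lets unpromising partial programs die off early.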

Selecting Representative Examples for Program Synthesis (ICML 2018)

Yewen Pu, Zachery Miranda, Leslie P Kaelbling, Armando Solar-Lezama

Program synthesis is a class of regression problems where one seeks a solution, in the form of a source-code program, mapping the inputs to their corresponding outputs exactly. Due to its precise and combinatorial nature, program synthesis is commonly formulated as a constraint satisfaction problem, where input-output examples are encoded as constraints and solved with a constraint solver. A key challenge of this formulation is scalability: while constraint solvers work well with a few well-chosen examples, a large set of examples can incur significant overhead in both time and memory. We describe a method to discover a subset of examples that is both small and representative: the subset is constructed iteratively, using a neural network to predict the probability of unchosen examples conditioned on the chosen examples in the subset, and greedily adding the least probable example. We empirically evaluate the representativeness of the subsets constructed by our method, and demonstrate such subsets can significantly improve synthesis time and stability.

arXiv GitHub
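
Here is a minimal sketch of the greedy subset-construction loop. The neural predictor is replaced by a hand-coded similarity heuristic, and the data and every name (predict_prob, select_representative) are made up for illustration.

def similarity(e1, e2):
    """Toy similarity: do two input-output pairs look like the same rule?"""
    (i1, o1), (i2, o2) = e1, e2
    same_diff = (o1 - i1) == (o2 - i2)   # both fit 'output = input + c'
    same_ratio = o1 * i2 == o2 * i1      # both fit 'output = c * input'
    return 1.0 if (same_diff or same_ratio) else 0.0

def predict_prob(example, chosen):
    """Stand-in for the neural net p(example | chosen): an example is deemed
    predictable to the degree it resembles an already-chosen example."""
    return max(similarity(example, c) for c in chosen)

def select_representative(examples, k):
    """Greedy loop: repeatedly add the example the model finds least probable
    given the subset so far -- the most surprising one."""
    chosen = [examples[0]]            # seed with an arbitrary example
    remaining = list(examples[1:])
    while len(chosen) < k and remaining:
        probs = [predict_prob(e, chosen) for e in remaining]
        chosen.append(remaining.pop(probs.index(min(probs))))
    return chosen

# Six input-output pairs drawn from two different rules; three representatives
# should cover both rules before adding redundant examples.
examples = [(1, 2), (2, 4), (3, 6), (10, 9), (11, 10), (12, 11)]
print(select_representative(examples, 3))  # -> [(1, 2), (10, 9), (2, 4)]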

Learning to Acquire Information (UAI 2017)

Yewen Pu, Leslie P Kaelbling, Armando Solar-Lezama

We consider the problem of diagnosis, where a set of simple observations is used to infer a potentially complex hidden hypothesis. Finding the optimal subset of observations is intractable in general, so we focus on the problem of active diagnosis, where the agent selects the next most-informative observation based on the results of previous observations. We show that under the assumption of uniform observation entropy, one can build an implication model that directly predicts the outcome of a potential next observation conditioned on the results of past observations, and select the observation with maximum entropy. This approach enjoys reduced computational complexity by bypassing the complicated hypothesis space, and can be trained on observation data alone, learning how to query without knowledge of the hidden hypothesis.

arXiv GitHub
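
A toy sketch of the resulting max-entropy query loop is below. The implication model is stood in for by counting consistent worlds in a small hand-made table (the paper learns it from observation data alone), and the 4-bit diagnosis world and all names are my own inventions for illustration.

import math
import random

# Toy diagnosis world: the hidden hypothesis is one of six 4-bit strings, and
# each observation reveals one bit of it.
WORLDS = ["0000", "0101", "0110", "1100", "1111", "1010"]

def implication_model(history, query):
    """Stand-in for the learned predictor p(observation = 1 | past observations):
    here we just count the worlds consistent with what we have seen so far."""
    consistent = [w for w in WORLDS
                  if all(w[i] == v for i, v in history.items())]
    return sum(w[query] == "1" for w in consistent) / len(consistent)

def entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def diagnose(true_world):
    """Repeatedly make the observation whose predicted outcome is most
    uncertain, stopping once every remaining observation is fully predictable."""
    history, unseen = {}, list(range(4))
    while unseen:
        best_entropy, query = max(
            (entropy(implication_model(history, q)), q) for q in unseen)
        if best_entropy == 0.0:
            break  # the rest is implied by what we already observed
        unseen.remove(query)
        history[query] = true_world[query]
    return history

print(diagnose(random.choice(WORLDS)))  # queries only the informative bits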

sk_p: a neural program corrector for MOOCs (OOPSLA Workshop 2016)

Yewen Pu, Karthik Narasimhan, Armando Solar-Lezama, Regina Barzilay

We present a novel technique for automatic program correction in MOOCs, capable of fixing both syntactic and semantic errors without manual, problem-specific correction strategies. Given an incorrect student program, it generates candidate programs from a distribution of likely corrections, and checks each candidate for correctness against a test suite. The key observation is that in MOOCs many programs share similar code fragments, and that the seq2seq neural network model, used in the natural-language-processing task of machine translation, can be modified and trained to recover these fragments. Experiments show our scheme corrects 29% of all incorrect submissions and outperforms a state-of-the-art approach that requires manual, problem-specific correction strategies.

arXiv
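
Below is a minimal sketch of the generate-and-check loop: propose whole-line rewrites in order of likelihood, run each candidate against the test suite, and return the first one that passes. The fragment list stands in for the seq2seq model's ranked proposals, and the buggy mysum submission, its tests, and all names here are hypothetical.

COMMON_FRAGMENTS = [          # stand-in for the seq2seq's ranked line rewrites
    "        total += x",
    "        total = total + x",
    "        total *= x",
]

def candidate_programs(broken_lines):
    """Yield candidate corrections, each replacing one line of the submission
    with a likely fragment. The real system ranks whole-line rewrites with a
    seq2seq model trained on other students' submissions."""
    for i in range(len(broken_lines)):
        for fragment in COMMON_FRAGMENTS:
            yield "\n".join(broken_lines[:i] + [fragment] + broken_lines[i + 1:])

def passes_tests(source, tests):
    """Check a candidate against the problem's test suite; any crash or
    syntax error simply disqualifies it."""
    env = {}
    try:
        exec(source, env)
        return all(env["mysum"](inp) == out for inp, out in tests)
    except Exception:
        return False

def correct(broken_lines, tests):
    for candidate in candidate_programs(broken_lines):
        if passes_tests(candidate, tests):
            return candidate
    return None

# A buggy student submission that subtracts instead of adding.
buggy = ["def mysum(xs):",
         "    total = 0",
         "    for x in xs:",
         "        total -= x",
         "    return total"]
print(correct(buggy, tests=[([1, 2, 3], 6), ([], 0)]))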

Publications


Scholar DBLP

evanthebouncy @ internet


I am always "evanthebouncy" on the internet. I sometimes collect random datasets in person, like 1000 hand-drawn pineapples (github). I write blogs on reinforcement learning and games (medium). I stream my work regularly, if you want to donate some twitch prime subscriptions (twitch.tv).

Resume


last updated 2019-09-10

Let's Talk!


yewenpu AT mit DOT edu