Title: Improving Policy Learning via Programmatic Domain Knowledge
Seminar: Computer Science
Speaker: Yisong Yue, Caltech
Contact: Ymir Vigfusson, email@example.com
Date: 2020-09-25 at 1:00 PM
This talk explores how to leverage programmatic domain knowledge to improve policy learning, which includes both reinforcement and imitation learning. I will consider two aspects. First, how can we express policy classes using domain-specific programming languages to yield inductive biases that enable sample-efficient learning while preserving flexibility and improving interpretability? Second, building on the data programming paradigm from supervised learning, how can we use expert-written programs as a form of auxiliary supervision to improve the reliability of policy learning? I will present problem framings, algorithms, and experiments for two settings: efficient learning of formally certified policies, and controllable generation of behaviors.
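To make the first idea concrete, here is a toy sketch (not drawn from the talk) of what a "programmatic policy" might look like: instead of a neural network, the policy is a short program built from interpretable constructs such as if-then-else branching over affine combinations of state features. The state layout and coefficients below are hypothetical, chosen only for illustration.

```python
# Toy illustration of a programmatic policy: a controller expressed in a
# small "DSL" of branches and affine rules rather than as a neural network.
# State layout (hypothetical, cart-pole style): (pos, vel, angle, ang_vel).

def programmatic_policy(state):
    """Return a discrete action (0 = push left, 1 = push right)."""
    pos, vel, angle, ang_vel = state
    # Branch on the pole angle, then apply a simple affine rule per branch.
    # The coefficients 2.0 and 0.5 are illustrative, not learned values.
    if angle > 0.0:
        return 1 if 2.0 * angle + 0.5 * ang_vel > 0.0 else 0
    else:
        return 0 if -2.0 * angle - 0.5 * ang_vel > 0.0 else 1

action = programmatic_policy((0.0, 0.1, 0.05, -0.02))
print(action)  # pole tilted right and affine score positive, so action 1
```

Because the policy is a small program, each action can be traced to an explicit rule, which is the interpretability benefit the abstract alludes to; restricting the search to such programs is what supplies the inductive bias.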
Bio: Yisong Yue is a professor of Computing and Mathematical Sciences at the California Institute of Technology. He was previously a research scientist at Disney Research, and before that a postdoctoral researcher in the Machine Learning Department and the iLab at Carnegie Mellon University. He received a Ph.D. from Cornell University and a B.S. from the University of Illinois at Urbana-Champaign. Yisong's research interests are centered on machine learning, in particular on getting theory to work in practice. To that end, his research agenda spans both fundamental and applied pursuits. His research has been applied to information retrieval, recommender systems, text classification, learning from rich user interfaces, analyzing implicit human feedback, data-driven animation, behavior analysis, sports analytics, experiment design for science, protein engineering, program synthesis, learning-accelerated optimization, robotics, and adaptive planning and allocation problems.