All Seminars

Title: Building Interactive Natural Language Interfaces
Seminar: Computer Science
Speaker: Ziyu Yao, Ohio State University
Contact: Dr. Vaidy Sunderam, VSS@emory.edu
Date: 2021-04-16 at 1:00PM
Venue: https://emory.zoom.us/j/92103915275
Abstract:
Constructing natural language interfaces (NLIs) that allow humans to acquire knowledge and complete tasks using natural language has been a long-term pursuit. This is challenging because human language can be highly ambiguous and complex. Moreover, existing NLIs typically provide no means for human users to validate the system's decisions; even if they could, most systems do not learn from user feedback to avoid similar mistakes in future deployments.
In this talk, I will introduce my research about building interactive NLIs, where an NLI is formulated as an intelligent agent that can interactively and proactively request human validation when it feels uncertain. I instantiate this idea in the task of semantic parsing (e.g., parsing natural language into a SQL query). In the first part of the talk, I will present a general interactive semantic parsing framework [EMNLP 2019], and describe an imitation learning algorithm (with theoretical analysis) for improving semantic parsers continually from user interaction [EMNLP 2020]. In the second part, I will further talk about a generalized problem of editing tree-structured data under user interaction, e.g., how to edit the Abstract Syntax Trees of computer programs based on user edit specifications [ICLR 2021]. Finally, I will conclude by outlining future work around interactive NLIs and human-centered NLP/AI in general.
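To give a concrete flavor of the uncertainty-driven interaction described above, the sketch below shows a toy agent that asks the user to confirm only those parsing decisions whose confidence falls below a threshold. It is an illustrative simplification in Python, not the framework from the EMNLP papers; the Decision class, the confidence_threshold parameter, and the ask_user callback are all assumed names.

# Illustrative sketch (not the EMNLP 2019/2020 framework): an agent that
# parses an utterance, estimates its confidence per decision, and asks the
# user to validate only the low-confidence choices.
from dataclasses import dataclass

@dataclass
class Decision:
    description: str   # e.g., "map 'papers after 2019' to WHERE year > 2019"
    confidence: float  # probability assigned by the parser (assumed available)

def interactive_parse(decisions, confidence_threshold=0.8, ask_user=input):
    """Confirm low-confidence parsing decisions with the user."""
    validated = []
    for d in decisions:
        if d.confidence >= confidence_threshold:
            validated.append((d, True))          # accept silently
        else:
            answer = ask_user(f"Did you mean: {d.description}? [y/n] ")
            validated.append((d, answer.strip().lower().startswith("y")))
    return validated

if __name__ == "__main__":
    demo = [Decision("SELECT title FROM papers", 0.95),
            Decision("WHERE year > 2019", 0.55)]
    # Auto-answer 'y' here so the example runs non-interactively.
    print(interactive_parse(demo, ask_user=lambda _: "y"))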
Bio: Ziyu Yao is a Ph.D. candidate at the Ohio State University (OSU). Her research interests lie in Natural Language Processing, Artificial Intelligence, and their applications to advance other disciplines. In particular, she has been focusing on developing natural language interfaces (e.g., question answering systems) that can reliably assist humans in various domains (e.g., Software Engineering and Healthcare). She has built collaborations with researchers at Carnegie Mellon University, Facebook AI Research, Microsoft Research, Fujitsu Laboratories of America, University of Washington, and Tsinghua University, and has published extensively at top-tier conferences in NLP (EMNLP, ACL), AI/Machine Learning (ICLR, AAAI), and Data Mining (WWW). In 2020, she was awarded the Presidential Fellowship (the highest honor given by the OSU graduate school) and was selected into the EECS Rising Stars program by UC Berkeley. Please visit her webpage for more details: https://ziyuyao.org/
Title: Multigrid Reduction for Multiphase Flow and Mixed-Precision Solvers
Seminar: Numerical Analysis and Scientific Computing
Speaker: Daniel Osei-Kuffuor, Lawrence Livermore National Lab
Contact: Yuanzhe Xi, yxi26@emory.edu
Date: 2021-04-16 at 1:30PM
Venue: https://emory.zoom.us/j/95900585494
Abstract:
Simulation of flow in porous media, such as reservoir geomechanics, involves solving multi-physics problems in which multiphase flow is tightly coupled with geomechanical processes. To capture this dynamic interplay, fully implicit methods, also known as monolithic approaches, are usually preferred. However, due to the strong coupling present in the continuous problem, efficient techniques such as algebraic multigrid (AMG) cannot be directly applied to the resulting discrete linear systems. This talk will present our efforts in developing an algebraic framework based on multigrid reduction (MGR) that is suited for tightly coupled systems of PDEs. I will demonstrate the applicability of the MGR framework to multiphase flow coupled with geomechanics and show that the framework is flexible enough to accommodate a wide range of scenarios, as well as efficient and scalable for large problems.
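As background for the reduction idea, the sketch below shows a two-level block (Schur-complement) reduction in plain NumPy: the "fine" unknowns are eliminated, the reduced coarse system is solved, and the fine unknowns are recovered by back-substitution. This is a generic textbook illustration, not the MGR framework from the talk; in practice the inverse of the fine-fine block is only approximated, and the block names (A_ff, A_fc, A_cf, A_cc) are notational assumptions.

# Two-level block reduction (Schur-complement) sketch: eliminate the "fine"
# unknowns, solve the reduced coarse system, and back-substitute.
import numpy as np

def block_reduction_solve(A_ff, A_fc, A_cf, A_cc, b_f, b_c):
    A_ff_inv = np.linalg.inv(A_ff)                 # exact here; MGR-type methods approximate this
    S = A_cc - A_cf @ A_ff_inv @ A_fc              # Schur complement (reduced operator)
    x_c = np.linalg.solve(S, b_c - A_cf @ A_ff_inv @ b_f)
    x_f = A_ff_inv @ (b_f - A_fc @ x_c)            # back-substitution for the fine unknowns
    return x_f, x_c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_f, n_c = 6, 3
    A = rng.standard_normal((n_f + n_c, n_f + n_c)) + (n_f + n_c) * np.eye(n_f + n_c)
    b = rng.standard_normal(n_f + n_c)
    x_f, x_c = block_reduction_solve(A[:n_f, :n_f], A[:n_f, n_f:],
                                     A[n_f:, :n_f], A[n_f:, n_f:],
                                     b[:n_f], b[n_f:])
    print(np.linalg.norm(A @ np.concatenate([x_f, x_c]) - b))  # small residual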

Time permitting, I will also discuss some of our recent efforts in utilizing mixed-precision strategies for numerical solvers, including multigrid solvers.
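One widely used mixed-precision strategy is iterative refinement: solve in low precision, then compute residuals and accumulate corrections in high precision. The sketch below is a minimal NumPy illustration of that textbook scheme, not the solvers discussed in the talk; a real implementation would compute the low-precision factorization once and reuse it instead of calling a dense solve each iteration.

# Mixed-precision iterative refinement (textbook scheme): solve Ax = b in
# float32 and refine the solution using float64 residuals.
import numpy as np

def mixed_precision_solve(A, b, max_iters=10, tol=1e-12):
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iters):
        r = b - A @ x                                   # residual in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        d = np.linalg.solve(A32, r.astype(np.float32))  # correction in single precision
        x += d.astype(np.float64)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200)) + 200 * np.eye(200)  # well-conditioned test matrix
    b = rng.standard_normal(200)
    x = mixed_precision_solve(A, b)
    print(np.linalg.norm(A @ x - b))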
Title: Statistical Modeling and Learning in Single-cell RNA Sequencing Data
Defense: Computer Science
Speaker: Kenong Su, Emory University
Contact: Dr. Hao Wu, hao.wu@emory.edu
Date: 2021-04-13 at 2:00PM
Venue: https://emory.zoom.us/j/94588103525
Abstract:
Single-cell RNA sequencing (scRNA-seq) technologies have revolutionized biological research, and cell clustering has become an important and commonly performed task in scRNA-seq data analysis. An essential step in scRNA-seq clustering is to select a subset of genes (referred to as “features”) whose expression patterns are then used in the downstream clustering analysis. It is worth noting that both the quality and the quantity of the feature set have a significant impact on clustering accuracy. However, almost all existing scRNA-seq clustering tools select features with simple unsupervised methods, mostly based on statistical moments, and tend to retain an arbitrary number of top-ranked features (e.g., 1,000 or 2,000) for cell clustering. In this talk, I will present FEAST (Su et al., 2021), a novel unsupervised algorithm specifically designed to select the most representative genes in scRNA-seq data before the core clustering step. Another common and practical question in scRNA-seq experiments is how to choose the number of cells needed to reach a desired power level for differential expression tests and marker gene detection. I will also present the POWSC pipeline (Su et al., 2020), a simulation-based approach that provides comprehensive power evaluation and sample size recommendations. The findings from applying POWSC can guide the design of scRNA-seq experiments.
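To make the "select features, then cluster" workflow concrete, here is a minimal Python sketch that keeps the most variable genes and then clusters cells with k-means. The variance filter and the gene/cluster counts are stand-in assumptions for illustration only; this is not the FEAST or POWSC implementation.

# Schematic of "select features, then cluster" for an scRNA-seq-like matrix.
# The variance-based filter below is a simple stand-in, not the FEAST method.
import numpy as np
from sklearn.cluster import KMeans

def select_top_features(expr, n_features=1000):
    """expr: cells x genes matrix; keep the most variable genes."""
    n_features = min(n_features, expr.shape[1])
    variances = expr.var(axis=0)
    top = np.argsort(variances)[::-1][:n_features]
    return expr[:, top], top

def cluster_cells(expr, n_clusters=5, n_features=1000, seed=0):
    reduced, _ = select_top_features(expr, n_features)
    return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(reduced)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    expr = rng.poisson(2.0, size=(300, 2000)).astype(float)  # toy count matrix
    labels = cluster_cells(expr, n_clusters=3)
    print(np.bincount(labels))  # cluster sizes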
Title: Predicting Time-to-Event and Clinical Outcomes from High-Dimensional Unstructured Data
Defense: Computer Science
Speaker: Pooya Mobadersany, Emory University
Contact: Lee Cooper, lee.cooper@northwestern.edu
Date: 2021-04-05 at 2:00PM
Venue: https://northwestern.zoom.us/j/95131004240
Abstract:
This dissertation addresses challenges in learning to predict time-to-event outcomes, such as survival and treatment response, from high-dimensional data, including the whole slide images and genomic profiles produced in modern pathology labs. Learning from these data requires integrating disparate data types and attending to important signals within the vast amounts of irrelevant data present in each sample. Furthermore, clinical translation of machine learning models for prognostication requires communicating the degree and types of uncertainty to the clinical end users who will rely on inferences from these models.
This dissertation addresses these challenges. To validate our data fusion technique, we selected cancer histology data because it reflects underlying molecular processes and disease progression and contains rich phenotypic information predictive of patient outcomes. This study presents a computational approach for learning patient outcomes from digital pathology images, using deep learning to combine the power of adaptive machine learning algorithms with survival models. We illustrate how these survival convolutional neural networks (SCNNs) can integrate information from both histology images and genomic biomarkers into a single unified framework to predict time-to-event outcomes, and we show prediction accuracy that surpasses the current clinical paradigm for predicting the overall survival of patients diagnosed with glioma. Next, to handle the volume of data and the heterogeneity within histology images, we developed GestAltNet, which emulates human attention to high-yield areas and aggregation across regions. GestAltNet points toward genuinely whole-slide digital pathology by incorporating the human-like behaviors of attention and gestalt formation across massive whole slide images. We used GestAltNet to estimate gestational age from whole slide images of placental tissue and compared it to networks lacking attention and aggregation capabilities. To address the challenge of representing uncertainty during inference, we developed a Bayesian survival neural network that captures the aleatoric and epistemic uncertainties in predicted clinical outcomes. These networks are the next generation of machine learning models for predicting time-to-event outcomes, in which the degree and source of uncertainty are communicated to clinical end users.
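Survival-aware deep models of this kind are commonly trained with the Cox negative log partial likelihood as the loss. The NumPy sketch below shows that loss under the simplifying assumption of no tied event times; it is a generic illustration, not the SCNN or GestAltNet code, and the variable names (risk, time, event) are assumptions.

# Cox negative log partial likelihood for risk scores predicted by a network.
# Minimal NumPy sketch (assumes no tied event times).
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """risk: predicted log-risk per patient; time: follow-up time;
    event: 1 if the event (e.g., death) was observed, 0 if censored."""
    order = np.argsort(-time)                        # sort by descending survival time
    risk, event = risk[order], event[order]
    log_cum_hazard = np.logaddexp.accumulate(risk)   # log-sum-exp over each risk set
    # Each observed event contributes risk_i - log(sum_{j in risk set} exp(risk_j)).
    return -np.sum((risk - log_cum_hazard)[event == 1]) / max(event.sum(), 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    risk = rng.standard_normal(8)
    time = rng.uniform(1, 100, size=8)
    event = rng.integers(0, 2, size=8)
    print(cox_neg_log_partial_likelihood(risk, time, event))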
Title: Robust Crowdsourcing and Federated Learning under Poisoning Attacks
Defense: Computer Science
Speaker: Farnaz Tahmasebian, Emory University
Contact: Dr. Li Xiong, lxiong@emory.edu
Date: 2021-03-30 at 1:00PM
Venue: https://zoom.us/j/9828106847
Abstract:
Crowd-based computing distributes tasks among many individuals or organizations, who complete them through their intelligent or computing devices. Two exciting classes of crowd-based computing are crowdsourcing and federated learning: the first is crowd-based data collection, and the second is crowd-based model learning. Crowdsourcing is a paradigm that provides a cost-effective solution for obtaining services or data from a large group of users. It is increasingly used in modern society for data collection in domains such as image annotation and real-time traffic reporting. Although crowdsourcing is cost-effective, it is an easy target for manipulation: large groups of users can be assembled to artificially boost support for organizations, products, or even opinions. Therefore, choosing an aggregation method that withstands such attacks is one of the main challenges in developing an effective crowdsourcing system. Moreover, the original aggregation algorithm in federated learning is susceptible to data poisoning attacks, and the dynamic behavior of the framework, which selects clients at random in each iteration, poses further challenges for implementing robust aggregation methods. In this dissertation, we devise strategies that improve the system's robustness under data poisoning attacks when workers intentionally or strategically misbehave.
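As a concrete example of robust aggregation, the sketch below implements a coordinate-wise trimmed mean, a common baseline defense in which the most extreme client updates are discarded before averaging. It is an illustrative baseline only, not the specific strategies developed in the dissertation; the trim_ratio parameter and the toy poisoned updates are assumptions.

# Coordinate-wise trimmed-mean aggregation: a common robust baseline for
# federated averaging under poisoned client updates.
import numpy as np

def trimmed_mean_aggregate(client_updates, trim_ratio=0.1):
    """client_updates: (n_clients, n_params) array of model updates."""
    updates = np.asarray(client_updates, dtype=float)
    n = updates.shape[0]
    k = int(n * trim_ratio)                      # clients trimmed from each tail
    sorted_updates = np.sort(updates, axis=0)    # sort each coordinate independently
    kept = sorted_updates[k:n - k] if n - 2 * k > 0 else sorted_updates
    return kept.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = rng.normal(0.0, 0.1, size=(18, 5))
    poisoned = np.full((2, 5), 50.0)             # two clients send extreme updates
    agg = trimmed_mean_aggregate(np.vstack([honest, poisoned]), trim_ratio=0.1)
    print(agg)                                   # stays close to the honest mean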
Title: COVID-19 Vaccine Design using Mathematical Linguistics
Seminar: Computer Science
Speaker: Dr. Liang Huang, Oregon State University
Contact: Jinho Choi, jinho.choi@emory.edu
Date: 2021-03-19 at 1:00PM
Venue: https://emory.zoom.us/j/92103915275
Abstract:
To defeat the current COVID-19 pandemic, a messenger RNA (mRNA) vaccine has emerged as a promising approach thanks to its rapid and scalable production and its non-infectious and non-integrating properties. However, designing an mRNA sequence that achieves high stability and protein yield remains a challenging problem due to the exponentially large search space (e.g., there are $2.4 \times 10^{632}$ possible mRNA sequence candidates for the spike protein of SARS-CoV-2). We describe two ongoing efforts on this problem, both using linear-time algorithms inspired by my earlier work in natural language parsing. On the one hand, the Eterna OpenVaccine project from Stanford Medical School takes a crowd-sourcing approach that lets game players all over the world design stable sequences. To evaluate sequence stability (in terms of free energy), they use LinearFold from my group (2019), since it is the only linear-time RNA folding algorithm available, which makes it the only one fast enough for COVID-scale genomes. On the other hand, we take a computational approach that directly searches for the optimal sequence in this exponentially large space via dynamic programming. It turns out this problem can be reduced to a classical problem in formal language theory and computational linguistics (the intersection of a CFG and a DFA), which can be solved in $O(n^3)$ time, just like lattice parsing for speech. In the end, we can design the optimal mRNA vaccine candidate for the SARS-CoV-2 spike protein in about 10 minutes. To conclude, classical results (dating back to the 1960s) from theoretical computer science and mathematical linguistics helped us tackle a very challenging and extremely important problem in fighting the COVID-19 pandemic.
Bio: Liang Huang (PhD, Penn, 2008) is an Associate Professor of Computer Science at Oregon State University and a Distinguished Scientist at Baidu Research USA. He is a leading theoretical computational linguist, recognized at ACL 2008 (Best Paper Award) and ACL 2019 (keynote speech), but in recent years he has been more interested in applying his expertise in parsing, translation, and grammar formalisms to biology problems such as RNA folding and RNA design. Since the outbreak of COVID-19, he has shifted his attention to the fight against the virus, which has resulted in efficient algorithms for stable mRNA vaccine design, adapted from classical theory and algorithms from mathematical linguistics dating back to the 1960s.
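As background on the dynamic-programming flavor shared by RNA folding and parsing mentioned in the abstract above, the sketch below implements the classical Nussinov-style cubic-time base-pair maximization DP in Python. It is a textbook algorithm shown only for illustration; it is neither LinearFold nor the mRNA design method from the talk, and the minimum loop length of 3 is an assumed convention.

# Nussinov-style dynamic program that maximizes base pairs in an RNA sequence.
# Classical O(n^3) textbook algorithm, included only to illustrate the
# parsing-like DP structure of RNA folding.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_base_pairs(seq, min_loop=3):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):            # span = j - i, increasing
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                    # case 1: j is unpaired
            for k in range(i, j - min_loop):       # case 2: j pairs with k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

if __name__ == "__main__":
    print(max_base_pairs("GGGAAAUCC"))   # small toy sequence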
Title: Optimal Control Approaches for Designing Neural Ordinary Differential Equations
Defense: Computer Science
Speaker: Derek Onken, Emory University
Contact: Lars Ruthotto, lruthotto@emory.edu
Date: 2021-03-10 at 1:00PM
Venue: https://emory.zoom.us/j/98688786075?pwd=ampLTG4reEV3ak5nbEJZUVdwRnljQT09
Abstract:
Neural network design encompasses both model formulation and the numerical treatment used for optimization and parameter tuning. Recent research in formulation focuses on interpreting architectures as discretizations of continuous ordinary differential equations (ODEs). These neural ODEs, in which the ODE dynamics are defined by neural network components, benefit from reduced parameterization and smoother hidden states than traditional discrete neural networks, but come at a high computational cost. Training a neural ODE can be phrased as an ODE-constrained optimization problem, which allows the application of mathematical optimal control (OC). Applying OC theory leads to design choices that differ from popular high-cost implementations. We improve the numerical treatment and formulation of neural ODEs for models used in time-series regression, image classification, continuous normalizing flows, and path-finding problems.
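The connection between discrete architectures and continuous ODEs can be seen in a few lines: discretizing dx/dt = f(x; theta) with forward Euler yields a residual-style update x <- x + h f(x). The NumPy sketch below illustrates this for a single tanh layer; it is a minimal illustration under assumed dynamics, not the models or the optimal-control-based training from the defense.

# Forward-Euler discretization of a neural ODE dx/dt = f(x; theta):
# each time step becomes a residual-network-like update x <- x + h * f(x).
import numpy as np

def f(x, W, b):
    """Simple single-layer dynamics: tanh(W x + b)."""
    return np.tanh(W @ x + b)

def neural_ode_forward(x0, W, b, t_final=1.0, n_steps=10):
    h = t_final / n_steps
    x = x0.copy()
    for _ in range(n_steps):
        x = x + h * f(x, W, b)   # one forward-Euler (ResNet-like) step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 4
    W, b = rng.standard_normal((d, d)) * 0.5, rng.standard_normal(d) * 0.1
    x0 = rng.standard_normal(d)
    print(neural_ode_forward(x0, W, b, n_steps=20))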
Title: Machine Translation for All
Seminar: Computer Science
Speaker: Huda Khayrallah, Johns Hopkins University
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2021-02-15 at 10:00AM
Venue: https://emory.zoom.us/j/92558356951
Abstract:
Machine translation uses machine learning to automatically translate text from one language to another and has the potential to reduce language barriers. Recent improvements have made machine translation more widely usable, due in part to deep neural network approaches. However, like most deep learning algorithms, neural machine translation is sensitive to the quantity and quality of training data and therefore produces poor translations for some languages and styles of text. Machine translation training data typically comes in the form of parallel text: sentences translated between the two languages of interest. Limited quantities of parallel text are available for most language pairs, leading to a low-resource problem. Even when training data is available in the desired language pair, it is frequently formal text, leading to a domain mismatch when models are used to translate a different type of data, such as social media or medical text. Neural machine translation currently performs poorly in low-resource and domain-mismatch settings; my work aims to overcome these limitations and make machine translation a useful tool for all users.

In this talk, I will discuss a method for improving translation in low-resource settings, Simulated Multiple Reference Training (SMRT; Khayrallah et al., 2020), which uses a paraphraser to simulate training on all possible translations of each sentence. I will also discuss work on improving domain adaptation (Khayrallah et al., 2018) and on analyzing the effect of noisy training data (Khayrallah and Koehn, 2018).
Title: Mining and Learning from Graph Processes
Seminar: Computer Science
Speaker: Arlei Lopes Da Silva, University of California, Santa Barbara
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2021-02-12 at 1:00PM
Venue: https://emory.zoom.us/j/93293219464
Abstract:
The digital transformation has given rise to a new form of science driven by data. Graphs (or networks) are a powerful framework for solving data science problems, especially when the goal is to extract knowledge from, and make predictions about, the dynamics of complex systems such as those arising in epidemiology, social media, and infrastructure. However, this representational power comes at a cost: graphs are highly combinatorial structures, leading to challenges in the search, optimization, and learning tasks that are relevant to modern real-world applications.

In this talk, I will give an overview of my recent work on new algorithms and models for mining and learning from graph data. First, I will show how the interplay between graph structure and its dynamics can be exploited for pattern mining and inference in networked processes, for example to improve the effectiveness of testing during a pandemic. Then, I will focus on machine learning on graphs, describing novel deep learning and optimization approaches for predicting graph data, such as traffic forecasting. Finally, I will introduce combinatorial algorithms for optimization on graphs that enable us to attack or defend their core structure, among other applications. I will end by briefly contextualizing my ongoing work as part of a broader research agenda, with new related problems that I plan to address in the next few years.
Title: Addressing Biases for Robust, Generalizable AI
Seminar: Computer Science
Speaker: Swabha Swayamdipta, Allen Institute for AI
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2021-02-10 at 1:00PM
Venue: https://emory.zoom.us/j/95438087188
Abstract:
Artificial Intelligence has made unprecedented progress in the past decade. However, there still remains a large gap between the decision-making capabilities of humans and machines. In this talk, I will investigate two factors to explain why. First, I will discuss the presence of undesirable biases in datasets, which ultimately hurt generalization regardless of dataset size. I will then present bias mitigation algorithms that boost the ability of AI models to generalize to unseen data. Second, I will explore task-specific prior knowledge, which aids robust generalization but is often ignored when training modern AI architectures on large amounts of data. In particular, I will show how linguistic structure can provide useful biases for inferring shallow semantics, which help in natural language understanding. I will conclude with a discussion of how this framework of dataset and model biases could play a critical role in the societal impact of AI going forward.