All Seminars
Title: Human-AI Systems for Making Videos Useful
Seminar: Computer Science
Speaker: Amy Pavel, Carnegie Mellon University
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2021-02-08 at 10:00AM
Venue: https://emory.zoom.us/j/99774155333
Abstract: Video has become a primary medium for communication. Videos including explainers, how-to tutorials, lectures, and vlogs increasingly eclipse their text counterparts. While videos can be engaging to watch, they can be challenging to use when seeking information. First, the timeline-based interfaces we use for videos are difficult to skim and browse because they lack the structure of text. Second, the rich audio and visual content in videos can be inaccessible for people with disabilities. What future systems will make videos useful for all users? In this talk, I’ll share my work creating hybrid AI and interactive systems that leverage multiple media of communication (e.g., text, video, and audio) across two main research areas: 1) helping domain experts surface content of interest through interactive video abstractions, and 2) making videos non-visually accessible through interactions for video accessibility. First, I will share core challenges of video informed by interviews with domain experts. I will then share new interactive systems that leverage state-of-the-art AI/ML techniques, and evaluations demonstrating the efficacy of these systems. I will conclude with future research directions on how hybrid HCI-AI breakthroughs will improve digital communication, and how designing new interactions can help us to realize the full potential of AI/ML advances.
Title: Positive AI with Social Commonsense Models
Seminar: Computer Science
Speaker: Maarten Sap, University of Washington
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2021-02-05 at 1:00PM
Venue: https://emory.zoom.us/j/92294085195
Abstract: To effectively understand language and safely communicate with humans, machines must not only grasp the surface meanings of texts, but also their underlying social meaning. This requires understanding interpersonal social commonsense, such as knowing to thank someone for giving you a present, as well as understanding harmful social biases and stereotypes. Failure to account for these social and power dynamics could cause models to produce redundant, rude, or even harmful outputs. In this talk, I will describe my research on enabling machines to reason about social dynamics and social biases in text. I will first discuss ATOMIC, the first large-scale knowledge graph of social and interpersonal commonsense knowledge, with which machines can be taught to reason about the causes and effects of everyday events. Then, I will show how we can make machines understand and mitigate social biases in language, using Social Bias Frames, a new structured formalism for distilling biased implications of language, and PowerTransformer, a new unsupervised model for controllable debiasing of text. I will conclude with future research directions on making NLP systems more socially-aware and equitable, and how to use language technologies for positive societal impact.
Title: What We Miss if We Don’t Talk to People: Understanding Users’ Diverse and Nuanced Privacy Needs
Seminar: Computer Science
Speaker: Camille Cobb, Carnegie Mellon University
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2021-02-03 at 10:00AM
Venue: https://emory.zoom.us/j/91087670904
Abstract: In security and privacy research, we usually think about protecting against powerful adversaries who have substantial resources and strong technical abilities. Those types of threats are important to address, but are often not well-aligned with typical users’ privacy concerns. Instead, users frequently worry about information disclosure to their friends, family, coworkers, or employers; and they may face tradeoffs between their desires for privacy and other goals such as convenience, financial security, and personal connection. For example, despite the risks it can pose, people using online dating apps may decide to share personal, potentially sensitive information to increase their chances of finding a romantic partner. I will discuss my prior and ongoing work, which takes a human-centered approach to understanding and addressing security and privacy concerns that affect users on a daily basis. First, I explore how real users’ smart home devices may introduce risks -- including to stakeholders who had no choice in their installation or configuration (e.g., children, visitors, neighbors, or household employees such as babysitters). Next, I discuss how online status indicators -- UI elements that communicate when users are actively online -- can lead to interpersonal tensions or make users contort their behaviors to achieve a desired self-presentation. In each project I show that users have nuanced and diverse technology goals and risk profiles, and that existing technologies fail to sufficiently support users. I discuss potential solutions and outline future research directions.
Title: Trustworthy Machine Learning: On the Preservation of Individual Privacy and Fairness
Seminar: Computer Science
Speaker: Xueru Zhang, University of Michigan
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2021-02-01 at 10:00AM
Venue: https://emory.zoom.us/j/92280212733
Abstract: Machine learning (ML) techniques have seen significant advances over the last decade and are playing an increasingly critical role in people's lives. While their potential societal benefits are enormous, they can also inflict great harm if not developed or used with care. In this talk, I will focus on two critical ethical issues in ML systems: fairness and privacy, and present mitigating solutions in various scenarios. On the fairness front, although many fairness criteria have been proposed to measure and remedy biases in ML systems, their impact is often only studied in a static, one-shot setting. In the first part of my talk, I will present my work on evaluating the long-term impact of (fair) ML decisions on population groups that are repeatedly subject to such decisions. I will illustrate how imposing common fairness criteria intended to protect disadvantaged groups may lead to undesirable pernicious long-term consequences by exacerbating inequality. I will then discuss a number of potential mitigations. On the privacy front, when ML models are trained over individuals’ personal data, it is critical to preserve their individual privacy while maintaining a sufficient level of model accuracy. In the second part of the talk, I will illustrate two key ideas that can be used to balance an algorithm’s privacy-accuracy tradeoff: (1) reuse intermediate results to reduce information leakage; and (2) improve algorithmic robustness to accommodate more randomness. I will present a randomized, privacy-preserving algorithm that leverages these ideas in the context of distributed learning. We show that our algorithm significantly improves on the privacy-accuracy tradeoff of existing algorithms.
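The privacy-accuracy tradeoff described in the abstract above can be made concrete with the classic Gaussian mechanism of differential privacy. This is a minimal illustrative sketch of the general idea, not the speaker's distributed algorithm; the `gaussian_mechanism` helper and all parameter values are hypothetical:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta):
    """Release a scalar with (epsilon, delta)-differential privacy
    by adding Gaussian noise calibrated to the query's sensitivity."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return value + np.random.normal(0.0, sigma)

# Releasing the mean of 10,000 values (sensitivity 1/n): a smaller
# epsilon means stronger privacy, more noise, and lower accuracy.
data = np.random.rand(10_000)
true_mean = float(data.mean())
for eps in (0.1, 1.0, 10.0):
    noisy = gaussian_mechanism(true_mean, sensitivity=1.0 / len(data),
                               epsilon=eps, delta=1e-5)
    print(f"epsilon={eps}: error={abs(noisy - true_mean):.6f}")
```

Running the loop shows the error shrinking as epsilon grows, which is the tradeoff the talk proposes to improve by reusing intermediate results and exploiting algorithmic robustness.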
Title: Blockchains: Fundamental Concepts, Challenges, and Future Directions
Seminar: Computer Science
Speaker: Jack Kolb, University of California, Berkeley
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2021-01-20 at 10:30AM
Venue: https://emory.zoom.us/j/95286467335
Abstract: Blockchains are a topic of immense interest in academia and industry, but their true nature is often obscured by marketing and hype. In this talk, I will explain the fundamental elements of blockchains and how they relate to traditional ideas in distributed computing. Blockchains have unique capabilities in terms of availability, consistency, and data integrity but also suffer from several limitations. I will discuss these issues and then focus on the challenges in building secure blockchain-based applications and recent efforts that aim to address these challenges.
Title: The Search For Causal Explanations In The Presence Of Latent Confounders
Seminar: Computer Science
Speaker: Rohit Bhattacharya, Johns Hopkins University
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2021-01-12 at 10:30AM
Venue: https://emory.zoom.us/j/95717646071
Abstract: The task of establishing a causal model that best explains the data is fundamental across scientific disciplines. However, data-driven causal model selection, a.k.a. causal discovery, is often complicated by the presence of latent confounders, which makes it difficult to tease apart causal relations from spurious correlations. In this talk, I first motivate the need for causal model selection via my research in computational oncogenomics. I then describe my contributions to the development of algorithms for causal discovery in the presence of latent confounders, and other related phenomena such as latent homophily. I conclude with a forward-looking research agenda in the development of causal inference and missing data methods to correct for understudied yet ubiquitous sources of bias, with an emphasis on applications that improve public health outcomes.
Title: Bio-inspired Swarm Robotics: Natural algorithms to monitor nature
Seminar: Computer Science
Speaker: Melanie Moses, UNM
Contact: Dr. Vaidy Sunderam, vss@emory.edu
Date: 2020-11-20 at 1:00PM
Venue: https://emory.zoom.us/j/92722816908
Abstract: Natural systems have evolved decentralized, collective behaviors that are much more adaptive, flexible, and robust than anything built by humans. For example, right now trillions of T cells are crawling through your tissues, without a blueprint of your body or centralized instructions, protecting you from viruses and tumors. Uncountable numbers of ants crawl across forest canopies, desert sands and perhaps your kitchen counter, and each species uses its own decentralized strategy that tailors a small repertoire of sensing, navigation, and communication behaviors to forage effectively in its environment. Yet it remains a formidable challenge to engineer flexible and cooperative robotic systems that can function in the real world. We emulate natural search behaviors in robotic swarms that sense, navigate and communicate to search effectively in unmapped environments. We show that provably efficient search algorithms that work well in theory are not necessarily the best algorithms in practice, and that bio-inspired designs can effectively scale to thousands of robots. We implement search algorithms in ground robots designed for NASA to explore for resources and support human settlements on other planets and in UAVs designed to monitor gases emitted from volcanoes. Bio: Melanie Moses is a Professor of Computer Science with a secondary appointment in Biology at the University of New Mexico (UNM) and an external faculty member of the Santa Fe Institute (SFI). Her current research includes the VolCAN project to develop a swarm of autonomous adaptive robots to predict volcanic eruptions, and SIMCoV, a spatial model of COVID-19 lung infection and immune response. She is a co-PI on two AI research institutes: one at SFI to rethink the foundations of intelligence, and the Proteus Institute at the University of Vermont. She is a member of the UNM/SFI Interdisciplinary Working Group on Algorithmic Justice and the CRA Computing Community Consortium. She recently led the NASA Swarmathon and NM CSforAll educational programs for thousands of high school and undergraduate students.
Title: Integrating Machine Learning and Discrete Optimization
Seminar: Computer Science
Speaker: Bistra Dilkina, USC Viterbi
Contact: Ymir Vigfusson, ymir@mathcs.emory.edu
Date: 2020-11-13 at 1:00PM
Venue: https://emory.zoom.us/j/92722816908
Abstract: Solving some of the most challenging environmental and societal problems of our times will require solving complex problems with limited resources, often at large scale. Applications such as wildlife conservation planning, disaster-resilient infrastructure planning and tuberculosis treatment can all benefit from improved integration between machine learning and combinatorial optimization. I will demonstrate how one can rethink the traditional branch-and-bound tree search for Mixed Integer Programming through the lens of learning-driven algorithm design to create more flexible combinatorial solvers able to learn tailored solution strategies over distributions of instances. In the opposite direction, I will also illustrate how combinatorial optimization can be directly integrated into deep learning pipelines to facilitate decision-focused learning -- where the training loss is a function of the quality of downstream optimization decisions based on parameters estimated by the ML model. Bio: Bistra Dilkina is an Associate Professor of Computer Science at the University of Southern California. She is also the co-Director of the USC Center for AI in Society (CAIS). During 2013-2017, Dilkina was an Assistant Professor in the College of Computing at the Georgia Institute of Technology and a co-director of the Data Science for Social Good Atlanta summer program. She received her PhD from Cornell University in 2012, and was a Post-Doctoral Associate at the Institute for Computational Sustainability until 2013. Dilkina is one of the junior faculty leaders in the young field of Computational Sustainability, and has co-organized workshops, tutorials, and special tracks at major conferences on Computational Sustainability and related subareas. Her work spans discrete optimization, network design, and machine learning.
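The decision-focused learning idea in the abstract above can be illustrated with a toy example: a prediction with lower MSE can still induce a worse downstream decision. This sketch only conveys the general motivation, not the speaker's method; `decision_regret` and the sample values are invented for the example:

```python
import numpy as np

def decision_regret(pred, true):
    """Loss of the downstream decision (pick the item with the highest
    predicted value) relative to the best possible decision."""
    chosen = np.argmax(pred)
    return true.max() - true[chosen]

true_vals = np.array([1.0, 1.1])
pred_a = np.array([1.05, 1.00])  # small MSE, but ranks the items wrongly
pred_b = np.array([0.50, 0.70])  # large MSE, but ranks them correctly

for name, pred in [("A", pred_a), ("B", pred_b)]:
    mse = np.mean((pred - true_vals) ** 2)
    print(name, "mse =", mse, "regret =", decision_regret(pred, true_vals))
```

Prediction A wins on MSE yet incurs positive regret, while B does the opposite; decision-focused learning trains the predictor against a loss like this regret rather than against prediction error alone.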
Title: Visual Text Analytics and its Applications
Seminar: Computer Science
Speaker: Wenwen Dou, UNCC
Contact: Dr. Emily Wall, emily.wall@emory.edu
Date: 2020-11-06 at 1:00PM
Venue: https://emory.zoom.us/j/92722816908
Abstract: The increasing amount of textual data bears valuable insights in domains including business intelligence and public policy. While automated text-analysis algorithms produce compelling results on summarizing and mining textual data, the end results are often too complex for average users to make decisions upon. In this talk, I will introduce my research on integrating automated data-analysis algorithms with visual analytics systems that help decision makers make sense of large-scale textual data interactively. I will introduce applications that integrate text-analysis algorithms and interactive visualization of the topics and events. These applications not only help domain experts make decisions based on insights gained from textual data, but also serve as platforms to study human biases during decision making. Short bio: Dr. Wenwen Dou is currently an assistant professor in the College of Computing and Informatics and a core faculty member at the Charlotte Visualization Center at the University of North Carolina at Charlotte. Her research interests include Visual Analytics, Text Mining, and Human Computer Interaction. She works in the cutting-edge research area of Visual Text Analytics, which integrates statistical and machine learning methods with powerful interactive visualization for analyzing large amounts of textual data. Dou has worked with various analytics domains to reduce information overload and provide interactive visual means for analyzing unstructured information. She has experience turning research into technologies that have broad societal impacts, partially demonstrated by support from both academic and industry partners, including the Pacific Northwest National Laboratory, US Army Research Office, US Special Operations Command, National Science Foundation, US Army Engineering Research and Development Center, and Lowe's Companies, Inc. Dou has served on the organizing and program committees of the IEEE VIS conference, the premier conference for visualization research.
Title: Machine Learning Systems for the Data Tsunami
Seminar: Computer Science
Speaker: Christopher De Sa, Cornell University
Contact: Ymir Vigfusson, ymir@mathcs.emory.edu
Date: 2020-10-30 at 1:00PM
Venue: https://emory.zoom.us/j/92722816908
Abstract: Much of the recent advancement in machine learning has been driven by the capability of machine learning systems to process and learn from very large data sets using very complicated models. Continuing to scale data up in this way—to handle the present "data tsunami"—presents a computational and algorithmic challenge, as power, memory, and time are all factors that limit performance. In this talk, I will discuss some recent advances from my lab that address these issues at every level of the systems stack, including algorithmic changes that make accurate statistical inference on large datasets feasible, numerical changes that increase our capabilities to train complicated models over unreliable networks, and principled approaches that ensure the accountability of large-scale learning systems. Bio: Christopher De Sa is an Assistant Professor in the Cornell Department of Computer Science, with additional field membership in ECE and Statistics. His research covers algorithmic, software, and hardware techniques for high-performance machine learning, with a focus on relaxed-consistency variants of stochastic algorithms such as asynchronous and low-precision stochastic gradient descent (SGD) and Markov chain Monte Carlo.
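The low-precision SGD mentioned in the bio above can be sketched with unbiased stochastic rounding, a standard building block in that literature. This is an illustrative toy, not code from the speaker's lab; the `quantize` helper and all hyperparameters are assumptions:

```python
import numpy as np

def quantize(x, bits=8, scale=8.0):
    """Stochastically round x onto a low-precision grid. Rounding up
    with probability equal to the fractional part keeps the quantizer
    unbiased, so SGD still converges in expectation."""
    levels = 2 ** (bits - 1) - 1
    y = np.clip(x / scale, -1.0, 1.0) * levels  # map to grid units
    low = np.floor(y)
    rounded = low + (np.random.rand(*x.shape) < (y - low))
    return rounded / levels * scale

# SGD for least squares using only 8-bit quantized gradients.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
y = X @ w_true
w = np.zeros(2)
for _ in range(500):
    i = rng.integers(200)
    grad = (X[i] @ w - y[i]) * X[i]
    w -= 0.05 * quantize(grad)
print(w)  # close to w_true despite the low-precision gradients
```

Because the rounding is unbiased, the quantization acts like extra zero-mean gradient noise, which is one sense in which "improving algorithmic robustness to accommodate more randomness" pays off.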