All Seminars

Title: Learning Movement Representations of Small Humans with Small Data
Seminar: Computer Science
Speaker: Sarah Ostadabbas, Northeastern University
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2022-11-04 at 1:00PM
Venue: MSC W201
Abstract:
Closely tracking the development of motor functioning in infants provides prodromal risk markers of many developmental disruptions such as autism spectrum disorder (ASD), cerebral palsy (CP), and developmental coordination disorder (DCD), among others. Screening for motor delays would allow earlier and more targeted interventions with a cascading effect on multiple domains of infant development, including communication, social, cognitive, and memory skills. However, only about 29% of US children under 5 years of age receive developmental screening, due to the expense and shortage of testing resources, which contributes negatively to lifelong outcomes for infants at risk for developmental delays. My research aims to learn and quantify visual representations of motor function in infants towards designing an accessible and affordable video-based screening technology for their motor skills, by developing novel data-/label-efficient AI techniques including biomechanically constrained synthetic data augmentation, semantic-aware domain adaptation, and human-AI co-labeling algorithms.

While several powerful human behavior recognition and tracking algorithms exist, models trained on large-scale adult activity datasets have limited success in estimating infant movements due to the significant differences in body ratios, the complexity of infant poses, and the types of infant activities. Privacy and security considerations hinder the availability of the infant images/videos needed to train a robust deep model from scratch, making this a particularly constrained "small data problem". To address this gap, in this talk I will cover: (i) the introduction of biomechanically constrained models to synthesize labeled pose data as a form of domain-adjacent data augmentation; (ii) the design and analysis of a semantic-aware unsupervised domain adaptation technique to close the gap between the domain-adjacent and domain-specific pose data distributions; and (iii) the development and analysis of an AI-human co-labeling technique that provides high-quality labels to refine and adapt the domain-adapted inference models into robust pose estimation algorithms in the target application. These contributions enable the use of advanced AI in the small data domain.

Zoom Option: https://emory.zoom.us/j/95719302738
Title: Bias and XR
Seminar: Computer Science
Speaker: Tabitha Peck, Davidson College
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2022-10-28 at 1:00PM
Venue: https://emory.zoom.us/j/95719302738
Abstract:
Bias, a prejudice in favor of or against one group compared to another, affects our lives, from how we act to how things are designed. A person's biases can cause harm, such as a doctor's implicit bias when treating a patient or a teacher's implicit bias when working with students. Augmented, Mixed, and Virtual Reality are powerful technologies that can be used to study human behavior. In this talk I will present ways in which these XR technologies can be used to investigate and mitigate harmful biases. However, XR systems are created by humans, and I will further discuss ways that design biases have been built into these systems. I will conclude with a call to action for researchers, providing actionable steps to help mitigate bias within our research practices.
Title: Computational Image Processing and Deep Learning with Multi-Modal Biomedical Image Data
Defense: Computer Science
Speaker: Hanyi Yu, Emory University
Contact: TBA
Date: 2022-10-24 at 3:00PM
Venue: https://emory.zoom.us/j/97736956694
Abstract:
With the rapid advance in medical imaging technology in recent decades, computational image analysis has become a popular research topic in the field of biomedical informatics. Images from various imaging acquisition platforms have been widely used for early detection, diagnosis, and treatment response assessment in a large number of disease and cancer studies. Although computational methods offer higher analysis efficiency and less variability than manual analyses, they require appropriate parameter settings to achieve optimal results. This can be demanding for medical researchers lacking relevant knowledge about computational method development. In the last decade, deep neural networks trained on large-scale labeled datasets have provided a promising and convenient end-to-end solution to biomedical image processing. However, the development of deep-learning tools for biomedical image analysis is often restrained in practice by inadequate data with high-quality annotations. By contrast, a large number of unlabeled biomedical images are generated by daily research and clinical activities. Thus, leveraging unlabeled images with semi-supervised or even unsupervised deep learning approaches has become a significant research direction in biomedical informatics. My primary doctoral research focuses on medical image processing, utilizing computational methods to facilitate biomedical image analysis with limited supervision. I have primarily explored two ways to achieve this: (1) optimizing the models of existing approaches for specific tasks, and (2) developing semi-supervised/unsupervised deep learning approaches. In my research, I mainly focus on image segmentation and object tracking, two common biomedical image analysis tasks.
By experimenting with different types of images (e.g., fluorescence microscopy images and histopathology microscopy images) from various sources (e.g., bacteria, human liver biopsies, and retinal pigment epithelium tissues), the methods I developed demonstrate promising potential to support biomedical image analysis tasks.
Title: Relating enhancer genetic variation across mammals to complex phenotypes using machine learning
Seminar: Computer Science
Speaker: Irene Kaplow, Carnegie Mellon University
Contact: Dr. Chinmay Kulkarni, chinmay.kulkarni@emory.edu
Date: 2022-10-21 at 1:00PM
Venue: MSC W201
Abstract:
Many mammalian characteristics have evolved multiple times throughout history. For example, humans and dolphins have larger brains relative to body size than their close relatives, chimpanzees and killer whales. We want to identify the parts of the genome associated with these characteristics by comparing the genomes of hundreds of mammals. Rather than focusing on the small proportion of the genome that encodes genes, which cannot fully explain the evolution of many of these characteristics, we present a new approach that uses machine learning to find conserved sequence patterns at candidate enhancer regions, which control the levels of genes expressed in specific tissues. We established a new set of evaluation criteria for these machine learning models and used these criteria to compare our models to previous methods for this task. When applying our approach to the brain, we identified dozens of new enhancers associated with the evolution of brain size relative to body size and of vocal learning.

Bio: Irene Kaplow received her B.S. in Mathematics with a minor in Biology from the Massachusetts Institute of Technology in 2010. There, she began her career as a computational biologist while doing research with Bonnie Berger. She then went to graduate school at Stanford University, where she received her Ph.D. in Computer Science in 2017. At Stanford, she worked in Hunter Fraser's and Anshul Kundaje's labs to develop methods for analyzing novel high-throughput sequencing datasets to better understand the roles of DNA methylation and Cys2-His2 zinc finger transcription factor binding in gene expression regulation. Irene is now a Lane Postdoctoral Fellow in Andreas Pfenning's lab in the Computational Biology Department at Carnegie Mellon University, where she is developing methods to identify enhancers involved in the evolution of neurological characteristics that have evolved through changes in gene expression.
Title: Towards the development of adaptive and adaptable multimodal displays
Seminar: Computer Science
Speaker: Sara Riggs, PhD, University of Virginia
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2022-10-14 at 1:00PM
Venue: https://emory.zoom.us/j/95719302738
Abstract:
Data-rich environments, such as aviation, military operations, and medicine, impose considerable and continually increasing attentional demands on operators by requiring them to divide their mental resources effectively among numerous tasks and sources of information. Data overload, especially in the visual channel, and the associated breakdowns in monitoring represent a major challenge in these environments. One promising means of overcoming data overload is the introduction of multimodal displays, i.e., displays that distribute information across various sensory channels (including vision, audition, and touch). However, several questions remain unanswered regarding the design and limitations of this approach. In this talk, I will summarize two ongoing research efforts that seek to answer the following: (1) how movement affects sensory perception and performance in the real world and in virtual reality, and (2) how workload transitions affect performance and visual attention allocation. In combination, the results from these two efforts will help inform design guidelines for adaptive and/or adaptable multimodal displays that can adjust the nature of information presentation in response to the user in a context-sensitive fashion.

Bio: Sara Riggs is an Associate Professor and Assistant Chair of Research and Development in the Department of Engineering Systems and Environment at the University of Virginia. She received her PhD and MSE in Industrial and Operations Engineering from the University of Michigan. Her research focuses on task sharing, attention management, and interruption management in complex environments including aviation, healthcare, and military operations. She has ongoing research in the areas of (a) multimodal display design, (b) cognitive processing limitations, and (c) adaptive/adaptable display design. Her research has been funded by the NSF, AHRQ, NIH, and AFOSR. She is the recipient of the NSF CAREER Award, the 2019 Jerome H. Ely Human Factors Article Award, and the 2016 APA Briggs Dissertation Award.
Title: Medical Image Analysis with Deep learning under Limited Supervision
Defense: Computer Science
Speaker: Xiaoyuan Guo, Emory University
Contact: Judy Wawira Gichoya
Date: 2022-10-12 at 12:00PM
Venue: https://emory.zoom.us/j/99340675915?pwd=cmhjRS9EUGovWjhTenZ6VDlaM1YzZz09
Abstract:
Medical imaging plays a significant role in clinical applications such as detection, monitoring, diagnosis, and treatment evaluation of various conditions. Deep learning for medical image analysis has emerged as a fast-growing research field and has been widely used to facilitate challenging image analysis tasks, for example, detecting the presence or absence of a particular abnormality or diagnosing a particular tumor subtype. However, one important requisite is a large amount of annotated data for supervised training, which is often lacking in medicine due to the expensive and time-consuming expert-driven data curation process. Data availability is further limited by healthcare data privacy requirements, which creates barriers to the use of deep learning methods across institutions. This thesis focuses on facilitating the application of deep learning approaches to automatic medical image analysis tasks under limited supervision. Three situations are considered: (1) no annotated data; (2) limited annotated data; and (3) curation of additional annotated data with minimal human supervision. The research covers multiple medical image modalities, ranging from fluorescence microscopy images (FMI) and histopathological microscopy images (HMI) to mammograms (MG), computed tomography (CT), and chest radiographs (X-ray). The tasks studied are diverse, including image segmentation, out-of-distribution (OOD) identification, and medical image retrieval. The diversity and concreteness of the thesis can serve as a guide for the efficient use of deep learning approaches in future medical image analysis with minimal cost.
Title: Disseminating Health Informatics/Data Science Innovations: Perspectives of an Editor-in-Chief
Seminar: Computer Science
Speaker: Suzanne Bakken, PhD, RN, FAAN, FACMI, FIAHSI, Columbia University
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2022-10-07 at 1:00PM
Venue: https://emory.zoom.us/j/95719302738
Abstract:
Suzanne Bakken, PhD, RN, FAAN, FACMI, FIAHSI, is the Alumni Professor of Nursing and Professor of Biomedical Informatics at Columbia University. Following her doctorate in Nursing at the University of California, San Francisco, she completed a post-doctoral fellowship in Medical Informatics at Stanford University. Her program of research has focused on the intersection of informatics and health equity for more than 30 years and has been funded by AHRQ, NCI, NIMH, NINR, and NLM. Dr. Bakken's program of research has resulted in more than 300 peer-reviewed papers. She is a Fellow of the American Academy of Nursing, the American College of Medical Informatics, and the International Academy of Health Sciences Informatics, and a member of the National Academy of Medicine. Dr. Bakken has received multiple awards for her research, including the Pathfinder Award from the Friends of the National Institute of Nursing Research, the Nursing Informatics Award from the Friends of the National Library of Medicine, induction into the Sigma Theta Tau International Nurse Researchers Hall of Fame, the Virginia K. Saba Award from the American Medical Informatics Association, and the Francois Gremy Award from the International Medical Informatics Association. Dr. Bakken currently serves as Editor-in-Chief of the Journal of the American Medical Informatics Association.
Title: Characterization and Mitigation of Misinformation in Social Media
Seminar: Computer Science
Speaker: Francesca Spezzano, PhD, Boise State University
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2022-09-30 at 1:00PM
Venue: https://emory.zoom.us/j/95719302738
Abstract:
Social media and Web sources have made information available, accessible, and shareable anytime and anywhere, nearly without friction. This information can be truthful, falsified, or simply the opinion of the writer, as users on such platforms are both information creators and consumers. In any case, it has the power to affect the decisions of individuals, the beliefs of society, and the activities and economy of entire countries. Thus, it is imperative to identify misinformation and mitigate the effects of the false information that is ubiquitous across the Web and social media. In this talk, we first analyze the reasons behind the success of misinformation; then, we present ways of identifying misinformation and the actors responsible for spreading it; and finally, we analyze novel ways to model misinformation diffusion.
Title: Crowd Sleuths: Solving Mysteries with Crowdsourcing, Experts, and AI
Seminar: Computer Science
Speaker: Kurt Luther, Virginia Tech
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2022-09-16 at 1:00PM
Venue: https://emory.zoom.us/j/95719302738
Abstract:
Professional investigators, such as journalists and police detectives, have long sought the public's help in solving mysteries, typically by soliciting tips. However, as social technologies mediate more aspects of daily life and enable new forms of collaboration, members of the public are increasingly leading their own investigations, with mixed results. In this talk, I present three projects from my research group, the Crowd Intelligence Lab, in which we build software tools that bring together crowds, experts, and AI to support ethical and effective investigations and solve mysteries. In the CrowdIA project, we adapted the sensemaking loop for intelligence analysts to enable novice crowds to discover a hidden terrorist plot within large quantities of textual evidence documents. In the GroundTruth project, we developed a novel diagramming technique that enables novice crowds to collaborate with expert investigators to verify (or debunk) photos and videos shared on social media. In the Photo Sleuth project, we built and launched a free public website with over 10,000 registered users who employ AI-based face recognition to identify unknown soldiers in historical portraits from the American Civil War era. I will conclude the talk by discussing broader opportunities and risks in combining the complementary strengths of human and artificial intelligence for investigation, sensemaking, and other complex and creative tasks.
Title: Modeling Dyadic Interactions in Face-to-face Settings
Seminar: Computer Science
Speaker: Dr. Ifeoma Nwogu, University at Buffalo
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2022-09-02 at 1:00PM
Venue: https://emory.zoom.us/j/95719302738
Abstract:
Research in social psychology has extensively shown that in cohesive groups, individuals often respond to each other's prosody, facial expressions, and body movements. This effect, in which the behavior of two or more people involved in a face-to-face conversation becomes more synchronized, so that they can appear to behave almost in direct response to one another, is termed interactional synchrony. The ability to computationally estimate interactional synchrony can therefore be explored to explain other, deeper social psychology constructs. In this presentation, we discuss the analysis and synthesis of dyadic interactions under various social constellations, in the context of interactional synchrony. We demonstrate how this can be useful for evaluating rapport, exploring the role of entrainment in intimate partner violence, and understanding parent-infant interactions. We use various behavioral features, such as facial expressions, head movements, and speech prosody, which are treated as sampled time-series signals, with sequence-based machine learning methods to make useful predictions on real-life datasets. We also discuss some of our ongoing and future work in this and other areas.