Spring 2021
Computer Science Seminar Series
January 19, 2021
Abstract: To determine how the perception, autopilot, and driver monitoring systems of Tesla Model 3s interact with one another, and to gauge the scale of between- and within-car variability, a series of four on-road tests was conducted. Three sets of tests were conducted on a closed track and one on a public highway. Results show wide variability across and within three Tesla Model 3s, with excellent performance in some cases but likely catastrophic performance in others. This presentation will highlight not only how such interactions can be tested, but also how the results can inform requirements and designs of future autonomous systems.
Speaker Biography: Mary “Missy” Cummings received her BS in mathematics from the United States Naval Academy in 1988, her MS in space systems engineering from the Naval Postgraduate School in 1994, and her PhD in systems engineering from the University of Virginia in 2004. A naval officer and military pilot from 1988 to 1999, Cummings was one of the U.S. Navy’s first female fighter pilots. She is currently a professor in Duke University’s Electrical and Computer Engineering Department and the director of the Humans and Autonomy Lab. Cummings is an American Institute of Aeronautics and Astronautics fellow and a member of the Defense Innovation Board. Her research interests include human supervisory control, explainable artificial intelligence, human-autonomous system collaboration, human-robot interaction, human-systems engineering, and the ethical and social impact of technology.
Computer Science Seminar Series
February 4, 2021
Abstract: Decision-making processes are prevalent in many applications, yet their exact mechanisms are often unknown, making them difficult to replicate. Consider, for instance, how medical providers decide on treatment plans for patients or how chronic patients choose and adhere to their dietary recommendations. Much effort has been focused on learning these decisions through data-intensive approaches; however, the decision-making process is usually complex and highly constrained. While the inner workings of these constrained optimizations may not be fully known, their outcomes (the decisions) are often observable and available (e.g., historical data on clinical treatments). In this talk, we focus on inverse optimization techniques to recover the underlying optimization models that lead to the observed decisions. Inverse optimization can be employed to infer the utility function of a decision-maker or to inform the guidelines for a complicated process. We present a data-driven inverse linear optimization framework (called inverse learning) that finds the optimal solution to the forward problem based on the observed data. We discuss how combining inverse optimization with machine learning techniques can draw on the strengths of both approaches. Finally, we validate the methods using examples in the context of precision nutrition and personalized daily diet recommendations.
Speaker Biography: Kimia Ghobadi is the John C. Malone Assistant Professor of Civil and Systems Engineering, the associate director of the Center for Systems Science and Engineering, and a member of the Malone Center for Engineering in Healthcare. She obtained her PhD at the University of Toronto and, before joining Hopkins, was a postdoctoral fellow at the Massachusetts Institute of Technology Sloan School of Management. Ghobadi’s research interests include inverse optimization techniques, mathematical modeling, real-time algorithms, and analytics techniques with applications in healthcare systems, including healthcare operations and medical decision-making.
February 12, 2021
Abstract: Pathogen genomic data are rich with information and growing exponentially. At the same time, new genomics-based technologies are transforming how we surveil and combat pathogens. Yet designing biological sequences for these technologies is still done largely by hand, without well-defined objectives and with a great deal of trial and error. We lack the computational capabilities to efficiently design and optimize frontline public health and medical tools, such as diagnostics, based on emerging genomic information.
In this talk, I examine computational techniques, linked closely with biotechnologies, that enhance how we proactively prepare for and respond to pathogens. I discuss CATCH, an algorithm that designs assays for simultaneously enriching the genomes of hundreds of viral species, including all their known variation; these assays enable hypothesis-free viral detection and sequencing from patient samples with high sensitivity. I also discuss ADAPT, which combines a deep learning model with combinatorial optimization to design CRISPR-based viral diagnostics that are maximally sensitive over viral variation. ADAPT rapidly and fully automatically designs diagnostics for thousands of viruses, and these diagnostics exhibit lower limits of detection than those produced by state-of-the-art design strategies. The results show that principled computational design will play a vital role in the arsenal against infectious diseases. Finally, I discuss promising directions for design methods and applications to other diseases.
Speaker Biography: Hayden Metsky is a postdoctoral researcher at the Broad Institute in Pardis Sabeti’s lab. He completed his PhD, MEng, and SB in computer science at MIT. His research focuses on developing and applying computational methods that enhance the tools we use to detect and treat disease, concentrating on viruses.
February 12, 2021
Abstract: Precision medicine efforts propose leveraging complex molecular and medical data toward a better life. This ambitious objective requires advanced computational solutions. Here, however, deeper understanding will come not simply from deeper machine learning, but from more insight into the details of molecular function and a mastery of applicable computational techniques.
My lab’s novel machine learning-based methods predict the functional effects of genomic variants and leverage the identified patterns in functional changes to infer individual disease susceptibility. We have optimized our genome-to-disease mapping pipeline to both accommodate compute-resistant biologists and allow for custom variant scoring functions, feature selection, and machine learning techniques. We have also built novel computational methods, including the first general-purpose language model trained on bacterial short-read DNA sequences, to be used in high-throughput functional profiling of microbiome data that can further elaborate on health and disease. Our purely computational work motivates new, experimentally testable hypotheses regarding the biological mechanisms of disease. It also provides a potential means for earlier prognosis, more accurate diagnosis, and the development of better treatments.
Speaker Biography: Research in Yana Bromberg’s lab at Rutgers University is focused on designing machine learning, network analysis, and other computational techniques for the molecular functional annotation of genes, genomes, and metagenomes in the context of specific environments and diseases. The lab also studies the evolution of life’s electron transfer reactions in Earth’s history and their potential applicability to other planets. Dr. Bromberg received her bachelor’s degrees in biology and computer science from the State University of New York at Stony Brook and a PhD in biomedical informatics from Columbia University. She is currently an associate professor in the Department of Biochemistry and Microbiology at Rutgers University. She also holds an adjunct position in the Department of Genetics at Rutgers and is a fellow of the Institute for Advanced Study at the Technical University of Munich, Germany. Dr. Bromberg is also the vice president of the Board of Directors of the International Society for Computational Biology.
Computer Science Seminar Series
February 12, 2021
Abstract: Twelve years ago, biologists developed the repertoire sequencing technology that samples millions out of a billion constantly changing antibodies, or immunoglobulins, circulating in each of us. Repertoire sequencing represented a paradigm shift as compared to previous “one-antibody-at-a-time” approaches; raised novel algorithmic, statistical, information theory, and machine learning challenges; and led to the emergence of computational immunogenomics. Yana Safonova will describe her recent work on reconstructing the evolution of antibody repertoires, inferring novel diversity (D) genes in immunoglobulin loci, and solving the three-decade-old puzzle of explaining the mechanism for generating biomedically important, ultralong antibodies via tandem D-D fusions. She will also describe several collaborative projects in the emerging fields of personalized immunogenomics—analyzing how mutations in immunoglobulin loci affect our ability to develop antibodies that neutralize flu and HIV—and agricultural immunogenomics—analyzing cow antibody repertoires to assist in breeding efforts.
Speaker Biography: Yana Safonova received her BSc (2010) and MSc (2012) in computer science from the National Research State University of Nizhny Novgorod in Russia and her PhD (2017) in bioinformatics from Saint Petersburg State University. Since 2017, Safonova has been a postdoctoral scholar in the Computer Science and Engineering Department at the University of California, San Diego (UCSD). Since 2019, she has also been affiliated with the Department of Biochemistry and Molecular Genetics at the University of Louisville School of Medicine. Safonova’s research interests cover open problems in immunogenomics and computational immunology that include applications of the recent repertoire sequencing technologies to the design of antibody drugs, the prediction of vaccine efficacy, and population analysis of immunity loci. Safonova was selected as a recipient of the Data Science Postdoctoral Fellowship (2017) by UCSD and the Intersect Fellowship for Computational Scientists and Immunologists (2019) by the American Association of Immunologists. She is a member of the Adaptive Immune Receptor Repertoire Community of the Antibody Society and is the author of a graduate-level immunogenomics course.
Computer Science Seminar Series
February 16, 2021
Abstract: Sustained space habitation is no longer a next-generation challenge. With NASA’s Artemis Plan, the advent of the United States Space Force, and the “new space” sector’s scrappy enthusiasm, there is serious momentum to bring humans to space for extended periods in the coming decade. We can’t do this alone. We’ll need to adapt the highly automated systems we’ve been designing for everyday purposes to help us survive. However, if we build space-faring AI systems anything like how we have been building smart cities, we are going to have some problems. AI systems designed for civil society are not built for digital or physical resilience; a major contributor to this challenge is their general lack of human-centricity. In this talk, we’ll discuss and raise questions about the calls for autonomous space systems. If we do not have a track record of building safe and secure human-centric AI systems on Earth, how can we build them for space? The stakes are higher there.
Speaker Biography: Gregory Falco is the first faculty hire at the Johns Hopkins Institute for Assured Autonomy (IAA), where he will be an assistant professor jointly between the IAA and the Civil and Systems Engineering Department starting in the fall of 2021. Falco has been at the forefront of smart city and space system security and safety in both industry and academia for the past decade. His research, “Cybersecurity Principles for Space Systems,” was highly influential in the recent Space Policy Directive-5 of the same name. Falco has worked closely with NASA’s Jet Propulsion Laboratory to help advance space asset security capabilities using AI. Falco led the inaugural university cohort research team for the United States Space Force’s Hyperspace Challenge. He has been listed in Forbes’ 30 Under 30 for his inventions and contributions to critical infrastructure cybersecurity. Falco has also been published in Science for his work on cyber risk. Falco is a cyber research fellow at Harvard University’s Belfer Center, a research affiliate at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory (MIT CSAIL), and a postdoctoral scholar at Stanford University. Falco completed his PhD at MIT CSAIL, his master’s degree at Columbia University, and his bachelor’s degree at Cornell University.
Computer Science Seminar Series
February 16, 2021
Abstract: Increasingly, practitioners are turning to machine learning to build causal models and predictive models that perform well under distribution shifts. However, current techniques for causal inference typically rely on having access to large amounts of data, limiting their applicability to data-constrained settings. In addition, empirical evidence has shown that most predictive models are insufficiently robust with respect to shifts at test time. In this talk, Maggie Makar will present her work on building novel techniques to address both of these problems. Much of the causal literature focuses on learning accurate individual treatment effects, which can be complex and hard to estimate from small samples. However, it is often sufficient for the decision-maker to have estimates of upper and lower bounds on the potential outcomes of decision alternatives to assess risks and benefits. Makar will show that, in such cases, we can improve sample efficiency by estimating simple functions that bound these outcomes instead of estimating their conditional expectations. She will present a novel algorithm that leverages these theoretical insights. Makar will also talk about approaches to deal with distribution shifts using causal knowledge and auxiliary data. She will discuss how distribution shifts arise when training models to predict contagious infections in the presence of asymptomatic carriers. She will present a causally-motivated regularization scheme that enables prediction of the true infection state with high accuracy even if the training data is collected under biased test administration.
Speaker Biography: Maggie Makar is a PhD student at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. While at MIT, Makar interned at Microsoft Research and Google Brain. Prior to MIT, she worked at Brigham and Women’s Hospital studying end-of-life care. Makar’s work has appeared at the International Conference on Machine Learning, the Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence, and the Joint Statistical Meetings and in the Journal of the American Medical Association, Health Affairs, and Epidemiology, among other venues and publications. She received a BSc in mathematics and economics from the University of Massachusetts, Amherst.
Computer Science Seminar Series
February 19, 2021
Abstract: To create trustworthy AI systems, we must safeguard machine learning methods from catastrophic failures; for example, we must account for uncertainty and guarantee performance for safety-critical systems, like in autonomous driving and health care, before deploying these methods in the real world. A key challenge in such real-world applications is that the test cases are not well represented by the pre-collected training data. To properly leverage learning in such domains, we must go beyond the conventional learning paradigm of maximizing average prediction accuracy with generalization guarantees that rely on strong distributional relationships between training and test examples. In this talk, Anqi “Angie” Liu will describe a distributionally robust learning framework that offers accurate uncertainty quantification and rigorous guarantees under data distribution shift. This framework yields appropriately conservative, yet still accurate, predictions to guide real-world decision-making and is easily integrated with modern deep learning. Liu will showcase the practicality of this framework in applications on agile robotic control and computer vision. She will also survey other real-world applications that could benefit from this framework in future work.
Speaker Biography: Anqi “Angie” Liu is a postdoctoral research associate in the Department of Computing and Mathematical Sciences at the California Institute of Technology. She obtained her PhD from the Department of Computer Science of the University of Illinois at Chicago. She is interested in machine learning for safety-critical tasks and the societal impact of AI. She aims to design principled learning methods and to collaborate with domain experts to build more reliable systems for the real world. She has been selected as a 2020 Electrical Engineering and Computer Science Rising Star at the University of California, Berkeley. Her publications appear in prestigious machine learning venues like the Conference and Workshop on Neural Information Processing Systems, the International Conference on Machine Learning, the International Conference on Learning Representations, the Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence, and the International Conference on Artificial Intelligence and Statistics.
Computer Science Seminar Series
February 19, 2021
Abstract: Why do some misleading articles go viral? Does partisan speech affect how people behave? Many pressing questions require understanding the effects of language. These are causal questions: Did an article’s writing style cause it to go viral or would it have gone viral anyway? With text data from social media and news sites, we can build predictors with natural language processing techniques, but these methods can confuse correlation with causation. In this talk, Dhanya Sridhar discusses her recent work on NLP methods for making causal inferences from text. Text data present unique challenges for disentangling causal effects from non-causal correlations. Sridhar presents approaches that address these challenges by extending black-box and probabilistic NLP methods. She outlines the validity of these methods for causal inference and demonstrates their applications to online forum comments and consumer complaints. Sridhar concludes with her research vision for a data analysis pipeline that bridges causal thinking and machine learning to enable better decision-making and scientific understanding.
Speaker Biography: Dhanya Sridhar is a postdoctoral researcher in the Data Science Institute at Columbia University. She completed her PhD at the University of California, Santa Cruz. Her current research is at the intersection of machine learning and causal inference, focusing on applications to social science. Her thesis research focused on probabilistic models of relational data.
Computer Science Seminar Series
February 26, 2021
Abstract: Video is becoming a core medium for communicating a wide range of content, including educational lectures, vlogs, and how-to tutorials. While videos are engaging and informative, they lack the familiar and useful affordances of text for browsing, skimming, and flexibly transforming information. This severely limits who can interact with video content and how they can interact with it, makes editing a laborious process, and means that much of the information in videos is not accessible to everyone. But what are the future systems that will make videos useful for all users? In this talk, Amy Pavel will share her work creating interactive human-AI systems that leverage multiple communication media (e.g., text, video, and audio) across two main research areas: 1) helping domain experts surface content of interest through interactive video abstractions and 2) making videos non-visually accessible through interactions for video accessibility. First, she will share core challenges of seeking information in videos from interviews with domain experts. Then, she will share new interactive systems that leverage AI and evaluations that demonstrate system efficacy. She will conclude with how hybrid human-computer interaction and AI breakthroughs will make digital communication more effective and accessible in the future and how new interactions can help us to realize the full potential of recent advances in AI and machine learning.
Speaker Biography: Amy Pavel is a postdoctoral fellow at the Carnegie Mellon University Human-Computer Interaction Institute and a research scientist in Machine Learning and AI at Apple. Her research explores how interactive tools, augmented with machine learning techniques, can make digital communication more effective and accessible. She has published her work in conferences including the ACM Symposium on User Interface Software and Technology, the ACM Conference on Human Factors in Computing Systems, the ACM Special Interest Group on Accessible Computing Conference on Computers and Accessibility, and other ACM and Institute of Electrical and Electronics Engineers venues. She previously received her PhD in computer science at the University of California, Berkeley, where her work was supported by a Department of Defense National Defense Science and Engineering Graduate Fellowship.
Computer Science Seminar Series
February 26, 2021
Abstract: Algorithms play a central role in our lives today, mediating our access to civic engagement, social connections, employment opportunities, news media, and more. While the sociotechnical systems deploying these algorithms—search engines, social networking sites, and others—have the potential to dramatically improve human life, they also run the risk of reproducing or intensifying social inequities. In Danaë Metaxa’s research, they ask whether and how these systems are biased and how those biases impact users. Understanding sociotechnical systems and their effects requires a combination of computational and social techniques. In this talk, Metaxa will describe their work conducting algorithm audits and randomized controlled user experiments to study representation and bias, focusing on their recent study of gender and racial bias in image searches. By auditing gender and race in image search results for common U.S. occupations and comparing the results to baselines in the U.S. workforce, they find that marginalized people are underrepresented relative to their workforce participation rates. When measuring people’s responses to synthetic search results in which gender and racial composition are manipulated, however, they find that the effect of diverse image search results is complex and mediated by the user’s own identity. Metaxa will conclude by discussing the implications of these findings for building sociotechnical systems and directions for future research studying algorithmic bias.
Speaker Biography: Danaë Metaxa (they/she) is a PhD candidate in computer science at Stanford University, advised by James Landay and Jeff Hancock. A member of the Human-Computer Interaction group, Metaxa focuses on building and understanding sociotechnical systems and their effects on users in domains like employment and politics. Metaxa has been a predoctoral scholar with Stanford’s Program on Democracy and the Internet, a fellow with the McCoy Center for Ethics in Society, and the winner of an NSF Graduate Research Fellowship.
Computer Science Seminar Series
March 2, 2021
Abstract: As society progresses towards increasing levels of embedded, ubiquitous, and autonomous computation, one key societal opportunity is to leverage this technology to maximize human well-being. The challenge for well-being technology is two-fold: how to precisely measure well-being and how to deliver long-term, engaging interventions to optimize well-being states and their fundamental components, such as stress. Ultimately, managing stress, for example, can have significant implications for health, well-being, productivity, and attention. The current approaches to assessing well-being and stress are somewhat limited, as these assessments are based on subjective observations and impose models of use that do not scale or adapt well to diverse populations. Additionally, little research is done on developing human-centered intervention technology that maximizes engagement over the long term. In this talk, Pablo Peredes presents his research agenda, which focuses on unobtrusive sensing and interventions that are efficacious and engaging (i.e., allowing for long-term use, which is especially important for public health interventions). He presents a series of research projects exploring and validating novel ideas on the design of passive “sensorless” sensors and subtle, just-in-time personalized interventions. He shows the promise of repurposing existing signals from computing peripherals (i.e., mouse, trackpad) or cars (i.e., steering wheel) and repurposing existing media as subtle, just-in-time interventions. Finally, inspired by biology and the behavioral sciences, he proposes that we leverage technology to make “mundane” devices—such as chairs, desks, cars, and even urban lights—into devices that deliver personalized, adaptive, and autonomous well-being interventions. He closes with a brief discussion of the ethical implications and research needed to systematically study ethics in pervasive well-being technology.
Speaker Biography: Pablo Paredes earned his PhD in computer science from the University of California, Berkeley in 2015 with Professor John Canny. He is currently a clinical assistant professor in the Psychiatry and Behavioral Sciences Department and the Epidemiology and Population Health Department (by courtesy) at the Stanford University School of Medicine. Paredes leads the Pervasive Wellbeing Technology Lab, which houses a diverse group of students from multiple departments such as Computer Science, Electrical Engineering, Mechanical Engineering, Anthropology, Neuroscience, and Linguistics. Prior to joining the School of Medicine, Paredes was a postdoctoral researcher in the Computer Science Department at Stanford University with Professor James Landay. During his PhD studies, he held internships on behavior change and affective computing at Microsoft Research and Google. Paredes has been an active associate editor for the Interactive, Mobile, Wireless, and Ubiquitous Technology Journal and a reviewer and editor for multiple top computer science and medical journals. Before 2010, he was a senior strategic manager with Intel in Sao Paulo, Brazil; a lead product manager with Telefonica in Quito, Ecuador; and an entrepreneur in his native Ecuador and, more recently, the U.S. In these roles, Paredes has had the opportunity to hire and closely evaluate designers, engineers, businesspeople, and researchers in telecommunications and product development. During his academic career, Paredes has advised close to 40 mentees, including postdoctoral, PhD, master’s, and undergraduate students; collaborated with colleagues from multiple departments across engineering, medicine, and the humanities; and raised funding from the NSF, the National Institutes of Health, and large, multidisciplinary, intramural research projects.
Institute for Assured Autonomy & Computer Science Seminar Series
March 4, 2021
Abstract: Nicole Perlroth is a cybersecurity reporter at The New York Times and the author of This Is How They Tell Me the World Ends, the untold history of the global cyber arms trade and cyberweapons arms race spanning three decades. Perlroth reveals for the first time the classified market’s origins (a Russian attack on an American embassy); its godfather, brokers, mercenaries, and hackers; and its spread to the furthest corners of the globe, from the United States to Israel, the Middle East, South America, China, and beyond. She documents attacks across nations and how each new attack builds on the last, as nation-states learn from and improve upon one another’s playbooks, extending into high-profile attacks on multinational companies and private organizations. Perlroth’s reporting spans the period from the 1990s to the 2020 election and its aftermath, when Russia engaged in a months-long hack of the U.S. federal government itself, an attack that Perlroth continues to cover for the Times, building on her book’s extraordinary revelations.
Speaker Biography: Nicole Perlroth is an award-winning cybersecurity journalist for The New York Times, where her work has been optioned for both film and television. She is a regular lecturer at the Stanford Graduate School of Business and is a graduate of Princeton University and Stanford University. She lives with her family in the Bay Area, but increasingly prefers life off the grid in their cabin in the woods.
Institute for Assured Autonomy & Computer Science Seminar Series
March 16, 2021
Abstract: Robots will transform our everyday lives, from home service and personal mobility to large-scale warehouse management and agriculture monitoring. Across these applications, robots need to interact with humans and other robots in complex, dynamic environments. Understanding how robots interact allows us to design safer and more robust systems. This talk presents an overview of how we can integrate underlying cooperation and interaction models into the design of robot teams. We use tools from behavioral decision theory to design interaction models, combined with game theory and control theory, to develop distributed control strategies with provable performance guarantees. This talk focuses on applications in autonomous driving, where a better understanding of human intent improves safety, and explores recent results in designing UV-C-equipped mobile robots for human-centric environments.
Speaker Biography: Alyssa Pierson is an assistant professor of mechanical engineering at Boston University. Her research interests include trust and cooperation in multi-agent systems, distributed robotics control, and socially compliant autonomous system design. She focuses on designing robotic systems that interact with humans and other robots in complex, dynamic environments. Prior to joining BU, Pierson was a research scientist with the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. She received her PhD from Boston University in 2017 and her BS in engineering from Harvey Mudd College. During her PhD, Pierson was awarded the Clare Booth Luce Fellowship and was a Best Paper Finalist at the 2016 International Conference on Robotics and Automation.
Institute for Assured Autonomy & Computer Science Seminar Series
March 22, 2021
Abstract: Isaac Asimov’s Laws for Robots placed intelligent robots under three ethical duties eerily similar to the Belmont Report’s respect for persons, beneficence, and justice. Law scholars Jack Balkin and Frank Pasquale suggest that laws for AI/ML systems are best directed not at the robots but at the humans who program them, use them, and let ourselves be governed by them. Recent theorizations of the perils of AI/ML software focus heavily on the problem of modern surveillance societies, where citizens are relentlessly tracked, analyzed, and scored as they go about their daily lives. It is tempting for bioethicists to draw on these rich theorizations, but doing so misframes the challenges and opportunities of the healthcare context in which AI/ML clinical decision support software operates. This talk identifies distinctive features of the healthcare setting that make AI/ML medical software likely to break the emerging rules about how to protect human dignity in a modern surveillance society. Protecting patients in an AI/ML-enabled clinical health care setting is a different problem. It requires fresh, context-appropriate thinking about a set of privacy, bias, and accountability issues that this talk sets out for debate.
Speaker Biography: Barbara J. Evans is Professor of Law and Stephen C. O’Connell Chair at the University of Florida’s Levin College of Law and Professor of Engineering at UF’s Herbert Wertheim College of Engineering. Her work focuses on data privacy and the regulation of machine-learning medical software, genomic technologies, and diagnostic testing. She is an elected member of the American Law Institute, a Senior Member of the Institute of Electrical and Electronics Engineers, and was named a Greenwall Foundation Faculty Scholar in Bioethics for 2010-2013. Before coming to academia, she was a partner in the international regulatory practice of a large New York law firm, and she is admitted to the practice of law in New York and Texas. She holds a BS in electrical engineering from the University of Texas at Austin, an MS and PhD from Stanford University, a JD from Yale Law School, and an LLM in Health Law from the University of Houston Law Center, and she completed a postdoctoral fellowship in Clinical Ethics at the MD Anderson Cancer Center.
April 15, 2021
Abstract: In this talk, Dr. Pérez-Quiñones presents some of the somber statistics of underrepresentation in computing. He argues that computer science students and professionals should care deeply about this inequity: a lack of diversity in software development teams can have serious consequences for a fair society. Dr. Pérez-Quiñones presents examples of the negative effects that underrepresentation in computing teams can have. The presentation concludes with an open question: What can we do to broaden participation in computing?
Speaker Biography: Dr. Manuel A. Pérez-Quiñones is Professor of Software and Information Systems at UNC Charlotte. His research interests include HCI, CS education, and diversity in computing. He has held various administrative positions in academia, including Associate Dean for the Graduate School at Virginia Tech and Associate Dean of the College of Computing and Informatics. He was Chair of the Coalition to Diversify Computing, Program Chair for the 2014 Tapia Conference, and Symposium Co-Chair for SIGCSE 2019. He serves on the SIGCSE Board and the Advisory Board for CMD-IT, and is a member of the Steering Committee for BPCNet and a technical consultant for the Center for Inclusive Computing at Northeastern. His service to diversify computing has been recognized with ACM Distinguished Member status, the A. Nico Habermann Award, and the Richard A. Tapia Achievement Award. In over 30 years of professional experience, he has worked at UNC Charlotte (6 years), Virginia Tech (15 years), and the University of Puerto Rico-Mayagüez (4 years), served as a visiting professor at the US Naval Academy, and was a computer scientist at the Naval Research Lab (6 years).
Institute for Assured Autonomy & Computer Science Seminar Series
April 20, 2021
Abstract: Networks have historically been treated as plumbing, used to interconnect computing systems to build larger distributed computing systems—but advances in software-defined networks make it possible to treat the network itself as a programmable platform. Networks can now be programmed end-to-end and top-to-bottom. This talk discusses how this programmability can be used to support verifiable closed-loop control, including throughout 5G mobile networks. The talk also describes Larry Peterson’s experience building Aether, an open source 5G-enabled edge cloud that demonstrates the value of treating the network as a programmable platform. Aether is being piloted on campuses and in enterprises around the world.
Speaker Biography: Larry Peterson is the Robert E. Kahn Professor of Computer Science, Emeritus at Princeton University, where he served as chair from 2003 to 2009. He is a coauthor of the bestselling networking textbook Computer Networks: A Systems Approach (6th Edition), which is now available open source on GitHub. His research focuses on the design, implementation, and operation of internet-scale distributed systems, including the widely used PlanetLab and MeasurementLab platforms. He is currently working on a new access edge cloud called CORD, an open-source project of the Open Networking Foundation, where he serves as chief technical officer. Peterson is a former editor-in-chief of the ACM Transactions on Computer Systems and served as program chair for the ACM Symposium on Operating Systems Principles, the USENIX Symposium on Networked Systems Design and Implementation, and the ACM Workshop on Hot Topics in Networks. He is a member of the National Academy of Engineering, a fellow of the ACM and the Institute of Electrical and Electronics Engineers (IEEE), the 2010 recipient of the IEEE Kobayashi Computer and Communication Award, and the 2013 recipient of the ACM Special Interest Group on Data Communication (SIGCOMM) Award. He received his PhD from Purdue University in 1985.
April 29, 2021
Abstract: By 2030, the old will begin to outnumber the young for the first time in recorded history. Population aging is poised to impose a significant strain on economies, health systems, and social structures. However, it also presents a unique opportunity for AI to introduce personalization and inclusiveness to ensure equity in aging. Vulnerable populations such as older adults learn, trust, and use new technologies differently. Any prediction algorithm that we develop must use high-quality and population-representative input data outside of the clinic and produce accurate, generalizable, and unbiased results. Therefore, the translational path for AI into clinical care needs deeper engagement with all the stakeholders to ensure that we solve a pressing problem with a practical solution that end-users, clinicians, and patients all find value in. In this talk, I will provide some examples of working systems that have been evaluated in controlled experiments and could potentially be deployed in the real world to ensure equity and access among the aging population. In particular, I will highlight two specific examples: (1) innovating for Parkinson’s, the fastest-growing neurodegenerative disease, and (2) modeling end-of-life communication with terminal cancer patients, where their values and preferences are respected as they plan for a deeply personal human experience such as death.
Speaker Biography: Ehsan Hoque is an associate professor of computer science at the University of Rochester, where he co-leads the Rochester Human-Computer Interaction (ROC HCI) Group. From 2018 to 2019, he was also the interim director of the Goergen Institute for Data Science. Ehsan earned his PhD from MIT in 2013, where the MIT Museum highlighted his dissertation, the development of an intelligent agent to improve human ability, as one of MIT’s most unconventional inventions. Building on this work and its associated patent, Microsoft released “Presenter Coach,” integrated into PowerPoint, in 2019. Ehsan is best known for introducing and extensively validating the idea of using AI to train and enhance elements of basic human ability. His and his students’ work has been recognized with an NSF CAREER Award, MIT TR35, and a Young Investigator Award from the US Army Research Office (ARO). In 2017, Science News named him one of its 10 scientists to watch, and in 2020, the National Academy of Medicine recognized him as one of the emerging leaders in health and sciences. Ehsan is an inaugural member of the ACM’s Future of Computing Academy.
Institute for Assured Autonomy & Computer Science Seminar Series
May 20, 2021
Abstract: Autonomous driving needs machine learning because it relies so heavily on perception. But machine learning is notoriously unpredictable and unverifiable. How then can an autonomous car ever be convincingly safe? Daniel Jackson and his research team have been exploring the classic idea of a runtime monitor: a small, trusted base that executes in parallel and intervenes to prevent violations of safety. Unfortunately, in this context, the traditional runtime monitor is not very plausible. If it processes sensor data itself, it is likely either to be no less complex than the main system or to be too crude to allow early intervention. And if it does not process sensor data and instead relies on the main system for that, the key benefit of a small trusted base is lost. Jackson’s research team has been pursuing a new approach in which the main controller constructs a “certificate” that embodies a runtime safety case; the monitor is only responsible for checking the certificate. This gives the desired reduction in complexity, exploiting the typical gap between the cost of finding solutions to computational problems and the cost of checking them. Jackson will illustrate this idea with some examples his team has implemented in simulation, with the disclaimer that this research is in the early stages. His hope is to provoke an interesting discussion.
Speaker Biography: Daniel Jackson is a professor of computer science at the Massachusetts Institute of Technology, a MacVicar Faculty Fellow, and the associate director of the Computer Science and Artificial Intelligence Laboratory. His research has focused primarily on software modeling and design. Jackson is also a photographer; his most recent projects are Portraits of Resilience and At a Distance. His book about software design, The Essence of Software: Why Concepts Matter for Great Design, will be published in the fall of 2021 by Princeton University Press.