Fall 2021

View the recording >>

Computer Science Seminar Series

September 7, 2021

Abstract: When we develop new renewable energy, traffic, transportation, medical, public safety, and even space systems that our modern society relies on, we expect them to be both safe and secure. Designers often integrate the latest computing techniques into these systems and expect them to work. However, most critical infrastructure operates under persistently harsh conditions, and the high degree of automation these systems require does not always behave as intended. In this talk, we will discuss the soft, white digital underbelly of the systems that matter most to our daily lives and how breaking our critical infrastructure will help improve the resilience of future smart cities.

Speaker Biography: Greg Falco is an assistant professor in the Department of Civil and Systems Engineering at Johns Hopkins University and the Institute for Assured Autonomy (IAA), where he holds an appointment at the Applied Physics Lab. He is the director of the Autonomy OWL lab, a “breaker space” for cyber-physical systems at the IAA. Falco advises the national security community on space system security, and his research has influenced related U.S. policy. He is a Fulbright Scholar and has been listed in Forbes’ “30 Under 30” for his contributions to industrial control system security. Falco holds a BS from Cornell University, an MS from Columbia University, and a PhD from the Massachusetts Institute of Technology.

Institute for Assured Autonomy & Computer Science Seminar Series

September 16, 2021

Abstract: In April 2021, the Software Engineering Institute concluded a study in which a panel of leaders from the software community developed a research roadmap for software engineering. The report, expected in the summer of 2021, identifies future challenges in software-reliant systems and the advances in foundational software engineering principles needed across system types, such as intelligent, autonomous, safety-critical, and data-intensive systems. The report’s goal is to raise the visibility of software so that the research portfolio receives sustained recognition commensurate with its importance for national security and competitiveness, and to provide a framework for strategic partnership and collaboration that drives innovation among industry, academia, and government. The study found that the current notion of software development will be replaced by one in which the software pipeline consists of humans and AI as trustworthy collaborators that rapidly evolve systems based on user intent. This will be accomplished through advanced development and architectural paradigms. The research roadmap’s focus areas are AI-augmented software development, assuring continuously evolving systems, software construction through composition, engineering societal-scale software systems, engineering AI-enabled software systems, and engineering quantum computing software systems. This overview will review the study’s findings and stimulate a discussion of how early results of the research might change the nature of software development and acquisition for government and industry.

Speaker Biography: Thomas Longstaff is the chief technology officer of the Carnegie Mellon University Software Engineering Institute (SEI). As CTO, he is responsible for formulating a technical strategy and leading the funded research program of the institute based on current and predicted future trends in technology, government, and industry. Before joining the SEI as CTO in 2018, Longstaff was a program manager and principal cybersecurity strategist for the Asymmetric Operations Sector of the Johns Hopkins University Applied Physics Laboratory (APL), where he led projects on behalf of the U.S. government, including nuclear command and control, automated incident response, technology transition of cyber research and development, information assurance, intelligence, and global information networks. He was also chair of the Computer Science, Cybersecurity, and Information Systems Engineering programs and co-chair of Data Science in the Whiting School of Engineering at Johns Hopkins. Longstaff’s academic publications span topics such as malware analysis, information survivability, insider threats, intruder modeling, and intrusion detection. He maintains an active role in the information assurance community and regularly advises organizations on the future of network threats and information assurance. He is an editor for Computers & Security and has previously served as associate editor for IEEE Security & Privacy. He is general chair of the New Security Paradigms Workshop and the Homeland Security Technology Conference and a member of numerous other program and advisory committees. Prior to joining the staff at APL, Longstaff was the deputy director for technology for the CERT Division at the Software Engineering Institute. During his 15-year tenure in the SEI CERT Division, Longstaff helped create many of the projects and centers that made the program an internationally recognized network security organization. His work included assisting the Department of Homeland Security and other agencies in using response and vulnerability data to define and direct a research and operations program in the analysis and prediction of network security and cyber terrorism events. Longstaff received his bachelor’s degree in physics and mathematics from Boston University, and his master’s degree in applied science and his PhD in computer science from the University of California, Davis.

View the recording >>

Computer Science Seminar Series

September 23, 2021

Abstract: Krishan Sabnani will present a summary of his networking research. His work on SoftRouter was a breakthrough in internet redesign. The main idea behind this work was to separate control functions and complex software from the forwarding portions of internet routers. This separation made it possible for forwarding technologies (e.g., different link layers and switching protocols) to evolve and be deployed independently from control protocols (e.g., routing, security). This contribution was a precursor to the current software-defined networking (SDN) revolution. Sabnani will also summarize his work on protocol conformance testing and on a new class of denial-of-service attacks that can be launched against cellular networks.
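
To make the control/forwarding split concrete, here is a minimal Python sketch in the spirit of that architecture (my illustration, not Sabnani's implementation; the class names and toy topology are invented): a control element runs the routing computation and pushes forwarding tables to simple forwarding elements, so the two planes can evolve independently.

```python
# Illustrative control/forwarding separation (not the SoftRouter code).
import heapq

class ForwardingElement:
    """Simple data-plane device: forwards using a table it is given."""
    def __init__(self, name):
        self.name = name
        self.table = {}                      # destination -> next hop

    def install_table(self, table):
        self.table = dict(table)             # pushed by the control element

    def forward(self, dest):
        return self.table.get(dest, "drop")

class ControlElement:
    """Separated control plane: computes shortest-path next hops."""
    def __init__(self, links):
        self.links = links                   # {node: {neighbor: cost}}

    def next_hops(self, source):
        dist, hops = {source: 0}, {}
        pq = [(0, source, None)]             # (distance, node, first hop)
        while pq:
            d, node, first = heapq.heappop(pq)
            if d > dist.get(node, float("inf")):
                continue
            if first is not None:
                hops[node] = first
            for nbr, cost in self.links.get(node, {}).items():
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(pq, (nd, nbr, first if first else nbr))
        return hops

links = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 1}, "C": {"A": 4, "B": 1}}
fe = ForwardingElement("A")
fe.install_table(ControlElement(links).next_hops("A"))
print(fe.forward("C"))  # -> "B": traffic from A to C exits via B
```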

Speaker Biography: Krishan Sabnani is a networking researcher. He has made many seminal contributions to internet infrastructure design, protocol design, and wireless networks. Sabnani was the vice president of networking research at Bell Labs from January 2000 to September 2013; in that role, he managed all networking research at Bell Labs, comprising nine departments in seven countries: the USA, France, Germany, Ireland, India, Belgium, and South Korea. Sabnani is a member of the National Academy of Engineering and has won many awards, including the 2005 Institute of Electrical and Electronics Engineers (IEEE) Eric E. Sumner Award and the 2005 IEEE Computer Society W. Wallace McDowell Award. He is a fellow of the IEEE, the ACM, and Bell Labs. He was inducted into the New Jersey Inventors Hall of Fame in 2014.

View the recording >>

Gerald M. Masson Distinguished Lecture Series

September 30, 2021

Abstract: As algorithms increasingly inform and influence decisions made about individuals, it becomes increasingly important to address concerns that these algorithms might be discriminatory. Multicalibration guarantees accurate (calibrated) predictions for every subpopulation that can be identified within a rich class of computations. It strives to protect against data analysis that inadvertently or maliciously introduces biases not borne out by the training data. Multicalibration may also help address other forms of oppression, which may require affirmative action or social engineering. In this talk, we will discuss how this notion, recently introduced within the research area of algorithmic fairness, has found a surprising set of practical and theoretical implications. We will discuss multicalibration and touch upon some of its unexpected consequences, including: 1) practical methods for learning in a heterogeneous population, employed in the field to predict COVID-19 complications at a very early stage of the pandemic; 2) a computational perspective on the meaning of individual probabilities through the new notion of outcome indistinguishability; 3) a rigorous new paradigm for loss minimization in machine learning through the notion of omnipredictors, which applies simultaneously to a wide class of loss functions and allows the specific loss function to be ignored at the time of learning; and 4) a method for adapting a statistical study on one probability distribution to another, which is blind to the target distribution at the time of inference and is competitive with widespread methods based on propensity scoring. This talk is based on a sequence of works joint with (subsets of) Cynthia Dwork, Shafi Goldwasser, Parikshit Gopalan, Úrsula Hébert-Johnson, Adam Kalai, Christoph Kern, Michael P. Kim, Frauke Kreuter, Guy N. Rothblum, Vatsal Sharan, Udi Wieder, and Gal Yona.
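
As a rough rendering of the definition (my simplified sketch of the boosting-style recipe from the multicalibration literature, not the speakers' code; the groups, bucketing, and tolerance are assumptions), the following Python snippet post-processes a predictor until its average prediction matches the average outcome on every (subpopulation, prediction-bucket) cell:

```python
# Simplified multicalibration-style post-processing (illustrative only).
import numpy as np

def multicalibrate(pred, y, groups, alpha=0.05, n_buckets=10, max_iter=100):
    """pred: initial scores in [0, 1]; y: binary outcomes;
    groups: boolean masks, one per identifiable subpopulation."""
    pred = pred.astype(float).copy()
    for _ in range(max_iter):
        updated = False
        for g in groups:
            buckets = np.clip((pred * n_buckets).astype(int), 0, n_buckets - 1)
            for b in range(n_buckets):
                cell = g & (buckets == b)
                if not cell.any():
                    continue
                gap = y[cell].mean() - pred[cell].mean()
                if abs(gap) > alpha:          # miscalibrated on this cell
                    pred[cell] = np.clip(pred[cell] + gap, 0.0, 1.0)
                    updated = True
        if not updated:                       # calibrated everywhere: done
            break
    return pred

rng = np.random.default_rng(0)
x = rng.uniform(size=2000)
y = (rng.uniform(size=2000) < x).astype(float)   # true P(y = 1 | x) = x
pred = np.full(2000, y.mean())                   # crude initial predictor
groups = [x < 0.5, x >= 0.5]                     # two subpopulations
cal = multicalibrate(pred, y, groups)
for name, g in [("x < 0.5", groups[0]), ("x >= 0.5", groups[1])]:
    print(name, round(cal[g].mean(), 2), "vs outcome", round(y[g].mean(), 2))
```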

Speaker Biography: Omer Reingold is the Rajeev Motwani Professor of Computer Science at Stanford University and the director of the Simons Collaboration on the Theory of Algorithmic Fairness. Past positions include the Weizmann Institute of Science; Microsoft Research; the Institute for Advanced Study in Princeton, New Jersey; AT&T Labs; and Samsung Research America. Reingold’s research is in the foundations of computer science and, most notably, in computational complexity, cryptography, and the societal impact of computation. He is an ACM Fellow and a Simons Investigator. Among his distinctions are the 2005 Grace Murray Hopper Award and the 2009 Gödel Prize.

View the recording >>

Gerald M. Masson Distinguished Lecture Series

October 5, 2021

Abstract: Revisiting Weiser’s 30-year-old inspirational vision of ubiquitous computing, we see three factors that today limit the kind of ubiquity Weiser described: power, cost, and form factor. Using these factors to drive our efforts, we have created examples of computational materials: self-sustaining computational devices, manufactured from simple materials, that perform interesting sensing and communication tasks. These computational materials can be more literally woven into the fabric of everyday life, inspiring many more applications of ubiquitous computing as well as many avenues for research. I will demonstrate some of these early examples, motivating an Internet of Materials vision. Is this a logical progression from the Internet of Things, or something fundamentally new? I will present examples of computational materials created in collaboration with materials scientists, chemical engineers, and researchers from other disciplines, and I will discuss some of the exciting research challenges for this emerging field.

Speaker Biography: Gregory Abowd is Dean of the College of Engineering and Professor of Electrical and Computer Engineering at Northeastern University. Prior to joining Northeastern, he spent over 26 years on the faculty at Georgia Tech, where he held the position of Regents’ Professor and the J.Z. Liang Chair in the School of Interactive Computing. His research falls largely in the area of human-computer interaction, with an emphasis on applications and technology development for mobile and ubiquitous computing in everyday settings. He has over 300 peer-reviewed publications and holds several issued patents that have assisted in the formation of six commercialization efforts. He has graduated 30 PhD students who have gone on to careers in academia and industry. He is an elected member of the ACM SIGCHI CHI Academy and an ACM Fellow.

View the recording >>

Computer Science Seminar Series

October 7, 2021

Abstract: Binary program analysis is a fundamental building block for a broad spectrum of security tasks, including vulnerability detection, reverse engineering, malware analysis, patching, security retrofitting, and forensics. Essentially, binary analysis encapsulates a diverse set of tasks that aim to understand how a binary program runs, i.e., its operational semantics. Unfortunately, existing approaches often tackle each analysis task independently and rely heavily on ad hoc heuristics as a shortcut for each task. These heuristics are often spurious and brittle, as they do not capture the real program semantics (behavior). While machine-learning-based approaches have shown early promise, they, too, tend to learn spurious features and overfit to specific tasks without understanding the underlying program semantics. In this talk, Kexin Pei will describe two of his recent projects that learn program operational semantics for various binary analysis tasks. His key observation is that by designing pre-training tasks that teach a model how binary programs execute, one can drastically boost the performance of downstream binary analysis tasks. His pre-training tasks are fully self-supervised: they do not require expensive labeling effort. His pre-trained models can therefore use diverse binaries to generalize across different architectures, operating systems, compilers, and optimizations/obfuscations. Extensive experiments show that his approach drastically improves the performance of tasks like matching semantically similar binary functions and binary type inference.
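
As a rough sketch of what masked, self-supervised pre-training over code can look like (this is a generic illustration of the idea, not Pei's actual models; the vocabulary, tokenization, and architecture are all assumptions), the following PyTorch snippet masks tokens in a toy instruction sequence and trains a tiny transformer to recover them:

```python
# Generic masked-prediction pre-training sketch over instructions
# (illustrative only; not the speaker's models or data pipeline).
import torch
import torch.nn as nn

VOCAB = ["<pad>", "<mask>", "mov", "add", "sub", "xor", "push", "pop",
         "eax", "ebx", "ecx", "esp", "0x0", "0x1", "0x4"]
tok2id = {t: i for i, t in enumerate(VOCAB)}

class TinyCodeLM(nn.Module):
    """A tiny transformer that predicts masked instruction tokens."""
    def __init__(self, d=64):
        super().__init__()
        self.emb = nn.Embedding(len(VOCAB), d)
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d, len(VOCAB))

    def forward(self, ids):
        return self.out(self.enc(self.emb(ids)))

def mask_batch(ids, p=0.15):
    """Randomly mask tokens; unmasked positions are ignored in the loss."""
    ids = ids.clone()
    mask = (torch.rand(ids.shape) < p) & (ids != tok2id["<pad>"])
    if not mask.any():               # ensure at least one training signal
        mask[0, 0] = True
    labels = torch.where(mask, ids, torch.full_like(ids, -100))
    ids[mask] = tok2id["<mask>"]
    return ids, labels

# One toy sequence; real pre-training would stream millions of functions
# across architectures, compilers, and optimization/obfuscation levels.
seq = torch.tensor([[tok2id[t] for t in
    ["push", "eax", "mov", "eax", "0x1", "add", "eax", "0x4", "pop", "ebx"]]])

model = TinyCodeLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    inp, labels = mask_batch(seq)
    logits = model(inp)
    loss = nn.functional.cross_entropy(
        logits.view(-1, len(VOCAB)), labels.view(-1), ignore_index=-100)
    opt.zero_grad(); loss.backward(); opt.step()
```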

Speaker Biography: Kexin Pei is a fifth-year PhD student in the Department of Computer Science at Columbia University. He is co-advised by Suman Jana and Junfeng Yang and works closely with Baishakhi Ray. He is broadly interested in security, systems, and machine learning, with a current focus on developing ML architectures to understand program semantics and using them for program analysis and security.

Gerald M. Masson Distinguished Lecture Series

October 21, 2021

Abstract: Disinformation is a quintessential socio-technical challenge: it is driven fundamentally by people and amplified significantly by technology. As such, technological solutions alone will not be sufficient to address this key national security challenge; technical advancements in identifying and mitigating the spread of disinformation must be tightly coupled with social interventions in the areas of education, training, and ethics.

In this talk, Nadya Bliss will discuss what an interdisciplinary research agenda for tackling the challenge of disinformation could look like, along with the benefits and challenges of truly interdisciplinary research. Bliss will also provide examples of current research that brings experts from different disciplines together to develop systemic responses to the problem.

Speaker Biography: Dr. Nadya T. Bliss is the Executive Director of the Global Security Initiative (GSI) at Arizona State University. In that capacity, she leads a pan-university, institute-level organization advancing research, education, and other programming in support of national and global security. Prior to leading GSI, Dr. Bliss served as Assistant Vice President of Research Strategy at ASU and spent a decade in various positions at MIT Lincoln Laboratory, most recently as the founding Group Leader of the Computing and Analytics Group. She has proven expertise in growing mission-focused research organizations, strategic planning, and organizational design, along with deep knowledge of the technology transition pipeline and significant experience identifying advanced research capabilities to address mission and application needs. Dr. Bliss is a Professor of Practice and Graduate Faculty in ASU’s School of Computing and Augmented Intelligence, and she currently serves as an Executive Committee member of the Computing Community Consortium and as Vice Chair of the Defense Advanced Research Projects Agency (DARPA) Information Science and Technology (ISAT) study group.

View the recording >>

Association for Computing Machinery Lecture in Memory of Nathan Krasnopoler

October 26, 2021

Abstract: In the wake of the 2020 presidential contest, election security faces new challenges. Many voters’ confidence has been undermined by baseless conspiracy theories. At the same time, other voters have been given false assurance by misleading claims that 2020 was the “most secure election ever.” These views make it difficult to discuss the threats that elections actually face, but without further action by Congress and the states, voting will remain vulnerable both to real cyberattacks and to false accusations of fraud. It is essential that voters be accurately informed about real election risks, both to counter disinformation and to ensure public support for badly needed reforms.

Speaker Biography: J. Alex Halderman is a Professor of Computer Science & Engineering and Director of the Center for Computer Security and Society at the University of Michigan. His research interests span security and applied cryptography, with a special focus on the interaction of technology with politics and international affairs. Among his recent projects are ZMap, Let’s Encrypt, and the TLS Logjam and DROWN vulnerabilities. Prof. Halderman has performed numerous security evaluations of real-world voting systems, both in the U.S. and around the world. He has twice testified before Congress concerning election security and serves as co-chair of the State of Michigan’s Election Security Advisory Commission. In 2019, he was named an Andrew Carnegie Fellow for his work in strengthening election cybersecurity with evidence-based solutions, and last year, he received the University of Michigan President’s Award for National and State Leadership. Professor Halderman is the creator of “Securing Digital Democracy,” a massive open online course about the risks and potential of election technology that has attracted tens of thousands of participants worldwide.

View the recording >>

Gerald M. Masson Distinguished Lecture Series

October 28, 2021

Abstract: Understanding how large populations of neurons communicate and jointly fire in the brain is a fundamental open question in neuroscience. Many approach it by estimating intrinsic functional neuronal connectivity using probabilistic graphical models. But major statistical and computational hurdles remain in estimating graphical models from the data produced by new large-scale calcium imaging technologies and by huge projects that image up to one hundred thousand neurons in the active brain.

In this talk, Genevera Allen will highlight a number of new graph learning strategies her group has developed to address critical unsolved challenges arising with large-scale neuroscience data. Specifically, she will focus on Graph Quilting, for which she derives a method and theoretical guarantees for graph learning from non-simultaneously recorded and pairwise-missing variables. Dr. Allen will also highlight theory and methods for graph learning with latent variables via thresholding, graph learning for spiky data via extreme graphical models, and computational approaches for graph learning with huge data via minipatch learning. Finally, she will demonstrate the utility of all of these approaches on synthetic data, as well as on real calcium imaging data, for the task of estimating functional neuronal connectivity.
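
For orientation, the baseline problem that these methods extend can be sketched in a few lines: estimate functional connectivity as a sparse inverse covariance (precision) matrix via the graphical lasso. The Python sketch below shows only that baseline on simulated activity; it does not implement Graph Quilting's handling of pairwise-missing covariance entries, and the simulated network and parameters are my own assumptions.

```python
# A baseline sketch (my illustration, not the speaker's method):
# functional connectivity as a sparse precision matrix estimated by
# the graphical lasso on fully observed, simulated "activity" data.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Toy activity for 6 neurons; only pairs (0,1) and (2,3) are coupled.
n, p = 500, 6
prec = np.eye(p)
prec[0, 1] = prec[1, 0] = 0.4
prec[2, 3] = prec[3, 2] = -0.4
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=n)

model = GraphicalLasso(alpha=0.05).fit(X)
edges = (np.abs(model.precision_) > 1e-3) & ~np.eye(p, dtype=bool)
print(np.argwhere(np.triu(edges)))  # ideally recovers (0, 1) and (2, 3)
```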

Speaker Biography: Genevera Allen is an Associate Professor of Electrical and Computer Engineering, Statistics, and Computer Science at Rice University and an investigator at the Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital and Baylor College of Medicine. She is also the Founder and Faculty Director of the Rice Center for Transforming Data to Knowledge, informally called the Rice D2K Lab.

Dr. Allen’s research focuses on developing statistical machine learning tools to help people make reproducible data-driven discoveries. Her work lies in the areas of interpretable machine learning, data integration, modern multivariate analysis, and graphical models with applications in neuroscience and bioinformatics. In 2018, Dr. Allen founded the Rice D2K Lab, a campus hub for experiential learning and data science education.

Dr. Allen is the recipient of several honors for both her research and teaching, including a National Science Foundation CAREER Award, Rice University’s Duncan Achievement Award for Outstanding Faculty, and the George R. Brown School of Engineering’s Research and Teaching Excellence Award; in 2014, she was named to the Forbes “30 Under 30: Science and Healthcare” list. Dr. Allen received her PhD in statistics from Stanford University (2010), under the mentorship of Prof. Robert Tibshirani, and her bachelor’s degree, also in statistics, from Rice University (2006).

View the recording >>

Computer Science Seminar Series

November 11, 2021

Abstract: The rapid development of augmented reality (AR) and virtual reality (VR) devices has created interest in leveraging these technologies across a wide range of clinical applications. Spatial computing incorporates emerging technologies that seamlessly connect virtual space with the real world and augment the user’s vision. This enhances situational awareness and enables physicians to visualize relevant information where it is needed, which in turn improves efficiency and facilitates better outcomes.

This talk introduces some of the concepts as well as the challenges in extended reality. It discusses the architecture and implementation of an interactive mixed reality platform for the training and practice of medical procedures. The ecosystem takes into account that a procedure’s success depends not only on the clinician’s skill but also on the harmonious operation of all of the elements involved in the clinical theater. The presentation also provides examples of how these immersive platforms can be used in various clinical scenarios and discusses their effectiveness in terms of error reduction, system performance, and usability. Finally, it will touch on the implications of immersive technology for the future of healthcare.

Speaker Biography: Ehsan Azimi is an Assistant Professor in the School of Nursing and Director of Research at the Center for Immersive Learning and Digital Innovation at Johns Hopkins University. He completed his PhD in computer science at Johns Hopkins University. Ehsan is passionate about the intersection of technology and healthcare. His research focuses on extended reality, robotics, and human-centered design. He has developed novel display calibration methods and new user interaction modalities for smart glasses that improve surgical navigation and the training of medical procedures. His work has been covered in engineering magazines as well as other media outlets. He has also implemented techniques for robot-assisted cochlear implant placement, an intraocular robotic snake, and needle steering. Before joining Johns Hopkins, he worked at Harvard Medical School, where he developed a method that improves the resolution and dynamic range of a medical imaging system.

Dr. Azimi holds multiple patents, and his work has led to more than 20 peer-reviewed articles in journals and conferences. He was named a Siebel Scholar and received the Provost Postdoctoral Fellowship as well as the Link Fellowship. He has served as a mentor for several students and scholars in their projects and studies.

View the recording >>

Computer Science Seminar Series

December 9, 2021

Abstract: Over the course of the last fifty years, American society has undergone a significant paradigm shift in how it approaches people with disabilities. Laws like the Americans with Disabilities Act and public investments in Home and Community-Based Services reflect a belief that the problems of disability are not inevitable results of biology but instead arise from an interaction between impairment and a range of societal factors. Though public policy increasingly acknowledges the role of systemic injustice in the gaps between disabled and non-disabled persons, more work is needed to adapt the frameworks of quantitative social science to reflect this more nuanced approach to causality.

These questions have concrete implications for public policy. As the health care system shifts away from fee-for-service, policymakers are increasingly tasked with measuring quality and holding health plans and providers accountable for outcomes. Disability is often an important input into risk-adjustment and quality-measurement strategies in such contexts. Decisions regarding how to attribute causality for differences in outcomes between disabled and non-disabled persons may affect plan and provider behavior, incentivizing both positive and negative responses depending on the choices made. In this talk, I will frame these challenges in the context of prior work addressing similar issues in the realm of racial disparities. When measuring differences in outcomes between disabled and non-disabled persons, what are appropriate and inappropriate control variables? What should we attribute to systemic injustice as opposed to biological impairment? How do these decisions influence key policy choices regarding risk adjustment and quality measurement in our health care system?
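
To see why the choice of control variables matters, consider the toy Python simulation below (my illustration, not from the talk; all enrollment rates and coefficients are invented): a plan that enrolls more disabled members looks worse before adjustment, while adjusting for disability changes its measured quality.

```python
# Toy illustration (not from the talk) of how risk adjustment for
# disability changes a plan's measured quality. All numbers invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
plan = rng.integers(0, 2, n)                     # 0 = plan A, 1 = plan B
# Plan B enrolls disabled members at a higher rate (30% vs. 10%).
disabled = (rng.uniform(size=n) < np.where(plan == 1, 0.30, 0.10)).astype(float)
# Simulated outcome: plan B adds +0.05; disability is associated with
# a -0.20 gap (from whatever mix of impairment and systemic factors).
outcome = 0.5 + 0.05 * plan - 0.20 * disabled + rng.normal(0, 0.1, n)

def plan_effect(extra_covariates):
    """OLS coefficient on plan, with optional extra controls."""
    X = np.column_stack([np.ones(n), plan] + extra_covariates)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1]

print("plan effect, unadjusted:         ", round(plan_effect([]), 3))
print("plan effect, disability-adjusted:", round(plan_effect([disabled]), 3))
# Unadjusted, plan B is penalized for its case mix (~0.01); adjusted,
# its simulated +0.05 advantage reappears. Whether adjusting is
# "appropriate" depends on how much of the disability gap one
# attributes to the plan versus to systemic factors.
```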

Speaker Biography: Ari Ne’eman is a PhD Candidate in Health Policy at Harvard University in the Political Analysis Track. He also serves as a Visiting Scholar at the Lurie Institute for Disability Policy at Brandeis. Ari previously served as executive director of the Autistic Self Advocacy Network from 2006 to 2016 and as one of President Obama’s appointees to the National Council on Disability from 2010 to 2015. He is presently writing a book on the history of American disability advocacy for Simon & Schuster, anticipated in 2023.