Fall 2018
Computer Science Student Defense:
September 5, 2018
Scene understanding is fundamental to many computer vision applications such as autonomous driving, robot navigation and human-machine interaction; visual object counting and localization are important building blocks of scene understanding. In this dissertation, we present 1) a framework that employs doubly stochastic Poisson (Cox) processes to estimate the number of instances of an object in an image and 2) a Bayesian model that localizes multiple instances of an object using counts from image sub-regions.
Poisson processes are well-suited to model events that occur randomly in space, such as the locations of objects in an image or the enumeration of objects in a scene. Our Cox model takes as its input the results from a CNN (convolutional neural network) classifier that has been trained to classify image sub-regions for the presence of the target object. A density function, estimated via inference from the Cox process, is then used to estimate the count. Despite the flexibility and versatility of Poisson processes, their application to large datasets is limited, as their computational complexity and storage requirements do not scale easily with image size. To mitigate this problem, we employ Kronecker algebra, which takes advantage of the tensor product structure of the covariance matrices. As the likelihood is non-Gaussian, we use the Laplace approximation for inference, combined with the conjugate gradient method and Newton’s method. The resulting approach scales close to linearly with image size. We demonstrate the counting results on both simulated data and real-world datasets, comparing the results with state-of-the-art counting methods.
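To illustrate why the Kronecker structure matters computationally (a minimal numpy sketch under assumed kernel choices and grid sizes, not the dissertation's implementation): when the grid covariance factors as K = Kx ⊗ Ky, matrix-vector products can be computed from the small factors without ever forming K, which is what keeps inference close to linear in the number of cells.

    import numpy as np

    def rbf_kernel(coords, lengthscale=1.0):
        # Squared-exponential covariance over 1-D grid coordinates.
        d = coords[:, None] - coords[None, :]
        return np.exp(-0.5 * (d / lengthscale) ** 2)

    # Hypothetical image grid of nx * ny cells with covariance K = kron(Kx, Ky).
    nx, ny = 60, 40
    Kx = rbf_kernel(np.arange(nx, dtype=float), lengthscale=3.0)   # nx x nx
    Ky = rbf_kernel(np.arange(ny, dtype=float), lengthscale=3.0)   # ny x ny
    v = np.random.default_rng(0).standard_normal(nx * ny)

    # Naive product: materialize the (nx*ny) x (nx*ny) covariance matrix.
    naive = np.kron(Kx, Ky) @ v

    # Kronecker identity: (Kx ⊗ Ky) vec(V) = vec(Ky V Kx^T) for V of shape (ny, nx),
    # costing O(nx*ny*(nx+ny)) instead of O((nx*ny)^2).
    V = v.reshape(nx, ny).T
    fast = (Ky @ V @ Kx.T).T.reshape(-1)

    print(np.allclose(naive, fast))   # True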
In addition, we consider the problem of quickly localizing multiple instances of an object by asking questions of the form “How many instances are there in this set?” and receiving noisy answers. This setting generalizes the game of 20 questions to multiple targets. We evaluate the performance of a policy using the expected entropy of the posterior distribution after a fixed number of questions with noisy answers. We derive a lower bound for the value of this problem and study a specific policy, named the dyadic policy. We show that this policy achieves a value no more than twice this lower bound when answers are noise-free, and we establish a more general constant-factor approximation guarantee for the noisy setting. We present an empirical evaluation of this policy on simulated data for the problem of detecting multiple instances of the same object in an image. Finally, we present experiments on localizing multiple objects simultaneously in real images.
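To make the question-asking setting concrete, here is a minimal sketch of the single-target special case (classical 20 questions with noisy answers) on a discretized interval, using dyadic question sets and an assumed binary symmetric answer channel with error probability eps; the dissertation generalizes this to count-valued answers over multiple targets.

    import numpy as np

    rng = np.random.default_rng(1)
    K = 8                      # number of questions
    N = 2 ** K                 # number of discretized locations
    eps = 0.1                  # assumed probability that an answer is flipped
    target = rng.integers(N)   # true (hidden) location
    idx = np.arange(N)
    posterior = np.full(N, 1.0 / N)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    print(f"prior entropy: {entropy(posterior):.2f} bits")
    for k in range(K):
        # Dyadic question k: "is the target in the union of cells whose k-th bit is 1?"
        in_set = ((idx >> k) & 1) == 1
        truth = ((target >> k) & 1) == 1
        answer = truth if rng.random() > eps else not truth   # noisy channel

        # Bayes update: likelihood of the observed answer at each candidate location.
        posterior *= np.where(in_set == answer, 1 - eps, eps)
        posterior /= posterior.sum()
        print(f"after question {k + 1}: {entropy(posterior):.2f} bits")

    print("MAP location:", int(posterior.argmax()), "true location:", int(target))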
Speaker Biography: Purnima Rajan is a Ph.D. candidate in the Department of Computer Science at the Johns Hopkins University. She is advised by Prof. Gregory Hager and Prof. Bruno Jedynak, and is a member of the Computational Interaction and Robotics Lab. Her current research focuses on statistical modeling to improve object counting and localization in image data. Her research interests include computer vision, machine learning, and statistical methods applied to image analysis.
Computer Science Student Defense:
September 7, 2018
Users on social media platforms routinely interact by posting text messages, sharing images and videos, and establishing connections with other users through friending. Learning latent user representations from these observations is an important problem for marketers, public policy experts, social scientists, and computer scientists.
In this thesis, we show how user representations can be learned from multiple types of user behavior on social media. We apply several extensions of generalized canonical correlation analysis, a set of multiview learning objectives, to learn these representations, and evaluate them on three tasks: predicting future hashtag mentions, friending behavior, and demographic features. We then show how user features can benefit three additional downstream tasks: topic model fit, mental health status classification, and message-level stance classification.
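As a rough illustration of the multiview objective (a minimal MAXVAR-style GCCA sketch on synthetic data; the thesis applies several extensions of GCCA to much larger, sparse behavioral views), the shared user representation G can be taken as the top eigenvectors of the sum of per-view projection matrices:

    import numpy as np

    def maxvar_gcca(views, k, reg=1e-3):
        # MAXVAR GCCA: shared representation G = top-k eigenvectors of the sum of
        # (regularized) projection matrices onto each view's column space.
        n = views[0].shape[0]
        M = np.zeros((n, n))
        centered = [X - X.mean(axis=0) for X in views]
        for Xc in centered:
            C = Xc.T @ Xc + reg * np.eye(Xc.shape[1])
            M += Xc @ np.linalg.solve(C, Xc.T)
        vals, vecs = np.linalg.eigh(M)
        return vecs[:, -k:]                 # n x k shared user embeddings

    # Synthetic example: 200 "users", three behavior views driven by shared latent traits.
    rng = np.random.default_rng(0)
    z = rng.standard_normal((200, 2))       # latent user traits
    views = [z @ rng.standard_normal((2, d)) + 0.1 * rng.standard_normal((200, d))
             for d in (50, 30, 20)]         # e.g. text, friend, and hashtag features
    G = maxvar_gcca(views, k=2)
    print(G.shape)                          # (200, 2)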
Speaker Biography: Adrian Benton received the B.A. degree in Linguistics from the University of Pennsylvania in 2008. He received the M.S. degree in Computer Science from the University of Pennsylvania in 2012, and enrolled in the Computer Science Ph.D. program at Johns Hopkins University in 2013. He is advised by Dr. Mark Dredze. His research centers around applying machine learning techniques to analyze social media data.
September 11, 2018
Improved image guidance has led to the implementation of minimally invasive alternatives to many complex procedures that are routinely performed across many disciplines. While percutaneous surgery benefits the patient, the task load for the surgeon increases drastically. As a consequence, there is substantial risk of failure that, depending on the intervention, is associated with poor functional outcome, the need for revision surgery, or more severe complications. Assisting the surgeon in interventional decision making via advanced imaging, image processing, and visualization is, therefore, a key step toward improving outcomes. In this talk, I will present our recent work on human-centric end-to-end systems that assist surgeons in improving on-task performance. In the context of image-guided interventions, this consists of acquiring images that contain information specific to the task, analyzing the images to extract cues otherwise inaccessible to the surgeon, and augmenting the surgeon's perception to inform a decision that, finally, enables optimal actions.
Speaker Biography: Mathias Unberath is currently a postdoctoral fellow in the Laboratory for Computational Sensing and Robotics at Johns Hopkins University. He holds a BSc in Physics, an MSc in Optical Technologies, and a PhD in Computer Science from the Friedrich-Alexander-Universität Erlangen-Nürnberg, from which he graduated summa cum laude in 2017. Previously, he was appointed as an ERASMUS scholar at the University of Eastern Finland in 2011 and as a DAAD fellow at Stanford University throughout 2014. Mathias has won numerous awards, including the “SPIE Outstanding Thesis Award” for his Master’s Thesis, and the “German Society of Medical Image Processing Award” and the “Staedtler Award” for his Dissertation. His research focuses on Multi-modal Imaging Systems, Machine Learning for interventional medicine, and Augmented Reality in the context of medical education and percutaneous procedures.
Computer Science Student Defense:
September 20, 2018
Supervisory Control and Data Acquisition (SCADA) systems form the monitoring and control backbone of the power grid. It is critical to ensure that SCADA systems are continuously available and operating correctly at their expected level of performance. However, as key components of the power grid infrastructure, SCADA systems are likely to be targeted by nation-state-level attackers willing to invest considerable resources to disrupt the power grid.
We present the first intrusion-tolerant SCADA system that is resilient to both system-level compromises and sophisticated network-level attacks and compromises. While existing SCADA systems often deploy two control centers for fault tolerance, we show that two control centers, even if active at the same time, cannot provide the necessary resilience. We develop a novel architecture that distributes the SCADA system management across three or more active sites to ensure continuous availability in the presence of simultaneous intrusions and network attacks.
The system design is implemented in the Spire intrusion-tolerant SCADA system (http://dsn.jhu.edu/spire), which is available as open source. Spire was recently tested in a red-team experiment, during which an experienced hacker team completely compromised a traditional SCADA system set up according to best practices but was unable to impact Spire’s guarantees over several days of attack. In addition, a wide-area deployment of Spire, using two control centers and two data centers spanning 250 miles (similar to large U.S. power grids), delivered nearly 99.999% of all SCADA updates initiated over a 30-hour period within 100ms. These results demonstrate that Spire provides meaningful security advantages over traditional SCADA systems and that Spire can meet the latency requirements of SCADA for the power grid.
Speaker Biography: Tom Tantillo is a Ph.D. candidate advised by Yair Amir in the Department of Computer Science and is a member of the Distributed Systems and Networks (DSN) lab. His research interests include intrusion-tolerant systems, overlay networks, and resilient critical infrastructure. Tom is a co-creator of several open-source software projects developed in the DSN lab, including the Spines overlay messaging framework, the Prime intrusion-tolerant replication engine, and the Spire intrusion-tolerant SCADA system. He received the JHU Computer Science Outstanding Teaching Award in 2013 for outstanding effort and skill in assisting with the teaching of courses. Tom received a B.S. degree in Computer Engineering from the Johns Hopkins University in 2010 and an M.S.E. degree in Computer Science from the Johns Hopkins University in 2013.
Computer Science Student Defense:
September 21, 2018
Emerging applications such as remote manipulation and remote robotic surgery require communication that is both timely and reliable, but the Internet natively supports only communication that is either completely reliable with no timeliness guarantees (e.g. TCP) or timely with only best-effort reliability (e.g. UDP). We present an overlay transport service that can provide highly reliable communication while meeting stringent timeliness guarantees (e.g. 130ms round-trip latency across the US) over the Internet. To enable routing schemes that can support the necessary timeliness and reliability, we introduce dissemination graphs, providing a unified framework for specifying routing schemes ranging from a single path, to multiple disjoint paths, to arbitrary graphs. Based on an extensive analysis of real-world network data, we develop a timely dissemination-graph-based routing method that can add targeted redundancy in problematic areas of the network. We show that this approach can cover close to 99% of the performance gap between a traditional single-path approach and an optimal (but prohibitively expensive) scheme.
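As a toy illustration of the dissemination-graph idea (hypothetical topology and latencies; the actual method adds targeted redundancy based on analysis of real network data rather than simply taking two disjoint paths), a dissemination graph can be formed as the union of paths, and packets are then sent over all of its edges:

    import networkx as nx

    # Hypothetical overlay topology with per-edge latencies in milliseconds.
    overlay = nx.Graph()
    overlay.add_weighted_edges_from([
        ("NYC", "CHI", 20), ("CHI", "DEN", 23), ("DEN", "LAX", 25),
        ("NYC", "DCA", 5),  ("DCA", "ATL", 15), ("ATL", "DAL", 20),
        ("DAL", "LAX", 35), ("CHI", "DAL", 22), ("DCA", "CHI", 18),
    ], weight="latency")

    def dissemination_graph(g, src, dst):
        # Union of two low-latency, edge-disjoint paths: one simple dissemination graph.
        p1 = nx.shortest_path(g, src, dst, weight="latency")
        pruned = g.copy()
        pruned.remove_edges_from(zip(p1, p1[1:]))       # push the second path off p1
        p2 = nx.shortest_path(pruned, src, dst, weight="latency")
        dg = nx.Graph()
        dg.add_edges_from(zip(p1, p1[1:]))
        dg.add_edges_from(zip(p2, p2[1:]))
        return dg

    dg = dissemination_graph(overlay, "NYC", "LAX")
    print(sorted(dg.edges()))   # packets are flooded over this subgraph for reliability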
Speaker Biography: Amy Babay is a PhD student in the Department of Computer Science at Johns Hopkins University, where she works with Yair Amir in the Distributed Systems and Networks Lab. Her research focuses on enabling new Internet services using structured overlay networks and on building intrusion-tolerant critical infrastructure systems. She received her BA in Cognitive Science in 2012 and her MSE in Computer Science in 2014, both from Johns Hopkins University. Prior to starting her PhD, she gained experience with global overlay networks in the commercial world, working at LTN Global Communications.
Computer Science Student Defense:
September 24, 2018
With the advent of next generation sequencing, researchers can now investigate the genomes of species and individuals in unprecedented detail. Each part of the genome has its own function; annotation is the process of identifying these parts and their functions.
Deep RNA sequencing (RNA-seq) has emerged as a revolutionary technology for transcriptome analysis and is now widely used to annotate genes. We designed two transcript assemblers, CLASS and CLASS2, to better detect alternative splicing events and to find novel transcripts from RNA-seq data. With sequencing costs dropping, experiments now routinely include multiple RNA-seq samples to improve the power of statistical analyses. We took advantage of the power of multiple samples in new software, PsiCLASS, which simultaneously assembles multiple RNA-seq samples and significantly improves performance over the traditional ‘assemble-and-merge’ model.
For many alignment and assembly applications, sequencing errors can confound downstream analyses. We implemented two k-mer-based error correctors, Lighter and Rcorrector, for whole-genome sequencing data and for RNA-seq data, respectively. Lighter was the first k-mer-based error corrector that avoids explicit k-mer counting; it is much faster and more memory-efficient than other error correctors while achieving comparable accuracy. Rcorrector searches for a path in the De Bruijn graph that is closest to the current read, using local k-mer thresholds to determine trusted k-mers. Rcorrector measurably improves de novo assembled transcripts, which is critical when annotating species without a high-quality reference genome. A newly assembled genome is typically highly fragmented, which makes it difficult to annotate. Contiguity information from paired-end RNA-seq reads can be used to connect multiple disparate pieces of a gene. We implemented this principle in Rascaf, a tool for assembly scaffolding with RNA-seq read alignments. Rascaf is highly practical and has better sensitivity and precision than traditional approaches based on de novo assembled transcripts. Overall, this collection of algorithms, methods, and tools represents a powerful and valuable resource that can be readily and effectively used in any genome sequencing and annotation project and for a vast array of transcriptomic analyses.
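To fix ideas, here is a toy sketch of the trusted k-mer principle behind such correctors (hypothetical reads, k, and threshold; Lighter avoids explicit counting by sampling k-mers into Bloom filters, and Rcorrector uses local thresholds and a De Bruijn graph search rather than this brute-force substitution):

    from collections import Counter
    from itertools import product

    K = 5
    THRESHOLD = 3            # k-mers seen at least this often are "trusted"

    reads = ["ACGTGACCTGA"] * 6 + ["ACGTGTCCTGA"]   # last read has one sequencing error

    def kmers(seq, k=K):
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    counts = Counter(km for r in reads for km in kmers(r))
    trusted = {km for km, c in counts.items() if c >= THRESHOLD}

    def correct(read):
        # Try single-base substitutions; keep the one yielding the most trusted k-mers.
        def score(seq):
            return sum(km in trusted for km in kmers(seq))
        best, best_score = read, score(read)
        for i, base in product(range(len(read)), "ACGT"):
            if base == read[i]:
                continue
            cand = read[:i] + base + read[i + 1:]
            if score(cand) > best_score:
                best, best_score = cand, score(cand)
        return best

    print(correct("ACGTGTCCTGA"))   # -> ACGTGACCTGA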
Speaker Biography: Li Song is a Ph.D. candidate in the Department of Computer Science at the Johns Hopkins University. He works in the area of computational biology under the guidance of Dr. Liliana Florea and is a member of the Center for Computational Biology. He received a B.S. degree from the Computer Science and Technology Department at Tongji University in 2009, and holds M.S. degrees in Computer Science from Michigan Technological University (2011) and in Applied Mathematics and Statistics from the Johns Hopkins University (2017).
Distinguished Lecturer
September 25, 2018
The last decade has seen significant advances in robotics and artificial intelligence, leading to an irrational exuberance in technology. I will discuss the challenges of realizing autonomous flight in environments with obstacles in the absence of GPS. I will describe our approaches to state estimation and to designing perception-action loops for high-speed navigation, with applications to precision agriculture and first response. Finally, I will discuss some of the limitations of current approaches in the field and challenges in robotics and autonomy.
Speaker Biography: Vijay Kumar is the Nemirovsky Family Dean of Penn Engineering with appointments in the Departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering at the University of Pennsylvania. Since joining the faculty in 1987 he has served Penn Engineering in many capacities, including Deputy Dean for Research, Deputy Dean for Education, Chairman of the Department of Mechanical Engineering and Applied Mechanics and Director of the GRASP Laboratory, a multidisciplinary robotics and perception laboratory. Dr. Kumar has served as the assistant director of robotics and cyber physical systems at the White House Office of Science and Technology Policy (2012 – 2013). He received his Bachelor of Technology degree from the Indian Institute of Technology, Kanpur and his Ph.D. from The Ohio State University in 1987.
Krasnopoler Lecture
September 27, 2018
In the near future, there will likely be special-purpose quantum computers with 50-70 high-quality qubits and controllable nearest-neighbor couplings.
In this talk, I’ll discuss general theoretical foundations for how to use such devices to demonstrate quantum supremacy: that is, to demonstrate, as confidently as possible, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis, which says that all physical systems can be efficiently simulated by classical computers. This part of the talk is based on joint work with Lijie Chen.
Then, in a second part of the talk, I’ll discuss brand new work on how these experiments could be used to generate cryptographically certified random bits.
Computer Science Student Defense:
September 28, 2018
Today, computer systems need to cope with the explosive growth of data in the world. For instance, in data-center networks, monitoring systems are used to measure traffic statistics at high speed, and in financial technology companies, distributed processing systems are deployed to support graph analytics. To handle such large datasets, we usually build efficient networked systems in a distributed manner. Ideally, we expect these systems to meet service-level objectives (SLOs) using the least amount of resources. However, existing systems constructed with conventional in-memory algorithms face the following challenges: (1) excessive resource requirements (e.g., CPU, ASIC, and memory) with high cost; (2) infeasibility at larger scales; and (3) processing the data too slowly to meet the objectives.
To address these challenges, we propose sketching techniques as a tool for building more efficient networked systems. Sketching algorithms process the data in one or a few passes in an online, streaming fashion (e.g., over a stream of network packets) and compute highly accurate results. With sketching, we maintain only a compact summary of the entire dataset while providing theoretical guarantees on error bounds.
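As a concrete example of the "compact summary with error bounds" idea, here is a minimal Count-Min sketch in plain Python (a standard textbook sketch, shown only for illustration; the dissertation builds on universal sketches, which support a broader class of functions):

    import math
    import random

    class CountMinSketch:
        # Approximate frequency counts in O(1/eps * log(1/delta)) space.
        def __init__(self, eps=0.01, delta=0.01):
            self.width = math.ceil(math.e / eps)          # error <= eps * total count
            self.depth = math.ceil(math.log(1 / delta))   # ...with probability >= 1 - delta
            self.seeds = [random.randrange(1 << 30) for _ in range(self.depth)]
            self.table = [[0] * self.width for _ in range(self.depth)]
            self.total = 0

        def _cols(self, key):
            return [hash((seed, key)) % self.width for seed in self.seeds]

        def update(self, key, count=1):
            self.total += count
            for row, col in enumerate(self._cols(key)):
                self.table[row][col] += count

        def query(self, key):
            # Each row overestimates (collisions only add), so take the minimum.
            return min(self.table[row][col] for row, col in enumerate(self._cols(key)))

    # Example: count flow-like keys from a packet stream.
    cms = CountMinSketch()
    stream = ["10.0.0.1->10.0.0.2"] * 900 + ["10.0.0.3->10.0.0.4"] * 100
    random.shuffle(stream)
    for pkt in stream:
        cms.update(pkt)
    print(cms.query("10.0.0.1->10.0.0.2"))   # close to 900, never an underestimate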
This dissertation argues for a sketching-based design for large-scale networked systems and demonstrates the benefits in three application contexts: (i) Network monitoring: we build generic monitoring frameworks that support a range of applications on both software and hardware with universal sketches. (ii) Graph pattern mining: we develop a swift, approximate graph pattern miner that scales to very large graphs by leveraging graph sketching techniques. (iii) Halo finding in astronomy simulations: we design scalable halo finders on CPU and GPU by leveraging sketch-based heavy-hitter algorithms.
Speaker Biography: Zaoxing (Alan) Liu is a Ph.D. candidate in the Department of Computer Science at Johns Hopkins University, working with Prof. Vladimir Braverman. He received his master’s degree in Computer Science from JHU in 2013 and enrolled in the Ph.D. program in 2014. He works in the areas of systems and algorithms with sublinear time/space requirements. His research aims to build efficient networked systems with strong theoretical guarantees. Liu’s research papers have been published in top venues, including ACM SIGCOMM and USENIX OSDI. He will be joining CMU/Harvard as a postdoctoral researcher to continue exploring problems in this field.
Lecturer
October 4, 2018
Random forests, ensembles of randomized decision trees, are a popular predictive model with a rich and successful history in machine learning in general and computer vision in particular. Deep networks, especially convolutional neural networks (CNNs), have become the dominant learning models in recent years, due to their end-to-end manner of learning good feature representations combined with good predictive ability. However, combining these two methods, i.e., random forests and CNNs, is an open research topic that has received less attention in the literature. A main reason is that decision trees, unlike deep networks, are non-differentiable. In this talk, I will introduce my recent work on integrating random forests with CNNs (Deep Random Forests) to address various machine learning problems, such as label distribution learning and nonlinear regression. I will show their applications to computer vision problems, such as facial age estimation, and demonstrate how to learn Deep Random Forests for different learning tasks within a unified optimization framework. The talk will start with a brief description of a few representative examples of my work.
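One common way to make trees differentiable, sketched below in numpy, is soft sigmoid routing over (CNN) features in the spirit of deep neural decision forests; this is an illustrative forward pass under assumed shapes and parameters, not necessarily the exact formulation used in the speaker's work:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def soft_tree_predict(x, W, b, leaf_dist):
        # Soft routing in a full binary tree of depth D.
        #   x: feature vector (e.g. a CNN embedding), shape (d,)
        #   W, b: routing parameters for the 2^D - 1 internal nodes
        #   leaf_dist: per-leaf class distributions, shape (2^D, n_classes)
        depth = int(np.log2(leaf_dist.shape[0]))
        n_leaves = leaf_dist.shape[0]
        d = sigmoid(W @ x + b)                    # P(go right) at every internal node
        probs = np.zeros(n_leaves)
        for leaf in range(n_leaves):
            p, node = 1.0, 0
            for level in range(depth):
                go_right = (leaf >> (depth - 1 - level)) & 1
                p *= d[node] if go_right else (1.0 - d[node])
                node = 2 * node + 1 + go_right    # child index in array layout
            probs[leaf] = p
        return probs @ leaf_dist                  # mixture of leaf distributions

    rng = np.random.default_rng(0)
    depth, dim, n_classes = 3, 16, 5
    W = rng.standard_normal((2 ** depth - 1, dim))
    b = rng.standard_normal(2 ** depth - 1)
    leaf_dist = rng.dirichlet(np.ones(n_classes), size=2 ** depth)
    x = rng.standard_normal(dim)                  # stand-in for a CNN feature vector
    print(soft_tree_predict(x, W, b, leaf_dist))  # sums to 1; differentiable in W, b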
Speaker Biography: Wei Shen received his B.S. and Ph.D. degrees, both in Electronics and Information Engineering, from the Huazhong University of Science and Technology (HUST), Wuhan, China, in 2007 and 2012, respectively. From April 2011 to November 2011, he worked at Microsoft Research Asia as an intern. In 2012, he joined the School of Communication and Information Engineering, Shanghai University, where he became an Associate Professor in 2017. He is currently visiting the Department of Computer Science at Johns Hopkins University. He has over 40 peer-reviewed publications in machine learning and computer vision related areas, including IEEE Trans. Image Processing, NIPS, ICML, ICCV, CVPR, and ECCV.
Computer Science Student Defense:
October 5, 2018
The streaming model has been supplying solutions for handling enormous data flows for over 20 years. The model assumes sequential data access and imposes sublinear memory as its primary restriction. Although the majority of the algorithms are randomized and approximate, the field facilitates numerous applications, from handling networking traffic to analyzing cosmology simulations and beyond. This thesis focuses on one of the most foundational and well-studied problems, finding heavy hitters, i.e., frequent items:

1. We address the long-standing complexity gap in finding heavy hitters with an L2 guarantee in the insertion-only stream and present the first optimal algorithm, with a space complexity of O(1) words and O(1) update time. Our result improves on the Count Sketch algorithm of Charikar et al. (2002), which requires O(log n) space and update time.

2. We consider the L2 heavy hitter problem in the interval query setting, which is rapidly emerging in the field. Compared to the well-known sliding window model, where an algorithm is required to report the function of interest computed over the last N updates, the interval query setting provides query flexibility: at any moment t, one can query the function value on any interval (t1, t2) ⊆ (t-N, t). We present the first L2 heavy hitter algorithm in this model and extend the result to the estimation of all streamable functions of the frequency vector.

3. We provide an experimental study of the recent space-optimal result on streaming quantiles by Karnin et al. (2016), a problem that can be considered a generalization of heavy hitters. Additionally, we suggest several variations of the algorithms which improve the running time from O(1/eps) to O(log 1/eps), provide a twofold better space/precision tradeoff, and extend the algorithm to the case of weighted updates.

4. We establish the connection between finding “haloes”, i.e., dense areas, in cosmology N-body simulations and finding heavy hitters. We build the first such halo finder and scale it to handle datasets with up to 10^12 particles via GPU acceleration, sampling, and parallel I/O. We investigate its behavior and compare it to traditional in-memory halo finders. Our solution pushes the memory footprint from several terabytes down to less than a gigabyte, making the problem feasible for small servers and even desktops.
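For readers unfamiliar with the problem, the following is a minimal, deterministic heavy hitters example (the classical Misra-Gries summary, which gives an L1-style guarantee); the thesis targets the strictly stronger L2 guarantee, which requires randomized sketching:

    def misra_gries(stream, k):
        # Deterministic L1 heavy hitters: any item with frequency > n/k survives,
        # using only k-1 counters.
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k - 1:
                counters[item] = 1
            else:
                # Decrement every counter; drop those that reach zero.
                for key in list(counters):
                    counters[key] -= 1
                    if counters[key] == 0:
                        del counters[key]
        return counters

    stream = ["a"] * 500 + ["b"] * 300 + list("cdefghij") * 25
    print(misra_gries(stream, k=4))   # 'a' and 'b' survive as candidate heavy hitters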
Speaker Biography: I am a Ph.D. candidate in the Department of Computer Science at Johns Hopkins University, working with Prof. Vladimir Braverman. I received B.S. and M.S. degrees in Applied Math in 2011 and 2013, respectively, both from the Moscow Institute of Physics and Technology, and enrolled in the Ph.D. program in 2014. My primary interest is streaming algorithms and their applications, ranging from the theoretical foundations behind time and space complexity guarantees to hardware-specific optimizations in real-world applications.
Distinguished Lecturer
October 9, 2018
Blockchain is a Byzantine Fault Tolerant (BFT) replicated state machine, in which each state update is by itself a Turing machine with bounded resources. The core algorithm for achieving BFT in a blockchain appears completely different from classical BFT algorithms. Recent advances in blockchain technology blur these boundaries: hybrid solutions such as Byzcoin, Bitcoin-NG, Hybrid Consensus, Casper, and Solida anchor off-chain BFT decisions inside a PoW chain or the other way around. This keynote will describe Blockchain through the lens of BFT and BFT through the lens of Blockchain, and provide common algorithmic foundations for both.
Computer Science Student Defense:
October 9, 2018
In this thesis we extend signal processing techniques originally formulated in the context of image processing to signals defined on arbitrary triangle meshes. We develop methods for the two most common representations of signals on triangle meshes: signals sampled at the vertices of a finely tessellated mesh, and signals mapped to a coarsely tessellated mesh through texture maps. Our first contribution is the combination of Lagrangian integration and the finite element method in the formulation of two signal processing tasks: shock filters for texture and geometry sharpening, and optical flow for texture registration. Our second contribution is the formulation of gradient-domain processing within the texture atlas. We define a function space that handles chart discontinuities, and linear operators that capture the metric distortion introduced by the parameterization. Our third contribution is the construction of a spatiotemporal atlas parameterization for evolving meshes. Our method introduces localized remeshing operations and a compact parameterization that improve geometry and texture video compression. We show temporally coherent signal processing using partial correspondences.
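For context, a standard gradient-domain (screened Poisson) formulation of the kind underlying such processing, written for a signal f on a surface M with target gradient field g and fidelity weight alpha (this states the generic problem, not the chart-aware function space and operators developed in the thesis), is:

    \min_{f}\ \int_{M} \alpha\,\lVert f - f_{0}\rVert^{2} \;+\; \lVert \nabla f - \vec{g}\rVert^{2}\; dA
    \qquad\Longleftrightarrow\qquad
    (\alpha\,\mathrm{Id} - \Delta)\, f \;=\; \alpha f_{0} \;-\; \nabla\!\cdot\vec{g}

After finite element discretization over the triangle mesh, this becomes a sparse, symmetric positive definite linear system.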
Speaker Biography: Fabian Prada is a PhD candidate advised by Professor Michael Kazhdan. His main areas of research are Geometry Processing and Geometric Modeling. His research interests span Computer Graphics, Computer Vision, and Scientific Visualization. Fabian was an intern at Microsoft Research in the summers of 2015 and 2016, working on tracking and parameterization of dynamic meshes. Before joining the PhD program, Fabian received a master’s degree in Mathematics from IMPA (Brazil) and a bachelor’s degree in Mathematics from Universidad de los Andes (Colombia).
Computer Science Student Defense:
October 9, 2018
The rise of next-generation sequencing has produced an abundance of data with almost limitless analysis applications. As sequencing technology decreases in cost and increases in throughput, the amount of available data is quickly outpacing improvements in processor speed. Analysis methods must also increase in scale to remain computationally tractable. At the same time, larger datasets and the availability of population-wide data offer a broader context with which to improve accuracy.
This thesis presents three tools that improve the scalability of sequencing data storage and analysis. First, a lossy compression method for RNA-seq alignments offers extreme size reduction without compromising downstream accuracy of isoform assembly and quantitation. Second, I describe a graph genome analysis tool that filters population variants for optimal aligner performance. Finally, I offer several methods for improving CNV segmentation accuracy, including borrowing strength across samples to overcome the limitations of low coverage. Together, these methods compose a practical toolkit for improving the computational power of genomic analysis.
Speaker Biography: I am a Ph.D. candidate advised by Ben Langmead in the Department of Computer Science at Johns Hopkins University. My research focuses on developing scalable tools for genomic data analysis. I received a B.S. in Computer Science from Harvard University in 2013.
Lecturer
October 25, 2018
This talk is concerned with designing algorithms for large scale data science using massively parallel computation. The talk will discuss theoretical models and algorithms for massively parallel frameworks such as MapReduce and Spark. The constraints of the models are well connected to practice, but pose algorithmic challenges.
This talk introduces recent developments that overcome these challenges, widely applicable massively parallel algorithmic techniques, and key questions on the theoretical foundations of massively parallel computation. The methods introduced will be applied to large data problems that are central to operations research, theoretical computer science, and machine learning: clustering and submodular function optimization.
The work in this talk has been supported by Google, Yahoo and the NSF.
Distinguished Lecturer
November 6, 2018
As our computational infrastructure races gracefully forward into increasingly parallel multi-core and clustered systems, our ability to easily produce software that can successfully exploit such systems continues to stumble. For years, we’ve fantasized about the world in which we’d write simple, sequential programs, add magic sauce, and suddenly have scalable, parallel executions. We’re not there. We’re not even close. I’ll present a radical, potentially crazy approach to automatic scalability, combining learning, prediction, and speculation. To date, we’ve achieved shockingly good scalability and reasonable speedup in limited domains, but the potential is tantalizingly enormous.
Speaker Biography: Margo Seltzer is a Canada 150 Research Chair in Computer Science and the Cheriton Family Chair in Computer Systems at the University of British Columbia. Her research interests are in systems, construed quite broadly: systems for capturing and accessing provenance, file systems, databases, transaction processing systems, storage and analysis of graph-structured data, new architectures for parallelizing execution, and systems that apply technology to problems in healthcare. Dr. Seltzer received an A.B. degree in Applied Mathematics from Harvard/Radcliffe College and a Ph.D. in Computer Science from the University of California, Berkeley.
Lecturer
November 27, 2018
The field of medical image analysis has experienced a significant shift towards deep learning approaches, particularly convolutional neural networks (CNNs), in recent years. Thanks to their ability to learn features, representations, and tasks directly from images – thereby eliminating the need for manual feature extraction and selection – deep learning solutions are becoming ever more prevalent.
In this talk I will present an overview of recent and ongoing collaborative work in topics related to medical image analysis using deep learning, particularly: (1) tuberculosis type classification from CT scans; (2) skin lesion segmentation and classification; and (3) surgery video summarization and content analysis.
Speaker Biography: Oge Marques, PhD, is a Professor of Computer Science and Engineering in the College of Engineering and Computer Science at Florida Atlantic University (FAU) in Boca Raton, Florida, USA.
He is a Tau Beta Pi Eminent Engineer, an ACM Distinguished Speaker, and the author of more than 100 publications in the area of intelligent processing of visual information – a field that combines image processing, computer vision, image retrieval, machine learning, serious games, and human visual perception – including the textbook “Practical Image and Video Processing Using MATLAB” (Wiley-IEEE Press).
Professor Marques is a Senior Member of both the IEEE (Institute of Electrical and Electronics Engineers) and the ACM (Association for Computing Machinery) and a member of the honor societies Sigma Xi, Phi Kappa Phi, and Upsilon Pi Epsilon. He has more than 30 years of teaching and research experience in different countries (USA, Austria, Brazil, France, India, Spain, Serbia, and the Netherlands).