Spring 2019

Video Recording >>

CS Seminar

February 12, 2019

Users often fall for phishing emails, reuse simple passwords, and fail to effectively utilize “provably” secure systems. These behaviors expose users to significant harm and frustrate industry practitioners and security researchers alike. As the consequences of security breaches become ever more grave, it is important to study why humans behave seemingly irrationally. In this talk, I will illustrate how modeling the effects of structural inequities (variance in skill and socioeconomic status, as well as in culture and gender identity) can both explain apparent irrationality in users’ security behavior and offer tangible improvements in industry systems. Modeling and mitigating security inequities requires combining techniques from economics, data science, and the social sciences to develop new tools for systematically understanding and addressing insecure behavior.

Through novel experimental methodology, I empirically show strong evidence of bounded rationality in security behavior: users make mathematically modelable tradeoffs between the protection offered by security behaviors and the costs of practicing those behaviors, costs which even in a highly usable system may outweigh the benefits, especially for less-resourced users. These findings emphasize the need for industry systems that account for structural inequities and accommodate behavioral variance between users, rather than imposing one-size-fits-all security solutions. More broadly, my techniques for modeling and accounting for inequities have offered key insights in growing technical areas beyond security, including algorithmic fairness.
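
To make the notion of a modelable tradeoff concrete, here is a minimal sketch (my illustration, not a model from the talk) in which a user adopts a security behavior only when its expected protection benefit outweighs its cost; all parameter names and values are hypothetical.

    # Illustrative only: a toy cost-benefit rule for security behavior.
    def adopts(attack_prob, loss_if_breached, protection, behavior_cost):
        """Adopt the behavior iff the expected harm avoided exceeds its cost."""
        expected_benefit = attack_prob * loss_if_breached * protection
        return expected_benefit > behavior_cost

    # The same protection can be rational for one user and not for another
    # whose costs (time, money, attention) are effectively higher.
    print(adopts(0.05, 1000, 0.9, 20))   # True:  45.0 > 20
    print(adopts(0.05, 1000, 0.9, 60))   # False: 45.0 < 60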

Speaker Biography: Elissa Redmiles is a Ph.D. candidate in Computer Science at the University of Maryland and has been a visiting researcher with the Max Planck Institute for Software Systems and the University of Zurich. Elissa’s research interests are broadly in the areas of security and privacy. She uses computational, economic, and social science methods to conduct research on behavioral security. Elissa seeks to understand users’ security and privacy decision-making processes, to investigate the inequalities that arise in these processes, and to mitigate those inequalities through the design of systems that facilitate safety equitably across users. Elissa is the recipient of an NSF Graduate Research Fellowship, a National Defense Science and Engineering Graduate Fellowship, and a Facebook Fellowship. Her work has appeared in popular press publications such as Scientific American, Business Insider, Newsweek, and CNET and has been recognized with the John Karat Usable Privacy and Security Student Research Award, a Distinguished Paper Award at USENIX Security 2018, and a University of Maryland Outstanding Graduate Student Award.

Video Recording >>

CS Seminar

February 14, 2019

Social media can provide a rich platform for those seeking better health and support through difficult experiences. Yet it can also provide space for deviant mental health behaviors: dangerous, stigmatized behaviors related to mental health. These behaviors are harmful to participants in the communities as well as to platform health. However, the deep complexities of mental health and the clandestine nature of these behaviors resist straightforward, data-driven approaches to detection and intervention.

In this talk, I will describe how human-centered algorithms can identify and assess deviant mental health behaviors in online communities. This work combines methods from Machine Learning, Natural Language Processing, and data science with interdisciplinary insights from psychology and sociology. Using the case study of pro-eating-disorder communities, I will show how bringing human-centered insights to algorithms enables robust computational models that identify mental health signals in social media. Then, I will demonstrate how these algorithms can be used to understand latent impacts of these behaviors in online communities, such as the interplay between content moderation and deviant behavior. I will conclude by discussing how human-centered insights can be brought to computational methods to answer our toughest questions about deviant behavior online.
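
As a point of reference for the kind of computational model involved, here is a hedged sketch of a bare-bones text classifier for mental health signals (assuming scikit-learn; the toy posts and labels are invented, and the talk’s human-centered models are substantially more sophisticated).

    # A deliberately minimal baseline, not the speaker's method.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    posts = ["tips for my workout today", "thinspo goals do not eat",
             "great recipe for dinner", "fasting day 3 pls motivate me"]
    labels = [0, 1, 0, 1]  # 1 = hypothetical pro-eating-disorder signal

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(posts, labels)
    print(model.predict(["day 2 of my fast"]))

Part of what makes the real problem hard, as the abstract notes, is that such surface-level classifiers miss the clandestine, evolving vocabulary of these communities.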

Speaker Biography: Stevie Chancellor is a PhD candidate in Human-Centered Computing in the School of Interactive Computing at Georgia Tech, where she is advised by Munmun De Choudhury. Her research interest lies in using computational approaches to understand deviant behavior in online communities. Prior to Georgia Tech, she received a BA from the University of Virginia and an MA from Georgetown University. Stevie’s work has won multiple Best Paper Honorable Mention awards at CHI and CSCW, premier venues in human-computer interaction. Her work has been supported by a Snap Inc. Research Fellowship and the Georgia Tech Foley Scholars program, and has appeared in national publications such as Wired and Gizmodo.

Video Recording >>

CS Seminar

February 21, 2019

There has been a renewed focus on dialog systems, including non-task-driven conversational agents (i.e., “chit-chat bots”). Dialog is a challenging problem since it spans multiple conversational turns. To further complicate the problem, there are many contextual cues and valid possible utterances. We propose that dialog is fundamentally a multiscale process, given that context is carried from previous utterances in the conversation. Deep learning dialog models, which are based on recurrent neural network (RNN) encoder-decoder sequence-to-sequence models, lack the ability to create temporal and stylistic coherence in conversations. João’s thesis focuses on novel neural models for topical and stylistic coherence and their evaluation.
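
For readers unfamiliar with the baseline being critiqued, here is a minimal sketch of an RNN encoder-decoder sequence-to-sequence model (assuming PyTorch; sizes and names are illustrative, and this is the generic architecture rather than the thesis’s models).

    import torch
    import torch.nn as nn

    VOCAB, EMB, HID = 100, 32, 64  # illustrative sizes

    class Seq2Seq(nn.Module):
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(VOCAB, EMB)
            self.encoder = nn.GRU(EMB, HID, batch_first=True)
            self.decoder = nn.GRU(EMB, HID, batch_first=True)
            self.out = nn.Linear(HID, VOCAB)

        def forward(self, src, tgt):
            _, h = self.encoder(self.emb(src))       # summarize the previous turn
            dec, _ = self.decoder(self.emb(tgt), h)  # condition the reply on it
            return self.out(dec)                     # per-token vocabulary logits

    model = Seq2Seq()
    src = torch.randint(0, VOCAB, (2, 7))   # two context utterances
    tgt = torch.randint(0, VOCAB, (2, 5))   # two (shifted) replies
    print(model(src, tgt).shape)            # torch.Size([2, 5, 100])

The single fixed-size hidden state handed from encoder to decoder is one reason such models struggle to carry topical and stylistic context across many turns, which is the gap the thesis targets.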

Speaker Biography: João is a final-year PhD student at the University of Pennsylvania, advised by Lyle Ungar. His PhD research focuses on Natural Language Generation, particularly deep learning methods for non-task-driven conversational agents (chatbots) and the evaluation of these models. His research also includes work on word and sentence embeddings, word and verb predicate clustering, and multi-scale models. He is generally interested in Natural Language Processing, Time Series Analysis, and Deep Learning.

Video Recording >>

CS Seminar

March 7, 2019

The promise of AI in medical imaging lies not only in higher automation, productivity and standardization, but also in an unprecedented use of quantitative data beyond the limits of human cognition. This will support more accurate and more personalized diagnostics and therapies along a multitude of disease pathways. Today, artificial intelligence already plays an important role in the everyday practice of image acquisition, processing and interpretation. In this talk, I will provide example clinical applications where AI plays an integral role. These applications range from automated scanning, detection of anatomical structures, intelligent image registration and reformatting to predicting therapy outcome based on multimodal data.

Speaker Biography: Ali Kamen received a BSc in EE and an MSc in BME from Sharif University of Technology, and a PhD in ECE from the University of Miami. After graduation he joined Siemens Corporate Research in Princeton, NJ, where he has been leading technology development teams in the areas of personalized healthcare and image-guided procedures. Currently he leads initiatives in translating artificial intelligence-based technologies into differentiated, value-creating clinical products. Additionally, Dr. Kamen leads active collaborations with a number of universities and medical centers, including the University of Pennsylvania, the Cleveland Clinic, Harvard Medical School, Johns Hopkins, and the University of Iowa, with more than $5M awarded from a number of NIH-funded grants. He has more than 100 refereed publications (h-index 38) and more than 100 US and international patents (granted and pending), primarily in the areas of medical image computing, computational modeling, and image-guided procedures. He was recognized as Siemens Inventor of the Year in 2015. He is also a Fellow of the American Institute for Medical and Biological Engineering.

Computer Science Student Defense

March 7, 2019

This thesis presents new methods for unsupervised learning of distributed representations of words and entities from text and knowledge bases. The first algorithm presented in the thesis is a multi-view algorithm for learning representations of words called Multiview LSA (MVLSA). Through experiments on close to 50 different views, I show that MVLSA outperforms other state-of-the-art word embedding models. I then focus on learning entity representations for search and recommendation and present the second algorithm of this thesis, called Neural Variational Set Expansion (NVSE). NVSE is also an unsupervised learning method, but it is based on the Variational Autoencoder framework. Evaluations with human annotators show that NVSE can facilitate better search and recommendation of information gathered from noisy, automatic annotation of unstructured natural language corpora. Finally, I move from unstructured data to structured knowledge graphs and present novel approaches for learning embeddings of vertices and edges in a knowledge graph that obey logical constraints.
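
As background for the LSA family of methods the first algorithm builds on, here is a hedged single-view sketch (assuming NumPy; the count matrix is synthetic): factor a word-by-context association matrix with a truncated SVD to obtain dense word vectors. MVLSA generalizes this idea to many views, which this toy version does not attempt.

    import numpy as np

    rng = np.random.default_rng(0)
    counts = rng.poisson(1.0, size=(1000, 500))   # word-by-context counts
    weights = np.log1p(counts)                    # crude association weighting

    U, S, _ = np.linalg.svd(weights, full_matrices=False)
    k = 50
    embeddings = U[:, :k] * S[:k]                 # one k-dim vector per word
    print(embeddings.shape)                       # (1000, 50)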

Speaker Biography: Pushpendre Rastogi graduated with a bachelor’s in Electrical Engineering and a master’s in Information and Communication Technology in 2011 from IIT Delhi. His master’s thesis was on the Stationarity Condition for Fractional Sampling Filters. During 2011-12, he worked at Goldman Sachs as an Operations Strategist (Developer), where he implemented a fat-finger alert system to reduce operational risk due to human error. From 2012-13 he worked at Aspiring Minds Pvt. Ltd. as an applied researcher on the problem of Automated English Essay Grading. In 2013 he entered the Ph.D. program at JHU. He interned at Samsung in 2017 and at Amazon in 2018. In 2017 he won the George M.L. Sommerman Engineering Graduate Teaching Assistant Award.

CS Seminar

March 12, 2019

The future will be defined by autonomous computer systems that are tightly integrated with the environment, also known as cyber-physical systems (CPS). Resilience and security become extremely important in these systems, as even a single error or security attack can have catastrophic consequences. In this talk, I will consider the resilience and security challenges of CPS and how to protect them at low cost. I will give examples of two research projects in my group, one on improving the resilience of Deep Neural Network (DNN) accelerators deployed in self-driving cars, and the other on deploying host-based intrusion detection systems (IDS) on smart embedded devices such as smart electric meters and smart medical devices. Finally, I will discuss some of our ongoing work in this area, and the challenges and opportunities. This is joint work with my students and industry collaborators.
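
To give a flavor of the resilience question for DNN accelerators, here is a hedged sketch (my illustration, not the speaker’s tooling) of the basic fault-injection experiment: flip a single bit of a float32 weight and observe how much the value, and hence the network’s output, can change.

    import struct

    def flip_bit(x, bit):
        """Return float32 x with one bit of its IEEE-754 encoding flipped."""
        (as_int,) = struct.unpack("I", struct.pack("f", x))
        (flipped,) = struct.unpack("f", struct.pack("I", as_int ^ (1 << bit)))
        return flipped

    w = 0.5
    print(flip_bit(w, 30))  # exponent-bit flip: ~1.7e38, catastrophic
    print(flip_bit(w, 3))   # low mantissa-bit flip: ~0.5000005, benign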

Speaker Biography: Karthik Pattabiraman received his M.S. and Ph.D. degrees from the University of Illinois at Urbana-Champaign (UIUC) in 2004 and 2009 respectively. After a post-doctoral stint at Microsoft Research (MSR), Karthik joined the University of British Columbia (UBC) in 2010, where he is now an associate professor of electrical and computer engineering. Karthik’s research interests are in building error-resilient software systems, and in software engineering and security. Karthik has won distinguished paper/runner-up awards at the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) 2018, the IEEE International Conference on Software Testing (ICST) 2013, and the IEEE/ACM International Conference on Software Engineering (ICSE) 2014. He is a recipient of the distinguished alumni early career award from UIUC’s Computer Science department in 2018, the NSERC Discovery Accelerator Supplement (DAS) award in 2015, the 2018 Killam Faculty Research Prize, and the 2016 Killam Faculty Research Fellowship at UBC. He also won the William Carter award in 2008 for the best PhD thesis in the area of fault-tolerant computing. Karthik is a senior member of the IEEE, and the vice-chair of the IFIP Working Group on Dependable Computing and Fault-Tolerance (10.4).

Video Recording >>

CS Seminar

March 14, 2019

Neural networks have rapidly become central to NLP systems. While such systems perform well on typical test set examples, their generalization abilities are often poorly understood. In this talk, I will discuss new methods to characterize the gaps between the abilities of neural systems and those of humans, by focusing on interpretable axes of generalization from the training set rather than on average test set performance. I will show that recurrent neural network (RNN) language models are able to process syntactic dependencies in typical sentences with considerable success, but when evaluated on more complex syntactically controlled materials, their error rate increases sharply. Likewise, neural systems trained to perform natural language inference generalize much more poorly than their test set performance would suggest. Finally, I will discuss a novel method for measuring compositionality in neural network representations; using this method, we show that the sentence representations acquired by neural natural language inference systems are not fully compositional, in line with their limited generalization abilities.
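
The agreement experiments mentioned here follow a simple targeted-evaluation recipe: score a minimal pair with a language model and check whether the grammatical variant is preferred. Below is a hedged sketch of that harness; lm_score is a hypothetical stand-in for a trained RNN language model’s log-probability.

    def lm_score(sentence):
        """Hypothetical placeholder; a real LM returns a log-probability."""
        return -len(sentence.split())

    pairs = [
        ("The keys to the cabinet are on the table.",
         "The keys to the cabinet is on the table."),
    ]
    correct = sum(lm_score(good) > lm_score(bad) for good, bad in pairs)
    print(f"agreement accuracy: {correct / len(pairs):.2f}")

Varying the material between subject and verb (attractors, relative clauses) is what exposes the sharp error-rate increases the talk describes.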

Speaker Biography: Tal Linzen is an Assistant Professor of Cognitive Science at Johns Hopkins University. Before moving to Johns Hopkins in 2017, he was a postdoctoral researcher at the École Normale Supérieure in Paris, where he worked with Emmanuel Dupoux and Benjamin Spector; before that he obtained his PhD from the Department of Linguistics at New York University in 2015, under the supervision of Alec Marantz. At JHU, Dr. Linzen directs the Computation and Psycholinguistics Lab; the lab develops computational models of human language comprehension and acquisition, as well as methods for interpreting, evaluating and extending neural network models for natural language processing. The lab’s work has appeared in venues such as EMNLP, ICLR, NAACL and TACL, as well as in journals such as Cognitive Science and Journal of Neuroscience. Dr. Linzen is one of the co-organizers of the BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (EMNLP 2018, ACL 2019).

Computer Science Student Defense

March 15, 2019

Graphs are widely utilized in network design and many other applications. A natural question is: can we keep as few edges of the original graph as possible, while still making sure that the vertices remain connected within certain distance constraints?

In this thesis, we consider different versions of graph compression problems, including graph spanners, approximate distance oracles, and Steiner networks. Since these problems are all NP-hard, we mostly focus on designing approximation algorithms and proving inapproximability results.
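
For concreteness, here is a hedged sketch of the classic greedy construction of a t-spanner, the prototypical graph compression object named above (my toy implementation; the example graph is invented): scan edges by increasing weight and keep an edge only if the spanner built so far has no path between its endpoints of length at most t times the edge’s weight.

    import heapq

    def dist(adj, s, t_node):
        """Dijkstra distance from s to t_node in the partial spanner."""
        best = {s: 0}
        pq = [(0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if u == t_node:
                return du
            if du > best.get(u, float("inf")):
                continue
            for v, w in adj.get(u, []):
                nd = du + w
                if nd < best.get(v, float("inf")):
                    best[v] = nd
                    heapq.heappush(pq, (nd, v))
        return float("inf")

    def greedy_spanner(edges, t):
        adj, kept = {}, []
        for u, v, w in sorted(edges, key=lambda e: e[2]):
            if dist(adj, u, v) > t * w:        # no short-enough detour yet
                kept.append((u, v, w))
                adj.setdefault(u, []).append((v, w))
                adj.setdefault(v, []).append((u, w))
        return kept

    edges = [(0, 1, 1), (1, 2, 1), (0, 2, 2), (2, 3, 1), (0, 3, 5)]
    print(greedy_spanner(edges, t=2))  # [(0, 1, 1), (1, 2, 1), (2, 3, 1)]

The kept subgraph preserves all pairwise distances up to factor t; results of the kind in the thesis concern how small such subgraphs can be and how hard they are to approximate.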

Speaker Biography: I am a Ph.D. candidate advised by Professor Michael Dinitz in the Department of Computer Science at Johns Hopkins University. My research focuses on approximation algorithms and graph algorithms. I received a B.S. in Mathematics from Tsinghua University in 2014.

Video Recording >>

CS Seminar

March 28, 2019

As digital imaging techniques have matured over the past three decades alongside computing technology, there is growing interest in the ability to study and quantify medical images as the data they really are. Many researchers and industry partners hope to usher in a new era of medical imaging with the help of artificial intelligence. This period of development in medical imaging will challenge the classic rationalist approach to radiology with a modern empiricist approach. However, it is important to understand the relevant concepts in imaging technology and in medicine in order for solutions to be effective and reproducible in the clinical setting. The purpose of this talk is to introduce the computer science audience to medical imaging data alongside real-world clinical challenges, with the goal of stimulating ideas for collaborative translational research between computer scientists and clinician scientists. The talk will also cover some of the current challenges faced in this space in order to prompt discussion. Publicly available datasets and upcoming datasets currently being assembled will be discussed. Opportunities for quantitative imaging to impact oncoradiology (cancer imaging) will be highlighted. At the conclusion, the audience will understand basic concepts regarding types of medical imaging data, will have been introduced to the concept of diagnostic imaging vectors, and will have a basic appreciation for the utility of hybrid imaging approaches and explainable AI (XAI) in clinical care.

Speaker Biography: Dr. Michael Morris graduated from Johns Hopkins University with his BS and MS in Molecular and Cellular Biology, where he was first exposed to ‘multi-omics’ in biological systems. He then served as a team member on the initial FDA clinical trial for an intraoperative diagnostic tool, where he gained an interest in quantitative approaches to medical diagnostics. As the FDA study concluded, he matriculated to the University of Maryland School of Medicine, where he earned his MD and became inspired by medical imaging and clinical informatics. Dr. Morris went on to complete his internship in internal medicine in the joint Mercy Medical Center/University of Maryland Medical Center program and his residency in diagnostic radiology at the University of Maryland Medical Center’s department of diagnostic radiology and nuclear medicine, where he also completed his nuclear medicine training. Over the course of his training, his academic interests have focused on oncoradiology, molecular and hybrid imaging with PET/CT/MRI, and imaging informatics, with various projects at his host institution and in collaboration with the National Institutes of Health, the Baltimore VA Medical Center, UMBC, and Johns Hopkins University, among other academic, research, and industry organizations. He currently serves on the medical staff for Diagnostic Radiology, Nuclear Medicine, and Internal Medicine at Mercy Medical Center, a private academically affiliated hospital and large cancer referral center for the state of Maryland and surrounding regions. In his spare time, Michael enjoys thinking about tumor heterogeneity, image-guided treatment planning, cooking, snowboarding, traveling, and photography.

Video Recording >>

Distinguished Lecturer

April 2, 2019

Selfish behavior can often lead to suboptimal outcomes for all participants, a phenomenon illustrated by many classical examples in game theory. Over the last decade we have studied Nash equilibria of games and developed a good understanding of how to quantify the impact of strategic user behavior on overall performance in many games (including traffic routing as well as online auctions). In this talk we will focus on games where players use a form of learning that helps them adapt to the environment. We ask whether the quantitative guarantees obtained for Nash equilibria extend to such out-of-equilibrium game play, or even more broadly, to settings where the game or the population of players is dynamically changing and participants have to adapt to the dynamic environment.
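
A worked toy example of the kind of quantification involved (Pigou’s two-link routing network, a standard textbook case; the code is my illustration): one unit of traffic chooses between a link of fixed cost 1 and a link whose cost equals its load. Selfish routing sends everyone to the variable link, and the ratio to the social optimum, the price of anarchy, is 4/3.

    def avg_cost(x):                 # x = fraction of traffic on the variable link
        return x * x + (1 - x) * 1.0

    selfish = avg_cost(1.0)          # the variable link never costs more than 1,
                                     # so selfish users all take it: cost 1.0
    optimal = min(avg_cost(i / 1000) for i in range(1001))  # ~0.75 at x = 0.5
    print(selfish / optimal)         # ~1.333, the 4/3 price of anarchy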

Speaker Biography: Eva Tardos is the Jacob Gould Schurman Professor of Computer Science at Cornell University and was Computer Science department chair from 2006 to 2010. She received her BA and PhD from Eotvos University in Budapest in 1984 and joined the faculty at Cornell in 1989. Her research interests are in algorithms, networks, and the interface of economics and computer science, focusing on the theory of designing systems and algorithms for users with diverse economic interests. For her work, Tardos has been elected to the National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences, and she is a fellow of multiple societies (ACM, AMS, SIAM, INFORMS). Dr. Tardos is also the recipient of several fellowships and awards, including the Packard Fellowship, the Fulkerson Prize, and the Gödel Prize. Most recently, IEEE announced that Dr. Tardos will receive the 2019 IEEE John von Neumann Medal in May for outstanding achievement in computer-related science and technology.

Computer Science Student Defense

April 4, 2019

Object storage has emerged as a low-cost, scalable solution for storing unstructured data in the cloud. However, performance limitations often compel users to employ supplementary storage services for their varied workloads. The result is growing storage sprawl, unacceptably low performance, and an increase in associated storage costs. We combine the assets of multiple cloud services on offer to develop NDStore, a scalable multi-hierarchical data storage deployment for open-science data in the cloud. It utilizes object storage as a scalable base tier and an in-memory cluster as a low-latency caching tier to support a variety of workloads. Moreover, many applications that rely on richer file system APIs and semantics are unable to benefit from object storage. Users either transfer data between a file system and object storage or use inefficient file connectors over object stores. One promising solution to this problem is providing dual access: the ability to transparently read and write the same data through both file system interfaces and object storage APIs. We discuss features which we believe are essential or desirable in a dual-access object storage file system (OSFS). Further, we design and implement Agni, an efficient dual-access OSFS, utilizing only standard object storage APIs and capabilities. Generic object storage’s lack of support for partial writes introduces a performance penalty for some workloads. Agni overcomes this shortcoming by implementing a multi-tier data structure which temporally indexes partial writes from the file system, persists them to a log, and merges them asynchronously for eventually consistent access through the object interface. Our experiments demonstrate that Agni improves partial write performance by two to seven times with minimal indexing overhead, and for some representative workloads it can improve performance by 20%-60% compared to S3FS, a popular OSFS, or the prevalent approach of manually copying data between different storage systems.
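
To illustrate the write path described above, here is a hedged toy reconstruction (mine, not Agni’s actual design): partial file writes go to a log, reads replay the log over the base object, and a background merge folds the log back into the object for eventually consistent object-API access. A Python dict stands in for the object store.

    store = {"file.bin": bytes(16)}       # base object in the "object store"
    log = []                              # (offset, data) partial writes

    def fs_write(offset, data):
        log.append((offset, bytes(data)))  # fast path: log, don't rewrite object

    def fs_read(offset, length):
        view = bytearray(store["file.bin"])
        for off, data in log:              # replay logged writes over the base
            view[off:off + len(data)] = data
        return bytes(view[offset:offset + length])

    def merge():
        """Asynchronous step in the real system: fold the log into the object."""
        store["file.bin"] = fs_read(0, len(store["file.bin"]))
        log.clear()

    fs_write(4, b"abcd")
    print(fs_read(0, 16))      # file interface sees the write immediately
    merge()
    print(store["file.bin"])   # object interface is consistent after the merge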

Speaker Biography: Kunal Lillaney is a Ph.D. candidate in the Department of Computer Science at Johns Hopkins University, working with Randal Burns in the Hopkins Storage Systems Lab (HSSL). His research focuses on enabling big data in the cloud by building hierarchical storage services over object storage. He received his B.Engg. degree in Computer Engineering from the University of Mumbai in 2011 and his MSE degree in Computer Science from Johns Hopkins University in 2013. During his Ph.D., Kunal has interned with IBM Research-Almaden and Lawrence Livermore National Laboratory. He has also served as the Secretary of the Upsilon Pi Epsilon (UPE) JHU Chapter between 2015 and 2017, and won the UPE Executive Council Award in 2016.

Video Recording >>

Distinguished Lecturer

April 16, 2019

There is growing concern about fairness in algorithmic decision making: Are algorithmic decisions treating different groups fairly? How can we make them fairer? What do we even mean by fair? In this talk I will discuss some of our work on this topic, focusing on the setting of online decision making. For instance, a classic result states that given a collection of predictors, one can adaptively combine them to perform nearly as well as the best of those predictors in hindsight (achieve “no regret”) without any stochastic assumptions. Can one extend this guarantee so that if the predictors are themselves fair (according to a given definition), then the overall combination is fair as well (according to the same definition)? I will discuss this and other issues. This is joint work with Suriya Gunasekar, Thodoris Lykouris, and Nati Srebro.
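
The classic result referenced here is achieved by multiplicative-weights (“Hedge”) style algorithms; below is a hedged sketch (assuming NumPy; losses are random for illustration, whereas the guarantee holds even against an adversary). The learner’s average loss approaches that of the best single predictor in hindsight.

    import numpy as np

    rng = np.random.default_rng(1)
    T, n, eta = 1000, 5, 0.1
    weights = np.ones(n)
    total_loss, expert_loss = 0.0, np.zeros(n)

    for _ in range(T):
        p = weights / weights.sum()        # play the weighted mixture of predictors
        losses = rng.uniform(0, 1, n)      # losses in [0, 1] are then revealed
        total_loss += p @ losses
        expert_loss += losses
        weights *= np.exp(-eta * losses)   # exponentially downweight poor predictors

    regret = total_loss - expert_loss.min()   # vs. best predictor in hindsight
    print(f"average regret per round: {regret / T:.4f}")  # shrinks as T grows

The fairness question in the talk asks whether such a combination inherits fairness properties from its constituent predictors, a guarantee the vanilla analysis does not provide.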

Speaker Biography: Professor Blum’s main research interests are in Theoretical Computer Science and Machine Learning, including Machine Learning Theory, Approximation Algorithms, Algorithmic Game Theory, and Database Privacy, as well as connections among them. Some current specific interests include multi-agent learning, multi-task learning, semi-supervised learning, and the design of incentive systems. He is also known for his past work in AI planning. Prof. Blum has served as Program Chair for the IEEE Symposium on Foundations of Computer Science (FOCS) and the Conference on Learning Theory (COLT). He has served as Chair of the ACM SIGACT Committee for the Advancement of Theoretical Computer Science and on the SIGACT Executive Committee.

Video Recording >>

CS Seminar

April 18, 2019

Machine learning is subject to the limits of computation, and advances in algorithms can open up new possibilities for machine learning. The problem of nearest neighbour search arises commonly in machine learning; unfortunately, despite over 40 years of research, prior sublinear algorithms for exact nearest neighbour search suffer from the curse of dimensionality, that is, an exponential dependence of query time complexity on either the ambient or the intrinsic dimensionality. In the first part of this talk, I will present Dynamic Continuous Indexing (DCI), a new family of exact randomized algorithms that avoids exponential dependence on both the ambient and the intrinsic dimensionality. This advance enables us to develop a new method for generative modelling, known as Implicit Maximum Likelihood Estimation (IMLE), which I will present in the second part of the talk. IMLE can be shown to be equivalent to maximum likelihood under some conditions and simultaneously overcomes three fundamental issues of generative adversarial nets (GANs), namely mode collapse, vanishing gradients and training instability. I will illustrate why mode collapse happens in GANs and how IMLE overcomes it, and also demonstrate empirical results on image synthesis. I will close with a brief discussion of another approach I introduced, known as Learning to Optimize.
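
To show where nearest-neighbour search enters the picture, here is a hedged toy sketch of the core IMLE step as the abstract describes it (assuming PyTorch; the dataset, generator, and hyperparameters are invented): each data point is matched to its nearest generated sample, and that distance is minimized, so no data point, and hence no mode, can be left uncovered.

    import torch

    data = torch.randn(64, 2) * 0.3 + torch.tensor([2.0, 0.0])  # toy 2-D dataset
    gen = torch.nn.Linear(8, 2)                                  # toy "generator"
    opt = torch.optim.Adam(gen.parameters(), lr=1e-2)

    for step in range(200):
        samples = gen(torch.randn(256, 8))   # draw generated samples
        d = torch.cdist(data, samples)       # data-to-sample distances
        nearest = d.min(dim=1).values        # nearest sample per data point
        loss = (nearest ** 2).mean()         # pull chosen samples toward the data
        opt.zero_grad(); loss.backward(); opt.step()

    print(gen(torch.randn(500, 8)).mean(dim=0))  # mass drifts toward the data mode

In the real method, DCI makes the nearest-sample search fast in high dimensions; the brute-force cdist above is exactly the bottleneck it removes.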

Speaker Biography: Ke Li is a Ph.D. candidate at UC Berkeley advised by Prof. Jitendra Malik. He is interested in a broad range of topics in machine learning, and also enjoys working on computer vision, natural language processing and algorithms. He is particularly passionate about tackling long-standing fundamental problems that cannot be tackled with a straightforward application of conventional techniques. He received his Hon. B.Sc. in Computer Science from the University of Toronto and is grateful for the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).

Computer Science Student Defense

April 25, 2019

The proliferation of scientific and industrial sensors is causing an accelerating deluge of data, and processing it into actionable knowledge requires fast and accurate machine learning methods. A class of algorithms suited to processing these large amounts of data is decision forests, widely used methods known for their versatility, state-of-the-art inference, and fast model training. Oblique Sparse Projection Forests (OSPFs) are a subset of decision forests which provide data inference superior to other methods. Despite providing state-of-the-art inference and having a computational complexity similar to other popular decision forests, there are no OSPF implementations that scale beyond trivially sized datasets.

We explore whether OSPF training and inference speeds can compete with other popular decision forest variants despite an algorithmic incompatibility that prevents OSPFs from using traditional forest training optimizations. First, using R, we implement a highly extensible proof-of-concept version of a recently conceived OSPF, Randomer Forest, shown to provide state-of-the-art results on many datasets, and provide this system for general use via CRAN. We then develop and implement a postprocessing method, Forest Packing, to pack the nodes of a trained forest into a novel data structure and modify the ensemble traversal method to accelerate forest-based inferences. Finally, we develop FastRerF, an optimized version of Randomer Forest which dynamically performs forest packing during training.

The initial implementation in R provided training speeds in line with other decision forest systems and scaled better with additional resources, but it used an excessive amount of memory and provided slow inference speeds. The development of Forest Packing increased inference throughput by almost an order of magnitude compared to other systems while greatly reducing prediction latency. FastRerF model training is faster than other popular decision forest systems when using similar parameters, and it trains Random Forests faster than the current state of the art. Overall, we provide data scientists with a novel OSPF system, with R and Python front ends, which trains and predicts faster than other decision forest implementations.
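
As a rough illustration of the packing idea (my toy reconstruction, not the Forest Packing layout itself), a trained tree can be stored in flat parallel arrays so that inference becomes a tight loop over contiguous memory rather than pointer chasing.

    import numpy as np

    # One toy tree: feature[i] < 0 marks a leaf whose prediction is value[i].
    feature = np.array([0, 1, -1, -1, -1])
    cut     = np.array([0.5, 0.2, 0.0, 0.0, 0.0])
    value   = np.array([0.0, 0.0, 1.0, 0.0, 1.0])
    left    = np.array([1, 3, 0, 0, 0])
    right   = np.array([2, 4, 0, 0, 0])

    def predict(x):
        i = 0
        while feature[i] >= 0:                      # descend until a leaf
            i = left[i] if x[feature[i]] < cut[i] else right[i]
        return value[i]

    print(predict(np.array([0.3, 0.1])))   # 0.0
    print(predict(np.array([0.9, 0.9])))   # 1.0

Laying out the nodes of all trees in traversal order, and interleaving trees for batched queries, is what improves cache behavior during ensemble inference.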

Speaker Biography: James Browne received a Bachelor’s degree in Computer Science from the United States Military Academy at West Point in 2002. In 2012 he received a dual Master of Science degree in Computer Science and Applied Mathematics from the Naval Postgraduate School in Monterey, California, where he received the Rear Admiral Grace Murray Hopper Computer Science Award for excellence in computer science research. He enrolled in the Computer Science Ph.D. program at Johns Hopkins University in 2016 and, after graduation, will become an instructor in the Electrical Engineering and Computer Science Department at the United States Military Academy.

Computer Science Student Defense

April 29, 2019

Today’s large-scale datasets necessitate scalable data analysis frameworks and libraries. Traditional distributed memory solutions neglect optimizations for prevalent Non-Uniform Memory Access (NUMA) architectures. Additionally, distributed memory solutions often rely solely on process-level parallelism and thus forgo shared and external memory optimizations, leading to suboptimal overall performance. This thesis explores the effects of NUMA-awareness and fine-grain I/O optimizations from SSDs to improve hardware minimality, scalability and memory parallelism in graph analytics and community detection. Our computation optimizations target data that reside either (i) in memory, (ii) semi-externally, or (iii) in distributed memory. We first present Graphyti, a semi-external memory graph library built on the FlashGraph framework, to demonstrate key design principles for vertex-centric, semi-external memory (SEM) graph applications. Graphyti on a single thick node achieves performance comparable to popular distributed graph libraries running on a cluster. We then address web-scale community detection and present the clusterNOR framework. We advance the state of the art for memory-parallel NUMA computation of k-means and, subsequently, of clustering algorithms that follow the Majorize-Minimization/Minorize-Maximization (MM) objective function optimization pattern. clusterNOR introduces semi-external memory I/O optimizations and cache-friendly NUMA scheduling policies for both hierarchical and non-hierarchical clustering algorithms. We demonstrate how these optimizations lead to performance improvements of up to an order of magnitude over state-of-the-art clustering frameworks.
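
A hedged sketch of the semi-external-memory pattern behind such systems (my simplification in NumPy, not clusterNOR itself): keep only the k centroids in RAM and stream the points from disk in chunks, one pass per iteration of the k-means MM loop.

    import numpy as np

    # Toy on-disk dataset standing in for data too large for RAM.
    pts = np.random.default_rng(0).normal(size=(10_000, 4)).astype(np.float32)
    pts[5_000:] += 5.0
    np.save("points.npy", pts)

    X = np.load("points.npy", mmap_mode="r")     # data stays on disk
    k, chunk = 2, 1_000
    centroids = np.array([X[0], X[-1]], dtype=np.float64)  # naive init

    for _ in range(10):                          # one streaming pass per iteration
        sums, counts = np.zeros((k, X.shape[1])), np.zeros(k)
        for s in range(0, len(X), chunk):
            block = np.asarray(X[s:s + chunk])
            assign = ((block[:, None] - centroids) ** 2).sum(-1).argmin(1)
            for j in range(k):
                sums[j] += block[assign == j].sum(0)
                counts[j] += (assign == j).sum()
        centroids = sums / counts[:, None]

    print(centroids.round(2))   # centers near (0,0,0,0) and (5,5,5,5)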

Speaker Biography: Disa Mhembere is a Ph.D. candidate in computer science at the Johns Hopkins University. He received both a masters in engineering management in 2013 and a masters in computer science in 2015 from Johns Hopkins University. During his Ph.D. he interned with IBM Research and Kyndi Inc. Disa was awarded the Paul V. Renoff Computer Science Graduate Fellowship in 2014, the UPE Special Recognition award in 2014 and the UPE Academic Achievement Award in 2017. He also received the best presentation award at the High-Performance Parallel and Distributed Computing (HPDC) Conference in 2017.

Video Recording >>

CS Seminar

May 2, 2019

Over the last five years, methods based on Deep Convolutional Neural Networks (DCNNs) have shown impressive performance improvements for object detection and recognition problems. This has been made possible by the availability of large annotated datasets, a better understanding of the non-linear mapping between input images and class labels, as well as the affordability of GPUs. However, a vast majority of DCNN-based recognition methods are designed for a closed world, where the primary assumption is that all categories are known a priori. In many real-world applications, this assumption does not necessarily hold. In this talk, I will present some of my recent work on developing DCNN-based algorithms for open-set recognition as well as novelty detection and one-class recognition. I will conclude my talk by describing several promising directions for future research.
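
For orientation, the simplest baseline for the open-set problem described here is confidence thresholding, sketched below (my illustration in NumPy; the talk’s methods are considerably more sophisticated): inputs whose best class score is too low are rejected as unknown rather than forced into a known category.

    import numpy as np

    def open_set_predict(logits, threshold=0.8):
        z = logits - logits.max()
        probs = np.exp(z) / np.exp(z).sum()     # softmax over known classes
        return int(probs.argmax()) if probs.max() >= threshold else "unknown"

    print(open_set_predict(np.array([8.0, 0.5, 0.3])))  # confident: class 0
    print(open_set_predict(np.array([1.1, 1.0, 0.9])))  # ambiguous: 'unknown'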

Speaker Biography: Vishal M. Patel is an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) at Johns Hopkins University. Prior to joining Hopkins, he was an A. Walter Tyson Assistant Professor in the Department of ECE at Rutgers University and a member of the research faculty at the University of Maryland Institute for Advanced Computer Studies (UMIACS). He completed his Ph.D. in Electrical Engineering at the University of Maryland, College Park, MD, in 2010. He has received a number of awards including the 2016 ONR Young Investigator Award, the 2016 Jimmy Lin Award for Invention, the A. Walter Tyson Assistant Professorship Award, the Best Paper Award at IEEE AVSS 2017, the Best Paper Award at IEEE BTAS 2015, an Honorable Mention Paper Award at IAPR ICB 2018, two Best Student Paper Awards at IAPR ICPR 2018, and Best Poster Awards at BTAS 2015 and 2016. He is an Associate Editor of the IEEE Signal Processing Magazine and serves on the Information Forensics and Security Technical Committee of the IEEE Signal Processing Society. He is serving as the Vice President (Conferences) of the IEEE Biometrics Council. He is a member of Eta Kappa Nu, Pi Mu Epsilon, and Phi Beta Kappa.

Computer Science Student Defense

May 21, 2019

This thesis focuses on two problems concerning pseudorandom constructions.

The first problem is how to compute pseudorandom constructions with constant-depth circuits. Pseudorandom constructions are deterministic functions which are used to substitute for random constructions in various computational tasks. Constant-depth circuits here refer to the computation model consisting of circuits of AND, OR, and negation gates with constant depth and unbounded fan-in, which take function inputs on input wires and produce function outputs on output wires. They can be simulated by fast parallel algorithms. We study such constructions mainly for randomness extractors, secret sharing schemes, and their applications. Randomness extractors are functions which transform biased random bits into uniform ones. They can be used to recycle random bits in computations if some entropy remains. Secret sharing schemes efficiently share secrets among multiple parties such that any collusion of a bounded number of parties cannot recover any information about the secret, while a certain larger number of parties can recover the secret. Our work constructs these objects with near-optimal parameters and explores their applications.
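
For readers new to the second object, here is a hedged illustration of the textbook instance of secret sharing, Shamir’s scheme over a prime field (my sketch; the thesis constructions differ, in particular by targeting computability in constant depth): any t shares recover the secret by Lagrange interpolation, while fewer reveal nothing.

    import random

    P = 2**61 - 1   # a Mersenne prime; arithmetic is over the field GF(P)

    def share(secret, n, t):
        """Split secret into n shares; any t of them recover it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation of the degree-(t-1) polynomial at 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (P - xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(123456789, n=5, t=3)
    print(recover(shares[:3]))   # 123456789, from any 3 of the 5 shares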

The second problem is about applying pseudorandom constructions to build error-correcting codes (ECCs) for edit distance. ECCs map messages to codewords in a metric space such that one can recover a codeword even after a bounded number of errors, which can move the codeword away by some bounded distance. They are widely used in both the theoretical and practical parts of computer science. The classic errors are Hamming errors, i.e., substitutions and erasures of symbols, which have been studied extensively in the literature. We consider a more general kind of error, edit errors, consisting of insertions and deletions, which may change the positions of symbols. Our work gives explicit constructions of binary ECCs for edit errors with near-optimal redundancy length. The constructions utilize document exchange protocols, which let two parties synchronize their strings within bounded edit distance by having one party send a short sketch of its string to the other. We apply various pseudorandom constructions to derive deterministic document exchange protocols from randomized ones, and then construct ECCs from them. We also extend these constructions to handle block insertions/deletions and transpositions. All these constructions have near-optimal parameters.
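
For readers unfamiliar with the error model, the following small sketch (standard textbook dynamic programming, not one of the thesis’s constructions) computes the edit distance these codes must tolerate, counting insertions, deletions, and substitutions.

    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # delete ca
                               cur[j - 1] + 1,             # insert cb
                               prev[j - 1] + (ca != cb)))  # substitute
            prev = cur
        return prev[-1]

    print(edit_distance("karolin", "kathrin"))  # 3

Unlike a Hamming error, a single deletion shifts every later symbol, which is what makes codes and document exchange protocols for this metric difficult.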

Speaker Biography: I am a Ph.D. candidate in the Computer Science Department at Johns Hopkins University, advised by Xin Li.

My current research is on Randomness and Combinatorics in Computation and their applications to Complexity Theory, Information Theory, and related areas. Machine Learning, Networks, and other topics in Computer Science also interest me.

Before Hopkins, I received a MSE degree from Tsinghua University and a B.E. degree from Shandong University.

In my spare time, I enjoy playing soccer and basketball, as well as music and travelling.