Spring 2016
Student
February 10, 2016
Although cloud computing is widely adopted, one of the biggest concerns is how to preserve the security and privacy of client data being processed and/or stored in a cloud computing environment. When it comes to cloud data protection, the methods employed can be very similar to protecting data within a traditional data center. Authentication and identity, access control, encryption, secure deletion, integrity checking, and data masking are all data protection methods that have applicability in cloud computing. Current research in cloud data protection primarily falls into three main categories: 1) Authentication & Access Control, 2) Encryption, and 3) Intrusion Detection. This thesis examines the various mechanisms that currently exist to protect data stored in a public cloud computing environment. It also looks at the methods employed to detect intrusions targeting cloud data when and if those protection mechanisms fail. In response to these findings, we present three primary contributions that focus on enhancing the overall security of user data residing in a hosted environment such as the cloud. First, we provide an analysis of cloud storage vendors that shows how data can be exposed when shared, even in the most ‘secure’ environments. Second, we offer Pretty Good Privacy (PGP) as a method of securing data within this environment while enhancing PGP’s Web of Trust validation mechanism using Bitcoin. Lastly, we provide a framework for detecting and mitigating data exfiltration attempts in Software-as-a-Service (SaaS) cloud storage environments using cyber deception.
Speaker Biography: Duane Wilson holds a B.S. in Computer Science from Claflin University (Thesis: Monitoring and Analysis of Malicious Network Traffic over University Networks), a Master of Engineering in Computer Science from Cornell University (Thesis: Design of Analysis Framework for Utilizing Firewall and System Logs as Source for Computer Intrusion Information), and a Master of Science in Information Security from Johns Hopkins University (Thesis: A Discretionary Access Control Method for Preventing Data Exfiltration via Removable Devices). His pursuit and completion of this terminal degree speak to his passion for making contributions to the computer science and cyber security bodies of knowledge throughout the course of his future career.
Duane has spent over 13 years working in the field of information security, beginning at the U.S. Army Research Laboratory as a contributing network analyst and subsequently as a security tool researcher/developer. In his role as a Sr. Cyber Security Engineer, he focused extensively on network analyst training, performing security/risk assessments for high-value infrastructure components, security tool evaluations, and providing recommendations to enhance the Computer Network Defense capabilities within the DoD. More recently, he was involved in the development of advanced cyber security processes in the areas of digital forensics and malware analysis for the Security Operations Center of the Centers for Medicare and Medicaid Services (CMS). Additionally, he provided insight into a series of test plans for the Joint Information Environment, operated and managed by the Defense Information Systems Agency. This effort is motivated by the DoD’s desire to consolidate disparate data centers throughout the DoD into a Single Security Architecture. Lastly, he has had the opportunity to serve as a guest lecturer at Alabama State University on the topic of cyber criminals and to develop educational curricula for the Maryland State Department of Education.
Starting in November 2015, Duane has been serving as the Director of Cyber Security for Sabre Systems Inc. In this new role, Duane will be responsible for all of the company’s business development activities relating to cyber security. The company will focus on identifying opportunities for sole-source work, Small Business Innovative Research initiatives, Broad Agency Announcements, and internal research projects to offer to government and commercial clientele. To date, Duane has contributed to a 30-year strategic plan for the Department of the Navy based on Computer Immunology and has submitted proposals on Cyber Deception, Naval Aircraft Risk/Threat Assessments, a Cryptographic Workbench solution, a Bitcoin Transaction Blockchain for Privacy Identity Management, and Cyber Resiliency for Industrial Control systems and applications (via the Office of Naval Research). Lastly, Duane has published a number of articles in reputable venues throughout his matriculation at Johns Hopkins University. A Discretionary Access Control Method for Preventing Data Exfiltration (DE) via Removable Devices focuses on host-level protections for thumb drives and external hard drives. In “To Share or Not to Share” in Client-Side Encrypted Clouds, Duane presents his analysis of secure cloud storage providers and identifies a major flaw in the design of the proposed sharing methodologies. His latest publication, From Pretty Good to Great: Enhancing PGP Using Bitcoin and the Blockchain, presents an alternative method of validating PGP certificates for use in a hosted environment such as the cloud. He is currently working on two additional publications: 1) Mitigating Data Exfiltration in Software-as-a-Service Cloud Storage Environments, which leverages cyber deception concepts as an alternative or augment to traditional data loss prevention and/or encryption methods of protection, and 2) Deceptive Identities for Cloud Sharing, which discusses the possibility of using cyber deception to protect user information in the cloud when information is shared.
Student
February 12, 2016
The healthcare industry has been adopting technology at an astonishing rate. This technology has served to increase the efficiency and decrease the cost of healthcare around the country. While technological adoption has undoubtedly improved the quality of healthcare, it also has brought new security and privacy challenges to the industry that healthcare IT manufacturers are not necessarily fully prepared to address.
This dissertation explores some of these challenges in detail and proposes solutions that will make medical devices more secure and medical data more private. Compared to other industries, the medical space has some unique challenges that place significant constraints on possible solutions. For example, medical devices must operate reliably even in the face of attack. Similarly, due to the need to access patient records in an emergency, strict enforcement of access controls cannot be used to prevent unauthorized access to patient data. Throughout this work we will explore particular problems in depth and introduce novel technologies to address them.
Each chapter in this dissertation explores some aspect of security or privacy in the medical space. We present tools to automatically audit accesses in electronic medical record systems in order to proactively detect privacy violations; to automatically fingerprint network-facing protocols in order to non-invasively determine whether particular devices are vulnerable to known attacks; and to authenticate healthcare providers to medical devices without a need for a password, in a way that protects against all known attacks on radio-based authentication technologies. We also present an extension to the widely used beacon protocol that adds security in the face of active attackers, and we demonstrate an overhead-free solution that protects embedded medical devices against previously unpreventable attacks that evade existing control-flow integrity enforcement techniques by leveraging insecure built-in features to maliciously exploit configuration vulnerabilities in devices.
Speaker Biography: Paul D. Martin developed an interest in technology when he received his first computer at the age of ten. Since then, he has spent much of his time exploring this field. Initially a hobby, computer science quickly became a passion and central part of his life.
Paul received his B.S. and M.S.E. degrees in Computer Science from Johns Hopkins University in 2011 and 2013, respectively. He enrolled in the Computer Science Ph.D. program at Johns Hopkins University in 2011. He was inducted into the Upsilon Pi Epsilon International Computer Science Honor Society in 2013. His research interests include embedded systems security, operating system security, vulnerability analysis, reverse engineering, network protocol analysis, anomaly detection, and big-data security analytics.
February 16, 2016
Data traffic has increased sharply over the past decade and is expected to grow further as the Internet becomes ever more popular. Yet data network capacity is not expanding fast enough to handle this exponential growth, leading service providers to change their mobile data plans in an effort to reduce congestion. Inspired by these ongoing changes and building on work from the 1990s, smart data pricing (SDP) aims to rethink data pricing for tomorrow’s networks. In this talk, I will focus first on the temporal and then on the content dimensions of SDP. Time-dependent pricing (TDP) proposes to lower short-lived peaks in network congestion by incentivizing users to shift their data usage to less congested times. While TDP has been used in industries such as smart grids, TDP for mobile data presents unique challenges, e.g., it is difficult to predict how users will react to the prices on different days. Thus, we developed algorithms that continually infer users’ changing responses to the offered prices, without collecting private data usage information. We implemented these algorithms in a prototype system, which we used to conduct the first field trial of TDP for mobile data. We showed that our TDP algorithms led to significantly less temporal fluctuation in demand, benefiting the service provider and lowering users’ data prices overall. Sponsored data, an emerging form of data pricing offered by AT&T, allows content providers to subsidize their users’ data traffic; the resulting revenue can be used to expand existing data networks. We consider the impact of sponsored data on different content providers and users, showing that cost-aware users and cost-unaware content providers reap disproportionate benefits. Simulations across representative users and content providers verify that sponsored data may help to bridge the digital divide between different types of users, yet can exacerbate competition between content providers.
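As a rough illustration of the time-dependent pricing idea described above, the sketch below simulates a provider that offers hourly discounts, observes how aggregate usage shifts, and updates a per-hour estimate of users' price sensitivity. The baseline demand, the exponential-moving-average estimator, and the discount rule are illustrative assumptions, not the algorithms used in the field trial.

```python
import random

# Toy sketch of time-dependent pricing (illustrative only: the baseline demand,
# the exponential-moving-average estimator, and the discount rule below are
# assumptions, not the algorithms used in the field trial).

HOURS = 24
baseline = [5 + 10 * (8 <= h <= 22) for h in range(HOURS)]  # GB, higher at peak hours
sensitivity = [0.5] * HOURS   # provider's estimate: extra GB per 10% discount
ALPHA = 0.3                   # learning rate for the estimate
CAPACITY = 12.0               # GB the network handles comfortably per hour

def observed_usage(hour, discount, true_sens=0.8):
    """Simulated (and, to the provider, unknown) user response to a discount."""
    return baseline[hour] + true_sens * discount * 10 + random.gauss(0, 0.5)

def choose_discounts():
    """Offer larger discounts in hours predicted to be under capacity."""
    return [max(0.0, min(0.5, (CAPACITY - baseline[h]) / 20.0)) for h in range(HOURS)]

for day in range(7):
    discounts = choose_discounts()
    for h in range(HOURS):
        usage = observed_usage(h, discounts[h])
        if discounts[h] > 0:
            # Infer sensitivity from aggregate usage only, without collecting
            # any individual user's private usage history.
            shift = (usage - baseline[h]) / (discounts[h] * 10)
            sensitivity[h] = (1 - ALPHA) * sensitivity[h] + ALPHA * shift

print("estimated sensitivity by hour:", [round(s, 2) for s in sensitivity])
```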
Speaker Biography: Carlee Joe-Wong is a Ph.D. candidate and Jacobus fellow at Princeton University’s Program in Applied and Computational Mathematics. She is interested in mathematical aspects of computer and information networks, including work on smart data pricing and fair resource allocation. Carlee received her A.B. in mathematics in 2011 and her M.A. in applied mathematics in 2013, both from Princeton University. In 2013–2014, she was the Director of Advanced Research at DataMi, a startup she co-founded from her data pricing research. Carlee received the INFORMS ISS Design Science Award in 2014 and the Best Paper Award at IEEE INFOCOM 2012.
February 18, 2016
The U.S. Department of Health and Human Services reports that the health records of up to 86% of the U.S. population have been hacked. The Ashley Madison breach revealed the private information of 37 million individuals and led to suicides and shattered families. The Apple iCloud breach led to the public release of nude photos of several celebrities. Data breaches like these abound.
In this talk, I will first describe my research toward understanding the security of existing data breach prevention systems. To thwart data breaches, property-preserving encryption has been adopted in many encrypted database systems such as CryptDB, Microsoft Cipherbase, Google Encrypted BigQuery, SAP SEEED, and the soon-to-be-shipped Microsoft SQL Always Encrypted system. To simultaneously attain practicality and functionality, property-preserving encryption schemes permit the leakage of certain information such as the relative order of encrypted messages. I will explain the practical implications of permitting such leakage, and show in real-world contexts that property-preserving encryption often does not offer strong enough security.
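To see why order leakage alone can be damaging, consider the toy sketch below: an attacker who holds order-preserving ciphertexts and an auxiliary estimate of the plaintext distribution can line the two up by rank. The "encryption" function, column, and data are illustrative assumptions, and the attack is a much-simplified caricature of this style of inference attack.

```python
import random

# Toy illustration of leakage from order-preserving encryption (OPE). The
# "encryption" below is a stand-in that merely preserves plaintext order;
# real OPE schemes are far more involved but expose the same order leakage.

def fake_ope_encrypt(values, key=12345):
    """Order-preserving stand-in: a strictly increasing keyed mapping."""
    offset = random.Random(key).randint(1, 1000)
    return [v * 7919 + offset for v in values]

# Secret column (say, ages in an encrypted medical database).
secret_ages = [23, 67, 31, 45, 45, 52, 29, 61]
ciphertexts = fake_ope_encrypt(secret_ages)

# Attacker's auxiliary knowledge: an approximate age distribution drawn from
# public statistics (hypothetical values, for illustration).
auxiliary = sorted([25, 30, 33, 44, 46, 50, 60, 65])

# Rank-matching attack: sort the ciphertexts and align them with the auxiliary
# values by rank, guessing each hidden plaintext from its position.
order = sorted(range(len(ciphertexts)), key=lambda i: ciphertexts[i])
guesses = [None] * len(ciphertexts)
for rank, idx in enumerate(order):
    guesses[idx] = auxiliary[rank]

for true_value, guess in zip(secret_ages, guesses):
    print(f"true={true_value:3d}  guessed={guess:3d}")
```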
Next, I will describe an application-driven approach to developing practical cryptography to secure sensitive data. The approach involves collaborating with application domain experts to formulate the requirements; investigating whether a practical solution meeting the requirements is possible; and, if not, exploring the reasons behind it to relax the requirements so as to find a useful solution for the application. I will describe how I developed a cryptographic model called Controlled Functional Encryption, and how we can adopt it to address the privacy concerns in emerging applications such as personalized medicine.
Speaker Biography: Muhammad Naveed is a PhD candidate at UIUC studying applied cryptography and systems security. In applied cryptography, he develops practical-yet-provably-secure cryptographic systems for real applications. In systems security, he explores the fundamental security flaws in popular systems and builds defense systems. His work has had a significant impact on Android security and has helped companies such as Google, Samsung, Facebook, and Amazon secure their products and services, improving security for millions of Android users. He is the recipient of the Google PhD Fellowship in Security, the Sohaib and Sara Abbasi Fellowship, the CS@Illinois C.W. Gear Outstanding Graduate Student Award, and the best paper award at the NYU CSAW Security Research Competition. He was also a finalist in the NYU CSAW Cybersecurity Policy Competition.
February 23, 2016
Many of the services we use every day now run in data centers or on mobile devices. However, building systems on these modern platforms that provide reliable services is difficult. This is evidenced by the fact that, despite the large amount of work put into system quality assurance, all modern systems continue to experience million-dollar outages and frustrating anomalies like battery drain.
In this talk, I will describe my research efforts to better understand and proactively tackle the reliability challenges in modern systems. First I will discuss work that looks into failures in cloud services. Instead of focusing on conventional root-cause analysis, this work takes a unique angle: it examines the fault-tolerance mechanisms in the cloud and analyzes why they did not prevent the service failures. I will summarize several challenges (and opportunities) for reducing these failures in the future. One such challenge is system configuration: existing fault-tolerance techniques often cannot tolerate (or, worse, are nullified by) configuration errors, and misconfiguration has become a major source of cloud outages. I will then present work that enables cloud practitioners to proactively prevent configuration errors using a systematic validation framework. The framework consists of a declarative language for developers and operators to express configuration specifications, a service that continuously checks whether a configuration obeys its specification, and a tool that automatically infers basic specifications. I will also touch on the challenge of app misbehavior in the mobile ecosystem and its proactive prevention at runtime by making the mobile OS defensive.
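As a toy analogue of the validation idea, the sketch below declares per-parameter constraints and checks a candidate configuration against them before deployment. The spec format, parameter names, and checker are illustrative assumptions, not the framework's actual declarative language or specification-inference tool.

```python
# Toy configuration-validation sketch: a spec declares constraints on each
# parameter, and a checker reports violations before the configuration is
# deployed. (Field names and the spec format are illustrative only.)

spec = {
    "replication_factor": {"type": int, "min": 2, "max": 7},
    "heartbeat_ms":       {"type": int, "min": 100, "max": 10_000},
    "data_dir":           {"type": str, "nonempty": True},
}

def validate(config, spec):
    violations = []
    for key, rules in spec.items():
        if key not in config:
            violations.append(f"{key}: missing")
            continue
        value = config[key]
        if not isinstance(value, rules["type"]):
            violations.append(f"{key}: expected {rules['type'].__name__}")
            continue
        if "min" in rules and value < rules["min"]:
            violations.append(f"{key}: {value} below minimum {rules['min']}")
        if "max" in rules and value > rules["max"]:
            violations.append(f"{key}: {value} above maximum {rules['max']}")
        if rules.get("nonempty") and not value:
            violations.append(f"{key}: must not be empty")
    return violations

bad_config = {"replication_factor": 1, "heartbeat_ms": 50_000, "data_dir": ""}
for v in validate(bad_config, spec):
    print("violation:", v)
```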
Speaker Biography: Peng (Ryan) Huang is a Ph.D. candidate at UC San Diego advised by Professor Yuanyuan Zhou. His research interests intersect systems, software engineering and programming languages. He is particularly interested in understanding rising problems in real-world systems and reflecting that understanding in new techniques to improve system dependability. His work has been applied in industry including Microsoft and Teradata, and deployed to many real users. He is currently a part-time contractor with Facebook doing research on configuration management. Peng received his MS from UC San Diego in 2013, and his BS in computer science and BA in economics from Peking University in 2010.
February 25, 2016
Cloud users have to fine-tune and manually manage every distributed cloud system they deploy, ranging from distributed storage systems to distributed computation systems. Today there are few ways of allowing users to specify their requirements and having the system adapt automatically to meet them, regardless of workload and environmental behavior. In this talk I will describe our work on incorporating user requirements, specified as SLAs or SLOs (Service Level Agreements/Objectives), into cloud systems. In a storage system like a NoSQL/key-value store, these SLAs/SLOs might specify conflicting latency and consistency requirements. In a computation system like Hadoop, they might entail job priorities and deadlines. Our adaptive query routers and schedulers change system behavior to meet these SLAs/SLOs at all times. The talk will focus on our work on the probabilistic latency/consistency trade-off (and SLAs/SLOs) for distributed key-value stores. This work also enables us to specify a generalized probabilistic variant of the classical CAP theorem, and to measure how close our implementation is to the achievable envelope. Our implementations of these adaptive systems have been incorporated into Cassandra, Riak, and Hadoop. Besides systems design, predictability in cloud systems can also be achieved via verification and formal model checking.
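As a simplified illustration of the latency side of such a probabilistic trade-off, the Monte Carlo sketch below estimates the probability that a quorum read meets a latency SLO as the read quorum size grows. The exponential latency model and all parameters are illustrative assumptions, not the actual models or adaptive policies from this work.

```python
import random

# Monte Carlo sketch of a probabilistic latency SLO for a quorum-based
# key-value store. Assumptions (illustrative only): N replicas, exponential
# per-replica response times, and read latency equal to the time at which the
# R-th fastest replica responds.

N = 5               # replicas
MEAN_MS = 10.0      # mean per-replica response time
SLO_MS = 25.0       # latency objective
TRIALS = 50_000

def read_latency(read_quorum):
    responses = sorted(random.expovariate(1.0 / MEAN_MS) for _ in range(N))
    return responses[read_quorum - 1]   # wait for the R-th fastest replica

for R in range(1, N + 1):
    met = sum(read_latency(R) <= SLO_MS for _ in range(TRIALS)) / TRIALS
    print(f"read quorum R={R}: P(latency <= {SLO_MS} ms) ~= {met:.3f}")

# Larger R strengthens consistency but lowers the probability of meeting the
# latency SLO; an adaptive query router can tune R to honor a latency SLA
# while maximizing the probability of consistent reads.
```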
Speaker Biography: Muntasir Raihan Rahman is a PhD candidate in Computer Science at the University of Illinois at Urbana-Champaign. He is a member of the Distributed Protocols Research Group (DPRG) led by his advisor Dr. Indranil Gupta. His research interests include distributed systems, big data systems, and cloud systems. He has won the 2014-2015 VMware Graduate Fellowship, Best Paper award at IEEE ICAC 2015, and the 2015-16 UIUC CS Excellence Fellowship. Muntasir has completed research internships at VMware, Microsoft Research, HP Labs, and Xerox Labs. He received a B.Sc. degree in computer science and engineering from Bangladesh University of Engineering and Technology in 2007, and an M. Math degree in computer science from the University of Waterloo in 2010.
March 3, 2016
Write-optimized dictionaries (WODs) are a promising building block for storage systems because they have the potential to strictly dominate the performance of B-trees and other common on-disk indexing structures. In particular, WODs can dramatically improve performance of both small, random writes and large, sequential scans, without harming other operations.
This talk will introduce the basics of write-optimization and the Bε-tree, and will then describe BetrFS, the first in-kernel write-optimized file system. BetrFS contributes a combination of kernel-level techniques to leverage write-optimization in the VFS layer and data-structure-level enhancements to meet the requirements of a POSIX file system. Compared to commodity file systems, such as ext4 and XFS, BetrFS can improve performance by up to orders of magnitude, and generally matches other file systems in the worst cases.
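The sketch below illustrates the core write-optimization idea in a much-simplified form: small writes are buffered as messages and flushed to sorted storage in large batches, amortizing the cost of random writes. It is not BetrFS's Bε-tree implementation; the single buffer and flush threshold are illustrative assumptions standing in for a full tree of buffered internal nodes.

```python
import bisect

# Minimal sketch of write-optimization via message buffering (illustrative
# only; a real B-epsilon tree buffers messages at every internal node).

class BufferedDict:
    def __init__(self, buffer_limit=4):
        self.buffer = []            # pending (key, value) messages
        self.keys = []              # sorted "on-disk" keys
        self.values = {}
        self.buffer_limit = buffer_limit
        self.flushes = 0

    def put(self, key, value):
        self.buffer.append((key, value))        # cheap append, no random I/O
        if len(self.buffer) >= self.buffer_limit:
            self._flush()

    def _flush(self):
        self.flushes += 1
        for key, value in sorted(self.buffer):  # one large, sorted batch
            if key not in self.values:
                bisect.insort(self.keys, key)
            self.values[key] = value
        self.buffer.clear()

    def get(self, key):
        # A lookup must also consult pending messages, newest first.
        for k, v in reversed(self.buffer):
            if k == key:
                return v
        return self.values.get(key)

d = BufferedDict()
for i in [9, 2, 7, 4, 1, 8, 3]:
    d.put(i, f"val{i}")
print(d.get(4), d.get(3), "flushes:", d.flushes)
```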
Speaker Biography: Don Porter is an Assistant Professor and Kieburtz Young Scholar of Computer Science at Stony Brook University. Porter’s research interests broadly involve improving efficiency and security of computer systems. Porter earned a Ph.D. and M.S. from The University of Texas at Austin, and a B.A. from Hendrix College. He has received awards including the NSF CAREER Award and the Bert Kay Outstanding Dissertation Award from UT Austin.
March 8, 2016
Today’s Internet has serious security problems. Of particular concern are distributed denial-of-service (DDoS) attacks, which coordinate large numbers of compromised machines to make a service unavailable to other users. DDoS attacks are a constant security threat with over 20,000 DDoS attacks occurring globally every day. They cause tremendous damage to businesses and have catastrophic consequences for national security. In particular, over the past few years, adversaries have started to turn their attention from traditional targets (e.g., end-point servers) to non-traditional ones (e.g., ISP backbone links) to cause much larger attack impact.
In this presentation, I will review recent results regarding non-traditional DDoS attacks and potential defense mechanisms. First, I will review a non-traditional type of link-flooding attack, called the Crossfire attack, which targets and floods a set of network links in core Internet infrastructure, such as backbone links in large ISP networks. Using Internet-scale measurements and simulations, I will show that the attack can cause huge connectivity losses to cities, states, or even countries for hours or even days. Second, I will introduce the notion of the routing bottlenecks, or small sets of network links that carry the vast majority of Internet routes, and show that it is a fundamental property of Internet design; i.e., it is a consequence of route-cost minimizations. I will also illustrate the pervasiveness of routing bottlenecks around the world, and measure their susceptibility to the Crossfire attack. Finally, I will explore the possibility of building a practical defense mechanism that effectively removes the advantages of DDoS adversaries and deters them from launching attacks. The proposed defense mechanism utilizes a software-defined networking (SDN) architecture to protect large ISP networks from non-traditional DDoS attacks.
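The notion of a routing bottleneck can be illustrated on a toy topology: count how many shortest-path routes traverse each link, and the links carrying a disproportionate share of routes are the natural targets of a Crossfire-style flood. The small hand-made graph below is an illustrative assumption; the actual work relies on Internet-scale route measurements.

```python
from collections import deque

# Toy sketch of "routing bottlenecks": count how many shortest-path routes
# between node pairs traverse each link of a small hand-made topology.

graph = {                       # undirected adjacency list
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D", "F", "G"], "F": ["E"], "G": ["E"],
}

def shortest_path(src, dst):
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path))
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None

link_count = {}
nodes = sorted(graph)
pairs = [(s, t) for s in nodes for t in nodes if s < t]
for s, t in pairs:
    path = shortest_path(s, t)
    for u, v in zip(path, path[1:]):
        link = tuple(sorted((u, v)))
        link_count[link] = link_count.get(link, 0) + 1

# Links carrying the largest fraction of routes are the bottlenecks.
for link, count in sorted(link_count.items(), key=lambda kv: -kv[1]):
    print(f"link {link}: carries {count}/{len(pairs)} routes")
```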
Speaker Biography: Min Suk Kang is a Ph.D. candidate in Electrical and Computer Engineering (ECE) at Carnegie Mellon University. He is advised by Virgil D. Gligor in CyLab. Before he joined Carnegie Mellon, he worked as a researcher as part of Korean military duty at the Department of Information Technology at KAIST Institute. He received B.S. and M.S. degrees in Electrical Engineering and Computer Science (EECS) at Korea Advanced Institute of Science and Technology (KAIST) in 2006 and 2008, respectively. His research interests include network and distributed system security, wireless network security, and Internet user privacy.
Student
March 9, 2016
In this thesis, we propose a cooperative robot control methodology that provides real-time ultrasound-based guidance in the direct manipulation paradigm for image-guided radiation therapy (IGRT), in which a clinician and robot share control of a 3D ultrasound (US) probe. IGRT involves two main steps: (1) planning/simulation and (2) treatment delivery. The proposed US probe co-manipulation methodology has two goals. The first goal is to provide guidance to therapists for patient setup on the treatment delivery days, based on the robot position, contact force, and reference US image recorded during simulation. The second goal is real-time target monitoring during fractionated radiotherapy of soft-tissue targets, especially in the upper abdomen. We provide the guidance in the form of virtual fixtures, which are software-generated force and position signals applied to human operators that guide their physical interactions while letting them retain direct control of the task. The co-manipulation technique is used to locate soft-tissue targets with US imaging for radiotherapy, enabling therapists with minimal US experience to find a US image that has previously been identified by an expert sonographer on the planning day. Moreover, to compensate for soft-tissue deformations created by the probe, we propose a novel clinical workflow in which a robot holds the US probe on the patient during acquisition of the planning computerized tomography (CT) image, thereby ensuring that planning is performed on the deformed tissue. Our results show that the proposed cooperative control technique with virtual fixtures and US image feedback can significantly reduce the time it takes to find the reference US images, can provide more accurate US probe placement compared to finding the images freehand, and can increase the accuracy of the patient setup and thus of the radiation therapy.
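As a generic illustration of a guidance virtual fixture (a standard spring-damper form, not necessarily the exact control law used in this thesis), the guidance force can be written as follows.

```latex
% Generic spring-damper guidance virtual fixture (illustrative form only):
% a force pulls the co-manipulated probe pose x toward the reference pose
% x_ref recorded during simulation, while damping the probe velocity.
\[
  \mathbf{F}_{\mathrm{vf}}
    = -K_p \left( \mathbf{x} - \mathbf{x}_{\mathrm{ref}} \right)
      - K_d \, \dot{\mathbf{x}} ,
\]
% where K_p and K_d are stiffness and damping gains. The operator's hand
% force is superimposed on F_vf, so the clinician is guided toward the
% reference ultrasound view yet retains direct control of the task.
```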
Speaker Biography: H. Tutkun Şen received his B.S. degree in Mechanical Engineering with a double major in Electrical and Electronics Engineering from Middle East Technical University, Turkey, in 2009 and 2010, respectively. In addition, he obtained a Master of Science in Computer Science from Johns Hopkins University in 2015. He has been a Michael J. Zinner Fellow (Brown Challenge Fellow in the Whiting School of Engineering) since 2010. He has been pursuing a Ph.D. in the Department of Computer Science at Johns Hopkins University, advised by Dr. Peter Kazanzides and Dr. Russ Taylor, since 2009. After completion of his PhD, Tutkun will begin work as a Control Systems Engineer at Verb Surgical Inc. in Mountain View, CA, where he will be responsible for performing system analysis and designing controllers for a new medical robotic system.
March 10, 2016
Computer networks run many network services (e.g., routing, monitoring, load balancing) to support applications from search engines to big data analytics. These network services have to continuously update network configurations to alleviate congestion, to detect and block cyber-attacks, to perform planned maintenance, etc. Network updates are painful because network administrators unfortunately have to balance the tradeoff between the disruption caused by the problem (e.g., congestion and cyber-attacks), and the disruption introduced in fixing the problem. In this talk, I will present my research on designing and building new network control systems to efficiently handle network updates for multiple network services. First, I will present CoVisor, a new network hypervisor that can host multiple network services and efficiently compile their configuration changes to a single update. Then, I will describe Dionysus, a new network update scheduler that can quickly and consistently apply the network update to a distributed collection of switches. I have built prototype systems for CoVisor and Dionysus, and part of CoVisor has been integrated into ONOS, a popular open-source control platform for software-defined networks developed by ON.LAB.
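As a toy illustration of the composition problem a network hypervisor must solve, the sketch below combines a monitoring policy and a routing policy "in parallel" by intersecting their rule matches and concatenating their actions. The rule format and priority arithmetic are illustrative assumptions and are simpler than CoVisor's actual compilation algorithms.

```python
# Toy sketch of composing two controller policies "in parallel": a monitoring
# policy and a routing policy each emit prioritized match/action rules, and
# one combined rule is installed per compatible pair of matches.
# (Illustrative only; real hypervisor compilation is more involved.)

def intersect(match_a, match_b):
    """Combine two matches; return None if they constrain a field differently."""
    combined = dict(match_a)
    for field, value in match_b.items():
        if field in combined and combined[field] != value:
            return None
        combined[field] = value
    return combined

def compose_parallel(policy_a, policy_b):
    composed = []
    for prio_a, match_a, acts_a in policy_a:
        for prio_b, match_b, acts_b in policy_b:
            match = intersect(match_a, match_b)
            if match is not None:
                composed.append((prio_a + prio_b, match, acts_a + acts_b))
    return sorted(composed, key=lambda rule: -rule[0])

monitoring = [
    (2, {"src_ip": "10.0.0.1"}, ["count"]),
    (1, {},                     []),
]
routing = [
    (2, {"dst_ip": "10.0.0.2"}, ["fwd:port2"]),
    (1, {},                     ["fwd:port1"]),
]

for rule in compose_parallel(monitoring, routing):
    print(rule)
```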
Speaker Biography: Xin Jin is a PhD candidate in the Department of Computer Science at Princeton University, advised by Professor Jennifer Rexford. He has a broad research interest in networked systems, cloud computing and computer networking. His PhD study focuses on Software-Defined Networking (SDN). He has published several research papers in this area in premier venues, including SIGCOMM, NSDI and CoNEXT. He has interned and collaborated with leading research institutes and cutting-edge startups like Microsoft Research and Rockley Photonics. He received his BS degree in computer science and BA degree in economics from Peking University in 2011, and his MA degree in computer science from Princeton University in 2013. He has received many awards and honors, including the Siebel Scholar (2016), a Princeton Charlotte Elizabeth Procter Fellowship (2015), and a Princeton Graduate Fellowship (2011).
March 22, 2016
Knowledge graphs such as NELL, Freebase, and YAGO provide a means to address the knowledge bottleneck in artificial intelligence domains such as natural language processing, computer vision, and robotics. State-of-the-art knowledge graphs have accumulated large amounts of beliefs about real-world entities using machine reading methods. Current machine readers have been successful at populating such knowledge graphs by means of pattern detection — a shallow way of machine reading which leverages the redundancy of large corpora to capture language patterns. However, machine readers still lack the ability to fully understand language. In the pursuit of the much harder goal of language comprehension, knowledge graphs present an opportunity for a virtuous circle: the accumulated knowledge can be used to improve machine readers; in turn, advanced reading methods can be used to populate knowledge graphs with beliefs expressed using complex and potentially ambiguous language. In this talk, I will elaborate on this virtuous circle, starting with building knowledge graphs, followed by results on using them for machine reading.
Speaker Biography: Ndapa Nakashole is a post-doctoral fellow at Carnegie Mellon University. Her research interests include machine reading, natural language processing, machine learning, and data mining. She works with Professor Tom Mitchell on using machine learning to build computer systems that intelligently process and understand human language. She received her PhD from Saarland University, Germany, and her MSc and BSc from the University of Cape Town, South Africa.
March 23, 2016
The big data revolution has profoundly changed, among many other things, how we perceive business, research, and applications. However, in order to fully realize the potential of big data, certain computational and statistical challenges need to be addressed. In this talk, I will present my research in facilitating the deployment of machine learning methodologies and algorithms in big data applications. I will first present robust methods that are capable of accounting for uncertain or abnormal observations. Then I will present a generic regularization scheme that automatically extracts compact and informative representations from heterogeneous, multi-modal, multi-array, time-series, and structured data. Next, I will discuss two gradient algorithms that are computationally very efficient for our regularization scheme, and I will mention their theoretical convergence properties and computational requirements. Finally, I will present a distributed machine learning framework that allows us to process extremely large-scale datasets and models. I will conclude my talk by sharing some future directions that I am pursuing and plan to pursue.
Speaker Biography: Yaoliang Yu is currently a research scientist affiliated with the Center for Machine Learning and Health and the Machine Learning Department at Carnegie Mellon University. He obtained his PhD (under Dale Schuurmans and Csaba Szepesvári) in computing science from the University of Alberta (Canada, 2013), and he received the PhD Dissertation Award from the Canadian Artificial Intelligence Association in 2015.
March 29, 2016
The bulk of the literature on missing data employs procedures that are data-centric as opposed to process-centric and relies on a set of strong assumptions that are primarily untestable (e.g., Missing At Random, Rubin 1976). As a result, this area of research is wanting in tools to encode assumptions about the underlying data-generating process, methods to test these assumptions, and procedures to decide whether queries of interest are estimable and, if so, to compute their estimands.
We address these deficiencies by using a graphical representation called the “missingness graph,” which portrays the causal mechanisms responsible for missingness. Using this representation, we define the notion of recoverability, i.e., deciding whether there exists a consistent estimator for a given query. We identify graphical conditions for recovering joint and conditional distributions and present algorithms for detecting these conditions in the missingness graph. Our results apply to missing data problems in all three categories (MCAR, MAR, and MNAR), the last of which is relatively unexplored. We further address the question of testability, i.e., whether an assumed model can be subjected to statistical tests, taking into account the missingness in the data.
Furthermore, viewing the missing data problem from a causal perspective has ushered in several surprises. These include recoverability when variables are causes of their own missingness, testability of the MAR assumption, alternatives to iterative procedures such as the EM algorithm, and the indispensability of causal assumptions for large sets of missing data problems.
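One standard recoverability result of this kind, stated here for illustration (see, e.g., Mohan, Pearl, and Tian, 2013), is that the graphical version of the MAR condition immediately yields an estimable expression for the full joint distribution:

```latex
% Graphical MAR condition and the resulting recoverability formula
% (illustrative statement of a standard result). V_o denotes the fully
% observed variables, V_m the partially observed variables, and R the
% missingness mechanism, with R = 0 marking records in which V_m is observed.
\[
  V_m \perp\!\!\!\perp R \mid V_o
  \quad\Longrightarrow\quad
  P(v_m, v_o) = P(v_m \mid v_o, R = 0)\, P(v_o).
\]
% Both factors on the right-hand side are estimable from the incomplete data,
% so the full joint distribution is recoverable under this condition.
```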
April 5, 2016
The increasing emergence of robotic technologies that serve as automated tools, assistants, and collaborators promises tremendous benefits in everyday settings from the home to healthcare, manufacturing, and educational facilities. While these technologies promise interactions that can be highly complex and beneficial, their successful integration into the human environment ultimately requires these interactions to also be natural and intuitive. To achieve complex but intuitive interactions, designers and developers must simultaneously understand and address human and computational challenges. In this talk, I will present my group’s work on building human-centered guidelines, methods, and tools to address these challenges in order to facilitate the design of robotic technologies that are more effective, intuitive, acceptable, and even enjoyable through successful integration into the human environment. The first part of the talk will review a series of projects that will demonstrate how the marrying of knowledge about people and computational methods through a systematic design process can enable effective user interactions with social, assistive, and telepresence robots. The second part of the talk will cover ongoing work that provides designers and developers with tools to apply these guidelines to the development of real-world robotic technologies and that utilizes partnerships with domain experts and end users to ensure the successful integration of these technologies into everyday settings through applications in healthcare, manufacturing, mission-critical environments, and the home. The talk will conclude with a discussion of high-level design guidelines that can be drawn from this body of work and a roadmap for future research.
Speaker Biography: Bilge Mutlu is an associate professor of computer science, psychology, and industrial engineering at the University of Wisconsin–Madison. He received his Ph.D. degree from Carnegie Mellon University’s Human-Computer Interaction Institute in 2009. His background combines training in interaction design, human-computer interaction, and robotics with industry experience in product design and development. Dr. Mutlu is a former Fulbright Scholar and the recipient of the NSF CAREER award as well as several best paper awards and nominations, including HRI 2008, HRI 2009, HRI 2011, UbiComp 2013, IVA 2013, RSS 2013, HRI 2014, CHI 2015, and ASHA 2015. His research has been covered by national and international press including the NewScientist, MIT Technology Review, Discovery News, Science Nation, and Voice of America. He has served in the Steering Committee of the HRI Conference and the Editorial Board of IEEE Transactions on Affective Computing, co-chairing the Program Committees for ROMAN 2016, HRI 2015, ROMAN 2015, and ICSR 2011, the Program Sub-committees on Design for CHI 2013 and CHI 2014, and the organizing committee for HRI 2017. More information on Dr. Mutlu and his research can be found at http://bilgemutlu.com and http://hci.cs.wisc.edu.
April 21, 2016
https://engineering.jhu.edu/ece/events/ece-seminar-urbashi-mitra/?instance_id=168#.VxeFXXpy12E
Speaker Biography: https://engineering.jhu.edu/ece/events/ece-seminar-urbashi-mitra/?instance_id=168#.VxeFXXpy12E
ACM Annual Lecture in Memory of Nathan Krasnopoler
May 3, 2016
“The basic tenets of Open Source can be confusing, especially now that it has become a popular ‘buzzword’ in IT. During this chat, Jim Jagielski will provide a practical guide to the concepts and specifics of Open Source, the differences between Open Source and ‘Free Software’, a breakdown of Open Source licensing and governance, and finally, the ‘lessons learned’ in Open Source that companies are leveraging in the Inner Source movement. All of this from a developer’s point of view.”
Speaker Biography: Jim is a well-known and acknowledged expert and visionary in Open Source, an accomplished coder, and a frequent, engaging presenter on all things Open, Web, and Cloud related. As a developer, he has made substantial code contributions to just about every core technology behind the Internet and Web; in 2012 he was awarded the O’Reilly Open Source Award, and in 2015 he received the Innovation Luminary Award from the EU. He is likely best known as one of the developers and co-founders of the Apache Software Foundation, where he has previously served as both Chairman and President and where he has been on the Board of Directors since day one. He serves as President of the Outercurve Foundation, was also a director of the Open Source Initiative (OSI), and has served on numerous boards and advisory councils. He works at Capital One as a Sr. Director in the Tech Fellows program. He credits his wife Eileen with keeping him sane.
Student Seminar
May 4, 2016
Computational modeling of the human brain has long been an important goal of scientific research. The visual system is of particular interest because it is one of the primary modalities by which we understand the world. One integral aspect of vision is object representation, which plays an important role in machine perception as well. In the human brain, object recognition is part of the functionality of the ventral pathway. In this work, we have developed computational and statistical techniques to characterize object representation along this pathway. Understanding how the brain represents objects is essential to developing models of computer vision that are truer to how humans perceive the world.
In the ventral pathway, the lateral occipital complex (LOC) is known to respond to images of objects. Neural recording studies in monkeys have shown that the homologue of LOC represents objects as configurations of medial axis and surface components. In this work, we designed and implemented novel experimental paradigms and developed algorithms to test whether the human LOC represents medial axis structure as in the monkey models. We developed a data-driven iterative sparse regression model, guided by neuroscience principles, to estimate the response pattern of LOC voxels. For each voxel, we modeled the response pattern as a linear combination of partial medial axis configurations that appeared as fragments across multiple stimuli. We used this model to demonstrate evidence of structural object coding in the LOC. Finally, we developed an algorithm to reconstruct images of stimuli being viewed by subjects based on their brain images. As a whole, we apply computational techniques to present the first significant evidence that the LOC carries information about the medial axis structure of objects, and we further characterize its response properties.
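In generic textbook form, a sparse regression of this kind fits each voxel's responses as a sparse linear combination of candidate fragment regressors. The objective below is an illustrative L1-regularized least-squares form, not the exact iterative, neuroscience-guided model developed in the thesis.

```latex
% Illustrative L1-regularized (sparse) regression objective: y is a voxel's
% response vector across stimuli, and the columns of X encode candidate
% medial-axis fragment configurations present in each stimulus.
\[
  \hat{\boldsymbol{\beta}}
    = \arg\min_{\boldsymbol{\beta}}
      \tfrac{1}{2}\,\lVert \mathbf{y} - X\boldsymbol{\beta} \rVert_2^2
      + \lambda \lVert \boldsymbol{\beta} \rVert_1 ,
\]
% where the nonzero entries of beta indicate the fragments that drive the
% voxel's response and lambda controls the sparsity of the fit.
```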
Speaker Biography: Haluk Tokgozoglu received a Bachelor of Engineering in Computer Science and Engineering from Bilkent University in 2009, and a Master of Science in Computer Science from Johns Hopkins University in 2012. He enrolled in the Computer Science Ph.D. program at Johns Hopkins University in 2010. His research focuses on machine learning, computer vision, and visual neuroscience.
Student Seminar
May 11, 2016
Healthcare reform, regulation, and adoption of technology such as wearables are substantially changing both the quality of care and how we receive it. For example, health and fitness devices contain sensors that collect data, wireless interfaces to transmit data, and cloud infrastructures to aggregate, analyze, and share data. FDA-defined class III devices such as pacemakers will soon share these capabilities. While technological growth in health care is clearly beneficial, it also brings new security and privacy challenges for systems, users, and regulators.
We group these concepts under health and medical systems to connect and emphasize their importance to healthcare. Challenges include how to keep user health data private, how to limit and protect access to data, and how to securely store and transmit data while maintaining interoperability with other systems. The most critical challenge unique to healthcare is how to balance security and privacy with safety and utility concerns. Specifically, a life-critical medical device must fail open (i.e., work regardless) in the event of an active threat or attack.
This dissertation examines some of these challenges and introduces new systems that not only improve security and privacy but also enhance workflow and usability. Usability is important in this context because a secure system that is difficult to use tends to be improperly used or circumvented. We present this concern and our solution in its respective chapter. Each chapter of this dissertation presents a unique challenge, or unanswered question, and a solution based on empirical analysis.
We present a survey of related work in embedded health and medical systems. The academic and regulatory communities greatly scrutinize the security and privacy of these devices because their primary function is to provide critical care. What we find is that securing embedded health and medical systems is hard, is often done incorrectly, and is analogous to securing non-embedded health and medical systems such as hospital servers, terminals, and BYOD devices. We perform an analysis of Apple iMessage, which implicates both BYOD in healthcare and the secure messaging protocols used by health and medical systems.
We analyze direct memory access (DMA) engines, special-purpose hardware that transfers data into and out of main memory, and show that we can chain together memory transfers to perform arbitrary computation. This result potentially affects all computing systems used for healthcare. We also examine HTML5 web workers, as they provide stealthy computation and covert communication. This finding is relevant to web applications such as electronic health record portals.
We design and implement two novel and secure health and medical systems. One is a wearable device that addresses the problem of authenticating a user (e.g., a doctor) to a terminal in a usable way. The other is a lightweight and low-cost wireless device we call Beacon+. This device extends the design of Apple’s iBeacon specification with unspoofable, temporal, and authenticated advertisements, which enable secure location-sensing applications that could improve numerous healthcare processes.
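The sketch below gives a minimal flavor of an authenticated, freshness-protected advertisement of the kind Beacon+ enables: the beacon MACs its identifier and a timestamp under a shared key, and a verifier rejects spoofed or stale packets. The payload layout, key handling, and freshness window are illustrative assumptions, not the actual Beacon+ design.

```python
import hmac, hashlib, struct, time

# Minimal sketch of an authenticated, freshness-protected beacon advertisement
# (illustrative only; payload layout, key distribution, and freshness window
# are assumptions, not the actual Beacon+ design).

SHARED_KEY = b"demo-key-not-for-production"
FRESHNESS_WINDOW_S = 5

def make_advertisement(beacon_id: int) -> bytes:
    timestamp = int(time.time())
    body = struct.pack(">IQ", beacon_id, timestamp)          # id + timestamp
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()[:8]
    return body + tag

def verify_advertisement(packet: bytes) -> bool:
    body, tag = packet[:12], packet[12:]
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        return False                                          # spoofed or corrupted
    beacon_id, timestamp = struct.unpack(">IQ", body)
    return abs(time.time() - timestamp) <= FRESHNESS_WINDOW_S  # replay/staleness check

adv = make_advertisement(beacon_id=42)
print("valid advertisement:", verify_advertisement(adv))
tampered = adv[:-1] + bytes([adv[-1] ^ 1])
print("tampered advertisement:", verify_advertisement(tampered))
```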
Speaker Biography: Michael Rushanan is a Ph.D. candidate in Computer Science at Johns Hopkins University. He is advised by Avi Rubin, and he is a member of the Health and Medical Security lab. His research interests include systems security, health information technology security, privacy, and applied cryptography. His hobbies include embedded system design and implementation (e.g., Arduino), mobile application development (i.e., Android), and programming.