Some people think that robots are merely villains in science fiction movies. But in reality, robots play an increasingly important role in society—repairing satellites, building cars, farming crops, and much more.
While some may fear the rise of the robots, Gopika Ajaykumar—a first-year PhD student in computer science affiliated with the Johns Hopkins Malone Center for Engineering in Healthcare—instead sees an opportunity for robots and humans to join forces.
Ajaykumar’s research investigates the growing field of human-robot interaction. She’s currently working on the development of assistive robots in the Intuitive Computing Laboratory, led by John C. Malone Assistant Professor of Computer Science Chien-Ming Huang.
Ajaykumar chose to attend Johns Hopkins because of the university’s strength in assistive robotics and its highly collaborative environment.
“I find robotics exciting because it brings together people and ideas from so many different fields. Through the university—and the Malone Center in particular—I have the opportunity to collaborate with people from various disciplines and tackle problems from different perspectives, all while pursuing my passion for applying robotics in ways that can help people,” she says.
Her current project focuses on a robot programming subset known as programming by demonstration, or PbD. Traditionally, programming a robot requires an expert with specialized coding skills and knowledge about the intricacies of robotics. But with PbD, even a non-expert can program a robot to complete a specific task by demonstrating the task themselves.
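The core idea of PbD can be illustrated with a short sketch. The example below is purely illustrative, not the Intuitive Computing Laboratory's actual system: the `DemoRecorder` class and its methods are hypothetical, and a real robot would stream poses from its joint encoders rather than from a hard-coded list.

```python
# Minimal sketch of programming by demonstration (PbD).
# All names here (DemoRecorder, record, replay) are illustrative
# assumptions, not part of any real robotics library.

class DemoRecorder:
    """Records end-effector waypoints while a user physically guides the arm."""

    def __init__(self):
        self.waypoints = []

    def record(self, pose):
        # In a real system, `pose` would stream from the robot's encoders
        # as the user moves the arm (kinesthetic teaching).
        self.waypoints.append(tuple(pose))

    def replay(self):
        # Return the learned trajectory; a real controller would interpolate
        # between waypoints and command the motors.
        return list(self.waypoints)


# A non-expert "programs" a pick-and-place motion simply by demonstrating it:
demo = DemoRecorder()
for pose in [(0.0, 0.0, 0.5), (0.3, 0.1, 0.2), (0.3, 0.1, 0.0)]:
    demo.record(pose)

trajectory = demo.replay()
```

The key point is that the user never writes control code: the demonstration itself becomes the program, which the robot can later replay or generalize.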
PbD could be a game-changer for human-robot interaction. Enabling everyday users to program robots opens exciting possibilities for how we use robots in daily life, and it is especially promising for health care delivery. For example, personal home robots could provide on-demand assistance to aging adults or people with special needs.
Ajaykumar is particularly interested in the potential of the first-person user viewpoint to enhance PbD. It’s a relatively unexplored area for robotics research, which traditionally uses cameras mounted on robots or in the surrounding environment rather than on human users.
“The question that I am trying to answer is, ‘Is having access to the first-person viewpoint from a user’s head-mounted camera useful for a robot when it’s trying to learn a task from a user demonstration?'” she says. “I believe that first-person user sensor data can greatly aid robot learning since this data can convey a user’s intention. For example, head motion is a good indicator of user intention; this information could allow a robot to understand the underlying concepts in a demonstration that would usually be hidden in the traditional third- or first-person robot camera view.”
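One simple way the head-motion intuition above could be operationalized is sketched below. This is an assumption-laden illustration, not the lab's pipeline: the function name, the yaw-only representation, and the velocity threshold are all hypothetical. It flags intervals where the wearer's head is nearly still, which often correspond to the user attending to a task-relevant object.

```python
# Illustrative sketch: detect "head nearly still" intervals in first-person
# head-orientation data, as a crude proxy for user attention/intent.
# The threshold and sample data are assumptions for demonstration only.

def still_intervals(yaw_samples, dt, threshold=0.2):
    """Return (start, end) index pairs where |yaw velocity| stays below threshold (rad/s)."""
    intervals, start = [], None
    for i in range(1, len(yaw_samples)):
        vel = abs(yaw_samples[i] - yaw_samples[i - 1]) / dt
        if vel < threshold:
            if start is None:
                start = i - 1  # open a new low-motion interval
        else:
            if start is not None:
                intervals.append((start, i - 1))  # close the interval
                start = None
    if start is not None:
        intervals.append((start, len(yaw_samples) - 1))
    return intervals


# Head yaw (radians) sampled at 10 Hz: a fast sweep, then a fixation.
yaw = [0.0, 0.2, 0.4, 0.6, 0.61, 0.612, 0.613, 0.614]
fixations = still_intervals(yaw, dt=0.1)
```

In a full system, such fixation intervals could be aligned with the demonstration video to tell the robot which moments, and which objects, the user considered important.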
To test her hypothesis, Ajaykumar will collect video and depth data from users wearing first-person cameras similar to GoPros as they complete task demonstrations. She hopes the sensor data will provide insights, cues, or trends that robots can use when learning from human demonstrations.
“First-person vision is special because a robot can see the world from a user’s unique perspective,” she says. “Understanding user intention can help the robot figure out how it can best assist a user at any given time. As I continue with this project, I look forward to exploring how first-person vision can be used to augment robot-provided assistance.”
Prior to joining Hopkins, Ajaykumar earned her BS in electrical and computer engineering from the University of Texas at Austin. She has received a 2018 National Science Foundation Graduate Research Fellowship and the Howard and Jacqueline Chertkof Endowed Fellowship, which supports exceptional Hopkins graduate students in emerging technology fields.