Faculty Spotlight: Pedro Lopes

October 23, 2019

What if interfaces shared parts of our body? How can we engineer devices that connect more personally and directly to our body? These are key research questions posed by Pedro Lopes, an assistant professor in the University of Chicago’s Department of Computer Science. As the leader of the University’s Human Computer Integration Lab, Lopes uses a combination of computer science, engineering, and art to create interactive systems that borrow parts of the user’s body for input and output. This approach allows computers to be more directly interwoven with our bodily senses and actuators.

Q. Much of your research focuses on human computer integration. How does this extend beyond the devices many of us use regularly (smartphones, wearable technology, etc.)?

A. The mission of my lab is to create the next generation of devices, ones that supersede today’s wearables. Wearable devices appeared as an evolution of the smartphone and introduced the ability to infer the user’s physiological state, such as measuring heart rate, counting steps, or gauging attention. In contrast, the prototypes we build in my lab explore what happens when devices can not only sense our body but also stimulate it. For instance, if we made a device that can communicate with your muscles by making them move, using safe and small electrical impulses, could this revolutionize the way we learn physical tasks?
 
Q. How do you predict this kind of integration evolving in the future?

A. By looking at the evolution of computer science, we find some clues that allow us to extrapolate a possible trend. First, the obvious observation is that computing devices are getting smaller (PC, laptop, tablet, phone, smartwatch). Less obvious, and perhaps more relevant, is that interactive devices are spending more time with their users (a smartwatch is literally attached to you all day long). This trend is interesting because it raises all sorts of questions: what will the next devices look like? Will they also attach to us? How will they connect to our body?

The key here is that devices are connecting to more and more of the user’s physiology. While early wearables like Fitbits and smartwatches can only read the user’s body state (heart rate, steps, etc.), future devices can also actively leverage the user’s body as computing hardware. We’ve built a device that shows you how to use objects you have never used before by physically moving you in order to demonstrate how your muscles should operate the object. We move the user with a technology called Electrical Muscle Stimulation (EMS), which delivers electrical impulses to move someone’s muscles. The key benefit here is that you learn by doing, which suggests there might be a new way of learning physical tasks. These kinds of devices are fundamentally different from today’s wearables, and we believe this close integration with the human body is the evolution of today’s wearable device. That being said, predicting how the adoption of interactive devices will evolve is much harder, since that depends on market trends and other factors outside most researchers’ sphere of influence.

Q. Some people are hesitant to embrace this type of technology. What are your thoughts on that?

A. The prototypes we build are research artifacts, and none is meant to be used in its current form. Translating these new developments into mainstream commercial devices will take decades, as many of the current limitations have to be addressed. One of the limitations we are excited about solving is the question of agency: who is in control when a device uses our own muscles to communicate with us? This is an exciting question because it has serious implications for how any automated or physical system is experienced by its users: in fact, in the age of automation, we believe it is increasingly important to preserve the user’s agency.

We recently started looking into strategies to preserve the user’s feeling of control in two ways. The first is to provide users with ways to stay in control. For instance, one of our systems not only stimulates the user’s muscles to communicate messages, but continuously monitors whether users resist the stimulation. If they do, the system automatically turns off, as it detects that the user wants to dismiss the system’s action. Secondly, in some cases users want to deliberately let the system take control of their muscles, such as when learning how to perform a task they are not familiar with. In these cases, preserving the user’s agency is still critical because it allows users to learn faster. To tackle this problem, we explored whether manipulating the timing of the muscle stimulation preserves the user’s sense of "I did this." We found that, in very simple cases, we are able to assist users with muscle stimulation so they can perform tasks they are normally not capable of. Yet, by using a very specific timing for delivering the electrical impulses, we allow users to preserve some degree of agency, thus accelerating their learning of the physical task.
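To make the resistance-detection idea concrete, here is a minimal sketch of such a safety loop. It is not the lab’s actual software; the sensor and stimulator interfaces (`EMSChannel`, `read_resistance_force`) and the threshold values are hypothetical stand-ins.

```python
# Minimal sketch of an agency-preserving EMS loop (hypothetical APIs).
# The idea from the interview: stimulate a muscle to convey a movement,
# but continuously check whether the user is resisting; if so, stop.

import time

RESISTANCE_THRESHOLD = 5.0   # assumed force threshold (newtons) for "user is resisting"
PULSE_INTERVAL_S = 0.02      # assumed 50 Hz control loop


class EMSChannel:
    """Hypothetical wrapper around one electrical-muscle-stimulation channel."""

    def set_intensity(self, milliamps: float) -> None:
        ...  # would command the stimulator hardware

    def stop(self) -> None:
        self.set_intensity(0.0)


def read_resistance_force() -> float:
    """Hypothetical force/EMG reading of how hard the user pushes back."""
    return 0.0


def guided_motion(channel: EMSChannel, intensity_ma: float, duration_s: float) -> bool:
    """Drive the muscle for duration_s, aborting if the user resists.

    Returns True if the guided motion completed, False if the user dismissed it.
    """
    deadline = time.time() + duration_s
    while time.time() < deadline:
        if read_resistance_force() > RESISTANCE_THRESHOLD:
            channel.stop()  # user pushed back: hand control back immediately
            return False
        channel.set_intensity(intensity_ma)
        time.sleep(PULSE_INTERVAL_S)
    channel.stop()
    return True
```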

Q. What are some things you’re currently working on?

A. We are working on building new types of wearable devices. My postdoc, Jun Nishida, and I are focusing on building devices that allow two users to share physical abilities and experiences that go beyond words and symbolic communication. We’re building an exoskeleton device, a mechanical actuator that fits your hand like a glove, enabling you to feel someone else’s grasping ability. This approach is especially important for designing devices for people who are different from you, such as children or the elderly, or for enabling doctors to understand their patients’ grasping ability. Also, together with the SAND Lab (Systems, Algorithms, Networking, and Data) led by Heather Zheng and Ben Zhao, we’re engineering a wearable device that jams surrounding microphones to protect the user’s privacy against eavesdropping microphones, such as those found in Amazon Alexa devices and smartphones.

Q. How are UChicago students working with interactive technology in your classes?

A. In the three classes I created in the Department of Computer Science, undergrads are exploring many forms of interactive technology. In my Introduction to HCI class, we invite students to build all sorts of user interface systems, such as mobile devices that communicate with their users through vibrations rather than visuals. Going deeper, in the HCI Engineering class, students build physical prototypes every week and put it all together to build a self-contained, functional wearable device by the end of the class. Some of their wearables involved state-of-the-art ideas such as indoor localization via Bluetooth signal strength measurement, a navigation system that moves your hand to indicate where you have to go, and even a water bottle that knows how much you drank throughout the day. The undergrads who took my Emergent Interface Technologies graduate class got to experiment with a new development in materials science as they learned how to leverage silicone embedded with liquid metal to create stretchable and flexible electronics that can even self-heal.
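As a rough illustration of the estimate behind the Bluetooth localization project mentioned above, here is a sketch of the standard log-distance path-loss model for turning a received signal strength (RSSI) reading into an approximate distance. The calibration constants are assumed values for illustration, not measurements from the class.

```python
# Sketch: estimating distance from a Bluetooth RSSI reading using the
# log-distance path-loss model. Constants below are illustrative assumptions.

TX_POWER_DBM = -59.0        # assumed RSSI measured at 1 m from the beacon
PATH_LOSS_EXPONENT = 2.0    # ~2 in free space; larger indoors with obstacles


def rssi_to_distance(rssi_dbm: float) -> float:
    """Rough distance estimate (meters) from a single RSSI sample."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))


if __name__ == "__main__":
    for rssi in (-59, -70, -80):
        print(f"RSSI {rssi} dBm -> ~{rssi_to_distance(rssi):.1f} m")
```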