Research Areas
I am a design and human-computer interaction (HCI) researcher who investigates how to make information technologies safe, secure, and trustworthy, especially those involving AI, sensors, and networked devices. My work focuses on the design of user interfaces and the interactions they mediate between people, data, algorithms, and physical environments. I approach interfaces broadly, as bridges between people, information, and technology. I work across visual and auditory screen-based applications, spatial and wearable sensor-based systems, and tactile physical interfaces.

In addition to developing and studying functional interfaces, I design speculative interfaces, such as proposals, scenarios, concept videos, and physical models, to explore emerging sociotechnical possibilities. My research integrates design practice with qualitative social science and technical development, combining methods from user experience (UX), industrial, visual, participatory, and speculative design with expertise in computer science, engineering, and the arts and humanities.

Since 2016, my research has focused on user trust and control of interactive technologies, particularly consumer Internet of Things (IoT) systems, and more recently, agentic AI interfaces across consumer and business applications. Design theory, methods, and criticism have been ongoing areas of my work since 2012. Previously, I led HCI/Design research programs on environmental sustainability and renewable energy (2009–2024), and information overload and technology addiction (2014–2016).

Below is an overview of my current research themes and programs.

Multi-User Trust and Control of AI Sensing Systems
My core research program examines how the design of AI-enabled sensing technologies, such as networked cameras, microphones, and environmental sensors, shapes trust, privacy, security, and safety among people who live with and around them. For the past eight years, I have investigated the social, ethical, and experiential dimensions of these technologies as they become embedded in everyday life.

With funding from the National Science Foundation, I have advanced a design and HCI agenda that addresses the harms and vulnerabilities affecting a large yet overlooked population I term adjacent users: people impacted by digital technologies they do not own or operate, including domestic workers, tenants, neighbors, children, employees, and public bystanders. In contrast to primary users who own and operate devices, adjacent users have limited awareness, consent, control, and benefits, yet remain subject to persistent sensing, recording, and algorithmic inference.

My research addresses four core challenges posed by smart sensing technologies:

• Spatial spillover. Cameras, microphones, and other sensors extend invisibly into physical space, often over-sensing while under-signaling their states and capabilities to nearby people.

• AI-enhanced perception. Real-time recognition and inference can be useful and even life-saving, but also invasive, non-consensual, and creepy.

• Stakeholder tensions. Primary users’ needs for security and functionality often conflict with adjacent users’ needs for privacy, safety, and respect.

• Interface asymmetry. Primary users retain the most control and visibility, even when they want to maintain trust and safety for those affected by their devices.

I have focused much of this research on smart home security cameras because they represent both an urgent contemporary problem and a revealing window into broader issues with AI-enabled sensing systems. Products such as Ring doorbells and Nest indoor cameras are among the most widely adopted smart home technologies. As our studies show, people use them for diverse purposes—deterring intruders, monitoring pets and children, and tracking guests and deliveries. Yet these same devices raise serious concerns about privacy, power, and the health of interpersonal relationships.

While perceptions of privacy and trust are highly context-dependent, many of the underlying challenges and design opportunities surrounding smart home cameras generalize to emerging technologies, including municipal surveillance systems with facial recognition, service robots navigating with computer vision, wearables such as smart glasses and personal AI devices, and agentic AI applications that observe user activity in application windows.

Arca: Prototyping solutions for multi-user privacy, security, and trust
Smart home cameras are used deliberately and incidentally to surveil family members, roommates, guests, neighbors, domestic workers, and passersby. Prior work has highlighted “bystander privacy” concerns, yet few robust design solutions exist, and adjacent users remain largely unaddressed in both design research and commercial product development.

Our design and prototyping of Arca, a privacy-, safety-, and trust-enhancing smart home camera, directly responds to this gap. Through 60+ interview and participatory design engagements, we identified a need for solutions that balance primary user security needs with adjacent user privacy, trust, and respect.

We found persistent tensions between primary and adjacent users. Social factors and security concerns often led to poor disclosure and consent, producing strained relationships, hostile workplaces, and even attempted legal action. These problems extend beyond conventional privacy concerns, centering on respect, power, and trust in everyday interpersonal surveillance.

In response, we developed a system of UX principles, novel privacy states, and design patterns tailored to these challenges. Arca demonstrates interface and system architecture techniques that enable primary users to customize privacy states “between on and off” and securely share relevant information with adjacent users. From this work, we derived generalized responsible sensing UX patterns, including configurable “sensor dimmer switches,” secure and lightweight status- and capability-sharing for adjacent users, and access inhibitors that discourage invasive and disrespectful usage. Arca has received international design awards (2024 IXDA and 2025 Core77) and has been published in top-tier HCI venues.


Qualitative studies of primary and adjacent smart device users
To better understand multi-user trust, control, and privacy/security, my research group conducts qualitative interviews and observational studies of how people use technologies and how these technologies mediate everyday practices. We have studied how smart camera owners and households use the devices for routine surveillance tasks. Fieldwork led us to develop an empirically grounded taxonomy of digital surveillance practices within homes and neighborhoods, revealing new forms of care-based, managerial, and atelic surveillance driven by curiosity or compulsion rather than rational goals.

Follow-up studies examined interpersonal relationships between primary and adjacent users: one focused on smart cameras, and a second on location-sharing products. We identified motivations and impacts of casual, incidental surveillance among friends and family, as well as tensions and conflicts that arise. Emerging norms of silent disclosure and consent show how primary users often avoid explicit disclosure while adjacent users feel disempowered to question or complain.

Participatory design engagements with privacy, security, and data ethics
Another strand of my research on digital trust, safety, and privacy/security focuses on broadening participation by engaging diverse stakeholders in the design and governance of emerging technologies. Participatory methods in this space serve multiple roles. They surface stakeholder perspectives, inspire co-creative design directions, and cultivate public engagement and dialogue around technology’s social implications. I disseminate outcomes not only through academic publications but also through public design exhibitions, workshops, and online documentation.

One approach I have helped pioneer uses design proposals and use-case scenarios as tools for creative, critical discussion about surveillance, privacy, power, and policy. An early example was a collaboration with the UC Berkeley School of Information and the Center for Long-Term Cybersecurity (CLTC), where design workbooks prompted engineering students to reflect on ethical and privacy implications of biosensing technologies. A second project, Privacy and Data Policies in Print (Because Nobody Actually Reads Them), combined participatory and arts-based design to gather stories about why notice-and-consent mechanisms fail. Both projects led to operational insights for using design artifacts to elicit reflection and dialogue about privacy and security.

Subsequent collaborations between my University of Washington research group and the CLTC examined interpersonal surveillance and asymmetrical power relations with smart technologies. In interviews anchored by speculative design scenarios, we uncovered tensions between parents and children, landlords and tenants, and employers and domestic workers. Whereas researchers tend to think in terms of “privacy and security,” our participants emphasized the interplay of issues like autonomy, respect, property rights, and physical safety.

Most recently, we have produced high-quality narrative films that translate critiques of digital surveillance into immersive, relatable scenarios. Drawing from real-world practices such as usage-based insurance and academic critiques like surveillance capitalism, these films envision near-future trajectories to provoke public understanding and dialogue about the social consequences of AI-enabled location tracking.


Speculative and Experimental Smart Technologies
Alongside my research program on multi-user trust and control, I conduct speculative design research exploring emerging opportunities, limits, and risks of AI, sensing, and data as design materials. Since 2019, I have built and deployed a series of eccentric sensing devices. The most recent is Wall, a functional system that listens, learns, and displays snippets of overheard conversation. One insight from Wall was that its high speech-to-text error rates actually improved trust and safety: the accidental, built-in distortion obscured sensitive details while still letting users reminisce about speech events, as intended.

Speculative futuring and artistic experimentation underpin my research and continually feed ideas into my most practical, solution-oriented work. For example, my prior work on eccentric camera curtains, boundary-transgressing robotic cameras, and counterfunctional technologies inspired unconventional design concepts that have led to commercializable innovations for “in between” sensing states and “access inhibitors” that deter misuse.


Design Theory, Methods, and Criticism
I am continually engaged in advancing theory, method, and criticism within design, information science, and HCI. Periodically, I step back from domain-specific topics to reflect on design practice, research, and discourse more broadly. My work examines design as a speculative, critical, conceptual, and experimental mode of inquiry, including writing on the role of speculative design, the importance of “undesigning” harmful technologies, and how designers use knowledge and information tools. I also organize workshops on design theory and research through design, including an ongoing series of conference workshops from 2017 to present.

I often use the artifacts I design and build as vehicles for theory development. For example, my research on multi-user trust produced a taxonomy of adjacent user types and practical methods for creating relationship personas. Across these projects, practice-based design and prototyping function as engines for conceptual innovation, supporting my broader argument that design must move beyond single-user frameworks to address multi-user tensions in networked, sensing, and non-deterministic AI systems.