Biography
Nikolas Martelaro is an Assistant Professor at Carnegie Mellon's Human-Computer Interaction Institute. His lab is dedicated to enhancing designers' abilities through innovative technologies and design methodologies. He is driven by a passion for crafting interactive and intelligent products, and seeks to develop new approaches to support designers in their work. Blending expertise in product design methods, interaction design, human-robot interaction, and mechatronic engineering, he creates tools and methods that empower designers to gain deeper insights into human behavior and ultimately produce more human-centered products. Prior to joining the HCII, he held the position of Digital Experiences Researcher at Accenture Technology Labs in San Francisco. He earned his Ph.D. in Mechanical Engineering from Stanford's Center for Design Research under the co-advisorship of Larry Leifer and Wendy Ju.
Areas of Expertise (4)
Interaction with Autonomous Systems
Mechatronics
Design Tools
Interaction Design
Media Appearances (2)
Futurity
Futurity online
2024-06-06
In their research, Carnegie Mellon University School of Computer Science faculty members Sarah Fox and Nikolas Martelaro highlight potential issues sidewalk robots encounter during deployment and propose solutions to mitigate them before the robots hit the streets.
A (hypothetical, incremental) revolution in automated transit
Politico online
2024-05-01
I called up Sarah Fox and Nikolas Martelaro, researchers at Carnegie Mellon University and authors of a 2022 policy paper on automated public transit, to ask them exactly how close, or far, we might be from that “win.” As it turns out, it’s a little bit more complicated than simply taking the driver out of every bus. An edited and condensed version of the conversation follows:
Accomplishments (2)
Best Paper Honorable Mention, CHI ’23 (professional)
2023
Best Demonstration, CSCW ’17 (professional)
2017
Education (3)
Stanford University: Ph.D., Mechanical Engineering 2018
Stanford University: M.Eng., Mechanical Engineering 2014
Franklin W. Olin College of Engineering: B.S., Engineering 2012
Affiliations (1)
- ACM
Links (4)
Research Grants (3)
Using Technology to Transform Makers into Creative Entrepreneurs
National Science Foundation - Future of Work at the Human-Technology Frontier, $150,000
2022-2023
Supporting Designers in Learning to Co-create with AI for Complex Computational Design Tasks
National Science Foundation - Cyberlearning & Future Learning Technology, $850,000
2021-2024
Equitable new mobility: Community-driven mechanisms for designing and evaluating personal delivery device deployments
National Science Foundation - Smart & Connected Communities - Planning Grant, $150,000
2021-2022
Patents (3)
Articles (3)
Sharing the Sidewalk: Observing Delivery Robot Interactions with Pedestrians during a Pilot in Pittsburgh, PA
Multimodal Technologies and Interaction, 2023
Sidewalk delivery robots are being deployed as a form of last-mile delivery. While many such robots have been deployed on college campuses, fewer have been piloted on public sidewalks. Furthermore, there have been few observational studies of robots and their interactions with pedestrians. To better understand how sidewalk robots might integrate into public spaces, the City of Pittsburgh, Pennsylvania conducted a pilot of sidewalk delivery robots to understand possible uses and the challenges that could arise in interacting with people in the city. Our team conducted ethnographic observations and intercept interviews to understand how residents perceived and interacted with sidewalk delivery robots over the course of the public pilot. We found that people with limited knowledge about the robots crafted stories about their purpose and function. We observed the robots causing distractions and obstructions with different sidewalk users (including children and dogs), witnessed people helping immobilized robots, and learned about potential accessibility issues that the robots may pose. Based on our findings, we contribute a set of recommendations for future pilots, as well as questions to guide future design for robots in public spaces.
Learning When Agents Can Talk to Drivers Using the INAGT Dataset and Multisensor Fusion
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2021
This paper examines sensor fusion techniques for modeling opportunities for proactive speech-based in-car interfaces. We leverage the Is Now a Good Time (INAGT) dataset, which consists of automotive, physiological, and visual data collected from drivers who self-annotated responses to the question "Is now a good time?," indicating the opportunity to receive non-driving information during a 50-minute drive. We augment this original driver-annotated data with third-party annotations of perceived safety, in order to explore potential driver overconfidence. We show that fusing automotive, physiological, and visual data allows us to predict driver labels of availability, achieving an F1-score of 0.874 by extracting statistically relevant features and training with our proposed deep neural network, PazNet. Using the same data and network, we achieve an F1-score of 0.891 for predicting third-party labeled safe moments. We train these models to avoid false positives---determinations that it is a good time to interrupt when it is not---since false positives may cause driver distraction or service deactivation by the driver. Our analyses show that conservative models still leave many moments for interaction and show that most inopportune moments are short. This work lays a foundation for using sensor fusion models to predict when proactive speech systems should engage with drivers.
A comparative analysis of multimodal communication during design sketching in co-located and distributed environments
Design Studies, 2014
This study extends our understanding of multimodal communication during design sketching. Building on the literature, the theoretical dimension frames gesturing as a communication channel and a thinking medium, and postulates an interplay between gesturing and other channels. The empirical dimension explores the theoretical propositions in the context of co-located and distributed sketching. Quantitative analyses suggest that when gesturing is restricted, graphical communication is leveraged to compensate, and that verbal communication is incessant in both collaboration environments. They also highlight a non-compensatory design phase dependent interaction between gestural and graphical communication. Moreover, they reveal differences in the communication structure used in the two environments. Qualitative analyses identify a behavior termed “cross-gesturing,” which informs how gesturing facilitates shared understanding during collaborative sketching.