
Nikolas Martelaro
Assistant Professor, Carnegie Mellon University
- Pittsburgh, PA
Nikolas Martelaro's lab focuses on augmenting designers' capabilities through the use of new technology and design methods.
Biography
Areas of Expertise
Media Appearances
Carnegie Mellon Community Shines at SXSW 2025 — an Intersection of Culture, Tech and Innovation
CMU News online
2025-03-21
At another session, Sarah Fox and Nikolas Martelaro presented their collaboration with transit operators and their unions aimed at understanding the impacts of future technologies on transportation workers. The two explained the potential to leverage AI in "Creating Safer, More Equitable Public Transit Systems."
“We have been really excited to work with our union partners and to learn from real operators what is all the complexity and things that are happening on the road in real transit operations,” Martelaro said. “How can we understand and learn from operators to bring that knowledge into thinking about new technologies?”
Futurity
Futurity online
2024-06-06
In their research, Carnegie Mellon University School of Computer Science faculty members Sarah Fox and Nikolas Martelaro highlight potential issues sidewalk robots encounter during deployment and propose solutions to mitigate them before the robots hit the streets.
A (hypothetical, incremental) revolution in automated transit
Politico online
2024-05-01
I called up Sarah Fox and Nikolas Martelaro, researchers at Carnegie Mellon University and authors of a 2022 policy paper on automated public transit, to ask them exactly how close, or far, we might be from that “win.” As it turns out, it’s a little bit more complicated than simply taking the driver out of every bus. An edited and condensed version of the conversation follows:
Social
Accomplishments
Best Paper Honorable Mention, CHI ’23
2023
Best Demonstration, CSCW ’17
2017
Education
Stanford University
Ph.D.
Mechanical Engineering
2018
Stanford University
M.Eng.
Mechanical Engineering
2014
Franklin W. Olin College of Engineering
B.S.
Engineering
2012
Affiliations
- ACM
Links
Research Grants
Using Technology to Transform Makers into Creative Entrepreneurs
National Science Foundation - Future of Work at the Human-Technology Frontier
2022-2023
Supporting Designers in Learning to Co-create with AI for Complex Computational Design Tasks
National Science Foundation - Cyberlearning & Future Learning Technology
2021-2024
Equitable new mobility: Community-driven mechanisms for designing and evaluating personal delivery device deployments
National Science Foundation - Smart & Connected Communities - Planning Grant
2021-2022
Patents
Articles
Sharing the Sidewalk: Observing Delivery Robot Interactions with Pedestrians during a Pilot in Pittsburgh, PA
Multimodal Technologies and Interaction, 2023
Sidewalk delivery robots are being deployed as a form of last-mile delivery. While many such robots have been deployed on college campuses, fewer have been piloted on public sidewalks. Furthermore, there have been few observational studies of robots and their interactions with pedestrians. To better understand how sidewalk robots might integrate into public spaces, the City of Pittsburgh, Pennsylvania conducted a pilot of sidewalk delivery robots to understand possible uses and the challenges that could arise in interacting with people in the city. Our team conducted ethnographic observations and intercept interviews to understand how residents perceived and interacted with sidewalk delivery robots over the course of the public pilot. We found that people with limited knowledge about the robots crafted stories about their purpose and function. We observed the robots causing distractions and obstructions for different sidewalk users (including children and dogs), witnessed people helping immobilized robots, and learned about potential accessibility issues that the robots may pose. Based on our findings, we contribute a set of recommendations for future pilots, as well as questions to guide future design for robots in public spaces.
Learning When Agents Can Talk to Drivers Using the INAGT Dataset and Multisensor Fusion
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2021
This paper examines sensor fusion techniques for modeling opportunities for proactive speech-based in-car interfaces. We leverage the Is Now a Good Time (INAGT) dataset, which consists of automotive, physiological, and visual data collected from drivers who self-annotated responses to the question "Is now a good time?," indicating the opportunity to receive non-driving information during a 50-minute drive. We augment this original driver-annotated data with third-party annotations of perceived safety, in order to explore potential driver overconfidence. We show that fusing automotive, physiological, and visual data allows us to predict driver labels of availability, achieving a 0.874 F1-score by extracting statistically relevant features and training with our proposed deep neural network, PazNet. Using the same data and network, we achieve a 0.891 F1-score for predicting third-party labeled safe moments. We train these models to avoid false positives (determinations that it is a good time to interrupt when it is not), since false positives may cause driver distraction or service deactivation by the driver. Our analyses show that conservative models still leave many moments for interaction and show that most inopportune moments are short. This work lays a foundation for using sensor fusion models to predict when proactive speech systems should engage with drivers.
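The abstract does not detail PazNet's architecture, so the sketch below is only a rough illustration of the general approach it describes: encoding per-modality feature vectors (automotive, physiological, visual), concatenating them, and training a binary "is now a good time" classifier with a loss weighting that discourages false positives. All feature dimensions, layer sizes, and the weighting value are hypothetical assumptions, not the published model.

```python
# Minimal sketch (not the published PazNet): feature-level fusion of
# automotive, physiological, and visual feature vectors with a small
# classifier. Dimensions and weights are illustrative assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, dims=(16, 8, 32), hidden=64):
        super().__init__()
        # One small encoder per modality: automotive, physiological, visual.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in dims]
        )
        # Concatenate encoded modalities; output a logit for
        # "good time to talk" (1) vs. "not a good time" (0).
        self.head = nn.Sequential(
            nn.Linear(hidden * len(dims), hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, auto_x, phys_x, vis_x):
        encoded = [enc(x) for enc, x in zip(self.encoders, (auto_x, phys_x, vis_x))]
        return self.head(torch.cat(encoded, dim=-1)).squeeze(-1)  # logits

model = FusionClassifier()
# Down-weighting the positive class (pos_weight < 1) relatively emphasizes
# errors on negative examples, pushing the model to be conservative about
# predicting "interrupt now" (i.e., to avoid false positives).
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(0.5))

# Toy batch of per-window features standing in for the real INAGT signals.
auto_x, phys_x, vis_x = torch.randn(4, 16), torch.randn(4, 8), torch.randn(4, 32)
labels = torch.tensor([1.0, 0.0, 0.0, 1.0])
loss = loss_fn(model(auto_x, phys_x, vis_x), labels)
loss.backward()
```

In practice a decision threshold higher than 0.5 on the predicted probability is another common way to trade recall for fewer false positives; the paper's specific choices are not described here.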
A comparative analysis of multimodal communication during design sketching in co-located and distributed environments
Design Studies, 2014
This study extends our understanding of multimodal communication during design sketching. Building on the literature, the theoretical dimension frames gesturing as a communication channel and a thinking medium, and postulates an interplay between gesturing and other channels. The empirical dimension explores the theoretical propositions in the context of co-located and distributed sketching. Quantitative analyses suggest that when gesturing is restricted, graphical communication is leveraged to compensate, and that verbal communication is incessant in both collaboration environments. They also highlight a non-compensatory design phase dependent interaction between gestural and graphical communication. Moreover, they reveal differences in the communication structure used in the two environments. Qualitative analyses identify a behavior termed “cross-gesturing,” which informs how gesturing facilitates shared understanding during collaborative sketching.