NASA Asks Researchers to Help Define Trustworthiness in Autonomous Systems
A Florida Tech-led group of researchers was selected to help NASA solve challenges in aviation through the agency's prestigious University Leadership Initiative (ULI) program. Over the next three years, associate professor of computer science and software engineering Siddhartha Bhattacharyya and professor of aviation human factors Meredith Carroll will work to understand the vital role of trust in autonomy. Their project, “Trustworthy Resilient Autonomous Agents for Safe City Transportation in the Evolving New Decade” (TRANSCEND), aims to establish a common framework for engineers and human operators to determine the trustworthiness of machine-learning-enabled autonomous aviation safety systems.

Autonomous systems are those that can perform tasks independently, without human control. Machine learning is expected to enhance the autonomy of these systems, and as a result, intelligence-based software is expected to see increasing use in airplanes and drones. It may also be used in airports and in air traffic management in the future. Learning-enabled autonomous technology can also support contingency management in safety applications, proactively addressing potential disruptions and unexpected aviation events.

TRANSCEND was one of three projects chosen for the latest round of ULI awards. The others hail from Embry-Riddle Aeronautical University in Daytona Beach, which is researching continuously updating, self-diagnostic vehicle health management to enhance the safety and reliability of Advanced Air Mobility vehicles, and from the University of Colorado Boulder, which is investigating tools for understanding and leveraging the complex communications environment of collaborative, autonomous airspace systems.

Florida Tech's team includes nine faculty members from five universities: Penn State, North Carolina A&T State University, the University of Florida, Stanford University and Santa Fe College. It also involves the companies Collins Aerospace of Cedar Rapids, Iowa, and ResilienX of Syracuse, New York. Carroll and Bhattacharyya will also involve students throughout the project.

Human operators are an essential component of aviation technology: they monitor independent software systems and the associated data, and they intervene when those systems fail. Operators may include flight crew members, air traffic controllers, maintenance personnel or safety staff monitoring overall system safety.

A challenge in implementing independent software is that engineers and operators interpret what makes a system “trustworthy” differently, Carroll and Bhattacharyya explained. Engineers who develop autonomous software measure trustworthiness by the system's ability to perform as designed. Human operators, however, trust and rely on systems to perform as they expect: they want to feel comfortable relying on a system to make an aeronautical decision in flight, such as how to avoid a traffic conflict or a weather event. Sometimes that reliance won't align with design specifications.

Equally important, operators need to trust that the software will alert them when a human must take over, as may happen when the algorithm driving the software encounters a scenario it wasn't trained for.

“We are looking at how we can integrate trust from different communities – from human factors, from formal methods, from autonomy, from AI…” Bhattacharyya said.
“How do we convey assumptions for trust, from design time to operation, as the intelligent systems are being deployed, so that we can trust them and know when they’re going to fail, especially those that are learning-enabled, meaning they adapt based on machine learning algorithms?”

With Bhattacharyya leading the engineering side and Carroll leading the human factors side, the research group will begin bridging the trust gap by integrating theories, principles, methods, measures, visualizations, explainability techniques and practices from different domains into the TRANSCEND framework. They will then test the framework using a diverse range of tools, flight simulators and intelligent decision-making scenarios to demonstrate trustworthiness in practice. This and other data will help them develop a safety-case toolkit of guidelines for development processes, recommendations and suggested safety measures for engineers to reference when designing “trustworthy,” learning-enabled autonomous systems.

Ultimately, Bhattacharyya and Carroll hope their toolkit will lay the groundwork for a future certification process for learning-enabled autonomous systems.

“The goal is to combine all our research capabilities and pull together a unified story that outputs unified products to the industry,” Carroll said. “We want products for the industry to utilize when implementing learning-enabled autonomy for more effective safety management systems.”

The researchers also plan to use the toolkit to teach future engineers about the nuances of trust in the products they develop. Once the toolkit is developed, they will hold outreach events, such as lectures and camps, for STEM-minded students in the community.

To connect with Meredith Carroll or Siddhartha Bhattacharyya, contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology, at adam@fit.edu to arrange an interview.
