NASA Grant Funds Research Exploring Methods of Training Vision-Based Autonomous Systems

Comparing Techniques Between Autonomous Aircraft and Autonomous Car Systems

Apr 3, 2025

4 min



Conducting research at 5:30 a.m. may not be everybody’s first choice. But for Siddhartha Bhattacharyya and Ph.D. students Mohammed Abdul Hafeez Khan and Parth Ganeriwala, it’s an essential part of the process for their latest endeavor.


Bhattacharyya and his students are developing a more efficient framework for creating and evaluating image-based machine learning classification models for autonomous systems, such as those guiding cars and aircraft. That process involves creating new datasets with taxiway and runway images for vision-based autonomous aircraft.


Just as humans need textbooks to fuel their learning, some machines are taught using thousands of photographs and images of the environment where their autonomous pupil will eventually operate. To help ensure their trained models can identify the correct course to take in a hyper-specific environment – with indicators such as centerline markings and side stripes on a runway at dawn – Bhattacharyya and his Ph.D. students chose a December morning to rise with the sun, board one of Florida Tech’s Piper Archer aircraft and photograph the views from above.


Bhattacharyya, an associate professor of computer science and software engineering, is exploring the operational boundaries of efficient and effective machine-learning approaches for vision-based classification in autonomous systems. In this case, these machine learning systems are trained on video or image data collected from environments including runways, taxiways or roadways.


Training this kind of model can take more than 100,000 images for the algorithm to learn and adapt to an environment. With today’s technology, each of those images must be manually labeled and classified by a person.


This can be an overwhelming process.


To combat that, Bhattacharyya was awarded funding from NASA Langley Research Center to advance existing machine learning/computer vision-based systems, such as his lab’s “Advanced Line Identification and Notation Algorithm” (ALINA), by exploring automated labeling that would enable the model to learn and classify data itself – with humans intervening only as necessary. This measure would ease the overwhelming human demand, he said.


ALINA is an annotation framework that Hafeez and Parth developed under Bhattacharyya’s guidance to detect and label data for algorithms, such as taxiway line markings for autonomous aircraft.
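The article doesn’t detail ALINA’s internals, but the core idea of automated line annotation can be sketched in miniature: find candidate marking pixels in a frame, fit a line through them, and defer to a human when the frame is ambiguous. Everything below (`annotate_line`, the brightness threshold, the synthetic frame) is a hypothetical illustration, not ALINA’s actual pipeline.

```python
import numpy as np

def annotate_line(image, threshold=200):
    """Toy auto-labeling step: find bright 'marking' pixels in a grayscale
    frame and fit a straight line (slope, intercept) through them.
    Returns None when too few marking pixels are found, so the frame can
    be routed to a human annotator instead."""
    ys, xs = np.nonzero(image >= threshold)
    if len(xs) < 2:
        return None  # ambiguous frame: defer to a human
    slope, intercept = np.polyfit(xs, ys, deg=1)
    return slope, intercept

# Synthetic 100x100 'taxiway' frame with one bright diagonal marking.
frame = np.zeros((100, 100))
for x in range(100):
    frame[x, x] = 255  # marking along the line y = x

result = annotate_line(frame)
```

The human-in-the-loop escape hatch (returning `None`) reflects the article’s goal: the model labels what it can, and people intervene only as necessary.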


Bhattacharyya will use NASA’s funding to explore transfer learning-based approaches, led by Parth, and few-shot learning (FSL) approaches, led by Hafeez. The researchers are collecting images via GoPro of runways and taxiways at airports in Melbourne and Grant-Valkaria with help from Florida Tech’s College of Aeronautics.


Bhattacharyya’s students will take the data they collect from the airports and train their models to, in theory, drive an aircraft autonomously. They are working to collect diverse images of the runways – from different angles and in varied weather and lighting conditions – so that the model learns to identify patterns that determine the most accurate course regardless of environment or conditions. That includes the daybreak images captured on that December flight.


“We went at sunrise, where there is glare on the camera. Now we need to see if it’s able to identify the lines at night because that’s when there are lights embedded on the taxiways,” Bhattacharyya said. “We want to collect diverse datasets and see what methods work, what methods fail and what else do we need to do to build that reliable software.”


Transfer learning is a machine learning technique in which a model trained on one task reuses what it has learned to complete another, related task. For example, a model trained to drive autonomous cars could transfer its knowledge to drive autonomous aircraft. This transfer helps explore how knowledge generalizes. It also improves efficiency by eliminating the need to build a new model from scratch for each related task. Similarly, a car trained to operate autonomously in California could retain generalized knowledge when learning to drive in Florida, despite the different landscapes.


“This model already knows lines and lanes, and we are going to train it on certain other types of lines hoping it generalizes and keeps the previous knowledge,” Bhattacharyya explained. “That model could do both tasks, as humans do.”
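As a minimal sketch of that idea – assuming nothing about the team’s actual models – the toy logistic-regression “model” below is trained on a source task, and its learned weights are then reused as the starting point for a few fine-tuning steps on a small, related target task. The tasks, data and hyperparameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w_init, lr=0.1, steps=200):
    """Plain logistic-regression training loop (gradient descent)."""
    w = w_init.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

# 'Source' task: classify points by the sign of feature 0 (think: road lanes).
X_src = rng.normal(size=(200, 2))
y_src = (X_src[:, 0] > 0).astype(float)
w_src = train(X_src, y_src, w_init=np.zeros(2))

# 'Target' task: a related rule (think: taxiway lines) with far less data.
X_tgt = rng.normal(size=(20, 2))
y_tgt = (X_tgt[:, 0] + 0.2 * X_tgt[:, 1] > 0).astype(float)

# Transfer: start from the source weights instead of from scratch,
# fine-tuning with only a few steps on the small target set.
w_tgt = train(X_tgt, y_tgt, w_init=w_src, steps=20)

acc = np.mean(((X_tgt @ w_tgt) > 0) == (y_tgt == 1))
```

Because the source weights already encode most of what the target task needs, a handful of fine-tuning steps on 20 samples suffices – the efficiency gain the paragraph above describes.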


FSL is a technique that teaches a model to generalize information with just a few data samples instead of the massive datasets used in transfer learning. With this type of training, a model should be able to identify an environment based on just four or five images.


“That would help us reduce the time and cost of data collection as well as time spent labeling the data that we typically go through for several thousands of datasets,” Bhattacharyya said.
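One common way to realize few-shot classification – used here purely as an illustrative stand-in, since the article doesn’t specify the team’s FSL method – is nearest-class-centroid matching, the core idea behind prototypical networks: average the handful of labeled “support” examples per class and assign a query to the closest class mean. The function name, features and labels below are hypothetical.

```python
import numpy as np

def few_shot_classify(support, labels, query):
    """Label a query by the nearest class centroid, computed from only a
    few labeled support examples per class (here on raw features; in
    practice this would run on learned embeddings)."""
    classes = sorted(set(labels))
    centroids = np.array([
        np.mean([s for s, l in zip(support, labels) if l == c], axis=0)
        for c in classes
    ])
    dists = np.linalg.norm(centroids - np.asarray(query), axis=1)
    return classes[int(np.argmin(dists))]

# Four samples per class, in the spirit of the article's 'four or five images'.
support = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.1], [0.0, 0.0],   # 'runway'
                    [5.0, 5.1], [4.9, 5.0], [5.1, 4.8], [5.0, 5.2]])  # 'taxiway'
labels = ['runway'] * 4 + ['taxiway'] * 4
pred = few_shot_classify(support, labels, query=[4.8, 5.0])
```

Eight labeled samples replace the tens of thousands a conventionally trained classifier would need – which is exactly where the savings in collection and labeling time come from.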


Learning when results may or may not be reliable is a key part of this research. Bhattacharyya said identifying degradation in the autonomous system’s performance will help guide the development of online monitors that can catch errors and alert human operators to take corrective action.
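A runtime monitor of this kind can be as simple as a confidence check: flag any frame where the classifier’s top probability drops below a threshold and alert the operator rather than trusting a degraded prediction. The function, threshold and probabilities below are illustrative assumptions, not the lab’s actual monitor design.

```python
def monitor(probs, threshold=0.8):
    """Toy online monitor: return the indices of frames whose top class
    probability falls below the threshold, so a human operator can be
    alerted instead of trusting a possibly degraded classification."""
    return [i for i, p in enumerate(probs) if max(p) < threshold]

# Per-frame class probabilities from a hypothetical line classifier.
frame_probs = [
    [0.95, 0.05],   # confident: trust the prediction
    [0.55, 0.45],   # ambiguous (e.g. sunrise glare): alert the operator
    [0.90, 0.10],
]
alerts = monitor(frame_probs)
```

Real monitors would track richer signals than raw softmax confidence, but the pattern is the same: detect degradation online and hand control back to a human.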


Ultimately, he hopes this research can help create a future where we enjoy the benefits of machine learning without fear that a system will fail without first notifying the operator, driver or user.


“That’s the end goal,” Bhattacharyya said. “It motivates me to learn how the context relates to assumptions associated with these images, that helps in understanding when the autonomous system is not confident in its decision, thus sending an alert to the user. This could apply to a future generation of autonomous systems where we don’t need to fear the unknown – when the system could fail.”




Siddhartha (Sid) Bhattacharyya’s primary research interests are in model-based engineering, formal methods, machine learning engineering, and explainable AI applied to intelligent autonomous systems, cybersecurity, human factors, healthcare, and avionics. His research lab, ASSIST (Assured Safety, Security, and Intent with Systematic Tactics), focuses on designing innovative formal methods to assure the performance of intelligent systems, machine learning engineering to characterize intelligent systems for safety, and model-based engineering to analyze system behavior.



Siddhartha Bhattacharyya is available to speak with media. Contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology at adam@fit.edu to arrange an interview today.
