NASA Grant Funds Research Exploring Methods of Training Vision-Based Autonomous Systems

Comparing Techniques Between Autonomous Aircraft and Autonomous Car Systems

Apr 3, 2025

4 min



Conducting research at 5:30 a.m. may not be everybody’s first choice. But for Siddhartha Bhattacharyya and Ph.D. students Mohammed Abdul Hafeez Khan and Parth Ganeriwala, it’s an essential part of the process for their latest endeavor.


Bhattacharyya and his students are developing a more efficient framework for creating and evaluating image-based machine learning classification models for autonomous systems, such as those guiding cars and aircraft. That process involves creating new datasets with taxiway and runway images for vision-based autonomous aircraft.


Just as humans need textbooks to fuel their learning, some machines are taught using thousands of photographs and images of the environment where their autonomous pupil will eventually operate. To help ensure their trained models can identify the correct course to take in a hyper-specific environment – with indicators such as centerline markings and side stripes on a runway at dawn – Bhattacharyya and his Ph.D. students chose a December morning to rise with the sun, board one of Florida Tech’s Piper Archer aircraft and photograph the views from above.


Bhattacharyya, an associate professor of computer science and software engineering, is exploring the operational boundaries of efficient, effective machine learning approaches for vision-based classification in autonomous systems. In this case, the machine learning systems are trained on video or image data collected from environments such as runways, taxiways and roadways.


Training this kind of model can take more than 100,000 images to help the algorithm learn and adapt to an environment. Today’s technology demands substantial human effort to manually label and classify each image.


This can be an overwhelming process.


To combat that, Bhattacharyya was awarded funding from NASA Langley Research Center to advance existing machine learning/computer vision-based systems, such as his lab’s “Advanced Line Identification and Notation Algorithm” (ALINA), by exploring automated labeling that would enable the model to learn and classify data itself – with humans intervening only as necessary. This measure would ease the overwhelming human demand, he said.
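The article does not spell out how the automated labeling would work, but one common pattern is confidence-based pseudo-labeling: the model keeps the labels it is sure about and routes uncertain images to a human annotator. The short Python sketch below illustrates that human-in-the-loop idea under stated assumptions; the `predict_proba` interface, the threshold and the function names are illustrative placeholders, not part of ALINA or the NASA-funded work.

```python
# Minimal human-in-the-loop auto-labeling sketch (illustrative only).
# `model` is any classifier exposing predict_proba(); the names and threshold
# here are hypothetical, not taken from ALINA or the NASA-funded project.
from dataclasses import dataclass, field

@dataclass
class LabelingResult:
    auto_labeled: list = field(default_factory=list)   # (image_id, label) pairs accepted automatically
    needs_review: list = field(default_factory=list)   # image_ids routed to a human annotator

def auto_label(model, images, image_ids, threshold=0.95):
    """Accept the model's own label when it is confident; otherwise ask a human."""
    result = LabelingResult()
    probs = model.predict_proba(images)           # shape: (n_images, n_classes)
    for image_id, p in zip(image_ids, probs):
        label = int(p.argmax())
        if p.max() >= threshold:
            result.auto_labeled.append((image_id, label))
        else:
            result.needs_review.append(image_id)  # a human intervenes only here
    return result
```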


ALINA is an annotation framework that Hafeez and Parth developed under Bhattacharyya’s guidance to detect and label data for algorithms, such as taxiway line markings for autonomous aircraft.
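ALINA’s internals are not described in this article. As a rough illustration of how painted line markings can be found automatically with classical computer vision, a sketch using OpenCV color masking and a probabilistic Hough transform might look like the following; the color thresholds and parameters are placeholder values, and this is not ALINA’s actual algorithm.

```python
# Illustrative taxiway line detection with classical computer vision (OpenCV).
# This is NOT ALINA; it only sketches the general idea of locating painted line
# markings in an image so they can be annotated automatically.
import cv2
import numpy as np

def detect_line_segments(image_path):
    img = cv2.imread(image_path)                  # BGR image from disk
    if img is None:
        raise FileNotFoundError(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Rough mask for yellow taxiway paint; these thresholds are placeholders.
    mask = cv2.inRange(hsv, (15, 80, 80), (35, 255, 255))
    edges = cv2.Canny(mask, 50, 150)
    # The probabilistic Hough transform returns candidate segments (x1, y1, x2, y2).
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=50, minLineLength=40, maxLineGap=10)
    return [] if segments is None else [tuple(s[0]) for s in segments]
```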


Bhattacharyya will use NASA’s funding to explore transfer learning-based approaches, led by Parth, and few-shot learning (FSL) approaches, led by Hafeez. The researchers are collecting images via GoPro of runways and taxiways at airports in Melbourne and Grant-Valkaria with help from Florida Tech’s College of Aeronautics.


Bhattacharyya’s students will take the data they collect from the airports and use it to train models that could, in theory, guide an aircraft autonomously. They are working to collect diverse images of the runways – captured from different angles and under different weather and lighting conditions – so that the model learns to identify the patterns that determine the most accurate course regardless of environment or conditions. That includes the daybreak images captured on that December flight.


“We went at sunrise, where there is glare on the camera. Now we need to see if it’s able to identify the lines at night because that’s when there are lights embedded on the taxiways,” Bhattacharyya said. “We want to collect diverse datasets and see what methods work, what methods fail and what else do we need to do to build that reliable software.”


Transfer learning is a machine learning technique in which a model trained on one task generalizes what it has learned and reuses it to complete another, related task. For example, a model trained to drive autonomous cars could transfer that knowledge to guiding autonomous aircraft. This transfer lets researchers explore how well knowledge generalizes, and it improves efficiency by removing the need to train a new model from scratch for each different but related task. A car trained to operate autonomously in California, for example, could retain its generalized knowledge when learning to drive in Florida, despite the different landscape.
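As a hedged illustration of the idea, the sketch below uses PyTorch and a ResNet-18 backbone pretrained on ImageNet, freezes the pretrained weights and swaps in a new classification head for a new set of line classes. The framework, backbone and class count are assumptions made for illustration; the article does not specify which models the team uses.

```python
# Transfer learning sketch with PyTorch/torchvision (illustrative; the project's
# actual models, datasets and class labels are not specified in the article).
import torch
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes):
    # Start from a backbone pretrained on a large generic image dataset (ImageNet).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    # Freeze the pretrained weights so previously learned features are retained...
    for param in model.parameters():
        param.requires_grad = False
    # ...and learn only a new classification head for the new task, e.g.
    # taxiway/runway line classes instead of road lane classes.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_transfer_model(num_classes=4)                   # placeholder class count
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train only the new head
```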


“This model already knows lines and lanes, and we are going to train it on certain other types of lines hoping it generalizes and keeps the previous knowledge,” Bhattacharyya explained. “That model could do both tasks, as humans do.”


FSL is a technique that teaches a model to generalize from just a few data samples rather than the massive datasets used in approaches such as transfer learning. With this type of training, a model should be able to identify an environment based on just four or five images.
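One widely used few-shot approach, shown as a minimal sketch below, averages the handful of labeled examples per class into a “prototype” and assigns new images to the nearest prototype in an embedding space. The random vectors here stand in for a real feature extractor, and this is not necessarily the method the team’s FSL work uses.

```python
# Few-shot classification sketch: nearest-prototype over embeddings (illustrative).
# The random "embeddings" below are placeholders for features from a real model.
import numpy as np

def class_prototypes(support_embeddings, support_labels):
    """Average the few labeled examples per class into one prototype vector each."""
    labels = np.unique(support_labels)
    return {c: support_embeddings[support_labels == c].mean(axis=0) for c in labels}

def predict(query_embedding, prototypes):
    """Assign the query to the class whose prototype is closest in embedding space."""
    return min(prototypes, key=lambda c: np.linalg.norm(query_embedding - prototypes[c]))

# Toy usage: 2 classes, 3 support examples each (few-shot), 8-dimensional embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(6, 8))
labels = np.array([0, 0, 0, 1, 1, 1])
protos = class_prototypes(support, labels)
print(predict(rng.normal(size=8), protos))
```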


“That would help us reduce the time and cost of data collection as well as time spent labeling the data that we typically go through for several thousands of datasets,” Bhattacharyya said.


Learning when results may or may not be reliable is a key part of this research. Bhattacharyya said identifying degradation in the autonomous system’s performance will help guide the development of online monitors that can catch errors and alert human operators to take corrective action.
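The article does not describe how such a monitor would be built; one simple illustration is a runtime check that flags any prediction whose confidence drops below a threshold and alerts a human operator. The threshold and alert mechanism in the sketch below are placeholder assumptions, not part of the project.

```python
# Minimal runtime-monitor sketch (illustrative): flag frames where the model's
# confidence drops so a human operator can take corrective action.
def monitor(prediction_confidences, threshold=0.8, alert=print):
    """Yield each prediction index, alerting whenever confidence falls below threshold."""
    for i, confidence in enumerate(prediction_confidences):
        if confidence < threshold:
            alert(f"frame {i}: low confidence ({confidence:.2f}), operator review requested")
        yield i, confidence

# Toy usage with made-up per-frame confidences.
for _ in monitor([0.97, 0.93, 0.62, 0.88]):
    pass
```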


Ultimately, he hopes this research can help create a future where we enjoy the benefits of machine learning without fear that it will fail without first notifying the operator, driver or user.


“That’s the end goal,” Bhattacharyya said. “It motivates me to learn how the context relates to assumptions associated with these images, that helps in understanding when the autonomous system is not confident in its decision, thus sending an alert to the user. This could apply to a future generation of autonomous systems where we don’t need to fear the unknown – when the system could fail.”




Siddhartha (Sid) Bhattacharyya’s primary areas of research expertise and interest are model-based engineering, formal methods, machine learning engineering, and explainable AI applied to intelligent autonomous systems, cybersecurity, human factors, healthcare and avionics. His research lab, ASSIST (Assured Safety, Security, and Intent with Systematic Tactics), focuses on the design of innovative formal methods to assure the performance of intelligent systems, machine learning engineering to characterize intelligent systems for safety, and model-based engineering to analyze system behavior.



Siddhartha Bhattacharyya is available to speak with media. Contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology at adam@fit.edu to arrange an interview today.
