NASA Grant Funds Research Exploring Methods of Training Vision-Based Autonomous Systems

Comparing Techniques Between Autonomous Aircraft and Autonomous Car Systems

Apr 3, 2025




Conducting research at 5:30 a.m. may not be everybody’s first choice. But for Siddhartha Bhattacharyya and Ph.D. students Mohammed Abdul Hafeez Khan and Parth Ganeriwala, it’s an essential part of the process for their latest endeavor.


Bhattacharyya and his students are developing a more efficient framework for creating and evaluating image-based machine learning classification models for autonomous systems, such as those guiding cars and aircraft. That process involves creating new datasets with taxiway and runway images for vision-based autonomous aircraft.


Just as humans need textbooks to fuel their learning, some machines are taught using thousands of photographs and images of the environment where their autonomous pupil will eventually operate. To help ensure their trained models can identify the correct course to take in a hyper-specific environment – with indicators such as centerline markings and side stripes on a runway at dawn – Bhattacharyya and his Ph.D. students chose a December morning to rise with the sun, board one of Florida Tech’s Piper Archer aircraft and photograph the views from above.


Bhattacharyya, an associate professor of computer science and software engineering, is exploring the boundaries of operation of efficient and effective machine-learning approaches for vision-based classification in autonomous systems. In this case, these machine learning systems are trained on video or image data collected from environments including runways, taxiways or roadways.


With this kind of model, it can take more than 100,000 images to help the algorithm learn and adapt to an environment. Today’s technology demands a pronounced human effort to manually label and classify each image.


This can be an overwhelming process.


To combat that, Bhattacharyya was awarded funding from NASA Langley Research Center to advance existing machine learning/computer vision-based systems, such as his lab’s “Advanced Line Identification and Notation Algorithm” (ALINA), by exploring automated labeling that would enable the model to learn and classify data itself – with humans intervening only as necessary. This measure would ease the overwhelming human demand, he said.
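The idea of automated labeling with humans intervening only as necessary can be sketched as a confidence-gated loop: the model proposes a label for each image, and only low-confidence proposals are routed to a human reviewer. This is a minimal illustration of the concept, not ALINA itself; the model, file names, and cutoff below are all assumptions.

```python
# Sketch of confidence-gated auto-labeling: the model proposes a label for
# each image, and only low-confidence proposals go to a human review queue.
# The toy model, image names, and 0.9 cutoff are illustrative assumptions.
def auto_label(images, model, cutoff=0.9):
    accepted, needs_review = {}, []
    for name in images:
        label, confidence = model(name)
        if confidence >= cutoff:
            accepted[name] = label        # keep the machine's label
        else:
            needs_review.append(name)     # a human intervenes only here
    return accepted, needs_review

# Toy stand-in for a trained classifier: returns (label, confidence).
def toy_model(name):
    scores = {"dawn_rwy09.jpg": ("centerline", 0.97),
              "glare_twyA.jpg": ("side_stripe", 0.62)}
    return scores[name]

accepted, queue = auto_label(["dawn_rwy09.jpg", "glare_twyA.jpg"], toy_model)
print(accepted)  # → {'dawn_rwy09.jpg': 'centerline'}
print(queue)     # → ['glare_twyA.jpg']
```

Raising the cutoff trades less human effort for more machine-labeled data; lowering it does the reverse.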


ALINA is an annotation framework that Hafeez and Parth developed under Bhattacharyya’s guidance to detect and label data for algorithms, such as taxiway line markings for autonomous aircraft.


Bhattacharyya will use NASA’s funding to explore transfer learning-based approaches, led by Parth, and few-shot learning (FSL) approaches, led by Hafeez. The researchers are collecting images via GoPro of runways and taxiways at airports in Melbourne and Grant-Valkaria with help from Florida Tech’s College of Aeronautics.


Bhattacharyya’s students will take the data they collect from the airports and train their models to, in theory, drive an aircraft autonomously. They are working to collect diverse images of the runways – captured from different angles and in varied weather and lighting conditions – so that the model learns to identify the patterns that determine the most accurate course regardless of environment or conditions. That includes the daybreak images captured on that December flight.


“We went at sunrise, where there is glare on the camera. Now we need to see if it’s able to identify the lines at night because that’s when there are lights embedded on the taxiways,” Bhattacharyya said. “We want to collect diverse datasets and see what methods work, what methods fail and what else do we need to do to build that reliable software.”


Transfer learning is a machine learning technique in which a model trained on one task generalizes what it has learned and reuses it to complete another, related task. For example, a model trained to drive autonomous cars could transfer that knowledge to flying autonomous aircraft. This transfer helps explore how knowledge generalizes. It also improves efficiency by eliminating the need to train new models from scratch for different but related tasks. Similarly, a car trained to operate autonomously in California could retain generalized knowledge when learning to drive in Florida, despite the different landscapes.
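The core mechanic of transfer learning can be shown in miniature: a feature extractor learned on one task is frozen and reused, and only a small new "head" is fit for the related task. This is a deliberately tiny sketch of the idea, not the lab's actual models; the functions, features, and data are all illustrative.

```python
# Minimal sketch of transfer learning: a "pretrained" feature extractor
# (a stand-in for a frozen network trained on, say, road lanes) is reused
# unchanged, and only a small new head is fit for taxiway line detection.
# Everything here is illustrative, not the lab's actual models or data.

def pretrained_features(pixel_row):
    """Stand-in for a frozen network: summarize raw pixels into features."""
    brightness = sum(pixel_row) / len(pixel_row)
    contrast = max(pixel_row) - min(pixel_row)
    return [brightness, contrast]

def train_head(samples, labels):
    """Fit a new task head: a threshold on the reused 'contrast' feature."""
    pos = [pretrained_features(s)[1] for s, y in zip(samples, labels) if y == 1]
    neg = [pretrained_features(s)[1] for s, y in zip(samples, labels) if y == 0]
    # Midpoint between the two class means.
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(threshold, sample):
    return 1 if pretrained_features(sample)[1] > threshold else 0

# New-task data: pixel rows crossing a painted line have high contrast.
taxiway_rows = [[10, 10, 200, 10], [12, 11, 12, 13], [8, 220, 9, 10], [30, 31, 29, 30]]
has_line     = [1, 0, 1, 0]

threshold = train_head(taxiway_rows, has_line)
print([predict(threshold, r) for r in taxiway_rows])  # → [1, 0, 1, 0]
```

Only `train_head` sees the new task's data; the feature extractor carries over untouched, which is what makes the approach cheap relative to training from scratch.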


“This model already knows lines and lanes, and we are going to train it on certain other types of lines hoping it generalizes and keeps the previous knowledge,” Bhattacharyya explained. “That model could do both tasks, as humans do.”


FSL is a technique that teaches a model to generalize from just a few data samples instead of the massive datasets used in conventional training. With this type of training, a model should be able to identify an environment from just four or five images.
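One common few-shot approach compares a query against a per-class "prototype" built from a handful of labeled support examples. The sketch below illustrates that nearest-prototype idea with four support samples per class; the two-dimensional feature vectors and class names are toy assumptions standing in for learned image embeddings.

```python
# Few-shot sketch: classify a query using only a few labeled "support"
# examples per class, by comparing it to each class prototype (the mean
# of that class's support features). Features here are toy stand-ins
# for learned image embeddings; class names are illustrative.
from math import dist  # Euclidean distance (Python 3.8+)

def prototype(examples):
    """Average one class's support examples into a single prototype."""
    n = len(examples)
    return [sum(x[i] for x in examples) / n for i in range(len(examples[0]))]

def classify(query, prototypes):
    """Assign the label of the nearest class prototype."""
    return min(prototypes, key=lambda label: dist(query, prototypes[label]))

# Four samples per class, echoing the article's "four or five images."
support = {
    "centerline":  [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15], [0.9, 0.2]],
    "side_stripe": [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85], [0.1, 0.8]],
}
prototypes = {label: prototype(xs) for label, xs in support.items()}

print(classify([0.88, 0.12], prototypes))  # → centerline
```

Because only the prototypes change when classes are added, adapting to a new environment needs only a few new labeled images rather than a retraining run.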


“That would help us reduce the time and cost of data collection as well as time spent labeling the data that we typically go through for several thousands of datasets,” Bhattacharyya said.


Learning when results may or may not be reliable is a key part of this research. Bhattacharyya said identifying degradation in the autonomous system’s performance will help guide the development of online monitors that can catch errors and alert human operators to take corrective action.
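An online monitor of the kind described above can be reduced to a simple runtime check: watch the model's per-frame confidence and flag frames where it is too uncertain to be trusted. The threshold and scores below are illustrative assumptions, not values from the study.

```python
# Sketch of an online monitor: flag frames where the classifier's
# confidence degrades below a threshold, so a human can take over.
# The threshold and confidence scores are illustrative assumptions.
ALERT_THRESHOLD = 0.7

def monitor(frame_confidences, threshold=ALERT_THRESHOLD):
    """Return indices of frames that should trigger a human alert."""
    return [i for i, c in enumerate(frame_confidences) if c < threshold]

# Per-frame confidence from a hypothetical line-detection model; the dips
# might correspond to conditions like the sunrise glare described above.
confidences = [0.95, 0.91, 0.55, 0.88, 0.42]
print(monitor(confidences))  # → [2, 4]
```

A real monitor would combine more signals than raw confidence, but the shape is the same: detect degradation online and alert the operator rather than fail silently.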


Ultimately, he hopes this research can help create a future where we enjoy the benefits of machine learning without fear that it will fail without first notifying the operator, driver or user.


“That’s the end goal,” Bhattacharyya said. “It motivates me to learn how the context relates to assumptions associated with these images, that helps in understanding when the autonomous system is not confident in its decision, thus sending an alert to the user. This could apply to a future generation of autonomous systems where we don’t need to fear the unknown – when the system could fail.”




Siddhartha (Sid) Bhattacharyya’s primary areas of research expertise and interest are model-based engineering, formal methods, machine learning engineering, and explainable AI, applied to intelligent autonomous systems, cybersecurity, human factors, healthcare, and avionics. His research lab, ASSIST (Assured Safety, Security, and Intent with Systematic Tactics), focuses on designing innovative formal methods to assure the performance of intelligent systems, machine learning engineering to characterize intelligent systems for safety, and model-based engineering to analyze system behavior.



Siddhartha Bhattacharyya is available to speak with media. Contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology at adam@fit.edu to arrange an interview today.
