Researchers Seek Understanding of Early Life on Earth Following Chilean Expedition

Jan 21, 2025

3 min

Andrew Palmer, Ph.D.



In a discovery that may further our understanding of the early evolution of life on Earth, a research team, including associate professor Andrew Palmer and master’s student Caitlyn Hubric, identified Chile’s deepest and northernmost cold seeps—openings in the ocean floor that emit gases and fluids—about 100 miles off the Chilean coast and thousands of feet below the surface.


This most terrestrial of discoveries may also yield insights that could benefit future space exploration, Palmer said.


Palmer, who runs the astrobiology and chemical ecology lab at Florida Tech, and Hubric, who has studied with him for the last three years, represented the university on Schmidt Ocean Institute’s (SOI) expedition through the Atacama Trench. The trench is a nearly 5-mile-deep oceanic trench in the eastern Pacific Ocean that has remained at the same latitude for the last 150 million years, suggesting an extremely stable and potentially ancient ecosystem.



The trench’s seeps, found at a depth of 2,836 meters (9,304 feet), provide chemical energy for deep-sea animals living without sunlight, according to SOI. Seeps like this one can help astrobiologists understand how life developed on Earth and how those survival strategies and chemical conditions might sustain life on other planets.


Palmer and Hubric were members of the expedition’s microbiology team and were specifically searching for biosignatures. That meant looking out for novel microbes and chemical signatures, like proteins or carbohydrates, which may have existed in the region for millions of years.



The benefits of their research extend beyond life on Earth. They could also shape future space exploration. A big part of why they’re investigating water ecosystems is the scientific interest in ocean worlds such as Saturn’s moon Enceladus and Jupiter’s moon Europa, Hubric said. She said it’s not a perfect analog, but it’s close enough that they can look for patterns in how life’s chemical processes might operate at these sites.


“We hope that some of the answers we find here will help us in future endeavors when we do finally go explore the solar system,” Hubric said.


Back on campus after the expedition, which ran from May 24 to June 6, they’ve started working to answer those questions, both by identifying molecules that guide the search for life and by understanding the limitations of the instruments that detect metabolites, or early signatures of life, Palmer said.


“If [the instruments] can’t successfully identify traces of life on Earth, where we know there’s lots of life, how are they going to be successful in a place where it’s less likely than a needle in a haystack?” Palmer said. “It’s the bigger question of, what do we need to do in order to be successful in the search for life?”


For Palmer and Hubric, the research has only just begun. They’ll test water and sediment samples, as well as the filtrate collected from their water filters, and investigate them for microbes of interest. Searching for novel metabolisms will be an even more extensive process, Palmer said.


“It’s weird doing something where you won’t be able to see the results for weeks or months,” Palmer said. “This is just the beginning.”


Looking to learn more about the Schmidt Ocean Institute’s expedition through the Atacama Trench and Dr. Palmer’s research? Then let us help.


Dr. Andrew Palmer is an associate professor of biological sciences at Florida Tech and a go-to expert in the field of Martian farming. He is available to speak with media regarding this and related topics. Click on his icon to arrange an interview.



Connect with:

Andrew Palmer, Ph.D.

Associate Professor | Ocean Engineering and Marine Sciences

Dr. Palmer's research interests include eavesdropping on bacterial 'conversations', Martian farming, and cell wall fragment-based signaling.

Astrobiology | Ocean Engineering | Biomedical | Molecular Biology | Biochemistry
