Florida Tech Scientist to Study Deep-Space Agriculture After Planetary Society Grant Award

Jun 2, 2023

2 min

Andrew Palmer, Ph.D.




No matter where humans travel, sustenance remains a necessity. Finding a bite to eat during a visit to New York, for example, is no problem. When the destination is a bit farther away, such as Mars, the options are far less plentiful, both at the destination and on the long journey to get there.


That’s where Florida Tech’s Andrew Palmer comes in. He and other scientists are exploring ways to feed our explorers, and a new competitive grant from the Planetary Society will fund work that examines the two most likely ways to produce food during travel to these far-flung spots: in soil or something like soil, or in water.


Palmer and his team were awarded a $50,000 Science and Technology Empowered by the Public (STEP) grant, the Planetary Society recently announced. Their project: “Evaluation of food production systems for lunar and Martian agriculture.”


For the next year, they will grow radish microgreens, lettuce and tomatoes in identical environmental conditions with one major exception: one batch will be grown hydroponically, and another will be grown in regolith simulant, a material that mimics lunar or Martian soil. The aim of the experiment is to characterize and compare the two methods, both of which have merits and shortcomings.


“It may be that a combination of these approaches, tailored to the diverse needs of different crops, is the best way to provide sustainable and productive agriculture,” Palmer said. “Until now, there have been no direct comparison studies between hydroponic and regolith-based systems for any crop targeted for space applications. We are excited to address this knowledge gap.”



The team, which includes experts in plant physiology and biochemistry as well as space agriculture and systems efficiency analysis, will test their hypothesis that faster growing crops like microgreens will be better suited for hydroponic systems even in the long term, while slower-growing crops like tomatoes may favor a regolith-based production system.


Palmer and his co-investigator, Rafael Loureiro from Winston-Salem State University, are joined by collaborators J. Travis Hunsucker and Thiara Bento from Florida Tech, Laura E. Fackrell at the Jet Propulsion Laboratory and Jéssica Carneiro Oliveira at Universidade Federal do Estado do Rio de Janeiro in Brazil.




Care to delve a little deeper? Palmer and a second STEP grant recipient, Dartmouth College professor Jacob Buffo, spoke to the Planetary Society senior communications advisor Mat Kaplan about their respective projects. The segment with Palmer begins at the 23:57 mark and the piece is linked above.


Looking to know more about what it will take to feed our deep-space explorers? Then let us help with your questions and coverage.


Dr. Andrew Palmer is an associate professor of biological sciences at Florida Tech and a go-to expert in the field of Martian farming. He is available to speak with media regarding this and related topics.


Connect with:
Andrew Palmer, Ph.D.

Associate Professor | Ocean Engineering and Marine Sciences

Dr. Palmer's research interests include eavesdropping on bacterial 'conversations', Martian farming, and cell wall fragment-based signaling.

Astrobiology | Ocean Engineering | Biomedical | Molecular Biology | Biochemistry
