Planet 9 Doesn’t Exist, So Why Does It Matter How We Get There? Let Our Expert Explain.

Oct 12, 2022

3 min

Manasvi Lingam, Ph.D.

Planet 9 is an oft-discussed hypothetical planet in the outer region of the solar system. A new study involving Florida Tech astrobiologist Manasvi Lingam helps illustrate how we could possibly get there.


The study, “Can We Fly to Planet 9?” is from Lingam and researchers Adam Hibberd and Andreas Hein. The team found that using current, uncrewed propulsion technology, it would take 45 to 75 years to reach Planet 9, which is estimated to be about 42 billion miles from Earth. By comparison, Pluto, long regarded as the ninth planet, is roughly three billion miles from Earth.


The research by Lingam, Hibberd and Hein has also drawn attention from websites such as UniverseToday.com.



The team also studied two near-future propulsion methods: nuclear thermal propulsion and laser sails. Using nuclear thermal propulsion, it would take approximately 40 years to reach Planet 9. Laser sail propulsion, which uses light from lasers to push the vehicle, would take merely six to seven years.
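As a rough sanity check on these figures, one can compute the average speed each quoted travel time implies over the study's roughly 42-billion-mile distance. This is a hypothetical back-of-envelope sketch, not part of the study itself, which solves the full orbital-mechanics problem over longer, curved trajectories with gravity assists:

```python
# Average cruise speed implied by each quoted travel time, assuming a
# straight-line path of 42 billion miles. Real trajectories are longer
# and use gravity assists, so these are only order-of-magnitude figures.
DISTANCE_MILES = 42e9
HOURS_PER_YEAR = 365.25 * 24

def implied_speed_mph(years):
    """Average speed (mph) needed to cover the distance in `years`."""
    return DISTANCE_MILES / (years * HOURS_PER_YEAR)

for label, years in [("conventional propulsion (fast case)", 45),
                     ("conventional propulsion (slow case)", 75),
                     ("nuclear thermal propulsion", 40),
                     ("laser sail", 6.5)]:
    print(f"{label}: ~{implied_speed_mph(years):,.0f} mph average")
```

Even the slow 75-year case implies averaging more than 60,000 mph for decades, well above Voyager-class speeds, while the laser-sail figure implies an average above 700,000 mph.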




In its research, the team applied the principles of orbital mechanics, sometimes called spaceflight mechanics. They entered the complex, nonlinear mathematical equations into a computer and solved them subject to optimization constraints.


“What I mean by the latter is that ideally you want to maximize or minimize some quantity as much as possible,” Lingam said. “You might say, ‘Well, I want to minimize the flight time of the spacecraft as much as possible.’ So, what we did is that we put in an optimization constraint. In this case, it happens to be minimizing the time of journey. You solve the mathematical equations for a spacecraft with this condition, and then you end up with the results.”
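The idea of minimizing journey time subject to a constraint can be illustrated with a deliberately tiny, hypothetical example: one-dimensional motion with made-up numbers, nothing like the study's actual nonlinear orbital-mechanics equations. Here the spacecraft accelerates for a chosen burn time, then coasts, and we search for the burn time that minimizes total trip time under a fuel limit:

```python
# Toy version of the trajectory-optimization idea described above
# (hypothetical numbers, 1-D accelerate-then-coast motion; the study
# solves the full nonlinear orbital-mechanics equations instead).
D = 1.0e12        # distance to target, meters (illustrative)
A = 0.001         # thrust acceleration, m/s^2 (illustrative)
TB_MAX = 2.0e7    # fuel limit: maximum burn time, seconds

def trip_time(tb):
    """Total time: accelerate for tb seconds, then coast the rest."""
    burn_dist = 0.5 * A * tb * tb
    v = A * tb
    return tb + (D - burn_dist) / v

# Grid search over admissible burn times (the "optimization constraint"):
# only burn times up to the fuel limit are allowed.
best_tb = min((TB_MAX * k / 1000 for k in range(1, 1001)), key=trip_time)
print(f"optimal burn: {best_tb:.0f} s, trip: {trip_time(best_tb) / 86400:.0f} days")
```

In this toy, burning longer always shortens the trip, so the optimizer pushes the burn time right up against the fuel constraint, a simple instance of a constraint "binding" in exactly the way Lingam describes.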


Lingam is inspired by the pioneering Voyager spacecraft missions of the late 1970s, and one of his goals is to gain additional information about other worlds in our solar system in addition to Planet 9. Voyager still provides valuable information about the outer solar system, though by 2025 it is expected that the spacecraft may no longer have enough power to operate its science instruments.


“Any mission to Planet Nine would likewise not just provide valuable information about that hypothetical planet, but it would also yield vital information about Jupiter, because what we do in some of the trajectories is a slingshot or powered flyby around Jupiter,” Lingam said. “It could also provide valuable information about the Sun because we also do a maneuver around the Sun, so you would still be getting lots of interesting data along the journey. And the length of the journey is comparable to that of the functioning time of the Voyager spacecraft today.”


If you're a reporter looking to know more, let us help connect you with an expert.



Manasvi Lingam is an Assistant Professor in the Department of Aerospace, Physics and Space Sciences at the Florida Institute of Technology. He is an author and a go-to expert for media on anything in outer space or out of this world. He was recently featured on Astronomy.com, where he was asked to answer the elusive question: Are we alone?



Manasvi is available to speak with media. Simply click on his icon to arrange an interview today.


Connect with:
Manasvi Lingam, Ph.D.


Assistant Professor | Aerospace, Physics and Space Sciences

Dr. Lingam's research interests lie primarily within the transdisciplinary field of astrobiology.

Planetary Science | Plasma Physics | Astrobiology | Astrophysics

You might also like...

Check out some other posts from Florida Tech

4 min

NASA Grant Funds Research Exploring Methods of Training Vision-Based Autonomous Systems

Conducting research at 5:30 a.m. may not be everybody’s first choice. But for Siddhartha Bhattacharyya and Ph.D. students Mohammed Abdul Hafeez Khan and Parth Ganeriwala, it’s an essential part of the process for their latest endeavor. Bhattacharyya and his students are developing a more efficient framework for creating and evaluating image-based machine learning classification models for autonomous systems, such as those guiding cars and aircraft. That process involves creating new datasets of taxiway and runway images for vision-based autonomous aircraft.

Just as humans need textbooks to fuel their learning, some machines are taught using thousands of photographs and images of the environment where their autonomous pupil will eventually operate. To help ensure their trained models can identify the correct course in a hyper-specific environment, with indicators such as centerline markings and side stripes on a runway at dawn, Bhattacharyya and his Ph.D. students chose a December morning to rise with the sun, board one of Florida Tech’s Piper Archer aircraft and photograph the views from above.

Bhattacharyya, an associate professor of computer science and software engineering, is exploring the boundaries of operation of efficient and effective machine-learning approaches for vision-based classification in autonomous systems. These systems are trained on video or image data collected from environments including runways, taxiways and roadways. It can take more than 100,000 images to help such a model learn and adapt to an environment, and today’s technology demands a pronounced human effort to manually label and classify each image. This can be an overwhelming process.
To combat that, Bhattacharyya was awarded funding from NASA Langley Research Center to advance existing machine learning and computer vision systems, such as his lab’s “Advanced Line Identification and Notation Algorithm” (ALINA), by exploring automated labeling that would enable the model to learn and classify data itself, with humans intervening only as necessary. This measure would ease the overwhelming human demand, he said. ALINA is an annotation framework that Hafeez and Parth developed under Bhattacharyya’s guidance to detect and label data for algorithms, such as taxiway line markings for autonomous aircraft.

Bhattacharyya will use NASA’s funding to explore transfer learning-based approaches, led by Parth, and few-shot learning (FSL) approaches, led by Hafeez. The researchers are collecting images via GoPro of runways and taxiways at airports in Melbourne and Grant-Valkaria with help from Florida Tech’s College of Aeronautics. Bhattacharyya’s students will take the data they collect from the airports and train their models to, in theory, drive an aircraft autonomously. They are working to collect diverse images of the runways, at different angles and in varied weather and lighting conditions, so that the model learns to identify patterns that determine the most accurate course regardless of environment. That includes the daybreak images captured on that December flight.

“We went at sunrise, where there is glare on the camera. Now we need to see if it’s able to identify the lines at night because that’s when there are lights embedded on the taxiways,” Bhattacharyya said. “We want to collect diverse datasets and see what methods work, what methods fail and what else do we need to do to build that reliable software.”

Transfer learning is a machine learning technique in which a model trained to do one task can generalize information and reuse it to complete another task.
For example, a model trained to drive autonomous cars could transfer its intelligence to drive autonomous aircraft. This transfer helps explore generalization of knowledge. It also improves efficiency by eliminating the need for new models that complete different but related tasks. A car trained to operate autonomously in California, for instance, could retain generalized knowledge when learning how to drive in Florida, despite the different landscapes. “This model already knows lines and lanes, and we are going to train it on certain other types of lines hoping it generalizes and keeps the previous knowledge,” Bhattacharyya explained. “That model could do both tasks, as humans do.”

FSL is a technique that teaches a model to generalize from just a few data samples instead of the massive datasets used in transfer learning. With this type of training, a model should be able to identify an environment based on just four or five images. “That would help us reduce the time and cost of data collection as well as time spent labeling the data that we typically go through for several thousands of datasets,” Bhattacharyya said.

Learning when results may or may not be reliable is a key part of this research. Bhattacharyya said identifying degradation in the autonomous system’s performance will help guide the development of online monitors that can catch errors and alert human operators to take corrective action. Ultimately, he hopes that this research can help create a future where we utilize the benefits of machine learning without fear of it failing before notifying the operator, driver or user. “That’s the end goal,” Bhattacharyya said. “It motivates me to learn how the context relates to assumptions associated with these images, that helps in understanding when the autonomous system is not confident in its decision, thus sending an alert to the user.
This could apply to a future generation of autonomous systems where we don’t need to fear the unknown – when the system could fail.”

Siddhartha (Sid) Bhattacharyya’s primary areas of research expertise and interest are model-based engineering, formal methods, machine learning engineering, and explainable AI applied to intelligent autonomous systems, cybersecurity, human factors, healthcare and avionics. His research lab, ASSIST (Assured Safety, Security, and Intent with Systematic Tactics), focuses on the design of innovative formal methods to assure the performance of intelligent systems, machine learning engineering to characterize intelligent systems for safety, and model-based engineering to analyze system behavior.

Siddhartha Bhattacharyya is available to speak with media. Contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology at adam@fit.edu to arrange an interview today.
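The transfer-learning and few-shot patterns described in the post above can be sketched in miniature. The following is a hypothetical toy, a fixed "pretrained" feature map plus a tiny classifier head retrained on only four labeled examples; it is not ALINA or the lab's actual pipeline:

```python
# Toy illustration of the pattern: freeze a "pretrained" feature
# extractor and retrain only a small head on a few labeled samples.
# All numbers and functions here are made up for illustration.

def extract_features(x):
    """Stand-in for a frozen, pretrained feature extractor."""
    return [0.5 * x, 0.25 * x]

def train_head(samples, lr=0.1, epochs=200):
    """Fit a tiny linear head with perceptron updates; features stay fixed."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            f = extract_features(x)
            pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
            err = label - pred
            w = [w[0] + lr * err * f[0], w[1] + lr * err * f[1]]
            b += lr * err
    return w, b

# "Few-shot" retraining for a new task: only four labeled examples,
# instead of the 100,000-plus images a model trained from scratch needs.
w, b = train_head([(-2, 0), (-1, 0), (1, 1), (2, 1)])

def classify(x):
    f = extract_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

print(classify(-3), classify(3))
```

The design point mirrors the article: because the expensive representation (here, `extract_features`) is reused, only a small, cheap component must be retrained for each new task or environment.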

2 min

With aviation in the news, Florida Tech's Shem Malmquist offers insight and clarity

Recent news on the safety of airlines in America has detailed tragic fatalities, airplanes flipping over and some crashing into prominent city streets, shining a less-than-flattering light on what is supposed to be a safe industry. Given recent events, Florida Tech College of Aeronautics visiting assistant professor Shem Malmquist has appeared in high-profile interviews on both current and historic aviation incidents. Recently, he spoke with the Boston Globe, Rolling Stone and the news platform FedScoop to lend his insight and expertise as a pilot.

Officials have repeatedly warned about a shortage of air traffic controllers. Pilots have made up for that gap by accepting visual approaches and separation from other airplanes to relieve some of the workload on controllers, said Shem Malmquist, a pilot and visiting instructor at the Florida Institute of Technology who teaches courses on aviation safety. He noted that was “part of the problem” with the D.C. collision. Still, flying remains safe because “pilots are overcoming the challenges in the system to prevent accidents,” Malmquist said. “Random distribution can create clusters like this. ... That doesn’t mean there’s more risk.” February 21 - Boston Globe

One former pilot told FedScoop that the system can be overpopulated with notices, only some of which might be important for a pilot to understand before taking off. Still, there’s generally no automated way of sorting through these notices, which means they can be incredibly long and difficult to completely process before flights. The notices themselves are densely written and use terminology that is often not immediately discernible. An example provided by the FAA shows the notices’ unique format. Textual data can also limit the ability to modernize the NOTAM system, an FAA statement of objectives from 2023 noted.
Shem Malmquist, a working pilot who also teaches at Florida Tech’s College of Aeronautics, said the entire NOTAM system “migrated from color pipe machines,” which locked in “certain abbreviations and codes” beyond their point of usefulness. “It’s really great for computers, which is kind of funny because it was created before computers,” Malmquist added. “But it’s … not really very user friendly for the way humans think.” February 21 - FedScoop

Recently, Malmquist was featured on National Geographic’s TV series “Air Crash Investigation,” where he spoke about the investigation of the 1993 China Eastern Airlines Flight 583 incident.

Looking to connect with Shem Malmquist regarding the airline industry? He’s available. Click on his icon to arrange an interview today.

4 min

Expert Opinion: Maneuvering friendships in the age of half-truths can be challenging

I recently shared an op-ed written by my colleague and friend, Ted Petersen, on a few social media sites. His thoughtful piece advocated for media literacy education. Later that day I received an alert that someone had commented on my post. The comment, made by a dear friend, alluded to disinformation about U.S.A.I.D.’s use of funds ― a false assertion that the federal agency supported the news outlet Politico for partisan gain.

The comment was a perfect example of why media literacy education is important ― not just for school children. It gives people the tools to navigate a borderless media environment in which news and opinion, verified facts and unsubstantiated statements, and information and entertainment coexist.

My dilemma after reading the comment was multi-faceted. What should I do? Do I respond? If so, how do I tell my friend that he is misinformed? If I don’t respond, am I shirking my responsibility as a friend, a citizen, an educator? How do I now live in a world in which my friends and family consume and trust media that actively promote disinformation? And, most importantly, how do I live in a world in which people I love are listening to a barrage of messages telling them that I am evil? That I cannot be trusted? That I should be hated?

Because underlying his deceptively simple comment is the possibility that, like many, my friend trusts certain media and messages while castigating all those that don’t always align with his world view. These messages come through media channels that give voice to leaders and media personalities who gain traction with their audiences by demonizing those they deem their enemies. They use half-truths and outright lies to gain sway with their followers. Anyone who thinks, looks or believes differently cannot be trusted.

As a media scholar, I have studied media effects, persuasion and audiences. I’ve analyzed the meaning audiences give messages and how different approaches affect audience perceptions.
I’ve written about the importance of narrative and message framing. I have advocated for the ethical use of these powerful tools. As a human being, I’m saddened as I witness blatant disregard for ethical principles in those leaders and media personalities who wield communication like a weapon to undermine trust. The results are impenetrable walls separating us from those who should be our allies.

After spending most of my life believing I was part of a community, able to agree or disagree, discuss and argue, to teach and to learn in conversation with others, I find myself the “other.” Dismissed. Demonized. Hated. Not by faceless strangers, but by those dear to me. I suspect I’m not alone in this feeling ― regardless of ideological preferences. Discord is painful. My heart hurts.

Yet, I am stubbornly hopeful. When I see my students from different backgrounds, cultures and generations discussing ideas for solutions to social issues, I am hopeful. When I hear my pastor fearlessly speaking to the congregation about loving each other even in disagreement, I am hopeful. When I speak to community groups and listen to their concerns and insights, I am hopeful. When I have a long-overdue conversation with my friend instead of relying on mediated social platforms, I am hopeful.

I recently spoke to a Rotary Club and borrowed their four-way test to suggest a healthier relationship with media and communication generally. Of the things we produce, consume or share, we should ask ourselves: Is it the truth? Is it fair to all concerned? Will it build goodwill and better friendships? Will it be beneficial to all concerned?
If the answer to any of those questions is no, we should change the channel, seek another source for context, delete the post, block the sender, or adjust our message so we can answer yes. And if you are asking yourself why you should be fair, or build goodwill, or benefit anyone from “the other side,” perhaps scroll through your photos or look at the pictures on your desk or mantel. We are not adversaries. We’re on the same side. It’s time to stop listening to those who tell us otherwise.

Heidi Hatfield Edwards is associate dean in Florida Tech’s College of Psychology and Liberal Arts and head of the School of Arts and Communication, where she is a professor of communication. She began her career as a media professional, spending nearly a decade gaining experience across multiple media platforms and in strategic communication. She teaches courses in mass communication, theory and science communication.

Heidi is available to speak with media. Contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology at adam@fit.edu to arrange an interview today.

View all posts