Global Technology Outage Raises Concerns About Ease of Future Cybersecurity Attacks

Unstable Global Environment a ‘National-Level Issue’ That Must Be Addressed, Florida Tech Professor Says

Jul 22, 2024

3 min

TJ O’Connor, Ph.D., LTC (Ret.)



The world came to a standstill after a technology outage reported Thursday evening grounded airplanes, disconnected hospitals and shut down banks across the globe. A faulty software update was to blame, not cybercriminals, but Florida Tech assistant professor TJ O’Connor said the outage’s cascading effect points to larger concerns about our society’s reliance on the internet.


The outage, which affected users’ ability to access Microsoft 365 applications, was traced back to a defect found in a software update from cybersecurity company CrowdStrike. CrowdStrike quickly released a statement confirming that the outage was “not a security incident or cyberattack.”

The outage was nonetheless damaging, kicking institutions offline. Issues remained more than a day later.


“Once those services go down, there’s this massive cascading effect,” O’Connor said. “If bank processing doesn’t work, then aviation doesn’t work. If aviation doesn’t work, shipping doesn’t work.”


Ultimately, O’Connor explained, the biggest concern isn’t the glitch in the system; it’s the number of systems that broke because CrowdStrike wasn’t working.


“I think what we’ll see a lot of people learn from this CrowdStrike incident is…that if they want to take the internet down in the future, all they have to do is hit one target,” O’Connor said. “It makes the threat landscape a lot smaller to attack for an adversary.”


Over the course of several hours, a blue Microsoft error screen taunted companies worldwide. Airlines including Delta, American and Frontier grounded all flights. Several television news outlets, including the United Kingdom’s Sky News, were unable to hold live broadcasts.


Some of the biggest concerns lie in the hospital industry, where planning, evaluation and continuous monitoring are essential, O’Connor noted.


“[Hospitals] are constantly processing so much data, and for them to go out for a couple of hours means that decisions aren’t being made on an automated basis,” O’Connor said. “We’ve kicked over so much of our decision making to automated systems that we can’t let those networks fail.”


According to the United Kingdom’s National Health Service (NHS), the outage disrupted its appointment and patient record system. Mass General Brigham in Boston, Massachusetts, was among several U.S. hospitals that cancelled non-urgent surgeries, procedures and medical visits because of the disruption.


911 outages were also reported in several states. In Phoenix, Arizona, the police department posted on social media that its computerized dispatch center was affected. In Portland, Oregon, Mayor Ted Wheeler declared a citywide state of emergency due to the outage’s impact on city servers, computers and emergency communications.


Although CrowdStrike confirmed the incident was not malicious, O’Connor said it raises questions about our overall reliance on the internet to make decisions, and about how ineffectively it is secured.


“We continually have these wake-up moments where something happens, it’s large scale, it’s a news blip, and then we forget about it… but our adversaries don’t,” O’Connor said. “Unfortunately, the attack infrastructure and the ability to attack is getting easier and easier.”


O’Connor also expects future network attacks to get worse, calling the unstable global environment a “national-level issue to address.”


While large-scale attacks and outages are mostly out of individuals’ control, O’Connor said, people can take action to protect their personal accounts by using multi-factor authentication wherever possible.


Looking to know more? Dr. TJ O’Connor’s research is focused on cybersecurity education, wireless protocols, software-defined radio and machine learning.


If you're looking to connect with Dr. O'Connor, simply click on his icon to arrange an interview today.



Connect with:
TJ O’Connor, Ph.D., LTC (Ret.)


Assistant Professor, Cybersecurity Program Chair | Computer Engineering and Sciences

Dr. O’Connor’s research is focused on cybersecurity education, wireless protocols, software-defined radio and machine learning.

Internet of Things (IoT), Information Security Engineering, Cybersecurity Education, Computer Security, Computer Science

You might also like...

Check out some other posts from Florida Tech

4 min

NASA Grant Funds Research Exploring Methods of Training Vision-Based Autonomous Systems

Conducting research at 5:30 a.m. may not be everybody’s first choice. But for Siddhartha Bhattacharyya and Ph.D. students Mohammed Abdul Hafeez Khan and Parth Ganeriwala, it’s an essential part of the process for their latest endeavor. Bhattacharyya and his students are developing a more efficient framework for creating and evaluating image-based machine learning classification models for autonomous systems, such as those guiding cars and aircraft. That process involves creating new datasets with taxiway and runway images for vision-based autonomous aircraft.

Just as humans need textbooks to fuel their learning, some machines are taught using thousands of photographs and images of the environment where their autonomous pupil will eventually operate. To help ensure their trained models can identify the correct course to take in a hyper-specific environment – with indicators such as centerline markings and side stripes on a runway at dawn – Bhattacharyya and his Ph.D. students chose a December morning to rise with the sun, board one of Florida Tech’s Piper Archer aircraft and photograph the views from above.

Bhattacharyya, an associate professor of computer science and software engineering, is exploring the boundaries of operation of efficient and effective machine-learning approaches for vision-based classification in autonomous systems. In this case, these machine learning systems are trained on video or image data collected from environments including runways, taxiways or roadways. With this kind of model, it can take more than 100,000 images to help the algorithm learn and adapt to an environment. Today’s technology demands a pronounced human effort to manually label and classify each image. This can be an overwhelming process.
To combat that, Bhattacharyya was awarded funding from NASA Langley Research Center to advance existing machine learning/computer vision-based systems, such as his lab’s “Advanced Line Identification and Notation Algorithm” (ALINA), by exploring automated labeling that would enable the model to learn and classify data itself – with humans intervening only as necessary. This measure would ease the overwhelming human demand, he said. ALINA is an annotation framework that Hafeez and Parth developed under Bhattacharyya’s guidance to detect and label data for algorithms, such as taxiway line markings for autonomous aircraft.

Bhattacharyya will use NASA’s funding to explore transfer learning-based approaches, led by Parth, and few-shot learning (FSL) approaches, led by Hafeez. The researchers are collecting images via GoPro of runways and taxiways at airports in Melbourne and Grant-Valkaria with help from Florida Tech’s College of Aeronautics. Bhattacharyya’s students will take the data they collect from the airports and train their models to, in theory, drive an aircraft autonomously. They are working to collect diverse images of the runways – those of different angles and weather and lighting conditions – so that the model learns to identify patterns that determine the most accurate course regardless of environment or conditions. That includes the daybreak images captured on that December flight.

“We went at sunrise, where there is glare on the camera. Now we need to see if it’s able to identify the lines at night because that’s when there are lights embedded on the taxiways,” Bhattacharyya said. “We want to collect diverse datasets and see what methods work, what methods fail and what else do we need to do to build that reliable software.”

Transfer learning is a machine learning technique in which a model trained to do one task can generalize information and reuse it to complete another task.
For example, a model trained to drive autonomous cars could transfer its intelligence to drive autonomous aircraft. This transfer helps explore generalization of knowledge. It also improves efficiency by eliminating the need for new models that complete different but related tasks. For example, a car trained to operate autonomously in California could retain generalized knowledge when learning how to drive in Florida, despite different landscapes.

“This model already knows lines and lanes, and we are going to train it on certain other types of lines hoping it generalizes and keeps the previous knowledge,” Bhattacharyya explained. “That model could do both tasks, as humans do.”

FSL is a technique that teaches a model to generalize information with just a few data samples instead of the massive datasets used in transfer learning. With this type of training, a model should be able to identify an environment based on just four or five images. “That would help us reduce the time and cost of data collection as well as time spent labeling the data that we typically go through for several thousands of datasets,” Bhattacharyya said.

Learning when results may or may not be reliable is a key part of this research. Bhattacharyya said identifying degradation in the autonomous system’s performance will help guide the development of online monitors that can catch errors and alert human operators to take corrective action. Ultimately, he hopes that this research can help create a future where we utilize the benefits of machine learning without fear of it failing before notifying the operator, driver or user.

“That’s the end goal,” Bhattacharyya said. “It motivates me to learn how the context relates to assumptions associated with these images, that helps in understanding when the autonomous system is not confident in its decision, thus sending an alert to the user.
This could apply to a future generation of autonomous systems where we don’t need to fear the unknown – when the system could fail.”

Siddhartha (Sid) Bhattacharyya’s primary area of research expertise/interest is in model-based engineering, formal methods, machine learning engineering, and explainable AI applied to intelligent autonomous systems, cyber security, human factors, healthcare, and avionics. His research lab, ASSIST (Assured Safety, Security, and Intent with Systematic Tactics), focuses on designing innovative formal methods to assure the performance of intelligent systems, machine learning engineering to characterize intelligent systems for safety, and model-based engineering to analyze system behavior.

Siddhartha Bhattacharyya is available to speak with media. Contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology at adam@fit.edu to arrange an interview today.
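The transfer-learning and few-shot ideas described in the post can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the researchers' actual method or ALINA's API: a fixed random projection stands in for a frozen, pretrained "backbone" whose weights are reused (the transfer step), and a nearest-class-mean classifier built from only five labeled examples per class stands in for the few-shot head, in the spirit of prototype-based FSL.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained backbone: its weights are fixed and
# reused for the new task, which is the essence of transfer learning.
W_backbone = rng.normal(size=(16, 8))

def extract_features(x):
    # Backbone weights are never updated ("frozen").
    return np.tanh(x @ W_backbone)

def sample_class(center, n):
    # Synthetic stand-in for images of one class (e.g. one line marking).
    return center + 0.1 * rng.normal(size=(n, 16))

# Two synthetic classes; only 5 labeled examples each ("5-shot").
centers = {0: rng.normal(size=16), 1: rng.normal(size=16)}
prototypes = {
    c: extract_features(sample_class(mu, 5)).mean(axis=0)
    for c, mu in centers.items()
}

def classify(x):
    # Few-shot head: assign the class whose prototype is nearest in
    # feature space; no gradient training is needed at all.
    f = extract_features(x)
    return min(prototypes, key=lambda c: np.linalg.norm(f - prototypes[c]))

# Evaluate on 20 fresh queries per class.
correct = sum(
    classify(x) == c
    for c, mu in centers.items()
    for x in sample_class(mu, 20)
)
print(f"{correct}/40 queries classified correctly")
```

Because the backbone is reused rather than retrained, only the class prototypes change when a new environment is introduced, which is why this style of approach can cut both data collection and labeling effort.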

2 min

With aviation in the news, Florida Tech's Shem Malmquist offers insight and clarity

Recent news on the safety of airlines in America has detailed tragic fatalities, airplanes flipping over and some crashing into prominent city streets, shining a less-than-flattering light on what is supposed to be a safe industry. Given recent events, Florida Tech College of Aeronautics visiting assistant professor Shem Malmquist has appeared in high-profile interviews on both current and historic aviation incidents. Recently, he spoke with the Boston Globe, Rolling Stone and the news platform FedScoop to lend his insight and expertise as a pilot.

Officials have repeatedly warned about a shortage of air traffic controllers. Pilots have made up for that gap by accepting visual approaches and separation from other airplanes to relieve some of the workload on controllers, said Shem Malmquist, a pilot and visiting instructor at the Florida Institute of Technology, who teaches courses on aviation safety. He noted that was “part of the problem” with the D.C. collision. Still, flying remains safe because “pilots are overcoming the challenges in the system to prevent accidents,” Malmquist said. “Random distribution can create clusters like this. ... That doesn’t mean there’s more risk.” February 21 - Boston Globe

One former pilot told FedScoop that the system can be overpopulated with notices, only some of which might be important for a pilot to understand before taking off. Still, there’s generally no automated way of sorting through these notices, which means they can be incredibly long and difficult to completely process before flights. The notices themselves are densely written and use terminology that is often not immediately discernible. An example provided by the FAA shows the notices’ unique format. Textual data can also limit the ability to modernize the NOTAM system, an FAA statement of objectives from 2023 noted.
Shem Malmquist, a working pilot who also teaches at Florida Tech’s College of Aeronautics, said the entire NOTAM system “migrated from color pipe machines,” which locked in “certain abbreviations and codes” beyond their point of usefulness. “It’s really great for computers, which is kind of funny because it was created before computers,” Malmquist added. “But it’s … not really very user friendly for the way humans think.” February 21 - FedScoop

Recently, Malmquist was featured on National Geographic's TV series, "Air Crash Investigation." There, he spoke about the China Eastern Airlines Flight 583 crash investigation from 1993.

Looking to connect with Shem Malmquist regarding the airline industry? He's available. Click on his icon to arrange an interview today.

4 min

Expert Opinion: Maneuvering friendships in the age of half-truths can be challenging

I recently shared an op-ed written by my colleague and friend, Ted Petersen, on a few social media sites. His thoughtful piece advocated for media literacy education. Later that day I received an alert that someone had commented on my post. The comment, made by a dear friend, alluded to disinformation about U.S.A.I.D.’s use of funds ― a false assertion that the federal agency supported the news outlet Politico for partisan gain. The comment was a perfect example of why media literacy education is important ― not just for school children. It gives people the tools to navigate a borderless media environment in which news and opinion, verified facts and unsubstantiated statements, and information and entertainment coexist. My dilemma after reading the comment was multi-faceted. What should I do? Do I respond? If so, how do I tell my friend that he is misinformed? If I don’t respond, am I shirking my responsibility as a friend, a citizen, an educator? How do I now live in a world in which my friends and family consume and trust media that actively promote disinformation? And, most importantly, how do I live in a world in which people I love are listening to a barrage of messages telling them that I am evil? That I cannot be trusted? That I should be hated? Because underlying his deceptively simple comment is the possibility that, like many, my friend trusts certain media and messages while castigating all those that don’t always align with their world view. These messages are coming through media channels that give voice to leaders and media personalities who gain traction with their audiences by demonizing those they deem their enemies. They use half-truths and outright lies to gain sway with their followers. Anyone who thinks, looks, believes differently cannot be trusted. As a media scholar I have studied media effects, persuasion, and audiences. I’ve analyzed the meaning audiences give messages and how different approaches affect audience perceptions. 
I’ve written about the importance of narrative and message framing. I have advocated for the ethical use of these powerful tools. As a human being, I’m saddened as I witness blatant disregard for ethical principles in those leaders and media personalities who wield communication like a weapon to undermine trust. The results are impenetrable walls separating us from those who should be our allies. After spending most of my life believing I was part of a community, able to agree or disagree, discuss and argue, to teach and to learn in conversation with others, I find myself the “other.” Dismissed. Demonized. Hated. Not by faceless strangers, but by those dear to me. I suspect I’m not alone in this feeling ― regardless of ideological preferences. Discord is painful. My heart hurts. Yet, I am stubbornly hopeful. When I see my students from different backgrounds, cultures, and generations, discussing ideas for solutions to social issues, I am hopeful. When I hear my pastor fearlessly speaking to the congregation about loving each other even in disagreement, I am hopeful. When I speak to community groups and listen to their concerns and insights, I am hopeful. When I have a long-overdue conversation with my friend instead of relying on mediated social platforms, I am hopeful. I recently spoke to a Rotary Club and borrowed their four-way test to suggest a healthier relationship with media and communication generally. Of the things we produce, consume, or share, we should ask ourselves: Is it the truth? Is it fair to all concerned? Will it build goodwill and better friendships? Will it be beneficial to all concerned? 
If the answer to any of those questions is no, we should change the channel, seek another source for context, delete the post, block the sender, or adjust our message so we can answer yes. And if you are asking yourself why you should be fair, or build goodwill, or benefit anyone from “the other side” ― perhaps scroll through your photos or look at the pictures on your desk or mantel. We are not adversaries. We’re on the same side. It’s time to stop listening to those who tell us otherwise.

Heidi Hatfield Edwards is associate dean in Florida Tech’s College of Psychology and Liberal Arts and head of the School of Arts and Communication, where she is a professor of communication. She began her career as a media professional and worked nearly a decade gaining experience across multiple media platforms and in strategic communication. She teaches courses in mass communication, theory, and science communication.

Heidi is available to speak with media. Contact Adam Lowenstein, Director of Media Communications at Florida Institute of Technology at adam@fit.edu to arrange an interview today.

View all posts