Experts Matter. Find Yours.
Connect for media, speaking, professional opportunities & more.

Recently named a Fellow of SAE International (the Society of Automotive Engineers), Azim Eskandarian, D.Sc., the Alice T. and William H. Goodwin Jr. Dean of the Virginia Commonwealth University (VCU) College of Engineering, received one of the organization’s highest honors. The designation recognizes individuals who have made extraordinary and sustained impacts on the mobility industry through technical excellence, leadership, innovation and dedicated service to the profession and to SAE International.

“SAE Fellows – whose leadership and technical contributions strengthen our organization – embody the highest level of professional achievement,” said Carla Bailo, 2026 SAE International president and chair of the board of directors. “Election to SAE Fellow reflects an individual’s lasting influence on mobility engineering and reinforces the standards of excellence that guide SAE’s strategic direction.”

Selected through a comprehensive review process led by the SAE International Fellows Committee and approved by the SAE International Board of Directors, SAE Fellows exemplify the organization’s mission to advance mobility knowledge and solutions for the benefit of humanity.

“It is a great honor to receive this distinction from an organization that is so essential to the advancement of the automotive industry,” said Eskandarian. “I hope to continue collaborating with engineers, researchers and other professionals who share a vision for the great work we can do to improve the safety and efficiency of transportation.”

Over more than 40 years, Eskandarian has made numerous scientific and technical contributions to automotive safety, academic programs and workforce development in crashworthiness, collision avoidance, advanced driver assistance systems, intelligent vehicles and autonomous driving. His research on intelligent and autonomous vehicles includes the development of novel methods for driver safety systems.
As an academic leader, Eskandarian’s enduring commitment to education, mentorship and service led him to start impactful academic programs at several universities. This includes robotics and autonomous systems programs and new master’s concentrations at the VCU College of Engineering, a graduate academic program in intelligent transportation systems and an undergraduate concentration in transportation engineering at George Washington University, and an automotive engineering concentration at Virginia Tech. Eskandarian is also a Fellow of two other technical societies, the American Society of Mechanical Engineers (ASME) and the Institute of Electrical and Electronics Engineers (IEEE).

Reading for pleasure in free fall: New study finds 40% drop over two decades
A sweeping new study from the University of Florida and University College London has found that daily reading for pleasure in the United States has declined by more than 40% over the last 20 years — raising urgent questions about the cultural, educational and health consequences of a nation reading less. Published today in the journal iScience, the study analyzed data from over 236,000 Americans who participated in the American Time Use Survey between 2003 and 2023. The findings suggest a fundamental cultural shift: fewer people are carving out time in their day to read for enjoyment. “This is not just a small dip — it’s a sustained, steady decline of about 3% per year,” said Jill Sonke, Ph.D., director of research initiatives at the UF Center for Arts in Medicine and co-director of the EpiArts Lab, a National Endowment for the Arts research lab at UF in partnership with University College London. “It’s significant, and it’s deeply concerning.”

Who’s reading and who isn’t

The decline wasn’t evenly spread across the population. Researchers found steeper drops among Black Americans than white Americans, people with lower income or educational attainment, and those in rural (versus metropolitan) areas — highlighting deepening disparities in reading access and habits. “While people with higher education levels and women are still more likely to read, even among these groups, we’re seeing shifts,” said Jessica Bone, Ph.D., senior research fellow in statistics and epidemiology at University College London. “And among those who do read, the time spent reading has increased slightly, which may suggest a polarization, where some people are reading more while many have stopped reading altogether.” The researchers also noted some more promising findings, including that reading with children did not change over the last 20 years.
However, reading with children was far less common than reading for pleasure, which is concerning given that this activity is tied to early literacy development, academic success and family bonding, Bone said.

Why it matters

Reading for pleasure has long been recognized not just as a tool for education, but as a means of supporting mental health, empathy, creativity and lifelong learning. The EpiArts Lab, which uses large data sets to examine links between the arts and health, has previously identified clear associations between creative engagement and well-being. “Reading has historically been a low-barrier, high-impact way to engage creatively and improve quality of life,” Sonke said. “When we lose one of the simplest tools in our public health toolkit, it’s a serious loss.” The American Time Use Survey offers a unique window into these trends. “We’re working with incredibly detailed data about how people spend their days,” Bone said. “And because it’s a representative sample of U.S. residents in private households, we can look not just at the national trend, but at how it plays out across different communities.”

Why are Americans reading less?

While causes were not part of the study, the researchers point to multiple potential factors, including the rise of digital media, growing economic pressures, shrinking leisure time and uneven access to books and libraries. “Our digital culture is certainly part of the story,” Sonke said. “But there are also structural issues — limited access to reading materials, economic insecurity and a national decline in leisure time. If you’re working multiple jobs or dealing with transportation barriers in a rural area, a trip to the library may just not be feasible.”

What can be done?

The study’s authors say that interventions could help slow or reverse the trend, but they need to be strategic.
“Reading with children is one of the most promising avenues,” said Daisy Fancourt, Ph.D., a professor of psychology and epidemiology at University College London and co-director of the EpiArts Lab. “It supports not only language and literacy, but empathy, social bonding, emotional development and school readiness.” Bone added that creating more community-centered reading opportunities could also help: “Ideally, we’d make local libraries more accessible and attractive, encourage book groups, and make reading a more social and supported activity — not just something done in isolation.” The study underscores the importance of valuing and protecting access to the arts — not only as a matter of culture, but as a matter of public health. “Reading has always been one of the more accessible ways to support well-being,” Fancourt said. “To see this kind of decline is concerning because the research is clear: reading is a vital health-enhancing behavior for every group within society, with benefits across the life-course.”
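The two figures quoted in the article fit together: a steady decline of roughly 3% per year, compounded over the 20-year study window, implies a cumulative drop of about 45%, consistent with the "more than 40%" headline. A minimal sketch in Python (the 3% rate is the researchers' approximate figure, not a precise parameter from the paper):

```python
# A ~3% year-over-year decline, compounded over the 2003-2023 study
# window, implies a cumulative drop of roughly 45% -- in line with the
# study's "more than 40%" headline figure.
annual_decline = 0.03
years = 20

remaining = (1 - annual_decline) ** years   # fraction of the 2003 level left
cumulative_drop = 1 - remaining

print(f"remaining: {remaining:.1%}, cumulative drop: {cumulative_drop:.1%}")
# prints: remaining: 54.4%, cumulative drop: 45.6%
```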
Covering the War in Iran? TCU Has Experts Who Are Getting National Coverage
As the war against Iran continues to unfold, global media coverage has intensified, with major news organizations providing near-constant reporting on the conflict and its geopolitical implications. From live battlefield updates to analysis of regional alliances, energy markets, and international diplomacy, the story has become one of the most closely followed developments in international affairs. Networks such as CBS News are dedicating significant airtime to helping audiences understand the rapidly evolving situation and the broader implications for global stability. To provide credible context and insight, these outlets often turn to academic experts who specialize in Middle East politics and international relations. Experts like Ralph Carter from Texas Christian University (TCU) are among those providing research-based analysis that helps explain the historical roots of the conflict, the motivations of the key actors involved, and what developments could come next. Their expertise allows journalists to translate complex geopolitical dynamics into clear, accurate information for audiences trying to make sense of a fast-moving global crisis. Professor Ralph Carter teaches introductory courses in political science and international politics as well as advanced courses in Middle East conflicts, U.S. foreign policy and Russian foreign policy. He is the author or editor of eight books and the author or co-author of over 50 journal articles, book chapters, and other professional publications. His research agenda focuses on the making of U.S. foreign, trade, and defense policy, with a particular emphasis on the roles played by members of Congress. Recently, Professor Carter's expertise was sought out by CBS News in Dallas/Fort Worth as journalists were updating Americans on the current situation in the war in Iran. Ralph Carter is available to speak with the media about the ongoing war in Iran - simply click on his icon now to arrange an interview today.
ESPN Star Suggests Plan to Run for President
Meena Bose, Hofstra University professor of political science, executive dean of the Public Policy and Public Service program, the Kalikow Chair in Presidential Studies and director of the Kalikow Center for the Study of the American Presidency, talked to Newsday about ESPN star Stephen A. Smith expressing interest in running for president of the United States.

AI in the classroom: What parents need to know
As students return to classrooms, Maya Israel, professor of educational technology and computer science education at the University of Florida, shares insights on best practices for AI use for students in K-12. She also serves as the director of the CSEveryone Center for Computer Science Education at UF, a program created to boost teachers’ capabilities around computer science and AI in education. Israel also leads the Florida K-12 Education Task Force, a group committed to empowering educators, students, families and administrators by harnessing the transformative potential of AI in K-12 classrooms, prioritizing safety, privacy, access and fairness.

How are K–12 students using AI in classrooms?

Students are using AI in classrooms in a wide range of ways, depending on several factors including district policies, student age and the teacher’s instructional goals. Some districts restrict AI to teacher use only, such as creating custom reading passages for younger students. Others allow older students to use tools to check grammar, create visuals or run science simulations. Even then, skilled teachers frame AI as one tool, not a replacement for student thinking and effort.

What are examples of age-appropriate tools that enhance learning?

AI tools can be used to either enhance or erode learner agency and critical thinking, and it is up to educators to consider how these tools can be used appropriately. It is critical to use AI tools in a manner that supports learning, creativity and problem solving rather than bypasses critical thinking. For example, Canva lets students create infographics, posters and videos to show understanding. Google’s Teachable Machine helps students learn AI concepts by training their own image-recognition models. These types of AI-augmented tools work best when they are embedded into activities such as project-based learning, where AI supports learning and critical thinking.

How do teachers ensure AI supports core skills?
While AI can be incredibly helpful in supporting learning, it should not be a shortcut that allows students to bypass learning. Teachers should design learning opportunities that integrate AI in a manner that encourages critical thinking. For example, if students are using AI to support their mathematical understanding, teachers should ask them to explain their reasoning, engage in discussions and attempt to solve problems in different ways. Teachers can ask students questions like, “Does that answer make sense based on what you know?” or “Why do you think [said AI tool] made that suggestion?” This type of reflection reinforces the message that learning does not happen through getting fast answers. Learning happens through exploration, productive struggle and collaboration.

Many parents worry that using AI might make students too dependent on technology. How do educators address that concern?

This is a very valid concern. Over-reliance on AI can erode independence and critical thinking, which is why teachers should be intentional in how they use AI for teaching and learning. Educators can address this concern by communicating to parents their policies and approaches to using AI with students. This can include providing clear expectations of when AI is used; designing assignments that require critical thinking, personal reflection and reasoning; and teaching students the metacognitive skills to self-assess how and when to use AI, so that it supports learning rather than serving as a crutch.

How do schools ensure that students still develop original thinking and creativity when using AI for assignments or projects?

In the age of AI, there is a need to be even more intentional in designing learning experiences where students engage in creative and critical thinking.
One of the best practices that has been shown to support this is project-based learning, where students must create, iterate and evaluate ideas based on feedback from their peers and teachers. AI can help students gather ideas or organize research, but the students must ask the questions, synthesize information and produce original ideas. Assessment and rubrics should emphasize skills such as reasoning, process and creativity rather than just focusing on the final product. That way, although AI can play a role in instruction, the goal is to design instructional activities that move beyond what the AI can do.

How do educators help students understand when it’s appropriate to use AI in their schoolwork?

In the age of AI, educators should help students develop the skills to be original thinkers who can use AI thoughtfully and responsibly. Educators can help students understand when to use AI in their schoolwork by directly embedding AI literacy into their instruction. AI literacy includes having discussions about the capabilities and limitations of AI, ethical considerations and the importance of students’ agency and original thoughts. Additionally, clear guidelines and policies help students navigate some of the gray areas of AI usage.

What guidance should parents give at home?

There are several key messages that parents should give their children about the use of AI. The most important is that even though AI is powerful, it does not replace their judgment, creativity or empathy. Even though AI can provide fast answers, it is important for students to learn the skills themselves. Another key message is to know the rules about AI in the classroom. Parents should also speak with their students about the mental health implications of over-reliance on AI. When students turn to AI-augmented tools for every answer or idea, they can gradually lose confidence in their own problem-solving abilities.
Instead, students should learn how to use AI in ways that strengthen their skills and build independence.
AI gives rise to the cut and paste employee
Although AI tools can improve productivity, recent studies show that they too often intensify workloads instead of reducing them, in many cases even leading to cognitive overload and burnout. The University of Delaware's Saleem Mistry says this is creating employees who work harder, not smarter. Mistry, an associate professor of management in UD's Lerner College of Business & Economics, says his research confirms findings reported in a Feb. 9, 2026 article in the Harvard Business Review. Driven by the misconception that AI is an accurate search engine rather than a predictive text tool, these "cut and paste" employees are using the applications to pump out deliverables in seconds just to keep up with increasing workloads. Mistry notes that this prioritization of speed over accuracy is happening at every level of the organization:

• Junior staff: Blast out polished-looking but unverified drafts.
• Managers: Outsource their ability to deeply learn and think critically in order to summarize data, letting their analytical skills atrophy.
• Power users: Build hidden, unapproved systems that bypass company oversight.

A management problem, not a tech problem

"When discussing this issue, I often hear leaders blame the technology. However, I believe that blaming the tech is missing the point; I see it as a failure of leadership," Mistry said. "When already overburdened employees who are constantly having to do more with less are handed vague mandates to just use AI without any training, they use it to look busy and produce volume-based work. Because many companies still reward the volume of work produced rather than the actual impact, employees naturally use these tools to generate slick but empty deliverables."
The real costs to organizations and incoming employees

Mistry outlines three risks organizations face if they don’t intervene:

1. The workslop epidemic. "These programs allow people to generate massive amounts of workslop, which is low-effort fluff that looks good but lacks substance. It takes seconds to create, but hours for someone else to decipher, fact-check, and fix," Mistry notes. "This drains money (up to $9 million annually for large companies) and destroys morale. As an educator, researcher, and a person brought into organizations to help fix problems, I for one do not want to be on the receiving end of a thoughtless, automated data dump, especially on tasks that require real skill and deep thinking."

2. Legal disaster. He also states, "When the cut and paste mentality makes its way into professional submissions, the risks to the organization are real and oftentimes catastrophic. Courts have made it perfectly clear: ignorance is no excuse. If your name is on the document, you own the liability. Recently, attorneys have faced severe sanctions, hefty fines, and case dismissals for blindly submitting fake legal citations made up by computers."

3. A warning for incoming talent. For new graduates entering this environment, Mistry offers a warning: Do not rely on AI to do your deep thinking. "If you simply use AI to blast out polished but unverified drafts, you become a replaceable 'cut and paste' employee," he says. “To truly stand out, new grads must prove they have the discernment to review, tweak, and challenge what the computer writes. The hiring edge is no longer just saying, 'I can do this task,' but 'I know how to leverage and correct AI to help me perform it.'"

Four ideas to fix it

To survive and indeed thrive with these new tools and avoid the unintended consequences of untrained staff, organizations should:

1. Reinforce the importance of fact-checking and editing: Adopt frameworks that teach employees how to show their work and log how they verified computer-generated facts.
2. Change the incentives: Stop rewarding busy work, useless reports, and massive slide decks. Evaluate employees on accuracy and results.
3. Eradicate superficial work: Don’t use automation to speed up ineffective legacy processes. Instead, use it to identify and eliminate them entirely.
4. Make time for editing: Give yourself and your employees the breathing room to actually review, tweak, and challenge what the computer writes instead of accepting the first draft.

Mistry is available to discuss:

• Why AI is causing an epidemic of corporate "workslop" (and how to spot it).
• The leadership failure behind the "cut and paste" employee.
• How to rewrite corporate incentives to measure impact instead of volume in the AI era.
• Strategies for implementing safe, effective AI policies at work.
• How new college graduates can avoid the "workslop" trap in their first jobs.

To reach Mistry directly and arrange an interview, visit his profile and click on the "contact" button. Interested reporters can also send an email to MediaRelations@udel.edu.

VCU College of Engineering receives $600,000 for AI-driven cybersecurity research
To advance AI-enabled cybersecurity research, the National Science Foundation (NSF) has awarded Kemal Akkaya, Ph.D., professor and chair of the Department of Computer Science, a $600,000 grant through the organization’s Cybersecurity Innovation for Cyberinfrastructure program. Akkaya’s three-year project will explore how large language models (LLMs) can automate packet labeling for intrusion detection systems.

“From transportation and healthcare to finance, improving the accuracy of machine learning algorithms used to defend the networks that underpin these sectors’ cyberinfrastructure is critical for protecting them from cyberattacks. Strengthening these defenses helps ensure the reliability and security of the essential services people rely on every day,” said Akkaya.

Intrusion detection systems monitor network traffic to identify suspicious or malicious activity. These systems rely on machine learning models trained on large volumes of accurately labeled data. Producing those datasets, however, is time intensive and often requires expert cybersecurity knowledge. As digital systems increasingly power transportation, health care, finance and communication, the volume and sophistication of cyberattacks continue to grow. At the same time, artificial intelligence is reshaping how both attackers and defenders operate. Improving how quickly and accurately security systems can be trained is critical to protecting the infrastructure that supports daily life.

Akkaya’s project will investigate how generative AI can help address this challenge. The team will fine-tune open-source large language models using network data, threat signatures and expert annotations. Model accuracy will be strengthened through retrieval-augmented refinement, ensemble modeling and human-in-the-loop verification. Labeled datasets will be released in stages to support the development and evaluation of cybersecurity models.
The project will use data from AmLight, an international research and education network operated by Florida International University (FIU), and includes collaboration with FIU researchers. The award strengthens VCU’s growing leadership in AI-enabled cybersecurity research and provides hands-on research training for graduate students. Datasets resulting from this work will support machine learning education for undergraduate students.
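The release describes the pipeline only at a high level, but the combination of ensemble modeling and human-in-the-loop verification it mentions can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the label set, the agreement threshold, and the packet-flow records are all hypothetical, and each "vote" stands in for the output of one fine-tuned LLM labeler.

```python
from collections import Counter

# Hypothetical labels an intrusion-detection labeler might emit.
LABELS = ("benign", "port_scan", "dos", "exfiltration")

def ensemble_label(votes, min_agreement=2/3):
    """Combine labels from several (hypothetical) LLM labelers.

    Returns (label, needs_review): the majority label, plus a flag that
    routes low-agreement records to a human-in-the-loop review queue.
    """
    counts = Counter(votes)
    label, top = counts.most_common(1)[0]
    agreement = top / len(votes)
    return label, agreement < min_agreement

# Simulated votes from three labelers on two packet-flow records.
record_votes = {
    "flow-001": ["port_scan", "port_scan", "port_scan"],  # unanimous
    "flow-002": ["dos", "benign", "exfiltration"],        # no consensus
}

review_queue = []
for flow_id, votes in record_votes.items():
    label, needs_review = ensemble_label(votes)
    if needs_review:
        review_queue.append(flow_id)
    print(flow_id, label, "-> human review" if needs_review else "-> auto-labeled")

# flow-001 is auto-labeled port_scan; flow-002 lands in the review queue.
```

The design point this illustrates is the one the article emphasizes: automation handles the high-confidence bulk of the labeling work, while scarce expert attention is reserved for the records the models disagree on.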

Is writing with AI at work undermining your credibility?
With over 75% of professionals using AI in their daily work, writing and editing messages with tools like ChatGPT, Gemini, Copilot or Claude has become commonplace. While generative AI tools are seen as making writing easier, are they effective for communication between managers and employees? A new study of 1,100 professionals reveals a critical paradox in workplace communications: AI tools can make managers’ emails more professional, but regular use can undermine trust between them and their employees. “We see a tension between perceptions of message quality and perceptions of the sender,” said Anthony Coman, Ph.D., a researcher at the University of Florida's Warrington College of Business and study co-author. “Despite positive impressions of professionalism in AI-assisted writing, managers who use AI for routine communication tasks put their trustworthiness at risk when using medium to high levels of AI assistance." In the study published in the International Journal of Business Communication, Coman and his co-author, Peter Cardon, Ph.D., of the University of Southern California, surveyed professionals about how they viewed emails they were told were written with low, medium and high AI assistance. Survey participants were asked to evaluate different AI-written versions of a congratulatory message on both their perception of the message content and their perception of the sender. While AI-assisted writing was generally seen as efficient, effective and professional, Coman and Cardon found a “perception gap” between messages written by managers and those written by employees. “When people evaluate their own use of AI, they tend to rate their use similarly across low, medium and high levels of assistance,” Coman explained. “However, when rating others’ use, magnitude becomes important.
Overall, professionals view their own AI use leniently, yet they are more skeptical of the same levels of assistance when used by supervisors.” While low levels of AI help, like grammar or editing, were generally acceptable, higher levels of assistance triggered negative perceptions. The perception gap is especially significant when employees perceive higher levels of AI writing, calling into question the authorship, integrity, caring and competency of their manager. The impact on trust was substantial: only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI, compared to 83% for low-assistance messages. Similarly, while 95% found low-AI supervisor messages professional, this dropped to 69% to 73% when supervisors relied heavily on AI tools. The findings reveal that employees can often detect AI-generated content and interpret its use as laziness or a lack of caring. When supervisors rely heavily on AI for messages like team congratulations or motivational communications, employees perceive them as less sincere and question their leadership abilities. “In some cases, AI-assisted writing can undermine perceptions of traits linked to a supervisor’s trustworthiness,” Coman noted, specifically citing impacts on perceived ability and integrity, both key components of cognitive-based trust. The study suggests managers should carefully consider message type, level of AI assistance and relational context before using AI in their writing. While AI may be appropriate and professionally received for informational or routine communications, like meeting reminders or factual announcements, relationship-oriented messages requiring empathy, praise, congratulations, motivation or personal feedback are better handled with minimal technological intervention.

On March 10, 1876, Alexander Graham Bell spoke the first words ever transmitted over telephone: “Mr. Watson, come here; I want you.” This simple request to Bell’s assistant, Thomas Watson, marked a significant milestone in direct person-to-person communication. Now, 150 years later, this message has paved the way for advanced cellular technology in the form of satellites, wireless networks and the personal devices we carry everywhere. For Mojtaba Vaezi, PhD, associate professor of electrical and computer engineering at Villanova University and director of the Wireless Networking Laboratory, Bell’s few words spoken over telephone marked the beginning of an ongoing technological revolution. “One hundred fifty years ago when telephone communication first started, there was essentially a wired line and a transmitting voice,” said Dr. Vaezi. “That simple, basic transmission has transformed the field of communication technology in unimaginable ways.” According to Dr. Vaezi, five shifts have defined the past century and a half of communication technology: wired devices to wireless, analog to digital, voice to data, fixed landlines to mobile phones and human-to-human communication giving way to an increasing focus on machines and artificial intelligence. Early wireless networks were built around one device per person. Today's networks must support multiple devices per person, plus the technology behind innovations such as smart homes, driverless cars and even remote surgery. “Applications are much more diverse now, so communication has to follow,” said Dr. Vaezi. “A big portion of communication now, in terms of number of connections to the network, is from machine to machine—not human to human or even human to machine." The growing number of connections can cause a host of issues for users. 
When multiple users share the same wireless spectrum simultaneously, their signals interfere with one another—a problem that is becoming more acute as the number of connected devices increases exponentially. Dr. Vaezi’s research at Villanova focuses on developing techniques that allow multiple users to transmit messages on the same frequency at the same time and still be understood. Another vibrant area of Dr. Vaezi’s research involves Integrated Sensing and Communication (ISAC), which focuses on integrating wireless communications and radar so they can function within the same spectrum. “Historically, radar and wireless communication work in different bandwidths or spectrums and use separate devices. Although they are related, they happen in different fields,” said Dr. Vaezi. “Almost every communication scheme that has been developed has focused on this: How can we better utilize the spectrum?” ISAC is increasingly important as new innovations like driverless cars become fixtures in everyday life. These vehicles rely on radar to continuously scan for hazards, and when a hazard is detected, a signal must be sent to trigger safety mechanisms. Currently, the radar and communications systems operate on separate bandwidths using separate hardware. Dr. Vaezi's research explores how both functions could be housed in a single device running on one shared spectrum. Areas of study like Dr. Vaezi’s that focus on machine-to-machine communication are becoming increasingly relevant as communication technology evolves and moves away from simple person-to-person messaging. As for the next big milestone in communications, Dr. Vaezi is looking ahead to the implementation of 6G by 2030, though he tempers expectations. For most users, the change will feel modest, amounting to slightly faster device speeds. The biggest shift with 6G will be the added coverage in areas that previously did not have network access.
“Say you order a package and it’s coming from somewhere abroad,” explained Dr. Vaezi. “6G will add network coverage over oceans, so you’ll be able to track your package in real time using that satellite technology.” The sixth generation of cellular technology will continue to connect our world and optimize current communications to accommodate more users and devices that need network access each day. It is far different from Alexander Graham Bell’s historic phone call 150 years ago. That brief exchange over a single wired line laid the groundwork for a communications ecosystem that now supports billions of devices, complex data networks and emerging technologies yet to be seen. It also serves as a reminder that despite how far communication technology has come, and how complex it has gotten, it all shares a common, simple goal: to transmit information from one point to another.

Strategic Closure of Strait of Hormuz Puts Pressure on U.S., Threatens Global Oil Trade Stability
Less than a week after the onset of the war in Iran, and amid escalating conflict in the region, Iran effectively closed the Strait of Hormuz to tankers moving oil from the Middle East by threatening attacks against any vessel that entered the waterway. Thus, the small body of water, which carries a large percentage of the world’s crude oil, has become one of the most discussed places in the world in recent days. Frank Galgano, PhD, is a professor of Geography and the Environment at Villanova University. He is an expert in military and Middle East geography and has also studied global maritime shipping and access to natural resources. Dr. Galgano says there are geographic, geopolitical, military and economic factors at play, along with widespread potential consequences, as Iran holds steady on its closure of the strait and the U.S. considers how, or if, it will attempt to help escort oil ships through.

Geography and Significance of the Strait of Hormuz

Situated between Iran to the north and Oman and the United Arab Emirates to the south, the Strait of Hormuz is a narrow shipping lane that connects the Persian Gulf to the Gulf of Oman and, farther out, the Arabian Sea. It is one of the most vital chokepoints in the Middle East, along with the Suez Canal, Straits of Tiran, Bab al-Mandab and the Turkish Straits. “Right now, because of oil, it is the most important,” Dr. Galgano said. “Every day, roughly 20 percent of global petrochemical use goes through Hormuz.” The strait itself is barely over 20 nautical miles across at its narrowest point, but only a small portion of that is shipping lanes. Depth constraints limit shipping to two lanes, each two miles wide, with a two-mile buffer between them. “You’re essentially looking at all of that shipping constrained to six nautical miles, and the ships are relatively slow,” Dr. Galgano said. “There are usually about 14-25 tankers every 24 hours transiting the Gulf, so there is always a ship in line."
With Iran threatening military action against any oil-carrying ships in Hormuz—and with shipping companies refusing to attempt to traverse it—one-fifth of the global oil trade is essentially cut off indefinitely. That is concerning, given that it takes very little to send global oil prices skyrocketing. Dr. Galgano referenced the 2010-11 Somali piracy crisis, which forced supertankers—which cost upward of $50,000 a day to operate—to be rerouted. “That alone caused gas prices to rise 10 cents per gallon,” he said. In this case, the biggest impact will be felt throughout Asia, which relies more heavily on oil imports. But the U.S., despite being the second-biggest producer of crude oil last year, will still feel significant effects, since oil is traded globally. “It takes these supertankers eight or 12 days to reach the East Coast from Hormuz,” Dr. Galgano said. “So, a few days later you might see diminished supplies, but there is a critical point where we would face a real shortage.”

Attempting to Move Ships Through Hormuz Poses Huge Danger

Unlike the Iranian-backed Houthi rebels’ attacks on Israeli ships and those of its allies in the Red Sea last year, Iran itself has far more sophisticated weapons, along with a strong motive to do whatever it can to put pressure on the U.S. and its involved allies. In addition to drones designed for attacking ships—like the ones used by the Houthis—Iran also possesses Chinese and Russian anti-ship missiles, according to the professor. “Ships are very vulnerable,” he said, referencing the 2000 bombing of the USS Cole by Al Qaeda operatives. “That was just two guys in a rubber boat with an explosive device, and it almost sank the whole ship. If one is carrying oil, it becomes almost like a large fuel bomb.” The United States has weighed the idea of sending a convoy to escort and protect these ships.
It did as much in the late 1980s in Operation Earnest Will, when President Reagan ordered Kuwaiti supertankers—which were being fired upon—to reflag under the U.S. flag so the Navy could legally escort them. But weapons technology has changed, and while U.S. naval ships could certainly defend themselves, “supertankers are slow and it is still an incredibly dangerous operation,” Dr. Galgano said. “The convoy would have to be lucky 100 percent of the time. Iran would only have to be lucky once to hit a ship and cause an immediate fiasco, both physically and in the media.”

Global Dependence on Shipped Goods

According to Dr. Galgano, between 75 and 90 percent of all items you handle on a day-to-day basis come from inside the hull of a ship: the shocks on your car, the clothes on your back, the components of your computer. When shipping is disrupted, it can cause supply chain and cost issues. “During the pandemic, Ford was waiting on chips for F-150s, and HP was waiting on chemicals to make ink,” Dr. Galgano said. “Even the ship that got stuck in the Suez Canal a few years ago caused $10 billion in losses per day due to the backup.” For commodities like oil, the indefinite inability to use perhaps the most important shipping lanes in the world due to large-scale conflict quickly raises the economic stakes to even greater levels. “Iran absolutely knows that, and they see this as a bargaining chip,” Dr. Galgano said. “Cause economic pain to force cessation of the attacks.”






