Experts Matter. Find Yours.
Connect for media, speaking, professional opportunities & more.

AI in the classroom: What parents need to know
As students return to classrooms, Maya Israel, professor of educational technology and computer science education at the University of Florida, shares insights on best practices for AI use for students in K-12. She also serves as the director of the CSEveryone Center for Computer Science Education at UF, a program created to boost teachers’ capabilities around computer science and AI in education. Israel also leads the Florida K-12 Education Task Force, a group committed to empowering educators, students, families and administrators by harnessing the transformative potential of AI in K-12 classrooms while prioritizing safety, privacy, access and fairness.

How are K-12 students using AI in classrooms?

Students are using AI in classrooms in a wide range of ways, depending on several factors including district policies, student age and the teacher’s instructional goals. Some districts restrict AI to teacher use only, such as creating custom reading passages for younger students. Others allow older students to use tools to check grammar, create visuals or run science simulations. Even then, skilled teachers frame AI as one tool, not a replacement for student thinking and effort.

What are examples of age-appropriate tools that enhance learning?

AI tools can either enhance or erode learner agency and critical thinking, so it is up to educators to consider how these tools can be used appropriately. It is critical to use AI tools in a manner that supports learning, creativity and problem solving rather than bypassing critical thinking. For example, Canva lets students create infographics, posters and videos to show understanding. Google’s Teachable Machine helps students learn AI concepts by training their own image-recognition models. These types of AI-augmented tools work best when they are embedded into activities such as project-based learning, where AI supports learning and critical thinking.

How do teachers ensure AI supports core skills?
While AI can be incredibly helpful in supporting learning, it should not be a shortcut that allows students to bypass learning. Teachers should design learning opportunities that integrate AI in a manner that encourages critical thinking. For example, if students are using AI to support their mathematical understanding, teachers should ask them to explain their reasoning, engage in discussions and attempt to solve problems in different ways. Teachers can ask students questions like, “Does that answer make sense based on what you know?” or “Why do you think [said AI tool] made that suggestion?” This type of reflection reinforces the message that learning does not happen through getting fast answers. Learning happens through exploration, productive struggle and collaboration.

Many parents worry that using AI might make students too dependent on technology. How do educators address that concern?

This is a very valid concern. Over-reliance on AI can erode independence and critical thinking, which is why teachers should be intentional in how they use AI for teaching and learning. Educators can address this concern by communicating to parents their policies and approaches to using AI with students. This can include providing clear expectations about when AI is used; designing assignments that require critical thinking, personal reflection and reasoning; and teaching students the metacognitive skills to self-assess how and when to use AI so that it supports learning rather than serving as a crutch.

How do schools ensure that students still develop original thinking and creativity when using AI for assignments or projects?

In the age of AI, there is a need to be even more intentional in designing learning experiences where students engage in creative and critical thinking.
One of the best practices shown to support this is project-based learning, where students must create, iterate and evaluate ideas based on feedback from their peers and teachers. AI can help students gather ideas or organize research, but the students must ask the questions, synthesize information and produce original ideas. Assessments and rubrics should emphasize skills such as reasoning, process and creativity rather than focusing only on the final product. That way, although AI can play a role in instruction, the goal is to design instructional activities that move beyond what the AI can do.

How do educators help students understand when it’s appropriate to use AI in their schoolwork?

In the age of AI, educators should help students develop the skills to be original thinkers who can use AI thoughtfully and responsibly. Educators can help students understand when to use AI in their schoolwork by directly embedding AI literacy into their instruction. AI literacy includes having discussions about the capabilities and limitations of AI, ethical considerations and the importance of students’ agency and original thoughts. Additionally, clear guidelines and policies help students navigate some of the gray areas of AI usage.

What guidance should parents give at home?

There are several key messages that parents should give their children about the use of AI. The most important is that even though AI is powerful, it does not replace their judgment, creativity or empathy. Even though AI can provide fast answers, it is important for students to learn the skills themselves. Another key message is to know the rules about AI in the classroom. Parents should also speak with their students about the mental health implications of over-reliance on AI. When students turn to AI-augmented tools for every answer or idea, they can gradually lose confidence in their own problem-solving abilities.
Instead, students should learn how to use AI in ways that strengthen their skills and build independence.
AI gives rise to the cut and paste employee
Although AI tools can improve productivity, recent studies show that they too often intensify workloads instead of reducing them, in many cases even leading to cognitive overload and burnout. The University of Delaware's Saleem Mistry says this is creating employees who work harder, not smarter. Mistry, an associate professor of management in UD's Lerner College of Business & Economics, says his research confirms the findings of this Feb. 9, 2026 article in the Harvard Business Review. Driven by the misconception that AI is an accurate search engine rather than a predictive text tool, these "cut and paste" employees are using the applications to pump out deliverables in seconds just to keep up with increasing workloads. Mistry notes that this prioritization of speed over accuracy is happening at every level of the organization:

• Junior staff: Blast out polished-looking but unverified drafts.
• Managers: Outsource their ability to deeply learn and critically think in order to summarize data, letting their analytical skills atrophy.
• Power users: Build hidden, unapproved systems that bypass company oversight.

A management problem, not a tech problem

"When discussing this issue, I often hear leaders blame the technology. However, I believe that blaming the tech is missing the point; I see it as a failure of leadership," Mistry said. "When already overburdened employees who are constantly having to do more with less are handed vague mandates to just use AI without any training, they use it to look busy and produce volume-based work. Because many companies still reward the volume of work produced rather than the actual impact, employees naturally use these tools to generate slick but empty deliverables."
The real costs to organizations and incoming employees

Mistry outlines three risks organizations face if they don’t intervene:

1. The workslop epidemic

"These programs allow people to generate massive amounts of workslop, which is low-effort fluff that looks good but lacks substance. It takes seconds to create, but hours for someone else to decipher, fact-check, and fix," Mistry notes. "This drains money (up to $9 million annually for large companies) and destroys morale. As an educator, researcher, and a person brought into organizations to help fix problems, I for one do not want to be on the receiving end of a thoughtless, automated data dump, especially on tasks that require real skill and deep thinking."

2. Legal disaster

He also states, "When the cut and paste mentality makes its way into professional submissions, the risks to the organization are real and oftentimes catastrophic. Courts have made it perfectly clear: ignorance is no excuse. If your name is on the document, you own the liability. Recently, attorneys have faced severe sanctions, hefty fines, and case dismissals for blindly submitting fake legal citations made up by computers."

3. A warning for incoming talent

For new graduates entering this environment, Mistry offers a warning: Do not rely on AI to do your deep thinking. "If you simply use AI to blast out polished but unverified drafts, you become a replaceable 'cut and paste' employee," he says. "To truly stand out, new grads must prove they have the discernment to review, tweak, and challenge what the computer writes. The hiring edge is no longer just saying, 'I can do this task,' but 'I know how to leverage and correct AI to help me perform it.'"

Four ideas to fix it

To survive and indeed thrive with these new tools and avoid the unintended consequences of untrained staff, organizations should:

1.
Reinforce the importance of fact-checking and editing: Adopt frameworks that teach employees how to show their work and log how they verified computer-generated facts.

2. Change the incentives: Stop rewarding busy work, useless reports, and massive slide decks. Evaluate employees on accuracy and results.

3. Eradicate superficial work: Don’t use automation to speed up ineffective legacy processes. Instead, use it to identify and eliminate them entirely.

4. Make time for editing: Give yourself and your employees the breathing room to actually review, tweak, and challenge what the computer writes instead of accepting the first draft.

Mistry is available to discuss:

• Why AI is causing an epidemic of corporate "workslop" (and how to spot it).
• The leadership failure behind the "cut and paste" employee.
• How to rewrite corporate incentives to measure impact instead of volume in the AI era.
• Strategies for implementing safe, effective AI policies at work.
• How new college graduates can avoid the "workslop" trap in their first jobs.

To reach Mistry directly and arrange an interview, visit his profile and click on the "contact" button. Interested reporters can also send an email to MediaRelations@udel.edu.

Manitobans are still eager to travel, but how and where they’re going is changing, and so are the risks they may not see coming. New survey findings released as part of CAA Manitoba’s Travel Wise Week show a clear shift toward staying closer to home. Sixty per cent of Manitobans prefer travelling within Canada, while just 20 per cent are planning a trip to the United States. Global uncertainty, rising costs, and changing perceptions about international destinations are influencing those decisions.

“We’re seeing more Manitobans choosing Canada because it feels familiar and safe,” said Susan Postma, Regional Manager, CAA Manitoba. “But that sense of comfort can lead people to underestimate the financial risks that can still come with travelling, even within our own borders.”

Staying in Canada and Leaving Coverage Behind

While Canadians feel confident travelling within their own country, many assume “home turf” means low risk. This misconception leaves millions exposed to unexpected costs when trips don’t go as planned. The survey found that 64 per cent of Canadians did not have travel insurance for their most recent trip within Canada. Provincial health coverage often provides only limited protection when travelling outside your home province, and in some cases does not cover services such as air ambulances, extended hospital stays, or trip interruption costs. Recent media stories have highlighted Canadians facing unexpected medical bills, emergency transportation costs, or sudden trip changes, all during trips that never left the country.

“People are often surprised to learn how quickly expenses can add up if something goes wrong,” says Postma.
“A simple injury on a hiking trail or a family emergency back home can turn a short trip into a major financial stress.”

With recent geopolitical incidents in Cuba, Mexico and the Middle East, CAA’s Travel Wise Campaign is focused on helping Canadians understand risk, avoid misinformation, and make decisions grounded in facts rather than fear or speculation. Here are some tips:

• Understand what an “avoid non-essential travel” advisory really means: Travel advisories reflect real-time safety risks. An “avoid non-essential travel” advisory signals serious, rapidly changing conditions, and support may be limited.
• Know that advisories can affect your insurance and your exit options: Travelling against government advice can limit your travel insurance, including medical care or emergency evacuation. Coverage must be in place before conditions deteriorate.
• Flexibility is essential; review cancellation and change policies now: Travellers should proactively confirm cancellation deadlines, refund eligibility and rebooking options for all reservations, and understand the limits of credit card protections, employee benefits, and pension coverage benefits.
• Stay connected to Canada while abroad: Canadians should monitor official updates from Global Affairs Canada and register with the Registration of Canadians Abroad service before departure, or while on location if something arises.
• Rely on reputable sources and be cautious of misinformation online: Canadians should rely on official government sources, established travel organizations, and verified news outlets for travel guidance.

Additionally, the CAA Air Passenger Help Guide helps you understand your rights when faced with common flight disruptions, such as delayed or cancelled flights or lost bags. The guide can be found at CAA.ca/AirPassengerHelpGuide.
For more information on travel insurance and how to stay protected, visit www.caamanitoba.com/travelwise

The online survey was conducted by DIG Insights from September 29 – October 8, 2025, with 2,021 Canadian travellers aged 25 to 64 who have travelled outside their province of residence in the past three years and plan to travel again in the next five years, of which 137 travellers were from Manitoba or Saskatchewan. Based on the sample size of n=2,021 and with a confidence level of 95%, the margin of error for this research is +/- 2%.
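As a sanity check on the methodology above, the quoted margin of error follows from the standard formula for a simple random sample at a conservative 50/50 proportion. This is only a sketch; the survey's actual weighting scheme may differ:

```python
import math

n = 2021   # completed interviews
p = 0.5    # most conservative assumed proportion
z = 1.96   # z-score for a 95% confidence level

# Margin of error for a simple random sample: z * sqrt(p(1-p)/n)
moe = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {moe:.1%}")  # about +/- 2.2%, close to the +/- 2% quoted
```

Note that the Manitoba/Saskatchewan subsample of 137 respondents carries a much wider margin of error than the national figure.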

Canadians remain passionate about exploring new destinations, but changing global dynamics are reshaping how and where they travel. According to CAA’s Travel Wise survey, more than half of Canadians (51 per cent) now say geopolitical and economic factors, such as instability abroad, a perception of the U.S. as being less welcoming, and rising travel costs, are influencing where they choose to travel.

Shifting Destinations and Attitudes

Travel patterns are evolving. The survey conducted in 2025 shows that only 22 per cent of Canadians planned to visit the U.S., an 11-percentage-point drop from 2024. Instead, many are opting to stay within Canada (40 per cent) or explore international destinations. The perception of the U.S. as less welcoming, coupled with rising travel costs and global instability, is prompting Canadians to reconsider their travel plans.

"Canadians are adventurous by nature, but today’s travellers are having to make thoughtful decisions," says Kaitlynn Furse, Director of Corporate Communications. "We’re seeing a clear trend toward exploring closer to home and seeking out new international experiences, all while keeping an eye on safety and value."

Travel Insurance: A Critical, Yet Overlooked, Safeguard

While Canadians feel confident travelling within their own country, many assume “home turf” means low risk. This misconception leaves millions exposed to unexpected costs when trips don’t go as planned. The survey found that 64 per cent did not have travel insurance on their most recent trip when travelling within Canada.

“Recent stories have highlighted Canadians facing unexpected medical bills, trip interruptions, and emergency expenses while travelling within Canada, often because they didn’t realize their regular provincial health coverage or credit card benefits had limits,” says Furse.
“If something were to happen, provincial healthcare only partially covers you outside of your home province, and sometimes not at all, covering only basic emergency medical services when travelling in another province.”

Among those who travelled uninsured, 44 per cent believed coverage wasn’t needed, and 29 per cent thought their provincial government’s health plan would suffice. However, provincial healthcare only partially covers emergency medical services in other provinces, and sometimes not at all.

“One of the biggest misconceptions we see is the idea that travelling within Canada comes with less risk,” says Furse. “Unexpected medical costs, trip interruptions and emergencies can happen anywhere, and many travellers are surprised to learn they’re not fully covered.”

With recent geopolitical incidents in Cuba, Mexico and the Middle East, Travel Wise is focused on helping Canadians understand risk, avoid misinformation, and make decisions grounded in facts rather than fear or speculation. Here are some tips:

• Understand what an “avoid non-essential travel” advisory really means: Travel advisories reflect real-time safety risks. An “avoid non-essential travel” advisory signals serious, rapidly changing conditions, and support may be limited.
• Know that advisories can affect your insurance and your exit options: Travelling against government advice can limit your travel insurance, including medical care or emergency evacuation. Coverage must be in place before conditions deteriorate.
• Flexibility is essential; review cancellation and change policies now: Travellers should proactively confirm cancellation deadlines, refund eligibility and rebooking options for all reservations, and understand the limits of credit card protections, employee benefits, and pension coverage benefits.
• Stay connected to Canada while abroad: Canadians should monitor official updates from Global Affairs Canada and register with the Registration of Canadians Abroad service before departure, or while on location if something arises.
• Rely on reputable sources and be cautious of misinformation online: Canadians should rely on official government sources, established travel organizations, and verified news outlets for travel guidance.

For many travellers, cancelled or delayed flights remain a top concern. CAA’s Air Passenger Help Guide offers a straightforward resource for travellers facing disruptions.

The online survey was conducted by DIG Insights from September 29 – October 8, 2025, with 2,021 Canadian travellers aged 25 to 64 who have travelled outside their province of residence in the past three years and plan to travel again in the next five years. Based on the sample size of n=2,021 and with a confidence level of 95%, the margin of error for this research is +/- 2%.

VCU College of Engineering receives $600,000 for AI-driven cybersecurity research
To advance AI-enabled cybersecurity research, the National Science Foundation (NSF) presented Kemal Akkaya, Ph.D., professor and chair of the Department of Computer Science, with a $600,000 grant through the organization’s Cybersecurity Innovation for Cyberinfrastructure program. Akkaya’s three-year project will explore how large language models (LLMs) can automate packet labeling for intrusion detection systems.

“From transportation and healthcare to finance, improving the accuracy of the machine learning algorithms used to defend the networks that underpin these sectors’ cyberinfrastructure is critical for protecting them from cyberattacks. Strengthening these defenses helps ensure the reliability and security of the essential services people rely on every day,” said Akkaya.

Intrusion detection systems monitor network traffic to identify suspicious or malicious activity. These systems rely on machine learning models trained on large volumes of accurately labeled data. Producing those datasets, however, is time intensive and often requires expert cybersecurity knowledge. As digital systems increasingly power transportation, health care, finance and communication, the volume and sophistication of cyberattacks continue to grow. At the same time, artificial intelligence is reshaping how both attackers and defenders operate. Improving how quickly and accurately security systems can be trained is critical to protecting the infrastructure that supports daily life.

Akkaya’s project will investigate how generative AI can help address this challenge. The team will fine-tune open-source large language models using network data, threat signatures and expert annotations. Model accuracy will be strengthened through retrieval-augmented refinement, ensemble modeling and human-in-the-loop verification. Labeled datasets will be released in stages to support the development and evaluation of cybersecurity models.
Using data from AmLight, an international research and education network operated by Florida International University (FIU), the project includes collaboration with researchers from FIU. The award strengthens VCU’s growing leadership in AI-enabled cybersecurity research and provides hands-on research training for graduate students. Resulting datasets from this work will support machine learning education for undergraduate students.
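To make the ensemble-modeling and human-in-the-loop ideas above concrete, here is a hypothetical sketch of how such a labeling pipeline can work. The simple rule functions stand in for the fine-tuned LLM labelers the project will actually build; every name and threshold here is illustrative, not taken from the project:

```python
# Hypothetical sketch: ensemble labeling of network packets with
# human-in-the-loop review when the labelers disagree.
from collections import Counter

def rule_port(pkt):       # flags traffic to an unusual destination port
    return "malicious" if pkt["dst_port"] not in (80, 443, 53) else "benign"

def rule_size(pkt):       # flags oversized payloads
    return "malicious" if pkt["size"] > 1400 else "benign"

def rule_signature(pkt):  # flags packets matching a known threat signature
    return "malicious" if pkt.get("signature_hit") else "benign"

# Stand-ins for an ensemble of fine-tuned LLM labelers.
LABELERS = [rule_port, rule_size, rule_signature]

def label_packet(pkt, quorum=3):
    """Label a packet by majority vote; escalate low-agreement cases."""
    votes = Counter(fn(pkt) for fn in LABELERS)
    label, count = votes.most_common(1)[0]
    if count < quorum:
        # The ensemble is not unanimous: route to a human expert.
        return ("needs_human_review", dict(votes))
    return (label, dict(votes))
```

Packets where the labelers disagree are routed to an expert, mirroring the human-in-the-loop verification step; a real system would likely use model confidence scores rather than a simple unanimity rule.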

On March 10, 1876, Alexander Graham Bell spoke the first words ever transmitted over a telephone: “Mr. Watson, come here; I want you.” This simple request to Bell’s assistant, Thomas Watson, marked a significant milestone in direct person-to-person communication. Now, 150 years later, this message has paved the way for advanced cellular technology in the form of satellites, wireless networks and the personal devices we carry everywhere.

For Mojtaba Vaezi, PhD, associate professor of electrical and computer engineering at Villanova University and director of the Wireless Networking Laboratory, Bell’s few words spoken over the telephone marked the beginning of an ongoing technological revolution.

“One hundred fifty years ago when telephone communication first started, there was essentially a wired line and a transmitting voice,” said Dr. Vaezi. “That simple, basic transmission has transformed the field of communication technology in unimaginable ways.”

According to Dr. Vaezi, five shifts have defined the past century and a half of communication technology: wired devices to wireless, analog to digital, voice to data, fixed landlines to mobile phones, and human-to-human communication giving way to an increasing focus on machines and artificial intelligence.

Early wireless networks were built around one device per person. Today's networks must support multiple devices per person, plus the technology behind innovations such as smart homes, driverless cars and even remote surgery.

“Applications are much more diverse now, so communication has to follow,” said Dr. Vaezi. “A big portion of communication now, in terms of number of connections to the network, is from machine to machine—not human to human or even human to machine."

The growing number of connections can cause a host of issues for users.
When multiple users share the same wireless spectrum simultaneously, their signals interfere with one another—a problem that is becoming more acute as the number of connected devices increases exponentially. Dr. Vaezi’s research at Villanova focuses on developing techniques that allow multiple users to transmit messages on the same frequency at the same time and still be understood.

Another vibrant research area of Dr. Vaezi’s involves Integrated Sensing and Communication (ISAC). This field of study focuses on integrating wireless communications and radar so they can function within the same spectrum.

“Historically, radar and wireless communication work in different bandwidths or spectrums and use separate devices. Although they are related, they happen in different fields,” said Dr. Vaezi. “Almost every communication scheme that has been developed has focused on this: How can we better utilize the spectrum?”

ISAC is increasingly important as new innovations like driverless cars become fixtures in everyday life. These vehicles rely on radar to continuously scan for hazards, and when a hazard is detected, a signal must be sent to trigger safety mechanisms. Currently, the radar and communications systems operate on separate bandwidths using separate hardware. Dr. Vaezi's research explores how both functions could be housed in a single device running on one shared spectrum.

Areas of study like Dr. Vaezi’s that focus on machine-to-machine communication are becoming increasingly relevant as communication technology evolves and moves away from simple person-to-person messaging.

As for the next big milestone in communications, Dr. Vaezi is looking ahead to the implementation of 6G by 2030, though he tempers expectations. For most users, the change will feel modest, amounting to slightly faster device speeds. The biggest shift with 6G will be the added coverage in areas that previously did not have network accessibility.
“Say you order a package and it’s coming from somewhere abroad,” explained Dr. Vaezi. “6G will add network coverage over oceans, so you’ll be able to track your package in real time using that satellite technology.”

The sixth generation of cellular technology will continue to connect our world and optimize current communications to accommodate the growing number of users and devices that need network access each day. It is a far cry from Alexander Graham Bell’s historic phone call 150 years ago. That brief exchange over a single wired line laid the groundwork for a communications ecosystem that now supports billions of devices, complex data networks and emerging technologies yet to be seen. It also serves as a reminder that despite how far communication technology has come, and how complex it has gotten, it all shares a common, simple goal: to transmit information from one point to another.
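One well-known way to let two users transmit on the same frequency at the same time, as described in the article, is power-domain superposition with successive interference cancellation (SIC). The toy sketch below is purely illustrative (noiseless channel, one bit per user) and is not code from Dr. Vaezi's lab:

```python
# Toy power-domain superposition with successive interference cancellation.
# Two users share one channel; the far user gets more transmit power, so the
# receiver decodes it first, subtracts it, then decodes the near user.

P_FAR, P_NEAR = 0.8, 0.2   # power split (sums to the unit power budget)

def modulate(bit):
    """Map a bit to a BPSK symbol (+1 / -1)."""
    return 1.0 if bit else -1.0

def superpose(bit_far, bit_near):
    """Transmit both users' symbols at once on the same frequency."""
    return (P_FAR ** 0.5) * modulate(bit_far) + (P_NEAR ** 0.5) * modulate(bit_near)

def sic_decode(y):
    """Recover both bits from the combined (noiseless) signal."""
    bit_far = y > 0                                     # strong signal first
    residual = y - (P_FAR ** 0.5) * modulate(bit_far)   # cancel it
    bit_near = residual > 0                             # then the weak signal
    return bit_far, bit_near

# Round-trip check over all four bit combinations:
for bf in (False, True):
    for bn in (False, True):
        assert sic_decode(superpose(bf, bn)) == (bf, bn)
```

In a real channel, noise and imperfect cancellation make this much harder, which is exactly the kind of problem interference-management research addresses.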

Recently named the nuclear program director at the Virginia Commonwealth University (VCU) College of Engineering, Gennady Miloshevsky, Ph.D., associate professor in the Department of Mechanical & Nuclear Engineering, answers some questions about the direction of VCU Engineering’s nuclear program and what he hopes it can accomplish.

What are your top priorities for the nuclear program at the VCU College of Engineering?

I want to focus on student development, innovative research and our rankings in best program lists, but that is not everything. Strategy is important. We need to align ourselves with the country’s national energy needs. There are many new developments in the energy sector, like small modular reactors or fusion energy systems, and having the right faculty to engage with these advancements is important. Providing students with a well-rounded education and good opportunities for gaining experience benefits the College of Engineering’s public and private sector partners. Nuclear subject matter is complex, so higher education is very important for workforce development. We want to build partnerships, like the one we have with Dominion Energy, that support this goal. A priority for me is continuing to establish relationships with Commonwealth Fusion Systems, which seeks to build and operate the first commercial grid-scale fusion plant in Chesterfield County, Virginia. Our workforce partners will benefit from VCU’s well-trained nuclear engineering graduates joining the workforce. So, aligning our strategy with national energy needs, hiring the right faculty to support our programs and building industry partnerships that benefit our students’ education and career opportunities are the important things for VCU Engineering’s nuclear program.

Where would you like to see the College of Engineering’s nuclear program 10 years from now?

I would like to see growth in the nuclear program.
For example, I would like to add new graduate courses on topics like nuclear materials or fusion energy. In 2024, I developed a general course on fusion energy, so building out a curriculum that goes more in-depth would be good. When you look at small modular reactors and micro reactors, current energy policy does not allow private companies to build their own. However, as energy demands increase, policy could change to where you see these compact devices installed in places like data centers, for example. A more in-depth curriculum allows VCU Engineering students to step into industry roles that lead the growth of the energy industry while also ensuring students are capable of adapting to the changing field and taking advantage of new developments.

What sort of cross-disciplinary opportunities are there for the College of Engineering’s nuclear program?

Nuclear engineering and nuclear science are very interdisciplinary fields. You have physics that covers the nuclear reaction and the radiation it generates, for example, then chemistry is needed when talking about nuclear fuel cycles and nuclear waste. You also need materials science because good materials capable of withstanding radiation and high temperatures are needed in nuclear fission and fusion energy systems. This science then connects to engineering: building the reactors and the energy distribution systems like a power grid. It is a small sample of the overall work, but you see how mechanical and electrical engineering are key to this part. All these disciplines come together to solve the same problem. One researcher might be figuring out how to confine plasma and make it stable, then another researcher is looking at how plasma can disrupt the containment wall and how to make materials to protect the wall.
Within our department, we are making connections between mechanical-focused faculty working on high-temperature ceramics or additive manufacturing techniques and those of us researching nuclear energy systems in order to make joint proposals. We are also collaborating outside VCU. As an example, I am involved with an alliance founded by the Defense Threat Reduction Agency (DTRA) comprising 17 universities, research labs and military centers. Coordinated through DTRA, we work together on many of the same problems. Through this partnership, my Ph.D. students do summer research rotations with national labs like Lawrence Livermore National Laboratory in California and Pacific Northwest National Laboratory. We also bring cadets and midshipmen into VCU from other institutions, like the DTRA Nuclear Science and Engineering Research Center, United States Military Academy West Point and the Virginia Military Institute, whose students have been part of Research Experiences for Undergraduates programs in the summer.

How is artificial intelligence impacting the field of nuclear engineering?

So, the United States is sponsoring the Genesis Mission, which seeks to transform science innovation through the power of AI. One area of the Genesis Mission is nuclear fission and fusion energy. I see this playing out with the Department of Energy encouraging national labs, universities and industry to work together on applying these AI advancements to solve the research problems of nuclear energy. It is a great opportunity for students, who we can involve in this work to give them real-world experience with topics they will see after graduation. Last semester I taught a course at VCU on the practical applications of AI to nuclear engineering problems. It is not something like ChatGPT or anything like that. What we did is take Google’s TensorFlow platform, which is a library of AI models and neural networks.
Using Python scripting, students learn how to apply these AI resources to about 30 problems in mechanical and nuclear engineering. They create scripts, use data sets and run analytics. We have a nuclear reactor simulator, and I have some ideas to create AI-based software we can pair with the simulator, then give the software a data set and let it control the operation of the simulator in a safe way. Tell us about your background. What brought you to VCU and the Department of Mechanical and Nuclear Engineering? Actually, I am not a mechanical or a nuclear engineer. My background is in physics. I graduated from the Belarusian State University in 1990 and continued to a Ph.D. in physics from the Heat and Mass Transfer Institute of the National Academy of Sciences of Belarus, working on topics related to fusion plasmas and nuclear weapon effects. In space, nuclear weapons produce shockwaves and radiation. I computationally model these effects in my research to determine how something like a nuclear warhead detonation in orbit would impact the materials a satellite is made of, for example. My research also crosses over into nuclear fusion, specifically thermodynamic and optical plasma properties, fusion plasma disruptions, and melt motion and splashing from plasma-facing components. Accelerating Next-Generation Extreme Ultraviolet (EUV) Lithography (ANGEL) is my most recent collaborative project, supported by the Department of Energy’s (DOE) Office of Science, Fusion Energy Sciences. It involves two national laboratories, three universities and a private-sector company focusing on advancement of future micro-electronic chips, EUV photon sources, mitigation of material degradation and plasma chemistry. Prior to joining the VCU College of Engineering, I worked at Purdue University at a DOE-funded center investigating nuclear fusion and the effects of plasma on materials. 
Around 2019 I wanted to develop my own lab, so I came to VCU with startup funds from the Nuclear Regulatory Commission and DTRA. My first priority after joining the VCU College of Engineering was continuing my fusion research, the second was collaborating with an alliance of universities focused on work for DTRA and DOE.
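The course workflow described above, where students write a Python script, load a data set and run the analysis, can be illustrated with a short self-contained sketch. This example is hypothetical and, for brevity, uses only the Python standard library rather than TensorFlow: it fits a decay constant to synthetic detector counts, the kind of scripted data analysis an introductory nuclear engineering exercise might include.

```python
import math

# Hypothetical detector data following exponential decay: N(t) = N0 * exp(-lam * t)
N0, true_lam = 1000.0, 0.25
times = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
counts = [N0 * math.exp(-true_lam * t) for t in times]

# Linearize the model: ln N = ln N0 - lam * t, then least-squares fit the slope
xs = times
ys = [math.log(c) for c in counts]
n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)

lam_est = -slope                       # estimated decay constant
half_life = math.log(2) / lam_est      # derived half-life

print(f"estimated decay constant: {lam_est:.4f} per unit time")
print(f"estimated half-life: {half_life:.4f} time units")
```

In a course like the one described, a neural-network model from a library such as TensorFlow would replace the hand-written least-squares fit, but the script-plus-data-set pattern stays the same.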

MedPage Today: Ozzy Osbourne shined a light on Parkinson’s stigma
Ozzy Osbourne was best known for two things: his shape-shifting resilience as a pioneer of heavy metal music and, most recently, his remarkable authenticity during his public journey with Parkinson's disease. Osbourne, who passed away on July 22, possessed a unique ability to connect directly with people who were suffering. He was an honest and transparent voice for what it was like to live with a neurodegenerative disease. He was willing to go where others would not, and he took on the stigma of a Parkinson's diagnosis. Stigma remains one of the most underrecognized yet pervasive challenges in Parkinson's disease. Far too often, individuals are made to feel ashamed of their visible symptoms like tremors, facial masking, or soft speech. This reality can lead to social withdrawal, depression, and even delayed medical care. Research has shown that perceived stigma is not only linked to reduced quality of life, but it also correlates with worse outcomes. That's why, when someone like Osbourne rises up and speaks out, it matters. It sends a powerful message that Parkinson's does not define a person, and that no one should suffer in silence. Many people with Parkinson's disease choose to conceal their diagnosis from those closest to them. A recent study published in Scientific Reports found that nearly 23% of participants kept their condition hidden, even from family members. Broader surveys have suggested that more than half of individuals with Parkinson's disease may conceal symptoms, mask tremors, or avoid public situations due to stigma and fear of judgment. People who hide their diagnosis frequently report lower social support, reduced engagement in physical activity, and significantly worse emotional well-being. These findings underscore how pervasive and harmful disclosure avoidance can be.

Assisted by sniffer dogs and DNA sequencing, researchers discover three new truffle species
University of Florida biologists studying fungal evolution and ecology have discovered three new truffle species, including one capable of commanding hundreds of dollars per pound within culinary circles. The researchers describe their discoveries in the journal Persoonia. Their work shakes up the Morchellaceae truffle family tree, with key insights related to perhaps the most commercially valuable truffle in North America, the Oregon black truffle. Gourmet chefs, who sometimes grate the odoriferous truffle over dishes or infuse butter with it, have been known to pay as much as $800 per pound for the delicacy. For decades, the Oregon black truffle has been known scientifically as Leucangium carthusianum, a species originally found in Europe and later identified in the Pacific Northwest, from California to British Columbia. However, recent genetic testing and field analysis by researchers from UF’s Institute of Food and Agricultural Sciences (UF/IFAS) revealed the North American variety is a distinct species. Scientists are giving this newly recognized species a name honoring the Cascadia region in which it is found: Leucangium cascadiense. “Our paper confirms what a lot of people had suspected for a long time, which is that the North American truffle species is genetically very distinct from its European relatives,” said study co-author Benjamin Lemmond, a former UF student. Lemmond, now a postdoctoral associate at the University of California at Berkeley, began his research into the truffles as a first-year doctoral student studying under professor Matthew Smith of the UF/IFAS plant pathology department. 
During the COVID-19 pandemic, Lemmond couldn’t access the campus greenhouse where he was conducting an experiment, so Smith secured hundreds of dried truffle specimens from Oregon State University for him to study. The stash included slivers of the Oregon black truffle, a dark-colored, potato-shaped species with tiny, pyramid-shaped warts. When pandemic restrictions relaxed, Lemmond and Smith conducted genetic testing of the Oregon State specimens and others borrowed from Polish, Greek, Italian, French and Japanese collections. Their tests indicated Oregon black truffles from North America had at one point diverged from their European counterparts on the Morchellaceae evolutionary tree, according to the study. They also established the existence of another distinct and very rare species, Imaia kuwohiensis, a pale-colored truffle with dark warts, which is native to threatened spruce-fir habitats in the southern Appalachian Mountains. Their name for the truffle comes from the Cherokee word for the Great Smoky Mountains’ highest peak, Kuwohi. Field tests followed. The researchers wanted to understand how Oregon black truffles obtain their energy. “Understanding the fundamental, basic biology and life cycle of this truffle is really important,” Lemmond said. “It’s a very valuable commodity, and this knowledge might help us to cultivate the truffle in the future. It also supports long-term conservation and management.” Most gourmet truffles are mycorrhizal, meaning they obtain energy from trees, Lemmond said. It had long been suspected that Oregon black truffles obtain energy through a symbiotic relationship with young Douglas fir trees, but no one had conclusively proven it. Lemmond traveled to the Pacific Northwest and worked with specially trained sniffer dogs capable of detecting truffles buried as deep as 10 inches beneath soil and leaf litter. With the dogs’ help, he unearthed Oregon black truffles nestled among Douglas fir stands. 
He used a fluorescent stain that bonded with the fungal tissue, coloring it green to show where the truffle fungus grew between the cells of the tree root tissue. “The truffle fungi surround the whole root, but the fungus is healthy, and the plant is healthy,” Smith said. “The two trade nutrients back and forth.” DNA sequencing of the roots subsequently proved the truffles rely on the trees as their main source of carbon, according to the study. As the researchers conducted genome sequencing of the Oregon black truffle, they learned of a peculiar find reported by a citizen scientist on iNaturalist, an online science data network: a Leucangium truffle growing among Eastern hemlock trees in Oneida County, New York. It was the first time anyone had ever reported a Leucangium species in the United States east of the Rocky Mountains, Lemmond said. Lemmond contacted Purdue University, which was preserving the specimen, and requested a sample. The truffle’s physical characteristics, including its dense external hairs and lack of warts, distinguished it from other Leucangium species. DNA analysis confirmed significant variation, too. The researchers named the new truffle species Leucangium oneidaense to recognize the county where it was unearthed. A few years later, just before the researchers submitted their study for publication, someone found a second Leucangium oneidaense specimen growing in Massachusetts, Lemmond said. “It was great timing, and it suggests to me that there are still a lot of undiscovered truffles out there, waiting to be found,” he said.
War in Iran: Impact on Oil Prices
As global markets respond to escalating tensions in Iran, energy prices are once again at the center of international concern. For insight into what this conflict could mean for oil markets, consumers and the broader economy, media can turn to Greg Upton, executive director and associate research professor at the LSU Center for Energy Studies. An expert at the intersection of energy and environmental economics, Upton studies how geopolitical disruptions, supply constraints and policy decisions influence oil prices and downstream economic impacts. As instability in the Middle East threatens global supply chains, he can provide context on potential price volatility, implications for Louisiana’s energy sector and what higher crude prices may mean for gasoline costs and inflation in the United States. Upton has contributed to more than 40 academic publications and has presented his research to over 200 industry, government and academic audiences. He has testified before committees in both chambers of the Louisiana Legislature and a subcommittee of the U.S. House of Representatives. A frequent voice in national and local media, Upton has been quoted or cited more than 250 times, including by The Wall Street Journal, The New York Times, USA Today and NPR. In addition to his research, Upton teaches in LSU’s MBA program and in the Department of Economics and Environmental Sciences, helping prepare the next generation of leaders to navigate complex energy and environmental challenges. For timely, data-driven analysis on the impact of oil price fluctuations amid the ongoing conflict in Iran, Dr. Greg Upton is available for interviews and expert commentary.






