
New light-based chip boosts power efficiency of AI tasks 100-fold
A team of engineers has developed a new kind of computer chip that uses light instead of electricity to perform one of the most power-intensive parts of artificial intelligence: image recognition and similar pattern-finding tasks. Using light dramatically cuts the power these tasks require, delivering efficiency 10 or even 100 times that of current chips performing the same calculations. The approach could help rein in the enormous demand for electricity that is straining power grids and enable higher-performance AI models and systems.

This machine learning operation, called “convolution,” is at the heart of how AI systems process pictures, videos and even language, and it currently requires large amounts of computing resources and time. The new chips use lasers and microscopic lenses fabricated onto circuit boards to perform convolutions with far less power and at faster speeds. In tests, the new chip successfully classified handwritten digits with about 98% accuracy, on par with traditional chips.

“Performing a key machine learning computation at near zero energy is a leap forward for future AI systems,” said study leader Volker J. Sorger, Ph.D., the Rhines Endowed Professor in Semiconductor Photonics at the University of Florida. “This is critical to keep scaling up AI capabilities in years to come.”

“This is the first time anyone has put this type of optical computation on a chip and applied it to an AI neural network,” said Hangbo Yang, Ph.D., a research associate professor in Sorger’s group at UF and co-author of the study.

Sorger’s team collaborated with researchers at UF’s Florida Semiconductor Institute, the University of California, Los Angeles and George Washington University on the study. The team published their findings, which were supported by the Office of Naval Research, Sept. 8 in the journal Advanced Photonics.

The prototype chip uses two sets of miniature Fresnel lenses made with standard manufacturing processes.
These two-dimensional versions of the same lenses found in lighthouses are just a fraction of the width of a human hair. Machine learning data, such as from an image or other pattern-recognition task, are converted into laser light on-chip and passed through the lenses. The results are then converted back into a digital signal to complete the AI task. This lens-based convolution system is not only more computationally efficient, it also reduces computing time.

Using light instead of electricity has other benefits, too. Sorger’s group designed a chip that could use different colored lasers to process multiple data streams in parallel. “We can have multiple wavelengths, or colors, of light passing through the lens at the same time,” Yang said. “That’s a key advantage of photonics.”

Chip manufacturers, such as industry leader NVIDIA, already incorporate optical elements into other parts of their AI systems, which could make the addition of convolution lenses more seamless. “In the near future, chip-based optics will become a key part of every AI chip we use daily,” said Sorger, who is also deputy director for strategic initiatives at the Florida Semiconductor Institute. “And optical AI computing is next.”
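Convolution itself is a simple mathematical operation: a small grid of weights (the kernel) slides across an image, and at each position the overlapping values are multiplied and summed. A minimal, illustrative NumPy version — not the optical chip's implementation — looks like this:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution: slide the kernel over the image and
    sum elementwise products at each position (valid padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 Laplacian-style edge-detection kernel applied to a 5x5 ramp image.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)
result = convolve2d(image, kernel)
print(result.shape)  # (3, 3)
# The ramp image is linear, so this zero-sum kernel cancels it:
# every output value is 0, i.e. "no edges detected" on a smooth gradient.
```

The reported energy savings come from performing this multiply-and-accumulate pattern with lenses and light rather than with transistor arithmetic.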
Delaware emerges as a test bed for the future of AI in health care
Delaware is positioning itself as a “living lab” where academia, health systems and government collaborate to shape the future of artificial-intelligence-enabled health care. The latest issue of the Delaware Journal of Public Health, guest edited by University of Delaware computer scientists Weisong Shi and Yixiang Deng, brings together 16 articles from researchers, clinicians, policymakers and industry leaders examining how AI and big data are reshaping health care. The issue, debuting this month, balances Delaware-specific topics with broader perspectives, highlighting three levels of impact: what Delaware can expect in the coming years, what other states can learn from Delaware’s approach and how UD research is advancing AI for health through collaborations.

“At UD, we don’t work in isolation. We’re working closely with health care systems so that innovation happens together from the beginning,” says Shi, Alumni Distinguished Professor and Chair of UD’s Department of Computer and Information Sciences.

Highlights from the issue include:

• The nation’s first nursing fellowship in robotics – ChristianaCare, Delaware’s largest health system, created an eight-month fellowship to train bedside nurses to conduct applied robotics research. Nurses who completed the program reported higher job satisfaction, improved well-being and greater professional confidence, suggesting programs like this may help retain the bedside workforce and reduce nationwide staffing shortages.

• Wheelchairs that navigate hospitals on their own – UD researchers developed a prototype autonomous wheelchair that combines onboard sensors and computing with software that interprets spoken directions from users, a step toward moving beyond systems that only work in controlled environments. To operate effectively in health care settings, the researchers say, wheelchairs must be able to navigate crowded hallways, interact with doors and elevators and recover safely when sensors or navigation systems fail.
• Smarter insulin dosing for type 1 diabetes – Researchers are developing computer models to predict blood sugar (glucose) trends and guide insulin delivery, but must address issues such as noisy data, reliable real-time prediction and the computational limits of wearable devices. A review by UD researchers and colleagues emphasizes the importance of interdisciplinary collaboration, standardized datasets, advances in computational infrastructure and clinical validation to turn these models into practical tools that improve patient care.

To interview Shi about AI in health care and the new DJPH issue, click his profile or email MediaRelations@udel.edu.

ABOUT WEISONG SHI
Weisong Shi is an Alumni Distinguished Professor and Chair of the Department of Computer and Information Sciences at the University of Delaware. He leads the Connected and Autonomous Research Laboratory. He is an internationally renowned expert in edge computing, autonomous driving and connected health. His pioneering paper, “Edge Computing: Vision and Challenges,” has been cited over 10,000 times.
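Glucose forecasting of the kind described in the insulin-dosing highlight is often introduced with a simple trend extrapolation: fit a line to the last few sensor readings and project it forward. The sketch below is only an illustrative baseline under that framing — the function name and values are hypothetical, not the UD researchers' models:

```python
def predict_glucose(readings, minutes_ahead=30, interval=5):
    """Extrapolate the recent glucose trend with a least-squares line.
    `readings`: recent sensor values (mg/dL), one per `interval` minutes."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
             / sum((x - mean_x) ** 2 for x in xs))
    steps_ahead = minutes_ahead / interval
    # Project the fitted line from the last reading forward in time.
    return mean_y + slope * ((n - 1) + steps_ahead - mean_x)

# Rising trend: 110, 115, 120, 125 mg/dL at 5-minute intervals.
print(predict_glucose([110, 115, 120, 125]))  # 155.0: +5 mg/dL/step, 6 steps out
```

Real systems must go well beyond this: coping with noisy sensor data and running reliably within the compute limits of wearable devices are exactly the gaps the review highlights.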

AI in the classroom: What parents need to know
As students return to classrooms, Maya Israel, professor of educational technology and computer science education at the University of Florida, shares insights on best practices for AI use for students in K-12. She also serves as the director of the CSEveryone Center for Computer Science Education at UF, a program created to boost teachers’ capabilities around computer science and AI in education. Israel also leads the Florida K-12 Education Task Force, a group committed to empowering educators, students, families and administrators by harnessing the transformative potential of AI in K-12 classrooms, prioritizing safety, privacy, access and fairness.

How are K–12 students using AI in classrooms?

Students are using AI in classrooms in a wide range of ways, depending on several factors including district policies, student age and the teacher’s instructional goals. Some districts restrict AI to teacher use only, such as creating custom reading passages for younger students. Others allow older students to use tools to check grammar, create visuals or run science simulations. Even then, skilled teachers frame AI as one tool, not a replacement for student thinking and effort.

What are examples of age-appropriate tools that enhance learning?

AI tools can either enhance or erode learner agency and critical thinking, so it is up to educators to consider how these tools can be used appropriately. It is critical to use AI tools in a manner that supports learning, creativity and problem solving rather than bypassing critical thinking. For example, Canva lets students create infographics, posters and videos to show understanding. Google’s Teachable Machine helps students learn AI concepts by training their own image-recognition models. These types of AI-augmented tools work best when they are embedded into activities such as project-based learning, where AI supports learning and critical thinking.

How do teachers ensure AI supports core skills?
While AI can be incredibly helpful in supporting learning, it should not be a shortcut that allows students to bypass learning. Teachers should design learning opportunities that integrate AI in a manner that encourages critical thinking. For example, if students are using AI to support their mathematical understanding, teachers should ask them to explain their reasoning, engage in discussions and attempt to solve problems in different ways. Teachers can ask students questions like, “Does that answer make sense based on what you know?” or “Why do you think [said AI tool] made that suggestion?” This type of reflection reinforces the message that learning does not happen through getting fast answers. Learning happens through exploration, productive struggle and collaboration.

Many parents worry that using AI might make students too dependent on technology. How do educators address that concern?

This is a very valid concern. Over-reliance on AI can erode independence and critical thinking, which is why teachers should be intentional in how they use AI for teaching and learning. Educators can address this concern by communicating to parents their policies and approaches to using AI with students. This can include providing clear expectations of when AI is used; designing assignments that require critical thinking, personal reflection and reasoning; and teaching students the metacognitive skills to self-assess how and when to use AI so that it supports learning rather than serving as a crutch.

How do schools ensure that students still develop original thinking and creativity when using AI for assignments or projects?

In the age of AI, there is a need to be even more intentional in designing learning experiences where students engage in creative and critical thinking.
One of the practices best shown to support this is project-based learning, where students must create, iterate and evaluate ideas based on feedback from their peers and teachers. AI can help students gather ideas or organize research, but the students must ask the questions, synthesize information and produce original ideas. Assessments and rubrics should emphasize skills such as reasoning, process and creativity rather than focusing only on the final product. That way, although AI can play a role in instruction, the goal is to design instructional activities that move beyond what the AI can do.

How do educators help students understand when it’s appropriate to use AI in their schoolwork?

In the age of AI, educators should help students develop the skills to be original thinkers who can use AI thoughtfully and responsibly. Educators can help students understand when to use AI in their schoolwork by directly embedding AI literacy into their instruction. AI literacy includes discussions about the capabilities and limitations of AI, ethical considerations and the importance of students’ agency and original thought. Additionally, clear guidelines and policies help students navigate some of the gray areas of AI usage.

What guidance should parents give at home?

There are several key messages that parents should give their children about the use of AI. The most important is that even though AI is powerful, it does not replace their judgment, creativity or empathy. Even though AI can provide fast answers, it is important for students to learn the skills themselves. Another key message is to know the rules about AI in the classroom. Parents should also speak with their students about the mental health implications of over-reliance on AI. When students turn to AI-augmented tools for every answer or idea, they can gradually lose confidence in their own problem-solving abilities.
Instead, students should learn how to use AI in ways that strengthen their skills and build independence.
AI gives rise to the cut and paste employee
Although AI tools can improve productivity, recent studies show that they too often intensify workloads instead of reducing them, in many cases even leading to cognitive overload and burnout. The University of Delaware's Saleem Mistry says this is creating employees who work harder, not smarter.

Mistry, an associate professor of management in UD's Lerner College of Business & Economics, says his research aligns with the findings in this Feb. 9, 2026 article in the Harvard Business Review. Driven by the misconception that AI is an accurate search engine rather than a predictive text tool, these "cut and paste" employees are using the applications to pump out deliverables in seconds just to keep up with increasing workloads. Mistry notes that this prioritization of speed over accuracy is happening at every level of the organization:

• Junior staff: Blast out polished-looking but unverified drafts.
• Managers: Outsource their ability to deeply learn and critically think in order to summarize data, letting their analytical skills atrophy.
• Power users: Build hidden, unapproved systems that bypass company oversight.

A management problem, not a tech problem

"When discussing this issue, I often hear leaders blame the technology. However, I believe that blaming the tech is missing the point; I see it as a failure of leadership," Mistry said. "When already overburdened employees who are constantly having to do more with less are handed vague mandates to just use AI without any training, they use it to look busy and produce volume-based work. Because many companies still reward the volume of work produced rather than the actual impact, employees naturally use these tools to generate slick but empty deliverables."
The real costs to organizations and incoming employees

Mistry outlines three risks organizations face if they don’t intervene:

1. The workslop epidemic

"These programs allow people to generate massive amounts of workslop, which is low-effort fluff that looks good but lacks substance. It takes seconds to create, but hours for someone else to decipher, fact-check, and fix," Mistry notes. "This drains money (up to $9 million annually for large companies) and destroys morale. As an educator, researcher, and a person brought into organizations to help fix problems, I for one do not want to be on the receiving end of a thoughtless, automated data dump, especially on tasks that require real skill and deep thinking."

2. Legal disaster

He also states, "When the cut and paste mentality makes its way into professional submissions, the risks to the organization are real and oftentimes catastrophic. Courts have made it perfectly clear: ignorance is no excuse. If your name is on the document, you own the liability. Recently, attorneys have faced severe sanctions, hefty fines, and case dismissals for blindly submitting fake legal citations made up by computers."

3. A warning for incoming talent

For new graduates entering this environment, Mistry offers a warning: Do not rely on AI to do your deep thinking. "If you simply use AI to blast out polished but unverified drafts, you become a replaceable 'cut and paste' employee," he says. “To truly stand out, new grads must prove they have the discernment to review, tweak, and challenge what the computer writes. The hiring edge is no longer just saying, 'I can do this task,' but 'I know how to leverage and correct AI to help me perform it.'"

Four ideas to fix it

To survive and indeed thrive with these new tools and avoid the unintended consequences of untrained staff, organizations should:

1. Reinforce the importance of fact-checking and editing: Adopt frameworks that teach employees how to show their work and log how they verified computer-generated facts.
2. Change the incentives: Stop rewarding busy work, useless reports, and massive slide decks. Evaluate employees on accuracy and results.
3. Eradicate superficial work: Don’t use automation to speed up ineffective legacy processes. Instead, use it to identify and eliminate them entirely.
4. Make time for editing: Give yourself and your employees the breathing room to actually review, tweak, and challenge what the computer writes instead of accepting the first draft.

Mistry is available to discuss:

• Why AI is causing an epidemic of corporate "workslop" (and how to spot it).
• The leadership failure behind the "cut and paste" employee.
• How to rewrite corporate incentives to measure impact instead of volume in the AI era.
• Strategies for implementing safe, effective AI policies at work.
• How new college graduates can avoid the "workslop" trap in their first jobs.

To reach Mistry directly and arrange an interview, visit his profile and click on the "contact" button. Interested reporters can also send an email to MediaRelations@udel.edu.

VCU College of Engineering receives $600,000 for AI-driven cybersecurity research
To advance AI-enabled cybersecurity research, the National Science Foundation (NSF) presented Kemal Akkaya, Ph.D., professor and chair of the Department of Computer Science, with a $600,000 grant through the organization’s Cybersecurity Innovation for Cyberinfrastructure program. Akkaya’s three-year project will explore how large language models (LLMs) can automate packet labeling for intrusion detection systems.

“From transportation and healthcare to finance, improving the accuracy of the machine learning algorithms used to defend the networks that underpin these sectors’ cyberinfrastructure is critical for protecting them from cyberattacks. Strengthening these defenses helps ensure the reliability and security of the essential services people rely on every day,” said Akkaya.

Intrusion detection systems monitor network traffic to identify suspicious or malicious activity. These systems rely on machine learning models trained on large volumes of accurately labeled data. Producing those datasets, however, is time-intensive and often requires expert cybersecurity knowledge.

As digital systems increasingly power transportation, health care, finance and communication, the volume and sophistication of cyberattacks continue to grow. At the same time, artificial intelligence is reshaping how both attackers and defenders operate. Improving how quickly and accurately security systems can be trained is critical to protecting the infrastructure that supports daily life.

Akkaya’s project will investigate how generative AI can help address this challenge. The team will fine-tune open-source large language models using network data, threat signatures and expert annotations. Model accuracy will be strengthened through retrieval-augmented refinement, ensemble modeling and human-in-the-loop verification. Labeled datasets will be released in stages to support the development and evaluation of cybersecurity models.
Using data from AmLight, an international research and education network operated by Florida International University (FIU), the project includes collaboration with researchers from FIU. The award strengthens VCU’s growing leadership in AI-enabled cybersecurity research and provides hands-on research training for graduate students. Resulting datasets from this work will support machine learning education for undergraduate students.
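The ensemble-modeling and human-in-the-loop verification steps described above can be pictured with a short sketch. This is a hypothetical illustration, not the project's code: several labeling models vote on each packet, and packets with low agreement are escalated to a human analyst instead of being auto-labeled.

```python
from collections import Counter

def ensemble_label(packet_summary, models, agreement_threshold=0.75):
    """Majority-vote a packet label across several models; route
    low-agreement packets to a human review queue."""
    votes = Counter(model(packet_summary) for model in models)
    label, count = votes.most_common(1)[0]
    agreement = count / len(models)
    if agreement >= agreement_threshold:
        return label, agreement
    return "NEEDS_HUMAN_REVIEW", agreement

# Stand-in "models" keyed on tokens in a packet summary string;
# in the real project these would be fine-tuned LLMs.
models = [
    lambda p: "malicious" if "port_scan" in p else "benign",
    lambda p: "malicious" if "port_scan" in p or "syn_flood" in p else "benign",
    lambda p: "benign",
    lambda p: "malicious" if "port_scan" in p else "benign",
]
print(ensemble_label("tcp port_scan src=10.0.0.5", models))  # ('malicious', 0.75)
print(ensemble_label("udp dns_query src=10.0.0.9", models))  # ('benign', 1.0)
```

Labels that clear the agreement bar could flow straight into a training set, while the review queue is where expert annotation effort gets concentrated — the efficiency the project is after.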
The AI In Action Symposium, hosted by the LSU E. J. Ourso College of Business, brings together expert voices at the heart of the AI revolution to explore how they have successfully navigated this evolving landscape. The 2026 symposium focuses on the practical implications of AI in business, including hiring AI-ready talent, ensuring responsible and ethical use, and exploring the challenges of implementing AI across both large enterprises and small startups.

Speakers

Attendees will hear from Louisiana leaders and national AI experts, including…

• Secretary Bruce Greenstein of the Louisiana Department of Health
• April Wiley, Senior Vice President at Community Coffee
• Robert Veit and Julian Tandler from Scale Team Six, a San Francisco-based business accelerator
• Dr. Tonya Jagneaux, who leads medical analytics at the Franciscan Missionaries of Our Lady Health System (FMOLHS)
• Hunter Thevis, president and co-founder of Lafayette-based S1 Technology

…and many more!

Details

March 20, 2026, 8:00 a.m. – 1:00 p.m.
Registration deadline is March 15.
Held on the LSU A&M Campus, in the LSU Student Union
Register at lsu.edu/business/ai-symposium
Discount available for LSU System employees

Is writing with AI at work undermining your credibility?
With over 75% of professionals using AI in their daily work, writing and editing messages with tools like ChatGPT, Gemini, Copilot or Claude has become commonplace. While generative AI tools are seen to make writing easier, are they effective for communication between managers and employees? A new study of 1,100 professionals reveals a critical paradox in workplace communications: AI tools can make managers’ emails more professional, but regular use can undermine trust between managers and their employees.

“We see a tension between perceptions of message quality and perceptions of the sender,” said Anthony Coman, Ph.D., a researcher at the University of Florida's Warrington College of Business and study co-author. “Despite positive impressions of professionalism in AI-assisted writing, managers who use AI for routine communication tasks put their trustworthiness at risk when using medium to high levels of AI assistance."

In the study, published in the International Journal of Business Communication, Coman and his co-author, Peter Cardon, Ph.D., of the University of Southern California, surveyed professionals about how they viewed emails that they were told were written with low, medium and high AI assistance. Survey participants were asked to evaluate different AI-written versions of a congratulatory message on both their perception of the message content and their perception of the sender. While AI-assisted writing was generally seen as efficient, effective and professional, Coman and Cardon found a “perception gap” in messages written by managers versus those written by employees.

“When people evaluate their own use of AI, they tend to rate their use similarly across low, medium and high levels of assistance,” Coman explained. “However, when rating others’ use, magnitude becomes important. Overall, professionals view their own AI use leniently, yet they are more skeptical of the same levels of assistance when used by supervisors.”

While low levels of AI help, like grammar or editing, were generally acceptable, higher levels of assistance triggered negative perceptions. The perception gap is especially significant when employees perceive higher levels of AI writing, calling into question the authorship, integrity, caring and competency of their manager. The impact on trust was substantial: only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI, compared to 83% for low-assistance messages. Similarly, while 95% found low-AI supervisor messages professional, this dropped to 69% to 73% when supervisors relied heavily on AI tools.

The findings reveal employees can often detect AI-generated content and interpret its use as laziness or a lack of caring. When supervisors rely heavily on AI for messages like team congratulations or motivational communications, employees perceive them as less sincere and question their leadership abilities. “In some cases, AI-assisted writing can undermine perceptions of traits linked to a supervisor’s trustworthiness,” Coman noted, specifically citing impacts on perceived ability and integrity, both key components of cognitive-based trust.

The study suggests managers should carefully consider message type, level of AI assistance and relational context before using AI in their writing. While AI may be appropriate and professionally received for informational or routine communications, like meeting reminders or factual announcements, relationship-oriented messages requiring empathy, praise, congratulations, motivation or personal feedback are better handled with minimal technological intervention.

Recently named the nuclear program director at the Virginia Commonwealth University (VCU) College of Engineering, Gennady Miloshevsky, Ph.D., associate professor in the Department of Mechanical & Nuclear Engineering, answers some questions about the direction of VCU Engineering’s nuclear program and what he hopes it can accomplish.

What are your top priorities for the nuclear program at the VCU College of Engineering?

I want to focus on student development, innovative research and our rankings in best-program lists, but that is not everything. Strategy is important. We need to align ourselves with the country’s national energy needs. There are many new developments in the energy sector, like small modular reactors or fusion energy systems, and having the right faculty to engage with these advancements is important. Providing students with a well-rounded education and good opportunities for gaining experience benefits the College of Engineering’s public and private sector partners. Nuclear subject matter is complex, so higher education is very important for workforce development. We want to build partnerships, like the one we have with Dominion Energy, that support this goal. A priority for me is continuing to establish relationships with Commonwealth Fusion Systems, which seeks to build and operate the first commercial grid-scale fusion plant in Chesterfield County, Virginia. Our workforce partners will benefit from VCU’s well-trained nuclear engineering graduates joining the workforce. So, aligning our strategy with national energy needs, hiring the right faculty to support our programs and building industry partnerships that benefit our students’ education and career opportunities are the important things for VCU Engineering’s nuclear program.

Where would you like to see the College of Engineering’s nuclear program 10 years from now?

I would like to see growth in the nuclear program.
For example, some new graduate courses on topics like nuclear materials or fusion energy. In 2024, I developed a general course on fusion energy, so building out a curriculum that goes more in-depth would be good. When you look at small modular reactors and microreactors, current energy policy does not allow private companies to build their own. However, as energy demands increase, policy could change to where you see these compact devices installed in places like data centers, for example. A more in-depth curriculum allows VCU Engineering students to step into industry roles that lead the growth of the energy industry, while also ensuring students are capable of adapting to the changing field and taking advantage of new developments.

What sort of cross-disciplinary opportunities are there for the College of Engineering’s nuclear program?

Nuclear engineering and nuclear science are very interdisciplinary fields. You have physics, which covers the nuclear reaction and the radiation it generates, for example; then chemistry is needed when talking about nuclear fuel cycles and nuclear waste. You also need materials science, because good materials capable of withstanding radiation and high temperatures are needed in nuclear fission and fusion energy systems. This science then connects to engineering: building the reactors and the energy distribution systems, like a power grid. It is a small sample of the overall work, but you see how mechanical and electrical engineering are key to this part. All these disciplines come together to solve the same problem. One researcher might be figuring out how to confine plasma and make it stable, while another researcher is looking at how plasma can disrupt the containment wall and how to make materials to protect the wall.
Within our department, we are making connections between mechanical-focused faculty working on high-temperature ceramics or additive manufacturing techniques and those of us researching nuclear energy systems in order to make joint proposals. We are also collaborating outside VCU. As an example, I am involved with an alliance founded by the Defense Threat Reduction Agency (DTRA) comprising 17 universities, research labs and military centers. Coordinated through DTRA, we work together on many of the same problems. Through this partnership, my Ph.D. students do summer research rotations with national labs like Lawrence Livermore National Laboratory in California and Pacific Northwest National Laboratory. We also bring cadets and midshipmen into VCU from other institutions, like the DTRA Nuclear Science and Engineering Research Center, the United States Military Academy at West Point and the Virginia Military Institute, whose students have been part of research experience for undergraduates programs in the summer.

How is artificial intelligence impacting the field of nuclear engineering?

So, the United States is sponsoring the Genesis Mission, which seeks to transform science innovation through the power of AI. One area of the Genesis Mission is nuclear fission and fusion energy. I see this playing out with the Department of Energy encouraging national labs, universities and industry to work together on applying these AI advancements to solve the research problems of nuclear energy. It is a great opportunity for students, who we can involve in this work to give them real-world experience with topics they will see after graduation. Last semester I taught a course at VCU on the practical applications of AI to nuclear engineering problems. It is not something like ChatGPT or anything like that. What we did is take Google’s TensorFlow platform, which is a library of AI models and neural networks.
Using Python scripting, students learn how to apply these AI resources to about 30 problems in mechanical and nuclear engineering. They create scripts, use data sets and run analytics. We have a nuclear reactor simulator, and I have some ideas to create AI-based software we can pair with the simulator, then give the software a data set and let it control the operation of the simulator in a safe way.

Tell us about your background. What brought you to VCU and the Department of Mechanical and Nuclear Engineering?

Actually, I am not a mechanical or a nuclear engineer. My background is in physics. I graduated from the Belarusian State University in 1990 and continued to a Ph.D. in physics from the Heat and Mass Transfer Institute of the National Academy of Sciences of Belarus, working on topics related to fusion plasmas and nuclear weapon effects. In space, nuclear weapons produce shockwaves and radiation. I computationally model these effects in my research to determine how something like a nuclear warhead detonation in orbit would impact the materials a satellite is made of, for example. My research also crosses over into nuclear fusion, specifically thermodynamic and optical plasma properties, fusion plasma disruptions, and melt motion and splashing from plasma-facing components. Accelerating Next-Generation Extreme Ultraviolet (EUV) Lithography (ANGEL) is my most recent collaborative project, supported by the Department of Energy’s (DOE) Office of Science, Fusion Energy Sciences. It involves two national laboratories, three universities and a private-sector company focusing on the advancement of future microelectronic chips, EUV photon sources, mitigation of material degradation and plasma chemistry. Prior to joining the VCU College of Engineering, I worked at Purdue University at a DOE-funded center investigating nuclear fusion and the effects of plasma on materials.
Around 2019 I wanted to develop my own lab, so I came to VCU with startup funds from the Nuclear Regulatory Commission and DTRA. My first priority after joining the VCU College of Engineering was continuing my fusion research; the second was collaborating with an alliance of universities focused on work for DTRA and DOE.
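To give a flavor of the classroom exercises described above, here is a minimal sketch of the kind of task students might script: fitting a small neural network to a toy engineering data set. The data set, network size and variable names are all hypothetical illustrations, and plain NumPy stands in for TensorFlow so the example stays self-contained; the actual course materials are not public.

```python
# Hypothetical sketch: train a tiny neural network on a toy data set
# (normalized reactor power vs. coolant outlet temperature).
# NumPy stands in for TensorFlow so the example is self-contained.
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: 200 samples of a roughly linear relation plus noise.
x = rng.uniform(0.0, 1.0, size=(200, 1))            # normalized power
y = 280.0 + 40.0 * x + rng.normal(0.0, 1.0, (200, 1))  # temperature, deg C

# Standardize the target so gradient descent is well conditioned.
y_n = (y - y.mean()) / y.std()

# One hidden layer with tanh activation, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(x):
    h = np.tanh(x @ W1 + b1)     # hidden activations
    return h, h @ W2 + b2        # hidden layer, prediction

_, pred0 = forward(x)
loss0 = float(np.mean((pred0 - y_n) ** 2))  # mean-squared error at start

for _ in range(500):
    h, pred = forward(x)
    err = pred - y_n
    # Backpropagate the mean-squared-error gradient through both layers.
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, pred = forward(x)
loss = float(np.mean((pred - y_n) ** 2))
print(f"training loss: {loss0:.3f} -> {loss:.3f}")
```

In a TensorFlow version of the same exercise, the hand-written training loop would be replaced by a Keras model definition plus `compile` and `fit` calls, which is what makes the platform approachable for students running many such problems.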

What the World Needs Now: How Art, Culture, and Nature Can Help Heal Communities in Difficult Times
In an era marked by political division, cultural fatigue, and rapid technological change, communities are increasingly searching for places that offer connection, restoration, and shared experience. Charles Burke, President & CEO of Frederik Meijer Gardens & Sculpture Park, brings a leadership perspective shaped by decades across the arts, civic engagement, and nonprofit strategy, focused on how cultural institutions can serve as stabilizing forces in uncertain times. Through the lens of Meijer Gardens, Burke examines how art, culture, and nature can work together to restore, unite, and inspire communities, offering spaces where people can slow down, reconnect, and engage with one another beyond polarization or distraction.

Charles Burke is President & CEO of Frederik Meijer Gardens & Sculpture Park. Under his direction, the organization has been recognized as Best Sculpture Park in the United States by USA Today’s 10Best Readers’ Choice Awards in 2023, 2024, and 2025, and consecutively named one of the Best Places to Work in West Michigan, solidifying its reputation as a cultural landmark of international acclaim.
An Expert Perspective on Healing Through Experience

From Burke’s leadership vantage point, institutions like Meijer Gardens demonstrate how intentional design and programming can support community well-being. Examples include:

- Environments that encourage mental restoration, such as forested landscapes and immersive outdoor spaces
- Experiences that invite reflection and emotional engagement, rather than passive consumption
- Programming that brings together diverse audiences around shared encounters with beauty and creativity

These experiences do not attempt to solve complex societal challenges directly. Instead, they create conditions for connection, empathy, and resilience, the key foundations that healthy communities depend on.

Civic Spaces as “Experiential Engines”

A central concept in Burke’s work is the idea of cultural institutions as experiential engines: places designed not just to display art or plants, but to generate meaning, joy, and shared memory. When thoughtfully integrated, sculpture, horticulture, architecture, and programming can transform public spaces into environments that foster belonging and inclusion. This approach positions cultural institutions as active participants in civic life, contributing to community health and cohesion rather than operating at the margins of public discourse.

Technology, Humanity, and the Future of Cultural Spaces

As technology continues to shape how people interact with the world, Burke’s perspective emphasizes balance. Emerging tools, including artificial intelligence, can enhance accessibility, storytelling, and personalization when used intentionally. The challenge, and the opportunity, lies in ensuring that technology deepens human connection rather than distracting from it. And while AI is ideal for aggregating information and should be integrated thoughtfully, it isn’t inherently creative. Burke believes that cultural institutions can uniquely unlock the power of human potential in creativity.
And cultural institutions that integrate innovation thoughtfully can remain relevant while staying grounded in human experience.

Meijer Gardens as a Living Model

Over three decades, Meijer Gardens has evolved into a nationally recognized destination where beauty, experience, and mission align. Its integration of art, nature, education, and seasonal programming offers a real-world example of how cultural institutions can grow while remaining inclusive, restorative, and community-centered.

Why Journalists and Conference Organizers Should Connect

Charles Burke brings an informed perspective on:

- The role of art and nature in public healing and mental wellness
- Cultural responsibility during periods of division and uncertainty
- Designing inclusive, joyful, and interactive civic spaces
- Balancing technology and humanity in cultural institutions
- How Meijer Gardens functions as a model for innovative integration and creativity

Audience fit: museum and cultural leadership forums, civic innovation conferences, mental health and wellness discussions, placemaking initiatives, higher education leadership forums, philanthropic leadership events, sustainability and design summits.

National Academy of Inventors welcomes five VCU College of Engineering researchers
The National Academy of Inventors (NAI) recently inducted five Virginia Commonwealth University (VCU) College of Engineering researchers as senior members. Chosen for their innovative engineering contributions, the honorees are recognized as visionary inventors whose groundbreaking research and patented technologies are driving meaningful societal and economic advancements across the national innovation landscape.

“Invention represents the practical application of knowledge and stands as one of the many ways engineers can make a positive impact on their communities and the world,” said Azim Eskandarian, D.Sc., the Alice T. and William H. Goodwin Jr. Dean of the VCU College of Engineering. “This year’s honorees exemplify the interdisciplinary nature of our field, leveraging advanced concepts from mechanical, biomedical, chemical and pharmaceutical engineering to address today’s most pressing challenges. We are immensely proud that our dedicated researchers have earned recognition as members of the esteemed National Academy of Inventors.”

The VCU College of Engineering NAI inductees are:

Jayasimha Atulasimha, Ph.D.
Engineering Foundation Professor, Department of Mechanical & Nuclear Engineering

An internationally recognized pioneer of straintronics, an approach to electrically controlling magnetism for ultra-low-energy computing, Atulasimha has made significant research contributions to next-generation memory, neuromorphic hardware and emerging quantum computing technologies. He holds four U.S. patents spanning energy-efficient magnetic memory, nanoscale computing architectures and medical tools. Atulasimha’s commercially viable inventions are funded by organizations like the Virginia Innovation Partnership Corporation, and he leads multi-institutional collaborations that drive innovation in computing hardware, AI and quantum technologies with more than $10 million in funded research.

Casey Grey, Ph.D.
Postdoctoral Research Associate, Department of Mechanical & Nuclear Engineering

Bridging engineering and medicine, Grey’s work spans life-saving stroke technologies, breakthrough respiratory and neurological care, and sustainable packaging. As a lead R&D scientist at WestRock, he helped create and commercialize the CanCollar® portfolio, a recyclable paperboard replacement for plastic beverage rings now used on five continents, eliminating thousands of tons of single-use plastic annually. In medical device innovation, Grey’s patent and development work on a novel cyclic aspiration thrombectomy platform, currently in clinical trials, is advancing stroke treatment by enhancing clot removal efficiency and reducing long-term disability.

At the VCU College of Engineering, Grey built a research and commercialization pipeline around neurological and respiratory technologies, securing eight provisional patents and leading multidisciplinary teams in neurology, neurosurgery, surgery, pharmacology and toxicology, internal medicine, and respiratory medicine. His work includes developing dry powder inhaler strategies for delivering life-saving drugs to patients with acute respiratory distress syndrome (ARDS), a pediatric bubble CPAP system designed to protect brain development in premature infants, and non-invasive, non-pharmacological 40 Hz neuromodulation therapies to treat neurodegeneration and conditions with significant central nervous system complications, like sickle cell disease. In collaborations with the VCU Children’s Hospital and VCU Critical Care Hospital, Grey is leading two clinical studies that are translating these innovations to improve patient care.

Ravi Hadimani, Ph.D.
Associate Professor and Director of the Biomagnetics Laboratory, Department of Mechanical & Nuclear Engineering

Hadimani founded RAM Phantoms LLC, a VCU startup company commercializing anatomically accurate, MRI-derived brain phantoms for neuromodulation and neuroimaging applications.
These brain phantoms help test and tune transcranial magnetic and deep brain stimulation technologies, improving clinical safety and enabling personalized therapy for patients. RAM Phantoms is also developing a highly skilled workforce for employment in Virginia’s growing biomedical device industry.

Beyond commercialization, Hadimani maintains a productive research program with more than $4.5 million in funding, resulting in 125 original peer-reviewed publications, 17 current and pending patents, a book, and several book chapters. His biomagnetics lab serves as a training ground for undergraduate, graduate and Ph.D. students to hone their skills in innovation management, intellectual property strategy and startup development. Several students from Hadimani’s lab have engaged in translational research, patent co-authorship and startup formation, cultivating a new generation of engineer-entrepreneurs equipped to drive future technological advances. Before joining VCU, Hadimani led the development of hybrid piezoelectric-photovoltaic materials that established FiberLec Inc., which commercialized multifunctional energy-harvesting fibers capable of converting solar, wind and vibrational energy into usable electricity.

Worth Longest, Ph.D.
Alice T. and William H. Goodwin, Jr. Distinguished Chair, Department of Mechanical & Nuclear Engineering

Uniting aerosol science, biomedical engineering and computational modeling, Longest is revolutionizing inhaled drug delivery. Working with collaborators, his lab has developed novel devices, formulations and delivery platforms that precisely target medications to the lungs, addressing conditions like cystic fibrosis, pneumonia, acute respiratory distress syndrome and neonatal respiratory distress syndrome. These innovations have resulted in multiple patents, some of which have been licensed through commercial partnerships like Quench Medical, an organization advancing inhaled therapies for applications like lung cancer.
Collaborating with the Gates Foundation and the lab of Michael Hindle, Ph.D., of the VCU Department of Pharmaceutics, Longest’s team developed a low-cost, high-efficacy aerosol surfactant therapy for pre-term infants based entirely on technology developed at VCU. The invention eliminates intubation, reduces dosage by a factor of 10 and cuts treatment costs. More than 9 million infant lives are projected to be saved by this technology between 2030 and 2050. Through a long-term collaboration with the U.S. Food and Drug Administration, Longest’s in vitro and computational methods inform federal regulatory guidance for generic inhaled medications. The VCU mouth-throat airway models developed under his leadership are used globally across the pharmaceutical industry and in government laboratories.

Hong Zhao, Ph.D.
Associate Professor, Department of Mechanical & Nuclear Engineering

Zhao holds 40 patents, with innovations spanning additive manufacturing, stretchable electronics, inkjet printing technologies and superoleophobic materials that repel oils, greases and low-surface-tension liquids. Her research has applications across health care, sustainable energy and advanced manufacturing. Prior to joining the College of Engineering, Zhao served as a senior research scientist and project leader at the Xerox Research Center, where she developed high-performance materials and printing technologies for commercial deployment. Her industry experience makes Zhao’s lab a hub for innovation and mentorship, with students engaging in innovative research and co-authoring publications. Zhao is an invited reviewer for more than 50 premier journals and grant agencies.

“Working with distinguished researchers and innovators like those inducted into the National Academy of Inventors is a great honor for me,” said Arvind Agarwal, Ph.D., chair of the Department of Mechanical & Nuclear Engineering and an NAI fellow. “They are an inspiration and showcase the kind of impact engineers can make.
Having all five of these innovators as part of our department amplifies the scientific richness of our college and its societal impact. They advance the college’s mission of Engineering for Humanity, with research that brings positive change to our world.”

The 2026 NAI class of senior members, composed of 231 emerging inventors from NAI’s member institutions, is the largest to date. Hailing from 82 NAI member institutions across the globe, they hold over 2,000 U.S. patents.





