The AI Journal: UF and other research universities will fuel AI. Here’s why

Alina Zare Ph.D.

Feb 2, 2026



In the global AI race, pitting small competitors against giants, established companies against new players, and ubiquitous against niche applications, the next great leap isn't about faster chips or improved algorithms. AI agents have already vacuumed up much of the information on the internet; the next great uncertainty is where they will find their next trove of big data.


The answer is not in Silicon Valley. It’s all across the nation at our major research universities, which are key to maintaining global competitiveness against China.


Teaching an AI system to "think" requires drawing on massive amounts of data to build models. At a recent conference, Ilya Sutskever, the former chief scientist at OpenAI — the creator of ChatGPT — called data the "fossil fuel of AI." Just as fossil fuels will eventually be used up because they are not renewable, he argued, we are running out of new data to mine to keep fueling the gains in AI.


However, so much of this thinking assumes AI was created by private Silicon Valley start-ups and the like. AI’s history is actually deeply rooted in U.S. universities dating back to the 1940s, when early research laid the groundwork for the algorithms and tools used today. While the computing power to use those tools was created only recently, the foundation was laid after World War II, not in the private sector but at our universities.


Contrary to a “fossil fuel problem,” I believe AI has its own renewable fuel source: the data and expertise generated from our comprehensive public academic institutions. In fact, at the major AI conferences driving the field, most papers come from academic institutions. Our AI systems learn about our world only from the data we offer them.


Current AI models like ChatGPT are scraping information from some academic journal articles in open-access repositories, but there are enormous troves of untapped academic data that could be used to make all these models more meaningful. A way past data scarcity is to develop new AI methods that leverage all of our knowledge in all of its forms. Our research institutions have the varied expertise in all aspects of our society to do this.


Here’s just one example: We are creating the next generation of “digital twin” technology. Digital twins are virtual recreations of places or systems in our world. Using AI, we can develop digital twins that gather all of our data and knowledge about a system — whether a city, a community or even a person — in one place and allow users to ask “what if” questions.


The University of Florida, for example, is building a digital twin for the city of Jacksonville, which contains the profile of each building, elevation data throughout the city and even septic tank locations. The twin also embeds detailed state-of-the-art waterflow models. In that virtual world, we can test all sorts of ideas for improving Jacksonville’s hurricane evacuation planning and water quality before implementing them in the actual city.


As we continue to layer more data into the twin — real-time traffic information, scans of road conditions and more — our ability to deploy city resources will be increasingly informed by real-time, actionable data and modeling. Using an AI system backed by this digital twin, city leaders could ask, "How would a new road in downtown Jacksonville impact evacuation times? How would the added road modify water runoff?" and so on.
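The article does not describe the software behind the Jacksonville twin, but the "what if" pattern it describes can be sketched in a few lines: represent the city's state, define models over that state, then compare a baseline against a hypothetical scenario. Everything below — the class, the field names, and the numbers — is a hypothetical toy, not the twin's actual data or its state-of-the-art waterflow models.

```python
# Toy sketch of digital-twin "what if" querying. All names and numbers
# are hypothetical placeholders, not the real Jacksonville twin.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class CityState:
    road_capacity: float      # vehicles per hour the network can move
    impervious_area: float    # fraction of ground that sheds rainwater

def evacuation_hours(state: CityState, population: int) -> float:
    # Stand-in model: evacuation time scales inversely with road capacity.
    return population / state.road_capacity

def runoff_index(state: CityState) -> float:
    # Stand-in model: more pavement means more runoff.
    return 100 * state.impervious_area

baseline = CityState(road_capacity=50_000, impervious_area=0.40)
# "What if" scenario: a new downtown road adds capacity but also pavement.
with_new_road = replace(baseline, road_capacity=55_000, impervious_area=0.42)

for label, state in [("baseline", baseline), ("new road", with_new_road)]:
    print(f"{label}: evacuation {evacuation_hours(state, 1_000_000):.1f} h, "
          f"runoff index {runoff_index(state):.0f}")
```

The point of the pattern is that scenarios are cheap: the immutable baseline state stays intact while `replace` spins off hypothetical variants, so every question is answered in the virtual city before anything changes in the real one.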


The possibilities for this emerging area of AI are endless. We could create digital twins of humans to layer human biology knowledge with personalized medical histories and imaging scans to understand how individuals may respond to particular treatments.


Universities are also acquiring increasingly powerful supercomputers that are supercharging their innovations. The University of Florida's HiPerGator, recently acquired from NVIDIA, is being used for problems across all disciplines; Oregon State University and the University of Missouri, for example, are using their own access to supercomputers to advance marine science discoveries and improve elder care.


In short, to see the next big leap in AI, don’t immediately look to Silicon Valley. Start scanning the horizon for those research universities that have the computing horsepower and the unique ability to continually renew the data and knowledge that will supercharge the next big thing in AI.


