A Big Week in the Measurement of Expertise: How the UK Research Excellence Framework (REF) Results Will Impact Universities

May 9, 2022

4 min

Justin Shaw

How should we measure faculty expertise? This week the UK provides its answer to that question via its highly significant, formal (government-directed) assessment of academic research - which grades academic teams on a scale of 1* to 4* on their ability to deliver globally outstanding research, share it, and create impact from it.


This process is known as the REF (the Research Excellence Framework) - and the results will be publicly released this Thursday (12th May), with universities themselves finding out in advance, today (Monday 9th May), how they have performed. The exercise was last carried out eight years ago and was delayed by a year due to the pandemic.



Why is the Research Excellence Framework (REF) Significant?


The Research Excellence Framework steers the level of UK public funds - allocated via research councils - that will be invested in research for each academic department (or so-called “Unit of Assessment”) for the next few years. It is also a way of comparing performance against other universities that are offering similar research expertise, and of strengthening (or weakening) global research reputations.


During the next three days, UK universities will be digging into the detail of their REF gradings and the accompanying feedback. There will be some very nervous university leaders and research heads delving into why this peer-assessed review of their research has not gone as well as they expected, why their percentages in each of the four grade areas have dropped - or even why some work has been given the career-damaging “unclassified” stamp.


How are the REF Scores for Universities Determined?


The measurement process is based on three aspects:


  1. Quality of outputs (such as publications, performances, and exhibitions)
  2. Impact beyond academia
  3. The environment that supports research


The preparation, participation, and assessment process takes a massive amount of time, attention and energy. Last time (2014) there were 1,911 submissions to review. Research teams, designated REF leaders and senior staff will have spent long hours across many months preparing their submissions and making sure they presented hard evidence and the best possible case for meeting the above criteria at the highest level. The latest REF covers 34 subject areas - and three tiers of expert panels (some with 20 or more senior academics, international subject leaders, and research users) will have reviewed each submission and compared notes to reach their decisions.


How do these Key Categories within the REF Contribute to the Rating for a University?


The Research Excellence Framework is an intensive and high-stakes exercise in expert assessment. These are the key factors and their definitions, with the weighting each criterion carries in steering final grades (an illustrative calculation follows the list):


  1. Outputs (60%): the quality of submitted research outputs in terms of their ‘originality, significance and rigour’, with reference to international research quality standards. This element carries the largest weighting in the overall outcome awarded to each submission.
  2. Impact (25%): the ‘reach and significance’ of impacts on the economy, society, culture, public policy or services, health, the environment or quality of life that were underpinned by excellent research conducted in the submitted unit.
  3. Environment (15%): the research environment in terms of its ‘vitality and sustainability’, including the approach to enabling impact from its research, and its contribution to the vitality and sustainability of the wider discipline or research base.
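The overall grade profile a unit receives is, in effect, a weighted combination of these three elements. Below is a minimal sketch in Python of how such a weighted combination could work, using the 60/25/15 weights above. The sub-profile percentages and the `overall_profile` helper are invented for illustration only and do not reproduce the official REF calculation in detail.

```python
# Illustrative sketch only (not the official REF methodology): combine three
# sub-profiles into an overall quality profile using the 60/25/15 weights
# described above. All percentages below are invented for demonstration.

WEIGHTS = {"outputs": 0.60, "impact": 0.25, "environment": 0.15}

# Hypothetical quality profiles: the share of each element judged at each star level.
sub_profiles = {
    "outputs":     {"4*": 40, "3*": 45, "2*": 10, "1*": 5, "unclassified": 0},
    "impact":      {"4*": 50, "3*": 38, "2*": 12, "1*": 0, "unclassified": 0},
    "environment": {"4*": 63, "3*": 37, "2*": 0,  "1*": 0, "unclassified": 0},
}

def overall_profile(profiles, weights):
    """Weight each sub-profile and sum to give an overall quality profile."""
    levels = ["4*", "3*", "2*", "1*", "unclassified"]
    return {
        level: round(sum(weights[e] * profiles[e][level] for e in weights), 1)
        for level in levels
    }

print(overall_profile(sub_profiles, WEIGHTS))
# Prints an overall profile in which roughly 46% of activity is rated 4*,
# about 42% is 3*, and so on - the figures that steer funding and comparisons.
```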


Taking a Closer Look at the Categories - Are We Focusing Enough on Research Impact?


Following the 2014 exercise, a formal review was carried out to improve and evolve the REF process, and it made a number of recommendations. Most notably, the weighting for “impact” was increased by five percentage points, with “outputs” reduced by the same amount. This is certainly a recognition that the difference research makes beyond academia matters more than before - but is it enough? Should there be greater emphasis on return on investment from the perspective of beneficiaries and users?


Many argue that academic research should retain a strong element of “blue sky” experimentation - where clear evidence of impact may take several years (even decades) to emerge, and so cannot demonstrate such immediate value.


A particularly notable effect of the COVID-19 pandemic on REF deadlines has been the extension of the assessment period for ‘proof of impact’, which now runs from 1 August 2013 to 31 December 2020 rather than ending on 31 July 2020 as originally planned. The extension was put in place to enable case studies affected by, or focusing on the response to, COVID-19 to be assessed in REF 2021.


Going back to the original question: how should we measure faculty expertise? It will be interesting to monitor the views and responses of university leaders and faculty members at the end of this week as to whether they feel that - standing back from it all - this UK-centric method of measurement is the best that can be done, a neat compromise, or not really what we need.


For more information on the Research Excellence Framework, visit www.ref.ac.uk/


Justin Shaw

Justin is UK and Ireland Development Director for ExpertFile and Chief Higher Education Consultant at Communications Management. An authority on University strategy and communications, he has worked in and with leadership teams at UK universities for over 30 years. In his role he has advised universities on how to promote their expertise and on communications strategies related to the REF.


