

AI-powered cruise control system may pave the way to fuel efficiency and traffic relief

The CIRCLES Consortium, consisting of Vanderbilt University, UC Berkeley, Temple University and Rutgers University-Camden, in coordination with Nissan North America and the Tennessee Department of Transportation, concluded a five-day open-track experiment on Nov. 18. Researchers with the Congestion Impacts Reduction via CAV-in-the-loop Lagrangian Energy Smoothing (CIRCLES) project tested an AI-powered cruise control system designed to increase fuel savings and ease traffic using 100 specially equipped Nissan Rogue vehicles. The experiment, which ran from Nov. 14 through Nov. 18 on a sensor-filled portion of Interstate 24, builds on the results of an earlier, closed-track study in which a single smart vehicle smoothed human-caused traffic congestion, leading to significant fuel savings. A single AI-equipped vehicle could influence the speed and driving behavior of up to 20 surrounding cars, creating a positive ripple effect in day-to-day traffic.

The CIRCLES Consortium will spend the next several months analyzing data collected on the AI-equipped vehicles and their impact on the flow of traffic over the duration of the experiment. The test was conducted on the recently opened I-24 MOTION testbed, the only real-world automotive testing environment of its kind in the world. Stretching for four miles just southeast of downtown Nashville, the smart highway is equipped with 300 4K digital sensors capable of logging 260 million vehicle-miles of data per year.

The CIRCLES Consortium research is supported by the National Science Foundation and the U.S. Departments of Transportation and Energy. Support was also provided by Toyota North America and General Motors, and the experiment included Toyota RAV4 and Cadillac XT5 vehicles. Pictured: preliminary vehicle and traffic flow detection in the I-24 Mobility Technology Interstate Observation Network (MOTION).

“On November 16 alone, the system recorded a total of 143,010 miles driven and 3,780 hours of driving.
The I-24 MOTION system, combined with vehicle energy models developed in the CIRCLES project, provided an estimation of the fuel consumption of the whole traffic flow during those hours. The concept we are hoping to demonstrate is that by leveraging this new traffic system to collect data and estimate traffic and applying artificial intelligence technology to existing cruise control systems, we can ease traffic jams and improve fuel economy,” the CIRCLES team said in a joint statement.

“Nissan has always been a pioneer in automotive innovation, and with our long-term vision, Nissan Ambition 2030, we know our future is autonomous, connected and electric,” said Liam Pedersen, deputy general manager at the Nissan Alliance Innovation Lab in California’s Silicon Valley. “CIRCLES shares our common goal of building a safer, cleaner world by empowering mobility.”

“When it comes to transportation and mobility in Tennessee, we are at a critical juncture,” said Deputy Governor and TDOT Commissioner Butch Eley. “Traffic congestion is now becoming more prominent throughout Tennessee, and not just in urban areas. Addressing these challenges will force us to think critically about solutions, as transportation infrastructure projects traditionally are not identified nor completed before traffic congestion more dramatically affects our quality of life. One of these solutions is greater use of technology to enhance mobility. We are confident that this project and others like it will further strengthen Tennessee’s reputation for being a hub of automotive excellence.”

“The I-24 MOTION project is a first-of-its-kind testbed, where we’ll be able to study in real time the impact connected and autonomous vehicles have on traffic in an open road setting,” said Meredith Cebelak, adjunct instructor in civil and environmental engineering at Vanderbilt and Tennessee transportation and transportation systems management and operations department leader at Gresham Smith.
“The permanent infrastructure has been designed and installed, meaning the testbed will always be ‘on’ and available to researchers. By unlocking a new understanding of how these vehicles influence traffic, strategies for vehicle, infrastructure, and traffic management design can be optimized to reduce future congestion and improve safety, air quality and fuel efficiency.”

“Partnership across universities, government and the private sector is the key to pioneering projects like this one,” Vice Provost for Research and Innovation Padma Raghavan said. “From its earliest inception, all the partners in this effort have played vital roles. That trusted collaboration continues as the team analyzes results to seek new insights to address pressing challenges in transportation in Tennessee and beyond.”
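The smoothing behavior the researchers describe can be illustrated with a toy controller: rather than instantly matching the car ahead, the AI vehicle drives at a running average of recently observed traffic speeds, with a cap on acceleration. This is a simplified sketch of the general idea only, not the actual CIRCLES control algorithm, and all parameter values below are assumptions.

```python
from collections import deque

class SmoothingCruiseControl:
    """Toy speed planner: follow a moving average of recently observed
    traffic speeds instead of instantly matching the car ahead.
    Illustrative only; not the actual CIRCLES controller."""

    def __init__(self, window=10, max_accel=1.5):
        self.history = deque(maxlen=window)  # recent speed observations
        self.max_accel = max_accel           # m/s^2 cap on speed changes

    def target_speed(self, observed_speed, current_speed, dt=1.0):
        self.history.append(observed_speed)
        desired = sum(self.history) / len(self.history)
        # Rate-limit the adjustment so the vehicle accelerates gently.
        step = max(-self.max_accel * dt,
                   min(self.max_accel * dt, desired - current_speed))
        return current_speed + step

# A stop-and-go wave: surrounding traffic oscillates between 10 and 30 m/s.
ctrl = SmoothingCruiseControl()
speed = 20.0
speeds = []
for t in range(60):
    lead = 10.0 if (t // 10) % 2 else 30.0
    speed = ctrl.target_speed(lead, speed)
    speeds.append(speed)

# The controlled vehicle's speed band is narrower than the 20 m/s swing
# of the surrounding traffic, i.e. the wave is partially absorbed.
print(round(max(speeds) - min(speeds), 1))
```

Because following vehicles tend to track the car in front of them, one smoothed vehicle can pass this damped speed profile back through the platoon behind it, which is the ripple effect described above.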


Public lecture: how can we have a good future with artificial intelligence?

AI expert and educator Professor Anikó Ekárt to discuss one of today’s most provocative topics. The lecture will take place on 28 February at Aston University. The talk will explore artificial intelligence’s capabilities, benefits and pitfalls.

The potential impact of artificial intelligence (AI) on our daily lives will be explored in a public lecture at Aston University. The University is inviting the public onto its campus on Tuesday 28 February to hear Professor Anikó Ekárt discuss one of today’s most provocative topics. Research into AI began in the 1950s, and since then it has played an increasing role in daily life, through applications such as chatbots and digital assistants. As an AI researcher and educator, Professor Ekárt will take a pragmatic view of the technology, arguing that society will benefit from it – but only if it is used responsibly. She said: “Digital assistants based on speech recognition are now broadly accepted and successfully embedded in many business services. “However, the most recent release of a chatbot with amazing writing capabilities has divided the world; some are relieved that their job may now become substantially easier, but others have questioned the impact of this on education. “In the lecture, I’ll suggest three key directions: responsible use of AI, exploring many AI techniques rather than focusing on just one, and educating the public about AI’s capabilities, benefits and pitfalls.” She will illustrate the success and further potential of less well-known AI techniques, such as evolutionary computation, genetic programming and symbolic regression, based on her 25 years of research. Professor Ekárt joined Aston University in 2006 as a lecturer and is now a professor of artificial intelligence. She leads the artificial intelligence research theme within the School of Informatics and Digital Engineering.
Her research interests are centred around AI methods and their application, focusing on evolutionary algorithms and genetic programming. She has successfully contributed to applications of AI techniques to health, engineering, transport, and art. In 2022 she won the Evo* Award for Outstanding Contribution to Evolutionary Computation in Europe. The free event will take place on 28 February from 6 pm to 8 pm and will be followed by a drinks reception. To sign up for a place, visit https://www.eventbrite.co.uk/e/an-inaugural-lecture-by-professor-aniko-ekart-tickets-516518760517


AI-Generated Content is a Game Changer for Marketers, but at What Cost?

Goizueta’s David Schweidel pitted man against the machine to create SEO web content, only to find that providing an editor with bot-generated content trounces the human copywriter every time. Good news for companies looking to boost productivity and save cash, he says. But could there be other hidden costs? In December 2022, The New York Times ran a piece looking back on the year’s biggest consumer tech updates. The review was mixed. Ownership shifts in the world of social media garnered special mentions, but hardware innovations had been largely “meh,” mused the Times. There was one breakthrough area that warranted attention, however: AI-powered language-processing tech capable of generating natural-looking text, the same technology that powers familiar chatbots. And one such technology could well be poised to “invade our lives in 2023.” Earlier in December, AI research lab OpenAI released the latest update to its Generative Pre-Trained Transformer (GPT) technology. Its latest iteration, ChatGPT, immediately went viral. Here was an AI assistant that sounded intelligent. Not only could it answer any question thrown its way without supervised training, but when prompted, it could also write blog posts, as well as find and fix bugs in programming code. ChatGPT could draft business proposals and even tell jokes. All of this at a speed that beggared belief. Since its first release in 2020, OpenAI’s GPT technology has powered through a slew of updates that have seen its capabilities leap forward “by light years” in less than 24 months, says Goizueta Professor of Marketing David Schweidel. For businesses looking to harness this rapidly evolving technology, the potential is clearly enormous. But aren’t there also risks that industry and consumers alike will need to navigate?
Schweidel is clear that the academic community and initiatives such as the Emory AI Humanity Initiative have a critical role in asking hard questions—and in determining the limitations and dangers, as well as the opportunities, inherent in tech innovation—because, as he puts it, “these things are going to happen whether we like it or not.”

Man Versus Machine

To that end, Schweidel and colleagues from Vienna University of Economics and Business and the Modul University of Vienna have put together a study looking at how well natural language generation technologies perform in one specific area of marketing: drafting bespoke content for website search engine optimization, better known as SEO. What they find is that content crafted by the machine, after light human editing, systematically outperforms its human counterparts—and by a staggering margin. Digging through the results, Schweidel and his colleagues pinpoint an almost 80 percent success rate for appearing on the first page of search engine results with AI-generated content. This compares with just 22 percent for content created by human SEO experts. In other words, AI content passed to a human editor is roughly four times more effective than a skilled copywriter working alone. Reaching these findings meant running two real-time, real-world experiments, says Schweidel. First, he and his colleagues had to program the machine, in this case GPT-2, an earlier incarnation of GPT. GPT relies on natural language generation (NLG), a software process that converts manually uploaded input into authentic-sounding text or content—comparable in some ways to the human process of translating ideas into speech or writing. To prepare GPT-2 for SEO-specific content creation, Schweidel et al.
started with the pre-trained GPT-2, and then let the machine do the heavy lifting: searching the internet for appropriate results based on the desired keyword, scraping the text of the websites, and updating GPT-2 to “learn” what SEO looks like, says Schweidel. “We partnered with an IT firm and a university to run our field experiments. This meant creating SEO content for their websites using GPT-2 and actual human SEO experts, and then doing A/B testing to see which content was more successful in terms of landing in the top 10 search engine results on Google. So this was an opportunity to put the AI bot to the test in a real-world setting to see how it would perform against people.” The results point to one clear winner. Not only did content from GPT-2 outperform its human rivals in SEO capabilities, it did so at scale. The AI-generated content scored a daily median result of seven or more hits on the first page of Google search results; the human-written copy didn’t make it onto the first results page at all. On its best day, GPT placed 15 of its 19 pages of search terms in the top 10 search engine results, compared with just two of the nine pages created by the human copywriters—a success rate of just under 80 percent compared to 22 percent.

Savings at Scale

The machine-generated content, after being edited by a human, trounces the human in SEO. But that’s not all, says Schweidel. The GPT bot was also able to produce content in a fraction of the time taken by the writers, reducing production time and associated labor costs by more than 90 percent, he says. “In our experiments, the copywriters took around four hours to write a page, while the GPT bot and human editor took 30 minutes.
Now assuming the average copywriter makes an annual $45K on the basis of 1,567 hours of work, we calculate that the company we partnered with would stand to save more than $100,000 over a five-year period just by using the AI bot in conjunction with a human editor, rather than relying on SEO experts to craft content. That’s a 91 percent drop in the average cost of creating SEO content. It’s an orders-of-magnitude difference in productivity and costs.” But there are caveats. First off, there’s the quality of the machine-generated content to consider. For all its mind-boggling capabilities, even the newly released ChatGPT tends to read as somewhat sterile, says Schweidel. That’s a problem both in terms of Google guidelines and brand coherence. Human editors are still needed to polish copy that can sound a little “mechanical.” “Google is pretty clear in its guidelines: Content generated by machines alone is a definite no-no. You also need to factor in the uncanny valley effect, whereby something not quite human can come off as weird. Having an editor come in to smooth out AI content is critical to brand voice as well as the human touch.”

Asking the Big Questions

Then there are the moral and metaphysical dimensions of machine learning and creativity that beg an important question: Just because we can, does that mean we should? Here, Schweidel has grave reservations about the future of ChatGPT and its ilk. “The potential of this kind of technology is extraordinarily exciting when you think about the challenges we face, from productivity to pandemics, from sustainable growth to climate change. But let’s be very clear about the risks, too. AI is already capable of creating content—audio, visual and written—that looks and feels authentic. In a world that is hugely polarized, you have to ask yourself: How can that be weaponized?”
At the end of the day, says Schweidel, the large language models powering these generative AIs are essentially “stochastic parrots”: trained mimics whose output can be hard to predict. In the wrong hands, he warns, the potential for misinformation—and worse—could well be “terrifying.” “Shiny new tech is neither inherently good nor bad. It’s human nature to push the boundaries. But we need to ensure that the guardrails are in place to regulate innovation at this kind of pace, and that’s not easy. Governments typically lag far behind OpenAI and companies like it; even academics have a hard time keeping up. The real challenge ahead of us will be innovating the guardrails in tandem with the tech—innovating our responsible practices and processes. Without effective safeguards in place, we’re on a path to potential destruction.” Covering AI, or interested in knowing more about this fascinating topic? Let our experts help with your coverage and stories. David Schweidel is the Rebecca Cheney McGreevy Endowed Chair and Professor of Marketing at Emory University's Goizueta Business School. Simply click on David's icon now to arrange an interview today.
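The headline figures above can be checked with quick arithmetic. The sketch below recomputes the best-day success rates and the per-page labor cost from the salary and time assumptions quoted in the article; the study's full cost accounting (editor pay, bot costs, page volume) may differ, so treat this as an illustration only.

```python
# Best-day search-visibility rates reported in the experiment.
ai_rate = 15 / 19            # AI pages reaching Google's top-10 results
human_rate = 2 / 9           # human-written pages reaching the top-10
print(f"AI: {ai_rate:.0%}, human: {human_rate:.0%}, "
      f"advantage: {ai_rate / human_rate:.1f}x")

# Labor cost per page under the article's stated assumptions.
annual_salary = 45_000       # average copywriter salary (from the article)
annual_hours = 1_567         # working hours per year (from the article)
hourly = annual_salary / annual_hours
human_cost = hourly * 4.0    # copywriter: ~4 hours per page
editor_cost = hourly * 0.5   # GPT bot plus ~30 minutes of human editing
print(f"per page: human ${human_cost:.2f} vs bot+editor ${editor_cost:.2f}")
```

The 15/19 versus 2/9 rates reproduce the "just under 80 percent compared to 22 percent" claim and the roughly fourfold advantage; the per-page labor figures show why the savings compound quickly at scale.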


Aston University AI expertise helps estimate daily transmission rates of infections such as Covid

Model used antibody data collected at blood donation centres. The data obtained allowed academics to estimate the proportion of people who were going undiagnosed. Current epidemiological models tend not to be as effective at estimating hidden variables such as daily infection rates.

Aston University researchers have helped develop a mathematical model which can estimate daily transmission rates of infections such as Covid by testing for antibodies in blood collected at blood donation centres. The epidemiological models usually used tend not to be as effective at adjusting quickly to changes in infection levels. Working with researchers at the Universidade Federal de Minas Gerais in Brazil, they conducted a large longitudinal study applying a compartmental model, a general modelling technique often applied to the mathematical modelling of infectious diseases, to results obtained from Brazilian blood donor centres. The testing was done by Fundacao Hemominas, one of the largest blood services in Brazil, which covers an area similar to that of continental France. They used the reported number of SARS-CoV-2 cases along with serology results (diagnostic methods used to identify antibodies and antigens in patients’ samples) from blood donors as inputs, and delivered estimates of hidden variables such as daily values of transmission rates and the cumulative incidence rate of reported and unreported cases. The model, discussed in the paper SARS-CoV-2 IgG Seroprevalence among Blood Donors as a Monitor of the COVID-19 Epidemic, Brazil, gave the experts a more refined view of infection rates and the relative rate of immunity compared to official measurements. The testing started at the beginning of the pandemic and involved 7,837 blood donors in seven cities in Minas Gerais, Brazil, during March–December 2020. At that point testing wasn’t widely available and there was a high proportion of undetected asymptomatic or mildly symptomatic cases.
The data obtained allowed the experts to estimate the proportion of people who were going undiagnosed. Dr Felipe Campelo, senior lecturer in computer science at Aston University, said: “Public communication about the COVID-19 epidemic was based on officially reported cases in the community, which strongly underestimates the actual spread of the disease in the absence of widespread testing. “This difference underscores the convenience of using a model-based approach such as the one we proposed, because it enables the use of measured data for estimating variables such as the total number of infected persons. “Our model delivers daily estimates of relevant variables that usually stay hidden, including the transmission rate and the cumulative number of reported and unreported cases of infection.” In Brazil in July 2020 there was a sharp increase in the number of people tested as new infrastructure became available. This allowed the experts to further validate their methodology by observing how officially recorded data moved closer to the model predictions once testing became more widespread, including for asymptomatic or mildly symptomatic people. They applied the model to antibodies found in donated blood and used it to estimate the proportion of undiagnosed cases and to analyse changes in the transmission rate, that is, how many people each case infected on average. Previously this had been treated as a fixed value, or as fixed over long periods of time, but the dynamics of the spread of Covid change much faster than that. This aspect was very important in the early days of the pandemic, and the approach could also be applied to similar diseases. Looking forward, the experts aim to improve the accuracy of the model by introducing changes to account for vaccination effects, waning immunity and the potential emergence of new variants.
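The compartmental modelling family the study builds on can be sketched with a minimal SIR (susceptible-infectious-recovered) model. The published model has additional compartments and fits its parameters to serology data; the toy version below, with assumed parameter values, only shows how a transmission rate drives an epidemic curve and how that rate, normally a hidden variable, can be recovered from observed incidence.

```python
# Minimal daily-step SIR model (illustrative; the study's model is richer).
def sir_step(S, I, R, beta, gamma, N):
    new_inf = beta * S * I / N   # new infections today
    new_rec = gamma * I          # recoveries today
    return S - new_inf, I + new_inf - new_rec, R + new_rec

N = 1_000_000
S, I, R = N - 100.0, 100.0, 0.0
beta, gamma = 0.3, 0.1           # assumed transmission / recovery rates

for day in range(300):
    # "Hidden variable" estimation: if S, I and today's new infections
    # were observed, the daily transmission rate could be recovered.
    new_inf = beta * S * I / N
    beta_hat = new_inf * N / (S * I)
    S, I, R = sir_step(S, I, R, beta, gamma, N)

print(f"cumulative incidence: {R / N:.1%}, recovered beta: {beta_hat:.2f}")
```

In practice the incidence series is not directly observed, which is why the study infers it from seroprevalence in blood donors; the inversion step above is the idealized core of that estimation.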
The paper SARS-CoV-2 IgG Seroprevalence among Blood Donors as a Monitor of the COVID-19 Epidemic, Brazil has been published in Volume 28, Number 4—April 2022 of Emerging Infectious Diseases.


One of the crowd or one of a kind? New artificial intelligence research indicates we're a bit of both

Evidence that behaviour follows a two-step process when we’re in a crowd. We are likely to imitate the crowd first and think independently second. The findings will increase understanding of how humans make decisions based on others’ actions.

An Aston University computer scientist has used artificial intelligence (AI) to show that we are not as individual as we may like to think. In the late 1960s, famous psychologist Stanley Milgram demonstrated that if a person sees a crowd looking in one direction, they’re likely to follow their gaze. Now, Dr Ulysses Bernardet in the Computer Science Research Group at Aston University, collaborating with experts from Belgium and Germany, has found evidence that our actions follow a two-step process when we’re in a crowd. Their results, Evidence for a two-step model of social group influence, published in iScience, show that we go through a two-stage process, where we’re more likely to imitate a crowd first and think independently second. The researchers believe their findings will increase the understanding of how humans make decisions based on what others are doing. To test this idea the academics created an immersive virtual reality (VR) experiment set in a simulated city street. Each of the 160 participants was observed individually as they watched a movie within the virtual reality environment that had been created for the experiment. As they watched the movie, 10 computer-generated ‘spectators’ within the simulated street were operated by AI to attempt to influence the direction of the gaze of the individual participants. During the experiment, three different sounds, such as an explosion, were played coming from either the left or right of the virtual street. At the same time, a number of the ‘spectators’ looked in a specific direction, not always in the direction of the virtual blast or the other two sounds. The academics calculated a direct, and an indirect, measure of gaze-following.
The direct measure was the proportion of trials in which participants followed the gaze of the crowd. The indirect measure took into account the reaction speed of participants depending on whether they were instructed to look in the same or opposite direction as the audience. The experiment’s results support the understanding that the influence of a crowd is best explained by a two-step model. Dr Bernardet said: “Humans demonstrate an initial tendency to follow others – a reflexive, imitative process. But this is followed by a more deliberate, strategic process in which a person decides whether or not to copy others around them. “One way in which groups affect individuals is by steering their gaze. “This influence is not only felt in the form of social norms but also impacts immediate actions and lies at the heart of group behaviours such as rioting and mass panic. “Our model is not only consistent with evidence gained using brain imaging, but also with recent evidence that gaze following is the manifestation of a complex interplay between basic attentional and advanced social processes.” The researchers believe their experiments will pave the way for increased use of VR and AI in behavioural sciences.


Privacy implications of contact tracing for COVID-19

Contact tracing is the process of identifying persons who may have come into contact with an infected person, and then collecting further information about those contacts. Contact tracing is a key public health response to battle infectious diseases such as COVID-19. Mobile technologies offer robust options for contact tracing through the use of GPS, Bluetooth, cellular information and AI-powered big data analytics. Together, this information can help manage the spread of COVID-19. Several countries, including the UK, Israel and South Korea, have rolled out contact-tracing apps and solutions. However, preserving personal privacy is critical toward maintaining public trust and protecting users during this crisis. Kurt Rohloff, assistant professor of computer science at New Jersey Institute of Technology and co-founder of Duality Technologies, is an expert on privacy and the implications of contact tracing, and on the technologies that exist to both protect private information and support contact tracing. Duality Technologies has developed a prototype solution for privacy-preserving contact tracing that uses the open-source PALISADE homomorphic encryption library that Rohloff developed at NJIT with funding from DARPA. To speak with Rohloff directly on issues related to privacy and contact tracing, click on the button below to arrange an interview.
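As a conceptual illustration of how contact tracing can avoid exposing personal data, the sketch below uses rotating hashed tokens, similar in spirit to Bluetooth exposure-notification schemes: phones exchange pseudonymous tokens and exposure checks happen locally, without revealing identities or locations. Note this is not homomorphic encryption; libraries such as PALISADE go further by allowing computation directly on encrypted data. All names and parameters here are illustrative.

```python
import hashlib
import secrets

# Conceptual sketch of privacy-aware contact matching with rotating hashed
# tokens. NOTE: this is NOT homomorphic encryption; HE libraries such as
# PALISADE instead allow computing on data while it stays encrypted.

def daily_tokens(secret_key: bytes, day: int, n: int = 4) -> set:
    """Derive the rotating pseudonymous tokens a phone would broadcast."""
    return {
        hashlib.sha256(
            secret_key + day.to_bytes(4, "big") + i.to_bytes(2, "big")
        ).hexdigest()
        for i in range(n)
    }

alice_key, bob_key = secrets.token_bytes(32), secrets.token_bytes(32)

# Phones only record tokens they hear; no identities or locations are kept.
bob_heard = daily_tokens(alice_key, day=100) | {"some-unrelated-token"}

# If Alice tests positive, she publishes her keys for the infectious window;
# Bob then checks locally whether any of her derived tokens match his log.
exposed = bool(daily_tokens(alice_key, day=100) & bob_heard)
print("exposure detected:", exposed)
```

The privacy property is that tokens are unlinkable to a person unless that person voluntarily publishes their keys, and the matching happens on the user's own device rather than on a central server.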


Unattainably Perfect: Idealized Images of Influencers Negatively Affect Users’ Mental Health

Filters, Adobe Photoshop, and other digital tools are commonly used by social media “influencers.” These celebrities or individuals have a large follower base and “influence,” or hold sway over, online audiences. This digital enhancement of images is well documented anecdotally. Instagram, in particular, has come under growing scrutiny by the media in recent years for promoting and popularizing unattainably perfect or unrealistic representations of its influencers. What’s less understood is the appeal and the actual effect that these digitally enhanced images have on followers, particularly in terms of people’s feelings of self-worth and their mental wellbeing. A ground-breaking study by Goizueta Business School’s David Schweidel and Morgan Ward sheds new light on the real-world impact of digital enhancement, and what they find should be cause for significant concern.

Downstream Consequences: Impressions Have Lasting Impact

Across a series of five studies with a broad sample of participants, and using AI-powered deep learning data analysis to parse individuals’ responses, Schweidel and Ward have unearthed a series of insights around the lure of these kinds of idealized images, and the negative “downstream consequences” that they have on other users’ self-esteem. “Going into the research, we hypothesized that micro-influencers who digitally manipulate their images, offering unrealistic versions of themselves, would be more successful at engaging with other users, getting more follows, likes, and comments from them. And we do find this to be the case, but that’s not all,” says Schweidel. He and Ward also discover that when users are exposed to these kinds of images, they make comparisons between themselves and the enhanced influencers; comparisons that leave them feeling lacking, envious, and often inadequate in some way. In terms of mental health and wellbeing, this is alarming, says Ward.
“Our research shows unequivocally that when followers consume idealized versions of popular figures on social media, there is a social comparison process that results in these users experiencing negative feelings and a substantial decline in their state of self-esteem.” On the basis of these insights, is Meta, the owner of Facebook and Instagram, likely to take action to limit the use of digital enhancement on its platforms and apps any time soon? Unlikely, say Schweidel and Ward. “Meta seems to be fully aware of the deleterious effects that Instagram has on its users. However, the success of Instagram, and that of the brands and influencers that appear on the app, is fueled by increased consumer engagement: the very engagement that this kind of digital enhancement of images drives. So the incentive is there to maintain the practices that keep users engaged, even if there’s a trade-off in their emotional and mental health.” This is a fascinating and important topic, and if you're a reporter looking to know more, let us help. David A. Schweidel is professor of marketing at Emory University’s Goizueta Business School. He is an expert in the areas of customer relationship management and social media analytics. Morgan Ward is an assistant professor of marketing at Emory University’s Goizueta Business School and is an expert in consumer behavior. Both experts are available to speak with media - simply click on an icon to arrange a discussion today.


Studying glaciers . . . from Florida

By Emma Richards

On the surface, the University of Florida seems an unlikely place to find cutting-edge research on ice sheets. But Emma “Mickey” MacKie says this is the perfect place for her work — thanks in large part to HiPerGator, one of the fastest supercomputers in higher education. MacKie, an assistant professor of geological sciences and a glaciologist, joined UF in August 2021 and said her decision hinged largely on access to HiPerGator and the university’s focus on machine learning and artificial intelligence technologies. MacKie uses machine learning methods to study the subsurface conditions of glaciers in polar regions, and access to a powerful supercomputer is crucial given the large data sets her research generates. “I'm very happy to be in a place with lots of people who are working on different types of problems and are interested in developing these different tools,” MacKie said. “There are a number of members of my department in geology who are studying glacial geology through different lenses. And so, there's all of this complementary geological and machine learning knowledge at UF that I'm very excited to bring together.” MacKie has set up the Gator Glaciology Lab, where she and a team of seven undergraduate students from the fields of geology, computer science, physics, math and data science are using AI to analyze what lies beneath glaciers and how they are moving and melting. It’s a very difficult challenge, MacKie said, because of limited access to polar regions and the miles-thick ice covering the ground. Then there is the scale of ice sheets; Antarctica, for example, is the size of the U.S. and Mexico combined. Measurements of the topography below such glaciers are gathered using radars mounted on airplanes to “see” through ice.
Her team then uses HiPerGator to simulate realistic-looking topography in places where there are gaps or blank spots in the measurements. They generate hundreds of maps to represent different possible ice sheet conditions, which can be used to explore numerous possible sea level rise scenarios. “Our work is part of a bigger effort in the glaciology community to start working on quantifying our uncertainty in future sea-level rise projections so that we can give policy makers this information,” she said. Earlier this spring, MacKie swapped out her flip-flops for snow boots to study subsurface glacial conditions in Svalbard, near northeastern Greenland. Visiting Svalbard will help her test and develop data collection and analysis techniques that could be applied to Antarctica or Greenland, which both contain large ice sheets that could have serious environmental impacts if they experience significant melting. In Svalbard, MacKie and Norwegian researchers from the University of Bergen and the University Centre in Svalbard took seismic and radar measurements of glaciers that will be used to make estimates about conditions beneath the ice. Among glaciers of concern is Thwaites, the “Doomsday Glacier,” which is losing the most ice of any glacier in Antarctica. There are signs that Thwaites’ ice shelf could start to break up in the next few years. MacKie said it will likely be a few hundred years before the glacier could undergo significant collapse and jeopardize the West Antarctic Ice Sheet, leading to several meters of sea level rise. The effects of Thwaites and other melting ice sheets in Antarctica and Greenland will become apparent in the decades to come, with the potential for a meter of sea level rise by the end of the century, which MacKie and other researchers hope to predict more accurately. “The state of Florida has the most to lose when sea level rises,” she said in an episode of the From Florida podcast.
“And so, I think we have a lot of skin in the game and it’s really important to be studying this question here in Florida.”

To hear more about MacKie’s work, listen to the From Florida podcast.
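The ensemble approach MacKie describes, filling measurement gaps with many simulated topographies and using the spread among them to quantify uncertainty, can be sketched in miniature. This is a deliberately simplified stand-in, not the lab's actual geostatistical method: the 1-D transect, the measurement values and the roughness scaling below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sparse radar measurements of bed elevation (metres)
# along a 1-D flight-line transect; values are illustrative only.
x_obs = np.array([0.0, 10.0, 25.0, 40.0, 50.0])
z_obs = np.array([-120.0, -95.0, -140.0, -110.0, -130.0])

x_grid = np.linspace(0.0, 50.0, 101)   # dense grid with gaps to fill
n_realizations = 200                   # "hundreds of maps"

# Distance from each grid point to its nearest measurement controls how
# much random variability is injected: zero at data points, growing in
# the gaps between flight lines.
dist = np.min(np.abs(x_grid[:, None] - x_obs[None, :]), axis=1)
sigma = 2.0 * np.sqrt(dist)            # assumed roughness scaling

trend = np.interp(x_grid, x_obs, z_obs)   # deterministic interpolation
ensemble = trend + sigma * rng.standard_normal((n_realizations, x_grid.size))

# The spread across realizations quantifies topographic uncertainty.
uncertainty = ensemble.std(axis=0)
print(f"max uncertainty in gaps: {uncertainty.max():.1f} m")
```

In practice the lab works with 2-D bed-elevation grids and geostatistical simulation conditioned on radar flight lines, but the principle is the same: variability is injected only where data are missing, and the ensemble spread feeds directly into uncertainty on downstream sea-level projections.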


Join RealTime Medical at SIIM 2022!

Join RealTime Medical at the Society for Imaging Informatics in Medicine (SIIM) Virtual Annual Meeting, where we will be announcing our latest advancements in how we empower radiologists to work more efficiently, improve quality and deliver better patient care.

RTM operates one of Canada’s largest teleradiology networks, covering more than 30 hospital sites. The network is powered by the RealTime Medical AICloudSuite solution, which delivers AI-enabled diagnostic workload balancing and a first-of-its-kind, multi-dimensional peer learning experience. The platform’s standards-based messaging makes it easy to integrate with existing HIS/RIS/PACS systems.

RTM co-founder Dr. David Koff chairs SIIM’s session on advanced peer learning techniques and solutions.

Visit our virtual booth via GRIP June 9-11. Access to the platform is already available! For more information about the event and to register, visit the official SIIM event website. Book your consultation today: https://realtimemedical.com/contact/


Innovators Bring AI into Imaging Skills Development

Originally from CHT Magazine
By Jerry Zeidenberg
October 30, 2019

Two Ontario hospital organizations – encompassing six sites – will soon deploy artificial intelligence to help with continuous learning and peer review in their imaging departments. By automatically detecting the types of cases being read by radiologists at St. Joseph’s Healthcare Hamilton and Hamilton Health Sciences, the system will deliver the latest journal findings, along with personal pattern recognition and error-avoidance feedback, directly to their desktops.

While radiologists at all Canadian hospitals are experts in their field, with years of education and experience, our understanding of diseases and illnesses is rapidly expanding and new insights are constantly appearing. To ensure they’re aware of the latest research and best practices, many radiologists conduct journal and web searches while they’re reading cases at the hospital, or at night from home.

“Our radiologists and physicians spend a lot of time reading and searching for literature,” said Shairoz Kherani, who until recently was Director of Diagnostic Services at HHS. (She has since moved to Halton Health Care, in nearby Oakville, Ont., where she is Director of Diagnostic Services and Laboratory.) “Finding the right information can be a daunting process. Now it will be readily available.”

“There are hundreds of new findings every day,” said Ian Maynard, CEO of RealTime Medical, of Mississauga, Ont., the company providing the AI-powered solution, called AICloudQA™. “Radiologists can spend two or more hours a day searching independent medical data sources,” said Maynard. “Our solution saves radiologists a significant amount of time and effort by searching multiple data sources simultaneously, relative to the case at hand. We’re like a Google search on steroids for relevant medical data, helping radiologists apply the latest findings to their patient care.”
Indeed, RealTime Medical is collaborating with Google Cloud and Sightline Innovation to deliver its AI-fueled solutions. The project is also supported by the National Research Council of Canada’s Industrial Research Assistance Program (NRC IRAP), making it a collaboration between these organizations and the hospitals using the solution.

Not only does the automated searching save time and contribute to better medical outcomes for patients, it also helps reduce radiologist “burnout,” a serious issue today as radiologists feel overloaded by the demands placed on them, Maynard said.

St. Joseph’s Healthcare Hamilton and Hamilton Health Sciences will introduce AICloudQA for peer learning and skills development across their sites by the end of this year. The hospitals will likely start with one site, or one physician group across all sites, and then steadily roll out the solution. The context-sensitive delivery of journal articles and other sources of medical information is expected to be of great help to the radiologists, nuclear medicine physicians, cardiologists and other clinicians who use the system. Some 70 to 80 radiologists and medical imaging experts at Hamilton Health Sciences and St. Joseph’s Healthcare Hamilton will be the prime users of AICloudQA.

RealTime Medical’s Ian Maynard said the importance of timely and accurate information cannot be overstated. As they read cases, radiologists want the latest literature and personal pattern-recognition notifications of what to be on the lookout for. “What they don’t want is patients and their families coming back to them later, asking why they didn’t know about the latest finding from the Cleveland Clinic, for example,” said Maynard.

Dr. Karen Finlay, radiologist and Interim Chief of Radiology at Hamilton Health Sciences, agreed that radiologists are currently taking “a lot of time for research”.
“If a radiologist steps off a case for five to 10 minutes to go to Google Scholar, that can really add up over the course of a day,” she said. And as anyone familiar with the cost of interruptions knows, each context switch magnifies that lost time, degrading diagnostic efficiency and, collectively, system-wide efficiency. The feed from AICloudQA, by contrast, is instantaneous, so the radiologist doesn’t have to stop what they are doing.

Notably, the RealTime Medical system also uses AI to scan the readings done by radiologists and to provide feedback on areas where they might want to focus or look more closely in the future. “It’s like the blind-spot warning system in your car, only it’s anonymously helping you avoid possible gaps in your own reading patterns,” said Maynard.

“This is very valuable,” said Kherani. “The system can do intelligent sampling and note where a radiologist may want to improve. It can even spot patterns, time of day and other conditions when they may be more vulnerable.”

Dr. Finlay observed that AICloudQA will also transform the process of peer learning at Hamilton Health Sciences and St. Joseph’s Healthcare Hamilton, in part by increasing the pool of radiologists participating. One limitation of current peer review methods is that there is often a limited number of potential reviewers, especially when a sub-specialty is involved – such as breast or neuro-imaging. RealTime Medical’s cloud-based solution can connect with other hospitals across the province and the country, creating a critical mass of peers with a cross-section of experiences in each sub-specialty. This enables a level of peer learning and best-practice sharing that is simply not possible with site-based systems.

Increasing the number of radiologists in the peer learning pool also helps with the issue of anonymity.
With site-based solutions, it’s sometimes possible to guess the identity of the radiologist or clinician being assisted, as physicians are often familiar with the reporting styles of their peers. Like all physicians – and people in general – radiologists don’t like to be judged. By making the system more anonymous, RealTime Medical makes peer learning more objective, valid and hence palatable for participants.

This is part of the “just culture” approach that physicians are calling for in such solutions. AICloudQA embraces the “just culture” principles that physicians want and deserve. It is not punitive, and the information is not shared. Instead, it’s sent privately to the participating radiologist or clinician, who can use it for self-improvement.

At Hamilton Health Sciences and St. Joseph’s Healthcare Hamilton, the peer review will be prospective – that is, done before the results are reported to the referring physician. Of course, only so many cases can be reviewed before the process becomes counter-productive; the need for continuous learning must be balanced against the extra burden placed on reviewers. “The trick is to make it a rich and rewarding learning experience, but not burdensome,” said Dr. Finlay. The two organizations currently aim to review 2 percent of cases, which is in keeping with other Canadian programs.

Kherani noted there are other potential benefits to the AICloudQA platform. It has a workload-balancing function that uses its intelligence to feed cases to the appropriate radiologist, based on availability and expertise. That not only gives the organization advantages in workflow and wait times, it also benefits patients, who are matched with the most expert radiologist available. She said the system can eventually support different types of physicians involved in imaging, such as cardiologists, and not only radiologists.
“It’s a multi-ology solution.”

Dr. Finlay noted the system also supports critical results reporting, so that urgent findings are quickly sent to referring doctors. It can also be tweaked to include notification of unexpected findings, flagging colleagues about problems that were unanticipated but should be addressed.
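The rate-based sampling and anonymization workflow described above can be sketched in a few lines of Python. This is an illustrative stand-in, not AICloudQA's actual logic: the function name, the fixed 2 percent rate and the hashing scheme are assumptions for the example, and the real system layers expertise- and availability-aware "intelligent sampling" on top of this basic idea.

```python
import hashlib
import random

def sample_for_peer_review(case_ids, radiologist_ids, rate=0.02, seed=None):
    """Select a random fraction of cases for prospective peer review and
    build anonymized tokens for the reading radiologists.

    Illustrative sketch only; names, rate and hashing are assumptions.
    """
    rng = random.Random(seed)
    n = max(1, round(rate * len(case_ids)))   # e.g. 2% of the day's cases
    selected = rng.sample(case_ids, n)
    # Replace each radiologist ID with a short hash so reviewers judge
    # the reading, not the reader.
    tokens = {
        rid: hashlib.sha256(str(rid).encode()).hexdigest()[:8]
        for rid in radiologist_ids
    }
    return selected, tokens

cases = [f"case-{i:04d}" for i in range(500)]
readers = ["dr_a", "dr_b", "dr_c"]
picked, anon = sample_for_peer_review(cases, readers, rate=0.02, seed=7)
print(len(picked))  # 10 cases: 2 percent of 500
```

Hashing the reader's identity rather than circulating it keeps the feedback loop anonymous, in the spirit of the "just culture" approach the article describes.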
