

Predicting the post-pandemic desires for the Latin music industry

Coachella, one of the music world’s mega-festivals, booked a diverse 2023 lineup with artists like Becky G, Kali Uchis and Rosalía. Bad Bunny, last year’s most-streamed global artist, made history as the festival’s first Spanish-language headliner. It also marked the first year since Coachella’s founding in 1999 that none of the headliners were white.

José Valentino Ruiz-Resto, an assistant professor in the School of Music at the University of Florida, co-authored a paper for the Journal of Arts Entrepreneurship Education that examined how the music industry would evolve after the pandemic, and it ultimately predicted the 2023 Coachella trend.

“The rise of Latin artists/headliners at festivals like Coachella is really a reflection of what has been happening in the music industry for the past two decades,” said Ruiz-Resto, who is also the program coordinator of Music Business & Entrepreneurship at UF.

Ruiz-Resto’s research showed that the post-COVID music industry would encourage more people to stay home and listen to music digitally, but the traditional Latin music experience is an outlier to this trend. The world-renowned multi-instrumentalist explains, “In order for concerts and festivals to maintain success, they needed to branch out to other markets to bring in those people who were still very much passionate about experiencing music in a live context.”

Although the pandemic initiated this shift, Ruiz-Resto has anticipated it for more than two decades, starting with the founding of the Latin Grammys in 2000. “The amount of production within the Latin recording academy is almost equivalent to that of all of the other genres in the American market combined. Latin music is the No. 1 meta genre in the music industry in terms of sales and fan support,” said Ruiz-Resto, now a four-time Latin Grammy Award winner.
Ruiz-Resto’s data predicted the need for a stronger focus on the Latin music enthusiasts who still actively attend festivals like Coachella: “In order for Coachella to ultimately succeed in the post-COVID era and attract people, they needed to bring in artists like Bad Bunny.”

This historic Coachella moment followed an announcement from the Recording Industry Association of America that Latin music revenues in the United States were at an all-time high, exceeding $1 billion in 2022. None of this surprised Ruiz-Resto, who observes, researches and directly participates in the Latin music industry. “Now bigger shows are catching up to what has been the largest-selling music market for years. It’s a testament to how positively Latin American cultures are inspiring listeners across the U.S.”

By Halle Burton


Georgia Southern University opens doors to Gullah Geechee Cultural Heritage Center

Georgia Southern University’s Gullah Geechee Cultural Heritage Center officially opened its doors with a grand opening and ribbon cutting on June 19. Coinciding with the Center’s Juneteenth celebration, the public was invited to attend the afternoon festivities at 13040 Abercorn Street in Savannah.

The ribbon cutting drew many local dignitaries, including Savannah Mayor Van R. Johnson, Georgia Rep. Carl Gilliard, Georgia Sen. Derek Mallow and Chatham County Chairman Chester Ellis. Gullah Geechee Cultural Heritage Corridor Executive Director Victoria Smalls, Gullah Geechee historian and preservationist Queen Quet, and Georgia Southern Provost and Vice President for Academic Affairs Carl Reiber, Ph.D., offered opening remarks.

“This is a monumental occasion,” said Maxine Bryant, Ph.D., director of the Gullah Geechee Center. “To celebrate our grand opening on the nationally recognized Juneteenth is extremely meaningful. We will simultaneously honor the freedom of enslaved Black Americans and the Gullah Geechee culture that has preserved more African traditions than any other group.”

The Gullah Geechee people of Coastal Georgia are descendants of enslaved Africans from plantations along the lower Atlantic coast. Many came from the rice-growing region of West Africa and were brought to the Americas for their agricultural and architectural knowledge and skills. The enslaved Africans were isolated on the Sea Islands, and this isolation enabled them to create and maintain a unique culture steeped in remnants of Africa. This culture became known as Gullah Geechee and is visible in the people’s distinctive arts, crafts, foodways, use of waterways, music, dance and language. Much of today’s Gullah Geechee community, estimated at 1 million people, can speak the African Creole language or tell the stories of their ancestors, who are credited with influencing Southern and American culture.
Local Gullah Geechee artists and the McIntosh County Shouters showcased their talent at the event. The Gullah Geechee Cultural Heritage Center, established in 2019, honors the myriad contributions made by Gullah Geechee people, provides educational resources for the public, promotes scholarship and research, and serves as a model for national reconciliation and reparations. It is part of the Gullah Geechee Corridor, which stretches across 27 counties in Georgia, South Carolina, North Carolina and Florida.

If you’re interested in learning more about Georgia Southern University’s Gullah Geechee Cultural Heritage Center, let us help. Simply reach out to Georgia Southern’s Director of Communications Jennifer Wise at jwise@georgiasouthern.edu to arrange an interview today.


Infant seating devices may reduce language exposure

When a parent needs to cook dinner or take a shower, often they will place their baby in a bouncy seat, swing, exersaucer, or similar seating device intended to protect the baby and grant a degree of independence to both the parent and infant. For many parents, these devices represent a helpful extra set of hands; for babies, the freedom to safely explore their immediate surroundings. As useful as these devices are to both parents and infants, they may present trade-offs regarding their effect on infants’ exposure to adult language, which is critical for language development. That’s according to a new study by researchers at the Stress and Early Adversity Lab at Vanderbilt Peabody College of education and human development.

Within infants’ natural environments and daily routines, the study examined the relationship between infants’ exposure to adult language and their placement in seating devices, which support posture and promote the infant’s ability to play with objects or observe their surroundings without direct support from a caregiver. The researchers found that infants were exposed to fewer words when spending time in seating devices compared to when spending time in other placements. They also found that infants who spent the most time in seating devices heard nearly 40 percent fewer daily words compared to infants who spent the least amount of time in seating devices. Infants with more, compared to less, seating device use also had less consistent exposure to adult language throughout the day.

Sixty mothers and their 4- to 6-month-old infants participated in this study. For three days, a Language Environment Analysis audio recording device (i.e., a “talk pedometer”) captured language exposure. The mothers inserted the audio recorder into the pocket of a vest their babies wore. Automated software estimated from the recordings the total number of adult words spoken to or near the infant over the course of a day.
To record real-time behaviors of infant placement, the mothers responded to 12 brief surveys per day about their infant’s current location and use of seating devices. Caregiver reports of their child’s placement in seating devices accounted for 10 percent of an infant’s daily exposure to adult words, which the researchers say is a striking finding given the complex nature of language exposure and how many other factors may influence children’s exposure to speech (e.g., a caregiver’s talkativeness or the presence of siblings).

Kathryn Humphreys, assistant professor of psychology and human development and an expert in infant and early childhood mental health, is the senior author of the study. She notes that infant seating devices can provide a convenient way to keep infants safely contained while caregivers attend to other tasks. However, given the potential for frequent and prolonged use of these devices, she says that parents may want to be intentional about interactive opportunities while the infant explores their surroundings, as well as consider wearing or otherwise carrying their infant on their body as much as possible to create more opportunities for engagement through speech.

“While we need more research to be certain that seating devices reduce the richness of infants’ language environments, these findings are influencing my own decisions about intentional placement with my 6-month-old,” Humphreys said.

She suggests that safe and convenient places are a boon for both infants and their caregivers, but that there is a risk of reduced interaction when infants are stationary and not moving to where their caregivers are active.


Episode 14| CorpusCast with Dr Robbie Love

CorpusCast is the podcast about corpus linguistics and what it can do for society. Join Dr Robbie Love as he speaks with top researchers in the field to find out more about how corpus linguistics – the study of linguistic patterns in large samples of language – is applied to a diverse range of areas including health, social justice and education.

On this episode of CorpusCast, Robbie chats to Professor Bas Aarts. Bas is director of the Survey of English Usage, an internationally recognised and highly regarded centre of excellence for research in the area of English Language and Linguistics.

Dr Robbie Love: https://bit.ly/3Zcgo36
Professor Bas Aarts: https://bit.ly/3YfFxsv
Aston Centre for Applied Linguistics: https://bit.ly/3QKHcSF
School of Social Sciences and Humanities: https://bit.ly/3JCRAd1
Find out more about courses related to this show: https://bit.ly/3pR705k

#TeamAston #CorpusCast #linguistics


AI-Generated Content is a Game Changer for Marketers, but at What Cost?

Goizueta’s David Schweidel pitted man against machine to create SEO web content, only to find that pairing an editor with bot-generated content trounces the human copywriter every time. Good news for companies looking to boost productivity and save cash, he says. But could there be other hidden costs?

In December 2022, The New York Times ran a piece looking back on the year’s biggest consumer tech updates. The review was mixed. Ownership shifts in the world of social media garnered special mentions, but hardware innovations had been largely “meh,” mused the Times. There was one breakthrough area that warranted attention, however: AI-powered language-processing tech capable of generating natural-looking text, the same technology that powers familiar chatbots. And one such technology could well be poised to “invade our lives in 2023.”

Earlier in December, AI research lab OpenAI released the latest update to its Generative Pre-Trained Transformer technology. Its latest iteration, ChatGPT, immediately went viral. Here was an AI assistant that sounded intelligent. Not only could it answer any question thrown its way without supervised training, but when prompted, it could also write blog posts, as well as find and fix bugs in programming code. ChatGPT could draft business proposals and even tell jokes. All of this at a speed that beggared belief.

Since its first release in 2020, OpenAI’s GPT technology has powered through a slew of updates that have seen its capabilities leap forward “by light years” in less than 24 months, says Goizueta Professor of Marketing David Schweidel. For businesses looking to harness this rapidly evolving technology, the potential is clearly enormous. But aren’t there also risks that industry and consumers alike will need to navigate?
Schweidel is clear that the academic community and initiatives such as the Emory AI Humanity Initiative have a critical role in asking hard questions—and in determining the limitations and dangers, as well as the opportunities, inherent in tech innovation—because, as he puts it, “these things are going to happen whether we like it or not.”

Man Versus Machine

To that end, Schweidel and colleagues from the Vienna University of Economics and Business and the Modul University of Vienna have put together a study looking at how well natural language generation technologies perform in one specific area of marketing: drafting bespoke content for website search engine optimization, better known as SEO. What they find is that content crafted by the machine, after light human editing, systematically outperforms its human counterparts—and by a staggering margin. Digging through the results, Schweidel and his colleagues can pinpoint an almost 80 percent success rate for appearing on the first page of search engine results with AI-generated content. This compares with just 22 percent for content created by human SEO experts. In other words, AI content passed to a human editor is roughly four times more effective than a skilled copywriter working alone.

Reaching these findings meant running two real-time, real-world experiments, says Schweidel. First, he and his colleagues had to program the machine, in this case GPT-2, an earlier incarnation of GPT. GPT relies on natural language generation (NLG), a software process that converts manually uploaded input into authentic-sounding text or content—comparable in some ways to the human process of translating ideas into speech or writing. To prepare GPT-2 for SEO-specific content creation, Schweidel et al.
started with the pre-trained GPT-2 and then let the machine do the heavy lifting: searching the internet for appropriate results based on the desired keyword, scraping the text of the websites, and fine-tuning GPT-2 to “learn” what SEO looks like, says Schweidel.

“We partnered with an IT firm and a university to run our field experiments. This meant creating SEO content for their websites using GPT-2 and actual human SEO experts, and then doing A/B testing to see which content was more successful in terms of landing in the top 10 search engine results on Google. So this was an opportunity to put the AI bot to the test in a real-world setting to see how it would perform against people.”

The results point to one clear winner. Not only did content from GPT-2 outperform its human rivals in SEO capabilities, it did so at scale. The AI-generated content scored a daily median result of seven or more hits on the first page of Google search results. The human-written copy didn’t make it onto the first results page at all. On its best day, GPT landed in the top 10 search results for 15 of its 19 pages of search terms, compared with just two of the nine pages created by the human copywriters—a success rate of just under 80 percent compared to 22 percent.

Savings at Scale

The machine-generated content, after being edited by a human, trounces the human in SEO. But that’s not all, says Schweidel. The GPT bot was also able to produce content in a fraction of the time taken by the writers, reducing production time and associated labor costs by more than 90 percent, he says. “In our experiments, the copywriters took around four hours to write a page, while the GPT bot and human editor took 30 minutes.
Now assuming the average copywriter makes an annual $45K on the basis of 1,567 hours of work, we calculate that the company we partnered with would stand to save more than $100,000 over a five-year period just by using the AI bot in conjunction with a human editor, rather than relying on SEO experts to craft content. That’s a 91 percent drop in the average cost of creating SEO content. It’s an orders of magnitude difference in productivity and costs.”

But there are caveats. First off, there’s the quality of the machine-generated content to consider. For all its mind-boggling capabilities, even the newly released ChatGPT tends to read as somewhat sterile, says Schweidel. That’s a problem both in terms of Google guidelines and brand coherence. Human editors are still needed to polish copy that can sound a little “mechanical.”

“Google is pretty clear in its guidelines: Content generated by machines alone is a definite no-no. You also need to factor in the uncanny valley effect, whereby something not quite human can come off as weird. Having an editor come in to smooth out AI content is critical to brand voice as well as the human touch.”

Asking the Big Questions

Then there are the moral and metaphysical dimensions of machine learning and creativity that beg an important question: Just because we can, does that mean we should? Here, Schweidel has grave reservations about the future of ChatGPT and its ilk.

“The potential of this kind of technology is extraordinarily exciting when you think about the challenges we face, from productivity to pandemics, from sustainable growth to climate change. But let’s be very clear about the risks, too. AI is already capable of creating content—audio, visual and written—that looks and feels authentic. In a world that is hugely polarized, you have to ask yourself: How can that be weaponized?”
At the end of the day, says Schweidel, the large language models powering these generative AIs are essentially “stochastic parrots”: trained mimics whose output can be hard to predict. In the wrong hands, he warns, the potential for misinformation—and worse—could well be “terrifying.”

“Shiny new tech is neither inherently good nor bad. It’s human nature to push the boundaries. But we need to ensure that the guardrails are in place to regulate innovation at this kind of pace, and that’s not easy. Governments typically lag far behind OpenAI and companies like it; even academics have a hard time keeping up. The real challenge ahead of us will be about innovating the guardrails in tandem with the tech—innovating our responsible practices and processes. Without effective safeguards in place, we’re on a path to potential destruction.”

Covering AI or interested in knowing more about this fascinating topic? Let our experts help with your coverage and stories. David Schweidel is the Rebecca Cheney McGreevy Endowed Chair and Professor of Marketing at Emory University’s Goizueta Business School. Simply click on David’s icon now to arrange an interview today.
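For readers who want to check the headline figures, the success rates reported above can be reproduced from the best-day page counts in the article. This is a minimal sketch: only the 15-of-19 and 2-of-9 counts come from the study as reported here, and the "roughly four times" comparison is derived arithmetic, not a figure from the paper itself.

```python
# Sanity-check of the reported field-experiment figures.
# Page counts are taken from the article; nothing else is assumed.

ai_pages_top10, ai_pages_total = 15, 19      # GPT-2 + human editor, best day
human_pages_top10, human_pages_total = 2, 9  # human copywriters, best day

ai_rate = ai_pages_top10 / ai_pages_total            # ~0.79 -> "just under 80 percent"
human_rate = human_pages_top10 / human_pages_total   # ~0.22 -> "22 percent"

print(f"AI-assisted success rate: {ai_rate:.0%}")
print(f"Copywriter success rate:  {human_rate:.0%}")
print(f"Relative effectiveness:   {ai_rate / human_rate:.1f}x")  # ~3.6x, i.e. roughly four times
```

The ratio works out to about 3.6, which matches the article's "roughly four times more effective" characterization.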


A.I. and Higher Education: The Rise of ChatGPT

ChatGPT. Maybe you’ve heard of it. Colleges and universities certainly have. It’s the chatbot that uses artificial intelligence (A.I.) technology to generate sentences based only on a brief prompt, writing anything from college-level papers to fanfiction. And as one might expect, the popular chatbot is taking the academic world by storm, raising questions about trust, academic integrity and even the future of college admissions. We turned to Seth Matthew Fishman, PhD, Assistant Dean of Curriculum and Assessment and associate teaching professor in the Department of Education and Counseling at Villanova University, to get his thoughts.

Q: What makes ChatGPT different and why is it causing such a stir?

Dr. Fishman: The use of chatbots is not a new debate in higher education. But ChatGPT and other similar free software certainly add a complex layer that we are only just now starting to have conversations about. There will be an ongoing debate about trust: Who wrote the material we are reading? To what extent, if any, will it impact faculty members? There are also A.I. digital images, graphics and design: To what extent do these programs impact our creative arts and design programs? I think these fields will mostly embrace A.I., though I can see issues of copyright infringement and artist control/attribution.

Q: How are other chatbots being used in academic settings?

DF: A.I. use already impacts higher education. If you ask any faculty member teaching a foreign language that requires translation, they will have tales of work submitted by students who used online translation software. But benefits do exist for students and faculty regardless—we’re able to interact a bit more with others, reducing some language barriers. I expect we will see hundreds of articles about ChatGPT’s impact on education; there are likely several dissertations underway, and I expect to see ChatGPT and similar software cited in papers and likely even in authorship groups.
Q: What will the impact of ChatGPT be on the college application and admissions process?

DF: I think we’ll see conversations from college admissions professionals on the impact of ChatGPT on higher education admissions. For example, key components of college applications such as essays and writing samples may be impacted. And ChatGPT may also be used to write some rather good letters of recommendation.

Q: What does the future hold? Will ChatGPT and similar A.I. programs maintain popularity?

DF: I’m curious if A.I. will be used to generate employment cover letters. Additionally, many corporations already use A.I. to sift through candidate applications to narrow down their applicant pools. It may continue to transcend academia. I also expect to hear more from our philosophy and ethics experts to help us better understand the societal and educational implications of using A.I. in these ways. And these kinds of conversations will be had with our students to engage them as partners in the learning experience. We will probably generate new ideas and different perspectives from doing just that.


Aston University students take home two prizes from annual European Union simulation event

EuroSim is an annual international intercollegiate simulation of the European Union
More than 150 students, from universities in North America and Europe, participate every year
The Aston EuroSim Team was awarded best debater in two categories

Aston University’s EuroSim team has returned from this year’s event with two awards. The Aston EuroSim Team was awarded best debater in the European Parliament Committee on Employment and Social Affairs (EMPL) and best in special roles (media/journalist).

EuroSim is an annual international intercollegiate simulation of the European Union (EU). Its purpose is to provide a framework for simulating EU decision-making on major current issues. More than 150 students, from 16 universities in North America and Europe, participate in the simulation. All students are assigned roles, including members of the European Parliament (MEPs), members of the European Commission, heads of government and national ministers. The module is designed to educate students about the inner workings of the European Union and enhance their learning experience. This year the event was hosted by the University of South Wales in Newport, the first time it has been held in the UK.

Dr Patrycja Rozbicka, a senior lecturer in politics and international relations who is the lead for Aston EuroSim and was European associate director for EuroSim (2019-2023), said: “Here at Aston University, the EuroSim module is one of the most innovative modules of the Aston Politics and International Relations Department’s undergraduate and MA programmes.”

Amin Hassan, a final year international relations and English language student at Aston University, who took part in EuroSim, said: “I would like to extend my gratitude to my team from Aston University, and special mentions to my lecturer Dr Patrycja Rozbicka and student director Chris Burden for organising and inviting us to this memorable trip.
“Representing Max Orville (my alter ego), MEP and Renew Europe Group, I worked together with my party and committee members with shared interests and values to ensure that no one is left behind by the proposed Social Climate Fund, which has recently been approved in real life.

“After three days packed with negotiations and meetings, we are pleased that the Social Climate Fund has been approved, and we strongly believe that it will support vulnerable people, households, micro-enterprises and transport users at risk of facing higher costs as the bloc introduces new climate measures.”

Chris Burden, European students director at EuroSim and PhD researcher at Aston University, said: “I had the greatest honour attending the EuroSim 2023 meeting at the ICC Wales as the European student director and part of Team Aston.

“The work that goes into this conference is unbelievable, and the students had a fantastic time debating and simulating questions surrounding social and climate action within Europe.

“This transatlantic conference is the highlight of any year.

“Thank you to our fantastic team from Aston University who brought home the two awards for their efforts.”

The next EuroSim will be held next year in Brockport, New York State, USA. If you want to read more about Aston EuroSim, click here.


Aston University forensic linguistics experts partner in $11.3 million funding for authorship attribution research

Aston Institute for Forensic Linguistics (AIFL) is part of the project to infer authorship of uncredited documents based on writing style
AIFL’s Professor Tim Grant and Dr Krzysztof Kredens are experts in authorship analysis
Applications may include identifying counterintelligence risks, combating misinformation online, fighting human trafficking and even deciphering the authorship of ancient religious texts

Aston University’s Institute for Forensic Linguistics (AIFL) is part of the AUTHOR research consortium, which has won an $11.3 million contract to infer authorship of uncredited documents based on writing style. The acronym stands for ‘Attribution, and Undermining the Attribution, of Text while providing Human-Oriented Rationales’.

Worth $1.3 million, the Aston University part of the project is being led by Professor Tim Grant and Dr Krzysztof Kredens, both of whom are internationally recognised as experts in authorship analysis and engage in forensic linguistic casework as expert witnesses. In addition to their recognised general expertise and experience in this area, Professor Grant has specific expertise in using linguistic analysis to enhance online undercover policing, and Dr Kredens has led projects to develop authorship identification techniques involving very large numbers of potential authors.

The AUTHOR team is led by Charles River Analytics and is one of six teams of researchers that won the Human Interpretable Attribution of Text Using Underlying Structure (HIATUS) programme sponsored by the Intelligence Advanced Research Projects Activity (IARPA). The programme uses natural language processing techniques and machine learning to create stylistic fingerprints that capture the writing style of specific authors. On the flip side is authorship privacy: mechanisms that can anonymise the identities of authors, especially when their lives are in danger.
Pitting the attribution and privacy teams against each other will hopefully motivate each, says Dr Terry Patten, principal scientist at Charles River Analytics and principal investigator of the AUTHOR consortium. “One of the big challenges for the programme and for authorship attribution in general is that the document you’re looking at may not be in the same genre or on the same topic as the sample documents you have for a particular author,” Patten says. “The same applies to languages: we might have example articles for an author in English but need to match the style even if the document at hand is in French. Authorship privacy too has its challenges: users must obfuscate the style without changing the meaning, which can be difficult to execute.”

In the area of authorship attribution, the research and casework experience from Aston University will assist the team in identifying and using a broad spectrum of authorship markers. Authorship attribution research has more typically looked to words and their frequencies as identifying characteristics. However, Professor Grant’s previous work on online undercover policing has shown that higher-level discourse features - how authors structure their interactions - can be important ‘tells’ in authorship analysis.

The growth of natural language processing (NLP) and one of its underlying techniques, machine learning, is motivating researchers to harness these new technologies to solve the classic problem of authorship attribution. The challenge, Patten says, is that while machine learning is very effective at authorship attribution, “deep learning systems that use neural networks can’t explain why they arrived at the answers they did.” Evidence in criminal trials can’t afford to hinge on such black-box systems.
It’s why the core condition of AUTHOR is that it be “human-interpretable”. Dr Kredens has developed research and insights into how explanations can be drawn out of black-box authorship attribution systems, so that the findings of such systems can be integrated into linguistic theory about who we are as linguistic individuals. Initially, the project is expected to focus on feature discovery: beyond words, what features can we discover to increase the accuracy of authorship attribution?

The project has a range of promising applications – identifying counterintelligence risks, combating misinformation online, fighting human trafficking, and even figuring out the authorship of ancient religious texts.

Professor Grant said: “We were really excited to be part of this project, both as an opportunity to develop new findings and techniques in one of our core research areas, and also because it provides further recognition of AIFL’s international reputation in the field.”

Dr Kredens added: “This is a great opportunity to take our cutting-edge research in this area to a new level.”

Professor Simon Green, Pro-Vice-Chancellor for Research, commented: “I am delighted that the international consortium bid involving AIFL has been successful. As one of Aston University’s four research institutes, AIFL is a genuine world-leader in its field, and this award demonstrates its reputation globally. This project is a prime example of our capacities and expertise in the area of technology, and we are proud to be a partner.”

Patten is excited about the promise of AUTHOR, as it is poised to make fundamental contributions to the field of NLP. “It’s really forcing us to address an issue that’s been central to natural language processing,” Patten says. “In NLP and artificial intelligence in general, we need to find a way to build hybrid systems that can incorporate both deep learning and human-interpretable representations.
The field needs to find ways to make neural networks and linguistic representations work together.” “We need to get the best of both worlds,” Patten says.

The team includes some of the world’s foremost researchers in authorship analysis, computational linguistics and machine learning from Illinois Institute of Technology, Aston Institute for Forensic Linguistics, Rensselaer Polytechnic Institute and the Howard Brain Sciences Foundation.


From gobbledygook to goblins: how a child learns to crack the written code - livestreamed public lecture

Aston Institute of Health and Neurodevelopment (IHN) will host the third in its series of livestreamed public lectures, Molecules to Minds, on Aston University’s digital channel Aston Originals on Thursday 3 November 2022.

Dr Laura Shapiro, a reader in psychology, will present her lecture ‘From gobbledygook to goblins: how a child learns to crack the written code’. The episode will explore Dr Shapiro’s research into how children’s experiences of learning to read shape how they learn in the future. The one-hour livestream will be followed by a Q&A and round-table discussion.

Laura will reveal the hurdles and fortunes on the journey from spoken to written language and will discuss how our experience of learning to read changes the way we learn forever. Dr Shapiro’s research focuses on the causes and consequences of children’s language and literacy development and is shaped both by fundamental scientific questions and by the concerns of practitioners and policymakers. The lecture will be co-presented with James McTaggart from the Highland Council, Scotland, and hosted by Professor Jackie Blissett, co-director of IHN.

Laura said: “Most adults take reading for granted, yet for a beginner reader, writing is just gobbledygook. The ability to crack the written code underpins all subsequent learning and provides the key to discovering new worlds and fictional friends.”

After the livestreamed lecture, Dr Shapiro and guests will host a Q&A and round-table discussion, where audience members can put their questions to the researchers. The panel includes James Cook, headteacher at Cawdor Primary, Scotland; Roxanne Mahroof, a parent; and Dr Pamela Wadende, a senior lecturer in education at Kisii University, Kenya.

Dr Shapiro added: “Being able to read is like a key to the adult world: it underpins our ability to learn. Our research shows that strong language skills are needed to learn to read, and the journey to mastery is a long one.

“The good news is that getting better at reading helps you learn more from each thing you read, which in turn spurs you to read more widely. Warning: reading can be addictive.”

The lecture is aimed at anyone interested in literacy development in children and young people, including academics, teachers, parents and young people themselves. The livestream will take place from 16:00 to 17:00 BST on Thursday 3 November on the Aston Originals YouTube channel. To register for this event, please visit our Eventbrite page.


Aston University to launch Aston Centre for Applied Linguistics

Aston University is launching a new research centre within its College of Business and Social Sciences. The Aston Centre for Applied Linguistics (ACAL), formerly known as the Centre for Language Research at Aston (CLaRA), aims to build on Aston University’s longstanding expertise in research into language education, languages, and applied linguistics by promoting interdisciplinary collaboration and establishing national and international networks and partnerships.

ACAL is an interdisciplinary, multilingual group of researchers – academic staff and research students – who work in the field of language and language education research. The Centre will officially be launched at a hybrid event at the University on 14 September 2022.

There will be talks by Aston University’s Dr Lucia Busso and Dr Marton Petyko, Dr Marcello Giovanelli, Dr Megan Mansworth and Dr Emmanuelle Labeau, as well as guest lectures from Professor Zhu Hua (IOE Faculty of Education and Society, UCL) and Terry Lamb (professor of languages and interdisciplinary pedagogy, University of Westminster, and an Aston University language graduate). The event will conclude with a celebration of the major publications of ACAL members in 2021-22.

Dr Emmanuelle Labeau, director of ACAL, said: “Language actually is all around us: we use it to articulate all our human activities. Languages actually are all around us: over 100 languages are spoken in Birmingham.

“My recent AHRC-funded project BRUM (Birmingham Research for Upholding Multilingualism) has shown that research in language(s) is needed in local schools, businesses, public services and culture.

“ACAL wants to put the ‘applied’ into linguistics to serve the University, the city, the region and beyond. Our researchers are a great asset to the University’s ambitions, and we cannot wait to inform and help shape the Aston University 2030 Strategy.”
