Professor James Sample Provides National Commentary on Voting Rights, Key Supreme Court Cases
Professor James Sample of the Maurice A. Deane School of Law at Hofstra University continued to serve as a prominent national commentator this month, appearing across ABC News, MS NOW (formerly MSNBC), SiriusXM, and Newsday to analyze fast-moving developments in election law, constitutional doctrine, and executive power. Across these appearances, Professor Sample focused on the evolving legal and practical implications of the SAVE Act, including its potential burdens on married voters and broader access concerns. He also examined a series of high-stakes Supreme Court matters, including disputes over mail-in ballot deadlines and the constitutional debate surrounding birthright citizenship, offering insight into how the Court’s rulings could reshape election administration and individual rights. In addition, Professor Sample provided analysis of expanding presidential authority following the Court’s immunity ruling, situating current developments within a broader conversation about the scope and limits of executive power.

Study: Many pregnant women uncertain of marijuana risks even as use increases
Many pregnant women are unsure whether it’s safe to use marijuana or products containing cannabidiol, an active compound in marijuana, even as they increasingly turn to them to combat morning sickness, anxiety or insomnia, a recent University of Florida Health study shows.

The American College of Obstetricians and Gynecologists advises against the use of marijuana and cannabidiol, or CBD, during pregnancy. Marijuana use has been associated in some studies with adverse fetal neurodevelopmental outcomes. Although CBD is not intoxicating, evidence of its safety in human studies is sparse, and researchers remain concerned that it might nonetheless pose a danger.

The UF Health researchers said their study shows a need for the medical community to better educate women about the potential hazards to the fetus from using marijuana, also called cannabis. One worry is that some people believe the spreading legalization of marijuana or CBD around the nation amounts to a government stamp of approval that the products are safe, researchers said. Medical marijuana is legal in Florida, although its recreational use is not.

“If a medication is legal, we assume that maybe it’s safe, although other things like tobacco and alcohol are also legal and we know that those can be harmful to pregnancies,” said Kay Roussos-Ross, M.D., the study’s senior author and a professor in the UF College of Medicine’s Department of Obstetrics and Gynecology.

“We see a good deal of data out there that shows that there is increased risk of psychiatric and behavioral issues related to marijuana use in pregnancy, but we need more,” she added. “We need more so that we can be correct in our assessments and our educational efforts to women of reproductive age who are using marijuana.”

It’s difficult to quantify the rise of marijuana and CBD use during pregnancy, with most estimates showing an increase predating COVID-19. A 2021 federal survey reported that 7.2% of pregnant women used marijuana.
The UF Health study noted that emerging evidence from obstetrics care shows more pregnant women are trying the products, perhaps because of increased legalization. The study, published in Medical Cannabis and Cannabinoids, surveyed 261 women and used focus groups to explore participants’ perceptions of the products. The women were either pregnant, breastfeeding or caring for a child 5 years old or younger, and reported use of marijuana or CBD products in forms such as vaping, smoking, tincture oils or ointments.

“There seems to be a disconnect,” said Amie Goodin, Ph.D., an assistant professor in the UF College of Pharmacy’s Department of Pharmaceutical Outcomes and Policy and the study’s lead author. “About one in six pregnant women are telling us, ‘Yes, I have used marijuana or a CBD product while I’m pregnant.’ But half are saying, ‘I don’t know what the risks are.’”

About 40% of the pregnant women surveyed said they were unsure how risky it was to use marijuana once or twice a week during pregnancy, compared with 34.5% of women who were not pregnant when surveyed but who had children. Asked the same question about CBD, more than 52% of pregnant women were unsure of the risk, compared with 41.8% of mothers who weren’t pregnant when surveyed. About 36% of pregnant women reported using marijuana, compared with 65% of mothers not currently pregnant, perhaps reflecting at least some increased caution in the former group. CBD use was 19.9% among pregnant women and 38.2% among women who were not pregnant.

“Some women did mention that the legalization of marijuana has made marijuana more socially acceptable,” said study co-author Deepthi Varma, Ph.D., an assistant professor in the College of Public Health and Health Professions’ Department of Epidemiology.

The researchers said they were especially concerned that women were even less sure of the safety of CBD because it is widely available and often seen as harmless.
“You might notice that it’s even something that you can buy at a gas station or a grocery store,” Goodin said. “CBD in a purified form has actually got an FDA approval to treat certain types of pediatric epilepsy on its own … but pharmaceutical-grade CBD is not quite the same thing as you would expect to get if you were purchasing CBD oil at a smoke shop or a gas station.”
AI gives rise to the cut and paste employee
Although AI tools can improve productivity, recent studies show that they often intensify workloads instead of reducing them, in many cases even leading to cognitive overload and burnout. The University of Delaware’s Saleem Mistry says this is creating employees who work harder, not smarter. Mistry, an associate professor of management in UD’s Lerner College of Business & Economics, says his research confirms the findings of a Feb. 9, 2026 article in the Harvard Business Review.

Driven by the misconception that AI is an accurate search engine rather than a predictive text tool, these "cut and paste" employees are using the applications to pump out deliverables in seconds just to keep up with increasing workloads. Mistry notes that this prioritization of speed over accuracy is happening at every level of the organization:

• Junior staff: Blast out polished-looking but unverified drafts.
• Managers: Outsource deep learning and critical thinking to summarize data, letting their analytical skills atrophy.
• Power users: Build hidden, unapproved systems that bypass company oversight.

A management problem, not a tech problem

"When discussing this issue, I often hear leaders blame the technology. However, I believe that blaming the tech is missing the point; I see it as a failure of leadership," Mistry said. "When already overburdened employees who are constantly having to do more with less are handed vague mandates to just use AI without any training, they use it to look busy and produce volume-based work. Because many companies still reward the volume of work produced rather than the actual impact, employees naturally use these tools to generate slick but empty deliverables."
The real costs to organizations and incoming employees

Mistry outlines three risks organizations face if they don’t intervene:

1. The workslop epidemic. "These programs allow people to generate massive amounts of workslop, which is low-effort fluff that looks good but lacks substance. It takes seconds to create, but hours for someone else to decipher, fact-check, and fix," Mistry notes. "This drains money (up to $9 million annually for large companies) and destroys morale. As an educator, researcher, and a person brought into organizations to help fix problems, I for one do not want to be on the receiving end of a thoughtless, automated data dump, especially on tasks that require real skill and deep thinking."

2. Legal disaster. He adds, "When the cut and paste mentality makes its way into professional submissions, the risks to the organization are real and oftentimes catastrophic. Courts have made it perfectly clear: ignorance is no excuse. If your name is on the document, you own the liability. Recently, attorneys have faced severe sanctions, hefty fines, and case dismissals for blindly submitting fake legal citations made up by computers."

3. A warning for incoming talent. For new graduates entering this environment, Mistry offers a warning: do not rely on AI to do your deep thinking. "If you simply use AI to blast out polished but unverified drafts, you become a replaceable 'cut and paste' employee," he says. “To truly stand out, new grads must prove they have the discernment to review, tweak, and challenge what the computer writes. The hiring edge is no longer just saying, 'I can do this task,' but 'I know how to leverage and correct AI to help me perform it.'"

Four ideas to fix it

To thrive with these new tools and avoid the unintended consequences of untrained staff, organizations should:

1. Reinforce the importance of fact-checking and editing: Adopt frameworks that teach employees how to show their work and log how they verified computer-generated facts.
2. Change the incentives: Stop rewarding busywork, useless reports, and massive slide decks. Evaluate employees on accuracy and results.
3. Eradicate superficial work: Don’t use automation to speed up ineffective legacy processes. Instead, use it to identify and eliminate them entirely.
4. Make time for editing: Give yourself and your employees the breathing room to actually review, tweak, and challenge what the computer writes instead of accepting the first draft.

Mistry is available to discuss:

• Why AI is causing an epidemic of corporate "workslop" (and how to spot it).
• The leadership failure behind the "cut and paste" employee.
• How to rewrite corporate incentives to measure impact instead of volume in the AI era.
• Strategies for implementing safe, effective AI policies at work.
• How new college graduates can avoid the "workslop" trap in their first jobs.

To reach Mistry directly and arrange an interview, visit his profile and click on the "contact" button. Interested reporters can also send an email to MediaRelations@udel.edu.

Expert Insights: Environmental Risk in Times of Regulatory Change & Litigation Pressure
Environmental risks are becoming a central concern for organizations as regulations tighten, public expectations rise, and litigation related to environmental claims grows more common. Companies today must navigate a complex landscape where regulators, investors, and advocacy groups are paying closer attention to how environmental impacts are managed and reported.

Recently, J.S. Held published the article Environmental Claims and Disputes: Navigating Regulatory Change and Litigation Pressure, led by environmental risk and compliance expert Kimberly Logue Ortega. In this article, experts from J.S. Held share practical insights for insurance professionals and legal advisors on identifying environmental risks across industries and preparing for environmental disputes before they escalate. It examines how this increased scrutiny is creating new legal and financial pressures, particularly when organizations fail to comply with evolving regulations or when environmental claims made in public disclosures are challenged.

A key issue is the growing focus on corporate environmental statements and sustainability reporting. Businesses face potential consequences whether they overstate environmental achievements, commonly referred to as “greenwashing,” or avoid discussing them altogether. Without strong governance systems, clear internal oversight, and transparent reporting processes, organizations may expose themselves to regulatory penalties, legal disputes, and reputational damage. The article emphasizes that effective environmental governance is no longer simply a compliance exercise but an essential part of responsible corporate management.

Kimberly Logue Ortega specializes in environmental risk and compliance. With over fifteen years of experience in environmental and natural resources law, Ms. Logue Ortega provides consulting and expert services for industrial facilities and law firms throughout the country.
She has extensive experience assessing and managing potential and ongoing compliance obligations, and she routinely supports clients and media on rulemaking and legislative efforts focused on environmental and natural resources issues. View her profile.

As environmental regulations and stakeholder expectations continue to evolve, organizations that proactively strengthen their compliance frameworks and reporting practices will be better positioned to manage risk and build trust. The full report offers deeper insights into how companies can navigate regulatory change, reduce exposure to environmental claims, and develop stronger governance strategies in an increasingly complex landscape. To explore the topic further, connect with Kimberly through her icon below.

Julian Ku Analyzes International Law in Recent Media
Hofstra Law Professor Julian G. Ku has been featured in multiple news outlets, providing expert legal analysis on global issues and interpretations of international law. In a Newsweek article on China’s cancellation of flights to Japan, Prof. Ku provided commentary on how political pressures could play into fractious China-Japan relations. Prof. Ku also spoke with Dutch daily newspaper Trouw about China’s evolving vision of international law, explaining how Chinese leaders emphasize state sovereignty while downplaying human rights norms — a perspective that resonates in parts of the Global South. In Trouw, he described this selective approach as part of China’s broader effort to reshape the narrative around the postwar legal order. The Maurice A. Deane Distinguished Professor of Constitutional Law at Hofstra Law and Faculty Director of International Programs, Prof. Ku teaches and writes on international and constitutional law.

Researchers warn of rise in AI-created non-consensual explicit images
A team of researchers, including Kevin Butler, Ph.D., a professor in the Department of Computer and Information Science and Engineering at the University of Florida, is sounding the alarm on a disturbing trend in artificial intelligence: the rapid rise of AI-generated sexually explicit images created without the subject’s consent. With funding from the National Science Foundation, Butler and colleagues from UF, Georgetown University and the University of Washington investigated a growing class of tools that allow users to generate realistic nude images from uploaded photos — tools that require little skill, cost virtually nothing and are largely unregulated. “Anybody can do this,” said Butler, director of the Florida Institute for Cybersecurity Research. “It’s done on the web, often anonymously, and there’s no meaningful enforcement of age or consent.” The team has coined the term SNEACI, short for synthetic non-consensual explicit AI-created imagery, to define this new category of abuse. The acronym, pronounced “sneaky,” highlights the secretive and deceptive nature of the practice. “SNEACI really typifies the fact that a lot of these are made without the knowledge of the potential victim and often in very sneaky ways,” said Patrick Traynor, a professor and associate chair of research in UF's Department of Computer and Information Science and Engineering and co-author of the paper. In their study, which will be presented at the upcoming USENIX Security Symposium this summer, the researchers conducted a systematic analysis of 20 AI “nudification” websites. These platforms allow users to upload an image, manipulate clothing, body shape and pose, and generate a sexually explicit photo — usually in seconds. Unlike traditional tools like Photoshop, these AI services remove nearly all barriers to entry, Butler said. “Photoshop requires skill, time and money,” he said. 
“These AI application websites are fast, cheap — from free to as little as six cents per image — and don’t require any expertise.” According to the team’s review, women are disproportionately targeted, but the technology can be used on anyone, including children. While the researchers did not test tools with images of minors due to legal and ethical constraints, they found “no technical safeguards preventing someone from doing so.” Only seven of the 20 sites they examined included terms of service requiring image subjects to be over 18, and even fewer enforced any kind of user age verification. “Even when sites asked users to confirm they were over 18, there was no real validation,” Butler said. “It’s an unregulated environment.” The platforms operate with little transparency, using cryptocurrency for payments and hosting on mainstream cloud providers. Seven of the sites studied used Amazon Web Services, and 12 were supported by Cloudflare — legitimate services that inadvertently support these operations. “There’s a misconception that this kind of content lives on the dark web,” Butler said. “In reality, many of these tools are hosted on reputable platforms.” Butler’s team also found little to no information about how the sites store or use the generated images. “We couldn’t find out what the generators are doing with the images once they’re created,” he said. “It doesn’t appear that any of this information is deleted.” High-profile cases have already brought attention to the issue. Celebrities such as Taylor Swift and Melania Trump have reportedly been victims of AI-generated non-consensual explicit images. Earlier this year, Melania Trump voiced support for the Take It Down Act, which targets these types of abuses and was signed into law this week by President Donald Trump. But the impact extends beyond the famous. Butler cited a case in South Florida where a city councilwoman stepped down after fake explicit images of her — created using AI — were circulated online.
“These images aren’t just created for amusement,” Butler said. “They’re used to embarrass, humiliate and even extort victims. The mental health toll can be devastating.” The researchers emphasized that the technology enabling these abuses was originally developed for beneficial purposes — such as enhancing computer vision or supporting academic research — and is often shared openly in the AI community. “There’s an emerging conversation in the machine learning community about whether some of these tools should be restricted,” Butler said. “We need to rethink how open-source technologies are shared and used.” Butler said the published paper — authored by student Cassidy Gibson, who was advised by Butler and Traynor and received her doctorate degree this month — is just the first step in their deeper investigation into the world of AI-powered nudification tools and an extension of the work they are doing at the Center for Privacy and Security for Marginalized Populations, or PRISM, an NSF-funded center housed at the UF Herbert Wertheim College of Engineering. Butler and Gibson recently met with U.S. Congresswoman Kat Cammack for a roundtable discussion on the growing spread of non-consensual imagery online. In a newsletter to constituents, Cammack, who serves on the House Energy and Commerce Committee, called the issue a major priority. She emphasized the need to understand how these images are created and their impact on the mental health of children, teens and adults, calling it “paramount to putting an end to this dangerous trend.” "As lawmakers take a closer look at these technologies, we want to give them technical insights that can help shape smarter regulation and push for more accountability from those involved," said Butler. “Our goal is to use our skills as cybersecurity researchers to address real-world problems and help people.”
As tensions escalate over the possibility of the United States seeking control of Greenland — including threats of annexation that have drawn international backlash — seasoned international relations expert Glen Duerr, Ph.D., offers critical context for journalists reporting on the diplomatic, legal, and geopolitical dimensions of this unfolding crisis.

What's Happening

In early 2026, high-level rhetoric from U.S. political figures has revived debates about Greenland’s status as a strategic territory. What began as discussions of acquisition has evolved into broad international concern over sovereignty, alliance cohesion, and Arctic security. Denmark and Greenland have reaffirmed their commitment to autonomy, while NATO allies and the European Union warn that any forceful move by the U.S. could undermine alliance unity and violate international norms — raising profound questions about territorial integrity, international law, and the politics of national interest.

Dr. Glen Duerr’s teaching and research interests include nationalism and secession, comparative politics, international relations theory, sports and politics, and Christianity and politics. View his profile here.

How Dr. Glen Duerr Can Help Journalists Cover This Story

1. Understanding Strategic National Interests: Dr. Duerr’s expertise in international relations provides journalists with a framework to explain why Greenland has become such a focal point for U.S., European, and Arctic security policy — from its strategic location to its role in broader defense calculations.

2. Explaining Nationalism, Sovereignty & Self-Determination: His research on nationalism and secession is especially relevant as Greenlanders and Danish authorities assert self-determination and reject external control, a central narrative in the current debate.

3. Contextualizing International Norms & Legal Constraints: As commentators and policymakers discuss potential annexation, treaty obligations, and alliance commitments, Dr. Duerr can unpack how international law, treaties (such as NATO agreements), and norms against territorial conquest shape policy choices and diplomatic responses.

4. Making Sense of Geopolitical Fallout: With European leaders labeling aggressive claims as a form of “new colonialism” and threatening economic countermeasures, Dr. Duerr can help journalists interpret how Greenland could become a flashpoint affecting transatlantic relations, alliance politics, and global perceptions of U.S. foreign policy.

About Glen Duerr, Ph.D.

Dr. Glen Duerr is a Professor of International Studies at Cedarville University with deep expertise in international relations theory, nationalism, secession, and comparative politics. He holds a Ph.D. in Political Science and Government and is widely available to speak with media on geopolitics, sovereignty disputes, and the intersection of national interest and international order.

Why This Matters

The evolving crisis over Greenland is not merely a diplomatic dispute — it touches on fundamental questions of sovereignty, global strategic balance, alliance credibility, and international legal norms. Dr. Duerr is positioned to help journalists go beyond headlines, offering analysis that clarifies motivations, stakes, and implications for audiences tracking one of the most talked-about international issues of 2026.

Analyzing Legal Implications of Venezuela Intervention
Hofstra Law Professor James Sample has emerged as a leading legal analyst in national and regional media following the U.S. operation involving Venezuelan President Nicolás Maduro, offering expert commentary on constitutional authority, international law, and criminal procedure. Professor Sample appeared across major television, radio, and digital platforms, including ABC News, CBS New York, MS NOW, and Pacifica Radio, as developments unfolded surrounding the capture and federal prosecution. In multiple ABC News segments, Professor Sample analyzed the legality of the Venezuela operation under international law, characterizing the action as a potential violation of the United Nations Charter, and explained what to expect procedurally at the arraignment of Maduro and his wife on federal charges. His commentary also addressed the broader implications of asserting U.S. jurisdiction over a sitting foreign head of state.

Charities spend big to defend their board’s corporate agendas, new study reveals
Charities with corporate leaders on their boards spend an average of $130,000 a year lobbying on behalf of their connected companies. That’s according to a first-of-its-kind study that shows how companies benefit from their charitable work — and how charities may be all too happy to support their powerful board members in return for lucrative connections. The researchers behind the study say the findings could help policymakers and charity stakeholders keep tabs on a previously hidden form of political influence, but that such arrangements are, for now, perfectly legal.

“Charities stand to gain something by behaving in this way. It doesn’t always have to be corporations pushing charities to behave in a way they don’t want to,” said Sehoon Kim, Ph.D., a professor of finance at the University of Florida and senior author of the new study. “It’s a natural quid pro quo arrangement that arises from the incentives corporations and charities have.”

The American Medical Association offers one example of these incentives in action. In the 2010s, the AMA actively lobbied against efforts by federal agencies to curb opioid prescriptions. This benefited companies like Purdue Pharma, the maker of OxyContin widely blamed for exacerbating the opioid epidemic in the U.S. It turned out that Richard Sackler, the former president of the company, sat on the board of the AMA Foundation, a relationship viewed by many as controversial at the time. Sackler had arranged for millions in donations to the foundation, and other charities are likely looking to corporate board members to help engineer large donations for their charitable work by connecting them to other companies and leaders with deep pockets. Lobbying on behalf of their new friends, then, may simply be the most efficient way to ensure those donations keep flowing.
Kim collaborated with UF Professor Joel Houston, Ph.D., and Changhyun Ahn, Ph.D., of the Chinese University of Hong Kong to conduct the analysis, which is forthcoming in the journal Management Science. They painstakingly hand-collected data covering more than 400 charities and over 1,000 corporations, identifying board connections, donations and lobbying activities that fell both within and outside the charities’ typical political activity.

The researchers focused on larger charities that already engage in some lobbying on their own behalf; these lobbying charities are roughly three times the size of nonprofits that never lobby. After a new corporate board member joined, these charities changed their behavior. They were far more likely to lobby outside of their own interests, and even to work to support or defeat legislation that affected their new board member’s company, even when that legislation had nothing to do with their charitable mission. This worked out to about a 14% increase in the charity’s lobbying expenditures.

“These were the smoking guns that there’s something going on that’s not supposed to be happening,” Kim said.

Because lobbying is such an efficient use of resources, and because charities may lend their friendly brand to these lobbying efforts, this help from charities could significantly benefit the connected corporations. “These are previously unrecognized channels at play in terms of corporate political influence that policymakers need to be mindful of when assessing how influential corporations are likely to be,” Kim said.

Sun-Sentinel: What happens when parents go beyond sharenting?
So many parents routinely share photos and news about their kids on social media that the behavior has a name: sharenting. Usually harmless and well-meaning, it can also take a dangerous turn, exposing children to online predators, allowing companies to collect personal information and creating pathways for children to become victimized by identity theft. The risks are most pervasive when parents overshare to profit from their social media accounts. Whenever parents share, they are the gatekeepers, tasked with protecting their children’s information, but they are also the ones unlatching the gates. When parents profit from opening the gates, it is especially challenging to balance protecting their kids’ privacy against sharing their stories. Federal and state laws typically give wide deference to parents to raise their children as they see fit. But the state can and does intervene when parents abuse their children. Those laws protect children in the physical world. However, few laws shield children when parents risk harming them online. Let’s consider this hypothetical situation based on a composite of real-life events. Mia (fictional name) is a 7-year-old girl growing up in Orlando. Her mother is a stay-at-home parent who has a public Instagram account and considers herself an influencer. Many lingerie brands pay Mia’s mom to model their clothing. When a lingerie company from overseas offers Mia’s mom some money to have Mia also pose in their clothing, Mia’s mom says yes. Over the next few weeks, Mia and her mom model the clothing together in pictures and videos, sometimes wearing the outfits while reading together in bed, having pillow fights or being playful around the house — always in clearly intimate but arguably appropriate settings. Mia’s mom’s social media page explodes with new followers, many of whom appear to be grown men. The images on the page receive hundreds of likes and multiple comments. 
Mia’s mom deletes the most inappropriate comments but leaves others, hoping to increase engagement. As Mia’s mom’s social media following grows, so does the amount of money she earns. Mia tells her teacher about the social media page. Her teacher reaches out to Mia’s parents, to no avail. Mia’s mom keeps sharing. The teacher sees this as a potential form of abuse and neglect and, according to her obligation as a mandatory reporter of abuse, she calls in a report to the state’s central abuse registry. The teacher isn’t trying to get Mia’s mom in criminal trouble, but she thinks the family could use some education surrounding safe social media use and possibly access to financial support if they need this type of online exposure to pay the bills. The intake counselor declines to accept the hotline call. The counselor explains that the posting of pictures is not grounds for an abuse, abandonment or neglect investigation. The parent is sharenting, the counselor says, and that is within a parent’s right. Of course, child sexual abuse material is illegal, but the photos posted by Mia’s mom fall into a gray area — not illegal material, but likely harmful to Mia. Should there be a law to stop this? I believe there should be. Just as our views regarding child abuse have evolved, so must our views on sharenting. Merely 150 years ago, it was legal for parents to beat their children. It wasn’t until 1874, when a little girl named Mary Ellen was beaten severely by her caregiver, that courts began to step in. Drawing from existing laws prohibiting animal cruelty, the Society for the Prevention of Cruelty to Animals argued that Mary Ellen had the right to be free from abuse. At the time, there were laws protecting animals from harm by their caregivers but no laws protecting children from such harm! 
Back to the present: Mia’s disclosure to her teacher could have changed her life and led to her family getting online safety help, if only the child welfare laws were suitably tailored to protect her in the online world as they attempt to do offline. Child protection laws should be expanded to include harms that can be caused by online sharing. The law can both protect parental autonomy and honor children’s privacy through a comprehensive and multidisciplinary new approach toward protecting children online — one that allows for thoughtful investigation, education, remediation and prosecution of parents who use social media in ways that are significantly harmful to their children. This conduct, which falls beyond sharenting, is ripe for legal interventions that reset the balance between a parent’s right to share and a child’s right to online privacy and safety. Stacey Steinberg grew up in West Palm Beach and now lives in Gainesville, where she is a professor at the University of Florida Levin College of Law; the supervising attorney for the Gator TeamChild Juvenile Law Clinic; the director of the Center on Children and Families; and the author of “Beyond Sharenting,” forthcoming in the Southern California Law Review. This piece was also published in the South Florida Sun-Sentinel.





