Sharing photos of your kids online? Here's what you should consider.

May 9, 2023


By Emma Richards


Today’s parents are the first to raise children alongside social media. In this era of likes, comments and shares, they must also decide when to post images of their children online and when to hold off to protect their privacy.


The practice of “sharenting,” in which parents post images of their children on social media platforms, has drawn attention to the intersection between the rights of parents and the rights of their children in the online world. Stacey Steinberg, a professor in UF’s Levin College of Law, author and mother of three, says parents need to weigh their right to post their child’s milestones and accomplishments online against the child’s right to dictate their own digital footprint and maintain their privacy.


Steinberg, like many parents, avidly posted photographs of her children online to document their childhoods. When she left her job as a child welfare attorney to become a professor, she began writing about her experiences as a mother. She also began rethinking posting about her children online, realizing that it could be doing more harm than good. And yet, there was little guidance for parents on what to consider when posting images and how to do so with their children’s safety in mind.


Among the problematic issues: Machine learning and artificial intelligence allow for the collection of information about people from online posts, but there is little control over, or understanding of, how that stored information is being used or what impact it will have on the next generation.


According to Steinberg, a Barclays study found that by the year 2030, nearly two-thirds of all identity theft cases will be related to sharenting. There are also concerns that pedophiles may collect and save photographs of children shared online. For example, one article she reviewed reported that 50% of the images found on pedophile image-sharing sites had originated on family blogs and social media.


Steinberg says parents should model appropriate social media behavior for their children, such as asking permission before taking and posting an image and staying present in the moment rather than living life through a lens or fixating on what’s online.


“I think it’s a danger that we’re not staying in the moment, that we’re escaping to our newsfeed or that we’re constantly posting and seeing who’s liked our images and liked what we’ve said instead of focusing on real connections with the people in front of us,” Steinberg said in an episode of the From Florida Podcast.


While parents serve as the primary gatekeepers for children’s access to the online world, tech companies and policymakers also have roles to play in setting parameters and adopting laws that protect children’s safety. Numerous European countries have already moved in this direction with concepts such as the “right to be forgotten,” which allows people to have information that is no longer relevant or is inaccurate removed from platforms such as Google to protect their name or reputation.


“The United States really would have a hard time creating a right to be forgotten because we have really strong free speech protections and we really value parental autonomy,” Steinberg said.


Google has, however, created a form that allows older kids to request that old photographs and content about them be removed from the internet, which Steinberg says is a promising step.


Steinberg would love to see other mechanisms adopted to minimize the amount of data collected about children and to ensure artificial intelligence uses online data responsibly and ethically.


In the meantime, parents can make online privacy a topic of discussion with their children and take proactive steps to limit their digital footprints, such as deleting old childhood photos.


“One thing that I really want to encourage families to do is not to fear the technology, but to try to learn about it,” Steinberg said.


