For people under the age of 30, social media platforms are the primary source of information, covering entertainment and lifestyle as well as politics, society, and health. Whether it's information on healthy lifestyles, early-detection screening, or countering medical misinformation: "If you want to reach people in this age group, there's no way around social media," says Nicolas Merl, psychologist at the German Cancer Research Center (DKFZ). Public health institutions, however, often lack the resources to produce high-quality social media posts consistently. AI-generated content could resolve this dilemma.
AI content attracts more attention and interaction
Yet how do users accept and perceive AI-generated content? A team led by first author Nicolas Merl conducted a systematic review and meta-analysis to find out. The researchers evaluated 33 international studies, published between 2020 and 2025, that compared AI-generated content with content created by humans.
The result: AI-generated posts attracted an average of 12 percent more interactions (likes, shares, comments, or clicks) than comparable content created by humans. The quality of the posts was also frequently rated positively. Although the effect was not statistically significant in every individual study, the pooled results showed a clear tendency in favor of AI-based content. "Posts that are perceived as both credible and emotionally engaging are particularly effective," explains first author Merl.
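How a meta-analysis arrives at such a pooled tendency can be sketched in a few lines. The following is a minimal illustration of random-effects pooling via the DerSimonian-Laird method; the effect sizes and variances are invented for illustration and are NOT the data from the Merl et al. review.

```python
import math

# Hypothetical per-study effect sizes (e.g., log engagement ratios,
# AI vs. human content) and their variances -- illustrative only,
# not taken from the actual 33 studies in the review.
effects = [0.15, 0.08, 0.20, -0.02, 0.11]
variances = [0.010, 0.020, 0.015, 0.030, 0.012]

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird)."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)     # 95% confidence interval
    return pooled, ci

pooled, (lo, hi) = dersimonian_laird(effects, variances)
print(f"pooled effect: {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The point of the sketch: individual studies may each fall short of significance, yet the inverse-variance-weighted pooled estimate can still show a clear overall tendency, which is the pattern the review reports for AI-generated content.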
Opportunities and responsibilities for cancer prevention
AI-generated social media content enables wide reach at low cost, personalized prevention messages, multilingual and accessible information, and rapid responses to new information needs. At the same time, the researchers emphasize the need for clear guidelines: AI must not create uncertainty or inadvertently spread misinformation. "Generative AI offers enormous opportunities for public health, but only if transparency, expert review, and ethical rules are adhered to," says study leader Titus Brinker from the DKFZ. He and his team therefore call for the following requirements for the use of AI-generated content in health communication:
- Transparency about the use of AI ("created with AI"),
- Professional review of content by experts,
- Avoidance of manipulative emotional appeals,
- Quality assurance through institutional standards.
Publication: Nicolas B. Merl, Franziska Schramm, Christoph Wies, Jana T. Winterstein, Titus J. Brinker: Generative AI in social media health communication: systematic review and meta-analysis of user engagement with implications for cancer prevention.
European Journal of Cancer, 2025, https://doi.org/10.1016/j.ejca.2025.116114