The integration of generative artificial intelligence into public relations and communications has moved beyond the experimental phase. As we progress through 2025, the profession faces a critical juncture where the ethical implications of AI adoption demand urgent attention, strategic frameworks, and a fundamental reimagining of professional responsibility.
The Silent Adoption Crisis
Recent research reveals a startling disconnect in how PR professionals are using AI tools. According to studies of UK communications practitioners, approximately 80% of content writers now use generative AI tools either frequently or occasionally. Yet only 20% have disclosed this usage to their managers, while 8% operate in complete secrecy. This clandestine adoption pattern signals a deeper crisis: the profession hasn’t established clear ethical guardrails for a technology that’s already reshaping daily workflows.
Michelle Songy, founder and CEO of Press Hook, anticipates the emergence of what she calls “AI-native PR pros” in 2025: the first generation of young professionals who have grown up with generative AI. These practitioners will bring fundamentally different expectations about automation, efficiency, and the role of human judgment in strategic communication. The question isn’t whether AI will transform PR workflows, but whether the profession can establish ethical frameworks quickly enough to guide this transformation responsibly.
The Authenticity Paradox
As AI-generated content becomes increasingly sophisticated, audiences are developing what communications strategist Christy Ragle describes as “AI fatigue.” Generic, algorithm-driven messaging that lacks genuine human insight is being tuned out across channels. This creates what industry observers call the “authenticity paradox”: the very technology that promises efficiency threatens to undermine the trust and credibility that form PR’s foundation.
Bob Oltmanns, owner and president of OPR Group, warns that the profession faces a crisis of misinformation and disinformation that extends far beyond domestic concerns. The proliferation of deepfakes, synthetic media, and AI-generated propaganda from both domestic and international sources represents what he calls an “exploding threat” that the industry is poorly equipped to combat. The challenge isn’t simply identifying false content. It’s maintaining audience trust when distinguishing between authentic and synthetic communication becomes increasingly difficult.
Jennifer Jones-Mitchell, founder of Human Driven AI, advocates for legislative intervention. She argues that allowing AI companies to self-regulate while quantum computing accelerates AI capabilities is untenable. The stakes, she suggests, extend beyond marketing ethics to encompass market manipulation, democratic integrity, and social cohesion. PR professionals, traditionally positioned as reputation guardians, now find themselves on the front lines of a technological transformation they didn’t create and struggle to control.
Developing Ethical AI Frameworks
Progressive organizations are moving beyond ad-hoc AI policies to develop comprehensive ethical frameworks. These frameworks typically address several critical areas: transparency about AI usage, data privacy protections, bias mitigation strategies, quality control mechanisms, and clear delineation between AI-assisted and human-generated content.
Kristelle Siarza Moon, owner and CEO of Siarza, emphasizes that accuracy has become a defining PR trend. In an environment where misinformation spreads rapidly, communications professionals bear responsibility for promoting truthful narratives and maintaining rigorous verification standards. This responsibility extends to AI-generated content, where practitioners must verify facts, cite credible sources, and engage in thorough research before publication. The professional risk, Moon notes, includes potential “cancellation” if accuracy standards slip.
The European Commission and UNESCO have established global AI ethics guidelines that provide useful frameworks for PR practitioners. These emphasize human oversight, algorithmic transparency, accountability mechanisms, and protections against discriminatory outcomes. Forward-thinking agencies are adapting these principles for client work, developing internal governance structures that balance AI’s efficiency gains with ethical obligations.
The Trust Imperative
Dan Farkas, chief advocate officer of Pass PR and professor at Ohio University’s E.W. Scripps School of Journalism, identifies accuracy as perhaps the defining ethical challenge. As media continues fragmenting and journalists migrate to platforms like Substack, PR professionals must blend social listening, analytics skills, and traditional media relations while maintaining unwavering commitment to factual accuracy. Some earned coverage remains completely organic, some requires payment, and some operates in a hybrid model, but all demand ethical transparency.
Organizations are discovering that ethical AI use isn’t simply a compliance exercise; it’s a competitive advantage. As audiences grow more sophisticated about detecting synthetic content, brands that clearly communicate their AI policies and maintain authentic human engagement are building stronger community relationships. This trend is particularly pronounced among younger demographics who, paradoxically, are both most comfortable with AI technology and most skeptical of inauthentic communication.
The Competency Question
Michelle Johnson, a PhD candidate at Syracuse University’s Newhouse School, argues that PR professionals must develop information science skills, including data analytics capabilities previously considered outside traditional PR roles. The era when communicators could be “the words people” while leaving quantitative work to others has ended. Predicting crises, mitigating reputation risk, and measuring AI impact requires deep understanding of data collection, algorithmic processes, and analytics frameworks.
This competency evolution extends beyond technical skills. Strategic use of AI requires understanding its limitations, recognizing potential biases, identifying when human judgment is irreplaceable, and making ethically informed decisions about appropriate applications. Agencies are investing in training programs that combine technical AI literacy with ethical reasoning, preparing practitioners to navigate ambiguous situations where guidelines provide incomplete direction.
Case Study: Navigating the Deepfake Crisis
Several organizations have already confronted AI ethics challenges that illustrate emerging best practices. When a major technology company discovered deepfake videos falsely depicting their CEO making controversial statements, their response demonstrated effective crisis management in the AI era. Rather than simply denying the videos, they proactively educated stakeholders about deepfake technology, published detailed technical explanations of how the videos were identified as fraudulent, and implemented blockchain-based verification for official communications.
This approach reflected the understanding that simply correcting misinformation is insufficient; rather, audiences need education about AI capabilities and limitations. The company’s transparency about its verification processes, willingness to share technical details, and commitment to stakeholder education became the story, overshadowing the initial deepfake attempt.
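The verification idea described in this case study can be illustrated with a simplified sketch. This is not the company’s actual system (which the article describes as blockchain-based); it uses a keyed hash as a stand-in to show the underlying principle: when an organization publishes a cryptographic signature alongside each official statement, anyone can detect whether a circulating quote has been altered. All names, keys, and text below are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key held by the communications team. A real
# deployment would use asymmetric signatures (e.g. Ed25519) with the
# public key, or the statement hash, anchored to a public ledger.
SIGNING_KEY = b"example-org-signing-key"

def sign_statement(text: str) -> str:
    """Produce a tamper-evident signature for an official statement."""
    return hmac.new(SIGNING_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_statement(text: str, signature: str) -> bool:
    """Check a circulating quote against the published signature."""
    return hmac.compare_digest(sign_statement(text), signature)

official = "Our CEO did not make the statements shown in the circulating video."
sig = sign_statement(official)

print(verify_statement(official, sig))                 # original text verifies
print(verify_statement(official + " (edited)", sig))   # altered text does not
```

The design point is that verification requires no trust in the channel carrying the quote, only in the published signature, which is why the company in the case study could invite stakeholders to check communications themselves.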
Building AI-Ready Teams
Ben Haber, executive vice president at Racepoint Global, predicts that widespread multimedia AI capabilities will make deepfakes exponentially more challenging to identify and control. The emergence of AI-generated influencers on social platforms presents both opportunities and risks for brands. Organizations are responding by building what industry leaders call “AI-ready teams” that combine technical expertise with ethical judgment.
These teams typically include data scientists who understand AI capabilities and limitations, legal experts familiar with emerging regulations, ethicists who can navigate ambiguous situations, communications strategists who understand audience perception, and technical specialists who can verify content authenticity. The integration of these disciplines represents a fundamental shift in how PR agencies and corporate communications departments structure themselves.
The Path Forward
As 2025 progresses, several trends are emerging that will shape AI ethics in communications. First, regulatory frameworks are tightening globally, with the European Union leading on comprehensive AI governance while other jurisdictions develop sector-specific rules. Second, industry associations including PRSA are developing professional standards that members must follow. Third, client expectations are evolving, with sophisticated organizations demanding proof of ethical AI practices before engaging agencies.
The profession stands at a crossroads. PR practitioners can proactively embrace ethical AI leadership, establishing standards that balance innovation with responsibility, or they can react defensively as crises expose problematic practices. The former path requires investment in education, infrastructure, and sometimes difficult conversations with clients about appropriate boundaries. But it offers the opportunity to shape AI’s role in communications rather than being shaped by it.
As Stephanie Fung, founder of Pilea Consulting, argues, PR professionals have a responsibility to advance digital literacy among broader populations. This extends beyond teaching people how to consume digital content critically. It includes ensuring AI tools are understood as fallible instruments rather than infallible oracles. The profession’s credibility in the AI era will ultimately depend on its willingness to acknowledge limitations, maintain transparency, and prioritize accuracy above efficiency.
The AI ethics revolution in PR isn’t coming. It’s here already. The practitioners who will thrive in this environment are those who view ethical frameworks not as constraints but as essential infrastructure for sustainable, trust-based communication in an increasingly complex technological landscape.
This article originally appeared in Ceylon Today, followed by The Morning.


