AI Is Already in Your Agency. The Legal Risk Is Whether You’re Ready for It.
Insights from Samantha Rothaus and Michael Lasky, Attorneys at Davis & Gilbert
AI has officially crossed the line from productivity booster to legal exposure for boutique communications agencies. That was the clear message from PRBI’s January JAM, led by Samantha Rothaus and Michael Lasky of Davis & Gilbert, who walked members through the real-world legal risks agencies face when using AI—and what to do before those risks turn into lawsuits.
The most misunderstood issue? Copyright.
Under U.S. law, AI-generated output is not copyrightable because it lacks human authorship. This means if your agency delivers AI-generated content without meaningful human involvement, you cannot legally stop others from copying, reusing, or monetizing it. Courts have already drawn this line—AI-generated images were denied copyright protection in a graphic novel case, while the human-written text was protected. The takeaway: AI can assist, but humans must author.
Infringement risk is the next landmine.
Generative AI tools are trained on massive datasets that may include copyrighted works and brand trademarks. In multiple cases, AI has produced content deemed “substantially similar” to protected material. Stable Diffusion reproduced a distorted Getty Images watermark. The New York Times has sued OpenAI, citing near-identical article outputs. When agencies deliver infringing content to clients, liability doesn’t stop at the software provider.
Rights of publicity create another layer of exposure.
AI-generated people, voices, and likenesses can resemble real individuals, whether intentionally or not. From deepfake celebrity endorsements to synthetic models that mirror real people, agencies risk violating publicity and privacy rights, even when they don’t mean to. Voice cloning tools have already triggered lawsuits, reinforcing the need for caution and consent.
Confidentiality is an often-overlooked risk.
Free or public AI tools may store prompts and inputs, which can include client strategies, unreleased product information, or internal discussions. One transcription tool famously emailed a sensitive termination transcript to the employee being terminated. Enterprise-level AI tools with confidentiality protections are now a baseline requirement, not a luxury.
Regulators are moving quickly.
New York and California will soon require disclosure of AI use in advertising and synthetic performances. The EU AI Act goes further, imposing a risk-based framework with transparency obligations for AI use across media. While federal U.S. law remains unsettled, disclosure requirements are becoming the new normal.
According to Rothaus and Lasky, the smartest agencies are doing four things now:
- Implementing internal AI policies
- Updating client contracts with AI addenda
- Flowing AI obligations down to vendors
- Keeping records of AI use and human oversight
AI isn’t the threat. Unmanaged AI is. Agencies that treat AI governance as core infrastructure—not an afterthought—will protect their clients, their reputations, and their businesses.