What’s Wrong with Having Standards? A Call for Communications Self-Regulation in the Age of AI
A tutorial about spokes-bots turned into an examination of how AI is affecting the professions built on capturing people’s attention, and a conversation about how to preserve them.
I’m fresh back from Orlando, where I spoke at the Ragan Social Media Conference about AI spokes-bots, as well as all manner of other horrifying digital futures and the monsters that will populate them, and for the first time in a while I feel somewhat hopeful about our collective future.
At first, I wasn’t sure how it landed. I was a bit worried about the tepid in-room response some of my spicier takes received, but those worries were allayed when I was approached afterwards by no shortage of attendees who said they appreciated me calling out the worst perversions and transgressions of the AI era in communications.
I think most of us are peering over the edge of the deep ravine separating the promise of AI from its real-life consequences: reductions in force chalked up to purported “automation”; CEO after CEO actively bragging about plans to gut headcount, handwaving at AI as the rationale while degrading the workforces that got them where they are; and the dystopian slop work product coming out in place of what was once professional comms work.
A few other points of consideration from the talk:
My own back-of-napkin math suggests the communications profession and associated disciplines have shrunk by 168,000 jobs since the start of 2024. The industry gained some 150,000 jobs from 1990 to 2005. That’s 15 years of growth gone in 30 months.
Most AI adoption across the industry is happening under some form of mandate or obligation, not out of genuine interest in the technology.
An army of AI spokes-bots unleashed across social media would likely crater campaign ROI, drown out messaging, and make it impossible to capture anyone’s attention.
AI is incredible technology and can be used in comms to reduce burdensome tasks like administrative management, task automation, list building, productivity tracking, research, querying big data sets, gamification, programming, and brainstorming. There’s a good chance that when we cure cancer, AI will have played a significant role.
The problem is, cures to cancer are not what we are getting right now. We’re getting banal slop.
Against this backdrop, it’s hard to drum up that 2023-ish excitement about this friendly little chatbot that can save you time writing up a-matter or press releases or whatever.
No More Binky
The topline takeaway for some attendees will be that I deleted All Points West’s OpenAI account on stage and committed to never again using ChatGPT for any elements of agency management.
I had so many reasons for this. But that’s just the splashy attention-grabber. Instead, I hope the bigger takeaway was the call to self-regulate the communications fields (including marketing, journalism, public relations, and traditional and social media) by adopting some basic standards.
Look, maybe I’m too harsh a critic, but I believe that the things we make shouldn’t suck. And right now, if we’re being honest, most of this stuff sucks. So what to do?
Let’s Expect More From/For Ourselves
Below are the “Six Standards”, as I’ve decided to brand them (lol). This is a very, very basic framework, something we can debate and build on. I wrote these with the specific intent of reining in AI usage in creative work – think bland copy, stilted voiceovers, uncanny videos, creepy virtual avatars, and downright bad graphic design – not just because of how offensive that stuff is to our sensibilities, or how it collectively lowers the bar for what we expect from a creative endeavor, but because that’s the largest vector for loss of jobs and where we risk losing the creative artisanship that requires human inputs.
The Medicis are dead, and that means almost all creative output in our lifetime has been made in a commercial context. We can argue about why that sucks or why it’s good, actually. But there’s no arguing that if commercial interests are chiefly responsible for creative output, and those interests are chiefly interested in reducing cost, then it stands to reason the creative arts will be under attack for the foreseeable future.
We need to fight back.
Let’s start with transparency. There should be some disclosure when we’re looking at AI, right? This will become more important as this content becomes increasingly prevalent and indistinguishable from the real.
(Somewhat related, I find it interesting that many of the same people who defend AI content are not proud enough of their AI content to label it as AI. If it’s so great, why are you cropping out that Sora watermark?)
Headcount neutrality is something we should be pushing for across the board before, during and after taking on a project or a client. Is this going to impact someone’s job? Is my AI program implementation going to help people be more productive, or is it going to be a permissions structure for you to fire teams of people? This is a sticky, thorny question, and the answers won’t always be good, the criteria for judgment quite imperfect. But it should at least be a conversation.
Maybe don’t settle for the very first output from your prompt? It sure seems like a lot of the most discerning clients I’ve ever worked with are now just snapping up whatever the LLMs serve up, uncritically – but most of what they’re serving is slop. How about we try iterating, fine-tuning, or downloading the output and opening it in Photoshop or Premiere (remember those?).
On stage, I made the point that the idea of not stealing goes back at least as far as the authorship of the Old Testament. It shouldn’t be a revolutionary idea to make it unacceptable to blatantly rip-off the work of actual human beings in your AI creative output.
My readers with legal backgrounds might be familiar with the term “de gustibus non est disputandum”, loosely translating to “in matters of taste, there can be no dispute.”
I disagree. Look at this shit, man:
No disputatio that this suckssss
If we absolutely must make this stuff, can we all agree that the general quality of AI-supported creative work cannot dip below, say, the standard of what was acceptable in 2020? I’d love to resurrect my biggest pre-COVID-era nightmare client, the one who picked every little nit she could find, and apply her sensibilities to everything today. We didn’t know how good we had it.
Resource Management & Impact: The Next Frontier
And finally, the big one: resource management. Energy use. We’ve all heard about the ridiculous AI environmental impact and water usage problems, and that’s a very nuanced issue that most of us have not even begun to understand yet. Some companies and their models use far less water, some recycle coolant, some are wasteful, and some are piggies who just want to gulp down our grids and reservoirs.
We can do a lot to address this. Small language models can be leveraged to reduce impact. And we can keep defeating, at the municipal level, the data center projects that aim to drive up costs and deplete resources.
As comms people, we need to read the tea leaves. We’ve been here before. The next public awareness awakening will come in the form of a collective demand for guardrails governing smart resource management in AI. AI use will soon be judged against its environmental impact. It’s already starting. If you’ve ever managed a crisis before, you can see what’s coming. The next series of “Here’s what we got wrong/Our commitment going forward”-corporate mea culpas are going to be about how brands cooked through natural resources and human capital during the mad rush of AI’s early period.
As communicators we are eternally asked to reframe narratives around corporate virtue signaling, and this will be no different. We’re going to have to invent terms like “ethical language modeling” or “net zero compute” the same way we pushed “going green” and “corporate social responsibility” in recent decades.
The benefits of that “brand ambassador” spokes-bot you build will ring hollow when Wired Magazine is writing about how your brand had to boil Lake Huron to make it.
This seems like a huge undertaking, but it is possible. Just as architects in the 1990s convened to promulgate the first versions of what would become LEED Certification, we can come together and define these standards, ultimately providing a framework for safe, effective, ethical AI usage in communications.
My idea – and we’re just brainstorming here – is an industry-wide set of Standards governing LLM Operators and Practitioners For Real Ethical Execution. Or… SLOP FREE.
We can do this, and we don’t even have to start from scratch. There’s no shortage of professional organizations out there with governing bodies, advisory councils and leadership groups aimed at picking apart these issues. The infrastructure exists.
The genie isn’t going back in the bottle. We can either make some rules about what we are allowed to wish for, or make our own bottle, crawl into it, and die.






