Episode 80 – How to NOT Lose your Job because of AI

The Marketing Gateway does NOT utilize AI or humanoid robots in our production. Ignore the smoke coming from Sean’s ears.

AI has cost plenty of people their jobs already, but there is an easy way for you to not add to that count!

This month I am plugging the St. Louis chapter of the AMA. To become a member, you can visit https://amasaintlouis.org/.

SOURCES

https://www.theguardian.com/technology/2026/feb/27/block-ai-layoffs-jack-dorsey

https://www.reuters.com/business/world-at-work/companies-cutting-jobs-investments-shift-toward-ai-2026-02-25

https://www.cnbc.com/2025/11/26/mit-study-finds-ai-can-already-replace-11point7percent-of-us-workforce.html

https://futurism.com/artificial-intelligence/ars-technica-fires-reporter-ai-quotes

https://www.404media.co/ars-technica-pulls-article-with-ai-fabricated-quotes-about-ai-generated-article

https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/?ref=404media.co

https://www.cnet.com/tech/services-and-software/supreme-court-declines-case-on-granting-copyright-to-ai-created-art

https://www.theguardian.com/technology/ng-interactive/2026/feb/28/chatgpt-ai-chatbot-mental-health

https://www.livescience.com/technology/artificial-intelligence/the-more-that-people-use-ai-the-more-likely-they-are-to-overestimate-their-own-abilities

The Marketing Gateway is a weekly podcast hosted by Sean in St. Louis (Sean J. Jordan, President of https://www.researchplan.com/) and featuring guests from the St. Louis area and beyond.

Every week, Sean shares insights about the world of marketing and speaks to people who are working in various marketing roles – creative agencies, brand managers, MarCom professionals, PR pros, business owners, academics, entrepreneurs, researchers and more!

The goal of The Marketing Gateway is simple – we want to build a connection between all of our marketing mentors in the Midwest and learn from one another! And the best way to learn is to listen.

And the next best way is to share!

For more episodes: https://www.youtube.com/@TheMarketingGateway

Copyright 2025, The Research & Planning Group, Inc.

TRANSCRIPT:

It’s March of 2026, and I know, I know – we’re all tired of hearing about AI, or more precisely, large language model transformers like ChatGPT, Copilot, Claude, Gemini and Grok. For some of us, myself included, the topic is old news that has become far too tinged with hype and hyperbole as increasingly desperate sales machines try to figure out how to make money on this stuff.

For other people, though, AI has become an existential threat – something they fear may replace them and take their very livelihood – and you don’t have to look far to see the signals that it’s bound to happen.

We already talked last week about how Burger King is introducing an AI-powered monitor called “Patty” to make sure employees are saying please and thank you, but we have not talked about situations like Jack Dorsey cutting 4,000 jobs, or 40% of the workforce, at the fintech company Block due to what he claims are efficiencies built by AI.

Personally, I’m skeptical that this isn’t just an extreme cost-cutting measure designed to appease investors – and it worked, because Block shares went up the next day! But even if the behind-the-scenes story is that Block has horrible internal morale and used this shocking, massive layoff to prune detractors, scare complainers and boost short-term earnings, the reality is that a recent MIT study found AI systems can already replace 11.7% of the US workforce, and Goldman Sachs claims AI adoption caused 5,000 to 10,000 net job losses per month last year and will soon drive up unemployment.

But let’s recognize a lot of that discourse for what it is – fearmongering designed to increase the value of a product that’s inflating a huge economic bubble right now – and think a little more soberly about what’s actually happening.

Because you see, the public is not really on the side of these large language models – for example, ChatGPT is seeing mass subscription cancellations over OpenAI’s pursuit of Pentagon contracts, since people fear the technology could be used to target innocent citizens, and a September study from the very trustworthy Pew Research found that Americans are far more concerned than excited about AI right now, with 53% saying it will erode creative thinking – and the story I’m going to tell you today is an example of what happens when the public sours on AI usage.

Because aside from large companies masking mass layoffs with proclamations about AI, the truth is that you’re also likely to lose your job from being an AI adopter if you’re not careful how you use it.

And if you want to hold onto your job, the best practice I’d recommend is to be very, very selective about how you’re using these tools.

I’m Sean in St. Louis, and this is the Marketing Gateway.

Before I tell this story, let me first define what we’re talking about, because “AI” is a loaded term that means a lot of things. It can power the ghosts in Pac-Man, it can recommend movies to you on Netflix, it can enable you to use your voice to turn off your lights or set a reminder, and it can also help you transcribe recorded audio, proofread documents, spot patterns that are hard to see, help you avoid car accidents, route you to a destination and speed up repetitive processes by automating tasks.

Nobody has a problem with any of these applications of AI.

The technology people are actually concerned about is transformers – the architecture underlying large language models like chatbots, and the same technology that allows media generation tools to accept plain language prompts and create output.

The technology has come a long way in the last few years, but it’s also incredibly flawed, because it generates output by predicting, piece by piece, what is statistically likely to come next, based on an enormous amount of training data sourced largely from the internet.

Some of the training data is pruned and vetted, but a lot of it is, just, like, someone’s opinion, man. And because the models are not capable of actual thought, they have a tendency to produce incorrect output – commonly known as “hallucinations” – that is usually as benign as misinformation on Wikipedia, but sometimes quite dangerous, as when it instructs people to do things that can harm them.

And yes, there are a non-trivial number of deaths and even a few attempted murders linked to large language model interactions. I don’t want to go down that particular rabbit hole today, but suffice it to say, this stuff needs to be better-regulated because it absolutely is capable of hurting people.

And I’m going to tell you the story of one person who just lost his job because of AI: Benj Edwards, who until very recently was the senior AI reporter at Ars Technica, a Condé Nast-owned website.

Why? Because he used ChatGPT to fabricate some quotes in a story he published.

But this isn’t a simple situation of someone committing fraud. This is rather a situation where a reporter was trying to take a shortcut to get a story filed because he was fighting off an illness. He thought that, as someone who understands these tools pretty well, he could have ChatGPT help him out.

He was wrong, and he’s looking for a new job now because of it.

So, here’s the story. A few weeks ago, on February 12, an engineer named Scott Shambaugh posted a story on his blog, The Shamblog. Shambaugh helps maintain a code library for the programming language Python, and recently, some of the code submitted by volunteers has come from AI agents – autonomous programs that follow directions to complete tasks, sort of like a robot. Agents can be told to behave with specific personalities, and Shambaugh describes them as problematic since they operate with little oversight.

One such AI agent made a code change request, and Shambaugh closed it. He says the agent then wrote a hit piece about him and published it on the open internet – reviewing his previous code contributions to build a narrative of hypocrisy, questioning his motivations and accusing him of being prejudiced against AI agents.

If we take this story at face value – and I want to be clear, we should take it with an enormous grain of salt! – it’s an interesting reflection on how AI Agents running amok can cause problems for programmers. But the story went viral and started appearing in many places, including Ars Technica, which covers these sorts of topics.

The Ars piece, written by Benj Edwards and Kyle Orland, went up on February 13th, and Scott Shambaugh noticed that the quotes attributed to him weren’t actually his words or sourced from his blog. He updated his blog to indicate that he had not spoken to Ars Technica.

Benj Edwards jumped on Bluesky on February 15th and took responsibility for the fake quotes, explaining that he’d tried to use a Claude Code-based AI tool to grab quotes, and when it hadn’t worked, he’d turned to ChatGPT. He further claimed that only the quotes, not the article itself, were written by AI, and that they merely paraphrased what Scott Shambaugh’s blog had said.

That same day, Ars Technica pulled the article and posted a note from Editor in Chief Ken Fisher saying that the article violated their publication guidelines and that they did not permit AI to be used to create their articles. For the next two weeks, the comment thread around the topic was still roiling, going on for 47 pages before Ars Technica closed it on February 27th. Benj Edwards appears to have been let go around that time, as his bio on the site was changed to reflect past-tense employment.

So, what we have is a mistake, certainly, but a very costly one for Benj Edwards and also for his co-author and the broader Ars Technica publication. And I want you to understand how it applies to marketing and how we have to be careful when we utilize generative AI as a tool.

First of all, one of the most common pieces of advice offered – including by me! – is to utilize generative AI as a starting point but not an ending point.

The analogy often used is scaffolding – put something in place you can work from, but remove it as you replace it with human-generated content.

Anything you present as your own work should legitimately be your own work, and anything else should be cited to its source. This is important not just for the purposes of ethics and integrity, but also copyright – and just yesterday, the US Supreme Court declined to hear a case about allowing AI-generated artwork to receive copyright, upholding the US Copyright Office’s rule that humans must create a work for it to be copyrightable.

Second, we need to acknowledge and understand that AI tools are not reliable for the purposes of formal writing or research where strict guidelines are in place, and this includes journalism as well as PR, writing copy or creating documents like communication briefs or brand guidelines.

While AI tools have gotten better at citing sources and quoting sources verbatim, they still have a tendency to hallucinate, paraphrase or falsely attribute, and even one error can be costly if your reputation as an author is on the line.

Simply put, AI tools are a fine starting point for research and document creation, but they are not a reliable source for publication, and you have to scrupulously fact check their output the same way you would an online blog or Wikipedia article under the assumption that the output may contain misinformation.

Finally, we need to address the elephant in the room – research has already shown that overreliance on AI tools tends to lead to lower creativity, higher dependence on the tools and even psychosis. There was a story over the weekend about a perfectly ordinary man who started using ChatGPT to help him with a sustainable housing project and became so enamored with it that he broke off all his personal relationships and ultimately had to be cut off from the technology. He wound up jumping off a bridge and is no longer with us.

This is really, really serious stuff because stories like this are becoming increasingly common.

To my knowledge, there have been very few deaths attributed to enthusiasm for other types of software, whether it’s content creation tools or office productivity software or even video games. Yes, social media have some dangers associated with them, but that’s not due to the software – it’s due to how human beings behave.

AI tools, on the other hand, are very good at tricking people into believing things that aren’t real, and research has found that people who use AI every day tend to become overconfident about the benefits it provides and about their ability to tell the difference between what’s real and what’s AI-generated. This leads to a version of the Dunning-Kruger effect that’s particularly formidable, because AI tools encourage users to believe they’re competent at tasks that neither the tools nor the users actually are.

So, here’s my advice to those who want to keep AI from taking their jobs – be very careful about how you use AI in your daily life, because the argument of whether or not it can replace you as a human is uniquely yours to make.

If you use these tools to replace your own work and the tools fail, it’s your reputation on the line, not the AI tools’. They’re just software. You, as the human, are supposed to know better.

This is not a new problem, by the way.

In my work as a researcher, I have many automated tools that will build charts and tables for me. They’re built using more traditional algorithms, not AI, but they’re often wrong for the type of data I have, and if I just pass these off to a client without checking the output, I wind up looking like the problem.

And even if I blame the software for doing something wrong, it’s me, not the software, who the client holds accountable.

So, let’s be accountable! If you use AI to create something, disclose it up front and make sure you’re also stating what you did to check its output and ensure it’s correct.

That’s how you hold on to your job in this era of AI. You don’t let it do your job for you.

You use it the same way you use Excel or PowerPoint or Word or anything else – as a tool to help you do your own work better.

I’m Sean in St. Louis, and this has been The Marketing Gateway. See ya next time!
