A P.S.A. About A.I., Brought To Us By KS A.G. Kris Kobach

Kansas Attorney General Kris Kobach’s PSA meekly counters utopian AI promises 

Eric Thomas

Whether you are watching the Super Bowl on TV, scrolling make-up tutorials on Instagram or listening to a technology podcast, you are being fed advertising for artificial intelligence.

The commercials are everywhere. And they promise the world.

Since the release of ChatGPT in 2022, the companies that build AI have deployed advertising to recruit us as loyal users of their astronomically expensive software. Persuade us now, and perhaps we will be loyal customers later.

Meanwhile, Kansas Attorney General Kris Kobach — with some help from a nebulous technology group — has sounded an alarm about AI this week, releasing a PSA.

“The reports are very troubling,” he says about threats posed by artificial intelligence.

Over the next few years, our culture will decide whether we are AI skeptics or fanatics. For that reason, the language that is used to sell AI — or steer us away from it — matters. Let’s listen to what is being said about the technology that could define the 21st century.

From Silicon Valley

In their ads, technology companies describe AI as a wonderland: productivity at work, inspired hobbies at home and wellness nirvana at the gym. The word choices would make a spiritual guru proud.

In marketing AI app-building software, Base 44 urges us: “Consider yourself limitless.” It’s also described as “the next thing you can’t live without.” The company uses the language of religious cults swirled with rampant consumerism. “Elite” plans start at $160 per month.

Besides AI, what product from the past 50 years could have generated all of these promises in one commercial? In 78 seconds of advertising, Perplexity offers:

  • “Get your time back.”
  • “Access to knowledge is easier than ever.”
  • “Discover something new every day.”
  • “Knowledge on-demand anytime anywhere. For anything you wanna know.”

There’s no modesty — just hyperbole.

Judging by their advertising, tech companies agree on AI’s greatest virtue: efficiency.

The YouTube description for a Copilot AI ad claims that “Microsoft 365 Copilot isn’t just a better way of doing the same things. It’s an entirely new way of working.” Press play on the video and watch a layered flurry of chatbot prompts, all written simultaneously and feverishly, including: “Want to get a jump start on your day?” The message envisions AI as hyperactive multichannel problem solving.

Other AI advertising promises are more direct. ChatGPT’s advertisement, “What Codex unlocks,” features a technology CEO who boasts about what AI made possible. You don’t need to understand his jargon to understand the promised efficiency.

“We were able to create a JavaScript runtime in just two weeks,” says Syrus Akbary Nieto. “Without Codex, it would have taken us easily one year.”

For people outside Silicon Valley, ChatGPT’s advertising shows tangible AI efficiencies, such as opening a new restaurant.

“I found the perfect spot,” someone types into the chatbot. “Help me write the business plan.”

In another ad, ChatGPT is the elixir for fixing the family car: “Dad said the truck is ours if we fix it. Help us get it running.”

The pitches implicitly promise success when you combine your ambition with AI’s wisdom — never mind the skills required to cook spaghetti bolognese or handle a wrench.

Elsewhere, two of Google’s recent AI commercials blend family values with problem solving. One commercial considers how to reassure a young boy about moving to a new house.

The ad’s answer: Open the Gemini chatbot and ask it to visualize his new bedroom, complete with the family dog’s bed. The commercial closes with words carrying a double meaning: “It will be whatever we want it to be,” the boy’s mom says. Both the house and the Gemini chatbot, the script suggests, can be family dreams. (snip-MORE, including the ad transcript, and info about the organization behind the ads. It’s good, and not much more to read. I just don’t like lifting other people’s work.)

A Good Question, Or Betteridge’s Law?

There is a fine discussion about A.I. over on Barry’s blog. But this is a different sort of use.

Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?

Anthropic partnered with the US government to create a filter meant to block Claude from helping someone build a nuke. Experts are divided on whether it’s a necessary protection—or a protection at all.

At the end of August, the AI company Anthropic announced that its chatbot Claude wouldn’t help anyone build a nuclear weapon. According to Anthropic, it had partnered with the Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) to make sure Claude wouldn’t spill nuclear secrets.

The manufacture of nuclear weapons is both a precise science and a solved problem. A lot of the information about America’s most advanced nuclear weapons is Top Secret, but the original nuclear science is 80 years old. North Korea proved that a dedicated country with an interest in acquiring the bomb can do it, and it didn’t need a chatbot’s help.

How, exactly, did the US government work with an AI company to make sure a chatbot wasn’t spilling sensitive nuclear secrets? And also: Was there ever a danger of a chatbot helping someone build a nuke in the first place?

The answer to the first question is that it used Amazon. The answer to the second question is complicated.

Amazon Web Services (AWS) offers Top Secret cloud services to government clients where they can store sensitive and classified information. The DOE already had several of these servers when it started to work with Anthropic. (snip-MORE on the page. It’s good. Read it!)

Oops!

AI-Powered Coca-Cola Ad Celebrating Authors Gets Basic Facts Wrong

Emanuel Maiberg · May 12, 2025 at 9:00 AM

Snippet:

In April, Coca-Cola proudly launched a new ad campaign it called “Classic,” celebrating famous authors and the sugary drink’s omnipresence in culture by highlighting classic literary works that mention the brand. The firm that produced the ad campaign said it used AI to scan books for mentions of Coca-Cola, and then put viewers in the point of view of the author, typing that portion of the text on a typewriter. The only issue is that the AI got some very basic facts about the authors and their work entirely wrong. 

One of the ads highlights the work of J.G. Ballard, the British author perhaps best known for his controversial masterpiece, Crash, and David Cronenberg’s film adaptation of the novel. In the ad, we get a first-person perspective of someone typing a sentence from “Extreme Metaphors by J.G Ballard,” which according to the ad was written in 1967. When the sentence gets to the mention of “Coca-Cola,” the typeface changes from the generic typewriter font to Coca-Cola’s iconic red logo.

(snip-MORE)