A Good Question, Or Betteridge’s Law?

There is a fine discussion about A.I. over on Barry’s blog. But this is a different sort of use.

Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?

Anthropic partnered with the US government to create a filter meant to block Claude from helping someone build a nuke. Experts are divided on whether it’s a necessary protection—or a protection at all.

At the end of August, the AI company Anthropic announced that its chatbot Claude wouldn’t help anyone build a nuclear weapon. According to Anthropic, it had partnered with the Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) to make sure Claude wouldn’t spill nuclear secrets.

The manufacture of nuclear weapons is both a precise science and a solved problem. A lot of the information about America’s most advanced nuclear weapons is Top Secret, but the original nuclear science is 80 years old. North Korea proved that a dedicated country with an interest in acquiring the bomb can do it, and it didn’t need a chatbot’s help.

How, exactly, did the US government work with an AI company to make sure a chatbot wasn’t spilling sensitive nuclear secrets? And also: Was there ever a danger of a chatbot helping someone build a nuke in the first place?

The answer to the first question is that it used Amazon. The answer to the second question is complicated.

Amazon Web Services (AWS) offers Top Secret cloud services to government clients where they can store sensitive and classified information. The DOE already had several of these servers when it started to work with Anthropic. (Snip; MORE on the page. It’s good, read it!)

Oops!

AI-Powered Coca-Cola Ad Celebrating Authors Gets Basic Facts Wrong

Emanuel Maiberg ·May 12, 2025 at 9:00 AM

Snippet:

In April, Coca-Cola proudly launched a new ad campaign it called “Classic,” celebrating famous authors and the sugary drink’s omnipresence in culture by highlighting classic literary works that mention the brand. The firm that produced the ad campaign said it used AI to scan books for mentions of Coca-Cola, and then put viewers in the point of view of the author, typing that portion of the text on a typewriter. The only issue is that the AI got some very basic facts about the authors and their work entirely wrong. 

One of the ads highlights the work of J.G. Ballard, the British author perhaps best known for his controversial masterpiece, Crash, and David Cronenberg’s film adaptation of the novel. In the ad, we get a first-person perspective of someone typing a sentence from “Extreme Metaphors by J.G Ballard,” which according to the ad was written in 1967. When the sentence gets to the mention of “Coca-Cola,” the typeface changes from the generic typewriter font to Coca-Cola’s iconic red logo.

(Snip; MORE)