There is a fine discussion about A.I. over on Barry’s blog. But this is a different sort of use.
Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?
Anthropic partnered with the US government to create a filter meant to block Claude from helping someone build a nuke. Experts are divided on whether it’s a necessary protection—or a protection at all.
At the end of August, the AI company Anthropic announced that its chatbot Claude wouldn’t help anyone build a nuclear weapon. According to Anthropic, it had partnered with the Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) to make sure Claude wouldn’t spill nuclear secrets.
The manufacture of nuclear weapons is both a precise science and a solved problem. A lot of the information about America’s most advanced nuclear weapons is Top Secret, but the original nuclear science is 80 years old. North Korea proved that a dedicated country with an interest in acquiring the bomb can do it, and it didn’t need a chatbot’s help.
How, exactly, did the US government work with an AI company to make sure a chatbot wasn’t spilling sensitive nuclear secrets? And also: Was there ever a danger of a chatbot helping someone build a nuke in the first place?
The answer to the first question is that it used Amazon. The answer to the second question is complicated.
Amazon Web Services (AWS) offers Top Secret cloud services to government clients where they can store sensitive and classified information. The DOE already had several of these servers when it started to work with Anthropic. (Snip. There’s more on the page. It’s good; read it!)