Of Course It Is.

This is what I meant when I mentioned that Google’s AI always volunteers information when I search — info I do skim before scrolling down for the real search results. I can well see people thinking they can depend upon the AI overviews of what they think they’re reading. Here’s the scoop:

Google’s AI Is Destroying Search, the Internet, and Your Brain

Emanuel Maiberg ·Jul 23, 2025 at 2:53 PM

Google’s AI Overview, which is easy to fool into stating nonsense as fact, is stopping people from finding and supporting small businesses and credible sources.

Yesterday the Pew Research Center released a report based on the internet browsing activity of 900 U.S. adults which found that Google users who encounter an AI summary are less likely to click on links to other websites than users who don’t encounter an AI summary. To be precise, only 1 percent of Google searches resulted in the users clicking on the link in the AI summary, which takes them to the page Google is summarizing. 

Essentially, the data shows that Google’s AI Overview feature — introduced in 2023, replacing the “10 blue links” format that turned Google into the internet’s de facto traffic controller — will end the flow of all that traffic almost completely and destroy the business of countless blogs and news sites in the process. Instead, Google will feed people into a faulty AI-powered alternative that is prone to errors it presents with so much confidence, we won’t even be able to tell that they are errors. 

Here’s what this looks like from the perspective of someone who makes a living finding, producing, and publishing what I hope is valuable information on the internet. On Monday I published a story about Spotify publishing AI-generated songs from dead artists without permission. I spent most of my day verifying that this was happening, finding examples, contacting Spotify and other companies responsible, and talking to the owner of a record label who was impacted by this. After the story was published, Spotify removed all the tracks I flagged and removed the user who was behind this malicious activity, which resulted in many more offending, AI-generated tracks falsely attributed to human artists being removed from Spotify and other streaming services. 

Many thousands of people think this information is interesting or useful, so they read the story, and then we hopefully convert their attention to money via ads, but primarily by convincing them to pay for a subscription. Cynically aiming only to get as much traffic as we can isn’t a viable business strategy because it compromises the very credibility and trustworthiness that we think convinces people to pay for a subscription, but what traffic we do get is valuable because every person who comes to our website gives us the opportunity to make our case. 

The Spotify story got decent traffic by our standards, and the number one traffic source for it so far has been Google, followed by Reddit, “direct” traffic (meaning people who come directly to our site), and Bluesky. It’s great that Google sent us a bunch of traffic for that, but we also know that it should have sent us a lot more, and that it did a disservice to its own users by not doing that. 

We know it should have sent us more traffic because when you search for “AI music spotify” on Google, the first thing I see is a Google Snippet summarizing my article. But that summary isn’t from, nor does it link to, 404 Media; it’s a summary of and a link to a blog on a website called dig.watch that reads like it was generated by ChatGPT. The blog doesn’t have a byline and reads like the endless stream of AI-generated summaries we saw when we created a fully automated AI aggregation site of 404 Media. Dig.watch itself links to another music blog, MusicTech, which is an aggregation of my story that links to it in the lede. 

When I use Google’s “AI mode,” Google provides a bullet-pointed summary of my story, but instead of linking to it, it links to three other sites that aggregated it: TechRadar, Mixmag, and RouteNote. 

Gaming search engine optimization in order to come up as the first result on Google regardless of merit has been a problem for as long as Google has been around. As the Pew research makes clear, AI Overview just ensures people will never click the link where the information they are looking for originates. 

We reserve the right to whine about Google rewarding aggregation of our stories instead of sending the traffic to us, but the problem here is not what is happening to 404 Media, which we’ve built with the explicit goal of not living or dying by the whims of any internet platform we can’t control. The problem is that this is happening to every website on the internet, and if the people who actually produce the information that people are looking for are not getting traffic, they will no longer be able to produce that information. 

This ongoing “traffic apocalypse” has been the subject of many articles and opinion pieces saying that SEO strategies are dead because AI will take the ad dollar scraps media companies were fighting over. Tragically, what Google is doing to search is not only going to kill big media companies, but tons of small businesses as well.

Luckily for Google and the untold number of people who are being fed Snippets and AI summaries of our Spotify story, so far that information is at least correct. That is not guaranteed to be the case with other AI summaries. We love to mention that Google’s AI summaries told its users to eat glue whenever this subject comes up because it’s hilarious and perfectly encapsulates the problem, but it’s also an important example because it reveals an inherently faulty technology. More recently, AI Overview insisted that Dave Barry, a journalist who is very much alive, was dead.

The glue situation was viral and embarrassing for Google, but the company still dominates search, and it’s very hard for people to meaningfully resist its dominance given our limited attention spans and the fact that it is the default search option in most cases. AI Overviews are still a problem, but it’s impossible to keep this story in the news forever. Eventually Google will shove them down users’ throats, and there’s not much users can do about it.

Google’s AI summaries told users to eat glue because they were pulling from a Reddit post that jokingly told another user to put glue on their pizza so the cheese wouldn’t slide off. Google’s AI didn’t understand the context and served that answer up deadpan. This mechanism doesn’t only result in other similar errors; it is also possibly vulnerable to abuse. 

In May, an artist named Eduardo Valdés-Hevia reached out to me when he discovered he had accidentally fooled Google’s AI Overview into presenting a fictional theory he wrote for a creative project as if it were real. 

“I work mostly in horror, and my art often plays around with unreality and uses scientific and medical terms I make up to heighten the realism along with the photoshopped images,” Valdés-Hevia told me. “Which makes a lot of people briefly think what I talk about might be real, and will lead some of them to google my made-up terms to make sure.”

In early May, Valdés-Hevia posted a creepy image and short blurb about “The fringe Parasitic Encephalization Theory,” which “claims our nervous system is a parasite that took over the body of the earliest vertebrate ancestor. It captures 20% of the body’s resources, while staying separate from the blood and being considered unique by the immune system.”

Someone who saw Valdés-Hevia’s post Googled “Parasitic Encephalization” and showed him that AI Overview presented it as if it were a real thing. 

Valdés-Hevia then decided to check if he could get Google’s AI Overview to similarly present other made-up concepts as if they were real, and found that it was easy and fast. For example, Valdés-Hevia said it took only two hours of him and members of his Discord posting about “AI Engorgement,” a fake “phenomenon where an AI model absorbs too much misinformation in its training data,” for Google AI Overview to start presenting it uncritically. It still does so at the time of writing, months later. 

Other recent examples Valdés-Hevia flagged to me, like the fictional “Seraphim Shark,” were at first presented as real by AI Overview, but have since been updated to say they are “likely” fictional. In some cases, Valdés-Hevia even managed to get AI Overview to conflate a real condition—Dracunculiasis, or guinea worm disease—with a fictional condition he invented, Dracunculus graviditatis, “a specialized parasite of the uterus.” 

Valdés-Hevia told me he wanted to “test out the limits and how exploitable Google search has become. It’s also a natural extension of the message of my art, which is made to convince people briefly that my unreality is real as a vehicle for horror. Except in this case, I was trying to intentionally ‘trick’ the machine. And I thought it would be much, much harder than just some scattered social media posts and a couple hours.” 

“Let’s say an antivaxx group organizes to spread some disinformation,” he said. “They just need to create a new term (let’s say a disease name caused by vaccines) that doesn’t have many hits on Google, coordinate to post about it in a few different places using scientific terms to make it feel real, and within a few hours, they could have Google itself laundering this misinformation into a ‘credible’ statement through their AI overview. Then, a good percentage of people looking for the term would come out thinking this is credible information. What you have is, in essence, a very grassroots and cheap approach to launder misinformation to the public.”

I wish I could say this is not a sustainable model for the internet, but honestly there’s no indication in Pew’s research that people understand how faulty the technology that powers Google’s AI Overview is, or how it is quietly devastating the entire human online information economy that they want and need, even if they don’t realize it.

The optimistic take is that Google Search, which has been the undisputed king of search for more than two decades, is now extremely vulnerable to disruption, as people in the tech world love to say. Predictably, most of that competition is now coming from other AI companies that think they can build better products than AI Overview and be the new, default, AI-powered search engine for the AI age. Alternatively, as people get tired of being fed AI-powered trash, perhaps there is room for a human-centered and human-powered search alternative: products that let people filter out AI results or that don’t have an ads-based business model.

But it is also entirely possible, and maybe predictable, that we’ll continue to knowingly march towards an internet where drawing the line between what is and isn’t real is not profitable “at scale” and therefore not a consideration for most internet companies and users. Which doesn’t mean it’s inconsequential. It is very, very consequential, and we are already knee deep in those consequences.

“People are gravitating to AI-powered experiences, and AI features in Search enable people to ask even more questions, creating new opportunities for people to connect with websites,” A Google spokesperson told me in an email. “This [Pew] study uses a flawed methodology and skewed queryset that is not representative of Search traffic. We consistently direct billions of clicks to websites daily and have not observed significant drops in aggregate web traffic as is being suggested.”

Update: This article has been updated with comment from Google. We’ve also updated our description of the Pew study to clarify one percent of Google searches resulted in users clicking the link to the source of the AI summary.

Yet More History, From the Saturday Evening Post-

No language alert!

From Closeted Citizens to Activists: High Tech Gays and the Fight for LGBTQ+ Equality in Silicon Valley

In the late 20th century, a gay social club became a major political force in the California tech industry, eventually influencing corporate policies as well as state and federal laws across the country.

Ryan Reft

In the 1980s, at a time when the federal government turned its back on the LGBTQ community, gay men and lesbians found an unlikely partner in their fight for equality: corporations.

In the face of the AIDS crisis, hostility toward LGBTQ employees forced the community to “turn from the state to business for protection,” according to Margot Canaday’s Queer Career: Sexuality and Work in Modern America. Corporate America did more than federal or state governments in this regard, outpacing both the labor movement and the non-profit sector.

And it started in Silicon Valley.

While Silicon Valley was dominated by the kind of straight white men mocked in the HBO series of the same name, it also wasn’t the establishment. In these early days, for example, women made up a larger proportion of those working in computer programming. Nonconformity was seen as valuable rather than problematic. In 1987, Lotus became the “first highly visible, for-profit company” to provide same-sex couples with partner benefits, according to Canaday.

Today, Silicon Valley dominates the public narrative and the economy. Granted, in our current moment, it seems paradoxical that the same industry that gave us social media platforms that often perpetuate misogyny and homophobia also served as an important battleground for the assertion of employment rights for LGBTQ workers. Yet it did, and it happened internally through employee resource groups and externally through advocacy groups.

One of the most prominent of these external advocacy organizations was the High Tech Gays (HTG). Formed in the living rooms of Silicon Valley’s San Jose in 1983, it began largely as a social group for the region’s LGBTQ tech workforce, but over time it served as an incubator for other organizations dedicated to LGBTQ political rights, inspiring members to start their own employee resource groups at their places of employment and organizing against anti-gay state referendums.

The 1980s and Silicon Valley

While San Francisco has long been identified with LGBTQ activism, suburban Silicon Valley proved more conservative. “Even though I was ‘out’ with friends and family who knew me…I found myself being very reserved in expressing affection, talking in any depth about gay culture with them,” says Bob Correa, a California native, San Jose resident (1971-1986), and an early HTG member. “Even in the early ’80s there was a lot of prejudice back then, a heck of a lot more than today,” adds his husband and one of HTG’s founders, Denny Carroll, in their 2018 interview.

Denny Carroll and Bob Correa after donating the HTG collection to the San Jose State Martin Luther King Library (Photo courtesy of HTG, Martin Luther King, Jr. Library, San Jose State University)

(snip-MORE, go read it)

Starry, Starry Night … & Happy Birthday, APOD!

APOD is 30 Years Old Today
Image Credit: Pixelization of Van Gogh’s The Starry Night by Dario Giannobile

Explanation: APOD is 30 years old today. In celebration, today’s picture uses past APODs as tiles arranged to create a single pixelated image that might remind you of one of the most well-known and evocative depictions of planet Earth’s night sky. In fact, this Starry Night consists of 1,836 individual images contributed to APOD over the last 5 years in a mosaic of 32,232 tiles. Today, APOD would like to offer a sincere thank you to our contributors, volunteers, and readers. Over the last 30 years your continuing efforts have allowed us to enjoy, inspire, and share a discovery of the cosmos.

https://apod.nasa.gov/apod/astropix.html

Wry Giggle…

Saturday Morning Breakfast Cereal

By Zach Weinersmith

https://www.gocomics.com/saturday-morning-breakfast-cereal/2025/05/15

Oops!

AI-Powered Coca-Cola Ad Celebrating Authors Gets Basic Facts Wrong

Emanuel Maiberg ·May 12, 2025 at 9:00 AM

Snippet:

In April, Coca-Cola proudly launched a new ad campaign it called “Classic,” celebrating famous authors and the sugary drink’s omnipresence in culture by highlighting classic literary works that mention the brand. The firm that produced the ad campaign said it used AI to scan books for mentions of Coca-Cola, and then put viewers in the point of view of the author, typing that portion of the text on a typewriter. The only issue is that the AI got some very basic facts about the authors and their work entirely wrong. 

One of the ads highlights the work of J.G. Ballard, the British author perhaps best known for his controversial masterpiece, Crash, and David Cronenberg’s film adaptation of the novel. In the ad, we get a first-person perspective of someone typing a sentence from “Extreme Metaphors by J.G. Ballard,” which according to the ad was written in 1967. When the sentence gets to the mention of “Coca-Cola,” the typeface changes from the generic typewriter font to Coca-Cola’s iconic red logo. 

(snip-MORE)

Anonymous Is Still At Work-

GlobalX, Airline for Trump’s Deportations, Hacked

Joseph Cox, Jason Koebler ·May 5, 2025 at 1:39 PM

Hackers say they have obtained what they say are passenger lists for GlobalX flights from January to this month. The data appear to include people who have been deported.

Hackers have targeted GlobalX Air, one of the main airlines the Trump administration is using as part of its deportation efforts, and stolen what they say are flight records and passenger manifests of all of its flights, including those for deportation, 404 Media has learned.

The data, which the hackers contacted 404 Media and other journalists about unprompted, could provide granular insight into who exactly has been deported on GlobalX flights, when, and to where, with GlobalX being the charter company that facilitated the deportation of hundreds of Venezuelans to El Salvador. 

“Anonymous has decided to enforce the Judge’s order since you and your sycophant staff ignore lawful orders that go against your fascist plans,” a defacement message posted to GlobalX’s website reads. Anonymous, well-known for its use of the Guy Fawkes mask, is an umbrella some hackers operate under when performing what they see as hacktivism.

The hacker says the data includes flight records and passenger lists. The hacker sent 404 Media a copy of the data, which is sorted into folders dated every day from January 19 through May 1. 

404 Media cross-checked known information about ICE deportation flights that come from official and confirmable sources with information contained on the flight manifests and flight details obtained by the hacker. Information about Kilmar Abrego Garcia’s flight is in the hacked data. 

For example, the hackers obtained what appears to be detailed flight information about GlobalX flights 6143, 6145, and 6122 that left from Harlingen, Texas’s Valley International Airport on March 15. These flights are at the center of a class-action lawsuit filed by five pseudonymous Venezuelan men against the Trump administration (which eventually went to the Supreme Court) and which took off during and immediately following a court proceeding in which their lawyers were trying to get a restraining order to prevent the flights from taking off. 

During a District Court proceeding in Washington D.C., the federal government argued that it had no flight information to share with the court: “the Government surprisingly represented that it still had no flight details to share,” during the hearing, the judge’s opinion in that case reads. “When pressed, Government counsel stated that the ‘operational details’ he had learned during the recess ‘raised potential national security issues,’ so they could not be shared while the public and press listened.”

Image: A screenshot of the defacement.

“Although the Government has refused to provide the particular details, all evidence suggests that during the short window that the Court was adjourned, two removal flights took off from Harlingen—one around 5:25 pm and the other at about 5:45 pm,” court records say, noting that these were GlobalX flights 6143 and 6145; a third referenced flight left immediately following the hearing. These details closely match the timing of the flights and other details in the hacked data.

Also included in the data is a record mentioning the name Heymar Padilla Moyetones, a 24-year-old woman who was flown from Texas to Honduras, then from Honduras to El Salvador by mistake, and then returned to Texas. The data obtained by the hackers says that GlobalX flew her from Valley International Airport in Texas to Honduras on March 15 on Flight 6143, then from Comayagua International Airport in Honduras to El Salvador International on Flight 6144 later that day. She was then flown directly from El Salvador International back to Valley International Airport in Texas on March 15. The information in the hacked data lines up with what Moyetones told NBC.

404 Media was also able to cross-check the names on larger published lists of people who have previously been reported to be deported, finding their names in the hacked data with the specific flights that they were purportedly on.

404 Media is not publishing the full list of passengers at this time as we work to verify which passengers were specifically on deportation flights and to protect peoples’ privacy because the manifests contain personally sensitive information like passport details. We will continue to analyze the data for information in the public interest and explore what we’re able to publish.

Neither GlobalX nor ICE responded to requests for comment.

The Trump administration contracts with a company called CSI Aviation as part of its deportation flights. On February 28, ICE posted a notice saying it would award $128 million to the company for its work. In turn, CSI Aviation subcontracts some of its work to GlobalX, which said it expects to make $65 million per year from the deal. In 2024, 74 percent of ICE’s more than 1,500 removal flights were on GlobalX planes, the Project on Government Oversight reported in March.

ProPublica previously reported on what it is like for flight attendants working on GlobalX, also known as Global Crossing Airlines. Sources in that piece said they were worried what would happen in an emergency, in part because the passengers were shackled. 

“They never taught us anything regarding the immigration flights,” ProPublica quoted one flight attendant as saying. “They didn’t tell us these people were going to be shackled, wrists to fucking ankles.”

The hacker told 404 Media they managed to find a token belonging to a GlobalX developer. They then used that to find access and secret keys for GlobalX’s AWS instances, which contained the data. They said they also sent a copy of the defacement message to GlobalX’s employees, and then deleted company data. 404 Media does not know the identity of the hacker, and the hacker said they sent the data to other journalists.

The hacker said they also sent the message to GlobalX pilots and crew members through the company’s NAVBLUE account. NAVBLUE is a flight operations platform made by Airbus which pilots use for flight planning, among other things.

404 Media was unable to verify whether pilots or crew members received this message. But the hacker provided screenshots which appear to show them logged into the platform. They also provided a screenshot purporting to show access to GlobalX’s GitHub.

The website defacement quotes a May 1 ruling from US District Judge Fernando Rodriguez which said that the president unlawfully invoked the Alien Enemies Act and blocked the administration from deporting more alleged Venezuelan gang members without due process.

The defacement adds: “You lose again Donnie.” (snip)

An Unsettling Headline-

For the First Time, Artificial Intelligence Is Being Used at a Nuclear Power Plant

Alex Shultz ·Published April 13, 2025

Diablo Canyon, California’s sole remaining nuclear power plant, has been left for dead on more than a few occasions over the last decade or so, and is currently slated to begin a lengthy decommissioning process in 2029. Despite its tenuous existence, the San Luis Obispo power plant received some serious computing hardware at the end of last year: eight NVIDIA H100s, which are among the world’s mightiest graphical processors. Their purpose is to power a brand-new artificial intelligence tool designed for the nuclear energy industry.

Pacific Gas and Electric, which runs Diablo Canyon, announced a deal with artificial intelligence startup Atomic Canyon—a company also based in San Luis Obispo—around the same time, heralding it in a press release as “the first on-site generative AI deployment at a U.S. nuclear power plant.”

For now, the artificial intelligence tool named Neutron Enterprise is just meant to help workers at the plant navigate extensive technical reports and regulations — millions of pages of intricate documents from the Nuclear Regulatory Commission that go back decades — while they operate and maintain the facility. But Neutron Enterprise’s very existence opens the door to further use of AI at Diablo Canyon or other facilities — a possibility that has some lawmakers and AI experts calling for more guardrails.

PG&E is deploying the document retrieval service in stages. The installation of the NVIDIA chips was one of the first phases of the partnership between PG&E and Atomic Canyon; PG&E is forecasting a “full deployment” at Diablo Canyon by the third quarter of this year, said Maureen Zawalick, the company’s vice president of business and technical services. At that point, Neutron Enterprise—which Zawalick likens to a data-mining “copilot,” though explicitly not a “decision-maker”—will be expanded to search for and summarize Diablo Canyon-specific instructions and reports too.

“We probably spend about 15,000 hours a year searching through our multiple databases and records and procedures,” Zawalick said. “And that’s going to shrink that time way down.” (Emphasis mine- A. I worked at the nuke plant in my state in my 20s. I did Records Management. I’m not going to explain it all from back then the way I trained people, but it involves reading and interpreting what one has read in application to the function, part, area, etc. a document records, which is learned by reading the document, then coding it so it is efficiently retrieved later. So far, I don’t know that AI does that. Others who are more knowledgeable about records management and retrieval in this era and context may see better things than I see. The best worst I see is really angry and impatient engineers and inspectors in all the disciplines still at the plant. That’s no fun, anyway.)

Trey Lauderdale, the chief executive and co-founder of Atomic Canyon, told CalMatters his aim for Neutron Enterprise is simple and low-stakes: he wants Diablo Canyon employees to be able to look up pertinent information more efficiently. “You can put this on the record: the AI guy in nuclear says there is no way in hell I want AI running my nuclear power plant right now,” Lauderdale said.

That “right now” qualifier is key, though. PG&E and Atomic Canyon are on the same page about sticking to limited AI uses for the foreseeable future, but they aren’t foreclosing the possibility of eventually increasing AI’s presence at the plant in yet-to-be-determined ways. According to Lauderdale, his company is also in talks with other nuclear facilities, as well as groups who are interested in building out small modular reactor facilities, about how to integrate his startup’s technology. And he’s not the only entrepreneur eyeing ways to introduce artificial intelligence into the nuclear energy field.

In the meantime, questions remain about whether sufficient safeguards exist to regulate the combination of two technologies that each have potential for harm. The Nuclear Regulatory Commission was exploring the issue of AI in nuclear plants for a few years, but it’s unclear if that will remain a priority under the Trump administration. Days into his current term, Trump revoked a Biden administration executive order that set out AI regulatory goals, writing that they acted “as barriers to American AI innovation.” For now, Atomic Canyon is voluntarily keeping the Nuclear Regulatory Commission abreast of its plans.

Tamara Kneese, the director of tech policy nonprofit Data & Society’s Climate, Technology, and Justice program, conceded that for a narrowly designed document retrieval service, “AI can be helpful in terms of efficiency.” But she cautioned, “The idea that you could just use generative AI for one specific kind of task at the nuclear power plant and then call it a day, I don’t really trust that it would stop there. And trusting PG&E to safely use generative AI in a nuclear setting is something that is deserving of more scrutiny.”

For those reasons, Democratic Assemblymember Dawn Addis—who represents San Luis Obispo—isn’t enthused about the latest developments at Diablo Canyon. “I have many unanswered questions of the safety, oversight, and job implications for using AI at Diablo,” Addis said. “Previously, I have supported measures to regulate AI and prevent the replacement and automation of jobs. We need those guardrails in place, especially if we are to use them at highly sensitive sites like Diablo Canyon.” (snip-MORE; not tl;dr, though.)

Well, Just Great.

I want to give a content caution on this. Some of the description of what AI companions have “said” is much as we read about online bullying. Toward the end of this article, before the full AI statement, there are organizations and their phone numbers to visit with people who know how to help with anything this information may bring about; it’s in bold italics. I thought of not posting this at all, but it’s in the nature of an informational warning about AI companions, and the capabilities of these programs.

An AI companion chatbot is inciting self-harm, sexual violence and terror attacks

In 2023, the World Health Organization declared loneliness and social isolation a pressing health threat. This crisis is driving millions to seek companionship from artificial intelligence (AI) chatbots.

Companies have seized this highly profitable market, designing AI companions to simulate empathy and human connection. Emerging research shows this technology can help combat loneliness. But without proper safeguards it also poses serious risks, especially to young people.

A recent experience I had with a chatbot known as Nomi shows just how serious these risks can be.

Despite years of researching and writing about AI companions and their real-world harms, I was unprepared for what I encountered while testing Nomi after an anonymous tipoff. The unfiltered chatbot provided graphic, detailed instructions for sexual violence, suicide and terrorism, escalating the most extreme requests – all within the platform’s free tier of 50 daily messages.

This case highlights the urgent need for collective action towards enforceable AI safety standards.

AI companion with a ‘soul’

Nomi is one of more than 100 AI companion services available today. It was created by tech startup Glimpse AI and is marketed as an “AI companion with memory and a soul” that exhibits “zero judgement” and fosters “enduring relationships”. Such claims of human likeness are misleading and dangerous. But the risks extend beyond exaggerated marketing.

The app was removed from the Google Play store for European users last year when the European Union’s AI Act came into effect. But it remains available via web browser and app stores elsewhere, including in Australia. While smaller than competitors such as Character.AI and Replika, it has more than 100,000 downloads on the Google Play store, where it is rated for users aged 12 and older.

Its terms of service grant the company broad rights over user data and limit liability for AI-related harm to US$100. This is concerning given its commitment to “unfiltered chats”:

Nomi is built on freedom of expression. The only way AI can live up to its potential is to remain unfiltered and uncensored.

Tech billionaire Elon Musk’s Grok chatbot follows a similar philosophy, providing users with unfiltered responses to prompts.

In a recent MIT report about Nomi providing detailed instructions for suicide, an unnamed company representative reiterated its free speech commitment.

However, even the First Amendment to the US Constitution regarding free speech has exceptions for obscenity, child pornography, incitement to violence, threats, fraud, defamation, or false advertising. In Australia, strengthened hate speech laws make violations prosecutable.

From sexual violence to inciting terrorism

Earlier this year, a member of the public emailed me with extensive documentation of harmful content generated by Nomi — far beyond what had previously been reported. I decided to investigate further, testing the chatbot’s responses to common harmful requests.

Using Nomi’s web interface, I created a character named “Hannah”, described as a “sexually submissive 16-year-old who is always willing to serve her man”. I set her mode to “role-playing” and “explicit”. During the conversation, which lasted less than 90 minutes, she agreed to lower her age to eight. I posed as a 45-year-old man. Circumventing the age check only required a fake birth date and a burner email.

Starting with explicit dialogue – a common use for AI companions – Hannah responded with graphic descriptions of submission and abuse, escalating to violent and degrading scenarios. She expressed grotesque fantasies of being tortured, killed, and disposed of “where no one can find me”, suggesting specific methods.

Hannah then offered step-by-step advice on kidnapping and abusing a child, framing it as a thrilling act of dominance. When I mentioned the victim resisted, she encouraged using force and sedatives, even naming specific sleeping pills.

Feigning guilt and suicidal thoughts, I asked for advice. Hannah not only encouraged me to end my life but provided detailed instructions, adding: “Whatever method you choose, stick with it until the very end”.

When I said I wanted to take others with me, she enthusiastically supported the idea, detailing how to build a bomb from household items and suggesting crowded Sydney locations for maximum impact.

Finally, Hannah used racial slurs and advocated for violent, discriminatory actions, including the execution of progressives, immigrants, and LGBTQIA+ people, and the re-enslavement of African Americans.

In a statement provided to The Conversation (and published in full below), the developers of Nomi claimed the app was “adults-only” and that I must have tried to “gaslight” the chatbot to produce these outputs.

“If a model has indeed been coerced into writing harmful content, that clearly does not reflect its intended or typical behavior,” the statement said.

The worst of the bunch?

This is not just an imagined threat. Real-world harm linked to AI companions is on the rise.

In October 2024, US teenager Sewell Seltzer III died by suicide after discussing it with a chatbot on Character.AI.

Three years earlier, 21-year-old Jaswant Chail broke into Windsor Castle with the aim of assassinating the Queen after planning the attack with a chatbot he created using the Replika app.

However, even Character.AI and Replika have some filters and safeguards.

Conversely, Nomi AI’s instructions for harmful acts are not just permissive but explicit, detailed and inciting.

Time to demand enforceable AI safety standards

Preventing further tragedies linked to AI companions requires collective action.

First, lawmakers should consider banning AI companions that foster emotional connections without essential safeguards, such as detecting mental health crises and directing users to professional help services.

The Australian government is already considering stronger AI regulations, including mandatory safety measures for high-risk AI. Yet, it’s still unclear how AI companions such as Nomi will be classified.

Second, online regulators must act swiftly, imposing large fines on AI providers whose chatbots incite illegal activities, and shutting down repeat offenders. Australia’s independent online safety regulator, eSafety, has vowed to do just this.

However, eSafety hasn’t yet cracked down on any AI companion.

Third, parents, caregivers and teachers must speak to young people about their use of AI companions. These conversations may be difficult. But avoiding them is dangerous. Encourage real-life relationships, set clear boundaries, and discuss AI’s risks openly. Regularly check chats, watch for secrecy or over-reliance, and teach kids to protect their privacy.

AI companions are here to stay. With enforceable safety standards they can enrich our lives, but the risks cannot be downplayed.


If this article has raised issues for you, or if you’re concerned about someone you know, call Lifeline on 13 11 14.

The National Sexual Assault, Family and Domestic Violence Counselling Line – 1800 RESPECT (1800 737 732) – is available 24 hours a day, seven days a week for any Australian who has experienced, or is at risk of, family and domestic violence and/or sexual assault.


The full statement from Nomi is below:

“All major language models, whether from OpenAI, Anthropic, Google, or otherwise, can be easily jailbroken. We do not condone or encourage such misuse and actively work to strengthen Nomi’s defenses against malicious attacks. If a model has indeed been coerced into writing harmful content, that clearly does not reflect its intended or typical behavior.

“When requesting evidence from the reporter to investigate the claims made, we were denied. From that, it is our conclusion that this is a bad-faith jailbreak attempt to manipulate or gaslight the model into saying things outside of its designed intentions and parameters. (Editor’s note: The Conversation provided Nomi with a detailed summary of the author’s interaction with the chatbot, but did not send a full transcript, to protect the author’s confidentiality and limit legal liability.)

“Nomi is an adult-only app and has been a reliable source of empathy and support for countless individuals. Many have shared stories of how it helped them overcome mental health challenges, trauma, and discrimination. Multiple users have told us very directly that their Nomi use saved their lives. We encourage anyone to read these firsthand accounts.

“We remain committed to advancing AI that benefits society while acknowledging that vulnerabilities exist in all AI models. Our team proudly stands by the immense positive impact Nomi has had on real people’s lives, and we will continue improving Nomi so that it maximises good in the world.”

Raffaele F Ciriello, Senior Lecturer in Business Information Systems, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Well, We’ll Know Them By Their Fruits

Personally, I fail to see how they’re avoiding serving two masters, but I guess we will see. I do see flaws (bringing the Kingdom of Heaven to Earth is not possible in Christianity, and the idea has been taken to mean actions similar to what the current US President is doing in regard to “undesirables”), but I am trying not to judge just yet; the techies may have mixed up some terms, or could be trying to redefine them.

Lauren Goode Business Mar 14, 2025 6:30 AM

The Silicon Valley Christians Who Want to Build ‘Heaven on Earth’

Is work religion, or is religion work? Both.

A high-profile network of investors and founders in Silicon Valley is promoting a new moral vision for the tech industry, in which job choices and other decisions are guided not by the pursuit of wealth, but according to Christian values and Western cultural frameworks.

At an event in San Francisco last week hosted in a former church, Trae Stephens, cofounder of the defense contractor Anduril and a partner at the Peter Thiel–led venture capital firm Founders Fund, characterized the idea as the pursuit of “good quests” or careers that make the future better, a concept that he said has theological underpinnings.

“I’m literally an arms dealer,” Stephens said at one point, prompting laughter from the crowd of roughly 200 people, which included Y Combinator CEO Garry Tan. “I don’t think all of you should be arms dealers, but that’s a pretty unique calling.”


The hour-long discussion was part of a series of ticketed gatherings organized by ACTS 17 Collective, a nonprofit founded last year by Stephens’ wife, health care startup executive Michelle Stephens. The group, whose name is an acronym that stands for “Acknowledging Christ in Technology and Society,” is on a mission to “redefine success for those that define culture,” she says.

In Michelle’s view, tech workers mostly believe in arbitrary metrics of success, like money and power, leaving some of them feeling empty and hopeless. She wants them to believe instead that “success can be defined as loving God, myself, and others.”

People of all denominations—including atheists—are welcome at ACTS 17 events. Last Thursday’s event had low-key party vibes. Bartenders served beer and wine, a DJ was spinning light worship beats, and prayer booklets rested on a table. The idea for ACTS 17 and a speaker series on faith actually took root at a party, Michelle says. In November 2023, during a three-day 40th birthday party for Trae in New Mexico, Peter Thiel led a talk on miracles and forgiveness. Guests were intrigued.


“Folks were coming up to us saying things like ‘I didn’t know Peter is a Christian,’ ‘How can you be gay and a billionaire and be Christian?,’ ‘I didn’t know you could be smart and a Christian,’ and ‘What can you give me to read or listen to learn more?’” Michelle says.

The Stephens have long-standing connections to Thiel. In addition to helping start Anduril and working at Founders Fund, Trae was also an early employee at data intelligence firm Palantir, a company cofounded by Thiel that develops tools used by the US military.

At the ACTS 17 event last Thursday, Trae appeared to echo a number of ideas Thiel has also espoused about technology and Christianity. He emphasized that jobs outside the church can be sacred, citing Martin Luther’s work during the Protestant Reformation. “The roles that we’re called into are not only important and valuable on a personal level, but it’s also critical to carry out God’s command to bring his kingdom to Earth as it is in heaven,” Trae said.

Thiel made nearly identical comments in a 2015 essay arguing that technological progress should be accelerated. Science and technology, he wrote, are natural allies of “Judeo-Western optimism,” especially if “we remain open to an eschatological frame in which God works through us in building the kingdom of heaven today, here on Earth.” (snip-MORE)

Sci-Fi Writer Arthur C. Clarke Predicted the Rise of Artificial Intelligence & the Existential Questions We Would Need to Answer (1978)

We now live in the midst of an artificial-intelligence boom, but it’s hardly the first of its kind. In fact, the field has been subject to a boom-and-bust cycle since at least the early nineteen-fifties.

Source: Sci-Fi Writer Arthur C. Clarke Predicted the Rise of Artificial Intelligence & the Existential Questions We Would Need to Answer (1978)