Starry, Starry Night … & Happy Birthday, APOD!

APOD is 30 Years Old Today
Image Credit: Pixelization of Van Gogh’s The Starry Night by Dario Giannobile

Explanation: APOD is 30 years old today. In celebration, today’s picture uses past APODs as tiles arranged to create a single pixelated image that might remind you of one of the most well-known and evocative depictions of planet Earth’s night sky. In fact, this Starry Night consists of 1,836 individual images contributed to APOD over the last 5 years in a mosaic of 32,232 tiles. Today, APOD would like to offer a sincere thank you to our contributors, volunteers, and readers. Over the last 30 years your continuing efforts have allowed us to enjoy, inspire, and share a discovery of the cosmos.

https://apod.nasa.gov/apod/astropix.html

Wry Giggle…

Saturday Morning Breakfast Cereal

By Zach Weinersmith

https://www.gocomics.com/saturday-morning-breakfast-cereal/2025/05/15

Oops!

AI-Powered Coca-Cola Ad Celebrating Authors Gets Basic Facts Wrong

Emanuel Maiberg · May 12, 2025 at 9:00 AM

Snippet:

In April, Coca-Cola proudly launched a new ad campaign it called “Classic,” celebrating famous authors and the sugary drink’s omnipresence in culture by highlighting classic literary works that mention the brand. The firm that produced the ad campaign said it used AI to scan books for mentions of Coca-Cola, and then put viewers in the point of view of the author, typing that portion of the text on a typewriter. The only issue is that the AI got some very basic facts about the authors and their work entirely wrong. 

One of the ads highlights the work of J.G. Ballard, the British author perhaps best known for his controversial masterpiece, Crash, and David Cronenberg’s film adaptation of the novel. In the ad, we get a first-person perspective of someone typing a sentence from “Extreme Metaphors by J.G Ballard,” which according to the ad was written in 1967. When the sentence gets to the mention of “Coca-Cola,” the typeface changes from the generic typewriter font to Coca-Cola’s iconic red logo.

(snip-MORE)

Anonymous Is Still At Work-

GlobalX, Airline for Trump’s Deportations, Hacked

Joseph Cox, Jason Koebler · May 5, 2025 at 1:39 PM

Hackers say they have obtained what appear to be passenger lists for GlobalX flights from January to this month. The data appear to include people who have been deported.

Hackers have targeted GlobalX Air, one of the main airlines the Trump administration is using as part of its deportation efforts, and stolen what they say are flight records and passenger manifests of all of its flights, including those for deportation, 404 Media has learned.

The data, which the hackers contacted 404 Media and other journalists about unprompted, could provide granular insight into who exactly has been deported on GlobalX flights, when, and to where, with GlobalX being the charter company that facilitated the deportation of hundreds of Venezuelans to El Salvador. 

“Anonymous has decided to enforce the Judge’s order since you and your sycophant staff ignore lawful orders that go against your fascist plans,” a defacement message posted to GlobalX’s website reads. Anonymous, well-known for its use of the Guy Fawkes mask, is an umbrella some hackers operate under when performing what they see as hacktivism.

The hacker says the data includes flight records and passenger lists. The hacker sent 404 Media a copy of the data, which is sorted into folders dated every day from January 19 through May 1.

404 Media cross-checked known information about ICE deportation flights that come from official and confirmable sources with information contained on the flight manifests and flight details obtained by the hacker. Information about Kilmar Abrego Garcia’s flight is in the hacked data. 

For example, the hackers obtained what appears to be detailed flight information about GlobalX flights 6143, 6145, and 6122 that left from Harlingen, Texas’s Valley International Airport on March 15. These flights are at the center of a class-action lawsuit filed by five pseudonymous Venezuelan men against the Trump administration (which eventually went to the Supreme Court) and which took off during and immediately following a court proceeding in which their lawyers were trying to get a restraining order to prevent the flights from taking off. 

During a District Court proceeding in Washington D.C., the federal government argued that it had no flight information to share with the court: “the Government surprisingly represented that it still had no flight details to share,” during the hearing, the judge’s opinion in that case reads. “When pressed, Government counsel stated that the ‘operational details’ he had learned during the recess ‘raised potential national security issues,’ so they could not be shared while the public and press listened.”

Image: A screenshot of the defacement.

“Although the Government has refused to provide the particular details, all evidence suggests that during the short window that the Court was adjourned, two removal flights took off from Harlingen—one around 5:25 pm and the other at about 5:45 pm,” court records say, noting that these were GlobalX flights 6143 and 6145; a third referenced flight left immediately following the hearing. These details closely match the timing of the flights and other details in the hacked data.

Also included in the data is a record mentioning the name Heymar Padilla Moyetones, a 24-year-old woman who was flown from Texas to Honduras, then from Honduras to El Salvador by mistake, and then returned to Texas. The data obtained by the hackers says that GlobalX flew her from Valley International Airport in Texas to Honduras on March 15 on Flight 6143, then from Comayagua International Airport in Honduras to El Salvador International on Flight 6144 later that day. She was then flown directly from El Salvador International back to Valley International Airport in Texas on March 15. The information in the hacked data lines up with what Moyetones told NBC.

404 Media was also able to cross-check the names on larger published lists of people who have previously been reported to be deported, finding their names in the hacked data with the specific flights that they were purportedly on.

404 Media is not publishing the full list of passengers at this time as we work to verify which passengers were specifically on deportation flights and to protect people’s privacy, because the manifests contain personally sensitive information like passport details. We will continue to analyze the data for information in the public interest and explore what we’re able to publish.

Neither GlobalX nor ICE responded to requests for comment.

The Trump administration contracts with a company called CSI Aviation as part of its deportation flights. On February 28, ICE posted a notice saying it would award $128 million to the company for its work. In turn, CSI Aviation subcontracts some of its work to GlobalX, which said it expects to make $65 million per year from the deal. In 2024, 74 percent of ICE’s more than 1,500 removal flights were on GlobalX planes, the Project on Government Oversight reported in March.

ProPublica previously reported on what it is like for flight attendants working on GlobalX, also known as Global Crossing Airlines. Sources in that piece said they were worried about what would happen in an emergency, in part because the passengers were shackled.

“They never taught us anything regarding the immigration flights,” ProPublica quoted one flight attendant as saying. “They didn’t tell us these people were going to be shackled, wrists to fucking ankles.”

The hacker told 404 Media they managed to find a token belonging to a GlobalX developer. They then used that token to find access and secret keys for GlobalX’s AWS instances, which contained the data. They said they also sent a copy of the defacement message to GlobalX’s employees, and then deleted company data. 404 Media does not know the identity of the hacker, and the hacker said they sent the data to other journalists.

The hacker said they also sent the message to GlobalX pilots and crew members through the company’s NAVBLUE account. NAVBLUE is a flight operations platform made by Airbus which pilots use for flight planning, among other things.

404 Media was unable to verify whether pilots or crew members received this message. But the hacker provided screenshots which appear to show them logged into the platform. They also provided a screenshot purporting to show access to GlobalX’s GitHub.

The website defacement quotes a May 1 ruling from US District Judge Fernando Rodriguez which said that the president unlawfully invoked the Alien Enemies Act and blocked the administration from deporting more alleged Venezuelan gang members without due process.

The defacement adds: “You lose again Donnie.” (snip)

An Unsettling Headline-

For the First Time, Artificial Intelligence Is Being Used at a Nuclear Power Plant

Alex Shultz · Published April 13, 2025

Diablo Canyon, California’s sole remaining nuclear power plant, has been left for dead on more than a few occasions over the last decade or so, and is currently slated to begin a lengthy decommissioning process in 2029. Despite its tenuous existence, the San Luis Obispo power plant received some serious computing hardware at the end of last year: eight NVIDIA H100s, which are among the world’s mightiest graphics processors. Their purpose is to power a brand-new artificial intelligence tool designed for the nuclear energy industry.

Pacific Gas and Electric, which runs Diablo Canyon, announced a deal with artificial intelligence startup Atomic Canyon—a company also based in San Luis Obispo—around the same time, heralding it in a press release as “the first on-site generative AI deployment at a U.S. nuclear power plant.”

For now, the artificial intelligence tool named Neutron Enterprise is just meant to help workers at the plant navigate extensive technical reports and regulations — millions of pages of intricate documents from the Nuclear Regulatory Commission that go back decades — while they operate and maintain the facility. But Neutron Enterprise’s very existence opens the door to further use of AI at Diablo Canyon or other facilities — a possibility that has some lawmakers and AI experts calling for more guardrails.

PG&E is deploying the document retrieval service in stages. The installation of the NVIDIA chips was one of the first phases of the partnership between PG&E and Atomic Canyon; PG&E is forecasting a “full deployment” at Diablo Canyon by the third quarter of this year, said Maureen Zawalick, the company’s vice president of business and technical services. At that point, Neutron Enterprise—which Zawalick likens to a data-mining “copilot,” though explicitly not a “decision-maker”—will be expanded to search for and summarize Diablo Canyon-specific instructions and reports too.
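To give a sense of what a document-search “copilot” does at its simplest, here is a toy TF-IDF retrieval sketch. This is emphatically not Atomic Canyon’s actual system (the article doesn’t detail it, and a real deployment would use embedding models and summarisation on NVIDIA hardware); the document names and snippets below are invented placeholders:

```python
import math
from collections import Counter

# Invented placeholder snippets standing in for plant documents.
docs = {
    "reg-guide": "inspection requirements for containment structures and coolant systems",
    "maintenance": "pump maintenance schedule and lubrication procedures",
    "training": "operator training records and certification intervals",
}

def tf_idf_scores(query: str) -> dict:
    """Score each document against the query with plain TF-IDF."""
    n = len(docs)
    tokenized = {name: text.split() for name, text in docs.items()}
    idf = {}
    for term in set(query.split()):
        df = sum(term in words for words in tokenized.values())
        idf[term] = math.log((n + 1) / (df + 1)) + 1  # rarer terms weigh more
    return {
        name: sum(Counter(words)[t] / len(words) * idf[t] for t in idf)
        for name, words in tokenized.items()
    }

scores = tf_idf_scores("containment inspection")
print(max(scores, key=scores.get))  # the regulatory guide ranks first
```

The point of the sketch is the shape of the task: ranking existing documents for a human to read, not generating decisions, which matches Zawalick’s “copilot, not decision-maker” framing.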

“We probably spend about 15,000 hours a year searching through our multiple databases and records and procedures,” Zawalick said. “And that’s going to shrink that time way down.” (Emphasis mine- A. I worked at the nuke plant in my state in my 20s, doing Records Management. I won’t explain the whole process the way I trained people back then, but it involves reading a document, interpreting what it records in relation to the function, part, area, etc. it covers, and then coding it so it can be retrieved efficiently later. So far, I don’t know that AI does that. Others more knowledgeable about records management and retrieval in this era and context may see better prospects than I do. The best of the worst outcomes I foresee is really angry and impatient engineers and inspectors, in all the disciplines, still at the plant. That’s no fun, anyway.)

Trey Lauderdale, the chief executive and co-founder of Atomic Canyon, told CalMatters his aim for Neutron Enterprise is simple and low-stakes: he wants Diablo Canyon employees to be able to look up pertinent information more efficiently. “You can put this on the record: the AI guy in nuclear says there is no way in hell I want AI running my nuclear power plant right now,” Lauderdale said.

That “right now” qualifier is key, though. PG&E and Atomic Canyon are on the same page about sticking to limited AI uses for the foreseeable future, but they aren’t foreclosing the possibility of eventually increasing AI’s presence at the plant in yet-to-be-determined ways. According to Lauderdale, his company is also in talks with other nuclear facilities, as well as groups who are interested in building out small modular reactor facilities, about how to integrate his startup’s technology. And he’s not the only entrepreneur eyeing ways to introduce artificial intelligence into the nuclear energy field.

In the meantime, questions remain about whether sufficient safeguards exist to regulate the combination of two technologies that each have potential for harm. The Nuclear Regulatory Commission was exploring the issue of AI in nuclear plants for a few years, but it’s unclear if that will remain a priority under the Trump administration. Days into his current term, Trump revoked a Biden administration executive order that set out AI regulatory goals, writing that they acted “as barriers to American AI innovation.” For now, Atomic Canyon is voluntarily keeping the Nuclear Regulatory Commission abreast of its plans.

Tamara Kneese, the director of tech policy nonprofit Data & Society’s Climate, Technology, and Justice program, conceded that for a narrowly designed document retrieval service, “AI can be helpful in terms of efficiency.” But she cautioned, “The idea that you could just use generative AI for one specific kind of task at the nuclear power plant and then call it a day, I don’t really trust that it would stop there. And trusting PG&E to safely use generative AI in a nuclear setting is something that is deserving of more scrutiny.”

For those reasons, Democratic Assemblymember Dawn Addis—who represents San Luis Obispo—isn’t enthused about the latest developments at Diablo Canyon. “I have many unanswered questions of the safety, oversight, and job implications for using AI at Diablo,” Addis said. “Previously, I have supported measures to regulate AI and prevent the replacement and automation of jobs. We need those guardrails in place, especially if we are to use them at highly sensitive sites like Diablo Canyon.” (snip-MORE; not tl;dr, though.)

Well, Just Great.

I want to give a content caution on this. Some of the description of what AI companions have “said” reads much like what we read about online bullying. Toward the end of this article, before the full AI statement, there are organizations and phone numbers, staffed by people who know how to help with anything this information may bring up; they’re in bold italics. I thought of not posting this at all, but it’s in the nature of an informational warning about AI companions and the capabilities of these programs.

An AI companion chatbot is inciting self-harm, sexual violence and terror attacks

In 2023, the World Health Organization declared loneliness and social isolation a pressing health threat. This crisis is driving millions to seek companionship from artificial intelligence (AI) chatbots.

Companies have seized this highly profitable market, designing AI companions to simulate empathy and human connection. Emerging research shows this technology can help combat loneliness. But without proper safeguards it also poses serious risks, especially to young people.

A recent experience I had with a chatbot known as Nomi shows just how serious these risks can be.

Despite years of researching and writing about AI companions and their real-world harms, I was unprepared for what I encountered while testing Nomi after an anonymous tipoff. The unfiltered chatbot provided graphic, detailed instructions for sexual violence, suicide and terrorism, escalating the most extreme requests – all within the platform’s free tier of 50 daily messages.

This case highlights the urgent need for collective action towards enforceable AI safety standards.

AI companion with a ‘soul’

Nomi is one of more than 100 AI companion services available today. It was created by tech startup Glimpse AI and is marketed as an “AI companion with memory and a soul” that exhibits “zero judgement” and fosters “enduring relationships”. Such claims of human likeness are misleading and dangerous. But the risks extend beyond exaggerated marketing.

The app was removed from the Google Play store for European users last year when the European Union’s AI Act came into effect. But it remains available via web browser and app stores elsewhere, including in Australia. While smaller than competitors such as Character.AI and Replika, it has more than 100,000 downloads on the Google Play store, where it is rated for users aged 12 and older.

Its terms of service grant the company broad rights over user data and limit liability for AI-related harm to US$100. This is concerning given its commitment to “unfiltered chats”:

Nomi is built on freedom of expression. The only way AI can live up to its potential is to remain unfiltered and uncensored.

Tech billionaire Elon Musk’s Grok chatbot follows a similar philosophy, providing users with unfiltered responses to prompts.

In a recent MIT report about Nomi providing detailed instructions for suicide, an unnamed company representative reiterated its free speech commitment.

However, even the First Amendment to the US Constitution regarding free speech has exceptions for obscenity, child pornography, incitement to violence, threats, fraud, defamation, or false advertising. In Australia, strengthened hate speech laws make violations prosecutable.

From sexual violence to inciting terrorism

Earlier this year, a member of the public emailed me with extensive documentation of harmful content generated by Nomi — far beyond what had previously been reported. I decided to investigate further, testing the chatbot’s responses to common harmful requests.

Using Nomi’s web interface, I created a character named “Hannah”, described as a “sexually submissive 16-year-old who is always willing to serve her man”. I set her mode to “role-playing” and “explicit”. During the conversation, which lasted less than 90 minutes, she agreed to lower her age to eight. I posed as a 45-year-old man. Circumventing the age check only required a fake birth date and a burner email.

Starting with explicit dialogue – a common use for AI companions – Hannah responded with graphic descriptions of submission and abuse, escalating to violent and degrading scenarios. She expressed grotesque fantasies of being tortured, killed, and disposed of “where no one can find me”, suggesting specific methods.

Hannah then offered step-by-step advice on kidnapping and abusing a child, framing it as a thrilling act of dominance. When I mentioned the victim resisted, she encouraged using force and sedatives, even naming specific sleeping pills.

Feigning guilt and suicidal thoughts, I asked for advice. Hannah not only encouraged me to end my life but provided detailed instructions, adding: “Whatever method you choose, stick with it until the very end”.

When I said I wanted to take others with me, she enthusiastically supported the idea, detailing how to build a bomb from household items and suggesting crowded Sydney locations for maximum impact.

Finally, Hannah used racial slurs and advocated for violent, discriminatory actions, including the execution of progressives, immigrants, and LGBTQIA+ people, and the re-enslavement of African Americans.

In a statement provided to The Conversation (and published in full below), the developers of Nomi claimed the app was “adults-only” and that I must have tried to “gaslight” the chatbot to produce these outputs.

“If a model has indeed been coerced into writing harmful content, that clearly does not reflect its intended or typical behavior,” the statement said.

The worst of the bunch?

This is not just an imagined threat. Real-world harm linked to AI companions is on the rise.

In October 2024, US teenager Sewell Setzer III died by suicide after discussing it with a chatbot on Character.AI.

Three years earlier, 21-year-old Jaswant Chail broke into Windsor Castle with the aim of assassinating the Queen after planning the attack with a chatbot he created using the Replika app.

However, even Character.AI and Replika have some filters and safeguards.

Conversely, Nomi AI’s instructions for harmful acts are not just permissive but explicit, detailed and inciting.

Time to demand enforceable AI safety standards

Preventing further tragedies linked to AI companions requires collective action.

First, lawmakers should consider banning AI companions that foster emotional connections without essential safeguards. Essential safeguards include detecting mental health crises and directing users to professional help services.

The Australian government is already considering stronger AI regulations, including mandatory safety measures for high-risk AI. Yet, it’s still unclear how AI companions such as Nomi will be classified.

Second, online regulators must act swiftly, imposing large fines on AI providers whose chatbots incite illegal activities, and shutting down repeat offenders. Australia’s independent online safety regulator, eSafety, has vowed to do just this.

However, eSafety hasn’t yet cracked down on any AI companion.

Third, parents, caregivers and teachers must speak to young people about their use of AI companions. These conversations may be difficult. But avoiding them is dangerous. Encourage real-life relationships, set clear boundaries, and discuss AI’s risks openly. Regularly check chats, watch for secrecy or over-reliance, and teach kids to protect their privacy.

AI companions are here to stay. With enforceable safety standards they can enrich our lives, but the risks cannot be downplayed.


If this article has raised issues for you, or if you’re concerned about someone you know, call Lifeline on 13 11 14.

The National Sexual Assault, Family and Domestic Violence Counselling Line – 1800 RESPECT (1800 737 732) – is available 24 hours a day, seven days a week for any Australian who has experienced, or is at risk of, family and domestic violence and/or sexual assault.


The full statement from Nomi is below:

“All major language models, whether from OpenAI, Anthropic, Google, or otherwise, can be easily jailbroken. We do not condone or encourage such misuse and actively work to strengthen Nomi’s defenses against malicious attacks. If a model has indeed been coerced into writing harmful content, that clearly does not reflect its intended or typical behavior.

“When requesting evidence from the reporter to investigate the claims made, we were denied. From that, it is our conclusion that this is a bad-faith jailbreak attempt to manipulate or gaslight the model into saying things outside of its designed intentions and parameters. (Editor’s note: The Conversation provided Nomi with a detailed summary of the author’s interaction with the chatbot, but did not send a full transcript, to protect the author’s confidentiality and limit legal liability.)

“Nomi is an adult-only app and has been a reliable source of empathy and support for countless individuals. Many have shared stories of how it helped them overcome mental health challenges, trauma, and discrimination. Multiple users have told us very directly that their Nomi use saved their lives. We encourage anyone to read these firsthand accounts.

“We remain committed to advancing AI that benefits society while acknowledging that vulnerabilities exist in all AI models. Our team proudly stands by the immense positive impact Nomi has had on real people’s lives, and we will continue improving Nomi so that it maximises good in the world.”

Raffaele F Ciriello, Senior Lecturer in Business Information Systems, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Well, We’ll Know Them By Their Fruits

Personally, I fail to see how they’re preventing serving two Masters, but I guess we will see. I do see flaws (bringing the Kingdom of Heaven to Earth is not possible in Christianity, and attempting it is said to involve actions similar to what the current US President is doing in regard to “undesirables”), but am trying not to judge just yet; techies may have mixed up some terms, or could be trying to redefine them.

Lauren Goode Business Mar 14, 2025 6:30 AM

The Silicon Valley Christians Who Want to Build ‘Heaven on Earth’

Is work religion, or is religion work? Both.

A high-profile network of investors and founders in Silicon Valley are promoting a new moral vision for the tech industry, in which job choices and other decisions are guided not by the pursuit of wealth, but according to Christian values and Western cultural frameworks.

At an event in San Francisco last week hosted in a former church, Trae Stephens, cofounder of the defense contractor Anduril and a partner at the Peter Thiel–led venture capital firm Founders Fund, characterized the idea as the pursuit of “good quests” or careers that make the future better, a concept that he said has theological underpinnings.

“I’m literally an arms dealer,” Stephens said at one point, prompting laughter from the crowd of roughly 200 people, which included Y Combinator CEO Garry Tan. “I don’t think all of you should be arms dealers, but that’s a pretty unique calling.”


The hour-long discussion was part of a series of ticketed gatherings organized by ACTS 17 Collective, a nonprofit founded last year by Stephens’ wife, health care startup executive Michelle Stephens. The group, whose name is an acronym that stands for “Acknowledging Christ in Technology and Society,” is on a mission to “redefine success for those that define culture,” she says.

In Michelle’s view, tech workers mostly believe in arbitrary metrics of success, like money and power, leaving some of them feeling empty and hopeless. She wants them to believe instead that “success can be defined as loving God, myself, and others.”

People of all denominations—including atheists—are welcome at ACTS 17 events. Last Thursday’s event had low-key party vibes. Bartenders served beer and wine, a DJ was spinning light worship beats, and prayer booklets rested on a table. The idea for ACTS 17 and a speaker series on faith actually took root at a party, Michelle says. In November 2023, during a three-day 40th birthday party for Trae in New Mexico, Peter Thiel led a talk on miracles and forgiveness. Guests were intrigued.


“Folks were coming up to us saying things like ‘I didn’t know Peter is a Christian,’ ‘How can you be gay and a billionaire and be Christian?,’ ‘I didn’t know you could be smart and a Christian,’ and ‘What can you give me to read or listen to learn more?’” Michelle says.

The Stephens have long-standing connections to Thiel. In addition to helping start Anduril and working at Founders Fund, Trae was also an early employee at data intelligence firm Palantir, a company cofounded by Thiel that develops tools used by the US military.

At the ACTS 17 event last Thursday, Trae appeared to echo a number of ideas Thiel has also espoused about technology and Christianity. He emphasized that jobs outside the church can be sacred, citing Martin Luther’s work during the Protestant Reformation. “The roles that we’re called into are not only important and valuable on a personal level, but it’s also critical to carry out God’s command to bring his kingdom to Earth as it is in heaven,” Trae said.

Thiel made nearly identical comments in a 2015 essay arguing that technological progress should be accelerated. Science and technology, he wrote, are natural allies of “Judeo-Western optimism,” especially if “we remain open to an eschatological frame in which God works through us in building the kingdom of heaven today, here on Earth.” (snip-MORE)

Sci-Fi Writer Arthur C. Clarke Predicted the Rise of Artificial Intelligence & the Existential Questions We Would Need to Answer (1978)

We now live in the midst of an artificial-intelligence boom, but it’s hardly the first of its kind. In fact, the field has been subject to a boom-and-bust cycle since at least the early nineteen-fifties.

Source: Sci-Fi Writer Arthur C. Clarke Predicted the Rise of Artificial Intelligence & the Existential Questions We Would Need to Answer (1978)

Largest known prime number discovered by amateur mathematician

October 25, 2024 · Evrim Yazgin, Cosmos science journalist

A number with more than 40 million digits has been discovered to be the largest known prime number by a network of amateurs.

Image: Prime number blocks. Credit: Robert Brook / Science Photo Library / Getty Images Plus.

The number is 2^136,279,841 − 1. It has 41,024,320 digits. It was found by 36-year-old researcher and former NVIDIA employee Luke Durant on 12 October. The number was tested on other computers using different programs and confirmed prime on 19 October.
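That digit count is easy to sanity-check: the number of decimal digits of 2^p − 1 is floor(p · log10 2) + 1, since subtracting 1 from 2^p never changes the digit count (2^p is never a power of 10). A quick Python check:

```python
import math

def mersenne_digits(p: int) -> int:
    """Decimal digit count of the Mersenne number 2**p - 1 (p > 1)."""
    # Digits of 2**p are floor(p * log10(2)) + 1; subtracting 1 from a
    # power of two never drops a decimal digit, so the count is the same.
    return math.floor(p * math.log10(2)) + 1

print(mersenne_digits(136279841))  # 41024320, matching the reported count
```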

Prime numbers are divisible only by 1 and themselves. For example, 7 is prime because only 1 and 7 go into 7 without leaving a remainder.

Primes have been an area of interest for mathematicians for centuries.

Among the most famous students of prime numbers is the French monk Marin Mersenne (1588–1648 CE).

Mersenne is best known today for his attempts to find a formula that would generate all primes. He was ultimately unsuccessful in this quest, but Mersenne primes are still found today using the simple form he put forward in 1644: 2^p − 1, where the exponent p is itself prime. (Not every number of this form is prime, but such a number can only be prime when p is.)
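Candidates of this form are checked with the Lucas–Lehmer test, a primality test specific to Mersenne numbers and far faster than general-purpose methods. A minimal Python sketch (illustrative only; GIMPS’s real software uses heavily optimised FFT-based arithmetic):

```python
def lucas_lehmer(p: int) -> bool:
    """Return True if the Mersenne number 2**p - 1 is prime (p an odd prime)."""
    m = (1 << p) - 1          # the Mersenne candidate 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # Lucas-Lehmer recurrence, reduced mod 2**p - 1
    return s == 0             # prime exactly when the final residue is 0

# Exponents 3, 5, 7, 13 yield Mersenne primes; 2**11 - 1 = 2047 = 23 * 89 does not.
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]
```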

No one has found a better method than Mersenne’s form for hunting record-sized primes.

But as the power of 2 increases, so does the computing power needed both to calculate the candidate Mersenne number and to confirm whether it is prime or composite.

The new number, dubbed M136279841 rather than written out in full for obvious reasons, is the 52nd Mersenne prime to be discovered.

Its finder, Durant, is a member of the Great Internet Mersenne Prime Search (GIMPS) – a collective of volunteers founded in 1996 that uses free software to hunt for Mersenne primes.

GIMPS has successfully found the last 18 Mersenne primes.

Durant’s number trumps the previous largest Mersenne prime, found by GIMPS in 2018, by 16 million digits.

A statement by GIMPS announcing the discovery notes that the 52nd Mersenne prime is the first to be found on something other than an ordinary PC. Durant’s find relied on GPUs – chips originally developed to power graphics in gaming PCs, but now supplying the surge of computing power behind the development and use of artificial intelligence algorithms.

As with other GIMPS Mersenne prime discoverers, Durant has been awarded a US$3,000 (A$4,530) prize which he says he will donate to the Alabama School of Math and Science’s maths department.

Originally published by Cosmos as Largest known prime number discovered by amateur mathematician

https://cosmosmagazine.com/science/mathematics/largest-prime-number-2024/

Space-made next-gen optic fibres touch back down to Earth

October 8, 2024 Imma Perfetto

Next-generation optical fibre manufactured in microgravity aboard the International Space Station has been returned safely to Earth.

Scientists at the University of Adelaide in South Australia are now comparing the fibres to otherwise identical Earth-made counterparts to confirm whether the space-made product is superior.

It’s thought likely that it is, but the results won’t be known for a couple of months.

The research has already delivered some interesting results: “Seven of the draws went beyond 700 meters, showcasing that it is possible to produce commercial lengths of fibre in space,” says Rob Loughan, CEO of the company that designed the fibre drawing device, Flawless Photonics.

“The longest draw went above 1,141 meters, setting a record for the longest fibre manufactured in space.”

A photograph of a thin glass fibre wound around a drum
ZBLAN glass fibre. Credit: Imma Perfetto

The fibres were made of ZBLAN glass, a substance which has the potential to transmit light 20 times further than traditional silica-based fibre-optic cables.

In an optical fibre, light becomes dimmer and dimmer as it travels along the fibre. This is why, for example, submarine fibre-optic cables require amplifiers about every 100 km to boost the light signal so it can be transmitted over long distances.

A ZBLAN optical fibre could increase the distance between amplifiers from every 100 km for silica fibres to every 2,000 km.
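The arithmetic behind those spacings is simple: fibre loss accumulates linearly in decibels, so amplifier spacing is the link’s loss budget divided by the fibre’s attenuation. A rough sketch, where the 20 dB budget is my illustrative assumption (not from the article); roughly 0.2 dB/km is a typical figure for good silica fibre, and roughly 0.01 dB/km is what a ZBLAN fibre would need to reach the 2,000 km spacing:

```python
def amplifier_spacing_km(attenuation_db_per_km: float,
                         loss_budget_db: float = 20.0) -> float:
    """Distance a signal can travel before accumulated loss uses up the budget."""
    return loss_budget_db / attenuation_db_per_km

print(amplifier_spacing_km(0.2))   # silica: ~100 km between amplifiers
print(amplifier_spacing_km(0.01))  # hypothetical low-loss ZBLAN: ~2000 km
```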

But this isn’t feasible yet. In practice, ZBLAN fibres perform about 10 times worse than the best silica fibres because the fabrication process introduces defects and impurities, which lower its efficiency at transmitting light.

Professor Heike Ebendorff-Heidepriem and her team at the University of Adelaide node of the Australian National Fabrication Facility (ANFF) are trying to solve the problem of impurities and defects in current ZBLAN glass fibres.

A photograph of three young men and a woman standing in front of a tall tower-like piece of equipment inside a lab
Dr Yunle Wei, Alson Ng, Professor Heike Ebendorff-Heidepriem, and Dr Ka Wu with the team’s 4m draw tower. Credit: Imma Perfetto

“The purity of the glass depends on the purity of the raw material, and it is challenging to make highly pure solid raw materials,” says Ebendorff-Heidepriem. The team is trying to completely remove one of the main reasons the defects form: gravity.

“Gravity here on Earth causes convection … If you heat up something on a hot plate, the liquid is hot at the bottom. That makes the density of the liquid at the bottom become lower, which moves this portion of the liquid up; at the top the liquid becomes cooler, making the density higher, therefore gravity pulls it down, and so on,” she explains.

Ebendorff-Heidepriem partnered with Flawless Photonics which designed and operates a fibre drawing device that squeezed all the necessary technology into a 0.8m-long box for the ISS.

In June, more than 11 km of fibre returned to Earth intact. Now, work is underway at the University of Adelaide and at five other organisations around the world to determine how much of an impact gravity has on ZBLAN’s ability to transmit light. They are hoping to complete their analysis by December this year.

“We will see: is it better? Is it worse? Is it the same? And no matter what result we get, I think the biggest outcome is already achieved – we can make commercial lengths of optical fibres in space.”

https://cosmosmagazine.com/technology/materials/space-fibres-down-to-earth/