First accurate simulation of a supermassive black hole destroying a star

(And of course you should listen to “Supermassive Black Hole” by Muse while enjoying this article. It’s the only real way. 😉)

August 21, 2024 Evrim Yazgin

Astrophysicists at Melbourne’s Monash University have generated the first simulation which accurately depicts what happens when a star ventures too close to a supermassive black hole.

The research, published in Astrophysical Journal Letters, is a technical milestone in our attempts to understand these mysterious cosmic giants.

Video on the page, or here on YouTube.

First author Daniel Price, a professor at Monash, tells Cosmos that about 100 events observed over the past decade and a half fit the bill for a star being destroyed by a supermassive black hole, also called a tidal disruption event (TDE).

Not X-ray vision

But these observations have thrown up some odd measurements which haven’t been explained until now.

“If you dump a bunch of material close to a black hole and form an accretion disk around that black hole, there’s a prediction for where the material should land,” Price says. “The material at that location should be more than a million degrees in temperature. It should generate X-rays.

“So, if you have unobscured stuff feeding a black hole, you get X-ray emission. For example, the black hole sources in the galaxy, they’re all X-ray emitters.”

Stars falling into supermassive black holes, however, do not result in emission of X-rays. They emit light in the visible, or optical, spectrum.

Current theories can only speculate why such events lead to material being flung toward us at 20,000km per second – about one-fifteenth the speed of light.
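That fraction is easy to verify. A quick back-of-the-envelope check in Python (my own, not from the paper):

```python
# Sanity check: 20,000 km/s as a fraction of the speed of light.
c_km_s = 299_792.458   # speed of light in km/s
v_km_s = 20_000.0      # ejecta speed quoted in the article

fraction = v_km_s / c_km_s
print(f"v/c ≈ {fraction:.4f}")            # ≈ 0.0667
print(f"i.e. about 1/{c_km_s / v_km_s:.0f}")  # about 1/15
```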

An eating analogy – but not in the way you think

Price explains that the simulation illuminates why it is optical light, not X-rays, which we observe when our telescopes pick up stars falling into supermassive black holes.

“The analogy with me eating is that you don’t see my stomach. You’re not seeing the thing that’s generating the energy, you’re seeing it reprocessed through my skin,” Price says. “If you look at my light curve, you see that I’m a constant temperature of 38°C all day.

“My light curve is very much like a disruption event. The temperatures are pretty much constant. Luminosity changes a bit, but you infer that’s because the size of the object is changing; the temperature evolution is very flat. So, it looks exactly like me, just a lot warmer and a lot bigger.”

In fact, the size of the photosphere – the surface that emits the optical light – is itself surprising, says Price.

The photosphere in the simulation, which matches observations, is about 100 astronomical units (AU), where 1 AU is the distance from the Earth to the Sun (roughly 150 million kilometres).
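To get a feel for that scale, here is my own rough conversion of the ~100 AU photosphere into kilometres and light-travel time (not a figure from the paper):

```python
# Rough scale of the ~100 AU photosphere described in the article.
AU_KM = 149_597_870.7          # one astronomical unit in kilometres
photosphere_au = 100

photosphere_km = photosphere_au * AU_KM
print(f"radius ≈ {photosphere_km:.3e} km")   # ≈ 1.5e10 km

# Light-travel time across that radius, for a sense of size:
c_km_s = 299_792.458
hours = photosphere_km / c_km_s / 3600
print(f"light takes ≈ {hours:.1f} hours to cross it")
```

For comparison, Pluto orbits at roughly 40 AU, so this glowing surface would swallow the entire solar system.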

Video on the page, or here on YouTube

“No one knows what it is,” Price laughs.

What we see is muffled

Price says the simulations confirm a theoretical explanation for these unexpected observations called the Eddington envelope.

“That’s the concept that you’re stuffing material down towards the black hole faster than it can process it,” Price says. “By process, I mean like the sun processes the energy from its core – it just kind of gently radiates it away. So the black hole can’t radiate away the stuff that you’re trying to feed it. And, so, it has to literally blow it away.”

This material “smothers” the black hole, absorbing the X-rays that the black hole emits and re-emitting them as optical light.

Price extends the eating analogy to an unpleasant place.

“Basically, it’s like stuffing your stomach. You’re going to vomit eventually. That’s pretty much what happens.”

The power of a simulation

“That’s the exciting thing in simulations. People have speculated for a long time and drawn illustrations and this kind of thing, but there’s no physics in that. That’s just what we call phenomenology. That’s how it must be to explain this phenomenon. But we don’t know what produces that kind of envelope or layer, or reprocessing layer,” Price says.

The simulation, Price says, just requires the initial conditions – the star – the fluid mechanics governing the star, and the rules of general relativity.

“Then it’s just a technical challenge,” he says.
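“Too close” has a standard textbook estimate: the tidal radius, inside which the black hole’s tides overwhelm the star’s self-gravity. The paper’s actual simulation solves full general-relativistic hydrodynamics; the sketch below is just the classic Newtonian criterion, r_t ≈ R_star × (M_BH/M_star)^(1/3), worked out by me for a Sun-like star and a million-solar-mass black hole:

```python
# Newtonian tidal-disruption radius for a Sun-like star around a
# 10^6 solar-mass black hole. Textbook estimate, not the paper's
# general-relativistic calculation.
R_SUN_KM = 695_700.0       # solar radius in km
M_BH_OVER_M_STAR = 1e6     # black hole mass in units of the star's mass

r_tidal_km = R_SUN_KM * M_BH_OVER_M_STAR ** (1 / 3)
r_tidal_au = r_tidal_km / 149_597_870.7
print(f"tidal radius ≈ {r_tidal_km:.2e} km ≈ {r_tidal_au:.2f} AU")
```

Note the contrast: the star is shredded within about half an AU of the hole, yet the glowing envelope the simulation produces extends out to around a hundred AU.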

“In a lot of simulation work, you’re kind of guessing what might have happened,” he adds. “But in this case, we’re pretty sure what happens. It’s really nice to get that connection to the observations of transients from just chucking a star at a computer.”

Price explains that the simulation will set astrophysicists and astronomers up to understand such phenomena much better as many more observations are made in the coming years.

“The first optical transient was only detected in 2010, but what’s coming is the Rubin Observatory being built in Chile. That’s expected to boost the population of these things into the thousands.

“Having a good theoretical understanding of what the kind of phenomena is sets us up really well for that future flood of observations. It’s not just some theoretical speculation. There’s really something we can go after and understand by looking at it.”

News for people who pay attention to storms

Hailstone library improves predictions of damaging storms

August 19, 2024 Imma Perfetto

Scientists have compiled a library of hailstones to help fine-tune hailstorm simulations and make weather forecasts more accurate.

To make calculations simpler, conventional scientific hailstorm modelling assumes all hailstones are perfectly spherical. In reality, they’re a little more complicated than that.

A hailstone, flecked with black paint to assist in 3-D scanning, is weighed as part of processing for the hail library. Credit: UQ

“Hail can be all sorts of weird shapes, from oblong to a flat disc or have spikes coming out – no two pieces of hail are the same,” says Dr Joshua Soderholm, honorary senior research fellow at University of Queensland and research scientist at the Bureau of Meteorology in Australia.

In their new study in the Journal of the Atmospheric Sciences, Soderholm and collaborators explored whether compiling a reference library of non-spherical, natural hail shapes could change the outcomes of hailstorm modelling.

“Our study used data from 217 hail samples, which were 3-D scanned and then sliced in half, to tell us more about how the hailstone formed,” says Soderholm.

“This is effectively a dataset to represent the many and varied shapes of hailstones.”

According to lead researcher Yuzhu Lin, a PhD candidate at Pennsylvania State University in the US, the differences were dramatic.

“Modelling of the more naturally shaped hail showed it took different pathways through the storm, experienced different growth and landed in different places,” she says.

Dr Joshua Soderholm photographing a hailstone. Credit: UQ

“It also affected the speed and impact the hail had on the ground. This way of modelling had never been done before, so it’s exciting science.”
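One simple way shape feeds into fall speed is through the drag coefficient: for the same mass and cross-section, a flatter, draggier stone falls slower. A toy terminal-velocity comparison (my own illustration with generic textbook drag coefficients, not numbers from the study):

```python
import math

# Terminal velocity v = sqrt(2*m*g / (rho_air * A * Cd)) for a falling stone.
def terminal_velocity(mass_kg, area_m2, drag_coeff, rho_air=1.2, g=9.81):
    return math.sqrt(2 * mass_kg * g / (rho_air * area_m2 * drag_coeff))

mass = 0.05    # a 50 g hailstone
area = 1.8e-3  # cross-section of a ~4.8 cm diameter stone, in m^2

v_sphere = terminal_velocity(mass, area, drag_coeff=0.45)  # smooth sphere
v_disc = terminal_velocity(mass, area, drag_coeff=1.1)     # flat, disc-like

print(f"sphere: {v_sphere:.1f} m/s, disc-like: {v_disc:.1f} m/s")
```

The sphere hits the ground noticeably faster, which is why assuming everything is a sphere can overstate impact energy for oddly shaped stones.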

While the modelling is currently only used by scientists studying storms, Soderholm says the end game is to be able to predict how big hail will be and where it will fall in real-time.

“More accurate forecasts would of course warn the public so they can stay safe during hailstorms and mitigate damage,” he says.

“But it could also significantly benefit industries such as insurance, agriculture and solar farming which are all sensitive to hail.”

Egyptians of Old Could Have Used Hydraulic Lifts for Work

Peace & Justice History 8/12

https://www.peacebuttons.info/E-News/peacehistoryaugust.htm#august12

August 12, 1953
The first Soviet hydrogen (thermonuclear, or fusion) bomb, potentially far more damaging than those dropped on Japan, was exploded in the Kazakh desert, then part of the Soviet Union. Igor Vasilyevich Kurchatov, head of the Soviet Uranium Committee, said to Josef Stalin at the time: “The atomic sword is in our hand. It is time to think about the peaceful use of nuclear energy.”
 The Soviet Nuclear Weapons Program: https://nuclearweaponarchive.org/Russia/Sovwpnprog.html
August 12, 1982
Open missile tubes on Trident sub
Twelve were arrested in an attempted blockade of the first Trident submarine, the USS Ohio, entering the Hood Canal in the state of Washington. In motorboats, sailboats and small handmade wooden vessels, the demonstrators were objecting to the presence of nuclear weapons in Seattle. The Coast Guard overturned some of the vessels with water cannon.
August 12, 1995

Thousands demonstrated in Philadelphia and other cities in support of journalist and former Black Panther Mumia Abu-Jamal (on death row for murder since 1982) in the largest anti-death-penalty demonstrations in the U.S. to date.
Who is Mumia Abu-Jamal? https://www.amnesty.org/en/wp-content/uploads/2021/06/amr510012000en.pdf

Cleaning plastic out of the ocean and rivers.

Harris Campaign: Donald Trump’s Very Good, Very Normal Press Conference

August 8, 2024, 3:56 pm

This is quite good:

Donald Trump’s Very Good, Very Normal Press Conference
Split Screen: Joy and Freedom vs. Whatever the Hell That Was (No photo on the page.)
Donald Trump took a break from taking a break to put on some pants and host a p̶r̶e̶s̶s̶ ̶c̶o̶n̶f̶e̶r̶e̶n̶c̶e̶ public meltdown. We have a lot to say about it. Here are some initial thoughts – with more to come.

He hasn’t campaigned all week. He isn’t going to a single swing state this week. But he sure is mad Kamala Harris and Tim Walz are getting big crowds across the battlegrounds. The facts were hard to track and harder to find in Donald Trump’s Mar-a-Lago meltdown this afternoon. He lied. He attacked the media. He made excuses for why he’s off the campaign trail. We’re here to help because his staff clearly isn’t.

But first, an important reminder on the question Donald didn’t answer: how he will vote on the Florida abortion referendum. (He has been ducking this question since April.) We worked to pin down reality so Donald Trump, bless his heart, doesn’t have to. Here are the facts:

We had 12,000 and 15,000 people in Wisconsin and Michigan yesterday, respectively (Not 2,000.)

The ABC debate is September 10th. Not the 25th.

People have spoken to bigger crowds than Donald Trump. (Obama, Clinton, literally anyone at Lollapalooza, Coachella, the World Cup…)

January 6th was decidedly nothing like MLK’s “I Have a Dream” speech. And Trump did not get a bigger crowd than Martin Luther King Jr. on that historic day.

There was famously not a “peaceful transfer” of power after the 2020 election, which Donald Trump fought to overturn. (Famously.) Five police officers died because of January 6th.

Donald Trump said he was off the trail this week because of the Democratic convention. (That convention is not happening this week.)

Trump said they have commercials at a level no one else does. (He is being drastically outspent on the airwaves.)

Governor Josh Shapiro is actually a great guy.

Project 2025 author Tom Homan, the “father” of Trump’s cruel child separation policy, is not a person to praise.

Jewish people should not “have their head examined” for not supporting him. (That’s actually antisemitic.)

Trump said he was not complaining. He in fact very much was.

Trump does not know the difference between asylum seekers and an insane asylum.

Donald Trump does not “cherish” the Constitution.

Abortion is not “less of an issue” for voters. It is not “subdued.” It is not a “small issue” for voters, despite how much Donald Trump wants it to be. Donald Trump did not answer the abortion question “very well in the debate.”

Everybody did not want Roe v. Wade overturned. The American people do not support states banning abortion.

After-birth abortion does not exist.

Minnesota and Virginia are not the same.

Donald Trump doesn’t know what progressive means.

Kamala Harris does not want to take away everyone’s guns. Tim Walz is a gun owner.

Vice President Harris does not support an arms embargo on Israel.

Donald Trump could not remember Tim Walz’s name.

Donald Trump’s tax cuts are not the biggest in history.

We don’t know what “the transgender became such a big thing” is supposed to mean.

Donald Trump will cut Social Security – just like he proposed every year he was in office.

Government was not weaponized against Trump and Steve Bannon.

Mail ballots are secure.

We agree – Elon IS a different kind of guy.

There are no polls that say Donald Trump is going to win in a landslide.

The MAGA base is not 75% of the country.

https://www.insidernj.com/press-release/harris-campaign-donald-trumps-very-good-very-normal-press-conference/

A Poem I Just Read

This poem came in a newsletter I receive. I thought it was a worthy share.

The Earthling

Matthew Olzmann

The Earthlings arrived unannounced, entered
without knocking, removed their shoes 
and began clipping their toenails. 
They let the clippings fall wherever.  
They sighed loudly as if inconvenienced.
We were patient. We knew our guests
were in an unfamiliar environment; they needed 
time to adjust. For dinner, we prepared
turkey meatloaf with a side of cauliflower. 
This is too dry, they said.
This is not like what our mothers made. 
We wanted to offer a tour of our world, 
demonstrate how we freed ourselves 
from the prisons of linear time.
But the Earthlings were already spelunking 
our closets, prying tools 
from their containers and holding them 
to the light. What’s this? they demanded.
What’s this? What’s this? And what’s this?
That’s a Quantum Annihilator; put that down.
That’s a Particle Grinder; please put that down.  
We could show you how to heal the sick, we said.
We could help you feed every nation, commune 
with the all-seeing sentient energy that palpitates 
through all known forms of matter. 
Nah! they said. Teach us to vaporize a mountain! 
Teach us to turn the moon into revenue! 
Then the Earthlings 
left a faucet running and flooded our basement.

Copyright © 2023 by Matthew Olzmann. Originally published in Poem-a-Day on November 17, 2023, by the Academy of American Poets. 

https://poets.org/poem/earthlings/embed

So bad

I had to share it.

Lard’s World Peace Tips by Keith Tutt and Daniel Saunders for August 07, 2024


https://www.gocomics.com/lards-world-peace-tips/2024/08/07

Are the authorities powerless to stop Tommy Robinson’s online output?

New laws may make it easier to pursue far-right activist over alleged role in spreading disinformation

(I think they are here, because of our Constitution. However, it’d be good to see this sort of activity controlled, and people safer. -A)

Images of Tommy Robinson using his phone while sunbathing in Cyprus as a Rotherham hotel housing asylum seekers was set alight have prompted outrage among those long concerned about his ability to inspire far-right action, even from a distance.

Yet while he has long seemed able to operate with impunity, events may finally be catching up with the man who first rose to prominence in 2009 as the de facto leader of the now defunct English Defence League (EDL).

Far from being powerless to pursue Robinson, new legislation means the authorities may be able to move more easily against those who share damaging information online that they know to be untrue.

Robinson, whose real name is Stephen Yaxley-Lennon, is already known to be among those who are being looked at by police for their alleged role in disseminating disinformation.

A former director of public prosecutions, Ken Macdonald KC, spelled out on Monday how he believed investigators would want to quickly identify individuals who are involved in “online organisation, online incitement and online conspiracies”.

“I think prosecutors will want to have a strategy to identify people who may have been involved in inciting and encouraging these events, and they will want to arrest them and build cases against them. These are, in one sense, the most important people,” Lord Macdonald told BBC Radio 4’s World at One.

While Robinson has been abroad since 28 July, when he fled the UK on the eve of a high court hearing over contempt of court proceedings, he has maintained a near constant commentary on events in the UK since the fatal stabbings of three young girls in Southport on 29 July, sharing claims that police have described as false.

While he has long been a prolific user of multiple social media platforms – benefiting in particular from the return of his X account after Elon Musk bought Twitter – going after him for his online output is not clear-cut.


Dominic Grieve, a former attorney general for England and Wales, told the Guardian: “It is an offence to incite violence on the grounds of race, belief or sexual orientation, and there is incitement to hatred. But it’s a grey area between the right to criticise and incitement to hatred and is a very difficult area to police.

“Quite simply, that’s why it is possible for people to play around with that area. Either you clamp down on it, in which case legitimate freedom of speech gets eliminated and breeds undesirable problems of its own, or you live with it and challenge those views through debate.”

Recent changes in the law open up other possibilities. Since January, an amendment to the Online Safety Act 2023 allows for the prosecution of those who convey information that they know to be false and “if the person intended the message, or the information in it, to cause non-trivial psychological or physical harm to a likely audience”.

Ashley Fairbrother, a senior prosecutor at the law firm Edmonds Marshall McMahon, said: “This now makes the circulation of damaging and false information online into an offence in its own right.” (snip-More)

https://www.theguardian.com/uk-news/article/2024/aug/06/are-the-authorities-powerless-to-stop-tommy-robinsons-online-output

With AI sexual abuse on the rise, the White House is tapping Big Tech for support

The call to action comes as the issue has intensified in recent years, affecting everyone from students to public figures like Taylor Swift and AOC.

Originally published by The 19th. Republished with their republish link.

“This is an issue that affects everybody — from celebrities to high school girls.”

That’s how Jen Klein, director of the White House Gender Policy Council, describes the pervasiveness of image-based sexual abuse, a problem that artificial intelligence (AI) has intensified in recent years, touching everyone from students to public figures like Taylor Swift and Rep. Alexandria Ocasio-Cortez.

In May, the Biden-Harris administration announced a call to action to curb such abuse, which disproportionately targets girls, women and LGBTQ+ people. Stopping these images, whether real or AI-generated, from being circulated and monetized requires action not just from the government but from tech companies as well, according to the White House.

“We’re inviting technology companies and civil society to consider what steps they can take to prevent image-based sexual abuse, and there’s really a spectrum of actors who we hope will get involved in addressing the problem,” Klein said. “So that can be anything from the payment processors, to mobile app stores, to mobile app and operating system developers, cloud providers, search engines, etc. They all have a particular part of the sort of ecosystem in which this problem happens.”

Responding to the White House’s call to action, the Center for Democracy & Technology, the Cyber Civil Rights Initiative and the National Network to End Domestic Violence announced in June that they would form a working group to counteract the circulation and monetization of image-based sexual abuse. In late July, Meta, owner of Facebook and Instagram, removed 63,000 accounts linked to the “sextortion” of children and teens.  

While older forms of this abuse include the leaking of intimate photos without the consent of all parties, the AI version includes face swapping, whereby the head of one individual is placed on another person’s naked body, Klein said. Both Swift and Ocasio-Cortez have been victims of this kind of sexual abuse. In March, Ocasio-Cortez introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act of 2024. The legislation provides recourse for people, more than 90 percent of whom are women, who have had their likenesses used in intimate “digital forgery.” The Senate passed the DEFIANCE Act on July 23.

Such images have also garnered repeated headlines this year after spreading at schools. The White House’s appeal to tech companies follows the Biden-Harris administration’s recent updates to Title IX, the law that bars educational institutions that receive federal funds from engaging in sex discrimination. Under the new regulations that took effect Thursday, sex-based harassment includes sexually explicit deepfake images if they create a hostile school environment. 

The National Women’s Law Center is one of 37 organizations applauding this development in a letter sent Monday to the Department of Education by the Sexual Violence Prevention Association (SVPA). The coalition of groups represented by SVPA expressed concern, however, that many school administrators don’t know about image-based sexual abuse or how to address it. 

“We respectfully urge the Department of Education to issue guidance delineating Title IX procedures and protocols specifically tailored to addressing digital sexual harassment within educational institutions,” the letter states. “This guidance should provide clear direction on how schools can effectively handle cases of digital sexual harassment including support mechanisms for victims, investigation procedures, research and referrals, and prevention strategies.”

The Biden-Harris administration’s effort to prevent the proliferation of explicit deepfake images coincides with states taking action.

“There’s a patchwork of laws across the country, and there are 20 states that have passed laws penalizing the dissemination of nonconsensual AI-generated pornographic material,” Klein said. “But there’s a lot of work to be done, both at the state level and at the federal level to really make that work a whole quilt to continue the process.”

One state lawmaker who’s been concerned about deepfakes for years is California Assemblyman Marc Berman. A 2018 AI-generated video of former President Barack Obama, created by comedian and film director Jordan Peele, alarmed him because he felt that bad actors could use digitally manipulated videos to influence political races. The next year, Berman authored legislation to regulate the use of deepfake technology involving political candidates around election time. 

“It was pretty tricky because of the various First Amendment arguments that get raised,” he said. “The bill, to be honest, got watered down more than I wanted as it went through the process. But it has since been copied in other states, and then frankly, made stronger in other states.”

In May, Berman announced that similar legislation he’d introduced to prevent deepfakes from interfering with elections had advanced in California’s assembly. During the current legislative session, he introduced multiple bills related to digital forgery and artificial intelligence. AB 1831 seeks to prohibit child sex abuse deepfakes, while AB 2876 would require the state’s Instructional Quality Commission to consider incorporating AI literacy content into state mathematics, science, and history-social science curriculum standards when they’re up for revision next year.

Berman decided to file legislation to prohibit child sex abuse deepfakes when the California District Attorneys Association informed his office that they’re increasingly catching people who are creating, disseminating or possessing such images. 

“Their interpretation of California law currently is that it is not specifically illegal, because it doesn’t involve an image of an actual child — because AI takes thousands of images of real children and then spits out this artificial image,” Berman said. “So they said, ‘We need to close this loophole in California law and make sure that the law explicitly states that child sexual abuse material, even if it’s created by artificial intelligence, is illegal.’ I was shocked that people were even using AI to create this type of content, and then I found out just how pervasive it is, especially on the dark web. It’s terrifying.”

Possessing or distributing such images online may result in perpetrators sexually exploiting minors offline, making it all the more important to address AI-generated versions of this content before it spirals out of control and becomes a huge problem for the nation’s young people, Berman said.

Multiple schools in California have been rocked by deepfake scandals, often related to images created by students of their peers. In March, a Calabasas High School student accused her onetime friend of disseminating actual and AI-generated nudes of her to their peers. That same month, a Beverly Hills middle school expelled five students for allegedly circulating AI-generated nudes of their classmates. 

Such incidents are one reason Berman believes students need to be taught to use AI responsibly. “AB 2876 will equip students with the skills and the training that they need to both harness the benefits of AI, but also to mitigate the dangers and the ethical considerations of using artificial intelligence,” he said. 

The legislation has been ordered to a third reading, the bill’s final phase before it leaves the state assembly and moves to the senate. Meanwhile, his bill to prohibit child sex abuse deepfakes, AB 1831, has been referred to the suspense file, meaning that the bill’s potential fiscal impacts to the state are being reviewed. The legislation would take effect January 1 if enacted. 

“It’d be great if Congress can pass some federal standards on this,” Berman said. “It’s always an ideal when it comes to legislation that really applies to every state and to kids in every state.”

Pending national legislation addressing the issue includes The SHIELD Act and The Kids Online Safety and Privacy Act (KOSA), which the Senate passed July 30, although it still awaits a vote in the House of Representatives. The former would make the non-consensual sharing of intimate images a federal offense, while the latter would require social media companies to take steps to prevent children and teens from being sexually exploited online, among other measures. KOSA, however, has sparked fears that lawmakers could use it to censor content they dislike, particularly LGBTQ+ content, under the guise of protecting children. Civil liberties groups like the ACLU said that the bill raises privacy concerns, may limit youth’s access to important online resources and could silence needed conversations. 

Evan Greer, director at Fight for the Future, a nonprofit advocacy group focused on digital rights, objected to KOSA’s Senate passage in a statement. “We need legislation that addresses the harm of Big Tech and still lets young people fight for the type of world that they actually want to grow up in,” she said. 

AI-generated image-based sexual abuse also affects college students, according to Tracey Vitchers, executive director of It’s On Us, a nonprofit that addresses college sexual assault. She called it an emerging issue on college campuses.

“It really started with the emergence of nonconsensual image-sharing involving an individual sharing a private photo with someone that they thought they could trust,” she said. “We are now starting to see this challenge come forward with AI and deepfakes, and unfortunately, many schools are not equipped to investigate gender-based harassment and violence that occurs as a result of deepfakes.”

Vitchers appreciates that the new Title IX regulations touch on the issue, but said that colleges need more guidance from the Department of Education about how to respond to these incidents, and students need more prevention education.

“It’s something that we have begun discussing with some of our partners, particularly those in the online dating space,” Vitchers said. “We are hearing that fear, among particularly young women on campus, about someone who can just take a picture of you from Instagram and use AI to superimpose it onto porn. Then it gets circulated and it feels impossible to get it removed from the internet.”

Some tech companies have already offered their support to the White House’s effort to stop image-based sexual abuse, Klein said, but she would like to hear from others. Although state and national lawmakers are working to enact legislation and regulations, Klein said that the Biden-Harris administration is calling on tech companies to intervene because they can take action now. 

“Given the scale that image-based abuse has been rapidly proliferating with the advent of generative AI, we need to do this while we continue to work toward longer-term solutions,” she said.