I had to share it.
Category: Technology
Are the authorities powerless to stop Tommy Robinson’s online output?
New laws may make it easier to pursue far-right activist over alleged role in spreading disinformation
(I think they are here, because of our Constitution. However, it’d be good to see this sort of activity controlled, and people safer. -A)
Images of Tommy Robinson using his phone while sunbathing in Cyprus as a Rotherham hotel housing asylum seekers was set alight have prompted outrage among those long concerned about his ability to inspire far-right action, even from a distance.
Yet while he has long seemed able to operate with impunity, events may finally be catching up with the man who first rose to prominence in 2009 as the de facto leader of the now defunct English Defence League (EDL).
Far from being powerless to pursue Robinson, new legislation means the authorities may be able to move more easily against those who share damaging information online that they know to be untrue.
Robinson, whose real name is Stephen Yaxley-Lennon, is already known to be among those who are being looked at by police for their alleged role in disseminating disinformation.
A former director of public prosecutions, Ken Macdonald KC, spelled out on Monday how he believed investigators would want to quickly identify individuals who are involved in “online organisation, online incitement and online conspiracies”.
“I think prosecutors will want to have a strategy to identify people who may have been involved in inciting and encouraging these events, and they will want to arrest them and build cases against them. These are, in one sense, the most important people,” Lord Macdonald told BBC Radio 4’s World at One.
While Robinson has been abroad since 28 July, when he fled the UK on the eve of a high court hearing over contempt of court proceedings, he has maintained a near constant commentary on events in the UK since the fatal stabbings of three young girls in Southport on 29 July, sharing claims that police have described as false.
While he has long been a prolific user of multiple social media platforms – benefiting in particular from the return of his X account after Elon Musk bought Twitter – going after him for his online output is not clear-cut.
Dominic Grieve, a former attorney general for England and Wales, told the Guardian: “It is an offence to incite violence on the grounds of race, belief or sexual orientation, and there is incitement to hatred. But it’s a grey area between the right to criticise and incitement to hatred and is a very difficult area to police.
“Quite simply, that’s why it is possible for people to play around with that area. Either you clamp down on it, in which case legitimate freedom of speech gets eliminated and breeds undesirable problems of its own, or you live with it and challenge those views through debate.”
Recent changes in the law open up other possibilities. Since January, an amendment to the Online Safety Act 2023 allows for the prosecution of those who convey information that they know to be false and “if the person intended the message, or the information in it, to cause non-trivial psychological or physical harm to a likely audience”.
Ashley Fairbrother, a senior prosecutor at the law firm Edmonds Marshall McMahon, said: “This now makes the circulation of damaging and false information online into an offence in its own right.” (snip-More)
With AI sexual abuse on the rise, the White House is tapping Big Tech for support
The call to action comes as the issue has intensified in recent years, affecting everyone from students to public figures like Taylor Swift and AOC.
Originally published by The 19th. Republished with their republish link.
“This is an issue that affects everybody — from celebrities to high school girls.”
That’s how Jen Klein, director of the White House Gender Policy Council, describes the pervasiveness of image-based sexual abuse, a problem that artificial intelligence (AI) has intensified in recent years, touching everyone from students to public figures like Taylor Swift and Rep. Alexandria Ocasio-Cortez.
In May, the Biden-Harris administration announced a call to action to curb such abuse, which disproportionately targets girls, women and LGBTQ+ people. Stopping these images, whether real or AI-generated, from being circulated and monetized requires not just the government to act, but tech companies to as well, according to the White House.
“We’re inviting technology companies and civil society to consider what steps they can take to prevent image-based sexual abuse, and there’s really a spectrum of actors who we hope will get involved in addressing the problem,” Klein said. “So that can be anything from the payment processors, to mobile app stores, to mobile app and operating system developers, cloud providers, search engines, etc. They all have a particular part of the sort of ecosystem in which this problem happens.”
Responding to the White House’s call to action, the Center for Democracy & Technology, the Cyber Civil Rights Initiative and the National Network to End Domestic Violence announced in June that they would form a working group to counteract the circulation and monetization of image-based sexual abuse. In late July, Meta, owner of Facebook and Instagram, removed 63,000 accounts linked to the “sextortion” of children and teens.
While older forms of this abuse include the leaking of intimate photos without the consent of all parties, the AI version includes face swapping, whereby the head of one individual is placed on another person’s naked body, Klein said. Both Swift and Ocasio-Cortez have been victims of this kind of sexual abuse. In March, Ocasio-Cortez introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act of 2024. The legislation provides recourse for people, more than 90 percent of whom are women, who have had their likenesses used in intimate “digital forgery.” The Senate passed the DEFIANCE Act on July 23.
Such images have also garnered repeated headlines this year after spreading at schools. The White House’s appeal to tech companies follows the Biden-Harris administration’s recent updates to Title IX, the law that bars educational institutions that receive federal funds from engaging in sex discrimination. Under the new regulations that took effect Thursday, sex-based harassment includes sexually explicit deepfake images if they create a hostile school environment.
The National Women’s Law Center is one of 37 organizations applauding this development in a letter sent Monday to the Department of Education by the Sexual Violence Prevention Association (SVPA). The coalition of groups represented by SVPA expressed concern, however, that many school administrators don’t know about image-based sexual abuse or how to address it.
“We respectfully urge the Department of Education to issue guidance delineating Title IX procedures and protocols specifically tailored to addressing digital sexual harassment within educational institutions,” the letter states. “This guidance should provide clear direction on how schools can effectively handle cases of digital sexual harassment including support mechanisms for victims, investigation procedures, research and referrals, and prevention strategies.”
The Biden-Harris administration’s effort to prevent the proliferation of explicit deepfake images coincides with states taking action.
“There’s a patchwork of laws across the country, and there are 20 states that have passed laws penalizing the dissemination of nonconsensual AI-generated pornographic material,” Klein said. “But there’s a lot of work to be done, both at the state level and at the federal level to really make that work a whole quilt to continue the process.”
One state lawmaker who’s been concerned about deepfakes for years is California Assemblyman Marc Berman. A 2018 AI-generated video of former President Barack Obama, created by comedian and film director Jordan Peele, alarmed him because he felt that bad actors could use digitally manipulated videos to influence political races. The next year, Berman authored legislation to regulate the use of deepfake technology involving political candidates around election time.
“It was pretty tricky because of the various First Amendment arguments that get raised,” he said. “The bill, to be honest, got watered down more than I wanted as it went through the process. But it has since been copied in other states, and then frankly, made stronger in other states.”
In May, Berman announced that similar legislation he’d introduced to prevent deepfakes from interfering with elections had advanced in California’s assembly. During the current legislative session, he introduced multiple bills related to digital forgery and artificial intelligence. AB 1831 seeks to prohibit child sex abuse deepfakes, while AB 2876 would require the state’s Instructional Quality Commission to consider incorporating AI literacy content into state mathematics, science, and history-social science curriculum standards when they’re up for revision next year.
Berman decided to file legislation to prohibit child sex abuse deepfakes when the California District Attorneys Association informed his office that they’re increasingly catching people who are creating, disseminating or possessing such images.
“Their interpretation of California law currently is that it is not specifically illegal, because it doesn’t involve an image of an actual child — because AI takes thousands of images of real children and then spits out this artificial image,” Berman said. “So they said, ‘We need to close this loophole in California law and make sure that the law explicitly states that child sexual abuse material, even if it’s created by artificial intelligence, is illegal.’ I was shocked that people were even using AI to create this type of content, and then I found out just how pervasive it is, especially on the dark web. It’s terrifying.”
Possessing or distributing such images online may result in perpetrators sexually exploiting minors offline, making it all the more important to address AI-generated versions of this content before it spirals out of control and becomes a huge problem for the nation’s young people, Berman said.
Multiple schools in California have been rocked by deepfake scandals, often related to images created by students of their peers. In March, a Calabasas High School student accused her onetime friend of disseminating actual and AI-generated nudes of her to their peers. That same month, a Beverly Hills middle school expelled five students for allegedly circulating AI-generated nudes of their classmates.
Such incidents are one reason Berman believes students need to be taught to use AI responsibly. “AB 2876 will equip students with the skills and the training that they need to both harness the benefits of AI, but also to mitigate the dangers and the ethical considerations of using artificial intelligence,” he said.
The legislation has been ordered to a third reading, the bill’s final phase before it leaves the state assembly and moves to the senate. Meanwhile, his bill to prohibit child sex abuse deepfakes, AB 1831, has been referred to the suspense file, meaning that the bill’s potential fiscal impacts to the state are being reviewed. The legislation would take effect January 1 if enacted.
“It’d be great if Congress can pass some federal standards on this,” Berman said. “It’s always an ideal when it comes to legislation that really applies to every state and to kids in every state.”
Pending national legislation addressing the issue includes The SHIELD Act and The Kids Online Safety and Privacy Act (KOSA), which the Senate passed July 30, although it still awaits a vote in the House of Representatives. The former would make the non-consensual sharing of intimate images a federal offense, while the latter would require social media companies to take steps to prevent children and teens from being sexually exploited online, among other measures. KOSA, however, has sparked fears that lawmakers could use it to censor content they dislike, particularly LGBTQ+ content, under the guise of protecting children. Civil liberties groups like the ACLU said that the bill raises privacy concerns, may limit youth’s access to important online resources and could silence needed conversations.
Evan Greer, director at Fight for the Future, a nonprofit advocacy group focused on digital rights, objected to KOSA’s Senate passage in a statement. “We need legislation that addresses the harm of Big Tech and still lets young people fight for the type of world that they actually want to grow up in,” she said.
AI-generated image-based sexual abuse also affects college students, according to Tracey Vitchers, executive director of It’s On Us, a nonprofit that addresses college sexual assault. She called it an emerging issue on college campuses.
“It really started with the emergence of nonconsensual image-sharing involving an individual sharing a private photo with someone that they thought they could trust,” she said. “We are now starting to see this challenge come forward with AI and deepfakes, and unfortunately, many schools are not equipped to investigate gender-based harassment and violence that occurs as a result of deepfakes.”
Vitchers appreciates that the new Title IX regulations touch on the issue, but said that colleges need more guidance from the Department of Education about how to respond to these incidents, and students need more prevention education.
“It’s something that we have begun discussing with some of our partners, particularly those in the online dating space,” Vitchers said. “We are hearing that fear, among particularly young women on campus, about someone who can just take a picture of you from Instagram and use AI to superimpose it onto porn. Then it gets circulated and it feels impossible to get it removed from the internet.”
Some tech companies have already offered their support to the White House’s effort to stop image-based sexual abuse, Klein said, but she would like to hear from others. Although state and national lawmakers are working to enact legislation and regulations, Klein said that the Biden-Harris administration is calling on tech companies to intervene because they can take action now.
“Given the scale that image-based abuse has been rapidly proliferating with the advent of generative AI, we need to do this while we continue to work toward longer-term solutions,” she said.
Let’s talk about Ukraine, planes, and what comes next….
Peace and Justice history for 8/3
One snip today; there is more on the page. But this entry falls into today’s “Republicans lie” narrative:
August 3, 1981

Nearly 13,000 of the nation’s 17,500 air traffic controllers, members of the Professional Air Traffic Controllers Organization (PATCO), went on strike.

After six months of negotiations with PATCO President Robert Poli, the Federal Aviation Administration (FAA) had offered less than 10% of what the union had sought. Due to the stressful nature of their jobs, managing the nation’s ever-increasing volume of airport landings and take-offs without up-to-date equipment, they had asked for a shorter workweek, an increase in pay and retirement after 20 years. 95% of PATCO members rejected the FAA’s final offer. The union had endorsed Ronald Reagan for president in 1980 (one of very few to do so), but President Reagan said they were violating U.S. law banning strikes by federal workers, and would all be terminated unless they returned to work within 48 hours.

A Reagan Letter to Robert Poli, PATCO (October 20, 1980)

Dear Mr. Poli: I have been briefed by members of my staff as to the deplorable state of our nation’s air traffic control system. They have told me that too few people working unreasonable hours with obsolete equipment has placed the nation’s air travellers in unwarranted danger. In an area so clearly related to public safety the Carter administration has failed to act responsibly. You can rest assured that if I am elected President, I will take whatever steps are necessary to provide our air traffic controllers with the most modern equipment available and to adjust staff levels and work days so that they are commensurate with achieving a maximum degree of public safety…. I pledge to you that my administration will work very closely with you to bring about a spirit of cooperation between the President and the air traffic controllers. Sincerely, Ronald Reagan

More about the strike: https://socialistworker.org/2011/02/25/lessons-of-the-patco-strike
https://www.peacebuttons.info/E-News/peacehistoryaugust.htm#august31981
How to strip metadata from your files
https://getsession.org/blog/how-to-strip-metadata-from-your-files
You are being sold with every email, every picture, every blog post, every Instagram post, everything you put on X or Facebook; you are the cash cow these companies want to milk. I have been taking steps for years to reduce how much these businesses can strip-mine data about me. Here is a program I like, and here is a free tutorial on how to remove a lot of the metadata from your images and files. I recently realized how much information about myself I include in messages, so I looked into this program by Session. Hugs. Scottie
January 07, 2022 / Alex Linton
Metadata is a big deal — especially for people who are trying to protect their online privacy. For any files on your computer, there is probably a bunch of metadata about the file — such as location data, when the file was created, when it was last changed, or even what device created the file. If you ever give that file to someone else (or upload it to the internet), the file’s metadata will be transferred as well — which means they can see all that info.
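Even a file's timestamps are metadata, as the paragraph above notes. A minimal Python sketch (the helper name `file_metadata` is my own, purely illustrative) shows the filesystem-level metadata that any file carries, which is separate from the metadata embedded inside the file itself (such as EXIF):

```python
import os
import time


def file_metadata(path):
    """Return basic filesystem metadata for a file.

    These values (size, timestamps) describe the file from the outside.
    Embedded metadata like EXIF lives *inside* the file's bytes and needs
    a dedicated tool to read or remove.
    """
    st = os.stat(path)
    return {
        "size_bytes": st.st_size,
        "modified": time.ctime(st.st_mtime),
        "accessed": time.ctime(st.st_atime),
    }
```

Calling `file_metadata("photo.jpg")` on one of your own files makes it easy to see how much a file reveals before you even consider what's embedded inside it.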
There are plenty of reasons you might want to share a file without sharing its metadata as well — such as preserving your privacy or anonymity. To do that, you’ll want to strip the file’s metadata. Stripping metadata removes all that extra information and leaves you with just the contents of the file itself. Some messengers — like Session — strip metadata from your image files when you attach them to a message, because image files can contain sensitive EXIF data recording device and location information. Other file types — like MP3s — might carry less sensitive metadata (such as a song name or artist) which won’t be stripped.
However, if you want to be sure, you can always manually strip the metadata from your files. Here’s how!
How to remove metadata from files using Windows
Scrubbing your files on Windows is relatively easy: you can do it directly in File Explorer without relying on any third-party software.
Locate the file you want to strip in Explorer
Right click on the file and select Properties
Select the Details tab to see the associated metadata
Click Remove Properties and Personal Information
Select the metadata you want to remove
Choose either Create a copy (if you want to preserve the original) or Remove properties from this file
Click OK
Ta-da! Metadata removed, now you can safely share the file without extra personal information being attached.
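For the curious, here is roughly what stripping looks like under the hood for one common format. This is a hand-rolled Python sketch for JPEG files only, not what File Explorer actually runs: EXIF and XMP metadata live in a JPEG's APP1 segments, so dropping the APPn and COM (comment) segments while keeping the structural ones removes the embedded metadata. Dedicated tools handle edge cases this sketch ignores.

```python
def strip_jpeg_metadata(data: bytes) -> bytes:
    """Remove metadata segments (APP1-APP15, e.g. EXIF/XMP, plus COM
    comments) from a JPEG given as bytes, keeping the image data intact."""
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG: missing SOI marker")
    out = bytearray(b"\xff\xd8")  # keep the Start Of Image marker
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            raise ValueError("corrupt JPEG: expected a segment marker")
        marker = data[i + 1]
        if marker == 0xDA:  # Start Of Scan: compressed pixels follow,
            out += data[i:]  # so copy everything from here verbatim
            break
        # Each segment header carries a big-endian length that counts
        # itself (2 bytes) plus the payload, but not the marker.
        length = int.from_bytes(data[i + 2 : i + 4], "big")
        segment = data[i : i + 2 + length]
        # Drop APP1..APP15 (0xE1-0xEF) and COM (0xFE); keep APP0 (JFIF)
        # and structural segments like quantisation tables and headers.
        if not (0xE1 <= marker <= 0xEF or marker == 0xFE):
            out += segment
        i += 2 + length
    return bytes(out)
```

The same idea, done robustly across many file formats, is exactly what purpose-built strippers provide; the point of the sketch is just that metadata is ordinary bytes sitting alongside the image data, and removing it means rewriting the file without those segments.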
How to remove metadata from files using macOS
Unfortunately things aren’t as simple on macOS. There’s no catch-all option for all file types, and you can only remove specific kinds of data. For images, you can remove location data using macOS’s built-in Preview application.
Open the image in Preview
Hit CMD + i to display the image’s metadata
Navigate to the Info tab (circle with an ‘i’ inside it)
Click on GPS to see the location metadata
Click Remove Location Info to remove the GPS information stored in the image
For anything more thorough than this (or any file type other than images) you’ll need to use third-party software, such as Acrobat Pro for PDFs.
How to remove metadata from files using Linux
Thankfully, things are much easier on Linux! Note: Instructions may be different depending on your distro.
If you’re using a Debian-based system (such as Ubuntu), you can install MAT (which comes pre-packaged) using this command:

$ sudo apt-get install mat

You can then launch the MAT GUI using this:

$ mat-gui

Add the file you want to strip to MAT by clicking the Add icon in the top navigation bar
Once the file is added, click Check to scan for metadata
If metadata is detected, MAT will mark the file as ‘Dirty’; double-click the file to see what metadata has been detected
Click Clean to strip the metadata from the file
Simple as that!
Keep your files safe
Stripping metadata from your files before you share them is a good idea — especially if you’re going to upload them somewhere public on the internet. If you want to be certain that the files you send are completely metadata-free (even to the people you send them to), make sure to strip them first! To ensure your files stay private, it’s also important to consider how and where you store your files. Make sure you’re encrypting your hard drives, and be careful about which cloud storage providers you use and what you upload.
These days, it’s a normal everyday thing to share files on messaging apps and social media — but it’s important to be mindful about what exactly you’re sharing. Once those files hit the internet — they could end up anywhere!
For Science!
New feature spotted in brightest gamma-ray burst of all time
July 28, 2024 Evrim Yazgin
NASA’s Fermi Telescope has revealed new details about the brightest gamma-ray burst of all time, which may help explain these extreme and mysterious cosmic events.
Gamma-ray bursts (GRBs) usually last less than a second. They originate from the dense remains of a dead giant star’s core, called a neutron star. But what causes neutron stars to release huge amounts of energy in the form of gamma radiation is still a mystery.

In October 2022, astronomers detected the brightest gamma-ray burst ever seen – GRB 221009A. It came from a supernova about 2.4 billion light years away. The event had an intensity at least 10 times greater than any other GRB detected. It was dubbed the BOAT, for brightest of all time.
Now, analysis of the data from that event has revealed the first emission line which can be confidently identified in 50 years of studying GRBs.
The new analysis is published in Science.
Emission lines are created when matter interacts with light. Energy from the light is absorbed and re-emitted in ways characteristic of the chemical makeup of the matter interacting with it.
When the light reaches Earth and is spread out like a rainbow into a spectrum, the absorption and emission lines appear. Absorption lines appear as dimmer or even black lines in the spectrum, whereas emission lines are brighter features.
At higher energies, these features in the spectrum can reveal processes between subatomic particles, such as matter and antimatter annihilation, which can produce gamma rays.
“While some previous studies have reported possible evidence for absorption and emission features in other GRBs, subsequent scrutiny revealed that all of these could just be statistical fluctuations,” says coauthor Om Sharan Salafia at the Italian National Institute of Astrophysics Brera Observatory in Milan. “What we see in the BOAT is different.”
The emission line appeared almost 5 minutes after the burst was detected. It lasted about 40 seconds.
It peaked at 12 million electron volts of energy – millions of times more energetic than light in the visible spectrum.
The astronomers believe the emission line was caused by the annihilation of electrons and their anti-matter counterparts, positrons. If their interpretation is correct, it means the particles would have to have been moving toward Earth at 99.9% the speed of light.
“After decades of studying these incredible cosmic explosions, we still don’t understand the details of how these jets work,” says Elizabeth Hays, Fermi project scientist at NASA’s Goddard Space Flight Center in the US. “Finding clues like this remarkable emission line will help scientists investigate this extreme environment more deeply.”
https://cosmosmagazine.com/space/astronomy/brightest-gamma-ray-burst-new-details/
Dems, Non-Trumpers: Going on Offense in Pushing Back Against Trump’s Lies and Missteps
I have followed Gronda for a long time, since before she took her long break. But she is back, and her writings, while in-depth and a bit long, are so very interesting and well researched that they are more than worth the time to read. I love them. I hope everyone here will. Hugs. Scottie
