I watched this video when it first came out. I was out of my skin upset. There is no justification on the planet for this. A little girl hurt and begging for help as the Israeli military attacks every aid team sent to help her, and in the end targets her, ending her life. I do not care about AIPAC money or any other pretend, made-up reason why a little preteen girl, injured and begging for help, is somehow an enemy combatant who needed to be used to kill those who would rescue her, and then be killed herself. If you think this is justified you are not human, you have no redeeming value, and I don’t want to know you. Hugs
Category: Violence
Who’s Gunning For Hegseth? | Jeet Heer | TMR
CNN Host Shreds Republican’s Defense Of Trump Boat Strikes
Chris Cuomo Is Dumb And Desperate
Hegseth Investigated By Pentagon’s Internal Watchdog
Clips from The Majority Report on Criminal Israel and their illegal war crimes against Palestinian adults and children
Clips from The Majority Report on ICE’s criminal actions, the tRump admin’s lies, and the admin’s racism.
Curtis Yarvin’s Idiotic Nazi-Bait Origin Story
More bad things by bad people. Old ones in my open tabs.
The owner of the shuttered Three Mile Island nuclear plant has been awarded a $1 billion federal loan guarantee that will enable it to shift onto taxpayers some of the risk of its plan to restart the Pennsylvania facility and sell the electricity to Microsoft for its data centers. Amid rising energy demands, the taxpayer-backed loan will go toward the unprecedented effort to reopen a mothballed U.S. nuclear plant that suffered a partial meltdown decades ago.
Judge Says ICE Used ChatGPT to Write Use-of-Force Reports
https://gizmodo.com/judge-says-ice-used-chatgpt-to-write-use-of-force-reports-2000692370
ChatGPT for Fascists. By AJ Dellinger. Reading time: 2 minutes.
Last week, a judge handed down a 223-page opinion that lambasted the Department of Homeland Security for how it has carried out raids targeting undocumented immigrants in Chicago. Buried in a footnote were two sentences that revealed at least one member of law enforcement used ChatGPT to write a report that was meant to document how the officer used force against an individual.
The ruling, written by US District Judge Sara Ellis, took issue with the way members of Immigration and Customs Enforcement and other agencies comported themselves while carrying out their so-called “Operation Midway Blitz,” which saw more than 3,300 people arrested and more than 600 held in ICE custody and included repeated violent conflicts with protesters and citizens. Those incidents were supposed to be documented by the agencies in use-of-force reports, but Judge Ellis noted that there were often inconsistencies between what appeared on tape from the officers’ body-worn cameras and what ended up in the written record, leading her to deem the reports unreliable.
More than that, though, she said at least one report was not even written by an officer. Instead, per her footnote, body camera footage revealed that an agent “asked ChatGPT to compile a narrative for a report based off of a brief sentence about an encounter and several images.” The officer reportedly submitted the output from ChatGPT as the report, despite the fact that it was provided with extremely limited information and likely filled in the rest with assumptions.
“To the extent that agents use ChatGPT to create their use of force reports, this further undermines their credibility and may explain the inaccuracy of these reports when viewed in light of the [body-worn camera] footage,” Ellis wrote in the footnote.
Per the Associated Press, it is unknown if the Department of Homeland Security has a clear policy regarding the use of generative AI tools to create reports. One would assume that, at the very least, it is far from best practice, considering generative AI will fill in gaps with completely fabricated information when it doesn’t have anything to draw from in its training data.
The DHS does have a dedicated page regarding the use of AI at the agency, and it has deployed its own chatbot to help agents complete “day-to-day activities” after test runs with commercially available chatbots, including ChatGPT. But the footnote doesn’t indicate that the agency’s internal tool is what the officer used; it suggests the person filling out the report went to ChatGPT directly and uploaded the information to complete the report.
No wonder one expert told the Associated Press this is the “worst case scenario” for AI use by law enforcement.
