The Growing Surveillance State | Jessica Burbank | TMR

Alabama taxpayers are funding Christian textbooks that lie to children

Clips from The Majority Report on Criminal Israel and its war crimes against Palestinian adults and children


Clips from The Majority Report on ICE's criminal actions, the tRump admin's lies, and the admin's racism.

Curtis Yarvin’s Idiotic Nazi-Bait Origin Story


Judge Says ICE Used ChatGPT to Write Use-of-Force Reports

https://gizmodo.com/judge-says-ice-used-chatgpt-to-write-use-of-force-reports-2000692370

ChatGPT for Fascists.

Last week, a judge handed down a 223-page opinion that lambasted the Department of Homeland Security for how it has carried out raids targeting undocumented immigrants in Chicago. Buried in a footnote were two sentences that revealed at least one member of law enforcement used ChatGPT to write a report that was meant to document how the officer used force against an individual.

The ruling, written by US District Judge Sara Ellis, took issue with how members of Immigration and Customs Enforcement and other agencies conducted themselves during the so-called "Operation Midway Blitz," which saw more than 3,300 people arrested, more than 600 held in ICE custody, and repeated violent clashes with protesters and citizens. Those incidents were supposed to be documented by the agencies in use-of-force reports, but Judge Ellis noted that the written record often conflicted with the officers' body-worn camera footage, leading her to deem the reports unreliable.

More than that, though, she said at least one report was not even written by an officer. Instead, per her footnote, body camera footage revealed that an agent "asked ChatGPT to compile a narrative for a report based off of a brief sentence about an encounter and several images." The officer reportedly submitted ChatGPT's output as the report, even though the tool had been given extremely limited information and likely filled in the rest with assumptions.

“To the extent that agents use ChatGPT to create their use of force reports, this further undermines their credibility and may explain the inaccuracy of these reports when viewed in light of the [body-worn camera] footage,” Ellis wrote in the footnote.

Per the Associated Press, it is unknown whether the Department of Homeland Security has a clear policy on using generative AI tools to write reports. One would assume it is, at the very least, far from best practice, considering generative AI tends to fill gaps with fabricated details when it is given little real information to work from.

The DHS does have a dedicated page on its use of AI and has deployed its own chatbot to help agents with "day-to-day activities" after test runs with commercially available chatbots, including ChatGPT. But the footnote doesn't indicate that the agency's internal tool is what the officer used; it suggests the person filling out the report went directly to ChatGPT and uploaded the information to complete it.

No wonder one expert told the Associated Press this is the “worst case scenario” for AI use by law enforcement.

Trump Berates Reporters, Gets Mystery MRI & Closes Border to (Non-White) Immigrants | The Daily Show

Trump FLIPS OUT When Reporter Calls Out His Lies

Trump-Hegseth’s War Crimes Scandal Isn’t Going Away

Let’s talk about Trump vs House and Senate oversight: Boat edition….