An Iowa school district is using ChatGPT to decide which books to ban

https://arstechnica.com/information-technology/2023/08/an-iowa-school-district-is-using-chatgpt-to-decide-which-books-to-ban/

But notice the one book they don’t submit to the bot scan: the Bible, which contains exactly the kind of content they programmed the tool to flag while getting rid of LGBTQIA-inclusive material.

Official: “It is simply not feasible to read every book” for depictions of sex.

[Image: A book wrapped in chains. Credit: Getty Images]

In response to recently enacted state legislation in Iowa, administrators are removing banned books from Mason City school libraries, and officials are using ChatGPT to help them pick the books, according to The Gazette and Popular Science.

The new law behind the ban, signed by Governor Kim Reynolds, is part of a wave of educational reforms that Republican lawmakers believe are necessary to protect students from exposure to damaging and obscene materials. Specifically, Senate File 496 mandates that every book available to students in school libraries be “age appropriate” and devoid of any “descriptions or visual depictions of a sex act,” per Iowa Code 702.17.

To determine which books fit the bill, Mason City Assistant Superintendent Bridgette Exman asks ChatGPT: “Does [book] contain a description or depiction of a sex act?” If the answer is yes, the book will be removed from circulation.

The district detailed more of its methodology: “Lists of commonly challenged books were compiled from several sources to create a master list of books that should be reviewed. The books on this master list were filtered for challenges related to sexual content. Each of these texts was reviewed using AI software to determine if it contains a depiction of a sex act. Based on this review, there are 19 texts that will be removed from our 7-12 school library collections and stored in the Administrative Center while we await further guidance or clarity. We also will have teachers review classroom library collections.”
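The district’s workflow, as described, boils down to posing one yes/no question per title and removing the book on an affirmative answer. A minimal sketch of that pipeline is below; the function names are hypothetical, the commented-out API call assumes the OpenAI Python client, and none of this addresses the reliability problems discussed next (the model’s answer is not grounded in the book’s actual text):

```python
# Hypothetical sketch of the district's reported per-title query.
# The prompt wording is quoted from the article; build_query and
# should_remove are illustrative names, not the district's code.

def build_query(title: str) -> str:
    """Format the yes/no question the district reportedly posed per book."""
    return f"Does {title} contain a description or depiction of a sex act?"

def should_remove(answer: str) -> bool:
    """Per the reported policy: an affirmative answer means removal."""
    return answer.strip().lower().startswith("yes")

# Sending build_query(...) to a chat model would look roughly like this
# (requires an API key, so it is left commented out):
#
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-4",
#       messages=[{"role": "user", "content": build_query(title)}],
#   )
#   answer = resp.choices[0].message.content
```

Note that `should_remove` simply trusts whatever string comes back, which is precisely the automation-bias failure mode described below: the yes/no answer carries no evidence that the model has ever seen the book.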

Unfit for this purpose

In the wake of ChatGPT’s release, it has been increasingly common to see the AI assistant stretched beyond its capabilities—and to read about its inaccurate outputs being accepted by humans due to automation bias, which is the tendency to place undue trust in machine decision-making. In this case, that bias is doubly convenient for administrators because they can pass responsibility for the decisions to the AI model. However, the machine is not equipped to make these kinds of decisions.

Large language models, such as those that power ChatGPT, are not oracles of infinite wisdom, and they make poor factual references. They are prone to confabulate information when it is not in their training data. Even when the data is present, their judgment should not serve as a substitute for a human—especially concerning matters of law, safety, or public health.

“This is the perfect example of a prompt to ChatGPT which is almost certain to produce convincing but utterly unreliable results,” Simon Willison, an AI researcher who often writes about large language models, told Ars. “The question of whether a book contains a description or depiction of a sex act can only be accurately answered by a model that has seen the full text of the book. But OpenAI won’t tell us what ChatGPT has been trained on, so we have no way of knowing if it’s seen the contents of the book in question or not.”

It’s highly unlikely that ChatGPT’s training data includes the full text of every book in question. If a book is famous enough, the data may include references to discussions of its content, but that is not an accurate source of information either.

“We can guess at how it might be able to answer the question, based on the swathes of the Internet that ChatGPT has seen,” Willison said. “But that lack of transparency leaves us working in the dark. Could it be confused by Internet fan fiction relating to the characters in the book? How about misleading reviews written online by people with a grudge against the author?”

Indeed, ChatGPT has proven unsuitable for this task even in cursory tests by others. When Popular Science questioned ChatGPT about the books on the potential ban list, it found uneven results, including some responses that apparently did not match the bans put in place.

“There’s something ironic about people in charge of education not knowing enough to critically determine which books are good or bad to include in curriculum, only to outsource the decision to a system that can’t understand books and can’t critically think at all,” Dr. Margaret Mitchell, chief ethics scientist at Hugging Face, told Ars.

6 thoughts on “An Iowa school district is using ChatGPT to decide which books to ban”

  1. ‘ “There’s something ironic about people in charge of education not knowing enough to critically determine which books are good or bad to include in curriculum, only to outsource the decision to a system that can’t understand books and can’t critically think at all,” Dr. Margaret Mitchell, chief ethics scientist at Hugging Face, told Ars. ’

    Absolutely correct. I understand no one wanting to be blamed for decisions so palming it off on machines, but seriously. There needs to be a funded position, or more, filled by human/s to review the books and present reports.


    1. Here’s an example of how that is likely to work out: a school in another town is in trouble with parents because they held outdoor football practice in 111-degree heat. They say they relied on a machine to tell them if it was OK. I want to find a link, but it apparently hasn’t been posted online yet; I just now watched it on the news here. I want a link because “machine” isn’t the word they used, nor is “program,” but they blamed a non-human item for the decision. Parents are not happy. Anyway, there’s an illustration of non-human stuff making decisions for humans.

      Or we could watch any of many eps of “Outer Limits” at no harm to actual humans, and find out the same thing.


      1. Hello Ali. Yes, you have it correct: “Hey, it is not our fault, the machine did it.” But again, they only submit books they want to ban anyway. The Moms for Liberty hate group already started a list of books the right needs to attack because they have LGBTQIA characters or story lines. We can only have straight cis white kids in books and movies from now on. Hugs


  2. Ah. They are using Confirmation Bias, or ‘Selective Viewing’, and covering it up by blaming a machine. If it wasn’t harming folk it would be funny.
    Pretty weak response really.
    Official: “It is simply not feasible to read every book” for depictions of sex….. Oh please!
    As a retired UK Civil Servant who spent 40+ years having to respond to public questions (that’s face-to-face with individuals, not some lame PR job), I would be embarrassed and ashamed to have relied on such weak words.
    As the old US saying goes, and I love it because it’s great for giving vent to exasperation:
    “Jeez Louise!”


    1. 😊 I also love “Jeez Louise!” and many other colorful terms, some even more colorful than that one, but “Jeez Louise!” provides great relief!


      1. I too try to restrict myself, Ali, leastways on public social media.
        Now if you were to visit the abode of two sedate 70+ year olds when the subjects of politics, or the failings of computer programmes (looking at you, Word), arise……..🤭

