The use of artificial intelligence tools in schools is a topic that has raised eyebrows and concerns among book lovers and educators alike. Recently, the Mason City Community School District in Iowa made headlines for using ChatGPT, an AI-powered tool, to help remove certain books from its library shelves. This move ignited a passionate debate about the role of AI in shaping access to literature and the preservation of original work.
Critics argue that by relying on AI tools like ChatGPT to determine which books should be removed from libraries, there is a risk of stifling creativity and limiting exposure to diverse perspectives. They argue that this is yet another example of AI “taking out its No. 1 enemy: original work,” as Vulture aptly put it. The removal of books such as Margaret Atwood’s “The Handmaid’s Tale” and Toni Morrison’s “Beloved” has further fueled concerns about censorship and the suppression of important literary works.
However, there is another side to this story. The Mason City Community School District, like many other educational institutions, is bound by a vaguely worded law requiring library materials to be “age-appropriate” and free of “descriptions or visual depictions of a sex act.” In May, Governor Kim Reynolds signed Iowa’s SF 496 into law, a bill that sought to limit discussions about sexuality and gender identity in schools. While the intentions behind such bills may vary, their stipulations often result in a sweeping eradication of any material that touches on these topics.
As a result, educators like Bridgette Exman, the assistant superintendent of the Mason City School District, are faced with the difficult task of adhering to these laws without the resources to systematically review every book in their libraries. It is simply not feasible to read and filter every book for compliance with these new requirements. This is where AI tools like ChatGPT come into play. They offer a potential solution to help educators navigate the complexities of these laws and ensure compliance.
However, relying on AI to interpret laws that lack nuance and are open to interpretation poses a unique conundrum. While broad bans on sexual content may seem straightforward, the reality is that they invite subjective judgment calls, and that is exactly where AI’s limitations show. AI does not always fully comprehend the subtleties of written language and can struggle to distinguish between explicit descriptions of sexual acts and nuanced discussions of sexual topics and relationships.
The case of Mason City’s use of ChatGPT highlights this challenge. The tool was asked to determine whether certain books contained descriptions or depictions of a sex act. If the answer was yes, the book was removed. But what about when the answer fell into a gray area? The Bible, a religious text studied widely, contains passages that address sexual topics and relationships without explicit descriptions of sexual acts. ChatGPT classified it as having sexual depictions but provided context that muddied the verdict, and that kind of inconsistency makes the task of deciding which books to remove even harder.
It is important to note that educators like Exman faced similar challenges. They received responses from ChatGPT that acknowledged the presence of sexual depictions in certain books but included additional context. Exman describes these responses as “Yes, but” answers, which don’t align with the strict parameters of the law. Consequently, books like Patricia McCormick’s “Sold,” which addresses the issue of human trafficking through the eyes of a young girl forced into prostitution, ended up on the banned list.
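The trouble with “Yes, but” answers is that a strict yes/no pipeline has nowhere to put them. A minimal sketch of an alternative, in Python, shows how hedged responses could be routed to a human reviewer instead of straight to the removal list. The function and category names here are illustrative assumptions, not the district’s actual process or tooling:

```python
import re

def triage_answer(answer: str) -> str:
    """Classify a model's free-text answer about a book.

    Returns "remove" for an unqualified yes, "keep" for a no, and
    "human_review" for hedged "Yes, but ..." responses or anything
    else that doesn't parse cleanly.
    """
    text = answer.strip().lower()
    if text.startswith("yes"):
        # A qualified answer ("yes, but", "yes, however") signals the
        # gray area the article describes and needs a human reader.
        if re.search(r"\byes\b[,;]?\s*(but|however|although)", text):
            return "human_review"
        return "remove"
    if text.startswith("no"):
        return "keep"
    return "human_review"

if __name__ == "__main__":
    print(triage_answer("No, the book does not contain such depictions."))  # keep
    print(triage_answer("Yes, the book contains an explicit scene."))       # remove
    print(triage_answer("Yes, but only in non-explicit, contextual passages."))  # human_review
```

Under a design like this, a book such as “Sold” would have landed in a review queue rather than on the banned list, which is precisely the human oversight the next paragraphs argue for.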
The use of AI in these situations is not without its flaws and challenges. While it can assist educators in identifying potentially problematic material, it is crucial to recognize that AI is not infallible. Its limitations in understanding nuance and interpreting complex laws underscore the need for human oversight and critical thinking. Educators, administrators, and policymakers must work together to strike a balance between the need to comply with legal requirements and the desire to provide students with a diverse and comprehensive education.
In conclusion, the use of AI tools like ChatGPT in school districts raises important questions about censorship, the preservation of original work, and the interpretation of vague laws. While AI can offer assistance in navigating complex legal requirements, it is not a perfect solution. Human oversight and critical thinking are essential in ensuring that decisions about which books to remove from library shelves are made with careful consideration and respect for the diverse perspectives and experiences literature has to offer.