Precision matters in this conversation. The original op-ed focused on commercially produced chatbots powered by large language models trained for next-word prediction. Here, I’ll call them C-LLMs for specificity; “AI” covers much more than this one product type.
“Intelligence” needs to be defined as a concept before it can be assessed. The influence of neuroscience on the design of artificial neural networks, and how that relates to locating “intelligence” in machines, is complex. The history and interrogation of these points are beyond the scope of this format, but I’ve tried to explore them, along with others, and to curate a bibliography encouraging further exploration, in a different space: https://bit.ly/JA_AIProject.
LLMs are statistical models that reflect and repeat patterns in their training data. C-LLM outputs have more in common with data visualizations than with intentional communication. Chatbots built on LLMs successfully mimic participating in language because they are consciously, deliberately designed to. From the anthropomorphized user interface, to the next-word prediction algorithm, to the training data, to fine-tuning the models, to marketing: we can’t set aside how these products and their output are built on and shaped by human choices and human labor. These product development decisions exploit a human predisposition to project our own intelligence onto things that seem to be communicating with us. The phenomenon of projecting intelligence onto machines that mimic communication is called the ELIZA effect, named for a fully rules-based chatbot from the 1960s. If the effect was noted even then, it is likely involved in our interpretations of today’s generative AI output.
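To make “reflect and repeat patterns” concrete, here is a minimal, hypothetical sketch (in Python) of next-word prediction using simple bigram counts. It is nothing like the scale or architecture of a commercial LLM, and the toy training text is invented for illustration, but it shows the core idea: each “next word” is drawn from statistics over the training data, not from understanding.

    import random
    from collections import Counter, defaultdict

    # Toy "training data": the model can only ever echo patterns found here.
    training_text = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    )

    # Count how often each word follows each other word (bigram statistics).
    counts = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1

    def predict_next(word):
        """Sample a next word in proportion to how often it followed `word` in training."""
        followers = counts[word]
        return random.choices(list(followers.keys()), weights=list(followers.values()))[0]

    # Generate a short continuation by repeatedly predicting the next word.
    word = "the"
    output = [word]
    for _ in range(8):
        word = predict_next(word)
        output.append(word)
    print(" ".join(output))

Everything the sketch can “say” is recombined from its training text; a C-LLM does the same thing at vastly greater scale, which is precisely why its fluent output invites the projection described above.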
As discussed by Dr. Alex Hanna during her campus visit, the dangers of C-LLMs (and other AI products) are not in imagined futures. They exist right now, and are environmental, social, legal, personal, psychological and more. Addressing C-LLM and other AI harms is interdisciplinary and urgent. Rejection is a reasonable response to harmful products. Technology is not inevitable, and we have agency in what we adopt.
Yes, follow the letter writer’s advice: “dare to know.” But that means dare to look past the surface. Dare to question. Dare to learn. Dare to contextualize. Dare to layer analytical lenses into a prism that breaks up the white light of a piece of software into the full colour spectrum of its origins, trajectory and impact on our lives. Dare to understand.
Janna Avon, Digital Initiatives Librarian