The application of artificial intelligence (AI) in the world is growing so rapidly that it's a safe bet you interact with some form of AI every day. The core premise of AI is simple: machines performing tasks that once required human intelligence.
Any machine that imitates some features of human intelligence, such as perception, reasoning, problem-solving, language interaction, or creative design, is a form of AI.
However, these machines are not yet perfect, and we are just beginning to come to terms with the ethical implications of their use. Artificial intelligence is constantly evolving, becoming more complex by the day, but it can still easily produce false information or exhibit clear bias. While AI can be problematic, it still offers significant opportunities for growth, innovation and improvement in almost every industry.
The modern library is no exception to this.
Libraries have long been a trustworthy wealth of information and services. They are inclusive spaces where patrons can access resources and hone critical thinking skills to foster a well-informed community.
They have also become increasingly high-tech operations, where patrons can access digital tools, databases and news platforms such as PressReader.
With the rise of artificial intelligence, the modern library has an opportunity to adapt and grow to improve the experience of every patron who walks through the door. It all begins with AI literacy; by learning how these machines and programs function, librarians can leverage these tools to benefit themselves and each of their patrons.
We have seen significant advancements in natural language processing and other AI technologies since the launch of OpenAI’s ChatGPT program on November 30, 2022.
Though chatbots have been used for years, ChatGPT was more advanced than anything made available to the public; it could be used for anything ranging from computer coding to advanced writing.
ChatGPT’s launch accelerated the incorporation of AI into almost every industry; it’s believed that at least 35% of companies worldwide are now using some form of AI. Suddenly, it was easier to manage logistics, communicate with customer service representatives, spark new ideas in meetings and more.
Since the launch of this program, there have been several new AI options made available to the public, such as Google’s Gemini and Microsoft’s Copilot. Engineers, creative writers, artists, personal assistants — AI applications can be used to boost productivity, innovation, and accessibility by professionals in just about every industry.
However, modern AI systems don’t think as humans do. Instead, they rely on pattern recognition, often built on a technique called “machine learning”: algorithms that learn from incredibly large amounts of data rather than following hand-written rules.
When an AI uses machine learning algorithms, it isn’t programmed to respond a certain way; instead, it analyzes the training data to recognize patterns.
Whether this data is images, math equations, or even the writing style of a famous author, AI begins to predict how certain aspects are combined so it can formulate a coherent response.
For example, if the AI is shown thousands of pictures of a certain dog breed, it begins to pinpoint certain factors that make it easier to recognize; when shown a picture of a cat, the AI can recognize that this is not the same thing.
This means that an AI does not instinctively know what a cat is unless this information was included in the data on which it was trained. It can tell you that the image is not of a dog but cannot explain what it is.
Typically, this training data is carefully controlled at first. It is fed so that the AI can begin to recognize an intended pattern; if you want it to be capable of recognizing a dog, you need to feed it images of dogs.
Artificial intelligence thrives on data; the more it has, the more connections it can begin to identify. The more information it processes, the more patterns it recognizes.
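The pattern-recognition idea above can be sketched in a few lines of code. The following is a toy illustration only, with made-up feature names and values (real image classifiers learn from pixels, not hand-picked measurements): a nearest-centroid classifier that averages labeled training examples into a "pattern" per label, then assigns new examples to the closest pattern.

```python
# Toy sketch of machine learning as pattern recognition: a nearest-centroid
# classifier over hypothetical (weight_kg, ear_length_cm) feature pairs.

def train(examples):
    """Average the feature vectors for each label (the learned 'pattern')."""
    grouped = {}
    for label, features in examples:
        grouped.setdefault(label, []).append(features)
    return {
        label: tuple(sum(dim) / len(vecs) for dim in zip(*vecs))
        for label, vecs in grouped.items()
    }

def predict(centroids, features):
    """Assign the label whose learned pattern is closest to the new example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: distance(centroids[label], features))

# Hypothetical training data: (label, (weight_kg, ear_length_cm)).
training_data = [
    ("dog", (30.0, 10.0)), ("dog", (25.0, 12.0)), ("dog", (35.0, 11.0)),
    ("cat", (4.0, 6.0)), ("cat", (5.0, 5.5)), ("cat", (4.5, 6.5)),
]

model = train(training_data)
print(predict(model, (28.0, 11.0)))  # prints "dog": closest to the dog pattern
print(predict(model, (4.2, 6.0)))    # prints "cat": closest to the cat pattern
```

Note that the model never "knows" what a dog or cat is; it only measures how close a new example sits to the patterns in its training data, which is exactly why corrupted or biased training data corrupts its answers.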
However, even when this training data is biased or outright incorrect, most AI programs still process it as though it were correct. Modern artificial intelligence has yet to reach a point where it can reliably determine when information is false. Instead, it blends false information in with the correct data, corrupting what the model has learned so that it may no longer produce trustworthy results.
This becomes more problematic as AI itself generates more content. If a model produces an incorrect response, that response can find its way back into training data. A generative AI program may then treat this information as true, leading to further errors.
Due to this significant problem, many AI bots have a fail-safe system of sorts. If it has incomplete training data or conflicting information, it will often produce a message similar to, “As an AI Language Model, I cannot accurately say...” This becomes more common when input is vague; the AI needs to pull from a larger set of data and becomes vulnerable to information that may not meet the required pattern to produce an accurate response.
This is why almost every tool, including ChatGPT and Google Gemini, displays a disclaimer on each page explaining that any information produced could be misleading, incorrectly phrased, or outright false.
This is one reason why it is essential to practice caution when using these tools. AI technologies are here to stay, and they will continue evolving. While we may one day reach a point where even free AI tools can identify falsified data, that day is not here yet.
While modern generative AI systems can write a logical sentence that flows from one point to the next, remember that they don't actually think. Rather than reasoning and making informed decisions with no human involvement, modern AI uses a generative model: it simply predicts the next word in a sentence, continuing the flow of the thought based on the data it has been trained on.
If it generates an incorrect word or statement, it rarely corrects itself. Instead, it tends to justify its own claims with made-up facts or fabricate things that simply do not exist. This can lead to false information being delivered in confident language, and for a person unfamiliar with AI-generated text, it can be quite convincing.
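The "predict the next word" loop can be illustrated with a deliberately tiny model. The sketch below uses a bigram model over a made-up training sentence; real large language models use neural networks over vastly more data, but the core loop is the same: predict the next token, append it, repeat. Notice that the model has no notion of truth, only of what usually comes next.

```python
# Toy sketch of text generation as next-word prediction: a bigram model
# that always emits the word most often seen after the current one.

from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, start, length):
    """Greedily emit the most likely next word, one word at a time."""
    output = [start]
    for _ in range(length):
        current = output[-1]
        if current not in counts:
            break  # no training data for this word; the model is stuck
        output.append(counts[current].most_common(1)[0][0])
    return " ".join(output)

# Made-up training corpus for illustration.
corpus = "the library is open the library is busy the library is open"
model = train_bigrams(corpus)
print(generate(model, "the", 4))  # prints "the library is open the"
```

Because "open" follows "is" more often than "busy" does in the corpus, the model always picks "open" — a fluent continuation, but one based purely on frequency, not on whether the library is actually open.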
Consider the case of Mata v. Avianca, where a legal team used ChatGPT to find, source and quote relevant legal cases. Rather than convey that the requested information was not available or provide a reasonable alternative, the bot created fake examples; names, dates and the cases themselves were completely falsified.
The legal team did not verify the existence of said cases, creating significant problems in the courtroom. The error was simple but severe; the lawyers involved believed that ChatGPT was an advanced search engine that could provide real-time research results, rather than a computer program that predicts the next word to create coherent sentences without verifying any information.
This case was not the only time AI has run into significant problems; AI is also subject to extreme bias depending on the data it has been trained on. A notable example is Microsoft’s Tay bot in 2016. The company released a prototype chatbot on Twitter with the goal of learning how people spoke, acted and thought online so it could replicate those patterns.
However, Tay immediately became exposed to racist rhetoric, hate speech and more. The Tay bot began parroting these words, and less than 24 hours later, Microsoft shut it down.
An AI bot like Tay is trained on an extensive set of data from which it derives sentence patterns and critical information; when this data set is corrupted, the bot's responses and results will be as well.
This is why developing AI literacy is so crucial; by learning the skills needed to critically evaluate AI technologies and analyze the information they provide, we get a step closer to recognizing when something is amiss.
We spend a lot of time talking about media literacy and digital literacy, but AI literacy is growing equally essential. As AI algorithms become more intricate and sophisticated, ethical considerations become more important, which means AI literacy will only grow in value.
AI has already found its way into almost every industry worldwide, and the modern library is no exception. AI offers a way for librarians at public, academic and research libraries to streamline services in a way that benefits both themselves and library users. These bots can be used for more than just conversation; AI offers a way to benefit patrons in previously unimagined ways.
1. Improve searching capabilities by simplifying data accessibility. A person can use AI to quickly search through library resources, and the program can compile all relevant information in a simple, easy-to-process manner.
2. Interact with patrons, analyze user feedback and provide ideas regarding how a library can improve resource management and patron experiences.
3. Provide personalized recommendations to each patron to help them experience new authors, books, magazines, websites and more.
4. Automate repetitive tasks, such as organizing and processing vast collections of data.
5. Offer translation services to make content more accessible, no matter what language a person speaks.
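As a concrete sketch of item 3 above, personalized recommendations can start with something as simple as tag overlap. Everything below is hypothetical — the catalog, titles and tags are invented for illustration, and production systems use far richer signals — but the principle is the same: find patterns in what a patron has borrowed, then surface similar items.

```python
# Hypothetical sketch of content-based recommendations: rank unborrowed
# titles by how many descriptive tags they share with a patron's history.

def recommend(catalog, borrowed_titles, top_n=2):
    """Return up to top_n unborrowed titles, highest tag overlap first."""
    liked_tags = set()
    for title in borrowed_titles:
        liked_tags |= catalog[title]
    scores = {
        title: len(tags & liked_tags)
        for title, tags in catalog.items()
        if title not in borrowed_titles
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Made-up catalog mapping titles to descriptive tags.
catalog = {
    "Dragon's Oath": {"fantasy", "adventure", "series"},
    "Starlight Atlas": {"science", "reference"},
    "The Ember Crown": {"fantasy", "adventure"},
    "Quiet Harbors": {"literary", "drama"},
}

picks = recommend(catalog, ["Dragon's Oath"])
print(picks[0])  # "The Ember Crown" ranks first: it shares the most tags
```

A patron who borrowed a fantasy adventure gets the other fantasy adventure first; a real system would weight recency, popularity and patron feedback on top of this.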
Whether a patron is an avid reader looking for the next bestselling fantasy novel or a college student trying to research a new field of study, artificial intelligence can likely help. This can all lead to an overall improvement in efficiency, allowing libraries to dedicate more time to specialized tasks that require a more human touch.
AI literacy is going to become an even more crucial skill in the coming years. The potential of these tools is almost endless, which is why it is essential to consider all the ethical implications of their use. By learning more about AI tools and how these bots work, the modern librarian can incorporate these emerging technologies into everyday operations, improving the patron experience across the board.