In this report I am going to investigate the development of Artificial Intelligence (AI) and its relevance to those working in library and information science (LIS). I will then consider how chatterbots (one example of AI technology) could be deployed in libraries to enhance the user experience while also considering the moral and ethical questions posed by their introduction.
I will begin with a discussion of what we mean by AI and provide a brief history of developments in the field up to this point in time. I will then discuss how AI has entered mainstream consciousness and how it is increasingly part of our everyday lives. I intend to draw on the insights of key figures in contemporary LIS such as Michael Gorman and John Palfrey as well as the noted philosopher Luciano Floridi.
What is Artificial Intelligence and how has it developed?
Artificial Intelligence is a broad term that encompasses many ways of thinking about the human mind and, therefore, the many ways in which it could be replicated by specially designed machines. Margaret A. Boden, Research Professor of Cognitive Science at the University of Sussex, observes that “intelligence isn’t a single dimension but a richly structured space of diverse information-processing capacities”. Even with this more nuanced interpretation of intelligence, we are still held back by our limited knowledge, and many commentators are therefore sceptical that we could ever create a truly intelligent machine. Luciano Floridi argues that “we have no idea how we might begin to engineer (AI), not least because we have very little understanding of how our own brains and intelligence work.” However, there is some consensus that advances in current technology have brought us closer than ever to a time when machines could, if not truly think independently, perform complex tasks so efficiently that we need to re-evaluate our relationship with them.
The German-born scientist Tom Stonier went so far as to say that “the emergence of machine intelligence during the second half of the twentieth century is the most important development in the evolution of this planet since the origin of life two to three thousand million years ago.” He believed a realistic goal for AI would involve “an ability of a system to analyse its environment and then to make an intelligent response.” This seems a more reasonable target to aim for, although still a difficult one to achieve.
Perhaps the key figure in the history of computing and AI is the British scientist Alan Turing, well known to laypeople for his codebreaking work at Bletchley Park during the Second World War. The so-called Turing test attempts to establish whether a machine can behave in a way that is indistinguishable from a human.
It relies on what Turing called the “Imitation Game” (also the title of Morten Tyldum’s 2014 biopic about Turing starring Benedict Cumberbatch), in which two players are kept out of sight of a third and can communicate only by writing notes. One of the first two participants is then replaced by a computer, and the third player must decide which of the two unseen players is the human and which is the machine. Many variations of this test have been devised over the years, and there is no agreement that any of them have been successful (i.e. that the computer succeeded in convincing humans that it was in fact one of them). Luciano Floridi judged one of these competitions, the Loebner Prize, in 2008. He is dubious about the merits of the Turing test, as it assesses only one ability of a computer (to participate in a coherent conversation), and describes it as “a necessary but insufficient condition for a form of intelligence”. Floridi makes the valid point that even the most advanced machines are caught out by basic semantics again and again, and that we are therefore not remotely close to genuine AI.
The term “robot”, still synonymous with AI for many people, was coined in the 1920s by the Czech writer and artist Josef Čapek and popularised by his brother Karel in the play R.U.R., but the idea of creating a mechanical device capable of performing human tasks was not a new one. Anthropomorphic automata corresponding to the modern idea of robots appear in Greek mythology as creations of the god Hephaestus, and in Jewish folklore as the Golem of Prague, a being formed of clay that protected the community from pogroms. Leonardo da Vinci, among his many other inventions, designed a mechanical knight at the end of the 15th century. In more recent times, scientists and inventors have moved away from the idea that AI strictly needs a humanoid form, though this has not stopped the robot becoming an enduring fascination in both science and popular culture, the latter sometimes even informing the former. Indeed, Robin Murphy and David D. Woods describe Isaac Asimov’s three laws of robotics as having “been so successfully inculcated into the public consciousness through entertainment that they now appear to shape society’s expectations about how robots should act around humans.”
The mathematicians Ada Lovelace and Charles Babbage began work in 1837 on an early computer which they called the Analytical Engine. They intended the machine to be able to run any algorithm, which would have made it what is known today as Turing-complete. Although they were never able to build a complete version, their work laid the foundations for later developments, as computers are a prerequisite for AI.
The second half of the twentieth century saw much faster advances in AI. Marvin Minsky and John McCarthy founded the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology in 1959. McCarthy was in fact the first person to use the term “artificial intelligence” and created the Lisp programming language, which underpinned much early AI research, including the influential Lisp machines. The academic backing of AI by M.I.T.
can be seen as legitimising it as a field of scientific enquiry. The famous chess matches between the legendary Russian grandmaster Garry Kasparov and IBM’s Deep Blue machine in the 1990s garnered huge media coverage and raised public awareness of the rapid developments in AI. Kasparov’s 1997 defeat in New York was a huge surprise at the time and raised questions about human identity and our relationship with machines. The philosopher Luciano Floridi talks about how “questions about our personal identities, self-conceptions, and social selves are, of course, as old as the philosophical question ‘who am I?’” We might expect people to re-assess their place in the world when the player many consider the greatest of all time is unable to defeat a computer, albeit one programmed by humans to perform a single task. However, because only a finite number of moves is possible at any stage of a chess game, Deep Blue could simply calculate the optimum move more quickly than a human can, and this does not make it in any way intelligent.
How is AI affecting our lives?
AI is certainly something that has seized the popular imagination, and I think the ways in which we have engaged with it in books, film and television have hugely shaped our reactions to real-world developments. Isaac Asimov’s “Robot” series has proved hugely influential since he began writing the stories in 1939, and they have been adapted by Hollywood several times, notably as “I, Robot” starring Will Smith in 2004. The film, although only loosely based on Asimov’s work, uses the Three Laws of Robotics from the books: at its outset, robots are forbidden to harm humans either directly or indirectly. This relatively benign view of robots is in sharp contrast to James Cameron’s “Terminator” franchise, which depicts a future in which a malign AI network called Skynet has taken over the world with a force of evil cyborgs. The idea that humanity could come under threat from intelligent machines has gained ground in recent years with books like Nick Bostrom’s “Superintelligence”, which argues that our brainpower is our main distinguishing feature and that advances in AI might profoundly change our relationship with ourselves. While the book was a best-seller, some reviewers were unconvinced: Zachary Miller, writing in Yale Scientific Magazine, thought it “replete with far-fetched speculation”. This vague sense that AI could pose a threat to human existence can be seen in recent media coverage. We have seen stories in the news about the e-commerce giant Amazon using drones to make its deliveries (albeit only to a very select few customers) in a scheme it calls Amazon Prime Air. I think it is no coincidence that the journalists reporting the story have chosen the term “drone” for the robots being used; they are deliberately drawing comparisons with the unmanned, remotely controlled aircraft that have been controversially used in military bombing raids.
Luciano Floridi and Mariarosaria Taddeo talk about how “in the case of robotic weapons, it is becoming increasingly unclear who, or what, is accountable and responsible for the actions performed by complex, hybrid, man-machine systems on the battlefield”. Is this comparison in any way justified? While this could be a case of sensationalist reporting depicting machinery as inherently sinister (with echoes of the Luddites in the early nineteenth century), concerns certainly have been raised about the fact that these drones are always connected to the internet, and we do not know what data Amazon will gather from their cameras. I think there are similarities in terms of accountability here, even if Amazon’s use of the technology is relatively innocuous. We do not know which individual person is actually making the decisions, and it is easy for whoever it is to hide behind a much larger institution, whether a corporation or an arm of the military.
What does seem to be a given, even to the non-specialist, is that AI is going to play an increasingly important part in all our lives. We live in a world where internet search engines are often the first port of call for information, and information behaviour seems to be changing rapidly. Tom Stonier’s 1992 prediction that “increasingly human beings will rely on the equivalent of a global brain to do some of the preliminary thinking for them” seems all too relevant now that we are entering the era of the Semantic Web and linked data.
Another, more general, issue worth addressing is the huge investment large multinational corporations have made in developing AI and the influence they may have on the way it evolves. In early 2014, Google paid over $500 million for the British company DeepMind, which specialises in so-called “deep learning”. What DeepMind is actually trying to achieve is far from clear, but it has set itself the “ultimate goal of solving intelligence” and has assembled a team of experts in order to do so. We should perhaps be concerned that such an ambitious project, which could have profound implications for our understanding of human existence, may be used for commercial purposes. Likewise, Facebook has invested heavily in its Applied Machine Learning team, which has developed its FBLearner Flow program. The aim of this software is to “help define what content appears in your News Feed and what advertisements you see”. While a better understanding of its customers will doubtless prove lucrative for Facebook, we could again wonder whether this technology might better serve humanity in other ways, and many commentators are wary of the influence these big corporations have on those seeking information. John Palfrey, chairman of the Digital Public Library of America (DPLA), argues that “the risk of a small number of technically savvy, for-profit companies determining the bulk of what we read and how we read it is enormous” and identifies public libraries as a reliable and truly independent source of information not motivated by financial gain. Following the 2016 U.S. Presidential Election, Facebook took action to prevent misleading and inaccurate news stories (“fake news”) appearing in its News Feed, amid widespread public worry that the prevalence of such articles could be undermining democracy. Democratic values, on the other hand, are something that libraries and librarians have long pledged to uphold.
Andrew Carnegie, the wealthy industrialist and philanthropist who founded over 2,500 public libraries around the world between 1883 and 1929, argued that “free libraries, maintained by the people, are cradles of a democracy and their spread can never fail to extend and strengthen the democratic idea, the equality of the citizen, the royalty of the man”. These values have often influenced the ways librarians think about their work, as we shall see in a moment.
Michael Gorman, a former President of the American Library Association (ALA), has spoken of an ideal library as one that “uses current technology” to provide readers access to the materials they need. I have found his 2000 book “Our Enduring Values” to be helpful in applying a philosophical framework to LIS. The core values of librarianship he identifies are the following:
- Stewardship
- Service
- Intellectual freedom
- Rationalism
- Literacy and learning
- Equity of access
- Privacy
- Democracy
I will now consider how these values may correspond to the use of chatterbot technology. Although they represent a traditionalist mindset, I believe they can still inform our work as LIS professionals, as they address both why people use libraries and why people work as librarians. In adopting any new technology, librarians need to consider whether it will further the ethos of the library: a place where users have free and fair access to the information they require in a safe and inclusive environment. While the use of chatterbots may open up new opportunities for libraries, we should be careful that they do not place our best existing qualities in jeopardy.
How could chatterbots be used in the library?
Now that we have some idea of what AI is and its place in wider society, it is time to consider what a chatterbot is and what it does. Many of us use microblogging platforms such as Twitter, where bots (programs that aim to imitate a human account) are increasingly ubiquitous, and the concept is a similar one. Put simply, a chatterbot is a computer program designed to emulate a conversation with a human: ask it a question and it should give a coherent and appropriate answer. Well-known examples encountered in everyday life include Apple’s Siri and Microsoft’s Cortana, which act as virtual personal assistants. Through voice recognition software, they are designed to answer all kinds of queries by drawing upon information found on the internet. In a library, chatterbots could promote equity of access, as their voice-activated interfaces could help people with mobility difficulties or visual impairments; this would also be evidence of a service ethos. Library users may also feel less awkward posing potentially difficult questions to a bot, especially those of a political or religious nature, thus fostering an atmosphere of intellectual freedom in the library. This would raise issues of data protection, however, as the questions users asked the bot would be recorded somewhere, and we would need to consider who might have access to them. Much would depend on whether the bot software was developed in-house by the library or outsourced to a third party. I think the former would be far preferable, as librarians are trained to consider the privacy and freedom of their users, as we can see from Gorman’s core values above. John Palfrey talks about how “librarians may be society’s most effective privacy advocates”, and this is an area in which they can contribute to the debate.
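To make the concept concrete: at its simplest, a chatterbot is little more than pattern matching over user input, with a canned response for each recognised pattern. The following Python sketch is purely illustrative (the rules and responses are invented for this example, and real systems such as Siri or Cortana layer speech recognition and large knowledge bases on top of this basic idea):

```python
import re

# Each rule pairs a regular expression with a canned response.
# These example rules are hypothetical, for illustration only.
RULES = [
    (re.compile(r"\b(hello|hi)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\bopening hours\b", re.I),
     "The library is open 9am-5pm, Monday to Saturday."),
    (re.compile(r"\b(borrow|loan)\b", re.I),
     "You can borrow up to 10 items with a valid library card."),
]

def reply(message: str) -> str:
    """Return the first matching canned response, or a polite fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "Sorry, I didn't understand. Could you rephrase that?"
```

A user typing “Hi there” would receive the greeting, while an unrecognised question falls through to the fallback; it is precisely this brittleness in the face of unanticipated phrasing that underlies Floridi’s scepticism about machine semantics.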
Chatterbots may also encourage young people to participate more in the life of the library. DeeAnn Allison of the University of Nebraska argues that “they engage users with a playful interface that is familiar to a generation that grew up with online games”. Again, this would show libraries to be progressive institutions that are thinking about the needs of their users. Allison also stresses the interactive nature of these bots, saying that they would be attractive to those who want “something livelier than a search engine”. The fact that these bots are still slightly out of the ordinary might encourage people to engage with library services rather than relying solely on the likes of Google. Palfrey thinks that libraries need to be implanted with a “spirit of innovation that will lead to successful and positive reinvention.” Using chatterbots considerately and sensitively would be one way to achieve this.
Surely, however, all this engagement should be the work of a human librarian and not a bot? Yet a qualified librarian is not always on hand, particularly in smaller public libraries that have seen their staffing budgets slashed in recent years. Such places may now be staffed entirely by volunteers, and a well-programmed chatterbot could be something of a substitute (making what Gorman would describe as a rational use of resources). Users could ask it whether a particular book was available for loan and where on the shelves it was located. At some indefinite point in the future, chatterbots could even be used for book recommendations, though this would rely both on the bot using linked data and on the sort of semantic errors that irked Floridi somehow being reduced; I suspect we are nowhere near the level of technological sophistication needed to provide that service. Given what could reasonably be delivered at present, what would the benefits to library users be? Firstly, more queries could be answered without the presence of a librarian, which is useful at a time when there are simply fewer librarians available. Opening hours at many public libraries have been reduced, and chatterbots would let users interact with the library whenever they needed or wanted to, given that the bot could be accessed through an app on a mobile device. A chatterbot could also inform users about services offered by the library, perhaps ones many were unaware of, making people more likely to return in future and acting as a virtual ambassador. However, there are technological pitfalls to consider here. Microsoft’s Tay chatterbot was recently removed from service after it used offensive language it had learned from its users.
We are still at a relatively early stage in the development of machine learning, and a bot cannot yet be made context-sensitive enough to eliminate the risk of it repeating inappropriate language picked up from its users. The very public nature of a library makes this a real danger: people with substance abuse and mental health issues are regular users of many public libraries precisely because they are open to everyone. A Tay-like incident could, ironically, cause great damage to a library’s reputation as a safe and inclusive place. My own conclusion is that the potential benefits of chatterbots outweigh these risks, particularly as they could exemplify some of the best practice found in libraries, promoting the values of equity of access, intellectual freedom and service.
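The kind of availability query discussed above (is a book on the shelf, and where?) is well within reach of current technology, since it needs no semantic understanding at all, only a lookup against the catalogue. The Python sketch below illustrates this with a purely hypothetical in-memory catalogue; a real deployment would instead query the library management system:

```python
# Hypothetical in-memory catalogue for illustration only:
# title -> (shelf location, copies currently available).
CATALOGUE = {
    "our enduring values": ("shelf 020.1 GOR", 2),
    "superintelligence": ("shelf 006.3 BOS", 0),
}

def availability(title: str) -> str:
    """Answer 'is this book available, and where is it?' from the catalogue."""
    record = CATALOGUE.get(title.strip().lower())
    if record is None:
        return f"I'm sorry, I can't find '{title}' in our catalogue."
    location, copies = record
    if copies == 0:
        return f"'{title}' is in our catalogue but all copies are currently on loan."
    return f"Yes, '{title}' is available ({copies} copies) at {location}."
```

Because the bot only ever reports what the catalogue states, the reputational risks of a Tay-style incident do not arise for this class of query, which is one reason it seems the most realistic near-term library application.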
 Boden, M.A. (2016). AI: its nature and future, Oxford University Press, Oxford, p.1
 Aeon. (2017). True AI is both logically possible and utterly implausible – Luciano Floridi | Aeon Essays. Available at: https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible [Accessed 4 Jan. 2017].
Stonier, T. (1992). Beyond information: the natural history of intelligence, Springer-Verlag, New York, p.1
 Stonier, T. (1992), p.15
 Aeon. (2017). True AI is both logically possible and utterly implausible – Luciano Floridi | Aeon Essays. Available at: https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible [Accessed 4 Jan. 2017].
 Murphy, R.R. & Woods, D.D. (2009). “Beyond Asimov: The Three Laws of Responsible Robotics”, IEEE Intelligent Systems, 24 (4), p. 14-20.
 Aitopics.org. (2017). Brief History | AITopics. Available at: http://aitopics.org/misc/brief-history [Accessed 6 Jan. 2017].
 ibm.com. (2017). IBM100 – Deep Blue. Available at: http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue [Accessed 6 Jan. 2017].
 Floridi, L. (2014). The 4th revolution: how the infosphere is reshaping human reality, Oxford University Press, Oxford, p.65
 Miller, Z. (2015). Book Review: Superintelligence — Paths, Dangers, Strategies. Yale Scientific Magazine. Available at: http://www.yalescientific.org/2015/01/book-review-superintelligence-paths-dangers-strategies [Accessed 4 Jan. 2017].
 Hern, A. (2016). Amazon claims first successful Prime Air drone delivery. The Guardian. Available at: https://www.theguardian.com/technology/2016/dec/14/amazon-claims-first-successful-prime-air-drone-delivery [Accessed 4 Jan. 2017].
 Floridi, L. & Taddeo, M. (2014). The Ethics of Information Warfare, Springer International Publishing, Cham, p. vii
 Singer, P. (2013). The Predator Comes Home: A Primer on Domestic Drones, their Huge Business Opportunities, and their Deep Political, Moral, and Legal Challenges | Brookings Institution. Available at: https://www.brookings.edu/research/the-predator-comes-home-a-primer-on-domestic-drones-their-huge-business-opportunities-and-their-deep-political-moral-and-legal-challenges/ [Accessed 7 Jan. 2017].
 Stonier, T. (1992), p.104
 Shu, C. (2014). Google Acquires Artificial Intelligence Startup DeepMind For More Than $500M. TechCrunch. Available at: https://techcrunch.com/2014/01/26/google-deepmind/ [Accessed 6 Jan. 2017].
 DeepMind. (2017). Research | DeepMind. Available at: https://deepmind.com/research/ [Accessed 6 Jan. 2017].
 Fortune. (2017). Inside Facebook’s Biggest Artificial Intelligence Project Ever. Available at: http://fortune.com/facebook-machine-learning/ [Accessed 6 Jan. 2017].
 Palfrey, J. (2013). Bibliotech, Basic Civitas Books, New York, p.90
 Isaac, M. (2016). How Facebook’s Fact-Checking Partnership Will Work. nytimes.com. Available at: http://www.nytimes.com/2016/12/15/technology/facebook-fact-checking-fake-news.html [Accessed 6 Jan. 2017].
 Andrew Carnegie in a speech at the opening of the Carnegie Library in Washington D.C. in 1903, reported in the New York Times
 Gorman, M. (2000). Our enduring values: librarianship in the 21st century, American Library Association, Chicago, p.9
 Dubbin, R. (2017). The Rise of Twitter Bots. The New Yorker. Available at: http://www.newyorker.com/tech/elements/the-rise-of-twitter-bots [Accessed 4 Jan. 2017].
 Palfrey, J. (2013), p. 202
 Allison, D. (2012). “Chatbots in the library: is it time?”, Library Hi Tech, 30 (1), p. 95-107
 Allison, D. (2012), p.95-107
 Palfrey, J. (2013), p. 128
 Kean, D. (2016). UK library budgets fall by £25m in a year. The Guardian. Available at: https://www.theguardian.com/books/2016/dec/08/uk-library-budgets-fall-by-25m-in-a-year [Accessed 7 Jan. 2017].
 Associated Press. (2017). Microsoft kills ‘inappropriate’ AI chatbot that learned too much online. latimes.com. Available at: http://www.latimes.com/business/la-fi-0325-microsoft-chatbot-20160326-story.html [Accessed 4 Jan. 2017].