Ethics, Morality and Information Science


***This essay was written by CityLIS student Catherine Jenkins in January 2019. It is reproduced here with the author’s permission as part of our CityLIS Writes initiative.***

“The internet is just a tool. It’s just technology, it’s just tools, we will always use them for the good or the bad. But we’ve been being asked, for a couple of decades now, to flatten ourselves out, to be more surface, to act on the surface of the screen as it were. The word tells us exactly what it is, it’s screening us from something. It’s the same for words like net and web, why call it that if we didn’t think we were going to get caught in it. The temptation to flatten out everything else, to have tweets as politics, to make very complex things into simplistic tags, to not think dimensionally because it’s all flat surfaces, that means that cartoon people can come to power.”
(Smith, 2018)

On 17 October 2018, a panel of cyber security experts convened at Facebook’s London offices to discuss future directions for cyber security studies. Places in the audience were invitation-only and attendees were required to present ID upon arrival. The following day, attendees received this email from the event organiser’s Data Protection Officer:

Email received by attendees at the cyber security event. Screenshot from personal inbox.

An ‘unfortunate incident’ indeed, considering the panellists’ focus on ‘Intelligence with Integrity’ in cyber security studies – a principle also applicable to the field of information science. The two disciplines have much in common: both adapt methodologies from cognate areas of study to further develop their foci, both deal in data (with information scientists often cast in the role of gatekeepers, or security guards, for the same ‘givens’ that form the currency of cyber security studies), and both are particularly vulnerable to the data disasters they seek to teach others how to avoid.
In this essay, I will consider the ethical concerns raised by the increasing entanglement of humans in the technological ‘web’ described by Smith in the opening quotation. In particular, I will explore how information science is implicated in the deepening dependence of researchers on corner-cutting technologies and how the field should mobilise its students and practitioners to vet new commits to the scholarly communication technical infrastructure and signpost where reliance on research tools threatens research integrity. The rise of the ‘intelligent library’ (Cox et al., 2018) means that library and information science (LIS), more than ever, must set an ethical precedent for human interfaces with technology that goes further than the vague ‘beneficial purpose’ recommended in the latest iteration of guidelines for ‘Search Quality evaluators’ (Google, 2018). Ethics is already encoded in the language of information science: the term ‘grey literature’, while not intended to have moral connotations, nevertheless lends itself to forming a visual abstract to my argument that the ethics of data and IT applications in scholarship is a grey area. Like the ambiguous position of grey literature within the taxonomy of documentation, the relationship between technology and research practices is not one that can be classified simply in terms of the ‘good’ or ‘bad’ of Smith’s dichotomy – it colours outside the lines.
Examining our AI, IA world through the lens of the moral greyness of the use of technology for academic research flags the conflict between the satisficing of researchers skimming eTOCs on one hand, and the uptake of forensic text analysis and a Voyant-view over all the (big) data afforded by corpora-backed datasets on the other. The historical and ongoing development of a widely-used means of scholarly communication – the informative abstract – from author-submitted narrative text through to structured, machine-written and visual incarnations, embodies in microcosm these conflicting behavioural propensities.
My rationale for choosing the abstract as my case study stems from the ethical questions around this ostensibly humble appetizer before the meat of the research: namely, that the abstract is increasingly used (and misused) not as an accompaniment, but as a surrogate for the article or conference proceeding itself, with critical decisions being made based solely on its claims (Montesi & Owen, 2007; Hopewell et al., 2008). Although exploited as a test-bed for AI early on (see Luhn, 1958), the abstract and its permutations have attracted relatively little research, considering their ubiquity in scholarly communications. Most of what exists comes from James Hartley, a scholar whose own abstracts are largely meta-abstracts (1999, 2000, 2008, 2009, 2011, 2017); a rare exception, Zhang and Liu’s (2011) review of Hartley’s work on structured abstracts, concurs with his conclusions rather than challenging them. This is surprising, because the abstract’s evolution from peritext to the main event is fraught with ethical issues. The abstract by its nature does not tell the full story. It is a teaser to the detail of the research, and in unstructured cases may not even give away key “spoilers”. One example is a “shocking” study on obedience that

fails to mention that the shock generator (with which participants administered electronic shocks to learners for making errors) was, in fact, a simulator. It is not until one reads the Method section of the article that one finds that no actual shocks took place (Hartley & Betts, 2009, p. 2015).

Alongside rhetoric, the provenance and structure of abstracts render their effects far more concrete in ethical terms than the name would suggest. The typographical decoupling of this information container from its article – a broken link systemic in scientific publishing, where the abstract is available in the public domain but the article to which it refers is behind a paywall – is primed to propagate misinformation, rendering informative abstracts anything but. The tenet ‘Do no harm’ (see also Coiera, 2018: ‘First compute no harm’) becomes tenuous when decisions are made based on partial (in all senses of the word) abstracts that do harm (Redden, 2018). Witness

a doctor based in the global South who, based on reading only the abstract, altered their perinatal HIV prevention program from an effective therapy to one with lesser efficacy. Had they read the full text article they would have undoubtedly realized that the study results were based on short-term follow-up, a small pivotal group, incomplete data, and unlikely to be applicable to their country situation. Their decision to alter treatment based solely on the abstract’s conclusions may have resulted in increased perinatal HIV transmission. (The PLoS Medicine Editors, 2006)

This example shows the need for information literacy and education around how abstracts should be used, as well as the importance of open access for practitioners without the resources of Northern institutions. However, the issue is more often one of time than of access – even in cases where the full-text is open access, readers still choose to stop at the summary (Nicholas et al., 2007, p. 446).

Meta-research has previously compared traditional text-dense abstracts with the structured type recommended by the Ad Hoc Working Group for Critical Appraisal of the Medical Literature (1987) and the CONSORT (Consolidated Standards of Reporting Trials) statement (Hopewell et al., 2008). I would like to focus instead on the next generation of abstracts: those authored by AI.

Machine-written abstracts are not new. The technology has been available for many years, both in serious publications and as online apps to create spoof submissions to predatory journals. As early as the 1950s, it could already be written (in an abstract, no less) that ‘Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means’ (Luhn, 1958, p. 159). This feat was achieved thanks to an ‘IBM 704 data-processing machine’, which computes ‘a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the “auto-abstract”’ (ibid.).
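Luhn’s procedure is simple enough to sketch in a few lines of modern code. What follows is an illustrative reconstruction, not Luhn’s actual implementation: it treats raw word frequency as the measure of significance and scores each sentence by its density of significant words; the function name, stopword list and parameters are my own.

```python
import re
from collections import Counter

# A minimal stopword list; Luhn similarly discarded common words before counting.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "on", "is",
             "that", "for", "are", "by", "be", "it", "as", "with"}

def luhn_auto_abstract(text, num_sentences=2, top_words=2):
    """Extract the highest-scoring sentences to form an 'auto-abstract'."""
    # Split on whitespace that follows sentence-ending punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    # Significance of a word ~ its frequency in the document (minus stopwords).
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    significant = {w for w, _ in Counter(words).most_common(top_words)}

    def score(sentence):
        tokens = re.findall(r"[a-z]+", sentence.lower())
        hits = sum(1 for t in tokens if t in significant)
        return hits / len(tokens) if tokens else 0.0  # density of significant words

    # Keep the top-scoring sentences, re-emitted in their original order.
    ranked = sorted(sentences, key=score, reverse=True)[:num_sentences]
    return " ".join(s for s in sentences if s in ranked)

text = ("Automatic abstracts save time. Automatic abstracts summarise research papers. "
        "Readers rely on abstracts daily. Cats nap quietly somewhere else entirely.")
print(luhn_auto_abstract(text))
# → "Automatic abstracts save time. Automatic abstracts summarise research papers."
```

Even this toy version makes the ethical point concrete: the ‘auto-abstract’ can only ever recycle sentences the authors already chose to write, so any spin or omission in the source text passes straight through to the summary.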

What is new, however, is researchers’ depth of dependence on machine-written abstracts harvested from big data. Leveraging technology to combat publication bias in fact threatens to skew research workflows away from the systematic review and towards the slapdash. Widespread adoption of AI runs the risk that future systematic reviews may consist of re-runs of misinformed decision-making perpetuated by the use of AI to compile the systematic review – an irony that LIS practitioners should remain cognisant of when advising on literature-searching methodologies. As I noted at the outset, the ethics of machine-written abstracts are not black or white. AI occupies a sliding scale along the ethical spectrum, encapsulated in its combined effects of enhancing the penetration into and consumption of proliferating research while compromising the validity of that same research through short-circuiting evidence-gathering procedures. Even AI-authored abstracts are prone to misreporting negative results, because technology can only work with what it is given by humans, and raw ‘givens’ are an oxymoron (Gitelman, 2013). It is not technology which is ethically grey, but the relationship between technology and human application of it. This is where the stewardship of LIS professionals is needed.

Researchers’ two-fold reliance on the abstract – on technology to produce summaries, and on those summaries for critical clinical decisions – endows it, and LIS, with a great responsibility. Machine-written abstracts can widen the audience for research through the ease with which they can be adapted to accessible formats. But AI tools trained to ‘speak human’ by design fail in their use-case of responding to Hartley & Betts’s call for ‘further research [to] be conducted so as to determine whether the abstract content is an accurate reflection of the content of the published paper’ (2009, p. 2015). AI abstracting tools are trained on human-constructed datasets and act in accordance with human-engineered algorithms. Until the recent trend towards ‘understanding concepts, not just matching words’ (Conrad, 2018), the methodology underlying machine summarisation remained essentially the same as in the fifties. Luhn (1958) optimistically wrote that ‘The application of machine methods to literature searching […] indicates that both human effort and bias may be eliminated from the abstracting process.’ He was wrong on both counts. AI tools are no more objective than humans, and AI’s overall benefit to the question of how to engineer unbiased, accurate abstracts from databases of papers is therefore doubtful. For LIS staff, AI could actually demand more effort in weeding out what one commentator has called ‘robomatically generated stuff’ (Cox et al., 2018).

Prior to AI-powered tools ‘chunking’ research into ‘digestible summary cards’ that allow researchers to ‘Read less. Learn more’ ( 2018), author-submitted abstracts that failed to abide by the ethical principles of the Declaration of Helsinki regarding ‘the completeness and accuracy of their reports’ (WMA 2018, §36) were identified and revised by staff at abstracting and indexing services (Montesi & Owen, 2007). These human expert interventions were subjective, but at least prevented the occurrence of such mangled examples as the abstract for ‘Deta Antboi Prophlaxis What to Dog Wit Curen Reomnain’ (translation: ‘Dental Antibiotic Prophylaxis: What to do with Current Recommendations’). While now corrected on PubMed (without a correction notice), the authors are still dogged by the doggerel version:

Screenshot from Google search using the PMID of the paper, 29 November 2018.

Screenshot of the abstract as it appeared on ResearchGate after being corrected on PubMed, 29 November 2018. The text here shows no evidence of correction to the poor optical character recognition (Couch, 2017).

With the exception of such obvious failures, we can no longer be certain whether a piece of text was produced by a human or a trained machine: ‘We do not know whether or not we have read any computer-generated abstracts’ (Hartley & Cabanac, 2017, p. 5). If even the specialists in the field cannot tell the difference, what hope is there for time-poor doctors who, when searching Medline, ‘rely on abstracts to make decisions which often affect patient care’ (Montesi & Owen, 2007, p. 27); or lay readers, who may redefine their search strategies based on the authority of the abstract? The repercussions are far graver than the frivolity of quizzes pitching human-authored novel extracts against AI-authored ones would suggest (see Podolny, 2015). The ambiguity should, and does, disturb LIS practitioners:

The idea that you might simply not know how a particular piece of content was found because a neural network recognised that it was in some ways similar to some other piece of content, that is quite a profound thing. You can’t really ask it and get an intelligible answer back all you can get is something that says hmm… it is kind of 85% similar to this one but you know take my word for it. [sic] (Cox et al., 2018)

This black-box scenario is troubling in the context of AI abstract/ing (the product, and the process; Pinto 2006). How much do we really know about the AI tools currently operating in the market? What are the ethical dimensions of Digital Science’s Dimensions? To what extent can technology highlighting open access papers, or which is itself open-source, be considered unethical? A great extent, it turns out:’s reach across ‘more than 70M Open Access papers’ (, 2018) is best utilised in the premium version, which allows more granular searching to the tune of twenty thousand euros per year (Extance, 2018) – a price-tag hardly conducive to ‘accelerat[ing] researchers’ entry into new fields’ (ibid.). And Scholarcy, although it supports replicability by providing researchers with the raw data ‘so you can run your own calculations on the results’ (, 2018), uses a proprietary Robo-Highlighter™: ‘a magic, virtual marker pen that somehow [my emphasis] knew what needed to be highlighted’ (Gooch, 2018a). While touted as a research solution to surface and prioritise pertinent insights, AI may actually exacerbate the ‘inaccuracies and distortions’ that compromise human-written abstracts (Hartley & Betts, 2009, p. 2015).

The real ethical quandary is thus not in technology-in-isolation, but at the intersection of technology with humans – particularly in the context of premature application of AI models:

Screenshot from personal Twitter feed, 7 November 2018 (Goldacre, 2018).

Tools which apparently work by ingesting and understanding full-text ‘the same way a human does. It then extracts the context, and “understands” its meaning in the form of a concept, in a totally unbiased way’ (Bolina, 2018), are just as vulnerable to being persuaded by p-hacking as humans are (and, of course, just as biased). They alter the research workflow extensively by applying the acronym-approach beyond AI-the-concept to AI-the-process in ways detrimental to the scholarly record. Information professionals should question the marketing blurbs of such “solutions” that condone the saving of time at the possible expense of saving lives, especially in the light of further barriers to accessing ‘high-quality, nuanced, human-expert indexing’ thrown up by exclusive licensing deals (Ken Varnum, cited in Conrad, 2018).
A previous contribution of information science to the ‘abstract and abstracting problem’ has been the suggestion of ‘partial automation, a hybrid man-machine methodology’ (Pinto, 2003b, p. 405). I would like to submit visual abstracts, linked to raw datasets, as one answer. Created by humans but data-driven in aesthetics, a visual abstract distils findings into a shareable infographic. Recent drafts of universal emoji for coding these formats – for example, an image of a die for a randomised controlled trial, or a covered eye denoting the use of blinding (Silberberg et al., 2017) – promise to further optimise the reproducibility of research by making such abstracts searchable in databases like PubMed. This proposal aligns with Pinto’s in that the use of emoji could help machines to interpret and find visual abstracts at scale, resulting in a balance between efficiency and accuracy and a more ethical relationship between technology and humans. As a discussion on the academic library of the future has already noted,

The assumption that the end goal is for human eyeballs to look at something will, perhaps not disappear, but reduce. Students and academics [are] using machines in the middle to bring back information in a condensed way. […] That is going to have different implications on how you present the information, how you access it, how you licence it (Cox et al., 2018).

Visual abstracts enhance access for readers with disabilities, as communicating information via this means can offer a 2D, 3D (video or VR abstracts) or even 4D (haptic) rendering of the different facets of a study. The combination of technology and human creativity inherent in visual abstracts also serves to remind researchers how data can be manipulated. The barrier to entry for authors creating their own visual abstracts is low (Spicer, 2018; the ‘Scientist Videographer’ offers tutorials in McKee, 2018), and the additional room in a #VisualAbstract tweet for contextual notes – with a DOI linking to the (open) dataset, or details of how access to the original dataset may be obtained – makes this an ethical technology that could be of great benefit to academia. Ethical considerations still remain: the pool of professional graphical abstractors is a small one, and it is in the interest of leading journals to prioritise artists whose version of knowledge visualisation most closely resembles the editors’ own (subjective) conception of how data should be prettified for the high-impact papers selected for this honour. But these ethical minuses are cancelled out by the ethical pluses involved in putting the data at the forefront of readers’ engagement with research.

In encouraging the use of visual abstracts across disciplines and leading on their curation and archiving in repositories, LIS professionals could divert the purely-AI turn in machine-written abstracts unsupervised by the human eye towards a solution that merges the best of human creativity with the best of creative technology. If LIS professionals do not mobilise themselves to support this alternative, the (illogical) next step in the evolution of time-saving research consumption, as measured in ‘time-to-insight’ (Deloitte, 2014), could be the ‘one-word abstract’ (Hartley & Cabanac, 2017, p. 4). Examples have already been seriously published and take concision to extremes:

A slightly more expansive abstract related to the one-word type (Berry et al., 2011)

Hartley & Cabanac quip that ‘We would probably all enjoy a one-word abstract’ (2017, p. 4), but if this were to become an accepted part of the scholarly ecosystem it would dangerously dilute research integrity. Applying an abstract of this type to the question with which the present essay is concerned – ‘Is the use of machine-written abstracts in research ethical?’ – would restrict any exploration of ‘hybrid’ solutions like the visual abstract + data citation duo which, more than parsing undertaken by an AI bot, is a compelling point of entry for LIS into supporting scholarly communication with evidence-based decision support. So in answer to the academic librarian who asked ‘Do I instead focus in on licensing the very best machine learning […] And actually does the nature of content as something distinct from a machine service stop existing?’ (Cox et al., 2018) – another question which would be difficult to answer within the bounds imposed by a limited abstract length – I would advise not to exchange access to journals for premium access to AI tools just yet. Google’s Deep Dream can remix, but not create ab ovo (Rayner, 2016).
LIS practitioners are equipped to advise researchers on the best tools for taking a multidimensional view of their research without becoming overwhelmed by volume (Metzendorf & Featherstone, 2018). It is within the remit of information science to interrogate the ethics behind technologies – whether AI, or creative infographic software – and to make recommendations only upon evidence being presented (for example, a robust ethics statement in a tool’s GitHub Readme). LIS interventions could include the addition of visual abstracts to critical appraisal checklists, guidelines for the long-term archiving of this document type, the embedding of technology advisory services in LIS job descriptions and participation in relevant Grand Challenges like 4.3: ‘Designing and Governing Algorithms in the Scholarly Ecosystem to Support Accountability, Credibility, and Agency’ (Altman, 2018). Finally, LIS should be the vigilant voice in the outsourcing of research and information assessment to machines playing join-the-dots. Rather than allowing AI to rewrite the rules (and literally rewrite abstracts), now is the time for the human intermediaries of scholarly communication to write the rulebook for future research infrastructures themselves.
At the end of his study of abstract use by deep log analysis, Nicholas comments that one of the publisher participants ‘joked that given the evidence showing the key role of abstracts in today’s crowded information environment, maybe they should reverse their business model and give full-text away and charge for abstracts’ (2007, p. 453). This flipped business model has not happened yet. But if information science as a field does not act, we may see a situation where articles are routinely machine-written, and technology vendors or publishers with a share in slices of the research workflow engineer insights from engineered research in a closed loop. The ethical concerns of this would greatly outweigh the one ethical bonus of abstracts always being an accurate reflection of their papers. Keeping a consideration of ethics at the forefront of information science will be necessary because, as Nicholas concludes, ‘They [publishers] were only half-joking’ (ibid.).


Ad Hoc Working Group for Critical Appraisal of the Medical Literature (1987). A proposal for more informative abstracts of clinical articles. Annals of Internal Medicine [Online] 106:598–604. Available at: [Accessed: 13 December 2018].
Altman, M. (2018). A Grand-Challenges Based Research Agenda for Scholarly Communication and Information Science. [Online]. Available at: [Accessed: 20 November 2018].
(n.d.) Alexa Design Guide [Online]. Available at: [Accessed: 8 December 2018].
Berry, M.V., Brunner, N., Popescu, S. & Shukla, P. (2011). Can apparent superluminal neutrino speeds be explained as a quantum weak measurement? Journal of Physics A: Mathematical and Theoretical [Online] 44:492001. Available at:[Accessed: 23 November 2018].
Bolina, M. (2018). The importance of concepts. Research Information [Online]. Available at: [Accessed: 29 November 2018].
Conrad, L.Y. (2018). Exclusive Deals in Scholarly Discovery: How They Hurt Users and Pose Threats to Open Scholarship [Online]. Available at: [Accessed: 4 December 2018].
Couch, C.G., Mears, S.C., Edwards, P.K., Jines, W.G. & Barnes, C.L. (2018). Deta Antboi Prophlaxis What to Dog Wit Curen Reomnain. ResearchGate [Online]. Available at: [Accessed: 8 December 2018].
Couch, C.G., Mears, S.C., Edwards, P.K., Jines, W.G. & Barnes, C.L. (2017). Dental Antibiotic Prophylaxis: What to do with Current Recommendations. The Journal of the Arkansas Medical Society [Online] 113:259–261. Available at: [Accessed: 13 December 2018].
Cox, A.M., Pinfield, S. & Rutter, S. (2018). The intelligent library: Thought leaders’ views on the likely impact of artificial intelligence on academic libraries. Library Hi Tech [Online]. Available at: [Accessed: 25 November 2018].
Daston, L. II. The Sciences of the Archive | MPIWG [Online]. Available at: [Accessed: 8 December 2018].
Deloitte (2014). Intelligent Automation: A New Era of Innovation [Online]. Available at: [Accessed: 17 November 2018].
Extance, A. (2018). How AI technology can tame the scientific literature. Nature [Online] 561:273. Available at: [Accessed: 23 November 2018].
Gitelman, L. (2013). ‘Raw Data’ Is an Oxymoron. Cambridge, MA: MIT Press.
Goldacre, B. (2013). Bad Pharma: How medicine is broken, and how we can fix it. London: Fourth Estate.
Goldacre, B. (2018). @bengoldacre [Online]. Available at: [Accessed: 8 December 2018].
Gooch, P. (2018a). How I Built and Launched an AI Product for under $100 [Online]. Available at: [Accessed: 21 November 2018].
Google (2018). Search Quality Evaluator Guidelines [Online]. Available at: [Accessed: 9 December 2018].
Hartley, J. (1999). From Structured Abstracts to Structured Articles: A Modest Proposal [Online]. Available at: [Accessed: 17 November 2018].
Hartley, J. (2011). Making the Journal Abstract More Concrete. Journal of Scholarly Publishing [Online] 43:110–115. Available at: [Accessed: 17 November 2018].
Hartley, J. (2000). Typographic Settings for Structured Abstracts. Journal of Technical Writing and Communication [Online] 30:355–365. Available at: [Accessed: 17 November 2018].
Hartley, J. & Betts, L. (2009). Common weaknesses in traditional abstracts in the social sciences. Journal of the American Society for Information Science and Technology [Online] 60:2010–2018. Available at: [Accessed: 12 October 2018].
Hartley, J. & Betts, L. (2008). Revising and polishing a structured abstract: Is it worth the time and effort? Journal of the American Society for Information Science and Technology [Online] 59:1870–1877. Available at: [Accessed: 17 November 2018].
Hartley, J. & Cabanac, G. (2017). Thirteen Ways to Write an Abstract. Publications [Online] 5:11. Available at: [Accessed: 21 November 2018].
Herndon, T., Ash, M. & Pollin, R. (2013). PERI – Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff. [Online]. Available at: [Accessed: 2 December 2018].
Hopewell, S., Clarke, M., Moher, D., Wager, E., Middleton, P., Altman, D.G., Schulz, K.F. & The CONSORT Group (2008). CONSORT for Reporting Randomized Controlled Trials in Journal and Conference Abstracts: Explanation and Elaboration. PLOS Medicine [Online] 5:e20. Available at: [Accessed: 7 December 2018].
Neon Century Intelligence (2018). Corporate Intelligence for the Digital Age. [Online]. Available at: [Accessed: 8 December 2018].
Nicholas, D., Huntington, P. & Jamali, H.R. (2007). The Use, Users, and Role of Abstracts in the Digital Scholarly Environment. The Journal of Academic Librarianship [Online] 33:446–453.
International Organization for Standardization (2010). ISO 214:1976 Documentation – Abstracts for Publications and Documentation [Online]. Available at: [Accessed: 21 November 2018].
King, J. (2018). Simulated Annealing Considered Harmful [Online]. Available at: [Accessed: 8 December 2018].
Luhn, H.P. (1958). The Automatic Creation of Literature Abstracts. IBM Journal of Research and Development 2:159–165.
Metzendorf, M.-I. & Featherstone, R.M. (2018). Ensuring quality as the basis of evidence synthesis: leveraging information specialists’ knowledge, skills, and expertise. Cochrane Database of Systematic Reviews [Online]. Available at: [Accessed: 4 December 2018].
Montesi, M. & Owen, J.M. (2007). Revision of author abstracts: how it is carried out by LISA editors. Aslib Proceedings [Online] 59:26–45. Available at: [Accessed: 12 October 2018].
OED Online (2018). grey | gray, adj. and n. OED Online [Online]. Available at: [Accessed: 24 November 2018].
Pinto, M. (2006). A grounded theory on abstracts quality: Weighting variables and attributes. Scientometrics [Online] 69:213–226. Available at: [Accessed: 17 November 2018].
Pinto, M. (2003b). Engineering the Production of Meta-Information: The Abstracting Concern. Journal of Information Science [Online] 29:405–417. Available at: [Accessed: 17 November 2018].
The PLOS Medicine Editors. (2006). The Impact of Open Access upon Public Health. PLOS Medicine [Online] 3:e252. Available at: [Accessed: 8 December 2018].
Podolny, S. (2015). Opinion | Did a Human or a Computer Write This? The New York Times [Online]. Available at: [Accessed: 17 November 2018].
Rayner, A. (2016). Can Google’s Deep Dream become an art machine? The Guardian [Online]. Available at: [Accessed: 8 December 2018].
Redden, J. (2018). The Harm That Data Do. Scientific American [Online] 319. Available at: [Accessed: 25 October 2018].
Roose, K. (2013). Meet the 28-Year-Old Grad Student Who Just Shook the Global Austerity Movement. Intelligencer [Online]. Available at: [Accessed: 13 December 2018].
(2018). Scholarcy [Online]. Available at: [Accessed: 1 November 2018].
SCIgen (2018). SCIgen – An Automatic CS Paper Generator [Online]. Available at: [Accessed: 21 November 2018].
Silberberg, S.D., Crawford, D.C., Finkelstein, R., Koroshetz, W. J., Blank, R.D., Freeze, H.H., Garrison, H.H. & Seger, Y.R. (2017). Shake up conferences. Nature News [Online] 548:153. Available at: [Accessed: 22 November 2018].
Simmons University (2018). Jobline: School of Library and Information Science, Simmons University [Online]. Available at: [Accessed: 8 December 2018].
Smith, A. (2018). Ali Smith on the post-truth era: ‘There is still a light’. Interview with Anna James [Online]. Available at: [Accessed: 19 November 2018].
US-UK Fulbright Commission (2018). Cyber Security: The next 50 Years | US-UK Fulbright Commission [Online]. Available at: [Accessed: 8 December 2018].
Voutier, C. (2018). @CathVoutier [Online]. Available at: [Accessed: 29 November 2018].
Warburton, N. (n.d.). The Best Books on Philosophy of Information | Five Books Expert Recommendations [Online]. Available at: [Accessed: 25 November 2018].
White, H. (n.d.). AI and Evidence Synthesis: Opportunity or Existential Threat? [Online]. Available at: [Accessed: 8 December 2018].
WMA [World Medical Association] (2018). Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects [Online]. Available at: [Accessed: 14 December 2018].
Yao, J., Wan, X. & Xiao, J. (2017). Recent advances in document summarization. Knowledge and Information Systems [Online] 53:297–336. Available at: [Accessed: 17 November 2018].
Zhang, C. & Liu, X. (2011). Review of James Hartley’s research on structured abstracts: Journal of Information Science [Online] 37: 570–576. Available at: [Accessed: 12 October 2018].


Alasbali, T., Smith, M., Geffen, N., Trope, G.E., Flanagan, J.G., Jin, Y. & Buys, Y.M. (2009). Discrepancy between Results and Abstract Conclusions in Industry- vs Nonindustry-funded Studies Comparing Topical Prostaglandins. American Journal of Ophthalmology [Online] 147:33–38.e2. Available at: [Accessed: 14 December 2018].
Altwairgi, A.K., Booth, C.M., Hopman, W.H. & Baetz, T.D. (2012). Discordance between conclusions stated in the abstract and conclusions in the article: analysis of published randomized controlled trials of systemic therapy in lung cancer. Journal of Clinical Oncology [Online] 30:3552–3557. Available at: [Accessed: 13 December 2018].
Anderson, P.F., (2017). Visual Abstracts — Thoughts from a Medical Librarian. Emerging Technologies Librarian [Online]. Available at: [Accessed: 18 November 2018].
Armstrong, H.E. (1914). A move towards scientific socialism. Chemical World (March), 67–71.
ASIS&T (2018). ASIS&T 81st Annual Meeting Proceedings: Building an Ethical and Sustainable Information Future with Emerging Technologies (November 10-15, 2018 | Vancouver, Canada). Available at:
Auer, S. & Mann, S. (2018). Toward an Open Knowledge Research Graph. The Serials Librarian [Online] 0:1–7. Available at: [Accessed: 3 December 2018].
Aungst, T. (2017). Visual Abstracts Are Changing How We Share Studies. Op-Med [Online]. Available at: [Accessed: 18 November 2018].
Bellingcat (2018). Bellingcat’s Online Investigation Toolkit [Online]. Available at: [Accessed: 11 October 2018].
Bernal-Delgado, E. & Fisher, E.S. (2008). Abstracts in high profile journals often fail to report harm. BMC Medical Research Methodology [Online] 8:14. Available at: [Accessed: 13 December 2018].
Boutron, I., Dutton, S., Ravaud, P. & Altman, D.G. (2010). Reporting and Interpretation of Randomized Controlled Trials With Statistically Nonsignificant Results for Primary Outcomes. JAMA [Online] 303:2058–2064. Available at: [Accessed: 14 December 2018].
Buljan, I., Malički, M., Wager, E., Puljak, L., Hren, D., Kellie, F., West, H., Alfirević, Ž. & Marušić, A. (2018). No difference in knowledge obtained from infographic or plain language summary of a Cochrane systematic review: three randomized controlled trials. Journal of Clinical Epidemiology [Online] 97:86–94. Available at: [Accessed: 13 December 2018].
Cambridge Digital Humanities (2018). Text and data mining services: an update. Unlocking Research [Online]. Available at: [Accessed: 8 December 2018].
Candy, L., Edmonds, E. & Poltronieri, F.A. (2018). Explorations in Art and Technology. 2nd ed. Springer Series on Cultural Computing. London: Springer-Verlag.
Centre for Investigative Journalism (2018). Conspiracy: Logan Symposium [Online]. Available at: [Accessed: 19 November 2018].
CONSORT (n.d.). Extensions of the CONSORT Statement: Abstracts [Online]. Available at: [Accessed: 17 November 2018].
CorCenCC (2018). CorCenCC – National Corpus of Contemporary Welsh. [Online]. Available at: [Accessed: 8 December 2018].
Crisan, A., McKee, G., Munzner, T. & Gardy, J.L. (2018). Evidence-based design and evaluation of a whole genome sequencing clinical report for the reference microbiology laboratory. PeerJ [Online] 6:e4218. Available at: [Accessed: 18 November 2018].
Cunningham, A.M. & Wicks, W. (1992). Guide to Careers in Abstracting and Indexing. Philadelphia: National Federation of Abstracting and Information Services.
Demner-Fushman, D., Shooshan, S.E., Rodriguez, L., Aronson, A.R., Lang, F., Rogers, W., Roberts, K. & Tonning, J. (2018). A dataset of 200 structured product labels annotated for adverse drug reactions. Scientific Data [Online] 5:180001. Available at: [Accessed: 17 November 2018].
Dickersin, K. & Mayo-Wilson, E. (2018). Standards for design and measurement would make clinical research reproducible and usable. Proceedings of the National Academy of Sciences of the United States of America [Online] 115:2590–2594. Available at: [Accessed: 13 December 2018].
Dijkers, M.P.J.M. (2003). Searching the literature for information on traumatic spinal cord injury: the usefulness of abstracts. Spinal Cord [Online] 41:76–84. Available at: [Accessed: 13 December 2018].
Eid, T., van Sonnenberg, E., Azar, A., Mistry, P., Eid, K. & Kang, P. (2018). Analysis of the Variability of Abstract Structures in Medical Journals. Journal of General Internal Medicine [Online] 33:1013–1014. Available at: [Accessed: 17 November 2018].
Elliott-Rudder, M. (2009). Health Professional Knowledge of Breastfeeding: Are the Health Risks of Infant Formula Feeding Accurately Conveyed by the Titles and Abstracts? Journal of Human Lactation [Online] 25:3. Available at: [Accessed: 13 December 2018].
Ellis, G. ed. (2018). Cognitive Biases in Visualizations. Cham: Springer International Publishing.
Floridi, L. (2016). True AI Is Both Logically Possible and Utterly Implausible [Online]. Available at: [Accessed: 26 November 2018].
FORCE11 (2016). The FAIR Data Principles [Online]. Available at: [Accessed: 8 December 2018].
Furner, J. (2016). “Data”: The data. In: Kelly, M. and Bielby, J. (eds.) Information Cultures in the Digital Age: A Festschrift in Honor of Rafael Capurro. Wiesbaden: Springer Fachmedien Wiesbaden, pp. 287–306.
Garfield, E. (1965). Can Citation Indexing be Automated? In: Stevens, M.E., Giuliano, V.E. and Heilprin, L.B. (eds.) Statistical Association Methods for Mechanized Documentation: Symposium Proceedings, Washington, 1964. U.S. Government Printing Office, pp. 189–192.
Genette, G. (1997). Paratexts: Thresholds of interpretation. Translated from French by J. E. Lewin. Cambridge: Cambridge University Press.
Gigerenzer, G. & Kolpatzik, K. (2017). How new fact boxes are explaining medical risk to millions. BMJ [Online] 357:j2460. Available at: [Accessed: 21 November 2018].
Gooch, P. (2018b). Scholarcy: Research in a Nutshell [Online]. Available at: [Accessed: 21 November 2018].
Graber, M.A., Dowell, D. & Endres, J. (2013). Do Abstracts of Articles in Major Journals Contain the Same Information as the Body of the Paper? American Family Physician [Online] 88:466–467. Available at: [Accessed: 17 November 2018].
Guimarães, C.A. (2006). Structured abstracts: narrative review. Acta Cirurgica Brasileira 21:263–268.
Haynes, R. (2016). Improving Reports of Research by More Informative Abstracts: A Personal Reflection [Online]. Available at: [Accessed: 17 November 2018].
Hua, F., Walsh, T., Glenny, A.-M. & Worthington, H. (2018). Structure formats of randomised controlled trial abstracts: a cross-sectional analysis of their current usage and association with methodology reporting. BMC Medical Research Methodology [Online] 18:6. Available at: [Accessed: 13 December 2018].
Ibrahim, A.M. (n.d.). Visual Abstract: Open Source Primer [Online]. Available at: [Accessed: 18 November 2018].
Ioannidis, J.P.A., Greenland, S., Hlatky, M.A., Khoury, M.J., Macleod, M.R., Moher, D., Schulz, K.F. & Tibshirani, R. (2014). Increasing value and reducing waste in research design, conduct, and analysis. The Lancet [Online] 383:166–175. Available at: [Accessed: 22 November 2018].
Iris.ai (2018). Iris.ai – Your Science Assistant [Online]. Available at: [Accessed: 21 November 2018].
Izquierdo Alonso, M. & Moreno Fernández, L.M. (2010). Perspectives of studies on document abstracting: Towards an integrated view of models and theoretical approaches. Journal of Documentation [Online] 66:563–584. Available at: [Accessed: 12 October 2018].
Jia, R. & Liang, P. (2017). Adversarial Examples for Evaluating Reading Comprehension Systems. arXiv:1707.07328 [cs] [Online]. Available at: [Accessed: 25 November 2018].
Jimeno-Yepes, A.J., Plaza, L., Mork, J.G., Aronson, A.R. & Díaz, A. (2013). MeSH indexing based on automatically generated summaries. BMC Bioinformatics [Online] 14:208. Available at: [Accessed: 13 December 2018].
Johnson, D.G. & Verdicchio, M. (2017). AI Anxiety. Journal of the Association for Information Science and Technology [Online] 68:2267–2270. Available at: [Accessed: 25 November 2018].
Kirtley, S. (2018). Current issues regarding the quality and reproducibility of published biomedical research studies. The EQUATOR Network – Librarian Network [Online]. Available at: [Accessed: 22 November 2018].
Kuhn, I. (2017). #icmldub #eahil2017 CEC6: Librarians can help address reporting concerns in the biomedical literature, particularly for systematic reviews. Musings of a Medical Librarian [Online]. Available at: [Accessed: 9 December 2018].
Mani, I. (2001). Automatic Summarization. Amsterdam; Philadelphia: John Benjamins Publishing Company.
McLoughlin, I. (2018). How to Teach AI to Speak Welsh (and Other Minority Languages). The Conversation [Online]. Available at: [Accessed: 8 December 2018].
Metz, R. (2016). Poets, Fiction Writers, and Comedians Are Making AI Assistants More Fun to Be Around [Online]. Available at: [Accessed: 26 November 2018].
Mind the Graph (2018). Mind the Graph: Create Scientific Infographics Easily [Online]. Available at: [Accessed: 18 November 2018].
Mitchell, M. (2018). Artificial Intelligence Hits the Barrier of Meaning. The New York Times [Online]. Available at: [Accessed: 25 November 2018].
Modern Language Association (2018). Letter to MLA Bibliography Customers [Online]. Available at: [Accessed: 8 December 2018].
Mork, J.G., Aronson, A.R. & Demner-Fushman, D. (2017). 12 years on – Is the NLM medical text indexer still useful and relevant? Journal of Biomedical Semantics [Online] 8:8. Available at: [Accessed: 13 December 2018].
Desrochers, N. & Apollon, D. eds. (2014). Examining Paratextual Theory and Its Applications in Digital Culture. Hershey, PA: IGI Global.
Narrative Science (2018). Narrative Science | How the Future Gets Written [Online]. Available at: [Accessed: 17 November 2018].
National Institute of Neurological Disorders and Stroke (NINDS) (2017). Request for Information on Developing Experimental Design ‘Emoji’ Symbols for Use in Scientific Presentations [Online]. Available at: [Accessed: 22 November 2018].
Nieri, M., Clauser, C., Franceschi, D., Pagliaro, U., Saletta, D. & Pini-Prato, G. (2007). Randomized clinical trials in implant therapy: relationships among methodological, statistical, clinical, paratextual features and number of citations. Clinical Oral Implants Research [Online] 18:419–431. Available at: [Accessed: 17 November 2018].
NISO (2015). ANSI/NISO Z39.14-1997 (R2015) Guidelines for Abstracts [Online]. Available at: [Accessed: 9 December 2018].
Nissen, S.B., Magidson, T., Gross, K. & Bergstrom, C.T. (2016). Publication bias and the canonization of false facts. eLife [Online] 5:e21451. Available at: [Accessed: 14 December 2018].
Noble, S.U. (2018a). Safiya Umoja Noble and the Ethics of Social Justice in Information (Part 1) [Online]. Available at: [Accessed: 9 December 2018].
Noble, S.U. (2018b). Safiya Umoja Noble and the Ethics of Social Justice in Information (Part 2) [Online]. Available at: [Accessed: 9 December 2018].
O’Mara-Eves, A., Thomas, J., McNaught, J., Miwa, M. & Ananiadou, A. (2015). Using text mining for study identification in systematic reviews: a systematic review of current approaches. Systematic Reviews [Online] 4:5. Available at: [Accessed: 4 December 2018].
Peroni, S. & Shotton, D. (2018). SPAR Ontologies [Online]. Available at: [Accessed: 18 October 2018].
Pinto, M. (2003a). Abstracting/abstract adaptation to digital environments: research trends. Journal of Documentation [Online] 59:581–608. Available at: [Accessed: 17 November 2018].
Pitkin, R.M. & Branagan, M.A. (1998). Can the Accuracy of Abstracts Be Improved by Providing Specific Instructions?: A Randomized Controlled Trial. JAMA [Online] 280:267–269. Available at: [Accessed: 17 November 2018].
Pitkin, R.M., Branagan, M.A. & Burmeister, L.F. (1999). Accuracy of data in abstracts of published research articles. JAMA [Online] 281:1110–1111. Available at: [Accessed: 13 December 2018].
Plavén-Sigray, P., Matheson, G.J., Schiffler, B.C. & Thompson, W.H. (2017). The readability of scientific texts is decreasing over time. eLife [Online] 6:e27725. Available at: [Accessed: 14 December 2018].
Robinson, L. (2010). Understanding Healthcare Information. London: Facet Publishing.
Salager-Meyer, F. (1990). Discoursal flaws in Medical English abstracts: A genre analysis per research- and text-type. Text – Interdisciplinary Journal for the Study of Discourse [Online] 10. Available at: [Accessed: 17 November 2018].
School of Advanced Study (2018). House of Lords AI Report: Policy Impact, Implementation, and Progress [Online]. Available at: [Accessed: 29 November 2018].
Sharma, S. & Harrison, J.E. (2006). Structured abstracts: do they improve the quality of information in abstracts? American Journal of Orthodontics and Dentofacial Orthopedics [Online] 130:523–530. Available at: [Accessed: 13 December 2018].
Shirky, C. (2009). A Speculative Post on the Idea of Algorithmic Authority. [Online]. Available at: [Accessed: 7 December 2018].
Siebers, R. (2001). Data Inconsistencies in Abstracts of Articles in Clinical Chemistry. Clinical Chemistry [Online] 47:149. Available at: [Accessed: 17 November 2018].
Silber, H.G. & McCoy, K.F. (2000). An Efficient Text Summarizer using Lexical Chains. In: INLG 2000 Proceedings of the First International Conference on Natural Language Generation. Mitzpe Ramon, Israel: Association for Computational Linguistics, pp. 268–271. Available at: [Accessed: 17 November 2018].
Smith, R. (2006). The trouble with medical journals. Journal of the Royal Society of Medicine [Online] 99:115–119. Available at: [Accessed: 23 November 2018].
Ward, L.G., Kendrach, M.G. & Price, S.O. (2004). Accuracy of abstracts for original research articles in pharmacy journals. The Annals of Pharmacotherapy [Online] 38:1173–1177. Available at: [Accessed: 13 December 2018].
Welsh Government (2018). Welsh Language Technology Action Plan [Online]. Available at: [Accessed: 9 December 2018].
Wheatley, A. & Armstrong, C. (1997). Metadata, recall, and abstracts: can abstracts ever be reliable indicators of document value? Aslib Proceedings [Online] 49:206–213. Available at: [Accessed: 13 December 2018].
Witty, F.J. (1973). The Beginnings of Indexing and Abstracting: Some Notes towards a History of Indexing and Abstracting in Antiquity and the Middle Ages. The Indexer 8:193–198.

About Joseph Dunne-Howrie

Joseph is a practitioner scholar in theatre and library information science. He teaches at several universities including City, Rose Bruford College, and UEL. His research interests include immersive performance, performative writing, digital culture, documenting and archiving, and audience participation. You can learn more about Joseph's work at
This entry was posted in CityLIS Writes.
