An analysis and evaluation of an article from the Journal of Information Science

***This analysis was written by CityLIS student Timothy Spring in January 2019. It is reproduced here with the author’s permission as part of our CityLIS Writes initiative.***

Introduction
For this assignment, I have chosen to evaluate the article ‘Academics’ attitudes towards peer review in scholarly journals and the effect of role and discipline’, written by Jennifer Rowley and Laura Sbaffi in Volume 44, Issue 5 (2018) of the Journal of Information Science. Evaluating this article is somewhat complicated by the fact that the dataset used was not created by the authors, which makes it difficult to assess under certain criteria. As a result, I have used Glynn’s (2006) critical appraisal tool and have applied the questions in this framework directly to the original survey/report conducted by Taylor & Francis (2015a; 2015b), in addition to the article itself where appropriate. This tool comes in the form of a checklist which breaks a study down into questions grouped into four sections; each question can be answered with a Yes, No, Unclear or N/A, and the answers can later be used to calculate the overall validity of an article, as well as of each section. This particular framework appears to be fairly well regarded and has been applied by others in their own research; see Aubry et al. (2017), Cairns et al. (2012) and O’Rourke & O’Brien (2016) for practical examples. A copy of the appraisal tool, including the results for this particular article, is located in Appendix II of this essay. I will explain the results of this analysis in further detail and explore some of the issues that have arisen from it. Where questions are not relevant to this particular article (e.g. questions on comparative studies), I have omitted them from this essay.

Glynn’s critical appraisal tool provides a useful framework for analysis but is predominantly focused on the methodological and research design aspects of critical analysis and does not consider other suitable questions which may be of interest for evaluation. Whilst the main appraisal is based around this tool, I have devised a selection of background questions that aim to give a broader introduction to the article and provide a platform for more detailed analysis. These questions are based on concepts found in Bawden & Robinson (2012) and Greenhalgh (2014) and can be viewed in Appendix I.

Background Questions
What journal was this article published in? Is it an appropriate journal to be published in?
This article was published in the Journal of Information Science. It currently has an impact factor of 1.939 and is ranked 27th out of 88 for the subject ‘Information Science & Library Science’ (Journal Citation Reports, 2018). Its impact factor and mid-to-high ranking suggest it is a fairly reputable journal for this subject. It is also published in association with the Chartered Institute of Library and Information Professionals (CILIP), the UK’s professional body for library and information professionals, which further adds to its credibility.

Who wrote this research? Are they a credible author with an appropriate background to explore this subject?
Jennifer Rowley is a Professor of Information and Communications at Manchester Metropolitan University. Her academic record suggests she has a broad understanding of the subject of this article, having previously published articles examining academic attitudes towards open access (Manchester Metropolitan University, 2018).
Laura Sbaffi is a lecturer in Health Informatics at the University of Sheffield within their Information School. Her previous research has included subjects such as information literacy and information behaviour and shows a pattern of frequent collaborations with Jennifer Rowley (The University of Sheffield, 2018).
Based on their staff profiles and research backgrounds, it is fair to say that both of these academics have sufficient understanding of the subject material discussed in the article I am evaluating and can be considered reliable and reputable sources.

Is the style of the article suitable for its audience?
The article follows a standard template consisting of an abstract, introduction, literature review, methodology, findings, discussion, conclusion and further recommendations section. Each section uses appropriate and clear language that defines the boundaries of the article.

Does this research contain a literature review? Is it comprehensive enough for this research? Do they draw appropriate findings from the literature review?
The article provides a neatly presented table in the appendix of previous studies of a similar nature and scope, ordered clearly by year of publication. The authors also provide fair analytical synopses of each previous study, explaining the scope of the research (national vs. international), style of research (quantitative vs. qualitative), results and methodologies. They fairly conclude that previous studies have affirmed the value of peer review but have not considered the subject from the same angle that Rowley & Sbaffi intend to.

Is this study original?
Rowley & Sbaffi contend that previous research on this subject has not considered, on an international level, the impact that discipline and role in the publishing process have on attitudes towards peer review. The literature review is fairly comprehensive regarding previous studies of this size, so this research should be considered original; based on their publication history, the authors have not previously explored peer review in detail.
However, they have previously looked at other aspects of academic publishing, producing research that is almost identical in style and content, which suggests a questionable amount of creativity in this research. For instance, Rowley et al. (2017) published an earlier article entitled ‘Academics’ Behaviors and Attitudes towards Open Access Publishing in Scholarly Journals’. The titles alone are similar, but the articles themselves bear striking similarities in style and content; for strong examples of this, see the methodology sections of both articles, which contain many identical sentences, showing that this work is somewhat formulaic. This raises a difficult question for this article: on a technical level their research is what we could consider original, but if they are producing regular research using a formulaic structure and methodology where the parameters are changed slightly each time, is it truly original?

Is the dataset and numerical value of the research consistent?
One noticeable issue is that figures within the article do not entirely match the numbers within Taylor & Francis’ own dataset. For instance, both Rowley & Sbaffi and the Taylor & Francis report agree that the survey was sent to 86,487 academics; by contrast, Rowley & Sbaffi state that 7,875 questionnaires were completed, whereas the Taylor & Francis White Paper gives the figure as 7,438. Within the article’s acknowledgements, Rowley & Sbaffi point out that they had access to the data generated from the survey, which could mean the raw data they were working with had different values from those presented in the Taylor & Francis survey. The variation is not dramatic, and the population and sample size are large enough that this slight difference is likely to be negligible to their analysis; nevertheless, I would like to know why this discrepancy in the figures exists.

Are the citations used appropriate? Is the original research being cited presented in a fair and accurate way?
The citations within the article are used appropriately, often citing other academic research on peer review to establish what is meant by the term in their introduction and how this has developed over time. These citations are often used again and added to by other citations featuring in the literature review which explore the previous research on this subject.

Is the abstract written in an appropriate style?
The abstract provides a concise summary of what their research aims to find out, a brief summary of what the research discovered, and their conclusions, which I think fairly reflect what they found. One useful addition to the abstract would have been a summary of the methodology used (e.g. how the data was gathered and how it was analysed).

Critical Appraisal Toolkit
As the data used in this article was based on a Taylor & Francis survey completed in Spring 2015, evaluation of this article is based on a combination of the methods used in the original survey and the methods explained in the article. For instance, Section A (Population) and Section B (Data Collection) questions can be applied to the original survey but would not be applicable to the article, whereas Section C (Study Design) and Section D (Results) can be applied to both. For this appraisal, I will only apply Glynn’s tool to the Taylor & Francis survey to assess its general validity as a dataset for the article in question; the article will be assessed for its findings and methodology under a separate framework.

Section A: Population
Is the study population representative of all users, actual and eligible, who might be included in the study?
YES: The survey was distributed to 86,487 authors, reviewers and editors, fairly evenly across both the Science, Technology and Medicine (STM) and Humanities and Social Sciences (HSS) disciplines (Taylor & Francis Group, 2015a). Results were compared with a sample of researchers from Thomson Reuters to further ensure that the results were representative of academia broadly, not just of Taylor & Francis (Taylor & Francis Group, 2015b).
Are inclusion and exclusion criteria definitively outlined?
YES: The report specifically targeted researchers who had published with Taylor & Francis in 2013; this selection was seemingly indiscriminate across all subjects and across potentially sensitive characteristics such as gender, race and religion. Whilst this would exclude researchers who are not involved with peer review and publishing for whatever reason, the criterion is explicitly outlined in their survey.
Is the sample size large enough for sufficiently precise estimates? Is the response rate large enough for sufficiently precise estimates?
YES: Whilst it is difficult to say exactly how many researchers there are in the world, if we take UNESCO’s estimate of 7.8 million researchers worldwide in 2013 (UNESCO, 2016), it is safe to assume that the sample size is sufficient. Whilst there is no benchmark for what is considered a good response rate, we can judge this survey against the response rates of other similar surveys (Denscombe, 2017). Based on Rowley & Sbaffi’s literature review, previous studies have had even lower response rates than the 9.1% of this survey (7,875 completed questionnaires from 86,487 invitations). This suggests that, given the large sample size, the quantity of respondents makes this an acceptable response rate.

Is the choice of population bias-free?
YES: Even though the selection is exclusive to those involved with Taylor & Francis publishing in 2013, this population is likely to be bias-free due to the sample size. Porta (2014) defines selection bias as the ‘systematic differences’ that differentiate the population of a study from other populations, and notes that ‘These differences may make it problematic to transport the inferences from the study population to the other populations.’ In this instance, the likelihood that researchers involved in this survey have a high crossover with other publishers makes this a relatively bias-free survey; the comparative sample of Thomson Reuters researchers also supports that there was an attempt to be bias-free.

Section B: Data Collection
Are data collection methods clearly described?
YES: The survey was conducted online and sent to respondents by email during Spring 2015.
Is the data collection instrument validated?
YES: As shown in Rowley & Sbaffi’s methodology (Section 3.1), Taylor & Francis tested this questionnaire internally and externally with a small group of academics. They also sent the survey out in small batches, which allowed them to address any technical issues that arose.
Does the study measure the outcome at a time appropriate for capturing the intervention’s effect?
YES: The survey was sent during Spring 2015. Whilst the questionnaire and area of interest are not particularly time-sensitive, and attitudes are likely to change again given enough time, the window provided seems sufficient for obtaining a snapshot of academic attitudes towards peer review within a certain timeframe.

Is the instrument included in the publication?
YES: This is a slightly complicated question, as the survey and results are not directly included with the article as an appendix, but this information is freely available online via Taylor & Francis, who conducted the survey. Whilst it would have been better to include the instrument as part of the article, providing a reference to where the public can access the data does make it accessible for evaluation, and therefore makes this transparent research.

Are questions posed clearly enough to be able to elicit precise answers?
UNCLEAR: Whilst the questions are mostly logical and ask respondents to choose an option from 1 to 10 based on agreement (e.g. 1 = strongly disagree, 10 = strongly agree), some of the questions could be framed in a less ambiguous manner. For example, Question 6a in Table 3 reads as follows: ‘Authors of one gender (either male or female) are more likely to be accepted for publication in a journal than authors of the other gender’, answered on a scale of 1 = extremely rare to 10 = extremely common. By not specifying the gender, I feel this question provides little insight into gender inequality in the peer review process; within STM research, for example, editors (77% male to 23% female) and reviewers (68% male to 32% female) are dominated largely by older males. By asking non-specific questions with vague descriptions, the survey risks losing nuance and genuinely insightful areas for further research.

Were those involved in data collection not involved in delivering a service to the target population?
YES: Despite Taylor & Francis having a vested interest in academic attitudes towards peer review, the anonymity of the survey eliminates the possibility of bias in the distribution and gathering of the survey data.

Section C: Study Design
Is the study type/methodology utilised appropriate?
YES: Using the initial results from the survey, Rowley & Sbaffi entered the questionnaire responses into IBM SPSS, a piece of software designed for statistical analysis, to produce their own statistically focused findings on the subject.

Is there face validity?
YES: At a simple level, the methodology used seems valid and appropriate for this study.
Is the research methodology clearly stated at a level of detail that would allow its replication?
YES: The article outlines how the data was gathered by Taylor & Francis; whilst I could not replicate the reach of a major publishing company, the survey consists of fairly standard closed, Likert-style questions which could be replicated. The methods they use for data analysis are also replicable, as I could input the publicly available dataset into statistical analysis software and reproduce their results.

Are the outcomes clearly stated and discussed in relation to the data collection?
YES: In the introduction of the article, Rowley & Sbaffi make a clear statement that they aim to contribute to the knowledge of academics’ attitudes towards the peer review process with a particular focus on whether this is impacted by discipline or role in the process (as author, reviewer or editor). By performing an analysis of variance (ANOVA) between the disciplines and roles, the authors attempt to determine whether there are statistically significant differences between any of these groups.
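To make this kind of test concrete, the sketch below runs a one-way ANOVA in Python on hypothetical Likert-scale agreement scores grouped by role. This is a minimal illustration under assumed data and column names of my own; it is not the authors’ actual SPSS procedure or the real Taylor & Francis dataset.

```python
# Minimal sketch of a one-way ANOVA of the kind described above.
# The data and column names are hypothetical, not the T&F dataset.
import pandas as pd
from scipy import stats

# Hypothetical 1-10 agreement scores for one survey statement, by role
df = pd.DataFrame({
    "role": ["author"] * 4 + ["reviewer"] * 4 + ["editor"] * 4,
    "agreement": [7, 8, 6, 9, 5, 6, 7, 5, 8, 9, 9, 7],
})

# Collect each role's scores and test for differences in group means
groups = [g["agreement"].values for _, g in df.groupby("role")]
f_stat, p_value = stats.f_oneway(*groups)

# A p-value below 0.05 would indicate a statistically significant
# difference in mean agreement between at least two of the roles
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

The same test could then be repeated with discipline (STM vs. HSS) as the grouping variable, which is the comparison the article reports across its tables.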

Section D: Results
Are all the results clearly outlined?
YES: The outcome of their statistical analysis for each question is clearly presented in Tables 1 to 5. The authors explain that cells have been left empty where there was no significant difference between values. In this context, the omission of data is not intended to obscure their analysis but is an acknowledged absence of significance, which seems an appropriate reason to omit the data.

Are confounding variables accounted for?
NO: The original Taylor & Francis survey also carries demographic-based questions, such as gender and age by discipline and role (author, reviewer, editor). Looking at the original survey results, significant differences can be seen between these demographics within disciplines and roles, which could lead to interesting interpretations of the data that I believe are missing from the article. Whilst I respect that the authors’ intent was to carry out a large-scale statistical analysis of survey results, they should have made some mention of the range of confounding variables not analysed in their article as possible areas for further research.

Do the conclusions accurately reflect the analysis?
YES: The discussion and conclusion sections of the article provide a fair and succinct summary of the statistical analysis of the survey data, and the authors do not make any inappropriate assumptions based on the data they had.

Is subset analysis a minor, rather than a major, focus of the article?
YES: Overall, there are five tables presented that split the questions into suitable categories. In both the findings and the discussion, the authors give each set of results an appropriate amount of attention, without bias towards any particular set of results or any other major subset analysis infractions.

Are suggestions provided for further areas to research?
YES: The final section of the article (Section 6.2) concludes by providing suggestions for future research, including topics such as further understanding the disciplinary differences between HSS and STM.

Is there external validity?
YES: The survey population was both large and non-specific, with a good sample size, which allowed for the generation of acceptable results. This checklist has also demonstrated that the article and research methods were relatively free from bias.

Section E: Calculating Validity
Glynn’s critical appraisal toolkit provides the following formula for calculating validity:
Y (Yes) + N (No) + U (Unclear) = T (Total)
If Y/T is greater than or equal to 75% (or, equivalently, if (N + U)/T is less than or equal to 25%), then we can conclude that the research conducted has validity. The total for this article is 21, with 19 Yes, 1 No and 1 Unclear. Based on this, we can determine that the article has approximately 90% validity (19/21) and can therefore be treated as an appropriate piece of research.
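As a sanity check on the arithmetic, the calculation can be expressed as a few lines of Python; the function name here is my own invention, and the figures are those reported above.

```python
# Sketch of Glynn's validity calculation: validity = Y / T,
# where T = Y + N + U; a score of 75% or above indicates validity.
def glynn_validity(yes: int, no: int, unclear: int) -> float:
    total = yes + no + unclear
    return yes / total * 100

# Figures for this article: 19 Yes, 1 No, 1 Unclear (T = 21)
score = glynn_validity(19, 1, 1)
print(f"Validity: {score:.1f}%")  # 90.5%, above the 75% threshold
```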

Conclusion
Overall, this critical analysis has tried to demonstrate the quality of the article written by Rowley & Sbaffi by using an established critical appraisal tool in addition to supplementary questions devised by myself. The methodical approach to reviewing their article using Glynn’s (2006) tool demonstrates that this article has ‘high validity’, scoring 90% overall and achieving a score well above the 75% threshold for validity in each section too (see Appendix II). In addition to this detailed appraisal of the data behind the article, the background questions help to establish that the authors, journal and article itself are also credible. In summary, I would assess that this article is a solid piece of research that has been constructed in a logical fashion. The only reservations I hold concern how creative this research should be considered. Whilst the authors have made the case that this is an area of research that has not been considered before, the piece almost comes across as by-the-numbers academic publishing, with methodologies identical to previous work. Alternatively, this may be too cynical: if the methodology and approach are an effective and suitable way of processing similar batches of data, then using a similar approach for other research is within reason.

In addition, whilst Taylor & Francis’ presentation of the data is visually friendly, with plenty of diagrams and charts, it can be difficult to digest the significance of the data. By processing the data through statistical software, Rowley & Sbaffi change the way an audience interprets the data, presenting the information in a more robust manner than the original survey. For this reason, in addition to the many others above, I would say that overall this is a solid piece of academic research.

Bibliography
Aubry, R. E., Scott, L. & Cassidy, E., 2017. Lithium monitoring patterns in the United Kingdom and Ireland: Can shared care agreements play a role in improving monitoring quality? A systematic review. Irish Journal of Psychological Medicine, 34(2), pp. 127-140.
Bawden, D. & Robinson, L., 2012. Introduction to Information Science. London: Facet Publishing.
Cairns, G. et al., 2012. Systematic Literature Review of the Evidence for Effective National Immunisation Schedule Promotional Communications, s.l.: European Centre for Disease Prevention and Control (ECDC).
Denscombe, M., 2017. The Good Research Guide: For small-scale social research projects. 6th ed. London: Open University Press.
Glynn, L., 2006. A critical appraisal tool for library and information research. Library Hi Tech, 24(3), pp. 387-399.
Greenhalgh, T., 2014. How to Read a Paper: The Basics of Evidence-Based Medicine. 5th ed. Chichester: BMJ Books.
Manchester Metropolitan University, 2018. Profile – Manchester Metropolitan University. [Online] Available at: https://www2.mmu.ac.uk/infocomms/staff/profile/index.php?id=125 [Accessed 30 December 2018].
O’Rourke, G. & O’Brien, J. J., 2016. Identifying the barriers to antiepileptic drug adherence among adults with epilepsy. European Journal of Epilepsy, Volume 45, pp. 160-168.
Porta, M., ed., 2014. A Dictionary of Epidemiology. 6th ed. Oxford: Oxford University Press.
Rowley, J., Johnson, F. & Sbaffi, L., 2017. Academics’ Behaviors and Attitudes Towards Open Access Publishing in Scholarly Journals. Journal of the Association for Information Science and Technology, 68(5), pp. 1201-1211.
Rowley, J. & Sbaffi, L., 2018. Academics’ attitudes towards peer review in scholarly journals and the effect of role and discipline. Journal of Information Science, 44(5), pp. 644-657.
Taylor & Francis Group, 2015a. Peer review in 2015: A Global View: A White Paper from Taylor & Francis, s.l.: Taylor & Francis.
Taylor & Francis Group, 2015b. Peer review in 2015: A Global View: Key survey data from Taylor & Francis, s.l.: Taylor & Francis.
The University of Sheffield, 2018. Dr Laura Sbaffi – Staff – Information School – The University of Sheffield. [Online] Available at: https://www.sheffield.ac.uk/is/staff/sbaffikeypublications [Accessed 30 December 2018].
UNESCO, 2016. UNESCO Science Report: Towards 2030. 2nd ed. Paris: UNESCO Publishing.
