Dear Melissa


Some initial thoughts from the COUNTER Executive, who hope that some other libraries will also share their findings.


We think the most likely explanations for the differences within a platform are:

  1. Some journals come in just one format while others have PDF and HTML. For example, Wiley backfiles are PDF only.
  2. Depending on the way a library’s users usually find the articles (Google vs. discovery tool vs A&I database vs personal bookmarks), they may land on different page types – either the one they want or one from where they have to navigate to the one they want.


It would be interesting to see the values for total and PDF usage from R4 for comparison. We assume they would also show significant variation on a single platform.


The Total_Item_Requests of course correspond to the R4 total usage (ft_total in the XML), which isn't necessarily the sum of the PDF and HTML usage, because there might be additional formats (ePub, etc.).


Kind regards



p.s. do feel free to share these comments with the list if that is helpful.


From: Melissa Belvadi <>
Sent: 17 June 2019 16:25
To: Serials in Libraries Discussion Forum <>
Cc: Lorraine Estelle <>
Subject: unexpected results in new COUNTER 5 Journal data re diff total and unique item requests


Hi, all.


This is very wonky for those into analyzing journal use COUNTER data.


We've now had a full academic semester's worth of the new R5 data, which distinguishes "total item requests" from "unique item requests".


My understanding of the difference is that the "total" is basically the equivalent of adding the HTML and PDF totals from the R4 reports. Depending on the platform's web interface design, the HTML figures could have been seriously inflated (and the total with them) if anyone visiting even the abstract page found themselves actually viewing the entire HTML full text.

So I was expecting there to be a pattern across platforms, but not much within a single platform, in the difference between the Total and Unique in the R5 data, reflecting the difference in platform UI design.


However, I've started to look at a couple of platforms' reports, and am seeing wide variation at a title level within a single platform.


To reduce randomness from tiny data (a journal used just a couple of times), I am only including in the figures below journals that had unique uses of 50 or higher for the Jan - April period.

We are a relatively small library. I would love to see the equivalent figures for a much larger library.


On Elsevier's ScienceDirect, we have 58 titles that met that threshold.

I calculated the percentage difference for each between the Total and Unique Items Requested.

The min % difference is 21%, the max is 50% with a standard deviation of 6%.


On HighWire, we have just 15 titles meeting that threshold: min 8%, max 33%, stdev 8%.


On Wiley, we have 27 titles meeting that threshold: min 6%, max 66%, stdev 11%.


Such wide variation within a single platform suggests that there is something more interesting going on in user behavior than just platform UI. I'd love to hear suggestions about what, especially if you've done any direct patron behavior (e.g. usability-type) studies.


I am calculating the percent difference based on the total, i.e. (total - unique) / total, although I could see a case for basing it on the unique instead, i.e. (total - unique) / unique.
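For anyone who wants to try this themselves, here is a minimal sketch of the two ways of expressing the difference described above (function names and the example numbers are mine, just for illustration):

```python
def pct_diff_of_total(total, unique):
    """Difference as a share of Total_Item_Requests: (total - unique) / total."""
    return (total - unique) / total

def pct_diff_of_unique(total, unique):
    """Difference as a share of Unique_Item_Requests: (total - unique) / unique."""
    return (total - unique) / unique

# Example: a journal with 150 total and 100 unique item requests.
print(round(pct_diff_of_total(150, 100), 3))   # 0.333, i.e. ~33% of total
print(round(pct_diff_of_unique(150, 100), 3))  # 0.5, i.e. 50% of unique
```

Note that the two versions can give quite different percentages for the same journal, so it matters which denominator you pick before comparing across titles.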


If anyone is interested in a quick lesson on how to get these figures from your own TR_J1 reports, let me know, and if I get enough replies I'll offer a quick "webinar" on it. I am using Google Sheets with pivot tables, but it's pretty much the same in Excel, I think.
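If spreadsheets aren't your thing, the same pivot-and-filter logic can be sketched in a few lines of Python. The sample rows below are made up, and I'm assuming the usual TR_J1 layout where each title has one Total_Item_Requests and one Unique_Item_Requests row with a reporting-period total:

```python
import statistics

# Hypothetical rows mimicking (Title, Metric_Type, Reporting_Period_Total)
# columns from a COUNTER 5 TR_J1 report.
rows = [
    ("Journal A", "Total_Item_Requests", 150),
    ("Journal A", "Unique_Item_Requests", 100),
    ("Journal B", "Total_Item_Requests", 80),
    ("Journal B", "Unique_Item_Requests", 60),
    ("Journal C", "Total_Item_Requests", 40),   # unique count below threshold
    ("Journal C", "Unique_Item_Requests", 30),
]

def pct_differences(rows, min_unique=50):
    """Percent difference (total - unique) / total per title,
    keeping only titles whose unique requests meet the threshold."""
    totals, uniques = {}, {}
    for title, metric, count in rows:
        if metric == "Total_Item_Requests":
            totals[title] = totals.get(title, 0) + count
        elif metric == "Unique_Item_Requests":
            uniques[title] = uniques.get(title, 0) + count
    return {t: (totals[t] - uniques[t]) / totals[t]
            for t in totals
            if t in uniques and uniques[t] >= min_unique}

diffs = pct_differences(rows)
print(sorted(diffs))                         # titles that met the threshold
print(min(diffs.values()), max(diffs.values()))
if len(diffs) > 1:
    print(statistics.stdev(diffs.values()))  # sample standard deviation
```

With real data you would read the rows from the TR_J1 CSV (skipping the report header block) instead of hard-coding them; the min/max/stdev printed at the end correspond to the per-platform figures quoted above.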


Melissa Belvadi

Collections Librarian

University of Prince Edward Island  902-566-0581

Make an appointment via YouCanBookMe


