This FAQ is designed to help people who are already using COUNTER reports find the answers to their questions. If you need an introduction to COUNTER, we suggest starting with the Education pages, or reading more about who we are on our About pages. And of course, if the answer you’re looking for isn’t here, please get in touch with Tasha.
First off, thank you – it’s great to know we’re helping our community! There are a few things that you can do to boost COUNTER compliance from your publishing partners. The first is as simple as asking them: some may simply not know that you want these metrics. If they need information, just introduce them to tasha@countermetrics.org. Secondly, you can include a requirement for COUNTER reporting in your license agreements. There’s some standard terminology in Section 10.1 of the Code of Practice. Lastly, you can collaborate with your fellow librarians to help show just how widespread the demand for COUNTER reporting is!
We ask report providers to make sure they have usage reports available for the year-to-date plus the previous 24 months. That means that from the 28th of April 2024, we would expect compliant providers to be offering January, February and March 2024 reports, plus 2022 and 2023 reports.
The only exception to this is for new report providers, where we only expect reports from the date when they first became COUNTER-compliant.
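If you script your harvesting, the availability rule above is easy to encode. Here’s a minimal sketch – the function name and logic are illustrative, not part of the Code of Practice:

```python
from datetime import date

def expected_report_window(today: date) -> tuple[date, date]:
    """First and last months a compliant provider should cover:
    the current year to date plus the previous 24 months.
    Assumes the current month's report isn't yet due (providers
    have up to 28 days to release each monthly report)."""
    first = date(today.year - 2, 1, 1)  # January, two calendar years back
    if today.month == 1:
        last = date(today.year - 1, 12, 1)  # previous month = last December
    else:
        last = date(today.year, today.month - 1, 1)
    return first, last

print(expected_report_window(date(2024, 4, 28)))
# (datetime.date(2022, 1, 1), datetime.date(2024, 3, 1)) - matching the example above
```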
We don’t recommend it, but we know that there are some publishers who have chosen to offer Title Reports for A&I databases. This typically happens where specialist publishers have added contextual information like reviews on the source articles, and where subject-specialist librarians in the same field have explicitly requested that the publisher offers a TR so that they can understand usage patterns for titles within the database. Where this is done, the TR for the A&I database must be completely separate from any other TR that the publisher offers (e.g. for their journal platform).
As database publishers they are still required to offer the appropriate Database Report, and we recommend that the DR should be used in place of the TR.
While we develop and maintain the Code of Practice, audits for COUNTER compliance are conducted by independent third-party auditors – either auditors we work with frequently, or Chartered Accountants or their equivalent.
Our goal is to ensure that usage reporting is consistent and comparable across different report providers and over time. Including zero usage creates challenges for both report providers and report consumers.
Instead, we recommend using KBART (https://www.niso.org/standards-committees/kbart) as a way to match subscription holdings against COUNTER reports.
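As a rough sketch of that matching process (the file names are placeholders; the KBART columns print_identifier/online_identifier and Title Report columns Print_ISSN/Online_ISSN follow the respective tabular conventions, but check your own files):

```python
import csv

# Identifiers held according to a KBART holdings file (tab-separated).
with open("holdings_kbart.tsv", newline="", encoding="utf-8") as f:
    held = {row.get("online_identifier") or row.get("print_identifier")
            for row in csv.DictReader(f, delimiter="\t")}
held.discard(None)
held.discard("")

# Identifiers with reported usage in a COUNTER Title Report (TSV).
# Release 5 tabular reports carry a 12-row header plus a blank line,
# so skip 13 lines before the column headings; adjust if yours differs.
with open("tr.tsv", newline="", encoding="utf-8") as f:
    rows = f.readlines()[13:]
used = {row.get("Online_ISSN") or row.get("Print_ISSN")
        for row in csv.DictReader(rows, delimiter="\t")}

print(f"{len(held - used)} held titles show no usage in this report")
```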
Report providers are only obliged to offer reports on a monthly basis. If they are able to offer more frequent reports while remaining compliant with our processing rules they can do so, provided they very clearly mark the resultant report so as not to accidentally cause problems (e.g. a report consumer misinterpreting a one-day report as an entire month’s usage). One way to do this would be for the report provider to change the Report_Name and Report_ID to a non-standard value to prevent the report from being mistaken for a true COUNTER report. These non-standard reports would not be subject to audit and could not be considered fully COUNTER-compliant.
We know that some report providers may want to provide custom reports for their content, so we’ve a whole section in the Code of Practice that describes how you can extend your reports – it’s in Section 11 of the Code of Practice. If you do choose to extend your reports, you need to abide by the two rules set out there.
No, you don’t need to deliver DR_D2, TR_B2 or TR_J2 denial-based Standard Views if you never turn users away from your platform.
It’s quite common for a platform to include only one database. Where this is the case, the publisher needs to offer both the Platform Report and its PR_P1 Standard View, and the Database Report and the DR_D1 and DR_D2 Standard Views.
The required consortium reports look the same as the other COUNTER Reports (PR, TR, DR, IR). Samples are available in Appendix G of the Code of Practice in TSV, JSON and Excel format.
Many of our community members have developed special tools for consortia to make it easier to gather COUNTER reports. They’re all listed on our Tools and Services page and you will also find useful information about them on the Consortia page. Since we didn’t build these tools ourselves, we aren’t the best people to give you detailed information about them.
No, the naming of consortia reports matches the naming of other COUNTER reports. For example, if a consortium manager requests a consortium-level summary of the Title Report, the report name is still Title Report.
Where consortia operate as one extra-large institution, the typical COUNTER Reports apply – just be aware that there won’t be a mechanism to break down usage for the members of the consortium.
We know that some consortia are set up in such a way that the consortium leader is an institution in its own right, similar to the way some groups like to have a parent organisation with multiple children (e.g. universities with multiple campuses). If users can gain access to content directly from the consortium leader, the leader would need a Customer_ID separate from those of the member institutions and usage would accrue to the consortium directly. That means the consortium administrator should be able to download a regular report showing just the usage for the consortium’s Customer_ID.
It depends on what you are tracking! Typically we hear that consortia want to report on book and journal usage: requesting a Title Report (TR) covering the dates you are interested in and taking the Unique_Item_Requests for Data_Types Book and Journal (plus Reference_Work in Release 5.1) would be the minimum required to meet that need.
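For example, a minimal pandas sketch of that filter (the file name is a placeholder, and the 13 skipped lines assume a Release 5 tabular header of 12 rows plus a blank line):

```python
import pandas as pd

tr = pd.read_csv("consortium_tr.tsv", sep="\t", skiprows=13)

books_journals = tr[
    (tr["Metric_Type"] == "Unique_Item_Requests")
    & (tr["Data_Type"].isin(["Book", "Journal"]))  # add "Reference_Work" for R5.1
]
print(books_journals.groupby("Data_Type")["Reporting_Period_Total"].sum())
```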
If the community would like COUNTER to develop recommendations for consortial or regional data collection, we’re happy to run that project in future: please get in touch with tasha@countermetrics.org.
There are three types of consortia reporting covered by Release 5 and Release 5.1, which you can read about on our education pages. Mandatory reports are the summary report (total consortium usage, not broken down by institution) and the separate institutional reports, which consortia managers can collect using harvesting tools. There are also detailed reports, which collate the institutional reports into a single file, but these are optional – not every report provider offers them.
Unless the content generated by your AI interface is going to be allocated a persistent identifier and become a static piece of content in your platform, you should not be counting any Investigations or Requests for that generated content.
Yes: if a user is typing in a prompt and the AI is using search functionality to find the relevant content in your database, you should count 1 Searches_Platform for each user prompt. If your interface does not bring up a list of relevant content, you should not count a Searches_Platform.
Generative AI is being used in a lot of different ways, so counting usage will be different depending on what the tool is doing.
Scenario one: Content is generated by AI tools based on the complete corpus, with a list of references displayed either in-line or beneath the generated text. This should be counted as a Search, with Investigations/Requests triggered only if the user clicks through to one of the references.
Scenario two: Content relating to one specific Item is generated by AI tools. Unless the content generated by your AI interface is going to be allocated a persistent identifier and become a static piece of content in your platform, you should not be counting any Investigations or Requests for that generated content. If the generated text is rendered as part of the regular content page (e.g. a journal abstract page which includes an AI-generated lay summary), then the usual Investigation should be counted for that item.
Scenario three: Completely new material is generated as a topic summary. As before, unless this content is going to be allocated a persistent identifier and become a static Item in your platform, you should not be counting any Investigations or Requests for the generated content.
We are a membership organisation created in 2003 by the knowledge community, including libraries, consortia, publishers, aggregators, and technology providers. We bring the community together to define, update and use the COUNTER Code of Practice, which is the global standard for measuring and reporting content usage. Compliance with the Code means that publishers, aggregators and technology providers can deliver credible, consistent, comparable usage metrics to libraries and consortia around the world. You can learn more on our About page.
Of course, everyone is welcome to sign up for the newsletter – it’s the best way to stay abreast of COUNTER news! There’s a quick sign-up link at the bottom of the Members page.
Being part of a COUNTER member organisation gives you access to a wide range of opportunities, including attending the COUNTER conference, but you will need to make sure that your contact details are registered on our membership CRM (GlueUp). You can do this by getting in touch with your membership administrator, or by contacting us.
We issue invoices through our accounting and membership software; the invoices include instructions on how to pay by credit card, using our secure Stripe payment server, or by bank transfer. Please note that we are unable to accept payment by cheque.
It’s easy to become a member: just visit the Members page, select the type of organisation that best matches you, and complete the short registration form. We’ll get in touch with you to finish the process.
Drop a line to our Executive Director, tasha@countermetrics.org, explaining your idea and why you think it enhances the Code of Practice. The Technical Advisory Group (TAG) will consider whether the idea can feasibly be implemented, and then make a recommendation to the Executive Committee explaining when and how that should be done.
Libraries affiliated with a COUNTER member consortium are entitled to many of the regular member benefits, including being able to attend the members-only COUNTER conference. However, only the consortium can vote in the AGM or be a member of the COUNTER Board.
Just like a regular organisational member, you will need to make sure that your contact details are registered on our membership CRM (GlueUp). You can do this by getting in touch with your consortium membership administrator, or by contacting us.
Global reports are the sum of all usage on your platform, whether that is non-attributed (i.e. not linked to an institution) or attributed. If your platform can attribute usage to multiple institutions simultaneously, you must only count the attributed usage once.
The COUNTER Code of Practice is a standard developed and maintained by the knowledge community, and as such it’s a trusted and trustworthy way of reporting usage so that it can be compared across publishing platforms and over time. For OA publishers, in particular, it offers a way to prove claims that open access is an effective way to increase global dissemination and use of scholarly content.
Yes, the Code of Practice includes options for you to break down global usage by country and by state (called Country_Subdomain in the Code). What we don’t recommend is any kind of ‘best guess’ mapping of IP addresses to a smaller area like a city, because it’s difficult to define the boundaries of a city and because many people use IP randomisation and obfuscation tools to hide their location.
Report providers can also choose to work with IP databases to match on-campus usage to specific institutions, and use this information to provide institutional COUNTER Reports for OA content.
We apply the principle that Access_Type (Open, Controlled, Free_To_Read) applies on the platform being used, as we simply don’t have capacity to start auditing licenses and so on. To take a common example, an open access book from Platform A is included in a subscriber-only database on Platform B. In this case, usage of the book should be reported with Access_Type=Open on Platform A, but Access_Type=Controlled on Platform B.
While not strictly a COUNTER issue, we always recommend checking the Directory of Open Access Books (https://doabooks.org/) to find out if something was published under an OA license.
Absolutely! That’s what the Global reports are for: they show all usage from around the world, whether it can be attributed to an institution or not. There’s a whole section on COUNTER for OA in our education pages if you want to find out more.
Yes, the Registry can be accessed without using the interface, from registry.projectcounter.org/api/v1. There are four endpoints:
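As a rough illustration of calling the API from a script – note that the endpoint path and response fields below are assumptions for the sake of example, so treat the Registry’s own API documentation as the authoritative list of endpoints:

```python
import requests

BASE = "https://registry.projectcounter.org/api/v1"

# Hypothetical example: list platforms and find one by name.
# The "/platform/" path and "name" field are assumptions here.
platforms = requests.get(f"{BASE}/platform/", timeout=30).json()
matches = [p for p in platforms if "example" in p.get("name", "").lower()]
print(matches)
```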
We know that there will be occasions where a report provider needs to restate their data, and that’s accounted for in Section 7.11 of the Code of Practice, which says that if you find an error, you have three months to correct it.
The organisation which is audited, listed under ‘Usage Data Hosts’ in the Registry, is responsible for alerting us to the problem. Publishers working with third-party usage data hosts are also welcome to contact us directly. Please email tasha@countermetrics.org as soon as the problem is spotted, so that she can add a notification to the Registry, and again when the correction is made.
It depends on what’s strange: typically, the first thing to do is check the Registry – if you can’t find the platform in the Registry, the publisher isn’t COUNTER-compliant and there’s unfortunately not much we can do beyond reminding them about the value of standardised metrics. If the platform is in the Registry, you’ll find a contact email for the publisher. Please send them a message outlining the problem you are having, copying in tasha@countermetrics.org.
We operate an open standard, meaning that any publisher may offer reports using our format, and many do so without formal compliance – this is why we introduced the Registry as a source of information about audited providers. If you are receiving something that looks like a COUNTER Report from a publisher who isn’t on the Registry, we suggest that you run their reports through the COUNTER Validation Tool (this will show you if there are major issues with the reports), and then get in touch with the publisher to ask them about their compliance status. Tasha adds report providers to the Registry whenever she receives proof of a successful audit.
We ask report providers to deliver either Excel or tab-separated-value (TSV) files, or both. Additional file formats that can be easily imported into spreadsheet programs without loss or corruption may be offered at the vendor’s discretion.
There is a full set of samples covering all four COUNTER Reports and their Standard Views in Appendix G of the Code of Practice. They’re available in TSV, JSON and Excel formats.
If a user has visited a publisher website and can be attributed to your institution (e.g. through IP recognition or single sign-on), the publisher will show that usage in your COUNTER Reports. Searches, denials and Investigation metrics are to be expected.
Requests may also show up. This typically means that the user is dual-attributed to both your institution and another institution which does hold a license for the content. We’re working on developing a policy to increase the transparency around this dual- or multi-attribution process.
Audiobooks are typically allocated an ISBN or other identifier to distinguish them from other formats. This means you should be reporting usage of audiobooks under the Data_Type Book, with the same rules around chapter-level usage and so on, and listing the audiobook ISBN in the appropriate field of the report.
We know that some publishers have elected to code audiobooks to Multimedia to help users distinguish audiobook usage from use of text-based book content. This is not recommended and publishers are asked to use Book.
In Excel, navigate to the ‘Data’ ribbon, then click the ‘From Text/CSV’ button. This will open a file navigator popup.
First make sure that you have selected ‘All files’ in the dropdown menu at the bottom of the popup, then choose your COUNTER report in .tsv format and click ‘Open’.
You will see a new popup with three control options at the top: leave the ‘File Origin’ and ‘Data Type Detection’ controls alone, and make sure that ‘Delimiter’ is set to Tab, then click ‘Load’.
That’s it, you have your TSV-formatted COUNTER Report in Excel, ready to be worked on.
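If you would rather script the import than click through Excel, a one-liner with pandas does the same job (the file name is a placeholder, and the 13 skipped lines assume a Release 5 tabular header of 12 rows plus a blank line – adjust if your file differs):

```python
import pandas as pd

report = pd.read_csv("counter_report.tsv", sep="\t", skiprows=13)
print(report.head())
```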
The four COUNTER Reports – the Platform, Database, Title, and Item Reports – are the way that COUNTER usage data is presented. The contents of the COUNTER Reports are defined by the Code of Practice and there’s a lot more information about them on our COUNTER Reports page.
This is the most common question we receive from librarians and other report consumers! Standard Views are pre-canned snippets of the four main COUNTER Reports and only include limited information. They were designed to bridge the gap from Release 4 to Release 5 by mirroring the old R4 format, and not to be comprehensive. If the metric you want isn’t in the Standard View, please take a look at the COUNTER Report instead (e.g. look at the TR instead of TR_B1). You can always filter COUNTER Reports to show only the information you’re interested in!
If you take a look at column K in the tabular version of the Title Report, you’ll see the heading Data_Type. As many publisher platforms include lots of different types of content, their Title Reports will necessarily include quite a few Data Types that aren’t journals or books, including conferences, magazines, etc. This means the Title Report usage metrics will often exceed the total of the book and journal Standard Views.
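To see exactly which Data_Types account for the difference, you can total a Title Report by Data_Type – a quick sketch, assuming a file loaded as in the pandas example above:

```python
import pandas as pd

tr = pd.read_csv("tr.tsv", sep="\t", skiprows=13)  # placeholder file name
by_type = (
    tr[tr["Metric_Type"] == "Unique_Item_Requests"]
    .groupby("Data_Type")["Reporting_Period_Total"]
    .sum()
    .sort_values(ascending=False)
)
# Rows other than Book and Journal explain why the TR totals
# exceed the book and journal Standard Views.
print(by_type)
```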
We know that it can be complicated to process raw usage data into COUNTER-compliant metrics, so the Code of Practice allows up to 28 days for report providers to make their monthly COUNTER Reports available. If you are setting up automated harvests, we recommend configuring your system to do this after the 28th.
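A small scheduling sketch of that advice, assuming a job that runs daily – the function is illustrative, not prescribed by the Code:

```python
from datetime import date, timedelta

def previous_month_range(today: date) -> tuple[str, str]:
    """begin_date and end_date for last month's report, which should be
    available once the 28th of the current month has passed."""
    end = today.replace(day=1) - timedelta(days=1)  # last day of previous month
    return end.replace(day=1).isoformat(), end.isoformat()

today = date.today()
if today.day > 28:
    begin_date, end_date = previous_month_range(today)
    print(f"Safe to harvest {begin_date} to {end_date}")
```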
It depends. For Aggregated_Full_Content databases – those databases which include content that is aggregated or collected together by title – you should expect to see both a Database Report and a Title Report.
To ensure that there is no double-counting, platforms that always associate the Data_Type with a Parent_Data_Type must only show the Parent_Data_Type in the Platform Report and PR_P1 Standard View. Only platforms that have the Data_Type without a Parent_Data_Type should show the Data_Type in the Platform Report. This applies to all Data_Type and Parent_Data_Type pairings.
The same applies to the Database Report and Title Reports and their associated Standard Views.
There are three types of COUNTER metric – the usage, search, and denial metrics. They are defined by the Code of Practice. Usage metrics include Investigations and Requests, and they describe user interactions with pieces of content. Search metrics are what you would expect: information about the number of searches on a given platform or database. Denial metrics, sometimes called turnaways, count the number of times users are refused access to pieces of content. There’s a lot more information about the metrics on our Education pages.
Any time a platform has to execute a new search, a search metric should be counted – for example, when a user runs a new query, or applies filters or facets to an existing result set.
Typically, users moving between pages on a paginated search results interface should not be treated as additional searches, with one exception as outlined in the search metrics FAQ about landing on a trending content page.
Some platforms are exclusively for subscribers or registered users. They can still track and report No_License denials for users who land on the platform, for example by using an IP database to determine the users’ likely institutional affiliation.
One of our search metrics, Searches_Platform, refers to search activity across a whole site and it appears only in the Platform Report and the PR_P1 Standard View.
The other three search metrics appear in the Database Report and the DR_D1 Standard View.
For simple platforms with only one database, Searches_Regular is the relevant COUNTER search metric.
For more complex platforms with multiple databases, the search metric depends on the user interface: if the user can choose which database they want to search, Searches_Regular applies. If the user cannot choose, and has to search every database on the platform, then Searches_Automated applies.
Lastly, Searches_Federated shows you the number of searches happening remotely – for example, if your library system is linked to the platform and can search it remotely.
No: while the search system might be generating the list of trending content, it’s not a search that has been executed by the user so it should not be counted. The same applies for things like ‘most read’ or ‘most cited’ lists. The only exception would be if a user specifically elected to explore more of the list – for example by clicking a ‘view more’ link, or moving to another page on a paginated search results interface. This is the sole exception to the pagination rule described in the search metrics FAQ on filtering and faceting.
The original SUSHI standard described a SOAP protocol for collecting usage reports. Feedback from our community was that the original protocol was too cumbersome, so Release 5 of the COUNTER Code of Practice was based on a technical report that we worked on with NISO in 2016/17, called SUSHI-lite, which described a way to harvest usage reports using a RESTful protocol. More recently it has become clear that the name ‘SUSHI’ was confusing for people who are new to COUNTER, so we are working towards relabelling it more clearly as the COUNTER API (formerly SUSHI) across all of our resources.
Yes, if you cannot differentiate between Books, Conferences and Reference_Works, please continue to use Book. We added these new Data_Types to help report providers who have distinct conference series or textbook programmes.
Using the more granular multimedia Data_Types isn’t mandatory: we introduced them to help those report providers who are already using DataCite resource types and want to show them in their usage reporting.
Users need a Customer_ID and Requestor_ID to call reports from the COUNTER API (formerly SUSHI). The Customer_ID is usually the report provider’s internal identifier for the institution, while the Requestor_ID is a system-generated identifier for the API session.
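Putting the two identifiers together, a harvest call might look like this – the base URL and credential values are placeholders, so take the real ones from the provider’s Registry entry:

```python
import requests

BASE_URL = "https://example-provider.org/counter/r5"  # placeholder; see the Registry

params = {
    "customer_id": "YOUR_CUSTOMER_ID",    # the provider's identifier for your institution
    "requestor_id": "YOUR_REQUESTOR_ID",  # identifies the harvesting system/session
    "begin_date": "2024-01-01",
    "end_date": "2024-03-31",
}
resp = requests.get(f"{BASE_URL}/reports/tr", params=params, timeout=60)
resp.raise_for_status()
# Reports come back as JSON over the COUNTER API.
print(resp.json()["Report_Header"]["Report_Name"])  # "Title Report"
```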
Items are individual pieces of content – book chapters, journal articles, videos, etc. Some content can be rolled up or aggregated into Titles like books or journals.
Report providers include publishers, aggregators, technology providers, syndicated platforms, and other groups who offer COUNTER Reports.
Report consumers include librarians, consortia, OA teams, technology providers, funders, publishers, and others who use COUNTER Reports within their work for any reason.
A chapter could be a Book_Segment, but we often see other terms like ‘essay’, or different divisions like ‘part’ or ‘section’. We therefore use the term Book_Segment to indicate the Items within a Title. Book_Segments need to have unique identifiers (such as a DOI) to be countable.
If we take the example of an edited volume with sixteen chapters, where each chapter has a DOI but the front matter (foreword, etc.) does not, then a download of the entire book should count as 16 Total_Item_Investigations, 16 Unique_Item_Investigations, 16 Total_Item_Requests, 16 Unique_Item_Requests, 1 Unique_Title_Investigation and 1 Unique_Title_Request.
Yes! Release 5.1 reports are broadly comparable to Release 5, provided some care is applied.
Report providers don’t need to reprocess data from before the cutover date to comply with Release 5.1: you just need to keep the old Release 5 reports available so that report consumers can still get hold of reports covering the current year plus the prior 24 months.
If you choose to go back and reprocess data to provide Release 5.1 reports from before the transition, please
There’s not much change from Release 5 to Release 5.1, but we have included a detailed description of the changes and how to map them in Appendix B of the Code of Practice. There is also a Friendly Guide specifically about what’s new in R5.1. Similarly, for mapping Release 4 to Release 5 the detail is in Appendix B, and the Release 5 Friendly Guide for Librarians has a summary table that might be useful for you.
No! Provided the platform URL isn’t changing and the content is not being given new identifiers, you don’t need to separate out usage for the old and new interfaces.
When you move between report providers, just like when you upgrade to the latest release of the Code of Practice, you need to ensure reports are available for the year-to-date plus the previous 24 months – so if you are transitioning to a new provider in April 2024, you’d need January, February and March 2024 reports, plus 2022 and 2023 reports, from your old provider. The responsibility for making sure those older reports remain available rests with the publisher (you), not the old report provider.
You can do that by keeping the old service running until the reports age out; moving the old reports to the new service; or keeping a data dump of the old reports available through your customer service tools, provided they are secure enough to ensure that customer data isn’t improperly shared.
If a user successfully accesses metadata about an item – a journal article abstract, for example – that counts as an Investigation. If they are then denied access to the full text, perhaps because your institution doesn’t subscribe to the journal, a No_License denial has to be counted for the same item.
Usage metrics in this scenario will reflect user experience:
• The user cannot see any PDF download options, but the interface offers up the HTML instead. The user clicks the HTML link to successfully access the item: 1 Total_Item_Request is counted.
• The user can see a PDF download option, but when they click it they are automatically redirected to the HTML that they are licensed to use: 1 Total_Item_Request is counted.
• The user can see a PDF download option, but when they click it they are not automatically redirected to HTML. As the user is being turned away from content they want to use, and not offered an alternative format for the same content, 1 No_License is counted.
A&I databases that create value-add content around the materials that they index – for example, lay summaries or expert commentaries – often count that value-add content as a Request. We don’t recommend using Requests to assess the usage of A&I databases.
Both Investigations and Requests measure usage of pieces of content (‘Items’). Every interaction with an item generates an Investigation, whether that is a user looking at a journal abstract or a video thumbnail, downloading a book chapter, or sharing the item from a link embedded in the page. When a user chooses to interact with the complete content item – downloading full text or hitting play on a video, for example – that is counted as both an Investigation and a Request.
That means you should expect to see a lot more Investigations than Requests in COUNTER Reports.
Report providers may count Investigations when users access home pages or tables of contents for two COUNTER Data_Types: Books and Reference_Works. No other home pages or tables of contents (e.g. journals) can be counted as Investigations or Requests.
As a rule of thumb, only usage of items with a unique identifier such as a DOI or ISBN should be counted.
Count an Investigation, but not a Request, when a user interacts with an item’s metadata rather than its full content – for example, viewing a journal abstract or a video thumbnail.
Do not count an Investigation or a Request when a user simply lands on a home page or table of contents, other than in the Book and Reference_Work cases described above.
In Release 5 we introduced the concept of ‘Unique’ metrics, which deduplicate Investigations and Requests by each user within a single user session. For example: if a user reads a book chapter on screen and then downloads the same chapter to read offline later, that would count as 2 Total_Item_Investigations and 2 Total_Item_Requests, but only 1 Unique_Item_Investigation and 1 Unique_Item_Request.
By contrast, if the user read a book chapter on screen, closed their browser window and re-visited the same chapter on screen later in the same day, that would count as 2 Total_Item_Investigations, 2 Total_Item_Requests, 2 Unique_Item_Investigations and 2 Unique_Item_Requests – that is, there’s no deduplication because the activity happens in two sessions.
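A minimal sketch of that session-level deduplication, using invented events:

```python
# Invented sample events: (session_id, item_id)
events = [
    ("s1", "chapter-3"),  # read on screen
    ("s1", "chapter-3"),  # downloaded in the same session
    ("s2", "chapter-3"),  # re-visited later in a new session
]

total_item = len(events)        # 3 Total_Item_Investigations: every interaction counts
unique_item = len(set(events))  # 2 Unique_Item_Investigations: deduplicated per session
print(total_item, unique_item)
```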
The Request is counted as soon as the user hits play: as with downloading a book chapter, if the content is available and the user has clicked to access it, the action counts as a Request.
How you count usage will depend on whether you can uniquely identify each page of the magazine. If each page has a unique identifier, it can be considered an item with Data_Type News_Item. That would mean that if a user looks at four scanned pages, you would count 1 Total_Item_Request for each page.
On the other hand, if you only have a unique identifier for the issue of the magazine then the whole issue would be considered an item with Data_Type Newspaper_or_Newsletter, and the same user scrolling through the same four scanned pages would only count as 1 Total_Item_Request.
The Unique_Title_Investigation and Unique_Title_Request metrics only apply to Books and Reference_Works (e.g. encyclopaedias). They only increase by 1 no matter how many chapters of the work are accessed in a given user session.
For example: if a user downloads five book chapters in a single session, that would count as 1 Unique_Title_Investigation and 1 Unique_Title_Request. By contrast, if the user downloaded five chapters, closed their browser window and downloaded two more chapters from the same book later in the same day, that would count as 2 Unique_Title_Investigations and 2 Unique_Title_Requests – there’s no deduplication because the activity happens in two sessions. There are Item metrics too, but we’ve not reported them here.
It can be challenging to compare use of an abstracting and indexing service (A&I_Database) with a database that contains full content items. We suggest using search metrics to identify whether your users are exploring databases, and to compare levels of interest across all the databases to which you have access. You can also use Unique_Item_Investigations to compare the number of interactions with content in the databases.
It depends on what you’re trying to measure!
We typically recommend Unique_Item_Requests for calculating cost per use, except for abstracting and indexing services (which appear in the Registry with Host_Type A&I_Database), where Unique_Item_Investigations is more appropriate. Some people prefer to use Unique_Title_Requests for calculating cost per use for books, which is absolutely valid provided the same metric is used consistently for all books across different publisher platforms and over time.
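As a trivial worked example (the numbers are invented):

```python
def cost_per_use(annual_cost: float, unique_item_requests: int) -> float:
    """Cost per use with the recommended metric; swap in
    Unique_Item_Investigations for A&I databases, or
    Unique_Title_Requests for books if you apply it consistently."""
    return annual_cost / unique_item_requests

print(cost_per_use(5000.00, 1250))  # 4.0 per use
```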
Equally, search metrics are great for identifying whether your users are exploring databases and denials can be a helpful indication of where there may be gaps in your collection.