The Open Citation Project - Reference Linking for Open Archives

Evaluating Citebase, an open access Web-based citation-ranked search and impact discovery service

Steve Hitchcock, Arouna Woukeu, Tim Brody, Les Carr, Wendy Hall and Stevan Harnad

IAM Group, Department of Electronics and Computer Science, University of Southampton, SO17 1BJ, United Kingdom
Contact for correspondence: Steve Hitchcock sh94r@ecs.soton.ac.uk

Version history of this report
This version 1.0, official report to JISC, released to selected users and evaluators December 2002
Version 2.0, edited for publication as a Technical Report, July 2003
Version 3.0, focus on usability results, edited for journal publication, draft, July 2003

This report was produced by the Open Citation Project, which was funded between 1999 and 2002 by the Joint NSF - JISC International Digital Libraries Research Programme.
 

"(Citebase) is a potentially critical component of scholarly information architecture"
Paul Ginsparg, founder of arXiv

"I believe that ResearchIndex and Citebase are outstanding examples (of compellingly useful tools). These tools still have to be perfected to a point where their use is essential in any research activity. They will have to become clearly more pleasant, more informative and more effective than a visit to the library or the use of one's own knowledge of the literature. Much, much more! And I, for one, believe that they are coming quite near to this. But relatively few people realized this until now, even in these more technology prone fields of study."
Professor Imre Simon, September98-Forum, 24th November 2002
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2399.html

Abstract

Google, the most popular search engine among Web users, has built its success on a tool borrowed from the scholarly community: citation analysis. Google treats links as citations, and ranks search results on the basis of the number of links to Web pages. Citebase is a new citation-ranked search and impact discovery service that returns Web-based citation analysis to its roots by measuring citations of scholarly research papers. Citebase can be used to rank papers by impact, and like Google it does this for pages that are openly available on the Web, that is, papers that are freely accessible and assessable continuously online by anyone who is interested, any time. Other services, such as ResearchIndex, have emerged to offer citation indexing of Web research papers. In the first detailed investigation of the impact of an open access Web citation indexing service with users, Citebase has been evaluated by nearly 200 users from different backgrounds. This report analyses and discusses the results of this study, which took place between June and October 2002. It was found that within the scope of its primary components, the search interface and services available from its rich bibliographic records, Citebase can be used simply and reliably for the purpose intended, and that it compares favourably with other bibliographic services. Coverage is seen as a limiting factor, and better explanations and guidance are required for first-time users.

Summary

Since 1999, the Open Citation (OpCit) Project has been developing tools and services for reference linking and citation analysis of scholarly research papers in large open access archives (Hitchcock et al. 2002). Most of the data collected and many of the services provided by OpCit have converged within a single interface, Citebase, a citation-ranked search and impact discovery service. Citebase offers both a human user interface (http://citebase.eprints.org/), and an Open Archives (OAI)-based machine interface for further harvesting by other OAI services. The OpCit project completes its period of funding from the Joint NSF - JISC International Digital Libraries Research Programme at the end of 2002. As the project will be outlived by Citebase, it is appropriate to evaluate the project by means of the user response to this interface.

Other deliverables of the project, such as an application programming interface for reference linking, have been evaluated separately (Bergmark and Lagoze 2001). Another deliverable, EPrints.org software for creating open access Web-based archives (Gutteridge 2002), receives feedback from its already extensive list of registered implementers, which informs continuing development of new versions of the software (version 2.2 was announced in October 2002).

The OpCit evaluation of Citebase took the form of a two-part Web-based questionnaire, designed to test Citebase in two ways: through a practical exercise, and by inviting users' views on the service.

Background information was also sought on how this new service might fit in with existing user practices. In this way the evaluation aimed to combine objectivity with subjectivity, overcoming some of the limitations of purely subjective tests.

The evaluation was performed over four months from June to October 2002 by members of the Open Citation Project team at Southampton University. Observed tests of local users were followed by scheduled announcements to selected discussion lists for JISC and NSF DLI developers, OAI developers, open access advocates and international librarian groups. Finally, following consultation with our project partners at arXiv Cornell, arXiv users were directed to the evaluation by means of links placed in abstract pages for all but the latest papers deposited in arXiv.

A by-product of the exercise was raising awareness of Citebase among target user and prospective user groups, especially among arXiv physicists to whom the service had not previously been announced. By monitoring Web usage logs for Citebase it is shown that usage increased from around 25-45 visitors per day before the evaluation began to 660 daily visitors at its peak during the evaluation. Further, staging announcements to lists enabled response levels to be tracked for different user groups.

The principal finding is that Citebase fulfills the objective of providing a usable and useful citation-ranked search service. It is shown that tasks can be accomplished efficiently with Citebase regardless of the background of the user. More data need to be collected and the process refined before it is as reliable for measuring impact.

The principle of citation searching of open access archives has thus been demonstrated and need not be restricted to current users.

Its deceptively simple search interface, however, masks a complexity that is characteristic of such services and which requires better explanations and guidance for first-time users.

Coverage is seen as another limiting factor. Although Citebase indexes over 200,000 papers from arXiv, non-physicists were frustrated at the lack of papers from other sciences. This is a misunderstanding of the nature of open access services, which depend on prior self-archiving by authors. In other words, rather than Citebase it is users, many of whom will also be authors, who have it within their power to increase the scope of Citebase by making their papers available freely from OAI-compliant open access archives. Citebase will index more papers and more subjects as more archives are launched. The evaluation thus highlights findings that need to be addressed in other forums, such as OAI and the Budapest Open Access Initiative. One of the announced objectives of the project was that Citebase, by highlighting the unrealized potential of current levels of coverage, would stimulate further self-archiving by authors.

1 Introduction

Citation analysis and impact ranking are classical tools that are used not just by researchers but by policy makers who shape research. Developed by Garfield since the 1950s, citation indexing became the foundation for a series of products from ISI, most notably the "first multidisciplinary citation index to the scientific literature", the Science Citation Index (SCI). Merton (1979) described how citation indexing systematically identifies "links between the work of scientists that could be put to use both for searching the literature and for exploring cognitive and social relationships in science". With the development of Web of Science and more recently Web of Knowledge, ISI has migrated the SCI online, surely its natural medium with the facility for representing those "cognitive links" as simple hypertext links between citing and cited items.

It has been noted that while Garfield’s basic intentions were "essentially bibliographic", he has conceded that "no one could have anticipated all the uses that have emerged from the development of the SCI" (Guedon 2001). One of these uses is co-citation analysis (Small 1973), which makes possible the identification of emerging trends, or 'research fronts', which today can be visualised using powerful computational techniques (Chen and Carr 1999).

Another use, however, was to divert the SCI into a new business as a career management tool. As a result, Guedon claims that in "introducing elitist components into the scientific quest for excellence, SCI partially subverted the meaning of the science game".

New Web-based citation indexing services, such as ResearchIndex (also known as CiteSeer; Lawrence et al. 1999) and Citebase from the Open Citation (OpCit) Project, are founded on the same basic principles elaborated by Garfield (1994). Unlike Web of Knowledge, which indexes core journal titles, these new services index full-text papers that can be accessed freely by users on the Web, and the indexing services are also currently free. While it is possible that open access indexing services founded on open access texts could re-democratise the role of citation indexing, there is no doubt these services will offer qualitatively different services from those provided by ISI: "Newer and richer measures of 'impact' ... will be not only accessible but assessable continuously online by anyone who is interested, any time" (Harnad 2001). According to Lawrence (2001), open access increases impact.

According to Suber (2002) the "greatest benefit" of open access content services that are free to users will be "to provide free online data to increasingly sophisticated software which will help scholars find what is relevant to their research, what is worthy, and what is new". Citebase is an example of exactly that.

Despite the apparent advantage of open access, critical questions still have to be asked of these new services: are they useful and usable for the purposes of resource discovery and measuring impact? This report seeks to answer these questions based on an evaluation of Citebase, a citation-ranked search service. In the course of the investigation, some pointers to the resolution of these wider issues are also revealed.

2 Background to the evaluation

2.1 About Citebase

Citebase, described by Hitchcock et al. (2002), allows users to find published research papers stored in the larger open access, OAI disciplinary archives - currently arXiv (http://arxiv.org/), CogPrints (http://cogprints.soton.ac.uk/) and BioMed Central (http://www.biomedcentral.com/). Citebase harvests OAI metadata records for papers in these archives, additionally extracting the references from each paper. The association between document records and references is the basis for a classical citation database. Citebase is sometimes referred to as “Google for the refereed literature”, because it ranks search results based on references to papers.
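
To make the harvesting step concrete, the sketch below performs a minimal OAI-PMH ListRecords harvest of Dublin Core metadata, the kind of request a service such as Citebase issues before extracting references from the full texts. It is a sketch only: the base URL is a placeholder rather than the endpoint of any of the archives named above, and error handling and incremental (date-stamped) harvesting are omitted.

    # A minimal OAI-PMH harvest of Dublin Core records (illustrative sketch).
    # BASE_URL is a placeholder, not the endpoint of arXiv, CogPrints or BioMed Central.
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "https://archive.example.org/oai"   # hypothetical OAI-compliant archive
    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    def harvest(metadata_prefix="oai_dc"):
        """Yield (OAI identifier, title) pairs for every record in the archive."""
        url = f"{BASE_URL}?verb=ListRecords&metadataPrefix={metadata_prefix}"
        while url:
            tree = ET.parse(urllib.request.urlopen(url))
            for record in tree.iter(OAI + "record"):
                identifier = record.findtext(".//" + OAI + "identifier")
                title = record.findtext(".//" + DC + "title")
                yield identifier, title
            # Large archives return records in batches linked by a resumption token
            token = tree.findtext(".//" + OAI + "resumptionToken")
            url = f"{BASE_URL}?verb=ListRecords&resumptionToken={token}" if token else None

    for oai_id, title in harvest():
        print(oai_id, title)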

Just prior to the evaluation Citebase had records for 230,000 papers, indexing 5.6 million references. By discipline, approximately 200,000 of these papers are classified within arXiv physics archives. Thus, overwhelmingly, the current target user group for Citebase is physicists. The impact being made by OAI (Van de Sompel and Lagoze 2002) should help extend coverage significantly to other disciplines (Young 2002), through the emphasis of OAI on promoting institutional archives (Crow 2002). It is clear that a strong motivation for authors to deposit papers in institutional archives is the likelihood of subsequent inclusion in powerful resource discovery services which also have the ability to measure impact. For this reason there is a need to target this evaluation at prospective users, not just current users, so that Citebase can be designed for an expanding user base. Citebase needs to be tested and optimised for users, and nurtured by the communities that stand to benefit in the longer term.

The primary Citebase Web user interface (Figure 2.1) shows how the user can classify the search query terms (typical of an advanced search interface) based on metadata in the harvested record (title, author, publication, date). In separate interfaces, users can search by archive identifier or by citation. What differentiates Citebase is that it also allows users to select the criterion for ranking results by Citebase processed data (citation impact, author impact) or based on terms in the records identified by the search, e.g. date (see drop-down list in Figure 2.1). It is also possible to rank results by the number of 'hits', a measure of the number of downloads and therefore a rough measure of the usage of a paper. This is an experimental feature to analyse both the quantitative and the temporal relationship between hit (i.e. usage) and citation data, as measures as well as predictors of impact. Hits are currently based on limited data from download frequencies at the UK arXiv mirror at Southampton only.
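
As an illustration of what the ranking drop-down does, the sketch below sorts a small result set by a user-selected criterion. The field names and values are assumptions made for the example and do not reflect Citebase's internal schema.

    # Illustrative only: ranking results by a selectable criterion, in the spirit
    # of the Citebase drop-down (citation impact, author impact, hits, date).
    from datetime import date

    results = [
        {"title": "Paper A", "citations": 1576, "author_impact": 80.2,
         "hits": 3400, "date": date(1997, 11, 27)},
        {"title": "Paper B", "citations": 412, "author_impact": 95.0,
         "hits": 5100, "date": date(2001, 6, 3)},
    ]

    RANK_KEYS = {
        "citation impact": lambda r: r["citations"],
        "author impact": lambda r: r["author_impact"],
        "hits": lambda r: r["hits"],          # download counts (experimental measure)
        "date": lambda r: r["date"],
    }

    def rank(records, criterion):
        """Return records sorted in descending order of the chosen criterion."""
        return sorted(records, key=RANK_KEYS[criterion], reverse=True)

    for r in rank(results, "citation impact"):
        print(r["title"], r["citations"])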


Figure 2.1. Citebase search interface showing user-selectable criteria for ranking results (with results appended for the search terms shown)

The results shown in Figure 2.1 are ranked by citation impact: Maldacena's paper, the most-cited paper on string theory in arXiv at the time (September 2002), has been cited by 1576 other papers in arXiv. (This is the method and result for Q2.3 in the evaluation exercise described below.)

The combination of data from an OAI record for a selected paper with the references from and citations to that paper is also the basis of the Citebase record for the paper. A record can be opened from a results list by clicking on the title of the paper or on 'Abstract' (see Figure 2.1). The record will contain bibliographic metadata and an abstract for the paper, from the OAI record. This is supplemented with four characteristic services from Citebase:

'Hits' are a new and contentious measure, especially when based on limited data. Recent studies offer support for the use of reader data by digital libraries to complement more established measures of citation frequency, which reflect author preferences (Darmoni et al. 2002). At the Los Alamos National Laboratory Research Library, Bollen and Luce (2002) defined a measure of the consultation frequency of documents and journals, and found that ranking journals using this method differed strongly from a ranking based on the traditional impact factor and, in addition, corresponded strongly to the general mission and research interests of their user community.

Another option presented to users from a results list is to open a PDF version of the paper (see Figure 2.1). This option is also available from the record page for the paper. This version of the paper is enhanced with linked references to other papers identified to be within arXiv, and is produced by OpCit. Since the project began, arXiv has been producing reference-linked versions of papers. Although the methods used for linking are similar, they are not identical and OpCit versions may differ from versions of the paper available from arXiv. An important finding of the evaluation will be whether reference linking of full-text papers should be continued outside arXiv. An earlier, smaller-scale evaluation, based on a previous OpCit interface (Hitchcock et al. 2000), found that arXiv papers are the most appropriate place for reference links because users overwhelmingly use arXiv for accessing full texts of papers, and references contained within papers are used to discover new works. It was also reported that all aspects of the interface evaluated at that time needed more user-focussed development (see http://opcit.eprints.org/evaluation/v10/v10evaluation.html).

2.2 Scope of the evaluation

The evaluation focused in particular on Citebase's primary components: the search interface and the services available from its bibliographic records (section 2.1). Given the wide prospective user base, fundamentally what is to be evaluated is not just the current implementation of Citebase, but the principle of citation-based navigation and ranking.

2.3 Purpose of the evaluation

The evaluation sought to:
  1. evaluate the usability of Citebase (can it be used simply and reliably for the purpose intended)
  2. assess the usefulness of Citebase (how does it compare and fit in with other services)
  3. measure user satisfaction with Citebase
  4. raise awareness of Citebase
  5. inform ongoing development of Citebase

3 Description of the evaluation

3.1 Participants in the evaluation

The evaluation was managed by the OpCit project team in the IAM Group at Southampton University, the same team that reported on the evaluation of the forerunner eLib-funded Open Journal Project (Hitchcock et al. 1998). The arXiv Cornell partners in the project supported design and dissemination.

3.2 Methods

The evaluation used two methods to collect data: a two-part Web-based questionnaire and the Web usage logs for Citebase. The questionnaire was first tested by observation using local users at Southampton University. The wider community was made aware of the evaluation by means of staged announcements to selected discussion lists, and by links from the project and partner Web sites, notably from the abstract pages of all but the latest papers in arXiv, and from other Web pages serving physicists, PhysNet - "the worldwide Network of Physics Departments" (http://physics-network.org/PhysNet/), and the CERN library (http://library.cern.ch/).

After removing blanks, duplicates and test submissions, a total of 195 valid submissions of Form 1 were received. Of these users, 133 also completed Form 2, which was linked from the submit button of Form 1.

3.3 Timescale

The evaluation was open from June 2002, when the first observational tests took place, to the end of October 2002 when a closure notice was placed on the forms and the submit buttons on both forms were disabled. Links from arXiv became active on 20th August.

There is scope for a follow-up exercise, using the email addresses supplied by over 100 interested users, to test the effectiveness of changes to Citebase following the evaluation.

3.4 Discussion of the methods

Effective evaluation techniques include working intensively with a small group of people and applying the methodology of usability testing, where users are assigned a set of specific tasks to complete. The initial observed tests using the practical exercise in Form 1 satisfied these criteria and provided preliminary feedback.

As already indicated, Citebase is aimed at a much wider user group, both now and in the future, and the evaluation had to be extended to a representative section of those users. Open invitation is one way of achieving this. There are drawbacks to inviting evaluation based on a Web-only questionnaire, most obviously the lack of direct contact with users, and the consequent loss of motivation and information. Balancing this are simplicity, easy accessibility and continuous availability. Web surveys have widened the use and reduced the costs of survey techniques, but they introduce new complexities (Gunn 2002). Efforts were made to ensure the forms were usable, based on the observed tests, and that Citebase offered a reliable service during the period of the evaluation. Availability of forms and service was monitored and maintained as far as was possible during the period of evaluation (see section 7.3).

A perennial problem with forms-based evaluation, whether users are remote or not, is that badly designed forms can become the object of the evaluation. In tests of this type, where most users are experiencing a service for the first time, observation suggests that users might have understood the service more intuitively had they simply explored it as a search service, rather than being introduced to it via step-by-step questions. This raises the question of whether the service to be evaluated, Citebase, should have been promoted more extensively prior to the evaluation. This would have increased familiarity, but it was felt this would make it more difficult to attract users to the evaluation unless those users were being brought to Citebase via the evaluation.

In contrast to Web forms, usage logs are an impeccable record of what people actually do, although there are problems of interpretation, and there are no standards for the assessment of Web logs.
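
For illustration, the sketch below derives a count of distinct visiting hosts per day from an Apache-style access log, excluding local traffic, roughly as the published Citebase usage charts do. It is a minimal sketch under assumed log conventions (combined log format, resolved hostnames), not the project's actual log analysis.

    # Count distinct visiting hosts per day from an Apache-style access log.
    import re
    from collections import defaultdict

    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[(\d{2})/(\w{3})/(\d{4})')  # host and date
    EXCLUDE = ("soton.ac.uk", "cs.odu.edu")   # hosts excluded from the published charts
    MONTHS = {m: f"{i:02d}" for i, m in enumerate(
        "Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split(), start=1)}

    def daily_unique_visitors(path):
        """Map ISO-style dates to the number of distinct client hosts seen that day."""
        visitors = defaultdict(set)
        with open(path) as log:
            for line in log:
                m = LOG_LINE.match(line)
                if not m:
                    continue
                host, day, month, year = m.groups()
                if host.endswith(EXCLUDE):    # crude filter, assumes resolved hostnames
                    continue
                visitors[f"{year}-{MONTHS[month]}-{day}"].add(host)
        return {d: len(hosts) for d, hosts in sorted(visitors.items())}

    for d, count in daily_unique_visitors("access.log").items():
        print(d, count)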

The response to the evaluation from arXiv physicists, the primary target user group, was a little below expectations, although replies from other users were higher than expected. Our earlier survey received nearly 400 replies from arXiv users (Hunt 2001). Similarly, usage of Citebase among the same group might have been expected to be higher.

It is likely the lower number of respondents to the evaluation was due to the method of linking from arXiv to the evaluation. For the earlier survey, arXiv linked directly from a notice on its home page to the Web form. In this case abstract pages for papers in arXiv linked to the corresponding Citebase records. To get users to the evaluation form required that a linked notice be inserted temporarily in the Citebase records (Figure 3.1).


Figure 3.1 Adding a temporary notice to Citebase records to attract arXiv users to participate in this survey

As a means of bringing arXiv users to Citebase on an ongoing basis, this is an ideal, task-coupled arrangement. From the perspective of the evaluation, however, users were expected to follow two links to reach the evaluation, and were thus required to take two steps away from their original task. Since there was no direct link to the evaluation from the arXiv home page, and therefore no prior advocacy for, or expectation of, Citebase or the evaluation, perhaps it should not be surprising that the response did not match our earlier survey.

Usage of Citebase would have been affected for the same reason; also by a prominent notice:

Citebase (trial service, includes impact analysis)

placed alongside the new links to Citebase in arXiv (Figure 1 in Hitchcock et al. 2002).

It is planned that the Citebase developer (Tim Brody) will work with our arXiv partners to refine Citebase in response to the evaluation with a view to removing the warning notices and increasing advocacy for the service. It will be instructive to see how usage levels change as a result of such changes.

4 Preparing Citebase for evaluation

Citebase is a large and dynamic database, the complexities of which must be hidden from the user while allowing the underlying power to be exploited. One way in which this complexity might become apparent to the user is the speed with which results are processed and presented. During pre-tests of the practical exercise it was noted that it took Citebase over one minute to process the query about the most-cited paper in arXiv on string theory (Q2.3). Observed tests showed how seriously this would affect results, and reactions, with users moving on to later questions before completing current tasks.

This was resolved prior to open Web evaluation. According to Tim Brody, Citebase developer: "A few database tweaks later (!) and it's now ~0.5s to return the co-cited papers. It is slightly different behaviour now: whereas before it ranked (in descending order) co-citedness, paper impact, author impact, it now only ranks by co-citedness (avoids a table join)."
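
The co-citedness ranking referred to here can be expressed as a simple aggregation over the citation data: two papers are co-cited whenever a third paper cites both of them. The sketch below illustrates the calculation on a handful of invented (citing, cited) pairs; it is not Citebase's implementation, which performs the equivalent query over millions of references in a relational database.

    # Co-citation counting: papers cited together with a target paper.
    # The (citing, cited) pairs are invented for illustration.
    from collections import Counter

    citations = [
        ("paperX", "hep-th/9711200"), ("paperX", "hep-th/9802150"),
        ("paperY", "hep-th/9711200"), ("paperY", "hep-th/9802109"),
        ("paperZ", "hep-th/9711200"), ("paperZ", "hep-th/9802150"),
    ]

    def co_cited_with(target, citation_pairs, top=5):
        """Rank papers by how often they are cited alongside `target`."""
        cited_by = {}                         # citing paper -> set of papers it cites
        for citing, cited in citation_pairs:
            cited_by.setdefault(citing, set()).add(cited)
        counts = Counter()
        for cited_set in cited_by.values():
            if target in cited_set:
                counts.update(cited_set - {target})
        return counts.most_common(top)

    print(co_cited_with("hep-th/9711200", citations))
    # [('hep-th/9802150', 2), ('hep-th/9802109', 1)]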

The other area of concern for Citebase was the descriptions, support and help pages, a vital part of any new and complex service. There was some reorganisation of this material and new pages were added. This is an ongoing process and will continue to be informed by users.

Terminology was another aspect raised leading up to the evaluation. Terms used in the evaluation form such as "most cited" can be interpreted as the largest number of citations for an author or the largest average number of citations per paper for the same author. On the form this was revised (Q2.1). More generally, efforts were made to make terminology in Citebase comparable with ISI.

If bibliographic tools have been subverted, whether by design or not, to serve as career management tools, there is no hiding from the fact that new, experimental services will produce contentious results. This was a particularly acute concern during the preparation of Citebase for testing. A warning notice was added prominently to the main search page:
 

Citebase is currently only an experimental demonstration. Users are cautioned not to use it for academic evaluation yet. Citation coverage and analysis is incomplete and hit coverage and analysis is both incomplete and noisy.

Citebase was incomplete during the evaluation because new arXiv papers and their references were not harvested once the evaluation began in June. It was decided the data should be static during the evaluation, to ensure all users were evaluating the same object (some minor changes were made during the evaluation period, and these are highlighted in section 4.1). In arXiv, papers with numbers before 0206001 (June) had a link to Citebase, but not those deposited after.

Also, not all references could be extracted from all papers, which clearly would affect impact results. Techniques and software for automated reference extraction have been discussed by Bergmark (2000). Since the evaluation closed Citebase data have been brought up-to-date, and the reference parsing algorithm has been refined to improve extraction rates.
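
One step in automated reference extraction is recognising arXiv identifiers (for example hep-th/9711200) within reference strings, so that extracted references can be matched against harvested records. The regular expression and sample reference below are illustrative only; the parser used by Citebase is considerably more elaborate.

    # Spotting old-style arXiv identifiers in a reference string (illustrative).
    import re

    ARXIV_ID = re.compile(r'\b([a-z-]+(?:\.[A-Z]{2})?/\d{7})\b')

    reference = ('[1] J. Maldacena, "The large N limit of superconformal field '
                 'theories and supergravity", hep-th/9711200.')

    print(ARXIV_ID.findall(reference))   # ['hep-th/9711200']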

Warnings were also strengthened, after much discussion, around the hit data graphs displayed in Citebase records (Figure 4.1), in response to reservations about this feature expressed by arXiv Cornell colleagues.

This feature needs to be examined carefully in the light of the results of the evaluation. However, the potential for usage measures in formal research assessment is undeniable (Harnad 2002). Citebase could serve to enrich the accuracy, equity and diversity of scientometric assessment of research productivity impact.


Figure 4.1. Citation/Hit History graph in a Citebase record, with prominent Caution! notice

4.1 Updating Citebase during the evaluation

In principle there should be no changes to the object being investigated during an evaluation. In practice, for a live, developing service such as Citebase there is always pressure to make changes and update it, especially over a four month evaluation period as in this case. Simply harvesting the daily submissions to arXiv materially changes Citebase, so this had to stop for the duration, with consequences that were noticed by some users. Some updates were essential, however. Table 4.1 highlights changes to Citebase that may have had some effect on the results of the evaluation.
 
Table 4.1. Citebase updates (moved to the live version on 29th August), and their possible effects on the evaluation (Form 1)
  • On the Citebase search results page (Figure 2.1), explicit 'Abstract/PDF' links were added to records, because some users did not understand that clicking a title opens the abstract (possible effect on Q2.3)
  • New layout for internal links within Citebase record pages (possible effect on Q2.4)
  • The 'Linked PDF' label on Citebase record pages was replaced by a green 'PDF' graphic (possible effect on Q2.6a, full-text download)
  • Hits/citations graphs placed on a different scale, and a hits warning added (possible effect on Q2.4 onwards and Q3.1)
  • Other warnings added on 13th September: no updates during the study period; incomplete coverage of arXiv; incomplete success of the reference detection algorithm

5 Design of the evaluation forms

Users were presented with two evaluation forms to complete. Form 2 involved a simple measure of user satisfaction with the object being evaluated, Citebase, and was reached by responding to Form 1, which has four sections: user context, a practical exercise, views on Citebase, and follow-up and submission. The following sections should be read with reference to Form 1 (http://www.ecs.soton.ac.uk/~aw01r/citebase/evalForm1.htm).

5.1 User context

Based on the broad coverage of Citebase and the scope of the planned announcements of the evaluation, it was anticipated that users would include not just physicists but also other users such as mathematicians, computer scientists and information scientists (Q1.1). Evaluators could specify other interests as necessary. Since the majority of evaluators, but not all, will also be users of arXiv, it was important to learn how this breaks down and to identify how this might quantitatively affect the results (Q1.2). It will also help, for arXiv users, to know how they currently discover new papers (Q1.3), as this will suggest possible routes into new discovery services such as Citebase.

Similarly, since Citebase will extend coverage to new OAI archives, it is helpful to know the level of awareness of OAI among evaluators (Q1.4), and whether they use other OAI services (Q1.5).

As with all other sections on the evaluation forms, this section ends by inviting open comments from evaluators, which can be used to comment on any aspect of the evaluation up to this point.

5.2 Practical exercise

This is the critical phase of the evaluation, inviting evaluators to try key features of Citebase, identified in section 2, based on a set practical exercise. The subject chosen for the exercise, string theory, is of relevance to many physicists who use arXiv, but no prior knowledge of the subject was required to complete the exercise.

At this point users were prompted to open a new Web browser window to view the main Citebase search interface. It was suggested this link could have been placed earlier and more prominently, but this was resisted as it would have distracted from the first section.

Questions 2.1-2.3 involved performing the same task and simply selecting a different ranking criterion in the search interface (Figure 2.1). Selectable ranking criteria are not a feature offered by popular Web search engines, even in advanced search pages, which the main Citebase search page otherwise resembles. The user's response to the first question is therefore important in determining the method to be used, and Q2.1 might be expected to score lowest as familiarity increases for Q2.2 and Q2.3. Where Q2.1 proved initially tricky, however, observed tests revealed that users would return to Q2.1 and correct their answer. We have no way of knowing to what extent this happened in unobserved submissions, but allowance should be made for this when interpreting the results.

The next critical point occurs in Q2.4, when users are effectively asked for the first time to look below the search input form to the results listing for the most-cited paper on string theory in arXiv (Q2.3). To find the most highly cited paper that cites this paper, notwithstanding the apparent tautology of the question, users have to recognise they have to open the Citebase record for the most-cited paper by clicking on its title or on the Abstract link. Within this record the user then has to identify the section 'Top 5 Articles Citing this Article'. To find the paper most often co-cited with the current paper (Q2.5) the user has to scroll down the page, or use the link, to find the section 'Top 5 Articles Co-cited with this Article'.

Now it gets slightly harder. The evaluator is asked to download a copy of the full text of the current paper (Q2.6a). What the task seeks to determine is the user's preference for selecting either the arXiv version of the paper or the OpCit linked PDF version. Both are available from the Citebase record. A typical linked PDF was illustrated by Hitchcock et al. (2000). Originally the Citebase record offered a 'linked PDF', but during the evaluation the developer changed this to a PDF graphic (Table 4.1). The significance of omitting 'linked' is that this was the feature differentiating the OpCit version. Given that it is known physicists tend to download papers in Postscript format rather than PDF (http://opcit.eprints.org/ijh198/3.html), it is likely that a simple PDF link would have little to recommend it against the link to the arXiv version.

As a check on which version users had downloaded, they were asked to find a reference (Q2.6b) contained within the full text (and which at the time of the evaluation was not available in the Citebase record, although it appeared in the record subsequently). To complete the task users had to give the title of the referenced paper, but this is not as simple as it might be because the style of physics papers is not to give titles in references. To find the title, the user would need to access a record of the referenced paper. Had they downloaded the linked version or not? If so, the answer was one click away. If not, the task was more complicated. As final confirmation of which version users had chosen, and how they had responded subsequently, users were asked if they had resorted to search to find the title of the referenced paper. In fact, a search using Citebase or arXiv would not have yielded the title easily.

In this practical exercise users were asked to demonstrate completion of each task by identifying an item of information from the resulting page, variously the author, title or URL of a paper. Responses to these questions were automatically classified as true, false or no response. Users could cut-and-paste this information, but to ensure false responses were not triggered by mis-keying or entering an incomplete answer, a fuzzy text matching procedure was used in the forms processor.
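
A minimal sketch of this kind of tolerant answer checking is given below. It is not the forms processor used in the study; the normalisation steps and similarity threshold are assumptions.

    # Fuzzy matching of a free-text answer against the expected answer.
    from difflib import SequenceMatcher

    def matches(expected, answer, threshold=0.8):
        """True if the answer is close enough to the expected string."""
        expected = " ".join(expected.lower().split())
        answer = " ".join(answer.lower().split())
        if not answer:
            return False                     # blank answers count as "no response"
        if answer in expected or expected in answer:
            return True                      # incomplete but unambiguous answers pass
        return SequenceMatcher(None, expected, answer).ratio() >= threshold

    print(matches("The Large N Limit of Superconformal Field Theories and Supergravity",
                  "large N limit of superconformal field theories"))   # True
    print(matches("Maldacena", "Witten"))                              # False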

Although this is an indirect measure of task completion, the results of this exercise can be read as an objective measure showing whether Citebase is a usable service. As an extra aid to judge the efficiency with which the tasks are performed, users were asked to time this section. One idea was to build a Javascript clock into the form, but this would have required additional user inputs and added to the complexity of the form.

5.3 Views on Citebase

By this stage users might be excited, exhausted or exasperated by Citebase (or by the evaluation), but they are now familiar with its features, and in this section are asked for their views on these.

Questions 3.1 and 3.2 enquire about Citebase as it is now and as it might be, respectively. It is reasonable to limit choices in the idealised scenario (Q3.2) so that users have to prioritise desired features. Users are likely to be more critical of the actual service, so it seems safe to allow a more open choice of preferred features.

Citebase has to be shaped to offer users a service they cannot get elsewhere, or a better service. Q3.3 seeks to assess the competition. This part of the evaluation is concluded by asking the user for a view on Citebase, not in isolation, but in comparison with familiar bibliographic services.

5.4 Follow-up and submission

As well as the necessary courtesies to users, such as offering follow-up in the form of a report and results, there was a practical motive for signing up users for further evaluation. Citebase will certainly change as a result of the evaluation, and this will create a motivated group of users willing to test the changes.

There is a second part to the evaluation, which is displayed to users automatically on submission of Form 1. It became apparent from observed tests that users do not always wait for a response to the submission and may miss Form 2, so a clear warning was added above the submission button on Form 1.

On submission the results were stored in a MySQL database and passed to an Excel spreadsheet for analysis.
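
The sketch below illustrates this storage-and-export step, with Python's built-in sqlite3 module standing in for the MySQL database actually used. The table and column names are hypothetical, and the CSV file is simply what would then be opened in a spreadsheet for analysis.

    # Store a form submission and export the table for spreadsheet analysis.
    # Table and column names are hypothetical; sqlite3 stands in for MySQL.
    import csv
    import sqlite3

    conn = sqlite3.connect("evaluation.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS form1 (
                        submitted  TEXT,
                        discipline TEXT,
                        q2_3       TEXT,
                        q2_3_ok    INTEGER)""")
    conn.execute("INSERT INTO form1 VALUES (?, ?, ?, ?)",
                 ("2002-09-12", "physics", "hep-th/9711200", 1))
    conn.commit()

    with open("form1.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["submitted", "discipline", "q2_3", "q2_3_ok"])
        writer.writerows(conn.execute("SELECT * FROM form1"))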

5.5 Response and Form 2

The following sections should be read with reference to Form 2 (http://www.ecs.soton.ac.uk/~aw01r/citebase/evalForm2.htm).

Form 1 prompted users to respond to specific questions and features, giving an impression of their reaction to the evaluated service, but it does not really explore their personal feelings about it. A recommended way of tackling this is an approach based on the well-known Software Usability Measurement Inventory (SUMI) form of questionnaire for measuring software quality from the end user's point of view. Form 2 is a short implementation of this approach, seeking to discover users' personal feelings about the service.

Experience has shown that users rush through this form within a few minutes if it is seen immediately after the first form. It is thus a rough measure of satisfaction, but when structured in this way can point to areas of concern that might otherwise go undetected.

Four response options, ranging from very positive to very negative, are offered for each of four statements in each section. These responses are scored from -2 to 2. A neutral response is not offered, but no response scores zero. A statement that users typically puzzle over is 'If the system stopped working it was not easy to restart it'; most then choose not to respond because the system did not fail at any stage. Users often query this, but an evaluation, especially where users are remote from testers, has to anticipate all possible outcomes rather than make assumptions about reliability.
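
The scoring scheme amounts to a few lines of code, sketched below. The option wording is invented for the example, and the reverse-scoring of negatively worded statements is an inference from the statement quoted above rather than a documented detail of Form 2.

    # Scoring Form 2 responses: four options scored -2..2, no response scores zero.
    SCORES = {"strongly agree": 2, "agree": 1, "disagree": -1, "strongly disagree": -2}

    def score_statement(response, negatively_worded=False):
        """Score one response; a missing response scores zero."""
        value = SCORES.get(response, 0)
        # Reverse-scoring negatively worded statements is an assumption of this sketch
        return -value if negatively_worded else value

    responses = [
        ("agree", False),       # positively worded statement
        ("disagree", True),     # e.g. "If the system stopped working it was not easy to restart it"
        (None, False),          # statement left unanswered
    ]

    print(sum(score_statement(r, neg) for r, neg in responses))   # 1 + 1 + 0 = 2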

6 Observed testing

Volunteer users from the Physics and IAM departments at Southampton University worked in five separate pairs at various times at the end of June 2002, observed by one of the designers of the form (Arouna Woukeu, Steve Hitchcock). These tests were performed before open evaluation was announced to Web users. Form-based submissions from these users were recorded in the database, just as for later users, and are included in the summary quantitative results. Notes taken by the observer during the test were used to inform late revisions to the evaluation before wider announcements; in other respects the observed users performed the same exercise as all other users. It is not claimed that the observed evaluators are fully representative of the wider Citebase constituency, but the group is adequate to test the usability of the form (Nielsen 2000).

Scenario: Users worked with machines in their own environment. Users were assured that they were not being tested, but that the system was being tested. Once they were in front of a machine with a working Web browser and connection, they were handed a printed copy of the evaluation forms as an aid and for notes, not instead of the online version. They were then given the URL to access evaluation Form 1, with no other instruction. Once started, observers were to avoid communication with users. Users were debriefed after completing the tasks.

Main findings (actions):

7 Usage: Citebase and the evaluation

7.1 Open announcements: effect on the evaluation

Following actions taken to improve the experience for users, the evaluation was announced to selected open discussion lists in a phased programme during July 2002. Announcements were targeted at JISC and NSF DLI developers, OAI developers, open access advocates and international librarian groups. The effect of these announcements in terms of the number of responses to the evaluation and the level of usage of Citebase can be seen in Figures 7.1 and 7.2.


Figure 7.1. Chart of daily responses to evaluation Form 1 (July-November 2002)


Figure 7.2. Citebase usage: summary statistics for July 2002 showing number of distinct visits (yellow chart) to citebase.eprints.org. See http://citebase.eprints.org/usage/usage_200207.html for full chart and data (excludes all hits from soton.ac.uk and from cs.odu.edu (DP9), but includes search engines)

It can be seen that the highest response to the evaluation during the period of open announcements occurred between 12-14 July following announcements to open access advocates (Figure 7.1), but Citebase usage in July was highest on the 22nd (Figure 7.2) after announcements to library lists.
 
Table 7.1: Citebase usage spikes (unique site visits) attributed to list announcements
Date (July) No. of visits Suspected source of users
22nd 207 Delayed reaction to library mails over a weekend
12th 175 OAI, Sept-Forum, FOS-Forum
15th 159 D-Lib Magazine
29th 138 PhysNet?
8th 109 Possibly delayed reaction to JISC, DLI mails over a weekend

7.2 ArXiv links to Citebase: bringing physicists to the evaluation

As has been noted already, physicists are likely to be the largest user group for Citebase given its extensive indexing of physics papers in arXiv. Links to Citebase records first appeared from arXiv abstracts on 20th August 2002. The effect on usage of Citebase was almost immediate, with peak usage occurring on 22nd August, as can be seen in Figure 7.3.


Figure 7.3. Citebase usage: summary statistics for August 2002 showing number of distinct visits (yellow chart) to citebase.eprints.org. See http://citebase.eprints.org/usage/usage_200208.html for full chart and data (excludes all hits from soton.ac.uk and from cs.odu.edu (DP9), but includes search engines)

The impact of arXiv links on usage of Citebase was relatively much larger than that due to list announcements, as can be seen in Figure 7.4 in the column heights for July (list announcements) against August, September and October (arXiv links) (ignoring the red chart which emphasises the effect of Web crawlers rather than users).

Figure 7.4. Usage statistics for citebase.eprints.org. See http://citebase.eprints.org/usage/ for full chart and data (excludes all hits from soton.ac.uk and from cs.odu.edu (DP9), but includes search engines)
*image saved on 15 November

Table 7.2 puts the growth of Citebase usage (by visits) in perspective over this period, prior to the evaluation (February-June), due to list announcements (July), due to new arXiv links (August), and during the first full month of arXiv links (September).
 
Table 7.2. Growth of visits to Citebase, February-September 2002 (yellow columns in usage charts)
February-June July August September
Average daily visits 25-45 85 211 402
Highest daily visits 95 (8th May) 207 (22nd) 660 (22nd) 567 (4th)

The effect of the arXiv links on the evaluation was materially different, however, because the links were to Citebase, and only indirectly from there to the evaluation (see section 3.4). Table 7.3 shows how efficiently Citebase users were turned into evaluators on the best days for submission of evaluation Form 1. It shows that list announcements taking users directly to the evaluation returned the highest percentage of daily submissions from all Citebase users. Although overall usage of Citebase generated by arXiv links was much larger than that from list announcements, this was not effectively translated into more submissions of the evaluation.
 
Table 7.3: Turning Citebase users into evaluators
Date No. of evaluation forms returned (Figure 7.1) Percentage of Citebase visitors that day
July 12th 16 9.1
July 13th 8 6.1
July 15th  7 4.4
July 22nd 7 3.4
July 8th 6 3.4
August 21st 6 1.3
August 23rd 6 1.1
August 27th 6 1.0
August 22nd 6 0.9

ArXiv.org HTTP server daily usage (http://arxiv.org/show_daily_graph) shows c.15,000 hosts connecting each day, i.e. approximately 3.3% of arXiv visitors become Citebase users. The challenge for Citebase, highlighted by these figures, is to attract a higher proportion of arXiv users by examining the results of this evaluation to improve the service Citebase offers. What isn't yet known is what proportion of arXiv usage is mechanical downloads, just keeping up with the literature, to be read later offline. Citebase will make little difference to this type of activity, but instead will help more active users, and here its proportionate share may already be much higher.

7.3 Network access and Citebase outages

Citebase was monitored for outages that might have affected usage levels. Below are the main incidents we are aware of. Users may have had problems at other times due to network problems beyond the control of the project.

8 Results: Form 1 - using Citebase

Valid submissions to Form 1 (http://www.ecs.soton.ac.uk/~aw01r/citebase/evalForm1.htm) were received from 195 evaluators. Details of the submissions can be viewed as an Excel spreadsheet saved as a Web page (as with all things saved in Microsoft applications, this is best viewed using an up-to-date or recent version of Internet Explorer).

8.1 About the evaluators

Q1.1 Subject interests of evaluators
 
Mathematicians 13; Computer scientists 15; Information scientists 33; Physicists 69; Other 60; Blank 5; Total 195

Other users included: Librarians (13), Cognitive scientists (10), Biologists (5), Cognitive neuroscientists (3), Health scientists (3), Medical psychiatrists (2), Sociologists (2), Publishers (2), and a teacher, information professional, behavioural geneticist, media specialist, philosopher, geomorphologist, engineer, economist, technical marketer, undergraduate.

Q1.2 Have you used the arXiv eprint archive before?
 
Table 8.1. Usage of arXiv (physicists only)
Daily 56 (50); Regularly 26 (11); Occasionally (less than monthly) 28 (3); No 79 (5)


Figure 8.1. Correlation between subject disciplines and arXiv usage (x axis: Physics=4, Maths=3, Computer=2, infoScience=1, Other=0; y axis: 4=daily usage, 3=regular, 2=occasional, 1=don't use). Physicists are more likely to use arXiv daily, non-physicists are less likely to use arXiv: correlation= 0.754522, N=189, p<0.001
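
The correlation quoted in Figure 8.1 (and in the later figures) is a Pearson correlation over the numeric codings given in the caption. The sketch below shows the calculation on six invented data points, purely to make the coding explicit; it does not use the study's data, which gave a correlation of 0.75 for N=189.

    # Pearson correlation over coded categorical answers (toy data, not the study's).
    from math import sqrt

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    discipline = [4, 4, 3, 2, 1, 0]   # Physics=4, Maths=3, Computer=2, infoScience=1, Other=0
    arxiv_use  = [4, 4, 3, 2, 2, 1]   # daily=4, regular=3, occasional=2, don't use=1

    print(round(pearson(discipline, arxiv_use), 2))   # 0.98 for this toy data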

Q1.3 If you have used arXiv, which way do you access arXiv papers? (you may select one or more)
 
Table 8.2. Means of accessing arXiv papers (physicists only)
Web search or other Web services 37 (18); Don't use arXiv 29 (1); Daily or regular browsing 65 (47); Reference links in papers 31 (18); Email alerts from arXiv 25 (20); Bibliography or library services 16 (6); Email alerts from other services 5 (1); No response 58 (5)


Figure 8.2. Accessing arXiv papers

Q1.4 Had you heard of the Open Archives Initiative?
 
Yes 99 (11); No 86 (55)
( ) physicists only
 

Figure 8.3 Correlations with prior knowledge of OAI: a, with subject discipline (x axis: physics=4, maths=3, computer=2, infoScience=1, other=0), physicists are least likely, information scientists most likely to have heard of OAI, correlation= -0.46758, N=189, p<0.001; b, with level of arXiv usage (x axis: daily usage=4, regular usage=3, occasional usage=2, no usage=1), those who use arXiv least or not at all are more likely to have heard of OAI, correlation=-0.35072, N=189, p<0.001 (y axis: 2=heard of OAI, yes, 1=have not heard of OAI)

Q1.5 Have you used any other OAI services? (you may select one or more)
 
arc 8; myOAI 10; kepler 6; Other 2; No response 178

User comments on this section (highlights)

  • As usual, I find myself an "outsider" in discussions of things that will be important to me very soon. I find there is no category for me to go into. You guys need to look beyond geekdom to think about ordinary social scientists, librarians, educationists
  • In addition to CogPrints I am a heavy user of the CiteSeer ResearchIndex service. The automatic citation linkage adds a lot of value to that service.
  • For full list of comments see Appendix 4.

    8.2 Commentary on Citebase evaluators

    The backgrounds of evaluators are broadly based, mostly in the sciences, but about 10% of users were non-scientists (Q1.1). This would appear to suggest greater expectation of OAI-based open access archives and services in the sciences, if this reflects a broad cross-section of the lists mailed (see section 7.1).

    Among non-scientists, as the comment above indicates, there may be a sense of exclusion. This is a misunderstanding of the nature of open access archives and services. No disciplines are excluded, but services such as Citebase can only act on major archives, which currently are mostly in the sciences. The primary exception is economics, which has distributed archives indexed by RePEc (http://repec.org/). There are plans to index RePEc in Citebase.

    It is true, if this is what is meant by the "outsider" above, that Q1.1 in this evaluation anticipated that evaluators would mostly be scientists of certain types, as shown in Figure 8.1. It must also be added that the Citebase services of impact-based scientometric analysis, measurement and navigation are intended in the first instance for research-users, rather than lay-users, because the primary audience for the peer-reviewed research literature is the research community itself.

    About a third of evaluators were physicists, although the number of physicists as a proportion of all users might have been expected to be higher given the concentration of Citebase on physics. (Physicists can be notoriously unfond of surveys, as the ArXiv administrators warned us in advance!)

    Physicists in this sample tend to be daily users of arXiv (Q1.2). Non-physicists, noting that arXiv has smaller sections on mathematics and computer science, tend to be regular or occasional users of arXiv (Figure 8.1). Beyond these disciplines most are non-users of arXiv, and thus would be unlikely to use Citebase given its present coverage.

    Most arXiv users in this study access new material by browsing, rather than by alerts from arXiv (Q1.3). The relatively low ranking of the latter was unexpected. There is some encouragement for services such as Citebase (note, at this stage of the evaluation users have not yet been introduced to Citebase) in the willingness to use Web search and reference links to access arXiv papers (second and third most popular categories of access). It is possible, as mentioned above, that the Citebase evaluators were a biased sample of arXiv daily users who do not download mechanically.

    OAI is familiar to over half the evaluators, but not to many physicists (Q1.4, Figure 8.3a). The latter is not surprising. OAI was originally motivated by the desire to encourage researchers in other disciplines to build open access archives such as those already available to physicists through arXiv, although the structure of Open Archives, unlike arXiv, is de-centralised (Lynch 2001).

    Although OAI has had an impact among most non-physicist evaluators - again probably preordained through list selection - there is clearly a problem attracting these users to OAI services (Q1.5). Either current OAI services are not being promoted effectively, or they are not providing services users want -- or this may be merely a reflection of the much lower availability of non-physics OAI content to date! As an OAI service, this result shows the importance for Citebase of learning the needs of its users from this evaluation, and of continuing to monitor the views of users. More generally, this result suggests there are stark issues for OAI and its service providers to tackle. To be fair, the services highlighted on the questionnaire are mainly research projects. It is time for OAI services to address users.

    8.3 Practical exercise: building a short bibliography

    ( ) physicists only

    Q2.1 Who is the most-cited (on average) author on string theory in arXiv?
    Correct 141 (45) Incorrect 20 (8) No answer 34 (15)
    Q2.2 Which paper on string theory is currently being browsed most often in arXiv?
    Correct 133 (41) Incorrect 16 (8) No answer 46 (19)
    Q2.3 Which is the most-cited paper on string theory in arXiv?
    Correct 145 (48) Incorrect 9 (2) No answer 41 (18)
    Q2.4 Which is the most highly cited paper that cites the most-cited paper above? (critical point)
    Correct 122 (44) Incorrect 26 (5) No answer 47 (19)
    Q2.5 Which paper is most often co-cited with the most-cited paper above?
    Correct 133 (46) Incorrect 12 (3) No answer 50 (19)
    Q2.6a  Download the full-text of the most-cited paper on string theory. What is the URL?
    Correct 124 (42) Incorrect 13 (3) No answer 58 (23)
    (Correct = OpCit linked copy 71 (15) + arXiv copy 53 (27))
    Q2.6b In the downloaded paper, what is the title of the referenced paper co-authored with Strominger and Witten (ref [57])?
    Correct 105 (35) Incorrect 27 (9) No answer 63 (24)

    Q2.6c Did you use search to find the answer to 2.6b?   No 118 (40); Yes 18 (3)


    Figure 8.4. All users: progress in building a short bibliography through Q2.1-2.6b in evaluation Form 1 (T=true, F=false, N=no response)


    Figure 8.5. Physicists only: progress in building a short bibliography through Q2.1-2.6b in evaluation Form 1 (T=true, F=false, N=no response)

    Time taken to complete section 2
     
    1-5 minutes 13 (6); 5-10 60 (21); 10-15 36 (14); 15-20 17 (5); 20-25 9 (0); 25-30 6 (0); 30+ 2 (0); ? 5 (2); Total 147

    Figure 8.6 Correlations between time taken to complete section 2 and: a, subject disciplines (x axis: physics=4, maths=3, computer=2, infoScience=1, other=0) correlation= -0.15, N=140, p<0.077, b, level of arXiv usage (x axis: daily usage=4, regular usage=3, occasional usage=2, no usage=1) correlation=-0.18, N=140, p<0.033

    User comments on this section (highlights)

  • I ran into two problems doing the task: (1) it wasn't clear at all initially how to find the most cited author (to do that task you'd have to go into the dropdown) -- I ended up doing the tasks backward and managed to find that feature in the dropdown when I was doing something else. (2) it was a little hard to find the paper given three authors -- I couldn't find the paper searching by author or by its hep-th number and ended up having to look it up in arxiv.org.
  • The site is very weak in explanations on what is possible. Perhaps experts would know what they were doing but for me this was very difficult.
  • Help documentation failed to help me in most cases.
  • Your questions too easily lead to the answers.
  • You need clear definitions of your terms in standard English. 'Author hits', and 'Paper Hits'
  • Fascinating ability to cross-check links between papers, though I would need more practice to not be confused as to where I was!
  • I think it would be more accurate if the search engine would accept "string theory". In the current case, it seems that some papers irrelevant to string theory are retrieved, because they are relevant either to string or to theory.
  • Easy to do & that usually means well thought out.
  • Extremely informative, a good way to get people to learn about the handy features of the system.
  • hmmm... there must be a difference between "rank by cocitedness" and the "cited with" button, but i can't guess what it is.
  • For full list of comments see Appendix 4.

    8.4 Commentary on building a short bibliography using Citebase

    Figures 8.4 and 8.5 show that most users were able to build a short bibliography successfully using Citebase. This exercise introduced users to most of the principal features of Citebase, so there is a good chance that users would be able to use Citebase for other investigations, especially those related to physics. The yellow line in Figure 8.4, indicating correct answers to the questions posed, shows a downward trend through the exercise, which is most marked for Q2.6 involving downloading of PDF full texts. Figure 8.5, which includes results for physicists only, shows an almost identical trend, indicating there is not a greater propensity among physicists to be able to use the system compared with other users, although physicists generally completed the exercise faster (Figure 8.6a).

    Reinforcing these results, almost 90% of users (100% of physicists) completed the exercise within 20 mins, with approximately 50% (55% of physicists) finishing within 10 mins. Both subject discipline and level of arXiv usage appear to be weakly correlated with the time taken to complete the exercise (Figure 8.6), although neither correlation is statistically significant. Taken together these results show that tasks can be accomplished efficiently with Citebase regardless of the background of the user.

    As anticipated, Q2.4 proved to be a critical point, showing a drop in correct answers from Q2.3. The upturn for Q2.5 suggests that user confidence returns quickly when familiarity is established for a particular type of task. Similarly, the highest number of correct answers for Q2.3 shows that usability improves quickly with familiarity with the features of a particular page. At no point in either Figure 8.4 or 8.5 is there evidence of a collapse of confidence or of willingness among users to complete the exercise.

    On the basis of these results there can be confidence in the usability of most of the features of Citebase, but the user comments in this section draw attention to some serious usability issues - help and support documentation, terminology - that must not be overshadowed by the results.

    The incidental issue of which PDF version users prefer to download, the OpCit or the arXiv version (Q2.6a), was not conclusively answered, and could not be, because of the change in format on the Citebase records for papers (Table 4.1). It can be noted that among all users, physicists displayed a greater preference for downloading the arXiv version.

    8.5 User views of Citebase

    Q3.1 In your view, which are the most useful features of Citebase? (you may select one or more)
     
    Table 8.3: Most useful features of Citebase (figures in parentheses are physicists only)
    Links to citing papers       129 (43)
    Citations graphs              35 (18)
    Linked PDFs                   75 (17)
    Links to co-citing papers     87 (26)
    Links to references           75 (22)
    Search                        69 (21)
    Ranking criteria              87 (31)
    No response                   32 (10)
    Total                        589


    Figure 8.7. Most useful features of Citebase

    Q3.2 What would most improve Citebase?
     
    Table 8.4: Improving Citebase (figures in parentheses are physicists only)
    Wider coverage               50 (22)
    Better explanations          30 (7)
    Faster results                3 (1)
    Better interface             11 (5)
    More links                    6 (0)
    More papers                  30 (12)
    Other responses              24 (6)
    No response                  41 (15)


    Figure 8.8. Improving Citebase

    Table 8.5. Other suggestions for improving Citebase
    • A way to get BIBTEX format
    • Ability to extract reference lists from paper
    • Author search clarity
    • Better data processing
    • Displayed comments from experts
    • Explanations in a more obvious place
    • Facilities to download references
    • Help files giving examples of common procedures
    • Include journal articles/references
    • Include journal references
    • Method for keeping track of search/browse path
    • More intelligence
    • More precision in indexing and therefore search
    • More refined search capabilities
    • Most-browsed graphic indicator
    • Null not found in explanation
    • One example worked thru all the way
    • Other presentation of the results
    • Possibility to download selected references
    • Remove ranking, etc.

    Q3.3  What services would you use to compile a bibliography in your own work and field? (you may select one or more)
     
    Table 8.6: Compiling a bibliography (figures in parentheses are physicists only)
    My own bib list  Online library services  Web services  Offline library services  Other  No response
    84 (31) 85 (23) 91 (37) 24 (5) 49 (14)


    Figure 8.9. Creating personal bibliographies

    Table 8.7. Bibliography services used most by evaluators
    • ISI Web of Science 16
    • PubMed 10 (PubMed (Medline) 1; was Reference Update, will probably become PubMed 1)
    • Slac Spires 8
    • Mathscinet 7
    • Google 6
    • ISI 7 (SCI 5; Scisearch, Social SciSearch 1)
    • arXiv 4
    • CiteSeer/ResearchIndex 4
    • Inspec 4 (direct SilverPlatter, not WoK 1)
    • ADS 3
    • Medline 3
    • PsycInfo 3 (Psychinfo silver platter 1)
    • Library and IS Abstracts 2
    • ProQuest 2
    • Web services 2
    • archives 1
    • British Education Index 1
    • citation index 1
    • current contents 1
    • hepdata 1
    • http://econpapers.hhs.se 1
    • http://liinwww.ira.uka.de/bibliography/Misc/HCI/in 1
    • http://www.physics-network.org/PhysNet/physdoc.htm 1
    • LIta 1
    • math reviews / Zentralblatt (online) 1
    • medscape 1
    • Melvyl (Inspec) 1
    • OLIS 1
    • own web page 1
    • STN 1
    • UCI electronic library 1
    • zetoc 1

    Q3.4 How does Citebase compare with these bibliography services (assuming that Citebase covered other subjects to the degree it now covers physics)?
     
    Table 8.8: Comparing Citebase with other bibliography services (figures in parentheses are physicists only)
    Very favourably        28 (7)
    Favourably             98 (40)
    No response            54 (16)
    Unfavourably           15 (5)
    Total                 195


    Figure 8.10. Comparing Citebase with other bibliography services


    Figure 8.11. Correlation between subject disciplines and views on how Citebase compares with other bibliography services (x axis: physics=4, maths=3, computer=2, infoScience=1, other=0; y axis: citebase compares "very favourably"=2, "favourably"=1, no response=0, "unfavourably"=-1), correlation= -0.00603, N=190, p<0.924. There is no meaningful correlation


    Figure 8.12. Correlation between level of arXiv usage and views on how Citebase compares with other bibliography services (x axis: daily usage=4, regular usage=3, occasional usage=2, no usage=1; y axis: citebase compares "very favourably"=2, "favourably"=1, no response=0, "unfavourably"=-1), correlation= 0.014765, N=190, p<0.840. There is no meaningful correlation


    Figure 8.13. Correlation between views on how Citebase compares with other bibliography services and time taken to complete section 2 (x axis: citebase compares "very favourably"=2, "favourably"=1, no response=0, "unfavourably"=-1), correlation= 0.029372, N=144, p<0.727. There is no meaningful correlation.
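    As a minimal illustrative sketch only (this is not the project's analysis code, and the sample responses below are hypothetical), a product-moment (Pearson) correlation of the kind reported in Figures 8.11-8.13 could be computed from the coded responses as follows, using the coding schemes given in the figure captions:

        # Sketch: Pearson correlation between coded survey responses.
        # The codings follow the captions to Figures 8.11-8.13; the
        # responses listed here are hypothetical examples, not survey data.
        from math import sqrt

        DISCIPLINE = {"physics": 4, "maths": 3, "computer": 2, "infoScience": 1, "other": 0}
        COMPARISON = {"very favourably": 2, "favourably": 1, "no response": 0, "unfavourably": -1}

        def pearson(xs, ys):
            """Pearson product-moment correlation coefficient."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = sqrt(sum((x - mx) ** 2 for x in xs))
            sy = sqrt(sum((y - my) ** 2 for y in ys))
            return cov / (sx * sy)

        # Hypothetical responses: (discipline, view of Citebase vs other services)
        responses = [("physics", "favourably"), ("maths", "very favourably"),
                     ("computer", "no response"), ("other", "unfavourably")]
        x = [DISCIPLINE[d] for d, _ in responses]
        y = [COMPARISON[v] for _, v in responses]
        print(round(pearson(x, y), 5))

    A coefficient near zero, as found for each of the three comparisons in Figures 8.11-8.13, indicates no meaningful linear relationship between the coded variables.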

    User comments on this section (highlights)

    For full list of comments see Appendix 4.

    8.6 Commentary on user views of Citebase

    Links to citing and co-citing papers are features of Citebase that are valued by users (Q3.1) although these features are not unique to Citebase. The decision to rank papers according to criteria such as these, and to make these ranking criteria selectable from the main Citebase search interface, is another feature that has had a positive impact with users. Citations/hit graphs appear to have been a less successful feature. There is little information in the data or comments to indicate why this might be, but it could be due to the shortcomings discussed in section 4 and it may be a feature worth persevering with until more complete data can be tested.

    Users found it harder to say what would improve Citebase, judging from the number of 'no responses' to Q3.2. Wider coverage, especially in terms of more papers, is desired by all users, including physicists. The majority of the comments are complaints about coverage.

    Signs of the need for better support documentation reemerge in this section. Although the number of users calling for a better interface is not high, comments indicate that those calling for improvements in this area are more vociferous. Among features not offered on the questionnaire but suggested by users, the need for greater search precision stands out.

    There is a roughly equal likelihood that users who participated in this survey will use Web services (e.g. Web search), online library services and personal bibliography software to create bibliographies (Q3.3). This presents opportunities for Citebase to become established as a Web-based service that could be integrated with other services. The lack of a dominant bibliography service, including services from ISI, among this group of users emphasises the opportunity.

    Citebase is beginning to exploit that opportunity (Q3.4), but needs to do more to convince users, even physicists, that it can become their primary bibliographic service.

    Attempts to correlate how Citebase compared with other bibliographic services with other factors considered throughout the evaluation - with subject discipline, with level of arXiv usage, and time taken to complete section 2 - showed no correlations in any case (Figures 8.11-8.13). This means that reactions to Citebase are not polarised towards any particular user group or as a result of the immediate experience of using Citebase for the pre-set exercise, and suggests that the principle of citation searching of open access archives has been demonstrated and need not be restricted to current users.

    There is little opportunity in this section for users to compare, contrast and discuss features of Citebase that differentiate it from other services. In particular, Citebase offers access to full texts in open access eprint archives, an aspect that needs to be emphasised as coverage and usage widen. Comments reveal that some users appreciate this, although calls for Citebase to expand coverage into areas not well covered now suggest it is not always understood. Citebase cannot simply expand its coverage unless researchers, as authors, recognise the need to contribute to open access archives and act on it. One interpretation is that users in such areas do not see the distinction between open access archives and services and paid-for journals and services, because they do not pay for those services directly themselves - the services appear to be free.

    8.7 Follow-up and comment on the evaluation

    Email addresses have been provided by 109 evaluators who will receive a copy of this report. Moreover, 71 evaluators would be interested in participating in a follow-up exercise. This would be a good way of testing changes to Citebase due to this evaluation or ongoing development, and may be a necessary precondition of wider exposure through arXiv.

    User comments on the evaluation: when the form becomes the object

    For full list of comments see Appendix 4.

    Submission of Form 1

    Prior to submission of Form 1 users are given clear notice of a follow-up, Form 2.

    On submission of Form 1 users see a message thanking them for their participation, congratulating them on compiling a bibliography on string theory, and giving the answers to the questions in section 2. Based on comments submitted, many users were keen to have their answers confirmed, although the response was not personalised in terms of telling users whether they were right or wrong; it simply gave the correct answers (see Appendix 1).

    9 Results: Form 2 - user satisfaction with Citebase

    Form 1 focussed on specifics: about the user; a series of tasks; about Citebase. Form 2 allowed users to express a more general and considered reaction to the service they had experienced. Form 2 was based, as the preamble said, on the well-known Software Usability Measurement Inventory (SUMI) form of questionnaire for measuring software quality from the end user's point of view. Users were invited to indicate, from a predefined list, their degree of reaction, for or against, to a series of propositions about general features of the system tested. These propositions assessed the users' impression and command of the system, and the effectiveness and navigability of the system.

    Form 2 could have been longer and explored other areas, but this may have inhibited the number of responses. As Form 2 was separate from Form 1 it was not expected that all users would progress this far. Of 195 users who submitted the first form, 133 completed Form 2 (http://www.ecs.soton.ac.uk/~aw01r/citebase/evalForm2.htm). Details of the submissions to Form 2 can be viewed as an Excel spreadsheet saved as a Web page (again with the proviso that this is best viewed using an up-to-date or recent version of Internet Explorer).

    The summary results by question and section are shown in Table 9.1 and Figure 9.1.
     
    Table 9.1. Satisfaction scores (Form 2)
    Average score by question:
    Q1: 0.92   Q2: 0.79   Q3: 1.39   Q4: 1.05   Q5: 0.41   Q6: 1.17   Q7: 0.83   Q8: 1.02
    Q9: 0.65   Q10: 1.07  Q11: 1.42  Q12: 0.99  Q13: 0.92  Q14: 0.27  Q15: 0.57  Q16: 0.26
    Average score by section:
    Impression: 1.04   Command: 0.86   Effectiveness: 1.03   Navigability: 0.51

    a   b
    Figure 9.1. Average user satisfaction scores: a, by question, b, by section

    The highest score was recorded for Q11, indicating that on average users were able to find the information required most of the time. Scoring almost as high, Q3 shows users found the system frustrating to use only some of the time.

    The questions ranked lowest by score, Q14 and Q16, suggest that users agreed only weakly with the proposition that there were plenty of ways to find the information needed, and disagreed only weakly with the proposition that it is easy to become disoriented when using the system.

    Scores by section indicate that, overall, users formed a good impression of Citebase. They found it mostly to be effective for task completion (confirming the finding of Form 1, section 2), and they were able to control the system most of the time. The lower score for navigability suggests this is an area that requires further consideration.

    It should be recalled that responses were scored between 2 and -2, depending on the strength of the user's reaction. In this context it can be seen that on average no questions or sections scored negatively; six questions scored in the top quartile, and two sections just crept into the top quartile.
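    The section averages in Table 9.1 are consistent with each of the four sections comprising four consecutive questions. As an illustrative sketch only (the question-to-section grouping is inferred from the published averages rather than taken from the form itself), the section averages can be reproduced from the per-question scores as follows:

        # Sketch: per-section averages from the per-question satisfaction
        # scores in Table 9.1 (each response scored between -2 and 2).
        # The grouping into four consecutive blocks of four questions is an
        # inference from the published section averages, not from the form.
        scores = [0.92, 0.79, 1.39, 1.05, 0.41, 1.17, 0.83, 1.02,
                  0.65, 1.07, 1.42, 0.99, 0.92, 0.27, 0.57, 0.26]
        sections = ["Impression", "Command", "Effectiveness", "Navigability"]

        for i, name in enumerate(sections):
            block = scores[i * 4:(i + 1) * 4]
            print(f"{name}: {sum(block) / len(block):.4f}")
        # Rounded to two decimal places these agree with the section
        # averages in Table 9.1: 1.04, 0.86, 1.03, 0.51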

    Among users, scores were more diverse, with the total user score varying from 31 (maximum score possible is 32) to -25. Other high scores included 29, 28 and 27 (by five users). Only eight users scored Citebase negatively.

    Submission of Form 2 completed the evaluation for the user, signalled by the response shown in Appendix 2.

    10 A summary view of Citebase in user comments

    Feeling good about Citebase

    Implications for Citebase design: users

    Implications for Citebase design: search

    Should we be doing this?

    For full list of comments see Appendix 4.

    11 Correspondence arising from the evaluation

    Some evaluators elaborated their criticisms of Citebase in email correspondence. Two significant criticisms highlighted by users are addressed by the Citebase developer below. The full correspondence arising from these criticisms is reproduced in Appendix 3.

    12 How Citebase has responded

    Closure of the evaluation at the end of October allowed the version of Citebase seen by users to be updated, and new features to be transferred from the development version, which is not accessible to users. At the time of writing this report the following changes had been made, many in response to the findings and criticisms from the evaluation:

    Changes affecting user interface

    Changes affecting machine interface

    Bug fixes

    Forthcoming developments

    Improved support documentation is a priority. Coverage of the index will be expanded, requiring the database to be completely restructured to support analysis and navigation of non-arXiv articles. The most immediate plans are to include coverage of RePEc and eprints.org repositories, the latter targeting citation indexing at institutional archives for the first time.

    13 Conclusions and recommendations

    Professor Imre Simon, quoted at the top of this report, perfectly sums up the results of the evaluation and the feelings of users towards Citebase: there is much scope for improvement but, as exemplified by Citebase, Web-based citation indexing of open access archives is closer to a state of readiness for serious use than had previously been realised.

    The exercise to evaluate Citebase had a clear scope and objectives. Within the scope of its primary components, the search interface and the services available from a Citebase record, it was found that Citebase can be used simply and reliably for resource discovery. It was shown that tasks can be accomplished efficiently with Citebase regardless of the background of the user.

    The principle of citation searching of open access archives has been demonstrated and need not be restricted to current users.

    More data need to be collected and the process refined before Citebase is as reliable for measuring impact as it is for resource discovery. As part of this process users should be encouraged to use Citebase to compare the evaluative rankings it yields with other forms of ranking.

    Citebase is a useful service that compares favourably with other bibliographic services, although it needs to do more to integrate with some of these services if it is to become the primary choice for users.

    The linked PDFs are unlikely to be as useful to users as the main features of Citebase. Among physicists, linked PDFs will be little used, but the approach might find wider use in other disciplines where PDF is used more commonly.

    Although the majority of users were able to complete a task involving all the major features of Citebase, user satisfaction appeared to be markedly lower when users were invited to assess navigability than for other features of Citebase.

    Perhaps one of the most important findings of the evaluation is that Citebase needs to be strengthened considerably in terms of the help and support documentation it offers to users.

    Development of Citebase will continue; since the evaluation it has focussed on restarting daily updates and improving the performance and speed of search responses. In the longer term, coverage will be expanded and support documentation strengthened.

    A notable success of the evaluation has been to increase usage of Citebase, in terms of average daily visits, by more than a factor of 10. There is still considerable scope to increase usage of Citebase by arXiv physicists.

    According to Paul Ginsparg, founder of arXiv: "(Citebase) is a potentially critical component of scholarly information architecture". The first step must be to examine the results of this evaluation to improve the services Citebase offers with a view to establishing Citebase as a service used regularly by all arXiv users.

    There are wider objectives and aspirations for developing Citebase. The overarching purpose is to help increase the open-access literature. Where there are gaps in the literature - and there are very large gaps in the open-access literature currently - Citebase will motivate authors to accelerate the rate at which these gaps are filled.

    Acknowledgements

    We are grateful to Paul Ginsparg, Simeon Warner and Paul Houle at arXiv Cornell for their comments and feedback on the design of the evaluation and their cooperation in helping to direct arXiv users to Citebase during the evaluation. Eberhard Hilf and Thomas Severiens at PhysNet and Jens Vigen at CERN were also a great help in alerting users to the evaluation.

    Our local evaluators at Southampton University gave us confidence that the evaluation was ready to be tackled externally. We want to thank Iain Peddie, Shams Bin Tariq, David Crooks and Jonathan Parry (Physics Dept.), and Muan H. Ng, Chris Bailey, Jing Zhou, Norliza Mohamad Zaini, Hock K.V. Tan and Simon Kampa (IAM Dept.).

    Finally, we thank all our Web evaluators, who must remain anonymous, but this in no way diminishes their vital contribution.

    References

    Bergmark, Donna (2000) "Automatic Extraction of Reference Linking Information from Online Documents".
    Cornell University Technical Report, TR 2000-1821, November
    http://www.cs.cornell.edu/cdlrg/Reference%20Linking/extraction.pdf

    Bergmark, D. and Lagoze, C. (2001) "An Architecture for Automatic Reference Linking". Cornell University Technical Report, TR2001-1842, presented at the 5th European Conference on Research and Advanced Technology for Digital Libraries (ECDL), Darmstadt, September
    http://www.cs.cornell.edu/cdlrg/Reference%20Linking/tr1842.ps

    Bollen, Johan and Rick Luce (2002) "Evaluation of Digital Library Impact and User Communities by Analysis of Usage Patterns". D-Lib Magazine, Vol. 8, No. 6, June
    http://www.dlib.org/dlib/june02/bollen/06bollen.html

    Chen, C. and Carr, L. (1999) "Trailblazing the literature of hypertext: An author co-citation analysis (1989-1998)". Proceedings of the 10th ACM Conference on Hypertext (Hypertext '99), Darmstadt, February
    http://www.ecs.soton.ac.uk/~lac/ht99.pdf

    Crow, R. (2002) "The Case for Institutional Repositories: A SPARC Position Paper". Scholarly Publishing & Academic Resources Coalition, Washington, D.C., July
    http://www.arl.org/sparc/IR/ir.html

    Darmoni, Stefan J., et al. (2002) "Reading factor: a new bibliometric criterion for managing digital libraries". Journal of the Medical Library Association, Vol. 90, No. 3, July
    http://www.pubmedcentral.gov/picrender.fcgi?action=stream&blobtype=pdf&artid=116406

    Garfield, Eugene (1994) "The Concept of Citation Indexing: A Unique and Innovative Tool for Navigating the Research Literature". Current Contents, January 3rd
    http://www.isinet.com/isi/hot/essays/citationindexing/1.html

    Guédon, Jean-Claude (2001) "In Oldenburg's Long Shadow: Librarians, Research Scientists, Publishers, and the Control of Scientific Publishing". ARL Proceedings, 138th Membership Meeting, Creating the Digital Future, Toronto, May
    http://www.arl.org/arl/proceedings/138/guedon.html

    Gunn, Holly (2002) "Web-based Surveys: Changing the Survey Process". First Monday, Vol. 7, No. 12, December
    http://firstmonday.org/issues/issue7_12/gunn/index.html

    Gutteridge, Christopher (2002) "GNU EPrints 2 Overview". Author eprint, Dept. of Electronics and Computer Science, Southampton University, October, and in Proceedings 11th Panhellenic Academic Libraries Conference, Larissa, Greece, November
    http://eprints.ecs.soton.ac.uk/archive/00006840/

    Harnad, S. (2002) "UK Research Assessment Exercise (RAE) review". American Scientist September98-Forum, 28th October
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2325.html

    Harnad, S. (2001) "Why I think research access, impact and assessment are linked". Times Higher Education Supplement, Vol. 1487, 18 May, p. 16
    http://www.cogsci.soton.ac.uk/~harnad/Tp/thes1.html (extended version)

    Hitchcock, Steve, Donna Bergmark, Tim Brody, Christopher Gutteridge, Les Carr, Wendy Hall, Carl Lagoze, Stevan Harnad (2002) "Open Citation Linking: The Way Forward". D-Lib Magazine, Vol. 8, No. 10, October
    http://www.dlib.org/dlib/october02/hitchcock/10hitchcock.html

    Hitchcock, S. et al. (2000) "Developing Services for Open Eprint Archives: Globalisation, Integration and the Impact of Links". Proceedings of the Fifth ACM Conference on Digital Libraries, June (ACM: New York), pp. 143-151
    http://opcit.eprints.org/dl00/dl00.html

    Hitchcock, S. et al. (1998) "Linking Electronic Journals: Lessons from the Open Journal Project". D-Lib Magazine, December
    http://www.dlib.org/dlib/december98/12hitchcock.html

    Hunt, C. (2001) "Archive User Survey". Final year project, ECS Dept, University of Southampton, May
    http://www.eprints.org/results/

    Krichel, T. and Warner, S. (2001) "A metadata framework to support scholarly communication". International Conference on Dublin Core and Metadata Applications 2001, Tokyo, October
    http://openlib.org/home/krichel/papers/kanda.html

    Lawrence, Steve (2001) "Free Online Availability Substantially Increases a Paper's Impact". Nature Web Debate on e-access, May
    http://www.nature.com/nature/debates/e-access/Articles/lawrence.html

    Lawrence, S., Giles, C. L. and Bollacker, K. (1999) "Digital Libraries and Autonomous Citation Indexing". IEEE Computer, Vol. 32, No. 6, 67-71
    http://www.neci.nj.nec.com/~lawrence/papers/aci-computer98/

    Lynch, Clifford A. (2001) "Metadata Harvesting and the Open Archives Initiative". ARL Bimonthly Report, No. 217, August
    http://www.arl.org/newsltr/217/mhp.html

    Merton, Robert (1979) "Foreword". In Garfield, Eugene, Citation Indexing: Its Theory and Application in Science, Technology, and Humanities (New York: Wiley), pp. v-ix
    http://www.garfield.library.upenn.edu/cifwd.html

    Nielsen, Jakob (2000) "Why You Only Need to Test With 5 Users". Alertbox, March 19th
    http://www.useit.com/alertbox/20000319.html

    Small, Henry (1973) "Co-citation in the Scientific Literature: A New Measure of the Relationship Between Two Documents". Journal of the American Society for Information Science, Vol. 24, No. 4, July-August;
    reprinted in Current Contents, No. 7, February 13th, 1974,
    http://www.garfield.library.upenn.edu/essays/v2p028y1974-76.pdf

    Suber, Peter (2002) "Larger FOS ramifications". FOS-Forum list server, 2nd July
    http://www.topica.com/lists/fos-forum/read/message.html?mid=904724922&sort=d&start=240

    Van de Sompel, H. and Lagoze, C. (2002) "Notes from the Interoperability Front: A Progress Report from the Open Archives Initiative". 6th European Conference on Research and Advanced Technology for Digital Libraries (ECDL), Rome, September
    http://lib-www.lanl.gov/%7Eherbertv/papers/ecdl-submitted-draft.pdf

    Young, Jeffrey R. (2002) "'Superarchives' Could Hold All Scholarly Output". Chronicle of Higher Education, July 5th
    http://chronicle.com/free/v48/i43/43a02901.htm

    Appendices

    Appendix 1: Response to users on submission of Form 1

    Congratulations! The bibliography is printed below.

    With this you could pass for an expert on string theory! More importantly, you have helped us assess the operation and usability of Citebase, which we hope to extend to other disciplines as more data becomes available.

    The scenarios you have just performed have introduced you to ways of querying and browsing Citebase. You are invited now to complete a short user satisfaction form to give your initial impressions of using Citebase. This form is closely related to the first exercise and is an integral part of the overall evaluation.

    Continue to the second form: User Satisfaction Questionnaire

    If you have any queries, you can email me from the link below.

    Steve Hitchcock, Evaluation Coordinator
    sh94r@ecs.soton.ac.uk


    You completed the bibliography successfully. Here is a copy of your bibliography with the details added.

       1. Lidsey, James E.; Copeland, E. J.; Wands, David, Superstring Cosmology, 2000, hep-th/9909061
       2. Maldacena, Juan M., The Large N Limit of Superconformal Field Theories and Supergravity, 1998, hep-th/9711200
       3. Klebanov, I. R.; Polyakov, A. M.; Gubser, S. S., Gauge Theory Correlators from Non-Critical String Theory, 1998, hep-th/9802109
       4. Witten, Edward, Anti De Sitter Space And Holography, 1998, hep-th/9802150

    Edward Witten is the most highly cited author on string theory in arXiv, and the paper listed above is his most-cited paper.

    This is a fun result, of course. Citebase alone will not enable you to replace the real experts, but it will enable the interested reader to be better informed. Even the experts will be impressed that you are now aware of a few key papers in their field.



    If no fields were filled in for section 2, the following line replaced the bibliography:

    All the fields submitted were empty: Please return to the form and fill it first

    Appendix 2: Response to users on submission of Form 2

    Thank you !

    Your responses to the user satisfaction form have been received and will be included in results of the evaluation to be reported as part of the Open Citation Project. If you have signed up to receive notice of these results, or to participate in follow-up tests, we will contact you again. If not, we are most grateful for your participation.

    If you have any queries, you can email me at the address below.

    Steve Hitchcock, Evaluation Coordinator
    sh94r@ecs.soton.ac.uk



    If no fields were filled in for Form 2, the following line was added to this response:

    All fields submitted were empty: Please return to the form and fill it first

    Appendix 3: Correspondence with users arising from the evaluation

    Ashok Prasad <gc.cuny.edu> (22nd August):
    Your citation impact summary appears to give a significantly lower total number of cites than the spires database at SLAC for high energy physics papers.

    Response sent to the evaluator by Tim Brody: I believe SLAC/SPIRES index both arXiv articles and the tables of contents of journals. This gives them greater coverage of all the literature in High Energy Physics (whereas Citebase can only discover citations from articles deposited in arXiv, but does so for all subject areas).

    In addition SLAC/SPIRES may be able to parse references more successfully than Citebase - although we strive to improve!

    Hopefully, we will be able to integrate SLAC/SPIRES and Citebase at some point to provide greater coverage (and accuracy) to the end-user/authors.

    If you have specific examples of articles where Citebase's coverage is poor, please email me with URL's.

    Guido Burkard <unibas.ch> (28th August):
    I've recently discovered CiteBase and I like it very much!

    One problem I have just noticed is that it happens that CiteBase gets the order of authors wrong, e.g. in cond-mat/9808026 where the author listed first on CiteBase is actually the last author.

    Another suggestion would be to indicate the number of citing articles on the main page--I imagine that many people are interested in the impact of their paper and don't necessarily want to browse the list of citing articles.

    Tim Brody (28th August): Citebase should be displaying authors in the order given by the source repository. Regrettably the harvest program wasn't storing this order. This should be fixed once we finish our evaluation, as I will be performing a complete re-harvest (to go to OAI 2.0) - in the next month or so.

    The impact figure you mention should also make its way into the abstract page, along with the hits count, and by-author values.

    "Vivian Incera" <fredonia.edu> (12th September):
    I am very disappointed, to say the least, with the new CiteBase system appearing in the archives. I think that it should not have been made available to users until it was bug-free. Problems as obvious as a contradiction between the list of articles citing a paper appearing in the same archive and the list you provide in your system show that CiteBase is still highly inaccurate.

    I checked several of my papers and found problems in most of them. For example, a recent paper that has three citations if you click in the archive link, has no citation in your system. That's outrageous.

    You know that citation is a serious matter. It is used for many different purposes, it can be used to track the literature, as well as to decide a position, etc. You can hurt many people by making mistakes like these. The idea of CiteBase is great, no doubt. I know of many papers that had cited mine, but for different reasons, as for instance that they are in cond-mat, but my paper is an hep-ph, these citations have never been automatically counted on my high energy archive list. Your system could really help to improve that situation. So, don't think I am against what you are doing. What I don't like is that the program to produce the data is still  limited, but, despite that, it is available to everybody in the web, so people can take it seriously.

    I hope that you will do something to address the problems I am mentioning here. The best is to put a clear sign that this system is still under construction and should not be considered as a reliable source of information yet.

    Internal project correspondence

    Stevan Harnad (12th September): I agree with this user that your cautions and impact health warnings need to be strengthened to take all these limitations into account. I suggest you do that as soon as possible, to prevent reactions like this from propagating. It is just a matter of itemizing the limitations, stressing that this is still experimental, should not yet be used for actual evaluation, and is meant mainly to demonstrate the POTENTIAL of online citation analysis.

    I'd be happy to vet the changed text. Politically and strategically it would be best if we got these hedges optimized and online ASAP!

    The other question is about whether there is anything that can be done to fix the specific within-archive omissions she mentions. (Obviously, she is looking narrowly at her own work, and you are handling the whole corpus, but every author will be looking first, and under a microscope, at how citebase treats his own work, and only secondarily at how it helps him with the work of others.)

    Actual notice added: see section 4.
    Steve Hitchcock (13th September): I think we are talking about citations, and 'archive link' means SLAC-SPIRES HEP (cited by). It would have helped if the author had been more specific about the examples. I can't identify the paper from this.
    > A recent paper that has three citations if you click in the archive link, has no citation in your system. That's outrageous.
    Can anyone else? However, there is a paper that has null citations in Citebase but six in Spires http://citebase.eprints.org/cgi-bin/citations?id=oai%3AarXiv%3Ahep%2Dph%2F0004113. Maybe this needs to be looked at to see why Citebase isn't collecting these citations.

    It is worth noting that Spires 'cited by' lists are preceded by a prominent warning.

    Response Tim Brody (13th September): We are sorry for any distress that Citebase may have caused. We try to draw user's, and author's, attention to the warnings we give on the site: http://citebase.eprints.org/help/#impactwarning

    Please do not despair if your papers fail to appear or have few or no citations:

    Citebase is based only (1) on those papers, references and hit data that are available from the source eprint archives (see Coverage <http://citebase.eprints.org/help/coverage.php>), and (2) only on those references that can currently be successfully linked (~15%). The literature currently available online is still only a tiny subset of the total research literature across disciplines, although it is growing daily. (Hence the moral of this story is not that these services are intrinsically limited, but that not enough researchers have self-archived their papers yet!)

    We will, of course, try to improve these notices in response to your concerns.

    We are continually improving Citebase's ability to find and link citations within the literature available to us from the source e-print archives. However, these are automated processes that will fail in certain conditions - for example, should an article not have a "journal-ref", or if the full-text document can not be converted and/or parsed.

    Where we are unable to link a citation, it is most valuable to us for authors/users to supply the arXiv id's of the articles that haven't been linked together. Given the arXiv id's I can build a test case to use against our systems to identify why it failed.

    As you point out, citation-indexing services should not be used in isolation. SLAC/SPIRES is likely to have more comprehensive coverage of the HEP field (due to the human-effort involved), Citebase will cover all fields in arXiv, and ISI's Web of Science will provide comprehensive indexes of the journals that they index. For any service, citation counts will be intrinsically limited by the amount of literature that is included.

    Ehud Schreiber:
    I just completed the exercise and the site seemed promising, but when I tried to find papers citing hep-th/0205185 I got none, although SPIRES found 30 of them.

    Tim Brody (24th September): During our evaluation period we haven't updated Citebase with new records. Therefore for a relatively new article, in this case June of this year, Citebase is unlikely to have picked up any articles since that cited it.

    (I've duplicated the evaluation warning on the coverage page, so if you click the "incomplete" link in the warning on the abstract page you should see the updated message)

    Regardless, it is possible that SPIRES - for hep-th - will always identify more citations. While Citebase can only see citations from articles that are deposited in arXiv, SPIRES covers articles that are only available on-paper.

    Internal project correspondence

    Steve Hitchcock (24th September): This is an interesting example. The paper was obviously being cited within days of deposit on arXiv. Citebase has records for the earliest citing papers, 21-30 on the SLAC citations list http://arxiv.org/cits/hep-th/0205185, but for some reason has not been able to extract the reference lists from any of these papers, hence obviously no citations. Is this lack of references an updating problem, or is there a technical reason why the ref. lists are not being extracted in these cases?

    Tim Brody (24th September): I *guess* that the metadata is slightly ahead of the references. Given re-harvesting everything we should pick this up again (the PDFs in question look parsable).

    Appendix 4: User comments

    * denotes a physicist

    A4.1: Comments on section 1: About you

    A4.2: Comments on section 2: Practical exercise

    A4.3: Comments on section 3: views on Citebase

    A4.4: Final comments on the evaluation

