Dear all,
Let me answer the questions that were posted here.
WHY CORE?
The primary reason I started building CORE 2.5 years ago is that
there was no easy way of getting access to Open Access research
outputs at a massive scale that would enable people to build new
services (primarily making use of text-mining). As a person interested
in text-mining I was totally frustrated by this. All existing
aggregators were metadata-only aggregators, which made them useless in
most of the use cases I was interested in.
Let me mention a few such use cases.
- Text-mining - Open access is not just about opening access, but
primarily about reuse. There are many possibilities for developing
new innovative services on top of the open access content (not just
the metadata), but this is too costly if one first has to aggregate
content from hundreds of sources that are not fully interoperable and
where some of the big players (typically subject-based repositories)
require a completely different approach or bespoke arrangements. In
addition, relying on a large sample, such as arXiv.org, is not good
enough for all researchers, as the data cover only certain fields.
If you do research in bibliometrics or want to develop services for
exploratory search/discovery, you simply don't have the data. So here
I agree with Thomas on the necessity of having a full-text copy.
- Building an open access cross-repository search - It is impossible
to build a high-quality cross-search service if an aggregator has
access only to metadata. Without access to the content, the aggregator
has no means of checking content availability, validity and quality
[1], as it has to rely on the information provided in the metadata
(and it must trust it). The resulting retrieval system cannot (by
definition) have good precision and recall characteristics or generate
snippets, which is why metadata-only search engines will never become
popular among researchers.
- Metadata enrichment - It is impossible to create and maintain some
types of metadata at the level of individual repositories. For
example, repositories cannot provide metadata about related papers or
cited_by relationships, as they do not have access to this
information. However, this information can be mined from the
full-texts. The job of an aggregator, such as CORE, is to enrich the
metadata with these relationships.
- Content monitoring and benchmarking - It is necessary to aggregate
content in order to provide reliable statistics. Metadata is not
sufficient, as repositories can create (and often do create) metadata
about non-existing items; as an example, I will mention the 23,880
instances of "Dark item" in the Cambridge repository [2] (see the
availability-check sketch at the end of this section). At the same
time, monitoring the growth of OA (in terms of content) is essential
for the success of the OA movement (green OA is often monitored using
metadata records alone). In addition, the only way to check compliance
with the new HEFCE policy is through a full-text aggregation.
- Monitoring of standards adoption and support for repository managers
- By aggregating content from repositories, CORE can detect
inconsistencies in the way various metadata standards are used
across repositories and communicate these back to the repository of
origin, thus helping to speed up the standardization process.
There are certainly other use cases, which fall into one of the "raw
data access", "transaction access" or "analytical access" categories
[1]. To read more about some of the use cases, have a look at [3].
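As a concrete illustration of the monitoring point above, here is a
minimal sketch of the kind of availability check an aggregator can run
but a metadata-only service cannot: follow the link attached to each
harvested record and verify that a full text is actually there. The
record list, URLs and field names are made-up examples for this email,
not CORE's internal data model.

    # Sketch: distinguish real full-text records from "dark items" by
    # checking whether the link in the metadata resolves to a PDF.
    # The records below are invented for illustration.
    import requests

    records = [
        {"id": "oai:example.repo:1", "fulltext_url": "https://example.org/1.pdf"},
        {"id": "oai:example.repo:2", "fulltext_url": None},  # a "dark item"
    ]

    def has_fulltext(record, timeout=10):
        """Return True if the record's link resolves to a real PDF."""
        url = record.get("fulltext_url")
        if not url:
            return False
        try:
            response = requests.head(url, allow_redirects=True, timeout=timeout)
        except requests.RequestException:
            return False  # dead link: metadata exists, content does not
        content_type = response.headers.get("Content-Type", "")
        return response.status_code == 200 and "pdf" in content_type.lower()

    for record in records:
        status = "full text available" if has_fulltext(record) else "dark item"
        print(record["id"], status)

Run over an entire harvest, a check like this yields exactly the
content statistics that metadata records alone cannot provide.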
CORE WAS NEVER INTENDED TO BE A SUBSTITUTE FOR GOOGLE SCHOLAR (GS)
The most well-known application of CORE is the CORE Portal, which
provides access to the aggregated content. While most people feel that
the portal is synonymous with CORE, it is only one part of it. The
main value of CORE is in the ability to develop applications that
communicate with the CORE API and have access to the pre-processed
content (again, not just the metadata). The same holds for the new
CORE data dumps; I will send a link to the data dumps in a separate
email. This has the potential to kick-start many new projects and is
something that distinguishes CORE from GS. The reason I originally
decided to build the CORE Portal was simply to demonstrate the ability
to cross-search OA content, as we had the data. This has shifted a
little over the last few months, as hundreds of thousands of people
started using the portal; however, in my view the main strength of
CORE is still not the user interface, and it was never intended to be.
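To make this concrete, below is a minimal sketch of the kind of
application the API makes possible. Please note that the endpoint URL,
parameter names and response shape are illustrative assumptions for
this email, not the actual CORE API specification; consult the API
documentation for the real interface.

    # Sketch of an application built on top of an aggregator API.
    # NOTE: the endpoint, the "q"/"page"/"apiKey" parameters and the
    # JSON response shape are hypothetical, not the real CORE API spec.
    import requests

    API_BASE = "https://core.example.org/api/search"  # hypothetical endpoint

    def search_fulltext(query, api_key, page=1):
        """Search the aggregated full-text index and return matching records."""
        response = requests.get(
            API_BASE,
            params={"q": query, "page": page, "apiKey": api_key},
            timeout=30,
        )
        response.raise_for_status()
        return response.json().get("records", [])

    for record in search_fulltext("text mining", api_key="YOUR-KEY"):
        print(record.get("title"), record.get("repository"))

The point is that a few lines like these search the full texts, not
just the metadata, across hundreds of repositories at once.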
Also, the goal of CORE has never been to compete with GS, but rather
to work with this service hand in hand. We are interested in supplying
GS with better quality metadata and content than what comes directly
from repositories.
CORE vs REPOSITORIES
I agree with Hugh that aggregation services should be beneficial for
the original repositories as well. This is what, I believe,
distinguishes CORE from ResearchGate and other scholarly scams. We are
genuinely interested in adding value to repositories by providing
plugins, helping to detect harvesting issues and monitoring compliance
with standards. We do link back to the repositories (but also keep a
cached copy, for performance reasons and to facilitate text-mining,
etc.).
OAI-PMH INTERFACE
I agree with Paul that OAI-PMH is not ideal and has many problems.
Therefore, we are quite interested in the adoption of ResourceSync,
but for now we have no option other than to respect the protocols
widely implemented across the repository spectrum.
The work to provide an OAI-PMH interface on top of CORE is already in progress.
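For anyone who has not worked with the protocol, here is roughly what
a harvest looks like from the client side. The ListRecords verb, the
oai_dc metadata prefix, the resumptionToken paging and the XML
namespaces are all defined by the OAI-PMH 2.0 specification; only the
repository base URL below is a made-up example.

    # Minimal OAI-PMH harvest sketch using only the standard library.
    # Verbs, prefixes and namespaces follow the OAI-PMH 2.0 spec; the
    # base URL is a made-up example.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "https://repository.example.org/oai"  # hypothetical repository
    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    def harvest(base_url):
        """Yield (title, identifier) pairs for every Dublin Core record."""
        params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as response:
                tree = ET.parse(response)
            for record in tree.iter(OAI + "record"):
                title = record.find(".//" + DC + "title")
                identifier = record.find(".//" + DC + "identifier")
                yield (
                    title.text if title is not None else None,
                    identifier.text if identifier is not None else None,
                )
            # Large result sets are paged with a resumptionToken; an
            # absent or empty token means the list is complete.
            token = tree.find(".//" + OAI + "resumptionToken")
            if token is None or not (token.text or "").strip():
                break
            params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

    for title, identifier in harvest(BASE_URL):
        print(identifier, title)

Note that this only yields metadata; actually locating and fetching
the full text behind each record is exactly the hard part the protocol
leaves unsolved, which is why we look forward to ResourceSync.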
ONE DISAGREEMENT
> When I looked at the latest additions list of CORE, they were
> all scanned legal documents from Brazil. It's fine stuff for
> legal information services but useless for scholarly communication.
> All sorts of stuff is being thrown into repositories. That
> makes them very hard to use as a basis for advanced services.
At different times we harvest content from different repositories.
Some repositories may be topic-specific (such as legal ones), but
overall CORE contains documents from a very wide range of subjects. A
lot of stuff is also thrown onto the Web, and that does NOT make
Google and other search engines very hard to use.
FUNDING
On behalf of my team I can say that we will keep building an OA
content aggregation regardless of funding. Why? Because we believe in
its necessity, usefulness and potential to contribute to the success
of the OA movement. We know that OA needs such an infrastructure and
that this infrastructure should not fall into the hands of commercial
publishers.
Let me finish with this idea: if we (as a society) come up with an OA
technical infrastructure that enables us to redefine the way
researchers communicate (in ways possible with OA papers but not with
non-OA papers), and we articulate these benefits to all participating
user groups, this will be a very strong incentive for the adoption of
OA. Perhaps stronger than the political stimulus we see today.
REFERENCES
[1] Knoth, P. and Zdrahal, Z. (2012) CORE: Three Access Levels to
Underpin Open Access. D-Lib Magazine, 18(11/12). Corporation for
National Research Initiatives.
http://www.dlib.org/dlib/november12/knoth/11knoth.html
[2] Knoth, P. (2013) From Open Access Metadata to Open Access Content:
Two Principles for Increased Visibility of Open Access Content. Open
Repositories 2013, Charlottetown, Prince Edward Island, Canada.
http://core-project.kmi.open.ac.uk/files/oa-metadata-to-oa-content.pdf
[3] Knoth, P. (2013) CORE: Aggregation Use Cases for Open Access. Demo
at the Joint Conference on Digital Libraries (JCDL 2013),
Indianapolis, Indiana, United States.
http://core-project.kmi.open.ac.uk/files/jcdl2013_v7.pdf
Petr
On 28 October 2013 05:53, Thomas Krichel <[log in to unmask]> wrote:
> Hugh Glaser writes
>
>> Mirroring is nearly always wrong on the Web, other than for performance &c.
>
> I could not agree less.
>
> There is more to scholarly communication than the web. You can't
> build advanced services without having a local copy of the metadata,
> and sometimes you need the full text.
>
>> A big point about the Web is that you don’t go around copying data
>> and republishing it; it is already available somewhere, and you
>> point at it.
>
> No you can't. You need a local copy to provide a service. Example:
> citation indexing. You need to translate the full text to extract
> the textual data. You can't do this with a link; you need a copy
> of the full text to extract the data.
>
> And with the current repository infrastructure, finding the
> full-text is not trivial and is error-prone.
>
>> You may need to go and get the data, so that you can add value and
>> then publish metadata, but, like Google etc you then point at the
>> original, which is what people want (although because you have the
>> pages you can provide a cache for when things go wrong (preservation
>> service)).
>
> I have data from OpenDOAR going back many years. When you look at
> it you will be shocked to see how many repositories have
> appeared and then been closed. A flimsy linking system just does not
> make the cut.
>
>> This could be a real opportunity to move the OA world on towards the
>> vision that many of us have!
>
> Furthering repositories for OA means providing better and more
> interesting user/contributor services. A "page by paper, with local
> search engine" thingy, which is where most IR user interfaces seem
> to be stuck, is not good enough.
>
> --
>
> Cheers,
>
> Thomas Krichel http://openlib.org/home/krichel
> skype:thomaskrichel