
Re: [Ref-Links] report of the reference linking working group



>As promised in Don Waters's note last week, the report of the small working
>group of the NISO/DLF/NFAIS/SSP workshop on reference linking is now
>available on the web at:
>http://www.lib.uchicago.edu/Annex/pcaplan/reflink.html.
>
>We hope the paper will stimulate discussion on this list, which will in
>turn help set the agenda for the second reference linking workshop to be
>held in conjunction with SSP on June 9 in Boston.
>
>p
>

The report is a clear exposition of the problem and a good starting point,
despite the "update date," which violates causality.

I'd like to take issue with the following paragraphs:

>snip<
Running a local interceptor is likely to be a somewhat demanding task, as
potentially the size of the resolution database and the rate of change in
it could be very large, as it reflects each individual article available
from a repository.   It was suggested that smaller libraries typically get
their electronic journal literature from a small number of aggregator
services. If true, perhaps these vendors could, as part of their service,
provide an interceptor for the identifiers of the articles included in the
service. Larger libraries subscribing to myriad sources would presumably
have the resources to support their own interceptors. Again, aggregators
and publishers would provide the data to resolve identifiers for their own
materials as part of the subscription service. The library would maintain
the interceptor merging and managing data from the various sources.

The details of this model require further exploration and development. It
requires the development of browsers with the ability to support name
resolution and browsers that allow users to configure a hierarchy of
interceptors. It requires the development of a default, DNS-like service to
route identifiers to the default resolution server for the namespace. It
requires further specification and testing of the interceptor concept, and
perhaps most importantly, it requires the willingness of publishers and
third-party abstracting, indexing, and aggregator services to provide
interceptors or data for locally-managed interceptors.
>snip<
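
Before commenting, let me make the quoted model concrete. Here is a minimal
sketch, in Python, of what the hierarchy of interceptors and the DNS-like
default routing might amount to. The interceptor addresses and the namespace
table are placeholders of my own invention; only dx.doi.org is a real address.

    # Sketch of the resolution chain the report describes: ask each
    # configured interceptor in turn, then fall back to a default resolver
    # chosen by namespace, DNS-style. All addresses except dx.doi.org are
    # made up.
    import urllib.parse
    import urllib.request

    LOCAL_INTERCEPTORS = [
        "http://resolver.mylibrary.example/resolve",     # library-run
        "http://resolver.aggregator.example/resolve",    # vendor-supplied
    ]

    DEFAULT_RESOLVERS = {
        "doi": "http://dx.doi.org/",                     # real address
        "pmid": "http://pubmed.example/",                # placeholder
    }

    def resolve(identifier):
        """Return a location for an identifier such as 'doi:10.1000/xyz'."""
        for base in LOCAL_INTERCEPTORS:
            url = base + "?id=" + urllib.parse.quote(identifier)
            try:
                with urllib.request.urlopen(url) as reply:
                    return reply.geturl()     # the interceptor redirected us
            except OSError:
                continue                      # not held here; try the next one
        namespace, _, local_part = identifier.partition(":")
        return DEFAULT_RESOLVERS[namespace] + local_part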


A large number of libraries are already running "local interceptors" for
the purpose of filtering obscene material. A journal interceptor is in fact
an easier technical problem, because journal publishers are likely to
cooperate (it helps them sell subscriptions) and because the porn to be
tracked is a much larger dataset than the journal literature.

Running an interceptor is not at all a demanding task. You just plug it
into your proxy server. Elementary school librarians do it. The database
update problems for a journal interceptor are similar to those already
addressed by porn interceptors.
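
To put that in concrete terms, the database side boils down to something
like the following sketch, assuming (my invention, not a real format) that
each publisher or aggregator ships a flat identifier-to-URL table:

    # Merge the identifier-to-URL tables the vendors might ship (a
    # tab-separated "identifier<TAB>URL" file is an invented format) into
    # the one lookup table the proxy-level interceptor consults.
    import csv
    import glob

    def load_resolution_table(pattern="vendor-feeds/*.tsv"):
        table = {}
        for path in sorted(glob.glob(pattern)):   # later feeds overwrite earlier
            with open(path, newline="") as feed:
                for identifier, url in csv.reader(feed, delimiter="\t"):
                    table[identifier.strip()] = url.strip()
        return table

    def intercept(table, identifier):
        """Return a local redirect target, or None to let the request pass
        through untouched -- the same decision a filtering proxy makes."""
        return table.get(identifier)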

The redirection aspect of an interceptor can be solved with the same tools
being developed for linking. PubMed, DOI, and S-Link-S can all be used for
this; each has its advantages.
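
Once the interceptor decides it holds nothing locally, handing off to any of
these services is just URL construction. In the sketch below only the DOI
proxy address is real; the PubMed and S-Link-S patterns are placeholders,
since each service has its own syntax.

    # Hand an unresolved identifier off to an existing linking service.
    FRONT_ENDS = {
        "doi":    "http://dx.doi.org/%s",         # real address
        "pmid":   "http://pubmed.example/%s",     # placeholder
        "slinks": "http://slinks.example/%s",     # placeholder
    }

    def redirect_url(service, local_part):
        return FRONT_ENDS[service] % local_part

    # redirect_url("doi", "10.1000/xyz") -> "http://dx.doi.org/10.1000/xyz"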

Interceptor technology is old and mature (well, at least by Internet time).
Sure, you can develop a new DNS-like service, and there are clear
advantages to doing so, but an interceptor technology based on HTTP is
likely to get to market a lot faster.
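
In practice the whole interceptor protocol can be an ordinary HTTP redirect.
A toy version, assuming the kind of merged table sketched above (the port,
path, and query parameter are arbitrary choices of mine):

    # Toy HTTP interceptor: answer GET /resolve?id=... with a 302 to a
    # locally known copy, or to the public DOI proxy otherwise. A sketch
    # only; the sample table is a stand-in for the merged vendor data.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    TABLE = {"10.1000/xyz": "http://publisher.example/xyz.pdf"}  # stand-in

    class Interceptor(BaseHTTPRequestHandler):
        def do_GET(self):
            identifier = parse_qs(urlparse(self.path).query).get("id", [""])[0]
            target = TABLE.get(identifier, "http://dx.doi.org/" + identifier)
            self.send_response(302)               # plain old HTTP redirect
            self.send_header("Location", target)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), Interceptor).serve_forever()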

Eric
Eric Hellman
Openly Informatics, Inc.
http://www.openly.com/           Tools for 21st Century Scholarly Publishing

------------------------------------------------------
Ref-Links maillist  -  Ref-Links@doi.org
http://www.doi.org/mailman/listinfo/ref-links