Finnish Turku City Library and the Vaski consortia now Open Data with 1.8M MARC-records
Thu, 13 Oct 2011

Let's open up our metadata containers

I’m happy to announce that our Vaski consortium of public libraries, serving a total of 300,000 citizens in Turku and a dozen surrounding municipalities in western Finland, has recently published all of our 1.8 million bibliographic records in the open, as one big pile of data (see the entry on The Data Hub).

Each of the records describes a book, recording, movie, song or other publication in our library catalogue. Titles, authors, publishing details, library classifications, subject headings, identifiers and so on are systematically stored in MARC format, the international structured library metadata standard since the late 1960s.
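As a concrete illustration, here is a minimal sketch of how such a MARC dump could be inspected with the Python pymarc library. The library choice and file name are assumptions for illustration, not something the Vaski release prescribes; the records are assumed to be in binary MARC (ISO 2709):

from pymarc import MARCReader

# 'vaski.mrc' is a hypothetical file name for the downloaded dump.
with open('vaski.mrc', 'rb') as fh:
    for record in MARCReader(fh):
        # In MARC21, field 245 $a holds the title and 100 $a the main author.
        title_field = record['245']
        author_field = record['100']
        print(title_field['a'] if title_field else '(no title)',
              '/',
              author_field['a'] if author_field else '(no author)')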

Unless I’ve missed something, ours is the third large-scale Open Data publication from the libraries of Finland. The first was the 670,000 bibliographic records of the HelMet consortium (see on The Data Hub), another consortium of public libraries around the capital, Helsinki. That publication was initiated and organized in 2010 by Kirjastot.fi Labs, a project seeking more agile, innovative library concepts. The second important Open Data publication was our national general thesaurus, Yleinen suomalainen asiasanasto (YSA), which is also available as a cool semantic ontology.

Joining this group of Open Data publications was natural for our Vaski consortium, because we are moving our data from one place to another anyway: we are in the middle of converting from our national FinMARC flavour to the international MARC21 flavour of MARC, swapping our library system from Axiell PallasPro to Axiell Aurora, and implementing a new, ambitious search and discovery interface for all the Finnish libraries, archives and museums (yes, it’s busy times here and we love the taste of a little danger). All this means we are extracting, injecting, converting, mangling, breaking, fixing, disassembling and reassembling all of our data. So we asked ourselves: why not publish all of our bibliographic data on the net while we are at it?

The process of going Open Data has been quite seamless for us. On my initiative, the core concept of Open Data was explained to the consortium’s board. As there were no objections or further questions, we contacted our vendor BTJ, who immediately supported the idea. From there on it was basically just a matter of some formalities with BTJ, consulting international colleagues regarding licensing, writing a little press release, organizing a few hundred megabytes of storage space on the internet, and trying to make sure the Open Data move didn’t get buried under other, more practical things during the summertime.

For our data license we have chosen the liberal Creative Commons Zero (CC0) license, because we want to place as few obstructions on our data as possible. However, we have agreed on a 6-month embargo with BTJ, the company that does most of the cataloguing for Finnish public libraries. We believe it is a better compromise to publish data that is slightly outdated than to make the realm of immaterial property rights any more unclear than it already is.

Traditional library metadata at Turku main library

We seriously cannot anticipate what our Open Data publication will lead to. Perhaps it will lead to absolutely nothing at all. I believe most organizations opening up their data face this uncertainty. What we do know for sure, however, is that all of the catalogue records we have carefully crafted, acquired and collected are seriously underutilized if they are used for only one purpose: finding and locating items in the library collections.

For such a valuable asset as our bibliographic metadata, I feel this is not enough. By removing obstacles to accessing our raw data, we open up new possibilities for ourselves, for our colleagues (understood widely), and for anybody interested.

Mace Ojala, project designer
Turku City Library/Vaski-consortia; National Digital Library of Finland, Cycling for libraries, etc.
http://xmacex.wordpress.com, @xmacex, Facebook etc.

Bibliographica gadget in Wikipedia
Mon, 06 Jun 2011

What is a Wikipedia gadget?

Thinking of ways to show the possibilities of linked data, we have made a Wikipedia gadget, making use of a great resource that the Wikimedia developers give to the community.

Wikipedia gadgets are small pieces of code you can add to your Wikipedia user templates; they let you add functionality and render extra information as you browse Wikipedia pages.

In our case, we wanted to retrieve information from our Bibliographica site and render it in Wikipedia. Since the pages are rendered with specific markup, we can use the ISBN numbers present in Wikipedia articles to query the Bibliographica database, in a way similar to what Mark has done with the Edinburgh International Science Festival.

Bibliographica.org offers an ISBN search endpoint at http://bibliographica.org/isbn/, so if we ask for the page http://bibliographica.org/isbn/0241105161 we receive:

[{"issued": "1981-01-01T00:00:00Z", "publisher": {"name": "Hamilton"}, "uri": "http://bnb.bibliographica.org/entry/GB8102507", "contributors": [{"name": "Boyd, William, 1952-"}], "title": "A good man in Africa"}]

I can use this information to make a window pop up with more information about works when we hover over their ISBNs on Wikipedia pages. If my user template has the Bibliographica gadget, every time I open a wiki page the script will query our database for all the ISBNs the page contains.
If something is found, it will render a frame around the ISBN numbers:

And if I hover over them, I see a window with information about the book:

Get the widget

So, if you want this widget, first you need to create an account on Wikipedia, and then edit your default template to add the JavaScript snippet. Once you do this (instructions here), you will be able to get the information available in Bibliographica about the books.

Next steps

For now, the interaction goes in just one direction. Later on, we will be able to feed information back to Bibliographica.

Medline dataset
Mon, 23 May 2011

Announcing the CC0 Medline dataset

We are happy to report that we now have a full, clean public domain (CC0) version of the Medline dataset available for use by the community.

What is the Medline dataset?

The Medline dataset is a subset of bibliographic metadata covering approximately 98% of all PubMed publications. It comes as a package of around 653 XML files, listing records chronologically by the date each record was created. There are approximately 19 million publication records.

Medline is a maintained dataset; updates append chronologically to the current data.

Read our explanation of the different PubMed datasets for further information.

Where to get it

The raw dataset can be downloaded from CKAN: http://ckan.net/package/medline

What is in a record

Most records contain useful non-copyrightable bibliographic metadata such as author, title, journal, PubMed record ID. Many also have DOIs. We have stripped out any potentially copyrightable material such as abstracts.

Read our technical description of a record for further information.
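As a rough sketch of how the fields named above can be pulled out of one of the XML files with Python's standard library. Element names follow the Medline DTD; the file name is illustrative:

import xml.etree.ElementTree as ET

# Illustrative file name; the dataset ships as ~653 such files.
for cit in ET.parse('medline_sample.xml').iter('MedlineCitation'):
    pmid = cit.findtext('PMID')
    title = cit.findtext('Article/ArticleTitle')
    journal = cit.findtext('Article/Journal/Title')
    authors = [a.findtext('LastName') for a in cit.iter('Author')]
    print(pmid, '|', title, '|', journal, '|', authors)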

Sample usage

We have made an online visualisation of a sample of the Medline dataset; however, the visualisation relies on WebGL, which is not yet supported by all browsers. It should work in Chrome and probably Firefox 4.

This is just one example, but it shows what we can build and learn when we have open access to the necessary data.

OpenBiblio workshop report
Mon, 09 May 2011

#openbiblio #jiscopenbib

The OpenBiblio workshop took place on 6 May 2011 at the London Knowledge Lab.

Participants

  • Peter Murray-Rust (Open Bibliography project, University of Cambridge, IUCr)
  • Mark MacGillivray (Open Bibliography project, University of Edinburgh, OKF, Cottage Labs)
  • William Waites (Open Bibliography project, University of Edinburgh, OKF)
  • Ben O’Steen (Open Bibliography project, Cottage Labs)
  • Alex Dutton (Open Citation project, University of Oxford)
  • Owen Stephens (Open Bibliographic Data guide project, Open University)
  • Neil Wilson (British Library)
  • Richard Jones (Cottage Labs)
  • David Flanders (JISC)
  • Jim Pitman (Bibserver project, UCB) (remote)
  • Adrian Pohl (OKF bibliographic working group) (remote)

During the workshop we covered some key areas where we have seen some success already in the project, and discussed how we could continue further.

Open bibliographic data formats

In order to ensure successful sharing of bibliographic data, we require agreement on a suitable yet simple format via which to disseminate records. Whilst representing linked data is valuable, it also adds complexity; however, simplicity is key for ensuring uptake and for enabling easy front end system development.

Whilst data is available as RDF/XML, JSON is now a very popular format for data transfer, particularly where front end systems are concerned. We considered various JSON linked data formats, and have implemented two for further evaluation. In order to make sure this development work is as widely applicable as possible, we wrote parsers and serialisers for JSON-LD and RDF/JSON as plugins for the popular RDFlib.

The RDF/JSON format is, of course, RDF; therefore, it requires no further change to enable it to handle our data, and our RDF/JSON parser and serialiser are already complete. However, it is not very JSON-like, as data takes the subject(predicate(object)) form rather than the general key:value form. This is where JSON-LD can improve the situation – it provides for listing information in a more key:value-like format, making it easier for front end developers not interested in the RDF relations to utilise. But this leads to additional complexity in the spec and parsing requirements, so we have some further work to complete:
* remove angle brackets from blank nodes
* use type coercion to move types out of the main code
* use language coercion to omit languages

Our code is currently available in our repository, and we will request that our parsers and serialisers get added to RDFlib or to RDFextras once they are complete (they are still in development at present).
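As a small sketch of the kind of output we are aiming for, here is how a graph can be serialised to JSON-LD with RDFlib once a JSON-LD serialiser plugin is registered (recent RDFlib releases bundle one under the format name 'json-ld'; ours was still in development at the time of writing):

from rdflib import Graph

g = Graph()
g.parse(data='''
@prefix dc: <http://purl.org/dc/elements/1.1/> .
<http://bnb.bibliographica.org/entry/GB8102507>
    dc:title "A good man in Africa" .
''', format='turtle')

# 'json-ld' yields the key:value-style output discussed above,
# which is friendlier to front-end code than RDF/XML.
print(g.serialize(format='json-ld', indent=2))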

To further assist in representing bibliographic information in JSON, we also intend to implement BibJSON within JSON-LD; this should provide the necessary linked data functionality via JSON-LD support, whilst also enabling simpler representation of bibliographic data via key:value pairs where that is all that is required.

By making these options available to our users, we will be able to gauge the most popular representation format.

Regardless of the format used, a critical consideration is that of stable references to data; without them, maintaining datasets will be very hard. The British Library data, for example, does not yet have suitable identifiers. However, the BL are moving forward with applying identifiers and will be issuing a new version of their dataset soon, which we will take as a new starting point. We have provided a list of records that we have identified as non-unique, and in turn the BL will share the tools they use to manage and convert data where possible, to enable better community collaboration.

Getting more open datasets

We are building on the success of the BL data release by continuing work on our CUL and IUCr data, and by acquiring more datasets. The latest is the Medline dataset; there were some initial issues with properly identifying this dataset, so we have a previous blog post, a link to further information, the Medline DTD, and the specifications of the PubMed data elements to help.

The Medline dataset

We are very excited to have the Medline dataset; we are currently cleaning it so that we can provide access to all the non-copyrightable material it contains, which should represent a listing of about 98% of all articles published in PubMed.

The Medline dataset comes as a package of approximately 653 XML files, listing records chronologically by the date each record was created. This also means that further updates will be trackable, as they will append to the current dataset. We have found that most records contain useful non-copyrightable bibliographic metadata such as author, title, journal and PubMed record ID, and that some contain further metadata, such as citations, which we will remove. Once this is done, and we have checked that there are unique IDs (e.g. that the PubMed IDs are unique), we will make the raw CC0 collection available, then attempt to get it into our Bibliographica instance. We will then also be able to generate visualisations of our total dataset, which we hope will be approaching 30 million records by the end of the JISC Open Bibliography project.
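A hedged sketch of that uniqueness check, again using only the standard library (the file pattern is illustrative):

import glob
import xml.etree.ElementTree as ET
from collections import Counter

pmid_counts = Counter()
for path in glob.glob('medline*.xml'):  # hypothetical file pattern
    for cit in ET.parse(path).iter('MedlineCitation'):
        pmid_counts[cit.findtext('PMID')] += 1

duplicates = [p for p, n in pmid_counts.items() if n > 1]
print(len(pmid_counts), 'distinct PMIDs;', len(duplicates), 'duplicated')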

Displaying bibliographic records

Whilst Bibliographica allows for the display of individual bibliographic records and enables building collections of such records, it does not yet provide a means of neatly displaying lists of records. We have partnered with Jim Pitman of UC Berkeley to develop his BibServer to fill this requirement, and also to bring further functionality such as search and faceted browse. This also provides a development direction for the output of the project beyond the July end date of the JISC Open Bibliography project.

Searching bibliographic records

Given the collaboration between Bibliographica and BibServer on collection and display of bibliographic records, we are also considering ways to enable search across non-copyrightable bibliographic metadata relating to any published article. We believe this may be achievable by building a collection of DOIs with relevant metadata, and enabling crowdsourcing of updates and comments.

This effort is separate from the main development of the project, but would make a very good addition both to the functionality of the developed software and to the community. It would also tie in with any future functionality enabling author identification and information retrieval, such as ORCID, and allow us to build on the work done at sites such as BIBKN.

Disambiguation without deduplication

There have been a number of experiments recently highlighting the fact that a simple Lucene search index over datasets tends to give better matches than more complex methods of identifying duplicates. Ben O’Steen and Alex Dutton both provided examples of this from their work on the Open Citation project.

This is also supported by a recent paper by Jeff Bilder entitled “Disambiguation without Deduplication” (not publicly available). The main point is that instead of deduplicating objects we can simply perform machine disambiguation and make sameAs assertions between multiple objects; this enables changes to still be applied to different versions of an object by disparate groups (e.g. where each group has a different spelling or identifier for some key part of the record) whilst maintaining a relationship between the objects. We could build on this functionality by applying expertise from the library community if necessary, although deduplication or merging should only be contemplated if a new dataset is being formed that some agent takes responsibility to curate. If not, it is better to simply cluster the data by sameAs assertions, and keep track of who is making those assertions, in order to assess their reliability.
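To make the idea concrete, here is a toy sketch (the identifiers and threshold are invented for illustration): score candidate pairs with a simple string similarity and record sameAs assertions, together with who made them, instead of merging the records:

from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical records from two sources, never merged or deleted.
records = {
    'bl:008123456': 'A good man in Africa',
    'cul:b1098765': 'A Good Man in Africa.',
}

assertions = []
ids = sorted(records)
for i, a in enumerate(ids):
    for b in ids[i + 1:]:
        if similarity(records[a], records[b]) > 0.9:
            # Keep the asserting agent so reliability can be assessed later.
            assertions.append((a, 'owl:sameAs', b, 'asserted-by:demo'))

print(assertions)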

We suggest a concept for increasing collaboration on this sort of work: a ReCaptcha of identities. Upon login, perhaps to Bibliographica or another relevant system, a user could be presented with two questions, one of which we know the answer to, the other being a request to match identical objects. This, in combination with decent open source tools for bibliographic data management (building on tools such as Google Refine and Needlebase), would allow simple, verifiable disambiguation across large datasets.

Sustaining open bibliographic data

Having had success in getting open bibliographic datasets and prototyping their availability, we must consider how to maintain long term open access. There are three key issues:

Continuing community engagement

We must continue to work with the community, and to provide explanatory information to those needing to make decisions about bibliographic data, such as the OpenBiblio Principles and the Open Bibliographic Data guide. We must also improve resource discovery by supporting the requirement for generating collections and searching content.

Additionally, quality bibliographic data should be hosted at a few key sites (there are a variety of options, such as Freebase, CKAN and Bibliographica), but we must also ensure that community members can be recruited both to manage records within these central options and to provide access to smaller distributed nodes, where data can be owned and maintained at the local level whilst being discoverable globally.

Maintaining datasets

Dataset maintenance is critical to ongoing success – stale data is of little use to people, and disregard for content maintenance will put off new users. We must co-ordinate with source providers such as the BL by accepting changesets from them and incorporating them into other versions. This is already possible with the Medline data, for example, and will very soon be the case with BL updates too. We should advocate for this method of dataset updates during any future open data negotiations. This will allow us to keep our datasets fresh and relevant, and to properly represent growing datasets.

We must continue to promote open access to non-copyrightable datasets, and ensure that there is a location for open data providers to easily make their raw datasets available – such as CKAN.

We will ensure that all the software we have developed during the course of the project – and in future – will remain open source and publicly available, so that it will be possible for anyone to perform the transforms and services that we can perform.

Community involvement with dataset maintenance

We should support community members who wish to take responsibility for overseeing the updating of datasets. This is critical for long-term sustainability, but such people are hard to find. They need to be recruited and provided with simple tools which empower them to easily maintain and share the datasets they care about with a minimal time commitment. Thus we must make sure that our software and tools are not only open source, but usable by non-team members.

We will work on developing tools such as ReCaptcha for disambiguation, and on building game / rank table functionality for those wishing to participate in entity disambiguation (in addition to machine disambiguation).

Critical mass

We hope that by providing almost 30 million records to the community under CC0 license, and with the support of all the providers that made this possible, we will achieve a critical mass of data, and an exemplar for future open access to such data.

This should provide the go-to list of such information, and inspire others to contribute and maintain it. However, such community assistance will only continue for as long as there appears to be reasonable maintenance of the corpus and software we have already developed – if this slips into disrepair, community engagement is far less likely.

Maintaining services

The bibliographica service we currently run already requires significant hardware. Once we add the Medline data, we will require very large indexes, demanding a great deal of RAM and fast disks. There is therefore a long-term maintenance requirement implicit in running any central service of open bibliographic data on this scale.

We will present a case for ongoing funding and seek sources of financial support for technical and software maintenance and for community engagement.

Business cases

In order to ensure future engagement with groups and business entities, we must provide clear examples of the benefits of open bibliographic data. We have already done some work on visualising the underlying data, which we will develop further for higher impact. We will identify key figures in the data to feed into such representations as exemplars. Additionally, we will continue to develop mashups using the datasets, to show the serendipitous benefit that increases exposure but is only possible with unambiguously open access to useful data.

Events and announcements

We will continue to promote our work and the efforts of our partners, and advocate further for open bibliography by publicising our successes so far. We will co-ordinate this with JISC, the BL, the OKF and other interested groups, to ensure that the impact of announcements by all groups is enhanced.

We will present our work at further events throughout the year, such as attendance and sessions at OKCon, OR11 and other conferences, and by arranging further hackdays.

Getting open bibliographic data from (UK)PMC / PubMed
Tue, 03 May 2011

For some time now, the JISC Open Bibliography project team has been attempting to get open bibliographic data from (UK)PMC / PubMed. Everyone involved (Robert Kiley – Wellcome; Ben O’Steen and Peter Murray-Rust – JISC OpenBib; Jeff Beck – NIH/NLM/NCBI; Johanna McEntyre) has worked hard to achieve this, but attempts have been hampered by ambiguities and technical restrictions. The purpose of this post is to clarify and highlight these issues as examples of stumbling blocks on any path to linked open data, to specify what it is we are trying to achieve at present, and to learn how to improve this process.

WHAT WE ARE TRYING TO DO

Closed access to bibliography is dangerous – it actually holds back the scientific discovery process. We therefore believe it is important to have an authoritative Open collection of bibliographic records. This acts as a primary resource which the community can use for normalisation, discovery, annotation, and so on. We seek confirmation that we can have programmatic access to the approximately twenty million records in PubMed. NCBI, for example, should be able to say “these are the articles which we have in PubMed” without breaking any laws or contracts. These articles would be identified by their core bibliographic data.

PROBLEMS

  • We received an email last year stating that we could have such access to PubMed, but it has since become unclear exactly what “PubMed” covers.
  • Identifying the correct content is not straightforward – are we talking about PMC / UKPMC / PubMed / Open Access subset?
  • What licenses are involved and on which subsets do open licenses such as CC0 apply?
  • These datasets are very large, so incremental and recordset-by-recordset requests to servers have resulted in roadblocks such as timeouts and errors.

WHAT DATASET ARE WE TALKING ABOUT

  • The 2 million articles in PMC are NOT all open access. There are 251,129 articles (approx 12% of PMC) that are in the open access subset.
  • Although there are 2 million or so articles in PMC which anyone can look at, print out etc, only 251k of these have an OA licence which allows people to re-use the content, including creating derivative works.
  • PMC and UKPMC have approximately the same full-text content. There are a small minority of journals which refused to allow their content to be mirrored to UKPMC.
  • The distinction between “public access” content and “open access” articles (i.e. the 0.25M articles) is irrelevant here, as we are only interested in the bibliographic record, not the content.
  • For current purposes PMC and UKPMC can be used interchangeably.
  • PMC is only a subset of PubMed – which contains about twenty million records, the totality of content in NIH / NLM / NCBI.
  • The MEDLINE dataset is a subset of about 98% of PubMed.
  • However we believe, as per previous discussions, that the legal situation applies equally to PubMed and to PMC.
  • So we are looking for every bibliographic record in PubMed (or MEDLINE if that is easier to acquire).

WHAT DO WE MEAN BY BIBLIOGRAPHIC RECORD

  • “Bibliography” is sometimes used as synonymous with “a given collection of bibliographic records”. Consider “the bibliographic data for Pubmed”; what we are interested in is enumerating individual bibliographic records.
  • “Citation” often refers to the reference within the fulltext to another publication (via its bibliographic record). The list of citations is not in general Open except in Open Access journals.
  • For the purposes of Open Bibliography we are restricting our discussion to what we call core bibliographic data (described in the open bibliographic data principles)
  • We regard the core bibliographic data as uncopyrightable, and generally acknowledged to be necessarily Open.
  • This core bibliographic data is what we mean by the bibliographic record.
  • Such records are unoriginal and inevitable, being the only way of actually identifying a work.
  • Although collections of bibliographic data are copyrightable (at least in Europe) because they are the result of the creative act of assembling a set of records, the individual records are not.
  • There is no creative act in compiling the list of bibliographic records held by NCBI/Pubmed as it is an exhaustive enumeration.
  • We believe that there is no moral case and probably no legal case for regarding these as the property of the publisher.

WHAT DO WE NOT MEAN BY BIBLIOGRAPHIC RECORD

  • As abstracts appear to be copyrightable, we do not include abstracts or annotations.
  • If it is not in the open bibliographic principles, we do not consider it to be in the bibliographic record.

WHAT WE HOPE TO GET NOW

  • Due to issues with programmatic access to the PMC / PubMed datasets (restrictions on requests to the servers that contain them), we request a dump of the MEDLINE dataset.
  • This represents about 98% of PubMed which we believe is or should be available as CC0.
  • As MEDLINE also has incremental updates, we request ongoing access to those, to allow change tracking and synchronisation.
  • We have filled in the automatic leasing form for the MEDLINE set a few times since February (the most recent attempt was at the end of April).
  • We hope that the position is now clearly stated in this post, and await confirmation.
  • Upon agreement we look forward to receiving the XML files containing the MEDLINE dataset, from which we will extract the aforementioned unoriginal and re-usable bibliographic data.

We look forward to resolving this, to receiving the data, and to helping to make it openly available.

open theses at EURODOC
Thu, 07 Apr 2011

#jiscopenbib #opentheses

On Friday 1st April 2011, Mark MacGillivray, Peter Murray-Rust and Ben O’Steen remotely attended the EURODOC conference in Vilnius, Lithuania in order to take part in an Open Theses workshop locally hosted by Daniel Mietchen and Alfredo Ferreira (funded by the JISC Open Bib project to attend in person).

During the workshop we began laying the foundations for open theses in Europe, discussing with current and recently finished postgraduate students and collecting data from those present and from anyone else interested.

As described by Peter prior to the event:

As part of our JISCOpenBIB project we are running a workshop on Open Theses at EURODOC 2011. “We” is an extended community of volunteers centered round the main JISC project. In that project we have developed an approach to the representation of Open Bibliographic metadata, and now we are extending this to theses.

Why theses? Because, surprisingly, many theses are not easily discoverable outside their universities. So we are running the workshop to see how much metadata we can collect on European theses. Things like name, university, subject, date, title – standard metadata.

We have the beginnings of a dataset at:

https://spreadsheets.google.com/ccc?key=0AnCtSdb7ZFJ3dHFTNDhJU0xfdGhIT01WeTBMMDZWOGc&hl=en_GB&authkey=CJuy4owB

The content of this datasheet will hopefully be used to populate an open theses collection in bibliographica, and in addition it is powering a mashup that will allow us to view at a glance the theses that have been published across the world, and where possible a link to the work itself:

http://benosteen.com/eurodoc.html

We also have a survey to fill in, to collect opinion around copyright issues for current / soon to be published theses, based at:

http://openbiblio.net/opentheses-survey/

The data collected by this survey is available at:

https://spreadsheets.google.com/ccc?key=0AnCtSdb7ZFJ3dDN1cHQ3TDJpYWRaWmkxWlFDS2lMWXc&hl=en_GB&authkey=CMKN-O8I#gid=0

Bibliographic models in RDF
Fri, 10 Sep 2010

Put it in RDF to solve all your problems!

As with most things in life, the reality is often a little more complex. If you are old enough, you may well remember when this very same cry was often uttered, but with ‘RDF’ replaced by ‘XML’ or, if you are older still, ‘SGML’.

We haven’t quite reached the tipping point with bibliographic data in RDF at which a de facto model and structure has clearly emerged. There are plenty of contenders, though, each based on a different model of how this data should be encapsulated in RDF. The main distinguishing characteristic is how markedly hierarchical or flat the model structure is.

A model that has emerged from the library world is FRBR – Functional Requirements for Bibliographic Records. From Wikipedia:

FRBR is a conceptual entity-relationship model developed by the International Federation of Library Associations and Institutions (IFLA) that relates user tasks of retrieval and access in online library catalogues and bibliographic databases from a user’s perspective. It represents a more holistic approach to retrieval and access as the relationships between the entities provide links to navigate through the hierarchy of relationships.

There are plenty of articles and documents online that explain it further, so I will not take up your time with a summary, just my opinion. FRBR is very much built around the notion of books – what a book is, taking into account things like editions and so on. Where FRBR really falls down a rabbit hole is in its treatment of things like serials and journal articles. That treatment feels very much like an afterthought, and the philosophical ideas of Work and Expression become much murkier, especially when considering linking these records to conference papers and blog posts by the same article authors.

There is enough of a model, however, to render an understandable bibliographic ‘record’ for an article in RDF, and this post will give an example of this, using David Shotton and Silvio Peroni’s FaBIO ontology to encapsulate the information in a FRBR-like manner.

The data used comes from an IUCr paper “Nicotinamide-2,2,2-trifluoroethanol (2/1)” Acta Cryst. (2009). E65, o727-o728, which has RDF embedded in the HTML page itself. The original RDF looks something like this:

@prefix dc: <http://purl.org/dc/elements/1.1/>.
@prefix dcterms: <http://purl.org/dc/terms/>.
@prefix foaf: <http://xmlns.com/foaf/0.1/>.
@prefix prism: <http://prismstandard.org/namespaces/1.2/basic/>.
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.

<doi:10.1107/S1600536809007594>
     prism:eissn "1600-5368";
     prism:endingpage "728";
     prism:issn "1600-5368";
     prism:number "4";
     prism:publicationdate "2009-04-01";
     prism:publicationname "Acta Crystallographica Section E: Structure Reports Online";
     prism:rightsagent "med@iucr.org";
     prism:section "organic compounds";
     prism:startingpage "727";
     prism:volume "65";
     dc:creator "Bardin, J.",
         "Florence, A.J.",
         "Johnston, B.F.",
         "Kennedy, A.R.",
         "Wong, L.V.";
     dc:date "2009-04-01";
     dc:description "The nicotinamide (NA) molecules of the title compound, 2C6H6N2O.C2H3F3O, form centrosymmetric R22(8) hydrogen-bonded dimers via N-H...O contacts. The asymmetric unit contains two molecules of NA and one trifluoroethanol molecule disordered over two sites of equal occupancy. The packing consists of alternating layers of nicotinamide dimers and disordered 2,2,2-trifluoroethanol molecules stacking in the c-axis direction. Intramolecular C-H...O and intermolecular N-H...N, O-H...N, C-H...N, C-H...O and C-H...F interactions are present.";
     dc:identifier _9:S1600536809007594;
     dc:language "en";
     dc:link <http://scripts.iucr.org/cgi-bin/paper?fl2234>;
     dc:publisher "International Union of Crystallography";
     dc:rights <http://creativecommons.org/licenses/by/2.0/uk>;
     dc:source <urn:issn:1600-5368>;
     dc:subject "";
     dc:title "Nicotinamide-2,2,2-trifluoroethanol (2/1)";
     dc:type "text";
     dcterms:abstract "The nicotinamide (NA) molecules of the title compound, 2C6H6N2O.C2H3F3O, form centrosymmetric R22(8) hydrogen-bonded dimers via N-H...O contacts. The asymmetric unit contains two molecules of NA and one trifluoroethanol molecule disordered over two sites of equal occupancy. The packing consists of alternating layers of nicotinamide dimers and disordered 2,2,2-trifluoroethanol molecules stacking in the c-axis direction. Intramolecular C-H...O and intermolecular N-H...N, O-H...N, C-H...N, C-H...O and C-H...F interactions are present.".

This bibliographic information rendered into a FaBIO model (amongst other ontologies):

@prefix fabio: <http://purl.org/spar/fabio/> .
@prefix c4o: <http://purl.org/spar/c4o/> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix frbr: <http://purl.org/vocab/frbr/core#> .
@prefix prism: <http://prismstandard.org/namespaces/basic/2.0/> .
@prefix : <http://example.org/> . # empty prefix (example namespace) added so the snippet parses standalone

:article
    a fabio:JournalArticle
    ; dc:title "Nicotinamide-2,2,2-trifluoroethanol (2/1)"
    ; dcterms:creator [ a foaf:Person ; foaf:name "Johnston, B.F." ]
    ; dcterms:creator [ a foaf:Person ; foaf:name "Florence, A.J." ]
    ; dcterms:creator [ a foaf:Person ; foaf:name "Bardin, J." ]
    ; dcterms:creator [ a foaf:Person ; foaf:name "Kennedy, A.R." ]
    ; dcterms:creator [ a foaf:Person ; foaf:name "Wong, L.V." ]
    ; dc:rights <http://creativecommons.org/licenses/by/2.0/uk>
    ; dc:language "en"
    ; fabio:hasPublicationYear "2009"
    ; fabio:publicationDate "2009-04-01"
    ; frbr:embodiment :printedArticle , :webArticle
    ; frbr:partOf :issue
    ; fabio:doi "10.1107/S1600536809007594"
    ; frbr:part :abstract
    ; prism:rightsagent "med@iucr.org" .

:abstract
    a fabio:Abstract
    ; c4o:hasContent "The nicotinamide (NA) molecules of the title compound, 2C6H6N2O.C2H3F3O, form centrosymmetric R22(8) hydrogen-bonded dimers via N-H...O contacts. The asymmetric unit contains two molecules of NA and one trifluoroethanol molecule disordered over two sites of equal occupancy. The packing consists of alternating layers of nicotinamide dimers and disordered 2,2,2-trifluoroethanol molecules stacking in the c-axis direction. Intramolecular C-H...O and intermolecular N-H...N, O-H...N, C-H...N, C-H...O and C-H...F interactions are present." .

:printedArticle
    a fabio:PrintObject
    ; prism:pageRange "727-728" .

:webArticle
    a fabio:WebPage
    ; fabio:hasURL "http://scripts.iucr.org/cgi-bin/paper?fl2234" .

:volume
    a fabio:JournalVolume
    ; prism:volume "65"
    ; frbr:partOf :journal .

:issue
    a fabio:JournalIssue
    ; prism:issueIdentifier "4"
    ; frbr:partOf :volume .

:journal
    a fabio:Journal
    ; dc:title "Acta Crystallographica Section E: Structure Reports Online"
    ; fabio:hasShortTitle "Acta Cryst. E"
    ; dcterms:publisher [ a foaf:Organization ; foaf:name "International Union of Crystallography" ]
    ; fabio:issn "1600-5368" .

The most obvious model and ontology that has emerged for describing bibliographic metadata in RDF is the Bibliographic Ontology (BIBO), developed by Frédérick Giasson and Bruce D’Arcus. It has been in existence for long enough to gain acceptance by a number of other projects, such as EPrints, Talis Aspire and Chronicling America (the Chronicling America website at the Library of Congress provides a view onto millions of pages of digitized newspaper content from around the United States).

The same data again, rendered this time using BIBO’s model and ontology, rather than a FRBR-like one:

@prefix bibo: <http://purl.org/ontology/bibo/> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix frbr: <http://purl.org/vocab/frbr/core#> .
@prefix prism: <http://prismstandard.org/namespaces/basic/2.0/> .
@prefix : <http://example.org/> . # empty prefix (example namespace) added so the snippet parses standalone

<info:doi:10.1107/S1600536809007594>
    a bibo:Article
    ; dc:title "Nicotinamide-2,2,2-trifluoroethanol (2/1)"
    ; dcterms:isPartOf <urn:issn:16005368>
    ; bibo:volume "65"
    ; bibo:issue "4"
    ; bibo:pageStart "727"
    ; bibo:pageEnd "728"
    ; dc:creator :author1
    ; dc:creator :author2
    ; dc:creator :author3
    ; dc:creator :author4
    ; dc:creator :author5
    ; bibo:authorList (:author1 :author2 :author3 :author4 :author5)
    ; dc:rights <http://creativecommons.org/licenses/by/2.0/uk>
    ; dc:language "en"
    ; dc:date "2009-04-01"
    ; bibo:doi "10.1107/S1600536809007594"
    ; bibo:abstract "The nicotinamide (NA) molecules of the title compound, 2C6H6N2O.C2H3F3O, form centrosymmetric R22(8) hydrogen-bonded dimers via N-H...O contacts. The asymmetric unit contains two molecules of NA and one trifluoroethanol molecule disordered over two sites of equal occupancy. The packing consists of alternating layers of nicotinamide dimers and disordered 2,2,2-trifluoroethanol molecules stacking in the c-axis direction. Intramolecular C-H...O and intermolecular N-H...N, O-H...N, C-H...N, C-H...O and C-H...F interactions are present."
    ; prism:rightsagent "med@iucr.org" .

<urn:issn:16005368>
    a bibo:Journal
    ; dc:title "Acta Crystallographica Section E: Structure Reports Online"@en ;
    ; bibo:shortTitle "Acta Cryst. E"@en
    ; bibo:issn "1600-5368" .

:author1
    a foaf:Person
    ; foaf:name "Johnston, B.F." .

:author2
    a foaf:Person
    ; foaf:name "Florence, A.J." .

:author3
    a foaf:Person
    ; foaf:name "Bardin, J." .

:author4
    a foaf:Person
    ; foaf:name "Kennedy, A.R."

:author5
    a foaf:Person
    ; foaf:name "Wong, L.V."

Comments on which is the most usable, the most understandable, and which is likely to be the better model for sharing this data with other people are most welcome. This is an area in which the community will have to choose a model; practically, wrapping the information in any of the models is straightforward, but if you put it into a model that no one uses, the model becomes more of a data coffin than a useful concept.
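One practical footnote to that choice, offered as a hedged aside: because both renderings above carry dc:title, a consumer can already extract titles from either model with the same SPARQL query, for example via RDFlib (the file name is illustrative):

from rdflib import Graph

g = Graph()
g.parse('article.ttl', format='turtle')  # either rendering above, saved to a file

# dc:title appears in both the FaBIO and the BIBO version,
# so this query works unchanged against either model.
query = '''
PREFIX dc: <http://purl.org/dc/elements/1.1/>
SELECT ?s ?title WHERE { ?s dc:title ?title }
'''
for s, title in g.query(query):
    print(s, title)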
