Open Bibliography and Open Bibliographic Data – Open Bibliographic Data Working Group of the Open Knowledge Foundation (http://openbiblio.net)

Community Discussions 3 (Fri, 13 Jul 2012) – http://openbiblio.net/2012/07/13/community-discussions-3/

It has been a couple of months since the round-up on Community Discussions 2 and we have been busy! BiblioHack was a highlight for me, and last week included a meeting of many OKFN types – here’s a picture taken by Lucy Chambers for @OKFN of some team members:

[Photo: IMG_0351]

The Discussion List has been busy too:

  • Further to David Weinberger’s pointer that Harvard released 12 million bibliographic records with a CC0 licence, Rufus Pollock created a collection on the DataHub and added it to the Biblio section for ease of reference

  • Rufus also noticed that OCLC had issued their major release of VIAF, meaning that millions of author records are now available as Open Data (under Open Data Commons Attribution license), and updated the DataHub dataset to reflect this

  • Peter Murray-Rust noted that Nature has made its metadata Open CC0

  • David Shotton promoted the International Workshop on Contributorship and Scholarly Attribution at Harvard, and prepared a handy guide for attribution of submissions

  • Adrian Pohl circulated a call for participation for the SWIB12 “Semantic Web in Bibliotheken” (Semantic Web in Libraries) Conference in Cologne, 26-28 November this year, and hosted the monthly Working Group call

  • Lars Aronsson looked at multivolume works, asking whether the OpenLibrary can create and connect records for each volume. HathiTrust and Gallica were suggested as potential tools in collating volumes, and the barcode (containing information populated by the source library) was noted as being invaluable in processing these

  • Sam Leon explained that TEXTUS would be integrating the BibServer facet view and encouraged people to have a look at the work so far; Tom Oinn highlighted the collaboration between Enriched BibJSON and TEXTUS, and explained that he would be adding a ‘TEXTUS’ field to BibJSON for this purpose (a rough sketch of what such a record might look like follows below)

  • Sam also circulated two tools for people to test, Pundit and Korbo, which have been developed out of Digitised Manuscripts to Europeana (DM2E)

  • Jenny Molloy promoted the Open Science Hackday which took place last week – see below for a snapshot courtesy of @OKFN:

[Photo: IMG_1964]
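Returning to the BibJSON point above: here is a minimal sketch of what a BibJSON-style record carrying a link back to a TEXTUS text might look like. The ‘textus’ key and its contents are illustrative assumptions for this post, not the agreed BibJSON extension.

    import json

    # A minimal BibJSON-style record; the "textus" key is a hypothetical
    # extension showing how a pointer back to a TEXTUS text could be carried.
    record = {
        "title": "An Enquiry Concerning Human Understanding",
        "author": [{"name": "David Hume"}],
        "year": "1748",
        "type": "book",
        # Hypothetical field: where the full text lives in a TEXTUS instance.
        "textus": {"url": "http://example.org/textus/texts/hume-enquiry"},
    }

    print(json.dumps(record, indent=2))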

In related news, Peter Murray-Rust is continuing to advocate the cause of open data – do have a read of the latest posts on his blog to see how he’s getting on.

The Open Biblio community continues to be invaluable to the Open GLAM, Heritage, Access and other groups too and I would encourage those interested in such discussions to join up at the OKFN Lists page.

BiblioHack: Day 1 (Thu, 14 Jun 2012) – http://openbiblio.net/2012/06/14/bibliohack-day-1/

The first day of BiblioHack was a day of combinations and sub-divisions!

The event attendees started the day all together, both hackers and workshop / seminar attendees, and Sam introduced the purpose of the day as follows:

  • coders – to build tools and share ideas about things that will make our shared cultural heritage and knowledge commons more accessible and useful;
  • non-coders – to get a crash course in what openness means for galleries, libraries, archives and museums, why it’s important and how you can begin opening up your data;
  • everyone – to get a better idea about what other people working in your domain do, and to engender a better understanding between librarians, academics, curators, artists and technologists, in order to foster the creation of better, cooler tools that respond to the needs of our communities.

The hackers began the day with an overview of what a hackathon is for and how it can be run, as presented by Mahendra Mahey, and followed with lightning talks as follows:

  • Talk 1 Peter Murray-Rust & Ross Mounce – Content and Data Mining and a PDF extractor
  • Talk 2 Mike Jones – the m-biblio project
  • Talk 4 Ian Stuart – ORI/RJB (formerly OA-RJ)
  • Talk 5 Etienne Posthumus – Making a BibServer Parser
  • Talk 6 Emanuil Tolev – IDFind – identifying identifiers (“Feedback and real user needs won’t gather themselves”)
  • Talk 7 Mark MacGillivray – BibServer – what the project has been doing recently, how that ties into the open access index idea.
  • Talk 8 Tom Oinn – TEXTUS
  • Talk 9 Simone Fonda – Pundit – collaborative semantic annotations of texts (Semantic Web-related tool)
  • Talk 10 Ian Stuart – The basics of Linked Data

We decided we wanted to work as a community, using our different skills towards one overarching goal, rather than breaking into smaller groups with separate agendas. We formed the central idea of an ‘open bibliographic tool-kit’ and people identified three main areas to hack around, playing to their skills and interests:

  • Utilising BibServer – adding datasets and using PubCrawler
  • Creating an Open Access Index
  • Developing annotation tools

At this point we all broke for lunch, and the workshoppers and hackers mingled together. As hoped, conversations sprang up between people from the two different groups, and it was great to see suggestions arising from shared ideas, with the practical applications of one group being explained alongside the theories of the other.

We re-grouped and the workshop continued until 16.00 – see here for Tim Hodson’s excellent write-up of the event and the talks given – when the hackers were joined by some who had attended the workshop. Each group gave a quick update on its status, to try to persuade the new additions to join their particular work-flow, and each group grew in number. After more hushed discussions and typing, the day finished with a talk from Tara Taubman about her background in the legalities of online security and IP, and we went for dinner. Hacking continued afterwards and we celebrated a hard day’s work down the pub, looking forward to what was to come.

Day 2 to follow…

DBLP releases its 1.8 million bibliographic records as open data (Fri, 09 Dec 2011) – http://openbiblio.net/2011/12/09/dblp-releases-its-1-8-million-bibliographic-records-as-open-data/

The following guest post is by Marcel R. Ackermann, who works at Schloss Dagstuhl – Leibniz Center for Informatics on expanding the DBLP computer science bibliography.

[Photo: Computer Science literature]

Right from the early days of DBLP, the decision was made to make its whole data set publicly available. Yet it was only at the age of 18 that DBLP adopted an open-data license.

The DBLP computer science bibliography provides access to the metadata of over 1.8 million publications, written by over 1 million authors and published in several thousand journals and conference proceedings series. It is a helpful tool in the daily work of researchers and computer science enthusiasts from around the world. Although DBLP started with a focus on database systems and logic programming (hence the acronym), it has grown to cover all disciplines of computer science.

The success of DBLP wasn’t planned. In 1993, Michael Ley from the University of Trier, Germany, started a simple webserver to play around with this so-called “world wide web” everybody was so excited about in those days. He chose to set up some webpages listing the tables of contents of recent conference proceedings and journal issues, some other pages listing the articles of individual authors, and provided hyperlinks back and forth between these pages. People from the computer science community found this quite useful, so he just kept adding papers. Funds were raised to hire helpers, some new technologies were implemented, and the data set grew over the years.

The approach of DBLP has always been a pragmatic one. So it wasn’t until the recent evolution of DBLP into a joint project of the University of Trier and Schloss Dagstuhl – Leibniz Center for Informatics that the idea of finding a licensing model came to our minds. In this process, we found the source material and the commentaries provided by the Open Knowledge Foundation quite helpful. We quickly concluded that either the PDDL or the ODC-By license would be the right choice for us. In the end, we chose ODC-By since, as researchers ourselves, it is our understanding that external sources should be referenced. Although from a pragmatic point of view nothing has changed at all for DBLP (since permissions to use, copy, redistribute and modify had been generally granted before), we hope that this will help to clarify the legal status of the DBLP data set.


For additional information about access to and technical details of the dataset see the corresponding entry on the Data Hub.
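As a rough illustration of how the dump can be processed: the data is distributed as one very large XML file, so it is usually read as a stream rather than loaded whole. The sketch below assumes the record elements of the public dblp.xml dump (article, inproceedings and friends, with author/title/year children) and a dblp.dtd sitting next to the file; treat these details as assumptions and check the documentation linked from the Data Hub entry.

    from lxml import etree  # pip install lxml

    RECORD_TAGS = {"article", "inproceedings", "proceedings", "book",
                   "incollection", "phdthesis", "mastersthesis", "www"}

    def iter_dblp_records(path="dblp.xml"):
        """Stream records from the DBLP dump without loading the whole file.

        load_dtd=True lets lxml resolve the character entities that the dump
        defines in dblp.dtd (assumed to sit next to dblp.xml).
        """
        for _, elem in etree.iterparse(path, events=("end",), load_dtd=True):
            if elem.tag in RECORD_TAGS:
                yield {
                    "type": elem.tag,
                    "key": elem.get("key"),
                    "title": elem.findtext("title"),
                    "year": elem.findtext("year"),
                    "authors": [a.text for a in elem.findall("author")],
                }
                elem.clear()  # free memory as we go

    if __name__ == "__main__":
        for i, record in enumerate(iter_dblp_records()):
            print(record["year"], record["title"])
            if i >= 4:  # just peek at the first few records
                break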


Credits: Photo licensed CC-BY-SA by Flickr user Unhindered by Talent.

Animal Garden – open science issues (Tue, 22 Nov 2011) – http://openbiblio.net/2011/11/22/animal-garden-open-science-issues/

Peter and Tom Murray-Rust put together a presentation called Animal Garden, which we have now converted to a prezi for nice swooshy embedding-ness in web pages.

This prezi tells the story of some teddybear scientists who try to share their lovely flowers with the world, only to find that their flowers get locked up behind a big wall… but there is hope! Can a certain open access turtle save them?

Recommendations on Releasing Library Data as Open Data (Mon, 21 Nov 2011) – http://openbiblio.net/2011/11/21/recommendations-on-releasing-library-data-as-open-data/

Last week, the German KIM-DINI working group (KIM = Competence Centre Interoperable Metadata) officially published recommendations for the release of open data by libraries and related institutions. The recommendations are intended to serve information facilities as a guide and reference text for the release of open data.

Besides descriptive metadata, which is already covered in other documents, the recommendations also address non-sensitive data produced by libraries and related institutions, e.g. statistical data or circulation data. Furthermore, the recommendations cover not only open licensing but also open access, open standards, documentation, sustainability and other aspects of open data.

The recommendations include nine principles for open library data. To be called ‘open’ at all, the three core demands of open access, open standards and open licenses have to be met.

Furthermore open data should be updated regularly and also be published as raw data. It should be described in a structured form and be accessible without registration. Precautions for a sustainable provision of open data should be taken.

The recommendations follow existing principles and guidelines for open data in memory institutions or the public sector in general. The German original can be found here (shortlink: http://is.gd/openbibdata).

The text itself is published under a CC0 license. Its dissemination, re-publication and reuse are expressly desired.

I had a first try at an English translation of the recommendations, which is posted below. Please correct mistakes and bad English on the etherpad at http://okfnpad.org/dini-kim-recommendations. Everybody is free to contribute.

Recommendations on Opening up Library Data

v1.0, published on October 31st, 2011

1. Preamble

Libraries and other information facilities work with data on a daily basis and in various ways, for different purposes. They act as producers, providers, users and aggregators of data. To reap the full benefits of data produced by public institutions, it is necessary to publish it openly on the internet.

2. Subject

Information facilities produce various forms of data that could be the subject of an open data release. It is important to emphasize that an open data release can only be carried out under the conditions that

  1. the respective data isn’t personal data or otherwise sensitive data,
  2. the respective institution is the holder of the database rights or, where applicable, the copyright over the data.

Library data includes both bibliographic data in accordance with the “Principles on Open Bibliographic Data” and other data that is created by libraries and related institutions.

In administering library services, further data accrues that – insofar as it isn’t personal or otherwise sensitive data – can also be released as open data. Such data includes item data, acquisition data, anonymized circulation data and statistical data.

3. Principles

The DINI-KIM working group recommends that library institutions in the German-speaking world and beyond release their library data as open data. In doing so, the following principles have to be strictly adhered to:

  • Open Access, that is the data as a whole must be available on the web openly and without cost.
  • Open Standards, that is the data must be available in an openly documented and non-proprietary format.
  • Open licenses, that is the data (as individual records and as a collection) must be published under an open license according to the Open Definition. To guarantee the data’s best possible legal interoperability, the DINI-KIM working group recommends the use of a public domain waiver like the CC0 Public Domain Dedication or the Public Domain Dedication and License (PDDL).

Furthermore, we recommend considering the following principles:

  • Documentation: A structured description of the data should be published. At best, the data should be registered at a central registry (like the Data Hub); a minimal sketch of such a description follows at the end of this section.
  • Raw data: As far as possible, the data should be made available in the form in which it accrues in libraries’ information cycle. All further filtering or processing is left to those who make use of the data.
  • Timeliness: The data should be published in a reasonable time after its creation. What is reasonable may vary regarding the kind of data.
  • Structured: The data should be published in a structured format which allows easy processing.
  • Non-discriminating: Accessing the data should be possible for all, the only acceptable hurdle being access to the internet. That is, no registration should be required.
  • Sustainability: Provision of open data should be connected with the development of a sustainability concept which ensures long-term archiving and access to older versions of the data.

Compliance with all principles will rarely be guaranteed from the beginning. However, the first three principles are necessary conditions for speaking of “open library data” in the first place. It is strongly recommended to start by publishing raw data, even if it is not yet available in an openly documented format and/or is not structured or regularly updated. In the medium term, the aim should be to comply with all principles.
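As a toy illustration of the documentation and structure principles above, a dataset description can be shipped as a small machine-readable file next to the data dumps. The sketch below uses a Data Package-style layout; the file name, field names and values are illustrative assumptions, not a prescribed schema.

    import json

    # Illustrative, Data Package-style description of an open library dataset.
    # All names and values below are placeholders for the sketch.
    description = {
        "name": "example-library-bibliographic-records",
        "title": "Example Library: open bibliographic records",
        "licenses": [{"name": "CC0-1.0",
                      "path": "https://creativecommons.org/publicdomain/zero/1.0/"}],
        "resources": [{
            "path": "records-full-dump.mrc.gz",
            "format": "MARC21",
            "description": "Full catalogue dump, refreshed monthly",
        }],
    }

    # Write the description next to the dump so harvesters and registries can find it.
    with open("datapackage.json", "w", encoding="utf-8") as fh:
        json.dump(description, fh, indent=2, ensure_ascii=False)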

4. Related Material

Created within the group “Lizenzen” (licenses) of the DINI-KIM working group.

Contributors: Patrick Danowski, Kai Eckert, Christian Hauschke, Adrian Pohl and others

The text of these recommendations is published under the Creative Commons license CC0 (this also holds for the English translation). Thus, it is in the public domain, that is, it belongs to all and may be used for any purpose without constraints. When reusing the text, you are asked to name the source.

German Guide for Open Library Catalogue Data (Thu, 03 Nov 2011) – http://openbiblio.net/2011/11/03/german-guide-for-open-library-catalogue-data/

The German legal scholar and lawyer Dr. Till Kreutzer has written a legal guide titled “Open Data – Releasing data from library catalogues” at the request of the North Rhine-Westphalian Library Service Center (hbz).

For some time now there has been a strong effort to open up data from library catalogues – see for instance these lists of library data sources. For many libraries, diverse and sometimes complex legal questions are an obstacle to publishing open data. The legal guide aims to give some orientation. It is intended for employees of public and academic libraries, and especially for people without a legal background.

Part 1 of the guide deals with legal questions that arise in the creation of catalogues: it explains whether individual parts of a catalogue can be copyrighted and, if so, under which conditions. It then examines under which conditions data providers hold a sui generis database right over complete databases.

Part 2 of the guide addresses the question of under which conditions a library or related institution can publish a database as open data. Finally, licenses suitable for opening up catalogue data are recommended.

The guide is available under a Creative Commons Attribution license, and the author and publisher encourage wider distribution of the text.

This text is a freely translated and slightly changed version of an announcement by the North Rhine-Westphalian Library Service Center (hbz). Disclaimer: The hbz is Adrian Pohl’s employer.

Finnish Turku City Library and the Vaski consortia now Open Data with 1.8M MARC-records (Thu, 13 Oct 2011) – http://openbiblio.net/2011/10/13/finnish-turku-city-library-and-the-vaski-consortia-now-open-data-with-1-8m-marc-records/

[Photo: Let's open up our metadata containers]

I’m happy to announce that our Vaski consortium of public libraries, serving a total of 300,000 citizens in Turku and a dozen surrounding municipalities in western Finland, has recently published all of our 1.8 million bibliographical records in the open, as a big pile of data (see the dataset on The Data Hub).

Each of the records describes a book, recording, movie, song or other publication in our library catalogue: titles, authors, publishing details, library classifications, subject headings, identifiers and so on, systematically saved in MARC format, the international, structured library metadata standard since the late 1960s.
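For readers who want to poke around in a dump like this, a minimal sketch using the pymarc library might look like the following. The file name is a placeholder, and the sketch assumes a binary MARC (ISO 2709) file with MARC21 field numbering rather than MARCXML or FinMARC.

    from pymarc import MARCReader  # pip install pymarc

    # Placeholder file name; assumes a binary MARC dump with MARC21 tags.
    with open("vaski-records.mrc", "rb") as fh:
        for record in MARCReader(fh):
            titles = record.get_fields("245")                      # 245: title statement
            isbns = [f.value() for f in record.get_fields("020")]  # 020: ISBN
            title = titles[0].value() if titles else "(no title)"
            print(title, "|", ", ".join(isbns))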

Unless I’ve missed something, ours is the third large-scale Open Data publication from the libraries of Finland. The first was the 670,000 bibliographical records of the HelMet consortium (see on The Data Hub), another consortium of public libraries around the capital, Helsinki. That first publication was organized and initiated in 2010 by Kirjastot.fi Labs, a project seeking more agile, innovative library concepts. The second important Open Data publication was our national general thesaurus, Yleinen suomalainen asiasanasto (YSA), which is also available as a cool semantic ontology.

Joining this group of Open Data publications was natural for our Vaski consortium, because we are moving our data from one place to another anyway: we are in the middle of converting from our national FinMARC flavour to the international MARC21 flavour of MARC, swapping our library system from Axiell PallasPro to Axiell Aurora, and implementing a new, ambitious search and discovery interface for all the Finnish libraries, archives and museums (yes, it’s busy times here and we love the taste of a little danger). All this means we are extracting, injecting, converting, mangling, breaking, fixing, disassembling and reassembling all of our data. So we asked ourselves: why not publish all of our bibliographical data on the net while we are at it?

The process of going Open Data has been quite seamless for us. On my initiative, the core concept of Open Data was explained to the consortium’s board. As there were no objections or further questions, we contacted our vendor BTJ, who immediately supported the idea. From there on it was basically just a matter of some formalities with BTJ, consulting international colleagues regarding licensing, writing a little press release, and organizing a few hundred megabytes of storage space on the internet. And trying to make sure the Open Data move didn’t get buried under other, more practical things during the summertime.

For our data license we have chosen the liberal Creative Commons Zero (CC0) license, because we want as few obstructions to our data as possible. However, we have agreed on a 6-month embargo with BTJ, the company that does most of the cataloguing for the Finnish public libraries. We believe it is a good compromise to publish data that is slightly outdated rather than make the realm of immaterial property rights any more unclear than it already is.

[Photo: Traditional library metadata at Turku main library]

We seriously cannot anticipate what our Open Data publication will lead to. Perhaps it will lead to absolutely nothing at all. I believe most organizations opening up their data face this uncertainty. However, what we do know for sure is that all of the catalogue records we have carefully crafted, acquired and collected are seriously underutilized if they are only used for one particular purpose: finding and locating items in the library collections.

For such a valuable asset as our bibliographical metadata, I feel this is not enough. By removing obstacles to accessing our raw data, we open up new possibilities for ourselves, for our colleagues (understood widely), and for anybody interested.

Mace Ojala, project designer
Turku City Library/Vaski-consortia; National Digital Library of Finland, Cycling for libraries, etc.
http://xmacex.wordpress.com, @xmacex, Facebook etc.

Did you hear that loud bang? That was CENL releasing their data under CC0 (Tue, 04 Oct 2011) – http://openbiblio.net/2011/10/04/did-you-hear-that-loud-bang-that-was-cenl-releasing-their-data-under-cc0/

The Conference of European National Librarians (CENL) came up with great news last Wednesday: data from all European national libraries will be published under an open license! From the announcement:

Meeting at the Royal Library of Denmark, the Conference of European National Librarians (CENL), has voted overwhelmingly to support the open licensing of their data. CENL represents Europe’s 46 national libraries, and are responsible for the massive collection of publications that represent the accumulated knowledge of Europe.

What does that mean in practice?
It means that the datasets describing all the millions of books and texts ever published in Europe – the title, author, date, imprint, place of publication and so on, which exists in the vast library catalogues of Europe – will become increasingly accessible for anybody to re-use for whatever purpose they want.

The first outcome of the open licence agreement is that the metadata provided by national libraries to Europeana.eu, Europe’s digital library, museum and archive, via the CENL service The European Library, will have a Creative Commons Universal Public Domain Dedication, or CC0 licence. This metadata relates to millions of digitised texts and images coming into Europeana from initiatives that include Google’s mass digitisations of books in the national libraries of the Netherlands and Austria.

See also this post by Richard Wallis.

(Thanks to Mathias for the title of this post.)

Open bibliographic data checklist (Sun, 25 Sep 2011) – http://openbiblio.net/2011/09/25/open-bibliographic-data-checklist/

This guest post by Jindřich Mynarz was originally published here under a Creative Commons Attribution license.

I have decided to write a few points that might be of interest to those thinking about publishing open bibliographic data. The following is a fragment of an open bibliographic data checklist, or, how to release your library’s data into the public without a lawyer holding your hand.

I have been interested in open bibliographic data for a couple of years now, and I try to promote it at the National Technical Library, where we have so far released only one authority dataset – the Polythematic Structured Subject Heading System. The following points are based on my experience with this topic. What should you pay attention to when opening up your bibliographic data?

  • Make sure you are the sole owner of the data or make arrangements with the other owners. For instance, things may get complicated where the data was created collaboratively via shared cataloguing. If you are not in complete control of the data, then start by consulting the other proprietors that have a stake in the datasets.
  • Check that the data you are about to release is not bound by contractual obligations. For example, you may publish a dataset under a Creative Commons licence, only to realize that there are unresolved contracts with parties that helped fund the creation of that data years ago. Then you need to discuss this issue with the involved parties to determine whether making the data open is a problem.
  • Read your country’s legislation to learn what you are able to do with your data. For instance, in the Czech Republic it is not possible to put data into the public domain intentionally. The only way public domain content is created is by the natural order of things, i.e., the author dies, leaves no heir, and after quite some time the work enters the public domain.
  • See if the data is copyrightable. For instance, if the data does not fall within the scope of the copyright law of your country, it is not suitable to be licenced under Creative Commons, since this set of licences draws its legal force from copyright law; it is an extension of copyright and builds on it. Facts are not copyrightable, and most bibliographic records are made of facts. However, some contain creative content, for example subject indexing or an abstract, and as such are appropriate for licencing based on copyright law. Your mileage may vary.
  • Consult the database act. Check whether your country has a specific law dealing with the use of databases that might add more requirements needing your attention. For example, in some legal regimes databases are protected on another level, as an aggregation of individual data elements.
  • Different licencing options may be applicable to the content and the structure of a dataset, for instance when there are additional terms required by database law. You can opt for dual-licensing and use two different licences: one for the dataset’s content that is protected by copyright law (e.g., a Creative Commons licence), and one for the dataset’s structure, for which copyright protection may not apply (e.g., the Public Domain Dedication and License).
  • Choose a proper licence. A proper open licence is a licence that conforms with the Open Definition (and will not get you sued), so pick one of the OKD-Compliant licenses. A good source of solid information about licences for open data is Open Data Commons.
  • BONUS: Tell your friends. Create a record in the Data Hub (formerly CKAN) and add it to the bibliographic data group to let others know that your dataset exists.
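As a rough sketch of what such a registration might look like programmatically: the Data Hub runs on CKAN, whose action API can create dataset records. The endpoint URL, API key and dataset fields below are placeholders, so check the registry’s current API documentation rather than treating this as its definitive interface.

    import json
    import urllib.request

    # Placeholders: endpoint, API key and dataset metadata are illustrative only.
    DATAHUB_API = "https://datahub.io/api/3/action/package_create"
    API_KEY = "your-api-key-here"

    dataset = {
        "name": "example-library-bibliographic-records",
        "title": "Example Library: open bibliographic records",
        "notes": "Full catalogue dump released under CC0.",
        "license_id": "cc-zero",
        "tags": [{"name": "bibliographic"}],
    }

    request = urllib.request.Request(
        DATAHUB_API,
        data=json.dumps(dataset).encode("utf-8"),
        headers={"Content-Type": "application/json", "Authorization": API_KEY},
    )
    with urllib.request.urlopen(request) as response:
        print(json.load(response).get("success"))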

Even if it may seem there are lots of things you need to check before releasing open bibliographic data, it is actually easy. It is a performative speech act: you only need to declare your data open to make it open.

Disclaimer: If you are unsure about any of the steps above, consult a lawyer. Note that the usual disclaimers apply to this post, i.e., IANAL.

Open national bibliography data by New Zealand National Library (Mon, 12 Sep 2011) – http://openbiblio.net/2011/09/12/open-national-bibliography-data-by-new-zealand-national-library/

A tweet by Owen Stephens pointed me to the New Zealand National Library’s service which provides the national bibliography as MARC/MARCXML dumps (350,000 records) licensed under a Creative Commons Attribution license. Great!

Obviously this service has been around for a while now, but I hadn’t heard about it before. As it wasn’t registered on CKAN/the Data Hub, I created an entry and added it to the Bibliographic Data group.

Using attribution licenses for data

This publication is an interesting case as it uses an attribution license for bibliographic data. Until now, most open bibliographic datasets have been published under a public domain license. So, the question pops up: “Under what conditions may I use a CC-BY licensed dataset?”

The readme.txt accompanying the download file (118 MB!) gives some clarity:

“You do not need to make any attribution to National Library of New Zealand Te Puna Matauranga o Aotearoa if you are copying, distributing or adapting only a small number of individual bibliographic records from the overall Dataset.

If you publish, distribute or otherwise disseminate this work to the public without adapting it, the following attribution to National Library of New Zealand Te Puna Matauranga o Aotearoa should be used:

“Source: National Library of New Zealand Te Puna Matauranga o Aotearoa and licensed by the Department of Internal Affairs for re-use under the Creative Commons Attribution 3.0 New Zealand Licence (http://creativecommons.org/licenses/by/3.0/nz/).”

If you adapt this work in any way or include it in a wider collection, and publish, distribute or otherwise disseminate that adaptation or collection to the public, the following style of attribution to National Library of New Zealand Te Puna Matauranga o Aotearoa should be used:

“This work uses data sourced from National Library of New Zealand Te Puna Matauranga o Aotearoa’s Publications New Zealand Metadata Dataset which is licensed by the Department of Internal Affairs for re-use under the Creative Commons Attribution 3.0 New Zealand licence (http://creativecommons.org/licenses/by/3.0/nz/).”

In my opinion, these license requirements set a good precedent for licensing bibliographic data under an attribution license, although it is not clear what still passes for “a small number of individual records”. I think it is important, and the only legally consistent approach, that datasets with an attribution or share-alike license need only be attributed at the database level and not at the record level. Others who intend to use an attribution license should use similar wording.

This might be of interest for other approaches to using an attribution license, e.g. at OCLC or E-LIS.

In related news, there’ll be a LODLAM-NZ event on December 1st in Wellington, see http://lod-lam.net/summit/2011/09/08/lodlam-nz/. Converting this dataset to LOD might be a topic…

Update: Tim McNamara has already provided an RDF version of the bibliographic data and reported on his motivations and challenges, see this post.
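For a flavour of what such a conversion involves – this is not Tim McNamara’s pipeline, just a minimal sketch with an assumed field mapping and placeholder identifiers – a single record could be expressed as Dublin Core triples with rdflib:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DC

    # Placeholder base URI and record values; a real conversion would read
    # these from the MARC/MARCXML dump rather than hard-coding them.
    BASE = Namespace("http://example.org/nz-bibliography/record/")

    g = Graph()
    g.bind("dc", DC)

    record = URIRef(BASE["12345"])
    g.add((record, DC.title, Literal("An example New Zealand publication")))
    g.add((record, DC.creator, Literal("Example Author")))
    g.add((record, DC.date, Literal("2010")))
    g.add((record, DC.publisher, Literal("Example Press, Wellington")))

    print(g.serialize(format="turtle"))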
