WorldCat record use policy causes National Library of Sweden to end negotiations with OCLC

The National Library of Sweden has decided to end negotiations with OCLC about uploading its union catalog, Libris, into WorldCat, as well as about using WorldCat as a source of records in Libris. According to the announcement, Libris is and needs to remain an open database, and OCLC’s WorldCat Rights and Responsibilities for the OCLC Cooperative does not make that possible. The National Library also believes that the record use terms would make it impossible to contribute bibliographic data to Europeana and The European Library. As Karen Coyle mentions in her blog post about this decision, open data (or the lack of it) is not just an ideological stance: it “has real practical applications.” Whatever good the WorldCat record use policy has done, this is a real-world example of how it can (and in this case, has) also harmed libraries – including OCLC member libraries, who will not be able to access Libris records via WorldCat.

Library Journal contacted OCLC about the announcement, but they did not immediately respond to LJ’s request for comment.

MITx

Some of you have probably seen MIT’s announcement of MITx on December 19. Basically, “MITx will offer a portfolio of MIT courses through an online interactive learning platform.” It will “operate on an open-source, scalable software infrastructure” and offer many features that current Learning Management Systems offer, as well as some unique features of its own. While the technology sounds interesting, I am most interested in the program itself, in particular the credentialing. MIT has been a leader in open education with its OpenCourseWare project, but adding a level of credentialing is a huge step. There isn’t a lot of information available yet, but basically, if you just want to learn, you can do that for free. If you want some form of credential, there will be a fee for that. The credential will be a certificate of completion offered by a not-for-profit body within the Institute created for that purpose. The body offering the credentials will be distinctly named to avoid the impression that MIT “proper” awarded the credential, and costs are yet to be determined.

MITx has yet to announce what classes will be available, but they plan to start offering classes in Spring 2012. More information can be found in the MITx announcement FAQ. If they have something I am interested in and it fits my schedule, I may try to take a class and, if I do, I’ll probably pay for the credential.

First Top 10 of the 2011 College Football Season

Here is my top 10. As a reminder, my top 10 is based on performance, not “predictions” of the future. This is why I don’t start the poll until October. I also add weight for the portion of the schedule the school can control (out of conference). Scheduling cupcakes is not looked on favorably in my poll.

  1. LSU (6-0). Normally I ping the SEC teams for their out of conference schedules. Well, LSU stepped it up and played Oregon (neutral field, but more home than away) and West Virginia, so kudos for them. They get the #1 vote for the first week of my standings this year.
  2. Boise State (5-0). Once again Boise State opens on the road against a “big time team” and shows it can play with anyone. Boise State haters, why don’t you schedule a game up in Idaho against them in November and prove they can’t beat you?
  3. Oklahoma (5-0). They had a few out of conference cupcakes, but going on the road to play what many thought was a top 5 Florida State team shows me something. The impressive Texas win counteracts the Missouri game, where I don’t think they looked as good.
  4. Clemson (6-0). Nice road win at Auburn out of conference followed by good in-conference wins against Florida State and Virginia Tech puts Clemson in my top 5.
  5. Kansas State (5-0). I know the Florida schools haven’t been impressive this year, but K-State gets credit for an out of conference win at Miami (FL) and good conference wins at home against Baylor and Missouri.
  6. Alabama (6-0). Beating Penn State at Penn State is a good win even if Penn State isn’t that good. The other out of conference games were against cream puffs though. Good in conference wins against Florida and Arkansas help them a bit in my rankings.
  7. Illinois (6-0). Didn’t really know where to put the Illini. The victory against Arizona State, who are 3-0 in the PAC-12, was impressive, but the rest of the schedule is lacking. Once they play a few of the better Big 10 teams they will probably fall out of my top 10, but my picks are based on what they have done so far, not on my predictions of the future.
  8. Oklahoma State (5-0). Despite putting up 70 against conference foe Kansas, I am not sold on the Cowboys. Toughest out of conference game was Arizona who is 1-5. A few props to the victory over Tulsa though as at least it is an in-state match-up.
  9. Michigan (6-0). Nice out of conference win against Notre Dame. Notre Dame might not be the most impressive team but they only have one other loss (to South Florida). Rest of out of conference schedule could use some work.
  10. Georgia Tech (6-0). Only out of conference game even worth speaking about was at Kansas. Beating Kansas in a basketball game is impressive; in football, not so much. Still, at least they went on the road once against another BCS school out of conference. Should be interesting if they remain undefeated going into the Oct 29 matchup with Clemson.

What I did on my September Vacation

Last week I took off from work for some vacation, but I didn’t leave the library world behind. In fact, I co-presented a Webinar, “Cloud computing and libraries: The view from 10,000 feet,” with Dr. Heather Lea Moulaison that was put on by the Education Institute (Canada) and the Neal-Schuman Professional Education Network (USA), talked to an LIS class at the University of Missouri (incidentally, I was very impressed by the students), and attended and co-presented a session with Dr. Moulaison at the LITA National Forum.

I skipped the last couple of LITA National Forums, as in the past I have not found them as useful for me as some of the other conferences I go to. With limited travel budgets, you need to look for value. LITA does not appear to be heavily subsidized by sponsors, it isn’t a cheap conference compared to other library conferences, and the content has been a little weak in my areas. However, when an opportunity emerged to present with Heather Lea Moulaison, my co-editor of Getting Started with Cloud Computing: A LITA Guide, in her home state, I figured, hey, why not? What else am I going to do with these vacation days? If I don’t use some, I’ll lose them, so I might as well hang out with some library peeps.

I am not going to review the whole conference, but I was happy to see what seemed like an increase in sessions that were more advanced (technology-wise). It isn’t that past Forums were bad; I just wasn’t the proper audience. Kudos to this year’s program planners. I’d like to see fewer long breaks, and it seemed odd that the posters were at the end of the day Saturday with no food or refreshments, but oh well. While I am on it, this isn’t just a LITA thing, but I think sessions at most conferences are too long. I’d much rather see two 25-minute presentations than one 50-minute one. I think this is where Code4Lib, with its 20-minute time slots, does a really good job. Library Journal has a good review of the 2011 LITA National Forum (and I’m not just saying that because they liked our presentation, although I’m pleased that they did).

The slides from our LITA presentation, Practical Approaches to Cloud Computing at YOUR Library, are available on CodaBox.

Webinar on Digital Preservation tomorrow

Tomorrow (Tuesday, September 20, 2011) I will be one of two people presenting a Library Journal Webinar called Low Maintenance, High Value: How Binghamton University Libraries Used Digital Preservation to Increase its Value on Campus. My co-presenter is Ido Peled, Rosetta Product Manager, Ex Libris Group. Ex Libris is also a cosponsor. The abstract of our talk is:

Is end-to-end Digital Preservation here today? Does it require an army of staff to manage? Is it a library function or a central IT function? Answer these questions and more while hearing Edward Corrado tell the story of turning the Binghamton University Libraries into the university’s identity and heritage storehouse.

Apparently you can register now and they will send you a link once the webcast is archived for your viewing pleasure.

Major college athletic conference realignment

With Syracuse and Pitt most likely headed to the ACC and the Big 12 falling apart, I think a few more teams will leave the Big East. I don’t think any of them will go to the ACC though, or they would be going with Syracuse & Pitt. I predict Rutgers will be one of the teams that leaves and they will go to the Big 10. Possibly with UCONN and 2 Big 12 teams (Missouri? Kansas? Iowa State?) to make the Big 10 have 16 teams.

West Virginia will end up in the SEC with Texas A&M and maybe another Big 12 team (Missouri?).

I think Kansas and K-State may both end up in the ACC with Syracuse and Pitt — assuming Kansas doesn’t end up in the Big 10. K-State will probably not go to the ACC unless they are brought in with Kansas. Texas is also a possibility if the ACC lets them keep their Longhorn Network.

Texas, Texas Tech, Oklahoma, and Oklahoma State all go to the PAC-16.

The Big East will pick up the best teams left and will still be in good shape. Big 12 will be history.

Still don’t see Notre Dame joining a conference in football. BYU would fit in the PAC-12, but the PAC-12 won’t take them because of the religious affiliation.

I don’t see anyone from the ACC, PAC-12, SEC, or Big 10 swapping conferences – although if Florida State or someone else from the ACC jumps to the SEC, I wouldn’t be shocked. Likewise, Maryland to the Big 10 wouldn’t totally shock me, but it would cost each of those teams $20m to bail out, so I doubt it will happen. But really, who knows.

The real moral of this story is that the Big 12 really messed up by not adding teams when Colorado and Nebraska left. Sticking with 10 left them ripe for the picking.

Library Linked Data

Carl Grant has an excellent blog post about a vendor’s perspective on the case for the Library Linked Data model. It is well worth a read if you are interested in Library Linked Data or in how any other new idea/concept/product/service gets implemented by a vendor. Carl says that before vendors can invest (heavily) in Library Linked Data, they need to have some questions answered:

It includes a lack of clear understanding of what exactly are the problems being solved for the profession by this technology that can only be solved with the Library Linked Data model or that can’t be otherwise solved? Are these problems shared across the profession, across institutions? Is it agreed that the Library Linked Data model is the solution? If so, how many institutions, or even personal services, are in production status using this model to solve those problems?

These are interesting questions, and ones I don’t have answers for. The idea of Linked Data in the library world has been kicked around for a while, but it is only recently that I have seen any working prototypes and implementations. While I am impressed with what some people have done, and I understand some of the potential benefits, I don’t think any of the above questions have been answered. I’d really like to see some answers to the first one – especially what benefit our users will gain from it. I really want to be convinced that any significant investment in Library Linked Data will benefit our end users, and I don’t see it (yet). I have never heard a student or professor come to me with a problem that linked data will solve more completely or more efficiently than other solutions. I imagine that will come with time, but until it does, it is hard to make the case to go all-in on linked data.
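For readers who haven’t seen linked data up close, here is a rough sketch of what bibliographic data expressed as triples looks like. This is purely illustrative: the example.org URIs and the VIAF number are placeholders I made up, and a real implementation would use an RDF library rather than plain tuples.

```python
# A bibliographic record expressed as subject-predicate-object triples.
# All example.org URIs and the VIAF number are hypothetical placeholders.
BOOK = "http://example.org/book/cloud-computing"
AUTHOR = "http://example.org/person/corrado"

triples = [
    (BOOK, "http://purl.org/dc/terms/title", '"Getting Started with Cloud Computing"'),
    (BOOK, "http://purl.org/dc/terms/creator", AUTHOR),
    (BOOK, "http://purl.org/dc/terms/creator", "http://example.org/person/moulaison"),
    # Pointing at a shared external identifier is what makes the data "linked":
    (AUTHOR, "http://www.w3.org/2002/07/owl#sameAs",
     "http://viaf.org/viaf/00000000"),  # placeholder VIAF identifier
]

def serialize_ntriples(triples):
    """Render triples in a simple N-Triples-like form."""
    lines = []
    for s, p, o in triples:
        # Literals (quoted strings) stay as-is; URIs get angle brackets.
        obj = o if o.startswith('"') else "<%s>" % o
        lines.append("<%s> <%s> %s ." % (s, p, obj))
    return "\n".join(lines)

print(serialize_ntriples(triples))
```

The owl:sameAs link is the part proponents emphasize: instead of a local text string for an author, the record points at a shared identifier that other institutions’ data can also point at. Whether that buys end users anything our current systems don’t is exactly the open question above.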

There may be some benefits (mostly in the form of efficiency) from a staff point of view, but I am still not sure that at this point they outweigh the costs of implementation. Also, as Carl asks in his post (question #3), “How do we see this data being maintained?” Unless you can give me a clear plan that shows sustainability, again, it is hard to get behind the linked data model.

What does this all mean? The proponents of Library Linked Data need to get out and show some real-world examples of how it will help end users and/or how it will create efficiencies that cannot be achieved by other solutions. For example, if you are talking about bibliographic and related data, how would linked data be better than OCLC’s centralized Web-scale Management Services or Ex Libris’s Alma (assuming, for Alma, that the community zone is populated with the appropriate data)?

Will these answers come? I believe so. The Library Linked Data Incubator Group is a good start — especially if they can provide examples of how linked data will efficiently benefit end users in ways other technologies cannot — but it will be a while before we see any signs that the “Early Majority” are ready to jump on board.

New Book: Getting Started with Cloud Computing

If you are looking for some fun and educational reading, why don’t you pick up a copy or two of Getting Started with Cloud Computing: A LITA Guide? I won’t give a review, since I am biased as one of the co-editors along with Dr. Heather Lea Moulaison; I’ll just say that I think the book came out great and the chapter authors did an excellent job. A million thanks to all of the authors and to Roy Tennant for writing the foreword. Neal-Schuman was great to work with as well.

Editing a book was a lot of work (more than I thought it would be, to be honest), but it was a rewarding experience and I learned a lot along the way – both about the topic and about editing a book.

By the way, if you happen to be in Europe, don’t fret, you can head over to Facet Publishing and get the UK imprint of Getting Started with Cloud Computing.

RDA and transforming to a new bibliographic framework

I haven’t had the opportunity to work much with RDA records yet; however, I’ve been following some e-mail lists, blogs, and other commentaries where people have been discussing their experiences with it. The Library of Congress, the National Library of Medicine (NLM), and the National Agricultural Library (NAL) organized testing to evaluate whether or not they will implement RDA.

Out of this testing experience (which is still being analyzed), the Library of Congress issued “Transforming our Bibliographic Framework: A Statement from the Library of Congress” on May 13. According to the statement, “Spontaneous comments from participants in the US RDA Test show that a broad cross-section of the community feels budgetary pressures but nevertheless considers it necessary to replace MARC 21 in order to reap the full benefit of new and emerging content standards.” Therefore, Library of Congress is going to investigate, among other things, replacing MARC 21.

From what I have heard of the RDA testing, I think this makes sense. The general feeling I get is that RDA by itself is not enough of a change to make libraries expend the resources necessary to implement it. Sure, there are some improvements over AACR2, but there are also many things I read about that are not improvements. This is especially true if you agree with the Taiga Forum 6’s 2011 Provocative Statement #2 that libraries will need to participate in radical cooperation. RDA offers a bit too much flexibility to ensure that bibliographic records created by one library will fit well for other libraries. For example, the Rule of 3 is gone, which on the surface is an improvement since it allows for more than 3 authors to be included as main or added entries. However, as discussions on the RDA-L list have pointed out, RDA requires only the first author, and illustrators of children’s books, as main or added entries. Local choices are great if you are only working locally and not “radically cooperating.”

I won’t go through the list of complaints (and, to be fair, some compliments) about RDA I’ve seen, as you can find them yourselves. I think my takeaway, though, is that RDA on top of our existing bibliographic infrastructure is probably not going to make a monumental improvement for our patrons, while at the same time it will be costly to implement (especially retroactively). RDA might be better than AACR2, but is it better enough that migrating to it is worth the time and costs? I am not so sure. Maybe simple changes to AACR2 would be just as good and more practical?

Some people I talk to think moving to RDA is a necessary first step that will make more significant or radical changes easier in the future. I, however, have an underlying fear that if libraries implement RDA in the current environment, they will be stuck with it for a long time and it will actually make it harder to implement something different in the future. I hope the others are right and I am wrong, since I believe that in the short to medium term, RDA will be implemented on top of our existing bibliographic infrastructure – for better or worse.

If we replace our underlying bibliographic infrastructure with something else – say, something based on RDF or another standard model for data interchange – and change to RDA, we might actually get a significant change that will help expose our bibliographic data to the greater world of linked data, while at the same time making it easier for libraries to take advantage of linked data.

One thing that the Library of Congress needs to take into account in this process is the economic reality of implementing something new. I don’t see this specifically mentioned in the issues they plan on addressing. I assume it will be part of the underlying discussions, but I would like to see it more prominently mentioned. Part of this also involves vendors as well as open source developers of systems such as Evergreen and Koha. If LoC makes a change, it will affect libraries throughout the US (and probably the world). If the systems libraries use can’t function within this new bibliographic framework, it will be a difficult and extremely expensive transition.

I think this is something librarians, especially those in systems and cataloging, should follow closely. I know I will be doing so.

MARC is better than Dublin Core

During my presentation on Digital Preservation: Context & Content (slides) at ELAG 2011 last week, I made the statement that MARC is better than Dublin Core. This may have been a bit of a provocative statement, but I thought it was relevant to my presentation and the conference in general. I felt that someone had to say it, especially since there was a whole workshop on MARC Must Die and a number of other presentations were gleefully awaiting the day we are done with MARC. For example, with a great deal of support from the audience, Anders Söderbäck said “we the participants of #elag2011 hold these truths to be self-evident, that MARC must die…”.

Probably not surprisingly, my statement set off a mini-barrage of messages on the conference Twitter feed. Since the conference was almost over (my presentation was the second to last) and it wasn’t core to what I was talking about, I didn’t have time to explain or expand on my position. I know that some of the people who responded to my statement on Twitter were not at the conference, and at least a few, I am pretty sure, weren’t watching the live stream. Because of this, I wanted to take this time to put the statement in context and explain why I said I think MARC is better than Dublin Core. I understand people may not agree with me, and this post won’t change that, but that doesn’t mean I need to join the bandwagon that wants to kill something that has been pretty successful for the last 40 or so years.

Before going any further, since I’m not sure it was clear to everyone commenting on Twitter, I should point out that by MARC I mean MARC 21 + AACR2 (which is the common usage of the term in the USA), though I imagine the same statements would likely apply to any version of MARC plus whatever set of rules you want to apply. Similarly, by Dublin Core, I mean Simple and/or Qualified Dublin Core along with the Dublin Core Metadata Element Set (DCMES) format (i.e., descriptive fields). I know that there are other aspects of the Dublin Core Metadata Initiative, but for the purposes of this discussion I don’t believe they are germane [1]. I am focusing on how Dublin Core can be used to describe objects. After all, that is why librarians use metadata – to describe things. No matter how easy it is for machines (or humans) to parse a metadata record, it would not be very useful if the standard does not make it possible to adequately describe, in a consistent way, whatever it is that one is trying to describe. I should also point out that, while I love theory and research, in this case I am mostly concerned with the practical.

The statement came out of my experiences thus far with using Dublin Core for digital preservation at Binghamton University. Before we started on this, I was familiar with Dublin Core but had never really had to work closely with it on a large scale, so I didn’t have a strong opinion of it. I am not a cataloger, but as a systems librarian, I feel it is necessary to follow developments in cataloging, and I also work with MARC records on a fairly regular basis. Thus, I realize that MARC has its issues, but please don’t kill it until we have something better, and at this point, I don’t believe we do. [2]

In short, my problem with Dublin Core is that it does not allow for the granularity and consistency that I believe are necessary to adequately describe a mixed set of objects for long-term preservation and access. Mixed sets are important here: if you are doing a long-term preservation project that includes a diverse set of objects, I believe it is important that there is some consistency across collections. This is especially true if they are going to be managed or searched together. Librarians often comment on the need to break down silos, or at least tie them together for discovery. The metadata needs to be adequate to do this. Maybe if you are a national library you can have multiple digital preservation solutions, but at a mid-sized university library that approach is problematic and most likely not realistic. This is doubly so if you consider that one of the main components of preservation is ensuring access in the future (i.e., you are not talking about a dark archive). This is not really a new or unique criticism, but I think it is often overlooked and/or too easily dismissed. Even one of the people who objected to my saying MARC was better than Dublin Core, Corey Harper, admitted this was a valid criticism in his article, “Dublin Core Metadata Initiative: Beyond the Element Set,” published in the Winter 2010 issue of Information Standards Quarterly.
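To illustrate the granularity point, here is a toy crosswalk of my own devising (not any real mapping tool, and the names are just sample data) showing how MARC name entries with relator terms flatten into unqualified Dublin Core elements:

```python
# Toy crosswalk: MARC added entries carry relator terms ($e) that say
# *how* a person is related to the work; simple Dublin Core has only
# dc:creator and dc:contributor, so that distinction is lost.
marc_fields = [
    {"tag": "100", "a": "Corrado, Edward M.", "e": "author"},
    {"tag": "700", "a": "Moulaison, Heather Lea", "e": "editor"},
    {"tag": "700", "a": "Tennant, Roy", "e": "writer of foreword"},
]

def to_simple_dc(fields):
    """Flatten MARC name entries to unqualified Dublin Core elements."""
    dc = []
    for f in fields:
        element = "dc:creator" if f["tag"] == "100" else "dc:contributor"
        dc.append((element, f["a"]))  # the $e relator term is simply dropped
    return dc

for element, value in to_simple_dc(marc_fields):
    print("%s: %s" % (element, value))
```

The relator term that distinguishes the editor from the writer of the foreword has nowhere to go in simple DC, so every name but the first collapses into an undifferentiated dc:contributor. Multiply that by every coded distinction MARC makes and the consistency problem across a mixed collection becomes clear.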

A couple of tweeters brought up DCAP (Dublin Core Application Profiles), which in theory could be used to allow for additional (or alternative) metadata fields to address some of my issues with how well Dublin Core describes particular objects. However, as Corey Harper mentioned in a tweet, “I understand that DCAP infrastructure lacking, but…” (ellipsis in the original). That “but” is not something that can be ignored. If the infrastructure isn’t there, it is a big issue – practice over theory. Even if the infrastructure weren’t lacking, I am not sure how well it would address my criticisms. Even without DCAP I can add local qualifiers or elements for my application (and have, in fact, done so), but as the Dublin Core Metadata Initiative warns, “Nevertheless, designers should employ additional qualifiers with both caution and the understanding that interoperability could suffer as a result.” I suspect the use of multiple DCAPs would end up leading to similar interoperability issues and result in a “Least Common Denominator” situation on the discovery end of things. Without discovery, you don’t have access, and without access you don’t have preservation.

Lastly, Michael Giarlo asked, “But then is anyone actually putting DCMES up against MARC? Seems a category error to me.” I don’t think it is a category error at all. Both are metadata formats/standards that libraries are using to describe objects in their collections. Perhaps one might argue the category is overly broad, but I think they are obviously in the same category. Comparing the two is only natural and is, in fact, I think, quite useful. DCMES may be easier to teach and for computer programmers to program against, but in my experience it is nowhere near as useful when it comes to actually describing an item – which, as I said earlier, is the goal in the first place. Maybe some technologists value interoperability over description, but I am not ready to go there. We need something better, not just something different.

As I said earlier in the post, I doubt this will change anyone’s mind, but hopefully it explains why I said that MARC is better than Dublin Core.

[1] Truthfully, I am a bit confused about why this was an issue on Twitter. Wikipedia, and even the official “Using Dublin Core” document Diane Hillmann created for DCMI, just use the term “Dublin Core” to describe the metadata standard, so this is pretty common usage.

[2] I do not mean to imply that anyone is making the argument that Dublin Core should completely replace MARC, but the MARC must die contingent is relevant to this particular discussion of MARC versus Dublin Core. At some point maybe I’ll make a post about some of the more complete alternatives to MARC being discussed.
