New Article: SkyRiver and Innovative Interfaces File Antitrust Suit Against OCLC

I just had an article, SkyRiver and Innovative Interfaces File Antitrust Suit Against OCLC, published as an Information Today NewsBreak. The introduction states:

SkyRiver Technology Solutions filed a complaint for federal and state antitrust violations and unfair competition against OCLC in the United States District Court for the Northern District of California on July 28. The suit [1] alleges that OCLC is “unlawfully monopolizing the bibliographic data, cataloging service and interlibrary lending markets and is attempting to monopolize the market for integrated library systems by anticompetitive and exclusionary agreements, policies and practices.” Innovative Interfaces, Inc. is listed as a co-plaintiff. OCLC released a statement on July 29 saying that it had not yet reviewed the complaint, and that once it has “had an opportunity to review the allegations with its legal counsel, a statement in response will be forthcoming.” This suit could have major implications for the library software and technology services industry. If the suit is successful, OCLC may have to provide for-profit firms access to the WorldCat database, and there could be implications for OCLC’s status as a non-profit cooperative.

Please go to the Information Today Web site to read the whole article.

SkyRiver files antitrust lawsuit against OCLC

When the SkyRiver bibliographic utility was first announced, I thought it would eventually lead to some sort of legal action. What I didn’t know was who would be the first to bring legal action, and against whom. Well, now we know. SkyRiver, joined by Innovative Interfaces, has filed a lawsuit in federal court in San Francisco.

The likelihood of a lawsuit seemed more certain after the fees OCLC wanted to charge some of SkyRiver’s first customers, such as Michigan State University and California State University, Long Beach, to upload their holdings. According to SkyRiver’s press release about the lawsuit (PDF), OCLC quoted a price increase of over 1,100%. I’m not a legal scholar and don’t know the details of the actual filing, so I don’t know what will happen, but it certainly will be interesting, and it will be a game changer. I also don’t expect a quick outcome.

I haven’t seen a press release from Innovative Interfaces yet, but I am sure that one of the reasons the company joined the lawsuit is the new OCLC Web-scale Management Services, which competes directly with the traditional ILS.* Honestly, I was surprised that the new OCLC system didn’t create a bigger buzz, because in my mind it is a game changer. With control of so many bibliographic records created by members via its WorldCat platform, OCLC is in a position to leverage WorldCat and a tremendous amount of data in ways other vendors simply can’t, especially if SkyRiver’s antitrust claims are accurate. I also think the whole WorldCat record use policy fiasco over the last year or so has added to the factors leading to this lawsuit.

As far as I know, OCLC also hasn’t made a public response yet.

I plan to follow this story closely because, as I mentioned earlier, I believe it will be a game changer however it turns out. If OCLC prevails, startups like SkyRiver won’t have a fair chance. If SkyRiver prevails, we could see a major restructuring of the services OCLC provides, and possibly even a breakup of OCLC.

For information about the lawsuit from SkyRiver, check out the Web site they created about it, called Choice for Libraries.

* Yes, I know that SkyRiver and Innovative Interfaces are owned by the same people, but they are different companies.

DHCP to Static IP and hostname

For some reason, whenever I install Ubuntu or a derivative, it finds my DHCP server and automatically configures a DHCP client. This is great, but usually not what I want, so I end up going back and changing to a static IP. I’m sure that if I did an advanced install I could set these options at install time, but it is easy enough to change afterwards, so I just do it then. However, I usually forget which files I need to edit to make the change. Luckily, there is a good post about Ubuntu Networking Configuration Using Command Line on Ubuntu Geek. For details, follow the link above, but in short you need to edit:

/etc/network/interfaces (for IP Address, Gateway, etc.)
/etc/resolv.conf (for DNS)
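
For reference, a minimal static setup in /etc/network/interfaces looks something like this (the interface name and addresses are placeholders; substitute your own network’s values):

auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1

And /etc/resolv.conf just needs one line per DNS server:

nameserver 192.168.1.1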

Also, you may have to change the hostname using the following command:

sudo /bin/hostname newname
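
Note that the hostname command only changes the name until the next reboot. To make the new name permanent, you also need to edit /etc/hostname and update the matching entry in /etc/hosts, for example (treating “newname” and “oldname” as placeholders):

echo "newname" | sudo tee /etc/hostname
sudo sed -i 's/oldname/newname/g' /etc/hosts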

After this you can restart networking using:

sudo /etc/init.d/networking restart
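
Alternatively, you can bounce just the interface you changed instead of restarting all of networking (assuming the interface is eth0):

sudo ifdown eth0 && sudo ifup eth0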

However, I prefer to just reboot to make sure the changes stick.

WordPress Theme Kerfuffle

People reading this blog outside of an RSS reader will notice something different: I changed the theme. I was using the Copyblogger theme by Chris Pearson, which I really like. However, there has been a bit of a kerfuffle with Chris at its center. The developers of WordPress feel that themes are part of the WordPress code base and therefore subject to the GPL as derivative works. Chris feels differently. I don’t know who is right from a legal standpoint, except to say that if any WordPress GPL code is in a theme (which is true of many themes, including Thesis, which Pearson wrote and which is at the center of the kerfuffle), then that theme would definitely be GPLed.

For a convincing argument about why the developers think themes are subject to the GPL, see Mark Jaquith’s post about Why WordPress Themes are Derivative of WordPress. Whether or not it is legal to distribute themes under a license other than the GPL, after thinking about the issue I feel that not distributing themes under the GPL is unethical, or at least shows a lack of respect, now that the issue has come to light.

The Copyblogger theme is free and licensed under Creative Commons Attribution-ShareAlike 2.5. Outside of this kerfuffle, the CC Attribution-ShareAlike license would be fine by me. Still, I decided to change my theme, at least temporarily, since it was developed by Chris Pearson. I did this mostly as a tiny signal of support for the WordPress developers. I may change the theme again soon because I didn’t do a lot of searching or experimenting for a theme; I just took the first one that looked good.

E-mail Signature Files

I decided to update my work e-mail signature file to reflect my new job title and, at the same time, make it attach automatically in my e-mail clients (Mozilla Thunderbird and the Gmail interface). While doing so, I decided to look at the prevailing thinking on e-mail signatures. Using a Google search, I picked out about ten Web pages and blog posts on the subject to review, and this is what I found.

Things that all or almost all of the posts agreed upon:

  1. Name (obvious, no?)
  2. Professional Title / Position
  3. Website URL (One or two people said it wasn’t needed, but most thought it was good to include. Personally, I think you should include it unless it is a small company with a minimal Web site. For example, finding the library’s Web site on a large university site can sometimes take a while.)
  4. Phone number (Possibly also mobile and fax numbers. Joshua Dorkin pointed out, “If you’re not willing to include a phone number with an email, then who on earth can take you seriously?”)
  5. Keep the signature to four to six lines

Here are some things that some people thought were appropriate and others did not:

  1. E-mail address (Some people said it is already in the header, but others pointed out that some e-mail clients hide it, and once the mail gets forwarded, the address may no longer be there. Personally, I decided to include it.)
  2. Instant messaging names (I didn’t see anyone say not to include them, but only a few mentioned them. Nathan Jones pointed out that you should only include one. I would just say that if you use IM all the time it makes sense, but if you are a light or even moderate user, probably not.)
  3. Mobile note (As with IM, I didn’t see anyone say not to include it, but not everyone mentioned it. Nathan Jones writes, “I think it’s a good idea to add a small note at the bottom of the signature that indicates that the email is being sent from your mobile phone.” The thought is that people will be more forgiving of small typos and short responses.)
  4. Sig separators (Again, no one said not to use them, but I was surprised by how many didn’t mention them at all.)

Here are some things with more disagreement, where the leaning was not to include them:

  1. Business address (More people in my small sample didn’t like the idea of a street address than did, but it was up for debate. Joshua Dorkin wrote, “While it helps to know where someone’s physical presence is, in the current day and age people aren’t using snail mail as often as they used to. Mailing addresses are great to have, but not 100% necessary.” Others thought it depended on how hard the address would be to find, or on whether people are likely to want to visit you. Personally, I included it because people may not know where Binghamton University is otherwise, and if I’m going to include “Binghamton, NY, USA” I might as well add the P.O. Box and ZIP code. Besides, how often do I see complaints about job postings that don’t include addresses, or about people confusing schools with similar names?)
  2. Quotes, mottos, etc. (Judith at Netmanners.com specifically pointed out not to “use inflammatory quotes in your signature file.” I see a lot of professional e-mail with quotes that might not be inflammatory but definitely could turn some people off. In personal e-mail to friends and family that is your choice, but I don’t think it is appropriate for professional e-mail. I just say no to quotes in professional e-mail signatures.)
  3. Branding via color or images (Some thought minimal levels of branding, such as fonts matching the organization’s colors or a small image, are okay, but all agreed that too much is too much.)
  4. Closing sentiment (Some posts mentioned that the “first line of an email signature should be a closing sentiment, such as ‘Thank you,’ or ‘Sincerely.’” Personally, I don’t agree. If I want a closing sentiment, I’ll type it myself and make sure it is appropriate for the situation.)
  5. Formatting (Surprisingly, not too many people mentioned formatting. One person who did was Judith at Netmanners.com, who said you should “align your sig’s text with spaces rather than tabbing […] Also keep in mind that you want to keep your sig file to 70 characters or less, as that is the set screen width default for most email programs.” I think the 70-character width rule is a good one to keep in mind.)
  6. Degrees (Most people thought listing things like MBA looked arrogant, if for no other reason than because it is uncommon, at least in the United States. However, these people didn’t work at universities as far as I could tell. I think the attitude in academia about this would be different than in corporations, so I see no harm in listing MLS, MBA, EdD, PhD, etc. in the library world. I chose not to list my MLS, but if I had a doctorate I might have chosen differently.)

Anyway, if you are interested, this is what I came up with:


Edward M. Corrado
Assistant Director for Library Technology
Binghamton University Libraries
P.O. Box 6012, Binghamton, NY 13902 USA
Phone: +1-607-777-4909 | Fax: +1-607-777-4848
ecorrado@binghamton.edu | http://library.binghamton.edu

Preserving Electronic Records in Colleges and Universities workshop

On Friday, July 9, I went to a Preserving Electronic Records in Colleges and Universities workshop held at Cornell University and sponsored by the New York State Archives. The workshop was presented by Steve F. Goodfellow, President of Access Systems, Inc. It was well organized, and Steve Goodfellow did a good job presenting the material. In some respects I can’t say I learned a whole lot, especially on the technology side, but the workshop was more than worthwhile, if only to have some of my thoughts on the issue reinforced by an expert. I did, however, learn about some policy considerations and retention schedules.

During a break I talked with Steve, and we agreed that while the technology is important and there are technological challenges, electronic preservation is really more of a policy challenge than a technological one. If the policies are in place and carried out (which includes proper funding), the technology can be worked out. That is not to say the technological solutions are always worked out properly; during the first part of the workshop we discussed cases where they weren’t. In one example, a client of his had an old student records system and thought everything had been migrated off of it. However, they kept the old system around for occasional lookups, “just in case.” A new CIO came in and asked when it was last used. The answer was not in a long time, so the old system was removed. Guess what happened? Not everything had been migrated, and now the data was gone.

One of the big takeaways for me was the set of fundamental goals of an electronic records preservation system identified during the workshop. The three are:

  1. Keep electronically stored records readable
  2. Ensure an authoritative and trustworthy process
  3. Maintain a secure and reliable repository

To a large degree these are obvious, but if you are embarking on an electronic preservation program, you should identify how you will accomplish each of these goals.

Article about SkyRiver

After being away at a conference for a short while, I am catching up on some week-old e-mails. One e-mail I received was about an article in ALCTS Newsletter Online about Michigan State University’s experience with the new SkyRiver bibliographic utility. The bottom line, according to the article, is that they saved about $80,000 a year and didn’t see a loss in productivity once the catalogers got used to using SkyRiver instead of OCLC for copy cataloging. They did say, however, that coverage of foreign language materials was lacking. They also mentioned the cost-prohibitiveness of uploading records to OCLC. Anyway, if you are at all interested in this alternative to OCLC, the article gives a nice, albeit brief, overview of Michigan State’s experience with SkyRiver.

Code4Lib Journal Issue 10 Published

The tenth issue of Code4Lib Journal was published this morning. I was the Coordinating Editor (CE) this time around. When I volunteered to be the CE, I was afraid it was going to be a lot of work. Fortunately, while there was a fair amount of work involved, it wasn’t overwhelming. This is because the authors and the rest of the editorial committee are passionate about Code4Lib and really put a lot of effort and dedication into the Journal. My thanks to all the editors and authors.

The articles in Issue 10 are:

Editorial Introduction: The Code4Lib Journal Experiment, Rejection Rates, and Peer Review
Edward M. Corrado

Code4Lib Journal has been a successful experiment. With success, questions have arisen about the scholarly nature and status of the Journal. In this editorial introduction, we take a look at the question of Code4Lib Journal’s rejection rates and peer review status.

Building a Location-aware Mobile Search Application with Z39.50 and HTML5
MJ Suhonos

This paper presents MyTPL (http://www.mytpl.ca/), a proof-of-concept web application intended to demonstrate that, with a little imagination, any library with a Z39.50 catalogue interface, a web server, and some common open-source tools can readily provide its own location-aware mobile search application. The complete source code for MyTPL is provided under the GNU GPLv3 license and is freely available at: http://github.com/mjsuhonos/mytpl

OpenRoom: Making Room Reservation Easy for Students and Faculty
Bradley D. Faust, Arthur W. Hafner, and Robert L. Seaton

Scheduling and booking space is a problem facing many academic and public libraries. Systems staff at the Ball State University Libraries addressed this problem by developing a user-friendly room management system, OpenRoom. The new room management application was developed using an open source model with easy installation and management in mind and is now publicly available.

Map it @ WSU: Development of a Library Mapping System for Large Academic Libraries
Paul Gallagher

The Wayne State Library System launched its library mapping application in February 2010, designed to help locate materials in the five WSU libraries. The system works within the catalog to show the location of materials, as well as provides a web form for use at the reference desk. Developed using PHP and MySQL, it requires only minimal effort to update using a unique call number overlay mechanism. In addition to mapping shelved materials, the system provides information for any of the over three hundred collections held by the WSU Libraries. Patrons can do more than just locate a book on a shelf: they can learn where to locate reserve items, how to access closed collections, or get driving maps to extension center libraries. The article includes a discussion of the technology reviewed and chosen during development, an overview of the system architecture, and lessons learned during development.

Creating a Library Database Search using Drupal
Danielle M. Rosenthal & Mario Bernardo

When Florida Gulf Coast University Library was faced with having to replace its database locator, it needed to find a low-cost, non-staff-intensive replacement for its 350-plus database search tool. This article details the development of a library database locator based on the methods described in Leo Klein’s “Creating a Library Database Page using Drupal” online presentation. The article describes how the library used Drupal along with several modules, such as CCK, Views, and FCKeditor. It also discusses various Drupal search modules that were evaluated during the process.

Implementing a Real-Time Suggestion Service in a Library Discovery Layer
Benjamin Pennell and Jill Sexton

As part of an effort to improve user interactions with authority data in its online catalog, the UNC Chapel Hill Libraries have developed and implemented a system for providing real-time query suggestions from records found within its catalog. The system takes user input as it is typed to predict likely title, author, or subject matches in a manner functionally similar to the systems found on commercial websites such as google.com or amazon.com. This paper discusses the technologies, decisions and methodologies that went into the implementation of this feature, as well as analysis of its impact on user search behaviors.

Creating Filtered, Translated Newsfeeds
James E. Powell, Linn Marks Collins, Mark L. B. Martinez

Google Translate’s API creates the possibility of leveraging machine translation both to filter global newsfeeds for content regarding a specific topic and to aggregate filtered feed items as a newsfeed. Filtered items can be translated so that the resulting newsfeed can provide basic information about topic-specific news articles from around the globe in the desired language of the consumer. This article explores a possible solution for inputting alternate words and phrases in the user’s native language, aggregating and filtering newsfeeds programmatically, managing filter terms, and using Google Translate’s API.

Metadata In, Library Out. A Simple, Robust Digital Library System
Tonio Loewald, Jody DeRidder

Tired of being held hostage to expensive systems that did not meet our needs, the University of Alabama Libraries developed an XML schema-agnostic, light-weight digital library delivery system based on the principles of “Keep It Simple, Stupid!” Metadata and derivatives reside in openly accessible web directories, which support the development of web agents and new usability software, as well as modification and complete retrieval at any time. The file name structure is echoed in the file system structure, enabling the delivery software to make inferences about relationships, sequencing, and complex object structure without having to encapsulate files in complex metadata schemas. The web delivery system, Acumen, is built of PHP, JSON, JavaScript and HTML5, using MySQL to support fielded searching. Recognizing that spreadsheets are more user-friendly than XML, an accompanying widget, Archivists Utility, transforms spreadsheets into MODS based on rules selected by the user. Acumen, Archivists Utility, and all supporting software scripts will be made available as open source.

AudioRegent: Exploiting SimpleADL and SoX for Digital Audio Delivery
Nitin Arora

AudioRegent is a command-line Python script currently being used by the University of Alabama Libraries’ Digital Services to create web-deliverable MP3s from regions within archival audio files. In conjunction with a small-footprint XML file called SimpleADL and SoX, an open-source command-line audio editor, AudioRegent batch processes archival audio files, allowing for one or many user-defined regions, particular to each audio file, to be extracted with additional audio processing in a transparent manner that leaves the archival audio file unaltered. Doing so has alleviated many of the tensions of cumbersome workflows, complicated documentation, preservation concerns, and reliance on expensive closed-source GUI audio applications.

Automatic Generation of Printed Catalogs: An Initial Attempt
Jared Camins-Esakov

Printed catalogs are useful in a variety of contexts. In special collections, they are often used as reference tools and to commemorate exhibits. They are useful in settings, such as in developing countries, where reliable access to the Internet—or even electricity—is not available. In addition, many private collectors like to have printed catalogs of their collections. All the information needed for creating printed catalogs is readily available in the MARC bibliographic records used by most libraries, but there are no turnkey solutions available for the conversion from MARC to printed catalog. This article describes the development of a system, available on github, that uses XSLT, Perl, and LaTeX to produce press-ready PDFs from MARCXML files. The article particularly focuses on the two XSLT stylesheets which comprise the core of the system, and do the “heavy lifting” of sorting and indexing the entries in the catalog. The author also highlights points where the data stored in MARC bibliographic records requires particular “massaging,” and suggests improvements for future attempts at automated printed catalog generation.

Easing Gently into OpenSRF, Part 1 and 2
Dan Scott

The Open Service Request Framework (or OpenSRF, pronounced “open surf”) is an inter-application message passing architecture built on XMPP (aka “jabber”). The Evergreen open source library system is built on an OpenSRF architecture to support loosely coupled individual components communicating over an OpenSRF messaging bus. This article introduces OpenSRF, demonstrates how to build OpenSRF services through simple code examples, explains the technical foundations on which OpenSRF is built, and evaluates OpenSRF’s value in the context of Evergreen.

S3 for Backup, Is It Worth It?

I’ve been using Amazon’s S3 to back up my blog for a while now, and I really like it for that purpose. My blog has very little data, so I end up being billed $0.01 a month. It probably costs Amazon more money to bill me than they make off me! This has got me looking at using S3 to back up ELUNA‘s document repository. Currently, we have about 12 GB of data. Assuming we transfer all 12 GB in and out each month (which we wouldn’t, but I’m just saying), it comes out to $4.65 a month for regular storage and $4.05 for reduced redundancy storage, according to the AWS calculator. Not bad, especially considering this is certainly an overestimate (provided we don’t actually need to restore, in which case I’m not worried about a few bucks to get my data back!). I have other free-to-me options, but this seems like a pretty good deal to me and is something I am considering suggesting to ELUNA.
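
For anyone curious, a bare-bones version of this kind of backup can be scripted with the open source s3cmd tool and run from cron. The bucket name and paths here are just placeholders, not my actual setup:

mysqldump -u wpuser -p wordpress > /backups/blog.sql
s3cmd put /backups/blog.sql s3://my-blog-backup/
s3cmd sync /var/www/blog/ s3://my-blog-backup/files/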

However, does Amazon S3 scale as a backup solution for my library? It does, but I think only to a point. Let’s say you have 500 GB of images, video, and other data. It will cost you $50 a month for reduced redundancy storage and $75 a month for regular storage (and can you imagine telling the boss, “I’m sorry, Amazon lost our data; we were using the reduced redundancy plan”? I don’t think so.), not counting data transfer, which could double the costs. That is $600 to $900 a year. Maybe that is still reasonable depending on the nature of your project, but you can see that costs grow quickly to a point where other, local options that you have more control over look more and more reasonable.
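
If you want to run the numbers for your own collection, the storage part of the math is just gigabytes times the per-GB monthly price. Here is a quick sketch at the prices implied by the figures above ($0.15 per GB-month regular, $0.10 per GB-month reduced redundancy; check the AWS calculator for current rates, and remember data transfer is billed separately):

GB=500
echo "regular: \$$(echo "$GB * 0.15" | bc)"   # $75.00 a month
echo "reduced: \$$(echo "$GB * 0.10" | bc)"   # $50.00 a month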

We are lucky enough to have a campus-wide IT department that does a good job handling backups, so this isn’t something we are considering here in the library, but it seems to me that S3 could be a good solution if you hit the sweet spot when compared to local storage options. Obviously, the advantage of Amazon being off-site storage shouldn’t be overlooked, which pushes that sweet spot into a somewhat higher price range. I have multiple locations where I could store a network-based backup device, so that isn’t as big a deal for me. Still, I’d say it is worth investigating for other libraries that have a non-huge amount of data to back up.

Jing for Grading

As some of you know, I occasionally teach an online class in Multimedia Production. The students are assigned the task of creating a Web site that contains various multimedia productions. While reading a post about Jing for Student Authoring on Joshua Kim’s “Technology and Learning” blog at Inside Higher Ed, I was intrigued that one of the commenters mentioned using Jing to provide feedback on assignments. Since my class is online and I’m grading multimedia, feedback can sometimes be a challenge, and I had never thought of doing it with a screencast. I’m not sure if I’ll use Jing or something else, but I think I’ll try using a screencast program the next time I teach to see if it can complement the rubrics I have been using.
