Open Access Day

Tomorrow is Open Access Day. The purpose of Open Access Day is to help “broaden awareness and understanding of Open Access, including recent mandates and emerging policies, within the international higher education community and the general public.” I am pleased to say that at Binghamton University Libraries we are simulcasting an informal broadcast from SPARC (the Scholarly Publishing & Academic Resources Coalition) on current issues in Open Access and trends in scholarly publishing on Tuesday, October 14, at 7 p.m., in SL-209. I think librarians and other supporters of Open Access need to help get the word out about what Open Access is, why it is important for faculty to be aware of it, and the issues that surround it. I’m not sure how often Binghamton has held this type of event, but I do hope we get a decent turnout.

New England Code4Lib Chapter

Thanks to a post on Roy Tennant’s blog, I have found out about a third Code4Lib regional group to go along with the Appalachia and NYC Code4Lib regional groups that I previously wrote about. This one is in New England and hopes to hold a one-day event in mid-November or early December. Keep the regional groups coming!

IGeLU 2008

From September 6-10, 2008, I attended the International Group of Ex Libris Users (IGeLU) conference in Madrid, Spain. One thing I really miss about the Endeavor Users Group (EndUser) is the international aspect. The Ex Libris Users of North America (ELUNA) conference is mostly attended by librarians from the United States, along with some Canadians. True, there are other international attendees, including some from the Caribbean, but they are very small in number. IGeLU, on the other hand, has people from many European countries, with a few Aussies, South Africans, Americans, Israelis, etc. thrown in for good measure. I think of IGeLU as mostly a European conference (or at least a “Western-world” conference, if I can use that term), and I really like the different view of the library world I get when I go to a conference such as IGeLU. That said, it still doesn’t have the same feel EndUser did for me.

Having just gone to ELUNA a little over a month ago, I wasn’t expecting a lot of news from Ex Libris management. For the most part this turned out to be true. However, there was one bit of important news for Metalib customers. Metalib 4.3 will be the last version of Metalib as we know it. Metalib 5 (or whatever it will be called) will be a complete rewrite. The underlying database and administration portions of the code may be based on Primo, but that is still unclear. The user front end (i.e., the discovery layer) will use Primo. Apparently, customers of Metalib will not be charged extra for the Primo front end to Metalib. That is, of course, if they only use the Primo front end for Metalib. I think using the Primo front end for Metalib like this makes sense for Ex Libris on a couple of counts. First, they won’t have to develop a new Metalib front end, which means less development work and fewer products to support. Secondly, it will give customers a little taste of Primo. If customers who get this taste decide they like the Primo front end, they may purchase Primo for use with other products. In other words, Ex Libris saves costs and might get a few more sales of Primo out of it.

Most of the user sessions were pretty good, although I’m not sure I got anything specific to bring back home and implement. One session I enjoyed focused on Metalib. It was interesting to learn how others are working with Metalib and doing things like creating RSS feeds out of it. However, the approach they used appeared to require a fair bit of work to implement, and with Metalib as we know it going away, I am not sure it would be worth it for us to try this now.

I think my presentation on RSS A to V went well. I definitely find it harder to read European audiences than American ones. I am not sure if that is because I am from the USA, but based on a conversation with a Swede who has presented in the USA a number of times, I think it might not be. He said (and I agree) that Americans are more likely to provide visual cues that they are “getting” a presentation by doing things like nodding their heads. Overall, Europeans seem to be more reserved in that respect and seem to focus more intently on what you are saying without showing much emotion during a presentation. I am not sure why this is; maybe it is a cultural thing, or maybe it is because, while most of them are excellent English speakers, English is not their first language, so it takes more focus and concentration for them. For those of you in the Mid-Atlantic states, you can see an encore of my RSS presentation at the Ex Libris Mid-Atlantic Users Group (EMA) meeting in early October.

One topic I found very interesting was the presentation and ensuing discussion about the future of e-book management. Representatives from ELUNA, IGeLU, and, I believe, Ex Libris created a report about what customers need to manage e-books. As my friend Zoe says, “first you have to acknowledge that an e-book is not always a book in ‘e’ form and then things just go downhill from there. :-)”

One of the things they mentioned was the enormous challenge of managing e-books. Some of the challenges include: variety of formats, different purposes and uses (textbook, research book), diversity of hardware/software, digital rights management (DRM), pricing models, licensing models (do you lease or have ownership?), digital curation, metadata creation, and discovery and accessibility. With all of the different formats, e-versions, and sources of e-books, putting like items together is a problem. One of the recommendations is that libraries need a solution that provides sophisticated de-duplication in a FRBR-ized manner. Libraries and library software vendors have their work cut out for them.
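To make the de-duplication idea a bit more concrete, here is a minimal sketch of what grouping manifestations under a FRBR-style “work” might look like. This is purely my own illustration, not anything from the task force report: the record fields and the crude author/title matching rule are assumptions, and real systems would match on many more data points (identifiers, editions, dates) and cope with far messier data.

```python
import re
from collections import defaultdict

def work_key(author: str, title: str) -> str:
    """Build a crude FRBR-style 'work' key from normalized author and title.

    Real de-duplication uses many more signals; this only shows the basic
    idea of collapsing trivial differences in case and punctuation.
    """
    def norm(s: str) -> str:
        s = s.lower()
        s = re.sub(r"[^a-z0-9 ]", "", s)  # drop punctuation
        return " ".join(s.split())        # collapse whitespace
    return norm(author) + "|" + norm(title)

def cluster_manifestations(records):
    """Group manifestation-level records (EPUB, PDF, print...) by work."""
    clusters = defaultdict(list)
    for rec in records:
        clusters[work_key(rec["author"], rec["title"])].append(rec)
    return clusters

# Three records for the same work in different formats and with slightly
# different metadata end up in a single cluster:
records = [
    {"author": "Melville, Herman", "title": "Moby Dick", "format": "EPUB"},
    {"author": "melville, herman", "title": "Moby Dick.", "format": "PDF"},
    {"author": "Melville, Herman", "title": "Moby Dick", "format": "print"},
]
clusters = cluster_manifestations(records)
```

Even this toy version hints at why the task force called the problem hard: the moment titles are translated, abridged, or cataloged under different headings, a simple string key like this breaks down.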

Three other things stood out to me about their recommendations. First, the task force felt that the ability to meta-search the full content of e-books is not an immediate priority. Secondly, they believe browsing is not required for e-book discovery. Thirdly, they recommend that the e-book management system allow libraries to include not just licensed content, but freely available content as well if they want to.

Considering the challenges of getting full-text content into a meta-search tool (especially with all of the possible sources of electronic texts), it would be a daunting task to offer full-text search in an e-book discovery application. With that in mind, I can understand this recommendation, although I think full-text search will be needed at some point (which the recommendation also implied). We have it for journal articles, and patrons will come to expect it for e-books. Also, I just think it would be really useful if you are looking for information about a specific aspect of a topic.

My initial reaction to the no-need-for-browsing recommendation is that I do not agree. Yes, if students are looking for a specific book (especially a textbook), they don’t need to browse. However, when patrons do not have a known item in mind, I think browsing is a very effective discovery method. The task force pointed out that some ILSs don’t currently have a browse function. While this is true, that doesn’t mean it is a good thing. Also, while you might not be able to browse the catalog, you can typically (at least in the USA) browse the shelves. That physical shelf browse-ability is obviously not available for e-books. I think browsing by author, title, subject, classification number, etc. is very useful for electronic items. What is great about e-items is that they can be in multiple places at once, so you can assign multiple class numbers, authors, etc. The reasoning the task force provided for not needing browsing is that while it may work for hundreds or a few thousand items, browsing is not so useful for very large collections (which is what the task force expects libraries to be dealing with). I’m not sure I agree with this logic; I regularly browse the shelves of large research libraries with millions of volumes to great effect. Just recently I was looking for a book on the shelf about communities of practice and discovered two other books useful to my topic that I hadn’t found in the catalog. On other occasions you might not know the exact spelling of an author’s name, so browsing becomes more useful, maybe even necessary. Without full-text searching, I think browsing is that much more important. This is amplified if, as I assume, our metadata won’t be much better than what we get in a typical full AACR2 MARC record. I’ll have to read the report and think about this some more.
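The misspelled-author case is what makes a browse index different from keyword search, and it can be sketched in a few lines. Again, this is just my own illustration (the headings and the tiny window size are made up): a browse positions you at an alphabetic point in a sorted heading list, so near-misses still land the user next to what they wanted.

```python
import bisect

# A sorted author-heading index, as an ILS browse function might maintain.
headings = sorted([
    "wells, h. g.",
    "wendell, barrett",
    "wenger, etienne",
    "west, cornel",
    "whitman, walt",
])

def browse(prefix: str, window: int = 3):
    """Return headings around the alphabetic position of `prefix`.

    A keyword search for a misspelled name matches nothing; a browse
    still places the user next to the heading they were after.
    """
    i = bisect.bisect_left(headings, prefix.lower())
    return headings[max(0, i - 1):i + window]

# A user unsure whether it is "Wenger" or "Wengar" still finds the author
# among the neighboring headings:
nearby = browse("wengar")
```

The task force’s scale objection does not obviously apply here: positioning by binary search costs the same whether the index holds a thousand headings or ten million.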

The third recommendation, to allow libraries to include freely available materials in the e-book discovery system, seemed logical to me. However, it prompted a good deal of discussion. Someone from a national library asked why you would want to include non-licensed (or non-purchased) items that were not selected by her library. The answer from one of the panelists was “Why not?” When you think about it, what are the books that are out there that are freely available? Typically these books come from mass-digitization projects involving libraries, which means they were selected at some point by librarians, usually at major research universities. The reason other libraries wouldn’t have these items is typically not the content contained in them, but the cost. This is when the person who posed the question mentioned that her library has a very specific collection policy of only collecting items about the history and culture of their country. I can see that this is an important distinction. Of course, this is probably one of the reasons the task force recommended giving libraries a choice.

Ex Libris did offer the conference attendees a few insights on how they will deal with these issues, but my guess is that is getting closer to proprietary information, so I’ll keep it to myself for now.

Overall, it was a good conference, and it was nice to see some of the people I used to spend time with at EndUser who, because of their location, now go to IGeLU instead of ELUNA. I am already looking forward to the next IGeLU conference in Helsinki, Finland, next September.

Creative Commons license for presentations?

I have been thinking a lot about what copyright license I want to place on my presentation slides. Generally speaking, I am a fan of open access, and I want people to be able to reuse what I do for their own purposes. However, this can cause some problems. Keeping in mind that there are different Creative Commons (CC) licenses you can choose, so these examples won’t be accurate for all licenses, here are some of my concerns. For one thing, an article can be re-posted somewhere. In theory, it may be a good thing that an article is in multiple places and there are more places to find it, but it also can weaken the gravitational force of the original when it comes to search engines and the like. While it was flattering at first to see a journal article I published re-posted in India (copied from an open access journal), I think it did detract from the original. This is not to say that anyone in India did anything wrong, and certainly not anything illegal, since the license allowed them to do so. As Andy Powell mentioned in June in a blog post about creative borrowing, there are other reasons besides gravitational force that make this practice more than just “downright unhelpful.” When information is inaccurate or out of date, it can do more harm than good. If I find an error in something that I posted, and know where it is, I can make sure it gets corrected or at least that there is a disclaimer of some sort. Also, simple things like contact information may change, and it would be good to be able to make sure the up-to-date information is available, which is not possible if people re-publish things in places I don’t know about.

But I think I can live with that if that is the only downside, even if it is annoying. The overall good probably outweighs the negative. However, I have a separate issue with slides used during presentations. If I am going to put slides up on my Web site or elsewhere under a CC license, I have to make sure that all parts of them are also available under the CC license or otherwise fall under fair use. Of course, relying on fair use is a sticky situation because the concept of fair use can vary greatly depending on the jurisdiction.

One example of this problem is Microsoft Office clip art. According to Microsoft, among other things, “You may not use clip art to advertise your business.” This means using this clip art would prevent me from using certain CC licenses. OK, you say, I can find open clip art. While this is true, it can also add significant time to preparing presentations (especially if you are working with someone who only uses Microsoft products). But this is just one small example. A better example might be something that happened recently while I was preparing a presentation. I needed (OK, I wanted) a picture of the inscription “Free to All” above the entrance to the Boston Public Library. I couldn’t get to Boston to take my own photo, and I couldn’t find a CC-licensed one, but I found a great photo by informationgoddess29 that was just what I needed. I contacted her, and she generously gave me permission to use it. Since I used this image, I can’t legally release this presentation under CC without getting her permission. Now, being a fellow librarian, she may very well have agreed to change the license if I asked, but that would have felt very forward of me. This also gets tricky with Web sites that you are showing off or other content that belongs to the university you work at. Do I have to ask the University to agree to release this under a CC license every time I create a presentation? I would think I do, assuming they won’t give me blanket permission.

I could go on, but I think you can see my dilemma. I want people to be able to freely use my work, but I don’t want them duplicating it in different places on the Internet and possibly watering it down (without asking, anyway). Even more, when I am preparing a presentation, I may need to use content I don’t own. While a friendly e-mail can usually get me permission to use it, it generally will not get me permission to give it away to someone else. I could work on being more selective about the content I re-use, and I will try to do that, but it is not always feasible. Even when it is feasible, it may be very time-consuming (and I probably won’t do it in that case).

I wonder what others do. Have they never thought about it? Do they just ignore the licenses on the content from others that they include when licensing a presentation? Do they make sure they have the proper permissions to release the re-used content? Do they just decide not to use a CC license? I don’t know what I am going to do about this. I am thinking that as long as I don’t use images/content that can easily be resold (i.e., I downscale images), and I use a CC license that doesn’t allow commercial use, no one will complain. However, I want to do more than think, and hope, that no one will complain. For that reason, I think I may not use a CC license on presentations by default. I may just put a note saying that if you want to use my stuff, just ask, and I most likely will say yes. This way, I can respond with what content I legally can’t give away. Of course, if I am only using my own content, I may still use a CC license so people don’t need to ask, but a blanket CC license on my personal scholarly archive appears problematic.

OCLC WorldCat Hackathon

Word is getting out about the OCLC WorldCat Hackathon that will be held November 7-8 in New York City. According to the Web site, “Sponsored by the OCLC Developer’s Network and NYPL Labs of The New York Public Library, the WorldCat Hackathon gives participants the opportunity for two full days of brainstorming and coding mash-ups with local systems and other Web services to take advantage of all that WorldCat, the world’s largest bibliographic database, has to offer.”

As Peter Murray laments, I also “wish I could get to NYC for the two-day event.” Since it would only require one day away from work (it is on a Friday/Saturday), being away wouldn’t be a problem for me, but staying in NYC can be quite expensive, and Binghamton is a little far to drive back and forth (besides, parking would be really expensive if I actually drove into the city). Hopefully I can figure out a way to fit it into my schedule/budget.

With the new Ex Libris Open-Platform Strategy, it will be interesting to see if Ex Libris will follow suit and host a similar event for their customers. Certainly, Ex Libris’s base of programmers, hackers, and tech enthusiasts is going to be smaller than OCLC’s, but something like this could still attract a number of people if the 2007 EndUser Voyager Hackfest is any indication. Maybe this would be more viable as far as attracting developers if it were held in conjunction with the ELUNA or IGeLU conference? Do any of the Ex Libris product users/hackers have any thoughts?

Well-presented negative results

I read with interest the call for papers for the 3rd IEEE/ACM International Conference on Information and Communication Technologies and Development (ICTD2009). What caught my eye was the sentence in the conference focus which reads, “Well-presented negative results from which generalizable conclusions can be drawn are also sought.” I’d like to see more reporting of well-presented negative results at library-related conferences. Sometimes we hear about negative results on e-mail lists when someone asks, “Has anyone tried this?”, but very rarely do we have conference sessions that report on things that didn’t work out, or read about them in articles. This leads to different people trying the same thing and also failing, whereas if the negative results were public, they could either decide to go in a different direction, or they could look at what didn’t work in the previous project and figure out a way to modify the approach so it will work.

CATaC slides posted

I just posted the slides for the presentation I did with Heather L. Moulaison at the Sixth International Conference on Cultural Attitudes Towards Technology and Communication (CATaC) 2008 on my scholarly activities Web page. Emma Tonkin also co-authored the paper in the proceedings, but unfortunately she couldn’t join us in France. I’ll write up a review of the conference in the next week or so. However, I will say at this point that it was a fun conference with a lot of interesting people to meet. The citation for our presentation is:

Moulaison, Heather Lea, Emma Tonkin, & Edward M. Corrado (2008, June 27). Linking communities of practice online. Sixth International Conference on Cultural Attitudes Towards Technology and Communication (CATaC) 2008. Université de Nîmes, Nîmes, France.

The URL is:

Colloque conjoint Asted/CBPQ

On May 15, I presented along with Heather Lea Moulaison at the Colloque conjoint Asted/CBPQ held in Montréal, Québec. The topic of our talk was “Library Subject Guides 2.0.” Before you read any further, I should mention that I only have cultural comments to make, and not any LIS content, because I wasn’t able to attend/understand any sessions. Thus, you may want to stop reading now.

The drive from Binghamton was enjoyable, and we had no trouble getting into Canada, although the Canadian border guard asked for our hotel reservation, which luckily I had in a handy spot. On the morning of the 15th we finalized our presentation and walked down to the conference center just in time for lunch. The lunch was very nice, and the weather was excellent, so we got to eat outside. During lunch we talked to a number of the conference-goers and found out that the librarians in attendance spoke very good English (at least the ones we talked to did). We weren’t really sure that would be the case because all of the sessions (except for ours) were in French.

Being that ours was the only English-language session, we weren’t sure how many people would attend our talk. I’m happy to report that our session was standing room only. I didn’t count the number of seats, but I’d say we easily had 200 people in the room, probably closer to 250. I think our session was well received and that people enjoyed our talk. One possible reason for the large turnout (according to another presenter) was that the French Quebec librarian community tends to be slightly insular, and attendees were interested in hearing about what libraries outside of French-speaking Canada were up to. I was also a little surprised by the large number of academic librarians in the audience (about 90% of the attendees, by a show of hands), given that there was a very good mix of public, academic, and special libraries represented at the conference.

As mentioned, our session was the only English-language session. My co-presenter speaks excellent French, but I don’t speak any. I am pretty sure that I was the only non-French speaker at the conference. For this reason, I wasn’t able to get much out of the conference sessions and only attended the talk just after mine.

The fact that the other librarians spoke English made the conference social activities more enjoyable for me. Heather and I were able to learn a little more about the French-Canadian library world during the evening reception. One of the people we talked to was Eric Bégin from inLibro. inLibro provides hosting, installation, migration, development, support and teaching services for Open Source Integrated Library Systems in Québec. I was able to learn a lot about the Open Source Library community in the province and some of the issues involved with supporting Koha in French-speaking Canada.

On Friday, Heather had an appointment with a cataloging professor at Université de Montréal, and I found out that French Québec uses translated AACR2 whereas the rest of the French-speaking world uses AFNOR. After her appointment, we drove back down to Binghamton (crossing the border in record time).

All in all, it was a great trip. Not only did the presentation go extremely well, but I was able to meet a number of nice people. Everyone was friendly, and the conference organizers (especially Régine Horinstein, the Executive Director of the Corporation des bibliothécaires professionnels du Québec) did an excellent job making us feel welcome. I think there are probably some very good collaboration opportunities with the French-speaking academic librarians (even for a non-French speaker such as myself), and I am going to try to pay more attention to what is going on in Québec libraries.

VALE-OLS Next Generation Academic Library System Symposium

Video (WMV) and audio (MP3) files, along with photos, of the VALE-OLS Next Generation Academic Library System Symposium that took place at TCNJ on March 12, 2008 are now available on the VALE Web site. It was a great symposium on the future of VALE libraries and the future of Open Source Integrated Library Systems, and I recommend you take some time to check it out if you didn’t have the opportunity to attend in person. While the whole thing was great, I especially recommend the presentation by Joe Lucia, University Librarian at Villanova University and President of the PALINET Board.

NERCOMP presentation using Google Docs

When I co-presented at NERCOMP, we used Google Docs. I had never used it for a presentation before, but it worked pretty well. There are some issues with Google Docs (not being able to export into a different, editable slide show format, for example*). It is also rather basic, but that is OK for me since I tend to create basic slideshows. You can access the presentation here (if you get a bright green screen, Google might not like the version of your Flash player; in that case, please see the PDF available from this page):


* Note: I have just found out that Google Docs, as of April 8, can now export to PPT.
