No Place to Hide
This multimedia site illustrates one of the great dilemmas and paradoxes the U.S. faces now: To what extent should the need for security trump the constitutional right to privacy? Because consulting firms (which employ many information professionals like us) exist to analyze who we are, what our income is, and what our consumer preferences are, I believe Viet Dinh (the primary drafter of the Patriot Act) correctly observed: "I think that in a democratic government, we should always distrust governmental authority." This is my personal position as well.
"TIA"
Is anyone else scandalized by how out of date TIA's website is? Under "Latest News," the most recent entry reads "March 1, 2005." Keeping an eye on our government is a noble goal -- but one should stay current!
Protecting Privacy Rights in Libraries
Judah Hamer courageously stands for an absolute right of privacy for patrons: in their reading habits, in their lending records, against the government. I believe the ALA might not have done a vigorous enough job in lobbying on behalf of library patron confidentiality in the wake of September 11. And in the limited circumstance of pre-teens being able to read what they want without parental supervision -- I think librarians should support freedom of information in this context without reserve.
Monday, April 16, 2012
Monday, April 9, 2012
Week 13 Reading Notes
The creation of user-generated content of consistently high quality is Wikipedia's great success. In fact, studies have shown that user-created folksonomy tags overlap with Library of Congress Subject Headings, a controlled vocabulary, at a rate of about 80%. See Yi, Kwan, and Lois Mai Chan. "Linking Folksonomy to Library of Congress Subject Headings: An Exploratory Study." Journal of Documentation 65, no. 6 (2009): 872-900.
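As a toy illustration of how such an overlap figure can be computed (the tag and heading lists here are my own invented examples, not data from the Yi and Chan study):

```python
def overlap(tags, headings):
    """Return the fraction of user tags that match a controlled heading
    after simple case normalization."""
    norm = {h.lower() for h in headings}
    matched = [t for t in tags if t.lower() in norm]
    return len(matched) / len(tags)

# Hypothetical folksonomy tags vs. hypothetical controlled headings
user_tags = ["Cookbooks", "baking", "desserts", "gluten-free"]
lcsh = ["Cookbooks", "Baking", "Desserts"]

print(f"{overlap(user_tags, lcsh):.0%}")  # → 75%
```

A real study would of course use far more sophisticated matching (stemming, compound headings, subdivisions), but the basic measure is just this kind of set comparison.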
However, even though the software for creating user-generated content is sometimes free, it can also come with a wealth barrier. Probably the most user-friendly OPAC add-on that permits patrons to tag and comment on particular books, "LibraryThing for Libraries," carries a steep annual subscription price that scales with your institution's FTE. I am glad for Wikipedia . . . but for libraries to take advantage of open-source software and technologies, there still seems to be an initial investment cost that is hard to justify in an economic downturn. Therefore, I personally am not as optimistic as Blossom and Allan, because many of these technologies, despite having a Creative Commons or open-source basis, have quickly erected wealth barriers of their own.
Second, in some instances, a user-generated Wiki-textbook is simply inappropriate. I work in a law library. Here, we must teach students how to identify the most authoritative voices in a certain field of law. Because of the hierarchical nature of the field, we cannot sustain for long an egalitarian mode of information transmission that ignores authority. I'm sorry to appear hierarchical, but I think that's simply a function of the way law works.
Monday, April 2, 2012
Week 12 Reading Notes
This week, David Hawking's article "Web Search Engines," Parts 1 & 2, presumed far too much background knowledge. I tried my very best to follow it, but he used too much nomenclature without defining his terms. Still, it gave me a general idea of what happens -- crawling and indexing -- each time I enter a search into an engine.
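For anyone else trying to get a handle on the indexing half, the core idea is an inverted index: a map from each term to the set of documents containing it. This is my own minimal sketch, not Hawking's implementation -- real engines also handle crawling at scale, ranking, and compression:

```python
from collections import defaultdict

# A tiny document collection (hypothetical)
docs = {
    1: "digital libraries preserve digital objects",
    2: "search engines index the web",
    3: "libraries index their catalogs",
}

# Build the inverted index: term -> set of document IDs
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

# A multi-term query is then a lookup plus a set intersection
def search(*terms):
    results = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*results) if results else set()

print(sorted(search("libraries", "index")))  # → [3]
```

Everything else in the articles -- link analysis, snippet generation, distributed crawling -- builds on top of this simple data structure.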
My favorite articles by far were Shreeves' "Current Developments and Future Trends for the OAI Protocol for Metadata Harvesting" and Bergman's white paper "The Deep Web: Surfacing Hidden Value." I am so glad that most repositories seem to be implementing the OAI protocol as a general standard! This will vastly improve findability and put libraries on the same page. I hope that the OAI registry will help libraries see which resources other libraries have already digitized, so that no one needs to spend money digitizing them again.
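Part of what makes OAI-PMH so appealing is how simple it is: a harvester sends plain HTTP requests (e.g. `?verb=ListRecords&metadataPrefix=oai_dc`) and gets XML back. As a sketch, here is how such a response can be parsed with Python's standard library -- the XML below is a hand-trimmed sample I wrote for illustration, not output from any real repository:

```python
import xml.etree.ElementTree as ET

# A trimmed, hand-written oai_dc ListRecords response (illustrative only)
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Annual Report 1911</dc:title>
          <dc:identifier>oai:example.org:1234</dc:identifier>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

# Namespace prefixes for ElementTree's XPath-style queries
ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}
root = ET.fromstring(sample)
titles = [t.text for t in root.findall(".//dc:title", ns)]
print(titles)  # → ['Annual Report 1911']
```

Because every compliant repository answers the same six verbs in the same format, one small harvester like this can aggregate metadata from hundreds of libraries.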
The description of BrightPlanet's functions fascinated me -- really, a search engine that can capture dynamically generated sites?! This sounds very powerful. I wanted to try this out, but when I went to BrightPlanet's site, unfortunately, it seemed that one had to subscribe to this content. Might anyone know how to get in touch with them for a free trial?
Week 11 Lab
Web of Science
Query:
Topic: ~virtual reference
Refined to Web of Science Category, "Information Science Library Science"
2008-2012

Web of Science
Query:
Topic: "digital library"
2008-2012

Google Scholar
Query:
~virtual reference
2008-2012
Articles excluding patents

Google Scholar
Query:
"digital library"
2008-2012
Articles excluding patents

Saturday, March 24, 2012
Week 11 Reading Notes
First of all, it was great to see all my old classmates last night! I look forward to seeing you again in the summer!
Out of this week's 3 readings, Paepcke et al.'s Dewey Meets Turing and Lynch's Institutional Repositories greatly benefited me with a CliffsNotes-style summary of the past 10 years of shared experience between computer scientists and librarians. I felt great ambivalence on discovering that the search mechanics behind Google started out supported by a federally funded grant, under DLI. If public monies first supported DLI, why weren't the powerful algorithms made open source? Or at least released under a Creative Commons license that would still have afforded the authors the opportunity to profit? My take is: if public monies funded the project, then the fruits of that research should be known publicly, for the commonweal. Instead, Google is now a private entity trading at $400+ per share, whose algorithms are kept secret, under wraps. Frustrating!!
Second, I think Lynch rambles rather incoherently and redundantly. He does not articulate in a compelling fashion how classic and revolutionary librarianship principles apply to the brave new world created through the marriage of CS and librarianship. His work seems to play catch-up with what computer scientists are already envisioning. We as librarians need to (a) first build a strong foundation of classic librarianship principles (e.g., Ranganathan's Five Laws); and (b) think about how new IT can creatively deliver access and organization amid the information flood. We need more librarianship substance and IT savviness, not mere blundering about in the dark with fancy rhetoric.
I completely lost respect for Mischo's piece when it contained this dead link (http://www.niso.org/committees/MetaSearch-info.html). For those interested, the correct link is http://www.niso.org/workrooms/mi. There should be some sort of mechanism to help one find new permalinks that replace old ones!
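In the meantime, a little standard-library Python can catch dead links before they go out the door. This is a generic sketch of my own (the `head_status` and `find_dead_links` names are invented for illustration), not an existing tool:

```python
import urllib.request
import urllib.error

def head_status(url, timeout=10):
    """Return the HTTP status code for a HEAD request, or None on failure."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code          # server answered, but with an error code
    except (urllib.error.URLError, OSError):
        return None              # DNS failure, timeout, refused connection

def find_dead_links(urls, fetch=head_status):
    """Return the URLs that did not answer with a 2xx/3xx status.
    A custom `fetch` can be injected for testing or for sites that
    block HEAD requests."""
    return [u for u in urls if (fetch(u) or 0) not in range(200, 400)]
```

Run periodically over a bibliography's URLs, something like this would have flagged the old NISO address long before a reader stumbled on it.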
Wednesday, March 14, 2012
Week 10 Reading Notes
This week I greatly enjoyed Doug Tidwell's "Introduction to XML" article - Finally, an article about programming that does not presume tons of background information! And is written in normal English! I especially enjoyed Section 4, "Defining document content," which did a good job of describing DTD, Document Type Definition.
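For instance, here is a tiny document of the kind Section 4 describes, with an internal DTD declaring which elements a book record may contain. Note that Python's built-in ElementTree will parse such a document but will not validate it against the DTD (a validating parser such as lxml is needed for that); the record itself is my own invented example:

```python
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<!DOCTYPE book [
  <!ELEMENT book (title, author+)>
  <!ELEMENT title (#PCDATA)>
  <!ELEMENT author (#PCDATA)>
]>
<book>
  <title>Introduction to XML</title>
  <author>Doug Tidwell</author>
</book>"""

# ElementTree parses the document (ignoring the DTD for validation purposes)
root = ET.fromstring(doc)
print(root.find("title").text)  # → Introduction to XML
```

The `(title, author+)` content model is the part Tidwell unpacks so well: exactly one title, followed by one or more authors, in that order.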
Although Ogbuji's "A survey of XML standards" presumed too much background knowledge, it gave me a clear description of the process by which certain coding practices become standards. And his article thoughtfully gathered the different standards communities together and described their standards-making processes one by one. This is a useful tool for anyone who would like to know what goes on behind the scenes.
I strongly believe that I am learning more from actually doing the labs than from the readings. When I follow Evgeny's labs step by step and actually do the work, I understand the material.
There is an old Chinese proverb on learning that goes: "To hear is to forget. To see is to remember. To do is to understand." I think these learning principles can still be applied to learning 21st century programming languages! I would like to see LIS 2600 developed such that there is very little reading, but longer and richer screencasts, with more complicated labs.
Saturday, March 3, 2012
Week 9 Reading Notes
I am very excited about some of HTML5's new features, particularly (a) semantic replacements for generic block and inline elements; and (b) the scripting APIs. (a) will make coding more user-friendly, and (b) will help programmers plug in cutting-edge APIs more easily.
I am surprised that the HTML5 syntax is no longer based on SGML . . . Might there be problems for browsers reading HTML5, given that most browsers were programmed to read older versions of HTML? I wonder whether WHATWG coordinates with browser programmers to make interoperability a living reality, and not just a fancy idea!
There is an HTML5 browser test that seems to show Google Chrome and Firefox leading the way! Yay for Chrome users =) .
Wednesday, February 22, 2012
Week 8 Reading Notes
I have some thoughts regarding last week's Camtasia lecture.
First, what new librarians really need is a "best practices" guide on how to design a library website. With options like FrontPage, Dreamweaver, and LibGuides, librarians face so many platforms that they don't know which one to use.
Second, if librarians may be employing the above platforms to design library websites, what is the value of learning HTML? I know that we should have at least an elementary knowledge of HTML, but what is the scope of that elementary knowledge? Are we learning too much HTML, and should we spend more time learning a website design platform, like FrontPage or Dreamweaver?
I feel that library schools must define the scope of basic website development skills more clearly, so that we are not wasting time on knowledge we will not use; skills we never exercise quickly go stale and are forgotten.
Third, it is very hard to read this week's readings on Cascading Style Sheets before doing the lab. The reading and the lab should be assigned at the same time, so we can get a firm grasp of what CSS is. I believe this would be more useful for future LIS 2600 students.
Just my two cents!
Wednesday, February 15, 2012
Week 7 Reading Notes
After reading "Beyond HTML," I felt that it was outdated. For example, Goans, Leach and Vogel seem to emphasize the importance for librarians of learning HTML.
While I believe that all librarians should have a basic knowledge of HTML, I believe there is an "easy" solution that may help librarians. Institutions can subscribe to Springshare's LibGuides, permitting their librarians to create guides that are similar in look, content, and feel. We at Temple employ LibGuides and are able to customize them within limits. Therefore, we don't need to know much about setting up or navigating a CMS. The drawback, of course, is that LibGuides carries an annual cost. A CMS customized to one's institution would cost time and effort upfront, but afterward one may not need to keep paying for it.
Friday, February 10, 2012
Week 6 Reading Notes
I fully advocate Sergey Brin's philosophy of the 80-20 principle in the workplace!
If people at their workplace were actively encouraged to work on their passion for 20% of the time, as long as it related to the institution, I believe they would be adding more value to their organization in the long run.
The examples he used -- Mendel discovering the laws of genetics in his spare time, Froogle being invented during a programmer's 20% time, and the Google Desktop bar being invented in another person's leisure time -- all serve as support.
It is a pity that most organizations do not permit their employees to engage in this principle more often.
Thursday, February 2, 2012
Week 4 LAB
Week 5 Reading Notes
In this week's Wikipedia reading about computer networks, I greatly enjoyed the section on overlay networks, because I finally have a better understanding of how peer-to-peer networks are set up.
This inspired me to think about how overlay networks might help libraries. The article mentioned that "Akamai Technologies manages an overlay network that provides reliable, efficient content delivery."
How might an overlay network help libraries?
On a search using PittCat's Summon, I found a couple of articles that seem really good. One is called "SDQE: Towards Automatic Semantic Query Optimization in P2P Systems," which describes a new system for retrieving information embedded in overlay networks.
Another, competing paper talks about a similar system, but this time with an emphasis on the semantic Web: "p2pDating: Real Life Inspired Semantic Overlay Networks for Web Search."
Finally, one last paper talks about Pepper, a P2P network specifically designed for searching and browsing digital libraries.
Hopefully, one day I'll be tech savvy enough to dig into and comprehend what these papers are saying =) .
Thursday, January 26, 2012
Week 4 Reading Notes
This week's readings were hard, very hard. They raised more questions for me than they answered.
They were hard because a great deal of database information was thrown at us without a proper framework for judging which of it is most relevant to libraries and which is not.
For example, the Wikipedia article about databases mentions a list of 18 kinds of databases, from active, cloud, distributed, and federated to even an "unstructured-data" database.
I really wish we had another reading, or some sense of which kinds of databases are most relevant for libraries, which are most often used in libraries, and for what purposes.
The only reading that I could grasp and try to apply in the real world was the entity-relationship model reading. This seems to bear upon the semantic web, a topic in which I already have background knowledge. However, I feel for those without background knowledge, this is going to be hard as well.
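To make the entity-relationship idea concrete for a library setting, here is a toy model of my own devising -- two entities (patron, book) and one relationship (loan) -- expressed as SQLite tables; all table names and sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patron (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE book   (id INTEGER PRIMARY KEY, title TEXT NOT NULL);
    CREATE TABLE loan (                      -- the relationship
        patron_id INTEGER REFERENCES patron(id),
        book_id   INTEGER REFERENCES book(id),
        due_date  TEXT
    );
    INSERT INTO patron VALUES (1, 'Ada');
    INSERT INTO book   VALUES (1, 'Dewey Meets Turing');
    INSERT INTO loan   VALUES (1, 1, '2012-05-01');
""")

# Traversing the relationship: who has which book checked out?
row = conn.execute("""
    SELECT p.name, b.title FROM loan l
    JOIN patron p ON p.id = l.patron_id
    JOIN book   b ON b.id = l.book_id
""").fetchone()
print(row)  # → ('Ada', 'Dewey Meets Turing')
```

The diagram boxes in the E-R reading map straight onto the entity tables, and the diamond-shaped relationships onto the junction table with its foreign keys.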
I really hope our lecture next week gives us a framework on how the structure of databases can apply in the real world!
Monday, January 23, 2012
Week 2 Lab - Jing Screencast and Flickr URLs
Wednesday, January 18, 2012
Week 3 Reading Notes
Eliminating Wealth and Language Barriers to Discoverability
While reading Anne Gilliland's introductory piece on metadata, I started to think about two "problems" that currently exist in library-land.
First, some metadata schemes like MARC or EAD are " . . . complex, time consuming, and resource intensive, and may only be justifiable when there is a legal mandate[.]" This reminded me that, at my present library, for books or digital materials to be counted in our collection, we are required to buy the MARC records. This affects us because the number of books we hold also affects our institution's academic ranking. It also reminded me that, on the Internet, one often must pay Google for sponsored placement in order to appear at the top of search results.
I wonder whether the next grand step for librarianship is to champion discoverability as a mandate that is just as important as access. In fact, the two go hand-in-hand; it does no good for a patron to have access if s/he cannot discover anything. And if there are wealth barriers which libraries must jump over to make their items discoverable, then perhaps we should break them, just as we (or at least some of us) championed OA on the principle of universal access.
Which of course brings us to Dublin Core, which is openly available but does not seem to be multilingual. To fully discover materials, and to maximize the flattening of distance that technology affords, we need an interoperable multilingual metadata scheme. The problem is that such a scheme would cost money. However, I wonder whether we could upload videos educating savvy patrons on how to craft sophisticated folksonomies. The scheme should be open source as well.
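The fifteen Dublin Core elements themselves are freely usable, and a record can be produced with nothing but the standard library. The bibliographic values below are invented for illustration:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)  # serialize with the conventional dc: prefix

record = ET.Element("record")
for element, value in [
    ("title", "Shi Jing"),       # parallel values per language could be added
    ("language", "zh"),
    ("creator", "Anonymous"),
]:
    ET.SubElement(record, f"{{{DC}}}{element}").text = value

xml_out = ET.tostring(record, encoding="unicode")
print(xml_out)
```

Since nothing here is proprietary, a library could generate thousands of such records without a licensing fee -- the cost, as always, is the cataloging labor and the multilingual expertise, not the scheme.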
This would not necessarily put OCLC's new endeavor out of business! We as librarians should support hybrid open-source and private business endeavors. However, we should get the source code out there so that innovative information professionals anywhere can eliminate the language barrier without charging a fee.
Wednesday, January 11, 2012
Week 2 Reading Notes
After reading the NY Times article on European libraries' digitization efforts, the "cataloger" inside me became greatly disturbed. I understand that, in an effort to raise money from a variety of sources, each European library needs to make private alliances with a variety of funders.
However, it seems that the final product of digitization will be a scattered patchwork of European digital libraries. That would preclude a one-stop-shop model for European works, which I personally prefer for its simplicity.
Hopefully, after the vast digitization work is done, the majority of European libraries can join a single consortium and create a single search engine or index for all their works. That would be more efficient, and save more time, than requiring everyone to enter separate URLs for separate search engines.