Saturday, November 15, 2008

Week 12 Muddy Point

Going back to the week 6 assignment, my muddy point is how it is possible to delete the public folder from the program. Once the folder is deleted, why can one not restore it by reinstalling the program?

Wednesday, November 12, 2008

Week 12 Readings

Using a Wiki to Manage a Library Instruction Program: Sharing Knowledge to Better Serve Patrons

Librarians can make use of a wiki to provide or share information across a network. When used to assist with instruction, staff members invited to the wiki can edit and input resources to achieve a well-rounded and rich tool set. The opportunities for collaborative work alone make the instructional wiki more interactive than plain text files. In my opinion, the wiki is most valuable as an instructional tool when those in need of instruction or input are a great distance from those who have the resources to share.

Weblogs: Their Use and Applications in Science and Technology Libraries:

This article details the history, functionality, and popularity of Web blogging. The range of blog types runs from the simple one-page list of friends to call to complex industry project logs. Children blog about favorite shows and activities, teens may blog about their social lives, and adults express their views on current hot topics. In library settings, blogs contain information that is both time sensitive and mission critical, announce new resources, and exchange a loose collection of work-culture ideas.


Creating the Academic Library Folksonomy:


Just as Google revolutionized the internet search, social tagging is revolutionizing the way people (researchers) access, collect, and store internet resources after they have been discovered. I see it in terms of effort reduction. Through social tagging, someone who found information on a topic of interest to me will have left a sign-post that says: "It's over here!" The other nice thing about social tagging is the location: the collection of tags is always online, which means one has access to them 24/7 via whatever device has internet connectivity, along with the keywords used and an identification of the content. So, instead of an exhaustive Google search that may produce off-topic content, I can search through social tags that have narrowed the parameters in advance.

For a Reference Librarian, where the entire reference interview may consist of only 13 minutes, an accurate, dependable, and fast search method with pre-defined content is a major tool for answering the library user's questions or needs.


Jimmy Wales on the Birth of Wikipedia:

This talk takes the concept of "We the People" and applies it to defining information on the Web. Wikipedia is doing for the internet what World Book Encyclopedia has done for the K-12 set. Most administrative functions are staffed by volunteers, which has reduced operating costs, a notable feat considering that Wikipedia is multilingual and multicultural in scope. The content is edited for political balance and monitored for offensive content. Social policy and management software are employed to maintain some control over the issues under discussion.

Saturday, November 8, 2008

Week 10 and 11-Muddy Points

My muddy point for weeks 10 and 11 is how so many organizations, while working toward the same goal, have not achieved better networking methods and standards.

Why are University digital repositories not open online to the general public?

Week 9 Comments

1. Adrien's blog- https://www.blogger.com/comment.g?blogID=5116479294225641407&postID=8570092441742701385&page=1


2. Bo's blog- http://analogfailure.blogspot.com/

3. Stephanie's blog https://www.blogger.com/comment.g?blogID=5053881157949942224&postID=1769399562575819471&page=1

Friday, November 7, 2008

Week 11 Readings

Digital Libraries:


Digital collections are capable of holding a wide variety of content, from plain text files to .gif and .jpg images, PDF documents, and audio/video files such as WAV. Several government and industry programs are currently engaged in compiling content from the World Wide Web for inclusion in digital archives. How access to these archives will be provided has not been conclusively determined. One example of a digital library that is free to access is Google Scholar.

Dewey Meets Turing:

From the perspective of librarians, the NSF's DL Initiative was an avenue to new funding and a way to experience new technologies. For computer science professionals, the partnership would mean a step into the social markets served by libraries. What Paepcke calls a "Cuckoo's Egg" was the advent of the WWW and its proliferation of on-demand, open-access information delivery. Here the implied threat to computer science and library functions appears: non-technically minded individuals with access to a simple home computer can call up detailed content on web pages from any server on the WWW.
I think society is still struggling to embrace the scope of the WWW and, to a lesser degree, the Net. How do different cultural groups assimilate content and concepts from outside their world view when encountered online?

Institutional Repositories:

An institutional repository is the set of database resources held by places like universities, colleges, research facilities, or information warehouses. The content of such repositories can run the spectrum from simple cooking recipes to the chemical formula for a new flu vaccine.
Couple this content with the ability to network on a global scale, and the true potential of digital repositories becomes apparent.

Monday, November 3, 2008

Week 10 Readings

Digital Libraries: Challenges and Influential Work

Mischo sets out to describe the complexities involved in the creation and maintenance of digital libraries. He states that a digital library is more than a collection of sound, image, or data files.

OAI Metadata for Libraries

The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a way to collect information about the structure of archived data. The protocol was designed as a simple, low-barrier way to achieve interoperability through metadata harvesting. Exactly how useful metadata sharing will be has not been fully determined; however, considerable interest in OAI and experience with early OAI-PMH implementations is encouraging (Warner, 2001).
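
To make the harvesting idea concrete, here is a minimal Python sketch of a single ListRecords request. The base URL is a hypothetical repository endpoint I made up for illustration; only the ListRecords verb, the metadataPrefix argument, and the XML namespace are taken from the OAI-PMH specification.

    # Minimal sketch of one OAI-PMH ListRecords request against a
    # hypothetical repository endpoint.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "https://repository.example.edu/oai"  # hypothetical endpoint

    def list_records(metadata_prefix="oai_dc"):
        """Request one page of records and print each record's OAI identifier."""
        query = urllib.parse.urlencode({
            "verb": "ListRecords",
            "metadataPrefix": metadata_prefix,
        })
        with urllib.request.urlopen(f"{BASE_URL}?{query}") as response:
            root = ET.fromstring(response.read())

        ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
        for record in root.findall(".//oai:record", ns):
            identifier = record.find("./oai:header/oai:identifier", ns)
            print(identifier.text if identifier is not None else "(no identifier)")

    if __name__ == "__main__":
        list_records()

A real harvester would also follow the resumptionToken returned for large result sets; this sketch only fetches the first page.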

The Deep Web: Surfacing Hidden Value

Programs called spiders or web crawlers are deployed to hunt through web pages in search of content. Some types of content, however, escape detection by being buried or hidden from those commonly used detection programs. Think of it as viewing a photo: one can see what is in the foreground with little effort, but may need a magnifying glass to pull up finer details. The web crawlers function as this magnifying glass but are limited to commonly detectable elements. To get a view of the finer details or to detect encoded meta-data, a different program was needed. BrightPlanet's technology was invented to read header packets and detect content by the size of the files. While this sounds simple, it is very effective: small content files do not use much in the way of bits and bytes, but larger files do.

Site characterization required three steps (a small sketch of the size estimate follows this list):

  1. Estimating the total number of records or documents contained on that site.
  2. Retrieving a random sample of a minimum of ten results from each site and then computing the expressed HTML-included mean document size in bytes. This figure, times the number of total site records, produces the total site size estimate in bytes.
  3. Indexing and characterizing the search-page form on the site to determine subject coverage. (Bergman, M., 2001).
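
As a rough illustration of the size estimate in steps 1 and 2, the following Python sketch multiplies the mean document size of a sample by the estimated record count. Both the ten sample sizes and the record count are made-up numbers, not figures from the article.

    # Sketch of the site size estimate: mean sampled document size (bytes)
    # multiplied by the estimated number of records on the site.
    sample_sizes_bytes = [18_400, 22_150, 19_800, 25_600, 17_300,
                          21_900, 20_450, 23_700, 18_950, 24_100]  # hypothetical sample
    estimated_total_records = 45_000  # hypothetical step 1 estimate

    mean_doc_size = sum(sample_sizes_bytes) / len(sample_sizes_bytes)
    estimated_site_size = mean_doc_size * estimated_total_records

    print(f"Mean document size:        {mean_doc_size:,.0f} bytes")
    print(f"Estimated total site size: {estimated_site_size:,.0f} bytes")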