The Open Video Project is a repository of digitized videos developed and hosted by the Interaction Design Laboratory at the School of Information and Library Science, University of North Carolina at Chapel Hill. The collection serves as a standardized data set for researchers working on digital video problems, such as facial recognition algorithms and the creation of surrogates that describe video content.
The first stage of the project began in 1998. It included the digitization of 195 video segments, the creation of metadata for those segments, and the launch of a website giving researchers access to them. The next stage concentrated on adding more videos and increasing the range of available genres and formats. This stage also included research and innovation on the web interface to simplify the processes of searching, browsing, previewing, and evaluating the videos.
Although the project is not currently accepting contributions, there are six contributor collections on the site, as well as two special collections. One of the contributed collections that I found particularly interesting is the NASA K-16 Education Programs, which holds 580 video segments of NASA broadcasts used in the agency's technology-based learning programs.
What really impressed me about this site was the granularity of the search and browse options. From the homepage, users can browse the collections by four categories: genre, duration, color, and sound. Each category has subcategories that list the number of video segments available in each; this shows the user how deep or shallow the collection is in that particular area. On the detailed search page, users can narrow by these four categories and also by file format, language, and creation date. (It's interesting to note that the site designers specify the creation date as the date of the original video and not the creation date of the digital surrogate.)
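To make the browse-and-narrow behavior concrete, here is a minimal sketch of faceted filtering over segment records. The four facet names follow the browse categories described above; the sample data and function names are my own invention, not the project's actual implementation.

```python
# Hypothetical sketch of faceted browsing over video-segment records.
# The facet fields (genre, duration, color, sound) mirror the four
# browse categories on the Open Video homepage; the data is invented.

from collections import Counter

segments = [
    {"genre": "Documentary", "duration": "1-2 min", "color": True, "sound": True},
    {"genre": "Documentary", "duration": "2-5 min", "color": False, "sound": True},
    {"genre": "Lecture", "duration": "2-5 min", "color": True, "sound": True},
]

def facet_counts(records, facet):
    """Count segments per subcategory, as the browse page displays."""
    return Counter(rec[facet] for rec in records)

def browse(records, **criteria):
    """Narrow the collection by any combination of facet values."""
    return [rec for rec in records
            if all(rec.get(k) == v for k, v in criteria.items())]

print(facet_counts(segments, "genre"))  # subcategory counts for one facet
print(len(browse(segments, genre="Documentary", sound=True)))  # prints 2
```

Showing the per-subcategory counts next to each facet, as the site does, is what lets a user judge how deep or shallow the collection is before committing to a narrower search.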
What is also impressive about this site is the quality and presentation of the metadata underlying the collections. Metadata at the collection level gives a description of the collection, the number and format of its segments, and the copyright and accrual policies for that collection. Metadata at the segment level includes the categories already mentioned from the search page, plus such details as the amount of motion in the segment and whether or not a transcript is available.
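The two-level metadata scheme described above can be sketched as a pair of record types. The field names here are assumptions inferred from the prose, not the project's actual schema, and the sample values (aside from the segment count of 580) are illustrative.

```python
# Hypothetical sketch of the two-level metadata described above:
# one record type per collection, one per segment. Field names are
# inferred from the prose, not taken from the project's schema.

from dataclasses import dataclass
from typing import List

@dataclass
class CollectionMetadata:
    description: str
    segment_count: int
    formats: List[str]
    copyright_policy: str
    accrual_policy: str

@dataclass
class SegmentMetadata:
    genre: str
    duration: str
    color: bool
    sound: bool
    motion_level: str          # e.g. "low", "medium", "high"
    transcript_available: bool

# Illustrative values; only the segment count comes from the text.
nasa = CollectionMetadata(
    description="NASA K-16 Education Programs broadcasts",
    segment_count=580,
    formats=["MPEG-1"],        # assumed format, for illustration only
    copyright_policy="see collection page",
    accrual_policy="closed to new contributions",
)
```

Keeping copyright and accrual policy at the collection level, while segment-level records carry searchable facets like motion and transcript availability, matches the division of labor the site's search and browse pages rely on.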
This is an extremely interesting project, and there is much more that I could say about it. However, in the interest of brevity, I will stop here.