Thursday 29 March 2007

SPIRO: Slide and Photograph, Image Retrieval Online

Bibliographic description
ARCHITECTURE VISUAL RESOURCES LIBRARY. SPIRO [online]. Berkeley: University of California, 11 December 2006. Available at: http://www.mip.berkeley.edu/query_forms/browse_spiro_form.html

Dublin Core
Title : SPIRO
Creator : Architecture Visual Resources Library, Department of Architecture, College of Environmental Design, University of California, Berkeley
Subject : image database / architecture / visual arts / city planning / urban development
Description : "SPIRO is the visual online public access catalog to the 35mm slide collection of the Architecture Visual Resources Library (AVRL) at the University of California at Berkeley. The collection numbers over 250,000 slides and 20,000 photographs."
Publisher : University of California, Berkeley
Date : 2006-12-11
Type : Image Database
Format : HTML
Identifier : http://www.mip.berkeley.edu/query_forms/browse_spiro_form.html
Source: http://www.arch.ced.berkeley.edu/resources/archslides.htm
Language : En
Relation : http://www.mip.berkeley.edu/spiro/about.html, http://www.berkeley.edu/
Coverage : USA
Rights : Architecture Visual Resources Library
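As an aside on the record format itself: a Dublin Core record like the one above is usually exchanged as XML, with one `dc:` element per field. A minimal sketch using only the standard library; the element names follow the Dublin Core Metadata Element Set, and the sample record is abbreviated from this post.

```python
# Serialize a {element: value} Dublin Core record as XML with dc:* elements.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"

def record_to_xml(record: dict) -> str:
    """Render the record as a <metadata> tree of namespaced dc:* elements."""
    ET.register_namespace("dc", DC_NS)
    root = ET.Element("metadata")
    for element, value in record.items():
        child = ET.SubElement(root, f"{{{DC_NS}}}{element}")
        child.text = value
    return ET.tostring(root, encoding="unicode")

# Abbreviated from the SPIRO record above.
spiro = {
    "title": "SPIRO",
    "creator": "Architecture Visual Resources Library, UC Berkeley",
    "date": "2006-12-11",
    "type": "Image Database",
    "format": "HTML",
    "language": "en",
}
print(record_to_xml(spiro))
```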

Abstract
"SPIRO permits access to the collection by seven access points which may be used independently or in combination:
historical period
place
personal name
object name
subject terms
source of image
image identification number
As of January 2004, SPIRO contained over 63,000 records linked to images, approximately 20% of AVRL's total slide collection. Thirty-three percent (33%) of the images in SPIRO come from images in books. These are produced in-house by copy stand photography under the fair use and educational copying provisions of the U.S. Copyright Law. Eleven percent (11%) of the images in SPIRO derive from copy stand photography from periodicals, also produced in-house. Thirty-eight percent (38%) of the images are donor-supplied, and eighteen percent (18%) are purchased from commercial slide vendors."
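The seven access points described above amount to optional filters that can be combined with AND semantics. A toy sketch of such a query; the field names and sample records are illustrative, not SPIRO's actual schema.

```python
# Combine any subset of access points (place, period, subject, ...) into
# one query; a record matches only if every supplied criterion matches.

def search(records, **criteria):
    """Return records matching every supplied access point (exact match)."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

slides = [
    {"id": "A1", "place": "Paris", "period": "19th century", "subject": "bridges"},
    {"id": "A2", "place": "Paris", "period": "20th century", "subject": "housing"},
    {"id": "A3", "place": "Rome", "period": "19th century", "subject": "bridges"},
]

# One access point alone, or two in combination:
assert [r["id"] for r in search(slides, place="Paris")] == ["A1", "A2"]
assert [r["id"] for r in search(slides, place="Paris", subject="bridges")] == ["A1"]
```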


The Subject Analysis of Images: Past, Present and Future

Bibliographic description
WARDEN, Ginger; DUNBAR, Denise; WANCZYCKI, Catherine; O'HANLEY, Suanne. The Subject Analysis of Images: Past, Present and Future [online]. University of British Columbia, School of Library, Archival and Information Studies, 27 March 2002. Available at:
http://www.slais.ubc.ca/people/students/student-projects/C_Wanczycki/libr517/homepage.html

Dublin Core
Title : The Subject Analysis of Images: Past, Present and Future
Creator : Ginger Warden, Denise Dunbar, Catherine Wanczycki, Suanne O'Hanley
Subject : image collection / image classification / thesaurus / image indexing
Description : "The Art and Architecture Thesaurus (AAT) is a structured vocabulary that can be used to improve access to art, architecture, and material culture."
Publisher : University of British Columbia, School of Library, Archival and Information Studies
Date : 2002-03-27
Type : Web site
Format : HTML
Identifier : http://www.slais.ubc.ca/people/students/student-projects/C_Wanczycki/libr517/homepage.html
Source : http://www.slais.ubc.ca/
Language : En
Relation : -
Coverage : UK
Rights : No

Extract

"Image collections exist for many purposes: medicine (ultrasounds, CAT scans), architecture (building plans), geography (aerial photos, maps), art (paintings, cartoons), business (trademarks), history (photographs). Some image collections are very large. The Getty Institute's Photo Study Collection, for example, has over two million photographs. Indexing collections of this size can be extremely time consuming, and unlike text, images cannot be searched by keyword. Many automatic indexing systems have been developed, but what computers can currently extract from images are "mostly low-level features" (Rui, 1999) like color, shape, and texture. Research on the information needs of users, and on human perception of images may, in time, contribute the knowledge needed to produce the most precise and efficient retrieval systems possible.

In the meantime, librarians contending with image collections have to make decisions about how best to provide access to them. Currently, there is no universal consensus in libraries. In a survey of 58 libraries in the U.K. (Graham, 1999), the clear majority of respondents employed in-house methods of classifying and indexing their collections, rather than relying on publicized schemes, such as the AAT (Art and Architecture Thesaurus), LCTGM (Library of Congress Thesaurus for Graphic Materials), and LCSH (Library of Congress Subject Headings). This is likely the result of tradition. Curators of image collections were left to their own devices for most of the century, insofar as subject headings for images went, while LCSH concentrated on primarily text-based materials. Many different thesauri were developed by individuals or groups of individuals to deal with particular collections, but efforts to create a universally acceptable indexing language for images have only been a point of interest in the past 30 years or so, with the increasing volume of available images and the desire for increased resource-sharing between institutions.

The AAT and LCTGM are presently the two most widely accepted vocabularies for use with image collections. Their development, structure and scope are the main focus of this website. Subject headings from each are applied to several types of images by way of example. We also look to the past and future of subject access to images by surveying both the methods librarians have used in the past (and are still using today to some extent) and the methods that are currently being developed (and to some extent already in place)."
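The "low-level features" the extract says computers can extract automatically can be as simple as a color histogram. A minimal sketch: quantize RGB pixels into coarse bins and compare two images by histogram intersection. The pixel data is synthetic, purely for illustration.

```python
# Coarse color-histogram feature extraction plus histogram-intersection
# similarity, the kind of low-level cue used by automatic indexing systems.

def color_histogram(pixels, bins=4):
    """Normalized RGB histogram: each channel quantized into `bins` levels."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Synthetic "images": lists of (R, G, B) pixels.
reddish = [(220, 30, 40)] * 90 + [(200, 60, 50)] * 10
bluish = [(30, 40, 220)] * 100

assert intersection(color_histogram(reddish), color_histogram(reddish)) == 1.0
assert intersection(color_histogram(reddish), color_histogram(bluish)) < 0.1
```

Note that nothing here captures *what* the picture depicts, which is exactly the gap between low-level features and subject indexing that the extract describes.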

Monday 26 March 2007

Iconclass: iconographic classification system

Bibliographic description
Iconclass: iconographic classification system [online]. The Netherlands Institute for Scientific Information Services, 21 October 2003. Available at: http://www.niwi.knaw.nl/en/geschiedenis/projecten/iconclass/

Dublin Core
Title : Iconclass: iconographic classification system
Creator : ?
Subject : Iconclass / image classification / thesaurus
Description : It is a "classification system for standardized description of the contents of visual documents."
Publisher : the Netherlands Institute for Scientific Information Services (NIWI)
Date : 2003-10-21
Type : article
Format : HTML
Identifier : http://www.niwi.knaw.nl/en/geschiedenis/projecten/iconclass/
Source: http://www.niwi.knaw.nl/nl/
Language : En
Relation : http://www.iconclass.nl/
Coverage : Netherlands
Rights : -

Abstract
Iconclass is a classification system for standardized description of the contents of visual documents. [...]
Iconclass is a collection of ready-made classification codes called notations, used to define objects, persons, events, situations, abstract ideas and other potential subjects of visual documents. The approximately 28,000 definitions are arranged in hierarchical order and divided into ten main classes. Some classes are designed for the description of specific subjects, in particular biblical, mythological and literary themes. These are used mainly in art-historical context. Others, containing general subjects, constitute a self-sufficient system offering a place to every subject and activity on earth.
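Because Iconclass notations are hierarchical, a longer code refines every code it extends, so broader terms can be recovered by simple prefix lookup. A minimal sketch; the sample definitions are illustrative stand-ins, not entries copied from the actual Iconclass schedules.

```python
# Prefix-based broader-term lookup over a tiny, illustrative notation table.

NOTATIONS = {
    "2": "nature",
    "25": "earth, world as celestial body",
    "25F": "animals",
    "25F2": "mammals",
}

def broader_terms(notation):
    """All ancestors of a notation, from the top of the hierarchy down."""
    prefixes = [notation[:i] for i in range(1, len(notation))]
    return [(p, NOTATIONS[p]) for p in prefixes if p in NOTATIONS]

assert broader_terms("25F2") == [
    ("2", "nature"),
    ("25", "earth, world as celestial body"),
    ("25F", "animals"),
]
```

This prefix structure is what lets an image indexed with a specific notation also be retrieved by searches on any of its broader classes.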

Sunday 25 March 2007

IPTC Standard

Bibliographic description
The IPTC-NAA standards [online]. Controlled Vocabulary. Available at:
http://www.controlledvocabulary.com/imagedatabases/iptc_naa.html

Dublin Core
Title : The IPTC-NAA standards
Creator : ?
Subject : metadata / IPTC / image description / image database
Description : "A controlled vocabulary can be useful in describing images and information when organizing and classifying content for image databases."
Publisher : Controlled Vocabulary
Date : ?
Type : Article
Format : HTML
Identifier : http://www.controlledvocabulary.com/imagedatabases/iptc_naa.html
Source : http://www.controlledvocabulary.com/
Language : En
Relation : -
Coverage : ?
Rights : -

Extract
Each image file can be saved using Adobe Photoshop with this text information embedded within the file. Anyone who's worked around newspapers, with digital images, or image databases for a while has probably heard the acronyms IPTC or IPTC-NAA tossed around, usually when discussing the use of the File Info feature of Photoshop. But few understand what they mean or stand for. The short story is that IPTC, the International Press Telecommunications Council, was one of the groups responsible for encouraging the standards necessary to "marry" the text information describing an image with the image data itself. The NAA is the Newspaper Association of America (formerly ANPA), and they also have been responsible for developing standards for exchanging information between news operations, including information used to describe images. [...]
Standards regarding metadata for news images have evolved over time, beginning in the 1970s when some were first issued as "guidelines." However, most of these efforts were regional in nature, and focused on text. As news organizations moved from manual typewriters to CRTs (Cathode Ray Tubes) and VDTs (Video Display Terminals) these standards were revised and became more specific. Only later, as the world embraced the web, did the standards begin to address multimedia content.
In 1979, the International Press Telecommunications Council (IPTC) approved its first news exchange standard IPTC 7901. This provided metadata and content in plain text only; the only delimiters allowed were spaces and line breaks...
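The embedded text blocks the extract describes follow the IPTC Information Interchange Model (IIM): each dataset is a 0x1C tag marker, a record number, a dataset number, a two-byte big-endian length, then the value. A minimal parser sketch (standard-length datasets only; the extended-length form is not handled, and the sample blob is synthetic).

```python
# Parse a sequence of IPTC-IIM datasets from raw bytes into a dict keyed
# by (record, dataset) number pairs.
import struct

def parse_iim(data: bytes) -> dict:
    """Map (record, dataset) pairs to their byte values."""
    fields, pos = {}, 0
    while pos + 5 <= len(data) and data[pos] == 0x1C:
        record, dataset = data[pos + 1], data[pos + 2]
        (length,) = struct.unpack(">H", data[pos + 3:pos + 5])
        fields[(record, dataset)] = data[pos + 5:pos + 5 + length]
        pos += 5 + length
    return fields

# Record 2, dataset 120 is Caption/Abstract; 2:80 is By-line.
blob = (b"\x1c\x02\x78" + struct.pack(">H", 5) + b"A dog"
        + b"\x1c\x02\x50" + struct.pack(">H", 4) + b"Jane")
fields = parse_iim(blob)
assert fields[(2, 120)] == b"A dog"
assert fields[(2, 80)] == b"Jane"
```

In practice you would read these bytes out of a JPEG's APP13 "Photoshop IRB" segment rather than construct them by hand; this sketch only shows the dataset framing itself.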

Saturday 24 March 2007

Photo classification by integrating image content and camera metadata

Bibliographic description
BOUTELL, M.; LUO, Jiebo. Photo classification by integrating image content and camera metadata [online]. Rochester, NY, USA: University of Rochester, Department of Computer Science, 23 August 2004. Available at: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1333918

Dublin Core
Title : Photo classification by integrating image content and camera metadata
Creator : M. Boutell, Jiebo Luo
Subject : image classification / content-based / metadata / semantic classification
Description :
Publisher : IEEE Xplore
Date : 2004-08-23
Type : article
Format : PDF
Identifier : http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1333918
Source : http://ieeexplore.ieee.org/Xplore/guesthome.jsp
Language : En
Relation : -
Coverage : USA
Rights : Copyright 2006 IEEE

Abstract
Despite years of research, semantic classification of unconstrained photos is still an open problem. Existing systems have only used features derived from the image content. However, Exif metadata recorded by the camera provides cues independent of the scene content that can be exploited to improve classification accuracy. Using the problem of indoor-outdoor classification as an example, analysis of metadata statistics for each class revealed that exposure time, flash use, and subject distance are salient cues. We use a Bayesian network to integrate heterogeneous (content-based and metadata) cues in a robust fashion. Based on extensive experimental results, we make two observations: (1) adding metadata to content-based cues gives highest accuracies; and (2) metadata cues alone can outperform content-based cues alone for certain applications, leading to a system with high performance, yet requiring very little computational overhead. The benefit of incorporating metadata cues can be expected to generalize to other scene classification problems.
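The cue fusion the abstract describes can be approximated with a naive Bayes model: flash use, long exposure, and short subject distance each shift the indoor/outdoor posterior. The likelihood numbers below are invented for illustration, not the paper's learned parameters, and the paper itself uses a richer Bayesian network rather than this naive independence assumption.

```python
# Naive-Bayes fusion of binary metadata cues for indoor/outdoor scenes.
import math

# P(cue_value | class); cues are assumed conditionally independent.
LIKELIHOODS = {
    "flash":         {True:  {"indoor": 0.7, "outdoor": 0.1},
                      False: {"indoor": 0.3, "outdoor": 0.9}},
    "long_exposure": {True:  {"indoor": 0.6, "outdoor": 0.2},
                      False: {"indoor": 0.4, "outdoor": 0.8}},
    "close_subject": {True:  {"indoor": 0.8, "outdoor": 0.3},
                      False: {"indoor": 0.2, "outdoor": 0.7}},
}

def classify(cues: dict, prior_indoor=0.5):
    """Return ('indoor' or 'outdoor', posterior) from metadata cues alone."""
    log_post = {"indoor": math.log(prior_indoor),
                "outdoor": math.log(1 - prior_indoor)}
    for cue, value in cues.items():
        for label in log_post:
            log_post[label] += math.log(LIKELIHOODS[cue][value][label])
    total = sum(math.exp(v) for v in log_post.values())
    label = max(log_post, key=log_post.get)
    return label, math.exp(log_post[label]) / total

label, p = classify({"flash": True, "long_exposure": True, "close_subject": True})
assert label == "indoor" and p > 0.9
```

Note how cheap this is relative to content analysis: three Exif lookups and a handful of multiplications, which is the "very little computational overhead" the abstract highlights.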

Thursday 22 March 2007

CIRES: Content Based Image Retrieval System

Bibliographic description
IQBAL, Qasim; AGGARWAL, J.K. CIRES: A system for Content Based Retrieval in digital image libraries [online]. Singapore: Invited session on Content Based Image Retrieval: Techniques and Applications, International Conference on Control, Automation, Robotics and Vision (ICARCV), 2 December 2002. Available at: http://amazon.ece.utexas.edu/~qasim/papers.htm

Dublin Core
Title : CIRES: Content Based Image Retrieval System
Creator : Qasim Iqbal, J. K. Aggarwal
Subject : image retrieval / content-based / retrieval system / CIRES
Description : CIRES is an online system for content-based retrieval in digital image libraries.
Publisher : International Conference on Control, Automation, Robotics and Vision (ICARCV)
Date : 2002-12-02
Type : Conference paper
Format : PDF
Identifier : http://amazon.ece.utexas.edu/~qasim/papers.htm
Source : http://amazon.ece.utexas.edu/~qasim/
Language : En
Relation : -
Coverage : USA
Rights : -

Abstract
This paper presents CIRES, a new online system for content-based retrieval in digital image libraries. Content-based image retrieval systems have traditionally used color and texture analyses. These analyses have not always achieved an adequate level of performance and user satisfaction. The growing need for robust image retrieval systems has led to a need for additional retrieval methodologies. CIRES addresses this issue by using image structure in addition to color and texture. The efficacy of using structure in combination with color and texture is demonstrated.

Retrieval of Pictures Using Approximate Matching

Bibliographic description
SISTLA, A. Prasad; YU, Clement. Retrieval of Pictures Using Approximate Matching [online]. University of Illinois: Department of Electrical Engineering and Computer Science, 1995. Available at: http://citeseer.ist.psu.edu/sistla95retrieval.html

Dublin Core
Title : Retrieval of Pictures Using Approximate Matching
Creator : A. Prasad Sistla, Clement Yu
Subject : picture retrieval / image database
Description : A description of a general-purpose pictorial retrieval system based on approximate matching.
Publisher : Department of Electrical Engineering and Computer Science, University of Illinois
Date : 1995
Type : Book extract
Format : PDF
Identifier : http://citeseer.ist.psu.edu/sistla95retrieval.html
Source : http://citeseer.ist.psu.edu/
Language : En
Relation : -
Coverage : USA
Rights : Copyright Penn State and NEC

Abstract
We describe a general-purpose pictorial retrieval system based on approximate matching. This system accommodates pictorial databases for a broad class of applications. It consists of tools for handling the following aspects: user interfaces, reasoning about spatial relationships, and computing degrees of similarity between queries and pictures. In this paper, we briefly describe the model that is used for representing pictures/queries, the user interface, the system for reasoning about spatial relationships, and the methods employed for computing the similarities of pictures with respect to queries.
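One way to read "degrees of similarity" here: a query lists spatial relationships between objects, and a picture's score is the fraction of those relationships it satisfies. A toy sketch under that reading; the relation names and object layout are illustrative, not the paper's actual model.

```python
# Approximate matching of spatial-relationship queries against a picture
# described as object name -> (x, y) center, with y growing downward.

def satisfies(picture, relation):
    """Check one (subject, relation, object) triple against object centers."""
    subj, rel, obj = relation
    (x1, y1), (x2, y2) = picture[subj], picture[obj]
    if rel == "left_of":
        return x1 < x2
    if rel == "above":
        return y1 < y2
    raise ValueError(f"unknown relation: {rel}")

def similarity(picture, query):
    """Degree of match in [0, 1]: fraction of query relations satisfied."""
    return sum(satisfies(picture, r) for r in query) / len(query)

picture = {"tree": (10, 40), "house": (60, 50), "sun": (70, 5)}
query = [("tree", "left_of", "house"), ("sun", "above", "house"),
         ("house", "left_of", "tree")]

# Two of the three query relations hold, so the match is partial, not zero.
assert similarity(picture, query) == 2 / 3
```

Ranking pictures by this score, rather than demanding an exact match on every relation, is what makes the matching "approximate."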

Picture Retrieval Systems: A Unified Perspective and Research Issues

Bibliographic description
GUDIVADA, Venkat N.; RAGHAVAN, Vijay V. Picture Retrieval Systems: A Unified Perspective and Research Issues [online]. Ohio University: Department of Computer Science, The Center for Advanced Computer Studies, 1995. Available at: http://citeseer.ist.psu.edu/gudivada95picture.html

Dublin Core

Title : Picture Retrieval Systems: A Unified Perspective and Research Issues
Creator : Venkat N. Gudivada, Vijay V. Raghavan
Subject : picture retrieval / image database / Picture Retrieval System
Description :
Publisher : Department of Computer Science, The Center for Advanced Computer Studies, Ohio University
Date : 1995
Type : article
Format : PDF
Identifier : http://citeseer.ist.psu.edu/gudivada95picture.html
Source : http://citeseer.ist.psu.edu/
Language : En
Relation : -
Coverage : USA
Rights : Copyright Penn State and NEC

Abstract
The Picture Retrieval (PR) problem is concerned with retrieving pictures that are relevant to users' requests from a large collection of pictures, referred to as the picture database. We use the term picture in a very general context to refer to different types of images originating in disparate application areas. The sources for these images range from satellites, diagnostic medical imaging, architectural and engineering drawings, geographic maps, and mug-shot images of criminals, to family photographs and portraits. A computer system that facilitates picture retrieval is referred to as a Picture Retrieval System (PRS). The application areas that consider picture retrieval as a principal activity are both numerous and disparate. As diverse as the application areas are, there seems to be no consensus as to what a picture retrieval system really is. Consequently, the features of the existing picture retrieval systems have essentially evolved from domain-specific considerations.

Sunday 18 March 2007

Real-Time Computerized Annotation of Pictures

Bibliographic description
LI, Jia; WANG, James Z. Real-Time Computerized Annotation of Pictures [online]. The Pennsylvania State University, University Park, 25 July 2006. Available at: http://infolab.stanford.edu/~wangz/project/imsearch/ALIP/ACMMM06/li06.pdf

Dublin Core
Title : Real-Time Computerized Annotation of Pictures
Creator : Jia Li and James Z. Wang
Subject : digital picture / indexing / automatic indexing
Description : An article about automated annotation of digital pictures and the web site ALIPR (Automatic Linguistic Indexing of Pictures).
Publisher : http://infolab.stanford.edu/
Date : 2006-07-25
Type : article
Format : PDF
Identifier : http://infolab.stanford.edu/~wangz/project/imsearch/ALIP/ACMMM06/li06.pdf
Source : http://infolab.stanford.edu/
Language : En
Relation : http://www.alipr.com/, http://wang.ist.psu.edu/docs/home.shtml
Coverage : USA
Rights : ACM Multimedia Conference

Abstract
Automated annotation of digital pictures has been a highly challenging problem for computer scientists since the invention of computers. The capability of annotating pictures by computers can lead to breakthroughs in a wide range of applications including Web image search, online picture-sharing communities, and scientific experiments. In our work, by advancing statistical modeling and optimization techniques, we can train computers about hundreds of semantic concepts using example pictures from each concept. The ALIPR (Automatic Linguistic Indexing of Pictures - Real Time) system of fully automatic and high speed annotation for online pictures has been constructed. Thousands of pictures from an Internet photo-sharing site, unrelated to the source of those pictures used in the training process, have been tested. The experimental results show that a single computer processor can suggest annotation terms in real-time and with good accuracy.
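The core idea (train a model per concept from example pictures, then suggest the concepts whose models best fit a new picture) can be caricatured in a few lines. This sketch replaces ALIPR's statistical models with a per-concept mean feature vector and nearest-mean lookup; the features and concept names are invented for illustration.

```python
# Toy concept-based annotation: model = mean of example feature vectors,
# annotation = k concepts whose models are nearest in Euclidean distance.
import math

def mean_vector(vectors):
    return [sum(component) / len(vectors) for component in zip(*vectors)]

def train(examples):
    """examples: concept -> list of feature vectors; model = per-concept mean."""
    return {concept: mean_vector(vs) for concept, vs in examples.items()}

def annotate(models, features, k=2):
    """Suggest the k concepts whose mean features are nearest to the picture."""
    return sorted(models, key=lambda c: math.dist(models[c], features))[:k]

examples = {
    "beach":  [[0.9, 0.8, 0.1], [0.8, 0.9, 0.2]],
    "forest": [[0.1, 0.9, 0.9], [0.2, 0.8, 0.8]],
    "city":   [[0.5, 0.2, 0.3], [0.4, 0.3, 0.2]],
}
models = train(examples)
assert annotate(models, [0.85, 0.85, 0.15], k=1) == ["beach"]
```

The real-time property the abstract emphasizes follows from the same shape: once training is done offline, annotating a picture is just a cheap comparison against a fixed set of concept models.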