Johnson, M., Brostow, G.J., Shotton, J., Kwatra, V. and Cipolla, R. (2007) Semantic photo synthesis, art. no. 64920X. In: Rogowitz, B.E., Pappas, T.N. and Daly, S.J. (eds.) Human Vision and Electronic Imaging XII. SPIE - International Society for Optical Engineering.
Composite images are synthesized from existing photographs by artists who make concept art, e.g. storyboards for movies or architectural planning. Current techniques allow an artist to fabricate such an image by digitally splicing parts of stock photographs. While these images serve mainly to "quickly" convey how a scene should look, their production is laborious. We propose a technique that allows a person to design a new photograph with substantially less effort. This paper presents a method that generates a composite image when a user types in nouns, such as "boat" and "sand." The artist can optionally design an intended image by specifying other constraints. Our algorithm formulates the constraints as queries to search an automatically annotated image database. The desired photograph, not a collage, is then synthesized using graph-cut optimization, optionally allowing for further user interaction to edit or choose among alternative generated photos. Our results demonstrate our contributions of (1) a method of creating specific images with minimal human effort, and (2) a combined algorithm for automatically building an image library with semantic annotations from any photo collection.
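The abstract's final step, compositing the retrieved photographs "using graph-cut optimization," can be illustrated in miniature. The sketch below is not the authors' formulation: it assumes a 1-D strip of pixels and two candidate source strips, and finds the minimum-cost seam between them as a min cut (computed with a hand-rolled Edmonds-Karp max-flow), the same idea that graph-cut compositing applies over a 2-D pixel grid.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a dense capacity matrix; returns the flow
    value and the set of nodes reachable from s in the residual graph
    (the source side of a minimum cut)."""
    n = len(cap)
    res = [row[:] for row in cap]
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if res[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        # Find the bottleneck capacity along the path, then push flow.
        bottleneck, v = float('inf'), t
        while v != s:
            bottleneck = min(bottleneck, res[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            u = parent[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
        flow += bottleneck
    # Source side of the min cut: nodes still reachable in the residual graph.
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if res[u][v] > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    return flow, reach

def seam_labels(a, b):
    """Label each pixel 'A' or 'B' so the A-to-B transition falls where the
    two strips agree most, i.e. a minimum-cost seam via one min cut."""
    n = len(a)
    INF = 10 ** 9
    s, t = n, n + 1                      # source = "take A", sink = "take B"
    cap = [[0] * (n + 2) for _ in range(n + 2)]
    cap[s][0] = INF                      # leftmost pixel forced to come from A
    cap[n - 1][t] = INF                  # rightmost pixel forced to come from B
    for i in range(n - 1):
        # Smoothness term: cost of placing the seam between pixels i and i+1.
        w = abs(a[i] - b[i]) + abs(a[i + 1] - b[i + 1])
        cap[i][i + 1] = w
        cap[i + 1][i] = w
    _, reach = max_flow(cap, s, t)
    return ['A' if i in reach else 'B' for i in range(n)]

# Two strips that agree in the middle: the cut lands there for free.
a = [10, 10, 12, 50, 90]
b = [80, 40, 12, 50, 52]
print(seam_labels(a, b))  # → ['A', 'A', 'A', 'B', 'B']
```

The full method labels every pixel of a 2-D composite over many source photographs (a multi-label problem solved with iterated binary cuts), but the energy being minimized has the same shape: hard constraints from the user's query plus a smoothness cost that keeps seams in regions where the sources agree.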
Title: Semantic photo synthesis (art. no. 64920X)
Event: Conference on Human Vision and Electronic Imaging XII
Location: San Jose, CA
Dates: 2007-01-29 to 2007-02-01
Keywords: graph cuts, image
UCL classification: UCL > School of BEAMS > Faculty of Engineering Science > Computer Science