
Coarse-to-fine adaptive masks for appearance matching of occluded scenes

Published in: Machine Vision and Applications

Abstract.

In this paper, we discuss an appearance-matching approach to the difficult problem of interpreting color scenes containing occluded objects. We have explored the use of an iterative, coarse-to-fine sum-squared-error method that uses information from hypothesized occlusion events to perform run-time modification of scene-to-template similarity measures. These adjustments are performed by using a binary mask to adaptively exclude regions of the template image from the squared-error computation. At each iteration, higher-resolution scene data, as well as information derived from the occluding interactions between multiple object hypotheses, are used to adjust these masks. We present results that demonstrate that such a technique is reasonably robust over a large database of color test scenes containing objects at a variety of scales, and that it tolerates minor 3D object rotations and global illumination variations.
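To make the core idea concrete, the sketch below (Python/NumPy, not taken from the paper) shows a masked sum-squared-error measure and one simple, hypothetical mask-update rule: template pixels whose residual exceeds a threshold are hypothesized to be occluded and excluded at the next, finer level. The paper's actual update also exploits occluding interactions between multiple object hypotheses, which this sketch omits; all function names, the pyramid representation, and the threshold rule are illustrative assumptions.

```python
import numpy as np


def masked_sse(scene, template, mask):
    """Sum-squared error restricted to unmasked template pixels.

    scene, template : (H, W, 3) float arrays at the same pyramid level.
    mask            : (H, W) boolean array; False excludes a pixel.
    """
    residual = ((scene - template) ** 2).sum(axis=-1)  # per-pixel error over color channels
    n_valid = int(mask.sum())
    return (residual * mask).sum() / max(n_valid, 1)


def update_mask(scene, template, mask, residual_thresh):
    """Hypothesize occlusion where the per-pixel residual is large and
    exclude those pixels from the next iteration's error computation.
    (Illustrative rule only; the paper additionally uses occluding
    interactions between multiple object hypotheses.)"""
    residual = ((scene - template) ** 2).sum(axis=-1)
    return mask & (residual < residual_thresh)


def coarse_to_fine_match(scene_pyramid, template_pyramid, residual_thresh):
    """Iterate from the coarsest to the finest level, carrying the adaptive
    binary mask forward by nearest-neighbour upsampling.

    Assumes pyramids are ordered coarse to fine and each level is an
    integer multiple of the previous one in size.
    """
    mask = np.ones(template_pyramid[0].shape[:2], dtype=bool)
    score = None
    for scene, template in zip(scene_pyramid, template_pyramid):
        factor = template.shape[0] // mask.shape[0]
        if factor > 1:
            # Upsample the mask to the current resolution.
            mask = np.kron(mask, np.ones((factor, factor), dtype=bool))
        score = masked_sse(scene, template, mask)
        mask = update_mask(scene, template, mask, residual_thresh)
    return score, mask
```

In use, scene_pyramid and template_pyramid would be lists of images ordered coarse to fine (e.g., built by repeated smoothing and downsampling); the returned mask marks the template pixels still hypothesized to be unoccluded at the finest level.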



Received: 21 November 1996 / Accepted: 14 October 1997



Cite this article

Edwards, J., Murase, H. Coarse-to-fine adaptive masks for appearance matching of occluded scenes. Machine Vision and Applications 10, 232–242 (1998). https://doi.org/10.1007/s001380050075

