Background modeling for detecting move-then-stop arbitrary-long time video objects

Xiaodong Cai, Falah Ali, E. Stipidis

Research output: Chapter in Book/Conference proceeding with ISSN or ISBN › Conference contribution with ISSN or ISBN › peer-review

Abstract

Obtaining a dynamically updated background reference image is an important and challenging task for video applications that use background subtraction. This paper proposes a novel algorithm for dynamic video background reconstruction that enables detection of move-then-stop video objects over arbitrarily long times. In addition to the Adaptive Mixture Gaussian Background Model (AMGBM), the proposed algorithm uses the locations of objects detected in the previous background subtraction step as feedback control for the current step: the background at a given pixel position is not updated until a signal indicating that the object has started moving is received. The algorithm maintains an update counter that is incremented only for pixels not occluded by an object, and the update takes place after background subtraction, once the positions of all objects are known. Experimental results show that the proposed algorithm outperforms the existing AMGBM by adding the ability to detect move-then-stop video objects, even when an object stops for an arbitrarily long time.
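The feedback idea in the abstract — freeze the background update at pixels covered by a detected object so a stopped object is never absorbed into the background — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper uses an adaptive mixture-of-Gaussians model with an update counter, while here a single running-average background and a simple difference threshold stand in for it; the function names and the `alpha`/`threshold` parameters are assumptions for illustration.

```python
import numpy as np

def subtract(background, frame, threshold=25):
    """Foreground mask from simple background subtraction (sketch):
    a pixel is foreground if it differs from the background by more
    than `threshold`."""
    return np.abs(frame.astype(np.float64) - background.astype(np.float64)) > threshold

def update_background(background, frame, object_mask, alpha=0.05):
    """Feedback-gated background update (sketch).

    `object_mask` comes from the *previous* subtraction step. Pixels it
    covers are left untouched, so a move-then-stop object stays out of
    the background however long it stands still; uncovered pixels are
    blended toward the current frame (a running-average stand-in for
    the paper's AMGBM update).
    """
    background = background.astype(np.float64)
    frame = frame.astype(np.float64)
    return np.where(object_mask,
                    background,                               # occluded: freeze
                    (1 - alpha) * background + alpha * frame)  # free: blend
```

In a per-frame loop one would call `subtract` first, then pass the resulting mask to `update_background`, so that a stopped object keeps being detected in every subsequent frame instead of fading into the background model.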

Original language: English
Title of host publication: 2009 10th International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS 2009
Pages: 197-200
Number of pages: 4
DOIs
Publication status: Published - 25 Sep 2009
Event: 2009 10th International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS 2009 - London, United Kingdom
Duration: 6 May 2009 - 8 May 2009

Conference

Conference: 2009 10th International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS 2009
Country: United Kingdom
City: London
Period: 6/05/09 - 8/05/09
