• OpenAccess
  • Research on Motion Attention Fusion Model-Based Video Target Detection and Extraction of Global Motion Scene  [CSIP 2013]
  • DOI: 10.4236/jsip.2013.43B006   pp. 30-35
  • Author(s)
  • Long Liu, Boyang Fan, Jing Zhao
  • To address target detection under global motion scenes, this paper proposes a target detection algorithm based on a motion attention fusion model. First, the motion vector field is pre-processed by accumulation and median filtering. Then, an attention fusion model is defined from the temporal and spatial characteristics of the motion vectors and used to detect the moving target. Finally, the edge of the moving target is refined by morphological operations and an edge tracking algorithm. Experimental results on different global motion video sequences show that the proposed algorithm achieves better accuracy and speed than comparable algorithms.
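  • The pipeline summarized above can be sketched as follows. This is an illustrative approximation, not the paper's implementation: the function name, the magnitude threshold standing in for the attention fusion step, and the structuring-element sizes are all assumptions; the median filtering and morphological clean-up correspond to the pre-processing and edge-refinement stages the abstract describes.

```python
import numpy as np
from scipy import ndimage

def detect_moving_target(mv_field, mag_thresh=1.0, filt_size=3):
    """Hedged sketch of the described pipeline: median-filter the motion
    vector field, threshold residual motion magnitude (a simple stand-in
    for the attention fusion model), then clean the mask morphologically.
    `mv_field` is an (H, W, 2) array of per-block motion vectors."""
    # Pre-processing: median filtering suppresses outlier motion vectors.
    vx = ndimage.median_filter(mv_field[..., 0], size=filt_size)
    vy = ndimage.median_filter(mv_field[..., 1], size=filt_size)
    # Motion magnitude serves here as a minimal attention cue.
    mag = np.hypot(vx, vy)
    mask = mag > mag_thresh
    # Morphological closing then opening fills holes and removes speckle,
    # approximating the edge-refinement stage.
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    return mask

# Synthetic example: static background, one coherently moving region,
# and a single outlier vector that the median filter should reject.
field = np.zeros((32, 32, 2))
field[10:20, 10:20] = [3.0, 0.0]   # coherent target motion
field[5, 5] = [10.0, 10.0]         # isolated noise vector
mask = detect_moving_target(field)
```

Because the outlier at (5, 5) is surrounded by zero vectors, the 3×3 median filter removes it before thresholding, while the 10×10 coherent block survives both the threshold and the morphological clean-up.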

  • Keywords: Target Detection; Attention Model; Global Scene
  • References
  • [1] J. Wang and E. Adelson, “Representing Moving Images with Layers,” IEEE Transactions on Image Processing, Vol. 3, No. 5, 1994, pp. 625-638.
    [2] H. G. Musmann, M. Hotter and J. Ostermann, “Object-Oriented Analysis-Synthesis Coding of Moving Images,” Signal Processing: Image Communication, Vol. 1, No. 2, 1989, pp. 117-138.
    [3] N. Diehl, “Object-Oriented Motion Estimation and Segmentation in Image Sequences,” Signal Processing: Image Communication, Vol. 3, No. 1, 1991, pp. 23-56.
    [4] C. Kim and J.-N. Hwang, “Fast and Automatic Video Object Segmentation and Tracking for Content-Based Applications,” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 2, 2002, pp. 122-129.
    [5] C. Stauffer and W. E. L. Grimson, “Adaptive Background Mixture Models for Real-Time Tracking,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, Fort Collins, CO, Jun 1999, pp. 246-252.
    [6] D. Magee, “Tracking Multiple Vehicles Using Foreground, Background and Motion Models,” Image and Vision Computing, Vol. 22, No. 2, 2004, pp. 143-155.
    [7] C. R. Wren, A. Azarbayejani, T. Darrell and A. P. Pentland, “Pfinder: Real-Time Tracking of the Human Body,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, 1997, pp. 780-785.
    [8] I. Haritaoglu, D. Harwood and L. Davis, “W4: Real-Time Surveillance of People and Their Activities,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 8, 2000, pp. 809-830.
    [9] Q. Bin, M. Ghazal and A. Amer, “Robust Global Motion Estimation Oriented to Video Object Segmentation,” IEEE Transactions on Image Processing, Vol. 17, No. 6, 2008, pp. 958-967.
    [10] H. Xu, A. A. Younis and M. R. Kabuka, “Automatic Moving Object Extraction for Content-Based Applications,” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 6, 2004, pp. 796-812.
    [11] L. Itti and C. Koch, “Computational Modeling of Visual Attention,” Nature Reviews Neuroscience, Vol. 2, No. 3, 2001, pp. 193-203.
    [12] L. Itti, C. Koch and E. Niebur, “A Model of Saliency-Based Visual Attention for Rapid Scene Analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 11, 1998, pp. 1254-1259.
    [13] Y. F. Ma and H. J. Zhang, “A Model of Motion Attention for Video Skimming,” IEEE International Conference on Image Processing 2002, Vol. 1, New York, USA, 2002, pp. 129-132.
    [14] M. Guironnet, et al., “Spatio-Temporal Attention Model for Video Content Analysis,” IEEE International Conference on Image Processing, Vol. 3, 2005, pp. 1156-1159.
    [15] J. Zhang, L. Zhou and L. S. Shen, “Regions of Interest Extraction Based on Visual Attention Model and Watershed Segmentation,” IEEE International Conference on Neural Networks & Signal Processing, Zhenjiang, China, Jun 8-10, 2008, pp. 375-378.
    [16] S.-H. Lee, J. Moon and M. Lee, “A Region of Interest Based Image Segmentation Method Using a Biologically Motivated Selective Attention Model,” 2006 International Joint Conference on Neural Networks, Canada, July 16-21, 2006, pp. 1413-1420.
    [17] J. W. Han, “Object Segmentation from Consumer Video: A Unified Framework Based on Visual Attention,” IEEE Transactions on Consumer Electronics, Vol. 55, No. 3, 2009, pp. 1597-1605.
    [18] B. K. P. Horn and B. G. Schunck, “Determining Optical Flow,” Artificial Intelligence, Vol. 17, 1981, pp. 185-203.
