Temporal archetypal analysis for action segmentation

E. Fotiadou, Y. Panagakis, M. Pantic
2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017). ieeexplore.ieee.org
Unsupervised learning of invariant representations that efficiently describe high-dimensional time series has several applications in dynamic visual data analysis. The problem becomes more challenging when dealing with multiple time series arising from different modalities. A prominent example of this multimodal setting is human motion, which can be represented by multimodal time series of pixel intensities, depth maps, and motion capture data. Here, we study, for the first time, the problem of unsupervised learning of temporally and modality-invariant informative representations, referred to as archetypes, from multiple time series originating from different modalities. To this end, a novel method, coined temporal archetypal analysis, is proposed. The performance of the proposed method is assessed through experiments in unsupervised action segmentation. Experimental results on three real-world datasets, using single-modal and multimodal visual representations, indicate the robustness and effectiveness of the proposed method, which outperforms the compared state-of-the-art methods, in most cases by a large margin.
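As background for the decomposition the abstract builds on, the sketch below illustrates classical (non-temporal) archetypal analysis: data X is factorized as X ≈ X B A, where the columns of B and A lie on the probability simplex, so each archetype is a convex combination of observations and each observation is a convex combination of archetypes. This is a minimal NumPy illustration of the standard formulation solved by alternating projected-gradient steps; the function names, initialization, and fixed iteration count are illustrative assumptions, and the sketch does not include the temporal or cross-modal constraints that the paper's temporal archetypal analysis introduces.

```python
import numpy as np


def project_columns_to_simplex(V):
    """Project each column of V onto the probability simplex (sort-based projection)."""
    m, n = V.shape
    U = -np.sort(-V, axis=0)                       # each column sorted in descending order
    css = np.cumsum(U, axis=0) - 1.0
    idx = np.arange(1, m + 1).reshape(-1, 1)
    rho = np.count_nonzero(U - css / idx > 0, axis=0)
    theta = css[rho - 1, np.arange(n)] / rho
    return np.maximum(V - theta, 0.0)


def archetypal_analysis(X, k, n_iter=300, seed=0):
    """Classical archetypal analysis: X ≈ X @ B @ A with column-stochastic B (n x k), A (k x n)."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    B = project_columns_to_simplex(rng.random((n, k)))
    A = project_columns_to_simplex(rng.random((k, n)))
    sq_norm_X = np.linalg.norm(X, 2) ** 2          # spectral norm squared, used for step sizes
    for _ in range(n_iter):
        Z = X @ B                                  # current archetypes, shape (d, k)
        # Projected-gradient step on the coefficients A with a 1/L step size.
        grad_A = Z.T @ (Z @ A - X)
        L_A = np.linalg.norm(Z, 2) ** 2 + 1e-12
        A = project_columns_to_simplex(A - grad_A / L_A)
        # Projected-gradient step on the archetype weights B.
        grad_B = X.T @ (X @ B @ A - X) @ A.T
        L_B = sq_norm_X * (np.linalg.norm(A, 2) ** 2) + 1e-12
        B = project_columns_to_simplex(B - grad_B / L_B)
    return B, A


if __name__ == "__main__":
    # Toy example: 60-dimensional frames of a synthetic sequence, k = 4 archetypes.
    rng = np.random.default_rng(1)
    X = rng.random((60, 200))
    B, A = archetypal_analysis(X, k=4)
    rel_err = np.linalg.norm(X - X @ B @ A) / np.linalg.norm(X)
    print("relative reconstruction error:", round(float(rel_err), 3))
```

In a temporal or multimodal setting such as the one described in the abstract, additional structure (e.g., coupling the coefficient matrices across modalities and encouraging smoothness over time) would be imposed on these factors; the sketch above deliberately omits such constraints.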