15th FG 2020: Buenos Aires, Argentina
- 15th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2020, Buenos Aires, Argentina, November 16-20, 2020. IEEE 2020, ISBN 978-1-7281-3079-8
- Juan P. Wachs, Sergio Escalera, Jeffrey F. Cohn, Albert Ali Salah, Arun Ross: Message from the General and Program Chairs FG 2020. xxi
- Lin Xi, Weihai Chen, Changchen Zhao, Xingming Wu, Jianhua Wang: Image Enhancement for Remote Photoplethysmography in a Low-Light Environment. 1-7
- Adrian Bulat, Jean Kossaifi, Georgios Tzimiropoulos, Maja Pantic: Toward fast and accurate human pose estimation via soft-gated skip connections. 8-15
- Mohammad Rami Koujan, Michail Christos Doukas, Anastasios Roussos, Stefanos Zafeiriou: Head2Head: Video-based Neural Head Synthesis. 16-23
- Mohammad Rami Koujan, Luma Alharbawee, Giorgos A. Giannakakis, Nicolas Pugeault, Anastasios Roussos: Real-time Facial Expression Recognition "In The Wild" by Disentangling 3D Expression from Identity. 24-31
- Chun Pong Lau, Hossein Souri, Rama Chellappa: ATFaceGAN: Single Face Image Restoration and Recognition from Atmospheric Turbulence. 32-39
- Naveen Madapana, Glebys T. Gonzalez, Juan P. Wachs: Gesture Agreement Assessment Using Description Vectors. 40-44
- Kunjian Li, Qijun Zhao: IF-GAN: Generative Adversarial Network for Identity Preserving Facial Image Inpainting and Frontalization. 45-52
- Enrique Sanchez, Michel F. Valstar: A recurrent cycle consistency loss for progressive face-to-face synthesis. 53-60
- Yiming Lin, Jie Shen, Shiyang Cheng, Maja Pantic: FT-RCNN: Real-time Visual Face Tracking with Region-based Convolutional Neural Networks. 61-68
- Kritaphat Songsri-in, Stefanos Zafeiriou: Face Video Generation from a Single Image and Landmarks. 69-76
- Okan Köpüklü, Thomas Ledwon, Yao Rong, Neslihan Kose, Gerhard Rigoll: DriverMHG: A Multi-Modal Dataset for Dynamic Recognition of Driver Micro Hand Gestures and a Real-Time Recognition Framework. 77-84
- Prithviraj Dhar, Ankan Bansal, Carlos Domingo Castillo, Joshua Gleason, P. Jonathon Phillips, Rama Chellappa: How are attributes expressed in face DCNNs? 85-92
- Joaquim Comas, Decky Aspandi, Xavier Binefa: End-to-end Facial and Physiological Model for Affective Computing and Applications. 93-100
- Jiayi Wang, Franziska Mueller, Florian Bernard, Christian Theobalt: Generative Model-Based Loss to the Rescue: A Method to Overcome Annotation Errors for Depth-Based Hand Pose Estimation. 101-108
- Vivek Sharma, Makarand Tapaswi, M. Saquib Sarfraz, Rainer Stiefelhagen: Clustering based Contrastive Learning for Improving Face Representations. 109-116
- Yicheng Zhong, Yuru Pei, Peixin Li, Yuke Guo, Gengyu Ma, Meng Liu, Wei Bai, Wenhai Wu, Hongbin Zha: Face Denoising and 3D Reconstruction from A Single Depth Image. 117-124
- Matthew J. Vowels, Necati Cihan Camgöz, Richard Bowden: Gated Variational AutoEncoders: Incorporating Weak Supervision to Encourage Disentanglement. 125-132
- Jiaxin Zhou, Takashi Komuro: Recognizing Gestures from Videos using a Network with Two-branch Structure and Additional Motion Cues. 133-137
- Konstantinos Papadopoulos, Enjie Ghorbel, Oyebade K. Oyedotun, Djamila Aouada, Björn E. Ottersten: DeepVI: A Novel Framework for Learning Deep View-Invariant Human Action Representations using a Single RGB Camera. 138-145
- Edouard Yvinec, Arnaud Dapogny, Kévin Bailly: DeeSCo: Deep heterogeneous ensemble with Stochastic Combinatory loss for gaze estimation. 146-152
- Sacha Bernheim, Estèphe Arnaud, Arnaud Dapogny, Kévin Bailly: MoDuL: Deep Modal and Dual Landmark-wise Gated Network for Facial Expression Recognition. 153-159
- Lichen Wang, Bin Sun, Joseph P. Robinson, Taotao Jing, Yun Fu: EV-Action: Electromyography-Vision Multi-Modal Action Dataset. 160-167
- Ruikui Wang, Shishi Qiao, Ruiping Wang, Shiguang Shan, Xilin Chen: Hybrid Video and Image Hashing for Robust Face Retrieval. 168-175
- Hanxiang Hao, David Güera, János Horváth, Amy R. Reibman, Edward J. Delp: Robustness Analysis of Face Obscuration. 176-183
- Bin Sun, Jun Li, Yun Fu: Block Mobilenet: Align Large-Pose Faces with <1MB Model Size. 184-191
- Arnaud Dapogny, Kevin Bailly, Matthieu Cord: Deep Entwined Learning Head Pose and Face Alignment Inside an Attentional Cascade with Doubly-Conditional fusion. 192-198
- Junjie Zhang, Yuntao Liu, Rongchun Li, Yong Dou: End-to-end Spatial Attention Network with Feature Mimicking for Head Detection. 199-206
- Mohammad Rafayet Ali, Javier Hernandez, Earl Ray Dorsey, Ehsan Hoque, Daniel McDuff: Spatio-Temporal Attention and Magnification for Classification of Parkinson's Disease from Videos Collected via the Internet. 207-214
- Changzhen Li, Jie Zhang, Shiguang Shan, Xilin Chen: PAS-Net: Pose-based and Appearance-based Spatiotemporal Networks Fusion for Action Recognition. 215-221
- Alptekin Orbay, Lale Akarun: Neural Sign Language Translation by Learning Tokenization. 222-228
- Huiyuan Yang, Taoyue Wang, Lijun Yin: Set Operation Aided Network for Action Units Detection. 229-235
- Saurabh Hinduja, Shaun J. Canavan, Lijun Yin: Recognizing Perceived Emotions from Facial Expressions. 236-240
- Zhixin Shu, Duygu Ceylan, Kalyan Sunkavalli, Eli Shechtman, Sunil Hadap, Dimitris Samaras: Learning Monocular Face Reconstruction using Multi-View Supervision. 241-248
- Yu Yin, Songyao Jiang, Joseph P. Robinson, Yun Fu: Dual-Attention GAN for Large-Pose Face Frontalization. 249-256
- Richard T. Marriott, Sami Romdhani, Liming Chen: Taking Control of Intra-class Variation in Conditional GANs Under Weak Supervision. 257-264
- Panagiotis Tzirakis, Athanasios Papaioannou, Alexandros Lattas, Michail Tarasiou, Björn W. Schuller, Stefanos Zafeiriou: Synthesising 3D Facial Motion from "In-the-Wild" Speech. 265-272
- Mingshuang Luo, Shuang Yang, Shiguang Shan, Xilin Chen: Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading. 273-280
- Stefan Hörmann, Abdul Moiz, Martin Knoche, Gerhard Rigoll: Attention Fusion for Audio-Visual Person Verification Using Multi-Scale Features. 281-285
- S. L. Happy, Antitza Dantcheva, François Brémond: Semi-supervised Emotion Recognition using Inconsistently Annotated Data. 286-293
- ShahRukh Athar, Zhixin Shu, Dimitris Samaras: Self-supervised Deformation Modeling for Facial Expression Editing. 294-301
- Muzammil Behzad, Nhat Vo, Xiaobai Li, Guoying Zhao: Landmarks-assisted Collaborative Deep Framework for Automatic 4D Facial Expression Recognition. 302-306
- Matheus Alves Diniz, William Robson Schwartz: Face Attributes as Cues for Deep Face Recognition Understanding. 307-313
- Song Chen, Weixin Li, Hongyu Yang, Di Huang, Yunhong Wang: 3D Face Mask Anti-spoofing via Deep Fusion of Dynamic Texture and Shape Clues. 314-321
- Nikhil Churamani, Hatice Gunes: CLIFER: Continual Learning with Imagination for Facial Expression Recognition. 322-328
- Ankit Kumar Sharma, Hassan Foroosh: Slim-CNN: A Light-Weight CNN for Face Attribute Prediction. 329-335
- Xiaotian Li, Xiang Zhang, Huiyuan Yang, Wenna Duan, Weiying Dai, Lijun Yin: An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis. 336-343
- Ziheng Zhang, Weizhe Lin, Mingyu Liu, Marwa Mahmoud: Multimodal Deep Learning Framework for Mental Disorder Recognition. 344-350
- Ettore Maria Celozzi, Luca Ciabini, Luca Cultrera, Pietro Pala, Stefano Berretti, Mohamed Daoudi, Alberto Del Bimbo: Modelling the Statistics of Cyclic Activities by Trajectory Analysis on the Manifold of Positive-Semi-Definite Matrices. 351-355
- Yuanhang Zhang, Shuang Yang, Jingyun Xiao, Shiguang Shan, Xilin Chen: Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition. 356-363
- Jingyun Xiao, Shuang Yang, Yuanhang Zhang, Shiguang Shan, Xilin Chen: Deformation Flow Based Two-Stream Network for Lip Reading. 364-370
- Weizhe Lin, Indigo Orton, Mingyu Liu, Marwa Mahmoud: Automatic Detection of Self-Adaptors for Psychological Distress. 371-378
- Dario Dotti, Esam Ghaleb, Stylianos Asteriadis: Temporal Triplet Mining for Personality Recognition. 379-386
- Francisca Pessanha, Krista McLennan, Marwa Mahmoud: Towards automatic monitoring of disease progression in sheep: A hierarchical model for sheep facial expressions analysis from video. 387-393
- Peter Thompson, Aphrodite Galata: Hand tracking from monocular RGB with dense semantic labels. 394-401
- Eimear O' Sullivan, Stefanos Zafeiriou: 3D Landmark Localization in Point Clouds for the Human Ear. 402-406
- István Sárándi, Timm Linder, Kai O. Arras, Bastian Leibe: Metric-Scale Truncation-Robust Heatmaps for 3D Human Pose Estimation. 407-414
- Md Sirajus Salekin, Ghada Zamzmi, Dmitry B. Goldgof, Rangachar Kasturi, Thao Ho, Yu Sun: First Investigation into the Use of Deep Learning for Continuous Assessment of Neonatal Postoperative Pain. 415-419
- Xing Zhao, Shuang Yang, Shiguang Shan, Xilin Chen: Mutual Information Maximization for Effective Lip Reading. 420-427
- Ali N. Salman, Carlos Busso: Dynamic versus Static Facial Expressions in the Presence of Speech. 436-443
- Josep Famadas, Meysam Madadi, Cristina Palmero, Sergio Escalera: Generative Video Face Reenactment by AUs and Gaze Regularization. 444-451
- Torsten Wörtwein, Louis-Philippe Morency: Simple and Effective Approaches for Uncertainty Prediction in Facial Action Unit Intensity Regression. 452-456
- Edgar Rojas-Muñoz, Juan P. Wachs: Beyond MAGIC: Matching Collaborative Gestures using an optimization-based Approach. 457-464
- Nagashri N. Lakshminarayana, Srirangaraj Setlur, Venu Govindaraju: Learning Guided Attention Masks for Facial Action Unit Recognition. 465-472
- R. Gnana Praveen, Eric Granger, Patrick Cardinal: Deep Weakly Supervised Domain Adaptation for Pain Localization in Videos. 473-480
- Si-Qi Liu, Pong C. Yuen: A General Remote Photoplethysmography Estimator with Spatiotemporal Convolutional Network. 481-488
- S. L. Happy, Antitza Dantcheva, Abhijit Das, François Brémond, Radia Zeghari, Philippe Robert: Apathy Classification by Exploiting Task Relatedness. 489-494
- Blaz Bortolato, Marija Ivanovska, Peter Rot, Janez Krizaj, Philipp Terhörst, Naser Damer, Peter Peer, Vitomir Struc: Learning privacy-enhancing face representations through feature disentanglement. 495-502
- João Loureiro, Paulo Lobato Correia: Using a Skeleton Gait Energy Image for Pathological Gait Classification. 503-507
- R. James Cotton: Kinematic Tracking of Rehabilitation Patients With Markerless Pose Estimation Fused with Wearable Inertial Sensors. 508-514
- Yaohui Wang, Antitza Dantcheva: A video is worth more than 1000 lies. Comparing 3DCNN approaches for detecting deepfakes. 515-519
- Dimitrios I. Kosmopoulos, Iasonas Oikonomidis, Constantinos Constantinopoulos, Nikolaos Arvanitis, Klimis Antzakas, Aristeidis Bifis, Georgios Lydakis, Anastasios Roussos, Antonis A. Argyros: Towards a visual Sign Language dataset for home care services. 520-524
- Simone Palazzo, Francesco Rundo, Sebastiano Battiato, Daniela Giordano, Concetto Spampinato: Visual Saliency Detection guided by Neural Signals. 525-531
- Zeyi Lin, Wei Zhang, Xiaoming Deng, Cuixia Ma, Hongan Wang: Image-based Pose Representation for Action Recognition and Hand Gesture Recognition. 532-539
- Diego L. Guarin, Aidan Dempster, Andrea Bandini, Yana Yunusova, Babak Taati: Estimation of Orofacial Kinematics in Parkinson's Disease: Comparison of 2D and 3D Markerless Systems for Motion Tracking. 540-543
- Su Lei, Kalin Stefanov, Jonathan Gratch: Emotion or expressivity? An automated analysis of nonverbal perception in a social dilemma. 544-551
- Laura Schiphorst, Metehan Doyran, Sabine Molenaar, Albert Ali Salah, Sjaak Brinkkemper: Video2Report: A Video Database for Automatic Reporting of Medical Consultancy Sessions. 552-556
- Megan Quarmley, Zhibo Yang, Shahrukh Athar, Gregory J. Zelinsky, Dimitris Samaras, Johanna M. Jarcho: Nonverbal Behavioral Patterns Predict Social Rejection Elicited Aggression. 557-561
- Tempestt J. Neal, Shaun J. Canavan: Mood Versus Identity: Studying the Influence of Affective States on Mobile Biometrics. 562-566
- Vidhathri Kota, Geeta Madhav Gali, Ifeoma Nwogu: A Computational View of the Emotional Regulation of Disgust using Multimodal Sensors. 567-571
- Aviv Elor, Asiiah Song: iSAM: Personalizing an Artificial Intelligence Model for Emotion with Pleasure-Arousal-Dominance in Immersive Virtual Reality. 572-576
- Saurabh Hinduja, Shaun J. Canavan, Gurmeet Kaur: Multimodal Fusion of Physiological Signals and Facial Action Units for Pain Recognition. 577-581
- Sayde King, Mohamed Ebraheem, Khadija Zanna, Tempestt J. Neal: Learning a Privacy-Preserving Global Feature Set for Mood Classification Using Smartphone Activity and Sensor Data. 582-586
- André Roberto Ortoncelli, Luciano Silva, Olga Regina Pereira Bellon, Tiago Mota de Oliveira, Juliana Daga: Summarizing Driving Behavior to Support Driver Stress Analysis. 587-591
- Didan Deng, Zhaokang Chen, Bertram E. Shi: Multitask Emotion Recognition with Incomplete Labels. 592-599
- Felix Kuhnke, Lars Rumberg, Jörn Ostermann: Two-Stream Aural-Visual Affect Analysis in the Wild. 600-605
- Decky Aspandi, Adria Mallol-Ragolta, Björn W. Schuller, Xavier Binefa: Latent-Based Adversarial Neural Networks for Facial Affect Estimations. 606-610
- Sowmya Rasipuram, Junaid Hamid Bhat, Anutosh Maitra: Multi-modal Sequence-to-sequence Model for Continuous Affect Prediction in the Wild Using Deep 3D Features. 611-614
- Hanyu Liu, Jiabei Zeng, Shiguang Shan: Facial Expression Recognition for In-the-wild Videos. 615-618
- Ines Rieger, Jaspar Pahl, Dominik Seuss: Unique Class Group Based Multi-Label Balancing Optimizer for Action Unit Detection. 619-623
- Nhu-Tai Do, Tram-Tran Nguyen-Quynh, Soo-Hyung Kim: Affective Expression Analysis in-the-wild using Multi-Task Temporal Statistical Deep Learning Model. 624-628
- Sowmya Rasipuram, Junaid Hamid Bhat, Anutosh Maitra: Multi-modal Expression Recognition in the Wild Using Sequence Modeling. 629-631
- Yuan-Hang Zhang, Rulin Huang, Jiabei Zeng, Shiguang Shan: M3F: Multi-Modal Continuous Valence-Arousal Estimation in the Wild. 632-636
- Dimitrios Kollias, Attila Schulc, Elnar Hajiyev, Stefanos Zafeiriou: Analysing Affective Behavior in the First ABAW 2020 Competition. 637-643
- Aniket Bera, Tanmay Randhavane, Rohan Prinja, Kyra Kapsaskis, Austin Wang, Kurt Gray, Dinesh Manocha: How are you feeling? Multimodal Emotion Learning for Socially-Assistive Robot Navigation. 644-651
- Pablo V. A. Barros, Nikhil Churamani, Alessandra Sciutti: The FaceChannel: A Light-weight Deep Neural Network for Facial Expression Recognition. 652-656
- Leopoldo A. D. Lusquino Filho, Luiz F. R. Oliveira, Hugo C. C. Carneiro, Gabriel P. Guarisa, Aluizio Lima Filho, Felipe M. G. França, Priscila M. V. Lima: A weightless regression system for predicting multi-modal empathy. 657-661
- Jinsong Liu, Isak Worre Foged, Thomas B. Moeslund: Vision-based Individual Factors Acquisition for Thermal Comfort Assessment in a Built Environment. 662-666
- Anna Esposito, Italia Cirillo, Antonietta Maria Esposito, Leopoldina Fortunati, Gian Luca Foresti, Sergio Escalera, Nikolaos G. Bourbakis: Impairments in decoding facial and vocal emotional expressions in high functioning autistic adults and adolescents. 667-674
- Ajay Vasudevan, Pablo Negri, Bernabé Linares-Barranco, Teresa Serrano-Gotarredona: Introduction and Analysis of an Event-Based Sign Language Dataset. 675-682
- Naveen Madapana, Juan P. Wachs: Feature Selection for Zero-Shot Gesture Recognition. 683-687
- Emely Pujólli da Silva, Paula Dornhofer Paro Costa, Kate Mamhy Oliveira Kumada, José Mario De Martino: SILFA: Sign Language Facial Action Database for the Development of Assistive Technologies for the Deaf. 688-692
- Ciprian A. Corneanu, Meysam Madadi, Sergio Escalera, Aleix M. Martínez: Explainable Early Stopping for Action Unit Recognition. 693-699
- Al Amin Hosain, Panneer Selvam Santhalingam, Parth H. Pathak, Huzefa Rangwala, Jana Kosecká: FineHand: Learning Hand Shapes for American Sign Language Recognition. 700-707
- Araceli Morales, Antonio R. Porras, Liyun Tu, Marius George Linguraru, Gemma Piella, Federico M. Sukno: Spectral Correspondence Framework for Building a 3D Baby Face Model. 708-715
- Anna Esposito, Terry Amorese, Nelson Mauro Maldonato, Alessandro Vinciarelli, María Inés Torres, Sergio Escalera, Gennaro Cordasco: Seniors' ability to decode differently aged facial emotional expressions. 716-722
- Edgar Rojas-Muñoz, Juan P. Wachs: The MAGIC of E-Health: A Gesture-Based Approach to Estimate Understanding and Performance in Remote Ultrasound Tasks. 723-727
- Giorgos A. Giannakakis, Mohammad Rami Koujan, Anastasios Roussos, Kostas Marias: Automatic stress detection evaluating models of facial action units. 728-733
- Li-Wei Zhang, Jingting Li, Su-Jing Wang, Xian-Hua Duan, Wen-Jing Yan, Hai-Yong Xie, Shu-Cheng Huang: Spatio-temporal fusion for Macro- and Micro-expression Spotting in Long Video Sequences. 734-741
- Ying He, Su-Jing Wang, Jingting Li, Moi Hoon Yap: Spotting Macro-and Micro-expression Intervals in Long Video Sequences. 742-748
- Hang Pan, Lun Xie, Zhiliang Wang: Local Bilinear Convolutional Neural Network for Spotting Macro- and Micro-expression Intervals in Long Video Sequences. 749-753
- Zhanling Cui, Yingjun Zhao, Jin Guo, Huibo Du, Jijia Zhang: Typical and Reverse Other-Race Effect on Tibetan Students' Emotional Face Recognition. 754-760
- Liping He, Xunbing Shen, Zhencai Chen, Keding Li, Zhennan Liu, Ruirui Zhuo: The Ability to Recognize Microexpression and Detect Deception in the Elderly. 761-764
- Walied Merghani, Moi Hoon Yap: Adaptive Mask for Region-based Facial Micro-Expression Recognition. 765-770
- Chuin Hong Yap, Connah Kendrick, Moi Hoon Yap: SAMM Long Videos: A Spontaneous Facial Micro- and Macro-Expressions Dataset. 771-776
- Jingting Li, Su-Jing Wang, Moi Hoon Yap, John See, Xiaopeng Hong, Xiaobai Li: MEGC2020 - The Third Facial Micro-Expression Grand Challenge. 777-780
- Sowmya Rasipuram, Bukka Nikhil Sai, Dinesh Babu Jayagopi, Anutosh Maitra: Using Deep 3D Features and an LSTM Based Sequence Model for Automatic Pain Detection in the Wild. 781-785
- Xiaojing Xu, Virginia R. de Sa: Exploring Multidimensional Measurements for Pain Evaluation using Facial Action Units. 786-792
- Hilde I. Hummel, Francisca Pessanha, Albert Ali Salah, Thijs J. P. A. M. van Loon, Remco C. Veltkamp: Automatic Pain Detection on Horse and Donkey Faces. 793-800
- Albert Clapés, Júlio C. S. Jacques Júnior, Carla Morral, Sergio Escalera: ChaLearn LAP 2020 Challenge on Identity-preserved Human Detection: Dataset and Results. 801-808
- Zijian Zhao, Jie Zhang, Shiguang Shan: Noise Robust Hard Example Mining for Human Detection with Efficient Depth-Thermal Fusion. 809-813
- Van Thong Huynh, Hyung-Jeong Yang, Gueesang Lee, Soo-Hyung Kim: Multimodality Pain and related Behaviors Recognition based on Attention Learning. 814-818
- Yi Li, Shreya Ghosh, Jyoti Joshi, Sharon L. Oviatt: LSTM-DNN based Approach for Pain Intensity and Protective Behaviour Prediction. 819-823
- Xinhui Yuan, Marwa Mahmoud: ALANet: Autoencoder-LSTM for pain and protective behaviour detection. 824-828
- Adria Mallol-Ragolta, Shuo Liu, Nicholas Cummins, Björn W. Schuller: A Curriculum Learning Approach for Pain Intensity Recognition from Facial Expressions. 829-833
- Fasih Haider, Pierre Albert, Saturnino Luz: Automatic Recognition of Low-Back Chronic Pain Level and Protective Movement Behaviour using Physical and Muscle Activity Information. 834-838
- Saandeep Aathreya Sidhapur Lakshminarayan, Saurabh Hinduja, Shaun J. Canavan: Three-level Training of Multi-Head Architecture for Pain Detection. 839-843
- Md Taufeeq Uddin, Shaun J. Canavan: Multimodal Multilevel Fusion for Sequential Protective Behavior Detection and Pain Estimation. 844-848
- Joy O. Egede, Siyang Song, Temitayo A. Olugbade, Chongyang Wang, Amanda C. de C. Williams, Hongying Meng, Min S. Hane Aung, Nicholas D. Lane, Michel F. Valstar, Nadia Bianchi-Berthouze: EMOPAIN Challenge 2020: Multimodal Pain Evaluation from Facial and Bodily Expressions. 849-856
- Joseph P. Robinson, Yu Yin, Zaid Khan, Ming Shao, Siyu Xia, Michael Stopa, Samson Timoner, Matthew A. Turk, Rama Chellappa, Yun Fu: Recognizing Families In the Wild (RFIW): The 4th Edition. 857-862
- Stefan Hörmann, Martin Knoche, Gerhard Rigoll: A Multi-Task Comparator Framework for Kinship Verification. 863-867
- Zhipeng Luo, Zhiguang Zhang, Zhenyu Xu, Lixuan Che: Challenge report Recognizing Families In the Wild Data Challenge. 868-871
- Andrei Shadrikov: Achieving Better Kinship Recognition Through Better Baseline. 872-876
- Oualid Laiadi, Abdelmalik Ouamane, Abdelhamid Benakcha, Abdelmalik Taleb-Ahmed, Abdenour Hadid: Multi-view Deep Features for Robust Facial Kinship Verification. 877-881
- Jun Yu, Guochen Xie, Mengyan Li, Xinlong Hao: Retrieval of Family Members Using Siamese Neural Network. 882-886
- Tuan-Duy H. Nguyen, Huu-Nghia H. Nguyen, Hieu Dao: Recognizing Families through Images with Pretrained Encoder. 887-891
- Jun Yu, Mengyan Li, Xinlong Hao, Guochen Xie: Deep Fusion Siamese Network for Automatic Kinship Verification. 892-899
- Zhiyuan Zhang, Miao Yi, Juan Xu, Rong Zhang, Jianping Shen: Two-stage Recognition and Beyond for Compound Facial Emotion Recognition. 900-904
- Zhipeng Luo, Zhiguang Zhang: Challenge report: Compound Emotion challenge. 905-907
- Wei Gou, Jian Li, Mo Sun: Multimodal(Audio, Facial and Gesture) based Emotion Recognition challenge. 908-911
- Carla Viegas: Two Stage Emotion Recognition using Frame-level and Video-level Features. 912-915
- Saurabh Hinduja, Shaun J. Canavan: Real-time Action Unit Intensity Detection. 916
- Jacob Schioppo, Zachary Meyer, Diego Fabiano, Shaun J. Canavan: Sign Language Recognition in Virtual Reality. 917
- Mohammad Rami Koujan, Michail Christos Doukas, Anastasios Roussos, Stefanos Zafeiriou: ReenactNet: Real-time Full Head Reenactment. 918
- Natalia Efremova, Navid Hajimirza, David Bassett, Felipe Thomaz: Understanding consumer attention on mobile devices. 919