SIGGRAPH Asia 2017 Posters: Bangkok, Thailand
- Diego Gutierrez, Hui Huang:
SIGGRAPH Asia 2017 Posters, Bangkok, Thailand, November 27 - 30, 2017. ACM 2017, ISBN 978-1-4503-5405-9
Animation
- Seongmin Baek, Myunggyu Kim:
User pose estimation based on multiple depth sensors. 1:1-1:2
- Aihua Mao, Mingle Wang, Yong-Jin Liu, Huamin Wang, Guiqing Li:
Liquid wetting across porous anisotropic textiles. 2:1-2:2
- Sakiho Kato, Tomofumi Narita, Chika Tomiyama, Takashi Ijiri, Hiroya Tanaka:
4D computed tomography measurement for growing plant animation. 3:1-3:2
- Chen-Hui Hu, Ping Chen, Yan-Ting Liu, Wen-Chieh Lin:
Motion sickness simulation based on sensorimotor control. 4:1-4:2
Hardware
- Jung-Woo Chang, Suk-Ju Kang, Min-Woo Seo, Song-Woo Choi, Sang-Lyn Lee, Ho-Chul Lee, Eui Yeol Oh, Jong-Sang Baek:
Real-time temporal quality compensation technique for head mounted displays. 5:1-5:2
- Lifang Wu, Miao Yu, Yisong Gao, Dong-Ming Yan, Ligang Liu:
Multi-DOF 3D printing with visual surveillance. 6:1-6:2
- Shinnosuke Ando, Kazuki Otao, Kazuki Takazawa, Yusuke Tanemura, Yoichi Ochiai:
Aerial image on retroreflective particles. 7:1-7:2
- Tomofumi Kobori, Kazuki Shimose, Sho Onose, Tomoyuki Okamoto, Masao Nakajima, Toru Iwane, Hirotsugu Yamamoto:
Aerial light-field image augmented between you and your mirrored image. 8:1-8:2
- Ayumu Tsuboi, Mamoru Hirota, Junki Sato, Masayuki Yokoyama, Masao Yanagisawa:
A proposal for wearable controller device and finger gesture recognition using surface electromyography. 9:1-9:2
- Makoto Yoda, Hiroki Imamura:
Improvement of a finger-mounted haptic device using surface contact. 10:1-10:2
Imaging and video
- Tatsunori Hirai:
Seamless video scene transition using hierarchical graph cuts. 11:1-11:2
- Yingying Deng, Fan Tang, Weiming Dong, Hanxing Yao, Bao-Gang Hu:
Style-oriented representative paintings selection. 12:1-12:2
- Hsueh-I Chen, Claire Chen, Yung-Yu Chuang:
A deep convolutional neural network for continuous zoom with dual cameras. 13:1-13:2
- Bruno Patrão, Leandro Cruz, Nuno Gonçalves:
An application of a halftone pattern coding in augmented reality. 14:1-14:2
- Bo Jiang, Sijiang Liu, Liping He, Weimin Wu, Hongli Chen, Yunfei Shen:
Subtitle positioning for e-learning videos based on rough gaze estimation and saliency detection. 15:1-15:2
- Jiun-Yu Lee, Ping-Hsuan Han, Ling Tsai, Rih-Ding Peng, Yang-Sheng Chen, Kuan-Wen Chen, Yi-Ping Hung:
Estimating the simulator sickness in immersive virtual reality with optical flow analysis. 16:1-16:2
Interaction
- Kazuki Otao, Takanori Koga:
Mistflow: a fog display for visualization of adaptive shape-changing flow. 17:1-17:2
- Ting-Wen Chin, Yu-Yen Chuang, Yu-Ling Fan, Ye-Ning Jiang, Yi-Ching Kang, Wei-Hsin Kuo, Tzu-Wen To, Hiroki Nishino:
Prototyping digital signage systems with high-low tech interfaces. 18:1-18:2
- Taizhou Chen, Yi-Shiun Wu, Feng Han, Baochuan Yue, Kening Zhu:
DupRobo: an interactive robotic platform for physical block-based autocompletion. 19:1-19:2
- Tao-Jen Yang, Wei-An Chen, Yung-Long Chu, Zi-Xin You, Chien-Hsing Chou:
Tactile Braille learning system to assist visually impaired users to learn Taiwanese Braille. 20:1-20:2
- Santawat Thanyadit, Ting-Chuen Pong:
User interface applications in desktop VR using a mirror metaphor. 21:1-21:2
- Shuo-Ting Chien, Chen-Hui Hu, Cheng-Yang Huang, Yu-Ting Tsai, Wen-Chieh Lin:
Deformation simulation based on model reduction with rigidity sampling. 22:1-22:2
- Ungyeon Yang, Nam-Gyu Kim, Ki-Hong Kim:
Perception adjustment for egocentric moving distance between real space and virtual space with see-closed-type HMD. 23:1-23:2
- Seunghyun Woo, Jia Lee, Changmok Kim:
Dualboard: integrated user interface between typing and handwriting. 24:1-24:2
- Akihiro Matsuura, Yuma Ikawa, Yuka Takahashi, Hiroki Tone:
Spin and roll: convex solids of revolution as playful interface. 25:1-25:2
- Seung-Hwan Choi, Hyun-Jin Kim, Sang-Woong Hwang, Jae-Young Lee:
Natural interaction for media consumption in VR environment. 26:1-26:2
- Barrett Ens, Aaron Quigley, Hui-Shyong Yeo, Pourang Irani, Thammathip Piumsomboon, Mark Billinghurst:
Exploring mixed-scale gesture interaction. 27:1-27:2
Methods and applications
- Sakiko Fujieda, Yuki Morimoto, Kazuo Ohzeki:
An image generation system of delicious food in a manga style. 28:1-28:2
- Naoya Muramatsu, Chun Wei Ooi, Yuta Itoh, Yoichi Ochiai:
DeepHolo: recognizing 3D objects using a binary-weighted computer-generated hologram. 29:1-29:2
- Yu-Ju Tsai, Yu-Xiang Wang, Ming Ouhyoung:
Affordable system for measuring motion-to-photon latency of virtual reality in mobile devices. 30:1-30:2
- Jonathan Dyssel Stets, Yongbin Sun, Wiley Corning, Scott W. Greenwald:
Visualization and labeling of point clouds in virtual reality. 31:1-31:2
- Toshikazu Ohshima, Kenzo Kojima:
Mitsudomoe: ecosystem simulator of virtual creatures in mixed reality petri dish. 32:1-32:2
- Mose Sakashita, Yuta Sato, Ayaka Ebisu, Keisuke Kawahara, Satoshi Hashizume, Naoya Muramatsu, Yoichi Ochiai:
Haptic marionette: wrist control technology combined with electrical muscle stimulation and hanger reflex. 33:1-33:2
- Wanchao Su, Xin Yang, Hongbo Fu:
Sketch2normal: deep networks for normal map generation. 34:1-34:2
- Heesook Shin, Youn-Hee Gil, ChoRong Yu, Hee-Kwon Kim, Jisu Lee, Hyungkeun Jee:
Improved and accessible e-book reader application for visually impaired people. 35:1-35:2
Modeling
- Caigui Jiang, Renjie Chen:
Polyhedral meshes with concave faces. 36:1-36:2
- Kengo Tanaka, Taisuke Ohshima, Yoichi Ochiai:
Spring-pen: reproduction of any softness with the 3D printed spring. 37:1-37:2
- Jung-Jae Yu, Chang-Joon Park:
Bidirectional pyramid-based PMVS with automatic sky masking. 38:1-38:2
- Yilan Chen, Wenlong Meng, Shi-Qing Xin, Hongbo Fu:
Smartsweep: context-aware modeling on a single image. 39:1-39:2
Multimedia
- Rahul Upadhyay, Ajay Surendranath:
A fast and efficient content aware downscaling based image compression method for mobile devices. 40:1-40:2
- Baoquan Zhao, Shujin Lin, Xin Qi, Zhiquan Zhang, Xiaonan Luo, Ruomei Wang:
Automatic generation of visual-textual web video thumbnail. 41:1-41:2
- Jun-Ho Choi, Jong-Seok Lee:
Aesthetic temporal and spatial editing of casual videos. 42:1-42:2
- Xingjia Pan, Juntao Ye, Fan Tang, Weiming Dong, Feiyue Huang, Xiaopeng Zhang:
Content-based measure of image set diversity. 43:1-43:2
Rendering
- Stefan Seibert, Stefan Radicke:
Spatial multisampling and multipass occlusion testing for screen space shadows. 44:1-44:2
- Jie Guo, Yanwen Guo, Jingui Pan:
Importance sampling measured BRDFs based on second order spherical moment. 45:1-45:2
- Riku Iwasaki, Yuta Sato, Ippei Suzuki, Atsushi Shinoda, Kenta Yamamoto, Kohei Ogawa, Yoichi Ochiai:
Silk fabricator: using silkworms as 3D printers. 46:1-46:2
- Yuna Omae, Tokiichiro Takahashi:
Eyeshine rendering: a real time rendering method for realistic animal eyes. 47:1-47:2
- Masaaki Sato, Masataka Imura:
Method for quantitative evaluation of the realism of CG images using deep learning. 48:1-48:2
Virtual environments
- Seunghyun Woo, Dong-Seon Chang, Daeyun An, Dong Jin Hyun, Christian Wallraven, Manfred Dangelmaier:
Emotion induction in virtual environments: a novel paradigm using immersive scenarios in driving simulators. 49:1-49:2
- Yuki Morikubo, Naoki Hashimoto:
Marker-less real-time tracking of texture-less 3D objects from a monocular image. 50:1-50:2
- JoungHuem Kwon, YoungEun Kim, Sang-Hun Nam:
A spatial user interface design using accordion metaphor for VR systems. 51:1-51:2
- Vy Dang Ha Thanh, Ondris Pui, Martin Constable:
Room VR: a VR therapy game for children who fear the dark. 52:1-52:2
- Chi-Yang Lee, Hsuan-Ming Chang, Chun-Heng Lin, Ming-Han Tsai, Wen-Chieh Lin, Pei-Hsien Hsu, I-Chen Lin, Yu-Shuen Wang, Jung-Hong Chuang:
VR lighting design. 53:1-53:2
- Yi-Shan Lan, Shih-Wei Sun, Kai-Lung Hua, Wen-Huang Cheng:
O-Displaying: an orientation-based augmented reality display on a smart glass with a user tracking from a depth camera. 54:1-54:2
- Chia-Hung Tsou, Ting-Wei Hsu, Chun-Heng Lin, Ming-Han Tsai, Pei-Hsien Hsu, I-Chen Lin, Yu-Shuen Wang, Wen-Chieh Lin, Jung-Hong Chuang:
Immersive VR environment for architectural design education. 55:1-55:2
Visualization
- Gaurav Patekar, Karan Dudeja:
Data Jalebi bot. 56:1-56:2
- Keiichi Zempo, Tomoki Kurahashi, Koichi Mizutani, Naoto Wakatsuki:
Speech balloon system using single-channel microphone array on see-through head-mounted display. 57:1-57:2