Yuta Sugiura
Person information
- affiliation: Keio University, Yokohama, Japan
2020 – today
- 2024
- [j17] Jiakun Yu, Supun Kuruppu, Biyon Fernando, Praneeth Bimsara Perera, Yuta Sugiura, Sriram Subramanian, Anusha Withana: IrOnTex: Using Ironable 3D Printed Objects to Fabricate and Prototype Customizable Interactive Textiles. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 8(3): 138:1-138:26 (2024)
- [j16] Yukina Sato, Takashi Amesaka, Takumi Yamamoto, Hiroki Watanabe, Yuta Sugiura: Exploring User-Defined Gestures as Input for Hearables and Recognizing Ear-Level Gestures with IMUs. Proc. ACM Hum. Comput. Interact. 8(MHCI): 1-23 (2024)
- [j15] Chengshuo Xia, Tian Min, Yuta Sugiura: AudioMove: Applying the Spatial Audio to Multi-Directional Limb Exercise Guidance. Proc. ACM Hum. Comput. Interact. 8(MHCI): 1-26 (2024)
- [c132] Yurina Mizuho, Yohei Kawasaki, Takashi Amesaka, Yuta Sugiura: EarAuthCam: Personal Identification and Authentication Method Using Ear Images Acquired with a Camera-Equipped Hearable Device. AHs 2024: 119-130
- [c131] Takumi Yamamoto, Biyon Fernando, Takashi Amesaka, Anusha Withana, Yuta Sugiura: Creating viewpoint-dependent display on Edible Cookies. AHs 2024: 286-289
- [c130] Takuro Watanabe, Eriku Yamada, Koji Fujita, Yuta Sugiura: Draw4CM: Detecting Cervical Myelopathy via Hand Drawings Captured by Mobile Devices. MobileHCI (Companion) 2024: 16:1-16:7
- [c129] Hiyori Tsuji, Takumi Yamamoto, Sora Yamaji, Maiko Kobayashi, Kyoshiro Sasaki, Noriko Aso, Yuta Sugiura: Smartphone-Based Teaching System for Neonate Soothing Motions. SII 2024: 178-183
- [c128] Sarii Yamamoto, Kaori Ikematsu, Kunihiro Kato, Yuta Sugiura: Pinch Force Measurement Using a Geomagnetic Sensor. SII 2024: 284-287
- [c127] Yuto Ueda, Anusha Withana, Yuta Sugiura: Tactile Presentation of Orchestral Conductor's Motion Trajectory. SII 2024: 546-553
- [c126] Naoharu Sawada, Takumi Yamamoto, Yuta Sugiura: Converting Tatamis into Touch Sensors by Measuring Capacitance. SII 2024: 554-558
- [c125] Shunta Suzuki, Takashi Amesaka, Hiroki Watanabe, Buntarou Shizuki, Yuta Sugiura: EarHover: Mid-Air Gesture Recognition for Hearables Using Sound Leakage Signals. UIST 2024: 129:1-129:13
- [i6] Fengzhou Liang, Tian Min, Yuta Sugiura: Vsens Reality: Blending the Virtual Sensors into XR. CoRR abs/2409.11419 (2024)
- [i5] Yuri Suzuki, Kaho Kato, Naomi Furui, Daisuke Sakamoto, Yuta Sugiura: Exploring Gestural Interaction with a Cushion Interface for Smart Home Control. CoRR abs/2410.04730 (2024)
- [i4] Yohei Kawasaki, Yuta Sugiura: Guidance of the Center of Pressure Using Haptic Presentation. CoRR abs/2410.04732 (2024)
- [i3] Naoto Takayanagi, Naoji Matsuhisa, Yuki Hashimoto, Yuta Sugiura: A Stretchable Electrostatic Tactile Surface. CoRR abs/2410.04768 (2024)
- [i2] Takumi Yamamoto, Rin Yoshimura, Yuta Sugiura: EnchantedClothes: Visual and Tactile Feedback with an Abdomen-Attached Robot through Clothes. CoRR abs/2411.05102 (2024)
- [i1] Yurina Mizuho, Yuta Sugiura: A Comparison of Violin Bowing Pressure and Position among Expert Players and Beginners. CoRR abs/2411.05126 (2024)
- 2023
- [j14] Takumi Yamamoto, Yuta Sugiura: Turning carpets into multi-image switchable displays. Comput. Graph. 111: 190-198 (2023)
- [j13] Takuro Watanabe, Chengshuo Xia, Koji Fujita, Yuta Sugiura: Screening for Carpal Tunnel Syndrome Using Daily Behavior on Mobile Devices. Computer 56(9): 62-70 (2023)
- [j12] Chengshuo Xia, Yuta Sugiura: Virtual Sensors With 3D Digital Human Motion for Interactive Simulation. Computer 56(12): 42-54 (2023)
- [j11] Tian Min, Chengshuo Xia, Takumi Yamamoto, Yuta Sugiura: Seeing the Wind: An Interactive Mist Interface for Airflow Input. Proc. ACM Hum. Comput. Interact. 7(ISS): 398-419 (2023)
- [c124] Naoharu Sawada, Takumi Yamamoto, Yuta Sugiura: A Virtual Window Using Curtains and Image Projection. APMAR 2023
- [c123] Takumi Yamamoto, Ryohei Baba, Yuta Sugiura: Augmented Sports of Badminton by Changing Opening Status of Shuttle's Feathers. APMAR 2023
- [c122] Sarii Yamamoto, Fei Gu, Kaori Ikematsu, Kunihiro Kato, Yuta Sugiura: Maintenance-Free Smart Hand Dynamometer. EMBC 2023: 1-5
- [c121] Yurina Mizuho, Riku Kitamura, Yuta Sugiura: Estimation of Violin Bow Pressure Using Photo-Reflective Sensors. ICMI 2023: 216-223
- [c120] Riku Kitamura, Takumi Yamamoto, Yuta Sugiura: TouchLog: Finger Micro Gesture Recognition Using Photo-Reflective Sensors. ISWC 2023: 92-97
- [c119] Takashi Amesaka, Hiroki Watanabe, Masanori Sugimoto, Yuta Sugiura, Buntarou Shizuki: User Authentication Method for Hearables Using Sound Leakage Signals. ISWC 2023: 119-123
- [c118] Takumi Yamamoto, Katsutoshi Masai, Anusha Withana, Yuta Sugiura: Masktrap: Designing and Identifying Gestures to Transform Mask Strap into an Input Interface. IUI 2023: 762-775
- [c117] Yohei Kawasaki, Yuta Sugiura: Identification and Authentication using Clavicles. SICE 2023: 1141-1145
- [c116] Tian Min, Chengshuo Xia, Yuta Sugiura: Assisting the Multi-directional Limb Motion Exercise with Spatial Audio and Interactive Feedback. ISS Companion 2023: 53-56
- 2022
- [j10] Koji Fujita, Kana Matsuo, Takafumi Koyama, Kurando Utagawa, Shingo Morishita, Yuta Sugiura: Development and testing of a new application for measuring motion at the cervical spine. BMC Medical Imaging 22(1): 193 (2022)
- [j9] Takumi Yamamoto, Yuta Sugiura: A robotic system for images on carpet surface. Graph. Vis. Comput. 6: 200045 (2022)
- [j8] Chengshuo Xia, Xinrui Fang, Riku Arakawa, Yuta Sugiura: VoLearn: A Cross-Modal Operable Motion-Learning System Combined with Virtual Avatar and Auditory Feedback. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6(2): 81:1-81:26 (2022)
- [c115] Xiang Zhang, Kaori Ikematsu, Kunihiro Kato, Yuta Sugiura: ReflecTouch: Detecting Grasp Posture of Smartphone Using Corneal Reflection Images. CHI 2022: 289:1-289:8
- [c114] Fei Gu, Chengshuo Xia, Yuta Sugiura: Augmenting the Boxing Game with Smartphone IMU-based Classification System on Waist. CW 2022: 165-166
- [c113] Yuto Ueda, Yuta Sugiura: Evaluation of Trajectory Presentation of Conducting Motions Using Tactile Sensation for the Visually Impaired. ICAT-EGVE (Posters and Demos) 2022: 5-6
- [c112] Motoyasu Masui, Yoshinari Takegawa, Yutaka Tokuda, Yuta Sugiura, Katsutoshi Masai, Keiji Hirata: High-Speed Thermochromism Control Method Integrating Water Cooling Circuits and Electric Heating Circuits Printed with Conductive Silver Nanoparticle Ink. HCI (2) 2022: 66-80
- [c111] Takumi Yamamoto, Yuta Sugiura: Demonstration of Multi-image Switchable Visual Displays Using Carpets. ISMAR Adjunct 2022: 893-894
- [c110] Yuto Ueda, Yuta Sugiura: Demonstration of Trajectory Presentation of Conducting Motions Using Tactile Sensation for Visually Impaired. ISMAR Adjunct 2022: 901-902
- [c109] Chengshuo Xia, Tsubasa Maruyama, Haruki Toda, Mitsunori Tada, Koji Fujita, Yuta Sugiura: Knee Osteoarthritis Classification System Examination on Wearable Daily-Use IMU Layout. ISWC 2022: 74-78
- [c108] Chengshuo Xia, Yuta Sugiura: Virtual IMU Data Augmentation by Spring-Joint Model for Motion Exercises Recognition without Using Real Data. ISWC 2022: 79-83
- [c107] Yohei Kawasaki, Yuta Sugiura: Personal Identification and Authentication Using Blink with Smart Glasses. SICE 2022: 1251-1256
- [c106] Xiang Zhang, Kaori Ikematsu, Kunihiro Kato, Yuta Sugiura: Evaluation of Grasp Posture Detection Method using Corneal Reflection Images through a Crowdsourced Experiment. ISS Companion 2022: 9-13
- [c105] Ryota Matsui, Takuya Ibara, Kazuya Tsukamoto, Takafumi Koyama, Koji Fujita, Yuta Sugiura: Video Analysis of Hand Gestures for Distinguishing Patients with Carpal Tunnel Syndrome. ISS Companion 2022: 27-31
- [c104] Kana Matsuo, Koji Fujita, Takafumi Koyama, Shingo Morishita, Yuta Sugiura: Cervical Spine Range of Motion Measurement Utilizing Image Analysis. VISIGRAPP (4: VISAPP) 2022: 861-867
- 2021
- [j7] Chengshuo Xia, Yuta Sugiura: Wearable Accelerometer Layout Optimization for Activity Recognition Based on Swarm Intelligence and User Preference. IEEE Access 9: 166906-166919 (2021)
- [j6] Chengshuo Xia, Yuta Sugiura: Optimizing Sensor Position with Virtual Sensors in Human Activity Recognition System Design. Sensors 21(20): 6893 (2021)
- [c103] Xinrui Fang, Chengshuo Xia, Yuta Sugiura: FacialPen: Using Facial Detection to Augment Pen-Based Interaction. AsianCHI@CHI 2021: 1-8
- [c102] Miyu Fujii, Kaho Kato, Chengshuo Xia, Yuta Sugiura: Personal Identification using Gait Data on Slipper-device with Accelerometer. AsianCHI@CHI 2021: 74-79
- [c101] Chengshuo Xia, Yuta Sugiura: From Virtual to Real World: Applying Animation to Design the Activity Recognition System. CHI Extended Abstracts 2021: 216:1-216:6
- [c100] Kureha Noguchi, Yoshinari Takegawa, Yutaka Tokuda, Yuta Sugiura, Katsutoshi Masai, Keiji Hirata: Study of Interviewee's Impression Made by Interviewer Wearing Digital Full-face Mask Display During Recruitment Interview. HAI 2021: 323-327
- [c99] Motoyasu Masui, Yoshinari Takegawa, Nonoka Nitta, Yutaka Tokuda, Yuta Sugiura, Katsutoshi Masai, Keiji Hirata: PerformEyebrow: Design and Implementation of an Artificial Eyebrow Device Enabling Augmented Facial Expression. HCI (3) 2021: 584-597
- [c98] Ryota Matsui, Takafumi Koyama, Koji Fujita, Hideo Saito, Yuta Sugiura: Video-Based Hand Tracking for Screening Cervical Myelopathy. ISVC (2) 2021: 3-14
- [c97] Kaho Kato, Chengshuo Xia, Yuta Sugiura: Exercise Recognition System using Facial Image Information from a Mobile Device. LifeTech 2021: 268-272
- [c96] Chengshuo Xia, Xinrui Fang, Yuta Sugiura: VoLearn: An Operable Motor Learning System with Auditory Feedback. UIST (Adjunct Volume) 2021: 103-105
- 2020
- [c95] Akino Umezawa, Yoshinari Takegawa, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto, Yutaka Tokuda, Diego Martínez Plasencia, Sriram Subramanian, Masafumi Takahashi, Hiroaki Taka, Keiji Hirata: e2-MaskZ: a Mask-type Display with Facial Expression Identification using Embedded Photo Reflective Sensors. AHs 2020: 36:1-36:3
- [c94] Yoshinari Takegawa, Yutaka Tokuda, Akino Umezawa, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto, Diego Martínez Plasencia, Sriram Subramanian, Keiji Hirata: Digital Full-Face Mask Display with Expression Recognition using Embedded Photo Reflective Sensor Arrays. ISMAR 2020: 101-108
- [c93] Katsutoshi Masai, Kai Kunze, Daisuke Sakamoto, Yuta Sugiura, Maki Sugimoto: Face Commands - User-Defined Facial Gestures for Smart Glasses. ISMAR 2020: 374-386
- [c92] Chengshuo Xia, Yuta Sugiura: Wearable Accelerometer Optimal Positions for Human Motion Recognition. LifeTech 2020: 19-20
- [c91] Souma Sumitomo, Yuta Sugiura: VR Music Game Considering Range of Arm Motion. LifeTech 2020: 59-62
- [c90] Masaru Watanabe, Yuta Sugiura, Hideo Saito, Takafumi Koyama, Koji Fujita: Detection of cervical myelopathy with Leap Motion Sensor by random forests. LifeTech 2020: 214-216
- [c89] Yuri Suzuki, Kaho Kato, Naomi Furui, Daisuke Sakamoto, Yuta Sugiura: Cushion Interface for Smart Home Control. TEI 2020: 467-472
- [c88] Atsuya Munakata, Yuta Sugiura: Prediction of Impulsive Input on Gamepad Using Force-Sensitive Resistor. TEI 2020: 589-595
2010 – 2019
- 2019
- [c87] Yuki Ishikawa, Ryo Hachiuma, Naoto Ienaga, Wakaba Kuno, Yuta Sugiura, Hideo Saito: Semantic Segmentation of 3D Point Cloud to Virtually Manipulate Real Living Space. APMAR 2019: 1-7
- [c86] Ayane Saito, Wakaba Kuno, Wataru Kawai, Natsuki Miyata, Yuta Sugiura: Estimation of Fingertip Contact Force by Measuring Skin Deformation and Posture with Photo-reflective Sensors. AH 2019: 2:1-2:6
- [c85] Takumi Kobayashi, Yuta Sugiura, Hideo Saito, Yuji Uema: Automatic Eyeglasses Replacement for a 3D Virtual Try-on System. AH 2019: 30:1-30:4
- [c84] Fumihiko Nakamura, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto: Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display. ICAT-EGVE 2019: 9-16
- [c83] Ayane Saito, Wataru Kawai, Yuta Sugiura: Software to Support Layout and Data Collection for Machine-Learning-Based Real-World Sensors. HCI (34) 2019: 198-205
- [c82] Kaho Kato, Naoto Ienaga, Yuta Sugiura: Motion Estimation of Plush Toys Through Detachable Acceleration Sensor Module and Machine Learning. HCI (34) 2019: 279-286
- [c81] Nagisa Matsumoto, Chihiro Suzuki, Koji Fujita, Yuta Sugiura: A Training System for Swallowing Ability by Visualizing the Throat Position. HCI (17) 2019: 501-511
- [c80] Naoto Ienaga, Yuta Sugiura, Hideo Saito, Koji Fujita: Self-assessment Application of Flexion and Extension. LifeTech 2019: 150-152
- [c79] Madoka Toriumi, Yuta Sugiura, Koji Fujita: An Application for Wrist Rehabilitation Using Smartphones. MobileHCI 2019: 65:1-65:6
- [c78] Masaaki Murakami, Kosuke Kikui, Katsuhiro Suzuki, Fumihiko Nakamura, Masaaki Fukuoka, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto: AffectiveHMD: facial expression recognition in head mounted display using embedded photo reflective sensors. SIGGRAPH Emerging Technologies 2019: 7:1-7:2
- 2018
- [c77] Naoto Ienaga, Wataru Kawai, Koji Fujita, Natsuki Miyata, Yuta Sugiura, Hideo Saito: A Thumb Tip Wearable Device Consisting of Multiple Cameras to Measure Thumb Posture. ACCV Workshops 2018: 31-38
- [c76] Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto: FaceRubbing: Input Technique by Rubbing Face using Optical Sensors on Smart Eyewear for Facial Expression Recognition. AH 2018: 23:1-23:5
- [c75] Kosuke Kikui, Yuta Itoh, Makoto Yamada, Yuta Sugiura, Maki Sugimoto: Intra-/inter-user adaptation framework for wearable gesture sensing device. UbiComp 2018: 21-24
- [c74] Katsutoshi Masai, Kai Kunze, Yuta Sugiura, Maki Sugimoto: Mapping Natural Facial Expressions Using Unsupervised Learning and Optical Sensors on Smart Eyewear. UbiComp/ISWC Adjunct 2018: 158-161
- [c73] Kei Saito, Katsutoshi Masai, Yuta Sugiura, Toshitaka Kimura, Maki Sugimoto: Development of a Virtual Environment for Motion Analysis of Tennis Service Returns. MMSports@MM 2018: 59-66
- [c72] Kentaro Yagi, Kunihiro Hasegawa, Yuta Sugiura, Hideo Saito: Estimation of Runners' Number of Steps, Stride Length and Speed Transition from Video of a 100-Meter Race. MMSports@MM 2018: 87-95
- [c71] Konomi Inaba, Akihiko Murai, Yuta Sugiura: Center of pressure estimation and gait pattern recognition using shoes with photo-reflective sensors. OZCHI 2018: 224-228
- [c70] Madoka Toriumi, Takuro Watanabe, Koji Fujita, Akimoto Nimura, Yuta Sugiura: Rehabilitation support system for patients with carpal tunnel syndrome using smartphone. OZCHI 2018: 532-534
- [c69] Nao Asano, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto: 3D facial geometry analysis and estimation using embedded optical sensors on smart eyewear. SIGGRAPH Posters 2018: 45:1-45:2
- [c68] Yuta Sugiura, Toby Long Hin Chong, Wataru Kawai, Bruce H. Thomas: Public/private interactive wearable projection display. VRCAI 2018: 10:1-10:6
- [c67] Yuta Sugiura, Hikaru Ibayashi, Toby Long Hin Chong, Daisuke Sakamoto, Natsuki Miyata, Mitsunori Tada, Takashi Okuma, Takeshi Kurata, Takashi Shinmura, Masaaki Mochimaru, Takeo Igarashi: An asymmetric collaborative system for architectural-scale space design. VRCAI 2018: 21:1-21:6
- [e1] Tony Huang, Mai Otsuki, Myriam Servières, Arindam Dey, Yuta Sugiura, Domna Banakou: International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, ICAT-EGVE 2018, Posters and Demos, Limassol, Cyprus, November 7-9, 2018. Eurographics Association 2018, ISBN 978-3-03868-076-5 [contents]
- 2017
- [j5] Katsutoshi Masai, Kai Kunze, Yuta Sugiura, Masa Ogata, Masahiko Inami, Maki Sugimoto: Evaluation of Facial Expression Recognition by a Smart Eyewear for Facial Direction Changes, Repeatability, and Positional Drift. ACM Trans. Interact. Intell. Syst. 7(4): 15:1-15:23 (2017)
- [c66] Arashi Shimazaki, Yuta Sugiura, Dan Mikami, Toshitaka Kimura, Maki Sugimoto: MuscleVR: detecting muscle shape deformation using a full body suit. AH 2017: 15
- [c65] Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto: ACTUATE racket: designing intervention of user's performance through controlling angle of racket surface. AH 2017: 31
- [c64] Nao Asano, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto: Facial Performance Capture by Embedded Photo Reflective Sensors on A Smart Eyewear. ICAT-EGVE 2017: 21-28
- [c63] Wakaba Kuno, Yuta Sugiura, Nao Asano, Wataru Kawai, Maki Sugimoto: Estimation of 3D Finger Postures with wearable device measuring Skin Deformation on Back of Hand. ICAT-EGVE (Posters and Demos) 2017: 41-42
- [c62] Wakaba Kuno, Yuta Sugiura, Nao Asano, Wataru Kawai, Maki Sugimoto: 3D Reconstruction of Hand Postures by Measuring Skin Deformation on Back Hand. ICAT-EGVE 2017: 221-228
- [c61] Katsutoshi Masai, Yuta Sugiura, Michita Imai, Maki Sugimoto: RacketAvatar that Expresses Intention of Avatar and User. HRI (Companion) 2017: 44
- [c60] Naomi Furui, Katsuhiro Suzuki, Yuta Sugiura, Maki Sugimoto: SofTouch: Turning Soft Objects into Touch Interfaces Using Detachable Photo Sensor Modules. ICEC 2017: 47-58
- [c59] Yan Zhao, Yuta Sugiura, Mitsunori Tada, Jun Mitani: InsTangible: A Tangible User Interface Combining Pop-up Cards with Conductive Ink Printing. ICEC 2017: 72-80
- [c58] Takuro Watanabe, Yuta Sugiura, Natsuki Miyata, Koji Fujita, Akimoto Nimura, Maki Sugimoto: DanceDanceThumb: Tablet App for Rehabilitation for Carpal Tunnel Syndrome. ICEC 2017: 473-476
- [c57] Koki Yamashita, Yuta Sugiura, Takashi Kikuchi, Maki Sugimoto: DecoTouch: Turning the Forehead as Input Surface for Head Mounted Display. ICEC 2017: 481-484
- [c56] Takashi Kikuchi, Yuta Sugiura, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas: EarTouch: turning the ear into an input surface. MobileHCI 2017: 27:1-27:6
- [c55] Yuta Sugiura, Koki Toda, Takashi Kikuchi, Takayuki Hoshi, Youichi Kamiyama, Takeo Igarashi, Masahiko Inami: Grassffiti: Drawing Method to Produce Large-scale Pictures on Conventional Grass Fields. TEI 2017: 413-417
- [c54] Shigo Ko, Yuta Itoh, Yuta Sugiura, Takayuki Hoshi, Maki Sugimoto: Spatial Calibration of Airborne Ultrasound Tactile Display and Projector-Camera System Using Fur Material. TEI 2017: 583-588
- [c53] Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto: Recognition and mapping of facial expressions to avatar by embedded photo reflective sensors in head mounted display. VR 2017: 177-185
- [c52] Koki Yamashita, Takashi Kikuchi, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas, Yuta Sugiura: CheekInput: turning your cheek into an input surface by embedded optical sensors on a head-mounted display. VRST 2017: 19:1-19:8
- [c51] Naoaki Kashiwagi, Yuta Sugiura, Natsuki Miyata, Mitsunori Tada, Maki Sugimoto, Hideo Saito: Measuring Grasp Posture Using an Embedded Camera. WACV Workshops 2017: 42-47
- 2016
- [j4] Shigeo Yoshida, Takumi Shirokura, Yuta Sugiura, Daisuke Sakamoto, Tetsuo Ono, Masahiko Inami, Takeo Igarashi: RoboJockey: Designing an Entertainment Experience with Robots. IEEE Computer Graphics and Applications 36(1): 62-69 (2016)
- [j3] Yuta Sugiura, Takeo Igarashi, Masahiko Inami: Cuddly User Interfaces. Computer 49(7): 14-19 (2016)
- [j2] Daisuke Sakamoto, Yuta Sugiura, Masahiko Inami, Takeo Igarashi: Graphical Instruction for Home Robots. Computer 49(7): 20-25 (2016)
- [j1] Hayeon Jeong, Daniel Saakes, Uichin Lee, Augusto Esteves, Eduardo Velloso, Andreas Bulling, Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Kai Kunze, Masahiko Inami, Maki Sugimoto, Anura Rathnayake, Tilak Dias: Demo hour. Interactions 23(1): 8-11 (2016)
- [c50] Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto: Analysis of Multiple Users' Experience in Daily Life Using Wearable Device for Facial Expression Recognition. ACE 2016: 52:1-52:5
- [c49] Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Kai Kunze, Masahiko Inami, Maki Sugimoto: Facial Expression Recognition in Daily Life by Embedded Photo Reflective Sensors on Smart Eyewear. IUI 2016: 317-326
- [c48] Natsuki Miyata, Takehiro Honoki, Yusuke Maeda, Yui Endo, Mitsunori Tada, Yuta Sugiura: Wrap & Sense: Grasp Capture by a Band Sensor. UIST (Adjunct Volume) 2016: 87-89
- [c47] Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto: Facial Expression Mapping inside Head Mounted Display by Embedded Optical Sensors. UIST (Adjunct Volume) 2016: 91-92
- 2015
- [c46] Masaharu Hirose, Karin Iwazaki, Kozue Nojiri, Minato Takeda, Yuta Sugiura, Masahiko Inami: Gravitamine spice: a system that changes the perception of eating through virtual weight sensation. AH 2015: 33-40
- [c45] Masaharu Hirose, Yuta Sugiura, Kouta Minamizawa, Masahiko Inami: PukuPuCam: a recording system from third-person view in scuba diving. AH 2015: 161-162
- [c44] Jun Kato, Hiromi Nakamura, Yuta Sugiura, Taku Hachisu, Daisuke Sakamoto, Koji Yatani, Yoshifumi Kitamura: Japanese HCI Symposium: Emerging Japanese HCI Research Collection. CHI Extended Abstracts 2015: 2321-2324
- [c43] Eria Chita, Yuta Sugiura, Sunao Hashimoto, Kai Kunze, Masahiko Inami, Masa Ogata: Silhouette interactions: using the hand shadow as interaction modality. UbiComp/ISWC Adjunct 2015: 69-72
- [c42] Katsutoshi Masai, Yuta Sugiura, Katsuhiro Suzuki, Sho Shimamura, Kai Kunze, Masa Ogata, Masahiko Inami, Maki Sugimoto: AffectiveWear: towards recognizing affect in real life. UbiComp/ISWC Adjunct 2015: 357-360
- [c41] Vilhelmina Sokol, Yuta Sugiura, Kai Kunze, Masahiko Inami: Enhanced tradition: combining tech and traditional clothing. UbiComp/ISWC Adjunct 2015: 591-594
- [c40] Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Katsuhiro Suzuki, Fumihiko Nakamura, Sho Shimamura, Kai Kunze, Masahiko Inami, Maki Sugimoto: AffectiveWear: toward recognizing facial expression. SIGGRAPH Emerging Technologies 2015: 4:1
- [c39] Hikaru Ibayashi, Yuta Sugiura, Daisuke Sakamoto, Natsuki Miyata, Mitsunori Tada, Takashi Okuma, Takeshi Kurata, Masaaki Mochimaru, Takeo Igarashi: Dollhouse VR: a multi-view, multi-user collaborative design workspace with VR technology. SIGGRAPH Asia Emerging Technologies 2015: 8:1-8:2
- [c38] Masa Ogata, Yuta Sugiura, Michita Imai: FlashTouch: touchscreen communication combining light and touch. SIGGRAPH Emerging Technologies 2015: 11:1
- [c37] Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Katsuhiro Suzuki, Fumihiko Nakamura, Sho Shimamura, Kai Kunze, Masahiko Inami, Maki Sugimoto: AffectiveWear: toward recognizing facial expression. SIGGRAPH Posters 2015: 16:1
- [c36] Hikaru Ibayashi, Yuta Sugiura, Daisuke Sakamoto, Natsuki Miyata, Mitsunori Tada, Takashi Okuma, Takeshi Kurata, Masaaki Mochimaru, Takeo Igarashi: Dollhouse VR: a multi-view, multi-user collaborative design workspace with VR technology. SIGGRAPH Asia Posters 2015: 24:1
- 2014
- [c35] Kozue Nojiri, Suzanne Low, Koki Toda, Yuta Sugiura, Yoichi Kamiyama, Masahiko Inami: Present information through afterimage with eyes closed. AH 2014: 3:1-3:4
- [c34] Shunsuke Koyama, Yuta Sugiura, Masa Ogata, Anusha I. Withana, Yuji Uema, Makoto Honda, Sayaka Yoshizu, Chihiro Sannomiya, Kazunari Nawa, Masahiko Inami: Multi-touch steering wheel for in-car tertiary applications using infrared sensors. AH 2014: 5:1-5:4
- [c33] Suzanne Low, Yuta Sugiura, Dixon Lo, Masahiko Inami: Pressure detection on mobile phone by camera and flash. AH 2014: 11:1-11:4
- [c32] Masa Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, Michita Imai: Augmenting a Wearable Display with Skin Surface as an Expanded Input Area. HCI (9) 2014: 606-614
- [c31] Yuta Sugiura, Koki Toda, Takayuki Hoshi, Masahiko Inami, Takeo Igarashi: Graffiti fur: turning your carpet into a computer display. SIGGRAPH Emerging Technologies 2014: 9:1
- [c30] Yuta Sugiura, Koki Toda, Takayuki Hoshi, Masahiko Inami, Takeo Igarashi: Graffiti fur: turning your carpet into a computer display. SIGGRAPH Studio 2014: 36:1
- [c29] Kevin Fan, Yuta Sugiura, Kouta Minamizawa, Sohei Wakisaka, Masahiko Inami, Naotaka Fujii: Ubiquitous substitutional reality: re-experiencing the past in immersion. SIGGRAPH Posters 2014: 55:1
- [c28] Yuta Sugiura, Koki Toda, Takayuki Hoshi, Youichi Kamiyama, Takeo Igarashi, Masahiko Inami: Graffiti fur: turning your carpet into a computer display. UIST 2014: 149-156
- 2013
- [c27] Suzanne Low, Yuta Sugiura, Kevin Fan, Masahiko Inami: Cuddly: Enchant Your Soft Objects with a Mobile Phone. Advances in Computer Entertainment 2013: 138-151
- [c26] Tsubasa Yamamoto, Yuta Sugiura, Suzanne Low, Koki Toda, Kouta Minamizawa, Maki Sugimoto, Masahiko Inami: PukaPuCam: Enhance Travel Logging Experience through Third-Person View Camera Attached to Balloons. Advances in Computer Entertainment 2013: 428-439
- [c25] Yasutoshi Makino, Yuta Sugiura, Masayasu Ogata, Masahiko Inami: Tangential force sensing system on forearm. AH 2013: 29-34
- [c24] Masayasu Ogata, Yuta Sugiura, Hirotaka Osawa, Michita Imai: FlashTouch: data communication through touchscreens. CHI 2013: 2321-2324
- [c23] Kevin Fan, Hideyuki Izumi, Yuta Sugiura, Kouta Minamizawa, Sohei Wakisaka, Masahiko Inami, Naotaka Fujii, Susumu Tachi: Reality jockey: lifting the barrier between alternate realities through audio and haptic feedback. CHI 2013: 2557-2566
- [c22] Suzanne Low, Yuta Sugiura, Kevin Fan, Masahiko Inami: Cuddly: enchant your soft objects with a mobile phone. SIGGRAPH ASIA Emerging Technologies 2013: 5:1-5:2
- [c21] Masayasu Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, Michita Imai: SenSkin: adapting skin as a soft interface. UIST 2013: 539-544
- 2012
- [c20] Masa Ogata, Yuta Sugiura, Hirotaka Osawa, Michita Imai: Pygmy: a ring-shaped robotic device that promotes the presence of an agent on human hand. APCHI 2012: 85-92
- [c19] Yuta Sugiura, Calista Lee, Masayasu Ogata, Anusha I. Withana, Yasutoshi Makino, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi: PINOKY: a ring that animates your plush toys. CHI 2012: 725-734
- [c18] Masayasu Ogata, Yuta Sugiura, Hirotaka Osawa, Michita Imai: Pygmy: a ring-like anthropomorphic device that animates the human hand. CHI Extended Abstracts 2012: 1003-1006
- [c17] Yuta Sugiura, Calista Lee, Masayasu Ogata, Anusha I. Withana, Yasutoshi Makino, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi: PINOKY: a ring-like device that gives movement to any plush toy. CHI Extended Abstracts 2012: 1443-1444
- [c16] Shigeo Yoshida, Daisuke Sakamoto, Yuta Sugiura, Masahiko Inami, Takeo Igarashi: RoboJockey: robotic dance entertainment for all. SIGGRAPH Asia Emerging Technologies 2012: 19:1-19:2
- [c15] Masayasu Ogata, Yuta Sugiura, Hirotaka Osawa, Michita Imai: iRing: intelligent ring using infrared reflection. UIST 2012: 131-136
- [c14] Yuta Sugiura, Masahiko Inami, Takeo Igarashi: A thin stretchable interface for tangential force measurement. UIST 2012: 529-536
- 2011
- [c13] Wataru Yoshizaki, Yuta Sugiura, Albert C. Chiou, Sunao Hashimoto, Masahiko Inami, Takeo Igarashi, Yoshiaki Akazawa, Katsuaki Kawachi, Satoshi Kagami, Masaaki Mochimaru: An actuated physical puppet as an input device for controlling a digital manikin. CHI 2011: 637-646
- [c12] Kakehi Gota, Yuta Sugiura, Anusha I. Withana, Calista Lee, Naohisa Nagaya, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, Takeo Igarashi: FuwaFuwa: detecting shape deformation of soft objects using directional photoreflectivity measurement. SIGGRAPH Emerging Technologies 2011: 5
- [c11] Anusha I. Withana, Yuta Sugiura, Charith Lasantha Fernando, Yuji Uema, Yasutoshi Makino, Maki Sugimoto, Masahiko Inami: ImpAct: haptic stylus for shallow depth surface interaction. SIGGRAPH Asia Emerging Technologies 2011: 10:1
- [c10] Yuta Sugiura, Calista Lee, Anusha I. Withana, Yasutoshi Makino, Masahiko Inami, Takeo Igarashi: PINOKY: a ring that animates your plush toys. SIGGRAPH Asia Emerging Technologies 2011: 14:1
- [c9] Yuta Sugiura, Anusha I. Withana, Teruki Shinohara, Masayasu Ogata, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi: Cooky: a cooperative cooking robot system. SIGGRAPH Asia Emerging Technologies 2011: 17:1
- [c8] Yuta Sugiura, Kakehi Gota, Anusha I. Withana, Calista Lee, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, Takeo Igarashi: Detecting shape deformation of soft objects using directional photoreflectivity measurement. UIST 2011: 509-516
- 2010
- [c7] Takumi Shirokura, Daisuke Sakamoto, Yuta Sugiura, Tetsuo Ono, Masahiko Inami, Takeo Igarashi: RoboJockey: real-time, simultaneous, and continuous creation of robot actions for everyone. Advances in Computer Entertainment Technology 2010: 53-56
- [c6] Yuta Sugiura, Daisuke Sakamoto, Anusha I. Withana, Masahiko Inami, Takeo Igarashi: Cooking with robots: designing a household system working in open environments. CHI 2010: 2427-2430
- [c5] Takumi Shirokura, Daisuke Sakamoto, Yuta Sugiura, Tetsuo Ono, Masahiko Inami, Takeo Igarashi: RoboJockey: real-time, simultaneous, and continuous creation of robot actions for everyone. UIST (Adjunct Volume) 2010: 399-400
2000 – 2009
- 2009
- [c4] Yuta Sugiura, Takeo Igarashi, Hiroki Takahashi, Tabare Akim Gowon, Charith Lasantha Fernando, Maki Sugimoto, Masahiko Inami: Graphical instruction for a garment folding robot. SIGGRAPH Emerging Technologies 2009: 12:1
- [c3] Masahiro Furukawa, Naohisa Nagaya, Takuji Tokiwa, Masahiko Inami, Atsushi Okoshi, Maki Sugimoto, Yuta Sugiura, Yuji Uema: Fur display. SIGGRAPH ASIA Art Gallery & Emerging Technologies 2009: 70
- [c2] Charith Lasantha Fernando, Takeo Igarashi, Masahiko Inami, Maki Sugimoto, Yuta Sugiura, Anusha Indrajith Withana, Kakehi Gota: An operating method for a bipedal walking robot for entertainment. SIGGRAPH ASIA Art Gallery & Emerging Technologies 2009: 79
- [c1] Fumitaka Ozaki, Takuo Imbe, Shin Kiyasu, Yuta Sugiura, Yusuke Mizukami, Shuichi Ishibashi, Maki Sugimoto, Masahiko Inami, Adrian David Cheok, Naohito Okude, Masahiko Inakage: MYGLOBE: cognitive map as communication media. SIGGRAPH Posters 2009
last updated on 2025-01-20 23:00 CET by the dblp team
all metadata released as open data under CC0 1.0 license