Shoichi Matsunaga
2010 – 2019
- 2019
- [c65] Naoki Umeno, Masaru Yamashita, Hiroyuki Takada, Shoichi Matsunaga: Training Data Expansion for Classification between Normal and Abnormal Lung Sounds. APSIPA 2019: 935-938
- 2018
- [c64] Takayuki Kurokawa, Tasuku Miura, Masaru Yamashita, Tomoya Sakai, Shoichi Matsunaga: Emotion-Cluster Classification of Infant Cries Using Sparse Representation. APSIPA 2018: 1875-1878
- [c63] Kimitake Ohkawa, Masaru Yamashita, Shoichi Matsunaga: Classification Between Abnormal and Normal Respiration Through Observation Rate of Heart Sounds Within Lung Sounds. EUSIPCO 2018: 1142-1146
- 2017
- [c62] Masaru Yamashita, Tasuku Miura, Shoichi Matsunaga: Distinction between healthy individuals and patients with confident abnormal respiration. APSIPA 2017: 1144-1147
- 2016
- [c61] Naoki Nakamura, Masaru Yamashita, Shoichi Matsunaga: Detection of patients considering observation frequency of continuous and discontinuous adventitious sounds in lung sounds. EMBC 2016: 3457-3460
- 2015
- [c60] Shunya Umeki, Masaru Yamashita, Shoichi Matsunaga: Classification between normal and abnormal lung sounds using unsupervised subject-adaptation. APSIPA 2015: 213-216
- [c59] Shohei Matsutake, Masaru Yamashita, Shoichi Matsunaga: Abnormal-respiration detection by considering correlation of observation of adventitious sounds. EUSIPCO 2015: 634-638
- 2014
- [c58] Takanori Okubo, Naoki Nakamura, Masaru Yamashita, Shoichi Matsunaga: Classification of healthy subjects and patients with pulmonary emphysema using continuous respiratory sounds. EMBC 2014: 70-73
- [c57] Masaru Yamashita, Masataka Himeshima, Shoichi Matsunaga: Robust classification between normal and abnormal lung sounds using adventitious-sound and heart-sound models. ICASSP 2014: 4418-4422
- 2013
- [c56] Shohei Matsutake, Masaru Yamashita, Shoichi Matsunaga: Discrimination between healthy subjects and patients using lung sounds from multiple auscultation points. ICASSP 2013: 1296-1300
- 2012
- [c55] Kazushige Egashira, Kazuya Kojima, Masaru Yamashita, Katsuya Yamauchi, Shoichi Matsunaga: Expansion of training texts to generate a topic-dependent language model for meeting speech recognition. APSIPA 2012: 1-4
- [c54] Kazuki Honda, Kazuki Kitahara, Shoichi Matsunaga, Masaru Yamashita, Kazuyuki Shinohara: Emotion classification of infant cries with consideration for local and global features. APSIPA 2012: 1-4
- [c53] Masataka Himeshima, Masaru Yamashita, Shoichi Matsunaga, Sueharu Miyahara: Detection of abnormal lung sounds taking into account duration distribution for adventitious sounds. EUSIPCO 2012: 1821-1825
- 2011
- [c52] Masaru Yamashita, Shoichi Matsunaga, Sueharu Miyahara: Discrimination between healthy subjects and patients with pulmonary emphysema by detection of abnormal respiration. ICASSP 2011: 693-696
- [c51] Kazuki Kitahara, Shinzi Michiwiki, Miku Sato, Shoichi Matsunaga, Masaru Yamashita, Kazuyuki Shinohara: Emotion Classification of Infants' Cries Using Duration Ratios of Acoustic Segments. INTERSPEECH 2011: 1573-1576
2000 – 2009
- 2009
- [c50] Shoichi Matsunaga, Katsuya Yamauchi, Masaru Yamashita, Sueharu Miyahara: Classification between normal and abnormal respiratory sounds based on maximum likelihood approach. ICASSP 2009: 517-520
- 2008
- [c49] Shoichi Matsunaga, Masahide Yamaguchi, Katsuya Yamauchi, Masaru Yamashita: Sound source detection using multiple noise models. ICASSP 2008: 2025-2028
- 2007
- [c48] N. Satoh, Katsuya Yamauchi, Shoichi Matsunaga, Masaru Yamashita, R. Nakagawa, Kazuyuki Shinohara: Emotion clustering using the results of subjective opinion tests for emotion recognition in infants' cries. INTERSPEECH 2007: 2229-2232
- 2006
- [j13] Katsutoshi Ohtsuki, Katsuji Bessho, Yoshihiro Matsuo, Shoichi Matsunaga, Yoshihiko Hayashi: Automatic multimedia indexing: combining audio, speech, and visual information to index broadcast news. IEEE Signal Process. Mag. 23(2): 69-78 (2006)
- [c47] Shoichi Matsunaga, S. Sakaguchi, Masaru Yamashita, Sueharu Miyahara, S. Nishitani, Kazuyuki Shinohara: Emotion detection in infants' cries based on a maximum likelihood approach. INTERSPEECH 2006
- 2005
- [j12] Atsunori Ogawa, Yoshikazu Yamaguchi, Shoichi Matsunaga: Children's speech recognition using elementary-school-student speech database. Syst. Comput. Jpn. 36(12): 33-42 (2005)
- [c46] Masahide Yamaguchi, Masaru Yamashita, Shoichi Matsunaga: Spectral cross-correlation features for audio indexing of broadcast news and meetings. INTERSPEECH 2005: 613-616
- 2004
- [c45] Shoichi Matsunaga, Osamu Mizuno, Katsutoshi Ohtsuki, Yoshihiko Hayashi: Audio source segmentation using spectral correlation features for automatic indexing of broadcast news. EUSIPCO 2004: 2103-2106
- [c44] Katsutoshi Ohtsuki, Nobuaki Hiroshima, Yoshihiko Hayashi, Katsuji Bessho, Shoichi Matsunaga: Topic structure extraction for meeting indexing. INTERSPEECH 2004: 305-308
- [c43] Katsutoshi Ohtsuki, Nobuaki Hiroshima, Shoichi Matsunaga, Yoshihiko Hayashi: Multi-pass ASR using vocabulary expansion. INTERSPEECH 2004: 1713-1716
- 2003
- [c42] Shoichi Matsunaga, Atsunori Ogawa, Yoshikazu Yamaguchi, Akihiro Imamura: Non-native English speech recognition using bilingual English lexicon and acoustic models. ICASSP (1) 2003: 340-343
- [c41] Shoichi Matsunaga, Atsunori Ogawa, Yoshikazu Yamaguchi, Akihiro Imamura: Non-native English speech recognition using bilingual English lexicon and acoustic models. ICME 2003: 625-628
- [c40] Shoichi Matsunaga, Atsunori Ogawa, Yoshikazu Yamaguchi, Akihiro Imamura: Speaker adaptation for non-native speakers using bilingual English lexicon and acoustic models. INTERSPEECH 2003: 3113-3116
- [c39] Yoshihiko Hayashi, Katsutoshi Ohtsuki, Katsuji Bessho, Osamu Mizuno, Yoshihiro Matsuo, Shoichi Matsunaga, Minoru Hayashi, Takaaki Hasegawa, Naruhiro Ikeda: Speech-based and video-supported indexing of multimedia broadcast news. SIGIR 2003: 441-442
- 2001
- [c38] Takaaki Hori, Yoshiaki Noda, Shoichi Matsunaga: Improved phoneme-history-dependent search for large-vocabulary continuous-speech recognition. INTERSPEECH 2001: 1809-1813
- 2000
- [j11] Máté Szarvas, Shoichi Matsunaga: Improving Phoneme Classification Performance Using Observation Context-Dependent Segment Models. Int. J. Speech Technol. 3(3-4): 253-262 (2000)
- [c37] Atsunori Ogawa, Yoshiaki Noda, Shoichi Matsunaga: Novel two-pass search strategy using time-asynchronous shortest-first second-pass beam search. INTERSPEECH 2000: 290-293
1990 – 1999
- 1999
- [j10] Masahiro Tonomura, Tetsuo Kosaka, Shoichi Matsunaga: Speaker adaptation using maximum a posteriori probability estimation with data size-dependent parameter smoothing. Syst. Comput. Jpn. 30(11): 59-66 (1999)
- [c36] Shoichi Matsunaga, Yoshiaki Noda, Katsutoshi Ohtsuki, Eiji Doi, Tomio Itoh: A medical rehabilitation diagnoses transcription method that integrates continuous and isolated word recognition. EUROSPEECH 1999: 935-938
- 1998
- [c35] Katsutoshi Ohtsuki, T. Matsutoka, Shoichi Matsunaga, Sadaoki Furui: Topic extraction with multiple topic-words in broadcast-news speech. ICASSP 1998: 329-332
- [c34] Shoichi Matsunaga, Shigeki Sagayama: Two-step generation of variable-word-length language model integrating local and global constraints. ICASSP 1998: 697-700
- [c33] Máté Szarvas, Shoichi Matsunaga: Acoustic observation context modeling in segment based speech recognition. ICSLP 1998
- 1997
- [c32] Shoichi Matsunaga, Shigeki Sagayama: Variable-length language modeling integrating global constraints. EUROSPEECH 1997: 2719-2722
- 1996
- [j9] Tetsuo Kosaka, Shoichi Matsunaga, Shigeki Sagayama: Speaker-independent speech recognition based on tree-structured speaker clustering. Comput. Speech Lang. 10(1): 55-74 (1996)
- [j8] Masahiro Tonomura, Tetsuo Kosaka, Shoichi Matsunaga: Speaker adaptation based on transfer vector field smoothing using maximum a posteriori probability estimation. Comput. Speech Lang. 10(2): 117-132 (1996)
- [c31] Tohru Shimizu, Hirofumi Yamamoto, Hirokazu Masataki, Shoichi Matsunaga, Yoshinori Sagisaka: Spontaneous dialogue speech recognition using cross-word context constrained word graphs. ICASSP 1996: 145-148
- [c30] Shoichi Matsunaga, Hiroyuki Sakamoto: Two-pass strategy for continuous speech recognition with detection and transcription of unknown words. ICASSP 1996: 538-541
- [c29] Jun Ishii, Masahiro Tonomura, Shoichi Matsunaga: Speaker adaptation using tree structured shared-state HMMs. ICSLP 1996: 1149-1152
- [c28] Atsushi Nakamura, Shoichi Matsunaga, Tohru Shimizu, Masahiro Tonomura, Yoshinori Sagisaka: Japanese speech databases for robust speech recognition. ICSLP 1996: 2199-2202
- 1995
- [j7] Ryosuke Isotani, Shoichi Matsunaga, Shigeki Sagayama: Speech Recognition Using Function-Word N-Grams and Content-Word N-Grams. IEICE Trans. Inf. Syst. 78-D(6): 692-697 (1995)
- [j6] Kouichi Yamaguchi, Harald Singer, Shoichi Matsunaga, Shigeki Sagayama: Speaker-Consistent Parsing for Speaker-Independent Continuous Speech Recognition. IEICE Trans. Inf. Syst. 78-D(6): 719-724 (1995)
- [j5] Yasunaga Miyazawa, Jun-ichi Takami, Shigeki Sagayama, Shoichi Matsunaga: Unsupervised Speaker Adaptation Using All-Phoneme Ergodic Hidden Markov Network. IEICE Trans. Inf. Syst. 78-D(8): 1044-1050 (1995)
- [j4] Takeshi Matsumura, Shoichi Matsunaga: Non-uniform unit based HMMs for continuous speech recognition. Speech Commun. 17(3-4): 321-329 (1995)
- [c27] Tetsuo Kosaka, Shoichi Matsunaga, Mikio Kuraoka: Speaker-independent phone modeling based on speaker-dependent HMMs' composition and clustering. ICASSP 1995: 441-444
- [c26] Tohru Shimizu, Seikou Monzen, Harald Singer, Shoichi Matsunaga: Time-synchronous continuous speech recognizer driven by a context-free grammar. ICASSP 1995: 584-587
- [c25] Masahiro Tonomura, Tetsuo Kosaka, Shoichi Matsunaga: Speaker adaptation based on transfer vector field smoothing using maximum a posteriori probability estimation. ICASSP 1995: 688-691
- [c24] Takeshi Matsumura, Shoichi Matsunaga: Non-uniform unit HMMS for speech recognition. EUROSPEECH 1995: 499-502
- [c23] Shoichi Matsunaga, Tetsuo Kosaka, Tohru Shimizu: Speaking-style and speaker adaptation for the recognition of spontaneous dialogue speech. EUROSPEECH 1995: 1135-1138
- [c22] Masahiro Tonomura, Tetsuo Kosaka, Shoichi Matsunaga, Akito Monden: Speaker adaptation fitting training data size and contents. EUROSPEECH 1995: 1147-1150
- [c21] Shoichi Matsunaga, Takeshi Matsumura, Harald Singer: Continuous speech recognition using non-uniform unit based acoustic and language models. EUROSPEECH 1995: 1619-1622
- [c20] Hiroyuki Sakamoto, Shoichi Matsunaga: Detection of unknown words using garbage cluster models for continuous speech recognition. EUROSPEECH 1995: 2103-2106
- 1994
- [j3] Kiyohiro Shikano, Tomokazu Yamada, Takeshi Kawabata, Shoichi Matsunaga, Sadaoki Furui, Toshiyuki Hanazawa: Dictation Machine Based on Japanese Character Source Modeling. Int. J. Pattern Recognit. Artif. Intell. 8(1): 181-196 (1994)
- [c19] Ryosuke Isotani, Shoichi Matsunaga: A stochastic language model for speech recognition integrating local and global constraints. ICASSP (2) 1994: 5-8
- [c18] Harald Singer, Jun-ichi Takami, Shoichi Matsunaga: Non-uniform unit parsing for SSS-LR continuous speech recognition. ICASSP (2) 1994: 149-152
- [c17] Yasunaga Miyazawa, Jun-ichi Takami, Shigeki Sagayama, Shoichi Matsunaga: All-phoneme ergodic hidden Markov network for unsupervised speaker adaptation. ICASSP (1) 1994: 249-252
- [c16] Toshiaki Tsuboi, Shigeru Homma, Shoichi Matsunaga: A speech-to-text transcription system for medical diagnoses. ICSLP 1994: 687-690
- [c15] Kouichi Yamaguchi, Harald Singer, Shoichi Matsunaga, Shigeki Sagayama: Speaker-consistent parsing for speaker-independent continuous speech recognition. ICSLP 1994: 791-794
- [c14] Hiroyuki Sakamoto, Shoichi Matsunaga: Continuous speech recognition using a dialog-conditioned stochastic language model. ICSLP 1994: 811-814
- [c13] Jin'ichi Murakami, Shoichi Matsunaga: A spontaneous speech recognition algorithm using word trigram models and filled-pause procedure. ICSLP 1994: 819-822
- [c12] Tetsuo Kosaka, Shoichi Matsunaga, Shigeki Sagayama: Tree-structured speaker clustering for speaker-independent continuous speech recognition. ICSLP 1994: 1375-1378
- [c11] Ryosuke Isotani, Shoichi Matsunaga: Speech Recognition Using a Stochastic Language Model Integrating Local and Global Constraints. HLT 1994
- 1993
- [c10] Shoichi Matsunaga, Tomokazu Yamada, Kiyohiro Shikano: Dictation system using inductively auto-generated syntax. EUROSPEECH 1993: 2135-2138
- 1992
- [c9] Tomokazu Yamada, Shoichi Matsunaga, Kiyohiro Shikano: Japanese dictation system using character source modeling. ICASSP 1992: 37-40
- [c8] Shoichi Matsunaga, Tomokazu Yamada, Kiyohiro Shikano: Task adaptation in stochastic language models for continuous speech recognition. ICASSP 1992: 165-168
- [c7] Shoichi Matsunaga, Toshiaki Tsuboi, Tomokazu Yamada, Kiyohiro Shikano: Continuous speech recognition for medical diagnoses using a character trigram model. ICSLP 1992: 727-730
- [c6] Sadaoki Furui, Kiyohiro Shikano, Shoichi Matsunaga, Tatsuo Matsuoka, Satoshi Takahashi, Tomokazu Yamada: Recent Topics in Speech Recognition Research at NTT Laboratories. HLT 1992
- 1991
- [c5] Tomokazu Yamada, Toshiyuki Hanazawa, Takeshi Kawabata, Shoichi Matsunaga, Kiyohiro Shikano: Phonetic typewriter based on phoneme source modeling. ICASSP 1991: 169-172
- 1990
- [c4] Shoichi Matsunaga, Shigeki Sagayama, Shigeru Homma, Sadaoki Furui: A continuous speech recognition system based on a two-level grammar approach. ICASSP 1990: 589-592
- [c3] Satoshi Takahashi, Shoichi Matsunaga, Shigeki Sagayama: Isolated word recognition using pitch pattern information. ICSLP 1990: 553-556
- [c2] Shoichi Matsunaga, Shigeki Sagayama: Sentence speech recognition using semantic dependency analysis. ICSLP 1990: 929-932
1980 – 1989
- 1988
- [j2] Shoichi Matsunaga, Masaki Kohda: Reduction of Word and Minimal Phrase Candidates for Speech Recognition Based on Phoneme Recognition. Syst. Comput. Jpn. 19(4): 11-22 (1988)
- [c1] Shoichi Matsunaga, Masaki Kohda: Linguistic processing using a dependency structure grammar for speech recognition and understanding. COLING 1988: 402-407
- 1986
- [j1] Shoichi Matsunaga, Kiyohiro Shikano: Speech recognition based on top-down and bottom-up phoneme recognition. Syst. Comput. Jpn. 17(7): 95-106 (1986)