Automatic Music Timbre Indexing
REFERENCES

Atkeson, C.G., Moore, A.W., and Schaal, S. (1997). Locally Weighted Learning for Control, Artificial Intelligence Review, Feb., 11(1-5), 75-113.

Balzano, G.J. (1986). What are Musical Pitch and Timbre? Music Perception - an Interdisciplinary Journal, 3, 297-314.

Bregman, A.S. (1990). Auditory Scene Analysis: the Perceptual Organization of Sound, MIT Press.

Cadoz, C. (1985). Timbre et causalite, unpublished paper, Seminar on Timbre, Institut de Recherche et Coordination Acoustique/Musique, Paris, France, April 13-17.

Dziubinski, M., Dalka, P., and Kostek, B. (2005). Estimation of Musical Sound Separation Algorithm Effectiveness Employing Neural Networks, Journal of Intelligent Information Systems, 24(2/3), 133-158.

Eronen, A. and Klapuri, A. (2000). Musical Instrument Recognition Using Cepstral Coefficients and Temporal Features, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Plymouth, MA, 753-756.

Fujinaga, I. and McMillan, K. (2000). Real-time Recognition of Orchestral Instruments, International Computer Music Conference, 141-143.

Gillet, O. and Richard, G. (2005). Drum Loops Retrieval from Spoken Queries, Journal of Intelligent Information Systems, 24(2/3), 159-177.

Herrera, P., Peeters, G., and Dubnov, S. (2003). Automatic Classification of Musical Instrument Sounds, Journal of New Music Research, 32(1), 3-21.

Kaminskyj, I. and Materka, A. (1995). Automatic Source Identification of Monophonic Musical Instrument Sounds, in the IEEE International Conference on Neural Networks, Perth, WA, 1, 189-194.

Kostek, B. and Wieczorkowska, A. (1997). Parametric Representation of Musical Sounds, Archives of Acoustics, Institute of Fundamental Technological Research, Warsaw, Poland, 22(1), 3-26.

le Cessie, S. and van Houwelingen, J.C. (1992). Ridge Estimators in Logistic Regression, Applied Statistics, 41(1), 191-201.

Livescu, K., Glass, J., and Bilmes, J. (2003). Hidden Feature Models for Speech Recognition Using Dynamic Bayesian Networks, in Proc. Eurospeech, Geneva, Switzerland, September, 2529-2532.

Martin, K.D. and Kim, Y.E. (1998). Musical Instrument Identification: A Pattern-Recognition Approach, in the 136th Meeting of the Acoustical Society of America, Norfolk, VA.

Pollard, H.F. and Jansson, E.V. (1982). A Tristimulus Method for the Specification of Musical Timbre, Acustica, 51, 162-171.

Quinlan, J.R. (1993). C4.5: Programs for Machine Learning, Morgan Kaufmann, San Mateo, CA.

Wieczorkowska, A. (1999). Classification of Musical Instrument Sounds Using Decision Trees, in the 8th International Symposium on Sound Engineering and Mastering, ISSEM'99, 225-230.

Wieczorkowska, A., Wroblewski, J., Synak, P., and Slezak, D. (2003). Application of Temporal Descriptors to Musical Instrument Sound, Journal of Intelligent Information Systems: Integrating Artificial Intelligence and Database Technologies, July, 21(1), 71-93.

Zhang, X. and Ras, Z.W. (2006A). Differentiated Harmonic Feature Analysis on Music Information Retrieval for Instrument Recognition, in Proceedings of the IEEE International Conference on Granular Computing, May 10-12, Atlanta, Georgia, 578-581.

Zhang, X. and Ras, Z.W. (2006B). Sound Isolation by Harmonic Peak Partition for Music Instrument Recognition, Special Issue on Knowledge Discovery (Z. Ras and A. Dardzinska, Eds.), Fundamenta Informaticae, IOS Press, 2007, to appear.

Zweig, G. (1998). Speech Recognition with Dynamic Bayesian Networks, Ph.D. dissertation, University of California, Berkeley, California.

ISO/IEC JTC1/SC29/WG11 (2002). MPEG-7 Overview. Available at http://mpeg.telecomitalialab.com/standards/mpeg-7/mpeg-7.htm