A Review of Research Methodologies in International Business
International Business Review 15 (2006) 601–617
www.elsevier.com/locate/ibusrev
Abstract
What is common practice in international business (IB) research methodology? To address this
question, we surveyed 1,296 empirical articles published in six leading international business journals
from 1992 to 2003. The study uncovers state-of-the-art approaches in research methodologies in IB in
terms of five major aspects: data collection methods, sample sources including sampled countries and
subjects, sampling methods, sample sizes, and response rates. The results indicate that (1) mail
questionnaire surveys dominate empirical research, (2) 60.9% of the studies use a one-country sample
(88.9% from western countries), (3) 33.7% of the studies are based upon sample frames provided by
third parties, and (4) the median sample size is 180 with an average response rate of 40.1%.
Suggestions and recommendations are also provided to improve the methodological rigor of IB
research.
© 2006 Elsevier Ltd. All rights reserved.
1. Introduction
Corresponding author. Tel.: +852 2784 4644; fax: +852 2788 9146.
E-mail address: [email protected] (Z. Yang).
0969-5931/$ - see front matter © 2006 Elsevier Ltd. All rights reserved.
doi:10.1016/j.ibusrev.2006.08.003
ARTICLE IN PRESS
2. Conceptual background
First, the objectives of this study suggest that our focus is not only on cross-cultural/country
research, but on all empirical studies in leading IB journals. Only a small portion of
IB research is cross-cultural/national in orientation; single-country studies are still prevalent
(Hyman & Yang, 2001). Thus, many methodological issues pertaining to comparative
studies are not relevant. This leads us to focus on methodological issues applicable to
studies using samples from both single and multiple countries. The focus also enables us to
make cross-serial and cross-study comparisons. Second, the five selected aspects have been
considered the most fundamental indicators of quality research design, substantially
influencing the quality of an IB study (Ember & Ember, 2001). Specifically, as we elaborate
below, the quality of these five aspects, individually and in combination, helps ensure the
validity of IB studies.
Data collection methods influence a test's reliability and validity (Pedhazur &
Schmelkin, 1991). Frequently used methods include surveys (mail or administered
questionnaires), experiments, personal or telephone interviews, and secondary data.
Each data collection method has its advantages and disadvantages. For example,
conducting surveys in a question format may induce problems of vagueness and
generalizability. The situation or scenario could be "described so briefly to the respondent
that it is difficult for him or her to evaluate and for the researcher to attain any reasonable
degree of within-subject reliability" (Randall & Gibson, 1990, p. 465). An experiment
usually involves pre-existing groups or non-random assignment of units to treatments,
which threatens internal validity. Thus, researchers advocate multiple methods of data
collection, e.g., a combination of qualitative and quantitative methods, to overcome single
method bias. In addition, it is argued that the selection of modes of data collection could
influence non-response error and response rates (Shettle & Mooney, 1999). Sudman and
Bradburn (1982) argue that less educated people are more willing to respond to telephone
surveys than to mail questionnaires, which require literacy skills.
Furthermore, Ember and Ember (2001, p. 13) have pointed out that an important
feature of cross-cultural comparison is "whether the data used are primary (collected by
the investigator in various field sites explicitly for a comparative study) or secondary
(collected by others and found by the comparative researcher in ethnographies, censuses,
and histories)". Using both primary and secondary data is also highly recommended,
though the difficulty lies in ensuring the equivalence and comparability of secondary and
primary data obtained from different cultures (Malhotra et al., 1996).
Sample sources refer to the researched environment (Pedhazur & Schmelkin, 1991) and
the subjects used in a study. Ember and Ember (2001, p. 12) emphasize the critical role of
"geographical scope of the comparison—whether the sample is worldwide or is limited to a
geographic area (e.g., a region such as North America)" in cross-cultural studies. People in
different countries and areas differ in many ways, e.g., demographic and psychographic
characteristics, which could cause a treatments–attributes interaction and in turn influence
external validity and generalizability of research findings (Pedhazur & Schmelkin, 1991).
Culture is an even broader setting for research, exerting significant influence on subjects'
responses.
Research objectives and questions often determine the sampling frame as to whom or
what to sample, leading to two different sampling techniques, i.e., probability and non-
probability sampling (Palys, 1997). A good sample has two properties: representativeness
and adequacy (Singh, 1986). In general, random samples provide a good approximation of
the population and offer better assurance against sampling bias; thus are more
representative than non-probability samples (e.g., Lazerwitz, 1968). Nevertheless, due to
situational and financial constraints, researchers in many fields rely heavily upon
convenience sampling (Randall & Gibson, 1990). For instance, it would be difficult to
obtain an appropriate sampling frame for a study of behaviors of homosexual consumers
in China. Thus, it is desirable to learn how IB researchers make a trade-off between sample
representativeness and costs.
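The representativeness-versus-cost trade-off can be illustrated with a short simulation (a sketch with a hypothetical firm population, not data from this article): a probability sample tracks the population mean closely, while a convenience sample that over-represents the most visible units is badly biased.

```python
import random
import statistics

rng = random.Random(42)

# Hypothetical population of 10,000 firms with a skewed size distribution.
population = [rng.lognormvariate(4, 1) for _ in range(10_000)]
pop_mean = statistics.fmean(population)

# Probability sample: every firm has an equal chance of selection.
prob_sample = rng.sample(population, 200)

# Convenience sample: only the most visible (here, largest) firms are reached.
convenience_sample = sorted(population, reverse=True)[:200]

prob_mean = statistics.fmean(prob_sample)
conv_mean = statistics.fmean(convenience_sample)

# The probability sample's mean stays close to the population mean,
# while the convenience sample overstates it severely.
prob_error = abs(prob_mean - pop_mean)
conv_error = abs(conv_mean - pop_mean)
```

The cheap convenience sample is far less representative, which is precisely the trade-off IB researchers face when a proper sampling frame (as in the China example above) is unavailable.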
Sample size influences the accuracy of estimation. In general, a large sample size can
help minimize sampling errors, and improve generalizability of research findings. Sample
size affects statistical power by influencing standard errors (Pedhazur & Schmelkin,
1991). The adequacy of sample size is determined by such factors as the way the
respondents are selected (random or convenient), the distribution of the population
parameters (the variables of interest), the purpose of the research project (exploratory or
applied), and data analytic procedures (Randall & Gibson, 1990). It is essential for
researchers to find the optimal point between the costs and adequacy of a sample size.
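The link between sample size and standard error can be sketched with a small simulation (the population parameters mu=100, sigma=15 are illustrative assumptions, not values from the article):

```python
import random
import statistics

def empirical_se(sample_size, trials=2000, seed=7):
    """Estimate the standard error of the sample mean by drawing
    repeated samples from a normal population (mu=100, sigma=15)."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(100, 15) for _ in range(sample_size))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

# A 16-fold larger sample cuts the standard error roughly 4-fold,
# since SE = sigma / sqrt(n); smaller standard errors mean higher power.
se_n30 = empirical_se(30)    # ~ 15 / sqrt(30)  ≈ 2.7
se_n480 = empirical_se(480)  # ~ 15 / sqrt(480) ≈ 0.7
```

Because precision improves only with the square root of n, doubling the sample size buys diminishing returns, which is why finding the cost-adequacy optimum matters.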
The problem of low response rates or non-response error occurs when some sample
subjects do not respond. Such non-response errors distort the information drawn from the
selected sample (Assael & Keon, 1982), thereby decreasing the reliability and validity of a
study and hindering generalization. Therefore, to do quality research, scholars need
to know not only the average response rates of different methods, but must also identify
factors that affect response rates. For IB research, it is critical to focus attention on
question relevancy, language ambiguity, cultural and geographical distances, and the
sensitivity of the study’s subject that may significantly influence non-response errors
(Helgeson, Voss, & Terpening, 2002).
In the following section, we empirically review the adequacy of these five aspects in IB
research methodology based on a more comprehensive survey.
3. Research design
The research design for this study follows procedures suggested in previous studies
(Hyman & Yang, 2001; Sila & Ebrahimpour, 2002). First, based upon the purpose of our
research, we selected our sampling frame and further narrowed our samples. Second, we
developed corresponding measures and a content analysis scheme defined in terms of our
research domain. Finally, major issues concerning data collection and coding reliability
were addressed.
3.1. Samples
Among the numerous elements of research design and methodology, we focus on aspects
significantly affecting reliability and validity of the findings of a study. A coding sheet was
developed based on eight major elements of research design. They are (1) data collection
methods, (2) sampling techniques, (3) population (the entire group under study as defined
by research objectives), (4) sample frame (a master list of the entire population), (5) sample
(a subset of the population), (6) sample subjects, (7) sample size, and (8) response rate
(Burns & Bush, 2002; Sin, Cheung, & Lee, 1999).
For this study, a country is defined as "nations, specific areas, or regions." For example,
the UK, Hong Kong, and Puerto Rico are each coded as one country. The few cases of
researcher-identified regions, such as General Europe or Middle Asia, also were coded as
one country. If an article conducted multiple studies, the mean sample size was calculated
and used. We recorded response rates according to data collection methods. When a study
used more than one method, the most important one was coded as the primary data
collection method. For example, De Mortandes and Vossen (1999) conducted a telephone
interview first to identify and determine participants, followed by a mail survey. Therefore,
we coded mail survey as the primary data collection method. When a study did not specify
its sampling method, response rate, or other information, we categorized each item as "do
not know."
To develop a coding scheme, two researchers coded the first 200 articles together. The
200 articles were drawn from the first 11 empirical studies published in 1993, 1997, and
2001, respectively, for each IB serial (total articles: 11 × 3 × 6 = 198). In addition, two
more empirical articles published in JIBS in 2001 were selected to reflect the fact that JIBS
has the largest number of empirical articles in the examined period. The sampling
technique enables us to choose articles roughly representing each serial. Based upon the
initial coding scheme, the two researchers then coded the remaining 1,096 articles
independently. The coding scheme was further expanded to incorporate new items
whenever necessary. For instance, newly studied countries such as Ghana, Kenya, Malawi,
and the Czech Republic were added in the process of coding. In addition, the researchers
reclassified and redefined a category through either combining similar items or further
dividing a category into two or more. For example, sample types were initially categorized
into convenience, probability, judgment, sample based on lists supplied by others,
secondary data, and others. Later, we further divided "secondary data" into financial data
and government data to distinguish these two secondary data sources.
The inter-coder reliabilities, measured by the percent agreement index across the eight
categories, ranged from 95.5% to 99.1%, indicating satisfactory results. Specifically, the
inter-coder reliability was 99.1% for population, 98.5% for data collection methods,
98.3% for sample frame, 97.7% for sample, 97.5% for sampling techniques, 97.6% for
response rate, 96.5% for sample size, and 95.5% for sample subjects. All disagreements
and inconsistencies were resolved through further discussion.
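The percent agreement index reported above divides the number of identically coded items by the total number of items coded; a minimal sketch (the codes below are hypothetical, not the article's data):

```python
def percent_agreement(coder_a, coder_b):
    """Percentage of items to which two coders assigned the same code."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("coders must rate the same non-empty set of items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Hypothetical codes for ten articles' primary data collection methods.
codes_a = ["mail", "mail", "secondary", "interview", "mail",
           "experiment", "secondary", "mail", "interview", "mail"]
codes_b = ["mail", "mail", "secondary", "interview", "phone",
           "experiment", "secondary", "mail", "interview", "mail"]

agreement = percent_agreement(codes_a, codes_b)  # 90.0: 9 of 10 items match
```

Note that percent agreement does not correct for agreement expected by chance; chance-corrected indices such as Cohen's kappa are often reported alongside it for that reason.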
4. Results
Table 1
Data collection methods for empirical studies
experiment-based studies. Surprisingly, MIR has not published any research using
experiment as a primary research method.
Secondary data have been widely utilized in IB research, with 32.7% of studies
employing pre-existing quantitative and qualitative data such as government databases,
financial data, census data, social surveys, organizational administrative data, public
records, and longitudinal studies. JIBS published the highest percentage of secondary
data-based research, demonstrating a good balance between survey-based and
secondary data (51.8% versus 42.6%). In comparison, studies in the two international
marketing-specific journals, IMR and JIM, utilized less secondary data (17.7% and 24.8%,
respectively). Researchers in international marketing may therefore be encouraged to
exploit secondary data, which are less expensive and less time-consuming to collect
(Heaton, 2004).
Table 2
Countries most frequently sampled

Rank  Country     IBJ a        IMS b        c            JIBS d  MIR    JWB    IMR    JIM    IBR
1     USA         39.0 (506)   27.7 (184)   42.9 (258)   48.5    46.7   29.5   36.4   25.1   41.8
2     UK          15.7 (204)    6.2 (41)    14.0 (84)    16.2    14.2   13.4   16.8   19.7   10.3
3     Japan       14.4 (187)    5.7 (38)    15.4 (93)    18.9    14.8   12.5    9.5   13.5   13.3
4     China       10.7 (139)    3.5 (23)     2.0 (12)    10.8    11.8   15.2    5.9   10.4   13.3
5     Germany      9.5 (123)    1.7 (11)    11.0 (66)    11.1    14.8   14.3    5.0    7.7    6.1
6     Canada       8.6 (112)    3.3 (22)    10.1 (61)    12.1    10.7    9.8    5.9    6.6    4.8
7     France       8.2 (106)    1.1 (7)      9.3 (11)    10.2     8.3   11.6    6.4    7.7    4.2
8     Australia    6.4 (83)     2.7 (18)     4.2 (25)     7.0     4.1    6.3    7.7    6.2    6.1
9     Sweden       5.6 (73)     0.8 (5)      5.8 (35)     6.7     6.5    5.4    0.9    8.1    3.0
9     Holland      5.6 (73)     1.7 (11)     6.6 (40)     9.4     5.3    7.1    1.8    4.6    3.0
11    Hong Kong    4.8 (62)     2.9 (19)     1.8 (11)     5.4     3.6    5.4    4.5    5.4    3.6
12    Korea        4.6 (60)     3.6 (24)     4.5 (27)     5.1     3.6    4.5    5.0    3.1    6.7
13    Italy        4.6 (59)     0.5 (3)      4.7 (28)     6.7     3.0   11.6    1.4    3.1    3.0
14    Spain        3.9 (51)     0.5 (3)      2.2 (13)     5.1     5.3    7.1    3.2    1.5    2.4
15    Norway       3.9 (50)     0.5 (3)      2.8 (17)     4.6     3.0    2.7    2.7    3.9    5.5
Number of articles 1296        665          602          371     169    112    220    259    165

a IBJ: international business journals, including JIBS, MIR, JWB, IMR, JIM, and IBR.
b Percentages for the five sampled international marketing journals, based on Hyman and Yang (2001).
c Percentages based on Thomas et al. (1994).
d JIBS: Journal of International Business Studies; MIR: Management International Review; JWB: Journal of World Business; IMR: International Marketing Review; JIM: Journal of International Marketing; IBR: International Business Review (1992–2003).
e Percent = frequency of the country studied/no. of articles.
f Frequency of the countries studied.
The distribution of sampled countries is also skewed at the regional level. Europe is the
continent studied most frequently (1,152 studies), followed by Asia (859), North America
(670), Oceania (mainly Australia, New Zealand) (121), Central and South America (81),
and Africa (29). In terms of the mean studies per region, North America ranks first (134),
followed by Oceania (60.5), Europe (36), Asia (14.6), Central and South America (6.2), and
Africa (3.2). Thus, Central and South America and Africa are under-researched areas.
The mean number of countries studied per empirical article is 1.56 (s.d. = 3.37) in IBJ,
which is similar to that in IMS (mean = 1.55, s.d. = 0.52) (Hyman & Yang, 2001). Of the
empirical studies in IBJ, 60.9% (740 of 1,216) are limited to one-country samples (lower
than the 73.4% in IMS), 17.4% (211 of 1,216) are limited to two-country samples, and
21.7% involve samples of three or more countries. While the mean number of countries
sampled has been increasing over time, one-country samples still dominate IB research.
Table 3
Profile of data sources
Theoretically, probability samples are preferred since they offer better assurance against
sampling bias. Nevertheless, the majority of IB studies were found to rely on non-probability
sampling methods. Specifically, judgment samples are used most frequently (15.1%),
followed by convenience samples (14.6%), probability samples (9.3%), financial data
(7.7%), government data (7.0%), census data (1.9%), and newspaper articles (1.6%)
(see Table 4). Sin,
Hung, and Cheung (2001) identify a similar pattern based on cross-cultural advertising
studies. They find that the most frequently used method was convenience sampling,
followed by judgment sampling, and random sampling. In the same vein, Hyman and
Yang (2001) observe that only 19.3% of empirical articles use probability samples while
26% rely on convenience samples and 31.3% on lists supplied by others. This is
consistent with the finding of Sin and Ho (2001, p. 25) that researchers studying Chinese
consumer behavior have "a tradition of relying heavily on non-probability sampling in
selecting sample units."
Furthermore, lists provided by various third parties have been widely utilized,
representing 33.7% of studies in IBJ. The most popular sample frames are multinational
company directories compiled by commercial organizations, such as the Fortune 500,
Macmillan Directory of Multinationals, International Firm Directory (IFD), and
Kompass Directory of Enterprises, followed by lists compiled by government and trade
associations such as the US Department of Commerce Directory, member lists of the
China Business Club, World Business Directory, and Singapore China Trade and
Investment Directory, AMA and Thai Marketing Association, American Countertrade
Association, and Defense Industry Offset Association. Lists compiled by non-profit,
world-wide organizations such as the OECD and EU are also quite popular.
Table 4
Frequency of sample type
The mean sample size of empirical studies in IBJ varies dramatically according to unit of
analysis, ranging from 181 to 5,186. For example, the mean size is 426 for manager
samples, and 5,186 for studies using secondary financial data. As the mean sample size is
likely skewed by either very small or very large samples, the median size, 180, is considered
more representative of the typical sample size in IB. In terms of sample unit,
advertisements have the largest median sample size of 647, followed by individuals (343),
students (248), newspaper articles (201), government data (187), financial data (177),
managers/CEOs/VPs (175), journal articles (119), and product/sales data (116). The
median sample sizes also vary by types of study and by sampling methods. The median
sample size of census is the largest (351), followed by probability sample (242), list (203),
financial data (197), newspaper articles (186), convenience sample (174), government data
(154), and judgment (135) (see Table 5). The median sample size is larger than the minimum
satisfactory sample size, which is usually set at 100 subjects per study (Bailey, 1982).
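The preference for the median over the mean can be seen in a small numeric example (the sample sizes below are invented for illustration, not the article's data): a single large secondary-data study pulls the mean far above the size of a typical study.

```python
import statistics

# Nine survey studies with modest samples plus one large
# secondary-financial-data study (hypothetical sizes).
sample_sizes = [120, 150, 160, 175, 180, 185, 200, 240, 260, 5186]

mean_size = statistics.fmean(sample_sizes)     # 685.6, inflated by the outlier
median_size = statistics.median(sample_sizes)  # 182.5, the typical study
```

The mean is nearly four times the median here, which mirrors why the article reports the median (180) as the representative IB sample size.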
The average response rates of studies using survey questionnaires across IB journals are
relatively high, ranging from 27.4% to 51.2%. In comparison, Hyman and Yang (2001)
found that the mean response rate of studies in IMS is 40.0%, which falls within this
range. Studies employing administered questionnaire surveys have the highest
response rate (51.2%), followed by telephone interviews (45.2%), personal interviews
(36.6%; response rates are calculated as the ratio of people who agreed to be interviewed
to people contacted), and mail surveys (27.4%). It is not surprising that mail
questionnaire surveys, being most popular, have received a lower response rate relative to
other survey methods (Malhotra et al., 1996). In spite of its high cost, personal interview
with survey questionnaire is the dominant mode for collecting data in most European
countries, newly industrialized countries (NICs), and the developing world (Honomichl,
1984; Monk, 1987).
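Response rates of the kind compared above are simple ratios of completed responses to units contacted; a sketch with hypothetical field counts (the figures below are invented for illustration):

```python
def response_rate(completed, contacted):
    """Response rate, in percent: completed responses over units contacted."""
    if contacted <= 0 or not 0 <= completed <= contacted:
        raise ValueError("need 0 <= completed <= contacted and contacted > 0")
    return 100.0 * completed / contacted

# Hypothetical counts for two survey modes.
mail_rate = response_rate(274, 1000)   # 27.4% of mailed questionnaires returned
phone_rate = response_rate(226, 500)   # 45.2% of people contacted agreed
```

The denominator matters: as the article notes for personal interviews, a rate computed over people contacted is not comparable to one computed over, say, deliverable addresses, so the basis should always be reported.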
Table 5
Sample size by type
5. Discussion
In this research, we summarized the common practice of the five key aspects of IB
research design through an extensive examination of 1,296 empirical articles. The results
indicate that (1) the most popular data collection method is the mail questionnaire survey,
followed by secondary data and the administered questionnaire survey, (2) 60.9% of the
studies used a one-country sample (88.9% from western countries), (3) 33.7% of studies
drew sample frames from authoritative lists, and (4) the median sample size was 180 with
an average response rate of 40.1%. The findings provide directions for further
improvement in terms of data collection methods, data sources and quality.
Secondary data, including organizational records (such as census data), survey archives
(such as the General Social Survey), and written records (such as newspapers), are suitable
for longitudinal and multi-country studies (Judd, Smith, & Kidder, 1991). However,
Salehi-Sangari and Lemar (1993,
p. 132) have argued that, in developing countries (DCs), "the use of secondary data
compiled by DCs is not recommended owing to the existence of a high degree of
unreliability." In such cases, researchers may try to obtain the approval of top management
in an organization and use cash remuneration to acquire the full cooperation of lower
management, so as to obtain a "higher rate of return for questionnaire, increase in quality
of response, and possibility of availability of document, if needed for research"
(Salehi-Sangari & Lemar, 1993, p. 132).
In addition to these traditional approaches, more IB scholars could adopt the Internet
and recently emerged communication media such as CATI (computer-assisted
telephone interviewing) and CAPI (computer-assisted personal interviewing) to take
advantage of "low cost, fast response time, and access to any location" (Sackmary, 1998,
p. 41). For instance, researchers could employ online sampling techniques such as random
online intercept sampling, invitation online sampling, and online panel sampling.
Furthermore, researchers could benefit from new methods such as "netnography"
(Kozinets, 2002) and online content analysis (Yang & Peterson, 2003). Compared with
traditional research methods such as personal interviews and focus groups, these methods
are less time-consuming, less obtrusive, and cheaper, providing a unique opportunity for
IB researchers to collect data. For example, netnography is a qualitative research technique
that employs ethnographic methods to draw upon "the information publicly available in
online forums to identify and understand the needs and decision influences of relevant
online consumer groups" (Kozinets, 2002, p. 62).
Previous studies call for increasing the number of countries sampled in IB study (Hyman
& Yang, 2001; Sin et al., 2001). Researchers using data collected from multiple countries
could control unmatched factors, increase validity, and rule out alternative explanations
(Malhotra et al., 1996; Sin et al., 1999; Berry, 1980), and, in turn, enhance the
generalizability of the findings. Multi-country sampling may be restricted for two
reasons. First, surveying samples in more than one country involves large financial and
human costs. Second, researchers' opportunities for international cooperation are limited
(Aulakh & Kotabe, 1993). This could be partially addressed by editors or other research
organizations through their efforts in coordinating researchers across countries. IB
researchers have concentrated heavily on Western countries, reflecting those countries'
economic power. As the rapid globalization of business continues, it is reasonable to call
for more studies focusing on emerging and developing countries.
Regarding subjects, while it is reasonable to acknowledge that the majority of
respondents are managers/CEOs/VPs, IB scholars should extend their research scope by
surveying more IB stakeholders, such as shareholders, environmental groups, and the
community. Studying the interplay between these subjects and MNCs would enrich our
knowledge. For secondary data, because a single data set addresses only some aspects of a
researched phenomenon, various sources should be used to verify accuracy of the data and
the robustness of the results. For instance, to measure the institutional environment,
scholars can utilize multiple sources such as the Institutional Profiles database, the Fraser
Institute database, World Economic Forum Global Competitiveness Report, the PRS
Group International Country Risk Guide, IMF International Financial Statistics, the
Kaufmann, Kraay and Zoido-Lobatón database, the World Development Indicators, and
Hofstede's cultural dimension indices.
One notable issue we observe is that there have been very few IB studies on
organizational behaviors using multiple informants. As pointed out by Bruggen, Lilien and
Kacker (2002, p. 476), "using multiple versus a single informant improves the quality of
response data and thereby the validity of reported relationships in organizational
marketing research." While organizational-level studies in other top business journals have
adopted a multiple-informant practice to minimize response errors, IB research still lags
behind this trend.
6. Limitations
Several limitations suggest avenues for further research. First, we surveyed only the six
leading IB journals written in English and located in the USA and UK. Future studies may
also include other outlets, particularly prestigious journals in other business disciplines.
Second, the scope of our survey of research methods is rather limited, reviewing only five
major aspects.
Other important methodological issues include the choice of statistical tools, the power of
the findings, construct equivalence (e.g., functional, conceptual, instrument, and
measurement equivalence) (Drasgow & Kanfer, 1985), construct reliability and validity,
and scale construction for cross-cultural samples. Third, our coding practice indicates that
a relatively comprehensive, detailed coding scheme is necessary for the reduction of coding
inconsistency and disagreement. Researchers could use our findings as a base to further
develop the coding scheme and improve inter-coder reliability. Finally, future research can
undertake systematic examination of these areas so as to provide additional insights and a
comprehensive review of research methods in IB.
Acknowledgments
The authors gratefully acknowledge a research grant from City University of Hong
Kong (SRG Project No. 7001742). The authors owe several ideas to Professor Michael
Hyman at New Mexico State University and would like to express appreciation for his
strong support of the project. The authors also would like to thank two anonymous
reviewers for their constructive suggestions.
References
Assael, H., & Keon, J. (1982). Nonsampling vs. sampling errors in survey research. Journal of Marketing, 46(2),
114–123.
Aulakh, P. S., & Kotabe, M. (1993). An assessment of the theoretical and methodological developments in
international marketing: 1980–1990. Journal of International Marketing, 1(2), 5–28.
Bailey, K. D. (1982). Methods of social research (2nd ed.). New York: Free Press.
Berry, J. W. (1980). Introduction to methodology. In H. C. Triandis, & J. W. Berry (Eds.), Handbook of cross-
cultural psychology: Methodology, Vol. 2 (pp. 1–28). Boston, MA: Allyn & Bacon.
Bruggen, G. H. V., Lilien, G. L., & Kacker, M. (2002). Informants in organizational marketing research: Why use
multiple informants and how to aggregate responses. Journal of Marketing Research, 39(4), 469–478.
Burns, A. C., & Bush, R. F. (2002). Marketing research: Online research applications. New Jersey: Pearson
Education, Inc.
Cavusgil, S. T., & Das, A. (1997). Methodological issues in empirical cross-cultural research: A survey of the
management literature and a framework. Management International Review, 37(1), 71–96.
Chan, K. C., Fung, H. G., & Leung, W. K. (2006). International business research: Trends and school rankings.
International Business Review, 15(4), 317–338.
Craig, C. S., & Douglas, S. P. (2001). Conducting international marketing research in the twenty-first century.
International Marketing Review, 18(1), 80–90.
De Mortandes, C. P., & Vossen, J. (1999). Mechanisms to control the marketing activities of foreign distributors.
International Business Review, 8(1), 75–97.
Drasgow, F., & Kanfer, R. (1985). Equivalence of psychological measurement in heterogeneous populations.
Journal of Applied Psychology, 70(4), 662–680.
Dubois, F. L., & Reeb, D. (2000). Ranking the international business journals. Journal of International Business
Studies, 31(4), 689–704.
Ember, C. R., & Ember, M. (2001). Cross-cultural research methods. New York: Rowman & Littlefield.
Heaton, J. (2004). Reworking qualitative data. Gateshead, UK: Athenaeum Press.
Helgeson, J. G., Voss, K. E., & Terpening, W. D. (2002). Determinants of mail-survey response: Survey design
factors and respondent factors. Psychology & Marketing, 19(3), 303–328.
Hoverstad, R., Shipp, S. G., & Higgins, S. (1995). Productivity, collaboration, and diversity in major marketing
journals: 1984–1993. Marketing Education Review, 5(2), 57–65.
Honomichl, J. J. (1984). Survey results positive/Plotting structural changes in industry. Advertising Age, 55(74),
22–29.
Hyman, M. R., & Yang, Z. (2001). International marketing journals: A retrospective. International Marketing
Review, 18(6), 667–716.
Judd, C. M., Smith, E. R., & Kidder, L. H. (1991). Research methods in social relations. Orlando, FL: Holt,
Rinehart & Winston, Inc.
Kaiser, B. P. (1988). Marketing research in Sweden. European Research, 16(1), 64–70.
Kogut, B. (2001). Methodological contributions in international business and the direction of academic research
activity. In A. Rugman, & T. Brewer (Eds.), The Oxford handbook of international business (pp. 785–817). UK:
Oxford University Press.
Kozinets, R. V. (2002). The field behind the screen: Using Netnography for marketing research in online
communities. Journal of Marketing Research, 39(1), 61–72.
Lazerwitz, B. (1968). Sampling theory and procedures. In H. A. Blalock, & A. B. Blalock (Eds.), Methodology in
social research (pp. 278–328). NY: McGraw-Hill.
Li, T., & Cavusgil, S. T. (1995). A classification and assessment of research streams in international marketing.
International Business Review, 4(3), 251–277.
Malhotra, N. K., Agarwal, J., & Peterson, M. (1996). Methodological issues in cross-cultural marketing research:
A state-of-the-art review. International Marketing Review, 13(5), 7–43.
Malpass, R., & Poortinga, Y. (1986). Strategies for design and analysis. In W. Lonner, & J. W. Berry (Eds.), Field
methods in cross-cultural research (pp. 47–83). Beverly Hills, CA: Sage Publications.
McGrath, J. E., & Brinberg, D. (1983). External validity and the research process: A comment on the Calder/
Lynch dialogue. Journal of Consumer Research, 10(1), 115–124.
Monk, D. (1987). Marketing research in Canada. European Research, 15(4), 271–274.
Paliwoda, S. J. (1999). International marketing: An assessment. International Marketing Review, 16(1), 8–17.
Palys, T. (1997). Research decisions: Quantitative and qualitative perspectives. Toronto: Harcourt Brace &
Company Canada Ltd.
Pedhazur, E. J., & Schmelkin, L. P. (1991). Measurement, design, and analysis: An integrated approach. London:
Lawrence Erlbaum Associates, Inc.
Randall, D. M., & Gibson, A. M. (1990). Methodology in business ethics research: A review and critical
assessment. Journal of Business Ethics, 9(6), 457–471.
Sackmary, B. (1998). Internet survey research: Practices, problems, and prospects. In R. C. Goodstein, & S.
MacKenzie (Eds.), AMA educators’ proceedings: Enhancing knowledge development in marketing, Vol. 9 (pp.
41–49). Chicago, IL: American Marketing Association.
Salehi-Sangari, E., & Lemar, B. (1993). Survey research techniques: Application in data collection from
developing countries (DCs). In C. L. Swanson, A. Alkhafaji, & M. H. Ryan (Eds.), International research in
the business disciplines, Vol. 1 (pp. 125–133). Greenwich, CT: JAI Press.
Schaffer, B. S., & Riordan, C. M. (2003). A review of cross-cultural methodologies for organizational research: A
best-practices approach. Organizational Research Methods, 6(2), 169–215.
Shettle, C., & Mooney, G. (1999). Monetary incentives in US government surveys. Journal of Official Statistics,
15(2), 231–250.
Sila, I., & Ebrahimpour, M. (2002). An investigation of the total quality management survey based research
published between 1989 and 2000: A literature review. International Journal of Quality & Reliability
Management, 19(7), 902–970.
Sin, L. Y. M., Cheung, G. W. H., & Lee, R. (1999). Methodology in cross-cultural consumer research: A review
and critical assessment. Journal of International Consumer Marketing, 11(4), 75–96.
Sin, L. Y. M., & Ho, S. (2001). An assessment of theoretical and methodological development in consumer
research on greater China: 1979–1997. Asia Pacific Journal of Marketing and Logistics, 13(1), 3–42.
Sin, L. Y. M., Hung, K., & Cheung, G. W. H. (2001). An assessment of methodological development in cross-
cultural advertising research: A twenty-year review. Journal of International Consumer Marketing, 14(2/3),
153–192.
Singh, A. K. (1986). Tests, measurements and research methods in behavioural sciences. New Delhi: Tata McGraw-
Hill Publishing Company Limited.
Sudman, S., & Bradburn, N. M. (1982). Asking questions: A practical guide to questionnaire design. San Francisco,
CA: Jossey-Bass.
Thomas, A. S., Shenkar, O., & Clarke, L. (1994). The globalization of our mental maps: evaluating the geographic
scope of JIBS coverage. Journal of International Business Studies, 25(4), 675–686.
Urbancic, F. R. (1995). An analysis of the institutional and individual authorship sources of articles in the Journal
of Applied Business Research: 1985–1993. Journal of Applied Business Research, 11(1), 108–117.
Yale, L., & Gilly, M. C. (1988). Trends in advertising research: A look at the content of marketing-oriented
journal from 1976 to 1985. Journal of Advertising, 17(1), 12–22.
Yang, Z., & Peterson, R. T. (2003). I read about it online… Web-based product reviews provide a wealth of
information for marketers. Marketing Research, 15(4), 26–31.