
CHAPTER III

RESEARCH METHODOLOGY

This chapter deals with the methodology used for the purpose of this study. It
includes details about the methods of data collection, the sampling plan, the research
instruments, and the data analysis software and tools proposed to be used.

3.1 RESEARCH DESIGN


The research is descriptive in nature. Descriptive research describes the characteristics of a population without attempting to change the environment. The present study measures the performance of commercial banks in India using the Balanced Scorecard method and seeks to identify the factors driving performance under each of the Balanced Scorecard perspectives. The study also compares the results of performance measurement of the selected banks under the CAMEL method with those under the Balanced Scorecard method. Since the study involves the measurement of present performance and the identification of the factors affecting it, a descriptive design is appropriate.

3.2 NATURE AND SOURCE OF DATA


Both primary and secondary data are used in the study. Primary data is required to analyse three of the Balanced Scorecard perspectives, viz. Customer, Learning and Growth, and Internal Process. Opinions are obtained from the customers of the banks for the Customer Perspective and from the employees of the banks for the Learning and Growth and Internal Process perspectives. Secondary data, collected from the annual reports published by the banks, is used to analyse the performance of the banks under the Financial Perspective of the Balanced Scorecard.

3.3 METHODOLOGY FOR CUSTOMER AND EMPLOYEE PERSPECTIVE


The primary data is collected through two well-structured questionnaires. Ten banks, including both public sector and private sector banks in India, are chosen for this purpose on a simple random sampling basis. The questionnaires also elicit information on the demographic profile of the respondents. To develop the questionnaires, the existing literature on performance measurement systems used in the banking sector and on Balanced Scorecard implementation in the service sector was reviewed. The questionnaires were also vetted by a panel of subject experts, statisticians, managers and senior level officers of banks, and academicians. The questions are sequentially arranged and worded in a simple and understandable manner. The respondents are first educated about the purpose of the study and given an assurance of the confidentiality of their data.

3.3.1 Components of the survey questionnaire for Customers


The questionnaire for the customers consists of two sections. The first section contains questions on the demographic profile of the customers and the second contains questions relating to customer satisfaction. The demographic section covers both personal and banking profiles, gathering information on age, gender, educational qualifications, occupation, annual income, number of years dealing with the bank, the types of facilities held with the bank, types of deposit facilities, types of loan facilities, whether the respondent deals with other banks, and, if so, the types of accounts held with them.
There are four constructs in the second section, and responses to all of them are solicited on a five-point Likert scale. The first construct, Branch Ambience, lists factors relating to the environment in the branch, the availability of parking space, the location of counters, and the comfort level of customers when dealing with the branch.
The second construct, Employee Behaviour, focuses on the behaviour of the staff, the efficiency of service, the account-opening process, the time taken to attend to customer queries, and the quality of services in the bank. In the Product Satisfaction construct, the customers' opinions are sought on factors such as e-banking facilities, the bank's ATM services, the range of products offered, the technology provided by the bank, new products and services, and the reasonableness of bank charges, all of which indicate the customer's satisfaction with the services provided by the bank.
The last construct deals with Trust and Loyalty, wherein responses implying the customers' trust in and loyalty towards their banks are ascertained. Responses are elicited to questions such as whether the customer would like to maintain the relationship with the bank, avail further facilities from it, or has recommended it to family members.

3.3.2 Components of the survey questionnaire for Employees
There are two sections in the questionnaire. The first section of the questionnaire for
employees deals with the demographic profiles of the employees. Information about their
personal profiles like age, gender, qualifications and employment profile like length of
service in the bank, period since last promotion and the length of service in the present
position is ascertained in this part of the questionnaire.
The second section has two sub-sections – the Learning and Growth section and the
Internal Perspective section. Learning and Growth has been further sub-divided into two
parts – Learning Perspective and Growth Perspective.
The Learning Perspective has two constructs – Training Needs and Organizational
Support for Training. The first construct pertains to the training opportunities available in the
bank, relevance of the training to the job requirements, the opportunities provided by the
bank to develop new job skills, and the satisfaction derived by the employee as regards the
training opportunities provided. The second construct deals with the organizational support
received by the employee towards training. It dwells on opportunities to cross-train, learning
through job rotation, encouragement for use of IT, supervisory support for identifying areas
of strengths and weaknesses of the employees.
The second perspective, the Growth Perspective, covers Career Opportunities, Rewards and Recognition, Employee Empowerment, and Vision and Mission Alignment. The Career Opportunities construct addresses the availability of a clear path for career advancement, growth opportunities within the organization, the objectivity of the performance appraisal system, and the employee's satisfaction with the career opportunities available. The Rewards and Recognition construct asks whether the employee's work is rewarded, whether the reward system is equitable and whether the bank encourages free and fair competition, and also covers the work-life balance of the employees. The Employee Empowerment construct deals with empowering employees by decentralizing responsibility in the organization and encouraging them to give suggestions and take up new challenges. The Vision and Mission Alignment construct covers the communication of goals and strategies by the bank to its employees and the employees' motivation to meet these goals.
The second sub-section has four constructs: Service Quality, Product Quality, Product Innovation, and Risk Management and Compliance. The Service Quality construct elicits the opinion of the employees regarding adherence to service delivery norms, the bank's efficiency in dealing with customer complaints, and the adoption of quality standards such as Six Sigma and 5S. The Product Quality construct asks about the efficiency of the bank's core banking system, the efficiency of its ATM network, the efficiency and reliability of its mobile and internet banking, and its e-products.
The Product Innovation construct deals with the bank's new product offerings, cross-selling of products, innovation in products, and the frequency with which the bank launches new and differentiated products. The Risk Management and Compliance construct deals with the requirements of risk management and compliance: maintaining the confidentiality of customers' data, informing customers of banking norms, exercising due diligence in banking operations, adhering to KYC and anti-money-laundering norms, following the bank's systems and procedures, and whether employees are conscious of audit compliance and strive to protect the image of the bank.

3.3.3 Use of scale in the questionnaire


The researcher has used different scales to measure the demographic profiles,
customers’ perceptions and employees’ perceptions regarding the various perspectives of the
Balanced Scorecard. Nominal and ordinal scales are used in the demographic profiles,
banking profiles and employment profile sections whereas in the section on the various
perspectives of the BSC, the questions are on a five point Likert scale with 5 meaning
“Strongly Agree”, 4 indicating “Agree”, 3 indicating “Neither Agree Nor disagree”, 2
meaning “Disagree” and 1 indicating “Strongly Disagree”. Only two questions in the profile
sections are multiple tick questions, the rest are single tick questions.

3.3.4 Pilot study


A pilot feasibility study was conducted before venturing into the full-fledged study. Thirty respondents were chosen on a convenience basis from each of the customer and employee groups. This process helped the researcher to understand the intricacies of the instruments and brought clarity to the conduct of the study. Certain questions were simplified and suitable modifications and corrections were made to the instruments. The following changes were made in the questionnaires:
a. Some questions that were not properly understood by the respondents were revised into simple statements for better understanding.
b. Under the different types of facilities, share trading was added to cover respondents who held such facilities with the banks.
c. A few more statements were included among the factors relating to Product Satisfaction, based on the responses obtained in the pilot study.
d. In the employee questionnaire, the factor on work-life balance was added.
e. In the Vision and Mission Alignment construct, some factors were felt to be unnecessary and were deleted.
f. Some open-ended questions were converted into close-ended questions.
The above changes were made after careful scrutiny and on the valuable suggestions received from the expert panel members.
Subsequent to the pilot study, the researcher verified the reliability of the data using Cronbach's alpha (Cronbach, 1951). Cronbach's alpha is the most common measure of internal consistency or reliability; it shows how closely a set of items are related as a group and is most commonly used when a questionnaire contains multiple Likert-scale questions. It is expressed as a number between 0 and 1. Generally, a value of 0.7 to 0.8 is considered acceptable. According to Peterson (1994) and Nunnally (1978), Cronbach's alpha should exceed a threshold of 0.6 to be an acceptable reliability coefficient, though lower thresholds are sometimes used in the literature.
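For reference, Cronbach's alpha for a construct can be computed directly from the item responses. The following is a minimal sketch in Python; the data and names are illustrative, since the study itself performed the test in SPSS.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents x k_items) matrix of Likert scores."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)      # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Example: 30 pilot respondents answering a 4-item construct on a 1-5 scale
    rng = np.random.default_rng(0)
    pilot = rng.integers(1, 6, size=(30, 4))
    print(round(cronbach_alpha(pilot), 3))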
The customer’s research instrument consists of four constructs viz., Branch
Ambience, Employee Behavior, Product Satisfaction, Trust and Loyalty. The employee’s
research instrument consists of ten constructs viz. Training Needs, Organizational Support
for Training, Career Opportunities, Rewards and Recognition, Employee Empowerment,
Vision and Mission alignment, Service Quality, Product Quality, Product Innovation, Risk
Management and Compliance.
Tables 3.1 and 3.2 display the results of the reliability tests performed on the two
questionnaires.

Table 3.1: Cronbach’s Alpha for customers’ research instrument

Construct               Cronbach’s Alpha (first 30 respondents)
Branch Ambience         0.647
Employee Behaviour      0.825
Product Satisfaction    0.838
Trust and Loyalty       0.793

Table 3.2: Cronbach’s Alpha for employees’ research instrument

Construct                             Cronbach’s Alpha (first 30 respondents)
Training Needs                        0.873
Organisational support for training   0.849
Career opportunities                  0.859
Rewards and Recognition               0.880
Employee Empowerment                  0.783
Vision and Mission Alignment          0.851
Service Quality                       0.870
Product Quality                       0.880
Product Innovation                    0.856
Risk Management and Compliance        0.898

The researcher has checked the reliability of the constructs used in the study. From the above tables, it can be inferred that the Cronbach's alpha values for all the constructs have exceeded the threshold of 0.6, indicating that the variables used to measure the constructs are reliable. Hence, all the variables included in the constructs possess the internal consistency needed for further analysis.

3.4 SAMPLING DESIGN


Sampling design is the procedure adopted by the researcher for selecting samples from a given population; it is a pre-planned technique for data collection. Of the many available sample designs, the researcher has adopted a random sampling design for obtaining the sample from the population. The sample design covers the sample unit, the sample size and the sampling process.

3.4.1 Sample unit
Customers who have accounts with the banks, either deposit or loan accounts, are
considered to be the sample unit for the customer questionnaire. Permanent employees of
banks from all cadres working in the branches are considered as the sample unit for the
employee questionnaire.

3.4.2 Sample size


The sample size was determined using the formula

n = (Zσ / eµ)²

where n is the sample size, σ is the standard deviation of the responses to a statement, µ is the mean response, Z is the standard normal value for the chosen confidence level (1.96 at 95 per cent confidence) and e is the acceptable relative error, taken as 5 per cent of the mean.

Using this formula, n is calculated for every statement in the questionnaire, and the highest value of n so arrived at across all the statements is taken as the sample size for the study. For the customer questionnaire, the highest value obtained is 282, for a statement with a standard deviation of 1.671 and a mean of 3.9. For the employee questionnaire, the highest value of n is 286, for a statement with a standard deviation of 1.812 and a mean of 4.2.
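As a cross-check, the calculation can be reproduced in a few lines. The sketch below assumes, as the reported figures imply, a 95 per cent confidence level (Z = 1.96) and a 5 per cent relative error:

    def sample_size(sigma: float, mean: float, z: float = 1.96, rel_error: float = 0.05) -> float:
        """n = (Z * sigma / (e * mu))^2 for estimating a mean to within e*mu."""
        return (z * sigma / (rel_error * mean)) ** 2

    print(round(sample_size(1.671, 3.9)))  # 282, customer questionnaire
    print(round(sample_size(1.812, 4.2)))  # 286, employee questionnaire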

3.4.3 Sampling process


There are 47 banks operating in the city of Bangalore, including both public sector and private sector banks. Out of these, 10 banks are selected at random. The sample of ten banks comprises six public sector banks, one old private sector bank and three new private sector banks. The selected banks have a significant presence in Bangalore, a cosmopolitan city and the IT capital of India. Moreover, these banks are key players from both the public and private sectors, so the sample can reasonably be taken to represent the Indian banking sector. Each bank has many branches in the city. From each selected bank, five branches were selected at random using random numbers generated with a scientific calculator. The selected banks and the branches chosen from each are listed in Table 3.3.

Table 3.3: List of banks and branches chosen for the study
(For each bank: type of bank; number of branches in Bangalore city; the five selected branches)

1. Axis Bank (New private sector; 62 branches): Richmond Road, Chamarajapet, Kalyan Nagar, Basaveshwaranagar, RBI Layout
2. Bank of Baroda (Public sector; 38 branches): Rajarajeshwarinagar, Banashankari, Indiranagar, Brigade Road, BTM Layout
3. Canara Bank (Public sector; 235 branches): Whitefield, Marutinagar, Rajajinagar III Block, Jalahalli, R T Nagar
4. HDFC Bank (New private sector; 160 branches): Nagavara, Amruthahalli, Nagarbhavi, Kengeri, Domlur
5. ICICI Bank (New private sector; 130 branches): HRBR Layout, Malleswaram, J P Nagar, Richards Town, Hebbal
6. Indian Overseas Bank (Public sector; 60 branches): Padmanabhanagar, Sahakaranagar, Mahalakshmi Layout, Sivan Chetty Gardens, Basavanagudi
7. South Indian Bank (Old private sector; 21 branches): Peenya, Cox Town, Kengeri Satellite Town, Bangalore City branch, Brigade Road
8. State Bank of India (Public sector; 200 branches): Arekere, Bannerghatta, Kasturi Nagar, Rajajinagar V Block, Yeshwanthpur
9. State Bank of Mysore (Public sector; 180 branches): Bhashyam Circle, Seegehalli, RPC Layout, Mathikere, Chandra Layout
10. Syndicate Bank (Public sector; 124 branches): Gandhinagar, Frazer Town, Vasanthnagar, Sheshadri Road, Vidyaranyapura

93
The researcher requested the branch manager of each selected branch to provide lists of customers and employees, and the branch managers provided representative lists under each category. From each list, six employees and six customers were selected at random so that the required sample size could be reached. The questionnaire was distributed to all the selected persons, and the researcher made frequent visits to the branches to collect the completed questionnaires. In spite of meticulous follow-up, a few people did not respond and their responses could not be collected. The final numbers of questionnaires collected are shown in Table 3.4.
Table 3.4: No. of questionnaires accepted for the study

Group       Size determined as per formula   Distributed   Received   Accepted
Customers   282                              300           291        288
Employees   286                              300           297        288

Expecting a certain amount of rejection, the researcher distributed 300 questionnaires to each group, in excess of the required sample size. After scrutinizing the responses received, the researcher dropped 12 responses in all, leaving final samples of 288 each for the customers and the employees. Against the 600 questionnaires distributed, the overall shortfall of 24 (non-responses plus rejected responses) works out to 4 per cent, which is quite meagre and well within the expected rejection rate of 10 per cent.

3.5 CONFIRMATORY ANALYSIS


Confirmatory factor analysis is a common method of confirming the consistency of the data within the constructs. It is a multivariate statistical procedure used to test how well the measured variables represent the constructs; the confirmatory model is also termed the measurement model. The survey questionnaires contain many constructs, each with a set of observed variables, and it is important to verify whether the set of observed variables for a construct is relevant to that construct. The researcher has verified the consistency of the observed variables in all the constructs.
The confirmatory factor analysis for two constructs of the customer questionnaire is displayed in Figure 3.1.

Fig. 3.1: Confirmatory Analysis for two constructs in the customer questionnaire

The measurement model for the two constructs Employee Behaviour and Trust and Loyalty is displayed in Figure 3.1 above. The construct ‘Employee Behaviour’ has five observed variables and the construct ‘Trust and Loyalty’ has four. The factor loadings for each construct are displayed in Figure 3.1 and are above the threshold level of 0.6 for both constructs, so it can be concluded that the constructs are adequately explained by their variables. The measurement model for two constructs of the employee questionnaire is displayed in Figure 3.2.

Fig. 3.2: Confirmatory Analysis in the employee questionnaire


The construct ‘Training Needs’ has five observed variables and the construct
‘Service Quality’ has six observed variables. The factor loadings in respect of each of the
constructs are shown in Figure 3.2. The factor loading for each of the variables is above the
threshold level of 0.6 for both the constructs. It can be concluded that the constructs are
adequately explained by the variables. The results of the confirmatory factor analysis in
respect of all other constructs have also shown acceptable factor loadings and are shown in
Tables 3.5, 3.8 and 3.11.
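The study specified and estimated these measurement models in IBM AMOS 20. Purely as an illustration of what such a model looks like in code, the sketch below specifies the same two customer constructs in Python's semopy package; the file name and the exact item labels are assumptions based on the construct descriptions above.

    import pandas as pd
    from semopy import Model, calc_stats

    # Measurement model in lavaan-style syntax: each construct is
    # defined by its observed questionnaire items.
    desc = """
    EB =~ EB1 + EB2 + EB3 + EB4 + EB5
    TL =~ TL1 + TL2 + TL3 + TL4
    """

    data = pd.read_csv("customer_responses.csv")  # hypothetical file of item scores
    model = Model(desc)
    model.fit(data)

    print(model.inspect())    # factor loadings and error variances
    print(calc_stats(model))  # fit indices such as chi-square, CFI, GFI, RMSEA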

3.6 ASSESSMENT OF THE MEASUREMENT MODELS FOR FINAL RELIABILITY AND VALIDITY

It is necessary to establish convergent and discriminant validity, as well as reliability, while doing a confirmatory factor analysis. Confirmatory factor analysis (CFA) is a statistical technique used to verify the factor structure of a set of observed variables; it allows the researcher to test the hypothesis that a relationship exists between the observed variables and their underlying latent constructs (Suhr, 2006). The factors have to demonstrate adequate validity and reliability. The following tools are employed for assessing the measurement model (a brief computational sketch follows the list): Composite Reliability (CR), Convergent Validity and Discriminant Validity.
1. Composite Reliability (CR) is a measure of the overall reliability of a construct. Its value varies between 0 and 1; values of 0.6 and above are acceptable, while values below 0.6 indicate a lack of internal consistency.
2. Convergent Validity requires that the items, i.e. the observed variables of a specific construct, converge or share a high proportion of variance with each other. According to Hair, Black, Babin and Anderson (2010), convergent validity issues indicate that the latent factor is not well explained by its observed variables. Malhotra et al. (2013) observe that AVE is a strict measure of convergent validity, even more conservative than CR. The researcher has used the Average Variance Extracted (AVE), calculated from the standardized factor loadings, to measure convergent validity. The threshold value of AVE is 0.5; a value above 0.5 indicates adequate convergence.
3. Discriminant Validity is the extent to which a construct is truly distinct from the other constructs. High discriminant validity indicates that a construct is unique and captures phenomena not represented by other constructs. If the discriminant validity examination does not yield the required results, it indicates that the variables correlate to a large extent with the variables of other constructs, i.e. the latent variable is better explained by other variables than by its own observed variables. The researcher has used the Fornell-Larcker criterion, a conservative method of assessing discriminant validity, which compares the square root of the AVE with the latent variable correlations: the square root of the AVE of each construct should be greater than its correlation with any other construct.
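All three criteria reduce to simple arithmetic on the standardized loadings and the latent correlation matrix. A minimal sketch follows; the loading values are illustrative rather than the study's own.

    import numpy as np

    def composite_reliability(loadings: np.ndarray) -> float:
        """CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)) for standardized loadings l."""
        s = loadings.sum()
        return s**2 / (s**2 + (1 - loadings**2).sum())

    def ave(loadings: np.ndarray) -> float:
        """Average Variance Extracted: mean of the squared standardized loadings."""
        return (loadings**2).mean()

    def fornell_larcker_ok(ave_values: np.ndarray, corr: np.ndarray) -> bool:
        """sqrt(AVE) of each construct must exceed its correlations with all others."""
        root_ave = np.sqrt(ave_values)
        off_diag = np.abs(corr - np.diag(np.diag(corr)))
        return all(root_ave[i] > off_diag[i].max() for i in range(len(root_ave)))

    l = np.array([0.60, 0.65, 0.67, 0.75, 0.74])  # illustrative loadings for one construct
    print(round(composite_reliability(l), 3), round(ave(l), 3))
    # fornell_larcker_ok takes the vector of construct AVEs and the
    # latent correlation matrix, as in Tables 3.6, 3.9 and 3.12.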

3.6.1 Confirmatory Analysis for Customer Perspective

Fig. 3.3: Confirmatory Analysis for Customer Perspective

Table 3.5 shows the final reliability and validity for the Customers’ research instrument.

Table 3.5: Final Reliability and Validity for Customers’ research instrument

Branch Ambience: Cronbach’s Alpha 0.727; AVE 0.509; Composite Reliability 0.759
  Factor loadings: BA1 0.47, BA3 0.69, BA4 0.71
Employee Behaviour: Cronbach’s Alpha 0.795; AVE 0.570; Composite Reliability 0.815
  Factor loadings: EB1 0.60, EB2 0.65, EB3 0.67, EB4 0.75, EB5 0.74
Product Satisfaction: Cronbach’s Alpha 0.813; AVE 0.611; Composite Reliability 0.843
  Factor loadings: PS1 0.51, PS2 0.42, PS3 0.67, PS4 0.75, PS5 0.73, PS6 0.46, PS7 0.76, PS8 0.72
Trust and Loyalty: Cronbach’s Alpha 0.797; AVE 0.626; Composite Reliability 0.782
  Factor loadings: TL1 0.76, TL2 0.68, TL3 0.58, TL4 0.73

From Table 3.5 it can be inferred that all the factor loadings are above the threshold level of 0.4, which establishes the item validity of the constructs. The researcher performed the reliability test again after the final data collection; the final Cronbach's alpha values are greater than 0.6, confirming the reliability of the variables used to measure the constructs. The Composite Reliability values are higher than 0.6, indicating that all the constructs have a high level of internal consistency reliability, and the AVE values are above the threshold of 0.5, so the four constructs show high levels of convergence. As all the parameters meet the prescribed values, the data is appropriate for further analysis and model building.

The discriminant validity for the Customers’ research instrument is displayed in Table 3.6.
Table 3.6: Discriminant Validity for Customers’ research instrument
EB BA PS TL
EB (0.754)
BA 0.606 (0.713)
PS 0.683 0.626 (0.781)
TL 0.625 0.601 0.632 (0.791)

Table 3.6 displays the latent variable correlations, with the square roots of the AVE scores shown in brackets on the diagonal. For discriminant validity, the square root of the AVE of each construct should exceed its correlations with the other constructs. Since this holds for all four constructs, discriminant validity for the Customers’ research instrument is established.
Table 3.7: Model fit indices for Customer Perspective model

Index        Study model   Recommended value
Chi-square   367.225       -
CMIN/DF      2.295         1 to 4 (acceptable fit)
P-Value      0.000         Greater than 0.05
GFI          0.889         Greater than 0.9
AGFI         0.854         Greater than 0.9
CFI          0.916         Greater than 0.9
RMSEA        0.064         Less than 0.08

Table 3.7 presents the CFA model fit indices used to assess the overall model fit. The ratio of chi-square to degrees of freedom for an acceptable model should be less than 4; here the value is 2.295, well within the suggested maximum. The RMSEA score is 0.064, below the accepted threshold of 0.08 (Hair et al., 2010). Moreover, the GFI and AGFI values are close to 0.9 and the CFI is above 0.9, where 1.0 indicates exact fit (Schreiber et al., 2006). Thus, the model is a good fit and can be considered for further analysis.

3.6.2 Confirmatory Analysis for Learning and Growth Perspective

Fig. 3.4: Confirmatory Analysis for Learning and Growth Perspective

Table 3.8 shows the final reliability and validity for the employees’ research instrument – Learning and Growth perspective.

Table 3.8: Final Reliability and Validity for employee research instrument – Learning and Growth perspective

Training Needs: Cronbach’s Alpha 0.864; AVE 0.576; Composite Reliability 0.870
  Factor loadings: TN1 0.61, TN2 0.70, TN3 0.75, TN4 0.88, TN5 0.82
Organisational support for training: Cronbach’s Alpha 0.843; AVE 0.558; Composite Reliability 0.862
  Factor loadings: OS1 0.77, OS2 0.72, OS3 0.63, OS4 0.82, OS5 0.68
Career opportunities: Cronbach’s Alpha 0.847; AVE 0.608; Composite Reliability 0.861
  Factor loadings: CO1 0.76, CO2 0.73, CO3 0.79, CO4 0.84
Rewards and Recognition: Cronbach’s Alpha 0.856; AVE 0.581; Composite Reliability 0.873
  Factor loadings: RR1 0.85, RR2 0.76, RR3 0.83, RR4 0.65, RR5 0.67
Employee Empowerment: Cronbach’s Alpha 0.789; AVE 0.520; Composite Reliability 0.790
  Factor loadings: EE1 0.51, EE2 0.79, EE3 0.72, EE4 0.67
Vision and Mission Alignment: Cronbach’s Alpha 0.834; AVE 0.566; Composite Reliability 0.838
  Factor loadings: VM1 0.77, VM2 0.82, VM3 0.74, VM4 0.67

From Table 3.8 it can be inferred that all the factor loadings are above the threshold level of 0.4, which establishes the item validity of the constructs. The Cronbach's alpha values for the final data are greater than 0.6, which establishes the reliability of the constructs. The Composite Reliability values are higher than 0.6, indicating that all the constructs have a high level of internal consistency reliability, and the AVE values are above the threshold of 0.5. Thus, it can be inferred that the six constructs have high levels of convergence and the data is appropriate for further analysis and model building. The discriminant validity for the employees' research instrument (Learning and Growth perspective) is displayed in Table 3.9.
Table 3.9: Discriminant Validity for employee research instrument – Learning and Growth perspective

      CO       TN       OS       RR       VM       EE
CO  (0.780)
TN   0.712  (0.759)
OS   0.717   0.714  (0.747)
RR   0.747   0.644   0.721  (0.762)
VM   0.673   0.469   0.647   0.653  (0.752)
EE   0.675   0.593   0.683   0.617   0.682  (0.721)

Table 3.9 displays the latent variable correlations, with the square roots of the AVE scores shown in brackets on the diagonal. Since the square root of the AVE of each construct exceeds its correlations with the other constructs, discriminant validity for the employees' research instrument (Learning and Growth perspective) is established.
Table 3.10: Model fit indices for Learning and Growth Perspective model

Index        Study model   Recommended value
Chi-square   713.980       -
CMIN/DF      2.356         1 to 4 (acceptable fit)
P-Value      0.000         Greater than 0.05
GFI          0.847         Greater than 0.9
AGFI         0.809         Greater than 0.9
CFI          0.916         Greater than 0.9
RMSEA        0.069         Less than 0.08

Table 3.10 presents the CFA model fit indices used to assess the overall model fit. The CMIN/DF value is 2.356, well within the suggested maximum of 4. The RMSEA score is 0.069, below the threshold of 0.08, and the CFI is above 0.9. All the fit values are within the acceptable range except GFI and AGFI, which fall slightly short of 0.9 but are fairly acceptable, as the benchmark for these indices is close to 1. Thus, the model can be taken to be a good fit and can be considered for further analysis.

3.6.3 Confirmatory factor Analysis for Internal Process Perspective


Fig. 3.5: Confirmatory factor Analysis for Internal Process Perspective

Table 3.11 shows the final reliability and validity for the employees’ research instrument – Internal Process perspective.

Table 3.11: Final Reliability and Validity for employee research instrument – Internal Process perspective

Service Quality: Cronbach’s Alpha 0.845; AVE 0.522; Composite Reliability 0.868
  Factor loadings: SQ1 0.73, SQ2 0.36, SQ3 0.67, SQ4 0.70, SQ5 0.74, SQ6 0.74
Product Quality: Cronbach’s Alpha 0.862; AVE 0.596; Composite Reliability 0.881
  Factor loadings: PQ1 0.77, PQ2 0.75, PQ3 0.77, PQ4 0.74, PQ5 0.82
Product Innovation: Cronbach’s Alpha 0.823; AVE 0.576; Composite Reliability 0.844
  Factor loadings: PI1 0.80, PI2 0.73, PI3 0.75, PI4 0.75
Risk Management and Compliance: Cronbach’s Alpha 0.876; AVE 0.553; Composite Reliability 0.895
  Factor loadings: RM1 0.63, RM2 0.81, RM3 0.68, RM4 0.75, RM5 0.86, RM6 0.74, RM7 0.64

It can be observed from the above table that the factor loadings are greater than the 0.4 threshold (with the marginal exception of SQ2 at 0.36), which establishes the item validity of the constructs. The Cronbach's alpha, AVE and composite reliability of each construct should exceed 0.6, 0.5 and 0.6 respectively. Table 3.11 displays the final reliability and validity of all the constructs in the research instrument for the Internal Process perspective, and the results indicate that all the constructs meet the prescribed minimum values. It can thus be concluded that the data is appropriate for further analysis and model building. The discriminant validity for the employees' research instrument (Internal Process perspective) is displayed in Table 3.12.
Table 3.12: Discriminant Validity for employee research instrument –
Internal Process perspective
PQ SQ PI RM
PQ (0.772)
SQ 0.667 (0.722)
PI 0.647 0.658 (0.759)
RM 0.682 0.659 0.692 (0.743)

Table 3.12 displays the results of the discriminant validity test: the latent variable correlations, with the square roots of the AVE scores shown in brackets on the diagonal. Since the square root of the AVE of each construct exceeds its correlations with the other constructs, the requirements of discriminant validity for the employees' research instrument (Internal Process perspective) are met.
Table 3.13: Model fit indices for Internal Process Perspective model

Index        Study model   Recommended value
Chi-square   405.085       -
CMIN/DF      2.067         1 to 4 (acceptable fit)
P-Value      0.000         Greater than 0.05
GFI          0.890         Greater than 0.9
AGFI         0.859         Greater than 0.9
CFI          0.946         Greater than 0.9
RMSEA        0.061         Less than 0.08

Table 3.13 presents the CFA model fit indices used to assess the overall model fit. The CMIN/DF value is 2.067, well within the suggested maximum of 4. The RMSEA score is 0.061, below the threshold of 0.08, the GFI and AGFI values are close to 0.9, and the CFI is above 0.9. All the fit values are within the acceptable range except GFI and AGFI, which are fairly acceptable as the benchmark for these indices is close to 1. The model can be taken to be a good fit and can be considered for further analysis.

3.7 DATA CLEANING
Before performing any analysis, the researcher has to ensure that the collected data is in proper condition. Three processes are adopted to test the data for abnormalities:
1. Missing value analysis
2. Unengaged responses
3. Outliers

3.7.1 Missing value analysis


Missing value analysis is performed to find whether any fields in the collected data have missing values. Care has to be taken that there are no missing values, as they may lead to misleading results and reduce the precision of the analysis. If a response has any missing values, it has to be rejected. The researcher has checked the data collected and found no missing values.

3.7.2 Unengaged responses


Some respondents may not apply their minds and may answer the questionnaire in a routine manner; such data may not yield accurate results. To eliminate this possibility, the standard deviation of the responses of each respondent is calculated. If the standard deviation is less than 0.3, the response is treated as unengaged and deleted from the data set. The researcher has calculated the standard deviation for each response obtained and found no unengaged responses in the data set.
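The check amounts to a single row-wise statistic. A minimal sketch, with a hypothetical file and a layout of one Likert item per column:

    import pandas as pd

    responses = pd.read_csv("responses.csv")   # hypothetical file of item scores
    row_sd = responses.std(axis=1, ddof=1)     # standard deviation of each respondent's answers
    engaged = responses[row_sd >= 0.3]         # drop rows answered in a flat, routine manner
    print(f"dropped {len(responses) - len(engaged)} unengaged responses")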

3.7.3 Outliers
Outliers in statistical analyses are extreme values that do not seem to fit with the
majority of responses in a data set. If these are not removed, they can have a large effect on
any conclusions that might be drawn from the data. Extreme values may cause distortion in
the calculation of mean values. The data obtained from the respondents is checked for outliers using the Box Plot feature in SPSS. The outputs in Figure 3.6 are obtained when a demographic variable is plotted against satisfaction with the training received in the bank.

Fig. 3.6: Test for outliers

It can be observed from the above figures that no outliers were detected in the data collected. Similar tests were performed for the other variables as well.
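Outside SPSS, the same box-plot rule, flagging points beyond 1.5 times the interquartile range from the quartiles, can be applied directly. A minimal sketch on illustrative scores:

    import numpy as np

    def iqr_outliers(x: np.ndarray) -> np.ndarray:
        """Values lying beyond 1.5 * IQR from the quartiles (the box-plot rule)."""
        q1, q3 = np.percentile(x, [25, 75])
        iqr = q3 - q1
        return x[(x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)]

    scores = np.array([3, 4, 4, 5, 3, 4, 2, 4, 5, 3])  # illustrative Likert responses
    print(iqr_outliers(scores))  # empty array: no outliers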

3.8 TESTING THE ASSUMPTIONS


The collected data has to be tested for certain assumptions to ensure that it is fit for the application of a given tool; if an assumption fails, the data cannot be used for that analysis. The data has to be properly prepared and thoroughly examined to minimize measurement error and maximize its validity and reliability. The data is verified for normality, homogeneity, linearity and multi-collinearity. All these assumptions are necessary for structural equation modelling: the test of homogeneity is required for the ANOVA test, the linearity and multi-collinearity tests for regression analysis, and the test of normality for all types of analysis. Since all the variables in the questionnaires are constructs, no simple regression analysis was conducted; structural equation modelling was adopted to examine the multiple regression relationships.

3.8.1 Normality
It is an essential prerequisite for many statistical tests that the data follow a normal distribution; the validity of the tests performed depends on it. If the data is not normal, the test results become unreliable. The Q-Q plot and histogram methods have been adopted to check the normality of the data.

3.8.1.1 Q-Q Plot


The normal Q-Q plot is one of the most popular techniques for testing normality. The researcher has applied the Q-Q plot to all variables in both questionnaires. The Q-Q plots for one variable from the customer questionnaire and one from the employee questionnaire are shown in Figure 3.7 below.

Fig. 3.7: Test of Normality of data

Figure 3.7 portrays the Q-Q plots for factors from the customer and employee questionnaires. The data is said to be normally distributed when the points lie on or close to the line. Since the points in the figure lie close to the line, it can be concluded that the data is normally distributed. The researcher performed this test for all the constructs, and the Q-Q plots for the other variables likewise indicated normality.
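The same visual check can be produced outside SPSS; the sketch below draws a Q-Q plot with SciPy on an illustrative sample:

    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    sample = np.random.default_rng(0).normal(loc=3.9, scale=1.0, size=288)  # illustrative scores
    stats.probplot(sample, dist="norm", plot=plt)  # Q-Q plot against the normal distribution
    plt.show()  # points close to the reference line indicate normality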

3.8.1.2 Histogram test for normality


Normality can also be tested using a histogram. The frequency distribution, which plots the observed values against their frequencies, provides a visual judgment of whether the distribution is bell shaped. Figure 3.8 shows the histograms for the variable “The bank charges for services provided by the bank are reasonable” from the customer questionnaire and “Our mobile banking services are effective and customer friendly” from the employee questionnaire. The histograms present a normal curve; hence, the data can be assumed to be normal.

Fig. 3.8: Histogram showing normality of data

3.8.2 Homogeneity
Test of homogeneity of data is a prerequisite in applying statistical tools like
ANOVA. The researcher has used Levene statistic to check homogeneity of the data
collected. The null hypothesis of this test is that the variances are equal. For this test, it is
necessary to use one set of factor variable and another set of dependent variable. The
demographic variables are loaded as factor variables while the construct variables are loaded
as dependent variables. The value of Levene statistic should be above 0.5 and the significant
value must be greater than 0.05, when it can be said that the variances are equal within the
population distribution.
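Outside SPSS, the same test is available in SciPy. A minimal sketch, grouping an illustrative item score by an illustrative demographic factor:

    from scipy import stats

    # Illustrative item scores grouped by a demographic factor
    graduates     = [4, 5, 3, 4, 4, 5, 3]
    postgraduates = [3, 4, 4, 5, 4, 3, 4]
    others        = [5, 4, 4, 3, 4, 4, 5]

    stat, p = stats.levene(graduates, postgraduates, others, center="mean")
    print(f"Levene statistic = {stat:.3f}, significance = {p:.3f}")
    # p > 0.05: retain the null hypothesis of equal variances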
Educational qualification of the customers has been taken as the demographic
variable for testing the homogeneity of the customer questionnaire. The results have been
portrayed in Table 3.14.
Table 3.14: Results of the test of homogeneity of variance for the customer questionnaire

Branch Ambience Variables                            Levene Statistic   Significance
The branch has a clean and pleasant environment      0.630              0.596
Ample parking facility is available                  2.361              0.072
The counters are easily available                    1.806              0.146
I feel ease and comfort when I deal with this Bank   1.264              0.287

From the results in Table 3.14, it can be seen that the significance level for all the variables is greater than 0.05. Thus, the null hypothesis is accepted and it is concluded that the variances are equal, which establishes the homogeneity of variance.
For testing the homogeneity of the employee questionnaire, designation of the
employees has been taken as the demographic variable. The results are shown in Table 3.15.
Table 3.15: Results of the test of homogeneity of variance for the employee questionnaire

Training Needs Variables                                                        Levene Statistic   Significance
I have ample learning/training opportunities in the bank                        1.313              0.271
The training I receive is relevant and applicable to my immediate job           0.378              0.685
The bank constantly updates my knowledge about existing and new bank products   0.022              0.978
The bank provides opportunities to develop new job skills                       0.287              0.751
I am satisfied with the training I have received in the bank                    1.249              0.288

From Table 3.15 it can be inferred that the significance value is greater than 0.05 for all the variables, suggesting homogeneity in the data. Therefore, the null hypothesis is accepted and it can be concluded that the variances are equal within the population distribution.

3.8.3 Linearity
Linearity is an important assumption in regression analysis; it refers to the linear relationship between the dependent and independent variables, which can be observed in a scatter diagram. Here, for the customer questionnaire, customer perception is taken as the dependent variable and Product Satisfaction as the independent variable; for the employee questionnaire, employee perception is the dependent variable and Career Opportunities the independent variable. Figure 3.9 shows the results of the test of linearity.

Fig. 3.9: Linearity test

A straight line is fitted in the diagrams above, and the points in the scatter diagrams should follow it for linearity to hold. From Figure 3.9, since the points in the scatter diagrams follow the fitted line, it can be established that a linear relationship exists between the variables chosen for the analysis.

3.8.4 Multi-collinearity
This is a phenomenon in which two or more predictor variables in a multiple
regression model are highly correlated, meaning that one can be linearly predicted from the
others with a substantial degree of accuracy. Multi-collinearity can be assessed by examining
Tolerance and Variance Inflation factor (VIF). The values of Tolerance range between 0 and
1. Any value of Tolerance over 0.7 is accepted as a good value. The VIF has a lower bound
of 1 but no upper bound. Values of VIF that exceed 3 are often regarded as indicating multi-
collinearity.
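In Python, the same diagnostics are available in statsmodels; the file of predictor columns in the sketch below is hypothetical:

    import pandas as pd
    from statsmodels.stats.outliers_influence import variance_inflation_factor
    from statsmodels.tools import add_constant

    X = pd.read_csv("branch_ambience_items.csv")  # hypothetical file of predictor columns
    Xc = add_constant(X)                          # VIF is computed on a model with an intercept

    for i, col in enumerate(Xc.columns[1:], start=1):  # skip the constant itself
        vif = variance_inflation_factor(Xc.values, i)
        print(f"{col}: VIF = {vif:.3f}, Tolerance = {1 / vif:.3f}")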
From the customer questionnaire, the Branch Ambience variables are taken as the independent variables and Trust and Loyalty as the dependent variable. It can be seen from Table 3.16 that the VIF values are less than 2 and the Tolerance values are above the required level of 0.7. Hence, multi-collinearity in the data is ruled out.

Table 3.16: Results of Multi-collinearity test for Customer’s research instrument
Branch Ambience variables Tolerance VIF
The branch has a clean and pleasant environment 0.791 1.265
Ample parking facility is available 0.805 1.243
The counters are easily available 0.761 1.514
I feel ease and comfort when I deal with this Bank 0.728 1.373

Similar results are obtained in the employee data set. Table 3.17 shows the results of the test.
Table 3.17: Results of Multi-collinearity test for employees’ research instrument

Employee Empowerment variables                                         Tolerance   VIF
Decentralisation of responsibilities is practised in my organisation   0.705       1.682
I like the level of responsibility I am given in my work               0.731       1.883
My suggestions are encouraged by the bank                              0.719       1.855
I like to take up new challenges in my job                             0.760       1.316

3.9 METHODOLOGY FOR FINANCIAL PERFORMANCE MEASUREMENT


For the purposes of measuring the financial performance of the banks using CAMEL analysis, AHP analysis and the Balanced Scorecard, secondary data was collected from the annual reports published by the banks chosen for the study and from Moneycontrol.com for time-series data on the banks.

3.9.1 Time Period of the Study


Secondary data has been collected for a period of seven years, from 2008-09 to 2014-15, for the purpose of CAMEL analysis. These years were chosen because data on capital adequacy under the BASEL II norms is available only from 2008 onwards.

3.9.2 Sample unit


The commercial banks in India from both the public sector and the private sector are
considered as the sample unit.

3.9.3 Sample size


A total of 10 banks already identified on a random basis for the collection of primary
data have been chosen for the study of financial performance.

3.9.4 Method of analysis
The secondary data collected has been used to analyse the financial performance of
the banks using the CAMEL method, the AHP method and also for the Financial Perspective
of the Balanced Scorecard. The CAMEL analysis only measures financial performance
based on the data available. The AHP also uses the financial data with some amount of
subjectivity from the researcher for arriving at priority rankings.

3.10 MEASUREMENT UNDER THE CAMEL METHOD


CAMEL is an acronym for five parameters of bank performance analysis: Capital Adequacy, Asset Quality, Management Efficiency, Earnings Quality and Liquidity. Each parameter is assessed through a set of ratios. Table 3.18 gives the ratios used under each parameter, along with a description and the formula for each ratio. The banks are ranked on their performance under each ratio; these ranks are consolidated into a rank under each parameter, and finally the parameter ranks are consolidated to give each bank a final rank.
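The two-stage consolidation of ranks can be sketched with pandas; the banks, ratios and values below are illustrative, not the study's figures:

    import pandas as pd

    # Illustrative values for two ratios under one parameter (Capital Adequacy)
    ratios = pd.DataFrame(
        {"CAR": [13.2, 11.8, 12.5], "Debt-Equity": [0.9, 1.4, 1.1]},
        index=["Bank A", "Bank B", "Bank C"],
    )

    ranks = pd.DataFrame({
        "CAR": ratios["CAR"].rank(ascending=False),                 # higher CAR is better
        "Debt-Equity": ratios["Debt-Equity"].rank(ascending=True),  # lower is better
    })

    parameter_rank = ranks.mean(axis=1).rank()  # consolidated rank under the parameter
    print(parameter_rank)
    # The final CAMEL rank consolidates the five parameter ranks the same way.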
Table 3.18: CAMEL parameters, ratios used, descriptions and formulae

Capital Adequacy

Capital Adequacy Ratio (CAR): the ratio of capital to risk-weighted assets. Risk weighting adjusts the value of an asset (loan) for risk by multiplying it by a factor that reflects its risk.
  Formula: (Tier I capital + Tier II capital) / Risk weighted assets
Debt-Equity Ratio: the ratio of total outside borrowings to net worth; deposits have not been considered as borrowings.
  Formula: Total outside borrowings / Shareholders' net worth
Total Advances to Total Assets: indicates the proportion of advances to total assets.
  Formula: (Total Advances / Total Assets) x 100
Government Securities to Total Investments: indicates the level of investment in government securities as compared with the total investments in other securities.
  Formula: (Investment in govt. securities / Total investments) x 100

Asset Quality

Net NPA to Net Advances Ratio: a measure of the overall quality of the bank's loan book; non-performing assets cease to generate income for the bank.
  Formula: (Net NPA / Net Advances) x 100
Total Investments to Total Assets Ratio: banks deploy funds into investments to reduce the risk of their loans becoming non-performing assets; these funds are locked up and cannot be lent by the bank.
  Formula: (Total Investments / Total Assets) x 100
Net NPA to Total Assets Ratio: indicates the efficiency of banks in assessing credit risk and recovering debts.
  Formula: (Net NPA / Total Assets) x 100
Percentage change in Net NPAs: helps to study the trend in NPAs over the years.
  Formula: (Change in NPA / NPA of previous year) x 100

Management Efficiency

Business per Employee: expresses the efficiency and productivity of the human resources in garnering business for the bank.
  Formula: Total Business / No. of Employees
Profit per Employee: indicates the productivity and efficiency of the employees in improving business and maximizing profitability.
  Formula: Net Profit / No. of Employees
Credit Deposit Ratio: indicates the ability of a bank to make optimal use of deposits, which are low-cost funds, to maximize profits.
  Formula: (Total Advances / Total Deposits) x 100
Return on Net Worth (RONW): a measure of profitability.
  Formula: (Net Profit / Net Worth) x 100

Earnings Quality

Net Profit to Total Assets Ratio: indicates profitability and the efficiency of utilization of assets to produce profits during a period.
  Formula: (Net Profit / Total Assets) x 100
Net Interest Income to Total Assets Ratio: indicates the earning capacity of the bank and its ability to lend its resources to earn interest income.
  Formula: (Net Interest Income / Total Assets) x 100
Operating Profit to Total Assets Ratio: a measure of operating efficiency; it measures the revenue left after paying the operating costs.
  Formula: (Operating Profit / Total Assets) x 100
Interest Income to Total Income Ratio: measures the interest income generated from the bank's core activity of lending as a proportion of the total income earned by the bank.
  Formula: (Interest Income / Total Income) x 100

Liquidity

Liquid Assets to Total Assets Ratio: measures the liquidity position of the bank and its readiness to meet its financial obligations.
  Formula: (Liquid Assets / Total Assets) x 100
Liquid Assets to Total Deposits Ratio: indicates how efficient a bank is in meeting unexpected deposit withdrawals by its customers, by comparing liquid assets with total deposits.
  Formula: (Liquid Assets / Total Deposits) x 100
Liquid Assets to Demand Deposits Ratio: reflects the ability of a bank to meet the demands of depositors, for which the bank has to keep its funds in a liquid form; demand deposits are withdrawable on demand, so the bank has to be prepared to meet these obligations at all times.
  Formula: (Liquid Assets / Demand Deposits) x 100
Government Securities to Total Assets Ratio: banks invest in government securities to meet their statutory requirements; government securities are the most liquid and safest among the different forms of investment.
  Formula: (Investment in Govt. Securities / Total Assets) x 100

3.11 MEASUREMENT UNDER THE AHP METHOD


The Analytic Hierarchy Process (AHP) is designed to solve complex multi-criteria decision problems. AHP requires the decision maker to provide judgments about the relative importance of each criterion and then to specify a preference for each decision alternative on each criterion. The output of AHP is a prioritized ranking of the decision alternatives based on the overall preferences expressed by the decision maker.
AHP analysis involves the following steps (a numerical sketch follows the list):
1. Developing the hierarchy
2. Pairwise comparison of the criteria
3. Calculation of the priority of each criterion
4. Calculation of the Consistency Index and Consistency Ratio
5. Calculation of the priority of each bank under each criterion
6. Ranking the banks by their overall priority values
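A minimal numerical sketch of steps 2 to 4, for an illustrative 3 x 3 pairwise comparison matrix with invented judgments:

    import numpy as np

    # Step 2: pairwise comparison matrix on Saaty's 1-9 scale (illustrative judgments)
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    n = A.shape[0]

    # Step 3: priority vector = principal eigenvector of A, normalized to sum to 1
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    priorities = eigvecs[:, k].real
    priorities /= priorities.sum()

    # Step 4: Consistency Index and Consistency Ratio
    lambda_max = eigvals.real[k]
    ci = (lambda_max - n) / (n - 1)
    ri = 0.58                        # Saaty's random index for n = 3
    cr = ci / ri
    print(priorities, round(cr, 3))  # CR below 0.1 indicates acceptable consistency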

3.12 MEASUREMENT OF THE FINANCIAL PERSPECTIVE UNDER THE
BALANCED SCORECARD METHOD
For measurement under the Financial Perspective, the data has been taken from the annual reports published by the banks. For each bank, the seven-year average of each CAMEL ratio has been calculated, and these averages have been converted to a five-point scale based on the range of the averages obtained. The conversion to a five-point scale provides homogeneity of analysis with the other perspectives, which were also measured on a five-point scale. The average of these scaled values for each bank gives its score for the Financial Perspective. A pictorial representation of the conversion of one criterion of the CAMEL analysis to a five-point scale is given in Figure 3.10.

Fig. 3.10: Process of conversion to scale data
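A minimal sketch of the conversion, assuming five equal-width bins across the observed range of the seven-year averages; the actual cut-offs used in the study are those shown in Figure 3.10:

    import numpy as np

    def to_five_point(averages: np.ndarray) -> np.ndarray:
        """Map each bank's seven-year ratio average onto a 1-5 scale
        using five equal-width bins spanning the observed range."""
        edges = np.linspace(averages.min(), averages.max(), 6)  # 5 bins -> 6 edges
        return np.digitize(averages, edges[1:-1]) + 1           # interior edges only

    car_averages = np.array([11.2, 12.0, 12.8, 13.5, 14.9])  # illustrative CAR averages
    print(to_five_point(car_averages))  # [1 2 3 4 5]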

3.13 RESEARCH TOOLS AND SOFTWARE PACKAGE USED


Various statistical tools have been used for analyzing the collected data. Statistical software and packages such as Microsoft Excel, SPSS 20, IBM AMOS 20, STATA 10 and EViews 7 have been used by the researcher for this study. The statistical techniques used for analyzing the data are as follows:
1. Frequency analysis
2. Simple Mean
3. Correlation analysis
4. Analysis of Variance (ANOVA)
5. Chi-Square test
6. Correspondence analysis
7. Cluster analysis
8. Discriminant analysis
9. Canonical correlation
10. Sobel statistic
11. Structural Equation Modeling (SEM) for Confirmatory Factor analysis and Path analysis

3.14 EXPLANATION OF THE DIFFERENT ANALYSES USED

3.14.1 Frequency analysis


Frequency analysis has been used for demographic variables to understand the nature
of the respondents. It has also been used to understand the nature of savings and borrowings
and the kinds of accounts that the customer is maintaining with the bank.

3.14.2 Simple Mean


The simple mean is a measure of central tendency. Both Microsoft Excel and SPSS 20 have been used to calculate simple means. The technique has been used to calculate the averages of bank performance in the secondary data, where the average over the seven years under observation is required for ranking the banks. Simple means have also been used to rank the variables within each construct.

3.14.3 Correlation analysis


This analysis is used to examine the direction and strength of the relationship between two variables. SPSS 20 has been used to conduct it. The researcher has used this analysis to understand the relationships among the CAMEL variables used in the secondary analysis.

3.14.4 Analysis of Variance (ANOVA)


ANOVA explores the existence of differences among group means and is generally used when there are more than two groups. It has been calculated using SPSS 20 and has been used to examine the relationships between the demographic variables and the observed variables in the customer and employee questionnaires.

3.14.5 Chi-Square test


This test is mainly used for hypothesis testing; the researcher has used SPSS 20 for this purpose. The chi-square test is used to discover whether there is a relationship between two categorical variables. It has been used here to find the association between cluster memberships and the demographic profiles of the respondents.

3.14.6 Correspondence analysis


The chi-square test is usually followed by correspondence analysis, which shows the association of two sets of variables in a two-dimensional representation. Correspondence analysis is a descriptive, exploratory technique designed to analyse simple two-way and multi-way tables containing some measure of correspondence between the rows and columns, and it provides a graphical representation of cross-tabulations.

3.14.7 Cluster analysis


This analysis is used to categorise objects into groups that are internally similar but distinct from one another. It is an exploratory technique used to group objects. In this study, cluster analysis has been applied in several places to categorise customers and employees on the basis of their perceptions. SPSS 20 has been used to conduct the analysis, with both the k-means and two-step cluster methods; the two-step method was used first to determine the number of clusters automatically.

3.14.8 Discriminant analysis


Discriminant analysis is used to separate two or more groups of objects. Its purpose is to obtain a model that predicts a single qualitative variable from one or more independent variables. It derives an equation, a linear combination of the independent variables known as the discriminant function, that best discriminates between the groups of the dependent variable. The weights assigned to each independent variable, referred to as discriminant coefficients, are corrected for the interrelationships among all the variables. The number of functions obtained is always one less than the number of groups in the dependent variable. The discriminant function coefficients are partial coefficients that reflect the unique contribution of each variable to the classification of the groups.
This analysis has been applied after the cluster analysis: cluster analysis separates the data into different clusters, and discriminant analysis establishes the reliability of that separation. The analysis has been performed using SPSS 20.

3.14.9 Canonical Correlation


Canonical correlation is an extension of correlation analysis: it discloses linear relationships between two sets of variables rather than between two single variables. STATA 10 has been used to perform it. The study has used canonical correlation to measure the association between the customer and employee perceptions and their demographic characteristics.

3.14.10 Sobel statistic
The Sobel test is a method of testing the significance of a mediation effect. It examines the reduction in the effect of the independent variable on the dependent variable after a mediator is included in the model; if the reduction is significant, the mediation effect is statistically significant.
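A minimal sketch of the Sobel statistic, computed from the two paths of the mediation model (a: independent variable to mediator; b: mediator to dependent variable) and their standard errors; the estimates shown are illustrative:

    import math
    from scipy.stats import norm

    def sobel(a: float, se_a: float, b: float, se_b: float):
        """Sobel z = a*b / sqrt(b^2*se_a^2 + a^2*se_b^2), with its two-tailed p-value."""
        z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
        p = 2 * (1 - norm.cdf(abs(z)))
        return z, p

    z, p = sobel(a=0.42, se_a=0.09, b=0.35, se_b=0.08)  # illustrative path estimates
    print(f"z = {z:.3f}, p = {p:.4f}")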

3.14.11 Structural Equation Modeling


Structural equation modelling is a multivariate statistical technique used to analyse the structural relationships between measured variables and latent constructs. SEM is largely a confirmatory technique. It has been employed here for confirmatory factor analysis (CFA), which ensures the consistency of the variables within each construct, and for path analysis, which measures the causal relationships between constructs. The researcher has used SEM to evaluate the reliability and validity of the constructs and to construct three models: a Customer Satisfaction model, an Employee Satisfaction model and a Business Excellence model. IBM AMOS 20 has been used for performing SEM.

