Chapter 14
Analysis of Variance (Independent Measures)

Objectives

After completing this chapter you will be able to:

• Recognise situations requiring the comparison of more than two means or populations.
• Get acquainted with the concepts and components of analysis of variance (ANOVA).
• Understand the logic of ANOVA.
• Introduce the F distribution and learn how to use it in statistical inferences.
• Compare more than two population means using ANOVA.
• Compute and test the significance of the F ratio in one-way and two-way analysis of variance for independent measures.
• Describe the assumptions underlying the ANOVA.
• Get acquainted with different types of transformation of data.

14.1 Introduction

In Chapters 11 and 12, we learned how to test hypotheses using data from one or two samples, respectively. We used one-sample tests (Chapter 11) to determine whether a mean, median, standard deviation/variance, quartile deviation, percentage/proportion or a correlation coefficient was significantly different from a hypothesised value. In the two-sample tests (Chapter 12), we examined the difference between two means, two medians, two standard deviations/variances, two correlation coefficients or two proportions/percentages, and we tested whether the difference was significant.

Suppose we have means from three or more populations instead of only two. In this case we cannot apply the methods introduced in Chapter 12, because they are limited to testing the equality of only two means. The analysis of variance (ANOVA) will enable us to test whether more than two population means can be considered equal. In this chapter, we will learn a statistical technique known as ANOVA that will enable us to test for the significance of the differences among more than two sample means. Using ANOVA, we will be able to make inferences about whether our samples are drawn from populations having the same mean.

14.2 Nature and Purpose

The ANOVA was developed by Ronald A. Fisher, a renowned British statistician, and reported by him in 1923. The name F was given to the test statistic by Snedecor (1946) in Fisher's honour. Since that time it has found wide application in many areas of experimentation; its early application was in the field of agriculture. This device has made a tremendous contribution to the designing of experiments and their statistical analysis.

The ANOVA deals with variances rather than with standard deviations or standard errors. It is a method for dividing the variation observed in experimental data into different parts, each part assignable to a known source, cause or factor. We may assess the relative magnitude of variation resulting from different sources and ascertain whether a particular part of the variation is greater than expected under the null hypothesis. The ANOVA is inextricably associated with the design of experiments. Obviously, if we are to relate different parts of the variation to particular causal circumstances, experiments must be designed to permit this to occur in a logically rigorous fashion, because the main function of experimental design is to maximise systematic variance, control extraneous sources of variance and minimise error variance.

The ANOVA technique is useful in testing differences between two or more means. Its special merit lies in testing differences between all of the means at the same time. The ANOVA is a powerful aid to the researcher.
It helps the researcher design experiments efficiently and enables him/her to take account of interacting variables. It also aids in testing hypotheses. Thus, ANOVA is a hypothesis-testing procedure that is used to evaluate mean differences between two or more treatments (or populations). As with all inferential procedures, ANOVA uses sample data as the basis for drawing general conclusions about populations.

It may appear that the F- and t-tests are simply two different ways of doing exactly the same job: testing mean differences. In some respects this is true, for both tests use sample data to test hypotheses about population means. However, the F-test has a tremendous advantage over the t-test. Specifically, the t-test is limited to situations where there are only two treatments to compare, whereas ANOVA can be used to compare two or more treatments. Thus, ANOVA provides researchers with much greater flexibility in designing experiments and interpreting results, and it is therefore needed much more often than the t-tests.

Like the t-tests presented in Chapters 11 and 12, ANOVA can be used with either an independent-measures or a repeated-measures design. As mentioned earlier, an independent-measures design means that there is a separate sample of n subjects for each of the treatments being compared; this design is otherwise known as a between-groups design. In a repeated-measures design, on the contrary, the same sample of n members is tested in all of the different treatment conditions; this design is otherwise known as a within-groups design.

In addition, ANOVA can be used to evaluate the results of a research study that involves more than one independent variable or factor. For example, an investigator might want to compare the effectiveness of two teaching methods (independent variable 1) for three different class sizes (independent variable 2). The structure of this research study is a 2 × 3 factorial design, which involves comparing independent sample means from six different treatment combinations; a separate sample group is used for each treatment condition. The dependent variable for this study would be each subject's score on a standardised achievement test.

Hence, ANOVA provides a flexible data-analysis technique that can be used to evaluate mean differences in a wide variety of research situations, and it is one of the most widely used hypothesis-testing procedures.

When the researcher manipulates a variable to create the treatment conditions, the variable is called an independent variable or treatment variable. On the contrary, when a variable defines pre-existing groups of subjects, it is called a quasi-independent or classification variable. For example, class size and method of teaching are manipulated treatment variables, whereas age and sex are quasi-independent variables (i.e., pre-existing subject variables). In the context of ANOVA, an independent variable or quasi-independent variable is called a factor.

14.3 Inadequacy of the t Test

The t-test is adequate when we want to determine whether or not two sample means differ significantly from each other; it is applied in the case of experiments involving only two groups.
However, for various reasons, the t-test is not adequate for comparisons involving more than two means. The most serious objection to its use, when more than two comparisons are to be made, is the large number of computations involved, which would be too cumbersome to carry out. The general formula for determining the number of comparisons to be made taking two groups at a time is k(k − 1)/2, where k refers to the number of treatment groups. For example, if there are 10 treatment groups, we would have

10(10 − 1)/2 = (10 × 9)/2 = 45

separate t-tests; for 15 groups, we would have to make 105. As the number of treatment groups increases, the number of comparisons, that is, the computational work, increases disproportionately.

Furthermore, if a few comparisons (i.e., t ratios) turn out to be significant, it will be difficult to interpret the results. Let us elucidate this point with the help of an example. Suppose an investigator is interested in studying the effects of 10 treatments in an experiment, and randomly assigns 10 separate groups of subjects, one group to each of the treatment conditions. Thus, there are 10 group means. Evidently, 45 possible t-tests will have to be made for the 10 treatment means: the first t-test of H₀: μ₁ = μ₂, then the second t-test of H₀: μ₁ = μ₃, and so on, till we perform all the 45 t-tests for the difference between every pair of means. Out of the 45 tests, we expect to find on an average 2.25 t ratios/values (0.05 × 45) to be significant at the 5% level by chance alone. Suppose we find that out of the 45 tests only five are significant at the 0.05 level. When the t-test is being applied, there is no way to know whether these five differences are true differences or within chance expectation. The more t-tests we perform, the more likely it is that some differences will be statistically significant purely by chance. Thus, the t-test is not an appropriate procedure to evaluate three or more means simultaneously: the probability of Type I error in the experiment as a whole becomes inflated.

The ANOVA or F-test, on the other hand, permits us to evaluate three or more means at one time. In making comparisons in experiments involving more than two means, the t-test breaks down; hence, the ANOVA or F-test should always be preferred. The F-test is also an adequate test for determining the significance of the difference between two means; for two groups (df = k − 1 = 2 − 1 = 1), the two tests agree, with F = t².
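As an illustration of this inflation, the short Python sketch below (an addition for this edition, not part of the original text) counts the pairwise t-tests for k groups and the probability of at least one spurious significance, under the simplifying assumption that the 45 tests are independent (in practice they share data, so the figure is approximate):

```python
# Pairwise comparisons among k means: k(k - 1) / 2 separate t-tests,
# each run at alpha = 0.05.
alpha = 0.05
for k in (3, 10, 15):
    m = k * (k - 1) // 2                 # number of pairwise t-tests
    expected = alpha * m                 # tests significant by chance alone
    p_any = 1 - (1 - alpha) ** m         # P(at least one Type I error), assuming independence
    print(f"k={k:2d}: {m:3d} tests, {expected:5.2f} expected by chance, "
          f"P(>=1 Type I error) = {p_any:.3f}")
```

For k = 10 this prints 45 tests, 2.25 expected by chance, and a familywise error probability of about 0.90, which is exactly the inflation the text warns against.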
To see how hypotheses are stated in ANOVA, consider an experiment comparing learning performance under three temperature conditions. In words, the null hypothesis states that temperature has no effect on learning; that is, the population means for the three temperature conditions are all the same. In symbols,

H₀: μ₁ = μ₂ = μ₃

That is, H₀ states that there is no treatment effect. Once again, notice that the hypotheses are stated in terms of population parameters, even though we use sample data to test them. For the alternative hypothesis (H₁), we may state that:

H₁: At least one population mean is different from the others.

In general, H₁ states that the treatment conditions are not all the same; that is, there is a real treatment effect. Note that we have not given any specific alternative hypothesis. This is because many different alternatives are possible, and it would be tedious to list them all. One alternative, for example, would be that the first two population means are identical, but the third is different. Another alternative states that the last two means are the same, but the first is different. Other alternatives might be:

H₁: μ₁ ≠ μ₂ ≠ μ₃ (all three means are different)
H₁: μ₁ = μ₃, but μ₂ is different

It should be pointed out that a researcher typically entertains only one (or at most a few) of these alternative hypotheses. For the sake of simplicity, we will state a general alternative hypothesis rather than try to list all the possible specific alternatives.

14.5.1 Basic Terminology, Notations, Concepts and Formulae Used for ANOVA

Before we introduce the notation, we will first look at some special terminology that is used in ANOVA. As noted earlier, in analysis of variance an independent variable is called a factor. Therefore, in the experiment shown in Table 14.1, the factor is teaching method. Because this experiment has only one independent variable, it is a single-factor experiment. The next term we should know is levels. The levels in an experiment consist of the different values or categories of the independent variable or factor. For example, in the learning experiment (Table 14.1), there are three categories of teaching; therefore, the factor 'teaching method' has three levels. In other words, the individual treatment conditions that make up a factor are called levels.

Because ANOVA most often is used to examine data from more than two treatment conditions (and more than two samples), we will need a notational system to help keep track of all the individual scores and totals. To help introduce this notational system, we will use the hypothetical data from Table 14.2 along with some of the notations and statistics to be described. The data given in Table 14.2 represent the results of an independent-measures experiment comparing learning performance under three spacing conditions of distributed practice.

Table 14.2 Hypothetical data from an experiment comparing learning performance under three spacing conditions of distributed practice (the individual scores are not legible in the scan; the per-condition summaries are shown)

                    Spacing Condition
                  1        2        3
n                 5        5        5
ΣX               19       29       47
ΣX²              75      171      453
X̄               3.8      5.8      9.4

N = 15; ΣΣX = G = 95; ΣΣX² = 699; X̄.. = 6.33

(i) The letter k is used to identify the number of treatment conditions, that is, the number of levels of the factor. For an independent-measures study, k also specifies the number of independent or separate samples or groups. For the data in Table 14.2, there are three treatment conditions, so k = 3.
(ii) The number of scores in each treatment condition is identified by a lowercase letter n. For the data in Table 14.2, n = 5 for all the treatment conditions. If the samples are of different sizes, we can identify a specific sample by using a subscript. For example, n₁ is the number of scores in treatment condition 1, n₂ is the number of scores in treatment condition 2, and so on.
(iii) The total number of scores in the entire study is specified by a capital letter N. When all the samples are of the same size (n is constant), N = kn. For the data in Table 14.2, there are n = 5 scores in each of the k = 3 treatments, so N = 3 × 5 = 15.
(iv) The total or sum of the scores for each treatment condition is identified by the sigma (Σ) of the capital letter X, that is, ΣX. The total of the scores for a specific treatment condition can be identified by adding a numerical subscript to ΣX. For example, the total of the scores for the third treatment condition in Table 14.2 is ΣX₃ = 47.
(v) The sum of all the scores in the entire research study (the grand total) is identified by G. We can compute G by adding up all N scores or by adding up the treatment totals: G = ΣΣX. For example, the grand total for the data given in Table 14.2 is G = 95.
(vi) The mean or average of the scores for each treatment is identified by a bar (horizontal line) over the capital letter X, that is, X̄. The mean of the scores for a specific treatment condition is identified by adding a numerical subscript to X̄; for example, the mean for the third treatment condition in Table 14.2 is X̄₃ = 9.4.
(vii) The mean or average of all the scores in the entire study (the grand mean) is identified by X bar dot dot, that is, X̄.. For example, the grand mean for the data given in Table 14.2 is X̄.. = 6.33.
(viii) The total or sum of the squares of the scores for each treatment condition is identified by the sigma (Σ) of the square of the capital letter X, that is, ΣX². The total of the squares of the scores for a specific treatment condition can be identified by adding a numerical subscript to ΣX². For example, the total of the squares of the scores for the first treatment condition in Table 14.2 is ΣX₁² = 75.
(ix) The sum of the squares of all the scores in the entire study (i.e., all N scores) is identified by the sum of the sums of the squares of the scores for each treatment condition, that is, ΣΣX². For example, the total of the squares of all N scores given in Table 14.2 is ΣΣX² = 699.

Let us now discuss the concepts that are used in the computation of ANOVA. The following are some of these concepts.

(i) Correction term (C). For computing the sums of squares by the direct method, a correction term (C) is needed. The correction term is obtained by squaring the grand total (G or ΣΣX) and then dividing it by the total number of subjects or observations in the experiment (N = kn). So,

C = (ΣΣX)²/N = G²/N

In the present example, from the data given in Table 14.2,

C = (95)²/15 = 9025/15 = 601.67

(ii) Sum of squares (SS). The sum of the squared deviations around the mean is called the sum of squares, or 'SS' in a shortened form. The total sum of squares may be divided into two additive and independent parts: the within-groups sum of squares (or within-treatments sum of squares) and the between-groups sum of squares (or between-treatments sum of squares). These are discussed below one by one.

(a) Total sum of squares. As the name implies, the total sum of squares (SS_t) is the sum of squares for the entire set of N scores in the experiment. We compute this value by using the computational formula

SS_t = ΣΣX² − C

Applying this formula to the set of data in Table 14.2, we obtain

SS_t = 699 − 601.67 = 97.33

(b) Within-groups sum of squares. The within-groups (or within-treatments) sum of squares (SS_w) is the pooled sum of squares based on the variation of the individual observations about their own group mean. The within-groups sum of squares is also called the error sum of squares. All the uncontrolled sources of variation are pooled in the within-groups sum of squares. We compute SS_w by the following formula:

SS_w = Σ[ΣX² − (ΣX)²/n]
Thus, greater the variation in the group means, the larger is the sum of squares between groups. The SS, iscalled true sum of squares. The sum of the squares between the various groups can be found by taking the tion from the grand mean (or combined mean), mean of each group, getting its devia Squaring thisdeviation.and then multiplyingeach of these by thenumberof individuals follows: ( "in each group and lastly adding these values as Pin) +(X, in) (X, $5, = 2X - Xn) =(X, -X A Pplying this formula to the set of data in Table 142, we obtain S$, = (38 -6.33)°(5)+(58 6.33)" (5) + (94 - 6.33)"(S) a (-253)%(5) + (-0.53)'(5) + (3.07)"(5) 6.4009 x5 -+0.2809 «5 + 9.42495 32,0045 + 1.4045 + 47.1245 = 80.5335 = 80.53 556 STATISTICS FOR BEHAVIOURAL AND SOCIAL SCIENCES Py This above method of computing between-groups sum of res and cumbersome, especially when treatment/group Means are not ewally is it isin the case of present example Table 142. Therefore, we win Who, iP? formula for SS, that uses the treatment totals (=X° D Use a cM e ) instead compte” ‘th This computational formula to obtain the betwee Of treatment uae? | wil "N-STOups s m we the direct method. In this method, the totals of each of the thee ate a ue example) have been squared and oo by the number of Obsery gr ouPs ie oom (zx) Ons yp Oy in each group, and then summed [axe - Finally, the Correction term . bi, ® subtracted from the sum of squares. bg (i Thus, the direct method of obtaining SS, reads as follows: n m -_ | | Applying this formula to the set of data in Table 14.2, we obtain : [SoS | cone | 9S 88 20°). sone @ 5) * Ss 55 =[72.2+168.2+ 4418 }- 601.67 = 6822-60167 914 ° If no error or mistake is committed while computing the SS, then the outcomes both the above methods will be the same. At this point in the analysis, we have computed all three of the to verify our calculations by checking to see that the two co Using the data from Table 142, SS values and itis appropriate mponents add up to the toad SS, = $8, +85, 97.33 = 80.53 +1680 Thus, the sum of squares are additive. The formula for each SS and the relationships among these three values are show" Figure 14.2. c | Total SS | 2Ex*-c ——— —~. Between-groups SS KR Within-groups SS | ee) 28S inside each grOUP | | oe . or $S,- $S, Note n yr they Figure 14.2 Partitioning the sum -_ esa" ' -measule of squares (SS) for the independent variance ji) DEST yewill f for the total s “awe Will partition this value i. nd then WE into two additive and independent compo! om between treatments/groups and degrees of freedom within pee off : incomputl wil Note hte that the two parts we obtained from th Pe 557 Chapter ' 14 Analysis of Variance (Independent Measures) ee of Freedom aes alysis of degree of freedom (df) follows the same pattern as the analysis of SS. Firs' f for the tota cd I set of Nscores (or kn scores when 1 is constant in k treatments) nents: degrees t (or groups). in mind. The ane fin ing degrees of freedom, there are two important considerations to keep f value is associated with a specific SS value. Normally, the value of df is obtained by counti t i yy counting the number of tocalculate SS and then subtracting 1 from it. Fors example, if you compute ‘of scores, then df=n—1. tems that were used @ Each di SS for aset Gi p this in mind, we will examine the degrees of freedom for each part of the analysis. Total degrees of freedom (df,). The df, is associated with SS,. 
(iii) Degrees of freedom. The analysis of degrees of freedom (df) follows the same pattern as the analysis of SS. First, we find the df for the total set of N scores (or kn scores, when n is constant across the k treatments), and then we partition this value into two additive and independent components: degrees of freedom between treatments/groups and degrees of freedom within treatments/groups. In computing degrees of freedom, there are two important considerations to keep in mind:

• Each df value is associated with a specific SS value.
• Normally, the value of df is obtained by counting the number of items that were used to calculate SS and then subtracting 1. For example, if you compute SS for a set of n scores, then df = n − 1.

Keeping this in mind, we will examine the degrees of freedom for each part of the analysis.

(a) Total degrees of freedom (df_t). The df_t is associated with SS_t. To find the df associated with SS_t, you must first recall that this SS value measures variability for the entire set of N scores. Therefore, this df value will be df_t = N − 1. One degree of freedom is lost by taking deviations about the grand mean. For the data in Table 14.2, the total number of scores is N = 15, so the total degrees of freedom would be df_t = N − 1 = 15 − 1 = 14.

(b) Within-groups degrees of freedom (df_w). The df_w is associated with SS_w. To find the df associated with SS_w, we must look at how this SS value is computed. Remember, we first find the SS inside each of the treatments (or groups) and then add these values together to get SS_w. Each of the treatment SS values measures variability for the n scores in that treatment/group, so each SS has df = n − 1; in each group or treatment, one df is lost by taking deviations from the group mean. When all these individual treatment values are added together, we obtain df_w = Σ(n − 1) = Σdf. For the data in Table 14.2, each treatment has n − 1 = 5 − 1 = 4 degrees of freedom inside it. Because there are three treatment conditions, each having the same number (n constant) of subjects or observations, this gives a total of 12 [df_w = k(n − 1)] for the within-treatments degrees of freedom.

Notice that this formula for df simply adds up the number of scores in each treatment (the n values) and subtracts 1 for each treatment. If these two stages are done separately, you obtain df_w = N − k. (Adding up all the n values gives N; subtracting 1 for each treatment subtracts k altogether, because there are k treatments.) For the data in Table 14.2, N = 15 and k = 3, so df_w = 15 − 3 = 12.

(c) Between-groups degrees of freedom (df_b). The df_b is associated with SS_b. The df associated with SS_b can be found by considering the corresponding SS formula, which measures the variability of the set of treatment totals (ΣX). To find df_b, simply count the number of treatment totals (ΣX) and subtract 1. Because the number of treatments is specified by the letter k, the formula is df_b = k − 1. We have k means, and one df is lost by expressing the group means as deviations from the grand mean. For the data in Table 14.2, there are three treatment conditions (k = 3), so the between-treatments degrees of freedom are df_b = k − 1 = 3 − 1 = 2.

Notice that the two parts we obtained from this analysis of degrees of freedom add up to equal the total degrees of freedom. In other words, the degrees of freedom are additive:

df_t = df_b + df_w or 14 = 2 + 12

The complete analysis of degrees of freedom is shown in Figure 14.3.

[Figure 14.3 Partitioning degrees of freedom (df) for the independent-measures analysis of variance: Total df = N − 1 is partitioned into between-groups df = k − 1 and within-groups df = N − k.]
(iv) Mean squares or variance estimates. In the terminology of the analysis of variance, the variance is called the mean square, or simply MS; in ANOVA it is customary to use the term mean square (MS) in place of the term variance. As discussed earlier, variance is defined as the mean of the squared deviations. In the same way that we use SS to stand for the sum of the squared deviations, we now use MS to stand for the mean of the squared deviations. The mean squares are otherwise known as variance estimates (VE). The MS is obtained by dividing the SS by its df. Thus,

MS = variance = SS/df

Since there are three types of SS (i.e., total SS, within-groups SS and between-groups SS), each having its own associated degrees of freedom, there are three types of MS or variance estimates: the total variance estimate or MS_t (s_t²), the within-groups variance estimate or MS_w (s_w²) and the between-groups variance estimate or MS_b (s_b²). These three variance estimates are computed below with reference to the data given in Table 14.2:

Total VE/MS_t (s_t²) = SS_t/df_t = 97.33/14 = 6.95
Within-groups VE/MS_w (s_w²) = SS_w/df_w = 16.80/12 = 1.40
Between-groups VE/MS_b (s_b²) = SS_b/df_b = 80.53/2 = 40.265

Thus, we notice that while the sums of squares (SS) and degrees of freedom (df) are additive, the variance estimates, sometimes spoken of as the mean squares (MS), are not additive.

(v) The F ratio. We now have the variance estimates (or MS) for the between-groups and within-groups sources. The between-groups variance is called the true variance, whereas the within-groups variance is known as the error variance. The F ratio simply compares these two variances:

F = MS_b/MS_w = s_b²/s_w²

For the data given in Table 14.2, the F ratio is:

F = 40.265/1.40 = 28.76

(vi) The ANOVA summary table. It is useful to organise the results of the analysis in one table, called an ANOVA summary table. The table shows the source of variation (between groups, within groups and total), SS, df, MS and F in a serial order. For the previous computations, the ANOVA summary table is constructed as follows:

Source of Variation     SS      df     MS        F
Between groups         80.53     2    40.265    28.76**
Within groups          16.80    12     1.40
Total                  97.33    14

Note: ** p < 0.01.
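To check a computed F against exact critical values (instead of Appendix Table E), a few lines of Python using scipy suffice. This snippet is an illustration added here, not part of the original text, and it assumes scipy is available:

```python
from scipy import stats

k, N = 3, 15
SS_b, SS_w = 80.53, 16.80
df_b, df_w = k - 1, N - k                    # 2 and 12
MS_b, MS_w = SS_b / df_b, SS_w / df_w        # 40.265 and 1.40
F = MS_b / MS_w                              # 28.76
p = stats.f.sf(F, df_b, df_w)                # right-tail p-value of F(2, 12)
print(f"F({df_b}, {df_w}) = {F:.2f}, p = {p:.6f}")
print("critical F at .05:", round(stats.f.ppf(0.95, df_b, df_w), 2))  # ~3.89
print("critical F at .01:", round(stats.f.ppf(0.99, df_b, df_w), 2))  # ~6.93
```

Since 28.76 exceeds both critical values, the ** entry (p < 0.01) in the summary table is confirmed.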
Example 14.1 Three randomly formed groups of subjects are taught by three different methods of instruction: lecture, seminar and discussion. After the completion of instruction, an achievement test is administered to the three groups. The achievement scores are entered in Table 14.3. Test whether the three group means are different or not.

Table 14.3 Achievement test scores of subjects treated by three methods of instruction (n = 5 per group; the individual scores are not legible in the scan, and the per-group summaries recovered from the computations are shown)

Method        Lecture   Seminar   Discussion
ΣX               52        68         37
ΣX²             550       940        295
X̄              10.4      13.6        7.4

Solution For the solution of the problem, the following steps should be followed:

1. The given data are entered in Table 14.3 with all its notations.
2. Computation of a variety of summary statistics for the data in Table 14.3. Specifically: n = 5; k = 3; kn = N = 3 × 5 = 15; ΣX₁ = 52, ΣX₂ = 68, ΣX₃ = 37; ΣΣX = G = 157; X̄₁ = 10.4, X̄₂ = 13.6, X̄₃ = 7.4; X̄.. = 10.47; ΣX₁² = 550, ΣX₂² = 940, ΣX₃² = 295; ΣΣX² = 1785.
3. Statement of the null hypothesis (H₀) and the alternative hypothesis (H₁). The H₀ contends that there is no significant difference among the group/treatment means, any observed difference being due to mere chance or sampling error. The H₁, on the contrary, states that there is a real or genuine difference between the treatment means:

H₀: μ₁ = μ₂ = μ₃ (or X̄₁ = X̄₂ = X̄₃)
H₁: μ₁ ≠ μ₂ ≠ μ₃ (or X̄₁ ≠ X̄₂ ≠ X̄₃)

4. Calculation of the correction term (C):

C = (ΣΣX)²/N = G²/N = (157)²/15 = 24649/15 = 1643.27

5. Computation of the sums of squares (SS):

(a) Total SS = ΣΣX² − C = 1785 − 1643.27 = 141.73
(b) Between-groups SS = Σ[(ΣX)²/n] − C
    = [(52)²/5 + (68)²/5 + (37)²/5] − 1643.27
    = [540.8 + 924.8 + 273.8] − 1643.27
    = 1739.4 − 1643.27 = 96.13
(c) Within-groups SS = Total SS − Between-groups SS = 141.73 − 96.13 = 45.60

Alternatively, the within-groups SS can also be computed by the following, rather more tedious, formula:

Within-groups SS = Σ[ΣX² − (ΣX)²/n]
    = [550 − (52)²/5] + [940 − (68)²/5] + [295 − (37)²/5]
    = [550 − 540.8] + [940 − 924.8] + [295 − 273.8]
    = 9.2 + 15.2 + 21.2 = 45.6

6. Determination of the degrees of freedom (df). Since there are three types of sums of squares, there are three types of degrees of freedom:

(a) Total df = N − 1 or kn − 1 = 15 − 1 = 14, or 3 × 5 − 1 = 14
(b) Between-groups df = k − 1 = 3 − 1 = 2
(c) Within-groups df = N − k or k(n − 1) = 15 − 3 = 12, or 3(5 − 1) = 3 × 4 = 12

7. Computation of mean squares (MS). Like the sums of squares, mean squares are of three types, and each MS is determined by dividing the corresponding SS by its associated df:

(a) Total MS = Total SS/Total df = 141.73/14 = 10.12
(b) Between-groups MS = Between-groups SS/Between-groups df = 96.13/2 = 48.065
(c) Within-groups MS = Within-groups SS/Within-groups df = 45.60/12 = 3.80

8. Computation of the F ratio:

F = Between-groups MS/Within-groups MS = 48.065/3.80 = 12.65

9. Preparation of the ANOVA summary table. The results of the analysis are summarised in Table 14.4.

Table 14.4 Summary of one-way analysis of variance

Source of Variation        SS      df     MS        F
Between-groups            96.13     2    48.065    12.65**
Within-groups (error)     45.60    12     3.80
Total                    141.73    14

Note: ** p < 0.01.

10. Test of significance. Consulting Table E in the appendix with df = 2 for the numerator and df = 12 for the denominator, the critical value of F at the 0.05 level is 3.88 and at the 0.01 level is 6.93. Since the obtained F of 12.65 exceeds both critical values, it is significant at the 0.01 level; therefore, p < 0.01, H₀ is rejected and H₁ is accepted. We conclude that the three methods of instruction produced significantly different mean achievement scores.

Example 14.2 Four independent groups of subjects (n₁ = 8, n₂ = 5, n₃ = 7, n₄ = 6; N = 26) take a recall test after learning material presented by four different methods of presentation. The group totals and sums of squares are ΣX₁ = 43, ΣX₂ = 42, ΣX₃ = 43, ΣX₄ = 18 (G = 146) and ΣX₁² = 269, ΣX₂² = 364, ΣX₃² = 287, ΣX₄² = 68 (ΣΣX² = 988). Test whether the four group means differ significantly.

Solution The steps parallel those of Example 14.1.

1. The given data are entered in the data table (Table 14.5) with all its notations.
2. Summary statistics: k = 4; n₁ = 8, n₂ = 5, n₃ = 7, n₄ = 6; N = 26; G = 146; ΣΣX² = 988.
3. Hypotheses: H₀: μ₁ = μ₂ = μ₃ = μ₄; H₁: the four treatment means are not all equal.
4. Correction term: C = (ΣΣX)²/N = (146)²/26 = 21316/26 = 819.85
5. Computation of the sums of squares (SS):

(a) Total SS = ΣΣX² − C = 988 − 819.85 = 168.15
(b) Between-groups SS = Σ[(ΣX)²/n] − C
    = [(43)²/8 + (42)²/5 + (43)²/7 + (18)²/6] − 819.85
    = [231.13 + 352.80 + 264.14 + 54.00] − 819.85
    = 902.07 − 819.85 = 82.22
(c) Within-groups SS = Σ[ΣX² − (ΣX)²/n]
    = [269 − 231.13] + [364 − 352.80] + [287 − 264.14] + [68 − 54.00]
    = 37.87 + 11.20 + 22.86 + 14.00 = 85.93

6. Determination of the degrees of freedom (df):

(a) Total df = N − 1 = 26 − 1 = 25
(b) Between-groups df = k − 1 = 4 − 1 = 3
(c) Within-groups df = N − k = 26 − 4 = 22

7. Computation of mean squares (MS):

(a) Total MS = Total SS/Total df = 168.15/25 = 6.73
(b) Between-groups MS = Between-groups SS/Between-groups df = 82.22/3 = 27.41
(c) Within-groups MS = Within-groups SS/Within-groups df = 85.93/22 = 3.91

8. Computation of the F ratio:

F = Between-groups MS/Within-groups MS = 27.41/3.91 = 7.01

9. Preparation of the ANOVA summary table, as given in Table 14.6.

Table 14.6 Summary of one-way ANOVA

Source of Variation        SS      df     MS       F
Between-groups            82.22     3    27.41    7.01**
Within-groups (error)     85.93    22     3.91
Total                    168.15    25

Note: ** p < 0.01.

10. Test of significance. Consulting Table E in the appendix with df = 3 for the numerator and df = 22 for the denominator, we find that the F required for significance at the 0.05 level is 3.05, and at the 0.01 level is 4.82. Since the obtained F of 7.01 is greater than both the critical values, it is significant at the 0.01 level; therefore, p < 0.01. This indicates that the four groups are significantly different in their mean performance. Hence, H₀ is rejected and H₁ is accepted. The overall significance of F indicates that the means of the four groups do not fall on a straight line with zero slope; hence, the null hypothesis that the four groups are random samples from a common normal population is rejected.

On the basis of the results of the analysis, we can conclude that the four methods of presentation produced significant differences in the four groups. In other words, the method of presentation has significantly affected the mean recall scores of the four groups of subjects.
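Because Example 14.2 has unequal group sizes, it is a convenient check that the computing formulas need no modification. The sketch below (an illustrative addition, not from the original text, run on the group summaries recovered above) reruns the computation:

```python
# One-way ANOVA with unequal group sizes, from (n, sum of X, sum of X^2) summaries.
groups = [(8, 43, 269), (5, 42, 364), (7, 43, 287), (6, 18, 68)]
k = len(groups)
N = sum(n for n, _, _ in groups)                     # 26
G = sum(sx for _, sx, _ in groups)                   # 146
C = G ** 2 / N                                       # 819.85
SS_t = sum(sx2 for _, _, sx2 in groups) - C          # 168.15
SS_b = sum(sx ** 2 / n for n, sx, _ in groups) - C   # 82.22
SS_w = SS_t - SS_b                                   # 85.93
F = (SS_b / (k - 1)) / (SS_w / (N - k))              # = 7.02 at full precision;
print(f"SS_b={SS_b:.2f} SS_w={SS_w:.2f} F={F:.2f}")  # the text's 7.01 uses rounded MS values
```

The only place the unequal n values enter is the (ΣX)²/n terms, so the hand procedure of Example 14.1 carries over unchanged.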
Example 14.3 A comparative psychologist conducts a research study to compare the learning of two species of monkeys, Vervet and Rhesus. The following data show the number of trials taken by the animals to attain perfect learning. Test the significance of the difference between the sample means and prove that F = t².

Vervet: 3, 4, 5, 5, 6, 6, 7, 8
Rhesus: 9, 6, 7, 7, 6, 8, 8, 9

Solution

1. The given data are presented in Table 14.7 along with its notations.

Table 14.7 Number of trials taken by two species of monkeys to attain perfect learning

         Vervet   Rhesus
            3        9
            4        6
            5        7
            5        7
            6        6
            6        8
            7        8
            8        9
ΣX:        44       60
ΣX²:      260      460

2. Computation of a variety of summary statistics for the given data in Table 14.7. Specifically: k = 2, n = 8, kn = N = 16; group totals (ΣX): ΣX₁ = 44, ΣX₂ = 60, so ΣΣX = G = 104; group means: X̄₁ = 5.5, X̄₂ = 7.5; grand mean X̄.. = 6.5; sums of the squares of the group scores (ΣX²): ΣX₁² = 260, ΣX₂² = 460, so ΣΣX² = 720.
3. Statement of H₀ and H₁:

H₀: μ₁ = μ₂ (or X̄₁ = X̄₂)
H₁: μ₁ ≠ μ₂ (or X̄₁ ≠ X̄₂)

4. Calculation of the correction term (C):

C = (ΣΣX)²/N = G²/N = (104)²/16 = 10816/16 = 676.0

5. Computation of the sums of squares (SS):

(a) Total SS = ΣΣX² − C = 720.0 − 676.0 = 44.0
(b) Between-groups SS = Σ[(ΣX)²/n] − C = [(44)²/8 + (60)²/8] − 676.0 = [242.0 + 450.0] − 676.0 = 692.0 − 676.0 = 16.0
(c) Within-groups SS = SS_t − SS_b = 44.0 − 16.0 = 28.0

By the alternative (direct) method:

Within-groups SS = Σ[ΣX² − (ΣX)²/n] = [260 − (44)²/8] + [460 − (60)²/8] = [260 − 242.0] + [460 − 450.0] = 18.0 + 10.0 = 28.0

6. Determination of the degrees of freedom (df):

(a) Total df = N − 1 = 16 − 1 = 15
(b) Between-groups df = k − 1 = 2 − 1 = 1
(c) Within-groups df = N − k or k(n − 1) = 2(8 − 1) = 2 × 7 = 14

7. Computation of mean squares (MS):

(a) Total MS = Total SS/Total df = 44/15 = 2.93
(b) Between-groups MS = Between-groups SS/Between-groups df = 16/1 = 16.0
(c) Within-groups MS = Within-groups SS/Within-groups df = 28/14 = 2.0

8. Computation of the F ratio:

F = Between-groups MS/Within-groups MS = 16.0/2.0 = 8.0

9. Preparation of the ANOVA summary table, as given in Table 14.8.

Table 14.8 Summary of one-way ANOVA

Source of Variation        SS     df    MS      F
Between-groups            16.0     1   16.0    8.0*
Within-groups (error)     28.0    14    2.0
Total                     44.0    15

Note: * p < 0.05.

10. Test of significance. Consulting Table E in the appendix with df = 1/14, the critical F value at the 0.05 level is 4.60 and at the 0.01 level is 8.86. Since the obtained value of F = 8.0 is greater than 4.60 but less than 8.86, it is significant at the 0.05 level of significance but not at the 0.01 level. Hence, we conclude that the mean difference is significant at the 0.05 level; therefore, p < 0.05, the two group means are significantly different from each other, H₀ is rejected and H₁ is accepted.

The second part of Example 14.3 is to prove that F = t². In order to prove this, we have to compute the t ratio from the given data. We shall use the following formula for t (the formula for small samples):

t = (X̄₁ − X̄₂) / √{[(Σx₁² + Σx₂²)/((N₁ + N₂) − 2)] × (1/N₁ + 1/N₂)}

First, we find Σx₁² and Σx₂², the sums of squared deviations within each group:

Σx₁² = ΣX₁² − (ΣX₁)²/N₁ = 260 − (44)²/8 = 260 − 1936/8 = 260 − 242 = 18.0
Σx₂² = ΣX₂² − (ΣX₂)²/N₂ = 460 − (60)²/8 = 460 − 3600/8 = 460 − 450 = 10.0

Then,

t = (5.5 − 7.5) / √{[(18 + 10)/((8 + 8) − 2)] × (1/8 + 1/8)}
  = −2.0 / √[(28/14) × 0.25] = −2.0/√0.5 = −2.0/0.707 = −2.829 ≈ −2.83

∴ t = −2.83; t² = (−2.83)² = 8.0089 ≈ 8.0
F = 8.00; √F = √8.0 = 2.828 ≈ 2.83

Thus, F = t², or √F = t (proved).

The t ratio may be positive or negative, but the F ratio is always positive. With regard to the significance of the obtained t ratio, we can say that since the obtained F ratio is significant at the 0.05 level, the corresponding t ratio must also be significant at the 0.05 level. To verify this, we may test the significance of the obtained t ratio by taking the degrees of freedom, df = N₁ + N₂ − 2 = (8 + 8) − 2 = 14, and consulting the table containing the critical values of t. Consulting Table C in the appendix with df = 14, the critical value of t at the 0.05 level is 2.145, and at the 0.01 level is 2.977. Since the obtained value of |t| = 2.83 is greater than 2.145 but less than 2.977, it is significant at the 0.05 level of significance. Therefore, p < 0.05. Both techniques (F and t) lead to the same conclusion. It may be noted that when the df for 'between groups' is 1, √F = t or, putting it the other way around, t² = F.
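The same equivalence can be confirmed numerically. The snippet below (an illustration added here, assuming scipy is available; it is not part of the original text) runs both tests on the raw data:

```python
from scipy import stats

vervet = [3, 4, 5, 5, 6, 6, 7, 8]
rhesus = [9, 6, 7, 7, 6, 8, 8, 9]

F, p_f = stats.f_oneway(vervet, rhesus)      # one-way ANOVA with k = 2 groups
t, p_t = stats.ttest_ind(vervet, rhesus)     # independent-measures t-test

print(f"F = {F:.3f}, t = {t:.3f}, t^2 = {t * t:.3f}")            # F = 8.000, t = -2.828
print(f"p from F-test = {p_f:.4f}, p from t-test = {p_t:.4f}")   # identical, ~0.013
```

Both p-values come out identical, which restates the proof: with df = 1 in the numerator, F = t².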
14.5.4 Strength of Association

The significant F ratio indicates that the observed differences between the treatment means are not likely to arise by chance. However, it does not indicate anything about the strength of the treatment effect. The statistic called omega square (ω²) is a measure of the strength of the treatment effect. It gives us the proportion of the total variability in a set of scores that can be accounted for by the treatments; that is, what portion of the variance in the scores can be accounted for by the differences between the treatment groups. The formula for the strength of association is:

ω² = [SS_b − (k − 1)MS_w] / (SS_t + MS_w)

where
ω² = omega square
SS_b = sum of squares between groups
k = number of treatment groups
MS_w = mean squares within groups
SS_t = sum of squares total.

Let us now compute the strength of treatment effects in our illustrative Example 14.1. The values of SS_b, SS_t and MS_w have been obtained from Table 14.4: SS_b = 96.13; SS_t = 141.73; MS_w = 3.80; k = 3. The computation of ω² proceeds as follows:

ω² = [96.13 − (3 − 1)(3.80)] / (141.73 + 3.80) = (96.13 − 7.60)/145.53 = 88.53/145.53 = 0.61

This means approximately 61% of the variance in the dependent variable is accounted for by the three methods of instruction. In other words, there is a fairly strong relationship between the method of instruction (independent variable) and the achievement scores (dependent variable) of the subjects.

There is an alternative formula to find out the strength of the relation between the independent and dependent variables. This formula reads as:

η² = SS_b/SS_t or η² = 1 − SS_w/SS_t

The symbol η is the Greek letter eta, and SS denotes the sum of squares. This statistic is known as the correlation ratio. It may be interpreted as a simple proportion, in the same way that r² is interpreted as a proportion; it is a measure of the strength of association between the dependent and independent variables involved in the experiment. By referring to Table 14.4, we found that SS_b = 96.13, SS_w = 45.60 and SS_t = 141.73. Let us now compute the strength of association, or η² (eta square), as follows:

η² = 96.13/141.73 = 0.678 ≈ 0.68
or η² = 1 − 45.60/141.73 = 1 − 0.32 = 0.68

The obtained correlation ratio, or η², is 0.68. Thus, 68% of the variation in the data can be attributed to the independent variable. If we compare omega square (ω²) and eta square (η²), the eta square is a little higher than the omega square.

To test whether a correlation ratio is significantly different from 0, we may use the F ratio; the formula reads as:

F = [η²/(k − 1)] / [(1 − η²)/(N − k)]

where
η² = the obtained correlation ratio
k = number of treatment groups
N = total number of observations.

Referring to Table 14.4, k = 3 and N = 15. Thus,

F = [0.68/(3 − 1)] / [(1 − 0.68)/(15 − 3)] = (0.68/2)/(0.32/12) = 0.34/0.027 = 12.59

So the F ratio used in testing the significance of this correlation ratio is 12.59. This is the same, within the rounding of decimals, as that previously obtained in Table 14.4. The analysis of variance and the correlation-ratio approach are equivalent procedures.
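A short script (added for illustration; not part of the original text) collects these strength-of-association measures from the summary values in Table 14.4:

```python
# Strength of association for Example 14.1, from the Table 14.4 values.
SS_b, SS_w, SS_t = 96.13, 45.60, 141.73
k, N = 3, 15
MS_w = SS_w / (N - k)                                   # 3.80

omega_sq = (SS_b - (k - 1) * MS_w) / (SS_t + MS_w)      # ~0.61
eta_sq = SS_b / SS_t                                    # ~0.678
F_eta = (eta_sq / (k - 1)) / ((1 - eta_sq) / (N - k))   # ~12.65
print(f"omega^2 = {omega_sq:.2f}, eta^2 = {eta_sq:.3f}, F = {F_eta:.2f}")
```

Carrying η² at full precision gives F = 12.65, exactly the F of Table 14.4; the text's 12.59 reflects rounding η² to 0.68 before substituting.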
14.6 Two-Way Analysis of Variance

The two-way analysis of variance is otherwise known as a two-way classification or two-factor analysis of variance. When a research study involves more than one factor, it is called a factorial design. The simplest version of a factorial design is the two-factor ANOVA. In this section, specifically, we will examine analysis of variance as it applies to research studies with exactly two factors. In addition, we will limit our discussion to studies that use a separate sample for each treatment condition, that is, independent-measures designs. Finally, we will consider only research designs where the sample size (n) is the same for all treatment conditions. In the terminology of ANOVA, this section will examine the two-factor, independent-measures, equal-n design.

In other words, those experiments which investigate simultaneously the effects on the dependent variable of two independent variables (or factors), each having two or more levels to which separate groups of subjects are randomly assigned, are called two-factor experiments. The technique used to analyse the data of such experiments is called two-way or two-factor ANOVA. The two-way ANOVA permits the simultaneous study of two independent variables or factors, whereas the one-way ANOVA permits the study of only one independent variable or factor. Moreover, the two-way ANOVA permits the evaluation of the interaction between the two factors or independent variables, whereas the one-way ANOVA does not permit such an evaluation. The study of the interaction effect is thus an advantage of the two-way ANOVA over the one-way ANOVA.

To illustrate the two-way ANOVA, assume that an investigator wishes to study the effect of two levels of intelligence (i.e., superior intelligence and inferior intelligence) and three methods of instruction (i.e., lecture, seminar and discussion) on the achievement scores of the subjects. Traditionally, the two independent variables in a two-factor experiment are identified as factor A and factor B. In this experiment, let intelligence be factor A, having two levels, superior intelligence and inferior intelligence, represented by a₁ and a₂ respectively. Similarly, the second independent variable, method of instruction, is factor B, having three levels, the lecture, seminar and discussion methods, represented by b₁, b₂ and b₃ respectively. The total number of treatment conditions or cells in this experiment will be A × B = 2 × 3 = 6. Thus, the design would require six separate samples of the same size (n), say, 5 observations each, and the total number of subjects will be 2 × 3 × 5 = 30. The data may be arranged in a table containing two rows and three columns: the rows correspond to intelligence (factor A) and the columns to methods of instruction (factor B).

Table 14.9 shows the arrangement of data of the 2 × 3 factorial design given in the above example.

Table 14.9 The six treatment conditions in a two-way analysis of variance

                                      Method (Factor B)
                              b₁ (Lecture)  b₂ (Seminar)  b₃ (Discussion)
Intelligence  a₁ (Superior)      a₁b₁          a₁b₂           a₁b₃
(Factor A)    a₂ (Inferior)      a₂b₁          a₂b₂           a₂b₃

There are six cells in Table 14.9, and each cell is a joint or combined function of both the factors. The two-factor ANOVA will test for mean differences in research studies structured like the intelligence-and-method-of-instruction example in Table 14.9. Specifically, the two-factor ANOVA tests for:

1. The mean difference between the two intelligence levels (or between the rows).
2. The mean differences between the three method levels (or between the three columns).
3. Any other mean differences that may result from unique combinations of a specific intelligence level and a specific method level (for example, superior intelligence may be especially facilitative when the method of instruction is seminar).

Thus, the two-factor ANOVA combines three separate hypothesis tests in one analysis.
Each of these three hypothesis tests will be based on its own F ratio computed from the data. The three F ratios all have the same basic structure:

F = variance (differences) between sample means / variance (differences) expected by chance

As always in ANOVA, a large value for the F ratio indicates that the sample mean differences are greater than chance. To determine whether the obtained F ratios are significantly greater than chance, we will need to compare each F ratio with the critical values of F given in Table E in the appendix.

14.6.1 Main Effects and Interaction Effect

The goal of a two-factorial experiment is to evaluate the main effects of each of the two factors as well as their interaction effect. Thus, as noted earlier, a two-factor or two-way ANOVA actually involves three distinct hypothesis tests. Here, we will examine these three tests in more detail.

14.6.1.1 Main Effects

The mean differences among the levels of one factor are referred to as the main effect of that factor. When the design of an experiment is represented in a tabular form (or as a matrix), with one factor determining the rows and the second factor determining the columns, then the mean differences between (or among) the rows describe the main effect of one factor, and the mean differences among (or between) the columns describe the main effect of the second factor.

In our example of intelligence and methods of instruction, one purpose of the experiment is to determine whether differences in intelligence (factor A) result in differences in achievement. To answer this question, we will compare the mean achievement score for all subjects tested with superior intelligence versus the mean achievement score for all subjects tested with inferior intelligence, irrespective of the methods of instruction. Note that this process evaluates the mean differences between the rows in Table 14.9.

To make this process more concrete, let us take some hypothetical data, which are given in Table 14.10. This table shows the mean achievement score for each of the treatment conditions (or cells), as well as the mean for each row (i.e., each intelligence level) and for each column (i.e., each method).

Table 14.10 Hypothetical mean scores for six treatment conditions in a two-way analysis of variance

                              b₁ (Lecture)  b₂ (Seminar)  b₃ (Discussion)   Row mean
Intelligence  a₁ (Superior)        55            85             70          X̄_a₁ = 70
(Factor A)    a₂ (Inferior)        35            65             50          X̄_a₂ = 50
Column mean                   X̄_b₁ = 45     X̄_b₂ = 75     X̄_b₃ = 60     X̄.. = 60 (grand mean)

The data presented in Table 14.10 indicate that subjects in the superior intelligence condition (the top row) obtained an average score of X̄_a₁ = 70. This overall mean was obtained by computing the average of the three cell means in the top row. In contrast, inferior intelligence resulted in a mean score of X̄_a₂ = 50 (the overall mean for the bottom row). The difference between these two means (i.e., X̄_a₁ and X̄_a₂) constitutes what is called the main effect of intelligence, or the main effect of factor A.

Similarly, the main effect of factor B (method) is defined by the mean differences among the columns of the table (or matrix). For the data in Table 14.10, the two groups of subjects instructed with the lecture method obtained an overall mean score of X̄_b₁ = 45; subjects instructed with the seminar method averaged X̄_b₂ = 75; and subjects instructed with the discussion method achieved a mean score of X̄_b₃ = 60. The differences among these three means constitute the main effect of method, or the main effect of factor B.
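The row and column (marginal) means of such a table are simple averages of the cell means when n is equal in every cell. A minimal sketch for Table 14.10 (added for illustration; not from the original text):

```python
# Marginal means from the cell means of Table 14.10 (equal n per cell).
cells = [[55, 85, 70],   # a1: superior intelligence
         [35, 65, 50]]   # a2: inferior intelligence

row_means = [sum(row) / len(row) for row in cells]        # [70.0, 50.0] -> factor A
col_means = [sum(col) / len(col) for col in zip(*cells)]  # [45.0, 75.0, 60.0] -> factor B
grand_mean = sum(map(sum, cells)) / 6                     # 60.0
print(row_means, col_means, grand_mean)
```

With unequal cell sizes the marginal means would have to be weighted by n, which is one reason this section restricts itself to equal-n designs.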
In a two-factor experiment, any main effects that are observed in the data must be evaluated with a hypothesis test to determine whether or not they are statistically significant. Unless the hypothesis test demonstrates that the main effects are significant, you must conclude that the observed mean differences are simply the result of sampling error or chance.

To evaluate the main effects, we state hypotheses concerning the main effect of factor A and the main effect of factor B, and then calculate two separate F ratios to evaluate them.

For the example we are considering, factor A involves the comparison of two different levels of intelligence. The null hypothesis (H₀) would state that intelligence has no effect on achievement, that is, that the two levels do not differ. In symbols,

H₀: μ_a₁ = μ_a₂

The alternative hypothesis (H₁) is that the two different levels of intelligence do produce different achievement scores:

H₁: μ_a₁ ≠ μ_a₂

To evaluate these hypotheses, we compute an F ratio that compares the actual mean differences between the two intelligence levels with the amount of difference that would be expected by chance or sampling error. Thus,

F = variance (differences) between the means for factor A / variance (differences) expected by chance/error
  = variance (differences) between row means / variance (differences) expected by chance/error

Similarly, factor B involves the comparison of the three different methods of instruction. The null hypothesis (H₀) states that, overall, there are no differences in mean achievement scores among the three methods of instruction. In symbols,

H₀: μ_b₁ = μ_b₂ = μ_b₃

The alternative hypothesis (H₁) states that there are differences among the three means:

H₁: μ_b₁ ≠ μ_b₂ ≠ μ_b₃, or H₁: at least one mean is different from the others

The F ratio compares the obtained mean differences among the three methods of instruction with the amount of difference that would be expected by chance or sampling error. Thus,

F = variance (differences) between the means for factor B / variance (differences) expected by chance/error
  = variance (differences) between column means / variance (differences) expected by chance/error

14.6.1.2 Interaction Effect

In addition to evaluating the main effect of each factor individually, the two-factor ANOVA allows us to evaluate other mean differences that may result from unique combinations of the two factors. For example, specific combinations of intelligence and methods of instruction may have effects that are different from the overall effects of intelligence or methods of instruction acting alone. Any 'extra' mean differences that are not explained by the main effects are called an interaction between factors. The real advantage of combining two factors within the same study or experiment is the ability to examine the unique effects caused by an interaction. In other words, an interaction between two factors occurs whenever the mean differences between individual treatment conditions, or cells, are different from what would be predicted from the overall main effects of the factors.

Table 14.11 Hypothetical mean scores for six treatment conditions in a two-way analysis of variance

                              b₁ (Lecture)  b₂ (Seminar)  b₃ (Discussion)   Row mean
Intelligence  a₁ (Superior)        45            85             80          X̄_a₁ = 70
(Factor A)    a₂ (Inferior)        45            65             40          X̄_a₂ = 50
Column mean                   X̄_b₁ = 45     X̄_b₂ = 75     X̄_b₃ = 60     X̄.. = 60

To make the concept of an interaction more concrete, we will re-examine the data shown in Table 14.10.
For these data, there is no interaction; that is, there are no extra mean differences that are not explained by the main effects. For example, within each method of instruction (each column of the table), the subjects scored 20 points higher in the superior intelligence condition than in the inferior intelligence condition. This 20-point mean difference is exactly what is predicted by the overall main effect of intelligence (factor A).

Let us now consider the hypothetical data shown in Table 14.11. These new data show exactly the same main effects that existed in Table 14.10 (the column means and the row means have not been changed), but now there is an interaction between the two factors. For example, in the discussion method (third column), there is a 40-point difference between the superior intelligence and the inferior intelligence conditions. This 40-point difference cannot be explained by the 20-point main effect of intelligence. Also, in the lecture method (first column), the data show no difference between the two intelligence conditions; again, this zero difference is not what would be expected on the basis of the 20-point main effect of intelligence. These extra, unexplained mean differences are an indication that there is an interaction between the two factors.

To evaluate the interaction, the two-factor ANOVA first identifies mean differences that cannot be explained by the main effects. After the extra mean differences are identified, they are evaluated by an F ratio with the following structure:

F = variance (mean differences) not explained by main effects / variance (differences) expected by chance/error

The null hypothesis for this F ratio simply states that there is no interaction:

H₀: There is no interaction between factors A and B. All the mean differences between treatment conditions or cells are explained by the main effects of the two factors.

The alternative hypothesis is that there is an interaction between the two factors:

H₁: There is an interaction between factors A and B. The mean differences between treatment conditions or cells are not what would be predicted from the overall main effects of the two factors.

Thus far, we have introduced the concept of an interaction as the unique effect produced by two factors working together. Now, we want to say something more about interactions by presenting two alternative definitions of an interaction.

The first alternative definition of the concept of an interaction focuses on the interdependency between the two factors. More specifically, if the two factors are interdependent, so that one factor does influence the effect of the other, then there will be an interaction. On the contrary, if the two factors are independent, so that the effect of one factor is not influenced by the other, then there will be no interaction. In short, when the effect of one factor depends on the different levels of a second factor, then there is an interaction between the factors.

Referring to the data given in Table 14.10, we note that the size of the intelligence effect (top row versus bottom row) does not depend on the methods of instruction; for these data, the change in intelligence shows the same 20-point effect for all three levels of instruction. That is, the intelligence effect does not depend on the methods of instruction, and there is no interaction. Let us now consider the data given in Table 14.11. Here, the effect of changing intelligence depends on the methods of instruction, and there is an interaction. For example, in the lecture method, changing intelligence has no effect.
However, there is a 20-point difference between superior and inferior intelligence in the seminar method, and there is a 40-point difference in the discussion method. Thus, the effect of intelligence depends on the methods of instruction, which means that there is an interaction between the two factors.

The second alternative definition of an interaction is obtained when the results of the two-factor experiment/study are presented in a graph. In this case, the concept of an interaction can be defined in terms of the pattern displayed in the graph. Figure 14.4 shows the two sets of data we have been considering (i.e., the data given in Tables 14.10 and 14.11). The data from Table 14.10, where there is no interaction, are presented in Figure 14.4(a). To construct this figure, we selected one of the factors to be displayed on the horizontal X-axis; in this case, the different levels of method are displayed. The dependent variable, mean achievement score, is shown on the vertical axis (i.e., the Y-axis). Note that the figure actually contains two separate graphs: the top line shows the relationship between method of instruction and mean achievement when the intelligence is superior, and the bottom line shows the relationship when the intelligence is inferior. In general, the picture in the graph matches the structure of the data matrix: the columns of the matrix appear as values along the X-axis, and the rows of the matrix appear as separate lines in the graph.

For this particular set of data, Figure 14.4(a), note that the two lines are parallel; that is, the distance between the lines is constant. In this case, the distance between the lines reflects the 20-point difference in mean performance between superior and inferior intelligence, and this 20-point difference is the same for all three methods of instruction.

Now look at Figure 14.4(b), which shows the data from Table 14.11. Here, we note that the lines in the graph are not parallel; the distance between the lines changes as we scan from left to right. For these data, the distance between the lines corresponds to the intelligence effect, that is, the mean difference in achievement for superior versus inferior intelligence. The fact that this difference depends on the methods of instruction indicates an interaction between the factors. In short, when the results of a two-factor study are presented in a graph, the existence of nonparallel lines (lines that cross or converge) indicates an interaction between the two factors.

In sum, the concept of an interaction is easiest to understand using the concept of interdependency; that is, an interaction exists when the effects of one variable (or factor) depend on another factor. However, the easiest way to identify an interaction in a set of data is to draw a graph showing the treatment means; the presence of nonparallel lines is an easy way to spot an interaction. The A × B interaction typically is called the 'A by B' interaction. If there is an interaction of intelligence and method, it may be called the 'intelligence by method' interaction.

[Figure 14.4 (a) Graph showing the data from Table 14.10, where there is no interaction: mean achievement scores plotted against the methods of instruction (lecture, seminar, discussion), with separate, parallel lines for superior and inferior intelligence. (b) Graph showing the data from Table 14.11, where there is an interaction: the superior- and inferior-intelligence lines are not parallel.]
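Nonparallel lines correspond to nonzero residuals from a purely additive model of the cell means. The following sketch (an illustrative addition, not part of the original chapter) makes that check for Tables 14.10 and 14.11:

```python
# Interaction as departure from the additive (main-effects-only) model:
# expected cell mean = grand mean + (row mean - grand) + (column mean - grand).
def interaction_residuals(cells):
    rows, cols = len(cells), len(cells[0])
    grand = sum(map(sum, cells)) / (rows * cols)
    rmeans = [sum(r) / cols for r in cells]
    cmeans = [sum(c) / rows for c in zip(*cells)]
    return [[cells[i][j] - (rmeans[i] + cmeans[j] - grand)
             for j in range(cols)] for i in range(rows)]

table_10 = [[55, 85, 70], [35, 65, 50]]   # parallel lines
table_11 = [[45, 85, 80], [45, 65, 40]]   # nonparallel lines
print(interaction_residuals(table_10))    # all zeros -> no interaction
print(interaction_residuals(table_11))    # [[-10, 0, 10], [10, 0, -10]] -> interaction
```

The all-zero matrix for Table 14.10 is the algebraic counterpart of its parallel lines; the nonzero residuals for Table 14.11 are exactly the 'extra' mean differences that the interaction F ratio evaluates.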
Assuming that factor A is used ffect of factor A evaluates the mean differences between rows. called the Beffect). Assuming that factor Bis used to define i) The mai lin eff 1 h ecolumnsof the matrix the main effect of factor B evaluates the mean differences ‘Ween columns. ~~
