
IAR Lecture 3

Pre-lecture reading of article


Disclosure Checklists and Auditors’ Judgments of Aggressive Accounting - Van Rinsum et
al. (2018)

Introduction:
This study investigates whether auditors who feel accountable to management (as opposed to the audit committee) are susceptible to pro-client bias after using a disclosure checklist.

Assuming that all required disclosures are indeed made, as is the case in our setting, we argue that disclosure checklist use increases the likelihood that auditors' independence is impaired when they are appointed by the client firm's management.

We manipulate two factors: (1) whether the auditor was appointed by the firm’s management
or an independent audit committee, and (2) disclosure checklist use.

We find that the use of a disclosure checklist indeed increases acceptance of aggressive
accounting methods by the auditors who are accountable to management. When auditors
are appointed by the audit committee, we find no such effect of disclosure checklist use.

Our results imply that in a situation in which all required disclosures are made, checking off a
disclosure checklist results in a less critical state of mind, leading auditors to judge the
accounting method per se as more acceptable when management is the appointing party.
Lecture notes
Introduction & Recap:
1. Recap
2. ANOVA (analysis of variance)
3. Van Rinsum, Maas, Stolker 2018 article
4. Ordinary least squares regression (OLS)

Next week: more regression models.

Recap:
- How do we know whether the evidence is strong enough to reject H0?
- That is, how do we know whether results from the sample can be generalised to the population of the research?
- Testing strategy for this problem:
1. Pick sample n
2. Calculate the sample mean (average)
- The empirical estimate of the population mean.
3. Calculate the standard error of the sample mean (SEx)
- SEx = s/√n
- with s being the sample standard deviation
4. Calculate the test statistic, which measures the distance between the sample mean x̄ and mu0 in terms of standard errors:
- t = (x̄ - mu0) / SEx
5. Compare the obtained t-statistic with critical values from its distribution (here a t-
distribution with n-1 degrees of freedom)
- The critical values at which H0 is rejected depend on alpha, which indicates the maximum allowed probability of making a type I error (rejecting H0 when it is actually true). Most common levels: 0.10, 0.05, 0.01.
- Reject H0 when the p-value < alpha
- Equivalently, reject H0 when the test statistic is larger/smaller than the critical values associated with alpha (see the R sketch below)
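
The five steps can be run directly in R. Below is a minimal sketch on made-up data, assuming mu0 = 100 (both the data and mu0 are hypothetical, not from the lecture); the built-in t.test gives the same result.

```r
# Minimal sketch of the five-step recap on made-up data (mu0 = 100 assumed)
x <- c(102, 98, 105, 110, 97, 103, 108, 95, 101, 106)  # 1. the sample
n <- length(x)
xbar <- mean(x)                            # 2. sample mean
se <- sd(x) / sqrt(n)                      # 3. standard error of the mean
t_stat <- (xbar - 100) / se                # 4. test statistic
p_val <- 2 * pt(-abs(t_stat), df = n - 1)  # 5. two-sided p-value
c(t = t_stat, p = p_val)
t.test(x, mu = 100)                        # built-in equivalent, same t and p
```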

ANOVA - Analysis of Variance:


- Used very often in academic research
- Instead of ANOVA you can also use regressions. Regression outputs are more
usable according to the lecturer.
- Not going to discuss all the details. More a general understanding, which is sufficient
for reading academic papers.
- So far, we only looked at one variable. This week we will look at relationships
between two variables.
- i.e. how does one variable influence another one?
- Begin with categorical independent variables
- For two groups, use a two-sample t-test or a paired t-test.
- ANOVA is a generalization of the two-sample t-test. For exactly two groups, the two tests are equivalent (as sketched below).
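
A small illustration of that equivalence on simulated data (a hypothetical example, not from the lecture): the two-group ANOVA's F statistic equals the squared t statistic of the pooled-variance t-test.

```r
# Simulated two-group data: the ANOVA F statistic equals the squared t
# statistic of the pooled-variance two-sample t-test
set.seed(1)
d <- data.frame(y = c(rnorm(30, mean = 10), rnorm(30, mean = 12)),
                g = rep(c("A", "B"), each = 30))
tt <- t.test(y ~ g, data = d, var.equal = TRUE)   # pooled-variance t-test
av <- anova(aov(y ~ g, data = d))                 # two-group ANOVA
c(t_squared = unname(tt$statistic)^2, F = av$`F value`[1])  # identical
```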
ANOVA for two or more groups
- Basic ANOVA offers a way to test whether the observed differences in means of a
variable across two or more groups within a sample are statistically significant.
- H0: mu1 = mu2 = mu3, etc.
- No difference except for random chance.
- H1: not all mu’s are equal
- This doesn’t specify which one is not equal to others
- Idea: decompose the total variance of a sample into a between-group and a within-group component ("error").
- An independent variable in ANOVA language is called a factor.

ANOVA Procedure:
1. Compare the mean squares for between and within variation
2. Mean squares: sums of squared deviations of observed values from predicted values, divided by their respective degrees of freedom.
3. Total sum of squares:
- SST = SSA + SSE
- SST = total variation: SST = Σi Σk (Xik - X̄)^2, with X̄ the grand mean
- SSA = variation between groups: SSA = Σi ni (X̄i - X̄)^2, with X̄i the mean of group i
- SSE = variation within groups: SSE = Σi Σk (Xik - X̄i)^2
4. Next you divide both terms by their respective degrees of freedom to get the mean
squares:
- (I - 1): number of groups minus one for between groups
- (n - I): number of total observations (n) minus number of groups for within
groups
5. The ratio of the mean squares gives the test statistic, which conveniently follows an F-distribution with (I - 1) and (n - I) degrees of freedom (critical values have to be looked up in an F-table); see the R sketch below.
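
To make the procedure concrete, here is a sketch on simulated data (hypothetical, not from the lecture) that computes SSA, SSE, and the F statistic by hand and checks the result against R's built-in aov().

```r
# One-way ANOVA "by hand" on simulated data: decompose SST into SSA + SSE,
# form the F statistic, and compare with R's built-in aov()
set.seed(2)
d <- data.frame(y = c(rnorm(20, 5), rnorm(20, 6), rnorm(20, 8)),
                g = factor(rep(1:3, each = 20)))
grand <- mean(d$y)                      # grand mean
grp_n <- tapply(d$y, d$g, length)       # group sizes n_i
grp_m <- tapply(d$y, d$g, mean)         # group means
SSA <- sum(grp_n * (grp_m - grand)^2)   # between-group variation
SSE <- sum((d$y - ave(d$y, d$g))^2)     # within-group variation
I <- nlevels(d$g); n <- nrow(d)
F_stat <- (SSA / (I - 1)) / (SSE / (n - I))
p_val <- pf(F_stat, I - 1, n - I, lower.tail = FALSE)
c(F = F_stat, p = p_val)
summary(aov(y ~ g, data = d))           # should report the same F and p
```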

Interaction effect:
- TSS = SSA + SSB + SSAB + SSE
- SSA: variation explained by A
- SSB: variation explained by B
- SSAB: calculated from the difference between each cell mean and the cell mean you would predict from the two factors' main effects alone (this is the interaction effect)

Interaction effect: the effect of one factor depends on the level of another factor.
TSS = the total sum of squares.
The degrees of freedom for the calculation of the mean squares are:
- (I-1) for SSA
- (J-1) for SSB
- (I-1)(J-1) for SSAB
- (n-I*J) for SSE

The F-statistic for the significance tests of the individual factors and their interaction is again calculated as the ratio of the respective mean squares to the mean squares of the error.

ANOVA - R example
The effect of college education and gender on future income.

H0: obtaining a bachelor's degree does not affect a person's future income
H1: obtaining a bachelor's degree does affect a person's future income
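
The lecture's own R code is not reproduced in these notes, so the sketch below uses simulated data and assumed variable names (income, educ, gender) purely to show the structure of a two-factor ANOVA with an interaction term.

```r
# Simulated data for the education x gender example (variable names assumed);
# educ * gender expands to educ + gender + educ:gender (the interaction)
set.seed(3)
n <- 200
educ   <- factor(sample(c("no_bsc", "bsc"), n, replace = TRUE))
gender <- factor(sample(c("female", "male"), n, replace = TRUE))
income <- 30 + 10 * (educ == "bsc") + 2 * (gender == "male") +
          3 * (educ == "bsc") * (gender == "male") + rnorm(n, sd = 5)
summary(aov(income ~ educ * gender))   # rows: educ, gender, educ:gender, Residuals
```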

Van Rinsum, Maas, Stolker (2018)


Topic: auditing.
Method: laboratory experiment with experienced auditors

The method is unusual, since most auditing papers are based on archival research.

OLS Regressions
Regressions are the workhorse of econometrics.
Today: the basics of regression.
ANOVA is useful when we are interested in differences between groups.
- Explanatory (independent) variables are categorical.

In many other cases the explanatory variable is continuous.

- ANOVA doesn't work for continuous explanatory variables
- Instead: use Ordinary Least Squares (OLS) regression
- OLS also works with categorical variables, by the way, so it can be used instead of ANOVA as well.

Conditional expectation function: the expected value of Y given X. A regression breaks Y down into a part explained by X and an unexplained part.

Scatterplot shows all the observed data in a dataset. The regression line shows a trend
through these observations.
OLS: minimize unexplained part of Y (residual e)

Assuming a linear relationship, the regression equation is:

Y = a + bX + e
- a = the intercept
- b = the slope coefficient (the effect of the treatment)
- e = the error term, the difference between the observed Yi and the predicted values (generated by the regression model)

OLS: choose a and b to minimize the sum of squared residuals (see the R sketch below).
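
A minimal sketch on simulated data: lm() fits the regression, and the closed-form OLS estimates (b = cov(X, Y) / var(X), a = ȳ - b·x̄) reproduce its coefficients.

```r
# OLS on simulated data: lm() and the closed-form estimates agree
set.seed(4)
x <- runif(100, min = 0, max = 10)
y <- 2 + 0.5 * x + rnorm(100)          # true intercept 2, true slope 0.5
fit <- lm(y ~ x)
coef(fit)                              # estimated a and b
b_hat <- cov(x, y) / var(x)            # closed-form slope
a_hat <- mean(y) - b_hat * mean(x)     # closed-form intercept
c(a = a_hat, b = b_hat)                # match coef(fit)
```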

OLS Regression assumptions (also known as the Gauss-Markov assumptions)


1. We have a random sample of observations on Y and X from the population
2. There is indeed a linear relationship between X and Y
3. No perfect linear relationship among the explanatory variables, no perfect collinearity
a. A computational necessity
4. The conditional mean of the error term is zero: E(e | X) = 0
a. This means that the error term is exogenous and does not correlate with any of the explanatory variables
b. Endogeneity is a serious problem; it can be easily overlooked and results in biased coefficients. This will be dealt with later in this course.
5. The error terms are independent and identically distributed with zero mean
a. That means they all have the same mean of zero and the same variance
b. Having the same variance is called homoscedasticity
c. This assumption can easily be violated without causing biased coefficients, but violations reduce the precision of the estimates (see the residual check below).
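
As an informal illustration of assumptions 4 and 5, one can inspect the residuals of a fitted model. The sketch below uses simulated data that satisfies the assumptions by construction.

```r
# Informal residual checks on simulated data: residuals should be centred
# on zero with roughly constant spread across the fitted values
set.seed(4)
x <- runif(100, min = 0, max = 10)
y <- 2 + 0.5 * x + rnorm(100)
fit <- lm(y ~ x)
mean(residuals(fit))                     # ~0 by construction of OLS
plot(fitted(fit), residuals(fit),
     xlab = "Fitted values", ylab = "Residuals",
     main = "Residuals vs fitted: look for a random band around zero")
abline(h = 0, lty = 2)                   # a funnel shape would suggest heteroscedasticity
```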

OLS Regression example

Rewatch part 2: R example


One-factor ANOVA analysis.
H0: there is no difference in average future income between people who don't have a BSc degree and people who do have a BSc degree
H1: there is a difference in average future income between people who don't have a BSc degree and people who do have a BSc degree

EDUCBA: variable that indicates to which group a person belongs (BSc degree / no BSc degree)

What you want to establish with ANOVA:

- whether the difference in mean income observed between the groups (BSc or No BSc) in the sample is statistically significant, so that it can be said that there INDEED are income differences between the groups in the population.

If the F-statistic is at least as large as the critical value from the F-table at the chosen significance level (equivalently, if the p-value is at or below alpha), the difference is statistically significant.
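
Since the lecture's dataset is not included in these notes, the sketch below simulates income data for the two EDUCBA groups purely to show how the test would be run and interpreted.

```r
# Simulated stand-in for the EDUCBA example (real dataset not available here)
set.seed(5)
income <- c(rnorm(50, mean = 35000, sd = 8000),   # no BSc group
            rnorm(50, mean = 45000, sd = 8000))   # BSc group
EDUCBA <- factor(rep(c("no_bsc", "bsc"), each = 50))
res <- summary(aov(income ~ EDUCBA))
res                                    # F statistic and Pr(>F) for EDUCBA
res[[1]][["Pr(>F)"]][1] < 0.05         # TRUE: reject H0 at alpha = 0.05
```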
