Pr2 Chapter 3 Module
Practical Research 2
1st Semester – Final Term
Understanding Data
and
Ways to Systematically Collect Data
Lesson 14: The Quantitative Research Design
The research design will guide you in choosing the strategy, data
collection, measurement, and data analysis that you will use in your
research to answer your research problem. In this lesson, the focus is on the
quantitative research design, its types, and its strengths and weaknesses.
What’s New
The quantitative research design focuses on gathering numerical
data. Its methods highlight objective measurements and the statistical,
mathematical, or numerical analysis of data. In this design, data are
collected through polls, questionnaires, and surveys (LeTourneau University,
2020).
Some types of quantitative research design include comparative research and evaluative research.
What is It
Quantitative research can be experimental or non-experimental.
In experimental research, an independent variable is manipulated to see the effects on the dependent variables.
In non-experimental research, the independent variable is not
manipulated and there is no random assignment to groups.
Quantitative research usually finds answers using variables. It also
demonstrates the relationships among the variables.
A variable is a condition or characteristic that can take on different values or categories. It is an independent variable (IV) when it is being manipulated by the researcher, while the dependent variable (DV) is the one being observed and measured by the researcher (Scribbr, n.d.).
1. Experimental research
In experimental research, an independent variable is manipulated to determine the effects on the dependent variables (Scribbr, n.d.). An independent variable (IV) is a variable that is presumed to cause a change to occur in another variable. A dependent variable (DV) is the variable that is presumed to be influenced by one or more independent variables (Johnson & Christensen, 2014).
Example:
A teacher wants to test the effectiveness of a new technique of
teaching how to solve problems in mathematics. Before the start of the
experiment, the group to be used is given an achievement test about the
problems to be covered. After the experimental period, the same test in
another form is given to the group as a posttest.
a. Pre-experimental research design
In this research design, a single group is usually studied.
b. Quasi-experimental research design
In this design, participants are not randomly assigned to the treatment and control groups.
Example:
A researcher wants to evaluate a new method of teaching
fractions to fourth graders. He conducts the study with a
treatment group consisting of one class of grade 4 pupils and a
control group consisting of another class of grade 4 pupils. In
this study, the pupils are not randomly assigned to classes by
the teacher (Price, Jhangiani, & Chiang, n.d.).
c. True experiment
• In this design, the researcher has to manipulate the
variable that is hypothesized to affect the dependent
variable that is being studied.
• In this design, research subjects have to be randomly
assigned to the sample groups.
Example:
2. Non-experimental research
In non-experimental research, the independent variable is not
manipulated and there is no random assignment to groups. Non-
experimental research can be descriptive, causal-comparative, or
correlational research.
a. Descriptive research
It describes the current status of an identified variable.
Descriptive research projects are designed to provide information
about a phenomenon without doing any comparison or findings of the
relationship between variables. It is concerned with conditions or relationships that exist, practices that prevail, beliefs, processes that are going on, effects that are being felt, or trends that are developing.
The most common descriptive research method is the survey, which
includes questionnaires, personal interviews, phone surveys, and
normative surveys (Koh & Owen, 2020).
Example:
Teacher A wants to determine the satisfaction of the SHS students with the Alternative Delivery Mode.
b. Correlational research
Correlational research tries to determine the extent of a
relationship between two or more variables using statistical data. It
also seeks to figure out if two or more variables are connected and in
what way (Study.com, 2003).
b.1. Types of Correlational Research
b.1.1. Positive correlational research
A type of correlational research comprising two (2) variables that are statistically parallel, where an increase or decrease in one (1) variable is accompanied by a corresponding change in the other (Formplus, 2020).
Example:
A researcher wants to find out if an increase in workers'
salaries will increase the prices of commodities and services
and vice versa.
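To see what such a relationship looks like in numbers, the strength and direction of the relationship between two variables can be summarized with the Pearson correlation coefficient (r). The short Python sketch below is only an illustration; the salary and price figures are invented for the example, not actual data.

    import numpy as np

    # Hypothetical paired observations: workers' average salary (in pesos)
    # and a commodity price index recorded over six periods.
    salaries = np.array([10000, 12000, 14000, 16000, 18000, 20000])
    price_index = np.array([100, 104, 110, 113, 119, 125])

    # Pearson r: +1 means a perfect positive correlation, -1 a perfect
    # negative correlation, and 0 no linear relationship at all.
    r = np.corrcoef(salaries, price_index)[0, 1]
    print(f"Pearson r = {r:.2f}")  # close to +1, so the two variables rise together

A value of r close to +1 supports a positive correlation, but remember that correlational research by itself does not prove that one variable causes the change in the other.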
c. Causal-comparative research
It attempts to establish cause-effect relationships among the variables. This type of design is very similar to true experiments, but with a key difference: the independent variable is identified but not manipulated by the experimenter.
Example:
d. Evaluative research
• Evaluative research enhances knowledge and decision-
making and leads to practical applications.
Example:
Lesson 15: The Sample and Sampling Procedures
In this lesson, the general types of sampling and the steps to compute the sample size are highlighted. It is deemed important that researchers are familiar with the basic sampling techniques, as this will save time and effort while doing the research.
What’s New
In a quantitative research study, information is collected from various sources. More often than not, the source of data includes persons or a group of individuals. You must be able to differentiate the terminology used to refer to these persons. Using the proper terminology contributes to a better understanding of your research. Participants, respondents, and subjects are the people whom the researcher selects for the study.
Subjects are the people in the researcher's experiment, usually in quantitative research. "Subject" is a term used more in the sciences (Quizlet, 2020).
Sources: Johnson & Christensen (2017) and Calderon & Gonzales (1993)
What is It
What are sampling and samples?
Sampling is a way of determining the research sample, or the number of respondents/participants in the study. It may be defined as measuring a small portion of something and then making a general statement about the whole thing. It produces samples that are a part or portion of the whole population.
What is Population?
Population refers to the total number of people, objects, or things under
study. It is the totality of individuals that possess some observable characteristics, also known as variables.
General Types of Sampling
There are two (2) general types of sampling: probability sampling and
nonprobability sampling.
Probability Sampling

Stratified random sampling
• The process of randomly selecting samples from the different strata (groups) of the population used in the study.
• This is used when the population of the inquiry has class stratifications or groupings.
• This method is used when the population is heterogeneous, where certain homogeneous groups, or groups with similar characteristics, can be isolated to form strata.
• A stratified sample is obtained by taking samples from each stratum or group of the population.
Example: Suppose the students of a college are the respondents in a study. The students are stratified according to the courses they are taking, their sex, and the curricular years they are in. A sample of 20% is taken from every stratum based on course, sex, and curricular year. For instance, there are 50 male students in the first year taking education; the sample, 20% of 50, is 10. There are 380 female students in the second year taking up marketing; the sample, 20% of 380, is 76.

Cluster sampling
• It is usually used when the population is unknown or the researcher cannot complete the total list of the members of the population he wishes to study but can only complete the list of groups or clusters of the population.
Example: In a survey of nurse-applicants in various employment agencies, the researcher selects several agencies at random and interviews every 10th nurse applicant in those agencies.
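To make the 20% example concrete, here is a short Python sketch of drawing a simple random sample from each stratum. The student lists are invented stand-ins for an actual sampling frame, and the 20% rate follows the example above.

    import random

    # Hypothetical sampling frame: student ID numbers grouped by stratum
    strata = {
        "1st year male, Education": list(range(1, 51)),       # 50 students
        "2nd year female, Marketing": list(range(51, 431)),   # 380 students
    }

    random.seed(1)  # fixed seed so the illustration is reproducible
    for stratum, members in strata.items():
        k = round(len(members) * 0.20)       # 20% of every stratum
        chosen = random.sample(members, k)   # random selection within the stratum
        print(f"{stratum}: {k} of {len(members)} students selected")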
Nonprobability Sampling
Accidental sampling
• A method of selecting the subjects who happen to be available at that time or who volunteer themselves to be the subjects of the study.
• This is said to be the weakest of all sampling procedures because it is impossible to estimate the error from the sampling in the process of selection.
Example: An interviewer stands at a street corner and interviews everyone who passes by.
Purposive sampling
• This is also called judgment sampling because sample groups are judged to be typical of the chosen population.
• This method simply means choosing the sample with a specific purpose or objective in mind. Thus, you must decide the criteria for choosing your samples.
• It determines the target population, those to be involved in the study.
• In this technique, the respondents are chosen based on their knowledge of the information desired.
Example: If research is to be conducted on the history of a place, the old people of the place must be consulted. If methods and techniques of teaching are the subjects of an inquiry, teachers are the ones to be contacted.
To determine the sample size, Slovin's Formula may be used:
n = N / (1 + Ne²)
where:
n = sample size
N = the size of the population
e = the margin of error
4. If the sampling is clustered, or if the population is stratified, compute
the sample proportion (percent) by dividing the result in No. 3 by the
population.
5. Multiply the number of sampling units in each final sampling stratum
by the rate (percent) to find the sample from each final sampling
stratum.
6. Add the samples from all the final sampling strata to find the total
sample.
Note: If the population (N) is not given but the sample size (n) and the percentage/proportion (%) are identified, we can divide n by the proportion (N = n ÷ %) to get N.
Example:
A study of the teaching of science in the high schools of the division will be
conducted, and science teachers will be the respondents. There are 245 teachers
of biology, 247 teachers of chemistry, and 121 teachers of physics. There is a total
of 613 respondents.
The sampling procedure follows:
n = 613 / [1 + 613(0.05²)] = 613 / [1 + 613(0.0025)]
n = 613 / (1 + 1.5325)
n = 613 / 2.5325 = 242.05 ≈ 242
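The same computation can be checked with a few lines of Python. This is only a sketch; the function name slovin is an arbitrary choice for the illustration.

    def slovin(population: int, margin_of_error: float) -> float:
        """Slovin's Formula: n = N / (1 + N * e^2)."""
        return population / (1 + population * margin_of_error ** 2)

    n = slovin(613, 0.05)
    print(round(n, 2))  # 242.05
    print(round(n))     # 242 respondents after rounding to a whole number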
If the researcher will use stratified sampling, the process will involve the following:
Step 4. The teachers are grouped into three categories according to the branch of science they are teaching, so stratified sampling is used.
Step 5. Get the sample proportion or the percentage per group using the formula:
Sample proportion/percentage (%) = (n ÷ N) × 100
where n is the number in each stratum and N is the total population.
Note: 39.97% is 0.3997 in decimal form. Round the computed sample off to a whole number since we are talking about the number of persons.
Example for Stratified Sampling (the total sample must be 242)

Subject       Number of Teachers per major (n)   Percentage                     Sample
Biology       245                                (245 ÷ 613) × 100 = 39.97%     0.3997 × 242 = 96.72 ≈ 97
Chemistry     247                                40.29%                         0.4029 × 242 = 97.50 ≈ 98
Physics       121                                19.74%                         0.1974 × 242 = 47.77 ≈ 48
Total (N)     613                                100%                           243

Adjusted Values:
n(stratum) = [N(stratum) ÷ N] × n
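Steps 5 and 6 and the adjusted values can likewise be verified with a short sketch. The dictionary below simply restates the strata from the example; the total sample of 242 comes from Slovin's Formula above.

    strata = {"Biology": 245, "Chemistry": 247, "Physics": 121}
    N = sum(strata.values())   # 613 teachers in the population
    n = 242                    # total sample from Slovin's Formula

    # Proportional allocation: n(stratum) = N(stratum) / N * n
    for subject, size in strata.items():
        share = size / N * n
        print(f"{subject}: {size / N:.2%} of the population -> about {round(share)} teachers")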
RAOSOFT CALCULATOR
Aside from using Slovin's Formula, you could also use the Raosoft sample size calculator online:
http://www.raosoft.com/samplesize.html
On the calculator page, encode the population size.
In writing the sampling technique, you need to specify how you select your samples. You need to explain the benefits and limitations of your selected sampling design. You also need to include the strength of the sample design or its practicality. Always consider the practicality and plausibility of your sampling design.
You may also consider these questions:
- Who are the samples of your study?
- Why choose these samples?
- How many?
- How will you select them?
Lesson 16: Construct an Instrument and Establish Its Validity and Reliability
This lesson talks about the essentials of the validity and reliability of the instrument to be used in research. In doing so, research topics that have qualitative themes utilize quantitative methods in establishing the credibility of their results. This enables you to construct an instrument and establish its validity and reliability. It is expected that in this lesson, learners can recognize both reliability and validity in the instrument to be utilized in the research study.
At the end of the lesson, you will be able to:
What is It
In research, the concern of a researcher is how to minimize possible errors
and biases by maximizing the reliability and validity of data. This then requires
that the tool for the collection of data is valid and reliable. This lesson explains the
technical meaning of these two concepts, the types of validity, and the methods of establishing reliability. It also provides examples of research instruments and shows how validity and reliability relate, which can be helpful to researchers.
Validity refers to the quality of an instrument being functional only for its specific purpose, that is, when an instrument measures what it is supposed to measure. Since the instruments of the study are used by the researcher in the
methodology to obtain the data, the validity of each one should be established
beforehand. This is to set the credibility of the findings and the correctness and
accuracy of the following data analysis. For instance, when a study investigates
the common causes of absences, the content of the instrument must focus on
these variables and indicators. Similarly, when a researcher formulates a problem
about the behavior of the students during school assemblies, the instrument must
consist of the indicators or measures of the behavior of students during such time.
Types of Validity
In Educational Testing and Measurement: Classroom Application and
Practice, Kubiszyn and Borich (2007) enumerate the different types of validity.
1. Face Validity. This is also known as logical validity. It involves whether the
instrument is using a valid scale. The procedure calls only for intuitive judgment: just by looking at the instrument, the researcher decides if it has face validity. It
includes the font, size, spacing, the size of the paper used, and other necessary
details that will not distract respondents from answering the questionnaire.
2. Content validity. This kind of validity is determined by studying the
questions to see whether they’re able to elicit the necessary information. An
instrument with high content validity has to meet the objective of the research.
This type of validity is not measured by the numerical index but instead relies on
logical judgment as to whether the test measures its intended subject.
Content validity is measured by subjecting the instrument to an analysis
by a group of field experts who have theoretical and practical knowledge of the
subject. Three to five experts would suffice. The experts assess the items of the questionnaire and determine if the items measure the variables being studied.
Then, the experts’ criticism will be considered in the revision of the instrument.
3. Construct Validity. This type of validity refers to whether the test
corresponds with its theoretical construct. It is concerned with the extent to which
a particular measure relates to other measures and to which it is consistent with
the theoretically derived hypothesis. Therefore, the process of construct
validation is theory-laden. Factor analysis, a relevant technique to construct
validity, is a refined statistical procedure that is used to analyze the
interrelationship of behavior data.
4. Criterion-related Validity or equivalent test. This type of validity is an
expression of how scores from the test are correlated with an external criterion.
There are two types of this kind of validity.
a. Concurrent validity. It deals with measures that can be
administered and validated at the same time. It is determined by administering
both the new test and the established test to a group of respondents, then finding
a correlation between the two sets of scores. Validity is established by comparison with an accepted and available second test that measures what the researcher is trying to measure.
Example:
The Stanford-Binet V, a widely accepted standardized IQ test, is used to determine the IQ of nursing students. The researcher then designs a short screening test that measures the same construct. The scores on the Stanford-Binet V and the short screening test are compared to assess the relationship between the scores.
b. Predictive validity. It refers to how well the test predicts the future behavior of the examinees. This is particularly useful in aptitude tests, which are tests designed to predict how well test-takers will perform in some future setting.
It is advised that when a drafted questionnaire is to be subjected to validation, a rating sheet on the acceptability of the indicators must be provided for the experts to mark and give their judgment. The markings and comments of the experts who validated the proposed questionnaire will be the basis of the revision of the proposed instrument or questionnaire.
Reliability refers to the consistency of the results of an instrument in
repeated trials. A reliable instrument can also be used to verify the credibility of
the subject if the latter yields the same results in several tests. However, this is
only true if the instrument used is valid. It is important to note that, while a valid
instrument is always reliable, a reliable instrument is not always necessarily valid.
This is most especially true when the subjects are human, who are governed by
judgment and prone to error. Nevertheless, testing the reliability of an instrument
is very crucial in research studies that deal with a lot of samples.
For example, Jaycee, who is monitoring her weight, uses a weighing scale.
She weighed herself in the morning, afternoon, and evening and recorded the
results afterward. Her recorded weights are 65 lbs, and 70 lbs respectively. The
weighing scale can be considered reliable since the deviation of the results is small
and negligible.
Methods in Establishing Reliability
1. Test-retest or stability. In this method, the same test is given to a group of
respondents twice. The scores in the first test are correlated with the scores in the second test. When there is a high correlation index, it means that there is also
high reliability of the test. Some of the problems here are the observations that
some subjects may be able to recall certain items given during the first
administration of the test, and that the scores may differ because the students
have adapted to the test.
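Because the test-retest method boils down to correlating the two sets of scores, it can be illustrated with a short Python sketch. The scores below are invented for the example; in an actual study they would be the respondents' scores on the first and second administrations.

    from scipy import stats

    # Hypothetical scores of six respondents on two administrations of the same test
    first_test = [78, 85, 62, 90, 71, 88]
    second_test = [80, 83, 65, 92, 70, 85]

    r, p_value = stats.pearsonr(first_test, second_test)
    print(f"Test-retest correlation: r = {r:.2f}")  # a high r suggests high reliability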
Carmines and Zeller (1979), in the book Reliability and Validity Assessment,
list the weaknesses identified using the test-retest method:
a. Even if the test-retest correlation can be computed and established, its
interpretation is not necessarily straightforward. A low test-retest correlation may
not indicate that the reliability of the test is low but rather signify that the underlying theoretical framework has changed. The longer the time interval between measurements, the more likely it is that the concept has changed.
b. Reactivity refers to the fact that sometimes the very process of measuring a phenomenon can induce a change in the phenomenon itself.
c. Overestimation due to memory is another weakness in using the test-retest method. The person's mental recollection of the responses he or she gave during the first measurement is quite likely to influence the responses he or she gives during the second measurement; such memory effects may influence reliability estimates.
2. Internal Consistency. If the test question is designed to measure a single
basic concept, it is reasonable to assume that a respondent who gets one item
right is likely to be right in another similar item. In other words, items should be
correlated with each other and the test ought to be internally consistent.
Reliability is directly related to the validity of the measure. There are several important principles. First, a test can be considered reliable but not valid. Consider the SAT, used as a predictor of success in college. It is a reliable test (high scores relate to high GPA), though only a moderately valid indicator of success (due to the lack of a structured environment: class attendance, parent-regulated study, and sleeping habits, each holistically related to success).
Finally, the most useful instrument is both valid and reliable. Proponents of
the SAT argue that it is both. It is a moderately reliable predictor of future success
and a moderately valid measure of a student’s knowledge in Mathematics, Critical
Reading, and Writing.
There are other criteria for assessing validity and reliability that can be used in assessing the literature (Polit & Beck, 2004). These are sensitivity, specificity, comprehensibility, precision, speed, range, linearity, and reactivity.
Sensitivity. The instrument should be able to identify a case correctly, i.e., to screen or diagnose a condition correctly.
Specificity. The instrument should be able to identify a non-case correctly, i.e. to
screen out those without the conditions correctly.
Comprehensibility. Subjects and researchers should be able to comprehend the
behavior required for accurate and valid measurements.
Precision. The instrument should discriminate among people who exhibit varying
degrees of an attribute as precisely as possible.
Speed. The researcher should not rush the measuring process so that he or she
can obtain a reliable measurement.
Range. The instrument should be capable of detecting the smallest expected value
of the variable to the largest, to obtain meaningful measurements.
Linearity. The researcher normally strives to construct measures that are equally
accurate and sensitive over the entire range of values.
Reactivity. The instrument should, as much as possible, avoid affecting the
attribute being measured.
The following are examples of establishing the validity and reliability of an
instrument.
Example 1
Data gathering employed two sets of survey questionnaires, one for the students and one for the teachers. These were developed by the researcher with the approval of the advisory committee. Pre-testing was done to improve the survey questionnaires with the students of Doña Juana Chico National High School and the teachers of Rizal National High School, who did not serve as respondents.
The results of the pre-test were analyzed to ensure clarity and to determine whether they could yield the data needed in the study. The pre-test results showed a Cronbach's Alpha reliability coefficient of 0.923, indicating good reliability of the instrument. As a rule, Cronbach's Alpha must be at least 0.80 for an instrument to be considered reliable.
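For readers who want to see how such a coefficient is obtained, here is a minimal Python sketch of Cronbach's Alpha using a small, made-up set of pre-test responses (rows are respondents, columns are questionnaire items); it illustrates the formula only and does not reproduce the actual pre-test data of the example.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's Alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)."""
        k = scores.shape[1]                              # number of items
        item_variances = scores.var(axis=0, ddof=1)      # variance of each item across respondents
        total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical 4-point Likert responses of five respondents to four items
    responses = np.array([
        [4, 4, 3, 4],
        [3, 3, 3, 3],
        [2, 2, 1, 2],
        [4, 3, 4, 4],
        [1, 2, 2, 1],
    ])
    print(f"Cronbach's Alpha = {cronbach_alpha(responses):.3f}")  # about 0.94 for these made-up data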
Example 2
The instrument underwent validation. It was pre-tested at Dr. Gloria D. Lacson General Hospital in San Leonardo, Nueva Ecija, which is not included in the study.
A group of seven staff nurses and two nurse supervisors were requested to
answer the questionnaires upon approval of the permit addressed to the hospital
director. The results were checked and analyzed. After 15 days, the corrected
questionnaire was administered to the same respondents. A reliability coefficient of 80% and above indicated that the instrument is valid, reliable, and ready to use.
Cristobal, Amadeo, and Maura Consolacion D. Cristobal. Practical Research. Diwa Learning System, 2017
Adopting or Adapting an Instrument
Adopting an Instrument
Adopting an instrument is quite simple and requires very little effort. Even
when an instrument is adopted, though, there still might be a few necessary
modifications. For example, the Intrinsic Motivation Inventory that measures
intrinsic motivation, which can be found here, needs to be slightly modified to
reflect the specific situation that the researcher is interested in. Intrinsic
motivation is not a general variable but is directed at a specific activity: intrinsic
motivation in Mathematics, intrinsic motivation in social studies, intrinsic
motivation in playing a sport, intrinsic motivation in reading a book, etc.
Therefore, the items on the Intrinsic Motivation inventory should reference that
specific activity. For example, an item on the Intrinsic Motivation Inventory reads,
"I enjoyed doing this activity very much." How will the participants know what
"this activity" is? Therefore, the researcher should modify the item to read "I
enjoyed the math computer program very much." Note that the substance of the
item was not changed, only the reference of "this activity."
Even though adopting an instrument requires little effort on the part of the researcher, the questionnaire still must be appropriately designed, so you must know how to develop a questionnaire.
When an instrument is adopted, it is important to appropriately describe
the instrument in the Instruments section of Chapter 3. In the description, include
• Who developed the instrument?
• Who validated the instrument?
• Other studies that have used the instrument
Adapting an Instrument
These research instruments or tools are ways of gathering data. Without them, data would be impossible to put in hand. The questionnaire is the most common instrument or tool of research for obtaining data beyond the physical reach of the observer; it may be sent to human beings who are thousands of miles away or just around the corner.
Closed form / closed-ended questions can be answered with "Yes" or "No," or they have a limited set of possible answers (such as A, B, C, or All of the Above). Closed-ended questions are often good for surveys because you get higher response rates when users don't have to type so much. Also, answers to closed-ended questions can easily be analyzed statistically, which is what you usually want to do with survey data.
Dear Respondents,
May I ask for your time to answer this questionnaire so that the researcher can gather the necessary information needed to complete her study with the title stated above. Rest assured that the data that you will share will be kept in utmost confidentiality.
- The Researcher
Direction: Below are indicators of the effectiveness of the Math After School Habit in developing motivation in learning. Put a check (/) in the column that corresponds to your agreement or disagreement using the scale below.
(4) SA - Strongly Agree   (3) A - Agree   (2) DA - Disagree   (1) SD - Strongly Disagree
Indicators                                                          4    3    2    1    WAM    VD
With the activities given to be done after school…
1. I am able to develop engagement in regular acts of studying      12   15   3    0
2. I tend to organize and keep good notes                           15   10   3    2
3. I read the textbook and listen in class                          1    8    17   4
4. I learned how to manage my time for studying at home and
   other things after school                                        12   13   4    1
5. I feel my desire to succeed                                      17   5    4    4
6. I become good in concentration and memorization                  14   16   0    0
7. I am able to see resources at home to learn my lessons           8    16   6    0
8. I am able to develop a positive mindset in studying the
   mathematics subject                                              7    8    9    6
9. I become excited to attend mathematics class                     6    15   9    0
10. I become more hardworking in doing my assignments               10   10   7    3
Overall WAM
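The WAM (weighted arithmetic mean) column is computed from the response counts by weighting each response with its scale value. The sketch below shows the computation for indicator 1; the verbal-description cut-offs used here are a common choice for a 4-point scale and are only illustrative.

    def weighted_arithmetic_mean(counts_4_to_1):
        """WAM = sum(scale value x count) / total number of responses."""
        weights = [4, 3, 2, 1]
        total = sum(counts_4_to_1)
        return sum(w * c for w, c in zip(weights, counts_4_to_1)) / total

    def verbal_description(wam):
        # Illustrative cut-offs for a 4-point scale
        if wam >= 3.25:
            return "Strongly Agree"
        if wam >= 2.50:
            return "Agree"
        if wam >= 1.75:
            return "Disagree"
        return "Strongly Disagree"

    # Indicator 1: 12 Strongly Agree, 15 Agree, 3 Disagree, 0 Strongly Disagree
    wam = weighted_arithmetic_mean([12, 15, 3, 0])
    print(f"WAM = {wam:.2f} ({verbal_description(wam)})")  # WAM = 3.30 (Strongly Agree)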
Research Methodology
Research Design
This study will use the mixed method of research. Descriptive and evaluative
designs will be utilized in describing the profile and in evaluating the classroom
observation rating as well as the comparison among the math teachers' performances as assessed through classroom observation and collaborative feedback. The said designs are believed to be appropriate in
determining and describing quantitative data.
The descriptive method will also be used in determining how classroom observation and collaborative feedback improved math teachers' performance relative to the teaching-learning process and class management, pupils' outcomes, and professional growth and development.
The qualitative method, through the use of an interview form that the researcher will devise, will be used in elaborating how class observation and collaborative feedback enhance teachers personally. The qualitative method, with the interview as a tool, will allow the researcher to gather qualitative data in addition to the rating (quantitative data) to further explain and provide evidence on how the practice of class observation and collaborative feedback truly enhances mathematics teachers' performance in general.
Research Locale
The locale of the study will be the Lopez West District in the Division of
Quezon. In Lopez West District, there are 9 secondary schools which offer both
Junior High School and Senior High School programs. In terms of secondary school
sizes within the said locale, there is one large school, one medium school, and seven small schools situated in the different barangays in the Municipality of Lopez. The stated
schools have mathematics teachers in SHS and JHS.
Class observation and collaborative feedback are also a practice in the district. In large schools, teachers are mostly observed by the principal and/or department heads and master teachers, while in small schools the OIC takes charge of the observation.
The respondents of the study will be the selected mathematics teachers from the 9 secondary schools in the West District of Lopez, Quezon, which are categorized into small, medium, and large. To determine the sample size, the researcher will use the Raosoft calculator, while stratified sampling will be used in determining the number of respondents from each school category to be included in the study. From the 31 mathematics teachers in the West District, 29 will be selected.
Table 1.
Distribution of Respondents
Research Instrument
The research includes two instruments to gather the data. The first is the survey form, which will gather data on the respondents' profile, the class observation rating, and indicators of how class observation enhances teacher performance; the second is an interview form. The instruments used can be found in Appendix A.
Statistical Treatment
On the other hand, the weighted mean will be used to measure how classroom observation enhances math teachers' performance relative to the teaching-learning process, class management, student outcomes, and professional growth. To determine the significant difference in the Mathematics teachers' performance through class observation and collaborative feedback in the new normal when they are grouped according to profile, the t-test and one-way ANOVA will be used. Specifically, the t-test will be used in finding the significant difference in the teachers' performance when they are grouped according to sex. Likewise, ANOVA will be used when respondents are grouped according to size of school, rank, years in service, and number of preparations, since these include more than two groups.
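As a sketch of how these tests might be run once the ratings are encoded, the example below uses the scipy library; the scores are invented placeholders, not data from the study.

    from scipy import stats

    # Hypothetical performance ratings grouped by sex (independent-samples t-test)
    male_teachers = [3.4, 3.8, 3.6, 3.9, 3.5]
    female_teachers = [3.7, 3.9, 4.0, 3.8, 3.6]
    t_stat, p_value = stats.ttest_ind(male_teachers, female_teachers)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 would indicate a significant difference

    # Hypothetical ratings grouped by school size (one-way ANOVA, more than two groups)
    small = [3.5, 3.6, 3.8, 3.4]
    medium = [3.7, 3.9, 3.8]
    large = [3.9, 4.0, 3.8, 3.7]
    f_stat, p_value = stats.f_oneway(small, medium, large)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")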
APPENDIX
I. Respondents’ Profile
School size: ( ) small ( ) medium ( )Large
Rank: ( ) T-I ( )T-II ( )T-III ( )MT-I ( )MT-II
Field of specialization:
( )Non-Education
( ) Education but Non-Math major
( ) Education major in Math
Indicators                                                            4    3    2    1    WAM    VD
Through class observation and collaborative feedback…
Teaching-Learning Process
1. I gain new strategies and methodologies to deliver my lesson
   in new normal modalities                                           15   14   1    0
2. I am able to reflect on my practices                               20   5    4    1
Class Management
1. I acquire strategies in imposing reinforcement and motivation      10   20   0    0
2. I learn new techniques on how to manage classes in the new
   normal                                                             10   19   1    0
Students' Outcome
1. Students become participative during the class encounter          8    12   8    2
2. Students enjoyed the learning activities provided that I
   learned from feedback                                              12   18   0    0
1. What changes in your life as a Math teacher happened because of the class observation and collaborative feedback practice in your school?
2. How do class observation and collaborative feedback influence you personally as a Mathematics teacher?