
CHAPTER V – DATA AND COLLECTION INSTRUMENTS

RESEARCH-GS

The data collection process includes identifying data to be gathered; identifying the
sources of data; constructing an instrument for data collection; and administering the
instrument.
At the end of this lesson the students are expected to:
1) Identify research data from proposed problem statements;
2) Choose an appropriate instrument for data gathering; and
3) Design a data gathering instrument based on the requirements of the
research problem.
Data collection depends on the data types needed in the study. Data refers to
qualitative or quantitative attributes of a variable or set of variables. Data (plural of
"datum") are typically the results of measurements and can be the basis of graphs, images,
or observations of a set of variables. Data are often viewed as the lowest level of
abstraction from which information and then knowledge are derived.

Data on their own carry no meaning. For data to become information, they must be interpreted and take on meaning. For example, the height of Mt. Everest is generally considered "data"; a book on Mt. Everest's geological characteristics may be considered "information"; and a report containing practical advice on the best way to reach Mt. Everest's peak may be considered "knowledge".

Raw data refers to a collection of numbers, characters, images or other unprocessed outputs from devices that convert physical quantities into symbols. Such data are typically processed further by a human, or input into a computer, stored and processed there, or transmitted (output) to another human or computer, possibly through a data cable. Raw data is a relative term; data processing commonly occurs in stages, and the "processed data" from one stage may be considered the "raw data" of the next.

Experimental data refers to data generated within the context of a scientific investigation
by observation and recording. Field data refers to raw data collected in an uncontrolled in situ
environment.

Types of Data
Data are classified as primary or secondary. Primary data are gathered by the researchers themselves; these are often referred to as first-hand data or, in legal terms, prima facie evidence. Secondary data are gathered by people not directly involved in the study; these can be found in magazines, books or journals.

Sources of Data
The accuracy of data depends on knowing the right sources and the right methods of collecting them. In educational research, data are often collected from people, artifacts, documents or records. The choice of sources depends on the requirements and the design of the study.

People are called respondents, participants or interviewees, depending on the method of data collection used. Respondents or participants are usually interviewed or asked to accomplish questionnaires.

Artifacts are objects bearing evidence of use by the generations under study, such as buildings, rocks, artworks, furniture, clothing and equipment. Documents are records of past events, in the form of annual reports, memoranda, court decisions, diplomas or diaries.
Records, on the other hand, are listings of measurements taken on the characteristics or attributes under study, such as test scores, census results or statistical bulletins. In historical research, oral statements such as stories, myths, speeches, legends and songs, handed down through the ages, leave oral records for future generations (Fraenkel and Wallen, 1993).

A research instrument is a device designed or adopted by a researcher for data gathering. To draw accurate findings and conclusions, an instrument should be valid and reliable so that it objectively answers the problems stated in the study.
The choice of instruments depends on the nature of the problem and the research
design. The more commonly used instruments consist of questionnaires, interviews,
observations, tests and documents.

Reliability and Validity of Instruments


Reliability and validity are salient aspects of the effectiveness of data-gathering procedures. Reliability is the degree of consistency that the instrument or procedure demonstrates. Validity is the quality that enables an instrument to measure what it is supposed to measure.

Validity. There are three types of validity: content validity, criterion validity and construct validity. Content validity refers to the degree to which the test items or questions in an instrument actually measure, or are specifically related to, the traits for which the test or instrument was designed. Will it gather the data required to answer the problem statements? Is the format appropriate?

Criterion validity refers to the relationship between scores obtained using one or
more instruments or measures. How well do such scores estimate future performance of a
certain class or population in another evaluation? How strong is the relationship between
measures?

Construct validity refers to the nature of the psychological construct or characteristic being measured by the instrument. It can be assessed by noting group differences, changes, correlations and processes, by multi-trait, multi-method approaches, and through factorial validity. Factorial validity is considered the most powerful method of construct validation (Adanza in Vizcarra, 2003).

Reliability. Reliability is the measure of the internal consistency of a research instrument. An instrument is reliable if it provides similar results when administered repeatedly. To establish the reliability of research instruments, any of four tests may be administered: test-retest, split-half, the Kuder-Richardson Formulas 20 and 21, and the parallel-form method.

The test-retest method is done by administering the instrument repeatedly to the same group of respondents. An interval of two to three weeks is advisable to reduce the chance that respondents recall their answers from the first administration. Congruency of scores is established with the Pearson Product Moment Coefficient of Correlation.
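As a sketch, the Pearson product-moment correlation between the two administrations can be computed as follows; the respondent scores are invented for illustration:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment coefficient of correlation."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Invented scores of eight respondents on the two administrations.
first_administration = [12, 15, 11, 18, 9, 14, 16, 10]
second_administration = [13, 14, 12, 17, 10, 15, 15, 11]

r = pearson_r(first_administration, second_administration)
print(f"test-retest reliability: r = {r:.2f}")
```

A coefficient near 1.0 indicates that respondents kept roughly the same relative standing across the two administrations.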

The split-half method is another way of determining the reliability of a researcher-made instrument. This test measures internal consistency by dividing the test items into two halves (for example, odd-numbered versus even-numbered items) and scoring each half separately. The Pearson r or other correlational tests can be used to determine the degree of association between the two sets of half-test scores.
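A minimal sketch of the split-half procedure follows; the 0/1 item responses are invented, and the Spearman-Brown step-up (standard practice, though not named in the text) converts the half-test correlation into a full-test estimate:

```python
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mean_x) ** 2 for a in x)) *
                  sqrt(sum((b - mean_y) ** 2 for b in y)))

# Invented item scores: one row per respondent, 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 1, 0, 1, 0, 0, 1],
]

odd_half = [sum(row[0::2]) for row in responses]    # items 1, 3, 5, 7
even_half = [sum(row[1::2]) for row in responses]   # items 2, 4, 6, 8

r_half = pearson_r(odd_half, even_half)
# Spearman-Brown: estimate full-length reliability from the half-test r.
reliability = 2 * r_half / (1 + r_half)
print(f"split-half reliability: {reliability:.2f}")
```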

The Kuder-Richardson Formula 20 (KR20) is longer and more laborious to compute than KR21. KR21 is often preferred because it is simpler mathematically; however, KR21 can be used only if the items can be assumed to be of equal difficulty (Fraenkel and Wallen in Villarce, 2003).
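Both formulas can be sketched in a few lines. The 0/1 response matrix below is invented; note that because the items here differ in difficulty, KR21 comes out lower than KR20:

```python
def kr20(responses):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) items."""
    k, n = len(responses[0]), len(responses)
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    variance = sum((t - mean) ** 2 for t in totals) / n
    sum_pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in responses) / n  # proportion passing item i
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / variance)

def kr21(responses):
    """KR21: simpler to compute, but assumes items of equal difficulty."""
    k, n = len(responses[0]), len(responses)
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    variance = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * variance))

# Invented right/wrong answers: one row per respondent.
responses = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 1, 0, 1, 0, 0, 1],
]
print(f"KR20 = {kr20(responses):.2f}, KR21 = {kr21(responses):.2f}")
```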

Commonly Used Instruments and Their Preparation

Data collection is often conducted with the use of questionnaires, interviews, observations, tests and documents.

A. Questionnaire

A questionnaire is relatively economical, has standardized questions, can ensure anonymity, and its questions can be written for specific purposes. Questionnaires can use statements or questions, but in all cases the subject (respondent) is responding to something written. Researchers should give much thought to justification whenever they develop new questionnaires. Existing instruments can be used or adapted instead of preparing a new one; time and money may be saved if existing questionnaires with established reliability and validity can be located.

Specific objectives for using the questionnaire must be defined and listed. Objectives are based on the research problems or questions, and they show how each piece of information will be used. Questions must be specific enough to indicate how the responses from each item will meet the objectives.

Writing Questions and Statements


After defining the objectives of the research problem and ascertaining that no existing instrument can be used, the task of writing questions or statements begins. It is best to write items by objective and to be aware of how the results will be analyzed once the data are collected. Two general considerations apply to the items: (1) comply with the rules for writing most types of items, and (2) decide which item format is best.

The following are the guidelines in writing effective questions and statements
(Babbie, 1989):
1. Make items clear. An item achieves clarity when all respondents interpret it in
the same way. Never assume that the respondent will read something into the
item. Vague and ambiguous words like few, sometimes, and usually should be
avoided, as should jargon or complex phrases.
2. Avoid double-barreled questions. A question should be limited to a single idea
or concept. Double-barreled questions contain two or more ideas, and
frequently the word and is used in the item. Double-barreled questions and
statements are undesirable because the respondent may, if given an opportunity,
answer each part differently. Example: “School counselors spend too much
time on recordkeeping and not enough time counseling on personal
problems”. If a respondent is asked to agree or disagree with the statement, it
would be possible to agree with the first part and disagree with the second.

3. Respondents must be competent to answer. It is important that respondents
are able to provide reliable information. There are many instances where
subjects (respondents) are unable to make a response they can be confident of;
in such circumstances it is best to provide response options such as unsure or
do not know, to give subjects an opportunity to state their true feelings or
beliefs.

4. Questions should be relevant. If subjects are asked to respond to questions that
are unimportant to them, or about things they have not thought about or do not
care about, it is likely that they will respond carelessly and the results will be
misleading.

5. Simple items are best. Long complicated items should be avoided because they
are more difficult to understand, and respondents may be unwilling to try to
understand them. Assume that respondents will read and answer items quickly,
and that it is necessary to write items that are simple, easy to understand, and
easy to respond to.

6. Avoid negative items. Negatively stated items should be avoided because they
are easy to misinterpret. Subjects may unconsciously skip or overlook the
negative word, so their answers will be the opposite of what they intend. If
researchers use negative items, they should underline or capitalize the negative
word (not, or NO).

7. Avoid biased items or terms. The way in which items are worded, or the
inclusion of certain terms, may encourage particular responses more than
others. Such items are termed biased and, of course, should be avoided. There
are many ways to bias an item. The identification of a well-known person or
agency in the item can create bias. Example: “Do you agree or disagree with
the superintendent’s recent proposal to…?” is likely to elicit a response based
on an attitude toward the superintendent, not the proposal.

Some items elicit biased responses because of the social desirability of the answer. Social desirability is the tendency to respond to items so that the answer makes the subject look good. If teachers were asked whether they ever ridicule their students, you can be fairly sure, even if the responses are anonymous, that the answer will be no, because good teachers do not ridicule students. Student responses to the same question, or observations of other teachers, might provide different information. Items are ambiguous if the respondent has no distinct answer to the question.

General Format
The general layout and organization of the questionnaire are very important. If it appears carelessly done or confusing, respondents are likely to set it aside and never respond. A well-done format and appearance provide a favorable first impression and result in cooperation and serious, conscientious responses. The following rules should be adhered to carefully:

1. Carefully check grammar, spelling, punctuation, and other details.
2. Make sure printing is clear and easy to read.
3. Make instructions brief and easy to understand.
4. Avoid cluttering the questionnaire by trying to squeeze many items onto each
page.
5. Avoid abbreviated items.
6. Keep the questionnaire as short as possible.
7. Provide adequate space for answering open-ended questions.
8. Use a logical sequence, and group related items together.
9. Number the pages and items.
10. Use examples if the items may be difficult to understand.
11. Put important items near the beginning of a long questionnaire.
12. Be aware of the way the positioning and sequence of the questions may affect
the responses.
13. Print response scales on each new page.

Types of Items
The type of item should be based on the advantages, uses, and limitations of these
options. The common approaches to be way questions and statements maybe asked and
answered are as follows:

1. Open and Closed Form. In the closed form, the subject chooses between
predetermined responses; in the open form, subjects write in any response they
want. The choice of form depends on the objective of the item and the
advantages and disadvantages of each type. Closed-form items (also called
structured or closed-ended) are best for obtaining demographic information
and data that can be categorized easily.

Example:
Check the number of hours you spent cooking for the occasion:
___ 0-2 ___ 3-5 ___ 6-8 ___ 9-11 ___ 12-15

It is easier to score closed-form items, and the subject can answer the items more quickly. It is best to use closed-form items with a large number of subjects or a large number of items.

There are certain disadvantages to using structured items. If the categories created fail to allow subjects to indicate their feelings or beliefs accurately, the item is not very useful. This occurs with some forced-choice items. A structured item also cues the respondent with respect to possible answers. If an open-ended format were used, respondents might list only the two or three factors they thought most important; a structured format could, however, list twenty-five factors and have respondents check each one that was important. The respondent may then check factors that would have been omitted in the open-ended mode.

One approach to the case in which both the open and closed form
have advantages is to use open-ended questions first with a small group of
subjects in order to generate salient factors, and then use closed-ended items,
based on the open-ended responses, with a larger group. Open-ended items
exert the least amount of control over the respondent and can capture
idiosyncratic differences. If the purpose of the research is to generate specific
individual responses, the open-ended format is best; if the purpose is to provide
more general group responses, the closed form is best.

2. Scaled Items. A scale is a series of gradations, levels, or values that describe
various degrees of something. Scales are used extensively in questionnaires
because they allow fairly accurate assessments of beliefs or opinions. This is
because many of our beliefs and opinions are thought of in terms of gradations:
a belief may be held very strongly or only mildly, and an opinion of something
may be positive or negative.

The usual format of scaled items is a question or statement followed by a scale of potential responses. Subjects check the place on the scale that best reflects their beliefs or opinions about the statement. The Likert scale is the most widely used. A true Likert scale has a stem that includes a value or direction, and the respondent indicates agreement or disagreement with the statement. Likert-type items use different response scales; the stem can be either neutral or directional.

Example:

Science is very important.
___ Strongly agree  ___ Agree  ___ Neither agree nor disagree  ___ Disagree  ___ Strongly disagree

Science is:
___ Critical  ___ Very important  ___ Important  ___ Somewhat important  ___ Very unimportant

How often is your teacher well organized?
___ Always  ___ Most of the time  ___ Sometimes  ___ Rarely  ___ Never

How would you rate Cindy’s performance?
___ Very poor  ___ Poor  ___ Fair  ___ Good  ___ Excellent

It is generally better to include the middle category. If the neutral choice is not included and that is the way the respondent actually feels, then the respondent is forced either to make a choice that is incorrect or not to respond at all.

A variation of the Likert scale is the Semantic Differential. This scale uses adjective pairs, with each adjective serving as an end anchor on a single continuum. Only one descriptor (a word or phrase) is placed at each end. The scale is used to elicit descriptive reactions toward a concept or object.

Example:
Math
Like _____ _____ _____ _____ _____ _____ _____ Dislike
Tough _____ _____ _____ _____ _____ _____ _____ Easy

My Teacher
Easy _____ _____ _____ _____ _____ _____ _____ Hard
Unfair _____ _____ _____ _____ _____ _____ _____ Fair
Enthusiastic _____ _____ _____ _____ _____ _____ _____ Unenthusiastic
Boring _____ _____ _____ _____ _____ _____ _____ Not boring

Affect can be assessed by a scale anchored by the terms pleasant and unpleasant, while the value of an activity can be measured by important and unimportant. For younger respondents (3 to 6 years old), the scale is limited but can provide a range from happy to sad.

3. Ranked Items. With scaled items, the answers to several items can be identical,
making it difficult to differentiate among the items. A rank-order assessment
gathers more useful information about comparable items.

Example: Rank-order the following activities with respect to their importance as ways our research funds should be allocated this year. Use 1 = most important, 2 = next most important, and so forth, until 5 = least important.
____ Annual colloquium
____ Individual research projects
____ Invited speakers
____ Computer software
____ Student assistantships
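Aggregating such rankings can be sketched as follows. The mean rank per activity is one common summary (an assumption here, not something the text prescribes), with the lowest mean marking the most important item overall; the rankings are invented:

```python
# Hypothetical tallies of the rank-order item above: each inner list is one
# respondent's ranking (1 = most important ... 5 = least important).
activities = ["Annual colloquium", "Individual research projects",
              "Invited speakers", "Computer software", "Student assistantships"]
rankings = [
    [2, 1, 4, 5, 3],
    [3, 1, 5, 4, 2],
    [1, 2, 4, 5, 3],
]

# Mean rank per activity; the lowest mean is the most important overall.
mean_ranks = {a: sum(r[i] for r in rankings) / len(rankings)
              for i, a in enumerate(activities)}
best = min(mean_ranks, key=mean_ranks.get)
print(best, round(mean_ranks[best], 2))
```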

4. Checklist Items. A checklist is simply a method of providing the respondent a
number of options from which to choose. The item can require a choice of one
of several alternatives.
Example: Check as many as apply. The more enjoyable topics in
Biology are:
___ Botany
___ Comparative Anatomy
___ Genetics
___ Ecology
___ Microbiology
___ Zoology

Checklists can also be used to ask respondents to answer yes or no to a question, or to check the category to which they belong. Note that with categorical responses, a respondent can be placed in one category only.

Example: Are you married? ___ yes ___ no

Check the appropriate category:
___ single, never married ___ divorced
___ married ___ widowed
___ separated

Item Format

The clearest approach to presenting items and their answers is to write the item on one line and to place the response categories below, not next to, the item. It is also advisable to use boxes, brackets or parentheses, rather than a line, to indicate where to place the check mark. With Likert or Semantic Differential scales, the use of continuous lines or open blanks for check marks is not recommended, since the check mark is often entered between two options.

Sometimes, when a researcher asks a series of questions, answering one question in a certain way directs the respondent to other questions. These are called contingency questions.

Example: Have you used the Mathematics Curriculum Guide?
( ) Yes ( ) No

If Yes, how often have you used the activities suggested?
( ) 0-2 times ( ) 3-5 times ( ) 6-10 times ( ) more than 10 times
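The skip logic of a contingency question can be sketched as a short script; the `ask` helper and the simulated answers below are hypothetical stand-ins for real interviewer input:

```python
# Simulated answers standing in for what a respondent would actually say.
SIMULATED = ["Yes", "3-5 times"]

def ask(question, options):
    # In a real instrument this would collect input; here it is simulated.
    print(question, options)
    return SIMULATED.pop(0)

used_guide = ask("Have you used the Mathematics Curriculum Guide?",
                 ["Yes", "No"])
if used_guide == "Yes":
    frequency = ask("How often have you used the activities suggested?",
                    ["0-2 times", "3-5 times", "6-10 times",
                     "more than 10 times"])
else:
    frequency = None  # contingency: the follow-up question is skipped

print(frequency)
```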

If several questions will use the same response format, as is typical with Likert
scale items, it is often desirable to construct a matrix of items and response categories.

Example:

Check whether you Strongly disagree (1); Disagree (2); Agree (3) or
Strongly agree (4) with the following statements:

1. Members of the class do favors for one another. 1 2 3 4

2. The books and equipment students need or want are easily available to them in the classroom. 1 2 3 4

3. There are long periods during which the class does nothing. 1 2 3 4

4. The class has students with many different interests. 1 2 3 4

5. Certain students work only with their close friends. 1 2 3 4

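Scoring such a matrix can be sketched as below. The answer row is invented, and reverse-scoring the negatively worded third item is standard practice rather than something the text prescribes:

```python
SCALE_MAX = 4   # 1 = Strongly disagree ... 4 = Strongly agree
REVERSED = {2}  # zero-based index of item 3 ("the class does nothing")

def scale_score(answers):
    """Sum a respondent's answers into a scale score, reversing
    negatively worded items first (4 -> 1, 3 -> 2, and so on)."""
    score = 0
    for i, a in enumerate(answers):
        if i in REVERSED:
            a = SCALE_MAX + 1 - a
        score += a
    return score

respondent = [3, 4, 1, 2, 3]  # invented answers to the five items
print(scale_score(respondent))
```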
Pretesting

A pretest of the questionnaire is highly recommended before it is used for research. It is best to locate samples of subjects with characteristics similar to those who will be used in the study. The pretest sample should ideally include more than twenty subjects, but a small pretest is better than none at all. Space should be provided for comments on the individual items and on the questionnaire as a whole. The researcher needs to know if the questionnaire takes too long to complete, if the directions and items are clear, and so on.

If there are enough pretest subjects, an estimate of reliability may be calculated, and some indication will be given of whether there is sufficient variability in the answers to investigate various relationships. Two steps in getting feedback about the questionnaire before it is used in a study are: (1) an informal critique of individual items as they are prepared, and (2) a pretest of the full questionnaire.
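The text does not name the reliability statistic to use at the pretest stage; Cronbach's alpha, the usual internal-consistency estimate for scaled items (equivalent to KR20 when items are scored 0/1), is one option. A sketch with invented pretest data:

```python
def cronbach_alpha(data):
    """Cronbach's alpha; data holds one row of item scores per subject."""
    n = len(data)      # subjects
    k = len(data[0])   # items

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(variance([row[i] for row in data]) for i in range(k))
    total_var = variance([sum(row) for row in data])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented pretest responses to four 5-point scaled items.
pretest = [
    [4, 3, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [3, 3, 2, 3],
    [4, 4, 5, 4],
]
print(f"alpha = {cronbach_alpha(pretest):.2f}")
```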

B. INTERVIEW SCHEDULES

Interviews are essentially oral questionnaires. The major steps in constructing an interview are the same as in preparing a questionnaire: justification, defining objectives, writing questions, deciding on general and item format, and pretesting. The obvious difference is that interviews involve direct interaction between individuals, and this interaction has advantages and disadvantages compared with the questionnaire. An interview schedule is a written list of questions, open- or closed-ended, prepared for use by an interviewer in person-to-person interaction. It is a research instrument or tool for collecting data, whereas interviewing is a method of data collection.
The interview technique is flexible and adaptable. It can be used with many types of problems and persons. Nonverbal and verbal behavior can be noted in face-to-face interviews, and the interviewer has an opportunity to motivate the respondent. Interviews result in a much higher response rate than questionnaires, especially on topics that concern personal qualities or negative feelings. Questionnaires are preferable for obtaining factual, less personal information.

Advantages of the Interview

1. The interview is more appropriate for complex situations.
2. It is useful for collecting in-depth information.
3. Information obtained from responses can be supplemented with information gained from observing non-verbal reactions.
4. Questions can be explained.
5. Interviewing has wider application and can be used with any type of population.

The Disadvantages of the Interview

1. Interviewing is time-consuming and expensive.
2. The quality of data depends upon the quality of the interaction.
3. The quality of data depends upon the quality of the interviewer.
4. The quality of data may vary when many interviewers are used.
5. The researcher may introduce his/her bias.
6. The interviewer may be biased.

Asking Personal and Sensitive Questions


There are two acceptable techniques for asking threatening or sensitive questions: (1) the direct manner and (2) the indirect manner. With the direct manner, one can be sure that an affirmative answer is accurate; however, some respondents may be offended by direct questions and hence may not answer even non-sensitive questions.

Some ways of asking personal questions in an indirect manner are as follows:
1. By showing drawings or cartoons;
2. By asking a respondent to complete a sentence;
3. By asking a respondent to sort cards containing statements; and
4. By using random devices (lottery, calculators, etc.).

Order of Questions
The order of questions in a questionnaire or in an interview schedule is important, as it affects the quality of information and the interest and even willingness of a respondent to participate in the study. Two approaches considered best in ordering questions are as follows:

1. Questions should be asked in a random order. The random approach is useful in situations where a researcher wants respondents to express their agreement or disagreement with different aspects of an issue. A logical listing of statements or questions may ‘condition’ a respondent to the opinions expressed by the researcher through the statements.

2. Questions should follow a logical progression based upon the objectives of the
study. This procedure gradually leads respondents into the themes of the study,
starting with simple themes and progressing to complex ones. It sustains the
interest of the respondents and gradually stimulates them to answer the
questions.

Prerequisites for Data Collection


Before conducting any data collection, the following points should be considered:
1. Motivation to share the required information. It is essential for respondents to be willing to share information with you.
2. Clear understanding of the questions. Respondents must understand what is expected of them in the questions.
3. Possession of the required information. If respondents do not have the required information, they cannot provide it.

C. OBSERVATION SCHEDULES
The observational method relies on a researcher’s seeing and hearing things and recording these observations, rather than on subjects’ self-report responses to questions or statements. A number of devices have been used extensively to record information gathered through observation. Checklists, rating scales, scorecards, and scaled specimens provide systematic means of summarizing or quantifying data collected by observation or examination.

Checklist. The checklist is the simplest device for collecting observational data. It is a prepared list of behaviors or items. The presence or absence of a behavior may be indicated by:

a. checking yes or no;
b. indicating the type or number of items by inserting the appropriate word or number; or
c. putting a tally mark next to a type of behavior whenever it is observed, to indicate how many times it occurred.

This simple device systematizes and facilitates the recording of observations and
helps to ensure the consideration of the important aspects of the object or act observed.
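The tally-mark option, counting occurrences of each behavior, can be sketched with a simple counter; the behavior list and the observation stream are invented:

```python
from collections import Counter

behaviors = ["raises hand", "calls out", "leaves seat"]
observed = ["raises hand", "calls out", "raises hand",
            "leaves seat", "raises hand", "calls out"]

# One tally per occurrence, as on a paper checklist.
tally = Counter(observed)
for b in behaviors:
    print(f"{b}: {tally[b]}")
```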

12
Rating scale. The rating scale involves a qualitative description of a limited number of aspects of a thing or traits of a person. Ratings may be set up in five to seven categories, in terms such as:

a. Superior  Above Average  Average  Fair  Inferior
b. Excellent  Good  Average  Below Average  Poor
c. Always  Frequently  Occasionally  Rarely  Never

Another procedure establishes positions in terms of behavioral or situational descriptions. More specific statements enable the judge to identify more clearly the characteristic to be rated.

Examples are as follows:
Always exerts a strong influence on his associates.
Sometimes is able to move others into action.

Rating scales are weakened by the difficulty of clearly defining the trait or characteristic to be evaluated. The halo effect also causes raters to carry a qualitative judgment from one aspect to another; hence, there is a tendency to rate a person who has a pleasing personality high on other traits such as intelligence or professional interest. Raters also have a tendency to be generous. Rating scales should carry the suggestion that raters omit the rating of characteristics they have had no opportunity to observe.

Scorecards. A scorecard usually provides for the appraisal of a relatively large number of aspects. The presence of each characteristic or aspect, or the rating assigned to each, has a predetermined point value. Thus, the scorecard may yield a total weighted score that can be used in evaluating the object observed.

A scorecard may be used in evaluating the socio-economic characteristics of families and the traits of communities, schools or students. Its limitations are similar to those of rating scales.
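The predetermined point values of a scorecard can be sketched as a weighted sum; the aspects, weights and observations below are invented for illustration:

```python
# Point value assigned to each aspect on the scorecard (invented).
weights = {"sanitation": 10, "housing": 15, "income": 20, "education": 15}

# Whether each aspect was observed to be present (invented).
observed = {"sanitation": True, "housing": True,
            "income": False, "education": True}

# Total weighted score: sum the point values of the aspects present.
total = sum(w for aspect, w in weights.items() if observed[aspect])
print(total)
```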

D. TESTS
Tests refer to a standard set of questions presented to each subject that requires the completion of cognitive tasks. The responses or answers are summarized to obtain a numerical value that represents a characteristic of the subject. The cognitive task can focus on what the person knows (achievement), is able to learn (ability or aptitude), chooses or selects (interest, attitude or value), or is able to do (skill). All tests measure current performance.

Tests differ more in their use than in their development or actual test items. Different types of inferences and uses are made of test results, and it is what is done with the results that creates distinctions such as achievement versus aptitude. Test construction is not covered in this course, since whole courses in the education curricula are offered solely for this purpose.

Types of Tests
1. Standardized Test

This test provides a uniform procedure for administering and scoring the instrument. The same questions are asked each time the test is administered, with a set of directions on how it must be used. It is most often commercially prepared by experts, but it may not be specific enough to provide a sensitive measure of the variable.

2. Norm-and-Criterion-Referenced Test

A norm-referenced test is meant to show how individual scores compare to the scores of a well-defined reference or norm group of individuals. The interpretation of results, then, depends on how the subject compares to others.

A criterion-referenced test is interpreted by comparing individual scores to professionally judged standards of performance. The comparison is between the score and a criterion or standard, rather than the scores of others.
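The two interpretations can be contrasted in a short sketch; the norm-group scores and the mastery cutoff are invented:

```python
from math import sqrt

# Norm-referenced: compare one score to a norm group via a z-score.
norm_group = [55, 62, 70, 48, 66, 59, 73, 51]
score = 70
n = len(norm_group)
mean = sum(norm_group) / n
sd = sqrt(sum((s - mean) ** 2 for s in norm_group) / n)
z = (score - mean) / sd
print(f"z = {z:.2f}")  # how far above or below the norm group

# Criterion-referenced: compare the same score to a fixed standard.
CUTOFF = 75  # invented mastery standard
print("mastery" if score >= CUTOFF else "not yet mastered")
```

The same raw score of 70 is well above average relative to this norm group, yet falls short of the fixed criterion: the inference depends entirely on the comparison chosen.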

3. Aptitude Tests

The purpose of an aptitude test is to predict future performance. The results are used to make a prediction about performance on some criterion (such as grades, teaching effectiveness, certification, or test scores) prior to instruction, placement or training. Aptitude refers to the predictive use of the scores from a test, rather than to the nature of the test items.

4. Achievement Test

The purpose of an achievement test is to measure what has been learned, rather than to predict future performance. Achievement tests have more restrictive coverage than aptitude tests; they are more closely tied to school subjects and measure more recent learning than aptitude tests do.

5. Performance Assessment

The emphasis of a performance assessment is on measuring student proficiency in cognitive skills by directly observing how a student performs the skill in an authentic context. Contexts are ‘authentic’ when they reflect what students actually do with what they learn.

REFERENCES

Kumar, R. (1996). Research Methodology: A Step-by-Step Guide for Beginners. Melbourne, Australia: Addison Wesley Longman. pp. 115-124.

Wikipedia (accessed 20 November 2010; page last modified 7 November 2010 at 11:08).

ACTIVITY

Prepare a questionnaire or an interview guide to gather the data for the research problem you identified in your earlier exercises. Be sure to prepare questions for each of the problem statements.
