DCIT60 Module
This course describes the skills, knowledge, and performance outcomes required to
develop a research orientation among students and to acquaint them with the
fundamentals of research methods, including quantitative, qualitative, and mixed-method
approaches, leading to the production of a good, timely, and relevant research study.
It also develops a critical understanding of how to identify and assess ethical issues
related to research, together with an awareness of the benefits of research in the
students' field of interest and of its value to their future careers, to society and the
community, and to the local and global environment.
Compiled by:
COURSE OUTLINE
A. Research basics
B. Research theory
C. Classification of research
A. Chapter 1 – Background
C. Chapter 3 – Methodology
Objectives: At the end of the semester, the student will be able to:
A. Research basics
The contributions made by systematic research lead to progress in almost every field
of science. Research is the principal tool used in virtually all areas of science to expand the
edges of knowledge. A more academic interpretation is that research involves finding out
about things that no one else knew.
Research methods, on the other hand, are the tools and techniques for doing research.
In all cases, it is necessary to know what the correct tools are for doing the job, and how to
use them to best effect. They represent the tools of the trade, and provide you with ways to
collect, sort, and analyze information so that you can come to some conclusions.
The following are some of the uses of research in gaining knowledge:
sense of the myriad other elements involved, such as human, political, social,
cultural and contextual.
▪ Evaluate - This involves making judgements about the quality of objects or events.
Quality can be measured either in an absolute sense or on a comparative basis.
To be useful, the methods of evaluation must be relevant to the context and
intentions of the research.
▪ Compare - Two or more contrasting cases can be examined to highlight
differences and similarities between them, leading to a better understanding of
phenomena.
▪ Correlate - The relationships between two phenomena are investigated to see
whether and how they influence each other. The relationship might be just a loose
link at one extreme or a direct link when one phenomenon causes another. These
are measured as levels of association.
▪ Predict - This can sometimes be done in research areas where correlations are
already known. Predictions of possible future behavior or events are made on the
basis that if there has been a strong relationship between two or more
characteristics or events in the past, then these should exist in similar
circumstances in the future, leading to predictable outcomes.
▪ Control - Once you understand an event or situation, you may be able to find ways
to control it. For this you need to know what the cause and effect relationships are
and that you are capable of exerting control over the vital ingredients. All of
technology relies on this ability to control.
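The "correlate" and "predict" uses above rest on measuring levels of association between phenomena. As a minimal sketch with invented data (hours studied versus exam score), the Pearson correlation coefficient, a common measure of linear association, can be computed as follows:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient: strength of linear association,
    ranging from -1 (perfect inverse) through 0 (none) to +1 (perfect direct)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: hours studied vs. exam score for five students
hours = [1, 2, 3, 4, 5]
score = [52, 58, 61, 70, 74]
print(round(pearson_r(hours, score), 3))
```

A coefficient near +1, as here, indicates a strong direct association; once such a relationship is established, it can support the kind of prediction described above, on the assumption that the relationship persists in similar circumstances.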
There are numerous types of research design that are appropriate for the different types
of research projects. The choice of which design to apply depends on the nature of the
problems posed by the research aims. Each type of research design has a range of research
methods that are commonly used to collect and analyze the type of data that is generated by
the investigations.
B. Research theory
Theory is a model or framework for observation and understanding, which shapes both
what we see and how we see it. Theory allows the researcher to make links between the
abstract and the concrete; the theoretical and the empirical; thought statements and
observational statements etc.
▪ Theory frames what we look at, how we think and look at it.
▪ It provides basic concepts and directs us to the important questions.
▪ It suggests ways for us to make sense of research data.
▪ Theory enables us to connect a single study to the immense base of knowledge to
which other researchers contribute.
▪ It helps a researcher see the forest instead of just a single tree.
▪ Theory increases a researcher’s awareness of interconnections and of the broader
significance of data.
▪ Theories are, by their nature, abstract and provide a selective and one-sided account
of the many-sided concrete social world.
▪ Theory allows the researcher to make links between the abstract and the concrete, the
theoretical and the empirical, thought statements and observational statements etc.
▪ There is a two-way relationship between theory and research. Social theory informs
our understanding of issues, which, in turn, assists us in making research decisions
and making sense of the world.
▪ Theory is not fixed; it is provisional, open to revision and grows into more accurate and
comprehensive explanations about the make-up and operation of the social world.
Metaphysics is concerned with questions such as what it is to be, who we are, what
is knowledge, what are things, what is time and space.
There are two philosophies involved in metaphysics, these are:
1. Idealism - advocates that reality is all in the mind, that everything that exists is
in some way dependent on the activity of the mind. It means that as phenomena
are reliant on mental and social factors, they are therefore in a state of constant
change.
2. Materialism (or reductionism) - insists that only physical things and their
interactions exist and that our minds and consciousness are wholly due to the
active operation of materials. Thus, phenomena are independent of social
factors and are therefore stable.
Epistemology is the theory of knowledge, especially about its validation and the
methods used. It deals with how we know things and what we can regard as acceptable
knowledge in a discipline. It is concerned with the reliability of our senses and the power of
the mind.
All giraffes that I have seen have very long necks (repeated observation)
Therefore, I conclude that all giraffes have long necks (conclusion)
Induction was the earliest and, even now, is the commonest popular form of scientific
activity. This kind of reasoning is used every day as people learn from their
surroundings and experiences: they draw conclusions from what they have experienced,
generalize from them, and set them up as rules or beliefs.
Deductive reasoning begins with general statements (premises) and, through logical
argument, comes to a specific conclusion.
This is the simplest form of deductive argument, and is called a syllogism. As you can
see, it consists of a general statement (called the first premise), followed by a more specific
statement inferred from it (the second premise), and then a conclusion which follows
logically from the two statements. However, the problem with deductive reasoning is that the
truth of the conclusion depends very much on the truth of the premises on which it is based.
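As a rough sketch, the logical form of a syllogism can be modelled with sets (the classic Socrates example used here is hypothetical, not part of the module):

```python
# First premise (general): all humans are mortal  ->  humans is a subset of mortals
# Second premise (specific): Socrates is a human
# Conclusion: therefore, Socrates is mortal
humans = {"Socrates", "Plato"}
mortals = humans | {"Fido"}  # every human is also mortal, plus others

assert humans <= mortals         # first premise holds
assert "Socrates" in humans      # second premise holds
print("Socrates" in mortals)     # the conclusion follows: prints True
```

Note how the sketch also illustrates the weakness described above: if either premise were false (say, `humans` were not in fact a subset of `mortals`), the conclusion would no longer be guaranteed.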
It is this combination of experience with deductive and inductive reasoning which is the
foundation of modern scientific research, and is commonly referred to as scientific method.
There are certain assumptions that underlie scientific method that relate to a materialist
view of metaphysics and a positivist view of epistemology. These assumptions are:
▪ Order – the universe is an ordered system that can be investigated and the
underlying ‘rules’ can be exposed.
▪ External reality – we all share the same reality that does not depend on our
existence. We can therefore all equally contribute to and share knowledge that
reflects this reality.
▪ Reliability – we can rely on our senses and reasoning to produce facts that reliably
interpret reality.
▪ Parsimony – the simpler the explanation the better. Theories should be refined to
the most compact formulation.
▪ Generality – the ‘rules’ of reality discovered through research can be applied in all
relevant situations regardless of time and place.
However, these assumptions are not accepted by the opposite camp in metaphysics and
epistemology. Those with an idealist and relativist point of view insist on the importance of
human subjectivity and the social dimension to facts and their meanings. This clash of
viewpoints is unlikely ever to be resolved.
There is an important issue that confronts the study of the social sciences that is not so
pertinent in the natural sciences. This is the question of the position of the human subject and
researcher, and the status of social phenomena. The two extremes of approach are termed
positivism and interpretivism. Again, as in the case of ways of reasoning, a middle way has
also been formulated that draws on the useful characteristics of both approaches.
The researcher encounters a world already interpreted and his/ her job is to reveal this
according to the meanings created by humans rather than to discover universal laws.
Therefore, there can be more than one perspective and interpretation of a phenomenon.
For further understanding, table 1 lists the comparison between the positivist and relativist
approaches.
Postmodernism - challenges key issues such as meaning, knowledge and truth which
have opened up new perspectives and ideas about the essence of research. It denounces the
meta-narratives of the modern movement as a product of the Enlightenment, and insists on
the inseparable links between knowledge and power. In fact, there is no universal knowledge
or truth. Science is just a construct and only one of many types of knowledge that are all
subjects of continual reinvention and change.
One of the strands of postmodernism examines the structure of language and how it is
used. It challenges the assumption that language can be precisely used to represent reality.
Meanings of words are ambiguous, as words are only signs or labels given to concepts (what
is signified) and therefore there is no necessary correspondence between the word and the
meaning, the signifier and the signified. The use of signs (words) and their meanings can vary
depending on the flow of the text in which they are used, leading to the possibility of
‘deconstructing’ text to reveal its underlying inconsistencies. This approach can be applied to
all forms of representation – pictures, films, etc. – that gain added or alternative meanings
through the overlaying of references to previous uses. This can be seen particularly in the
media, where it is difficult to distinguish the real from the unreal: everything is representation;
there is no reality.
Critical Realism - Inevitably, there has been a reaction to this postmodernist challenge to
traditional science, which threatens a descent into chaos and powerlessness to act because
of the lack of any possibility of agreement on truths and reality. This reaction has been
labelled critical realism, based on critical reasoning.
Critical reasoning can be seen as a reconciliatory approach, which recognizes, like the
positivists, the existence of a natural order in social events and discourse, but claims that this
order cannot be detected by merely observing a pattern of events. The underlying order must
be discovered through the process of interpretation while doing theoretical and practical work
particularly in the social sciences. Unlike the positivists, critical realists do not claim that there
is a direct link between the concepts they develop and the observable phenomena. Concepts
and theories about social events are developed on the basis of their observable effects, and
interpreted in such a way that they can be understood and acted upon, even if the
interpretation is open to revision as understanding grows. This also distinguishes critical
realists from relativists, who deny the existence of such general structures divorced from the
specific event or situation and the context of the research and researcher.
C. Classification of Research
C.1 Basic Research - this is also called fundamental research or pure research. Basic
Research seeks to discover basic truths or principles and adds to the body of knowledge.
Examples:
This process is common to virtually all research projects, whatever their size and
complexity. Yet projects can be very different. These differences are due to their subject
matter (compare, for example, an investigation into sub-nuclear particles with a study of
different teaching methods), to differences in scales of time and resources, and to the extent
of their pioneering qualities and rigor. Some projects are aimed at testing and refining existing
knowledge, others at creating new knowledge.
The answers to four important questions strengthen the framework of any research
project:
There are three phases involved in structuring a research project with a total of eight
steps.
1. Formulating a research problem – it is the first and most important step in the research
process. The more specific and clearer the problem, the better. The main function of
formulating a research problem is to decide what you want to find out about.
Probably the simplest way to set up a research problem is to ask a question.
Questions can then be used to break the main problem down into sub-problems.
The different things you can do to split up the main question are to:
▪ Split it down into different aspects that can be investigated separately, e.g.
political, economic, cultural, technical.
▪ Explore different personal or group perspectives, e.g. employers, employees.
▪ Investigate different concepts used, e.g. health, wealth, confidence,
sustainability.
▪ Consider the question at different scales, e.g. the individual, group,
organization.
▪ Compare the outcomes of different aspects from the above ways of splitting
down.
In general, a research problem can concern one or more of the following:
▪ People
▪ Problems
▪ Programs
▪ Phenomena
2. Conceptualizing a research design – the main function of the research design is to
explain how to find the answers to the research questions. A research design should
include the following: the study design per se, the logistical arrangements that you
propose to undertake, the measurement procedures, the sampling strategy, the frame
of analysis and the time-frame.
3. Selecting a sample - The accuracy of the findings largely depends upon the way the
sample is selected. The basic objective of any sampling design is to minimize, within
the limitation of cost, the gap between the values obtained from the selected sample
and those prevalent in the study population.
The underlying premise in sampling is that a relatively small number of units, if selected
in a manner that genuinely represents the study population, can provide – with a
sufficiently high degree of probability – a fairly true reflection of the population
being studied. In selecting a sample, bias must be avoided and the maximum
precision must be attained within the available resources.
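The idea of keeping the gap between sample values and population values small can be sketched with a simple random sample, in which every unit has an equal chance of selection (a minimal illustration; the population values are invented):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical study population: exam scores of 1,000 students
population = [random.gauss(70, 10) for _ in range(1000)]

# Simple random sampling guards against selection bias: each unit
# has the same probability of ending up in the sample.
sample = random.sample(population, k=100)

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(f"population mean: {pop_mean:.1f}, sample mean: {sample_mean:.1f}")
```

With a genuinely random selection, the sample mean lands close to the population mean, which is exactly the "fairly true reflection" the paragraph above describes; a biased selection procedure (say, sampling only the top of a sorted list) would not have this property.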
4. Collecting data - Many methods can be used to gather the required information. As
part of the research design, the procedure to adopt for collecting the data must be
decided upon. In this phase, the researcher actually collects the data. Ethical concerns
must be considered when collecting the data.
5. Processing and displaying the data - How the data are analyzed depends on the type of
information gathered. The presentation of the processed data must also be given
emphasis, to communicate it properly to the reader.
6. Writing a research report - Writing the report is the last and, for many, the most difficult
step of the research process. The report informs the world what has been done,
what has been discovered and what conclusions have been drawn from the findings. The
report should be written in an academic style and be divided into different chapters
and/or sections based upon the main themes of the study.
There are organizations with ethics committees who are responsible for reviewing
research. The role of ethics committees is to oversee the research carried out in their
organizations in relation to ethical issues. It is they who formulate the research ethics code of
conduct and monitor its application in the research carried out by members of their
organizations. Applying for ethics approval inevitably involves filling in forms.
1. The individual values of the researcher relating to honesty and frankness and
personal integrity.
Unless otherwise stated, a researcher's work and ideas are regarded as their own.
The worst offense with respect to honesty is plagiarism. Intellectual property is a
collection of ideas and concepts, while plagiarism refers to directly copying someone else's
work into one's report, thesis, etc. and letting it be assumed that it is one's own. Using the
thoughts, ideas and works of others without acknowledging their source, even if they are
paraphrased into one's own words, is unethical. Equally serious is claiming sole authorship of
work which is in fact the result of collaboration or amanuensis ('ghosting').
Clearly, the researcher cannot rely entirely on their own ideas, concepts and
theories. The sources of the literature, features, theories and their originators
should therefore be acknowledged. This is called citation. There are different established
citation methods; they all consist of brief annotations or numbers placed within the text that
identify the cited material. These methods of reference cater for direct quotations, ideas,
etc. from the work of others, gathered from a wide variety of sources such as books, journals,
conferences, talks, interviews and TV programs, and should be used meticulously.
Apart from correct attribution, honesty is essential in the substance of what
researchers write. Researchers have responsibilities to fellow researchers,
respondents, the public and the academic community. Accurate descriptions are required of
what they have done and how they have done it: how they obtained the information, the
techniques used, the analysis carried out, and the results of experiments, with countless
details in every part of the work.
The sources of financial support for the research activities should be mentioned, and
pressure and sponsorship from sources which might influence the impartiality of the research
outcomes should be avoided.
Social research, and other forms of research which study people and their relationships
to each other and to the world, need to be particularly sensitive about issues of ethical
behavior. As this kind of research often impinges on the sensibilities and rights of other people,
researchers must be aware of necessary ethical standards which should be observed to avoid
any harm which might be caused by carrying out or publishing the results of the research
project.
Some of the situations that raise ethical issues in research are the following:
Research Aims – research aimed merely at gaining greater or new knowledge and
understanding of a phenomenon has few ethical consequences. The expansion
of scientific knowledge is generally regarded as a good thing, and it is mostly applied
research that is subjected to ethical scrutiny. Will the results of the research benefit society, or
at least not harm it? Will there be losers as well as gainers? The research aims and their
consequences must be clearly stated. Normally, researchers argue that the aims of their
research are in accordance with the ethical standards prescribed by their university or
organization.
Use of Language – how researchers use language has an important influence on the
research. The researcher must be neutral in the terminology used for people, who and
what they are, and what they do. Patronizing or disparaging language, as well as bias,
stereotyping, prejudice, intolerance and discrimination, must be avoided. Noticeably,
acceptable terminology changes with time, so the terms used in some older literature may
not be suitable now.
Presentation – how researchers present themselves might influence the attitudes and
expectations of the people involved in the research work. Students as researchers should
present themselves as just that, and give the correct impression that they are doing the
research as an academic exercise which does not have the institutional or political backing to
cause immediate action. Practitioner researchers, such as teachers, nurses or social workers,
have a professional status that lends more authority and possibly power to instigate change.
The research situation can also be influential. Stopping people in the street and asking
a few standardized questions will not raise any expectations about actions, but if you spend a
lot of time with a, perhaps lonely, old person delving into his or her personal history, the more
intimate situation might give rise to a more personal relationship that could go beyond the
simple research context.
Dealing with participants – the researcher should treat participants with due ethical
consideration: in the way they are chosen, in dealing with them personally, and in how the
information they provide is used. In many cases, participants choose freely whether to take
part in a survey by simply responding to the form or not. However, friends or relatives may
feel an obligation to help the researcher despite any reservations they may have, which
could restrict their freedom to refuse.
Participants will decide whether to take part according to the information they receive
about the research. The form that this information takes will depend on the type of person, the
nature of the research process and the context. It should be clear and easily understood so
they can make a fair assessment of the project in order to give an informed consent. Particular
attention is needed when getting consent from vulnerable people such as children, the elderly
or ill, foreign language speakers and those who are illiterate.
In carrying out the research, there are many considerations that must be taken care of
to avoid unethical situations such as:
Potential Harm and Gain - the principle behind ethical research is to cause no harm and, if
possible, to produce some gain for the participants in the project and the wider field.
Accordingly, the researcher should assess the potential of the chosen research methods and
their outcomes for causing harm or gain. This involves recognizing what the risks might be
and choosing methods that minimize these risks, and avoiding making any revelations that
could in any way be harmful to the reputation, dignity or privacy of the subjects.
Recording Data - there is a danger of simplifying transcripts when writing up data from
interviews and open questions. When you clean up and organize the data, you can start to
impose your own interpretation, ignoring vocal inflections, repetitions, asides, and subtleties
of humor, thereby losing some of the meanings. Further distortion can be introduced by being
governed by one's own particular assumptions.
Participant Involvement - questions about rapport are raised if the research entails close
communication between the researcher and the participants. The researcher should not take
familiarity so far as to deceive in order to extract information that the participant might later
regret giving. Neither should the researcher raise unrealistic expectations in order to gain
favor.
Honesty, Deception and Covert Methods - honesty is a basic tenet of ethically sound research,
so any type of deception and use of covert methods should be ruled out. Certain information
of benefit to society can sometimes be gained only by these methods, because people or
organizations unwilling to risk scrutiny obstruct the research; even so, the risks involved make
the use of deception and covert methods extremely questionable, and in some cases even
dangerous.
Storing and Transmitting Data - the Data Privacy Act of 2012 in the Philippines and equivalent
regulations cover the conditions regarding collections of personal data in whatever form and
at whatever scale. They spell out the rights of the subjects and responsibilities of the compilers
and holders of the data. The data that you have collected may well contain confidential details
about people and/or organizations. It is therefore important to devise a storage system that is
safe and only accessible to you. If the researcher needs to transmit data, he or she must
ensure that the method of transmission is secure and not open to unauthorized access.
Checking Data and Drafts - it is appropriate to pass the drafts of the research report to
colleagues or supervisors for comment, but only with the proviso that the content is kept
confidential, particularly as it is not ready for publication and dissemination at this stage. The
intellectual independence of the findings of the report could be undermined if the researcher
allows sponsors to make comments on a draft and they demand changes to be made to
conclusions that are contrary to their interests. It is not practical to let respondents read and
edit large amounts of primary data.
Disposing of Records – a suitable time and method should be decided for disposing of the
records at the end of the research project. Ideally, the matter will have been agreed with the
participants as a part of their informed consent, so the decision will have been made much
earlier. The basic policy is to ensure that all the data is anonymous and non-attributable. This
can be done by removing all labels and titles that could lead to identification. Better still, data
should be disposed of in such a way as to be completely indecipherable. This might entail
shredding documents, formatting discs and erasing tapes.
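Removing labels and titles that could lead to identification can be sketched in code (a minimal illustration; the record and its field names are hypothetical):

```python
# Fields that directly identify a participant and must be stripped
# before data are archived or disposed of (hypothetical field names).
IDENTIFYING_FIELDS = {"name", "address", "phone"}

def anonymize(record):
    """Return a copy of the record with all identifying fields removed,
    leaving only non-attributable data."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

record = {"name": "Juan", "address": "Cavite", "age": 34, "response": "agree"}
print(anonymize(record))  # prints {'age': 34, 'response': 'agree'}
```

In practice, true anonymity requires more than field removal (rare combinations of remaining attributes can still identify someone), which is why the paragraph above recommends complete, indecipherable disposal where possible.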
The literature review is an integral part of the research process and makes a valuable
contribution to almost every operational step. It has value even before the first step; that is,
when the researcher is merely thinking about a research question that he or she may want to
find answers to through the research journey. In the initial stages of research, it helps the
researchers to establish the theoretical roots of their study, clarify ideas and develop their
research methodology. Later in the process, the literature review serves to enhance and
consolidate the researcher’s own knowledge base and helps them to integrate findings with
the existing body of knowledge.
Bringing Clarity and Focus to Research Problem - the literature review can play an extremely
important role in shaping the research problem because the process of reviewing the literature
helps the researcher to understand the subject area better and thus helps to conceptualize
the research problem clearly and precisely and makes it more relevant and pertinent to the
field of inquiry. When reviewing the literature, the researchers learn what aspects of their
subject area have been examined by others, what they have found out about these aspects,
what gaps they have identified and what suggestions they have made for further research. All
these will help the researcher gain a greater insight into their own research questions and
provide them with clarity and focus which are central to a relevant and valid study. In addition,
it will help the researchers to focus their study on areas where there are gaps in the existing
body of knowledge, thereby enhancing its relevance.
Improving the research methodology - going through the literature acquaints researchers
with the methodologies that have been used by others to find answers to research questions
similar to the one they are investigating. A literature review tells them whether others have
used procedures and methods similar to the ones they are proposing, which procedures and
methods have worked well for them, and what problems they have faced with them. By
becoming aware of any problems and pitfalls, the researchers will be better positioned to
select a methodology that is capable of providing valid answers to their research question.
This will increase their confidence in the methodology they plan to use and will equip them to
defend its use.
Broadening the knowledge base in your research area - the most important function of the
literature review is to ensure that the researchers read widely around the subject area in
which they intend to conduct their research study. It is important that they know what other
researchers have found in regard to the same or similar questions, what theories have been
put forward and what gaps exist in the relevant body of knowledge.
1. Searching for the existing literature in your area of study - To search effectively for the
literature in your field of enquiry, it is imperative that you have at least some idea of
the broad subject area and of the problem you wish to investigate, in order to set
parameters for your search.
2. Reviewing the selected literature - After identifying several books and articles as
useful, the next step is to start reading them critically to pull together themes and issues
that are of relevance to the study.
The related literature is a section in a research paper, thesis, dissertation or research
project in which resources are drawn from books, journals, magazines, novels, poetry and
many other sources that have a direct bearing on the proposed study. The related literature
is presented in chronological order, from recent to past. Some universities allow an
arrangement by topic, while others require alphabetical order. The relevance of each piece of
literature to the present study must also be explained. It is unscientific to present related
literature without explaining its relevance to the current study.
Related studies are published and unpublished research studies that have a direct
bearing on the present study; they are segregated into foreign and local studies. They must
be arranged in chronological order, from recent to past, and their relevance to the present
study should be explained.
Objectives: At the end of the semester, the student will be able to:
Most methods of data collection can be used in both qualitative and quantitative research.
The distinction is mainly due to the restrictions imposed on flexibility, structure, sequential
order, depth and freedom that a researcher has in their use during the research process.
Quantitative methods favor these restrictions whereas qualitative ones advocate against them.
There are two major approaches to gathering information about a situation, person,
problem or phenomenon. In a research study, in most situations, there is a need to collect the
required information; however, sometimes the information required is already available and
need only be extracted. Based upon these broad approaches to information gathering, data
can be categorized as primary data and secondary data.
Research uses data as the raw material for coming to conclusions about an issue.
What data need to be collected depends on the issue being investigated.
Although much data seems to be solid fact and permanently represents the truth, this
is not the case. Data are not only elusive (evasive), but also ephemeral (temporary). They
may be true for a time in a particular place as observed by a particular person, but might
be quite different the next day. Take for example, a daily survey of people’s voting
intentions in a forthcoming general election. The results will be different every day even if
exactly the same people are asked, because some change their minds because of what
they have heard or seen in the interim period.
Data are essential in research since they are part of a hierarchy of information, varying
from the general to the specific and from the abstract to the concrete. Understanding this
order is helpful in breaking research problems down into more concrete components that
can be easily understood. The hierarchy can be expressed as follows:
In this hierarchy of information, theory is the most concise and yet the most general and
abstract of the components. Going down the hierarchy, the components become more
particular and concrete (figure 5), for example:
Data used in research come in two main forms: primary data and secondary data. Primary data are collected directly from the data source, while secondary data are written sources that interpret or record primary data.
Primary data are the first and most immediate recording of a situation. Without this kind of recorded data, it would be difficult to make sense of anything but the simplest phenomena, or to communicate the facts to others.
Primary data can be classified into the following types based on how they are collected:
Primary data can, in principle, provide all the information present in human life and its surroundings, but collecting them is time consuming and often practically impossible. Moreover, conducting large surveys and other studies is costly.
Secondary data are data that have been interpreted and recorded. They are disseminated in the form of news bulletins, magazines, newspapers, documentaries, advertising, the Internet, and so on. The data are packaged and turned into concise articles or other accessible forms. The reliability of secondary data depends largely on the source and on how the data were presented.
Reviewing the quality of the evidence presented in the arguments, the validity of the arguments themselves, and the reputation and qualifications of the writer or presenter are some of the means of assessing the reliability of secondary data. It is also advisable to compare the data with other sources, where possible, to check for bias and inaccuracies.
Research data can be divided into two broad categories: quantitative and qualitative. Quantitative data are used when a researcher is trying to quantify a problem, or to address the "what" or "how many" aspects of a research question; they can be counted or compared on a numeric scale. Qualitative data describe qualities or characteristics; they are collected using questionnaires, interviews, or observation, and frequently appear in narrative form.
Quantitative data are measurable and comparatively accurate, since they can be counted or compared on a numerical scale. Examples are the total number of students in CvSU Cavite City Campus, or ratings on a scale of 1-10 of the quality of food served at the school cafeteria. Mathematical procedures may be used to analyze such numerical data, and statistical analysis software such as SPSS is commonly used in analyzing quantitative data.
Other examples of quantitative data are population counts, family incomes, share prices, sports statistics, engineering calculations, and many others.
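The kind of computation such software performs can be sketched in a few lines of Python; the cafeteria ratings below are made-up values for illustration only:

```python
import statistics

# Hypothetical ratings (1-10) of cafeteria food quality -- illustrative data only
ratings = [7, 8, 6, 9, 7, 5, 8, 7]

mean_rating = statistics.mean(ratings)    # arithmetic average of the ratings
stdev_rating = statistics.stdev(ratings)  # sample standard deviation

print(mean_rating)  # 7.125
```

Packages such as SPSS apply the same basic operations, alongside far more elaborate tests, through a graphical interface.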
Qualitative data are data that cannot be measured accurately or counted, and are generally expressed in words rather than numbers. Because these data are categorized based on properties, attributes, labels, and other identifiers, they are also known as categorical data. Ideas, customs, morals, and beliefs are examples of essential human activities and attributes, investigated in the study of human beings and their societies and cultures, that cannot be measured in any exact way; this does not mean that they are less valuable than quantitative data.
Other typical examples of qualitative data are observation notes, interview transcripts, literary texts, minutes of meetings, historical records, memos and recollections, and documentary films. They may be recorded very close to the events, or may be remote and edited interpretations. Because qualitative data rely on human interpretation and evaluation and cannot be measured in a standard way, it is necessary to assess their reliability. Consulting diverse sources of data relating to the same event or phenomenon is one way to assess the reliability and completeness of the data; this is called triangulation.
Moreover, numbers such as national identification numbers or phone numbers are regarded as qualitative data, because they are categorical and unique to one individual.
In statistics, data can be measured in different ways depending on their nature. These are
commonly referred to as levels of measurement – nominal, ordinal, interval and ratio.
The number of categories varies: sometimes classification yields many types and sometimes only two. Buildings, for example, can be classified as commercial, industrial, educational, religious, and so on; marital status can be single, married, separated, divorced, or widowed; while sex can be classified only as male or female. What matters is that the categories are mutually exclusive: there is no overlap between them, and none of them have numerical significance. Ideally, all data should fit into a category, although there are instances where a "remainder" category is needed for those that do not fit the given categories.
Nominal data can be analyzed using simple graphic and statistical techniques. Bar
graphs, for example, can be used to compare the sizes of categories and simple statistical
properties such as the percentage relationship of one subgroup to another or of one subgroup
to the total group can be explored.
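A minimal sketch in Python of this kind of nominal analysis, using a made-up list of building classifications:

```python
from collections import Counter

# Hypothetical nominal data: classification of surveyed buildings (made-up)
buildings = ["commercial", "residential", "commercial", "educational",
             "commercial", "religious", "residential", "residential"]

counts = Counter(buildings)  # size of each category
percentages = {category: 100 * n / len(buildings) for category, n in counts.items()}

print(percentages["commercial"])  # 37.5
```

The resulting category counts are exactly what a bar graph of nominal data would display.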
Note: a sub-type of nominal scale with only two categories (e.g. male/female) is called
“dichotomous.”
Ordinal indicates "order". Ordinal data have a naturally occurring order, but the differences between values are unknown. They can be named, grouped, and also ranked.
In ordinal measurement, the data are placed in order with regard to a particular property they share. With ordinal scales, what matters is the order of the values. Precise measurement of the property is not required, only the perception of whether one value is more or less than another. Ordinal scales are typically measures of non-numeric concepts such as satisfaction, happiness, discomfort, etc.
For example, in a survey asking respondents about their satisfaction with the product,
options are:
• 1-Totally Satisfied
• 2-Satisfied
• 3-Neutral
• 4-Dissatisfied
• 5-Totally Dissatisfied
The question of "how much" the values differ remains unanswered. Understanding the various scales helps statisticians and researchers apply the appropriate data analysis techniques. Thus, an ordinal scale is used as a comparison parameter, to determine by sorting whether one variable is greater or lesser than another. The measure of central tendency for the ordinal scale is the median.
The interval scale enables researchers to quantify and differentiate between options, so that feedback yields meaningful outcomes. It is often more effective for most business and scientific studies than the nominal and ordinal scales, because it can be measured precisely. Questions measured on the interval scale are the most commonly used question type in research studies. Feedback or answer options must be limited to variables that can be assigned a numerical value in which the difference between any two adjacent variables is equal.
One of the most commonly used interval scale questions is the five-point Likert scale question, where each answer is denoted with a number and the variables range from extremely dissatisfied to extremely satisfied. A Likert scale is a rating scale, often found on survey forms, that measures how people feel about something. It is named after Rensis Likert, the social psychologist who invented the use of scale points in this type of rating system. For example, you might use a Likert scale to measure how people feel about products, services, or experiences, with each question offering a set number of responses to choose from:
• Strongly Disagree, Disagree, Neither Agree nor Disagree, Agree, Strongly Agree
• Highly Dissatisfied, Dissatisfied, Neutral, Satisfied, Highly Satisfied
• Never, Almost Never, Neutral, Almost Every Time, Every Time
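Because Likert responses are ordinal, the appropriate measure of central tendency is the median of the coded values. A small Python sketch, using made-up responses:

```python
import statistics

# Code the five agreement labels as ordinal values 1..5
scale = {"Strongly Disagree": 1, "Disagree": 2,
         "Neither Agree nor Disagree": 3, "Agree": 4, "Strongly Agree": 5}

# Hypothetical survey responses
responses = ["Agree", "Strongly Agree", "Neither Agree nor Disagree",
             "Agree", "Disagree"]

codes = sorted(scale[r] for r in responses)   # [2, 3, 4, 4, 5]
median_response = statistics.median(codes)

print(median_response)  # 4 -> the typical respondent "Agrees"
```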
An example of a ratio level measurement is the age of the respondents, for which the following brackets are given as options:
• 20 years and below
• 21-30 years
• 31-40 years
• 41-50 years
• 51 years and above
Secondary data are data collected by someone other than the researcher. These data have already been collected in the past through primary sources and made readily available for researchers to use in their own research. Secondary data may also have been collected for general use with no specific research purpose, as in the case of a national census.
The advantage of using sets of secondary data is that they have been produced by teams of expert researchers, often with large budgets and extensive resources far beyond the means of a single student, so they cut out the need for time-consuming fieldwork.
Secondary data can provide a baseline against which to compare the results of primary research, and can also be helpful in research design. Most secondary data are collected over a long period of time, and thus provide an opportunity to explore the subject area in more depth. They can also be compared with the primary data the researcher has collected, in order to explore further and put the findings into a larger context.
There are numerous types of secondary data, the main ones being documentary sources in the form of written and non-written materials, and survey data in the form of statistical information.
There are instances where secondary data can be treated as primary data. For example, if you are analyzing a work of art in the form of a painting, you could use it as primary data by looking at the subject, the materials and techniques used, the proportions, and other features particular to that painting or artist. Alternatively, you could use it as secondary data when examining it for aesthetic trends, as evidence of developments in art history, or as a commentary on the society of the time. The same could be said of pieces of music, films, or television programs.
One downside of using secondary data is that it is impossible to give a full description of all its sources; only the detailed nature of the research subject can determine the appropriate sources, and the possible range of subjects is enormous. However, some data are readily available and can be accessed through:
Data Sets Online - There are several online sites that provide access to data sets
from a variety of sources. Don’t forget that not all data sets are in the form of lists of statistics.
Spatial information is contained on maps and charts. One rich mine of information is the
Geographic Information System (GIS) which is a source of much geographical and
demographic data for areas throughout the world based on mapping techniques.
The documentary data commonly used in social research are the so-called "cultural texts". A number of prevailing theoretical debates are concerned with the subjects of language and cultural interpretation, and these issues have frequently become central to sociological studies. The need has therefore arisen for methodologies that allow analyses of cultural texts to be compared, replicated, disproved and generalized. From the late 1950s,
language has been analyzed from several basic viewpoints such as the structural properties
of language (notably Chomsky, Sacks, Schegloff), language as an action in its contextual
environment (notably Wittgenstein, Austin and Searle) and sociolinguistics and the
‘ethnography of speaking’ (Hymes, Bernstein, Labov and many others).
Libraries, Museums and Other Archives - These are generally equipped with
sophisticated catalogue systems which facilitate the tracking down of particular pieces of data
or enable a trawl to be made to identify anything which may be relevant. Local libraries often
store data and collections of local interest. Museums, galleries and collections: these often
have efficient cataloguing systems that will help your search. Larger museums often have their
own research departments that can be of help. Apart from public and academic institutions,
much valuable historical material is contained in more obscure and less organized collections,
in remote areas, old houses, and specialist organizations. However, problems may be encountered with searching and access in less organized, private, or restricted collections. The attributes of a detective are often required to track down relevant material, and those of a diplomat to gain access to private or restricted collections.
Identifying secondary data does not guarantee that they can be used in a research study. The researcher must check their suitability and relevance against the research purpose or objectives at hand. Relevance is a function of the level of aggregation of the data, as well as the units and time increments in which the data are reported. The data should also come in a format that can be used easily, and should not be out of date; the format of the data and any restrictions on their use must be considered as well.
Moreover, considering the questions below may help the researcher assess their suitability:
• Do measures match those you need, e.g. economic, demographic, social statistics?
• Coverage – is there sufficient data of the required type, and can unwanted data be excluded?
• Population – is it the same as required for your chosen research?
• What variables are covered? The precise nature of these might not be so important for descriptive work, but could be essential for statistical tests or explanatory research.
Validating and identifying the sources of secondary data is not always possible. Even so, the researcher must take the initiative in authenticating the sources and their credibility. A quick assessment can be made by examining the source of the data, for instance by checking the reputation of the organization supplying it.
Government statistics and data provided by large, well known organizations are likely
to be authoritative, as their continued existence relies on maintaining credibility. Records held
by smaller organizations or commercial companies will be more difficult to check for reliability.
In these cases, it is important to make a check on the person or institution responsible for the
data, and to explore whether there are any printed publications relating to the research which
might give them more credibility. Credibility of data refers to their freedom from error or bias.
Many documents are written in order to put across a particular message and can be selective
of the truth. This may be the case of data acquired from particular interest groups, or reports
compiled by those who wish to create a certain impression or achieve particular targets
Authentication of historical data can be a complex process, and is usually carried out
by experts. A wide range of techniques are used, for example textual analysis, carbon dating,
paper analysis, locational checks, cross referencing and many others.
The common goal in analyzing secondary data is to look for patterns or trends, to track progressions through time, or to seek out repetitions of certain results in order to build up a strong case.
There are various ways to analyze secondary data, many of them no different from those used on primary data. Some of the methods suitable for secondary data are content analysis, data mining, and meta-analysis.
Content analysis can also be used to make qualitative inferences by analyzing the meaning and semantic relationships of words and concepts. Since it is not limited to any particular range of texts, it is used in fields as varied as marketing, media studies, anthropology, cognitive science, psychology, and many other social science disciplines. Content analysis has several benefits:
• Unobtrusive data collection - communication and social interaction can be analyzed without the direct involvement of participants, so the researcher's presence does not influence the results.
• Transparent and replicable - content analysis follows a systematic procedure that can
easily be replicated by other researchers, yielding results with high reliability.
• Highly flexible - content analysis can be conducted at any time, in any location, and at low cost; all that is needed is access to the appropriate sources.
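At its simplest, content analysis can be sketched as a word frequency count; the passage below is invented for illustration:

```python
import re
from collections import Counter

# Illustrative passage to be analyzed (made-up text)
text = "Research methods matter. Good research uses sound methods and good data."

words = re.findall(r"[a-z]+", text.lower())  # tokenize into lowercase words
freq = Counter(words)                        # frequency of each word

print(freq["research"])  # 2
print(freq["methods"])   # 2
```

Real content analysis adds a coding scheme on top of such counts, grouping words into the concepts the researcher is tracking.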
Data mining refers to extracting or mining knowledge from a large amount of data (Han, 2006). The technique is commonly used not only in the business sector but also in other institutions such as the academe. Large databases generated by electronic and other methods in modern business store the data to be extracted in the data mining process; these databases are called data warehouses or data marts.
Data mining uses statistical tools to explore the data and discover insights, such as patterns, trends, and behaviors, which can be used as a basis for predicting trends. There are also data visualization tools that can help the researcher gain a clear understanding of the data in visual form.
B.4.3 Meta-Analysis
Meta-analysis combines the statistical results of a number of separate studies that address the same issue. It typically proceeds through the following steps:
1. Define the issue to be investigated, for instance the effects of class size on student learning.
2. Collect the studies relating to the issue defined at the outset. These may be published or unpublished research results. Care must be taken to select similar studies of good quality, to avoid combining very different types and qualities of data.
3. Find common methods of measurement of variables used to detect significant
relationships.
4. Select the purpose of analysis of results data, either a comparison to detect how much
the results of the studies varied, or to track a particular variable across all the studies
and accumulate the results to indicate its importance.
5. Carry out the statistical analysis to compare or compute significance levels. An
estimation of the size of the effect of one variable on another is another aspect to be
explored. Sometimes it may be useful to divide the studies into sub-groups to clarify
the outcomes.
6. Report the results and discuss the limitations of the research and recommend further
research in the subject.
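Step 5 above is most often carried out by pooling effect sizes. A minimal Python sketch of one common approach, fixed-effect inverse-variance weighting, using made-up effect sizes and variances:

```python
# Fixed-effect meta-analysis sketch: combine effect sizes by weighting each
# study by the inverse of its variance, so more precise studies count more.
# Effect sizes and variances below are made up for illustration.
studies = [
    (0.30, 0.04),  # (effect size, variance) reported by study 1
    (0.50, 0.09),  # study 2
    (0.40, 0.01),  # study 3
]

weights = [1 / variance for _, variance in studies]
pooled_effect = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)

print(round(pooled_effect, 4))  # 0.3898 -- dominated by the most precise study
```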
There are several problems associated with meta-analysis. The main one is that the wide range of methods and statistical techniques used in the various studies makes comparison and combination difficult to justify. Another is that published works tend to record only successful outcomes, where statistically significant results were achieved, leaving all the other test results unrecorded; this can lead to an over-optimistic result in the meta-analysis. Despite these problems, it is a useful way to assimilate the results of numerous studies dedicated to one issue.
Primary data is data that is collected by a researcher from first-hand sources for the
specific purposes of a research project.
There are several basic methods used to collect primary data; here are the main ones:
• asking questions
• conducting interviews
• observing without getting involved
• immersing oneself in a situation
• doing experiments
• manipulating models.
C.1 Sampling
It is often not feasible to collect data from everyone who could be included in a study. Time, effort, and budget are some of the common constraints, not to mention the willingness of participants. Sampling is the process of selecting just a small group of cases out of a large group. The identified sample then represents all the rest.
There are basically two types of sampling procedure which are probability sampling
and non-probability sampling.
C.1.1 Probability Sampling - techniques that give the most reliable representation of the whole population. Random methods are employed to select the sample. The selection procedure should aim to guarantee that each element (person, group, class, type, etc.) has an equal chance of being selected, and that every possible combination of elements also has an equal chance of being selected. Where there are different classes of cases within the population, a specific technique is used, such as simple random sampling, stratified sampling, cluster sampling, and the like.
There are four main types of probability sample, these are simple random sampling,
systematic sampling, stratified sampling, and cluster sampling.
o Simple random sampling - every member of the population has an equal chance of being selected. The sampling frame should include the whole population. Tools such as random number generators, or other techniques based entirely on chance, can be used in this type of sampling method.
For instance, 100 out of the 1000 employees of company Z are to be chosen. You assign the numbers 1 to 1000 to the employees in the company database, and the random number generator selects 100 of them.
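The procedure can be sketched in a few lines of Python (the seed is fixed only to make the illustration reproducible):

```python
import random

random.seed(42)  # seeded only so the illustrative run is reproducible

population = list(range(1, 1001))        # employee IDs 1..1000
sample = random.sample(population, 100)  # 100 distinct IDs, each equally likely

print(len(sample))       # 100
print(len(set(sample)))  # 100 -- no employee is picked twice
```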
o Stratified sampling - involves dividing the population into subpopulations that may differ in important ways. It allows you to draw more precise conclusions by ensuring that every subgroup is properly represented in the sample.
To use this sampling method, you divide the population into subgroups (called
strata) based on the relevant characteristic (e.g. gender, age range, income
bracket, job role).
Based on the overall proportions of the population, you calculate how many people
should be sampled from each subgroup. Then you use random
or systematic sampling to select a sample from each subgroup.
For instance, the company has 800 female employees and 200 male employees.
You want to ensure that the sample reflects the gender balance of the company,
so you sort the population into two strata based on gender. Then you use random
sampling on each group, selecting 80 women and 20 men, which gives you a
representative sample of 100 people.
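Proportional stratified sampling can be sketched in Python with a made-up workforce of 800 women and 200 men:

```python
import random

random.seed(0)  # for a reproducible illustration

# Hypothetical workforce: 800 female and 200 male employees
strata = {"female": [f"F{i}" for i in range(800)],
          "male": [f"M{i}" for i in range(200)]}

total = sum(len(members) for members in strata.values())
sample_size = 100
sample = []
for group, members in strata.items():
    share = round(sample_size * len(members) / total)  # proportional allocation
    sample.extend(random.sample(members, share))       # random sample per stratum

print(len(sample))  # 100: 80 women and 20 men
```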
o Cluster sampling - also involves dividing the population into subgroups, but each
subgroup should have similar characteristics to the whole sample. Instead of
sampling individuals from each subgroup, you randomly select entire subgroups.
If it is practically possible, you might include every individual from each sampled
cluster. If the clusters themselves are large, you can also sample individuals from
within each cluster using one of the techniques above.
This method is good for dealing with large and dispersed populations, but there is
more risk of error in the sample, as there could be substantial differences between
clusters. It’s difficult to guarantee that the sampled clusters are really
representative of the whole population.
For instance, the company has offices in 10 cities across the country (all with
roughly the same number of employees in similar roles). You don’t have the
capacity to travel to every office to collect your data, so you use random sampling
to select 3 offices – these are your clusters.
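Selecting the clusters can be sketched in Python with ten hypothetical offices:

```python
import random

random.seed(1)  # for a reproducible illustration

# Ten city offices (clusters), assumed roughly comparable in size and roles
offices = [f"office_{i}" for i in range(1, 11)]

chosen = random.sample(offices, 3)  # whole clusters are selected, not individuals

print(len(chosen))  # 3 offices to visit; everyone there can then be surveyed
```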
o Convenience sampling - includes the individuals who happen to be most accessible to the researcher. This type of sample is easier and cheaper to access, but it has a higher risk of sampling bias, and you can't use it to make valid statistical inferences about the whole population.
For instance, you are researching opinions about student support services in your
university, so after each of your classes, you ask your fellow students to complete
a survey on the topic. This is a convenient way to gather data, but as you only
surveyed students taking the same classes as you at the same level, the sample
is not representative of all the students at your university.
o Voluntary response sampling - the sample consists of people who volunteer themselves. Voluntary response samples are always at least somewhat biased, as some people will inherently be more likely to volunteer than others.
For instance, an academician may send out a survey to all students at their university, and a lot of students decide to complete it. This can certainly give some insight into the topic, but the people who responded are more likely to be those who have strong opinions about the student support services, so the researcher can't be sure that their opinions are representative of all students.
o Purposive sampling - This type of sampling involves the researcher using their
judgement to select a sample that is most useful to the purposes of the research.
It is often used in qualitative research, where the researcher wants to gain detailed
knowledge about a specific phenomenon rather than make statistical inferences.
An effective purposive sample must have clear criteria and a rationale for inclusion. For instance, a researcher who wants to know more about the opinions and experiences of disabled students at his or her university may purposefully select a number of students with different support needs, in order to gather a varied range of data on their experiences with student services.
There are different methods of collecting primary data: asking questions, interviews, observation, immersion, experiments, and models or simulations.
Two basic types of questions may be asked: closed format and open format. In closed format questions, the respondents are given choices or options to select from, which makes the survey quite easy to complete; the disadvantage is that it limits the range of possible answers.
In open format questions, on the other hand, the respondents are free to answer in their own content and style. These tend to permit freedom of expression and allow the respondents to qualify their responses. This freedom leads to a lack of bias, but the answers are more open to researcher interpretation. They are also more demanding and time consuming for respondents, and more difficult to code.
Asking questions can be done also through accounts or diaries. These qualitative data
collection methods are used to find information on people’s actions and feelings by asking
them to give their own interpretation, or account, of what they experience. Accounts can
consist of a variety of data sources: people’s spoken explanations, behavior (such as
gestures), personal records of experiences and conversations, letters and personal diaries.
As long as the accounts are authentic, there should be no reason why they cannot be used
as an argued explanation of people’s actions.
While questionnaire surveys are relatively easy to organize they do have certain
limitations, particularly in the lack of flexibility of response. Interviews are more suitable for
questions that require probing to obtain adequate information. The use of interviews to
question samples of people is a very flexible tool with a wide range of applications.
This is a method of gathering data through observation rather than asking questions.
The aim is to take a detached view of the phenomena, and be ‘invisible’, either in fact or in
effect. When studying humans or animals, this detachment assumes an absence of
involvement in the group even if the subjects are aware that the observation is taking place.
Observation can be used for recording data about events and activities, and the nature or
conditions of objects, such as buildings or artefacts.
Observation is not limited to the visual sense. Any sense such as smell, touch, hearing,
can be involved, and these need not be restricted to the range perceptible by the human
senses. A microscope or telescope can be used to extend the capacity of the eye, just as a
moisture meter can increase sensitivity to the feeling of dampness. Instruments have been
developed in every discipline to extend the observational limits of the human senses.
This is a process of gathering primary data that not only involves observation, but also
experience in every sense of the word. It is based on the techniques devised by
anthropologists to study social life and cultural practices of communities by immersing
themselves in the day-to-day life of their subjects.
The researcher tries to ‘fit in’ as much as possible so as to see and understand the
situation from the viewpoints of the group being studied. At its most extreme, the subjects of
the study will not be aware that they are being observed. Covert methods are used to disguise
the role of the observer.
Experiments are used in many different subject areas, whether these are primarily to
do with how things interact with each other, or how people interact with things, and even how
people interact with other people. Although experiments are commonly associated with work
in laboratories where it is easiest to impose control, they can be carried out in almost any other
location. It may not even be possible to move the event to be studied into the laboratory, or
doing so might unduly influence the outcome of the experiment. For example, some events in
nature or social occurrences are so rooted in their context that they cannot be transferred to
the laboratory. The design of experiments and models depends very much on the type of event investigated, the sort of variables involved, the level of accuracy and reliability aimed at, and practical issues such as the time and resources available.
A model, like an experiment, aims to isolate and simplify an event in order to inspect it
in detail and gain useful data. The difference is that models only provide a representation of
the event through a simulation, that shows relationships between variables. Models are used
to mimic a phenomenon (rather than isolating it as in an experiment) in a form that can be
manipulated, in order to obtain data about the effects of the manipulations. The purpose of a
model can be to describe a phenomenon, to serve as a structure for organizing and analyzing
data, or to explore or test a hypothesis.
As with experiments, it is essential to understand the system that lies behind the
phenomena to be modelled and what are the important variables and how they interact. The
actual form of the model can be diagrammatic, physical, or mathematical.
• Mathematical models or simulations - show the effects of different inputs into a system
and predict the resultant outcomes. They are used to predict the weather, predict the
performance of materials under certain conditions, and combined with physical models
can mimic flying an airplane. They are invariably quantitative models and are divided into two major categories, deterministic and stochastic. These categories relate to the predictability of the input: deterministic models deal only with predetermined inputs within a closed system, while stochastic models are designed to deal with unpredictable inputs, such as the effect of chance or of influences from outside the system being modelled. The computer is an invaluable tool in the construction of
mathematical models. Spreadsheet programs provide a systematic two-dimensional
framework for devising models, and furnish facilities for cross calculations, random
number generation, setting of variable values, and the build-up of series of formulae.
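The distinction between the two categories can be sketched in Python; the growth model below is a made-up illustration, not drawn from the text:

```python
import random

# Deterministic model: the same inputs always produce the same output.
def deterministic_growth(principal, rate, years):
    return principal * (1 + rate) ** years

# Stochastic model: a chance element makes each run differ.
def stochastic_growth(principal, mean_rate, spread, years, rng):
    value = principal
    for _ in range(years):
        value *= 1 + rng.uniform(mean_rate - spread, mean_rate + spread)
    return value

print(deterministic_growth(100, 0.05, 2))          # identical on every run
rng = random.Random(7)
print(stochastic_growth(100, 0.05, 0.03, 2, rng))  # varies with the random draw
```

Running the stochastic model many times and summarizing the spread of outcomes is exactly how chance effects from outside a closed system are studied.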
Quantitative analysis deals with data in the form of numbers and uses mathematical operations to investigate their properties. The level of measurement used in collecting the data is an important factor in choosing the type of analysis that is applicable, as is the number of cases involved. Quantitative analysis can be used to:
• measure
• make comparisons
• examine relationships
• make forecasts
• test hypotheses
• construct concepts and theories
• explore
• control
• explain.
Rows and columns are the two components of a data set. A row is given to each record
or observation and each column is given to a variable, allowing each cell to contain the data
for the case/variable as shown in figure 7. The data might be in the form of integers, real
numbers or categories. Missing data also need to be indicated, distinguishing between
genuine missing data and a ‘don’t know’ response. It is easy to make mistakes in the rather
tedious process of transferring data. It is therefore important to check on the accuracy of the
data entry.
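The row-and-column layout, including the distinction between genuinely missing data and "don't know" responses, can be sketched in Python with made-up records:

```python
# A tiny data set as rows (records) and columns (variables); None marks
# genuinely missing data, and a separate sentinel marks "don't know".
DONT_KNOW = "DK"

columns = ["id", "age", "satisfaction"]
rows = [
    [1, 21, 4],
    [2, 34, DONT_KNOW],  # respondent answered "don't know"
    [3, None, 5],        # value genuinely missing
]

# Extract one variable (a column), skipping unusable cells
age_index = columns.index("age")
ages = [row[age_index] for row in rows if row[age_index] not in (None, DONT_KNOW)]

print(ages)  # [21, 34]
```

Spreadsheets and statistics packages use the same cases-by-variables structure, which is why checking each transferred cell matters.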
Statistics is a powerful tool for analyzing data, and statistical methods fall into two major classes: parametric and non-parametric. Note that the term "statistic" is sometimes used to refer to a value that describes a sample, while the term "parameter" refers to the corresponding value for the whole population.
• Nominal – this is the first level of measurement, and the data in this level can only be
categorized. The numbers in the variable are used only to classify the data. Aside from
numbers, words, letters, and alpha-numeric characters can also be used to classify
the data. For instance, a researcher may need to record the gender of each
observation; in this case, gender can be represented by letters such as F for female
and M for male.
• Ordinal – this is the second level of measurement, and the data in this level can be
categorized and ranked. This level depicts ordered relationships among the
variable’s observations. For instance, suppose five students scored 98, 96, 95, 94, and
90 in an examination. The student who scored 98 is ranked first, the student who
scored 96 second, the student who scored 95 third, the student who scored 94 fourth,
and the student who scored 90 fifth. This level of measurement indicates an ordering
of the measurements.
• Interval – this is the third level of measurement; the data can be categorized, ranked,
and evenly spaced, meaning the distance between each pair of adjacent values on the
scale is the same. For example, the distance between 94 degrees Celsius and 96
degrees Celsius is the same as the distance between 100 degrees Celsius and 102
degrees Celsius.
• Ratio – this is the fourth level of measurement; the data can be categorized, ranked,
and evenly spaced, and the scale has a true (natural) zero. The true zero is what
distinguishes this level from the interval level, whose other properties it shares. In the
ratio level of measurement, the divisions between the points on the scale have an
equivalent distance between them.
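The four levels can be summarised in a short Python sketch (all values invented for illustration), showing which operation becomes meaningful at each level:

```python
# Nominal: categories only -> counting is the meaningful operation
genders = ["F", "M", "F", "F"]
female_count = genders.count("F")  # 3

# Ordinal: categories with an order -> ranking is meaningful
scores = [98, 96, 95, 94, 90]
ranks = {s: i + 1 for i, s in enumerate(sorted(scores, reverse=True))}
# {98: 1, 96: 2, 95: 3, 94: 4, 90: 5}

# Interval: equal spacing but no true zero -> differences are meaningful
assert (96 - 94) == (102 - 100)  # equal distances in degrees Celsius

# Ratio: equal spacing with a true zero -> ratios are meaningful
weight_a, weight_b = 30, 60
print(weight_b / weight_a)  # 2.0: b is twice a
```

Note that the reverse operations are not meaningful: it makes no sense to say 60 °C is "twice as hot" as 30 °C (interval scale), or to average the codes of a nominal variable.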
The analysis that can be applied to a variable depends on its level of measurement,
which can sometimes limit the insights that are available.
D.2.1 Parametric Statistics - refers to statistical tests or methods used when the
data being studied come from a sample or population that is normally distributed.
It assumes that the variance is homogeneous. The types of data involved in parametric
statistics are interval-scale and ratio-scale data; thus, parametric tests are restricted to
data that show a normal distribution. The variables are independent of one another, and
a continuous scale of measurement is used. The usual measure of central tendency in a
parametric test is the mean. Common parametric tests include the t-test, analysis of
variance (ANOVA), and the Pearson correlation coefficient.
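As one concrete example, the t-statistic for an independent two-sample t-test (pooled variance, equal variances assumed) can be computed by hand. This is an illustrative sketch with invented samples, not a procedure prescribed by the module:

```python
from math import sqrt

def pooled_t(sample1, sample2):
    """t statistic for an independent two-sample t-test (equal variances assumed)."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # Sample variances (divide by n - 1)
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    # Pooled variance combines the two samples' variability
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / sqrt(pooled * (1 / n1 + 1 / n2))

print(round(pooled_t([1, 2, 3], [2, 3, 4]), 4))  # -1.2247
```

In practice the statistic is then referred to a t-distribution with n1 + n2 − 2 degrees of freedom to obtain a p-value.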
D.2.2 Non-Parametric Statistics - refers to statistical methods in which the data are
not assumed to come from prescribed models determined by a small number of
parameters; examples of such models include the normal distribution model and the linear
regression model. Nonparametric statistics often uses ordinal data, meaning it does not
rely on numbers but rather on a ranking or order of sorts. For example, a survey capturing
consumer preferences ranging from like to dislike would yield ordinal data.
Some of the tests or tools employed in non-parametric statistics are:
• The Sign Test - compares two “paired” non-parametric samples.
Example: Is there a difference in the gill withdrawal response of Aplysia at
night versus during the day?
• The Friedman Test - like the Sign Test (a comparison of “paired”
non-parametric samples), but for more than two samples.
Example: Is there a difference in the gill withdrawal response of Aplysia
between morning, afternoon and evening?
Non-parametric statistical tests are used when: the sample size is very small; few
assumptions can be made about the data; data are rank ordered or nominal; and samples
are taken from several different populations.
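An exact two-sided sign test is simple enough to sketch in pure Python (the paired data below are invented for illustration). Tied pairs are dropped, and the p-value is the doubled binomial tail probability under the null hypothesis that positive and negative differences are equally likely:

```python
from math import comb

def sign_test(x, y):
    """Exact two-sided sign test for paired samples x and y.

    Drops tied pairs, counts positive differences, and returns the
    doubled binomial tail probability under H0: P(+) = 0.5.
    """
    diffs = [a - b for a, b in zip(x, y)]
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    k = sum(1 for d in nonzero if d > 0)
    tail = min(k, n - k)  # size of the more extreme (smaller) count
    p = 2 * sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(p, 1.0)

# Invented paired responses: 8 increases, 1 decrease, 1 tie
night = [1, 2, 3, 4, 5, 6, 7, 8, 11, 5]
day   = [2, 3, 4, 5, 6, 7, 8, 9, 10, 5]
print(sign_test(day, night))  # 0.0390625
```

With 8 of 9 non-tied pairs going the same direction, the exact p-value is 2 × (C(9,0) + C(9,1)) / 2⁹ = 0.0390625, small enough to suggest a genuine difference at the conventional 0.05 level.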
There are two types of statistical tests that can be employed in data analysis:
descriptive and inferential statistics.
• Univariate analysis – analyses the qualities of one variable at a time. Only descriptive
tests can be used in this type of analysis.
• Bivariate analysis – considers the properties of two variables in relation to each other.
Inferences can be drawn from this type of analysis.
• Multivariate analysis – looks at the relationships between more than two variables.
Again, inferences can be drawn from the results.
Qualitative data is information associated with the ideas, opinions, values, and
behaviors of individuals in a social context. It refers to non-numeric data such as interview
transcripts, notes, video and audio recordings, pictures, and text documents.
Qualitative data analysis (QDA) is the process of turning written or qualitative data into
findings. These data are mostly expressed in the form of words such as descriptions,
accounts, opinions, and feelings. There are no fixed formulas or rules for this process.
Frequently, the situation or process under study is not sufficiently understood at the outset
to determine precisely what data should be collected. Therefore, repeated cycles of data
collection and analysis allow adjustments to what is further investigated, what questions are
asked and what actions are carried out, based on what has already been seen, answered
and done.
Text or narrative data come in many forms and from various sources. The following
QDA methods may be employed on such data:
E.1.1 Content analysis - refers to the method of categorizing verbal or behavioral data
in order to classify, summarize and tabulate the information. In content analysis, the data
can be analyzed at two levels: descriptive and interpretative. Descriptive analysis answers
the question “What is in the data?”, while interpretative analysis answers the question
“What was meant by the data?”.
Using content analysis, researchers quantify and analyze the presence, meanings,
and relationships of particular words, themes, or ideas. Content analysis is employed to
identify the intentions, focus or communication trends of a person, group or institution.
Aside from revealing patterns in communication content, it is also employed to explain
attitudinal and behavioral responses to communications.
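At the descriptive level, the simplest way to quantify the presence of words or themes is a frequency count. A minimal Python sketch (the interview excerpt is invented for illustration):

```python
from collections import Counter
import re

# Hypothetical interview excerpt (invented for illustration)
transcript = """
I think online classes are flexible, but flexible schedules
still require discipline. Discipline, for me, is the hard part.
"""

# Descriptive level ("What is in the data?"): count word frequencies
words = re.findall(r"[a-z']+", transcript.lower())
freq = Counter(words)
print(freq.most_common(2))  # the most frequently occurring words
```

In a real study, words would typically be grouped into analyst-defined categories (themes) before counting, and the interpretative level would then ask what those frequencies mean in context.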
E.1.2. Narrative analysis – deals with transcribed experiences. Narrative analysis
refers to a cluster of analytic methods for interpreting texts or visual data that have a
storied form. It involves the reformulation of stories presented by respondents, taking into
consideration the context of each case and the different experiences of each respondent.
In other words, narrative analysis is the researcher’s reworking of primary qualitative data.
There are two approaches to narrative analysis, categorized by whether they target
narrative content or structure: the thematic version probes what a story is about, whereas
the structural version asks how a story is composed to achieve specific communicative
aims.
The primary activity in narrative analysis is to give a detailed account of the stories
presented by individuals in their various contexts, grounded in their different experiences.
Discourse analysis involves real text (not invented or artificial text). This analysis is
not limited to what is said; rather, it takes into consideration all aspects of the social and
historical context. It can be a great tool for uncovering the political meanings that inform
written and spoken text.
E.1.5. Grounded theory – involves the intertwined gathering and analysis of data. The
theory is “grounded” in actual data, which means that analysis and theory development
happen after you have collected the information. Grounded theory (GT) is an inductive,
comparative methodology that provides systematic guidelines for gathering, synthesizing,
analyzing, and conceptualizing qualitative data for the purposes of theory building and
modeling.
In efforts to identify empirical social phenomena, and to construct theories that are
constrained by those phenomena, most variants of GT adopt three major strategies:
coding, memo-writing, and theoretical sampling. In GT, data gathering and data analysis
are interactive: from the time data collection begins, grounded theorists engage in data
analysis, which leads to further data collection, subsequent data analysis, and so on.
There are steps in the process of analyzing qualitative data. This process is not
fixed, so moving back and forth between steps is likely to happen:
1. Knowing the data – good analysis depends on understanding the data very well,
so reading and re-reading it is necessary. In the case of recordings, listening to
them several times is helpful. Not all data collected are quality data: there are
instances where the information provided does not add meaning or value, and
investing time and effort in analyzing it may suggest greater value than is
merited. Limitations in the data, and in the level of analysis they can support, are
to be expected.
2. Focus the analysis – review the purpose or objective and what you really want to
find out. Identify a few key questions to answer; these questions may change as
the process progresses, but they help the researcher decide where to start.
Two common approaches to focusing the analysis are:
Focus by question or topic, time period or event – the analysis looks at how
individuals or groups responded to each question or topic, or during a given time
period or event. This is often done with open-ended questions. Consistencies and
differences among the respondents’ answers can be identified.
Focus by case, individual, or group – organize the data from or about the case,
individual or group, and analyze it as a whole.
Combining the two approaches is also possible, in which case the data are
analyzed both by question and by case, individual or group.
Objectives: At the end of the semester, the student will be able to:
The sole purpose of writing a thesis is to demonstrate your proficiency and skills
in academic research and appropriate academic communication, both written and oral.
When writing your thesis, your information-retrieval skills are developed and your
facility for critical and analytical thinking, problem solving and argumentation is
strengthened − all of which are skills required for success in your future working life.
This chapter will guide the students based on the established guidelines of the
university to ensure uniformity in the general content, format, and style of the
manuscript. The department of Information Technology is under the thematic area for
Engineering and Technology Development. (Crizaldo, Ilagan, Plete, & Sedigo, 2016)
It is also important to note that the university allows a group thesis made up of
2-3 students, depending on the complexity of the proposal: 1. difficulty of the problem;
2. scope of the study; 3. coverage of the study in terms of area; and 4. expenses that
will be entailed in the conduct of the study.
A. Chapter 1 – Background
The background of the study provides context to the information that you are
discussing in your paper. Thus, the background of the study generates the reader's
interest in your research question and helps them understand why your study is
important. Typically, the background of a study includes a review of the existing
literature on the area of your research (general topic), leading up to your specific
topic. Once you have discussed the contribution of other researchers in the field, you
can identify gaps in understanding, that is, areas that have not been addressed in
these studies. You can then explain how your study will address these gaps and how
it will contribute to the existing knowledge in the field.
The introductory part of the proposal must immediately catch the interest and
attention of the reader. There is no prescribed length for the introduction, but it must
be concise and complete.
While stating the significance, you must highlight how your research will be
beneficial to the development of science and the society in general. You can first
outline the significance in a broader sense by stating how your research will contribute
to the broader problem in your field and gradually narrow it down to demonstrate the
specific group that will benefit from your research. While writing the significance of
your study, you must answer questions like:
This is written as a paragraph heading in boldface, two spaces after the
objectives of the study. See Figure 11 for a sample of this area of the research
manuscript.
The time and place of the study are written in paragraph form, stating the
month and year the study will start and end and the place or places where the
study will be conducted. The time of the study starts on the day the proposal is
approved and the researcher begins preparing the experiment or instruments for data
gathering, and runs up to the day the researcher is ready to present the results for the
final defense. The place or places of the study are where the study will be conducted.
This is written as a paragraph heading in boldface, two spaces after the significance
of the study.
will be used, focus and depth of the investigation, system development, analysis and
the statistical tools that will be used to make generalizations.
7. Definition of Terms
1. information seeking: the ability to scan the literature efficiently, using manual
or computerized methods, to identify a set of useful articles and books
2. critical appraisal: the ability to apply principles of analysis to identify unbiased
and valid studies.
1. be organized around and related directly to the thesis or research question you
are developing
2. synthesize results into a summary of what is and is not known
3. identify areas of controversy in the literature
4. formulate questions that need further research
Upon identifying a subject or project study, the following steps may help in conducting
the review of related literature (RRL):
1. Related legal basis – the sources of these materials are laws, the constitution,
and department directives such as circulars, orders, memoranda, and many others
which have implications for government thrusts
2. Related literature – published articles, books, journals, magazines, novels,
poetry and many others which have a direct bearing on the proposed study
4. Ayers (1994) claims that dewatering improves waste handling and reduces the
volume of garbage to be incinerated. Incidentally, dewatering is another method
of waste treatment designed to treat wet garbage.
It is a way of giving credit to individuals for their creative and intellectual works
that you utilized to support your research. A citation style dictates the information
necessary for a citation and how the information is ordered, as well as punctuation
and other formatting.
1. Ideas, information, results, opinions from any source that you have
summarised, paraphrased or directly quoted
2. Definitions of terms
3. Illustrations, tables, figures drawn from sources
4. Your ideas that are also those of an author you have read
5. Plans, ideas or anything that was stimulated by others
Paraphrasing and summarizing are related terms that are often confused.
Both are essential techniques for an effective and efficient essay, and an absolute
must when dealing with scientific concepts. Both paraphrasing and summarizing are
allowed and accepted as long as due credit is given to the original source and the
work is not copied, i.e., it is free from any kind of plagiarism.
Paraphrasing
Paraphrasing is used:
• when the ideas have a greater relevance than the style of writing;
• when you want to simplify the work of another person.
Summarizing
Summarizing is the tool in writing which is used when you need the main idea of the
text. It is a condensed form of the written text in your own words with only the highlights
of the text. A summary is much shorter than the original text. It excludes the
explanation of the text. Only the main idea or the basic information is included.
Summarizing is used to refer to work that culminates into the present writing that you
are doing. It is sometimes used when you want to draw attention to an important point.
It is also applicable when you want to distance yourself from the original text.
Summarizing is used:
Summary:
1. Paraphrasing is writing any particular text in your own words while summarizing is
mentioning only the main points of any work in your own words.
2. Paraphrasing is almost equal to or somewhat less than the original text while
summarizing is substantially shorter than the original.
3. Paraphrasing may be done for the purpose of simplifying the original work while
summarizing is done to mention only the major points without any kind of explanation
about the matter.
APA Format
In-text references must be included following the use of a quote or paraphrase taken
from another piece of work.
In-text citations are citations within the main body of the text and refer to a direct
quote or paraphrase. They correspond to a reference in the main reference list.
These citations include the surname of the author and date of publication only. Using
an example author James Mitchell, this takes the form:
• Direct Quote: The citation must follow the quote directly and contain a page
number after the date, for example (Mitchell, 2017, p.104). This rule holds for
all of the variations listed.
• Parenthetical: The page number is not needed.
Two Authors:
The surnames of both authors are stated, with either ‘and’ or an ampersand between
them. For example:
Mitchell and Smith (2017) state… Or …(Mitchell & Smith, 2017).
Three to Five Authors:
All surnames are stated the first time. For example:
Mitchell, Smith, and Thomson (2017) state… Or …(Mitchell, Smith, & Thomson,
2017).
Subsequent cites can be shortened to the first author’s surname followed by et al.:
Mitchell et al. (2017) state… Or …(Mitchell et al., 2017).
No Authors:
If the author is unknown, the first few words of the reference should be used. This is
usually the title of the source.
If this is the title of a book, periodical, brochure or report, it should be italicized. For
example:
If this is the title of an article, chapter or web page, it should be in quotation marks.
For example:
Works should be cited with a, b, c etc following the date. These letters are assigned
within the reference list, which is sorted alphabetically by the surname of the first
author. For example:
If these works are by the same author, the surname is stated once followed by the
dates in order chronologically. For instance:
If these works are by multiple authors then the references are ordered alphabetically
by the first author separated by a semicolon as follows:
For the first cite, the full name of the group must be used. Subsequently this can be
shortened. For example:
In this situation the original author and date should be stated first followed by ‘as
cited in’ followed by the author and date of the secondary source. For example:
Lorde (1980) as cited in Mitchell (2017) Or (Lorde, 1980, as cited in Mitchell, 2017)
• In-text citation doesn’t vary depending on source type, unless the author is
unknown.
• Reference list citations are highly variable depending on the source.
Book referencing is the most basic style; it matches the template above, minus the
URL section. So the basic format of a book reference is as follows:
Mitchell, J.A., Thomson, M., & Coyne, R.P. (2017). A guide to citation. London,
England: My Publisher
Author surname, initial(s) (Ed(s).*). (Year). Title (ed.*). Retrieved from URL (*optional.)
Mitchell, J.A., Thomson, M., & Coyne, R.P. (2017). A guide to citation. Retrieved
from https://www.mendeley.com/reference-management/reference-manager
Articles differ from book citations in that the publisher and publisher location are not
included. For journal articles, these are replaced with the journal title, volume
number, issue number and page number. The basic structure is:
Mitchell, J.A. (2017). Citation: Why is it so important. Mendeley Journal, 67(2), 81-95
Mitchell, J.A. (2017). Citation: Why is it so important. Mendeley Journal, 67(2), 81-
95. Retrieved from https://www.mendeley.com/reference-management/reference-
manager
Mitchell, J.A. (2017). Changes to citation formats shake the research world. The
Mendeley Telegraph, Research News, pp.9. Retrieved from
https://www.mendeley.com/reference-management/reference-manager
Author surname, initial(s). (Year, month day). Title. Title of the Magazine, pp.
Author surname, initial(s). (Year, month day). Title. Retrieved from URL
Website example:
Mitchell, J.A. (2017, May 21). How and when to reference. Retrieved
from https://www.howandwhentoreference.com.
C. Chapter 3 – Methodology
This chapter of the research study discusses the entirety of the project
development. It includes discussions of the data-gathering techniques that the
developers used in order to develop the project study, as well as the project design,
functional decomposition diagram, project development, operation and testing
procedure, evaluation procedure, and statistical treatment.
1. Research Design
It refers to the overall strategy that you choose to integrate the different
components of the study in a coherent and logical way, thereby ensuring you will
effectively address the research problem; it constitutes the blueprint for the
collection, measurement, and analysis of data (De Vaus, 2001).
In survey studies, once data are collected, the most important objective of a
statistical analysis is to draw inferences about the population using sample
information.
"How big a sample is required?" is one of the questions most frequently asked
by investigators. If the sample size is not chosen properly, conclusions drawn from
the investigation may not reflect the real situation for the whole population.
It is therefore necessary to appreciate the importance of sample size and the method
of determining it, along with the sampling procedure used, in relation to the chosen
study, and to consider whether any bias affects the determination of the sample size.
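One widely used rule of thumb for answering this question (only one of several approaches, and not prescribed by the module) is Slovin's formula, n = N / (1 + N·e²), where N is the population size and e the tolerated margin of error:

```python
import math

def slovin(population, margin_of_error):
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to a whole respondent."""
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)

# e.g. a population of 1,000 at a 5% margin of error
print(slovin(1000, 0.05))  # 286
```

Slovin's formula is convenient when nothing is known about the population's variability, but more rigorous sample-size calculations take the expected variance and desired confidence level into account.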
When you conduct research about a group of people, it’s rarely possible to collect data
from every person in that group. Instead, you select a sample. The sample is the group
of individuals who will actually participate in the research (McCombes, 2019).
To draw valid conclusions from your results, you have to carefully decide how you will
select a sample that is representative of the group as a whole. There are two types of
sampling methods:
A context diagram, sometimes called a level 0 data-flow diagram, is drawn in
order to define and clarify the boundaries of the software system. It identifies the
flows of information between the system and external entities. (See Figure 20 for a
sample context diagram.)
The Operation and Testing Procedure shows the steps undertaken to test the
functionality of the developed technology. This is an effective way of checking the
total functionality of the system, as it helps the researcher test for and identify errors
and bugs, and the possible improvements needed, during the testing phase of
development.
4. Statistical Treatment
Statistical treatment of data is when you apply some form of statistical method to a
data set to transform it from a group of meaningless numbers into meaningful output.
Statistical treatment of data involves the use of statistical methods such as:
• mean,
• mode,
• median,
• regression,
• conditional probability,
• sampling,
• standard deviation and
• distribution range.
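Most of the descriptive measures listed above are available in Python's standard statistics module. A brief sketch with invented evaluation scores:

```python
import statistics

scores = [90, 92, 92, 94, 98]  # hypothetical evaluation scores

print(statistics.mean(scores))             # 93.2
print(statistics.median(scores))           # 92
print(statistics.mode(scores))             # 92
print(round(statistics.stdev(scores), 2))  # 3.03 (sample standard deviation)
```

Note that statistics.stdev computes the sample standard deviation (dividing by n − 1); statistics.pstdev is the population version.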
The results and discussion sections are among the most challenging sections to
write. It is important to plan this section carefully, as it may contain a large amount of
scientific data that needs to be presented in a clear and concise fashion. The purpose
of a Results section is to present the key results of your research. Results and
discussion can either be combined into one section or organized as separate sections,
depending on the requirements.
For the BSIT and BSCS programs, Chapter 4 of thesis writing includes the
following parts:
1. Project Description
2. Project Structure
3. Project Test Result
4. Project Capabilities and Limitations
5. Project Evaluation Results
1. Textual Presentation
The discussion about the presentation of data starts off with its most raw and
vague form which is the textual presentation. In such form of presentation, data is simply
mentioned as mere text, that is generally in a paragraph. This is commonly used when
the data is not very large. This kind of representation is useful when we are looking to
supplement qualitative statements with some data. For this purpose, the data should not
be voluminously represented in tables or diagrams. It just has to be a statement that
serves as a fitting evidence to our qualitative evidence and helps the reader to get an
idea of the scale of a phenomenon.
The research summary format resembles that found in the original paper (just
a concise version of it). Content from all sections should be covered/reflected,
regardless of whether corresponding headings are present or not. Key structural
elements of any research summary are as follows:
• Title – it announces the exact topic / area of analysis and can even be formulated
to briefly announce key finding(s) or argument(s) delivered.
• Results section – this section lists in detail evidence obtained from all
experiments with some primary data analysis, conclusions, observations, and
primary interpretations being made. It is typically the largest section of any
analysis paper; hence, it has to be concisely rewritten, which implies
understanding which content is worth omitting and which is worth leaving.
• Conclusion – in the original article, this section could be absent or merged with
“Discussion”. Specific research summary instructions might require this to be a
standalone section. In a conclusion, hypotheses are revisited and validated or
denied, based on how convincing the evidence is (key lines of evidence could be
highlighted).
• References – this section is for mentioning those works that were cited directly in
your summary – obviously, one has to provide appropriate citations at least for the
original article (this often suffices). Mentioning other works might be relevant when
your critical opinion is also required (Scribbr, 2018).
The Conclusions section sums up the key points of your discussion, the
essential features of your design, or the significant outcomes of your investigation. As
its function is to round off the story of your project, it should:
• be written to relate directly to the aims of the project as stated in the Introduction
• indicate the extent to which the aims have been achieved
• summarise the key findings, outcomes or information in your report
• acknowledge limitations and make recommendations for future work (where
applicable)
• highlight the significance or usefulness of your work.
References
American Psychological Association. (2019, October). About APA Style. Retrieved
from https://apastyle.apa.org/about-apa-style
Crizaldo, R., Ilagan, B. J., Plete, A. J., & Sedigo, N. (2016). CvSU Manual for Thesis
Writing. Indang, Cavite: Cavite State University.
Han, J., & Kamber, M. (2006). Data Mining: Concepts and Techniques (2nd ed.). San
Francisco: Morgan Kaufmann.
Pyrczak, F. &. (1992). Writing Empirical Research Reports: A Basic Guide for Students
of the Social and Behavioral Sciences. Los Angeles: Pyrczak Publishing.
Walliman, N. (2011). Research Methods: The Basics. London and New York:
Routledge.