Research Methodology


How to Give an Oral Report

Reports are of two types, namely oral and written.


The same steps apply to oral reports as to written reports. Here, however, we will look at ways to perfect the oral presentation of a report so that you can impress teachers and classmates with your knowledge and delivery of the information.
Follow these guidelines to ensure success in giving an oral report:

1. Select a topic
2. Research the topic at the library and on the internet
3. Decide on a thesis and find evidence to back up your thesis statement
4. Create a written outline on paper
5. Write notes to yourself on paper or on index cards on the main points
of the report
6. Practice speaking the report to yourself
7. Practice the oral report in front of a mirror
8. Practice the oral report in front of a friend or family member
9. Select the appropriate attire for giving the oral report
10. Give the oral report with your notes in hand

While these suggestions are quite simple and largely self-explanatory, it is important to remember that an oral report is just another form of the written report. The same information must be presented; the only difference is in how it is presented. People who appear calm and collected can sometimes convey information better than equally well-informed people who are anxious in front of a group. Some people, on the other hand, write better than they speak in public, and a written report serves them better.
Here are some tips on giving an oral report.

• Do practice in front of people


• Do practice in front of a mirror so that you can see what you look like
• Do make eye contact with people in the audience
• Do not spend the entire oral report staring at your notes
• Do not recite a written report aloud
• Do remove all flashy jewelry or noise-making attire before you speak.
You do not want distractions from your presentation.
• Do speak slowly and coherently.

It is important to remember that your oral report is not your written report. Do not write out the full report in your notes; simply write an outline of the report as your notes. If you put too much information in the notes you hold, you run the risk of simply reading a statement aloud instead of presenting an oral report.

In many ways, planning an oral report is similar to planning a written report.

• Choose a subject that is interesting to you. What do you care about? What would you like to learn more about? Follow your interests, and you'll find your topic.
• Be clear about your purpose. Do you want to persuade your
audience? Inform them about a topic? Or just tell an entertaining
story?

An oral report also has the same three basic parts as a written
report.

• The introduction should "hook" your audience. Catch their interest with a question, a dramatic tale or a personal experience that relates to your topic.
• The body is the main part of your report, and will use most of your
time. Make an outline of the body so that you can share information
in an organized way.
• The conclusion is the time to summarize and get across your
most important point. What do you want the audience to
remember?

It's important to really know your subject and be well organized. If you know
your material well, you will be confident and able to answer questions. If your
report is well organized, the audience will find it informative and easy to
follow.

Think about your audience. If you were listening to a report on your subject,
what would you want to know? Too much information can seem
overwhelming, and too little can be confusing. Organize your outline around
your key points, and focus on getting them across.

Remember: enthusiasm is contagious! If you're interested in your subject, the audience will be interested, too.

Rehearse
Practicing your report is a key to success. At first, some people find it helpful
to go through the report alone. You might practice in front of a mirror or in
front of your stuffed animals. Then, try out your report in front of a practice
audience: friends or family. Ask your practice audience:

• Could you follow my presentation?
• Did I seem knowledgeable about my subject?
• Was I speaking clearly? Could you hear me? Did I speak too fast or too slow?

If you are using visual aids, such as posters or overhead transparencies, practice using them while you rehearse. Also, you might want to time yourself to see how long the report actually takes. The time will probably go by faster than you expect.

Report!

• Stand up straight. Hold your upper body straight, but not stiff, and keep your chin up. Try not to distract your audience by shifting around or fidgeting.
• Make eye contact. You will seem more sure of yourself, and the audience will listen better, if you make eye contact during your report.
• Use gestures. Your body language can help you make your points and keep the audience interested. Lean forward at key moments, and use your hands and arms for emphasis.
• Use your voice effectively. Vary your tone and speak clearly. If you're nervous, you might speak too fast. If you find yourself hurrying, take a breath and try to slow it down.

How do you write a draft for a presentation? Explain with suitable examples.

Drafts are necessary components of any piece of work. But how should one
approach them, and what sort of technique produces the best rough draft?
This article recommends a strategy that may lead to more creative and
flowing pieces of work.
The Goal of the Rough Draft

Rough drafts are called “rough” for a reason. They are not meant to be
refined and publication-ready pieces; rather, it is expected that they include
poor diction, unnecessary copy and rambling text.

One's mission when writing a rough draft should not be to produce brilliant, solid text that could be published immediately. In fact, as explained below, that sort of approach can limit the overall quality of the story. Rather, writers should focus on getting ideas down on paper, experimenting with the story and producing content. A good rough draft is rich in ideas, experimental and full of content.

Problems With the “Write and Edit” Writing Style

Many writers feel the need to edit and refine their rough draft as they write
it. This not only results in a longer writing process, but it can limit the overall
quality of one’s work.

When writers feel obliged to interrupt their rough draft writing process to
tweak and edit copy, their focus is not on developing new ideas or
experimenting with the story idea, but on making changes that would later
be covered in the ensuing editing process. This may result in a duller, less
creative draft.

In addition, if writers are concerned with the overall quality of the first draft,
they may be less open to trying out new story directions and ideas for fear of
producing low-quality content. As a result, their first drafts could contain less
interesting material and lower content levels.

The negative effects of the "Write and Edit" style can also result from stress
over writing, as explained in this article.
Write First, Edit Later

One strategy for writing a good rough draft is to focus on getting content down instead of making edits along the way. This approach allows writers to develop ideas and produce creative content without feeling pressured to polish their writing as they go.

This strategy will not result in a solid, ready-to-publish article. But it will result in a content- and idea-rich draft that can be edited and refined in later steps. Allowing oneself to write without the pressure to edit along the way can make for a more relaxed writing experience; this in itself may result in more free-flowing and creative work.
Writers can take this idea one step further and experiment with "freewriting," a writing method in which the author does not stop writing until a set time has elapsed. Writers who freewrite an entire story may be surprised to see how creative, imaginative and flowing the ensuing work is.

Good rough drafts do not have to be polished, sensible or even coherent. They simply have to include enough creative, free-flowing content for the author to edit and revise into a successful story. Writers who focus on getting their writing done, knowing they can edit later, should end up with a successful rough draft.

Document Structure
If you choose to write your draft, use the example-tutorial from CVS as a starting point.
Each tutorial should begin with an Introduction, with the following
sections:

• Introduction
• Purpose - What the document enables the reader to accomplish
• Audience - The skills and interests of your intended readers
• Additional Resources - Relevant man and info pages, related documentation included in /usr/share/doc/ on Fedora systems, and public Websites
Including these sections in your draft also helps other members of the Project to provide more relevant feedback.
The second section of your draft should include brief sections explaining
any technical concepts that you will use throughout the document. For
example, the Managing Software with Yum tutorial has the following
Concepts section:
• Concepts
• About Packages
• About Repositories
• About Dependencies
• Understanding Package Names
Establishing the technical concepts at the start of the document enables you to focus the rest of the document on the tasks that the reader may wish to accomplish. Reuse relevant "About" sections from published Fedora documents if they exist - this is perfectly acceptable, and saves you unnecessary work.
The remaining sections are the body of the tutorial. Fedora documents are task-oriented, so each section should focus on a particular type of activity or task.
Statistical hypothesis testing
A statistical hypothesis test is a method of making statistical decisions
using experimental data. In statistics, a result is called statistically
significant if it is unlikely to have occurred by chance. The phrase "test of
significance" was coined by Ronald Fisher: "Critical tests of this kind may be
called tests of significance, and when such tests are available we may
discover whether a second sample is or is not significantly different from the
first."[1]

Hypothesis testing is sometimes called confirmatory data analysis, in contrast to exploratory data analysis. In frequency probability, these decisions are almost always made using null-hypothesis tests; that is, ones that answer the question: assuming that the null hypothesis is true, what is the probability of observing a value for the test statistic that is at least as extreme as the value that was actually observed?[2] One use of hypothesis testing is deciding whether experimental results contain enough information to cast doubt on conventional wisdom.

Statistical hypothesis testing is a key technique of frequentist statistical inference, and is widely used, but also much criticized. The main direct alternative to statistical hypothesis testing is Bayesian inference. However, other approaches to reaching a decision based on data are available via decision theory and optimal decisions.

The critical region of a hypothesis test is the set of all outcomes which, if they occur, will lead us to decide that there is a difference; that is, outcomes that cause the null hypothesis to be rejected in favor of the alternative hypothesis. The critical region is usually denoted by C.

Statistical test as a trial

A statistical test procedure is comparable to a trial. A defendant stands trial and is considered innocent as long as his guilt is not proven. The prosecutor tries to prove the guilt of the defendant. Only when there is enough incriminating evidence is the defendant convicted.
At the start of the procedure there are two hypotheses, H0: "the defendant is innocent" and H1: "the defendant is guilty". The first is called the null hypothesis, and is for the time being accepted. The second is called the alternative hypothesis. It is the hypothesis one tries to prove.

In good legal practice one does not want to condemn an innocent defendant. That is why the hypothesis of innocence is rejected only when such an error is very unlikely. This error is called an error of the first kind (i.e. the condemnation of an innocent person), and its occurrence is controlled to be rare. As a consequence of this asymmetric behaviour, the error of the second kind (setting free a guilty person) is often rather large.
An introductory example

A person is tested for clairvoyance. He is shown the reverse of a randomly chosen playing card 25 times and asked which of the four suits it belongs to. The number of hits is called X.

As we try to prove his clairvoyance, for the time being the null hypothesis is
that the person is not clairvoyant. The alternative is, of course: the person is
(more or less) clairvoyant.

If the null hypothesis is valid, the only thing the test person can do is guess.
For every card, the probability (relative frequency) of guessing correctly is
1/4. If the alternative is valid, the test subject will predict the suit correctly
with probability greater than 1/4. We will call the probability of guessing
correctly p. The hypotheses, then, are:

H0: p = 1/4 (the person is only guessing)

and

H1: p > 1/4 (the person is more or less clairvoyant).

When the test subject correctly predicts all 25 cards, we will consider
him clairvoyant, and reject the null hypothesis. Thus also with 24 or
23 hits. With only 5 or 6 hits, on the other hand, there is no cause to
consider him so. But what about 12 hits, or 17 hits? What is the
critical number, c, of hits, at which point we consider the subject to
be clairvoyant, versus coincidental?
How do we determine the critical value c? It is obvious that with the
choice c = 25 (i.e. we only accept clairvoyance when all cards are
predicted correctly) we're more critical than with c = 10. In the first
case almost no test subjects will be recognised as clairvoyant; in the second case, many more will pass the test.

In practice, one decides how critical one will be. That is, one decides
how often one accepts an error of the first kind: a false positive, or Type I error.

With c = 25 the probability of such an error is

P(X ≥ 25 | p = 1/4) = (1/4)^25 ≈ 10^-15,

hence very small. The probability of a false positive here is the probability of randomly guessing correctly all 25 times.

Less critical, with c = 10, gives

P(X ≥ 10 | p = 1/4) ≈ 0.07.

Thus, c = 10 yields a much greater probability of a false positive.

Before the test is actually performed, the desired probability of a Type I error is determined. Typically, values in the range of 1% to 5% are selected. Depending on this desired Type I error rate, the critical value c is calculated. For example, if we select an error rate of 1%, c is calculated from the condition

P(X ≥ c | p = 1/4) ≤ 0.01.

From all the numbers c with this property, we choose the smallest, in order to minimize the probability of a Type II error, a false negative. For the above example, we select c = 13.
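To make the arithmetic above concrete, here is a minimal sketch in Python using scipy (an assumption of this illustration; neither appears in the original text). It computes the false-positive probability for c = 25 and c = 10 and then searches for the smallest critical value whose Type I error rate stays at or below 1%.

from scipy.stats import binom

# Clairvoyance example: under the null hypothesis, X ~ Binomial(n = 25, p = 1/4).
n, p_null = 25, 0.25

def type_one_error(c):
    # P(X >= c | p = 1/4): binom.sf(k) gives P(X > k), so sf(c - 1) = P(X >= c).
    return binom.sf(c - 1, n, p_null)

print(type_one_error(25))   # about 9e-16: reject only when all 25 cards are predicted
print(type_one_error(10))   # about 0.07: reject when 10 or more cards are predicted

# Smallest c with P(X >= c | p = 1/4) <= 1%, which keeps the Type II error as small as possible.
c = next(k for k in range(n + 1) if type_one_error(k) <= 0.01)
print(c)                    # 13 for this example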
Example

As an example, consider determining whether a suitcase contains some radioactive material. Placed under a Geiger counter, it produces 10 counts per minute. The null hypothesis is that no radioactive material is in the suitcase and that all measured counts are due to ambient radioactivity typical of the surrounding air and harmless objects. We can then calculate how likely it is that we would observe 10 counts per minute if the null hypothesis were true. If the null hypothesis predicts (say) on average 9 counts per minute with a standard deviation of 1 count per minute, then we say that the suitcase is compatible with the null hypothesis (this does not guarantee that there is no radioactive material, just that we don't have enough evidence to suggest there is). On the other hand, if the null hypothesis predicts 3 counts per minute with a standard deviation of 1 count per minute, then the suitcase is not compatible with the null hypothesis, and other factors are likely responsible for the measurements.
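As a rough cross-check of this comparison, the following sketch (again assuming Python with scipy, and treating the count rate as approximately normal, which is an illustrative assumption rather than part of the original text) measures how surprising 10 counts per minute would be under each of the two hypothetical null hypotheses.

from scipy.stats import norm

# Suitcase example: observed count rate versus two hypothetical null hypotheses.
observed = 10.0
sigma = 1.0                                  # standard deviation of 1 count per minute

for null_mean in (9.0, 3.0):
    z = (observed - null_mean) / sigma       # distance from the null mean in standard deviations
    p_value = norm.sf(z)                     # P(count >= observed | null hypothesis), one-sided
    print(f"null mean {null_mean}: z = {z:.0f}, one-sided p-value = {p_value:.1e}")

# null mean 9.0: z = 1, p about 0.16  -> compatible with the null hypothesis
# null mean 3.0: z = 7, p about 1e-12 -> not compatible; the null hypothesis is rejected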

The test described here is more fully the null-hypothesis statistical significance test. The null hypothesis represents what we would believe by default, before seeing any evidence. Statistical significance is a possible finding of the test, declared when the observed sample is unlikely to have occurred by chance if the null hypothesis were true. The name of the test describes its formulation and its possible outcome. One characteristic of the test is its crisp decision: to reject or not reject the null hypothesis. A calculated value is compared to a threshold, which is determined from the tolerable risk of error.
The testing process

Hypothesis testing is defined by the following general procedure:

1. The first step in any hypothesis testing is to state the relevant null and alternative hypotheses to be tested. This is important, as mis-stating the hypotheses will muddy the rest of the process.
2. The second step is to consider the statistical assumptions being made about the sample in doing the test; for example, assumptions about statistical independence or about the form of the distributions of the observations. This is equally important, as invalid assumptions will mean that the results of the test are invalid.
3. Decide which test is appropriate, and state the relevant test statistic T.
4. Derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result. For example, the test statistic may follow a Student's t distribution or a normal distribution.
5. The distribution of the test statistic partitions the possible values
of T into those for which the null hypothesis is rejected, the so-called critical region, and those for which it is not.
6. Compute from the observations the observed value t_obs of the test statistic T.
7. Decide to either fail to reject the null hypothesis or reject it in
favor of the alternative. The decision rule is to reject the null
hypothesis H0 if the observed value t_obs is in the critical region, and to accept or "fail to reject" the hypothesis otherwise.
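To show how these steps fit together, here is a minimal sketch of the procedure using a one-sample Student's t-test in Python with scipy; the sample values and the null mean of 5.0 are made up purely for illustration and are not taken from the original text.

from scipy.stats import t, ttest_1samp

# Hypothetical worked example of the seven steps, using a one-sample t-test.
# Steps 1-2: H0: the population mean equals 5.0, H1: it does not; the observations
# are assumed independent and approximately normal. The data below are invented.
sample = [5.2, 4.9, 5.8, 6.1, 5.5, 5.9, 6.3, 5.1]
null_mean = 5.0
alpha = 0.05

# Steps 3-4: the test statistic T is the one-sample t statistic, which under the
# null hypothesis follows a Student's t distribution with n - 1 degrees of freedom.
df = len(sample) - 1

# Step 5: the critical region for a two-sided test at level alpha is |T| > t_crit.
t_crit = t.ppf(1 - alpha / 2, df)

# Step 6: compute the observed value t_obs of the test statistic from the data.
result = ttest_1samp(sample, popmean=null_mean)
t_obs, p_value = result.statistic, result.pvalue

# Step 7: reject H0 if t_obs falls in the critical region; otherwise fail to reject it.
if abs(t_obs) > t_crit:
    print(f"t_obs = {t_obs:.2f}, critical value = {t_crit:.2f}: reject H0 (p = {p_value:.3f})")
else:
    print(f"t_obs = {t_obs:.2f}, critical value = {t_crit:.2f}: fail to reject H0 (p = {p_value:.3f})")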

It is important to note the philosophical difference between accepting the null hypothesis and simply failing to reject it. The "fail to reject" terminology highlights the fact that the null hypothesis is assumed to be true from the start of the test; if there is a lack of evidence against it, it simply continues to be assumed true. The phrase "accept the null hypothesis" may suggest it has been proved simply because it has not been disproved, a logical fallacy known as the argument from ignorance. Unless a test with particularly high power is used, the idea of "accepting" the null hypothesis may be dangerous. Nonetheless the terminology is prevalent throughout statistics, where its meaning is well understood.
