
Step 5 Cheat Sheet (DRAFT) by

Step 5: Interpret your Result

This is a draft cheat sheet. It is a work in progress and is not finished yet.

Statistical significance

In hypothesis testing, statistical significance is the main criterion for forming conclusions. You compare your p value to a set significance level (usually 0.05) to decide whether your results are statistically significant or non-significant.
If a result is statistically significant, that means it’s unlikely to be explained solely by chance or random factors. In other words, a statistically significant result would have a very low chance of occurring if there were no true effect in the study.
The p value, or probability value, tells you the statistical significance of a finding. In most studies, a p value of 0.05 or less is considered statistically significant, but this threshold can also be set higher or lower.
The significance level, or alpha (α), is a value that the researcher sets in advance as the threshold for statistical significance. It is the maximum risk of making a false positive conclusion (Type I error) that you are willing to accept.
In a hypothesis test, the p value is compared to the significance level to decide whether to reject the null hypothesis.
 
• If the p value is higher than the significance level, you fail to reject the null hypothesis, and the results are not statistically significant.

• If the p value is lower than the significance level, you reject the null hypothesis, and the results are reported as statistically significant.
Usually, the significance level is set to 0.05 or 5%. That means your results must have a 5% or lower chance of occurring under the null hypothesis to be considered statistically significant.
The significance level can be lowered for a more conservative test. That means an effect has to be larger to be considered statistically significant.
The significance level may also be set higher for significance testing in non-academic marketing or business contexts. This makes the study less rigorous and increases the probability of finding a statistically significant result.
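To make the comparison concrete, here is a minimal Python sketch (not from the original cheat sheet) that runs a two-sample t test on simulated data with scipy and compares the resulting p value to α = 0.05; the group names, means, and sample sizes are invented for the example.

```python
# Minimal sketch: compare a p value to a preset significance level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=50)  # hypothetical control group
group_b = rng.normal(loc=11.0, scale=2.0, size=50)  # hypothetical treatment group

alpha = 0.05  # significance level, set in advance
t_stat, p_value = stats.ttest_ind(group_a, group_b)

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis (significant)")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null (not significant)")
```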
 

Effect Size

Effect size tells you how meaningful the relationship between variables or the difference between groups is. It indicates the practical significance of your results.
A large effect size means that a research finding has practical significance, while a small effect size indicates limited practical applications.
It’s important to report effect sizes along with your inferential statistics for a complete picture of your results. You should also report interval estimates of effect sizes if you’re writing an APA style paper.
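One widely used standardized effect size for the difference between two group means is Cohen’s d. The sketch below is an illustration only: the cohens_d helper and the simulated data are assumptions, not part of this cheat sheet.

```python
# Minimal sketch: Cohen's d, a standardized difference between two group means.
import numpy as np

def cohens_d(x, y):
    """Difference in means divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(7)
treatment = rng.normal(loc=11.0, scale=2.0, size=50)  # hypothetical data
control = rng.normal(loc=10.0, scale=2.0, size=50)

d = cohens_d(treatment, control)
print(f"Cohen's d = {d:.2f}")  # rough guide: 0.2 small, 0.5 medium, 0.8 large
```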

Frequentist vs. Bayesian statistics

Frequentist statistics emphasizes null hypothesis significance testing and always starts from the assumption that the null hypothesis is true.
Bayesian statistics instead uses previous research to continually update your hypotheses based on your expectations and observations.

The Bayes factor compares the relative strength of evidence for the null versus the alternative hypothesis, rather than delivering a yes-or-no decision about rejecting the null hypothesis.
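As a rough illustration of both ideas, the sketch below (my own example, with assumed data and prior) computes a Bayes factor for a coin-flip experiment, comparing the point null H0: p = 0.5 against H1 with a uniform Beta(1, 1) prior on p, and then shows the conjugate Beta posterior that would serve as the prior for the next batch of observations.

```python
# Minimal sketch: Bayes factor and Bayesian updating for a coin-flip experiment.
import numpy as np
from scipy import stats
from scipy.special import betaln, comb

n, k = 100, 61   # hypothetical data: 61 heads in 100 flips
a, b = 1.0, 1.0  # uniform Beta prior on p under H1

# Marginal likelihood under H0 (point null p = 0.5) and under H1
# (binomial likelihood averaged over the Beta prior on p).
m0 = stats.binom.pmf(k, n, 0.5)
m1 = comb(n, k) * np.exp(betaln(k + a, n - k + b) - betaln(a, b))

bf10 = m1 / m0  # strength of evidence for H1 relative to H0
print(f"Bayes factor BF10 = {bf10:.2f}")

# Updating: the posterior for p is Beta(k + a, n - k + b), which becomes
# the prior when the next batch of flips is observed.
posterior = stats.beta(k + a, n - k + b)
print(f"Posterior mean of p: {posterior.mean():.3f}")
```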
 

Decision errors

Type I and Type II errors are mistakes made in research conclusions.
A Type I error means rejecting the null hypothesis when it’s actually true, while a Type II error means failing to reject the null hypothesis when it’s false.

The probability of making a Type I error is the significance level, or alpha (α), while the probability of making a Type II error is beta (β).
These risks can be minimized through careful planning in your study design.
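A quick way to see why α is the Type I error rate is to simulate many experiments in which the null hypothesis is true by construction. This sketch (simulated data; the sample size and number of runs are arbitrary choices) should report a false-positive rate close to α.

```python
# Minimal sketch: estimate the Type I error rate when H0 is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_sims = 0.05, 10_000

false_positives = 0
for _ in range(n_sims):
    # Both samples come from the same distribution, so H0 is true by construction.
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

print(f"Type I error rate: {false_positives / n_sims:.3f} (expected ≈ {alpha})")
```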

Type I & Type II Error

                      H0 is true          H0 is false
Reject H0             Type I error (α)    Correct decision
Fail to reject H0     Correct decision    Type II error (β)