Research Methods & Statistics in Exercise and Sport (HSE104, T2 201

Research Methods  & Statistics Practical Tips

How To Identify Whether A Study Is Valuable To Use Based On Reading Its Abstract

P value

Statistically significant

Sample size: influences how much statistical power the study has to produce correct and accurate findings.

EBSCOHost Databases

Is a great database to use because it acts as a master database, hosting many individual databases.

Journal article credibility. E.G. the British Journal of Sports Medicine (BJSM) is known for its rigorous acceptance process: in 2016 only 11% of research manuscripts submitted for review were accepted for publication.

Impact factor is a quantitative indicator of the quality of a journal: a measure reflecting the yearly average number of citations to recent articles published in that journal.

 The higher the impact factor the better.

British Journal of Sports Medicine (6.724 – 2016)

American Journal of Sports Medicine (4.362 – 2014)

Science (37.205)

Nature (40.137 – 2016)

Learning Module #1: Introduction To Research & The Scientific Method + Ethical & Professional Conduct In Scientific Research

Week #1 (Daniel)

The four major steps of the scientific method consist of (#Exam)

Defining the problem

Formulating a hypothesis

Gather data

Analysing, interpreting and communicating the results.

Understanding Statistical Power and Significance Testing (an interactive visualization tool):

Learning Module #2:

2.1 Early Stages of the Research Process

Week #2

Early Stages Of Research

The acknowledgements section may reveal clues to hidden biases, such as undisclosed sources of funding.

References: For a topic that is well covered, a variety of different resources and newer studies should be evaluated. Also look at whether authors are citing themselves and ask whether there was a good reason to do so.

Misinterpretation of research by the media and the uninformed is COMMON, evident in media outlets writing sensationalist headlines to gain attention.

General structure: not every paper is structured the same way. Some may lack an abstract, or have a discussion section but no conclusion.

Be cautious about drawing conclusions from a single research paper; the experiment/study needs to be repeated by other researchers to ensure the results are consistent. Therefore look at systematic reviews, which aim to synthesise and critically evaluate a large number of studies.


Inductive Reasoning

Begins with observation.

Inductive reasoning = qualitative (data then hypothesis)

Deductive Reasoning

Begins with theoretical explanation.

Deductive reasoning = quantitative (hypothesis first then data)

Primary & Secondary Sources

2.2 Quantitative Methods

Quantitative research takes place when information is gathered in/converted to numerical form.

Quantitative research often stems from deductive reasoning to test theoretical concepts and hypotheses that have been previously defined.

Quantitative methods are ideal for comparing data in a systematic way, making generalisations to a population of interest or for gathering evidence to test a hypotheses.

E.G. Surprisingly to most, surveys that ask respondents to mark options like 'strongly agree' or 'strongly disagree' are quantitative.

2 Types Of Quantitative Study Categories


Descriptive research attempts to describe the status of the study's focus by observing and analysing existing problems, events and patterns, or by using historical data.


Experimental research involves the manipulation of treatments in an attempt to establish a cause-and-effect relationship. The researcher attempts to control all factors except the experimental or treatment variable.

Descriptive Quantitative Study Designs


Cross-Sectional Studies

A limitation of this type of study is that temporality cannot be established (what came first, the chicken or the egg?). Cross-sectional studies preclude us from knowing the direction of the association. Are people not physically active because they have diabetes, or do they have diabetes because they are not active?

But they’re very useful for determining the prevalence of a specific risk factor.

Cohort Studies

Involve the selection of a group of people at random from a defined population. E.G. Info is collected about the group's PA and they are followed over a period of time to monitor whether a disease occurs.

Randomised Controlled Trials

The gold standard for testing a research hypothesis.

Though it is not an appropriate study design for investigating the association between PA and cancer, because there is a long latent period between exposure to the risk factor (physical inactivity) and the manifestation of cancer.

It is, however, commonly used for observing changes in more immediate health outcomes such as fat loss or changes in cardiovascular health, and works best for smaller, more manageable trials.

2.3 Qualitative Methods


Strengths Of Qualitative Methods

5 Types Of Qualitative Designs

Types Of Qualitative Measures / Method Of Data Collection

Observation / Interviews / Focus Groups


Learning Module #3

Week #3

3.1 Statistical Concepts


Statistics is an objective set of processes for analysing quantitative data to investigate what patterns might exist.

Researchers seek to discover:

Is the effect of interest statistically significant?

Meaning: if the research is repeated, will the effect be observed again?

How strong (or meaningful) is the effect of interest?

The question refers to the magnitude, or size of the effect.

Statistical Concepts/Terms

Effect = the analysis outcome you are interested in

Population: A larger group from which a sample is taken

Probability: The odds that a certain event will occur, expressed on a scale of 0 to 1 (or 0% to 100%).

Central Tendency: The use of a single value or score as a representative of a group of scores (i.e. a data set). Central tendency measures include the mean, median and mode.

Statistical Power: The probability that your analysis will detect a real effect, i.e. avoid a 'false negative' (type 2 error). A type 2 error occurs when the results indicate there is no effect when there IS a real effect. It is recommended not to go ahead with a study unless the statistical power is 80%+.

SP is important because it dictates what magnitude of effect you will be able to detect through your statistical analysis. A study that is 'underpowered' may be unable to detect an effect (especially a small effect) even if a true effect does exist (e.g. a real relationship between variables, a real difference between groups).

A lack of SP also limits how conclusive you can be when interpreting your results, because the existence of a true effect cannot be ruled out.
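To make the idea concrete, statistical power can be estimated by simulation: run many hypothetical experiments with a known true effect, and count how often the test detects it. A minimal sketch (the group size, effect size, SD and number of simulations are made-up numbers; NumPy and SciPy are assumed to be available):

```python
# Monte Carlo estimate of statistical power for an independent t test.
# Hypothetical scenario: two groups of n=20, true mean difference of 5
# units, SD of 10, alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, true_diff, sd, alpha, n_sims = 20, 5.0, 10.0, 0.05, 2000

rejections = 0
for _ in range(n_sims):
    group_a = rng.normal(0, sd, n)          # control group
    group_b = rng.normal(true_diff, sd, n)  # treatment group (real effect exists)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < alpha:                            # the test detected the effect
        rejections += 1

power = rejections / n_sims  # proportion of experiments that found the true effect
print(f"Estimated power: {power:.2f}")
```

With these (small) groups the estimated power falls well below the recommended 80%, illustrating why underpowered studies often miss real effects.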

Variability: The degree of spread in the data. Measures include, range, standard deviation and variance.

Standard Deviation: A measure of spread. Low SD = data is closely clustered around the mean. High SD = data is dispersed over a wider range of values. SD allows us to determine whether a value is statistically significant or part of expected variation.

Randomisation: A process used to select a sample from a population to obtain a ‘representative’ sample that is not biased.

Researchers can be more confident, with a representative sample, that their observations and findings may be generalised to the broader population that the sample was drawn from.

Statistical Significance: Relates to the likelihood that your result is a 'false positive', or type 1 error. This sort of error occurs when the results of your analysis indicate there is an effect (or relationship, or difference) when there is no real effect (or relationship, or difference). E.G. Telling someone they have cancer when they don't.

Normal Distribution: Provides a theoretical model for the structure of a data set. E.G. The mean is an accurate way to summarise all values within a data set in a single value, if the data is normally distributed.

Type 1 Errors / False Positive:

When the results of your analysis indicate there is an effect (or relationship, or difference) when there is no real effect (or relationship, or difference). E.G. Telling someone they have cancer when they don’t.

Type 2 Errors / False Negatives:

When the results indicate there is no effect when there IS a real effect. E.G. Telling someone they don’t have cancer when they do.

Why is it important to complement central tendency values with measures of variability?

Because a central tendency value alone can be misleading: two data sets can share the same mean yet have very different spreads. Variability measures show how well the central value represents the data and give greater confidence in which analysis techniques you should be performing.

Null Hypothesis Testing

Is an approach to statistics where you begin your analysis assuming there is no effect (no relationship between variables, no difference between groups).

“The null hypothesis represents ‘zero effect’. Therefore, the null hypothesis for this study is that there is no difference in height between male and female swimmers.”

Under the null hypothesis, when comparing two variables you expect there to be zero difference. E.G. You expect there to be no difference between male and female height.

In the majority of analyses, an alpha of 0.05 is used as the cutoff for significance. If the p-value is less than 0.05, we reject the null hypothesis that there’s no difference between the means and conclude that a significant difference does exist. If the p-value is larger than 0.05, we cannot conclude that a significant difference exists. 
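This decision rule can be sketched in a few lines, using the swimmer-height example. The data below are simulated (all numbers are made up for illustration), and SciPy is assumed to be available:

```python
# Null hypothesis test: no height difference between male and female swimmers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
male_heights = rng.normal(183, 6, 30)    # cm, simulated sample
female_heights = rng.normal(172, 6, 30)  # cm, simulated sample

t_stat, p_value = stats.ttest_ind(male_heights, female_heights)

alpha = 0.05  # conventional cutoff for significance
if p_value < alpha:
    print("Reject the null hypothesis: a significant difference exists.")
else:
    print("Fail to reject the null hypothesis.")
```

Here the simulated groups differ by about 11 cm, so the p-value falls below 0.05 and the null hypothesis of zero difference is rejected.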

The alternative hypothesis is that you expect there will be a difference between two variables. E.G. There will be a difference between male and female heights.

3.2 Measurement Properties


Validity

The degree to which a test measures what it purports to measure.


Reliability

Is the consistency or repeatability of a measure.

Responsiveness & Sensitivity

Refers to the degree to which a measure can accurately detect changes or effects over time.


Is the ease with which a measurement can be undertaken, administered and scored.

3.3 Common Analysis Types & Techniques

Statistical tests used in sports science generally fall into parametric and non-parametric approaches.

Parametric Tests:

Based on assumptions about the structure of the data: these tests assume the data fit certain 'parameters'. These assumptions include normality and equal variances.

It is usually ideal to use parametric statistics because they have more statistical power than non-parametric alternatives.

Disadvantages of parametric approaches

If parametric statistics are used to investigate data that does not satisfy parametric assumptions, you increase the risk of making a type 1 error (i.e. a false positive). That is, your test may detect a statistically significant effect when there is no real effect.

What are non-parametric analyses?

Non-parametric tests do not make assumptions about the structure of the data (e.g. normality or equal variances). These techniques are sometimes called 'distribution free' analyses.

Non-parametric tests can be useful for analysing data that violates parametric assumptions. This approach is also appropriate for analysing ranked (ordinal) data and categorical data.

Disadvantages of non-parametric approaches

Non-parametric analyses may use less of the information that exists within a data set, because data are sometimes converted to a ranked form rather than being analysed as their actual values.

Non-parametric analyses generally have less statistical power than parametric tests. As a result these tests have an increased risk of type 2 error and are less likely to detect a statistically significant effect even when there truly is one.
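The trade-off can be seen by running a parametric test and a non-parametric alternative on the same data. A sketch with made-up data, assuming SciPy is available; the Mann-Whitney U test is used here as the non-parametric counterpart of the independent t test:

```python
# Parametric vs non-parametric comparison of two independent groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(50, 10, 25)  # simulated scores, group A
group_b = rng.normal(58, 10, 25)  # simulated scores, group B

# Parametric: assumes normality and equal variances
t_stat, p_param = stats.ttest_ind(group_a, group_b)

# Non-parametric: rank-based, 'distribution free'
u_stat, p_nonparam = stats.mannwhitneyu(group_a, group_b)

print(f"t test p = {p_param:.4f}, Mann-Whitney U p = {p_nonparam:.4f}")
```

On data that satisfy the parametric assumptions, both tests usually agree, but the non-parametric p-value tends to be slightly larger, reflecting its lower power.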

Common Analysis Techniques

Learning Module #4

Week #4

4.1 Organising Quantitative Data

Messy Data & Clean Data

Good Data Management Principles

Version Control: Raw data should be kept in its original form and stored separately from its clean data set.

Data Dictionary: Created to accompany every data set to describe its contents and structure.

File Structure: Each variable should form a column, observations should form a row and each data set should contain info on only one unit of analysis.

Data Types


Data Cleaning Checklist

4.2 Assumption Testing

Assumption of Normality

Assumes the population from which the sample is drawn is normally distributed on the variable of interest, and that the sample approximately fits a normal distribution.

Why Test Normality?

Before we can proceed with parametric statistics we must test whether the data we want to analyse meets this assumption.

If a variable of interest violates this assumption, the variable may not be suited to statistical analysis using parametric techniques (non-parametric methods may be more appropriate for analysing these variables).

Methods To Test For Normality

Histogram / Shapiro-Wilk / Kolmogorov-Smirnov / Skewness / Kurtosis

Remember: The aim of normality testing is to determine whether a sample distribution approximates the normal distribution. In other words, a sample distribution does not have to be perfectly normal to meet the assumption of normality.
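As a sketch of how some of these checks might be run in practice (simulated data, SciPy assumed):

```python
# Normality checks on a simulated sample: Shapiro-Wilk test plus
# skewness and kurtosis statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sample = rng.normal(loc=65, scale=8, size=100)  # e.g. simulated resting HR values

w_stat, p_value = stats.shapiro(sample)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_value:.3f}")
print(f"Skewness: {stats.skew(sample):.3f}")      # near 0 for symmetric data
print(f"Kurtosis: {stats.kurtosis(sample):.3f}")  # near 0 (Fisher) for normal data

# p > 0.05 -> no evidence against normality; parametric tests may be suitable
if p_value > 0.05:
    print("Sample approximates a normal distribution.")
```

Note the logic: for an assumption test like Shapiro-Wilk, a p-value above 0.05 means the assumption is met, the opposite of how p-values are read in hypothesis tests.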




Homogeneity = the same

Homogeneity of variances = equality of variances

Variance = the average squared deviations between a group of observations and their respective mean. It’s a measure of spread.

 Assumption of Homogeneity of Variance

The assumption of homogeneity of variance assumes that the variance (the spread of observations around their mean) is equal across groups, along the scale of observations.

Why is it important?

Because it's a common assumption across several statistical analyses, including the t test and ANOVA.

It’s also useful for testing meaningful hypotheses.


To test for homogeneity of variance we can use a combination of plots and formal statistical tests.

Interpreting a Box & Whisker Plot

To complement a visual assessment of a box and whisker plot we use a formal statistical test to assess homogeneity of variance.

Interpreting Levene’s Test Outputs
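A minimal sketch of running Levene's test on two made-up groups (SciPy assumed):

```python
# Levene's test for homogeneity of variance across two simulated groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group_a = rng.normal(100, 15, 40)  # simulated scores, similar spread
group_b = rng.normal(105, 15, 40)

stat, p_value = stats.levene(group_a, group_b)
print(f"Levene's test: W = {stat:.3f}, p = {p_value:.3f}")

# As with normality testing, p > 0.05 means the assumption is met:
# the variances can be treated as equal.
if p_value > 0.05:
    print("Homogeneity of variance assumption met.")
```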

4.3 Central Tendency & Variability

Central tendency is commonly referred to as descriptive statistics because central tendency and variability measures are used to quantitatively describe key characteristic of data.


Standard Deviation

A measure of spread. Low SD = data is closely clustered around the mean. High SD = data is dispersed over a wider range of values. SD allows us to determine whether a value is statistically significant or part of expected variation.

A 5-sigma result corresponds to roughly a 1 in 3.5 million probability of occurring by chance.

Interquartile Range

Is used to express variability in data sets that are not normally distributed. It is calculated as the difference between the 3rd quartile (i.e. 75th percentile) and the 1st quartile (i.e. 25th percentile) of a data set.

Reporting Central Tendency & Variability Measures

If a variable is normally distributed use means and standard deviations when reporting descriptive statistics. 

If a variable is not normally distributed use medians and interquartile ranges when reporting descriptive statistics.
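These reporting rules can be sketched as follows (made-up data, NumPy assumed): a roughly normal variable gets mean (SD), a skewed variable gets median (IQR).

```python
# Reporting descriptives according to distribution shape.
import numpy as np

rng = np.random.default_rng(5)
normal_var = rng.normal(70, 10, 200)    # e.g. simulated body mass (kg), ~normal
skewed_var = rng.exponential(2.0, 200)  # e.g. simulated weekly counts, right-skewed

# Normally distributed -> mean and standard deviation
print(f"Normal variable: mean = {normal_var.mean():.1f}, "
      f"SD = {normal_var.std(ddof=1):.1f}")

# Not normally distributed -> median and interquartile range (Q3 - Q1)
median = np.median(skewed_var)
q1, q3 = np.percentile(skewed_var, [25, 75])
print(f"Skewed variable: median = {median:.2f}, IQR = {q3 - q1:.2f} "
      f"(Q1 = {q1:.2f}, Q3 = {q3:.2f})")
```

Note `ddof=1` for the sample (rather than population) standard deviation, which is what is normally reported.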

Learning Module #5

Week #5

5.1 Correlation Analysis

The relationship between 2 or more variables is one of the most common questions in exercise and sport science. This is achieved by conducting a correlation analysis. 

What Are Correlations?

A correlation analysis is a statistical technique that is useful for quantifying:

Direction: of a relationship (positive/negative) between variables.

The direction of a correlation is indicated as either positive, negative or zero (if no relationship exists).

Strength: Magnitude of a relationship between variables.

The correlation coefficient is expressed on a scale from -1.00 to +1.00; the strength of a correlation is its magnitude (distance from 0).

Correlation Coefficient (r) r value

Correlation can be expressed numerically using a correlation coefficient (a score that indicates the direction and strength of a relationship between variables).

When we say there's a correlation between variables we can't assume one variable is affecting another. In other words, we can't assume direction of effect or causality (cause and effect); we're just determining whether there is a trend/correlation between variables so we can make an educated prediction.

Correlation Scatter Plots

Positive linear correlation: r between 0 and +1.

The closer r is to 1, the more tightly the points cluster around the line, and the higher the correlation.

Negative linear correlation: r between -1 and 0.

Fundamental Principle: Correlations are used to investigate the strength and direction of relationships between variables. 
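As a sketch, Pearson's r can be computed with SciPy. The training-hours and fitness-score variables below are made up for illustration, with a positive trend built in:

```python
# Correlation analysis: direction and strength of a linear relationship.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
training_hours = rng.uniform(2, 12, 50)                           # simulated predictor
fitness_score = 40 + 3 * training_hours + rng.normal(0, 5, 50)    # trend + noise

r, p_value = stats.pearsonr(training_hours, fitness_score)
print(f"r = {r:.2f}, p = {p_value:.4f}")

# r close to +1 -> strong positive linear relationship. Correlation alone
# does not establish that training *causes* the fitness change.
```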

5.2 Interpreting & Reporting Correlations

“If the scatter along the y-axis is much larger than the scatter around the line, then we get correlation coefficients which are really large in magnitude and strong. But if the scatter along the y-axis and the scatter around the line are similar, then we get correlation coefficients which are closer to 0.”

Learning Module #6.1

Differences between two groups – The t test

Week #6


T tests are a quantitative method used to identify whether a difference between two groups is statistically significant.

When a t test result shows a significant difference, this means the probability that the observed difference is due to random chance is 5% or less. This is also known as the probability of a false positive.

OR, you could interpret this as: when a t test result shows a significant difference, the probability that the observed difference is due to something other than random chance is 95% or more. In other words, this is the likelihood that the statistical result is a true effect.

T tests can be used to compare two groups on one factor (i.e., only one independent variable)

Types Of T Tests

One-Sample T Test

A statistical test for determining whether the mean of one sample is statistically different from a target value (e.g., the population mean).

E.G. Measure resting HR in a sample of 100 high school students across Victoria, then we could use a one-sample t test to compare our measurements to the population mean for resting HR of Australian high school students.

Independent T Test

Compares the means of two unrelated groups on the same dependent (outcome) variable. Also known as an unpaired samples t test.

E.G. Is there a significant difference between the 100 m sprint times of elite male vs. elite female sprinters at the 2016 Olympic Games?

Dependent T Test

Compares the means of two related groups on the same dependent (outcome) variable. Also known as a paired samples t test.

E.G. Deliver a 6 month falls prevention program for a sample of older adults. To see if the program was effective we could measure the number of falls participants experienced in the year before undertaking the program versus the year after. Because we are comparing the number of falls in the same sample of people, measured at two different time points, we would use a dependent t test.
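The three t test types map onto three SciPy functions. A sketch using made-up numbers for each of the examples above (the population mean, sprint times and fall counts are all simulated for illustration):

```python
# The three t test types in scipy.stats, with simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# One-sample: resting HR of 100 students vs an assumed population mean of 72 bpm
sample_hr = rng.normal(75, 9, 100)
t1, p1 = stats.ttest_1samp(sample_hr, popmean=72)

# Independent (unpaired): male vs female 100 m sprint times
male_times = rng.normal(10.1, 0.15, 24)
female_times = rng.normal(11.0, 0.15, 24)
t2, p2 = stats.ttest_ind(male_times, female_times)

# Dependent (paired): falls in the year before vs after a prevention program,
# measured in the same sample of older adults
falls_before = rng.poisson(3.0, 40)
falls_after = rng.poisson(1.8, 40)
t3, p3 = stats.ttest_rel(falls_before, falls_after)

print(f"one-sample p = {p1:.4f}, independent p = {p2:.4f}, paired p = {p3:.4f}")
```

The key design choice is the pairing: `ttest_rel` is only valid when the two sets of measurements come from the same participants at two time points.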

Learning Module #6.2

Interpreting & Reporting T Test Results

Reporting Descriptive Statistics

Important: Central tendency and variability measures can only describe certain features about a data set. We cannot know from these descriptive statistics alone, whether a difference exists between groups. We need to combine descriptive statistics with the results of a statistical analysis (such as the t test) in order to make such a judgement. 

Checking Levene’s Test Results

Interpreting Levene's Test Results

The T Statistic

The t statistic (t)

Is approximately normally distributed with a mean of zero. This means extreme values (moving away from 0 in either a positive or negative direction) are uncommon, so extreme t statistics correspond with low p values.

The degree of freedom (df)

Indicates the number of values in an analysis that are free to vary, and is calculated as the number of observations minus 1. For an independent t test the degrees of freedom can be calculated within each group then summed:

df for Group 1 (e.g. winning teams) = 85 - 1 = 84

df for Group 2 (e.g. losing teams) = 85 - 1 = 84

df for comparing Group 1 vs. Group 2 = 84 + 84 = 168

N/n/a Value

N = The total number of people in the experiment.

n = How many people are in each level of the independent variable.

a = Refers to how many levels of the factor you have. (E.G. If you're testing a 0 mg dose, a 50 mg dose and a 100 mg dose, that's 3 levels.)

The P Value

The significance level or p value is shown under column heading ‘sig’. This value indicates whether or not there is a statistically significant difference between the two groups.

A researcher reports there is a significant difference between two treatment groups at the 0.05 level. This means that there is a 5% probability (or less) that the difference occurred by chance, without any treatment effect.

Mean Difference

Is simply the difference between the means of the two groups. The order in which the groups are specified is important because the mean difference is calculated as group 1 minus group 2. In this case group 1 = 'win' and group 2 = 'loss', so we can report that winning teams made 4.8 more field goals per game than losing teams.

Learning Module 7.1

Differences between more than two groups – ANOVA

Week #7

Analysis Of Variance (ANOVA)

Is a statistical test that can be used to answer questions such as:

What is the difference between groups? / What is the difference between treatments? / What is the difference between time points? / What is the difference between experimental conditions?

Key Features Of The ANOVA

Intro to Calculating a One-Way ANOVA

A one-way ANOVA has 1 IV (hence the name 'one-way') with 3+ levels (groups), and 1 DV.

ANOVAs are useful for comparing groups because they can identify:

Whether the mean of each group is different to the means of other groups.

Whether the spread of the data (variance) within each group is different to the spread of data within other groups.

The key statistical output of an ANOVA test centers upon a statistic known as the F ratio.

F Ratio

F ratio = variance between groups ÷ variance within groups

A higher F statistic indicates a larger difference between groups relative to the variation within groups, and hence a more significant difference.

Limitation Of The ANOVA

ANOVA can identify that differences exist between multiple groups, but it cannot identify exactly which groups are different from one another.
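A sketch of a one-way ANOVA on three hypothetical dose groups (all numbers made up, SciPy assumed). Note that a significant F ratio still needs a post hoc test to locate which pairs of groups differ:

```python
# One-way ANOVA: one IV (dose) with three levels, one DV (simulated score).
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
dose_0 = rng.normal(50, 8, 30)    # 0 mg group
dose_50 = rng.normal(55, 8, 30)   # 50 mg group
dose_100 = rng.normal(62, 8, 30)  # 100 mg group

f_stat, p_value = stats.f_oneway(dose_0, dose_50, dose_100)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A significant F tells us *some* difference exists between the groups;
# a post hoc test (e.g. Tukey's HSD) is needed to identify which pairs differ.
```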

Learning Module 7.2

Interpreting and reporting one-way ANOVA results

A mock example run-through:

The post hoc test will give you the direction of the differences, or where those differences lie, whereas the ANOVA can't.

You don't want there to be a difference in variance.

For any assumption test (checking that the data set is in a parametric form), you want the result to be above 0.05 (Levene's / Kolmogorov-Smirnov / Shapiro-Wilk), because then your data is normally distributed and the variances are equal.

For the hypothesis tests themselves (t test / ANOVA / post hoc), you want the p value to be below 0.05.


Video: One-way ANOVA in SPSS Deakin Guide

One-Way ANOVA in SPSS Statistics 

Learning Module 8.1

Qualitative methods – Rationale and study design

Week #8

Qualitative research methods are useful for gathering rich information about people’s perceptions, observations, experiences, beliefs, values, motivations, choices and behaviors.

We conduct qualitative research because…

1. We want to develop a complex, detailed understanding of an issue.

2. We want to empower individuals to share their stories, especially those whose perspectives may be under-explored or unreported.

3. We want to write in a flexible style that is suited to conveying stories, rather than being restricted by formal structures used for academic writing.

4. We want to understand the contexts and settings for a problem, issue, event or phenomenon.

5. Quantitative research methods do not fit the problem that we’re interested in.

Value of qualitative research in exercise and sport

Helps researchers gain a deeper understanding of the subjective experiences of athletes and coaches, and helps develop more effective interventions.

Qualitative Study Designs


Narrative research focuses on providing an account of events/actions that are chronologically connected.

When to use: Narrative research is best for capturing the detailed stories/life experiences of a single life or the lives of a small number of people.


Phenomenology

Focuses on describing what all participants have in common as they experience a phenomenon. The purpose of phenomenology is to describe the shared experiences, perspectives and observations of individuals who experienced a phenomenon of interest.

When to use: The type of problems best suited to this design are those where it is important to understand shared experiences among individuals in order to develop appropriate practices or policies.

Grounded Theory

Where the researcher generates a general explanation (theory) of a process, and this theory is ‘grounded in’ the views of many participants.

When to use: Suitable when a theory is not available to explain a process or available theories are incomplete or developed on other populations than those of interest to the researcher.


Ethnography

The researcher describes the shared and learned patterns of values, behaviors, beliefs and language of a culture-sharing group. Ethnographers typically study large cultural groups involving many people who interact over time. This kind of research involves extended observations of the group, where the researcher is immersed in the day-to-day lives of the people.

When to use: If the needs are to describe how a cultural group works and to explore the beliefs, language, behaviors and issues such as power, resistance and dominance.

Case Study

Involves the study of an issue explored through one or more cases within a specific context, through detailed in-depth data collection involving multiple sources of information.

When to use: When the researcher has clearly identifiable cases within a specific setting and is seeking an in depth understanding of the cases or comparison of several cases.

Learning Module 8.2

Sampling methods and trustworthiness

In qualitative research, sampling is an ongoing process that often continues throughout the course of the study, which differs from quantitative research, where participant recruitment is usually conducted once.

The objective of qualitative sampling is to achieve ‘information-richness’, which comprises two considerations: appropriateness and adequacy.

Two considerations:


Appropriateness: Qualitative sampling aims to identify appropriate participants. Appropriate participants are those who can best inform the study because they have characteristics relevant to the research question / phenomenon of interest.


Adequacy: Aims to obtain an adequate sample of information sources. This can be achieved by collecting rich, in-depth data, and by gathering multiple forms of data (i.e. recruiting a range of people from a variety of places, capturing several events, analyzing different types of data). Having an adequate sample of information helps the researcher provide a full description of the phenomenon of interest.

Qualitative Sampling Methods

Purposive Sampling:

PS groups study candidates (potential participants) according to criteria that are relevant to the research question. The objective is to select ‘information-rich’ cases that are likely to provide the greatest insight into the phenomenon being studied such as…

Typical Cases: Participants who are ‘normal’ or ‘average’ examples from the population being studied.

Extreme Cases: Participants who are unusual examples in the population.

Negative Cases: Participants who are ‘exceptions to the rule’.

Quota Sampling:

Researchers plan out how many people to recruit as participants and with which characteristics. QS is used when researchers want to set specific targets for the number of participants in a sample and the number of participants within sub-groups of that sample. Using this method helps researchers select a sample that reflects the corresponding proportions of each subgroup within the broader population.

Snowball Sampling:

A method where existing participants use their social networks to refer the researcher to other people that they know who could participate in or make a contribution to the study. Useful for recruiting participants from groups that are not easily accessible to researchers if using other sampling strategies.


Trustworthiness

Is achieved when the data collected are generally applicable, consistent and neutral.

4 Constructs of Trustworthiness


Credibility

Refers to how well the findings of the research represent reality and how accurately the participants' settings and contexts are described.


Transferability

Refers to whether the results would be useful to those in other contexts. Qualitative researchers must provide sufficient 'thick' description of the phenomenon being studied so that the reader can draw conclusions about whether the findings transfer to their circumstances.


Dependability

When clear and detailed descriptions are provided about the study design, data gathering procedures and data analysis approaches. This enables future researchers to repeat the work and gives readers confidence in the decisions made in the study.


Confirmability

Deals with the issue of researcher bias, and reflects the degree to which others can have confidence in the findings of the research.

Providing Evidence of Trustworthiness


Triangulation

The use of different data collection methods (e.g. a combination of observation, focus groups and individual interviews). Using a variety of methods compensates for the limitations of each method and takes advantage of their benefits. Another form of triangulation may involve sampling a wide range of participants so that individual perspectives can be corroborated against others.

Member Checks

Checking the accuracy of the data can take place during the course of data collection, and also after data collection is complete. Researchers undertake member checking when they ask participants to read transcripts of any dialogues in which they have participated and review whether their words accurately convey the meaning they intended to communicate.

Thick Description

A ‘thick’ detailed description of the phenomenon being explored helps the reader to understand the actual situations and contexts that have been investigated. Thick descriptions help readers determine whether the overall findings ‘ring true’.

Prolonged Engagement

Refers to the need for qualitative researchers to spend enough time obtaining good data and gaining knowledge of the meanings that exist within the data. 

Audit Trail

Describes the changes that occurred during the course of a study and how they influenced the study.

Clarifying Researcher Bias

Acknowledging and managing potential sources of bias, and providing evidence to demonstrate how these biases were dealt with.

Negative Case Checking

Making a deliberate effort to examine cases in which what the researchers expected to happen did not happen. This helps address bias and enables the researcher to investigate the complexities of a phenomenon.

Peer Debriefing

Another person examines the study, data and conclusions drawn. They serve as a 'devil's advocate' who critiques the research and questions the researcher to see whether the findings hold up to independent scrutiny.

Learning Module #9.1

Week #9

Introduction to qualitative analysis

Qualitative data analysis is done during and after data collection, so researchers can begin to develop tentative hypotheses.

Key Features

Interpretive: Making sense of the data, rather than just describing it.

Usually inductive: Guided by what the data says, rather than approaching the data with pre-conceived hypotheses.

Researchers need to take care to avoid reaffirming their assumptions and biases when analysing the data.

General Phases of Qualitative Analyses

1. Data Reduction

All forms of data must be reduced into a format that is ready for further analysis.

E.G. Transcribe interviews.

2. Sorting, Analyzing & Categorizing the Data

The data are sorted to identify patterns that emerge, which are used to categorise the data. This allows the researcher to make decisions about the concepts that are meaningful and relevant.

3. Interpreting the Data

Uses the organised data to construct a holistic portrayal of the phenomenon being studied. Meaning is assigned to the qualitative data, and analytical narratives are created to convey that meaning.

The analytical narrative is an interpretive description of an event or situation. Two ways in which they are conveyed:

Narrative Vignettes: A detailed description of an event, including what people say, do, think and feel in that setting. Vignettes give the reader a sense of being there and are used to support the researcher's assertions about important concepts that emerged from the data.

Direct quotations.

4. Theory Construction

Researchers theorise about their findings and consider the ways the data may be useful for developing a theory about the phenomenon being studied.

A theory is an explanation of an event, action, phenomenon or problem of interest that allows researchers to draw inferences about future events.

Learning Module #9.2

Week #9

One approach: Thematic analysis

Thematic analysis is used to identify, analyse and report themes within qualitative data.


A theme captures something important about the data that is relevant to the research question and represents a patterned response or meaning within the data set.

E.G. Responses from several participants that relate to the same topic may indicate a key theme in the data.

6 Phases of Thematic Analysis (#Exam)

Phase 1: Familiarizing Yourself with the Data

Researchers transcribe the data (if necessary) and immerse themselves in it by reading and re-reading to ensure familiarity and depth of understanding.

Phase 2: Generating Initial Codes

Codes are applied to the data to identify interesting features of the data. The coding process is systematic and applied across the entire data set.

Phase 3: Searching for Themes

Codes are sorted and collated into potential themes. Some codes may go on to form main themes or sub-themes while some codes may be discarded.

Phase 4: Reviewing Themes

Researchers create a thematic map to outline different themes, identify how these themes fit together and understand the overall story they tell about the data.

Phase 5: Defining & Naming Themes

Define and refine the themes. This is the process of identifying precisely what each theme is about and what is captured by the themes overall. A detailed analysis is written for each theme to identify the 'story' that each theme tells. Researchers also consider how themes relate to each other and refine them to minimize overlap.

Phase 6: Producing the Report

Researchers write up their findings into a scholarly report in order to communicate a concise, coherent account of the story across each theme. Data extracts are used to build the analytical narrative. A good qualitative research report goes beyond simple description by making an argument in relation to the research question.
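Phases 2 and 3 above (generating codes, then collating them into themes) can be sketched as a small bookkeeping exercise. The extracts, codes and code-to-theme mapping below are entirely invented for illustration; in real thematic analysis these judgments are interpretive, not mechanical.

```python
from collections import defaultdict

# Phase 2 (hypothetical): each interview extract has been tagged with initial codes.
coded_extracts = [
    ("I train harder when my teammates push me", ["peer support"]),
    ("My coach's feedback keeps me motivated", ["coach feedback"]),
    ("I skip sessions when I'm training alone", ["peer support", "isolation"]),
    ("Praise from the coach made a big difference", ["coach feedback"]),
]

# Phase 3 (hypothetical): sort and collate codes into candidate themes.
code_to_theme = {
    "peer support": "Social influences on motivation",
    "isolation": "Social influences on motivation",
    "coach feedback": "Role of the coach",
}

themes = defaultdict(list)
for extract, codes in coded_extracts:
    # Collect each extract under every theme its codes point to (once per theme).
    for theme in {code_to_theme[c] for c in codes}:
        themes[theme].append(extract)

for theme, extracts in sorted(themes.items()):
    print(f"{theme}: {len(extracts)} supporting extract(s)")
```

Collating the supporting extracts under each candidate theme like this makes Phase 4 (reviewing themes against the data) easier, since every theme carries its evidence with it.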

Learning Module 10.1

Reporting Qualitative Results

Week #10

Common Qualitative Presentation Methods

Narrative Vignettes

The narrative vignette gives a detailed description of an event including what people say, do, think and feel. Gives the reader a sense of ‘being there’.

Themes & Data Extracts

List the identified themes in text or present them in a summary table. Data extracts may also appear as stand-alone direct quotations.

Visualizing a Model, Theory or Themes

Visualizations can be useful for showing the important components of a theory and illustrating complex relationships. They can also be effective for displaying themes and their interactions.

Learning Module 10.2

Introduction to mixed methods and paradigms

Week #10

Mixing qualitative and quantitative methods together to solve problems and answer questions.

E.G. A researcher could have a qualitative research question and a quantitative research question in the same study.

Paradigm differences influence how we understand reality, what we value and which methodologies we use in research.

When to use mixed methods approach?

1. Incorporate a qualitative component into an otherwise quantitative study.

2. Build from one phase of a study to another.

-> Explore qualitatively then develop a quantitative instrument.

-> Follow up a quantitative study with a qualitative study to obtain more info.

3. When quantitative and qualitative data together provide a better understanding of the research problem than either type alone.

4. When one type of research is not enough to address the research problem.

Strengths & Limitations

Learning Module 10.3

Mixed methods study designs

Week #10

Mono-Method Design

Multi-Method Designs

Mixed-Method Designs

Consideration for Mixed Method Studies

Time Order (Sequence) of Quantitative & Qualitative Methods

Concurrent: at the same time

Sequential: one occurs before the other

Priority (weighting) Of Quantitative & Qualitative Methods

Decision Tree For Mixed Method Design

4 Mixed Methods Designs

Sequential Explanatory Design: Quan -> Qual -> Interpretation based on QUAN -> QUAL

Sequential Exploratory Design: Qual -> Quan -> Interpretation based on QUAL -> QUAN

Triangulation (convergent) Design: Quan + Qual -> Interpretation based on QUAN + QUAL

Embedded (advanced) Design: QUAN/qual -> Interpretation based on QUAN (qual) | QUAL/quan -> Interpretation based on QUAL (quan)

Typology of Mixed Methods Designs

Identifying a Mixed Methods Study

Learning Module #11.1

Writing scientific reports

Week #11


When to use a table, figure or text to present results

Learning Module #11.2

Scientific manuscripts and peer review

Journal Impact Factor

Measures the frequency with which the 'average article' in a journal has been cited in a particular year. For a given year, it is the number of citations in that year to items the journal published in the previous two years, divided by the number of citable items published in those two years.
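The two-year impact factor calculation can be shown with a short worked example. The citation and article counts below are invented for illustration (they are not any journal's actual figures); they are chosen to land near the BJSM 2016 impact factor of 6.724 mentioned earlier in these notes.

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Two-year journal impact factor for year Y.

    citations     -- citations received in year Y to items published in Y-1 and Y-2
    citable_items -- number of citable items the journal published in Y-1 and Y-2
    """
    return citations / citable_items

# Hypothetical: 3000 citations in 2016 to 446 articles published in 2014-2015.
print(round(impact_factor(3000, 446), 3))  # -> 6.726, roughly BJSM's 2016 figure
```

Because the denominator counts only the previous two years of articles, a journal that publishes few but highly cited papers can have a much higher impact factor than a larger journal with the same total citations.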

SCImago Journal Rank

Ranks journals using the Scopus database. It calculates the ratio of citations to articles in a journal and takes into account the quality of the citing journal.