Designing reliable and efficient experiments for social sciences (DRES01)

https://www.psstatistics.com/course/designing-reliable-and-efficient-experiments-for-social-sciences-dres01/

This course will be delivered by Daniël Lakens from 4th – 7th February 2019 in Glasgow City Centre.

Course Overview:
This course aims to help you draw better statistical inferences from empirical research, improve the statistical questions you ask when you collect data, design better and more efficient studies, and improve your meta-analytic thinking. In practical, hands-on assignments, you will learn techniques and tools that can be immediately implemented in your own research, such as thinking about the smallest effect size you are interested in, justifying your sample size, evaluating findings in the literature while taking publication bias into account, and performing small-scale meta-analyses.

Course programme:
Monday 4th – Classes from 09:30 to 17:30
Day 1: Improving Your Statistical Inferences
Welcome and Introduction
Overview of the relation between description, estimation, and hypothesis testing. Introduction to Bayesian, likelihood, and frequentist approaches. Relation to philosophy of science (scientific realism and constructive empiricism). Discussion of your existing knowledge about these approaches, and whether or how an improved understanding matters in practice.
Introduction to R and RStudio. We’ll illustrate basic concepts in statistical inference. Practical assignment to perform simulations in R. We’ll examine p-value distributions and confidence intervals through simulations. How do you prevent misinterpretations of p-values and confidence intervals?
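The course runs these simulations in R; purely as an illustration of the idea, here is a minimal Python sketch (all parameter values are invented for this example). Under a true null, p-values are uniformly distributed, so about 5% fall below .05; under a true effect they pile up near zero.

```python
# Illustrative sketch only (not course material): simulating p-value
# distributions for a one-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_p_values(true_mean, n=30, n_sims=10_000):
    """p-values from one-sample t-tests on samples drawn from N(true_mean, 1)."""
    samples = rng.normal(true_mean, 1, size=(n_sims, n))
    return stats.ttest_1samp(samples, 0, axis=1).pvalue

p_null = simulate_p_values(true_mean=0.0)    # the null is true
p_effect = simulate_p_values(true_mean=0.5)  # a medium true effect

# Under the null, roughly 5% of p-values are below .05 (alpha is respected);
# under a true effect, far more are (that proportion is the power).
print(round(float(np.mean(p_null < 0.05)), 3))
print(round(float(np.mean(p_effect < 0.05)), 3))
```

Plotting a histogram of `p_null` versus `p_effect` makes the contrast immediate: flat under the null, right-skewed toward zero under an effect.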
Introduction to effect sizes. Explanation of different families of effect sizes. How do effect sizes complement p-values? What are the effect sizes you can expect in your own research area? Should you care about raw or standardized effect sizes? How do you use effect sizes for power analysis?
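As a concrete illustration of a standardized effect size (again a sketch in Python, not course code; the example data are invented), Cohen's d for two independent groups expresses the mean difference in units of the pooled standard deviation:

```python
# Illustrative sketch: Cohen's d for two independent groups (pooled SD).
import math

def cohens_d(group1, group2):
    """Standardized mean difference: (m1 - m2) / pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Two invented groups whose means differ by 1 raw unit:
print(round(cohens_d([2, 3, 4, 5, 6], [1, 2, 3, 4, 5]), 3))
```

A d of about 0.63 here says the raw difference of 1 is roughly two-thirds of a pooled standard deviation; the standardized value is what feeds into power analysis, while the raw difference is often what matters practically.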
Likelihoods and Bayesian statistics. Bayesian estimation (ROPE procedure) and Bayes factors. How do you quantify and update prior beliefs?
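To make the Bayes factor idea concrete, here is a minimal sketch (in Python, illustrative only) of a Bayes factor for a binomial rate, comparing H0: theta = 0.5 against H1: theta ~ Uniform(0, 1); the data (15 successes in 20 trials) are invented:

```python
# Illustrative sketch: Bayes factor BF10 for a binomial rate.
# H0: theta = 0.5 exactly; H1: theta has a uniform Beta(1, 1) prior.
from math import lgamma, exp, log

def log_beta(a, b):
    """log of the Beta function, via log-gamma for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf10_binomial(successes, n):
    # Marginal likelihood under H1: integral of theta^k (1-theta)^(n-k) dtheta
    log_m1 = log_beta(successes + 1, n - successes + 1)
    # Likelihood under H0: 0.5^n
    log_m0 = n * log(0.5)
    return exp(log_m1 - log_m0)

print(round(bf10_binomial(15, 20), 2))  # > 1: data favor H1
print(round(bf10_binomial(10, 20), 2))  # < 1: data favor H0
```

A BF10 of about 3.2 for 15/20 successes says the data are roughly three times more likely under H1 than under the point null; 10/20 successes instead favor the null.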

Tuesday 5th – Classes from 09:30 to 17:30
Day 2: Improving Your Statistical Questions
Reflection on Day 1
Guidance on how to act. Type I error control: Why it matters, and how it works in practice. What is ‘p-hacking’? How can you recognize it, and prevent it in your own research?
The Question: What would falsify your hypothesis? How can we specify falsifiable predictions? How do you determine your smallest effect size of interest based on theory, practical relevance, or feasibility?
Interpret null-effects using equivalence testing, the Bayesian ROPE procedure, and Bayes factors. Practical assignment to analyze existing data reporting null-results.
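The equivalence-testing idea can be sketched as follows (a Python illustration with invented data and bounds; the course covers the procedure itself, typically via R tools such as TOSTER). In the two one-sided tests (TOST) procedure, you claim equivalence if the effect is significantly above the lower bound AND significantly below the upper bound:

```python
# Illustrative sketch: one-sample TOST equivalence test. The bounds here
# are invented; in practice they come from your smallest effect size of interest.
import numpy as np
from scipy import stats

def tost_one_sample(x, low, high):
    """Return the larger of the two one-sided p-values; equivalence is
    claimed when this maximum is below alpha."""
    p_lower = stats.ttest_1samp(x, low, alternative='greater').pvalue
    p_upper = stats.ttest_1samp(x, high, alternative='less').pvalue
    return max(p_lower, p_upper)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 500)      # invented data with a true mean of 0
p = tost_one_sample(x, -0.3, 0.3)  # bounds: effects within ±0.3 are too small to matter
print(round(float(p), 4))
print(p < 0.05)  # equivalence claimed: the effect is statistically smaller than ±0.3
```

Note the logic: a non-significant p-value alone cannot support a null claim, but rejecting both "the effect is below -0.3" and "the effect is above 0.3" can.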
What do results from single studies tell you? How important are statistics in research lines? Why are replication studies important? How can we ask better theoretical questions?

Wednesday 6th – Classes from 09:00 to 17:30
Day 3: Improving the Informational Value of Studies
Reflection on Day 2
Type II error control: Statistical power. Which sample sizes do you need, and which effects can you study? How do you perform and report a power analysis using software such as G*Power, and how can you perform power analyses through simulation?
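Simulation-based power analysis can be sketched as follows (a Python illustration with invented parameters; the course does this in R). The appeal of the simulation approach is that it generalizes to designs with no analytic power formula:

```python
# Illustrative sketch: estimating power by simulation for an
# independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(d, n_per_group, alpha=0.05, n_sims=5_000):
    """Proportion of simulated experiments that reject the null when the
    true standardized effect is d."""
    a = rng.normal(0, 1, size=(n_sims, n_per_group))
    b = rng.normal(d, 1, size=(n_sims, n_per_group))
    p = stats.ttest_ind(a, b, axis=1).pvalue
    return float(np.mean(p < alpha))

# d = 0.5 with 64 per group is the textbook case for ~80% power:
print(round(simulated_power(d=0.5, n_per_group=64), 2))
```

Swapping the t-test for any other analysis (a mixed model, a non-parametric test) only changes the line that computes p, which is why simulation scales where G*Power's menu of designs ends.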
Additional approaches to sample size justifications. Planning for accuracy, and limitations due to feasibility. How do you plan for both the presence and the absence of an effect? How do you justify the alpha level for your study?
Sequential analyses: How can you design studies by repeatedly collecting data without inflating error rates? What are similarities and differences between Frequentist and Bayesian approaches to sequential analyses?
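The problem sequential designs solve can be shown in a few lines (a Python sketch with invented numbers; the course treats the proper corrections, such as alpha spending, in depth). Peeking at the data after every batch and stopping at the first p < .05 inflates the Type I error rate well above the nominal 5%:

```python
# Illustrative sketch: naive optional stopping inflates the false positive
# rate when the null is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def false_positive_rate(looks, n_per_look, alpha=0.05, n_sims=2_000):
    """Simulate a true null; 'reject' at the first look where p < alpha."""
    hits = 0
    for _ in range(n_sims):
        data = np.empty(0)
        for _ in range(looks):
            data = np.append(data, rng.normal(0, 1, n_per_look))
            if stats.ttest_1samp(data, 0).pvalue < alpha:
                hits += 1
                break
    return hits / n_sims

fpr_fixed = false_positive_rate(looks=1, n_per_look=20)  # analyze once
fpr_peek = false_positive_rate(looks=5, n_per_look=20)   # peek 5 times
print(round(fpr_fixed, 3))  # near the nominal 0.05
print(round(fpr_peek, 3))   # clearly inflated above 0.05
```

Sequential methods keep the convenience of repeated looks while spending the 5% error budget across the interim analyses instead of re-spending it at each one.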
How do you pre-register your research design? What should be included in a pre-registration in addition to the sample size justification? Where can you pre-register? Is it worth the effort?

Thursday 7th – Classes from 09:00 to 17:00
Day 4: Improving Your Meta-Analytic Thinking
Regrettably, we work in a scientific enterprise where the published literature does not reflect real research. Publication bias and selection biases produce a scientific literature that cannot be interpreted without taking these biases into account. We will discuss what real research lines look like, and how to evaluate the literature meta-analytically while keeping bias in mind.
Reflection on Day 3
Open discussion. Any topics you’d like to discuss? Any questions you’d like to ask? Any barriers preventing you from incorporating what you’ve learned so far in your research?
There will be variation in single studies, and we need to think about science in more cumulative ways. We will discuss why not all studies can be expected to be significant in lines of research, even when there is a true effect, and how to deal with this when submitting research for publication.
Introduction to meta-analyses in R. Why is it important to think meta-analytically? Demonstrating meta-analyses through simulations. Explanation of heterogeneity in meta-analyses.
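To give a flavor of what such a simulation looks like (a Python sketch with invented study values, ignoring heterogeneity for simplicity; the course works in R, where packages such as metafor do this properly), here is a minimal inverse-variance, fixed-effect meta-analysis:

```python
# Illustrative sketch: simulate k studies of one true effect and pool
# them with inverse-variance (fixed-effect) weights.
import numpy as np

rng = np.random.default_rng(3)

true_effect = 0.4
k = 20                                   # number of simulated studies
n = rng.integers(20, 200, size=k)        # per-study sample sizes
se = 1 / np.sqrt(n)                      # standard error of each estimate
estimates = rng.normal(true_effect, se)  # observed study effects

w = 1 / se**2                            # inverse-variance weights
pooled = float(np.sum(w * estimates) / np.sum(w))
pooled_se = float(np.sqrt(1 / np.sum(w)))

print(round(pooled, 2))     # close to the true effect of 0.4
print(round(pooled_se, 3))  # more precise than any single study
```

The pooled standard error is smaller than that of every individual study, which is the core of meta-analytic thinking: single studies are noisy, but a line of studies is informative. Real-effect heterogeneity across studies would call for a random-effects model instead.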
Discussion of meta-analytic bias correction techniques, such as trim-and-fill, PET-PEESE meta-regression, the Test of Excessive Significance, and P-curve analysis.

Please send any questions to [log in to unmask]
