Dear Jude,
Thank you for your note. Survey research is not an experimental art form.
It has well-established, scientific rules (I usually hesitate to use that
word, since it carries a lot of baggage, but here I can use it with
confidence), validated by decades of research.
Key to any survey, especially when validating models, is sampling. If the
sampling process is not done correctly, even the best questions in the
world will not work: your results will simply not be valid or reliable.
Key to survey research is probability sampling: every unit in your target
population must have a known, non-zero chance of being selected into your
sample. There are many ways to achieve that even when you do not have a
full list of units available. If you do not have a probability sample (and
there are many different probability sampling techniques), your results
really do not mean anything.
not mean anything. Anathema to probability sampling is convenience
sampling, by definition convenience samples are made up of units that are
within your easy grasp (e.g. email lists). A well known case about the
dangers of non-probabilty sampling is the 1936 presidential elections in
the US: https://www.math.upenn.edu/~deturck/m170/wk4/lecture/case1.html
That said, convenience samples still have their uses, but I am not going in
there right now.
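To make the danger concrete, here is a small sketch with entirely invented numbers: a hypothetical population in which the attribute being measured happens to be correlated with being on a mailing list. A probability sample recovers the true rate to within sampling error; a convenience sample drawn only from the mailing list does not.

```python
import random

random.seed(0)

# Hypothetical population of 10,000 people; the attribute of interest is
# whether each one agrees with some statement. Agreement is (by
# construction) correlated with being on a mailing list.
population = []
for _ in range(10_000):
    on_mailing_list = random.random() < 0.2
    # Mailing-list members agree 70% of the time, others 40%.
    agrees = random.random() < (0.7 if on_mailing_list else 0.4)
    population.append((on_mailing_list, agrees))

true_rate = sum(a for _, a in population) / len(population)

# Probability sample: every unit has a known, equal chance of selection.
prob_sample = random.sample(population, 200)
prob_rate = sum(a for _, a in prob_sample) / len(prob_sample)

# Convenience sample: only units within easy grasp (the mailing list).
convenient = [u for u in population if u[0]]
conv_sample = random.sample(convenient, 200)
conv_rate = sum(a for _, a in conv_sample) / len(conv_sample)

print(f"true agreement rate:         {true_rate:.2f}")
print(f"probability-sample estimate: {prob_rate:.2f}")
print(f"convenience-sample estimate: {conv_rate:.2f}")
```

No amount of extra respondents fixes the convenience estimate: it converges on the mailing-list rate, not the population rate.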
The second problem is sample size. Even if you collect the sample
correctly, a sample that is too small makes your results problematic (and
the required size is highly context-dependent). A sample of 200 produces a
sampling error of roughly ±7%. Say your survey has a yes/no question to
which 47% of respondents answer yes and 53% answer no. A six-point
difference seems large, but it means nothing: the smallest difference a
survey of that size can detect is around 7%. The more heterogeneous your
target population, the larger your sample needs to be, and I would venture
to say that designers are not a homogeneous group.
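The ±7% figure above comes from the standard formula for the margin of error of a proportion estimated from a simple random sample, taken at the worst case p = 0.5 with a 95% confidence level:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=200:  ±{margin_of_error(200):.1%}")   # roughly ±7%
print(f"n=2000: ±{margin_of_error(2000):.1%}")  # roughly ±2%
```

Note the square root: quadrupling the sample only halves the error, which is why "just collect more responses" gets expensive quickly.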
Another mental exercise: say you have a probability sample of 2,000
respondents. Can you be happy with your survey and use it confidently?
Still no: it depends on your coverage and your non-response. You need to
make sure that those who answer your survey do not differ significantly
from those who do not. If there are significant differences, then even
with the right sample size and the correct data collection method, your
results are still problematic. Coverage and non-response are especially
hard to deal with when you do not know much about your population.
Nowadays, the leading edge of survey methodology research is about
reducing non-response and coverage errors in the age of cellphones and the
internet. In the good old days, when every household had just one phone,
you could use random digit dialing and be reasonably sure of a
representative sample; today many people do not own landlines, and many
people have more than one cell phone.
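One basic way to check for non-response problems is to compare the composition of your respondents against the sampling frame on a variable you know for both, and derive simple post-stratification weights. Here is a minimal sketch with invented age-group counts (the groups, counts, and variable are all hypothetical):

```python
# Invented counts: who was invited (the frame) vs. who actually answered.
frame = {"18-34": 400, "35-54": 350, "55+": 250}
respondents = {"18-34": 60, "35-54": 90, "55+": 100}

n_frame = sum(frame.values())
n_resp = sum(respondents.values())

for group in frame:
    frame_share = frame[group] / n_frame
    resp_share = respondents[group] / n_resp
    # A weight above 1 means the group is under-represented among
    # respondents and its answers must count for more.
    weight = frame_share / resp_share
    print(f"{group}: frame {frame_share:.0%}, respondents {resp_share:.0%}, "
          f"weight {weight:.2f}")
```

Weighting like this only helps, of course, if the variable you weight on is related to the thing you are measuring, and it cannot rescue a survey whose non-respondents differ on something you never observed.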
Every day, survey methodologists come up with new approaches to tackle
these problems; for example, see:
http://journals.sagepub.com/doi/abs/10.1177/0049124117729702
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5209407/
So, in the end, what I am trying to say is, I still stand behind what I
said :)
Warm wishes,
ali
PS As always, I wrote this in haste, and I apologize for any typos.
-----------------------------------------------------------------
PhD-Design mailing list <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------