And in haste, I found the first of many mistakes. I meant to write:
"David's research participants are inherently more complex than sub-atomic
particles" instead of "Humans are inherently more complex than David’s
research participants".
Sorry for that :)
ali
On 17 February 2017 at 23:28, Ali Ilhan <[log in to unmask]> wrote:
> Dear Terry,
>
>
>
> If I am understanding what you wrote correctly, I have some points of
> disagreement. You wrote:
>
>
>
> -snip-
>
>
>
> *“To recap, seen over-simplistically in terms of the research methods of
> large-sample size, associative, single-question analysis, the use of small
> samples in design research may appear to be faulty.*
>
>
> *However, seen in terms of methods of analysis that make use of the rich
> complex body of information available in each datum and between data, small
> sample analysis methods can be both more reliable and offer better
> information.”*
>
> -snip-
>
>
>
> I don’t think you can say that small-sample methods are inherently more
> reliable or offer better information. It all depends on the research
> question, and all methods have their epistemological and technical
> problems. If your goal is to capture variance in a human population,
> regardless of the circumstances, you need a large enough N (whether the
> analysis is associative or single-question is an entirely different
> matter).
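To make the large-enough-N point concrete, here is a minimal simulation sketch (all numbers invented for illustration, not taken from the discussion): if we repeatedly estimate the spread of a trait in a simulated population from samples of different sizes, small-N estimates of the population variability scatter far more widely than large-N ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sd_estimates(n, reps=1000):
    """Sample standard deviations from `reps` independent samples of size n,
    drawn from a hypothetical trait with mean 100 and true SD 15."""
    return np.array([rng.normal(100, 15, size=n).std(ddof=1) for _ in range(reps)])

small = sd_estimates(5)    # the kind of N common in small-sample studies
large = sd_estimates(500)  # a large-sample study

# The small-N estimates of the true SD (15) vary much more from sample
# to sample than the large-N ones do.
print(small.std(), large.std())
```

The point of the sketch is only that any single small sample can badly over- or under-state population variance, regardless of how the sample is later analyzed.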
>
>
>
>
>
> You are writing as if modern statistics is a monolithic entity. It is not.
> There are methods better suited to explaining variation within units,
> methods that focus on population-averaged effects, methods that focus on
> auto-correlation between longitudinal measurements, and so on. Some methods
> rely heavily on hypothesis testing, while others give primacy to what “the
> data have to say” without a priori assumptions. Some methods rely on
> a priori probability distributions while others do not. Some are more
> suitable for single-question set-ups, while others can tackle more complex
> designs. Each has its strengths and weaknesses, and each has its own uses.
>
>
>
>
>
> Going back to the example you gave about exam marks, Rasch Analysis is only
> appropriate under certain circumstances. If you are interested, for
> example, in differences in test scores between classrooms or between
> school districts, or if you want to decompose how much of the difference
> in test scores stems from within-school versus between-school variance,
> Rasch Analysis will not do the trick; you will need large samples.
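For what it is worth, that within- versus between-school decomposition can be sketched with the classic one-way ANOVA variance components (a toy simulation with invented numbers, just to show why many schools and many pupils are needed):

```python
import numpy as np

rng = np.random.default_rng(1)
n_schools, n_pupils = 40, 30

# Simulate: each school has its own mean shifted from the grand mean
# (between-school SD = 5), and pupils vary around their school's mean
# (within-school SD = 10).
school_effects = rng.normal(0, 5, n_schools)
scores = school_effects[:, None] + rng.normal(0, 10, (n_schools, n_pupils))

grand_mean = scores.mean()
school_means = scores.mean(axis=1)

# Classic one-way decomposition of the total sum of squares
ss_between = n_pupils * ((school_means - grand_mean) ** 2).sum()
ss_within = ((scores - school_means[:, None]) ** 2).sum()

# Method-of-moments variance-component estimates
ms_between = ss_between / (n_schools - 1)
ms_within = ss_within / (n_schools * (n_pupils - 1))
var_within = ms_within                            # ≈ within-school variance
var_between = (ms_between - ms_within) / n_pupils # ≈ between-school variance
```

With only a handful of schools, `ms_between` rests on so few degrees of freedom that the between-school component cannot be estimated with any precision, which is the large-sample requirement in the paragraph above.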
>
>
>
> About usability testing and sub-atomic particles: small Ns are probably
> good for identifying problems in a product (software, a system, you name
> it), precisely because you are trying to elicit information about a single
> product, and between-unit variation in non-living things is typically
> extremely low (copies of your software will be more or less identical until
> you install them on different computers). But if you want to learn more
> about your users, not your products, small Ns again may not help. Once
> again, there is too much between- and within-unit variation. That is why
> you cannot use the models you use for sub-atomic particles on humans. We
> have (at least for some types of particles) reliable theories that help us
> predict the behavior of certain particles in a probabilistic manner. Yet we
> are unable to predict where the next revolution or economic crisis will
> happen, or to predict, even probabilistically, when my next nervous
> breakdown will be. Humans are inherently more complex than David’s research
> participants, and there is no math that I know of that can explain or
> predict their behavior :)
>
>
>
> You also wrote:
>
>
>
> *--snip--*
>
> *"There is increasing criticism of statistical analysis of big data as
> resulting in false findings. I've an excellent paper on this but can't put
> my hand on it at the moment."*
>
> *--snip—*
>
>
>
> Big data is typically not analyzed with traditional statistical methods.
> Machine learning and data mining involve statistics, and there are
> substantial overlaps, but they are very different approaches: the latter
> two focus on the data and emergent patterns, while traditional statistics
> approaches data to answer pre-set questions that stem from theory. That
> said, you can find similar criticisms of any method, regardless of the N
> it relies on. Big data can be analyzed in good or bad ways, like any other
> data. I don’t think there is an area of research that is devoid of false
> findings. Some false findings, however, are harder to catch. For most
> qualitative research, it would take immense time and effort to replicate
> the results, so it is even harder to find the problems in such approaches
> (I am not implying that quantitative research is better).
>
>
>
> To summarize, I am not trying to say that big Ns are inherently better
> than small Ns. Each approach has its own niche, depending on the questions
> we are asking.
>
>
>
> Warm wishes,
>
>
>
> Ali
>
>
>
> PS. I wrote this in haste, so please excuse the typos and other
> grammatical issues.
>
-----------------------------------------------------------------
PhD-Design mailing list <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------