PHD-DESIGN Archives

PHD-DESIGN@JISCMAIL.AC.UK

Subject: Re: Epistemological Differences -- Research in an Academic Discipline vs. Research in a Professional Practice
From: Ali Ilhan <[log in to unmask]>
Reply-To: PhD-Design - This list is for discussion of PhD studies and related research in Design <[log in to unmask]>
Date: Fri, 17 Feb 2017 23:28:29 +0300
Content-Type: text/plain
Parts/Attachments: text/plain (134 lines)

Dear Terry,



If I understand what you wrote correctly, I have some points of
disagreement. You wrote:



-snip-



“To recap, seen over-simplistically in terms of the research methods of
large-sample size, associative, single-question analysis, the use of small
samples in design research may appear to be faulty.

However, seen in terms of methods of analysis that make use of the rich
complex body of information available in each datum and between data, small
sample analysis methods can be both more reliable and offer better
information.”

-snip-



I don’t think you can say that small-sample methods are inherently more
reliable or offer better information. It all depends on the research
question, and every method has its epistemological and technical problems.
If your goal is to capture variance in a human population, you need a
large enough N regardless of the circumstances (whether the analysis is
associative or single-question is an entirely different matter).
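The point about needing a large enough N to capture variance can be shown with a quick simulation (a sketch of my own; the normal population and the sample sizes are illustrative choices, not anything from this thread): the spread of sample means shrinks roughly as 1/sqrt(N), so a small-N study estimates any population quantity far more noisily than a large-N one.

```python
import random
import statistics

random.seed(42)

def spread_of_sample_means(n, trials=2000):
    """Draw `trials` samples of size n from a N(0, 1) population and
    return the standard deviation of the resulting sample means."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

small_n = spread_of_sample_means(5)    # a small-N study
large_n = spread_of_sample_means(500)  # a large-N study
print(f"spread of sample means, N=5:   {small_n:.3f}")   # near 1/sqrt(5)
print(f"spread of sample means, N=500: {large_n:.3f}")   # near 1/sqrt(500)
```

The small-N estimates scatter about ten times as widely as the large-N ones, which is exactly the 1/sqrt(N) relationship.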





You are writing as if modern statistics were a monolithic entity. It is
not. There are methods better suited to explaining variation within units,
methods that focus on population-averaged effects, methods that focus on
auto-correlation between longitudinal measurements, and so on. Some
methods rely heavily on hypothesis testing, while others give primacy to
what “the data have to say” without a priori assumptions. Some rely on a
priori probability distributions; others do not. Some suit single-question
set-ups, while others can tackle more complex designs. Each has its
strengths and weaknesses, and each has its own use.





Going back to your example about exam marks, Rasch Analysis is only
appropriate under certain circumstances. If you are interested, for
example, in differences in test scores between classrooms or between
school districts, or if you want to decompose how much of the difference
in test scores stems from within-school versus between-school variance,
Rasch Analysis will not do the trick; you will need large samples.
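To make the within- versus between-school decomposition concrete, here is a minimal simulation (all numbers are invented for illustration): pupils are generated nested in schools, and the total variance in scores is then split into a between-school component and a within-school component, which is the quantity a multilevel analysis with a large sample would estimate.

```python
import random
import statistics

random.seed(7)

# Illustrative setup: 20 schools, 30 pupils each.  Each school has its
# own mean (between-school variation, sd = 8) and pupils scatter around
# that mean (within-school variation, sd = 12).
schools = []
for _ in range(20):
    school_mean = random.gauss(60, 8)
    schools.append([random.gauss(school_mean, 12) for _ in range(30)])

grand_mean = statistics.fmean(x for school in schools for x in school)

# Between: how far school means sit from the grand mean.
between = statistics.fmean(
    (statistics.fmean(school) - grand_mean) ** 2 for school in schools)

# Within: how far pupils sit from their own school's mean.
def within_var(school):
    m = statistics.fmean(school)
    return statistics.fmean((x - m) ** 2 for x in school)

within = statistics.fmean(within_var(s) for s in schools)

share_between = between / (between + within)
print(f"between-school variance: {between:.1f}")
print(f"within-school variance:  {within:.1f}")
print(f"share between schools:   {share_between:.2f}")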



About usability testing and sub-atomic particles: small Ns are probably
good for identifying problems in a product (software, a system, you name
it), precisely because you are trying to elicit information about a single
product, and between-unit variation in non-living things is typically
extremely low (copies of your software will be more or less identical
until you install them on different computers). But if you wanted to learn
more about your users, rather than your products, small Ns may not help;
once again, there is too much between-unit and within-unit variation. That
is why you cannot use the models developed for sub-atomic particles on
humans. We have (at least for some types of particles) reliable theories
that help us predict the behavior of certain particles probabilistically.
Yet we are unable to predict where the next revolution or economic crisis
will happen, or even probabilistically when my next nervous breakdown will
be. Humans are inherently more complex than David’s research participants,
and there is no math that I know of that can explain or predict their
behavior. :-)



You also wrote:



--snip--

"There is increasing criticism of statistical analysis of big data as
resulting in false findings. I've an excellent paper on this but can't put
my hand on it at the moment."

--snip--



Big data is typically not analyzed with traditional statistical methods.
Machine learning and data mining involve statistics, and there are
substantial overlaps, but they are very different approaches: the latter
two focus on the data and on emergent patterns, while traditional
statistics approaches data to answer pre-set questions that stem from
theory. That said, you can find similar criticisms of any method,
regardless of the N it relies on. Big data can be analyzed in good or bad
ways, like any other data. I don’t think there is an area of research that
is devoid of false findings. Some false findings, however, are harder to
catch. For most qualitative research, it would take immense time and
effort to replicate the results, so it is even harder to find the problems
in such approaches (I am not implying that quantitative research is
better).
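The contrast between answering a pre-set question and letting patterns emerge can be sketched in a few lines (a toy example of my own; the two latent "user groups" and their means are invented): the same dataset answers a theory-driven question about its overall mean, while a naive two-means clustering uncovers a grouping nobody asked about in advance.

```python
import random
import statistics

random.seed(1)

# Toy data: two latent groups of users that no prior theory anticipated.
data = ([random.gauss(2.0, 0.5) for _ in range(200)] +
        [random.gauss(6.0, 0.5) for _ in range(200)])

# Theory-driven route: answer one pre-set question about the data.
overall_mean = statistics.fmean(data)

# Data-driven route: a naive one-dimensional two-means clustering that
# lets the data reveal its own grouping.
c1, c2 = min(data), max(data)
for _ in range(20):  # a handful of Lloyd-style iterations suffices here
    g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    c1, c2 = statistics.fmean(g1), statistics.fmean(g2)

print(f"answer to the pre-set question (overall mean): {overall_mean:.2f}")
print(f"emergent group centres: {c1:.2f} and {c2:.2f}")
```

The overall mean sits near 4, a value typical of no actual user, while the emergent clusters land near the real group centres: a small illustration of why the two approaches answer different questions.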



To summarize, I am not trying to say that big Ns are inherently better
than small Ns; rather, each approach has its own niche, depending on the
questions we are asking.



Warm wishes,



Ali



PS. I wrote this in haste, so please excuse any typos and other
grammatical issues.


-----------------------------------------------------------------
PhD-Design mailing list  <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------
