Rosan,
I'm not sure how it works where you're based, but when I was a student in Geneva — and the program was based largely on an American model for studying political science — we had a "colloquium." That meant, in short, that a bunch of graduate students and professors would meet once a month and a few people would have the floor that day. If we imagine a research process as an inverted pyramid, with "what's my question?" at the top and "my answer" at the bottom (or maybe a funnel is a better metaphor), then our job was to learn where that person was in the funnel and try to help.
One real fear for any "lone scholar" is that the question they've asked isn't refined or specific enough — and, at the very same time, is too specific and hence has limited value to the wider field as well as to his or her own career. So people want to talk about that.
And when conducting a literature review — to find either material gaps in our knowledge that a study could fill, or a logical flaw or error that a person wants to correct — we want to be sure we didn't miss something important. So we need help.
And once we nail the question and have decided either to fill a gap or to correct an error (or, for the bold, to make both an empirical and a theoretical contribution), we need a way to proceed. This is the "design phase." We need people both to sit back and be creative with us, asking "what would answer that?", and to help abduce toward a solid design by testing falsifications (e.g., "yeah, we could do it that way, but I heard the data set is flawed because they only interviewed men, so there is a gender bias, and since you're really interested in the whole population, that won't do … but what might help is new work being done at X …").
Anyway, absent other scholars to bounce these things off of in a productive "can do" manner, we're all alone and it's harder. And the worry is that we invest years answering something and it turns out we made a mistake. That happens all the time, but it is worth trying to avoid.
I think a Ph.D. list should — among its other values — be a sort of colloquium. We should be able to work as a virtual design studio ON research design. And when the research is finished (or parts of it are, anyway), we can help at other phases too.
So more than anything, this is what I meant to promote in my note.
And what I also, by implication, want to encourage a movement away from is the substitution of mere conventional wisdom for research; the use of "groupthink" to relieve us of the duty of independent thought; and the encroaching and furtive notion that design and design research are either "above" all this or somehow different and therefore get to operate under different rules. They aren't and they don't.
Currently The Policy Lab is finishing a document with UNIDIR on "evidence-based programme design" for the reintegration of ex-combatants, and we introduce the strategic design framework as a way to conceptualize evidence (as something distinct from mere information) and to show how evidence is used to build theory for the benefit of designing solutions for social action. We will post that document here (or at least on Academia.edu, and I'll mention it here). This is a HUGE step of advocacy for design and public policy.
This will be an agenda, and I'm extremely keen to find like-minded design practitioners and scholars who want to get involved in this work. That's why I haunt this list.
best,
d.
_________________
Dr. Derek B. Miller
Director
The Policy Lab
321 Columbus Ave.
Seventh Floor of the Electric Carriage House
Boston, MA 02116
United States of America
Phone
+1 617 440 4409
Twitter
@Policylabtweets
Web
www.thepolicylab.org
This e-mail includes proprietary and confidential information belonging to The Policy Lab, Ltd. All rights reserved.
On Feb 6, 2012, at 9:27 AM, Rosan Chow wrote:
> Hi Derek,
>
> Thank you for being so generous with your knowledge and time. Your post is much more than I would ever expect from anyone on the list.
>
> Although I might not follow exactly the research design or the research question that you have sketched, the issues that you asked me to consider are general enough to be very helpful indeed.
>
> Best regards,
> Rosan
>
> -----Original Message-----
>
> From: Derek B. Miller [mailto:[log in to unmask]]
> Sent: Friday, 3 February 2012 11:38
> Subject: Re: Is claim/research on 'success' one-sided? RE: Apple Success under Jobs
>
> Rosan,
>
> I've now read your question carefully (this email only) and several times. This is what your research design requires if you are posing this as an empirical question:
>
> 1. "my reading of popular and professional accounts of the success of Apple under Jobs"
>
> That is a discourse analysis, and one helpful approach is predicate analysis. Your universe of relevant material will be defined by the criteria you set, i.e., popular and professional accounts. You will need to define those terms and explain why these definitions are productive and generative in answering your question. Your answers will NOT be the only answers available. Rather, they need to show fidelity between your question, your modes of analysis, and the claims you will be making from your analysis.
>
> You also have a nicely available timeline for your analysis (i.e. under Jobs).
>
> 2. You are going to need to independently measure the success of Apple under Jobs. There are many, many ways to do this, but here are some of the key conceptual matters you will need to attend to: How are you measuring success? Revenue? Year-end accounting? Market share? Capitalization? The relationship of their strategic goals to their achievements (in which case you're using an in-house measure)? One way or another, this needs to be answered and quantified over time.
>
> 3. Whatever the measure, you will now need to make a claim about the status of that level of success. And the only way to do this that I can think of is comparatively. So compare "like to like." I'm not an expert in this business area, so I don't know the answer. But I need to see at least several time series lines showing a "measure of success" for these 2/3/4/5 companies. And they should be co-terminous (sorry for the pun) with Jobs's tenure at Apple.
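A minimal sketch of what step 3 could look like in practice, assuming yearly revenue is the measure of success chosen in step 2. All figures and company names below are invented placeholders, not real financial data:

```python
def cagr(series):
    """Compound annual growth rate of a yearly revenue series."""
    years = len(series) - 1
    return (series[-1] / series[0]) ** (1 / years) - 1

# Hypothetical yearly revenue over the same co-terminous window
# for each firm (illustrative numbers only).
revenue = {
    "Apple":   [8.0, 13.9, 19.3, 24.6, 36.5],
    "Rival A": [30.0, 33.0, 36.0, 40.0, 44.0],
    "Rival B": [12.0, 13.0, 12.5, 14.0, 15.5],
}

# One comparable "measure of success" per company.
growth = {firm: cagr(series) for firm, series in revenue.items()}
for firm, g in sorted(growth.items(), key=lambda kv: -kv[1]):
    print(f"{firm}: {g:.1%} CAGR")
```

The same like-to-like comparison could be run with market share, capitalization, or whatever measure step 2 settles on, as long as the window is identical for every firm.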
>
> 4. And you will then need to do another discourse analysis of the "popular and professional accounts" of these other companies.
>
> You eventually arrive at a series of measures of success that can be compared to measures of reputation grounded empirically in discourse analysis.
>
> You will then be able to look across the cases and say, "yup, Apple was definitely perceived to be doing better than rivals that were, in fact, doing just as well." Or not. I have no idea.
>
> You will still be in NO position to explain WHY any of this might be true. You will also be in no position to talk about the "tendency of journalists" to support their theories using Apple, because you didn't code the data for "theoretical claims by journalists," which is harder, but possible with the same data set. But even there, you'll need to separate out "journalists" from "popular and professional accounts," and then create a sub-set that identifies those who put forward theoretical claims and mobilized the "success" of Apple under Jobs to support them.
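The coding and sub-setting described above could be sketched as a simple filter over a coded corpus. The records and field names here are invented for illustration; they are not a real coding scheme:

```python
# Toy coded corpus: each record is one account, hand-coded for
# author type and whether it makes a theoretical claim that
# mobilizes Apple's success (all entries are hypothetical).
corpus = [
    {"author_type": "journalist", "theoretical_claim": True,
     "cites_apple_success": True},
    {"author_type": "journalist", "theoretical_claim": False,
     "cites_apple_success": True},
    {"author_type": "blogger", "theoretical_claim": True,
     "cites_apple_success": True},
    {"author_type": "academic", "theoretical_claim": True,
     "cites_apple_success": False},
]

# Step one: separate journalists from the wider set of accounts.
journalists = [r for r in corpus if r["author_type"] == "journalist"]

# Step two: the sub-set who put forward theoretical claims and
# mobilized Apple's success to support them.
theorists = [r for r in journalists
             if r["theoretical_claim"] and r["cites_apple_success"]]

print(f"{len(theorists)} of {len(journalists)} journalists "
      "made theoretical claims mobilizing Apple's success")
```

The point of the sketch is only that the "tendency of journalists" claim requires these extra coding fields; without them, the original data set cannot answer that question.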
>
> Good luck with the study. I look forward to the results.
>
> Derek Miller
>
>
>
> On Feb 3, 2012, at 10:55 AM, Rosan Chow wrote:
>
>> Hi Ken,
>>
>> I am glad that you were amused and thanks again for replying. Although it will take away the amusement, it is useful to place the general question in the context in which it arises. To repeat: the context was my reading of popular and professional accounts of the success of Apple under Jobs. I wanted to know whether the impression I got from the reading was correct and whether this impression represented a more general problem among researchers who study successful business / management / innovation / engineering / design cases. So my question is more specific than you have taken it to be.
>>
>> My impression was that there was a tendency for journalists, bloggers, or even researchers to read or use the success of Apple to support their theories or points of view.
>>
>> I am aware of the values and difficulties of case study research and I know Nonaka's work a little bit. And precisely because of this background knowledge, I was even more struck to find that despite the theoretical discussion and careful analysis (which of course sets his paper apart from other more casual commentaries), this particular paper of his left the same kind of impression mentioned above.
>>
>> He probably has done more work since to substantiate his theory of innovation management, and I am not at all questioning his theory (which I actually like, but this is not the point). I am curious: in this particular paper he used the case studies to support his theories without, in my judgment, the kind of robust argument that you said is needed for an ex-post facto analysis. For a positivist account of good theory building from case study research, I have located this paper:
>> http://intranet.catie.ac.cr/intranet/posgrado/Met%20Cual%20Inv%20accion/Semana%203/Eisenhardt,%20K.%20Building%20Theories%20from%20Case%20Study%20Research.pdf
>>
>> However, my focus is not on evaluating Nonaka's paper, but rather on my impression stated above. I would be happy to hear that my impression is not correct, that there is no problem at all in the research on successful cases, and that Nonaka's paper was written this way because it was at the beginning of a theory-building effort, or whatever … I am completely open … but I would appreciate some pointers.
>>
>> Many thanks.
>> rosan
>>
>>
>> -----Original Message-----
>> From: Ken Friedman [mailto:[log in to unmask]]
>> Sent: Thursday, 2 February 2012 13:34
>> Subject: Re: Is claim/research on 'success' one-sided? RE: Apple Success under Jobs
>>
>> Well, I'm just sitting here laughing. This is the big question: "is there a genuine problem when it comes to ex-post facto analysis of success? If yes, how is it overcome in research?"
>>
>> As history, all analysis of cases is ex post facto. No one can properly isolate all the key variables in historical analysis, and no one can re-run historical cases to see whether alternate choices would actually have made a difference.
>>
>> On a limited basis, one can attempt to simulate the effects of minor differences in situations where one can analyze and conceptually isolate those differences, but there is no way to be sure.
>>
>> The way through this is robust, reasoned analysis. Nevertheless, historical analysis - including the historical analysis of business cases - always involves judgment calls.
>>