I posted a response to a query from Rosan Chow. She asked an interesting research question about (to paraphrase) whether the lessons being learned from Apple's great successes under Steve Jobs might not be in line with the actual performance of the company. I want to use that exchange as a catalyst for a wider discussion.
I can see many practical as well as academic contributions such a research project could make. Off the top of my head, and admittedly only as examples, these might include:
A. An opportunity to demonstrate that we (technologists, designers, business leaders, and others concerned with "learning from Apple") are either learning the wrong lessons from Apple's experience, or else that we need to step back from the hyperbole and ground ourselves better in data, so that business leaders (and others) make wiser use of Apple (or Steve Jobs, or Apple under Jobs) as a case study for making decisions.
B. A chance to focus on specific folk theories, urban myths, or rumors about Apple/Jobs that, in being exposed, demonstrate something about ourselves and the way we're managing our companies, our economies, our innovation processes ... you name it.
And from an academic perspective, there are also contributions to make, to wit:
1. Comparative analysis of actual business performance and public rhetoric about that performance (has this been done? Using what techniques? I don't know.)
2. New ways of creating measures of communicative utterances that could give us a way to look empirically at communicative phenomena.
3. New comparisons of corporate communication strategies with the actual public discourse around the topics those companies managed. How did they do? What does this imply?
The list goes on and on. So it is, genuinely, an interesting question. It might not be everyone's cup of tea, but it's potentially productive. The comments that follow are NOT directed at Ms. Chow. It was simply her post that made me want to say this:
Having observed the list now for about two years (and I defer, of course, to Ken, Norman, and the other big guns), I sense a sort of malaise, or boredom, or even disdain when the conversation turns from talk about a question to the hard, nuts-and-bolts, no-screwing-around practical steps one needs to actually answer it. Of course we can measure corporate performance. Of course we can empirically document the way a company (or relationship, or whatever) is described. Of course we can use ordinal scales to create measures ("worst, bad, fair, acceptable, good, very good, best" is a perfectly legitimate system of measure). But this is all hard, time consuming, and governed by the rules of social science.
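(An aside, purely to show how low the technical barrier is: here is a minimal sketch of that seven-point ordinal scale encoded for analysis, in Python with pandas. The ratings themselves are invented for the example.)

    import pandas as pd

    # The seven-point ordinal scale from above, in rank order.
    SCALE = ["worst", "bad", "fair", "acceptable", "good", "very good", "best"]

    # Invented ratings for illustration: ten commentators rating one company.
    ratings = pd.Series(
        ["good", "best", "very good", "fair", "best",
         "good", "acceptable", "best", "very good", "good"],
        dtype=pd.CategoricalDtype(categories=SCALE, ordered=True),
    )

    # Because the categorical is ordered, comparisons and ranks are meaningful
    # even though the distances between the steps are not.
    share_good_or_better = (ratings >= "good").mean()
    median_rank = ratings.cat.codes.median()  # 0 = "worst" ... 6 = "best"

    print(f"at 'good' or above: {share_good_or_better:.0%}")  # 80%
    print(f"median rank: {median_rank}")                      # 4.5

Ordinal does not mean unanalysable: ranks, medians, and order comparisons are all legitimate here, even though averaging the labels themselves is not.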
The simple fact is, we can't improve, criticize, "take a critical look at," or otherwise move beyond the basics until we understand what they are.
This is a Ph.D. list. These are questions about communication, about measures, about research design, and about comparative analysis. There are right ways and wrong ways to conduct such work. There are accurate and inaccurate ways to code data, count stuff, and make claims about them.
I would expect someone studying for a Ph.D. who asks social science questions (and you could ask theological ones, by contrast) to be MASTERING the fundamentals of research design, well enough to teach them to future Ph.D. candidates at the academy.
I would really like to see more people posing questions and asking others to help them refine them; stating their question and asking about techniques for answering it; proposing techniques for answering it and seeing whether anyone can spot a flaw in the method before they get too far and waste a lot of time; assisting with some inter-coder reliability testing on methods someone has devised, to make sure they really work; asking whether their interpretation of the data sounds OK or overstated; and asking whether the claims being made could contribute to something wider and more comprehensive than the literature the scholar relied on to build the case. Because almost always, the answer is yes. But we need each other to know this.
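To make the inter-coder reliability point concrete, here is a minimal sketch of Cohen's kappa for two coders, in plain Python. The coders, categories, and labels are all invented for illustration, and kappa is only one common statistic among several (Krippendorff's alpha is another); treat this as a sketch, not a prescription.

    from collections import Counter

    def cohens_kappa(coder_a, coder_b):
        """Agreement between two coders, corrected for chance agreement."""
        n = len(coder_a)
        assert n == len(coder_b) and n > 0
        # Observed agreement: share of items both coders labeled identically.
        p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        # Chance agreement, from each coder's marginal label frequencies.
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        p_e = sum(freq_a[lab] * freq_b[lab] for lab in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Invented data: two coders classifying ten statements about a company.
    a = ["praise", "praise", "neutral", "criticism", "praise",
         "neutral", "praise", "criticism", "neutral", "praise"]
    b = ["praise", "neutral", "neutral", "criticism", "praise",
         "neutral", "praise", "praise", "neutral", "praise"]
    print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.67: substantial, not perfect

The reason to run a check like this before coding a full corpus is exactly the point above: if two people cannot apply a coding scheme to the same material and agree beyond chance, it is the scheme, not the corpus, that needs work.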
I'd love to be a resource for students or other scholars on ANY of those kinds of questions. I absolutely will NOT be a great resource on most design-related themes. But if the investigation is grounded in empirical inquiry, I'm quite handy. And I would end by saying that this learned skill is by far the most valuable thing I took away from my Ph.D. The literature has moved on, the themes have changed, the field has evolved; but the methods change only over decades, and I know how to learn the new ones. That's worth 5-8 years of study.
Cheers.
_________________
Dr. Derek B. Miller
Director
The Policy Lab
321 Columbus Ave.
Seventh Floor of the Electric Carriage House
Boston, MA 02116
United States of America
Phone: +1 617 440 4409
Twitter: @Policylabtweets
Web: www.thepolicylab.org
This e-mail includes proprietary and confidential information belonging to The Policy Lab, Ltd. All rights reserved.
On Feb 3, 2012, at 10:55 AM, Rosan Chow wrote:
> Hi Ken,
>
> I am glad that you were amused, and thanks again for replying. Although it will take away the amusement, it is useful to contain the general question within the context in which it arises. To repeat: the context was my reading of popular and professional accounts of the success of Apple under Jobs. I wanted to know whether the impression I got from that reading was correct and whether this impression represented a more general problem among researchers who study successful business / management / innovation / engineering / design cases. So my question is more specific than you have taken it to be.
>
> My impression was that there was a tendency for journalists, bloggers, or even researchers to read or use the success of Apple to support their theory or point of view.
>
> I am aware of the values and difficulties of case study research, and I know Nonaka's work a little. And precisely because of this background knowledge, I was even more struck to find that, despite the theoretical discussion and careful analysis (which of course set his paper apart from other, more casual commentaries), this particular paper of his left the same kind of impression mentioned above.
>
> He has probably done more work since then to substantiate his theory of innovation management, and I am not at all questioning his theory (which I actually like, but that is not the point). I am curious: in this particular paper he used the case studies to support his theories without, in my judgment, the kind of robust argument that you said is needed for an ex post facto analysis. For a positivist account of good theory building from case study research, I have located this paper:
> http://intranet.catie.ac.cr/intranet/posgrado/Met%20Cual%20Inv%20accion/Semana%203/Eisenhardt,%20K.%20Building%20Theories%20from%20Case%20Study%20Research.pdf
>
> However, my focus is not on evaluating Nonaka's paper, but rather on my impression stated above. I would be happy to hear that my impression is not correct, that there is no problem at all in research on successful cases, and that Nonaka's paper was written this way because it was at the beginning of a theory-building process, or whatever. I am completely open ... but I would appreciate some pointers.
>
> Many thanks.
> rosan
>
>
> -----Original Message-----
> From: Ken Friedman [mailto:[log in to unmask]]
> Sent: Thursday, 2 February 2012 13:34
> Subject: Re: Is claim/research on 'success' one-sided? RE: Apple Success under Jobs
>
> Well, I'm just sitting here laughing. This is the big question: is there a genuine problem when it comes to ex post facto analysis of success? If yes, how is it overcome in research?
>
> As history, all analysis of cases is ex post facto. No one can properly isolate all the key variables in historical analysis, and no one can re-run historical cases to see whether alternate choices would actually have made a difference.
>
> On a limited basis, one can attempt to simulate the effects of minor differences in situations where one can analyze and conceptually isolate those differences, but there is no way to be sure.
>
> The way through this is robust, reasoned analysis. Nevertheless, historical analysis, including the historical analysis of business cases, always involves judgment calls.
>