Dear BNIM-people and Anne-Greet.
An earlier email today (18 May 2009) outlined a proposal for research into modes of generating material for BNIM interpretation and asked for feedback. I've looked at the proposal, and my feedback is below; it may suggest other methodological focuses for PhD or other research. We did have an idea for comparing FANI and BNIM interview/interpretation, but this hasn't happened so far. Anybody else got ideas and/or an opportunity?
Best wishes
Tom

Anne-Greet writes in her very interesting proposal:

 

5.3.3 Track 2

The interpretation procedure will also change. At this moment the choice has been made first to follow the BNIM guide entirely, meaning that the panel will start interpreting the second track chunk by chunk, changing their hypotheses during the process. This will take three hours (saturation is reached for that day). What is new is that on one of the following days the same panel will go over the same track and chunks with the visual data linked to each chunk. _Though an analysis of the sound and intonation of the interviewee is not made, the audio will be turned on during the interpretation. It is assumed that watching someone speak without hearing them would be awkward and distracting. Especially since the panel is not specialised in facial expressions, the data is kept as natural as possible. During the panel's interpretation process, its members will regularly be asked specifically why and how their interpretations are made. This is to establish whether it is the voice or the image that sways the interpreter_. This third panel meeting will also take three hours. All meetings are recorded on audio to enable the researcher to make a precise analysis of the panel meetings. This is especially needed in this research because the panel meetings are double-layered: in the first place they are about the interpretation of the told story, and in the second place the panels provide my research with empirical data on facial expressions in the use of BNIM. After this second panel interpretation of the second track, the data will be processed by comparison with the data of the first interpretation of the second track… The question is answered whether adding facial expressions was an improvement for interpreting the story or not (italics added).

 

 

I am glad that this inquiry into the methodology of BNIM is taking place, and I am very much looking forward to the results.

 

I suggest that there is a feature of the design which might be supplemented to make the results more convincing.

 

At the moment, Anne-Greet proposes that it be “the same panel” that goes over the chunks, now with the audio-visual data added. I think this should produce very interesting reflections by panel members on moments where they now come to a different ‘hypothesising’ than they did when they had only the printed words (not of the TSS, I presume, but of the verbatim transcript) as material to interpret.

 

A parallel: at a recent PsychoSocialNetwork workshop run by Lynn Froggett of UCLAN, whose centre has been working with video-clips, we looked at a transcript and came to our hasty conclusions; then we saw a video-tape of the (non-interview) two-person interaction. One person said: “The transcript on its own was seriously misleading about the dynamics of what was going on.” This anecdote suggests the importance of the experiment in methodology that Anne-Greet is undertaking.

 

However, I think it would be useful to have the same audio-visual + transcript/TSS material also gone through by another group, not the same one that has already worked on the printed material alone. I think this could overcome suspicions that “the same group” might try artificially not to change, or deliberately to change, its previous interpretations.

 

The other variable would be to compare the timed productivity of working with a TSS rather than with the verbatim transcript (and with verbatim transcript + audio-visual). Why?

 

The existing default BNIM approach works

(a) not with the verbatim transcript, but with a condensation (the TSS or sequentialisation);

(b) in a 3-hour initial kick-start panel, covering a certain number of chunks corresponding to a certain number of pages of transcript;

(c) on the basis that it comes to a certain level of hypothesis-confirmation/disconfirmation, which sets up structural hypotheses for the rest of the transcript that has not been looked at by the 3-hour panel.

 

Anne-Greet’s experiment works – unless I’m imagining it wrongly –

 

(1) with the verbatim transcript (and not with a sequentialisation, TSS)

 

(2) and then (with the same group or not) with the verbatim transcript + the audio-visual record.

 

Given the density of the material of (transcript + video) and the possible enriched task of commenting particularly on ‘facial expressions’, the 3-hour audio-visual panel will – and this is my guess – get through far fewer pages of transcript (minutes of video-tape) than did/would the printed-chunks-only group.

 

Whatever number of pages of transcript the audio-visual group does cover, they will then also, I presume, set up ‘structural hypotheses for the unexamined part of the interview’.

 

We have then had the same amount of time (3 hours) devoted to quite different pages of transcript. We can then compare the ‘hypothesisings’ of the two groups and – if further work is done – rate the ‘productivity’ of spending 3 hours on the printed material versus 3 hours on printed + audio-visual.

 

Since ‘doing the audio-visual’ will, it seems to me, inevitably function like a micro-analysis, two possible extreme outcomes occur to me:

 

A) That it is more advantageous in 3 hours to get further by doing printed text only, because this generates ‘better hypothesising’ (adequate structural hypotheses about the whole interview) for the same cost;

 

B) That it is more advantageous in 3 hours to work with only very little (audio-visual + printed) material, because of the importance of the facial-visual input as “producing a very different and more correct hypothesising”, compared with covering much more of the interview but in an inadequate printed-text way.

 

There are of course more complex possibilities than just (A) and (B).

 

Writing this, I’m becoming increasingly aware that – to know the ‘efficiency’ of facial expressions on their own – it would be important to silence the audio channel for at least one ‘run’ of the experiment. Otherwise, the ‘efficiency’ claimed for facial observation is not valid, because the group has had facial observation plus voice, and never facial observation on its own.

 

I think therefore that, to convince the not-convinced, there is a good case for separating out the default BNIM approach

 

            TSS

 

from four variants:

 

            Verbatim printed text

 

            Verbatim printed text + audio

 

            Verbatim printed text + video

 

            Verbatim printed text  + audio-video

 

I personally would also distinguish between “the same group” running through what would now be five modalities, and “five new groups”, each unaware of anything from the other groups.
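Purely as a formal sketch (the labels are mine, not Anne-Greet's or the proposal's), the full design just described can be enumerated as a 5 × 2 grid: five material modalities crossed with two panel arrangements, each cell being one 3-hour panel session whose outputs (pages covered, structural hypotheses) could then be compared:

```python
from itertools import product

# Hypothetical labels for the default BNIM material plus the four variants.
MODALITIES = [
    "TSS",
    "verbatim transcript",
    "verbatim transcript + audio",
    "verbatim transcript + video",
    "verbatim transcript + audio-video",
]

# The second design variable: one panel re-used across all modalities,
# versus a fresh panel per modality, unaware of the other groups' work.
PANEL_DESIGNS = ["same panel throughout", "new panel per modality"]

# Each (modality, panel design) pair is one experimental cell.
conditions = list(product(MODALITIES, PANEL_DESIGNS))

for modality, design in conditions:
    print(f"{modality:35s} | {design}")

print(len(conditions))  # 10 cells in the full design
```

As the closing paragraphs note, not every cell need be run in practice; the grid only makes explicit which combinations a time-and-resource-constrained study would be choosing among.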

 

These ideas are about formal possibilities, not necessarily about practicalities. But they might provide criteria for selecting the best combination of practical steps… given time and resources.

 

I think it’s brilliant that there’s an opportunity for doing serious research on aspects of BNIM as method. So many thanks to Anne-Greet. If other people have relevant comments and experience, do contribute…

 

Tom