British Psychological Society's
Mathematics, Statistics and Computing Section

Scientific Meeting  27 November 1999
Birkbeck College 
Room  MB427 (same place as last year)

Scientific Meeting

10:00		Committee Meeting (with coffee)
10:30		Diana Kornbrot (University of Hertfordshire)
		"Complexities in 'simple' reaction times: The role of voluntary decision criteria"
11:00		Alick Elithorn & Mary Norrish (Children's Hope Foundation)
		"Letters, words and sounds"
11:30		Coffee
11:45		Mark Lansdale (University of Loughborough)
		"Cumulative Encoding Models of Memory"
12:30		Lunch (in local area)
2:00		A. A. J. Marley (McGill University) & R. Duncan Luce (University of California, Irvine)
		"A simple axiomatization of binary rank-dependent expected utility theory of gains (losses)"
2:30 		Raphael Gillett (University of Leicester)
		"Object relocation tasks: An exact test of significance with credit for partial knowledge"
3:00		Coffee
3:15		Fumi Itagaki (Asia University)
		"A basic analysis method for random number generation task"
3:45		Chris Tofallis (University of Hertfordshire)
		"How to build single equation models with multiple responses and 
		explanatory variables"
4:15		Annual General Meeting - including wine and snacks

For more information please email Dan Wright on [log in to unmask]



Letters, words and sounds
(Alick Elithorn & Mary Norrish, Children's Hope Foundation)

	The experiments we report repeat in vision in man an experiment carried out in touch with rhesus monkeys by the late Professor George Ettlinger.  The data collected by home computers from subjects of all ages are rough.  Much of this 'roughness', however, reflects many subjects' tendency not to settle on a single exponential distribution of effort but rather to oscillate between two competing processes, both of which are probably exponential in character.
	We argue that, from our findings, moderately strong inferences can be drawn from the heads and tails of the distributions, and that this procedure highlights the difficulty "the uncertain nervous system" has in deciding at any one time whether to read for sound or to read directly for meaning.  The statistical techniques used to date are described, and suggestions are invited for improving the statistical and inference procedures.
	In accordance with best practice, we would be happy to make our data available to anyone interested in furthering the technique for separating overlapping distributions.  Perhaps more to the point, we can provide a floppy disc to anyone interested in confirming our findings in their own home or laboratory on an IBM-compatible computer.
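The separation of overlapping distributions that the abstract invites suggestions on can be sketched, for instance, as an EM fit of a two-component exponential mixture. This is a hypothetical illustration of one standard approach, not the authors' own procedure; the synthetic data and starting values are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic data: a mixture of two exponential processes (means 1.0 and 5.0)
x = np.concatenate([rng.exponential(1.0, 300), rng.exponential(5.0, 300)])

# EM for a two-component exponential mixture
pi = 0.5                                  # mixing weight of component 1
m1, m2 = np.percentile(x, 25), np.percentile(x, 75)   # initial means
for _ in range(200):
    # E-step: responsibility of component 1 for each observation
    d1 = pi * np.exp(-x / m1) / m1
    d2 = (1 - pi) * np.exp(-x / m2) / m2
    g = d1 / (d1 + d2)
    # M-step: update the mixing weight and the two component means
    pi = g.mean()
    m1 = (g * x).sum() / g.sum()
    m2 = ((1 - g) * x).sum() / (1 - g).sum()
```

With reasonably separated components, the fitted means land close to the generating values; the per-observation responsibilities `g` then give a soft assignment of each response to one process or the other.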

Object relocation tasks: An exact test of significance with credit for partial knowledge
(Raphael Gillett, University of Leicester)

Abstract to follow

Complexities in "simple" reaction times: The role of voluntary decision criteria 
(Diana Kornbrot, University of Hertfordshire)

Simple reaction times are really rather complex.  Since the response is known in advance, people can 'anticipate' the signal and achieve very fast responses.  There are two common solutions to this problem - random foreperiods and 'catch' trials.  These two paradigms might result in rather different response criteria, and possibly even different psychological processes.  Using techniques based on linear moments it is possible to identify the RT distribution with relatively small numbers of responses (50-80 per person per condition).  These new techniques are applied to vocal and key-press simple reaction time distributions in random and fixed foreperiod conditions in order to investigate the nature of the psychological processes involved.
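The linear moments (L-moments) mentioned here can be estimated from small samples via probability-weighted moments. The following is a minimal sketch of the standard sample estimator, offered as an illustration rather than the specific procedure used in the talk:

```python
import numpy as np

def l_moments(x):
    """Sample L-moments l1, l2, l3 via probability-weighted moments b0, b1, b2."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)                     # ranks of the sorted sample
    b0 = x.mean()
    b1 = ((i - 1) / (n - 1) * x).mean()
    b2 = ((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x).mean()
    l1 = b0                                     # location (mean)
    l2 = 2 * b1 - b0                            # scale
    l3 = 6 * b2 - 6 * b1 + b0                   # (unscaled) skewness
    return l1, l2, l3

l1, l2, l3 = l_moments([1, 2, 3, 4, 5])
```

Because these are linear combinations of order statistics, they are far less sensitive to the long right tail of RT data than conventional moments, which is what makes identification feasible with only 50-80 responses per condition.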

A basic analysis method for random number generation task
(Fumi Itagaki, Asia University)

This study provides a basic analysis method for expressing the degree of randomness of number sequences generated by a human, in a plane or a semi-spherical geometrical space.  Data from 225 sequences generated in a random number generation task (GND) and 200 sequences generated by the RND function of a computer were compared in this way.  Each matrix, which consists of successive number-pair frequencies, was treated as a vector and positioned relative to two standard symmetry and asymmetry axes.  These two axes had been constructed from an average matrix of the human data.  A square-root transformation was also applied to these matrices.  In both analyses, the plot of the RND data always showed a circular pattern, whereas the GND data clustered in specific areas outside the RND circle.  This appears to be an effective method of analyzing the factors of the central executive of working memory by constructing appropriate axes.
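The successive-pair frequency matrix at the heart of this method can be sketched as follows. This is an illustrative reconstruction under our own assumptions (digits 0-9, a pseudo-random stand-in for the generated sequence); the original analysis goes on to position these vectors against symmetry and asymmetry axes built from the human data:

```python
import numpy as np

rng = np.random.default_rng(2)
seq = rng.integers(0, 10, size=500)   # a digit sequence (stand-in for task output)

# 10x10 matrix of successive-pair frequencies: M[i, j] counts i followed by j
M = np.zeros((10, 10))
for a, b in zip(seq[:-1], seq[1:]):
    M[a, b] += 1

# square-root transformation, then flatten the matrix into a 100-element vector
v = np.sqrt(M).ravel()
```

Each sequence thus becomes a single point in a 100-dimensional space, and the square-root transform damps the influence of the most frequent pairs before projection onto the chosen axes.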

Cumulative Encoding Models of Memory
(Mark Lansdale, University of Loughborough)

Abstract to follow

A simple axiomatization of binary rank-dependent expected utility theory of gains (losses)
(A. A. J. Marley, McGill University, and R. Duncan Luce, University of California, Irvine)

For binary gambles composed only of gains (losses) relative to a status quo, the rank-dependent expected-utility model with a representation that is dense in intervals is shown to be equivalent to ten elementary properties plus event commutativity and a gamble-partition assumption. The proof reduces to a (difficult) functional equation that has been solved by Aczel, Maksa, and Pales (submitted).
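For orientation, the binary rank-dependent representation at issue is usually stated as follows; this sketch gives only the standard form of the representation, not the paper's specific axioms (the ten elementary properties, event commutativity, and the gamble-partition assumption are what establish equivalence):

```latex
% Binary gamble (x, E; y): receive x if event E occurs, otherwise y,
% with gains ordered x \succsim y \succsim e relative to the status quo e.
U(x, E; y) = U(x)\,W(E) + U(y)\,\bigl[1 - W(E)\bigr]
```

Here U is the utility function over pure consequences and W is a weighting function over events; the rank dependence is that the better consequence x receives the weight W(E) directly.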

Single equation models with multiple response and explanatory variables 
(Chris Tofallis, University of Hertfordshire)

	A dependent quantity may have to be measured by a combination of variables and then related to multiple explanatory variables. We show how 'best fit' weights can be found for all these variables whilst also ensuring that the model satisfies any conditions which arise from theory or common sense (e.g. weights may have to be positive and lie within a certain range).
	The method is based on the principle of maximising the correlation between the left and right sides of the equation that is being fitted to the data. By using constraints in the optimisation/estimation procedure one can at the same time ensure that the model satisfies the necessary conditions. All of this can be done straightforwardly on a spreadsheet.
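The constrained correlation-maximisation the abstract describes can be sketched with a general-purpose optimiser. This is a hypothetical illustration of the principle, not the authors' spreadsheet implementation; the synthetic data, variable names, and bounds are ours:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))                       # explanatory variables
signal = X @ np.array([0.7, 0.3])                 # common underlying signal
# two response variables, both noisy versions of the same signal
Y = np.column_stack([signal + 0.1 * rng.normal(size=n),
                     2 * signal + 0.1 * rng.normal(size=n)])

def neg_corr(w):
    """Negative correlation between the weighted left and right sides."""
    left, right = Y @ w[:2], X @ w[2:]
    return -np.corrcoef(left, right)[0, 1]

# theory-driven constraints: all weights positive and at most 1
res = minimize(neg_corr, x0=np.full(4, 0.5), bounds=[(1e-6, 1.0)] * 4)
best_corr, weights = -res.fun, res.x
```

Since correlation is scale-invariant, the weights are determined only up to separate rescalings of the two sides; the bounds both encode the substantive constraints and pin the solution down.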