©1990, 1995

Abstract

Disasters in man-made systems (ships, aircraft, power plants, etc.) often point to inadequate provision of information to the operators. To remedy this with good operator support, we need to understand more about human cognition in such complex tasks, beyond what can be firmly deduced from verbal reports. The recent tradition of modelling by formalisms (such as GOMS, TAG) is applicable to tasks that are logically definable, but lacks the empirical input for effective modelling of complex tasks. When there is a clearly appropriate representation of situations and actions, machine learning can effectively derive rules that model behaviour, but for complex control tasks we have no satisfactory idea of the human representations involved.

Human control of a specially constructed bicycle-like simulation was studied, and showed the added difficulties of attempting to model a task in which psycho-motor factors are significant. A semi-complex, non-manual simulation task (mine-hunting) was devised and implemented as a scored game, in order to study human representations and rules further. Data from several subjects were collected and analysed with the help of a rule-induction program (CN2). The first experiment showed that representation content was important to the quality of rules derived by induction, and that a simple analysis of the frequency of common consecutive actions helped towards constructing compound actions that were more human-like. In the second experiment, information was priced to motivate subjects to turn off unnecessary information and so reveal the information they actually used. This revealed a context structure that was useful and informative in the preparation and separation of data for rule-induction. Following the experimental reports, the extension and generalisation of these methods is discussed, considering prospects for their use in HCI design, and outlining a 'guardian angel' paradigm for operator or user support.
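The frequency analysis of consecutive actions mentioned above can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual procedure: the action names and the threshold are hypothetical, and the real analysis operated on logged game data rather than a toy trace.

```python
from collections import Counter

def frequent_pairs(actions, min_count=2):
    """Count consecutive pairs in an action trace; pairs occurring
    at least min_count times are candidates for treating as a single
    compound action in later rule induction."""
    pairs = Counter(zip(actions, actions[1:]))
    return {pair: n for pair, n in pairs.items() if n >= min_count}

# Hypothetical trace of logged subject actions:
trace = ["mark_contact", "ping", "mark_contact", "ping", "classify"]
candidates = frequent_pairs(trace)
# ("mark_contact", "ping") occurs twice, so it is a compound-action candidate
```

Replacing such recurring pairs with a single compound symbol shortens the traces fed to the induction program, which is one plausible route to the more human-like rules the first experiment reports.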