6.3 Action uncertainty and reconciliation of the representational and ecological views

The model presented in Study II and the approach proposed for modeling more complex attention allocation mechanisms discussed in the previous section are explicitly based on an internal representation of the environment. This puts the proposals on the representation-based side of the theoretical debate of online versus representation-based control discussed in section 2.1.

Zhao and Warren (2015), in their review of the online versus representation-based control debate, conclude that the two approaches should be empirically tested with fully specified models handling missing sensory input. Study II performs exactly this test for a fully specified representation-based model in the car following task, and shows that the model drives and samples information comparably to humans when based on a probabilistic internal model and an action uncertainty control mechanism. This poses a concrete empirical challenge for the online-control side of the debate: to specify how the observed behavior can be explained, preferably more parsimoniously, using the presumably multiple task-specific heuristics and exception handling mechanisms of strong online control.
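The core loop of such a mechanism can be made concrete with a deliberately minimal sketch. This is an illustration of the general idea only, not the Study II model: the scalar Kalman-style variance recursion, the noise parameters, and the threshold value are all illustrative assumptions.

```python
# Toy sketch of action uncertainty control (NOT the Study II model):
# a 1-D probabilistic internal model of the gap to a lead vehicle. Between
# glances the belief is propagated without input and its uncertainty grows;
# when the standard deviation crosses a threshold, the driver "samples"
# (glances), and a scalar Kalman update collapses the uncertainty.

def simulate(steps=50, dt=0.1, process_noise=1.0, obs_noise=0.1,
             threshold=0.5, initial_var=0.01):
    """Variance-only Kalman recursion; returns the times at which a
    glance (observation) is triggered by the uncertainty threshold."""
    var = initial_var
    glance_times = []
    for i in range(steps):
        # Predict: uncertainty about the gap grows without visual input.
        var += process_noise * dt
        # Action uncertainty control: sample only when too uncertain to act.
        if var ** 0.5 > threshold:
            glance_times.append(i * dt)
            # Update: a glance shrinks the variance (scalar Kalman update).
            gain = var / (var + obs_noise)
            var = (1.0 - gain) * var
    return glance_times

print(simulate())  # intermittent, roughly periodic sampling times
```

Even this caricature reproduces the qualitative signature of the mechanism: sampling is intermittent and self-paced, with its rate set jointly by how fast uncertainty accumulates and how much uncertainty the action tolerates.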

However, outside the strict dichotomy, the action uncertainty approach has some overlap with the ecological view usually associated with online control. Specifically, action uncertainty control agrees with the core tenet of the ecological view that ”perception and the coordination of movement are not distinct achievements and that their study ought not constitute separate endeavors” (Michaels & Beek, 1995). But instead of trying to directly connect perception and the coordination of movement, an action uncertainty formulation can model them separately, yet rigorously explain how perception serves to facilitate action by reducing action uncertainty, and how actions are used to orient the sensors so that perception can efficiently fulfill this purpose.

As currently discussed in this thesis, the action policy is assumed to be given, and it is assumed to handle only the coordination of movement for the sake of some external goal, e.g. not crashing into the leading vehicle or staying on the road while steering; the overt attention allocation is modeled as a separate mechanism, such as controlling a blinder in Study II or moving the eyes in the previous section’s discussion. However, in reality such a distinction is unlikely to exist, and keeping the action uncertainty at bay, whether by orienting the senses or by altering how the task is conducted, is integrated into a ”unified action policy”. A simple example of such a mechanism can be seen in the results of Study II: in addition to sampling the leading vehicle visually, presumably when action uncertainty reaches a threshold, drivers also conduct the task differently by leaving a longer time headway in response to the occluder, which in itself lowers action uncertainty. In more complicated tasks humans can perform quite elaborate maneuvers just to observe the scene better, from leaning to the side to peek around a corner to taking a kinematically suboptimal driving line to get a better view of the curve ahead (Crundall, Crundall, & Stedmon, 2012).
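The interplay between headway and sampling described above can be sketched with a simple back-of-the-envelope model. The linear uncertainty growth and the assumption that the tolerable uncertainty budget scales with time headway are illustrative simplifications of my own, not results from the studies.

```python
# Toy derivation (assumed linear-growth model, not from Study II):
# between glances, gap uncertainty grows as sigma(t) = sigma0 + k * t,
# and the driver tolerates sigma up to c * THW, i.e. an uncertainty budget
# that scales with time headway. Solving sigma0 + k * t_max = c * THW
# gives the longest tolerable blind interval between glances.

def max_blind_interval(thw, sigma0=0.1, k=0.5, c=0.4):
    """Longest tolerable interval between glances for time headway `thw`
    (all parameter values are arbitrary, for illustration only)."""
    return max(0.0, (c * thw - sigma0) / k)

for thw in (1.0, 2.0, 3.0):
    print(thw, max_blind_interval(thw))
```

Under these assumptions the tolerable blind interval grows linearly with headway, which is one way to see how increasing time headway and reducing visual sampling can trade off within a single uncertainty-regulating policy.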

So, while separating observation and action coordination is a simplifying approach that yields reasonable approximations of human control of action, especially in simpler tasks, more realistic and generalizable modeling will need to specify control mechanisms in which perception and action are handled in a fundamentally unified manner.

7 Conclusions

Studies I and II show that drivers can successfully follow another vehicle with a highly intermittent view of the leading vehicle. Time headway and intermittency of visual input were found to be strongly and robustly connected during car following. This is in line with previously hypothesized car following models that incorporate the Task-Difficulty Homeostasis mechanism from traffic psychology.

The experimental data was used to estimate a quantitative parameterization for the increase in time headway in response to visual distraction. This provides an experimentally validated concrete link between the traditionally qualitatively stated Task-Difficulty Homeostasis and mathematically specified traffic engineering models.

Study II explains and computationally models the connections between time headway and visual distraction as emerging from drivers using an action uncertainty mechanism, i.e. drivers attend to the scene in order to be confident in what action to take. The model exhibits driving performance and attention sharing comparable to human drivers. The model and the proposed action uncertainty mechanism are based on the assumption that humans use an internal cognitive representation of the environment for locomotor control and attention allocation. That such a formulation exhibits human-like behavior shows that representation-based approaches to explaining human control of action and allocation of attention provide a viable alternative to explanations based on representation-free online control.

Study III presents an eye-movement signal analysis method designed especially for challenging eye tracking conditions involving relatively high levels of measurement noise and complex eye movement patterns, such as mobile recordings of tasks with complicated gaze target motion. In benchmarks with multiple datasets the method attains state-of-the-art performance in denoising and oculomotor event identification, and it is applied in Study IV to analyze eye movements during steering. The method uses segmented regression to simultaneously denoise and segment the signal, and then classifies the segments. This differs from the traditional approach of prefiltering followed by per-sample classification and segmentation, and can be advantageous for classification performance, especially with high levels of noise.
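The segmented-regression idea can be illustrated with a minimal one-changepoint toy. This is not the actual Study III algorithm (Pekkanen & Lappi, 2017), which handles an arbitrary number of segments and subsequent event classification; here the segmentation criterion, the exhaustive search, and the test signal are all simplifying assumptions.

```python
# Minimal sketch of segmented linear regression (one changepoint only):
# fit two independent least-squares lines and pick the split point that
# minimizes total squared error. The piecewise-linear fit denoises and
# segments the signal in a single step.

def fit_line(xs, ys):
    """Ordinary least squares; returns (slope, intercept, squared error)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    intercept = my - slope * mx
    err = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    return slope, intercept, err

def best_split(xs, ys):
    """Exhaustive search for the changepoint minimizing total error."""
    best = None
    for s in range(2, len(xs) - 1):  # each segment needs >= 2 points
        _, _, e1 = fit_line(xs[:s], ys[:s])
        _, _, e2 = fit_line(xs[s:], ys[s:])
        if best is None or e1 + e2 < best[1]:
            best = (s, e1 + e2)
    return best[0]

# Toy fixation-then-saccade-like signal: flat segment, then a steep ramp.
xs = list(range(10))
ys = [0.0] * 5 + [1.0 + 3.0 * (x - 5) for x in range(5, 10)]
print(best_split(xs, ys))  # prints 5, the true segment boundary
```

A full method would extend this with a search over multiple changepoints (for example by dynamic programming) and classify each fitted segment by its slope and duration; the sketch only shows why fitting and segmenting jointly can resist noise that would confuse per-sample classification.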

Study IV shows that humans can steer successfully when the path is visually presented only as discrete waypoints, and that they look at such points where they are expected even when the points are not displayed. This is difficult to explain using online gaze control strategies and poses a challenge for steering models that tightly couple gaze patterns and control strategies. In contrast, such behavior is consistent with gaze control based on an internal representation of the environment, although no fully specified model of such a control strategy currently exists.

The action uncertainty formulation presented in this thesis is one possible approach to rigorously model how humans use gaze patterns to sample information in representation-based control.

The results of Studies I and II are quite directly applicable to traffic microsimulation, and traffic level effects arising from driver distraction should be studied based on the estimated relationships between time headway and attention allocation. The model of Study II shows promise for a deeper understanding of the perceptual and attentional demands of different traffic situations, which could be used, for example, in designing intelligent driving assistance systems and in increasing understanding of how problems in perception and attention allocation lead to accidents.

Although the task under study is driving, the focus of the dissertation is on understanding the basic mechanisms of attention allocation and active perceptual sampling in natural locomotor tasks in general. The theoretical proposals, especially the action uncertainty approach, are based on general principles and offer new perspectives on how perception and attention are connected to action and how this can be formulated in rigorous computational models.

References

Andersson, R., Larsson, L., Holmqvist, K., Stridh, M., & Nyström, M. (2016). One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms. Behavior Research Methods, 49(2), 616–637. doi:10.3758/s13428-016-0738-9

Authié, C. N., & Mestre, D. R. (2011). Optokinetic nystagmus is elicited by curvilinear optic flow during high speed curve driving. Vision Research, 51(16), 1791–1800. doi:10.1016/j.visres.2011.06.010

Brackstone, M., & McDonald, M. (1999). Car-following: A historical review. Transportation Research Part F: Traffic Psychology and Behaviour, 2(4), 181–196.

Chattington, M., Wilson, M., Ashford, D., & Marple-Horvat, D. E. (2007). Eye–steering coordination in natural driving. Experimental Brain Research, 180(1), 1–14. doi:10.1007/s00221-006-0839-2

Clark, A. (2016). Surfing uncertainty. doi:10.1093/acprof:oso/9780190217013.001.0001

Crundall, E., Crundall, D., & Stedmon, A. W. (2012). Negotiating left-hand and right-hand bends: A motorcycle simulator study to investigate experiential and behaviour differences across rider groups. PLoS ONE, 7(1), e29978. doi:10.1371/journal.pone.0029978

de Winter, J. C., Happee, R., Martens, M. H., & Stanton, N. A. (2014). Effects of adaptive cruise control and highly automated driving on workload and situation awareness: A review of the empirical evidence. Transportation Research Part F: Traffic Psychology and Behaviour, 27, 196–217. doi:10.1016/j.trf.2014.06.016

Feldman, H., & Friston, K. J. (2010). Attention, uncertainty, and free-energy. Frontiers in Human Neuroscience, 4. doi:10.3389/fnhum.2010.00215

Fuller, R. (2005). Towards a general theory of driver behaviour. Accident Analysis & Prevention, 37(3), 461–472.

Gibson, J. J. (1958). Visually controlled locomotion and visual orientation in animals. British Journal of Psychology, 49(3), 182–194. doi:10.1111/j.2044-8295.1958.tb00656.x

Godthelp, H., Milgram, P., & Blaauw, G. J. (1984). The development of a time-related measure to describe driving strategy. Human Factors, 26(3), 257–268.

Hoogendoorn, R., van Arem, B., & Hoogendoorn, S. (2013). Incorporating driver distraction in car-following models: Applying the TCI to the IDM. In Intelligent Transportation Systems (ITSC), 2013 16th International IEEE Conference on (pp. 2274–2279). IEEE.

Itkonen, T., Pekkanen, J., & Lappi, O. (2015). Driver gaze behavior is different in normal curve driving and when looking at the tangent point. PLOS ONE, 10(8), e0135505. doi:10.1371/journal.pone.0135505

Johnson, L., Sullivan, B., Hayhoe, M., & Ballard, D. (2014). Predicting human visuomotor behaviour in a driving task. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 369(1636), 20130044.

Kandil, F. I., Rotter, A., & Lappe, M. (2009). Driving is smoother and more stable when using the tangent point. Journal of Vision, 9(1), 11. doi:10.1167/9.1.11

Killick, R., Fearnhead, P., & Eckley, I. A. (2012). Optimal detection of changepoints with a linear computational cost. Journal of the American Statistical Association, 107(500), 1590–1598. doi:10.1080/01621459.2012.737745

Kinnear, N. A. (2009). Driving as you feel: A psychological investigation of the novice driver problem. Doctoral thesis, Edinburgh Napier University.

Klauer, S. G., Dingus, T. A., Neale, V. L., Sudweeks, J. D., Ramsey, D. J., et al. (2006). The impact of driver inattention on near-crash/crash risk: An analysis using the 100-car naturalistic driving study data (tech. rep. No. DOT HS 810 594). United States National Highway Traffic Safety Administration.

Knill, D. C., & Pouget, A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 712–719. doi:10.1016/j.tins.2004.10.007

Komogortsev, O. V., & Karpov, A. (2013). Automated classification and scoring of smooth pursuit eye movements in the presence of fixations and saccades. Behavior Research Methods, 1–13.

Kujala, T., Mäkelä, J., Kotilainen, I., & Tokkonen, T. (2015). The attentional demand of automobile driving revisited: Occlusion distance as a function of task-relevant event density in realistic driving scenarios. Human Factors: The Journal of the Human Factors and Ergonomics Society, 58(1), 163–180. doi:10.1177/0018720815595901

Land, M., & Lee, D. (1994). Where we look when we steer. Nature, 369(6483), 742–744. doi:10.1038/369742a0

Land, M., Mennie, N., & Rusted, J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28(11), 1311–1328. doi:10.1068/p2935

Lappi, O. (2014). Future path and tangent point models in the visual control of locomotion in curve driving. Journal of Vision, 14(12), 21–21. doi:10.1167/14.12.21

Lappi, O., & Lehtonen, E. (2013). Eye-movements in real curve driving: Pursuit-like optokinesis in vehicle frame of reference, stability in an allocentric reference coordinate system. Journal of Eye Movement Research, 6(1).

Lappi, O., Lehtonen, E., Pekkanen, J., & Itkonen, T. (2013). Beyond the tangent point: Gaze targets in naturalistic driving. Journal of Vision, 13(13), 11–11. doi:10.1167/13.13.11

Lappi, O., & Mole, C. D. (2018). Visuomotor control, eye movements, and steering: A unified approach for incorporating feedback, feedforward, and internal models. Psychological Bulletin.

Lappi, O., Pekkanen, J., & Itkonen, T. H. (2013). Pursuit eye-movements in curve driving differentiate between future path and tangent point models. PLoS ONE, 8(7), e68326. doi:10.1371/journal.pone.0068326

Lappi, O., Rinkkala, P., & Pekkanen, J. (2017). Systematic observation of an expert driver’s gaze strategy—an on-road case study. Frontiers in Psychology, 8, 620.

Larsson, L., Nyström, M., & Stridh, M. (2013). Detection of saccades and postsac-cadic oscillations in the presence of smooth pursuit. IEEE Transactions on Biomedical Engineering, 60(9), 2484–2493.

Lee, D. N. (1976). A theory of visual control of braking based on information about time-to-collision. Perception, 5(4), 437–459. doi:10.1068/p050437

Lehtonen, E., Lappi, O., Koirikivi, I., & Summala, H. (2014). Effect of driving experience on anticipatory look-ahead fixations in real curve driving. Accident Analysis & Prevention, 70, 195–208.

Lehtonen, E., Lappi, O., Kotkanen, H., & Summala, H. (2013). Look-ahead fixations in curve driving. Ergonomics, 56(1), 34–44.

Macadam, C. C. (2003). Understanding and modeling the human driver. Vehicle System Dynamics, 40(1-3), 101–134.

Mack, D. J., Belfanti, S., & Schwarz, U. (2017). The effect of sampling rate and low-pass filters on saccades – a modeling approach. Behavior Research Methods. doi:10.3758/s13428-016-0848-4

Mars, F. (2008). Driving around bends with manipulated eye-steering coordination. Journal of Vision, 8(11), 10–10. doi:10.1167/8.11.10

Mars, F., & Navarro, J. (2012). Where we look when we drive with or without active steering wheel control. PLoS ONE, 7(8), e43858. doi:10.1371/journal.pone.0043858

Mennie, N., Hayhoe, M., & Sullivan, B. (2007). Look-ahead fixations: Anticipatory eye movements in natural tasks. Experimental Brain Research, 179(3), 427–442.

Miall, R., & Wolpert, D. (1996). Forward models for physiological motor control. Neural Networks, 9(8), 1265–1279. doi:10.1016/s0893-6080(96)00035-4

Michaels, C., & Beek, P. (1995). The state of ecological psychology. Ecological Psychology, 7(4), 259–278.

Näätänen, R., & Summala, H. (1974). A model for the role of motivational factors in drivers’ decision-making. Accident Analysis & Prevention, 6(3-4), 243–261.

Nash, C. J., Cole, D. J., & Bigler, R. S. (2016). A review of human sensory dynamics for application to models of driver steering and speed control. Biological Cybernetics, 110(2-3), 91–116.

Navarro, J., François, M., & Mars, F. (2016). Obstacle avoidance under automated steering: Impact on driving and gaze behaviours. Transportation Research Part F: Traffic Psychology and Behaviour, 43, 315–324. doi:10.1016/j.trf.2016.09.007

Paden, B., Cap, M., Yong, S. Z., Yershov, D., & Frazzoli, E. (2016). A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Transactions on Intelligent Vehicles, 1(1), 33–55. doi:10.1109/tiv.2016.2578706

Pekkanen, J., & Lappi, O. (2017). A new and general approach to signal denoising and eye movement classification based on segmented linear regression. Scientific Reports, 7(1). doi:10.1038/s41598-017-17983-x

Pekkanen, J., Lappi, O., Itkonen, T. H., & Summala, H. (2017). Task-difficulty homeostasis in car following models: Experimental validation using self-paced visual occlusion. PLOS ONE, 12(1), e0169704. doi:10.1371/journal.pone.0169704

Pekkanen, J., Lappi, O., Rinkkala, P., Tuhkanen, S., Frantsi, R., & Summala, H. (2018). A computational model for driver’s cognitive state, visual perception and intermittent attention in a distracted car following task. Royal Society Open Science, 5(9), 180194. doi:10.1098/rsos.180194

Regan, D., & Gray, R. (2000). Visually guided collision avoidance and collision achievement. Trends in Cognitive Sciences, 4(3), 99–107. doi:10.1016/s1364-6613(99)01442-4

Rudin, L. I., Osher, S., & Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4), 259–268.

Saifuzzaman, M., & Zheng, Z. (2014). Incorporating human-factors in car-following models: A review of recent developments and research needs. Transportation Research Part C: Emerging Technologies, 48, 379–403.

Saifuzzaman, M., Zheng, Z., Haque, M. M., & Washington, S. (2015). Revisiting the task–capability interface model for incorporating human factors into car-following models. Transportation Research Part B: Methodological, 82, 1–19.

Särkkä, S. (2009). Bayesian filtering and smoothing. doi:10.1017/cbo9781139344203

Senders, J. W., Kristofferson, A., Levison, W., Dietrich, C., & Ward, J. (1967). The attentional demand of automobile driving. Highway Research Record, (195), 15–33.

Summala, H. (2007). Towards understanding motivational and emotional factors in driver behaviour: Comfort through satisficing. In Modelling driver behaviour in automotive environments (pp. 189–207). doi:10.1007/978-1-84628-618-6_11

Tatler, B. W., & Land, M. F. (2011). Vision and the representation of the surroundings in spatial memory. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1564), 596–610. doi:10.1098/rstb.2010.0188

Treiber, M., Hennecke, A., & Helbing, D. (2000). Congested traffic states in empirical observations and microscopic simulations. Physical Review E, 62(2), 1805.

Tuhkanen, S., Pekkanen, J., Rinkkala, P., Mole, C., Wilkie, R. M., & Lappi, O. (under review). Humans use predictive gaze strategies to target waypoints for steering.

Van Winsum, W. (1999). The human element in car following models. Transportation Research Part F: Traffic Psychology and Behaviour, 2(4), 207–211.

Vig, E., Dorr, M., & Cox, D. (2012). Space-variant descriptor sampling for action recognition based on saliency and eye movements. Computer Vision–ECCV 2012, 84–97.

Wann, J. P., & Land, M. (2000). Steering with or without the flow: Is the retrieval of heading necessary? Trends in Cognitive Sciences, 4(8), 319–324. doi:10.1016/s1364-6613(00)01513-8

Wann, J. P., & Swapp, D. K. (2000). Why you should look where you are going. Nature Neuroscience, 3(7), 647–648. doi:10.1038/76602

Warren, W. H. (2012). Action-scaled information for the visual control of locomotion. In Closing the gap (pp. 261–296). Psychology Press.

Western Cape Government. (2017). No call or text is worth your life. #itcanwait. Retrieved from https://youtube.com/watch?v=8crvXJJNxbQ

Wiedemann, R. (1974). Simulation des Straßenverkehrsflusses. Schriftenreihe des Instituts für Verkehrswesen, (8).

Wilkie, R. M., Kountouriotis, G., Merat, N., & Wann, J. P. (2010). Using vision to control locomotion: Looking where you want to go. Experimental Brain Research, 204(4), 539–547.

Wilson, M., Stephenson, S., Chattington, M., & Marple-Horvat, D. E. (2007). Eye movements coordinated with steering benefit performance even when vision is denied. Experimental Brain Research, 176(3), 397–412. doi:10.1007/s00221-006-0623-3

Yu, S.-Z. (2010). Hidden semi-Markov models. Artificial Intelligence, 174(2), 215–243. doi:10.1016/j.artint.2009.11.011

Zago, M., McIntyre, J., Senot, P., & Lacquaniti, F. (2009). Visuo-motor coordination and internal models for object interception. Experimental Brain Research, 192(4), 571–604. doi:10.1007/s00221-008-1691-3

Zhao, H., & Warren, W. H. (2015). On-line and model-based approaches to the visual control of action. Vision Research, 110, 190–202. doi:10.1016/j.visres.2014.10.008