Partial concurrent thinking aloud

From Wikipedia, the free encyclopedia

Partial Concurrent Thinking Aloud (or partial concurrent think-aloud, PCTA) is a method used to gather data in usability testing with screen reader users. It is a particular kind of think aloud protocol (TAP) created by Stefano Federici and Simone Borsci [1] at the Interuniversity Center for Research on Cognitive Processing in Natural and Artificial Systems [2] of the University of Rome "La Sapienza". Partial Concurrent Thinking Aloud was developed as a usability assessment technique specific to blind users, designed to retain the advantages of concurrent and retrospective thinking aloud while overcoming their limitations. Using PCTA, blind users' verbalizations of problems can be more pertinent and more comparable to those given by sighted people who use a concurrent protocol. In usability evaluations with blind people, retrospective thinking aloud is often adopted as a practical way to avoid the structural interference, imposed by the classic concurrent technique, between thinking aloud and listening to the screen reader; this workaround, however, affects the evaluation method itself, because the concurrent and the retrospective protocols measure usability from different points of view: one mediated by the memory of the navigation experience (retrospective), the other more direct and pertinent (concurrent).[3] The use of PCTA can also be widened to summative and formative usability evaluations with mixed panels of users, extending the number of verbalized problems according to disabled users' divergent navigation processes and problem-solving strategies.

Cognitive assumptions of Partial Concurrent Thinking Aloud

In general, both retrospective and concurrent TAP can be used in usability evaluation, according to the aims and goals of the study. Nevertheless, when a usability evaluation is carried out with blind people, several studies propose using the retrospective TAP: using a screen reader while talking about the interaction with the computer implies a structural interference between action and verbalization. Cognitive studies have nonetheless provided considerable evidence that individuals can listen, verbalize or manipulate, and retrieve information in a multiple-task condition. As Colin Cherry [4] showed, subjects listening to two different messages from a single loudspeaker can separate sounds from background noise and recognize the gender of the speaker, the direction, and the pitch (the cocktail party effect). At the same time, subjects who must verbalize the content of one message (the attended message) while listening to two different messages simultaneously (attended and unattended) have a reduced ability to report the content of the attended message, and are unable to report the content of the unattended message. Moreover, K. Anders Ericsson and Walter Kintsch [5] showed that, in a multiple-task condition, subjects' ability to retrieve information is not compromised by an interruption of the action flow (as happens in the concurrent thinking aloud technique), thanks to the long-term working memory mechanism of information retrieval.
Even if users can listen, recognize, and verbalize multiple messages in a multiple-task condition, and can stop and restart actions without losing information, other cognitive studies have underlined that overlapping activities in a multiple-task condition affect goal achievement. Kemper, Herman and Lian,[6] analysing users' ability to verbalize actions in a multiple-task condition, showed that the fluency of a user's conversation is influenced by the overlap of actions. Adults are likely to continue talking as they navigate a complex physical environment, but the fluency of their conversation is likely to change: older adults tend to speak more slowly than they would at rest, while young adults continue to speak just as rapidly while walking as while resting, but adopt a further set of speech accommodations, reducing sentence length, grammatical complexity, and propositional density. By reducing length, complexity, and propositional density, adults free up working memory resources. It is not known how, and how much, the content of verbalizations is influenced by the verbalization strategy (i.e. the modification of fluency and complexity in a multiple-task condition). It is well established, however, that users in concurrent thinking aloud verbalize problems more accurately and pertinently (i.e. more focused on the problems directly perceived during the interaction) than in the retrospective protocol.[7][8] This pertinence is granted by the proximity of action, verbalization, and next action; this multiple-task proximity compels the subject to apply a verbalization strategy that reduces the overload of working memory. For blind users, however, this temporal proximity between action and verbalization is lost: using the screen reader increases the time needed for verbalization (in order to verbalize, blind users must first stop the screen reader and then restart it).

Protocol of Partial Concurrent Thinking Aloud

The PCTA method is composed of two sections, one concurrent and one retrospective:

The first section is a modified concurrent protocol, built according to the three criteria for concurrent verbal protocols described by K. Anders Ericsson and Herbert A. Simon:[9][10]

The first criterion
Subjects should talk about the task at hand, not about unrelated issues. To respect this rule, the time between problem retrieval, thinking, and verbalization must be minimized, to avoid the influence of a long perceptual reworking and the consequent verbalization of unrelated issues. Blind participants using a screen reader have an increased latency between identifying and verbalizing a problem. To minimize this latency, users are trained to ring a desk-bell, which stops both the timer and the navigation. During this suspension, users create a memory sign (the bell ring itself) and immediately restart navigation. This modification of the setting avoids the cognitive limitation problem and the influence of perceptual reworking, while also creating memory signs for the retrospective analysis.
The second criterion
To be pertinent, verbalizations should be logically consistent with the verbalizations that immediately preceded them. For any kind of user it is hard to remain pertinent and consistent in a concurrent verbal protocol; practitioners therefore typically interrupt the navigation to ask for clarification or to prompt the user to verbalize pertinently. To do this with screen reader users, a specific physical sign is negotiated with them in advance: the practitioner, sitting behind the user, puts a hand on the user's shoulder. This physical sign supports the pertinence and consistency of the verbalization.
The third criterion
A subset of the information needed during task performance should be remembered. The concurrent model is based on the link between working memory and time latency: the proximity between the occurrence of a thought and its verbal report allows users to verbalize on the basis of their working memory.

The second PCTA section is a retrospective one, in which users analyse the problems previously verbalized in the concurrent section. The memory signs created by ringing the desk-bell overcome the limits of classic retrospective analysis: they allow users to remain pertinent and consistent with their concurrent verbalizations, avoiding the influence of long-term memory and perceptual reworking.
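The two-phase flow above — timestamped memory signs gathered during concurrent navigation, then revisited in order during the retrospective section — can be sketched as a minimal session logger. This is an illustrative sketch only, not software described in the sources; the class and method names (`PctaSession`, `ring_bell`, `retrospective_items`) are hypothetical.

```python
from dataclasses import dataclass, field
import time


@dataclass
class PctaSession:
    """Hypothetical logger for a PCTA session: each desk-bell ring
    becomes a timestamped memory sign, and the retrospective phase
    walks those signs in order."""
    signs: list = field(default_factory=list)

    def ring_bell(self, note: str = "") -> None:
        # Concurrent phase: the bell ring is recorded as a memory sign;
        # navigation pauses only long enough to register the mark.
        self.signs.append({"t": time.monotonic(), "note": note})

    def retrospective_items(self):
        # Retrospective phase: revisit the signs in the order they were
        # created, keeping the review anchored to the concurrent marks.
        return [(i + 1, s["note"]) for i, s in enumerate(self.signs)]


session = PctaSession()
session.ring_bell("link label unclear")
session.ring_bell("table read linearly by screen reader")
for idx, note in session.retrospective_items():
    print(f"sign {idx}: {note}")
```

The design point mirrors the protocol: the concurrent phase stores only a lightweight mark so navigation can resume immediately, while elaboration is deferred to the retrospective pass over the recorded signs.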

References

  1. ^ Borsci, S., & Federici, S. (2009). "The Partial Concurrent Thinking Aloud: A New Usability Evaluation Technique for Blind Users". In P. L. Emiliani, L. Burzagli, A. Como, F. Gabbanini, & A. L. Salminen. Assistive technology from adapted equipment to inclusive environments 25. IOS Press. pp. 421–425. 
  2. ^ http://w3.uniroma1.it/econa/
  3. ^ Federici, S., Borsci, S., & Stamerra, G. (2010). "Web usability evaluation with screen reader users: implementation of the partial concurrent thinking aloud technique". Cognitive Processing 11 (3): 263–272. doi:10.1007/s10339-009-0347-y. PMID 19916036.
  4. ^ Cherry, E.C. (1953). "Some experiments on the recognition of speech, with one and with two ears". Journal of the Acoustical Society of America 25 (5): 975–979. doi:10.1121/1.1907229. 
  5. ^ Ericsson, K.A., Kintsch, W. (1995). "Long-Term Working Memory". Psychological Review 102 (2): 211–245. doi:10.1037/0033-295X.102.2.211. PMID 7740089. 
  6. ^ Kemper, S., Herman, R.E., & Lian, C.H.T. (2003). "The Costs of Doing Two Things at Once for Young and Older Adults: Talking While Walking, Finger Tapping, and Ignoring Speech or Noise". Psychology and Aging 18 (2): 181–192. doi:10.1037/0882-7974.18.2.181. PMID 12825768. 
  7. ^ Bowers, V.A. & Snyder, H.L. (1990). "Concurrent versus retrospective verbal protocols for comparing window usability". Proceedings of the Human Factors Society 34th Annual Meeting, 8–12 October 1990. Santa Monica: HFES. pp. 1270–1274.
  8. ^ Van den Haak, M.J. & De Jong, M.D.T. (2003). "Exploring Two Methods of Usability Testing: Concurrent versus Retrospective Think-Aloud Protocols". IEEE International Professional Communication Conference Proceedings. Piscataway, New Jersey.
  9. ^ Ericsson, K.A., Simon, H.A. (1980). "Verbal reports as data". Psychological Review 87 (3): 215–251. doi:10.1037/0033-295X.87.3.215. 
  10. ^ Ericsson, K.A., Simon, H.A. (1993). Protocol analysis: Verbal reports as data (Revised edition). Cambridge, MA: MIT Press.