Heuristic Evaluation

Heuristic evaluation originates in usability research as a technique for early, formative evaluation of digital systems. It offers a rapid, low-fidelity evaluation which often uncovers design flaws at an early stage. A small group of experts (usually 5-7) is asked to “walk through” the evaluated system as if they were users (learners) engaged in a typical activity. The experts are presented with a set of design heuristics, or “rules of thumb”, against which they are asked to assess their experience. Often they are provided with a score sheet, on which they note any violation of these heuristics and rate its severity.
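The score sheet described above can be sketched in code. The following is a minimal, illustrative sketch only: the 0-4 severity scale follows Nielsen's convention, but the data structures, names, and aggregation are assumptions for illustration, not part of any published protocol.

```python
from collections import defaultdict
from dataclasses import dataclass

# One row of an evaluator's score sheet: a heuristic violation and its
# severity on Nielsen's 0 (not a problem) to 4 (catastrophe) scale.
# All names here are illustrative, not from a published protocol.
@dataclass
class Violation:
    evaluator: str
    heuristic: str
    severity: int
    note: str = ""

def aggregate(violations):
    """Group violations by heuristic, reporting how many evaluators
    flagged each one and the mean severity rating."""
    by_heuristic = defaultdict(list)
    for v in violations:
        by_heuristic[v.heuristic].append(v)
    return {
        heuristic: {
            "evaluators": len({v.evaluator for v in vs}),
            "mean_severity": sum(v.severity for v in vs) / len(vs),
        }
        for heuristic, vs in by_heuristic.items()
    }

# Example sheet from two hypothetical evaluators:
sheet = [
    Violation("E1", "Visibility of system status", 3),
    Violation("E2", "Visibility of system status", 4),
    Violation("E1", "Error prevention", 2),
]
print(aggregate(sheet))
```

A summary like this (how many evaluators flagged a problem, and how severe they judged it) is typically what the evaluation team discusses when deciding which flaws to fix first.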

Heuristic evaluation is a methodology for investigating the usability of software, originally developed by Jakob Nielsen (1993, 2000), a widely acknowledged usability expert. According to Nielsen (1994), heuristic evaluation “involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the ‘heuristics’).” Nielsen’s original protocol for heuristic evaluation can be found on the Web at http://www.useit.com/papers/heuristic/ (Reeves et al., 2002).

Heuristic evaluation has been recognised as a powerful technique for evaluating learning design. However, in order to adapt it to this use, the designer/researcher needs to define a protocol and a set of heuristics.

References: 

Heuristic evaluation page on the Fluid project wiki

Ssemugabi, S. & de Villiers, R. (2010), 'Effectiveness of heuristic evaluation in usability evaluation of e-learning applications in higher education', South African Computer Journal 45(0). http://sacj.cs.uct.ac.za/index.php/sacj/article/view/37

Hagen, P.; Robertson, T.; Kan, M. & Sadler, K. (2005), 'Emerging research methods for understanding mobile technology use', in 'Proceedings of the 17th Australia Conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future', Computer-Human Interaction Special Interest Group (CHISIG) of Australia, pp. 1-10. research.it.uts.edu.au/idhup/wordpress/wp-content/uploads/2009/10/Hagen_OzCHI2005.pdf.pdf

Kjeldskov, J.; Graham, C.; Pedell, S.; Vetere, F.; Howard, S.; Balbo, S. & Davies, J. (2005), 'Evaluating the usability of a mobile guide: The influence of location, participants and resources', Behaviour and Information Technology 24(1), 51-66. http://disweb.dis.unimelb.edu.au/staff/showard/papers/BIT2005.pdf

Reeves, T. C.; Benson, L.; Elliott, D.; Grant, M.; Holschuh, D.; Kim, B.; Kim, H.; Lauber, E. & Loh, C. S. (2002), 'Usability and Instructional Design Heuristics for E-Learning Evaluation', in P. Barker & S. Rebelsky, eds, 'Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2002', AACE, Chesapeake, VA, pp. 1615-1621. http://www.csloh.com/research/pdf/EdMedia2002.pdf http://treeves.coe.uga.edu//edit8350/HEIPEP.html

Albion, P. (1999), Heuristic evaluation of educational multimedia: from theory to practice, in 'Proceedings ASCILITE 1999: 16th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education: Responding to Diversity' , pp. 9-15. http://eprints.usq.edu.au/6987/

Nielsen, J. (1994), 'Heuristic evaluation', John Wiley & Sons, Inc., New York, NY, USA, pp. 25-62.

Nielsen, J. (1994), 'How to conduct a heuristic evaluation'. http://www.useit.com/papers/heuristic/heuristic_evaluation.html

Nielsen, J. & Molich, R. (1990), 'Heuristic evaluation of user interfaces', in 'CHI '90: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems', ACM Press, New York, NY, pp. 249-256.

Contact:

Yishay Mor, Yishay.Mor@open.ac.uk 

Template for e-learning heuristic evaluation protocol (use this template)

Heuristic Evaluation of e-Learning
