Usability Evaluation Essay


Usability Evaluation

Concept of Usability Evaluation

Heuristic Method

Issues in Usability Evaluation

Heuristic Evaluation Dimensions

The Evaluator

User Interfaces

Usability Problem Formats

Heuristic Evaluation Process

Inspection Phase

Identifying Usability Problems

Usability Problem Preparation Phase

Aggregation Phase

Procedure of Evaluation

Participants

The Static Web Interface

Observing and Quickly Visiting the Interface

Elaborating (Problems) and Revisiting (Interface and Materials)

Navigating the Interface

Annotating the Interface

Usability Evaluation

As part of the Web development process, Web developers are confronted with evaluating the usability of Web interfaces (i.e., Web sites and applications). Typically, a combination of manual methods and automatic tools is used for effective Web site evaluation; for example, manual inspection is needed to supplement the results of automatic validation tools (Rowan 2000). However, Web projects are highly constrained by their fast-paced life cycles, which leave little room for full evaluations. Other major factors contributing to this situation are the low budgets assigned to testing and the limited availability of usability experts.

Web developers therefore need effective and inexpensive approaches to Web usability evaluation. Available automatic Web usability evaluation tools such as LIFT online and LIFT onsite (UsableNet 2002) and WebXACT (WatchFire 2007) have proven useful in finding syntactic problems. These include inconsistencies, broken links, pages that lack a link back to the home page, and images that lack an alternative description (the ALT attribute in HTML), among others (Brajnik 2000). Problems of a semantic or pragmatic nature, however, are beyond the reach of automatic evaluation tools and must be handled by other means (Farenc et al. 1996). Farenc and collaborators explored this limitation: in analyzing 230 rules for ERGOVAL, their automatic usability evaluation tool for Windows systems, they found that at most 78% of the rules could be automated "whatever the implemented methods are." The remaining 22% require humans to provide information and resolve semantic and pragmatic conflicts (Farenc et al. 1996).
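To make the boundary between syntactic and semantic problems concrete, the following minimal sketch shows the kind of syntactic check such tools automate. It is an illustration in Python using only the standard library, not any vendor's actual implementation: it flags images that lack an ALT attribute and collects the links that a tool would subsequently test for breakage.

from html.parser import HTMLParser

class SyntacticChecker(HTMLParser):
    """Flags missing ALT attributes and collects links for verification."""
    def __init__(self):
        super().__init__()
        self.problems = []  # findings reportable without human judgment
        self.links = []     # hrefs to verify later (e.g. via HTTP requests)

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            # A purely syntactic check: the ALT attribute is absent.
            self.problems.append(
                "image without ALT text at line %d" % self.getpos()[0])
        elif tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])

checker = SyntacticChecker()
checker.feed('<a href="/home">Home</a> <img src="logo.png">')
print(checker.problems)  # ['image without ALT text at line 1']
print(checker.links)     # ['/home']

Whether the ALT text, once present, actually describes the image is a semantic question, which is precisely the class of problem such a checker cannot decide.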

Usability problems that are not handled by automatic evaluation tools can be addressed with semi-automatic and manual approaches. In semi-automatic approaches, the identification of usability problems starts with the analysis of source files and is completed with human intervention to provide information, make decisions, or confirm problems, as sketched below. Three manual methods are typically used to find usability problems in user interfaces (Preece 2002): a) usability testing, in which testers observe users performing tasks and report usability problems based on their observations; b) questionnaires and interviews, in which users are asked about their experience using a system, missing features, and overall satisfaction, among other matters; and c) inspection methods, in which experts examine user interfaces and report usability problems based on their judgment and expertise. The present paper reports a usability evaluation conducted by the author.
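The semi-automatic pattern reduces to a small human-in-the-loop routine, sketched here. The function and its candidate format are hypothetical illustrations, not taken from ERGOVAL or any cited tool: an automatic pass over the source files proposes candidate problems, and a person confirms or dismisses each one.

def confirm_candidates(candidates, ask=input):
    """Ask a human reviewer to confirm candidate usability problems.

    candidates: (description, evidence) pairs produced by an
    automatic analysis of the source files (hypothetical format).
    """
    confirmed = []
    for description, evidence in candidates:
        reply = ask(f"{description}\n  evidence: {evidence}\n  confirm? [y/n] ")
        if reply.strip().lower().startswith("y"):
            confirmed.append(description)
    return confirmed

Passing the prompt function as a parameter (ask=input) keeps the sketch testable: a script can substitute a stub for interactive input.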

Methodology

The first step was to characterize the inspection process in Heuristic Evaluation, in order to understand it better and devise ways to support it. A laboratory user study was conducted to understand how evaluators apply Heuristic Evaluation to Web interfaces. The output of this step was a rough characterization of the process and a set of tool requirements.

Tool requirements were identified from the literature, the study's findings, and experience. Evaluators in the study were found to spend their time observing, annotating, and navigating the interface, as well as elaborating usability problems. Tools for inspection are proposed on the basis of these activities.

Literature Review

Concept of Usability Evaluation

The concept of usability was defined in the field of human-computer interaction (HCI), which studies the relationship between humans and computers. The International Organization for Standardization (ISO) proposed two definitions of usability, in ISO 9241 and ISO 9126. ISO 9241 defines usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO 9241-11, 1998). In ISO 9126, usability compliance is one of five usability sub-characteristics, alongside understandability, learnability, operability, and attractiveness (ISO/IEC 9126, 2001). Usability depends on the interaction between user and task in a defined environment (Abran, Khelifi, Suryn, & Seffah, 2003; Bennett, 1984). Accordingly, ISO 9126 defines usability as "the capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions" (ISO/IEC 9126, 2001). While this definition focuses on ease of use, ISO 9241 uses the term "quality in use" to describe usability more broadly (Abran et al., 2003; Bevan, 2001) (Figure 1-1). "Quality in use" is defined as "the capability of the software product to enable specified users to achieve specified goals with effectiveness, productivity, safety, and satisfaction in specified contexts of use" (ISO/IEC 9126, 2001).


The term "quality in use" subsequently gained preference because of weaknesses in ISO 9126, such as an unclear architecture at the detailed level of the measures, overlapping concepts, the lack of a quality requirement standard, the lack of guidance in assessing measurement results, and an ambiguous choice of measures (Abran et al., 2003).

The usability of a technology is determined not only by its user-computer interactions, but also by the degree to which it can be successfully integrated to perform tasks in the intended work environment. Thus, usability is evaluated through the interaction of user, system, and task in a specified setting (Bennett, 1984). The socio-technical perspective likewise indicates that the technical features of health IT interact with the social features of a healthcare work environment (Ash et al., 2007; Reddy, Pratt, Dourish, & Shabot, 2003). The meaning of usability should therefore comprise four major components: user, tool, task, and environment (Figure 1-2) (Bennett, 1984).

Heuristic Method

This section discusses Heuristic Evaluation in detail. It supplements other major surveys of Heuristic Evaluation (Cox 1998; Dykstra 1993; Woolrych, 2001); its focus, however, is on the Heuristic Evaluation process and tool support. The reader will find software requirements highlighted throughout the section.

Heuristic Evaluation is an inspection method proposed by Nielsen and Molich (1990). It follows the "discount" philosophy, in which simplified versions of traditional methods are employed (e.g., discount usability testing that does not require elaborate laboratory setups). It consists of having a small number of evaluators independently examine a user interface in search of usability problems. The evaluators then collaborate to aggregate all usability problems. During interface inspection, evaluators use a set of usability principles, known as "heuristics," as a guide to focus on common problem areas in user interfaces. An example of such a heuristic is "Help users recognize, diagnose, and recover from errors" (Nielsen 2005b). Interface features that violate the heuristics are reported as usability problems.

Only a couple of tools have been developed to assist evaluators in Heuristic Evaluation. Problem aggregation has been supported (Cox 1998); the intent was not to automate the aggregation process but to support evaluators in its manual steps. These include identifying unique problems, discarding duplicates, and merging descriptions using affinity diagrams (Snyder 2003). There has also been some effort toward semi-automating problem identification in Heuristic Evaluation, though it is a formal, application-dependent approach: Loer and Harrison (2000) developed a system that queries a model checker to search for potential usability problems in user interfaces.

To summarize the method: a small set of evaluators individually examine a user interface and judge its compliance with recognized usability principles called "heuristics." The resulting lists of potential usability problems are aggregated into a single usability report. The report is presented to members of the development team, who agree on fixes and priorities for the usability problems. Figure 1 depicts the overall Heuristic Evaluation process.

Figure 1 - Heuristic Evaluation Overview
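As a rough sketch of what each evaluator's individual output might look like, the record below captures the elements the process depends on: the violated heuristic, the location of the offending feature, a free-form description, and an optional severity assigned later during aggregation. The field names are illustrative assumptions, not a format prescribed by Nielsen and Molich.

from dataclasses import dataclass
from typing import Optional

@dataclass
class UsabilityProblem:
    heuristic: str                  # the usability principle violated
    location: str                   # where in the interface it occurs
    description: str                # what the evaluator observed
    severity: Optional[int] = None  # 0-4, typically rated during aggregation

problem = UsabilityProblem(
    heuristic="Help users recognize, diagnose, and recover from errors",
    location="checkout form",
    description="Validation errors refer to internal field codes.",
)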

Nielsen makes several recommendations for conducting a Heuristic Evaluation (Nielsen 2005a, 1994a). A typical Heuristic Evaluation session lasts two hours. The evaluation can start with two passes over the user interface: a first pass to get a general idea of the design and overall interaction, and a second pass in which evaluators focus on particular parts. The heuristics are meant to help identify usability problems; with them in mind, evaluators carefully examine the interface and report any interface features that violate them.

The output of a Heuristic Evaluation is a list of potential usability problems. The lists generated by all evaluators are aggregated: evaluators meet to identify duplicates, combine problem descriptions, suggest solutions, and possibly rate the severity of each problem so that fixes can be prioritized. Nielsen recommends the 0-4 severity rating scale shown in Table 1 (Nielsen 1995b).

Table 1 - Nielsen's Severity Rating Scale (borrowed from Nielsen 1995b)

"0 = I don't agree that this is a usability problem at all"

"1 = Cosmetic problem only: need not be fixed unless extra time is available on project"

"2 = Minor usability problem: fixing this should be given low priority"

"3 = Major usability problem: important to fix, so should be given high priority"

"4 = Usability catastrophe: imperative to fix this before product can be released"

Several Heuristic Evaluation dimensions can be identified from the description above: the heuristics, the evaluator, the user interfaces, and the usability problem formats.
