Interaction Design Evaluation Methods

Mahela Dissanayake
Aug 2, 2021 · 8 min read

Welcome, readers. Today I’m going to talk about the methods used to evaluate an interaction design. Typically, usability inspection is aimed at finding usability problems in the design, though some methods also address issues like the severity of the usability problems and the overall usability of an entire system.

Many inspection methods lend themselves to the inspection of user interface specifications that have not necessarily been implemented yet, meaning that inspection can be performed early in the usability engineering lifecycle.

Heuristic Evaluation

Heuristic evaluation is a usability inspection method used to identify usability problems in a user interface design by checking it against a set of heuristic principles. In 1990, usability pioneers Jakob Nielsen and Rolf Molich published the article “Improving a Human-Computer Dialogue”, which contained a set of principles, or heuristics, that industry specialists soon began to adopt to assess interfaces in human-computer interaction. It is now a commonly used, fast and practical way to find problems in UIs and support design decisions. A heuristic evaluation can be seen as a quick, lower-cost way to measure and improve your product’s usability before conducting usability tests.

It is usually conducted by a group of experts, because one person is unlikely to find all usability problems on their own. Different evaluators tend to analyze an interface from different angles and, as a result, are more likely to identify a wider set of areas for improvement.

The Nielsen-Molich heuristics are the most widely used; they state that a system should:

Keep users informed about its status appropriately and promptly.

Show information in ways users understand from how the real world operates, and in the users’ language.

Offer users control and let them undo errors easily.

Be consistent so users aren’t confused over what different words, icons, etc. mean.

Prevent errors — a system should either avoid conditions where errors arise or warn users before they take risky actions (e.g., “Are you sure you want to do this?” messages).

Have visible information, instructions, etc. to let users recognize options, actions, etc. instead of forcing them to rely on memory.

Be flexible so experienced users find faster ways to attain goals.

Have no clutter, containing only relevant information for current tasks.

Provide plain-language help regarding errors and solutions.

List concise steps in lean, searchable documentation for overcoming problems.
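
In practice, each problem an evaluator finds is usually recorded against the heuristic it violates, together with a severity rating (Nielsen suggests a 0-4 scale), and the individual evaluators’ lists are then merged. Below is a minimal sketch, in Python, of how such findings could be recorded and aggregated; the issue records and field names are invented for illustration, not part of any standard tool.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    """One usability problem reported by one evaluator (illustrative structure)."""
    evaluator: str
    heuristic: str    # which heuristic the problem violates
    description: str
    severity: int     # 0 (not a problem) ... 4 (usability catastrophe)

# Invented example findings from two evaluators.
findings = [
    Finding("evaluator-1", "Visibility of system status", "No progress shown during upload", 3),
    Finding("evaluator-2", "Error prevention", "Delete has no confirmation step", 4),
    Finding("evaluator-2", "Visibility of system status", "No progress shown during upload", 2),
]

# Merge the individual lists: group by heuristic, keep the worst reported severity.
worst_by_heuristic = Counter()
for finding in findings:
    worst_by_heuristic[finding.heuristic] = max(
        worst_by_heuristic[finding.heuristic], finding.severity
    )

for heuristic, severity in worst_by_heuristic.most_common():
    print(f"severity {severity}: {heuristic}")
```

In a real evaluation, the merged and prioritized list is then discussed with the design team to decide what to fix first.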

It should also be noted that you should not approach heuristic evaluation with the implicit notion that there is a perfect user interface to be achieved, and that by hiring experts the product under evaluation will achieve it.

Also, heuristic evaluation is an inspection method that does not involve the end user of the system, whose participation is an essential component of usability testing.

Walkthroughs

A walkthrough is a method used to review documents with peers, managers, and fellow team members, guided by the author of the document, to gather feedback and reach a consensus. A walkthrough can be pre-planned or organised as the need arises. Generally, people working on the same work product are involved in the walkthrough process.

Cognitive Walkthroughs

Cognitive walkthroughs are used to examine the usability of a product. They are designed to see whether or not a new user can easily carry out tasks within a given system. Unlike heuristic evaluation, which is a more holistic usability inspection, the cognitive walkthrough is a task-specific approach to usability. The idea is that, given a choice, most users prefer to learn a product by doing things with it rather than by reading a manual or following a set of instructions. The cognitive walkthrough was originally designed as a tool to evaluate walk-up-and-use systems like postal kiosks and automated teller machines (ATMs), but it has also been employed successfully with more complex systems like CAD software and software development tools to understand the first experience of new users.

The biggest benefit of a cognitive walkthrough is that it is extremely cost-effective and fast to carry out compared to many other forms of usability testing. It can also be applied during the design phase, prior to development, giving rapid insight before budget is spent on building an unusable product.

The cognitive walkthrough method does not, however, account for the social dynamics of the evaluation team. It can only succeed if the usability specialist takes care to prepare the team for all possibilities during the walkthrough, establishing ground rules up front and avoiding the pitfalls that come with an ill-prepared team.

Pluralistic Usability Walkthrough

A pluralistic walkthrough is a usability evaluation method used to assess a design early: a group of users is given a series of paper-based tasks that represent the proposed product interface, with the developers of that interface also participating. The method is prized for its ability to be used at the earliest design stages, enabling usability issues to be resolved quickly and early in the design process. It also allows a greater number of usability problems to be found at one time, thanks to the interaction of multiple types of participants: users, developers and usability professionals. This type of usability inspection has the additional objective of increasing developers’ sensitivity to users’ concerns about the product design.

Web Analytics

Web analytics is the collection, reporting, and analysis of website data. The focus is on identifying measures based on your organizational and user goals, using website data to determine the success or failure of those goals, and using that insight to drive strategy and improve the user’s experience. Critical to relevant and effective web analysis is deriving objectives and calls to action from your organizational and site visitors’ goals, and identifying key performance indicators (KPIs) to measure the success or failure of those objectives and calls to action.

Web analytics processes are made up of four essential stages, or steps:

Collection of data: This stage gathers the basic, elementary data. Usually, these data are counts of things, such as page views or visits. The objective of this stage is simply to gather the data.

Processing of data into information: This stage usually takes counts and turns them into ratios, although some counts may remain. The objective of this stage is to turn the data into information, specifically metrics.

Developing KPIs: This stage focuses on taking the ratios (and counts) and infusing them with business strategies; the resulting measures are referred to as key performance indicators (KPIs). KPIs often deal with conversion, but not always; it depends on the organization. (A small sketch of this counts-to-KPI progression appears after this list.)

Formulating online strategy: This stage is concerned with the online goals, objectives, and standards for the organization or business. These strategies are usually related to making money, saving money, or increasing market share.
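
To make the progression from raw counts, to ratios, to KPIs concrete, here is a tiny Python sketch; the numbers and the 2% conversion target are made up purely for illustration.

```python
# Stage 1 - collection: raw counts gathered from the site (made-up numbers).
counts = {"visits": 12_500, "signups": 310, "purchases": 214}

# Stage 2 - processing: turn counts into ratios (metrics).
signup_rate = counts["signups"] / counts["visits"]
conversion_rate = counts["purchases"] / counts["visits"]

# Stage 3 - KPI: tie a metric to a business target.
CONVERSION_TARGET = 0.02  # hypothetical target set by the business
kpi_met = conversion_rate >= CONVERSION_TARGET

print(f"Sign-up rate:    {signup_rate:.2%}")
print(f"Conversion rate: {conversion_rate:.2%} "
      f"(target {CONVERSION_TARGET:.0%}) -> {'met' if kpi_met else 'not met'}")

# Stage 4 - strategy: the result feeds decisions such as where to focus
# optimization effort, which happens outside this snippet.
```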

Generally, web analytics has been used to refer to on-site visitor measurement, although this meaning has become less clear-cut in recent years. Many different vendors provide on-site web analytics software and services. There are two main technical ways of collecting the data. The first and traditional method, server log file analysis, reads the log files in which the web server records file requests made by browsers. The second method, page tagging, uses JavaScript embedded in the webpage to make image requests to a third-party, analytics-dedicated server whenever a webpage is rendered by a browser or, if desired, when a mouse click occurs. Both collect data that can be processed to produce web traffic reports.
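
As an illustration of the first approach, the sketch below reads a server access log in the common “combined” format and tallies successful page views per URL. It is a simplified example, not a production log analyzer, and the file name access.log is a placeholder.

```python
import re
from collections import Counter

# Pattern for the common combined access-log format; real log formats vary,
# so treat this as an illustrative assumption rather than a specification.
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+'
)

def count_page_views(log_path: str) -> Counter:
    """Count successful (2xx) GET requests per path in a server log file."""
    views = Counter()
    with open(log_path) as log_file:
        for line in log_file:
            match = LOG_LINE.match(line)
            if not match:
                continue  # skip lines that don't fit the expected format
            if match["method"] == "GET" and match["status"].startswith("2"):
                views[match["path"]] += 1
    return views

if __name__ == "__main__":
    # "access.log" is a placeholder file name for the example.
    for path, hits in count_page_views("access.log").most_common(10):
        print(f"{hits:6d}  {path}")
```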

A/B testing

A/B testing, also known as split testing, refers to a randomized experimentation process wherein two or more versions of a variable (a web page, a page element, etc.) are shown to different segments of website visitors at the same time, to determine which version has the greater impact and drives business metrics.

Essentially, A/B testing takes the guesswork out of website optimization and enables experience optimizers to make data-backed decisions. In A/B testing, A refers to the ‘control’, or the original testing variable, whereas B refers to the ‘variation’, or a new version of the original testing variable. The version that moves your business metric(s) in the positive direction is known as the ‘winner’. Implementing the changes of this winning variation on your tested page(s) or element(s) can help optimize your website and increase business ROI.
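
As a sketch of how a ‘winner’ might be judged, the snippet below compares the conversion rates of control (A) and variation (B) with a simple two-proportion z-test. The visitor and conversion numbers are invented, and commercial A/B testing tools normally perform this kind of significance calculation for you.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (uplift, two-sided p-value) comparing B's conversion rate to A's."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_b - rate_a, p_value

# Hypothetical results: A = control, B = variation.
uplift, p = two_proportion_z_test(200, 10_000, 250, 10_000)
print(f"Uplift: {uplift:.2%}, p-value: {p:.3f}")
# B is declared the 'winner' only if the uplift is positive and p is small
# (commonly below 0.05), i.e. the difference is unlikely to be due to chance.
```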

A/B testing is one component of the overarching process of Conversion Rate Optimization (CRO), through which you can gather both qualitative and quantitative user insights. You can further use this collected data to understand user behaviour, engagement rate, pain points, and even satisfaction with website features, including new features, revamped page sections, etc.

Predictive Models

Predictive modelling, also called predictive analytics, is a statistical technique that uses machine learning and data mining to predict and forecast likely future outcomes with the aid of historical and existing data.

It works by analyzing current and historical data and projecting what it learns onto a model generated to forecast likely outcomes. Predictive modelling can be used to predict just about anything, from TV ratings and a customer’s next purchase to credit risks and corporate earnings. Depending on where the definitional boundaries are drawn, predictive modelling is synonymous with, or largely overlaps with, machine learning, as the field is more commonly called in academic and research-and-development contexts.
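
The basic pattern, fit a model on historical data and then score new, unseen cases, can be shown in a few lines. The sketch below uses scikit-learn on synthetic data; the features and outcome (say, whether a customer purchases) are invented for the example.

```python
# Minimal predictive-modelling sketch: train on "historical" data, score new cases.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic historical data: two features and a binary outcome
# (e.g. whether a customer went on to purchase), invented for illustration.
X = rng.normal(size=(1_000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# "Forecast" for a new, unseen case.
print("Predicted probability of outcome:", model.predict_proba([[0.3, -1.2]])[0, 1])
```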

In the context of evaluation, predictive modelling allows the testing approach to be customer-based: because you can learn about customers’ behaviour and opinions from the models, the testing process becomes customer-centric, and enterprises can meet digital transformation objectives more effectively.

When test efficiency based on product management inputs is compared with test efficiency based on real-time user inputs, the latter tends to come out ahead; predictive analytics helps the QA team ensure that the customer gets what they need.

References

Ali, R. (2020, September 23). Predictive Modeling: Types, Benefits, and Algorithms. Retrieved from Oracle NetSuite: https://www.netsuite.com/portal/resource/articles/financial-management/predictive-modeling.shtml

Jansen, B. J. (2009). Understanding user-web interactions via web analytics. Synthesis Lectures on Information Concepts, Retrieval, and Services.

Kaushik, A. (2010, April 19). Web Analytics 101: Definitions: Goals, Metrics, KPIs, Dimensions, Targets. Retrieved from Kaushik.net: https://www.kaushik.net/avinash/web-analytics-101-definitions-goals-metrics-kpis-dimensions-targets/

Nielsen, J., & Mack, R. L. (Eds.). (1994). Usability Inspection Methods. New York: John Wiley & Sons.

Muniz, F. (n.d.). An Introduction To Heuristic Evaluation. Retrieved from UsabilityGeek: https://usabilitygeek.com/heuristic-evaluation-introduction/

Spencer, R. (2000). The streamlined cognitive walkthrough method, working around social constraints encountered in a software development company. CHI ’00: Proceedings of the SIGCHI conference on Human Factors in Computing Systems (pp. 353–359). New York: Association for Computing Machinery.

VWO. (2020). A/B Testing Guide. Retrieved from https://vwo.com/ab-testing/

Thank You for Reading.


Mahela Dissanayake

Software Engineering undergraduate at the University of Kelaniya, Sri Lanka.