StrengthLog/Styrkelabbet

You can find the final report here.

Course: 

Methods in Interaction Design 1

 

Duration

Part-time, 11 weeks

Professional areas: 

Usability testing, Usability evaluation, Prototyping

 

Method/Tools: 

Scrum through JIRA, the UTUM method, System Usability Scale (SUS), Figma, heuristic evaluation

The assignment: 

Project management was the focus: to plan, execute, and present a project of our own choice. Our group consisted of four students, and we decided to conduct a usability test, evaluate the application, and, depending on the outcome of the data, also redesign it with a prototype in Figma. 

Since we all share a common interest in health and fitness, we agreed to study and analyze the free version of the training application StrengthLog by Styrkelabbet.

 

Aim of this project: 

As physical activity is important for the majority of individuals, this project aims to investigate whether the application appears to motivate and engage the user to be more physically active.

StrengthLog/Styrkelabbet: 

Styrkelabbet operates within the health and fitness industry with a focus on strength training. They use various platforms to reach fitness enthusiasts, including a podcast and a website (Styrkelabbet, 2023). Additionally, Styrkelabbet has a fitness application, StrengthLog, which comes in two versions: a free version with limited content accessible to all users, and a premium version that grants access to all features of the application for a fee. In the free version, users can access pre-designed digital training programs and create their own workout routines. Users can also track their training activity and maintain workout statistics.

What we did: 

  • Conducted usability tests using the UTUM method (here is the survey)
  • Carried out a heuristic and criteria-based expert evaluation 
  • Conducted a SUS (System Usability Scale) survey
  • Created a low-fidelity prototype using Figma
  • Documented the work in a report

What I did: 

  • Conducted 3 of the 5 usability tests
  • Analyzed the data from the usability tests
  • Administered and analyzed the SUS data
  • Co-wrote the report 

How we did it (Usability test): 

Within the defined target group, five usability tests were conducted using the UTUM methodology, culminating in a SUS (System Usability Scale) evaluation. 

The scenario was structured into eight use cases, each of which was formulated as a task for the test subjects to perform during the test. Scenarios and tasks were designed to reveal how the test subject interacted with the application and to uncover potential usability issues and opportunities for improvement. To minimize learning effects that occur as test subjects become familiar with the app after a few tasks, the tasks were presented in a randomized order. 
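The per-participant randomization described above can be sketched as follows (a minimal illustration; the task labels and the seed are assumptions for the example, not taken from the study materials):

```python
import random

# The eight tasks derived from the eight use cases (labels are placeholders)
TASKS = [f"Task {i}" for i in range(1, 9)]

def randomized_task_order(seed=None):
    """Return the eight tasks in a fresh random order for one test subject,
    reducing learning effects caused by a fixed task sequence."""
    rng = random.Random(seed)
    return rng.sample(TASKS, k=len(TASKS))

# Each participant gets an independent ordering:
order = randomized_task_order(seed=42)
print(order)
```

Seeding the generator per participant makes each ordering reproducible, which helps when the test protocol needs to be documented afterwards.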

Test leaders explained the task to the test subject, initiated a timer upon task start, and then the test subject began the assignment. In cases where the test subject encountered significant difficulties with a task and had trouble proceeding, the test leader provided guidance on how to advance to complete the task and move on to the next part of the test. 


Issue that occurred: After the scenario portion of the test was completed, test subjects were asked to fill out a UTUM-inspired scale and answer some concluding survey questions to capture their own perception of their interaction and user experience with the application. The use of the UTUM scale was a project-team oversight; the plan had been to use a SUS scale to obtain quantitative, measurable data.  

Solution: The problem was discovered after the tests were conducted and was corrected within three days of the discovery. This meant that test subjects were contacted again a few days after their participation in the study to provide answers for the SUS scale. 

 

The tasks

Analysis of the data – You can find the data file here.

– We compiled the data in an Excel sheet

– We coded the content from each participant to find similarities and differences

– We identified and categorized the areas of low usability

– We used the data to interpret whether the aim of the study could be answered
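As a small illustration of the compilation step, the task-time data could be aggregated and screened like this (the sample times and the threshold below are made up for the example; the real data is in the linked file):

```python
# Hypothetical completion times (seconds) per participant for two tasks;
# in the project the real data was compiled in Excel.
times = {
    "Task 1": [35, 40, 52, 38, 45],
    "Task 2": [120, 95, 140, 110, 130],
}

def mean_time(samples):
    """Average completion time for one task across participants."""
    return sum(samples) / len(samples)

# Flag tasks whose average completion time exceeds a chosen cut-off
# (90 seconds here is an arbitrary value for illustration only).
threshold = 90
slow_tasks = [task for task, samples in times.items()
              if mean_time(samples) > threshold]
print(slow_tasks)  # -> ['Task 2']
```

Screening like this only points at candidate problem areas; the qualitative coding of participant comments is still needed to explain why a task was slow.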

The time (seconds) required for all tasks


The areas of low usability according to the data from the usability test.

The categories that emerged in the content analysis from the usability tests

System Usability Scale

The results showed that the application received an average SUS score of 36.5 out of 100. The lowest and highest individual scores were 12.5 and 62.5 points, respectively.

As seen in the diagram above, this means that the application received an average score equivalent to an ’F’ on the A-F scale. If described in terms of adjectives, test participants rated the application somewhere between ’Terrible’ and ’Bad.’ A moderate result on the SUS scale typically falls around 68 points, while 68-80.3 is considered a good SUS score. None of the test participants rated the application as good, and based on the SUS score, StrengthLog’s training application can be considered significantly below average. The results indicated that only one out of five test participants would consider using the application in their training, and it was the same person who rated the application the highest on the SUS scale.
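For reference, the standard SUS scoring rule (Brooke, 1996) can be sketched in Python; the response pattern below is illustrative only and is not taken from our participants' data:

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten Likert responses (1-5).

    Standard scoring: odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response); the sum is
    multiplied by 2.5 to map onto a 0-100 scale.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a fairly negative response pattern
print(sus_score([2, 4, 2, 4, 2, 4, 2, 4, 2, 4]))  # -> 25.0
```

Because the scale alternates between positively and negatively worded items, the raw responses cannot simply be averaged; the per-item normalization above is what makes scores comparable across studies.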

Conclusions from the data

 

 

– Difficult to navigate

 

– The system's responses were difficult to interpret

 

– Low intuitiveness 

 

– 1/5 would possibly use the application

 

– Lack of alternatives regarding gender identity

 

– The icons were difficult to understand

 

– Could the age of the user matter?

Our suggestions

To see more of the suggested changes, please see the final report here (written in Swedish).

Areas for improvement

Regarding the project's process and workflow

I believe it's essential to analyze one's own work in order to facilitate skill development. Evaluating both the successful and the less successful aspects provides valuable insights for the next project or task!

Observers for the usability tests: The test leader in our usability tests was focused on the participants' actions and needs, and on guiding the participant without introducing bias or giving so much help that it would skew the usability test. This made it difficult for the test leader to also take notes on when and where the user got frustrated, how the participant solved the issue, and how long each problem took to solve. With observers in attendance (even remotely), more data could perhaps have been collected from the usability tests.

Same test leader for the usability tests: The content collected might have differed somewhat if all five tests had been executed by the same test leader. For example, participants received slightly different approaches regarding guidance when stuck on a task. However, given the overall interpretation of the data and the combined results of the usability tests and evaluations, our conclusion is that our suggestions for usability improvements to the application would likely have been similar, if not the same.

Ethical dilemmas: Another area for improvement concerns the ethical perspective. As we got acquainted with the application, we realized that there would be difficulties and challenges for the participants. Parts of the interface were quite challenging, even after a thorough review of the application. The chance that participants would struggle was fairly high, which might cause unnecessary distress for them. This turned out to be true in some cases, where multiple participants communicated clearly how frustrated they felt when they couldn't finish a given task. 

Scrum: This project was carried out with Scrum in JIRA. While the method suited the project, we found it challenging to deal with time differences: although the whole project was executed digitally, the project members were in different time zones, which made it difficult to hold Scrum meetings. Furthermore, to avoid misunderstandings about which tasks needed to be prioritized, the tasks in the sprints could have been clearer and more specific.