
Due date: 19 September 2024

Epic: DS-34

Task: DS-71

Designer: Alison Eyo

Useberry: [insert links]

Document summary

This document outlines the efforts undertaken and results obtained in verifying the usability of core user journeys within the Moniepoint transfer feature.

Objectives

As part of our ongoing efforts to improve the user experience and align our products with strategic goals, we have identified the System Usability Scale (SUS) score as a key performance indicator (KPI) for user satisfaction. Additionally, we will be tracking two other critical usability metrics: time on task and success rate. These metrics will provide a comprehensive view of how well our product supports primary user goals across web and mobile platforms.

The purpose of this usability test is to establish a usability benchmark for the product, which will serve as a foundation for future improvements.

Success metrics

  • SUS score

  • Time on task

  • Success rate

Test plan

Core user journeys

Persona: Adebayo (35 y/o), a small business owner who frequently makes payments to various suppliers and contractors
Core journey: Initiating a transfer to a new recipient
Rationale: This journey represents 90% of transfers, making it critical for user satisfaction and app utility. It was selected based on comprehensive analysis of user behaviour data.

Persona: Emeka (40 y/o), a manager who regularly pays the same set of vendors for his company
Core journey: Sending money to a saved beneficiary
Rationale: While less common than new-recipient transfers, this journey is important for understanding how the app supports repeated transactions and user convenience.

Persona: Chinwe (28 y/o), values automated financial management
Core journey: Setting up a recurring transfer
Rationale: This journey is crucial for users who need to make regular payments. It was chosen to evaluate the app's ability to handle more complex, long-term transfer setups.

Persona: Fatima (32 y/o), an executive assistant who manages multiple accounts and transfers
Core journey: Managing/deleting a recurring transfer
Rationale: Essential for giving users control over their automated transfers. It was selected to assess the flexibility and user-friendliness of the recurring transfer feature.

Persona: Oluwaseun (45 y/o), a business accountant who keeps detailed records of all financial transactions
Core journey: Reviewing transfer history/downloading statements
Rationale: Critical for user trust and financial management. It was chosen to evaluate the app's transparency and ability to provide clear transaction records.

Persona: Amina (38 y/o), an HR professional responsible for processing company payroll
Core journey: Performing a bulk transfer to multiple recipients
Rationale: Important for business users who need to make multiple payments efficiently. It was selected to assess the app's capability to handle complex, multi-recipient transfers.

Test environment

The initial plan was to conduct moderated tests using a live testing account. However, due to geographical constraints, this approach would have limited our participants to users from only one of Nigeria's 36 states.

To overcome this limitation, we opted for unmoderated, remote testing using interactive prototypes accessible on both mobile devices and web browsers. This method allowed for broader geographical coverage and more diverse participant inclusion. Upon completing each core journey test, participants were asked to answer a series of questions and indicate their satisfaction level, providing valuable quantitative and qualitative feedback.

Tasks

Core journey | Tasks

[journey name] | [list the tasks for this journey]

[journey name] | [list the tasks for this journey]

Participants

Participant information

Number of participants: Indicate the total number of users involved (5-8 participants per journey).

Demographics: Include relevant demographic information such as age range, gender, location, experience level, job role, familiarity with the product, or other factors depending on the target audience.

Selection criteria: Describe the criteria used for selecting participants. Were they existing users, new users, or a mix of both?

Participant profiles

Delete this info panel, this is only for guidance.

If applicable (otherwise delete section), provide a few short participant profiles to offer context - see examples below.

Participant A: Chinedu

Age: 35
Occupation: SME owner (fashion retail)
Tech savviness: Moderate
Fintech usage: Uses mobile banking apps (GTBank, Access Bank) for daily transactions and a POS terminal for customer payments. Prefers platforms that offer quick, reliable transfers and clear transaction histories.
Primary devices: Android phone (Tecno Camon 16)
Key insights: Chinedu heavily relies on his POS terminal for business operations and uses fintech apps to track transactions. He is often frustrated by network downtime with POS systems and values transparency in fees. Easy-to-read transaction reports and the ability to quickly reconcile daily sales are critical for his business.

Participant B: Halima

Age: 27
Occupation: Market trader
Tech savviness: Low
Fintech usage: Uses mobile money apps like OPay and PalmPay for personal savings, transfers, and accepting payments from customers. Not particularly tech-savvy, but uses these services due to their ease of access and lower transaction fees compared to traditional banks.
Primary devices: Android phone (Infinix Hot 10)
Key insights: Halima needs simple, straightforward fintech solutions with minimal steps to complete transactions. She values reliability, especially when receiving payments from customers. She finds features like SMS confirmations and immediate access to funds critical for trust in the service.

Participant C: Tunde

Age: 40
Occupation: Corporate accountant (mid-sized firm)
Tech savviness: High
Fintech usage: Uses digital banking platforms and business fintech solutions for managing payroll, supplier payments, and invoicing. Regularly interacts with multiple banking platforms for business transactions.
Primary devices: Windows laptop, iPhone 13
Key insights: Tunde requires fintech solutions that streamline business transactions and offer integration with accounting software. He values features like automated invoicing, bulk payments, and detailed reporting, but he finds it frustrating when banking platforms lack API integrations with their business software. Efficiency and robust security features are critical for business operations.

Participant D: Amaka

Age: 22
Occupation: NYSC corps member (service year)
Tech savviness: High
Fintech usage: Primarily uses personal banking apps like Kuda and ALAT for saving, budgeting, and personal transfers. Also utilises fintech apps for peer-to-peer transfers and mobile payments.
Primary devices: Android phone (Samsung A32)
Key insights: Amaka prefers fintech apps that offer budgeting tools and savings features. She values gamification and rewards systems, such as cashbacks and referral bonuses, to increase engagement. Her main frustration comes from long transaction processing times or unexpected app downtimes. Security and user-friendly onboarding are important factors for her trust in the app.

Participant E: Babatunde

Age: 50
Occupation: POS operator
Tech savviness: Moderate
Fintech usage: Relies heavily on mobile money apps like Paga and Moniepoint to run his POS business, facilitating cash withdrawals, deposits, and transfers for customers. He also uses traditional banking apps for reconciling the daily balance.
Primary devices: Android phone (Itel A56)
Key insights: Babatunde’s biggest concern is the reliability of the fintech app or platform, especially in high-traffic areas. He prefers platforms with low transaction failure rates and quick settlement times. Instant customer support is crucial when POS systems encounter issues, and he values platforms with lower transaction fees.

Recruitment process

[insert text here]

Usability metrics and results

Success rate definition

Delete this info panel, this is only for guidance.

Success criteria:

  • A task is considered successful if the participant completes it as intended within the defined parameters, without assistance, and within a reasonable time frame.

  • Define clear success criteria for each task, such as reaching a specific page, completing a transaction, or filling out a form correctly.

Measurement:

  • Success rate will be calculated as the percentage of participants who successfully complete each task according to the predefined criteria, as illustrated in the sketch below.
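As a worked illustration of this calculation, the sketch below computes a per-task success rate from pass/fail outcomes. The function name and the outcome values are hypothetical examples, not actual test data.

```python
# Minimal sketch of the success-rate calculation described above.
# True = completed per the predefined criteria, without assistance and
# within the time frame; False = otherwise. Values are hypothetical.

def success_rate(outcomes):
    """Return the percentage of participants who completed the task successfully."""
    return 100 * sum(outcomes) / len(outcomes)

# Hypothetical outcomes for one journey, e.g. transferring to a new recipient
outcomes = [True, True, False, True, True, True]
print(f"{success_rate(outcomes):.0f}%")  # 83%
```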

SUS score

SUS is a standardised questionnaire used to measure the usability of a product. It consists of 10 questions answered on a 5-point scale, which generate a score between 0 and 100, with higher scores indicating better usability.

The average SUS score is typically 68. A score of 68 or higher is generally considered acceptable or better than average. Above 70 is good, and above 80 is excellent.

[Image: SUS score scale]
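For reference, standard SUS scoring works as follows: for odd-numbered questions subtract 1 from the answer, for even-numbered questions subtract the answer from 5, sum the ten results, and multiply by 2.5 to obtain a 0-100 score. The sketch below applies that rule; the responses and participant scores in it are hypothetical examples, not results from this test.

```python
# Minimal sketch of standard SUS scoring (example responses are hypothetical).

def sus_score(responses):
    """responses: 10 answers on a 1-5 scale, in question order (Q1..Q10)."""
    if len(responses) != 10 or any(r not in range(1, 6) for r in responses):
        raise ValueError("SUS requires 10 answers, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items: r-1, even items: 5-r
    return total * 2.5  # scales the 0-40 raw total to 0-100

# One participant's answers for a single journey (hypothetical)
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0

# Benchmark figure: average across participants (hypothetical scores)
scores = [72.5, 85.0, 65.0, 90.0, 77.5]
print(sum(scores) / len(scores))  # 78.0
```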

Time on task

Time on task is the time it takes participants to complete each journey; shorter times generally indicate better usability. This metric helps to measure efficiency (a computation sketch follows the list below).

  • [insert journey name]: users should/are expected to complete this journey within X minutes.

  • [insert journey name]: users should/are expected to complete this journey within X minutes.

  • [insert journey name]: users should/are expected to complete this journey within X minutes.

  • [insert journey name]: users should/are expected to complete this journey within X minutes.

  • [insert journey name]: users should/are expected to complete this journey within X minutes.
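Once the raw timings are collected, the benchmark figure per journey can be reported as a mean (or a median, which is less sensitive to outliers). The sketch below shows one way to summarise them; the journey timings are hypothetical placeholders, not test results.

```python
# Minimal sketch for summarising time on task per journey.
# Journey names and timings (in seconds) are hypothetical placeholders.
from statistics import mean, median

timings = {
    "Initiating a transfer to a new recipient": [95, 110, 82, 140, 101],
    "Sending money to a saved beneficiary": [45, 60, 52, 48, 70],
}

for journey, seconds in timings.items():
    m, s = divmod(round(mean(seconds)), 60)
    print(f"{journey}: mean {m}m {s:02d}s, median {median(seconds)}s")
```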

Success rate

[designer to define success for each journey]

  • [insert journey name]: this journey is considered successful when [specify].

  • [insert journey name]: this journey is considered successful when [specify].

  • [insert journey name]: this journey is considered successful when [specify].

  • [insert journey name]: this journey is considered successful when [specify].

  • [insert journey name]: this journey is considered successful when [specify].

Results

Delete this info panel, this is only for guidance.

SUS score

The average SUS score was [insert score], with [brief interpretation: e.g., this score falls within/above/below the industry benchmark].

Analysis: the SUS score reveals [positive/negative] aspects of user satisfaction, particularly related to [insert key findings].

Time on task

The average time to complete key tasks was [insert time].

Analysis: tasks such as [task 1] took longer than anticipated due to [insert reason], indicating potential usability challenges.

Success rate

The overall success rate was [insert percentage]. This indicates that [insert percentage of participants] were able to successfully complete the core tasks without assistance.

Analysis: certain tasks such as [specific task] had a lower success rate due to [insert reasons], while others were completed with high efficiency.

Core journey | SUS score | Time on task | Success rate | Comments

[journey name] | e.g. 60 | e.g. 1m 30s | % | Brief key feedback or observation for this journey

[journey name] | [SUS score] | [time on task] | [%] | Brief key feedback or observation for this journey

[journey name] | [SUS score] | [time on task] | [%] | Brief key feedback or observation for this journey

[journey name] | [SUS score] | [time on task] | [%] | Brief key feedback or observation for this journey

[journey name] | [SUS score] | [time on task] | [%] | Brief key feedback or observation for this journey

[journey name] | [SUS score] | [time on task] | [%] | Brief key feedback or observation for this journey

[journey name] | [SUS score] | [time on task] | [%] | Brief key feedback or observation for this journey

Post-test qualitative questionnaire feedback

Delete this info panel, this is only for guidance.

Post-test questionnaire:

After completing the tasks, participants will be asked additional targeted questions to gather specific feedback on their experience. These questions will be presented BEFORE the SUS questionnaire, for example:

  • Was there anything confusing or frustrating about the sign-up process?

  • How easy or difficult did you find the sign-up process?

  • Did you encounter any difficulties in reading or understanding the text or labels during the tasks?

  • Were there any features or elements that stood out as particularly helpful or problematic?

  • Did you experience any challenges with the colour scheme or visual elements?

  • Do you have any additional comments or suggestions for improving this experience?

These questions will help uncover specific areas of concern or satisfaction that may not be fully captured by the SUS score alone.

Participants provided additional insights through open-ended post-test questions:

Common pain points

List any recurring issues, frustrations, or confusion that participants mentioned.

Positive feedback

Summarise any positive comments about the product or specific features.

User suggestions

Insert suggestions for improvement or areas where users struggled but offered solutions.

Recommendations for improvement

Based on the test findings, the following recommendations are proposed to improve the product’s usability:

Design changes

  • [Task/feature]: revise [UI component, workflow, etc.] to streamline the user journey and reduce time on task.

  • [Task/feature]: clarify instructions on [specific screen/interaction] to improve success rates.

Quick wins

  • Minor adjustments to [insert feature or task] that can be implemented without major development effort.

Long-term recommendations

  • Consider a more comprehensive redesign of [specific user journey], especially focusing on [insert issue].

Conclusion

Summary

The usability test revealed that while the product performs adequately in most areas, there are specific pain points related to [specific feature]. By focusing on these areas, we can significantly improve the user experience.

Next Steps

The design team will implement the recommended changes and prepare for a follow-up usability test to measure improvements in SUS, time on task, and success rate.