AI Testing Platform

Designed an AI testing platform that builds trust with Test Managers, extending human capabilities while achieving 35% efficiency gains.
The problem
This project aligned with a critical business objective: reducing operational overhead to free up Test Manager capacity for higher-value work. I began by establishing clear success criteria before diving into research and design.
Methodology
I follow the Double Diamond methodology, diverging to explore the full problem space through research, then converging on evidence-based solutions. This systematic approach ensures I'm solving the right problems while maintaining creativity at each stage of the design process.

User research
We ran weekly user research sessions to observe Test Managers navigating their current processes. This study surfaced the following insights:
Test Managers were using spreadsheets for all data management.
The testing process involved distinct stages: launching tests, moderating results, and delivering outcomes.
Each stage involved multiple manual tasks, consuming significant time.


The solution
Solution design involved user testing and multiple iterations. We mapped out the high-level logic first, then built prototypes. We explored various options, particularly around automating decision-making. The core challenge was presenting complex testing data in a simple, easy-to-understand way while incorporating AI assistance.
Lo-fi wireframes

Hi-fi designs

User feedback
Prototypes were iterated on after remote user testing sessions in which users completed a full testing cycle using the designs. We observed their understanding of the AI-assisted process and followed up to learn how they expected the new system to impact their workflow.
Results
The final design transformed the way Global App Testing Managers run the testing process. The new platform stores and tracks data previously managed in spreadsheets, and uses machine learning and AI to automate most of the Test Managers' decisions and actions.
🏆 Testing efficiency increased by 35%
🏆 User satisfaction improved by 50%
🏆 Test completion rate increased by 28%
Where do we go from here?
First, I would recommend running another round of usability testing with new participants. Have the changes made it easier for users to complete their tasks?
If usability testing is not an option, I would recommend tracking the following metrics:
What percentage of Test Managers are currently using the new software?
How has the new software impacted the testing delivery time and results quality?