This organization wanted to run user testing on all three of their websites so they could continue to improve and iterate on the designs. They knew that constant feedback would allow them to continually improve their products. For this project, I helped them set testing goals and plan the research, conducted the sessions, and wrote up and presented the results.

Process

During the project, we followed the process below.

  • Testing goals: At the start of any project, it’s important to find out what its goals are. These may be written in the contract or buried in a document the team doesn’t have easy access to, so they need to be surfaced, reviewed, and confirmed. We started by reviewing and setting goals to ensure our test plan would support them.
  • Test plan & script: With the goals in hand, we reviewed the websites to be tested. On this project, we were testing three separate websites, so we needed three test plans and scripts. Because the websites were very information-heavy and our testing would not focus on interaction design, we defined success around whether participants could find the information and then read and comprehend it.
  • Recruitment: Working with a recruitment company, we developed a recruitment screener to target the right users. Again, because we were testing three websites, we needed three sets of recruitment criteria.
  • Testing: The testing happened over a one-month period. We tested each of the three websites with eight participants, for a total of 24 sessions. We traveled to four different locations and met and tested with a lot of interesting people! To run each session, we had one facilitator and one notetaker sitting with the participant. We provided a laptop and mouse and recorded the screen and audio as the participant worked through the tasks.
  • Analysis: During the analysis phase, we (the facilitator and notetaker) reviewed our notes and impressions, created a list of issues, and grouped those issues into larger themes and challenges. We suspected there were larger problems, but we wanted our report to be based on evidence, not our opinions. Combing through the test results and compiling issues allowed us to work from the ground up, from small, specific findings to the larger problems they point to.
  • Presentation: The results were reported to our client in a presentation, an executive summary, and a full report. Since we were testing three websites, that meant nine documents! It was a lot of documentation, but it was required by the organization’s standards.

Challenges

On this project, we tested three separate websites concurrently, and keeping them all straight was challenging. Because we traveled to different locations and tested all three sites at each one, there was a lot of context switching. To manage it, we went through our notes as soon as possible after each test to make sure we had captured everything we wanted to track, and we reviewed the recordings when we had missed something during a session.

Given the number of tests, we needed a way to compile our results. We had a spreadsheet full of notes which, at first, was overwhelming. By collaborating on an approach, we came up with a standard way to analyze the results that led to high-quality, consistent reporting.

How Key Pointe Helped

As the lead on this user research project, I put together all of the documentation and deliverables, conducted half of the testing sessions, and presented the findings to the client. While the scope was large, I really enjoyed both the travel and working with my co-researcher.