Remote testing in practice – what we learnt

In the Research and Design Team at the Office for National Statistics (ONS), we are transforming social surveys using a user-centred design approach. For us, this means that we work iteratively, designing with our users in mind. In social survey transformation, our end users are respondents to the voluntary social surveys at the ONS. In practice, we design draft questions and survey materials, test them with recruited participants, and then redesign based on the feedback.

Designing and testing with participants gives us valuable insight into what works and what does not. This applies to all elements of the respondent journey, from the letters that invite respondents to take part, to the online questionnaire, to the thank-you email they receive after taking part. Pre-coronavirus (COVID-19), we were usually out and about speaking to participants in their homes, but COVID-19 prevented us from working in that way.

The testing process is central to our user-centred design approach. The challenge was figuring out how we could replicate a testing session remotely. This was also an opportunity to add another tool to our toolkit as qualitative researchers.

For our work, we are concerned with how participants understand our questions and respondent materials. We aim to replicate the most realistic situation possible so that our findings are robust and valid. This involves providing participants with high-quality printed materials and prototypes of the questions they would be asked, matching the quality they would receive as an actual respondent so as not to contaminate the feedback.

Our colleagues Vicky Cummings and Jamie Trollope have written remote testing guidance to help researchers conduct remote research. In this blog post we discuss how we put this guidance into practice and reflect on our experience of running remote research.

How is it different?

Cognitive interviews

For us, a cognitive interview involves observing a participant completing a task (usually an online survey) and then asking them a series of questions about elements of the task. There are many differences between conducting cognitive interviews face-to-face and conducting them remotely; for example, we kept the remote sessions shorter to keep participants engaged. However, there are still some similarities. These include using the same types of interview materials, such as the topic guide, which standardises the questions asked across interviews, and the observation documents for noting participant behaviours during the session. The recruitment process, incentivisation, sampling requirements and number of researchers in the session also remain the same.

Focus groups

Focus groups are group discussion sessions where 8 to 10 people are brought together (usually in a neutral location, such as a hotel conference room) to discuss a range of topics relating to our research questions. The main differences between face-to-face focus groups and remote focus groups concern sampling, organisation and facilitation. As remote testing is not limited to a physical location, we benefit from a wider geographic reach, with remote focus groups allowing recruitment from across the UK.

In order to replicate real user behaviour, we aim to provide our recruited participants with the most realistic experience possible. Whilst there is no need to book a venue, materials for any physical tasks in the focus group (such as survey invitation letters) may need to be designed and printed in advance to allow for postage across the UK. The principles of facilitation remain the same, as does resourcing.

Ethical considerations

In the remote testing guidance, our colleagues suggested using the UK Statistics Authority’s Ethics self-assessment tool. We found the tool easy to use, and it helped us to consider all six ethical principles and the risks associated with conducting remote testing and the research project itself. For example, we considered whether the wider context of the COVID-19 pandemic would increase the sensitivity of questions asking about casual and temporary working.

Another tip in the guidance was to consider adapting the way you obtain consent from recruited participants to take part in the research sessions. Alongside the confirmation email, a leaflet was sent to participants ahead of time explaining the research and how we treat their information in line with the General Data Protection Regulation (GDPR). In the session, we asked the participant to confirm that they had read the leaflet. If a participant did not recall receiving it, we shared a PDF version with them and talked through the key points before commencing the session.

Lastly, Vicky and Jamie discussed the physical space in which the interviews would take place. For us, in practice, this meant finding a private space in our homes where we were not likely to be interrupted (which involved coordinating schedules with housemates or family members). We also asked that participants try to do the same, letting them know in advance that the sessions would be audio recorded.

Software considerations

In their guide, Vicky and Jamie encouraged researchers to think about research aims, as well as participant and organisational needs.

As mentioned in the introduction, we aim to provide our recruited participants with high-quality prototypes to replicate the most realistic situation possible. Any software we chose would need to support these aims.

With this in mind, we needed the software to:

  • have audio and video capability – to allow us to interact with participants
  • have screen-sharing functionality
  • have an easy-to-use interface
  • be accessible on participants’ own devices (not requiring a download)
  • be approved for use by our organisation

We decided to use Google Meet, as it met our requirements and had already been rolled out for use at the ONS. We were able to watch participants go through the questionnaire on their own device in their own time, encouraging them to think aloud whilst screen sharing with us. The video capability allowed us to see participants’ facial expressions whilst they read our printed materials and to observe how they interacted with the various materials. This is important, as we did not want to lose the rich data that can be gained from participants’ facial expressions and behaviours.

We found it was also important to build in a bit of time beforehand to ease into the technology, and we made this part of building rapport with our participants, with the aim of minimising any anxiety they may have had around using it.

As for the usability of the software, participants just needed to follow the URL provided to join the meeting, with their camera and microphone turned on. However, at the recruitment stage we requested that they join using a laptop or desktop computer, as this meant they would not need to download software (which they would have been asked to do on a mobile device). This is something to be aware of if one of your research aims is to look at mobile devices specifically.

Reflections on conducting remote research

We ran mock sessions before conducting the live sessions with participants, which helped us identify potential pitfalls and prepare for them. While technology allowed us to conduct these methods remotely, we have all experienced it failing us in one way or another.

Therefore, we had back-ups in place. These included:

  • having the observer turn off their camera after introductions
  • using the chat box
  • preparing screenshots which we could share if the materials did not arrive in time
  • acting as a “human mouse” (sharing our screen and navigating on the participant’s instruction) if a participant could not access the questionnaire

Whilst these back-up plans helped us get through the sessions, they do have their limitations. For example, we lose out on some natural participant behaviours, such as scrolling or hovering, during their interaction with the questionnaire.

In face-to-face research, non-verbal communication and cues are vital; these are more difficult to pick up remotely. We adapted our communication by:

  • having both the lead interviewer and the participant use video – this allows rapport building and some non-verbal cues to be seen
  • using small affirmations, saying “yes”, “okay”, “hmm-mm” to let the participant know we were listening to them

We found that participant engagement varied in the cognitive interviews, with some participants becoming distracted. This can be disheartening, but we mitigated it by thinking of ways to bring the participant back to the task quickly (for example, using phrases like ‘just thinking about this question specifically’). We also recognise and accept that there can be natural interruptions (for example, important phone calls, family members, pets) when participants are in the setting of their own home, so a degree of flexibility is needed: tell the participant that this is okay, pause the session and allow them to see to it, then be prepared to bring them back to the task quickly so the session can continue.

In the focus group, we found participant interaction to be slightly different. Participants did not warm up and interact with each other as seamlessly in the virtual room as they would in a physical room. In future, we will explore having virtual breakout rooms to allow participants to discuss each task in smaller groups before feeding back to the group as a whole. This mirrors the task-based discussion we aim for when testing materials in face-to-face sessions.

Vicky and Jamie highlighted that remote sessions are likely to end more abruptly than face-to-face research. To mitigate this we:

  • made the participant aware when there were only a few questions left and that the interview would be coming to an end in a few minutes
  • finished our questioning by saying “That’s all the questions I have for you, would you like to ask us anything?”
  • explained what happens next with the data we have collected
  • thanked the participant and reassured them their contributions had been useful

In conclusion – do it!

Overall, we have found that remote research has worked well and provided us with some really valuable insights. Whilst the pandemic has presented many challenges, it has also given us a chance to be innovative and add a new tool to our toolbox.

However, we acknowledge that the methods outlined here exclude those who do not have an internet connection or specific devices, or those who are less digitally literate.

One of the principles of user-centred design is to be consistent, not uniform. Therefore, in a world where face-to-face research is on pause, we are continuing to investigate methods we can use to reach the groups that are likely to be excluded from digital methods of remote research.


Meg Pryor and Tara McNeill
Meg Pryor and Tara McNeill are Research Officers at the Office for National Statistics. Meg works in the Best Practice and Impact Division and Tara works in Social Survey Transformation.
