Issues related to participants’ performance of the required tasks
Potential effects of the testing context
When testing outside a sound booth, it is important to consider both the cognitive and acoustic effects that the testing environment may have on task performance. While remote testing can be conducted in environments with limited distractions, participants may not be alone in the testing environment, the environment may contain distracting elements, or the participant may attempt to multitask during testing. For headphone-based studies, passive noise-attenuating headphones may be advantageous, as may a moderate-level masking noise. For remote testing over loudspeakers, background noise, room acoustics, and the positioning of the speakers can all influence performance. Note, however, that using a moderate-level masking noise to overcome background noise may inconvenience individuals near the participant. Brungart et al. (in press) measured speech perception in crowded public spaces while simultaneously measuring the background noise level on every trial, allowing performance to be compared as a function of the background noise level.
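For browser-based studies, one practical way to follow this approach is to sample the level at the participant's microphone on each trial so that performance can later be related to the ambient noise. The sketch below is a minimal illustration using the Web Audio API; the function names are our own, and the returned values are uncalibrated dB relative to full scale (dBFS), not absolute sound pressure levels.

```typescript
// Minimal sketch: estimate the relative background noise level from the
// participant's microphone around each trial (browser, Web Audio API).
// Values are uncalibrated dBFS, not SPL; names are illustrative only.

async function createNoiseMeter(): Promise<() => number> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);
  const buf = new Float32Array(analyser.fftSize);

  // Returns the current RMS level in dB relative to full scale (dBFS).
  return () => {
    analyser.getFloatTimeDomainData(buf);
    let sumSq = 0;
    for (const sample of buf) sumSq += sample * sample;
    const rms = Math.sqrt(sumSq / buf.length);
    return 20 * Math.log10(rms + 1e-12);
  };
}

// Example: log the ambient level alongside each trial's response.
async function runTrial(trialIndex: number, meter: () => number) {
  const noiseDbfs = meter();              // sample just before the trial
  // ...present stimulus, collect response...
  console.log({ trialIndex, noiseDbfs }); // stored with the trial record
}
```

Because the measurement is uncalibrated, it is most useful for within-participant comparisons (e.g., flagging noisy trials) rather than for reporting absolute noise levels.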
Communicating instructions
In conventional testing environments, participants typically have the opportunity to ask questions after the task has been explained. Further, it is often possible to observe the data in near real time, allowing the experimenter to correct obviously incorrect behavior. During remote testing, this is often not the case. Participants may ignore simple instructions, such as ensuring that headphones are placed on the correct ears, and may not fully comprehend more complicated instructions. Multiple versions of the instructions may be required, depending on the number of platforms the experiment supports and on whether all participants can be assumed to be fluent in the same language.
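One way to catch the specific problem of reversed or missing headphones is a brief automated channel check before testing begins: a tone is routed to one ear at a time and the participant reports which side they heard it on. The sketch below is illustrative only; it assumes a browser-based experiment using the Web Audio API, and the response-collection function is a placeholder.

```typescript
// Minimal sketch of an automated left/right headphone check: play a short
// tone to one randomly chosen ear and ask which side it came from.
// Assumes an existing AudioContext; prompt/response handling is illustrative.

function playToneToOneEar(ctx: AudioContext): 'left' | 'right' {
  const side = Math.random() < 0.5 ? 'left' : 'right';
  const osc = ctx.createOscillator();
  const panner = ctx.createStereoPanner();
  osc.frequency.value = 1000;                  // 1 kHz probe tone
  panner.pan.value = side === 'left' ? -1 : 1; // full pan to the target ear
  osc.connect(panner).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.5);             // 500 ms tone
  return side;
}

// Example use: repeat a few times and only proceed if all answers match.
async function headphoneCheck(ctx: AudioContext, askSide: () => Promise<string>) {
  for (let i = 0; i < 3; i++) {
    const presented = playToneToOneEar(ctx);
    const reported = await askSide();          // e.g., two on-screen buttons
    if (reported !== presented) return false;  // failed or reversed headphones
  }
  return true;
}
```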
Age considerations
Apart from the standard considerations relating age, hearing impairment, and cognitive decline, remote testing performance may depend on the participant's comfort and skill in using a computer or tablet. Auditory remote testing presents a particular challenge because computer skills, hearing impairment, and age are related in complex ways (see Henshaw et al., 2012).
Linguistic considerations (translation, etc.)
Remote testing provides access to participant populations who speak a much wider variety of languages than may be available in a traditional single-site experimental design. While this can be beneficial, it can also affect task performance if the test materials are not appropriately translated and adapted for the range of languages to be tested.
Technological literacy of participants
Remote testing may involve diverse levels of participant familiarity, experience, and facility with the technologies employed. Careful consideration should be given to maximizing accessibility across the targeted population when selecting a platform and designing a study. Consider, for example, a tablet preconfigured to run a single app with that participant's settings already entered, versus a laptop that requires signing in to Wi-Fi, downloading an update, and saving and uploading a data file. A typically developing college-age cohort might reasonably be expected to complete either study with minimal extra intervention (see participant administration), but the first option might be more appropriate for a broader cohort. The latter approach also risks confounding results by reducing the likelihood that some subject groups complete the full task and/or by introducing additional cognitive load unrelated to the research question.
Supervision of performance
Remote testing presents challenges for supervising the participant and their performance. Adequate supervision can be critical for keeping participants motivated and engaged with the task, and for identifying situations where a participant may be confused by the instructions. Finally, while some experimental procedures can be fully automated, others require experimenter intervention. For example, open-set speech-in-noise testing requires either that participants self-score their performance or that the experimenter observe the responses in near real time. Supervision of remote testing can range from the experimenter being present at the remote testing site (physically or virtually), to automated help systems built into the testing platform, to asynchronous supervision and help via phone or other channels.
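Where near-real-time observation is needed, as in open-set speech scoring, one option in browser-based experiments is to transmit each response to the experimenter as soon as it is collected. The sketch below illustrates the idea in general terms; the endpoint URL and payload fields are hypothetical and would need to be adapted to the platform in use.

```typescript
// Minimal sketch of near-real-time supervision: send each open-set response
// to a server as it is collected so a remote experimenter can score it or
// intervene. The endpoint URL and payload fields are hypothetical.

interface TrialResponse {
  participantId: string;
  trialIndex: number;
  response: string;   // e.g., the typed open-set answer
  timestamp: string;
}

async function reportResponse(r: TrialResponse): Promise<void> {
  try {
    await fetch('https://example.org/api/trial-responses', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(r),
    });
  } catch {
    // Queue locally and retry later so a dropped connection
    // does not interrupt the participant's session.
  }
}
```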
Evaluation of participant experience
Participant populations recruited for remote testing potentially have a wider range of experience with auditory and/or behavioral testing. A participant's experience with standard behavioral paradigms may affect their performance on the task.