Platform considerations

The primary dimension along which approaches to remote testing vary is the tradeoff between experimental control and convenience/accessibility. This tradeoff affects all aspects of the study design, although different balances may be appropriate for different aspects (e.g., combining careful stimulus control with convenience sampling of participants). Many research platforms are available to support remote testing; please refer to Platform Descriptions for detailed information about specific platforms.

What approach should I use for remote testing?

Three big questions must be answered when deciding how to set up an auditory experiment for remote data collection: What hardware will I use? What software will I use? Who are the subjects I want to test? In all three cases, the alternatives range from convenient but loosely controlled to more time-consuming but well specified.

hardware: calibration & interfaces (options listed from least to most controlled)

1. Any user hardware (e.g., PC & headphones)
  • loose control of auditory stimuli with respect to calibration & frequency response
  • flexible graphic and temporal specs
  • no “special” data collected (e.g., ambient noise levels)

2. Specified hardware (e.g., iPad)
  • stimulus level & frequency response defined within ~5 dB
  • controlled graphic and temporal specs
  • some specialized data collection capabilities (e.g., touch screen input)

3. Lab hardware (e.g., a delivered tablet & sound level monitor)
  • strict control of level and frequency response
  • other specialized measurement or interfaces required (e.g., calibrated ambient noise recordings, software permitting)

software: data handling and experimental control (options listed from least to most controlled)

1. Preconfigured hearing research packages (e.g., PART)
  • standard procedures without modifications required

2. Build-your-own systems (e.g., Gorilla)
  • custom stimuli
  • standard interface and response collection sufficient

3. Fully custom scripts (e.g., MATLAB or Python)
  • custom procedures or real-time processing
  • non-standard interface or data collection (e.g., voice recordings, hardware permitting)

subjects: demographics & instruction (options listed from least to most controlled)

1. Anonymous & unsupervised (e.g., MTurk)
  • anyone can participate
  • no one-on-one instruction required

2. “By invitation” access; most hardware/software will work
  • targeted population
  • specialized instruction (e.g., via Zoom)

3. Supervision by proxy (e.g., a parent), experimenter-administered protocols, or hybrid models (e.g., training in person, data collection at home); most hardware/software will work
  • populations requiring real-time supervision (e.g., children)
  • protocols requiring rigorous training

Some other dimensions along which platforms vary:

Settings (In-lab, kiosk, at-home, in-the-wild)
At this time, most of the platforms identified with remote testing appear optimized for remote but isolated settings, such as a participant’s home. Most are equally deployable to in-lab settings, possibly with greater control over computing and stimulus hardware. Depending on the study, it might be feasible to use a single design for both remote testing and in-lab validation studies. Keep in mind, however, that for commercial platforms the pricing structure may not be ideal for in-lab deployment.

Portable systems configured for standalone use with minimal experimenter intervention could also be deployed in a kiosk setting, i.e. semi-permanently installed for unsupervised walk-up use. Depending on the motivation for remote testing, kiosk deployment could provide numerous advantages such as sampling of geographically targeted populations in health-care offices, at music events, etc. Study design for kiosk-based testing is likely to share many elements of design for take-home / tablet-based studies, where simplicity and clear instruction are prioritized over controlled sampling and experimenter supervision.

Finally, some of the platforms identified for remote testing may be suitable for use in everyday / "real-world" settings such as bars, cafes, classrooms, and outdoor settings. Again, depending on the motivation for remote testing, this use could enhance the ecological validity of a study by measuring performance in behaviorally relevant backgrounds rather than in controlled lab settings.

Supervision (experimenter present in person or remote, vs standalone task)
Instruction about the task, clarification when questions arise or malfunctions occur, and debriefing are all common interactions between experimenters and participants in lab-based testing. Shifting to remote testing requires careful consideration of how (or if) such communication must be facilitated, and identification of a research platform capable of supporting it. At one extreme of this dimension lies in-lab testing, with constant in-person interaction available as needed. At the other lie completely standalone tasks. Detailed and effective instruction can take the place of many interactions, but is less helpful for unexpected errors in the task or in the research hardware/software itself, or for special populations that experience specific challenges. An intermediate solution is for an experimenter to provide direct real-time supervision remotely. Some platforms support this feature directly. Others may require use of a secondary service (telephone / video calling, screen sharing, etc.) running alongside and independently of the research platform itself.

Whose device? (experimenter provided / take-home vs participant BYO)
Another important dimension is hardware selection and control. Laboratory-owned equipment (e.g. a lab-configured PC or tablet) can be used for in-lab testing and, for remote testing, delivered or shipped to the participant’s location. Greater control is obviously achievable with experimenter-provided equipment, which could be delivered with earphones, displays, and response devices. In that case, devices can be calibrated prior to delivery, and calibration verified after use and return.

Participant-controlled equipment offers less control, but may allow greater flexibility in accessing the research paradigm online or by direct download. In that case, other procedures will be necessary to verify the correct operation and stimulus delivery (e.g. psychophysical calibration), and preparations should be made to provide technical support to participants attempting to download and install research software for participation.
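One common form of psychophysical calibration is to have the participant adjust a reference tone to their detection threshold and then infer the otherwise unknown system gain by assuming an approximately normal-hearing threshold. A minimal sketch of that arithmetic in Python follows; the function names and the assumed 1 kHz threshold value are illustrative, not taken from any particular platform.

```python
import numpy as np

# Assumed detection threshold for a normal-hearing listener at 1 kHz,
# in dB SPL. This value is an illustrative assumption; a real study
# should justify its own reference.
ASSUMED_THRESHOLD_SPL = 7.5

def estimated_gain_db(threshold_dbfs, assumed_threshold_spl=ASSUMED_THRESHOLD_SPL):
    """Estimate the unknown system gain G (dB) such that SPL ~ dBFS + G,
    given the digital level (dB re full scale) the participant judged
    to be just audible."""
    return assumed_threshold_spl - threshold_dbfs

def dbfs_for_target_spl(target_spl, gain_db):
    """Digital level (dB FS) needed to present a stimulus at target_spl."""
    return target_spl - gain_db

def calibration_tone(freq_hz=1000.0, dur_s=1.0, level_dbfs=-20.0, fs=44100):
    """Pure tone whose peak amplitude is level_dbfs dB re digital full scale."""
    t = np.arange(int(dur_s * fs)) / fs
    return 10 ** (level_dbfs / 20) * np.sin(2 * np.pi * freq_hz * t)

# Example: if a tone at -75 dB FS was just audible, G is about 82.5 dB,
# so a 65 dB SPL stimulus should be played at -17.5 dB FS.
gain = estimated_gain_db(-75.0)
level = dbfs_for_target_spl(65.0, gain)
```

Note that this sketch assumes a linear, monotonic playback chain and cannot detect a poor frequency response, which is one reason shipped, pre-calibrated hardware remains the more controlled option.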

Platform device type (browser, tablet, headset, PC, custom hardware)
Similarly, device types vary significantly across research platforms, from entirely software-based platforms accessed online (via a web browser) or by download, to standalone devices such as tablets and VR headsets, to general-purpose or custom computing hardware. Some approaches (e.g. physiological data collection, custom earphones) may require additional custom hardware for control or calibration.

Special hardware support (headphones, soundcard, UI response, etc)
Many online platforms are capable of presenting auditory stimuli via the interface or sound card built into the participant's hardware, using whatever earphones the participant has on hand. Few of these offer the level of stimulus control and calibration that experimenters may be used to working with in the lab. For this reason, it may be worthwhile to consider platforms capable of working with off-board audio interfaces (e.g. USB sound cards), which can be delivered, along with a standard-model earphone, to the participant’s location even if the research platform itself is fully online. Other types of specialized hardware may be required for certain types of response data, such as physiological data (heart rate, GSR, EKG / EEG), head and hand tracking (possible using VR headsets), etc.
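When no hardware can be shipped, software-only checks can at least verify that a participant is wearing headphones. One published screening approach (e.g., Woods et al., 2017) asks listeners to pick the quietest of three tones, one of which is presented in antiphase across the two stereo channels: over loudspeakers the antiphase tone partially cancels in the air and misleads the judgment, while over headphones it does not. A hedged sketch of the stimulus generation (parameter values are illustrative):

```python
import numpy as np

def stereo_tone(freq_hz=200.0, dur_s=1.0, level_dbfs=-14.0, fs=44100,
                antiphase=False):
    """Stereo pure tone; if antiphase, the right channel is inverted, so
    the tone cancels acoustically over loudspeakers but not headphones."""
    t = np.arange(int(dur_s * fs)) / fs
    left = 10 ** (level_dbfs / 20) * np.sin(2 * np.pi * freq_hz * t)
    right = -left if antiphase else left
    return np.column_stack([left, right])  # shape: (samples, 2)
```

In the screening task, the antiphase tone is presented alongside diotic tones and the listener reports the quietest interval; chance-level performance over several trials suggests loudspeaker playback.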
Platform OS environment (e.g. Windows, iOS, Android)
Platforms also vary according to the operating system environment in which they run. Tablet-based systems may be compatible with Apple’s iOS (iPhone / iPad) or Google’s Android, which also supports a range of other devices including VR headsets; PC-based systems may run on Microsoft Windows or Apple MacOS. Few platforms run on multiple OSes; the exception is fully online platforms, which may be compatible with any OS and a wide range of web browsers.

Software environment type (bespoke app, customizable app, MATLAB, js, Python, etc)
An important consideration for study design and implementation is the software development environment of the selected research platform. Some online platforms offer simple web-based tools for defining sequences of trials (instructions, stimuli, responses, feedback, …) with no direct coding required. Other approaches support standard scripting languages such as JavaScript and Python. PC- and tablet-based approaches can make use of a wider array of software development environments including MATLAB, Unity, Xcode, etc. The time, effort, and expertise required to develop for each platform could thus vary significantly. In some cases, there may be a tradeoff between platform complexity as experienced by developers and participants (e.g. a standalone and easy-to-use iOS app versus a MATLAB or Python script running in a separate interpreter).

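Whatever the environment, the trial sequence itself (instructions, stimuli, responses, feedback) reduces to the same control loop. A minimal, front-end-agnostic sketch in Python (all names are illustrative; presentation and response functions are injected so the same loop could drive a browser, app, or script-based interface):

```python
def run_block(trials, present, get_response, give_feedback):
    """Run one block of trials and return per-trial records.

    trials        -- list of dicts with "stimulus" and "answer" keys
    present       -- callable that delivers a stimulus to the participant
    get_response  -- callable returning the participant's response
    give_feedback -- callable taking a bool (correct / incorrect)
    """
    results = []
    for trial in trials:
        present(trial["stimulus"])
        response = get_response()
        correct = response == trial["answer"]
        give_feedback(correct)
        results.append({**trial, "response": response, "correct": correct})
    return results

# Example with stubbed-out I/O (a real study would route these calls to
# the platform's display, audio, and response-collection facilities):
trials = [{"stimulus": "tone_A.wav", "answer": "left"},
          {"stimulus": "tone_B.wav", "answer": "right"}]
presented = []
log = run_block(trials, presented.append, lambda: "left", lambda ok: None)
```

Injecting the I/O functions keeps the experimental logic portable across the platform types discussed above, which can simplify running the same protocol both in the lab and remotely.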
Costs involved (software costs, subscriptions, required services)
Finally, the costs associated with various remote-testing platforms vary significantly and follow a number of different models. On the one hand are in-house and open-source programs with minimal or no acquisition costs. On the other are online platforms with subscription models that charge by the year, study, or participant. Some research platforms also require specialized hardware, which may be available from the platform vendor or third parties.