Response

Issues related to the collection of response data.

Remote testing, as understood by this Task Force, involves the collection of data ("responses") from research participants interacting with a response device such as a paper form, web survey, tablet app, or VR game. This article describes some considerations that may apply to the collection and processing of response data.

Comparison to in-lab response collection (see also Task Performance)

Assuming that appropriate hardware/software resources can be provided to the participant, the types of response data that can be collected during remote testing do not, in principle, differ from those available during in-lab testing. However, the types of response data that are most easily accessed will depend on the remote-testing platform and the types of tasks the platform is intended to implement.
Thus, a superficial but useful distinction can be made between two major types of tasks:

- Survey-based tasks, in which each participant typically answers a set of questions (multiple choice, rating scales, etc.) a single time.
- Trial-based tasks, in which each participant typically completes many repeated trials, as in psychophysical or other behavioral experiments.
Another difference between survey- and trial-based tasks has to do with whether individual participants complete the task once (as is typical for a survey) or many times (as is typical for trial-based tasks). Different considerations may apply to data handling (managing one vs. many data files per participant), counterbalancing conditions across repeated trial-based runs, randomizing question order across survey versions assigned to different participants, and so on (both are sketched below). Platforms may vary in their suitability for administering a survey task once to each of many (possibly anonymous) participants versus tracking a smaller number of participants across multiple sessions of trial-based tasks.
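As an illustration, both needs can be met with a few lines of code. The Python sketch below (participant and question IDs are hypothetical) derives a reproducible per-participant question order from the participant ID, and builds a simple cyclic Latin square for counterbalancing condition order across sessions:

```python
import random

def question_order(participant_id: str, questions: list) -> list:
    """Reproducibly shuffle question order for one participant.

    Seeding with the participant ID means the order can be
    reconstructed later from the data file alone."""
    rng = random.Random(participant_id)
    order = list(questions)
    rng.shuffle(order)
    return order

def cyclic_latin_square(conditions: list) -> list:
    """Cyclic Latin square: row i is the condition list rotated by i,
    so each condition appears once in every serial position across rows."""
    n = len(conditions)
    return [conditions[i:] + conditions[:i] for i in range(n)]

# Survey: each participant sees a different (but reconstructible) question order
print(question_order("p001", ["q1", "q2", "q3", "q4"]))

# Trial-based: assign participant k the k-th row of the square
rows = cyclic_latin_square(["A", "B", "C", "D"])
print(rows[1])  # ['B', 'C', 'D', 'A']
```

Note that a cyclic square balances serial position but not the order of adjacent conditions; a fully balanced design would need a different construction.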
Types of response data that may be collected during remote testing

Most relevant to the purpose of this article are the types of response data collected during survey- vs. trial-based tasks. There is no hard distinction between these, and most (all?) response types (multiple choice, rating scale, head pointing, pupil dilation) could, in principle, be used in either survey- or trial-based tasks. However, certain types of responses are commonly encountered in survey-based tasks, and these are the types most commonly supported by remote-testing platforms.

Note that the availability of specific response devices (buttons, sliders, etc.) and response data types may be limited by platform; for example, Gorilla (see Platforms) supports only those response types that can be implemented in a web browser.
Other platforms, particularly non-browser platforms such as PC or tablet apps, may offer a wider range of response types, including continuous or sensor-based responses (for example, head pointing or tilt/force sensing).
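As an illustration, a continuous response of this kind might be logged as a stream of timestamped samples. A minimal Python sketch, in which read_sensor is a hypothetical stand-in for whatever API the device actually exposes:

```python
import time

def sample_response(read_sensor, duration_s: float = 2.0, interval_s: float = 0.02):
    """Poll read_sensor() at roughly 1/interval_s Hz and return a list of
    (elapsed_time, value) samples for one response interval."""
    samples = []
    t0 = time.perf_counter()
    while (t := time.perf_counter() - t0) < duration_s:
        samples.append((t, read_sensor()))
        time.sleep(interval_s)
    return samples

# Example with a dummy sensor that always reads 0.5
trace = sample_response(lambda: 0.5, duration_s=0.1)
print(trace[:3])
```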
Speech / Audio response collection

In-lab testing of speech perception (for example) often combines open-set responding ("Repeat the word BLUE") with in-person scoring by a human observer. Some platforms allow synchronous interaction between experimenter and remote participant, which can support a similar approach. However, low-quality audio/video streaming, dropouts, or distortion might disrupt accurate scoring. A few approaches may be used to support open-set data collection.
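One such approach is to record the spoken response on the participant's device and save it for later offline scoring by a human listener. A minimal Python sketch, assuming the third-party sounddevice and soundfile packages (the recording duration and file-naming scheme are illustrative choices, not platform requirements):

```python
import sounddevice as sd
import soundfile as sf

def record_response(participant_id: str, trial: int,
                    duration_s: float = 3.0, fs: int = 16000) -> str:
    """Record duration_s seconds of mono audio and save a per-trial WAV file."""
    audio = sd.rec(int(duration_s * fs), samplerate=fs, channels=1)
    sd.wait()  # block until the recording completes
    path = f"{participant_id}_trial{trial:03d}.wav"
    sf.write(path, audio, fs)  # WAV file can be scored offline by a human listener
    return path

record_response("p001", trial=1)
```

Saved recordings avoid the streaming-quality problem entirely, at the cost of losing the immediate feedback possible with synchronous scoring.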
Response Calibration (see also Calibration)

Although most survey-type responses should be interpretable in an absolute sense, and thus require no calibration to determine their value, some continuously variable response data (for example, from touch displays or tilt/force sensors) may require psychophysical or hardware calibration. See Calibration for more details.
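In the simplest case, hardware calibration might amount to a two-point linear mapping from raw sensor readings to physical units. A minimal Python sketch (the raw readings and reference forces below are hypothetical values standing in for an actual calibration procedure):

```python
def make_calibration(raw_lo: float, raw_hi: float,
                     value_lo: float, value_hi: float):
    """Return a function mapping raw readings to calibrated values,
    assuming the device responds linearly between two reference points."""
    gain = (value_hi - value_lo) / (raw_hi - raw_lo)
    def calibrate(raw: float) -> float:
        return value_lo + gain * (raw - raw_lo)
    return calibrate

# Example: raw readings of 112 and 903 measured at known forces of 0 N and 5 N
to_newtons = make_calibration(112.0, 903.0, 0.0, 5.0)
print(to_newtons(500.0))  # ~2.45 N
```

Devices with nonlinear responses would of course need more reference points and a correspondingly richer fit.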