Examples using fully online / browser-based platforms

Example: Psychophysical task (UC Irvine)

An abbreviated (one-minute) version of the experiment described below is playable here; a link to the full version is provided at the bottom of this page. The source code for the abbreviated experiment is available here.

Implementation

The jsPsych library was used; the jsPsych tutorial was helpful in getting started. Most of the experiment logic is handled by jsPsych. We used jsPsych's event-related callback functions to generate and play back stimuli at trial onset, with audio playback handled by the Web Audio API's AudioBuffer interface. We recommend this approach to any experimenter who requires adaptive control of their stimuli or who samples from a large stimulus set, because it allows the stimulus to be computed inside the participant's browser without back-and-forth communication with the experimenter's server. Alternatively, if one wishes to rely only on jsPsych's built-in functions for stimulus presentation, the sounds must be generated beforehand and saved as .wav or .mp3 files.
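
To illustrate, here is a minimal sketch of this approach, assuming jsPsych 6.x; the trial type, response keys, and tone parameters below are hypothetical, not those of the actual experiment:

  // Synthesize a pure tone in the browser with the Web Audio API.
  // (Browsers may require a user gesture before audio can start.)
  const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

  function makeToneBuffer(freqHz, durSec) {
    const n = Math.floor(audioCtx.sampleRate * durSec);
    const buffer = audioCtx.createBuffer(1, n, audioCtx.sampleRate); // mono
    const data = buffer.getChannelData(0);
    for (let i = 0; i < n; i++) {
      data[i] = 0.1 * Math.sin(2 * Math.PI * freqHz * i / audioCtx.sampleRate);
    }
    return buffer;
  }

  function playBuffer(buffer) {
    const src = audioCtx.createBufferSource();
    src.buffer = buffer;
    src.connect(audioCtx.destination);
    src.start();
  }

  // A trial that computes and plays its stimulus at onset, so no sound
  // files need to be fetched from the server.
  const toneTrial = {
    type: 'html-keyboard-response',
    stimulus: '<p>+</p>',
    choices: ['j', 'k'],
    on_load: function () {
      playBuffer(makeToneBuffer(440, 0.5)); // hypothetical: 440 Hz, 500 ms
    }
  };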

Software and Platform

Code editing was done primarily in Brackets, a simple text editor that provides a live preview of the code in an adjacent Google Chrome window. We found Brackets to be completely adequate for a project of this scale, although more advanced editors are available (e.g., VS Code). Only knowledge of HTML and JavaScript was required to code up the experiment.

Google Firebase was used to host the experiment. Its free offerings are generous: we were able to collect data from more than 100 participants over the course of two weeks without paying for higher storage or traffic limits. To give a brief account of the steps required to host an experiment on Firebase (the corresponding shell commands are sketched after the list):

  1. A Firebase project is created using the Firebase web console.
  2. The Firebase CLI (command line interface) is installed on the computer where the code is being edited.
  3. The CLI is used to log in to the account associated with the Firebase project. The user then specifies the code files to be deployed to the web; a URL is generated automatically on first deployment.
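
In practice, this workflow amounts to a few shell commands (a sketch; the options chosen during firebase init will vary by project):

  npm install -g firebase-tools   # install the Firebase CLI
  firebase login                  # authenticate with the Firebase account
  firebase init hosting           # link the local directory to the project
  firebase deploy                 # upload the files; the hosting URL is printed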

In addition to hosting, Firebase provides methods for storing data (e.g., spreadsheets of trial parameters and participant responses). These methods are branded as Cloud Storage. Some work was required to set up the experiment to call the Firebase storage functions correctly and save participants' data in the desired format. The tutorials available on the Firebase website were helpful, but it still took time to get everything running properly; once it was, updating code was easy and performance was reliable. Both the Firebase web console and the CLI can be used to update code, monitor website performance, and download saved data.
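
As an example of what the saving step might look like, here is a sketch using v8-style calls from the Firebase JavaScript SDK; the storage path and the subject ID handling are hypothetical:

  // Save the jsPsych data log to Firebase Cloud Storage as a CSV file.
  function saveData(subjectId) {
    const csv = jsPsych.data.get().csv();  // full trial-by-trial log
    const fileRef = firebase.storage().ref('data/' + subjectId + '.csv');
    fileRef.putString(csv)
      .then(function () { console.log('Data upload complete.'); })
      .catch(function (err) { console.error('Upload failed:', err); });
  }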

Participants

Participants were recruited from an institutional subject pool. A direct link to the experiment was posted on the subject pool website. Undergraduate students were offered course extra credit as compensation. A team of research assistants manually checked participants’ recorded data and dispensed credit.

Consent

From the front page of the experiment, a participant was able to download the Study Information Sheet (IRB materials). The front page also included rules for receiving extra credit as well as procedural warnings (e.g., “Do not refresh the page.”). The participant used checkboxes to confirm that

  1. they were using headphones or earbuds,
  2. they were seated in a quiet place with no distractions, and
  3. they agreed to take part in the study.

The experiment would not progress until all boxes had been checked.
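
A minimal sketch of this kind of gating in plain HTML and JavaScript (the element IDs and wording are hypothetical, not taken from the actual consent page):

  <label><input type="checkbox" class="consent-box"> I am using headphones or earbuds.</label>
  <label><input type="checkbox" class="consent-box"> I am in a quiet place with no distractions.</label>
  <label><input type="checkbox" class="consent-box"> I agree to take part in the study.</label>
  <button id="begin-button" disabled>Begin</button>
  <script>
    const boxes = document.querySelectorAll('.consent-box');
    const begin = document.getElementById('begin-button');
    boxes.forEach(function (box) {
      box.addEventListener('change', function () {
        // Enable the button only once every box is checked.
        begin.disabled = !Array.from(boxes).every(function (b) { return b.checked; });
      });
    });
  </script>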

Instruction

The participant received instructions from a video with voice-over. The video described the nature of the stimuli (using a piano roll as reference) and the task required of the participant. This was followed by a practice block of the task.

Procedure

Level calibration was not attempted (in giving consent, participants indicated that they were using headphones adjusted to a “comfortable” level and were in a quiet environment; this was sufficient for our purposes). The experiment asked participants to listen to pure-tone sequences and indicate whether each sequence contained frequencies from a musically major or musically minor set (single interval, 2AFC). Feedback was given after every trial. Participants could take breaks at their leisure, but they were expected to complete the experiment within one hour. The experiment itself did not enforce this time limit.
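
As an illustration of trial-by-trial feedback, a jsPsych 6-style feedback screen might look like the following sketch; it assumes the preceding response trial stores a boolean correct field in its on_finish callback:

  // Show 'Correct!' or 'Incorrect.' based on the previous trial's data.
  const feedbackTrial = {
    type: 'html-keyboard-response',
    stimulus: function () {
      const last = jsPsych.data.get().last(1).values()[0];
      return last.correct ? '<p>Correct!</p>' : '<p>Incorrect.</p>';
    },
    choices: jsPsych.NO_KEYS,   // no response collected
    trial_duration: 800         // hypothetical: show for 800 ms
  };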

Inclusion Criteria

An easy semitone-discrimination task was included among the conditions of the experiment. When supervision (e.g., via video call) is not possible, we recommend including catch conditions like this. Every listener with normal pitch processing was expected to perform this task easily; nevertheless, almost half of the participants performed no better than chance in this condition. We suspect that this was due to reduced compliance outside of the lab setting, since no errors with stimulus playback were reported. These participants were excluded from the analysis. The distribution of the response variable (d’ sensitivity) was simulated for a guessing participant; the inclusion criterion was performance above the 99th percentile of this distribution.
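
To illustrate the criterion, here is a sketch of simulating the null distribution for a guessing participant; the trial and simulation counts are hypothetical. Because d’ is a monotonic function of proportion correct in 2AFC (d’ = √2·z(Pc)), the percentile cutoff can be computed on proportion correct and mapped to d’ afterward:

  // Simulate proportion correct for a participant guessing on 2AFC trials.
  function simulateGuessing(nTrials, nSims) {
    const pcs = [];
    for (let s = 0; s < nSims; s++) {
      let nCorrect = 0;
      for (let t = 0; t < nTrials; t++) {
        if (Math.random() < 0.5) nCorrect++;  // 50% chance per trial
      }
      pcs.push(nCorrect / nTrials);
    }
    return pcs.sort(function (a, b) { return a - b; });
  }

  const pcs = simulateGuessing(100, 10000);           // hypothetical: 100 trials
  const cutoff = pcs[Math.floor(0.99 * pcs.length)];  // 99th percentile
  console.log('Include participants with proportion correct above', cutoff);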

Further examples

Several examples of code are available via GitHub (one may search GitHub for “jspsych” or “hearing experiments”). The source code for the experiment described here has not yet been made publicly available, but the code for a simplified version is available via the link at the top of this page. To learn more about the flow of the experiment (e.g., consent solicitation, demographic surveys, instructions, etc.), the full experiment is playable here.

Contact for more information: Sebastian Waz.
