Example: Psychophysical task.

This experiment was run through an institutional subject pool (extra credit for undergraduate students). An independent website was used to host the experimental code and data. This decision provided easier access for the experimenter, but at the cost that subjects did not automatically receive extra credit (the experimenter assigned credit by hand).

In this experiment, subjects listened to a series of low-frequency tones and indicated whether the sequence formed a major or minor chord (single interval, 2AFC). Level calibration was not attempted; instead, subjects were instructed to check a box to indicate that they were using headphones, had adjusted the stimulus to a “comfortable” level, and were in a quiet location with no distractions. The experiment would not progress until participants checked this box.
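
One simple way to implement such a gate (a sketch, assuming jsPsych 6.x and its html-button-response plugin; the element ID and wording are illustrative) is to disable the continue button until the box is checked:

```javascript
// Gate trial: the Continue button stays disabled until the box is checked.
// Assumes jsPsych 6.x with the html-button-response plugin loaded.
var setup_gate = {
    type: 'html-button-response',
    stimulus: '<p>Please confirm the following before continuing:</p>' +
              '<label><input type="checkbox" id="setup-ok"> ' +
              'I am wearing headphones, have set a comfortable level, ' +
              'and am in a quiet location with no distractions.</label>',
    choices: ['Continue'],
    on_load: function () {
        var btn = document.querySelector('.jspsych-btn');  // default button class in jsPsych 6
        btn.disabled = true;                               // locked until the box is checked
        document.getElementById('setup-ok').addEventListener('change', function (e) {
            btn.disabled = !e.target.checked;
        });
    }
};
```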

All instructions were a combination of audio and graphic materials. Initial information about the basic experiment was presented as a video with audio. The video described the nature of the stimuli (via a schematic representation of the tone sequence, with frequency on the y-axis drawn as piano keys and time on the x-axis) and the task required of the participant. This was followed by a practice block of the task.

After one condition was completed, the process was repeated for the remaining conditions, each time with examples using both sound and graphics, instruction regarding responses, and so on. Listeners could take breaks at their leisure, but they were expected to complete the experiment within one hour. The experiment itself did not enforce this time limit, but participants were told that they would receive one hour’s credit, and the experiment was designed to be completed within that amount of time.
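
In jsPsych, this kind of per-condition repetition can be expressed with a nested timeline and timeline variables. The sketch below uses jsPsych 6.x syntax; the condition labels are placeholders, and the real instruction, practice, and test trials would replace the stub:

```javascript
// One instructions + trials cycle per condition (jsPsych 6.x syntax).
// Condition labels are placeholders; real trials replace the stub below.
var conditions = [
    { cond: 'Condition A' },
    { cond: 'Condition B' }
];

var per_condition_block = {
    timeline: [
        {
            // Condition-specific instructions (stub): sound/graphic examples go here.
            type: 'html-button-response',
            stimulus: function () {
                // Second argument 'true' returns the value itself (jsPsych 6.x).
                return '<p>Next: ' + jsPsych.timelineVariable('cond', true) + '</p>';
            },
            choices: ['Begin']
        }
        // ...practice and test trials for this condition would follow here.
    ],
    timeline_variables: conditions
};
```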

The jsPsych library was used. The tutorial (https://www.jspsych.org/tutorials/hello-world/) was helpful in learning how to program the experiment. Most of the experiment logic is handled by jsPsych. Note that jsPsych does not generate audio; if one wishes to rely solely on jsPsych for stimulus presentation, the sound must be pre-saved as .wav or .mp3 files (e.g., generated in Matlab). Alternately, jsPsych can be used to call arbitrary (i.e., stimulus-generating) JavaScript functions (see https://www.jspsych.org/overview/callbacks/), but the experimenter must write these functions on their own. The following page describes one way to generate audio in JavaScript: https://mdn.github.io/webaudio-examples/audio-buffer/. NOTE: do not click the "Make white noise" button on that page with headphones on!
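
As a concrete illustration of the AudioBuffer approach shown on that page, the following sketch synthesizes a pure tone sample-by-sample and plays it through the Web Audio API (the frequency and duration values are arbitrary examples; modern browsers require a user gesture, such as a button click, before audio will start):

```javascript
// Synthesize and play a pure tone with the Web Audio API (no pre-saved files).
// Must be triggered by a user gesture (e.g., a button click) in modern browsers.
function playTone(frequency, durationSec) {
    var ctx = new (window.AudioContext || window.webkitAudioContext)();
    var sampleRate = ctx.sampleRate;
    var nSamples = Math.floor(durationSec * sampleRate);
    var buffer = ctx.createBuffer(1, nSamples, sampleRate);  // mono buffer
    var data = buffer.getChannelData(0);
    for (var i = 0; i < nSamples; i++) {
        data[i] = 0.5 * Math.sin(2 * Math.PI * frequency * i / sampleRate);
    }
    var source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.start();
}

playTone(220, 0.5);  // e.g., a 220 Hz tone for 500 ms
```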

The browser/editor combination used here was Google Chrome / Brackets (http://brackets.io/), although an experienced developer may prefer a more advanced development environment (e.g., https://code.visualstudio.com/). The index.html and .js files are edited in Brackets, and the experiment is run by opening index.html in a web browser (it should work in Chrome, Firefox, and Safari).
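
For orientation, a minimal index.html for a jsPsych 6.x experiment might look roughly as follows (the jsPsych version and file paths are illustrative and depend on where the library is unpacked):

```html
<!-- Minimal index.html for a jsPsych 6.x experiment (paths/version illustrative). -->
<!DOCTYPE html>
<html>
  <head>
    <script src="jspsych-6.3.1/jspsych.js"></script>
    <script src="jspsych-6.3.1/plugins/jspsych-html-button-response.js"></script>
    <link rel="stylesheet" href="jspsych-6.3.1/css/jspsych.css">
  </head>
  <body>
    <!-- experiment.js builds the timeline and calls jsPsych.init() -->
    <script src="experiment.js"></script>
  </body>
</html>
```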

Once the code for your experiment runs successfully in your local browser, it must be placed online so that subjects can access it via their own browsers (i.e., they do not download files). Here, Google Firebase (https://firebase.google.com/) was used to host the experiment; its free offerings are generous, and we were able to collect data from more than 100 participants without purchasing higher storage and traffic limits. The tutorial videos and demo projects available on the Firebase website help, but it takes time to master the materials. Once the materials are working, updating code is easy and performance is very reliable. Firebase provides both a browser console and command-line tools from which code can be updated, website performance can be monitored, and saved data can be downloaded.
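
One way to save responses (a sketch, not Firebase's only mechanism: it assumes the Realtime Database with the v8-style JavaScript SDK, and the firebaseConfig object and database path are placeholders) is to write out jsPsych's data in the experiment-level on_finish callback:

```javascript
// Write jsPsych data to the Firebase Realtime Database at the end of a session.
// Assumes firebase-app.js and firebase-database.js (v8-style SDK) are loaded,
// and that firebaseConfig was copied from the Firebase console (placeholder here).
firebase.initializeApp(firebaseConfig);

jsPsych.init({
    timeline: timeline,  // the experiment timeline defined elsewhere
    on_finish: function () {
        var subjectId = jsPsych.randomization.randomID(10);  // anonymous ID
        firebase.database()
            .ref('responses/' + subjectId)                   // illustrative path
            .set(JSON.parse(jsPsych.data.get().json()))
            .then(function () {
                document.body.innerHTML = '<p>Data saved. Thank you!</p>';
            });
    }
});
```

Hosting itself is handled separately through the Firebase command-line tools (firebase init hosting to set up a project directory, then firebase deploy to push updated code).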

Several examples of code are available via GitHub (https://github.com/; e.g., https://github.com/sebwaz/Tone-Scrambles, https://github.com/Tuuleh/masters-battery; one may also search on GitHub for “jspsych” or “hearing experiments”). The experiment described here may be tried at the following link: https://scramble-battery.firebaseapp.com/. The source code for this specific experiment has not yet been made publicly available.