Examples using fully online / browser-based platforms

Measuring Band Importance Online

The goal of this experiment was to determine whether suprathreshold manipulations of speech signals altered the weighting of different frequency regions in their contribution to speech recognition (also known as band importance). An example MATLAB implementation of this type of project can be found here.

Preparing the Experiment

For this experiment, all participants hear the same sequence of speech stimuli, which have been filtered to exclude specific sets of frequency bands. These stimuli were generated in advance using this MATLAB implementation.

In the lab, we generate a level calibration signal by concatenating the speech stimuli used in the experiment and generating random-phase noise with the same long-term spectrum as the stimuli. We opted not to ship experimental hardware to participants, so we cannot precisely control stimulus level online. Instead, we used the calibration noise as a starting point for adjusting listening levels on the participant’s end (see Running the Experiment below), then checked that the quietest portions of the speech in each frequency band would be audible.

As with the calibration signal, we concatenated all of the stimuli used in the experiment, but then filtered the concatenated signal into the frequency bands we would be testing in the experiment. For each frequency band, we plotted a histogram of the distribution of energy in that band (estimated with the Hilbert transform) and took the 5th percentile of that distribution as the desired minimum audible level for that band. We then generated noises bandlimited to match the frequency range of each band and set the level of each noise to that band’s desired minimum audible level. This does not guarantee an absolute stimulus level, but using these bandlimited noises to check audibility (see below) ensures that the speech energy in each band is audible regardless of the volume setting on the participant’s computer.
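As a rough sketch of this level-setting step, the MATLAB code below concatenates the stimuli, filters them into bands, estimates each band's envelope with the Hilbert transform, takes the 5th percentile as the minimum audible level, and writes bandlimited noise scaled to that level. The folder name, band edges, filter design, and noise duration are assumptions for illustration, not the values used in the study.

    % Sketch of the per-band minimum-audible-level calculation.
    % Folder name, band edges, and filter order are illustrative assumptions.
    fs = 44100;
    stimFiles = dir('stimuli/*.wav');                 % hypothetical stimulus folder
    speech = [];
    for k = 1:numel(stimFiles)
        [x, fsIn] = audioread(fullfile(stimFiles(k).folder, stimFiles(k).name));
        speech = [speech; resample(x(:, 1), fs, fsIn)];   % concatenate all stimuli
    end

    bandEdges = [100 300; 300 600; 600 1200; 1200 2400; 2400 4800];  % example bands (Hz)
    for b = 1:size(bandEdges, 1)
        % Filter the concatenated speech into this frequency band
        [z, p, kk] = butter(4, bandEdges(b, :) / (fs/2), 'bandpass');
        sos = zp2sos(z, p, kk);
        bandSig = sosfilt(sos, speech);

        % Envelope via the Hilbert transform; the 5th percentile defines the
        % desired minimum audible level for this band
        env = abs(hilbert(bandSig));
        minLevel = prctile(env, 5);

        % Bandlimited noise scaled (by RMS) to that minimum audible level
        noise = sosfilt(sos, randn(5 * fs, 1));
        noise = noise / rms(noise) * minLevel;
        audiowrite(sprintf('audibility_band%d.wav', b), noise, fs);
    end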

Hosting the Experiment

We wrote a custom script in JavaScript using the jsPsych library to present sound files one at a time. The script reads in a sequence file whenever the experiment is loaded (set via a query parameter in the URL, e.g. ?sequence=Demo, where Demo.txt is the corresponding sequence file). Sequence files contain the name of one sound file on each line, and these sound files are played in succession as a button is pressed on the web page. The custom script is hosted on cognition.run, a free service for hosting jsPsych experiments. An alternative we explored was hosting the experiment in our own Amazon Web Services S3 bucket, but this would have incurred hosting fees. We may switch back to that approach in the future, as porting JavaScript from one host to another is relatively easy. An example of our program can be found here.
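For concreteness, a sequence file is just a plain-text list of sound file names. The MATLAB sketch below writes one in randomized order; the stimulus names and the Demo.txt file name are placeholders for illustration, not the files used in the study.

    % Hypothetical sketch: write a sequence file (one sound file name per line)
    % in randomized order. Stimulus names are placeholders.
    stimNames = {'sentence_001.wav'; 'sentence_002.wav'; 'sentence_003.wav'};
    order = randperm(numel(stimNames));       % randomize presentation order
    fid = fopen('Demo.txt', 'w');             % loaded online via ?sequence=Demo
    fprintf(fid, '%s\n', stimNames{order});   % fprintf cycles over the list
    fclose(fid);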

Running the Experiment

Each experimental session was conducted in a WebEx call with the participant. Participants were instructed in advance to install the WebEx desktop app so they could share their computer audio during the experiment. Participants were also instructed to listen with a good pair of headphones and to ensure their environment was quiet and free of interruptions during the experiment.

Consent was administered through REDCap in a web browser running on the experimenter’s computer. The experimenter shared control of the web browser with the participant so the participant could proceed through the consent forms at their own pace and provide signatures. During consent the experimenter could also assess the listening environment of the participant and request adjustments as needed to ensure the participant could hear and be heard clearly.

To set the volume of the participant’s computer, participants were first directed to a YouTube video that played the calibration signal. Participants were instructed to set the YouTube player volume to 100% and then adjust their computer volume until the video was “loud, but comfortable”. Participants were then directed to an audibility check program hosted on cognition.run, found here. This program played between 2 and 5 of the minimum-audible-level bandlimited noise bursts on each trial, and participants had to count the bursts and report how many they heard. If they failed the audibility check by getting any of the counts wrong, we asked them to assess their listening conditions (headphones, background noise) and turn the volume up if needed. If they could not pass a repeated audibility check, we discontinued the study and paid them. Note that a failed check does not distinguish between hearing loss and poor-quality listening conditions, but for our purposes we wanted to exclude both problems anyway.
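To make the audibility-check trials concrete, the sketch below builds a single trial by concatenating a random number (2 to 5) of the bandlimited noise bursts separated by silence; the correct response is the number of bursts. The burst and gap durations, and the file name carried over from the earlier sketch, are assumptions rather than the study's actual parameters.

    % Hypothetical sketch of one audibility-check trial: 2-5 bursts of the
    % minimum-audible-level noise for one band, separated by silence.
    fs = 44100;
    [noise, fs] = audioread('audibility_band1.wav');  % noise from the earlier sketch
    burst = noise(1:round(0.3 * fs), 1);              % 300-ms burst (assumed duration)
    gap = zeros(round(0.5 * fs), 1);                  % 500-ms silent gap (assumed)
    nBursts = randi([2 5]);                           % correct count for this trial
    trial = gap;
    for k = 1:nBursts
        trial = [trial; burst; gap];
    end
    audiowrite(sprintf('audibility_trial_%d.wav', nBursts), trial, fs);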

Speech recognition required participants to listen to speech stimuli and repeat them back. We sent participants a link to each experimental block (for example https://4yudz5vdqc.cognition.run/?sequence=BIFDemo), asked them to share their computer audio (in WebEx, this is done through Share Content -> Optimize for Motion and Video, ensuring ‘share computer audio’ is checked, and selecting their web browser), and told them to proceed through the experimental block at their own pace. On the experimenter’s end, the experimenter muted their microphone in WebEx and recorded the participant’s audio feed with Audacity. In Windows, this is done by selecting Windows WASAPI as the audio host, which allows recording the experimenter’s computer’s own audio output (loopback). Because the participant shares their computer audio, the experimenter hears and records both the stimuli and the participant’s spoken responses, which makes validating scores easier later.

Any experiment designed to be run in this manner should be robust to lost data. We have had sudden interruptions in the participant’s environment (dogs barking, family members entering the room) and momentary hitches in the WebEx call that rendered a response inaudible. If the experimenter determines that the participant was distracted, or if the response is inaudible, the trial should be discarded.

Participant Feedback

A total of 18 participants completed this study, of whom 8 completed a remote research participation survey. Responses on the survey were positive overall: all eight respondents rated the quality of the video connection, their comfort level with being recorded, and the setup and training as either “good” or “very good”. Four respondents felt that remote participation was less difficult than in-lab participation, while the remaining four said it was neither easier nor more difficult. Five said they would prefer future studies to be remote, while the remaining three had no preference. While this is a small sample, it suggests that the online implementation made it easier for participants to take part in our study.