Examples using fully online / browser-based platforms
Piloting a speech-on-speech masking experiment using a MATLAB Web App Server
Questions of Interest:
General testing of online experiments, specifically a co-located speech-on-speech masking task with varying degrees of talker-level predictability ('TalkerLevPredExpt' in the online experiment list).
Platform / Infrastructure:
MATLAB Web App Server (browser-based testing): A Windows virtual server was set up at BU, hosting the Web App Server and the compiled MATLAB experiments. You can test these experiments by visiting https://kiddlabserver.bu.edu. The site does not work with Safari or Internet Explorer, but should work with Chrome, Firefox, and Edge. (The password 'demo' can be used to test each of the experiments, but there is a limited number of simultaneous active participants.) The experiments were existing MATLAB projects, with the audio code modified for web-browser output and the GUIs migrated to MATLAB's App Designer (instead of GUIDE-based GUIs).
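One reason the audio code needs modification is that sound() and audioplayer render audio on the server machine rather than in the participant's browser. Below is a minimal sketch of one way to route playback to the client, embedding the waveform in an HTML audio element via a uihtml component; the helper name and the uihtml-based approach are illustrative assumptions, not the authors' actual implementation.

    % Minimal sketch: play a stimulus in the participant's browser from a
    % MATLAB Web App. sound()/audioplayer play on the *server*, so one
    % workaround is to embed the waveform in an HTML <audio> element via
    % a uihtml component. (Hypothetical helper, not the authors' code.)
    function playInBrowser(htmlComponent, y, fs)
        % Write the waveform to a temporary WAV file, then read it back
        % as raw bytes so it can be base64-encoded into a data URI.
        tmp = [tempname '.wav'];
        audiowrite(tmp, y, fs);
        fid = fopen(tmp, 'r');
        bytes = fread(fid, '*uint8');
        fclose(fid);
        delete(tmp);

        % Updating the component's source triggers playback in the
        % client. (Browser autoplay policies generally require a prior
        % user gesture, e.g., the button press that starts the trial.)
        uri = ['data:audio/wav;base64,' char(matlab.net.base64encode(bytes))];
        htmlComponent.HTMLSource = sprintf('<audio autoplay src="%s"></audio>', uri);
    end

A data URI keeps the app self-contained, avoiding a separate file-serving route on the server.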
Participant Recruiting:
We used existing participants from our database of previous in-lab research, all with normal hearing. Participants gave informed consent via Qualtrics, which also included an 'attestation' that they would maintain a distraction- and noise-free environment during remote testing.
Activities:
The dynamic range of the speech stimuli was only 18 dB (all suprathreshold), so subjects calibrated their headphone volume using speech stimuli presented over a 30 dB range, with instructions to set the lowest level to the loudness of a 'quiet voice', the mid level to 'normal conversation', and the loudest to a 'loud voice'.
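As a concrete illustration, the 30 dB calibration range can be generated by scaling a single speech token to three anchor levels; the sketch below anchors the loudest presentation at full scale, and the file name and exact anchor values are placeholders rather than the authors' parameters.

    % Sketch of the volume-calibration presentations: the same speech
    % token at three levels spanning 30 dB. Anchoring the loudest
    % presentation at full scale keeps all gains <= 1 and avoids
    % clipping. (File name and anchor values are assumptions.)
    [y, fs] = audioread('calibration_sentence.wav');  % hypothetical token
    gains_dB = [-30 -15 0];                 % quiet / normal / loud
    labels   = {'quiet voice', 'normal conversation', 'loud voice'};
    for k = 1:numel(gains_dB)
        g = 10^(gains_dB(k)/20);            % dB to linear amplitude gain
        fprintf('Adjust your volume so this sounds like a %s.\n', labels{k});
        sound(y * g, fs);                   % browser path in the web app
        pause(size(y,1)/fs + 1);            % wait for playback to finish
    end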
The task used BUG matrix speech with varying amounts of talker-level predictability, with forced-choice response options (buttons on the screen). The stimuli were spatialized using HRTFs, but all stimuli were co-located at 0-degree azimuth and elevation. The full experiment has 112 blocks, each with 12 trials. Each trial consists of the presentation of a 'Target' talker voice along with two simultaneous 'masker' voices, as the subjects had done previously in similar in-lab experiments. The Target talker always begins with the name 'Sue'; subjects must attend only to that voice and report what the talker said. The levels of the talkers could change, however, and attending to the name of the condition, which is displayed before each new block of trials, helps the subject focus on the appropriate cues.
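A single trial of this kind might be assembled as in the sketch below: each talker is scaled to its condition-specific level and convolved with the same 0-degree HRIR pair before mixing. The file name, variable names, per-talker gains, and HRIR format are assumptions for illustration, not the authors' code.

    % Sketch of assembling one co-located trial: a Target and two
    % simultaneous maskers, each scaled to its condition-specific level
    % and convolved with the same 0-degree azimuth/elevation HRIR pair.
    load('hrir_0az_0el.mat', 'hrirL', 'hrirR');      % hypothetical HRIRs
    talkers  = {targetSig, masker1Sig, masker2Sig};  % pre-loaded sentences
    gains_dB = [0, -3, -3];    % per-talker levels; varied by condition
    N = max(cellfun(@numel, talkers)) + numel(hrirL) - 1;
    mixL = zeros(N, 1);
    mixR = zeros(N, 1);
    for k = 1:numel(talkers)
        s  = talkers{k}(:) * 10^(gains_dB(k)/20);    % apply talker level
        yL = conv(s, hrirL(:));                      % left-ear filtering
        yR = conv(s, hrirR(:));                      % right-ear filtering
        mixL(1:numel(yL)) = mixL(1:numel(yL)) + yL;
        mixR(1:numel(yR)) = mixR(1:numel(yR)) + yR;
    end
    % A fixed output scale (rather than per-trial peak normalization)
    % preserves the level differences the experiment manipulates.
    trial = 0.1 * [mixL mixR];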
The total time to complete the experiment was approximately 6 hours, usually done in 1-hour sessions whenever the subject desired. We also collected information about each subject's testing setup, including the type of computer (Mac/PC, laptop/desktop), browser, and headphone make/model.
Participants were paid via Venmo and confirmed receipt of payment with a Qualtrics form.
Conclusions / Impressions:
Given that all stimuli were speech, were co-located at 0-degree azimuth, and spanned a limited dynamic range of levels, we felt that this task was well suited for remote testing. The task could even have been done monaurally and was not expected to be substantially affected by headphone quality.
The participants who were run seemed to like the freedom of testing at their leisure and reported no major problems with the platform itself. They commented that the experiment ran just like the similar tasks in which they had previously participated in the lab.
Thus far the data seem reasonable, and we plan to run in-lab testing for comparison with the remote results.
Contact:
Andrew Byrne, ajbyrne@bu.edu, Boston University