Examples using fully online / browser-based platforms

Multipart anonymous psychoacoustics (Purdue U.)

Questions of interest

Although we were interested in remote testing for large-N studies prior to the COVID-19 outbreak, our efforts were significantly hastened by the pandemic. Our present goal is to perform studies of supra-threshold hearing in large anonymous cohorts of adult subjects with (putatively) normal hearing status. To this end, we combined multiple pre-existing free and open-source platforms and have conducted some initial studies with encouraging results. In the long run, we hope to develop this rudimentary infrastructure further and expand to studying clinical populations.

Infrastructure setup

We decided to focus on browser-based testing. Individual tasks are created using jsPsych, a free and open-source JavaScript library for running behavioral experiments in a web browser. Tasks start with a set of instruction screens, followed by a volume-setting screen that instructs subjects to start at a low volume (10–20% of their computer's maximum) and adjust upward to a clearly audible and comfortable level while a calibration sound plays. The stimuli for all other audio trials are presented at a level within 6 dB of this calibration sound (i.e., relative calibration only). Each audio trial involves stimulus presentation and requests a button-click response (classic nAFC style). Optionally, feedback is provided.
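Because absolute presentation levels are unknown on a subject's computer, stimulus amplitudes can only be scaled relative to the participant-set calibration level. A minimal sketch of such relative scaling (the function name, the parameter names, and the symmetric ±6 dB cap are our illustrative assumptions, not code from the actual tasks):

```python
# Hypothetical sketch: convert a desired dB offset relative to the
# calibration sound into a linear amplitude gain, capped at 6 dB.
def stimulus_gain(offset_db, max_offset_db=6.0):
    """Return the linear gain for a stimulus presented `offset_db` dB
    relative to the calibration sound (clamped to +/- max_offset_db)."""
    offset_db = max(-max_offset_db, min(offset_db, max_offset_db))
    return 10 ** (offset_db / 20.0)
```

Keeping all stimuli within a small window around the calibration level helps ensure audibility and comfort without any knowledge of the subject's hardware.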

In brainstorming study designs within our lab, we realized we needed more than a single jsPsych-based task. In particular, we wanted to combine:

  • A consent page
  • A demographic survey
  • Multiple individual audio tasks implemented using jsPsych, with dynamic constraints on task sequence (e.g., present task #2 if subject scores more than Y% in condition 1 of task #1, else present task #3)
  • Debriefing pages that track compensation amounts for each subject based on their particular trajectory through the study
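The dynamic-constraint example in the third item can be sketched as a simple branching rule (the threshold value and task names below are placeholders standing in for the "Y%" and task numbers of the example, not our actual study parameters):

```python
# Hypothetical branching rule of the kind described above.
def next_task(task1_condition1_pct, threshold_pct=70.0):
    """Present task #2 if the subject scores more than `threshold_pct`%
    in condition 1 of task #1, else present task #3."""
    return "task2" if task1_condition1_pct > threshold_pct else "task3"
```

In our setup, rules of this kind are evaluated on the server between tasks, so the subject's browser never sees the full study tree.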

Complex study flow of this type was made possible using Django, a free and open-source Python-based framework for developing web applications and controlling server behavior (the so-called "back-end" logic that complements our "front-end" jsPsych JavaScript). The Django application was set up to serve the consent form, the survey pages, and the individual jsPsych task pages as the subject proceeded through the study. Pages where lab members can upload tasks and create studies were also set up. Pages other than jsPsych tasks were created using simple HTML and styled using Bootstrap.

For the jsPsych tasks, the Django app was also set up to automatically extract single-trial data as it was generated within jsPsych, write it to a SQLite database on our server, and perform calculations to decide task flow and compensation. Note that Django comes with functionality for working with databases (knowledge of SQL is not necessary), and also comes with secure authentication capabilities that allow us to have logins for lab members and fine-grained permission control over creating tasks/studies and viewing results. The Django app is hosted on a virtual private server (VPS) rented from Linode.com and served by Apache2 on Ubuntu Linux. Communications between our server and subject browsers are encrypted using free SSL certificates provided by Let's Encrypt.
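The single-trial extraction step can be sketched as follows. This is an illustrative, framework-free version using Python's built-in sqlite3 module; the table layout and JSON field names are our assumptions, and the actual app uses Django's ORM rather than raw SQL:

```python
# Sketch: parse jsPsych trial data (serialized as a JSON array) and
# write one row per trial to a SQLite database.
import json
import sqlite3

def store_trials(db_path, subject_id, trials_json):
    """Insert each jsPsych trial as a row; returns the open connection."""
    trials = json.loads(trials_json)
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS trial "
        "(subject TEXT, trial_index INTEGER, response TEXT, correct INTEGER)"
    )
    con.executemany(
        "INSERT INTO trial VALUES (?, ?, ?, ?)",
        [(subject_id, t["trial_index"], t["response"], int(t["correct"]))
         for t in trials],
    )
    con.commit()
    return con
```

Storing data trial-by-trial (rather than only at task completion) means partial data survive dropped connections, which matters for anonymous online cohorts.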

Our Django app, which includes a "demo" study, can be viewed here: https://www.snaplabonline.com. We are also happy to share the code via GitHub.

Participant Recruitment and Pre-Screening

Participants were recruited anonymously via Prolific.co. Researchers can post web-based tasks on Prolific, and anyone can sign up to be a participant. Individuals who sign up for participant accounts are asked to answer a series of "About You" questions. Prolific allows researchers to pre-screen participants based on their responses to different items in the "About You" section. Our pre-screening criteria included: being US/Canada residents and native speakers of English (because we use speech stimuli spoken with North-American accents); being 18–55 years of age; having participated in at least 40 other studies previously on Prolific; having more than 90% of their previous Prolific submissions approved (Prolific allows researchers to reject participant submissions when there is clear evidence of non-compliance with instructions); and answering "No" to the question "Do you have any hearing loss or hearing difficulties?".

Study Flow

Lab members can create individual jsPsych tasks and surveys and string them together with decision rules into a study sequence (a tree structure) within the Django app. Thus far, the most data we have is for our entry study, which we use to filter participants for other studies. Once our entry study sequence is created within the Django app, a participant link is generated for posting on Prolific. When subjects click on this link, the Django app automatically records the participant's anonymized Prolific ID. If the subject did not provide consent within the past 7 days (e.g., by participating in another study from our lab), a consent form is displayed first. Upon receipt of consent, the app serves a demographic survey page that re-verifies that subjects meet our pre-screening criteria and includes additional demographic questions (e.g., race, ethnicity) as well as questions about hearing status, neurological status, music training, etc. Upon completion of the survey, a headphone screening task is served (adapted from Woods et al., 2017). If subjects do not pass the screening task, they are shown a debrief page with instructions to "return" the study on Prolific and details of the partial compensation they will receive. If they pass the headphone screening, they are then served a speech-in-babble task based on the Modified Rhyme Test (House et al., 1963). Completion of this task concludes the entry study. The subjects are debriefed and provided with a URL that, when clicked, submits a completion code back to Prolific.
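The consent-freshness check at the start of this flow can be sketched as below (the function and variable names are ours for illustration; only the 7-day window comes from the description above):

```python
# Sketch: show the consent form only if no consent was recorded
# for this Prolific ID within the past 7 days.
from datetime import datetime, timedelta

def needs_consent(last_consent, now, window_days=7):
    """Return True if the consent form should be displayed."""
    if last_consent is None:  # no consent on record for this ID
        return True
    return (now - last_consent) > timedelta(days=window_days)
```

Keying this check to the anonymized Prolific ID lets returning subjects skip straight to the survey without the lab ever handling identifying information.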

Subject payments

A lab member can log in to Prolific to approve participant submissions based on the completion codes and by cross-checking with the log pages in the Django app. Upon approval, Prolific pays the specified compensation amount to the subject and deducts it from our account. For participants who fail the headphone check task and "return" the study without a completion code, the partial compensation amount calculated by the Django app is paid manually via the "bonus" feature within Prolific. Thus, subject payments are handled anonymously.

(Very) Preliminary Results/Impressions

Thus far, about 30% of our subjects fail the headphone screening task, consistent with the literature (Milne et al., 2020). We include catch trials to obtain data-quality metrics, and use jsPsych's "blur" event tracking to exclude subjects showing poor engagement. Our limited experience so far suggests that median performance and across-individual variance are roughly on par with in-person data from our lab and with the literature, which is encouraging.
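An engagement-screening rule of this sort can be sketched as follows. jsPsych records browser "blur" events (the participant clicking away from the task tab); the specific thresholds and the combination with catch-trial accuracy below are illustrative placeholders, not our actual exclusion criteria:

```python
# Hypothetical exclusion rule combining blur-event counts with
# catch-trial accuracy (thresholds are placeholders).
def exclude_subject(n_blur_events, catch_correct, n_catch,
                    max_blurs=3, min_catch_pct=80.0):
    """Flag a subject for exclusion if they switched away from the task
    too often or performed poorly on easy catch trials."""
    catch_pct = 100.0 * catch_correct / n_catch
    return n_blur_events > max_blurs or catch_pct < min_catch_pct
```

Because these metrics are computed per subject on the server, exclusions can be applied uniformly before any group-level analysis.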

Prolific also allows researchers to conduct multi-part/longitudinal studies by specifying a "custom allowlist" of participants to invite. We are testing the use of this feature for filtering participants for other studies based on their responses in our entry study.

Contact for more information: Hari Bharadwaj.

[UPDATE] A pre-print with more details is now available:

Mok, B. A., Viswanathan, V., Borjigin, A., Singh, R., Kafi, H. I., & Bharadwaj, H. M. (2021). Web-based Psychoacoustics: Hearing Screening, Infrastructure, and Validation. bioRxiv 2021.05.10.443520; doi: https://doi.org/10.1101/2021.05.10.443520