Examples using fully online / browser-based platforms
Multipart anonymous psychoacoustics (Purdue U.)
Questions of interest
Although we were interested in remote testing for large-N studies before the COVID-19 outbreak, our efforts were significantly hastened by the pandemic. Our present goal is to perform supra-threshold hearing studies on large anonymous cohorts of adult subjects with (putatively) normal hearing. To this end, we combined multiple pre-existing free and open-source platforms and have conducted some initial studies with encouraging results. In the long run, we hope to develop this rudimentary infrastructure further and expand to studying clinical populations.
In brainstorming study designs within our lab, we realized we needed more than a single jsPsych-based task. In particular, we wanted to combine:
- A consent page
- A demographic survey
- Multiple individual audio tasks implemented using jsPsych, with dynamic constraints on task sequence (e.g., present task #2 if subject scores more than Y% in condition 1 of task #1, else present task #3).
- Debriefing pages that track compensation amounts for each subject based on their particular trajectory through the study
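The dynamic task-sequencing constraint above can be sketched as a small decision tree. This is an illustrative Python sketch only, assuming a pass/fail branching rule; the class names, the `Y` threshold, and the task labels are placeholders, not the actual Django models.

```python
# Hypothetical sketch of the branching rule described above; names and
# thresholds are illustrative placeholders, not our app's actual code.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaskNode:
    """One node in the study-sequence tree."""
    name: str
    # Maps an outcome label (e.g., "pass"/"fail") to the next node, if any.
    children: dict = field(default_factory=dict)

def classify(score_pct: float, threshold_pct: float) -> str:
    """Outcome label based on percent-correct in the relevant condition."""
    return "pass" if score_pct > threshold_pct else "fail"

# Example rule: present task #2 if the subject scores more than Y% in
# condition 1 of task #1, else present task #3.
Y = 70.0
task2 = TaskNode("task2")
task3 = TaskNode("task3")
task1 = TaskNode("task1", children={"pass": task2, "fail": task3})

def next_task(current: TaskNode, score_pct: float) -> Optional[TaskNode]:
    """Follow the edge selected by the subject's score."""
    return current.children.get(classify(score_pct, Y))
```

With `Y = 70`, a score of 85% routes the subject to `task2`, and a score of 60% routes them to `task3`.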
Our Django app, which includes a "demo" study, can be viewed here: https://www.snaplabonline.com. We are also happy to share the code via GitHub.
Participant Recruitment and Pre-Screening
Participants were recruited anonymously via Prolific.co. Researchers can post web-based tasks on Prolific, and anyone can sign up to be a participant. Individuals who sign up for participant accounts are asked to answer a series of "About You" questions, and Prolific allows researchers to pre-screen participants based on their responses to those items. Our pre-screening criteria were:
- US or Canada residents who are native speakers of English (because our speech stimuli are spoken with North-American accents)
- 18–55 years of age
- Participated in at least 40 previous studies on Prolific
- More than 90% of previous Prolific submissions approved (Prolific allows researchers to reject submissions when there is clear evidence of non-compliance with instructions)
- Answered "No" to the question "Do you have any hearing loss or hearing difficulties?"
Lab members can create individual jsPsych tasks and surveys and string them together with decision rules into a study sequence (a tree structure) within the Django app. Thus far, most of our data come from our entry study, which we use to filter participants for other studies. Once the entry study sequence is created within the Django app, a participant link is generated for posting on Prolific. When subjects click on this link, the Django app automatically records the participant's anonymized Prolific ID. If the subject has not provided consent within the past 7 days (e.g., by participating in another study from our lab), a consent form is displayed first. Upon receipt of consent, the app serves a demographic survey that re-verifies that subjects meet our pre-screening criteria and includes additional demographic questions (e.g., race, ethnicity) as well as questions about hearing status, neurological status, music training, etc. Upon completion of the survey, a headphone screening task is served (adapted from Woods et al., 2017). Subjects who fail the screening are shown a debrief page with instructions to "return" the study on Prolific and details of the partial compensation they will receive. Subjects who pass are then served a speech-in-babble task based on the Modified Rhyme Test (House et al., 1963), which concludes the entry study. They are then debriefed and provided with a URL that, when clicked, submits a completion code back to Prolific.
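The 7-day consent window described above amounts to a simple timestamp comparison on the server side. The following is a minimal sketch assuming the app stores the time of the subject's most recent consent; the function and variable names are our illustration, not the app's actual code.

```python
# Minimal sketch of the 7-day consent-freshness check; names are
# illustrative assumptions, not the actual Django app's code.
from datetime import datetime, timedelta
from typing import Optional

CONSENT_WINDOW = timedelta(days=7)

def needs_consent(last_consent: Optional[datetime], now: datetime) -> bool:
    """Show the consent form unless consent was recorded within the window."""
    return last_consent is None or (now - last_consent) > CONSENT_WINDOW
```

A first-time participant (no stored consent) always sees the form; a returning participant who consented, say, five days earlier skips straight to the survey.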
A lab member can log in to Prolific to approve participant submissions based on the completion codes and by cross-checking against the log pages in the Django app. Upon approval, Prolific deducts the specified compensation amount from our account and pays the subject. For participants who fail the headphone check and "return" the study without a completion code, the partial compensation amount calculated by the Django app is paid manually via the "bonus" feature within Prolific. Thus, all subject payments are made anonymously.
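The trajectory-based partial compensation mentioned above can be thought of as summing a per-task rate over the tasks the subject actually completed. This is a hypothetical sketch; the task names and dollar amounts below are made-up placeholders, not our actual rates.

```python
# Hypothetical sketch of trajectory-based compensation tracking; the task
# labels and dollar amounts are illustrative placeholders only.
PARTIAL_RATES = {
    "consent_and_survey": 0.50,  # consent form + demographic survey
    "headphone_check": 0.50,     # screening task
    "speech_in_babble": 2.00,    # main entry-study task
}

def compensation_owed(completed_tasks):
    """Total owed for the tasks a subject actually completed."""
    return sum(PARTIAL_RATES[task] for task in completed_tasks)
```

Under these placeholder rates, a subject who fails the headphone check after the survey would be owed the partial sum for the first two tasks, paid via the Prolific "bonus" feature.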
(Very) Preliminary Results/Impressions
Thus far, about 30% of our subjects fail the headphone screening task, consistent with the literature (Milne et al., 2020). We include catch trials to obtain data-quality metrics and use jsPsych's "blur" event tracking to exclude subjects showing poor engagement. Our limited experience so far suggests that median performance and across-individual variance are roughly on par with in-person data from our lab and with the literature, which is encouraging.
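jsPsych logs browser interaction events (including window "blur", i.e., the subject clicking away from the experiment tab), which can be combined with catch-trial accuracy into an exclusion rule. The following Python sketch illustrates one such rule applied server-side; the record layout and the specific thresholds are our assumptions for illustration, not validated cutoffs.

```python
# Illustrative engagement screen combining jsPsych "blur" interaction
# events with catch-trial accuracy; thresholds are assumed placeholders.
def exclude_for_engagement(interaction_events, catch_hits, n_catch,
                           max_blurs=5, min_catch_accuracy=0.8):
    """Return True if the subject should be excluded for poor engagement."""
    n_blurs = sum(1 for ev in interaction_events if ev.get("event") == "blur")
    return n_blurs > max_blurs or (catch_hits / n_catch) < min_catch_accuracy
```

For example, a subject with two blur events and 9/10 catch trials correct would be retained, while one with 5/10 catch trials correct would be excluded.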
Prolific also allows researchers to conduct multi-part/longitudinal studies by specifying a "custom allowlist" of participants to invite. We are testing this feature for filtering participants into follow-up studies based on their responses in our entry study.
Contact for more information: Hari Bharadwaj.
[UPDATE] A pre-print with more details is now available:
Mok, B. A., Viswanathan, V., Borjigin, A., Singh, R., Kafi, H. I., & Bharadwaj, H. M. (2021). Web-based Psychoacoustics: Hearing Screening, Infrastructure, and Validation. bioRxiv 2021.05.10.443520; doi: https://doi.org/10.1101/2021.05.10.443520