In early June, an article I wrote with co-authors about the Self-Referent Encoding Task (SRET) was published in the journal Psychological Assessment (Dainer-Best, Lee, Shumake, Yeager, & Beevers, in press). The article is entitled “Determining optimal parameters of the Self Referent Encoding Task: A large-scale examination of self-referent cognition and depression”, and you can find it on the publisher’s website by following that link. (If you are unable to access the publisher’s copy, an updated post-print ((The online database SHERPA defines a post-print as “The final version of an academic article or other publication—after it has been peer-reviewed and revised into its final form by the author.” In general, this means the version of the manuscript that will be published, but not in the journal’s style; the link here is to a version of the manuscript that I typeset in LaTeX.)) is available on the Open Science Framework, here.) I’m very pleased to see that this article has been published.
I think the conclusions we reach are worthwhile. The article is primarily about methodology in studying depression, and we took the opportunity to investigate a commonly used task on a larger-than-normal scale and across three samples (572 college students, 293 adults on Amazon Mechanical Turk, and 270 adolescents). We wanted to answer this question: what is the best way to link how people describe themselves (what we call “self-referential processing”) to the severity of their depressive symptoms? Researchers often use the Self-Referent Encoding Task (SRET) to measure self-referential processing. Here, we collected data on that behavioral task and measured depressive symptoms with a questionnaire. The SRET has a number of outcomes, from simple endorsements (do you describe yourself using positive and negative words?) to computational outcomes (what I elsewhere call “the rate of accumulation of information needed to make the decision about whether each word was self-referential”). Using a recursive validation procedure, we made a strong case for which SRET outcomes should be the focus of future work: the number of positive and negative words that individuals endorse as self-referential, and the processed responses that mark “accumulation of information”—but not the reaction times or recall of words.
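The basic logic of comparing SRET outcomes by how well they predict symptoms out of sample can be sketched in a few lines. This is a toy illustration with simulated data, not the paper’s actual recursive validation procedure or its real samples; all variable names and numbers here are invented for the example:

```python
# Hypothetical sketch: compare two SRET outcomes (negative-word endorsements
# vs. raw reaction times) as predictors of a depressive-symptom score, using
# k-fold cross-validated prediction error. Data are simulated, not real.
import random
import statistics

random.seed(1)

# Simulate: negative endorsements track symptoms; reaction times do not.
n = 300
neg_endorse = [random.gauss(10, 3) for _ in range(n)]
reaction_time = [random.gauss(700, 100) for _ in range(n)]
symptoms = [0.8 * x + random.gauss(0, 3) for x in neg_endorse]

def fit_line(x, y):
    """Ordinary least squares for a single predictor: returns (slope, intercept)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def cv_mse(x, y, k=5):
    """k-fold cross-validated mean squared prediction error for one predictor."""
    idx = list(range(len(x)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    errors = []
    for test in folds:
        held_out = set(test)
        train = [i for i in idx if i not in held_out]
        b, a = fit_line([x[i] for i in train], [y[i] for i in train])
        errors.extend((y[i] - (a + b * x[i])) ** 2 for i in test)
    return statistics.mean(errors)

mse_endorse = cv_mse(neg_endorse, symptoms)
mse_rt = cv_mse(reaction_time, symptoms)
print(f"CV MSE, negative endorsements: {mse_endorse:.1f}")
print(f"CV MSE, reaction times:        {mse_rt:.1f}")
```

Because the held-out error for the informative outcome stays near the noise floor while the uninformative one hovers around the total variance of the symptom score, the comparison picks out which task outcomes are worth keeping.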
Additionally, this article gave me the opportunity to act on a long-held interest in open science. I published our data and code on the Texas Data Repository, posted the pre-print when I submitted the article (since updated to the post-print linked above), and laid out several of our analyses in supplemental websites on GitHub. I also created a Shiny app that lets you visualize the correlations between the variables.