Steve studies the psychology of how humans interact with technology. Much of his work has focused on how social media incentivizes the creation of polarizing content, and why we believe in and spread misinformation. His current work focuses on how the impact of social media differs around the globe, and how we can use recent advances in artificial intelligence to improve psychological science.
Steve is a postdoctoral researcher in Psychology at New York University in the Social Identity and Morality Lab. He received his PhD from the University of Cambridge (Trinity College), where he was a Gates Cambridge Scholar. Previously, he studied Psychology and Symbolic Systems at Stanford University.
He has published in journals such as the Proceedings of the National Academy of Sciences, Nature Human Behaviour, Science Advances, Psychological Science, Trends in Cognitive Sciences, Perspectives on Psychological Science, PNAS Nexus, Nature Communications, and the Journal of Experimental Social Psychology. His research has been covered by outlets such as the New York Times, BBC, NBC, CBS's 60 Minutes, the Atlantic, the Wall Street Journal, the Guardian, and the Freakonomics podcast.
He has received grants from the Templeton World Charity Foundation, the Russell Sage Foundation, the Center for the Science of Moral Understanding, the AE Foundation, Google, Cambridge, and NYU. His thesis was awarded the Psychology of Technology Dissertation Fellowship, and was a finalist for the SESP dissertation award.
Steve is also very interested in science communication, and has written articles for the Washington Post, the Guardian, the LA Times, the Boston Globe, and Psychology Today. He also makes science communication TikToks under the name @stevepsychology, and has more than 1.1 million TikTok followers.
Steve is currently leading an international collaboration testing the causal impact of social media usage around the world. This collaboration involves more than 640 researchers residing in 76 countries, and has received $275,000 in total grant funding. You can learn more about this collaboration here: globalsocialmediastudy.com.
Download Steve’s CV.
You can contact Steve at firstname.lastname@example.org.
PhD Psychology, 2022
University of Cambridge
BA in Psychology, Minor in Symbolic Systems, 2018
Stanford University
Recent studies have documented the type of content that is most likely to spread widely, or go “viral” on social media, yet little is known about people’s perceptions of what goes viral or what should go viral. This is critical to understand because there is widespread debate about how to improve or regulate social media algorithms. We recruited a sample of participants that is nationally representative of the US population (according to age, gender, and race/ethnicity) and surveyed them about their perceptions of social media virality (n = 511). In line with prior research, people believe that divisive content, moral outrage, negative content, high-arousal content, and misinformation are all likely to go viral online. However, they reported that this type of content should not go viral on social media. Instead, people reported that many forms of positive content – such as accurate content, nuanced content, and educational content – are not likely to go viral, even though they think this content should go viral. Importantly, these perceptions were shared among most participants, and were only weakly related to political orientation, social media usage, and demographic variables. In sum, there is broad consensus around the type of content people think social media platforms should and should not amplify, which can help inform solutions for improving social media.
The social and behavioral sciences have been increasingly using automated text analysis to measure psychological constructs in text. We explore whether GPT, the large-language model underlying the artificial intelligence chatbot ChatGPT, can be used as a tool for automated psychological text analysis in various languages. Across 15 datasets (n = 31,789 manually annotated tweets and news headlines), we tested whether GPT-3.5 and GPT-4 can accurately detect psychological constructs (sentiment, discrete emotions, and offensiveness) across 12 languages (English, Arabic, Indonesian, and Turkish, as well as eight African languages including Swahili, Amharic, Yoruba and Kinyarwanda). We found that GPT performs much better than English-language dictionary-based text analysis (r = 0.66-0.75 for correlations between manual annotations and GPT-4, as opposed to r = 0.20-0.30 for correlations between manual annotations and dictionary methods). Further, GPT performs nearly as well as or better than several fine-tuned machine learning models, though GPT had poorer performance in African languages and in comparison to more recent fine-tuned models. Overall, GPT may be superior to many existing methods of automated text analysis, since it achieves relatively high accuracy across many languages, requires no training data, and is easy to use with simple prompts (e.g., “is this text negative?”) and little coding experience. We provide sample code for analyzing text with the GPT application programming interface. GPT and other large-language models may be the future of psychological text analysis, and may help facilitate more cross-linguistic research with understudied languages.
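The abstract above notes that GPT can classify text with simple prompts and little coding experience. As a minimal sketch of that workflow, the snippet below builds a one-word sentiment prompt, queries the OpenAI chat completions API, and normalizes the reply to a label. The model name, prompt wording, and label set are illustrative assumptions, not the paper's exact setup.

```python
def build_prompt(text: str) -> str:
    """Ask the model for a one-word sentiment label (illustrative wording)."""
    return (
        "Classify the sentiment of the following text as "
        "positive, negative, or neutral. Answer with one word.\n\n"
        f"Text: {text}"
    )

def parse_label(reply: str) -> str:
    """Normalize the model's free-text reply to one of three labels."""
    reply = reply.strip().lower()
    for label in ("positive", "negative", "neutral"):
        if label in reply:
            return label
    return "unknown"

def classify_sentiment(text: str, model: str = "gpt-4") -> str:
    """Send one text to the chat completions API and return a sentiment label.

    Requires the `openai` package and an OPENAI_API_KEY environment variable.
    """
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(text)}],
        temperature=0,  # deterministic output helps when comparing to annotations
    )
    return parse_label(response.choices[0].message.content)
```

In a study like the one described, `classify_sentiment` would be called once per tweet or headline, and the resulting labels correlated with manual annotations.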
The extent to which belief in (mis)information reflects lack of knowledge versus a lack of motivation to be accurate is unclear. Here, across four experiments (n = 3,364), we motivated US participants to be accurate by providing financial incentives for correct responses about the veracity of true and false political news headlines. Financial incentives improved accuracy and reduced partisan bias in judgements of headlines by about 30%, primarily by increasing the perceived accuracy of true news from the opposing party (d = 0.47). Incentivizing people to identify news that would be liked by their political allies, however, decreased accuracy. Replicating prior work, conservatives were less accurate at discerning true from false headlines than liberals, yet incentives closed the gap in accuracy between conservatives and liberals by 52%. A non-financial accuracy motivation intervention was also effective, suggesting that motivation-based interventions are scalable. Altogether, these results suggest that a substantial portion of people’s judgements of the accuracy of news reflects motivational factors.
Understanding how vaccine hesitancy relates to online behavior is crucial for addressing current and future disease outbreaks. We combined survey data measuring attitudes toward the COVID-19 vaccine with Twitter data in two studies (N1 = 464 Twitter users, N2 = 1,600 Twitter users) with preregistered hypotheses to examine how real-world social media behavior is associated with vaccine hesitancy in the United States (US) and the United Kingdom (UK). In Study 1, we found that following the accounts of US Republican politicians or hyper-partisan/low-quality news sites were associated with lower confidence in the COVID-19 vaccine—even when controlling for key demographics such as self-reported political ideology and education. US right-wing influencers (e.g. Candace Owens, Tucker Carlson) had followers with the lowest confidence in the vaccine. Network analysis revealed that participants who were low and high in vaccine confidence separated into two distinct communities (or “echo chambers”), and centrality in the more right-wing community was associated with vaccine hesitancy in the US, but not in the UK. In Study 2, we found that one’s likelihood of not getting the vaccine was associated with retweeting and favoriting low-quality news websites on Twitter. Altogether, we show that vaccine hesitancy is associated with following, sharing, and interacting with low-quality information online, as well as centrality within a conservative-leaning online community in the US. These results illustrate the potential challenges of encouraging vaccine uptake in a polarized social media environment.
According to recent work, subtly nudging people to think about accuracy can reduce the sharing of COVID-19 misinformation online (Pennycook et al., 2020). The authors argue that inattention to accuracy is a key factor behind the sharing of misinformation. They further argue that “partisanship is not, apparently, the key factor distracting people from considering accuracy on social media” (p. 777). However, our meta-analysis of data from this paper and other similar papers finds that partisanship is indeed a key factor underlying accuracy judgments on social media. Specifically, our meta-analysis suggests that the effectiveness of the accuracy nudge intervention depends on partisanship such that it has little to no effect for US conservatives or Republicans. This changes one of Pennycook and colleagues’ (2020) central conclusions by revealing that partisanship matters considerably for the success of this intervention. Further, since US conservatives and Republicans are far more likely to share misinformation than US liberals and Democrats (Guess et al., 2019; Lawson & Kakkar, 2021; Osmundson, 2021), this intervention may be ineffective for those most likely to spread fake news.
There has been growing concern about the role social media plays in political polarization. We investigated whether outgroup animosity was particularly successful at generating engagement on two of the largest social media platforms: Facebook and Twitter. Analyzing posts from news media accounts and US congressional members (n = 2,730,215), we found that posts about the political outgroup were shared or retweeted about twice as often as posts about the ingroup. Each individual term referring to the political outgroup increased the odds of a social media post being shared by 67%. Outgroup language consistently emerged as the strongest predictor of shares and retweets: the average effect size of outgroup language was about 4.8 times as strong as that of negative affect language, and about 6.7 times as strong as that of moral-emotional language – both established predictors of social media engagement. Language about the outgroup was a very strong predictor of “angry” reactions (the most popular reactions across all datasets), and language about the ingroup was a strong predictor of “love” reactions, reflecting ingroup favoritism and outgroup derogation. This outgroup effect was not moderated by political orientation or social media platform, but stronger effects were found among political leaders than among news media accounts. In sum, outgroup language is the strongest predictor of social media engagement across all relevant predictors measured, suggesting that social media may be creating perverse incentives for content expressing outgroup animosity.
Can attending live theatre improve empathy by immersing audience members in the stories of others? We tested this question across three field studies (n = 1622), including a pre-registered replication. We randomly assigned audience members to complete surveys either before or after seeing plays, and measured the effects of the plays on empathy, attitudes, and pro-social behavior. After, as compared to before, seeing the plays, people reported greater empathy for groups depicted in the shows, held opinions that were more consistent with socio-political issues highlighted in the shows, and donated more money to charities related to the shows. Seeing theatre also led participants to donate more to charities unrelated to the shows, suggesting that theatre’s effects on pro-sociality generalize to different contexts. Altogether, these findings suggest that theatre is more than mere entertainment; it can lead to tangible increases in empathy and pro-social behavior.