The social and behavioral sciences have been increasingly using automated text analysis to measure psychological constructs in text. We explore whether GPT, the large language model underlying the artificial intelligence chatbot ChatGPT, can be used as a tool for automated psychological text analysis in various languages. Across 15 datasets (n = 31,789 manually annotated tweets and news headlines), we tested whether GPT-3.5 and GPT-4 can accurately detect psychological constructs (sentiment, discrete emotions, and offensiveness) across 12 languages (English, Arabic, Indonesian, and Turkish, as well as eight African languages including Swahili, Amharic, Yoruba, and Kinyarwanda). We found that GPT performs much better than English-language dictionary-based text analysis (r = 0.66-0.75 for correlations between manual annotations and GPT-4, as opposed to r = 0.20-0.30 for correlations between manual annotations and dictionary methods). Further, GPT performs nearly as well as, or better than, several fine-tuned machine learning models, though its performance was weaker in African languages and relative to more recent fine-tuned models. Overall, GPT may be superior to many existing methods of automated text analysis, since it achieves relatively high accuracy across many languages, requires no training data, and is easy to use with simple prompts (e.g., “is this text negative?”) and little coding experience. We provide sample code for analyzing text with the GPT application programming interface. GPT and other large language models may be the future of psychological text analysis, and may help facilitate more cross-linguistic research with understudied languages.
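The abstract notes that text can be rated with simple prompts via the GPT application programming interface. The following is a minimal sketch of that workflow, assuming the `openai` Python package (v1.x) and an API key in the `OPENAI_API_KEY` environment variable; the model name, prompt wording, and function names are illustrative and not the authors' exact code.

```python
# Minimal sketch: zero-shot sentiment classification via the OpenAI API.
# Assumptions (not from the paper): `openai` package >= 1.0 is installed,
# OPENAI_API_KEY is set, and "gpt-4" is an available model name.

def build_prompt(text: str) -> str:
    """Construct a simple yes/no sentiment prompt like those described in the paper."""
    return f'Is the following text negative? Answer "yes" or "no".\nText: {text}'

def rate_sentiment(text: str) -> str:
    """Send the prompt to the chat completions endpoint and return the model's label."""
    from openai import OpenAI  # deferred import: prompt-building needs no API key
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": build_prompt(text)}],
        temperature=0,  # reduce randomness for annotation-style tasks
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    print(rate_sentiment("I can't believe they cancelled the show. Terrible."))
```

In practice, one would loop `rate_sentiment` over a dataset of tweets or headlines and compare the returned labels against manual annotations, as the paper does when computing correlations.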