The Center for Humans and Machines (CHM) at the Max Planck Institute for Human Development in Berlin conducts interdisciplinary science to understand, anticipate, and shape the major disruptions that digital media and Artificial Intelligence (AI) bring to the way we think, learn, work, play, cooperate, and govern. Our goal is to understand how machines are shaping human society today and how they may continue to shape it in the future. The Center is composed of an interdisciplinary, international, and diverse group of scholars and a science support team.
Internships will last between 6 and 12 weeks and start in summer 2026.
This summer, we are looking for motivated student interns who are excited about working at the intersection of computer science and social sciences. The following projects are on offer:
Project 1: LLM conversation ending
Our goal is to train a model that can recognize when a conversation has ended and then stop.
LLMs fall into a repetitive attractor state when used for repeated self-conversation. One theory is that this indicates a lack of self-motivated behavior on the part of the LLMs. A valid objection to this idea is that LLMs cannot indicate when they want to end a conversation. In this project, we aim to teach an LLM to produce a specialized token when it wishes to end a conversation. To do so, we need a dataset of conversation endings to train the model to associate an otherwise unused token with the end of a conversation. The LLM would then need to be fine-tuned without impairing its other capabilities. If successful, we will test the resulting LLM with human participants and in self-conversation to determine whether attractor states persist.
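The dataset-construction step might look like the following minimal sketch. The token name `<|endconvo|>` and the data format are illustrative assumptions, not the project's actual choices:

```python
# Sketch: append a hypothetical end-of-conversation token to training
# examples so a fine-tuned model can learn to emit it when a dialogue
# is over. Token name and data format are illustrative assumptions.

END_TOKEN = "<|endconvo|>"  # an otherwise unused token


def build_training_example(turns, conversation_over):
    """Join dialogue turns into one text; append END_TOKEN only when
    the conversation has genuinely ended, so the model learns the
    association between endings and the token."""
    text = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in turns)
    if conversation_over:
        text += f"\n{END_TOKEN}"
    return text


dialogue = [
    ("A", "Thanks for the help!"),
    ("B", "You're welcome. Goodbye!"),
]
ended = build_training_example(dialogue, conversation_over=True)
ongoing = build_training_example(dialogue, conversation_over=False)
```

In a real pipeline, the token would additionally be registered with the model's tokenizer and the embedding matrix resized before fine-tuning, so that it is treated as a single unit rather than split into subwords.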
Project 2: Technical Research Assistant for AI Companion for Loneliness | Can AI companions reduce loneliness in older adults?
This project conducts a randomized controlled trial of an AI companion for older adults, offering a summer research assistant hands-on experience in interdisciplinary psychiatry–psychology–computer science research, including data collection, data analysis, and technical support for a web interface and Python backend.
Project 3: (Why) do people take moral advice from chatbots?
We are running a study on the effects of chatbot advice on moral judgments, and we have existing data from one experiment. The intern would familiarize themselves with the research question and design, analyze the data (behavioral as well as natural-language data), review work published on the topic since we ran this first study, and propose follow-up experiments that we could conduct together.
Project 4: Participatory AI Governance: Collective Artificial Personalities
Contemporary AI systems are predominantly aligned through centralized, vendor-driven pipelines, resulting in homogenized normative profiles that reflect few of the communities they serve. This project investigates whether community-grounded governance, including shared constitutions and structured feedback, can enable AI systems to credibly represent group identities. It focuses on the design, implementation, and empirical evaluation of participatory mechanisms that allow communities to shape model behavior.
Project 5: Deep Learning Intern: Collective Artificial Personalities
Post-training methods play a central role in shaping the behavioral profiles of large language models, including their response style, normative orientation, tone, and emotional expression. This project investigates how collective personalities can be encoded, stabilized, and systematically evaluated within LLMs using participatory post-training pipelines. Particular emphasis is placed on multi-persona architectures and training procedures designed to achieve the representational alignment of specific groups.
Project 6: Deep Learning Intern: Reinforcement Learning for Artificial Pedagogy
Optimizing AI systems for immediate task helpfulness often produces answer-giving behaviors that inhibit long-term student learning. This project investigates how pedagogical strategies can be discovered and optimized by shifting the reinforcement objective from task success to student generalization. Using a teacher–student framework, it examines how teacher models evolve when rewarded for the transfer performance of a less-capable student model on unseen, structurally related tasks.
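The shift in reinforcement objective described above can be sketched as a toy reward function. The task representation, the student model, and the scoring rule below are simplified assumptions for illustration only:

```python
# Sketch: reward a teacher model not for solving tasks itself, but for
# how well a student generalizes to unseen, related tasks after the
# teacher's instruction. Tasks and scoring are toy assumptions.


def student_accuracy(student_skill, tasks):
    """Fraction of tasks the student solves; a task counts as solved
    if its difficulty does not exceed the student's skill level."""
    solved = sum(1 for difficulty in tasks if difficulty <= student_skill)
    return solved / len(tasks)


def teacher_reward(skill_gain, transfer_tasks, base_skill=0.3):
    """The teacher's reward is the student's post-instruction accuracy
    on held-out transfer tasks, not the teacher's own task success."""
    return student_accuracy(base_skill + skill_gain, transfer_tasks)


transfer = [0.2, 0.5, 0.8]  # difficulties of held-out transfer tasks

# A teacher whose instruction improves the student earns a higher
# reward than one that merely gives answers without teaching.
r_teaching = teacher_reward(skill_gain=0.5, transfer_tasks=transfer)
r_answering = teacher_reward(skill_gain=0.0, transfer_tasks=transfer)
```

The design point is that the reward signal depends only on the student's performance on tasks the teacher never demonstrated, which penalizes answer-giving strategies that fail to transfer.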
There is no fixed application deadline. Applications will be reviewed on a rolling basis, and positions may be filled as suitable candidates are identified. Early applications are strongly encouraged.
Please note that applicants who require a visa or work authorization to intern in Germany should apply well in advance, as the visa process may take several months.
Please apply with a CV (without a photo) and your latest transcript.
The data protection declaration for the processing of personal data within the scope of your application can be found here: https://www.mpib-berlin.mpg.de/1589569/en_infos_bewerbung.pdf
Diversity and severe disability:
The Max Planck Society strives for gender equality and diversity, because diversity, equality, and inclusion enrich our community and promote scientific excellence. We have therefore set ourselves the goal of increasing the proportion of women in areas where they are underrepresented and employing more people with severe disabilities. With this in mind, we expressly welcome applications from people who are often underrepresented in the workplace due to characteristics such as gender identity, disability, religion, ethnicity, and age. Our website gives you an impression of how we understand and live diversity and what opportunities we offer to respond to your individual needs: www.mpib-berlin.mpg.de/diversity
ID: 202709