The Imperative for Prosocial AI

Abstract: As artificial intelligence transitions from experimental technology to societal infrastructure, we face a critical choice: will AI amplify human flourishing or exacerbate existing inequalities? Unlike in previous industrial revolutions, which were driven primarily by commercial interests, we now have the opportunity to embed prosocial values into AI systems from the outset. This requires addressing four interconnected challenges. First, we must design AI systems that prioritize human wellbeing over narrow optimization metrics, recognizing that technology amplifies the values and intentions of its creators. Second, we must guard against agency decay, the gradual erosion of human decision-making capacity as we become overly reliant on AI systems. Third, we need hybrid intelligence that combines AI’s computational strengths with human creativity, intuition, and moral reasoning. Finally, we must develop double literacy: both human literacy (understanding ourselves and social dynamics) and algorithmic literacy (comprehending how AI systems work, their biases, and limitations). The path forward requires a mindset with four characteristics: Awareness of our current choice point, Appreciation for the complementary strengths of human and machine intelligence, Acceptance of our responsibility to shape AI development, and Accountability for ensuring prosocial outcomes. One might call this the A-Frame mindset. We cannot expect the technology of tomorrow to be better than the humans of today; the responsibility lies with us to determine what ends AI should serve.
As artificial intelligence transitions from experimental curiosity to societal infrastructure, we stand at a singular moment in time. The choices we make today about AI development and deployment will fundamentally shape the trajectory of human civilization for generations to come. Previous industrial revolutions prioritized efficiency and profit over human welfare. Learning the lessons of the past, we now have the opportunity, and the responsibility, to move beyond the classic zero-sum schema and embed prosocial values into the very DNA of a technology that is bound to shape the coming century.
Breaking the Historical Pattern
The first three industrial revolutions followed a predictable pattern: technological advancement driven primarily by commercial interests, with social consequences addressed as afterthoughts. The steam engine, electricity, and computers all brought tremendous benefits, but also created new forms of inequality, environmental degradation, and social disruption that we’re still grappling with today.
AI represents something fundamentally different. Its capacity to amplify human capabilities, both positive and negative, is extraordinary. But as we push the boundaries of AI sophistication, two aspects are worth keeping in mind:
Human Intention – Human intention determines the values and priorities that drive AI development: profit maximization versus human flourishing.
Technological Neutrality – Despite their seeming neutrality, algorithms, training data, and system architectures embody human intentions and history in concrete technical implementations.
We cannot expect the technology of tomorrow to be better than the humans of today.
Yet there is potential for an intentional positive shift in which prosocial values are embedded into algorithms. Research demonstrates that AI systems can exhibit distinct behavioral patterns, including prosocial tendencies, when appropriately designed and trained. This suggests that we can deliberately shape AI to bring out the best in people rather than simply optimizing for engagement or profit.
Prosocial AI doesn’t mean sacrificing commercial viability. Instead, it requires reframing success metrics to include human flourishing, environmental sustainability, and social cohesion alongside traditional business outcomes. Companies that embrace this approach will likely find themselves better positioned for long-term success as consumers and regulators increasingly demand technology that serves broader human interests. Investing in prosocial AI will turn out to be a win-win-win-win for the humans we are, the communities we belong to, the countries we are part of, and the planet we depend on.
The Agency Decay Challenge
As we navigate the transition from AI experimentation to integration and eventual reliance, we face a critical but underappreciated threat: agency decay. This phenomenon occurs when humans become overly dependent on AI systems, gradually losing their capacity for independent decision-making and critical judgment.
Studies reveal that overreliance on AI occurs when users accept AI-generated recommendations without question, leading to errors in task performance and a gradual erosion of human cognitive abilities. This isn’t merely about convenience; it’s about preserving our fundamental human capacity for autonomous thought and action.
The implications are far-reaching. If we abdicate our decision-making to AI systems, we risk becoming passive consumers of algorithmic outputs rather than active agents in our own lives. This is particularly concerning given evidence that AI assistance can accelerate skill decay without users’ awareness.
Agency decay represents a new form of learned helplessness for the digital age. The antidote isn’t to reject AI, but to engage with it mindfully, treating our cognitive and decision-making abilities as muscles that require regular exercise. We must design AI systems that enhance rather than replace human judgment, and cultivate practices that preserve our autonomy even as we benefit from AI’s capabilities.
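To make this concrete, here is a minimal sketch of what judgment-preserving design could look like in practice: an interaction pattern that asks for the user’s own answer before revealing the machine’s. The ai_suggest helper and the interaction flow are illustrative assumptions, not a reference implementation.

```python
# A "judgment-first" interaction pattern: the user commits to an answer
# before seeing the AI suggestion, keeping their own reasoning in the loop.
# ai_suggest() is a hypothetical placeholder for any model call.

def ai_suggest(question: str) -> str:
    # Placeholder: a real system would call a model API here.
    return "AI recommendation for: " + question

def judgment_first(question: str) -> dict:
    """Collect the user's own answer before revealing the AI suggestion."""
    own_answer = input(f"{question}\nYour answer first: ")
    suggestion = ai_suggest(question)
    print(f"AI suggestion: {suggestion}")
    final_answer = input("Keep or revise your answer: ")
    # Recording both answers makes growing reliance on the AI measurable.
    return {"own": own_answer, "ai": suggestion, "final": final_answer}
```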
The Hybrid Intelligence Imperative
The future doesn’t belong to either artificial intelligence or natural intelligence alone. What happens next depends on their synthesis. Hybrid intelligence represents the optimal integration of AI’s computational power with human creativity, intuition, and moral reasoning.
Each form of intelligence brings unique strengths. AI excels at processing vast amounts of data, recognizing complex patterns, and performing consistent analysis at scale. Natural intelligence contributes contextual understanding, creative problem-solving, emotional intelligence, and the ability to navigate ambiguous ethical terrain.
Research on human-AI collaboration indicates that the most effective systems harness these complementary strengths, rather than treating AI as a replacement for human capabilities. This requires designing AI systems that are transparent about their limitations, invite human oversight, and create meaningful opportunities for human input throughout the decision-making process.
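One common way to operationalize this division of labor is a confidence-gated workflow, sketched below under stated assumptions: the model_score helper and the 0.9 threshold are hypothetical placeholders, and a real system would calibrate both.

```python
# A human-in-the-loop routing pattern: the model decides high-confidence
# cases at scale, while ambiguous cases go to a human reviewer.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    decided_by: str  # "model" or "human"

def model_score(case: str) -> tuple[str, float]:
    # Hypothetical placeholder for any classifier returning (label, confidence).
    return ("approve", 0.62)

def hybrid_decide(case: str, threshold: float = 0.9) -> Decision:
    label, confidence = model_score(case)
    if confidence >= threshold:
        # The machine's strength: consistent analysis at scale.
        return Decision(label, decided_by="model")
    # The human's strength: contextual and ethical judgment on edge cases.
    human_label = input(f"Review '{case}' (model leaned '{label}'): ")
    return Decision(human_label, decided_by="human")
```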
For organizations, this means moving beyond simple automation toward hybrid intelligence that creates entirely new forms of capability through human-machine collaboration. The present is too complex for either-or thinking. Curating a society where everyone can thrive means leveraging the best of all our assets. Rather than reaching for the labels of artificial, natural, and augmented intelligence, this is the time to cultivate amended intelligences. This approach not only preserves human agency but also creates more robust and adaptable systems that can handle unexpected situations and ethical dilemmas.
The Double Literacy Challenge
Successfully navigating the AI-integrated future requires double literacy: proficiency in both human and algorithmic domains. This goes far beyond basic digital literacy to encompass a holistic understanding of the dynamics that underpin and influence our interactions with AI systems, and of the consequences thereof.
Human literacy involves understanding ourselves and our social dynamics with exceptional clarity. This includes recognizing our cognitive biases, understanding how our emotions influence our decisions, and developing the self-awareness necessary to know when to trust our intuition versus when to rely on data-driven analysis.
Algorithmic literacy requires understanding not just how to use AI tools, but how they generate their outputs, what data they’re trained on, and where they’re most likely to fail or produce biased results. This knowledge is essential for maintaining appropriate skepticism and knowing when to question or override AI recommendations.
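Algorithmic literacy can also be practiced hands-on. The sketch below shows one such exercise, a counterfactual probe that varies a single attribute in a prompt to see whether it sways the output; model_reply and the example prompts are hypothetical stand-ins, not a specific tool’s API.

```python
# A counterfactual probe: swap one attribute at a time and compare replies.
# model_reply() is a hypothetical placeholder; with a real model call,
# differing outputs across variants would warrant a closer look for bias.

def model_reply(prompt: str) -> str:
    # Placeholder: a real system would call a model API here.
    return "model output for: " + prompt

def counterfactual_probe(template: str, variants: list[str]) -> dict:
    """Fill the template with each variant and collect replies side by side."""
    replies = {v: model_reply(template.format(attr=v)) for v in variants}
    if len(set(replies.values())) > 1:
        print("Outputs differ across variants; inspect them for bias.")
    return replies

# Example: does a hiring summary shift when only the name changes?
counterfactual_probe(
    "Summarize the fit of applicant {attr} for the analyst role.",
    ["Emily", "Lakisha"],
)
```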
Current research indicates that many users lack the skills necessary to critically evaluate AI outputs, making them vulnerable to accepting incorrect or biased information. Educational institutions and organizations must prioritize developing these dual competencies to prepare people for an AI-integrated world.
Designing for Human Flourishing
Creating prosocial AI requires intentional design choices at every level — from the datasets used for training to the user interfaces that shape daily interactions. Academic research on prosocial behavior demonstrates that AI systems can be designed to encourage helping behaviors, cooperation, and other positive social outcomes.
This means moving beyond narrow optimization for engagement metrics toward broader measures of human wellbeing. AI systems should be designed to promote meaningful relationships, encourage learning and growth, support mental health, and contribute to community resilience.
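As a toy illustration of such reframed metrics, the sketch below scores a session with a weighted blend of engagement and wellbeing signals; the signal names, normalization, and weights are assumptions for illustration, not validated measures.

```python
# A composite success metric: wellbeing signals weighted alongside
# engagement, instead of optimizing engagement alone. All signal names
# and weights here are illustrative assumptions.

WEIGHTS = {
    "engagement": 0.25,  # e.g. time spent, interactions
    "learning":   0.25,  # e.g. skill-growth indicators
    "connection": 0.25,  # e.g. meaningful social exchange
    "rest":       0.25,  # e.g. absence of compulsive late-night use
}

def prosocial_score(signals: dict[str, float]) -> float:
    """Weighted average of signals normalized to the 0..1 range."""
    return sum(WEIGHTS[key] * signals.get(key, 0.0) for key in WEIGHTS)

# An engagement-heavy but otherwise hollow session still scores low:
print(prosocial_score({"engagement": 0.9, "learning": 0.1,
                       "connection": 0.2, "rest": 0.0}))  # ~0.3
```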
Prosocial AI also requires addressing the alignment problem: ensuring that AI systems’ goals and values reflect human values rather than simply optimizing for easily measured proxies. This is particularly important as AI systems become more autonomous and influential in shaping human behavior and social outcomes. However, to tackle the algorithmic alignment challenge, we need to address the human alignment angle first. Double alignment is needed, and it begins offline with the harmonization of human aspirations and actions, from intention to implementation.
The Path Forward: An A-Frame for Action
The transition to prosocial AI won’t happen automatically. It requires deliberate action from individuals, organizations, and society as a whole. The path forward requires a mindset with four characteristics: Awareness of our current choice point, Appreciation for the complementary strengths of human and machine intelligence, Acceptance of our responsibility to shape AI development, and Accountability for ensuring prosocial outcomes. Referred to as the A-Frame mindset, it provides a practical framework for the required transformation:
Awareness: Recognize that the choices we make about AI development and deployment today will shape the future of human civilization. Understand both the tremendous potential and the significant risks involved. Stay informed about AI developments and their implications for human flourishing.
Appreciation: Value the unique strengths that both artificial and natural intelligence bring to solving complex problems. Appreciate the complementary nature of human and machine capabilities rather than viewing them as competing forces. Recognize that the goal isn’t to replace human judgment but to augment it.
Acceptance: Accept that the AI revolution is already underway and that withdrawal isn’t a viable option. Embrace the challenge of thoughtfully integrating AI into our lives and institutions while preserving what makes us distinctly human. Accept responsibility for shaping this transition rather than being passive recipients of technological change.
Accountability: Hold ourselves and others accountable for the social impacts of AI systems. Demand transparency from AI developers about how their systems work and what values they embody. Take responsibility for maintaining our own cognitive abilities and decision-making skills even as we benefit from AI assistance.
The future of AI isn’t predetermined. We have the opportunity to create artificial intelligence that serves human flourishing rather than simply maximizing narrow metrics. But this requires sustained effort, thoughtful design, and a commitment to putting human wellbeing at the center of technological development. The choices we make today will determine whether AI becomes a tool for human flourishing or a source of severe inequality and alienation.
The time for action is now. The future of human civilization may well depend on our ability to create AI that brings out the best in us rather than simply amplifying our existing patterns of behavior. This is both our greatest challenge and our most beautiful opportunity.
[Figure: The Logic of Prosocial AI in a snapshot]

All rights reserved.
Reprinting or comparable use of works of the Institut für Sozialstrategie, even in excerpts, is permitted only with prior written consent.
Publications of the IfS undergo a peer-review process by expert colleagues and by the institute’s leadership. They reflect solely the personal views of the authors.