Summary

Introduction

Contemporary moral philosophy faces a profound challenge that strikes at the heart of how humanity should allocate its resources and attention. The conventional ethical framework that prioritizes immediate concerns and present-day welfare may be fundamentally misguided when confronted with the vast temporal scope of human potential. This philosophical investigation examines whether our primary moral obligations should extend far beyond the present moment to encompass the welfare of countless future generations who have yet to be born.

The central argument emerges from three interconnected premises that appear individually reasonable yet collectively revolutionary in their implications: future people possess the same moral worth as present individuals; the number of potential future humans could vastly exceed current populations; and present actions can meaningfully influence long-term outcomes across centuries or millennia. Through rigorous philosophical analysis, examination of population ethics, assessment of existential risks, and careful consideration of practical objections, this exploration challenges readers to fundamentally reconsider the temporal scope of moral consideration and the ultimate priorities that should guide human civilization.
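The scale premise can be made concrete with a back-of-envelope calculation. Every figure below is an illustrative assumption (a stable ten-billion-person population and a survival horizon typical of mammalian species), not a prediction from the book:

```python
# Back-of-envelope sketch of why potential future populations could
# dwarf the present one. All figures are assumptions chosen for
# illustration, not predictions from the text.

present_population = 8e9       # people alive today (approx.)
people_per_century = 10e9      # assumed stable future population,
                               # with roughly 100-year lifespans
survival_years = 1_000_000     # lifespan typical of a mammalian species

future_people = people_per_century * (survival_years / 100)
print(f"{future_people:.0e} potential future people")            # 1e+14
print(f"{future_people / present_population:,.0f}x the present")  # 12,500x
```

Even under these conservative assumptions, future people outnumber present people by four orders of magnitude, which is what gives the second premise its force.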

The Moral Foundation: Why Future People Deserve Equal Consideration

The philosophical foundation for extending moral consideration across time rests on a principle of temporal neutrality that challenges deeply embedded intuitions about moral priority. Just as geographic distance does not diminish the moral worth of distant strangers, temporal distance should not reduce our obligations to future people. This seemingly straightforward claim carries revolutionary implications for how societies should allocate resources between present needs and long-term investments.

The argument for temporal moral equality draws strength from examining what makes someone morally considerable in the first place. If moral status derives from the capacity to experience suffering and flourishing, or from possessing interests that can be frustrated or fulfilled, then the temporal location of these experiences appears morally irrelevant. The pain experienced by someone in the year 2200 will be just as real and morally significant as pain experienced today, regardless of the centuries separating these experiences.

Several objections challenge this temporal neutrality principle, each revealing important practical considerations while failing to undermine the core philosophical claim. The uncertainty objection holds that ignorance about future preferences and circumstances makes effective help impossible. However, this objection conflates uncertainty about specific details with complete ignorance about basic human needs and values that persist across historical periods. The reciprocity objection notes that future people cannot help present generations in return, potentially undermining the basis for moral obligation. Yet widely accepted duties to help distant strangers or protect those unable to reciprocate demonstrate that moral obligations need not depend on reciprocal relationships.

The discount objection suggests that some temporal preference may be rational given uncertainty about future circumstances and the possibility that future people will be better positioned to solve their own problems. This objection identifies legitimate reasons for modest discounting based on uncertainty while failing to justify dramatic reductions in moral consideration based solely on temporal distance. The assumption that future people will automatically be better off requires justification rather than acceptance as a default position.
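To see why the argument distinguishes modest uncertainty-based discounting from dramatic reductions in moral consideration, consider what a constant annual discount rate does over long horizons. This is a minimal sketch; the 1% and 3% rates are assumptions for illustration only:

```python
# Illustrative sketch: how a constant annual discount rate shrinks the
# weight given to future welfare. The rates and horizons are assumed
# for demonstration, not drawn from the original argument.

def discounted_weight(rate: float, years: int) -> float:
    """Present-day weight of one unit of welfare `years` in the future."""
    return 1.0 / (1.0 + rate) ** years

for rate in (0.01, 0.03):
    for years in (100, 200, 500):
        w = discounted_weight(rate, years)
        print(f"rate={rate:.0%}, years={years}: weight={w:.2e}")

# Even a "modest" 1% rate weights welfare 500 years out at roughly
# 0.7% of present welfare; at 3% the weight falls to about 4e-7.
# Constant time discounting thus implies near-total neglect of distant
# generations, which is why the text treats it as requiring
# justification rather than accepting it as a default.
```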

Historical evidence demonstrates that present actions can indeed influence distant future outcomes through the persistence of ideas, institutions, and cultural practices across generations. The abolition of slavery established moral principles that continue shaping human rights discourse centuries later, while constitutional frameworks and scientific methodologies established in previous eras continue generating benefits today. These examples illustrate how moral consideration for future people translates into concrete obligations to shape beneficial long-term trajectories.

Trajectory Changes: How Present Actions Shape Civilization's Long-Term Path

The concept of trajectory changes illuminates how relatively small present interventions might influence the entire future course of human civilization by altering fundamental values, institutions, or capabilities that persist and compound across generations. Unlike interventions that provide temporary benefits to specific individuals, trajectory changes modify the underlying direction of human development with effects that continue expanding over time.

Historical analysis reveals numerous examples of trajectory changes that fundamentally reshaped civilization's path through the persistence of ideas and institutions across centuries. The scientific revolution established methodological approaches to knowledge that continue generating benefits across diverse fields. Democratic institutions created governance structures that spread globally despite periodic setbacks. The development of human rights concepts established moral frameworks that continue expanding to protect previously marginalized groups.

The mechanism underlying trajectory changes involves the cultural transmission of values, practices, and institutional arrangements across generations. Once certain principles become embedded in legal systems, educational curricula, or social norms, they tend to reproduce themselves through teaching, imitation, and institutional momentum. This persistence creates leverage points where interventions at critical moments can influence vast numbers of future people through cascading effects across time.

Contemporary opportunities for beneficial trajectory changes center on several key domains where present decisions could establish precedents or frameworks that persist for centuries. The development of artificial intelligence systems presents unprecedented possibilities for embedding beneficial values and capabilities in technologies that could influence civilization's entire future trajectory. The establishment of space settlements could determine which human values and institutions spread beyond Earth. The evolution of global governance structures may establish precedents for managing planetary-scale challenges that persist across multiple centuries.

The concept of value lock-in represents the most concerning category of trajectory change, where beneficial moral progress becomes impossible due to technological or institutional constraints that prevent future course corrections. Advanced surveillance technologies might enable authoritarian control systems that prove impossible to overthrow regardless of future preferences. Powerful artificial intelligence systems aligned with narrow or harmful objectives could prevent future moral development by eliminating the diversity and debate necessary for continued progress. Economic or political systems might become so entrenched that beneficial reforms become structurally impossible, trapping humanity in suboptimal arrangements indefinitely.

Existential Safeguarding: Protecting Humanity from Permanent Catastrophe

Existential risks represent threats that could permanently curtail or eliminate humanity's long-term potential, demanding special attention not because they are necessarily more probable than other catastrophes, but because their consequences would be irreversible and affect all future generations. These risks encompass scenarios ranging from human extinction to permanent civilizational collapse or stagnation that prevents further moral and technological progress.

Natural existential risks include asteroid impacts, supervolcanic eruptions, and gamma-ray bursts that could destroy human civilization or extinguish humanity entirely. While these threats have extremely low annual probabilities, their cumulative risk over centuries or millennia becomes non-negligible. More importantly, they establish baseline levels of existential danger that humanity faces regardless of technological development, providing context for evaluating anthropogenic risks that may prove far more probable.
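The cumulative-risk claim follows from elementary probability: for an independent annual risk p, the chance of at least one catastrophe in n years is 1 - (1 - p)^n. A minimal sketch with assumed, purely illustrative annual probabilities:

```python
# Minimal sketch of cumulative risk. The per-year probabilities below
# are illustrative assumptions, not estimates from the original text.

def cumulative_risk(annual_p: float, years: int) -> float:
    """Chance of at least one occurrence over the given horizon."""
    return 1.0 - (1.0 - annual_p) ** years

for annual_p in (1e-5, 1e-4):
    for years in (100, 1_000, 10_000):
        risk = cumulative_risk(annual_p, years)
        print(f"p={annual_p:.0e}/yr over {years:>6} yrs: {risk:.1%}")

# A 1-in-100,000 annual risk implies roughly a 9.5% chance over
# 10,000 years; a 1-in-10,000 annual risk implies about 63% --
# "extremely low" yearly odds become non-negligible at scale.
```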

Anthropogenic existential risks emerge from human activities and technologies, creating new categories of threat that may dwarf natural risks in both probability and severity. Nuclear weapons introduced the possibility of civilization-ending warfare for the first time in human history. Engineered pandemics could potentially cause human extinction through the deliberate or accidental release of modified pathogens with enhanced transmissibility or lethality. Advanced artificial intelligence systems could pose existential risks if they pursue objectives misaligned with human values while possessing capabilities that prevent human oversight or control.

The analysis of civilizational collapse reveals important distinctions between temporary setbacks and permanent curtailment of human potential. Historical examples like the fall of the Roman Empire demonstrate that even major civilizations can collapse while leaving room for eventual recovery and renewed progress. However, certain types of collapse might prove more difficult to recover from, particularly if they occur after humanity has depleted easily accessible energy resources, caused irreversible environmental damage, or lost critical knowledge and institutional capabilities.

Technological stagnation represents a subtler but potentially more probable existential risk than outright extinction. If scientific and technological progress permanently ceased, humanity might remain trapped at current development levels indefinitely, never realizing the vast potential for future flourishing that continued progress could enable. Stagnation could result from cultural changes that discourage innovation, resource depletion that prevents further development, institutional failures that block beneficial changes, or the establishment of stable but suboptimal social arrangements that resist improvement.

Population Ethics: The Moral Value of Potential Future Lives

The moral significance of potential future people raises fundamental questions about the value of existence itself and the ethics of bringing new people into being. These questions prove essential for evaluating the importance of ensuring humanity's long-term survival and determining whether a larger future population would be better than a smaller one, assuming adequate welfare levels.

The intuition of neutrality suggests that creating happy people is morally neutral rather than good, while creating miserable people remains clearly bad. This asymmetric view appears in many contexts, from personal decisions about having children to policy decisions about population growth, and seems to reflect common moral thinking that distinguishes between making existing people happy and making happy people exist. The neutrality intuition implies that human extinction would be tragic primarily because of its effects on existing people rather than because of the lost opportunity to create future flourishing lives.

However, the neutrality intuition faces serious philosophical challenges that undermine its coherence when subjected to careful analysis. If creating a life of suffering is bad, then symmetry suggests that creating a life of happiness should be correspondingly good. The asymmetry required to maintain neutrality about happy lives while condemning the creation of miserable lives lacks adequate philosophical justification and leads to counterintuitive implications in cases involving population choices.

The total view holds that creating additional happy lives makes the world better, while creating additional miserable lives makes it worse, treating the welfare of possible people symmetrically. This view avoids the problems facing neutrality intuitions and provides clear guidance for population choices. However, it leads to the repugnant conclusion that a world with an enormous population of people with barely positive welfare could be better than a world with a smaller population of extremely happy people, violating strong intuitions about what makes outcomes desirable.

The critical level view attempts to avoid the repugnant conclusion by setting a threshold above which lives are worth creating and below which they are not. Lives above the critical level contribute positively to the world's value, while lives below it contribute negatively, allowing for more intuitive judgments about population size and welfare trade-offs. However, this approach creates its own problems, including the sadistic conclusion: because lives below the critical level count negatively even when their welfare is positive, adding a few lives of outright suffering can sometimes be ranked better than adding many lives that are good but fall below the threshold.
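The contrast between these two views, and how the critical level view escapes the repugnant conclusion at the cost of the sadistic one, can be shown with a toy calculation. All population sizes, welfare levels, and the critical level below are invented for illustration:

```python
# Toy comparison of the two population axiologies described above.
# Every number here is invented for illustration.

def total_view(n: int, welfare: float) -> float:
    """Total view: world value = population size * welfare per person."""
    return n * welfare

def critical_level_view(n: int, welfare: float, c: float) -> float:
    """Critical level view: each life contributes (welfare - c)."""
    return n * (welfare - c)

# World A: 1 million people with excellent lives.
# World B: 100 billion people with barely-positive lives.
A = (1_000_000, 100.0)
B = (100_000_000_000, 0.01)

# The total view ranks B above A -- the repugnant conclusion.
print(total_view(*A))   # 100,000,000
print(total_view(*B))   # 1,000,000,000

# With a critical level of 1, barely-positive lives count negatively,
# so B now scores far below A and the repugnant conclusion is avoided.
print(critical_level_view(*A, c=1.0))   # 99,000,000
print(critical_level_view(*B, c=1.0))   # -99,000,000,000

# But the same move yields the sadistic conclusion: ten lives at
# welfare -5 (-60 total) outrank a billion lives at welfare 0.5.
print(critical_level_view(10, -5.0, c=1.0))            # -60
print(critical_level_view(1_000_000_000, 0.5, c=1.0))  # -500,000,000
```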

Despite persistent disagreement among philosophers about population ethics, decision-making under moral uncertainty suggests adopting approaches that perform reasonably well across different theories rather than betting everything on a single view. This typically yields support for ensuring that future lives are sufficiently good while maintaining some positive value for creating additional flourishing lives, providing moral reasons for both ensuring civilization's survival and promoting conditions that enable future flourishing.
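One common formalization of this idea, sometimes called maximizing expected choice-worthiness, weights each theory's verdict by one's credence in that theory. A hedged sketch with invented credences and scores:

```python
# Sketch of credence-weighted decision-making under moral uncertainty.
# The credences and option scores are invented placeholders, not
# figures from the text.

credences = {"total_view": 0.4, "critical_level": 0.35, "neutrality": 0.25}

# How each theory scores two hypothetical policies (illustrative only):
# "survival" = reduce extinction risk; "present" = present welfare only.
scores = {
    "survival": {"total_view": 10.0, "critical_level": 8.0, "neutrality": 2.0},
    "present":  {"total_view": 1.0,  "critical_level": 1.0, "neutrality": 3.0},
}

def expected_value(option: str) -> float:
    """Credence-weighted value of an option across rival theories."""
    return sum(credences[t] * scores[option][t] for t in credences)

print({o: round(expected_value(o), 2) for o in scores})
# {'survival': 7.3, 'present': 1.5} -- "survival" performs reasonably
# well across theories rather than betting everything on one view.
```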

Addressing Objections: Uncertainty, Tractability, and Implementation Challenges

The practical implementation of longtermist priorities faces several serious objections that challenge both the theoretical framework and its real-world applications. These objections require careful consideration because they identify genuine limitations and potential problems with longtermist reasoning while revealing important constraints on how such principles should guide action.

The uncertainty objection holds that our knowledge about long-term consequences is so limited that attempts to optimize for the distant future are likely to backfire or prove ineffective. This objection gains strength from the poor track record of long-term predictions and the complexity of social and technological systems that make precise forecasting impossible. However, the objection conflates uncertainty about specific details with complete ignorance about general patterns and robust strategies that remain beneficial across many possible futures.

The tractability objection questions whether present actions can meaningfully influence long-term outcomes given the complexity of historical causation and the tendency for small effects to dissipate over time. This challenge requires distinguishing between interventions that provide temporary benefits and those that create persistent changes in values, institutions, or capabilities. Historical examples of lasting influence through moral movements, scientific discoveries, and institutional innovations demonstrate that some present actions do have persistent long-term effects.

The fanaticism objection argues that longtermist reasoning leads to extreme conclusions that justify harmful present actions in service of speculative future benefits. This concern identifies a genuine risk that requires careful attention to moral uncertainty, the rights of present people, and the limitations of consequentialist reasoning. However, the objection often assumes that longtermist priorities necessarily conflict with present welfare, when many interventions benefit both current and future generations.

The implementation challenge focuses on the practical difficulties of translating longtermist insights into effective action given institutional constraints, political realities, and human psychological limitations. Democratic institutions struggle to represent future generations who cannot vote or lobby for their interests. Market mechanisms systematically undervalue long-term consequences that extend beyond typical investment horizons. Individual psychology exhibits strong present bias that makes sustained attention to distant consequences psychologically difficult.

Addressing these objections requires developing robust strategies that acknowledge uncertainty while identifying interventions that remain beneficial across many possible futures. This includes focusing on building general capabilities and institutions that can respond flexibly to emerging challenges, rather than betting on specific predictions about future problems. It also involves seeking opportunities where longtermist priorities align with present interests, reducing conflicts between temporal perspectives while building broader support for long-term thinking.

Conclusion

The longtermist framework fundamentally reframes moral priority-setting by recognizing that humanity's future could be vastly larger and more significant than its past and present combined, demanding a corresponding shift in how societies evaluate actions, allocate resources, and understand obligations across time. The convergence of temporal moral equality, enormous potential future populations, persistent leverage points for influence, and genuine threats to long-term human potential creates a compelling case for extending moral consideration far beyond conventional time horizons.

While each element of the longtermist argument involves significant uncertainty and faces legitimate objections, their combination provides a robust framework for understanding humanity's current moral situation and opportunities for beneficial impact that could echo across centuries. This perspective offers essential insights for anyone seeking to understand how philosophical reasoning can illuminate practical questions about technology governance, global priorities, and the ultimate direction of human civilization.

About Author

William MacAskill

William MacAskill, author of "What We Owe the Future," is a moral philosopher at the University of Oxford and a co-founder of the effective altruism movement. His work focuses on ethics, decision-making under moral uncertainty, and humanity's long-term future.
