Proactionary principle

The proactionary principle is an ethical and decision-making principle formulated by the transhumanist philosopher Max More as follows:[1]

People’s freedom to innovate technologically is highly valuable, even critical, to humanity. This implies several imperatives when restrictive measures are proposed: Assess risks and opportunities according to available science, not popular perception. Account for both the costs of the restrictions themselves, and those of opportunities foregone. Favor measures that are proportionate to the probability and magnitude of impacts, and that have a high expectation value. Protect people’s freedom to experiment, innovate, and progress.

The proactionary principle was created as a counterpoint to the precautionary principle, which holds that the consequences of actions in complex systems are often unpredictable and irreversible, and concludes that such actions should generally be opposed. The proactionary principle is based on the observation that, historically, the most useful and important technological innovations were neither obvious nor well understood at the time of their invention. More recommends ten principles in his paper "Proactionary Principle":

  1. Freedom to innovate
  2. Objectivity
  3. Comprehensiveness
  4. Openness/Transparency
  5. Simplicity
  6. Triage
  7. Symmetrical treatment
  8. Proportionality
  9. Prioritization
  10. Renew and Refresh

In a syndicated newspaper article that has been translated into eight languages, Steve Fuller has argued that the precautionary principle and the proactionary principle are likely to replace the right-left divide in politics in the 21st century.[2] A subsequent book, The Proactionary Imperative by Fuller and Lipinska, attempts to make the proactionary principle fundamental to transhumanism as a worldview, stressing the principle's interpretation of risk as an opportunity rather than a threat.[3]

In theory, sufficient study of the variables of any proposed course of action may yield acceptable levels of predictability. In this regard, the proactionary principle can be viewed as a philosophical formulation of the accepted mathematical principles of extrapolation and the logical principles of induction.[citation needed]

However, the proactionary principle argues that "sufficient study" may in some cases be impractical. For instance, in releasing a new life form (whether a genetically modified plant, animal, or bacterium) into the biosphere, one would have to simulate the biosphere to achieve "acceptable levels of predictability". While the innovator of the new life form might point out that such a simulation would be a heavy burden, the other life forms in the biosphere could suffer irreparable harm from an untested release. More's first principle, freedom to innovate, would place the burden of proof on those who propose a restrictive measure.[citation needed]

According to the proactionary principle (and to cost-benefit analysis), the opportunity cost of imposing a restrictive measure must be weighed against the potential costs of damage from a new technology, rather than considering the potential damages alone.[citation needed]
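Stated as a simple expected-value comparison (an illustrative sketch only, not a formulation appearing in the cited sources; the symbols below are introduced here for exposition), a restrictive measure would be warranted only when the expected harm it prevents exceeds what the restriction itself costs:

\[
p \cdot H \;>\; B + C
\]

where \(p\) is the estimated probability that the new technology causes harm, \(H\) is the magnitude of that harm, \(B\) is the expected benefit foregone by restricting the innovation (the opportunity cost), and \(C\) is the direct cost of imposing and enforcing the restriction.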

References

  1. "Extropy Institute Resources".
  2. Fuller, Steve (2012-05-07). "The Future of Ideological Conflict". Project Syndicate. Retrieved 2012-05-26.
  3. Fuller, Steve; Lipinska, Veronika (2014). The Proactionary Imperative: A Foundation for Transhumanism. Palgrave Macmillan.