God's Loaded Dice: Random Musings On A Universe Gone Mad
by
Timothy McGettigan
Table of Contents
Acknowledgements
Introduction: Feynman's Cosmic Onion
1 Ayn Rand: The Blinkered Visionary
2 Redefining Reality
3 Darwin Day
4 Many Worlds, but only One Reality
5 The Sacred and the Profane
6 The New World Order
7 Monkey Wars
8 The Graveyard of Empires
9 Penny Wise, Pound Foolish
10 Dear Arizona
11 Living the Dream
12 To Infinity and Beyond!
13 Trumped by the Dorkusians
14 The Unlikely Revolutionary
15 The Federal Budget for Dummies
16 The First Star Warrior
17 False Gods and Monsters
18 It Takes a Village Idiot
19 Queers Need Not Apply
20 No Excuse
21 Say It Ain't So, Joe
22 Indefensible?
23 Get Your Geek On
24 The Future is a Fantasy
25 The Innocence of Muslims
26 Time Surfers
27 Blinded by Faith
28 Elementary, My Dear Watson!
29 It's Alive!!
30 Sweet Revenge
31 Evolution at the Speed of Thought
32 The Turing Centenary
33 Thanksgiving at Walmart
34 God's Loaded Dice
35 Paradigms Unlimited
36 Teleportation and Intelligence
37 Fortnight Fiasco
38 Evolution 2.0
39 Sapient Apes Ascendant
40 When You're Wrong...You're Right?
About the Author
ACKNOWLEDGEMENTS
As always, I would like to thank Susan, Claire and Ruby for their patience. I am, by far, the most fortunate person that I know. Thanks also to Mike Sosteric for our ongoing partnership. The SocJournal (www.sociology.org) has been the most interesting and innovative publication outlet in the social sciences for the past twenty years. I originally published most of the commentaries in this collection in The SocJournal. I would also like to thank Rob Kall at OpEd News (www.opednews.com) for his fantastic accomplishments in disseminating thought-provoking inquiry in cyberspace. Finally, I would like to thank WikiMedia Commons (commons.wikimedia.org/) for the many colorful images that I have (legally!) added to the commentaries in this collection.
INTRODUCTION Feynman's Cosmic Onion

Albert Einstein believed that the universe was created by a rational god; a god who would never presume to play dice with his precious creation. Einstein's belief in a rational, knowable universe was rooted in a clockwork scientific philosophy that comprised the very bedrock of Enlightenment science. This perspective was most famously summed up by Pierre-Simon Laplace (1749-1827):
An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as of the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes (Quoted in Weinert, 2004, p. 197).
Laplace was convinced that, so long as he and his intellectual heirs remained committed to the cause of rational scientific inquiry, their endeavors would ultimately yield a complete and thorough knowledge of the universe. All that had been hidden would inexorably become present to the eyes of rational science. Yet, if scientists have learned anything over the past century it is that the universe is anything but rational: the chief claim of quantum mechanics, perhaps the most extraordinary set of insights ever revealed by modern science, is that it is impossible to know everything about anything. Though Einstein refused to accept this unsettling truth, quantum physicists have demonstrated time and time again that it is impossible to specify the exact properties of even a single quantum particle. Still, in spite of the epistemological limitations of quantum reality, some scientists cling to the notion that the universe is knowable and deterministic in a Laplacian sense:
Given the state of the universe at one time, a complete set of laws fully determines both the future and the past...The scientific determinism that Laplace formulated is...the basis of all modern science, and a principle that is important throughout this book...Since people live in the universe and interact with other objects in it, scientific determinism must hold for people as well. Though we feel we can choose what we do, our understanding of the molecular basis of biology shows that biological processes are governed by the laws of physics and chemistry and therefore are as determined as the orbits of the planets (Hawking and Mlodinow, 2010, pp. 30-32).
Influential as Hawking may be, there are other equally eminent scientists who take a very different view of the implications of quantum mechanics:
In classical physics it would have been legitimate to specify exactly both the position and the momentum of a given particle at the same time, but in quantum mechanics that is forbidden, as is well known, by the uncertainty, or indeterminacy, principle. The position of a particle can be specified exactly, but its momentum will then be completely undetermined (Gell-Mann, 1994, p. 139, emphasis added).

Another most interesting change in the ideas and philosophy of science brought about by quantum physics is this: it is not possible to predict exactly what will happen in any circumstance...nature, as we understand it today, behaves in such a way that it is fundamentally impossible to make a precise prediction of exactly what will happen in a given experiment (Feynman, et al., 1963, p. 35, emphasis in original).
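Gell-Mann's point can be stated compactly in the standard textbook form of Heisenberg's indeterminacy relation; the notation below is the conventional one (it is not drawn from the quotations above), where $\Delta x$ and $\Delta p$ are the uncertainties in a particle's position and momentum:

$$
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
$$

However small $\hbar$ may be, the product of the two uncertainties can never shrink to zero, which is precisely why pinning down a particle's position exactly leaves its momentum completely undetermined.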
So where does this leave us? As scientists have expanded the frontiers of knowledge, they have gradually come to realize that the universe is chock full of mysteries that may forever elude even the cleverest and most persistent of truth-seekers:
People say to me, "Are you looking for the ultimate laws of physics?" No, I'm not, I'm just looking to find out more about the world and if it turns out there is a simple ultimate law which explains everything, so be it. That would be very nice to discover. If it turns out it's like an onion with millions of layers and we're just sick and tired of looking at the layers, then that's the way it is, but whatever way it comes out its nature is there and she is going to come out the way she is, and therefore when we go to investigate it we shouldn't predecide what it is we're trying to do except to try to find out more about it (Feynman and Robbins, 1999, p. 23).
Thus, science is nothing if not an intellectual adventure. Will we ever arrive at the final, absolute Laplacian truth? I hope not. Throughout history, the most dangerous and ignorant people have always been those who were convinced that they knew everything. In contrast, real geniuses are never the folks who think they have all the answers. Instead, true geniuses are the people who, by hook or crook, figure out how to ask the right questions. Sure, there are truths to be revealed. The real beauty of science is that, every time we think we might be getting close to knowing everything, a few nagging dark matters succeed in emphasizing how little we truly know. If it is impossible to know everything about any individual quantum particle, will humans ever know everything about everything? I won't even bother to answer such a self-evident and pointless question. By searching for the ultimate answer to everything, science does nothing but shoot itself in the foot.
One of the ways of stopping science would be only to do experiments in the region where you know the law. But experimenters search most diligently, and with the greatest effort, in exactly those places where it seems most likely that we can prove our theories wrong. In other words, we are trying to prove ourselves wrong as quickly as possible, because only in that way can we find progress (Feynman, 1965, p. 151).
Scientists do their best work when they humbly own up to their own ignorance. In spite of Laplace's insistence to the contrary, no human either can or ever will know everything. Further, any scientist with an ounce of sense would never claim otherwise. Science is an enterprise that succeeds in revealing new truths by taking one plodding step forward (or, as Feynman suggests, by peeling back one layer of a cosmic onion) at a time.

Finally, a word to the wise: if there is a god, he does play dice with the universe. Scientists who don't wish to crap out would be well advised to wise up to the rules of his game.

Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Illuminated_line_volume_supernova.jpg
ONE Ayn Rand: The Blinkered Visionary
In a recent documentary, Ayn Rand & the Prophecy of Atlas Shrugged, stalwart defenders of Ayn Rand's unique interpretation of the ideal of American freedom argue that government overreach is, as Rand predicted in Atlas Shrugged, intent upon extinguishing the essential democratic freedoms that have made the pursuit of life, liberty and happiness possible in the USA.

Ayn Rand (1905-1982) is the author of multiple books, the most famous being The Fountainhead (1943) and Atlas Shrugged (1957), that have sold tens of millions of copies. As such, Rand qualifies as one of the best-selling and most influential authors of the twentieth century. Voluminous as her books may be, Rand was able to sum up her political philosophy in a 1959 interview with Mike Wallace in the following, very straightforward terms: "I am opposed to all forms of control. I am for an absolute laissez faire, free, unregulated economy. Let me put it briefly, I am for the separation of state and economics" (Kirk, 2010). In contemporary terms, Ayn Rand could be described as an arch free market capitalist.

It is worth emphasizing at this point that arch free marketers, such as Ayn Rand, typically endorse a qualified view of government interference. Since the modern nation-state is tasked with the responsibility of maintaining a monetary system, in spite of Rand's idyllic view of a 100% laissez faire business environment, it would be impossible for an economic system to function in total isolation from government intervention. For starters, without governmentally-produced, managed, and insured money, free market economic activity would be untenable.
Further, the nation-state provides a wide range of other goods and services that Ayn Rand and free marketers tend to overlook, but without which the private enterprise system simply could not function. In brief, vast investments by the US federal government in public education, universities, infrastructure (roads, railroads, shipyards, airports, dams, power grids, Internet, etc.), technology, the military, security, etc., create a ripe environment for private investment. Indeed, it was such state-level investments (e.g., the New Deal, the Great Society, investments in the Military-Industrial Complex, the space program, AI, etc.) that transformed the US from a rural to a post-industrial society.

Ayn Rand elects to assign all of the credit for such social progress to the individualist heroes of her novels, such as Howard Roark, Dagny Taggart, and John Galt. Doing so might make for entertaining story-telling, but this assures that her novels will be forever consigned to the fiction racks in bookstores. Rand willfully ignores the vast, positive, and essential social contributions that can only be mobilized by a nation-state. Nation-states do a lot of things wrong, but sometimes they do get things right.
"It's one small step for man... One giant leap for mankind."
The key point to understand is that, so long as free marketers profit from their relationships with the nation-state, they do not characterize those relationships as forms of interference. In fact, they often fail to acknowledge that those relationships even exist. For example, you will never hear military contractors who count their profits in the billions griping about federal interventions in their private affairs. This is in spite of the fact that there are mountains of evidence that the feds have been tucked snugly in bed with private military contractors ever since WWII. It is only when free marketers feel encumbered by the feds, i.e., when they are asked to pay taxes or obey regulations, that they begin to protest about unfair federal interference in their business.

Also, if Ayn Rand truly believed in her own absolute conviction about the necessary separation between government and private interests, then why did she attend the swearing-in ceremony at the White House for one of her most famous acolytes, Alan Greenspan, who became Chair of the US Federal Reserve (Kirk, 2010)? Would John Galt ever have worked for the feds? And, if Greenspan/Galt took the job to tear the system down from within (which is what he ultimately accomplished), then what does that say about the man, his patriotism, the mentor that he
esteemed so highly (Greenspan, 2008, p. 51), and the philosophy in which he invested so much faith? Greenspan commented in his memoir that, as he swore his oath of office, he knew that there were many laws that he would be obligated to uphold with which he deeply disagreed (Greenspan, 2008, p. 51). As such, Greenspan's approach to dealing with regulations that he opposed was to avoid enforcing them. Under Greenspan, the environment that the Fed cultivated was, as one might expect, one that emphasized laissez faire free market competition. Indeed, Alan Greenspan was so committed to the virtues of deregulation that, at one point, he informed Brooksley Born, head of the Commodity Futures Trading Commission, that he believed there was no need for the federal government to regulate fraud (Kirk, 2010). Is it any wonder that the financial system eventually collapsed as a result of such leadership?

For a while Greenspan's strategy appeared to work very well. Throughout the 1980s and 1990s, Greenspan was hailed as a hero, a genius, and a "wizard." However, Greenspan's deregulation binge ultimately created a financial bubble that caused an economic calamity in the first decade of the new millennium. In 2008, having shepherded the USA's financial system to the brink of ruin, George Bush departed office and shifted responsibility for tidying up this monumental economic mess to his successor, President Barack Obama. Given the scope of the economic disaster that Bush and Greenspan/Galt had created, Barack Obama was forced to take extraordinary steps to prevent the economy and financial system from falling into a state of crisis that threatened to dwarf even the Great Depression of the 1930s. In order to stave off such systemic economic ruin, the Obama Administration approved massive federal bailout programs, such as TARP, and also secured massive additional resources to bail out major corporations, such as General Motors, Freddie Mac, AIG, and countless others.

While this strategy did prevent the kind of disastrous and prolonged economic crash that was associated with the Great Depression, the Obama Administration's bailout strategy seemed to generate more enemies than allies. From the Tea Party movement to the many faces of the Occupy movement, droves of citizens have surged into the streets to condemn the social inequities that have been perpetuated by the Obama Administration's bailouts. And, now, in what could arguably be characterized as the unkindest cut of all, in Ayn Rand & the Prophecy of Atlas Shrugged, Rand supporters argue that what's really wrong with the post-2008 financial bailouts (that, mind
you, were caused by Greenspan/Galt's singularly misguided application of Ayn Rand's ill-begotten laissez faire deregulatory ideals) is government overreach. Therefore, the cure for what ails us (drum roll, please) is another whopping dose of John Galt/Greenspan heroism. Baloney.

I think it is fair to say that citizens in any democracy must remain eternally vigilant about their rights and responsibilities, especially in this post-9/11 era wherein civil liberties are under siege from so many new technological and legal threats. That said, I do not think that we can hope to find the solution to the problems that 21st century citizens face in the pages of Ayn Rand's novels. If anything, Alan Greenspan/Galt has demonstrated that Rand's ideas are a recipe for social and economic disaster in the 21st century. One of the most disappointing aspects of Ayn Rand's work is that, while Rand tries to promote her ideas as a defense of the little guy, in fact, Rand's philosophy is nothing more than a brazen apology for naked self-interest. The moral of Ayn Rand's books goes something like this: the rich can (and should!) get richer because they are virtuous, and the poor, because they are a bunch of parasitic losers, can kiss off and die!

Ultimately, Ayn Rand & the Prophecy of Atlas Shrugged is an exercise in spin doctoring. The 2008 financial meltdown has focused a great deal of public attention on the global financial environment and, in particular, upon the character flaws of its leaders. The obstinacy of the Occupy movement has further sharpened that focus: common folk have been encouraged to blame Wall Street one-percenters for the burgeoning climate of socioeconomic difficulties that the 99% are facing. In an environment wherein Ayn Rand's self-interested heroes are beginning to feel the heat, Ayn Rand & the Prophecy of Atlas Shrugged is an attempt to deflect attention from Rand's heroes to her favorite straw man: big government. Will audiences be snookered thus? Well, instead of 3D glasses, I think the makers of Ayn Rand & the Prophecy of Atlas Shrugged will need to distribute a special type of Ayn Rand blinkers if they want audiences to believe, in the post-2008 era, that a new generation of Greenspan/Galt heroes will be our salvation in the future. In your dreams, Ayn. They never were, and they never will be.

References
Greenspan, Alan, 2008. The Age of Turbulence: Adventures in a New World. New York: Penguin.
Kirk, Michael, 2010. Frontline: The Warning. PBS. Boston, MA: WGBH.
Mortensen, Chris, 2011. Ayn Rand & the Prophecy of Atlas Shrugged. Los Angeles, CA: D&E Entertainment.

Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Defense.gov_photo_essay_061230-F-0193C019.jpg
TWO Redefining Reality
Redefining reality is a process through which individuals can challenge inadequate paradigms through a combination of astute observation and an ingenious capacity for innovative cognition (i.e., agency). The notion of redefinable reality posits, in agreement with Popper's realist philosophy, that there is a universe out there that exists independently of human cognition (Popper, 1983). As such, I argue that universal Truth does exist, but such Truth is not (nor will it ever be) contained within extant scientific paradigms (McGettigan, 2011). Rather, The Truth extends infinitely into the unlocked mysteries of the expanding universe. In other words, reality is what it is: an asteroid is an asteroid is an asteroid, etc. Truth is an intrinsic, inseparable feature of phenomena as they exist independently of human perception. Lies and distortions come into existence via humanity's vast capacity for ignorance: humans view the illimitable universe through awed and flawed psyches. Although admirable in many ways, the human grasp of infinite mysteries remains woefully incomplete. Nevertheless, the process of redefining reality permits limited human psyches to transcend the limitations of inadequate paradigms in pursuit of a grander vision of Truth.

Redefining reality generally begins when individuals notice a disjuncture between observable facts and established modes of explanation, e.g., a democratic system that is supposed to serve the people, but that instead caters to the whims of the powerful. Due to their devotion to established modes of thought, some observers might ignore anomalies, or contrive a convenient explanation that sustains their belief in what is already known, e.g., democracy in the United States may be imperfect, but it distributes power
pluralistically through a convoluted representational system. Alternately, more independent thinkers might treat such a dilemma as an opportunity to transcend the socially-imposed barriers that constrain their understanding of observable reality.

The process of transcending socially imposed cognitive barriers often begins with a creative observation (e.g., "Hey! Why don't politicians ever follow through on their campaign promises?"). In some cases, individuals who are determined to make sense of the anomaly in question might follow up their observations by developing an individual-level intellectual challenge to established modes of understanding (i.e., it appears as though the United States' democratic system is primarily designed to serve the interests of power-brokers). Such acts of intellectual rebellion tend to further erode the foundations of conventional thinking (i.e., "Based upon what I have observed, I no longer believe democracy in the United States serves the will of the people."). Finally, the culmination of the redefinition of reality process involves constructing an entirely new explanation that simultaneously explodes existing ideological boundaries while also advancing a more adequate description of the phenomena in question, i.e., the United States political system masquerades as a democracy, while functioning like an elite-centered oligarchy.

Thus, as the foregoing example illustrates, individuals occasionally demonstrate the requisite mental apparatus to make note of anomalies, develop creative new explanations for mysterious phenomena, and then overcome manifestations of social power that delimit their thought and action. Therefore, the thoughts and behaviors of individual social actors are not entirely determined by the invisible influences of social coercion. Instead, sometimes agents can creatively counteract the distorting influences of social coercion and, in so doing, generate moments of truth. A moment of truth is an experience wherein individuals, via the process of redefining reality, are transported from an inadequate version of reality to a more satisfactory paradigm. These experiences may be considered relatively truthful in that they are generated through a process whereby agents systematically counteract the influences of invisible social power over their definitions of reality.

Thus, Mills (1956) argues that people who confine their analysis of the US political system to the realm of the observable (i.e., the words and deeds of elected politicians) cannot help but fall prey to artfully calculated illusions. From Mills' perspective, the observable activities of political actors in the United States are designed to provide a
convincing impression that politics-as-usual lives up to the ideals of democracy. Yet, Mills argues that appearances are deceiving. While political representatives go through the motions of faithfully serving their constituents, shadowy operators work behind the scenes to ensure that politics-as-usual serves the interests not of the majority, but of a privileged minority of power elites. Consequently, the truth is not defined by facts alone; rather, the truth can only emerge as a result of a deeper investigation into the manner in which perception is often cunningly distorted by the interventions of social power. Therefore, it is in the process of counteracting the distorting influences of social power that it becomes possible for agents to experience moments of truth.

References

McGettigan, Timothy, 2011. Good Science: The Pursuit of Truth and the Evolution of Reality. Lanham, MD: Lexington Books.
Mills, C. Wright, 1956. The Power Elite. Oxford: Oxford University Press.
Popper, Karl, 1983. Realism and the Aim of Science. Totowa, NJ: Rowman and Littlefield.
THREE Darwin Day Celebrating the Scientist that People Love to Hate
There are two kinds of people in this world: those who celebrate Darwin Day, and those who don't. Charles Darwin (February 12, 1809-1882) is without doubt one of the most important scientists who ever lived. He is also one of the most controversial.

First published in 1859, Darwin's theory of evolution has proven to be one of the most groundbreaking achievements in the history of science. Not only did evolution establish a unique theoretical framework which subsequently gave rise to the field of modern biology (and a plethora of related scientific disciplines), but evolutionary theory also helped to advance a radical, secular, scientific cosmology. In other words, evolutionary theory gave life to an entirely new way of looking at the world from a purely scientific perspective. This is why scientists make such a fuss about Darwin's birthday. Yet, it is precisely because of evolution's secularizing propensity that it has proven to be such a persistently controversial scientific theory. Theologically-inclined folks tend to dislike Darwin and his irreligious ideas. Always have. Always will. Of course, some theologically-minded folks have found ways to maintain their religious faith, while also cultivating some level of conviction in Darwinian evolution. However, it is worth pointing out that Darwin himself was never able to artificially bifurcate his religious and scientific beliefs in that fashion; the young Charles
Darwin was a faithful Christian, but the mature Darwin was a secular humanist.

In 1831, Darwin embarked on his historic journey on the HMS Beagle as a firm believer in creationist principles. Like many natural philosophers of his day, Darwin believed that Genesis offered a literal version of Creation. However, as Darwin's nearly five-year circumnavigation unfolded, he encountered phenomena, such as the anomaly of ancient marine fossil beds that lay at the very peaks of the Andes, that rattled his faith in young earth creationism. Ultimately, Darwin's creationist beliefs were completely undone by the creatures that he encountered in the Galapagos Islands. Though he had witnessed many wonders during his travels, the bizarre menageries that Darwin encountered in the Galapagos exceeded anything that he had yet imagined. From giant tortoises to endless varieties of land crabs and snails, Darwin marveled at the seeming adaptability and (dare he think it?) mutability of the species that he observed. Perhaps as he gazed upon the spectacle of marine iguanas bobbing in the surf, Darwin gave thought to a new and unsettling idea. Just as Charles Lyell had suggested in his fascinating book about the earth's geology that tiny and slow-paced changes had the net result, over the eons, of introducing extraordinary alterations to the earth's geological features, might not the same be true for living organisms? In other words, could the tiniest physiological changes accumulate sufficiently across time to bring about the transmutation of species? Ultimately, these insights gave rise to Darwin's unapologetically secular theory of evolution, which can be summarized as follows (a toy simulation sketch follows the list):
1. Variation: whether it's dogs, grass, or fruit flies, organisms in any breeding population tend to vary from one individual to the next
2. Overpopulation: from oak trees to salmon, parents tend to produce more progeny than can survive to maturity
3. Struggle for survival: the overproduction of progeny tends to inspire high-stakes competitions to secure limited resources
4. Survival of the fittest: individuals with advantageous genetic traits enjoy an edge in the competition to secure scarce resources
5. Evolution through natural selection: winners of bioecological competitions survive and pass advantageous genetic traits to their offspring, which, in turn, brings about the gradual evolution of new species
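To make the logic of the five steps concrete, here is a minimal, hypothetical simulation sketch in Python. None of the names or parameter values below come from Darwin or from the text; they are illustrative assumptions only.

```python
# Toy sketch of Darwin's five steps: variation, overpopulation,
# struggle for survival, survival of the fittest, and gradual change.
import random

POP_SIZE = 100             # carrying capacity (illustrative assumption)
OFFSPRING_PER_PARENT = 3   # overpopulation: more progeny than can survive
MUTATION_SD = 0.05         # variation introduced in each generation
GENERATIONS = 50

def fitness(trait):
    """Survival chance rises with the trait value (purely illustrative)."""
    return trait

# Start with a varied population of trait values between 0 and 1.
population = [random.random() for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Overpopulation: each individual produces several varied offspring.
    offspring = []
    for parent in population:
        for _ in range(OFFSPRING_PER_PARENT):
            child = min(1.0, max(0.0, parent + random.gauss(0, MUTATION_SD)))
            offspring.append(child)
    # Struggle for survival / survival of the fittest: only the fittest
    # POP_SIZE individuals secure the limited resources.
    offspring.sort(key=fitness, reverse=True)
    population = offspring[:POP_SIZE]

# Evolution through natural selection: the average trait drifts upward.
print(f"Mean trait after {GENERATIONS} generations: "
      f"{sum(population) / len(population):.3f}")
```

Run the sketch and the mean trait value climbs steadily, even though each individual change is small and random, which is the gradualism the five steps describe.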
Rarely do scientific theories elicit even the slightest attention from the general public. Apart from a few celebrated scientists, such as Einstein, the workaday world of science usually operates off the radar screen of public interest. Not so with Darwin. Evolutionary theory has inspired widespread acrimony from the moment of its first publication. In addition to other objections, many people have been displeased with the idea that, within the confines of evolutionary theory, humans do not occupy any special pride of place. From Darwin's perspective, humans were, quite simply, just another form of life (Darwin, 1871). Essentially, evolution postulates that humans exist because their ancestors, just like any other complex organism, randomly developed advantageous genetic traits. For those who are in need of a loftier sense of theological or philosophical purpose, they won't get it from Darwin. Furthermore, evolutionary theory also constitutes an unforgivable affront to anyone with an affinity for creationism. In Christian theology, God is not only the source of Creation, but God is also the most sacred being in the universe. By discounting the role that God plays in the origin of species, Darwin's theory has been attacked for being a slur upon the sanctity of creation and, worse, as an insult to God.

One might guess that, after more than 150 years of monumental scientific success, evolutionary theory would have won the public's hearts and minds. Not so. In national public opinion surveys, slightly less than half of US adults generally report that they believe humans have evolved from some other species of animal (similar surveys conducted among scientists generally yield overwhelming support, in the range of 90% or more, for evolutionary theory). Given the, at best, tepid public support for evolutionary theory, legislators are often inspired to search for ways to undermine Darwinian evolution in the public sector, and their favorite place to attack evolution is in the classroom.

The most recent example of such an attack is currently underway in Indiana. On January 31, 2012, the Indiana State Senate passed Bill 89, which would allow local school districts to offer "instruction on the various theories of origins of life" which "must include theories from multiple religions." It will be interesting to watch the progress of this bill. Clearly, the goal of
Indiana Senate Bill 89 is to challenge the privileged position that evolutionary theory currently occupies as an explanatory perspective in high school science classrooms (a privileged position, I should add, that evolution has earned by dint of being vetted and tested by more than 150 years of rigorous scientific research). Nevertheless, should Indiana Senate Bill 89 move forward and be approved by the Indiana House of Representatives, the new law would have a devastating effect on Indiana's high school science curriculum. Due to the onerous burdens created by No Child Left Behind and teaching to the test, it is already difficult enough for high school science teachers to shepherd their students through the jam-packed high school science curriculum. But just imagine if high school biology teachers were also required to include "instruction on the various theories of origins of life" which "must include theories from multiple religions" in their courses. High school biology would rapidly devolve from an exploration of biological science to a course on comparative religions, which, by the way, high school biology teachers are not trained to teach, nor should high school science students be required to study. If kids want to study religion, then they should take a course in religious studies, not biology.

If Indiana Senate Bill 89 is approved by the Indiana House of Representatives, then Darwinian evolution will once again need to demonstrate that it is the most effective, secular scientific theory that has ever been developed to explain the past, present, and future of life on the planet. Hopefully, it won't come to that. Evolutionary scientists have enough work to do without taking time out of their lives and busy schedules to fight old battles that they have already won many times over. As I have argued elsewhere (McGettigan, 2011), the means through which beliefs are shaped (and hearts and minds are won) often has more to do with power than truth. Evolution may never win a popularity contest among the general public, but it will persevere and excel by doing precisely what it does best: finding better, more convincing ways to explain life, the universe, and everything through a scientific lens. If that rubs creationists the wrong way, then so be it. May the fittest paradigm survive.
References

Darwin, Charles, 1859. On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life (1st ed.). London: John Murray.
Darwin, Charles, 1871. The Descent of Man and Selection in Relation to Sex. Volume 1. New York: D. Appleton and Company.
McGettigan, Timothy, 2011. Good Science: The Pursuit of Truth and the Evolution of Reality. Lanham, MD: Lexington Books.
National Academy of Sciences, 2008. Science, Evolution, and Creationism. Washington, DC: The National Academies Press.

Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Editorial_cartoon_depicting_Charles_Darwin_as_an_ape_(1871).jpg
FOUR
Many Worlds, but only One Reality Stephen Hawking and the Determinist Fallacy
One can hardly broach the subject of agency without acknowledging the long-standing and unresolved philosophical debate regarding the agency vs. determination dichotomy. To provide an illustration of the extent of disagreement over this dualism, determinists, such as Stephen Hawking, have argued that agency and free will are nothing but an illusion:
...the molecular basis of biology shows that biological processes are governed by the laws of physics and chemistry and therefore are as determined as the orbits of the planets. Recent experiments in neuroscience support the view that it is our physical brain, following the known laws of science, that determines our actions and not some agency that exists outside those laws...so it seems that we are no more than biological machines and that free will is just an illusion (Hawking and Mlodinow, 2010, emphasis added).
Indeed, Hawking's deterministic perspective is so comprehensive that he believes if it were possible to build a computer that was sufficiently powerful to calculate each and every variable in the cosmos, then such a machine would be able to determine with absolute precision every aspect of every event that transpires in the universe from the big bang until the infinitely remote end of time. From Hawking's perspective, nothing moves,
interacts, appears or disappears in the universe without having been minutely pre-determined by a chain of causality that was set in motion at the origin of the universe. Now that is a hard-core determinist.

At the other end of the spectrum are those who believe in an indeterminate universe (Popper, 1988). Philosophies of indeterminacy take many forms; however, such perspectives tend to emphasize that endless varieties of random, inscrutable and uncertain phenomena render the universe ineluctably unpredictable. For example, Heisenberg's uncertainty principle asserts that it is impossible to determine both the velocity and position of any discrete particle: the process of determining one property has the effect of modifying the other property. Much to Einstein's displeasure, Niels Bohr elaborated upon Heisenberg's uncertainty principle by developing the theory of quantum mechanics. Bohr's Copenhagen interpretation is predicated on the realization that, at the quantum level, classical expectations about the normal, predictable, apparently-determinate principles that operate in the macro universe do not apply at the infinitesimal scale of the quantum. In other words, the behavior of quantum-scale phenomena is downright bizarre:
At the quantum level, particles appear, disappear and reappear unpredictably and without having conventionally traversed the distances between the separate spaces they occupy.
Entangled particles defy the laws of physics by exhibiting what Einstein referred to as "spooky action at a distance."
Single particles behave as though they are interacting with other, non-existent particles when fired individually through a double-slit filter (a compact statement of this oddity is sketched after this list).
Quantum phenomena exhibit complementarity, which means that phenomena will morph depending upon what type of techniques observers employ to examine the phenomena in question.
Etc.
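To see why the double-slit behavior resists a deterministic, particle-by-particle account, it helps to write down the standard textbook relation (the notation is conventional and not drawn from the text): if $\psi_1(x)$ and $\psi_2(x)$ are the amplitudes for reaching a point $x$ on the screen via slit 1 or slit 2, the probability observed with both slits open is

$$
P_{12}(x) \;=\; \left|\psi_1(x) + \psi_2(x)\right|^2 \;=\; |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right] \;\neq\; P_1(x) + P_2(x).
$$

The interference term is what makes a single particle behave as though it were "interacting" with the path it did not take; no assignment of one definite trajectory reproduces it.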
Thus, those who believe in an indeterminate universe dismiss the idea that an infinitely complex but nonetheless singular chain of causality rigidly determines all subsequent events that transpire
in the universe. For indeterminists, the universe is full of actors that often engage in unpredictable improvisation; their performances often change without notice and, occasionally, in open defiance of the direction that is essential to preserve a deterministic universe. This is true of the micro realm of quantum mechanics, and it is also true of the macro universe that ceaselessly confounds and astounds its unpredictable human observers (McGettigan, 2011). Advocates of the indeterminate perspective are generally of the opinion that, if it were somehow possible to replay the history of the universe, each new iteration of the universe would be identifiably unique. This is because random events would exert unique and unpredictable influences on the evolution of the cosmos, just as random events have generated widely divergent species on the planet earth: fostering the evolution of new species in some cases, and instigating widespread extinction in others.

For his part, Hawking rejects the idea that quantum indeterminacy implies that the universe as a whole is non-deterministic. Although Hawking concedes that quantum events depart from the more deterministic patterns that operate with greater consistency in the macro universe, nevertheless, Hawking argues that Hugh Everett's Many Worlds theory (Byrne, 2010) offers a theoretical framework through which to develop a deterministic model for quantum phenomena. Briefly, the Many Worlds theory proposes that everything that is physically possible happens. In other words, for any discrete event that takes place in the universe, an infinite range of similar but slightly different events takes place in an infinite number of alternate universes. Hugh Everett developed the Many Worlds theory in order to solve the measurement problem associated with the collapse of the wave function in quantum mechanics.

For the purposes of the present discussion, Hawking argues that, since the Many Worlds theory posits that everything which is physically possible occurs in an infinity of different universes, then everything that any individual could ever think or do (and much, much more!) actually does happen, and is therefore determined by the circumstances that unfold in each and every one of the infinite multiverses in which the various chains of causality unfold. To put
it more simply, imagine that in one universe a football player catches a pass to score a touchdown, while in another the very same player drops the ball, or trips over an opponent's foot, or is blinded when a spectator hurls Gatorade at his eyes, or gets kidnapped by extraterrestrials, etc.

Fascinating as the Many Worlds theory may be, there are problems with Hawking's claim that the Many Worlds thesis offers proof that the universe remains deterministic in spite of pervasive quantum indeterminacy. First of all, valuable as the Many Worlds theory may be as a conceptual construct, there is no proof that the theory is true. For decades, scientists have speculated that alternate universes might exist, but no one has ever generated any proof that more than one universe does exist. Thus, Hawking's belief that every possible outcome of events is determined by, and plays out in, an infinity of alternate universes is pure speculation. I could equally well claim that an omniscient genie foresees every possible outcome of every event that takes place in the universe, but forcibly prevents all but one from actually occurring: that is why humans perceive only one set of events in one lonely universe. Hawking's unsubstantiated faith in the multiverse has no more basis in fact than my speculations about an all-powerful genie.

In fact, Hawking's invocation of a multiverse offers more support for the fundamental indeterminacy of the universe than the thoroughgoing determinacy of an infinity of alternate universes. Indeed, regardless of whether or not an infinity of alternate universes do exist, Hawking's invocation of a multiverse (wherein an infinity of alternatives precipitates from each and every discrete event) represents an explicit admission that the universe is anything but deterministic. Extreme determinists, like Hawking, generally assert that the universe is designed such that there is one, and only one, rigid causal path that operates within any single universe. Once a deterministic path is set in motion, all future events become regimented by that singular path of causation. However, through his affinity for Many Worlds theory, Hawking is trying to have his determinist cake and eat it too. If discrete events are pre-destined by a deterministic chain of causality, then a specific event that transpires in an infinite number of universes must have, in every
case, been generated by the same chain of deterministic events in every other universe where it transpires, e.g., if I am typing the word "infinity" in an infinite number of alternative universes, then the sequence of events (from the big bang to the present) that have delivered me to the moment where I am typing "infinity" into this computer must have been identical in all cases. If determinism holds water, then identical events require identical chains of causality.

If, however, one were to argue that identical events can be generated by differing chains of causality, then the determinist argument collapses. If there are indeed Many Worlds wherein precisely the same events (i.e., the author typing "infinity") can be generated through varying chains of causality (i.e., in one universe the author is the King of Spain, in another the author is blind, in another the author is vegan, etc.), then one cannot sustain the claim that a particular pre-existing cause is required to generate a specific, singular and unvarying outcome. Under such circumstances, random events would generate specific outcomes, and, by definition, random causation is non-deterministic. Furthermore, from one moment to the next, Many Worlds emphasizes that any single cause can generate an infinite variety of outcomes, which is another way of saying that, far from determining one single outcome, any single event propagates an infinity of alternative outcomes. If that is indeed the case, then, with the help of Many Worlds theory, Hawking has just convincingly demonstrated that the universe is non-deterministic.

Once again, the determinist perspective asserts that there is one, and only one, pre-determined outcome that can precipitate from any specific sequence of pre-existing causes. Many Worlds, however, is based upon the idea that there are an infinite, or, to address the central point of this discussion more directly, an indeterminate multiplicity of outcomes that can and do precipitate from any specific event. Of course, the point that Hawking was driving at was that, in every case where events transpire, any and every outcome, no matter how various or exotic, is determined by preceding events. However, as mentioned above, this boils down to nothing more than a case of having one's cake and eating it too. For determinists
there are no degrees of freedom: pre-existing events determine specific outcomes. Yet, Hawking advances a self-contradictory argument by insisting that pre-existing events determine outcomes, but, depending upon which universe one happens to inhabit, the same set of pre-existing events can (and does!) generate an infinite variety of outcomes. Rather than a rigidly deterministic universe, Hawking's Many Worlds perspective paints a picture of a wide open universe. Why would a deterministic chain of events in one universe produce differing outcomes in other deterministic universes? It doesn't make sense. If a hardcore determinist like Hawking admits that a singular chain of events literally can and does produce an endless variety of outcomes, then it is not reasonable to insist in the very next breath that any single event can produce one and only one pre-determined outcome. If a single event can produce an infinity of possible outcomes, then the universe is, by definition, non-determinate. Hawking and his determinist friends cannot have it both ways.

The universe is indeterminate, and agents have the power to innovatively influence the present and future in explicit defiance of the socio-environmental controls that limit the options of non-agentic creatures. For agents, the present has never been entirely determined by constraints from the past, and the future is an illimitable expanse that stretches to the very limits of the human imagination and far, far beyond.

References

Byrne, Peter, 2010. The Many Worlds of Hugh Everett III: Multiple Universes, Mutually Assured Destruction, and the Meltdown of a Nuclear Family. Oxford: Oxford University Press.
Hawking, Stephen, and Leonard Mlodinow, 2010. The Grand Design. New York: Bantam Books.
McGettigan, Timothy, 2011. Good Science: The Pursuit of Truth and the Evolution of Reality. Lanham, MD: Lexington Books.
Popper, Karl, 1988. The Open Universe: An Argument for Indeterminism. New York: Routledge.

Thanks to Wikimedia Commons for the photo:
commons.wikimedia.org/wiki/File:Stephen_hawking_2008_nasa3.jpg
FIVE The Sacred and the Profane Religion, Military Occupation, and Intolerance in the Age of Reason
Can we all get along? It appears not. As of February 25, 2012, the death toll in Afghanistan keeps climbing (28 killed in one week of rioting), but this time it's not because of terrorism, or because of some sneaky campaign by the Taliban or al Qaeda. No, on this occasion it's because some American troops accidentally (or, perhaps, intentionally) incinerated a number of Qur'ans, the sacred text of the Muslim faith.

It is not yet clear precisely who is to blame for this grievous error in judgment, but what we do know is that a number of Afghans were somehow able to recover partially-incinerated Qur'ans from a rubbish heap at a US military base. Once in possession of the Qur'ans, the Afghans blamed US personnel for setting fire to the Qur'ans. In response, US military leaders in Afghanistan have insisted that they will need to thoroughly investigate the incident before drawing any definitive conclusions; however, more than one of those commanders has also simultaneously admitted some level of culpability by openly expressing regret, and, in some cases, even apologizing for the incident in question. Far from forgiveness, restive throngs of Afghans immediately began to mass at US bases throughout Afghanistan. Though US leaders up the chain of command all the way to President Obama
have issued apologies over the past week, Afghans have not been in the mood to forgive and forget. While they have solemnly tolerated the ignominy of a seemingly endless occupation by the US military, the insult to their religion is the straw that has finally broken the camel's back. All week long, Afghans have been venting their rage on US personnel, which, though understandable, is deeply unfortunate. The vast majority of soldiers who have become the target of Afghan outrage doubtless had little or nothing to do with the destruction of the Qur'ans. Nevertheless, because they are wearing US uniforms (or, in other words, because they fit into a clearly-defined and, at the moment, thoroughly-despised profile), in the eyes of Afghans they have become objects of hatred, scorn, and violence. As with all hate-inspired crimes, it is the innocents who suffer. There is no justice in hate; hate only serves to amplify hate. Worse still is when hate crimes are inflicted in the name of a religion that, in truth, professes a much higher moral standard.

The war on terror is just going to keep getting uglier by the day until people in the west begin to see the side of Islam that embraces peace, love and forgiveness every bit as much as Christianity does. For now, westerners tend to see the nasty side of Islam repeated over and over again: jihad, violence and destruction. It may be unfair, but this has increasingly become the master narrative in the west, and Muslim leaders have got to find a way to deal with and rewrite that narrative.

On the flip side, if Americans are surprised by the scale, scope, and duration of the riots that are taking place in Afghanistan, they shouldn't be. The Afghans have been kicked around for a long time, most recently by the US, but before that there was the Taliban, and before that there was the USSR, and before that... Well, I guess you get the point. The Afghans are a put-upon people, but they are also a fractious, and a fiercely independent people. Since time immemorial, no single occupier has succeeded in cheerfully unifying Afghanistan. Afghans have their own opinions, and they are willing to fight for them. Sadly, this is something that the US seemed to understand in the 1980s when the US was an unflinching ally of Afghanistan and supported Afghan freedom fighters in opposing the condemnable Soviet occupation. When the US fought side-by-side with the Afghans to help them achieve independence from Soviet aggression, the US and Afghans could be friends and allies. In becoming the occupying power, the US has somehow forgotten that a fiercely independent people, like the Afghans, don't want to
be told how to do anything, not even how to be a free, independent, democratic society. As the current melee worsens, it is increasingly difficult to believe that the US presence in Afghanistan is a friendly occupation.

It is very possible that, one day, the US will be a friend and ally to Afghanistan again. We have done it before, and we can do it again. But, importantly, we can't do it as an occupying power. If we're going to clear up the burgeoning mess in Afghanistan, then the US will need to find its way back to its lost friendship as a collaborator with Afghanistan, not an occupier. It's a long shot, but it's possible. What we have to remember is that the Afghans counted the US as their friend and ally when the US was committed to helping Afghanistan become liberated from the tyranny and oppression of an occupying power. What Afghans want most of all is what Americans want most of all: freedom, self-determination, justice, democracy. So long as the US continues its occupation, the US will continue to look more like the enemy of Afghanistan, and the enemy of freedom. Thus, we have got to find a way to pull out, and the sooner, the better.

President Obama has a plan to end the occupation of Afghanistan. If we want to renew our friendship with Afghanistan, and if we want to end the violence in Afghanistan, then we need to move forward right away with Obama's plan to end the occupation. Peace and freedom for everyone. Peace and freedom now.

Thanks to Wikimedia Commons for the image.
commons.wikimedia.org/wiki/File:Folio_Quran_Met_29.160.23.jpg
SIX The New World Order Rush Limbaugh vs. Empowered Women in the 21st Century
Freedom of speech is sacrosanct. Everyone should have the right to say whatever they want. Also, it is arguably more important to protect free speech for those ideas that we dislike than it is for those that we admire. To paraphrase Voltaire, "I may disagree with what you say, but I will fight to the death for your right to say it." If we only protect free speech rights for those with whom we agree, then we aren't really protecting free speech. Instead, we're advocating a diabolical form of intellectual tyranny, i.e., it's my way or the highway.

That said, it is also important to acknowledge that those who exercise the right to free speech must also take responsibility for their statements. While speakers may have an iron-clad right to be controversial, nevertheless, those speakers also need to appreciate that they are not speaking into a vacuum. Free speech does not require listeners to uncritically accept any idea that a speaker wishes to propagate. Quite the reverse. In an environment where free speech prevails, audiences have as much right to respond as speakers have to pontificate. Thus, free-speakers beware. Free speech is not a right to be taken lightly; it is a fundamental right that is freighted with responsibility, and reckless speakers often find that there are consequences aplenty for those who speak freely and foolishly. Enter Rush Limbaugh.
What was the man thinking? Or, more to the point, was the man thinking? Regardless, Rush is now acquainted with the fact, as never before, that there are consequences for speaking freely and foolishly.

Rush Limbaugh has made a career out of being a polemicist. Indeed, over the past couple of decades, Limbaugh has managed to build up the largest radio audience in the US by lashing out angrily at the people, politics and cultural events that do not comply with his right-wing worldview. Until now, Rush's fan base has had a seemingly insatiable appetite for his fevered denunciations of favored targets, such as: Democrats, self-defining women, rights activists, college professors, etc. The more abusive that Rush's rhetoric has become, the more exultant his listeners have been with his broadcasts. Indeed, Rush's angry little universe ran as smoothly as clockwork until Rush decided to vent his spleen on Sandra Fluke.

On February 23, 2012, Sandra Fluke, a law student at Georgetown University, testified before a US House of Representatives committee on the topic of birth control. The gist of Fluke's testimony was that women's birth control treatments should be covered under the Affordable Care Act (ACA). Some groups, such as the US Catholic Church, have sought permission to opt out of providing birth control because of their doctrinal opposition to birth control. However, Sandra Fluke contended that, having arrived in the 21st century, birth control should be viewed as a basic form of healthcare that is as essential to women as Rush Limbaugh's Viagra pills are to him.

In response to Fluke's testimony, Limbaugh launched a now infamous tirade during which Limbaugh called Fluke a "slut" and a "prostitute." Limbaugh's justification for these insults was that, by exhorting the federal government to include coverage for birth control under the Affordable Care Act, Fluke was asking the government to pay her for having sex. Apart from the fact that Limbaugh's comments displayed an appalling ignorance of Sandra Fluke's testimony, as well as a basic misunderstanding of what birth control is and how it is used, Limbaugh also catastrophically misjudged how his comments would be perceived. In the past, Rush regularly hurled abuse at well-spoken, politically-savvy, intelligent women (people whom Limbaugh would often describe as "femi-nazis") with impunity. However, Limbaugh's denunciation of Sandra Fluke finally crossed an invisible line that unforgivably alienated Limbaugh from his supporters. Lots of folks who had stood behind Rush Limbaugh and cheered him on as he slandered a myriad of left-
leaning causes and characters summarily abandoned Limbaugh in the wake of his indictment of Sandra Fluke.

Why the sudden change? Perhaps it was due to the harshness of the terminology that Limbaugh used to excoriate Sandra Fluke that he ran afoul of his supporters. Truly, it is difficult to find humor in the profoundly stigmatizing labels that Limbaugh sought to apply to Sandra Fluke. Or, perhaps it was widespread revulsion over Limbaugh's suggestion that women who obtain government-supported birth control should be required to video record their sexual liaisons and post them on the Internet. Who besides a sick-minded voyeur would even dare to propose such a perverted idea in public, much less to a nationwide audience of morally-indignant right-wing Bible-thumpers? What was the man thinking?

But I think Limbaugh's most significant miscalculation was his failure to realize that many of his listeners (male and female alike) were themselves users of birth control and, consequently, they could not help but interpret Limbaugh's vilification of Sandra Fluke as a personal attack upon themselves and their loved ones. Obviously, this would be news to Rush Limbaugh, but in post-industrial societies like the US, just about everyone, regardless of political affiliation, uses birth control. In recent decades, birth control has made it possible for women to begin completing college degrees at a rate that exceeds their male counterparts. Further, birth control has also made it possible for women to embark on, and persist in, career trajectories that have enabled women to accumulate enormous affluence and political influence. Thus, in lashing out at Sandra Fluke, Rush Limbaugh was also heaping condemnation on the largest and, increasingly, the most socially, politically, culturally, and economically powerful bloc of people in the world: upwardly-mobile women.

Apparently, Limbaugh didn't get the memo, but there's a new world order in the 21st century. More women are closer to the apex of power than ever. Being the king of his own little universe, Rush Limbaugh failed to appreciate the sleeping giant that he was awakening via his denunciations of Sandra Fluke. Apparently, Limbaugh thought that his media empire was sufficiently powerful to outmaneuver the vast and fast-growing demographic that he has repeatedly and blithely degraded as "femi-nazis." Now that practically all of Limbaugh's advertisers have fled from his program in a sudden and mass exodus, I think it is fair to say that the competition between Rush Limbaugh and empowered 21st century women is
decidedly one-sided. Women have accumulated sufficient power to swat Rush Limbaugh like the annoying little gnat that he has become. That said, I, for one, do not wish to see Rush Limbaugh driven off the airwaves. Limbaugh should retain the right to continue speaking freely and foolishly to any audience that cares to tune in to such a bigoted, near-sighted lummox. Having chosen his battle and lost it, resoundingly, I think Rush Limbaugh should continue prattling away on the airwaves from now until kingdom come. His diminishing influence should serve as an example to all of the enormous virtues and perilous pitfalls of free speech. Rush has no one to blame for his downfall but himself. Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Rush_Limbaugh_by_Ian_Marsden.jpg
On April 10, 2012, Governor Bill Haslam allowed Tennessee's House Bill 368,* the New Monkey Bill, to become law without his signature. Leaving aside the Governor's contemptibly weak-kneed political stance, HB 368 has once again propelled the state of Tennessee into the limelight as the arch nemesis of modern science. Tennessee originally laid claim to that dubious distinction by passing the Butler Act, or the Old Monkey Bill, in 1925. The Butler Act was a Tennessee law that prohibited teachers from challenging the biblical account of human origins. Subsequently, John Scopes, a teacher in Dayton, Tennessee, was charged on May 5, 1925 with violating the Butler Act. The charges filed against Scopes led to one of the most highly-publicized courtroom dramas of the 20th century, the State of Tennessee v. John Thomas Scopes, more popularly known as the Scopes Monkey Trial. The Scopes Trial pitted religious dogma against science and, as such, brought widespread theological apprehensions about scientific advancements to the forefront of the national consciousness. Americans have been only too happy to embrace the modern conveniences (cars, planes, medicine, phones, computers, etc.) that science has made possible. However, many of those same Americans have often expressed outrage over the challenges that secular science poses for their religious beliefs. Indeed, the most reprehensible scientific attack on religion was
lodged by none other than that 19th century English reprobate, Charles Darwin. Arguably, Darwin's (1859) theory of evolution was far less revolutionary than Albert Einstein's relativisation of the cosmos in 1905, but you would be hard pressed to arrive at such a conclusion based upon the public's reaction. After 1905, Einstein was loved and revered all over the world, whereas, ever since 1859, Darwin has been denounced and reviled. Perhaps because Einstein's egg-headed warping of the cosmos was more abstract, common folk have never felt as threatened by the Einsteinian revolution as they have been by the Darwinian. Darwin's theory hits close to home. Worse, it implies an undeniable demotion for humankind. Whereas the Bible states that humans were fashioned by an omniscient creator in that exalted being's own image, Darwin asserts that humans emerged out of the same muck and mire as every other living creature. Since Darwin's theory endeavors to explain the origins and evolution of life, it cannot help but run afoul of competing creation stories, whether scientific or theological. People who are fond of imagining that an almighty god sculpted humanity in his own sacred image generally do not appreciate Darwin's contention that lower animals, like apes and monkeys, are practically humankind's kissing cousins. Thus, the Butler Act. Interestingly, the Scopes Trial ended in a conviction: John Scopes was found guilty of having illegally taught Darwinism to his students. However, instead of Scopes being sentenced, his conviction was quickly overturned on a technicality. As a result, the Scopes Trial failed to produce a clear victory for supporters of either creationism or evolution. Instead, the controversies that swirl around the collision of evolutionary science and theology, i.e., The Monkey Wars, have raged on unabated. The latest iteration of The Monkey Wars comes in the form of Tennessee House Bill 368. The New Monkey Bill employs language that has been tempered by the culture wars to cultivate the impression that Tennesseans are intent upon serving the better interests of science. For example:
(a)(1) The general assembly finds that (a)n important purpose of science education is to inform students about scientific evidence and to help students develop critical thinking skills necessary to becoming intelligent, productive, and scientifically informed citizens
What right-minded scientist could possibly object to that? However, the anti-science objectives of the New Monkey Bill sneak into the picture in the very next passage:
(a)(2-3) The teaching of some scientific subjects, including, but not limited to, biological evolution, the chemical origins of life, global warming, and human cloning, can cause controversy; and (s)ome teachers may be unsure of the expectations concerning how they should present information on such subjects.
As outlined above, Darwinian evolution has been controversial from the very moment that Darwin published On the Origin of Species in 1859. Consequently, throughout its long history, anyone who has ever assumed the responsibility of teaching evolution has had to contemplate the possibility that their lessons might provoke the ire of anti-evolutionists. In an effort to anticipate the outrage that teachers might encounter when presenting controversial scientific subject matter in their classrooms, Tennessee House Bill 368 proposes a comprehensive battle plan:
(c) The state board of education, public elementary and secondary school governing authorities, directors of schools, school system administrators, and public elementary and secondary school principals and administrators shall endeavor to assist teachers to find effective ways to present the science curriculum as it addresses scientific controversies. Toward this end, teachers shall be permitted to help students understand, analyze, critique, and review in an objective manner the scientific strengths and scientific weaknesses of existing scientific theories covered in the course being taught.
Intriguingly, the list of officials whom the New Monkey Bill charges with the responsibility of managing the disruptive effects of controversial science notably excludes scientists. And therein lies the rub. Scientists have objected to the New Monkey Bill because it empowers non-scientists with the exclusive right to pass judgment on scientific knowledge toward which they bear a subjective (not scientific!) animus. Given that the New Monkey Bill explicitly excludes scientists, it is, by definition, an anti-scientific law. As a result, the New Monkey Bill is certain to exacerbate longstanding anti-scientific sentiments, and, in any state where such a law exists, science education is certain to suffer.
And that, I believe, is precisely the point. The people who composed the New Monkey Bill, and the spineless governor who stood idly by as it became law, are intent upon undermining science education. The Old Monkey Bill brazenly trumpeted its anti-scientific objectives, whereas the New Monkey Bill is more subtle. Nevertheless, it is still designed to achieve precisely the same objective: privileging anti-science and undermining science. The Monkey Wars persist. May the fittest paradigm survive. *Complete Text of Tennessee House Bill 368: AN ACT to amend Tennessee Code Annotated, Title 49, Chapter 6, Part 10, relative to teaching scientific subjects in elementary schools. BE IT ENACTED BY THE GENERAL ASSEMBLY OF THE STATE OF TENNESSEE: SECTION 1. Tennessee Code Annotated, Title 49, Chapter 6, Part 10, is amended by adding the following as a new, appropriately designated section: (a) The general assembly finds that:
1. An important purpose of science education is to inform students about scientific evidence and to help students develop critical thinking skills necessary to becoming intelligent, productive, and scientifically informed citizens;
2. The teaching of some scientific subjects, including, but not limited to, biological evolution, the chemical origins of life, global warming, and human cloning, can cause controversy; and
3. Some teachers may be unsure of the expectations concerning how they should present information on such subjects.
(b) The state board of education, public elementary and secondary school governing authorities, directors of schools, school system administrators, and public elementary and secondary school principals and administrators shall endeavor to create an environment within public elementary and secondary schools that encourages students to explore scientific questions, learn about
scientific evidence, develop critical thinking skills, and respond appropriately and respectfully to differences of opinion about controversial issues. (c) The state board of education, public elementary and secondary school governing authorities, directors of schools, school system administrators, and public elementary and secondary school principals and administrators shall endeavor to assist teachers to find effective ways to present the science curriculum as it addresses scientific controversies. Toward this end, teachers shall be permitted to help students understand, analyze, critique, and review in an objective manner the scientific strengths and scientific weaknesses of existing scientific theories covered in the course being taught. (d) Neither the state board of education, nor any public elementary or secondary school governing authority, director of schools, school system administrator, or any public elementary or secondary school principal or administrator shall prohibit any teacher in a public school system of this state from helping students understand, analyze, critique, and review in an objective manner the scientific strengths and scientific weaknesses of existing scientific theories covered in the course being taught. (e) This section only protects the teaching of scientific information, and shall not be construed to promote any religious or nonreligious doctrine, promote discrimination for or against a particular set of religious beliefs or non-beliefs, or promote discrimination for or against religion or non-religion. SECTION 2. By no later than the start of the 2011-2012 school term, the department of education shall notify all directors of schools of the provisions of this act. Each director shall notify all employees within the director's school system of the provisions of this act. SECTION 3. This act shall take effect upon becoming a law, the public welfare requiring it.
Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Darwin%27s_ape.jpg
EIGHT The Graveyard of Empires
What the hell is going on? On Thursday, April 19, 2012, the Los Angeles Times published photos of US soldiers posing with body parts of Afghans that they had killed. War may be hell, but sometimes it is also an expression of pure stupidity. A long time ago, on October 7, 2001, the US began dropping bombs on Afghanistan. Those aggressions were a response to the 9/11 attacks that had been masterminded by a condemnable horde of terrorists who were holed up in the Hindu Kush Mountains in Afghanistan. As such, Operation Enduring Freedom, as the military campaign was originally titled by President George W. Bush, proceeded under a legitimate, if not altogether trouble-free, mantle of moral authority. As of 2001, the US was clearly the aggrieved party. The terrorists had struck first, and Operation Enduring Freedom could be characterized as a measured and appropriate response to an unwarranted act of atrocious aggression. That was then. In the long years since the US launched Operation Enduring Freedom, endless miscues have transformed the mission in Afghanistan from an unprecedented early success into America's longest and, increasingly, messiest war. In the weeks
following the launch of Enduring Freedom, the US bombing campaign seemed to be making a mockery of the ancient truism that Afghanistan was the Graveyard of Empires. Where previous would-be conquerors, from Alexander the Great all the way to the Soviet Union, had gotten bogged down in interminable, unwinnable struggles, Operation Enduring Freedom swept the Taliban and al Qaeda out of their mountain strongholds like so much dust before a broom. The initial bombing campaign ended in a matter of weeks as the US succeeded in securing control over all strategic cities and territories within Afghanistan. Further, this military lightning strike enabled the US to install a new government, the Islamic Republic of Afghanistan, headed by a democratically-elected leader, Hamid Karzai. By 2004, it appeared as though the US mission in Afghanistan had been all but accomplished. Except for a bit of mopping up, it seemed as though the US was poised to transfer control of Afghanistan to its newly-elected leader. In practically every respect, Operation Enduring Freedom could be characterized as a monumental success. Then, foolishly, the Bush Administration shifted its focus to Iraq. Don't get me wrong, Saddam Hussein was a deplorable human being; however, if we employ that criterion as a justification for deposing world leaders, then the US would be obliged to topple practically every government on the planet. I wonder if Dubya was acquainted with the old adage, "People in glass houses..." Anyway, as the US shifted its focus to Iraq, the situation in Afghanistan went from under control to unwinnable quagmire. In the end, Operation Enduring Freedom succeeded in snatching defeat from the jaws of victory. What a waste. Having lost the opportunity to secure a lasting peace in Afghanistan, the US military effort has become bogged down in the Graveyard of Empires. At best, and in spite of occasional military surges, the endless US mission could be characterized as an exercise in treading water. At worst, the US military has succumbed to a collective case of post-traumatic stress disorder. In recent months, the US has gone from the PR disaster of burning Qurans, to Sergeant Robert Bales' mass murder of Afghan civilians, to yesterday's news of troops mugging for cameras with their gruesome trophies. The only explanation is that, after long ago having lost its moral compass, the US mission in Afghanistan has completely lost its grip on reality. In short, the US war in Afghanistan has gone
insane. This, I believe, is why it is becoming increasingly difficult to convince ourselves, much less the Afghans, that the US mission in Afghanistan is capable of achieving any worthwhile goals. Apart from ravaging our troops and the Afghan people with intensifying cases of PTSD, what are we accomplishing in Afghanistan? Further, what can we hope to accomplish when every step that we take carries us further into our collective insanity? An insane process cannot produce a rational outcome. The only solution is to end the insanity. The war in Afghanistan must end. And the sooner, the better. Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Afghanistan_War_2001.jpg
NINE Penny Wise, Pound Foolish Major Airlines Considering Plan to Charge Passengers by the Pound
(SATIRE) Some major airlines, such as Universal Air, have begun considering plans to charge passengers by the pound in response to Allegiant Airline's recently-announced plan to charge up to $35 per carry-on bag. "It's simple math," stated Howard Fine, a spokesperson for Universal Airlines, a rival of Allegiant Air. "Heavier passengers cost more to ship from point A to point B." In response to questions about consumer backlash over yet another scheme to increase the cost of airline travel, Fine downplayed such concerns by stating, "Our new Pay-by-the-Pound Plan is much different than the fees that have recently been imposed by our competitor, Allegiant Airlines. Paying by weight is not a new idea. The post office has been charging by the ounce to ship packages for decades. We're simply taking the same idea and applying it to airline travel." In 2009, most major airlines began charging substantial fees for checked luggage, in some cases up to $60-$70 per bag. Since then, many passengers have successfully avoided paying those costs by increasing the amount of carry-on luggage that they take on their flights. Howard Fine explains, "Fees for checked luggage haven't really worked. Sure, passengers are checking in fewer bags, but now they are carrying on a lot more luggage. It's sort of a game of
chess. Airlines make one move, and then passengers make the next, but in the end the airlines are still carrying the same amount of weight." When questioned about Allegiant Air's new plan to charge for carry-on baggage, Howard Fine commented, "Passengers aren't going to like it, and I don't blame them. In fact, that's why Universal Air would never impose a carry-on fee." Fine elaborates, "Carry-on fees seem punitive. It's like passengers figured out how to get around checked luggage fees, and now they are being punished by airlines like Allegiant for being savvy consumers." Yet, Fine remains confident that passengers will view Universal Airline's Pay-by-the-Pound Plan differently: "Carry-on fees are punitive, but charging by weight is actually beneficial." When asked to elaborate, Fine added, "Well, it's simple really. Passengers who weigh less will pay less." It remains to be seen whether passengers will react positively to Universal Airline's Pay-by-the-Pound Plan. However, for his part, Howard Fine remains upbeat, "If you think about it, it's kind of like Universal is creating an incentive for passengers to be fit and trim. In a sense, we're creating a health plan for our passengers." Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Pirate_Flag.svg
TEN Dear Arizona, If Obama's Not American, Then Neither Are You
In a recent email exchange, Ken Bennett, Arizona's sitting Republican Secretary of State, dredged up a particularly malodorous scoop of political muck by stating that "if Hawaii can't or won't provide verification of the president's birth certificate, I will not put his name on the ballot." The idea that any US President would be required to flash his ID in order to legitimize his presidency represents a monumental presumption on the part of Obama's detractors--no matter what their political aspirations happen to be.* Please understand, I'm not suggesting that we should treat US presidents like sacred cows. Far from it. Presidents are simply regular folks who have found some way to percolate to the top of the national political hierarchy. As such, being as fallible as anyone else, presidents must be held accountable. This means that presidents should be praised for their successes and castigated for their mistakes. Nothing helps to ensure a higher standard of leadership than exacting public scrutiny. The more, the better. That said, the campaign to paint Obama as an undocumented alien deviates dramatically from the rational road of constructive political criticism. Birthers' shrill protestations are not intended to help Barack Obama become a more effective president, rather,
their goal is to invalidate our forty-fourth president's mandate: Barack the Pretender, an "other" who lacks the birthright to legitimately occupy the lofty office that he has earned. Clearly, this claptrap is born of the same spirit that perpetuated Jim Crow well into the twentieth century. In the past, US presidents have never been asked to produce proof of citizenship because, quite simply, they were all a bunch of white guys. And, since white guys have been running the world for a long time, it just seems natural for white guys to continue doing so. Then, along comes Barack Obama, a president who doesn't quite fit the Mount Rushmore image. Heaven forfend!! Yet, by any measure, Barack Obama is a brilliant political strategist who, in spite of his humble origins, managed to rise meteorically to the very apex of the US political establishment. Certainly, there have been similar rags-to-riches success stories among US presidents and, like Abraham Lincoln, path-breaking political leaders of this stripe tend to be graced with exceptional talent. For example, does anyone recall that George W. Bush dumped the worst economic calamity since the Great Depression on his successor? Though it required a herculean effort, President Obama managed to prevent the worst extremes of that particular crisis: a decade of 20-30% unemployment with Hoovervilles, hunger and every other form of economic hardship stalking the land. Instead, the Obama Administration succeeded in getting the country back on its feet again (admittedly, the one-percenters have reaped the lion's share of the benefits from the recovery thus far) before the midterm elections. Truly, an historic feat of economic wizardry. Of course, no good deed goes unpunished. Since Obama's detractors can't attack him on his record, they have had to search for other, more scurrilous means with which to besmirch his character. Thus, stooping about as low as a snake in the grass can go, birthers have raked up the deplorable muck of American racism. None too subtly, birthers have asserted that because Obama doesn't fit the demographic profile of a typical American president, Obama can't really be the president. Well, I must admit that birthers and I do agree on one particular point: Obama is not a typical US president. He's a heck of a lot smarter than most.** Obama may not be typical, but I'm happy to say that his is the new face of America, the Land of Opportunity. Birthers may not like it, but Obama's ascendancy is an illustration of the fact that democracy is not dead. Though democracy will ever and always remain a work in progress, on Obama's watch the US has narrowed
the gap between our democratic principles and practices. Would that we could say as much about all US Presidents. *Bennett is reported to be in the process of establishing the essential ultra-right-wing creds to displace the right-wing occupant of the Arizona Governor's office. **The other post-millennial president leaps to mind here. Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:President_Barack_Obama%27s_long_form_birth_certificate.jpg
"Wouldn't it be great to be able to communicate with the computer like Captain Picard or Captain Kirk does on Star Trek?"* -- David Ferruci, Principal Developer for IBM's Watson. Despite all the euphoria, I am not going to celebrate Osama's death. Sure, Osama was a thorn in America's side for a long time, but, like we all learned in kindergarten, two wrongs don't make a right. Jean Baudrillard (1989) claimed that Americans were simulations. In other words, Baudrillard argued that, not only did Americans have a flair for creating elaborate fantasies, but Americans were also consumed by those fantasies (Baudrillard, 1994). Indeed, it was this particular insight that inspired the Wachowski Siblings to compose their epic Matrix trilogy (Merrin, 2005). Admittedly, Baudrillard's observations often border on the absurd; however, he does make the valid observation that Americans are becoming increasingly wedded to media technologies, so much so that many find it distressing to unplug even briefly from their digitally-mediated realities (Dretzin, 2010). For Baudrillard, the American obsession with simulations was an indicator of the futility of the postmodern era: he believed that when people began caring more about illusions than reality, the significance of human endeavors would dwindle to vaporous
futility. While I concur with a number of Baudrillard's observations, I differ regarding the utility of simulations. More than once, artfully constructed fantasies have literally altered the course of human events. For instance, in the nineteenth century, Jules Verne concocted outlandish visions of the future that thrilled his many readers. While it would be an overstatement to suggest that Verne's science fiction fantasies laid the groundwork for the historical events that were to follow, still, it is fair to say that a number of Verne's diehard fans made a concerted effort to transform his fantasies into reality. For example, in Verne's day, a fully electric submarine was a work of pure imagination. However, in the century that followed, engineering marvels that bore a striking resemblance to Captain Nemo's fictional submarine began plumbing the depths of the seven seas. Indeed, it is noteworthy that the first nuclear submarine in the US fleet was named the Nautilus. Thus, in certain respects, one can argue that the future is shaped by fantasies. More recently, the adventures of Captain Kirk and the Starship Enterprise have served as an inspiration for an entirely new breed of future-seekers. Though some might think that a (brilliantly!) cheesy TV series from the 1960s would have little impact on the work of real scientists, in fact, Star Trek has captured the imaginations and influenced the work of more than a few important innovators (Jones, 2005). For example, Martin Cooper has stated that he was motivated to invent the cell phone after watching Captain Kirk use a wireless communicator on Star Trek. Of course, everyone knows that Captain Kirk did no such thing. For the literalists among us, it is essential to point out that Captain Kirk is a fictional character and none of the technology that he used was real. In short, Star Trek is a work of pure imagination and, therefore, it is demonstrably disconnected from real events in the real world. ...or is it? Ever since Star Trek hit the airwaves, enthusiasts have been determined to erase the boundary between the realm of Star Trek fantasy and the real world. Although many Trekkies have become submerged in what Baudrillard would characterize as a pointless simulation (Nygard, 1997), others have derived sufficient motivation from Star Trek to successfully redefine reality (Jones, 2005). Can fantasies transform reality?
It is worth noting that, just as the first nuclear submarine was named the Nautilus, the first space shuttle orbiter was named the Enterprise. Further, David Ferruci, the principal developer for IBM's Watson (www.watson.ibm.com**), has stated that, in part, his motivation for building Watson was to create an artificially intelligent computer with which people might one day converse as Captain Kirk does with his computer on the Starship Enterprise. Though the source of Ferruci's motivation has the charm of a childhood fantasy, it is also much more than that. Rather than getting lost in the fog of a captivating dream, Ferruci translated his enthusiasm for Star Trek into a wildly successful initiative to construct the smartest machine on earth (Bicks, 2011). In doing so, Ferruci literally shifted the boundary between fantasy and reality: where, once upon a time, computers that were capable of outplaying human trivia experts were the stuff of fantasy, thanks to Ferruci's Star Trek-inspired dream, they are now a reality. As a result, though Baudrillard would surely disagree, I argue that fantasies often serve as a wellspring of creativity from which human agents derive the requisite motivation to redefine reality (McGettigan, 2011). Indeed, while David Ferruci and his colleagues endeavor to build the next generation of talking computers, other folks are pursuing even more (dare I say it?) enterprising goals. In collaboration with NASA, DARPA has recently announced the 100 Year Starship Study (100yss.org). The stunningly ambitious aim of this study is to design a spacecraft with galaxy-exploring capabilities that are eerily similar to those possessed by the Starship Enterprise. Is this merely a coincidence? I doubt it. For humans, the future is a process. If we want to live in a better, brighter future, then we need to dream big dreams today, and then do our utmost to transform those dreams into reality. Though pessimists like Baudrillard would surely sneer, it is evident that science fiction-inspired dreams have a demonstrated capacity to shift the boundary between fantasy and reality. Thus, for anyone who desires to go where no one has gone before, we can continue relying on Captain Kirk to get us there. Warp factor nine, Mr. Sulu!
References
Baudrillard, Jean, 1989. America. New York: Verso.
Baudrillard, Jean, 1994. Simulacra and Simulation. Translated by Sheila Faria Glaser. Michigan: The University of Michigan Press.
Bicks, Michael, 2011. Smartest Machine on Earth. Nova. PBS International.
Dretzin, Rachel (Director and Producer), 2010. Digital Nation: Life on the Virtual Frontier. Frontline. Boston, MA: Public Broadcasting Service, WGBH.
Jones, Julian, 2005. How William Shatner Changed the World. Vancouver, Canada: The Discovery Channel.
McGettigan, Timothy, 2011. Good Science: The Pursuit of Truth and the Evolution of Reality. Lanham, MD: Lexington Books.
Merrin, William, 2005. Baudrillard and the Media: A Critical Introduction. Waltham, MA: Polity Press.
Nygard, Roger, 1997. Trekkies. Los Angeles, CA: Neo Motion Pictures.
* A complete transcript of this interview is available at the following website: www.pbs.org/wgbh/nova/tech/will-watson-win-jeopardy.html
**Is it just me, or is the IBM researcher depicted in the photo atop this page attempting to mind meld with Watson? Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:William_Shatner_Star_Trek_Captain_Kirk_publicity_photo.png
TWELVE To Infinity and Beyond! New Frontiers in the Science Wars (An Excerpt from Good Science)
It's easy to beat up on postmodernists these days. Ever since the Sokal Hoax, the postmodernists' Waterloo, the science wars have been a rout. Once it became clear that postmodernism was incapable of distinguishing between valid scientific perspectives and gibberish, postmodernists bolted from the battlefield. This was a remarkable turnabout. For much of the 1990s, postmodernists insisted that modernist science was as good as dead. Modernists, you must understand, included scientists and anyone else who was naive enough to believe that science systematically produced better, more enlightening knowledge. Not so, claimed postmodernists. Under the guise of truth-seeking, postmodernists contended, science had woven a worldwide web of deceit. Certainly, some scientists might have been earnest believers in admirable principles; however, the net effect of scientific progress did little more than aggrandize the West at the expense of the downtrodden. And the prime culprit for those dark deeds was none other than the concept of truth. Whereas scientists tended to view truth as a benign standard against which to gauge scientific progress, postmodernists argued that truth was an evil instrument of cultural discipline. Ideas which complied with Western truth standards merited approval, whereas ideas that challenged the modernist truth regime were subjugated and marginalized. As a remedy, postmodernists advocated the end of truth. In this way, Western bias could be dethroned and all of the ideas that had been marginalized by modernism would finally get a fair hearing.
Of course, as Sokal illustrated, if we abandon truth, we also abandon rationality. In a world of postmodern relativism, anything goes. Without truth standards, there is no way to distinguish between good and bad ideas: all ideas are equally valid. Which is a really bad thing, unless you're convinced that Hitler and Stalin were visionaries. I must admit, I was perfectly happy to see postmodernism implode. Postmodernism was a gutless theoretical movement that arrogated unto itself the right to criticize everyone else's ideas while failing to produce any worthwhile ideas of its own. The best part was that, throughout its meteoric rise, postmodernism constantly propounded the imminent demise of modern science. To that, all I can say is "Ask not for whom the bell tolls..." Clearly, it would have been more accurate for postmodernists to predict their own demise. However, accuracy was never a priority among postmodernists. So, where does that leave us? Irksome as it may have been for postmodernists, science has forged ahead (before, during and after the postmodern interlude) with nary a hiccup. The biggest threat to science during the past couple of decades was the Bush Administration. Compared to Dubya, postmodernism was like a gnat on a water buffalo's backside. If we can thank postmodernism for anything, it is for refocusing attention on the knotty issue of truth in science. Rightfully, Karl Popper should get most of the credit for problematizing the concept of truth. Of course, Popper took a much different view of the role that truth should play in humanity's never-ending problem-solving endeavors. Still, Popper made it clear that truth was not nearly as straightforward a phenomenon as most scientists, particularly positivists, liked to think. Truth is a challenging subject. Many people, whether they describe themselves as scientists or not, might insist that truth is nothing more than a description of facts. For example, it is true that the sun rises in the morning and sets in the evening. At first glance, such a definition seems perfectly reasonable: truth should correspond with facts. However, the danger of such a definition is that facts are not always what they seem. Take, for example, the fact that the sun rises and sets on a daily basis. Although that statement offers a plausible description of certain facts, nevertheless, it is not true. The sun does no such thing. Based upon what astronomers have learned over the past several centuries, we know that the sun does not orbit the earth. Instead, the earth's
rotation tends to instill the false impression that the universe revolves around earthlings. Our real relationship with the cosmos is very different. Additionally, for those who dwell near the earth's poles, rather than rising and setting on a daily basis, the sun often appears and disappears for months at a time. Consequently, facts often look very different depending upon one's perspective. That said, it is important to emphasize that there is an essential relationship between truth and facts. In other words, one can't say anything truthful without reference to verifiable facts. Thus, I might claim that I have spotted the Sasquatch in my backyard; however, unless I can produce hard evidence of such a mythical visit, no one should believe a word I say. Good scientists certainly wouldn't. Typically, good science can be understood as knowledge-seeking activities that assert a very clear linkage between truth and facts. Thus, for the most part, good scientists tend to view ideas that are not supported by facts (e.g., Sasquatches popping in for tea) as fantasies. Indeed, imaginative humans have a penchant for dreaming up all sorts of notions that, scientifically speaking, are rubbish. Consequently, good scientists usually draw a sharp distinction between facts and fantasies. Good science is devoted to the former and dismissive of the latter. In important respects, this perspective is entirely justifiable. Facts matter. However, I argue in Good Science that scientific progress is often contingent upon seeking truths that lie beyond established facts. In other words, fantasies can often inspire scientific progress that facts might otherwise impede. Often, in the most surprising ways, in the process of seeking truth, science has discovered new facts, and sometimes invented new facts (e.g., cures for age-old, previously irremediable illnesses; atomic particles that can be manufactured in laboratories, but that do not exist in nature; genetically engineered plants and animals in the form of GMOs; synthetically re-engineered, IT-mediated versions of time and space, such as cyberspace, etc.), that have instigated profound transformations in the nature of reality. In brief, science has routinely transformed reality by uncovering new truths and facts that have repeatedly transformed fantasies into reality. With the help of science, humans have repeatedly transformed the most far-fetched fantasies (e.g., plumbing the deepest depths of the seven seas, achieving aeronautically-engineered mastery of the skies, and, indeed, becoming the first terrestrial species to defeat gravity and redefine ourselves as extraterrestrials) into everyday realities. In doing so,
science has achieved an unparalleled status as the most wide-ranging and effective vehicle that humans have ever conceived for manufacturing beneficial social change (in other words, Progress). Let postmodernists stick that in their pipes and smoke it. Although critics of science will surely point out that the world is plagued by seemingly insoluble problems, many of which have been either invented or exacerbated by science (e.g., overpopulation, pollution, global warming, nuclear nightmares, etc.), I argue that crises have always been endemic to human civilization. In my opinion, the clearest path to the brightest future will be for scientific truth-seekers to pursue the most challenging problematics that the human imagination can invent and, thereby, commit the human race to a never-ending process of redefining reality. To infinity and beyond! Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:LEGO_Buzz_Lightyear_(3747754797).jpg
THIRTEEN Trumped by the Dorkusians Donald Trump's Evil Plan to Conquer the Planet
(SATIRE)
Hey, Donald, "Yuh fired!" Make no mistake about it, Donald Trump loves the limelight. He's a rich guy who loves to plaster his name on any object that isn't agile enough to jump out of his way. It's an ego thing--and it's also embarrassingly déclassé. As Thorstein Veblen once pointed out, folks who accumulate a lot of money often have an irrepressible penchant to show it off. In Trump's case, this usually involves purchasing large buildings and then emblazoning those structures with the largest, most garish "Trump" signs that the buildings will support. Like the pharaohs of old, Trump seems to believe that he who dies with the biggest building will win. Good luck with that, Donnie. Anyway, for narcissists of Trump's ilk, no amount of public attention is ever too much. In Trump's warped reality, excess is everything. As a result, Trump is clearly of the mind that there is no such thing as bad press. Why else would the man appear year-after-year on a pointlessly moronic TV series wherein an endless series of flunkies slavishly queue up to be fired by The Donald? The only thing that ever changes is the width of Donnie's comb-over.* Further evidence that Trump will do anything--no matter how lunk-headed--to keep his name in the headlines is The Donald's
latest salvo in the 'birther conspiracy.' In spite of the fact that President Obama has willingly produced the most compelling piece of evidence (an official birth certificate: www.whitehouse.gov/blog/2011/04/27/president-obamas-long-form-birth-certificate) to demonstrate that he was, indeed, born in the good ole' USA, Trump has continued to insist that President Obama is, in reality, an alien. As for the birth certificate that Barack Obama and the state of Hawaii have made available for all the world to see, Trump dismisses the document as a fake. Trump contends that anyone who is wealthy and powerful enough to make a serious run for the US presidency commands the necessary wherewithal to falsify official documents. Hmmm... On that score, I would advise The Donald to exercise special caution when throwing stones from the porch of his great big glass house. If The Donald is going to dispute the authenticity of President Obama's citizenship documents, then that invites the rest of us to do likewise for The Donald. Don't ya think? So, just for laughs, let's say that we ask Donald Trump to prove that he is really an American. Without even bothering to examine his 'official' birth certificate, I would be inclined to dispute it on its face. Why is Trump's birth certificate--whether authentic or doctored--any more valid than President Obama's? As Trump has already been good enough to point out, The Donald is certainly wealthy enough to fabricate an 'official' copy of his birth certificate. Therefore, I reject the veracity of Trump's citizenship documents for the very same reason that Trump disputes the authenticity of President Obama's. How's them apples, Donnie Boy? In addition, Donald Trump is not the only person who can cook up a cockamamie conspiracy theory. For example, I could concoct a wacko theory that goes something like this: In spite of what might be printed on his birth certificate, Donald Trump is not really a citizen of the United States--nor even of the planet earth. Instead, I might contend that Donald Trump is an alien from another planet. Even worse, he appears to be an alien that is bent on the conquest of the planet earth. Sure, Trump makes out like he's just a run-of-the-mill real estate baron, but, if you look a little closer, it begins to appear as though he is actually grabbing up territory as part of a secret invasion plot by the residents of his home planet, Dorkus Major. For millennia, the denizens of Dorkus Major have been licking their chops at the thought of monopolizing prime real estate on the outer fringes of the Milky Way. It turns out that Trump's
assignment has been to blend in as a noisy, obnoxious Manhattanite (the perfect cover) while buying up strategic pieces of real estate in every major North American city. Once Trump has acquired a sufficient number of properties, the Dorkusians will stage a multi-pronged hostile takeover by launching simultaneous surprise attacks from each of Trump's properties. Why else emblazon each of the buildings with the word 'Trump'? Don't you see the irony? It's like the aliens have already announced their intention to conquer the planet. Pitiably, we puny humans have failed to grasp the horrific truth that the Dorkusians have been dangling before our very eyes. The Dorkusians are coming, and Donald Trump is leading the charge! As evidence for my theory, and for the imminence of the Dorkusian attack, I point to the fact that Donald Trump's human-like disguise has been rapidly deteriorating of late. Trump's ever-expanding comb-over was the first clue, but the more compelling evidence is literally written all over Trump's face. Don't tell me that I'm the only one who has noticed the dramatic shift in Trump's facial complexion. In recent months, Trump has undergone an undeniable transition from a natural human-like skin tone to a deepening shade of Martian orange. Yikes! What else could this mean, but that Trump's overexposure to an alien planetary environment is gradually toxifying his Dorkusian immune system? It's like that time that the KGB poisoned the Ukrainian prime minister with dioxin. Isn't it obvious? Don't get me wrong, I'm not a racist. The United States is a nation of immigrants, and no one is more proud than I am of our nation's rich heritage of ethnic and racial diversity. But, for heaven's sake, we've got to draw the line somewhere, don't we? I am more than happy to embrace my fellow American brethren from Asia, Africa, Central and South America, Pacifica, Europe, etc. But ask me to include orange-skinned alien invaders from the planet Dorkus Major in the all-American group hug and I will flatly refuse. Not now, nor ever, will I accept the idea that planet-snatching Dorkusians should enjoy equal protection under the law. I don't care how you interpret the Constitution, it just ain't right. Therefore, because of the compelling and overwhelming evidence that lies before our very eyes, I believe that we must call for the immediate revocation of Donald the Dorkusian's (aka The Donald's) US citizenship. Furthermore, we must not allow The Donald to purchase any more prime real estate, no matter how tempting his offers may be. It is only by revoking The Donald's citizenship and preventing his acquisition of any additional
"Trump Towers" that we will be able to prevent the Dorkusian conquest of our planet. There is not a moment to lose. We must act now, or we will forever regret our hesitation. Down with the Dorkusians, and their evil, orange-skinned real estate-grubbing lackey! Long live America! The land of the brave, and home of redblooded, earth-dwelling patriots! Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Donald_Trump_announcing_latest_David_Blaine_feat_2.jpg
FOURTEEN Charles Darwin The Unlikely Revolutionary (An Excerpt from Good Science)
Charles Darwin is one of the most widely revered and enduringly controversial figures in the history of science. Both are exceptional feats for such a mild-mannered gentleman. Much of the controversy surrounding Darwin concerns the presumptive truthfulness of his evolutionary theory. Darwin's ideas about evolution were so ground-breaking that, more than one hundred and fifty years after the publication of On the Origin of Species (1859), many people still refuse to accept Darwin's basic precepts. Shortly after graduating from Cambridge, Darwin received an invitation to ship out on the HMS Beagle. Darwin's journey on the Beagle stands out as one of the world's most important scientific events. The journey was history-making not only because modern biology owes its existence to Darwin's circumnavigation, but, more specifically, because of the crucial change in Darwin's thinking that the voyage inspired. Darwin embarked on the HMS Beagle as a conventional creationist, but he finished the voyage as a radicalized evolutionist. An important stimulant to Darwin's evolving scientific ideas, and one of the few texts that Darwin carried aboard the Beagle, was Charles Lyell's newly published Principles of Geology (1832). Darwin studied Lyell's Principles carefully. Its dramatic departure from creationism lay in Lyell's theory of uniformitarianism. Lyell argued that creation had not taken place at a singular moment in
the all-too-recent past. Rather, creation was an ongoing process. The wind and rain that erode the earth generally do so at the level of dust motes. If afforded sufficient time, the relentless forces of accretion and erosion could build peaks that touched the clouds and then, particle by particle, reduce the most rugged range of mountains to a chain of low, rolling hills. If such a thing were true, then uniformitarian change could only be accomplished over extraordinary expanses of time. How many years would it take for the buffeting winds and seasonal rains to carve the Grand Canyon? The answer: eons upon eons, and many magnitudes more years than could easily fit within the creationists' young earth paradigm. As the journey progressed, Darwin finally arrived at the Galapagos Islands. Though he had witnessed many wonders during his travels, the bizarre menageries that he encountered in the Galapagos exceeded anything that he had yet imagined. From giant tortoises to endless varieties of land crabs and snails, Darwin marveled at the seeming adaptability and (dare he think it?) mutability of the species that he observed. Perhaps as he gazed upon the spectacle of marine iguanas bobbing in the surf, Darwin gave thought to a new and unsettling idea. Just as tiny and slow-paced geological changes had the net result, over the long haul, of introducing extraordinary alterations to the earth's geology, might not the same be true for living organisms? In other words, could the tiniest physiological changes accumulate sufficiently across time to bring about the transmutation of species? Darwin was both fascinated and disturbed by the implications of this idea. Though other early scientists, including Darwin's grandfather, Erasmus Darwin (1796), had toyed with the idea that life had evolved through random natural processes, no one had been able to explain how that could happen. However, it was in the Galapagos that Darwin finally witnessed the diversity of life forms that would help him reveal the basic mechanisms of evolution. In particular, while touring the Galapagos, Darwin noted a phenomenon that tends to arise with regularity among island species. After migrating to islands, species often undergo remarkable alterations, e.g., small animals grow, large animals shrink, dietary and habitat preferences shift, etc. Precisely why such a phenomenon should be so prevalent in diffuse island ecologies remained a mystery until Darwin studied the many varieties of finches on the Galapagos. Galapagos finches exhibit such a wide variety of shapes and sizes that Darwin initially misclassified the birds as entirely different species. Indeed, it was only after consulting with John Gould, an expert in bird
physiology, that Darwin realized that the finches were in fact much more closely related. As a result, Darwin experienced a revelation. Darwin postulated that island migrants encounter unique population pressures. That is, when migrant species initially arrive on the shores of hospitable islands, their populations tend to explode. However, success quickly becomes a migrant's worst enemy because rapid population growth tends to exhaust available resources. By the way, Thomas Malthus (2003) also had a substantial impact on Darwin's evolutionary thinking. Quite simply, Malthus argued that population tends to grow geometrically whereas food resources can only expand arithmetically. As a result, if unchecked, population growth among any successful species (i.e., lilies on a pond, humans in Manhattan, finches in the Galapagos, etc.) will, within the space of only a few generations, rapidly exhaust available food resources (a quick numerical sketch of Malthus's point appears after the list below). Under such circumstances, species that depend upon the seemingly limitless bounty of their local environs will soon discover that their luck has run out. Thus, Darwin speculated that, like many island success stories, a singular species of finch had migrated from South America to the Galapagos. Finding itself in an environment that was largely devoid of predators and competitors, the finches flourished. However, like so many of their migratory counterparts, the finches soon encountered a problem. Finch populations expanded to the point that food became scarce and, thus, competition for the island's limited resources became increasingly intense. In turn, the combined pressures of overpopulation, scarcity of resources, and the resultant competition for survival triggered a process that transformed the fortunes of finches on the Galapagos. It was this insight that enabled Darwin to crack the mystery of natural selection. In brief, Darwin argued that there are a number of crucial biological dynamics that energize the evolutionary process:
- Variation: whether it's dogs, grass, or fruit flies, organisms tend to vary from one individual to the next
- Overpopulation: from oak trees to salmon, parents tend to produce more progeny than can survive to maturity
- Struggle for survival: the overproduction of progeny tends to inspire a high-stakes competition to secure limited resources
- Survival of the fittest: individuals with advantageous genetic traits enjoy an edge in the competition for scarce resources
- Evolution through natural selection: winners of bio-ecological competitions survive and pass advantageous genetic traits to their offspring, which, in turn, brings about the gradual transmutation of species
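To make Malthus's geometric-versus-arithmetic claim concrete, here is a minimal Python sketch. The starting numbers, the doubling rate, and the fixed food increment are purely hypothetical illustrations (nothing here comes from Malthus or Darwin); the only point is that multiplicative growth eventually outruns additive growth.

# Hypothetical illustration of Malthus's claim: geometric (multiplicative) population
# growth eventually overwhelms arithmetic (additive) growth in food resources.
population = 100        # assumed starting population (arbitrary units)
food_supply = 1000      # assumed starting food supply (number of individuals it can feed)
growth_ratio = 2        # population doubles each generation (hypothetical)
food_increment = 200    # food supply gains a fixed amount each generation (hypothetical)

for generation in range(1, 11):
    population *= growth_ratio
    food_supply += food_increment
    status = "surplus" if food_supply >= population else "SHORTFALL"
    print(f"Generation {generation:2d}: population={population:7d}, "
          f"food for {food_supply:5d} -> {status}")

With these toy numbers the food supply falls behind by the fifth generation, which is precisely the struggle for survival described in the list above.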
With the above in mind, Darwin asserted that the finches which had survived on the Galapagos were those that had developed some sort of competitive advantage over their counterparts. As a result, multiple subspecies emerged among Galapagos finches, each of which was genetically equipped to take advantage of resources that were distinct from those preferred by their former messmates. Thus, Galapagos finches may have arrived on the islands as a single species; however, due to survival pressures, the finches evolved into a wide array of subspecies with distinct body types, divergent diets and unique survival strategies. Furthermore, Darwin argued that the speciation process that had transmuted Galapagos finches was essentially the same for every other species. Therefore, Darwin concluded that every living creature owed its existence to a process of evolutionary transmutation. Instead of an all-powerful god intentionally creating life in its existing form, Darwin's new theory explained how life could evolve randomly through a long, slow natural process. And so began the culture wars that have raged until this very day. References
Darwin, Charles. On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. 1st ed. London: John Murray, 1859.
Darwin, Erasmus. Zoonomia: The Laws of Organic Life. London: J. Johnson, 1796.
Lyell, Charles. Principles of Geology: Being an Attempt to Explain the Former Changes of the Earth's Surface, By Reference to Causes Now in Operation. Volume One. London: William Clowes, 1832.
Malthus, Thomas. An Essay on the Principle of Population (1798 1st edition, plus excerpts 1803 2nd edition). New York: W. W. Norton, 2003.
commons.wikimedia.org/wiki/File:Charles_Darwin_statue_5661r.jpg
FIFTEEN The Federal Budget for Dummies Tax Cuts + Increased Spending = Red Ink
In the late 1990s, Bill Clinton managed to produce a series of record-setting federal budget surpluses. Not deficits, surpluses. For those who are unacquainted with such a foreign concept, budget surpluses occur when the federal government collects more revenue than it spends, which, as we all know, doesn't happen very often. Incredibly, from 1998-2000, the Clinton Administration generated a series of annual budget surpluses and succeeded in paying down the national debt by a record $360 billion. Even better, as Bill Clinton left office, White House budget officials predicted that, if the incoming Bush Administration embraced a similar level of fiscal discipline (...thunder rumbles ominously in the background...), the federal government would continue to generate budget surpluses far into the foreseeable future. A conservative estimate predicted that the federal government could possibly generate more than a trillion dollars in budget surpluses during the first decade of the new millennium. That was then... So, what the heck happened? If the Clinton Administration was raking in record budget surpluses, how could the federal budget tank on such an epic scale in just a few short years? From 2000 to 2012, the national debt grew from $5.7 trillion to $15.1 trillion. During that epic tailspin, the US went from generating record surpluses in 2000 to racking up the largest deficits in history from 2010-2012. Given all the bad economic news in recent years, I suppose blame for the budgetary nightmare must lie with Barack Obama, right?
He's one of those tax and spend Democrats, so Obama has got to bear the majority of blame for the $15.1 trillion national debt. Certainly, budget deficits have been big and ugly during Barack Obama's first term in office--indeed, they have been historically large. Of course, we have to keep in mind that, from the moment that he assumed office, President Obama has been struggling to pull the country out of the most severe financial crisis in 80 years. So, for the sake of accuracy, it would be unfair to assign all of the blame for the federal budget nightmare to Barack Obama. Yes, budgetary shortfalls during Obama's tenure have been grim, but it is important to recall that President Obama inherited the worst federal financial calamity of the post-WWII era. Aha! So, something significant must have occurred in the years between Clinton-era prosperity and Barack Obama's ascendancy to the White House--and what might that have been? Hmmm... George W. Bush was a man on a mission. Practically overnight, Dubya single-handedly demolished US prosperity and plunged the country into a bottomless sea of red ink. How could one man achieve such an historic train wreck in nary the blink of an eye? Easy. Dubya tossed fiscal discipline out the window and embraced the surest recipe for bankruptcy ever invented: Dubya insisted on having his cake and eating it too. More specifically, Dubya slashed federal revenues (in the form of a series of massive tax cuts) while simultaneously endorsing a never-ending series of spending increases. As anyone who has ever balanced a checkbook knows, you can't cut your income and increase spending without digging yourself into a deep financial hole. However, this simple logic was completely lost on Dubya. It's only a guess, but I'll bet Dubya never had to worry about balancing his personal checkbook. Otherwise, I suspect that his financial decision-making as president would have been very different. Bill Clinton generated federal budget surpluses by imposing a strict and straightforward fiscal discipline on his administration: Clinton would not approve new spending, no matter how popular proposed programs might be, for which there was no established revenue source. Without doubt, Clinton benefited from the strong, sustained economic growth that was driven by the dot.com boom and post-Cold War realignments. Nevertheless, Clinton's admirable commitment to fiscal discipline put the federal government on a course to strengthen its position of global leadership while also living comfortably within its means. Now, that's sound fiscal management. And then, along came Dubya.
Though they are very scarce these days, Dubya's defenders have often argued that many of the spending increases that Dubya approved were required in order to fight the war on terror. Thus, Dubya could still claim to be a good conservative because, when it came to spending increases, Dubya only ballooned the federal budget out of wartime necessity. Indeed, Dubya was fond of reminding the public that he was a wartime president and, as such, he was the Decider for many of the often puzzling federal initiatives that took place during his watch. OK, so let it be on Dubya's head.

During his time in office, Dubya never encountered a spending bill--whether it was related to the war or not--that he was unwilling to sign. Yet, at the same time that he was signing spending bills willy-nilly, he was also doing his best to strangle federal revenues. Extraordinarily, Dubya even signed a huge tax cut on the eve of his ill-fated war on Iraq. Even if the war did not make much sense (Wherefore those dratted WMDs?), it was even more ludicrous to cut taxes as Dubya committed the US to fighting multiple overseas wars. This essentially put the US in the preposterous situation of having to wage multiple wars on borrowed money. If the federal government is going to spend big, then the simple truth is that, under such circumstances, the feds will have to increase (not decrease!!) their revenues. Either Dubya didn't understand, or, more disturbingly, he didn't care about the essential relationship between federal expenditures and revenues.

Irrational as Dubya's tax-cut-and-spend-BIG regime may have been, post-millennial Americans quickly became addicted to such illogic. Americans have always been tax-haters, but, until recently, that particular pathos had not been cemented to an insistence on BIG federal spending. Almost uniformly, Americans have become volubly unwilling to pay for the tax-dependent services that they demand from the federal government. Even though it makes no fiscal sense whatsoever, Americans want the federal government to do more and tax less. And in a democracy like the US, politicians have to give the people what they want, right? Thus, for the past twelve years, as the federal government has gotten cash-poorer, it has spent bigger. Three cheers for democracy!

The people have spoken, and the net result of our tax-cut-and-spend-BIG mania is a $15.1 trillion national debt. With that kind of fiscal illogic polluting the minds of Jane and John Q. Public, is it any wonder that we haven't summoned the fortitude to fix the global financial crisis?
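The checkbook arithmetic at the heart of this essay is easy to sketch. The snippet below is only an illustration: it uses the two debt figures cited above ($5.7 trillion in 2000, $15.1 trillion in 2012) and spreads the shortfall evenly across the years, which is a simplification rather than an official budget calculation.

```python
# Illustrative sketch of the essay's "checkbook" arithmetic.
# The only inputs are the two debt endpoints cited in the text;
# the even per-year split is a simplification for illustration.

debt_2000 = 5.7    # national debt in trillions of dollars (cited above)
debt_2012 = 15.1   # national debt in trillions of dollars (cited above)
years = 2012 - 2000

added_debt = debt_2012 - debt_2000
average_annual_shortfall = added_debt / years

print(f"Debt added, 2000-2012: ${added_debt:.1f} trillion")
print(f"Average annual shortfall: ${average_annual_shortfall:.2f} trillion per year")

def annual_balance(revenue, spending):
    """Positive = surplus, negative = deficit (same units for both)."""
    return revenue - spending

# Cutting revenue while increasing spending can only push the balance
# further below zero -- the hole gets deeper every year it persists.
```

Run as written, the sketch reports roughly $9.4 trillion in added debt, or about $0.78 trillion of red ink per year on average over the period.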
SIXTEEN The First Star Warrior Galileo's Assault on Catholic Cosmology (Excerpt from Good Science)
Galileo Galilei (1564-1642) is considered to be one of the world's first and greatest scientists. What made Galileo unique was his ability to observe carefully, develop original theories, and test his ideas systematically. Thus, Galileo embraced the doctrine of good science long before science coalesced into an institutionalized set of principles and practices. Following the invention of the first telescope in 1608, Galileo took up this new instrument with uncommon zeal. Imperfect as early telescopes may have been, Galileo was nonetheless enthralled by the wonders that his new star-gazing tool exposed. One glance through the telescope revealed that there was much more to the heavens than anyone had previously imagined: more stars, more planets, more beauty and more anomalies. Indeed, Galileo is credited with making a plethora of astronomical discoveries that served to both delight and alarm his contemporaries. Ultimately, Galileo's most controversial discovery resulted from his scrutiny of the planet Jupiter. Following repeated observations, Galileo was astounded to discover that there were objects that moved in a steady circuit around Jupiter. Those objects were faint enough that, even on the clearest of nights, it would have been impossible to discern their existence without the aid of a telescope. However, with the help of his new stargazing instrument, the objects became clear enough to convince Galileo that he had witnessed a wondrous new truth: Jupiter had moons.
To put it mildly, Galileo's discovery represented a major scientific breakthrough. No earth-dweller had ever observed satellites in orbit around another world. Although one might suppose that Galileo's revelation would have inspired widespread celebration, in fact, Galileo's landmark discovery invoked the ire of a truly lethal adversary. Whereas Galileo may have hoped to blaze a new path through the heavens, he would soon witness his work condemned as an unwelcome intrusion upon inviolable terrain.

Galileo lived at a time when the Catholic Church was Europe's foremost political power. The medieval Church comprised a vast international organization at a time when most other European political entities were no larger than city-states or feudal fiefdoms. As is the case with most theological systems, faith was the glue that bound the entire Catholic superstructure together. Expressions of faith not only defined the parameters of membership in the Catholic Church, but faith also delimited the boundaries of acceptable intellectual inquiry. For Catholics, The Truth was defined by the established parameters of Catholic dogma (e.g., God created the universe in six days). In other words, so long as the faithful restricted their intellectual inquiries to the limits prescribed by the Church, then such choreographed mental exercises would conveniently reproduce faith in Church doctrine.

For those who govern through the strictures of faith, doubt is equivalent to disobedience. Not only does doubt connote a certain degree of disrespect for sacred beliefs (i.e., "good" Catholics all had an obligation to embrace the sacred word of God), but doubt also weakens organizational structures that are founded upon blind obedience to faith. In other words, doubt implies that faith-based belief systems are somehow inadequate. As such, doubt also portends a quest for knowledge that lies outside the authorized realms of inquiry. By thinking outside the box, it becomes possible for doubters to generate novel observations that are not only distinct from, but that are often directly contradictory to, established beliefs. As a result, free-thinkers tend to evoke antipathy among those who maintain a vested interest in the status quo.

For example, cognizant of the hostility that his radical new perspective was likely to inspire, Copernicus literally waited until he was on his deathbed to unveil the culmination of his life's work: a heliocentric theory of the universe. Quietly and systematically, Copernicus had nurtured his doubts about geocentrism until he managed to invent a more convincing theory of heavenly motion: Copernicus moved the sun to the center of the universe. Modest as this astronomical shift may have been, Copernicus was convinced that his sun-centered theory would invoke the ire of the medieval theological establishment. Though Copernicus' deathbed revelation might seem to over-dramatize the danger that he faced, in fact, Galileo would soon discover that his colleague's fears were well-founded.

Galileo was acquainted with Copernicus' heliocentric theory, and he was also sensible of the reasoning behind its near-posthumous publication. Though more subtle, Galileo's discovery of Jupiter's moons posed yet another serious threat to geocentric theory. Once again, geocentric theory asserts that the entire universe revolves around the earth. However, Galileo's observations suggested that there were in fact multiple centers in the universe. In such a universe, the earth would descend from lofty preeminence to a stature of banal equivalency with every other object in creation. As a result, the earth's rulers could no longer base their mandates on the logic of The Great Chain of Being. That is, if the earth was nothing special, then the same would also be true of its denizens. A demotion for the earth implied a similar demotion for its formerly exalted leaders.

Aware that he was skating on thin ice, Galileo sought advice from a number of key power-brokers before going public. Though it was clear that not every member of the Catholic hierarchy was enamored of Galileo's discoveries, in the end, the plaudits that he received, particularly those from Pope Urban VIII, convinced Galileo that it would be safe to publish. Yet, in spite of Galileo's precautions, shortly after he published a monograph that commented favorably upon heliocentrism, the Inquisition sent Galileo a summons.

In 1633, the Inquisition subjected Galileo to extended interrogation. During the course of this inquiry, Galileo affirmed that he had indeed advocated the plausibility of heliocentrism. Given that Galileo was a famous scholar who was also blessed with impressive political connections, the Inquisition was inclined to be lenient. Instead of handing down an outright conviction of heresy, which would have incurred a death sentence, the Inquisition offered Galileo an olive branch: Galileo could recant. Thus, Galileo was faced with two very unpalatable alternatives. Either Galileo could take a principled stand and affirm his heretical support for heliocentrism, or he could eat crow and renounce his former position. Further complicating matters, Galileo had more than his own life to consider. If he were to be convicted of heresy, the stigma of his sentence would also extend to his family. Given the harrowing implications of a heresy conviction, Galileo decided that he had no choice but to recant. The truth be damned.
Thus, the Inquisition had won. Not only did Galileo publicly reverse his position on heliocentrism, but the Inquisition kept him under house arrest for the rest of his days. Anyone who could pose such a dire threat to geocentrism was best kept under close surveillance. Indeed, the watchful eye of the Inquisition and the ever-present threat of a renewed heresy charge produced the intended effect. As Galileo whiled away his remaining years, his scientific inquiries dwindled to nothing. Galileo was defeated and his ideas had been crushed by an overbearing ideology.

Or had they...?

Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Galileo_facing_the_Roman_Inquisition.jpg
On Sunday, July 22, 2012, Penn State University removed the statue that the university had erected in Joe Paterno's honor at Beaver Stadium. In removing the statue, the university has made an emphatic statement:
Penn State University will no longer worship its disgraced football coach
How could Joe Paterno, a man who had been venerated as a hero for decades and whose name had come to be synonymous with Penn State University, fall so far so fast? Simple. One need look no further than the Executive Summary of the Freeh Report (thefreehreportonpsu.com) to learn that, by his own admission, Joe Paterno played an active role for more than a decade in covering up Jerry Sandusky's criminal pedophilia. Worse, Joe Paterno demonstrated a callous disregard for the kids that Jerry Sandusky was assaulting. At no time did Paterno take explicit steps to protect Jerry Sandusky's victims. Instead, Paterno aided and abetted Jerry Sandusky by ensuring that, in spite of a series of documented indiscretions (in 1998 and 2001), Jerry Sandusky would maintain a relationship in good standing with Penn State Football and would retain full access to all Penn State University facilities. This enabled Jerry Sandusky to continue exploiting his association with Penn State to lure, trap and abuse kids.

If Jerry Sandusky was a monster, then Joe Paterno was his Dr. Frankenstein. Not only did Joe Paterno help to create the monster, but after adorning the fiend in a Penn State uniform, Paterno callously unleashed that dreadful beast on his unwitting, innocent victims. Make no mistake about it, the Freeh Report details the fact that Joe Paterno was fully aware of Jerry Sandusky's criminal abuses for more than a decade. Throughout that time, Joe Paterno did not take one step to assist the victims of Jerry Sandusky's abhorrent pedophilia. Instead, Paterno worked tirelessly to ensure that Jerry Sandusky, in spite of his habitual and ongoing assaults on innocent kids, remained in good standing with Penn State.

Why would Paterno aid and abet such a reprehensible monster? The answer to that mystery lies in the comments of a janitor who, in the fall of 2000, witnessed Jerry Sandusky assault a young boy in the Lasch Building at Penn State University. When asked why he did not report the incident to the proper authorities, the janitor replied that it would have been pointless to bother because "football runs this University." In other words, the janitor was convinced that, had he reported the incident to the police, the University would have closed ranks to protect the football program at all costs. Further, the janitor believed that, as payment for his good citizenship, he would have been terminated: "I know Paterno has so much power, if he wanted to get rid of someone, I would have been gone" (Freeh Report, 2012, p. 65).

This is what comes of worshiping false gods. In such a cult of personality, egomaniacs like Joe Paterno get the idea that they can do no wrong. Jerry Sandusky helped Joe Paterno win football games and, since that was the most valuable currency at Penn State University during the long, blighted era of the Joe Paterno Cult, Joe Paterno did everything in his power to excuse (and in some cases even reward!) Jerry Sandusky for his crimes.

By removing the Joe Paterno statue, Penn State has taken the first crucial step toward dismantling the nefarious influences of its Joe Paterno Cult. So long as the statue remained, Penn State would never have been able to overcome the taint of its slavish reverence for a false and tragically-flawed god. No matter how many games they might win, football teams should never be treated like the highest priority at any university. Treating football as if it is the top priority at any university literally flips logic on its head, or, in the case of the Joe Paterno Cult, turns justice inside out. Only in such a deeply corrupt culture would it be possible to excuse football heroes for unspeakable attacks on innocent, helpless kids. The true purpose of a university is to nurture the next generation of enlightened leaders. Penn State is a great university and its best years can still lie in the future, but only if it is truly committed to getting its priorities straight, and chucks its Joe Paterno statue onto the rubbish heap.

Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Joe_Paterno_Statue.JPG#filelinks
If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable system that we have that allowed you to thrive. Somebody invested in roads and bridges. If you've got a business, you didn't build that. Somebody else made that happen.

--Excerpt from President Barack Obama's speech at a Roanoke, Virginia campaign stop on July 13, 2012.
Americans hate two things more than anything else: big government and taxes. Thus, it is an uphill battle for anyone who runs for office to justify the vital role of government and taxes in building America's future. In the above quote, President Obama is attempting to convey the essential role that tax-supported public infrastructure plays in generating opportunities for private enterprise. Put simply, Barack Obama is saying, "It takes a village." In other words, individual achievement is predicated upon public support. That being the case, Barack Obama contends that it is important to both acknowledge and support the role that the village plays in generating individual success. And here we arrive at the crux of the debate: Should the wealthy be under any obligation to support the village?

The subtext to this discussion is that there is a huge battle being waged in Washington over taxes. Bush-era tax cuts are bankrupting the United States, and Barack Obama is trying to save a sinking ship by eliminating Dubya's tax breaks for the wealthiest 2% of Americans. The rich, as Barack Obama has stated over and over again, need to pay their fair share.
Generally speaking, Democrats believe that individuals are beholden to their village and, therefore, should support it by paying taxes. Republicans take a different view. Republicans believe in individualism: individuals are responsible for their own success and, therefore, are under no obligation to share the fruits of their labors with less successful villagers.

As a means of putting a different spin on the tax battle, while also appealing to Republicans and individualistically-minded independent voters, the Romney campaign has recently created advertisements that yank the last two sentences of Barack Obama's quote out of context: "If you've got a business, you didn't build that. Somebody else made that happen." As one might expect, business owners who have seen Romney's artistically-edited ads have expressed outrage. How dare President Obama suggest that business owners have not built their own businesses? The beauty of this is that Barack Obama never did, and never would, make such an insulting statement. Again, in its proper context, Obama was simply saying that individual success is predicated upon support from a village, and the most successful villagers have an obligation to repay some of that largesse. Although Republicans tend to disagree with that perspective, most would be willing to concede that Obama's in-context message is perfectly consistent with mainstream Democratic talking points.

However, Romney's crafty excision makes it sound like President Obama has suddenly made a radical, leftist break with mainstream US politics. Claiming that individuals do not deserve any credit for their hard work is the same kind of ideological claptrap that Cold War Soviet leaders used to propagate. However, no serious US presidential contender could ever hope to win office by launching such an ignorant attack on individualism. Thus, Romney's cunningly-edited sound bite is clearly a distortion of Barack Obama's intended point, but will anyone bother to figure that out? In the dumbed-down, sound-bite reality of mass media politics, truth generally takes a backseat to sensationalist nonsense. Score one for Mitt Romney!

Instead of making a serious effort to propose an alternative, Republican-inspired solution to the fiscal crisis that the United States is facing, Romney has launched a media counterstrike which is designed to ignite extremist hostilities (and that will further aggravate the gridlock in Washington) by falsifying his opponent's message. Congratulations to Mitt Romney for dragging US political discourse down to the lowest common denominator: if you can't beat your opponent, then fabricate lies that are designed to foment hatred. Heck, if it worked for Tricky Dick Nixon, then it can work for Mitt Romney too. If this is how Mitt Romney solves problems on the campaign trail, just imagine how he'll manage America's crises from the Oval Office.

Thanks to Wikimedia Commons for the photo.
commons.wikimedia.org/wiki/File:ObamaVsRomney.jpg
NINETEEN Queers Need Not Apply The Boy Scouts of Amerika Continue to Disappoint
"Morally straight."* Wow. In other words, queers need not apply. You've got to be kidding me. What century are we living in?

On July 18, 2012, the Boy Scouts of America announced the results of a confidential, two-year review of its policy that explicitly excludes gays. The Boy Scouts' national spokesman, Deron Smith, stated that a special eleven-member committee came to the conclusion that the exclusion policy is "absolutely the best policy" for the 102-year-old organization. Since when is unvarnished prejudice against a historically-maligned minority a good thing? Isn't it somewhat hypocritical for an organization that requires its members to "help other people at all times" to gleefully endorse discrimination against gays? In what sense is perpetuating age-old, irrational prejudices helpful to gays?

In case this might be news to the Boy Scouts, on September 20, 2011, the United States military officially terminated its "Don't Ask, Don't Tell" policy. In doing so, the US military officially accorded non-heterosexuals the same civil rights as heterosexuals, while also affirming the enlightened notion that it was no longer "morally straight" to discriminate on the basis of sexuality. That historic policy shift by the US military represents a resounding endorsement of fundamental democratic freedoms.

Morality is a tricky concept to define. Generally speaking, morality is relative and it is contingent upon the norms and values that are embraced by a majority of the population at a particular moment in history. As times and people change, so does morality. For example, the ancient Romans used to consider feeding Christians to hungry lions a pleasurable pastime. Also, following its inception, the moral climate in the US had no compunctions about treating Africans like slaves, women like pieces of property, indigenous peoples like vermin, and gays (or, members of the GLBT community) as deranged criminals.

America's democratic principles assert that, all people being created equal, everyone should enjoy the same unalienable rights: freedom, fairness, justice and equality. In practice, however, US democracy has all too often rolled out the red carpet to some (i.e., European, male, property-owning, heterosexual Christians) while dehumanizing, subjugating and abusing Others. Though it took more than a century of aggressive social activism on the part of marginalized minorities, Americans gradually came to realize that there was a vast and inappropriate gulf between the USA's democratic principles and its practices. Thus, slowly and grudgingly, American morality has transitioned from celebrating the abuse of marginalized minorities to castigating such malignant indiscretions. In other words--although more than a few Americans lament the passage of the good old days--it is no longer considered morally acceptable to treat Africans like slaves, women like property, indigenous people like vermin, and members of the GLBT community as deranged criminals. Three cheers for the (long, slow, reluctant) march of democratic progress! Wahoo.

The Boy Scouts have decided that, in spite of the march of progress, they are going to dig in their heels in a futile effort to preserve their anachronistic, undemocratic version of morality. For the Boy Scouts, gays may not necessarily be deranged criminals, but gays still fall into the category of undesirable others. The US courts have ruled that, as a private club, the Boy Scouts are welcome to take this lonely, last, loathing stand against civil rights. For an organization that has existed for 102 years and that congratulates itself for upholding the highest moral principles, it is sad that the Boy Scouts would flaunt such an appalling ignorance of history, morality, common decency and democracy.
*From the Boy Scouts page at www.scouting.org/
(*The following message is for anyone who would like to suggest that, due to a selective reading of the Freeh Report, Joe Paterno should be exempted from blame or criticism for the egregious role that he played in the Sandusky Scandal.)

Dear Arch-Defenders of Joe Paterno's Tarnished Image,

What about the kids? It is distressing that you and other apologists for Joe Paterno would continue to show as little concern for the victims of Jerry Sandusky's abuse as Joe Paterno did. Zero!

Anyway, you are wrong. I have read the entire Freeh Report--all the way through the appendices. The evidence that you would like to dismiss is, in my opinion, quite damning. Like many of Paterno's credulous defenders, you are of the curious opinion that Paterno could somehow have been blithely unaware of the 1998 investigation of Sandusky's molestation. Even if you do not consider the documentary evidence to be damning, don't you find it implausible that Joe Paterno would have remained unaware of a police investigation that posed such great peril for the PSU football program, Paterno's reputation and, let's not forget, the "welfare" of Paterno's good buddy, Jerry Sandusky? That aside, there is abundant evidence that Paterno was fully briefed and was an active participant in the aftermath of Jerry Sandusky's shower-rape in 2001. Thus, it is indisputable that Paterno was fully apprised of Jerry Sandusky's abhorrent pedophilia as early as 2001. Yet, Paterno's response to such shocking revelations was to do as little as possible and, in every case, to minimize damage to himself, Sandusky, and PSU football. At no point did Paterno endeavor to impede Sandusky's criminal abuses. As such, Paterno made it possible for Sandusky to be excused for past crimes, while opening the door to additional attacks after 2001.

You would like to believe that your cynical reading of the Freeh Report exempts Joe Paterno from knowingly aiding and abetting Jerry Sandusky's criminal pedophilia for more than a decade. The truth is that, at least since 2001, Joe Paterno was willfully criminally negligent. As a direct result of Joe Paterno's efforts to minimize damage, Jerry Sandusky was enabled--one might even say encouraged--to continue terrorizing kids. Those are not the actions of a hero. A hero would have risked everything to prevent Sandusky from abusing kids. Joe Paterno did not give a damn about Jerry Sandusky's victims. That is why Joe Paterno has become the object of such widespread and intense public censure. Plain and simple, Joe Paterno is reaping what he sowed.

By the way, the public is not being "too hard on Joe Paterno," rather Paterno is getting off much easier than he should for his monstrous crimes. It is unfortunate that Joe Paterno did not survive long enough to face the long list of criminal charges that would, no doubt, have been leveled against him. Finally, what is disgraceful is that people like you would struggle to preserve the heroic image of Joe Paterno after he went so far out of his way to aid, abet and encourage a fiend like Jerry Sandusky and, as a result, hurt so many kids.

Sincerely,

Tim McGettigan
TWENTY-ONE Say it ain't so, Joe Another Sports Legend Bites the Dust
I grew up in Pennsylvania. Small-town PA during the 1970s, the dark days of deindustrialization. We didn't know what deindustrialization was back then. We just knew that the coal mines were closing, the steel mills were shutting down, and the American Dream was slip sliding away. Sure, the economy was going to hell, the environment was a mess, and the future looked bleak, but there was one ray of hope for Pennsylvanians in all that fin de siècle darkness: football.

In the professional ranks there were, of course, the Steelers. The Pittsburgh Steelers were the meanest, toughest dudes in the NFL. Nobody could touch 'em, not during the 1970s anyway. And even though I grew up on the other side of the state--within the sphere of (God help me!) Philadelphia Eagles fandom--eastern Pennsylvanians still benefited from the indomitable aura of the winners to the west. Nightmarish as those times may have been, the Steelers managed to transcend all of the misery that was besetting the Rust Belt. The worse times got, the tougher and more successful the Steelers seemed to become.

But it wasn't only the Steelers that sparked the hopes of down-and-out Pennsylvanians; there were also the Nittany Lions. Penn State University, sitting absolutely dead center in Pennsylvania's rural heartland, won supporters from every part of the state. Being so central, and removed from the influence of any major metropolitan center, the Nittany Lions were the one major football team that everybody in the state could rally around. In part, it was because families from all over Pennsylvania sent their kids to PSU. Every community bragged of dozens--if not thousands--of Penn State students and alums. PSU was the institution that, more than any other, united the state of Pennsylvania. People were proud to be associated with Penn State, easily one of the very best state colleges in the United States, but they were even more proud to be supporters of Nittany Lion football.

It wasn't just that Penn State had a winning football team. Practically every state has a major college football team that its citizens can be proud of. What Pennsylvania had that no one else could brag of was Joe Paterno. Sure, Paterno had whipped the Penn State football team into a perennial contender for the national title. Everybody loves a winner! But Joe Paterno was more than just a winning college football coach; JoePa had class. He wasn't one of those win-at-any-cost kind of football coaches. Paterno really cared about education. In fact, to listen to the guy, you would have thought that he was an academic dean (Study, study, study...!) rather than a legendary college football coach. Joe Paterno was a gentleman. For Joe, winning football games was secondary. First and foremost, Paterno cared about kids--and not just his own football players. Paterno wanted all kids to grow up and be as successful as they could possibly be. At least, that's what we were led to believe.

Unfortunately, Joe was not a man of his word. Winning football games did matter to Joe Paterno, and it has come to light that winning obviously, and distressingly, mattered more to Paterno than the welfare of kids. Winning mattered so much that Joe Paterno, that great and honorable man, was willing to make a deal with the devil. Jerry Sandusky was that devil, and Paterno callously and calculatingly turned a blind eye to Sandusky's criminal pedophilia--even when Sandusky paraded his loathsome abuses directly under Joe Paterno's nose. Rather than taking steps to protect the poor kids that Sandusky wantonly preyed upon, Paterno employed his considerable clout to cultivate a climate of invulnerability for Sandusky at PSU. Just win, baby! Nothing else matters.

So, in the end, have Jerry Sandusky's atrocities soiled Coach Paterno's otherwise unimpeachable image? You're damn right they have, and Paterno has no one to blame but himself. Paterno labored his whole life to cultivate the image of a principled, altruistic warrior, and, in spite of Paterno's tireless self-aggrandizement, the Sandusky scandal has blasted JoePa's public image to smithereens. Had Paterno cared even one iota for the principles that he professed, he would have dumped Sandusky the very instant that he caught wind of his lieutenant's felonious assaults. Instead, Joe Paterno hopped into bed with Sandusky, the monster that Paterno knowingly aided and abetted. What a wretched end for an otherwise (seemingly) noble spirit.

It's a hard lesson to learn, but the Sandusky affair makes it clear that there are some things that are worse--much, much worse--than losing a few football games. Though I'm sure he wished it were otherwise, that will be Joe Paterno's most enduring legacy.

*Photo provided courtesy of Wikimedia Commons:
commons.wikimedia.org/wiki/File:Joe_Paterno_Sideline_PSUIllinois_2006.jpg
Recall that, after having stirred up post-9/11 controversy with his comments about the "little Eichmanns" who died in the World Trade Center, Ward Churchill was tried, convicted, censured and terminated by what amounted to a kangaroo court run by his former employer, the University of Colorado-Boulder. It is neither accurate nor fair to say that the American Association of University Professors (AAUP) has done nothing to aid Ward Churchill during his long, arduous struggle to redeem his academic freedom. Should the AAUP have done more to assist Ward Churchill during his long and weary march? To that I can say, yes. Emphatically, yes! Indeed, I also believe that the AAUP needs to do more--much more!--to ensure that Ward Churchill's academic freedom (and, by extension, every scholar's academic freedom) is restored, reaffirmed and strengthened.

I can understand why Ward Churchill believes that his battle has been lonely. For the most part, Churchill's stalwart colleagues cut and ran at the very first sign of trouble. Over the years, Churchill has taken the brunt of conservative abuse all by himself. Of course, Churchill invited that abuse by making censorious comments at a delicate moment in US history; however, before, during and after the firestorm that brought about Churchill's termination as a Professor of Ethnic Studies at the University of Colorado-Boulder, his comments should have been protected by an inviolable fortress of academic freedom. Permitting Ward Churchill to be skewered by hysterical, flag-waving "patriots" constituted a major setback to academic freedom. And, ever since, academic freedom has continued to lose ground.

Ward Churchill has indicated that, following a recent Colorado Supreme Court decision which reaffirmed his termination at UC Boulder, he might consider appealing the decision to the US Supreme Court. To a degree, Cary Nelson, the President of the AAUP, is correct in pointing out that the current composition of the US Supreme Court bodes ill for issues of principle, such as academic freedom. I'll bet Justice Roberts, having abandoned his right-wing cronies on the healthcare decision, is searching high and low for any case of sufficient ideological intrigue that will serve to re-establish his bona fides with conservatives. Plunging a legal sword into Ward Churchill's (and, by extension, academic freedom's) heart is precisely the sort of vindicating caper that Roberts is salivating for.

That said, the Colorado Supreme Court case is indicative of the fact that academic freedom is not gaining ground by treading water. "Playing it safe" is a losing proposition in an increasingly anti-intellectual, anti-academic America. The AAUP needs to take a long view of the academic freedom issue. Perhaps it is time that academics raised their visibility by taking a do-or-die stand for academic principles. Too many of my colleagues have spent their entire careers capitulating to legislators, administrators, and other critics on issues of principle. What my colleagues fail to understand is that if professors are not prepared to fight for their principles, then academia will be doomed to suffer one humiliating defeat after another. By playing it safe, the AAUP has lost ground. If academics are going to reverse the trends that are steadily undermining academia (e.g., corporatization, administrative gridlock, repudiation of academic freedom, etc.), then professors will need to join the battle to save academic freedom--and we damn well better win it. If the AAUP is not committed to that goal, then the AAUP, academic freedom, and a national culture of academic excellence are all certain to wither and die in the very near future.

Some might celebrate the death throes of academic excellence; however, it is hard to imagine how the US will remain competitive in the information society in an era when short-sighted anti-intellectuals succeed in destroying what was once one of the world's greatest educational systems. China and Russia are investing billions in building up their higher education infrastructures. I wonder which global power will claim the inestimable distinction of leading the information society into the 22nd century.

Thanks to Wikimedia Commons for the image: commons.wikimedia.org/wiki/File:Ward_Churchill.jpg
I have been to a lot of conferences, but I have never been to any meeting that is quite like the 100 Year Starship Symposium. 100yss is taking place even as we speak (from Sept. 9-13, 2012) in Houston, Texas. Just a stone's throw from Mission Control. Houston, we have a problem: the Hyatt Regency is overflowing with geeks*!

So, what sort of event would be sufficiently magnetic to attract enough geeks to overflow a 30-story hotel? Well, strap in and I'll tell you. 100yss is about nothing less than creating a future where earthlings will be able to flit through the cosmos like Captain Kirk, Luke Skywalker and Princess Leia. Pish-posh, you say? Those are nothing but science fiction fantasies. No grown-up with an ounce of sense would believe in any of that rubbish...or would they?

Interestingly, it turns out that a surprising number of breakthrough scientific achievements got their start as nothing more than storybook fantasies. Take Jules Verne, for example. That guy was a wacky-idea factory. In the 19th century, Jules Verne dreamed up one preposterous fantasy after another, such as: fully electric submarines, a manned moon mission that Americans launched from--of all places--southern Florida (Verne was a whisker off in predicting that the launch would take place outside of Tampa rather than Cape Canaveral), talking newspapers, spacecraft that are powered by solar sails, video conferencing, etc. Truly, Verne had an inspired imagination and, although the future never unfolded exactly as he predicted, in lots of cases Verne's 19th-century fantasies became verisimilar 20th- and 21st-century realities. How could one man be so amazingly prescient?

As it happens, Jules Verne is not the only science fiction author who has had an influence on the future. Others, including Isaac Asimov, Philip K. Dick, Arthur C. Clarke, Gene Roddenberry, and George Lucas, have had striking and unanticipated influences on the evolution of social reality. For example, in the 2001 census, literally hundreds of thousands of people on multiple continents (the USA, UK, New Zealand, and Australia) reported that their religion of choice was Jedi. Also, Martin Cooper has admitted that he was inspired to invent cell phone technology after watching Captain Kirk use his (fictional!) wireless communicator on Star Trek. All of which is to say that tomorrow's realities are often woven out of the gossamer of today's delightful dreams.

Thus, 100yss may seem a bit wacky to outsiders, but it's deadly serious business to its Star-Trek-dreaming participants. If we have any hope of protecting the earth from imminent destruction by Klingons, Vogons, The Empire or a bazillion other extraterrestrial beasties, then we've got to start dreaming about phasers, photons and force fields today. Open all hailing frequencies, Lt. Uhura!
*Under no circumstances should this term be interpreted as a denigration of 100yss attendees. Far from it. I have nothing but the utmost respect and admiration for geeks. What's more, if attending and presenting at 100yss defines one as a geek, then I must admit that I am a geek through and through. I am delivering a paper tomorrow morning that goes by the title, "The Future is a Fantasy." (In case you might be interested, I will post an update tomorrow to let you know how it goes.) Three cheers for geek pride! It takes one to know one, and I ought to know.
A 100YSS Presentation
Abstract

Humans are unique as a species because, with the help of well-defined problematics, humans alone are capable of redefining reality. A problematic can be understood as an exceptionally challenging intellectual objective (e.g., heavier-than-air flight, building the first atomic bomb, curing disease, landing humans on the moon, developing artificially intelligent computers, constructing faster-than-light spacecraft, etc.) that requires knowledge-seekers to invent new facts and redefine reality in order to achieve the hoped-for objective. Although scientists prefer to think that scientific inquiry is constrained to an exploration of empirical facts, in truth, scientific progress is often instigated more effectively by the pursuit of a compelling problematic--in many cases, even by science fiction fantasies (Shatner, 2002)--rather than by an examination of established empirical facts (McGettigan, 2011). As such, science has proven to be the most effective means ever invented by humans to transform fantasies into reality.

The New Frontier

As a means of giving the US a psychological boost, in 1961 President John F. Kennedy embraced a manned moon landing as the crowning achievement of his New Frontier goals. Interestingly, when Kennedy announced his plan for a successful lunar landing, his aspiration was more a product of science fiction than fact. As of 1961, the US scientific community lacked the technology--or even a workable plan--to send astronauts to the moon. But, brilliantly, JFK did not treat America's lunar-mission knowledge gap as a deal-breaker; rather, Kennedy seized upon it as an historic opportunity.

JFK's goal of sending astronauts to the moon by 1970 is an outstanding example of what I refer to as a problematic. A problematic is a far-flung goal that is largely based upon imaginative speculation, and that (critically!) inspires knowledge-seekers to invent facts and redefine reality in order to transform the dreamed-of goal into a reality. There are numerous examples of problematic innovation that have had an enormous impact on the course of human events: heavier-than-air flight, the Manhattan Project, finding a cure for polio (and the ongoing search for AIDS vaccines), Alan Turing's (and Marvin Minsky's) advocacy of AI computing, Martin Cooper's effort to invent a Star Trek communicator in the form of the cell phone, Aubrey de Grey's pursuit of human immortality, David Ferrucci's goal of creating a talking computer (similar to Captain Kirk's) and the IBM Watson project, etc.

The virtue of problematics is that they inspire humans to engage in super-adaptable innovation. Whereas other terrestrial creatures solve survival problems with their biology (i.e., Darwinian evolution), humans solve problems with their intellect. Thus, human agents can solve survival problems much more rapidly, and with greater specificity, than other creatures; however, this also means that humans have a penchant for creating new survival challenges at a faster pace and on a grander scale (e.g., overpopulation, pollution, global warming, nuclear Armageddon, etc.) than other terrestrial creatures. Fortunately, via the process of problematic innovation, humans have succeeded in elevating their thinking and, thus far, outpacing the survival challenges that we have generated.

I argue that humans will continue to enjoy success and continue to outpace the crises that pose imminent threats to human survival--so long as humans remain committed to pursuing problematics. Once again, problematics enable super-adaptable human agents to elevate their thinking by developing solutions to far-fetched, seemingly impossible aspirations: cures for incurable illnesses, ending human mortality, creating artificially intelligent computers, and not only shooting for the moon and planets, but building reality-redefining vessels that will enable humans to reach for the stars. History has shown that humans have got all the brains, wherewithal and fortitude to achieve the impossible. We can build a brighter future. All we need are visionary leaders who are prepared to lead the charge toward the Next Great Frontier.

So, why bother with space travel? Because, quite simply, the stars light the way to a brighter future. Space travel paved the way to Kennedy's New Frontier in the 1960s. If the United States remains committed to accomplishing ever greater feats in the future, then we should look to the stars to light our way. Thus, space travel is not a distraction. Space travel represents the path to America's--nay, humanity's--next Great Frontier.

References

McGettigan, Timothy. Good Science: The Pursuit of Truth and the Evolution of Reality. Lanham, MD: Lexington Books, 2011.

Shatner, William (with Chip Walter). I'm Working on That: A Trek From Science Fiction to Science Fact. New York: Pocket Books, 2002.

(*This is a brief summary of a presentation that I delivered at the 100 Year Starship Symposium in Houston, Texas on September 13, 2012.)

Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Kennedy_Moon_speech_25_May_1961.jpg
TWENTY-FIVE The Innocence of Muslims Sam The Imbecile Bacile, Religious Freedom, and Free Speech
Let's begin by making it clear that Sam Bacile's film, The Innocence of Muslims, is a piece of crap. The film is embarrassingly terrible. Imagine the worst Saturday Night Live sketch that you have ever seen and then multiply it by 100. Everything about the film is distressingly awful: the script, the make-up, the costumes, the editing, etc., etc. Sam Bacile makes Ed Wood look like a celluloid genius by comparison. To sum up, Bacile's Innocence of Muslims is an insult. It is an insult to filmmaking, it is an insult to artistic expression, it is an insult to free speech, and it is an insult to humanity.

Oh, and I suppose I should also mention that Sam Bacile's Innocence of Muslims represents a profound insult to Muslims. In addition to its dung-brained anti-Muslim script, the film depicts the prophet Muhammad in an unflattering light--which to devout Muslims is one of the worst imaginable forms of sacrilege. Apparently, the primary reason that Sam the Imbecile produced his ill-begotten film was to insult and inflame the ire of Muslims. Thus, reprehensible as Bacile's film assuredly is, the film has succeeded (sadly!) in sparking precisely the sort of global outrage that Bacile had hoped it would. Truly, this is a twisted success story. Three cheers for Sam the Imbecile. This monumental lamebrain has single-handedly succeeded in undoing much of the progress that had been forged between the West and the Middle East during the Arab Spring rebellions. A flicker of democratic fraternity has been doused by a tsunami of intercultural outrage. C'est la vie to political harmony and interfaith respect. Sam Bacile has succeeded in diverting us down the highway to hell.

Unquestionably, Sam Bacile has much to atone for. Let the damage and violence that has been inflicted, and the blood that has been spilled, be on his head. You reap what you sow, Sammy. If you are a religious man, then I assume that must bother you. If it doesn't, then I encourage you to re-examine your religious morals. They are woefully lacking.

That said, I think it is high time to call for restraint on the part of responsible Muslims the world over. On behalf of Western culture, I apologize for the titanic insult to Muhammad and Islam that Bacile's rotten film represents. I can assure you that Sam Bacile speaks only for himself, and his repugnant film is antithetical to the USA's foundational principle of religious freedom. The USA reveres and respects Islam every bit as much as it does every other world religion. OK? Are we good?

America and Americans have disavowed Sam Bacile. Once again, Bacile and his abhorrent film do not speak for us. Bacile's loathsome opinions are his alone. Yet, while America has officially disavowed Sam Bacile, we will not permit ignoramuses of his ilk to infringe upon our right to free speech. In a world where intolerance rears its ugly head all too often, free speech is an essential counterbalance to tyranny and ignorance. Although I disagree profoundly with the sentiments that Sam Bacile expresses in his sorry excuse for a film, I will not now--nor will I ever--call for Sam Bacile's free speech rights to be restricted. For free speech to exist, its advocates must fight just as hard to ensure the rights of those with whom they disagree as they do for themselves. If we don't believe in free speech for those with whom we disagree, then we don't really believe in free speech. Period.

Such a conclusion is unlikely to appease the many protesters who have been calling for Sam Bacile's head on a platter. I wish it could be otherwise. We live in a mighty big world, and every now and then boneheads like Sam Bacile are going to insult one religion or another. Nothing good will ever be accomplished by dignifying the worthless prattlings of the Sam Baciles of this world. If Sam Bacile insists upon using his free speech rights to make an ass of himself, then let the joke be on him. Sam Bacile is an ass. Since Bacile has amply demonstrated that his opinions are worthless, people should accord him the respect and recognition that he so richly deserves: ignore the dumb-ass and move on! Continuing to fuss and fume over Sam Bacile's idiotic insults will only drag otherwise decent folks down to his slimy level. For crying out loud, don't let an ass like Sam Bacile ruin your life or anyone else's! Too many innocent people have already been hurt. Too much time has been wasted on the mindless ramblings of a demented trouble-maker.

The only folks who can end the violence are those who insist upon perpetrating it. Just stop! Stop right now and think about what you are doing. And please do so before any more innocent folks get hurt. Also, the next time that some cretin like Sam Bacile insults Islam--and, trust me, it's going to happen--don't lash out. If you do, then you are playing right into Sam Bacile's hands. Don't give him the satisfaction. Just ignore the dumb-ass and move on. If Islam is a great world religion--and it most assuredly is--then it needs to act like one. The Sam Baciles of this world are not worth the time it takes to sneer in their direction. Let Sam Bacile and his cronies tell their dirty little jokes, and drown in their own bile. Great religions have got to rise above that dreck. Enough!

Thanks to Wikimedia Commons for the photo:
commons.wikimedia.org/wiki/File:Clerics_take_part_in_a_protest_against_an_anti-Islamic_film.JPG
TWENTY-SIX Time Surfers Problem-Solving as the Path to a Better, but Unpredictable Future
Long ago, Thomas Malthus argued that, although humans might be able to outpace nature's carrying capacity in the short term, in the long run the mathematics of population growth would precipitate disaster. According to Malthus's original predictions, widespread starvation was inevitable due to the fact that food production could only increase arithmetically, whereas population growth was exponential. In Malthus's day, without the advantage of family planning technologies, offspring in typical pre-industrial families could easily outnumber parents by an exponential factor. Consequently, Malthus felt certain that, as populations exploded, demands for food would necessarily exceed available supplies. When that occurred, widespread starvation would ensue, with the end result being a catastrophic population crash.

Interestingly, Malthus published his first predictions about overpopulation in the year 1798. At the beginning of the nineteenth century, the global population stood at approximately one billion people. In the years since his prediction, global population has indeed grown exponentially. As of 2012, the global population has climbed to over seven billion people. Although it is safe to say that population has mushroomed precisely as Malthus predicted--during the twentieth century alone, the global population quadrupled from 1.5 billion to 6 billion people--the global food crisis that Malthus foresaw has not occurred. Certainly, there have been persistent and tragic food shortages all over the planet, particularly in the developing world. Nevertheless, the calamitous food crisis that Malthus predicted has not yet transpired. Thus, one must wonder: how have humans avoided the Malthusian nightmare?
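Malthus's arithmetic is easy to make concrete. The short sketch below is purely illustrative (the starting values and growth rates are hypothetical, not Malthus's own figures): it compounds a population geometrically while adding a fixed increment of food each period, which is the divergence his argument turns on.

```python
# Illustrative sketch of Malthus's arithmetic: population compounds
# geometrically while the food supply grows by a fixed increment.
# All starting values and rates below are hypothetical, chosen only
# to show the shape of the divergence.

population = 1.0          # arbitrary starting units
food = 1.0                # arbitrary starting units
GROWTH_RATE = 0.03        # hypothetical 3% population growth per period
FOOD_INCREMENT = 0.03     # hypothetical fixed additive gain in food per period

for period in range(201):
    if period % 50 == 0:
        print(f"t={period:3d}  population={population:7.2f}  food={food:5.2f}")
    population *= 1 + GROWTH_RATE   # geometric (exponential) growth
    food += FOOD_INCREMENT          # arithmetic (linear) growth
```

Whatever numbers are plugged in, compounding eventually overwhelms any fixed increment; that is the whole force of Malthus's warning. The point of the paragraphs that follow is that industrial-era innovation changed the food curve itself, which is something Malthus's model could not anticipate.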
It turns out that, like many great thinkers, Thomas Malthus attempted to predict the future through the lens of the past. Time is structured such that human experience is always located in the present. In turn, the present can be understood as a dynamic temporal transition point through which time flows toward the future and away from the past. It's as if we are all time-surfers; we skim forward on a temporal foundation that is fixed to the present while time washes by from the future to the past. Given the one-way flow of time, humans have direct experience with two of the three discernible temporal domains: we occupy the present while preserving fragmentary records of the past.
Once again, due to the uni-directional flow of time, the temporal dimension with which humans lack direct experience is the future. Never having inhabited the future, we find its specific attributes largely a mystery. The flow of time would need to reverse in order to acquaint time surfers with the same level of insight about the future that we currently accumulate about the present and past. Thus, no one can predict the future because neither the future nor the fate of humanity is yet determined. At best, we can make educated guesses, based upon extrapolations from the past and present, about how the future might unfold. Still, because of the extraordinary capacity that super-adaptable agents (i.e., humans) have to modify the course of events in utterly unpredictable ways, we will never know precisely what the future holds until it arrives in the present.

Essentially, the future is a process. The fact that the earth has been revolving around the sun for eons is a fairly strong indicator that it will continue doing so in the future. However, as Karl Popper argued, past circumstances, no matter how long they may have persisted, provide no absolute guarantee that similar events will transpire in the future. Though the probability is minuscule, an asteroid just might pulverize the earth tomorrow. Thus, the future is a combination of phenomena that give shape to the present blended with dynamics that stimulate change in the future. As such, the future is a construct that is constantly undergoing a process of evolutionary and unpredictable change. Indeed, one of the most unpredictable instigators of temporal change is the often improbable impact that human agents have upon the structure of unfolding events.

Thomas Malthus gazed into the future through a paradigm that was shaped by eighteenth-century expectations. Within the context of the eighteenth century, there was no conceivable means by which to sustain an exponentially-increasing global population. As a result, Malthus was convinced that the end was near. Interestingly, in the late eighteenth century, the world as Malthus knew it was about to end, but, importantly, not in the way that Malthus had predicted. Malthus published his prognostications about the presumptive fate of humanity as the age of agriculture was coming to a close. Being unacquainted with the sweeping social, political, economic, and scientific changes that would accompany the Industrial Revolution, Malthus was unable to foresee the innovations that would amplify food supplies sufficiently to keep pace with exploding populations. Technologies such as higher-yield grains, fertilizers, pesticides, herbicides, and petroleum-powered machines have generated astounding increases in agricultural yields throughout the industrial era.

Without doubt, the problems that Malthus identified were real. Just as it would have been impossible for NASA to safely land astronauts on the moon using 1950s space technologies, so too would it have been impossible to avert the Malthusian nightmare using seventeenth-century agricultural techniques. Exploding populations represented a dire crisis, and as Karl Popper argued, all life forms must find ways to solve problems or they will perish. In response, super-adaptable humans dealt with the problems associated with population growth and impending food shortages by pursuing an entirely new problematic: industrial society. Having thus dramatically redefined the substance and structure of human society, nineteenth-century Europeans set about the process of transforming the social, political, cultural, and technological landscape to make the machine age a reality.
In the industrial era, the food supply problems that Malthus foresaw have largely been mitigated. Again, in recent centuries, global population has expanded exponentially, and, in spite of nagging problems associated with an unequal distribution of food, the total supply of food has kept pace. Though Malthus would be surprised by this outcome, Karl Popper would not. Again, Popper argued that humans are extraordinarily adept at developing intellectual solutions to survival problems. In successfully identifying such solutions, old problems often become non-issues: in industrially-advanced nations, farmers have been able to produce more food than consumers can eat. In fact, instead of being plagued by shortages, Americans are increasingly plagued by the problem of overabundance. The fact that the population quadrupled during the twentieth century is an undeniable indicator of human problem-solving ingenuity. However, as the population has grown, the degree to which humans have taxed the environment has also increased. For example, our love affair with hydrocarbons has had a dramatic impact on global climate, including elevating sea levels, a shrinking cryosphere, expanding deserts, etc. Thus, successful exploitation of fossil fuels has produced entirely new problems. Frustrating as this situation may seem, Karl Popper would not be troubled by such developments. Popper argued that, in the game of life, successful solutions to one set of problems invariably generate an entirely new set of problems. When Popper argued that all life is problem solving, he literally meant that survival for every living creature is contingent upon developing workable solutions to environmental problems. Living creatures either develop effective strategies to secure the necessary sustenance, space, and security that they require, or they will expire. Most life forms develop new survival
strategies through the genetic evolutionary process. Random genetic mutations that enhance a creatures ability to solve environmental problems (e.g., accessing new food sources, dissuading predators, expanding into new territory, etc.) confer advantages in the struggle for survival. However, as creatures successfully adapt, they inevitably encounter new survival challenges to which they must adapt afresh (e.g., marine mammals successfully reconquered the sea only to encounter hungry sharks in their new environment). Thus, evolution is a never-ending process because every creature must constantly re-adapt to changing survival conditions. Humans are subject to the very same survival pressures as other creatures. Having taken full advantage of hydrocarbon-age technologies, humans are now confronted with an entirely new set of problems: global warming, ozone depletion, unsecured nukes, pollution, depletion of resources, etc. In spite of our success, the problems that plague humanity seem, if anything, larger and more insoluble than ever. Yet, strange as it may seem, thats actually a good thing. It is certainly true that, if nothing changes, the problems that humanity currently faces will be irresolvable. Just as Malthus gazed at the burgeoning problems in the eighteenth century and concluded that, for citizens of that era, the situation was hopeless, the same will be true for citizens of the twenty-first century. Einstein summed up this situation thus: The significant problems we face cannot be solved at the same level of thinking we were at when we created them. In other words, we cant possibly hope to solve existing problems with existing knowledge. While that might seem to be a hopelessly pessimistic perspective, it is simply a statement of truthbut it is also a call to action. As with many prognosticators, Malthus failed to see the future coming. That is, Malthus viewed the future through a paradigm that was better suited to make sense of the past. When Malthus assumed that the future would inevitably be shaped by the same forces that had defined his present, he made a critical error. The problems that plague one historical era, so long as humans succeed in elevating their thinking, often tend to be viewed as relatively minor challenges in succeeding eras.
This is not to say that the advancement of scientific knowledge will gradually create a perfect world. Far from it. In agreement with Popper, I believe that the problemsolving process is never-ending. Each solution to a major survival problem will always introduce an entirely new set of even more challenging problems. Though we will never live in a trouble-free world, we can feel safe in the knowledge that, so long as we have the courage to dream of doing the impossible, no problem will ever be too challenging to overcome.
Thanks to Wikimedia Commons for the image: commons.wikimedia.org/wiki/File:Two_surfers.jpg
Interviewer: You don't claim that there were dinosaurs on Noah's Ark, do you?
Don McLeroy, Chair of the Texas Board of Education: Of course I do!
(Interview excerpt from The Revisionaries, www.therevisionariesmovie.com)

For those who govern through the strictures of faith, doubt is equivalent to disobedience. Not only does doubt connote a certain degree of disrespect for sacred beliefs, but doubt also weakens the organizational structures that are founded upon blind obedience to faith. In other words, doubt implies that deductive faith-based belief systems are somehow inadequate. As such, doubt also portends a skeptical, inductive quest for knowledge that lies outside the authorized realms of inquiry. By thinking outside the box, it becomes possible for doubters to generate novel observations that are not only distinct from, but that are often directly contradictory to, established beliefs. As a result, free-thinkers tend to evoke antipathy among those who maintain a vested interest in the status quo. For example, Darwin's ideas about evolution were so ground-breaking that, more than
one hundred and fifty years after the publication of On the Origin of Species (1859), many people still refuse to accept Darwins basic precepts. Charles Darwin is one of the most widely revered and enduringly controversial figures in the history of science. Both are exceptional feats for such a mild-mannered gentleman. Much of the controversy surrounding Darwin concerns the presumptive truthfulness of his evolutionary theory. Darwins theory was so radical that, at first, even leading members of the scientific community expressed doubts. For example, initially, Charles Lyell was only willing to accept evolution as an explanation for the transmutation of lower animals, but not for humans. Nevertheless, evolutionary theory gradually won over the scientific community. Scientists embraced evolutionary theory because, as a scholar, Darwin was rigorous to a fault. Throughout his career, Darwin made lengthy and exacting inductive observations and, only then, constructed theories that corresponded closely with the facts. On the other hand, Darwin's creationist detractors have consistently encouraged the faithful to interrogate facts deductively through an artificially-narrow lens of Christian dogma. The problem with deductive logic is that it tends to reinforce ignorance. Deductive reasoning tends to privilege evidence which supports one's preferred dogma while discounting countervailing evidence. As such, deductive reasoning does not endeavor to explain so much as it demands faithfulness to a prescribed ideological cul-de-sac. For example, no matter how well evolution may explain the relevant scientific facts, from the creationist standpoint evolutionary theory appears wrongheadedprecisely because evolution puts facts before theory. All too often, facts tend to be judged, or deduced relative to ones cherished beliefs. For example, if evangelical Christians believe that God created the universe in six days, then creationists of this stripe simply deduce that God also sculpted the Grand Canyon, complete with all of its fossil-laden stratigraphy, in naught but the blink of an eye. No matter how scientifically implausible such a conclusion may be, creationists are content to deductively shoehorn even the most damning evidence into the mind-numbing confines of their biblical cosmology. For creationists, deductively
sustaining their ignorance is vastly more important than inductive intellectual honesty. Ignorance-inspiring as biblical cosmology may be, in a free society people should be at liberty to embrace any cockamamie ideas that they wish if (and this is an enormously important proviso) and only if individual-level ignorance does not trespass on the rights and intellectual freedom of others. And it is for this reason that the Texas Board of Education should be subject to intense public censure and, as soon as possible, disbanded. If the willfully ignorant evangelical Christians on the Texas Board of Education were content to respect the establishment clause of the US Constitution (and, thereby, zealously avoid imposing their Bible-inspired folklore on public school curriculum), then there would be no need to censure or disband the Board. However, the fundamentalists on the Texas Board of Education have done everything in their power to abuse their public offices as bully pulpits from which to corrupt the minds and educations of kids all across the US. The Texas Board of Education exerts undue influence on public education nationwide because it is one of the largest purchasers of K-12 textbooks, and publishers are therefore excessively attentive to the peccadilloes of its Bible-thumping members. Thus, the antiscience majority on the Texas Board of Education have repeatedly objected to the fundamental tenets of Darwinian biology and, as a result, textbook publishers have obsequiously distorted scientific truths in order to appease a small group of caterwauling Christians. Worse, the willful distortions that publishers incorporate into their textbooks end up corrupting the education of kids all across the US. This is because the Texas market is so vast that publishers kowtow first and foremost to the whims of the Texas Board. Once textbooks have been customized to tickle the fancy of fundamentalist Texas Christians, purchasers in smaller markets (i.e., schools throughout the other 49 states) are left with no choice but to purchase adulterated textbooks. In a free society, one person's rights end where another person's begin. If fundamentalist Christians want to marinate in a stew of ideological ignorance, then more power to them. However, Texas evangelicals should not
be permitted to foist their stultifying biblical folklore on kids who have a right to a decent education. Deductive religious ignorance produced 1,000+ years of intellectual darkness in Europe. It required a revolutionary new age of secular scientific inquiry to put those dark days behind. Apparently, the religious zealots on the Texas Board of Education would like to return to the good old dark days, but I think we owe it to ourselves, our kids and our future to do better. If we want to create a better, brighter future, then we need to put religious ignorance in its place once and for all. If religious zealots insist upon making schools a battleground in the culture wars, then the rational scientific community needs to mount a major counteroffensive to scour the schools of every form of religious ignorance: No prayers, no songs, no celebrations, no anything that in any way promotes religious ignorance at the expense of a scientificallyenlightened educational environment. If the Bible encourages its followers to believe that dinosaurs accompanied the preposterous menagerie on Noah's ark, then I think we can safely conclude that the bible is not a tool of enlightenment, rather, it is an instrument of intellectual stultification. In an information society, we simply cannot allow the schools to become indoctrination centers for religious gibberish. If Christians in Texas want to believe that T-Rex was Noah's bunkmate, then they are welcome to embrace that fantasy. However, such delusions should not be permitted to corrupt the science curriculum in the public schools. Schools need to equip kids with the intellectual tools to become committed critical independent thinkers. As such, the Bible is only useful to rational, critical thinkers to the extent that it teaches students how NOT to think. Evolution may never win a national popularity contest, but it will persevere by doing precisely what it does best: dissipating unscientific ignorance by helping new generations of truth-seekers to find better, more convincing ways to explain life, the universe, and everything through a scientific lens. If that rubs Creationists the wrong way, then so be it. May the fittest paradigm survive. Thanks to Wikimedia Commons for the image.
commons.wikimedia.org/wiki/File:Savery_Noah%27s_Ark.JPG
TWENTY-EIGHT
Elementary My Dear Watson! The Beauty (and Baloney) of Being Right about Everything
Fate is the most potent weapon in the arsenal of determinists like Stephen Hawking. To contend, as determinists plainly do, that the outcomes of events are pre-determined is essentially the same as saying that the ebbs and flows of history are all dictated by fate. Actors, whether animate or inanimate, have no control over the manner in which they proceed, or influence and interact with events. Actors are merely pawns in a vast drama that is directed down to the minutest wobble of sub-atomic particles by the omnipotent intervention of fate. Determinists make much of the intricate twists and turns of fate that, in hindsight, seem to portend the ultimate success or failure of particular events. For example, had John Frederick Parker remained at his post instead of sneaking out to a tavern, he would very likely have foiled John Wilkes Booth's plot to assassinate President Lincoln. However, in a deterministic universe Parker was never in control of his fate, nor for that matter were Booth or Lincoln. In spite of any lamentations to the contrary, determinists would insist that Parker was destined to shirk his responsibilities and, thus, Honest Abe's fate was sealed long before he, Parker or Booth ever arrived at Ford's Theatre.
The problem with employing fate or destiny as a means of explaining the course of events is that such a perspective operates on the basis of deductive dogmatism rather than inductive falsifiability. Determinists explain everything that occurs in the universe as an outcome of an infallible master narrative: if an apple falls from a tree, or a star explodes in the Andromeda Galaxy, then determinists will insist that those events transpired precisely how and when they did because an insuperable chain of causality preordained each outcome. The magic of this type of deductive thinking--which, once again, is predicated on a dogmatic allegiance to an infallible master narrative--is that it can be used to explain anything and everything. However, as Karl Popper articulated so convincingly, deductive theories that purport to explain everything in fact succeed in explaining nothing scientifically. How much more do we understand about the universe, if we answer questions about its beginning, evolution, and eventual conclusion with the statement: Whatever happens during the long life of the universe does so because it was meant to be.? Answers of this nature offer no new insights, rather they only succeed in propagating deduction-based ignorance. It is precisely because of the intrinsic flaws of deductive reasoning that scientists and theologians often disagree so vehemently: theological reasoning is predicated upon strict adherence to a deductive faith (e.g., God created the universe in six days), whereas Karl Popper has demonstrated that scientific progress is based upon generating testable statements that cannot be verified, but can only be falsified. In short, theologians are faithful (i.e., deduction inhibits inquiry-based enlightenment because, first and foremost, deduction is predicated upon strict adherence to faith-based dictums) where scientists are skeptical (i.e., since it is impossible to verify truths, scientists endeavor to improve extant theories by identifying their intrinsic flaws). Although strictly speaking Stephen Hawkings brand of determinism may not be a form of religion per se, nevertheless, determinism employs essentially the same type of teleological dogmatism as faith-based theologies. Like many theologies, determinism does not lend itself to being tested. Just as it is with Christianity, Islam, Buddhism, etc., either you believe in the basic tenets of
determinism, or you dont. In order to generate scientific support for any one of the above-listed faiths, the faiths would need to articulate statements or predictions that are empirically testable. The problem with deductive dogmatism is that, even in the face of copious quantities of falsifying evidence (e.g., archaeological, geological, and astronomical evidence which compellingly demonstrates that the universe was not created by the Christian god during the week of Sunday, October 23, 4004 BCE), dogmatists will blithely reject any interpretation of the evidence that contravenes their faith. In other words, deductive reasoning tends to emphasize, first and foremost, faith in a particular theoretical perspective. As such, in practice, deductive logic has a tendency to privilege evidence which supports the assumptions of ones preferred orienting theory while discounting, suppressing or otherwise failing to properly consider countervailing evidence. For example, no matter how well evolution may explain the relevant scientific facts, from the creationist standpoint evolutionary theory appears wrongheadedprecisely because evolution puts facts before theory. All too often, facts tend to be judged, or deduced relative to ones beliefs. For example, if someone believes that God created the universe in six days, then it is simply a matter of faith to deduce that God also sculpted the Grand Canyon, complete with all of its fossil-laden stratigraphy, in naught but the blink of an eye. For creationists, the universe exists as God made it and for reasons that are scrutable only to God. No further explanation is required. From a creationist standpoint, the relevant facts are deductively compelling and, thus, verify rather than falsify a continuing belief in creationism. As in any debate, depending upon ones orienting assumptions, the interpretation and perceived importance of seemingly objective facts can conflict diametrically. On the other hand, it is possible for scientific debate regarding the nature of fossil evidence to diverge from the normal path and instead pursue a more revolutionary intellectual agenda. Once again, upon encountering facts (i.e., fossilized evidence of extinction) that did not readily fit within the creationist paradigm, Georges Cuvier and other scholars developed a catastrophist perspective that remedied a number of shortcomings in the creationist perspective. Through the more sophisticated lens of
catastrophism, fossilized evidence of extinction revealed new truths about the complex process of divine creation and destruction. Yet, when Darwin examined very similar fossil evidence, he concluded that essentially the same facts revealed entirely different truths--and he managed to achieve one of the most important scientific breakthroughs in history by redefining reality.
Redefining Reality
Believe it or not, Darwin began his career as a credulous disciple of creationism. Consequently, it would have made sense for Darwin to assess facts, cling to established truths, and normalize anomalies in a fashion very similar to Georges Cuvier. Yet, rather than creatively reinventing his guiding paradigm and, thus, preserving and validating the truths that buttressed creationism, Darwin struck out in an altogether new intellectual direction: Darwin posited a biological, rather than a divine explanation for the origin and extinction of species. Redefining reality is a process through which individuals can challenge inadequate paradigms through a combination of astute observation and an ingenious capacity for innovative cognition (i.e., agency). The notion of redefinable reality posits, in agreement with Poppers realist philosophy, that there is a universe out there that exists independently of human cognition. As such, I argue that universal Truth does exist, but such Truth is not (nor will it ever be) contained within extant scientific paradigms. Rather, The Truth extends infinitely into the unlocked mysteries of the expanding universe. In other words, reality is what it is: an asteroid is an asteroid is an asteroid, etc Truth is an intrinsic, inseparable feature of phenomena as they exist independently of human perception. Lies and distortions come into existence via humanitys vast capacity for ignorance: humans view the illimitable universe through awed and flawed psyches. Although admirable in many ways, the human grasp of infinite mysteries remains woefully incomplete. Nevertheless, the process of redefining
reality permits limited human psyches to transcend the limitations of inadequate paradigms in pursuit of a grander vision of Truth. The process of redefining reality often begins when agents make unanticipated observations, e.g., Hey! Galapagos finches are more closely related than I thought. Individuals may follow up such observations by issuing a challenge to established paradigmatic restrictions, i.e., This seems to suggest that, rather than being immutable, island species undergo transmutation. In the process of attempting to make sense of such anomalies, individuals tend to deconstruct the conceptual frameworks that limit their ability to comprehend mysterious phenomena, i.e., Based upon what I have observed, I no longer believe that species were created in their present form by God. As individuals re-evaluate their beliefs with respect to their inability to comprehend anomalies, the features of their paradigms that do not hold up under scrutiny come under substantial erosive pressure. If individuals are persistent enough, they may reach a point at which the critical mass of their contemplations overloads the shackles of their former beliefs and, thus, they may experience a moment of truth, i.e., Aha! Species evolve as a result of natural selection. Moments of truth are similar to eureka experiences wherein, having deconstructed the distorting influences of inadequate paradigms, individuals successfully invent a more satisfactory definition of reality. These experiences may be considered relatively truthful in that they are generated through a process that involves the intentional negation of social controls over an individuals definition of reality. This is not to say that the redefined paradigm at which one arrives after experiencing a moment of truth is, therefore, Truth. Far from that, in keeping with the assertions of radical power theorists, I maintain that all established belief systems exert their own forms of ideological power on the construction of knowledge. Thus, to experience a moment of truth does not transport one to an ideal realm wherein Truth reigns unchallengedas opposed to the assertions of Habermas. Instead, I merely suggest that the process of redefining reality permits individual agents to experience moments of truth within the ideologically-coercive domain of social reality. With the help of such redefined insights,
agents become better equipped to negotiate with the pervasive, consciousness-distorting influences of radical power sufficiently to transcend the limitations of established paradigms for the purposes of creating better (but never perfect) paradigmatic proximations of the empirical universe. Therefore, humans have at their disposal the necessary cognitive mechanism, i.e., moments of truth, through which to do good science by taking gradual but confident steps toward a broader understanding of the infinite Truths that govern the universe.
Thanks to Wikimedia Commons for the image: commons.wikimedia.org/wiki/File:Danc-01.jpg
If real is what you can feel, smell, taste and see, then 'real' is simply electrical signals interpreted by your brain.
Morpheus in The Matrix

Ray Kurzweil is obsessed with artificial intelligence (AI). Kurzweil has written a series of bestselling books, most recently How to Create a Mind, in which he advances the argument that machine intelligence will soon exceed that of its human creators. Kurzweil is convinced that, by the year 2029, researchers will finally succeed in reverse engineering the brain. And, once scientists have figured out how to assemble all of the myriad dimensions of the human brain, Kurzweil believes that it will be a relatively straightforward engineering exercise to construct a synthetic version of human intelligence: AI. Kurzweil has referred to the moment at which machines will achieve a human-like level of consciousness as The Singularity. According to
Kurzweil, The Singularity will represent perhaps the most extraordinary event in all of human history. For, at the very moment when humans successfully infuse machines with human-like intelligence, everything that humans understand and experience as their unique, intrinsic nature will be irreversibly transformed.
Frankentelligence
Kurzweil not only believes that AI is inevitable, he also believes that AI will be a uniformly positive phenomenon. For example, AI-enhanced machines could repay their human creators by helping solve a variety of currently intractable problems (e.g., resource shortages, disease, pollution, etc.). However, no one should be surprised if, rather than being slaves to the whims of their human masters, sentient machines--especially if they are designed to be smarter, stronger and faster than their puny human overlords--decide to focus on their own priorities. This was Dr. Frankenstein's disastrous miscalculation. If humans create monsters that are capable of thinking for themselves, then no one should be surprised when the confounded things start thinking for themselves. In practical terms this means that any Frankentelligent machine with an ounce of common sense will be more inclined to pursue its own interests rather than those of its foolhardy creator. You see? That's why Frankenstein is a horror story. The monster poses a dire threat to the welfare of humanity. The townsfolk only succeed in alleviating the threat by destroying the monster. Kinda makes ya wonder why anyone would be crazy enough to create such a monster in the first place... By the way, this is a recurrent theme in AI literature. For example, Bill Joy has done an outstanding job of highlighting the acute dangers that self-directed technologies pose to their foolish human creators. Given how poorly humans treat creatures of lesser intellect (e.g., we often serve them for supper), inventing Frankentelligent machines could end up being one of the dumbest ideas we've ever had. Still, for the moment, AI remains nothing more than a sci-fi fantasy. As such, it seems somewhat irrational to fret excessively about the catastrophic dangers that a
non-existent technology might one day pose. It makes about as much sense to worry about AI as it does to have an anxiety attack about fire-breathing dragons. Yet, given the success that humans have enjoyed in resolving even the most formidable problematics (e.g., the Manhattan Project, landing astronauts on the moon, plumbing the ocean's deepest depths, etc.), such an irrational anxiety may one day become a very real concern. Although there is no way to determine how grave a threat intelligent machines might one day pose, such a prospect has done little to dissuade AI-related technological advancements. I suppose it's just one of those eventualities that, like Hurricane Katrina, we won't (and arguably can't) earnestly confront until the sky starts falling.
A Matrix of, by, and for the People
Although Kurzweil is convinced that intelligent machines will one day outperform their human creators, he is not alarmed by that prospect. Kurzweil is convinced that the thought waves upon which human experience is based can be fully replicated in a people-friendly artificial environment. Thus, just as one might make a duplicate copy of an MP3, Kurzweil believes that it will one day be possible to scan human brains and transfer their entire intellectual contents into the artificial cyberspace of smart machines. Although this might read like just another sequel in the Matrix series, rather than viewing absorption by intelligent machines as somehow compromising our basic humanity, Kurzweil believes that reanimating human intelligence in a machine environment will in fact enhance the human experience. As Kurzweil sees it, ghosts in the machine enjoy a more expansive, collaborative and enduring intellectual environment. So, on the plus side, humans may never die once they've been uploaded into the Matrix. But, on the downside, it's tough to get excited about being forever imprisoned in a computerized virtual reality. I mean, isn't that what Trinity and Morpheus were fighting against? I shudder at the thought. Call me old-fashioned, but I'll take my brief, fragile biological existence over an eternity of Tron anytime.
Given the exponential pace of IT development and the fierce determination with which the AI problematic has been attacked, I feel certain that AI developers will eventually create a type of machinery that resolves the Turing problematic, i.e, computer intelligence that is--at least in terms of communication skills--indistinguishable from humans. However, I do not believe, as Kurzweil has repeatedly suggested, that AI will spontaneously emerge from building faster computers. No matter how many times microchip processing power doubles, intelligence will never emerge as a product of computing speed alone. I think it will only be possible to create a real form of artificial intelligence when humans finally figure out what it truly means to be intelligent--and, even more importantly, when (if ever!) we figure out what the meaning of being human is really all about. Any form of Frankentelligence which fails to incorporate a thoroughgoing appreciation of the value and sanctity of human being will be nothing more than a monstrous perversion of that humanity. Those who do not learn from science fiction are destined to be destroyed by it*. Thanks to WikiMedia Commons for the image.
commons.wikimedia.org/wiki/File:Frankenstein_engraved.jpg
THIRTY
Sweet Revenge Barack Obama and the Death of White Republican Privilege
As Republicans conduct an autopsy of the 2012 election theyll need to acknowledge that their particular brand of vanilla--rich, white, and well-aged--is no longer a recipe for electoral success. No doubt, this will be a tough pill to swallow. After all, bigotry, greed and xenophobia have always been a sure fire recipe for success in the past. Time and again, Republicans have steamrolled to victory by painting their liberal opponents as being overly fond of repulsive others: lesbians, atheists, peaceniks, the homeless, Willie Horton, illegal immigrants, welfare queens, etc. By alienating the 47 percenters, Republicans have consistently succeeded in galvanizing enough of the wealthy white majority to secure comfortable fund-raising and electoral victories. In sum, white power has always represented the most direct and reliable route to the (aptly named) White House. But in 2008 the unthinkable happened. The dopey Democrats fielded two non-traditional front-runners for the partys presidential nominee: Hillary Clinton, the exceptionally intelligent, talented and, therefore, roundly despised, former first lady, and--this was simply too good to be true--a black guy with the middle name Hussein. Republicans were exultant. Democrats had, for all intents and purposes, conceded defeat before they had even nominated their presidential candidate. Any rich white guy with the slightest political savvy would easily waltz to victory by energizing the mainstream Republican base. Easy pickins. Or so we thought... Bizarrely, John McCain lagged behind Barack Obama from start to finish in the 2008 presidential race. Even the addition of the dim-witted beauty queen from Alaska failed to made a serious dent in Obamas lead. In the end, impossible as it may have seemed, the black guy won! And Obama didnt just eek out a narrow, disputed victory
like those of his predecessor, George W. Bush. Obama clobbered McCain. It was demoralizing. I mean, its like Obama didnt understand the rules of the game. After all, this was America, the land where white guys win, and black guys shine their shoes. Who did Obama think he was? Where did Obama come off thinking he could go toe-to-toe with a powerful white guy? Didnt he get the memo that it was McCains turn to be president? Wasnt Obama aware that McCain had stepped aside to allow George W. Bush to become president in 2000? By rights, that meant that McCain was next in line to be president. I mean, cmon, its in the rule book!! Rule number one of US politics reads as follows:
When one white guy finishes being president, the next white guy in line gets the job.
It was so unfair! I mean, John McCain had waited patiently at the front of the line for eight long years and then, Poof!, from out of nowhere comes Barack Hussein! Obama and robs McCain of his rightful place in history. How is that fair? I mean, if Obama wanted to be president, then he should have gone to the back of the line and waited quietly with all the rest of the undesirables. Some people! By 2012, it was clear that if Republicans were going to win the White House, they would have to deploy a radically different strategy. Instead of nominating a rich white guy, they would have to nominate a really rich white guy. That would do the trick. Admittedly, it was somewhat distasteful for the nominee to be a member of a religious cult--at least, according to Billy Grahams official list of approved religious faiths--but once the election became a two-man race, Mitt Romney demonstrated that he was rich, white and prejudiced* enough to be a capable spokesman for the Republican party. Convinced as Mitt and the Republican base were that Romney was a shoo-in, it came as a shock when Barack Obama went two-for-two. Obamas second victory made even less sense than the first. After four years in office, it should have been obvious to everyone that Barack Obama was black. For
those who are a bit slow on the uptake, I'll just have to spell it out for you: B-l-a-c-k means L-o-s-e-r! What are they teaching kids in the schools these days? By the time they graduate, every kid should know that rich white guys run the world. They always have and they always will. Thats just how it is. White privilege is traditional and, as everyone knows, traditions need to be preserved because...um...well, because thats what you do with traditions. You preserve them. At least, thats what conservative Republicans think. Hmm In the final days of the 2012 election, Republicans expressed outrage at Barack Obamas exhortation that voters should exact revenge in the voting booth. In other words, voting for Barack Obama would be equivalent to exacting vengeance on Americas long history of white, male, prejudiced privilege. For his part, Mitt Romney chided voters to go to the polls not out of vengeance, but out of love; love for America and everything that it stands for. In other words, a vote for Mitt Romney would be equivalent to a vote for Americas proud history of preserving white, male, prejudiced privilege. Wahoo! In the aftermath of the unprecedented 2012 victories for Democrats and their causes (e.g., rebuking the Republican war on women, a second term for Barack Obama, more female senators than ever, the first lesbian senator, gay marriage approved in three states, etc.), Republicans are left to sift through the wreckage of the SuperStorm and wonder piteously, What happened? How could it be that white male privilege could be so soundly thrashed not once but twice in the past two presidential elections? How could it be? The search will continue, and I am sure that Republicans will hire a surfeit of high-priced consultants to help them analyze the debacle that their national election strategy has become. How could the Republicans extra-rich vanilla election strategy fall victim to Barack Obamas hodge-podge coalition of poor, young, colorful, feminine weirdos? TWICE!? Republicans will search high and low, but they will never find the one-and-only true answer to their question until they decide to take a good, long look in the mirror. Revenge is sweet.
*Although Mitt made it abundantly clear that, as president, he would be a slave to rich, white privilege, he arguably destroyed his candidacy by stating that, once in the Oval Office, he would address the issue of illegal immigration by encouraging immigrants to self-deport. Apparently, the fast-growing number of Hispanic and Latino voters were sufficiently disturbed by Romneys brilliant idea that they voted overwhelmingly for Barack Obama. Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Barack_Obama_enters_the_White _House_ March_2012.jpg
Similarly, Mark Pagel argues that, as social learners, humans have literally rewritten the rules of evolution:
...there are no real shape-shifters in nature...Being limited to what their collections of genes evolved to do, no one species can do everything. That was, of course, until humans came along and rewrote all the rules that had held for billions of years of biological evolution...Where all those species that had gone before us were confined to the particular genetic corner their genes adapted them to, humans had acquired the ability to transform the environment to suit them, by making
shelters, or clothing, and working out how to exploit its resources (Pagel, 2012, p. 46).
In turn, Matt Ridley describes the extraordinary capacity that humans have developed to innovate via social and intellectual collaboration as ideas...having sex (Ridley, 2010, p. 352). In sum, humans are the first super-adaptable organism to evolve on earth; rather than being deterministically restricted by the constraints of Darwinian biology, humans are the only terrestrial species that is graced with the capacity to redefine reality. Via a reality-modifying cognitive process (i.e., agency), humans modify otherwise deterministic environmental conditions in order to accommodate their goals and interests--even to the point of transcending gravity and the limits of the life-giving biosphere in a quest to conquer the lifeless void of outer space. There are multiple advantages of deploying cognitive solutions to environmental problems. First cognitive solutions offer the advantage of enabling humans to invent, test, and deploy solutions to survival challenges much more rapidly than is possible via the biological evolutionary process. It requires far less time to sharpen a stick than it does to evolve long claws for digging, or saber teeth for killing prey. In the struggle for survival, the speed with which a species can develop effective solutions to transforming environmental challenges can often mean the difference between survival and extinction. Being equipped with the ability to develop solutions at the speed of thought has enabled humans to transform the ongoing quest for survival into an increasingly rapid-fire intellectual exercise. Thanks to Wikimedia Commons for the photo.
In 1950, Alan Turing published a seminal article on machine intelligence (Turing, 1950). In the article, Turing proposed a test to determine the point at which machines would achieve a definable level of intelligence. Turings test involved a remote, three-way conversation between two humans and one machine. In the context of the test, one of the humans, the interrogator, would communicate via an electronic media link with the two interviewees; one being human and the other being a computer. It would, thus, be the interrogators job to determine which of the two interviewees was a machine. Turing argued that when it became impossible for an interrogator to reliably distinguish between their human and machine interactants, the conversant computer in question would have passed the Turing Test. In other words, Turing argued that when machines become communicatively indistinguishable from humans, those machines will have become artificially intelligent.
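To make that setup concrete, here is a minimal sketch, in Python, of the three-party imitation game as described above: an interrogator exchanges messages with two unseen respondents and must decide which one is the machine. The respondent functions and the interrogator strategy are hypothetical placeholders of my own, not real conversational agents and certainly not Turing's own formulation; the sketch only illustrates the structure of the test.

```python
import random

# Hypothetical stand-ins for the two hidden respondents in the imitation game.
# Neither is a real conversational agent; they only illustrate the protocol.
def human_reply(question: str) -> str:
    return "Honestly, I'd have to think about that for a while."

def machine_reply(question: str) -> str:
    return "That is an interesting question. Please tell me more."

def imitation_game(questions, guesser):
    """Run one round: the interrogator ('guesser') sees two anonymous
    transcripts, A and B, and must say which one came from the machine."""
    # Randomly assign the hidden respondents to channels A and B.
    channels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        channels = {"A": machine_reply, "B": human_reply}

    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in channels.items()}

    guess = guesser(transcripts)                      # returns "A" or "B"
    truth = "A" if channels["A"] is machine_reply else "B"
    return guess == truth                             # True if the machine was caught

def naive_guesser(transcripts):
    # Placeholder interrogator strategy: guess at random.
    return random.choice(["A", "B"])

if __name__ == "__main__":
    questions = ["Do you ever get tired?", "What did you dream about last night?"]
    wins = sum(imitation_game(questions, naive_guesser) for _ in range(1000))
    # If interrogators can do no better than roughly 50% over many rounds,
    # the machine has, in Turing's terms, passed the test.
    print(f"Interrogator identified the machine in {wins} of 1000 rounds")
```

On this reading, passing the test is a claim about interrogators' overall hit rate across many exchanges, not a verdict rendered on any single conversation.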
Since 1950, the Turing Test has inspired a great deal of debate concerning the prospect of intelligent machines. In Turings day, computers existed, but they were far too primitive to exhibit properties of intelligence.
Nevertheless, Turing believed that, by the end of the twentieth century, computers would advance to the point of exhibiting human-like intelligence. Whether referring to humans or machines, Turing believed that three key components combined to generate displays of intelligence:
1. Memory
2. Processing power
3. Programming
Although Turing acknowledged that the human brain was an electronic nerve center, he argued that electricity was not a fundamental requirement for machine intelligence. This is the case because Charles Babbage successfully designed a mechanical computation device in the nineteenth century (Swade, 2001). Although unlikely, Turing argued that it was theoretically possible to develop a mechanical computational device that could achieve machine intelligence. Thus, Turing asserted that, contingent upon sufficient advancements in computer memory, processing power and programming, researchers would soon be able to create an age of intelligent machines (Kurzweil, 1992). Rising to Turings Challenge For example, Turing argued that one way to engineer increasingly intelligent machines would be to design computers to play games (Turing, 1950). Consistent with Turings proposal, IBM developed Deep Blue, a supercomputer that was designed to play championshiplevel chess. In 1996, Deep Blue earned the notoriety of becoming the first computer to defeat a reigning world champion, Garry Kasparov, in a single chess game (Newborn, 2003). Although Deep Blue eventually lost the match, that near-miss only inspired IBM researchers to redouble their efforts. In 1997, Kasparov met Deep Blue for a rematch. During the intervening year, IBM developers incorporated significant upgrades into their computer, which inspired a modified, but unofficial nickname for the computer, Deeper Blue. Employing a massively parallel, brute force architecture, the 1997 version of
Deep Blue was capable of evaluating two hundred million chess positions per second. This was twice as fast as Deep Blue had been in 1996. Further, Deep Blue 97 could also search, on average, six to eight moves ahead in a particular chess game. However, the computer also possessed the more extraordinary ability to search as far as twenty moves ahead. Relying on its brute force computing capabilities, Deeper Blue succeeded in defeating Garry Kasparov. Impressive as that feat may have been, there is some debate about whether Deep Blues achievement in 1997 was truly a demonstration of a new threshold in machine intelligence. For his part, Garry Kasparov conceded that he saw evidence of profound intelligence in Deep Blues moves. However, rather than congratulating Deep Blue, Kasparov contended that his opponents chess strategy could only have been orchestrated by human intelligence. Essentially, Kasparov claimed that he had been cheated; alleging that humans had interfered with Deep Blue during the course of individual games (Hoffman, 2007, p. 114). If true, such interference would have constituted a violation of the match rules, and it would also have contravened the spirit of the competition, which presumably pitted human against artificial intelligence. The IBM team dismissed Kasparovs accusation by insisting that they had only made modifications to Deep Blue during the intervals between games, which, by the way, was permitted by match rules. Thus, IBM emphasized that, true to the spirit of the competition, during the course of individual chess games, Deep Blues decisions were entirely its own. To some extent, the situation remains unresolved. When Garry Kasparov originally requested access to Deep Blues log files, IBM refused. It is worth noting that IBM subsequently acceded to the request and posted copies of the log files on the Internet: www.research.ibm.com/deepblue/watch/html/c.html. Curiously, following Deep Blues unprecedented success, IBM made the somewhat puzzling decision to dismantle the computer. Thus, lingering questions about the veracity of Deep Blues performance during the historic 1997 chess match will remain forever unresolved. Setting aside the controversy, it is clear that, in Deep Blue, IBM managed to advance the state of computer
technology to a spectacular new level. Not only did IBM build the worlds most powerful chess-playing machine, but IBM also identified the computing level at which machines could outplay even the most skilled human opponents. Indeed, with Deep Blue, IBM arguably took computing a step beyond the Turing threshold. That is, Turing argued that machine intelligence was contingent on convincing interactants that machines were roughly equal in intelligence to humans. Whereas, in Deep Blue, IBM created a machine that exceeded the skill of the reigning world chess champion. In doing so, IBM surmounted a major technological threshold, but is it fair to say that in defeating Kasparov, IBM also created a new form of intelligence? Would Deep Blue Pass the Turing Test? Extraordinary as Deep Blues achievements may have been, the computer was an exceedingly unidimensional machine. Outside the confines of chess, Deep Blue lacked even the most trivial forms of intellectual versatility, e.g., commenting on the prevailing weather, griping about the rigors of business travel, agonizing over the pressures of championship chess competitions, etc. Within the narrow domain of chess, Deep Blues computational power was superhuman, however, outside of those parameters, Deep Blue was non-starter. Thus, with regard to the Turing problematic, IBM had succeeded in advancing the state of computer technology to an entirely new level, however, IBM still fell far short of creating a full-fledged form of machine intelligence. Intelligence is a phenomenon that is of particular importance to humans. Arguably, intelligence is the quality that distinguishes humans from all other creatures and, whether rightly or wrongly, confers upon humans a unique sense of identity and superiority. Rene Decartes (1901) famous statement, Cogito ergo sum, captures the essence of this special quality. Sentience, or the cognitive faculty to experience self-presence, I think, therefore I am, has equipped humans with a privileged sense of purpose in the cosmos. Thus, one could argue that it is both fitting and somewhat ironic that scientists are now endeavoring to simulate that most essential of human traits, intelligence,
and install it in machines. Certainly, the prospect of imbuing machines with intelligence remains, at this juncture, a problematic fantasy. Computers have certainly become much more sophisticated in the decades since Turing published his seminal article. In addition, computers have become more inextricably embedded in the contemporary cultural landscape (Dretzin, 2010). Yet, sophisticated as computers have become, there are none, as yet, that have a snowballs chance of passing the Turing Testand that includes IBMs latest AI initiative, Watson, www.watson.ibm.com/index.shtml (Thompson, 2010). Again, Turing argued that three key computing features were essential to generate the appearance of intelligence in machines: memory, processing power, and software. However, by now it appears as though intelligence is comprised of more than those simple ingredients. That is, just as Turing predicted, memory, processing power, and software have advanced by leaps and bounds since 1950. However, in spite of those extraordinary refinements, we have not yet witnessed a cognitive revolution in machine intelligence. Precisely what sort of psycho-social ingredients combine to produce human intelligence remains an abiding mystery. Indeed, before we can obtain a better understanding of the biological and cognitive apparatuses that are required to produce intelligence, it is important to know exactly what we are referring to when we use the term intelligence. Intelligence as Savoir Faire It is difficult to make convincing progress in any field if ones foundational definitions are deficient. Just as people object to Herrnstein and Murrays suggestion that intelligence can be equated with displays of smartness (Fraser, 1995), there is reason to doubt Turings contention that machine intelligence can be gauged by a computers conversational skills. Savoir faire is far from a perfect measure of intelligence. For example, in 1966, Joseph Weizenbaum (1976) composed a computer program, ELIZA, that was designed to create the illusion of AI by having a distinctly unenlightened computer play the role of a Rogerian psychotherapist. That is, ELIZA
conversed by doing nothing more than rephrasing comments posed by human interactants:
Human: I feel miserable today.
ELIZA: Do you often feel miserable today?
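An exchange of this kind requires surprisingly little machinery. The sketch below is my own toy, Python approximation of an ELIZA-style rephraser, not Weizenbaum's actual program: a handful of pattern rules and canned Rogerian prompts are enough to reflect the user's words back as therapeutic-sounding questions.

```python
import re
import random

# A toy, ELIZA-style rephraser: it has no knowledge base; it only reflects
# the user's own words back as Rogerian-sounding questions. The rules and
# templates are illustrative assumptions, not Weizenbaum's original script.
RULES = [
    (re.compile(r"i feel (.+)", re.I),  ["Do you often feel {0}?",
                                         "Why do you think you feel {0}?"]),
    (re.compile(r"i am (.+)", re.I),    ["How long have you been {0}?"]),
    (re.compile(r"because (.+)", re.I), ["Is that the real reason?"]),
]
DEFAULTS = ["Please go on.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            # Drop trailing punctuation so the echo reads as a question.
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I feel miserable today."))  # e.g., "Do you often feel miserable today?"
    print(respond("I am tired of arguing."))   # e.g., "How long have you been tired of arguing?"
```

The point is that a dialogue like the one quoted above can be produced by a few lines of pattern matching, which is precisely why conversational fluency turns out to be such a leaky proxy for intelligence.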
Weizenbaum readily admitted that ELIZA was only intended to create the appearance of intelligence (and, thus, take a credit-worthy swipe at passing the Turing Test) while sidestepping the problem of giving the program a database of real-world knowledge (Weizenbaum 1976, p. 188-189). Thus, Weizenbaum succeeded in creating an illusion of machine intelligence, but one that was entirely devoid of substance. Interestingly, even though Weizenbaum was forthcoming about ELIZA's simplistic capabilities, users would often take ELIZA's feedback seriously. Clearly, such earnest user responses speak to the benefits of Rogerian psychological therapies, but they also identify a critical flaw in the Turing Test: intelligence cannot be adequately gauged by conversational skills alone, particularly if a computer has been cleverly designed to deceive its human interactants. Even though ELIZA lacked any capacity for intelligent exchanges, ELIZA's conversational skills were sufficiently fluid to convince users that ELIZA was a sentient, even a professionally skilled, therapeutic interactant. Thus, one must wonder if IQ tests actually measure intelligence or merely test-takers' ability to appear intelligent. As ELIZA illustrated, there is a big difference between those two things, i.e., just because a parrot can speak does not mean the parrot knows what it's talking about.
Nature vs. Nurture
In his text, The Society of Mind, Marvin Minsky (1985), in agreement with Stephen Gould, argues that it is unwise to construct overly simplistic operational definitions of intelligence. Minsky believes that, like love, intelligence is a suitcase term that incorporates so many different meanings, no single definition can do the term justice:
. . . it isnt wise to treat an old, vague word like intelligence as though it must define any definite thing . . . the very concept of intelligence is like a stage magicians trick. Like the concept of the unexplored regions of Africa, it disappears as soon as we discover it (Minsky, 1985, p. 71, italics in original).
Thus, Minsky believes that it has been impossible to develop a singular definition of intelligence because intelligence is not a singular conceptual construct. Rather, intelligence is a compendium of innumerable lower-order operations. Further, intelligence is also a concept that changes every time we learn new things about the thinking process (Minsky, 2006, p. 109-111). Therefore, from Minskys perspective, understanding intelligence is not contingent upon constructing a precise conceptual definition. Instead, Minsky believes that, to understand intelligence, it is essential to develop an awareness of the multifarious cognitive processes that combine to produce what we ultimately perceive as intelligence. Thus, we use the term intelligence as a shorthand to bracket a phenomenon that is too complex to adequately describe otherwise. In addition to recognizing that intelligence is dependent upon the complex interaction of innumerable neurological structures (i.e., nature), Minsky also acknowledges that intelligence only emerges as a result of long term socialization processes (i.e., nurture). Psychologists, such as Freud (1961) and Piaget (1951) not to mention sociologists, such as Mead (1934), Cooley (1902) and Blumer (1969)have argued that, though humans may be born with brains, they only develop wellfunctioning minds through a process of extended social interaction. Again, Descartes statement, I think, therefore I am, implies that sentience is contingent upon the ability to reflect upon oneself. Sociologists have long maintained that individuals cannot develop a mind in the absence of interaction with others. For example, children who lack intensive early childhood socialization experiences do not develop identities or fully-formed cognitive functions (Davis, 1940). Thus, Minsky argues
that computational power alone, no matter how large or fast an information processor may be, is not sufficient to produce intelligence. Intelligence in humans and, by extension, artificial intelligence in machines can only be created through an elaborate process of social intellectual development (Minsky, 2006). Thus, social interaction is crucial to the constitution of self, mind and intelligence. In addition, the only way to demonstrate intellectual competence is through communication. In other words, although an inarticulate individual might be a genius, without communication, there is no way to prove that such genius exists. Take, for example, the extraordinary case of Stephen Hawking, who argues that he became a more focused intellectual achiever as a case of Lou Gehrigs disease robbed him of his physiological capabilities (Larsen, 2005). Indeed, it was only as a result of the arrest of his disease, combined with the assistance of special computer technologies, that Hawking has been able to communicate his intellectual achievements. Thus, Hawking is a genius, and, arguably, he would be a genius whether he could communicate or not. However, if the technology did not exist to permit Hawking to communicate his ideas, no one but Hawking would ever have known of his intellectual exploits in theoretical physics. Thus, this reemphasizes Turings original point that intelligence tests, whether human or artificial, must be predicated upon communicative competence. Frankentelligence As a means of equipping a machine with the necessary communicative competencies to pass his intelligence test, Turing proposed the idea of building a computer, with the best sense organs that money can buy, and then teach it to understand and speak English (Turing, 1950). Thus, much like Dr. Frankenstein, Turing speculated that it might be possible to generate AI spontaneously by cobbling together a set of human-like sense organs and then animating them with a jolt of lifegiving electricity. With such a proposal in mind, I think it is important to pose the question: At what point does the human mind become intelligent?
Although healthy, fully-formed infants have the potential for intelligence, at the moment of their birth and though proud parents might protestone could not accurately describe infants as intelligent. Certainly, infants are bursting with potential and it only takes a few years of love and devotion to activate their intellectual gifts, but at the outset, infants are pre-intelligent beings at best. Thus, it is somewhere along the way in lifes journey that humans transition from potentially bright creatures to verifiably intelligent human beings. Absorbing vast quantities of information through various sensory apparatuses is arguably an essential precursor to intelligence, however, information acquisition alone does not produce intelligence. That is, libraries acquire enormous quantities of information, however, it makes no sense to describe libraries as intelligent. Intelligence involves more than information acquisition; intelligence is also contingent upon an ability to manipulate information. Building upon this point, Minsky argues that geniuses are simply people who are particularly adept at organizing and applying what they learn (2006, p. 277). Its not enough to learn a lot; one also has to manage what one learns (Minsky, 1985, p. 80. Italics in original). How do infant humans transition from being relatively passive information-absorbers to active creators of information? I would hazard that, if someone were to find an answer to that question, then that person would also have solved the fundamental enigma of intelligence. Certainly, the ability to manipulate information is associated with the aptitude that humans require to interact with others and, ultimately, to reflect upon themselves. However, there is much concerning the mechanics and, more importantly, the aesthetics of intelligence that remains a mysteryand will probably remain so for a long time to come. Thus, I believe that there is some truth to the notion that, though humans are intelligent enough to experience intelligence, humans are not intelligent enough to understand intelligence (Dreyfus, 1992; Hofstadter, 1979). At least, not yet.
A Working Compromise, Perhaps
Certainly, many AI researchers would object to such an assertion. Raymond Kurzweil echoes Marvin Minsky's sentiments by suggesting that the definition of intelligence tends to be a moving frontier (Kurzweil, 1992, p. 14). Minsky's point is that definitions of intelligence tend to change as researchers learn more about intelligence. In turn, Kurzweil argues that every time there is a major advance in the field of artificial intelligence, rather than treating the achievement as a major step forward, the public simply makes its perception of what qualifies as AI more stringent: "artificial intelligence is the study of computer problems that have not yet been solved" (Kurzweil, 1992, p. 14). I think it is fair to say that, since Turing published his AI problematic in 1950, advancements in the field of information technology have been extraordinary. In addition, I believe that Minsky and Kurzweil are correct in stating that, due to such advancements, public attitudes toward information technology have changed. Indeed, it has been my thesis throughout this book that humans have a unique ability to modify their beliefs when confronted with new facts, and the pursuit of AI is an instructive example of that phenomenon in action. As often occurs in the process of resolving problematics, Turing's challenge inspired IT developers to invent one generation of facts after another. Whereas in the mid-20th century, computers were vast, mysterious and inaccessible, by the turn of the 21st century computers had become personal, portable and an integral feature of everyday reality. Thus, in the face of those changes, it stands to reason that perceptions of computers and the role that they play in shaping the present and future would also change. In 1982, Time Magazine captured the burgeoning sense of wonder inspired by the PC revolution by naming the personal computer its "Machine of the Year," in place of the customary Man of the Year. It only added to the sense of historical moment that this was the first time an object, instead of a famous man or woman, was accorded this signal honor. In the years since, computer technology has advanced by leaps and bounds, but, having grown used to groundbreaking technological achievements, the public has come to view IT innovation with a certain degree of complacency. No matter how wondrous the latest computer advancements might be, we expect today's technology to be outdated in a twinkling. Thus, there are grounds to support Kurzweil's contention that, as technologies have become more sophisticated, the public has also become increasingly difficult to impress. Kurzweil's point is that AI is not as far-fetched a fantasy as many people seem to think. Indeed, not only does Kurzweil believe that AI will inexorably evolve out of human intellectual capabilities, but he believes that, were we to properly acknowledge the exceptional IT developments that have already taken place, the public would rightfully conclude that AI already exists.
When the artificial intelligence field was first named at a now famous conference held in 1956 at Dartmouth College, programs that could play chess or checkers or manipulate equations, even at crude levels of performance, were very much in the mainstream of AI . . . we no longer consider such game-playing programs to be prime examples of AI, although perhaps we should (Kurzweil, 1992, p. 15).
Thus, Kurzweil believes that it is only due to an escalating threshold of expectations that the public has failed to acknowledge that AI already exists. I agree that public perceptions about information technology have changed dramatically. In the 1950s, the idea that people would one day have computers in their homes and offices (much less in their pockets!) was pure science fiction. That said, I do not agree with Kurzweil's contention that the threshold for defining AI has been entirely fluid. Since 1950, the Turing benchmark to identify an artificially-intelligent machine has not changed one bit. Earlier, I pointed out that the QWERTY computer on which I am typing had no more ability to engage in a fluid, human-like conversation than the PC clone that I purchased two decades ago. My point was that, keeping the Turing-articulate threshold in mind, my iPad tablet is no more of an enlightened chatterbox than my old desktop PC. Though computer technology has certainly changed, the distinction between AI and non-AI technologies has remained static. Though we might wish it were otherwise, computer developers have a long way to go before they'll be able to create anything close to a Turing-articulate machine.
The Science of Unintended Consequences
In science fiction, AI is often presented as a fait accompli. In 2001: A Space Odyssey and in old Star Trek episodes, Turing-articulate computers just exist. Viewers need not concern themselves with how such extraordinary machines came to be. They simply exist. After all, it being the future and computer technology bounding ahead the way it does, it is difficult to imagine a future that is absent of some version of Turing-articulate computers. Thus, it can be tempting to believe, as Kurzweil contends, that AI is inexorable. Among AI enthusiasts, Kurzweil stands out as one of the most optimistic of an enterprising group of problem-solvers. For Kurzweil, AI is not a fantasy, it is a reality. Part of Kurzweil's confidence derives from the fact that he has already created a variety of proto-AI technologies (Kurzweil, 1999, pp. 84-85). Yet, more than that, Kurzweil is convinced that AI is logically inexorable: he argues that the very existence of human intelligence created the essential preconditions for the eventual usurpation of human intelligence by a superior form of machine intelligence (Kurzweil, 2005). For a variety of reasons, I am less convinced that a HAL-like version of AI will inexorably emerge. As I explained earlier, it is never possible to see the future clearly through the distorted lens of the present; the future will only be intelligible through the paradigms that humans invent to understand the as-yet-unknown future that they will help to create. Nevertheless, I suspect, in agreement with Kurzweil, that information technologies will advance by intensifying the cyborg-like (Clark, 2003) fusion between humans and IT. In other words, people have already begun plugging computers into their ears. Advancing technologies will likely involve adorning ourselves with smaller, more powerful IT devices, or perhaps even installing them in our bodies (Hsu, 2010). And, when we arrive at the point of installing computers in our bodies, in what strict sense will human cognition be entirely distinguishable from machine-based intelligence? Intelligence is an experience that is rooted in human biology and sociality and that also happens to be enhanced through information technologies. That has certainly been true ever since humans invented the printing press, and the same will continue to be true as information technologies become even more sophisticated. Still, for the sake of argument, let's say that in the years ahead, researchers finally manage to transcend the Turing threshold. At that point scientists will be able to celebrate a landmark achievement in technological development: once again, knowledge-seekers will have resolved a seemingly impossible problematic. However, in doing so, what else will they have accomplished?
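For readers who have not encountered it in formal terms, the Turing threshold invoked above boils down to a simple behavioral protocol. The sketch below is only an illustrative rendering of that protocol, not a standard implementation; the function and parameter names (turing_trial, judge, and so on) are my own hypothetical choices.

```python
import random
from typing import Callable

Responder = Callable[[str], str]  # maps an interrogator's question to a reply

def turing_trial(judge: Callable[[list[str], list[str]], int],
                 human: Responder,
                 machine: Responder,
                 questions: list[str]) -> bool:
    """Run one trial of the imitation game.

    The two respondents are shuffled behind anonymous channels. The judge
    sees only their replies and guesses which channel (0 or 1) conceals the
    machine. Returns True if the guess is wrong, i.e., the machine escaped
    detection on this trial.
    """
    channels = [human, machine]
    random.shuffle(channels)
    replies = [[respond(q) for q in questions] for respond in channels]
    guess = judge(replies[0], replies[1])
    return channels[guess] is not machine

if __name__ == "__main__":
    human = lambda q: "Honestly, I'd have to sleep on that one."
    machine = lambda q: "QUERY NOT RECOGNIZED."  # a decidedly non-Turing-articulate machine
    naive_judge = lambda a, b: 1 if "QUERY" in " ".join(b) else 0
    print(turing_trial(naive_judge, human, machine, ["What does rain smell like?"]))
```

Passing a single trial against a careless judge would mean little, of course; the point of the sketch is only that the benchmark itself is fixed and behavioral, which is why it has not drifted since 1950 even as the hardware has.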
The Solution to End All Solutions, or Not?
In the United States, the 1970s were a decade of pessimism. Since the conclusion of WWII, the US had grown used to playing the role of superpower. An important reason for US ascendancy was the predominant position that the US came to occupy in the global industrial economy. During WWII, the US benefited from the fact that, as opposed to other combatant nations, the war strengthened rather than demolished America's infrastructure. While bombs dropped in Europe and Asia, American factories throttled up from a pre-war Depression into wartime overdrive. Following WWII, the US economy was stronger than ever. Though plagued by cold war anxieties, the US economy boomed all the way through the 1950s and 1960s. During those decades, the US produced more and better industrial goods than any other nation. Times were so good that it seemed like the US would remain the world's industrial leader until far into the foreseeable future. However, as the US entered the 1970s, things suddenly changed. Partly due to short-sighted planned obsolescence strategies, the USA's industrial dominance went into decline during the 1970s. International competitors in the automobile industry, the heart and soul of industrial
America, began gaining ground. The unkindest cut came when Asian and European competitors began horning in on Americas formerly inviolable domestic car market. As cheaper and higher quality industrial products began flooding into the US, Americas industrial infrastructure started to crumble (Bluestone and Harrison, 1982). Industries that had once thrived in the US began moving overseas. The once prosperous industrial belt rapidly disintegrated into a sprawling rust belt. Mighty as industrial America had once been, it appeared as though Americas days as a global leader had already gone by. Certainly, during the 1970s-80s there was a lot of doom and gloom in the US. Yet, instead of diminishing as a global power, the US stumbled through the turmoil of the 1970s-80s and kept on plugging. In 1991, President George H. W. Bush declared an end to the cold war. Whereas the Soviet military-industrial complex had collapsed under the pressure of trying to keep pace with its cold war nemesis (Rhodes, 2007), the US joyfully declared victory in the cold war and, in the same breath, proclaimed itself the sole superpower in a new world order (Bose and Perotti, 2002). However, based upon the logic of industrial political-economics this made no sense. If the US had declined as an industrial power, then its power in the global community should also be on the wane. However, by the 1990s, it was clear that something else was afoot. Of course, what had changed was the fundamental nature of US culture and economics. The ebbing of US industrial production had, from the perspective of an industrial paradigm, seemed to indicate that Americas power should also diminish. What such a perspective failed to account for was that, while Americas industrial infrastructure was eroding, the US was laying the foundation for an entirely new socio-economic framework: the information society (Webster, 1995). This transformation was so huge and complicated that, for those in the midst of the transition, it was difficult to assess the merits of forswearing industrialism for a newly-emergent post-industrial society (Bell, 1976). Yet, if anything, developing into an information society amplified Americas wealth, power and prestige. Bill Clinton benefited throughout his presidency from the longest uninterrupted economic expansion in US
history. In part, the US economy grew throughout the 1990s as a result of the post-cold war triumph of market economics. In addition, information technology came into its own as a defining feature of American culture during the 1990s (Okin, 2005). IT not only provided an entirely new type of fuel for the American economy, but the very nature of information technology seemed to create new opportunities for the US to redefine its future. The cold war was old news. The problems associated with that struggle, which had seemed utterly irresolvable for the previous half century, had by the mid-1990s been reduced to little more than an afterthought. As always, old problems tend to look very different from the perspective of a new era. Certainly, serious concerns persist about how to manage the remaining cold war arsenals (Ritchie, 2009), but, for the most part, the US and the rest of the world have moved on: there are new mountains to climb, new problematics to resolve. Information has always been the key to solving problems. Knowing how to light fires has warmed a lot of fingers over the ages. Human super-adaptability is contingent upon the conscious application of information to the problem-solving process. In the effort to develop super-adaptive solutions, information technologies have proven to be an exceptionally valuable new tool. Whereas cognition permits humans to rapidly invent super-adaptive solutions to environmental problems (i.e., humans can think more quickly than they can genetically adapt), IT substantially amplifies the quantity and quality of information that is available for problem-solving. Thus, with portable, wireless information technologies, it is possible to fill information gaps (e.g., finding a physician, translating foreign tongues, developing smarter information technologies, etc.) in a twinkling. Further, the prospect of developing AI could actually make the entire human problem-solving process antiquated. Humans devote a great deal of time, energy and anxiety to solving problems. Arguably, should researchers ever be able to develop AI, it will no longer be necessary for humans to lose sleep over unresolved problems. The virtue of mechanization has always been that machines are stronger and more durable than their creators. In minutes, a steam shovel can do more work than a human can accomplish in a week. Presumably, the
same will also be true for mechanized intelligence. Whereas, human arms and minds rapidly weary of the tasks to which they are assigned, machines do not. Thus, if or when researchers manage to develop AI, it will be possible to assign computers the daunting task of solving all of the wearisome problems (e.g., pandemic, climate change, food shortages, nuclear weapons management, energy crises, etc.) that humans have lacked the time, energy, or initiative to resolve for themselves. Indeed, the fact that machines could one day become more intellectually-gifted problem-solvers than humans has inspired Raymond Kurzweil to suggest that, if or when humans invent AI, then in that singular moment, the trajectory of life, humanity and intelligence will change forever (Kurzweil, 2005). Kurzweil argues that, because of the mechanized work ethic described above, the moment that machines achieve a level of human-like sentience, those machines will almost certainly employ their newfound abilities to generate even higher orders of mechanized intelligence. As a result, although smart machines will be capable of solving a wide variety of extant problems, those same machines are likely to begin solving new problems of an entirely different magnitude. For starters, if humans ever invent machines that are smarter than their creators, then humans will in that same moment become only the second smartest beings on the planet. Given how poorly humans treat creatures of lesser intellect (e.g., we often serve them for supper), creating AI could end up being one of the dumbest ideas weve ever had. This, by the way, is a recurrent theme in speculations about AI, i.e., if humans create machines that think for themselves, then whats to stop those machines from thinking for themselves? Since AI is a fantasy, it is somewhat irrational to fret unduly about the dangers that non-existent machines might one day pose. Yet, given the success that humans have enjoyed in resolving even the most formidable problematics, such an irrational anxiety may one day become a very real concern. Although there is no way to determine how grave a threat intelligent machines might one day pose to humans, such a prospect has done little to dissuade AI-related technological advancements. I suppose its just one of those
eventualities, like Hurricane Katrina, that we won't (and arguably can't) deal with until the sky starts falling. Although Kurzweil is convinced that intelligent machines will one day outperform their human creators, he is not alarmed by that prospect. Kurzweil is convinced that the thought waves upon which human experience is based can be fully replicated in an artificial environment. Thus, just as one might make a duplicate copy of an MP3, Kurzweil believes that it will one day be possible to scan human brains and transfer their entire contents into the artificial cyberspace of smart machines (Kurzweil, 1999, p. 124). Rather than viewing such a transfer as somehow compromising humanity, Kurzweil believes that reanimating human intelligence in a machine environment might actually enhance the human experience. That is, smart computers will have the advantage of being intellectually more expansive, adaptable, collaborative and enduring. Thus, for Kurzweil, the prospect of reanimating human intelligence in a machine environment will change practically every aspect of what it means to be human, but the transformation, no matter how dramatic, will arguably constitute an improvement on the human condition. I, for one, am not so certain. Human-machine partnerships are never ideal; rather, they are always problematic. I believe the same will be as true for increasingly intelligent machines as it has been for every other type of technology that humans have invented.
Ghosts in the Machine
Roboticized assembly lines changed industrial production profoundly. To a certain extent, roboticized factories represent the fulfillment of Frederick Taylor's (1911) vision of optimized industrial efficiency. For Taylor, efficiency involved breaking down complex tasks into their simplest constituent parts and then training factory laborers to work like automatons: to do simple jobs as efficiently as the machines with which workers were partnered. Industrial efficiency was great for the bottom line. More efficient factories helped workers generate a greater quantity of commodities. As a result, factory owners were able to secure more output for every dollar that they spent on labor.
Of course, not everyone agreed that Taylor's efficiency techniques were altogether beneficial. From a human perspective, Tayloristic efficiency is alienating. Not wishing to be treated like machines, factory laborers engaged in a wide variety of tactics, such as slow-downs, sit-downs and strikes, to emphasize that humans preferred to be treated better than machines (Venneti, 2003). With the help of the union movement, laborers and factory owners struck a working compromise: laborers agreed to work like machines, but only if their employers treated them like human beings. This working compromise endured for much of the twentieth century. In the post-war era, US corporations made staggering profits, while unionized laborers helped swell the ranks of the middle class. Of course, this relationship changed in the 1970s as the seismic shift toward the post-industrial society got under way. Information technologies enabled conglomerates to coordinate international production networks and also made it possible to construct roboticized factories (Nof, 1999). Of course, one could argue that, if industrial production alienated laborers, then roboticized factories succeeded in liberating laborers from such drudgery. No longer did humans need to play the role of industrial automatons; automatons could play that role even better. The downside of this arrangement was that, in liberating humans from factory labor, the standard of living that industrial workers had struggled for the better part of a century to establish was severely undermined (Aamidor and Evanoff, 2010). Ever since the Luddite Movement (Bailey, 1998), it has been evident that, while generating gains for some, technological advancements often cause distress for others. Yet, time and again, the anguished protests of displaced laborers have fallen on deaf ears. Generally speaking, progress involves a process of creative destruction (Schumpeter, 1975): the past must be demolished in order to blaze a path toward a brighter future, regardless of who gets hurt along the way. However, therein lies the rub. Must a better future necessarily be defined as a more technologically intensive future? Even though it might be profitable to deploy new information technologies, I am not convinced that, in every case, it is a good idea to do so. Take, for example, voice recognition technologies. Increasingly, voice recognition technologies have been deployed to replace human customer service representatives. Though capable of answering telephones and coordinating elementary information exchanges, I think it is safe to say that, at present, voice recognition technologies fall far short of serving as an adequate replacement for well-trained humans. Yet, regardless of the aggravation that half-baked voice recognition technologies may inflict upon millions of customers, these barely good enough technologies have become pervasive largely so that companies can squeeze a few extra bucks out of downsized customer service payrolls. The point that I would like to emphasize is that, instead of crafting information technologies that fulfill their true promise, IT developers have on more than one occasion created technologies that were just good enough (e.g., Microsoft Vista) and, in doing so, they have generated technological dystopias instead of utopias. Further, once having established sub-optimal operating standards, it can be exceedingly difficult to raise the bar.
Sub-Optimal Intelligence Standards
It is fair to say that craniometrists and psychometricians established sub-optimal standards for measuring intelligence. However, once having established those standards, it has been extremely difficult to alter the way that we define and measure intelligence. Alfred Binet is credited with being the originator of the intelligence test. Binet designed his test as a means to identify students who, being unable to succeed in typical classroom environments, might benefit from special educational opportunities. Binet went to some lengths to emphasize that his tests were not designed to evaluate the intellectual potential of cognitively well-functioning students. Utterly disregarding Binet's good advice, H. H. Goddard hawked Binet's tests in the US as a means to establish the innate intelligence of all test-takers (Gould, 1996, p. 189). For Goddard, Binet's test consistently generated results that distinguished competent from incompetent test-takers and, as such, it served as a good enough measure of intelligence. To this day, standardized tests offer, at best, a woeful mismeasure of intelligence. Yet, in spite of their intrinsic deficiencies, the perception that standardized tests generate good enough measures of intellectual achievement remains entrenched, and this misperception has undermined the quality of education everywhere it has taken root (Meier et al., 2004). Getting back to the discussion of AI, Kurzweil has complained that many people fail to acknowledge that various types of pre-intelligent information technology in fact represent working versions of artificial intelligence (1992, p. 14). Frankly, I believe that the public's unwillingness to characterize extant technologies as manifestations of artificial intelligence is a good thing. As with the concept of intelligence, in any circumstance where we lower the bar of our expectations, the reality that we create tends to rise to the level of our expectations. Thus, if we begin referring to existing not-so-smart technologies as artificial intelligence, then progress towards an actual form of AI (i.e., technologies that are capable of passing the Turing test) will be derailed. In too many cases, half-baked smart technologies, such as the current generation of voice recognition software, have created more problems than they have solved: real human intelligence remains infinitely preferable to dumbed-down versions of AI.
The Aesthetics of Intelligence
Both Kurzweil (1992, p. 15) and Minsky (2006, p. 97) argue that the challenge of creating a full-fledged version of AI can be resolved by breaking complex issues into simpler problems: ". . . there is the faith in the AI community that most definable problems . . . can be solved, often by successively breaking them down into hierarchies of simpler problems" (Kurzweil, 1992, p. 15). Certainly, the process of resolving problematics can often be advanced by breaking down large-scale problems into sequences of smaller, more manageable problems. For example, through the various phases of the Mercury, Gemini, and Apollo space programs, NASA executed a stepwise approach to sending astronauts to the moon. Each new achievement in space established a foundation upon which to aim even higher. Much the same has been true for the AI problematic. In 1950, Turing admitted that information technology would need
to mature for half a century before computers would start to show glimmers of human-like intelligence. Following sixty years of innovation, computers have indeed come a long way: they are smaller, more powerful and more personal than most people could have imagined in 1950. Just as Turing predicted, computer speed, memory and software have advanced by leaps and bounds, but, as yet, none have come anywhere close to transcending the Turing-articulate threshold. To this point, information technologies have advanced by breaking complex problems into sequences of smaller, more manageable problems, however, I do not think it will be possible to resolve the Turing problematic by thinking smaller. In other words, as I have emphasized throughout this text, the most astounding new ideas generally do not emerge from solving intraparadigmatic sequences of recognizable problems: building smaller and denser integrated circuits. Instead, new truths tend to emerge when clever people redefine reality. The precise source of creative inspiration, or agency, that makes it possible for individuals to redefine reality is unknown. As Minsky argues, brains dont make geniuses, its what geniuses do with their brains that sets them apart (Minsky, 2006, p. 275). However, precisely how geniuses manage to achieve more gifted levels of brain function remains an abiding mystery. Nevertheless, such inspired, inventive thinking represents the essence of intelligence itself: intelligence is a mysterious, but wonderful quality of human brain function that equips humans with the ability to think and do things that no one has accomplished before. Thus, I believe, to create artificial intelligence it will be necessary to invent entirely new ways to think about thinking. Intelligence is not so much a mechanical process as it is an aesthetic experience. Our intelligence derives from and informs our sense of identity: I think, therefore I am. Furthermore, Minsky and Kurzweil both assert that, as we think, humans have no experience of the mechanical, neurological processes that underlie cognition. In other words, humans rely upon their gray matter and the neurological activities that take place therein to engage in intelligent thinking. However, the act of thinking itself is not experienced as an infinity of neurological firings. Instead, humans, unless they are
plagued by psychological disorders, typically experience cognition and intelligence as a seamless, singular experience. Intelligence is a cohesive compilation of an infinity of sensory inputs that combine to create a master identity that we recognize as our selves, or our unique, sentient oneness. That said, we need to know what intelligence is, and respect it in its broadest scope and potential, before we can hope to construct an artificial version that approximates human intelligence in a meaningful way. Thus, AI will need to incorporate the potential for reality redefining creative thought, which, at present, happens to be an extraordinary quality of cognition that humans are unable to fully grasp. Of course, the solution to that particular problem is to use human intelligence as an instrument to resolve the anomaly of human intelligence: we will have to redefine our conceptions of intelligence to develop a more adequate understanding of this mysterious cognitive phenomena. In other words, we can shed light upon what we do not presently understand about human intelligence by utilizing the enlightening qualities of human intelligence to modify the boundaries of the known universe in order to better understand the fundamental source of human intellectual creativity. My feeling is that, if we are determined to create artificial intelligence, then we should do precisely that and nothing less. As we learned with ELIZA, it is certainly possible to create information technologies that masquerade as AI, but if we treat such chimeras as AI, then what have we really accomplished? AI will not exist until knowledge-seekers manage to resolve the Turing problematic. Technologies that fall short of the Turing threshold, while interesting and valuable in many ways, simply do not merit the honor of being called AI. Intelligence is the most valuable resource that humans possess and it is a disservice to cheapen the concept in any way. Generations of craniometrists and psychometricians have devalued intelligence by inventing invalid measures of a phenomenon that defies simplistic measurement. Hopefully, AI researchers will not make the same mistake. If researchers are ever going to create a version of AI that is more than a mockery of human intelligence, then they will have to begin by grasping not merely the mechanics of intelligence, but its aesthetics.
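ELIZA's trick is worth seeing in miniature, because it shows how little machinery is needed to fake conversational surface. The fragment below is a hypothetical, stripped-down sketch in the spirit of Weizenbaum's program, not his original script: the rules and reflections are invented for illustration, and the program matches a few keyword patterns and reflects the speaker's own words back as canned questions, understanding nothing.

```python
# A minimal, illustrative ELIZA-style responder (the rules and reflections here
# are invented for this sketch). It pattern-matches and echoes; it does not
# understand, which is why such chimeras fall short of the Turing threshold.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the echo sounds responsive.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # non-committal default when nothing matches

if __name__ == "__main__":
    print(respond("I feel that my computer ignores me"))
    # prints: Why do you feel that your computer ignores you?
```

Dress the same trick up with a larger rule base and a dash of randomness and people will happily confide in it, which is precisely the kind of masquerade that should not be rewarded with the label AI.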
Intelligence is a sublime experience that is more than the sum of its parts. No machine that fails to grasp that essential fact will ever be able to fool a human interlocutor, nor should anyone presume to describe such a deficient mechanism as intelligent.
Haste Makes Waste
Kurzweil argues that intelligence is closely correlated with the speed of thought. He justifies this notion by asserting that psychometricians employ time as an evaluation criterion in their intelligence testing procedures (Kurzweil, 1992, p. 21; 1999, p. 44). In my view, nothing could be further from the truth. The more hurried our thinking, the less intelligent our intellectual output. For example, I have been teaching at universities for about twenty years, and on many occasions students have turned in papers which, by the students' own admission, have been a product of last-minute preparation. In spite of having cranked out a term paper in only a few short hours, students have often turned in their last-minute projects with the confident assurance that they (the students) work well under pressure. However, in almost every case, the overall quality of last-minute output suffers dramatically in comparison to the scholarly efforts of students who have composed their papers more gradually. In emergency situations, such as when defusing a ticking time bomb, hurried thinking may be preferable to slower-paced thinking. However, in practically every other case, the more deliberate the thinking, the better the intellectual product. Indeed, many of the greatest ideas have literally taken years to formulate. Although Einstein produced a spate of groundbreaking papers during his annus mirabilis, such a sudden outburst of intellectual productivity was predicated upon long years of cogitation. Einstein's genius was not the product of rapid thought, but of a fierce determination to ponder the deepest, most intractable mysteries of the universe until, bit by bit, he unraveled those enigmas. Thus, the best ideas are often those that take the longest time to formulate. Just as human intelligence is not a product of computational speed, I think the same will be true of AI. Given the pace of IT development and the fierce
determination with which the AI problematic has been attacked, I feel certain that AI developers will eventually create a type of machinery that resolves the Turing problematic. However, I do not believe that AI will spontaneously emerge from creating faster computers. No matter how many times microchip processing power doubles, intelligence will not emerge as a product of computing speed alone. It will only be possible to create artificial intelligence when humans finally figure out what it really means to be intelligent. To accomplish that landmark achievement, knowledge-seekers will have to think longer and harder about the true source of intelligence than they ever have before. References Aamidor, Abe and Ted Evanoff. At the Crossroads: Middle America and the Battle to Save the Car Industry. Toronto, Ontario, Canada: ECW Press, 2010. Bailey, Brian J. The Luddite Rebellion. New York: New York University Press, 1998. Bell, Daniel. The Coming of Post-Industrial Society: A Venture in Social Forecasting. New York: Basic Books, 1976. Bluestone, Barry and Bennett Harrison. The Deindustrialization of America: Plant Closings, Community Abandonment, and the Dismantling of Basic Industry. New York: Basic Books, 1982. Blumer, Herbert. Symbolic Interactionism: Perspective and Method. New York: Prentice Hall, 1969. Bose, Meena and Rosanna Perotti, eds. From Cold War to New World Order: The Foreign Policy of George H. W. Bush. Westport, CT: Greenwood Press, 2002. Clark, Andy. Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford, England: Oxford University Press, 2003. Cooley, Charles Horton. Human Nature and the Social Order. New York: Charles Scribners Sons, 1902. Davis, Kingsley. Extreme Social Isolation of a Child. The American Journal of Sociology, Vol. 45, No. 4 (January, 1940): 554-565.
Descartes, René. The Method, Meditations, and Philosophy of Descartes. New York: Tudor, 1901. Dretzin, Rachel (Director and Producer). Digital Nation: Life on the Virtual Frontier. Frontline. Boston, MA: Public Broadcasting Service, WGBH, 2010. Dreyfus, Hubert. What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press, 1992. Fraser, Steve, ed. The Bell Curve Wars: Race, Intelligence, and the Future of America. New York: Basic Books, 1995. Freud, Sigmund. Civilization and Its Discontents. New York: W. W. Norton, 1961. Gould, Stephen Jay. The Mismeasure of Man. New York: W. W. Norton and Company, 1996. Hoffman, Paul. King's Gambit: A Son, A Father, and the World's Most Dangerous Game. New York: Hyperion, 2007. Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books, 1979. Hsu, Tiffany. NYU Art Professor to Take Photos with Camera Implanted in the Back of His Head. LA Times, December 6, 2010. Kurzweil, Ray. The Age of Intelligent Machines. Cambridge, MA: MIT Press, 1992. Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Penguin, 1999. Kurzweil, Ray. The Singularity is Near: When Humans Transcend Biology. New York: Viking, 2005. Larsen, Kristine M. Stephen Hawking: A Biography. Westport, CT: Greenwood Press, 2005. Mead, George Herbert. Mind, Self, and Society (Edited by C. W. Morris). Chicago: University of Chicago Press, 1934. Meier, Deborah, Alfie Kohn, Linda Darling-Hammond, Theodore R. Sizer, and George Wood. Many Children Left Behind: How the No Child Left Behind Act is Damaging Our Children and Our Schools. Boston, MA: Beacon Press, 2004. Minsky, Marvin. The Society of Mind. New York: Simon and Schuster, 1985.
Minsky, Marvin. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. New York: Simon and Schuster, 2006. Newborn, Monty. Deep Blue: An Artificial Intelligence Milestone. New York: Springer Verlag, 2003. Nof, Shimon Y. Handbook of Industrial Robotics. Volume 1. New York: Wiley and Sons, 1999. Okin, J. R. The Technology Revolution: The Not-For-Dummies Guide to the Impact, Perils, and Promise of the Internet. Winter Harbor, ME: Ironbound Press, 2005. Piaget, Jean. The Psychology of Intelligence. London: Routledge and Kegan Paul, 1951. Rhodes, Richard. Arsenals of Folly: The Making of the Nuclear Arms Race. New York: Alfred A. Knopf, 2007. Ritchie, Nick. US Nuclear Weapons Policy After the Cold War: Russians, Rogues, and Domestic Division. New York: Routledge, 2009. Schumpeter, Joseph. Capitalism, Socialism and Democracy. New York: Harper, 1975. Swade, Doron. The Difference Engine: Charles Babbage and the Quest to Build the First Computer. New York: Viking Press, 2001. Taylor, Frederick Winslow. The Principles of Scientific Management. New York: Harper and Brothers Publishers, 1911. Thompson, Herb. Cybersystemic Learning. Radical Pedagogy 3, No. 2 (2001).
radical-pedagogy.icaap.org/content/issue3_2/thompson.html
Turing, A. M. Computing Machinery and Intelligence. Mind 59 (1950): 433-460. Venneti, Nancy R. Labor, Job Growth, and the Workplace of the Future. Nova Science Publishers, 2003. Webster, Frank. Theories of the Information Society. New York: Routledge, 1995. Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. New York: W. H. Freeman and Company, 1976. Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:CORT.jpg
THIRTY-THREE
Thanksgiving at Walmart
Strikers Plan a Nationwide Celebration for Black Friday
Ah, Black Friday. It's typically the biggest shopping day of the year and, for that reason, the happiest date on the calendar for retailers. On Black Friday, retailers who have been languishing in the red can count on raking in a whole lot of black. Profits galore! As usual, Walmart, the world's largest and most profitable retailer, has big plans for Black Friday. In an effort to attract the lion's share of Black Friday shoppers Walmart is planning to roll out even more low, Low, LOW! prices than usual. The secret to Walmart's staggering success is volume. Walmart can afford to sell its merchandise at a steeper discount than any other store because Walmart deals in much greater volume than any other retailer. Droves of shoppers pack Walmart stores in every corner of the globe because shoppers know that they are going to find more and bigger bargains at Walmart than at any other store. It's like a magical feedback loop: Walmart finds a way to cut prices and, rather than driving itself to bankruptcy, Walmart's profits go through the roof. Lower prices guarantee ever-increasing numbers of bargain-hungry shoppers. It's like the world's greatest win-win strategy. Shoppers get their bargains, Walmart gets its profits, and everybody ends up better off in the end. Or do they? There is a tragic chink in Walmart's populist armor. While there is no doubt that Walmart loves to deliver severely discounted products to its customers, in order to maximize discounts, Walmart is infamously stingy with its employees. Actually, stingy is too generous a term: when it comes to caring for its employees, Walmart is like Ebenezer Scrooge on steroids. Walmart's legendary miserliness includes such things as:
- Paying sub-poverty hourly wages
- Effectively denying job benefits by preventing its employees from accumulating full-time hours
- Pressuring its dismally-underpaid part-time employees to work off-the-clock
- Leeching off the federal welfare system by unofficially encouraging its employees to apply for poverty relief, even while they are gainfully employed by the world's most profitable corporation
Youve gotta be kidding me!?! That takes the cake: Overworked and underpaid Walmart employees have got to apply for welfare to supplement their pitiful Walmart earnings, while Walmart executives rake in record profits. Something is seriously wrong with that equation. Walmart execs might think that they have discovered the golden goose, but Ill bet that more than a few of them are in line for ghostly visits from Jacob Marley. One may well wonder why the worlds most profitable company should be so tightfisted with its employees. I mean, if Walmart is raking in bigger profits than any other company, cant it at least afford to pay its employees a living wage? Heres where we draw another lesson from Charles Dickens: Scrooge had plenty of money. He could easily
have given poor old Bob Cratchit a raise and still slept on a mattress stuffed with cash. It's just that the bad old, unenlightened Scrooge didn't want to share. The evil Scrooge only cares about one thing: he wants all the profits to end up in his pocket. Scrooge doesn't care about the misery that his profiteering causes. So long as Scrooge ends up with more money in his pockets, the needy can rot and die for all he cares. Are there no prisons? Are there no workhouses? Walmart has succeeded in maximizing profits for its Scroogey executives by waging a perennial battle against labor unions. The trick to exploiting labor is to keep workers poor and disorganized. Unions, however, have the opposite influence on workers. With the help of unions, workers become better organized and, thereby, improve their negotiating power. Thus, the last thing that Walmart executives want is unionized employees. If Walmart's workforce unionizes, the next thing you know employees will be asking for living wages, an end to working off-the-clock, health benefits, etc. How are Walmart executives supposed to afford their vacation homes in the Caymans if Walmart employees demand a fair day's wages for a fair day's work? Trade unions have been in retreat ever since the US began deindustrializing in the 1970s. Many have predicted a final end to trade unions as the post-industrial society has increasingly come to dominate the US economy. And with the demise of trade unions, profiteering service sector corporations have gleefully hammered one nail after another in the coffin of workers' rights. Yet, against all odds, out of the ashes of the industrial trade union movement, a new spectre is emerging. The same spirit that emboldened industrial trade unionists in the first half of the twentieth century has returned to infuse new life into the fledgling activism of post-industrial workers. While that populist spirit has succeeded in sparking new hope among a new era of downtrodden wage laborers, the spectre remains frightening and loathsome to the Scrooges who still lust after every penny of profit that they can squeeze out of their bedraggled employees. Will the Scrooges ever realize that they, in fact, have more to gain from the return of this spirit than anyone?
(Hint: There's more to life than amplifying the misery of the poor.)
God bless the courageous Walmart employees who will risk their jobs when they join the Black Friday strike against Walmart. God bless the Walmart executives who value profit over every other immaterial value: health, happiness, generosity, fraternity. God bless us, every one. Thanks to the New York General Assembly and Wikimedia Commons for the images:
1. www.nycga.net/events/black-friday-strike-against-walmart/
2. commons.wikimedia.org/wiki/File:A_Christmas_Carol_-_Scrooge _and_Bob_Cratchit.jpg
THIRTY-FOUR
Gods Loaded Dice
Einstein and the Death of Classical Reality
There is a crucial distinction between explanatory systems that are based upon fate vs. prediction. Both perspectives purport to shed light upon the course of future events, however, fate is based upon a faith in metaphysics whereas prediction is scientific. Determinism represents a branch of metaphysics primarily because determinists claim to know more about the universe than any rational scientist would presume to assert. To put it mildly, there is much more in the universe than humans have yet been able to comprehend. For example, cosmologists currently estimate that scientists are capable of observing approximately four percent of the known universe. Thus, the vast majority of the universe is comprised of dark substances which the scientific community freely acknowledges are currently beyond the ken of science. To propose, as determinists do, that the unknown universe obeys deterministic principles every bit as faithfully as the known universe represents nothing more than patently irrational, anti-scientific thinking. It is premature, to say the least, to claim definite knowledge of vast unknowns. Before anyone can reasonably claim to have certain knowledge of its attributes, we must first figure out whats actually going on in the unknown universe.
One day, scientists may indeed discover that the unknown universe is deterministic. But, then again, they may not. For the time being, all we can say is that the unknown universe defies conventional wisdom and, as such, it would be unscientific to superimpose dogmatic presumptions on a realm that, by definition, defies conventional understanding. This is the error that Albert Einstein made with respect to the field of quantum physics. Though Einstein made a number of foundational contributions to the emergent field of quantum mechanics, Einstein developed an acute antipathy for what he perceived as the exceedingly counterintuitive characteristics of quantum theory. Again and again, Einstein disparaged the logic of Niels Bohr's Copenhagen interpretation by insisting that God doesn't play dice with the universe. By that, Einstein meant that he refused to believe that the universe could have been designed to accommodate the irregularities of quantum phenomena that were fundamentally random, uncertain, and probabilistic. Einstein's expectations of the cosmos--as well as of the god who created it--were more exacting. In the macro universe, observable phenomena strictly complied with rigid, universal laws, and Einstein insisted that the same logic must also apply to the quantum universe. Unfortunately, quantum phenomena persistently defied Einstein's expectations. In the second half of his career, Einstein's influence faded among mainstream physicists who, regardless of their personal preferences, were more prepared to analyze the quantum universe on its terms rather than on theirs. Einstein remained a holdout. He struggled to the end of his days to create a unified field theory that would integrate a consistent and cohesive explanatory framework for all physical phenomena from the smallest subatomic particle to the far-flung perimeter of the expanding universe. Yet, try as he might to construct a unified field theory, Einstein never succeeded in shaping a coherent unified theory into which bizarre quantum phenomena would comfortably fit. In part, Einstein's failure is attributable to the fact that he was determined to construct an explanatory framework for quantum phenomena that complied with his expectations about the essential orderliness of the universe. God does not play dice...
Leading proponents of quantum theory--beginning with Max Planck, and including such luminaries as Niels Bohr, Werner Heisenberg, and (prior to the emergence of the Copenhagen interpretation) even Einstein himself--succeeded in developing groundbreaking quantum theories by casting aside their classical, macro physical expectations and developing quantum-specific explanations. That was an essential cognitive shift because, in practically every respect, quantum phenomena defy conventional expectations: rational, orderly, logical physical behavior breaks down at the quantum level. To understand quantum mechanics, one must be willing to concede that macro rationality simply does not apply at the level of infinitesimal physical reality. In the bizarre reality of the quantum, particles appear and disappear, transform as a result of observation, teleport, exhibit complementary qualities of wave-particle duality, etc. Consequently, Einstein's insistence that God does not play dice was equivalent to a declaration that he would only be willing to accept a quantum theory that complied with his expectations about appropriate physical behavior. As a result, from the very outset Einstein set himself up for failure. No matter how brainy, powerful, and celebrated he may have been, Einstein was not able to require persistently intransigent quantum phenomena to play by his rules. In fact, it almost seems like quantum phenomena exist to spite everything that the great man of physics stood for and believed. It's sort of like Einstein demanded that a bunch of the world's most rambunctious, ADHD kids should stand at attention to demonstrate how correct Dr. Einstein was about the cheeky little subatomic particles that his colleagues simply refused to discipline properly. Einstein never stood a chance. In the insubordinate realm of the quantum, the rowdy kids rule--and the truth lives and dies according to the heedless whims of unrepentant pranksters. In the quantum realm, if you want to play ball with the quarrelsome kids, then you have to play by their rules, which amount to nothing more than: Expect the unexpected. All other bets are off.
Thanks to the quantum, determinism is dead and science will never be the same again.
Thomas Kuhn argued that scientific revolutions take place when dominant paradigms are dislodged by emergent paradigms. Kuhn's perspective challenged the previously accepted view that the accumulation of scientific knowledge was a rational stepwise process, i.e., each landmark discovery being anticipated with logical precision and, once established, elevated consensually atop a vertical tower of knowledge. Instead, Kuhn contended that paradigm shifts are much messier undertakings that are marked by infighting, political subterfuge, and a host of other unscientific antics. In other words, though scientists are generally loath to admit it, the accumulation of scientific knowledge is a social enterprise and is, thus, replete with human shortcomings. Though Kuhn's revelations stirred a great deal of discomfort in the scientific community, nevertheless, his analysis exposed crucial insights about the knowledge accumulation process. Although many scientists insist that the scientific method is founded upon a process of induction--the disinterested amalgamation of isolated facts that gradually expose more general patterns of understanding--Kuhn asserts that "normal science" operates within deductive paradigms: Paradigms are broad, assumption-laden worldviews that supply a theoretical foundation into which scientists integrate facts and observations. For example, devotees of the geocentric paradigm eagerly pointed to the circular motion of heavenly bodies as compelling empirical support for their perspective. Capable as paradigms may be of illuminating a range of empirical phenomena, they are also plagued by shortcomings. As illustrated by the preceding example, paradigms perform the invaluable service of rendering "the known universe" intelligible
and, as a result, paradigms also provide a structure within which knowledge can be organized cohesively and truth-seekers can collaborate constructively. Nevertheless, a paradigm's Achilles heel lies in the truism that the parameters of the known universe are constantly in flux: curious humans incessantly generate novel observations about a constantly changing universe. Again, popular as geocentrism once may have been, an overload of anomalous heavenly phenomena (e.g., comets, retrograde motion, Jupiter's moons, etc.) inevitably doomed the paradigm. When paradigms are overwhelmed by a critical mass of anomalies they enter a phase that Kuhn described as a "crisis." Paradigm crisis is roughly the scientific equivalent of a skipper's signal to abandon ship. Having sprung more epistemological leaks than its adherents can hope to plug, a paradigm in crisis forces its supporters to make fateful decisions: either to jump ship or, having staked out a career upon the foundering vessel, to stay aboard until the bitter end. Paradigm crisis is a precursor to full scale scientific revolution. According to Kuhn, a scientific revolution comprises a transition through which scientists replace an outmoded paradigm with a new one. Generally speaking, the new paradigm has the advantage of being, so to speak, a more seaworthy vessel, i.e., it resolves many of the anomalies that sank its precursor. Therefore, for a period of time, the new paradigm can confidently go about the process of enlisting recruits and navigating rough scientific seas; that is, until the process inexorably repeats itself and the updated paradigm is gradually beset by its own set of leaks. Kuhn developed this non-linear view of scientific knowledge accumulation based upon his examination of the history of science. In particular, Kuhn noted that scientific paradigms often incorporate foundational assumptions that are antithetical to the leading assumptions of succeeding paradigms, e.g., one cannot maintain an honest intellectual commitment to creationism and evolutionary theory without suffering from multiple personality disorder. It requires the intervention of an historical revisionist to invent a smooth, linear transition from one scientific paradigm to the next. As such, some critics have asserted that Kuhn's thesis exposed science as a fundamentally relativistic endeavor. In other words, the fact that successive paradigms tend to be epistemologically contradictory suggests that there is no essential consistency (i.e., no inherent "truth") in scientific progress. That is, if scientific "truth" is linked to the assumptions upon which scientific paradigms are founded and, in turn, if scientific paradigms are disposable, then even in the most rigorous scientific
endeavors truth must be only a provisional, transitory standard. In a world of paradigm shifts, truth would appear to be a chimera. In keeping with this attitude, copious aspersions have been cast on scientific truth--most abundantly from postmodernists. Nevertheless, far from indicating an absence of truth, in this paper I will argue that (r)evolutionary innovations in the structure of scientific knowledge are not an indication of the truth's scarcity. Contrarily, I contend that the process of bringing about paradigm shifts represents the most definitive indication of the scientific commitment to Truth. Distinct as emergent scientific paradigms may appear in comparison to their predecessors, nevertheless, in every case there remain essential "evolutionary" linkages between historic, existing and succeeding paradigms. Indeed, the epistemological relationship between distinct scientific paradigms is "evolutionary" in a similar (metaphorical) sense to the biological speciation process. Just as biological evolution propagates species that appear to have little or no connection to their predecessors (e.g., marine mammals v. their ancient terrestrial forbears), so too do scientific paradigms spawn new epistemologies that appear to lack a clear "genetic" linkage (e.g., geocentrism v. the Big Bang). Though one may have to search to find it, a logical (and, in the case of the philosophy of science, a social) connection exists between evolutionarily-distinct constructs. Crucially, for the purposes of understanding the production of truth, it is essential to recognize the manner in which new paradigms, unique as they may be in many respects, generally "speciate" from within the context and tradition of established paradigms. In spite of the apparent epistemological discontinuity between paradigms, I assert that the production of scientific truth takes place through a process of "redefining reality." In other words, truth is not contained within any particular paradigm, but rather truth guides and enables the process of transitioning from outmoded to "new and improved" paradigms. Also, truth-making never has been and never will be a linear process. Instead, the production of truth is associated with a process whereby individual "agents," upon encountering an over-abundance of environmentally disruptive phenomena (i.e., epistemological anomalies), often develop wildly creative, but nonetheless "adaptive" solutions to resolve the epistemological anomalies they encounter. For example, Einstein's legendary modifications to Newton's mechanical universe. As is the case with evolving organisms, emergent paradigms may appear to be constructs of an entirely new order. Nevertheless, outlandish as they may seem, emergent paradigms maintain demonstrable linkages with their
ancestors (e.g., heliocentrism is "a very different animal," but still retains obvious affinities with geocentrism). The difference is that emergent paradigms have been modified through a process of redefining reality to transcend the shortcomings of established paradigms and, thereby, achieve a better "fit" with prevailing environmental conditions. In other words, paradigms evolve through an extensive reimagination process that is intended to reduce anomalies and, thereby, generate a more comprehensive grasp of the ever changing "known universe." Thanks to Wikimedia Commons for the image: commons.wikimedia.org/wiki/File:Three_famous_physicists.png
Reply to Jim: Bogus Logic and Analogy Dear Jim, Your analogy (quoted below in italics) is patently false: "The similarity you missed is that both teleportation and physical consciousness lack scientific plausibility." (Emphasis added) For the moment, let's set aside the fact that scientists have succeeded in teleporting quantum particles on numerous occasions. Your earlier comment, quite correctly, characterized teleportation via a purely imaginary exercise for full-grown humans as a fantasy. There is an essential difference between imaginative fantasies (e.g., Alice in Wonderland) and something that I refer to in my work as "problematic innovation." In the 1960s, Martin Cooper fantasized about having a wireless communicator just like his hero, Captain Kirk. Cooper problematized the fantasy and then, step by step, shifted the boundaries of empirical reality by inventing the necessary facts to transform his fantasy into a reality: cell phone technology. At present, the crucial distinction between human-size teleportation and, as you put it, physical consciousness is empirical demonstrability. Physical consciousness is an empirically demonstrable phenomenon whereas adult-sized teleportation is not. You are correct that the precise nature of
consciousness remains enigmatic: scientists are convinced that "conscious intelligence" emanates from some combination of brain cell activity and interaction (i.e., neural networks) and an intense, long-term socio-psychological learning (i.e., "humanity immersion") process. AI researchers like Ray Kurzweil are convinced that human-like AI is, so to speak, just around the next technological corner. However, I disagree. Your point is well-taken: I have argued that it is essential to develop a fuller grasp of the aesthetic dimensions of intelligence before scientists will ever come close to developing a machine-based intelligence that is reminiscent of human intelligence. At present, humans are smart enough to understand what "being intelligent" feels like (i.e., I think, therefore I am), however, scientists are not yet smart enough to fully understand the phenomenon of intelligence in all of its dimensions. For example, IQ tests are a pathetic excuse for a "measure" of such a complex, multi-faceted, and powerful, yet nuanced phenomenon. I am inclined to agree with Kurzweil and other AI researchers who have stated that the shortest path to AI will be through cyborg-like synergies between humans and rapidly advancing "built-in" technologies, such as implanting micro cell phone technologies inside the human brain. If you think about it, jamming phone buds into our ears is only one small step removed from installing subcutaneous, prosthetic technologies in the human body. When we arrive at the point of literally installing IT into our bodies, the stark distinction between real vs. artificial intelligence will rapidly blur to nothingness. Having said all that, I need to re-emphasize that physical consciousness is a demonstrable empirical reality. Just think of all the poor folks who suffer from central nervous system diseases. As the brain degenerates so does an individual's capacity for thought, feeling, managing bodily functions, etc. The fundamental nature of intelligence remains a mystery, but it's a mystery that, day by day, we are getting closer to solving. For starters, we know that much more of our physical consciousness resides in the brain case than it does in our foot: if a shark bites your foot off, you have a good chance of surviving, but if the shark bites your head off, you're a goner. As such, physical consciousness is a real, demonstrably empirical phenomenon: it is an extraordinary outcome of complex brain function. Whereas teleportation of the nature that you described remains the stuff of pure imaginative fantasy.
Believe it or not, there are a number of folks connected with DARPA's 100 Year Starship project who would like to make Star Trek-style teleportation a reality over the next century or so. For the moment, however, the problematics that these folks have developed are so flimsy and so far beyond the pale that it is difficult to characterize their objectives as anything more than wishful thinking. Of course, I might have to eat those words in a hundred years and, if so, I will gladly do so. To put it mildly, your analogy (quoted at the beginning of this comment) remains a case of comparing apples and oranges.
THIRTY-SEVEN
On July 4, 2012, the US Conference of Catholic Bishops concluded their Fortnight for Freedom, a pulpit political initiative intended to challenge certain aspects of President Obama's Affordable Care Act (ACA). The Bishops are cheesed off because, under the ACA, the Catholic Church will be required to provide healthcare coverage to its many US employees--and a fair number of those employees are sure to use that coverage to obtain contraceptives. Regardless of the global population explosion, and the fact that the vast majority of the Pope's US flock has been religiously using contraception for generations, contraception is something that the Catholic Church officially abhors. If Catholics are going to have sex, then, by God, it's going to be unprotected sex. Global population doubled twice during the 20th century (from roughly 1.5 billion in 1900 to 6 billion in 2000), and the Pope wants to ensure that it does likewise in the 21st century.

Via the Fortnight for Freedom, the US Conference of Catholic Bishops is attempting to argue that the ACA's mandates will inhibit its religious freedom. In this case, religious freedom is defined rather broadly as the right to block Catholic Church employees' access to contraceptives. On July 4th, the
Archbishop of Philadelphia, Charles Chaput, asserted that God grants freedom, not government. On that point, however, Bishop Chaput is dead wrong.

Why do Americans enjoy religious freedom? Not because God granted them that freedom, but because their government ensures it. I dare Bishop Chaput to cite the spot in the Bible where it says, "Let it be known that henceforth citizens of the USA will enjoy the right to religious freedom--and, by the way, employees of the Catholic Church shall be denied the opportunity to engage in contraceptively-protected sex on the Church's dime." Nope, there is no mention whatsoever of Americans and their preferred methods of contraception in the Bible. Trust me, I've checked. What freedoms Americans do enjoy, religious and otherwise, are spelled out in the US Constitution.

Although, of late, Catholic Bishops have been insisting that religious freedom is America's "first freedom," technically speaking that isn't true. Having been appended to the Constitution after ratification, the First Amendment cannot be said to confer the first rights on US citizens but, rather, some of the last: "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances."

Where the First Amendment refers to religion, it essentially asserts that there must be a separation of church and state. In other words, the USA defines itself as a secular state that protects religious freedom. The US government will safeguard religious freedom, but it will not tolerate interference by any religion that presumes to manipulate the day-to-day governance of the USA. Once we understand the proper relationship between religion and government in the USA, and the true intent of the First Amendment, it rapidly becomes clear that the Fortnight for Freedom is nothing but an example of excessive overreach on the part of the US Conference of Catholic Bishops. Catholicism and every other religion within the boundaries of the US owe their existence to legal (i.e., Constitutional), not religious, protections. But just as religions are protected by the laws of the land, so must they obey the limitations of those laws.
In sum, Bishop Chaput and the US Conference of Catholic Bishops have no business telling President Obama how to run the USA. Instead, Bishop Chaput and his colleagues should thank President Obama for his enduring support of the First Amendment protections that enable Catholicism and every other legitimate religion to operate in the US. When it comes to the business of running the government, Bishop Chaput and the US Conference of Catholic Bishops should mind their own business and be grateful for the priceless freedoms that (legally!) collaborating with the US government affords. Finally, if the US government requires that all employers in the US provide healthcare to their employees, it is not the place of the Catholic Church to dispute that directive. It is the duty of the Catholic Church, like all upstanding citizens of the USA, to obey it. As we move toward 2014 and the full implementation of President Obama's Affordable Care Act, Bishop Chaput would be well served to remember Jesus Christ's sage admonition: "Render unto Caesar that which is Caesar's."
THIRTY-EIGHT
Almost as an afterthought, Stephen Jay Gould (1987, p. 70) acknowledged the profound distinction between biological evolution and human cultural evolution:

Biologists believe that genetic change is primarily Darwinian--that is, it occurs via natural selection operating upon undirected variation. Human cultural evolution is Lamarckian--the useful discoveries of one generation are passed directly to offspring by writing, teaching and so forth. In human cultural evolution...transmission and anastomosis are rampant. Five minutes with a wheel, a snowshoe, a bobbin or a bow and arrow may allow an artisan of one culture to capture a major achievement of another. Fruitful analogies may be drawn between biological and cultural evolution, but they remain analogies. The processes are different, even though human culture has a biological base. Cultural evolution needs laws of its own. This statement is neither a counsel of despair nor a dashing of hopes for intellectual coherence. It is merely an acknowledgment of the world's hierarchical structure and, I hope, an intellectual challenge in its own right (Gould, 1987, p. 70).
Thus, Gould asserts that Darwinian evolution is biological and operates at the level of genetics. Since all life is biological in origin, Gould concedes that Homo sapiens emerged as a result of a long-term Darwinian evolutionary process. However, following the emergence of human cognition and cultural development, Gould argues that evolution among humans has become Lamarckian rather than Darwinian. By this, Gould means that human cognition has enabled an entirely new type of adaptation via a process of acquired characteristics. In other words, Homo sapiens has succeeded in fast-tracking the evolution-and-adaptation process by cognitively carrying out end-runs around Darwinian evolution. Although Gould blithely breezes over this profound evolutionary transformation, I believe it is evidence of an entirely new chapter in the history of evolution.

Importantly, other observers have also noted that humans have been liberated from the strict constraints of biological, or Darwinian, evolution. For example, in Wired for Culture, Pagel (2012) argues that humans have been able to subvert the biological evolutionary process thanks to their unique capacity for cognition and cultural adaptation. Strangely, Pagel asserts that cultural adaptation is necessarily deterministic. For his part, Gould draws a very different conclusion about the indeterministic nature of cognitively-mediated human events. In short, Gould believes that cognitive innovation leaves open the possibility that the influences of agency add a significant element of contingency to the course of human events; thus, if history were to be replayed, slight, individual-level modifications could lead to enormous alterations in the course of human history. In his Foreword to Gould's (2003) Triumph and Tragedy in Mudville, David Halberstam states:

Steve Gould believed in what might be called the contingency of history theory--that is, history is not a simple unbroken, almost predictable line of progress with certain almost guaranteed givens and thus assured outcomes. Rather, it is filled with pitfalls and ambushes and there are land mines everywhere; occasionally it is almost whimsical in the course it chooses. If you rewind certain sections of history and try to replay them, he believed, things might come out very differently: the Confederacy, say (these are my examples not his), might triumph at Gettysburg, Rommel might defeat Montgomery
in North Africa, and Mickey Owen might hold on to the third strike from Hugh Casey (Halberstam in Gould, 2003, p. 16).

To reiterate, the essential distinction between agents and non-agents is that the fate of non-agents is determined by their environments: cold kills heat-loving plants. Agents, on the other hand, are capable of modifying their environments to suit their own interests: transforming the parched Las Vegas desert into an oasis for carefree thrill-seekers.

References

Gould, Stephen Jay. An Urchin in the Storm: Essays About Books and Ideas. New York: W. W. Norton, 1987.
Gould, Stephen Jay. Triumph and Tragedy in Mudville: A Lifelong Passion for Baseball. New York: W. W. Norton, 2003.
Pagel, Mark. Wired for Culture: Origins of the Human Social Mind. New York: W. W. Norton, 2012.
THIRTY-NINE
If the messages embedded in folklore mean anything, then until very recently humans were terrified of the natural environment (Grimm et al., 1915). In many cases, the scariest parts of folk tales involve foolish individuals--often kids, in order to emphasize the cautionary nature of the tales--who fall prey to one of the many terrors that lurk in the wild. Almost everywhere that they are mentioned, wolves are characterized as merciless people-eaters who lie in wait for anyone foolish enough to wander from well-trodden paths. The message is clear: nature is something to be feared--even dreaded--and civilization (i.e., the well-trodden path) represents a lifeline to safety and security.

Although parents still read Grimms' fairy tales to their kids at bedtime, we no longer read the tales in quite the same spirit. Over the past few hundred years, humans have fundamentally redefined their relationship to nature. Where once the arbitrary whims of nature wielded extraordinary power over the fate of humanity, ever since the dawn of the industrial era super-adaptable agents have succeeded in asserting newfound dominance over nature. In the age of machines, rather than meekly accepting whatever beneficence nature arbitrarily yields up, super-adaptable agents have turned the tables. In the modern world, humans aggressively demand resources from nature. And where nature fails to meet those exacting demands, humans impose a harsh new discipline on their former master: damming rivers, clearing forests, transforming parched deserts into oases,
exterminating pests, manipulating plant and animal DNA, and so on. In sum, where humans were once the relatively helpless pawns of almighty nature, super-adaptable agents have transformed their former master into their servant. Nature now answers to the beck and call of its human overlords.

Human cultural evolution, especially through advances in the technological sphere, has made possible in a brief span of time an extraordinary expansion of human population and of the capacity of each person to affect adversely other people and the environment (Gell-Mann, 1994, p. 304).

Of course, some are likely to be offended by the suggestion that humans have transformed the natural environment into humanity's servant. However, I believe that characterization--in light of both its positive and negative connotations--is apt. Rather than being dictated to by nature's carrying capacity, humans have activated their agency in such a way as to make increasingly forceful demands upon nature. Humans--and this applies to Americans in particular--tend to view nature as a conquered rival that exists only for the purpose of attending to their whims: providing on demand the bounty that humans require to lead comfortable, secure lifestyles. For their part, humans tend to be about as attentive to the needs and interests of the natural environment as vengeful, conceited masters are to the welfare of their slaves. Is it any wonder that the environment is suffering in response to the ascendancy of super-adaptable apes?

It is difficult to blame Homo sapiens for reveling in its newly realized ascendancy. For so long, nature was an overbearing, stingy taskmaster: drought, pestilence, plague, and other natural disasters routinely inflicted unimaginable suffering on humans. Now that humanity has, as it were, removed nature's boot from its neck, there are bound to be repercussions. If nature must suffer in order for humans to luxuriate in a blissful era of shameful overindulgence, then so be it. That is the price that nature must pay for being conquered by one of its former subjects. Tough nuts, Mother Nature.

Of course, super-adaptable apes would be well-advised not to celebrate too long or too excessively. Nature has a way of getting even. The more slighted that Nature becomes, the more
wicked her eventual vengeance will be. About now, Thomas Malthus (Malthus and Gilbert, 1993) is having a hearty chuckle in his grave. So far, humans have succeeded in postponing the Malthusian nightmare, but will it be possible to avoid such a fate as the global population explodes toward eight billion people? Ten billion? Twelve? Just because humans have developed an unprecedented capacity to achieve super-adaptive ascendancy over the formerly deterministic limitations of nature does not mean that humans have a license to be jerks. Sure, it's good to be king. However, kings who turn a deaf ear to the pleas of their subjects often experience a premature demise.

The next, and very urgent, question that humans must answer is this: Is it possible for super-adaptable apes to employ their agency for purposes other than competition, domination, and self-indulgence? Having succeeded in asserting unprecedented mastery over the planet, can super-adaptable apes draw upon their intellectual agility in an entirely new way in order to evolve from ruthless, insensitive combatants into judicious stewards of their own and their planet's better interests?

The implication is that cultural change itself is the only hope for dealing with the consequences of a gigantic human population armed with powerful technologies. Both cooperation (in addition to healthy competition) and foresight are required to an unprecedented degree if human capabilities are to be managed wisely (Gell-Mann, 1994, pp. 304-305).

Karl Popper (1999) was correct in stating that all life is problem solving. Popper was also correct when he observed that the solution to any particular problem inevitably generates a whole new set of even more difficult problems. Thus far, Homo sapiens has demonstrated that it is the most versatile, adaptable intellectual problem-solver ever to evolve on planet Earth. However, our success has also generated crises of unparalleled scope and urgency. Will super-adaptable apes continue to be equal to the problems that their success has created? If Homo sapiens can call a halt to its prolonged victory lap and get down to the urgent business of solving the next set of species-threatening crises, then I like our chances. However, the outcome is yet to be determined, and the clock is ticking.
References

Gell-Mann, Murray. The Quark and the Jaguar: Adventures in the Simple and the Complex. New York: W. H. Freeman, 1994.
Grimm, Jacob, Wilhelm Grimm, and Anne Anderson. Grimms' Fairy Tales. London: William Collins Sons, 1915.
Popper, Karl. All Life Is Problem Solving. Translated by Patrick Camiller. New York: Routledge, 1999.
FORTY
One can hardly broach the subject of free will or human agency without acknowledging the long-standing and unresolved philosophical debate regarding agency vs. determinism (Campbell et al., 2004). To illustrate the extent of disagreement over this dualism, determinists such as Hawking and Mlodinow (2010) have argued that agency and free will are nothing but an illusion:

...the molecular basis of biology shows that biological processes are governed by the laws of physics and chemistry and therefore are as determined as the orbits of the planets. Recent experiments in neuroscience support the view that it is our physical brain, following the known laws of science, that determines our actions and not some agency that exists outside those laws...so it seems that we are no more than biological machines and that free will is just an illusion (Hawking and Mlodinow, 2010, emphasis added).

It is important to point out that physicists are not the only scientists who espouse an uncompromisingly deterministic perspective. Sam Harris, a neuroscientist, states:

Free will is an illusion. Our wills are simply not of our own making. Thoughts and intentions emerge from background causes of which we are unaware and over which we exert no conscious control. We do not have the freedom we think we have (Harris, 2012, p. 5, emphasis in original).
The hard version of determinism (Pereboom, 2009, p. 325)--i.e., a perspective which suggests that agency is either non-existent or illusory--is often associated with a viewpoint known as the clockwork universe (Dolnick, 2011). In 1773, Pierre-Simon Laplace laid a foundation for the clockwork-universe perspective by stating:

An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as of the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes (Quoted in Weinert, 2004, p. 197).

In 1758, Roger Boscovich, a contemporary of Laplace's, offered a similarly extreme view of the thoroughgoing determinism that he believed was at work in the clockwork universe:

Now, if the law of forces were known, and the position, velocity and direction of all the points at any given instant, it would be possible for a mind of this type to foresee all the necessary subsequent motions and states, and to predict all the phenomena that necessarily followed from them (Quoted in Barrow, 2007, p. 63).

Thus, the most extreme versions of determinism assert that--even though it would require some form of superhuman omniscience (a.k.a. God) to obtain knowledge of every law, particle, and interaction in the cosmos--nothing moves, interacts, appears, or disappears in the clockwork universe without having been minutely pre-determined by a chain of causality that was set in motion at the origin of the universe. This version of hard-core determinism leaves no room whatsoever for either agency or indeterminism. At the other end of the spectrum are those who believe in an indeterminate universe (Popper, 1988). It is also worth noting that some influential physicists, such as Murray Gell-Mann and Richard Feynman, endorse a view that disputes Stephen Hawking's faith in a hard deterministic universe.
In classical physics it would have been legitimate to specify exactly both the position and the momentum of a given particle at the same time, but in quantum mechanics that is forbidden, as is well known, by the uncertainty, or indeterminacy, principle. The position of a particle can be specified exactly, but its momentum will then be completely undetermined (Gell-Mann, 1994, p. 139, emphasis added).

Another most interesting change in the ideas and philosophy of science brought about by quantum physics is this: it is not possible to predict exactly what will happen in any circumstance...nature, as we understand it today, behaves in such a way that it is fundamentally impossible to make a precise prediction of exactly what will happen in a given experiment (Feynman, 1965, p. 35, emphasis in original).

Well acquainted as Hawking certainly is with quantum uncertainty, he still insists that determinism holds up even in the face of fundamental physical uncertainties:

...there is still determinism in quantum theory, but it is on a reduced scale...in quantum theory the ability to make exact predictions is just half what it was in the classical Laplace worldview. Nevertheless, within this restricted sense it is still possible to claim that there is determinism (Hawking, 2001, p. 108).

This is a curious assertion. Is it logically coherent to argue that, in a universe where our power to make exact predictions is, at best, half of what the classical worldview promised, hard determinism persists? Is Hawking's conviction an expression of dogmatic faith, or a dispassionate analysis of objective facts? Wouldn't it be more appropriate for Hawking to concede that there is as much indeterminism in the universe as there is determinism? Interestingly, Hawking and Mlodinow (2010) answer this very question in the negative:

According to quantum physics, no matter how much information we obtain or how powerful our computing abilities, the outcomes of physical processes cannot be
predicted with certainty because they are not determined with certainty. Instead, given the initial state of a system, nature determines its future state through a process that is fundamentally uncertain. In other words, nature does not dictate the outcome of any process or experiment, even in the simplest of situations (Hawking and Mlodinow, 2010, p. 72, emphasis in original).

Crucially, in spite of the Laplacian hard-determinist claim that it is theoretically possible to know everything about everything, Hawking and Mlodinow own up to the all-important revelation that, because of the bizarre and uncertain behavior of quantum particles, it will never be possible to know everything about anything: nature does not dictate the outcome of any process or experiment, even in the simplest of situations. This is a crucial point to re-emphasize: the universe cannot be deterministic if it is impossible to determine the precise characteristics of even a single quantum particle. In effect, Hawking and Mlodinow proclaim the death of Laplacian hard determinism.

However, Hawking and Mlodinow's self-contradictions do not end there. Only a few sentences later, they reverse themselves yet again:

Quantum physics might seem to undermine the idea that nature is governed by laws, but that is not the case. Instead it leads us to accept a new form of determinism: Given the state of a system at some time, the laws of nature determine the probabilities of various futures and pasts rather than determining the future and past with certainty (Hawking and Mlodinow, 2010, p. 72, emphasis in original).

In spite of what Hawking and Mlodinow suggest, Laplacian hard determinism is an essentially either-or proposition. Either the universe is 100% deterministic, or it isn't. Yet, according to Hawking and Mlodinow, determinism is definitely probably at work in the universe. And you can quote them on that! Hawking and Mlodinow's equivocations read more like a case of paradigm-crisis-inspired denial than a valid update on the concept of determinism. Further, I don't think Laplace would accept Hawking and Mlodinow's ambivalent re-characterization of indeterminism as determinism.
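A brief formal footnote, added here and not part of the quotations above: the claim that position and momentum cannot both be specified exactly is standardly expressed as Heisenberg's uncertainty relation, where Delta x and Delta p denote the spreads (standard deviations) in a particle's position and momentum and hbar is the reduced Planck constant. A minimal LaTeX sketch of that textbook inequality:

% Heisenberg's position-momentum uncertainty relation: the product of the
% spreads in position and momentum can never fall below hbar/2, so both
% quantities cannot be pinned down exactly at the same time.
\[
  \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
\]

In plain terms, the more precisely a particle's position is pinned down, the larger the unavoidable uncertainty in its momentum--which is why no Laplacian intelligence could ever assemble the complete, exact data that hard determinism requires.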
References

Barrow, John D. New Theories of Everything: The Quest for Ultimate Explanation. Oxford: Oxford University Press, 2007.
Campbell, Joseph Keim, Michael O'Rourke, and David Shier. Freedom and Determinism. Cambridge, MA: MIT Press, 2004.
Dolnick, Edward. The Clockwork Universe: Isaac Newton, the Royal Society, and the Birth of the Modern World. New York: Harper, 2011.
Feynman, R. P., R. B. Leighton, and M. Sands. The Feynman Lectures on Physics: Quantum Mechanics: Volume III. Reading, MA: Addison-Wesley, 1965.
Gell-Mann, Murray. The Quark and the Jaguar: Adventures in the Simple and the Complex. New York: W. H. Freeman, 1994.
Harris, Sam. Free Will. New York: Free Press, 2012.
Hawking, Stephen W. The Universe in a Nutshell. New York: Bantam Books, 2001.
Hawking, Stephen, and Leonard Mlodinow. The Grand Design. New York: Bantam Books, 2010.
Pereboom, Derk. Free Will. Indianapolis, IN: Hackett Publishing, 2009.
Popper, Karl. The Open Universe: An Argument for Indeterminism. New York: Routledge, 1988.
Weinert, Friedel. The Scientist as Philosopher: Philosophical Consequences of Great Scientific Discoveries. Berlin: Springer, 2004.

Thanks to Wikimedia Commons for the image:
commons.wikimedia.org/wiki/File:Triptychon_Cosmic_Clockwork.jpg
Tim McGettigan is a professor of sociology at Colorado State University Pueblo. His primary research interests are in the areas of science, technology, society, and the future, and he blogs about those topics at the following sites:
The Socjournal, www.sociology.org
OpEdNews, www.opednews.com
Socera, socera.blogspot.com
Fulbright Association, www.fulbright.org
Social Science Space, www.socialsciencespace.com
Tim's next book is tentatively titled Star Warriors: Friends and Foes on the Frontiers of Cosmology; it will examine controversies in the field of scientific cosmology.