Wednesday, June 27, 2012

Homage to Pluck

  Why would someone doggedly pursue an objective regarded by most as quixotic?  Face a probability of success that is infinitesimal?  Spend a whole career doing so? 

 Many have asked such questions about the extraordinary scientist Jill Tarter, director of the Center for SETI (Search for Extra-Terrestrial Intelligence) Research.  I thought of those questions on the occasion of last weekend's SETI conference, which celebrated her 35-year career in SETI as she steps down at 67 from the Center's directorship.

  No one is more identified with SETI than Tarter.  You may have seen her in fictional form as the young scientist played by Jodie Foster in the 1997 film Contact, which was modeled on Tarter's life and work.  (It's a film worth seeing if you haven't yet, and watching again if you have.) 

  What is the substance of Tarter's life work, which invites so much skepticism?  The fundamental issue is not whether there might be life of some sort elsewhere in the universe.  I think that most scientists, including Tarter, would bet that there is.  After all, there are 200-400 billion stars in our Galaxy alone and about 100-200 billion galaxies—altogether tens of billions of trillions of stars in the observable universe.  And we now know from observations in our Galactic neighborhood by the Kepler space telescope that planets orbiting stars are pretty common, a few of them possibly habitable.  As has been said, if there is no life anywhere else in that enormous universe, it would be an incredible waste of real estate!
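  As a rough back-of-the-envelope check on that figure, take mid-range values of the numbers above (the specific values here are illustrative choices of mine, not part of the post):

\[
(3 \times 10^{11}\ \text{stars per galaxy}) \times (1.5 \times 10^{11}\ \text{galaxies}) \;\approx\; 4.5 \times 10^{22}\ \text{stars},
\]

that is, several tens of billions of trillions of stars in the observable universe.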

  However, extra-terrestrial life is surely preponderantly primitive, mostly microbial, as it is on Earth.  It might be successful in taking root in the most extreme places, as it has in the scalding, sulfurous vents deep in Earth's oceans.  There are at least two such sites even within our solar system—Mars and Jupiter's moon Europa—where conditions seem right for microbes to exist in subsurface water.  Space probes sent to those bodies might possibly find such life within decades. 

  No, Tarter has devoted herself to a far harder task: discovering intelligent extra-terrestrial life.  Such life, if it exists at all, is almost certainly extremely rare, rarer still if it were technologically at a stage of exhibiting signs of its presence that we could detect, and rarer yet if it were disposed to purposefully emit a powerful beacon announcing that presence.  For comparison, Earth is about 4.5 billion years old and has hosted life for most of that span, yet only for the past century has one species of its intelligent life—Homo sapiens—displayed extra-terrestrially detectable signs of its existence; none of those signs is an intentional beacon.  In other words, planet Earth has been announcing the existence of its intelligent life for only about one fifty-millionth of its history, and only marginally.  That small time window is but one mark of how unlikely it would be for us to see detectable signs of intelligent life from any given planet at any given time—and this is presupposing a prior unlikelihood, that life had already started on that planet in the first place and could evolve into a technologically sophisticated civilization.
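  For concreteness, the fraction involved is a simple ratio (the post states only the result; the arithmetic is mine):

\[
\frac{100\ \text{years of detectable signals}}{4.5 \times 10^{9}\ \text{years of Earth history}} \;\approx\; 2 \times 10^{-8} \;\approx\; \frac{1}{50{,}000{,}000}.
\]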

  To elaborate further on the long odds against SETI, look at the full array of obstacles it still faces, even assuming that it will have what Tarter never had: a hoped-for radio-telescope observatory that can simultaneously view a million stars in our Galaxy over a very wide band of frequencies.  (Tarter was able to check out but a few thousand stars at limited frequencies.)  First, at least one of those million stars—say it is X light-years away—must have a very rare planet like Earth, capable of hosting life that could eventually evolve into an advanced civilization.  Next, because any signal would take X years to reach us, such a civilization must already have existed X years ago and have been in that tiny time window, such as our current epoch on Earth, when it had the technical sophistication to show detectable signs of its presence, ideally an intentional beacon.  Next, SETI's observatory must be sensitive enough and looking at the right frequencies to detect those signs, and be able to determine that they are not random.  What are the chances that all these suppositions will simultaneously be true?  Certainly not zero, but still minute.  Such are the daunting considerations that Tarter has faced throughout her career.
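  One way to summarize that chain of conditions is as a product of probabilities, in the spirit of the Drake equation (the shorthand symbols here are mine, not the post's):

\[
P(\text{detection}) \;\approx\; p_{\text{Earth-like planet}} \;\times\; p_{\text{civilization detectable X years ago}} \;\times\; p_{\text{signal caught at a searched frequency}}.
\]

Each factor is small on its own, and because they multiply, the product remains minute even with a million target stars: not zero, but minute.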

  So that brings me back to the questions I posed at the outset: what keeps Tarter and her colleagues going up against such enormous odds, with the almost-certainty that success will not come in their lifetimes or maybe ever?  Part of the answer must surely be the intellectual satisfaction of creating tools for the hunt; in one generation these tools have gone from single-antenna radio telescopes and primitive computers to large radio-telescope arrays and very powerful computers.  Another part of the answer must be the thrill of the hunt itself.  Yet another is the huge payoff if the hunt is successful.  But I think Tarter has said it best herself: "The great thing about being a scientist is that you never have to grow up.  You can keep on asking 'Why?' "

  Although Tarter is stepping down from the SETI Center directorship, she is not retiring.  She has dedicated herself to raising funds for SETI, which have dried up in these depressed economic times.  SETI's main observatory, the privately funded Allen Telescope Array (ATA), had to be shut down for some months at the end of last year for lack of operational funds and is now barely limping along.  Tarter's new goal, she says, is to present the next generations of SETI researchers with the planned 350-antenna ATA (it now has 42 antennas) and enough funding to keep it operating at full staffing.

  Bottom line: I think we should all pay homage to this exceptional woman.  Few of us would have the courage to spend our careers as she has, on such an other-worldly, almost-impossible objective. I suggest that you savor her charm, vitality and pluck by watching her 2009 presentation at TED.

Wednesday, June 20, 2012

Neurons and the Internet

  Socrates would have much to say about the Internet if he were alive today—and he would not be complimentary.
 
  In his own time, Socrates was distressed by the increasing use of writing in the place of bardic recitation, oral discourse and philosophical dialog.  That may be why we have no writings directly from him.  We know of his concern only through Plato, in whose Phaedrus we find Socrates recounting objections by mythological Egyptian god-king Thamus to the gift of writing: "[It] will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves; … they will appear to be omniscient and will generally know nothing." 

  A reincarnated Socrates would no doubt have the same reaction to the Internet.  He might paraphrase Thamus by saying, "The Internet will create forgetfulness in its users, because they will not use their own memories; they will trust external websites and not remember of themselves; they will acquire broad knowledge but it will be shallow." 

  Neurologically speaking, Socrates would be onto something.  Modern research has shown that neuronal circuits in the brain, including its memory, wax and wane in response to demands placed on them.  For example, would-be London cabbies are required to memorize the location of every street and landmark within six miles of Charing Cross and the best routes among them.  As they train to meet this requirement, their brains' posterior hippocampi—which store spatial representations and navigation information—get bigger. After a cabbie's retirement, the hippocampus returns to normal size.

  Persistent use of the Internet might be causing similar brain changes.  Recent studies show that people who heavily multitask and hyperlink find prolonged concentration difficult. They are much more easily distracted by irrelevant environmental stimuli.  They tend to have less control over working memory—the temporary memory involved in ongoing information processing, which is sensitive to interruptions.  They have more difficulty in the formation and retention of both short- and long-term memories.

  It would seem from these observations that the neuronal circuits serving long-term memory, which thrives on intense and repeated concentration, are being weakened in chronic Internet users, while circuits involved in rapid scanning, skimming and multitasking are being strengthened.  This is an atavism of a sort, a throwback to the days of cave people, who may not have been deep thinkers but were supremely alert to sudden events, like motion in the peripheral vision that might warn of a predator.  Today's sudden events consist of arriving email and texts, hyperlinks, and the like, which deflect us from current tasks onto others.

  These topics and many more are discussed in a fascinating book by Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains.  Carr's main thesis is that today's Internet-dominated activities are indeed causing our brains to rewire themselves.  He aptly quotes the Jesuit priest and media scholar John Culkin: "We shape our tools and thereafter they shape us."  Carr notes that it has ever been thus, from the writing that so worried Socrates, through the map, the clock, the compass, and so on to modern times.  Each new tool has broadened our ability to understand and control our surroundings, but each has exacted penalties in the form of loss of previous skills and the societal attributes that they underpinned. 

  The balance of benefits and costs of a new tool is not always apparent at the start.  It turns out that the original Socrates was wrong to worry that writing would be preponderantly harmful: the forgetfulness he feared did not occur.  Rather, at least until multitasking recently came into play, book-reading forced us to concentrate deeply and linearly, actually enhancing our abilities both to stay with a single task and to commit the knowledge we thus gained to long-term memory.  On the other hand, our modern-day Socrates might turn out to be right about the detriments of the Internet.  It does appear to be driving us in the opposite direction, toward spasmodic, superficial scattering of our attention; outsourcing our memory to the Web; and a possible intellectual destiny in "the shallows" of Carr's title.

  The counterbalancing upside of the Internet is of course substantial: the ease and efficiency with which so many workaday tasks can now be done. The balance between benefit and detriment will take a long time to become clear.  I suggested in the most recent of my several previous postings concerning the Internet—click on "Internet" in the index to the right to see those postings—that a final assessment might require a century or two to achieve. 

  (Did you deflect your attention to the index?  Did you click on the suggested hyperlink and peruse those previous postings?  Even if you did neither, did your working memory fail to maintain the first part of the sentence in mind during the interruption, forcing you to read it again?  Voila!  The world of online reading.) 

  I have two reasons for contending that it will take generations or centuries to fully assess the impact of the Internet.  First, Internet Age technologies are still unfolding in ways we cannot yet foresee, with unknown consequences.  Second, the brain will likely show itself more malleable in adapting to the Internet's demands than Carr and others credit.  There is already evidence that the working memories of heavy multitaskers are expanding in order to retain temporary information on several tasks simultaneously.  Further, the brains of those raised on the Internet are still engaged in their initial wiring, which won't be finished until they are 25 years old, say 2020 for the earliest Web babies.  That generation and its successors will probably end up wired much more effectively for life in the Internet Age than their elders like Carr, whose brains must be rewired helter-skelter from earlier configurations, which they might not want to surrender.  (There's even an evolutionary implication here, which I am not addressing, for then we would be talking of waiting millennia for a verdict, not merely generations or centuries.)

  Then, will our latter-day Socrates be as wrong about the Internet as the original Socrates was about writing?  We can't know yet.  Patience is the name of the game.

Wednesday, June 13, 2012

Fresh Life Bubbling Up

  There are occasions that unfailingly cause a catch in my throat.  They have to do with the bubbling up of fresh life.  The birth of a baby—a tabula rasa on which all the hopes and possibilities of life will be inscribed.  Later, the signs that the baby is becoming aware of him- or herself as a distinct person, and is beginning to explore the boundaries of that personhood.  Yet later, when the child first enters school and starts to perceive the exciting world of knowledge that lies ahead to be absorbed.  Such effervescences of life restore my optimism for the world, despite the world-weariness and sometimes-cynicism of aging.  They are invigorating antidotes to negativity, balms for the abrasions of life.

 Last Sunday was another such catch-in-the-throat occasion for me, when I attended the commencement ceremonies at College Preparatory School in Oakland.  A Helen Green Turin Memorial Scholar was graduating—the second recipient of a four-year scholarship for minority students named after my late wife.  Along with her ninety classmates, she is about to go off into the world on her own, her first time being away from the security of home and family. Oh! I thought, what can compare to the sight of fresh life bubbling up in all those youngsters, quite literally bright-eyed and bushy-tailed as they relished their accomplishment and looked forward with zest to their next stage of independence?

  Commencement addresses are sometimes yawners, but on Sunday history teacher Don Paige, who had been selected by the seniors to address them, impressed me with his simple but vital message.  Paige reminisced about the two times while he was in college, full of his just-attained adulthood, when he had asked a revered grandfather for advice on life.  Both times, Grampy had cradled the side of Paige's head with a hand and responded, "Do good things."  Both times, Paige was puzzled and disappointed by the apparent blandness of the advice. 

  It was only years later that Paige successfully parsed that brief imperative sentence.  "Do": an active verb, a call to activity rather than passivity.  "Good": a vague adjective implying a change for the better.  "Things": an equally vague noun by which, Paige now understood, his grandfather meant all of those actions that can effect a change for the better in others' lives.  "Do good things": Help others improve their lives.

  Paige ended his talk by suggesting to the Class of 2012 that they augment College Prep's motto "Mens Conscia Recti" (a mind aware of what is right) with "and do good things."  I love the dissonance there: a bookish Latin call for meditative thought, paired with a worldly Anglo-Saxon exhortation to action.  A good mix with which to approach life.

  As I watched the Helen Green Turin Scholar receive her diploma, that familiar catch in my throat gripped me again.  I thought, Grampy had really hit the nail on the head.  Each contributor to Helen's memorial endowment had done a good thing, indelibly changing that young woman's life for the better, encouraging the life force to bubble up in her. 

   I also knew that a bit of Helen's soul would be going out into the world with her, rejuvenated.

Tuesday, June 5, 2012

Gödel and God

    Kurt Gödel, whom some have called the greatest logician since Aristotle, undermined the very foundations of classical mathematics with his Incompleteness Theorem (and, by the way, thereby reaffirmed his belief in God).  To mathematicians, his work was nothing short of cataclysmic.  To most of the rest of us, it is merely counter-intuitive, paradoxical and maddening.  I invite you down the rabbit hole into a realm of paradox worthy of Alice.

  Until Gödel proved his theorem, it was thought that mathematics—alone of the sciences—was self-contained, not having to refer to anything outside mathematics.  Mathematics was created by mathematicians as a complete, fenced-off entity, while other scientists had to discover the outside world.  More precisely, mathematicians held that any true statement in any properly set up mathematical system (say arithmetic or algebra) could be shown to be true by using solely the axioms and rules of that system.

  Gödel proved the opposite.  His theorem shows that any consistent, properly set up mathematical system containing arithmetic has true statements within it that cannot be proved true by using solely the axioms and rules of that system.  Mathematics is therefore not complete unto itself as was supposed.  Confirmation of the truths not provable within mathematics can only be found outside of what had previously been assumed to be a self-contained mathematics.  This is where God came in for Gödel—but more about that later.
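  In the notation of modern logic textbooks (my gloss, not the post's wording), the first incompleteness theorem is usually stated roughly as:

\[
T \ \text{consistent, effectively axiomatized, and containing basic arithmetic} \;\Longrightarrow\; \text{there is a sentence } G_T \text{ with } T \nvdash G_T,
\]

and since the sentence $G_T$ in effect asserts its own unprovability in $T$, its very unprovability makes it true.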

  It all started at the turn of the twentieth century, when Bertrand Russell discovered that the naive set theory then serving as a foundation for mathematics harbored a contradiction.  He illustrated the problem with a folksy paradox about a lone, male barber in a town where the barber shaves every man who does not shave himself, and only those men.  Then, Russell asked, who shaves the barber?  There's the paradox: if the barber doesn't shave himself, then by the town's rule the barber must shave him, so he does shave himself; and if he does shave himself, then he is a man who shaves himself, so by the rule the barber must not shave him.

  Such paradoxes are called self-referential.  In this one, all males must refer themselves to the barber if they don't shave themselves.  The barber, being male, must thus refer himself to himself to be shaved if he doesn't shave himself, setting up the paradox. 
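  In symbols (my shorthand, not Russell's), write shaves(x, y) for "x shaves y" and b for the barber.  The town's rule and its collapse then read:

\[
\forall x\,\bigl(\mathrm{shaves}(b,x) \leftrightarrow \neg\,\mathrm{shaves}(x,x)\bigr) \;\Longrightarrow\; \mathrm{shaves}(b,b) \leftrightarrow \neg\,\mathrm{shaves}(b,b),
\]

a contradiction reached simply by letting the rule refer to the barber himself.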

  The proof of Gödel's Incompleteness Theorem is very complicated, but its core consists of the construction of a true, self-referential arithmetical proposition that is shown to be unprovable within arithmetic.  A taste of the core argument can be found in a much simpler argument about a supposedly self-contained truth-telling machine:

1. Imagine a truth-telling machine M built to answer yes-or-no questions about the truth of any statement submitted to it, and which by axiom answers only correctly; that is, it cannot lie.  (M stands in the stead of Gödel's starting point of arithmetic.)

2.  Consider the statement S, that “M will never say S is true.”  (This is akin to the self-referential proposition in arithmetic that Gödel constructed, for the statement S is defined in terms of the statement S.)

3.  Ask M if S is true.

4.  If M says S is true, then "M will never say S is true" is thereby falsified, so M has incorrectly answered a question.  Hence, M cannot say that S is true, since by axiom it gives only correct answers.

5. Step 4 confirms that M will never say S is true, verifying that the statement S in step 2 is indeed true.

6.  Here’s the dilemma:  S is true but M cannot say so.  M is consequently an incomplete truth-telling machine.  The answer to "Is S true?" lies outside of M. 
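  The six steps above can be compressed into a few lines of symbols (my notation, not the post's), writing Says(M, S) for "M asserts that S is true":

\[
S \;:\equiv\; \neg\,\mathrm{Says}(M, S);
\]
\[
\mathrm{Says}(M, S) \;\Rightarrow\; \neg S \;\Rightarrow\; M \text{ has answered incorrectly, which its axiom forbids};
\]
\[
\text{therefore } \neg\,\mathrm{Says}(M, S), \ \text{so } S \ \text{is true, yet } M \ \text{cannot say so.}
\]

This mirrors Gödel's construction, in which the role of M is played by formal provability in arithmetic and S by a sentence that asserts its own unprovability.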

  Gödel's theorem, which rocked the world of mathematics, didn't shock Gödel.  He had been a lifelong Platonist, believing with Plato that anything we can see or conceive of is just a poor shadow of an eternal, objective ideal that exists beyond the real world.  It was hence no surprise to him that there are mathematical truths that mathematics cannot prove.  He concluded that proofs of those truths are part of the omniscience of an eternal God who is external to the real world, thus strengthening his long-held theism.  In his later years, Gödel even tried to construct a formal logical proof of God's existence.

  Gödel published his theorem in 1931, when he was just 25.  He fled the Nazis, leaving Austria for the United States in 1940, and spent nearly four decades at the Institute for Advanced Study in Princeton, where he became Albert Einstein's closest intellectual companion until Einstein's death in 1955. 

  As I suggested at the outset, understanding paradoxes like those I've described can be maddening—perhaps just reading this blog posting has had that effect on you.  Worse, they might quite literally drive mad those whose life work involves them, and perhaps that's what happened to Gödel.  In a great irony, Gödel died in 1978 in a self-referential sort of way, so convinced that people were trying to poison him that he refused to eat and died of starvation.

  If you want to read more about Gödel—the man, his life, his times, and details about his theorem and its proof—I recommend Incompleteness: The Proof and Paradox of Kurt Gödel by Rebecca Goldstein.