Thursday, July 26, 2012

A Few More Apples

[Editorial note by George Turin: I got two responses to my recent comparison of Silicon Valley to other historical sites of great creativity, such as Elizabethan London, in which I pointed out that in all those sites creativity depended heavily on a free interchange of ideas—'open sourcing' in modern lingo [1].  A writer friend was shocked that I would make the comparison at all, and I responded to him in [2].  My son David had a totally different complaint: I hadn't recognized that Silicon Valley's creativity was just one facet of the more widespread creativity of the San Francisco Bay Area, exhibited not only by the digirati but by the literati and 'musicati' too.  I was interested in his take, and asked him to write a guest posting, which follows.]

  Recently there was a squawk about Steve Jobs' claim, reported in his posthumously published biography, that he conceived of his Apple empire while on LSD.  Tales of acid inspiration are not uncommon—it's routine to associate the drug with the Beatles' groundbreaking later recordings and, maybe not so coincidentally, with their own Apple trademark.  But Jobs' claim touched a nerve in the straights.  That LSD could inspire a computer organization was too cute to leave untouched, so his comment ricocheted around the newswires.  More than a few times, it ran up against the press that a book called Marketing Lessons from the Grateful Dead was receiving around the time of Jobs' death.  I think that's not coincidental: both the Grateful Dead and Jobs were nurtured in the ethos of Northern California in the 1950s and 1960s.

  I bring this up in The Berkeley Write because, although I like its earlier assertion that Silicon Valley is the Mecca of our times in part because of the open sourcing of technology there, I feel that it short-changed the overarching role that the Bay Area played as a hub for many creative communities, not just technical ones.  It also didn't take into account a common thread that ran through all of these communities: LSD.

  In a 1985 Playboy interview, Jobs said something about the Silicon Valley area that I think is important:

"Woz and I very much liked Bob Dylan's poetry, and we spent a lot of time thinking about a lot of that stuff. This was California. You could get LSD fresh made from Stanford.  You could sleep on the beach at night with your girlfriend.  California has a sense of experimentation and a sense of openness—openness to new possibilities."  (Emphasis is mine.)

(It's ironic that, while Stanford inspired Silicon Valley through supporting student enterprises, it apparently also did so by making acid.  Both activities found fertile ground in the Bay Area of the day.)
  
  The California that Woz and Jobs were experiencing was also home for a young Jerry Garcia.   Garcia and the Grateful Dead were originally from Menlo Park, only a few miles away from where Woz and Jobs grew up.  They started as the house band that Ken Kesey employed to play at his experimental acid parties.  Kesey had participated in early psychedelic experiments while a student at Stanford.
  
  If acid was inspiring the creative community of Silicon Valley, then openness to taking the drug has to be attributed to San Francisco.  The Beat Generation (including such creative literary lights as Allen Ginsberg and Lawrence Ferlinghetti) had taken firm root there, giving the city yet another burgeoning creative community and a growing reputation for open-mindedness.

  While surely not everyone participating in the percolating open-source mentality of the Bay Area was dosing, one could say metaphorically that those who did were affecting the water supply.  Inspired by new visions of how the world works and at least temporarily disabused by the acid muse of the concept of ownership, many techies began freely sharing technology and the Grateful Dead began freely sharing tapes and encouraging audiences to record their shows.  In these cases the Beatnik/hippie dream did not die—it became big business.  And more.

  I'm now an expat from the Bay Area—I live and learn in London.  The UK's interest in Jobs' life when he died stirred a surge of pride in me about my roots.  As you may know, Californians aren't always favorably viewed abroad.  Yet stories about Jobs, the Grateful Dead and the California that nurtured them ran through the UK papers for a few months, along with widespread speculation that the Bay Area had given the world its new brain.  The world in turn was learning where to send a thank-you letter.  I didn't rush to re-introduce 'cute' and 'yummy' to my lexicon, but I did put my San Francisco-ness up front again in casual conversation.

  Still, in discussions about San Francisco with Londoners who'd picked up on the spate of recent headlines and chosen to commend me for my great choice of birthplace, I noticed that the common thread of LSD was missing.  I was disappointed to see it missing from this blog too.  Perhaps that is understandable—it is very hard to publicly commend a drug like LSD for advancing our evolution.  No matter how hard that is, it is harder still to deny that taboo substances are often the common experience of a generation, intimately linked with its ideals, creations and in some cases with its centers of creativity.  If wine gave us poetry, what then LSD?

  I've not really tried LSD—the 'really' is a long story—but I have my eye on the new 'standardized' approach to taking it in legal, sympathetic 'retreat' environments guided by shamanic coaches. I'm thinking that these organized psychoactive rituals might bear some fruit for us humans—maybe even a few more Apples.
  
David Turin

Thursday, July 19, 2012

The Self

   I recently mentioned in this blog how awed I am when I see a baby becoming aware of its unique self.  Yet, what is the Self?  That question has been examined for millennia.  Historically, the answers have been all over the map.

  Christianity identifies the self with a God-given soul, separate from the body and surviving it.  Buddhism denies the existence of a distinct, permanent self, positing only an ever-changing consciousness, which on death can be reincarnated into another body.  Aristotle argued that the soul does not have an existence independent of the body. Descartes viewed the soul as immaterial and distinct from the material body, though oddly regarded the brain's pineal gland as the place where they interact.  Hume saw the self not as a discrete entity, but as a bundle of sensations, perceptions and thoughts, perhaps even rejecting the concept of the self altogether.  Modern neuroscience may have a more coherent story to tell … then again, it may not. 

  Our brain has about 100 billion neurons, almost all present at birth.  The neurons alone, however, do not account for the functioning of the brain; that role falls to the staggering number of synaptic interconnections formed among them.  Under genetic control, the embryo's and infant's brain makes tens of millions of interconnections per minute, forming primitive neuronal networks that await further sculpting.

  In early life that sculpting is mostly under the influence of sights and sounds.  For example, during a critical period for activating one primitive network, optical data from the eyes drive the development of the visual cortex, the site where those data are increasingly translated into recognizable geometric and chromatic patterns.  In another critical period, the sounds of one's native language are learned and categorized, leading to the formation of a language cortex.  In a word, the neurons in our brain are driven to form very specialized networks by a constant barrage of sensory information arriving from the outside world.

  More than influences from raw sensory data are involved.  We are social creatures, so most of the behavior of our Self is impressed on our neural system by interacting with others.  As babies, we realize early that smiling elicits attention.  As we mature, we mimic others' behavior as a way of becoming part of the social networks of home, school, workplace, etc.  We learn what is expected of us and discover what is considered offensive or immoral.  We share emotions.  We respond to others' opinions of us.  We find out what others have thought by reading and listening.  We acquire skills that are necessary for our lives and jobs.  As this interaction with the outside world proceeds from infancy throughout our life, the neuronal network in our brain reformats itself constantly.

  So what in fact is the Self?  In The Self Illusion: How the Social Brain Creates Identity, experimental psychologist Bruce Hood argues a viewpoint close to Hume's.  Self, he contends, is no more than a mirroring of all the external influences we have experienced throughout our lives, as filtered by and instantiated into the mass of increasingly networked neurons in the brain.  He doesn't deny the nature side of the nature-nurture dichotomy; he simply asserts that our intrinsic properties express themselves through interactions with the social world that defines us.  It is an illusion, he says, for us to try to distinguish anything about the Self that is independent of the mirroring of the outside world reflected in the neuronal network.

  To get a fuller understanding of Hood's reasoning, it's worthwhile reading an interview with him by Jonah Lehrer, the author of the book Imagine that I discussed in my posting Neurons and Creativity.

  Hood's representation of the Self may seem compelling until one notices a big fly in the ointment:  If the Self is illusory, then so is free will.  Hood claims that even now, as I type this very sentence, I am not exercising free will.  Any choices I think I have made, he says, "must be the culmination of the interaction of many hidden factors ranging from genetic inheritance, life experiences, current circumstances and planned goals … play[ing] out as patterns of neuronal activity in the brain. … We are not aware of these influences because they are unconscious and so we feel that the decision has been arrived at independently."  Hood is not at all alone among contemporary neuroscientists in holding this doctrine.  Sam Harris has just published a short tractate Free Will that makes the same case, as do others.  They paint a disconcerting picture of humans as self-programming automata that develop only under the influence of the social matrix in which they are embedded, with no vestige of free will.

  In short, Hood and Harris come down firmly on the side of Spinoza, who said, "Men are mistaken in thinking themselves free; their opinion is made up of consciousness of their own actions, and ignorance of the causes by which they are determined."   But, if Hood and Harris are wrong, if there is free will, their model of Self lacks an extremely important component.

  In supporting their position on free will, both writers cite a few EEG and fMRI experiments that purport to show that the motor cortex of a subject's brain exhibits activity a half-second or more before the subject senses a decision to move.  Hood, however, points out that such results are prone to misinterpretation.  The subject is both the observed and an observer, passively awaiting an urge to act while simultaneously actively trying to detect consciousness of a decision to act, both in the same neuronal network.  Interaction of these activities may distort the results.

  Aside from these scant and questionable neurophysiological results, Hood's and Harris' main case against free will is based on proof by fiat: it's so because I say it's so.  Their very definition of Self—i.e., no more than a mirroring of our lifelong interaction with the outside world, instantiated in a neuronal network that runs autonomously—is taken as an axiom. They then conclude that the Self ipso facto cannot contain an independent free will, since free will is not part of the axiom.  As Harris says, "Thoughts and intentions simply arise in the mind.  What else could they do?"  This seems to me to be a classic case of circularity, of begging the question.  It doesn't pass the smell test.  Dare I say that it is illusory?

  Despite their insistence that there is no free will, both authors are pragmatic.  Hood points to data showing that believing in free will leads to better performance in, and enjoyment of, life.  Harris says that "for most purposes, it makes sense to ignore the deep causes of desires and intentions—genes, synaptic potentials, etc.— … when thinking about our own choices and behaviors."  I agree, and more than just as a way of avoiding the issue.  Until neuroscience can provide a much more complete understanding of the subject, I believe that free will will remain in the domain of philosophy and religion.

  Meanwhile, I will continue to believe that I wrote this posting because I intended to do so of my own free will, not because of the firings of autonomous and anonymous neurons.

Wednesday, July 11, 2012

Computer Science

  At the risk of boring some readers of The Berkeley Write, I've decided to review the remarkable history of computer science, from its barely existing a half century ago to being at the core of almost everything now.  That history is remarkable not merely because computers have become so ubiquitous, but especially because computer science has fundamentally changed the way we solve problems. 

  When I was an electrical engineering undergraduate at MIT in the late 1940s, I was dumbfounded to find that a classmate was studying Boolean algebra, an algebra of logic.  As one who was steeped in the mathematics of the continuum, I asked him why he was interested in an algebra having only two values—0 and 1.  He told me that he wanted to study digital computers. 

  I couldn't understand why he would be wasting his time.  Although I had worked on electro-mechanical analog computers during the summer after my freshman year, I scarcely knew what a digital computer was, and had absolutely no inkling at all of the revolution they would shortly ignite.  In my own exculpation, I should say that there were only a handful of them in existence at the time, and that even Thomas Watson, founder and then-chairman of IBM, had a few years earlier estimated their ultimate world-wide commercial market to be about five.  (To paraphrase John Kenneth Galbraith's remark about economic forecasting: The only function of engineering forecasting is to make astrology look respectable.)

  In 1960, when I joined the faculty at UC Berkeley, digital computers were much more widespread, but the term "computer science" was still an oxymoron: few could perceive any science there at all.  The study of computers at Cal, as elsewhere, was embedded in the department of electrical engineering, where the few courses on computers involved only the design of their electronic circuitry, peripherals and elementary programming.  Later in the decade, a few faculty members started going beyond those engineering issues, into basic research on the limits on digital computation and the theory of programming languages—i.e., computer science.

  Computer science has since come a very long way, now equally sharing with mathematics the role of fundament of all the sciences, and being central to many non-science disciplines as well.  Most particularly, it has largely changed our way of solving problems, from classical analysis to algorithmic procedures.

  In mathematics and the sciences, classical analysis is characterized by theorem proving and by the formulation of problems so that they can be solved by equations.  You had doses of that way of thinking in your high school geometry, algebra and physics courses, if not further.  For example, you encountered the quadratic equation, y = ax² + bx + c (often arising in the physics of motion), and were asked to find the values of x for which y = 0.  The two solutions of that problem turned out to be x = (−b ± √(b² − 4ac))/(2a).  That's pure analysis.

  On the other hand, algorithmic thinking is characterized by a step-by-step, iterative procedure, the modus operandi of computers.  The process for such thinking is often described by a flow chart such as that below, which is taken from one of the forms of the federal income tax return.  You start at the box at the upper left, and step through the chart by answering a series of yes/no questions.  If you get to the box labeled "You must figure your penalty," you then proceed to a set of forms that, rather than giving a formula for calculation of the penalty, guides you through a sequence of "if-then" statements to make the calculation, one step at a time.  That's a pure algorithmic procedure.
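
  To make the contrast concrete, here is a minimal sketch of my own in Python (an illustration I've added; nothing in it comes from the tax form or the original discussion).  The first function applies the closed-form quadratic formula in one shot, pure analysis; the second hunts down a root of the same quadratic by bisection, answering a single yes/no question at each step, very much in the spirit of the flow chart.

import math

# Classical analysis: the closed-form quadratic formula gives the answer in one step.
def quadratic_roots(a, b, c):
    disc = b * b - 4 * a * c                 # assumes real roots (disc >= 0)
    return ((-b + math.sqrt(disc)) / (2 * a),
            (-b - math.sqrt(disc)) / (2 * a))

# Algorithmic thinking: bisection closes in on a root by answering one yes/no
# question per step: does the sign of y change in the left half of the interval?
def bisect_root(f, lo, hi, tol=1e-9):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:              # yes: the root lies in [lo, mid]
            hi = mid
        else:                                # no: it lies in [mid, hi]
            lo = mid
    return (lo + hi) / 2

a, b, c = 1.0, -3.0, 2.0                     # y = x^2 - 3x + 2, with roots x = 1 and x = 2
print(quadratic_roots(a, b, c))              # analysis:  (2.0, 1.0)
print(bisect_root(lambda x: a * x * x + b * x + c, 1.5, 3.0))   # algorithm: ~2.0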


  Computer science now occupies a central role in the physical sciences and elsewhere because many problems are too complex for mathematical analysis—for example, the behavior of chaotic systems such as the weather or population dynamics, or of economic systems with large numbers of variables and participants.  To investigate such a problem, a sophisticated algorithmic procedure is designed to model the system at hand, and the procedure is then run on a computer to provide insight into the system's performance.  It's not just a matter of using the computer as a glorified calculator; to get accurate results in reasonable amounts of time, and in usable form, investigators must be as steeped in the fundamentals of computer science as their predecessors were in mathematics.
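
  To give a feel for what such modeling looks like in practice, here is another minimal sketch of my own (the logistic map below is a textbook stand-in for a chaotic system, not one this posting singles out).  Iterating a one-line rule shows the hallmark of chaos: two nearly identical starting points soon diverge completely, which is why simulation, rather than a closed-form formula, is the tool of choice.

# A minimal model of a chaotic system: the logistic map x -> r * x * (1 - x).
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a_run = logistic_trajectory(0.200000)
b_run = logistic_trajectory(0.200001)        # a one-in-a-million change in the starting point
print(a_run[-1], b_run[-1])                  # after 50 steps the two runs bear no resemblance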

  Even though computer science didn't start flourishing until the 1960s, it had its ur-moment in 1935 in England in the work of Alan Turing.  He posited a machine that manipulates symbols in a step-by-step fashion.  The device, now called a Turing machine, is quintessentially algorithmic.  Turing never built one.  His genius lay in specifying an abstract procedure so simple and yet so general that it still models any computer program and therefore any computer.
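
  A Turing machine is simple enough to sketch in a few lines of Python.  The toy machine below is my own illustration, not one of Turing's: it scans a tape of 0s and 1s, flips every 1 to 0, and halts at the first blank, manipulating one symbol per step exactly in the spirit described above.

# A minimal sketch of a Turing machine: a tape, a head, a state, and a table of
# rules mapping (state, symbol) -> (new symbol, head move, new state).
def run_turing_machine(tape, rules, state="scan", blank=" "):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape)

rules = {
    ("scan", "1"): ("0", "R", "scan"),   # flip a 1 to 0 and move right
    ("scan", "0"): ("0", "R", "scan"),   # leave a 0 alone and move right
    ("scan", " "): (" ", "R", "halt"),   # blank: stop
}
print(run_turing_machine("1101", rules))   # prints "0000 " (a trailing blank gets written)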

  Turing, a superbly talented mathematician, also used the concept of his machine to address a question his contemporary Kurt Gödel had tackled: the completeness of mathematics. Gödel's Incompleteness Theorem showed that some true mathematical statements can't be mathematically proved to be true.  Turing approached the question differently, from the viewpoint of what can be computed by an algorithm.  In a result very much related to Gödel's theorem, he showed that some well-posed problems cannot be solved by any algorithm in a finite amount of time; they are simply not computable.

  The question of computability is now front and center in computer science.  Even if a problem is theoretically computable in a finite amount of time, the time needed may be unreasonably large, given the exabyte (quintillion byte) data sets that are becoming common in such scientific and commercial operations as the Large Hadron Collider, the Sloan Digital Sky Survey, the World Data Centre for Climate, Amazon, YouTube, Google, etc.  (The first of these generates a quadrillion bytes of data—more than the information in all the world's libraries—every second!)  This "big data" challenge goes to the heart of computer science: how to effectively organize huge data sets, and how to formulate algorithms that handle them both efficiently in computation time and accurately as the size of the data set increases. 
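
  To give one small, hedged illustration of the flavor of such algorithms (my own toy example, not drawn from any of the operations named above): a one-pass running average can summarize a stream of readings far too large to hold in memory, using a constant amount of storage no matter how big the data set grows.

# A minimal sketch of one big-data tactic: process records as a stream, keeping only
# a constant-size summary instead of loading the whole data set into memory.
def running_mean(stream):
    count, mean = 0, 0.0
    for x in stream:
        count += 1
        mean += (x - mean) / count     # incremental update; no past values are stored
    return count, mean

# Works the same whether the generator yields a thousand numbers or a trillion.
readings = (i % 7 for i in range(10_000_000))
print(running_mean(readings))          # (10000000, ~3.0)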

  An indication of the current importance of big data is the establishment this year of a major research institute at UC Berkeley with a private $60-million grant from the Simons Foundation.  The institute will bring together researchers from around the world to study efficient algorithmic approaches to such important big-data subjects as evolution, the immune system, maintaining privacy while doing data analysis, and climate modeling. You will surely hear much more about big-data research in the next decade.

  So, to repeat: in my own professional lifetime, computer science has come from barely existing at all to being "at the core of almost everything," even of disciplines that were not previously thought of as particularly quantitative.  I find it a stunning story.

Thursday, July 5, 2012

Hope

  Today, a heavy subject: catastrophe, survival and regeneration.  Bear with me.  My musings sometimes veer toward the darker side.

  We are reminded daily by newspapers of all the disasters that Man and Nature can inflict.  As I write this, my newspaper reports the ongoing slaughter of thousands and destruction of whole cities in the civil war in Syria; random suicide bombings in Iraq; a huge forest fire that has destroyed a suburb of Colorado Springs, mercifully killing only a few.  We scarcely need to be reminded of the earthquake and tsunami in Japan last year.  And some horrors have become part of our vocabulary—Srebrenica, 9/11, and the cataclysms of the Holocaust, Rwanda and Hiroshima. 

  We also read of and even know survivors of calamities who have carried on—not just muddled through, but reconstructed their lives and planned their futures anew.  How do they do it?  When I lost my wife, I was able somehow to come through that dark night.  Yet what if one's entire existence and community were swept away?  How would one survive then?

  For years, whenever I read about survivors of a catastrophe, sayings from my youth popped unbidden into my mind: Cicero's "While there's life, there's hope" and Pope's "Hope springs eternal in the human breast."  Even in my youth, they were long-since clichés.  Thinking of them so automatically seemed cavalier, a shrugging off of the survivors' pain with a nostrum. 

  On a recent such occasion, again finding those clichés just too pat, I sought a more profound understanding by returning to a book I read many years ago:  Life and Fate by Vassily Grossman, a saga of the Stalin years in the USSR.  It stunned me with its window into a modern-day Armageddon, and I have held it close to me since, ranking it with other great Russian epics like War and Peace and Dr. Zhivago.  I decided to re-examine what it says about survival.

  The torments faced by the characters in the book verge on the incomprehensible, even given our latter-day knowledge of those dreadful decades: the tyranny of Stalinism, with its constant terror, purges, imprisonments, torture, executions, and an endemic atmosphere of mistrust and betrayal; the Nazis' invasion of the USSR and their subsequent massacres of Jews and others in occupied territories; and the apocalyptic Battle of Stalingrad, around which much of the plot centers.  Every one of Grossman's protagonists has been either imprisoned in the Gulag at one time or another, usually because of a false denunciation; or lost a parent or a spouse or a child to the Soviet or Nazi camps or the war; or is currently in trouble with the authorities for Kafkaesque reasons.  Grossman himself suffered many of these agonies, including the later destruction by the KGB of all copies of the manuscript of Life and Fate.  (Fortunately for the world, dissidents had made two microfilm copies, which were smuggled to the West.)

  By the last chapter, the survivors of this Armageddon, although having endured unthinkable misery, still clutch onto vital sparks of humanity.  They are epitomized by the matriarch of the main clan of the saga.  She has lost her husband to the Gulag, a son, daughter and grandson to the war, all of her possessions on abandoning Moscow ahead of the German advance, and then lived through the hell of the Battle of Stalingrad.  Yet at the book's end, she is busily planning a move from Stalingrad's ruins to a city further into the Steppe to create a new life for her granddaughter and great-grandson.  So there's the mystery again.  How can a soul like hers be tortured and nearly broken, and still seize on those remaining grains of life?  What makes some of us capable of survival and even compassion when, rationally, we should throw in the towel, go crazy, commit suicide? 

  The sheer fortitude with which Grossman's survivors have overcome each tragedy throughout the book doesn't answer these questions, for fortitude is a surface manifestation of something deeper.  We have to ask, what is its source?  Devotion to ideals, smoldering anger at injustice, desire for retribution?  Maybe for some.  But Grossman's answer lies in a passage on the last page of the book:

" … you could hear both a lament for the dead and the furious joy of life itself.  It was still cold and dark, but soon the doors and shutters would be flung open.  Soon the house would be filled with the tears and laughter of children … "

I believe Grossman is saying that hope for a better future is the all-encompassing source of survival, without which fortitude is an empty display.  Absent such hope, he concludes, there's no life worth living. 

  Maybe Cicero's and Pope's "clichés" were on the mark after all.  Maybe, as a response to disaster, they are as profound as can be.