Wednesday, May 30, 2012

The Jury System

  Winston Churchill once said, "Democracy is the worst form of government except all the others that have been tried."  I feel the same way about the jury system.  I came to this conclusion during my first service as a juror over fifty years ago and haven't changed it since.  A recent summons to jury duty reminded me of that initial experience.

  At the time I was a resident of Los Angeles County.  In those days in LA, and maybe even now, one was called to jury duty for a month and had to show up every weekday, whether or not then empaneled on a jury.  (I now live in Alameda County, where one is summoned for just one day of service at a time and dismissed if not picked for a jury—much more efficient for the citizen, if not for the county.)

  I spent most of those thirty days in LA in a jury assembly room waiting to be called for possible inclusion in juries.  (I eventually served on about a half dozen.)  That seemingly interminable waiting nonetheless had a big payoff for me in the form of a splendid civics lesson.  As I talked at length with many of the other potential jurors, I was able to assay the raw human material of juries; it was of course diverse, and it was not always impressive.  Much more importantly, I soon discovered an amazing feature of the jury system: When assembled into a jury, that undifferentiated raw material is magically metamorphosed by a strange alchemy, as if it were an admixture of lesser metals transmuted into pure gold.

  The backdrop of my civics lesson was the notorious Caryl Chessman case, which after a dozen years was reaching a final resolution.  Chessman had spent most of his adult life in jail.  While on parole, he was arrested and charged with being the infamous "red-light bandit," who followed cars to secluded areas and tricked their occupants with his red spotlight into thinking he was a police officer.  When they pulled over, he robbed and—in the case of several young women—raped them. 

  Chessman was convicted in 1948 and condemned to death under California's "Little Lindbergh Law," which allowed capital punishment for kidnapping if it also involved bodily harm.  In this case, the dragging of one of the women a few feet from her car was deemed by a court to be kidnapping.

  Chessman always maintained his innocence and spent twelve years on appeals, acting as his own attorney and also laying out his arguments in four books that became best sellers.  His case was an international cause célèbre, involving appeals on his behalf by such notables as Eleanor Roosevelt and Billy Graham.

  By 1960, Chessman's legal appeals had deflected eight execution deadlines.  In mid-February of that year, then-Governor Pat Brown (the father of present Governor Jerry Brown) had issued another sixty-day stay of execution, which was about to expire as my jury term began.  The story was in the newspapers every day, and little was talked about in the jury assembly room other than whether Governor Brown should commute the sentence to life imprisonment or let the execution proceed.  I remember being shocked at how many potential jurors were impatient with the slow process of justice, feeling that the case should long since have been over and done with.  "Just kill the bastard!" was a common opinion.

  When I first heard that sentiment, I had not yet served on a jury.  It gave me a sense of foreboding.  Here were people with whom I might serve, and many of them—especially the least informed—seemed to me to be irrationally and emotionally baying for an execution.  If I were to serve on a jury with any of them, how would they act? 

  I need not have worried, for the metamorphosis I have mentioned always manifested itself.  In every jury on which I served, my fellow jurors carefully examined evidence.  They cogently and articulately argued their viewpoints.  They respectfully listened to others.  They tried to identify with plaintiffs and defendants.  They resisted being intimidated by judges and attorneys.  Above all, they seemed to have a reverence for their temporary but sovereign roles as personifications of the ideal of justice. 

  I have seen the same alchemy in operation every time I have served on a jury over the last five decades.  Jurors of every profession, age, level of education and social condition unite in each case to determine the facts and render a true and just verdict.  It never fails to impress me.

  On the other hand, my opinion of judges and lawyers has tumbled each time.  I have often found judges to be peremptory, impatient, and inured to the plights of those who appear before them—I suppose they are toughened by seeing too much of the worst of human nature over too many years.  And I have rarely found lawyers to be either seekers of truth or high-minded advocates—I suppose they cannot be in our adversarial system of justice.

  So, if I am ever unfortunate enough to be a defendant in a criminal case, or even only a party in a civil suit, I would willingly place myself in the hands of my peers. They will of course be fallible human beings, but I would count on that miraculous alchemy to transform them into ones who are as impartial as human beings can be.  I'd bet my life on it.

  Postscript: Although nominally opposed to the death penalty, Governor Brown did not commute Chessman's death sentence.  Chessman was executed on May 2, 1960, just as a new stay from a judge was being phoned to the prison's warden.  (The judge's secretary had originally misdialed the number, delaying delivery of the stay by a few critical minutes.)  In 1977, the U.S. Supreme Court struck down kidnapping as a capital offense.

Wednesday, May 23, 2012

Creativity à la Silicon Valley

  A writer friend of mine, whom I'll call X, choked on a comparison I made in an earlier posting.  It took a Heimlich maneuver to resuscitate the poor fellow.

  I had written about communal creativity, the kind that arises when creative people frequently and randomly interact, stimulating each other's individual creativity.  In a book I referred to, Jonah Lehrer says that communal creativity reaches a peak when an assortment of supportive social, civic, economic and demographic conditions align.  What he calls "clots of excess genius" then form.  (Better to call them "clots of excess creativity," since true genius is too rare to come in clots.)  He gives as examples fifth-century BCE Athens, fifteenth-century Florence, late sixteenth- and early seventeenth-century London, and today's Silicon Valley, and I used the same examples.

  After recovering sufficiently, X emailed me to ask how Silicon Valley's "techies" and their creations could possibly be compared with the cultural luminaries and creations of Athens, Florence or London.  To put words into his mouth, it was as if he were asking: Can one even begin to compare Steve Jobs with Sophocles or Leonardo or Shakespeare?  Can one even begin to compare the iPad with Oedipus Rex, Mona Lisa, or Hamlet?  Put this way, the juxtaposition is staggering.

  It isn't that Silicon Valley doesn't have a clot of excess creativity.  AnnaLee Saxenian, in her now classic book Regional Advantage, showed how the alignment of the very same social, civic, economic and demographic conditions cited by Lehrer led to an explosion of communal and individual creativity there.  That alone at least invites comparison of Silicon Valley with the earlier eras, even if it does not guarantee its inclusion in the pantheon.

  X might accede thus far.  He admits that creators of technology have at least mental originality, but objects on another ground, asserting that their work doesn't emotionally engage the whole being of the creator or the beholder of the creation.  This, he seems to say, excludes Silicon Valley from any grouping with the other eras.  I wish I could imbue him with the sense of beauty and wonder that creators and beholders of engineering masterworks can feel.  It's sad that, more than a half century after C. P. Snow wrote about the disjunction between the two cultures of science/engineering and the humanities, mutual comprehension is still largely lacking.

  Yet X has correctly identified a difference in kind.  Many of the earlier eras' major creations were artistic or literary masterpieces by single creators. Silicon Valley's major creations are technological masterworks, usually agglomerations of efforts by many individuals.  Since responses to these different species of creations are so subjective, side-by-side comparisons of creators and creations from Silicon Valley with those from the earlier eras might shed more light on the comparer than the compared.  I think the four eras should therefore be compared by using a more objective yardstick: their overall impacts on civilization, which is probably what Lehrer had in mind in the first place.

  The golden age of Athens set the course for western civilization for millennia.  Fifteenth-century Florence set the tone for the rest of the Renaissance and still enthralls us with the beauty it created.  Writers in late sixteenth- and early seventeenth-century London set a standard for English literature for centuries.  Yet even measured against these colossal accomplishments, X isn't justified in being so dismissive of Silicon Valley, which almost single-handedly created the Information Age, enormously changing the very structure of society in the twentieth and twenty-first centuries and likely for centuries to come.  In terms of the magnitude of the consequences each creative era has had for the world, Silicon Valley clearly belongs in the quartet.

  I will concede one point to X, though.  It is a big one, concerning the relative merits of the contributions of the four eras, not just their magnitude.  The creative heritages we have received from Athens, Florence and London, seen through the lens of time, are unalloyed boons. We are not yet sufficiently distant from the heritage of Silicon Valley to comprehensively and dispassionately evaluate its ultimate contribution to civilization.  On the plus side, the Information Age has made the cultural heritage of the world easily and freely available to all of its population, not just its elite.  It has begun to redress the imbalance between the powerful and the powerless by giving a stentorian voice to those who had been ignored.  It illuminates the dark recesses of society that would never have been discovered otherwise.  On the negative side of the ledger—as I have argued in the past two weeks—the Information Age has greatly distorted the way we interrelate personally, the "alone together" effect.  And there might be perils of the Information Age (sentient robots?) that are as yet unknown—after all, it took almost two centuries for the global warming arising from the still-ongoing activities of the Industrial Revolution to become evident.

  The jury remains out on the net legacy of Silicon Valley, its balance of good and bad.  The verdict might not be rendered for a century or two.  I wish I could be there to hear it.

Wednesday, May 16, 2012

Online Education

  Sherry Turkle's book Alone Together, which I discussed in last week's posting, has a single sentence that has haunted me ever since I read it: "Today's young people have grown up … not necessarily tak[ing] simulation to be second best."  That mindset underlies the "alone together" syndrome that is afflicting society, especially our youngest generation.  In our drive to become digitally ever more together, we have paradoxically become ever more alone as we simulate traditional and intimate forms of contact with diluted ones online.

  Our best universities now seem to be on their way to exacerbating that syndrome by heavily adopting online learning.  MIT and Harvard recently jointly launched a nonprofit, online-education venture called edX. Almost simultaneously, Stanford, Princeton, and the Universities of Pennsylvania, California (Berkeley) and Michigan (Ann Arbor) joined a commercial online-education venture, Coursera.  The latest online courses have many technological bells and whistles—computer-mediated testing to provide students with feedback on their progress, social networks to enable discussion among them, automated and/or crowd-sourced grading, and so forth—that simulate the conventional learning environment.

  Both new ventures are stepping over the corpses of previous online efforts such as Fathom (Columbia, et al.), which failed in 2003, and AllLearn (Oxford, Yale and Stanford), which failed in 2006. Despite these failures, the universities involved in edX and Coursera seem mesmerized enough by online learning's possibilities to try again. I hope they are sufficiently aware of its dangers. The dangers are nuanced, depending on the objectives of the students.

  As a tool of continuing education in one's later years, online courses seem relatively benign, although even in this context they feed the "alone together" malaise.  Since edX and Coursera currently offer at most only certificates of completion, not college credits, they are well suited to the needs of this continuing-education audience. All the same, the logistics are unnerving: last fall, over 100,000 students around the world took three free, non-credit Stanford computer science classes online and tens of thousands satisfactorily completed them!

  The situation is much different for younger students seeking degrees.  Neither their educational objectives nor their mature selves are fully formed, so interaction only with a screen cannot possibly substitute for the educational and personal maturation a "bricks and mortar" institution offers.  To the extent that complete courses of study leading to degrees are offered online (as the University of Phoenix now does), they cannot help but distort the very nature of education and its impact on such students.

  Many arguments have been made both in favor of and against online education. The most telling one in favor is the democratization of access to learning, making it available to people around the world who could never otherwise have it. This argument alone can outweigh many of the traditional concerns.  Other less substantial but still important advantages are cited: additional students can enroll in overcrowded, much-sought-after courses at their own colleges; they can benefit from an ability to time-shift their attendance to hours that suit their own schedules and to "rewind" a lecture to review difficult parts; their access to the best teachers online may outweigh closer personal interaction with less talented ones available in a traditional setting. Further, there may be cost savings or income enhancements for struggling educational institutions, although the recent failures noted above call this into question.

  Some questions about possible drawbacks inherent in online courses have been asked by David Brooks in a column in the New York Times published just after edX was announced: Will massive online learning diminish the face-to-face community on which the learning experience has traditionally been based?  Will it elevate professional courses over the humanities?  Will online browsing replace deep reading?  Will star online teachers sideline the rest of the faculty?  Will academic standards drop?  Will the lack of lively face-to-face discussions decrease the passionate, interactive experience that education should be, reducing it to passive absorption of information?

  The many pros and cons are continually being parsed by those more expert than I.  However, from my viewpoint a sole desideratum should dominate the discussion: Since adoption of an online option aggravates the already severe "alone together" syndrome, it should be avoided if other options are available.  Using this criterion, an institution's quest for cost savings, productivity increases or income enhancement is insufficient in itself.  Likewise, a student's desire merely to substitute screen time for human interaction as a preferred mode of learning should be discouraged.  Students who must for some reason view a course's lecture component online should be required when possible to attend non-cyberspace support tutorials, seminars or discussion groups.

  The march toward broader adoption of online learning seems unstoppable, supported at both the top and bottom of the academic ladder.  Among its strong supporters in high places is Stanford president John Hennessy (see his article on the subject), who has enthusiastically predicted that this new wave of education will be something of a tsunami—an apt metaphor, given the image of destruction it brings to my mind.  At the student level, many students who are already always connected to the Internet don't think of online learning as second best.  They are comfortably at home on Facebook, with its billion users, so they find an online class of 100,000 unremarkable and don't discern additional value in a class of forty in a traditional classroom.

  I am frightened by the likely acceptance of online education as a normal or even preferred mode of learning. Educational institutions may well soon face Hennessy's online tsunami, just as newspapers, book and music publishers, and magazines have.  I hope they, or at least some of them, will seek higher ground rather than scurry lemming-like into the deluge.  

Wednesday, May 9, 2012

Always Connected


  Writing this blog has alerted me to some of my internal contradictions.  For example, I yearn for equanimity, yet crave a faster connection to the Internet.  There's the rub: equanimity and torrents of data don't co-exist well, or at all.

  I guess I was vaguely aware of this contradiction before my blogging highlighted it, for in retrospect I realize I've been trying to sort it out.  I've persistently resisted replacing my antique voice-only cellphone with a smartphone, I suppose fearing that my tether to the Internet would be reinforced by mobile texting, browsing, email and apps.  At home I've come down to opening fewer than 1 in 4 of the emails in my already spam-free inbox.  I'm not active on social networks and I don't tweet.  Tilting the balance the other way, I've started blogging.  I think I've been searching all along for a sustainable equilibrium between the frenzy of technology and the stillness of self.  It's a hard struggle, but I think it essential for sanity in the Internet age.

  Many of us, particularly the young, are feeling the effects of not having such a balance.  Those effects are well analyzed by Sherry Turkle in her recent book Alone Together: Why We Expect More from Technology and Less from Each Other.  Turkle makes a compelling case that our expanding connectivity has had the perverse result of distancing us.  She has spent three decades as an MIT professor studying the issue, so I find no reason to challenge her credibility.

  Here are a few of the symptoms Turkle gives of an "alone together" syndrome: emailing or texting while in a meeting or dining with others; replacing intimate, spontaneous and hard-to-break-off telephone calls with less-demanding texts and emails; sidestepping the complexities of face-to-face friendships in favor of the less-stressful "friending" on Facebook or fantasy relationships with other avatars on Second Life. Today's young people, she says, are the first generation that does not take the simulation of closeness as second best to closeness itself.

  Turkle also has much to say about another symptom: multitasking.  By engaging in it, we enjoy the illusion that we are becoming more efficient, squeezing extra time into our already compressed schedules; we get high on that illusion.  What we have really done, she notes, is learn to put others on hold as we switch among tasks, for we are actually capable of handling only one task at a time.  We now spend much of our time with family and friends distracted from giving them the full attention they deserve.  And all for naught, because research has shown that when we multitask our efficiency and the quality of our work are degraded.

  I add two additional concerns to Turkle's, informed by my posting on creativity.  Being "alone together" in a crowd may prevent the random, impromptu interactions needed for communal creativity.  And having little or no down time for daydreaming may frustrate the Aha! part of individual creativity.

  For all the undisputed benefits of having a world of knowledge at our fingertips, this is a disheartening picture.  As a society we suffer from an Internet-driven obsessive-compulsive disorder.  Internal pressure assails us with withdrawal symptoms when our connection is broken, as if we had a substance addiction. (We do, although the "substance" is connectivity.)  External pressure adds to the malady, for employers are increasingly demanding 24x7 connectivity of their employees, even when on vacation.  Teenagers complain about like demands from their parents, who are newly empowered to keep track of them all the time.  I can't see how this can be good for any of us.

  Wait, it gets worse!  So far I've drawn just from the second part of Turkle's book, which is about always-connected networking.  The book's first part is about the growing impact of robots on our lives.  Today's young people, taken as they are with simulation, are more accepting of robots than the rest of us.  They grew up cherishing robotic toys like Furbies, which sold 40 million units of their first generation between 1998 and 2000.  A third generation will be introduced this year.

  Those who are Furby-deprived might appreciate a description of the first Furby generation.  They were programmed to gradually speak English as they interacted with their owners, instead of their native "Furbish."  They demanded attention and, when they got it, responded lovingly with phrases like "I love you."  They could even communicate with other Furbies.  Like humans, they were always on—no switch—so the sole way to stop their sometimes annoying demands and chatter was to open them up with a screwdriver and remove their batteries, in effect killing them.  Replacing the batteries reset them to their initial state, reincarnating them with no memory of what they came to "know" in their previous life.  Children actually mourned their deaths.  I can't imagine what the third-generation Furbies, fourteen years later, will be able to do, but their pretense of intelligent behavior will certainly be greater.

  Turkle's research shows that our youngest generation, primed by interaction with Furbies and other robotic toys, is quite open to the likely advent in a decade or two of widespread "intelligent" humanoid robots that could be lifelike and caring companions—even lovers and spouses (see a New York Times review of a 2007 book on that frightening reminder of Stepford Wives!).  I won't go there further, although Turkle does.  My 20th-century mind rebels.  I fear that the shards of human togetherness remaining in "alone together" will shatter further as togetherness with robots increases.

  I flee back to the lesser dislocation of society occasioned only by the Internet's connectivity.   An eerily apt poem by Wordsworth from two centuries ago still rings true with its original words.  It becomes all the more pertinent to today's topic by deleting a single letter.  Try replacing "spending" with "sending."

The world is too much with us; late and soon,
Getting and spending, we lay waste our powers;
Little we see in Nature that is ours;
We have given our hearts away, a sordid boon!
This Sea that bares her bosom to the moon,
The winds that will be howling at all hours,
And are up-gathered now like sleeping flowers,
For this, for everything, we are out of tune;

It moves us not. --Great God! I'd rather be
A Pagan suckled in a creed outworn;
So might I, standing on this pleasant lea,
Have glimpses that would make me less forlorn;
Have sight of Proteus rising from the sea;
Or hear old Triton blow his wreathèd horn.

Thus did Wordsworth bemoan the societal costs of the Industrial Revolution.  We should listen to him as we reckon the costs of the information revolution.

Wednesday, May 2, 2012

Neurons and Creativity

  I've been puzzled ever since I posted "Intuition and Expertise" about a month ago. If you read it, you may remember a central postulate of Daniel Kahneman's book Thinking, Fast and Slow: people use two distinct mechanisms in thinking.  These are a fast and intuitive "System 1" and a slow and analytical "System 2."  It's a model that left me with nagging questions, unaddressed by Kahneman.  Are the two systems merely metaphors that don't have discrete neurological counterparts?  Or is each system instantiated at a specific neurological site?

  I was therefore excited to hear an interview on NPR in which Jonah Lehrer discussed his new book Imagine: How Creativity Works.  He talked not only about intuitive and analytical thinking in creativity, but also about particular neural circuits that are their loci.

  Thankfully, Lehrer is a journalist, not a scientist like Kahneman, so his book is short and relatively crisp.  (Good! I thought. I didn't want to plunge into another tome like Kahneman's.)  The book has two parts, "Alone" and "Together," which respectively address individual and communal creativity.  Just the first part is concerned with the details of neural activity, so I will confine myself mostly to it.  But the other part is of sufficient interest to warrant a brief outline.

  Together.  Lehrer asks why such places as fifth-century BCE Athens, fifteenth-century Florence, late sixteenth- and early seventeenth-century London and today's Silicon Valley came to have "clots of excess genius"—rare agglomerations of creative people.  He maintains that they have in common a vital combination of ingredients: a dense population, relative affluence, and civic and social institutions that encourage creativity.  Together, these ingredients foster frequent interactions among imaginative individuals, leading them to exchange ideas.  Each individual's creativity, the "alone" kind, is stoked by bumping into other creative individuals at random.  The environment in turn attracts still more of the brightest and most original, driving a positive feedback loop.

  For example, late sixteenth- and early seventeenth-century London had one of the densest populations on earth; a relative lack of censorship; an amazing near-50% literacy rate nurtured by the English Reformation's rendering of the Bible into the vernacular; ardent and supportive theater-goers; and a plethora of coffee houses, which acted as milieux for chance meetings.  The result was a "clot of excess genius" containing the likes of Jonson, Marlowe, Shakespeare, Milton, Kyd, Spenser and Donne.  English literature has never again seen an efflorescence like that in such a short time period.

  Many modern companies try to mimic this clotting phenomenon.  At the insistence of Steve Jobs, for instance, all of the most-frequented spaces at Pixar and Apple—cafes and cafeterias, mailboxes, meeting rooms, even bathrooms—were clumped together centrally, so that people would be forced by happenstance to bump into each other, chatter and exchange ideas.  It seems to work, for Pixar and Apple are among the most creative companies ever established.

  Alone.  Still, while people can be prodded into creative acts by their surroundings, the act itself is usually individual.  That's where System 1 and System 2 thinking come into play.  Lehrer provides a fine insight into how these two forms of thinking occur in the brain.  Here are some of the facts he presents, with a few of my own comments interpolated.

  As I argued in my posting on intuition and expertise, System 1 leads to the Aha! moments that we all experience.  Finding the part of the brain where those epiphanies are generated has recently been made possible by functional MRI (fMRI) measurements, which detect increases in neural activity as surges in blood flow to the active neurons.  Such measurements show that the anterior superior temporal gyrus (aSTG), a small fold of brain tissue above the right ear, becomes especially active a few seconds before an insight.  The aSTG functions subconsciously, apparently by obsessively searching for relationships among the myriad pieces of data that are stored in the brain.  When it finds a significant relationship, it "lights up" in the fMRI scan, like the light bulb above a cartoon character's head indicating an intuitive flash.  We may trigger an aSTG search because we are stumped in the analysis of a particular problem, probably because we are looking at the wrong data, or it may happen autonomously.  Interestingly, the aSTG seems to be most active when we are relaxed, possibly daydreaming.  That may be why Archimedes got his Eureka! moment while in a bath.

  On the other hand, System 2 thinking, the analytical kind, is in our conscious mind all along; we know that it is happening because it is effortful, concentrated work.  That work takes place in the prefrontal cortex (PFC), located behind the forehead.  The PFC has associated with it a short-term working memory that allows it to focus on the very pieces of data it presumably needs to solve a problem—akin, I think, to a computer's RAM.  By not paying much attention to the vast amounts of other data in the brain (data "on disk," so to speak), the PFC emphasizes the fine-tuning of an emerging solution, but doesn't provoke epiphanies.  We gain incremental understanding from its activities, piece by piece.

  Part of the PFC may actually inhibit our full creativity: the dorsolateral prefrontal cortex (DLPFC), which is most closely associated with impulse control.  It constrains our thoughts and actions so that we don't make fools of ourselves or violate social norms—a sort of straitjacket that the mind places on itself.  It can therefore prevent us from thinking "out of the box."  Tellingly, it is one of the last areas of the brain to develop in children, which may explain both their relative lack of social inhibitions and their as-yet unbridled creativity.  fMRI studies of jazz musicians while they are improvising show their DLPFC activity to be suppressed, as if they are purposely inhibiting their inhibitions so as to be able to jam; this suppression doesn't occur when they are merely playing a set piece.

  Another connection, or actually lack of one, between creativity and the PFC is quite amazing.  When we are asleep, the PFC shuts down!  Our analytical abilities and our inhibitions—our sanity checkers—are gone.  The brain is then free to run amok, dreaming uninhibited thoughts and making "insane" connections among its memories.  Occasionally connections that do make sense occur in this maelstrom.  Perhaps that is why we sometimes decide to "sleep on it" when we are trying to solve a particularly vexing problem.

  These few facts about the neurology of thinking are of course not the whole story.  No matter. They are the beginning of an answer to the "nagging questions" Kahneman left me with.  My inner scientist is very pleased.  I love it when observational data can be understood in terms of underlying physical phenomena!