About the Institute

The Hybrid Vigor Institute is dedicated to rigorous critical thinking and the establishment of better methods for understanding and solving society’s most difficult problems. Our particular emphasis is on cross-sector and collaborative approaches; we seek out experts and stakeholders from a range of fields for their perspectives or to work together toward common goals.

hybridvigor.net houses the work of critical thinkers, researchers and practitioners who conduct cross-sector and cross-disciplinary explorations and collaborations.

Intervention by Denise Caruso, Executive Director of Hybrid Vigor. Silver Award Winner, 2007 Independent Publisher Book Awards; Best Business Books 2007, Strategy+Business magazine.

'21st Century Risk' Archive


December 14, 2011

It is five years almost to the day since I published Intervention, my book on genetic engineering and risk. And I am more convinced than ever that everything I wrote about was spot-on. It seems like every week there is a new revelation about harmful consequences of living biotech products in the wild — consequences that were predicted by so-called activists, but totally dismissed by the industry and regulators.

For example, genes from engineered plants do spread, despite industry’s early and repeated declarations that they cannot. One result? Superweeds that now have built-in resistance to several herbicides.

What’s more, insects are adapting quickly to transgenic plants with insecticide genes. In Illinois and Iowa, a new generation of insect larvae feeds on the roots of genetically engineered corn. And in India, the pink bollworm is unaffected by the insecticide growing in the cotton plants it is eating (for which Monsanto blames the farmers).

And it seems that eating transgenic food may not be so harmless after all.

Yet nothing changes. In fact, the Obama administration is supporting the planting of genetically engineered crops in more than 50 national wildlife refuges across the country.

So … I think it’s time for me to start researching a sequel to Intervention, focused on exposing the dangerously cozy relationships between industry and regulators that ignore scientific common sense and put all of us at risk.

But I’m going to need your help — and I’ll send you a gift, or even many gifts, in thanks for your generosity.

Here’s the deal: For every $20 you donate to Hybrid Vigor, we will send a free copy of Intervention to you, or to anyone you’d like. Signed and inscribed, if you choose.

You also can have your gift copies sent to libraries. Just specify in the instructions that you want to donate your gift(s) to a library, of your choosing or ours, and we will take care of the rest. We can also donate your book(s) to companies, non-profits or corporate libraries — say, for example, to venture capitalists that are funding biotech startups …

Just click here and merrily Paypal away (you can use a credit card at this link also):

Your generosity will be much appreciated, and put to good use.


June 1, 2010

Today is the first day of my month-long fellowship at the STUDIO for Creative Inquiry, in the College of Fine Arts at Carnegie Mellon University.

I am here at the invitation of Golan Levin, director of the STUDIO and a former colleague of mine at Interval Research. The fellowship is funded by the National Endowment for the Arts.

Golan told me I could do anything I wanted, and so I invited Robin Gianattassio-Malle to come work with me on inventing a better way to help people learn and think about the consequences — both risks and benefits — of innovations in science and technology. We want to go beyond the usual binary, “fawning or damning” approach that dominates media coverage today, to actually informing people about these incredibly complicated issues.

I’m really excited about this project. It’s the first time in a long time I am going to have the opportunity to roll up my sleeves and do what I do best: to help people understand complexity in a way that is engaging, helpful and accurate. We are going to have to confront some tough design issues, but between us we have an amazing network to draw from.

Robin and I will be working at the STUDIO with two other fellows — Kyle McDonald and Jacob Tonsky — both of whom are wicked smart and from whom we expect to learn a lot.

We will be building a prototype over the next few weeks, and I will be posting updates about our progress. Yeehaw!


December 9, 2008

I’m racing off to Stanford University for a conference honoring the 40th anniversary of Douglas Engelbart’s ‘Mother of All Demos.’

Called Program for the Future, the conference aims to explore ways to “enhance our capacity for problem solving, decision making, knowledge organization and planning in every field of human endeavor.”

When I interviewed Engelbart on The Site (in 1996, I think it was; it also was probably my favorite interview of all time — he is such a tremendously humble and lovely man), this is how I introduced him:

The very act you are engaged in at this moment — reading and clicking through information on a computer screen — would not be possible if not for Douglas Engelbart. While working at Stanford Research Institute in the late 1960s, Engelbart invented or envisioned almost everything that makes personal computing possible today: the computer mouse, hypertext links, groupware, on-screen editing and much more. But almost 30 years ago, few if any of his peers shared his vision.

That vision (which I also explored in an NYT column back then) was about the power of technology to enable what Engelbart calls “collaborative intelligence.” And while we are kind of banging our way toward it, his ideas for how technology could serve as the connective tissue between people and information were more methodical and directed than our haphazard efforts today.

I spoke at the 30th anniversary celebration, so it was nice to get a call on Friday from Etan Ayalon (CEO, GlobalTech Research) to join a last-minute panel he was asked to put together and moderate for the conference. We’ll be discussing collective intelligence in the context of one of my favorite subjects:  how to be innovative about innovating.

I’ll be joining Phil McKinney (VP and CTO of the Personal Systems Group at Hewlett-Packard) and Dr. Larry Leifer (founder and director of the Stanford Center for Design Research, and founding director of the Stanford Center for Innovations in Learning).

I thought I’d post the questions that Etan sent us to riff off during the panel, and my brief thoughts in response.

How do we best realize Doug Engelbart’s vision of combining people and technology to nurture innovation and better humanity, by addressing major challenges as well as creating new industries, products and jobs?

1. One problem at a time, using the right processes.
2. Need to improve the improvement/innovation process — the C-work, in Doug’s parlance.

•    Today we have pursuit of innovation without considering context. Often ‘solutions in search of a problem,’ instead of the other way around.
•    Pursuit of innovation in a solo inventor (or product development department, whatever) model leads to applying collective intelligence post facto; i.e., marketing department and customers aren’t part of the process.
•    Context is also provided post facto, and selectively — usually by people with a specific and often narrow point of view.
•    Context can only be accurately provided by others.

Is innovation a gift or a skill?

1. Both, and neither. Depends. Some people are natural outside-the-box thinkers. But the organization has to be designed to encourage exploration. And organizational design is a skill.
2. Why do you ask? The thoughts behind the question are as interesting as the question itself.

Is innovation an outcome or a process?

Personally I think it’s an outcome, but if it’s being done in an organization it’s more likely to happen if there are processes to support it. Again, what’s the motive behind the question?

Sharing the Benefits of Innovation for All, Not Just the Lucky Digital Few – With an ever-widening digital divide, how do we ensure that innovation benefits all segments of society in both developing and developed countries?

Process innovations can benefit everyone, I think. But with products, it’s more than a digital divide. Biotechnologies have this issue as well — expensive drugs, expensive seeds, etc. And we can’t ensure this without government intervention, at least not at first. I don’t think that’s how it works. But we can be thoughtful about how to stage innovations so they eventually get there.

Balancing Innovation Risks and Rewards – How? Who Should Participate in the Dialogue?

Who? All the relevant experts and stakeholders.

How? By having the risk-reward conversation very early in the product development cycle. And by having a process that respects the question, which requires changing the R&D culture.

Also, we need to acknowledge that product innovation today in particular is more about driving profit than solving problems. This may need to be rethought if we are serious about creating a sustainable economy that isn’t wholly based on getting people to endlessly buy more stuff. It’s a very different risk-reward conversation when it’s framed that way.

Does innovation emerge from/require ambiguity and uncertainty?

Life is ambiguous and uncertain, which causes problems that need to be solved. So, yes. Also it emerges from the drive to improve, which some people have innately.


December 7, 2008

A few posts ago, I made a plea for the Obama administration to include social scientists in the mix as it moves to return science to its rightful position of inclusion and respect in the public policy sphere. If you want just one real-life example of what’s at stake by not doing so, read this letter about the “updated” Technical Assistance Document on anthrax contamination, proposed by EPA and several federal agencies after the 2001 and 2002 attacks.

It’s written to EPA administrator Stephen Johnson, from my colleague Baruch Fischhoff, the Carnegie Mellon risk expert and professor who’s chair of the Homeland Security Advisory Committee for the EPA’s Scientific Advisory Board.

Fischhoff wrote:

[S]everal Committee Members, myself included, were distressed at the lack of systematic, scientific attention to communicating with the public.  … It is not unique to this anthrax project, but reflects a general problem in our national emergency planning … As we saw in 2001, a b. anthracis (“anthrax”) attack has enormous potential for achieving our enemies’ goals, even when causing relatively few casualties … Much of that damage came from our own inability to communicate credibly, causing needless concern and distrust that persists to this day.

With its rigorous methodologies and an impressive body of academic literature supporting it, risk communication represents the bounty of wisdom that can be found in the applied social sciences, from fields including psychology, communications, decision analysis, rhetoric, sociology, political science, law, ethics, linguistics and anthropology.

But the scientific aspects of risk communication are often entirely overlooked or dismissed by technical experts and authorities in both emergency preparation and response. Instead, they assume that their knowledge of technical details, their intuition about what to say to the public, or their charisma (in the case of the politicians) will give people enough information to respond to emergencies.

Call it ignorance, arrogance or denial, but that attitude is a big mistake, and it has real consequences.

Look back at Hurricane Katrina for some horrific examples. Not only did authorities fail to get the frail and the poor out of New Orleans, they utterly failed to persuade tens of thousands who could have evacuated the city to do so.

And recall the disaster that one risk expert called the “Duct Tape Risk Communication” emergency preparation strategy, proposed by the White House in 2003, which immediately was turned into a lampoon to skewer the U.S. government, rather than inspiring citizens to take useful action.

People need to trust their leaders and technical experts to tell them the truth in emergencies, in ways that actually answer their questions — questions which will be different for business leaders than for schoolteachers — and address their fears. Without that trust, the public isn’t going to follow instructions.

As Fischhoff said in his letter to EPA, the only way to prepare for emergencies is to have an inventory of scientifically sound risk communications on hand — pre-scripted press releases, print and electronic explanatory materials, guides to self-testing, FAQs and the like — ready to be adapted to specific circumstances. And,

Communications research planning is not expensive. However, it requires a skill set that is not represented in the anthrax [Technical Assistance Document] task force. Nor is it present in most other parts of our national response effort [including the Emergency Consequence Assessment Tool and the WaterSentinel Program (PDF)]. As a result, much of what passes for risk communication advice has no scientific foundation.

Thankfully, compared to some of the other problems facing the Obama administration, this is an easy one to fix. And given the nature of some of those problems, they may want to fix this one now.


December 4, 2008

Maybe my imagination is getting the best of me, but I laughed out loud when I read last Thursday’s New York Times article about the minimal impact of a big hypertension study published in 2000 that compared various blood pressure drugs.

The study was called the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial, or ALLHAT. And while I’m sure the authors would never admit it, I desperately want to believe someone built that big ol’ clunky name around the classic cowboy insult “all hat, no cattle,” describing someone who’s all talk and no substance (in this case, the blockbuster drugs).

I do hope that’s what they had in mind, anyhow. It’s not just hilarious; it also makes sense, given what the Times article revealed about why the ALLHAT study had so little impact.

Its findings showed that cheap diuretics were at least as effective at treating high blood pressure as the expensive and heavily promoted drugs like beta blockers and calcium blockers — but that doctors were still prescribing the pricey stuff at a much higher rate.

One reason, according to Curt D. Furberg, a public health sciences professor who was the first chairman of the steering committee for the study, was that “The pharmaceutical industry ganged up and attacked, discredited the findings.”

The Times piece notes that Furberg eventually resigned “in frustration” from the steering committee, while another committee member went on to receive more than $200,000 from Pfizer, largely in speaking fees, the year after the ALLHAT results were released.

“There’s a lot of magical thinking that it will all be science and [there] won’t be politics,” Sean Tunis, a former chief medical officer for Medicare and an advocate for these kinds of comparative-effectiveness studies, told NYT.

I suspect there’s a lot more hat than cattle for a lot of the expensive drugs doctors are prescribing today to treat chronic conditions. And while of course drug companies are free to sell anything they’d like, I don’t really want to have to pay for the most expensive drug just because my doctor got seduced by the sales rep.


November 20, 2008

Earlier this week, I got a phone call from Steve Aldrich and Jim Newcomb, respectively CEO and director of research for Bio Economic Research Associates, a private research and advisory firm.

They’d read my paper on risk and synthetic biology and thought my characterization of their report on synthetic biology, “Genome Synthesis and Design Futures: Implications for the U.S. Economy,” was unfair.

The larger issue that our disagreement is based on — that is, how to pay proper fealty to scientific uncertainty — is at the core of my discontent with how technology innovations are assessed for risk and benefit.

So I told them I would write about our disagreement here. This way, they have an opportunity to respond, and maybe we can get a discussion going on the subject.

Here is what I wrote:

Of the most concern in the context of risk and governance are the reports that uncritically support synthetic biology, as they encourage development and commercial release with little or no acknowledgment of the degree of scientific uncertainty that surrounds the endeavor. A 174-page report on synthetic biology published by Bio-Economic Research Associates in 2007 and funded by the Department of Energy (which itself has invested heavily in synthetic biology research), contained but a single, three-quarter-page discussion of the limitations of the engineering paradigm as applied to living systems. Giving such short shrift to a topic that is still under deep consideration in the broader scientific community lends an air of certainty to a highly uncertain endeavor. Such under-representation has real significance from the perspective of investment and economic risk, as well as from that of health and the environment.

[Italics added by me; they aren’t in the paper.]


November 20, 2008

I don’t know what kind of planetary alignment took place over the past week with regards to synthetic biology, but whatever it was, I like it.

Over the course of five days in November, from Thursday the 13th to Monday the 17th, four conversations about synthetic biology took place. They involved everyone from non-profit leaders to engineers, social scientists, biologists and government regulators. We need more open-minded, smart people from many sectors thinking and talking about this technology, and pronto.

What on earth am I talking about? If you’ve never heard of synthetic biology, you aren’t alone. According to the Project on Emerging Nanotechnologies, less than one in 10 (9%) Americans say they have heard some or a lot about synthetic biology — and a whopping 67% have heard nothing at all. [Edited in response to first comment. Never let it be said that I do not listen to my critics.]

But venture capitalists, multinational chemical, energy and “life science” companies, and just about every government agency you can name are already investing millions of dollars to develop commercial synthetic biology applications. According to one report, the research market in 2006 was already $600 million, and “the potential for growth in the next 10 years is projected to expand this market to over $3.5B.”
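For scale, a quick back-of-the-envelope check (my own arithmetic, not from the report being quoted) of the annual growth rate that projection implies:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by growing from start to end over years."""
    return (end / start) ** (1 / years) - 1

# $600 million in 2006, projected to $3.5 billion ten years later
rate = cagr(0.6, 3.5, 10)
print(f"{rate:.1%}")  # roughly 19% per year
```

In other words, the report is assuming the market nearly sextuples in a decade, compounding at close to twenty percent a year.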

Proponents and opponents and everyone in-between agree these applications will have a direct and significant effect on our lives and on the planet. (I’ve put links to good/accessible background reading at the end of this post.)

The first event was on Thursday the 13th, a day-long “teach-in” in San Francisco, held by and for civil society groups and NGOs, which as far as I can tell was organized by the ETC Group in Montreal. It was private, so there’s not much else to say about it — I found a link about it on the Food First site. If you want more information, contact Jim Thomas at the ETC Group.

The second, on Friday the 14th, was hosted by the Wilson Center’s Project on Emerging Nanotechnologies, which was a conversation with — well, it was with me, actually, and Rick Weiss, a former senior fellow at the Center for American Progress (you may know him from his previous incarnation as the Washington Post science writer). The occasion was the publication of my paper on synthetic biology, which you can read or download here.


November 17, 2008

My next column in Strategy+Business (coming out in Winter 2009) will be about the need to rewrite our innovation policies from scratch. I strongly believe that we need to move beyond simplistic “greasing of the wheels” for corporations via tax credits and patent reform, and look more closely at how to create a whole new ecosystem in which innovation — and particularly, scientific and technological innovation — can flourish to everyone’s benefit.

In that regard, Barack Obama’s call for a return to scientific integrity is cause for tremendous hope for those who have spent eight long years battling the anti-science, anti-innovation era of the outgoing administration.

The very first item on the Obama campaign’s science fact sheet, which was published in September 2008, states that Obama’s science-friendly science policy will ensure that “decisions that can be informed by science are made on the basis of the strongest possible evidence.”

It goes on to say that the Obama administration will (among many other things):

  • Appoint individuals with strong science and technology backgrounds to key positions;
  • Take advantage of the work of the National Academies to identify the federal government positions that require a strong science and technology background;
  • Ensure independent, non-ideological, expert science and technology advisory committees; and (last but certainly not least, from Hybrid Vigor’s perspective)
  • Actively encourage multidisciplinary research and education, noting that “innovation often arises from combining the tools, techniques, and insights from researchers in different fields.”

Yes! That’s what I’m talkin’ about! That last one even takes a page straight out of Hybrid Vigor’s mission statement.

But … I’m concerned that social scientists are not specifically mentioned anywhere in the policy fact sheet, either in spirit or in fact, not even in the last item. This is a serious omission as well as a risky one, and unfortunately it is all too common in discussions of interdisciplinary, multidisciplinary or cross-disciplinary research.

Social scientists can — and should — provide a critical bridge between innovation and the people that the products of innovation purport to serve. They can help policy makers think about the social and cultural context for research priorities and decisions in a way that technologists cannot, making sure that the “strongest possible evidence” that scientists provide is also the evidence that is most relevant to the decision at hand.


November 13, 2008

It is not always happy-making to be ahead of one’s time.

On Tuesday, the New York Times published a package of articles that explored new genetic research and new ideas about what a gene is.

Much of the package was based on the findings of the ENCODE study, which was sponsored by the National Human Genome Research Institute.

The upshot of ENCODE, which was published about a year and a half ago, in June 2007, was pretty straightforward: the human genome is not a “tidy collection of independent genes,” after all, with each sequence of DNA linked to a single protein, which in turn is linked to a single function, like the production of an enzyme.

Instead, genes appear to operate in a complex network, interacting and overlapping with one another and with other components in ways that will challenge scientists “to rethink some long-held views about what genes are and what they do.”

The lead story in the package notes this perspective, writing that scientists “no longer conceive of a typical gene as a single chunk of DNA encoding a single protein,” and quoting one of them as saying, simply, “It cannot work that way.”

YES! I was so excited that this issue was finally going to get some attention. Not only was it one of the central themes of my book, Intervention, but I had also written a column about ENCODE for the New York Times — called “A Challenge to Gene Theory: A Tougher Look at Biotech” — right after the results were published, in July 2007.

In it, I asked what (to me) is the most obvious and important question, one that was addressed nowhere in the NYT package.


October 29, 2008

I was recently talking to some German friends about their trips to the United States. Apart from the standard touristy things they found memorable about the U.S., they were all greatly impressed that they could go shopping for almost anything in the middle of the night. Even to modern Europeans, the concept of midnight shopping seems fantastic. Imagine their amazement when I explained that, in the U.S., they could go shopping on almost any holiday as well.

Today’s business culture thrives on performance, success, winning, and constant availability. The world continues on its frenzied trend toward 24×7 services, “five 9’s” of up-time, and six sigma products. The drive to succeed has provided us with all sorts of modern conveniences — and plenty of modern anxieties.
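To make those availability targets concrete (my own arithmetic, not a figure from this post), the downtime budget implied by each level of “nines” can be computed directly:

```python
def downtime_minutes_per_year(nines: int) -> float:
    """Minutes of downtime per year allowed by an availability of `nines` nines.

    e.g. five nines = 99.999% availability = 0.001% allowed unavailability.
    """
    unavailability = 10 ** (-nines)
    return unavailability * 365 * 24 * 60  # minutes in a (non-leap) year

for n in (3, 4, 5):
    print(f"{n} nines: {downtime_minutes_per_year(n):.1f} minutes/year")
```

By this measure, the jump from four nines to five is the difference between roughly 53 minutes and roughly 5 minutes of allowed downtime per year.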


But I’d like to say a few words in defense of failure, because I believe failure has an important purpose and we can’t simply wish failure away by focusing on success. In my view, systemic failures can be averted simply by introducing some planned imperfections into the systems we build. One of the lessons that should be learned from the current financial crisis is how securities originally thought to be insulated from the housing market proved to be directly on the financial fault line.

Here’s the problem: when a system (such as a computer network, power grid, or financial market) performs steadily for a period of time, it fades into the background and seems as certain as the rising of the sun. Over time, a complex and interdependent mesh of relationships develops. Because these dependencies aren’t explicit, it becomes nearly impossible to predict how the beating of the proverbial butterfly’s wings in one part of the system can wreak havoc in another.

Is there a way to tease out the dependencies in such networks and develop complex distributed systems that fail safely? I think there’s a simple solution: introduce the element of failure. Shoot for 4 9’s instead of 5. Interrupt the broadcast so that we can run the drill before the disaster strikes. Learning to fail on a regular basis could help us deal better with much larger, systemic failures in the future.
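A minimal sketch of that idea, deliberately injecting failures at a small, known rate so that retry and fallback paths stay exercised (the service names and failure rate here are illustrative assumptions, not part of the original post):

```python
import random

def call_service(name: str) -> str:
    # Stand-in for a real network call; the name is purely illustrative.
    return f"response from {name}"

def flaky(name: str, failure_rate: float = 0.01, rng=random.random) -> str:
    """Wrap a service call so it fails deliberately at a small, known rate.

    Forcing occasional failures means the drill runs before the disaster:
    hidden dependencies surface while the stakes are still low.
    """
    if rng() < failure_rate:
        raise ConnectionError(f"injected failure calling {name}")
    return call_service(name)

def fetch_with_fallback(name: str) -> str:
    # Callers must handle failure explicitly instead of assuming 100% uptime.
    try:
        return flaky(name)
    except ConnectionError:
        return "cached fallback"
```

The design point is that `fetch_with_fallback` cannot be written lazily: because failure arrives on a schedule rather than once a decade, the fallback path is tested continuously instead of atrophying in the background.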