About the Institute

The Hybrid Vigor Institute is dedicated to rigorous critical thinking and the establishment of better methods for understanding and solving society’s most difficult problems. Our particular emphasis is on cross-sector and collaborative approaches; we seek out experts and stakeholders from a range of fields for their perspectives or to work together toward common goals.
Intervention, by Denise Caruso, Executive Director of Hybrid Vigor — Silver Award Winner, 2007 Independent Publisher Book Awards; Best Business Books 2007, Strategy+Business magazine.

Archive for August 2007


August 5, 2007

My New York Times column today, “Testing Testers, Finding Flaws,” was a pure pleasure to write.

It’s about a method, developed by computer science researchers at Keele University in England, for revealing shortcomings in human reasoning and errors in research. It has the potential to help us solve some of the most vexing problems we face today.

Here are the first few paragraphs:

SOME problems are particularly tough nuts to crack. From cancer to computer viruses, no matter how much time and money we spend, they seem to defy all attempts to solve them.

Two computer science researchers at Keele University in England say they believe that more progress can be made by shifting our focus from the problems themselves to the people who strive to solve them. The researchers, Gordon Rugg and Joanne Hyde of Keele’s Knowledge Modelling Group, have come up with a process they call Verifier that is designed to seek out mistakes in existing research on difficult problems.

By applying the scientific method to knowledge itself, Verifier has proved adept at exposing gaps in logic that can result from expert biases and mistakes, gaps that can invisibly skew their research results.

While Verifier promises to improve the odds of solving vexing intellectual puzzles, it may also help industries that rely on research to develop more effective products and treatment interventions. In principle, its developers say, the method can be used on any problem in business or academia because shortcomings in human reasoning are universal. … [snip]

They are working on Alzheimer’s, autism and dyslexia. I think it’s also a brilliant approach to doing risk assessments in areas of complexity where people are creating innovative technologies from scientific discoveries.

I asked Gordon Rugg why he’s not using the method as part of a collaborative process. It seemed to me that would make it easier and less time-intensive, and maybe even less susceptible to “operator error.”

He said it hadn’t been designed that way, but many people ask him about it. I would love to find a way to converge a bunch of these interdisciplinary, collaborative processes into a kind of manual, so that they can be deployed more often and more broadly.