Intellipedia, Discoverability & the Gov 2.0 Movement Toward Living Intelligence
I stumbled across this YouTube video from a link on Twitter and found it a pretty cogent argument for Intellipedia: that its real benefit is driving the quality of the work product, which matters more than the standard collaboration hype that tends to dominate media coverage of the national intelligence sharing project. One of the more interesting observations by the video’s author, Chris Rasmussen, a social-software knowledge manager at the National Geospatial-Intelligence Agency, is that any Enterprise 2.0 project must replace a legacy process or risk failure. Here are his notes on the video:
Intellipedia is now in its fourth year, and the dominant view of its role can aptly be described as “good for collaboration but not the product.” Each intelligence agency still vets and generates “their” products, and Intellipedia is largely viewed as an adjunct of generic information compared to the official process. The living intelligence model aims to reduce parallel product creation by moving the review process into the same place where the collaboration takes place. This would create a central and transparent vetting system that replaces legacy processes. This is a key lesson for all Enterprise 2.0 endeavors: it must replace something. Living intelligence has also been referred to as purple intelligence.
Rasmussen also links to an article from earlier this summer in Federal Computer Review, in which he discusses the purple intel concept and which asks, “Is ‘discoverability’ the answer to the information breakdowns that have hampered homeland security efforts?”:
[Rasmussen] … prefers to call himself a purple intelligence and mashup evangelist, pointing to the fact that purple is the color that results from mixing multiple points of the spectrum.
Purple is an apt symbol for combining the expertise of organizations working to help prevent future attacks, he said.
Rasmussen has seen purple power in action through countless little success stories accomplished via Intellipedia, the information-sharing wiki that serves intelligence agencies, the military and the State Department. “All the time, people are connecting with others [who] they didn’t know worked on the same issue six feet down the hall,” he said.
Connecting the dots, more formally known as information discoverability, is gaining increasing attention from homeland security officials and experts in their ongoing attempt to corral anti-terrorism information that resides across federal, state and local jurisdictions.
In January, the departing director of national intelligence issued Intelligence Community Directive 501, which gave intelligence personnel a “responsibility to discover” information believed to be relevant to their work, along with a corresponding “responsibility to request” information they have discovered.
The directive defined discovery as the act of obtaining knowledge of the existence, but not necessarily the content, of information collected or analysis produced by any intelligence community element.
Two months later, the bipartisan Markle Foundation published a report that reaffirmed “discoverability” as the first step in any effective information-sharing system.
“Solving discoverability simplifies solving information sharing,” said Jeff Jonas, an IBM distinguished engineer and a member of the Markle Task Force on National Security in the Information Age.
But despite these high-profile mandates, challenges call into question the feasibility of discovery tools and techniques for solving data-sharing problems that span agencies, jurisdictions and cultural boundaries. Some say the technology isn’t even the hard part.
The real question for Intelligence 2.0, then, is not whether intelligence teams can collaborate more effectively but whether these methods can make a difference in the quality of the work product.
What do you think?