

Can New Technology and Tradecraft Enhance Intelligence Sharing and National Security? [Rush Transcript; Federal News Service]

Speakers: Calvin Andrus, Chief Technology Officer, Center for Mission Innovation, Central Intelligence Agency, and Stephen DeAngelis, Founder, President and CEO, Enterra Solutions
Presider: Michael Moran, Executive Editor,
March 22, 2007
Council on Foreign Relations

MICHAEL MORAN: Well, good afternoon everyone. I'll let you grumble for a moment. (Pause.)

I'd like to welcome you to today's Council on Foreign Relations meeting. I'm Michael Moran. I'm the executive editor of the council's website. I hope I didn't need to tell you that. I hope you're all using it and finding it useful. I'm a person also -- I've spent a lot of time in journalism over the years and in various guises covered the intersection of technology and intelligence gathering, which is the topic, of course, of this meeting today.

Before we get into the substance of stuff, though, speaking of technology, I'd like to ask you at this point to turn off your cell phones, BlackBerrys, SmartPhones, iPods, Zunes. If anybody out there has a Star Trek communicator, please turn it off.

Also, a reminder that at the last minute, thanks to a dispensation from Langley, this meeting has been put on the record, which means that any utterance may be subject to quotation. And I will do my best, as executive editor, to make sure that both indignities occur. We will have video, audio and a transcript of this meeting available as soon as we can on the website.

With that, I would like to introduce our guests. To my right, Dr. Calvin Andrus, who in my biz needs no introduction, but I'll do it anyway. He's currently the chief technology officer at the Center for Mission Innovation at the CIA. To put that more plainly, Dr. Andrus is helping the intelligence community come to grips with how new technologies might help them solve a dilemma that is as old as the intelligence business itself -- how to prevent the kind of compartmentalization of information that can be so debilitating to an industry like that.

Dr. Andrus won the intelligence community's 2004 Galileo Award for his paper, "The Wiki and the Blog: Toward a Complex, Adaptive Intelligence Community." That can also be found on the website. Dr. Andrus earned his Ph.D. from SUNY Stony Brook. And I welcome you.

CALVIN ANDRUS: Thank you. Thank you very much.

MORAN: To the left, I welcome Stephen DeAngelis, president and CEO of Enterra Solutions, which attempts to do for the corporate sector and for others what Dr. Andrus is doing for the intelligence community -- namely, help them cope with the speed and the frequency of new developments in technology and find ways to share knowledge intelligently in real time, something that the new world demands. Mr. DeAngelis has a B.A. from American and is a visiting scientist at the Department of Energy's Oak Ridge National Laboratory.

To begin, I'd like to kick it off with a very broad question, and I'll direct it to you, Calvin. With our audience in mind, in the simplest terms possible, could you describe what a wiki is, and would you describe why this phenomenon has applications in the intelligence community that might make current procedures outmoded?

ANDRUS: Okay. Well, very simply stated, a wiki is sort of like a word processor, but one where everybody can edit the article that you're working on. And embedded in that article there are what we call hyperlinks, or links to other articles. And so over time, you get a collection of articles that are linked to each other, sort of an ever-growing knowledge base of things that people have worked on, and which can be updated sort of on a regular basis or in near real time.

MORAN: And one of the key kind of features of the wiki, of course, is that it's open to editing by a wide variety of people. It's not simply the work of one person, like a CIA analytical report would be.

ANDRUS: Right. So one of the principles of the wiki is that anybody who has access to the wiki is free to edit any page inside there, inside the wiki. And so you get this sort of -- so there's this "wisdom of crowds" effect that goes on inside of a wiki.

MORAN: And could you explain a bit about Intellipedia, the agency's version of the Wikipedia, which is more publicly known?

ANDRUS: Right. So the Wikipedia is out on the open Internet. We have taken that software and placed it up on our top-secret network. We've also placed an instance on our secret network, which is a separate network, and we've placed an instance on our unclassified network that's behind a firewall.

What we did is we uploaded or we populated our instance with the country pages from the open wiki, and then we sort of forked the knowledge, then we just sort of took it from there.

MORAN: And this has developed now into a tool that you actually use every day, is that right?

ANDRUS: Right. We actually sort of within the intelligence community -- it's available to everyone in the intelligence community -- we sort of went live in April of 2006. So it's almost a year old now. And as you can imagine, in the beginning there was almost nothing there and there wasn't much to see when you went there, and there weren't very many hyperlinks. But over the course of the year we've had one of these exponential growth curves in the number of people who are working on it, the number of edits that occur. We looked yesterday, and we were getting about -- I don't know if I can say this -- 500 edits an hour.

MORAN: So I'm going to try to put this in -- put a hypothetical example to help people understand this.

We all know there's a lot of confusion with regard to what's going on in Somalia these days, ever since the Ethiopians moved into Somalia. From the outside, from the perspective of a news editor, which is essentially my perspective, I knew there was something more to that story than we could see. It was clearly not the mighty Ethiopian military acting alone. Clearly, you would have had a page on Somalia that would have told you these things rather quickly in this system. So that, for instance, somebody at DIA who had that knowledge in the old world would have had that knowledge exclusively while the CIA remained somewhat in the dark about it. The wiki changes that; it puts that knowledge across the intelligence community.

ANDRUS: Right. What it allows is it allows me -- even if I only have just a little snippet of information, I can place it in the wiki and then it's available to everyone else in the community instantaneously. Sometimes, you know, as analysts we don't want to just put out a snippet, we want to sort of think through the issue and get a nice big, well-organized paper before it goes out. But here, if there's already an article on Somalia, I can just stick in my paragraph and that sort of informs the rest of the community.

MORAN: Stephen, I wanted to turn to you now and say this model for an agency like the CIA, or for the intelligence community in general, is one thing, but this is being replicated now across the private sector as well. And I wonder if you could explain a bit about what Enterra is doing.

STEPHEN DEANGELIS: Sure. What my company does is we automate policy. We translate policy rules into software and software code-based applications to enable the information sharing that Calvin's talking about. So as you share information across organizations, whether it be in the commercial sector or in the federal sector or the government sector, you have to have a set of policies that says who gets to see the information and at what time does that person get to see the information, according to what rules.

What our firm does is we automate that policy that allows you to govern how information gets shared across organizations. So one of the challenges with information sharing, as Calvin is talking about, is who gets to see the information. And if a person's state changes -- if they change their job, or if you're a warfighter and you go into battle -- do you get to see more information? Under what rules do you get to see the information?

We're at a little bit of an event horizon now, because we have so much information coming at us. Our ability to process information is increasing, but our systemic means -- our infrastructure policies surrounding how we share that information across organizations -- have yet to be fully fleshed out. So what we try to do is create the governing policy tool to allow that information to be shared in a secure, compliant way across different what we call enclaves, or different areas of expertise.

So, for example, if the Defense Department wants to look at certain information that the National Security Agency has, there has to be a set of rules that say on what set of conditions does that Department of Defense individual get to see the information that the NSA has collected.

MORAN: And of course the gatekeepers always existed before, but there were many of them, they were relatively arbitrary in their judgment, there were rules that they went by, but there was always a human factor.

DEANGELIS: Well, they were very compartmentalized. And Calvin can tell you, you know, if you look at an organization like any intelligence agency, there are different silos and there are different organizations inside the silos, and information typically was contained inside compartments inside a silo inside of an organization. So when you have a top secret clearance, you oftentimes get a clearance to access a compartment because you have a need to access that compartment as part of your job. And what that did is it created a culture where information was contained inside compartments. And that's where you got this, you know, problem that we saw on September 11th where you have a culture that says we're going to maintain security, but we're going to keep it inside a compartment. Yet we need to share the information so we can draw a consolidated view of an individual.

The same thing happens in the commercial sector, where an insurance company may have Calvin listed under the life insurance section, yet he has health coverage, he has other insurance, he has 401(k) plans. And to get a consolidated view of him, they need to extract information across different parts of the organization and see it. Now, the exposure of that data is pursuant to laws, you know, privacy acts like Gramm-Leach-Bliley: under what conditions can individuals see information about a person, and how do you manage that comprehensively?

MORAN: So let me try to bring your two worlds together here. And I'll try not to use the Orwellian tone to say this. But let's say on September 10th there is a flight school wiki in which people all across the country -- FBI, CIA, DIA -- any time "flight schools" happen to come up they have been tending that particular garden adding their perspective.

That, in turn, is connected to some kind of automated policy arm, designed by somebody like you, which would automatically order certain actions based on something, some event that was triggered on that wiki.

DEANGELIS: Right. You can analyze data sources, and then you can use a predictive model -- a model that says: based upon these sets of facts, we think this is happening. And using a certain kind of predictive model, you can say there's a likely occurrence that this set of facts will get us this result. And once you get that, then you share that information with the various organizations that need to take an action based upon that information being gathered from the data that's being collected.

MORAN: Okay. So we get -- we understand, I think, the silo -- the destruction of silos, and I think everybody would universally say that that's a common good, in general. But doesn't this system have similar or at least equivalent drawbacks, in that it might be somewhat inflexible or subject to a rogue editor, for instance, who might politicize the information, or a concerted group of editors who might politicize certain information?

And again, we look at September 11th as an easier, in some ways, example. But if you look at the WMD debate, how might that have played out differently in a world where this system existed?

ANDRUS: So what we're -- what you're asking is comparing sort of the way we do business now, compared to the way we might do business in a wiki world, right? And so the way we do business now is, we get one or several people will draft an article that goes through a number of steps of quality control, which we call review or edit, right? And in theory, if you add six or seven layers of quality control, at the end, you've got really, really good quality, right? We know that just doesn't happen all the time.

So this is a different model, which is, instead of sort of a hierarchical review, you have horizontal review, which is, I put out an idea, and then I have a whole lot of people looking at it, sort of at the peer level, and so it's more like a peer-reviewed -- now, it -- could somebody sort of hijack the tone or the direction of an article? Well, yes, they can, right, because they can edit it.

But there's a -- it's not a failsafe exactly, but there's a mechanism where if I'm really interested in, you know, the Taiwan page, I can subscribe to the page, so that any time a change is made, I get an alert in my inbox, and I can go look and see what kind of changes were being made to that page. And so if there was something egregious, then I could go in and change it, right?

So then you get into these editing wars, right? And if you're in an editing war, then that's sort of a clue maybe we ought to have a conversation face to face about this or something, or have an offline discussion.

MORAN: And this, of course, happens in the public Wikipedia all the time --

ANDRUS: Right.

MORAN: -- where you can look at, for instance, any Israeli-Palestinian issue, and it's a back-and-forth issue over whether it's a diaspora or a group of people who chose voluntarily to leave in 1947. And the way Wikipedia has done it is they've tagged up top that this is an ongoing debate --

ANDRUS: Right.

MORAN: -- and you need to realize that this is not something that has been sort of solved.

ANDRUS: Right.

MORAN: And they --

ANDRUS: Right. So we have adopted some of those practices from the open Wikipedia, which is, if it's a new article, we'll put a little tag saying: This is a developing article. We're not quite sure what the bottom line is yet. We're just sort of exploring that.

So we put these little tags at the top of the articles to alert the reader to whether or not there is a consensus, or whether the point is debated.

And the Wikipedia tries to be an encyclopedia. So they try to sort of come to a consensus about the state of the knowledge on a particular topic.

We're not interested in necessarily being an encyclopedia. So if there is a disagreement between the DIA and us, then it's okay for us to say: given these sets of sources, we believe the situation to be like this. DIA, on the other hand, cites these sets of sources and has a slightly different view. And that's okay with us, to have those sorts of diverging views inside Intellipedia.

MORAN: So would Doug Feith's operation get a separate page as well? They'd have their own footnotes, their -- would it be sourceable, for instance, for who is it that entered --

ANDRUS: Oh, yes. On the open Wiki, people -- well, it used to be you could edit anonymously. I think they're moving to change that. But in our world, everybody edits in true names. We know exactly who made what edit at what time. We know which agency they're from. So we sort of -- if you're a crank, right, or you're, you know, putting graffiti in, it'll become known very quickly, right, and --

MORAN: And the temptation's clearly there, I mean, because people are actually writing about the thing they cover or --

ANDRUS: Right.

MORAN: -- or even about themselves: "Mike Moran, a brilliant young editor" -- two of those things may no longer apply.

ANDRUS: Right. We sort of segregate the biographic information about authors. Often it's another little section, right? So you can do anything you want to your page, but we know it's just you, right.

MORAN: Now, Stephen, you gave me an example at lunchtime, which I found to be both amazing and horrifying at the same time -- it was scary to me a little bit -- in which a policy that's automated, a policy system, in other words, which sees something happening and then automatically orders actions, could be applied to homeland security in some ways.

And could you take us through that -- the whole Barron's unabridged version?

DEANGELIS: Sure. We and Oak Ridge National Laboratory have created a thing called ResilienceNet, which is a "sense, think and act" system for weapons-of-mass-destruction response. Think of it as a multi-array sensor network you could put, say, under a bridge or in a channel, and as a container ship comes underneath it, you could scan the container to see if there's a chemical, nuclear or biological explosive device inside of it. What typically happens then is, if there's an alert to something like that, the way we currently do this is we have disaster plans where people run around and flip through three-ring binders and try to determine what their actions are to mitigate that potential threat.

What we've been working on is an automated "sense, think and act" system that allows you to sense a particular threat, then translate a priori -- ahead of time -- a city's nuclear response plan, chemical response plan and biological response plan into a set of automated instructions that's delivered as mathematical algorithms. You then think that there's this threat, and you then infuse data from other sources that would say, for example, the wind is blowing from the west to the east today instead of from the south to the north; therefore, if we're going to do an evacuation plan, we're going to plot the solution as to where we need to evacuate based upon that, and then instruct the systems of the first responders and second responders to take an automated action based upon that event-driven threat.

So you're able to sense a particular threat, invoke an automated disaster plan that gets recalibrated based upon the specific nature of that threat, and then instruct systems to take actions that are appropriate for systems to take, and then allow humans to make higher-order decisions. In other words, if you want to instruct the Department of Transportation system on Long Island to turn the red lights green to evacuate people out of Long Island, you can do so.

So the notion is to create a next-generation civilian defense infrastructure by automating responses as much as possible, and then allowing humans to check those responses and to take actions themselves that are at a higher order, because the systems are handling as much as they can.

MORAN: So again, I see the two sides of this sword immediately, and I'm sure everybody does as well. On the one hand, in the 29 minutes between the second plane hitting the tower and the third plane hitting the Pentagon, fighters from Otis Air Force Base would have been in the air, rather than asking, "What should we do?"


MORAN: Automatically you've got people with -- armed combat air patrol over the major cities.

On the other hand, you've got the mayor of New York watching Long Island evacuate before you've actually confirmed whether there's actually a weapon on that ship or a large number of radiology machines.

So what -- how do you mitigate that problem?

DEANGELIS: Sure. I mean, well, technically there are a lot of ways of doing it, but I'll give you an example. We've run this at a test bed in Tennessee. So we run tractor-trailers through a weigh station, and we scan the trailers as they come through, and we see radiological signatures. And what happens is you start to identify patterns of radiological signatures based upon a set of frequent scans.

So, for example, in one particular case there was a radiological signature, and the scanner thought that there was something there. And actually, it was that a woman had passed away. She had had radiation therapy before she passed away. She was cremated, and her ashes were in an urn inside a container inside a truck. And the scanner picked it up, and we had to then discern what type of pattern that is. So you're able to actually discern what scans look like, and you can distinguish the types of things that you're looking at. It's rather precise. But what you find is you increase your knowledge base as you do this, and you have humans check the system where appropriate. And you have some things taken automatically where it's appropriate for the system to take them automatically.

We don't suppose that a system is going to act like Big Brother and take care of everything. It's going to instruct systems to take things that are appropriate for a system to take and have humans check other things and have them render a decision based upon that knowledge that's coming up from a comprehensive, consolidated view.

MORAN: So I'm going to ask one more question and then turn it over to the audience.

In all of this, of course, the political leadership to some extent is seeing their authority at least dispersed if not undermined. Take the military -- and this is the first time I came across anything like this, with something called netcentric operations back in the early '90s -- when the military really started thinking about their various units out there as nodes on this great board, where information would be pouring in and they could constantly tell these nodes what to do, nodes being ships or tanks or planes or individual soldiers. Of course, the military, you know, notoriously command-driven and chain-of-command-driven, had to buy into this. And to me, I saw two problems, and I think I talked about this earlier.

One, suddenly you've got the lateral conversation going on between a Navy commander and a Marine Corps captain without a general or admiral, okay, although the general or admiral has access to that conversation. And you can make this analogy out into the civilian world. It could be the secretary of the Treasury and the Secret Service.

The other problem, though, is that at that level, you're likely to have a guy who's not all that comfortable with something like a Wikipedia or an automated policy machine. So -- and I'll put this to you first -- how do we mitigate that problem? Because it seems to me that's a gigantic hurdle before this is going to move from an experiment to a policy-affecting mechanism.

ANDRUS: Right. It is a very different paradigm, and it's unsettling to a lot of folks. You know, we have a number of instances where an issue comes up. And on one issue in particular, all of a sudden, as we were editing this piece during the day, this intelligence officer from the Transportation Security Administration starts chiming in. Never would my agency think to coordinate our work with that person, right? TSA, you know, what do they know? So -- don't quote that one. (Laughter.)

MORAN: Too late.

ANDRUS: Too late. (Laughter.)

TSA analysts are very good. (Laughter.) And we were extremely happy to have this person in on the conversation. But each of the people in that conversation was doing it sort of on their own authority, right? We didn't ask our bosses, can I go down there and talk to this person, or can he come over here? And we developed quite a robust analysis of the situation that was developing, and we had people from about seven different agencies, within the space of maybe an hour and a half, draft this particular piece.

But then the question is: Where does it go from there? Each agency wants, at least in the current system, to be able to release that to the policymakers. And we're sort of in an in-between time when that works, and sometimes what we're going to have to do is take snapshots of the wiki and then put it into the other process. And we don't have that all sorted out yet.

I think about when we put word processing into my agency. Do you remember when we started word processing? Right? How many of you actually knew how to type when word processing came along? Many of you didn't know how to type, right? Some of you did, but some of you didn't, right?

And at least the senior analysts at my agency, right, didn't know how to type. But it wasn't just a matter of learning how to type. There was this sense of professionalism -- I was hired to think, right? And if you want something typed, you hire some minimum-wage person to do the typing.

So there was this -- so when we gave them a word processor, it was sort of an affront to their professionalism that we would think that they should have to type their work, right? So there's this huge disconnect between the way we thought we were working and how we actually came to work, right? And that transition period was about a seven- or eight-year transition period, right?

There were some senior analysts who said, I refuse to type. I will never type my own paper, right? And then, sort of at the end of that transition period, you would see these analysts with these, you know, big stack of yellow legal pad paper running the halls, trying to find a secretary somewhere who would -- because we stopped hiring secretaries.

MORAN: I was the news clerk who had to type Sy Hersh's pieces. (Laughter.)

ANDRUS: Right, right, so you see? So you lived through this.

MORAN: It was when he used to write for The New York Times. He would dictate.

ANDRUS: (Laughs.) So -- and there was a productivity loss in the beginning, and if you remember those early word processors, right, they were pretty bad. Like, you'd type for five hours and then it would crash, and your paper would get lost, right? It was a horrible experience for everyone involved, but did word processing go away? No. You all know how to type now, right? So something happened.

There were some people -- you know, the analysts would say, you know, we're winning the Cold War with paper and pencil. Why are we fixing something that's not broken, right?

So the adoption of word processing had nothing to do with what was going on inside my agency, all right? It had to do with what was happening outside my agency, all right? The world changed, right? We either had to adopt word processing or fail as an agency, right? We adopted word processing as a matter of survival, bureaucratic survival. We never thought of it that way. But if you look back -- if we had gone back to Congress in the mid-'80s and said, you know, this word processing thing -- it's too hard, we lose our stuff, you know, we don't know how to type, you know, just -- you know, we just don't want to do it; you know, we're just going to say no. No word processing at CIA, right? That's ridiculous. I mean, it's laughable, right?

So sort of fast-forward 20 years, right? Now we've got these things called wikis and, gosh, anybody can edit them, and they're not that good, and -- but guess what? The world outside my agency has changed, all right? The White House has accredited bloggers as a part of the White House press corps, right? So the president gets it, right? Does my agency get it, right -- that's the question. So -- (laughter) -- yes, we get it. Yes, we get it. (Laughter.)

So when you think about these sort of -- these technologies that sort of come at us from the outside, you have to think about, is this sort of -- something that's changing the world? Is the world changing around us, and do we need to sort of figure out what it is and get on board? And my argument is that wikis are like word processing in the sense that the world out there outside Langley has changed, and we just sort of need to get on the train and keep going.

MORAN: Now, how do you bring the political class on board with something that not only shares information but actually automates decisions that everyone wants in their hands?

DEANGELIS: Well, you know, we find that the people who are driving this are the generals. So when I talk to General Cartwright, who was at Strategic Command -- he stands up and says, "I have post industrial-age challenges in -- I mean, post information-age challenges in an industrial-age infrastructure to solve those challenges."

MORAN: You -- that's the Strategic Command --


MORAN: -- using an -- how far up the decision chain do they go with automated --

DEANGELIS: Well, the notion is that --

MORAN: Fail-safe -- if anybody saw that movie back in the --

DEANGELIS: "Dr. Strangelove"?

MORAN: Yeah, "Dr. Strangelove." (Laughter.)

DEANGELIS: No, I mean -- but basically, the amount of data that's coming at a -- whether it be a military organization, an intelligence agency is daunting. So we have this -- we have gotten very good at capturing data; the question is, how do we process it? We just can't process it at Langley. We don't have Boeing aircraft hangars of analysts reading data all day long and processing it. You've got to find automated means of doing it.

So what we try to do is take the commander's intent, or take that concept of operations or that policy, and translate it into a system that delivers that policy in the way these people intended. To a large extent, it takes out the variability in how something is delivered within the system. It's calibrated to the commander's intent, or calibrated to the concept of operations for that organization.

MORAN: Okay. Now I'm going to turn it over -- once I get the mike back -- turn it over to you in the audience. And let me remind you, wait for the mike, stand up, announce who you are, please.

And let's fly. Yes, sir.

QUESTIONER: Steve Handelman. I have a question for Mr. Andrus. How do you -- we know the problems that Wikipedia has had commercially.

ANDRUS: Right. Yes.

QUESTIONER: How do you avoid perpetrating bad information? I mean, I can take a lot of errors in Wikipedia, but how many mistakes can you take in the CIA?

And the second part of that question is, at what point is a given paper that dozens or a handful of analysts are working on for different agencies -- at what point is it actionable? I mean, when do you close it off? When do you know that this is right, we are sending it up the chain to policymakers, and who makes that decision?

ANDRUS: Yeah. Those are very, very good, insightful questions, and I'll take a stab at answering them. But let me just say, we don't have -- we haven't answered those questions yet. We're not quite sure on the answers to those, but let me try and answer some of them.

In a wiki sense, if -- when we're dealing in a world of paper and if something gets typed onto a piece of paper, it's sort of finished, right? In a world of bytes where there's no paper, there's no typewriter, it's never finished, right? And so it's a huge sort of psychological paradigm shift to say we've got an article on Cuba that will never be finished until Cuba goes away. And it's just sort of -- so it's always not finished. But, on the other hand, when you publish paper, right, as soon as it goes out the door, it's out of date, right? A new piece of information just came in.

So the wiki has -- so in the paper world it's finished and out of date. In the wiki world it's not finished, but up to date. And it's just a very different paradigm, different way of thinking.

Now, I view -- we haven't done this yet, but I view a point where we can say -- we can sort of say at this point in time, we sort of all agree that this is the right state. And we could even sort of print that off, if we wanted to, or take a PDF and sort of put it off and put a stamp on it and say: This time, this day, you can rely on this information; anything after that, you know, take with a grain of salt. And then maybe a few days later or a few months later you take another time stamp. And we can put a little banner on there that says if you want sort of the official record, look at this date and time. But we haven't done that yet. We don't quite know how to do that.

So the second issue is how do we handle the perpetuation of error. Nature Magazine did a study of the Wikipedia -- some of you may be familiar with this -- where they compared the Wikipedia with the Britannica. And what they noticed was that the Britannica had approximately three errors per article and the Wiki had about four errors per article. But the Wiki articles were longer, so the error rate per word was smaller in the Wiki than in the Britannica. So they're about the same. The only difference between the two really was that the next day the Wiki was updated and had no more errors, or it didn't have those errors, right. But the paper Britannica continued to have those errors.

And so yes, errors creep into both systems. Sort of hierarchically reviewed papers that get published, there are errors in those; there are errors -- and the question is really how fast can we sort of resolve those errors and take care of them. And my argument is in the wiki, you can take care of it faster. But it also means they can be introduced faster. I understand that too.

So -- but, you know, we don't quite know how to handle that yet. That's a very good question.
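The per-word arithmetic behind the Nature comparison is worth making explicit. The roughly three-versus-four errors-per-article figures echo what Andrus cites, but the word counts below are invented purely for illustration of how a longer article can contain more errors yet have a lower error rate:

```python
def errors_per_thousand_words(errors_per_article, words_per_article):
    """Normalize an article's error count by its length."""
    return errors_per_article / words_per_article * 1000

# Assumed article lengths -- illustrative only, not from the study.
britannica = errors_per_thousand_words(3, 1500)  # ~3 errors, 1,500 words
wikipedia  = errors_per_thousand_words(4, 2600)  # ~4 errors, 2,600 words

# More errors in absolute terms, but a lower rate per word.
assert wikipedia < britannica
print(f"Britannica: {britannica:.2f} errors per 1,000 words")
print(f"Wikipedia:  {wikipedia:.2f} errors per 1,000 words")
```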


QUESTIONER: Eben Kaplan, You said that errors can be corrected faster, but they can also be introduced faster. I remember in the last football season the blogosphere was abuzz about a terrorist attack at a football stadium. It turned out to be an erroneous report, but it really gathered a lot of steam, and we all heard about it.

I'm wondering about the potential of your wiki to send people running to the wrong gate.

ANDRUS: Right. I view this as an artifact of the transition between the paper world and the electronic world. In the paper world we sort of -- once the paper is sort of produced, we have some confidence that there's been a lot of scrutiny and a lot of analysis and a lot of editing and a lot of review, and we can sort of rely on that.

In the wiki world it's sort of "let the buyer beware" right? Now, there are -- so if you think of the wiki more like a market, right, and there are bad products out there, and some of you have purchased them, right, how did you prevent that? Well, you didn't, right? You just -- you didn't buy it again, and nobody else bought it, and finally the company went bankrupt. But there's this short time period where it's, you know, "let the buyer beware."

And so that's why we try to put little banners up there that say, you know, this is a developing thing; we're not quite sure, we're not there yet. And I think going back to your comment, we ought to also develop, you know: At this day and time we think there is something solid here.

So yes, that potential exists that something could run off. What we hope is that our consumers will become increasingly better educated about the strengths and weaknesses of the wiki compared to the paper world.

DEANGELIS: The other thing was that there was also a false assumption that whatever was written down was right. So in other words, people had this cultural bias that whatever was written down in the Encyclopedia Britannica was actually right. And I think with wikis it sort of reflects a cultural shift where people are challenging things and saying, you know, is this really right? I think I have this piece of information that could support this assertion. And you find people challenging more things that we used to take as given.

MORAN: So this is more of a canary in the coal mine than a definitive definition of what the issue is.

ANDRUS: Yeah, at least right now. For us the Wiki is still sort of in an experimental stage, the Intellipedia. It has not replaced our normal publication process. It's something we're trying to see how it works; it's gathering some steam. Some of us believe it will eventually sort of take over the major portion, but we're clearly years away from that in my view.

MORAN: Another question? Yes, sir.

QUESTIONER: I'm Eugene Staples.

It seems to me that any really quarrelsome subject cannot really be dealt with with Wikipedia. I read Wikipedia from time to -- I look things up on it, and sometimes they're pretty good. Sometimes they're great. But if you take, say -- somebody mentioned Israel and Palestine.

ANDRUS: Right.

QUESTIONER: If you take Islam -- if you type in any topic about Islam, which has got to be, I don't know, maybe the top foreign policy concern in this country these days, you just get a mish-mash of really, in many cases, unintelligible junk.

Now, I'm not suggesting that you would have a repetition of that in a government-run Wikipedia kind of operation. But since most of what you deal with -- presumably the most important things are going to be the most difficult --

ANDRUS: Right.

QUESTIONER: -- how do you keep personal opinions and prejudices and politics out of this?

You go back and look at the unfortunate history of the intelligence agencies prior to 9/11 and then in the whole runup to the Iraq war, where you had some people who didn't want to play at all or wanted to play only by their own rules. How do you handle that if you have got essentially a free-contributing kind of information operation?

ANDRUS: Yeah, well, that is a difficult issue. And many of the elements of our current process -- lots of sort of quality control, editing and review -- are designed to sort of weed some of those things out.

But the number of people who touch a particular article is just a small number, right? The author and three or four editors and maybe a dozen or so people within an agency that will look at it and coordinate on it. And as you mentioned, that process can also produce error, right? It can also be subject to a reviewer's bias or agenda, especially if that other person controls my promotion, right?

And so we try very hard as part of our value system within the agency to leave those biases behind and be able to challenge those things. And those things also exist in Intellipedia, right? But the correction mechanism is a different correction mechanism. As opposed to having seven or eight people review it, it's sort of open for public gaze, and anybody can come in and challenge. And so it's a different way of striving for error correction, which the theory is over the long run -- right? -- with a lot of people looking at it, it will get better, right? In the short run, when it's -- when an article's brand new, it's just going to have errors and we just -- we don't know how to handle that -- (off mike).

QUESTIONER: Yes, I'm Jim Zirin.

I just wondered, under the old system, an analyst does an assessment -- a paper assessment -- he says that there's a likelihood there are weapons of mass destruction in Iraq.

ANDRUS: Right.

QUESTIONER: Someone else writes an assessment and says it's highly unlikely that there are weapons of mass destruction in Iraq.

ANDRUS: Right.

QUESTIONER: Someone gets both of those assessments.

ANDRUS: Right.

QUESTIONER: And somehow or other, they compete with each other and it has to be resolved. Under the wiki system, don't you kind of have a built-in source of conflict where someone writes there are weapons of mass destruction in Iraq; the second one comes in -- it's just a matter of the timing -- and superimposes an opinion that there are no weapons of mass destruction in Iraq. How do you resolve it? And are both preserved for someone to look at and resolve the difference?

ANDRUS: Let me answer the easy part of your question first. Yes, it's all preserved. Every edit by every person is preserved, and you can go back and look at the -- which person said what with a date/time stamp on it.

So with every wiki article there's an associated discussion page, and so what we're trying to do is develop a tradecraft about how we use it. And it's not there yet, but what we're suggesting people do, if you get into one of these contests of wills or opinions, is that you take it over onto the discussion page and sort of, you know, work it out there. And if it doesn't work out, then you may have to have, you know, two paragraphs, or you may, up at the top, say, "This is, you know, one view of the world. Here's a link to the other view of the world," and cross-link them. Because, as you know, some of these things are just not resolvable now.

Someone's going to ask me in just a second, "But doesn't the policymaker want, you know, the bottom line," and in most cases, yes. In fact, the 9/11 and WMD commission said they would like to know where we disagree, and so this is a way to do that.
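The two mechanisms just described, a full edit history with author and date/time stamps, plus a companion discussion page for working out disputes, can be sketched as follows. This is a minimal illustration under assumed names, not a description of Intellipedia's actual software:

```python
from datetime import datetime, timezone

class WikiPage:
    """An article where every edit is preserved and disputes move
    to an associated discussion page."""

    def __init__(self, title):
        self.title = title
        self.revisions = []   # full history: nothing is ever lost
        self.discussion = []  # companion talk page for contested points

    def edit(self, author, text):
        # Every edit is recorded with who said what, and when.
        self.revisions.append({
            "author": author,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "text": text,
        })

    def current(self):
        return self.revisions[-1]["text"] if self.revisions else ""

    def discuss(self, author, comment):
        self.discussion.append((author, comment))

    def history(self):
        # Who edited, and when -- every version is recoverable.
        return [(r["author"], r["timestamp"]) for r in self.revisions]
```

In the WMD scenario from the question, both competing assessments survive in `history()` even though only the latest edit shows on the page, and the contest of wills moves onto the discussion page rather than into an edit war.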


QUESTIONER: Yes, Hi. I'm Elizabeth Addonizio, and this discussion is very interesting and focused on the contents. And I'm curious, I'm a user in the military intelligence community, and we often joke but only half jokingly that we as users have several sort of passwords and steps to get into the secret and top secret networks and are often facing problems because we have to renew these passwords and accesses to the system, but the system itself often gets compromised. And so I'm wondering if you see -- because we often joke that, you know, others seem to have much easier access to these systems than we do, and we're the users of this -- of these systems.

So I'm wondering, you were focused a lot on the content and sort of pros and cons of this way of maintaining content. I'm wondering if you see this sort of security apparatus around this content keeping pace with this innovative updating concept of maintaining content, because it seems to me with that has to come a sort of different security apparatus around the content that itself is perhaps lagging behind. Then, you know, it may create a whole -- you know, another series of problems.

And so I wonder if you could talk, you know, in addition to the content, about the infrastructure around the content and how you see that kind of keeping pace. And apparently, all the users are known as they, you know, update, but I imagine there's ways to compromise that, and the system goes down and then, you know, we have no content.

ANDRUS: I don't know, Steve, if you want to make a comment and then I can make a comment.

DEANGELIS: That's what I was talking about earlier -- policy management. I mean, the key challenge right now is, as we look at information sharing in a national security environment, and we look at creating means of breaking down enclaves, breaking down stovepipes and having people share information, you have to have a systemic means of dealing with that. The challenge is just what you mentioned: people change jobs, and they have different access as a result of their job changing.

So what we're finding in place right now are technologies like service-oriented architectures and multilayered security and dynamic policy management as technical means of enabling this information sharing. But the infrastructure is really, in my opinion, the difficult part of this: how do you actually ensure the security of the data? Because people are loath to share their information unless they can have a level of confidence that there's a minimum level of maturity there to share that information.

So what we're trying to do in this particular space is put in place that infrastructural means of sharing information, and it's a combination of both, you know, computer-based technologies and means of enabling policy in a way that is rapidly adaptable to a changing environment. And that's the challenge: you mentioned earlier static versus dynamic, and what you find is that the way we've done things in the paper environment was very static, but the threat environment for us is very dynamic. So you need a dynamic means of making your organization resilient to change in the new marketplace.

So what you're seeing right now -- to answer your question -- is huge investments in the technical means of ensuring security and sharing of data that will enable the policy of sharing information that we all want, but we're working very hard on the technical means to do that.
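The contrast DeAngelis draws between static and dynamic control can be sketched simply: instead of a fixed access list, policy rules are evaluated at request time against a user's current attributes, so a job change immediately changes what that person can see. The attribute names and rules below are invented for illustration:

```python
def can_access(user, resource, policies):
    """Evaluate every policy rule against the user's current attributes."""
    return all(rule(user, resource) for rule in policies)

# Policies are live rules, not a frozen list of names.
policies = [
    lambda u, r: u["clearance"] >= r["classification"],
    lambda u, r: r["compartment"] in u["compartments"],
]

analyst = {"clearance": 3, "compartments": {"NIGERIA"}}
report = {"classification": 3, "compartment": "NIGERIA"}

assert can_access(analyst, report, policies)

# A job change updates attributes, and access follows immediately --
# no one has to remember to edit an access list.
analyst["compartments"] = {"CUBA"}
assert not can_access(analyst, report, policies)
```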

ANDRUS: You're absolutely right in the sense that if you think of word processing as sort of a revolution in the way we produce stuff, introducing wikis is a similar thing, and we had these same issues. We sort of knew, when a secretary was typing and you had carbon paper, right, and all that stuff, you knew how to control that; and when you had word processing, and you could print off 15 copies, right, or you had a copy machine, that introduced -- we had to introduce new ways of controlling the information.

And so we're once again going through that now. These wikis were developed on the open Internet, where they're not so concerned about security, right? We want to adopt these things. And our security infrastructure didn't know this was coming, couldn't have prepared for it in advance, and so has to sort of keep up. And so we're working those issues out at the same time that we're just sort of moving ahead, because this is sort of a whole new world for us.

DEANGELIS: But as the key enabler because without that, no one's willing to share their information.

ANDRUS: So let me -- so one way that we're working it, we have sort of a brute-force method, which is: if I have a piece of intelligence that's not approved for the general-purpose top-secret wiki, I can put that in some controlled space, and then I can have a link from the wiki to the controlled space, and those people that have access to the controlled space will get in; everybody else will get stopped, right? But at least they know something is there, right? But that's sort of brute force; it's not as elegant as what he's describing. But we need to sort of get to where he's going.
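The brute-force approach just described can be sketched as a gated link: everyone browsing the general wiki can see that additional material exists, but only cleared readers can follow the link into the controlled space. The names and data here are purely illustrative:

```python
# A hypothetical controlled space: documents with explicit reader lists.
CONTROLLED_SPACE = {
    "doc-42": {"readers": {"alice", "bob"}, "text": "sensitive detail"},
}

def follow_link(user, doc_id):
    """Resolve a wiki link into the controlled space for a given reader."""
    doc = CONTROLLED_SPACE[doc_id]
    if user in doc["readers"]:
        return doc["text"]
    # Uncleared readers are stopped -- but, as Andrus notes, they still
    # learn that something is there.
    return "[Access denied: additional reporting exists in a controlled space]"
```

This is the inelegant version: the existence of the material leaks by design, and each controlled document carries its own hand-maintained reader list rather than the policy-driven evaluation described earlier.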

MORAN: Yes, sir.

QUESTIONER: So concerning -- my name is Paul Richards.

Concerning the inputs, Dr. Andrus, you said in your earlier remarks that, concerning cranks and graffiti, we know who they are. That's very interesting to me, because who makes that determination? Obviously, with most apparent cranks, I would think, there's a consensus, and it turns out that they are true cranks. But a minority of cranks are the true innovators -- indeed, the true innovators, by definition, in many contexts, are the crazy ones.


QUESTIONER: And is there a -- (chuckles) -- paradox here that yourself, a senior member of the Center for Mission Innovation --

ANDRUS: Right.

QUESTIONER: -- could quite possibly be presenting more hurdles for the true innovators by this procedure that excludes cranks? (Laughter.)

ANDRUS: (Laughs.) Do you want me to just commit suicide right now? (Laughter.)

Yeah, there is a little bit of that paradox, that when you have a wiki environment where everybody is there, you have this, you know, regression toward the mean, right? You have sort of -- it can be a groupthink environment. And so if somebody is a little cranky or a little edgy or sort of has these far-out ideas, they get squashed in the mass, right? So the Wiki is probably not a good place to encourage cranks, right?

So there's a companion technology which we call the blogs, right? So blogs are generally single-authored, right? And so, what you can do is you allow people to have blogs. So the cranky people can sort of say their cranky things in the blogs, right -- it doesn't contaminate the Wiki -- but what they can do is they can put a footnote and say, "There's another way of looking at this; see my blog entry." Right? So their voice can still be heard, even if it's not within the Wiki that has a tendency to sort of go toward the center of gravity.

MORAN: But -- and one other thing that might mitigate this -- maybe this is exactly what you were talking about -- is there a built-in structure for dissent from the mainstream? I mean that's -- because it's -- so many policy decisions over the years seem to have lacked that.

ANDRUS: Right. There's no explicit mechanism for dissent. The only mechanism is that I can edit what you've just put in, right, and then somebody else can edit me. And so -- but we don't have any formal sort of rules about that; it just is sort of happening.


QUESTIONER: Karen Monaghan, CFR.

Often, it's the senior leadership that kind of puts a stamp on whether something is valid or not. So I wonder if either of you can give me an example of either a senior policymaker who said, "I'd really like to see what the Wikipedia debate or the Intellipedia debate is on upcoming elections in Nigeria." Or a senior official at a corporate institution who said, you know, "I understand we have a policy mechanism that's sharing information on some issue, and I really want to know what that is."

Can you -- do you have real-world examples where, oftentimes, the mass -- the mobs -- are very actively engaged but it really doesn't go anywhere, or where a debate over a certain policy or issue has come to the fore because there has been a debate in the Intellipedia realm?

MR. : Want to go first?

ANDRUS: Yeah, give me the hard one. Okay. I can't think of one right now. Like I say, we started about a year ago and there were just a few of us on there, and it's sort of a grass-roots thing. People have normal day jobs and they sort of work on this when they can, and it's not as robust as it will be five years from now. So I can't think of an example right now where that's happened. I'm hoping.

MORAN: Can I just pose a kind of real-world question? Let's say this year's presidential State of the Union address included an assertion that Iran was seeking yellow-cake uranium in Niger. That clearly would not -- that might be directly contradicted by the wiki entry on that topic. How would that conflict resolve itself?

ANDRUS: In today's world? In today's world, it would have to go back through the normal established channels. Somebody might say, "Ah, here's a clue that I saw in the wiki; let me go back through the normal way we do things." So in today's world it would only be sort of clues or hints that you'd have to take back through the sort of normal channels.

MORAN: But would there be a backchannel that says, "Change that wiki"?


MORAN: G.W. Bush signing in. (Laughter.)

ANDRUS: It would be pretty hard to just change it on -- we're a pretty independent bunch inside. And we resist politicization, and it's one of our core values.

DEANGELIS: I have a very pedestrian example of how it works. I have a cell phone that didn't update properly for Daylight Savings Time. We called the phone company, couldn't get it resolved. We called the hardware manufacturer, couldn't get it resolved. I told my staff to put a request into our wiki about that, and within a half-an-hour, we had from 10 different sources the correct way of solving the problem that we couldn't get from the phone company or the hardware manufacturer. And it was a very interesting thing. And we're not open to the world, we just have people that we collaborate with, who sent us a response very, very quickly.

We use it as -- the way we use the wiki in a commercial entity is we use it to bubble up the best ideas on a particular topic. So I will say to the staff, "I want to solve this particular problem," and then what happens is you get this huge community of interest that is connected to your business, and people start collaborating because technologists, in my world, are inherently challenged by problems. So the way to keep technologists happy is challenge them with very interesting problems. So you put out a problem and then you get people bubbling up information and ideas very, very rapidly.

So we use it as a way to bubble up ideas. And the cranks in our particular world get cut out because it's really a meritocracy as to who gets acknowledged. And people are seeking acknowledgement by coming up with the best idea as quickly as possible.

MORAN: I mean, in many ways this has existed for years. I used to work at Radio Free Europe in the early '90s, and we had a thing called the logbook, basically developments that would be relevant to those taking on the next shift. For instance, "Gorbachev is being held in his dacha on the Black Sea coast" would be cut out, literally -- typed up, cut out and pasted in there, and the next shift would come on and read it. And this was in essence the non-virtual wiki. And then you'd see that someone had a baby, too. (Laughter.)

MR. : Right.

MORAN: But I mean, it gave you relevant information, and we actually do something like that within my own newsroom.

ANDRUS: So we -- I was going to say if, again -- if there was a presidential speech where some assertion was made, we wouldn't necessarily keep it out. I think somebody would put it in, say, "Here is an assertion that was made in the presidential speech. What do we think of that?" Right? And then the sort of debate would ensue, and that's how we would handle it, as opposed to just change the bottom line because it was in a presidential speech.

MORAN: We have time for one more question, and you have been waving your hand for quite a while, sir.

QUESTIONER: This is for Dr. Andrus. Burton Gerber's my name. I realize that you're talking about a system that is being developed or going through testing of various sorts.

ANDRUS: Right.

QUESTIONER: But one thing you said put me back into the world that I thought we were trying to get away from, and that is on information-sharing, because you talked about what comes back down to basically ORCON, that someone putting something into a Wikipedia says, "Oh, gosh, there's some people reading this who aren't authorized to read at whatever this level is, so I'm going to put it in" -- and there's a little subset over here, and evidently people know that there is a subset but don't know what's in it.

ANDRUS: Right.

QUESTIONER: So then we're back to the same problem that everyone's been talking about: that there has to be more, not less, information-sharing.

So how do you handle that? Because ORCON can be a very important point, but it also conflicts with information-sharing.

ANDRUS: Yeah. Yeah. Yeah. We think we're doing good just to get all of the intelligence agencies talking about stuff we can talk about, right? You raised sort of this other -- yes, I think we want to get there somehow. I don't know that we know how to get there. We really do -- ORCON means "originator controlled," meaning if I'm the producer of some piece of information, then if you're going to use it beyond what I've given you information for, you have to come back and get my permission, right, because I control the dissemination of that information.

So -- and originators control their information for good reason. They have some collection mechanism that they're trying to protect, right, so it continues to produce information. And if that -- collection mechanisms get compromised by having information distributed too widely, then we lose our sources.

So there's this natural tension that we've struggled with since -- probably since before the creation of the agency. And we're trying to do -- we're trying to take pride in the little successes we're having, knowing that there's this big problem out there that isn't -- we don't have a good solution for yet. And maybe he has the solution to it. (Laughter.)

DEANGELIS: It's important to realize that the wiki's only one piece of the things that the intelligence community is using. There's the establishment of the National Intelligence Library. That may or may not get established, but that will be more of an interim controlled step, where we look at the policy that surrounds who should get access to information, then we automate the policy, so we allow for real-time sharing of information according to rules that we've established. So it's sort of an in-between step. It's not, you know, sort of a "Wild World of the Wiki," but it's an in-between step where things get checked in that people are able to access, graded by the level of access and the way that the intelligence is being used. Did it result in something good happening? And so there's a whole bunch of policies surrounding an infrastructure being built to make that happen right now.

MORAN: So the writings of both Mr. DeAngelis and Dr. Andrus are on Just search under their names; you'll see them -- and add to my click count. (Laughter.)

And I'd like to thank both of you for being here and for illuminating the situation so much. (Applause.)

