Saheli has some thoughts on her website about what she would like to see in Recovery 2.0. I’d like to recap some of the overall attributes I’d like to see as well, without trying to get into too many specifics.
Centralized vs Decentralized: It should not be a centralized system but instead a decentralized one. Centralized systems can easily get overloaded and collapse. Decentralized systems, however, can distribute the load so that if one area overloads, other areas can easily take over the work.
Rigid vs Flexible: It should not be a rigid, structured system but instead a flexible, loose one that can restructure itself on the fly. It is not so much about building a perfect system as about building one that is flexible and scalable enough to adapt to different situations.
Controlled vs Autonomous: There should be no command-and-control center directing the operations of the people. Instead, people make their own decisions based upon the situational awareness information that everyone relays to the system. A decentralized approach where everyone acts as they see fit means elements can react to situations faster, without waiting for a central command to relay decisions. A “Shared Mental Model” (read about situational awareness) is critical to this approach, though, which means everyone has to be on the same page with regard to how the system works and how best to deal with different situations. A lot of this is achieved through proactive training BEFORE these disasters occur (which means, yes, we should be doing mock disaster tests with this system to see how it works, but more importantly to see how people react). A perfect way to test the system, though, would be using it for a real situation that is not disaster-related, since if the system is flexible enough, it should be usable for any collective large-scale effort (which is why, for me, the Recovery 2.0 project is really just a subset of my greater Connected Communities project).
Small vs Big: The fundamental concept of the system should be small groups working on a small local scale instead of massive groups working at a global scale. By focusing just on its own sphere of control and abilities, each element of the system is not overloaded by the entire collective effort but is instead able to focus on just its immediate efforts. The end result is that these independent local actions, connected with each other, create a collective swarm effort that accomplishes much larger goals than any group ever could on its own.
Again, a lot of the thinking above mirrors how the Internet works. In fact, if you start looking at some of the more successful applications used on the Internet right now (e.g. BitTorrent) and how they work (i.e. small pieces working collectively in a swarm to achieve a massive collective effort), then you’ll see which direction I’m going in. Another great source on how the Internet works is Chapter Five of The Cluetrain Manifesto, entitled “The Hyperlinked Organization.”
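To make the swarm analogy a bit more concrete, here is a minimal sketch of the idea: a large job is split into small pieces, and each "peer" (think: small local group) contributes only the pieces it can handle, yet the swarm collectively covers the whole. The peer names and piece counts are purely illustrative, and this is of course nothing like a real BitTorrent implementation.

```python
# Illustrative sketch of the swarm idea: no single peer does all the work,
# but together the peers cover every piece of a larger job.
# All names here are hypothetical.

def assemble_from_swarm(num_pieces, peers):
    """peers: dict mapping peer name -> set of piece indices it can supply."""
    assembled = {}
    for piece in range(num_pieces):
        # Take each piece from any peer that has it, spreading out the load.
        for name, pieces in peers.items():
            if piece in pieces:
                assembled[piece] = name
                break
    missing = set(range(num_pieces)) - set(assembled)
    return assembled, missing

peers = {
    "peer-a": {0, 1, 2},
    "peer-b": {2, 3, 4},
    "peer-c": {4, 5},
}
assembled, missing = assemble_from_swarm(6, peers)
print(missing)  # -> set(), i.e. the swarm covers every piece
```

Note that no single peer holds all six pieces, yet nothing is missing; that is the "small pieces, big collective effort" property in miniature.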
2 replies on “Recovery 2.0 System Attributes”
Sorry to take so long to get to this.
I am only a bit skeptical about centralized vs. decentralized. This is because the disaster preparation/response community has been using the incident command system for a long time and puts a lot of stock in it. The failure during Katrina was not a failure of centralized command as a principle but of its implementation. Knocking an otherwise tried-and-tested model is a bit worrisome to me. I put a lot of stock in giving the experts on the ground their due, when they are actually experts.
Everything else you’ve said looks great to me though.
Hmmm, trying to figure out how I can best word this. You’re right that a lot of existing centralized systems in our world today work amazingly well. So if it sounds like I’m knocking them, I’m not. Instead, I’m looking at the future of collaboration and thinking about millions upon millions of people all collaborating at the same time on one goal. Any centralized system of that nature today would probably overload and become unusable. It would be like having no decentralized Internet but instead a single phone number to dial into a master mainframe so that everyone could communicate in unison. That mainframe would have to be amazingly powerful and complex to handle that many people.
That is why the Internet is capable of dealing with so many people: because it is decentralized, with each person connecting at one local point, distributing the load across the entire system. Therefore, all I’m saying is that if we want a system that can literally handle the entire world collaborating upon it at once, it would have to be decentralized just as the Internet itself is.
But you know what, if we are looking at helping people today then we need to use the technology of today as best we can, and that includes using centralized systems. I actually mentioned this in my post about Decentralized Emergency Load Sharing, which uses a few centralized systems to distribute this heavy load at least a little bit. In a nutshell, just imagine three or four steps in deciding what disaster information you are looking for or submitting, with each of those steps on a separate site instead of one site. That way one site could handle submitting and searching for missing persons while another site could handle emergency rescue information. It kind of avoids the “all of your eggs in one basket” situation but still uses the centralized-site approach.
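A rough sketch of what that category-per-site split might look like, purely as an illustration: each kind of disaster information routes to its own site, so no single site carries the whole load. The site URLs and category names below are hypothetical placeholders, not real services.

```python
# Hypothetical sketch of "load sharing across several centralized sites":
# each category of disaster information is handled by a different site.
# All URLs and category names are made up for illustration.

SITES = {
    "missing-persons": "https://missing.example.org",
    "rescue-info": "https://rescue.example.org",
    "shelter-info": "https://shelter.example.org",
}

def route_request(category):
    """Return which site should handle a given submission or search."""
    site = SITES.get(category)
    if site is None:
        raise ValueError(f"No site handles category: {category!r}")
    return site

print(route_request("missing-persons"))  # -> https://missing.example.org
```

The point of the sketch is simply that the routing step is trivial while the benefit is real: if the missing-persons site goes down under load, the rescue and shelter sites keep working, which is the "not all eggs in one basket" property described above.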