With Wikistrat’s International Grand Strategy Competition now complete, I wanted to take this opportunity to sum up what I think unfolded over the month-long contest. As head judge, I am uniquely suited for the task, because I’m fairly certain that I’m the one person who perused every line of every entry of every team every week. To give you some sense of that effort: roughly 30 teams cranked out, on average, 7,500-8,000 words per week. That’s close to a million words in all!
We're not going to pretend that every word entered was golden. The purpose of the competition was to elicit ideas in aggregate - not to collectively produce the one "perfect" document (i.e., the bureaucratic approach). In mass harvesting exercises such as these - no matter the level of expertise involved - there is a certain amount of chaff. By design, participants are put through a variety of methodological paces that force them to winnow their ideas down to their essentials. So while the journey matters plenty, it's the destination that we collectively seek: those nuggets of strategic insight that arise from the focused and repetitive interplay of so many minds tackling the same subjects from a variety of angles. For it is amidst that maelstrom of intellectual activity that a variety of competing perspectives are collaboratively blended into foreign policy visions worthy of the label "grand strategy."
We achieved that goal in spades, meaning there was more than enough “wheat” to be found throughout the entries, which got better and better with each passing week. Besides telling us that collaborative competition works, the continuous uptick in performance also proved the validity of the “massively multiplayer consultancy” model, which is what we believe Wikistrat can offer as a result of its ongoing efforts to build an online community of strategists from across the globe – the Facebook-meets-Wikipedia dynamic.
If, for example, you’re a client interested in having dozens/hundreds/thousands of strategic thinkers chase down a problem, query or brainstorm for you, then Wikistrat can mobilize them en masse for an X-week-long collaborative competition not all that unlike what we just did in this contest. We’d simply tailor the parameters and the participants. But the key thing is, your desired effort would now involve a true “wisdom of the crowd” dynamic, with the crowd in this instance being a vetted group of strategic thinkers collaboratively competing to come up with the best answer.
Why we think that’s a better route: In today’s complex world, we’re certain that companies and governments will benefit deeply from such intellectual exposure. No, we don’t think this completely replaces in-house studies or working with contractors, because there will always be those needed deep dives on specific issues. Plus you simply can’t outsource your strategic thought processes in every instance. But there will also increasingly be the need to tap into far wider pools of thinking, or ones that explore issues more “horizontally” (i.e., plumbing the cross-domain connections) than “vertically” – especially when you’re talking strategic planning on an international scale. In a black-swan world, you can never ask too many “what if” questions, or have too many bright minds coming up with possible answers.
We also think the competition proved itself as a useful method for attracting and identifying talent within our burgeoning online community of strategists. There are literally thousands upon thousands of professional, apprentice (like our grad students) and avocational strategists out there on the Web generating useful analysis, and, in a disconnected sense, you could say they’re all collaboratively competing for our attention with their blogs, sites, etc.
But when Wikistrat pulls them into an online venue explicitly designed to foster that collaborative competition directly, we turbo-charge the dynamic by concentrating it to an unprecedented degree. The International Grand Strategy Competition was a brilliant demonstration of that potential: no established “stars” among the 200-plus individual team members, and yet collectively they pushed each other to generate a constant flow of innovative strategic ideas. And the longer the competition went on, the higher the flow and quality of those ideas – innovation feeding off innovation.
Frankly, it got hard to grade it all, because on an individual basis, everything started trending up toward “A’s.” But just as the competition was designed and encouraged, the collective grading came down to a rank ordering, meaning there could be only one #1, one #2, etc., with regard to every assigned task. And no, the same few teams didn’t win each ranking: ten different teams claimed the sixteen #1 rankings, with only three teams scoring multiple wins.
But the best part was this: whenever the top effort was so identified, you could readily see its impact on the next week’s play, as other teams started copying the techniques, reach of vision, etc., that earned that one team such high recognition previously. That meant the “bar” rose rapidly throughout the competition, with the most ambitious teams clearly seeking to out-innovate the established leaders, which is why there was so much movement in the overall rankings week by week. You can see the proof on the team entries: the longer the competition went on, the more the most innovative teams had their ideas cited by others, because to not do so risked being left behind in the expanding dialogue.
To say that it was exciting to witness is an understatement, and let me tell you why: I taught an experimental course at the Naval War College in 2003, while I was writing The Pentagon’s New Map. In the elective, which attracted an unusual percentage of that class’s top students, I taught the officers how to generate competing scenarios using an X-Y axis approach (two questions yielding four boxes). To be honest, going into the class I had no idea if you could develop the skill, even as I knew it was easy to teach the procedure. But we kept at it, week after week, just repeating the effort on new subjects, levels of analysis and regions. At first, the generated scenarios were just awful, and I spent a lot of time offering constructive criticism. But over time they got better, and once they did, the students’ confidence rose and – sure enough – in the last few weeks of class they performed most of the critiques themselves in a peer-to-peer fashion, while I merely pushed them toward more elaborately scaled efforts.
Well, I witnessed the same dynamic unfolding in the competition, and it was a thing of beauty: the more teams and analysts became aware of each other’s work, the more they effectively critiqued it – in a peer-to-peer fashion – by co-opting some aspect and expanding it further in their own efforts. And with the expanding complexity built into the competition’s design, I – the head judge – found myself “stealing” what I could for Time Battleland posts and my World Politics Review column.
As they say, talent imitates but genius steals! ;<)
The teams themselves can track their own progress by the number of eye-popping interjections I left in my grading notes (later distributed to the competitors). Simply put, the “wow’s” began piling up exponentially with each passing week, and – just like in my experimental War College class – my initial feelings of despair (“Maybe you just can’t teach this stuff?”) invariably gave way to serious respect for what the next generation has to offer.
It is my sincere hope that those competitors who were energized by the competition will seek to maintain an affiliation with Wikistrat, because as we move forward with our massively multiplayer consultancy, we think we’ll be able to offer them the kick-ass combination of an online community where they can “sharpen the blade” while simultaneously selling their best ideas – collaboratively and competitively – to a host of global corporations and government agencies eager to explore globalization, in all its current and future complexity, for strategic planning purposes.
Thomas P.M. Barnett