Beyond Brute Force: Unexpected Lessons from Crowdsourcing Satellite Imagery Analysis for UNHCR in Somalia
The recent SBTF effort to identify IDP shelters in Somalia for the UNHCR has been notable for several reasons beyond the fantastic work by our very, very hard-working volunteers, some of whom may now need an eye exam and glasses… I feel that what I’m seeing is an inflection point in the development of crisis mapping (or indeed any form of “live” or “crowdsourced” mapping). It’s the point at which we move beyond the “brute force” method of chopping large tasks into little pieces and disseminating them among a distributed human network, and begin reaping the rewards of the process itself as a collaborative space for learning and outreach. For me, this has been the most unexpected dimension of the project so far, and I wanted to share my thoughts here for feedback.
I am always skeptical of crowdsourced data or, indeed, any data. As a geographer and remote sensor whose focus is enumerating displaced populations, I have to be: skepticism is part of my job. All data contain error, so it is best to acknowledge it and decide what that error means. There is still a lot of uncertainty around these types of volunteered geographic information, specifically questions over the positional accuracy, precision, and validity of these data, among a wide variety of other issues. These quantitative issues are important because the general assumption is that these data will be operationalized somehow; it is, therefore, imperative that they add value to already confusing situations if this enterprise is to be taken seriously in an operational sense. The good news is that research so far shows that these “asserted” data are not, a priori, necessarily any worse than “authoritative” data, and can be quite good thanks to the greater number of individuals available to correct error.
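The error-correcting effect of redundancy can be sketched as a simple majority vote over independent volunteer tags for the same image tile. This is only an illustrative toy, not the actual Tomnod or SBTF aggregation method; the labels, threshold, and function are all hypothetical.

```python
from collections import Counter

def majority_label(tags, threshold=0.5):
    """Aggregate redundant volunteer tags for one image tile.

    tags: labels assigned independently by volunteers
          (hypothetical categories, e.g. "shelter" / "no_shelter").
    Returns the consensus label if its share of votes exceeds
    `threshold`, otherwise None (tile left for expert review).
    """
    if not tags:
        return None
    label, votes = Counter(tags).most_common(1)[0]
    return label if votes / len(tags) > threshold else None

# Five volunteers tag the same tile; one disagrees — the
# majority still recovers the likely label.
print(majority_label(["shelter"] * 4 + ["no_shelter"]))  # shelter
# A 50/50 split stays unresolved rather than guessed.
print(majority_label(["shelter", "no_shelter"]))  # None
```

The point of the sketch is that a single volunteer’s slip is washed out once several people examine the same tile, while genuinely ambiguous tiles surface as disagreement instead of silently entering the dataset.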
It was with this thinking in mind that I joined the current SBTF effort, and I very much appreciate the willingness of our great colleagues at Tomnod, DigitalGlobe, JRC, and UNHCR to treat this as an experiment to see how well a very large amount of very specific imagery analysis could be performed with crowdsourcing. We are beginning to analyze the data now and will likely be doing so for the next month or so. What has been surprising, however, has been a few new twists along the way that I feel are probably lost in the exclusively quantitative concerns that so many (myself included) focus on.
- There is huge potential here for stakeholder engagement and broadening your outreach: In a time of plummeting budgets, building a constituency for what your organization does is paramount. Efforts such as this give the public a chance to get engaged, to take part in your mission in a fairly easy way. Speaking about the involvement of students from her Air Photo classes at the University of Georgia, Dr. Marguerite Madden said that the engagement “raised awareness of this grave situation and many [students] got online to find out more information about why this is happening and what is being done to help…” Today there are almost 200 more people who are familiar with the UNHCR and its mission in Somalia than there were two weeks ago. That’s one heck of an ancillary benefit, especially considering that the vast majority of the volunteers are students with the energy and the desire to contribute to a project such as this. Which brings me to the second point…
- The collaboration may be as important as the data: I have been consistently (and pleasantly) surprised by the rich discussion among the volunteers about virtually every aspect of this project. We specifically set out to include the academic community and, especially, the remote sensing community by engaging with the student chapters of the American Society for Photogrammetry and Remote Sensing (ASPRS), given their higher level of familiarity with imagery analysis. Columbia University’s New Media Taskforce and the University of Wisconsin–Madison’s Department of Geography were major contributors, and the geography departments at both George Mason University and the University of Georgia hosted mapping events (tip of the hat to Lawrence Jefferson and Chris Strother for making those happen). As a result, we created a very rich environment for exchange and learning. Dr. Madden jumped at the chance to use her class as an opportunity to introduce unorthodox platforms for imagery analysis to her students, and everyone benefited. They learned how crowdsourcing for imagery analysis could work in a live environment, and we got tons of good feedback on everything from our rule-set to the platform from her very bright students. It’s this meeting of “professional science” and “citizen science” that helps foster new developments in how we approach these emerging practices.
- It’s not always about fixes, it’s about concepts: While I believe that Linus’s Law is a powerful argument for crowdsourcing, it’s important to note that it applies not only to technical bugs but to conceptual ones. Part of remote sensing involves creating a rule-set to aggregate features on the surface of the Earth into meaningful classes that allow you to say something about how the world is or works. While there are robust, scientific ways to go about this, it is worth emphasizing that every classification scheme (in any science) is situated within a specific context and point of view. It’s entirely possible to have a very well thought out classification scheme that has little to do with the lived reality on the ground. It was with profound humility that I read the very insightful questions posted to our working Google Doc by volunteers, some of whom had zero experience with remote sensing and yet had very perceptive insights regarding the assumptions made by our classification scheme. Far from being just a steady workforce placing dots on a map, the volunteers really put their thinking caps on, getting under the hood of both the technical and the conceptual aspects of the effort. It was precisely their perspective as non-experts that gave them the ability to see things in a new way.
We remain committed to a critical analysis of the substantive contribution of our effort to UNHCR operations, but the sense of community in our dedicated channels of communication, which allowed for such vibrant discussion, should also be understood as valuable. While the operational use of projects like this cannot go unexamined, it bears repeating that these types of projects offer much more than an additional set of data: they present a unique forum and opportunity for creative collaboration, engagement, and learning.
By keeping these thoughts in mind, we can begin to move beyond the “brute force” period in crisis mapping, in which complex and generally machine-driven functions are simply distributed to a human network, and instead expand the meaning of geographic data in these new spaces of engagement.
Many thanks to all who have participated in the project thus far. As always, you are fantastic teachers.