
Evaluation as Governance:
The Practical Politics of Reviewing, Rating and Ranking on the Web

There is hardly anything these days that is not being evaluated on the web. Books, dishwashers, lawyers, ex-boyfriends, doctors, haircuts and websites are just some examples targeted by novel review, rating and ranking schemes. Used in an increasing number of areas, these schemes tend to be conceptualised as techno-scientific solutions to public problems. By soliciting and aggregating feedback and distributing it as comments, lists, ratings and stories, they are thought to make hidden qualities transparent, hold people to account and foster participation. At the same time, the rapid proliferation of evaluative practice and its far-ranging implications have raised a number of concerns.

In this project, I take a closer look at the mundane, everyday practices that go into establishing, maintaining, and (at times) disrupting these schemes, and at the ways in which they reconfigure networks of governance and accountability. Building on recent arguments in Science and Technology Studies (STS), governance theory, and ethnomethodology, I develop a relational view of governance and accountability as a contingent and situated accomplishment. The project thus aims to recast the current debate about online reviews, ratings and rankings and to offer an alternative outlook by "respecifying" (Garfinkel, 1991) evaluation as governance. Who, which, or what is governing what, which, or whom, and what are the implications? How can a better understanding of evaluative practice contribute to theoretical and policy debates about transparency, accountability and "democratic" participation? And how can all this be studied in view of vastly distributed information systems?

Empirically, I explore these questions ethnographically by following reviews and ratings in two different settings: web-based patient feedback, a "small data" setting, in which evaluations are thought to be grounded in individual experience of care; and search engine optimization (SEO), a "big data" setting, which is characterized by automated algorithmic ordering. The project shows that attending to evaluative practice—whether through counting or accounting—allows us to appreciate the importance and mechanics of non-obvious modes of governance, including the moderation of "experience", strategies of dealing with moments of wonder (when you realize it could be otherwise), the un-knowability of relevance accreditation, as well as different ways of "ethicising" participation in evaluative practice.

The project is generously supported by a DAAD Doctoral Scholarship and a PGP Corporation Scholarship.

Download: email me for a copy!

Follow-up projects

One outcome of this research was the ESRC-funded How's My Feedback? project, a collaborative design initiative to rethink and evaluate web-based review and rating schemes.

Related publications

Related talks and presentations

Updated: January 25, 2013
