The Danger of Aggregate Displays in Social Software
Where is the ethical line drawn when designing interfaces that show popularity?
One of the most important results of people interacting socially online is that we can measure the effect of social influence. A ground-breaking study by Columbia professor Duncan Watts showed how this could be done. (I wrote it up in How Aggregate Displays Change User Behavior.) This is one of the most important studies I’ve seen…it clearly shows a relationship between people’s actions and the aggregate information that’s shown to them in the interface.
For those not familiar with Watts’ study, it showed that when we are faced with an interface showing what other people did, we are definitely influenced by that behavior. If we are shown a list of the most downloaded songs, as in the study, we cannot help but give extra weight to the songs that have already been downloaded the most. We’ll be more likely to download those songs ourselves. This echoes countless studies from social psychology that show how we are affected by our environment.
Dangerous Territory
But one dangerous effect of aggregate displays might not be apparent at first. Once we realize that our displays are affecting people, the next question becomes: which aggregate displays do we show, and when? This is a question a lot of design teams are grappling with as they build out their social software.
But taking it even further, we get into ethical territory. This was made plain to me by a question someone asked the other day while we were discussing Watts’ study. They asked: “If people respond to aggregate displays, and change their behavior accordingly as they did in Watts’ study, aren’t those people also in a position to be manipulated?”
Well, my first thought was, “You rascal. You clever rascal. We’re just beginning to get a handle on how aggregate displays affect behavior and you want to exploit it already!” But my answer had to be: “Yes, people can be manipulated if the data being aggregated isn’t accurate or valid.”
In subsequent talks I’ve had with people, it has become clear to me that many industries not only manipulate aggregate numbers, but rely on them to drive business. In the music industry, for example, this is common practice. Labels choose which artists to promote, and suddenly they’re everywhere, their pseudo-popularity created in order to generate actual popularity. This is how we get the Britney Spearses of the world. She’s a talented singer, of course, but she’s not that talented. While she might be in the top 1% of the population when it comes to singers, her talent is not proportional to the marketing and advertising budgets and revenue she generates, which are in the top .0001%. (FYI: I made these figures up.) Britney is a popular phenomenon, but it’s impossible to tell how much of her popularity is actual fan support and how much is artificially generated.
So there’s a distinction we can make between actual popularity (which has social influence) and artificial popularity (which also has social influence). The first is driven by actually being a fan of someone. The second is driven by bordering-on-unethical advertising.
It is easy to imagine how this distinction could turn unethical in a hurry. Imagine, for example, that a list of “Most Popular” music wasn’t really the most popular. Maybe someone paid someone else to put their music on the list. Instead of the most popular, it was the most paid for. This is where the real danger comes in. We’re starting to see ways of influencing people’s behavior online…but where do we draw the line?
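To make that danger concrete, here’s a minimal sketch (in Python, with invented song names and an invented paid_boost field; nothing here comes from Watts’ study or any real system) of how a displayed “Most Popular” list can quietly diverge from what people actually downloaded once paid placement is folded into the ranking:

```python
# Hypothetical sketch: how a "Most Popular" list can diverge from actual
# popularity once paid placement is mixed into the ranking score.
# All names, numbers, and fields are made up for illustration.

from dataclasses import dataclass

@dataclass
class Song:
    title: str
    downloads: int    # actual popularity: real user downloads
    paid_boost: int   # artificial popularity: promotion converted into rank weight

songs = [
    Song("Indie Gem", downloads=9_400, paid_boost=0),
    Song("Label Favorite", downloads=3_100, paid_boost=8_000),
    Song("Word-of-Mouth Hit", downloads=7_800, paid_boost=0),
]

# What users actually did:
actual_top = sorted(songs, key=lambda s: s.downloads, reverse=True)

# What the interface displays once paid placement is added to the score:
displayed_top = sorted(songs, key=lambda s: s.downloads + s.paid_boost, reverse=True)

print("Actual:   ", [s.title for s in actual_top])
print("Displayed:", [s.title for s in displayed_top])
# Actual:    ['Indie Gem', 'Word-of-Mouth Hit', 'Label Favorite']
# Displayed: ['Label Favorite', 'Indie Gem', 'Word-of-Mouth Hit']
```

The page the user sees looks identical either way; only the ranking function knows which entries bought their position, which is exactly why this kind of manipulation is so hard to detect from the outside.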
Some people don’t even like actual popularity, because they are aware of its influence and don’t like the rich-get-richer effect, so to speak. But to me actual popularity is fine, for the most part, as long as it’s real. If people are passionate about Britney Spears, then it’s OK that they influence others…they should be able to spread their passion via word-of-mouth. But when that excitement is artificial, well, it’s just not telling the truth…