Algorithms and the Perception of Bias

On Saturday, July 23, Facebook acknowledged that its anti-spam systems had briefly and accidentally blocked links to WikiLeaks files containing internal Democratic National Committee (DNC) emails. WikiLeaks had released the trove of roughly 19,000 leaked DNC documents, containing communications between Democratic Party officials, on Friday, July 22. The following day, people tweeted screenshots of an error message they received when attempting to post links to the leaked documents: “The content you’re trying to share includes a link that our security systems detected to be unsafe.”

Facebook told BBC News that its “anti-spam systems briefly flagged links to these documents as unsafe.” (Twitter faced similar accusations of censorship over the leaked DNC documents but dismissed them as “uninformed.”)

But the charged political environment aside, such incidents are to be expected at any company that relies, in whole or in part, on algorithms to regulate its content. Algorithms work, and they are often seen as a bias-free way of producing efficient results. Yet that same reputation can trigger a consumer backlash when an algorithm yields an undesired result. When that happens, the very assumption of scientific efficiency that led the company to rely on an algorithm in the first place can give way to a perception of human manipulation or filtering.
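To see how such false positives arise without any human “deciding” anything, consider a minimal sketch of a threshold-based link-safety check. The signals, weights, and threshold below are invented for illustration; they are not Facebook’s actual system.

```python
# Hypothetical sketch of a threshold-based link-safety check.
# All signals, weights, and the cutoff are illustrative assumptions.

UNSAFE_THRESHOLD = 0.7

def risk_score(link_stats: dict) -> float:
    """Combine a few spam-like signals into a single score in [0, 1]."""
    score = 0.0
    # To a system like this, a newly registered domain being shared at
    # an unusually high rate looks exactly like a spam campaign.
    if link_stats["domain_age_days"] < 30:
        score += 0.4
    if link_stats["shares_per_minute"] > 1000:
        score += 0.4
    if link_stats["user_reports"] > 50:
        score += 0.2
    return min(score, 1.0)

def is_unsafe(link_stats: dict) -> bool:
    """Block the link whenever its score crosses the fixed threshold."""
    return risk_score(link_stats) >= UNSAFE_THRESHOLD

# A legitimate but brand-new, suddenly viral link trips the same rule
# as spam -- a false positive with no human in the loop.
leak_mirror = {"domain_age_days": 10, "shares_per_minute": 5000, "user_reports": 80}
print(is_unsafe(leak_mirror))  # True: blocked, though nothing malicious occurred
```

The point of the sketch is that the block is a mechanical consequence of the thresholds, yet to the blocked user it is indistinguishable from deliberate censorship.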

For example, consider Google Maps’ recent redesign. The new Maps highlights “areas of interest,” shaded in orange, that are identified by (you guessed it) an algorithm measuring the concentration of restaurants, bars and shops. But what about the businesses left outside the orange? Business owners unhappy with the lack of designation are unlikely to find much satisfaction in blaming an algorithm.
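A rough sketch shows how a density cutoff of this kind can draw a line that feels arbitrary to those on the wrong side of it. The grid size and threshold below are assumptions for illustration, not Google’s actual method.

```python
# Hypothetical density-threshold sketch of an "areas of interest"-style
# heuristic. Grid cell size and cutoff are invented for illustration.
from collections import Counter

CELL_SIZE = 0.005       # degrees of lat/lon per grid cell (assumption)
MIN_BUSINESSES = 8      # cells with at least this many businesses get highlighted

def cell_for(lat: float, lon: float) -> tuple:
    """Snap a coordinate to its grid cell."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def areas_of_interest(businesses: list) -> set:
    """Return the grid cells dense enough to be shaded as 'of interest'."""
    counts = Counter(cell_for(lat, lon) for lat, lon in businesses)
    return {cell for cell, n in counts.items() if n >= MIN_BUSINESSES}

# A shop in a cell with 7 neighbors stays invisible; one with 8 becomes
# "of interest." Nothing about the individual business changed -- only
# which side of the threshold its block happened to fall on.
```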

Especially where enormous amounts of content are constantly being processed and presented to users, social media companies need to be aware of the possible pitfalls of relying on algorithms. They may want to consider including a disclaimer in their terms and conditions about their use of algorithms, to protect themselves from possible liability. They should also have a policy and practice in place for responding when that dependable algorithm yields a result that certainly looks like good ol’ fashioned human error.

After all, in this increasingly algorithmic world, blame is still something reserved mostly for humans.