
Review of papers by Tufekci and Sandvig et al.
Closed, ResolvedPublic


Review of papers

Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency
Zeynep Tufekci


Can an Algorithm be Unethical?
Christian Sandvig, Kevin Hamilton, Karrie Karahalios, Cedric Langbort

Event Timeline

aetilley created this task.Sep 11 2015, 5:25 PM
aetilley claimed this task.
aetilley raised the priority of this task from to Needs Triage.
aetilley updated the task description. (Show Details)
aetilley added a subscriber: aetilley.
Restricted Application added a subscriber: Aklapper. Sep 11 2015, 5:25 PM

Tufekci's paper is mostly expository, surveying other studies, but the studies she mentions are truly fascinating. aetilley has never had a Facebook account, but was intrigued by the possibilities that Tufekci mentions.
Sandvig et al. seem to be a diverse group of experts taking many pages to say something more or less obvious, but perhaps it bears repeating. There is a distinction between a function, an algorithm for computing that function, and a specific implementation of that algorithm. Racism, and bias in general, can creep in at more than one of these levels.
A mantra that kept coming to mind while reading these was "strive for open algorithms and open training sets." The principal barrier here is determining the level of detail at which to describe an algorithm or dataset to a (most likely non-technical) user, or at which to let that user specify their own personal algorithm.

The Sandvig paper did make brief mention of feedback mechanisms which seem to be pertinent to our considerations.

"Through close study of how the above scenarios might be more or less likely to result in racist outcomes, we might look to design better safeguards into such algorithms. Some precedent exists for such care in algorithm design. In computer security or health systems, for example, sandboxing processes and other safeguards lessen the risk that systems suffer serious security or privacy failures. Though such approaches might begin to take us into system design rather than algorithm design, we can point to some specific cases in the above examples where the addition of a few new inputs, outputs or steps in the algorithm could result in a process which tends to be more ethical. Indeed, without an adequate consideration of the algorithms the system design process itself is impoverished, as selection of the algorithms to be employed is a major task.
"Consider a version of the algorithm in the video surveillance scenario that could receive feedback from the operator with respect to false positives or false negatives. These additional labels from the operator then become extra training data for the system. An adaptive system could incorporate this feedback without the need for fully retraining the system. An algorithm might be designed to specifically detect if the operator is racist, giving the system the ability to “learn” from the operator’s own racist conclusions to prevent racist outcomes in the future."
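The adaptive-feedback idea in the passage above can be sketched in code. This is a minimal illustration, not anything from the paper: a hypothetical online classifier whose weights are nudged each time the operator flags a false positive or false negative, so a correction becomes one extra training example folded in immediately, with no full retraining pass. All class and method names here are invented for the sketch.

```python
class AdaptiveDetector:
    """Toy online detector that learns from operator corrections.

    Uses a perceptron-style update: each flagged false positive
    (operator label 0) or false negative (operator label 1) adjusts
    the weights incrementally.
    """

    def __init__(self, n_features, learning_rate=0.1):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.learning_rate = learning_rate

    def predict(self, features):
        # Linear score; fire an alert (1) only if the score is positive.
        score = self.bias + sum(w * x for w, x in zip(self.weights, features))
        return 1 if score > 0 else 0

    def feedback(self, features, operator_label):
        # Incorporate a single operator correction without retraining:
        # shift the weights toward the corrected label.
        error = operator_label - self.predict(features)
        if error != 0:
            self.weights = [w + self.learning_rate * error * x
                            for w, x in zip(self.weights, features)]
            self.bias += self.learning_rate * error
```

Of course, as the paper's last sentence suggests, the same channel could be used the other way: the stream of operator labels could itself be audited (e.g., for systematic disagreement with ground truth on particular subgroups) rather than trusted blindly, since a biased operator would otherwise simply teach the system their own bias.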

aetilley moved this task from Review to Done on the Scoring-platform-team (Current) board.
aetilley renamed this task from Review of papers by Tufekci and Saldvig et. al. to Review of papers by Tufekci and Sandvig et. al..Sep 11 2015, 10:20 PM
aetilley set Security to None.
awight added a subscriber: awight.Sep 18 2015, 4:44 PM
Halfak closed this task as Resolved.Sep 19 2015, 4:11 PM