AI-driven content moderation can never be perfect


At this week’s panel discussion in Brussels hosted by the Center for Democracy and Technology, I discussed the capacities and limitations of AI-driven content moderation techniques. Other panelists included Armineh Nourbakhsh, now with S&P Global, who discussed the Tracer tool she helped develop while at Reuters, and Emma Llansó of CDT, who discussed the recent paper she co-authored, “Mixed Messages? The Limits of Automated Social Media Content Analysis.” Prabhat Agarwal from the European Commission provided an insightful policymaker perspective.

Platforms and other Internet participants use automated procedures to take action against illegal material on their systems, such as hate speech or terrorist content. They also use the same techniques to enforce the voluntary terms of service they require of users in order to maintain stable and attractive online environments. Finally, there is an intermediate type of content that is more than merely unattractive yet not quite illegal in itself, but which is harmful and needs to be controlled. Disinformation campaigns fall into this category.

Government pressure to use automated systems is growing

The reason for discussing automatic takedowns of this material right now is not just technical. Policymakers are very concerned about the effectiveness of the systems used for content moderation and are pushing platforms to do ever more. Sometimes this pressure spills over into regulatory requirements.

The recently announced EU proposal for a regulation on preventing terrorist content online is a good example. Much attention has focused on its provision allowing competent national authorities in the member states to require platforms to remove specific instances of terrorist content within an hour of being notified. But much more worrisome is its requirement that platforms put in place “proactive measures” designed to prevent terrorist material from appearing on their systems in the first place. Since proactive measures are automatic blocking systems for terrorist content, the proposed regulation explicitly creates a derogation from the current e-Commerce Directive to impose a duty to monitor systems for terrorist content. Still worse, if a company were to receive too many removal orders and could not reach agreement with the regulator on a design improvement, it could be required to implement and maintain an automatic takedown system prescribed by the regulator.

Protective measures are needed to make removal decisions fair

There must be explicit standards for removal decisions, and those standards need to be transparent so that users can form expectations about what is out of bounds and what is acceptable. Moreover, there needs to be an explanation for each individual removal decision that describes the specific features or aspects of the content that triggered the removal. Finally, because no system can be perfect, there needs to be a redress mechanism to allow material to be restored when it has been removed in error. As a model, look to the practices of the credit industry in the U.S., which has had these elements of transparency, explanation, and redress for generations.
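To make those three elements concrete, here is a minimal, hypothetical sketch of what a removal-decision record embodying transparency, explanation, and redress might look like. It is not drawn from any platform’s actual system; the field names and the RemovalDecision structure are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class RemovalDecision:
    """Hypothetical record of a single content-removal decision.

    Captures the three elements discussed above: the transparent standard
    that was applied, an explanation of what triggered the removal, and
    the state of any redress (appeal) process.
    """
    content_id: str
    standard_violated: str           # the published rule the content was judged against
    explanation: str                 # which features of the content triggered removal
    decided_by: str                  # "automated-classifier" or a human reviewer ID
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_filed: bool = False
    appeal_outcome: Optional[str] = None   # "restored", "upheld", or None while pending

    def restore(self) -> None:
        """Mark the content as restored after a successful appeal."""
        self.appeal_filed = True
        self.appeal_outcome = "restored"


# Example: an automated removal that is later reversed on appeal.
decision = RemovalDecision(
    content_id="post-12345",
    standard_violated="hate-speech policy, rule 2 (targeted slurs)",
    explanation="Classifier flagged a slur; review found it was quoted in a news report.",
    decided_by="automated-classifier",
)
decision.restore()
print(decision.appeal_outcome)  # -> "restored"
```

The point of the sketch is simply that each of the three elements has to be recorded somewhere if users are to see which standard was applied, why their content was removed, and what happened to their appeal.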

For reasons that Emma Llansó outlined in her paper, it is a mistake to rely solely on automated systems for removals. The error rate is simply too high, and human review is always needed to ensure that the context and meaning of the content are fully considered prior to removal.


