Artificially Intelligent Information Safeguards: Algorithms for Distinguishing Between Opinions, Facts and Alternative Facts

Presenter Information

Dan Goldwasser

Description

The way we access information has changed dramatically over the last decade. Exciting advances in social and data-driven computing have greatly increased the amount of information available and changed the way it is distributed and consumed. Users now have access to unprecedented amounts of information, and as a result often rely on automated recommendation algorithms to find relevant content. An unfortunate result of this trend is that the safeguards designed to ensure the validity and veracity of available information do not scale appropriately. Thus, false information spreads. This raises a natural question: "Can the same technological advances be used to protect us from malicious content?" This talk will discuss the prospects and challenges involved in scaling up these safeguards by using algorithms that can automatically analyze text and identify biased and deceptive content. These algorithms should combine linguistic information (e.g., how does word choice impact the reader?), common-sense reasoning (e.g., which facts are not likely to be true in a given context?), and social information (e.g., which sources should I trust?) in order to effectively filter unreliable information.
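The abstract does not specify an implementation, but the idea of combining linguistic, common-sense, and social signals can be illustrated with a minimal sketch. Everything below is hypothetical: the Claim fields, the three scoring functions, and the weights are stand-ins invented for illustration, not the presenter's actual method.

```python
# Illustrative sketch only: all names, scores, and weights are hypothetical
# stand-ins for the three kinds of signals mentioned in the abstract
# (linguistic cues, common-sense plausibility, and source trust).

from dataclasses import dataclass


@dataclass
class Claim:
    text: str      # the claim as it appears in an article or post
    source: str    # who published or shared it
    context: str   # surrounding text, usable for plausibility checks


def linguistic_score(claim: Claim) -> float:
    """Hypothetical: how neutral vs. loaded is the word choice? (0 = loaded, 1 = neutral)"""
    loaded_terms = {"shocking", "outrageous", "they don't want you to know"}
    hits = sum(term in claim.text.lower() for term in loaded_terms)
    return max(0.0, 1.0 - 0.3 * hits)


def plausibility_score(claim: Claim) -> float:
    """Hypothetical stub: a real system would apply common-sense reasoning
    or fact-checking to judge whether the claim is likely true in context."""
    return 0.5


def source_trust(claim: Claim) -> float:
    """Hypothetical: prior trust in the source, e.g. learned from its track record."""
    known_reliable = {"example-newswire.org"}
    return 0.9 if claim.source in known_reliable else 0.4


def reliability(claim: Claim, weights=(0.4, 0.3, 0.3)) -> float:
    """Combine the three signals into a single reliability estimate in [0, 1]."""
    w_ling, w_plaus, w_src = weights
    return (w_ling * linguistic_score(claim)
            + w_plaus * plausibility_score(claim)
            + w_src * source_trust(claim))


if __name__ == "__main__":
    claim = Claim(
        text="Shocking new study they don't want you to know about!",
        source="unknown-blog.example",
        context="",
    )
    print(f"estimated reliability: {reliability(claim):.2f}")
```

In practice each component would be a learned model rather than a hand-written rule, and the combination would likewise be learned, but the sketch shows how the three sources of evidence the talk highlights could feed a single filtering decision.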

Location

Stewart 218

Start Date

9-27-2017 1:15 PM

DOI

10.5703/1288284316620
