Abstract

Designing effective and inclusive governance and public communication strategies for artificial intelligence (AI) requires understanding how stakeholders reason about its use and governance. We examine underlying factors and mechanisms that drive attitudes toward the use and governance of AI across six policy-relevant applications using structural equation modeling and surveys of both US adults (N = 3,524) and technology workers enrolled in an online computer science master’s degree program (N = 425). We find that the cultural values of individualism, egalitarianism, general risk aversion, and techno-skepticism are important drivers of AI attitudes. Perceived benefit drives attitudes toward AI use but not its governance. Experts hold more nuanced views than the public and are more supportive of AI use but not its regulation. Drawing on these findings, we discuss challenges and opportunities for participatory AI governance, and we recommend that trustworthy AI governance be emphasized as strongly as trustworthy AI.

Comments

This is the accepted version of Matthew R O’Shaughnessy, Daniel S Schiff, Lav R Varshney, Christopher J Rozell, Mark A Davenport, What governs attitudes toward artificial intelligence adoption and governance?, Science and Public Policy, Volume 50, Issue 2, April 2023, Pages 161–176, https://doi.org/10.1093/scipol/scac056

Keywords

Artificial intelligence policy; public opinion; public engagement

Date of this Version

8-29-2022
