AI in Public Governance: Ensuring Rights and Innovation in Non-High-Risk AI Systems in the United States

Authors

  • Tasriqul Islam
  • Sadia Afrin
  • Neda Zand

DOI:

https://doi.org/10.47672/ejt.2577

Keywords:

AI Policy, AI Framework, IT Governance, Public Sector, Public Values, AI Governance

Abstract

Purpose: Artificial intelligence (AI) is one of the most rapidly developing areas of IT. This paper contributes to the ongoing effort to create an AI governance framework that accounts for public confidence in AI policy. The article begins by discussing why public trust is essential to the sound regulation of new technologies, and then assesses public sentiment toward AI technology as it relates to governmental functions.

Materials and Methods: The researchers examined how people in the US perceive AI, how it is being used, and whether AI is suitable for public administration tasks.

Findings: The findings show that opinions differ on whether AI is acceptable and on how its decisions affect the job market, the justice system, and national security in the long run. The 2018 AI Public Opinion Survey found that while many Americans are worried about AI, many also recognize its potential.

Implications to Theory, Practice and Policy: The article concludes that public trust is fundamental to effective AI governance.


References

Botero Arcila, B. (2024). AI liability in Europe: How does it complement risk regulation and deal with the problem of human oversight? Computer Law & Security Review, 54, 106012. https://doi.org/10.1016/j.clsr.2024.106012

Bygrave, L. A., & Schmidt, R. (2024). Regulating Non-High-Risk AI Systems under the EU's Artificial Intelligence Act, with Special Focus on the Role of Soft Law. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4997886

Cavalcante, P. (2023). AI is at full speed in public management, but how about the risks and the governmental measures? https://www.ippapublicpolicy.org/file/paper/6628eddc69339.pdf

Chen, T., Gascó-Hernandez, M., & Esteve, M. (2023). The Adoption and Implementation of Artificial Intelligence Chatbots in Public Organizations: Evidence from U.S. State Governments. https://discovery.ucl.ac.uk/id/eprint/10174202/1/Chatbot_Final%20to%20Share.pdf

Chen, Y.-C., Ahn, M. J., & Wang, Y.-F. (2023). Artificial Intelligence and Public Values: Value Impacts and Governance in the Public Sector. Sustainability, 15(6), 4796. https://doi.org/10.3390/su15064796

Stettinger, G., Weissensteiner, P., & Khastgir, S. (2024). Trustworthiness Assurance Assessment for High-Risk AI-Based Systems. IEEE Access. https://doi.org/10.1109/access.2024.3364387

Kretschmer, M., Kretschmer, T., Peukert, A., & Peukert, C. (2023). The risks of risk-based AI regulation: Taking liability seriously. arXiv. https://doi.org/10.48550/arXiv.2311.14684

Moon, M. J. (2023). Searching for Inclusive Artificial Intelligence for Social Good: Participatory Governance and Policy Recommendations for Making AI More Inclusive and Benign for Society. Public Administration Review. https://doi.org/10.1111/puar.13648

Robles, P., & Mallinson, D. J. (2023). Artificial intelligence technology, public trust, and effective governance. Review of Policy Research. https://doi.org/10.1111/ropr.12555

Vöneky, S., & Schmidt, T. (2024). Regulating AI in non-military applications: Lessons learned. Edward Elgar Publishing eBooks, 352–369. https://doi.org/10.4337/9781800377400.00027


Published

2024-12-27

How to Cite

Tasriqul Islam, Sadia Afrin, & Neda Zand. (2024). AI in Public Governance: Ensuring Rights and Innovation in Non-High-Risk AI Systems in the United States. European Journal of Technology, 8(6), 17–27. https://doi.org/10.47672/ejt.2577

Section

Articles