The Legal and Political Implications of AI Bias: An International Comparative Study

Authors

  • Stephanie Ness, Diplomatische Akademie
  • Mithun Sarker, Iron Horse Terminals
  • Mykola Volkivskyi, Taras Shevchenko National University of Kyiv
  • Navdeep Singh, NerdCuriosity.Com

DOI:

https://doi.org/10.47672/ajce.1879

Keywords:

Artificial Intelligence, Governance, Ethical Considerations, Legal Implications (JEL Code: K23), Political Dimensions (JEL Code: O33), Discrimination, Fairness, Conformity Assessments (JEL Code: D63), Sensitive Data, Fundamental Rights, Transparency, Accountability, Inclusivity.

Abstract

Purpose: This study, "The Legal and Political Implications of AI Bias: An International Comparative Study," examines AI governance with a specific focus on the ethical challenges arising from bias in AI systems. Its purpose is to underscore the urgent need for robust regulatory frameworks to address bias, discrimination, and fairness in AI technologies.

Materials and Methods: The research methodology involved a comprehensive analysis of international perspectives on AI bias, drawing on existing literature, legal frameworks, and the political dynamics surrounding AI governance in various countries. A comparative analysis was conducted to elucidate the diverse approaches nations have adopted to tackle AI bias and to trace their legal and political consequences.

Findings: The study highlighted the inherent risks associated with biased algorithms and stressed the paramount importance of proactively detecting and mitigating bias to prevent discrimination and promote fairness in AI systems. Additionally, it advocated for comprehensive measures such as risk management strategies, conformity assessments for high-risk AI applications, and the careful handling of sensitive data to identify and rectify biases that could lead to discriminatory outcomes.

Implication to Theory, Practice and Policy: The study was informed by theories of ethical governance and legal frameworks for AI development and deployment. It was validated through comparative analysis of international perspectives, which provided insight into how effectively different regulatory approaches address AI bias. Practitioners are advised to implement risk management strategies, conduct conformity assessments for high-risk AI applications, and handle sensitive data carefully so that biases can be identified and rectified, while prioritizing ethical considerations and responsible deployment practices. Policymakers are urged to develop robust regulatory frameworks that promote transparency, accountability, and inclusivity in AI development and deployment, in order to build a more equitable and trustworthy AI ecosystem.

In essence, the study provides crucial insights into the complex interplay between legal frameworks, political dynamics, and ethical considerations in addressing AI bias on a global scale. It paves the way for the establishment of fair and unbiased AI systems that benefit society as a whole.

Published

2024-03-16

How to Cite

Ness, S., Sarker, M., Volkivskyi, M., & Singh, N. (2024). The Legal and Political Implications of AI Bias: An International Comparative Study. American Journal of Computing and Engineering, 7(1), 37–45. https://doi.org/10.47672/ajce.1879

Section

Articles