Applying Systems Theory to Ethical AI Development: Mitigating Unintended Consequences through Feedback Loop Analysis

Authors

  • Christian C. Madubuko, PhD, MA, PGDE, BA, Dip.; School of Regulation and Global Governance, Australian National University, Canberra, Australian Capital Territory
  • Chamunorwa Chitsungo, MBA, MSc, Grad. Cert., Dip.; Charles Sturt University, Canberra Campus, Australian Capital Territory

DOI:

https://doi.org/10.47672/ajir.2447

Keywords:

Artificial Intelligence (AI) (O33); Systems Theory (D85); Ethical Decision-Making (M14); Technological Innovation (O31); Social Responsibility (Z13)

Abstract

Purpose: The rapid adoption of Artificial Intelligence (AI) technologies has sparked critical discourse on their ethical implications. Current debates often lack a systems-oriented perspective, limiting our understanding of how AI systems can create complex feedback loops that produce significant unintended consequences. This paper develops an integrative framework that combines Systems Theory with ethical paradigms for AI development, addressing the multifaceted challenges that AI technologies present to society.

Materials and Methods: This research employs a systems-oriented analytical framework to elucidate how AI systems interact with societal and environmental variables. By identifying latent feedback loops, the study surfaces ethical dilemmas, including bias amplification, social inequality, and ecological degradation. The analysis examines how these systemic interactions shape algorithmic decision-making processes and thereby determine whether existing inequities are mitigated or exacerbated.
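
To make the feedback-loop analysis concrete, the minimal sketch below simulates one reinforcing loop of the kind the paper describes: an AI system whose outputs skew its own future training data, amplifying an initial disparity unless a balancing (governance) loop counteracts it. The function name, parameters, and numerical values are illustrative assumptions, not quantities taken from the paper.

```python
# Illustrative sketch only: a discrete-time model of a reinforcing feedback
# loop (bias amplification) and a balancing loop (ethical oversight).
# All names and parameter values are hypothetical, not drawn from the paper.

def simulate_bias_feedback(initial_share: float,
                           reinforcement: float,
                           correction: float,
                           steps: int) -> list[float]:
    """Track the share of algorithmic attention allocated to one group over time.

    reinforcement: how strongly past allocations skew the next round's data
                   (the reinforcing loop).
    correction:    strength of an external governance mechanism pulling the
                   allocation back toward parity at 0.5 (the balancing loop).
    """
    share = initial_share
    history = [share]
    for _ in range(steps):
        share += reinforcement * (share - 0.5)   # reinforcing loop
        share -= correction * (share - 0.5)      # balancing loop
        share = min(max(share, 0.0), 1.0)        # keep within [0, 1]
        history.append(share)
    return history


if __name__ == "__main__":
    # Starting from a small 5% skew, compare an ungoverned system with one
    # whose balancing loop dominates the reinforcing loop.
    unchecked = simulate_bias_feedback(0.55, reinforcement=0.20, correction=0.00, steps=20)
    governed = simulate_bias_feedback(0.55, reinforcement=0.20, correction=0.25, steps=20)
    print("No balancing loop:  ", [round(s, 2) for s in unchecked[::5]])
    print("With balancing loop:", [round(s, 2) for s in governed[::5]])
```

Even in this toy model, whether the disparity explodes or decays depends on the relative strength of the two loops, which is precisely the systems-level question the framework foregrounds.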

Findings: The findings highlight the significant influence of systemic interactions on the ethical implications of AI deployment. By applying a systems-oriented lens, we can better address ethical challenges and enhance the efficacy and fairness of AI applications.

Implications to Theory, Practice and Policy: This paper argues that integrating systemic thinking into the design, deployment, and governance of AI can improve ethical scrutiny and accountability. Theoretically, it advocates a paradigm shift in how ethical considerations are integrated into AI development. The paper concludes by proposing actionable strategies grounded in Systems Theory that equip developers, policymakers, and stakeholders with tools for creating ethically robust and socially responsible AI frameworks. By engaging with ethical AI discourse through an interdisciplinary lens, this research underscores the need to align technological innovation with ethical imperatives and advocates a transformative approach to AI development that prioritizes societal welfare.

Published

2024-09-25

How to Cite

Madubuko, C. C., & Chitsungo, C. (2024). Applying Systems Theory to Ethical AI Development: Mitigating Unintended Consequences through Feedback Loop Analysis. American Journal of International Relations, 9(6), 1–31. https://doi.org/10.47672/ajir.2447
