Ethical AI and the Future of Healthcare: Combining Academic Theory and Industry Practice to Ensure Patient-Centered Care

Authors

Plummer, D.

DOI:

https://doi.org/10.47672/ejt.1727

Keywords:

Artificial Intelligence, Ethics, Sociotechnical Systems Theory, Lean Six Sigma, Academic-Industry Partnership

Abstract

Purpose: This paper seeks to define a risk taxonomy, establish meaningful controls, and create a prospective harms model for AI risks in healthcare. At present, no comprehensive definition of AI risks, as applied to industry and society, is known to exist.

Materials and Methods: The temptation in current research, in both academia and industry, is to apply exclusively technical solutions to these complex problems; this view is myopic, and it can be remedied by establishing effective controls informed by a holistic approach to managing AI risk. Sociotechnical Systems Theory (STS) offers an attractive theoretical lens for this issue because it prevents collapsing a multifaceted problem into a one-dimensional solution. Specifically, its multidisciplinary approach, encompassing both the sciences and the humanities, reveals a multidimensional view of technology-society interaction, exemplified by the advent of AI.

Findings: After advancing this risk taxonomy, the paper applies the risk management framework of Lean Six Sigma (LSS) to propose effective mitigating controls for the identified risks. LSS determines controls through data collection and analysis, supporting data-driven decision making for industry professionals.
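The paper itself does not publish code; as a minimal sketch of how one common LSS risk tool, Failure Mode and Effects Analysis (FMEA), turns collected data into a ranked list of controls to prioritize, consider the following. All risk names and scores here are hypothetical illustrations, not findings from the paper.

```python
# Hypothetical FMEA-style scoring of AI risks in healthcare.
# Each risk is scored 1-10 on severity, occurrence, and detection
# (10 = hardest to detect); the Risk Priority Number is their product.
risks = [
    {"risk": "biased triage model", "severity": 9, "occurrence": 6, "detection": 7},
    {"risk": "sepsis alert fatigue", "severity": 7, "occurrence": 8, "detection": 4},
    {"risk": "opaque dosing output", "severity": 8, "occurrence": 4, "detection": 8},
]

def rpn(r):
    """Risk Priority Number: severity x occurrence x detection."""
    return r["severity"] * r["occurrence"] * r["detection"]

# Rank risks so mitigation effort targets the highest RPN first.
for r in sorted(risks, key=rpn, reverse=True):
    print(f"{r['risk']}: RPN={rpn(r)}")
```

The ranking, not the absolute scores, is what drives control selection: controls are proposed for the highest-RPN risks first, then scores are re-collected after mitigation.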

Implications to Theory, Practice and Policy: Instantiating the theory of STS in industry practice is therefore critical for determining and mitigating real-world risks from AI. In summary, this paper combines the academic theory of sociotechnical systems with the industry practice of Lean Six Sigma to develop a hybrid model that fills a gap in the literature. Drawing on both theory and practice ensures a robust, informed risk model for AI use in healthcare.


References

AI, Algorithmic, and Automation Incidents and Controversies (2023). About AIAAIC. https://www.aiaaic.org/about-aiaaic

AI, Algorithmic, and Automation Incidents and Controversies (2023). NarxCare drug addiction risk assessment. https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/narxcare-drug-addition-risk-assessment

AI, Algorithmic, and Automation Incidents and Controversies (2023). Epic sepsis prediction model. https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/epic-systems-sepsis-prediction-model

AI, Algorithmic, and Automation Incidents and Controversies (2023). Idaho Medicaid disability resource allocation model. https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/idaho-medicaid-disability-resource-allocation

Claburn, T. (2023). 'AI divide' across the US leaves economists concerned. The Register. https://www.theregister.com/2023/10/24/ai_adoption_distribution

Council for Six Sigma Certification (2018). Lean Six Sigma Green Belt Training Manual. https://www.sixsigmacouncil.org/wp-content/uploads/2018/09/Lean-Six-Sigma-Green-Belt-Certification-Training-Manual-CSSC-2018-06b.pdf

Department of Health and Human Services, National Practitioner Databank, 45 C.F.R. § 60.3 (2023). https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-A/part-60/subpart-A/section-60.3

Eitel-Porter, R. (2020). Beyond the promise: Implementing ethical AI. AI and Ethics. Springer Nature. https://doi.org/10.1007/s43681-020-00011-6

Farhud, D. D., & Zokaei, S. (2021). Ethical issues of artificial intelligence in medicine and healthcare. Iranian Journal of Public Health. https://doi.org/10.18502/ijph.v50i11.7600

Knight, W. (2023). Six months ago Elon Musk called for a pause on AI: Instead development sped up. Wired. https://www.wired.com/story/fast-forward-elon-musk-letter-pause-ai-development

Korn, J. (2023). How companies are embracing generative AI for employees…or not. CNN. https://www.cnn.com/2023/09/22/tech/generative-ai-corporate-policy/index.html

Lately, D. (2015). Silicon Valley's cult of nothing. The Baffler. https://thebaffler.com/latest/cult-of-nothing

Leeds University, School of Business (2023). Socio-technical systems theory. https://business.leeds.ac.uk/research-stc/doc/socio-technical-systems-theory#:~:text=Socio%2Dtechnical%20theory%20has%20at,parts%20of%20a%20complex%20system.

Morley, J., Machado, C., Burr, C., Cowls, J., Joshi, I., Taddeo, M., & Floridi, L. (2020). The ethics of AI in healthcare: A mapping review. Social Science & Medicine, 260. https://www.sciencedirect.com/science/article/abs/pii/S0277953620303919

National Academy of Medicine (2019). Patient-centered, integrated health care quality measures could improve health literacy, language access, and cultural competence. https://nam.edu/patient-centered-integrated-health-care-quality-measures-could-improve-health-literacy-language-access-and-cultural-competence/#:~:text=That%20IOM%20report%20committee%20recommended,timely%2C%20efficient%2C%20and%20equitable

Pegoraro, R. (2023). Companies adopting AI need to move slowly and not break things. Fast Company. https://www.fastcompany.com/90888603/applied-ai-move-slowly-not-break-things

Rajan, J., & Rag, A. (2023). Companies going slow on AI risk falling behind: Bain report. The Economic Times. https://economictimes.indiatimes.com/tech/technology/companies-going-slow-on-ai-risk-falling-behind-bain-report/articleshow/103790282.cms

Rebitzer, J., & Rebitzer, R. (2023). AI adoption in U.S. health care won't be easy. Harvard Business Review. https://hbr.org/2023/09/ai-adoption-in-u-s-health-care-wont-be-easy#:~:text=But%20history%20suggests%20that%20the,that%20can%20upend%20profitable%20operations.

Sartori, L., & Theodorou, A. (2022). A sociotechnical perspective for the future of AI: Narratives, inequalities, and human control. Ethics and Information Technology, 24(4). https://doi.org/10.1007/s10676-022-09624-3

Shah, S., & Matin, R. (2023). BT08 Mapping UK frameworks for ethical artificial intelligence applied to dermatology. British Journal of Dermatology. Blackwell Publishing Inc. https://academic.oup.com/bjd/article/188/Supplement_4/ljad113.374/7207265

Stahl, B. C. (2023). Embedding responsibility in intelligent systems: From AI ethics to responsible AI ecosystems. Scientific Reports, 13(1). https://doi.org/10.1038/s41598-023-34622-w

Taulli, T. (2021). Artificial intelligence: How non-tech firms can benefit. Forbes. https://www.forbes.com/sites/tomtaulli/2021/05/14/ai-artificial-intelligence-how-non-tech-firms-can-benefit/?sh=2257869f1962

The Joint Commission (2023). The Joint Commission FAQs. https://www.jointcommission.org/who-we-are/facts-about-the-joint-commission/joint-commission-faqs/#:~:text=Joint%20Commission%20surveyors%20visit%20accredited,Commission%20accreditation%20surveys%20are%20unannounced

U.S. Food & Drug Administration. (2022). Proposed rule: Quality system regulation amendments. U.S. Department of Health and Human Services. https://www.fda.gov/medical-devices/quality-system-qs-regulationmedical-device-good-manufacturing-practices/proposed-rule-quality-system-regulation-amendments-frequently-asked-questions#:~:text=On%20February%2023%2C%202022%2C%20the,used%20by%20many%20other%20regulatory

van de Poel, I. (2020). Embedding values in artificial intelligence (AI) systems. Minds and Machines, 30(3), 385-409. https://doi.org/10.1007/s11023-020-09537-4

Xing, X., Wu, H., Wang, L., Stenson, I., Yong, M., Del Ser, J., Walsh, S., & Yang, G. (2022). Non-imaging medical data synthesis for trustworthy AI: A comprehensive survey. Association for Computing Machinery. https://arxiv.org/abs/2209.09239

Published

2024-01-04

How to Cite

Plummer, D. (2024). Ethical AI and the Future of Healthcare: Combining Academic Theory and Industry Practice to Ensure Patient-Centered Care. European Journal of Technology, 8(1), 1–13. https://doi.org/10.47672/ejt.1727