This project seeks to establish a new interdisciplinary research program at UOW that addresses the ethical, legal and social implications (ELSI) of using artificial intelligence (AI) in health and social care.
AI Ethics in Health and Care
Although 75% of Australians are aware of AI, only 33% know that it can be used for diagnosis in health and social care, and fewer than 50% find this acceptable.
In health care, AI has the potential to augment, but also to replace, some health professional tasks, such as those in radiology. Proprietary algorithms can now screen for and diagnose conditions and predict prognosis. In social care, AI is entering welfare administration and decision-making. Despite these advances, AI can raise concerns about data privacy and confidentiality, reinforce bias and prejudice, and introduce legal risks. This project aims to foster AI that is keenly attuned to moral, legal and social responsibility.
The project will deliver the first systematic, high-quality Australian survey of knowledge of, attitudes towards, and values regarding AI in health and social care. Specifically, it will investigate Australians' views on AI in two contexts: 1) diagnosis and screening in health care, especially for breast cancer and cardiovascular disease; and 2) the provision of advice about disability payments and services in the welfare sector.
The team
Professor Stacy Carter, from the Faculty of the Arts, Social Sciences and Humanities and the Australian Centre for Health Engagement, Evidence and Values, is responsible for implementation and for writing on the ELSI of AI. She will be supported by Professor Khin Win (EIS, School of Computing and Information Technology) and Dr Scarlet Wilcock (Law and Social Service).
Other leaders will include Dr Tam Ha (ASSH, epidemiology), Senior Professor David Steel (EIS, statistics, including statistics for online research), and Professor Nina Reynolds (BAL, survey design and digital research). Ha, Steel and Reynolds will take responsibility for the quantitative methodology.