Participants’ Perceptions of Privacy and Data Sharing Regarding Health-Related Data Using Artificial Intelligence
DOI: https://doi.org/10.51983/ajsat-2023.12.2.3973

Keywords: Privacy, Artificial Intelligence, Cybersecurity, HCI, Mobile Health

Abstract
Substantial research has explored the effective integration of artificial intelligence (AI) into healthcare. It is therefore imperative to understand the perspectives and concerns of the general public regarding the use of their health data in AI research, particularly with respect to privacy. This study aims to explore participants’ awareness of AI, their privacy concerns about data sharing with AI, and its use with healthcare data. We carried out a comprehensive study using a self-administered questionnaire with participants recruited through convenience sampling. A total of 450 participants were enlisted from Saudi Arabia. Conditional binary logistic regression models were employed to compute odds ratios (ORs) and 95% confidence intervals. Among the participants, 168 (37.3%) reported having knowledge of AI. Regarding the vulnerability of personal data when using AI technology, 186 (41.3%) perceived a privacy risk to their health data, while 201 (44.7%) indicated trust in AI’s ability to safeguard data privacy. Regarding the use of machine learning for medical record analysis, 180 (40.0%) considered the risks to outweigh the benefits. For AI research purposes, 205 (45.6%) supported data sharing, 213 (47.3%) believed hospitals should have strict regulations, and 214 (47.6%) believed hospitals should provide only limited access to data to ensure health data privacy. Furthermore, the study found that younger individuals were more likely to trust AI with their data privacy (OR = 0.540, 95% CI: 0.300-0.972), while participants with higher education levels were nearly three times more likely to trust AI with their data privacy than those with lower education (OR = 2.894, p = 0.047, 95% CI: 1.012-8.278).
Patients’ viewpoints, the extent of support they receive, and their understanding of health data research and artificial intelligence varied significantly, and were often shaped by privacy concerns. To ensure the acceptability of AI research and its seamless integration into future clinical practice, it is imperative to engage the public more extensively and stimulate discussion, particularly around privacy concerns.
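The odds ratios and confidence intervals reported above follow the standard transformation of logistic-regression output: OR = exp(β), with the 95% CI given by exp(β ± 1.96·SE). As a minimal sketch (not the authors’ actual analysis code), the coefficient and standard error below are back-solved from the reported education-effect figures and are assumptions for illustration:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard
    error into an odds ratio with a 95% confidence interval."""
    odds_ratio = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return odds_ratio, lower, upper

# Hypothetical fitted values chosen to match the reported education
# effect (OR = 2.894, 95% CI: 1.012-8.278); the study does not
# publish the raw coefficients.
or_, lo, hi = odds_ratio_ci(beta=1.0627, se=0.5362)
print(f"OR = {or_:.3f}, 95% CI: {lo:.3f}-{hi:.3f}")
```

A CI whose lower bound sits just above 1.0 (here 1.012) is consistent with the borderline p-value of 0.047 reported for the education effect.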
License
Copyright (c) 2023 The Research Publication
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.