In the healthcare industry, AI is unlocking new possibilities and tackling grand challenges by improving care outcomes, accelerating life science innovation, and transforming the patient experience. ChatGPT, a language model trained on vast amounts of text data to learn the patterns of human language, is already making headlines.

The integration of artificial intelligence (AI) into healthcare has the potential to revolutionize medical care. However, its implementation does not come without risks. Data privacy and security issues arise when personal health data is collected for AI integration without adequate consent, shared with third parties without sufficient safeguards, re-identified, used to infer sensitive attributes, or exposed to unauthorized parties.

Compliance with regulations requires proper privacy-preserving techniques and mechanisms for granular consent management.
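To make the idea of granular consent concrete, here is a minimal, hypothetical sketch of what purpose-level consent checks might look like in code; the record fields and purposes are illustrative assumptions, not a reference to any specific regulation or product.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical purposes a patient could consent to individually,
# rather than a single all-or-nothing authorization.
PURPOSES = {"treatment", "ai_model_training", "third_party_research", "marketing"}

@dataclass
class ConsentRecord:
    patient_id: str
    granted: set = field(default_factory=set)  # purposes the patient opted into
    updated_at: datetime = field(default_factory=datetime.utcnow)

    def allows(self, purpose: str) -> bool:
        """Return True only if this exact purpose was explicitly granted."""
        return purpose in PURPOSES and purpose in self.granted

# Usage: check consent before adding a record to an AI training dataset.
consent = ConsentRecord(patient_id="p-001", granted={"treatment"})
if not consent.allows("ai_model_training"):
    print("Excluding patient p-001 from the training set: no consent for this purpose.")
```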

Accuracy and data safety risks with AI

AI models are becoming increasingly popular for their ability to make predictions and decisions from large data sets. However, when trained on data with inherent biases, these models can produce incorrect or unfair outcomes. For example, a model might be trained on a dataset predominantly composed of one gender, socio-economic class, or geographic region and then be used to make decisions or predictions for a different, more diverse population, which could yield biased or inaccurate results.
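As an illustration of how such skew can be surfaced, the hedged sketch below compares a model's accuracy across demographic subgroups on synthetic data; the column names, values, and tolerance threshold are assumptions for demonstration only.

```python
import pandas as pd

# Synthetic evaluation results: one row per patient with the model's
# prediction, the true outcome, and a demographic attribute.
results = pd.DataFrame({
    "sex":        ["F", "F", "F", "M", "M", "M", "M", "M"],
    "prediction": [1, 0, 0, 1, 1, 0, 1, 0],
    "actual":     [1, 1, 0, 1, 1, 0, 1, 0],
})

# Accuracy computed separately for each subgroup rather than overall,
# so an under-represented group cannot hide behind an aggregate number.
per_group = (results.assign(correct=results.prediction == results.actual)
                    .groupby("sex")["correct"].mean())
print(per_group)

# Flag any subgroup whose accuracy trails the best-performing group
# by more than an (assumed) 10-percentage-point tolerance.
gap = per_group.max() - per_group
print("Potentially disadvantaged subgroups:", list(gap[gap > 0.10].index))
```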

AI models depend heavily on the data supplied to train them. If that data is imprecise, inconsistent, or incomplete, the results generated by the model can be unreliable. AI models also bring their own set of privacy concerns, even when de-identified datasets are used to detect potential biases: the more data that is fed into the system, the greater its potential for identifying and creating linkages between datasets. In some instances, AI models may unintentionally retain patient information during training, which can be revealed through the model's outputs, significantly compromising patients' privacy and confidentiality.
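The linkage risk described above can be shown with a toy example. The hedged sketch below joins a "de-identified" clinical table with a public dataset on shared quasi-identifiers (ZIP code, birth year, sex); all records and column names are fabricated for illustration.

```python
import pandas as pd

# A "de-identified" clinical dataset: direct identifiers removed,
# but quasi-identifiers (zip, birth_year, sex) retained.
clinical = pd.DataFrame({
    "zip":        ["30301", "30301", "60614"],
    "birth_year": [1958, 1974, 1990],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# A public dataset (for example, a voter roll) containing names and
# the same quasi-identifiers.
public = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones"],
    "zip":        ["30301", "60614"],
    "birth_year": [1958, 1990],
    "sex":        ["F", "F"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses,
# which is exactly the kind of linkage that large-scale data
# aggregation makes easier.
reidentified = clinical.merge(public, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```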

Regulatory compliance challenges with AI

As AI develops and becomes increasingly integrated into healthcare organizations' operations, regulatory bodies are straining to keep up with the rapid pace of technological advances. This has left many aspects of AI's application in healthcare in a state of ambiguity, as legislation and regulations that would ensure the data is used responsibly and ethically have yet to be developed.

According to a paper published in BMC Medical Ethics, AI presents a unique and complex privacy challenge due to the sheer amount of patient data that must be collected, stored, and accessed in order to offer reliable insights. By taking advantage of machine learning models, AI can identify patterns in patient data that may otherwise be difficult to recognize.

Although a patchwork of laws, including HIPAA, applies, there remains a gap in how privacy and security should be addressed. The problem with existing laws is that they were not designed specifically for AI. For instance, HIPAA does not directly regulate AI vendors unless they act as business associates of covered entities. Signing a business associate agreement (BAA) with third parties mitigates the problem to some extent. However, vendors can get by without a BAA if the data is de-identified, since it is then no longer subject to HIPAA. In that case, data privacy issues arise again, as AI has the ability to re-identify previously de-identified data.

In January 2021, the U.S. Food and Drug Administration (FDA) released its “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device Action Plan” to address how AI regulations should be implemented in the healthcare sector. The plan proposed ideas for managing and regulating adaptive AI and ML technologies, including requiring transparency from manufacturers and real-world performance monitoring.

ChatGPT in healthcare and privacy concerns

The advent of ChatGPT has brought enormous transformations to how the healthcare industry operates. Its applications can be seen in patient education, decision support for healthcare professionals, disease surveillance, patient triage, remote patient monitoring, and clinical trials, where it can help researchers identify patients who meet inclusion criteria and are willing to participate.
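As a hedged illustration of the clinical-trial use case mentioned above, the sketch below asks an OpenAI chat model to judge whether a fictional, de-identified patient summary meets a trial's inclusion criteria. The model name, prompt, and criteria are assumptions made for this example; the call shown uses the chat-completions interface of the official openai Python SDK.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Fictional, de-identified patient summary and inclusion criteria.
# Real patient data should never be sent to a third-party API without
# the consent and contractual safeguards discussed in this article.
patient_summary = "62-year-old with type 2 diabetes, HbA1c 8.1%, no insulin use."
criteria = "Adults 50-75 with type 2 diabetes, HbA1c between 7.5% and 10%, not on insulin."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You screen patients for clinical trial eligibility. "
                    "Answer 'eligible' or 'not eligible' with a one-sentence reason."},
        {"role": "user",
         "content": f"Criteria: {criteria}\nPatient: {patient_summary}"},
    ],
)
print(response.choices[0].message.content)
```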

Like every AI model, ChatGPT depends on troves of data to be trained. In healthcare, this data is often confidential patient information. ChatGPT is a new technology that has not been thoroughly tested for data privacy, so inputting sensitive health information may have huge implications for data protection. Its accuracy is also not yet reliable. Six in 10 American adults feel uncomfortable with their doctors relying on AI to diagnose diseases and provide treatment recommendations, and observers were only mildly impressed when the original version of ChatGPT passed the U.S. medical licensing exam, though just barely.

In March 2023, following a security breach, Italy's data protection regulator temporarily banned ChatGPT from processing Italian users' data over privacy concerns. The watchdog argued that the chatbot lacked a way to verify the age of users and that the app “exposes minors to absolutely unsuitable answers compared to their degree of development and awareness.” The service was later restored after OpenAI introduced a set of privacy controls, including a privacy policy that explains “how they develop and train ChatGPT” and a tool to verify users' ages.

Even with an updated privacy policy, that may not be enough to satisfy the GDPR unless the data on which the model was trained is made public and the system's architecture is made transparent, TechCrunch reports: “It is not clear whether Italians’ personal data that was used to train its GPT model historically, i.e., when it scraped public data off the Internet, was processed with a valid lawful basis — or, indeed, whether data used to train models previously will or can be deleted if users request their data deleted now.”

The development and implementation of AI in healthcare come with trade-offs. AI's benefits may well outweigh the privacy and security risks, but it is crucial for healthcare organizations to take those risks into account when developing governance policies for regulatory compliance, and to recognize that antiquated cybersecurity measures cannot cope with a technology as advanced as AI. Until regulations around AI become clearer, patients' safety, security, and privacy should be prioritized through transparency, granular consent and preference management, and due diligence on third-party vendors before partnering for research or marketing purposes.