By: Craig Roxborough, Registrar and CEO
Recently I had the pleasure of attending the Ontario Physiotherapy Association (OPA) InterACTION conference. The keynote focused on Artificial Intelligence (AI), providing some background on the development of this technology and discussing its role and future in clinical practice.
It was clear in the presentation and discussion that there is potential here. AI will inevitably evolve to a point where it augments or complements clinical practice, making it more efficient and potentially more effective. It was also clear that there are many unknowns and risks, particularly given the ‘black box’ nature of these tools and our inability to ‘see’ how they work.
So, I wanted to take a moment to explore AI a bit further and outline some key considerations for physiotherapists looking to integrate these tools into their practice. This is not the first time an innovation has landed on our doorstep, so reviewing existing guidance is a good start as we navigate this evolving world together.
Supporting Practice
While there is no consensus yet on how effective or helpful AI tools will be, there are early indications that AI can benefit practice.
We know that many healthcare professionals are facing burnout, particularly from the more administrative tasks of managing a practice. Tools are already being developed to help, whether by supporting record-keeping practices or by producing treatment recommendations after a clinical assessment. At InterACTION, we heard about using AI to build out routine exercise protocols.
Off-loading these tasks to AI is attractive because it reduces administrative burden and enables physiotherapists to spend more time on patient care. But while these benefits may be real, there are also risks that need to be addressed, and human intervention and oversight remain essential.
Mitigating Risks and Addressing Challenges
Tools that facilitate charting, diagnosis, or treatment recommendations are not new: dictation and transcription services already exist, and clinical practice guidelines provide an algorithmic approach to assessing and treating a patient. What these tools have in common is the need for a clinician to ensure the output is sound and that their legal and professional obligations are met. The same is true for AI.
Privacy
One of the biggest questions around the use of AI in healthcare relates to privacy, particularly how patient health information is managed, stored, and used within these systems.
In Ontario, patient health information is protected by the Personal Health Information Protection Act (PHIPA). Any software tool with access to personal health information needs to be used in compliance with PHIPA. This is true for electronic medical records and virtual care tools, as well as any AI tool being integrated into practice.
When adopting AI tools, you may find it very difficult to ensure that you meet these legislative requirements. Concerns include where the data is stored (e.g., outside of Canada), whether the data is being scrubbed and used for other purposes, and what security protocols exist to protect information in the system.
PHIPA is currently being updated to set requirements for all consumer electronic service providers managing patient health information. This may be helpful going forward, but ultimately, it’s your responsibility to ensure you protect patients’ health information in compliance with PHIPA.
Consent
Given the uncertainty and unique nature of AI, the default recommendation emerging at this point is for clinicians to obtain consent from patients before using AI. This includes explaining to patients what the tool is, how it will be used, and what the risks are, and ensuring they are comfortable with its use in their care.
Bias
Current AI tools develop their abilities by training on extensive data sets. These data sets, however, are likely to reflect or contain bias, leading some critics to warn that AI outputs risk marginalizing or discriminating against equity-seeking groups. Clinicians must use their judgement to evaluate the outputs and ensure that the specific circumstances of the patient are always considered and reflected in the outcome.
Accuracy
The accuracy of AI tools in charting or in making diagnoses and treatment recommendations has not yet been fully validated. Ultimately, physiotherapists are responsible for their charting and for the care they provide, regardless of the process. You need to evaluate any output from AI tools to ensure it accurately and comprehensively captures the care that has been provided, supports a treatment plan that is clinically sound, and reflects the patient’s unique needs.
Looking Forward
We’re only just scratching the surface. Regulators are actively examining the implications of AI and considering a regulatory response that ensures we move forward ethically and responsibly. In fact, this blog has been informed by existing guidance from the College of Physicians and Surgeons of Alberta.
For now, it’s helpful to revisit our core obligations to ensure these tools are integrated responsibly. Keep these points in mind as you consider integrating AI tools into your practice:
- Protect patient health information at all times.
- Engage your patients in the process and obtain their consent.
- Critically evaluate any output generated by these tools to ensure it reflects each patient’s unique circumstances.
- Only make diagnoses and treatment recommendations that are clinically sound and align with your professional judgment.
- Proceed with optimism and caution.