Ensuring equitable access to healthcare in the age of algorithms and AI

Yesterday, Dr. Peter Vaughan, chair of the board of directors of Canada Health Infoway, spoke at Longwoods’ Breakfast with the Chiefs.

After outlining the current state and future perspectives of digitization in healthcare, his main message was two-fold:

1. We are at risk of a “failure of imagination”: we cannot fathom all the possible futures that digital disruption might confront us with, and hence fail to plan adequately for their pitfalls.

2. There is great potential for algorithms to be built in ways that solidify and deepen inequalities that currently exist in our system, and we need government oversight of such algorithms to prevent this from happening.

The first point is easy to understand; the second may need a little more explanation. Algorithms are widely used to determine what information is presented to us online and what choices are offered to us. We are all familiar with websites offering us items we ‘might also like’, based on our past choices and on what other purchasers have bought.

At a time when data from various sources can be linked to create sophisticated profiles of people, it would be easy for a healthcare organization to identify individuals who are potentially ‘high cost’ and to deny them service or restrict their access to services. Bias can creep into algorithms quickly. If people of a certain age, ethnic background or location are deemed to be ‘higher risk’ for some health issues or for unhealthy behaviours, and this is built into an algorithm that prioritizes ‘lower risk’ customers, then you are discriminated against if you share that profile, no matter how you actually behave.

Discrimination is often systemic, unless a conscious effort is made to break the cycle in which disadvantaged circumstances lead to failure to thrive, which in turn leads to lower opportunity in the future. As Dr. Peter Vaughan pointed out, we in Canada value equitable access to healthcare, education and other public goods. We expect our government to put safeguards in place against discrimination based on background and circumstances. But how can this be done?

Private, for-profit enterprises have a right to segment their customers and offer different services to different tiers, based on their profitability or ‘lifetime customer value’. Companies do this all the time; it is good business practice. But what about a private digital health service that accepts people with low risk profiles into its patient roster, but is unavailable to others whose profile suggests they may need a lot of services down the line? Is this acceptable?

And if the government were to monitor and regulate algorithms related to the provision of public goods (such as healthcare), who has the right credentials to tackle this issue? We would need people who understand data science – how algorithms are constructed and how AI feeds into them – social sciences – to identify the assumptions underpinning the algorithms – and ethics. Since technology is moving very fast, we should have started training such people yesterday.

And how could algorithms be tested? Should this be part of some sort of approval process? Can testing be done by individuals, relying on their expertise and judgement? Or could there be a more controlled way of assessing algorithms for their potential to disadvantage certain members of society? Or is there potential to automate this process?

There may be an opportunity here to develop a standardized set of testing tools to which algorithms could be subjected. For example, one could create profiles that represent different groups in society and test-run them as fake applicants for a given service.
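To make this idea concrete, here is a minimal sketch of what such a testing tool might look like. Everything in it is hypothetical: the `Profile` fields, the `triage_score` function (standing in for an algorithm under review), and the audit helper are illustrative assumptions, not a real certification procedure. The core idea is to submit identical fake applicants that differ in only one attribute and compare the outcomes.

```python
# Hypothetical sketch of an algorithm-audit harness: run identical
# synthetic applicant profiles that differ in a single attribute and
# compare the scores the algorithm assigns them.

from dataclasses import dataclass, replace

@dataclass
class Profile:
    age: int
    postal_prefix: str   # stand-in for location
    prior_visits: int    # past health-service usage

def triage_score(p: Profile) -> float:
    # Hypothetical algorithm under test (not a real system):
    # applicants with a higher score are prioritized.
    score = 1.0
    score -= 0.05 * p.prior_visits
    if p.age > 65:
        score -= 0.3  # an age-based penalty an audit should flag
    return score

def audit_attribute(base: Profile, attribute: str, values) -> dict:
    """Score copies of `base` that differ only in one attribute."""
    return {v: triage_score(replace(base, **{attribute: v}))
            for v in values}

# Two fake applicants, identical except for age.
base = Profile(age=40, postal_prefix="M5V", prior_visits=2)
results = audit_attribute(base, "age", [30, 70])

# A non-zero gap means the algorithm treats otherwise-identical
# applicants differently based on age alone.
gap = abs(results[30] - results[70])
print(f"score gap attributable to age: {gap:.2f}")
```

A real toolkit would run many such profile pairs across age, location, ethnicity and other attributes, and report every gap above some agreed threshold to a regulator rather than printing it.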

Also, algorithms change all the time, so one would perhaps need a process of re-certification in place to ensure continued compliance with the rules.

And then there would be the temptation for companies to game the system. If a standardized set of test cases were developed to assess algorithms for social acceptability, companies might write code to identify and ‘appease’ these test cases while continuing to discriminate against real applicants.

In any case, this could be an interesting and important new field for social scientists to go into. However, one must be willing to combine the ‘soft’ social sciences with ‘hard’ statistics and IT skills, and find the right learning venues to develop these skills.

Much food for thought. Thank you, Dr. Peter Vaughan!