When we think about digital health, it is easy to dive straight into the latest medical discoveries and technological innovations accelerating this market's rapid growth. However, we rarely consider the more critical perspectives surrounding digital health. In this blog we will look at the discipline of sociology and, specifically, how the study of society can help us develop more inclusive, equitable, effective and successful digital health solutions and interventions.
Social scientists argue that the techno-medical discourse framing digital health must be cognisant of the bigger picture: society fundamentally shapes the way we treat and care for people. Why are some patients more resistant to digital health solutions than others? How can we explain the discrepancies in the levels of trust afforded to some interventions as opposed to others? How can we ensure we create digital solutions that are socially responsible and ethical? Sociology can help answer some of these questions about barriers to engagement with digital health - and ensure that the needs of the patient, who matters most in this equation, remain the pivotal focus at every stage of digital health development and implementation.
The importance of cultural responsiveness – do algorithms discriminate?
Sociology must be utilised during the development of digital health solutions in order to create solutions that are inclusive, intersectional and culturally responsive across all social groups. It is easy to misclassify ‘patients’ as one homogenous social group. This is far from the reality – we need only look at how Covid-19 outcomes differed disproportionately along racial and ethnic lines. We can apply this same logic when thinking about digital health interventions more broadly.
Take the diabetes population, for example – the demographic is broad and not confined to any single homogenous identity. Whether because of age, gender, race, disability or comorbidities, people with the same illness respond differently to different interventions. We cannot put all people with diabetes in one box and expect a digital solution to produce the same effect for them all. While some will successfully manage their condition via a digital companion, others will be far less receptive to incorporating technology into the management of their disease.
The next question, then, is why minority groups are sometimes left behind when it comes to digital health. The answer lies in our unconscious bias. Is it intentional? Probably not – but because technology is shaped by the society that builds it, the systemic biases and discriminatory practices of our social world become baked into datasets and algorithms. An algorithm cannot be taught social concepts such as equality and fairness; it can only build on what it knows – the data it is fed. If minority groups are underrepresented in datasets, the algorithms will, naturally, perform worse for them.
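This effect can be illustrated with a toy simulation (the groups, distributions and numbers here are entirely hypothetical, chosen purely for illustration). A trivial one-feature classifier is "trained" on a dataset dominated by a majority group, then evaluated separately on each group. Because the minority group's measurements follow a slightly different distribution but contribute little to the training data, the fitted decision threshold suits the majority and the minority's accuracy suffers:

```python
import random

random.seed(0)

def make_group(n, neg_mean, pos_mean):
    # Half negative, half positive cases; one noisy "biomarker" feature
    data = [(random.gauss(neg_mean, 1.0), 0) for _ in range(n // 2)]
    data += [(random.gauss(pos_mean, 1.0), 1) for _ in range(n // 2)]
    return data

# Majority group dominates the training set (9,000 vs 1,000 records);
# the minority group's biomarker baseline is shifted upwards.
train = (make_group(9000, neg_mean=0.0, pos_mean=2.0)
         + make_group(1000, neg_mean=1.5, pos_mean=3.5))

# "Train" a one-feature classifier: pick the threshold midway
# between the pooled class means -- dominated by the majority group
neg_avg = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
pos_avg = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (neg_avg + pos_avg) / 2

def accuracy(data):
    # Fraction of cases where the threshold rule matches the true label
    return sum((x >= threshold) == bool(y) for x, y in data) / len(data)

# Evaluate on fresh samples drawn from each group separately
acc_majority = accuracy(make_group(4000, 0.0, 2.0))
acc_minority = accuracy(make_group(4000, 1.5, 3.5))
print(f"majority accuracy: {acc_majority:.2f}")
print(f"minority accuracy: {acc_minority:.2f}")
```

Nothing in this sketch is "racist" in intent: the same rule is applied to everyone. The disparity emerges purely from whose data dominated the training set, which is exactly the point the sociological critique makes.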
One example of this is AI-powered facial recognition technology and its misclassification of darker-skinned women. The renowned computer scientist and digital activist Joy Buolamwini has shown that facial recognition algorithms tend to be racially biased, and are particularly likely to misclassify and misgender darker-skinned women. Looking specifically at healthcare, we have also seen racial bias in pulse oximetry, where misleading readings were reported in 3.6% of white patients, compared with 11.7% of Black patients.
Are these algorithms and medical interventions inherently racist or sexist? No. The reality is that the demographic of Silicon Valley, and of the creators of such technology, has mostly tended to be white, male and able-bodied, and the data these creators feed to algorithms can be skewed as a result. During the creation stages of such technology, we need to employ an intersectional approach - one that accounts for the spectrum of identities an individual may hold. For example, a disabled Black woman will not navigate the same social world as an able-bodied white man, and digital health solutions must account for this. We need to include a wider demographic of people, not just the ‘default’ which traditionally excludes minority groups based on race, gender, disability, and so on.
What can we do to ensure a culturally responsive tech sphere? We can create policy that guarantees creators are educated in the social sciences and understand how their unconscious bias may be amplified through their digital innovations. These may be difficult conversations, but we must address the implicit biases in our social world and work together to dismantle them. If digital health interventions are culturally responsive, they will earn more trust.
How can we explain other barriers to engagement?
The democratisation of healthcare alongside digital technologies is helping patients become key decision-makers, active leaders, instigators and experts in their own healthcare. It is no longer enough for patients to be passive participants in their health. While this leads to improved efficiency, reduced costs and, most importantly, improved health outcomes, this transformation - which Dickenson (2013) coined the shift from ‘we medicine’ to ‘me medicine’ - is understandably overwhelming for many.
As the prevalence of digital health solutions grows, issues such as compliance may arise from a patient perspective. People disengage from the digital for a wide variety of reasons. The socioeconomic determinants of health play a part: disparities in socioeconomic status leave many patients without the resources to fully immerse themselves in certain digital health interventions. Engagement techniques such as gamification are often offered as the empowerment solution for patients who are otherwise disengaged, or simply indifferent. While injecting a little fun into healthcare seems relatively harmless, and can be quite effective, it may also be seen as reshaping the meaning of health into a commodity or good to be ‘earned’. For some, particularly those who are very ill, health cannot, and never will be, playful. We must ensure that techniques such as gamification are not used in a manner that undermines the patient experience of health.
The most important route to gaining trust in digital is for providers to keep patient perspectives close. There is no doubt that a number of digital health companies and products are conscious of this, and have taken patient-centric and user-centric design principles seriously, building them into the ‘experience’ of using a tech tool to manage health and wellness. Some of these products are built by teams that include sociologists, behavioural scientists and other social researchers. The recognition that patients are people, and should be involved in the building of their healthcare, goes a long way towards building trust.
Going forward, we should be having those difficult conversations. We should think outside the box. We should understand why digital health interventions face barriers to engagement. Because when we start looking at the bigger picture, the pieces start falling into place.