Rheumatic Heart Disease – Time for a rethink?

Acute Rheumatic Fever (ARF) and Rheumatic Heart Disease (RHD) are increasingly rare in wealthy societies but remain prevalent in the Third World and in marginalized groups. Prophylactic treatments have not changed since the 1950s. There is a need for improved diagnostic and treatment solutions.

One hundred years ago in Western society, Acute Rheumatic Fever (ARF) and Rheumatic Heart Disease (RHD) were common. Many sufferers were condemned to an early death from heart failure. Nowadays these diseases are all but unknown in most resource-rich Western societies and interest in research has declined. But the disease is still common in the Third World and in marginalized groups in the West such as Remote Indigenous Communities. Indeed, Australia has the highest rates in the world in the remote Northern Territory. We still use prophylaxis developed in the 1950s, and treatment remains unchanged.

The history of prophylaxis (1)

With the discovery of antibiotics in the 1920s and 1930s, and the linkage of Rheumatic Fever with infection by Group A streptococcus, attention turned to antibiotic treatment. It was found that Sulphonamides reduced the incidence of recurrent attacks of Acute Rheumatic Fever and of progressive valve damage.

When Penicillin came into clinical use in the 1940s it proved to be even more effective and less prone to side-effects than Sulphonamides. But all these regimes needed frequent administration. A poorly soluble depot preparation known as Benzathine Penicillin was developed in the 1950s and has remained the mainstay of prophylaxis for RHD ever since. It can be administered by injection every 3-4 weeks. While there has never been a controlled trial of this regime, there is good empirical evidence that it reduces ARF recurrences by two thirds. Group A Streptococcus has remained sensitive to it while many other bacteria have developed resistance. In recent years Benzathine Penicillin has become increasingly difficult to source as drug manufacturers turn to more lucrative products. Patients have also become more resistant to painful intramuscular injections. In spite of Benzathine Penicillin’s success, there is clearly a need for a better prophylactic solution.

The History of Treatment

Treatment of an episode of Acute Rheumatic Fever remains symptomatic – analgesia for joint pain, antipyretics for fever and Valproate for Chorea. There is no current treatment to reduce the autoimmune mediated damage to heart valves. As far as I am aware there are no treatments under development. Any residual Streptococcal infection is treated with Benzathine Penicillin.

The History of Diagnosis

Acute Rheumatic Fever (ARF) has always been a clinical diagnosis and remains so today. There is no single source of truth – the diagnosis is made on the basis of major and minor criteria as devised by Jones (5). These criteria have been revised and relaxed over time to make them more sensitive. The corollary is that they have become less specific, particularly as the disease has become rarer. Here the “pretest probability” problem comes into play: in a low-probability population (RHD is still relatively uncommon, even in Remote settings), a test or intervention with poor specificity will generate many false positives.

The symptom/sign that generates most problems is joint involvement. Fever and arthralgia are common in many illnesses. Objective arthritis is less common, polyarthritis even less so. In the early days of the Jones Criteria, two major and one minor criterion were required for a diagnosis. This has now been relaxed to one major and two minor criteria. Moreover, in the early Jones criteria the only major criterion involving joints was polyarthritis – ie objective signs of arthritis (redness, warmth, effusion) in several joints. This has now been relaxed to allow monoarthritis or even polyarthralgia (subjective pain in several joints) as a major criterion. In practice this means that a patient presenting with fever and arthralgia (common in viral illnesses), but without any other relevant signs, can be labelled as ARF. Often the details of a clinical presentation are not recorded; in particular, the results of examination of the joints may not be available.

Other major criteria such as carditis (Echo changes, new murmur, heart failure) and Chorea are more specific and predictive of ongoing RHD. In practice, the oft-described erythema marginatum and subcutaneous nodules are rarely seen. Interestingly, the finding of PR changes on ECG has never been regarded as a major criterion or evidence of carditis, though it appears to be specific in practice and is easy to check at first assessment.
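
To illustrate the pretest probability problem in numbers, here is a minimal sketch (in Python) of how the positive predictive value of a criterion set collapses in a low-prevalence population. The sensitivity, specificity and prevalence figures are illustrative assumptions, not published values for the Jones criteria.

```python
# Illustrative only: how pretest probability (prevalence) drives the positive
# predictive value (PPV) of a set of diagnostic criteria. The figures below
# are assumptions for the sketch, not published values for the Jones criteria.

def positive_predictive_value(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Bayes' theorem: probability of disease given a positive result."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Assume 2% of presentations with fever and joint symptoms are true ARF.
relaxed = positive_predictive_value(prevalence=0.02, sensitivity=0.95, specificity=0.80)
strict = positive_predictive_value(prevalence=0.02, sensitivity=0.75, specificity=0.95)

print(f"Relaxed criteria (sensitive, less specific): PPV {relaxed:.0%}")  # ~9%
print(f"Strict criteria (less sensitive, specific):  PPV {strict:.0%}")   # ~23%
```

Even with these generous assumptions, most “positive” diagnoses under the relaxed criteria are false positives – the arithmetic, not the clinicians, does most of the damage.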

Incidence (2,4)

Rates of both ARF and RHD have been increasing in recent years. The reason for this is not clear – I have discussed the possibilities in a previous post

https://tjilpidoc.com/2024/06/13/rheumatic-heart-disease-a-new-epidemic/

In a paper from 2011, after a first ARF diagnosis, 61% developed RHD within 10 years. After RHD diagnosis, 27% developed heart failure within 5 years. So it is important to identify patients with ARF and prevent recurrences with prophylaxis.

There were 172 cases of surgery for RHD in indigenous patients in Australia and NZ in the period 2001-2012 (3). On average this is less than 20 cases a year.

In 2023 in Queensland, Western Australia, South Australia and the Northern Territory, 97 people underwent surgical events for RHD (one event per person). Most of these (75 people, 77%) were First Nations people. (ref)

So clearly there has been an increase in RHD clients undergoing surgery – is this due to better access, or is there a real increase in RHD?

The Northern Territory appears to have a dramatically higher incidence of ARF and RHD than other states with significant indigenous populations such as Queensland and WA. Again the reason for this is not clear – it seems intuitively unlikely that these indigenous populations are less prone to RHD.

Promotion of ARF diagnosis

There has been increased awareness of ARF and RHD in Remote communities in recent years, with campaigns to educate health staff and promote the idea that ARF should be considered in patients presenting with fever and joint symptoms. While this is admirable, we know from a Northern Territory study that many patients entering hospital with a provisional diagnosis of ARF have an alternative diagnosis at discharge (6). Because of this promotion, ARF has become the “probability diagnosis” for this scenario in many places. Streptococcal titres have become a de facto criterion when in fact they are a poor positive discriminator of ARF. Diagnostic “precision” appears to have declined, with alternatives not considered. Many of these patients do not have an authoritative assessment by a senior clinician at the time – this is deferred to a later date. This is problematic because relevant clinical symptoms and signs resolve, or the patient may not see the clinician at all. Once a provisional label of ARF/RHD is attached to a patient, it can be impossible to remove, even in doubtful cases.

What are the costs of misdiagnosis?

There is a significant imposition on the client with a diagnosis of ARF. They are subjected to monthly injections and periodic reviews for anything up to 10 years. The Health service also bears significant costs. Some of the differential diagnoses of ARF carry significant risk (eg osteomyelitis, septic arthritis, Slipped Capital Femoral Epiphysis). Clearly if these are not treated in a timely fashion there is a risk of long term disability or even death.

Outcomes from different presentations

Many cases of RHD are found when they are already established, presenting as heart failure or murmurs, or on screening (eg the “Deadly Heart Trek”). Those presenting with Chorea have a high correlation with later development of RHD. While the paper I have previously quoted suggested a high rate of RHD development in all cases of ARF, on my personal review of records those presenting with joint symptoms alone appeared to have a lower rate of documented RHD even after some years.

Where to from here?

ARF/RHD remains a significant problem in Remote Australia and marginalized groups but treatment and assessment protocols have not changed in recent years. ARF remains a clinical diagnosis. There is a significant rate of misdiagnosis with associated costs and risks. If a single test to prove or disprove ARF could be developed, this would be an advance. There has never been a treatment to reduce the immune mediated harm of ARF. In the age of targeted antibodies, perhaps this issue could be revisited. A better prophylactic drug should also be sought.

In the meantime it should be policy that all new cases of ARF are assessed at the time by a Senior Clinician to avoid “mislabelling” as much as possible.

References

(1) Wyber R, Carapetis J. Evolution, Evidence and Effect of Secondary Prophylaxis for Rheumatic Fever. Journal of the Practice of Cardiovascular Sciences. 2015;1(1):9-14. DOI: 10.4103/2395-5414.157554

(2) Lawrence JG, Carapetis JR, Griffiths K, Edwards K, Condon JR. Acute rheumatic fever and rheumatic heart disease: incidence and progression in the Northern Territory of Australia, 1997 to 2010. DOI: 10.1161/CIRCULATIONAHA.113.001477

(3) Russell EA, Tran L, Baker RA, Bennetts JS, Brown A, Reid CM, Tam R, Walsh WF, Maguire GP. A review of valve surgery for rheumatic heart disease in Australia. BMC Cardiovasc Disord. 2014;14:134. DOI: 10.1186/1471-2261-14-134

(4) Australian Institute of Health and Welfare. Acute rheumatic fever and rheumatic heart disease in Australia (AIHW data on recent increases in incidence).

(5) Rheumatic fever: identification, management and secondary prevention. Australian Family Physician. 2012;41(1). https://www.racgp.org.au/afp/2012/january-february/rheumatic-fever

(6) Ralph A, Jacups S, McGough K, McDonald M, Currie BJ. The challenge of acute rheumatic fever diagnosis in a high-incidence population: a prospective study and proposed guidelines for diagnosis in Australia’s Northern Territory. Heart Lung Circ. 2006;15(2):113-8.

Simplify Vaccination to halt the decline in rates

Vaccination has proven safe and highly effective in reducing infectious disease. But commercialization, increasing complexity, and resistance from anti-vaccine groups are affecting vaccination rates and herd immunity. Complex schedules, poor database connectivity, training requirements and cold chain requirements create an administrative overhead which is contributing to declining vaccination rates. Abolishing training requirements, reducing complexity and harmonizing data management could improve immunization outcomes.

Vaccination is one of the more effective interventions that the Health system undertakes. The early vaccines were life savers – historical accounts of diphtheria, for example, were harrowing. Polio virtually disappeared, and would have done so completely but for ideological and political opposition in some parts of the world. Whooping cough is a distressing and potentially damaging illness, particularly in younger children. Tetanus is deadly but is virtually unknown in vaccinated subjects. General Practice has provided a major proportion of vaccination services, though other providers such as pharmacies now do so as well.

Commercialization

But of course, like everything else in the health system, vaccines are now a commercial opportunity – more and more are produced for “edge cases” where the disease they prevent causes less and less morbidity and virtually zero mortality. For example, Chicken pox used to be regarded as a “nuisance” childhood illness, albeit with occasional serious morbidity. RSV vaccine for adults has been promoted recently – as a clinician I don’t recall a single case of RSV-related disease in older adults. The absolute incidence worldwide varies, but is probably of the order of 100/100,000 annually – ie 0.1%. Like much in Medical Evidence nowadays, these commercial drivers bedevil discussion about vaccines – what was once an undeniable benefit has become marginal.

Objectors

In spite of its effectiveness, vaccination has generated enormous resistance from some groups with conspiracy theories and misinformation rife. These groups generate significant publicity and political heat, particularly via social media. The political response has been to over-regulate vaccination with onerous requirements for training and recording.

Complexity and Administrative burden

The vaccine schedule has become increasingly complex with all these new vaccines. Of course, each state Health Department has different views on priorities. Sometimes this is justified – for example, Hepatitis B had a carrier rate of up to 30% in some populations in the NT. This has now dropped dramatically as a result of vaccination. But the end result is that the vaccine schedule is different in each state. Funding may determine which vaccine is given. Some groups are funded by Government, while others are not – records have to be kept of subsidized vaccines all the way down to individual serial numbers. Even wastage must be recorded. A provider must search various clunky, separate databases to ensure that the patient has not already received the vaccination. A duplicate dose is treated as a major incident when probably the worst that will happen is that the patient will have better immunity. Due to the complexity of the schedule, it may be difficult to decide which vaccines are due, particularly for clients from interstate and for “catchup” vaccination. There are various requirements for notification to databases, eg the Australian Immunisation Register (AIR), some of which are quite onerous. All these requirements increase the complexity and administrative burden for the vaccine provider.

Cold Chain

Vaccines must be stored within a defined temperature range from production to administration or they risk being destroyed or degraded. This is certainly a problem in the third world in delivering vaccines to remote places.

In Australia, cold chain requirements for vaccine storage are onerous and expensive – special temperature controlled refrigerators with data logging are required. The standard temperature range for storage is 2 to 8 degrees Celsius. While it is true that freezing denatures many vaccines, periods of over-temperature may only shorten storage life and the vaccine may still be usable. Vaccination bureaucrats from the local Primary Health Network organizations (PHNs) seem to think they have the right to walk into a private GP practice, inspect vaccine storage arrangements and demand logs of temperature readings. They regard any deviation from the standard temperature range as requiring disposal of the vaccine, often at the cost of the private practice.

Are rates declining?

The rate of vaccination has increased steadily over many years but has reached a plateau and now appears to be declining (Australian Govt data).

It is probably still at a level where “herd immunity” (population immunity) overall is good enough to prevent outbreaks of vaccine preventable disease. But there are local areas where herd immunity is lower and here the risk of vaccine preventable disease is increased.

Why are rates declining?

There are many possibilities – the very success of vaccines has made the diseases they prevent rare. Most members of the public have no experience of these conditions and may not see the necessity for many vaccines as a result. Vaccine objectors have become more common with widespread misinformation on social media. Perhaps they have a point as vaccines are increasingly produced for “edge cases” with little morbidity or mortality.

Agency nurses in Primary Care (common nowadays) may simply refuse to give vaccinations – “Oh no doctor, I don’t have the certificate”. They may continue to refuse even when informed that anyone can give a vaccine under doctor supervision. As a result of the complexity of the schedule, onerous training requirements, the cost of infrastructure and the administrative “overhead”, many providers simply bypass the vaccination when it is due and move on to more lucrative work. Vaccination is a safe procedure that is now massively over-regulated and complex. In my opinion this administrative “overhead” has started to have an impact on vaccination rates. Fewer vaccination services are being provided in General Practice as a whole due to its decline and the cost of providing infrastructure and Practice Nurses.

Actions

If we are to reverse the decline in rates, I would advocate removing mandatory training, reducing the complexity of the schedule, harmonizing and connecting databases and reducing the administrative complexity of vaccination.

References

https://www.health.gov.au/topics/immunisation/immunisation-data/childhood-immunisation-coverage/immunisation-coverage-rates-for-all-children

Doctors and Aging – are we throwing the Wisdom out with the Bathwater?

The article discusses concerns about aging doctors’ competency and safety. They may experience cognitive decline, particularly affecting fluid intelligence. AHPRA plans regular performance reviews for doctors over 70 to address this. But is screening effective? It may unjustly impact experienced physicians who provide a valuable service to the community.

Aging Doctors have been in the news lately – are they safe and competent? Are they keeping up with current practice? Decline in performance is inevitable with age. In particular, dementia increases with age and the line between “forgetful” and significant cognitive decline can be difficult to define. Neurological conditions causing physical impairment, such as Parkinson’s Disease, also increase with age.

Licensing agencies such as AHPRA in Australia are proposing regular reviews of performance based on age, to capture decline before it becomes problematic. But will these reviews detect such decline and improve safety, or simply impose cost and inconvenience?

“Fluid” and “Crystalline” Intelligence

The concepts of “fluid” and “crystalline” intelligence were first proposed by Cattell in 1963.

Fluid intelligence can be thought of as problem solving ability by reasoning – “thinking on your feet”. Crystalline intelligence involves problem solving using a “library” of past memories – ie a heuristic pattern recognition process. This can be seen as “experience” and perhaps even “wisdom”. Fluid intelligence declines with age, particularly if it is not exercised and trained, but crystalline intelligence continues to increase with age – see below.

The Age of Peak Performance

A decline in performance is inevitable with age, particularly for those who achieve elite status.

For careers requiring peak physical performance such as gymnastics, competitive sport or elite athletics, performance peaks early – often in the 20s or 30s.

In careers that rely on fluid intelligence, performance typically peaks in the 40s or 50s. In a study of Nobel Laureates, the most common age for producing a magnum opus was the 30s. Poets typically peak in their 40s, while novelists take a little longer. High achieving scientists spend their later career promoting their achievements and teaching. In the business world many older job seekers complain that they are not hired. Is this because of “ageism”, or is it because business requires “fluid intelligence”?

What type of intelligence makes a good doctor?

It is noteworthy that many doctors continue to practice into old age – some even into their 80s and 90s. These older doctors appear to be functioning effectively. This would suggest that their practice depends more on crystalline than fluid intelligence – ie their “experience” is valuable. In clinical roles such as Primary Care they work from their library of patterns built up over a lifetime of practice. They have a long term relationship and knowledge of their clients.

What makes a good Surgeon?

Some Procedural Medical Roles such as Surgery depend on manual performance as well as functioning cognition. The performance of surgery requires skill in the manual techniques involved, such as dissection and suturing. It also needs skill in planning and sequencing, and detailed knowledge of anatomy and pathology. In the surgical world of today many techniques are now endoscopic rather than “open”. Here, the ability to visualize and manipulate in 3 dimensions is important – this is arguably a “fluid intelligence” skill. Impairment of vision or physical ability, such as with Parkinson’s disease, could be expected to impair surgical performance.

But as well as the manual skills needed for surgery, clinical skills are still necessary for the “complete surgeon” – as the old adage says, “choose well, cut well”. While the surgeon does not have to deal with undifferentiated or complex/multimorbid presentations as in Primary Care, he/she must still have good clinical method and the ability to judge whether intervention will improve the outcome. Here experience and “crystalline intelligence” are important.

Screening for impairment.

We know that it is difficult to show benefit from screening in general. A general checkup does not improve outcomes (see my article https://tjilpidoc.com/2023/05/29/get-your-checkup/). It seems that while many screening tests have been proposed, very few pass the tests of early detection, possible intervention and improved outcomes. But here the test is simply to detect impairment and intervene before harm is done – is there a fair and reasonably economical way to screen doctors for impairment?

The Health practitioner regulation national law defines an impairment of a health practitioner as “… a physical or mental impairment, disability, condition or disorder (including substance abuse or dependence) that detrimentally affects or is likely to detrimentally affect … the person’s capacity to practise the profession”.

Currently AHPRA relies on reporting of impairments by the practitioner themselves, other practitioners, clients or the General Public. Reports can be anonymous though this does make progressing them difficult. There are various tools and scales available to measure impairment but these are generally in the context of compensation or disability assessment for benefits.

But now AHPRA is proposing to screen doctors who are apparently functioning normally on the basis of age.

Perhaps the closest analogy to the proposed AHPRA assessment is the “Fitness to Drive” process that GPs must perform on older drivers. While it is relatively easy to perform visual acuity assessment, cognitive ability is much harder to assess. Crude tools such as the Mini Mental State Examination (MMSE) will show gross deterioration, but more subtle deterioration is difficult to detect. One meta-analysis looked at 2247 articles (4). Only 4 met psychometric criteria and only one met “clinical utility criteria”. The study authors concluded that on-road functional testing remained the gold standard of assessment. I am not aware of similar studies aimed at Medical practitioners. Perhaps the gold standard here would be functional testing by another practitioner? This is difficult enough with trainee Registrars. The cost and difficulty of such a process are obvious. In practice most impaired drivers are detected as a result of accidents or by their relatives driving with them – the mandatory driver tests become an opportunity to stop them driving.

I could find no studies which showed actual measured improvements in safety with driver testing though there are many statements on various websites of its necessity. In our practice, it appears that consensus drives what we do. As with much in the world today “say it often enough and it becomes truth”!

Conclusion

AHPRA is proposing a regular mandatory checkup for doctors over 70 to assess their competence. Clinical skills are based largely on “crystalline intelligence”, which increases with age. Screening for disease or impairment is difficult and there is no evidence-backed, reasonably economical method to assess subtle cognitive impairment. We may be throwing away experienced doctors who provide a valuable service in the Health system without improving clinical safety.

References

(1) “Your Professional Decline Is Coming (Much) Sooner Than You Think”

The Atlantic

https://www.theatlantic.com/magazine/archive/2019/07/work-peak-professional-decline/590650/

(2) Cattell RB. Theory of Fluid and Crystallized Intelligence: A Critical Experiment. Journal of Educational Psychology. 1963;54(1):1-22.

(4) https://www.tandfonline.com/doi/full/10.1080/09638288.2025.2512057#summary-abstract

(5) AHPRA

https://www.legislation.qld.gov.au/view/html/inforce/current/act-2009-045

Corporate Dementia and the Health System – is AI the solution?

The health system is suffering “cognitive decline” due to anonymous consultations and poor continuity of care. Clinicians often ignore past medical history, leading to poor management of complex multimorbid patients and an increased rate of Clinical Error. Smart design of electronic health records and the use of targeted AI applications could enhance data visibility and improve patient care, reversing this decline and fostering continuity.

As I age I start to worry about dementia. Yes, I forget names – but generally if I wait a few minutes they will come back. But I am concerned for the Health System – it appears to be suffering from cognitive decline.  It is now routine for clients to be seen by clinicians who have never seen them before – the “anonymous consultation”. With the loss of traditional General Practice, ubiquitous but poor quality electronic medical record systems and increasing staff turnover everywhere, continuity and client relationships have suffered. Past history is “forgotten” – diagnosis relies on the presenting symptoms and observations. Clinicians are less sophisticated than they were – they are not likely to consider alternative diagnoses such as masquerades and unusual presentations. Multimorbidity is less well managed by the “anonymous” clinicians as they struggle to learn the detail of a complex client. Expertise and the knowledge of and a relationship with a client has been replaced by complex rituals of checkups and Careplans, items and counterchecks. But important issues still seem to fall through the cracks – it appears these systems are not a safety net. Lumps that should have been investigated and removed are forgotten for years before they are finally dealt with. Or perhaps they are never confronted and the patient dies – even then the potential cause is not identified. Clients may present repeatedly with the same issue with the system apparently not remembering previous investigations and outcomes.

At a policy level the constant loss of corporate knowledge due to turnover means that the same issues come up again and again – they appear to be immutable – the previous solutions and outcomes have been forgotten.   

Does Past History Matter?

We spent a lot of time in our training learning a systematic approach to the Medical History. A major part of this was obtaining the past history. It was regarded as an important predictor of the likely diagnosis on the presenting occasion, and there were often other issues that required followup. For a complex multimorbid patient it was important to obtain a full picture of the client’s medical history to plan and prioritize interventions. But it appears from the behaviour of many clinicians that this is no longer regarded as important. Past history is almost routinely ignored and the client treated for their presenting complaint. The system relies on programmed interventions such as Careplans to manage ongoing issues. But these are far from perfect – in practice they cannot be easily tailored to an individual and in many cases relevant past history is missed and forgotten. Clinicians are encouraged not to think independently but to just follow the prompts. In a previous article I discussed Complexity in Medicine – if indeed many clients can be regarded as “Complex” then a programmed approach is likely to fail. Good management of Complex Multimorbid clients (the majority of those over 40) requires a map of their issues and a tailored “thinking” approach at every encounter. When serious life-threatening illness presents, it often does so over several clinic visits over several days. When mistakes occur in this situation it is usually because different clinicians have not referred to the events of previous days – again, Past History Matters.

What is AI ?

AI (Artificial Intelligence) is the buzzword of the times. But many people don’t have a clear understanding of what it means. There have been massive advances in this field in the last 20 years and it continues to evolve rapidly. Essentially it involves large datasets. A suitable dataset is labelled with the categories or outcomes of interest. This is then used to “train” a mathematical algorithm – effectively a “black box”. Once the model has “learned” the pattern to an acceptable degree, it is supplied with “wild” data and asked to produce the same kind of outcome as in the training dataset. These AI algorithms involve many simultaneous multidimensional calculations similar to those in 3D games – hence the use of video chips optimized for this. The American company Nvidia, which started life as a video card and chip maker, has seen its share price skyrocket. More recently there has been a proliferation of LLMs (Large Language Models). These incorporate large numbers of language concepts and the relationships between them. They form the basis of smart chatbots and assistants for tasks ranging from programming to marketing. They can ingest large amounts of data such as text and extract relevant concepts from it.
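
As a concrete (and deliberately toy) illustration of the “train on labelled data, then apply to wild data” process described above, here is a minimal sketch using the scikit-learn library. The dataset and features are invented for the example; nothing here is a real clinical model.

```python
# Toy sketch of supervised machine learning: label a dataset, "train" a
# black-box model on it, then ask it to predict the outcome for unseen data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Invented data: each row is a "patient", columns are numeric features
# (imagine age, CRP, temperature); y is the labelled outcome (0 or 1).
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Hold back 20% of the data to test how well the learned pattern generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)               # "training" on labelled examples

predictions = model.predict(X_test)       # applying the model to "wild" data
print(f"Accuracy on unseen data: {accuracy_score(y_test, predictions):.2f}")
```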

The search problem for clinicians

If we accept that Past History does matter, then the first task after eliciting the presenting complaint from a client is to obtain a past history. The clinician encountering a complex multimorbid patient for the first time must try to get a complete medical picture of their client. They are searching for “anything” relevant. This involves interrogating the patient and the medical record for important concepts. These are found in letters, pathology and imaging results, progress notes and documents. If the clinician is lucky, a previous expert clinician has created a summary of the relevant issues. The search may involve working across several record systems and interfaces. But this remains difficult due to silos and access restricted for security reasons. The situation has not improved in recent years – major development projects have not allocated resources to interfaces, and vendors remain resistant to interoperability for commercial reasons. In general the record has been designed by bureaucrats searching for specific data – they have a different search problem to the clinician. Data is hidden in easily searchable “items” rather than free text (which is frowned upon). There is poor interface design, with lots of irrelevant headings, poor formatting, poor labelling, administrative entries and many “null” values. A clinician has to search this very “noisy” environment for relevant data.

How do we record a Clinical encounter?

It is important to record what happens in a clinical interaction, for various reasons. Perhaps the most important is clinical: to assist with future clinical interactions. There is also a legal imperative in case the encounter should be contested in future. Here, too much detail is never enough, and this drives much of what is in the record. Third, the record is used for administrative and research purposes. Again this drives much of the content of the record even though its primary purpose is ostensibly clinical. How to record all this? The ultimate is to take video of a consultation. This is not performed often, for many reasons including consent, storage and clinician resistance. In most cases a written account is entered by the clinician into an electronic record, with details of demographics, items describing various actions and measurements, and free text “Progress notes” describing the interaction. Finally, most systems record a “reason for encounter”, of which there may be several. Here codesets such as ICD or ICPC2 form the basis of a picklist. So an encounter can be described in various ways, from a full video to a single code.

Possible Use Cases for AI

(1)  A “Past History Engine”

Could an AI generated Past History summary help the Clinician searching the record of a “Complex” patient? Could it prompt the less sophisticated Clinician with relevant information?

Large Language Model (LLM) based systems are now widely used in various areas of commerce and legal practice to extract relevant concepts from large datasets of text. It would be possible for such a system to ingest an entire medical record with its associated documents, and even the documents linked to other systems. It can do so far more efficiently than a human – in fact it may be impossible for a human clinician to perform this task efficiently in a large record with hundreds of entries and thousands of data items.
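
To make this concrete, here is a hypothetical sketch of what a single “Past History Engine” call might look like, using the OpenAI Python client as one example of an LLM interface. The model name, prompt wording and record entries are assumptions for illustration; a real system would also need de-identification, consent, local hosting and verification of the output.

```python
# Hypothetical sketch: ask an LLM to condense free-text record entries into a
# problem-oriented past history summary. Not a production design - privacy,
# consent and accuracy checking are deliberately ignored here.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def summarize_past_history(record_entries: list[str]) -> str:
    record_text = "\n\n".join(record_entries)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a clinical summarizer. From the record entries provided, "
                    "produce a concise problem list with dates, key results and any "
                    "outstanding actions. Flag anything unresolved."
                ),
            },
            {"role": "user", "content": record_text},
        ],
    )
    return response.choices[0].message.content

# Example usage with invented entries:
# print(summarize_past_history([
#     "2019-03-02 Progress note: lump right forearm, for review",
#     "2021-07-15 Discharge summary: admitted with cellulitis, incidental AF noted",
# ]))
```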

Territory Kidney Care

This system was set up to provide a summary of the past history of the many clients in the Northern Territory of Australia (NT) who were suffering from, or at risk of, renal disease. The aim was to help clinicians manage them and to provide an early warning of deterioration through automated prompts. Some 10,000 clients were entered onto the system. Data on these clients was obtained from various sources with the relevant consents of their treating organizations. Data included problem codes, clinical measurements, pathology results and Medicare billing – this was entered into a single database which in turn was interrogated to provide a summary and timeline of various parameters, available via a web interface. This effort was privately funded and mentored by a well known research organization. It cost a small fraction of typical comparable record systems (approximately 1% of the cost of the NT Government’s new system, Acacia!). The web interface was designed by a Darwin company – it was clean, simple and easy to use. While this was not a full Electronic Record System, it did show what was possible at minimal cost and with good interface design. By contrast, Government Health IT systems are typically expensive and difficult for users to navigate.

This system did not use AI – just basic data algorithms and clean interface design with minimal administrative “noise”. It also overcame the apparently intractable problem of getting data across interfaces between different systems – data was obtained from most of the large Health organizations in the NT. It gave a useful insight into where clients were attending.

If such an approach were broadened to large record systems, with an LLM-based algorithm to mine all the text in the record, we could obtain a good summary of a client’s past history. This would improve clinical management, efficiency and safety. It could go a long way towards providing the continuity that has been lost in modern Health systems.

(2) “Recall Engine”  

One of the mechanisms used in records to maintain continuity is the “recall”. It is entered for a specific date in the future with details of the clinician targeted and the actions to take. But current systems generate large numbers of often poorly targeted or incorrect recalls. The recall may be “serviced” but not removed, or may have become unnecessary. As a result of this large (even overwhelming) number of recalls the system is difficult to use and is often ignored altogether. In my own surveys of records, outstanding recalls are not serviced in the majority of encounters. Could a smart or AI-enabled system improve this situation? It could be based on the “Past History Engine” above and generate a minimum of recalls that are relevant and targeted. It could also generate a single optimized “Careplan” for the client based on their known problems, with relevant planned interventions.
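
Even before any AI is involved, much of this could be improved with simple rules: close recalls whose action has already been recorded, and surface only what is genuinely outstanding, most overdue first. The sketch below illustrates the idea; the data structures and field names are invented, not those of any real record system.

```python
# Sketch: filter a client's recall list down to what is genuinely outstanding.
# Data structures and field names are invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class Recall:
    action: str   # e.g. "HbA1c", "Benzathine penicillin injection"
    due: date

@dataclass
class RecordedAction:
    action: str
    performed: date

def outstanding_recalls(recalls: list[Recall],
                        history: list[RecordedAction],
                        today: date) -> list[Recall]:
    """Keep only recalls that are due and whose action has not been recorded."""
    done = {h.action for h in history if h.performed <= today}
    due_now = [r for r in recalls if r.due <= today and r.action not in done]
    return sorted(due_now, key=lambda r: r.due)  # most overdue first

# Example usage with invented data: only the Echo review remains outstanding.
recalls = [Recall("HbA1c", date(2024, 5, 1)), Recall("Echo review", date(2024, 8, 1))]
history = [RecordedAction("HbA1c", date(2024, 6, 3))]
print(outstanding_recalls(recalls, history, today=date(2024, 9, 1)))
```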

(3) Location 

In a large system of clinics such as NT Government Remote Health, a particular client has a “usual clinic” which is tasked with delivering scheduled interventions such as Careplans . But many people visit several clinics, often in different health systems across borders and with different provider organizations. Some are “transient” with no identified clinic “owning” them. These tend to be a high risk and high needs group. One approach to service clients better would be to adopt an opportunistic approach to care and deliver all interventions wherever they present. But if we are to persist with the “programmed” approach we need to identify which clinic the client should be attached to – all clients, transient or otherwise, should belong to a nominated clinic. An AI algorithm looking at time series location data could possibly predict where the person is next likely to attend and target relevant interventions there.     
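
As a sketch of how even a simple model could do this, the example below scores each clinic by a recency-weighted count of past attendances and nominates the highest-scoring one as the likely next point of contact. A real system might use a proper sequence or time-series model; the attendance data and half-life parameter here are invented for illustration.

```python
# Sketch: predict the clinic a client is most likely to attend next, using a
# simple recency-weighted count of past attendances. All data is invented.
from collections import defaultdict
from datetime import date

def predict_next_clinic(attendances: list[tuple[date, str]],
                        today: date,
                        half_life_days: float = 180.0) -> str:
    """Score each clinic by attendance count, weighting recent visits more heavily."""
    scores: dict[str, float] = defaultdict(float)
    for visit_date, clinic in attendances:
        age_days = (today - visit_date).days
        scores[clinic] += 0.5 ** (age_days / half_life_days)  # exponential decay
    return max(scores, key=scores.get)

attendances = [
    (date(2023, 2, 10), "Clinic A"),
    (date(2023, 11, 5), "Clinic B"),
    (date(2024, 3, 22), "Clinic B"),
    (date(2024, 6, 14), "Clinic C"),
]
print(predict_next_clinic(attendances, today=date(2024, 9, 1)))  # -> "Clinic B"
```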

Conclusion

The Health System is suffering from cognitive decline driven by increasing staff turnover and administrative complexity. Electronic Health Record (EHR) systems are not up to the task of replacing a long term relationship with an effective means of maintaining continuity. This decline could be reversed with changes in policy and business rules, smart EHR design and targeted use of AI to improve data visibility and prompt clinicians appropriately.   

Commercialization and Continuity: Primary Care in Australia

The term “Primary Care” is thought to date back to about 1920, when the Dawson Report was released in the United Kingdom. That report, an official “white paper,” mentioned “primary health care centres,” intended to become the hub of regionalized services in that country. (ref)

One definition is

“the provision of integrated, accessible health care services by clinicians who are accountable for addressing a large majority of personal health care needs, developing a sustained partnership with patients, and practising in the context of family and community.”

Outcomes

There is good evidence that high quality Primary Care improves outcomes. Tertiary Care, in contrast, is expensive and does not improve outcomes – it may even be harmful (1).

Six mechanisms, alone and in combination, may account for the beneficial impact of primary care on population health. They are (1) greater access to needed services, (2) better quality of care, (3) a greater focus on prevention, (4) early management of health problems, (5) the cumulative effect of the main primary care delivery characteristics, and (6) the role of primary care in reducing unnecessary and potentially harmful specialist care.

Preventive interventions are best when they are not related to any one disease or organ system – for example, smoking cessation, wearing of seatbelts and physical exercise. The benefits of these interventions are clear. It is more difficult to show benefit from screening for specific diseases, though many interventions have been proposed. The idea that a checkup and early detection of various diseases will change the outcome is promoted regularly in popular media. But there is no convincing evidence that a regular checkup improves outcomes (see my previous article).

Equity is important – cost to the consumer reduces equity.

Primary care has a greater impact in improving outcomes in lower socioeconomic groups.

Specialist and tertiary services have either no effect or an adverse effect on overall outcomes. But access to hospitals and tertiary care is a political “hot button” issue – politicians find it difficult to resist such pressure.

There is good evidence that patients attending a specialist directly are more likely to have unnecessary hospital admissions and poor outcomes than those seeing a Primary care physician first. (3)

There is a good theoretical basis for this – a specialist used to hospital practice over-estimates the rate of abnormality in a Primary Care population when planning intervention or investigation – ie the “pretest probability” in hospital patients is different to the probability in a Primary Care population. This changes the predictive value of any test or intervention, even though its “sensitivity” and “specificity” are unchanged, so more positive results are false positives.

WHO paper on Primary Care (2008) (2)

This paper describes several models of service delivery in Primary care.

“Medical” model.

In this approach, the client interacts intermittently with the service in treating or managing specific diseases or issues – the service has responsibility limited to the condition being managed for the client during an episode of care.  

“Program” Model

Programs target a specific area of client or population health – eg Chronic Disease, Trachoma, Rheumatic Heart Disease. The client’s Health Care is broken into “parts” and different clinicians deal with each area. Care may become fragmented and it is important to communicate between the various providers/clinicians. Often the record systems in use are not up to the task, or there may even be “silos” between record systems. Responsibility for the client is limited to the condition being managed by the program.

Holistic Model

The Clinician or service takes responsibility for the client from birth to death and manages all their issues including advocating in “nonmedical” issues such as housing and employment. This is regarded by the WHO as the most effective approach.

In practice most Primary Care services in Australia are a combination of the first and second models – it is rare to see the third model.

Generalist vs Specialist Care

If we accept that a “Holistic” model is best and most effective, and that Care should be fragmented as little as possible, then the Clinician delivering the care must have a broad knowledge and scope of practice. He/she must be able to deal with all ages and cope with presentations across many clinical domains. Specialists deal well with their area of specialty but poorly otherwise. Current systems devalue the Generalist, with many tasks they previously performed being taken up by various specialities (eg normal childbirth). Common conditions formerly dealt with by a GP are now routinely referred. This is inefficient and expensive and may result in poorer outcomes (see above).

Continuity – the value of a relationship

Continuity is one of the principal pillars of good Primary Care. The Clinician has a relationship with the client and knows their history and family background. There is good evidence that this improves outcomes and saves money.

“Among other improvements, continuity of care leads to a higher quality of care, more preventive care, decreased emergency department visits, and reduced odds of avoidable hospitalization.”(ref)

Hospital systems deal with episodic illness on the whole – they “live in the now” and cope poorly with the requirements of ongoing care in complex chronic illness.

Complexity and multimorbidity

Primary Care is increasingly managing clients with many different issues. The Single Issue “cure” approach is no longer valid in these cases. We must become skilled at managing Complexity and adopt a different approach – incremental and iterative and accept that we cannot “solve” most of the issues. (See my article on Complexity).  Continuity is critical here.

What is happening now?

Commercialization

In Third World countries where Governments have not controlled the delivery of Primary Care and resources are limited, delivery has become dominated by commercial providers. Clients bear much if not all of the cost. There is a proliferation of ineffective treatments and a reduction in quality. Equity is reduced and the “Inverse Care Law” dominates – ie most of the care goes to those who can afford it but need it least.

In Western countries such as Australia this effect has been less marked, but Governments struggle with the cost of programs such as Medicare. Commercial providers have exploited the open access nature of Medicare, with the result that Government has imposed more and more complex rules and barriers to save costs. Subsidies such as Rebates have not kept up with costs and “the Gap” paid by clients has steadily increased. Specialist services are essentially no longer Bulk Billed and the rate of Bulk Billing in GP services is declining. Equity and access have declined as a result, whereas the Australian health system was once regarded as delivering excellent equity at a reasonable cost. New and expensive drugs and treatments appear regularly – Governments struggle to fund these. Medical Evidence has probably become corrupted (see my previous article).

General Practice in Australia

General Practice was the principal provider of Primary Care services in Australia in the past. But some believe it is in decline, or even in crisis. Various bodies and clinicians are competing in the Primary Care space, eg nurse practitioners and pharmacists. Various specialists now perform many roles formerly the domain of General Practice – there has been a loss of scope for GPs.

In the Health system generally, continuity has been devalued. There is acceptance of massive staff turnover in services such as Remote Health. Hospital systems have always been staffed at a junior level by doctors in training who rotate regularly through different posts. Record systems in hospitals are primitive and not up to the task of compensating for this rapid turnover of staff.

Bulk Billing (medical services free to the consumer) and Emergency Department waiting times are political “hot button” issues receiving a lot of attention at present.

The Federal Government recently announced “Urgent Care Clinics” – this promotes the idea that most medical presentations are single issue and that continuity is not important. Bulk Billing incentives were increased but the basic Rebate for GP consultations was left unchanged. The Health Minister (!!) urged consumers to “shop around” for a Bulk Billing clinic so they could avoid paying a gap fee. While cost is important to equity, this statement ignores the value of continuity. It appears the Health Minister does not understand this. Governments have been unwilling to increase Rebates to keep Gap costs down, instead relying on Bulk Billing incentives. Complex illness takes more time and requires sophisticated clinical skills to manage. These clients are generally less able to pay for services, yet the Rebates for longer consultations are effectively lower per unit of time.

The GP is increasingly required to perform bureaucratic tasks, generally involving access to various expensive resources. There is also an increase in “legal” tasks such as licensing medicals and certificates. These consultations have three parties involved – the GP, the client, and another body paying the cost or requiring the report or certificate. The GP has two relationships and two duties – one to the client and one to the third party. These relationships may be in conflict and cause “moral ambiguity” – a conflict which the GP must manage.

Conclusion

Good Primary Care is effective in improving outcomes and economical of resources.

The principal elements of Primary Care are “Expert Generalism” and Continuity. Complexity is an increasing challenge which requires a new approach and calls on a Generalist Knowledge and a relationship with the client.

 But the traditional model of Primary Care in Australia is under threat. GPs face more complexity for less money, competition from other providers, an increase in nonmedical tasks and a downgrading of clinical scope. Policy makers and politicians appear unaware of these challenges.

References

(1) Starfield B, Shi L, Macinko J. Contribution of Primary Care to Health Systems and Health. Milbank Q. 2005;83(3):457-502.

(2) World Health Organization. The World Health Report 2008: Primary Health Care.

(3) Nowak DA, Sheikhan NY, Naidu SC, Kuluski K, Upshur REG. Why does continuity of care with family doctors matter? Review and qualitative synthesis of patient and physician perspectives. Can Fam Physician. 2021;67(9):679-688.

Rheumatic Heart Disease – a New Epidemic?

The incidence of Rheumatic Heart Disease (RHD) in Remote Australia has apparently increased in recent years. In part this is due to increased screening and possibly improved case finding. But Overdiagnosis due to reduced clinical standards may also explain the increase. The overdiagnosis of ARF can lead to unnecessary burdens on clients and the Health Service and increases the risk of overlooking other serious conditions. This highlights the need for improved diagnostic precision at first presentation.

The incidence of Rheumatic Heart Disease in Remote Australia appears to have increased significantly, or even doubled according to some surveys, in the last 10 years or so (1).

What is happening? Are living conditions in Remote Communities getting worse still? Are we finding previously undiagnosed RHD?

We know that Acute Rheumatic Fever (ARF) and Rheumatic Heart Disease (RHD) are diseases of poverty and overcrowding. They are largely unknown in modern urban Australia but are still common in Remote Communities, particularly in the NT.

While living conditions in many communities are still “third world” standard, I can find no evidence for further worsening in recent years and my own anecdotal experience over 20 years or so would suggest that things are no worse than they have been in the past. “Closing the Gap” reports show little improvement, but they do not suggest worsening of living conditions and life expectancy.

Are we detecting previously undiagnosed disease?

In my anecdotal experience virtually every person in a community presents to the clinic, often frequently. It seems intuitively unlikely that significant symptomatic heart failure as a result of valve dysfunction would not have been picked up on presentation. Echocardiogram and cardiology review are available for acute symptomatic disease. Heart failure due to acute carditis or deterioration of RHD is uncommon, but can be confused with more common conditions such as pneumonia. Adverse outcomes as a result are likely to occur (https://tjilpidoc.com/2022/03/09/poor-administration-a-health-hazard/).

There has been an understandable promotion of screening by Echocardiogram, with programs such as the “Deadly Heart Trek” finding asymptomatic RHD in some clients. It is generally accepted that prophylactic Penicillin reduces recurrence of ARF and deterioration (though there are no prospective trials to prove this). Thus finding asymptomatic clients and treating them with prophylactic Penicillin would seem intuitively to be a Good Idea. But like all screening processes it can be difficult to show benefit – whether this screening will result in improved outcomes is yet to be established.

Overdiagnosis

There is one other possible explanation for the apparent increased incidence – that we are over-diagnosing ARF. In recent years there has been promotion of the idea that clinicians have been missing cases of ARF and should be on the lookout for it to reduce the incidence of serious RHD with prophylaxis. Once a diagnosis of ARF is established, even provisionally, that client is subject to a regime of monthly injections and reviews for anything up to 10 years. Many clients are discharged from hospital at their initial presentation without an expert assessment and classification – this is relegated to a later date. But elective Echocardiography and Cardiology review are difficult to access for Remote clients for various reasons. It may be months or even years before these are performed. By this time the relevant clinical signs and data may be lost or otherwise unavailable. As a result even senior clinicians are reluctant to reverse a provisional diagnosis and it can be difficult if not impossible to remove the Rheumatic Fever “label” once it is applied. 

ASOT, Anti-DNase B and streptococcal serology – what is normal?

Acute Rheumatic Fever follows streptococcal infection, which raises the Anti-Streptolysin O Titre (ASOT). The upper laboratory limit of normal in Australia is 200 IU. But in an Egyptian study (2), the majority (65%) of asymptomatic subjects had a level >200 IU, with some as high as 800 IU; levels increased with age over 10 and in winter. In Australia the titre is likely to be high in Remote Community subjects because of living conditions and frequent exposure to Group A Streptococcus, but I could find no research on this question for Remote Australia. However, it seems likely that normal levels are much higher than the accepted laboratory range. This makes it a poor positive discriminator for ARF, though it may be helpful in ruling out the disease if it is negative. Similar issues apply in the case of Anti-DNase B (4).

The Clinical Criteria for ARF  

Skin infection is common and is the likely source of streptococcal infection in most cases of ARF, at least in the Top End and tropical Australia (McDonald et al). In spite of this, conventional teaching still sees pharyngitis and tonsillitis as the primary source. Acute Rheumatic Fever remains a clinical diagnosis – there is no independent lab test or other indicator which can reliably discriminate it from other diagnoses. The diagnosis is made on the Jones Criteria, which were first introduced in 1944. They have been modified several times since to increase their sensitivity in high-risk populations. This has the effect, however, of reducing specificity. In reading the references there still seems to be ambiguity, particularly with regard to arthritis/arthralgia. In the strictest version of the criteria, only polyarthritis was allowed as a major criterion – ie several joints involved, with objective signs such as effusion and redness. In the more recent versions monoarthritis or even polyarthralgia are allowed as major criteria in high-risk areas. Chorea is probably pathognomonic in young people as other causes of acute Chorea are uncommon. “Carditis” can be difficult to define in a Remote setting where echocardiography is not generally available on the spot. A small group of patients present in heart failure due to carditis – these are challenging to diagnose and manage, and errors are frequent in this group.

ARF – typical presentations

A common presentation is joint pain or arthritis, with or without fever and raised ESR/CRP. Chorea is less common, with acute carditis and other presentations the least common. In my 20 years’ experience in Remote Health I have not seen the classically described erythema marginatum or subcutaneous nodules. Because of the increased awareness of ARF as a diagnosis, the classical criteria have been relaxed – I have seen a provisional diagnosis of ARF made on a presentation of monoarthritis or even polyarthralgia and raised CRP, without other criteria. ASOT appears to be used as a de facto criterion when it is not a positive discriminator (see above). Enthusiasts argue that any potential harm from overdiagnosis is outweighed by the benefit to a client with true ARF in reducing long term disability with prophylaxis. I would argue that the imposition on clients of an unnecessary diagnosis is not trivial, with monthly painful injections and frequent reviews for up to 10 years or more. There is a workload burden on the Remote Clinic involved and an opportunity cost as a result. The results of misdiagnosis at presentation can be significant – I have personally seen a case of knee pain in a child diagnosed as ARF when in fact it was tibial osteomyelitis, and definitive treatment was delayed. On another occasion knee pain was considered to be ARF when in fact the diagnosis was Slipped Capital Femoral Epiphysis. In both these cases the misdiagnosis could have resulted in significant disability. Indeed a study at Royal Darwin Hospital showed that many of the cases admitted with presumed ARF had an alternative diagnosis at discharge (3).

RHD presentations

The majority of clients with severe RHD requiring surgical intervention, or with documented valve changes on Echo, either have longstanding RHD with the details of the presentation lost in the mists of time, have presented with heart failure, or have been found on Echocardiography screening. No clients who presented with joint symptoms in my case reviews showed evidence of RHD on Echocardiogram. Chorea seems a more reliable criterion, with at least some of these clients subsequently developing RHD changes.

Clinical standards

As a practitioner near retirement, of course I think things were better in the old days.

Our medical clinical training was rigorous, with an emphasis on clinical method. This emphasis appears to have been lost in recent years – many clinicians do not take a detailed, relevant history of the presentation or refer to previous attendances or past history. Examination is cursory, if performed at all.

We have come to rely on lab testing and imaging for diagnosis, when a rigorous clinical method in the hands of an expert clinician remains the most effective diagnostic tool. Many clinicians are nonmedical – they have not undergone the clinical training that doctors go through. There is a heavy reliance on telemedicine, which means that examination is limited. General Practitioners have been largely relegated to administrative tasks and navigating complex chronic disease. Their role in the assessment of acute presentations has been reduced and their opinion is often not respected. They are no longer seen as “expert generalists” at the centre of the clinical process. The assessment of an acute presentation is the classic scenario where masquerades and alternative diagnoses must be considered as well as the “probability diagnosis” (Murtagh (6)). ARF has now become a “probability diagnosis” due to its promotion as a condition which must not be missed, and unsophisticated clinicians often do not consider the alternatives. ARF is a clinical diagnosis, and I have noted a tendency in unsophisticated Remote staff to overreport clinical diagnoses (otitis media, pharyngitis and bronchiolitis, for example). Is this happening with ARF also?

Workforce issues in Remote Australia

The Remote workforce is heavily “casualized” and there is massive staff turnover in most Remote Clinics. Health encounters have become “commoditized” and anonymous – client and clinician often do not know each other (see previous post). Many Remote Services are struggling to maintain their workforce numbers. These factors further reduce the quality and safety of clinical assessments.

ECHO – how reliable is it?

In file reviews I have noted on some occasions that an echocardiogram was reported as abnormal with Rheumatic changes but subsequent echocardiograms were reported as normal. In one case there was a normal report with abnormal reports before and after. We have always been taught that Rheumatic valve changes do not resolve with time. If this is the case then the quality of echocardiograms must be brought into question. Ultrasonography is a difficult skill, with cardiac ultrasound even more so. Where there is doubt, there is a tendency to overreport changes to avoid missing significant lesions.  

Conclusions

The apparent increase in Rheumatic fever and RHD in the last decade can be explained in part by screening and finding asymptomatic patients. But it is likely that the increase in ARF diagnosis is in part due to overdiagnosis, as a result of casualization of the workforce, reduction in clinical standards, promotion of the diagnosis and reduction in the role of expert clinicians such as doctors.

This overdiagnosis has significant consequences for patients and Remote Clinics and it can be difficult to reverse the “label” once it is applied.  Any patient admitted with a provisional diagnosis of ARF should undergo careful assessment by a senior clinician before discharge and classification as ARF. While it is important not to miss cases of ARF, we should be aiming to improve our diagnostic precision so that we do not impose an unnecessary burden of treatment on clients and the health service, and do not miss other potentially serious conditions.   

An answer should be sought to the question – do RHD changes resolve with time? Echocardiography is a difficult skill – there is a need for review of some results and for rigorous standards.

References

(1) AIHW Acute Rheumatic Fever and Rheumatic Heart Disease in Australia 2022

https://www.aihw.gov.au/reports/indigenous-australians/arf-rhd-2022/contents/arf

(2) Antistreptolysin O titer in health and disease: levels and significance

Alyaa Amal Kotby, Nevin Mamdouh Habeeb, and Sahar Ezz El Elarab

Pediatr Rep. 2012 Jan 2; 4(1): e8.

(3) The challenge of acute rheumatic fever diagnosis in a high-incidence population: a prospective study and proposed guidelines for diagnosis in Australia’s Northern Territory

Anna Ralph, Susan Jacups, Kay McGough, Malcolm McDonald, Bart J Currie

Heart Lung Circ. 2006 Apr;15(2):113-8.

(4) Detection of upper limit of normal values of anti-DNase B antibody in children’s age groups who were admitted to hospital with noninfectious reasons

Servet Delice, Riza Adaleti, Simin Cevan, Pinar Alagoz, Aynur Bedel, Cagatay Nuhoglu, and Sebahat Aksaray

North Clin Istanb. 2015; 2(2): 136–141.

(5) Low rates of streptococcal pharyngitis and high rates of pyoderma in Australian aboriginal communities where acute rheumatic fever is hyperendemic

Malcolm I McDonald, Rebecca J Towers, Ross M Andrews, Norma Benger, Bart J Currie, Jonathan R Carapetis

Clin Infect Dis. 2006 Sep 15;43(6):683-9.

(6) General Practice 8th edition

John Murtagh

Complexity in Medicine

What is “Complexity”?

We all have a lay understanding of complexity – the word describes a system or object that has many parts, the workings of which may be difficult to understand. But many systems that appear complex are merely “complicated” – their many components are well described and understood, at least by someone. A Smartphone is such a system. A system that is “complex” is one comprised of many interacting components or subsystems which behave and affect each other in unpredictable ways. In our world there are many such systems, such as the weather, the economy, and large software projects as discussed in previous posts. We use mathematics to describe, analyse and design many of the objects and systems in use in our world today. But “complex” systems are nonlinear and unpredictable – they are not easily amenable to mathematical analysis. (See my article on Complexity and Software Design.)

Another approach to Complexity is to consider that systems, clinical issues, or even patients can be regarded as an interconnected web with components that affect each other.

“Everything hangs together” defines complexity; the Latin word complexus literally means interwoven—studying complex problems thus is the study of interconnectedness and interdependence.

This notion is reflected in the definition of a complex (adaptive) system: A complex (adaptive) system is “a whole consisting of two or more parts (a) each of which can affect the performance or properties of the whole, (b) none of which can have an independent effect on the whole, and (c) no subgroup of which can have an independent effect on the whole.” (ref)

Most current protocols and research are based on a convergent, “reductionist” approach to diagnosis and treatment. Indeed much of our Clinical Reasoning uses such a process. While this approach may help to “solve” a particular clinical presentation or issue, it often does not describe or capture the essential elements of the whole patient. As Primary Care Practitioners, we are all familiar with the patient with multiple issues whose condition does not improve in spite of our best efforts over time. A complex system is at the root of most “wicked” problems – could Complex System thinking improve our management of these clients?

In recent years there has been increasing interest in studying complexity in Health and developing some techniques for approaching these problems.

The Principles of dealing with “Complex” problems

“Start with Awareness”

If we recognize that a problem is Complex as defined above, we can then adopt a different approach. The first revelation is to accept that we may not be able to “solve” the problem but there may be elements that we can change.

Elucidate relevant issues and their connections

To understand the problem we must map out and model relevant issues and how they affect each other. There may be feedback “loops” – changing one issue will affect another which in turn will affect the first issue.

In most “wicked” problems there are multiple layers of issues which can be arranged in a hierarchy. Some are in our immediate sphere of influence, some are “high level” issues over which we can have no effect. Many of these issues are “wicked” problems in themselves. Of course, this map will always be approximate and imperfect, but it is a useful exercise to improve our understanding and document the problem.
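As a toy illustration of such a map (the issue names and links below are invented purely for the example, not drawn from any real analysis), the web of issues can be sketched as a small directed graph. Even this crude model makes feedback loops and downstream ripple effects explicit:

```python
# A toy "issue map" for a complex problem, held as a directed graph.
# Every name and link here is illustrative only.
influences = {
    "poor housing":       ["skin infections", "overcrowding stress"],
    "skin infections":    ["clinic workload"],
    "clinic workload":    ["staff turnover"],
    "staff turnover":     ["clinic workload",       # feedback loop
                           "loss of continuity"],
    "loss of continuity": ["missed diagnoses"],
}

def downstream(issue, graph, seen=None):
    """Everything an issue ultimately affects, following the arrows."""
    seen = set() if seen is None else seen
    for nxt in graph.get(issue, []):
        if nxt not in seen:
            seen.add(nxt)
            downstream(nxt, graph, seen)
    return seen

print(sorted(downstream("poor housing", influences)))
# Changing one node ripples through many others, and the
# workload <-> turnover loop shows why single fixes can be undone.
```

Even drawing up a rough map like this for a real problem usually reveals which links sit within our sphere of influence and which do not.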

Identify issues or connections that may be amenable to intervention

Evolutionary rather than revolutionary change

In the study of large “complex” IT systems, Bar-Yam (ref) advocated incremental change in different areas of the system over time rather than a “Big Bang” revolutionary change. This principle can be applied to all Complex problems, even down to managing individual patients in Primary Care.

Ongoing review, testing and adjustment of our interventions

Of course this requires continuity. This may be achieved by an individual relationship or by a well structured means of communication between members of a team.

Applying these principles to “Wicked” problems

The Primary HealthCare System as a Whole

At the recent WONCA conference in Sydney one of the Keynote speakers (Prof Trish Greenhalgh) described Primary Health as a “Sector Suffering”.

Could we apply Complex System thinking to this “wicked” problem?

To start to understand the issues, she outlined three broad areas where the Primary Health sector is suffering using the Buddhist “Three Poisons” as an analogy – Greed, Hatred/anger and ignorance/delusion.

Greed, epitomized by the pig, describes the “Commercialization of Health” which now dominates and corrupts policy, research and indeed the evidence on which our practice is based. In my view this is an “Elephant in the Room” which we should acknowledge and start to address.

Anger/hatred is epitomized by the snake. Many in the sector are burnt out and disillusioned, politics is combative and paralysed by vested interests. Anger can be negative and destructive, but it can also be harnessed to create positive change.

The rooster epitomizes ignorance and delusion. Those managing the sector, such as bureaucrats and politicians, are either ignorant of or choose to ignore the advice from those working in the sector. In the workplace we should build a positive team culture with active communication between all members.

Clearly this analysis is only the start of deconstructing the issues, but it gives a framework to work from. There are many layers of issues and interconnections, some of which may be amenable to evolutionary change.

Complex system thinking on a population level – eg “Obesity”

The developed world is getting fatter and this issue underlies many Chronic Diseases. Obesity appears to be a “wicked” intractable problem at both population and individual levels.

At another session of the WONCA conference, the participants were invited to describe the population problem of obesity using the principles of complex system analysis. It soon became clear that there are many layers to the problem with many interconnected issues. Currently our approach is to exhort the patient to “eat less” and “exercise more”. But from even a cursory analysis it becomes clear that it is simplistic to rely on the individual to remedy the issue – this alone is a useful conclusion.

The individual patient encounter

Is this approach applicable at an individual patient/consult level? Are patient encounters “Complex”?

In Medical School we were trained to recognize and manage the patterns of single-issue illness. Most of our education since has also been on individual conditions and medications, with little emphasis on managing the whole patient. Yet much of Primary Care Medicine now involves managing Multimorbidity (see my previous article). In addition to the multiple medical issues and client factors such as language, there are social and family pressures, and resource and financial limitations imposed by payors. In my view these encounters are indeed “complex”.

Managing individual multimorbid clients using the principles of complex thinking outlined above would mean:

Identifying the issues and the connections between them, particularly those that are amenable to intervention and that positively affect others – eg weight loss improving Diabetes and Hypertension. General Practitioners have been doing this intuitively for a long time, using the Problem Oriented Medical Record as a mechanism. A good record can overcome many of the problems associated with lack of continuity. But Electronic Medical Records suffer from poor interface design, administrative “noise” cluttering the record, and imperfect utilization. (see Software Design in Health)

Recognizing that there is no single discrete solution to the patient’s problems.

Aiming for evolutionary change

Testing our interventions over time. Here we must recognize the value of continuity and a professional relationship.

If we accept that the management of an individual multimorbid patient is a “Complex” problem, then prediction of their progress and the interventions required becomes difficult or impossible, particularly in the long term. Our current systems of Chronic Disease management rely on “Careplans” of scheduled interventions, often years ahead, by relatively unskilled and often “anonymous” practitioners. This approach is especially prevalent in Australia in settings where there is high staff turnover and/or a disadvantaged population, such as Remote Health, Corrections, or Refugee Health. These clients have a high burden of Chronic Disease and Multimorbidity.

If we were to adopt a “Complex Systems Thinking” approach, it is likely in my view that their care could be improved.

References

Approaching Complexity – start with awareness

Joachim P. Sturmberg MBBS, DORACOG, FRACGP, MFM, PhD

https://onlinelibrary.wiley.com/doi/10.1111/jep.13355

Josephine Borghi, Sharif Ismail, James Hollway, Rakhyun E. Kim, Joachim Sturmberg, Garrett Brown, Reinhard Mechler, Heinrich Volmink, Neil Spicer, Zaid Chalabi, Rachel Cassidy, Jeff Johnson, Anna Foss, Augustina Koduah, Christa Searle, Nadejda Komendantova, Agnes Semwanga, Suerie Moon. Viewing the global health system as a complex adaptive system – implications for research and practice. F1000Research 2022; 11:1147. doi: 10.12688/f1000research.126201.1

Get Your Checkup!

But at what Cost?

The “Checkup” has become a common theme in General Practice and Primary Care.

Men are exhorted with blokey slogans like “get your grease and oil change” to have their regular checkup, or they will suffer all sorts of dire consequences.

Women are prompted with signs in public conveniences to have their regular Pap smear.

It seems an intuitively attractive idea that if we look for disease and detect it early we are more likely to be able to cure it and outcomes will be improved. 

In particular the spectre of Cancer is kept at bay.

But what is the evidence?

Screening for disease

Many examinations and tests have been proposed over the years to look for occult disease – ie disease that has not yet presented with symptoms or signs.

The RACGP Red Book lists many recommended procedures and a further 15 that it says are not supported by evidence.

Health Screening is the process of looking for disease in people who are well, in order to detect a disease or classify them as likely or unlikely to have one.

The aim is to detect early disease in apparently healthy individuals. Case finding is a more targeted approach to an individual or group at risk of a particular condition.

Screening for disease in asymptomatic people is also termed “Primary Prevention”.

To be valid, a screening test or procedure must pass three evidence tests.

1. The test must reliably detect an important health condition before it would otherwise present.

2. There must be a treatment for the condition.

3. The outcome must be improved as a result.

Very few screening procedures pass these tests when they are rigorously applied.

Even those that do pass often have surprisingly weak evidence to validate them.

PSA (Prostate Specific Antigen) as a screening test 

The debate about PSA has raged for years and seems further than ever from being finally resolved.

We regularly see items in social media and on TV exhorting men to have a checkup, implying that all will then be well.

But when we apply the 3 tests above to PSA as a screening test it falls short.

(1) Does it detect prostate cancer reliably? 

The figures are debated but roughly 20% of men with prostate cancer have a normal PSA, ie its sensitivity is 80%.

Conversely, around 80% of men with a high PSA do not have cancer – that is, the positive predictive value is low, a consequence of imperfect specificity in a population where most men screened do not have the disease. However a high result almost invariably leads to further investigation, including biopsy, which has its own risks and errors.
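A back-of-the-envelope calculation shows how figures like these arise. The prevalence and specificity used below are illustrative assumptions chosen only to make the arithmetic concrete; they are not taken from the studies cited:

```python
# Rough positive-predictive-value arithmetic for a screening test.
# Assumed, illustrative figures: 10% prevalence in the screened group,
# 80% sensitivity, 65% specificity.
prevalence, sensitivity, specificity = 0.10, 0.80, 0.65

per_1000        = 1000
with_cancer     = prevalence * per_1000                        # 100 men
true_positives  = sensitivity * with_cancer                    # 80
false_positives = (1 - specificity) * (per_1000 - with_cancer) # 315

ppv = true_positives / (true_positives + false_positives)
print(f"Positive predictive value = {ppv:.0%}")                # ~20%
# On these assumptions, roughly 4 out of 5 men with a "high" PSA do not
# have cancer, yet most will be referred for further investigation.
```

The same arithmetic explains why any relaxed diagnostic criterion applied to a low-prevalence population will generate far more false positives than true positives.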

(2) Is treatment of prostate cancer effective?

Various treatments have been proposed – radical surgery to remove the cancer completely, curative radiotherapy, or hormonal treatment.

All have significant failure rates (not curing the cancer) and side effects are almost universal. Impotence is likely, incontinence is possible and significant side effects such as radiation proctitis (inflammation of the rectum) are common.

Moreover many men with prostate cancer die from other causes – the cancer may never affect their lifespan. The 10-year survival disadvantage of men with prostate cancer is only 2%.

(3) Is the outcome improved?

A large German meta-analysis concluded:

The benefits of PSA-based prostate cancer screening do not outweigh its harms. We failed to identify eligible screening studies of newer biomarkers, PSA derivatives or modern imaging modalities, which may alter the balance of benefit to harm. In the treatment group, 2 of 1000 men were prevented from dying of prostate cancer by treatment, but all-cause mortality was similar in both screening and control groups. In the screened group there was a significant burden of morbidity associated with investigation and treatment side effects: for every 1000 men screened, 220 suffered significant side effects or harm.

Once the diagnosis is made, there may be some differences in subgroups and risk can be stratified. There can be a discussion with the individual about the best treatment in their particular circumstances. 

But the initial decision to screen is by necessity based on population data. As discussed above, PSA screening in this situation is not supported by the data.

The Evidence for Secondary and Tertiary prevention

Secondary and tertiary prevention describe activities which manage known risk factors for disease (secondary prevention or “case finding”) or the disease itself, to prevent recurrence of events or worsening of the disease (tertiary prevention). Examples are managing risk factors for Ischaemic Heart Disease (Hypertension, Cholesterol, smoking) in a client who has suffered a heart attack, or Hypertension in patients with impaired renal function. In this situation the evidence for benefit is much stronger than in Primary Prevention. (ref)

But to achieve this benefit the health service must maintain a clear summary of the client issues and ensure that a program of regular relevant interventions is delivered. There is reasonably good evidence that a programmed series of interventions (a “Care Plan”) effectively reduces hospitalization and complications of known Chronic Disease.

Here a good EHR (Electronic Health Record) system with logical business rules is important. But many of the current EHR systems in use suffer from poor “data visibility” – ie important data about a client, such as past history, is difficult to find. This is due to poor program design and “noise” from unnecessarily complex dialogs and administrative information cluttering the record.

(see my previous articles Poor Administration – a Health Hazard? and Software Design in Health)
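A “business rule” in this sense can be very simple. The sketch below is illustrative only – the field names and the six-month interval are assumptions for the example, not taken from any particular EHR:

```python
# A minimal sketch of a Care Plan style business rule: flag diabetic
# clients whose last HbA1c is more than ~6 months old. Field names and
# the interval are invented for illustration.
from datetime import date, timedelta

def recalls_due(patients, today=None):
    today = today or date.today()
    due = []
    for p in patients:
        overdue = today - p["last_hba1c"] > timedelta(days=182)
        if "diabetes" in p["conditions"] and overdue:
            due.append((p["name"], "HbA1c overdue"))
    return due

patients = [
    {"name": "Client A", "conditions": ["diabetes"], "last_hba1c": date(2023, 1, 10)},
    {"name": "Client B", "conditions": ["asthma"],   "last_hba1c": date(2023, 1, 10)},
]
print(recalls_due(patients, today=date(2024, 1, 10)))
# [('Client A', 'HbA1c overdue')]
```

Rules like this are only as good as the data they run over – which is why data visibility and record quality matter.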

The General Checkup

A “General Checkup” has not been shown to improve outcomes in the general population.

A large meta-analysis of nearly 200,000 subjects failed to show benefit in outcomes (mortality or morbidity). (ref)

There were more diagnoses and treatment, however.

In the Indigenous population the idea of a checkup seems intuitively attractive because of the high rate of ill health generally.

However there does not appear to be research supporting this assertion.

The Checkup as a Safety Net

The Checkup in its various forms seems to be implicitly regarded as a “safety net”. 

However, the studies of a General Checkup and the effects on outcomes (minimal) would suggest that this is not so.

Indeed it is my anecdotal experience that known issues are often ignored and new disease is rarely found on a routine checkup. Most new issues present as an acute illness or event.

The Commercial Value and cost of the Checkup

The Checkup is a relatively low-risk activity legally and, because it is a scheduled and programmed activity, can to a large extent be performed by less sophisticated clinicians. It does not require highly developed clinical acumen and there are usually no difficult decisions. In spite of the lack of evidence, it is well remunerated by Medicare. It has become a commercially attractive option for Primary Care practices. But it generates significant system costs in addition to the checkup itself. There are on-costs for the pathology and imaging generated – this is attractive to providers of these services. In spite of all this extra cost to the system, the research quoted above suggests that there is no improvement in outcomes.

Primary Care, Imaging and Pathology Providers have a vested interest in performing these services, even though the evidence for them is poor.

Why the disconnect between evidence and practice?

The PSA question continues to be debated even though the evidence is clear. A regular “General Checkup” continues to be promoted in spite of the lack of evidence of benefit and the significant cost.

Is this similar to the Climate Change debate, where vested interests prevent real action? I would argue commercial vested interests are causing this disconnect. In fact much of our practice in Health is driven by commercial interests, and much of our evidence has become corrupted by commercial drivers. As we struggle to deliver Health services, and with General Practice apparently in crisis, it is time in my view to review the whole basis of Health Service delivery and explicitly address these issues.

References 

Assessment of prostate-specific antigen screening: an evidence-based report by the German Institute for Quality and Efficiency in Health Care

Ulrike Paschen, Sibylle Sturtz, Daniel Fleer, Ulrike Lampert, Nicole Skoetz, Philipp Dahm

First published: 07 May 2021

https://doi.org/10.1111/bju.15444


General health checks in adults for reducing morbidity and mortality from disease: Cochrane systematic review and meta-analysis

Lasse T Krogsbøll, Karsten Juhl Jørgensen, Christian Grønhøj Larsen, Peter C Gøtzsche

BMJ. 2012; 345: e7191.

Effect of evidence-based therapy for secondary prevention of cardiovascular disease: Systematic review and meta-analysis

PLoS One. 2019; 14(1): e0210988.

Published online 2019 Jan 18. doi: 10.1371/journal.pone.0210988


Software Design in Health


Software Design is an arcane subject, a long way from the day-to-day practice of Health Practitioners. Yet we all use computer systems – indeed they are central to our day-to-day practice. Good Health IT design is important to efficiency, work satisfaction and Clinical Safety, yet it appears not to be considered an important factor in the commissioning of Health IT systems.

In this article I explore some relevant issues.

“Technical Debt” (Ref 1)

The design of a new system is a significant investment in money and resources. There is pressure to deliver on time and on budget. During the process compromises will be made on design and system functionality. Sometimes problems that should be solved now are deferred in favour of a “bandaid” solution. This incurs a “Technical Debt” which may have to be repaid later with further design work.

Deferred Data Modelling is an example of “technical debt”. Data modelling can be important when different systems need to communicate, but is not so critical within a single system. Data sent across the interface between systems must be carefully structured, or serious problems can arise when the format of the data is not compatible. For example, a numerical quantity can be specified in various ways – signed integer, unsigned integer, or floating point decimal, to name a few. In programming, these quantities have different lengths and are represented and manipulated in different ways. If this is not recognized and allowed for in design, an error will result.

A special program or script may have to be devised to parse and convert data when it crosses from one system to another. If the work of data modelling and interface standard specification has been deferred at the design phase, “technical debt” has been incurred which must be “repaid” at a later date (ref 1, Sundvall). There does not seem to be much interest from Vendors or Buyers in a formal Data Modelling system.
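As a trivial illustration of such a conversion step (the systems, field names and formats below are invented for the example), a bridging script might look like this:

```python
# Sketch: "System A" exports a temperature as a string with a comma as the
# decimal separator ("36,8"); "System B" expects a float in a named field.
# All names and formats here are hypothetical.
def convert_temperature(record_a: dict) -> dict:
    raw = record_a["temp_c"]                    # e.g. "36,8"
    value = float(raw.replace(",", "."))        # -> 36.8
    if not (25.0 <= value <= 45.0):             # basic sanity range check
        raise ValueError(f"Implausible temperature: {value}")
    return {"temperature_celsius": value}

print(convert_temperature({"temp_c": "36,8"}))  # {'temperature_celsius': 36.8}
```

Multiply this by hundreds of fields and several systems and the cost of the deferred modelling work becomes obvious.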

Data Modelling – why?

Any significant software system must store data to be useful. Health systems require large quantities of often complex data which must be stored, recalled and manipulated. A particular data entity can be represented in various ways. For example, a pulse rate is likely to lie within a certain range; it will not be negative and is not measured in fractions. Thus it could be represented by an unsigned integer quantity – this takes up much less memory space than a floating point number, for example. On the other hand, a vaccine temperature measurement will require recording in decimal fractions of a degree and might be negative. Thus a floating point number is required to represent this measurement. Additional “metadata” might be required for interpretation of the figure, such as the means of measurement. Suitable ranges could be included in the data definition. This would allow alerts to be generated in decision support systems when the measurement falls outside the normal range.
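A minimal sketch of what “modelled” data might look like follows. The field names and ranges are illustrative assumptions, not clinical reference values or any particular standard:

```python
# Each measurement carries a type, units, an expected range and some
# metadata, so decision support can raise an alert on out-of-range values.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Measurement:
    name: str
    value: float
    units: str
    low: float            # lower bound of the plausible range
    high: float           # upper bound of the plausible range
    method: str = ""      # metadata, e.g. how the value was obtained

    def alert(self) -> Optional[str]:
        if not (self.low <= self.value <= self.high):
            return f"ALERT: {self.name} {self.value}{self.units} outside {self.low}-{self.high}"
        return None

pulse  = Measurement("pulse rate", 140, "bpm", low=40, high=120, method="oximeter")
fridge = Measurement("vaccine fridge temp", -1.5, "C", low=2.0, high=8.0, method="data logger")

for m in (pulse, fridge):
    print(m.alert() or f"{m.name}: within range")
```

Real data-modelling frameworks go much further (constraints, terminologies, versioning), but the principle is the same: the data carries enough definition to be computable.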

Data modelling is the process of looking at data entities to be stored in a system database and specifying such constraints and data types. It makes the data computable. It then becomes more useful for search and decision support systems. It also allows the specification of data standards at interfaces between systems and may remove the need for “middleware” to connect systems. The internet is a good example where standards for data and interfaces have been agreed – this allows many disparate systems to communicate.

OpenEHR (ref 2) is an example of a data modelling system which is gaining increasing acceptance throughout the world.

Standards and Interoperability 

– “the good thing about Standards is that there are so many to choose from”

There are strong commercial reasons why standardization is so hard to achieve. In everything from razors to printers there is a plethora of shapes, sizes and standards. Of course, when there is a need to link components together, a standard interface matters. A razor blade will not fit a handle unless the relevant components are the correct shape and size. Reducing all razor blades to a common standard shape and size allows competition and invariably results in a reduction in price. Thus there is a strong commercial incentive for manufacturers to resist standardization, particularly if they hold market dominance. Microsoft, on the back of Windows, made Bill Gates one of the richest men in the world. Apple is one of the largest corporations in the world on the back of its proprietary operating systems and products. This intellectual property is jealously guarded. There are many examples where corporations have used their market power and closed intellectual property to maintain their market position and increase profits. Microsoft has been fined repeatedly by the EU for anticompetitive behaviour. One of the largest corporations in US health IT (Epic Systems) had to be forced by the Obama administration to open its systems to interface with outside systems. (Ref 3)

This commercial incentive to resist standardization and interoperability appears not to be acknowledged as an issue when governments are procuring Health IT systems. Just as the razor manufacturer can charge a premium for blades that fit their handle, health IT vendors appear able to charge eye-watering figures for programs which are proprietary and do not interface with other systems. The user must pay an ongoing licence fee to continue to use the software. Moreover, because the source code is proprietary, no one else can work on it for upgrades and bugfixes – thus the vendor can charge a premium for ongoing support. Changing to another system is expensive. Data will have to be migrated – the vendor will charge to access and modify the data, and because it is not standardized and modelled, software will have to be written to migrate it to another system. Users will have to be retrained. All these costs mean that the customer is effectively “locked in” to an expensive system, often of mediocre quality.

The NHS upgrade of 2002-11 was probably one of the most spectacular and expensive Health IT failures of all time (see “Why is Health IT so hard?”). At the time there was much talk of Open Source systems and Interoperability. Yet more than 10 years later the NHS IT scene is still dominated by large commercial corporations, and Government is still paying large amounts for disparate systems. The Nirvana of Interoperability seems as far away as ever.

Software quality and why it matters.

Large software systems may contain millions of lines of source code. They are constantly modified and extended often over a long period. Many programmers may work on the system over time.

A high quality system has fewer bugs, is less likely to fail catastrophically, but perhaps most importantly can be extended and modified.

This is important because there will inevitably be software errors (“bugs”) in a large system. The density of bugs increases with software complexity and can even be quantified. Many are “edge cases” or of nuisance value, but some are critical and must be fixed.

The user will almost certainly want to make changes to the software as their business needs evolve or issues never addressed at the initial deployment become apparent. Some of these issues arise as a result of “Technical Debt” (see above). Some arise because of the “Complex” nature of the system and the fact they could not have been easily predicted as a result. (see “Complexity and Software Design”)

Some software is regarded as “legacy” (ref 4) – this means essentially that it cannot be modified without unpredictable failures – it has become “frozen”. While the program may function well enough, the quality of the underlying source code is such that making changes and fixing bugs is difficult if not impossible. This happens when the initial design is poor and changes are not carefully curated and managed over time. The code is not well abstracted into components, it is not well documented, interfaces are not discrete and well described, variables are “hard-coded” in different locations, and the codebase is not testable. A well designed program is generally separated into database, business logic, and user interface “layers”. The interfaces between these layers are well described and “discrete”. It should be possible to substitute different programs in each layer and have the whole system continue to function. 
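A minimal sketch of that layered separation follows; the class and function names are invented for illustration. The business logic depends only on a small, described interface, so the storage layer can be swapped (an in-memory store for testing, a real database in production) without touching anything else:

```python
from abc import ABC, abstractmethod

class PatientStore(ABC):                       # the interface between layers
    @abstractmethod
    def get_allergies(self, patient_id: str) -> list: ...

class InMemoryStore(PatientStore):             # one interchangeable storage layer
    def __init__(self, data: dict):
        self._data = data
    def get_allergies(self, patient_id: str) -> list:
        return self._data.get(patient_id, [])

def safe_to_prescribe(store: PatientStore, patient_id: str, drug: str) -> bool:
    """Business-logic layer: knows nothing about how the data is stored."""
    return drug not in store.get_allergies(patient_id)

store = InMemoryStore({"p1": ["penicillin"]})
print(safe_to_prescribe(store, "p1", "penicillin"))    # False
print(safe_to_prescribe(store, "p1", "paracetamol"))   # True
```

In a “legacy” codebase no such seams exist, which is exactly why changes become hazardous.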

A modern software development suite will automatically test code after changes are made and will employ a versioning system to manage change. One approach to Software Design is to write a test for a section of code and then write the code to meet the test.
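A minimal sketch of that test-first workflow, using pytest conventions, might look like this (the function and its rule are invented purely to illustrate the idea – the test is written first, then the code is written until it passes):

```python
import pytest

def test_bmi_calculation():
    # Written first: defines the behaviour we want.
    assert calculate_bmi(weight_kg=80, height_m=2.0) == 20.0
    with pytest.raises(ValueError):
        calculate_bmi(weight_kg=80, height_m=0)

def calculate_bmi(weight_kg: float, height_m: float) -> float:
    # Written second: the simplest code that satisfies the test.
    if height_m <= 0:
        raise ValueError("height must be positive")
    return round(weight_kg / height_m ** 2, 1)
```

Run automatically under a continuous-integration system, such tests guard every later change to the code.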

Another characteristic of “legacy” code is that it is usually proprietary. It may even use its own proprietary language and coding framework. It takes time and resources to develop the software components that we are all used to, such as typical window behaviour, “drag and drop” and the “widgets” that form part of Graphical User Interfaces. If the code is proprietary it may not have had the benefit of the work of many programmers over time that Open languages and frameworks have had. This may explain the “primitive” interface look and function that many large enterprise systems have.

It is not standard practice, even when acquiring large enterprise level systems, to require an independent analysis of underlying software source code and coding systems quality. Customers generally look only at the function of the program. This is surprising given the sums of money that are routinely spent on these projects. The vendor will usually cite commercial confidentiality to prevent such scrutiny. This may also be a reason why the “waterfall” (see above) process of software design is preferred by Vendors and Buyers alike. Software Quality will have a big impact on how easy it is to extend and develop a system.

Software Security and Open Source

Many systems are connected to the internet nowadays. This means they are open to penetration by bots and hackers which are increasingly sophisticated. There may be hundreds of hacking attempts in an hour in a typical system.  

Indeed this has recently become topical with the penetration and theft of data in large Health related IT systems. “Ransomware” attacks are now commonplace.

Open Source software by definition is public and open to scrutiny. Some argue that this makes the code vulnerable to hackers as they can easily analyse the source for potential vulnerabilities. The counter argument is that “many eyes” can look at the codebase and identify these vulnerabilities, allowing them to be corrected.

Proprietary software also seems to have its share of hacks. Perhaps the source code is available to the hacker, perhaps it has been “reverse engineered” from object code, perhaps the hacker has simply tried a suite of hacking tools.

In the case of one recent large attack, “development” code was inadvertently deployed to a production server.

“Trojans” or backdoor access may have been built into the code by the original coder. There was a famous case of an undocumented “backdoor” built into a large commercial database widely used in enterprise systems; this was said to have been exploited by US intelligence. Of course, if the code is secret these Trojans will never be discovered. High quality code is less likely to have vulnerabilities, and if they are present it is possible to correct them.

The User Interface

This is a focus for commercial mobile phone and computer app providers, where usability is critical to uptake in a competitive environment. But in large Health IT systems, usability and the quality of the User Interface do not attract the same attention. Is this yet another adverse effect of the commercial contracting process and “Vendor Lockin”?

The User Interface has also been well studied in the context of Safety Critical systems. Poor User Interface design and poor software quality were factors in the Therac-25 incident. Here a malfunction in a Radiotherapy machine caused harm to several patients and some deaths. (ref 5)

In my view it is also a factor in many poor clinical outcomes in Health when critical previous history data is obscured in the record. The treating Clinician does not access this data and makes a clinical error as a result.

In my experience this factor is not considered in most “after the fact” reviews and “Root Cause Analyses”. Usually the Clinician is held responsible for the outcome and a common response is to require “further training”.  

Some principles of User Interface design: (ref 6)

Simplicity Principle – This means that the design should make the user interface simple, communication clear, common tasks easy and in the user’s own language. The design should also provide simple shortcuts that are closely related to long procedures.

Visibility Principle – All the necessary materials and options required to perform a certain task must be visible to the user without creating a distraction by giving redundant or extraneous information. A great design should not confuse or overwhelm the user with unnecessary information.

Feedback Principle – The user must be fully informed of actions, changes of state, errors, conditions or interpretations in a clear and concise manner, using unambiguous language.

Tolerance Principle – This simply means that the design must be tolerant and flexible. The user interface should reduce the cost of misuse and mistakes by providing options such as undo and redo, to help prevent errors where possible.

Reuse Principle – The user interface should reuse both internal and external components, maintaining consistency in a purposeful way, to save the user from rethinking or remembering. In practice this means data used in several locations should only be entered once.

In all the various Health IT systems that I have used, the User interface does not appear to conform with many of these principles. Typically, the program is complex with multiple layers of dialogs. 

There is a lot of redundancy of headings, poor use of screen “real estate”, poor formatting of dialogs, and display of multiple “null” values.

Often a lot of irrelevant “administrative” information is displayed. The end result is poor usability and poor “data visibility”. Important clinical data is hidden in layers of dialogs or poorly labelled documents.

 These failures reduce efficiency and user satisfaction and increase risk in an already difficult and at times dangerous Clinical Environment.

Conclusion

Software Quality is important to cost, extensibility, Interoperability and Clinical Safety. It should receive more attention in the commissioning and upgrading of Health IT systems. The design of the User Interface is a Clinical Safety Issue and should be considered as a factor when adverse clinical outcomes occur.

References

1. Scalability and Semantic Sustainability in Electronic Health Record Systems

Erik Sundvall

Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.

http://liu.diva-portal.org/smash/record.jsf?pid=diva2%3A599752&dswid=3087

2. Data Modelling – OpenEHR

https://www.openehr.org/

3. Obama legislation

https://www.modernhealthcare.com/article/20160430/MAGAZINE/304309990/how-medicare-s-new-payment-overhaul-tries-to-change-how-docs-use-tech

4. Legacy software

https://www.manning.com/books/re-engineering-legacy-software#toc

5. The Therac-25 Incident

https://thedailywtf.com/articles/the-therac-25-incident

6. User Interface design

http://ux.walkme.com/graphical-user-interface-examples/

Open Source Software – A Paradox

The nuts and bolts of Software – Intellectual Property

The code running in the myriad computers in the world is called object code.

This is a series of low-level instructions read sequentially from memory that tell the computer's processor what to do. In its raw or assembly-language form it is all but unintelligible to humans (except for maybe a very few nerds who like this stuff!).

Depending on the computer language used, this object code is generated from source code by another program called a compiler or interpreter. The source code is human readable and describes the function of the program. Intellectual property resides in the source code. Large enterprise-level programs may have millions of lines of such code. These are usually proprietary – ie the source code is copyright and secret. The user of the program buys a licence which allows them to use the code for a limited time, but most other rights are strictly limited. In particular they must use the original vendor for support, including bugfixes and upgrades, as the source is not available to anyone else. This lack of alternatives allows the vendor to charge more for support than would otherwise be the case – a situation termed “Vendor Lockin”.
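For a feel of the difference between the two, Python's built-in dis module will print the low-level instructions its interpreter generates from a couple of lines of readable source (bytecode rather than machine assembly, but the contrast is the same in spirit):

```python
import dis

def double(x):
    return x * 2      # readable source code

# Prints the low-level instructions the interpreter actually runs,
# e.g. LOAD_FAST, LOAD_CONST, a multiply instruction, RETURN_VALUE
# (exact instruction names vary with the Python version).
dis.dis(double)
```

It is the readable version above, not the instruction listing, that carries the intellectual property.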

Data and Intellectual Property

Useful and substantial programs operate on data such as names, addresses, text and measurements. These data are stored in a repository generally called a database. But in the computer world, data can be stored in different ways – for example a number can be binary, integer, decimal, signed or unsigned. All these quantities are handled and stored in different ways. Data can be modelled in different ways with constraints and associations with other data. Databases have structures called schemas which allow them to store and recover data reliably. These models and schemas are also generally proprietary. A vendor may charge a fee to convert data to a format suitable for another database or even regard the customer’s data as proprietary and refuse to do so altogether. The customer is truly “locked in” in this situation and the barriers to change to another program are substantial.

The Open Source Paradox

Yet another software development paradigm is termed “Open Source”. The design process is not necessarily different to those discussed above – rather the difference is in how the development is funded and how the intellectual property created is treated. The software is “Open” – the source code is public and free for anyone to use as they see fit. Under “copyleft” licences such as the GPL there is one important caveat – software developed from the codebase must also remain “Open” (more permissive licences such as MIT or BSD do not impose this condition). Much of this software has been developed by volunteers for free, though commercial programmers may also use this model, and there is no reason why a commercial entity cannot charge for installing or supporting an Open Source program. But the source code must remain publicly available.

Commercial developers argue that an Open Source model does not deliver the resources required in large software developments and the resources needed for ongoing support. They argue that Open Source cannot deliver the quality that commercial offerings can. But is this really true?

If you are browsing the internet you are likely to be using Open Source software. The majority of web servers are based on an Open Source stack – typically LAMP (Linux operating system, Apache web server, MySQL database and PHP scripting language). Certainly the internet would not function without Open standards such as HTTP. The Linux desktop now rivals commercial alternatives such as Windows or macOS in functionality and stability.

But how can “free” software be equal to if not better than proprietary software? You get what you pay for, right?

This apparent Paradox can be explained by several factors.

The passion of volunteers – “nerds” will always want to show how clever they are, or may want the functionality of a particular program that is not otherwise available.

Corporate memory – the efforts of the “nerds” and others are not lost. The code is available to others to extend and improve. Open Source version control systems such as Subversion and Git have been developed, which allow tracking of changes and cooperation between developers. GitHub, a hosting service built on Git, now has more than 50 million users worldwide.

Programmers who have no connection with each other apart from an interest in a particular project can cooperate via these systems and their work is automatically saved and curated. Over time this is powerful. In the proprietary world programmers may work for years on a project, producing high quality software, but if their work does not gain market acceptance it is locked up in a proprietary licence and forgotten.

Development techniques are more suited to “complex” systems engineering. Open Source software is developed incrementally and with many competing solutions. As discussed previously this is likely to produce a better outcome in a complex environment.