Get Your Checkup!

But at what Cost?

The “Checkup” has become a common theme in General Practice and Primary Care.

Men are exhorted with blokey slogans like “get your grease and oil change” to have their regular checkup, or they will suffer all sorts of dire consequences.

Women are prompted with signs in public conveniences to have their regular Pap smear.

It seems an intuitively attractive idea that if we look for disease and detect it early we are more likely to be able to cure it and outcomes will be improved. 

In particular the spectre of Cancer is kept at bay.

But what is the evidence?

Screening for disease

Many examinations and tests have been proposed over the years to look for occult disease – ie disease that has not yet presented with symptoms or signs.

The RACGP Red Book lists many recommended procedures and a further 15 that it says are not supported by evidence.

Health Screening is the process of looking for disease in people who are well, in order to detect a disease or classify them as likely or unlikely to have a disease.

The aim is to detect early disease in apparently healthy individuals. Case finding is a more targeted approach to an individual or group at risk of a particular condition.

Screening for disease in asymptomatic people is also termed “Primary Prevention”.

To be valid, a screening test or procedure must pass three evidence tests.

The test must reliably detect an important health condition before it would otherwise present. 

There must be a treatment for the condition. 

The outcome must be improved as a result.

Very few screening procedures pass these tests when they are rigorously applied.

Those that do have surprisingly weak evidence to validate them.

PSA (Prostate Specific Antigen) as a screening test 

The debate about PSA has raged for years and seems further than ever from being finally resolved.

We regularly see items in social media and on TV exhorting men to have a checkup, with the promise that all will be well.

But when we apply the three tests above to PSA as a screening test, it falls short.

(1) Does it detect prostate cancer reliably? 

The figures are debated, but roughly 20% of men with prostate cancer have a normal PSA, i.e. its sensitivity is about 80%.

Conversely, roughly 80% of men with a high PSA do not have cancer – strictly speaking, a low positive predictive value rather than low specificity. However a high result invariably leads to more investigation, including biopsy, which has its own risks and errors.
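The relationship between sensitivity, specificity, prevalence and the positive predictive value follows from Bayes’ theorem. A minimal sketch, using the 80% sensitivity above together with purely illustrative values for specificity and prevalence (neither figure is given here):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive test indicates true disease (Bayes' theorem)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Illustrative assumptions: specificity 65%, prevalence 10% in the screened group.
ppv = positive_predictive_value(sensitivity=0.80, specificity=0.65, prevalence=0.10)
# ppv comes out at roughly 0.20 - i.e. about 80% of positive results are false alarms.
```

With these assumed inputs the arithmetic reproduces the figure quoted above; different prevalence or specificity assumptions shift the result substantially, which is exactly why screening decisions depend on population data.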

(2) Is treatment of prostate cancer effective?

Various treatments have been proposed – radical surgery to remove the cancer completely, curative radiotherapy, or hormonal treatment.

All have significant failure rates (not curing the cancer) and side effects are almost universal. Impotence is likely, incontinence is possible and significant side effects such as radiation proctitis (inflammation of the rectum) are common.

Moreover many men with prostate cancer die from other causes – the cancer may never affect their lifespan. The 10-year survival disadvantage of men with prostate cancer is only 2%.

(3) Is the outcome improved?

A large German meta-analysis concluded:

The benefits of PSA-based prostate cancer screening do not outweigh its harms. We failed to identify eligible screening studies of newer biomarkers, PSA derivatives or modern imaging modalities, which may alter the balance of benefit to harm. In the treatment group, 2 of 1000 men were prevented from dying of prostate cancer by treatment. But all-cause mortality was similar in both screening and control groups. In the screening group there was a significant burden of morbidity associated with investigation and treatment side effects. For every 1000 men screened, 220 suffered significant side effects or harm.

Once the diagnosis is made, there may be some differences in subgroups and risk can be stratified. There can be a discussion with the individual about the best treatment in their particular circumstances. 

But the initial decision to screen is by necessity based on population data. As discussed above, PSA screening in this situation is not supported by the data.

The Evidence for Secondary and Tertiary prevention

Secondary and Tertiary Prevention describe activities which manage known risk factors for disease (secondary prevention, or “case finding”) or the disease itself, to prevent recurrence of events or worsening of the disease (tertiary prevention). Examples are managing risk factors for Ischaemic Heart Disease (hypertension, cholesterol, smoking) in a client who has suffered a heart attack, or hypertension in patients with impaired renal function. In this situation the evidence for benefit is much stronger than in Primary Prevention. (ref)

But to achieve this benefit the health service must maintain a clear summary of the client issues and ensure that a program of regular relevant interventions is delivered. There is reasonably good evidence that a programmed series of interventions (a “Care Plan”) effectively reduces hospitalization and complications of known Chronic Disease.

Here a good EHR (Electronic Health Record) system with logical business rules is important. But many of the current EHR systems in use suffer from poor “data visibility”, i.e. important data about a client, such as past history, is difficult to find. This is due to poor program design and to “noise” from unnecessarily complex dialogs and administrative information cluttering the record.

(see my previous articles Poor Administration – a Health Hazard? and Software Design in Health – TjilpiDoc)

The General Checkup

A “General Checkup” has not been shown to improve outcomes in the general population.

A large meta-analysis of nearly 200,000 subjects failed to show benefit in outcomes (mortality or morbidity). (ref)

There were more diagnoses and treatment, however.

In the Indigenous population the idea of a checkup seems intuitively attractive because of the high rate of ill health generally.

However there does not appear to be research demonstrating a benefit in this setting.

The Checkup as a Safety Net

The Checkup in its various forms seems to be implicitly regarded as a “safety net”. 

However, the studies of a General Checkup and the effects on outcomes (minimal) would suggest that this is not so.

Indeed it is my anecdotal experience that known issues are often ignored and new disease is rarely found on a routine checkup. Most new issues present as an acute illness or event.

The Commercial Value and cost of the Checkup

The Checkup is a relatively low risk activity legally and, being a scheduled and programmed activity, can to a large extent be performed by less sophisticated clinicians. It does not require highly developed clinical acumen and there are usually no difficult decisions. In spite of the lack of evidence, it is well remunerated by Medicare, and it has become a commercially attractive option for Primary Care practices. But it generates significant system costs in addition to the checkup itself: there are oncosts for the pathology and imaging generated, which is attractive to the providers of those services. In spite of all this extra cost to the system, the research quoted above suggests that there is no improvement in outcomes.

Primary Care, Imaging and Pathology Providers have a vested interest in performing these services, even though the evidence for them is poor.

Why the disconnect between evidence and practice?

The PSA question continues to be debated even though the evidence is clear. A regular “General Checkup” continues to be promoted in spite of the lack of evidence of benefit and its significant cost.

Is this similar to the Climate Change debate where vested interests prevent real action? I would argue commercial vested interests are causing this disconnect. In fact much of our practice in Health is driven by commercial interests and much of our evidence has become corrupted by commercial drivers. As we struggle to deliver Health services and General Practice is apparently in crisis it is time in my view to review our whole basis of Health Service delivery and explicitly address these issues. 


Paschen U, Sturtz S, Fleer D, Lampert U, Skoetz N, Dahm P. Assessment of prostate-specific antigen screening: an evidence-based report by the German Institute for Quality and Efficiency in Health Care. First published 07 May 2021.

Krogsbøll LT, Jørgensen KJ, Larsen CG, Gøtzsche PC. General health checks in adults for reducing morbidity and mortality from disease: Cochrane systematic review and meta-analysis. BMJ. 2012; 345: e7191.

Effect of evidence-based therapy for secondary prevention of cardiovascular disease: Systematic review and meta-analysis. PLoS One. 2019; 14(1): e0210988. Published online 2019 Jan 18. doi: 10.1371/journal.pone.0210988

Software Design in Health


Software Design is an arcane subject, a long way from the day to day practice of Health Practitioners. Yet we all use computer systems – indeed they are central to our day to day practice. Good Health IT design is important to efficiency, work satisfaction and Clinical Safety, yet it appears not to be considered an important factor in the commissioning of Health IT systems.

In this article I explore some relevant issues.

“Technical Debt” (Ref 1)

The design of a new system is a significant investment in money and resources. There is pressure to deliver on time and on budget. During the process compromises will be made on design and system functionality. Sometimes problems that should be solved now are deferred in favour of a “bandaid” solution. This incurs a “Technical Debt” which may have to be repaid later with further design work.

Deferred Data Modelling is an example of “technical debt”. This can be important when different systems need to communicate, though it is less critical within a single system. Data sent across the interface between systems must be carefully structured, or serious problems can arise when the formats are not compatible. For example, a numerical quantity can be specified in various ways – signed integer, unsigned integer or floating point decimal, to name a few. In programming, these quantities have different lengths and are represented and manipulated in different ways. If this is not recognized and allowed for in design, errors will result.
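As a small hypothetical sketch of this kind of mismatch, the same four bytes read as a signed versus an unsigned 32-bit integer yield wildly different values:

```python
import struct

raw = struct.pack(">i", -5)                # sender encodes a signed 32-bit integer
as_signed = struct.unpack(">i", raw)[0]    # receiver uses the agreed format: -5
as_unsigned = struct.unpack(">I", raw)[0]  # receiver wrongly assumes unsigned: 4294967291
```

Identical bits, completely wrong value. Without an agreed data model at the interface, every such field needs a hand-written check or conversion, and a single misread format specifier can silently corrupt clinical data.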

A special program or script may have to be devised to parse and convert data when it crosses from one system to another. If the work of data modelling and interface standard specification has been deferred at the design phase, “Technical Debt” has been incurred which must be “repaid” at a later date (ref 1). There does not seem to be much interest from Vendors or Buyers in a formal Data Modelling system.

Data Modelling – why?

Any significant software system must store data to be useful. Health systems require large quantities of often complex data which must be stored, recalled and manipulated. A particular data entity can be represented in various ways. For example a pulse rate is likely to lie within a certain range, will not be negative, and is not measured in fractions. Thus it could be represented by an unsigned integer quantity – this takes up much less memory space than a floating point number, for example. On the other hand a vaccine temperature measurement requires recording in decimal fractions of a degree and might be negative, so a floating point number is required. Additional “metadata” might be needed to interpret the figure, such as the means of measurement. Suitable ranges could be included in the data definition, allowing alerts to be generated in decision support systems when the measurement falls outside the normal range.
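The pulse-rate and vaccine-temperature examples can be sketched as a simple data model. The class and field names here are invented for illustration and are not drawn from any standard:

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    name: str
    value: float
    units: str
    low: float                    # normal range, part of the data definition
    high: float
    method: str = "unspecified"   # metadata: how the value was measured

    def out_of_range(self) -> bool:
        return not (self.low <= self.value <= self.high)

pulse = Measurement("pulse rate", 140, "bpm", 40, 100, method="oximeter")
fridge = Measurement("vaccine fridge temperature", -1.5, "degrees C", 2.0, 8.0)

# A decision support system could raise an alert for each out-of-range value.
alerts = [m.name for m in (pulse, fridge) if m.out_of_range()]
```

Because the range and units travel with the value, a decision support rule or a receiving system can interpret the measurement without guessing, which is essentially what a formal data model such as openEHR provides at scale.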

Data modelling is the process of looking at data entities to be stored in a system database and specifying such constraints and data types. It makes the data computable. It then becomes more useful for search and decision support systems. It also allows the specification of data standards at interfaces between systems and may remove the need for “middleware” to connect systems. The internet is a good example where standards for data and interfaces have been agreed – this allows many disparate systems to communicate.

OpenEHR (ref 2) is an example of a data modelling system which is gaining increasing acceptance throughout the world.

Standards and Interoperability 

– “the good thing about Standards is that there are so many to choose from”

There are strong commercial reasons why standardization is so hard to achieve. In everything from razors to printers there appears to be a plethora of shapes, sizes and standards. Of course when there is a need to link components together, a standard interface matters. A razor blade will not link to a handle unless the relevant components are the correct shape and size. Reducing all razor blades to a common standard shape and size allows competition and invariably results in a reduction in price. Thus there is a strong commercial incentive for manufacturers to resist standardization, particularly if they hold market dominance. Windows and Microsoft made Bill Gates one of the richest men in the world. Apple is one of the largest corporations in the world on the back of its proprietary operating systems and products. This intellectual property is jealously guarded. There are many examples where corporations have used their market power and closed intellectual property to maintain their market position and increase profits. Microsoft has been fined repeatedly by the EU for anticompetitive behaviour. One of the largest corporations in US health IT (Epic Systems) had to be forced by the Obama administration to open its systems to interface with outside systems. (Ref 3)

This commercial incentive to resist standardization and interoperability appears not to be acknowledged as an issue when governments are procuring Health IT systems. Just as the razor manufacturer can charge a premium for blades that fit their handle, health IT vendors appear able to charge eyewatering figures for their programs which are proprietary and do not interface with other systems. The user must pay an ongoing licence fee to continue to use the software. Moreover, because their source code is proprietary, no one else can work on it for upgrades and bugfixes – thus they can charge a premium for ongoing support. Changing to another system is expensive. Data will have to be migrated – the vendor will charge to access and modify data. Because it is not standardized and modelled, software will have to be written to migrate it to another system. Users will have to be retrained. All these costs mean that the customer is effectively “locked-in” to an expensive system, often of mediocre quality.

The NHS upgrade of 2002-11 was probably one of the most spectacular and expensive Health IT failures of all time (see “Why is Health IT so hard?”). At the time there was much talk of Open Source systems and Interoperability. Yet more than 10 years later the NHS IT scene is still dominated by large commercial corporations, and Government is still paying large amounts for disparate systems. The Nirvana of Interoperability seems as far away as ever.

Software quality and why it matters.

Large software systems may contain millions of lines of source code. They are constantly modified and extended often over a long period. Many programmers may work on the system over time.

A high quality system has fewer bugs, is less likely to fail catastrophically and, perhaps most importantly, can be extended and modified.

This is important because there will inevitably be software errors (“bugs”) in a large system. Bug density increases with software complexity and can even be quantified. Many bugs are “edge cases” or of nuisance value, but some are critical and must be fixed.

The user will almost certainly want to make changes to the software as their business needs evolve or issues never addressed at the initial deployment become apparent. Some of these issues arise as a result of “Technical Debt” (see above). Some arise because of the “Complex” nature of the system and the fact they could not have been easily predicted as a result. (see “Complexity and Software Design”)

Some software is regarded as “legacy” (ref 4) – this means essentially that it cannot be modified without unpredictable failures – it has become “frozen”. While the program may function well enough, the quality of the underlying source code is such that making changes and fixing bugs is difficult if not impossible. This happens when the initial design is poor and changes are not carefully curated and managed over time. The code is not well abstracted into components, it is not well documented, interfaces are not discrete and well described, variables are “hard-coded” in different locations, and the codebase is not testable. A well designed program is generally separated into database, business logic, and user interface “layers”. The interfaces between these layers are well described and “discrete”. It should be possible to substitute different programs in each layer and have the whole system continue to function. 
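A minimal sketch of that layering, with an explicit interface between the storage and business-logic layers (the names here are illustrative, not from any real system):

```python
from typing import Protocol

class Storage(Protocol):
    """Interface the business-logic layer depends on - not a concrete database."""
    def get_history(self, patient_id: str) -> list[str]: ...

class InMemoryStorage:
    """One interchangeable implementation; a real database layer could be swapped in."""
    def __init__(self, records: dict):
        self._records = records

    def get_history(self, patient_id: str) -> list[str]:
        return self._records.get(patient_id, [])

class ClinicalService:
    """Business-logic layer: knows only the Storage interface, not its implementation."""
    def __init__(self, storage: Storage):
        self._storage = storage

    def summary(self, patient_id: str) -> str:
        history = self._storage.get_history(patient_id)
        return "; ".join(history) if history else "no recorded history"

service = ClinicalService(InMemoryStorage({"p1": ["hypertension", "diabetes"]}))
```

Because `ClinicalService` depends only on the interface, the storage layer can be replaced without touching the business logic, which is exactly the substitutability described above.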

A modern software development suite will automatically test code after changes are made and will employ a versioning system to manage change. One approach to Software Design is to write a test for a section of code and then write the code to meet the test.
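That test-first approach can be illustrated with a deliberately trivial, hypothetical example:

```python
# Step 1: write the test first - it specifies the required behaviour.
def test_bmi():
    assert round(bmi(weight_kg=70, height_m=1.75), 1) == 22.9

# Step 2: write just enough code to make the test pass.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

test_bmi()  # an automated suite would re-run this after every change
```

The test then stays in the codebase, so any later change that breaks the behaviour is caught automatically rather than discovered by a user.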

Another characteristic of “legacy” code is that it is usually proprietary. It may even use its own proprietary language and coding framework. It takes time and resources to develop the software components we are all used to, such as typical window behaviour, “drag and drop” and the “widgets” that form part of Graphical User Interfaces. If the code is proprietary it may not have had the benefit of the work of many programmers over time that Open languages and frameworks have had. This may explain the “primitive” interface look and function of many large enterprise systems.

It is not standard practice, even when acquiring large enterprise level systems, to require an independent analysis of underlying software source code and coding systems quality. Customers generally look only at the function of the program. This is surprising given the sums of money that are routinely spent on these projects. The vendor will usually cite commercial confidentiality to prevent such scrutiny. This may also be a reason why the “waterfall” (see above) process of software design is preferred by Vendors and Buyers alike. Software Quality will have a big impact on how easy it is to extend and develop a system.

Software Security and Open Source

Many systems are connected to the internet nowadays. This means they are open to penetration by bots and hackers which are increasingly sophisticated. There may be hundreds of hacking attempts in an hour in a typical system.  

Indeed this has recently become topical with the penetration and theft of data in large Health related IT systems. “Ransomware” attacks are now commonplace.

Open Source software by definition is public and open to scrutiny. Some argue that this makes the code vulnerable to hackers as they can easily analyse the source for potential vulnerabilities. The counter argument is that “many eyes” can look at the codebase and identify these vulnerabilities, allowing them to be corrected.

Proprietary software also seems to have its share of hacks. Perhaps the source code was available to the hacker, perhaps it was “reverse engineered” from object code, or perhaps the hacker simply tried a suite of hacking tools.

In the case of a recent large attack “development” code was inadvertently deployed to a production server.

“Trojans” or backdoor access may have been built into the code by the original coder. There was a famous case of an undocumented “backdoor” built into a large commercial database widely used in enterprise systems; this hack was said to be exploited by US intelligence. Of course if the code is secret, such Trojans may never be discovered. High quality code is less likely to have vulnerabilities, and if they are present it is possible to correct them.

The User Interface

This is a focus for commercial mobile phone and computer app providers where usability is critical to uptake in a competitive environment. But in large Health IT systems, usability and the quality of the User Interface does not attract the same attention. Is this yet another adverse effect of the commercial contracting process and “Vendor Lockin”?

The User Interface has also been well studied in the context of Safety Critical systems. Poor User Interface design and poor software quality were factors in the Therac-25 incident, in which a malfunctioning radiotherapy machine caused harm to several patients, including some deaths. (ref 5)

In my view it is also a factor in many poor clinical outcomes in Health when critical previous history data is obscured in the record. The treating Clinician does not access this data and makes a clinical error as a result.

In my experience this factor is not considered in most “after the fact” reviews and “Root Cause Analyses”. Usually the Clinician is held responsible for the outcome and a common response is to require “further training”.  

Some principles of User Interface design: (ref 6)

Simplicity Principle – This means that the design should make the user interface simple, communication clear, common tasks easy and in the user’s own language. The design should also provide simple shortcuts that are closely related to long procedures.

Visibility Principle – All the necessary materials and options required to perform a certain task must be visible to the user without creating a distraction by giving redundant or extraneous information. A great design should not confuse or overwhelm the user with unnecessary information.

Feedback Principle – The user must be fully informed of actions, changes of state, errors, conditions or interpretations in a clear and concise manner, without using ambiguous language.

Tolerance Principle – This simply means that the design must be tolerant and flexible. The user interface should reduce the cost of misuse and mistakes by providing options such as redo and undo, to help prevent errors where possible.

Reuse Principle – The user interface should reuse both internal and external components, maintaining consistency in a purposeful way so that the user does not have to rethink or remember. In practice this means data used in several locations should only be entered once.

In all the various Health IT systems that I have used, the User interface does not appear to conform with many of these principles. Typically, the program is complex with multiple layers of dialogs. 

There is a lot of redundancy of headings, poor use of screen “real estate”, poor formatting of dialogs, and displays of multiple “null” values.

Often a lot of irrelevant “administrative” information is displayed. The end result is poor usability and poor “data visibility”. Important clinical data is hidden in layers of dialogs or poorly labelled documents.

 These failures reduce efficiency and user satisfaction and increase risk in an already difficult and at times dangerous Clinical Environment.


Software Quality is important to cost, extensibility, Interoperability and Clinical Safety. It should receive more attention in the commissioning and upgrading of Health IT systems. The design of the User Interface is a Clinical Safety Issue and should be considered as a factor when adverse clinical outcomes occur.


1. Scalability and Semantic Sustainability in Electronic Health Record Systems

Erik Sundvall

Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.

2. Data Modelling – OpenEHR

3. Obama legislation

4. Legacy software

5. The Therac-25 Incident

6. User Interface design

Open Source Software – A Paradox

The nuts and bolts of Software – Intellectual Property

The code running in the myriad of computers in the world is called object code.

This is a series of low-level commands, read sequentially from memory, that tell the computer processor what to do. These machine instructions, usually written out as assembly language, are completely unintelligible to humans (except for maybe a very few nerds who like this stuff!).

Depending on the computer language used, this object code is generated from source code by another program called an interpreter or compiler. The source code is human readable and describes the function of the program. Intellectual property resides in the source code. Large enterprise-level programs may have millions of lines of such code. These are usually proprietary – i.e. the source code is copyright and secret. The user of the program buys a licence which allows them to use the code for a limited time, but most other rights are strictly limited. In particular they must use the original vendor for support, including bugfixes and upgrades, as the source is not available to anyone else. This lack of alternatives allows the vendor to charge more for support than would otherwise be the case – a situation termed “Vendor Lockin”.
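Python makes the gap between the two visible: the standard `dis` module prints the low-level instructions generated from readable source. (Python compiles to interpreter bytecode rather than native object code, but the principle of readable source versus opaque low-level commands is the same.)

```python
import dis

def add(a, b):          # human-readable source code
    return a + b

dis.dis(add)            # prints the low-level instructions the interpreter runs
instructions = [ins.opname for ins in dis.Bytecode(add)]
```

Reading the printed opcodes makes it obvious why, without the source, even a trivial program is effectively closed to outside maintenance.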

Data and Intellectual Property

Useful and substantial programs operate on data such as names, addresses, text and measurements. These data are stored in a repository generally called a database. But in the computer world, data can be stored in different ways – for example a number can be binary, integer, decimal, signed or unsigned. All these quantities are handled and stored in different ways. Data can be modelled in different ways with constraints and associations with other data. Databases have structures called schemas which allow them to store and recover data reliably. These models and schemas are also generally proprietary. A vendor may charge a fee to convert data to a format suitable for another database or even regard the customer’s data as proprietary and refuse to do so altogether. The customer is truly “locked in” in this situation and the barriers to change to another program are substantial.

The Open Source Paradox

Yet another software development paradigm is termed “Open Source”. The design process is not necessarily different to those discussed above – rather the difference is in how the development is funded and how the intellectual property created is treated. The software is “Open” – the source code is public and free for anyone to use as they see fit. However, there is one important caveat under “copyleft” licences such as the GPL – software developed from the codebase must also remain “Open”. Much of this software has been developed by volunteers for free, though commercial programmers may also use this model, and there is no reason why a commercial entity cannot charge for installing or supporting an Open Source program. But the source code must be publicly available.

Commercial developers argue that an Open Source model does not deliver the resources required in large software developments and the resources needed for ongoing support. They argue that Open Source cannot deliver the quality that commercial offerings can. But is this really true?

If you are browsing the internet you are likely to be using Open Source software. The majority of web servers are based on an Open Source stack – typically LAMP (Linux operating system, Apache webserver, MySQL database and PHP scripting language). Certainly the internet would not function without Open standards such as HTTP. The Linux Desktop now rivals commercial alternatives such as Windows or macOS in functionality and stability.

But how can “free” software be equal to if not better than proprietary software? You get what you pay for, right?

This apparent Paradox can be explained by several factors.

The passion of volunteers – “nerds” will always want to show how clever they are, or may want the functionality of a particular program that is not otherwise available.

Corporate memory – the efforts of the “nerds” and others are not lost. The code is available to others to extend and improve. Open Source version control systems such as Subversion and Git, and hosting platforms such as GitHub, allow tracking of changes and cooperation between developers. GitHub now has more than 50 million users worldwide.

Programmers who have no connection with each other apart from an interest in a particular project can cooperate via these systems and their work is automatically saved and curated. Over time this is powerful. In the proprietary world programmers may work for years on a project, producing high quality software, but if their work does not gain market acceptance it is locked up in a proprietary licence and forgotten.

Development techniques are more suited to “complex” systems engineering. Open Source software is developed incrementally and with many competing solutions. As discussed previously this is likely to produce a better outcome in a complex environment.

Complexity and Software Design 

(Why Large IT Projects Fail) 

From the literature and the cases above it is apparent that large IT system deployments fail more often than not, and are almost never completely delivered. Is there a fundamental reason for this?

Large IT projects are certainly complex in the conventional sense. Bar-Yam (ref) argues that they are also “complex” in the mathematical sense. There may be several independent large systems interacting in unpredictable ways. This complexity property makes them difficult to analyse and therefore difficult if not impossible to predict. It is therefore practically impossible to specify the outcome of a complex system design at the outset. Thus the traditional “waterfall” (see below) specification process is likely or even bound to fail.

Conventional large system engineering has been based on historic designs such as the Manhattan Project, or the Apollo project to land a man on the moon. While these appear “complex” the outcome sought was relatively discrete and simple in an engineering sense. Health IT systems have a complex output in many records which are used in various ways for individual, management and population purposes. The Health IT environment is inherently complex with large amounts of data of different types.

From Bar Yam – “The complexity of a task can be quantified as the number of possible wrong ways to perform it for every right way. The more likely a wrong choice, the more complex the task. In order for a system to perform a task it must be able to perform the right action. As a rule, this also means that the number of possible actions that the system can perform (and select between) must be at least this number. This is the Law of requisite variety that relates the complexity of a task to the complexity of a system that can perform the task effectively. “

Clearly in the case of Health records there are many “wrong ways” and probably many “right ways” to record a health encounter!

The governance of large projects in government is itself complex – there are many stakeholders with various agendas interacting in various ways, not necessarily under the control of the design team.

Bar Yam argues that a fundamentally different approach should be taken to the design of complex systems. Rather than attempting to specify a single solution at the outset, the design process should focus on the process of evolution towards a solution.

Even agile software design processes (see below) are not “evolutionary” in this sense. They do not have many possible solutions competing with only the “fittest” surviving – they are a single design evolving towards a solution by iterative small changes. The Open Source software development process is perhaps closer – here there may be several projects developing a similar solution to a particular problem.

There are design processes which are a combination of these techniques. Toyota are regarded as world leaders in design and their vehicles are market leaders. They use a traditional systems engineering approach, but have several teams working in parallel on the same problem. Each team is allowed to progress their work to an advanced stage before a single solution is chosen. This redundancy would appear to be wasteful but in fact produces superior designs.
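The parallel-team idea can be illustrated with a small simulation. Nothing below is Toyota’s actual process – the teams, quality scores and “refinement” steps are invented – but it shows why selecting the best of several independently developed designs tends to beat committing to one design early:

```python
# A toy simulation of parallel design teams, loosely inspired by the Toyota
# approach described above. Teams, scores and the refinement step are invented.
import random

def develop_design(iterations: int = 20) -> float:
    """One team refines a design; quality drifts, mostly upward."""
    quality = random.uniform(0.0, 0.5)        # starting concept quality
    for _ in range(iterations):
        quality += random.uniform(-0.05, 0.15)
    return quality

def best_of_parallel_teams(n_teams: int = 4) -> float:
    """Let several teams progress independently, then choose the best design."""
    return max(develop_design() for _ in range(n_teams))

random.seed(1)
single = [develop_design() for _ in range(200)]
parallel = [best_of_parallel_teams(4) for _ in range(200)]

avg = lambda xs: sum(xs) / len(xs)
# The "wasteful" redundancy buys a better expected outcome.
print(avg(parallel) > avg(single))   # True
```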

Where change is being made to a large complex system, Bar Yam advocates small changes at various places in the system. These gradually replace the current components over time, with the old system being maintained for a period for redundancy and safety.

The internet can be regarded as a large system that was developed using this “complex systems engineering” approach, being created and revised incrementally over decades. The internet has brought change to the world equivalent to the industrial revolution. It is a Complex Systems Engineering success story.

Software design approaches – “Waterfall” vs “Agile” 

The traditional approach to commissioning or designing a software system is termed the “Waterfall” process.

Software designers or vendors attempt to capture a detailed set of specifications from users which completely describe the functions of the new system. The customer assesses the new system against the specifications. When the two parties agree that the specifications have been met to an acceptable degree, the system is delivered. Further evolution of the system is a separate issue – often not considered at the initial design phase. So the design effort is essentially a one-off “waterfall” process using a traditional systems engineering approach. In practice, there are a number of problems with this approach in large Health IT systems.

Firstly, the vast majority of new Health IT systems are not designed from scratch – they are usually created by modifying a system the software vendor already holds. The vendor claims that this system can be “customized” to fit the customer’s requirements. In practice, the customer usually accepts significant compromises as to which specifications are not addressed. Contracts generally limit the writing of new software to address specifications, because this is expensive and uncertain. Indeed, if the product being considered is not well designed and maintained, it may be impossible to modify it to a significant extent (see “Legacy Software” in a further article).

Secondly, the process of capturing accurate specifications is difficult – misunderstandings may persist between customer and software engineer. To make the process more reliable, formal methods and even specification languages have been developed for this purpose.

Thirdly, the customer’s needs will always change and evolve. There is often “specification creep”, particularly if there are many “stakeholders” and the process is not tightly managed. In large Government projects there may be hundreds of stakeholders with various agendas. Invariably the delivery of specifications is incomplete. If the process of updating and improving the software is not explicitly planned and funded, changes after the startup date will be difficult. Bureaucrats generally want to deliver a specific outcome on a certain date – funding is always contested and limited. A system which is developed incrementally over time is less politically attractive than a showpiece startup announcement, so an ongoing process of evolution and improvement is not generally funded.

Finally, the project seeks to replace the current system with a “big bang” approach. The old system is not necessarily retained for safety and redundancy.

Thus, if we accept that these systems are “complex” as discussed above, then this approach is almost certain to fail.

An alternative design process has emerged in recent years, termed “agile” or “extreme” programming. Here the designer starts with a partial but quickly produced solution to the customer’s requirements. The customer assesses the solution and proposes changes and extensions. The designer implements these, and the customer assesses the system again and proposes improvements. Through an iterative series of such cycles, the system evolves towards a solution. This approach overcomes several of the problems above – in particular, the issues of complexity and difficulty in specification are better addressed. However, it is not generally adopted in large Government-funded systems.
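The cycle above can be expressed as a short sketch. The requirements and “customer” below are invented stand-ins; the point is the loop of partial delivery, assessment and extension:

```python
# A minimal sketch of the agile build-assess cycle described above.
# The requirements and "customer" are invented stand-ins.

def customer_feedback(system: set, requirements: set) -> set:
    """The customer assesses the system and reports what is still missing."""
    return requirements - system

def agile_loop(requirements: set, increment: int = 2):
    system: set = set()     # start from a quickly produced partial solution
    cycles = 0
    missing = customer_feedback(system, requirements)
    while missing:
        # Each cycle delivers a small increment, then the customer reassesses.
        system |= set(sorted(missing)[:increment])
        cycles += 1
        missing = customer_feedback(system, requirements)
    return system, cycles

reqs = {"record encounter", "prescribe", "recall patient", "billing", "referral"}
built, cycles = agile_loop(reqs)
print(cycles)           # 3 cycles to cover 5 requirements, 2 per increment
print(built == reqs)    # True
```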

Next – Open Source and Software Design 


Yaneer Bar-Yam. When Systems Engineering Fails — Toward Complex Systems Engineering. SMC’03 Conference Proceedings: 2003 IEEE International Conference on Systems, Man and Cybernetics (Cat. No.03CH37483), 2003.

Why is Health IT so hard?

The History of large Health IT Projects

Only a minority of large Health Information Technology (IT) projects are delivered on time, on budget and with functionality approximating that specified by the customer. There is a significant rate of what can only be regarded as complete failures. Given the money spent (usually in the hundreds of millions of dollars) this is a surprising result.

To give a few examples: (1) 


UK National Programme for IT

The number one project disaster of all time is probably the massive £12 billion plan to create the world’s largest civilian database, linking all parts of the UK’s National Health Service. Launched in 2002, the project was in disarray from the beginning, missing initial deadlines in 2007, and was eventually scrapped by the UK government in September 2011.

The nine-year debacle, under the National Programme for IT, ran vastly over cost and years behind schedule due to technical issues, problems with vendors and constantly changing system specifications.

In early 2012, one of the primary suppliers, CSC, made a $1.49 billion write-off against the botched project. One report claimed the failed project had cost UK taxpayers £10bn, with the final bill expected to be “several hundreds of millions of pounds higher”.

South Australian EPAS system (2) 

This system was set up initially in two country hospitals and the SA Repatriation Hospital in 2012, as a pilot for a statewide electronic medical records system.

It was to be deployed in the new Royal Adelaide Hospital, but when the hospital opened the system was unusable and paper records were used. At that time the total published cost was $422M. This can only be regarded as a comprehensive failure, though the SA government sought to “reset” the project using the same software.

HealthSMART modernisation program

In mid-2008, the Victorian government unveiled its HealthSMART program to modernise and replace IT systems across the Victorian public health sector.

Implementation costs for the HealthSMART clinical ICT system rollout blew out to $145.3 million or 150 per cent more than the original budget of $58.3 million, according to an Auditor-General’s report.

The Auditor-General’s audit report also suggested that the absence of appropriate controls and effective mitigations at certain sites could pose serious safety risks to patients.

MyHealth Record (3) 

The Federal Government has been engaged in the development of a National E-Health record for more than 20 years in various forms.

In spite of the investment of some $1.97B as at Jan 2020, it has not achieved a useful universal Health record for all Australians.

Some 23M records had been created, with half of these holding no data. Of those with data, many are incomplete or not useful.

The uptake by the public has been limited by concerns about privacy. It appeared that many government agencies were expecting to gain access to the data – this has now been limited. GPs are used to managing privacy – they are reluctant to allow their patients’ data to be uploaded to a system which appeared not to have the same privacy protections as their own systems.

GPs are also expected to manage and curate the data – however there is little provision by Government to acknowledge the cost and legal risk, or to reward them for doing so.

Early in the process there was the rollout of the Public Key Infrastructure (PKI) system. This was intended to allow electronic identification of health providers such as doctors, so that billing and other functions could be carried out online. Unfortunately it did not achieve widespread acceptance, because it was cumbersome to use and most of the legal risk was placed on providers through an onerous contract, while the vendors of the system were able to essentially absolve themselves of risk. The provider password was created by the vendor. As the system had the legal force of a signature, some providers were concerned that having a password held elsewhere by an unknown entity was an unacceptable risk to them.

At one point there was an attempt to agree on interface standards between systems. This was never successful, because of the commercial disincentives to standardization and the fact that the vendors of software systems were not adequately remunerated. Much of the money appeared to be spent on conferences, strategic plans and administration.

Summary of reasons 

In my view there are three main reasons why the majority of these large projects fail, either partially or completely.

(1) Complexity

These projects are genuinely difficult. They are large, enterprise level systems with many users, inputs and outputs. This means they are complex in both the mathematical and the lay sense.

Complex systems are difficult or impossible to analyse mathematically. The outcome of a design process is therefore difficult if not impossible to predict. Ideally a designer will adopt an “evolutionary” approach and replace parts of the system progressively while maintaining current systems for redundancy. But these large Health IT systems are designed with a “waterfall” approach – ie specifications are gathered and the system delivered as a “one-off” on a specific “go-live” date.

(2) Commercial reasons

These projects are commissioned by Governments but are exclusively delivered by large corporations, usually for eye-watering sums of money. The source code is invariably proprietary and secret. Interoperability between systems appears to be discouraged by vendors – there are strong commercial reasons for this – and in any case it seems impossible to achieve.

The tendering and contracting process is invariably “commercial in confidence” – this does not allow external scrutiny. The important decisions which will determine the direction of the project are made at the start of the process and in secret. In practice these systems are rarely designed from scratch – they are usually created by adapting an existing system held by the vendor. Inevitably this involves some compromise on the part of the customer as to which specifications are met.

(3) Government/Political factors 

These projects are politically risky. 

But the large corporation delivering the software can be used as a “Risk Partner” by the bureaucrats and politicians commissioning the project. 

There is an imperative to deliver a discrete project at a specific time – an evolutionary design process is not politically attractive. 

These large projects require specific skills to manage – Government decision making and management systems perform poorly at the best of times – they are generally not up to the task.

I will explore these issues further in a series of posts.  


(1) Spectacular IT Failures

(2) SA EPAS system

(3) MyHR

Has Medical Evidence been Captured by Commercial Interests?

(A Short History of Diabetes Research)

It is widely accepted that corporate interests have undue influence in political affairs.

Corporate lobbyists seem to have access to decision makers where ordinary people are shut out.

Politicians routinely move from politics to well paid jobs in the industries they were responsible for in Government.

Political parties receive large, often secret donations from corporate interests – presumably there is a quid pro quo in the way policy is framed.

Vested interests are a powerful driver in the development of policy.   

Has the same thing happened in Health?

Health is a large industry with more than $100B spent per year by Government and private payors.

Many senior clinicians receive benefits in the form of research grants or direct funding from Pharmaceutical companies and other interests involved in Health. 

There is at least a risk that these payments have distorted, and are still distorting, medical evidence in favour of the treatments being studied.

The History of Research in Diabetes 

To illustrate this issue, let us study the history of treatment in Type 2 Diabetes.

With the discovery of Insulin it became possible to treat Type 1 Diabetes. Before this, sufferers of the condition had a grim prognosis, with a life expectancy of only a few years. 

Type 1 Diabetes is characterized by Insulin deficiency.  Insulin is a hormone produced by the pancreas which keeps blood sugar within a tight range. Blood sugar is essential to metabolism, particularly for the brain.

Without insulin the blood sugar becomes much higher than normal and the patient may develop a life threatening condition called Diabetic Ketoacidosis.

 The DCCT study showed that good control of blood sugar with regular insulin injections extended lifespan to near normal. 

Poor control was associated with microvascular (small blood vessel) disease causing blindness and kidney failure, and macrovascular (large blood vessel) disease causing heart attack and stroke.

Type 2 Diabetes (T2DM) is similar to Type 1 in that the patient has high blood sugar and is subject to many of the same complications as Type 1 Diabetes sufferers, ie renal failure, blindness, heart attack and stroke.

However here the similarity ends. Type 2 Diabetes is characterized not by lack of insulin, but by high levels of insulin and insulin resistance by fat cells. It is strongly associated with Obesity and the Metabolic Syndrome – a combination of obesity, hypertension and high Cholesterol.

The findings of the DCCT study in Type 1 Diabetics have been extrapolated to Type 2 Diabetes – ie that tight sugar control prevents death and disability from the complications of Diabetes.

Is this extrapolation valid?

HBA1C as a measure of Blood Glucose

Another piece in this puzzle is the development of the Glycated Haemoglobin (HBA1C) test as a measure of average blood sugar. 

Blood sugar can be measured directly but it fluctuates constantly due to fasting, eating and the administration of insulin or other treatments.

HBA1C is a measure of the average blood sugar over the last 6 weeks or so. 

It has become a proxy for the effectiveness of Diabetes treatment, so much so that it is now used by Health administrators as a Key Performance Indicator (KPI) to measure the effectiveness of the Health service.
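For readers wondering what an HBA1C figure means in terms of day-to-day glucose readings, there is a widely used linear conversion from the ADAG study (Nathan et al., 2008): estimated average glucose in mg/dL ≈ 28.7 × HBA1C − 46.7. A small sketch (the function name is mine, not a standard one):

```python
# Convert HbA1c (%) to "estimated average glucose" using the ADAG study
# regression: eAG (mg/dL) = 28.7 * HbA1c - 46.7. Function name is illustrative.

def estimated_average_glucose_mmol(hba1c_percent: float) -> float:
    """Return estimated average glucose in mmol/L from HbA1c in %."""
    eag_mg_dl = 28.7 * hba1c_percent - 46.7
    return eag_mg_dl / 18.0         # convert mg/dL to mmol/L

# The common treatment target of HbA1c < 7.0% corresponds to an
# average glucose of roughly 8.6 mmol/L (about 154 mg/dL).
print(round(estimated_average_glucose_mmol(7.0), 1))   # 8.6
```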

Research in Type 2 Diabetes

In 1962, a US Senator was surprised to discover that the tablets his daughter was taking for Diabetes had no systematic research performed to show whether they had any benefit.

He initiated the first Randomized Controlled Trial (RCT) to study the effectiveness of the treatment (the UGDP study). This was a revolution in medical research at the time – patients are allocated randomly to either treatment or placebo groups, and neither the treating clinician nor the patient knows which treatment is being administered. This large, carefully conducted trial ran for some 8 years before it became clear that the treatments were actually harmful. The trial was stopped and shares in the company making the drug plummeted. The researchers endured a storm of criticism and legal action which was successfully defended, but lasted some 10 years.

Understandably, this had a chilling effect on research for a time, but in 1975 a large trial (UKPDS) was set up in the UK to attempt to answer the question of whether tight diabetic control in Type 2 Diabetes improved outcomes.

This was a large trial which ran for 10 years. It showed an improvement in mortality in patients receiving treatment for Hypertension. However it did not show a significant improvement in mortality with intensive glucose-lowering treatment, though there were improvements in Diabetes-related complications. A small subgroup – obese patients on Metformin (which causes weight loss) – did show a significant improvement in mortality. 

Several other large trials were conducted (ACCORD, ADVANCE) which also failed to show significant improvement in mortality with intensive Diabetes treatment. In fact the ACCORD trial found increased mortality with tight glucose control.

Several follow-up studies were conducted some years after the end of these trials. The UKPDS participants who had received intensive treatment appeared to show a significant improvement, but these findings were not replicated in the other trials.

In spite of these apparently equivocal findings, current guidelines advocate tight control in Type 2 Diabetes. HBA1C is regarded as an endpoint in treatment, with the accepted target being less than 7.0%.

These seminal studies were largely Government funded. However, Government has essentially withdrawn from health research since, and most drug research is now funded by Pharmaceutical companies.

For some 20 or more years after these studies, mortality was effectively ignored as an endpoint and HBA1C became a proxy for outcomes. If a drug lowered HBA1C it was regarded as effective.

This produced a whole series of drug classes – some have since been shown to be harmful such as Rosiglitazone and have disappeared.

In recent years new classes such as the Glutides are showing cardiovascular and mortality benefit, and these endpoints are again receiving more attention.

Is Obesity the real issue? 

The developed world is getting fatter.

In the last generation average weight has increased by some 7kg which is a dramatic increase.

There has also been a dramatic increase in the incidence of Type 2 Diabetes, which is strongly associated with Obesity. It now affects 3-5% of the population, with a further probable 3% undiagnosed.

Obesity shortens lifespan by up to 7 years, but significant weight loss in the obese can return lifespan to near normal. Moreover, a 15kg weight loss completely reverses Diabetes in those who achieve it (DiRECT study).

Yet weight loss and obesity have received little attention in the treatment of Diabetes – the focus has been on medication and HBA1C. The standard view is that once established, Diabetes is irreversible and relentless – the best that can be done is to prevent complications by keeping HBA1C low. Again, standard recommendations for treatment appear to be at odds with evidence: only those drugs that help with weight loss (Metformin, the Glutides) are now being shown to improve cardiovascular outcomes and death rates.

What about Insulin?

Insulin is still widely recommended as a treatment to lower HBA1C in T2DM when other measures fail.

It promotes the movement of Glucose from the blood into fat cells and its storage as fat. As such it is an anabolic hormone and promotes weight gain while lowering blood Glucose and HBA1C.

But does it make any difference to outcomes? 

The original UGDP study recruited patients for treatment with Insulin. The insulin arms of the study were continued after the oral hypoglycaemics were stopped – there was no improvement in cardiovascular outcomes or all-cause deaths.

A Google search with the question “Does Insulin improve outcomes in Type 2 Diabetes?” yielded a number of results – the top 3 are quoted below.

“Mortality and Other Important Diabetes-Related Outcomes With Insulin vs Other Antihyperglycemic Therapies in Type 2 Diabetes”

Craig J. Currie, Chris D. Poole, Marc Evans, John R. Peters, and Christopher Ll. Morgan

J Clin Endocrinol Metab. 2013 Feb; 98(2): 668–677.

The authors conclude that:

“In people with T2DM, exogenous insulin therapy was associated with an increased risk of diabetes-related complications, cancer, and all-cause mortality.”

“Insulin: Potential Negative Consequences of Early Routine Use in Patients With Type 2 Diabetes”

Harold E. Lebovitz, MD, FACE

Diabetes Care 2011 May; 34(Supplement 2): S225-S230.


“Starting insulin therapy early in the course of chronic treatment of patients with type 2 diabetes would imply that there are unique benefits to insulin treatment. As addressed above, there is little evidence to support such a view. Insulin treatment is neither durable in maintaining glycemic control nor is unique in preserving β-cells. Better clinical outcomes than those that occur with other antihyperglycemic regimens have not been shown. The downside of insulin therapy is the need to increase the dose and the regimen complexity with time, the increase in severe hypoglycemia, and the potential increase in mortality as well as the potential increased risk for specific cancers.”

“Insulin Therapy in People With Type 2 Diabetes: Opportunities and Challenges?”

Philip Home, Matthew Riddle, William T. Cefalu, Clifford J. Bailey, Reinhard G. Bretzel, Stefano del Prato, Derek Leroith, Guntram Schernthaner, Luc van Gaal and Itamar Raz

Diabetes Care 2014 Jun; 37(6): 1499-1508.

This article is less negative than the other two with regard to insulin, but identifies subgroups where it is not beneficial.

So it appears that research and guidelines/recommendations are in conflict in many cases.

Why is this so?

I would argue that vested interests are corrupting our medical evidence base and have been doing so for many years. Much of the evidence we rely on for our day to day clinical practice is tainted.  

We have ignored therapeutic approaches which would improve outcomes (ie weight loss) in favour of medication regimes which target a proxy measure (HBA1C). Moreover treatment which at best has no benefit (Insulin) is still widely recommended.

We need to acknowledge this explicitly and start to consider ways of reversing this trend. 


Summary of the DCCT/EDIC study

UGDP Study

Meinert CL, et al. A study of the effects of hypoglycemic agents on vascular complications in patients with adult-onset diabetes. II. Mortality results. Diabetes. 1970;19(suppl):789-830.


UKPDS study

UK Prospective Diabetes Study Group. Intensive blood-glucose control with sulphonylureas or insulin compared with conventional treatment and risk of complications in patients with type 2 diabetes. Lancet. 1998;352:837-853.


ADVANCE study

Patel A, et al. Intensive blood glucose control and vascular outcomes in patients with type 2 diabetes. N Engl J Med. 2008;358(24):2560–2572.

ACCORD study

Gerstein HC, et al. Effects of intensive glucose lowering in type 2 diabetes. N Engl J Med. 2008. 358(24):2545-2559.

The DiRECT Trial

Poor Administration – a Health Hazard?

A recent 4 Corners program has highlighted the risks faced by Remote clients with Rheumatic Heart Disease. (RHD) (ref)

RHD is a disease now all but unknown in “polite society”, ie the rest of Australia, but it remains common in Remote communities. The risk factors are well known – poor housing, with the resultant overcrowding and poor hygiene. As a result, skin sores are common; these are colonized or caused by Group A Streptococcus, which drives the high incidence of Rheumatic Fever. The housing issue has been described in many Government reports over many years but appears to be as immutable as ever.

The 4 Corners program describes the journey of four patients with RHD in a Queensland clinic – there appears to have been a delay in managing the deterioration of their condition.

Similar cases occur regularly in all Remote environments.

As ever it is tempting to blame the individual clinicians for the poor outcomes. But are there systemic issues in our health system that are contributing to the problem?

The Presentation

Deterioration in RHD is an unusual but not rare presentation. It may mimic other conditions such as Pneumonia with cough, shortness of breath and fever. The deterioration may be due to a further attack of Rheumatic Fever and Carditis (inflammation of the heart) causing heart valve damage, or perhaps bacterial infection of an implanted valve prosthesis. An unsophisticated Clinician may mistake this for a more common condition such as Chest Infection, particularly if the relevant information about the RHD is not prominent in the record.

The Clinician

Remote places are difficult to staff – most initial encounters occur between a client and a Remote Area Nurse (RAN) or Aboriginal Health Practitioner (AHP). Doctors are not resident in most Remote places but visit on an intermittent basis for planned consultations. Remote staff perform sterling work in the face of difficult conditions and provide a very competent emergency service. However, nonmedical staff do not have the sophisticated clinical training that doctors undergo, and are less able to manage an unusual or atypical presentation.

The “Anonymous Consultation”

Staff turnover is accepted as normal in Remote practice. Typical figures in Remote clinics are in the order of 150% turnover each year. It is estimated that reducing this turnover by half would save $32M per year in the NT (ref). In spite of these costs, policy makers seem content to accept the current situation – staff turnover is not a “KPI” and is not even actively measured, and there do not appear to be active programs to reduce it.

But what are the risks of rapid staff turnover to clinical standards and safety? Was this a relevant factor in these cases? In my own personal experience a large proportion of consults in Remote practice are “anonymous” – ie the client and clinician have not met before. My own personal surveys in various Remote locations have found consistently that for every 10 consultations, a client sees 6-7 different clinicians.

If the clinician assessing the case had met the client before and knew them previously, would they have seen a deterioration in their condition? Would they have ensured that they had received notice of appointments and that the client attended? Would the client have been able to communicate better? Would the clinician have taken more notice of their story?

In my view a relationship with the client is important in managing long term conditions, and the answer to these questions is yes. It also means that there is less reliance on electronic systems of recall and administration.

The Record and Data Visibility

Health is ever more complex, with new treatments and subspecialties appearing all the time. There is a push to standardize and systematize clinical interaction with Careplans and treatment protocols. In theory this will “commoditize” clinical encounters, allowing them to be conducted by any clinician with the relevant training. A personal relationship between client and clinician is at least in theory less important. Care is compartmentalized with different clinicians dealing with different issues. The role of the General Practitioner “Expert Generalist” has been devalued.

Of course, all this creates complexity in the medical record, with Careplans, referrals, appointments, treatment items and letters. The medical record is now electronic – if the system is not well designed, relevant information can be lost in the “noise” of all this process. In many systems interface design has not been a high priority – there are redundant dialogs and headings, and critical data may be hidden in poorly labelled secondary dialogs. The noise is further exacerbated by administrative information such as travel and appointment letters cluttering the record.

There may be further administrative problems, with relevant letters never reaching the record or appointments never being notified.

These systemic administrative failures are safety issues which should be part of Quality and Safety reviews, but are not addressed as such. Reviews focus on the performance of individual clinics and clinicians but rarely address systemic issues.


In general Remote Health Services provide a high quality response in Primary Care and Emergency Care under difficult conditions.

However, the combination of an unusual but serious presentation, an “anonymous” clinical encounter and poor administrative and record systems can be deadly.

Administrative and management performance should be part of Safety and Quality Review. In particular the massive staff turnover which is routine in Remote Health should be measured and addressed as a priority. The content and interface design of Electronic Medical Record Systems should also be regarded as a safety issue, with more effort being put into this part of their design.


4 Corners

Remote health workforce turnover and retention: what are the policy and practice priorities?

John Wakerman, John Humphreys, Deborah Russell, Steven Guthridge, Lisa Bourke, Terry Dunbar, Yuejen Zhao, Mark Ramjan, Lorna Murakami-Gold, Michael P Jones

Multimorbidity – the New Epidemic

Multimorbidity is a relatively new word in the clinical lexicon – what is it?

It is commonly defined as the presence of two or more chronic medical conditions in an individual. It can present challenges in care particularly with higher numbers of coexisting conditions and related polypharmacy.

These conditions may include recognized Chronic Disease problems such as Diabetes, Heart Disease, Chronic Airways Disease and Osteoarthritis, but also:

• mental health problems

• ongoing conditions such as learning disability

• symptom complexes such as frailty or chronic pain

• sensory impairment such as sight or hearing loss

• alcohol and substance misuse.

How common is it?

A 2008–2009 BEACH sub-study that measured the prevalence of multiple chronic conditions at GP consultations found that of the 8707 patients sampled from 290 GPs, approximately half (47.4%, 95% CI: 45.2–49.6) had two or more chronic conditions. Figure 1 shows that the proportion of patients with multiple chronic conditions at encounters rises significantly with age; about 90% of patients aged 80 years or more had two or more chronic conditions, while nearly 30% had seven or more. (Ref 1)

Figure 1. Proportion of patients with different numbers of multiple chronic conditions at GP encounters by patient age

This suggests that we should reconsider our current health care system’s focus on single diseases.
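As a side note, the quoted confidence interval is wider than a naive calculation would give. A small sketch makes the point; the interpretation in terms of cluster sampling is my own reading of the study design, not a claim from the paper:

```python
# A naive 95% binomial confidence interval around the BEACH figure above:
# 47.4% of n = 8707 patients with two or more chronic conditions.
import math

p, n = 0.474, 8707
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"{100 * lo:.1f}-{100 * hi:.1f}%")   # 46.4-48.4%

# The published interval (45.2-49.6%) is wider. That is what you would
# expect when patients are sampled in clusters (about 30 per GP across
# 290 GPs): clustering inflates variance relative to simple random sampling.
```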

“The Single Condition Model” in medicine

Most research is designed to show the effect of interventions in single conditions. Those with multiple conditions are excluded to avoid confounding the data. Guidelines are designed in general to guide management in single conditions. But if we follow these guidelines in multimorbid clients and sum all the interventions together, we end up with a significant “treatment burden”.

As an example – consider the following situation:

Mrs F, 79 years old, with multiple conditions including:

• osteoporosis

• osteoarthritis

• diabetes type II

• hypertension

If evidence based “Best Practice” treatment were followed, she would require:

• 12 different drugs in 19 dosages at five points in a day

• 14 different non-pharmacological interventions (rest, exercise, shoes, avoiding exposure to allergens)

• nutrition: reduce intake of salt, potassium, cholesterol, Magnesium, Calcium, calories, alcohol

• at least 5 doctor visits per year.

These multiple interventions are complex, difficult for both client and providers to deliver, are expensive and carry the risk of interactions which may cause harm.

Due to the “single condition” model of most research, we have little or no evidence that the interventions will be beneficial in this specific situation.

Multimorbidity and Clinical Reasoning

The study of Clinical Reasoning attempts to analyse the thought processes of a clinician when dealing with clinical problems. The “single issue” presentation is well studied – the potential traps and cognitive biases are well understood, and one Clinical Reasoning framework was described by Murtagh (Ref 2). But the research quoted above suggests that a single issue “diagnostic” presentation is increasingly uncommon. Many presentations involve managing multiple known problems and balancing priorities. This appears to be a “higher order” task which has generally been left to sophisticated clinicians. The General Practitioner is uniquely qualified for this role: a broad medical knowledge and a long term relationship with the client, combined with the relevant legal authority, make him/her an “Expert Generalist”.

But there appears to be little relevant research – the GP is making these decisions intuitively. Should we develop a formal model of Clinical Reasoning in this space?

The Rise of Machine Driven Care

In recent years there has been a view among many that treating long term conditions such as Hypertension, Diabetes and raised Cholesterol “to target” results in reduced Cardiovascular risk.

If a programme of interventions such as measuring blood pressure, testing blood sugar and measuring weight is delivered on a regular basis, outcomes are improved. But there is “Therapeutic Inertia” which must be overcome – the measurements must be “treated to target” regardless of side effects or other reasons for not doing so. Doctors in particular have been regarded as responsible for “Therapeutic Inertia”.

This idea is attractive because it can be delivered by less sophisticated clinicians. Careplans are devised with schedules of interventions – if they are followed, there will be fewer emergency attendances. There is reasonably good evidence for this approach. (Ref 3)

But what about the Multimorbid clients? Can we devise Careplans to suit them? If we sum together all the interventions suggested by “Best Practice”, we create a complex matrix which in practice often is not delivered. What’s more, every client seems to have a different combination of Chronic Problems – it is impossible to devise “off the shelf” careplans to fit all. The electronic record systems that create these Careplans are not sophisticated enough to allow easy editing or to devise individualized Careplans.

Again it falls to the “Expert Generalist” GP to rationalize these complex plans and to reduce the medication and intervention burden that seems to build up like barnacles encrusting an old boat.

In my view we need to recognize the limitations of our “single issue” approach, develop electronic systems to manage multiple problems in a rational way and study the impacts of complexity and “noise” on safety and outcomes.

We should also develop models of Clinical Reasoning for this mode of practice.


  1. Australian Family Physician, Vol. 42, No. 12, December 2013, p. 845.
  2. “A Safe Diagnostic Model”, Ch. 9, John Murtagh’s General Practice.
  3. Thomas SL, Zhao Y, Guthridge SL, Wakerman J. The cost-effectiveness of primary care for Indigenous Australians with diabetes living in remote Northern Territory communities. Med J Aust 2014; 200(11): 658–662. doi: 10.5694/mja13.11316.

The Containment of Anxiety in Remote Health

“Government policy and poor performance by bureaucracy are a significant cause of ‘The Gap’ in First Nations Health and life expectancy.”

This is clearly a contentious statement.

However, many reports have identified deficiencies in policy and its delivery which have remained unaddressed over many years.

In Remote Health, many clinics appear to be in a state of chaos – staff turnover is high, morale is low and community engagement is poor. Service delivery is not measured in any meaningful way, but almost certainly it could be improved. Remote communities can be a difficult environment for visitors – the journey of one such visitor is described in a narrative by Mahood (1).

This paper describes a period in the history of a fictional remote health facility in a similar narrative format. While the narrative is fictional, it will resonate with many in the remote health workforce. Similar events occur on a regular basis. The paper puts forward an hypothesis based on social psychology theory to explain these events.   

A Year in the Life of a Remote Clinic

Every so often the planets align and a clinic functions well for a time.

A hard-working and effective manager is employed, often after a period of crisis in the clinic. While she has no formal training in management, she is a veteran of many years in remote practice, capable of dealing with any situation and able to turn her hand to whatever task is required. She has endured long, lonely nights on call, facing difficult situations of distress, violence and medical emergency. She knows the issues which face the service and wants to improve things for the local people. She has watched their struggles for many years and now has a deep respect for them. She seems to have a natural affinity for people and the skills required to make them work as a team.

She engages local staff and the clinic begins to connect with the community again. Staff now stay for longer than they used to. Local people appreciate someone who knows their name and their family connections. Mothers bring their babies to a person they know and trust. Before, they stayed away and were blamed for avoiding their obligations to conform with an impersonal system of measurements and injections, administered by an ever-changing parade of unfamiliar faces.

A workplace culture slowly develops where hard work and a cooperative clinic environment are valued – staff treat each other and their clients with respect and courtesy. The word gets around amongst the remote health workforce that this clinic is the place to work. The manager has, among other qualities, the ability to choose the right staff to maintain the good working environment she has created.

One of the qualities she is looking for is the ability to think independently and be self-motivated. She is able to delegate tasks knowing that staff will perform them without supervision. As a good manager she knows that she must delegate ruthlessly so that she can focus on higher order tasks.

Still the workload is overwhelming – CQI and KPIs, ordering systems that seem to require her to communicate with an endless series of functionaries to get necessary supplies, employment processes that take months to produce contracts, constant pay office errors that need her intervention, the vagaries and disputes of the travel subsidy system, the human foibles of staff – it seems endless.

She tries to engage an administrator to handle some of the paperwork. But departmental policy does not allow accommodation for these employees. In these communities housing for visiting workers is scarce and jealously guarded by the various agencies.  

Local indigenous staff lack the computer literacy and authority to deal with the whitefella’s bureaucracy. Family obligations may prevent them from dealing even-handedly with travel and escort disputes, but they are able to provide a connection and knowledge of the local community. If the manager is fortunate, she finds the partner of a worker housed by another agency to deal with the intricacies of the whitefella’s system, and with local staff she has the best of both worlds.

She maintains the workload for a while – weekends and evenings are spent catching up. She manages the many small crises in a remote place at the cost of her own health and sanity.

She solves the many small problems that staff have delegated up to her and then gone home to relax, secure in the knowledge that they have done their duty.  

She knows the ways of the organization she works for – she is skilled at keeping her higher managers satisfied and away from where they can do harm.

She cannot report these minor crises to them and hope for support. From bitter experience, she knows that their response will be punitive and destructive, or will create ever more paperwork to manage. She knows that some of the things she has done to create the functional workplace that is the clinic are not entirely in line with policy, even if they have been “unofficially” sanctioned by her managers. Sometimes she has ignored directives from above in order to get the work done – there did not seem to be any penalty.

But the planets do not remain aligned forever. There are many ways this fragile island can be destroyed.

Perhaps she gets tired, or sick, or just wants to go back to talking with the clients that are the “real” work of the organization. After all, those staff who are junior to her are actually earning more money than she is. They get paid for being on call while she fills the gaps gratis as part of her award. They seem to be able to come and go as they please. As a manager she has to do whatever is necessary, with no extra pay for all those weekends and nights.

Perhaps her administrator’s partner moves on, transferred by the agency that he works for. Suddenly she finds the endless demands for travel escorts become her issue. It seems that every enquiry at the front desk, every phone call and every complaint that was previously filtered and managed by her office administrator now has to be instantly referred to her – no-one else seems willing or able to intervene.

While she tries to have a hand in choosing staff, she is overruled in the name of economy – “we can’t use that agency – they are too expensive”.

The person who arrives is not someone she would have chosen. Because the clinic is larger than average, town has decided that they can send staff inexperienced in remote health for training. They will need supervision for a period and will, of course, not be able to participate in the after hours roster until they are able to make independent decisions about patient treatment. Other staff resent the newcomer for her easy life, her ability to go home and relax at night, her inability to perform basic tasks such as venepuncture or IV cannulation, and her penchant for referring patients to others without solving the issues or even exploring the possibility of doing so.

The tyrant that is the after hours roster dominates the clinic. The mantra from management in town is that programs such as proactive treatment of Chronic Disease are now “core business”. If only the clinic staff could be diligent enough to see clients in a planned manner, they would not need to be seen for emergencies. Still people seem to turn up until the small hours of the morning with mind-numbing regularity – children with fever, old ladies struggling for breath, people with wounds from fights and family violence, psychotic patients, survivors of suicide attempts – the pressure for decisions is unrelenting. And this after a busy day of work – the person on call feels their mind turning to water; perhaps they sense that their decision making is unsafe, as if they had drunk a bottle of wine. They just wish the clients would go away and let them sleep. No pilot or other emergency services worker would be asked to do such shifts.

Eventually this situation can no longer be sustained. With the number on call reduced by the junior staff member, other staff decide to leave or take extended holidays. Perhaps someone makes a mistake – a child with fever had a serious infection, not just the flu. Perhaps a serious injury was missed. It falls to the manager to make up the deficits and explain the mistakes to family and administrators.

Or perhaps the attention of town management turns to the clinic – “we have a problem at XXX”. Of course the fault is all at the clinic end – it seems that there are no KPIs to measure pay office errors, or time to delivery of equipment orders, or staff turnover rates, or the number of local indigenous staff employed.

There has been an unexpected death in the community that received unwelcome publicity, exposing the higher management to the glare of media attention.

A man has collapsed – bystanders called 000 and there was an interminable delay before the clinic ambulance arrived. Unlike many communities, it appears that they have political allies in town. The issue has reached the ear of the minister and he is seeking answers from his department. Media have been alerted – they are pressing the minister for details. The manager knows that nothing galvanizes her superiors like a “ministerial”. There have been written reports and a teleconference to discuss the issues. But it seems that the powers that be are not satisfied. A group of senior managers and their support staff arrive by chartered aircraft from town. Anyway, they can use the opportunity to see how the clinic is running and canvass other issues. They are ushered into the manager’s office and the door is closed.

Clinic business is suspended for the day – only “real emergencies” will be seen. One by one, those staff that were involved in the incident enter the room, surrounded by a ring of hostile faces. At the end of the inquisition they emerge looking chastened.

Like a dog that has been disciplined by its master, the senior manager must bite someone more junior in the hierarchy as quickly as possible. He does not see it as his duty to shield his staff from the heat of the media attention and look for constructive solutions to any issues that might exist.

The staff member who first responded to the emergency feels exhausted and humiliated by the experience. The inquisitors did not seem to understand that such emergencies are always fraught affairs, with the outcome determined by harsh statistics – only a tiny fraction of people who have an out of hospital cardiac arrest will survive. The staff member had only been in the community for a few weeks – there were delays in finding the place of the collapse, delays in calling his second on call, and vital equipment had been left behind. Still, he should have known all these things – did he have any orientation? The ring of eyes now turns to the manager. Among the thousand details that a remote clinician must remember before being able to function in the workplace was the orientation relating to emergencies – did she use the orientation manual? Why did he not know these things?

Soon it is time for the senior managers to fly back to town – they must be there before nightfall. The other issues facing the clinic have been forgotten – they will have to wait for another day. There is little support for the staff member involved – he is left to work through the events of the crisis in his mind, analyzing them over and over. There is no review of the processes and systems which led to the failures which might have made the difference – no thought of the effects of staff turnover, of the impossibility of orientation in the short time allowed. The manager consoles him as best she can – she has seen this scenario before. The staff member takes the next day off work and then decides to cut short his contract – after all, he only had a week to go.

Management are conscious that staff turnover is an issue – it is expensive, and has effects on service delivery and morale. They have appointed a new nurse to a long term position to replace an experienced RAN who has just left the community. He had come to the end of his contract. He was well regarded – competent, likeable and with a good connection with those hard to reach young men. He had run the men’s clinic and managed the Mental Health patients, ensuring that they received their regular medication. Without this they were likely to present to the clinic in the middle of the night in a police van, distressed by their demon voices and surrounded by a crowd of anxious family.

He had expressed a wish to stay – why had he not simply been re-employed?

Perhaps it was because he had been sharply critical of management at times, even though he never deviated from official policy in dealing with clients. His partner had also worked in the community and been well regarded. She had published academic work with conclusions that ran counter to current department policy. Apparently he had wanted some variations from the award – more time off to see family. This could not be accommodated according to higher management – it might set a dangerous precedent. So he had simply been allowed to leave – he could not afford to have no work arranged. The manager’s entreaties had fallen on deaf ears – no-one in town had seen fit to negotiate a special arrangement with an effective staff member to maintain continuity. The manager now had to ensure that the Mental Health clients were managed and she had another gap in the after hours roster.

The new staff member has limited experience in Remote work but she has undergone some weeks of orientation in town. Still she would not be able to participate as first on call in the roster for at least a month. Within days of her arrival her furniture is delivered – it seems she intends to make the community her home. The manager was not involved in her recruitment but she comes with glowing references and is apparently very capable.

But soon it seems there are problems – there is interpersonal friction between various staff and the new recruit. The new staff member is well aware of her rights and quite prepared to speak up to enforce them. She is unwilling to deal with children as she has limited experience in this field. Within a few weeks she is taking time off due to various ailments.

The manager moves quickly to rectify the situation – clearly this person is not suitable for remote work. A small team such as hers must have all members working effectively. She confronts the new recruit and voices her concerns. The reaction is predictable – the new recruit feels unfairly treated and threatens to sue. A standoff ensues – the department is involved in mediation between the parties. It is rumoured that she has been involved in a legal fight with a previous employer. Those in town who made the appointment don’t see the problem – the clinic is fully staffed, after all. The new recruit is sent away for several weeks of training to improve her skills.

The probationary period in her contract passes without action – it seems she will be here for the long term after all.

The manager has decided she can no longer sustain this life and will move on at the end of her contract. She will join the army of temporary health staff paid by agencies to fill the many gaps in the system. She will be able to work seeing clients only – no more arguments with travel, no more headaches over housing, no more dealing with complaints. She won’t be in any place long enough to become embroiled in the local politics. Her pay will be managed by the agency – no more arguments with pay office over on-call entitlements.

She was under no illusions that she would change the world when she arrived. Still, it is with sadness that she attends the ceremony put on for her by the local people – they dance for her as an honoured member of their community. She reflects how little has changed in spite of her years of work.

She has given three months notice of her intention to quit. But her position is not advertised until some weeks after she has gone. It will take several months to work through the steps involved in employing a replacement. A series of temporary managers are engaged to run the clinic until a more permanent replacement can be found. The merry-go-round of faces begins again. The competent staff that were coming back regularly decide that they have better options elsewhere. Local people are resigned to the ups and downs of government services – they have seen it all before.

Eventually a new permanent manager is employed.

She appears much more closely aligned with town than the last one – she regularly communicates with them via phone and email on all sorts of questions. She does not approve of many of the arrangements that the previous manager made to run the clinic. Local staff are not entitled to housing, vehicles are being used for non-clinic purposes, there are too many on call mobile phones. There is disquiet with some of the changes. Medications will no longer be delivered to clients, log books must be kept for vehicle use, detailed job descriptions will be drawn up, people will not be seen after hours unless their condition is serious. The planning meeting that was previously held each morning with a cup of coffee is no longer required – the manager will delegate tasks. The recall list generated by the computer record system, which was checked through each morning, is now largely aspirational and on some days is ignored altogether.

Some local staff have not been seen for weeks – the new manager is not concerned – they are not central to the running of the clinic in any case. 

Strangely enough the clinic numbers seem to be down – in particular, young men and mothers with children appear not to be attending as they were. But the after hours roster is as busy as ever. In spite of the edict about “big sickness” on a sign at the shop and on the door of the clinic, many of these attendances seem to be for conditions that could have been dealt with during the day. When one nurse asks why, she is told that they had come earlier but the wait was too long. She has heard all this before.

And so the clinic enters another phase of its history.

The Containment of Anxiety

In the narrative above, it is an individual manager who creates a functioning clinic against all the odds. It seems at times that the hierarchy above her conspires to destroy what she has created, rather than supporting and recognizing her endeavours.

Isabel Menzies-Lyth was a social psychologist who wrote several papers on the psychology of large organizations. Her various papers were collected in a volume, “The Containment of Anxiety in Institutions”. Perhaps the best known is entitled “Social Systems as a Defence against Anxiety”. (2)

Her original research describes the situation of nurse trainees in a large London hospital in the 1960s. Nursing students were dropping out of training, often after several years, and many were promising students; morale was poor in those remaining. Menzies-Lyth was engaged by hospital management to find out why this was happening. She conducted an extensive series of interviews and concluded that:

The patient journey is distressing to observe – they suffer pain, disability and even death. Nursing tasks can be unpleasant or even repulsive. Relationships formed during a hospital stay are lost as patients are discharged. Strong primitive and often distressing emotions are aroused in staff. 

An organizational culture developed to cope with this – it involved collusion, often unconscious, between staff members in creating systems and strategies that gave some immediate relief from these distressing emotions. However, these strategies were often dysfunctional and damaging to the service and its delivery of care in the longer term.

Some of these strategies were:

(1) Depersonalize relationships by constant rotation. Avoid relationships with individual patients – “Everyone looking after all the patients”. Patients were seen as conditions or numbers.

(2) Eliminate decision making by ritual task performance. Decisions in a clinical context always involve some uncertainty, and anxiety as a result. If this decision making can be replaced by a ritualized task, anxiety can be reduced.

(3) Splitting the patient into parts. There is a strong tendency to break the care of an individual into components to be performed by various staff.

(4) Projections – juniors are seen as unreliable and untrustworthy, so all tasks must be closely managed and detailed protocols followed without question. Superiors are invested with qualities such as being all-knowing and reliable.

(5) The decision making process is complex, cumbersome and diffuse, with many checks and counterchecks. The end result is that it is difficult to identify who is responsible for a decision, and individual responsibility is reduced.

(6) Rotation is carried into higher levels – no-one acts in a position for long; they are often seconded to other positions.

When this culture occurs in an organization, it is immutable and largely unconscious – anyone who attempts to challenge it is punished. The end result is that there is no discretion for juniors in any tasks. The Organization is unresponsive to client needs and other problems due to the poorly functioning decision making process. 

Those who want to exercise discretion are dissatisfied – these tend to be the more capable. They are not rewarded for their initiative – indeed they may be punished. Hence their only option is to leave. Thus the organization is gradually filled by those who remain and who will tolerate this environment.

Does this model explain the seemingly intractable problems of poorly functioning management in health, and perhaps in government bureaucracy in general?

The Health Workforce

Many of the staff in Health bureaucracy have a background in Nursing and many have experience in Remote work.

Remote Health clinics themselves are also run by nurses – doctors work in some of the larger communities, but have only a peripheral role and little authority over the day to day running of a clinic.

The local indigenous workforce has little control over clinic management in Government run clinics. In community controlled organizations a local health board has more control which may mitigate some of the issues discussed here.

The level of local indigenous workforce engagement varies from little or none to being an effective part of the health workforce, depending on local conditions. In times past, clinic services were delivered entirely by local health workers in some places. However due to the complexity of modern health, community expectations, legal issues and government policy, this is now rare.

The Effects of Rapid Staff Turnover

In remote health clinics, frequent staff rotation appears to be tolerated even though there is a stated policy to reduce it. At times it reaches extraordinary levels – some clinics may undergo virtually a complete staff turnover every few months.

This has various effects, mostly deleterious.

It is impossible to provide detailed orientation for a rapidly changing workforce in a complex and challenging environment.

Administrative systems such as pharmacy, stock control, the organization of specialist visits and travel are usually run by nursing staff. As a result of the frequent turnover and difficulties in orientation, these function poorly – this has a significant impact on service delivery.

Clinical effectiveness and indeed safety depends on a good relationship with clients. This is virtually impossible with rapid staff turnover.

In particular, a good working relationship and knowledge of clients is critical to effective programs involving continuity such as the management of chronic disease, child health, and antenatal care. There is good evidence that effective management of these programs reduces emergency attendances, evacuations and hospital admissions.

The rapid turnover of staff is likely to be more expensive. Agency margins are added to the cost of travel and relocation.

Why is this situation tolerated in spite of its adverse effects, increased cost and an express policy to the contrary? Personnel management is essentially a centralized bureaucratic task which is almost never under the control of individual clinics. What is the driver of this in Government organizations?

Is the constant rotation a hangover from hospital nursing training?

Health staffing has always been characterized by constant rotation and change, at least at a junior level. Nurse training and junior medical officer positions in hospitals involve rotations between various placements. These are typically for 3 to 6 months only in each position. 

Poor management performance is tolerable for short term agency staff, as long as they get paid and can go away to recover. Agencies absorb the cashflow penalty of errors and disputes over pay and conditions. Nurses working for agencies cite the constant errors by pay office as one of the factors in their decision to leave full-time employment. This poor performance by bureaucracy is costing the government dearly in increased casual rates.

Because of constant change and turnover, the organization is afflicted by a sort of “corporate dementia” as the knowledge of staff is constantly lost. Reports into remote health practice seem to repeat the same issues. Policies seem to go in cycles.

This corporate dementia worsens the already poor performance of the organization in policy development and decision making. It probably also affects clinical safety. A new staff member with no knowledge of a client is reliant on the Electronic Health Record. But these records are filled with a mountain of irrelevant “noise” and have poorly formatted user interfaces. Important clinical issues may not be detected by a staff member struggling with the complexity of their new role. 

Just as clients are not consulted, front line staff are rarely involved in policy development. Rather than fixing the problem at source, solutions to issues often involve short term or labour intensive fixes with further impositions on already overworked frontline clinical staff. This in turn further impacts service quality.    

Local people do not have much influence.

In spite of the millions spent on “consultation” there are few if any systematic surveys of the local people’s views on the delivery of health services, much less any action based on them.

People in remote communities have had relatively little political representation, though this is possibly changing in recent elections. It is unusual for potential staff to be vetted by community members.

Staff selection for long term positions appears to be poor – is there an unconscious bias towards those who will not challenge the culture?

Perhaps more importantly, staff retention appears not to be addressed in any meaningful way. In private enterprise, organizations will always prefer to retain current staff, rather than recruit replacements. They know this is less expensive and results in better company performance even though they may have to pay a premium to retain good staff. In Remote Health, long term employees report that there is no acknowledgement of their service. Capable people are not actively retained. Those who express dissent with current policy may be actively removed as they pose a threat to organizational culture.

Staff turnover appears to be the dominant issue affecting Quality and Safety in the delivery of Remote Health Services. But this does not apply in the part of the organization managing the process – these staff are relatively stable. Government Bureaucracy is not exposed to the rigour of private enterprise and does not measure performance by the generation of profit. The managing workforce is relatively protected by Public Service employment awards.


If Menzies-Lyth’s thesis is true, what are staff in the organization anxious about? Perhaps this organizational culture develops in any large protected organization which is not subject to regular scrutiny. In any case it appears that service delivery quality is not something that drives management decisions. Health systems are in crisis, particularly in Rural and Remote practice. If this is to be addressed, the culture and quality of management must improve.


(1) “Kartiya are like Toyotas” (kartiya_are_like%20_toyotas.pdf)

(2) Isabel E. P. Menzies, “A Case Study in the Functioning of Social Systems as a Defence against Anxiety”