Software Design in Health


Software Design is an arcane subject, a long way from the day-to-day practice of Health Practitioners. Yet we all use computer systems – indeed they are central to our day-to-day practice. Good Health IT design is important to efficiency, work satisfaction and Clinical Safety, yet it appears not to be considered an important factor in the commissioning of Health IT systems.

In this article I explore some relevant issues.

“Technical Debt” (Ref 1)

The design of a new system is a significant investment of money and resources. There is pressure to deliver on time and on budget. During the process, compromises will be made on design and system functionality. Sometimes problems that should be solved now are deferred in favour of a “band-aid” solution. This incurs a “Technical Debt” which may have to be repaid later with further design work.

Deferred Data Modelling is a common source of “technical debt”. Modelling matters most when different systems need to communicate; it is less critical within a single system. Data sent across the interface between systems must be carefully structured, or serious problems can arise when the formats are not compatible. For example, a numerical quantity can be specified in various ways – signed integer, unsigned integer or floating point decimal, to name a few. In programming, these representations have different lengths and are stored and manipulated in different ways. If this is not recognized and allowed for in the design, errors will result.
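
A minimal sketch in Python (with a hypothetical field name) of how such a representation mismatch can surface as an error at an interface:

```python
# Hypothetical payload: the sending system transmits a temperature as a decimal string.
payload = {"temperature": "36.7"}

try:
    value = int(payload["temperature"])   # the receiver assumes a whole number
except ValueError as err:
    # int("36.7") fails - the two formats were never reconciled at design time
    print(f"Interface error: {err}")

# Parsing as a floating point value succeeds, but only if the interface specifies it
value = float(payload["temperature"])
print(value)  # 36.7
```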

A special program or script may have to be devised to parse and convert data when it crosses from one system to another. If the work of data modelling and interface standard specification has been deferred at the design phase, “Technical Debt” has been incurred which must be “repaid” at a later date (Ref 1). There does not seem to be much interest from Vendors or Buyers in formal Data Modelling.
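
For illustration only, a sketch of such a conversion script, assuming hypothetical field names, where one system sends a blood pressure reading as a single string and the other stores the two values separately:

```python
# Hypothetical example: System A sends blood pressure as "120/80";
# System B expects separate integer fields.

def convert_bp(record_a: dict) -> dict:
    """Parse System A's combined reading into System B's structure."""
    systolic, diastolic = record_a["bp"].split("/")
    return {"systolic_mmHg": int(systolic), "diastolic_mmHg": int(diastolic)}

print(convert_bp({"bp": "120/80"}))
# {'systolic_mmHg': 120, 'diastolic_mmHg': 80}
```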

Data Modelling – why?

Any significant software system must store data to be useful. Health systems require large quantities of often complex data which must be stored, recalled and manipulated. A particular data entity can be represented in various ways. For example, a pulse rate is likely to lie within a certain range: it will not be negative and is not measured in fractions. It could therefore be represented by an unsigned integer, which takes up much less memory than a floating point number. On the other hand, a vaccine temperature measurement must be recorded in decimal fractions of a degree and might be negative, so a floating point number is required. Additional “metadata” might be needed to interpret the figure, such as the means of measurement. Suitable ranges could also be included in the data definition, allowing decision support systems to generate alerts when a measurement falls outside the normal range.
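
As a rough sketch (plain Python, not openEHR syntax, with illustrative names and assumed ranges), the two examples above might be modelled like this:

```python
from dataclasses import dataclass

@dataclass
class PulseRate:
    beats_per_minute: int            # whole, non-negative number

@dataclass
class VaccineTemperature:
    degrees_celsius: float           # decimal fractions, may be negative
    measured_with: str               # metadata: how the reading was taken

# Illustrative alert ranges included in the data definition (assumed limits)
PULSE_RANGE = (40, 180)
VACCINE_RANGE = (2.0, 8.0)

def out_of_range(value, limits) -> bool:
    low, high = limits
    return not (low <= value <= high)

pulse = PulseRate(beats_per_minute=150)
print(out_of_range(pulse.beats_per_minute, PULSE_RANGE))   # False

reading = VaccineTemperature(degrees_celsius=9.5, measured_with="data logger")
if out_of_range(reading.degrees_celsius, VACCINE_RANGE):
    print("Alert: vaccine temperature outside the defined range")
```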

Data modelling is the process of examining the data entities to be stored in a system database and specifying such constraints and data types. It makes the data computable, and therefore more useful for search and decision support. It also allows data standards to be specified at the interfaces between systems and may remove the need for “middleware” to connect them. The internet is a good example: agreed standards for data and interfaces allow many disparate systems to communicate.

OpenEHR (Ref 2) is an example of a data modelling system which is gaining increasing acceptance throughout the world.

Standards and Interoperability 

– “the good thing about Standards is that there are so many to choose from”

There are strong commercial reasons why standardization is so hard to achieve. In everything from razors to printers there is a plethora of shapes, sizes and standards. Of course, when there is a need to link components together, a standard interface matters. A razor blade will not fit a handle unless the relevant components are the correct shape and size. Reducing all razor blades to a common standard allows competition and invariably results in a reduction in price. Thus there is a strong commercial incentive for manufacturers to resist standardization, particularly if they hold market dominance. Microsoft’s Windows made Bill Gates one of the richest men in the world. Apple is one of the largest corporations in the world on the back of its proprietary operating systems and products. This intellectual property is jealously guarded. There are many examples where corporations have used their market power and closed intellectual property to maintain their market position and increase profits. Microsoft has been fined repeatedly by the EU for anticompetitive behaviour. One of the largest corporations in US health IT (Epic Systems) had to be forced by the Obama administration to open its systems to interface with outside systems (Ref 3).

This commercial incentive to resist standardization and interoperability appears not to be acknowledged as an issue when governments procure Health IT systems. Just as the razor manufacturer can charge a premium for blades that fit their handle, health IT vendors appear able to charge eye-watering figures for programs which are proprietary and do not interface with other systems. The user must pay an ongoing licence fee to continue to use the software. Moreover, because the source code is proprietary, no one else can work on it for upgrades and bug fixes, so the vendor can charge a premium for ongoing support. Changing to another system is expensive. Data will have to be migrated, and the vendor will charge to access and modify it. Because the data is not standardized and modelled, software will have to be written to migrate it to another system. Users will have to be retrained. All these costs mean that the customer is effectively “locked in” to an expensive system, often of mediocre quality.

The NHS upgrade of 2002-11 was probably one of the most spectacular and expensive Health IT failures of all time (see “Why is Health IT so hard?”). At the time there was much talk of Open Source systems and Interoperability. Yet more than 10 years later the NHS IT scene is still dominated by large commercial corporations, and Government is still paying large amounts for disparate systems. The Nirvana of Interoperability seems as far away as ever.

Software Quality and why it matters

Large software systems may contain millions of lines of source code. They are constantly modified and extended, often over a long period, and many programmers may work on the system over time.

A high quality system has fewer bugs, is less likely to fail catastrophically and, perhaps most importantly, can be extended and modified.

This is important because there will inevitably be software errors (“bugs”) in a large system. The density of bugs increases with software complexity and can even be estimated quantitatively. Many are “edge cases” or of nuisance value, but some are critical and must be fixed.

The user will almost certainly want to make changes to the software as business needs evolve or as issues never addressed at the initial deployment become apparent. Some of these issues arise as a result of “Technical Debt” (see above). Others arise from the “Complex” nature of the system and, as a result, could not easily have been predicted (see “Complexity and Software Design”).

Some software is regarded as “legacy” (Ref 4) – essentially, it cannot be modified without unpredictable failures; it has become “frozen”. While the program may function well enough, the quality of the underlying source code is such that making changes and fixing bugs is difficult if not impossible. This happens when the initial design is poor and changes are not carefully curated and managed over time. The code is not well abstracted into components, it is not well documented, interfaces are not discrete and well described, variables are “hard-coded” in different locations, and the codebase is not testable. A well designed program is generally separated into database, business logic and user interface “layers”. The interfaces between these layers are well described and “discrete”. It should be possible to substitute a different implementation in any layer and have the whole system continue to function.
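
A minimal sketch of this layering, using invented names: the business logic depends only on an abstract interface, so the storage layer can be swapped without rewriting the rest.

```python
from abc import ABC, abstractmethod

class PatientStore(ABC):                     # database layer interface
    @abstractmethod
    def find_allergies(self, patient_id: str) -> list:
        ...

class InMemoryStore(PatientStore):           # one interchangeable implementation
    def __init__(self, data: dict):
        self._data = data

    def find_allergies(self, patient_id: str) -> list:
        return self._data.get(patient_id, [])

def safe_to_prescribe(store: PatientStore, patient_id: str, drug: str) -> bool:
    """Business logic layer: knows nothing about how the data is stored."""
    return drug not in store.find_allergies(patient_id)

store = InMemoryStore({"patient-123": ["penicillin"]})
print(safe_to_prescribe(store, "patient-123", "penicillin"))  # False
```

Replacing InMemoryStore with a real database-backed implementation would leave safe_to_prescribe untouched.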

A modern software development suite will automatically test code after changes are made and will use a version control system to manage change. One approach to software design is to write a test for a section of code first and then write the code to meet the test (test-driven development).
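
A minimal test-first sketch (an invented example, not drawn from any particular system): the tests are written first and the function is then written to make them pass.

```python
import unittest

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index, rounded to one decimal place."""
    return round(weight_kg / (height_m ** 2), 1)

class TestBmi(unittest.TestCase):
    def test_typical_adult(self):
        self.assertEqual(bmi(70, 1.75), 22.9)

    def test_zero_height_rejected(self):
        with self.assertRaises(ZeroDivisionError):
            bmi(70, 0)

if __name__ == "__main__":
    unittest.main()
```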

Another characteristic of “legacy” code is that it is usually proprietary. It may even use its own proprietary language and coding framework. It takes time and resources to develop the software components we are all used to, such as typical window behaviour, “drag and drop” and the “widgets” that form part of Graphical User Interfaces. If the code is proprietary, it may not have had the benefit of the work of many programmers over time that open languages and frameworks have had. This may explain the “primitive” look and function of the interfaces of many large enterprise systems.

It is not standard practice, even when acquiring large enterprise-level systems, to require an independent analysis of the quality of the underlying source code and coding practices. Customers generally look only at the function of the program. This is surprising given the sums of money routinely spent on these projects. The vendor will usually cite commercial confidentiality to prevent such scrutiny. This may also be a reason why the “waterfall” process of software design is preferred by Vendors and Buyers alike. Software Quality has a big impact on how easy it is to extend and develop a system.

Software Security and Open Source

Many systems are now connected to the internet. This means they are open to penetration by bots and hackers, which are increasingly sophisticated. A typical system may see hundreds of hacking attempts in an hour.

Indeed this has recently become topical with the penetration and theft of data from large Health-related IT systems. “Ransomware” attacks are now commonplace.

Open Source software by definition is public and open to scrutiny. Some argue that this makes the code vulnerable to hackers as they can easily analyse the source for potential vulnerabilities. The counter argument is that “many eyes” can look at the codebase and identify these vulnerabilities, allowing them to be corrected.

Proprietary software also seems to have its share of hacks. Perhaps the source code was available to the hacker, perhaps it was “reverse engineered” from object code, or perhaps the hacker simply tried a suite of hacking tools.

In the case of a recent large attack, “development” code was inadvertently deployed to a production server.

“Trojans” or backdoor access may have been built into the code by the original coder. There was a famous case of an undocumented “backdoor” built into a large commercial database widely used in enterprise systems; this was said to have been exploited by US intelligence. Of course, if the code is secret, these Trojans will never be discovered. High quality code is less likely to have vulnerabilities, and if they are present it is possible to correct them.

The User Interface

This is a focus for commercial mobile phone and computer app providers, where usability is critical to uptake in a competitive environment. But in large Health IT systems, usability and the quality of the User Interface do not attract the same attention. Is this yet another adverse effect of the commercial contracting process and “Vendor Lock-in”?

The User Interface has also been well studied in the context of Safety Critical systems. Poor User Interface design and poor software quality were factors in the Therac-25 incident, in which malfunctions in a radiotherapy machine delivered radiation overdoses, harming several patients and causing deaths (Ref 5).

In my view it is also a factor in many poor clinical outcomes, where critical past history is obscured in the record: the treating Clinician does not see this data and makes a clinical error as a result.

In my experience this factor is not considered in most “after the fact” reviews and “Root Cause Analyses”. Usually the Clinician is held responsible for the outcome, and a common response is to require “further training”.

Some principles of User Interface design (Ref 6):

Simplicity Principle – The design should make the user interface simple, communication clear and common tasks easy, using the user’s own language. The design should also provide simple shortcuts for longer procedures.

Visibility Principle – All the necessary materials and options required to perform a certain task must be visible to the user without creating a distraction by giving redundant or extraneous information. A great design should not confuse or overwhelm the user with unnecessary information.

Feedback Principle – The user must be kept fully informed of actions, changes of state, errors and their interpretation in clear, concise and unambiguous language.

Tolerance Principle – The design must be tolerant and flexible. The user interface should reduce the cost of misuse and mistakes by providing options such as undo and redo, helping to prevent errors where possible.

Reuse Principle – The user interface should reuse internal and external components, maintaining consistency in a purposeful way so that the user does not have to rethink or remember. In practice, this means data used in several locations should only be entered once.

In all the various Health IT systems that I have used, the User Interface does not appear to conform with many of these principles. Typically, the program is complex, with multiple layers of dialogs.

There is much redundancy of headings, poor use of screen “real estate”, poor formatting of dialogs, and the display of multiple “null” values. Often a lot of irrelevant “administrative” information is displayed. The end result is poor usability and poor “data visibility”: important clinical data is hidden in layers of dialogs or poorly labelled documents.

These failures reduce efficiency and user satisfaction and increase risk in an already difficult and at times dangerous Clinical Environment.

Conclusion

Software Quality is important to cost, extensibility, Interoperability and Clinical Safety. It should receive more attention in the commissioning and upgrading of Health IT systems. The design of the User Interface is a Clinical Safety Issue and should be considered as a factor when adverse clinical outcomes occur.

References

1. Erik Sundvall, “Scalability and Semantic Sustainability in Electronic Health Record Systems”, Linköping University, Department of Biomedical Engineering, Medical Informatics. http://liu.diva-portal.org/smash/record.jsf?pid=diva2%3A599752&dswid=3087

2. Data Modelling – OpenEHR. https://www.openehr.org/

3. Obama administration legislation. https://www.modernhealthcare.com/article/20160430/MAGAZINE/304309990/how-medicare-s-new-payment-overhaul-tries-to-change-how-docs-use-tech

4. Legacy software. https://www.manning.com/books/re-engineering-legacy-software#toc

5. The Therac-25 Incident. https://thedailywtf.com/articles/the-therac-25-incident

6. User Interface design. http://ux.walkme.com/graphical-user-interface-examples/

Author: Richard Hosking

GP, Alice Springs Remote Health. Music, Electronics, Amateur Radio.
