A rating scale was developed to assess the contribution made by computer software towards the delivery of a quality consultation, with the purpose of informing the development of the next generation of systems. Two software programmes were compared, using this scale to test their ability to enable or inhibit the delivery of an ideal consultation with a patient with heart disease. The context was a general practice-based, nurse-run clinic for the secondary prevention of heart disease. One of the programmes was customised for this purpose; the other was a standard general practice programme. Consultations were video-recorded and then assessed by an expert panel using the new assessment tool. Both software programmes were oriented towards the implementation of the evidence rather than towards facilitating patient-centred practice. The rating scale showed, not surprisingly, significantly greater support from the customised software in five out of eight areas of the consultation. However, the scale's reliability, measured by Cronbach's alpha, was sub-optimal. With further refinement, this rating scale may become a useful tool that informs software developers of the effectiveness of their programmes in the consultation and suggests where they need development.

Original publication

Type: Journal article
Journal: Medical Informatics and the Internet in Medicine
Publication Date:
Pages: 267-280