It is widely known that there is significant inter-observer and intra-observer variability among pathologists, as has been demonstrated many times in the literature. One example is a study that reviewed variability in pathologists' diagnoses.
Pathologists submitted cases of what they considered to be “classic” examples of melanoma, and each case was recorded as benign or malignant. The pathologists disagreed in 62% of cases: one pathologist classed 57% of cases as malignant, whereas another classed only 27% as malignant. This highlighted the huge difference in opinion between pathologists, something we experienced firsthand recently when we invited attendees at the Molecular Pathology Conference to guess the tumor cell percentage of a specimen. You can read all about the scale of guesses in our experiment here.
For the diagnosis of melanoma, this large difference of opinion is particularly staggering, but not overly surprising. The diagnosis appears to be subject to the experience and perception of the individual pathologist and, as with all things, this can lead to human error. It is in our nature to be better at approximate reasoning than at numerical analysis: without instruments we cannot measure temperature in exact figures, but we can say that it is cold, cool, warm or hot. It is the same when describing the grade of a tumor; it can be well differentiated, moderately differentiated or poorly differentiated. But we cannot calculate how abnormal a cell is; it is a judgment built on years and years of experience looking at cells.
In the computer science world, everything is calculated and measured with precision, in a way that eliminates personal bias and emotional involvement. Tests have to be executed and results must be reproducible. A piece of technology that performs calculations on a piece of tissue to determine what is tumor and what is not will produce the same result no matter how many times you run it on the same slide. This rules out the potential for intra-observer variability.
This therefore raises the question:
“Would you want a computer to diagnose you?”
Even with the sheer amount of technology in today’s society, the answer is still probably “no”. People trust people more than technology, because technology is man-made and human error can be programmed into the system. However, people do trust a thermometer to tell their GP that they have a high temperature, and the GP uses this extra information to diagnose the patient. The thermometer itself does not deduce that, based on the temperature reading, the patient has X, Y or Z. It is a tool that helps the GP give an accurate diagnosis.
Why not bring this ideology into pathology?
Technology that can detect tumor tissue empowers pathologists to give a consistently accurate diagnosis. PathXL’s TissueMark is the perfect example of this technology.
TissueMark represents the relationship between computational science and pathology, and is central to the acceptance of algorithms in diagnostic pathology. As we said in a previous blog post, we are approaching a world of CAD (computer-assisted diagnosis), with the emphasis on ‘assisted’. Just like a thermometer, our software can indicate a tumor percentage, but it cannot make the diagnosis or the final call. This leaves the pathologist in full control.
To find out more information on any of our PathXL products or to arrange a free demonstration, please contact us today. You can also keep up to date with the latest product news and digital pathology stories by following us on Twitter, LinkedIn and Google+.