Glioma research: asking the right questions

There is an arms race to find the next molecular target, and the potential spin-offs are enormous: royalty payments, insurance payouts.

Despite its enormous profits, big pharma has lost its drive for drug discovery. The easy way out is to buy up biotechnology startups or to chase clinical conditions with healthy, fat margins (like hypertension). Rare diseases like brain tumours haven’t seen incremental investment over the past few years because of poor outcomes. Tumour treating fields are the only “breakthrough” in recent times for recurrent tumours.

Therefore, the onus lies on informal networks of universities and individual researchers to push this agenda forward. Despite the wasted research dollars, there is a lot of promise in translational research.

My proposal has the following (very broad, generic) outline.

The problem, at the outset, is the cost of sequencing. But it is a necessary evil. Unless we know what type of tumour we are dealing with, and its genetic signature, we cannot hope for proper characterisation. This information needs to be married to clinical follow-up under standard protocols.

Is there any scope for in-vivo monitoring? If yes, what would its timeline be? How frequently should we look for mutations? What is the rate of mutation, and on what timescale? When should we intervene?

Another favourite pet theory is the distinct class of tumour stem cells. Do they exist? If so, why can’t they be reliably identified? What are their niches, and what is the best way to target them?

Each sequencing run would reveal a wealth of clinical data (genomic as well as radiogenomic) and spur deeper dives into the molecular ontology. Yes, that might also satisfy the appetite for molecular targets. However, as a radiation oncologist, I am keen to know only whether I can reduce my tumour volumes, how we can reduce the dose to normal structures (the brain), and how to combine these efforts with patient-reported outcomes.

Bring it on! Let us do it! (Have some laughs!!)

Quality of life in brain tumours

This issue is a very thorny one in the neuro-oncology community. How do you measure quality of life objectively?

A RANO working group has outlined this problem and is well aware of it. We, as radiation oncologists, aren’t oblivious to the fact that radiation therapy offers a single shot at the maximum chance of cure. I am not discussing re-irradiation here; the idea is to minimise the impact of existing delivery mechanisms.

The margins beyond the tumour volume (2-3 cm for high-grade gliomas) are both empirical and observational. Studies have observed that the bulk of failures happen in the high-dose region. That brings us to two important questions.

1) If we know that failure is going to happen within the 95% isodose, why don’t we pursue intentional dose heterogeneity, at the expense of conformity? We could explore mathematical formulations for this: predicting which dose fractionation best suits the likely outcome and where failure is expected to take place, and escalating the dose to that region.

2) Some tumours usually fail elsewhere, outside the treatment area. If that is the case, why not lower the dose to the treatment area (so-called “de-escalation”)?
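Taken together, the two questions amount to redistributing dose rather than simply adding it: boost the region where failure is expected and lower it elsewhere, while keeping the overall (integral) dose fixed. A toy sketch of that redistribution (the voxel-level failure probabilities and the scale factor are invented for illustration, not clinical values):

```python
# Toy dose redistribution: escalate high-risk voxels, de-escalate
# low-risk ones, keeping the mean (integral) dose unchanged.
# Failure probabilities and scale factor are illustrative only.

def redistribute_dose(base_dose_gy, failure_probs, scale_gy):
    """Shift each voxel's dose in proportion to its excess failure
    probability over the mean, so the mean dose stays at base_dose_gy."""
    mean_p = sum(failure_probs) / len(failure_probs)
    return [base_dose_gy + scale_gy * (p - mean_p) for p in failure_probs]

# Three notional voxels: one likely site of failure, two unlikely ones.
doses = redistribute_dose(60.0, [0.7, 0.2, 0.1], scale_gy=10.0)
print([round(d, 1) for d in doses])        # hot spot boosted, rest lowered
print(round(sum(doses) / len(doses), 3))   # mean dose preserved: 60.0
```

The point of the sketch is only that heterogeneity and de-escalation are two sides of the same constraint; any real formulation would need validated failure-probability maps and normal-tissue dose limits.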

Do you see the immediate impact?

Lower the total dose to the normal brain!

Now, that leads us to two more questions.

1) Why don’t we lower the dose to 55 Gy for grade III tumours, given that they have a better outcome?

2) Does Temozolomide also act as a radiation sensitiser?

The problem with these very broad assumptions is that we do not have robust criteria for pre-operative, or even intra-operative, validation of tumour subsets using MR spectroscopy or perfusion (or any other metabolite, for that matter). Likewise, after intense scrutiny and numerous workshops, we have only just been able to define glioblastomas and grade III astrocytomas (and other variants) objectively, with the help of molecular data. Previously, palisading necrosis was all we had from our pathology colleagues. Now we are wading in molecular soup, and no one has the complete picture of how things can be nailed down!

However, the use of these molecular methods isn’t widespread.

One way out is to sequence the tumours completely, follow patients up on standard-course fractionation, and prospectively identify patterns of failure.

It would be akin to a very preliminary “precision medicine”, not the hype cycle that seeks to identify “molecular targets”.

No, we don’t need more “research” on something that is being duplicated across labs. But we do need to channel what we have already learned.

Who is going to bell the cat?

I think, currently, we are just trying to identify who the cat is.

RANO: Working plan for the use of patient-reported outcome measures in adults with brain tumours

Lancet Oncology, 19 (2018) e173-e180. doi:10.1016/S1470-2045(18)30004-4

Why is this paper important?

It is because there are no reliable means of capturing patient-reported outcomes (PROs). These metrics are an essential part of monitoring the course of treatment as well as quantifying its impact. For years, we have relied on metrics like the Mini-Mental State Examination. I have found that examination sorely limited: it is full of biases and highly dependent on the cognition and mood of patients. There has to be a more robust metric.

Hence this key excerpt from the paper:

The first step would be to provide an overview of the guidelines of previous initiatives on the collection, analysis, interpretation, and reporting of PRO data

It is a step in the right direction because it is an acknowledgement of what we don’t know. I have attempted formal psychometric testing, but it usually takes hours and has limited clinical utility. The existing tests have undergone validation in different “trials” (most of which are single-author studies or institutional trials), leading to much confusion. Do we have a standard way of reporting them?

Not yet.

It leads us to the second step.

The second step would be to identify what PRO measures have been applied in brain tumour studies so far. As mentioned, several PRO measures are already used frequently (e.g., MD Anderson Symptom Inventory Brain Tumor Module, Functional Assessment of Cancer Treatment-Br, EORTC Quality of Life Questionnaire C30 and BN20, and the Barthel Index)

Content validity should also be culturally sensitive. What applies in one geography doesn’t translate to another part of the world, which adds to the complexity of the task.

Therefore, I feel the third step is the most crucial one for patient-reported outcomes.

The third step would be to establish the content validity of the existing PRO measures identified in the second step. Are all essential aspects of functioning and health for patients with brain tumours covered by these instruments?

The next excerpt points in the right direction. It cannot be patient-defined outcomes alone; they have to be validated by a physician scoring system as well.

How is this going to shape up?

This framework refers to a patient’s functioning at three distinct levels. The most basic level is a patient’s impairment in body function, such as muscle weakness. Assessment of these impairments can be done with PRO measures, such as a symptom questionnaire, but also with clinician-reported outcome measures such as a neurological examination

Last but not least are the psychometric properties: a measure has to prove its reliability as well! This, of course, extends to reproducibility across different domains.

The fourth step is to identify the psychometric properties of the detected PRO measures. How valid and reliable are these instruments for patients with brain tumours

To achieve this goal, the committee proposes the COSMIN taxonomy, which it defines as follows:

The COSMIN taxonomy distinguishes three quality domains: reliability, validity, and responsiveness, each of which includes one or more measurement properties. Reliability refers to the degree in which the measurement is without measurement error, whereas validity refers to the degree in which an instrument truly measures the construct intended to measure. Responsiveness refers to the ability of an instrument to detect (clinically relevant) changes over time.
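As a concrete illustration of the reliability domain, the internal consistency of a multi-item questionnaire is commonly summarised with Cronbach’s alpha. A minimal sketch follows; the response data are invented, and a real COSMIN appraisal involves far more than this single statistic:

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
from statistics import variance

def cronbach_alpha(responses):
    """responses: list of rows, one per respondent, one column per item."""
    k = len(responses[0])
    items = list(zip(*responses))                       # one tuple per item
    item_var_sum = sum(variance(col) for col in items)  # per-item spread
    total_var = variance(sum(row) for row in responses) # spread of totals
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Four hypothetical respondents answering a three-item symptom scale.
data = [[1, 2, 2], [2, 2, 3], [3, 4, 4], [4, 4, 5]]
print(round(cronbach_alpha(data), 3))  # ≈ 0.975, i.e. highly consistent
```

Validity and responsiveness, by contrast, cannot be computed from a single administration of the instrument; they require comparison against external constructs and repeated measurements over time.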

These criteria will help shape the course of treatment beyond survival outcomes, with a focus on preserving quality of life.

More on that later.