Size does matter

The size of clinical trials has become a raging issue. I came across it on Twitter, and I’d like to add my perspective.

The Wall Street Journal article presents a reasonably nuanced view about the need for trials. What it leaves out in the process is that some diseases, like those involving the brain, will always need clinical trials because of their relative rarity. Likewise, for common cancers arising in the breast and prostate, opinion on long-term clinical trials is divided because they represent a significant public health problem.

The treatment protocols for brain tumours like gliomas haven’t changed much in the past 15+ years. For even rarer diseases like CNS lymphomas, the role of chemotherapy has expanded manifold. Patients present to different facilities with varying standards of care. Not everyone has access to the “research facilities”, and especially in developing countries, that conceptual framework is non-existent. Treatment protocols are often trial and error in what “fits” the Indian subset of patients. That is true primarily because out-of-pocket expenditure is a significant public health issue.

Now comes the emerging role of “personalised medicine”, where opinion on big versus small trials is even more sharply divided. What does everyone secretly agree on but never say in the open? That it is more important to understand the need to publish negative trials. The focus of the oncological community is on big-bang positive studies, especially for the “blockbuster” drugs, which are often intricately linked to prevailing stock prices. There are perverse incentives, too, not to take financial risks. It is the pharma companies that decide on “treatment protocols” and the “standard of care”, and conflicts of interest are given short shrift in the protocols. That is why I insist on public funding of trials, with leeway made for failure. Previously, I have also argued that “personalised medicine” is still very much in its infancy. We are only nibbling at the outliers and nowhere near the core of the problem.

It is also incredibly naive to assume that if a company is offering an “unrestricted educational grant”, it has no say in the outcomes. The grant gets them a seat on the board, from which they can influence the reports indirectly.

So does size matter? More extensive trials are time-honoured but require immense resources. I strongly feel that hair-splitting over current treatment options offers no means to an end. Instead of a narrow focus on the outliers (like the drugs), protocols need to include radiation therapy as an inherent component of treatment.

Translational medicine needs to take centre stage, and public funding should avoid large-scale duplication of work. That said, it comes with its caveats.

Glioma research: Asking the right questions

There is an arms race to find the next molecular target. The potential spin-offs are enormous. Royalty payments. Insurance payouts.

Despite insane profits, big pharma has lost its drive to push forward with drug discovery. The easy way is to buy out biotechnology companies (startups) or chase clinical conditions with healthy, fat margins (like hypertension). Rare diseases like brain tumours haven’t seen incremental investment over the past few years because of poor outcomes. Tumour-treating fields are the only “breakthrough” in recent times for recurrent tumours.

Therefore, the onus lies on informal networks of universities and individual researchers to push this narrative forward. Despite the wasted research dollars, there is a lot of promise in translational research.

My proposal has the following (very broad, generic) outline.

The problem, at the outset, is the cost of sequencing. But it is a necessary evil. Unless we know what type of tumour we are dealing with and its genetic signature, we cannot hope for proper characterisation. This information needs to be married to clinical follow-up under standard protocols.

Is there any scope for in-vivo monitoring? If yes, what is its timeline going to be? How frequently are we going to look for mutations? What is the rate of mutation? What is its timescale? When should we intervene?

Another pet theory of mine is the class distinction for stem cells. Do they exist? If yes, why can’t they be reliably identified? What are their niches, and what is the best way to target them?

Each sequencing run would reveal a wealth of clinical data (both genomics and radio-genomics) and spur a deeper dive into the molecular ontology. Yes, that might fulfil the wet dream for molecular targets as well. However, as a radiation oncologist, I am only keen to know whether I can reduce my tumour volumes, how we can reduce the dose to normal structures (brain), and how to combine these efforts with patient-related outcomes.

Bring it on! Let us do it! (Have some laughs!!)

Quality of life in brain tumours

This is a very thorny issue in the neuro-oncology community. How do you measure quality of life objectively?

A RANO working group has defined that outline and is aware of the problem. We, as radiation oncologists, aren’t oblivious to the fact that radiation therapy offers one single shot at giving the maximum chance of cure. I am not discussing the issue of re-irradiation here; the idea is to minimise the impact of existing delivery mechanisms.

Beyond the tumour volumes (margins of 2–3 cm for high-grade gliomas), this is both empirical and observational. The bulk of failures was observed to happen in the high-dose region. That brings us to two important questions.

1) If we know that failure is going to happen within the 95% isodose, why don’t we focus on intentional dose heterogeneity, at the expense of conformity? We could explore mathematical formulations for it: how best to predict which dose fractionation would suit the likely outcomes, where the failure is expected to take place, and how to escalate the dose to that region.

2) Some tumours usually fail elsewhere, outside the treatment area. If this is the case, why not “lower” the dose to the treatment area (so-called “de-escalation”)?
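The escalation/de-escalation trade-off in these two questions can be made concrete with the standard linear-quadratic biologically effective dose (BED), BED = n·d·(1 + d/(α/β)). This is textbook radiobiology, not something from this post; the fractionation schedules and α/β values below are conventional illustrative assumptions (α/β ≈ 10 Gy for tumour, ≈ 2 Gy for late-responding normal brain), not a recommendation.

```python
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose (Gy) under the linear-quadratic model."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# Conventional glioblastoma course: 60 Gy in 30 fractions of 2 Gy.
tumour_bed = bed(30, 2.0, 10.0)   # effect on tumour (alpha/beta ~ 10 Gy)
brain_bed = bed(30, 2.0, 2.0)     # effect on late-responding brain (alpha/beta ~ 2 Gy)

# A de-escalated course: 54 Gy in 30 fractions of 1.8 Gy.
brain_bed_deesc = bed(30, 1.8, 2.0)

print(f"60/30 tumour BED10: {tumour_bed:.1f} Gy")      # 72.0 Gy
print(f"60/30 brain  BED2 : {brain_bed:.1f} Gy")       # 120.0 Gy
print(f"54/30 brain  BED2 : {brain_bed_deesc:.1f} Gy") # 102.6 Gy
```

The point of the sketch: a modest reduction in dose per fraction cuts the normal-brain BED disproportionately, which is the arithmetic behind "lower the dose to the treatment area" when failures are expected elsewhere.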

Do you see the immediate impact?

Lower the total dose to the normal brain!

Now, that leads us to two more questions.

1) Why don’t we lower the dose to around 55 Gy for Grade III tumours, given that they have a better outcome?

2) Does Temozolomide also act as a radiation sensitiser?

The problem with these very broad assumptions is that we do not have robust criteria for pre-operative, or even intra-operative, validation of tumour subsets by MR spectroscopy or perfusion (or by any other metabolites, for that matter). Likewise, after intense scrutiny and numerous workshops, we have only just been able to define glioblastomas and grade III astrocytomas (and other variants) objectively, alongside the molecular data. Previously, palisading necrosis was all we had from my pathology colleagues. Now we are wading in molecular soup, and no one has the complete picture of how things can be nailed!

However, the use of these molecular methods isn’t widespread.

One way out is to sequence the tumours completely, follow up patients on standard-course fractionation, and prospectively identify patterns of failure.

It would be akin to a very preliminary “precision medicine”, not the hype cycle that seeks to identify “molecular targets”.

No, we don’t need more “research” on something that is being duplicated across labs. But we do need to channelise what we have learned.

Who is going to bell the cat?

I think, currently, we are just trying to identify who the cat is.