Glioma research: Asking the right questions

There is an arms race to find the next molecular target. The potential spin-offs are enormous. Royalty payments. Insurance payouts.

Despite insane profits, big pharma has lost its drive to push drug discovery forward. The easy way out is to buy up biotechnology startups or chase the clinical conditions with fat, healthy margins (like hypertension). Rare diseases like brain tumours haven’t seen any incremental investment over the past few years because of poor outcomes. Tumour treating fields are the only “breakthrough” in recent times for recurrent tumours.

Therefore, the onus lies on informal networks of universities and individual researchers to push this agenda forward. Despite the wasted research dollars, there is a lot of promise in translational research.

My proposal follows the (very broad/generic) outline below.

The problem, at the outset, is the cost of sequencing. But it is a necessary evil. Unless we know what type of tumour we are dealing with, or its genetic signature, we cannot hope for proper characterisation. This information then needs to be linked to clinical follow-up under standard protocols.

Is there any scope for in-vivo monitoring? If yes, what would its timeline be? How frequently should we look for mutations? What is the mutation rate, and on what timescale does it play out? When should we intervene?
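I don’t have firm answers, but the arithmetic behind the last few questions can at least be sketched. Below is a toy back-of-the-envelope calculation in Python: given variant allele frequencies from two serial sequencing runs, it extrapolates when a clone would cross an intervention threshold. The numbers, the threshold, and the linear-growth assumption are all mine, purely for illustration; real clonal dynamics are far messier.

```python
# Toy estimate: how soon should we re-sequence?
# Naive assumption: the variant allele frequency (VAF) of a clone
# grows linearly between samples. Real tumour evolution does not.

def months_to_threshold(vaf_t0: float, vaf_t1: float,
                        months_between: float,
                        action_threshold: float) -> float:
    """Months until the VAF is projected to cross the intervention threshold."""
    rate_per_month = (vaf_t1 - vaf_t0) / months_between
    if rate_per_month <= 0:
        return float("inf")  # stable or shrinking clone: no urgency
    return (action_threshold - vaf_t1) / rate_per_month

# Hypothetical patient: a subclone at 5% VAF, then 8% three months later;
# suppose we would intervene once it reaches 20%.
months_left = months_to_threshold(0.05, 0.08, months_between=3, action_threshold=0.20)
print(f"Projected threshold crossing in ~{months_left:.1f} months")  # ~12 months
```

A sensible monitoring interval would then be some safe fraction of that projection, which is exactly the kind of question serial sequencing data could answer properly.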

Another pet theory of mine is the distinct class of tumour stem cells. Do they exist? If yes, why can’t they be reliably identified? What are their niches, and what is the best way to target them?

Each sequencing run would reveal a wealth of clinical data (genomics as well as radiogenomics) and spur a deeper dive into the molecular ontology. Yes, that might fulfil the dream of new molecular targets as well. However, as a radiation oncologist, I am keen to know only whether I can reduce my tumour volumes, how we can reduce the dose to normal structures (the brain), and how to tie these efforts to patient-reported outcomes.

Bring it on! Let us do it! (Have some laughs!!)

RANO: Working plan for the use of patient-reported outcome measures in adults with brain tumours

Lancet Oncology, 19 (2018), e173–e180. doi:10.1016/S1470-2045(18)30004-4

Why is this paper important?

Because there are no reliable means of capturing patient-reported outcomes (PROs). These metrics are an essential part of monitoring the course of treatment as well as quantifying its impact. For years, we have relied on metrics like the Mini-Mental State Examination, which I have found sorely limited: it is full of biases and highly dependent on the patient’s cognition and mood. There has to be a more robust metric.

Hence, the great blurb from this paper:

The first step would be to provide an overview of the guidelines of previous initiatives on the collection, analysis, interpretation, and reporting of PRO data

It is a step in the right direction because it is an acknowledgement of what we don’t know. I have attempted formal psychometric testing, but it usually takes hours and has limited clinical utility. The existing tests have undergone validation in different “trials” (most of which are either single-author-led studies or institutional trials), leading to much confusion. Do we have a standard way of reporting them?

Not yet.

This leads us to the second step.

The second step would be to identify what PRO measures have been applied in brain tumour studies so far. As mentioned, several PRO measures are already used frequently (e.g., MD Anderson Symptom Inventory Brain Tumor Module, Functional Assessment of Cancer Therapy-Brain, EORTC Quality of Life Questionnaire C30 and BN20, and the Barthel Index)

Content validity should also be culturally sensitive. What applies in one geography doesn’t necessarily translate to another part of the world, which adds to the complexity of the task.

Therefore, I feel the third step poses the most crucial question in patient-reported outcomes.

The third step would be to establish the content validity of the existing PRO measures identified in the second step. Are all essential aspects of functioning and health for patients with brain tumours covered by these instruments?

The next excerpt points in the right direction. Patient-defined outcomes alone are not enough; they have to be complemented by a physician scoring system as well.

How is this going to shape up?

This framework refers to a patient’s functioning at three distinct levels. The most basic level is a patient’s impairment in body function, such as muscle weakness. Assessment of these impairments can be done with PRO measures, such as a symptom questionnaire, but also with clinician-reported outcome measures such as a neurological examination

Last but not least are the psychometric properties: an instrument has to prove its reliability as well! This, of course, extends to reproducibility across different domains.

The fourth step is to identify the psychometric properties of the detected PRO measures. How valid and reliable are these instruments for patients with brain tumours?

To achieve this goal, the committee proposes to use the COSMIN taxonomy, which it defines as follows:

The COSMIN taxonomy distinguishes three quality domains: reliability, validity, and responsiveness, each of which includes one or more measurement properties. Reliability refers to the degree in which the measurement is without measurement error, whereas validity refers to the degree in which an instrument truly measures the construct intended to measure. Responsiveness refers to the ability of an instrument to detect (clinically relevant) changes over time.

These criteria will help shape the course of treatment beyond survival outcomes and keep the focus on preserving quality of life.
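To make “reliability” concrete: internal consistency of a questionnaire is often summarised with Cronbach’s alpha. A minimal sketch in Python, assuming a hypothetical four-item symptom module scored 0-4 (the responses are made up, and nothing here is prescribed by the RANO committee):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal consistency: rows = patients, columns = questionnaire items."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 6 patients, 4 items (0 = no complaint, 4 = worst).
responses = np.array([
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [0, 1, 0, 1],
    [2, 2, 3, 2],
    [4, 3, 4, 4],
    [1, 1, 2, 1],
])

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")  # values near 1 = consistent
```

Validity and responsiveness need more than a formula: repeated administrations over time and an external anchor (such as a clinician-rated scale) to check that score changes track clinically meaningful change.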

More on that later.

Why blogging is essential

When you face an empty sheet, the hardest part is defining the direction you want your words to take.

This post was written in response to a brilliant blog post on 33charts, which is run by an influential paediatrician. I love the way he wraps up his ideas; his writing is a joy to read.

I have flirted and experimented with blogging consistently over the past decade or more, and I am aware of how the blogging landscape has evolved.

This neuroblog was set up later, in response to recommendations from many who had been there before. Blogging is the best way to get your ideas out. It showcases what is on your mind.

If you are clear in your mind, you can set out to do what you wish to achieve. Hence, this blogging platform is essential for categorising as well as firming up opinions.

Twitter is sorely limited for expressing both nuance and context. A blogging platform can explain the background, but the spoken word and personal interaction convey nuance best.

Each of these channels contributes to a more vibrant diversity of opinion.


Can we have a Spotify-like model for academic publishing?

I have always disliked the idea of paywalls. To borrow the cliché from Silicon Valley rags, they amount to “friction” in accessing resources. It is a huge pain, especially if you don’t have the means to pay.

Much has been written about the advent of scientific publishing in the internet era. Despite the monopolistic tendencies of publishers in erecting huge barriers, the likes of Sci-Hub are winning: a few “pirate” websites have extended their support and currently house the bulk of the published scientific literature. Interestingly, the majority of access happens from university campuses that have already paid for it! It is natural human tendency to look for the easiest way out.

This post is not about accessing Sci-Hub, but it reminds me of the bitter battle between pirates and Hollywood executives, who faced similar issues with file sharing in the late ’90s and torrents thereafter. The studios claim they are losing profits (despite the mansions, big cars, and lavish lifestyles) and are locked in pitched battles with telecom service providers to identify those who “pirate”. Paywalls and digital rights management did not deter those determined to access content. Telegram, for example, has emerged as one of the largest piracy hubs for movie distribution on mobile because of its generous file limits.

This post is not meant to resolve these nuances, but they set me thinking: why can’t we have a Spotify/Netflix-like model for academic papers? At a monthly cost of around 10 USD, with all publishers pooling their resources, it could mount a formidable challenge to the tendency to pirate. Technology has made DRM easier. Do we still see pirated Netflix original content? Yes, true. But Netflix and Spotify offer a superior user experience, and far more people pay than pirate; hence these companies are profitable.
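A back-of-the-envelope calculation shows what such a pool might look like. Every figure below is a made-up placeholder, not industry data; the point is only that the arithmetic is worth doing.

```python
# Hypothetical numbers for a pooled "Spotify for papers" -- placeholders only.
monthly_fee_usd = 10
subscribers = 12_000_000        # researchers, clinicians, students worldwide (guess)
annual_pool = monthly_fee_usd * 12 * subscribers

publisher_share = 0.70          # assume 70% of the pool flows back to publishers
print(f"Annual pool:       ${annual_pool / 1e9:.2f}B")                    # $1.44B
print(f"Publishers' share: ${annual_pool * publisher_share / 1e9:.2f}B")  # ~$1.01B
```

Whether such a pool could rival current subscription revenue depends entirely on uptake, but it is the same bet Spotify made against music piracy.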

Publishing houses should read the writing on the wall. I would be all for a Spotify-like model for papers. Every stakeholder stands to gain.