Goals of research

There has been an outpouring of dollars into basic molecular research. Many clinicians have joined in with their labs to push for “clinically relevant research”. It is evident that there is a great deal of duplication and overlap among these efforts.

For example, look at the IDH gene in the pathogenesis of gliomas. We know it carries prognostic significance. We also know about the molecular pathogenesis. How does duplicating the research across different labs help us or make us any wiser?

The answer lies in the pharmaceutical business goldmine. Loath to spend on basic research into molecular pathways, pharmaceutical companies have instead farmed the work out to a network of labs. It is easy for anyone to form a company and then sell out by being acquired. This is excellent for the research ecosystem, as it brings in new and innovative ideas, but there are some serious issues here.

Publicly funded research ends up priced out of reach of the very end users who have contributed in no small measure to it. They need to become more aware of these repercussions. Shrinking federal grants for publicly funded research mean that there is no adequate oversight or auditing of the labs doing the same work. The stakes are potentially very high, and patent awards can make individuals very rich.

I agree that these are generalisations and that this opinion isn’t set in stone. I have based the above assertion on my reading of the situation as well as verbal accounts.

What is urgently required is a partnership at all levels: focus on one idea that has the potential to work in brain tumours, and pool resources, under legal agreements, to work on different aspects of the same problem. The idea is akin to a hub-and-spoke model of research. The goal is to identify a molecular pathway and understand its implications for radiation therapy.

Let’s say, hypothetically, that IDH-driven gliomagenesis is the newly discovered pathway. One team works at the molecular level to identify potential inhibitory points, another identifies the molecules that bring about this change, and a third studies the interaction between radiation therapy and the pathway. Aggregating the results would avoid duplication and overlap and lead to faster translational outcomes.

The problem is that such efforts end up leaving radiation as an afterthought. That should change.

Twitter for oncologists: More reflections

One thing is apparent: Twitter, as a service, is for sharing links alone. The original premise was to capture how users discuss issues in “real time” and to function as a “real-time” search engine. Google, at some point, listed Twitter results, but that ended for reasons best known to them.

The more people there are on any platform, the more excessive the banter. Separating the signal from the noise becomes ever more difficult as the informational deluge overwhelms us. While it is fanciful to have more Twitter (or Instagram) followers and show off as an “influencer”, it doesn’t help much because of abysmal rates of engagement. I may consistently get a large number of “tweet impressions” (part of the mumbo-jumbo of metrics that Twitter markets), but this is useless, as it doesn’t translate into real-life behavioural change.

This is evident from the fact that engagement with my shared links is abysmally poor. My idea of being on the social network is academic exchange: if I share a link that is opened and read by another, it fosters a dialogue.

At the other extreme, I have come across “verified” accounts of “star influencers” in the oncology community who push out links with annotations, pictures, survival curves and proper attribution to the authors. How do they manage it?

I have strong reason to believe that these links are pushed out by dedicated teams using enterprise accounts. A lot of window dressing takes place, and only after “approval” is the content “tweeted” out. You have to see the pattern to understand it. It is impossible to juggle professional commitments with tweeting links all the time; there has to be a team involved.

The race for “followers” has polluted the ecosystem. Automated bots propel specific “likes”, making it impossible to differentiate legitimate traffic from bot-sponsored and bot-propagated traffic.

I am not cynical. I use Twitter for ideas to write about on this blog. I observe trends. I interact with the virtual selves of genteel people scattered all over the planet. It is fun to learn from them, to ping them and understand their perspectives. The trick is to moderate: turn off retweets that don’t concern you, mute specific words, and stay focused on what you wish to gain. As a result, I have whittled my unread timeline down to less than half of what it was. It took time to cull the deadwood and let the fresh perspectives soak in, but in the end it was worth it.

Research in radiation oncology: Break the logjam

I came across this on Twitter (where else!). Despite the “weirdness” (pun intended), it was apparent that it raised substantial issues. I had responded to it there, but it merited a blog post.

There has been an institutional push in Western countries to observe and record. Higher disposable incomes among specific segments of society helped them get a better education and, as a result, better opportunities. I am not getting into a nuanced debate about racial differences or affirmative action here. Inequalities have always played a part, but so has the ability to capitalise on the opportunities that present themselves.

A lot of research happens because of institutionalised mechanisms. Children are exposed to ideas at school and through paid internships, scholarships and grant opportunities. In India, the approach is entirely insular and works in silos. Medical science has grown incredibly complicated, and it is beyond any one individual to grasp every nuance.

As a result of those initiatives, a few developed economies have led and broken ground in “research” (whether it is transformational or applicable to real-world solutions is immaterial). This has spurred the likes of China (an aspirational economy) to ape the US-led system, but rigid hierarchies stymie them. It is indeed laughable when the Government of India decides to set up a “scientific officer for innovation”, because innovation cannot happen in silos. Throwing money at central “research institutes” isn’t going to help, because the lack of real-world application has hardly moved the needle in any meaningful direction. Likewise, the research is mostly divorced from its socio-cultural context.

We can only break the logjam if we first identify the cause of the problem. Outsourcing research to understand molecular pathways and then applying developmental molecules to “block” them only perpetuates what I call a scientific fraud of “monumental proportions”, because of the perverse incentives associated with pharmaceuticals.

(Radiation therapy needs love, not in delivery methods but in radiobiology and fractionation.) It is sad that radiation oncologists have more faith and belief in “combination regimens” when altered fractionation schemes have been beneficial too. But progress here is excruciatingly slow.

It would be difficult to think beyond patent protections and intellectual property if someone else controls the purse strings.

Glioma research: Asking the right questions

There is an arms race to find the next molecular target. The potential spin-offs are enormous. Royalty payments. Insurance payouts.

Despite insane profits, big pharma has lost its drive to push drug discovery forward. The easy way out is to buy biotechnology startups or to chase clinical conditions with healthy, fat margins (like hypertension). Rare diseases like brain tumours haven’t seen any incremental investment over the past few years because of poor outcomes. Tumour-treating fields are the only “breakthrough” in recent times for recurrent tumours.

Therefore, the onus lies on informal networks of universities and individual researchers to push this narrative forward. Despite the wasted research dollars, there is a lot of promise in translational research.

My proposal has the following (very broad and generic) outline.

The problem, at the outset, is the cost of sequencing. But it is a necessary evil. Unless we know what type of tumour we are dealing with and its genetic signature, we cannot hope for proper characterisation. This information then needs to be married to clinical follow-up under standard protocols.

Is there any scope for in-vivo monitoring? If yes, on what timeline? How frequently should we look for mutations? What is the rate of mutation, and on what timescale? When should we intervene?

Another pet theory is that of stem cells as a distinct class. Do they exist? If yes, why can’t they be reliably identified? What are their niches, and what is the best way to target them?

Each sequencing run would reveal a wealth of clinical data (genomic as well as radiogenomic) and spur deeper dives into the molecular ontology. Yes, that might fulfil the wet dream of molecular targets as well. However, as a radiation oncologist, I am chiefly keen to know whether I can reduce my tumour volumes, how we can reduce the dose to normal structures (the brain), and how to combine these efforts with patient-related outcomes.

Bring it on! Let us do it! (Have some laughs!!)

RANO: Working plan for the use of patient-reported outcome measures in adults with brain tumours

Lancet Oncology, 19 (2018) e173-e180. doi:10.1016/S1470-2045(18)30004-4

Why is this paper important?

It is because there are no reliable means of capturing patient-reported outcomes (PROs). These metrics are an essential part of monitoring the course of treatment as well as quantifying its impact. For years, we have been relying on metrics like the Mini-Mental State Examination. I have found that examination sorely limited, because it is full of biases and highly dependent on the patient’s cognition and mood. There has to be a more robust metric.

Hence, this excellent excerpt from the paper:

The first step would be to provide an overview of the guidelines of previous initiatives on the collection, analysis, interpretation, and reporting of PRO data

It is a step in the right direction because it is an acknowledgement of what we don’t know. I have attempted to involve formal psychometric testing, but it usually takes hours and has limited clinical utility. The existing tests have undergone validation in different “trials” (most of which are either single-author studies or institutional trials), leading to much confusion. Do we have a standard way of reporting them?

Not yet.

It leads us to the second step.

The second step would be to identify what PRO measures have been applied in brain tumour studies so far. As mentioned, several PRO measures are already used frequently (e.g., MD Anderson Symptom Inventory Brain Tumor Module, Functional Assessment of Cancer Treatment-Br, EORTC Quality of Life Questionnaire C30 and BN20, and the Barthel Index)

Content validity should also be culturally sensitive. What applies in one geography doesn’t translate to another part of the world, which adds to the complexity of the task.

Therefore, I feel that the third step addresses the most crucial question in patient-reported outcomes.

The third step would be to establish the content validity of the existing PRO measures identified in the second step. Are all essential aspects of functioning and health for patients with brain tumours covered by these instruments?

The next excerpt points in the right direction. It is not patient-defined outcomes alone; they have to be validated against a physician scoring system as well.

How is this going to shape up?

This framework refers to a patient’s functioning at three distinct levels. The most basic level is a patient’s impairment in body function, such as muscle weakness. Assessment of these impairments can be done with PRO measures, such as a symptom questionnaire, but also with clinician-reported outcome measures such as a neurological examination

Last but not least are the psychometric properties: an instrument has to prove its reliability as well! This, of course, extends to reproducibility across different domains.

The fourth step is to identify the psychometric properties of the detected PRO measures. How valid and reliable are these instruments for patients with brain tumours

To achieve this goal, the committee proposes to use the COSMIN taxonomy, which it defines as follows:

The COSMIN taxonomy distinguishes three quality domains: reliability, validity, and responsiveness, each of which includes one or more measurement properties. Reliability refers to the degree in which the measurement is without measurement error, whereas validity refers to the degree in which an instrument truly measures the construct intended to measure. Responsiveness refers to the ability of an instrument to detect (clinically relevant) changes over time.

These criteria will help shape the course of treatment beyond survival outcomes and keep the focus on preservation of quality of life.
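As an aside (this is not from the paper), reliability in the COSMIN sense is something that can be quantified. Below is a minimal Python sketch computing Cronbach’s alpha, one common internal-consistency estimate of reliability, for a hypothetical four-item symptom questionnaire; the questionnaire, items and scores are purely illustrative.

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal-consistency reliability for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of items
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical data: six patients answering a four-item symptom questionnaire (0-4 scale)
responses = [
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [0, 1, 0, 1],
    [2, 2, 3, 2],
    [4, 4, 3, 4],
    [1, 1, 2, 1],
]
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

An alpha of around 0.70 or higher is usually taken as acceptable internal consistency, though the COSMIN framework asks for much more than this single number: test-retest reliability, validity and responsiveness each need their own evidence.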

More on that later.

Why blogging is essential

When you face an empty sheet, the hardest part is to define the direction you want to give to your words.

This post is in response to a brilliant blog post on 33charts, which is run by an influential paediatrician. I love the way he wraps up his ideas; his writing is a delight to read.

I have flirted and experimented with blogging consistently over the years (a decade or more), and I am aware of how the blogging landscape has evolved.

This neuroblog was set up later, in response to many recommendations from those who had been there before. Blogging is the best way to get your ideas out; it showcases what is on your mind.

If you are clear in your mind, you can set out to do what you wish to achieve. Hence, this blogging platform is essential for categorising as well as firming up opinions.

Twitter is sorely limited for expressing both nuance and context. A blogging platform can explain the background, but the spoken word and personal interaction best convey nuance.

Each one of these leads to a more vibrant diversity of opinion.


Can we have a Spotify-like model for academic publishing?

I have always disliked the idea of paywalls. To borrow the cliché from Silicon Valley rags, they amount to “friction” in accessing resources. It is a huge pain, especially if you don’t have the means to pay.

Much has been written about scientific publishing in the internet era. Despite the monopolistic tendencies of publishers in erecting huge barriers, the likes of Sci-Hub are winning. A few “pirate” websites have lent their support and currently house the bulk of the published scientific literature. Interestingly, the majority of access happens from university campuses where the institutions have already paid for access! It is a natural human tendency to look for the easiest way out.

This post is not about accessing Sci-Hub, but it reminds me of the bitter battle between pirates and Hollywood executives, who have faced similar issues with file sharing and torrents since the late ’90s. They claim they are losing profits (despite the mansions, big cars and lavish lifestyles) and are locked in pitched battles with telecom service providers to identify those who “pirate”. Paywalls and digital rights management did not deter those who were determined to access content. Telegram, for example, has emerged as one of the largest piracy hubs for movie distribution on mobile because of its generous file-size limits.

This post is not the place to discuss those nuances, but it set me thinking: why can’t we have a Spotify/Netflix-like model for academic papers? At a monthly cost of around 10 USD, with all publishers pooling their catalogues, it could pose a formidable challenge to the tendency to pirate. Technology has made DRM easier. Do we still see pirated Netflix original content? Yes. But Netflix and Spotify offer a superior user experience, the ratio of people paying up to those who pirate is larger, and hence these companies are profitable.

Publishing houses should heed the writing on the wall. I would be all for a Spotify-like model for papers. Every stakeholder stands to gain.