In scientific communication for medico-marketing, the commonest approach is to find studies showing superior outcomes of our product vs. competitor products. However, there is one approach we often miss: studies that strongly show the superiority of competitor products. Surprised? The fact is that you can sometimes find very strong evidence by reading between the lines of exactly such studies. I will share two examples here.
Example 1
I was once approached by a client to critically analyze a publication that claimed superlative outcomes with a competitor product. The client wanted to know whether the study had any flaws that could be used to question the outcomes. The paper had appeared in a very reputed journal, so we were not very optimistic; such journals review submitted manuscripts minutely. Nevertheless, we decided to check. It was a publication on an anti-diabetes agent reducing adverse cardiac outcomes, and on reading the abstract, one would be impressed.
But on deep reading, this was what we found:
1. Very complex inclusion criteria. There were at least 5 criteria for each age group of patients, in addition to 5 criteria common to all age groups, and some of the criteria conflicted across age groups. After reading them, one was left totally confused about which types of patients were included and which were excluded. No HCP would be able to understand them clearly.
2. The baseline profile of recruited patients did not line up with the comorbidities that were mandatory for inclusion in each age group. For example, the mean age was 60 years, yet only 30% of patients had a history of cardiac disease at baseline, even though cardiac disease was mandatory for including patients above 60 years. How was this possible? Only if most patients were actually aged below 60 years. For transparency, age should have been reported as median and IQR instead of mean, so that readers could gauge the proportion of patients in each age group.
3. The sample size was planned assuming a 5% annual dropout rate, but the actual dropout rate was much higher, and there was no information on how the missing data were accounted for in the statistical analysis. The extra dropout would also have reduced the power of the study, which was originally planned at 90%.
4. The primary outcome was assumed to occur in 1% of patients on placebo, but was actually seen in 12% of patients on placebo and 11% of patients on the study drug.
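The age-reporting concern in point 2 is easy to demonstrate with a quick sketch. All ages below are hypothetical, invented purely to show how a mean of 60 can coexist with only 30% of patients being over 60; it is not data from the paper:

```python
import statistics

# Hypothetical cohort, for illustration only: 70 patients in their
# fifties plus a 30-patient elderly tail pulling the mean upward.
ages = [51] * 40 + [56] * 30 + [76] * 30

mean_age = statistics.mean(ages)      # 60 -- reads like an "older" cohort
median_age = statistics.median(ages)  # 56 -- most patients are under 60
q1, _, q3 = statistics.quantiles(ages, n=4)  # IQR spans 51 to 76
share_over_60 = sum(a > 60 for a in ages) / len(ages)  # 0.30
```

Reported as a mean, this cohort looks like it averages 60 years; the median and IQR immediately reveal that 70% of patients are below 60, which is exactly the kind of discrepancy the baseline table was hiding.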
With all of the above, would an HCP really be convinced that this drug was effective? My client had found what they wanted: it was easy to see that they could convince HCPs that the outcomes of this study were questionable.
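The dropout problem in point 3 is not just cosmetic; its effect on power can be quantified. Here is a minimal sketch using the standard normal-approximation power formula for comparing two proportions. The enrolment numbers and event rates are hypothetical, chosen only to show how higher-than-planned dropout erodes power:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_proportions(p1, p2, n_per_arm, z_alpha=1.96):
    """Approximate power of a two-sided two-sample test of proportions."""
    p_bar = (p1 + p2) / 2
    se_null = math.sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    se_alt = math.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    return norm_cdf((abs(p1 - p2) - z_alpha * se_null) / se_alt)

# Hypothetical trial: sized at 2000 per arm assuming ~5% dropout,
# but heavy attrition leaves only 1400 analyzable patients per arm.
p_placebo, p_drug = 0.10, 0.07
power_planned = power_two_proportions(p_placebo, p_drug, 2000)  # above 90%
power_actual = power_two_proportions(p_placebo, p_drug, 1400)   # drops to ~80%
```

Under these assumed rates, the unplanned attrition alone pulls the study from its planned >90% power down to roughly 80%, before even considering how the missing data were handled.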
Example 2
In this case, my client had lost market share in a territory where the competitor was aggressively communicating the results of a new meta-analysis, which showed that the competitor’s product (product A) was much superior to product B (client’s product). These were dermatological products for a particular indication. On critically evaluating the paper, I found several flaws as below:
1. Very few studies (7-8) were included in the meta-analysis although there are numerous studies comparing both products.
2. Most of the included studies were quite old (more than 15 years).
3. The most appalling finding was that the search criteria for the meta-analysis did not include product B at all. This was a huge bias.
4. Studies that were included compared a combination of product A (e.g., product A with corticosteroid) vs. plain product B – again a huge bias!!!
5. Profiles of patients in the included studies were very heterogeneous; thus, these would have greatly influenced the outcomes.
6. The biggest finding was that among the 7-8 included studies, only 1 showed superior outcomes with the competitor’s product (product A with steroid) vs. my client’s product (product B, plain). However, this study’s sample size was quite large compared with those of the other included studies. Hence, this single study was able to sway the pooled result of the meta-analysis in favor of the competitor’s product.
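Point 6 is a direct consequence of inverse-variance weighting, the standard way a fixed-effect meta-analysis pools studies: each study's weight is 1/SE², so one large, precise study can dominate the pooled estimate. A minimal sketch with hypothetical log risk ratios (not the actual study data):

```python
# Hypothetical (log risk ratio, standard error) pairs, illustration only:
# six small, old studies with near-null effects, plus one large study
# strongly favoring product A + steroid over plain product B.
studies = [
    (0.02, 0.40), (-0.05, 0.45), (0.03, 0.50),
    (-0.01, 0.42), (0.04, 0.48), (-0.03, 0.44),
    (-0.60, 0.08),  # the single large study
]

# Fixed-effect inverse-variance pooling: weight = 1 / SE^2.
weights = [1 / se ** 2 for _, se in studies]
pooled_log_rr = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
big_study_share = weights[-1] / sum(weights)  # well over 80% of total weight
```

Even though six of the seven studies show essentially no difference, the pooled log risk ratio lands close to the big study's estimate, because that one study carries most of the weight. This is why checking the weight distribution of a meta-analysis is as important as reading its forest plot.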
So, this was an easy one for us: we created in-clinic communication material explaining why these results were not reliable. But we did not stop there. We added data from more recent, large studies and statements from various guidelines to strengthen our communication. Reading between the lines of published papers can indeed provide strong tools to fight competition!!
