Posted in Scicomm

The case for preprints

Daniel Mansur, the principal investigator of a lab at the Universidade Federal de Santa Catarina that studies how cells respond to viruses, had this to say about why preprints are useful in an interview with eLife:

Let’s say the paper that we put in a preprint is competing with someone and we actually have the same story, the same set of data. In a journal, the editors might ask both groups for exactly the same sets of extra experiments. But then, the other group that’s competing with me works at Stanford or somewhere like that. They’ll order everything they need to do the experiments, and the next day three postdocs will be working on the project. If there’s something that I don’t have in the lab, I have to wait six months before starting the extra experiments. At least with a preprint the work might not be complete, but people will know what we did.

Preprints level the playing field by eliminating one’s “ability to publish” in high-IF journals as a meaningful measure of the quality of one’s work.

While this makes it easier for scientists to compete with their better-funded peers, my indefatigable cynicism suggests there must be someone out there who’s unhappy about this. Two kinds of people come immediately to mind: journal publishers and some scientists at highfalutin universities like Stanford.

Titles like Nature, Cell, the New England Journal of Medicine and Science, and especially those published by the Elsevier group, have ridden the impact factor (IF) wave to great profit over many decades. In fact, IF continues to be the dominant mode of evaluating research quality because it’s easy and not time-consuming, so – given how IF is defined – these journals continue to be important for being important. They also provide a valuable service – peer review – which Mansur thinks is the only thing preprints currently lack. But other than that (and with post-publication peer review proving largely adequate), their time of obscene profits is surely running out.
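The arithmetic behind the IF is almost trivially simple, which is exactly why it is “easy and not time-consuming”. A minimal sketch, with entirely made-up numbers for a hypothetical journal:

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor: citations received this year to papers
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations / citable_items

# hypothetical journal: 12,000 citations this year to 300 papers
# published over the previous two years
print(impact_factor(12_000, 300))  # → 40.0
```

A single division per journal per year: cheap to compute, and just as cheap to lean on as a proxy for quality.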

The pro-preprint trend in scientific publishing is also bound to have jolted some scientists whose work received a leg-up by virtue of their membership in elite faculty groups. Like Mansur says, a scientist from Stanford or a similar institution can no longer claim primacy, or uniqueness, by default. As a result, preprints definitely improve the forecast for good scientists working at less-regarded institutions – but an equally important consideration would be whether preprints also diminish the lure of fancy universities. They do have one less thing to offer now, or at least in the future.

How cut-throat competition forces scientists to act against the collective

Brian Keating, an astrophysicist who helped lead the infamous cosmic inflation announcement in 2014, thinks this is how science works: “… you put out a result, and other scientists work to test the result”. However, his own story shows that this is a cute ideal that is often unreasonable to expect on the ground. Scientists are often not putting out results and expecting others to test them so much as rushing to announce results to scoop others, even when they don’t yet have enough data to support their claims.

In fact, many supposed truisms about science are put to the test every day, but we choose to ignore the results because it would be easier if science were “self-correcting” and “objective” – easier than confronting the very real possibility that science’s autocorrect works across decades, not years, and that the truths it uncovers are objective only insofar as they are not pursued by scientists wondering what will get them published, famous, well-funded and rewarded.

This malaise is not specific to Keating and his team; it applies to all scientists because these ambitions are motivated by a flawed administration of science, increasingly emulated around the world as states rush to increase their scientific “output”. And it is when you compare the methods of this administration to what people think science is that you realise you’re practically harbouring a cognitive dissonance.

Now, Keating was working with a single team of scientists (which he was leading) on a single instrument, the BICEP2 telescope near the South Pole. On the other hand, many major discoveries of this century are expected out of ‘Big Science’ collaborations: teams of people working together to study a natural phenomenon and obtain a common result, with other teams working to replicate that result. The Large Hadron Collider (LHC) is a famous example. The collective faculties of its 3,000+ scientists and engineers are necessary to operate the machine and its detectors and to analyse the data to present meaningful results.

However, the LHC has always been an ‘easy’ example, one that spares the person quoting it from having to grapple with the numerous other collaborations that don’t work the way the LHC’s does. Every time engineers at CERN, the European laboratory that hosts the LHC, write a paper based on LHC data, the names of all the people involved in the experiment to which the authors belong are listed as coauthors. For example, in 2015, the CMS and ATLAS experiments at the LHC jointly published a paper, reporting a more precise measurement of the mass of the Higgs boson, with 5,154 authors.

The members of the Planck space telescope team, on the other hand, had refused to share data with the BICEP2 team for whatever reason. Keating had reasoned, “Either [Planck] didn’t have the data we wanted, or they did have it and they were going to scoop us.” The Planck team would release the relevant parts of the data later that year, but in the meantime, Keating et al courted public adulation in an effort to cement their candidacy for a Nobel Prize.

Two sources of conflict

This ‘pursuit of the scoop’ is fascinating because it describes a vector of action within scientific research that most of us typically don’t account for, yet one that seems to influence the outcomes and communication of research in significant ways. Researchers believe the Nobel Prize is the highest honour, while The System is not efficient enough to recognise their every effort in the right context, so they decide the only way to go is to take risks and cut ahead.

Last week, science journalist Jennifer Ouellette described another arena where a quarrel over credit had been unfolding the same way it did with the BICEP2 experiment.

In August 2017, the twin Laser Interferometer Gravitational-wave Observatories (LIGO) and 70 telescopes around the globe tracked a neutron-star merger. It was the world’s first demonstration of multi-messenger astronomy, where multiple instruments study the same phenomenon in multiple channels (electromagnetic and gravitational) to understand its evolution through different laws of physics. The results of the studies were announced by the LIGO Scientific Collaboration in October 2017 with much fanfare (warranted because neutron-star mergers are spectacular in many ways).

Between August and October, however, the portion of the astronomy community caught up in the analysis and follow-up observations was going nuts. The merger had been ‘observed’ through three events that were detected thus: gamma rays by space telescopes, gravitational waves by LIGO and the kilonova explosion by ground telescopes. LIGO has had a habit of checking its observations repeatedly before making an announcement to the public – while, according to Ouellette, astronomers have gone the other way, having had no reason to wait before being able to claim a discovery with sufficient confidence. This was one source of conflict.

The other source was the familiar one of primacy. All members of the collaboration had been keenly aware that Kip Thorne, Barry Barish and Rainer Weiss had received the Nobel Prize for physics in 2017 following LIGO’s first announcement of the detection of gravitational waves in 2016. And the members wanted to make sure their contributions to the final announcement were properly acknowledged so they would remain in contention for future rewards, of which there were potentially many.

Ouellette writes,

According to [Josh Simon, an astronomer in Chile], things got messy after he and his colleagues spotted the kilonova and identified the host galaxy. Five other teams detected the event in their images within the next hour, and it wasn’t clear whether those teams spotted the kilonova before or after Carnegie’s announcement. This in turn sparked a lively debate about how much credit the subsequent teams should receive. …

The debate over credit extended to who should be listed as authors on the primary omnibus paper describing the discovery. LIGO made a good-faith effort to be as inclusive as possible, but hackles were raised over how the collaboration defined what constituted a “unique” contribution or discovery. In the end, the omnibus paper had two tiers of co-authors. The first included the six groups deemed “the discoverers,” with the second tier comprised of those who did the follow-up work and analysis. Even so, “there were a lot of people in that second category who thought they should have been in the first category because they did make a first or unique contribution,” [LIGO spokesperson David Reitze] says.

We have frequently derided Indian ministers for being so obsessed with the Nobel Prize – it is a silly obsession – but the LIGO and BICEP2 tales demonstrate that this feature is not unique to India or China. Everyone wants a Nobel Prize.

The Nobel intent

However, there are two different cultures at work here, even though both their followers kneel at the same altar. In India, for example, ministers dream of Indian scientists winning a Nobel Prize but their actions haven’t always been consistent with their desires. In the US, for another example, the infrastructure for good research is already in place, but unless operational expenditure is hiked, the community will undeservedly suffer from the effects of overcrowding. As Ouellette said,

… astronomers tend to cluster in smaller, independent groups, and they are fiercely competitive, vying both for limited funding and for precious time on the world’s limited number of telescopes. Being first to report a breakthrough observation is hugely important to most astronomers. (emphasis added)

Such spending is also tied to the US’s consideration of itself as the world’s “leader” of scientific research. American scientists have won the most Nobel Prizes in the last century – but that was a century when the US was truly the research leader. It has dropped the ball of late and the effects will surely show a few decades from now in the Nobel Prize count.

Of course, in both cases it must be acknowledged that the Nobel Prize symbolises power. For the individual, it is power in the form of acknowledgment of work. For the institute, it is power in the form of access to funds. For the country, it is power in the form of prestige. It hasn’t mattered if the way the prize is awarded is flawed; the cultural and historical cachet it still carries is astounding, prompting two weighty collaborations to almost unravel in its pursuit. We must acknowledge that this is how science works. There are likely to be other team-efforts worldwide where individual desires have superseded community goals.

This also does not make LIGO’s way of doing things better – or even the LHC’s, for that matter. A habit of giving everyone on the experiment credit means the person who actually did the work relevant to a paper’s results gets the same amount of credit as a fresh PhD student. Peter Coles, a theoretical cosmologist at Cardiff University, has called this “absurd”. Panjab University used this flaw to its advantage in climbing global university rankings because its scientists had been listed as coauthors on numerous papers published by LHC experiments.

It is clear that the ultimate fix to this problem will have to ensure that all work is properly acknowledged and, if necessary, rewarded. David L. Clements, an observational astrophysicist at Imperial College London, commented on Coles’s post, “More permanent contracts, a less publication-fixated funding environment, and more money in the field, reducing the level of cut-throat competition, would help, but can you realistically see any of that happening?” It is becoming clear that the essential animus of a hyper-competitive environment is rooted in the availability of three types of resources at certain levels in the hierarchy of scientific enterprise: evaluators, methods to ensure fairer evaluations and acknowledgments (assuming we already have the resources to conduct research).

The more evaluators there are, the more evaluations that will happen (vertically, horizontally or both). They must not completely rely, or even over-rely, on reductive proxies like the impact factor or h-index to judge candidates’ performance. Instead, they must be able to afford (and not be blindly expected to perform) qualitative tests, such as speaking to a candidate’s supervisor to understand her contributions better and assessing her work by actually reading her papers. Once evaluations are complete, all candidates must be rewarded. This encompasses suitable rewards made available in a timely manner. Without these measures, however, participants of a competition will have few reasons to believe it will be fair or empathetic.
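It is easy to see why proxies like the h-index win out over reading papers and talking to supervisors: the h-index compresses an entire career into a loop a few lines long. A minimal sketch, with a made-up citation record:

```python
def h_index(citations):
    """Largest h such that the researcher has h papers with
    at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:  # this paper has at least `rank` citations
            h = rank
        else:
            break
    return h

# hypothetical researcher with five papers
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

A number this cheap to compute will always tempt an overworked evaluator, which is precisely the problem the paragraph describes.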

A recent Twitter conversation (below) between Mukund Thattai, a biologist at the National Centre for Biological Sciences, Bengaluru, and Shailja Gupta, an adviser and scientist at the Department of Biotechnology, teased out the nuances of this issue. In particular, it highlighted the need for a two-way collaboration between the scientific community and the Government of India.

The Wire
May 1, 2018

Featured image: An engraved bust of Alfred Nobel. Credit: sol_invictus/Flickr, CC BY 2.0.

Cognitive flexibility and nationalism 2.0

Remember that paper about cognitive flexibility and nationalism? The one that said people who are more nationalistic in their politics tend to have lower cognitive flexibility? I’d blogged about it here. I hadn’t read the study’s paper, published in the Proceedings of the National Academy of Sciences, because I didn’t think I had to to be able to call the study’s conclusions into question. An excerpt from my previous post:

… ideological divisions, imagined in the form of political polarisation, are bad enough as it is without people on one side of the aisle being able to accuse those on the other side of having “low cognitive flexibility”. The nuance can be worded as prosaically as the neuroscientists would prefer but this won’t – can’t – stop the less-nationalistic from accusing the more-nationalistic of simply being stupid, now with a purported scientific basis.

This is why I believe something has to be off about the study. The people on the right of the political spectrum, as it were, are not stupid. They’re smart just the way those of us on the left imagine ourselves to be. Now, one defence of the study may be that it attempts to map a hallmark feature of the global political right, a sort of rampant anti-intellectualism and irrationality, to its neurological underpinnings – but nationalism is more than its endorsement of traditions or traditional values.

As it turned out, reading the paper would have revealed more problems with the study and let me make a stronger case than I was able to that it is quite likely a product of the “publish or perish” kind of thinking. The reason I revisit this study now is an interesting conversation I had with Shruti Muralidhar, a cortical and hippocampal neuroscientist and currently a postdoc at the Massachusetts Institute of Technology. Before I’d written my post, I’d asked Shruti if she could read the paper and possibly critique it.

My primary concern was basically about assigning a kind of “hierarchy of cognitive abilities” to the political spectrum – that sounds dangerous. By saying the political right has less cognitive flexibility, I’d felt like the study was reaching the conclusion that there might be a purely biological explanation for why people behave the way they do. This kind of reductionism is eminently dangerous.

According to Shruti, “This understanding of the paper is not far from what they want the reader to take away – but sadly, they have little or no backing to actually prove or disprove this claim.” She summed her observation up in a few points (quoted verbatim):

  1. Cognitive flexibility is simply just that. It doesn’t mean more or less intelligence, smarts or anything like that. In fact, it might not even be a “positive” trait depending on the situation at hand.
  2. The study’s authors administered only two cognitive tests, and one of them clearly gave counterintuitive, unexpected or, one might even say, “wrong” results – results that go against the study’s primary hypothesis.
  3. These are correlation studies, which usually are to be taken with a bagful of salt.
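Point 3’s caution can be made concrete: a correlation coefficient is symmetric, so by itself it can never say whether A drives B or B drives A. A toy illustration with made-up data, in which A really does drive B but the statistic cannot tell:

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# made-up data: a causally drives b (plus noise)
a = [random.gauss(0, 1) for _ in range(1000)]
b = [0.5 * x + random.gauss(0, 1) for x in a]

# the coefficient is identical with the roles swapped
print(pearson(a, b) == pearson(b, a))  # True
```

The study’s own admission later – that the causal arrow could run either way – is exactly this symmetry at work.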

The first question that arises then is why the authors – or PNAS – decided to publish their study when 50% of their tests turned in results that opposed their hypothesis: that the more nationalistic are less flexible, cognitively speaking.

She also pointed out many issues with the language in the paper, especially lines that could be misinterpreted easily. Some in particular stuck out because they revealed a deeper epistemological issue with the study.

Shruti said, “The authors clearly admit that cognitive flexibility is a multi-dimensional beast and that it is difficult to understand it completely,” and often suggest that they don’t understand it completely themselves. One giveaway is that they keep saying variations of “We need more and better tests”.

A bigger giveaway is a line on page 6: “However, it is also conceivable that immersing oneself in strongly ideological environments may encourage psychological inflexibility and promote a preference for routines and traditions.” In other words, if A stood for “more nationalistic” and B for “less cognitive flexibility”, the authors were saying A therefore B while also admitting B therefore A. Even the direction of their correlation was in doubt, let alone causation. This portion concludes thus:

Nevertheless, more research is necessary to understand the nature of cognitive flexibility and the various ways in which it manifests in relation to ideological thinking.

The authors haven’t defined cognitive flexibility explicitly in their paper, instead referencing older studies on the subject. Even so, Shruti said that those papers might not be able to provide the final word either because, as one of her peers had pointed out, “Since this study is EU/Britain-specific, their idea of what ideological inflexibility is might also be different from, say, India’s or the rest of the world’s. Europe thrived on systems and thinking-within-the-box for centuries.”

Altogether, the paper appears to describe a study of the “low-hanging fruit” variety. Its central hypothesis has been neither proved nor disproved, the reader is left in doubt about whether the tests were properly chosen (and why more tests weren’t performed), and the paper is strewn with admissions that the authors don’t claim to understand what one of the more important keywords in the study really means.

Worst of all (to me) is that the paper has been published with a misleading title, and the university press release with an even more misleading one – a release that should bear responsibility for the fake news born as a result (and that strengthens the case that press releases shouldn’t be trusted). And there’s quite a bit of it:

  • PsychCentral has an article that only quotes Leor Zmigrod, the lead author of the study and a psychologist at the University of Cambridge.
  • The same is true of an article in The Guardian by Nicola Davis. The headline goes ‘Brexiters tend to dislike uncertainty and love routine, study says’ – more of the reductionism at work.
  • Andrew Brown of The Guardian takes the study’s conclusions at face value, writing in his column:

… some kinds of political argument are going to be literally interminable. Obviously this isn’t true of any particular issue. Even the question of our relations with Europe will be settled some time before the heat death of the universe. But it may be replaced by something else which arouses the same passions and splits the population in the same way, because the cognitive traits [Zmigrod] is analysing are all part of the normal variation of humanity.

In fact, it seems no prominent coverage of the paper has invited an independent researcher to comment on its findings. I concede that I myself didn’t speak to a psychologist – Shruti is a neuroscientist – but all of Shruti’s observations are hard to ignore.

Finally, if I were looking to publish a paper right now, I’d hypothesise that flattering, non-critical coverage of scientific papers – peer-reviewed or otherwise – is more common among news publishers when a paper makes it easier for the publication to maintain its political position.

Featured image credit: mwewering/pixabay.

A peek behind the curtain of the most infamous cosmic blunder of our time

I was once stupid too, and still am in many ways. One of the instances when I was more stupid than usual was when I wrote an article about the now-infamous BICEP2 ‘discovery’ of evidence of cosmic inflation in 2014. The ‘discovery’ eventually turned out to be a non-discovery because the scientists behind it had acted too soon with their announcement, overlooking a serious gap in their data.

As a science journalist, I’d failed because I hadn’t solicited independent comments for my piece, as a result letting The Hindu (where I worked at the time) publish an eminently wrong article. I will never forget that this happened, if only to remind myself of the importance of soliciting independent comments on all science articles, no matter how mundane the peg.

The BICEP2 instrument studies the cosmic microwave background (CMB) radiation. Some scientists were using BICEP2 to detect the imprint of gravitational waves on the magnetic component of the CMB radiation. Specifically, they were looking for curling patterns in the magnetic mode associated with a rapid expansion of the universe thought to have happened between 10⁻³⁶ and 10⁻³³ seconds after the universe was born.

This expansion has been called cosmic inflation, and the period in which it happened, the inflationary epoch. Cosmic inflation was a hypothesis that sought to explain why parts of today’s universe seem to have similar physical features despite being separated by billions of lightyears. If cosmic inflation did happen, the explanation would be that, once upon a time, the universe was very small and these distant parts were in fact packed closely together.

The first announcement, on March 17, 2014, was marked with a lot of fanfare. It was cosmology’s big day, and news publications around the world covered the announcement. Most of them included comments from scientists not involved in the data-taking, scientists who said something about the results was suspicious. That suspicion snowballed over time into a full-blown rebuttal that, within a few months, torpedoed the original study and forced the authors to apologise.

The problem turned out to be that gravitational waves could cause the curling pattern on the magnetic mode of the CMB – but so could radiation emitted by cosmic dust, as seen by BICEP2. And the BICEP2 data was found to have recorded only the effects of cosmic dust.

In the last four years, I’ve realised how I had acted stupidly and learnt an important lesson the hard way. However, I was still curious why the BICEP2 team had acted stupidly. And though it seemed obvious, I had trouble accepting that the team had behaved the way it had simply because it was so excited, because it wanted to become famous.

On April 19 this year, Nautilus published an essay by Brian Keating, adapted from a book he has written about the BICEP2 fiasco. Keating was one of the leaders of the collaboration behind the announcement, which was made at the Harvard-Smithsonian Center for Astrophysics (CfA). The essay provides a behind-the-scenes look at how the scientists had missed the cosmic dust signal in their data analysis.

By the end of the essay, Keating appears to try to reassure readers that this was how science worked, that “you put out a result, and other scientists work to test the result”. However, the essay in toto highlights that this is not how science works, and that this image of scientific endeavour is far too idealistic.

For example, a constant undercurrent throughout the enterprise seems to have been a rush to scoop. Keating et al had their eyes on a Nobel Prize, and wanted to be the first group to make the announcement that they’d seen the remains of the universe’s “birth pangs”.

He says this rush is why his team decided to present their BICEP2 results to the press even before the corresponding paper was peer-reviewed and published in a science journal. He writes:

… we feared that sending the paper to a journal would be unfair, giving a particular group – referees and their friends – a head start on proposal submission. My field is so competitive that the only people who weren’t on BICEP2 who could have reviewed the highly technical aspects of the paper were competitors. Our first priority was to make a scientific presentation to communicate our results to all our peers in the cosmology community.

Next, it seems the CfA team had been aware that dust in the Milky Way could play spoilsport to their apparent discovery, so they tried to get data from the team operating the Planck satellite. This satellite measures electromagnetic radiation across a wide swath of the sky, much larger than the BICEP2 survey area, and in a larger range of frequencies as well.

One of these frequencies was 353 GHz, at which Planck was able to study the effect of cosmic dust exclusively. The CfA team needed this data – but despite multiple requests, the Planck team refused to share the data. This is big news to me because I had no idea the CfA and the Planck teams treated each other as competitors! If only they’d worked together, the BICEP2 fiasco might never have happened.

… such a map [of cosmic dust] did exist, one with the exact high-frequency data we needed. There was only one catch: It belonged to our competitor, the Planck satellite. And in early 2014, the Planck team hadn’t yet released their B-mode polarization data. We were scared Planck might not only hold the key to proving our measurement right, but might have already glimpsed the inflationary B-mode signal before we did. … We desperately tried to work with the Planck team, while being careful not to tip them off as to what we’d found … [but they] wouldn’t cooperate. Either they didn’t have the data we wanted, or they did have it and they were going to scoop us. We had to go it alone.

Soon after, Keating and his team found a picture of a PowerPoint slide posted online that appeared to be from a talk given by one of the Planck team members. They decided to use the information presented in the slide, which suggested that BICEP2 had good and legitimate data, even though they weren’t sure if the slide was meant for quantitative analysis.

Thus, March 17 came and went, then June did too, when the CfA team’s paper was published in the journal Physical Review Letters. Then, around November, the Planck team had their paper published. As Keating writes,

With the Planck 353 GHz paper appearance came the beginning of the end of the BICEP2 team’s inflation elation. Although the Planck team was careful to release no data for the Southern Hole, the field where BICEP2 observed—perhaps out of fear we would digitize it—they made a blunt assessment of the potential amount of dust polarization contamination in the Southern Hole, saying it was of “the same magnitude as reported by BICEP2.” This meant dust was as likely a culprit for our B-modes as were inflationary gravitational waves.

The BICEP2 story well elucidates how science really works.

“Scientists are people too” is one way to put it. Another, and possibly better, way is to remember that institutionalised tendencies like torturing the data to yield more papers, conducting research to attract a Nobel Prize and scooping the competition aren’t one-offs, and that it’s foolish to think they wouldn’t percolate through the scientific community to create flawed ambitions.

These are all essential components of how humanity produces its knowledge. In other words, the scientific enterprise isn’t one that’s free of human foibles.

Featured image: The BICEP2 telescope (right) in Antarctica. Credit: Amble/Wikimedia Commons, CC BY-SA 3.0.

Exploring what it means to be big

Reading a Nature report titled ‘Step aside CERN: There’s a cheaper way to break open physics’ (January 10, 2018) brought to mind something G. Rajasekaran, former head of the Institute of Mathematical Sciences, Chennai, told me once: that the future – as the Nature report also touts – belongs to tabletop particle accelerators.

Rajaji (as he is known) said he believed so because of the simple realisation that particle accelerators could only get so big before they’d have to get much, much bigger to tell us anything more. On the other hand, tabletop setups based on laser wakefield acceleration, which could accelerate electrons to higher energies across just a few centimetres, would allow us to perform slightly different experiments such that their outcomes will guide future research.

The question of size is an interesting one (and an almost personal one: I’m 6’4” tall and somewhat heavy, which means in almost all new relationships I have to start by trying not to seem intimidating). For most of history, humans’ idea of better has included something becoming bigger. From what I can see – which isn’t really much – the impetus for this is founded in five things:

1. The laws of classical physics: They are, and were, multiplicative. To do more or to do better (which for a long time meant doing more), the laws had to be summoned in larger magnitudes and in more locations. This has been true from the machines of industrialisation to scientific instruments to various modes of construction and transportation. Some laws also foster inverse relationships that straightforwardly encourage devices to be bigger to be better.

2. Capitalism, rather commerce in general: Notwithstanding social necessities, bigger often implied better the same way a sphere of volume 4 units has a smaller surface area than four spheres of volume 1 unit each. So if your expenditure is pegged to the surface area – and it often is – then it’s better to pack 400 people on one airplane instead of flying four airplanes with 100 people in each.

3. Sense of self: A sense of our own size and place in the universe, as seemingly diminutive creatures living their lives out under the perennial gaze of the vast heavens. From such a point of view, a show of power and authority would obviously have meant transcending the limitations of our dimensions and demonstrating to others that we’re capable of devising ‘ultrastructures’ that magnify our will, to take us places we only thought the gods could go and achieve simultaneity of effect only the gods could achieve. (And, of course, for heads of state to swing longer dicks at each other.)

4. Politics: Engineers building a tabletop detector and engineers building a detector weighing 50,000 tonnes will obviously run into different kinds of obstacles. Moreover, big things are easier to stake claims over, to discuss, dispute or dislodge. It affects more people even before it has produced its first results.

5. Natural advantages: An example that comes immediately to mind is social networks – not Facebook or Twitter but the offline ones that define cultures and civilisations. Such networks afford people an extra degree of adaptability and improve chances of survival by allowing people to access resources (including information/knowledge) that originated elsewhere. This can be as simple as a barter system where people exchange food for gold, or as complex as a bashful Tamilian staving off alienation in California by relying on the support of the Tamil community there.

(The inevitable sixth impetus is tradition. For example, its equation with growth has given bigness pride of place in business culture, so much so that many managers I’ve met wanted to set up bigger media houses even when it might have been more appropriate to go smaller.)

Against this backdrop of impetuses working together, Ed Yong’s I Contain Multitudes – a book about how our biological experience of reality is mediated by microbes – becomes a saga of reconciliation with a world much smaller, not bigger, yet more consequential. To me, that’s an idea as unintuitive as, say, being able to engineer materials with fantastical properties by sporadically introducing contaminants into their atomic lattice. It’s the sort of smallness whose individual parts amount to very close to nothing, whose sum amounts to something, but the human experience of which is simply monumental.

And when we find that such smallness is able to move mountains, so to speak, it disrupts our conception of what it means to be big. This is as true of microbes as it is of quantum mechanics, as true of elementary particles as it is of nano-electromechanical systems. This is one of the more understated revolutions that happened in the 20th century: the decoupling of bigger and better, a sort of virtualisation of betterment that separated it from additive scale and led to the proliferation of ‘trons’.

I like to imagine that what gave us tabletop accelerators also gave us containerised software and a pan-industrial trend towards personalisation – although this would be philosophy, not history, because it’s a trend we compose in hindsight. But in the same vein, both hardware (to run software) and accelerators first became big, riding on the back of the classical and additive laws of physics, then hit some sort of technological upper limit (imposed by finite funds and logistical limitations) and then bounced back down when humankind developed tools to manipulate nature at the mesoscopic scale.

Of course, some would also argue that tabletop particle accelerators wouldn’t be possible, or deemed necessary, if the city-sized ones didn’t exist first, that it was the failure of the big ones that drove the development of the small ones. And they would argue right. But as I said, that’d be history; it’s the philosophy that seems more interesting here.

Posted in Life notes, Scicomm

How science is presented and consumed on Facebook

This post is a breakdown of the Pew study titled The Science People See on Social Media, published March 21, 2018. Without further ado…

In an effort to better understand the science information that social media users encounter on these platforms, Pew Research Center systematically analyzed six months’ worth of posts from 30 of the most followed science-related pages on Facebook. These science-related pages included 15 popular Facebook accounts from established “multiplatform” organizations … along with 15 popular “Facebook-primary” accounts from individuals or organizations that have a large social media presence on the platform but are not connected to any offline, legacy outlet.

Is popularity the best way to judge if a Facebook page counts as a page about science? Popularity is an easy measure but it often represents, almost exclusively, a section of the ‘market’ skewed towards popular science. Some such pages from the Pew dataset include facebook.com/healthdigest, /mindbodygreen, /DailyHealthTips, /DavidAvocadoWolfe and /droz – all “wellness” brands that may not represent the publication of scientific content as much as, more broadly, content that panders to a sense of societal insecurity that is not restricted to science. This doesn’t limit the Pew study insofar as the study aims to elucidate what passes for ‘science’ on Facebook, but it does limit Pew’s audience-specific insights.

§

… just 29% of the [6,528] Facebook posts from these pages [published in the first half of 2017] had a focus or “frame” around information about new scientific discoveries.

Not sure why the authors, Paul Hitlin and Kenneth Olmstead, think this is “just” 29% – that’s quite high! Science is not just about new research and research results, and if these pages consciously acknowledge that by limiting their posts about such news to, on average, three of every 10 posts, that’s fantastic. (Of course, if the reason for not sharing research results is that they’re not very marketable, that’s too bad.)

I’m also curious about what counts as research on the “wellness” pages. If their posts share research to a) dismiss it because it doesn’t fit the page authors’ worldview or b) popularise studies that are, say, pursuing a causative link between coffee consumption and cancer, then such data is useless.

From 'The science people see on social media'. Credit: Pew Research Center

§

The volume of posts from these science-related pages has increased over the past few years, especially among multiplatform pages. On average, the 15 popular multiplatform Facebook pages have increased their production of posts by 115% since 2014, compared with a 66% increase among Facebook-primary pages over the same time period. (emphasis in the original)

The first line in italics is a self-fulfilling prophecy, not a discovery. This is because the “multiplatform organisations” chosen by Pew for analysis all need to make money, and all organisations that need to continue making money need to grow. Growth is not an option, it’s a necessity, and it often implies growth on all platforms of publication in quantity and (hopefully) quality. In fact, the “Facebook-primary” pages, by which Hitlin and Olmstead mean “accounts from individuals or organizations that have a large social media presence on the platform but are not connected to any offline, legacy outlet”, are also driven to grow for the same reason: commerce, both on Facebook and off. As the authors write,

Across the set of 30 pages, 16% of posts were promotional in nature. Several accounts aimed a majority of their posts at promoting other media and public appearances. The four prominent scientists among the Facebook-primary pages posted fewer than 200 times over the course of 2017, but when they did, a majority of their posts were promotions (79% of posts from Dr. Michio Kaku, 78% of posts from Neil deGrasse Tyson, 64% of posts from Bill Nye and 58% of posts from Stephen Hawking). Most of these were self-promotional posts related to television appearances, book signings or speeches.

A page with a few million followers is likelier than not to be a revenue-generating exercise. While this is by no means an automatic indictment of the material these pages share, IFL Science is my favourite example: its owner Elise Andrew was offered $30 million for the page in 2015. I suspect that might’ve been a really strong draw to continue growing, and unfortunately, many of the “Facebook-primary” pages like IFLS find this quite easy to do by sharing well-dressed click-bait.

Second, if Facebook is the primary content distribution channel, then the number of video posts will also have shown an increase in the Pew data – as it did – because publishers both small and large that’ve made this deal with the devil have to give the devil whatever it wants. If Facebook says videos are the future and that it’s going to tweak its newsfeed algorithms accordingly, publishers are going to follow suit.

Source: Pew Research Center

So when Hitlin and Olmstead say, “Video was a common feature of these highly engaging posts whether they were aimed at explaining a scientific concept, highlighting new discoveries, or showcasing ways people can put science information to use in their lives”, they’re glossing over an important confounding factor: the platform itself. There’s a chance Facebook is soon going to say VR is the next big thing, and then there’s going to be a burst of posts with VR-mediated content. But that doesn’t mean the publishing houses themselves believe VR is good or bad for sharing science news.

§

The average number of user interactions per post – a common indicator of audience engagement based on the total number of shares, comments, and likes or other reactions – tends to be higher for posts from Facebook-primary accounts than posts from multiplatform accounts. From January 2014 to June 2017, Facebook-primary pages averaged 14,730 interactions per post, compared with 4,265 for posts on multiplatform pages. This relationship held up even when controlling for the frame of the post. (emphasis in the original)

Again, Hitlin and Olmstead refuse to distinguish between ‘legitimate’ posts and trash. This would involve a lot more work on their part, sure, but it would also make their insights into science consumption on social media that much more useful. Until then, for all I know, “the average number of user interactions per post … tends to be higher for posts from Facebook-primary accounts than posts from multiplatform accounts” simply because it’s Gwyneth Paltrow wondering about what stones to shove up which orifices.

§

… posts on Facebook-primary pages related to federal funding for agencies with a significant scientific research mission were particularly engaging, averaging more than 122,000 interactions per post in the first half of 2017.

Now that’s interesting and useful. Possible explanation: Trump must’ve been going nuts about something science-related. [Later in the report] Here it is: “Many of these highly engaging posts linked to stories suggesting Trump was considering a decrease in science-agency funding. For example, a Jan. 25, 2017, IFLScience post called Trump’s Freeze On EPA Grants Leaves Scientists Wondering What It Means was shared more than 22,000 times on Facebook and had 62,000 likes and other reactions.”

§

Highly engaging posts among these pages did not always feature science-related information. Four of the top 15 most-engaging posts from Facebook-primary pages featured inspirational sayings or advice such as “look after your friends” or “believe in yourself.”

Does mental-health-related messaging on the back of new findings or realisations about the need for, say, speaking out on depression and anxiety count as science communication? It does to me; by all means, it’s “news I can use”.

§

Three of the Facebook-primary pages belong to prominent astrophysicists. Not surprisingly, about half or more of the posts on these pages were related to astronomy or physics: Dr. Michio Kaku (58%), Stephen Hawking (58%) and Neil deGrasse Tyson (48%).

Ha! It would be interesting to find out why science’s most prominent public authority figures in the last few decades have all been physicists of some kind. I already have some ideas but that’ll be a different post.

§

Useful takeaways for me as science editor, The Wire:

  1. Pages that stick to a narrower range of topics do better than those that cover all areas of science
  2. Controversial topics such as GMOs “didn’t appear often” on the 30 pages surveyed – this is surprising because you’d think divisive issues would attract more audience engagement. However, I also imagine the pages’ owners might not want to post on those issues to avoid flame wars (😐), stay away from inconclusive evidence (😄), not have to take a stand that might hurt them (🤔) or because issue-specific nuances make an issue a hard-sell (🙄).
  3. Most posts that shared discoveries were focused on “energy and environment, geology, and archeology”; half of all posts about physics and astronomy were about discoveries

Featured image credit: geralt/pixabay.

Posted in Op-eds, Scicomm

Jayant Narlikar's pseudo-defence of Darwin

Jayant Narlikar, the noted astrophysicist and emeritus professor at the Inter-University Centre for Astronomy and Astrophysics, Pune, recently wrote an op-ed in The Hindu titled ‘Science should have the last word’. There’s probably a tinge of sanctimoniousness there, echoing the belief many scientists I’ve met have that science will answer everything, often blithely oblivious to politics and culture. But I’m sure Narlikar is not one of them.

Nonetheless, the piece IMO was good but not great, because what Narlikar has written has been written in the recent past by many others, in different words. It was good because the piece’s author was Narlikar. His position on the subject is now in the public domain, where it needs to be if only so others can now bank on his authority to stand up for science themselves.

Speaking of authority: there is a gaffe in the piece that its fans – and The Hindu‘s op-ed desk – appear to have glossed over. If they didn’t, it’s possible that Narlikar asked for his piece to be published without edits, which could be further proof of sanctimoniousness or, of course, of distrust of journalists. He writes:

Recently, there was a claim made in India that the Darwinian theory of evolution is incorrect and should not be taught in schools. In the field of science, the sole criterion for the survival of a theory is that it must explain all observed phenomena in its domain. For the present, Darwin’s theory is the best such theory but it is not perfect and leaves many questions unanswered. This is because the origin of life on earth is still unexplained by science. However, till there is a breakthrough on this, or some alternative idea gets scientific support, the Darwinian theory is the only one that should continue to be taught in schools.

@avinashtn, @thattai and @rsidd120 got the problems with this excerpt, particularly the part in bold, just right in a short Twitter exchange, beginning with this tweet (please click-through to Twitter to see all the replies):

Gist: the origin of life is different from the evolution of life.

But even if they were the same, as Narlikar conveniently assumes in his piece, something else should have stopped him. That something else is also what is specifically interesting for me. Sample what Narlikar said next and then the final line from the excerpt above:

For the present, Darwin’s theory is the best such theory but it is not perfect and leaves many questions unanswered. … However, till there is a breakthrough on this, or some alternative idea gets scientific support, the Darwinian theory is the only one that should continue to be taught in schools.

Darwin’s theory of evolution got many things right, continues to, so there is a sizeable chunk in the domain of evolutionary biology where it remains both applicable and necessary. However, it is confusing that Narlikar believes that, should some explanations for some phenomena thus far not understood arise, Darwin’s theories as a whole could become obsolete. But why? It is futile to expect a scientific theory to be able to account for “all observed phenomena in its domain”. Such a thing is virtually impossible given the levels of specialisation scientists have been able to achieve in various fields. For example, an evolutionary biologist might know how migratory birds evolved but still not be able to explain how some birds are thought to use quantum entanglement with Earth’s magnetic field to navigate.

The example Mukund Thattai provides is fitting. The Navier-Stokes equations are used to describe fluid dynamics. However, scientists have been studying fluids in a variety of contexts, from two-dimensional vortices in liquid helium to gas outflow around active galactic nuclei. It is only in some of these contexts that the Navier-Stokes equations are applicable; that they are not entirely useful in others doesn’t render the equations themselves useless.

Additionally, this is where Narlikar’s choice of words in his op-ed becomes more curious. He must be aware that his own branch of study, quantum cosmology, has thin but unmistakable roots in a principle conceived in the 1910s by Niels Bohr, with many implications for what he says about Darwin’s theories.

Within the boundaries of physics, the principle of correspondence states that at larger scales, the predictions of quantum mechanics must agree with those of classical mechanics. It is an elegant idea because it acknowledges the validity of classical, a.k.a. Newtonian, mechanics when applied at a scale where the effects of gravity begin to dominate the effects of subatomic forces. In its statement, the principle does not say that classical mechanics is useless because it can’t explain quantum phenomena. Instead, it says that (1) the two mechanics each have their respective domain of applicability and (2) the newer one must come to resemble the older one when applied to the scale at which the older one is relevant.
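A textbook way to see clause (2) in action – my illustration, not Narlikar’s – is Ehrenfest’s theorem, which says that quantum expectation values evolve as:

```latex
\frac{\mathrm{d}\langle x \rangle}{\mathrm{d}t} = \frac{\langle p \rangle}{m},
\qquad
\frac{\mathrm{d}\langle p \rangle}{\mathrm{d}t} = -\left\langle \frac{\partial V}{\partial x} \right\rangle
```

For a wavepacket narrow enough that the average force equals the force at the average position, the second equation is just Newton’s second law, F = dp/dt. The quantum description doesn’t declare the classical one useless; it smoothly turns into it at the scale where the classical one is relevant.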

Of course, while scientists have been able to satisfy the principle of correspondence in some areas of physics, an overarching understanding of gravity as a quantum phenomenon has remained elusive. If such a theory of ‘quantum gravity’ were to exist, its complicated equations would have to be able to resemble Newton’s equations and the laws of motion at larger scales.

But exploring the quantum nature of spacetime is extraordinarily difficult. It requires scientists to probe really small distances and really high energies. While lab equipment has been set up to meet this goal partway, it has been clear for some time that it might be easier to learn from powerful cosmic objects like black holes.

And Narlikar has done just that, among other things, in his career as a theoretical astrophysicist.

I don’t imagine he would say that classical mechanics is useless because it can’t explain the quantum, or that quantum mechanics is useless because it can’t be used to make sense of the classical. More importantly, should a theory of quantum gravity come to be, should we discard the use of classical mechanics altogether? No.

In the same vein: should we continue to teach Darwin’s theories for lack of a better option or because they are scientific, useful and, through the fossil record, demonstrable? And if, in the future, an overarching theory of evolution comes along with the capacity to subsume Darwin’s, his ideas will still be valid in their respective jurisdictions.

As Thattai says, “Expertise in one part of science does not automatically confer authority in other areas.” Doesn’t this sound familiar?

Featured image credit: sipa/pixabay.

Posted in Life notes, Scicomm

We don't have a problem with the West, we're just obsessed with it

When you don’t write about scientific and technological research for its inherent wonderfulness but for its para-scientific value, you get stories born out of jingoism masquerading as a ‘science’ piece. Take this example from today’s The Hindu (originally reported by PTI):

A new thermal spray coating technology used for gas turbine engine in spacecraft developed by a Rajasthan-based researcher has caught the attention of a NASA scientist, an official said.

Expressing his interest in the research, James L. Smialek, a scientist from NASA wrote to Dr. Satish Tailor after it was published in the journal Ceramics International and Thermal Spray Bulletin, said S.C. Modi, the chairman of a Jodhpur-based Metallizing Equipment Company.

This story is in the news not because a scientist in Rajasthan (Tailor) developed a new and better spray-coating technique. It’s in the news because a white man* (Smialek) wrote to its inventor expressing his interest. If Smialek hadn’t contacted Tailor, would it have been reported?

The article’s headline is also a bit off: ‘NASA keen on India-made technology for spacecraft’ – but does Smialek speak for NASA the organisation? He seems to be a senior research scientist there, not a spokesperson or a senior-level decision-maker. Additionally, “India-made”? I don’t think so. “India-made” would imply that a coalition of Indian institutions and laboratories is working to make and use this technology – yet while we’re fawning over NASA’s presumed interest, the story makes no mention of ISRO. It does say CSIR and DRDO scientists are “equally” interested, but to me “India-made” would then also raise the question: “Why cut funding for CSIR?”

Next, what’s a little funny is that while the Indian government is busy deriding Western ‘cultural imports’ for ruining our ‘pristine’ homegrown values, and while Indian ministers are constantly given to doubting the West’s scientific methods, some journalists are using the West’s acknowledgment to recognise Indian success stories. Which makes me wonder whether what we’re really doing is obsessing over the West instead of working towards patching the West’s mistakes, insofar as they are mistakes, with our corrections (very broadly speaking).

The second funny thing about this story is that, AFAIK, scientists in one part of the world writing to those in another is fairly routine. That’s one of the reasons people publish in a journal – especially in one as specific as Ceramics International: so people interested in research on the same topic can know what their peers are up to. But by reporting on such incidents on a one-off basis, journalists run the risk of making cross-country communication look rare, even esoteric. And by imbuing the story with a quality of rareness, they can give the impression that Smialek writing to Tailor is something to be proud of.

It’s not something to be proud of, simply because the reason is artificial – a reason that doesn’t objectively exist.

Nonetheless, I will say that I’m glad PTI picked up on Tailor’s research at least because of this; akin to how embargoes are beacons pointing journalists towards legitimate science stories (although not all the time), validation can also come from an independent researcher expressing his interest in a bit of research. However, it’s not something to be okay with in the long-term – if only because… doesn’t it make you wonder how much we might not know about what researchers are doing in our country simply because Western scientists haven’t written to some of them?

*No offence to you, James. Many Indians do take some things more seriously because white people are taking them seriously.

Featured image credit: skeeze/pixabay.

Posted in Life notes, Scicomm

Dealing with plagiarism? Look at thy neighbour

Four doctors affiliated with Kathmandu University (KU) in Nepal are going to be fired because they plagiarised data in two papers. The papers were retracted last year from the Bali Medical Journal, where they had been published. A dean at the university, Dipak Shrestha, told a media outlet that the matter will be settled within two weeks. A total of six doctors, including the four above, are also going to be blacklisted by the journal. This is remarkably swift and decisive action against a problem that refuses to go away in India, for many reasons. I’m not being an apologist when I say one of those reasons is that many teachers at colleges and universities seem to think “plagiarism is okay”. For as long as that attitude persists, academicians are going to be able to plagiarise and flourish in the country.

One of the other reasons plagiarism is rampant in India is the language problem. As Praveen Chaddah, a former director of the UGC-DAE Consortium for Scientific Research, has written, there is a form of plagiarism that can be forgiven – the form at play when a paper’s authors find it difficult to articulate themselves in English but have original ideas all the same. The unforgivable form is when the ideas are plagiarised as well. According to a retraction notice supplied by the Bali Medical Journal, the KU doctors indulged in plagiarism of the unforgivable kind, and were duly punished. In India, however, I’m yet to hear of an instance where researchers found to have been engaging in such acts were pulled up as swiftly as their Nepali counterparts were, or had sanctions imposed on their work within a finite period and in a transparent manner.

The production and dissemination of scientific knowledge should not have to suffer because some scientists aren’t fluent in a language. Who knows, India might already be the ‘science superpower’ everyone wants it to be if we’re able to account for information and knowledge produced in all its languages. But this does not mean India’s diversity affords it the license to challenge the use of English as the de facto language of science; that would be stupid. English is prevalent, dominant, even hegemonic (as K. VijayRaghavan has written). So if India is to make it to the Big League, then officials must consider doing these things:

  1. Inculcate the importance of communicating science. Writing a paper is also a form of communication. Teach how to do it along with technical skills.
  2. Set aside money – as some Australian and European institutions do1 – to help those for whom English isn’t their first, or even second, language write papers that will be appreciated for their science instead of rejected for their language (unfair though this may be).
  3. DO WHAT NEPAL IS DOING – Define reasonable consequences for plagiarising (especially of the unforgivable kind), enumerate them in clear and cogent language, ensure these sanctions are easily accessible by scientists as well as the public, and enforce them regularly.

Researchers ought to know better – especially the more prominent, more influential ones. The more well-known a researcher is, the less forgivable their offence should be, at least because they set important precedents that others will follow. And to be able to remind them effectively when they act carelessly, an independent body should be set up at the national level, particularly for institutions funded by the central government, instead of expecting the offender’s host institution to be able to effectively punish someone well-embedded in the hierarchy of the institution itself.

1. Hat-tip to Chitralekha Manohar.

Featured image credit: xmex/Flickr, CC BY 2.0.

Posted in Scicomm, Science

On cancers, false balance and the judiciary

Climate change has for long been my go-to example to illustrate how absolute objectivity can sometimes be detrimental to the reliability of a news report. Stating that A said “Climate change is real” and that B replied “No, it isn’t” isn’t helping anyone even though it has voices from both sides of the issue. Now, I have a new example: cancer due to radiation from cellphone towers. (And yes, there seems to be a pattern here: false balance becomes a bigger problem when a popular opinion is on the verge of becoming unpopular thanks to new scientific discoveries.)

This post was prompted by a New York Times article published January 5, 2018. Excerpt:

From 1991 to 2015, the cancer death rate dropped about 1.5 percent a year, resulting in a total decrease of 26 percent — 2,378,600 fewer deaths than would have occurred had the rate remained at its peak. The American Cancer Society predicts that in 2018, there will be 1,735,350 new cases of cancer and 609,640 deaths. The latest report on cancer statistics appears in CA: A Cancer Journal for Clinicians. The most common cancers — in men, tumours of the prostate; in women, breast — are not the most common causes of cancer death. Although prostate cancer accounts for 19 percent of cancers in men and breast cancer for 30 percent of cancers in women, the most common cause of cancer death in both sexes is lung cancer, which accounts for one-quarter of cancer deaths in both sexes.

This is a trend I’d alluded to in an earlier post: that age-adjusted cancer death rates in the US, among both men and women, have been on a steady downward decline since at least 1990 whereas, in the same period, the number of cellphone towers has been on the rise. More generally, scientific studies continue to fail to find a link between radio-frequency emissions originating from smartphones and cancers of the human body. Source: this study and this second study.
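The arithmetic in the quoted excerpt is worth pausing over: a steady annual decline compounds, so small differences in the yearly rate produce visibly different totals over 1991–2015 (24 years). A quick sketch, with the rates below as my own round numbers rather than figures from the report:

```python
# How a steady annual decline compounds over 1991-2015 (24 years).
# The NYT quotes "about 1.5 percent a year" and a 26% total drop; compounding
# shows the total is sensitive to the exact rate, so both figures are roundings.
def total_decline(annual_rate, years=24):
    """Fraction by which a quantity falls after `years` of steady decline."""
    return 1 - (1 - annual_rate) ** years

print(round(total_decline(0.0125), 2))  # 1.25%/yr -> ~26% total
print(round(total_decline(0.015), 2))   # 1.5%/yr  -> ~30% total
```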

The simplest explanation remains that these emissions are non-ionising – i.e. when they pass through matter, they can excite electrons to higher energy levels but they can’t remove them entirely. In other words, they can cause temporary disturbances in matter but they can’t change its chemical composition. Some have also argued that cellphone radiation can heat up tissues in the body enough to damage them. This is ridiculous: apart from the fact that the human body is a champion at regulating internal heat, imagine what’s happening the next time you get a fever or if you go to Delhi in May.
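A back-of-the-envelope calculation shows just how far below the ionisation threshold these emissions sit. The frequency and threshold below are typical values I’ve assumed, not figures from the studies:

```python
# Energy of a cellphone-band photon vs a typical ionisation energy
h = 6.626e-34   # Planck's constant, J*s
eV = 1.602e-19  # joules per electron-volt
f = 2e9         # ~2 GHz, a typical cellular frequency (assumed)

photon_energy_eV = h * f / eV   # E = hf, converted to eV
ionisation_energy_eV = 10.0     # rough order of magnitude to free an electron from a molecule

print(f"{photon_energy_eV:.2e} eV")             # ~8.27e-06 eV per photon
print(ionisation_energy_eV / photon_energy_eV)  # roughly a million-fold shortfall
```

No matter how many such photons arrive, each one individually falls about six orders of magnitude short of what’s needed to break a chemical bond – which is the whole content of “non-ionising”.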

Those who continue to believe cellphone towers can damage our genes do so for a variety of reasons – including poor outreach and awareness efforts (although I’m told TRAI has done a lot of work on this front) and, more troublingly, the judiciary. By not ensuring that the evidence presented before them is held to higher scientific standards, Indian courts have on many occasions admitted strange arguments and thus pronounced counterproductive verdicts.

For example, in April 2017, the Supreme Court (of India) directed that a BSNL cellphone tower in Gwalior be taken down after one petitioner claimed radiation from the structure had given him Hodgkin’s lymphoma. If the court was trying to err on the side of caution: what about the thousands of people now left with poorer connectivity in the area (and who are not blaming their ailments on cellphone tower radiation)?

This isn’t confined to India. In early 2017, Joel Moskowitz, a professor at the Berkeley School of Public Health, filed a suit asking for the state of California to release a clutch of documents describing cellphone safety measures. Moskowitz believes that cellphone radiation causes cancer, and that Big Telecom has allegedly been colluding with Big Government to keep this secret away from the public.

In December 2017, a state judge ruled in Moskowitz’s favour and directed the California Department of Public Health (CDPH) to release a “Guidance on How to Reduce Exposure to Radiofrequency Energy from Cell Phones” – a completely unnecessary set of precautions that, by virtue of its existence, reinforces a gratuitous panic. By all means, let those who believe in this drivel consume this drivel, but it shouldn’t have been at the expense of making a mockery of the court, nor should it have been effected by pressing the CDPH’s reputation into endorsing the persistence of pseudoscience. What a waste of time and money when we have bigger and more legitimate problems on our hands.

… which brings us to climate change and the perniciousness of false balance. On December 20, 2017, Times of India published an article titled ‘Can mobile phones REALLY increase the risk of brain cancer? Or is it too far-fetched?’. It quotes studies saying ‘yes’ as well as those saying ‘no’ but it doesn’t contain any attributions, citations or hyperlinks. Sample this:

Lab studies where animals are exposed to radio frequency waves suggest that as the waves are not that strong and cannot break the DNA, they cannot cause cancer. But some other studies claim that that they can damage the cells up to some level and this can support a tumour to grow.

It also contains ill-conceived language, for example by asking how radio-frequency waves become harmful before it goes on to ‘discuss’ whether they are harmful at all, or by saying the waves are “absorbed” in the human body. But most of all, it’s the intent to remain equivocal – instead of assuming a rational position based on the information and/or knowledge available on the subject – that’s really frustrating. This is no different from what the Californian judge did or what the SC of India did: not consider evidence of better quality while trying to please everyone.

Featured image credit: Free-Photos/pixabay.