Posted in Op-eds, Science

On cosmology's scicomm disaster

Jamie Farnes, a theoretical physicist at Oxford University, recently had a paper published claiming that the effects of dark matter and dark energy could be explained by replacing both with a fluid-like substance that was created spontaneously, had negative mass and disobeyed the general theory of relativity. As fantastic as these claims are, the paper made matters worse by failing to explain the basis on which Farnes was postulating the existence of this previously unknown substance.

But that wasn’t the worst of it. Oxford University published a press release suggesting that Farnes’s paper had “solved” the problems of dark matter/energy and stood to revolutionise cosmology. It was reprinted by PhysOrg; Farnes himself wrote about his work for The Conversation. Overall, Farnes, Oxford and the science journalists who popularised the paper failed to situate it in the right scientific context: that, more than anything else, it was a flight of fancy whose coattails his university wanted to ride.

The result was disaster. The paper received a lot of attention in the popular science press and among non-professional astronomers, so much so that the incident had to be dehyped by Ethan Siegel, Sabine Hossenfelder and Wired UK. You’d be hard-pressed to find a better countermeasures team.

The paper’s coverage in the international press. Source: Google News

Of course, the science alone wasn’t the problem: the reason Siegel, Hossenfelder and others had to step in was that the science journalists failed to perform their duties. Those who wrote about the paper didn’t check with independent experts on whether Farnes’s work was legit, choosing instead to quote directly from the press release. It’s been acknowledged in the past – though not sufficiently – that the university press officers who draft these releases need to buck up; but more importantly, universities need better policies about what roles their press releases are supposed to perform.

However, this isn’t to excuse the science journalists but to highlight two things. First: they weren’t the sole points of failure. Second: instead of looking at this episode as a network where the nodes represent different points of failure, it would be useful to examine how failures at some nodes could have increased the odds of a failure at others.

Of course, if the bad science journalists had been replaced by good ones, this problem wouldn’t have happened. But ‘good’ and ‘bad’ are neither black/white nor permanent characterisations. Some journalists – often those pressed for time, who aren’t properly trained or who simply have bad mandates from their superiors in the newsroom – will look for proxies for goodness instead of performing the goodness checks themselves. And when these proxy checks fail, the whole enterprise comes down like a house of cards.

The university’s name is one such proxy; and in this case, ‘Oxford University’ is a pretty good one. Another is the fact that the paper was published in a peer-reviewed journal.

In this post, I want to highlight two others that’ve been overlooked by Siegel, Hossenfelder, etc.

The first is PhysOrg, which has been a problem for a long time, though it’s not entirely to blame. What many people don’t seem to know is that PhysOrg reprints press releases. It undertakes very little science writing, let alone science journalism, of its own. I’ve had many of my writers – scientists and non-scientists alike – submit articles with PhysOrg used here and there as a citation. They assume they’re quoting a publication that knows what it’s doing, when in fact they’re quoting press releases verbatim.

The little bit that is PhysOrg’s fault comes down to this: PhysOrg doesn’t state anywhere on its website that most of what it puts out is unoriginal, unchecked, hyped content that may or may not have a scientist’s approval and certainly doesn’t have a journalist’s. So buyers beware.

Science X, which publishes PhysOrg, has a system through which universities can submit their press releases to be published on the site. Source: PhysOrg

The second is The Conversation. Unlike PhysOrg, these guys actually add value to the stories they publish. I’m a big fan of them, too, because they amplify scientists’ voices – an invaluable service in countries like India, where scientists are seldom heard.

The way they add value is that they don’t just let the scientists write whatever they’re thinking; instead, they’ve an editorial staff composed of people with PhDs in the relevant fields and experience in science communication. The staff helps the scientist-contributors shape their articles, and fact-checks and edits them. There have been one or two examples of bad articles slipping through their gates but for the most part, The Conversation has been reliable.

However, they certainly screwed up in this case, and in two ways. In the first way, they screwed up from the perspective of those, like me, who know how The Conversation works, by straightforwardly letting us down. Something in the editorial process got short-circuited. (The regular reader will spot another giveaway: The Conversation usually doesn’t use headlines that admit the first-person PoV.)

Wired also fails to mention something The Conversation itself endeavours to clarify with every article: that Oxford University is one of the institutions that fund the publication. I know from experience that such conflicts of interest haven’t interfered with its editorial judgment in the past, but it’s now something we’ll need to pay more attention to.

In the second way, The Conversation failed those people who didn’t know how it works by giving them the impression that it was a journalism outlet that saw sense in Farnes’s paper. For example, one scientist quoted in Wired‘s dehype article says this:

Farnes also wrote an article for The Conversation – a news outlet publishing stories written by scientists. And here Farnes yet again oversells his theory by a wide margin. “Yeah if @Astro_Jamie had anything to do with the absurd text of that press release, that’s totally on him…,” admits Kinney.

“The evidence is very much that he did,” argues Richard Easther, an astrophysicist at Auckland University. What he means by the evidence is that he was surprised when he realised that the piece in The Conversation had been written by the scientist himself, “and not a journo”.

Easther’s surprise here is unwarranted, and exists only because he isn’t aware of what The Conversation actually does. Like him, I imagine many journalists and other scientists don’t know what The Conversation‘s editorial model is.

Given all of this, let’s take another look at the proxy-for-reliability checklist. Some of the items on it we discussed earlier – including the name of the university – still carry points, and with good reason, although none of them by themselves should determine how the popular science article is written. That should still follow the principles of good science journalism. However, “article in PhysOrg” has never carried any points, and “article in The Conversation” used to carry some – points that now fall to zero.

Beyond the checklist itself, if these two publications want to improve their qualitative perception, they should do more to clarify their editorial architectures and why they are what they are. It’s worse to give a false impression of what you do than to provide zero points on the checklist. On this count, PhysOrg is guiltier than The Conversation. At the same time, if the impression you were designed to provide is not the impression readers are walking away with, the design can be improved.

If it isn’t, they’ll simply assume more and more responsibility for the mistakes of poorly trained science journalists. (They won’t assume responsibility for the mistakes of ‘evil’ science journalists, though I doubt that group of people exists.)

Posted in Op-eds, Science

Engineering a way out of global warming

After its licentious article about Earth having a second moon, I thought National Geographic had published another subpar piece when I saw this headline:

Small Nuclear War Could Reverse Global Warming for Years

The headline is click-bait. The article itself is about how a regional nuclear war, such as between India and Pakistan, can have global consequences, especially for the climate and agriculture. That it wouldn’t take World War III plus a nuclear winter for the entire world to suffer the consequences of a few – not hundreds of – nuclear explosions. And that we shouldn’t labour under the presumption that detonating a few nuclear bombs would be better than having to set all of them off. So I wouldn’t have used that headline – which seems to suggest we should maybe consider seeding the atmosphere with thousands of tonnes of some material to cool the planet down.

I don’t think it’s silly to come to that conclusion. Scientists at the oh-so-exalted Harvard and Yale Universities are suggesting something similar: injecting the stratosphere with an aerosol to scatter sunlight and cool Earth’s surface. Suddenly, global warming isn’t our biggest problem, these guys are. In a paper published in the journal Environmental Research Letters, they say that it would be both feasible and affordable to “cut the rate of global warming in half” (source: CNN) using this method. From their paper:

Total pre-start costs to launch a hypothetical SAI effort 15 years from now are ~$3.5 billion in 2018 US $. A program that would deploy 0.2 Mt of SO2 in year 1 and ramp up linearly thereafter at 0.2 Mt SO2/yr would require average annual operating costs of ~$2.25 billion/yr over 15 years. While these figures include all development and direct operating costs, they do not include any indirect costs such as for monitoring and measuring the impacts of SAI deployment, leading Reynolds et al (2016) to call SAI’s low costs a solar geoengineering ‘trope’ that has ‘overstayed its welcome’. Estimating such numbers is highly speculative. Keith et al (2017), among others, simply takes the entire US Global Change Research Program budget of $3 billion/yr as a rough proxy (Our Changing Planet 2016), more than doubling our average annual deployment estimates.


Whether the annual number is $2.25 or $5.25 billion to cut average projected increases in radiative forcing in half from a particular date onward, these numbers confirm prior low estimates that invoke the ‘incredible economics’ of solar geoengineering (Barrett 2008) and descriptions of its ‘free driver’ properties (Wagner and Weitzman 2012, 2015, Weitzman 2015).
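The quoted figures are easy to sanity-check. Here is a minimal back-of-the-envelope sketch (the constants come from the excerpt above; the ramp is assumed to add exactly 0.2 Mt of SO2 each year, which is my reading of the paper’s description, not its actual cost model):

```python
# Back-of-the-envelope check of the SAI figures quoted above.
# Assumes a linear ramp: 0.2 Mt SO2 in year 1, +0.2 Mt/yr thereafter.
PRE_START = 3.5e9    # pre-start costs, 2018 US$
AVG_ANNUAL = 2.25e9  # average annual operating cost, US$/yr
YEARS = 15

deployed = [0.2 * (year + 1) for year in range(YEARS)]  # Mt SO2 deployed per year
total_so2 = sum(deployed)                               # cumulative Mt over 15 years
total_cost = PRE_START + AVG_ANNUAL * YEARS             # programme total, US$

print(f"SO2 deployed over {YEARS} years: {total_so2:.1f} Mt")
print(f"Total programme cost: ${total_cost / 1e9:.2f} billion")
```

On these assumptions the programme deploys 24 Mt of SO2 and costs about $37 billion over 15 years – small change next to the military budgets the authors cite.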

My problem isn’t that these guys undertook their study. Scientifically devised methods of engineering the soil and air to slow or disrupt global warming have been around for many decades (including using a “space-based solar shield”). The present study simply evaluated one idea to find that it is eminently possible and that it could deliver a more than acceptable return per dollar spent (notwithstanding the comment on unreliable speculation and its consequences). Heck, the scientists even add:

Dozens of countries would have both the expertise and the money to launch such a program. Around 50 countries have military budgets greater than $3 billion, with 30 greater than $6 billion.

I’m all for blue-sky research – even if this particular analysis may not qualify in that category – and for the idea that knowing something is an end in and of itself. I.e., knowledge cannot be useless because knowing has value. Second: I don’t think any government or organisation is going to be able to implement a regional, let alone global, SAI programme just because this paper has found that it is a workable idea. Then again, ability is not the same as consideration, and consideration has its consequences as well.

My grouse is with a few lines in the paper’s ‘Conclusion’, where the scientists state that they “make no judgment about the desirability of [stratospheric aerosol injection].” They go on to state that their work is solely from an “engineering perspective” – as if to suggest that should anyone seriously consider implementing SAI, their paper is happy to provide the requisite support.

However, the scientists should have passed judgment on the desirability of SAI instead of copping out. I can’t understand why they didn’t; it would have been the easiest conclusion in the whole enterprise. No policymaker or lawmaker who thinks anthropogenic global warming (AGW) is real is going to consider this method to deal with the problem (or maybe they will, who knows; the Delhi government thinks it’s responding right by installing giant air filters in public spaces). As David Archer, a geophysicist at the University of Chicago, told CNN:

It will be tempting to continue to procrastinate on cleaning up our energy system, but we’d be leaving the planet on a form of life-support. If a future generation failed to pay their climate bill they would get all of our warming all at once.

By not judging the “desirability of SAI”, the scientists have effectively abdicated their responsibility to properly qualify the nature and value of their work, and situate it in its wider political context. They have left the door open to harmful use of their work as well. Consider the difference between a lawmaker brandishing a journal article that simply lays out the “engineering perspective” and another having to deal with an article that discusses the engineering as well as the desirability vis-à-vis the nature and scope of AGW.

Posted in Science

INO can keep env. ministry clearance

The India-based Neutrino Observatory (INO), a mega science project stranded in the regulatory boondocks since the Centre okayed it in 2012, received a small shot in the arm earlier this week.

On November 2, the National Green Tribunal (NGT) dismissed an appeal by activists against the environment ministry’s clearance for the project.

The activists had alleged that the environment ministry lacked the “competence” to assess the project and that the environmental clearance awarded by the ministry was thus invalid. But the principal bench of the NGT ruled that “it was correct on the part of the EAC and the [ministry] to appraise the project at their level”.

The INO is a Rs-1,500-crore project that aims to build and install a 50,000-tonne detector inside a mountain near Theni, Tamil Nadu, to study natural elementary particles called neutrinos.

The environment ministry issued a clearance in June 2011. But the NGT held it in abeyance in March 2017 and asked the INO project members to apply for a fresh clearance. G. Sundarrajan, the head of an NGO called Poovulagin Nanbargal that has been opposing the INO, also contended that the project was within 5 km of the Mathikettan Shola National Park. So the NGT also directed the INO to get an okay from the National Board for Wildlife.

Poovulagin Nanbargal (Tamil for ‘Friends of Flora’) and other activists have raised doubts about the integrity of the rock surrounding the project site, damage to water channels in the area and even whether nuclear waste will be stored onsite. However, all these concerns have been allayed or debunked by the collaboration and the media. (At one point, former president A.P.J. Abdul Kalam wrote in support of the project.)

Sundarrajan has also been supported by Vaiko, leader of the Marumalarchi Dravida Munnetra Kazhagam party.

In June 2017, INO members approached the Tamil Nadu State Environmental Impact Assessment Authority. After several meetings, it stated that the environment ministry would have to assess the project in the applicable category.

The ministry provided the consequent clearance in March 2018. Activists then alleged that this process was improper and that the ministry’s clearance would have to be rescinded. The NGT struck this down.

As a result, the INO now has all but one clearance – that of the National Board for Wildlife – that it needs before the final step: approaching the Tamil Nadu Pollution Control Board for the final okay. Once that is received, construction of the project can get underway.

Once operational, the INO is expected to tackle multiple science problems. Chief among them is the neutrino mass hierarchy: the relative masses of the three types of neutrinos, an important yet missing detail that holds clues about the formation and distribution of galaxies in the universe.

The Wire
November 4, 2018

Posted in Science

Doubts cast on LIGO results… again

A group of Danish physicists that suggested last year that two American experiments built to detect gravitational waves may have confused noise for signal has reared its head once more. New Scientist reported earlier this week that the group, from the Niels Bohr Institute in Copenhagen, independently analysed the experimental data and found the results to be an “illusion” instead of the real thing.

The twin Laser Interferometer Gravitational-wave Observatories (LIGO), located in the American states of Washington and Louisiana, made the world’s first direct detection of gravitational waves in September 2015. The labs behind the observatories announced the results in February 2016 after multiple rounds of checking and rechecking. The announcement bagged three people instrumental in setting up LIGO the Nobel Prize for physics in 2017.

However, in June that year, Andrew Jackson, the spokesperson for the Copenhagen group, first raised doubts about LIGO’s detection. He claimed that because of the extreme sensitivity of LIGO to noise, and insufficient efforts on scientists’ part to eliminate such noise from their analysis, what the ‘cleaned-up’ data shows as signs of gravitational waves is actually an artefact of the analysis itself.

As David Reitze, LIGO executive director, told Ars Technica, “The analysis done by Jackson et al. looks for residuals after subtracting the best fit waveform from the data. Because the subtracted theoretical waveforms are not a perfect reconstruction of the true signal, … [they] find residuals at a very low level and claim that we have instrumental artefacts that we don’t understand. So therefore he believes that we haven’t detected a gravitational wave.”

Scientists working with LIGO rebutted Jackson’s claims back then. The fulcrum of their argument rested on the fact that LIGO data is very difficult to analyse and that Jackson and co. had made some mistakes in their independent analysis. They also visited the Niels Bohr Institute to work with Jackson and his team, and held extended discussions with him in teleconferences, according to Ars Technica. But Jackson hasn’t backed down.

LIGO detects gravitational waves using a school-level physics concept called interference. When two light waves encounter each other, two things happen. In places where a crest of one wave meets a crest of the other, they combine to form a bigger crest; similarly for troughs. Where a crest of one wave meets the trough of another, they cancel each other. As a result, when the recombined wave hits a surface, the viewer sees a fringe pattern: alternating bands of light and shadow. The light areas denote where one crest met another and the shadow, where one crest met a trough.

Each LIGO detector consists of two 4-km-long corridors connected in an ‘L’ shape. A laser beam fired from the vertex is split in two, with one half sent down each corridor. The beams bounce off mirrors at the far ends and return to the vertex, where they interfere with each other. The instrument is tuned such that, in the absence of a gravitational wave, the beams recombine with destructive interference: full shadow.

When a gravitational wave passes through LIGO, distorting space as it does, one arm of LIGO becomes shorter than the other for a fleeting moment. This causes the laser beam in that corridor to return sooner than the other, throwing the recombination off perfect cancellation and producing a fringed interference pattern. This alerts scientists to the presence of a gravitational wave. The instrument is so sensitive that it can detect changes in arm length about one-ten-thousandth the diameter of a proton.
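The recombination arithmetic can be sketched with a toy model – unit-amplitude beams and a single phase difference. This illustrates the interference concept above, nothing more; it is not LIGO’s actual analysis, and the numbers are illustrative:

```python
import math

def recombined_intensity(phase_diff):
    """Intensity of two recombined unit-amplitude beams:
    I = 4 * cos^2(delta / 2), where delta is their phase difference."""
    return 4 * math.cos(phase_diff / 2) ** 2

# Tuned for destructive interference: a phase difference of pi -> full shadow.
dark = recombined_intensity(math.pi)          # effectively zero

# A passing wave shortens one arm, nudging the phase slightly off pi,
# and a sliver of light appears at the detector.
glimmer = recombined_intensity(math.pi - 1e-3)

print(dark, glimmer)
```

The point of the tuning is that the detector sits on the darkest part of the fringe pattern, so even a minuscule phase shift shows up as light where there was none.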

At the same time, because it’s so sensitive, LIGO also picks up all kinds of noise in its vicinity, including trucks passing by a few kilometres away and little birds perching on the detector housing. So analysts regularly have dry-runs with the instrument to understand what noise in the signal looks like. When they do detect a gravitational wave, they subtract the noise from the data to see what the signal looks like.

But this is a horribly oversimplified version. Data analysts – and their supercomputers – take months to clean up, study and characterise the data. The LIGO collaboration also subjects the final results to multiple rechecks to prevent premature or inadvertent false announcements. The analysts’ work draws on an entire field of study called numerical relativity.

Since the September 2015 detection, LIGO has made five more gravitational wave detections. Some of these have been together with other observatories in the world. Such successful combined efforts lend further credence to LIGO’s claims. The prime example of this was the August 2017 discovery of gravitational waves from a merger of neutron stars in a galaxy 130-140 million lightyears away. Over 70 other observatories and telescopes around the world joined in the effort to study and characterise the merger.

This is why, when Jackson claims they’ve made a mistake, LIGO scientists’ first response has been to ask his team to recheck its calculations. That response hasn’t changed the second time Jackson and co. have hit back, but a better understanding of the underlying problem has emerged: is LIGO doing enough to help others make sense of its data?

For one, the tone of some of these responses hasn’t gone down well. Peter Coles, a theoretical cosmologist at the Cardiff and Maynooth Universities, wrote on his blog:

I think certain members – though by no means all – of the LIGO team have been uncivil in their reaction to the Danish team, implying that they consider it somehow unreasonable that the LIGO results should be subject to independent scrutiny. I am not convinced that the unexplained features in the data released by LIGO really do cast doubt on the detection, but unexplained features there undoubtedly are. Surely it is the job of science to explain the unexplained?

From LIGO’s perspective, the fundamental issue is that their data – a part of which is in the public domain – isn’t easily understood or processed. And Jackson believes LIGO could be hiding some mistakes behind this curtain of complexity.

His and his group’s opinion, however, remains in the minority. According to the New Scientist report itself, many scientists who sided with Jackson don’t think LIGO has messed up but that it needs to do more to help independent experts understand its data better. Sabine Hossenfelder, a theoretical physicist at the Frankfurt Institute for Advanced Studies, wrote on her blog on November 1:

… the issue for me was that the collaboration didn’t make an effort helping others to reproduce their analysis. They also did not put out an official response, indeed have not done so until today. I thought then – and still think – this is entirely inappropriate of a scientific collaboration. It has not improved my opinion that whenever I raised the issue LIGO folks would tell me they have better things to do.

The LIGO collaboration finally issued a statement on November 1. Excerpt:

The features presented in Creswell et al. arose from misunderstandings of public data products and the ways that the LIGO data need to be treated. The LIGO Scientific Collaboration and Virgo Collaboration (LVC) have full confidence in our published results. We are preparing a paper that will provide more details about LIGO detector noise properties and the data analysis techniques used by the LVC to detect gravitational-wave signals and infer their source properties.

A third LIGO instrument is set to come up by 2022, this one in India. The two American detectors and other gravitational-wave observatories are all located on almost the same plane in the northern hemisphere. This limits the network’s ability to pinpoint the location of sources of gravitational waves in the universe. A detector in India would solve this problem because it would be outside the plane.

Indian scientists have also been a significant part of LIGO’s effort to study gravitational waves. Thirty-seven of them were part of a larger group of physicists awarded the Special Breakthrough Prize for fundamental physics in 2016.

The Wire
November 4, 2018

Posted in Scicomm, Science

New anomaly at the LHC

Has new ghost particle manifested at the Large Hadron Collider?, The Guardian, October 31:

Scientists at the Cern nuclear physics lab near Geneva are investigating whether a bizarre and unexpected new particle popped into existence during experiments at the Large Hadron Collider. Researchers on the machine’s multipurpose Compact Muon Solenoid (CMS) detector have spotted curious bumps in their data that may be the calling card of an unknown particle that has more than twice the mass of a carbon atom.

The prospect of such a mysterious particle has baffled physicists as much as it has excited them. At the moment, none of their favoured theories of reality include the particle, though many theorists are now hard at work on models that do. “I’d say theorists are excited and experimentalists are very sceptical,” said Alexandre Nikitenko, a theorist on the CMS team who worked on the data. “As a physicist I must be very critical, but as the author of this analysis I must have some optimism too.”

Senior scientists at the lab have scheduled a talk this Thursday at which Nikitenko and his colleague Yotam Soreq will discuss the work. They will describe how they spotted the bumps in CMS data while searching for evidence of a lighter cousin of the Higgs boson, the elusive particle that was discovered at the LHC in 2012.

This announcement – of a possibly new particle weighing about 28 GeV – is reminiscent of the 750 GeV affair. In late 2015, physicists spotted an anomalous bump in data collected by the LHC that suggested the existence of a previously unknown particle weighing about 67-times as much as the carbon atom. The data wasn’t qualitatively good enough for physicists to claim that they had evidence of a new particle, so they decided to get more.

This was in December 2015. By August the next year, before the new data was out, theoretical physicists had written and published over 500 papers on the arXiv preprint server about what the new particle could be and how theoretical models would have to be changed to make room for it. But at the 38th International Conference on High-Energy Physics, LHC scientists unveiled the new data and said that the anomalous bump had vanished – what physicists had seen earlier was likely a random fluctuation in lower quality observations.

The new announcement of a 28 GeV particle seems set for a similar course of action. I’m not pronouncing that no new particle will be found – that’s for physicists to determine – but only writing in defence of those who would cover this event even though it seems relatively minor and like history’s repeating itself. Anomalies like these are worth writing about because of the Standard Model of particle physics, which has been historically so good at making predictions about particles’ properties that even small deviations from it are big news.

At the same time, it’s big news in a specific context with a specific caveat: that we might be chasing an ambulance here. For example, The Guardian only says that the anomalous signal will have to be verified by other experiments, leaving out the part where the signal LHC scientists already have is pretty weak: excesses of 4.2σ and 2.9σ (both local, as opposed to global, significances) in two tests of the 8 TeV data, and deficits of 2.0σ and 1.4σ in the 13 TeV data. It also doesn’t mention the 750 GeV affair even though the two narratives already appear to be congruent.
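For readers unused to sigmas: a local significance maps to a tail probability via the standard Gaussian conversion. This is a sketch assuming a one-sided Gaussian test, not the collaboration’s own statistics, and it ignores the look-elsewhere effect – which is precisely what separates a local significance from a (lower) global one:

```python
import math

def one_sided_p(sigma):
    """Tail probability of a one-sided Gaussian excess of `sigma`
    standard deviations: p = 0.5 * erfc(sigma / sqrt(2))."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# The significances quoted above, plus the 5-sigma discovery threshold:
for s in (1.4, 2.0, 2.9, 4.2, 5.0):
    print(f"{s} sigma -> local p ~ {one_sided_p(s):.2e}")
```

A 4.2σ local excess is suggestive but well short of the 5σ convention for claiming a discovery, and once the look-elsewhere effect is folded in, the global significance drops further.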

If journalists leave such details out, I’ve a feeling they’re going to give their readers the impression that this announcement is more significant than it actually is. (Call me a nitpicker, but I’m sure being accurate will allow engaged readers to set reasonable expectations about the story’s next chapter, as well as keep them from becoming desensitised to journalistic hype.)

Those who’ve been following physics news will be aware of the ‘nightmare scenario’ assailing particle physics, and in this context there’s value in writing about what’s keeping particle physicists occupied – especially in their largest, most promising lab.

But thanks to the 750 GeV affair, most recently, we also know that what any scientist or journalist says or does right now is moot until LHC scientists present sounder data + confirmation of a positive/negative result. And journalists writing up these episodes without a caveat that properly contextualises where a new anomaly rests on the arc of a particle’s discovery will be disingenuous if they’re going to justify their coverage based on the argument that the outcome “could be” positive.

The outcome could be negative and we need to ensure the reader remembers that. Including the caveat is also a way to do that without completely obviating the space for a story itself.

Featured image: The CMS detector, the heaviest of the detectors that straddle the LHC, and the one that spotted the anomalous signal corresponding to a particle at the 28 GeV mark. Credit: CERN.

Posted in Science

57 years after the mad bomb

Fifty-seven years ago, on October 30, the Soviet Union detonated the most powerful nuclear weapon in history. The device was designated RDS-220 by the Soviets and nicknamed ‘Tsar Bomba’ – ‘King of Bombs’ – in the West. It had a blast yield of 50 megatonnes (MT) of TNT, making it about 1,500 times as powerful as the Hiroshima and Nagasaki bombs put together.

The detonation was conducted over the island of Novaya Zemlya, four km above the ground. The Soviets had built the bomb to one-up the US, following through on Nikita Khrushchev’s promise, made on the floor of the UN General Assembly a year earlier, to teach the US a lesson (the B41, the highest-yield nuke the US fielded in the early 1960s, had half the yield).

But despite its intimidating features and political context, the RDS-220 produced one of the cleanest nuclear explosions ever and was never tested again. The Soviets had originally intended for the RDS-220 to have a yield equivalent to 100 MT of TNT, but decided against it for two reasons.

First: the three-stage bomb weighed 27 tonnes and was only a little smaller than an American school bus. As a result, it couldn’t be delivered by an intercontinental ballistic missile. Maj. Andrei Durnovtsev, a decorated soldier in the Soviet Air Force, modified a Tu-95V bomber to carry the bomb and flew it on the day of the test. The bomb was fitted with a parachute (whose manufacture disrupted the domestic nylon hosiery industry) so that, between its release and its detonation, the Tu-95V would have enough time to fly 45 km away from the test site. But even then, at the full 100 MT yield, Durnovtsev and his crew would almost certainly have been killed.

To improve the crew’s odds of survival to 50%, engineers reduced the yield from 100 MT to 50 MT, which they did by replacing a uranium-238 tamper around the bomb with a lead one. In a thermonuclear weapon – which the RDS-220 was – a nuclear fusion reaction is set off inside a container that is explosively compressed by a nuclear fission reaction going off on the outside.

However, the Soviets took it a step further with the Tsar Bomba: the first-stage fission reaction set off a second-stage fusion reaction, which then set off a bigger fusion reaction in the third stage. The original design also included uranium-238 tampers on the second and third stages, such that fast neutrons emitted by the fusion reactions would have kicked off a series of fission reactions in those stages as well. Utter madness. The engineers swapped out the uranium-238 tampers and put in lead-208 ones instead. Lead-208 can’t sustain a fission chain reaction and as such has remarkably low efficiency as a nuclear fuel.

The second reason the RDS-220’s yield was reduced pre-test was the radioactive fallout. Nuclear fusion is much cleaner than nuclear fission as a process (although there are important caveats for fusion-based power generation). If the RDS-220 had gone ahead with the uranium-238 tamper on the second and third stages, its total radioactive fallout would’ve accounted for fully one quarter of all the radioactive fallout from all nuclear tests in history, gently raining down over Soviet Union territory. The modification resulted in 97% of the bomb’s yield coming from the fusion reactions alone!

One of the more important people who worked on the bomb was Andrei Sakharov, a noted nuclear physicist and later dissident from the Soviet Union. Sakharov is given credit for developing a practicable design for the thermonuclear weapon, an explosive that could leverage the fusion of hydrogen atoms. In 1955, the Soviets, thanks to Sakharov’s work, won the race to detonate a hydrogen bomb that’d been dropped from an airplane, whereas until then the Americans had detonated hydrogen charges placed on the ground.

It was after the RDS-220 test in 1961 that Sakharov began speaking out against nuclear weapons and the nuclear arms race. He would go on to win the Nobel Peace Prize in 1975. One of his important contributions to the peaceful use of nuclear power was the tokamak, a reactor design he developed with Igor Tamm to undertake controlled nuclear fusion and so generate power. The ITER experiment uses this design.

Source for many details (+ being an interesting firsthand account you should read anyway): here.

Featured image: The RDS-220 hydrogen bomb goes off. Source: YouTube.

Posted in Science

Does the neutrino sector violate CP symmetry?

The universe is supposed to contain equal quantities of matter and antimatter. But this isn’t the case: there is way more matter than antimatter around us today. Where did all the antimatter go? Physicists trying to find the answer to this question believe that the universe was born with equal amounts of both. However, the laws of nature that subsequently came into effect were – and are – biased against antimatter for some reason.

In the language of physics, this bias is called a CP symmetry violation. CP stands for charge-parity. If a positively charged particle is substituted with its negatively charged antiparticle and if its spin is changed to its mirror image, then – all other properties being equal – any experiments performed with either of these setups should yield the same results. This is what’s called CP symmetry. CPT – charge, parity and time – symmetry is one of the foundational principles of quantum field theory.

Physicists try to explain the antimatter shortage by studying CP symmetry violation because one of the first signs that the universe has a preference for one kind of matter over the other emerged in experiments testing CP symmetry in the mid-20th century. The result of this extensive experimentation is the Standard Model of particle physics, which makes predictions about what kinds of processes will or won’t exhibit CP symmetry violation. Physicists have checked these predictions in experiments and verified them.

However, there are a few processes they’ve been confused by. In one of them, the SM predicts that CP symmetry violation will be observed among particles called neutral B mesons – but it’s off about the extent of violation.

This is odd and vexing because as a theory, the SM is one of the best out there, able to predict hundreds of properties and interactions between the elementary particles accurately. Not getting just one detail right is akin to erecting the perfect building only to find the uniformity of its design undone by a misalignment of a few centimetres. It may be fine for practical purposes but it’s not okay when what you’re doing is building a theory, where the idea is to either get everything right or to find out where you’re going wrong.

But even after years of study, physicists aren’t sure where the SM is proving insufficient. The world’s largest particle physics experiment hasn’t been able to help either.

Mesons and kaons

A pair of neutral B mesons can decay into two positively charged muons or two negatively charged muons. According to the SM, the former is supposed to be produced in lower amounts than the latter. In 2010 and 2011, the Dø experiment at Fermilab, Illinois, found that there were indeed fewer positive dimuons being produced – but with sufficient evidence that their number deviated from the prediction by about 1%. Physicists believe that this inexplicable deviation could be the result of hitherto undiscovered physical phenomena interfering with the neutral B meson decay process.

This discovery isn’t the only one of its kind. CP violation was first discovered in processes involving particles called kaons in 1964, and has since been found affecting different types of B mesons as well. And just the way some processes violate CP symmetry more than the theory says they should, physicists also know of other processes that don’t violate CP symmetry even though the theory allows them to do so. These are associated with the strong nuclear force and this difficulty is called the strong CP problem – one of the major unsolved problems of physics.

It is important to understand which sectors, i.e. groups of particles and their attendant processes, violate CP symmetry and which don’t because physicists need to put all the facts they can get together to find patterns in them, seeds of theories that can explain how the creation of antimatter at par with matter was aborted at the cosmic dawn. This in turn means that we keep investigating all the known sectors in greater detail until we have something that will allow us to look past the SM unto a more comprehensive theory of physics.

It is in this context that in the last few years, another sector has joined this parade: the neutrinos. Neutrinos are extremely hard to trap because they interact with other particles only via the weak nuclear force, which is much weaker than the name suggests. Though trillions of neutrinos pass through your body every second, maybe only a handful will interact with its atoms over your lifetime. To surmount this limitation, physicists and engineers have built very large detectors to study them as they zoom in from all directions: outer space, from inside Earth, from the Sun, etc.

Neutrinos exhibit another property called oscillations. There are three types or flavours of neutrinos – called electron, muon and tau (note: an electron neutrino is different from an electron). Neutrinos of one flavour can transform into neutrinos of another flavour at a rate predicted by the SM. The T2K experiment in Japan has been putting this to the test. On October 24, it reported via a paper in the journal Physical Review Letters that it had found signs of CP symmetry violation in neutrinos as well.
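The rates at which flavours transform into each other follow from quantum interference between the neutrino mass states. As an illustration only – T2K’s actual analysis uses the full three-flavour treatment, and the numbers below are loosely representative rather than taken from the experiment – here is the textbook two-flavour oscillation probability:

```python
import math

def oscillation_probability(theta, dm2, L, E):
    """Two-flavour neutrino oscillation probability.

    theta: mixing angle (radians)
    dm2:   mass-squared splitting between the two states (eV^2)
    L:     baseline, source to detector (km)
    E:     neutrino energy (GeV)

    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
    """
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2 * L / E) ** 2

# Illustrative numbers loosely based on T2K: a 295-km baseline,
# a ~0.6-GeV beam and the 'atmospheric' mass splitting.
p = oscillation_probability(math.pi / 4, 2.5e-3, 295, 0.6)
print(round(p, 2))  # ~1.0 – T2K sits near the oscillation maximum
```

The point of the L/E ratio in the formula is visible here: the experiment’s baseline and beam energy were chosen so the oscillation phase lands near its maximum, where flavour transformation is most likely.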

A new sector

If neutrinos obeyed CP symmetry, then muon neutrinos should be transforming into electron neutrinos – and muon antineutrinos should be transforming into electron antineutrinos – at the rates predicted by the SM. But the transformation rate seems to be off. Physicists from T2K had reported last year that they had weak evidence of this happening. According to the October 24 paper, the evidence this year is almost twice as strong – but still not strong enough to shake up the research community.

While the trend suggests that T2K will indeed find that the neutrino sector violates CP symmetry as it takes more data, enough experiments in the past have forced physicists to revisit their models after more data punctured this or that anomaly present in a smaller dataset. We should just wait and watch.

But what if neutrinos do violate CP symmetry? There are major implications, and one of them is historical.

When the C, P and T symmetries were formulated, physicists thought they were each absolute: that physical processes couldn’t violate any of them. But in 1956, it was found that the weak nuclear force does not obey C or P symmetries. Physicists were shaken up but not for long; they quickly rallied and advanced an idea in 1957 that C or P symmetries could be broken but both together constituted a new and absolute symmetry: CP symmetry. Imagine their heartbreak when James Cronin and Val Fitch found evidence for CP symmetry violation only seven years later.

As mentioned earlier, neutrinos interact with other particles only via the weak nuclear force – which means they don’t abide by C or P symmetries. If within the next decade we find sufficient evidence to claim that the neutrino sector doesn’t abide by CP symmetry either, the world of physics will be shaken up once more, although it’s hard to tell if any more hearts will be broken.

In fact, physicists might just express a newfound interest in mingling with neutrinos because of the essential difference between these particles on the one hand and kaons and B mesons on the other. Neutrinos are fundamental and indivisible whereas both kaons and B mesons are made up of smaller particles called quarks. This is why physicists have been able to explain CP symmetry violations in kaons and B mesons using what is called the quark-mixing model. If processes involving neutrinos are found to violate CP symmetry as well, then physicists will have twice as many sectors as before in which to explore the matter-antimatter problem.

The Wire
October 29, 2018

Posted in Op-eds, Science

What the Nobel Prizes are not

The winners of this year’s Nobel Prizes are being announced this week. The prizes are an opportunity to discover new areas of research, and developments there that scientists consider particularly notable. In this endeavour, it is equally necessary to remember what the Nobel Prizes are not.

For starters, the Nobel Prizes are not lenses through which to view all scientific pursuit. It is important for everyone – scientists and non-scientists alike – to not take the Nobel Prizes too seriously.

The prizes have been awarded to white men from Europe and the US most of the time, across the medicine, physics and chemistry categories. This presents a lopsided view of how scientific research has been undertaken in the world. Many governments take pride in the fact that one of their citizens has been awarded this prize, and often advertise the strength of their research community by boasting of the number of Nobel laureates in their ranks. This way, the prizes have become a marker of eminence.

However, this should not blind us to the fact that there are equally brilliant scientists from other parts of the world who have done, and are doing, great work. Even research institutions do this; for example, this is what the Institute for Advanced Study in Princeton, New Jersey, says on its website:

The Institute’s mission and culture have produced an exceptional record of achievement. Among its Faculty and Members are 33 Nobel Laureates, 42 of the 60 Fields Medalists, and 17 of the 19 Abel Prize Laureates, as well as many MacArthur Fellows and Wolf Prize winners.

What the prizes are

Winning a Nobel Prize may be a good thing. But not winning a Nobel Prize is not a bad thing. That is the perspective often lost in conversations about the quality of scientific research. When the Government of India expresses a desire to have an Indian scientist win a Nobel Prize in the next decade, it is a passive admission that it does not consider any other marker of quality to be worth the endorsement. Otherwise, there are numerous ways to make the statement that the quality of Indian research is at par with the rest of the world’s (if not better in some areas).

In this sense, what the Nobel Prizes afford is an easy way out. Consider the following analogy: when scientists are being considered for promotions, evaluators frequently ask whether a scientist in question has published in “prestigious” journals like Nature, Science, Cell, etc. If the scientist has, it is immediately assumed that the scientist is undertaking good research. Notwithstanding the fact that supposedly “prestigious” journals frequently publish bad science, this process of evaluation is unfair to scientists who publish in other peer-reviewed journals and who are doing equally good, if not better, work. Just the way we need to pay less attention to which journals scientists are publishing in and instead start evaluating their research directly, we also need to pay less attention to who is winning Nobel Prizes and instead assess scientists’ work, as well as the communities to which the scientists belong, directly.

Obviously this method of evaluation is more arduous and cumbersome – but it is also the fairer way to do it. Now the question arises: is it more important to be fair or to be quick? On-time assessments and rewards are important, particularly in a country where resource optimisation carries greater benefits as well as where the population of young scientists is higher than in most countries; justice delayed is justice denied, after all. At the same time, instead of settling for one or the other way, why not ask for both methods at once: to be fair and to be quick at the same time? Again, this is a more difficult way of evaluating research than the methods we currently employ, but in the longer run, it will serve all scientists as well as science better in all parts of the world.

Skewed representation of ‘achievers’

Speaking of global representation: this is another area where the Nobel Foundation has faltered. It has ensured that the Nobel Prizes have accrued immense prestige but it has not simultaneously ensured that the scientists that it deems fit to adorn that prestige have been selected equally from all parts of the world. Apart from favouring white scientists from the US and Europe, the Nobel Prizes have also ignored the contributions of women scientists. Thus far, only two women have won the physics prize (out of 206), four women the chemistry prize (out of 177) and 12 women the medicine prize (out of 214).

One defence that is often advanced to explain this bias is that the Nobel Prizes typically reward scientific and technological achievements that have passed the test of time, achievements that have been repeatedly validated and whose usefulness for the common people has been demonstrated. As a result, the prizes can be understood to be awarded to research done in the past – and in this past, women have not made up a significant portion of the scientific workforce. Perhaps more women will be awarded going ahead.

This argument holds water but only in a very leaky bucket. Many women have been passed over for the Nobel Prizes when they should not have been, and the Nobel Committee, which finalises each year’s laureates, is in no position to explain why. (Famous omissions include Rosalind Franklin, Vera Rubin and Jocelyn Bell Burnell.) This defence becomes even more meaningless when you ask why so few people from other parts of the world have been awarded the Nobel Prize. This is because the Nobel Prizes are a fundamentally western – even Eurocentric – institution in two important ways.

First, they predominantly acknowledge and recognise scientific and technological developments that the prize-pickers are familiar with, and the prize-pickers are a group made up of all previous laureates and a committee of Swedish scientists. Additionally, this group is only going to acknowledge research that it is already familiar with, conducted by people its own members have heard of. It is not a democratic organisation. This particular phenomenon has already been documented in the editorial boards of scientific journals, with the effect that scientific research undertaken with local needs in mind often finds dismal representation in scientific journals.

Second, according to the foundation that awards them, the Nobel Prizes are designated for individuals or groups whose work has conferred the “greatest benefit on mankind”. For the sciences, how do you determine such work? In fact, one step further, how do we evaluate the legitimacy and reliability of scientific work at all? Answer: we check whether the work has followed certain rules, passed certain checks, received the approval of the author’s peers, etc. All of these are encompassed in the modern scientific publishing process: a scientist describes the work they have done in a paper, submits the paper to a journal, the journal gets the paper reviewed by the scientist’s peers, and once it passes, the paper is published. It is only when a paper is published that most people consider the research described in it to be worth their attention. And the Nobel Prizes – rather the people who award them – implicitly trust the modern scientific publishing process even though the foundation itself is not obligated to, essentially as a matter of convenience.

However, what about the knowledge that is not published in such papers? Further, what about the knowledge that is not published in the few journals that get a disproportionate amount of attention (a.k.a. the “prestige” titles like Nature, Science and Cell)? Obviously there are a lot of quacks and cranks whose ideas are filtered out in this process, but what about scientists conducting research in resource-poor economies who simply can’t afford the fancy journals?

What about scientists and other academics who are improving previously published research to be more sensitive to the local conditions in which it is applied? What about those specialists who are unearthing new knowledge that could be robust but which is not being considered as such simply because they are not scientists – such as farmers? It is very difficult for these people to be exposed to scholars in other parts of the world and for the knowledge they have helped create/produce to be discovered by other people. The opportunity for such interactions is diminished further when the research conducted is not in English.

In effect, the Nobel Prizes highlight people and research from one small subset of the world. There are a lot of people, a lot of regions, a lot of languages and a lot of expertise excluded from this subset. As the prizes are announced one by one, we need to bear these limitations in mind and choose our words carefully, so as not to exalt the prizewinners too much or downplay the contributions of numerous others in the same field as well as in other fields. More importantly, we must not assume that the Nobel Prizes are any kind of crowning achievement.

The Wire
October 1, 2018

Posted in Science

Proposed solution for Riemann hypothesis?

The hot news this week from the mathematical physics world is that the noted mathematician Michael Atiyah claimed to have solved the Riemann hypothesis, one of the most difficult unsolved problems known and whose resolution carries a $1 million prize. The problem is that Atiyah’s solution, while remarkable for its brevity, may not hold water.

The Riemann hypothesis is concerned with the Riemann zeta function, which – in very broad terms – provides a way to predict the position of prime numbers on the number line. Computers have been able to find prime numbers with scores of digits and mathematicians have been able to find in hindsight that, yes, the zeta function predicts they exist. However, what mathematicians don’t know (and this is the Riemann hypothesis) is whether the function can predict prime numbers ad infinitum or if it will break at some particularly large value. And solving the Riemann hypothesis problem means proving that the zeta function can indeed predict the position of all prime numbers on the number line.


A more technical explanation, reproduced from my article in The Wire last year, follows; article continues below this section:

In 1859, Bernhard Riemann expanded on Euler’s work to develop a mathematical function that relates the behaviour of positive integers, prime numbers and imaginary numbers. The Riemann hypothesis is founded on a function called the Riemann zeta function. Before him, Euler had formulated a mathematical series called Z (s), such that:

Z (s) = (1/1^s) + (1/2^s) + (1/3^s) + (1/4^s) + …

He found that Z (2) – i.e., substituting 2 for s in the Z function – equalled π^2/6, and Z (4) equalled π^4/90. At the same time, for many other values of s, the series Z (s) would not converge at a finite value: the value of each term would keep building to larger and larger numbers, unto infinity. This was true for all values of s less than or equal to 1.

Euler was also able to find a prime number connection. Though the denominators together constituted the series of positive integers, with a small tweak, Z (s) could be expressed using prime numbers alone as well:

Z (s) = [1/(1 – 1/2^s)] * [1/(1 – 1/3^s)] * [1/(1 – 1/5^s)] * [1/(1 – 1/7^s)] * …
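The equality of the series and the product is easy to check numerically. A minimal sketch (both the sum and the product are truncated here, so the answers are only approximate; the function names are mine, for illustration):

```python
import math

def Z_sum(s, terms=200_000):
    # Euler's series over the positive integers
    return sum(1 / n ** s for n in range(1, terms + 1))

def Z_product(s, limit=200_000):
    # The same quantity as a product over the primes, found by sieving
    sieve = [True] * (limit + 1)
    product = 1.0
    for p in range(2, limit + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
            product *= 1 / (1 - 1 / p ** s)
    return product

print(Z_sum(2), Z_product(2), math.pi ** 2 / 6)   # all ~1.6449
print(Z_sum(4), Z_product(4), math.pi ** 4 / 90)  # all ~1.0823
```

Both routes land on the same values – including Euler’s π^2/6 and π^4/90 – which is the “prime number connection” in action.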

This was Euler’s last contribution to the topic. In the late 1850s, Riemann picked up where Euler left off. And he was bothered by the behaviour of the series of additions in Z (s) when the value of s dropped below 1.

In an attempt to make it less awkward (nobody likes infinities), he tried to modify it such that Z (2) and Z (4), etc., would still converge to interesting values like π^2/6 and π^4/90, etc. – while Z (s ≤ 1) wouldn’t run away towards infinity. He succeeded in finding such a function but it was far more complex than Z (s). This function is called the Riemann zeta (ζ) function: ζ (s). And it has some weird properties of its own.

One such is involved in the Riemann hypothesis. Riemann found that ζ (s) would equal zero whenever s was a negative even number (-2, -4, -6, etc.). These values are also called trivial zeroes. He wanted to know which other values of s would precipitate a ζ (s) equalling zero – i.e. the non-trivial zeroes. And he did find some values. They all had something in common because they looked like this: (1/2) + 14.134725142i, (1/2) + 21.022039639i, (1/2) + 25.010857580i, etc. (i is the imaginary number represented as the square-root of -1.)

Obviously, Riemann was prompted to ask another question – the question that has since been found to be extremely difficult to answer, a question worth $1 million. He asked: Do all values of s that are not negative even integers and for which ζ (s) = 0 take the form ‘(1/2) + a real number multiplied by i’?

In more mathematical terms: “The Riemann hypothesis states that the nontrivial zeros of ζ (s) lie on the line Re (s) = 1/2.”
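The nontrivial zeroes listed above can be checked numerically. One standard trick (an illustration only, not part of Riemann’s or Atiyah’s work): the alternating “eta” series converges for Re (s) > 0, and ζ (s) = η (s) / (1 – 2^(1–s)), which extends the computation into the critical strip:

```python
import math

def zeta(s, terms=100_000):
    """Approximate zeta via the alternating (Dirichlet eta) series,
    eta(s) = 1 - 1/2^s + 1/3^s - ..., valid for Re(s) > 0, s != 1;
    then zeta(s) = eta(s) / (1 - 2^(1 - s)). Averaging two successive
    partial sums damps the oscillation of the alternating series."""
    partial = 0
    for n in range(1, terms + 1):
        partial += (-1) ** (n + 1) / n ** s
    next_partial = partial + (-1) ** (terms + 2) / (terms + 1) ** s
    eta = (partial + next_partial) / 2
    return eta / (1 - 2 ** (1 - s))

# Sanity check against Euler: zeta(2) should be pi^2/6
print(zeta(2))  # ~1.6449

# The first non-trivial zero: zeta((1/2) + 14.134725142i) should vanish
print(abs(zeta(complex(0.5, 14.134725142))))  # very close to 0
```

Every nontrivial zero found this way so far sits on the Re (s) = 1/2 line – but a proof that they all must is what the $1 million is for.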


When I first heard Atiyah’s claim, I was at a loss for how to react. Most claimed solutions of the Riemann hypothesis are dismissed quickly because they contain leaps of logic not backed by sufficient mathematical rigour. On the other hand, Atiyah isn’t just anybody. He won the Fields Medal in 1966 and the Abel Prize in 2004, and has been associated with some famous solutions for problems in algebraic topology.

Perhaps the most famous recent example of a hard-to-dismiss claim was Vinay Deolalikar’s proposed proof of another major unsolved problem in mathematics, whether P equals NP, in August 2010. The P/NP problem asks whether every problem whose solution is easy to check is also easy to solve. Though nobody has been able to settle this conundrum yet, it is widely assumed by mathematicians and computer scientists that P ≠ NP, i.e. that a problem whose solution is easy to check need not be easy to solve. Deolalikar, then working at Hewlett-Packard Research Labs, claimed to have a proof that P ≠ NP, and it couldn’t be readily dismissed because, to borrow Scott Aaronson’s words,

What’s obvious from even a superficial reading is that Deolalikar’s manuscript is well-written, and that it discusses the history, background, and difficulties of the P vs. NP question in a competent way. More importantly (and in contrast to 98% of claimed P≠NP proofs), even if this attempt fails, it seems to introduce some thought-provoking new ideas, particularly a connection between statistical physics and the first-order logic characterization of NP.

Nonetheless, flaws were found in Deolalikar’s proof, as delineated prominently in Aaronson’s and R.J. Lipton’s blogs, and the claim was settled: P/NP remained (and remains) unsolved. Lesson: watch the blogs as a first response measure. The peers of a paper’s author(s) usually know what’s happening before the news does and, if a controversial claim has been advanced, they’re likely already further into a debate than the mainstream media realises.

So as a quick way out in Atiyah’s case, I hopped over to Shtetl Optimized, Aaronson’s blog. And there, at the end of a long post about the weirdness of quantum theory, was this line: “As of Sept. 25, 2018, it is the official editorial stance of Shtetl-Optimized that the Riemann Hypothesis and the abc conjecture both remain open problems.” Aha!

Some of you will remember that three physicists made a major announcement last year about finding a potential way to solve the Riemann hypothesis because they had unearthed an eerie similarity between the Riemann zeta function, central to the hypothesis, and an equation found in quantum mechanics. While they’re yet to post an update, the physicists’ thesis was compelling and wasn’t dismissed by the wider mathematical community, raising hope that it could lead to a solution.

Atiyah’s solution also concerns itself with a famously physical concept: the fine-structure constant, denoted as α (alpha). The value of this constant determines the strength with which charged particles like electrons interact with the electromagnetic field. It has a value of about 1/137. If it were higher, the electromagnetic force would be stronger and all atoms would be smaller, apart from numerous other cascading effects. Atiyah’s resolution of the Riemann hypothesis is pegged to a new derivation for the value of α, and this is where he runs into trouble.

Sean Carroll, a theoretical physicist at Caltech, called the derivation “misguided”. Madhusudhan Raman, a postdoc at the Tata Institute of Fundamental Research, said that while he wasn’t qualified to comment on the correctness of the Riemann hypothesis proof, he – like Carroll – had some problems with the physics of it.

His full explanation is as follows (paraphrased): It is tempting to think of α as a fixed number, like π (pi), but it is not. While the value of π does not change, the value of α does because it is related to the energy at which it is being measured. At higher energies, such as inside the Large Hadron Collider, the value of α will be higher. So α is not a number as much as a function that says its value is X at energy Y. However, Atiyah appears to have worked with the assumption that α is a single, fixed number like π. This isn’t true and therefore his derivation is suspect.
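That energy dependence can be made concrete. A rough sketch under simplifying assumptions – one-loop QED running with only the electron loop; the real calculation sums over every charged particle light enough to matter, which pushes α (M_Z) closer to 1/128:

```python
import math

ALPHA_0 = 1 / 137.035999   # fine-structure constant at low energy
M_E = 0.511e-3             # electron mass, in GeV

def alpha(Q):
    """One-loop QED running of alpha, electron loop only, for Q >> m_e:
    alpha(Q) = alpha_0 / (1 - (alpha_0 / (3*pi)) * ln(Q^2 / m_e^2))"""
    return ALPHA_0 / (1 - ALPHA_0 / (3 * math.pi) * math.log(Q ** 2 / M_E ** 2))

# At the Z-boson mass (~91.2 GeV), alpha is already measurably larger
print(1 / alpha(91.19))  # ~134.5, versus ~137.0 at low energy
```

Even this crude version makes Raman’s point: α is a function of the measurement energy, not a fixed number like π, so a derivation that treats it as a single constant is on shaky ground.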

Sabine Hossenfelder, a research fellow at the Frankfurt Institute for Advanced Studies, had the same issues with Atiyah’s effort. Carroll went a step further and said that if he had to be very charitable, then the derivation could pass muster, but not without also discussing various issues in physics associated with α. However, he wrote, “Not a whit of this appears in Atiyah’s paper.”

At the same time – and unlike in numerous previous instances – these physicists and others besides continue to have great respect for Atiyah and his work, and why not? Though he is 89, as one comment observed on Carroll’s blog, “It’s brave to fight to the last, and, who knows, with his distinguished record and doubtless vast erudition, maybe there’s some truth or useful insights in these latest papers, even if [it’s] not quite what he claims.”

And so the Riemann hypothesis endures, unresolved.

The Wire
September 28, 2018

Posted in Op-eds, Science

An epistocracy

The All India Council for Technical Education (AICTE) has proposed a new textbook that will discuss the ‘Indian knowledge system’ via a number of pseudoscientific claims about the supposed inventions and discoveries of ancient India, The Print reported on September 26. The Ministry of Human Resource Development (MHRD) signed off on the move, and the textbook – drawn up by the Bharatiya Vidya Bhavan educational trust – is set to be introduced in 80% of the institutions the AICTE oversees.

According to the Bharatiya Vidya Bhavan website, “the courses of study” to be introduced via the textbook “were started by the Bhavan’s Centre for Study and Research in Indology under the Delhi Kendra after entering into an agreement with the AICTE”. They include “basic structure of Indian knowledge system; modern science and Indian knowledge system; yoga and holistic health care”, followed by “essence of Indian knowledge tradition covering philosophical tradition; Indian linguistic tradition; Indian artistic tradition and case studies”.

In all, the textbook will be available to undergraduate students of engineering in institutions other than the IITs and the NITs but still covering – according to the Bhavan – “over 3,000 engineering colleges in the country”.

Although it is hard to fathom what is going on here, it is clear that the government is not allowing itself to be guided by reason. Otherwise, who would introduce a textbook that would render our graduates even more unemployable, or under-employed, than they already are? There is also a telling statement from an unnamed scholar at the Bhavan who was involved in drafting the textbook; as told to The Print: “For ages now, we have been learning how the British invented things because they ruled us for hundreds of years and wanted us to learn what they felt like. It is now high time to change those things and we hope to do that with this course”.

The words “what they felt like” indicate that the people who have enabled the drafting and introduction of this book, including elected members of Parliament, harbour a sense of disenfranchisement and now feel entitled to their due: an India made great again under the light of its ancient knowledge, as if the last 2,000 years did not happen. It also does not matter whether the facts as embodied in that knowledge can be considered at par with the methods of modern science. What matters is that the Government of India has today created an opportunity for those who were disempowered by non-Hindu forces to flourish and that they must seize it. And they have.

In other words, this is a battle for power. It is important for those trying to fight against the introduction of this textbook or whatever else to see it as such because, for example, MHRD minister Prakash Javadekar is not waiting to be told that drinking cow urine to cure cancer is pseudoscientific. It is not a communication gap; Javadekar in all likelihood is not going to drink it himself (even though he is involved in creating a platform to tell the masses that they should).

Instead, the stakeholders of this textbook are attempting to fortify a power structure that prizes the exclusion of knowledge. Knowledge is power, after all – but an epistocracy cannot replace a democracy; “ignorance doesn’t oppress in the same way that knowledge does,” to adapt the words of David Runciman. For example, the textbook repeatedly references an older text called the ‘Yantra Sarvasva’ and endeavours to establish it as a singular source of certain “facts”. And who can read this text? The upper castes.

In turn, by awarding funds and space for research to those who claim to be disseminating ancient super-awesome knowledge and shielding them from public scrutiny, the Narendra Modi government is subjecting science to power. A person who peddles a “fact” that Indians flew airplanes fuelled by donkey urine 4,000 years ago no longer need aspire to scholarly credentials; he only has to want to belong to a socio-religious grouping that wields power.

A textbook that claims India invented batteries millennia before someone in Europe did is a weapon in this movement but does not embody the movement itself. Attempts to make this textbook go away will not make future textbooks go away, and attempts to counter the government’s messaging using the language of science alone will not suffice. For example, good education is key, and our teachers, researchers, educationists and civil society are a crucial part of the resistance. But even as they complain about rising levels of mediocrity and inefficiency perpetrated by ceaseless administrative meddling, the government does not seek to solve the problem so much as use it as an excuse to perpetrate further mediocrity and discrimination.

There was no greater proof of this than when a member of the National Steering Committee constituted by the Department of Science and Technology to “validate research on panchgavya” told The Wire in 2017, “With all-round incompetence [of the Indian scientific community], this is only to be expected. … If you had 10-12 interesting and well-thought-out good national-level R&D programmes on the table, [the ‘cowpathy’] efforts will be seen to be marginal and on the fringe. But with nothing on the table, this gains prominence from the government, which will be pushing such an agenda.”

But we do have well-thought-out national-level R&D programmes. If they are not being picked by the government, it must be forced to explain why, and to justify all of its decisions, instead of being allowed to bask in the privilege of our cynicism and use the excuse of our silence to sustain its incompetence. Bharatiya Vidya Bhavan’s textbook exists in the wider political economy of banning beef, lynching Dalits, inciting riots, silencing the media and subverting the law, and not in an isolated silo labeled ‘Science vs. Pseudoscience’. It is a call to action for academics and everyone else to protest the MHRD’s decision and – without stopping there – to vocally oppose all other moves by public institutions and officials to curtail our liberties.

It is also important for us to acknowledge this because we will have to redraft the terms of our victory accordingly. To extend the metaphor of a weapon: the battle can be won by taking away the opponent’s guns, but the war will be won only when the opponent finds its cause to be hopeless. We must fight the battles but we must also end the war.

The Wire
September 27, 2018