Posted in Science

Myth of harmful cell phone radiation is good business for IndiGo

When I fly, I always fly IndiGo. They’re not perfect but they and their services have become familiar, from their website (where I book my tickets) to when I exit the airport at my destination. The efficiency with which the IndiGo staff works – rather the economy of processes they follow – has seemed well thought-out. (For example, the air hostesses are sweet but the pilot also chips in over the intercom, keeping passengers updated about how high and fast they’re flying, etc.).

On my most recent flight, however, this facade of sanity was disturbed when I saw the following advertisement in their in-flight magazine:

Credit: Vasudevan Mukunth

You can see how that’d have gotten my goat. IndiGo strives to offer a highly optimised journey for the domestic traveller – including, it turns out, a healthy dose of pseudoscience. The funny thing is that the handheld extension plugged into the mobile phone has an electrical and electronic architecture similar to the one working inside the phone; the only difference is the absence of a signal receiver and emitter. It follows that whatever radiation the phone is alleged to be a hub of is all around us: if your phone is not on a call right now, some other phone in your vicinity surely is.

Cell phone radiation is not harmful because it is not ionising radiation. It’s that simple: only ionising radiation carries enough energy to damage the body’s cells. It’s okay to want to protect yourself from threats, but to believe your mobile phone is giving your head or your genitals cancer is stupid. On top of this, the product being advertised – aptly called the Phoni3 – promises to cut out 95% of the nonexistent harmful radiation. *facepalm* This is consumerism at the peak of its sway.

In fact, I’m curious why neither the makers of Phoni3 nor IndiGo saw fit to speak about background radiation. Did you know that the radiation your body is exposed to in the course of a six-hour flight is 444 times higher than the dose it receives if you live within 80 km of a nuclear power plant for a year? The reason we don’t panic is that even this elevated dose poses no danger to the human body. And the reason the in-flight magazine carries no advertisement for lead-lined jackets or portable Faraday cages to wear/carry during air travel is that it would be bad for business.

But anything short of hurting IndiGo can pass go. To wit, the following message appears at the bottom of the same page as the Phoni3 ad:

Credit: Vasudevan Mukunth

The government should ban advertisements for such products if only because, in this specific case, the Telecom Regulatory Authority of India (TRAI) has been working to dispel beliefs that cell phone radiation is harmful to the body. Unless the civil aviation authority bans such ads, TRAI’s efforts will be in vain. The IndiGo in-flight magazine is available for 180 passengers per flight of an Airbus A320, and the airline flies 131 such flights across the country a day (as of April 10, 2017). That’s more visibility than the TRAI can manage without significant effort.

Featured image credit: Javier Cañada/Unsplash.

Posted in Scicomm

The case for preprints

Daniel Mansur, the principal investigator of a lab at the Universidade Federal de Santa Catarina that studies how cells respond to viruses, had this to say about why preprints are useful in an interview to eLife:

Let’s say the paper that we put in a preprint is competing with someone and we actually have the same story, the same set of data. In a journal, the editors might ask both groups for exactly the same sets of extra experiments. But then, the other group that’s competing with me works at Stanford or somewhere like that. They’ll order everything they need to do the experiments, and the next day three postdocs will be working on the project. If there’s something that I don’t have in the lab, I have to wait six months before starting the extra experiments. At least with a preprint the work might not be complete, but people will know what we did.

Preprints level the playing field by eliminating one’s “ability to publish” in high-IF journals as a meaningful measure of the quality of one’s work.

While this makes it easier for scientists to compete with their better-funded peers, my indefatigable cynicism suggests there must be someone out there who’s unhappy about this. Two kinds of people come immediately to mind: journal publishers and some scientists at highfalutin universities like Stanford.

Titles like Nature, Cell, the New England Journal of Medicine and Science, and especially those published by the Elsevier group, have ridden the impact-factor (IF) wave to great profit for many decades. In fact, the IF continues to be the dominant mode of evaluating research quality because it’s easy and not time-consuming, so – given how the IF is defined – these journals continue to be important for being important. They also provide a valuable service – the double-blind peer review, which Mansur thinks is the only thing preprints currently lack. But other than that (and with post-publication peer review proving largely suitable), their time of obscene profits is surely running out.

The pro-preprint trend in scientific publishing is also bound to have jolted some scientists whose work received a leg-up by virtue of their membership in elite faculty groups. Like Mansur says, a scientist from Stanford or a similar institution can no longer claim primacy, or uniqueness, by default. As a result, preprints definitely improve the forecast for good scientists working at less-regarded institutions – but an equally important consideration would be whether preprints also diminish the lure of fancy universities. They do have one less thing to offer now, or at least in the future.

Posted in Scicomm

How cut-throat competition forces scientists to act against the collective

Brian Keating, an astrophysicist who led the infamous cosmic inflation announcement in 2014, thinks this is how science works: “… you put out a result, and other scientists work to test the result”. However, his own story shows that this is a cute ideal that’s often unreasonable to expect on the ground: scientists are often not putting out results and expecting others to test them so much as rushing to announce results to scoop others, even if they don’t yet have enough data to back their claims.

In fact, there are many supposed truisms about science that are put to the test every day, but whose results we choose to ignore – because it would be easier if science were “self-correcting” and “objective” than to confront the very real possibility that science’s autocorrect works across decades, not years, and that the truths it uncovers are objective only insofar as they are not pursued by scientists wondering what will get them published, famous, well-funded and rewarded.

This malaise is not specific to Keating and his team; it applies to all scientists, because these ambitions are motivated by a flawed administration of science, increasingly emulated around the world as states rush to increase their scientific “output”. And it is when you compare the methods of this administration to what people think science is that you realise you’re practically harbouring a cognitive dissonance.

Now, Keating was working with a single team of scientists (that he was leading) on a single instrument, the BICEP2 telescope near the South Pole. On the other hand, many major discoveries of this century are expected out of ‘Big Science’ collaborations: teams of people working together to study a natural phenomenon and obtain a common result, with other teams working to replicate that result. The Large Hadron Collider (LHC) is a famous example. The collective faculties of its 3,000+ scientists and engineers are necessary to operate the machine and its detectors and to analyse the data to present meaningful results.

However, the LHC has always been an ‘easy’ example that spares the person quoting it from having to grapple with the numerous other collaborations that don’t work the way the LHC’s does. Every time a paper based on LHC data is written by scientists at CERN, the European laboratory that hosts the LHC, the names of all the people involved in the experiment to which the authors belong are listed as coauthors. For example, in 2015, the CMS and ATLAS experiments at the LHC jointly published a paper, reporting a more precise measurement of the Higgs boson’s mass, with 5,154 authors.

The members of the Planck space telescope team, on the other hand, had refused to share data with the BICEP2 team for whatever reason. Keating had reasoned, “Either [Planck] didn’t have the data we wanted, or they did have it and they were going to scoop us.” The Planck team would release the relevant parts of the data later that year, but in the meantime, Keating et al. courted public adulation in an effort to cement their candidacy for a Nobel Prize.

Two sources of conflict

This ‘pursuit of the scoop’ is fascinating because it describes a vector of action within scientific research that most of us typically don’t account for, yet which seems to influence the outcomes and communication of research in significant ways. Researchers believe the Nobel Prize is the highest honour, whereas The System is not efficient enough to recognise their every effort in the right context, so they decide the only way to go is to take risks and cut ahead.

Last week, science journalist Jennifer Ouellette described another arena where a quarrel over credit had been playing out the same way it did with the BICEP2 experiment.

In August 2017, the twin Laser Interferometer Gravitational-wave Observatories (LIGO) and 70 telescopes around the globe tracked a neutron-star merger. It was the world’s first demonstration of multi-messenger astronomy, where multiple instruments study the same phenomenon in multiple channels (electromagnetic and gravitational) to understand its evolution through different laws of physics. The results of the studies were announced by the LIGO Scientific Collaboration in October 2017 with much fanfare (warranted because neutron-star mergers are spectacular in many ways).

Between August and October, however, the portion of the astronomy community caught up in the analysis and follow-up observations was going nuts. The merger had been ‘observed’ through three events that were detected thus: gamma rays by space telescopes, gravitational waves by LIGO and the kilonova explosion by ground telescopes. LIGO has had a habit of checking its observations repeatedly before making an announcement to the public – while, according to Ouellette, astronomers have gone the other way, having had no reason to wait before being able to claim a discovery with sufficient confidence. This was one source of conflict.

The other source was the familiar one of primacy. All members of the collaboration had been keenly aware that Kip Thorne, Barry Barish and Rainer Weiss had received the Nobel Prize for physics in 2017 following LIGO’s first announcement of the detection of gravitational waves in 2016. And the members wanted to make sure their contributions to the final announcement were properly acknowledged so they would remain in contention for future rewards, of which there were potentially many.

Ouellette writes,

According to [Josh Simon, an astronomer in Chile], things got messy after he and his colleagues spotted the kilonova and identified the host galaxy. Five other teams detected the event in their images within the next hour, and it wasn’t clear whether those teams spotted the kilonova before or after Carnegie’s announcement. This in turn sparked a lively debate about how much credit the subsequent teams should receive. …

The debate over credit extended to who should be listed as authors on the primary omnibus paper describing the discovery. LIGO made a good-faith effort to be as inclusive as possible, but hackles were raised over how the collaboration defined what constituted a “unique” contribution or discovery. In the end, the omnibus paper had two tiers of co-authors. The first included the six groups deemed “the discoverers,” with the second tier comprised of those who did the follow-up work and analysis. Even so, “there were a lot of people in that second category who thought they should have been in the first category because they did make a first or unique contribution,” [LIGO spokesperson David Reitze] says.

We have frequently derided Indian ministers for being so obsessed with the Nobel Prize – it is a silly obsession – but the LIGO and BICEP2 tales demonstrate that this feature is not unique to India or China. Everyone wants a Nobel Prize.

The Nobel intent

However, there are two different cultures at work here, even though both their followers kneel at the same altar. In India, for example, ministers dream of Indian scientists winning a Nobel Prize but their actions haven’t always been consistent with their desires. In the US, for another example, the infrastructure for good research is already in place, but unless operational expenditure is hiked, the community will undeservedly suffer the effects of overcrowding. As Ouellette said,

… astronomers tend to cluster in smaller, independent groups, and they are fiercely competitive, vying both for limited funding and for precious time on the world’s limited number of telescopes. Being first to report a breakthrough observation is hugely important to most astronomers. (emphasis added)

Such spending is also tied to the US’s consideration of itself as the world’s “leader” of scientific research. American scientists have won the most Nobel Prizes in the last century – but that was a century when the US was truly the research leader. It has dropped the ball of late and the effects will surely show a few decades from now in the Nobel Prize count.

Of course, in both cases it must be acknowledged that the Nobel Prize symbolises power. For the individual, it is power in the form of acknowledgment of work. For the institute, it is power in the form of access to funds. For the country, it is power in the form of prestige. It hasn’t mattered if the way the prize is awarded is flawed; the cultural and historical cachet it still carries is astounding, prompting two weighty collaborations to almost unravel in its pursuit. We must acknowledge that this is how science works. There are likely other team efforts worldwide where individual desires have superseded community goals.

This also does not make LIGO’s way of doing things better – or even the LHC’s, for that matter. A habit of crediting everyone on the experiment means a scientist who actually did the work relevant to a paper’s results gets the same amount of credit as a fresh PhD student who did none of it. Peter Coles, a theoretical cosmologist at Cardiff University, has called this “absurd”. Panjab University used this flaw to its advantage to climb the global university rankings, because its scientists had been listed as coauthors on numerous papers published by LHC experiments.

It is clear that the ultimate fix to this problem will have to ensure that all work is properly acknowledged and, if necessary, rewarded. David L. Clements, an observational astrophysicist at Imperial College London, commented on Coles’s post, “More permanent contracts, a less publication-fixated funding environment, and more money in the field, reducing the level of cut-throat competition, would help, but can you realistically see any of that happening?” It is becoming clear that the essential animus of a hyper-competitive environment is rooted in the short supply of three types of resources at certain levels of the hierarchy of the scientific enterprise: evaluators, methods to ensure fairer evaluations, and acknowledgments (assuming we already have the resources to conduct research).

The more evaluators there are, the more evaluations that will happen (vertically, horizontally or both). They must not rely completely, or even over-rely, on reductive proxies like the impact factor or the h-index to judge candidates’ performance. Instead, they must be able to afford (and not be blindly expected to perform) qualitative assessments, such as speaking to a candidate’s supervisor to understand her contributions better and assessing her work by actually reading her papers. Once evaluations are complete, all deserving candidates must be rewarded, suitably and in a timely manner. Without these measures, participants in a competition will have few reasons to believe it will be fair or empathetic.

A recent Twitter conversation (below) between Mukund Thattai, a biologist at the National Centre for Biological Sciences, Bengaluru, and Shailja Gupta, an adviser and scientist at the Department of Biotechnology, teased out the nuances of this issue. In particular, it highlighted the need for a two-way collaboration between the scientific community and the Government of India.

The Wire
May 1, 2018

Featured image: An engraved bust of Alfred Nobel. Credit: sol_invictus/Flickr, CC BY 2.0.

Posted in Scicomm

Cognitive flexibility and nationalism 2.0

Remember that paper about cognitive flexibility and nationalism? The one that said people who are more nationalistic in their politics tend to have lower cognitive flexibility? I’d blogged about it here. I hadn’t read the study’s paper, published in the Proceedings of the National Academy of Sciences, because I didn’t think I needed to in order to call the study’s conclusions into question. An excerpt from my previous post:

… ideological divisions, imagined in the form of political polarisation, are bad enough as it is without people on one side of the aisle being able to accuse those on the other side of having “low cognitive flexibility”. The nuance can be worded as prosaically as the neuroscientists would prefer but this won’t – can’t – stop the less-nationalistic from accusing the more-nationalistic of simply being stupid, now with a purported scientific basis.

This is why I believe something has to be off about the study. The people on the right of the political spectrum, as it were, are not stupid. They’re smart in just the way those of us on the left imagine ourselves to be. Now, one defence of the study may be that it attempts to map a hallmark feature of the global political right – a sort of rampant anti-intellectualism and irrationality – to its neurological underpinnings, but nationalism is more than its endorsement of traditions or traditional values.

As it turned out, reading the paper would have revealed more problems with the study, and made a stronger case than I was able to that it is quite likely a product of the “publish or perish” kind of thinking. The reason I revisit this study now is an interesting conversation I had with Shruti Muralidhar, a cortical and hippocampal neuroscientist, currently a postdoc at the Massachusetts Institute of Technology. Before I’d written my post, I’d asked Shruti if she could read the paper and possibly critique it.

My primary concern was basically about assigning a kind of “hierarchy of cognitive abilities” to the political spectrum – that sounds dangerous. By saying the political right has less cognitive flexibility, I’d felt like the study was reaching the conclusion that there might be a purely biological explanation for why people behave the way they do. This kind of reductionism is eminently dangerous.

According to Shruti, “This understanding of the paper is not far from what they want the reader to take away – but sadly, they have little or no backing to actually prove or disprove this claim.” She summed her observation up in a few points (quoted verbatim):

  1. Cognitive flexibility is simply just that. It doesn’t mean more or less intelligence, smarts or anything like that. In fact, it might not even be a “positive” trait depending on the situation at hand.
  2. The study’s authors have administered only two cognitive tests, and one of them clearly gives counterintuitive, unexpected or, one might even say, “wrong” results, as in goes against the study’s primary hypothesis.
  3. These are correlation studies, which usually are to be taken with a bagful of salt.

The first question that arises then is why the authors – or PNAS – decided to publish their study when 50% of their tests turned in results that opposed their hypothesis: that the more nationalistic are less flexible, cognitively speaking.

She also pointed out many issues with the language in the paper, especially lines that could be misinterpreted easily. Some in particular stuck out because they revealed a deeper epistemological issue with the study.

Shruti said, “The authors clearly admit that cognitive flexibility is a multi-dimensional beast and that it is difficult to understand it completely,” and often suggest that they don’t understand it completely themselves. One giveaway is that they keep saying variations of “We need more and better tests”.

A bigger giveaway is a line on page 6: “However, it is also conceivable that immersing oneself in strongly ideological environments may encourage psychological inflexibility and promote a preference for routines and traditions.” In other words, if A stood for “more nationalistic” and B for “less cognitive flexibility”, then the authors were claiming A therefore B while also admitting that B therefore A was conceivable. In other other words, the direction of their correlation was in doubt, let alone causation. This portion concludes thus:

Nevertheless, more research is necessary to understand the nature of cognitive flexibility and the various ways in which it manifests in relation to ideological thinking.

The authors haven’t defined cognitive flexibility explicitly in their paper, instead referencing older studies on the subject. Even so, Shruti said that those papers might not be able to provide the final word either because, as one of her peers had pointed out, “Since this study is EU/Britain-specific, their idea of what ideological inflexibility is might also be different from, say, India’s or the rest of the world’s. Europe thrived on systems and thinking-within-the-box for centuries.”

Altogether, the paper appears to describe a study of the “low-hanging fruit” variety. Its central hypothesis has been neither proved nor disproved, the reader is left in doubt about whether the tests were properly chosen (and why more tests weren’t performed), and the paper is strewn with admissions that the authors don’t claim to understand what one of the more important keywords in the study really means.

Worst of all (to me) is that the paper has been published with a misleading headline, and the university press release with an incredibly misleading one that should take all the responsibility for the fake news born as a result (and that strengthens the case that press releases shouldn’t be trusted). And there’s quite a bit of it:

  • PsychCentral has an article that quotes only Leor Zmigrod, the lead author of the study and a psychologist at the University of Cambridge.
  • The same is true of an article in The Guardian by Nicola Davis. The headline goes ‘Brexiters tend to dislike uncertainty and love routine, study says’ – more of the reductionism at work.
  • Andrew Brown of The Guardian takes the study’s conclusions at face value, writing in his column:

… some kinds of political argument are going to be literally interminable. Obviously this isn’t true of any particular issue. Even the question of our relations with Europe will be settled some time before the heat death of the universe. But it may be replaced by something else which arouses the same passions and splits the population in the same way, because the cognitive traits [Zmigrod] is analysing are all part of the normal variation of humanity.

In fact, it seems no prominent coverage of the paper has invited an independent researcher to comment on its findings. I concede that I myself didn’t speak to a psychologist – Shruti is a neuroscientist – but all of Shruti’s observations are hard to ignore.

Finally, if I were looking to publish a paper right now, I’d hypothesise that flattering, non-critical coverage of a scientific paper – peer-reviewed or otherwise – is more likely when the paper makes it easier for the publication to maintain its political position.

Featured image credit: mwewering/pixabay.

Posted in Science

Performing with and without an audience

My feeling is that as far as creativity is concerned, isolation is required. … The presence of others can only inhibit this process, since creation is embarrassing.

– Isaac Asimov (source)

Far be it from me to fall for a behavioural-studies paper that’s not yet been replicated, and much farther to do so based on a university press release, but this one caught my attention because it suggests something completely opposite to my experience: “when there’s an audience, people’s performance improves”. Sure enough, four full paras into the piece there’s a qualification:

Vikram Chib, an assistant professor of biomedical engineering at Johns Hopkins … who has studied what happens in the brain when people choke under pressure, originally launched this project to investigate how performance suffers under social observation. But it quickly became clear that in certain situations, having an audience spurred people to do better, the same way it would if money was on the line. (emphasis added)

The situation in question involved 20 participants playing a videogame in front of an audience of two and, in a different ‘act’, in front of no audience at all. If a participant played the game better, he/she received a higher reward. Brain activity was monitored at all times using an fMRI machine.

You realise now that the press release’s headline is almost criminally wrong, considering it’s likely been vetted by some scientists, if not by those who conducted the study themselves. It suggests that people’s performance improves in all circumstances; however, a videogame is nothing like writing, for example. In fact, you’d be hard-pressed to find someone who can write while they’re being watched. This is because writing isn’t a performance art, whereas a videogame could be. And when executing a performance, having an audience helps.

According to Chib and the press release, this is the mechanism of action:

When participants knew an audience was watching, a part of the prefrontal cortex associated with social cognition, particularly the thoughts and intentions of others, activated along with another part of the cortex associated with reward. Together these signals triggered activity in the ventral striatum, an area of the brain that motivates action and motor skills.

While this is interesting, 20 people isn’t much of a sample, the task is too simple and definitely not generalisable, and the audience is too small. Playing a videogame in front of two (presumable) strangers is nothing like playing a videogame in a room chock full of people, or when the stakes are higher. In fact, in real life, you’re almost certainly being judged if there’s an audience watching you conduct a task, and your stress levels are going to be far higher than when you’re playing something on your Xbox in front of two people.

A final quibble is more of a musing about the takeaway. The study seems to have focused on a very narrowly defined task, while one of its authors – Chib – freely acknowledges its various shortcomings. Why weren’t these known issues addressed in the same paper instead of angling for a follow-up? I suspect future studies will have to repeat the same experiment with different kinds of tasks.

But if the audience was a lot bigger, and the stakes higher, the results could have gone the other way. “Here people with social anxiety tended to perform better,” Chib said, “but at some point, the size of the audience could increase the size of one’s anxiety but we still need to figure that out.”

Perhaps this is a case of someone trying to jack up their publication count.

Featured image credit: Skitterphoto/pixabay.

Posted in Tech

View from the beanstalk

In early 2015, I developed an unlikely hobby: tinkering around with hosting solutions on the web, specifically providers of infrastructure as a service (IaaS). It’s unlikely because it’s not something I consciously inculcated; it just happened. Three years later, this hobby has morphed into a techno-garden of obsessions that I tend to on the side, in between the hours of my day-job editing science pieces.

When I was in college, I worked a little with Google App Engine – a PaaS (platform as a service) popular at the time for hosting apps but not so much now. I followed that up with Linode in 2012, after Posterous shut down, and then Digital Ocean in 2015.

Linode and Digital Ocean both provide virtual private servers (VPSs). A VPS is a virtual server installed on a physical server that utilises a specified fraction of the server’s resources. For example, one of Digital Ocean’s ‘popular’ VPS configurations comes with 4 GB RAM, 80 GB SSD and 4 TB bandwidth (for $40/mo). Another VPS config has 2 GB RAM, 50 GB SSD and 2 TB bandwidth (for $20/mo). Both these VPSs could be running on the same physical server, with a type of software called a hypervisor installed on it to partition and manage VPSs according to users’ requirements.
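The bookkeeping idea behind that partitioning can be sketched in a few lines of Python. This is a toy model of the hypervisor’s resource accounting – real hypervisors like KVM or Xen do far more than arithmetic, and the class and figures here are purely illustrative:

```python
# Toy sketch of hypervisor-style resource accounting: a physical server's
# resources are divided among VPS plans, and a new VPS can be provisioned
# only if enough capacity remains on the host.

class Server:
    def __init__(self, ram_gb, ssd_gb):
        self.free_ram, self.free_ssd = ram_gb, ssd_gb
        self.vps = []  # names of VPSs running on this host

    def provision(self, name, ram_gb, ssd_gb):
        """Carve out a VPS if the host still has room for it."""
        if ram_gb > self.free_ram or ssd_gb > self.free_ssd:
            return False  # host is full; the provider places this VPS elsewhere
        self.free_ram -= ram_gb
        self.free_ssd -= ssd_gb
        self.vps.append(name)
        return True

host = Server(ram_gb=64, ssd_gb=960)               # one physical server
host.provision("blog-vps", ram_gb=2, ssd_gb=50)    # the $20/mo config above
host.provision("bigger-vps", ram_gb=4, ssd_gb=80)  # the $40/mo config
print(host.free_ram, host.free_ssd)                # → 58 830
```

Both configs fit comfortably on the same host, which is exactly how two differently priced VPSs end up as neighbours on one physical machine.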

Other options include shared hosting, where you have access to a part of a server’s resources (RAM, SSD and bandwidth) but not full control over how you use them. This is encapsulated by saying you don’t have root-level access. Shared hosting is preferred for small blogs and websites because it’s low-priced (starting at ~$3/mo). Then there’s bare-metal hosting, whereby you take charge of an entire server and all its resources.

Digital Ocean was a godsend because of the one-click installs it provided. You purchase a VPS config – a.k.a. provision a VPS – such that it comes pre-installed with software of your choice, chosen from a menu. The Digital Ocean UI made the offering look much less like the intimidating cPanel and more like a fun testing area, considering VPSs were available for just $5/mo. I think that’s how my interest truly took off.

Thanks to Digital Ocean, I was able to quickly learn the basics of working with cloud-computing, SSH, Linux-like operating systems, security auditing, webservers, content delivery networks, VPNs, firewalls, SSL/TLS and APIs. I don’t think the whole enterprise cost me more than $10. Additionally, both Digital Ocean and Linode offer excellent documentation; if you don’t find answers there, you will at stackoverflow. So there’s really no excuse to not start learning these things right away, especially if you’re so inclined.
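For a flavour of what those basics involve, here is a sketch of the first-login setup one might perform on a fresh Ubuntu VPS. The IP address and username are placeholders, the commands are typed one by one at the relevant prompts rather than run as a script, and the details vary by distribution:

```shell
# First steps on a fresh Ubuntu VPS (placeholder IP and username).

ssh root@203.0.113.10                 # log in to the new VPS as root

adduser deploy                        # create a day-to-day user...
usermod -aG sudo deploy               # ...and grant it sudo rights

ufw allow OpenSSH                     # keep SSH reachable through the firewall
ufw enable                            # then switch the firewall on

apt update && apt install -y nginx    # a basic webserver to experiment with
ufw allow 'Nginx Full'                # open ports 80/443 for it
```

None of this is specific to Digital Ocean or Linode; the same steps apply on any VPS you have root access to, which is much of the reason the skills transfer so easily between providers.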

Actually, you should probably pick up on these things even if you’re not so inclined because these are the basic technologies through which humanity engages the Information Age’s most powerful medium of communication: the internet. Their architecture, technical specifications and functional affordances make up the framework in which we conduct our techno-politics. What they allow us to do become freedoms and violations; what they don’t allow us to do become safeguards and restrictions.

Extending the importance of understanding how they work to one higher level of abstraction – we have the foundations of online commerce, digital art and information sharing protocols. Going even further, we start to bump into questions about memory, persistence, intelligence and immortality.

Every one of us is situated somewhere on this beanstalk, and with each passing day, there are fewer ways as well as fewer reasons to get off. (Even those who reject the internet must engage with it – either to implement their rejection or to engage with others who continue to use the internet.) As one developer wrote:

Ignoring the cloud or web services because they are out of your comfort zone is no longer an option. The app economy is shifting. Adapt or die.

As I tried to learn more about how these technologies impacted our daily lives – an nth or zeroth level of abstraction depending on your POV – I also realised the world’s foremost interpreters of the internet’s implications were white men. They’re too numerous to list but sample these authors of my bookmarked blogs: Sam Altman, Marco Arment, Andy Baio, John Gruber, Jason Kottke, Jay Rosen, Bruce Schneier, Ben Thompson and Jeffrey Zeldman, among others.1 Even to begin to decide whether the privilege enjoyed by this coterie biases their aspirations vis a vis the internet, you will need to pick up the basics.

Fortunately, the cost of acquiring this knowledge has been falling. Tending to my garden of obsessions has meant surfing the interwebs for different IaaS providers for hours on end, the various features they offer (usually the same but every once in a while something new comes up) and – interestingly – comparing their Terms of Service. During one such excursion recently, I came upon two great forums: LowEndTalk (LET) and WebHostingTalk (WHT). If you’re looking for cheap but reliable hosting providers, especially of the shared or VPS variety, LET and WHT have got you covered.

For example, this is how I came upon some hosts – esp. RamNode, KnownHost, WebFaction and SecureDragon – that provide infra at costs you will find not low but altogether “cheap” if you’re coming in from the world of Amazon Web Services, Microsoft Azure, etc. If you’ve picked up the basics of server management and security, the prices drop further (sub-$5). Even managed WordPress hosting hasn’t been spared; compare the prices of LightningBase and, say, Pressable.

(Managed hosting is a form of shared hosting where the hosting provider manages an application installed on the machine for the user, such that the user will have to be concerned only with using the application rather than also maintaining it. WordPress is a popular application for which managed options are abundant because those who use WordPress often have a very different skillset than that required to maintain WordPress.)

In all, you will need to spend about five hours a week for a month and a total of $10 to unlock a whole new, and very socially and politically relevant, world. If you want to do more, check out Slashdot Deals for amazing learning ‘bundles’.

 

1. The only exception I’ve been able to think of is Om Malik. Then again, all the white people I’ve mentioned, and Malik, are also all American, and perhaps I’m focusing on American interpretations of the interet’s implications. I can think of a few people who operate out of India – Pranesh Prakash, Srinivas Kodali, Malavika Jayaram, Anuj Srivas, Kiran Jonnalagadda – but none of them are recognised worldwide whereas the white men all are. This, of course, isn’t surprising.

Posted in Culture

Who we will always be

An image from Yuri Shwedoff's 'Space' series. Credit: Yuri Shwedoff

I found this evocative image on Twitter today. It’s by a Russian artist named Yuri Shwedoff and the image is part of his ‘Space Series’, available to view and appreciate on Behance. I don’t know the provenance of the overlaid text though.

At a glance, it’s clear the image depicts a future where we’ve abandoned all space launches and have regressed to a more primitive form of life.

But then you realise the last NASA Space Shuttle launch was in July 2011. Perhaps some kind of Space Shuttle museum became abandoned as the world carried on? Doesn’t seem likely – the artist probably chose to depict the Space Shuttle because everyone recognises it.

Further, the rectangular beam-like structure below the Space Shuttle indicates the location is the Kennedy Space Centre Launch Complex 39A.

Another interesting feature is that the fuel tanks of earlier rockets had thinner walls than they do today, so the tank could be erected to an upright position only after being loaded with fuel and pressurised. So in this image, the Space Shuttle was ready for launch, and not just standing there waiting to be prepared for launch.

The crenellated mounds of earth and flora also suggest the 39A launchpad, with the rocket on it, has been abandoned for many centuries.

The weather is also curious because launchpads are usually located at sites above which there is often clear sky. But in this image, the sky is overcast. It could just be a rainy day – or it could be that the world has experienced some kind of catastrophe that has either precipitated weird weather patterns or, in the more dystopian view, clouded all of Earth á la a nuclear holocaust.

The greater catastrophe would also explain the primitive nature of technology in the image, in the form of a human riding horseback with what seems like arrows strapped to his back. The text, “It’s who we were…”, also suggests the same thing.

In all, the artist seems to say that in the early 21st century, something happened that caused us to abandon space launches, altered the world’s weather and, in time, left us technologically backward.

This is why I think the image is a bit confused. Gazing up at a Space Shuttle on the launchpad and saying “It’s who we were…” says nothing at all because, in a world with frequent spacefaring missions, something happened anyway. Our ambitions unto the final frontier didn’t change anything.

If anything, this accidental monument should’ve been for the now-hollow nuclear missile launch silo, or in fact a statue of a human itself.

Alternatively, I’d replace “It’s who we were…”, and its inherent sense of pride and longing, with a phrase that evokes shame and regret: “It’s who we will always be”.

(The original image by Shwedoff doesn’t have the text, so whoever put it on there has effectively defaced the image.)

Posted in Science

The March for Science, ed. 2018

K. VijayRaghavan, India’s new principal scientific advisor to the Government of India, has brought a lot of hope with him into the role as a result of his illustrious career as a biologist and former secretaryship with the Department of Biotechnology. Many stakeholders of the scientific establishment are already looking to him for positive changes in S&T policy, funding and administration in India under a government that, on matters of research and education, has focused on applications, translational research and actively ignored the spread of superstitious ideas in society.

In a recent interview, VijayRaghavan was asked about R&D funding in India. His response is worth noting against the backdrop of a ‘March for Science’ planned across India on April 14. As the interviewer reminds the reader, the 2018 Economic Survey bluntly acknowledged that India was underspending on research. This has also been one of the principal focus areas of the ‘March for Science’ organisers and participants: they have demanded that the Centre hike R&D spending to 3% and education spending to 10%, both as fractions of the GDP, apart from asking the government to stop the spread of superstitious beliefs.

Q: Getting funding for research is widely considered to be a prickly issue. The 2018 Economic Survey stated that India underspends on R&D. Is this a concern at the administration level?

A: These are wrongly posed questions, because it says that should magically the amount of funding go up, then science’s problems would be solved. Or that this is the key impediment. There’s no questions that there’s a correlation between increased R&D funding and innovation in many economies. South Korea is a striking example how high-tech R&D has resulted in transformation in their industries… Have we analysed, bottom-up, what Korea’s spending goes into and what we can learn from that and do afresh? Have we analysed our contest and learnt? …

Now interestingly, top-down this analysis has been done long ago. We as scientists, individuals and as journalists need to see that. The DST, and the DBT, the CSIR, the ICMR all have their plans should they get more resources. You can’t have a top-down articulation of how the resources can come and be used, unless that is also dynamically connected bottom-up.

When I look at 100 cases of why fund-flow is gridlocked, in about 70 cases, it’s poor institutional processes.

March for more than science

After the first Indian ‘March for Science’ happened in August 2017, the government showed no signs of having heard the participants’ claims, or even acknowledged the event. This was obviously jarring but it also prompted conversations about whether the march’s demands were entirely reasonable. Most news reports, include The Wire‘s, had focused on how this was the first outpouring of scientists, school-teachers and students, particularly at this scale. Scrutinising it deeply was taboo because there was some anxiety about jeopardising the need for such a march itself. However, ahead of the second march planned for April 14, it’s worth revisiting.

Sundar Sarukkai, the philosopher, had penned an oped the day after the 2017 march, asking scientists whether they had thought to climb down from their ivory towers and consider that the spread of superstitions in society under the Narendra Modi government may have been because of sociological and cultural reasons, and wasn’t simply a matter of spending more on R&D. Following a rebuttal from Rahul Siddharthan, Sarukkai clarified in The Wire:

Whenever ideal images are constructed (like ideal of woman, ideal of nation, etc.), one should be wary, since any such act is often driven by considerations of power. This ideal image of science too is used to establish science as a powerful agent within modern societies. The use of this ideal image to solve social problems related to caste, religion or hatred of any kind is a red herring. It is like using a hammer to fix a bulb. When we do that, it only means that we are not really interested in solving the problem (fixing the bulb) but more invested in using the method (the hammer) – irrespective of whether it is suitable for the task or not.

The terrible cases of lynching, hatred, oppression and misuse of religion must be unequivocally opposed. For those who are serious about that task, the solution is more important than the method used to achieve it. The categories of the ideal notion of science are applicable primarily to non-human systems. So even if they work well within such systems, there is no reason why they should do so within human systems.

A physicist said something similar to me around the time: that the old uncle preaching the benefits of homeopathy in his living room is doing so not because he doesn’t have access to scientific knowledge. That may be true but what’s more conspicuous by absence is someone in the same room challenging his views, communicating to him without being intimidating or patronising and having a discussion with him about what’s right, what’s wrong and the methods we use to tell the difference. Instead, focusing on making it easier for scientists to become and remain scientists alone will not take us closer to achieving the outcomes the ‘March for Science’ desires.

Sarukkai echoed this point in a comment to The Print: that scientists who march only for science are not doing anything useful, and that they must march against casteism and sexism as well (and social ills outside their labs). Without real change in these social contexts, it’s going to be near-impossible for those deemed less powerful by structures in place in these contexts to challenge the beliefs of those afforded more social authority. Ultimately, effecting such change is not going to be all about money – just as much as more money alone won’t solve anything, just as much as imploring the government to “fix” all these issues by itself will not work either.

This is where VijayRaghavan’s comments about R&D spending fit in. Before we throw more money in the general direction of supporting R&D, its Augean stables will have to be cleaned out and inefficiencies eliminated. One example, apropos VijayRaghavan’s comment about 70% of funds being gridlocked due to “poor institutional processes”, comes immediately to mind.

Sunil Mukhi, a theoretical physicist, wrote in 2008 that when he had been a member of the faculty at the Tata Institute of Fundamental Research, Mumbai, his station afford him a variety of privileges even as there was “no clear statement of our responsibility or duty to perform, and no consequences for failing to do so”. While he has since acknowledged a potential flaw in his suggested solution, the fact remains that many researchers often laze in prized research positions at well-funded institutes instead of also having to grapple with the teaching and mentorship load prevalent at state universities and colleges.

Additionally, though most people have directed their ire at the government for underfunding R&D, 55% of our R&D expenditure is from the public kitty. Among the ‘superpowers’, China is a distant second at less than 20%. So the marches for science should also ask the private sector to cough up more.

One for all

When the government pulled the financial carpet out from under the feet of the Council of Scientific and Industrial Research in 2014 and asked its 38 labs to “go fund themselves”, many scientists were aghast that the council was being handicapped even as more money was being funnelled into pseudo-research on cow urine. But there were also many other scientists who said that the CSIR had it coming, that – as a network of labs set up to facilitate applied and translational research – it was bloated, sluggish and ripe for a pruning. Perhaps similar audits, though with ample stakeholder consultations (not the RSS) and without drastic consequences, are due for the national scientific establishment as a whole.

As a corollary, it is also true that every march, protest or agitation undertaken against casteism, sexism, patriarchy, bigotry and zealotry can work in favour of the scientific establishment since what ‘they’ are fighting against is also what scientists, and science journalists, should be fighting against. Access to bonafide scientific ideas should not be solely through textbooks, news articles and freewheeling chats on Twitter. Instead, and irrespective of whether they become available, they should have the option to be availed through the many day-to-day interactions in which we confront structures of caste and class.

For example, there is no reason the person who cleans your toilet should not also cook your dinner. To institute this dumb restriction is to perpetuate caste/class divisions as well as to reject science in the form of hand-wash fluids. For another, there is no reason an employer shouldn’t let their domestic help use the toilet when they need to. However, the practice of expecting those who work in our homes to use separate toilets or be fired still persists, even in a society as ostensibly post-caste as West Bengal’s, demonstrating “the extent to which employer relations with domestic workers continue to be flavoured by caste” – as well as the extent to which we falsely attribute different human bodies with irrational biological threats.

These problems are also relevant to scientists, and must be solved before we can confront the bigger, and more nebulous, order of scientific temper in the country. However, such problems can’t be fixed by scientists and science alone.

It is worth reiterating that the ‘March for Science’ tomorrow is not a lost cause; far from it, in fact. The demand that 3% of GDP be spent on R&D is entirely valid – but it also needs to be accompanied by structural reforms to be completely meaningful. So the march, in effect, is an opportunity to examine the checks and balances of science’s administration in the country, the place of science in society, and introspect on our responsibility to confront a protean problem and not back down in the face of easy solutions. If the solution was as easy as ramping up spending on R&D and education, the problem would have been solved long ago.

The Wire, 13 April 2018.

Posted in Op-eds

Cognitive flexibility and nationalism

There’s something off about a new study that attempts to map the cognitive flexibility of people to their ideological preferences. To quote from the study’s ‘Significance’ section:

We found that individuals with strongly nationalistic attitudes tend to process information in a more categorical manner, even when tested on neutral cognitive tasks that are unrelated to their political beliefs. The relationship between these psychological characteristics and strong nationalistic attitudes was mediated by a tendency to support authoritarian, nationalistic, conservative, and system-justifying ideologies.

The intensity and extent of ideological divisions are being deepened across the world. This study examined over 300 citizens of the UK for “whether strict categorisation of stimuli and rules in objective cognitive tasks would be evident in strongly nationalistic individuals” – a nationalism indicated, for example, by these individuals being pro-Brexit. The results of the study could ostensibly apply to how certain groups around the world think: the extreme right in the US, the neo-Nazis in Germany, the National Front in France and the so-called “bhakts” in India.

These ideological divisions, imagined in the form of political polarisation, are bad enough as it is without people on one side of the aisle being able to accuse those on the other side of having “low cognitive flexibility”. The nuance can be worded as prosaically as the neuroscientists would prefer but this won’t – can’t – stop the less-nationalistic from accusing the more-nationalistic of simply being stupid, now with a purported scientific basis.

This is why I believe something has to be off about the study. The people on the right, as it were on the political spectrum, are not stupid. They’re smart just the way those of us on the left imagine ourselves to be. Now, one defence of the study may be that it attempts to map a hallmark feature of the global political right, sort of a rampant anti-intellectualism and irrationality, to its neurological underpinnings – but nationalism is more than its endorsement of traditions or traditional values.

While the outcomes of many socio-political actions may seem to promote irrational beliefs and practices, these actions are carefully engineered by very smart people and executed to perfection. One example that comes immediately to mind is the Bharatiya Janata Party’s social media strategy. Another is the resounding victory it achieved in the Lok Sabha and Uttar Pradesh elections in 2014 and 2017, resp.

(Both these enterprises are well-documented in the form of books – this and this, e.g. – and in fact make the less-nationalistic look quite silly for its sluggish group response. Would that say something about “our” cognitive abilities as well?)

Finally, a note about labels. Following astronomy research for half a decade has taught me that when stars explode, there is a tremendous variety of things that happen, such that it’s impossible for a five-century-old human enterprise to possibly identify, label, and categorise all of them within a small, finite group of processes. Similarly, trying to associate the symptoms of one infinite set (human socio-politics) with a finite-but-large set (human neurology) can be fraught with many mistakes.

Posted in Scicomm

A peek behind the curtain of the most infamous cosmic blunder of our time

I was once stupid too, and still am in many ways. One of the instances when I was more stupid than usual was when I wrote an article about the now-infamous BICEP2 ‘discovery’ of evidence of cosmic inflation in 2014. The ‘discovery’ eventually turned out to be a non-discovery because the scientists behind it had acted too soon with their announcement, overlooking a serious gap in their data.

As a science journalist, I’d failed because I hadn’t solicited independent comments for my piece, as a result letting The Hindu (where I worked at the time) publish an eminently wrong article. I will never forget that this happened, if only to remind myself of the importance of soliciting independent comments on all science articles, no matter how mundane the peg.

The BICEP2 instrument studies the cosmic microwave background (CMB) radiation. Some scientists were using BICEP2 to detect the imprint of gravitational waves on the magnetic component of the CMB radiation. Specifically, they were looking for some curling patterns in the magnetic mode associated with a rapid expansion of the universe thought to have happened between 10-36 and 10-33 seconds after the universe was born.

This expansion has been called the cosmic inflation and the period it happened, the inflationary epoch. Cosmic inflation was a hypothesis that sought to explain why parts of today’s universe seem to have similar physical features despite being separated by billions of lightyears. If cosmic inflation didhappen, the explanation would be that, once upon a time, the universe was very small and these distant parts were in fact more closely packed together then.

The first announcement, on March 17, 2014, was marked with a lot of fanfare. It was cosmology’s big day, and news publications around the world covered the announcement. Most of them included comments from scientists not involved in the data-taking, scientists who said something about the results was suspicious. That suspicion snowballed over time into a full-blown rebuttal that, within a few months, torpedoed the original study and forced the authors to apologise.

The problem turned out to be that gravitational waves couldcause the curling pattern on the magnetic mode of the CMB – and so could radiation emitted by cosmic dust, as seen by BICEP2. And the BICEP2 data was found to have recorded only the effects of cosmic dust.

In the last four years, I’ve realised how I had acted stupidly and learnt an important lesson the hard way. However, I was still curious why the BICEP2 team had acted stupidly. And though it seemed obvious, I had trouble accepting that the team had behaved the way it had simply because it was so excited, because it wanted to become famous.

On April 19 this year, Nautilus published an essay by Brian Keating, adapted from a book he has written about the BICEP2 fiasco. Keating was one of the leaders of the collaboration behind the announcement, working at Harvard University’s Centre for Astronomy (CfA). The essay provides a behind-the-scenes look at how scientists had missed the cosmic dust signal in their data analysis.

By the end of the essay, Keating appears to try to assuage readers that this was how science worked, that “you put out a result, and other scientists work to test the result”. However, the essay in toto highlights this is not how science works, and that this image of scientific endeavours is far too idealistic.

For example, a constant undercurrent throughout the enterprise seems to have been a rush to scoop. Keating et al had their eyes on a Nobel Prize, and wanted to be the first group to make the announcement that they’d seen the remains of the universe’s “birth pangs”.

He says this rush is why his team decided to present their BICEP2 results to the press even before the corresponding paper was peer-reviewed and published in a science journal. He writes:

… we feared that sending the paper to a journal would be unfair, giving a particular group – referees and their friends – a head start on proposal submission. My field is so competitive that the only people who weren’t on BICEP2 who could have reviewed the highly technical aspects of the paper were competitors. Our first priority was to make a scientific presentation to communicate our results to all our peers in the cosmology community.

Next, it seems the CfA team had been aware that dust in the Milky Way could play spoilsport to their apparent discovery, so they tried to get data from the team operating the Planck satellite. This satellite measures electromagnetic radiation across a wide swath of the sky, much larger than the BICEP2 survey area, and in a larger range of frequencies as well.

One of these frequencies was 353 GHz, at which Planck was able to study the effect of cosmic dust exclusively. The CfA team needed this data – but despite multiple requests, the Planck team refused to share the data. This is big news to me because I had no idea the CfA and the Planck teams treated each other as competitors! If only they’d worked together, the BICEP2 fiasco might never have happened.

… such a map [of cosmic dust] did exist, one with the exact high-frequency data we needed. There was only one catch: It belonged to our competitor, the Planck satellite. And in early 2014, the Planck team hadn’t yet released their B-mode polarization data. We were scared Planck might not only hold the key to proving our measurement right, but might have already glimpsed the inflationary B-mode signal before we did. … We desperately tried to work with the Planck team, while being careful not to tip them off as to what we’d found … [but they] wouldn’t cooperate. Either they didn’t have the data we wanted, or they did have it and they were going to scoop us. We had to go it alone.

Soon after, Keating and his team found a picture of a Powerpoint slide posted online that appeared to be from a talk given by one of the Planck team members. They decided to use the information presented in the slide, which suggested that BICEP2 had good and legitimate data, even though they weren’t sure if the slide was meant for quantitave analysis.

Thus, March 17 came and went, then June did too, when the CfA team’s paper was published in the journal Physical Review Letters. Then, around November, the Planck team had their paper published. As Keating writes,

With the Planck 353 GHz paper appearance came the beginning of the end of the BICEP2 team’s inflation elation. Although the Planck team was careful to release no data for the Southern Hole, the field where BICEP2 observed—perhaps out of fear we would digitize it—they made a blunt assessment of the potential amount of dust polarization contamination in the Southern Hole, saying it was of “the same magnitude as reported by BICEP2.” This meant dust was as likely a culprit for our B-modes as were inflationary gravitational waves.

The BICEP2 story well elucidates how science really works.

“Scientists are people too” is one way to put it. Another, and possibly better, way is to remember that institutionalised tendencies like torturing the data to yield more papers, conducting research to attract a Nobel Prize and scooping the competition aren’t one-offs, and that it’s foolish to think they wouldn’t percolate through the scientific community to create flawed ambitions.

These are all essential components of how humanity produces its knowledge. In other words, the scientific enterprise isn’t one that’s free of human foibles.

Featured image: The BICEP2 telescope (right) in Antarctica. Credit: Amble/Wikimedia Commons, CC BY-SA 3.0.

Design a site like this with WordPress.com
Get started