Posted in Uncategorized

Yo-yo fitness

Nagraj Gollapudi on the yo-yo fitness test, ESPN Cricinfo:

A yo-yo test involves a player shuttling between two cones that are set 20 metres apart on flat ground. He starts on a beep and needs to get to the cone at the other end before the second beep goes. He then turns back and returns to the starting cone before the third beep. That is one “shuttle”.

A player starts at speed level 5, which consists of one shuttle. The next speed level, which is 9, also consists of one shuttle. Speed level 11, the next step up, has two shuttles, while level 12 has three and level 13 four. There are eight shuttles per level from 14 upwards. Level 23 is the highest speed level in a yo-yo test, but no one has come close to getting there yet. Each shuttle covers a distance of 40 metres, and the accumulated distance is an aggregate of distance covered at every speed level.
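The accumulated-distance bookkeeping described above is easy to sketch in code. This is a minimal illustration assuming the shuttle counts quoted in the article (one shuttle each at levels 5 and 9, two at 11, three at 12, four at 13, and eight per level from 14 to 23) and 40 metres per shuttle:

```python
# Shuttles per speed level, per the article's description.
shuttles_per_level = {5: 1, 9: 1, 11: 2, 12: 3, 13: 4}
shuttles_per_level.update({level: 8 for level in range(14, 24)})

SHUTTLE_DISTANCE_M = 40  # one shuttle = 20 m out + 20 m back

def accumulated_distance(up_to_level):
    """Total metres covered on completing every level up to `up_to_level`."""
    return sum(n * SHUTTLE_DISTANCE_M
               for level, n in shuttles_per_level.items()
               if level <= up_to_level)

print(accumulated_distance(13))  # 440: through the first five levels
print(accumulated_distance(23))  # 3640: a (so far unreached) complete test
```

So even a complete test, which no one has come close to finishing, would cover under four kilometres of running; the difficulty lies in the escalating pace, not the distance.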

The player gets ten seconds to recover between shuttles. At any point, if he fails to reach the cone before the beep goes, he gets a first warning. Usually a player gets a few “reminders” to keep to the pace, but three official warnings generally mark the end of the test.

While the yo-yo test does not predict the overall success of a player, it is used to describe a player’s ability to recover between bursts of activity within a game as well as between games. As a result, players who have passed the yo-yo test at the level prescribed by their managers are likelier to function at their best for longer than those who haven’t. Now, there is an interesting quote by Chris Donaldson, the New Zealand fitness coach, buried in the article about one of the reasons he finds the yo-yo test useful: “This way, they can play the game for longer and faster and they can do things like stop the ball, take a miracle catch or run between wickets faster.”

‘Miracle catch’ is a curious term. We all know what it stands for: improbable catches that nobody expected players to be able to complete. In the same vein, they are also an element of the game that can’t be planned for in advance – except by keeping players fit – because a lot of it depends on situational awareness in the moment. Of course, to expect players to be able to pull off feats like these if they’re at peak fitness is not insensible – but it’s also interesting that team management expects such feats to be fully capitalised on when opportunities present themselves, especially in T20 matches. And T20 matches are also held more frequently. (In the IPL, teams had to be ready for games at three-day intervals.)

This is the aspect of it that I find particularly disconcerting. T20 matches are held more regularly because they’re entertaining and are big revenue-generators. The ICC devised them in the first place to make cricket more interesting to newer and younger audiences and expand the sport’s market. But when you move further downstream of the format’s effects, you come to expectations of player fitness and – as Donaldson said – what coaches expect them to be able to do on the field. It seems like it’s not enough if fitness training prepares players to run faster between the wickets, run across from deep midwicket or long off to cut off fours or, in fact, be as lively in the 20th over of the game as in the first. Now, we expect them to be trained to perform miracles.

One could say that the standards of the game are improving. The more we understand about human physiology, develop new performance techniques and metrics, and advance technology to enable sportspeople to control their bodies better and translate more of their on-field actions to in-game consequences, the higher the standards of the game will be. We’ve seen this in cycling, football, swimming, weightlifting, etc., as well as in cricket: lighter but stronger helmets and pads, heavier bats with better design, near-realtime ball-tracking, etc. But I would imagine there’s a point where these developments take the game beyond its original design itself, e.g. by making unusual aerobic exploits on-field a part of the standard set of expectations.

How much longer before players are penalised for not being miracle-workers then?

Posted in Uncategorized

Religious tolerance among children

Here’s another instance of an unsound university press release exaggerating the conclusions of a study. The headline goes “Children in India demonstrate religious tolerance, study finds”. According to the 2011 Census, 182 million people in India are between the ages of 9 and 15. The two studies whose results the press release describes questioned 63 and 37 children between the ages of 9 and 15. How is the press release’s headline justified?

Now, I’m not casting aspersions on these children. My annoyance is with what the press release, and the studies’ authors quoted in it, say the takeaway is. These 100 children were not subjected to any biological or neurological tests; they were asked questions and their answers were recorded. As a result, I doubt the studies’ results are generalisable.

The 100 children were recruited for the tests, with their parents’ written consent, from a progressive school in Vadodara, Gujarat. The paper says that the school made an effort to admit children from both the Hindu and Muslim communities, and that their parents all belonged to the low-income group. This composition, so to speak, immediately invites confounding factors.

For example, I would imagine many people in the low-income bracket don’t want to invite trouble, so they stay away from communal disharmony and also teach their children to do so. (Indian public institutions have demonstrated a consistent reluctance to protect the weaker sections of society.)

For another, the children’s group’s composition breaks down the following way: “Younger Hindu (8 female, 8 male), younger Muslim (8 female, 7 male), older Hindu (8 female, 8 male) and older Muslim (8 female, 8 male).” That’s 16 children per religious group, by no means a corpus substantial enough to drive such sweeping conclusions.

Again, I’m not saying there are “bad children”, but only that some of the study’s conclusions and the press release’s tone are not well-supported by the data – assuming empirical tests alone can describe such outcomes.

Posted in Uncategorized

The cost of global warming, from thermo 101

There’s a formula in thermodynamics 101 called Carnot’s theorem that goes like this:

ηth ≤ 1 − Tc/Th

This is a famous equation because it defines the absolute upper limit of efficiency achievable by a heat engine, irrespective of how much its performance is optimised. ηth is the thermal efficiency; Tc is the temperature of the surroundings into which the engine releases its exhaust heat; Th is the temperature at which heat enters the engine.

Say an engine combusts its fuel at 1,000 K and the ambient temperature is 298.15 K (25º C). Then the engine will have a thermal efficiency of 70.18% at best. Effectively, for higher efficiency, the engine needs to take in heat at a hotter temperature and release it into a cooler environment.
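In code, the example above is a one-liner (a sketch; `carnot_efficiency` is just the formula restated as a function):

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper limit on a heat engine's thermal efficiency (temperatures in kelvin)."""
    return 1 - t_cold_k / t_hot_k

# Fuel combusting at 1,000 K, exhaust released into 298.15 K (25 C) surroundings.
eta = carnot_efficiency(1000, 298.15)
print(f"{eta:.3f}")  # 0.702, i.e. the ~70.18% ceiling quoted above
```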

This is why, to make better engines, engineers are trying to build materials that can withstand a higher operating temperature (e.g. using ceramics). In doing so, they can increase the value of Th in the denominator, and reduce the value of the Tc/Th term.

Sadi Carnot, ‘the father of thermodynamics’ for whom the Carnot theorem is named, propounded his studies of thermodynamics in 1824. Making the reasonable assumption that the world didn’t warm significantly between 1824 and 1870, when the historical record for making common climate change measurements begins, the average global surface temperature in his time was 0.1º C cooler than the average between 1901 and 2000. In 2017 – in the post-industrial period – it was 0.8º C warmer.

Now, let’s extrapolate Carnot’s equation to a global context encompassing all the heat engines in the world – to the point where we’re effectively treating all of them as one big heat engine. Let’s also assume for simplicity’s sake that the average operating temperature of this engine is 1,500 K (1,227º C). Going by the numbers above, the thermal efficiency ceiling of this engine has fallen by 0.1% just because of global warming.
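A rough sensitivity check, under the same simplifying assumptions (the 1,500 K operating temperature is the figure assumed above, and the 0.9 K of warming is the span from 0.1º C below the 1901-2000 mean to 0.8º C above it): since the ceiling is 1 − Tc/Th, a rise of dTc at the cold end lowers it by dTc/Th.

```python
T_HOT = 1500.0   # assumed mean operating temperature of the "mega-engine", K
WARMING = 0.9    # K: from 0.1 C below the 1901-2000 mean to 0.8 C above it

# The ceiling is 1 - Tc/Th, so warming of dTc at the cold end lowers it by dTc/Th.
drop = WARMING / T_HOT
print(f"{drop * 100:.2f} percentage points")  # 0.06
```

That works out to about 0.06 percentage points, i.e. roughly the 0.1% figure to one decimal place.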

This may not seem like much until you couple it to the amount of power this mega-heat-engine contributes. For example, if a nuclear power plant generates 3,000 MW to move a turbine that produces 1,000 MW of electrical power, then a 0.1% drop in thermal efficiency means it will have to generate 344 MW more to keep producing 1,000 MW of electrical power. This in turn translates to higher resource consumption: uranium, coal, sunlight, wind, whatever.

Now compare this to the fact that the world’s total energy consumption in 2014 was an est. 109,613 TWh – an average power draw of about 12.5 TW, or 12.5 million MW, sustained over the course of the year.
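As a sanity check on that figure, converting an annual energy total into an average power draw is just a division by the hours in a year:

```python
TWH_2014 = 109_613        # est. world energy consumption in 2014, from the text
HOURS_PER_YEAR = 8760     # 365 days x 24 hours

avg_power_tw = TWH_2014 / HOURS_PER_YEAR   # TWh per hour = TW
avg_power_mw = avg_power_tw * 1e6          # 1 TW = 1e6 MW
print(round(avg_power_tw, 1))   # 12.5
print(round(avg_power_mw))      # about 12.5 million MW
```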

And while some of the resources are renewable, they are all founded on the availability of one very important finite resource: land.

§

Any engineer (myself included) will be able to tell you that my calculations are based on grossly oversimplified assumptions. Possibly the grossest of all: the typical steam-powered engines in Carnot’s time had an efficiency of just 3%.

But I think my overall point still stands: as the world warms, heat engines are going to become less efficient than they would’ve been if the world was cooler, regardless of whether the consequences are trivial at a given scale. This efficiency could either be measured as an efficiency of the engine itself or one that takes into account resource requirements across the entire value chain.

The reason I write about this now is because of an article that appeared in the Bulletin of the Atomic Scientists on June 10, discussing the economic costs of adapting to climate change and contrasting them to the amount we’ll have to spend fixing things that a warming world will break. And one way things will break is captured in the following lines from the article:

… a 2012 paper in the American Economic Journal [found] that higher temperatures reduce economic growth rates, particularly in poorer countries. A 2015 paper by Stanford scientists published in Nature Climate Change built on this work, similarly finding that global warming will particularly hurt economic growth in poorer countries, and that “Optimal climate policy in this model stabilizes global temperature change below 2 degrees C.” This finding is consistent with the target set by the Paris climate accords.

It’s possible that one way these effects will be perceived on the ground is by forcing thermal efficiency down, in turn forcing innovation that draws funds away from other activities. I acknowledge that these effects will be almost immeasurable – and that trying to measure them is probably not a useful way to start working towards a solution.

On the other hand, I find it quite fascinating that a very simple equation first worked out almost two centuries ago provides a glimpse of the costs anthropogenic global warming has forced us to confront.

Posted in Uncategorized

Migraines

I’ve had migraines since 2006, when I started college. The pain is excruciating, disabling, ruinous, usually on both sides of my head. I can sense a migraine coming 12-24 hours beforehand, in the form of a small sphere of ache embedded deep inside my brain that I can amplify if I shake my head once or twice vigorously. The onset is also accompanied by constipation and a growing throbbing pain, as if a part of my brain was swollen and pushing against blood vessels flowing on the inside of the skull. When the ache kicks in fully, I can feel my heart beat in my head.

There are two ways the pain goes away – a heavy dose of paracetamol (500 mg) or avoiding triggers. Only the latter is 100% effective, and it took me half a decade to figure out what some of the triggers were.

Although it’s recognised as one of the most common causes of disability worldwide, afflicting an est. one billion people, scientists don’t know what causes a migraine. It’s commonly believed to be a mix of environmental and genetic factors, with one twins study from 2007 suggesting the latter’s influence ranges from 34% to 51%. However, this hasn’t prevented researchers from developing medicines to fight it (in much the same way we don’t know how selective serotonin re-uptake inhibitors work but they’re widely prescribed to treat depression).

One study from 2006 noted that there are two prevailing theories about the genesis of migraines united by a common theme: “the generally accepted role of the trigeminal ganglion in the painful phase of migraine”. It is this commonality that a new class of drugs called CGRPR antagonists – including erenumab, galcanezumab and eptinezumab – appears to address. Their mechanism of action throws some light on our understanding of this debilitating syndrome.

The acronym stands for calcitonin gene-related peptide receptor. Calcitonin is a hormone secreted by cells in the thyroid gland; it is responsible for reducing calcium and phosphorous levels in the blood when they rise above a threshold. The peptide in question is involved in the transmission of nociception – an “intense chemical, mechanical or thermal stimulation of the sensory nerve cells”, carried by sensory cells called nociceptors to the brain.

According to one 2008 paper, a migraine could be the result of nociceptors in the brain stem (connected to the trigeminal ganglion by a single sensory root) becoming over-sensitised, with the CGRP involved in maintaining them in that state. Conversely, the CGRPR antagonists work against this state.

Erenumab was developed by Amgen and is being marketed under the name Aimovig. It was approved for use by adults by the FDA on May 17, 2018, following a phase III clinical trial involving 955 subjects over six months. It is to be administered as a subcutaneous injection once a month, at a dose of 70 mg or 140 mg.

According to the trial paper, published in November 2017, “A 50% or greater reduction in the mean number of migraine days per month was achieved for 43.3% of patients in the 70-mg erenumab group and 50.0% of patients in the 140-mg erenumab group, as compared with 26.6% in the placebo group.” But this is not entirely good news because one year’s worth of injections costs $6,900, which translates to Rs 39,000 per month.

Phase III trials for galcanezumab have been completed and Eli Lilly & Co., its maker, is awaiting FDA approval. Alder’s eptinezumab entered phase III trials in August 2017.

Posted in Uncategorized

'Kaala' is not 'Kabali' but questions Rajini's politics more

There are many similarities between Kaala, Pa Ranjith’s second flick with Rajinikanth, and their first film together, Kabali (2016). The thematic one is the most obvious, where Ranjith focuses on class mobility, caste discrimination and social welfare and brings them into mainstream cinema using Rajini as his frontman. In Kabali, this was done using the lens of labour rights and political identity. In Kaala, this has been done using community and political organisation.

Kaala is Kabali‘s successor in spirit. In Kabali, the focus was on the titular character’s rebellion against his presumed overlords, on his refusal to stay down when pushed down, a defiance depicted as the exclusive product of individual perseverance. Kaala is the first-order derivative of this tale of protest: social and political organisation and community work, together with the reminder that individual struggle – while a necessary first step – alone won’t work if we are to break the shackles of power and become free.

Some of the prominent themes that are explored are disenfranchisement through land rights, infighting within family and between members of the same community, commitment to the betterment of others even when suffering personal loss, and sacrificing one’s personal ambitions in order to join a greater cause. Some of these aspects were there in Kabali, too, but not in focus. In Kaala, they are the centrepieces; its tragedies are tragedies of disunity.

In both films, Rajini plays an emancipatory leader of the masses in their fight to get what is rightfully theirs – identity in the first and land in the second. Also in both films, Rajini succeeds powerful men to this role and battles enemies who have felled his predecessors. Curiously, both films also end on an ambiguous note, but this is executed in unsubtle fashion in Kaala.

In Kabali, Rajini was restrained onscreen, made room for other characters, didn’t pull off superhuman feats and didn’t deliver punch dialogues. In Kaala, Ranjith has let Rajini free to be himself, the superstar who beats his assailants to pulp singlehandedly, delivers comebacks in situations that don’t really need them, who – despite having made a show of his age by acknowledging that he’s often a grandfather – sings and dances and works tirelessly for his neighbourhood. While this renewed focus on Rajini will be all too familiar to his fans, it comes at the cost of a film that loses its dedication to a cause, or at least couldn’t stay focused long enough with the same intensity the way Kabali had.

Why the constant comparison to Kabali? Frankly, it is inescapable. Rajinikanth makes two consecutive movies with the same director who, unlike other directors, is in turn using the opportunity to delve into themes Tamil cinema has often exploited for the mass factor. Nonetheless, Kaala is an average production not because most Tamil movies are bad but because Kabali was able to deliver better. Not because Kaala deserves to be treated like a standalone movie in its own right but because it embraced a difficult subject like Kabali had.

Of course, in many other aspects the film is just as good. There is evident attention to detail in sets, as a friend I watched the film with and who had worked in Dharavi said. The cast is mostly good, particularly Easwari Rao as Selvi, to whom Kaala is wedded.

Another way the film is remarkable is in its portrayal of protests. There are people sitting and shouting slogans or marching from place to place but then there are also different kinds of protest music for different kinds of agitation. Ranjith especially infuses elements of hiphop, rap music and breakdancing into the way protesters posture themselves, the way they deliver their message, most importantly in the way different cultural groups of Dharavi meld in their fight.

Notably, three of the more peppy numbers Santhosh Narayanan composed for the film are either not used in full or are but in a way that is supplementary, not complementary (played as the end credits roll). They are Nikal nikal (‘Leave leave’), Katravai patravai (‘Educate and agitate’) and Theruvilakku (‘Street lights’). Together with their lyrics, this suggests they were composed as songs of protest available to use outside the film’s story as well.

All of this together holds up an interesting mirror to Rajini’s political ambitions in real-life, although Kaala was written before he announced that he would be contesting the state assembly elections next year. Many commentators have noticed a stark difference between Rajini’s “spiritual politics” and Ranjith’s Ambedkarism, and it would be undoubtedly interesting to see how the actor/politician squares the two off.

To start with, did Rajini know what he was talking about when he said that? Many think he didn’t. As Kaala opens, a montage plays showing how land has always been an instrument of governance before it became the instrument of imperialism. One frame shows Lord Krishna blowing his conch as Arjuna charges on his chariot on the battlefield of Kurukshetra.

Kaala‘s climax, in similar vein, begins with a line at the start of its second half – “If it is your god’s dharma to take my land away from me, then your god is also my enemy” – and ends with an attempt to recast the Ramayana from Ravana’s point of view (arguably over-focusing on this metaphor as it draws on). It crescendoes with a valorisation of the demon-king’s ability to regrow his heads as Kaala’s followers keep rising in waves to resist Hindu nationalist insurgents burning down their homes.

However, on December 31, 2017, when Rajini announced his decision to enter state politics, he read out a verse from the Bhagavad Gita, about Krishna’s advice to Arjuna that he focus on his labours and not worry about the consequences. So what will Rajini’s labours be?

While it is unclear what Rajini meant by “spiritual politics”, or whether he meant a plank that respects the spirit of civilised politics, there is certainly a difference between his silver-screen persona in Kaala, the first Rajini film in the post-political phase, and the kind of leader he says he is going to be. M.G. Ramachandran never had to contend with this sort of contradiction. If Rajini wants to succeed him, as he did Tamizh Nesan (Kabali) and Vengaiyan (Kaala), then he will have to be more reflexive and question his proximity to nationalist politics.

Posted in Uncategorized

A new fantasy

I’m no artist, nor a scholar of art. I can’t analyse images to pick out patterns. Heck, I think an image is well-crafted only because those commentators I trust have said it. My admiration of formal art is only by proxy, and will stay that way. However, I have an admiration of art that is my own – as we all do – born not out of historiographic analyses but of catharsis. Where I, as we all do, consume a painting or an illustration or a piece of music based on how it makes me feel right away, not how it makes me feel after it has suitably pickled in my consciousness.

For a long time, or at least for as long as I did, that’s how I would compose fantasy – by looking at novel constructions of geometric grammar, staring at strange images and absorbing their essence in ways that I saw fit, in ways that pinged my waking mind. And for these exercises, I relied on a carefully curated group of suppliers: Depthcore, But does it float, Ffffound, Justin Maller, Superfamous.

Fans of any of these creators and aggregators will immediately recognise the aesthetic at hand: moody, feverish, arbitrary, sometimes avant garde. Decidedly situated in the 21st century’s ways of life and meaning, in its offhanded glamorisation of the postmodern. Unmindful of rigour for its sake, and of formal processes and utilitarian order. Mindful of the niche fictions in a world constantly obsessed with waking up to reality. Where even pisspoor handwriting when repeated across the full face of a page is pregnant with beauty.

But there are still rules here, something at work, a fluid electric fence of sorts that keeps images from getting out of control. Perhaps it’s the hand of white people at work. I don’t know.

Ardent fans might even recognise the names. Maller. Vesna Pesich. Alex van Daalen. Ehren Kallman. And of course Folkert Gorter.

I hadn’t visited these parts of the web in a while, perhaps a decade, and decided to return to them today. I was disappointed to find Ffffound had shut in May last year. The others are still active, creating away.

Thomas Manuel once told me a little story about China Miéville. It seems China’s wife was telling him one day that the fridge people were en route to fix their broken appliance. In China’s beautifully fucked-up mind, it seemed as if walking talking refrigerators were coming over.

I love reading Paul Feyerabend because of his dashing style as well as because of his stand against method. An advocacy of unbridled creativity finding place in the practice of science because that’s where unorthodox ideas, solutions untethered from the chains of tradition and principles, are birthed. Not in the confines of method itself.

Beholding the works of the artists named above, and others, and disciplining your mind to come deliberately unhinged, to bring not simply new perspectives but as much as new nonfictions to be… to discover new modes of catharsis, as it were. Their works were the gateway drugs to new. I still remember the high I used to get in college, sitting by my room’s sole window, doused in darkness, looking out into the nowhere where our dorms were, a desert rimmed by city lights in the distance. Soaking in the memories of those images, listening to synthwave or psy trance. Or Rammstein’s Spring on loop.

Das alte Leid. I yearn for those days, and so often that what pieces of them I remember have become mangled by repeated recollection, vandalised by the search for meaning.

… what every artist should attempt to do is shovel down into their own minds, excavate past the sediment of Western civilization that amounts to yet another, larger, school of art, and keep scraping deeper and deeper, all the way back to the beginning. In this view of things, each and every artist crafts a unique creation narrative, chronicles the birth of his or her own private aesthetic. Hence, the best work is not adult, intellectual, and informed; it is primitive, and childish, and raw.

J.C. Hallman

Posted in Uncategorized

Monstrous moonshine

I received an email from a fellow journalist last week with the following subject:

Banks wont recover even half of the Rs 4,000,000,000,000 bad loans of 37 companies

That number with all those zeroes is four trillion. It’s a large number. For example, there are only 30 billion stars in the Large Magellanic Cloud, only 100-400 billion stars in the Milky Way galaxy, only one trillion stars in the Andromeda galaxy.

There’s a famous quote attributed to Richard Feynman, the theoretical physicist, that very large numbers that used to be called astronomical should in fact be called economic. Why? Because at the time he said that, the US national deficit was $100 billion.

Four trillion, however, is a number bursting at its astronomical seam. In fact, consider the entire debt load of the world’s governments: an estimated $200 trillion. It’s not the sort of number we hear every day, in any context, and it’s not a number our cognition is equipped to easily fathom, at least not without notational assistance and some level of abstraction.

This brings me to an interesting anecdote my roommate, a physicist, once shared with me. It involves what’s called the Monster group. It’s a set of numbers organised according to certain rules such that it contains 8 × 10^53 of them. Pause for a moment, let it sink in: the Monster group’s defining rules allow a very large number of numbers to be included, but not an infinite number of them.

How could something so large exist in nature without being all-encompassing at the same time?

This is why, even though the Monster group contains vastly fewer numbers than there are atoms in the universe – by some 26 orders of magnitude – it is infinitely more interesting. While the world’s governments have been arbitrarily borrowing money such that the global debt is several multiples of the global GDP, while it is meaningless to try to understand the universe in terms of the vigintillions of atoms it holds, the Monster group – for all its mind-blowing vastness – is quite well-defined and meaningful.

To truly appreciate why this is so, we must start at what’s called a ‘finite group’.

Consider a set with five elements. The ‘permutation group’ of this set consists of all possible permutations of these five elements, which total 120. If the set had had six elements, then its permutation group would’ve had 720 elements. If the set had had seven elements, then its permutation group would’ve had 5,040 elements. And so forth.
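The permutation-group sizes quoted above are just factorials – a set of n elements can be ordered in n! ways. A quick check:

```python
import math
from itertools import permutations

# A set of n elements has n! permutations, so its permutation group has n! elements.
for n in (5, 6, 7):
    count = len(list(permutations(range(n))))
    assert count == math.factorial(n)
    print(n, count)
# 5 120
# 6 720
# 7 5040
```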

Since the permutation group will always contain a finite number of elements, it’s called a finite group. Similarly, every mathematical group that contains a finite number of elements can be classified as a finite group.

Now, in order to make sense of the different kinds of finite simple groups – the building blocks of all finite groups – that are possible in mathematics, mathematicians came up with a classification scheme. They were able to categorise these groups according to their various mathematical properties. As a result, there are 18 families, each comprising an infinite number of finite simple groups. Then there are 26 groups called sporadic groups, none of which can be fit under any of the 18 families.

The largest sporadic group is called the Monster group, holding 8 × 10^53 numbers. And this is where it gets more interesting.

Meet ‘monstrous moonshine’.

Instead of fumbling with intricate mathematical concepts, I will defer at this point to a 2015 article in Quanta describing the idea (edited for brevity):

In 1978, the mathematician John McKay … had been studying the different ways of representing … the Monster group. … Mathematicians weren’t sure that the group actually existed, but they knew that if it did exist, it acted in special ways in particular dimensions, the first two of which were 1 and 196,883.

McKay … happened to pick up a mathematics paper in a completely different field, involving something called the j-function, one of the most fundamental objects in number theory. Strangely enough, this function’s first important coefficient is 196,884, which McKay instantly recognised as the sum of the monster’s first two special dimensions. …

John Thompson, a Fields medalist, … made an additional discovery. … The j-function’s second coefficient, 21,493,760, is the sum of the first three special dimensions of the Monster: 1 + 196,883 + 21,296,876. It seemed as if the j-function was somehow controlling the structure of the elusive Monster group.
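The numerical coincidences McKay and Thompson spotted reduce to plain arithmetic, which makes them easy to verify:

```python
# First three "special dimensions" of the Monster group, from the quote above.
dims = [1, 196_883, 21_296_876]

assert sum(dims[:2]) == 196_884      # the j-function's first important coefficient
assert sum(dims[:3]) == 21_493_760   # the j-function's second coefficient
print("moonshine sums check out")
```

Of course, the mystery isn’t the addition – it’s that there was no known reason for these two sequences of numbers, from entirely different branches of mathematics, to line up at all.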

The beautiful thing here is that, until the moments of McKay’s and Thompson’s discoveries, mathematicians had no reason to believe the Monster group and the j-function were even remotely related. However, there it was, hinting at a deep and mysterious connection between two distant branches of mathematics. This connection has come to be called monstrous moonshine.

In 1992, another mathematician named Richard Borcherds figured out the nature of this connection. Of all places, he found the answer lurking in string theory – the theory that imagines that “the universe has tiny hidden dimensions, too small to measure, in which strings vibrate to produce the physical effects we experience at the macroscopic scale” (to quote the same Quanta article).

In 2012, three physicists floated an even more bizarre idea: that apart from monstrous moonshine – which bridges group theory, number theory and string theory – there were 23 other moonshines establishing hitherto unknown links between mathematics and physics. This is called the umbral moonshine conjecture, and in 2015, scientists proved that they do exist.

What the hell is going on?

I will stop here, trusting that I’ve led you sufficiently far into a deep, but not bottomless, rabbit hole. 🙂

Posted in Uncategorized

The Keeper of Words

It had to come to this at some point, and here we are finally.

I’ve undertaken a challenge to write one blog post a day – and when I’ve mentioned to my friends and colleagues that I’m doing this, all of their responses have presumed that this is a difficult thing to do. It surely seems difficult to explore one new idea every day and write about it, but I suggest you look at the bigger picture instead: it’s not at all implausible or difficult for us, all people, to be able to come up with 365 ideas in a year.

We generate dozens over drinks with a few good friends on Saturday evenings and toss them by the conversational wayside as impractical or outlandish. They’re ideas nonetheless, and are worth writing about in some form.

So it’s quite easy, especially once you get in the groove, to write one blog post a day. What is not easy about this exercise… rather, the true face of the challenge presents itself to you on that one day when you have no ideas to write about. Anybody can write a post when there’s an idea waiting to be written about, and there are always ideas. The undertaking is a real, actual challenge on that one day – the first day – when you’re forced to confront its fundamental essence, its eigenvalue: the writing.

Good writing is the soul of any story, good or bad, strange or charming, real or fictional. Like an organism, that soul is contained within a body defined by various elements of the story, a sinew of ideas coalescing over the fluid form of language to give it definite shape and, when well executed, an empathetic purpose.

The first time you’re brought face to face with a body that has had its flesh and blood and bones stripped away, it could feel as if you see nothing – a debris field with a blankness at its heart. However, a story is never so forgiving. You can claw away at the material at the interface between words and grammar on one side and the reader’s mind on the other, but you will never be left with nothing. The words will still be there, bared naked.

On the day, especially the first day, when ideas have deserted you, the only way to survive is to face the words you have birthed, put them gently down one after another, and try not to be the first to blink.

On this day, you must write for writing’s sake, you must write without syntactic hold or grasp, you must write to judge yourself more harshly than you ever have.

You must write without fear, you must write even though purpose fights to get away in the middle of every sentence, you must write to prove that – if nothing else – you are the keeper of words.

You must write.


The three times intermediate-mass black holes were first discovered

There’s a report in Science dated June 8 with the headline ‘Middleweight black holes found at last’. The abstract describes an effort by an “international team” of astronomers to find intermediate-class black holes, which weigh more than tens of solar masses but less than millions of solar masses.

This is a bit confusing because labels in the natural and social sciences have fixed and well-defined meanings. So “intermediate-class” means something specific. When the first LIGO announcement of a black hole merger was made in February 2016, Karan Jani, a member of the LIGO Scientific Collaboration, told me that “intermediate-class” black holes weighing 20-10,000 solar masses generate the loudest signals in the instruments.

Doesn’t this mean black holes fitting the label intermediate-class were found in 2016 itself?

Further, this isn’t even the second time intermediate-class black holes have been announced discovered. In February 2017, astronomers from the Harvard-Smithsonian Centre for Astrophysics announced that they’d found a black hole weighing ~2,300 solar masses at the centre of a globular cluster called 47 Tucanae.

Another thing that concerns me here is how the data is being sliced. You have astronomers claiming discoveries every day and, given a recent spate of articles about the neutron-star merger discovery and the BICEP2 ‘cosmic blunder’, you know astronomy and cosmology are ultra-competitive realms of scientific endeavour. As a result, in such cases, there is often a real risk that someone out there will claim to discover something that is not really significant at all.

For example, it would seem intermediate-class black holes have been discovered thrice (by LIGO, at 47 Tucanae, and when the “international team” mentioned above found them at the centres of 300 galaxies). At this rate, it’s quite possible black holes of this kind have been ‘discovered’ even more often. Which one was the first? Or is the ‘intermediate-class’ itself going to be cut up into three or more parts to accommodate all these claims? Additionally, if you’re wary about using the term ‘intermediate-class’, then you should know that the term ‘middleweight’ also has precedent.

A press release from the Harvard-Smithsonian Centre for Astrophysics in February 2017 had this line:

Astronomers expect that intermediate-mass black holes weighing 100 – 10,000 Suns also exist, but so far no conclusive proof of such middleweights has been found.

Lee Billings, a science writer for the Scientific American, wrote in June 2017:

Most of the black holes in LIGO’s mergers have been middleweights, being heavier than that 20–solar mass limit but much lighter than the supermassive variety, raising questions about their origins and relationship to the two well-studied populations of black holes. (emphasis added)

It’s likely that in both these cases the authors are using the term ‘middleweight’ to refer to the intermediate class in a non-technical way – but then so is the Science article.

Everybody remembers the infamous case of Brian Wansink, who tortured his data and sliced his results into smaller and smaller pieces that he published as separate papers. Nobody wants that sort of thing to happen because what Wansink did wasn’t science; he was simply hacking the system for personal gain. However, if we can’t reach consensus on what intermediate-class really means and, following that, when the first black hole of this class was found, we might never see an end to scientific papers and research groups claiming that their authors/members have discovered a new class of something.

There are a few issues that could keep such consensus from being reached; I can think of three.

1. The intermediate class comprises four orders of magnitude: hundreds, thousands, tens of thousands and hundreds of thousands of solar masses. I’m no astronomer but this still sounds like a very wide swath to homogenise, especially if professional astronomers are going to claim the discovery of a black hole of each order of magnitude is significant. (E.g. the abstract of the latest discovery assumes intermediate-class black holes weigh thousands to millions of solar masses, excluding the hundreds.)

2. LIGO used gravitational waves to spot middleweight black holes; the 47 Tucanae group used radio and X-ray data from large observational studies; the “international team” used galaxy spectra data collected in the archives. Since each group has used a different technique to find what they have, does each effort get to stake its own, distinct claim to primacy?

3. The Wansink outcome – where astronomers slice the data really thin with the (reasonable) assumption that differences between one black hole and the next are the result of distinct processes instead of a common process with stochastic fluctuations. Of course, we have no way to tell until much later, after we’ve made lots more observations, whether the way we’re slicing the data in 2018 actually makes sense. But in the same vein, there should be a way to tell if, based on observations made in the last few decades, we’re classifying black holes right.

(To be clear, while the first and second issues described could imply that astronomers are doing something wrong, the third doesn’t; it’s just something to consider.)


The Higgs boson and the top quark

There were two developments in the news last week that were very important but at the same time didn’t get mainstream attention: Microsoft acquiring GitHub and the LHC collaboration’s measurement of the strength of the Higgs boson/top quark interaction.

Before either of these developments could be pushed onto the front page (or equivalent), what people could really have used was a “why does it matter” kind of piece. Paul Ford at Bloomberg had just such a piece explaining why it would be nice if more of us gave a damn about GitHub’s future. But I couldn’t find the equivalent for the top quark announcement.

To me, the biggest reason to give a damn about the ATLAS and CMS detectors on the LHC measuring the strength of the interaction between the Higgs boson and the top quark is very simple. The Higgs boson – rather, the Higgs mechanism in which it participates – is what gives a fundamental particle its mass. The more strongly the boson couples with a particle, the higher that particle’s mass.

The top quark is the heaviest known fundamental particle, which means the Higgs boson couples to it the strongest. It weighs ~172 GeV/c², which is nearly 1.4-times the mass of the Higgs boson itself, about 183-times the mass of a proton and almost the same as an entire atom of tungsten. If the fundamental particles were all the Angry Birds, the top quark would be Terence.
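You can check these comparisons yourself with a few round figures. A minimal sketch – the Higgs, proton, atomic-mass-unit and vacuum-expectation values below are my assumed round numbers (PDG-style figures), not from the post, and the Yukawa relation at the end is the standard tree-level formula:

```python
# Back-of-the-envelope check of the mass comparisons above.
# All masses in GeV/c^2; assumed round figures, except the ~172 GeV top mass.
TOP = 172.5        # top quark
HIGGS = 125.1      # Higgs boson (assumed)
PROTON = 0.938     # proton (assumed)
AMU = 0.9315       # one atomic mass unit in GeV (assumed)
tungsten = 183.84 * AMU  # a tungsten-184 atom, mass number x amu

print(round(TOP / HIGGS, 2))   # vs "nearly 1.4-times the Higgs"
print(round(TOP / PROTON))     # vs "about 183-times a proton"
print(round(tungsten, 1))      # vs "almost an entire atom of tungsten"

# The tree-level Yukawa relation m = y * v / sqrt(2), with the Higgs
# vacuum expectation value v ~ 246 GeV, gives a top coupling strikingly
# close to 1 -- one reason this particular interaction interests physicists.
v = 246.0
y_top = (2 ** 0.5) * TOP / v
print(round(y_top, 2))
```

Running it, the ratios land at roughly 1.38, 184 and 171 GeV respectively, matching the figures in the paragraph above; the top Yukawa coupling comes out at ~0.99.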

So by studying the strength and the nature of this coupling, physicists can learn more about both particles as well as the peculiarities of the Higgs mechanism. Additionally, of the six types of quarks, only the top quark is known to never hadronise – i.e. come together with other quarks to form a composite particle – because it decays before it gets the chance. Protons and neutrons are each called a hadron because they’re made up of up and down quarks. The charm, strange and bottom quarks also hadronise.

In fact, all the other quarks can be (indirectly) observed only in the presence of other quarks, leaving the top quark to be the sole ‘bare’ quark in nature – further entrenching it as an object of interest among particle physicists.

Further refining what we know about the top quark and the Higgs boson also helps physicists decide what colliders of the future should be able to do, and what questions they should be able to answer. And the sooner they know the better: particle accelerators and colliders are very hard to build and can take many years, so it pays to keep an eye on the ball at all times instead of regretting not installing a feature later.