Thanks for joining me!
Good company in a journey makes the way seem shorter. — Izaak Walton

James English had a wonderful piece in Public Books recently, discussing how the Nobel Prize for literature works.
What will it take for everyone to see that the Nobel Prizes for physics, chemistry and medicine work the same way? And that they don’t have to be assailed by public controversies to be acknowledged as imperfect prizes, whose status was seeded by a similar, if not the same, “admixture of capitals”?
There’s nothing to these prizes if not their prestige. But while that’s something any prize should aspire to have, it’s the wider zeitgeist of the Nobel Prizes’ appreciation that makes them interesting. Perhaps, as English argues, accepting this brokenness could pave the way to a more culturally appropriate celebration of what the prizes stand for, one that doesn’t quietly raise its glass to traditionalism on every December 10.
At first glance, this tweet appears to state something obvious:
Of course the three-dimensional arrangement of links and nodes, and the space they occupy, influences the ultimate form of the network. But being a tweet, it doesn’t capture a pivotal detail in the paper: the scientists who authored it aren’t talking about the length of the links but the girth. The girth appears to have a nontrivial impact on the network to the extent that, if you discounted it, the network would look significantly different.
In fact, this paper presents some interesting consequences of link thickness that can’t be intuited from first principles.
The paper’s authors are all network theorists themselves, and one of them is Albert-László Barabási, one of the world’s foremost experts on the topic.
Scientists study networks by organising their internal components as nodes and links. In the example of a triangle, there are three nodes – a.k.a. vertices – and three links – a.k.a. sides. In a tetrahedral network, there are four nodes and six links. Using this simple classification scheme, scientists have found that certain things about a network can be explained only by analysing it in terms of its connections, not by its overall geometry.
As a result, network theory looks at networks using some parameters that don’t exist in the study of other systems. These include the number of connections at a node, network centrality, link betweenness, etc. And because the geometric properties are no longer of interest, network theorists assume that the nodes are infinitesimally small and the links are one-dimensional.
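To make that concrete, here is a minimal sketch of this purely topological view, using Python’s networkx library on a made-up four-node graph (the graph and its labels are hypothetical, chosen only to illustrate the parameters named above):

```python
# A made-up network: a triangle of nodes A, B, C with one extra node D hanging off C.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")])

print(dict(G.degree()))                   # number of connections at each node
print(nx.closeness_centrality(G))         # one measure of network centrality
print(nx.edge_betweenness_centrality(G))  # link (edge) betweenness
```

Note that the nodes here are just labels and the links have no length or thickness – precisely the simplification the paper discussed below questions.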
For example, a 2013 study used network analysis to determine the performance of batsmen who played in the Ashes that year. Satyam Mukherjee, the study’s author, built a network where each node was a batsman and each link was a partnership between two batsmen. The link length scaled according to the number of runs they scored together.
In this setup, Mukherjee found that the in-strength parameter denoted a player’s contributions in partnerships; closeness, their ability to play in different positions in the batting lineup; and centrality, the degree of their involvement in partnerships. The Google PageRank algorithm could be used to determine each player’s overall importance.
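As a hedged illustration of that setup – the player names and run totals below are invented, and I’m using a weighted degree as a stand-in for Mukherjee’s in-strength – the same networkx library can compute PageRank over a partnership network:

```python
import networkx as nx

# Hypothetical partnerships: (batsman, batsman, runs scored together)
partnerships = [
    ("Cook", "Root", 120),
    ("Root", "Bell", 95),
    ("Bell", "Pietersen", 60),
    ("Cook", "Bell", 45),
]

G = nx.Graph()
for a, b, runs in partnerships:
    G.add_edge(a, b, weight=runs)

print(dict(G.degree(weight="weight")))  # total partnership runs per batsman (a stand-in for in-strength)
print(nx.pagerank(G, weight="weight"))  # PageRank as a proxy for overall importance
```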
Mukherjee found the following English players scored the highest on each count (England won the Ashes 3-0 that year):

However, the current paper argues that by ignoring the geometric characteristics of the links and nodes, theorists might have been missing out on an important feature that determines why networks take the forms that they do. This may not apply to the problem of analysing cricket statistics, but it certainly does to “neurons in the brain, three-dimensional integrated circuits and underground hyphal networks,” where “the nodes and links are physical objects that cannot intersect or overlap with each other.”
The one geometric characteristic found to have a defining influence on the ways in which the network was and wasn’t allowed to grow was the link’s width.
Using computer simulations, the scientists found that there was a link thickness threshold. Below the threshold – with small link width, called the weakly interacting regime – the network was able to keep links from crossing each other by small rearrangements that didn’t affect its overall form. Above the threshold, in the strongly interacting regime, the link thickness began to have an effect on link length and curvature, and the links also became more closely packed together.
This is fascinating in two ways.
First: The study shows that the way the network grows depends not just on which nodes it wants to connect but also on how they’re connected. This could mean, for example, that network theorists will have to factor in the physical properties of materials involved in link-building to fully understand the network itself.
Second: Link thickness also affects the space between links, with links placed closer to each other as they become thicker. As a result, networks in the strongly interacting regime will be harder to construct using 3D-printing than those in the weakly interacting regime, in which links and nodes are more clearly separated.
Where does the threshold itself lie? It’s not a single, fixed link-width value such that the network properties on either side of it are markedly different. Instead, it’s more like a transition that occurs across a predictable range of values. In general terms, the threshold is the zone where the total volume occupied by the links approaches the total volume occupied by the nodes.
One chart from the paper (below) visualises it well. The first box on top shows the number of links that cross each other as link thickness increases. Note the box below it, which shows how the average link length changes as link thickness increases.

The colours represent two different network models. Orange lines denote a network in the elastic-link model, where the nodes’ positions are fixed and the links are free to move. The blue lines denote a network in the fully elastic model, in which both nodes and links can move freely.
To quote from the paper:
We determine the origin of the transition in the geometry of the networks by estimating the transition point r_L^c. When the links are much thinner than the node repulsion range r_N, the layout is dominated by the repulsive forces between the nodes, which together occupy the volume V_N = 4√2 N r_N³/3. When the volume occupied by the links becomes comparable to V_N, the layout must change to accommodate the links. This change induces the transition from the weakly interacting regime to the strongly interacting regime.
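As a back-of-the-envelope illustration of that argument (not the paper’s actual calculation), we can equate the node volume from the quote with the total volume of the links, treating each link as a cylinder; every number below is made up:

```python
import math

# Hypothetical network: N nodes, M links, node repulsion range r_N, average link length l_avg.
N, M = 100, 300
r_N = 1.0     # arbitrary units
l_avg = 3.0   # average link length, in the same units

V_N = 4 * math.sqrt(2) * N * r_N**3 / 3  # volume occupied by the nodes, as in the quote above

# Link radius at which M cylindrical links of length l_avg occupy a comparable volume:
r_L = math.sqrt(V_N / (M * math.pi * l_avg))
print(f"Estimated transition link thickness: {r_L:.2f} (in units of r_N)")
```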
The interestingness parade from this study has one more float, which draws from the bottommost box in the chart above. The scientists found that a network behaved more like a solid in the weakly interacting regime and like a gel in the strongly interacting one. This isn’t immediately evident because thicker links usually suggest higher robustness and structural rigidity. However, because they also influence link length and curvature, the network responds differently to external (physical) forces compared to networks with more slender, straighter links.
Specifically, the corresponding stress response was measured using the Cauchy stress tensor – a matrix of nine numbers used to calculate the stress at a single point in three-dimensional space. In the weakly interacting regime, the stress response was dominated by node-node and link-link interactions, and the network prevented the stress due to the external force from spreading evenly in all directions. This is a feature of solid materials.
In the strongly interacting regime, by contrast, the stress spreads more uniformly through the network because the volume is dominated by links. So the stress response is dictated by the elastic properties of individual links and by link-link interactions, mimicking those of gels.
In sum, as a network transitions from the weakly interacting to the strongly interacting regime, the link length begins to increase faster, links become more curved, and the network begins to behave more like a gel even as it becomes less amenable to 3D-printing.
Jamie Farnes, a theoretical physicist at Oxford University, recently had a paper published that claimed the effects of dark matter and dark energy could be explained by replacing them with a fluid-like substance that was created spontaneously, had negative mass and disobeyed the general theory of relativity. As fantastic as these claims are, Farnes’s paper made the problem worse by failing to explain the basis on which he was postulating the existence of this previously unknown substance.
But that wasn’t the worst of it. Oxford University published a press release suggesting that Farnes’s paper had “solved” the problems of dark matter/energy and stood to revolutionise cosmology. It was reprinted by PhysOrg; Farnes himself wrote about his work for The Conversation. Overall, Farnes, Oxford and the science journalists who popularised the paper failed to situate it in the right scientific context: that, more than anything else, it was a flight of fancy whose coattails his university wanted to ride.
The result was disaster. The paper received a lot of attention in the popular science press and among non-professional astronomers, so much so that the incident had to be dehyped by Ethan Siegel, Sabine Hossenfelder and Wired UK. You’d be hard-pressed to find a better countermeasures team.

Of course, the science alone wasn’t the problem: the reason Siegel, Hossenfelder and others had to step in was because the science journalists failed to perform their duties. Those who wrote about the paper didn’t check with independent experts about whether Farnes’s work was legit, choosing instead to quote directly from the press release. It’s been acknowledged in the past – though not sufficiently – that university press officers who draft the releases need to buck up; but, more importantly, the universities need to have better policies about what roles their press releases are supposed to perform.
However, this isn’t to excuse the science journalists but to highlight two things. First: they weren’t the sole points of failure. Second: instead of looking at this episode as a network where the nodes represent different points of failure, it would be useful to examine how failures at some nodes could have increased the odds of a failure at others.
Of course, if the bad science journalists had been replaced by good ones, this problem wouldn’t have happened. But ‘good’ and ‘bad’ are neither black/white nor permanent characterisations. Some journalists – often those pressed for time, who aren’t properly trained or who simply have bad mandates from their superiors in the newsroom – will look for proxies for goodness instead of performing the goodness checks themselves. And when these proxy checks fail, the whole enterprise comes down like a house of cards.
The university’s name is one such proxy; and in this case, ‘Oxford University’ is a pretty good one. Another is that the paper was published in a peer-reviewed journal.
In this post, I want to highlight two others that’ve been overlooked by Siegel, Hossenfelder, etc.
The first is PhysOrg, which has been a problem for a long time, though it’s not entirely to blame. What many people don’t seem to know is that PhysOrg reprints press releases. It undertakes very little science writing, let alone science journalism, of its own. I’ve had many of my writers – scientists and non-scientists alike – submit articles with PhysOrg used here and there as a citation. They assume they’re quoting a publication that knows what it’s doing but what they’re actually doing is straight-up quoting press releases.
The little bit that is PhysOrg’s fault is that it doesn’t state anywhere on its website that most of what it puts out is unoriginal, unchecked, hyped content that may or may not have a scientist’s approval and certainly doesn’t have a journalist’s. So buyers beware.

The second is The Conversation. Unlike PhysOrg, these guys actually add value to the stories they publish. I’m a big fan of them, too, because they amplify scientists’ voices – an invaluable action/phenomenon in countries like India, where scientists are seldom heard.
The way they add value is that they don’t just let the scientists write whatever they’re thinking; instead, they’ve an editorial staff composed of people with PhDs in the relevant fields as well as experience in science communication. The staff helps the scientist-contributors shape their articles, and fact-check and edit them. There have been one or two examples of bad articles slipping through their gates but for the most part, The Conversation has been reliable.
HOWEVER, they certainly screwed up in this case, and in two ways. In the first way, they screwed up from the perspective of those, like me, who know how The Conversation works, by straightforwardly letting us down. Something in the editorial process got short-circuited. (The regular reader will spot another giveaway: The Conversation usually doesn’t use headlines that admit the first-person PoV.)
Further, Wired also fails to mention something The Conversation itself endeavours to clarify with every article: that Oxford University is one of the institutions that funds the publication. I know from experience that such conflicts of interest haven’t interfered with its editorial judgment in the past, but now it’s something we’ll need to pay more attention to.
In the second way, The Conversation failed those people who didn’t know how it works by giving them the impression that it was a journalism outlet that saw sense in Farnes’s paper. For example, one scientist quoted in Wired’s dehype article says this:
Farnes also wrote an article for The Conversation – a news outlet publishing stories written by scientists. And here Farnes yet again oversells his theory by a wide margin. “Yeah if @Astro_Jamie had anything to do with the absurd text of that press release, that’s totally on him…,” admits Kinney.
“The evidence is very much that he did,” argues Richard Easther, an astrophysicist at Auckland University. What he means by the evidence is that he was surprised when he realised that the piece in The Conversation had been written by the scientist himself, “and not a journo”.
Easther’s surprise here is unwarranted but it exists because he’s not aware of what The Conversation actually does. And like him, I imagine many journalists and other scientists don’t know what The Conversation’s editorial model is.
Given all of this, let’s take another look at the proxy-for-reliability checklist. Some of the items on it we discussed earlier – including the name of the university – still carry points, and with good reason, although none of them by themselves should determine how the popular science article should be written. That should still follow the principles of good science journalism. However, “article in PhysOrg” has never carried any points, and “article in The Conversation” used to carry some points, but those now fall to zero.
Beyond the checklist itself, if these two publications want to improve how they are perceived, they should do more to clarify their editorial architectures and why they are what they are. It’s worse to give a false impression of what you do than to score zero points on the checklist. On this count, PhysOrg is guiltier than The Conversation. At the same time, if the impression you were designed to provide is not the impression readers are walking away with, the design can be improved.
If it isn’t, they’ll simply assume more and more responsibility for the mistakes of poorly trained science journalists. (They won’t assume responsibility for the mistakes of ‘evil’ science journalists, though I doubt that group of people exists.)
Laser light has been used to cool atoms down to near absolute zero. The technique is simple yet versatile. (And it includes some history involving a little-known Indian physicist.)
Laser light is shone on an atom that’s made to move towards the source of light. When the atom absorbs a photon, it slows down because of the law of conservation of momentum. The atom then emits the photon in a different direction.
By Newton’s third law, it should then receive a ‘kick’ in the direction opposite to this emission. But because the photons will be emitted in various random directions, their total ‘kick’ will be far smaller than the brakes applied by swallowing photons from just one direction.
By carefully tuning the laser’s frequency and intensity, scientists can ensure that the atom absorbs and emits enough photons to slow down. And when an atom slows down, it simply means – in the language of thermodynamics – that it has cooled down.
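To get a feel for the size of each ‘brake’, here’s a back-of-the-envelope calculation; the sodium atom and the 589 nm wavelength are a common textbook example, not details from the study discussed below:

```python
# How much one absorbed photon slows a sodium atom, by conservation of momentum.
h = 6.626e-34        # Planck's constant, J·s
wavelength = 589e-9  # sodium D-line, m
m_Na = 3.82e-26      # mass of a sodium atom, kg

photon_momentum = h / wavelength  # p = h/λ
delta_v = photon_momentum / m_Na  # velocity kick per absorbed photon
print(f"Velocity change per photon: {delta_v * 100:.1f} cm/s")  # ~3 cm/s
```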
This entire process involves a coupling between light and matter, nothing else. The atom absorbs the photons and then spits them out – i.e. the atom interacts with electromagnetic radiation. The resulting drop in temperature is simply the result of the atom losing its kinetic energy. There are no other forms of energy involved.
However, because laser-cooling is such a cool technique, scientists have been curious about whether it could be used to slam the brakes on the kinetic energy of objects other than atoms. In a new study, published November 27, that’s what scientists say they have done (preprint here).
And this time, what they have done might just be cooler: they have used lasers to slow down sound waves.
The technique is the same – and equally simple – except for one small change. In the case of atoms, photons mediated the interaction between the laser light and the atom. In the case of sound waves, there is a second mediator: Brillouin scattering.

We know sound in the air is simply a series of blocks of compressed and rarefied air. Another way to describe this is as a wave. The air is less dense in the rarefied parts and more dense in the compressed parts, so the sound is effectively a density wave. When sound passes through a solid, it does so through a similar density wave.
All waves carry some energy (according to the Planck-Einstein relation E = hν, where h is Planck’s constant and ν is the wave’s frequency). For example, the electromagnetic wave carries energy that, at certain frequencies, we call light or heat. The energy carried by a density wave moving through a solid is, at some frequencies, perceived by the human ear as sound.
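For a sense of scale, here’s that E = hν arithmetic for the two kinds of waves involved in the experiment described below; the 6 GHz figure for the phonons appears later in this post, while ~200 THz for near-infrared light is my own rough choice:

```python
h = 6.626e-34          # Planck's constant, J·s

E_phonon = h * 6e9     # a 6 GHz acoustic phonon
E_photon = h * 200e12  # a ~200 THz near-infrared photon

print(f"Phonon energy: {E_phonon:.2e} J")
print(f"Photon energy: {E_photon:.2e} J")
print(f"Ratio: {E_photon / E_phonon:,.0f}")  # the photon carries tens of thousands of times more energy
```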
So if photons from a laser can remove energy from this density wave, they will effectively reduce the energy of the sound waves. We just need to figure out how to create a coupling between the laser photons and the density waves. This isn’t hard because part of the answer is in the language itself.
How do you couple a particle to a wave? You can’t – unless you can describe both of them as waves or both of them as particles. This is possible in physics through the wave-particle duality. You’ll remember from high school that light is both waves and particles. It’s just two different ways to describe the transport of electromagnetic energy.
You can do this with sound as well. It can be described as a density wave or a particle moving through a medium – two ways to describe the transport of acoustic energy. These ‘sound particles’ are called phonons (cf. quasiparticles).
So to cool a sound wave using lasers, you need to couple the laser photons with the phonons. Put another way, one packet of one kind of energy has to transform into a packet of a different kind of energy. The scientists accomplished this by colliding photons and phonons in a waveguide (a fancy term for any medium that’s carrying a wave).
When a photon is scattered off a phonon, it can either lose some of its energy to the sound particle or gain energy from it. When the scattering is such that the photon gains energy, the phonon slows down according to the same mechanism at play between photons and an atom – the law of conservation of momentum. This interaction is called Brillouin scattering.
In their experiment, the scientists, from Northern Arizona University and Yale University, used a silicon waveguide 2.3 cm long and carrying sound waves at 6 GHz. When they shone laser light of a frequency in the near-infrared part of the EM spectrum, they observed that the waveguide cooled by 30 K due to interactions between the photons and its phonons.
They used other techniques to make sure that this was the case, and that the material didn’t cool in other ways. For one, they measured the duration for which phonons of certain frequencies persisted in the system. For another, the phonons were found to slow down (a.k.a. “cool down” in thermodynamic-speak) only in one direction – the direction in which the laser was incident – and not others.
There are two more ways in which this experiment is interesting.
First, the scientists found that they didn’t have to set up a closed space, typically called an optomechanical cavity, to perform this experiment. Previous experiments involving light-matter coupling have required the use of such cavities to produce amplified effects. In this experiment, the effect was pronounced in a (relatively) open space.
Second, the scientists were able to show that they could influence different groups of phonons in the continuum of the solid simply by changing the frequency of laser light being shot at them.
The applications are obvious. Many devices in our lives, from ultra-sensitive instruments studying gravitational waves to machines that are used regularly, carry unnecessary vibrations that interfere with their purposes. The new study suggests that they can all be damped out simply by using lasers tuned to the right frequencies.
Almost exactly a year ago, I wrote a post about how quickly the discovery of black-hole mergers through gravitational waves was becoming run-of-the-mill.
All of the gravitational wave detection announcements before this were accompanied by an embargo, lots of hype building up, press releases from various groups associated with the data analysis, and of course reporters scrambling under the radar to get their stories ready. There was none of that this time. This time, the LIGO scientific collaboration published their press release with links to the raw data and the preprint paper (submitted to the Astrophysical Journal Letters) on November 15. I found out about it when I stumbled upon a tweet from Sean Carroll.
This week, the LIGO team may just have one-upped itself. On December 1, Shane Larson, a physicist at Northwestern University, Chicago, and member of the LIGO Scientific Collaboration, wrote on his blog that the LIGO and Virgo teams were releasing a joint catalogue of their gravitational-wave detections to date. And in that catalogue, Larson drew readers’ attention to the presence of not one, not two, but four new black-hole merger events.

He continued:
What stands out the most in the new LIGO catalog? We are still letting the implications settle in, but the most important thing the new events do is it makes our estimate of the population of black holes in the Universe more accurate, and we’ve started to examine those implications in a new study that is being released in tandem with this announcement.
This study is available here.
To (shamelessly) quote myself once more:
In the near future, the detectors – LIGO, Virgo, etc. – are going to be gathering data in the background of our lives, like just another telescope doing its job. The detections are going to stop being a big deal: we know LIGO works the way it should. Fortunately for it, some of its more spectacular detections … were also made early in its life. What we can all look forward to now is reports of first-order derivatives from LIGO data.
An online culture zine called Phantom Sway just discovered Mongolian folk rock and can’t stop raving about it (I found the article because 3 Quarks Daily picked up on it). But the superficial review fails to convey the depth of this genre – like all genres, it has one – and restricts its attention to one band, The HU, ignoring its breadth.
You’re probably wondering why I expected better, or thinking that I’m being too harsh. Both would be right: I’m just peeved because I’ve been following Tuvan throat-singing for years, and it’s unfair that the one time a somewhat widely read publication picked up on it (3QD, not Phantom Sway), they chose to limit themselves to such a cursory review.
In fact, the most important thing Phantom Sway does, and does badly, is lump all of throat-singing into one group called “Mongolian throat singing”. There are actually multiple types, depending on the tones to be achieved as well as the regions in which they’re practised – and multiple proponents of multiple sub-genres as a result.
Throat-singing itself is a feature of multiple cultures, from Canada to Tibet to Japan. My favourite throat-singer, Albert Kuvezin, employs a form called kargyraa, a part of the Tuvan throat-singing of southern Siberia. The other forms in this classification are khoomei (recognised by UNESCO) and sygyt. They each have their own sub-styles as well, and many of them differ markedly rather than just subtly.
Other forms of throat-singing from other regions include the khai of the Altai Republic and the now-extinct rekuhkara of Hokkaido.
The band that Phantom Sway picked, The HU a.k.a. Hunnu Rock, is a self-proclaimed proponent of what it calls ‘New Mongolian Rock’, seemingly shunning the throat-singing based classification.
If you’re into this type of music, you should check out Kuvezin’s discography, the punk-rock band Yat-Kha and the heavy-metal band Tengger Cavalry. My personal favourites are Hartyga – their album ‘Agitator’ exemplifies their brand of “psychedelic ethno-rock” –, the artisanal Huun Huur Tu, and a selection of less well-known singers/groups including Ay-Kherel.
There are two good options if you want to explore more of the zeitgeist of this musical genre: the music of Kongar-ol Ondar and – even better – the Tuvan short film Shu-De, produced by Michael Faulkner and released in 2013.
But whatever you do, please don’t start and stop with The HU. They’re new, mainstream, and it isn’t clear if they see themselves as exponents of throat-singing or instead – as some have pointed out – those of a particular politics.
Dennis Overbye, one of the New York Times‘s star science writers (the other being Carl Zimmer), had a curious piece up November 19 about why “we should leave some mysteries alone” and what mysteries he would like to leave alone personally. He wrote,
Jim Peebles, the famed cosmologist at Princeton University, once told me that if someone offered him a tablet of stone that held all the answers to the mysteries of the universe — how old it is, where it’s going — he would throw it away. The fun, he said, is in the attempt to find out. So here are some stone tablets that I would throw away.
The ‘curious’ aspect was made more so by the fact that Overbye was the author: he has a reputation as a lucid and articulate science writer. This piece, however, is kind of a swamp.
The fundamental basis for Overbye’s provocative suggestion is that we “might be disappointed by the Big Reveal”. I’m not sure I agree with it – although it is in fact Overbye’s opinion and there is nothing I can or want to do about it.
I would choose differently for two reasons.
First: We will always have fantasies about the things around us, about the things we do or do not know of. Overbye says he does not want to know what is inside a black hole because finding out might force him to stop believing that a pair of socks he lost might be there. This is a perfectly harmless belief today. And I think it will be a perfectly harmless belief even after we find out what black-hole guts are made of.
Overbye doesn’t write serious science articles about his socks being inside a black hole faraway even though we don’t know what is inside black holes. This is because “we don’t know” is also a state of knowledge. It is not a void, an empty vessel to be freely populated with our whimsies, but an area carefully fenced-off and with restricted entry. When “we don’t know” isn’t stopping Overbye from assuming his socks are there, there is no reason “we do know” should.
If you are going to say, “It is because we might know how hot it is inside a black hole,” let me stop you right there. A logical breakdown is not helping anyone – and certainly not Overbye. Otherwise, his fantasy would have collapsed the moment he stopped to consider how his socks got inside the black hole in the first place. He is free to believe, as he does, that his socks are just there.
I personally believe the cheela really exist and that there are some kinds of stars out there whose outer surface is simply a curtain hiding a very advanced alien civilisation living on the inside. Because why not?
Second: I also firmly believe there will always be something we don’t know we don’t know – a.k.a. ‘unknown unknowns’ – and/or something we just don’t know – a.k.a. an unanswered question. We might be disappointed by the next “Big Reveal”, and the one after that, and the one after that, but I’m willing to bet it is turtles all the way down. There is never going to be a last “Big Reveal”. Which means we can always hope that the next reveal will be a big one, and we can always nurture this or that fantasy.
Now, the more interesting thing I wanted to discuss about Overbye’s piece was one line towards the end. Like many parts of his piece, it has a problem – and this one’s problem is elitism:
If we’re not smart enough to figure out [some futuristic tech by ourselves but instead do so by decrypting a note of alien origin], we don’t deserve to survive.
I realise this is a species-wide aspiration that Overbye is articulating and he probably means that we should deserve what we have. But it is too laconic for a line in its situation because it glosses over human politics and suggests, at least to me, that every person only deserves to have what they have earned for themselves. If this is what he, or anyone, actually believes, then I do wish some kind of alien intervention proves them wrong, in the hope that it levels the ‘playing field’.
We don’t deserve what we earn, we deserve what is right. It is hard to define this “right”; it could stand for different things in different contexts and cultures. The British writer George Monbiot provides a fitting example: ‘private luxury, public sufficiency’ might have been reasonable words to live by in a fully egalitarian society but in the Anthropocene epoch, they need to be ‘private sufficiency, public luxury’. ‘What is right’ is also certainly fair(er) because it addresses our moral responsibility to eradicate inequalities instead of pandering to the pseudo-superiority of biological smartness.
I would certainly enjoy reading a fantasy novel about an alien message being discernible only by adivasis because of some special vestment they acquired thousands of years ago, and for them to suddenly ascend to the top of the political pyramid. Would the adivasis have “figured it out”? We don’t know. But would the adivasis have deserved it? Absolutely. (Is everyone happy about it? Of course not, and for various reasons. Read the book to find out.)
What this means for Overbye’s wish is that we would deserve to survive even if we figured out future technologies by reading an alien note instead of figuring them out ourselves. This is because our own entirely human world already works this way. The inequalities we have perpetrated ensure that some people may never experience a better quality of life without quick and important interventions that empower them to leap over systemic barriers. Whether that’s affirmative action or an extraterrestrial doodle doesn’t matter.
Even a very charitable interpretation of Overbye’s line above doesn’t come off properly. Will someone somewhere ever solve some of humanity’s problems to its overall benefit, and make the solutions available to everyone? Definitely not. The prevailing world order does not admit it. In fact, as things stand, one of the wishes expressed in his article might just come true but not in a way Overbye might like. He writes:
And if we ever do stumble upon a message from some extraterrestrial civilisation, I don’t want to know what it says. Knowing that aliens exist and imagining what they were up to would be enough to keep us busy for centuries.
We might not know that aliens exist even if they do. The Atlantic recently had a wonderful feature about how the Chinese are likelier than anyone else to make first contact. If this does come to be – assuming it hasn’t already – what’s to say they won’t just keep the message to themselves? They have no obligation to share it with all of humanity, and their national government has cultivated the kind of authority necessary to keep such information a secret for however long it deems necessary.
In all, it seems Overbye’s reality is already populated with things that would be fantasies for most of the rest of the world, and the line he draws between what is already true (“what we do know”) and what he has a choice to believe (“what we don’t know”) is blurred by socio-political brushstrokes that he seems blind to. As a result, the choices he makes about which “stone tablets” he would throw away to preserve the mysteries surrounding them quickly become pernicious to those of us for whom many of these tablets are what we need to enjoy the kind of life that Overbye already has.
In this world – of not just the Chinese but more generally of those doing an atrocious job of balancing economic development with social justice – some stone tablets just should not be thrown away, sir.
There are two broad problems I’ve seen so far with writers/journalists quoting activists in science, health and environment stories as experts. (This post deals entirely with the Indian context.)
First: Who has time for activism? The answer almost always is someone on the mainland, far away from the place to which their activism actually applies, typically in a city. As a result, the activist is often unaware of ground realities, tends to be more idealistic than pragmatic and (often) has greater access to the media than people of other demographics.
Second: Who are activists? This is a prompt about what makes the activists ‘experts’. The answer is ‘nothing’ because activism is not on the same plane as expertise. However, reporters often conflate the two mantles because activists are more vocal, as well as louder, about what they believe should be the outcome whereas experts are typically quieter and harder to access.
§
These attributes spotlight the overarching responsibility of science journalism to interrogate and understand expertise, its forms and its function. IMO, the simplest way to conduct these exercises is to apply the editorial edict of “show, don’t tell” to all aspects of all science stories – including the quotes. Following this guideline could be good practice for everyone from rookies to pros, but it’s aimed mostly at rookies.
An important outcome of this is that it clarifies why expertise is better used to provide opinion, not fact, because the former is a variety of “show” and the latter, of “tell”. For example, you don’t use an expert’s quotes in a story to lay out how CRISPR works. That’s your responsibility as a science writer/journalist anyway. Instead, you ask them what they think about gene-edited human embryos, and probe further down that line.
In fact, assuming there’s a clear distinction between facts and opinions at all times, it’s important to separate experts from their facts and marshal them towards expressing their opinions as informed by those facts. Two reasons why. 1) Facts are immutable by definition, can be assimilated from more than one source (assuming availability) and don’t need expertise to be invoked. 2) Discussing opinions allows us to better scrutinise what this person believes instead of knows while silencing the prestige this person may have accrued for knowing. (That’s the popular conception of the scientific enterprise anyway.)
Generally, the edict works well to unravel expertise because it helps the writer know where the line is beyond which expertise transforms into authoritarianism (or behind which it devolves into naïvety). In pithier terms, it forces the writer to work harder to unpack a story by treading the fine line between respecting the authority of experts and not relying on it too much at the same time. It has the added advantages of allowing the writer to keep from editorialising and making it easier for the reader to assimilate their own (reasonable) takeaways.
So by all means quote a physicist who is also an activist with Greenpeace or whatever in a story about trophy-hunting. “Show, don’t tell” will help you cover your base as well as keep the expert from taking up any more space in your story than is permissible. But this is for the rookie – and maybe the pro working in uncharted territory. The pro who is also in their comfort zone shouldn’t be quoting a physicist in the first place. One reason they are pros is that they know which problems should be solved using a given method.
You are taught in school that protons and neutrons are particles. However, unless you get into physics research later in life, the likeliest way you are going to find out that they are technically quasiparticles is through the science media. So here it is. 😄
Setting aside their electric charge, protons and neutrons are very similar particles. They have almost the same mass and they’re made up of exactly the same kinds of smaller particles. These smaller particles are called quarks and gluons. Three quarks, bound together by gluons, make up each proton or neutron. That is, protons and neutrons are technically quasiparticles because they are clumps of smaller particles that are grouped together and behave in a collective and predictable way.
This grainier picture of protons – and neutrons, but we’ll stick to protons because they’re both so similar – is necessary to understand their mass. In classical mechanics, the weight of a bag of oranges is equal to the weight of the bag plus the weight of all the oranges. But in quantum mechanics, and particle physics in particular, the mass of a proton need not be equal to the mass of the quarks that make it up (gluons are massless). This is because there are other energetic phenomena that ‘supply’ mass through the mass-energy equivalence (E = mc²).

Each proton weighs 938.2 MeV/c² (a unit of mass unique to particle physics). It is made up of two up quarks – 2.4 MeV/c² each – and one down quark – 5 MeV/c². That is just 9.8 MeV/c² together. Where does the remaining 928.4 MeV/c², or 98.95%, come from?
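That arithmetic, written out (all masses in MeV/c², taken from the figures above):

```python
m_proton = 938.2         # MeV/c^2
m_up, m_down = 2.4, 5.0  # MeV/c^2

m_quarks = 2 * m_up + m_down   # 9.8 MeV/c^2 from the three constituent quarks
missing = m_proton - m_quarks  # 928.4 MeV/c^2
print(f"Quark masses account for {m_quarks / m_proton:.2%} of the proton's mass")  # ~1.05%
print(f"The remaining {missing / m_proton:.2%} is unaccounted for")                # ~98.95%
```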
It comes mostly from the effects of one of the four fundamental forces, the strong nuclear force. A new paper authored by physicists from the US and China claims to show for the first time the precise contribution of each of these effects to the proton’s overall mass.
Since the 20th century, physicists have determined how much protons and neutrons weigh, and how much the quarks weigh, to a large degree of precision using experiments. But this hasn’t helped us understand why protons weigh as much as they do, because of the ‘bag of oranges’ problem. Additionally, quarks acquire their masses through the Higgs mechanism (involving the Higgs boson) whereas protons don’t because they are not fundamental particles. So there is something else that kicks in between the quark and proton layers.

To understand what this is, physicists need to perform calculations – on paper and on computers – using the theory of these particles. There are two theories, i.e. frameworks for studying the interactions of particles, that they could use here. One is called the Standard Model of particle physics, which strives to predict the properties of all known elementary particles (including the Higgs boson) in a single framework. The other is the framework of quantum chromodynamics, or QCD. It strives specifically to explain the behaviour of the strong nuclear force and the quarks it acts on (the force is mediated by gluons).
While previous studies to determine the proton’s mass using theoretical methods have been attempted, they have focused on using the Standard Model route, which is less difficult (but not significantly so) and involves more assumptions. The US/Chinese study takes the QCD route. This is useful because it will help physicists understand how contributions to the proton’s mass are rooted in concepts specific to QCD.
QCD is a very strange and difficult theory, and its effects show up as weird properties. For example, one effect is called colour confinement: it is impossible to tear apart clumps of quarks and gluons below the Hagedorn temperature (2,000,000,000,000 K, one of two known ‘absolute hot’ temperatures). It arises because of the properties of the energy field – a.k.a. the gluonic field – between two nearby quarks.
Heisenberg’s uncertainty principle states that you can’t know both the momentum and the position of a particle to arbitrary precision at the same time. But colour confinement actually confines the position of quarks – so the uncertainty principle suggests that their momentum can be quite large. Physicists have previously calculated that this momentum could contribute a mass (through Einstein’s mass-energy-momentum equivalence) of a few hundred MeV/c². Now we’re getting somewhere, although we still have a ways to go.
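Here’s a rough version of that estimate; the ~1 femtometre confinement radius is my assumption for the size of a proton, and this is only an order-of-magnitude sketch:

```python
hbar_c = 197.3  # MeV·fm, a convenient combination of ħ and c
r_conf = 1.0    # fm, roughly the proton's radius (assumed)

# From Δx·Δp ~ ħ, confining a quark within r_conf implies a momentum (and energy) scale of:
p_scale = hbar_c / r_conf
print(f"~{p_scale:.0f} MeV")  # a few hundred MeV, as stated above
```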
The US/Chinese scientists used a technique called lattice QCD to take these calculations to the next level. Lattice QCD was developed because QCD is so difficult, and it is so difficult because the strong nuclear force is so strong. In fact, it is the strongest of the four forces, and prevents neutron stars from collapsing into black holes. The studies of other particles, such as quantum electrodynamics of electrons, don’t require specialised techniques because the force between electrons is not so strong.
More importantly, like most areas of modern physics, the real innovation in the present study comes from advancements in computing techniques (see here and here for examples from astronomy and materials science, respectively). The US/Chinese scientists developed new algorithms to solve lattice QCD problems better and also reduce errors. (According to their paper: “We present a simulation strategy to calculate the proton mass decomposition”.) As a result, they have elucidated four distinct contributions to the proton’s mass, from the following sources:
The interesting thing here is that the quark condensate is different from the other three sources because it is the only one made up entirely of just quarks. It contributes only ~9% to the proton’s mass. Also, earlier in this post, we saw that just adding up the masses of the constituent quarks yielded 1.05% of the proton’s mass. The new calculation says it is about 9%. The remaining 7.95% appears to come from virtual strange quarks – i.e. strange quarks popping in and out of existence in the vacuum of space – and the up and down quarks’ interactions with them.

The other three sources involve the dynamics of quark-gluon interactions and the strong nuclear force that keeps them confined inside a proton. Quark energy relates to the kinetic energies of the confined quarks; gluonic field strength, to the kinetic energies of the confined gluons. They contribute 32% and 37% respectively. The anomalous gluonic contribution has to do with complex interactions between the constituent quarks and all virtual quarks (i.e. all charm, strange, bottom and top quarks popping in and out of existence in the vacuum). It pitches in with about 23%.
In sum: 1 proton’s mass = 9% quark condensate + 32% quark energy + 37% gluonic field + 23% anomalous gluonic contribution. (That’s actually 101% but becomes 100% if we use less approximate, more accurate values.)
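The same decomposition, as a quick check:

```python
contributions = {
    "quark condensate": 9,
    "quark energy": 32,
    "gluonic field strength": 37,
    "anomalous gluonic contribution": 23,
}
print(sum(contributions.values()))  # 101 – an artefact of rounding; the exact values sum to 100
```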
We could also slice this thus: 1 proton’s mass = 9% quark condensate + 91% quark-gluon dynamics. Imagine there is an alternate universe where all the quarks have zero mass. The quark condensate contributes only ~9% to the proton’s mass, so in this alternate universe, protons and neutrons would still weigh 91% as much as protons and neutrons in our universe. This is possible thanks once again to the effects and strength of the strong nuclear force.
Let us take this just one step further. 1) Each proton and neutron weighs almost 1,900 times as much as an electron. 2) Protons, neutrons and electrons make up all the matter in the universe. 3) Electrons aren’t made up of quarks and gluons (i.e. they are not quasiparticles). Taken together, the non-quark contribution effectively makes up ~89% of all the mass of all the matter in the universe.