Posted in Scicomm

Why physical networks aren’t like their on-paper counterparts

At first glance, this tweet appears to state something obvious:

Of course the three-dimensional arrangement of links and nodes, and the space they occupy, influences the ultimate form of the network. But being a tweet, it doesn’t capture a pivotal detail in the paper: the scientists who authored the paper aren’t talking about the length of the links but about their girth. The girth appears to have such a nontrivial impact on the network that, if you discounted it, the network would look significantly different.

In fact, this paper presents some interesting consequences of link thickness that can’t be intuited from first principles.

The paper’s authors are all network theorists themselves, and one of them is Albert-László Barabási, one of the world’s foremost experts on the topic.

Scientists study networks by organising their internal components as nodes and links. In the example of a triangle, there are three nodes – a.k.a. vertices – and three links – a.k.a. sides. In a tetrahedral network, there are four nodes and six links. Using this simple classification scheme, scientists have found that certain things about a network can be explained only by analysing it in terms of its connections, not by its overall geometry.

As a result, network theory looks at networks using some parameters that don’t exist in the study of other systems. These include the number of connections at a node, network centrality, link betweenness, etc. And because the geometric properties are no longer of interest, network theorists assume that the nodes are infinitesimally small and the links are one-dimensional.
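
To make this concrete, here’s a small sketch in Python (using the networkx library; the example is mine, not from any of the papers discussed here) that builds the tetrahedral network from above and computes a few of these purely connection-based parameters.

```python
# A minimal sketch (not from the paper): build the tetrahedral network from
# the example above, hang one extra node off a vertex, and compute a few
# purely connection-based parameters. None of this depends on geometry.
import networkx as nx

G = nx.complete_graph(4)   # four nodes, six links: the tetrahedral network
G.add_edge(3, 4)           # a fifth node connected to only one vertex

print(dict(G.degree()))                    # connections at each node
print(nx.closeness_centrality(G))          # a measure of network centrality
print(nx.edge_betweenness_centrality(G))   # 'link betweenness' for each link
```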

For example, a 2013 study used network analysis to determine the performance of batsmen who played in the Ashes that year. Satyam Mukherjee, the study’s author, built a network where each node was a batsman and each link was a partnership between two batsmen. Each link was weighted according to the number of runs the pair scored together.

In this setup, Mukherjee found that the in-strength parameter denoted a player’s contributions in partnerships; closeness, their ability to play in different positions in the batting lineup; and centrality, the degree of their involvement in partnerships. The Google PageRank algorithm could be used to determine each player’s overall importance.

Mukherjee found the following English players scored the highest on each count (England won the Ashes 3-0 that year):

  • Centrality – Ian Bell
  • Closeness – Matt Prior
  • In-strength – Jonathan Trott
  • PageRank – Graeme Swann
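
For the curious, here’s a toy version of this kind of analysis in Python – with invented batsmen and runs, not Mukherjee’s actual data or his exact network construction – just to show how in-strength and PageRank fall out of a weighted, directed partnership network.

```python
# A toy sketch of a partnership network with invented batsmen and runs,
# only to illustrate the kind of analysis Mukherjee performed; it is not
# his data or his exact network construction.
import networkx as nx

G = nx.DiGraph()
partnerships = [
    ("Bell", "Prior", 60), ("Prior", "Bell", 60),
    ("Trott", "Bell", 45), ("Bell", "Trott", 45),
    ("Swann", "Prior", 30), ("Prior", "Swann", 30),
]
for a, b, runs in partnerships:
    G.add_edge(a, b, weight=runs)

# In-strength: total weight of incoming links (runs scored in partnerships)
in_strength = dict(G.in_degree(weight="weight"))
pagerank = nx.pagerank(G, weight="weight")  # overall 'importance' of each batsman

print(in_strength)
print(pagerank)
```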

However, the current paper argues that by ignoring the geometric characteristics of the links and nodes, theorists might have been missing out on an important feature that determines why networks take the forms that they do. This may not apply to the cricket-statistics problem, but it certainly does to “neurons in the brain, three-dimensional integrated circuits and underground hyphal networks,” where “the nodes and links are physical objects that cannot intersect or overlap with each other.”

The one geometric characteristic found to have a defining influence on the ways in which the network was and wasn’t allowed to grow was the link’s width.

Using computer simulations, the scientists found that there was a link thickness threshold. Below the threshold – with small link width, called the weakly interacting regime – the network was able to keep links from crossing each other by small rearrangements that didn’t affect its overall form. Above the threshold, in the strongly interacting regime, the link thickness began to have an effect on link length and curvature, and the links also became more closely packed together.

This is fascinating in two ways.

First: The study shows that the way the network grows depends not just on which nodes it wants to connect but also on how they’re connected. This could mean, for example, that network theorists will have to factor in the physical properties of materials involved in link-building to fully understand the network itself.

Second: Link thickness also affects the space between links, with links placed closer to each other as they become thicker. As a result, networks in the strongly interacting regime will be harder to construct using 3D-printing than those in the weakly interacting regime, in which links and nodes are more clearly separated.

Where does the threshold itself lie? It’s not a single, fixed link-width value such that the network properties on either side of it are markedly different. Instead, it’s more like a transition that occurs across a predictable range of values. In general terms, the threshold is the zone where the total volume occupied by the links approaches the total volume occupied by the nodes.

One chart from the paper (below) visualises it well. The first box on top shows the number of links that cross each other as link thickness increases. Note the box below it, which shows how the average link length changes as link thickness increases.

Source: https://doi.org/10.1038/s41586-018-0726-6

The colours represent two different network models. Orange lines denote a network in the elastic-link model, where the nodes’ positions are fixed and the links are free to move. The blue lines denote a network in the fully elastic model, in which both nodes and links can move freely.

To quote from the paper:

We determine the origin of the transition in the geometry of the networks by estimating the transition point r_L^c. When the links are much thinner than the node repulsion range r_N, the layout is dominated by the repulsive forces between the nodes, which together occupy the volume V_N = 4√2·N·r_N³/3. When the volume occupied by the links becomes comparable to V_N, the layout must change to accommodate the links. This change induces the transition from the weakly interacting regime to the strongly interacting regime.
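
To get a feel for this argument, here’s a back-of-the-envelope sketch (my numbers, not the paper’s, and the paper’s exact expression for the transition point may differ) that treats each link as a thin cylinder and asks at what link radius the total link volume catches up with V_N.

```python
# A back-of-the-envelope sketch of the argument quoted above, with made-up
# numbers; the paper's exact expression for the transition point may differ.
import math

N = 100              # number of nodes (assumed)
L = 300              # number of links (assumed)
r_N = 1.0            # node repulsion range (arbitrary units)
mean_link_len = 5.0  # average link length (assumed)

# Volume jointly occupied by the nodes, as quoted from the paper
V_N = 4 * math.sqrt(2) * N * r_N**3 / 3

# Model each link as a cylinder of radius r_L; the transition should occur
# roughly where the total link volume becomes comparable to V_N.
r_Lc = math.sqrt(V_N / (L * math.pi * mean_link_len))
print(f"V_N ≈ {V_N:.1f}, estimated transition link radius r_Lc ≈ {r_Lc:.3f}")
```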

The interestingness parade from this study has one more float, which draws from the bottommost box in the chart above. The scientists found that a network behaved more like a solid in the weakly interacting regime and like a gel in the strongly interacting one. This isn’t immediately evident because thicker links usually suggest higher robustness and structural rigidity. However, because they also influence link length and curvature, the network responds differently to external (physical) forces compared to networks with more slender, straighter links.

Specifically, the corresponding stress response was measured using the Cauchy stress tensor – a matrix of nine numbers used to calculate the stress at a single point in three-dimensional space. In the weakly interacting regime, the stress response was dominated by node-node and link-link interactions, and the network prevented the stress due to the external force from spreading evenly in all directions. This is a feature of solid materials.
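
If ‘a matrix of nine numbers’ sounds abstract, here’s a tiny illustration with made-up values: the tensor, and the traction – the force per unit area – it predicts across a plane with a chosen normal.

```python
# An illustrative Cauchy stress tensor with invented numbers (say, in
# pascals); it is not data from the paper. The traction vector t = sigma . n
# gives the force per unit area transmitted across a plane with unit normal n.
import numpy as np

sigma = np.array([
    [2.0, 0.5, 0.0],
    [0.5, 1.0, 0.3],
    [0.0, 0.3, 1.5],
])  # symmetric 3x3 stress tensor at a point

n = np.array([0.0, 0.0, 1.0])   # unit normal of the plane we probe
t = sigma @ n                   # traction acting across that plane
print(t)                        # -> [0.  0.3 1.5]
```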

In the strongly interacting regime, by contrast, the stress spreads more uniformly through the network because the volume is dominated by links. So the stress response is dictated by the elastic properties of individual links and by link-link interactions, mimicking those of gels.

In sum, as a network transitions from the weakly interacting to the strongly interacting regime, the link length begins to increase faster, links become more curved, and the network begins to behave more like a gel even as it becomes less amenable to 3D-printing.

Posted in Scicomm

Using light to cool sound

Laser light has been used to cool atoms down to near absolute zero. The technique is simple yet versatile. (And it includes some history involving a little-known Indian physicist.)

Laser light is shone on an atom that’s moving towards the source of the light. When the atom absorbs a photon, it slows down because of the law of conservation of momentum. The atom then emits the photon in a different direction.

By Newton’s third law, it should then receive a ‘kick’ in the direction opposite to this emission. But because the photons will be emitted in various random directions, their total ‘kick’ will be far smaller than the brakes applied by swallowing photons from just one direction.

By carefully tuning the laser’s frequency and intensity, scientists can ensure that the atom absorbs and emits enough photons to slow down. And when an atom slows down, it simply means – in the language of thermodynamics – that it has cooled down.
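
Here’s a rough sense of the numbers involved, using rubidium-87 and its 780 nm cooling transition as a stand-in (textbook values, not from any particular experiment):

```python
# A sketch of the momentum bookkeeping behind laser cooling, using
# rubidium-87 and its 780 nm transition as an example; numbers are
# textbook values, not from any particular experiment.
h = 6.626e-34           # Planck's constant, J·s
wavelength = 780e-9     # cooling laser wavelength for Rb-87, m
m_atom = 87 * 1.66e-27  # mass of a Rb-87 atom, kg

photon_momentum = h / wavelength          # p = h/λ
dv_per_photon = photon_momentum / m_atom  # velocity lost per absorbed photon

# An atom moving at ~300 m/s (roomish temperature) needs tens of thousands
# of absorption-emission cycles to be brought nearly to rest.
print(f"velocity kick per photon ≈ {dv_per_photon*1e3:.1f} mm/s")
print(f"photons needed to stop a 300 m/s atom ≈ {300/dv_per_photon:.0f}")
```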

This entire process involves a coupling between light and matter, nothing else. The atom absorbs the photons and then spits them out – i.e. the atom interacts with electromagnetic radiation. The resulting drop in temperature is simply the result of the atom losing its kinetic energy. There are no other forms of energy involved.

However, because laser-cooling is such a cool technique, scientists have been curious about whether it could be used to slam the brakes on the kinetic energy of objects other than atoms. In a new study, published November 27, that’s what scientists say they have done (preprint here).

And this time, what they have done might just be cooler: they have used lasers to slow down sound waves.

The technique is the same – and equally simple – except for one small change. In the case of atoms, the interaction is directly between the laser photons and the atom. In the case of sound waves, there is an extra mediating process: Brillouin scattering.

We know sound in the air is simply a series of blocks of compressed and rarefied air. Another way to describe this is as a wave. The air is less dense in the rarefied parts and more dense in the compressed parts, so the sound is effectively a density wave. When sound passes through a solid, it does so through a similar density wave.

All waves carry some energy (according to the Planck-Einstein relation: E = hν, where h is Planck’s constant and ν is the wave’s frequency). For example, the electromagnetic wave carries energy that, at certain frequencies, we call light or heat. The energy carried by a density wave moving through a solid is, at some frequencies, perceived by the human ear as sound.

So if photons from a laser can be used to remove energy from the density wave, they will effectively reduce the energy of the sound waves. We just need to figure out how to create a coupling between the laser photons and the density waves. This isn’t hard because part of the answer is in the language itself.

How do you couple a particle to a wave? You can’t – unless you can describe both of them as waves or both of them as particles. This is possible in physics through the wave-particle duality. You’ll remember from high school that light is both waves and particles. It’s just two different ways to describe the transport of electromagnetic energy.

You can do this with sound as well. It can be described as a density wave or a particle moving through a medium – two ways to describe the transport of acoustic energy. These ‘sound particles’ are called phonons (cf. quasiparticles).

So to cool a sound wave using lasers, you need to couple the laser photons with the phonons. Put another way, one packet of one kind of energy has to transform into a packet of a different kind of energy. The scientists accomplished this by colliding photons and phonons in a waveguide (a fancy term for a structure that guides a wave along a path).

When a photon is scattered off a phonon, it can either lose some of its energy to the sound particle or gain energy from it. When the scattering is such that the photon gains energy, the phonon slows down according to the same mechanism at play between photons and an atom – the law of conservation of momentum. This interaction is called Brillouin scattering.

In their experiment, the scientists, from Northern Arizona University and Yale University, used a silicon waveguide 2.3 cm long and carrying sound waves at 6 GHz. When they shone laser light with a frequency in the near-infrared part of the EM spectrum on it, they observed that the sound waves in the waveguide cooled by 30 K thanks to interactions between the photons and its phonons.
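
A quick back-of-the-envelope comparison shows how lopsided this exchange is. The 6 GHz phonon frequency is from the study; the ~1.55 µm laser wavelength is my assumption, typical of silicon-waveguide experiments.

```python
# A rough energy comparison for the photon-phonon (Brillouin) interaction.
# The 6 GHz phonon frequency comes from the study; the ~1.55 µm laser
# wavelength is an assumption typical of silicon waveguide experiments.
h = 6.626e-34   # Planck's constant, J·s
c = 3.0e8       # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

E_photon = h * c / 1.55e-6   # near-infrared photon
E_phonon = h * 6e9           # 6 GHz acoustic phonon

print(f"photon energy ≈ {E_photon/eV:.2f} eV")
print(f"phonon energy ≈ {E_phonon/eV*1e6:.1f} µeV")
# In anti-Stokes scattering the photon walks away with this extra ~25 µeV,
# and the phonon population – i.e. the sound – loses it.
print(f"fractional frequency shift ≈ {E_phonon/E_photon:.1e}")
```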

They used other techniques to make sure that this was the case, and that the material didn’t cool in other ways. For one, they measured the duration for which phonons of certain frequencies persisted in the system. For another, the phonons were found to slow down (a.k.a. “cool down” in thermodynamic-speak) only in one direction – the direction in which the laser was incident – and not others.

There are two more ways in which this experiment is interesting.

First, the scientists found that they didn’t have to set up a closed space, typically called an optomechanical cavity, to perform this experiment. Previous experiments involving light-matter coupling have required the use of such cavities to produce amplified effects. In this experiment, the effect was pronounced in a (relatively) open space.

Second, the scientists were able to show that they could influence different groups of phonons in the continuum of the solid simply by changing the frequency of laser light being shot at them.

The applications are obvious. Many devices in our lives, from ultra-sensitive instruments studying gravitational waves to machines that are used regularly, carry unnecessary vibrations that interfere with their purposes. The new study suggests that they can all be damped out simply by using lasers tuned to the right frequencies.

Posted in Scicomm

The trouble with activism as expertise

There are two broad problems I’ve seen so far with writers/journalists quoting activists in science, health and environment stories as experts. (This post deals entirely with the Indian context.)

First: Who has time for activism? The answer almost always is someone on the mainland, far away from the place to which their activism actually applies, typically in a city. As a result, the activist is often unaware of ground realities, tends to be more idealistic than pragmatic and (often) has greater access to the media than people of other demographics.

Second: Who are activists? This is a prompt about what makes the activists ‘experts’. The answer is ‘nothing’ because activism is not on the same plane as expertise. However, reporters often conflate the two mantles because activists are more vocal – and louder – about what they believe the outcome should be, whereas experts are typically quieter and harder to access.

§

These attributes spotlight the overarching responsibility of science journalism to interrogate and understand expertise, its forms and its function. IMO, the simplest way to conduct these exercises is to apply the editorial edict of “show, don’t tell” to all aspects of all science stories – including the quotes. Following this guideline could be good practice for everyone from rookies to pros, but it’s aimed mostly at rookies.

An important outcome of this is that it clarifies why expertise is better used to provide opinion, not fact, because the former is a variety of “show” and the latter, of “tell”. For example, you don’t use an expert’s quotes in a story to lay out how CRISPR works. That’s your responsibility as a science writer/journalist anyway. Instead, you ask them what they think about gene-edited human embryos, and probe further down that line.

In fact, assuming there’s a clear distinction between facts and opinions at all times, it’s important to separate experts from their facts and marshal them towards expressing their opinions as informed by those facts. Two reasons why. 1) Facts are immutable by definition, can be assimilated from more than one source (assuming availability) and don’t need expertise to be invoked. 2) Discussing opinions allows us to better scrutinise what this person believes instead of knows while silencing the prestige this person may have accrued for knowing. (That’s the popular conception of the scientific enterprise anyway.)

Generally, the edict works well to unravel expertise because it helps the writer know where the line is beyond which expertise transforms into authoritarianism (or behind which it devolves into naïvety). In pithier terms, it forces the writer to work harder to unpack a story by treading the fine line between respecting the authority of experts and not relying on it too much at the same time. It has the added advantages of allowing the writer to keep from editorialising and making it easier for the reader to assimilate their own (reasonable) takeaways.

So by all means quote a physicist who is also an activist with Greenpeace or whatever in a story about trophy-hunting. “Show, don’t tell” will help you cover your base as well as keep the expert from taking up any more space in your story than is permissible. But this is for the rookie – and maybe the pro working in uncharted territory. The pro who is also in their comfort zone shouldn’t be quoting a physicist in the first place. One reason they are pros is because they know which problems should be solved using a given method.

Posted in Scicomm

Understanding the proton's mass – and then the universe's

You are taught in school that protons and neutrons are particles. However, unless you get into physics research later in life, the likeliest way you are going to find out that they are technically quasiparticles is through the science media. So here it is. 😄

Setting aside their electric charge, protons and neutrons are very similar particles. They have almost the same mass and they’re made up of exactly the same kinds of smaller particles. These smaller particles are called quarks and gluons. Three quarks, bound together by gluons, make up each proton or neutron. That is, protons and neutrons are technically quasiparticles because they are clumps of smaller particles that are grouped together and behave in a collective and predictable way.

This grainier picture of protons – and neutrons, but we’ll stick to protons because they’re both so similar – is necessary to understand their mass. In classical mechanics, the weight of a bag of oranges is equal to the weight of the bag plus the weight of all the oranges. But in quantum mechanics, and particle physics in particular, the mass of a proton need not be equal to the mass of the quarks that make it up (gluons are massless). This is because there are other energetic phenomena that ‘supply’ mass through the mass-energy equivalence (E = mc²).

My amazing illustration of the ‘bag of oranges’ problem.

Each proton weighs 938.2 MeV/c² (a unit of mass unique to particle physics). It is made up of two up quarks – 2.4 MeV/c² each – and one down quark – 5 MeV/c². That is just 9.8 MeV/c² together. Where does the remaining 928.4 MeV/c², or 98.95%, come from?
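
If you’d like to check the arithmetic, here it is spelled out using the post’s round figures:

```python
# The 'bag of oranges' arithmetic from the paragraph above, using the
# post's round figures for the quark masses (in MeV/c²).
m_proton = 938.2
m_up, m_down = 2.4, 5.0

m_quarks = 2 * m_up + m_down   # two up quarks + one down quark
missing = m_proton - m_quarks

print(f"quark rest masses: {m_quarks} MeV/c²  "
      f"({m_quarks/m_proton:.1%} of the proton)")
print(f"unaccounted for:   {missing} MeV/c²  "
      f"({missing/m_proton:.1%} of the proton)")
```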

It comes mostly from the effects of one of the four fundamental forces, the strong nuclear force. A new paper authored by physicists from the US and China claims to show, for the first time, the precise contribution of each of these effects to the proton’s overall mass.

Since the 20th century, physicists have determined how much protons and neutrons weigh, and how much the quarks weigh, to a large degree of precision using experiments. But this hasn’t helped explain why protons weigh as much as they do, because of the ‘bag of oranges’ problem. Additionally, quarks acquire their masses through the Higgs mechanism (involving the Higgs boson) whereas protons don’t, because they are not fundamental particles. So there is something else that kicks in between the quark and proton layers.

An amazing schematic illustrating the need for theoretical calculations.

To understand what this is, physicists need to perform calculations – on paper and on computers – using the theory of these particles. There are two theoretical frameworks, i.e. ways of studying the interactions of particles, that they could use here. One is the Standard Model of particle physics, which strives to predict the properties of all known elementary particles (including the Higgs boson) in a single framework. The other is quantum chromodynamics, or QCD, which strives specifically to explain the behaviour of the strong nuclear force and the quarks it acts on (the force is mediated by gluons).

While previous studies to determine the proton’s mass using theoretical methods have been attempted, they have focused on using the Standard Model route, which is less difficult (but not significantly so) and involves more assumptions. The US/Chinese study takes the QCD route. This is useful because it will help physicists understand how contributions to the proton’s mass are rooted in concepts specific to QCD.

QCD is a very strange and difficult theory, and its effects show up as weird properties. For example, one effect is called colour confinement: it is impossible to tear apart clumps of quarks and gluons below the Hagedorn temperature (2,000,000,000,000 K, one of two known ‘absolute hot’ temperatures). It arises because of the properties of the energy field – a.k.a. the gluonic field – between two nearby quarks.

Heisenberg’s uncertainty principle states that you can’t know the momentum and position of a particle with the same precision at the same time. But colour confinement actually confines the positions of quarks – so the uncertainty principle suggests that their momenta can be quite large. Physicists have previously calculated that this momentum could contribute a mass (through Einstein’s mass-energy-momentum equivalence) of a few hundred MeV/c². Now we’re getting somewhere, although we still have a ways to go.
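
Here’s that back-of-the-envelope estimate, assuming a round value for the proton’s radius:

```python
# A back-of-the-envelope version of the confinement argument above: squeeze
# a quark into a proton-sized region and the uncertainty principle alone
# implies a momentum of a few hundred MeV/c. The proton radius used here
# is an assumed round value.
hbar_c = 197.3   # MeV·fm, a convenient combination of constants
delta_x = 0.8    # fm, roughly the proton's charge radius (assumed)

delta_p_c = hbar_c / delta_x   # momentum × c, in MeV
print(f"confinement momentum scale ≈ {delta_p_c:.0f} MeV/c")
```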

The US/Chinese scientists used a technique called lattice QCD to take these calculations to the next level. Lattice QCD was developed because QCD is so difficult, and it is so difficult because the strong nuclear force is so strong. In fact, it is the strongest of the four forces, and prevents neutron stars from collapsing into black holes. The study of other particles – such as electrons, through quantum electrodynamics – doesn’t require such specialised techniques because the force between electrons is not so strong.

More importantly, like most areas of modern physics, the real innovation in the present study comes from advancements in computing techniques (see here and here for examples from astronomy and materials science, resp.). The US/Chinese scientists developed new algorithms to solve lattice QCD problems better and also reduce errors. (According to their paper: “We present a simulation strategy to calculate the proton mass decomposition”.) As a result, they have elucidated four distinct contributions to the proton’s mass, from the following sources:

  • Quark condensate
  • Quark energy
  • Gluonic field strength energy
  • Anomalous gluonic contribution

The interesting thing here is that the quark condensate is different from the other three sources because it is the only one made up entirely of just quarks. It contributes only ~9% to the proton’s mass. Also, earlier in this post, we saw that just adding up the masses of the constituent quarks yielded 1.05% of the proton’s mass. The new calculation says it is about 9%. The remaining 7.95% appears to come from virtual strange quarks – i.e. strange quarks popping in and out of existence in the vacuum of space – and from the up and down quarks’ interactions with them.

The quark structure of a proton. The wiggly lines represent gluons. Credit: Jacek rybak/Wikimedia Commons, CC BY-SA 4.0

The other three sources involve the dynamics of quark-gluon interactions and the strong nuclear force that keeps them confined inside a proton. Quark energy relates to the kinetic energies of the confined quarks and gluonic field strength, to the kinetic energies of the confined gluons. They contribute 32% and 37% respectively. The anomalous gluonic contribution has to do with complex interactions between the constituent quarks and all virtual quarks (i.e. all charm, strange, bottom and top quarks popping in and out of existence in the vacuum). It pitches in with about 23%.

In sum: 1 proton’s mass = 9% quark condensate + 32% quark energy + 37% gluonic field + 23% anomalous gluonic contribution. (That’s actually 101% but becomes 100% if we use less approximate, more accurate values.)

We could also slice this thus: 1 proton’s mass = 9% quark condensate + 91% quark-gluon dynamics. Imagine there is an alternate universe where all the quarks have zero mass. The quark condensate contributes only ~9% to the proton’s mass, so in this alternate universe, protons and neutrons would still weigh 91% as much as protons and neutrons in our universe. This is possible thanks once again to the effects and strength of the strong nuclear force.

Let us take this just one step further. 1) Each proton and neutron weighs about 1,840 times as much as an electron. 2) Protons, neutrons and electrons make up all the (ordinary) matter in the universe. 3) Electrons aren’t made up of quarks and gluons (i.e. they are not quasiparticles). Altogether, the non-quark contribution effectively makes up ~89% of all the mass of all the matter in the universe.

Posted in Scicomm

The story of dust

What is dust?

It feels ridiculous just asking that question sitting in India. Dust is everywhere. On the roads, in your nose, in your lungs. You lock up your house, go on a month-long holiday and come back, and there’s a fine patina on the table. It’s inside your laptop, driving the cooling fan nuts.

It is also in the atmosphere, in orbit around Earth, in outer space even. It makes up nightmarish storms on Mars. Philip Pullman and Steven Erikson have written books fantasising about it. Dust is omnipresent. (The only dustless places I’ve seen are in stock photos strewn across the internet.)

But what exactly is it, and where did it all come from?

§

Earth

Dust is fine particulate matter. It originates from a tremendous variety of sources. The atmospheric – or aeolian – dust we are so familiar with is composed of small particles sheared off of solid objects. For example, fast-blowing winds carry particles away from loose, dry soil into the air, giving rise to what is called fugitive dust. Another source is the smoke from exhaust pipes.

Yet another is mites of the family Pyroglyphidae. They eat flakes of skin, including those shed by humans, and digest them with enzymes that stay on in their poop. In your house, exposure to their poop (considered a form of dust) can trigger asthma attacks.

Winds lift particulate matter off Earth’s surface and transport it into the troposphere. Once dust gets up there, it acts like an aerosol, trapping heat below it and causing Earth’s surface to warm. Once it collects in sufficient quantities, it begins to affect the weather of the regions below it, including rainfall patterns.

Dust particles smaller than 10 microns get into your lungs and affect your respiratory health. They conspire with other pollutants and, taking advantage of slow-moving winds, stagnate over India’s National Capital Region during winter. Particles smaller than 2.5 microns “increase age-specific mortality risk” (source) and send hospital admissions soaring.

There is also dust that travels thousands of kilometres to affect far-flung parts of the world. The “Sahara is the world’s largest source of desert dust”, according to one study. In June this year, the Atlantic Ocean’s tropical area experienced its dustiest period in 15 years when a huge billow blew over from northeast Chad towards the mid-Americas. According to NASA’s Earth Observatory, Saharan dust “helps build beaches in the Caribbean and fertilises soils in the Amazon.”

But speaking of dust that migrates large distances, the transatlantic plume seems much less of a journey than the dust brought to Earth by meteorites that have travelled hundreds of thousands of kilometres through space. As these rocks streak towards the ground, the atmosphere burns off dust-like matter from their surfaces, leaving these particles hanging in the upper atmosphere.

Atoms released by these particles into the mesosphere drift into the planet’s circulation system, moving from pole to pole over many months. They interact with other particles to leave behind a trail of charged particles. Scientists then use radar to track these particles to learn more about the circulation itself. Some dust particles of extraterrestrial origin also reach Earth’s surface in time. They could carry imprints of physical and chemical reactions they might have experienced in outer space, even from billions of years ago.

§

Orbit

In the mid-20th century, researchers used optical data and mathematical arguments to figure that about four million tonnes of meteoric dust slammed into our planet’s atmosphere every year. This was cause for alarm: the figure suggested that the number of meteorites in space was much higher than thought. In turn, the threat to our satellites could have been underestimated. More careful assessments later brought the figure down. A 2013 review states that 10-40 tonnes of meteoric dust slams into Earth’s atmosphere every day.

Still, this figure isn’t low – and its effects are exacerbated by the debris humans themselves are putting in orbit around Earth. The Wikipedia article on ‘space debris’ carefully notes, “As of … July 2016, the United States Strategic Command tracked a total of 17,852 artificial objects in orbit above the Earth, including 1,419 operational satellites.” But only one line later, the number of objects smaller than 1 cm explodes to 170 million.

If a mote of dust weighing 0.00001 kg carried by a 1.4 m/s breeze strikes your face, you are not going to feel anything. This is because its momentum – the product of its mass and velocity – is very low. But when a particle weighing one-hundredth of a gram strikes a satellite at a relative velocity of 1.5 km/s, its momentum jumps a thousandfold. Suddenly, it is able to damage critical components and sensitively engineered surfaces, ending million-dollar, multi-year missions in seconds. One study suggests such particles, if travelling fast enough, can also generate tiny shockwaves.
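
Spelled out, the comparison looks like this:

```python
# The momentum comparison from the paragraph above, spelled out.
m_breeze, v_breeze = 1e-5, 1.4   # 0.00001 kg mote in a 1.4 m/s breeze
m_orbit, v_orbit = 1e-5, 1.5e3   # one-hundredth of a gram at 1.5 km/s

p_breeze = m_breeze * v_breeze   # ≈ 1.4e-5 kg·m/s
p_orbit = m_orbit * v_orbit      # ≈ 1.5e-2 kg·m/s

print(f"breeze-borne mote: {p_breeze:.1e} kg·m/s")
print(f"orbital particle:  {p_orbit:.1e} kg·m/s")
print(f"ratio ≈ {p_orbit/p_breeze:.0f}")   # roughly a thousandfold jump
```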

Before our next stop on the Dust Voyage, let’s take a small break in sci-fi. The mid-century overestimation of meteoric dust flux may have prompted Arthur C. Clarke to write his 1961 novel, A Fall of Moondust. In the story, a cruise-liner called the Selene takes tourists over a basin of superfine dust apparently of meteoric origin. But one day, a natural disaster causes the Selene to sink into the dust, trapping its passengers in life-threatening conditions. After much despair, a rescue mission is mounted when an astronomer spots a heat-trail pointing to the Selene’s location from space, from onboard a spacecraft called Lagrange II.

This name is a reference to the famous Lagrange points. As Earth orbits the Sun, and the Moon orbits Earth, their combined gravitational fields give rise to five points in space where the force acting on an object is just right for it to maintain its position relative to Earth and the Sun. These are called L1, L2, L3, L4 and L5.

A contour plot of the effective potential of the Earth-Sun system, showing the five Lagrange points. Credit: NASA and Xander89, CC BY 3.0

The Indian Space Research Organisation (ISRO) plans to launch its Aditya satellite, to study the Sun, to L1. This is useful because at L1, Aditya’s view of the Sun won’t be blocked by Earth. However, objects at L1, L2 and L3 have an unstable equilibrium. Without some station-keeping measures now and then, they tend to fall out of their positions.

But this isn’t so with L4 and L5, objects at which remain in a more stable equilibrium. And like anything that’s been lying around for a while, they collect dust.

In the 1950s, the Polish astronomer Kazimierz Kordylewski claimed to have spotted two clouds of dust at L4 and L5. These nebulous collections of particulate matter have since been called Kordylewski clouds. Other astronomers have contested their existence, however. For example, the Hiten satellite could not find any notable dust concentrations in the L4 and L5 regions in 2009. Some argued that Hiten could have missed them because the dust clouds are too spread out.

§

Space

Only two weeks ago, Hungarian astronomers claimed to have confirmed the presence of dust clouds in these regions (their papers here and here). Because the L4 and L5 regions are of interest for future space missions, astronomers will now have to validate this finding and – if they do – assess the density of dust and the attendant probabilities of threat.

Unlike Kordylewski, who took photographs from a mountaintop, the Hungarian group banked on dust’s ability to polarise light. Light is electromagnetic radiation. Each wave of light consists of an electric and a magnetic field oscillating perpendicular to each other. Imagine various waves of light approaching dust, their electric fields pointed in arbitrary directions. After they strike the dust, however, the particles polarise the waves, causing all of the electric fields to line up with one particular orientation.

When astronomers detect such light, they know that it has encountered dust in its path. Using different instruments and analytical techniques, they can then map the distribution of dust in space through which the light has passed.

This is how, for example, the European Space Agency’s Planck telescope was able to draw up a view of dust around the Milky Way.

A map of dust in and around the Milky Way galaxy, as observed by the ESA Planck telescope. Credit: NASA

That’s billions upon billions of tonnes. Don’t your complaints about dust around the house pale in comparison?

And even at this scale, it has been a nuisance. We don’t know if the galaxy is complaining but Brian Keating certainly did.

In March 2014, Keating and his colleagues announced – from the Harvard-Smithsonian Center for Astrophysics – that they had found signs that the universe’s volume had increased by a factor of 10⁸⁰ in just 10⁻³³ seconds, a moment after its birth in the Big Bang. About 380,000 years later, radiation leftover from the Big Bang – called the cosmic microwave background (CMB) – came into being. Keating and co. were using the BICEP2 detector at the South Pole to find imprints of cosmic inflation on the CMB. The smoking gun: light of a certain wavelength polarised by gravitational waves from the early universe.

While the announcement was made with great fanfare – as the “discovery of the decade” and whatnot – their claim quickly became suspect. Data from the Planck telescope and other observatories soon showed that what Keating’s team had found was in fact light polarised by galactic dust. Just like that, their ambition of winning a Nobel Prize came crashing down. Ash to ash, dust to dust.

You probably ask, “Hasn’t it done enough? Can we stop now?” No. We must persevere, for dust has done even more, and we have come so close. For example, look at the Milky Way dust-map. Where could all that dust have come from?

This is where the story of dust takes a more favourable turn. We have all heard it said that we are made of stardust. While it would be futile to try and track where the dust of ourselves came from, understanding dust itself requires us to look to the stars.

The storms on Earth or Mars that stir dust up into the air are feeble breaths against the colossal turbulence of stellar ruination. Stars can die in one of many ways depending on their size. The supernovae are the most spectacular. In a standard Type Ia supernova, an entire white dwarf star undergoes nuclear fusion, completely disintegrating and throwing matter out at over 5,000 km/s. More massive stars undergo core collapse, expelling their outermost layers into space in a death-sneeze before what is left implodes into a neutron star or a black hole.

Any which way, the material released into space forms giant clouds that disperse slowly over millions of years. If they are in the presence of a black hole, they may be trapped in an accretion disk around it, accelerated, heated and energised by radiation and magnetic fields. The luckier motes may float away to encounter other stars, planets or other objects, or even collide with other dust and gas clouds. Such interactions are very difficult to model – but there is no doubt that they are all essentially driven by the four fundamental forces of nature.

One of them is the force of gravity. When a gas/dust cloud becomes so large that its collective gravitational pull keeps it from dispersing, it could collapse to form another star, and live to see another epoch.

§

Together

This way, stars are cosmic engines. They keep matter – including dust – in motion. They may not be the only ones to do so but given the presence of stars throughout the (observable) universe, they certainly play a major part. When they are not coming to life or going out of it, their gravitational pull influences the trajectories of other, smaller bodies around them, including comets, asteroids and other spacefaring rocks.

The Solar System itself is considered to have condensed out of a large disk of dirt and dust made of various elements surrounding the young Sun – a disk of leftovers from the star’s birth. Different planets formed based on the availability of different volumes of different materials at different times. Jupiter is believed to have come first, and the inner planets, including Earth, to have come last.

But no matter; life here had whatever it needed to take root. Scientists are still figuring what those ingredients could have been and their provenance. One theory is that they included compounds of carbon and hydrogen called polycyclic aromatic hydrocarbons, and that they first formed – you guessed it – among the dust meandering through space.

They could then have been ferried to Earth by meteors and comets, perhaps swung towards Earth’s orbit by the Sun’s gravity. When a comet gets closer to a star, for instance, material on its surface begins to evaporate, forming a streaky tail of gas and dust. When Earth passes through a region where the tail’s remnants and other small, rocky debris have lingered, they enter the atmosphere as a meteor shower.

Dust really is everywhere, and it seldom gets the credit it is due. It has been and continues to be a pesky part of daily life. However, unlike our search thus far for extraterrestrial companionship, we are not alone in feeling beset by dust.

The Wire
November 10, 2018

Posted in Op-eds, Scicomm

Let the arrogators write

Bora Zivkovic, the former ‘blogfather’ of the Scientific American blogs network, said it best: journalists are temporary experts. Reporters have typically got a few days to write something up on which scientists have been working for years, if not decades. They flit from paper to paper, lab to lab; without the luxury of a beat, they often cover condensed matter physics one day, spaceflight the next, ribosomes the day after, and exomoons after that. Over time, they’re the somewhat-jacks of many trades, but there’s only one that they’re really trying to master: story-telling.

The editors they work with to have these stories published are also somewhat-jacks in their own right. Many of them will have been reporters, probably still are from time to time, and are further along the road (by necessity) to understanding what will get stories read.

However, I’ve often observed a tendency among many of the scientists I work with to trivialise these proficiencies, as if they’re products of a lesser skill, or even a lesser perseverance. One or two have been so steeped in the notion that science reporters and editors wouldn’t have jobs if scientists hadn’t undertaken their pursuit of truths that they treat editors with naked disdain. Some others are less contemptuous but still aver that journalists are at best adjacent to reality, and lower on some imagined hierarchy as a result.

If these claims don’t immediately seem ludicrous to you, then you’re likely choosing to not see why.

First: If X – a person in any profession – believes that it’s easy to reach the masses, and cites Facebook and Twitter as proof, then it’s not that they don’t know how journalism works. It’s that they don’t know what journalism is, and won’t admit that their personal definition of it could be wrong. The fourth estate is responsible for keeping democracy functional. It’s not as simple as putting all available information in the public domain or breaking complex ideas down to digestible tidbits. It’s about figuring out how “write a story people will like reading” is tied to “speak truth to power”.

Second: I’m not going to say reporting and editing engage the mind as much as science does because I wouldn’t know how I’d go about proving such a thing. Axiomatically, I will say that those who believe reporting and editing are somehow ‘softer’ therefore ‘lesser’ pursuits (machismo?) or that they’re less engaging/worthwhile are making the same mistake. There’s no way to tell. There’s also no admission of the alternative that editors and reporters – by devoting themselves to deceptively simple tasks like stating facts and piecing narratives together – are able to find greater meaning, agency and purpose in them than the scientist is able to comprehend.

Third: This tendency to debase communication and its attendant skills is bizarre considering the scientist himself intends to communicate (and it’s usually a ‘him’ that’s doing the debasing). If I had to take a guess, I’d say these beliefs exist because they’re proxies for a subconscious reluctance to share the power that is their knowledge, and the expression of such beliefs a desperate attempt to exert control over what they may believe is rightfully theirs. There’s some confidence in such speculation as well because I actually know one scientist who believes scientists attempting to communicate their work are betraying their profession. But that story’s for another day.

All these reasons together are why I’d ask the arrogators to write more for news outlets instead of asking them to stop. It’s not that we get to cut off their ability to reach the masses – that could worsen the sense of entitlement – but that we have an opportunity to chamfer their privilege upon the whetstone of public engagement. This, after all, is one of the purposes of journalism. It works even when we’re letting the powerful write instead of the powerless because its strength lies as much in the honest conduct of it as in its structure. The plain-jane conveyance of information is a very small part of it all.

Featured image credit: Edgar Guerra/Unsplash.

Posted in Scicomm, Science

New anomaly at the LHC

Has new ghost particle manifested at the Large Hadron Collider?, The Guardian, October 31:

Scientists at the Cern nuclear physics lab near Geneva are investigating whether a bizarre and unexpected new particle popped into existence during experiments at the Large Hadron Collider. Researchers on the machine’s multipurpose Compact Muon Solenoid (CMS) detector have spotted curious bumps in their data that may be the calling card of an unknown particle that has more than twice the mass of a carbon atom.

The prospect of such a mysterious particle has baffled physicists as much as it has excited them. At the moment, none of their favoured theories of reality include the particle, though many theorists are now hard at work on models that do. “I’d say theorists are excited and experimentalists are very sceptical,” said Alexandre Nikitenko, a theorist on the CMS team who worked on the data. “As a physicist I must be very critical, but as the author of this analysis I must have some optimism too.”

Senior scientists at the lab have scheduled a talk this Thursday at which Nikitenko and his colleague Yotam Soreq will discuss the work. They will describe how they spotted the bumps in CMS data while searching for evidence of a lighter cousin of the Higgs boson, the elusive particle that was discovered at the LHC in 2012.

This announcement – of a possibly new particle weighing about 28 GeV – is reminiscent of the 750 GeV affair. In late 2015, physicists spotted an anomalous bump in data collected by the LHC that suggested the existence of a previously unknown particle weighing about 67-times as much as the carbon atom. The data wasn’t qualitatively good enough for physicists to claim that they had evidence of a new particle, so they decided to get more.

This was in December 2015. By August 2016, before the new data was out, theoretical physicists had written and published over 500 papers on the arXiv preprint server on what the new particle could be and how theoretical models might have to be changed to make room for it. But at the 38th International Conference on High-Energy Physics, LHC scientists unveiled the new data and said that the anomalous bump had vanished, and that what physicists had seen earlier was likely a random fluctuation in lower-quality observations.

The new announcement of a 28 GeV particle seems set for a similar course of action. I’m not pronouncing that no new particle will be found – that’s for physicists to determine – but only writing in defence of those who would cover this event even though it seems relatively minor and like history’s repeating itself. Anomalies like these are worth writing about because of the Standard Model of particle physics, which has been historically so good at making predictions about particles’ properties that even small deviations from it are big news.

At the same time, it’s big news in a specific context and with a specific caveat: that we might be chasing an ambulance here. For example, The Guardian only says that the anomalous signal will have to be verified by other experiments, leaving out the part where the signal that LHC scientists already have is pretty weak: excesses of 4.2σ and 2.9σ (both local, as opposed to global) in two tests of the 8 TeV data, and deficits of 2.0σ and 1.4σ in the 13 TeV data. It also doesn’t mention the 750 GeV affair even though the two narratives already appear to be congruent.
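
For a sense of why physicists remain sceptical of ‘local’ significances like these, here’s a small sketch converting them to one-sided p-values (using scipy; the σ values are the ones reported for the 28 GeV excess):

```python
# A minimal sketch of why 'local' significances like these are treated with
# caution: converting a significance in sigma to a one-sided p-value. The
# sigma values are the ones reported for the 28 GeV excess and deficit.
from scipy.stats import norm

for sigma in (4.2, 2.9, 2.0, 1.4):
    p = norm.sf(sigma)   # one-sided tail probability
    print(f"{sigma} sigma  ->  p ≈ {p:.2e}")

# Particle physics demands 5 sigma (p ≈ 3e-7) for a discovery, and a local
# significance still has to survive the look-elsewhere effect, so a 4.2
# sigma bump in one dataset is suggestive at best.
```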

If journalists leave such details out, I’ve a feeling they’re going to give their readers the impression that this announcement is more significant than it actually is. (Call me a nitpicker but I’m sure being accurate will allow engaged readers to set reasonable expectations about what to expect in the story’s next chapter as well as keep them from becoming desensitised to journalistic hype.)

Those who’ve been following physics news will be aware of the ‘nightmare scenario’ assailing particle physics, and in this context there’s value in writing about what’s keeping particle physicists occupied – especially in their largest, most promising lab.

But thanks to the 750 GeV affair, most recently, we also know that what any scientist or journalist says or does right now is moot until LHC scientists present sounder data + confirmation of a positive/negative result. And journalists writing up these episodes without a caveat that properly contextualises where a new anomaly rests on the arc of a particle’s discovery will be disingenuous if they’re going to justify their coverage based on the argument that the outcome “could be” positive.

The outcome could be negative and we need to ensure the reader remembers that. Including the caveat is also a way to do that without completely obviating the space for a story itself.

Featured image: The CMS detector, the heaviest of the detectors that straddle the LHC, and the one that spotted the anomalous signal corresponding to a particle at the 28 GeV mark. Credit: CERN.

Posted in Op-eds, Scicomm

Climate fear

The Intergovernmental Panel on Climate Change recently published a report exhorting countries committed to the Paris Agreement to limit global warming to 1.5º C above pre-industrial levels by the end of this century. As if this weren’t drastic enough, one study has also shown that if we’re not on track to this target in the next 12 years, then we’re likely to cross a point of no return and be unable to keep Earth’s surface from warming by 1.5º C.

In the last decade, the conversation on climate change passed an important milestone: journalists began classifying climate denialism as false balance. After this acknowledgment, editors and reporters simply wouldn’t bother speaking to those denying the anthropogenic component of global warming in pursuit of a balanced copy, because denying climate change had come to be seen as simply wrong. Including such voices wouldn’t add balance but in fact remove it from a climate-centred story.

But with the world inexorably thundering towards warming Earth’s surface by at least 1.5º C, if not more, and with such warming also expected to have drastic consequences for civilisation as we know it, I wonder when optimism will also be pulled under the false-balance umbrella. (I have no doubt that it will, so I’m omitting the ‘if’ question here.)

There were a few articles earlier this year, especially in the American media, about whether or not we ought to use the language of fear to spur climate action from people and governments alike. David Biello had excerpted the following line from a new book on the language of climate change in a review for the NYT: “I believe that language can lessen the distance between humans and the world of which we are a part; I believe that it can foster interspecies intimacy and, as a result, care.” But what tone should such language adopt?

A September 2017 study noted:

… the modest research evidence that exists with respect to the use of fear appeals in communicating climate change does not offer adequate empirical evidence – either for or against the efficacy of fear appeals in this context – nor would such evidence adequately address the issue of the appropriateness of fear appeals in climate change communication. … It is also noteworthy that the language of climate change communication is typically that of “communication and engagement,” with little explicit reference to targeted social influence or behaviour change, although this is clearly implied. Hence underlying and intertwined issues here are those of cogent arguments versus largely absent evidence, and effectiveness as distinct from appropriateness. These matters are enmeshed within the broader contours of the contested political, social, and environmental, issues status of climate change, which jostle for attention in a 24/7 media landscape of disturbing and frightening communications concerning the reality, nature, progression, and implications of global climate change.

An older study, from 2009, had it that using the language of fear wouldn’t work because, according to Big Think‘s breakdown, it could desensitise the audience, prompt the audience to trust the messenger less over time, and trigger either self-denial or some level of nihilism – because what else would you do if you’re “confronted with messages that present risks” that you, individually, can do nothing to mitigate? Most of all, it could distort our (widely) shared vision of a “just world”.

On the other hand, the sheer immediacy of the action needed suggests we should be afraid, lest we become complacent. We need urgent and significant action in both the short- and long-terms and across a variety of enterprises. Fear also sells. It’s always in demand irrespective of whether a journalist is selling it, or a businessman or a politician. It’s easy, sensational, grabs eyeballs and can be effortlessly communicated. That’s how you get the distasteful maxim “If it bleeds, it leads”.

In light of these concerns, it’s odd that so many news outlets around the world (including The Guardian and The Washington Post) are choosing to advertise the ’12-year-deadline to act’ bit (even Forbes’s takedown piece included this info. in the headline). A deadline is only going to make people more anxious and less able to act. Further, it’s odder that given the vicious complexities associated with making climate-related estimates, we’re even able to pinpoint a single point of no return instead of identifying a time-range at some point within which we become doomed. And third, I would even go so far as to question the ‘doomedness’ itself because I don’t know if it takes inflections – points after which we lose our ability to make predictions – into account.

Nonetheless, as we get closer to 2030 – the year that hosts the point of no return – and assuming we haven’t done much to keep Earth’s surface from warming by 1.5º C by the century’s close, we’re going to be neck-deep in it. At this point, would it still be fair for journalists, if not anyone else, to remain optimistic and communicate using the language of optimism? Second, will optimism on our part be taken seriously considering, at that point, the world will find out that Earth’s surface is going to warm by 1.5º C irrespective of everyone’s hopes?

Third: how will we know if optimistic engagement with our audience is even working? Being able to measure this change, and doing so, is important if we are to reform journalism to the extent that newsrooms have a financial incentive to move away from fear-mongering and towards more empathetic, solution-oriented narratives. A major reason “If it bleeds, it leads” is true is because it makes money; if it didn’t, it would be useless. By measuring change, calculating their first-order derivatives and strategising to magnify desirable trends in the latter, newsrooms can also take a step back from the temptations of populism and its climate-unjust tendencies.

Climate change journalism is inherently political and as susceptible to being caught between political faultlines as anything else. This is unlikely to change until the visible effects of anthropogenic global warming are abundant and affecting day-to-day living (of the upper caste/upper class in India and of the first world overall). So between now and then, a lot rests on journalism’s shoulders; journalists as such are uniquely situated in this context because, more than anyone else, we influence people on a day-to-day basis.

Apropos the first two questions: After 2030, I suspect many people will simply raise the bar, hoping that some action can be taken in the next seven decades to keep warming below 2º C instead of 1.5º C. Journalists will make up both the first and last lines of defence in keeping humanity at large from thinking that it has another shot at saving itself. This will be tricky: to inspire optimism and prompt people to act even while constantly reminding readers that we’ve fucked up like never before. I’d start by celebrating the melancholic joy – perhaps as in Walt Whitman’s Leaves of Grass (1891) – of lesser condemnations.

To this end, journalists should also be regularly retrained – say, once every five years – on where climate science currently stands, what audiences in different markets feel about it and why, and what kind of language reporters and editors can use to engage with them. If optimism is to remain effective further into the 21st century, collective action is necessary on the part of journalists around the world as well – just the way, for example, we recognise certain ways to report stories of sexual assault, data breaches, etc.

Posted in Scicomm

War and peace

I asked a friend earlier today if he would be interested in explaining the observation of Lyman-alpha lines in anti-hydrogen to an audience of high-schoolers. But will high-schoolers ever be able to understand what Lyman-alpha lines in anti-hydrogen are? In other words, when will 17-year-olds ever be expected to know this?

They won’t. Not knowing what Lyman-alpha lines are or what anti-hydrogen is isn’t going to cost them anything, and knowing it is not going to be decidedly useful.

The education system is already mindful of this. This is why calculus is not taught to 10-year-olds and irrational numbers to two-year-olds.

However, what could set high-schoolers apart from two- or 10-year-olds is that, by the time they are 17, they usually know all of the first principles necessary for the study of most of science.

Nonetheless, why should we make an effort to communicate Lyman-alpha lines in anti-hydrogen to high-schoolers? What could the fundamental rationale be?

Assume here that “because it is possible” is not a valid reason because the reason must encapsulate need, not possibility.

One I can think of is that we could use the example of Lyman-alpha lines in anti-hydrogen to illustrate more elementary concepts, their connection with real applications and the critical thinking necessary to elucidate such relationships.

Simultaneously, we could also elucidate how these concepts can be understood with the same things that the students are learning in school, in effect exposing the richness of what they are learning.

Why would we pick Lyman-alpha lines for this? Because it is in the news.

But excluding the third reason, the first two are compatible with an insightful distinction Raghavendra Gadagkar provided at the recent #SciMedia workshop at Matscience.

When news breaks of a discovery, for example, it is wartime science journalism.

When science journalists write about a scientific idea or object that is not in the news such that it leads to a more informed viewership, it is peacetime journalism.

Wartime journalism often has little time for anything other than, as Nithyanand Rao put it at the same workshop, event-based stories.

Peacetime journalism, by virtue of not being in a rush, can afford process-based stories more.

The curious thing about CERN’s announcement here – of the study of Lyman-alpha lines in anti-hydrogen – is that it is not war; it is at best a skirmish.

At the same time, it has a lot of underlying processes that can be discussed in greater detail because, as a subject, it has a lot of room for deductive reasoning as well as a rich history.

In a sense, this is the kind of journalism I most like undertaking. (On this occasion, I was pressed for time and had to pass the task on to a friend.)

I could describe it as a synthesis of historical survey, some historiography and mostly pedagogy.

My most-read such article, and most exemplary as a result, is ‘The Graceful, and Graceless, Pursuits of Peace in the Quantum World’, written for The Wire in July this year.

Posted in Scicomm

Something worse than a flat-Earth society

Read two interesting articles this morning:

1. An editorial in the Indian Express about how India is the “land of the gullible”, where even the “mere trappings of science suffice to tantalise” people into believing BS and coughing up fortunes for snake oil.

2. An article in The Conversation about where flat-Earthers draw their sense of purpose from: the separation of science from scientific institutions. It’s a fascinating analysis of the ‘knowledge is power’ social paradigm, how 21st century ICT is destabilising it and unto what consequences.

The typical Indian experience (of public scientific temperament) relates to both texts. In a country where two people can borrow Rs 1.43 crore to build a machine that purportedly generates electricity from thunderbolts in space (and which for some reason is called a “rice puller”), it’s clear that laypeople’s idea of what constitutes science has already been separated from that practised at scientific institutions.

The non-admission of traditional knowledge into the modern scientific method has meant that many Indians have – by way of traditional practices and rituals – already constructed a parallel scientific tradition that accrued power in time, just the way the Baconian tradition has. These ‘alternatives’ are both credible – like environmentalism*, yoga, philosophy*, etc. – and incredible – such as astrology*, ayurveda, unani, etc.

However, because many of us have become “used” to the idea that science needn’t be the exclusive preserve of scientists, we’re also not surprised when we encounter it outside of scientific institutions. To most people who believe in the curative power of these alternatives, for example, science rests with scientists as much as with those who are well-versed in the body of knowledge that birthed the alternatives. And it’s important to single out the Indian experience because, in a manner of speaking, we’ve currently got it worse than flat-Earth societies.

Now, some people will pipe up and say “not all of us are irrational” but that would miss the point entirely – just the way ‘not all men’ is no credible defence against feminists’ generalisation of male behaviour. It’s not individual complicity that matters but society’s inculcation of opportunities for many of us to get away with whatever we’ve done that does.

The modern scientific method delegitimised a lot of traditional knowledge, especially sidelining astrology and homeopathy and exiling those who were authorities in those fields to the fringe – just the way flat-Earthers of yore had been disenfranchised by 20th century institutions powered by political distrust, war and espionage. Now, with rising internet penetration and access to the social media, flat-Earthers are interrogating what scientists have claimed is the truth.

But in India, especially among Hindutva ‘scholars’, we’ve been witnessing the opposite: ex-masters not interrogating the methods of knowledge production that threaten their authority but simply trying to undermine them. (E.g. “the Vedas already knew this 3,000 years ago so you’re dumb”.) This to me is eminently worse than what flat-Earthers are trying to do; their industry at least suggests an attempt at disceptation, as opposed to an impulse to completely shut out the opposite side. As Harry Dyer, the author of the article, writes,

… four flat earthers debated three physics PhD students [at the convention Dyer was attending]. A particular point of contention occurred when one of the physicists pleaded with the audience to avoid trusting YouTube and bloggers. The audience and the panel of flat earthers took exception to this, noting that “now we’ve got the internet and mass communication … we’re not reliant on what the mainstream are telling us in newspapers, we can decide for ourselves”.

… Flat earthers were encouraged to trust “poetry, freedom, passion, vividness, creativity, and yearning” over the more clinical regurgitation of established theories and facts. Attendees were told that “hope changes everything”, and warned against blindly trusting what they were told.

On the other hand, ‘blindly trusting what we’re told’ rings a bell on our side of the world – or at least reminds us that yet more charlatans lie in wait to dupe us. As the Indian Express editorial finishes,

Remember [Ramar Pillai,] who cooked up hydrocarbons from herbal messes by the simple expedient of secreting some fuel in the stirrer? He had a bull run with the press, and even some scientists were prepared to look seriously at a man who was claiming to violate all the laws of thermodynamics. Fortunately, the “rice puller” has been stopped in its tracks before it could become a craze. But we won’t have to wait too long before the spirit of Ramar Pillai rises again.

I’m sure there’s more to be said on this subject, will continue in future posts…

*Indian traditions of it, specifically.

Featured image credit: Andrew Stutesman/Unsplash.
