Posted in Science

Why Titan is awesome #10

Titaaaaan!

How much I’ve missed writing these posts since Cassini passed away. Unsurprisingly, it’s only after the probe’s demise that we’ve really begun to realise how much of Cassini’s images and data we were consuming on a daily basis, all of it now gone. The steady stream of visuals of Saturn’s rings, bands, storms and panoply of moons is no more, replaced by Jupiter’s rings, bands, storms and panoply of moons thanks to Juno. Nonetheless, an entire area of the Solar System has been darkened in my imagination. Until the next full mission to the Saturnian system (although nothing of the kind is in the works), we’ll have to make do with whatever Cassini data trickles down through NASA’s and ESA’s data-processing sieves.

One such trickle is a new study about the temperature of the air high above Titan’s poles. Before Cassini’s death-dive into Saturn, the probe spent some time studying the moon’s polar atmosphere. Researchers from the University of Bristol who obtained this data noticed something odd: the part of the atmosphere over Titan’s poles began to develop a warm spot in late 2009, but by 2012, it had become a ‘cold spot’. By 2015, the temperature about 550 km above the surface had dropped to 120 K (a little below the temperature at which supercooled water turns into a glass).

On Earth, a warm spot forms over the poles for two principal reasons: the way wind circulates around the planet and the presence of carbon dioxide. During winter, air over the corresponding hemispheric pole sinks down, becomes compressed and heats up. Moreover, the carbon dioxide present in the air also emits the heat it has trapped in its chemical bonds.

In 2012, astronomers using Cassini data had found that Titan also exhibits a moon-wide wind circulation process. It can be understood as Titan having two atmospheres, or layers, one on top of the other. In the lower atmosphere, there are three Hadley cells; each cell represents a distinct air circulation system wherein air rises up for 10 km or so from near the equator, moves towards the subtropical regions, sinks back down and returns to the equator along the surface. In the second, upper atmosphere, air moves between the two poles directly, in a unified, global Hadley cell.


Now, remember that Titan’s distance from the Sun means that one Titan-year is 29.5 Earth-years, that each Titanic season lasts over seven Earth-years and that seasonal shifts are much slower on the moon as a result. However, in 2012, scientists studying Cassini data found that the air over one of Titan’s poles was sinking towards the pole – like air does on Earth – really quickly: according to Nick Teanby, a researcher at the University of Bristol and the lead author of the latest study, the rate of subsidence increased from 0.5 mm/s in January 2010 to 1.5 mm/s in June 2010. In other words, it was a shift that, unlike the moon’s seasons, happened rapidly (in just 12 Titanic days).

The same study concluded that Titan’s atmosphere was thicker than previously thought because trace gases like ethane, hydrogen cyanide, acetylene and cyanoacetylene were found to be produced at an altitude of over 500 km over the poles, thanks to photochemical reactions induced by ultraviolet radiation and high-energy electrons streaming in from the Sun. These gases would then subside into the lower atmosphere over the polar region – which brings us to the latest study. It says that, unlike carbon dioxide, which warms Earth’s atmosphere, the (once) trace gases actually cool Titan’s, resulting in the dreadfully cold spot over its poles. They also participate in the upper Hadley cell circulation.

This is similar to a unique phenomenon observed over Saturn’s south pole in 2005.

Changes in trace gas abundances over Titan’s south pole. Credit: ESA

What a beauty you are, Titan. And I miss you, Cassini, more than I miss many other things in life.

I couldn’t find a link to the paper of the latest study; here’s the press release. Update: link to paper.

Links to previous editions:

  1. Why Titan is awesome #1
  2. Why Titan is awesome #2
  3. Why Titan is awesome #3
  4. Why Titan is awesome #4
  5. Why Titan is awesome #5
  6. Why Titan is awesome #6
  7. Why Titan is awesome #7
  8. Why Titan is awesome #8
  9. Why Titan is awesome #9

Featured image: Cassini’s last shot of Titan, taken with the probe’s narrow-angle camera on September 13, 2017. Credit: NASA.

Posted in Science

What it takes to wash a strainer: soap, water and some wave optics

When I stay over at a friend’s place whenever I come to Delhi, I try to help around the house. But more often than not, I just do the dishes – often a lot of dishes. One item I’ve always had trouble cleaning is the strainer, whether a small tea strainer or a large but fine sieve, because I can never tell if the multicoloured sheen I’m seeing on the wires is a patch of oil, liquid soap or something else. The fundamental problem is that these items are susceptible to the quirks of the wave nature of light, as a result of which their surfaces display an effect called goniochromism, also known as iridescence.

At first (and over 12 years after high school), I suspected the wires on the sieve were acting as a diffraction grating. This is a structure that has a series of fine and closely spaced ridges on its surface. When a wave of light strikes this surface, the ridges scatter different parts of the wave in different directions. When these scattered waves meet on the other side, they interfere constructively or destructively. A constructive interference produces a brighter band of colour; a destructive interference produces a darker band. How the wave becomes scattered is a function of its frequency: the lower the frequency (or redder the colour), the more the wave is bent around a grating.

As a result, white and continuous light appears to break down into its constituent colours when passed through a diffraction grating. But it must be noted that a useful diffraction grating used in a visible-light experiment has something like 4,000-6,000 ridges every centimetre. The width of each ridge has to be of comparable size to the wavelength of visible light because only then can it scatter that portion of light. On the other hand, the sieve I was holding appeared to have only 6-8 ridges every centimetre, so the structure itself couldn’t have been what was producing the sheen.
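
The mismatch can be put in numbers with the standard grating equation, d·sin(θ) = m·λ. A minimal sketch, using the ridge and wire counts from the paragraphs above:

```python
# Back-of-the-envelope check: is the sieve fine enough to diffract
# visible light? Uses the standard grating equation d*sin(theta) = m*lambda.
import math

wavelength = 550e-9           # green light, in metres

# A lab-grade grating: ~5,000 ridges per centimetre
d_grating = 1e-2 / 5000       # ridge spacing, ~2 micrometres
theta = math.degrees(math.asin(wavelength / d_grating))  # first order, m = 1
print(f"lab grating: spacing {d_grating*1e6:.0f} um, "
      f"first-order angle {theta:.0f} deg")

# The sieve: ~7 wires per centimetre
d_sieve = 1e-2 / 7            # wire spacing, ~1.4 mm
print(f"sieve: spacing {d_sieve*1e3:.1f} mm, "
      f"~{d_sieve/wavelength:,.0f} wavelengths wide")
```

The grating’s ridges sit about four wavelengths apart and bend green light by a visible ~16°; the sieve’s wires are thousands of wavelengths apart, far too coarse to diffract.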

Goniochromism, or iridescence, is caused when two transparent or semi-transparent films – like liquid soap atop water – reflect the incident light multiple times. In fact, this is one type of iridescence, called thin-film interference. Here, imagine a thin layer of soap on the surface of a thin layer of water, itself sitting on the surface of a vessel you’re cleaning. (With a strainer, the water-soap liquid forms meniscuses between the wires.) When white light strikes the soap layer, some of it is reflected out and some is transmitted. The transmitted portion then strikes the surface of the water layer: some of it is sent through while the rest is reflected back out.

When the light reflected by each of the two layers interacts, their respective waves can interfere either constructively or destructively. Depending on the angle at which you’re viewing the vessel, bright and dark bands of light will be visible. Additionally, the thickness of the soap film also decides which frequencies are intensified and which become subdued in this process. The total effect is for you to see a rainbow-esque pattern of undulating brightness on the vessel.
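
A minimal sketch of the interference condition, assuming the textbook single-film case (a soap film in air, viewed head-on) rather than the full soap-on-water stack; the thickness and refractive index below are assumed, illustrative values:

```python
# Simplified single-film model: a soap film in air, viewed head-on.
# Reflection off the top surface picks up a half-wave phase shift, so
# constructive interference needs 2*n*t = (m - 1/2)*lambda.
n = 1.34       # refractive index of soapy water (approx.)
t = 400e-9     # film thickness in metres (assumed)

for m in range(1, 5):
    wl_nm = 2 * n * t / (m - 0.5) * 1e9
    band = "visible" if 380 <= wl_nm <= 750 else "not visible"
    print(f"m = {m}: lambda = {wl_nm:.0f} nm ({band})")
```

For this thickness only the m = 2 and m = 3 orders land in the visible band – a reddish and a bluish component – and the mix shifts as the film’s thickness varies from point to point, which is why the sheen looks multicoloured.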

So herein lies the rub. Either effect, although the second more than the first, produces what effectively looks like an oily sheen on the strainer in my hand no matter how many times I scrub it with soap and run it under the water. And ultimately, I end up doing a very thorough job of it if there was no oil on the strainer to begin with – or a very bad one if there was oil on it but I’ve let it be assuming it’s soap residue. It’s a toss-up… so I think I’ll just follow my friend C.S.R.S’s words: “Just rub it a few times and leave it.”

Featured image credit: Lumix/pixabay.

Posted in Scicomm, Science

Onto drafting the gravitational history of the universe

It’s finally happening. As the world turns, as our little lives wear on, gravitational wave detectors quietly eavesdrop on secrets whispered by colliding blackholes and neutron stars in distant reaches of the cosmos, no big deal. It’s going to be just another day.

On November 15, the LIGO scientific collaboration confirmed the detection of the fifth set of gravitational waves, made originally on June 8, 2017, but announced only now. These waves were released by two blackholes of 12 and seven solar masses that collided about a billion lightyears away – a.k.a. about a billion years ago. The combined blackhole weighed 18 solar masses, so one solar mass’s worth of energy had been released in the form of gravitational waves.
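
The mass bookkeeping above is simple enough to check, and E = mc² turns the missing solar mass into joules; a quick sketch:

```python
# Mass bookkeeping for GW170608, plus E = mc^2 to see how much energy
# one solar mass of gravitational radiation corresponds to.
M_SUN = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s

m_before = 12 + 7          # solar masses going in
m_after = 18               # solar masses of the merged blackhole
m_radiated = m_before - m_after
energy = m_radiated * M_SUN * c**2
print(f"{m_radiated} solar mass radiated ~= {energy:.2e} J")
```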

The announcement was delayed because the LIGO teams had to work on processing two other, more spectacular detections. One of them involved the VIRGO detector in Italy for the first time; the second was the detection of gravitational waves from colliding neutron stars.

Even though the June 8 detection is run o’ the mill by now, it is unique because it involves the lowest-mass blackholes eavesdropped on thus far by the twin LIGO detectors.

LIGO’s significance as a scientific experiment lies in the fact that it can detect collisions of blackholes with other blackholes. Because these objects don’t let any kind of radiation escape their prodigious gravitational pulls, their collisions don’t release any electromagnetic energy. As a result, conventional telescopes that work by detecting such radiation are blind to them. LIGO, however, detects gravitational waves emitted by the blackholes as they collide. Whereas electromagnetic radiation moves over the surface of the spacetime continuum and is thus susceptible to being trapped in blackholes, gravitational waves are ripples of the continuum itself and can escape from blackholes.

Processes involving blackholes of a lower mass have been detected by conventional telescopes because these processes typically involve a light blackhole (5-20 solar masses) and a second object that is not a blackhole but instead usually a star. Mass emitted by the star is siphoned into the blackhole, and this movement releases X-rays that can be spotted by space telescopes like NASA Chandra.

So LIGO’s June 8 detection is unique because it signals a collision involving two light blackholes, until now the demesne of conventional astronomy alone. This also means that multi-messenger astronomy can join in on the fun should LIGO detect a collision of a star and a blackhole in the future. Multi-messenger astronomy is astronomy that uses up to four ‘messengers’, or channels of information, to study a single event. These channels are electromagnetic, gravitational, neutrino and cosmic rays.

The masses of stellar remnants are measured in many different ways. This graphic shows the masses for black holes detected through electromagnetic observations (purple); the black holes measured by gravitational-wave observations (blue); neutron stars measured with electromagnetic observations (yellow); and the masses of the neutron stars that merged in an event called GW170817, which were detected in gravitational waves (orange). GW170608 is the lowest mass of the LIGO/Virgo black holes shown in blue. The vertical lines represent the error bars on the measured masses. Credit: LIGO-Virgo/Frank Elavsky/Northwestern

The detection also signals that LIGO is sensitive to such low-mass events. The three other sets of gravitational waves LIGO has observed involved black holes of masses ranging from 20-25 solar masses to 60-65 solar masses. The previous record-holder for lowest mass collision was a detection made in December 2015, of two colliding blackholes weighing 14.2 and 7.5 solar masses.

One of the bigger reasons astronomy is fascinating is its ability to reveal so much about a source of radiation trillions of kilometres away using very little information. The same is true of the June 8 detection. According to the LIGO scientific collaboration’s assessment,

When massive stars reach the end of their lives, they lose large amounts of their mass due to stellar winds – flows of gas driven by the pressure of the star’s own radiation. The more ‘heavy’ elements like carbon and nitrogen that a star contains, the more mass it will lose before collapsing to form a black hole. So, the stars which produced GW170608’s [the official designation of the detection] black holes could have contained relatively large amounts of these elements, compared to the stellar progenitors of more massive black holes such as those observed in the GW150914 merger. … The overall amplitude of the signal allows the distance to the black holes to be estimated as 340 megaparsec, or 1.1 billion light years.
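
The distance conversion at the end of the quote checks out (using 1 parsec ≈ 3.2616 light years):

```python
# Converting the quoted distance of 340 megaparsec into light years.
LY_PER_PC = 3.2616   # light years per parsec

d_mpc = 340
d_bly = d_mpc * 1e6 * LY_PER_PC / 1e9   # billions of light years
print(f"{d_mpc} Mpc ~= {d_bly:.1f} billion light years")
```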

The circumstances of the discovery are also interesting. Quoting at length from a LIGO press release:

A month before this detection, LIGO paused its second observation run to open the vacuum systems at both sites and perform maintenance. While researchers at LIGO Livingston, in Louisiana, completed their maintenance and were ready to observe again after about two weeks, LIGO Hanford, in Washington, encountered additional problems that delayed its return to observing.

On the afternoon of June 7 (PDT), LIGO Hanford was finally able to stay online reliably and staff were making final preparations to once again “listen” for incoming gravitational waves. As part of these preparations, the team at Hanford was making routine adjustments to reduce the level of noise in the gravitational-wave data caused by angular motion of the main mirrors. To disentangle how much this angular motion affected the data, scientists shook the mirrors very slightly at specific frequencies. A few minutes into this procedure, GW170608 passed through Hanford’s interferometer, reaching Louisiana about 7 milliseconds later.

LIGO Livingston quickly reported the possible detection, but since Hanford’s detector was being worked on, its automated detection system was not engaged. While the procedure being performed affected LIGO Hanford’s ability to automatically analyse incoming data, it did not prevent LIGO Hanford from detecting gravitational waves. The procedure only affected a narrow frequency range, so LIGO researchers, having learned of the detection in Louisiana, were still able to look for and find the waves in the data after excluding those frequencies.
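
The 7 ms figure in the excerpt is itself a sanity check. The two LIGO sites are roughly 3,000 km apart (an approximate, commonly quoted figure, assumed here), so a signal travelling at the speed of light can be delayed between them by at most about 10 ms:

```python
# Maximum possible arrival-time difference between the two LIGO sites,
# assuming a ~3,000 km baseline. The delay is shorter when the wave
# arrives at an angle to the line joining the detectors, as with GW170608.
c = 2.998e8            # speed of light, m/s
separation = 3.0e6     # Hanford-Livingston baseline, metres (assumed)

max_delay_ms = separation / c * 1e3
print(f"maximum inter-site delay ~= {max_delay_ms:.1f} ms")
```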

But what I’m most excited about is the quiet announcement. All of the gravitational wave detection announcements before this were accompanied by an embargo, lots of hype building up, press releases from various groups associated with the data analysis, and of course reporters scrambling under the radar to get their stories ready. There was none of that this time. This time, the LIGO scientific collaboration published their press release with links to the raw data and the preprint paper (submitted to the Astrophysical Journal Letters) on November 15. I found out about it when I stumbled upon a tweet from Sean Carroll.

And this is how it’s going to be, too. In the near future, the detectors – LIGO, VIRGO, etc. – are going to be gathering data in the background of our lives, like just another telescope doing its job. The detections are going to stop being a big deal: we know LIGO works the way it should. Fortunately for it, some of its more spectacular detections (colliding intermediate-mass blackholes and colliding neutron stars) were also made early in its life. What we can all look forward to now is reports of first-order derivatives from LIGO data.

In other words, we can stop focusing on Einstein’s theories of relativity (long overdue) and move on to what multiple gravitational wave detections can tell us about things we still don’t know. We can mine patterns out of the data, chart their variation across space, time and their sources, and begin the arduous task of drafting the gravitational history of the universe.

Featured image credit: Lovesevenforty/pixabay.

Posted in Science

Awk CZTI result from Crab pulsar

An instrument onboard the ISRO Astrosat space telescope has studied how X-rays emitted by the Crab pulsar are polarised, and how such polarisation varies from one pulse to the next. This is very important information for understanding how pulsars create and emit high-energy radiation – information that we haven’t been able to obtain from any other pulsar in the known universe. The underpinning study was published in Nature Astronomy on November 6, 2017.

Quick recap: CZTI stands for the Cadmium Zinc Telluride Imager, a 16-MP X-ray camera and, as The Wire has discussed before, one of the best in its class – in the league of the NASA Fermi and Swift detectors and even better in the 80-250 keV range. Pulsars are rotating neutron stars that emit focused beams of high-energy radiation from two polar locations on their surface. (As it rotates, the beams sweep past Earth like a lighthouse sweeping past ships, giving the impression that it’s blinking, or pulsating). We study them because they’re extreme environments that can help validate theories by pushing them to their limits.

There are two things notable about the current study: how CZTI studied the pulsar and what it found as a result.

1. How – First, the Crab pulsar, the remnant of a star that went supernova in 1054 AD, is located 6,500 lightyears away in the direction of the Taurus constellation. Second, pulsars – despite their remarkable radiation output – emit few X-ray photons that can be studied from near Earth. Third, the Crab pulsar has a rotation period of 33 ms (i.e. very fast). For these reasons, CZTI couldn’t just study the pulsar directly and hope to find what it eventually did. Whatever X-rays were collected would’ve had to be precisely calibrated in time. So the CZTI team* partnered up with the Giant Metrewave Radio Telescope in Pune and the Ooty Radio Telescope in Muthorai (Tamil Nadu) for the ephemeris data. In all, there were 21 observations made over (CZTI’s first) 18 months.
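
To get a sense of the timing precision the ephemeris data had to supply, here’s a rough sketch; the 100-bin pulse profile is an assumed, illustrative figure, not something from the study:

```python
# At a 33 ms rotation period, the pulsar spins ~30 times a second, and
# resolving structure within a single pulse means timing each X-ray
# photon's arrival to a small fraction of the period.
period_s = 0.033

print(f"~{1/period_s:.0f} rotations per second")

# Dividing each pulse into, say, 100 phase bins (assumed, illustrative)
# requires sub-millisecond absolute timing:
bin_width_ms = period_s * 1e3 / 100
print(f"width of one phase bin ~= {bin_width_ms:.2f} ms")
```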

2. What – Like a Ferrero Rocher from hell, a pulsar is a rotating neutron star on the inside, wrapped in a very strong magnetic field. Astronomers think charged particles are accelerated by this field and the energy they emit is shot into space, as X-rays + other frequencies of radiation. So studying how these X-rays are polarised could provide more info on how a pulsar produces its famous sweeping pulses. The CZTI data had a surprise: hard X-rays are being emitted by the Crab pulsar in the off-pulse – or the-beam-is-not-pointing-at-us – phase. In other words, the magnetic field isn’t involved in producing these X-rays; the neutron star itself is. Dun dun duuuuuuun!

It’s always nice to get science results that send researchers back to the proverbial drawing board, like the CZTI result has. It’s sweeter still when local researchers are involved – and even sweeter to be reminded that we haven’t been entirely left behind in non-theoretical particle physics research. There’s even more X-ray astronomy in India’s future. After Astrosat, launched in September 2015, ISRO has okayed a proposal from the Raman Research Institute (RRI), Bengaluru, to build an X-ray polarimeter instrument that the org will launch in the future (date not known). Called Polix, it is similar to the NASA GEMS probe that stalled in 2012.

*The CZTI team had scientists from Physical Research Laboratory, Ahmedabad; Tata Institute of Fundamental Research, Mumbai; Inter-University Centre for Astronomy and Astrophysics, Pune; IIT Powai; National Centre for Radio Astronomy, Pune; Vikram Sarabhai Space Centre, Thiruvananthapuram; ISRO, Bengaluru; and RRI.

Featured image: A composite image of the Crab Nebula showing the X-ray (blue), and optical (red) images superimposed. The size of the X-ray image is smaller because the higher energy X-ray emitting electrons radiate away their energy more quickly than the lower energy optically emitting electrons as they move. Caption and credit: NASA/ESA.

The Wire
November 7, 2017

Posted in Science

Neutron stars

When the hype for the announcement of the previous GW detection was ramping up, I had a feeling LIGO was about to announce the detection of a neutron-star collision. It wasn’t to be – but in my excitement, I’d written a small part of the article. I’m sharing it below. I’d also recommend reading this post: The Secrets of How Planets Form.

Stars die. Sometimes, when that happens, their outer layers explode into space in a supernova. Their inner layers collapse inwards under their own gravity in a violent rush. If the starstuff can be packed dense enough, the collapse produces a blackhole – a volume of space where the laws of quantum mechanics and relativity break down and the particles of matter are plunged into a monumental identity crisis. However, if the dying star wasn’t heavy enough when it blew up, then the inward rush will create a very, very, very dense object – but not a blackhole: a neutron star.

Neutron stars are the densest objects in the universe that astronomers can observe. The only things we know are denser than them are blackholes.

You’d think observed means ‘saw’, but what is ‘seeing’ but the light – a form of electromagnetic energy – from an event reaching our eyes? We can’t directly ‘see’ blackholes collide because the collision doesn’t release any electromagnetic energy. So astronomers have built a special kind of eyes – called gravitational wave detectors – that can observe ripples of gravitational energy that the collision lets loose.

The Laser Interferometer Gravitational-wave Observatory (LIGO) we already know about. Its twin eyes, located in Washington and Louisiana, US, have detected three blackhole-blackhole collisions thus far. Two of the scientists who helped build it are hot favourites to win the Nobel Prize for physics next week. The other set of eyes involved in the last find is Virgo, a detector in Italy.

You’ve been told that blackholes are freaks of nature. Heavy objects bend spacetime around themselves. Blackholes are freaks because they step it up: they fold it. They’re so heavy that when spacetime bends around them, it goes all the way around and becomes a three-dimensional loop. Thus, a blackhole traps one patch of the cosmos around a vanishingly small heart of darkness. Even light, if it comes close enough, becomes trapped in this loop and can never escape. This is why astronomers can’t observe blackholes directly, and use gravitational-wave detectors instead.

But neutron stars they can observe. They’re exactly what their name suggests: balls of neutrons. And neutrons experience a force of nature called the strong nuclear force, which can be 100,000 billion billion billion times stronger than gravity. This makes neutron stars extremely dense and altogether incredibly heavy as well. On their surface, a classic can of Coke will weigh 355,000 billion tonnes, a thousand times heavier than all the humans on Earth combined.

Sometimes, a neutron star is ravaged by a powerful magnetic field. This field focuses charged particles on the neutron star’s surface into a tight beam of radiation shooting off into space. If the orb is also spinning, then this beam of radiation sweeps through space like the light from a lighthouse sweeps over the sea near it. Such neutron stars are called pulsars.

Posted in Science

Drama along the line of sight

TIL one reason some of us are so dejected and furious at the TV when, in a cricket match being telecast live, a fielder appears to miss catching the ball even though it looks like he easily could have. A.k.a. why my grandpa loses his shit when M.S. Dhoni won’t chase a ball that has spun past his gloves and is racing towards the boundary. (If you don’t follow cricket, here’s a primer.)

When the bowler runs up to the wicket to bowl a ball, the camera focuses on the batsman, but the frame is set to capture everything from the umpire’s position in the foreground to the back-most man on the slip cordon (the line of fielders standing adjacent to the wicketkeeper) in the background.


However, the camera that’s recording this is located far from the pitch itself, down the ground and beyond the boundary line, at least 270-300 feet from the batsman (according to Law 19.1 of ICC Test Match Playing Conditions). This results in an effect called foreshortening. When the camera has a long line of sight, distances along the line of sight are shrunk by more than distances across the line of sight. For example, in the screenshot above, the pitch is 66 feet long and the wicketkeeper is standing almost 26 feet behind the batsman.

However, onscreen, the ball to be bowled is going to appear as if it’s going to travel 20 or so feet to the batsman and 10 or so feet more to the wicketkeeper. On a cognitive level, viewers are also unmindful of the foreshortening for two reasons: they’re used to it, and because the ball bowled (by Bhuvaneshwar Kumar, above) is going to move at 120-140 km/hr.

At the same time, foreshortening is going to make it appear as if it’s moving at a slower speed. Broadcasting channels employ on-ground radar to track the speed of the ball and display the number almost immediately after it’s bowled, so foreshortening could arguably dull our sense of how high these speeds are as well.
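
To put a rough number on the effect, here’s a pinhole-camera sketch; the pitch length and camera distance come from the paragraphs above, while the camera height and focal length are assumed, illustrative values:

```python
# Pinhole-camera sketch of how much the pitch is compressed onscreen.
f = 1.0      # focal length, arbitrary units (assumed)
h = 40.0     # camera height above the ground, feet (assumed)
d = 280.0    # camera-to-batsman distance, feet
L = 66.0     # pitch length, feet

# A point on the ground at horizontal distance x from the camera
# projects to image height y = f*h/x, so the pitch, lying ALONG the
# line of sight, spans:
along = f * h * (1 / d - 1 / (d + L))

# The same 66 feet lying ACROSS the line of sight would span:
across = f * L / d

print(f"onscreen compression of the pitch: {across / along:.2f}x")
```

In this model the compression factor reduces neatly to (d + L)/h = 346/40 = 8.65: the farther away and lower the camera sits, the flatter the pitch looks onscreen.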

However, the contrast between shortening along the line of sight versus across the line of sight isn’t very evident in a cricket broadcast because the distances are of comparable magnitudes. Instead, consider the Cassini probe shot of the Saturnian moons Epimetheus (lower left) and Janus shown below. Without accounting for foreshortening, it would appear as if the moons are close to each other. However, at the time this image was taken, Janus, on the right, was 40,000 km behind Epimetheus.

Credit: NASA/JPL/Space Science Institute

In the same vein, foreshortening is a common confounding factor when natural terrain is scanned from space.

Axiomatically, painters and graphic artists use foreshortening to imply depth in the viewer’s eye and also suggest an ‘appropriate’ viewing angle. Frescoes created in the 15th century were among the first to use foreshortening to provide an illusion of depth on two-dimensional surfaces. A famous example is Andrea Mantegna’s Lamentation of Christ, created c. 1480. Thanks to the effect being in play, Jesus’s body is angled towards the viewer (along the line of sight), drawing attention to his chest, abdomen, genitals and the holes in his hands and feet.

Credit: Wikimedia Commons

Finally, foreshortening is held to be an essential compositional feature of millennials’ most ubiquitous creation: the selfie. The American art critic Jerry Saltz wrote for Vulture magazine in January 2014,

Maybe the first significant twentieth-century pre-selfie is M.C. Escher’s 1935 lithograph Hand With Reflecting Sphere. Its strange compositional structure is dominated by the artist’s distorted face, reflected in a convex mirror held in his hand and showing his weirdly foreshortened arm. It echoes the closeness, shallow depth, and odd cropping of modern selfies. In another image, which might be called an allegory of a selfie, Escher rendered a hand drawing another hand drawing the first hand. It almost says, “What comes first, the self or the selfie?” My favorite proto-selfie is Parmigianino’s 1523–24 Self-Portrait in a Convex Mirror, seen on the title page of this story. All the attributes of the selfie are here: the subject’s face from a bizarre angle, the elongated arm, foreshortening, compositional distortion, the close-in intimacy. As the poet John Ashbery wrote of this painting (and seemingly all good selfies), “the right hand / Bigger than the head, thrust at the viewer / And swerving easily away, as though to protect what it advertises.”

Posted in Science

Before seeing, there are the ways of imaging

When May-Britt Moser, Edvard Moser and John O’Keefe were awarded the 2014 Nobel Prize for physiology and medicine “for their discoveries of cells that constitute a positioning system in the brain”, there was a noticeable uptick in the number of articles on similar subjects in the popular as well as scientific literature in the following months. The same thing happened with the science Nobel Prizes in subsequent years, and I suspect it will be the same this year with cryo-electron microscopy (cryoEM) as well. And I’d like to ride this wave.

§

It has often been the case that the Nobel Prizes for physiology/medicine (a.k.a. ~ for biology) and for chemistry have awarded advancements in chemistry and biology, respectively. This year, however, the chemistry prize was more about physics. Joachim Frank, Jacques Dubochet and Richard Henderson – three biologists – were on a quest to make the tool they were using to explore structural biology more powerful, more efficient. So Frank invented computational techniques; Dubochet invented a new way to prepare the sample; and Henderson used them both deftly to prove their methods worked.

Since then, cryoEM has come a long way, but the improvements since have only been more sophisticated versions of what Frank, Dubochet and Henderson first demonstrated … except for one component: the microscope’s electronics.

Just the way human eyes are primed to detect photons of a certain wavelength, extract the information encoded in them, convert that into an electric signal and send it to the brain for processing, a cryoEM uses electrons. A wave can be scattered by objects in its path that are of a size comparable to the wave’s wavelength. So electrons, which have a shorter wavelength than photons, can be used to probe smaller distances. A cryoEM fires a tight, powerful beam of electrons into the specimen. Parts of the specimen scatter the electrons into a detector on the microscope. The detector ‘reads’ how the electrons have changed and delivers that information to a computer. This happens repeatedly as electron beams are fired at different copies of the specimen oriented at random angles. A computer then puts together a high-resolution 3D image of the specimen using all the detector data. In this scheme of things, a technological advancement in 2012 significantly improved the cryoEM’s imaging abilities. It was called the direct electron detector, developed to substitute for the charge-coupled device (CCD).
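
How much smaller those probe-able distances are is worth a quick calculation. A sketch of the (relativistically corrected) de Broglie wavelength of the beam electrons, assuming a typical 300 kV accelerating voltage:

```python
# De Broglie wavelength of electrons accelerated through 300 kV (a common
# cryoEM operating voltage, assumed here), with the relativistic
# correction, compared against visible light.
import math

h = 6.626e-34      # Planck constant, J*s
m = 9.109e-31      # electron rest mass, kg
e = 1.602e-19      # elementary charge, C
c = 2.998e8        # speed of light, m/s
V = 300e3          # accelerating voltage, volts (assumed)

lam = h / math.sqrt(2 * m * e * V * (1 + e * V / (2 * m * c**2)))
print(f"electron wavelength ~= {lam*1e12:.2f} pm")   # ~1.97 pm

green_light = 550e-9
print(f"visible light is ~{green_light/lam:,.0f}x longer")
```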

The simplest imaging system known to humans is the photographic film, which uses a surface composed of certain chemical substances that are sensitive to visible light. When the surface is exposed to a frame, say a painting, the photons reflected by the painting impinge on the surface. The substances therein then ‘record’ the information carried by the photons in the form of a photograph. A CCD employs a surface of metal-oxide semiconductors (MOS). A semiconductor relies on the behaviour of electric charge on either side of a special junction: an interface of dissimilar materials to which impurities have been added such that one layer is rich in electrons (n) and the other, poor (p). The junction will now either conduct electricity or not depending on how a voltage is applied across it. Anyway: when a photon impinges on the MOS, the latter releases an electron (thanks to the photoelectric effect) that is then moved through the device to an area where it can be manipulated to contribute to one pixel of the image.

(Note: When I write ‘one photon’ or ‘one electron’, I don’t mean one exactly. Various uncertainties, including Heisenberg’s, prevail in quantum mechanics and it’s unreasonable to assume humans can manipulate particles one at a time. My use of the singular is only illustrative. At the same time, I hope you will pause to appreciate – later in this post – how close to the singular we’ve been able to get.)

CCDs can produce images quickly and with high contrast even in low light. However, they have an important disadvantage: a lower detective quantum efficiency than photographic films at higher spatial frequencies. Detective quantum efficiency is a measure of how well a detector – like the film or a CCD – preserves the signal-to-noise ratio of the incoming signal; the lower it is, the more signal you need to record an image of the same quality. For example, when you’re getting a dental X-ray done to understand how your teeth look below the gums, your mouth is bombarded with X-ray photons that penetrate the gums but not the teeth. The more such photons there are, the better the image of your teeth. However, inundating your mouth with X-rays just to get a better picture risks damaging tissue and hurting you – which is what you would have to do if the X-ray ‘camera’ used a detector with a low detective quantum efficiency. The simplest workaround would be to use an amplifier to boost the signal produced by the detector – but then this would also boost the noise.
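One common way to express detective quantum efficiency is DQE = (SNR_out / SNR_in)², where SNR_in is the signal-to-noise ratio of the radiation arriving at the detector and SNR_out that of the recorded image. A minimal sketch, assuming the detector’s only imperfection is a fixed amount of additive readout noise (real DQE also varies with spatial frequency, which this toy model ignores):

```python
import math

def dqe(n_photons, readout_noise):
    """DQE = (SNR_out / SNR_in)^2 for a shot-noise-limited exposure
    plus additive detector readout noise (both in electron units)."""
    snr_in = math.sqrt(n_photons)   # incoming beam: Poisson shot noise only
    snr_out = n_photons / math.sqrt(n_photons + readout_noise**2)
    return (snr_out / snr_in) ** 2

print(dqe(10000, 20))   # bright exposure: DQE ~ 0.96, readout noise barely matters
print(dqe(100, 20))     # dim exposure: DQE ~ 0.2, the noise dominates
```

With the detector’s noise fixed, the DQE collapses as the dose drops – and the low-dose regime is exactly where cryoEM lives, because beam-sensitive biological specimens can’t simply be blasted with more electrons.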

So, in other words, CCDs have more trouble recording the finer details in an image than photographic films when there is a lot of noise coming with the incident signal. The noise can also be internally generated, such as during the process when photons are converted into electrons.

However, scientists can’t simply go back to photographic films for cryoEM, because CCDs have other important advantages. They scan images faster, allow for easier refocusing and realignment of the object under study, and require less maintenance. This dilemma provided the impetus to develop the direct electron detector – effectively a CCD with a better detective quantum efficiency.

Because a cryoEM is in the business of ‘seeing’ electrons, a scintillator is placed between the incoming electrons and the CCD. When an electron hits the scintillator, the material absorbs its energy and emits a glow – in the form of a photon. This photon is then picked up by the CCD for processing. Sometimes, the incoming electron may not produce a photon at exactly the location on the scintillator where it arrives. Instead, it may bounce off multiple locations, producing a splatter of photons over a larger area and creating a blur in the image.

In a direct electron detector, the scintillator is removed, forcing the CCD to directly receive and process the electrons of the imaging beam itself. Such (higher energy) electrons can damage the CCD as well as produce unnecessary signals within the system. These effects can be protected against using suitable hardware and circuit-design techniques, both of which required advancements in materials science that weren’t available until recently. Even so, the eventual device itself is pretty simple in design. According to the 2009 doctoral thesis of one Liang Jin,

The device can be divided into three major regions. At the very top of the surface is the circuitry layer that has pixel transistors and photodiode as well as interconnects between all the components (metallisation layers). The middle layer is a p-epitaxial layer (about 8 to 10 µm thick) that is epitaxially grown with very low defect levels and highly doped. The rest of the 300 um silicon substrate is used mainly for mechanical support.

On average, a single incident electron of 200 keV will generate about 2,000 ionisation electrons in the 10 µm epitaxial layer, which is significantly larger than the noise level of the device (less than 50 electrons). Each pixel integrates the collected electrons during an exposure period and at the conclusion of a frame, the contents of the sensor array are read out, digitised and stored.
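The numbers in that quote make the direct detector’s advantage concrete: each beam electron produces a signal roughly 40 times larger than the device’s own noise floor, which is what makes registering individual electron arrivals feasible. As simple arithmetic:

```python
# Figures quoted from Liang Jin's thesis above
ionisation_electrons = 2000   # generated per 200 keV incident electron
noise_floor = 50              # device noise, in electrons (quoted upper bound)

# Each incident electron stands well clear of the noise
snr_per_incident_electron = ionisation_electrons / noise_floor
print(snr_per_incident_electron)   # 40.0
```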

To understand the extent to which noise was reduced as a result, consider an example. In 2010, a research group led by Jean-Paul Armache of the Ludwig-Maximilians-Universität München was able to image eukaryotic ribosomes using cryoEM at a resolution of 6 angstroms (0.6 nanometers) using 1.4 million images. In 2013, a different group, led by Xiao-chen Bai of the Medical Research Council Laboratory of Molecular Biology in Cambridge, UK, imaged the same ribosomes to 4.5 angstroms using 35,813 images. The first group used a cryoEM with CCDs; the second, a cryoEM with direct detection devices.

An even newer development seeks to bring the CCD back as the detector of choice among structural biologists. In September 2017, scientists from the Fermi National Accelerator Laboratory announced that they had engineered a highly optimised skipper CCD in their lab. The skipper CCD was first theorised by, among others, D.D. Wen in 1974. It’s a CCD in which the charge collected in each pixel is measured multiple times – up to 4,000 times per pixel, according to one study – during processing to better separate signal from noise. The same study said that, as a result, the skipper CCD’s readout noise could be reduced to 0.068 electrons per pixel. The cost: a few hours would pass between the CCD receiving the first electrons and the processed image becoming available. But in a review, Michael Schirber, a corresponding editor for Physics, argues that “this could be an acceptable tradeoff for rare events, such as hypothetical dark matter particles interacting with silicon atoms”.
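The skipper CCD’s trick is statistical: sampling the same pixel’s charge N times non-destructively and averaging shrinks the readout noise by a factor of √N. Working backwards from the figures above, 0.068 electrons after 4,000 samples implies a single-read noise of about 0.068 × √4000 ≈ 4.3 electrons – a back-of-the-envelope inference on my part, not a number from the paper. A sketch of the averaging:

```python
import math
import random

def skipper_read(true_charge, single_read_noise, n_samples, rng):
    """Average n_samples non-destructive reads of one pixel's charge."""
    reads = (true_charge + rng.gauss(0, single_read_noise)
             for _ in range(n_samples))
    return sum(reads) / n_samples

single_read_noise = 0.068 * math.sqrt(4000)   # ~4.3 e-, implied by the quoted figures
print(single_read_noise)

rng = random.Random(1)
# One read is hopeless for a 2-electron charge; 4,000 reads pin it down
print(skipper_read(2.0, single_read_noise, 1, rng))
print(skipper_read(2.0, single_read_noise, 4000, rng))
```

The price is time – thousands of reads for every pixel on the sensor – which is why the processed image takes hours to emerge.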

Featured image: Scientists using a 300kV cryo-electron microscope at the Max Planck Institute of Molecular Physiology, Dortmund. Credit: MPI Dortmund.