
kottke.org posts about science

Moon 101, a quick explainer video from National Geographic about the Moon

I have been going a little Moon crazy lately. There was the whole Apollo 11 thing; I finished listening to the excellent audiobook of Andrew Chaikin’s A Man on the Moon (which made me feel sad for a lot of different reasons); and I’m thinking about a rewatch of From the Earth to the Moon, the 1998 HBO series based on Chaikin’s book. This video from National Geographic answers a lot of questions about the Moon in a short amount of time.


What if you detonated a nuclear bomb in the Marianas Trench?

In response to a dumb viral video with almost 20 million views that suggests detonating a powerful nuclear device at the bottom of the ocean would unleash global chaos, Kurzgesagt provides a counterpoint using, you know, science. This was also an early What If? query:

The bubble grows to about a kilometer across in a couple of seconds. The water above bulges up, though only slightly, over a large area. Then the pressure from that six miles of water overhead causes it to collapse. Within a dozen or so seconds, the bubble shrinks to a minimum size, then ‘bounces’ back, expanding outward again.

It goes through three or four cycles of this collapse and expansion before disintegrating into, in the words of the 1996 report, “a mass of turbulent warm water and explosion debris.” According to the report, as a result of such a deep-water closed bubble creation and dissipation, “no wave of any consequence will be generated.”


When you do a DNA test and find out your dad is not your father

Sarah Zhang writes about a support group on Facebook for people who have discovered surprising parentage through DNA testing.

Lisa, 44, admits she is still trying to let go of that anger. She had always felt out of place in her family. Her hair — which she always straightened — was naturally fine and curly, her skin dark. “People would think I’m Hispanic, and would speak Spanish to me on the street,” she says. So when a DNA test in 2015 revealed her biological father was likely African American, it clicked into place. But her mom denied it. “She wouldn’t answer me. She would change the subject,” recalls Lisa. When she kept pressing, her mother broke down, saying it would destroy the family and that her dad — the man she grew up with — would kill her. She refused to say anything else about Lisa’s biological father.

I’ve written about this before (here and here) and reading these stories never gets any less heartbreaking. Back in 2010, I shared this:

I know someone who adopted a baby and they have never told her that she’s adopted and don’t plan to (she’s now in her 20s). When DNA testing becomes commonplace in another 5-15 years, I wonder how long that secret will last and what her reaction will be.

DNA testing confirms what we should have known all along: family is more than what biology says it is. Families already look quite different than they did 40-50 years ago and they will continue to shift in the future, MAGA be damned.


Which came first, bread or farming?

Based on the available archaeological evidence, researchers had assumed that bread and agriculture developed around the same time. But a recent find in Jordan of a 14,500-year-old flatbread indicates that bread was first made some 4,000 years before agriculture was invented.

No matter how you slice it, the discovery detailed on Monday shows that hunter-gatherers in the Eastern Mediterranean achieved the cultural milestone of bread-making far earlier than previously known, more than 4,000 years before plant cultivation took root.

The flatbread, likely unleavened and somewhat resembling pita bread, was fashioned from wild cereals such as barley, einkorn or oats, as well as tubers from an aquatic papyrus relative, that had been ground into flour.

And now researchers are wondering: did the invention of bread drive the invention of agriculture?

“We now have to assess whether there was a relationship between bread production and the origins of agriculture,” Arranz-Otaegui said. “It is possible that bread may have provided an incentive for people to take up plant cultivation and farming, if it became a desirable or much-sought-after food.”

University of Copenhagen archeologist and study co-author Tobias Richter pointed to the nutritional implications of adding bread to the diet. “Bread provides us with an important source of carbohydrates and nutrients, including B vitamins, iron and magnesium, as well as fibre,” Richter said.


The Dunning-Kruger Effect: we are all confident idiots

In a lesson for TED-Ed, David Dunning explains the Dunning-Kruger Effect, a cognitive bias in which people with lesser abilities tend to rate themselves as more proficient than they are.

Interestingly, the effect applies not only to people with lower abilities who rate themselves too highly but also to experts who assume they’re not exceptional. That is, the least & most skilled groups are both deficient in their ability to evaluate their skills.

Dunning also wrote a longer piece for Pacific Standard on the phenomenon.

In 1999, in the Journal of Personality and Social Psychology, my then graduate student Justin Kruger and I published a paper that documented how, in many areas of life, incompetent people do not recognize — scratch that, cannot recognize — just how incompetent they are, a phenomenon that has come to be known as the Dunning-Kruger effect. Logic itself almost demands this lack of self-insight: For poor performers to recognize their ineptitude would require them to possess the very expertise they lack. To know how skilled or unskilled you are at using the rules of grammar, for instance, you must have a good working knowledge of those rules, an impossibility among the incompetent. Poor performers — and we are all poor performers at some things — fail to see the flaws in their thinking or the answers they lack.

What’s curious is that, in many cases, incompetence does not leave people disoriented, perplexed, or cautious. Instead, the incompetent are often blessed with an inappropriate confidence, buoyed by something that feels to them like knowledge.

Confidence feels like knowledge. I feel like that simple statement explains so much about the world.

See also Errol Morris’ series for the NY Times about humanity’s unknown unknowns.

In closing, I’ll just note that thinking you’re impervious to the Dunning-Kruger Effect is itself an example of the Dunning-Kruger Effect in action. (via open culture)


James Hansen’s 1988 climate predictions have proved to be remarkably accurate

In 1988, Dr. James Hansen testified in front of Congress about the future dangers of climate change caused by human activity. That same year, Hansen and his team at the Goddard Institute for Space Studies released the results of a study detailing three scenarios for possible future warming. Their middle-of-the-road prediction has proved to be remarkably accurate over the past 30 years.

Hansen Warming Trend

Changes in the human effects that influence Earth’s global energy imbalance (a.k.a. ‘anthropogenic radiative forcings’) have in reality been closest to Hansen’s Scenario B, but about 20-30% weaker thanks to the success of the Montreal Protocol in phasing out chlorofluorocarbons (CFCs). Hansen’s climate model projected that under Scenario B, global surface air temperatures would warm about 0.84°C between 1988 and 2017. But with a global energy imbalance 20-30% lower, it would have predicted a global surface warming closer to 0.6-0.7°C by this year.

The actual 1988-2017 temperature increase was about 0.6°C. Hansen’s 1988 global climate model was almost spot-on.
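
The adjustment in that passage is simple enough to check yourself. Here’s a quick back-of-the-envelope version in Python (my sketch of the scaling arithmetic, assuming warming scales roughly linearly with forcing; this is not Hansen’s actual model):

```python
# Scaling Hansen's Scenario B projection by the actual forcing shortfall.
# The numbers come from the quoted analysis; the linear scaling is my
# simplifying assumption.
scenario_b_warming = 0.84                    # °C projected for 1988-2017 under Scenario B
shortfall_low, shortfall_high = 0.20, 0.30   # actual forcings ran 20-30% below Scenario B

low = scenario_b_warming * (1 - shortfall_high)
high = scenario_b_warming * (1 - shortfall_low)
print(f"adjusted projection: {low:.2f}-{high:.2f} °C")  # ~0.59-0.67 °C
print("observed 1988-2017 warming: ~0.6 °C")
```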

Scientists have known this was happening for decades and have been telling our government officials about it for more than 30 years. Our present inaction on a national level on this is shameful and “the global poor, the disenfranchised, the young, and the yet-to-be-born” will soon pay the price.

See also a brief history of America’s shameful inaction on climate change.


There’s no scientific or genetic basis for race

Elizabeth Kolbert writing for National Geographic: There’s No Scientific Basis for Race — It’s a Made-Up Label.

“What the genetics shows is that mixture and displacement have happened again and again and that our pictures of past ‘racial structures’ are almost always wrong,” says David Reich, a Harvard University paleogeneticist whose new book on the subject is called Who We Are and How We Got Here. There are no fixed traits associated with specific geographic locations, Reich says, because as often as isolation has created differences among populations, migration and mixing have blurred or erased them.

She also observes that there’s more genetic diversity within Africa than in all the other continents combined (which is what happens when the rest of the world’s population is descended from a relatively small group that left Africa about 60,000 years ago).


How the Earth’s Continents Will Look 250 Million Years From Now

Speaking of Pangaea, this video shows how the present-day continents came to be formed from the Pangaea supercontinent about 240 million years ago, then shows what the Earth’s surface might look like 250 million years in the future, if the tectonic plates continue to move in predictable ways.

I hope this explanation is helpful. Of course all of this is scientific speculation, we will have to wait and see what happens, but this is my projection based on my understanding of the forces that drive plate motions and the history of past plate motions. Remember: “The past reveals patterns; Patterns inform process; Process permits prediction.”

Look at how quickly India slams into the Asian continent…no wonder the Himalayas are so high.1 And it’s interesting that we’re essentially bookended by two supercontinents, the ancient Pangaea and Pangaea Proxima in the future.

  1. Though they may not be able to grow much more. Erosion and gravity work to keep the maximum height in check.


Flat Earthers and the Double-Edged Sword of American Magical Thinking

Alan Burdick recently wrote a piece for The New Yorker about the “burgeoning” flat Earth movement, a group of people who believe, against simple & overwhelming evidence, that the Earth is not spherical1 but flat.

If you are only just waking up to the twenty-first century, you should know that, according to a growing number of people, much of what you’ve been taught about our planet is a lie: Earth really is flat. We know this because dozens, if not hundreds, of YouTube videos describe the coverup. We’ve listened to podcasts — Flat Earth Conspiracy, The Flat Earth Podcast — that parse the minutiae of various flat-Earth models, and the very wonkiness of the discussion indicates that the over-all theory is as sound and valid as any other scientific theory. We know because on a clear, cool day it is sometimes possible, from southwestern Michigan, to see the Chicago skyline, more than fifty miles away — an impossibility were Earth actually curved. We know because, last February, Kyrie Irving, the Boston Celtics point guard, told us so. “The Earth is flat,” he said. “It’s right in front of our faces. I’m telling you, it’s right in front of our faces. They lie to us.”

John Gruber remarked on Burdick’s piece by saying:

In recent years I’ve begun to feel conflicted about the internet. On the one hand, it’s been wonderful in so many ways. I’ve personally built my entire career on the fact that the internet enables me to publish as a one-person operation. But on the other hand, before the internet, kooks were forced to exist on the fringe. There’ve always been flat-earther-types denying science and John Birch Society political fringers, but they had no means to amplify their message or bond into large movements.

Another way to put this is that all the people who bought those News of the World-style magazines from the grocery checkout — UFO sightings! Elvis lives! NASA faked the Moon landing! new treatment lets you live 200 years! etc.! — were able to find each other, organize, and mobilize because of the internet. And then they decided to elect one of themselves President.

I recently downloaded the audiobook of Kurt Andersen’s Fantasyland: How America Went Haywire: A 500-Year History and am looking forward to listening to it on my summer roadtrip. Here’s part of the synopsis:

In this sweeping, eloquent history of America, Kurt Andersen shows that what’s happening in our country today — this post-factual, “fake news” moment we’re all living through — is not something new, but rather the ultimate expression of our national character. America was founded by wishful dreamers, magical thinkers, and true believers, by hucksters and their suckers. Fantasy is deeply embedded in our DNA.

Over the course of five centuries — from the Salem witch trials to Scientology to the Satanic Panic of the 1980s, from P. T. Barnum to Hollywood and the anything-goes, wild-and-crazy sixties, from conspiracy theories to our fetish for guns and obsession with extraterrestrials — our love of the fantastic has made America exceptional in a way that we’ve never fully acknowledged. From the start, our ultra-individualism was attached to epic dreams and epic fantasies — every citizen was free to believe absolutely anything, or to pretend to be absolutely anybody.

Gruber’s point about the internet being a double-edged sword appears to be echoed here by Andersen about American individualism. Sure, this “if people disagree with you, you must be doing something right” spirit is responsible for the anti-vaxxer movement, conspiracy theories that 9/11 was an inside job & Newtown didn’t happen, climate change denialism, and anti-evolutionism, but it also gets you things like rock & roll, putting men on the Moon, and countless discoveries & inventions, including the internet.

Update: The Atlantic published an excerpt of Fantasyland last year:

I first noticed our national lurch toward fantasy in 2004, after President George W. Bush’s political mastermind, Karl Rove, came up with the remarkable phrase reality-based community. People in “the reality-based community,” he told a reporter, “believe that solutions emerge from your judicious study of discernible reality … That’s not the way the world really works anymore.” A year later, The Colbert Report went on the air. In the first few minutes of the first episode, Stephen Colbert, playing his right-wing-populist commentator character, performed a feature called “The Word.” His first selection: truthiness. “Now, I’m sure some of the ‘word police,’ the ‘wordinistas’ over at Webster’s, are gonna say, ‘Hey, that’s not a word!’ Well, anybody who knows me knows that I’m no fan of dictionaries or reference books. They’re elitist. Constantly telling us what is or isn’t true. Or what did or didn’t happen. Who’s Britannica to tell me the Panama Canal was finished in 1914? If I wanna say it happened in 1941, that’s my right. I don’t trust books — they’re all fact, no heart … Face it, folks, we are a divided nation … divided between those who think with their head and those who know with their heart … Because that’s where the truth comes from, ladies and gentlemen — the gut.”

Whoa, yes, I thought: exactly. America had changed since I was young, when truthiness and reality-based community wouldn’t have made any sense as jokes. For all the fun, and all the many salutary effects of the 1960s — the main decade of my childhood — I saw that those years had also been the big-bang moment for truthiness. And if the ’60s amounted to a national nervous breakdown, we are probably mistaken to consider ourselves over it.

(thx, david)

  1. More properly, the Earth is an oblate spheroid.


An AI learned to see in the dark

Cameras that can take usable photos in low light conditions are very useful but very expensive. A new paper presented at this year’s IEEE Conference on Computer Vision and Pattern Recognition shows that training an AI to do image processing on low-light photos taken with a normal camera can yield amazing results. Here’s an image taken with a Sony a7S II, a really good low-light camera, and then corrected in the traditional way:

AI image in the dark

The colors are off and there’s a ton of noise. Here’s the same image, corrected by the AI program:

AI image in the dark

Pretty good, right? The effective ISO on these images has to be 1,000,000 or more. A short video shows more of their results:

It would be great to see technology like this in smartphones in a year or two.
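
If you’re curious what “training an AI to do image processing” means in practice, here’s a heavily simplified sketch of the supervised setup (my toy version in PyTorch; the actual paper trains a much larger U-Net-style network on raw short-exposure sensor data paired with long-exposure ground truth):

```python
# Minimal sketch of the learning setup: a network maps dark images to
# corrected images, trained against long-exposure "ground truth". The
# random tensors below are placeholders standing in for real image pairs.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # an L1-style pixel loss, as in the paper

for step in range(100):
    dark = torch.rand(4, 3, 64, 64) * 0.05  # underexposed inputs (placeholder)
    bright = torch.rand(4, 3, 64, 64)       # well-exposed targets (placeholder)
    opt.zero_grad()
    loss = loss_fn(net(dark), bright)
    loss.backward()
    opt.step()
```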


Willpower, Wealth, and the Marshmallow Test

The marshmallow test is a famous psychological experiment designed by Walter Mischel in the 1960s. Kids were given a single marshmallow but told they could have another if they refrained from eating the first one for 15 minutes. The results seemed to indicate a much greater degree of self-control amongst those children who were able to delay gratification, which led to better outcomes in their lives. From a New Yorker article about Mischel:

Once Mischel began analyzing the results, he noticed that low delayers, the children who rang the bell quickly, seemed more likely to have behavioral problems, both in school and at home. They got lower S.A.T. scores. They struggled in stressful situations, often had trouble paying attention, and found it difficult to maintain friendships. The child who could wait fifteen minutes had an S.A.T. score that was, on average, two hundred and ten points higher than that of the kid who could wait only thirty seconds.

But Mischel only tested ~90 kids from a single preschool. Researchers from UC Irvine and NYU recently redid the test with a larger group of kids who were more representative of the general population and found that household income was a big factor in explaining both the ability to delay gratification and later outcomes.

Ultimately, the new study finds limited support for the idea that being able to delay gratification leads to better outcomes. Instead, it suggests that the capacity to hold out for a second marshmallow is shaped in large part by a child’s social and economic background — and, in turn, that that background, not the ability to delay gratification, is what’s behind kids’ long-term success.

If you’re poor, you might look at the promise of future food somewhat dubiously…and not because of a lack of self-control:

The failed replication of the marshmallow test does more than just debunk the earlier notion; it suggests other possible explanations for why poorer kids would be less motivated to wait for that second marshmallow. For them, daily life holds fewer guarantees: There might be food in the pantry today, but there might not be tomorrow, so there is a risk that comes with waiting. And even if their parents promise to buy more of a certain food, sometimes that promise gets broken out of financial necessity.
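
The statistical point here is easy to demonstrate with a toy simulation (mine, not the study’s data or model): if household income drives both how long a kid can wait and how they fare later, waiting looks predictive on its own but stops mattering once you control for income.

```python
# Toy confound simulation: income drives both delay time and outcomes, so
# the naive wait -> outcome relationship vanishes under an income control.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(size=n)                  # standardized household income
wait = 0.6 * income + rng.normal(size=n)     # how long the kid waits
outcome = 0.8 * income + rng.normal(size=n)  # later outcome; no direct effect of waiting

df = pd.DataFrame({"income": income, "wait": wait, "outcome": outcome})

naive = smf.ols("outcome ~ wait", data=df).fit().params["wait"]
controlled = smf.ols("outcome ~ wait + income", data=df).fit().params["wait"]
print(f"naive coefficient on waiting: {naive:.2f}")        # clearly positive
print(f"after controlling for income: {controlled:.2f}")   # near zero
```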


An explainer video from 1923 about Einstein’s theory of relativity

In 1923, Inkwell Studios1 released a 20-minute animated explanation of Albert Einstein’s theory of relativity, perhaps one of the very first scientific explainer videos ever made. Films were still silent in those days and the public’s scientific understanding was limited (the discovery of Pluto was 7 years in the future, and penicillin 5 years), so the film is almost excruciatingly slow by today’s standards. But if you squint hard enough, you can see the great-grandparent to YouTube channels like Kurzgesagt, Nerdwriter, TED Ed, minutephysics, and the 119,000+ videos YouTube returns for an “einstein relativity explained” search. (via open culture)

  1. Inkwell later became Fleischer Studios, which made cartoons like Betty Boop, Popeye, and the first animated Superman series. They also introduced the bouncing ball as a technique for singing along to on-screen lyrics.


A brief history of fingerprints

Smudge Art

Chantel Tattoli’s piece for The Paris Review, The Surprising History (and Future) of Fingerprints, is interesting throughout, but these two things leapt from the screen (italics mine):

It is true that every print is unique to every finger, even for identical twins, who share the same genetic code. Fingerprints are formed by friction from touching the walls of our mother’s womb. Sometimes they are called “chanced impressions.” By Week 19, about four months before we are issued into the world, they are set.

WHAT?! Is this true? A cursory search shows this might indeed be the case, although it looks as though there’s no established scientific consensus around the process.

Also, Picasso was fingerprinted as a suspect in the theft of the Mona Lisa from the Louvre:

When French authorities interrogated Pablo Picasso, in 1911, at the Palais de Justice about the theft of the Mona Lisa from the Louvre that August, he was clad in his favorite red-and-white polka-dot shirt. Picasso cried. He begged forgiveness. He was in possession of two statuettes filched from the museum, but he hadn’t taken her.

“In possession of”? Turns out a pal of Picasso’s lifted the statuettes from the museum, which was notoriously easy to steal from, and sold them to the artist, who knew exactly what he was buying.

True to Pieret’s testimony, Picasso kept two stolen Iberian statues buried in a cupboard in his Paris apartment. Despite the artist’s later protestations of ignorance there could be no mistaking their origins. The bottom of each was stamped in bold: PROPERTY OF THE MUSÉE DU LOUVRE.

Fingerprint art by Evan Roth. (via @claytoncubitt)


Global Warming Blankets

Using simple graphic representations of annual temperatures (like this one posted by climate scientist Ed Hawkins), people are knitting and crocheting blankets that show just how warm the Earth has gotten over the past few decades. See Katie Stumpf’s blanket, for example.

Global Warming Blankets

According to climate scientist (and crocheter) Ellie Highwood, these blankets are a subset of “temperature blankets” made to represent, for example, daily temperatures over the course of a year in a particular location. The blanket she crocheted used NOAA data of global mean temperature anomalies for a 101-year period ending in 2016.

I then devised a colour scale using 15 different colours each representing a 0.1 °C data bin. So everything between 0 and 0.099 was in one colour for example. Making a code for these colours, the time series can be rewritten as in the table below. It is up to the creator to then choose the colours to match this scale, and indeed which years to include. I was making a baby sized blanket so chose the last 100 years, 1916-2016.
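
In code, her scheme is just a binning function. Here’s a rough sketch (the 15-colour palette is just integer codes here, and the anomaly range and example values are my placeholders; her post has the real palette and the NOAA series):

```python
# Map a temperature anomaly to one of 15 colour bins, 0.1 °C wide each.
import math

def colour_bin(anomaly_c, min_anomaly=-0.5, bin_width=0.1, n_bins=15):
    """Return a colour code 0..n_bins-1 so that, e.g., everything from
    0.0 to 0.099 °C lands in a single bin, as Highwood describes."""
    idx = math.floor((anomaly_c - min_anomaly) / bin_width)
    return max(0, min(n_bins - 1, idx))  # clamp to the ends of the palette

# One crocheted row per year (anomaly values are illustrative, not NOAA's):
for year, anomaly in [(1916, -0.32), (1966, -0.06), (2016, 0.94)]:
    print(year, "-> colour", colour_bin(anomaly))
```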

If you read her post, she provides instructions for making your own global warming blanket.

P.S. You might think that with the Earth’s atmosphere getting warmer on average, these blankets would ironically be less necessary than they would have been 50 years ago. But climate change is also responsible for more extreme winter weather events — think global weirding in addition to global warming. So keep those blankets handy!


Degrees of Uncertainty

Degrees of Uncertainty is an upcoming documentary by Neil Halloran that “uses data-driven animation to explore the topic of global warming”. It’s based on this XKCD comic of A Timeline of Earth’s Average Temperature.

Halloran is the creator of the excellent The Fallen of World War II interactive documentary, so I’m looking forward to seeing what he does with the topic of climate change.


Can bacteriophages rescue us from drug-resistant bacteria?

Last month when I posted a video comparing the sizes of various microorganisms, I noted the weirdness of bacteriophages, which are bacteria-killing viruses that look a bit like a 20-sided die stuck on the top of a sci-fi alien’s body.

Bacteriophages are really real and terrifying…if you happen to be a bacteria. Bacteriophages attack by attaching themselves to bacteria, piercing their outer membranes, and then pumping them full of bacteriophage DNA. The phage replicates inside of the bacteria until the bacteria bursts and little baby bacteriophages are exploded out all over the place, ready to attack their own bacteria.

I couldn’t find a good explainer (video or text) about these organisms, but over the weekend, Kurzgesagt rode to the rescue with this video. In the second part of the video, they discuss whether bacteriophages might form the basis of an effective treatment for antibiotic-resistant infections.


The Finkbeiner test for gender bias in science writing

In a 2013 piece, Christie Aschwanden suggested a test in the spirit of the Bechdel test for avoiding gender bias in profiles written about scientists who are women.

To pass the Finkbeiner test, the story cannot mention:

- The fact that she’s a woman
- Her husband’s job
- Her child care arrangements
- How she nurtures her underlings
- How she was taken aback by the competitiveness in her field
- How she’s such a role model for other women
- How she’s the “first woman to…”

Aschwanden named the test after her colleague Ann Finkbeiner, who wrote that she was going to write a piece about an astronomer without mentioning that she, the astronomer, was a woman.

Meanwhile I’m sick of writing about [gender bias in science]; I’m bored silly with it. So I’m going to cut to the chase, close my eyes, and pretend the problem is solved; we’ve made a great cultural leap forward and the whole issue is over with.

And I’m going to write the profile of an impressive astronomer and not once mention that she’s a woman. I’m not going to mention her husband’s job or her child care arrangements or how she nurtures her students or how she was taken aback by the competitiveness in her field. I’m not going to interview her women students and elicit raves about her as a role model. I’m going to be blindly, aggressively, egregiously ignorant of her gender.

I’m going to pretend she’s just an astronomer.

(via @john_overholt)


An AI Can Realistically “Paint In” Missing Areas of Photographs

This video, and the paper it’s based on, is called “Image Inpainting for Irregular Holes Using Partial Convolutions” but it’s actually straight-up witchcraft! Researchers at NVIDIA have developed a deep-learning program that can automagically paint in areas of photographs that are missing. Ok, you’re saying, Photoshop has been able to do something like that for years. And the first couple of examples were like, oh that’s neat. But then the eyes are deleted from a model’s portrait and the program drew new eyes for her. Under close scrutiny, the results are not completely photorealistic, but at a glance it’s remarkably convincing. (via imperica)


How to harvest nearly infinite energy from a spinning black hole

Well, this is a thing I didn’t know about black holes before watching this video. Because some black holes spin, it’s possible to harvest massive amounts of energy from them, even when all other energy sources in the far far future are gone. This process was first proposed by Roger Penrose in a 1971 paper.

The Penrose process (also called Penrose mechanism) is a process theorised by Roger Penrose wherein energy can be extracted from a rotating black hole. That extraction is made possible because the rotational energy of the black hole is located not inside the event horizon of the black hole, but on the outside of it in a region of the Kerr spacetime called the ergosphere, a region in which a particle is necessarily propelled in locomotive concurrence with the rotating spacetime. All objects in the ergosphere become dragged by a rotating spacetime. In the process, a lump of matter enters into the ergosphere of the black hole, and once it enters the ergosphere, it is forcibly split into two parts. For example, the matter might be made of two parts that separate by firing an explosive or rocket which pushes its halves apart. The momentum of the two pieces of matter when they separate can be arranged so that one piece escapes from the black hole (it “escapes to infinity”), whilst the other falls past the event horizon into the black hole. With careful arrangement, the escaping piece of matter can be made to have greater mass-energy than the original piece of matter, and the infalling piece has negative mass-energy.

This same effect can also be used in conjunction with a massive mirror to superradiate electromagnetic energy: you shoot light into a spinning black hole surrounded by mirrors, the light is repeatedly sped up by the ergosphere as it bounces off the mirror, and then you harvest the super-energetic light. After the significant startup costs, it’s basically an infinite source of free energy.
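
“Basically infinite” does have a theoretical ceiling, and it’s a standard result you can compute in a few lines (in units where G = c = 1): the Penrose process can extract a spinning black hole’s rotational energy only down to its irreducible mass.

```python
# Extractable rotational energy of a Kerr black hole (standard result,
# G = c = 1): M_irr^2 = (M^2 / 2) * (1 + sqrt(1 - a*^2)), where a* is
# the dimensionless spin between 0 and 1.
import math

def extractable_fraction(spin):
    """Fraction of the hole's mass-energy the Penrose process can extract."""
    m_irr_over_m = math.sqrt((1 + math.sqrt(1 - spin**2)) / 2)
    return 1 - m_irr_over_m

for a in (0.5, 0.9, 0.998, 1.0):
    print(f"spin {a}: {extractable_fraction(a):.1%} extractable")
# Tops out at 1 - 1/sqrt(2), about 29% of the hole's mass-energy,
# for a maximally spinning black hole.
```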


How to reduce opioid addiction

This morning I ran across news from two different studies about reducing deaths from opioid overdoses and they both had the same solution: medication-assisted treatment. First, from a study involving inmates in Rhode Island correctional facilities:

The program offers inmates methadone and buprenorphine (opioids that reduce cravings and ease withdrawal symptoms), as well as naltrexone, which blocks people from getting high.

The data set is small but the results are encouraging: there were fewer overdose deaths among former inmates after the program was implemented in 2016.

In the 90s, France used a similar program to cut heroin overdose deaths by 79%:

In 1995, France made it so any doctor could prescribe buprenorphine without any special licensing or training. Buprenorphine, a first-line treatment for opioid addiction, is a medication that reduces cravings for opioids without becoming addictive itself.

With the change in policy, the majority of buprenorphine prescribers in France became primary-care doctors, rather than addiction specialists or psychiatrists. Suddenly, about 10 times as many addicted patients began receiving medication-assisted treatment, and half the country’s heroin users were being treated. Within four years, overdose deaths had declined by 79 percent.


“What do census tracts with highest concentrations of particular populations look like?”

The use of satellite imagery has revolutionized many areas of science and research, from archaeology to tracking human rights abuses to (of course) climate science. This vantage point makes kinds of observations possible that simply aren’t available at ground level.

In what she calls “a work in progress”, Jia Zhang, a PhD candidate at MIT Media Lab, used census data to collect chunks of satellite images from areas with the highest concentrations of white, black, Asian, and Native American & Alaska Native people. The result is striking (but perhaps not surprising):

Census Satellite

I’m looking forward to seeing more of Zhang’s work in this area.
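
Mechanically, the pipeline Zhang describes boils down to “rank census tracts by concentration, then grab imagery for the top ones.” A rough sketch of that idea (every column name and the tile-fetching step here are hypothetical placeholders, not her actual code or data):

```python
# Rank tracts by population share, then fetch imagery for the top ones.
import pandas as pd

# Placeholder data standing in for real census tables:
tracts = pd.DataFrame({
    "tract_id":  ["A", "B", "C", "D"],
    "pct_white": [0.95, 0.10, 0.40, 0.20],
    "pct_black": [0.02, 0.85, 0.30, 0.10],
    "lat":       [41.1, 33.7, 40.6, 36.2],
    "lon":       [-87.9, -84.4, -74.0, -115.1],
})

def top_tracts(df, column, n=2):
    """Tracts with the highest concentration of a given population."""
    return df.nlargest(n, column)

for col in ("pct_white", "pct_black"):
    for _, row in top_tracts(tracts, col).iterrows():
        # A real version would call some imagery API here (e.g. a web
        # map tile service keyed by lat/lon); this just shows the flow.
        print(f"{col}: would fetch satellite tile at ({row.lat}, {row.lon})")
```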


Alan Turing was an excellent runner

Alan Turing Runner

Computer scientist, mathematician, and all-around supergenius Alan Turing, who played a pivotal role in breaking secret German codes during WWII and developing the conceptual framework for the modern general purpose computer, was also a cracking good runner.

He was a runner who, like many others, came to the sport rather late. According to an article by Pat Butcher, he did not compete as an undergraduate at Cambridge, preferring to row. But after winning his fellowship to King’s College, he began running with more purpose. He is said to have often run a route from Cambridge to Ely and back, a distance of 50 kilometers.

It’s also said Turing would occasionally run to London for meetings, a distance of 40 miles. In 1947, after only two years of training, Turing ran a marathon in 2:46. He was even in contention for a spot on the British Olympic team for 1948 before an injury held him to fifth place at the trials. Had he competed and run his personal best time, he would have finished 15th.

As the photo above shows, Turing had a brute force running style, not unlike the machine he helped design to break Enigma coded messages. He ran, he said, to relieve stress.

“We heard him rather than saw him. He made a terrible grunting noise when he was running, but before we could say anything to him, he was past us like a shot out of a gun. A couple of nights later we caught up with him long enough for me to ask who he ran for. When he said nobody, we invited him to join Walton. He did, and immediately became our best runner… I asked him one day why he punished himself so much in training. He told me ‘I have such a stressful job that the only way I can get it out of my mind is by running hard; it’s the only way I can get some release.’”

I found out about Turing’s running prowess via the Wikipedia page of non-professional marathon runners. Turing is quite high on the list, particularly if you filter out world class athletes from other sports. Also on the list, just above Turing, is Wolfgang Ketterle, a Nobel Prize-winning physicist who ran a 2:44 in Boston in 2014 at the age of 56.


A high-resolution tour of the Moon from NASA

Using imagery and data that the Lunar Reconnaissance Orbiter spacecraft has collected since 2009, NASA made this video tour of the Moon in 4K resolution. This looked incredible on my iMac screen.

As the visualization moves around the near side, far side, north and south poles, we highlight interesting features, sites, and information gathered on the lunar terrain.

See also The 100-megapixel Moon and A full rotation of the Moon.


A great list of science books written by women

Scientist and educator Joanne Manaster has compiled a growing list of science books written by women (with a rule of one book per author). Some of the books and authors featured are:

Hidden Figures by Margot Lee Shetterly.

Biomimicry by Janine Benyus.

My Life with the Chimpanzees by Jane Goodall.

Silent Spring by Rachel Carson.

Black Hole Blues and Other Songs from Outer Space by Janna Levin.

The Autistic Brain by Temple Grandin.

Me, Myself, and Why: Searching for the Science of Self by Jennifer Ouellette.

The Confidence Game by Maria Konnikova.

The Invention of Nature by Andrea Wulf.

The Sixth Extinction by Elizabeth Kolbert.

The Immortal Life of Henrietta Lacks by Rebecca Skloot.

Code Girls by Liza Mundy.

Grunt: The Curious Science of Humans at War by Mary Roach.

The Human Age by Diane Ackerman.

Manaster is soliciting suggestions on Twitter for authors she may have missed.


What makes a tree a tree? Scientists still aren’t sure…

Broccoli Tree

In Knowable Magazine, Rachel Ehrenberg writes about the tricky business of understanding what a tree is. Trees are tall, woody, long-lived and have tree-like genes, right? Not always…

If one is pressed to describe what makes a tree a tree, long life is right up there with wood and height. While many plants have a predictably limited life span (what scientists call “programmed senescence”), trees don’t, and many persist for centuries. In fact, that trait — indefinite growth — could be science’s tidiest demarcation of treeness, even more than woodiness. Yet it’s only helpful to a point. We think we know what trees are, but they slip through the fingers when we try to define them.

Ehrenberg then suggests that we should think about tree-ness as a verb rather than a noun.

Maybe it’s time to start thinking of tree as a verb, rather than a noun - tree-ing, or tree-ifying. It’s a strategy, a way of being, like swimming or flying, even though to our eyes it’s happening in very slow motion.

This reminds me of one of Austin Kleon’s strategies for How to Keep Going: “forget the noun, do the verb”. Hey, it seems to be working for the trees. (via @robgmacfarlane)


Carl Sagan’s tools for critical thinking and detecting bullshit

In his 1995 book The Demon-Haunted World, astrophysicist Carl Sagan presented a partial list of “tools for skeptical thinking” which can be used to construct & understand reasoned arguments and reject fraudulent ones.

Wherever possible there must be independent confirmation of the “facts.”

Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.

Arguments from authority carry little weight — “authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.

Spin more than one hypothesis. If there’s something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives. What survives, the hypothesis that resists disproof in this Darwinian selection among “multiple working hypotheses,” has a much better chance of being the right answer than if you had simply run with the first idea that caught your fancy.

Try not to get overly attached to a hypothesis just because it’s yours. It’s only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don’t, others will.

Quantify. If whatever it is you’re explaining has some measure, some numerical quantity attached to it, you’ll be much better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations. Of course there are truths to be sought in the many qualitative issues we are obliged to confront, but finding them is more challenging.

If there’s a chain of argument, every link in the chain must work (including the premise) — not just most of them.

Occam’s Razor. This convenient rule-of-thumb urges us when faced with two hypotheses that explain the data equally well to choose the simpler.

Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much. Consider the grand idea that our Universe and everything in it is just an elementary particle — an electron, say — in a much bigger Cosmos. But if we can never acquire information from outside our Universe, is not the idea incapable of disproof? You must be able to check assertions out. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result.

I found this via Open Culture, which noted Sagan’s prescient remarks about people being “unable to distinguish between what feels good and what’s true”.

Like many a science communicator after him, Sagan was very much concerned with the influence of superstitious religious beliefs. He also foresaw a time in the near future much like our own. Elsewhere in The Demon-Haunted World, Sagan writes of “America in my children’s or grandchildren’s time…. when awesome technological powers are in the hands of a very few.” The loss of control over media and education renders people “unable to distinguish between what feels good and what’s true.”

This state involves, he says, a “slide… back into superstition” of the religious variety and also a general “celebration of ignorance,” such that well-supported scientific theories carry the same weight or less than explanations made up on the spot by authorities whom people have lost the ability to “knowledgeably question.”

Yeeeeeeeep.

Update: After I posted this, a reader let me know that Michael Shermer has been accused by several women of sexually inappropriate & predatory behavior and rape at professional conferences. I personally believe women, and I further believe that if Shermer was actually serious about rationality and his ten rules for critical thinking listed above, he wouldn’t have pulled this shit in the first place (nor tried to hamfistedly explain it away). I’ve rewritten the post to remove the references to Shermer, which actually made it more succinct and put the focus fully on Sagan, which was my intention in the first place (the title remains unchanged). (via @dmetilli)


A comparison of the sizes of various microorganisms, cells, and viruses

Microorganisms are so small compared to humans that you might be tempted to think that they’re all about the same size. As this video shows, that is not at all the case. The rhinovirus and polio virus are 0.03 micrometers (μm) wide, a red blood cell is 8 μm, a neuron 100 μm, and a frog’s egg 1 mm. That’s a span of nearly 5 orders of magnitude, about the same difference as the height of a human to the thickness of the Earth’s atmosphere.
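
That span is easy to verify with a quick check of the sizes quoted above:

```python
# Orders of magnitude from the smallest to the largest size in the video.
import math

sizes_m = {
    "rhinovirus / polio virus": 0.03e-6,  # 0.03 μm
    "red blood cell": 8e-6,               # 8 μm
    "neuron": 100e-6,                     # 100 μm
    "frog egg": 1e-3,                     # 1 mm
}
span = math.log10(sizes_m["frog egg"] / sizes_m["rhinovirus / polio virus"])
print(f"virus to frog egg: ~{span:.1f} orders of magnitude")  # ~4.5

# Comparable to a ~1.8 m human vs. ~100 km of atmosphere:
print(f"human to atmosphere: ~{math.log10(100_000 / 1.8):.1f}")  # ~4.7
```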

Watching the animation, you might have noticed the T4 bacteriophage, which looks like a cross between the aliens in Arrival and a lunar lander. Can’t be real, right? Bacteriophages are really real and terrifying…if you happen to be a bacteria. Bacteriophages attack by attaching themselves to bacteria, piercing their outer membranes, and then pumping them full of bacteriophage DNA. The phage replicates inside of the bacteria until the bacteria bursts and little baby bacteriophages are exploded out all over the place, ready to attack their own bacteria.


Facial recognition AIs have a hard time with dark skin

For her Gender Shades project, MIT researcher Joy Buolamwini fed over 1000 faces of different genders and skin tones into three AI-powered facial recognition systems from Microsoft, IBM, and Face++ to see how well they could recognize different kinds of faces.

The systems all performed well overall, but recognized male faces more readily than female faces and performed better on lighter skinned subjects than darker skinned subjects. For instance, 93.6% of gender misclassification errors by Microsoft’s system were of darker skinned people.

Gender Shades

Her message near the end of the video is worth heeding:

We have entered the age of automation overconfident yet underprepared. If we fail to make ethical and inclusive artificial intelligence, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.
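
The methodological takeaway travels well beyond face recognition: always disaggregate your accuracy numbers. A toy illustration of the kind of breakdown Gender Shades performs (my made-up data and column names, not Buolamwini’s dataset): overall accuracy can look fine while one subgroup absorbs most of the errors.

```python
# Disaggregating accuracy by subgroup with a tiny placeholder dataset.
import pandas as pd

results = pd.DataFrame({
    "skin":    ["lighter"] * 4 + ["darker"] * 4,
    "gender":  ["m", "f"] * 4,
    "correct": [1, 1, 1, 1, 1, 0, 1, 0],  # placeholder predictions
})

print(f"overall accuracy: {results['correct'].mean():.0%}")   # looks OK
print(results.groupby(["skin", "gender"])["correct"].mean())  # one group fails
```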


A world-historical theory of tool use

Early tools

I love reading and rereading about the origin of humanity. I love that it’s not settled science: we’re still making new discoveries about when humans first left Africa, how and when we interbred with other hominins, and what makes us human in the first place. It’s just the coolest story, which is also every story.

Popular Science has a really nice new primer on the current state of research on early humanity. Embedded in it is a series of studies on tool use by early humans in Kenya that caught my attention. Basically, the tools got smaller and more portable, the materials used were more exotic (sourced from farther away), and they were decorated with pigments.

“That’s where there’s a similarity to technology in recent times; things start out big and clunky and they get small and portable,” says Richard Potts, head of the Smithsonian’s Human Origins Program and a co-author of the papers. “The history [of] technology has been the same ever since.”

I wonder, though, if all three vectors hold up across history: greater portability, greater range of materials, and greater decorative value.

I suspect the null hypothesis would be that technologies that work tend to stay roughly the same over time. (For most of early human history, our tools didn’t change up that much, which is exactly why the burst of activity in east Africa is noteworthy.) You need something to shake things up: either sudden availability of new materials, or a deprivation of old ones (like the Bronze Age collapse, which eventually helped usher in the Iron Age).

As it turns out, that’s exactly what happened.

“One of the things we see is that around 500,000 years ago in the rift valley of southern Kenya, all hell breaks loose. There’s faulting that occurs, and earthquake activity was moving the landscape up and down. The climate record shows there is a stronger degree of oscillation between wet and dry. That would have disrupted the predictability of food and water, for those early people,” Potts says. “It’s exactly under those conditions that almost any organism—but especially a hunter-gatherer human, even an early one—would begin to expand geography of obtaining food or obtaining resources. It’s under those conditions that you begin to run into other groups of hominins and you become aware of resources beyond your usual boundaries.”


“Oh My God!” People’s Reactions to Looking at the Moon Through a Telescope.

Wylie Overstreet and Alex Gorosh took a telescope around the streets of LA and invited people to look at the Moon through it. Watching people’s reactions to seeing such a closeup view of the Moon with their own eyes, perhaps for the first time, is really amazing.

Whoa, that looks like that’s right down the street, man!

I often wonder what the effect is of most Americans not being able to see the night sky on a regular basis. As Sriram Murali says:

The night skies remind us of our place in the Universe. Imagine if we lived under skies full of stars. That reminder we are a tiny part of this cosmos, the awe and a special connection with this remarkable world would make us much better beings — more thoughtful, inquisitive, empathetic, kind and caring. Imagine kids growing up passionate about astronomy looking for answers and how advanced humankind would be, how connected and caring we’d feel with one another, how noble and adventurous we’d be.