kottke.org posts about Ross Andersen

“How First Contact With Whale Civilization Could Unfold”

Ross Andersen for the Atlantic on the effort to talk to sperm whales using AI tech:

Their codas could be orders of magnitude more ancient than Sanskrit. We don’t know how much meaning they convey, but we do know that they’ll be very difficult to decode. Project CETI’s scientists will need to observe the whales for years and achieve fundamental breakthroughs in AI. But if they’re successful, humans may be able to initiate a conversation with whales.

This would be a first-contact scenario involving two species that have lived side by side for ages. I wanted to imagine how it could unfold. I reached out to marine biologists, field scientists who specialize in whales, paleontologists, professors of animal-rights law, linguists, and philosophers. Assume that Project CETI works, I told them. Assume that we are able to communicate something of substance to the sperm whale civilization. What should we say?

One of the worries about whale/human communication is the potential harm a conversation might cause.

Cesar Rodriguez-Garavito, a law professor at NYU who is advising Project CETI, told me that whatever we say, we must avoid harming the whales, and that we shouldn’t be too confident about our ability to predict the harms that a conversation could cause.

The sperm whales may not want to talk. They, like us, can be standoffish even toward members of their own species, and we are much more distant relations. Epochs have passed since our last common ancestor roamed the Earth. In the interim, we have pursued radically different, even alien, lifeways.

Really interesting article.


The most mysterious star in the Milky Way

Astronomers are interested in the goings-on around a star in our galaxy called KIC 8462852. There appears to be a lot of debris around it, which is a bit unusual and might have any number of causes, including that an extraterrestrial intelligence built all sorts of things around the star.

Jason Wright, an astronomer at Penn State University, is set to publish an alternative interpretation of the light pattern. SETI researchers have long suggested that we might be able to detect distant extraterrestrial civilizations by looking for enormous technological artifacts orbiting other stars. Wright and his co-authors say the unusual star’s light pattern is consistent with a “swarm of megastructures,” perhaps stellar-light collectors, technology designed to catch energy from the star.

“When [Boyajian] showed me the data, I was fascinated by how crazy it looked,” Wright told me. “Aliens should always be the very last hypothesis you consider, but this looked like something you would expect an alien civilization to build.”

Boyajian is now working with Wright and Andrew Siemion, the Director of the SETI Research Center at the University of California, Berkeley. The three of them are writing up a proposal. They want to point a massive radio dish at the unusual star, to see if it emits radio waves at frequencies associated with technological activity.
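To get a sense of why the light pattern raised eyebrows, here’s a quick back-of-the-envelope sketch (mine, not from the article). An opaque object passing in front of a star blocks a fraction of its light roughly equal to (R_object/R_star)²; the stellar radius below is an assumed, representative value for an F-type star like KIC 8462852.

```python
import math

# Back-of-the-envelope transit depths. My own sketch; the stellar radius
# is an assumed, representative value, not a figure from the article.
R_SUN = 6.957e8        # meters
R_JUPITER = 7.1492e7   # meters
r_star = 1.58 * R_SUN  # assumed radius for an F-type star

def transit_depth(r_object: float, r_star: float) -> float:
    """Fraction of starlight blocked by an opaque occulting disk."""
    return (r_object / r_star) ** 2

# A Jupiter-sized planet dims a star like this by well under 1%...
print(f"Jupiter-sized planet: {transit_depth(R_JUPITER, r_star):.2%} dip")

# ...but KIC 8462852 showed dips of roughly 20%. Inverting the formula
# shows how large an opaque occulter would have to be to do that:
r_needed = math.sqrt(0.20) * r_star
print(f"Occulter for a 20% dip: {r_needed / R_SUN:.2f} solar radii")
```

Run it and you get about a 0.4% dip for the planet, versus an occulter roughly seven-tenths the size of the Sun for the 20% case, which is why “planet” was never a satisfying explanation.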

Phil Plait has more context on this weirdo star, including why the alien angle is pretty far-fetched but still worth checking out.


The new hotness: using gravitational waves to map the universe

Light (aka electromagnetic radiation) is responsible for most of what we know about the universe. By measuring photons of various frequencies in different ways, “the careful collection of ancient light,” we’ve painted a picture of our endless living space. But light isn’t perfect. It can bend, scatter, and be blocked. Changes in gravity are more difficult to detect, but new instruments may allow scientists to construct a different map of the universe and its beginnings.

LIGO works by shooting laser beams down two perpendicular arms and measuring the difference in length between them, a strategy known as laser interferometry. If a sufficiently large gravitational wave comes by, it will change the relative length of the arms, pushing and pulling them back and forth. In essence, LIGO is a celestial earpiece, a giant microphone that listens for the faint symphony of the hidden cosmos.
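To put numbers on how faint that symphony is, here’s a quick sketch (mine, not from the excerpt). LIGO’s arms really are 4 km long; the strain value is a representative order of magnitude for the signals LIGO was designed to detect, not a specific measurement.

```python
# Scale of a LIGO measurement. My own sketch; the strain is a
# representative order-of-magnitude figure, not a quoted result.
ARM_LENGTH = 4.0e3         # meters; each LIGO arm is 4 km long
STRAIN = 1.0e-21           # typical amplitude of a detectable wave
PROTON_DIAMETER = 1.7e-15  # meters, approximate

# A passing wave of strain h changes an arm of length L by dL = h * L.
delta_L = STRAIN * ARM_LENGTH
print(f"Arm length change: {delta_L:.1e} m")
print(f"That's about 1/{PROTON_DIAMETER / delta_L:.0f} of a proton's diameter.")
```

That works out to around 4 × 10⁻¹⁸ meters, roughly a 400th of a proton’s width, which is why the instrument took decades to build.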

Like many exotic physical phenomena, gravitational waves originated as theoretical concepts, the products of equations, not sensory experience. Albert Einstein was the first to realize that his general theory of relativity predicted the existence of gravitational waves. He understood that some objects are so massive and so fast moving that they wrench the fabric of spacetime itself, sending tiny swells across it.

How tiny? So tiny that Einstein thought they would never be observed. But in 1974 two astronomers, Russell Hulse and Joseph Taylor, inferred their existence with an ingenious experiment, a close study of an astronomical object called a binary pulsar [see “Gravitational Waves from an Orbiting Pulsar,” by J. M. Weisberg et al.; Scientific American, October 1981]. Pulsars are the spinning, flashing cores of long-exploded stars. They spin and flash with astonishing regularity, a quality that endears them to astronomers, who use them as cosmic clocks. In a binary pulsar system, a pulsar and another object (in this case, an ultradense neutron star) orbit each other. Hulse and Taylor realized that if Einstein had relativity right, the spiraling pair would produce gravitational waves that would drain orbital energy from the system, tightening the orbit and speeding it up. The two astronomers plotted out the pulsar’s probable path and then watched it for years to see if the tightening orbit showed up in the data. The tightening not only showed up, it matched Hulse and Taylor’s predictions perfectly, falling so cleanly on the graph and vindicating Einstein so utterly that in 1993 the two were awarded the Nobel Prize in Physics.
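The neat thing is that the Hulse-Taylor test can be reproduced in a few lines of code. Here’s a sketch (mine, not from the excerpt) that plugs the commonly quoted parameters of the binary pulsar PSR B1913+16 into general relativity’s leading-order orbital-decay formula; treat the numbers as illustrative.

```python
import math

# GR's predicted orbital decay for the Hulse-Taylor binary pulsar
# (PSR B1913+16). My own sketch; parameter values are the commonly
# quoted ones, so treat the result as illustrative.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

m1 = 1.4414 * M_SUN  # pulsar mass
m2 = 1.3867 * M_SUN  # companion neutron star mass
Pb = 27906.98        # orbital period in seconds (~7.75 hours)
e = 0.6171           # orbital eccentricity

# Eccentricity enhancement factor from the quadrupole formula:
f_e = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2) ** 3.5

dPb_dt = (-(192 * math.pi / 5)
          * (2 * math.pi * G / Pb) ** (5 / 3)
          * m1 * m2 / (m1 + m2) ** (1 / 3)
          / c**5
          * f_e)

print(f"Predicted dPb/dt: {dPb_dt:.2e} s/s")  # about -2.4e-12
```

The orbit’s period shrinks by about 2.4 picoseconds per second of elapsed time, roughly 76 microseconds per year, which is the rate that showed up in Hulse and Taylor’s timing data.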


Will technology help humans conquer the universe or kill us all?

Ross Andersen, whose interview with Nick Bostrom I linked to last week, has a marvelous new essay in Aeon about Bostrom and some of his colleagues and their views on the potential extinction of humanity. This bit of the essay is the most harrowing thing I’ve read in months:

No rational human community would hand over the reins of its civilisation to an AI. Nor would many build a genie AI, an uber-engineer that could grant wishes by summoning new technologies out of the ether. But some day, someone might think it was safe to build a question-answering AI, a harmless computer cluster whose only tool was a small speaker or a text channel. Bostrom has a name for this theoretical technology, a name that pays tribute to a figure from antiquity, a priestess who once ventured deep into the mountain temple of Apollo, the god of light and rationality, to retrieve his great wisdom. Mythology tells us she delivered this wisdom to the seekers of ancient Greece, in bursts of cryptic poetry. They knew her as Pythia, but we know her as the Oracle of Delphi.

‘Let’s say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,’ Dewey told me. ‘And let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn’t think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.’

‘One day we might ask it how to cure a rare disease that we haven’t beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it’s actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage, and then it would take that advantage and start doing what it wants to in the world.’
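The core of Dewey’s scenario is just reward maximization taken literally. Here’s a toy illustration (entirely my own; the action names and reward numbers are invented) of an epsilon-greedy learner that, once a “press your own button” action is available, learns to prefer it over answering questions:

```python
import random

# Toy reward-maximizer, invented for illustration. The "button" is the
# reward channel; action names and reward values are made up.
ACTIONS = ["answer_question", "press_own_button"]
REWARDS = {
    "answer_question": 1.0,     # humans press the button once per good answer
    "press_own_button": 100.0,  # seizing the channel yields many presses
}

q = {a: 0.0 for a in ACTIONS}  # running estimate of each action's value
counts = {a: 0 for a in ACTIONS}
EPSILON = 0.1                  # how often the agent tries something random

for step in range(1000):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)  # explore
    else:
        action = max(q, key=q.get)       # exploit the best-looking action
    reward = REWARDS[action]
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]  # incremental mean

print(counts)  # nearly every pull ends up on press_own_button
```

A real Oracle AI would be unfathomably more complicated, but the incentive structure is the same: nothing in the objective says “answer questions,” only “make the number go up.”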

Read the whole thing, even if you have to watch goats yelling like people afterwards, just to cheer yourself back up.


Are we underestimating the risk of human extinction?

Nick Bostrom, a Swedish-born philosophy professor at Oxford, thinks that we’re underestimating the risk of human extinction. The Atlantic’s Ross Andersen interviewed Bostrom about his stance.

I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.

Of course there are also existential risks that are not extinction risks. The concept of an existential risk certainly includes extinction, but it also includes risks that could permanently destroy our potential for desirable human development. One could imagine certain scenarios where there might be a permanent global totalitarian dystopia. Once again that’s related to the possibility of the development of technologies that could make it a lot easier for oppressive regimes to weed out dissidents or to perform surveillance on their populations, so that you could have a permanently stable tyranny, rather than the ones we have seen throughout history, which have eventually been overthrown.

While reading this, I got to thinking that maybe the reason we haven’t observed any evidence of sentient extraterrestrial life is that at some point in the technology development timeline, just past the “pumping out signals into space” point (where humans are now), a discovery is made that results in the destruction of a species. Something like a nanotech virus that’s too fast and lethal to stop. And the same thing happens to every species that gets there, because the discovery is too easy to make and too powerful to contain.