
kottke.org posts about computing

A Navajo Weaving of an Intel Pentium Processor

a Navajo weaving of a Pentium chip next to a microscopic image of the actual chip

In 1994, a Navajo/Diné weaver named Marilou Schultz made a weaving of the microscopic pattern of an Intel Pentium processor. (In the image above, the weaving is on the left and the chip is on the right.)

The Pentium die photo below shows the patterns and structures on the surface of the fingernail-sized silicon die, over three million tiny transistors. The weaving is a remarkably accurate representation of the die, reproducing the processor’s complex designs. However, I noticed that the weaving was a mirror image of the physical Pentium die; I had to flip the rug image below to make them match. I asked Ms. Schultz if this was an artistic decision and she explained that she wove the rug to match the photograph. There is no specific front or back to a Navajo weaving because the design is similar on both sides, so the gallery picked an arbitrary side to display. Unfortunately, they picked the wrong side, resulting in a backward die image.

Schultz is working on a weaving of another chip, the Fairchild 9040, which was “built by Navajo workers at a plant on Navajo land”.

In December 1972, National Geographic highlighted the Shiprock plant as “weaving for the Space Age”, stating that the Fairchild plant was the tribe’s most successful economic project with Shiprock booming due to the 4.5-million-dollar annual payroll. The article states: “Though the plant runs happily today, it was at first a battleground of warring cultures.” A new manager, Paul Driscoll, realized that strict “white man’s rules” were counterproductive. For instance, many employees couldn’t phone in if they would be absent, as they didn’t have telephones. Another issue was the language barrier since many workers spoke only Navajo, not English. So when technical words didn’t exist in Navajo, substitutes were found: “aluminum” became “shiny metal”. Driscoll also realized that Fairchild needed to adapt to traditional nine-day religious ceremonies. Soon the monthly turnover rate dropped from 12% to under 1%, better than Fairchild’s other plants.

The whole piece is really interesting and demonstrates the deep rabbit hole awaiting the curious art viewer. (via waxy)


“I Created Clippy”

Illustrator Kevan Atteberry created the Clippy character that was introduced in Microsoft Office 97. There was a ton of backlash when the character was introduced, but as time has passed, many people have begun to think fondly of him.

He’s a guy that just wants to help, and he’s a little bit too helpful sometimes. And there’s something fun and vulnerable about that.


How NASA Writes Space-Proof Code

When you write some code and put it on a spacecraft headed into the far reaches of space, you need it to work, no matter what. Mistakes can mean loss of mission or even loss of life. In 2006, Gerard Holzmann of the NASA/JPL Laboratory for Reliable Software wrote a paper called The Power of 10: Rules for Developing Safety-Critical Code. The rules focus on testability, readability, and predictability (there’s a rough sketch of what code written to these rules can look like after the list):

  1. Avoid complex flow constructs, such as goto and recursion.
  2. All loops must have fixed bounds. This prevents runaway code.
  3. Avoid heap memory allocation.
  4. Restrict functions to a single printed page.
  5. Use a minimum of two runtime assertions per function.
  6. Restrict the scope of data to the smallest possible.
  7. Check the return value of all non-void functions, or cast to void to indicate the return value is useless.
  8. Use the preprocessor sparingly.
  9. Limit pointer use to a single dereference, and do not use function pointers.
  10. Compile with all possible warnings active; all warnings should then be addressed before release of the software.
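To make the rules less abstract, here’s a minimal C sketch of my own (not NASA’s code) that tries to follow a few of them: a fixed loop bound, no heap allocation, at least two assertions, and a return value the caller is expected to check. The averaging function and the sample data are made up for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_SAMPLES 64U  /* rules 2 & 3: fixed bound, no heap allocation */

/* Average a buffer of sensor readings.
 * Returns 0 on success, -1 on bad input (rule 7: callers must check this). */
static int average_readings(const int32_t readings[], uint32_t count, int32_t *out)
{
    assert(readings != NULL);                    /* rule 5: at least two assertions */
    assert(count > 0U && count <= MAX_SAMPLES);

    if (readings == NULL || out == NULL || count == 0U || count > MAX_SAMPLES) {
        return -1;
    }

    int64_t sum = 0;
    for (uint32_t i = 0U; i < count && i < MAX_SAMPLES; i++) {  /* rule 2: fixed loop bound */
        sum += readings[i];
    }
    *out = (int32_t)(sum / (int64_t)count);
    return 0;
}

int main(void)
{
    const int32_t samples[4] = { 10, 20, 30, 40 };
    int32_t avg = 0;

    if (average_readings(samples, 4U, &avg) == 0) {  /* rule 7: check the return value */
        printf("average = %ld\n", (long)avg);
    }
    return 0;
}
```

Note how defensive it is: the assertions catch programming errors during testing, while the explicit checks keep a bad input from turning into a bad answer in flight.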

All this might seem a little inside baseball if you’re not a software developer (I caught only about 75% of it — the video embedded above helped a lot), but the goal of the Power of 10 rules is to ensure that developers are working in such a way that their code does the same thing every time, can be tested completely, and is therefore more reliable.

Even here on Earth, perhaps more of our software should work this way. In 2011, NASA applied these rules in their analysis of unintended acceleration of Toyota vehicles and found 243 violations of 9 out of the 10 rules. Are the self-driving features found in today’s cars written with these rules in mind or can recursive, untestable code run off into infinities while it’s piloting people down the freeway at 70mph?

And what about AI? Anil Dash recently argued that today’s AI is unreasonable:

Amongst engineers, coders, technical architects, and product designers, one of the most important traits that a system can have is that one can reason about that system in a consistent and predictable way. Even “garbage in, garbage out” is an articulation of this principle — a system should be predictable enough in its operation that we can then rely on it when building other systems upon it.

This core concept of a system being reason-able is pervasive in the intellectual architecture of true technologies. Postel’s Law (“Be liberal in what you accept, and conservative in what you send.”) depends on reasonable-ness. The famous IETF keywords list, which offers a specific technical definition for terms like “MUST”, “MUST NOT”, “SHOULD”, and “SHOULD NOT”, assumes that a system will behave in a reasonable and predictable way, and the entire internet runs on specifications that sit on top of that assumption.

The very act of debugging assumes that a system is meant to work in a particular way, with repeatable outputs, and that deviations from those expectations are the manifestation of that bug, which is why being able to reproduce a bug is the very first step to debugging.

Into that world, let’s introduce bullshit. Today’s highly-hyped generative AI systems (most famously OpenAI) are designed to generate bullshit by design.

I bet NASA will be very slow and careful in deciding to run AI systems on spacecraft — after all, they know how 2001: A Space Odyssey ends just as well as the rest of us do.


Early Computer Art in the 50s and 60s

a wavy black and white pattern generated by a computer

an intricate and colorful looping pattern

a computer drawing of a bunch of colorful squares stacked on top of each other

Artist Amy Goodchild recently published an engaging article about the earliest computer art from the 50s and 60s.

My original vision for this article was to cover the development of computer art from the 50’s to the 90’s, but it turns out there’s an abundance of things without even getting half way through that era. So in this article we’ll look at how Lovelace’s ideas for creativity with a computer first came to life in the 50’s and 60’s, and I’ll cover later decades in future articles.

I stray from computer art into electronic, kinetic and mechanical art because the lines are blurred, it contributes to the historical context, and also because there is some cool stuff to look at.

Cool stuff indeed — I’ve included some of my favorite pieces that Goodchild highlighted above. (via waxy)


The Lisa Personal Computer: Apple’s Influential Flop

The Apple Lisa was the more expensive and less popular precursor to the Macintosh; a recent piece at the Computer History Museum called Lisa “Apple’s most influential failure”.

Apple’s Macintosh line of computers today, known for bringing mouse-driven graphical user interfaces (GUIs) to the masses and transforming the way we use our computers, owes its existence to its immediate predecessor at Apple, the Lisa. Without the Lisa, there would have been no Macintosh — at least in the form we have it today — and perhaps there would have been no Microsoft Windows either.

The video above from Adi Robertson at The Verge is a good introduction to the Lisa and what made it so simultaneously groundbreaking and unpopular. From a companion article:

To look at the Lisa now is to see a system still figuring out the limits of its metaphor. One of its unique quirks, for instance, is a disregard for the logic of applications. You don’t open an app to start writing or composing a spreadsheet; you look at a set of pads with different types of documents and tear off a sheet of paper.

But the office metaphor had more concrete technical limits, too. One of the Lisa’s core principles was that it should let users multitask the way an assistant might, allowing for constant distractions as people moved between windows. It was a sophisticated idea that’s taken for granted on modern machines, but at the time, it pushed Apple’s engineering limits - and pushed the Lisa’s price dramatically upward.

And from 1983, a demo video from Apple on how the Lisa could be used in a business setting:

And a more characteristically Apple ad for the Lisa featuring a pre-stardom Kevin Costner:


Papercraft Models of Vintage Computers

a papercraft model of an original Apple Macintosh

a papercraft model of an IBM 5150 computer

a papercraft model of an Amiga 500 computer

Rocky Bergen makes papercraft models of vintage computers like the original Macintosh, Commodore 64, the IBM 5150, and TRS-80. The collection also includes a few gaming consoles and a boombox. And here’s the thing — you can download the patterns for each model for free and make your own at home. Neat!


A Demo of Pockit, a Tiny, Powerful, Modular Computer

Admission time: it’s been a long time since I considered myself any sort of gadget nerd, but I have to tell you that I watched much of this demo of Pockit with my jaw on the floor and my hand on my credit card. 12-year-old Jason would have run through a wall to be able to play with something like this. It does web browsing, streaming video, AI object detection, home automation, and just about anything else you can think of. Reminded me of some combination of littleBits, Arduino, and Playdate. What a fun little device! (via craig mod)


Searching for Susy Thunder


A really entertaining and interesting piece by Claire Evans about Susan Thunder (aka Susy Thunder aka Susan Headley), a pioneering phone phreaker and computer hacker who ran with the likes of Kevin Mitnick and then just quietly disappeared.

She was known, back then, as Susan Thunder. For someone in the business of deception, she stood out: she was unusually tall, wide-hipped, with a mane of light blonde hair and a wardrobe of jackets embroidered with band logos, spoils from an adolescence spent as an infamous rock groupie. Her backstage conquests had given her a taste for quaaludes and pharmaceutical-grade cocaine; they’d also given her the ability to sneak in anywhere.

Susan found her way into the hacker underground through the phone network. In the late 1970s, Los Angeles was a hotbed of telephone culture: you could dial-a-joke, dial-a-horoscope, even dial-a-prayer. Susan spent most of her days hanging around on 24-hour conference lines, socializing with obsessives with code names like Dan Dual Phase and Regina Watts Towers. Some called themselves phone phreakers and studied the Bell network inside out; like Susan’s groupie friends, they knew how to find all the back doors.

When the phone system went electric, the LA phreakers studied its interlinked networks with equal interest, meeting occasionally at a Shakey’s Pizza parlor in Hollywood to share what they’d learned: ways to skim free long-distance calls, void bills, and spy on one another. Eventually, some of them began to think of themselves as computer phreakers, and then hackers, as they graduated from the tables at Shakey’s to dedicated bulletin board systems, or BBSes.

Susan followed suit. Her specialty was social engineering. She was a master at manipulating people, and she wasn’t above using seduction to gain access to unauthorized information. Over the phone, she could convince anyone of anything. Her voice honey-sweet, she’d pose as a telephone operator, a clerk, or an overworked secretary: I’m sorry, my boss needs to change his password, can you help me out?

Via Evans’ Twitter account, some further reading and viewing on Susy Thunder and 80s hacking/phreaking: Trashing the Phone Company with Suzy Thunder (her 1982 interview on 20/20), audio of Thunder’s DEF CON 3 speech, Exploding the Phone, The Prototype for Clubhouse Is 40 Years Old, and It Was Built by Phone Hackers, and Katie Hafner and John Markoff’s book Cyberpunk: Outlaws and Hackers on the Computer Frontier, Revised.


A Brief History of the Pixel


Computer graphics legend Alvy Ray Smith (Pixar, Lucasfilm, Microsoft) has written a new book called A Biography of the Pixel (ebook). In this adapted excerpt, Smith traces the origins of the pixel — which he calls “a repackaging of infinity” — from Joseph Fourier to Vladimir Kotelnikov to the first computers to Toy Story.

Taking pictures with a cellphone is perhaps the most pervasive digital light activity in the world today, contributing to the vast space of digital pictures. Picture-taking is a straightforward 2D sampling of the real world. The pixels are stored in picture files, and the pictures represented by them are displayed with various technologies on many different devices.

But displays don’t know where the pixels come from. The sampling theorem doesn’t care whether they actually sample the real world. So making pixels is the other primary source of pictures today, and we use computers for the job. We can make pixels that seem to sample unreal worlds, eg, the imaginary world of a Pixar movie, if they play by the same rules as pixels taken from the real world.

The taking vs making — or shooting vs computing - distinction separates digital light into two realms known generically as image processing and computer graphics. This is the classical distinction between analysis and synthesis. The pixel is key to both, and one theory suffices to unify the entire field.

Computation is another key to both realms. The number of pixels involved in any picture is immense - typically, it takes millions of pixels to make just one picture. An unaided human mind simply couldn’t keep track of even the simplest pixel computations, whether the picture was taken or made. Consider just the easiest part of the sampling theorem’s ‘spread and add’ operation - the addition. Can you add a million numbers? How about ‘instantaneously’? We have to use computers.
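The “spread and add” operation is easier to see in a toy than in prose. Here’s a small C sketch of my own (not from Smith’s book): a handful of samples are each spread onto a finer grid by a little triangular bump, standing in for a real reconstruction filter, and the overlapping contributions are added. Now imagine doing that for millions of pixels at once.

```c
#include <stdio.h>

#define NSAMPLES 5
#define UPSCALE  4                    /* output points per input sample */
#define NOUT     (NSAMPLES * UPSCALE)

int main(void)
{
    const double samples[NSAMPLES] = { 0.0, 1.0, 0.5, 0.8, 0.2 };
    double out[NOUT] = { 0.0 };

    for (int s = 0; s < NSAMPLES; s++) {
        int center = s * UPSCALE;
        for (int k = -UPSCALE; k <= UPSCALE; k++) {
            int i = center + k;
            if (i < 0 || i >= NOUT) {
                continue;
            }
            /* "spread": weight falls off linearly away from the sample... */
            double weight = 1.0 - (double)(k < 0 ? -k : k) / UPSCALE;
            /* ..."and add": overlapping contributions simply sum */
            out[i] += samples[s] * weight;
        }
    }

    for (int i = 0; i < NOUT; i++) {
        printf("%2d: %.3f\n", i, out[i]);
    }
    return 0;
}
```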


How Do Algorithms Become Biased?

In the latest episode of the Vox series Glad You Asked, host Joss Fong looks at how racial and other kinds of bias are introduced into massive computer systems and algorithms, particularly those that work through machine learning, that we use every day.

Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?


Lava Lamps Help Keep The Internet Secure??

Web performance and security company Cloudflare uses a wall of lava lamps to generate random numbers to help keep the internet secure. Random numbers generated by computers are often not exactly random, so what Cloudflare does is take photos of the lamps’ activities and uses the uncertainty of the lava blooping up and down to generate truly random numbers. Here’s a look at how the process works:

At Cloudflare, we have thousands of computers in data centers all around the world, and each one of these computers needs cryptographic randomness. Historically, they got that randomness using the default mechanism made available by the operating system that we run on them, Linux.

But being good cryptographers, we’re always trying to hedge our bets. We wanted a system to ensure that even if the default mechanism for acquiring randomness was flawed, we’d still be secure. That’s how we came up with LavaRand.

LavaRand is a system that uses lava lamps as a secondary source of randomness for our production servers. A wall of lava lamps in the lobby of our San Francisco office provides an unpredictable input to a camera aimed at the wall. A video feed from the camera is fed into a CSPRNG [cryptographically-secure pseudorandom number generator], and that CSPRNG provides a stream of random values that can be used as an extra source of randomness by our production servers. Since the flow of the “lava” in a lava lamp is very unpredictable, “measuring” the lamps by taking footage of them is a good way to obtain unpredictable randomness. Computers store images as very large numbers, so we can use them as the input to a CSPRNG just like any other number.
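To make the pipeline a little more concrete, here’s a rough C sketch of the idea (mine, not Cloudflare’s code): read one captured frame of the lamp wall and hash it down to 256 bits of seed material. The frame.raw filename is made up, and the use of OpenSSL’s SHA-256 here is just one convenient stand-in for the step that feeds the CSPRNG.

```c
#include <stdio.h>
#include <openssl/sha.h>   /* build with -lcrypto */

int main(void)
{
    FILE *f = fopen("frame.raw", "rb");   /* hypothetical raw camera frame */
    if (f == NULL) {
        perror("frame.raw");
        return 1;
    }

    SHA256_CTX ctx;
    SHA256_Init(&ctx);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
        SHA256_Update(&ctx, buf, n);      /* fold every pixel into the hash */
    }
    fclose(f);

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256_Final(digest, &ctx);

    /* In a real deployment the digest would be mixed into a proper CSPRNG or
     * the OS entropy pool alongside other sources, never used on its own. */
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++) {
        printf("%02x", digest[i]);
    }
    printf("\n");
    return 0;
}
```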

(via open culture)


Google Announces They Have Achieved “Quantum Supremacy”

Today, Google announced the results of their quantum supremacy experiment in a blog post and Nature article. First, a quick note on what quantum supremacy is: the idea that a quantum computer can quickly solve problems that classical computers either cannot solve or would take decades or centuries to solve. Google claims they have achieved this supremacy using a 54-qubit quantum computer:

Our machine performed the target computation in 200 seconds, and from measurements in our experiment we determined that it would take the world’s fastest supercomputer 10,000 years to produce a similar output.

You may find it helpful to watch Google’s 5-minute explanation of quantum computing and quantum supremacy (see also Nature’s explainer video):

IBM has pushed back on Google’s claim, arguing that their classical supercomputer can solve the same problem in far less than 10,000 years.

We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.

Because the original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met.

One of the fears of quantum supremacy being achieved is that quantum computing could be used to easily crack the encryption currently used anywhere you use a password or to keep communications private, although it seems like we still have some time before this happens.

“The problem their machine solves with astounding speed has been very carefully chosen just for the purpose of demonstrating the quantum computer’s superiority,” Preskill says. It’s unclear how long it will take quantum computers to become commercially useful; breaking encryption — a theorized use for the technology — remains a distant hope. “That’s still many years out,” says Jonathan Dowling, a professor at Louisiana State University.


The Most Important Pieces of Code in the History of Computing

Bitcoin Code

Slate recently asked a bunch of developers, journalists, computer scientists, and historians what they thought the most influential and consequential pieces of computer code were. They came up with a list of 36 world-changing pieces of code, including the code responsible for the 1202 alarm thrown by the Apollo Guidance Computer during the first Moon landing, the HTML hyperlink, PageRank, the guidance system for the Roomba, and Bitcoin (above).

Here’s the entry for the three lines of code that help cellular networks schedule and route calls efficiently and equitably:

At any given moment in a given area, there are often many more cellphones than there are base station towers. Unmediated, all of these transmissions would interfere with one another and prevent information from being received reliably. So the towers have a prioritization problem to solve: making sure all users can complete their calls, while taking into account the fact that users in noisier places need to be given more resources to receive the same quality of service. The solution? A compromise between the needs of individual users and the overall performance of the entire network. Proportional fair scheduling ensures all users have at least a minimal level of service while maximizing total network throughput. This is done by giving lower priority to users that are anticipated to require more resources. Just three lines of code that make all 3G and 4G cellular networks around the world work.
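The scheduling rule itself is compact enough to sketch. Here’s a toy version of proportional fair scheduling in C (my illustration of the idea, not the actual cellular code): each time slot, serve the user whose currently achievable rate is highest relative to the throughput they’ve been averaging, then update the running averages.

```c
#include <stdio.h>

#define NUSERS 3
#define NSLOTS 4
#define ALPHA  0.05   /* weight of the latest slot in the running average */

int main(void)
{
    /* Instantaneous achievable rate per user per slot (made-up numbers). */
    const double rate[NSLOTS][NUSERS] = {
        { 5.0, 1.0, 2.0 },
        { 1.0, 4.0, 2.0 },
        { 2.0, 1.0, 6.0 },
        { 5.0, 5.0, 1.0 },
    };
    double avg[NUSERS] = { 1.0, 1.0, 1.0 };   /* historical average throughput */

    for (int t = 0; t < NSLOTS; t++) {
        /* The heart of proportional fair scheduling: serve the user with the
         * highest ratio of achievable rate to average throughput. */
        int best = 0;
        for (int u = 1; u < NUSERS; u++) {
            if (rate[t][u] / avg[u] > rate[t][best] / avg[best]) {
                best = u;
            }
        }
        printf("slot %d: serve user %d\n", t, best);

        /* Update every user's running average; only the served user adds rate. */
        for (int u = 0; u < NUSERS; u++) {
            avg[u] = (1.0 - ALPHA) * avg[u] + ALPHA * (u == best ? rate[t][u] : 0.0);
        }
    }
    return 0;
}
```

Users who have been getting little service see their averages shrink, which boosts their priority, while users in good radio conditions still tend to get served when their rates spike; that’s the compromise the entry describes.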


The Biggest Nonmilitary Effort in the History of Human Civilization

Buzz Aldrin on the moon

Charles Fishman has a new book, One Giant Leap, all about NASA’s Apollo program to land an astronaut on the moon. He talks about it on Fresh Air with Dave Davies.

On what computers were like in the early ’60s and how far they had to come to go to space

It’s hard to appreciate now, but in 1961, 1962, 1963, computers had the opposite reputation of the reputation they have now. Most computers couldn’t go more than a few hours without breaking down. Even on John Glenn’s famous orbital flight — the first U.S. orbital flight — the computers in mission control stopped working for three minutes [out] of four hours. Well, that’s only three minutes [out] of four hours, but that was the most important computer in the world during that four hours and they couldn’t keep it going during the entire orbital mission of John Glenn.

So they needed computers that were small, lightweight, fast and absolutely reliable, and the computers that were available then — even the compact computers — were the size of two or three refrigerators next to each other, and so this was a huge technology development undertaking of Apollo.

On the seamstresses who wove the computer memory by hand

There was no computer memory of the sort that we think of now on computer chips. The memory was literally woven … onto modules and the only way to get the wires exactly right was to have people using needles and, instead of thread, wire to weave the computer program. …

The Apollo computers had a total of 73 [kilobytes] of memory. If you get an email with the morning headlines from your local newspaper, it takes up more space than 73 [kilobytes]. … They hired seamstresses. … Every wire had to be right. Because if you got [it] wrong, the computer program didn’t work. They hired women, and it took eight weeks to manufacture the memory for a single Apollo flight computer, and that eight weeks of manufacturing was literally sitting at sophisticated looms weaving wires, one wire at a time.

One anecdote that was new to me describes Armstrong and Aldrin testing a bag of moon dust to make sure it wouldn’t catch fire when the cabin was repressurized.

Armstrong and Aldrin actually had been instructed to do a little experiment. They had a little bag of lunar dirt and they put it on the engine cover of the ascent engine, which was in the middle of the lunar module cabin. And then they slowly pressurized the cabin to make sure it wouldn’t catch fire and it didn’t. …

The smell turns out to be the smell of fireplace ashes, or as Buzz Aldrin put it, the smell of the air after a fireworks show. This was one of the small but sort of delightful surprises about flying to the moon.


The Women Who Helped Pioneer Chaos Theory

The story goes that modern chaos theory was birthed by Edward Lorenz’s paper about his experiments with weather simulation on a computer. The computing power helped Lorenz nail down hidden patterns that had been hinted at by computer-less researchers for decades. But the early tenets of chaos theory were not the only things that were hidden. The women who wrote the programs that enabled Lorenz’s breakthroughs haven’t received their proper due.

But in fact, Lorenz was not the one running the machine. There’s another story, one that has gone untold for half a century. A year and a half ago, an MIT scientist happened across a name he had never heard before and started to investigate. The trail he ended up following took him into the MIT archives, through the stacks of the Library of Congress, and across three states and five decades to find information about the women who, today, would have been listed as co-authors on that seminal paper. And that material, shared with Quanta, provides a fuller, fairer account of the birth of chaos.

The two women who programmed the computer for Lorenz were Ellen Gille (née Fetter) and Margaret Hamilton. Yes, that Margaret Hamilton, whose already impressive career starts to look downright bonkers when you add in her contributions to chaos theory.


Grace Hopper Explains a Nanosecond

In this short clip from 1983, legendary computer scientist Grace Hopper uses a short length of wire to explain what a nanosecond is.

Now what I wanted when I asked for a nanosecond was: I wanted a piece of wire which would represent the maximum distance that electricity could travel in a billionth of a second. Now of course it wouldn’t really be through wire — it’d be out in space, the velocity of light. So if we start with a velocity of light and use your friendly computer, you’ll discover that a nanosecond is 11.8 inches long, the maximum limiting distance that electricity can travel in a billionth of a second.
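You can redo her arithmetic with your own friendly computer; here’s a quick C sketch:

```c
#include <stdio.h>

int main(void)
{
    const double c      = 299792458.0;      /* speed of light, meters per second */
    const double ns     = 1e-9;             /* one nanosecond, in seconds */
    const double meters = c * ns;           /* distance light covers in 1 ns */
    const double inches = meters / 0.0254;  /* one inch is exactly 0.0254 m */

    printf("light travels %.4f m (%.1f inches) in one nanosecond\n", meters, inches);
    return 0;
}
```

It prints about 0.2998 meters, or 11.8 inches, the length of Hopper’s wire.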

You can watch the entirety of a similar lecture Hopper gave at MIT in 1985, in which she “practically invents computer science at the chalkboard”. (via tmn)


What Counts As Evidence in Mathematics?

Einstein’s blackboard

The ultimate form of argument, and for some, the most absolute form of truth, is mathematical proof. But short of a conclusive proof of a theorem, mathematicians also consider evidence that might 1) disprove a thesis or 2) suggest its possible truth or even avenues for proving that it’s true. But in a not-quite-empirical field, what the heck counts as evidence?

The twin primes conjecture is one example where evidence, as much as proof, guides our mathematical thinking. Twin primes are pairs of prime numbers that differ by 2 — for example, 3 and 5, 11 and 13, and 101 and 103 are all twin prime pairs. The twin primes conjecture hypothesizes that there is no largest pair of twin primes, that the pairs keep appearing as we make our way toward infinity on the number line.

The twin primes conjecture is not the Twin Primes Theorem, because, despite being one of the most famous problems in number theory, no one has been able to prove it. Yet almost everyone believes it is true, because there is lots of evidence that supports it.

For example, as we search for large primes, we continue to find extremely large twin prime pairs. The largest currently known pair of twin primes have nearly 400,000 digits each. And results similar to the twin primes conjecture have been proved. In 2013, Yitang Zhang shocked the mathematical world by proving that there are infinitely many prime number pairs that differ by 70 million or less. Thanks to a subsequent public “Polymath” project, we now know that there are infinitely many pairs of primes that differ by no more than 246. We still haven’t proved that there are infinitely many pairs of primes that differ by 2 — the twin primes conjecture — but 2 is a lot closer to 246 than it is to infinity.
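If you want to gather a little of that evidence yourself, the small end of it is easy to reproduce. Here’s a quick C sketch (mine, not from the article) that lists the twin prime pairs below 200; the record-setting pairs with nearly 400,000 digits obviously require far more serious machinery.

```c
#include <stdbool.h>
#include <stdio.h>

/* Trial-division primality test -- fine for numbers this small. */
static bool is_prime(long n)
{
    if (n < 2) {
        return false;
    }
    for (long d = 2; d * d <= n; d++) {
        if (n % d == 0) {
            return false;
        }
    }
    return true;
}

int main(void)
{
    /* Print every twin prime pair up to 200: (3, 5), (5, 7), (11, 13), ... */
    for (long p = 2; p <= 200; p++) {
        if (is_prime(p) && is_prime(p + 2)) {
            printf("(%ld, %ld)\n", p, p + 2);
        }
    }
    return 0;
}
```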

This starts to get really complicated once you leave the relatively straightforward arithmetical world of prime numbers behind, with its clearly empirical pairs and approximating conjectures, and start working with computer models that generate arbitrarily large numbers of mathematical statements, all of which can be counted as evidence.

Patrick Honner, the author of this article, gives what seems like a simple example: are all lines parallel or intersecting? Then he shows how the models one can use to answer this question vary wildly based on their initial assumptions, in this case, whether one is considering lines in a single geometric plane or lines in an n-dimensional geometric space. As always in mathematics, it comes back to one’s initial set of assumptions; you can “prove” (i.e., provide large quantities of evidence for) a statement with one set of rules, but that set of rules is not the universe.


The Secret History of Women in Coding

In an excerpt of his forthcoming book Coders, Clive Thompson writes about The Secret History of Women in Coding for the NY Times.

A good programmer was concise and elegant and never wasted a word. They were poets of bits. “It was like working logic puzzles — big, complicated logic puzzles,” Wilkes says. “I still have a very picky, precise mind, to a fault. I notice pictures that are crooked on the wall.”

What sort of person possesses that kind of mentality? Back then, it was assumed to be women. They had already played a foundational role in the prehistory of computing: During World War II, women operated some of the first computational machines used for code-breaking at Bletchley Park in Britain. In the United States, by 1960, according to government statistics, more than one in four programmers were women. At M.I.T.’s Lincoln Labs in the 1960s, where Wilkes worked, she recalls that most of those the government categorized as “career programmers” were female. It wasn’t high-status work — yet.

This all changed in the 80s, when computers and programming became, culturally, a mostly male pursuit.

By the ’80s, the early pioneering work done by female programmers had mostly been forgotten. In contrast, Hollywood was putting out precisely the opposite image: Computers were a male domain. In hit movies like “Revenge of the Nerds,” “Weird Science,” “Tron,” “WarGames” and others, the computer nerds were nearly always young white men. Video games, a significant gateway activity that led to an interest in computers, were pitched far more often at boys, as research in 1985 by Sara Kiesler, a professor at Carnegie Mellon, found. “In the culture, it became something that guys do and are good at,” says Kiesler, who is also a program manager at the National Science Foundation. “There were all kinds of things signaling that if you don’t have the right genes, you’re not welcome.”

See also Claire Evans’ excellent Broad Band: The Untold Story of the Women Who Made the Internet.


Buy the Cheap Thing First

cast iron skillets

Beth Skwarecki has written the perfect Lifehacker post with the perfect headline (so perfect I had to use it for my aggregation headline too, which I try to never do):

When you’re new to a sport, you don’t yet know what specialized features you will really care about. You probably don’t know whether you’ll stick with your new endeavor long enough to make an expensive purchase worth it. And when you’re a beginner, it’s not like beginner level equipment is going to hold you back…

How cheap is too cheap?

Find out what is totally useless, and never worth your time. Garage sale ice skates with ankles that are so soft they flop over? Pass them up.

What do most people do when starting out?

If you’re getting into powerlifting and you don’t have a belt and shoes, you can still lift with no belt and no shoes, or with the old pair of Chucks that you may already have in your closet. Ask people about what they wore when they were starting out, and it’s often one of those options…

What’s your exit plan?

How will you decide when you’re done with your beginner equipment? Some things will wear out: Running shoes will feel flat and deflated. Some things may still be usable, but you’ll discover their limitations. Ask experienced people what the fancier gear can do that yours can’t, and you’ll get a sense of when to upgrade. (You may also be able to sell still-good gear to another beginner to recoup some of your costs.)

Wearing out your beginner gear is like graduating. You know that you’ve stuck with the sport long enough that you aren’t truly a beginner anymore. You may have managed to save up some cash for the next step. And you can buy the nicer gear now, knowing exactly what you want and need.

This is 100 percent the truth, and applies to way more than just sports equipment. Computers, cooking, fashion, cars, furniture, you name it. The key thing is to pick your spots, figure out where you actually know what you want and what you want to do with it, and optimize for those. Everywhere else? Don’t outwit yourself. Play it like the beginner that you are. And save some scratch in the process. Perfect, perfect advice.


The Embroidered Computer

Artists Irene Posch & Ebru Kurbak have built The Embroidered Computer, a programmable 8-bit computer made using traditional embroidery techniques and materials.

Embroidered Computer

Solely built from a variety of metal threads, magnetic, glass and metal beads, and being inspired by traditional crafting routines and patterns, the piece questions the appearance of current digital and electronic technologies surrounding us, as well as our interaction with them.

Technically, the piece consists of (textile) relays, similar to early computers before the invention of semiconductors. Visually, the gold materials, here used for their conductive properties, arranged into specific patterns to fulfill electronic functions, dominate the work. Traditionally purely decorative, their pattern here defines their function. They lay bare core digital routines usually hidden in black boxes. Users are invited to interact with the piece in programming the textile to compute for them.
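That relay detail is worth pausing on, because relays really are enough to compute with. Here’s a little C model of my own (not the artists’ circuit) of how a relay, treated as a switch closed by a control signal, gives you AND when two are wired in series and OR when two are wired in parallel, the sort of building block early relay computers were made of.

```c
#include <stdio.h>

typedef int wire;   /* 0 = no current, 1 = current flowing */

/* An idealized relay: if the coil is energized, the switch closes and
 * passes its input; otherwise nothing gets through. */
static wire relay(wire coil, wire input)
{
    return coil ? input : 0;
}

static wire gate_and(wire a, wire b)
{
    return relay(b, relay(a, 1));       /* two relays in series */
}

static wire gate_or(wire a, wire b)
{
    return relay(a, 1) | relay(b, 1);   /* two relays in parallel */
}

int main(void)
{
    for (wire a = 0; a <= 1; a++) {
        for (wire b = 0; b <= 1; b++) {
            printf("a=%d b=%d  AND=%d  OR=%d\n", a, b, gate_and(a, b), gate_or(a, b));
        }
    }
    return 0;
}
```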

The piece also slyly references the connection between the early history of computing and the textile industry.

When British mathematician Charles Babbage released his plans for the Analytical Engine, widely considered the first modern computer design, fellow mathematician Ada Lovelace is famously quoted as saying that ‘the Analytical Engine weaves algebraic patterns, just as the Jacquard loom weaves flowers and leaves.’

The Jacquard loom is often considered a predecessor to the modern computer because it uses a binary system to store information that can be read by the loom and reproduced many times over.

See also Posch & Kurbak’s The Knitted Radio, a sweater that functions as an FM radio transmitter.


Papercraft Computers

Papercraft Electronics

Rocky Bergen makes paper models of vintage electronics and computing gear. And here’s the cool bit…you can download the plans to print and fold your own: Apple II, Conion C-100F boom box, Nintendo GameCube, and Commodore 64.


Why Doctors Hate Their Computers

Nobody writes about health care practice from the inside out like Atul Gawande, here focusing on an increasingly important part of clinical work: information technology.

A 2016 study found that physicians spent about two hours doing computer work for every hour spent face to face with a patient—whatever the brand of medical software. In the examination room, physicians devoted half of their patient time facing the screen to do electronic tasks. And these tasks were spilling over after hours. The University of Wisconsin found that the average workday for its family physicians had grown to eleven and a half hours. The result has been epidemic levels of burnout among clinicians. Forty per cent screen positive for depression, and seven per cent report suicidal thinking—almost double the rate of the general working population.

Something’s gone terribly wrong. Doctors are among the most technology-avid people in society; computerization has simplified tasks in many industries. Yet somehow we’ve reached a point where people in the medical profession actively, viscerally, volubly hate their computers.

It’s not just the workload, but also what Gawande calls “the Revenge of the Ancillaries” — designing software for collaboration between different health care professionals, from surgeons to administrators, all of whom have competing stakes and preferences in how a product is used and designed, what information it offers and what it demands. And most medical software doesn’t handle these competing demands very well.


The Stylish & Colorful Computing Machines of Yesteryear

Holy moly, these photographs of vintage computers & peripherals by “design and tech obsessive” James Ball are fantastic.

Ball Computers

He did a similar series with early personal computers subtitled “Icons of Beige”.

Ball Computers

(via @mwichary)


The history and future of data on magnetic tape

an IBM magnetic tape

Maybe it’s because I’m part of the cassette generation, but I’m just charmed by IBM researcher Mark Lantz’s ode to that great innovation in data storage, magnetic tape. What could be seen as an intermediate but mostly dead technology is actually quite alive and thriving.

Indeed, much of the world’s data is still kept on tape, including data for basic science, such as particle physics and radio astronomy, human heritage and national archives, major motion pictures, banking, insurance, oil exploration, and more. There is even a cadre of people (including me, trained in materials science, engineering, or physics) whose job it is to keep improving tape storage…

It’s true that tape doesn’t offer the fast access speeds of hard disks or semiconductor memories. Still, the medium’s advantages are many. To begin with, tape storage is more energy efficient: Once all the data has been recorded, a tape cartridge simply sits quietly in a slot in a robotic library and doesn’t consume any power at all. Tape is also exceedingly reliable, with error rates that are four to five orders of magnitude lower than those of hard drives. And tape is very secure, with built-in, on-the-fly encryption and additional security provided by the nature of the medium itself. After all, if a cartridge isn’t mounted in a drive, the data cannot be accessed or modified. This “air gap” is particularly attractive in light of the growing rate of data theft through cyberattacks.

Plus, it writes fast (faster than a hard drive), and it’s dirt cheap. And hard drives are up against some nasty physical limits when it comes to how much more data they can store on platters. This is why cloud providers like Google and Microsoft, among others, still use tape backup for file storage, and why folks at IBM and other places are working to improve tape’s efficiency, speed, and reliability.

Lantz describes how the newest tape systems in labs can read and write data on tracks 100 nanometers wide, at an areal density of 201 gigabits per square inch. “That means that a single tape cartridge could record as much data as a wheelbarrow full of hard drives.” I’ve never needed a wheelbarrow full of hard drives, but I think that is pretty cool. And I’m always excited to find out that engineers are still working to make old, reliable, maybe unsexy technologies work better and better.


Alan Turing was an excellent runner

Alan Turing Runner

Computer scientist, mathematician, and all-around supergenius Alan Turing, who played a pivotal role in breaking secret German codes during WWII and developing the conceptual framework for the modern general purpose computer, was also a cracking good runner.

He was a runner who, like many others, came to the sport rather late. According to an article by Pat Butcher, he did not compete as an undergraduate at Cambridge, preferring to row. But after winning his fellowship to King’s College, he began running with more purpose. He is said to have often run a route from Cambridge to Ely and back, a distance of 50 kilometers.

It’s also said Turing would occasionally run to London for meetings, a distance of 40 miles. In 1947, after only two years of training, Turing ran a marathon in 2:46. He was even in contention for a spot on the British Olympic team for 1948 before an injury held him to fifth place at the trials. Had he competed and run at his personal best time, he would have finished 15th.

As the photo above shows, Turing had a brute force running style, not unlike the machine he helped design to break Enigma coded messages. He ran, he said, to relieve stress.

“We heard him rather than saw him. He made a terrible grunting noise when he was running, but before we could say anything to him, he was past us like a shot out of a gun. A couple of nights later we caught up with him long enough for me to ask who he ran for. When he said nobody, we invited him to join Walton. He did, and immediately became our best runner… I asked him one day why he punished himself so much in training. He told me ‘I have such a stressful job that the only way I can get it out of my mind is by running hard; it’s the only way I can get some release.’”

I found out about Turing’s running prowess via the Wikipedia page of non-professional marathon runners. Turing is quite high on the list, particularly if you filter out world class athletes from other sports. Also on the list, just above Turing, is Wolfgang Ketterle, a Nobel Prize-winning physicist who ran a 2:44 in Boston in 2014 at the age of 56.


Radiohead hid an old school computer program on their new album

As if you didn’t already know that Radiohead are a bunch of big ole nerds, there’s an easter egg on a cassette tape included in the Boxed Edition of OK Computer OKNOTOK 1997 2017. At the end of the tape recording, there are some blips and bleeps, which Maciej Korsan interpreted correctly as a program for an old computer system.

As a kid I was an owner of the Commodore 64. I remember that all my friends already were the PC users but my parents declined to buy me one for a long time. So I sticked to my old the tape-based computer listening to it’s blips and waiting for the game to load. Over 20 years later I was sitting in front of my MacBook, listening to the digitalised version of the tape my favourite band just released and then I’ve heard a familiar sound… ‘This must be an old computer program, probably C64 one’ I thought.

The program turned out to run on the ZX Spectrum, a computer the lads would likely have encountered as kids.


Watch a near-pristine Apple I boot up and run a program

Glenn and Shannon Dellimore own at least two original Apple I computers built in 1976 by Steve Wozniak, Dan Kottke, and Steve Jobs. The couple recently purchased one of the computers at auction for $365,000 and then lent it to London’s Victoria and Albert Museum for an exhibition. The hand-built machine is in such good condition that they were able to boot it up and run a simple program.

The superlative rarity of an Apple-1 in this condition is corroborated by this machine’s early history. The owner, Tom Romkey, owned the “Personal Computer Store” in Florida, and was certified as an Apple level 1 technician in 1981. One day, a customer came into his shop and traded in his Apple-1 computer for a brand new NCR Personal Computer. The customer had only used the Apple-1 once or twice, and Mr. Romkey set it on a shelf, and did not touch it again.

The Apple I was the first modern personal computer: the whole thing fit on just one board and used the familiar keyboard/monitor input and output.

By early 1976, Steve Wozniak had completed his 6502-based computer and would display enhancements or modifications at the bi-weekly Homebrew Computer Club meetings. Steve Jobs was a 21 year old friend of Wozniak’s and also a visitor at the Homebrew club. He had worked with Wozniak in the past (together they designed the arcade game “Breakout” for Atari) and was very interested in his computer. During the design process Jobs made suggestions that helped shape the final product, such as the use of the newer dynamic RAMs instead of older, more expensive static RAMs. He suggested to Wozniak that they get some printed circuit boards made for the computer and sell it at the club for people to assemble themselves. They pooled their financial resources together to have PC boards made, and on April 1st, 1976 they officially formed the Apple Computer Company. Jobs had recently worked at an organic apple orchard, and liked the name because “he thought of the apple as the perfect fruit — it has a high nutritional content, it comes in a nice package, it doesn’t damage easily — and he wanted Apple to be the perfect company. Besides, they couldn’t come up with a better name.”

In other words, Woz invented the Apple computer, but Jobs invented Apple Computer. Here’s a longer video of another working Apple I:

This one is also in great condition, although it’s been restored and some of the original parts have been replaced. If you’d like to play around with your own Apple I without spending hundreds of thousands of dollars at an auction, I would recommend buying a replica kit or trying out this emulator written in Javascript. (thx, chris)


Computer Show is back! (As an ad for HP printers.)

Missed this earlier this month while I was on vacation: Computer Show is back with a new episode, partnering with HP to showcase one of their fast color printers. Yes it’s an ad, but yes it’s still funny.


The Brilliant Life of Ada Lovelace

From Feminist Frequency, a quick video biography of Ada Lovelace, which talks about the importance of her contribution to computing.

A mathematical genius and pioneer of computer science, Ada Lovelace not only created the very first computer program in the mid-1800s but also foresaw the digital future more than a hundred years to come.

This is part of Feminist Frequency’s Ordinary Women series, which also covered women like Ida B. Wells and Emma Goldman.


Calculating Ada

From the BBC, an hour-long documentary on Ada Lovelace, the world’s first computer programmer.

You might have assumed that the computer age began with some geeks out in California, or perhaps with the codebreakers of World War II. But the pioneer who first saw the true power of the computer lived way back, during the transformative age of the Industrial Revolution.

Happy Ada Lovelace Day, everyone!