A beautifully shot HD video of machines manufacturing springs and other wire gizmos. I love how all the tools take turns and work together to make the widgets. Imagine the chatter amongst the tools:
“Ok, thanks, my turn.”
“Here, hold this while I turn it. Alright, we’re out.”
“Lemme just bend that a little for you.”
“Outta the way, I just gotta twist this for a sec.”
(via @pieratt, who says to substitute Steve Reich for the provided music)
“If you know you can save at least one person, at least save that one. Save the one in the car,” von Hugo told Car and Driver in an interview. “If all you know for sure is that one death can be prevented, then that’s your first priority.”
In other words, their driverless cars will act very much like the stereotypical entitled European luxury car driver. (via @essl)
Wired took an exclusive tour of NASA’s rockets and robots with photographer Benedict Redgrove and the photographic results are — sorry! — out of this world. They’re best viewed on Redgrove’s own site; he must be — still sorry!! — over the moon about how they turned out. But seriously, that DARPA centaur-on-wheels robot…how cool is that?
The very beginning of Attack of the Killer Robots by Sarah Topol features this quote by Stuart Russell, a Berkeley computer science professor. It is terrifying:
A very, very small quadcopter, one inch in diameter, can carry a one- or two-gram shaped charge. You can order them from a drone manufacturer in China. You can program the code to say: “Here are thousands of photographs of the kinds of things I want to target.” A one-gram shaped charge can punch a hole in nine millimeters of steel, so presumably you can also punch a hole in someone’s head. You can fit about three million of those in a semi-tractor-trailer. You can drive up I-95 with three trucks and have 10 million weapons attacking New York City. They don’t have to be very effective, only 5 or 10% of them have to find the target.
There will be manufacturers producing millions of these weapons that people will be able to buy just like you can buy guns now, except millions of guns don’t matter unless you have a million soldiers. You need only three guys to write the program and launch them. So you can just imagine that in many parts of the world humans will be hunted. They will be cowering underground in shelters and devising techniques so that they don’t get detected. This is the ever-present cloud of lethal autonomous weapons.
In addition to robots that run fast, can’t be knocked over, launch themselves 30 feet into the air, and climb up walls, Boston Dynamics also makes robots who move like people. Now, imagine if that robot swore like a longshoreman while going about its duties. This made me laugh super hard. (via @nickkokonas)
Boston Dynamics has a new 55-pound robot with an arm that looks like a head. It gets up after slipping on banana peels and can load your delicate glassware into the dishwasher.
Do they deliberately make these videos unsettling and creepy? Or is that just me? That last scene, where the robot kinda lunges at the guy and then falls over…I might have nightmares about that.
Jller is a machine that sorts stones from a specific river according to their geologic age.
The machine works with a computer vision system that processes images of the stones and maps each stone’s location on the platform throughout the ordering process. The information extracted from each stone includes dominant color, color composition, and histograms of structural features such as lines, layers, patterns, grain, and surface texture. This data is used to assign the stones to predefined categories.
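To make that pipeline a little more concrete, here’s a minimal sketch of the same idea in Python with OpenCV: pull a dominant color and a crude texture measure from a photo of a stone, then bin it. The feature choices, thresholds, filename, and category names below are my own stand-ins for illustration, not the project’s actual code.

```python
# Toy sketch of a Jller-style pipeline: extract simple visual features from a
# photo of a stone and drop it into a predefined bucket. Assumes OpenCV + NumPy;
# the category rules and filename are invented for illustration.
import cv2
import numpy as np

def dominant_color(img_bgr, k=3):
    """Rough dominant color via k-means over the pixels."""
    pixels = img_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)
    counts = np.bincount(labels.flatten())
    return centers[np.argmax(counts)]  # BGR of the largest cluster

def texture_score(img_bgr):
    """Crude surface-texture measure: variance of the Laplacian."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def categorize(img_bgr):
    """Map features onto made-up bins standing in for geologic categories."""
    b, g, r = dominant_color(img_bgr)
    if texture_score(img_bgr) > 500:
        return "coarse-grained"
    if r > 150 and g > 120:
        return "light, sandstone-ish"
    return "dark, fine-grained"

stone = cv2.imread("stone_0042.jpg")  # hypothetical photo from the platform camera
print(categorize(stone))
```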
A team of researchers at Stanford built a small army of tiny robots that pulled a car across a concrete floor.
With careful consideration to robot gait, we demonstrate a team of 6 super strong microTug microrobots weighing 100 grams pulling the author’s unmodified 3900lb (1800kg) car on polished concrete.
As any good tug-of-war team knows, the trick was to ensure that the tiny bots all pulled together at the same time. (via NY Times)
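Just to put the quoted numbers in perspective, a quick back-of-the-envelope calculation (the car and robot figures come straight from the excerpt above; the human analogy is my own rough extrapolation):

```python
# Back-of-the-envelope scale check using the figures quoted above.
car_mass_kg = 1800      # the author's car, per the paper
team_mass_kg = 0.100    # six microTug robots, 100 grams total

ratio = car_mass_kg / team_mass_kg
print(f"mass ratio: {ratio:,.0f} to 1")  # ~18,000 to 1

# Rough human equivalent: a 70 kg person towing the same mass ratio.
human_mass_kg = 70
print(f"like a person towing ~{human_mass_kg * ratio / 1000:,.0f} tonnes")  # ~1,260 tonnes
```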
I have been following with fascination the match between Google’s Go-playing AI AlphaGo and top-tier player Lee Sedol and with even more fascination the human reaction to AlphaGo’s success. Many humans seem unnerved not only by AlphaGo’s early lead in the best-of-five match but especially by how the machine is playing in those games.
Then, with its 19th move, AlphaGo made an even more surprising and forceful play, dropping a black piece into some empty space on the right-hand side of the board. Lee Sedol seemed just as surprised as anyone else. He promptly left the match table, taking an (allowed) break as his game clock continued to run. “It’s a creative move,” Redmond said of AlphaGo’s sudden change in tack. “It’s something that I don’t think I’ve seen in a top player’s game.”
When Lee Sedol returned to the match table, he took an unusually long time to respond, his game clock running down to an hour and 19 minutes, a full twenty minutes less than the time left on AlphaGo’s clock. “He’s having trouble dealing with a move he has never seen before,” Redmond said. But he also suspected that the Korean grandmaster was feeling a certain “pleasure” after the machine’s big move. “It’s something new and unique he has to think about,” Redmond explained. “This is a reason people become pros.”
“A creative move.” Let’s think about that…a machine that is thinking creatively. Whaaaaaa… In fact, AlphaGo’s first strong human opponent, Fan Hui, has credited the machine for making him a better player, a more beautiful player:
As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn’t see before. And that makes him happy. “So beautiful,” he says. “So beautiful.”
Creative. Beautiful. Machine? What is going on here? Not even the creators of the machine know:
“Although we have programmed this machine to play, we have no idea what moves it will come up with,” Graepel said. “Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands — and much better than we, as Go players, could come up with.”
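As a loose illustration of what “we just create the data sets and the training algorithms” means, here’s a toy sketch in Python. It is nothing like AlphaGo’s real architecture (which paired deep policy and value networks with Monte Carlo tree search); the point is only that the move the program plays falls out of learned parameters rather than any hand-written rule.

```python
# Toy illustration only: a learned "policy" scores every legal point on the
# board and the program plays the highest-scoring one. The weights here are
# random stand-ins for whatever training would actually produce.
import numpy as np

SIZE = 19
rng = np.random.default_rng(0)

board = np.zeros((SIZE, SIZE), dtype=int)               # 0 empty, 1 black, -1 white
weights = rng.normal(size=(SIZE * SIZE, SIZE * SIZE))   # stand-in for trained parameters

def choose_move(board):
    features = board.flatten().astype(float)
    scores = weights @ features                          # "policy" output per point
    scores[board.flatten() != 0] = -np.inf               # only empty points are legal
    best = int(np.argmax(scores + rng.normal(scale=0.01, size=scores.shape)))
    return divmod(best, SIZE)                            # (row, col) of the chosen point

print(choose_move(board))
```

Nobody hand-wrote a rule saying “play here”; change the weights (i.e., the training) and the move changes, which is roughly the sense in which the moves are “out of our hands.”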
Generally speaking,1 until recently machines were predictable and more or less easily understood. That’s central to the definition of a machine, you might say. You build them to do X, Y, & Z and that’s what they do. A car built to do 0-60 in 4.2 seconds isn’t suddenly going to do it in 3.6 seconds under the same conditions.
Now machines are starting to be built to think for themselves, creatively and unpredictably. Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it’s even thinking about it, which strikes me as a relatively new development in our relationship. It is not all that hard to imagine, in time, an even smarter AlphaGo that can do more things — paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace — and do those things creatively and better than people.
Unpredictable machines. Machines that act more like the weather than Newtonian gravity. That’s going to take some getting used to. For one thing, we might have to stop shoving them around with hockey sticks. (thx, twitter folks)
Update: AlphaGo beat Lee in the third game of the match, in perhaps the most dominant fashion yet. The human disquiet persists…this time, it’s David Ormerod:
Move after move was exchanged and it became apparent that Lee wasn’t gaining enough profit from his attack.
By move 32, it was unclear who was attacking whom, and by 48 Lee was desperately fending off White’s powerful counter-attack.
I can only speak for myself here, but as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.
Generally I avoid this sort of personal commentary, but this game was just so disquieting. I say this as someone who is quite interested in AI and who has been looking forward to the match since it was announced.
One of the game’s greatest virtuosos of the middle game had just been upstaged in black and white clarity.
AlphaGo’s strength was simply remarkable and it was hard not to feel Lee’s pain.
Let’s get the caveats out of the way here. Machines and their outputs aren’t completely deterministic. Also, with AlphaGo, we are talking about a machine with a very limited capacity. It just plays one game. It can’t make a better omelette than Jacques Pepin or flow like Nicki. But beating a top human player while showing creativity in a game like Go, which was considered uncrackable not that long ago, seems rather remarkable.↩
Boston Dynamics, creator of the Big Dog prancing robot, has upgraded their Atlas robot, which can walk on two legs, open doors, stack boxes, walk on slippery terrain, recover from being shoved, etc. And everyone’s all HA HA HA TERMINATOR but soon enough the HA HAs will become less hearty and more nervous. It took human ancestors millions of years to evolve from quadrupeds to bipeds and Boston Dynamics has done the same in just a few years.
Mark my words: no good will come of playing box keep-away with robots and treating them like, well, machines. It’s already started…did you notice Atlas didn’t even look behind itself to see if it needed to hold the door for anyone? And you think manspreading on the subway is a problem…wait until we have to deal with robotspreading by robots whose ancestors we shoved with hockey sticks.
Update: The hockey stick is back, accompanied by some “robust” tail pulling:
When the entire corpus of YouTube casually becomes a training ground for “young” AI programs, the bots are going to watch this and be all like, “WTF!?” and it’s gonna be on.
“Madeline the Robot Tamer” is a really lovely video about Madeline Gannon, a woman who dances, so to speak, with robots. As a resident at Pier 9, she developed Quipt, “a gesture-based control software that gives industrial robots basic spatial behaviors for interacting closely with people.” It’s a wonderful demonstration of robots and humans learning to work together.
When people drive cars, collisions often happen too quickly for anything but instinct; the outcome is essentially accidental. When self-driving cars eliminate driver error in these cases, decisions about how to crash can become premeditated. The car can think quickly, “Shall I crash to the right? To the left? Straight ahead?” and run a cost/benefit analysis for each option before acting. This is the trolley problem.
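If you squint, that evaluation loop looks something like the sketch below. Everything here is invented for illustration — the options, the cost numbers, and especially the occupant_weight knob — and no real autonomous-driving stack is anywhere near this crude.

```python
# Deliberately crude sketch of "evaluate each crash option, pick the cheapest."
# The options, costs, and weights are made up; real systems are nothing like this.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_injuries: float   # estimated severity-weighted harm to others
    occupant_risk: float       # estimated risk to the car's own passengers

def total_cost(opt, occupant_weight=1.0):
    # occupant_weight is the ethically loaded knob the post is asking about:
    # >1 favors the owner, <1 favors everyone else.
    return opt.expected_injuries + occupant_weight * opt.occupant_risk

options = [
    Option("swerve_left", expected_injuries=0.8, occupant_risk=0.1),
    Option("swerve_right", expected_injuries=0.2, occupant_risk=0.6),
    Option("brake_straight", expected_injuries=0.5, occupant_risk=0.4),
]

choice = min(options, key=total_cost)
print(choice.name)
```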
How will we program our driverless cars to react in situations where there is no choice to avoid harming someone? Would we want the car to run over a small child instead of a group of five adults? How about choosing between a woman pushing a stroller and three elderly men? Do you want your car to kill you (by hitting a tree at 65mph) instead of hitting and killing someone else? No? How many people would it take before you’d want your car to sacrifice you instead? Two? Six? Twenty?
The video above introduces a wrinkle I had never considered before: what if the consumer could choose the sort of safety they want? If you had to choose between buying a car that would save as many lives as possible and a car that would save you above all other concerns, which would you select? You can imagine that the answer would be different for different people and that car companies would build & market cars to appeal to each of them. Perhaps Apple would make a car that places the security of the owner above all else, Google would make a car that prioritizes saving the most lives, and Uber would build a car that keeps the largest Uber spenders alive.1
Ethical concerns like the trolley problem will seem quaint when the full power of manufacturing, marketing, and advertising is applied to self-driving cars. Imagine trying to choose one of the 20 different types of ketchup at the supermarket except that if you choose the wrong one, you and your family die and, surprise, it’s not actually your choice, it’s the potential Trump voter down the street who buys into Car Company X’s advertising urging him to “protect himself” because he feels marginalized in a society that increasingly values diversity over “traditional American values”. I mean, we already see this with huge, unsafe gas-guzzlers driving on the same roads as small, safer, energy-efficient cars, but the addition of software will turbo-charge this process. But overall cars will be much safer so it’ll all be ok?
The bit about Uber is a joke but just barely. You could easily imagine a scenario in which a Samsung car might choose to hit an Apple car over another Samsung car in an accident, all other things being equal.↩
The trolley problem is an ethical and psychological thought experiment. In its most basic formulation, you’re the driver of a runaway trolley about to hit and certainly kill five people on the track ahead, but you have the option of switching to a second track at the last minute, killing only a single person. What do you do?
The problem becomes stickier as you consider variations of the problem:
As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you — your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?
As driverless cars and other autonomous machines are increasingly on our minds, so too is the trolley problem. How will we program our driverless cars to react in situations where there is no choice to avoid harming someone? Would we want the car to run over a small child instead of a group of five adults? How about choosing between a woman pushing a stroller and three elderly men? Do you want your car to kill you (by hitting a tree at 65mph) instead of hitting and killing someone else? No? How many people would it take before you’d want your car to sacrifice you instead? Two? Six? Twenty? Is there a place in the car’s system preferences panel to set the number of people? Where do we draw those lines and who gets to decide? Google? Tesla? Uber?1 Congress? Captain Kirk?
There’s an out of control trolley speeding towards a worker. You have the ability to pull a lever and change the trolley’s path so it hits a different worker. The first worker has an intended suicide note in his back pocket but it’s in the handwriting of the second worker. The second worker wears a T-shirt that says PLEASE HIT ME WITH A TROLLEY, but the shirt is borrowed from the first worker.
Reeeeally makes you think, huh?
If Uber gets to decide, the trolley problem’s ethical concerns vanish. The car would simply hit whomever will spend less on Uber rides and deliveries in the future, weighted slightly for passenger rating. Of course, customers with a current subscription to Uber Safeguard would be given preference at different coverage levels of 1, 5, and 20+ ATPs (Alternately Targeted Persons).↩
Motherboard has an interesting story about how women who lose limbs are finding prosthetic devices are made for men: “Man Hands.”
When Jen Lacey gets her toes done, she does both feet, even though one of them is made of rubber. “I always paint my toenails,” she says, “because it’s cute, and I want to be as regular as possible.” But for a long time, even with the painted toes, her prosthetic foot looked ridiculous. The rubber foot shell she had was wide, big and ugly. “I called it a sasquatch foot,” she jokes. “It’s an ugly man foot.”
Part of the problem is that most prosthetic devices are designed by men and most prosthetists are men.
There are a few reasons for all this male-centric design. The history of prosthetics is, in large part, a history of war. One of the earliest written records of a prosthetic device comes from the Rigveda, an ancient sacred text from India. Ironically, that amputee is a woman—the warrior queen Vishpala loses her leg in battle and is fitted with a replacement so she can return and fight again. But after that, the history of prosthetics is nearly entirely a history of men—Roman generals, knights, soldiers, dukes.
Every year, 30 percent of those undergoing an amputation are women. In other words, it’s the 70 percent that’s male that drives the market.
I, for one, welcome our new ROBOPRIEST overlords. I found ROBOPRIEST on artist Josh Ellingson’s website. The robot costume-for-two was intended to perform wedding ceremonies and is the brainchild of Ellingson and Selene Luna, a 3’10” performance artist. It speaks in a robot voice, has flashing eyes, and the interior of its hatch is decorated with dirty pictures.
The idea of ROBOPRIEST started as a joke on Twitter between me and Selene Luna, an actress friend of mine in Los Angeles. We were trying to come up with funny ideas to collaborate on wedding services. The joke then turned into reality when Selene asked me to build ROBOPRIEST for her one-woman show, “Sweating the Small Stuff,” in San Francisco. The costume consisted mostly of cardboard and foam rubber with a skeleton of plastic hula hoops. The “eyes” are speakers equipped with voice-activated electro-luminescent wire. The audio for ROBOPRIEST’s voice and various sound effects were created by sound designer Jim Coursey.
Its components include children’s toy claws, silver lamé, ductwork, an iPod, and a harness that enables Luna to operate the costume from inside while riding piggyback on Ellingson.
Selene pilots ROBOPRIEST from a harness attached to my back. The harness is called The Piggyback Rider and is really just a backpack strap with a bar that runs along the bottom. This allowed Selene to comfortably stand on my back and easily hop off if needed. The top of ROBOPRIEST is equipped with a hatch from which Selene can address her minions. The inside of the hatch is decorated with a collage of nudie magazine clippings (NSFW), something that I thought appropriate for the insides of a repressed robot’s head at the time, although it may just have been all the hot-glue fumes getting to me.
Ellingson’s site has sound clips and a video of ROBOPRIEST announcing himself, and there are lots of photos on Flickr showing the build process.
Clive Thompson writes about the newest innovation in junk mail marketing: handwriting robots. That’s right, robots can write letters in longhand with real ballpoint pens and you can’t really tell unless you know what to look for. Here’s a demonstration:
But it turns out that marketers are working diligently to develop forms of mass-generated mail that appear to have been patiently and lovingly hand-written by actual humans. They’re using handwriting robots that wield real pens on paper. These machines cost up to five figures, but produce letters that seem far more “human”. (You can see one of the robots in action in the video adjacent.) This type of robot is likely what penned the address on the junk-mail envelope that fooled me. I saw ink on paper, subconsciously intuited that it had come from a human (because hey, no laser-printing!) and opened it.
Handwriting, it seems, is the next Turing Test.
There is also a company that provides handwritten letters for sales professionals, and I don’t know which is more unusual, that or the robot letters.
Artificial intelligence is already well on its way to making “good jobs” obsolete: many paralegals, physicians, and even — ironically — computer programmers are poised to be replaced by robots. As technology continues to accelerate and machines begin taking care of themselves, fewer jobs will be necessary. Unless we radically reassess the fundamentals of how our economy and politics work, this transition could create massive unemployment and inequality as well as the implosion of the economy itself.
Susan Schneider, professor of philosophy at UConn, is among those researchers and scientists who believe that the first alien beings we encounter will be “postbiological in nature”…aka robots.
“There’s an important distinction here from just ‘artificial intelligence’,” Schneider told me. “I’m not saying that we’re going to be running into IBM processors in outer space. In all likelihood, this intelligence will be way more sophisticated than anything humans can understand.”
The reason for all this has to do, primarily, with timescales. For starters, when it comes to alien intelligence, there’s what Schneider calls the “short window observation” — the notion that, by the time any society learns to transmit radio signals, they’re probably a hop-skip away from upgrading their own biology. It’s a twist on the belief popularized by Ray Kurzweil that humanity’s own post-biological future is near at hand.
“As soon as a civilization invents radio, they’re within fifty years of computers, then, probably, only another fifty to a hundred years from inventing AI,” Shostak said. “At that point, soft, squishy brains become an outdated model.”
To use Elon Musk’s language, biological beings would be a “biological boot loader for digital superintelligence”. Schneider’s full paper on the topic is here: Alien Minds.
Amazon’s newest fulfillment center features hundreds of robots. Watch them work in an intricate ballet of customer service through increased speed of delivery and greater local selection. Also, ROBOTS!
Neill Blomkamp (District 9, Elysium) is coming out with a new film in the spring, Chappie. Chappie is a robot who learns how to feel and think for himself. According to Entertainment Weekly, two of the movie’s leads are Ninja and Yo-Landi Vi$$er of Die Antwoord, who play a pair of criminals who robotnap Chappie.
Discussions of AI are particularly hot right now (e.g. see Musk and Bostrom) and filmmakers are using the opportunity to explore AI in film, as in Her, Ex Machina, and now Chappie.
Blomkamp, with his South African roots, puts a discriminatory spin on AI in Chappie, which is consistent with his previous work. If robots can think and feel for themselves, what sorts of rights and freedoms are they due in our society? Because right now, they don’t have any…computers and robots do humanity’s bidding without any compensation or thought to their well-being. Because that’s an absurd concept, right? Who cares how my Macbook Air feels about me using it to write this post? But imagine a future robot that can feel and think as well as (or, likely, much much faster than) a human…what might it think about that? What might it think about being called “it”? What might it decide to do about that? Perhaps superintelligent emotional robots won’t have human feelings or motivations, but in some ways that’s even scarier.
The whole thing can be scary to think about because so much is unknown. SETI and the hunt for habitable exoplanets are admirable scientific endeavors, but humans have already discovered alien life here on Earth: mechanical computers. Boole, Lovelace, Babbage, von Neumann, and many others contributed to the invention of computing, and those machines, hardware and software alike, are now evolving much faster than our human bodies (hardware) and culture (software). Soon enough, perhaps not for 20-30 years still but soon, there will be machines among us that will be, essentially, incredibly advanced alien beings. What will they think of humans? And what will they do about it? Fun to think about now perhaps, but this issue will be increasingly important in the future.
The directorial debut of Alex Garland, screenwriter of Sunshine and 28 Days Later, looks interesting.
Ex Machina is an intense psychological thriller, played out in a love triangle between two men and a beautiful robot girl. It explores big ideas about the nature of consciousness, emotion, sexuality, truth and lies.
Rex Sorgatz wonders what sort of robots we’ll build, R2-D2s or C-3POs.
R2-D2 excels in areas where humans are deficient: deep computation, endurance in extreme conditions, and selfless consciousness. R2-D2 is a computer that compensates for human deficiencies — it shines where humans fail.
C-3PO is the personification of the selfish human — cloying, rules-bound, and despotic. (Don’t forget, C-3PO let Ewoks worship him!) C-3PO is a factotum for human vanity — it engenders the worst human characteristics.
I love the chart he did for the piece, characterizing 3PO’s D&D alignment as lawful evil and his politics as Randian.
Automata is a film directed by Gabe Ibáñez in which robots become sentient and…do something. Not sure what…I hope it’s not revolt and try to take over the world because zzzz… But this movie looks good so here’s hoping.
Jacq Vaucan, an insurance agent of ROC robotics corporation, routinely investigates cases of manipulated robots. What he discovers will have profound consequences for the future of humanity.
Automata will be available in theaters and VOD on Oct 10. (via devour)
This video combines two thoughts to reach an alarming conclusion: “Technology gets better, cheaper, and faster at a rate biology can’t match” + “Economics always wins” = “Automation is inevitable.”
That’s why it’s important to emphasize again this stuff isn’t science fiction. The robots are here right now. There is a terrifying amount of working automation in labs and warehouses that is proof of concept.
We have been through economic revolutions before, but the robot revolution is different.
Horses aren’t unemployed now because they got lazy as a species, they’re unemployable. There’s little work a horse can do that pays for its housing and hay.
And many bright, perfectly capable humans will find themselves the new horse: unemployable through no fault of their own.
I’ve had this page of misbehaving robot animated GIFs up in a tab for a few days now and every time it pops up on my screen, I watch all of them and then I laugh. That’s it. Instant fun. The garbage truck is my favorite, but what gets me laughing the most is how exuberantly the ketchup squirting robot sprays its payload onto that hamburger bun.
SRI International and DARPA are making little tiny robots (some are way smaller than a penny) that can actually manufacture products.
They can move so fast! And that shot of dozens of them moving in a synchronized fashion! Perhaps Skynet will actually manifest itself not as human-sized killing machines but as swarms of trillions of microscopic nanobots, a la this episode of Star Trek: TNG. (via @themexican)
It’s not up on their web site or YouTube yet, but I’m pretty sure this is a prototype of a new dancing robot built by Boston Dynamics, makers of the Big Dog, Cheetah, and Petman robots.
Looks similar to Atlas or Petman, but way more advanced…how did they pack all of the circuitry and power supplies into such a small yet realistic-looking housing? (via devour)
I wasn’t expecting a whole lot from this video of a robotic gymnast doing a routine on the high bar, but holy cow! I audibly gasped at the 33-second mark and again at 57 seconds.
Looks like a home-built rig, just some guy in his garage. The robot has learned some new tricks since that video was made. Here’s a quintuple backflip landing:
A double twist that it didn’t quite land:
And it does floor exercises as well…here’s a double back handspring: