The news that Google is running a fleet of 7 autonomous cars is making its way around the internets this week. The cars use radar, LIDAR, image recognition, some sort of (gyroscopic?) position estimation, and I’d assume GPS as well. Just as with us humans, it’s going to require a radically multi-modal approach to build robots that can truly sense their place in the world, as well as clever algorithms to integrate the data. This is front-page news in the New York Times, people; when The Grey Lady picks up a tech story, you know the tech it’s reporting on is going mainstream. We’re even at the point where we can start arguing about whether or not this is legal.
And now at Freie Universität in Berlin, they’re taking it one step further with autonomous taxis that can be called from an iPad. It doesn’t seem like this is being widely deployed yet, but it’s only a matter of time. This is the sort of real application I’m looking for. The geek in me loves to see projects like the one at Google solving the interesting technical AI challenges, but all this really starts to matter when we have a concrete vision of the technology’s effect on the real world. My estimate for how long it’s going to take for me to get a self-driving car has been revised downward.
When discussing this with someone the other day, I was reminded of a part in Vernor Vinge’s excellent book Rainbows End. If you haven’t read it, there’s a version online here. I couldn’t find the exact passage, but there’s a part where one of the characters is looking out into the road and sees two separate parts of the street: one for high-speed, efficient, autonomous cars, and another for people who want to drive themselves. Of course, the speed limit on the human-driven section is much lower. I’m sure we’ll get there eventually, but we’ll have some interesting times to go through first, both technologically and legally. There have been plenty of milestones for robotic cars in the past year, but I’m still waiting for one of the more unpleasant ones: the first person hurt or killed by a computer-driven car.
Sorry to the small (and possibly nonexistent) number of you who regularly come here. I haven’t gotten bored of blogging or run out of ideas, but now that my initial enthusiasm has waned, I’m taking a more relaxed approach to blogging. I will probably never be a prolific poster on the order of Mike Anissimov or Tyler Cowen, or even Kyle Munkittrick (who was an inspiration of sorts for starting this). Maybe some day, but the copious free time I thought I had is, well, less copious than I thought. I’ve been writing a few posts, which should trickle in as time goes by, but I’ve decided to draft them more carefully and review them a bit more before posting.
But today, a short post about Robocars.
Anyone who knows me well has probably heard me talk about how soon our cars will be driving themselves. I don’t claim any credit for the idea; science fiction writers have been talking about it for a long time, probably since cars were invented. But I still hear people say, “I wouldn’t trust a robot to drive me around, what if it goes crazy?” To which I usually respond, “Which is more likely: your robot driver going crazy, or your human driver?” That, of course, ignores the incredibly low probability of an AI driver getting drunk, falling asleep at the wheel, or getting distracted answering a phone or changing the radio station. But this argument usually ends with, “Whatever, it’s going to be a long time until that happens anyway.”
Well, it isn’t. Robotic cars are coming sooner than you think. There are already cars that park themselves. This year, a car is coming out that brakes to avoid hitting pedestrians. Some nut in California has even built a system for a Prius that drives itself. It’s only a matter of time before we all have these. Seriously, it’s not just some lone crazy (me) saying this; the vice president of R&D at GM says we’ll have fully autonomous cars by 2020.
And this will be a good thing. Usually when left-leaning types hear this argument, their next fear (after safety) is that this will mean a postponement of the inevitable death of the car. Our wonderful public transit future, where everyone rides high-speed rail or bikes, is slipping away. I’m with them on the bikes (and e-bikes are going to make this even better), but trains of all types are an expensive waste of energy, and are very difficult to move or reconfigure as demographics change. Buses are a much better option (although I personally find them to be overcrowded, nausea-inducing death traps), and they too would benefit from autonomous control. If we can all have energy-efficient robot taxis driving us around, rural citizens included, why do we need trains OR buses, except to satisfy some communitarian dream of everyone travelling together?
Since I’m all about making falsifiable predictions to track my understanding of where the world is going, here’s today’s: I’ll probably have to drive the first car I buy, but the second (or maybe the third) will be able to drive itself.
If you need more convincing of the utility and feasibility of this technology, see Brad Templeton’s presentation at Foresight 2010. The robot cars are coming, and when they get here we’ll all be better off for it.
A lot of what we call “fun” seems to be based on fairly simple principles. Ok, so there’s still a fair bit of complexity there. But after I read this interview with AI designer Jürgen Schmidhuber and watched his excellent presentation at this year’s Singularity Summit, I’ve started to view a surprising number of things I do through a different lens. There are all sorts of deep and strange ideas in that interview, but the one that stuck with me the longest is the notion that much of what we consider deeply and fundamentally human is reducible to our brains rewarding us for gathering and efficiently compressing information.
I’ve been aware for a while that many video games I play, particularly RPGs, are little more than cheap hacks of the dopamine system my brain has evolved to encourage me to do things. It’s just gambling without the high monetary cost, or cigarettes without the lung cancer.
But Schmidhuber is making an even more bizarre claim, and making it in a very compelling way. Essentially, he’s saying that many of our drives are based simply on gathering and compressing information. Compression here means something a little different from what your computer does when it compresses a .zip or .rar file, but it’s the same basic idea: removing unnecessary information to make a given thing fit in a smaller box. Computers do it by finding redundant sequences of bits and representing them in more efficient ways, and humans do it by making connections and forming “understanding”. There’s a diverse array of examples of this discussed in the interview. Music is appealing to us because we can recognize novel patterns that are somewhat, but not too, familiar to us, and music that is either too formulaic or too discordant is unappealing. Art is interesting because we can find compressible visual or cultural themes. Dancing is much the same as music: repetitive yet novel sequences that initially seem bizarre and random but show deep patterns. We laugh at jokes because we make interesting and surprising connections between various semantic pieces. The list goes on, and Schmidhuber makes the case for the truth of this better than I can, so if you don’t understand, go check out that video. You can find exceptions and complications that culture and emotions have introduced to all of these things, but it really is remarkable how often that basic principle of novel compression shows up.
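If you want to see the computer side of this for yourself, here’s a toy sketch (my example, not Schmidhuber’s) using Python’s standard zlib module: a patterned message shrinks dramatically because the compressor finds and exploits the repeated sequences, while random bytes, having no pattern to discover, don’t shrink at all.

```python
import random
import zlib

# A highly patterned message: the compressor finds the repeated
# sequences of bytes and represents them more efficiently.
patterned = b"la la la la " * 100

# Pseudo-random bytes of the same length: no redundancy to exploit,
# so compression buys us nothing (it even adds a little overhead).
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(len(patterned)))

print("patterned:", len(patterned), "->", len(zlib.compress(patterned)))
print("noisy:    ", len(noisy), "->", len(zlib.compress(noisy)))
```

The patterned input compresses to a tiny fraction of its size, while the noisy one stays roughly as big as it started. In Schmidhuber’s terms, the interesting stimuli are the ones in between: not already compressed away as boring, not incompressible noise, but patterned in a way you haven’t discovered yet.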
Schmidhuber’s theory has interesting implications for what it means to be the complex biological robots we call Homo sapiens. What is the first objection people raise when the question of machines being “conscious” or “intelligent” comes up? It’s usually something along the lines of, “Well, they might be fancy calculators, but they’ll never [be creative, appreciate beauty, laugh at our jokes, etc.].” There are all sorts of things wrong with that argument, which I’ll probably have to write a separate post on sometime. Suffice it to say that even if you believe those things are deeply weird and complicated, you have no reason to doubt that a sufficiently powerful and well-programmed computer would be able to do them (unless you believe the brain runs on magic). If, however, many of those precious, deeply complex human characteristics are really fairly simple processes, what does it imply about us? As sympathetic as I am to the notion that we are just complicated computers of one type or another, I was somewhat skeptical at first. But ever since I read through that interview, I’m noticing more and more often how true it is.
I’ll leave this as an exercise for the reader: now that you’ve been exposed to this idea, start looking at your daily activities through the lens of information compression. I think you’ll be surprised at how often it fits. Not so high and mighty now, eh, Mr. Deeply Mysterious Human?