The MIT Technology Review has a detailed summary of Toyota’s most recent offerings in automated automobiles. The Lexus LS will have a number of sensors for automated safety, including orientation-aware GPS, a 360-degree laser scanner (LIDAR), millimeter-wave radar, several accelerometers, and optical/stereo cameras. Exactly the equipment you would expect in a self-driving car. But Toyota is not marketing this as an automated or “driverless” vehicle:
Toyota is also working to allow a car to understand road and traffic conditions much as a human driver would—for example, by observing traffic signals. “That may, over time, evolve into a fully autonomous car,” said Templin. The research is motivated by a desire to “eliminate future traffic-related fatalities and injuries.”
[D]espite signaling that Toyota’s research was leading it in that direction—Templin added that he didn’t see “autonomous” as being synonymous with “driverless.” Even as successively advanced autonomous features are introduced to Toyota and Lexus vehicles, he said, humans would remain in control. A future car could be considered a “skilled, intelligent, and attentive copilot.”
I think it’s great that Toyota is putting the emphasis on the safety of all cars, albeit starting with an expensive model. Such advanced sensing equipment and software may lead to a standard for safety applications and driver assistance. Then, more robust automation can be added to the same platform. Reducing crashes and human injury should be priority number one.
The title of this post is a quote from the Baltimore Sun, which was reporting on the Rockville Town Council’s 1908 decision to have its bailiff stop—by force—cars exceeding the 6-miles-per-hour speed limit. This and so many other delightful anecdotes are included in Aaron Wiener’s (@aaronwiener) reporting in the Washington City Paper. Rather than taking an exasperated tone, it is a well-researched piece and an example of good journalism at work:
There is No War on Cars
I love the bit about the jerk of an AAA lobbyist who calls his opponent, an urban planner, a “nerd” and a “little ninny.”
Over the past couple of weeks, there have been a number of other interesting articles on cars and transportation:
- Your Car Is the Killer App for Google Glass
Using the accelerometer in Google Glass to detect driver drowsiness is a clever idea.
- A Better Way to Grade City Transportation Systems
- The Economist on Driverless Cars
- The Cure for Distracted Driving
- 3 Creative Ways To Visualize Urban Public Transportation
(More coverage of the Urban Data Challenge, including our Frustration Index.)
- How Apple is Taking Over Your Car
(Apple is not! This article is far too speculative. Related: Apple’s In-Car Siri Integration Hits a Roadblock and Using Siri when driving may be as dangerous as texting)
- This Is Why Everyone on BART Hates You
Last month, I worked on a project for the 2013 Urban Data Challenge, a competition to visualize a week’s worth of transportation data for three cities: San Francisco, Zurich, and Geneva. (My team won second place in the challenge, so I’ll hopefully be visiting Switzerland some time soon!)
At the outset of the competition, I was fortunate to meet with some talented people with experience in civil engineering, data science, architecture, programming, and visual design. We considered a number of different ideas, but finally decided upon a way to quantify, rank, and then visualize the frustration of transit users in each city.
Our final application visualizes frustration on a Monday in each city. At different times of day, you can view frustration in terms of speed, the crowdedness or capacity of vehicles, and delay. Then, if you zoom in on one transit stop, a total frustration grade is calculated for that stop. There are a variety of other factors that I would have liked to consider, which are summarized along with our project methodology.
The application is here: http://frustration-index.herokuapp.com/
Incidentally, frustration is a word that summarizes my experience wrangling with the raw transit data for these cities. The tasks of formatting, normalizing, and analyzing the data and formulating output required much more effort than I had anticipated. The dataset for San Francisco was particularly challenging. I spent a number of restless nights trying to correlate real data from October 2012 with the Google Transit Feed for the same time period. With the exception of a few hundred outlying buses, I was finally able to sync the Google schedules with the raw dataset. This was tremendously rewarding, but there were still countless other issues that our team had to figure out.
I’ll be writing a separate post about findings and trends in the data, as well as remaining questions, issues, ideas, et cetera that I would like to return to. For instance, I would love to apply theories of computer networking to this data, such as vehicle queuing and routing and channel flow, capacity, congestion, and reliability.
But enough banter, here’s a video.
The following is a list of noteworthy things I read in 2012. Since I’m working on an Info Science degree, most of these are information and computer science related.
- The Information: A History, a Theory, a Flood by James Gleick
This book is so good that I read it closely again this year and used many of the ideas and people in it as a jumping-off point for other reading.
- I Am a Strange Loop by Douglas Hofstadter
I tried my best to read Gödel, Escher, Bach a few years back, which introduced me to so many exciting and perplexing ideas. Likewise, I Am a Strange Loop covers too wide an array of topics and profound insights to summarize. The true feat of this book is to explain the way that consciousness and self-reference feel in our everyday lives. There are many aha moments in this book and even passages that may make you cry.
- The Signal and The Noise by Nate Silver
Getting into the data science of polls was a surprisingly refreshing way to reduce my day-to-day anxiety in advance of the presidential election.
- Information: The New Language of Science by Hans Christian Von Baeyer
Like Charles Seife and James Gleick, Von Baeyer goes into the history of information theory, but is more concerned with the implications of info theory for modern science, such as genetics and physics. You might say that bits (the smallest unit of data) are the atoms of the 21st century.
- Reinventing the Automobile
This book is a great summary of technologies (mobile networking, smart grids, the electric car) that are converging to revolutionize automobiles and thereby urban life.
- The Design of Everyday Things by Donald Norman
A classic. I gave it a critical read in a Human Computer Interaction design class during the winter.
- Debt: The First 5,000 Years by David Graeber
I re-read parts of this fantastic cultural and economic history of debt.
- Facts are Sacred: The Power of Data by Simon Rogers
A short book about data journalism from the Guardian U.K.
- Interface Culture by Steven Johnson
Just about everything in our daily lives has an interface, not just computing devices.
I didn’t have time for these and so many other books, but I hope to get to some of them in the new year:
- The Idea Factory: Bell Labs and the Great Age of American Innovation
- Turing’s Cathedral: The Origins of the Digital Universe
- We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency
- Jill Lepore on Trayvon Martin and Guns in U.S.
- Contrast blog on The Language of Interfaces
- Bradley Voytek on Automated science, deep data and the paradox of information
- Novelist Tim Parks on Finishing Books
- James Surowiecki’s Brief History on Money (A great companion to The Debt above.)
- John Dickerson on How Obama Won
- Sean Gallagher on How the Obama Tech Team Left Romney in the Dust
- Mark Singer on The Marathon Fraud, Kip Litton
- Jack Shafer on Jonah Lehrer’s Recycling Business
- Bianca Bosker on how Sebastian Thrun Wants to Change the World
- Klint Finley on The Rise and Fall of the Cisco Empire
A few months ago I took a course on how to program a robotic automobile, Artificial Intelligence for Robotics. This course is taught by Sebastian Thrun, who heads the Google X Lab and the Google Driverless Car Project. Thrun also founded Udacity, the educational organization that offers this and many other free computer science and math courses.
Recently, I decided to port the code I wrote to Java and use the excellent Processing visualization library to illustrate some of the core concepts. My code is hosted on GitHub: https://github.com/stevepepple/car-robot. In a separate post, I’ll consider how this code would work in a real car. In short, a real robotic car (like Stanford’s award-winning cars Junior and Stanley) uses a map of its environment and the Global Positioning System; optical cameras, LIDAR, radar, and inertial sensors; and, of course, a central on-board computer that gathers data from the car’s sensors and controls steering, acceleration, and braking.
The first step in programming a robotic car is locating the car within its environment. This is called localization. You can never know for certain where the car is. Even the best global positioning system will have a margin of error of a few meters, which is not acceptable on the road. Yet there are several localization techniques that provide a strong belief, statistically speaking, of where the robot is located.
The following histogram (Histogram.java in my code) shows a simple example of localization in a one-dimensional world. Each step in the graph represents the robot’s belief about its location. Before the robot has moved or tried to determine its location, there is a uniform probability distribution. The robot then senses its location based upon a map of landmarks or other real-time observations. The belief for each part of the graph is normalized so that the total probability across steps sums to one.
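The sense-and-move cycle behind that histogram fits in a few lines. Here is a minimal Python sketch (my actual implementation is the Java/Processing code linked above); the world map, sensor probabilities, and noise-free motion are all simplifying assumptions:

```python
# A minimal 1-D histogram localization cycle. The landmark map and
# sensor probabilities below are illustrative values, not from my project.
world = ['green', 'red', 'red', 'green', 'green']  # landmark colors
p = [0.2] * 5  # uniform prior: the robot has no idea where it is

P_HIT, P_MISS = 0.6, 0.2  # assumed sensor model

def sense(belief, measurement):
    """Multiply each cell by the measurement likelihood, then normalize."""
    q = [prob * (P_HIT if world[i] == measurement else P_MISS)
         for i, prob in enumerate(belief)]
    total = sum(q)
    return [x / total for x in q]

def move(belief, step):
    """Cyclic shift of the belief (noise-free motion, for brevity)."""
    return [belief[(i - step) % len(belief)] for i in range(len(belief))]

belief = move(sense(p, 'red'), 1)  # sense 'red', then move one cell right
```

After sensing, the belief concentrates on the cells matching the measurement; after moving, the whole distribution shifts with the robot while still summing to one.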
Filters and Bayes Theorem
Having a robot deterministically navigate a maze is rather simple; the real challenge presented in this course is how a robot can deal with uncertainty. There are several approaches that apply Bayes’ rule and other probabilistic methods to help the robot determine the location of landmarks, obstacles, other vehicles, and itself. A Kalman filter is one such method: it allows a robot to continuously track the position of objects in its environment. The Kalman filter represents measurements as two-dimensional Gaussian distributions and uses these values to estimate the current state of the environment and predict how surrounding objects will move.
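In one dimension, the two steps of the Kalman filter reduce to a pair of small functions. This is only a sketch of the idea, not the full two-dimensional matrix form described above, and the example values are arbitrary:

```python
def kalman_update(mean, var, z, z_var):
    """Measurement step: fuse measurement z (variance z_var) into the belief.
    The result is always more certain (smaller variance) than either input."""
    new_mean = (z_var * mean + var * z) / (var + z_var)
    new_var = 1.0 / (1.0 / var + 1.0 / z_var)
    return new_mean, new_var

def kalman_predict(mean, var, motion, motion_var):
    """Motion step: shift the belief; uncertainty grows by the motion noise."""
    return mean + motion, var + motion_var

# One full cycle with arbitrary example values:
mean, var = kalman_update(10.0, 8.0, 12.0, 8.0)  # belief narrows
mean, var = kalman_predict(mean, var, 1.0, 2.0)  # belief shifts and widens
```

The interplay of the two steps is the whole trick: measuring shrinks the variance, moving grows it, and the filter settles into a running estimate in between.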
Another useful technique, which is a central part of my final implementation, is the particle filter. The robot uses a particle filter to determine its location and proximity to obstacles. A particle filter creates thousands of copies of the robot and randomly distributes these copies over the grid. In essence, each particle travels along with the actual robot, making the same measurements and calculations. Each particle is given an importance weight based on how well it matches the robot’s measurements of landmarks and obstacles, and the robot takes a weighted average of all of the particles. Each time the robot moves, the particles are resampled in proportion to their weights and the set is winnowed down; the average of the resampled set gives the robot a much improved estimate of its location.
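A rough Python sketch of the weighting and resampling steps, using a hypothetical one-dimensional world with a single landmark. The "resampling wheel" is the technique taught in the course; the landmark position and noise value are made up for illustration:

```python
import math
import random

# Hypothetical 1-D world: particles are positions on a line,
# with one landmark at x = 5 and Gaussian sensor noise.
LANDMARK = 5.0
SENSE_NOISE = 1.0

def gaussian(mu, sigma, x):
    """Probability density of x under a Gaussian with mean mu, std sigma."""
    return (math.exp(-((mu - x) ** 2) / (2 * sigma ** 2))
            / math.sqrt(2 * math.pi * sigma ** 2))

def weight(particle, measured_dist):
    """Importance weight: how well this particle explains the measurement."""
    return gaussian(abs(LANDMARK - particle), SENSE_NOISE, measured_dist)

def resample(particles, weights):
    """Resampling wheel: draw a new set in proportion to importance weights."""
    n = len(particles)
    index = random.randrange(n)
    beta, max_w = 0.0, max(weights)
    new_particles = []
    for _ in range(n):
        beta += random.uniform(0, 2 * max_w)
        while weights[index] < beta:
            beta -= weights[index]
            index = (index + 1) % n
        new_particles.append(particles[index])
    return new_particles
```

Particles that explain the measurement well tend to survive resampling many times over, so the cloud collapses onto the robot's true position after a few cycles.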
Algorithms that are often used for the routing of network data are also well-suited for creating a robot’s driving plan.
A shortest-path algorithm will spider over all of the available paths and calculate the least number of steps required to arrive at the goal. Other algorithms, such as Dijkstra’s algorithm or A* search, use heuristics and/or policies at each point to decide which direction to search. A* thereby eliminates paths that lead further away from the goal. Once A* finds the shortest route, this path can be reduced to a single continuous path. (In my A* implementation, I kept an updated policy and a history of the actions taken.)
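A compact grid-based A* can be sketched in Python. The grid, the Manhattan-distance heuristic, and the unit step cost are illustrative assumptions here, not my actual Java implementation:

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 0/1 grid (1 = obstacle), Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    frontier = [(h(start), 0, start)]   # (f = g + h, g, cell)
    came_from = {start: None}           # history of actions taken
    cost = {start: 0}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]):
                if nxt not in cost or g + 1 < cost[nxt]:
                    cost[nxt] = g + 1
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    # Walk the history backwards to reconstruct the single continuous path.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]
```

Because the heuristic never overestimates the remaining distance, the first time A* pops the goal it has found a shortest route.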
However, this path may be too tight for a car to navigate, so the robot needs to calculate a smoother path. The model used for my car simulation is quite simplistic: the car can set its speed, steering angle, and heading direction. It travels at speeds between 20 and 40 mph and can steer left or right at up to a 45-degree angle. If the path is not smooth enough, the robot cannot stay on track and tends to oscillate back and forth across the desired course. For the simulation, I added noise to the car’s movements and sensors. It’s quite fun to play with these values and see how the robot behaves under different conditions.
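The course smooths the path by gradient descent: each interior point is pulled toward the average of its neighbors, while a second term keeps it close to the original path. A Python sketch, with commonly used weights as assumed defaults:

```python
def smooth(path, weight_data=0.5, weight_smooth=0.1, tolerance=1e-6):
    """Gradient-descent path smoothing. Endpoints stay fixed; interior
    points relax toward their neighbors until the updates become tiny."""
    new = [list(point) for point in path]  # copy so the original survives
    change = tolerance
    while change >= tolerance:
        change = 0.0
        for i in range(1, len(path) - 1):
            for d in range(len(path[0])):
                old = new[i][d]
                # Stay close to the original point...
                new[i][d] += weight_data * (path[i][d] - new[i][d])
                # ...but move toward the average of the two neighbors.
                new[i][d] += weight_smooth * (new[i - 1][d] + new[i + 1][d]
                                              - 2.0 * new[i][d])
                change += abs(old - new[i][d])
    return new
```

On a right-angle turn, the corner point gets pulled inward, rounding off exactly the kind of kink that makes the simulated car oscillate.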
The remaining part of the robot that I must discuss is the PID (proportional-integral-derivative) controller.
The controller uses the smooth path found by the planning algorithms to move. As the car moves, the controller calculates the error between the path and the car’s estimated location. The proportional component steers in proportion to this error, adjusting according to the current deviation from the path. The differential component prevents the proportional controller from over-steering and overshooting the smooth path by easing those steering adjustments. The integral component accounts for systematic “bias” over time: for instance, a real car’s hardware degrades, so it may not steer as well as it used to, or its sensors may become more prone to error. The integral term helps to correct for this sustained error.
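Putting the three terms together, a steering controller can be sketched as follows. The gain values in the usage example are illustrative only (in practice they would be tuned for the particular car model):

```python
class PID:
    """Steering = -Kp*error - Kd*d(error) - Ki*sum(error), where the
    error is the car's cross-track distance from the planned path."""

    def __init__(self, kp, kd, ki):
        self.kp, self.kd, self.ki = kp, kd, ki
        self.prev_cte = None  # for the differential term
        self.int_cte = 0.0    # running sum for the integral term

    def steer(self, cte):
        # Differential term: change in error since the last step.
        d_cte = 0.0 if self.prev_cte is None else cte - self.prev_cte
        self.prev_cte = cte
        # Integral term: accumulated error, which captures sustained bias.
        self.int_cte += cte
        return -self.kp * cte - self.kd * d_cte - self.ki * self.int_cte

# Illustrative gains; each call returns a steering correction.
pid = PID(0.2, 3.0, 0.004)
correction = pid.steer(1.0)  # car is 1.0 unit off the path
```

With only the proportional term, the car overshoots and oscillates around the path; the differential term damps that oscillation, and the integral term slowly steers out any persistent drift.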
All this said, here’s a video of my robot in action!
There have been a number of good articles on artificial intelligence and robotics within the past week.
- Gary Marcus (@GaryMarcus) on Moral Machines in the New Yorker. Marcus considers the current inability of robots to make moral decisions. Can a well-designed system eliminate moral trade-offs? Or must robots be designed with ethical subroutines?
- Simon de la Rouviere (@simondlr) has a thoughtful post on the intersecting advancements in robotic and electric vehicles.
- Phil Patton covers Futuristic Highway Drones at LA Auto Show Design Challenge, where several teams prototyped ideas for autonomous police vehicles.
- John Markoff (@Markoff) on Deep Learning in the New York Times.