And you thought
your Prius was the pinnacle of innovation.
Step aside, green car, for the all-new self-driving car! With innovations in Smart technology
covering everything from your cellphone to your house, why should automobiles
be off the table? Innovators like Google, General Motors, Apple, and
Mercedes-Benz are all climbing behind the wheel for one last human-driven ride.
But are they safe?
Mostly, though there are still some kinks to work out. So far only a
couple of accidents have been reported, and all of them were ultimately determined to be
the result of human error, not the Google car. So how do they work?
Are you ready for
this?
Frickin’ laser
beams.
That’s right. A
series of laser beams is used to guide the car through its environment without
the aid of a human driver. It’s pretty incredible when you think about it, and
coupled with modern safety features already in place on most vehicles, in less
than ten years motorists may only have to tell their car to head to the “Dry
Cleaners” and stop by “McDonalds” on the way “Home” in order to navigate
their busy schedule.
This could also
lead to faster posted speeds in the future, meaning less congestion,
shorter commute times, and most importantly, fewer motorist fatalities.
Everything here sounds good on paper, but what about the science? What is
LIDAR? How do these Smart cars see their environment and stop in time for
things like children and deer? Are they safe? What will they look like? How
well have these things been tested?
We’re going to
answer all those questions and more, including a little background on the history
of LIDAR. Come with me at To Infinity and…In Theory today as we discuss whether
or not the autonomous automobile market is destined to crash and burn.
Before we begin,
let’s start with some basic statistics so you can see why this technology is so
important.
Highway To The Danger Zone
In 2012, in the United States alone, more than 34,000 motorists were killed in vehicular
accidents. A further 4,000+ pedestrians were killed, thousands more were
injured, and 700 bicyclists were killed. That’s a lot of human-related
motorcade carnage if you ask me.
Those numbers climb far higher as you branch out into the rest of the world, as you’d expect;
the United States is but a fraction of the world’s population. The majority of
those deaths resulted from human and mechanical error. I’m sure there were
acts of nature that could not have been avoided, just as I’m sure even a
Google car can get annihilated by a sudden sinkhole, but those numbers could
drop significantly.
During its years of testing autonomous vehicles, Google alone has logged over 700,000
miles on its cars. With over a dozen on the road in four states, Nevada,
California, Florida, and Michigan, there have been exactly zero deaths
reported.
There have, however,
been two reported accidents. One was a rear-end collision, the fault of a
human driver in a non-Google car, and the other, hilariously, happened in front of the
Google headquarters building in California while the car was being manually driven
(KITT probably laughed his carburetor off). But to date there
have been no reported accidents caused by the software that makes the
car autonomous.
So just how does it
sense its surroundings and make intelligent decisions based on the information
it gathers?
It’s not magic.
It’s science.
Nice Laser
Ah yes, the famous Albert Einstein, and the
famous photo of him with his tongue out. If you’ve ever wondered about the story
behind that photo, ponder no more; it’s quite hilarious, actually. It was his
birthday party, and all night long the guests had been asking to pose with
Albert for pictures, or to make him smile for a photo. Albert grew tired
of smiling, and as he was leaving, a guest pleaded for one last photo of the
famous physicist. Not wanting to deny his public, but horribly bored with
smiling, he turned to the camera and stuck his tongue out instead. The
photographer happened to also be a journalist, and this quickly became one of
the most famous photographs of the brilliant man.
And you thought he was just batty.
But anyway, I’ve digressed terribly far. In
1917, taking Max Planck’s radiation work in another
direction, Einstein unintentionally laid the groundwork for what would later
become the laser and the maser.
Next came Rudolf Ladenburg, who confirmed the
existence of the phenomena of stimulated emission and negative absorption.
Over a decade later, Valentin Fabrikant predicted the use of stimulated
emission to amplify short waves.
And then the cycle repeated
itself: it took another ten years before Willis Lamb and R. C. Retherford
found apparent stimulated emission in hydrogen spectra and effected the first
demonstration of stimulated emission. Finally, in 1957, Charles Townes and
Arthur Schawlow, then at Bell Labs, began a serious study of the infrared
laser.
In May of 1960, Theodore H. Maiman fired the first laser in
Malibu, California: the first successful demonstration of the technology, and one
that set in motion a revolution the entire world over. From the Iranian physicist
Ali Javan, who with William R. Bennett and Donald Herriott constructed the first
gas laser (helium-neon, capable of continuous operation in the infrared), to
recent advances ranging from lasers that aid in delicate surgeries to lasers
that just annoy your cat for a funny YouTube video, the technology has come a long way.
Lasers have a rich history, including
lawsuits, besmirching, and cutthroat research. If you want to read more about
the history of this fascinating technology head over to the Wikipedia page by
clicking here. For time’s sake
we’re going to move on to explaining just how lasers play a role in keeping you
from crashing in your new Google-Benz.
Mixed (Hand) Signals
There are many kinds of lasers out there now,
but for the purposes of this article we’re going to be talking about a specific
kind. LIDAR, short for Light Detection and Ranging, is essentially a combination of imaging lasers
and radar technology, hence the name. The unit sends out a laser pulse (infrared or even visible light),
the pulse reflects off the surface of an object and bounces back to
the source, and the time that round trip takes tells you how far away the object is:
distance is just the speed of light times the travel time, halved for the out-and-back
trip. A computer then interprets these reflections and processes them
into an image.
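If you like seeing the arithmetic, here’s a minimal Python sketch of that time-of-flight idea. The pulse timing below is a made-up number for illustration; a real unit does this millions of times per second.

```python
# A minimal sketch of LIDAR's time-of-flight ranging, with a made-up
# pulse timing for illustration.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to a target, given the round-trip time of one laser pulse.

    The pulse travels out and back, so we halve the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that comes back after 200 nanoseconds hit something ~30 m away.
print(distance_from_echo(200e-9))  # ~29.98 meters
```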
The LIDAR system of laser technology has been
around since the 1960s, but it wasn’t until modern components
made it possible to scale the tech down in size and bulk that it became practical
for uses like this.
In the case of the Google car, it’s equipped
with a Velodyne rotating LIDAR on the roof. This unit has 64 independent
laser beams that spin very fast above the vehicle. As these beams bounce off
the surrounding environment and return to the car, software in the car’s
computer compares the imaging with detailed real-world maps of the geographical
landscape. It then uses the differences to make adjustments accordingly for
traffic, laws, construction, and road hazards.
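As a toy illustration of that compare-against-the-map step, here’s a little Python sketch. The map format, beam angles, and tolerance are all invented for this example; Google’s actual software is far more sophisticated (and not public).

```python
import math

# A toy sketch of the "compare the scan to the map" idea. The map stores
# the range we'd expect per beam angle on an empty road; anything much
# closer than expected gets flagged. All numbers are invented.

def polar_to_xy(angle_deg: float, range_m: float) -> tuple:
    """Convert one beam return (rotation angle, measured range) to x, y."""
    a = math.radians(angle_deg)
    return (range_m * math.cos(a), range_m * math.sin(a))

def flag_obstacles(sweep, expected_ranges, tolerance_m=0.5):
    """Anything much closer than the map predicts is a potential hazard."""
    obstacles = []
    for angle_deg, range_m in sweep:
        expected = expected_ranges.get(angle_deg)
        if expected is not None and expected - range_m > tolerance_m:
            obstacles.append(polar_to_xy(angle_deg, range_m))
    return obstacles

# At 90 degrees something sits 8 m away where the map expects 30 m of open road.
expected_ranges = {0.0: 30.0, 90.0: 30.0, 180.0: 30.0}
sweep = [(0.0, 30.0), (90.0, 8.0), (180.0, 29.9)]
print(flag_obstacles(sweep, expected_ranges))  # one obstacle, ~(0.0, 8.0)
```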
While the technology is capable of sensing
hundreds of objects at a time, there are still some pitfalls. It can sense
things like jaywalkers and deer crossing into the road, but squirrels, cats,
and possums aren’t quite as lucky. Another limitation of the sensor technology
right now is that while it is capable of picking up on potholes in the road
ahead, it doesn’t go around them; it simply slows down to minimize the damage. In
states like Michigan, where potholes can swallow a semi, that’s just not
going to cut it.
Another hilarious piece of information I gleaned off of the net is that the car supposedly has a hard time handling traffic cops. When the police start directing traffic with their erratic hand gestures, the car doesn’t know what to do, so it turns control of the vehicle over to the human in the hopes they’ll be able to sort it out. Google will continue to work on this technology, but they
certainly aren’t the only ones.
While Google is using LIDAR to pick the kids
up from soccer practice remotely, a researcher at Stanford, working tirelessly
on his own autonomous vehicle, is taking the research in a different direction,
one that could pay off big time for every autonomous vehicle
manufacturer in the world.
Christian Gerdes considers himself an
“above-average driver”, at least according to his delightful TEDx talk which
you can watch by clicking the video below…
…but instead of equipping his cars with a ton
of lasers and real-time traffic control, he’s content with the modern safety
features currently available to most consumers, plus a few extra high-tech
cameras and computer algorithms on the side. He isn’t testing to see if his car
will stop in time; in fact, he doesn’t want it to stop at all. Christian is
taking his autonomous cars to some of the most dangerous locales in the world,
and he’s pushing them to their mechanical limits.
Christian isn’t looking to design a car that
drives like a soccer mom. He’s looking to design a car that drives like Jeff Gordon.
Racecar drivers are incredibly intuitive drivers, capable of feats
behind the wheel that even the most veteran of commuters wouldn’t
dare dream of attempting. So why would you want a car that isn’t capable of
taking risks? After all, isn’t having the ability to take risks the best way to
learn to avoid them?
Researchers at Stanford think so, and they’ve
worked day and night for several years now to prove it. In 2010 they
announced their autonomous Audi TTS would make the climb up Pikes Peak in
Colorado. This course has been raced for almost 100 years, and it’s known for
its treacherous hairpin turns. The autobot? It performed spectacularly.
But just what goes into these smart cars that
makes them so darn smart?
Road Tested, Soccer Mom Approved
Plenty of driver-assist features were already available as of 2008. As
of this writing we have technology ranging from self-parking cars to forward-facing
cameras that sense obstructions in the road ahead (like unaware adolescents)
and automatically apply the brakes. With all of this technology in development, it
was no surprise to see autonomous cars being announced by every major car
manufacturer in the world.
Take tire pressure
monitoring: a sensor in each wheel alerts the driver, via an audible sound or a
light on the instrument panel, that the tires are low. More and more cars
are also slated to be equipped with run-flat tires, which are capable of driving at
a high rate of speed for around fifty miles after being punctured. This cuts down
on blowout-related accidents.
Blind-spot assistance has come along now as
well. Sensors on the sides and rear of the vehicle collect information on the
vehicle’s surroundings, and if the driver attempts to make a lane change into an
occupied section, the car will quickly alert the driver, thereby
avoiding a collision. Sounds like that’s just one step away from I, Robot to me.
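Boiled down to its essence, the logic might look something like this Python sketch. The three-meter threshold and the sensor interface are my own inventions for illustration, not any manufacturer’s actual system.

```python
# A toy sketch of blind-spot lane-change logic. The sensor interface
# (nearest-object range per side, in meters) and the threshold are
# invented for illustration, not any manufacturer's actual system.

BLIND_SPOT_RANGE_M = 3.0  # how close a vehicle must be to count as "occupying"

def lane_change_safe(turn_signal: str, left_range_m: float, right_range_m: float) -> bool:
    """Return False if the lane the driver is signaling into is occupied."""
    if turn_signal == "left":
        return left_range_m > BLIND_SPOT_RANGE_M
    if turn_signal == "right":
        return right_range_m > BLIND_SPOT_RANGE_M
    return True  # no lane change requested, nothing to veto

# A car sits 1.5 m off the left rear quarter panel: warn on a left lane change.
print(lane_change_safe("left", 1.5, 20.0))   # False -> sound the alert
print(lane_change_safe("right", 1.5, 20.0))  # True  -> right lane is clear
```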
And what about Chrysler’s Electronic Roll
Mitigation? Suppose you take a curve too sharply: if the vehicle senses there’s
a high likelihood of a rollover, it will apply the brakes and throttle
accordingly so as to avoid the incident. Once again, bundled all together as
one neat little computerized package, all of this sounds intuitively autonomous
to me already.
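The physics behind it is simple enough to sketch: lateral acceleration in a curve is v²/r, and past some threshold the rollover risk gets real. Here’s a rough Python illustration, with an invented threshold rather than Chrysler’s actual numbers.

```python
# A rough sketch of the physics behind roll mitigation: lateral acceleration
# in a curve is v^2 / r, and past some threshold the system intervenes.
# The 0.8 g threshold and this whole simplification are illustrative only,
# not Chrysler's actual control logic.

G = 9.81                   # gravitational acceleration, m/s^2
ROLL_RISK_LATERAL_G = 0.8  # invented intervention threshold

def lateral_g(speed_mps: float, turn_radius_m: float) -> float:
    """Lateral acceleration, in g, for a car taking a curve of this radius."""
    return (speed_mps ** 2 / turn_radius_m) / G

def should_intervene(speed_mps: float, turn_radius_m: float) -> bool:
    """True if the controller should cut throttle and apply the brakes."""
    return lateral_g(speed_mps, turn_radius_m) > ROLL_RISK_LATERAL_G

# 30 m/s (about 67 mph) through a 50 m radius curve pulls ~1.8 g: intervene.
print(round(lateral_g(30.0, 50.0), 2))   # 1.83
print(should_intervene(30.0, 50.0))      # True
```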
With backup cameras and automatic emergency-service
alerts in the event of a collision, everything was already in place; the
software just needed to come along in order to get these cars on the road. So
are they safe? For the most part. They still require extensive field testing,
stress testing, and more than likely many more upgrades to various systems
before they’ll be ready for consumers.
But considering 3D mapping is going to be
available on your Smartphone soon, nothing here should surprise you.
Virtual Reality: The “Home” Game
Google recently
announced another project along the same lines as the technology that went into
making their Smart cars autonomous. It’s called Project Tango, and no, it isn’t
a “So You Think You Can Dance?” app. It’s actually a highly detailed program
able to take the two-dimensional viewpoint of your Smartphone into the awesome
3D world.
We live in a 3D
world, so it only makes sense that we should be able to capture the dimensional
properties around us. Google thought so too. Using the same kind of depth-sensing
technology as Microsoft’s Xbox 360 Kinect, plus the new Myriad 1 3D chip
developed by Movidius, Project Tango enables the user to capture 3D simulated images of
their environment.
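Under the hood, turning a depth image into 3D points is classic pinhole-camera back-projection. Here’s a minimal Python sketch, with made-up calibration values rather than Tango’s real hardware specs.

```python
# A minimal sketch of how one depth pixel becomes a 3D point via
# pinhole-camera back-projection. The focal length and image center
# are made-up calibration values, not Tango's real specs.

FX = FY = 520.0        # focal length in pixels (illustrative)
CX, CY = 320.0, 240.0  # optical center of a 640x480 depth image

def pixel_to_point(u: int, v: int, depth_m: float) -> tuple:
    """Back-project depth pixel (u, v) into camera-space (x, y, z) meters."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# A pixel near the right edge of the frame, 2 m away, sits ~1.2 m to the side.
print(pixel_to_point(630, 240, 2.0))  # (~1.19, 0.0, 2.0)
```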
This technology
could be useful for a “myriad” (see what I did there…) of purposes, and Google
is reaching out to hundreds of software developers worldwide, distributing
Dev Kits containing the technology. From uses as simple as taking a full-scale
image of one’s living room before going furniture shopping to the ultimate in
fully immersive, one-of-a-kind virtual reality gaming experiences, the
possibilities seem endless.
The point I’m
trying to make is that if Google is capable of taking such a cutting-edge kind
of technology and scaling it down to fit in a Smartphone, then they
should have no problem working out all the bugs in their Smart Cars. I can’t
wait to see what the future holds for self-driving automobiles.
For now I’ll just
have to keep my ear to the ground for updates as they come. As of right
now there are varying estimates on release dates for fully autonomous vehicles,
ranging anywhere from 2015 to 2025. Personally, 2025 sounds much more
probable to me, but we’ll see.
Thanks for reading, everyone! As always, if you would like to know more about any of the topics
discussed above, you can do so by following any of the links below. Feel free to share
this around on Facebook and Twitter.

-Ryan Sanders