Friday, May 30, 2014

The Phantoms of the Forest: Vampire Redwoods!


The great Redwood tree. Tall, hardy, majestic, mighty: these are all apt descriptors, and there are many more. But the Redwood is a dying breed, and not because it can't stand up to the tests of the world. Quite the contrary, really; the Redwood is brutally tough. We're the ones (as usual) who killed them off.
With less than 1.5 million acres left in the world, these trees need our protection now more than ever, because a strange abnormality in these trees has come under scrutiny as of late. The albino and chimera mutations that occur in some of them are a marvel to botanists and phytology enthusiasts the world over (myself included. Yay tree nerds!)
Before we get too far along I want to clarify something. When I say 1.5 million acres, that sounds like a lot, but when you take into consideration that Earth's land surface is about 36,819,200,000 (yes, that's nearly 37 billion) acres, 1,500,000 acres is kind of small potatoes. At that scale, it's the equivalent of shoving the Redwoods off into a single prison cell. Considering the tallest one is over 350 feet in height, that's not cool.
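To make that comparison concrete, here's a quick back-of-the-envelope check. This is just a sketch in Python using the approximate acreage figures quoted above, not precise survey data:

```python
# Rough fraction of Earth's land surface still covered by redwoods.
# Both figures are the approximations quoted in the text above.
redwood_acres = 1_500_000
land_acres = 36_819_200_000

fraction = redwood_acres / land_acres
print(f"Redwoods cover about {fraction:.4%} of the land")  # about 0.0041%
```

Four thousandths of one percent. Prison cell indeed.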
We’ve talked about some monster trees before (Cosmic Radiation + 1,250 Year Old Cherry Tree = ???) but today we’re going deeper into the woods to see if we can’t solve a peculiar little mystery. Like just how in the heck do you get an albino tree?
Today at TI&IT we're going to talk briefly about some of the cool aspects of Redwood trees and some of the parks that protect them. We're also going to discuss a little bit about albinism as it relates to humans. Finally we're going to talk about these bizarre arboreal oddities that, like Methuselah, have hidden coordinates to protect them from vandalism.
But first and foremost the question you’re all probably asking.
“Now…when you say vampire…?”

Nosfera-tree


The leaves of the albino redwood look almost plastic, but they're very, very real. The problem is that, while beautiful, they lack an ingredient essential to the life of every plant on this planet: the ability to produce chlorophyll, the pigment the plant uses to produce energy.
These trees can't produce chlorophyll effectively, which makes it nearly impossible for them to generate their own energy. Therefore they have to vampirize their neighbors. ("I vant to suck your sap! Bwah!") We'll dig deeper into that in a bit. Let's take a quick biology lesson first.
Plants are actually quite remarkable. Think of them in terms of humans (this will come in handy later in the article). In order to function, humans and other animals have to produce energy of some kind, and plants are no different in that regard: in order to survive they have to produce energy of their own. So if plants and trees are like us, we can think of smaller plants like ferns and flowers as the animals and insects of the plant world, and of trees as the people, comparing their more complex systems to our own.
A dandelion, while it operates on the same sort of principles as trees, has things a little easier. A tree sometimes has to pump nutrients from the soil hundreds of feet up into the air to get them to its leaves. A dandelion is much closer to the ground so it needs a less complex physiology in order to accomplish this same task.
Trees have a series of transporters built into their biology that function much the same way our circulatory system does. Using what's known as Xylem and Phloem, they are able to provide their leaves with the nutrients and water needed to drive the photosynthetic process. Xylem essentially functions like veins, carrying raw nutrients and water up to the leaves, where photosynthesis converts them into sucrose and glucose (sugars). The structure that carries this nutrient-rich sap (the tree's blood, if we're thinking like humans) is called Phloem. Phloem works like our arteries, which carry oxygen-rich blood to our extremities so our muscles and other organs can function properly.
Xylem and Phloem can be visible if a cross section of a tree is taken.


Xylem is represented in the tree's growth rings. The inner rings closest to the heartwood, or Pith, are the functioning Xylem. As the tree grows and the rings extend toward the outer layers, this Xylem dies off and becomes non-functioning as new Xylem forms. It isn't useless at this stage, however: not only does it tell scientists how old a tree is, it still stores vital nutrients for the rest of the tree. Think of old Xylem like fat in people.
Phloem on the other hand is the innermost layer of the bark. The bark is like our skin, protecting the tree from the elements, predation, and disease. While Xylem is mostly composed of dead tissues and cells, Phloem is very much alive. It transports glucose to the roots or bulbs for storage and can move in any direction throughout the tree, provided it's structurally feasible. Xylem is unidirectional, meaning it can only flow upward.
*Whew!*
So now we know why they need photosynthesis, but how does it work? It's pretty cool actually. Light entering the leaf is captured by tiny structures inside the leaf cells called Chloroplasts. Chloroplasts house Chlorophyll, which is a key ingredient in converting raw nutrients and water into sap. It also produces the tree's pigmentation.
The spectrum of light is basically the rainbow. (ROY G. BIV. Remember him?) Red, orange, yellow, green, blue, indigo, and violet. For one reason or another, the only colors of light in the spectrum that plants seem to have much use for are the ones on either side of green: the blues and the reds. Green isn't taken in, and that's why most plants appear green; they reflect the color of light they don't use.
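As a toy illustration of that idea, here's a tiny sketch. The band boundaries and peak wavelengths are rough, commonly cited values for chlorophyll, simplified for illustration:

```python
# Chlorophyll absorbs strongly at the blue and red ends of the visible
# spectrum and reflects the green middle. Bands here are illustrative.
def leaf_response(wavelength_nm):
    if 400 <= wavelength_nm < 500:
        return "absorbed (blue peak around 430 nm)"
    if 600 <= wavelength_nm <= 700:
        return "absorbed (red peak around 662 nm)"
    if 500 <= wavelength_nm < 600:
        return "mostly reflected, which is why leaves look green"
    return "outside the visible range"

for wl in (430, 550, 662):
    print(wl, "->", leaf_response(wl))
```

Feed it a green wavelength and it bounces; feed it blue or red and the leaf drinks it in.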
Chlorophyll works like Melanin in people. It's the factor that determines the pigmentation of the plant. Depending on what the plant pulls in from the light spectrum to use in photosynthesis, it could range from light green to reds and yellows; even pink and black leaves have been noted.
So what happens when the plant can’t utilize the light spectrum for pigmentation let alone photosynthesis? You end up with something called albinism.

Albino Is The New Green



Albinism, or Achromia, can strike anyone. It doesn't matter what your racial background is. It doesn't care if your family's rich or poor, old or young, skinny or fat; it's a recessive gene. If it's in your bloodline, being born just becomes a crap shoot. But what causes the disorder?
A lack of the pigment melanin. The body is unable to produce the enzyme Tyrosinase, a copper-containing enzyme directly responsible for the production of melanin. Without it, melanin cannot form within the skin, leaving the afflicted with a ghostly pale appearance in the skin, hair, and eyes.
It isn't just color that's affected by this disorder in humans and animals; vision can be affected as well. The eyes are highly dependent on melanin in order to function properly. In fact, melanin is what gives eyes their wide variations in color. The more melanin someone has in their eyes, the darker the color will be, usually presenting as a brown or black iris. The less melanin they have, the lighter the color will be, i.e. blue or green eyes.
Melanin protects the skin from harmful U.V. radiation, with darker pigmentation blocking more of the rays that cause damaging effects. (This is why there is such a wide array of skin color depending on geographic location. Race is really just an arbitrary term.) This same phenomenon applies to the eyes: people with darker eye color are less sensitive to light than those of us with blue eyes.
In those who suffer from albinism, since they lack the protective melanin, their eyes are far more susceptible to the damaging effects of ultraviolet radiation. It can cause crossed eyes, retinal damage, or worse yet, photophobia. This is not to be confused with heliophobia, which is a goofy morbid fear of light. Photophobia isn't a fear but rather a physical discomfort or pain caused by exposure to light. In short, heliophobics are sissies; photophobics can't help it, it hurts.
Fortunately for humans, we don’t photosynthesize our energy through the pores of our skin. Plants on the other hand aren’t so fortunate.
So now we know that albino humans can't produce melanin and albino plants can't produce chlorophyll. We know that this has damaging effects on the bodies of humans, and that it prevents plants from undergoing photosynthesis. Humans with albinism can still eat and drink to produce energy, but how do plants do it?
For that we have to get a little help from classic horror.
Vampires.

The Roots of All Evil


Alright, so "vampire trees" may be a little bit misleading. They don't actually uproot themselves at night to stalk the forest in search of stamens to bite. Their vampiric actions are far more subtle than that. It all takes place away from the prying eyes of humans, right beneath our feet.
As we know they can’t produce chlorophyll, which means they can’t stimulate photosynthesis, so how do they get food? Well luckily albino redwoods still have roots. They intertwine these roots with those of what’s called “The Parent Tree” in order to suckle nutrients from it.
What’s more incredible is that albino redwoods don’t appear to be growths unto themselves but rather branches from said parent tree. They don’t grow as trees so much as have the appearance of shrubs and bushes. Take a look at the example below.


Researchers have no idea why these redwoods sprout these genetic mutations. Some conclude it's an abnormality in the genetic structure of the redwood. Genetically, redwoods are what's known as hexaploid, meaning they carry six sets of chromosomes for a total of 66. Humans by contrast are diploid, carrying two sets of 23 for a total of 46. Those 20 extra chromosomes, coupled with the fact that some of the oldest redwoods alive today date back to the Roman Empire, allow for a lot of variables that could result in this genetic mutation.
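The arithmetic behind that comparison is simple enough to sketch. The set sizes are the standard figures: 11 chromosomes per set for coast redwoods, 23 per set for humans:

```python
# Ploidy is just "how many copies of the chromosome set?"
redwood_chromosomes = 6 * 11   # hexaploid: six sets of 11 -> 66
human_chromosomes = 2 * 23     # diploid: two sets of 23 -> 46

print("Redwood:", redwood_chromosomes)
print("Human:", human_chromosomes)
print("Difference:", redwood_chromosomes - human_chromosomes)  # 20
```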
What's even more incredible is that some of these albino redwoods present as chimeric, meaning they have both white and green leaves. To some extent these chimera albino redwoods are capable of producing chlorophyll and therefore living on their own; however, they are still more fragile than their fully green redwood counterparts.


Chimera redwoods are much rarer than their already scarce albino counterparts. Not only that, they're gold mines for researchers studying these trees and their genetic composition. Since a single tree carries both green and white needles, it gives them the opportunity to study how the healthy tissue operates in tandem with the albino tissue. They are strange, to say the least.
What’s even stranger is that during times of necessity, the parent tree can cut off the sugar supply to the albino mutation, effectively killing it. But the plant does not completely die. Redwood forests are extremely complicated in their ecology. The root systems can span for miles, entangling with the systems of neighboring trees, forming one gigantic networked forest. This can strengthen the trees from high winds, floods, droughts, you name it. Now that’s a support group.
The parasitic root system of the albino redwood is no different in this respect. While the part of the plant above the ground withers and dies, the roots are unaffected. This is how this arboreal apparition seems to disappear one season and reappear another in the same spot.
Some scientists have linked this genetic mutation to times of great stress and that hypothetically this could be some sort of coping mechanism. On top of that the increased number of chromosomes allows for a number of other possible genetic mutations, so in a sense, they could be looking for anything.
Not much is known about these trees, including their exact number. Reports range anywhere from 10 to over 500, though the figure typically agreed upon is around 50. They were discovered in 1866 and first written about in the California Academy of Sciences Proceedings that same year. Some published articles include allusions to their use in sacred Native American spiritual rituals. I could not track down a verifiable source for this information, so I will only include the above sentence with the following caveat: I have not verified the truth of these accounts. If someone has verification, please feel free to leave it in the comments section. I will be more than happy to paw through it.
These mysterious trees will continue to fascinate us until we figure out exactly what it is that makes them tick. Perhaps the albino redwood is a disease, an abnormal growth that the parent plant has no control over. Perhaps it’s a way for the tree to deal with environmental stressors. Or perhaps (and this is my personal theory) it’s a way to store excess sugars for the host trees or other trees on the “network” and is consumed before a drought or other disaster strikes because the tree knows the excess stores will be needed.
If anyone researching these wonders of the wild reads this article and is willing to share, I would love to read any more information you have on these trees. Anything that keeps the atmosphere around for us to enjoy is alright by me, and trees have been a fascination of mine since I was a young boy. After all, I live in an area of the United States not known for its Redwoods, but I have one growing in my backyard. That's darned cool to me.
All I know is these albino redwoods are beautiful, and I hope we discover thousands more of these elusive forest ghosts. Hopefully one day I’ll get to see one in person. But for now, I’ll settle for Google images. Thanks for reading everyone!

-Ryan Sanders





Thursday, May 29, 2014

Returning to Castle Wolfenstein! 33 Years of Defining a Genre (and Nazis!)


When you mention the First Person Shooter genre, most veterans of the gaming community will fondly recall memories of Doom. What a lot of gamers may not know is that Wolfenstein did it first and, in its own right, created a genre. The fact that the same company (id Software) made both games is irrelevant; the fact stands: Wolfenstein came first.
But what a lot of people don't know is that a decade before id Software released its soon-to-be world renowned, genre-defining game, a small company called Muse Software laid the initial groundwork. In 1981, just three years after the Baltimore-based company opened its doors and just six years before it closed them forever, Castle Wolfenstein was born.
Castle Wolfenstein was no doubt its most popular title on the Apple II (one of the original personal computers, for all you youngin's), with its followup "Beyond Castle Wolfenstein" falling just short of its popularity. At the height of the company's reign they were pulling in around two million dollars a year. For the time, that was excellent money. Alas, a host of issues caused them to file for bankruptcy in 1987.
But all was not lost, even though their roster only consisted of six or seven titles as a game development company, they had stirred an entire generation of computer nerds. And these nerds were not content to let the Nazis just sit in the annals of Apple II history collecting dust.
A guy named John Romero and a fella named John Carmack were bent on helping players the whole world over crush the Nazi regime, in a way that had never been done before. This was accomplished by giving the player a personal perspective, altering the vantage point to make the individual feel like they were inside the game.
This, was the First Person Shooter.

C:/Wolfenstein3D


One year before the intrepid Space Marine "Doom Guy" headed to Hell, John Romero and id Software took us on a rampage through a familiar castle to kill a very familiar dictator. Little did anyone know going in that they were getting something they had never seen before, and no, it wasn't a pixelated Hitler. It was the First Person Shooter.
What made this possible was that by 1990, computational power had grown many times over from what was possible in 1981. These machines went from glorified typewriters to day planners, word processors, and office management suites, with many new capabilities arriving every day. This gave developers the chance to flex their creative gaming muscles a little more freely.
In 1991 John Carmack built a game engine, which is essentially the framework that every other piece of code is layered on top of in order to create the finished product. This engine had been tested prior to Wolfenstein in their other franchise, Catacomb, but this was its true maiden voyage into the mainstream.
Game engines provide developers with the tools for scripting, lighting, shading, rendering, physics, collision detection, and many other tools needed for game design. In id Software's case, engines were typically named after their flagship franchises (e.g. the Doom Engine and the Quake Engine; Epic's Unreal Engine follows the same convention). Due to their flexibility in design, they can be reconfigured to suit many other developers' needs for their own games.
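To give a flavor of what an engine actually runs every frame, here's a bare-bones sketch of the core game loop: read input, step the physics with a crude collision check, then render. This is purely illustrative toy code, not id's engine:

```python
# A minimal game loop: poll input, advance the simulation
# (movement plus a toy collision check), then render the frame.
def handle_input(state):
    state["frames"] += 1  # stand-in for polling keyboard/mouse

def update_physics(state, dt):
    state["x"] += state["vx"] * dt  # move the player forward
    if state["x"] > 10:             # collision with a "wall" at x = 10
        state["x"], state["vx"] = 10.0, 0.0

def render(state):
    pass  # a real engine would rasterize the scene here

def run(frames=60, dt=1/60):
    state = {"x": 0.0, "vx": 30.0, "frames": 0}
    for _ in range(frames):
        handle_input(state)
        update_physics(state, dt)
        render(state)
    return state

final = run()
print(final["x"])  # the player stops at the wall: 10.0
```

Every shooter since Wolfenstein 3D is, at heart, this loop running dozens of times per second with much fancier physics and rendering plugged in.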
Coupled with the recently acquired rights to Muse Software's Castle Wolfenstein, the brilliant mind of John Romero, and the talents of a software design team far ahead of its time, Wolfenstein 3D released to smashing success in 1992. Nothing like it had been seen before, and because they were willing to sacrifice some graphics power, they were able to offer the game on a wide array of systems.
Castle Wolfenstein was ready to be stormed once more.

Return To Wolfenstein (Again)


Wolfenstein 3D, much like id Software's megahit Doom, caught a lot of attention for its gritty and gory game design. It appealed to teens and adults alike who had never experienced something so gruesome before, but many-a-mother was horrified by what she saw. Thankfully, id was able to continue doing what they do, and in the late 90s they turned development of a new Wolfenstein game over to the teams at Gray Matter Interactive.
They weren’t noobs to the FPS genre. They had worked with id in the past on titles like Quake II but this was their first chance to make a huge blockbuster title. Everything they had done (with the exception of Quake II which was still mainly considered an id Software title) was relatively mediocre in comparison to this.
The game was a tremendous success. Nerve Ent. developed the multiplayer portion of the game, which eventually became its most popular feature. In many respects this paved the way for online gaming as we know it with respect to FPS titles. While it wasn't the first, its objective-based gameplay was fresh and innovative for the time.
While non-networked multiplayer gaming had been around in the form of STAR since the 1970s, networked play over a LAN (Local Area Network) didn't start catching on until a mere ten years before Return to Castle Wolfenstein was released. The honors go to the games Spectre and Doom: Spectre allowed eight players to connect simultaneously in 1991, and Doom allowed four in 1993. Either option was groundbreaking for the time.
Return to Castle Wolfenstein, however, received some notoriety much like its predecessor. It seems no innovation comes without detractors. People were outraged that the multiplayer portion of the game let players fight as Nazi soldiers. Eventually people moved on from this, and so did Gray Matter, which in 2005 was merged into Treyarch.
Once again, id Software had full control of the rights to Wolfenstein, and they saw fit to hand it over to another developer that could take it to another generation of gamers.

Sinking in a Flooding Market


Before we talk any more about the sequel to Return to Wolfenstein, it’s important to talk about what was going on in the gaming market around the time of its release. In 2000, the world saw the release of a gaming system like no other. The Playstation 2 revolutionized the way we played video games. Graphics were more life-like than before. The increased computational power of consoles allowed for more and more realistic simulations to be released.
Gaming consoles were nothing new. The Nintendo and the original Playstation were just two examples out of the dozen or so that had already graced living rooms all across the world. But in 2001, when Microsoft stepped into the video game console world and dropped the Xbox, everything changed.
A huge software firm like Microsoft had just marched into Sony’s playground, and their console had the “chutzpah” to push competitors like Sega out of existence and overshadow the megalithic Nintendo Corporation.
As a result of all these consoles and game developers hitting the market, you can imagine things got a little flooded, and quickly. Gamers were overwhelmed with the amount of lackluster titles being shoved onto the shelves in order for each company to make a quick buck at the expense of gamers.
As a result, the industry lost a lot of gamers’ trust. It has since gained it back with wonderful publications like Game Informer, OXM, Playstation Magazine, and PC Gamer giving gamers the insight they need to make informed purchases, and better quality efforts within the industry to give us the titles we deserve for the lofty prices we spend.
These are all important to keep in mind as we head back to stomp some Nazis once more.

Isenstadt Not Constantinople


In 2009, a company called Raven Software, a subsidiary of the hugely successful Activision/Blizzard merger, got a call from the dev teams at id Software and Activision. It was a call to craft the next installment of Wolfenstein. In the eight years since the release of its predecessor a lot had changed. Gone were the Playstation 2 and the Xbox, replaced by their shiny new counterparts, the Playstation 3 and the Xbox 360. The Gamecube, manufactured by Nintendo, had disappeared and been replaced with the motion-controlled Wii. The landscape was unrecognizable compared to what it had been eight years prior.
So with a new engine called id Tech 4, a young and hip team that had worked on many id and Activision shooters since its 1990 inception, and a huge budget to work with, they set to work crafting the next episode of the series. Unfortunately, after its release it didn't do so hot.
As we mentioned in the section above, the market was flooded with an overwhelming number of FPS titles, not to mention hundreds of other games across a number of different genres. While the graphics were pretty (smoothed-over edges instead of rough polygonal ones) and the sound and scripting were great, movie quality even, the title just didn't strike a chord with critics.
It scored an average rating on Metacritic, but that was nothing compared to the blockbuster success of its predecessors. However, id Software and Machine Games, a newcomer to the gaming battlefield of FPS titles, are looking to change all that with the recent release of Wolfenstein: The New Order.

Return To…Aw Jeez You Know Where…


It’s a new world. The Nazis won the war, and since that day they’ve made the world a very crappy place to live. Your mission, stomp some Nazi arse, save the world, and crush The New Order. Good luck.
Sounds like a lofty goal, but anyone who has played an FPS in the last twenty years knows the sound of that plotline. We've all saved the world a million times from Nazis, cyborgs, aliens, and even ourselves by now; we need something new. Gamers demand something fresh.
Well The New Order aims to deliver.
I haven't had any hands-on time with the game myself, but I've watched a few hours of gameplay footage and I have to say, it's intense. The voice acting is crisp and fresh; you almost don't even need the subtitles at the bottom of the screen, although sometimes they do help when the game tries to maintain the realistic nature of an airplane engine drowning out a conversation.
There’s a new system that’s been integrated into Wolfenstein known as Perks. Anyone who has played a Call of Duty title has heard that word before. Unlike in COD where you select your perks before going into a mission, this is more like an RPG-Lite mechanic allowing you to experiment with different styles of gameplay.
Just check out the trailer for The New Order below:


Judging from the trailers and gameplay I've seen, the 8 out of 10 that Game Informer gave The New Order seems fair. I've certainly added it to my list of games I'll be picking up on my next stop at the store.
It's been criticized for not taking risks or being innovative. The review also noted that the music eventually grew tiresome enough to shut off, and that the A.I. was completely stupid.
The fact is that Machine Games, while it is backed by the tremendously successful publisher Bethesda, is still a fledgling studio. To take huge risks, especially with such a beloved and well known franchise, could prove complete and utter suicide. While it lacks a multiplayer feature, which is kind of a bummer in this day and age, it seems like the campaign will be a rousing bit of fun.
In my opinion, in 2009 when Zenimax, the owner of Machine Games, purchased id Software, they did the right thing by playing it safe with their new company's first title. I hope Bethesda and Machine Games get to keep the rights, and that this iteration is successful enough to warrant a sequel from them. Perhaps the next installment will take the risks the critics are clamoring for, but the fans are far less concerned with.
For me, I'm just happy to be able to blast away at some Nazis as the stocky B.J. Blazkowicz once again. As of May 20th the game is available on all platforms (Playstation 3, Xbox 360, Playstation 4, Xbox One, PC). So head on down and pick up your copy today. And no, they're not paying me to advertise. (Nobody pays me at all honestly these days…) I just really enjoyed this franchise growing up, and I want to take this opportunity to share my love of it with everyone else, just like my love for Science.
So gather up your resistance gentlemen. You’re taking back America.
Dismissed.

-Ryan Sanders





















Wednesday, May 28, 2014

Cyclists & Squirrels: The Pros and Cons of Google Powered Autonomous Automotives


And you thought your Prius was the pinnacle of innovation.
Step aside green car for the all new self-driving car! With innovations in Smart technology ranging in everything from your cellphone to your house, why should automobiles be off the table? Innovators like Google, General Motors, Apple, and Mercedes-Benz are all climbing behind the wheel for one last human driven ride.
But are they safe? Mostly; there are still some kinks to work out. So far only a couple of accidents have been reported, and all of them were ultimately determined to be the result of human error, not the Google car. So how do they work?
Are you ready for this?
Frickin’ laser beams.
That’s right. A series of laser beams are used to guide the car through environments without the aid of a human driver. It’s pretty incredible when you think about it, and coupled with modern safety features already in place on most vehicles, in less than ten years motorists may only have to tell their car to head to the “Dry Cleaners” and stop by “McDonalds” on the way “Home” in order to navigate through their busy schedule.
This could also lead to faster posted speeds in the future, meaning less congestion, shorter commute times, and most importantly, fewer motorist fatalities. Everything here sounds good on paper, but what about the science? What is LIDAR? How do these Smart cars see their environment and stop in time for things like children and deer? Are they safe? What will they look like? How well have these things been tested?
We're going to answer all those questions and more, including a little background on the history of LIDAR. Come with me at To Infinity and…In Theory today as we discuss whether or not the autonomous automobile market is destined to crash and burn.
Before we begin, let’s start with some basic statistics so you can see why this technology is so important.

Highway To The Danger Zone


In 2012, just in the United States, more than 34,000 motorists were killed in vehicular accidents. A further 4,000+ bystanders were killed, thousands more were injured, and 700 bicyclists were killed. That’s a lot of human related motorcade carnage if you ask me.
Those numbers get far higher as you branch out into the rest of the world, as you'd expect; the United States is but a fraction of the world's population. The majority of those deaths resulted from human and mechanical error. I'm sure there were acts of nature that could not have been avoided, just as I'm sure even a Google car can get annihilated by a sudden sinkhole, but those numbers could drop significantly.
During the decade or so of testing autonomous vehicles, Google alone has logged over 700,000 miles on their cars. With over a dozen on the road in four states including Nevada, California, Florida, and Michigan, there have been exactly 0 deaths reported.
There have however been two accidents reported. One being a rear-end collision, the fault of a human driver in a non-Google car, and the other was hilariously in front of the Google Headquarters building in California. The car was being manually driven at the time, (and KITT probably laughed his carburetor off). But to date there have been no reported accidents related to the software installed to make the car autonomous.
So just how does it sense its surroundings and make intelligent decisions based on the information it ascertains?
It’s not magic. It’s science.

Nice Laser


Ah yes, the famous Albert Einstein, and the famous photo of him with his tongue out. If you've ever wondered about the story behind that photo, ponder no more; it's quite hilarious actually. It was his birthday party, and all night long guests were asking to pose with Albert for pictures, or to make him smile for a photo. Albert grew tired of smiling, and as he was leaving, a guest pleaded for one last photo of the famous physicist. Not wanting to deny his public, but horribly bored with smiling, he turned to the camera and stuck his tongue out instead. The photographer happened to also be a journalist, and this quickly became one of the most famous photographs of the brilliant man.
And you thought he was just batty.
But anyway, I’ve digressed terribly far. In 1917, taking the work of Max Planck’s radiation experiments in another direction, Einstein unintentionally laid the groundwork for what would later become the laser and the maser.
Next came Rudolf Ladenburg, who confirmed the existence of the phenomena of stimulated emission and negative absorption. Over a decade later, Valentin Fabrikant predicted the use of stimulated emission to amplify short waves.
The cycle repeated itself: it took another ten years before Willis Lamb and R. C. Retherford found apparent stimulated emission in hydrogen spectra and effected the first demonstration of stimulated emission. Finally, in 1957, Charles Townes and Arthur Schawlow, then at Bell Labs, began a serious study of the infrared laser.
In May of 1960 the first laser was fired by Theodore H. Maiman in Malibu, California. It was the first successful demonstration of the technology, and it set in motion a revolution the entire world over: from the Iranian-American physicist Ali Javan, who with William R. Bennett and Donald Herriott constructed the first gas laser (using helium and neon, and capable of continuous operation in the infrared), to recent advances ranging from lasers that aid in delicate surgeries to lasers that just annoy your cat for a funny YouTube video.
Lasers have a rich history, including lawsuits, besmirching, and cutthroat research. If you want to read more about this fascinating technology, head over to the Wikipedia page on lasers. For time's sake, we're going to move on to explaining just how lasers play a role in keeping your new Google-Benz from crashing.

Mixed (Hand) Signals


There are many kinds of lasers out there now, but for the purposes of this article we’re going to be talking about a specific application. LIDAR, short for Light Detection and Ranging, combines imaging lasers with radar-style ranging, hence the name. The unit sends out a laser pulse (in the infrared or even the visible spectrum), the pulse reflects off the surface of an object and bounces back to the source, and the round-trip time tells the unit how far away that surface is. A computer then interprets these reflections and processes them into an image.
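The ranging part of that idea is surprisingly simple math. Here’s a minimal sketch (my own illustration, not Google’s actual software) of how a round-trip pulse time becomes a distance:

```python
# Hypothetical sketch of the core LIDAR ranging idea: a light pulse
# travels out to an object and back, and distance falls out of the
# round-trip time.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in meters.

    The pulse covers the distance twice (out and back), so we halve it.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return arriving 400 nanoseconds later puts the object about 60 m away.
print(round(distance_from_round_trip(400e-9), 2))
```

The hard engineering isn’t this formula; it’s measuring nanosecond-scale timings for millions of pulses per second.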
The LIDAR approach has been around since the 1960s, but it wasn’t until modern materials and electronics made it possible to scale the technology down in size and bulk that it became practical for uses like this.
In the case of the Google car, it’s equipped with a Velodyne rotating LIDAR on the roof. This unit has 64 independent laser beams that spin very fast above the vehicle. As these beams bounce off the surrounding environment and return to the car, software in the car’s computer compares the resulting imagery with real-world images of the geographical landscape. It then uses these comparisons to make adjustments accordingly for traffic, laws, construction, and road hazards.
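To give a feel for how those 64 spinning beams become a picture of the world: each return is really just a range at a known pair of angles, and turning it into a 3D point is a spherical-to-Cartesian conversion. This is a hypothetical sketch of the geometry (the real Velodyne unit uses calibrated per-beam angles and its own packet format):

```python
import math

def beam_return_to_point(range_m, azimuth_deg, elevation_deg):
    """Convert one laser return into an (x, y, z) point relative to the sensor.

    azimuth is the rotation angle around the vertical axis; elevation is the
    fixed tilt of that particular beam.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# One full rotation of 64 beams, each at a fixed elevation, yields a dense
# "point cloud" the software can match against stored maps.
print(tuple(round(c, 2) for c in beam_return_to_point(10.0, 90.0, 0.0)))
```

Sweep that through 360 degrees many times a second and you get the constantly refreshing 3D snapshot the car navigates by.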
While the technology is capable of sensing hundreds of objects at a time, there are still some pitfalls. It can sense things like jaywalkers and deer crossing into the road, but squirrels, cats, and possums aren’t quite as lucky. Another limitation of the sensor technology right now: while it can pick up potholes in the road ahead, the car doesn’t go around them; it simply slows down to minimize the damage. Here in states like Michigan, where potholes can swallow a semi, that’s just not going to cut it.
Another hilarious piece of information I gleaned off the net is that the car supposedly has a hard time handling traffic cops. When the police start directing traffic with their erratic hand gestures, the car doesn’t know what to do, so it turns control of the vehicle over to the human in the hope they’ll be able to sort it out. Google will continue to work on this technology, but they certainly aren’t the only ones.
While Google is using LIDAR to pick the kids up from soccer practice remotely, a researcher at Stanford, working tirelessly on his own autonomous vehicle, is taking the research in a different direction. A direction that could pay off big time for every autonomous vehicle manufacturer in the world.
Christian Gerdes considers himself an “above-average driver”, at least according to his delightful TEDx talk which you can watch by clicking the video below…



…but instead of equipping his cars with a ton of lasers and real-time traffic control, he’s content with the modern safety features currently available to most consumers, plus a few extra high-tech cameras and computer algorithms on the side. He isn’t testing to see if his car will stop in time; in fact, he doesn’t want it to stop at all. Christian is taking his autonomous cars to some of the most dangerous driving locales in the world, and he’s pushing them to their mechanical limits.



Christian isn’t looking to design a car that drives like a soccer mom. He’s looking to design a car that drives like Jeff Gordon. Racecar drivers are incredibly intuitive drivers, capable of feats behind the wheel that even the most veteran commuters wouldn’t dare dream of attempting. So why would you want a car that isn’t capable of taking risks? After all, isn’t having the ability to take risks the best way to learn to avoid them?
Researchers at Stanford think so, and they have worked day and night for several years now attempting to prove it. In 2010 they announced their autonomous Audi TTS would make the climb up Pikes Peak in Colorado. That raceway has been active for almost 100 years, and it’s known for its treacherous hairpin turns. The autobot? It performed spectacularly.
But just what goes into these smart cars that makes them so darn smart?

Road Tested, Soccer Mom Approved


That’s just what was available as of 2008. As of this writing we have technology ranging from self-parking cars to forward-facing cameras that sense obstructions in the road ahead (like unaware adolescents) and automatically apply the brakes. As all of this technology developed, it was no surprise to see autonomous cars being announced by every major car manufacturer in the world.
One such feature is tire pressure monitoring: a sensor in each wheel alerts the driver, via an audible chime or a light on the instrument panel, when a tire is low. More and more cars are also slated to be equipped with run-flat tires, which are capable of driving at speed for around fifty miles after being punctured. This cuts down on blowout-related accidents.
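The logic behind that dashboard light is about as simple as safety features get. Here’s a minimal sketch of the idea, assuming the commonly cited rule of warning when a tire drops more than 25% below its recommended pressure (real systems run on radio-linked wheel sensors and the body computer, not Python):

```python
def tire_pressure_warning(pressures_psi, recommended_psi=35.0, tolerance=0.25):
    """Return the positions of tires more than 25% below recommended pressure.

    pressures_psi maps a tire position name to its current reading in psi.
    """
    threshold = recommended_psi * (1.0 - tolerance)
    return [pos for pos, psi in pressures_psi.items() if psi < threshold]

readings = {"front-left": 34.5, "front-right": 25.0,
            "rear-left": 33.0, "rear-right": 36.0}
print(tire_pressure_warning(readings))  # ['front-right']
```

One underinflated tire trips the warning while the slightly-low ones stay quiet, which is exactly the kind of small, self-contained judgment these driver-assist features make all day long.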
Blind-spot assistance has come along now as well. Sensors on the sides and rear of the vehicle collect information about the vehicle’s surroundings; if the driver attempts a lane change into an occupied lane, the car quickly alerts the driver, thereby avoiding a collision. Sounds like that’s just one step away from I, Robot to me.
And what about Chrysler’s Electronic Roll Mitigation? Suppose you take a curve too sharply and the vehicle senses a high likelihood of a rollover; it will apply the brakes and adjust the throttle accordingly so as to avoid the incident. Once again, if bundled together as one neat little computerized package, all of this already sounds intuitively autonomous to me.
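For the curious, the physics behind a rollover check is high-school-level: lateral acceleration in a curve is v²/r, and a rigid vehicle starts to tip when that exceeds its static stability factor (half the track width divided by the center-of-gravity height) times g. This is a deliberately crude sketch of the idea; production systems like Chrysler’s work from live accelerometer and steering data, not a static formula:

```python
G = 9.81  # gravitational acceleration, m/s^2

def rollover_risk(speed_mps, curve_radius_m, track_width_m=1.6, cg_height_m=0.65):
    """Crude static rollover check: does cornering acceleration exceed
    the tip-over threshold for a vehicle with these dimensions?
    """
    lateral_accel = speed_mps ** 2 / curve_radius_m      # v^2 / r
    tip_threshold = (track_width_m / (2.0 * cg_height_m)) * G
    return lateral_accel > tip_threshold

# Taking a 30 m radius curve at 25 m/s (~56 mph) is asking for trouble,
# so the system would brake and cut throttle; 10 m/s is fine.
print(rollover_risk(25.0, 30.0))  # True
print(rollover_risk(10.0, 30.0))  # False
```

The track width and center-of-gravity height here are made-up but plausible sedan numbers; a tall SUV with a higher center of gravity would trip the threshold at much lower speeds.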
With backup cameras and automatic emergency-service alerts in the event of a collision, everything was already in place; the software just needed to come along to get these cars on the road. So are they safe? For the most part. They still require extensive field testing, stress testing, and more than likely many more upgrades to various systems before they’ll be ready for consumers.
But considering 3D mapping is going to be available on your Smartphone soon, nothing here should surprise you.

Virtual Reality: The “Home” Game


Google recently announced another project along the same lines as the technology that went into making their Smart cars autonomous. It’s called Project Tango, and no, it isn’t a “So You Think You Can Dance?” app. It’s actually a highly detailed program able to take the two-dimensional viewpoint of your Smartphone into the awesome 3D world.
We live in a 3D world, so it only makes sense that we should be able to capture the dimensional properties around us. Google thought so too. Using the same kind of technology as Microsoft’s Xbox 360 Kinect, along with a new 3D-vision chip called the Myriad 1 developed by Movidius, Project Tango enables the user to capture 3D simulated images of their environment.
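The core trick behind Kinect-style depth sensing, once you have a per-pixel depth map, is back-projecting each pixel into space using the standard pinhole camera model. Here’s a minimal sketch of that idea; the focal lengths and principal point below are made-up illustrative numbers, not Tango’s actual calibration:

```python
def depth_pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project one depth-camera pixel into a 3D point.

    (u, v) is the pixel location, depth_m its measured depth, and
    (fx, fy, cx, cy) are the camera intrinsics: focal lengths and
    principal point, all in pixel units.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    z = depth_m
    return (x, y, z)

# A pixel at the image center maps straight ahead at its measured depth.
print(depth_pixel_to_3d(320, 240, 2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0))
```

Do that for every pixel in every frame while tracking the phone’s own motion, and you can stitch the points into the full-room 3D models Tango is promising.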
This technology could be useful for a “myriad” (see what I did there…) of purposes, and Google is reaching out to hundreds of software developers worldwide, distributing Dev Kits containing the technology. From uses as simple as taking a full-scale image of one’s living room before going furniture shopping to the ultimate in fully immersive, one-of-a-kind virtual reality gaming experiences, the possibilities seem endless.
The point I’m trying to make is that if Google is capable of taking such a cutting edge kind of technology and scaling it down to be utilized in a Smartphone, then they should have no problem working out all the bugs in their Smart Cars. I can’t wait to see what the future holds for self-driving automobiles.
For now I’ll just have to keep my ear to the ground for updates on this as they come. As of right now there are varying estimates on release times for fully autonomous vehicles, ranging everywhere from 2015 to 2025. Personally, 2025 sounds much more probable to me, but we’ll see.
Thanks for reading everyone!

-Ryan Sanders


As always, if you would like to know more about any of the topics discussed above, you can do so by following any of the links below. Feel free to share this around on Facebook and Twitter.

-       Wiki entry on Lasers
-       Wiki entry on LIDAR