Have you ever used Google Maps to visit somewhere you have never been before? It says your destination is on the right, but you find it on your left, or it tells you to turn left when there is no left turn. It's quite annoying, right? And maybe you have been through those embarrassing moments when you try to find something in an airport or a shopping mall that is new to you: you go back and forth again and again while others give you strange looks, as if you were a thief or something. Seriously, sometimes it feels like these places were deliberately designed like a maze so that we spend more time inside and buy more.
To deal with this kind of problem and improve location accuracy, Google is finally rolling out a new update called "Live View". Live View uses AR (Augmented Reality) cues to guide you to your destination. If you are in an airport, it can show you where the nearest elevators and escalators are, along with your gate, platform, baggage claim, and check-in counters. Or if you are in a shopping mall and need to pick something up, you can use Live View to see what floor a store is on and how to get there, so you can get in and out in a snap.
Problems with the previous navigation method
Before learning about Live View, let's first talk about the current method of navigation: GPS (Global Positioning System). GPS relies on measuring the delay of radio signals from multiple dedicated satellites to determine a precise location. But this method doesn't work well in dense urban areas like New York or San Francisco: with low visibility of the sky and signals reflecting off buildings, it becomes hard to pinpoint an accurate geographic location. This can place the pin on the wrong side of the street, or even a few blocks away.
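The core idea, delay measurements turned into distances and then intersected, can be sketched in a simplified 2-D form. This is an illustrative toy (real GPS works in 3-D with clock-bias corrections, and the anchor coordinates and delays below are made up), but it shows how ranges from multiple known transmitters pin down a position:

```python
import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve the 2-D intersection of three range circles (toy GPS fix)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Linearize by subtracting the first circle equation from the others.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = x2**2 - x1**2 + y2**2 - y1**2 - d2**2 + d1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = x3**2 - x1**2 + y3**2 - y1**2 - d3**2 + d1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Each range comes from a signal delay: distance = speed_of_light * delay.
C = 299_792_458  # m/s
delays = [d / C for d in (5.0, math.sqrt(65), math.sqrt(45))]  # fabricated
dists = [C * t for t in delays]
x, y = trilaterate((0, 0), (10, 0), (0, 10), *dists)
print(round(x, 6), round(y, 6))  # → 4 known anchors would over-determine this; here: 3.0 4.0
```

A reflected ("multipath") signal makes one of the delays too long, which is exactly why the solved position drifts in street canyons.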
Another problem with GPS is that it only detects your location, not your phone's orientation, which leads to incorrect direction estimates. The phone's sensors can help fetch the current orientation, but they are easily influenced by magnetic objects such as cars, pipes, buildings, and even the electrical wires inside the phone itself, causing errors of up to 180 degrees.
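A minimal sketch of why magnetic interference is so damaging: a compass heading is just the angle of the measured horizontal field, so adding a constant bias from nearby metal (a "hard-iron" distortion; the numbers here are invented for illustration) can flip the heading almost completely:

```python
import math

def heading_deg(mx, my):
    """Compass heading from horizontal magnetometer components (0 = north)."""
    return math.degrees(math.atan2(my, mx)) % 360

# Clean field pointing due north (illustrative microtesla values).
true_mx, true_my = 30.0, 0.0
print(heading_deg(true_mx, true_my))  # → 0.0

# A nearby metal object adds a bias to the measurement...
bias_x, bias_y = -60.0, 5.0
print(heading_deg(true_mx + bias_x, true_my + bias_y))  # → ~170.5, nearly a 180° error
```

This is why the blue direction beam in Maps sometimes points the wrong way entirely until you walk a few steps.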
A newer approach that improves location accuracy is global localization. Global localization combines VPS (Visual Positioning System) and Street View imagery with machine learning to identify your location and orientation with greater precision, using your phone's camera together with a combination of techniques.
VPS (Visual Positioning System)
VPS determines your location using imagery data rather than GPS signals. VPS first builds a map by taking multiple photos of a known location, then analyzes key features, such as the outline of a building or bridge, to create a large-scale, quickly searchable index of those visual features. To localize the device, VPS compares the features of the image you capture against those in the VPS index.
You can think of this like face recognition, which first analyzes and stores your facial features and then, when asked to recognize a face, searches for the target's features in its database and returns the result. Similarly, VPS stores the key features of a location, such as buildings and bridges, and uses them to recognize the place.
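The index-then-match flow described above can be sketched with toy 2-D "descriptors" (real systems use high-dimensional feature vectors and approximate nearest-neighbor search; the places and numbers below are made up): each query feature votes for the place whose stored feature it most resembles.

```python
from collections import Counter

def build_index(places):
    """Flatten {place: [descriptor, ...]} into one searchable list."""
    return [(desc, place) for place, descs in places.items() for desc in descs]

def dist2(a, b):
    """Squared Euclidean distance between two descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def localize(index, query_descs):
    """Each query feature votes for its nearest stored feature's place."""
    votes = Counter(min(index, key=lambda e: dist2(e[0], q))[1]
                    for q in query_descs)
    return votes.most_common(1)[0][0]

# Toy "visual features" for two known locations.
index = build_index({
    "bridge":  [(0.9, 0.1), (0.8, 0.2)],
    "station": [(0.1, 0.9), (0.2, 0.8)],
})
print(localize(index, [(0.85, 0.15), (0.9, 0.05)]))  # → bridge
```

The same vote-counting idea is why more distinctive, well-lit scenes localize better: they yield more unambiguous matches.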
However, the accuracy of localization through VPS is highly affected by the quality of the imagery and by the location itself, which remains a concern that still needs a solution.
Google Street View
Google has also connected Street View data (imagery of streets and famous places) from over 93 countries with VPS, providing strong reference points for triangulation (estimating your position from bearings to multiple known points), which helps with better positioning.
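Triangulation from reference points can be illustrated in 2-D: given two landmarks at known coordinates and the bearing from each toward the unknown position, the position is the intersection of the two bearing rays. The landmarks and bearings here are invented for illustration:

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing rays from known reference points (2-D)."""
    (x1, y1), (x2, y2) = p1, p2
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t1*d1 == p2 + t2*d2 for t1 via Cramer's rule.
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    t1 = ((x2 - x1) * (-d2[1]) + d2[0] * (y2 - y1)) / det
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])

th1 = math.atan2(3, 4)    # bearing from landmark A toward the user
th2 = math.atan2(3, -6)   # bearing from landmark B toward the user
x, y = triangulate((0, 0), th1, (10, 0), th2)
print(round(x, 6), round(y, 6))  # → 4.0 3.0
```

With Street View, the "landmarks" are indexed scene features whose positions are already known, so matching just two or three of them constrains both where you are and which way you are facing.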
Augmented Reality in Google Maps
To get a good match, we also need to filter out temporary features and scene elements from the image and focus on permanent structures that don't change over time. For example, at localization time the phone may capture a picture of a tree or a building that looks quite different from how the scene looked when the Street View imagery was collected.
Because of that, a core ingredient of this new approach is using machine learning to decide which features to pay attention to: prioritizing features that are likely to be permanent parts of the scene while ignoring things like changing light, moving objects, and temporary construction.
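A minimal sketch of this filtering step, assuming a hypothetical learned model has already assigned each detected feature a "permanence" score (the feature names and scores below are invented): only high-scoring features survive to the matching stage.

```python
def filter_permanent(features, threshold=0.5):
    """Keep features a (hypothetical) model scored as likely permanent."""
    return [f for f in features if f["permanence"] >= threshold]

detections = [
    {"name": "building corner", "permanence": 0.95},
    {"name": "parked car",      "permanence": 0.10},
    {"name": "bridge arch",     "permanence": 0.90},
    {"name": "shadow edge",     "permanence": 0.05},
]
print([f["name"] for f in filter_permanent(detections)])
# → ['building corner', 'bridge arch']
```

Dropping the parked car and the shadow means the query no longer depends on things that weren't there when the reference imagery was captured.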
This new method also uses AR (Augmented Reality) for more precision and ease of use: it overlays directions on top of Google Maps when you are in walking navigation mode. The results are satisfying even at this early stage, but the system still faces challenges in conditions like late-night darkness, snowstorms, and torrential downpours.
Data Scientist with 3+ years of experience in building data-intensive applications in diverse industries. Proficient in predictive modeling, computer vision, natural language processing, data visualization etc. Aside from being a data scientist, I am also a blogger and photographer.