/ Ideas / Making GPS accurate in dense forests using sensor fusion

This idea was proposed in 2020 as a good starter project and was completed by Keshav Sivakumar. It was supervised by Anil Madhavapeddy, Srinivasan Keshav and David A Coomes as part of my Trusted Carbon Credits project.

Summary

Current GPS solutions are either very expensive ($8k+) or have relatively poor accuracy (10m+) under dense forest canopy. This project explores how to determine location accurately while travelling on foot under canopy, where a usable GPS signal is often unavailable.

We observe that many SLAM algorithms exist these days, but most recent research optimises for monocular cameras, whereas we have the luxury of using cameras built specifically for this purpose. Many depth and fisheye cameras specialise in localisation and mapping use cases. We chose the Intel T265 because it is part of a widely used product family and comes with a usable library (librealsense). It also provides a good benchmark for baseline VSLAM; there is scope for greater accuracy using depth cameras or LIDAR, but the T265 is the cheapest, easiest option among current industry-grade solutions. Interestingly, even the latest iPad Pro now has LIDAR built in, so this is a solid approach!
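To make the sensor-fusion idea concrete, here is a minimal sketch of fusing intermittent, noisy GPS fixes with relative odometry (such as the pose track a T265-style tracker produces) using a one-dimensional Kalman filter. This is an illustration only, not the project's actual pipeline: the class name, noise values, and 1D simplification are all assumptions for exposition.

```python
# Illustrative sketch: fuse noisy GPS fixes with visual-inertial odometry
# using a 1D Kalman filter. All names and noise figures here are
# hypothetical, chosen only to show the weighting behaviour.

class FusedPosition:
    def __init__(self, x0, var0):
        self.x = x0        # position estimate (metres along a track)
        self.var = var0    # variance of the estimate

    def predict(self, dx, odom_var):
        # Dead-reckon forward using an odometry displacement;
        # uncertainty grows with each step.
        self.x += dx
        self.var += odom_var

    def update_gps(self, z, gps_var):
        # Blend in a GPS fix, weighted by relative uncertainty
        # (the Kalman gain k).
        k = self.var / (self.var + gps_var)
        self.x += k * (z - self.x)
        self.var *= (1 - k)

fused = FusedPosition(x0=0.0, var0=1.0)
fused.predict(dx=1.0, odom_var=0.01)    # odometry: moved ~1 m, low noise
fused.update_gps(z=1.5, gps_var=100.0)  # noisy under-canopy fix, high noise
print(round(fused.x, 3))                # the noisy fix barely moves the estimate
```

Under canopy the GPS variance dominates, so the filter leans on odometry and only gently corrects drift when fixes arrive; in the open, a small GPS variance would pull the estimate strongly toward the fix.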

The project was completed successfully (remotely, due to the pandemic), with details available in the PDF writeup and slides, and code notebooks on GitHub.

Related Ideas