
Creating an ARCore powered indoor navigation application in Unity

Written by
Hannah Patronoudis
Project Manager

Imagine the following indoor navigation use case: you have a job interview with Raccoons, the best company in the world “ahem”, and you have already successfully travelled to the ‘Gaston Geenslaan’ in Leuven. However, the “Silicon Valley” of Leuven is rather big and there are only 5 minutes left to arrive at the interview on time. Following the “Blue Dot” revolution, you take out your AR-powered smartphone and get guided towards and into the office space of Raccoons.

What is indoor navigation?

When talking about indoor navigation, we talk about navigation technologies that do not rely on GPS, due to its limitations in large complexes and buildings. These technologies are used in various use cases, from finding your terminal with ease in an unknown airport, to locating the store you need in a mall with only 30 minutes to spare. Going to a grocery store you are not familiar with becomes a breeze with a product navigation app. The possibilities are endless.

How can it be established?

The concept of providing indoor navigation has been tackled by many using various types of technology. However, it has not yet been completely resolved, and one explanation for this is accuracy. An indoor navigation application needs to be able to guide a user through small corridors and rooms, which is not easy to do with the technology at hand. The application makes use of an indoor positioning system (IPS), which can use different types of sensory information to locate objects or people inside a large complex. All of these sensors behave optimally only under certain conditions, which makes it difficult to create a standard IPS for indoor navigation, like GPS for outdoor navigation. No efficient and perfect indoor positioning system has yet been developed that can be used for navigation. However, people have experimented with many different technologies to create an IPS, like WiFi and Bluetooth, but also magnetic positioning, dead reckoning, positioning based on visual markers, and combinations of these. Some need extra hardware to function, but others work out of the box.

We could go on and on about these different possibilities to create an ideal IPS, but that is not what this blog is about. However, if you are interested, many nice papers summarizing and comparing the available indoor location techniques can be found. This blog will instead explore a technology that has only fairly recently become available to the general public via our mobile phones, namely SLAM or Simultaneous Localisation And Mapping. To be more specific: the SLAM algorithm available in ARCore from Google.

Why use ARCore with SLAM for indoor navigation?

SLAM refers to the task of constructing an internal map of an unknown area whilst keeping track of one's position inside that area. This allows for an IPS that can be used without any preparation beforehand, in contrast to an IPS using WiFi or Bluetooth. For the latter systems, signal strengths need to be measured from access points or Bluetooth beacons at known positions in order to triangulate the position of the user afterwards. Not having to do much preparation beforehand is a huge advantage for SLAM, but there is also a great disadvantage: it has issues with tracking stability when there are many moving objects. This makes it unsuitable for very crowded places, e.g. an airport or a mall. SLAM uses a camera to find feature points, points that are distinct from their surroundings, in the environment. Feature points can be corners (often used), edge segments, interest points, and even regions. ARCore's SLAM algorithm makes use of other sensors, e.g. the accelerometer, to complement the obtained spatial information and reduce the intrinsic drift.

ARCore packs all this and more into an easy-to-use SDK. It handles motion tracking and environmental understanding for you without you having to do much. Together with the previously mentioned features, ARCore can be used to create a nice augmented reality navigation system that shows the way by manipulating reality.

How does it work?

Let's start off with a little demo:

Indoor navigation demo using ARCore in Raccoons headquarters

Unity is the development environment chosen for this project, because of its ease of use and NavMesh advantages (to be discussed later). The project consists of four big parts, namely the ARCore-based localisation, the QR-code repositioning, the navigation (NavMesh), and lastly the AR view. I assume that readers who want to recreate this project already have a basic knowledge of Unity and know how to set up a Unity project with ARCore. This project was developed for Android, but iOS is also possible. Only the pieces of code necessary to get the app working are shown (not always complete classes). Other code, e.g. GUI code and switch-view code, is not shown, but please do experiment yourself. Whilst reading, please keep in mind that this is not refactored code, but code written to create a prototype very fast.

ARCore localisation

The motion tracking and environmental understanding of ARCore will help us move the blue dot correctly according to our own movement.

Blue Dot indicator showing where we are on the map.

Let's start off by creating the map in Unity. Make a plane and place the image material (image file as albedo) onto the plane. Make sure the map is scaled to real life, e.g. if the distance between two desks is 1.74 meters, there should be 1.74 units between those two desks in Unity. This can be measured with a simple cube.
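
If you prefer to do this scaling in code rather than by eye, a minimal helper sketch could look like the following. The MapScaler name, the reference markers, and the 1.74 m value are illustrative assumptions, not part of the original project:

using UnityEngine;

// Hypothetical helper: scales the map plane so that the distance between two
// reference markers on the map matches a distance measured in the real world.
public class MapScaler : MonoBehaviour
{
    public Transform referenceA; // marker placed on e.g. the first desk
    public Transform referenceB; // marker placed on e.g. the second desk
    public float realDistanceMeters = 1.74f; // distance measured on site

    [ContextMenu("Scale map to real life")]
    void ScaleMap()
    {
        float mapDistance = Vector3.Distance(referenceA.position, referenceB.position);
        // after this, 1 Unity unit corresponds to 1 real-world meter
        transform.localScale *= realDistanceMeters / mapDistance;
    }
}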

Afterwards we only need a blue dot, which can be a simple cylinder, and the ARCore Device prefab. The blue dot has a camera as a child that looks straight down and renders its view to a raw image used as the minimap. On the ARCore Device gameobject, double click the session config and make sure it looks like below. You can change these settings afterwards to your own preference, depending on what you want to add. Check the meaning of these settings in the ARCore documentation.

Session config settings

On the First Person Camera child of the ARCore Device there is a Tracked Pose Driver script. For optimal localisation you should change the Tracking Type setting to "Position Only"; however, for having an AR navigation view you should keep it at "Rotation and Position". This is very unfortunate due to its big impact on the localisation precision, but for viewing AR objects it is necessary.
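
Since the app only needs "Rotation and Position" while the AR view is open, one option could be to switch the mode at runtime. Below is a sketch of that idea, assuming the Tracked Pose Driver from UnityEngine.SpatialTracking; the TrackingTypeSwitcher class and method name are my own, and I have not verified how well ARCore tolerates switching mid-session:

using UnityEngine;
using UnityEngine.SpatialTracking;

// Hypothetical runtime switch between the two Tracked Pose Driver modes:
// "Position Only" for more precise localisation on the minimap, and
// "Rotation and Position" whenever the AR navigation view is opened.
public class TrackingTypeSwitcher : MonoBehaviour
{
    public TrackedPoseDriver poseDriver; // the driver on the First Person Camera

    public void SetARViewActive(bool arViewActive)
    {
        poseDriver.trackingType = arViewActive
            ? TrackedPoseDriver.TrackingType.RotationAndPosition
            : TrackedPoseDriver.TrackingType.PositionOnly;
    }
}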

Scripts

Next up, a script should be made to translate real-life movement to the movement of the cylinder. This will be an alteration of the available "HelloARController" script. You can place it on an object of your choosing; in this case an empty gameobject was chosen. The fields and Start/Update functions will be different and look like this:

public Camera FirstPersonCamera;
public GameObject CameraTarget;
private Vector3 PrevARPosePosition;
private bool Tracking = false;

public void Start() {
    //set initial position
    PrevARPosePosition = Vector3.zero;
}

public void Update() {
    UpdateApplicationLifecycle();

    //move the person indicator according to position
    Vector3 currentARPosition = Frame.Pose.position;
    if (!Tracking) {
        Tracking = true;
        PrevARPosePosition = Frame.Pose.position;
    }
    //Remember the previous position so we can apply deltas
    Vector3 deltaPosition = currentARPosition - PrevARPosePosition;
    PrevARPosePosition = currentARPosition;
    if (CameraTarget != null) {
        // The initial forward vector of the sphere must be aligned with the
        // initial camera direction in the XZ plane.
        // We apply translation only in the XZ plane.
        CameraTarget.transform.Translate(deltaPosition.x, 0.0f, deltaPosition.z);
        // Set the pose rotation to be used in the CameraFollow script
        FirstPersonCamera.GetComponent<ArrowDirection>().targetRot = Frame.Pose.rotation;
    }
}

In the Update function the difference between the camera's previous and current position is calculated. This difference is used to update the position of the blue dot. The y value of the position is set to zero to make sure the blue dot does not start to fly or go below the map. Lastly, the rotation of the camera is used in the ArrowDirection script to make the arrow on the blue dot point in the correct direction.

public Quaternion targetRot; // The rotation of the device camera from Frame.Pose.rotation
public GameObject arrow;     // The direction indicator on the person indicator
public float rotationSmoothingSpeed = 10f; // smoothing factor (assumed value; tune to taste)

void LateUpdate() {
    Vector3 targetEulerAngles = targetRot.eulerAngles;
    float rotationToApplyAroundY = targetEulerAngles.y;
    float newCamRotAngleY = Mathf.LerpAngle(arrow.transform.eulerAngles.y,
        rotationToApplyAroundY, rotationSmoothingSpeed * Time.deltaTime);
    Quaternion newCamRotYQuat = Quaternion.Euler(0, newCamRotAngleY, 0);
    arrow.transform.rotation = newCamRotYQuat;
}

The piece of code shown above handles the rotation of the blue dot's arrow.

Credits

This section was inspired by another blog, written by Roberto Lopez Mendez. You can check out his blog if you want to see his perspective.

QR-code (re)positioning

After implementing the previous step, you will have a blue dot that accurately follows you around on the map. The only thing still needed is a start-position synchronisation, which is handled in this section.

QR-code representing a location in the office.

Using the ZXing library, QR codes can be scanned using the camera of the phone. In order to use the ZXing library in Unity, the zxing.unity.dll needs to be placed in the Plugins folder. The dll can be downloaded from here.

The QR codes are translated to simple strings, which are identical to the names of the gameobjects that represent the locations on the map. These empty gameobjects are placed at the wanted positions and are only used for their transform positions.

Start position

When the application starts, the ARCore device is turned off; only after scanning a QR code does the ARCore device turn on, so that localisation can happen. This means that first another way (not via ARCore) of capturing the camera images needs to be instantiated. The setup for capturing footage happens in the Start function of the ScanStart script (placed on an empty gameobject). In its Update function, every frame is checked for containing a QR code.

using UnityEngine;
using UnityEngine.UI;
using GoogleARCore;

public class ScanStart : MonoBehaviour {
    public GameObject ardevice; // ARCore device gameobject

    private bool camAvailable; // whether rendering with the camera is possible
    private WebCamTexture backCam; // used to obtain video from the device camera
    private Texture defaultBackground;

    public RawImage background; // where to render to
    public AspectRatioFitter fit; // fit rendered view to screen
    public ImageRecognition imgRec; // object used to access the method for setting the location

    //setup logic to capture camera video
    private void Start() {
        defaultBackground = background.texture;
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length == 0) {
            Debug.Log("No camera detected");
            camAvailable = false;
            return;
        }

        for (int i = 0; i < devices.Length; i++) {
            if (!devices[i].isFrontFacing) {
                backCam = new WebCamTexture(devices[i].name, Screen.width, Screen.height);
            }
        }

        if (backCam == null) {
            Debug.Log("unable to find backcam");
            return;
        }

        backCam.Play();
        background.texture = backCam;
        camAvailable = true;
    }

    //if the camera is set up, render the obtained images each frame
    private void Update() {
        if (!camAvailable) {
            return;
        }
        float ratio = (float)backCam.width / (float)backCam.height;
        fit.aspectRatio = ratio;
        float scaleY = backCam.videoVerticallyMirrored ? -1f : 1f;
        background.rectTransform.localScale = new Vector3(1f, scaleY, 1f);
        int orient = -backCam.videoRotationAngle;
        background.rectTransform.localEulerAngles = new Vector3(0, 0, orient);

        bool result = imgRec.StartPosition(backCam);
        //if a result was found, close this view and start the AR application
        if (result) {
            backCam.Stop(); // release the camera so ARCore can take over
            ardevice.GetComponent<ARCoreSession>().enabled = true;
            background.gameObject.SetActive(false);
            this.gameObject.SetActive(false);
        }
    }
}

The code above is mostly for receiving camera images, but the last couple of lines of the Update function handle the recognition of the QR codes. From the ImageRecognition script, the StartPosition function is used to check whether or not a QR code is in the current frame.

using System;
using UnityEngine;
using ZXing;
using GoogleARCore;

// fields of the ImageRecognition class (assumed declarations):
public GameObject calibrationLocations; // parent of the empty location gameobjects
public GameObject person; // the person indicator (blue dot)
private bool searchingForMarker = true; // whether we are still looking for a QR code

// is used at the start of the application to set the initial position
public bool StartPosition(WebCamTexture wt) {
    bool succeeded = false;
    try {
        IBarcodeReader barcodeReader = new BarcodeReader();
        // decode the current frame
        var result = barcodeReader.Decode(wt.GetPixels32(), wt.width, wt.height);
        if (result != null) {
            Relocate(result.Text);
            succeeded = true;
        }
    }
    catch (Exception ex) { Debug.LogWarning(ex.Message); }
    return succeeded;
}

// move the person indicator to the new spot
private void Relocate(string text) {
    text = text.Trim(); //remove spaces
    //find the scanned location and move the person to its position
    foreach (Transform child in calibrationLocations.transform) {
        if (child.name.Equals(text)) {
            person.transform.position = child.position;
            break;
        }
    }
    searchingForMarker = false;
}

Reposition

In order to compensate for intrinsic drift and the cold start (see later), QR codes can also be scanned whilst navigating, to snap the blue dot back to the correct location. Instead of using the camera image capture code from above, the frames delivered by the ARCore SDK can be used to search for the QR code. The following code can be used for this; it may be beneficial to do this asynchronously (see the sketch after the snippet).

using System.Runtime.InteropServices;
using GoogleARCore;
using ZXing;

byte[] imageByteArray = null;
int width;
int height;
using (var imageBytes = Frame.CameraImage.AcquireCameraImageBytes())
{
    if (!imageBytes.IsAvailable)
    {
        return;
    }
    int bufferSize = imageBytes.YRowStride * imageBytes.Height;
    imageByteArray = new byte[bufferSize];
    // copy the grayscale (Y) plane of the camera image
    Marshal.Copy(imageBytes.Y, imageByteArray, 0, bufferSize);
    width = imageBytes.Width;
    height = imageBytes.Height;
}
IBarcodeReader barcodeReader = new BarcodeReader();
var result = barcodeReader.Decode(imageByteArray, width, height,
    RGBLuminanceSource.BitmapFormat.Gray8);
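
As a sketch of the asynchronous variant: the pixel copy must stay on the main thread (ARCore's image buffer is only valid inside the using block), but the ZXing decode itself can run on a worker thread. The AsyncQrDecoder class below is my own illustration, not part of the original project:

using System.Threading.Tasks;
using UnityEngine;
using ZXing;

// Hypothetical asynchronous QR decoding: hand the copied grayscale buffer to a
// worker thread and poll for the result on the main thread.
public class AsyncQrDecoder : MonoBehaviour
{
    private Task<Result> decodeTask;

    // call this with the buffer copied from AcquireCameraImageBytes()
    public void TryDecodeAsync(byte[] grayBytes, int width, int height)
    {
        if (decodeTask != null && !decodeTask.IsCompleted) return; // one decode at a time
        decodeTask = Task.Run(() =>
        {
            IBarcodeReader reader = new BarcodeReader();
            return reader.Decode(grayBytes, width, height,
                RGBLuminanceSource.BitmapFormat.Gray8);
        });
    }

    private void Update()
    {
        if (decodeTask == null || !decodeTask.IsCompleted) return;
        var result = decodeTask.Result;
        decodeTask = null;
        if (result != null)
        {
            Debug.Log("QR code found: " + result.Text); // e.g. call Relocate(result.Text)
        }
    }
}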

Unity NavMesh navigation

Having precise localisation, we proceed to another important aspect of indoor navigation, namely pathfinding. Finding an optimal route to a destination can be achieved in a couple of ways. You can create optimal routes beforehand and take the route closest to where the person is standing (beneficial for the AR navigation view). Another possibility is making a graph model of the outline of the map and running an A* algorithm over it. Or, when using Unity, you can make use of NavMesh components. With the help of NavMesh components, you can indicate which surfaces are walkable, and pathfinding can be done easily using built-in functions. This is often used in Unity for making NPCs walk around an environment, but it is also perfect for indoor navigation. A nice tutorial on using NavMesh can be found here.

NavMesh walkable area is indicated in blue.

Script

Using the tutorial mentioned before, the following NavigationController script has been made. When a destination is set, it constantly updates the path from the blue dot to that destination. A line renderer is used to indicate the calculated path on the map. Place the line renderer on the object that holds this script (e.g. an empty gameobject) and give it a colored material. The trigger and its instantiation in the setDestination function are for the last section. The list of transforms holds empty gameobjects placed on the map that represent the possible destinations.

using UnityEngine;
using UnityEngine.AI;

//class that handles all navigation
public class NavigationController : MonoBehaviour
{
    public GameObject trigger; // trigger to spawn and despawn AR arrows
    public Transform[] destinations; // list of destination positions
    public GameObject person; // person indicator
    private NavMeshPath path; // current calculated path
    private LineRenderer line; // line renderer to display the path
    public Transform target; // current chosen destination
    private bool destinationSet; // whether a destination is set

    //create initial path, get line renderer
    void Start()
    {
        path = new NavMeshPath();
        line = transform.GetComponent<LineRenderer>();
        destinationSet = false;
    }

    void Update()
    {
        //if a target is set, calculate and update the path
        if (target != null)
        {
            NavMesh.CalculatePath(person.transform.position, target.position,
                NavMesh.AllAreas, path);
            //lost path due to standing above an obstacle (drift)
            if (path.corners.Length == 0)
            {
                Debug.Log("Try moving away from obstacles (optionally recalibrate)");
            }
            line.positionCount = path.corners.Length;
            line.SetPositions(path.corners);
            line.enabled = true;
        }
    }

    //set the current destination and create a trigger for showing AR arrows
    public void setDestination(int index)
    {
        target = destinations[index];
        GameObject.Instantiate(trigger, person.transform.position,
            person.transform.rotation);
    }
}
What the gameobject should look like (add your own destinations).
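
One way to soften the "lost path" case in the Update function above is to snap the drifted blue-dot position onto the nearest point of the NavMesh before calculating the path. Below is a sketch of that idea using the standard NavMesh.SamplePosition call; the helper name and the 2-meter search radius are assumptions:

using UnityEngine;
using UnityEngine.AI;

// Hypothetical drift mitigation: project the (possibly drifted) person position
// onto the nearest walkable point before running CalculatePath.
private Vector3 SnapToNavMesh(Vector3 position)
{
    NavMeshHit hit;
    if (NavMesh.SamplePosition(position, out hit, 2.0f, NavMesh.AllAreas))
    {
        return hit.position; // nearest walkable point within 2 meters
    }
    return position; // no walkable point nearby; keep the original position
}

// in Update: NavMesh.CalculatePath(SnapToNavMesh(person.transform.position),
//                                  target.position, NavMesh.AllAreas, path);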

Showing the path in augmented reality

Lastly, we arrive at the last section of our indoor navigation app: showing the route to take using augmented reality. This was the part where I struggled the most. One pointer for you, the reader, when you want to place 3D objects in an environment you can walk around in: make sure the Tracking Type option "Rotation and Position" is selected in the Tracked Pose Driver! If "Position Only" is selected you will get better localisation, but 3D objects will not stay in place and are barely visible.

Idea

The principle achieved in this section is an arrow that spawns in front of the user and points in the direction the user needs to go, as soon as a destination is chosen. There is a collider around the arrow, and every time the blue dot exits the collider, the previous arrow gets deleted and a new one appears in front of the user at the correct angle. The old arrow needs to be deleted, because otherwise you would still see it through walls, for example. A sketch of how the trigger prefab could be set up follows below.
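
For completeness, here is one possible setup for the NavTrigger prefab; the NavTriggerSetup name and the 1.5-meter radius are my own assumptions. Keep in mind that Unity only fires OnTriggerEnter/Exit when at least one of the two objects involved has a Rigidbody, so the blue dot can carry a kinematic one:

using UnityEngine;

// Hypothetical one-time setup for the NavTrigger prefab: a sphere-shaped
// trigger volume around the spawn point of the AR arrow.
public class NavTriggerSetup : MonoBehaviour
{
    void Awake()
    {
        var col = gameObject.AddComponent<SphereCollider>();
        col.isTrigger = true; // fire OnTriggerEnter/Exit instead of colliding
        col.radius = 1.5f;    // assumed radius in meters; tune to taste
    }
}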

Calculate angle and place arrow

In the UpdateNavigation script below, you see two gameobjects and one transform that help in finding the angle to point at: the arrowHelper, an invisible cube that is always positioned a couple of units in front of the point of the arrow indicator on the blue dot; the second point of the line renderer (the route to follow); and lastly the transform of the script itself (you place it on the blue dot). The second point of the line renderer is needed, because the first one equals the blue dot's position. From these three transforms we obtain three 2D points that can be used to calculate an angle (0-360°, using atan2) between them.

Arrow pointing to destination.

Afterwards, the transform of the child camera of the ARCore Device prefab is used to place the arrow in front of the user with the use of an anchor. An anchor makes sure that the object stays in this position whilst the user is moving. Lastly, the arrow is rotated to match the calculated angle.

using UnityEngine;
using GoogleARCore;

//used to update the AR arrows using colliders
public class UpdateNavigation : MonoBehaviour
{
    public GameObject trigger; // collider to change arrows
    public GameObject indicator; // arrow prefab to spawn
    public GameObject arcoreDeviceCam; // AR camera
    public GameObject arrowHelper; // box facing the arrow of the person indicator, used to calculate the spawned AR arrow direction
    public LineRenderer line; // line renderer used to calculate the spawned AR arrow direction
    private Anchor anchor; // anchor spawned when putting something AR on screen
    private bool hasEntered; // used for OnTriggerEnter, make sure it happens only once
    private bool hasExited; // used for OnTriggerExit, make sure it happens only once

    private void Start()
    {
        hasEntered = false;
        hasExited = false;
    }

    private void Update()
    {
        hasEntered = false;
        hasExited = false;
    }

    //what to do when entering a collider
    private void OnTriggerEnter(Collider other)
    {
        //if it is a NavTrigger, calculate the angle and spawn a new AR arrow
        if (other.name.Equals("NavTrigger(Clone)") && line.positionCount > 0)
        {
            if (hasEntered)
            {
                return;
            }
            hasEntered = true;

            //logic to calculate the arrow angle
            Vector2 personPos = new Vector2(this.transform.position.x,
                this.transform.position.z);
            Vector2 personHelp = new Vector2(arrowHelper.transform.position.x,
                arrowHelper.transform.position.z);
            Vector3 node3D = line.GetPosition(1);
            Vector2 node2D = new Vector2(node3D.x, node3D.z);

            float angle = Mathf.Rad2Deg * (Mathf.Atan2(personHelp.y - personPos.y,
                personHelp.x - personPos.x) - Mathf.Atan2(node2D.y - personPos.y,
                node2D.x - personPos.x));

            // position the arrow a bit before the camera and a bit lower
            Vector3 pos = arcoreDeviceCam.transform.position +
                arcoreDeviceCam.transform.forward * 2 +
                arcoreDeviceCam.transform.up * -0.5f;

            // rotate the arrow a bit
            Quaternion rot = arcoreDeviceCam.transform.rotation *
                Quaternion.Euler(20, 180, 0);

            // create a new anchor
            anchor = Session.CreateAnchor(new Pose(pos, rot));

            //spawn the arrow
            GameObject spawned = GameObject.Instantiate(indicator,
                anchor.transform.position, anchor.transform.rotation,
                anchor.transform);

            // apply the calculated angle to the spawned arrow
            spawned.transform.Rotate(0, angle, 0, Space.Self);
        }
    }

    //what to do when exiting a collider
    private void OnTriggerExit(Collider other)
    {
        //if it is a NavTrigger, delete the anchor and arrow and create a new trigger
        if (other.name.Equals("NavTrigger(Clone)"))
        {
            if (hasExited)
            {
                return;
            }
            hasExited = true;
            Destroy(GameObject.Find("NavTrigger(Clone)"));
            Destroy(GameObject.Find("Anchor"));
            GameObject.Instantiate(trigger, this.transform.position,
                this.transform.rotation);
        }
    }
}

Can it be improved?

To answer it shortly: yes! This project was a learning experience during a month-long internship at Raccoons. After first doing a literature study to better understand indoor navigation, I quickly understood that there is no single right way to implement indoor navigation. Every technique has its pros and cons. That is why I decided to try out one of the newer techniques, by which I do not mean SLAM itself, but using only ARCore. I learned a lot in this month and wish I could have done more, for example:

  • Fixing the accuracy issue after switching from "Position Only" to "Rotation and Position" Tracking Type.
  • Optimizing the QR repositioning.
  • Trying to figure out a way to get rid of the cold start created by ARCore's SLAM. I have found no way to import an existing point cloud in order to make the localisation more optimal during the first run of the application. It makes no sense that you first have to walk around an unknown area for a while before your movement is accurately represented on the map. For the moment this is the biggest downfall of this indoor navigation project. In the beginning you rely mostly on dead reckoning, with some drift, and try to counter this with QR repositioning. Having a hot start from an existing point cloud would make the QR repositioning unnecessary and benefit the application.
  • Making a plugin to automatically create a fitting NavMesh from a map that follows certain rules.
  • Making an augmented reality line to follow, instead of reappearing arrows. The trick is to not show the line through walls.

And a lot more! Thank you for reading, and I hope you have a great time experimenting with this project.
