The intention of this project is to build an AR foundation that lets a warehouse set up virtual navigation using augmented reality. In theory, a worker could pull out their phone and have it quickly navigate them to the box they need to find.
This preview shows off my AR manager spawning, interacting with, and destroying AR objects seamlessly.
What’s done so far?
So far I’ve been able to architect an AR object scene manager that lets me quickly create and manage AR objects within the scene without needing to touch any AR Foundation code. In other words, I have wrapped all of the plugin’s features in my own managers and set them up to be managed correctly (located within ARScenemanager.cs). This will let me focus on building the front-end application without worrying about the AR Foundation code.
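To give a feel for the wrapping idea, here is a minimal sketch of what such a manager might look like. This is illustrative only, not the actual ARScenemanager.cs: the class name, `boxMarkerPrefab` field, and method names are all assumptions. The point is that front-end code talks to this manager by ID and never touches AR Foundation types directly.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch only -- not the real ARScenemanager.cs.
// Front-end code spawns and removes AR objects by string ID and never
// sees the underlying plugin types.
public class ARObjectManager : MonoBehaviour
{
    [SerializeField] private GameObject boxMarkerPrefab; // assumed prefab
    private readonly Dictionary<string, GameObject> spawned =
        new Dictionary<string, GameObject>();

    // Spawn (or replace) an AR object at a world-space position.
    public void Spawn(string id, Vector3 position)
    {
        Remove(id);
        spawned[id] = Instantiate(boxMarkerPrefab, position, Quaternion.identity);
    }

    // Destroy a tracked object if it exists.
    public void Remove(string id)
    {
        if (spawned.TryGetValue(id, out var go))
        {
            Destroy(go);
            spawned.Remove(id);
        }
    }
}
```

Keeping the plugin behind a seam like this means the front end only ever deals in IDs and positions, so the AR backend could change without touching UI code.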
I’ve begun work on setting up the mixed reality scene. I’m doing this by building a desktop app that lets the user place boxes on a warehouse map; those placements are then represented on the phone when the worker pulls it out inside the warehouse.
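One way the desktop and phone apps could share those placements is a small serializable layout format. This is a hedged sketch: the type names and the choice of Unity’s built-in `JsonUtility` are my assumptions, not the project’s actual data model.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: a layout the desktop app could save and the
// phone app could load. All names here are assumptions.
[Serializable]
public class BoxPlacement
{
    public string boxId;
    public float x; // map coordinates in meters, relative to the map origin
    public float z;
}

[Serializable]
public class WarehouseLayout
{
    public List<BoxPlacement> boxes = new List<BoxPlacement>();
}

public static class LayoutIO
{
    // Unity's built-in JsonUtility keeps both apps dependency-free.
    public static string Save(WarehouseLayout layout) =>
        JsonUtility.ToJson(layout);

    public static WarehouseLayout Load(string json) =>
        JsonUtility.FromJson<WarehouseLayout>(json);
}
```

A shared format like this would let the desktop app write a file (or push to a server) and the phone app rebuild the same scene from it.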
Current Roadblocks / Issues
There are two major issues I’m currently running into. The first is that there is no easy way to track a phone’s location within the warehouse. My current solution is to assume the user always starts the application in a specific place (a taped X on the ground marks where they need to stand in the physical warehouse). This is a quick-and-dirty solution, though, and I’m going to look into improving it in the future.
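The taped-X approach boils down to a coordinate conversion, which can be sketched in a few lines. This assumes the user also faces a known direction when starting; otherwise a yaw rotation would be needed on top of the offset. The names and the example start position are hypothetical.

```csharp
using UnityEngine;

// Illustrative sketch of the "taped X" calibration: if the app always
// starts with the phone on a known spot, the AR session origin IS that
// spot, so map-space points become session-space points by subtracting
// the start position.
public static class OriginCalibration
{
    // Map-space position of the taped X, in meters (assumed known).
    public static Vector3 startOnMap = new Vector3(2f, 0f, 5f);

    // Convert a map-space point into AR session space. Assumes the user
    // faces the map's forward axis at launch; a known heading offset
    // would otherwise require rotating the result as well.
    public static Vector3 MapToSession(Vector3 mapPoint)
    {
        return mapPoint - startOnMap;
    }
}
```

The fragility is visible right in the math: every spawned object inherits any error in where (and which way) the user actually stood, which is why a better localization scheme is worth pursuing.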
The second is that the AR world will shift if the user flicks their phone around too much. I believe Unity’s ARCore plugin has answers for this, or I may need to rework my architecture. I need to do more research into how Unity’s plugin handles the machine-vision end of AR and scene creation before I go any further.
You can check out all of my code on my GitHub.