AI for context-aware 3D interactions in AR
 
 

FREQUENTLY ASKED QUESTIONS

What is Smart AR?

Smart AR refers to AR experiences that use contextual information about the physical world surrounding the camera. Selerio provides a console and various SDKs to guide you through creating your own Smart AR experiences.

Is Smart AR part of the ARCloud?

Yes. The ARCloud is defined as a real-time, persistent, 1-to-1 digital map of the physical world. By definition, Smart AR is a major component of this, if not the most important one: users can only have contextual experiences if the AR device has a knowledge graph of the physical world (i.e., AR devices need to be smart).

What can I do with Smart AR?

You can use Smart AR to (1) create AR filters that interact with specific physical objects, (2) replace physical objects with more interesting virtual alternatives (for retailers, this means a limitless display of their products), and (3) have virtual objects occluded by real objects.
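The occlusion case, (3), comes down to a per-pixel depth comparison: a virtual fragment is only drawn if it is closer to the camera than the reconstructed real-world surface at that pixel. A minimal, language-agnostic sketch of the idea (the function name and depth values are illustrative, not part of the Selerio API):

```python
# Occlusion sketch: draw a virtual fragment only when it is nearer to the
# camera than the real surface at the same pixel. Depths are in meters.
def visible(virtual_depth, real_depth):
    return virtual_depth < real_depth

print(visible(1.2, 2.0))  # True: virtual object is in front, draw it
print(visible(2.5, 2.0))  # False: the real object occludes it
```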

Who else is building Smart AR?

Many popular ARCloud services mention Smart AR as a critical piece of the AR infrastructure and have added it to their future roadmaps. However, only Magic Leap has shown glimpses of Smart AR solutions with the Magic Leap One. That product has a high barrier to entry, and questions remain as to whether it will be scalable and/or cross-platform.

Selerio Smart AR is available right now, natively scalable and cross-platform.

How do I get started?

The best way to get started with Smart AR is to follow the quickstart tutorial in the Developer Console. The tutorial guides you through creating a simple AR experience where your virtual object interacts with a physical object. After completing the tutorial, you can get more information on the full capabilities via the API Reference.

What are smart anchors?

Smart anchors represent physical objects that Selerio has indexed and detected in the camera feed. The smart anchor of an object consists of a semantic label, a 3D shape, and a world transform matrix, which specifies position, orientation, and scale. Virtual objects are placed relative to these anchors in order to interact with the physical world.
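Conceptually, a smart anchor bundles those three pieces together, and placing a virtual object means expressing its pose relative to the anchor's world transform. The sketch below is purely illustrative (the type and field names are assumptions, not the Selerio SDK), using a row-major 4x4 transform:

```python
from dataclasses import dataclass

# Hypothetical sketch of what a smart anchor carries; names are
# illustrative, not actual Selerio SDK types.
@dataclass
class SmartAnchor:
    label: str        # semantic label, e.g. "table"
    mesh: list        # 3D shape (e.g. vertices of a coarse mesh)
    transform: list   # 4x4 world transform: position, orientation, scale

def apply_transform(transform, point):
    """Apply a row-major 4x4 transform to a 3D point (homogeneous coords)."""
    p = (point[0], point[1], point[2], 1.0)
    return tuple(sum(transform[r][c] * p[c] for c in range(4)) for r in range(3))

# An anchor with identity rotation and translation (2, 0, -1).
anchor = SmartAnchor(
    label="table",
    mesh=[],
    transform=[[1, 0, 0, 2.0],
               [0, 1, 0, 0.0],
               [0, 0, 1, -1.0],
               [0, 0, 0, 1.0]],
)

# Place a virtual object 0.5 m above the anchor in the anchor's own frame.
above = apply_transform(anchor.transform, (0.0, 0.5, 0.0))
print(above)  # (2.0, 0.5, -1.0)
```

Because the offset is expressed in the anchor's frame, the virtual object stays correctly positioned relative to the physical object wherever that object sits in the world.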

What objects are supported by the smart anchors?

Much as Google indexes web pages, Selerio indexes objects in the physical world, which are then used to create smart anchors. For visibility, we expose all the objects indexed by Selerio so far here: 3D Index.

How do I add recognition for new objects?

Selerio is continuously indexing the physical world as a background process. If you would like to jump the queue and index a given set of 3D objects, please contact us directly.

Do you track smart anchors?

This is not yet done automatically. In the scenario of a moving car, an anchor attached to the car won't follow it; instead, the app must re-detect the anchor repeatedly to simulate tracking. Future releases will automate this.
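The workaround looks like a simple per-frame loop: re-run detection each frame and re-position the virtual content from the fresh transform. In this sketch, `detect_anchor` is a stand-in for the per-frame SDK call (it is not the actual Selerio API) and is stubbed to simulate a car moving along x:

```python
# Hypothetical per-frame re-detection loop; `detect_anchor` is a stub
# standing in for the SDK call, simulating a car moving 0.1 m/frame in x.
def detect_anchor(frame):
    return {"label": "car", "position": (0.1 * frame, 0.0, 0.0)}

def run(frames):
    positions = []
    for frame in range(frames):
        anchor = detect_anchor(frame)         # re-detect instead of tracking
        positions.append(anchor["position"])  # move virtual content here
    return positions

print(run(3))  # [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0)]
```

Once automatic tracking ships, this loop should collapse to a single anchor subscription instead of repeated detection calls.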

Can I save the smart anchors?

This is also on the Selerio product roadmap. Please drop us a line to upvote this feature and help move it up the backlog.

What is the tech behind this?

The core technology behind Smart AR was born at Cambridge University and funded by Google Research. It uses deep learning to translate what the camera sees into a digital data structure that applications can interact with. The deep learning architecture is fully-owned IP, unique to Selerio.

Is it cross-platform?

Yes. Swift is currently supported, and we are working hard to support all the popular platforms, including Unity and Android. If your platform is not supported, please drop us a note.

Do you have some examples of AR apps built with Smart Anchors?

Yes, they are available on the developer console.