Project Tango allows devices to use visual cues to navigate and understand the world around them. Using area learning, a Project Tango device can remember the visual features of the area it is moving through and recognize them when it sees them again. These features can be saved in an Area Description File (ADF) for later use. With an ADF loaded, a Project Tango device gains two new capabilities: improved motion tracking and localization.

Localization lets users synchronize their movement within the Unreal world, as reported by the Tango Motion component, with their position in a previously recorded real-world area. This enables features such as anchoring an AR tree in the center of a real-world room, or letting several users share the same mixed-reality space.

The Area Learning Component exposes this functionality through Blueprint nodes.
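To illustrate what loading an ADF and localizing mean at the API level, here is a short C sketch against the Tango C API documented at the links below. It is a device-dependent sketch, not part of this plugin: the UUID string is a placeholder, error handling is minimal, and the plugin's Blueprint nodes wrap calls of this kind rather than exposing them directly.

```c
#include <stdbool.h>
#include <tango_client_api.h>  /* Tango C API header */

/* Configure the Tango service to load a previously saved ADF.
 * The UUID passed in is a placeholder for a real Area Description ID. */
static bool load_adf(const char* adf_uuid) {
  TangoConfig config = TangoService_getConfig(TANGO_CONFIG_DEFAULT);
  if (config == NULL) return false;
  return TangoConfig_setString(config, "config_load_area_description_UUID",
                               adf_uuid) == TANGO_SUCCESS;
}

/* A device counts as "localized" once the pose of the start-of-service
 * frame relative to the area-description frame becomes valid. */
static bool is_localized(void) {
  TangoCoordinateFramePair pair = {
      TANGO_COORDINATE_FRAME_AREA_DESCRIPTION,
      TANGO_COORDINATE_FRAME_START_OF_SERVICE};
  TangoPoseData pose;
  if (TangoService_getPoseAtTime(0.0 /* latest */, pair, &pose) !=
      TANGO_SUCCESS) {
    return false;
  }
  return pose.status_code == TANGO_POSE_VALID;
}
```

Anchoring virtual content then amounts to expressing poses in the area-description frame, which stays fixed to the recorded space rather than to wherever the device happened to start.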

For more information on area learning, see:

https://developers.google.com/project-tango/apis/c/c-area-learning 

https://developers.google.com/project-tango/apis/c/reference/group/area-description