In our daily lives, we use vision to navigate and understand the world around us. We observe the size and shape of objects, and we learn their positions and layouts almost effortlessly over time. This awareness of space and motion is fundamental to the way we interact with our environment and each other. We are physical beings living in a 3D world, yet our mobile devices can capture only 2D views of it. To close that gap, Google announced a device that can capture a 3D map of its surroundings.
This is the first device under Project Tango.
The device carries multiple sensors that detect its motion and the depth of its surroundings, which together let it build a 3D view.
It pairs customised hardware and software to capture the full 3D motion of the device while simultaneously creating a map of the environment. These sensors allow the device to make over a quarter million 3D measurements every second, updating its position and orientation in real time and combining that data into a single 3D model of the space around you.
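The core idea behind that fusion can be sketched in a few lines: each depth measurement is taken in the sensor's own frame, and the device's tracked pose (position plus orientation) is used to place those points into one shared world frame. The sketch below is purely illustrative and assumes a simplified pose (position plus a single yaw angle); it is not Tango's actual API.

```python
import math

# Illustrative sketch only (not the Tango API): fuse depth points,
# measured in the sensor frame, into a single world-frame 3D model
# using the device pose from motion tracking.

def rotate_yaw(point, yaw):
    """Rotate a 3D point (x, y, z) about the vertical axis by `yaw` radians."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y, s * x + c * y, z)

def to_world(depth_points, position, yaw):
    """Transform sensor-frame depth points into the world frame using
    the device pose (a simplified position + yaw, assumed here)."""
    px, py, pz = position
    world = []
    for p in depth_points:
        rx, ry, rz = rotate_yaw(p, yaw)
        world.append((rx + px, ry + py, rz + pz))
    return world

# The same physical corner, seen from two different device poses,
# should land at (roughly) the same world coordinate once fused.
model = []
model += to_world([(1.0, 0.0, 0.0)], position=(0.0, 0.0, 0.0), yaw=0.0)
model += to_world([(-1.0, 0.0, 0.0)], position=(1.0, 1.0, 0.0), yaw=math.pi / 2)
```

Real devices do this with full 3D rotations and hundreds of thousands of points per second, but the principle is the same: pose tracking tells the device where each depth measurement belongs in space.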
Let's look at the device that creates this 3D model.
Project Tango Tablet Development Kit
This 7″ development kit is powered by the new NVIDIA Tegra K1 processor and packs 4GB of RAM, 128GB of storage, a motion-tracking camera, integrated depth sensing, WiFi, BTLE, and 4G LTE.
Priced at around $1,000, the Project Tango kit will be available to developers later this year.
NASA to use Google’s Project Tango to update space robot
Smart SPHERES is a prototype free-flying space robot based on NASA’s Synchronised Position Hold, Engage, Reorient Experimental Satellites. NASA has been testing SPHERES on the space station since 2011.
Starting in early August, the astronauts will turn on the sensors that enable the 3D navigation and take the SPHERES throughout the station, mapping its entire layout.
Take a look at the video describing the NASA SPHERES: