FaceTime may be the utility that grabs the most attention, but the forward-facing cameras on the iPad and iPhone can do more than just video calling. These cameras act like a little eye that can be programmed to track our heads as we look left and right, producing some of the same movement effects seen with Kinect. But although the effect looks similar, it is a facade. Much like closing one of our own eyes, a single forward-facing camera can only build a 2-D image map and is lost when depth enters the picture. Technically speaking, the iPhone already employs depth-sensing technology, in a fashion reminiscent of Kinect. Buried beside the phone's earpiece is an infrared (IR) LED that measures the reflectivity of objects placed in front of it. This proximity sensor is what allows us to press an ear to the phone without mashing buttons and hanging up on callers.
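The single-camera head-tracking trick described above can be sketched in a few lines. This is a hypothetical illustration, not any real iOS API: the function name and normalization scheme are assumptions. The point is that the input is a single 2-D coordinate from the camera frame; no depth is involved.

```python
# Hypothetical sketch of single-camera "head tracking": map the
# detected face's horizontal position to a scene shift, faking the
# look of motion parallax. Note the input is purely 2-D.

def parallax_offset(face_x, frame_width, max_shift_px=40):
    """Return how far to shift the on-screen scene, in pixels.

    A face centered in the frame yields no shift; a face at either
    edge yields the maximum shift in the opposite direction.
    """
    # Normalize the face position to [-1.0, 1.0], 0.0 = frame center.
    normalized = (face_x - frame_width / 2) / (frame_width / 2)
    # Shift the scene opposite to the head's movement.
    return -normalized * max_shift_px

print(parallax_offset(320, 640))  # face centered -> 0.0
print(parallax_offset(640, 640))  # face at right edge -> -40.0
```

Because the camera never knows how far away the head actually is, moving closer or farther from the screen changes nothing here, which is exactly why the effect is a facade.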
Both the dual-camera method and IR can achieve the same effect, but IR is actually the more practical of the two. Translating two images into depth-of-field data takes far more computing power than processing IR data. In addition, IR works just as well in the dark as it does in the light, sort of like a technologically advanced version of the flashlight apps that clutter the App Store's utility section. While most of us won't be stumbling through the dark with our smartphones, this begins to reveal some of the functionality that depth-sensing apps could bring to the market.
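To see why the dual-camera route costs more computation, consider what stereo depth requires: for every point, the system must first find the matching point in the second image, and only then can it apply the classic pinhole relation depth = focal length × baseline / disparity. A minimal sketch of that final step, with made-up illustrative numbers:

```python
# Sketch of the stereo depth relation. The hard, expensive part
# (finding which pixel in the left image matches which pixel in the
# right image) is assumed to have happened already; an IR proximity
# sensor skips all of this by reading reflectivity directly.

def stereo_depth(disparity_px, focal_length_px, baseline_m):
    """Classic pinhole stereo relation: depth = f * B / disparity."""
    if disparity_px <= 0:
        raise ValueError("matched points must have positive disparity")
    return focal_length_px * baseline_m / disparity_px

# A feature that appears 50 px apart between two cameras spaced
# 7 cm apart (focal length 700 px) sits about a meter away.
print(stereo_depth(50, 700, 0.07))  # -> 0.98 (meters)
```

Running that matching search for hundreds of thousands of pixels, many times per second, is what makes stereo depth so much heavier than a single IR reading.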
As if we had a second pair of eyes, a depth-sensing input on our mobile devices can see anything we point it at. For the blind, this means not only that touch screens could be replaced with gesture-based commands, but that a mobile device could potentially function as a visual assist. A group of university students in Germany have already put