Apple is contemplating ways video content for VR headsets could be improved, coming up with a method for stitching together multi-view recordings that uses data from the videos themselves to make the result appear with less distortion than current stitching apps and encoders can produce.

Some videos are produced in ways that allow a subject or scene to be captured from multiple viewpoints, such as capturing all sides of a scene. For 360-degree video, or for shots that change the camera's position and viewpoint, the images could provide a lot of data that could be incorporated into the scene.

Many modern coding applications are not designed to process such omnidirectional or multi-view image content, suggesting the applications are designed on the assumption that the data is flat or captured from a single viewpoint.

In short, the encoder splits the video into pixel blocks and, for each block, may compare it against other data it has about the scene in a reference picture. Using a prediction search over the search block and that data, the encoder could perform different operations on the pixel block, in order to make it look more appropriate for the scene in the format it will be used within. A simplified sketch of this kind of block search appears at the end of this article.

At the same time, the video could be shown on a monitor as-live, with changes made without any of the distortions required for it to appear correct in a spherical view.

The USPTO filings are not a guarantee that the idea will make it into a consumer device.

The second application is for VR, both for creating spherical videos and for spectators viewing what the user is seeing. Videos produced with 360-degree cameras are likely to be a major content source for VR users in the future, and the ability to correct distortions and artifacts from the filming of such videos would make the content much more acceptable to view.
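For readers curious what a prediction search over pixel blocks looks like in practice, the sketch below shows the general block-matching technique the filing builds on: a brute-force sum-of-absolute-differences search for the best-matching block in a reference picture. This is an illustrative assumption of how such a search is commonly done, not Apple's patented method; the function names, block size, and search range are all hypothetical.

```python
# Minimal block-matching prediction search sketch (assumed, not from the filing).
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> int:
    """Sum of absolute differences: a common block-matching cost."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def prediction_search(current: np.ndarray, reference: np.ndarray,
                      top: int, left: int, block: int = 16,
                      search_range: int = 8) -> tuple[int, int]:
    """Find the offset (dy, dx) in the reference picture whose block
    best predicts the pixel block at (top, left) in the current frame."""
    target = current[top:top + block, left:left + block]
    best_cost, best_offset = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            # Skip candidate blocks that fall outside the reference frame.
            if y < 0 or x < 0 or y + block > reference.shape[0] or x + block > reference.shape[1]:
                continue
            cost = sad(target, reference[y:y + block, x:x + block])
            if best_cost is None or cost < best_cost:
                best_cost, best_offset = cost, (dy, dx)
    return best_offset

# Example: simulate a small camera shift and recover it for one block.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (128, 128), dtype=np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))
print(prediction_search(cur, ref, 32, 32))  # -> (-2, 3), undoing the shift
```

A real encoder would refine this with smarter search patterns and sub-pixel matching, but the core idea is the same: each pixel block is predicted from reference data, and only the correction needed is encoded.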