Abstract:
Methods and apparatus for supporting the capture of images of surfaces of an environment visible from a default viewing position and the capture of images of surfaces not visible from the default viewing position, e.g., occluded surfaces, are described. Occluded and non-occluded image portions are packed into one or more frames and communicated to a playback device for use as textures which can be applied to a model of the environment where the images were captured. An environmental model includes a model of surfaces which are occluded from view from a default viewing position but which may be viewed if the user shifts the user's viewing location. Occluded image content can be incorporated directly into a frame that also includes non-occluded image data or sent in frames of a separate, e.g., auxiliary, content stream that is multiplexed with the main content stream which communicates image data corresponding to non-occluded environmental portions.
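A minimal sketch of the frame-packing idea described above, assuming numpy image arrays and a hypothetical pack_frame helper that reserves a strip at the bottom of the frame for occluded-surface patches (the layout is one illustrative choice, not the source's specified format):

```python
import numpy as np

def pack_frame(primary: np.ndarray, occluded_patches: list,
               strip_height: int) -> np.ndarray:
    """Pack non-occluded image data plus occluded-surface patches into one frame.

    The primary (non-occluded) image occupies the top of the frame; occluded
    patches are tiled left-to-right in a reserved strip along the bottom.
    """
    h, w, c = primary.shape
    frame = np.zeros((h + strip_height, w, c), dtype=primary.dtype)
    frame[:h] = primary
    x = 0
    for patch in occluded_patches:
        ph, pw = patch.shape[:2]
        assert ph <= strip_height and x + pw <= w, "patch does not fit in strip"
        frame[h:h + ph, x:x + pw] = patch
        x += pw
    return frame
```

The playback side would apply the top region to the normally visible model surfaces and the strip patches to the occluded-surface portions of the model.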
Abstract:
Methods and apparatus for using a display in a manner which results in a user perceiving a higher resolution than would be perceived if the user viewed the display from a head-on position are described. In some embodiments one or more displays are mounted at an angle, e.g., sometimes in a range from above 0 degrees to 45 degrees relative to a user's face and thus eyes. Due to the angle at which the display or displays are mounted, the user sees more pixels, e.g., dots corresponding to light emitting elements, per square inch of eye area than the user would see if the user were viewing the display head on. The methods and display mounting arrangement are well suited for use in head mounted displays, e.g., Virtual Reality (VR) displays for stereoscopic viewing (e.g., 3D) and/or non-stereoscopic viewing of displayed images.
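The perceived-density effect follows from foreshortening: tilting the display compresses its projected extent by roughly the cosine of the tilt angle, so the same pixel count occupies less of the visual field. A small illustrative calculation (the 1/cos model is a simplification of the geometry, not taken from the source):

```python
import math

def effective_angular_density(ppi: float, tilt_deg: float) -> float:
    """Approximate perceived pixel density for a display tilted tilt_deg
    away from head-on viewing.

    Foreshortening compresses the display's projected width by cos(tilt),
    so the same pixels cover less of the eye's field of view, raising
    perceived density by 1/cos(tilt). Small-display/far-eye approximation;
    ignores per-pixel variation in viewing angle.
    """
    return ppi / math.cos(math.radians(tilt_deg))

for angle in (0, 15, 30, 45):
    print(angle, round(effective_angular_density(600.0, angle), 1))
```

At a 45 degree mount, a 600 ppi panel behaves like roughly an 849 ppi panel under this simplified model.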
Abstract:
Camera and/or lens calibration information is generated as part of a calibration process in video systems including 3-dimensional (3D) immersive content systems. The calibration information can be used to correct for distortions associated with the source camera and/or lens. A calibration profile can include information sufficient to allow the system to correct for camera and/or lens distortion/variation. This can be accomplished by capturing a calibration image of a physical 3D object corresponding to the simulated 3D environment, and creating the calibration profile by processing the calibration image. The calibration profile can then be used to project the source content directly into the 3D viewing space while also accounting for distortion/variation, and without first translating into an intermediate space (e.g., a rectilinear space) to account for lens distortion.
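A hedged sketch of one plausible way to build such a profile, using OpenCV's calibrateCamera to fit intrinsics and distortion coefficients from known 3D feature locations on the physical calibration object. The single-image setup, the dictionary layout, and the function name build_calibration_profile are assumptions; feature detection, and any initial intrinsic guess a single view may require in practice, are out of scope:

```python
import cv2  # OpenCV, used here as one plausible way to fit a distortion model

def build_calibration_profile(calib_image, object_points_3d, image_points_2d):
    """Fit camera intrinsics and lens-distortion coefficients from one
    calibration image of a physical 3D object with known geometry.

    object_points_3d: (N, 3) float32 coordinates of known features on the
    physical calibration object; image_points_2d: (N, 2) float32 pixel
    locations of those features detected in calib_image.
    """
    h, w = calib_image.shape[:2]
    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        [object_points_3d], [image_points_2d], (w, h), None, None)
    # The "calibration profile": everything needed to map source pixels
    # onto the 3D viewing geometry without a rectilinear intermediate step.
    return {"rms": rms, "camera_matrix": camera_matrix,
            "dist_coeffs": dist_coeffs, "rvec": rvecs[0], "tvec": tvecs[0]}
```

The returned profile can then parameterize the projection of source content directly into the 3D viewing space, with distortion accounted for in that single mapping.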
Abstract:
Methods and apparatus for using selective resolution reduction on images to be transmitted and/or used by a playback device are described. Prior to transmission, one or more images of an environment are captured. Based on image content, motion detection and/or user input, a resolution reduction operation is selected and performed. The reduced resolution image is communicated to a playback device along with information indicating the UV map, corresponding to the selected resolution allocation, that should be used by the playback device for rendering the communicated image. By changing the resolution allocation and which UV map is used by the playback device, different resolution allocations can be made with respect to different portions of the environment while allowing the number of pixels in transmitted images to remain constant. The playback device renders each individual image with the UV map corresponding to the resolution allocation used to generate that image.
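A minimal sketch of the constant-pixel-budget idea, assuming a horizontally banded environment and hypothetical allocation ids that the playback device maps to matching UV maps (the three-band split and the fractions are illustrative, not from the source):

```python
import numpy as np

# Hypothetical resolution allocations: fraction of output rows given to
# (sky, horizon, ground). Each allocation id has a matching UV map on the
# playback side.
ALLOCATIONS = {
    "action_at_horizon": (0.2, 0.6, 0.2),
    "uniform": (1 / 3, 1 / 3, 1 / 3),
}

def reduce_resolution(image: np.ndarray, alloc_id: str, out_rows: int):
    """Resample three equal-height source bands into out_rows total rows per
    the chosen allocation; total pixel count stays constant across
    allocations, only its distribution over the environment changes."""
    sky_f, hor_f, gnd_f = ALLOCATIONS[alloc_id]
    bands = np.array_split(image, 3, axis=0)
    targets = [int(out_rows * f) for f in (sky_f, hor_f, gnd_f)]
    targets[1] = out_rows - targets[0] - targets[2]  # absorb rounding error
    resized = [band[np.linspace(0, band.shape[0] - 1, t).astype(int)]
               for band, t in zip(bands, targets)]
    # alloc_id travels with the frame so playback picks the matching UV map.
    return np.concatenate(resized, axis=0), alloc_id
```

Because every allocation yields the same output dimensions, the encoder and transport pipeline see a constant frame size while the effective detail shifts to whichever environmental region the allocation favors.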
Abstract:
Methods and apparatus for allowing a user to switch viewing positions and/or perspective while viewing an environment, e.g., as part of a 3D playback/viewing experience, are described. In various embodiments images of the environment are captured using cameras placed at multiple camera positions. During viewing a user can select which camera position he/she would like to experience the environment from. While experiencing the environment from the perspective of a first camera position the user may switch from the first to a second camera position by looking at the second position. A visual indication is provided to the user to indicate that the other camera position can be selected as his/her viewing position. If a user input indicates a desired viewing position change, a switch to the alternate viewing position is made and the user is presented with images captured from the perspective of the user-selected alternative viewing position.
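A minimal sketch of the gaze-based selection step, assuming unit view-direction vectors and a hypothetical angular tolerance (SWITCH_ANGLE_DEG) below which an alternate camera position counts as "looked at":

```python
import numpy as np

SWITCH_ANGLE_DEG = 10.0  # hypothetical gaze tolerance

def gaze_target(view_dir, camera_positions, current_pos):
    """Return the index of an alternate camera position the user is looking
    at, or None. view_dir is a unit vector; positions are 3D points."""
    view_dir = np.asarray(view_dir, dtype=float)
    for i, pos in enumerate(camera_positions):
        to_cam = np.asarray(pos, dtype=float) - np.asarray(current_pos, dtype=float)
        dist = np.linalg.norm(to_cam)
        if dist == 0.0:  # skip the position the user already occupies
            continue
        cos_angle = float(view_dir @ (to_cam / dist))
        if cos_angle > np.cos(np.radians(SWITCH_ANGLE_DEG)):
            return i  # highlight this position; switch if the user confirms
    return None
```

When gaze_target returns an index, the playback device would render the visual indication for that position and, on a confirming user input, begin presenting the stream captured from it.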
Abstract:
A head mounted virtual reality (VR) device including an inertial measurement unit (IMU) is located in a vehicle which may be, and sometimes is, moving. Detected motion attributable to vehicle motion is filtered out based on one or more or all of: vehicle type information, information derived from sensors located in the vehicle external to the head mounted VR device, and/or captured images including a reference point or reference object within the vehicle. An image portion of a simulated VR environment is selected and presented to the user of the head mounted VR device based on the filtered motion information. Thus, the image portion presented to the user of the head mounted VR device is substantially unaffected by vehicle motion and corresponds to user induced head motion.
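The filtering described above can be as simple as subtracting a vehicle-referenced angular rate from the headset's IMU reading once both are expressed in a common frame. The sketch below assumes exactly that; frame alignment and the vehicle-type or image-reference variants mentioned in the abstract are out of scope:

```python
import numpy as np

def filtered_head_rotation(headset_gyro: np.ndarray,
                           vehicle_gyro: np.ndarray) -> np.ndarray:
    """Remove vehicle-induced angular rate from the headset IMU reading.

    Both inputs are 3-axis angular-rate vectors (rad/s) in a common frame.
    The difference approximates user-induced head motion only.
    """
    return headset_gyro - vehicle_gyro

# Usage: the remaining rate drives which portion of the simulated
# environment is rendered, so a vehicle turn does not pan the VR view.
head_only = filtered_head_rotation(np.array([0.15, 0.02, 0.0]),
                                   np.array([0.10, 0.00, 0.0]))
```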
Abstract:
Content delivery and playback methods and apparatus are described. The methods and apparatus are well suited for delivery and playback of content corresponding to a 360 degree environment and can be used to support streaming and/or real time delivery of 3D content corresponding to an event, e.g., while the event is ongoing or after the event is over. Portions of the environment are captured by cameras located at different positions. The content captured from different locations is encoded and made available for delivery. A playback device selects the content to be received based on a user's head position.
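A minimal sketch of head-position-driven stream selection, with a hypothetical per-position table of streams keyed by the yaw angle each stream's view is centered on (the names, the table layout, and the yaw-only simplification are assumptions):

```python
def pick_stream(streams: dict, position_id: str, head_yaw_deg: float) -> str:
    """Choose the stream whose advertised view center is closest to where
    the user is currently looking at the selected camera position."""
    candidates = streams[position_id]  # {center_yaw_deg: stream_id}

    def angular_dist(a: float, b: float) -> float:
        return abs((a - b + 180.0) % 360.0 - 180.0)  # wrap-aware difference

    best = min(candidates, key=lambda c: angular_dist(c, head_yaw_deg))
    return candidates[best]

streams = {"midfield": {0.0: "front", 120.0: "left", 240.0: "right"}}
print(pick_stream(streams, "midfield", 110.0))  # -> "left"
```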
Abstract:
An unobstructed image portion of a captured image from a first camera of a camera pair, e.g., a stereoscopic camera pair including fisheye lenses, is combined with a scaled extracted image portion generated from a captured image from a second camera in the camera pair. An unobstructed image portion of a captured image from the second camera of the camera pair is combined with a scaled extracted image portion generated from a captured image from the first camera in the camera pair. As part of the combining, obstructed image portions which were obstructed by part of the adjacent camera are replaced in some embodiments. In some embodiments, the obstructions are due to the adjacent fisheye lens. In various embodiments fisheye lenses which have been cut to be flat on one side are used for the left and right cameras, with the spacing between the optical axes approximating the spacing between the optical axes of a human's eyes.
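A minimal sketch of the replace-and-scale step for one camera of the pair, assuming rectangular obstructed and donor regions; the region coordinates and nearest-neighbor scaling are illustrative choices, not the source's method:

```python
import numpy as np

def fill_obstruction(target: np.ndarray, source: np.ndarray,
                     obstructed: tuple, donor: tuple) -> np.ndarray:
    """Replace an obstructed rectangle in `target` (one camera of the pair)
    with a patch extracted from `source` (the other camera), scaled by
    nearest-neighbor sampling to fit. Regions are (row0, row1, col0, col1).
    """
    r0, r1, c0, c1 = obstructed
    sr0, sr1, sc0, sc1 = donor
    rows = np.linspace(sr0, sr1 - 1, r1 - r0).astype(int)
    cols = np.linspace(sc0, sc1 - 1, c1 - c0).astype(int)
    out = target.copy()
    out[r0:r1, c0:c1] = source[np.ix_(rows, cols)]
    return out
```

Applying the same operation with the roles of the two cameras swapped yields the complementary combined image for the other eye.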
Abstract:
Custom wide angle lenses, and methods and apparatus for using such lenses in individual cameras as well as in pairs of cameras intended for stereoscopic image capture, are described. The lenses are used in combination with sensors to capture different portions of an environment at different resolutions. In some embodiments the ground is captured at a lower resolution than the sky, which is captured at a lower resolution than a horizontal area of interest. Various asymmetries in lenses and/or lens and sensor placement are described which are particularly well suited for stereoscopic camera pairs, where the proximity of one camera to the adjacent camera may interfere with the field of view of the cameras.
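An illustrative encoding of the ground < sky < horizontal-area-of-interest resolution ordering as an elevation-dependent sampling-density curve; the band boundaries and density values below are invented for illustration and are not from the source:

```python
def pixels_per_degree(elevation_deg: float) -> float:
    """Illustrative resolution-allocation curve for a custom wide-angle
    capture: densest at the horizontal area of interest, sparser for the
    sky, sparsest for the ground."""
    if -20.0 <= elevation_deg <= 20.0:  # horizontal band of interest
        return 16.0
    if elevation_deg > 20.0:            # sky
        return 8.0
    return 4.0                          # ground

for elevation in (-60, 0, 45):
    print(elevation, pixels_per_degree(elevation))
```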