Abstract:
Gaze-based systems and methods are used to aid traffic controllers and/or pilots. A gaze line of an eye of a user viewing a display is tracked using an eyetracker. An intersection of the gaze line of the eye with the display is calculated to provide continuous feedback as to where on the display the user is looking. A trace of the gaze line of the eye is correlated with elements of a situation. The user's awareness of the situation is inferred by verifying that the user has looked at the elements of the situation. In an embodiment, the user is notified of the situation when it is determined that the user has not looked at the elements of the situation for a predetermined period of time. The notification is automatically removed once it is determined that the user has looked at the elements of the situation.
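Under one plausible reading of this method, the awareness inference can be sketched as a small loop that marks each situation element when the gaze trace passes near it and raises or clears a notification accordingly. The element geometry, the grace period, and all identifiers below are illustrative assumptions rather than the patented implementation.

```python
# Hypothetical sketch of the awareness-inference loop; not the patent's implementation.
import time
from dataclasses import dataclass, field

@dataclass
class SituationElement:
    x: float                                   # display coordinates (pixels), assumed
    y: float
    radius: float = 40.0                       # how close the gaze must pass to count as "looked at"
    last_seen: float = field(default_factory=time.monotonic)

@dataclass
class Situation:
    elements: list
    notify_after_s: float = 5.0                # assumed predetermined period before alerting

def update_awareness(situation, gaze_x, gaze_y, now=None):
    """Correlate one gaze sample with the situation's elements and return True
    while the user should be notified (i.e. awareness cannot be inferred)."""
    now = time.monotonic() if now is None else now
    for e in situation.elements:
        if (gaze_x - e.x) ** 2 + (gaze_y - e.y) ** 2 <= e.radius ** 2:
            e.last_seen = now                  # gaze trace passed over this element
    # Awareness is inferred if every element was looked at within the period;
    # the notification is raised otherwise and removed automatically when aware.
    aware = all(now - e.last_seen <= situation.notify_after_s for e in situation.elements)
    return not aware
```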
Abstract:
A target is imaged in a three-dimensional real space using two or more video cameras. A three-dimensional image space combined from two video cameras of the two or more video cameras is displayed to a user using a stereoscopic display. A right eye and a left eye of the user are imaged as the user is observing the target in the stereoscopic video display, a right gaze line of the right eye and a left gaze line of the left eye are calculated in the three-dimensional image space, and a gazepoint in the three-dimensional image space is calculated as the intersection of the right gaze line and the left gaze line using a binocular eyetracker. A real target location is determined by translating the gazepoint in the three-dimensional image space to the real target location in the three-dimensional real space from the locations and the positions of the two video cameras using a processor.
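The gazepoint computation can be illustrated with a standard closest-approach triangulation, since two measured gaze lines rarely intersect exactly; the midpoint of their closest approach serves as the intersection. The rigid transform used to map the image-space gazepoint into real space is an assumption standing in for whatever mapping the camera locations and positions define.

```python
# Illustrative sketch of binocular gazepoint triangulation; all names are assumptions.
import numpy as np

def binocular_gazepoint(p_right, d_right, p_left, d_left):
    """Return the point midway between the two gaze lines, each given by an
    eye position p and a gaze direction d in the three-dimensional image space."""
    d1 = d_right / np.linalg.norm(d_right)
    d2 = d_left / np.linalg.norm(d_left)
    w0 = p_right - p_left
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                      # near-parallel gaze lines
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    return 0.5 * ((p_right + s * d1) + (p_left + t * d2))

def image_to_real(gazepoint_img, cam_rotation, cam_translation):
    """Map the gazepoint from image space to real space via a rigid transform
    assumed to be derived from the two cameras' known locations and positions."""
    return cam_rotation @ gazepoint_img + cam_translation
```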
Abstract:
Embodiments of the present invention relate to systems and methods for minimizing motion clutter in image-generation devices. Temporally-interleaved image-subtraction reduces the magnitude of motion clutter and has no adverse effect on the desired ambient-light cancellation of static images. Embodiments of image-generation devices employing temporally-interleaved image-subtraction include single, double, triple, and series accumulator configurations. All four embodiments allow synchronization with scene illuminators and may be implemented on a single electronic chip. Temporally-interleaved image-subtraction is particularly well suited for use in video eyetracking applications where ambient light and scene motion can cause significant problems.
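One way to picture temporally-interleaved image-subtraction is that the ambient (illuminator-off) estimate is centred in time on the illuminated exposure, so a uniformly moving edge contributes nearly equally to both sides of the subtraction and its clutter largely cancels, while static ambient light cancels exactly. The interleaving pattern and frame handling below are illustrative assumptions, not the accumulator circuits themselves.

```python
# Minimal numerical sketch of temporally-centred ambient subtraction; assumptions only.
import numpy as np

def interleaved_subtraction(dark_before, bright, dark_after):
    """Return the illuminated component of `bright`, using the average of the
    surrounding illuminator-off frames as a time-centred ambient estimate."""
    ambient = 0.5 * (dark_before.astype(np.float32) + dark_after.astype(np.float32))
    return bright.astype(np.float32) - ambient

# Usage: frames arrive as ...off, on, off, on..., synchronized with the scene illuminator.
# A uniformly moving edge appears shifted by equal and opposite amounts in
# dark_before and dark_after, so most of its motion clutter cancels in the average.
```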
Abstract:
One embodiment of the present invention is a method for computing a first gaze axis of an eye in a first coordinate system. A camera is focused on the eye and moved to maintain the focus on the eye as the eye moves in the first coordinate system. A first location of the camera in the first coordinate system is measured. A second location of the eye and a gaze direction of the eye within a second coordinate system are measured. A second gaze axis within the second coordinate system is computed from the second location and the gaze direction. The first gaze axis is computed from the second gaze axis and the first location using a first coordinate transformation.
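The coordinate bookkeeping can be sketched as a rigid transformation: the eye location and gaze direction measured in the camera's (second) coordinate system are mapped into the first coordinate system using the camera's measured location and orientation. The pose representation below (rotation matrix plus position vector) is an assumption.

```python
# Hedged sketch of the two-frame gaze-axis computation; identifiers are assumptions.
import numpy as np

def gaze_axis_in_first_frame(eye_pos_cam, gaze_dir_cam, cam_rotation, cam_position):
    """Transform a gaze axis (origin point plus unit direction) from the camera's
    coordinate system into the first coordinate system via a rigid transformation."""
    origin_first = cam_rotation @ eye_pos_cam + cam_position   # eye location in the first frame
    direction_first = cam_rotation @ gaze_dir_cam              # directions rotate but do not translate
    return origin_first, direction_first / np.linalg.norm(direction_first)
```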
Abstract:
It is sometimes required to track randomly moving objects with precision and high speed. This invention pertains to a mirror rotation assembly that provides precise, high-speed pitch and yaw tracking by rotating a low-moment-of-inertia mirror rather than rotating a heavier double-gimballed mirror assembly or the camera-lens assembly itself. By pivoting the mirror on a single bearing point and controlling the mirror surface pitch and yaw by driving two mirror tie points along circumferential paths around the bearing point, a low moment of inertia is obtained. The low moment of inertia, together with velocity control of the mirror rather than position control, permits higher-speed tracking while maintaining the required precision. The velocity control strategy minimizes image blur resulting from unpredicted object motions.
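A rough sketch of the velocity-control idea: the tie points are commanded with velocities proportional to the pointing error plus a feedforward of the estimated object rate, so the mirror moves continuously with the target rather than stepping between discrete position setpoints. The gain and the mapping from tie-point travel to mirror pitch and yaw are assumptions.

```python
# Illustrative velocity command for the two mirror tie points; gains and axes are assumed.
def tie_point_velocity_command(error_pitch, error_yaw,
                               target_rate_pitch, target_rate_yaw, gain=8.0):
    """Proportional correction toward the pointing error plus a feedforward of the
    estimated object rate, keeping the mirror moving with the target between
    measurements and thereby limiting image blur."""
    v_pitch = gain * error_pitch + target_rate_pitch
    v_yaw = gain * error_yaw + target_rate_yaw
    return v_pitch, v_yaw
```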
Abstract:
A setting of a video camera is remotely controlled. Video from a video camera is displayed to a user using a video display. At least one eye of the user is imaged as the user is observing the video display, a change in an image of at least one eye of the user is measured over time, and an eye/head activity variable is calculated from the measured change in the image using an eyetracker. The eye/head activity variable is translated into a camera control setting, and an actuator connected to the video camera is instructed to apply the camera control setting to the video camera using a processor.
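The control loop can be sketched in a few lines: measure a change in the eye image over time, reduce it to an eye/head activity variable, translate that variable into a camera setting, and pass the setting to an actuator. The particular choice here (pupil-dilation rate driving zoom) and every identifier are illustrative assumptions.

```python
# Minimal illustrative pipeline for the remote camera control loop; all names are assumptions.
class CameraActuator:
    def apply(self, setting_name, value):
        print(f"apply {setting_name} = {value:.2f}")   # stand-in for real hardware I/O

def activity_from_images(pupil_diam_prev, pupil_diam_now, dt):
    """Example eye/head activity variable: rate of change of pupil diameter over time."""
    return (pupil_diam_now - pupil_diam_prev) / dt

def activity_to_setting(activity, zoom_gain=0.5, zoom_min=1.0, zoom_max=10.0):
    """Example translation of the activity variable into a camera control setting."""
    zoom = 1.0 + zoom_gain * activity
    return max(zoom_min, min(zoom_max, zoom))

actuator = CameraActuator()
actuator.apply("zoom", activity_to_setting(activity_from_images(3.0, 3.4, 0.1)))
```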
Abstract:
An embodiment of the present invention is a system for identifying a user by observing irregularities on the surface of an eyeball of the user. The system includes a topography system and a gaze tracking system. The topography system obtains one or more discernable features of the eyeball and stores the one or more discernable features. The gaze tracking system observes the irregularities, compares the irregularities to the one or more discernable features, and identifies the user if the irregularities and the one or more discernable features match.
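A toy version of the matching step might compare the observed irregularities, encoded as a feature vector, against stored templates and identify the user whose template agrees within a tolerance. The feature encoding, distance metric, and threshold are assumptions.

```python
# Toy matching sketch for the identification step; encoding and threshold are assumptions.
import numpy as np

def identify_user(observed_features, enrolled_templates, tolerance=0.05):
    """Return the enrolled user whose stored discernable features best match the
    observed irregularities, or None if no template matches within tolerance."""
    best_user, best_dist = None, float("inf")
    for user_id, template in enrolled_templates.items():
        dist = np.linalg.norm(np.asarray(observed_features) - np.asarray(template))
        if dist < best_dist:
            best_user, best_dist = user_id, dist
    return best_user if best_dist <= tolerance else None
```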
Abstract:
Embodiments of the present invention relate to eyetracking methods and systems that compensate for physiological variations in the location of the pupil within the eye. In one embodiment, a video eyetracker measures the location of a pupil within the eye. Using measured observable features of the eye, including pupil diameter, a pupil-location-offset is estimated with respect to a fixed point on the eyeball. The location of a fixed point within the eyeball is estimated as a function of the measured pupil location and the estimated pupil-location-offset.
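The compensation can be sketched as removing a diameter-dependent offset from the measured pupil location to recover a dilation-invariant point within the eyeball. The linear offset model and its coefficients below are illustrative assumptions, not calibrated physiological values.

```python
# Sketch of pupil-location-offset compensation; the linear model is assumed, not calibrated.
import numpy as np

def estimate_pupil_offset(pupil_diameter_mm, k=np.array([0.02, -0.01]), d0=4.0):
    """Hypothetical pupil-location-offset (mm) as a linear function of how far the
    measured pupil diameter departs from a reference diameter d0."""
    return k * (pupil_diameter_mm - d0)

def fixed_eyeball_point(measured_pupil_xy, pupil_diameter_mm):
    """Estimated location of a fixed point within the eyeball: the measured pupil
    location minus the estimated pupil-location-offset."""
    return np.asarray(measured_pupil_xy) - estimate_pupil_offset(pupil_diameter_mm)
```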