Abstract:
Systems, methods, and apparatuses are disclosed for simplifying a point cloud. A point cloud is received, where the point cloud has a plurality of points, a global spatial structure, and a local point density. The processor calculates a set of pairwise distances from each of the plurality of points to at least one other point in the plurality of points. A first distance matrix is generated using the set of pairwise distances. The processor calculates a second set of pairwise distances in which each of the plurality of points has a weight, and generates a second distance matrix based on the second set of pairwise distances. A portion of the points in the second set of pairwise distances is removed based on the weight. The processor performs a comparison of the two matrices and, using the comparison, the global spatial structure, and the local point density, generates a second point cloud based on the second distance matrix.
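As a rough illustration of the idea (not the claimed method), a minimal Python sketch follows; the function name, the k-nearest-neighbour weighting, and the mean-distance comparison are assumptions made for the example.

import numpy as np

def simplify_point_cloud(points, keep_fraction=0.5, k=10):
    # First distance matrix: pairwise distances between all points (N x N).
    diff = points[:, None, :] - points[None, :, :]
    dist_full = np.linalg.norm(diff, axis=-1)

    # Assumed weighting: each point's weight is its mean distance to its
    # k nearest neighbours (a proxy for local point density).
    k = min(k, len(points) - 1)
    knn = np.sort(dist_full, axis=1)[:, 1:k + 1]
    weight = knn.mean(axis=1)

    # Remove a portion of the points based on the weight (here the densest
    # points are dropped first, preserving sparse regions of the structure).
    keep = np.argsort(-weight)[: max(1, int(keep_fraction * len(points)))]

    # Second distance matrix over the retained points.
    dist_reduced = dist_full[np.ix_(keep, keep)]

    # Compare the two matrices: relative change in mean pairwise distance as
    # a crude check that the global spatial structure is preserved.
    structure_error = abs(dist_reduced.mean() - dist_full.mean()) / dist_full.mean()
    return points[keep], structure_error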
Abstract:
A method, apparatus and computer program product are provided for image registration in the gradient domain. A method is provided including receiving a first image and second image; and registering the first and second images in a gradient domain. The registration of the first and second images in the gradient domain includes applying an energy minimization function based on total variation.
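A minimal sketch of gradient-domain registration with a total-variation-style energy, assuming a brute-force search over integer translations; the abstract does not specify the transform model or optimizer, and the function name and parameters are hypothetical.

import numpy as np

def register_gradient_tv(fixed, moving, max_shift=10):
    # Gradients of the fixed image along rows (y) and columns (x).
    gy_f, gx_f = np.gradient(fixed.astype(float))
    best_shift, best_energy = (0, 0), np.inf
    # Brute-force search over integer translations of the moving image.
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving.astype(float), (dy, dx), axis=(0, 1))
            gy_m, gx_m = np.gradient(shifted)
            # Total-variation-style energy: L1 norm of the gradient residual.
            energy = np.abs(gy_f - gy_m).sum() + np.abs(gx_f - gx_m).sum()
            if energy < best_energy:
                best_shift, best_energy = (dy, dx), energy
    return best_shift, best_energy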
Abstract:
A method and apparatus are provided for lane detection using overhead images and positional data. A server receives positional data from a vehicle and computes a continuous trajectory. The server receives an overhead image of a road section. The server crops and processes the overhead image to remove unwanted portions. The server identifies edge features using the continuous trajectory and steerable filters. The server identifies lanes in the overhead image using a maximization algorithm, the edge features, and the continuous trajectory.
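A minimal sketch of the steerable-filter step, assuming a first-derivative-of-Gaussian basis steered to the trajectory heading; the basis choice, function name, and parameters are assumptions rather than details from the abstract.

import numpy as np
from scipy.ndimage import gaussian_filter

def steerable_edge_response(image, heading_rad, sigma=2.0):
    img = image.astype(float)
    # Basis filters: first derivative of a Gaussian along x and along y.
    gx = gaussian_filter(img, sigma, order=(0, 1))   # derivative along columns
    gy = gaussian_filter(img, sigma, order=(1, 0))   # derivative along rows
    # Steer the basis to the heading taken from the vehicle trajectory, so
    # edges aligned with the direction of travel (lane paint) respond strongly.
    return np.cos(heading_rad) * gx + np.sin(heading_rad) * gy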
Abstract:
In a method, a first video stream is received from a video camera, and a second video stream is received from a depth camera. A pixel mapping between the video camera and the depth camera is known. The video camera has an update rate greater than that of the depth camera. Optical flow in successive frames of the first video stream is measured, and a portion of the optical flow attributable to depth change is extracted. A scaling factor is calculated for each pixel in successive frames of the first video stream to determine whether a depth change has occurred. A perspective depth correction is applied to each pixel having a depth change. The perspective depth correction is based upon the depth of the corresponding pixel in the most recent frame from the second video stream. A combined video stream having an update rate of the video camera and depth information from the depth camera is obtained.
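A minimal sketch of the per-pixel correction step, assuming the depth-change component of the optical flow has already been converted to a per-pixel scale factor; the variable names, threshold, and pinhole approximation are assumptions.

import numpy as np

def propagate_depth(prev_depth, flow_scale, change_threshold=0.02):
    # flow_scale: per-pixel scale factor derived from the component of optical
    # flow attributed to depth change (local expansion or contraction between
    # successive frames of the faster colour stream).
    changed = np.abs(flow_scale - 1.0) > change_threshold
    corrected = prev_depth.astype(float)
    # Perspective depth correction: under a pinhole model, an apparent
    # expansion by a factor s roughly corresponds to the depth shrinking by 1/s.
    corrected[changed] = corrected[changed] / flow_scale[changed]
    return corrected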
Abstract:
Point cloud data is received and a ground plane is segmented. A two-dimensional image of the segmented ground plane is generated based on intensity values of the segmented ground plane. Lane marking candidates are determined based on intensity within the generated two-dimensional image. Image data is received and the generated two-dimensional image is registered with the received image data. Lane marking candidates of the received image data are determined based on the lane marking candidates of the registered two-dimensional image. Image patches are selected from the two-dimensional image and from the received image data based on the determined lane marking candidates. Feature maps including the selected image patches from the registered two-dimensional image and the received image data are generated. The set of feature maps is sub-sampled, and a feature vector is generated based on the set of feature maps. Lane markings are determined from the generated feature vector.
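A minimal sketch of the intensity-image and candidate-selection steps, assuming ground-plane points are rasterised onto a regular grid and bright cells are flagged by a percentile threshold; the grid size, threshold, and function name are assumptions.

import numpy as np

def lane_marking_candidates(ground_xy, intensity, cell=0.1, pct=95):
    # Rasterise ground-plane returns onto a regular grid, averaging intensity.
    mins = ground_xy.min(axis=0)
    idx = ((ground_xy - mins) / cell).astype(int)
    h, w = idx.max(axis=0) + 1
    total = np.zeros((h, w))
    count = np.zeros((h, w))
    np.add.at(total, (idx[:, 0], idx[:, 1]), intensity)
    np.add.at(count, (idx[:, 0], idx[:, 1]), 1)
    image = np.divide(total, count, out=np.zeros((h, w)), where=count > 0)
    # Lane paint is retro-reflective, so unusually bright cells are candidates.
    threshold = np.percentile(image[count > 0], pct)
    return image, image > threshold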
Abstract:
Various methods are provided for tracking eye movement and estimating point of regard. Methods may include: providing for capture of a first image of a first eye of a person; providing for capture of a first image of a second eye of the person; and providing for capture of a first image of a field of view of the person. The method may include estimating a first iris and pupil boundary for the first eye using the first image of the first eye; estimating a second iris and pupil boundary for the second eye using the first image of the second eye; determining a first foveated retinal image related to the first eye based on the first image of the first eye and the first iris and pupil boundary of the first eye; and determining a second foveated retinal image related to the second eye based on the first image of the second eye and the second iris and pupil boundary for the second eye.
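A minimal sketch of a coarse pupil-boundary estimate by dark-region thresholding and circle fitting; this is a stand-in for the boundary estimation named in the abstract, and the threshold and fitting scheme are assumptions.

import numpy as np

def estimate_pupil_boundary(eye_image, dark_threshold=40):
    # The pupil is typically the darkest region of an eye image.
    mask = eye_image < dark_threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    # Circle centre from the centroid of the dark region.
    cx, cy = xs.mean(), ys.mean()
    # Radius of the circle with the same area as the dark region.
    radius = np.sqrt(mask.sum() / np.pi)
    return cx, cy, radius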
Abstract:
A method, apparatus and computer program product are provided for image registration in the gradient domain. A method is provided including receiving three or more input images and registering, simultaneously, the three or more input images in the gradient domain based on applying an energy minimization function.
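A minimal sketch of simultaneous registration, assuming integer translations and a single sweep in which each image is scored against the gradients of all the others; in practice the joint energy would be minimised iteratively, and the names and parameters are assumptions.

import numpy as np

def register_jointly(images, max_shift=5):
    grads = [np.gradient(im.astype(float)) for im in images]
    shifts = []
    for i, im in enumerate(images):
        best_shift, best_energy = (0, 0), np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                gy, gx = np.gradient(np.roll(im.astype(float), (dy, dx), axis=(0, 1)))
                # Energy is summed against every other image, so each image is
                # registered to the group rather than to a single reference.
                energy = sum(np.abs(gy - oy).sum() + np.abs(gx - ox).sum()
                             for j, (oy, ox) in enumerate(grads) if j != i)
                if energy < best_energy:
                    best_shift, best_energy = (dy, dx), energy
        shifts.append(best_shift)
    return shifts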
Abstract:
A method includes identifying, by a processor, a first range data set. The first range data set is generated by a first device and includes first range data points. The processor identifies a second range data set. The second range data set is generated by a second device and includes second range data points. The processor compares the second range data points to the first range data points. The processor identifies a third range data set based on the comparison. The third range data set includes a subset of the second range data points. The processor adjusts distance values for first range data points of the first range data set corresponding to a portion of the surface of interest based on the third range data set.
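A minimal sketch of the comparison and adjustment, assuming both range data sets are expressed as 3-D points in a common frame and matched by brute-force nearest neighbours; the tolerance, matching rule, and range averaging are assumptions.

import numpy as np

def adjust_first_ranges(first_pts, second_pts, tol=0.5):
    # Nearest first-set neighbour of every second-set point (brute force).
    d = np.linalg.norm(second_pts[:, None, :] - first_pts[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    gap = d[np.arange(len(second_pts)), nearest]
    # Third set: the subset of second-set points consistent with the first set.
    keep = gap < tol
    subset, matched = second_pts[keep], nearest[keep]
    # Average range of the subset points assigned to each first-set point.
    sum_r = np.zeros(len(first_pts))
    count = np.zeros(len(first_pts))
    np.add.at(sum_r, matched, np.linalg.norm(subset, axis=1))
    np.add.at(count, matched, 1)
    # Rescale the matched first-set points so their range follows the subset.
    adjusted = first_pts.astype(float)
    has_match = count > 0
    old_r = np.linalg.norm(adjusted[has_match], axis=1)
    new_r = sum_r[has_match] / count[has_match]
    adjusted[has_match] *= (new_r / old_r)[:, None]
    return adjusted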
Abstract:
An approach is provided for reconstruction of dynamic arbitrary specular objects. The approach involves determining time-of-flight data for at least one pixel of at least one time-of-flight sensor configured with at least one retro-reflector, wherein the time-of-flight data includes a first distance from the at least one time-of-flight sensor to at least one point of at least one surface, and a second distance from the at least one point to the at least one retro-reflector. The approach also involves determining other time-of-flight data for one or more neighboring pixels which are neighboring the at least one pixel. The approach further involves determining at least one range distance to the at least one point of the at least one surface by causing, at least in part, a factoring out of the second distance from the time-of-flight data by using the other time-of-flight data. The approach also involves causing, at least in part, a reconstruction of the at least one surface using the at least one range distance.
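A minimal sketch of factoring out the retro-reflector leg, assuming pixels that observed the surface directly are already identified (specular_mask) and that a local median of their readings approximates the first leg; the window size, names, and smoothness assumption are all hypothetical.

import numpy as np
from scipy.ndimage import median_filter

def recover_surface_range(tof, specular_mask, window=5):
    # tof: per-pixel time-of-flight range; for pixels flagged in specular_mask
    # the reading is (sensor -> surface point) + (surface point -> reflector).
    tof = tof.astype(float)
    # Assume neighbouring non-specular pixels saw the surface directly, so a
    # local median of their readings approximates the first leg.
    direct = np.where(specular_mask, np.nan, tof)
    filled = np.where(np.isnan(direct), median_filter(tof, size=window), direct)
    first_leg = median_filter(filled, size=window)
    # Second distance, then factor it out of the two-leg reading.
    second_leg = np.where(specular_mask, tof - first_leg, 0.0)
    return tof - second_leg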