TrackKLT class
KLT tracking of features.
Contents
This is the implementation of a KLT visual frontend for tracking sparse features. Monocular cameras are tracked across time (temporally), while stereo pairs are both tracked across time (temporally) and matched from the left to the right image to recover stereo correspondence information. The tracking itself uses the calcOpticalFlowPyrLK OpenCV function.
Base classes
- class TrackBase
- Visual feature tracking base class.
Constructors, destructors, conversion operators
Public functions
- void feed_monocular(double timestamp, cv::Mat& img, size_t cam_id) override
- Process a new monocular image.
- void feed_stereo(double timestamp, cv::Mat& img_left, cv::Mat& img_right, size_t cam_id_left, size_t cam_id_right) override
- Process new stereo pair of images.
Protected functions
- void perform_detection_monocular(const std::vector<cv::Mat>& img0pyr, std::vector<cv::KeyPoint>& pts0, std::vector<size_t>& ids0)
- Detects new features in the current image.
- void perform_detection_stereo(const std::vector<cv::Mat>& img0pyr, const std::vector<cv::Mat>& img1pyr, std::vector<cv::KeyPoint>& pts0, std::vector<cv::KeyPoint>& pts1, std::vector<size_t>& ids0, std::vector<size_t>& ids1)
- Detects new features in the current stereo pair.
- void perform_matching(const std::vector<cv::Mat>& img0pyr, const std::vector<cv::Mat>& img1pyr, std::vector<cv::KeyPoint>& pts0, std::vector<cv::KeyPoint>& pts1, size_t id0, size_t id1, std::vector<uchar>& mask_out)
- KLT track between two images, and do RANSAC afterwards.
Function documentation
ov_core::TrackKLT::TrackKLT(int numfeats,
int numaruco,
int fast_threshold,
int gridx,
int gridy,
int minpxdist) explicit
Public constructor with configuration variables.
| Parameters | |
|---|---|
| numfeats | number of features we want to track (i.e. track 200 points from frame to frame) |
| numaruco | the max id of the aruco tags, so we ensure that we start our non-aruco features above this value |
| fast_threshold | FAST detection threshold |
| gridx | size of grid in the x-direction / u-direction |
| gridy | size of grid in the y-direction / v-direction |
| minpxdist | features need to be at least this number of pixels away from each other |
void ov_core::TrackKLT::feed_monocular(double timestamp,
cv::Mat& img,
size_t cam_id) override
Process a new monocular image.
| Parameters | |
|---|---|
| timestamp | timestamp the new image occurred at |
| img | new cv::Mat grayscale image |
| cam_id | the camera id that this new image corresponds to |
void ov_core::TrackKLT::feed_stereo(double timestamp,
cv::Mat& img_left,
cv::Mat& img_right,
size_t cam_id_left,
size_t cam_id_right) override
Process new stereo pair of images.
| Parameters | |
|---|---|
| timestamp | timestamp this pair occurred at (stereo is synchronised) |
| img_left | first grayscaled image |
| img_right | second grayscaled image |
| cam_id_left | first image camera id |
| cam_id_right | second image camera id |
void ov_core::TrackKLT::perform_detection_monocular(const std::vector<cv::Mat>& img0pyr,
std::vector<cv::KeyPoint>& pts0,
std::vector<size_t>& ids0) protected
Detects new features in the current image.
| Parameters | |
|---|---|
| img0pyr | image we will detect features on (first level of pyramid) |
| pts0 | vector of currently extracted keypoints in this image |
| ids0 | vector of feature ids for each currently extracted keypoint |
Given an image and its currently extracted features, this will try to add new features if needed. Will try to always have the "max_features" being tracked through KLT at each timestep. Passed images should already be grayscaled.
void ov_core::TrackKLT::perform_detection_stereo(const std::vector<cv::Mat>& img0pyr,
const std::vector<cv::Mat>& img1pyr,
std::vector<cv::KeyPoint>& pts0,
std::vector<cv::KeyPoint>& pts1,
std::vector<size_t>& ids0,
std::vector<size_t>& ids1) protected
Detects new features in the current stereo pair.
| Parameters | |
|---|---|
| img0pyr | left image we will detect features on (first level of pyramid) |
| img1pyr | right image we will detect features on (first level of pyramid) |
| pts0 | left vector of currently extracted keypoints |
| pts1 | right vector of currently extracted keypoints |
| ids0 | left vector of feature ids for each currently extracted keypoint |
| ids1 | right vector of feature ids for each currently extracted keypoint |
This does the same logic as the perform_detection_monocular() function, but for a stereo pair: features detected in the left image are also matched into the right image so that valid stereo correspondences are available.
void ov_core::TrackKLT::perform_matching(const std::vector<cv::Mat>& img0pyr,
const std::vector<cv::Mat>& img1pyr,
std::vector<cv::KeyPoint>& pts0,
std::vector<cv::KeyPoint>& pts1,
size_t id0,
size_t id1,
std::vector<uchar>& mask_out) protected
KLT track between two images, and do RANSAC afterwards.
| Parameters | |
|---|---|
| img0pyr | starting image pyramid |
| img1pyr | image pyramid we want to track into |
| pts0 | starting points |
| pts1 | points we have tracked |
| id0 | id of the first camera |
| id1 | id of the second camera |
| mask_out | what points had valid tracks |
This will track features from the first image into the second image. The two point vectors will be of equal size, but the mask_out variable will specify which points are good or bad. If the second vector is non-empty, it will be used as an initial guess of where the keypoints are in the second image.