TrackBase class
Visual feature tracking base class.
This is the base class for all our visual trackers. It provides a common interface so the underlying trackers can hide their implementation complexities. Tracking information is stored in a "feature database", which the user can query for features to use in an MSCKF or batch-based setting. Feature tracks store both the raw (distorted) and undistorted/normalized values. Right now we support just two camera models; see undistort_point().
This base class also handles most of the heavy lifting for visualization, but sub-classes can override this and apply their own logic if they want (e.g. TrackAruco has its own visualization logic). Visualization needs access to the prior images and their tracks, so it must synchronise with them when multi-threading. This shouldn't hurt normal performance, but high-frequency visualization calls can negatively affect tracking performance.
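To make the raw-versus-normalized storage concrete, here is a minimal plain-std sketch of a feature database keyed by feature id, where each track keeps both measurements per timestamp. The `Measurement` struct, `feature_db` map, and `record()` helper are hypothetical stand-ins; the real FeatureDatabase class is considerably richer.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

// Hypothetical stand-in for the feature database described above: each
// feature id maps to its track, and each track keeps both the raw (distorted)
// pixel measurement and the undistorted/normalized one, per timestamp.
struct Measurement {
  double timestamp;
  double u, v;            // raw (distorted) pixel coordinates
  double u_norm, v_norm;  // undistorted, normalized image-plane coordinates
};

std::map<size_t, std::vector<Measurement>> feature_db;

// Append one measurement to a feature's track (creating the track if new).
void record(size_t feat_id, const Measurement &m) {
  feature_db[feat_id].push_back(m);
}
```

An estimator would then query `feature_db` for tracks of interest and choose whether to consume the raw or the normalized coordinates.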
Derived classes
- class TrackAruco
- Tracking of OpenCV ArUco tags.
- class TrackDescriptor
- Descriptor-based visual tracking.
- class TrackKLT
- KLT tracking of features.
- class TrackSIM
- Simulated tracker for when we already have uv measurements!
Constructors, destructors, conversion operators
Public functions
- void set_calibration(std::map<size_t, Eigen::VectorXd> camera_calib, std::map<size_t, bool> camera_fisheye, bool correct_active = false)
- Given the camera intrinsic values, this will set what we should normalize points with. This will also update the feature database with corrected normalized values. Normally this would only be needed if we are optimizing our camera parameters, and thus should re-normalize.
- void feed_monocular(double timestamp, cv::Mat& img, size_t cam_id) pure virtual
- Process a new monocular image.
- void feed_stereo(double timestamp, cv::Mat& img_left, cv::Mat& img_right, size_t cam_id_left, size_t cam_id_right) pure virtual
- Process new stereo pair of images.
- void display_active(cv::Mat& img_out, int r1, int g1, int b1, int r2, int g2, int b2) virtual
- Shows features extracted in the last image.
- void display_history(cv::Mat& img_out, int r1, int g1, int b1, int r2, int g2, int b2) virtual
- Shows a "trail" for each feature (i.e. its history)
- auto get_feature_database() -> FeatureDatabase*
- Get the feature database with all the track information.
- void change_feat_id(size_t id_old, size_t id_new)
- Changes the ID of an actively tracked feature to another one.
- auto undistort_point(cv::Point2f pt_in, size_t cam_id) -> cv::Point2f
- Main function that will undistort/normalize a point.
Protected functions
- auto undistort_point_brown(cv::Point2f pt_in, cv::Matx33d& camK, cv::Vec4d& camD) -> cv::Point2f
- Undistort function RADTAN/BROWN.
- auto undistort_point_fisheye(cv::Point2f pt_in, cv::Matx33d& camK, cv::Vec4d& camD) -> cv::Point2f
- Undistort function FISHEYE/EQUIDISTANT.
Protected variables
- FeatureDatabase* database
- Database with all our current features.
- std::map<size_t, bool> camera_fisheye
- If we are a fisheye model or not.
- std::map<size_t, cv::Matx33d> camera_k_OPENCV
- Camera intrinsics in OpenCV format.
- std::map<size_t, cv::Vec4d> camera_d_OPENCV
- Camera distortion in OpenCV format.
- int num_features
- Number of features we should try to track frame to frame.
- std::vector<std::mutex> mtx_feeds
- Mutexes for our last set of image storage (img_last, pts_last, and ids_last)
- std::map<size_t, cv::Mat> img_last
- Last set of images (use map so all trackers render in the same order)
- std::unordered_map<size_t, std::vector<cv::KeyPoint>> pts_last
- Last set of tracked points.
- std::unordered_map<size_t, std::vector<size_t>> ids_last
- Set of IDs of each current feature in the database.
- std::atomic<size_t> currid
- Master ID for this tracker (atomic to allow for multi-threading)
Function documentation
ov_core:: TrackBase:: TrackBase(int numfeats,
int numaruco)
Public constructor with configuration variables.
| Parameters | |
|---|---|
| numfeats | number of features we want to track (i.e. track 200 points from frame to frame) |
| numaruco | the max id of the ArUco tags, so we ensure that we start our non-ArUco features above this value |
void ov_core:: TrackBase:: set_calibration(std::map<size_t, Eigen::VectorXd> camera_calib,
std::map<size_t, bool> camera_fisheye,
bool correct_active = false)
Given the camera intrinsic values, this will set what we should normalize points with. This will also update the feature database with corrected normalized values. Normally this would only be needed if we are optimizing our camera parameters, and thus should re-normalize.
| Parameters | |
|---|---|
| camera_calib | Calibration parameters for all cameras [fx,fy,cx,cy,d1,d2,d3,d4] |
| camera_fisheye | Map of camera_id => bool if we should do radtan or fisheye distortion model |
| correct_active | If we should re-undistort active features in our database |
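The `[fx,fy,cx,cy,d1,d2,d3,d4]` layout can be illustrated with a small sketch of how such a vector splits into an intrinsic matrix and a distortion vector per camera. This is a plain-std approximation: the real member takes `std::map<size_t, Eigen::VectorXd>` and stores `cv::Matx33d`/`cv::Vec4d` (in `camera_k_OPENCV`/`camera_d_OPENCV`); the `Mat3`/`Vec4` aliases and the example numbers below are assumptions for illustration.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

// Plain-std stand-ins for cv::Matx33d and cv::Vec4d so the sketch compiles
// without OpenCV.
using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec4 = std::array<double, 4>;

std::map<size_t, Mat3> camera_k_OPENCV;  // per-camera intrinsic matrix
std::map<size_t, Vec4> camera_d_OPENCV;  // per-camera distortion coefficients

// Split each [fx,fy,cx,cy,d1,d2,d3,d4] calibration vector into a 3x3 K and a
// 4-vector of distortion parameters, keyed by camera id.
void set_calibration(const std::map<size_t, std::vector<double>> &camera_calib) {
  for (const auto &kv : camera_calib) {
    const std::vector<double> &c = kv.second;
    Mat3 K{};
    K[0] = {c[0], 0.0, c[2]};  // fx, 0, cx
    K[1] = {0.0, c[1], c[3]};  // 0, fy, cy
    K[2] = {0.0, 0.0, 1.0};
    camera_k_OPENCV[kv.first] = K;
    camera_d_OPENCV[kv.first] = {c[4], c[5], c[6], c[7]};
  }
}
```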
void ov_core:: TrackBase:: feed_monocular(double timestamp,
cv::Mat& img,
size_t cam_id) pure virtual
Process a new monocular image.
| Parameters | |
|---|---|
| timestamp | timestamp the new image occurred at |
| img | new cv::Mat grayscale image |
| cam_id | the camera id that this new image corresponds to |
void ov_core:: TrackBase:: feed_stereo(double timestamp,
cv::Mat& img_left,
cv::Mat& img_right,
size_t cam_id_left,
size_t cam_id_right) pure virtual
Process new stereo pair of images.
| Parameters | |
|---|---|
| timestamp | timestamp this pair occurred at (stereo is synchronised) |
| img_left | first grayscale image |
| img_right | second grayscale image |
| cam_id_left | first image camera id |
| cam_id_right | second image camera id |
void ov_core:: TrackBase:: display_active(cv::Mat& img_out,
int r1,
int g1,
int b1,
int r2,
int g2,
int b2) virtual
Shows features extracted in the last image.
| Parameters | |
|---|---|
| img_out | image on which we will overlay the features |
| r1 | first color to draw in |
| g1 | first color to draw in |
| b1 | first color to draw in |
| r2 | second color to draw in |
| g2 | second color to draw in |
| b2 | second color to draw in |
void ov_core:: TrackBase:: display_history(cv::Mat& img_out,
int r1,
int g1,
int b1,
int r2,
int g2,
int b2) virtual
Shows a "trail" for each feature (i.e. its history)
| Parameters | |
|---|---|
| img_out | image on which we will overlay the features |
| r1 | first color to draw in |
| g1 | first color to draw in |
| b1 | first color to draw in |
| r2 | second color to draw in |
| g2 | second color to draw in |
| b2 | second color to draw in |
FeatureDatabase* ov_core:: TrackBase:: get_feature_database()
Get the feature database with all the track information.
| Returns | FeatureDatabase pointer that one can query for features |
|---|
void ov_core:: TrackBase:: change_feat_id(size_t id_old,
size_t id_new)
Changes the ID of an actively tracked feature to another one.
| Parameters | |
|---|---|
| id_old | Old id we want to change |
| id_new | Id we want to change the old id to |
cv::Point2f ov_core:: TrackBase:: undistort_point(cv::Point2f pt_in,
size_t cam_id)
Main function that will undistort/normalize a point.
| Parameters | |
|---|---|
| pt_in | uv 2x1 point that we will undistort |
| cam_id | id of which camera this point is in |
| Returns | undistorted 2x1 point |
Given a uv point, this will undistort it based on the camera matrices. This will call the model needed, depending on what type of camera it is! So if the fisheye flag for camera_1 is true, we will undistort with the fisheye model. In Kalibr's terms, the non-fisheye model is pinhole-radtan while the fisheye model is pinhole-equi.
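That dispatch boils down to a per-camera flag lookup. Below is a plain-std sketch of it; the `Pt` struct is a hypothetical stand-in for `cv::Point2f`, and the two model functions are placeholders (identity here) where the real members apply the radtan and equidistant math.

```cpp
#include <cassert>
#include <cstddef>
#include <map>

struct Pt { double x, y; };  // stand-in for cv::Point2f

std::map<size_t, bool> camera_fisheye;  // cam_id => true if equidistant model

// Placeholder model functions: the real ones apply the pinhole-radtan and
// pinhole-equi math; here they just pass the point through.
Pt undistort_point_brown(Pt p) { return p; }
Pt undistort_point_fisheye(Pt p) { return p; }

// Mirror of the dispatch in undistort_point(): pick the model based on the
// camera's flag. map::at throws if the camera id was never calibrated.
Pt undistort_point(Pt pt_in, size_t cam_id) {
  return camera_fisheye.at(cam_id) ? undistort_point_fisheye(pt_in)
                                   : undistort_point_brown(pt_in);
}
```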
cv::Point2f ov_core:: TrackBase:: undistort_point_brown(cv::Point2f pt_in,
cv::Matx33d& camK,
cv::Vec4d& camD) protected
Undistort function RADTAN/BROWN.
Given a uv point, this will undistort it based on the camera matrices. To equate this to Kalibr's models, this is what you would use for pinhole-radtan.
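The radtan inversion is typically done by fixed-point iteration on the normalized point. The sketch below is a plain-std approximation of that model, not the class's actual member: `Pt`, `undistort_radtan`, and `distort_radtan` are hypothetical names, the point is assumed already normalized (the real member also converts pixels to/from normalized coordinates with `camK`), and `camD` is taken as `[k1, k2, p1, p2]`.

```cpp
#include <cassert>
#include <cmath>

struct Pt { double x, y; };  // stand-in for cv::Point2f (double for clarity)

// Iteratively invert Brown-Conrady (radtan) distortion on a normalized point.
Pt undistort_radtan(Pt pt_in, const double camD[4]) {
  const double k1 = camD[0], k2 = camD[1], p1 = camD[2], p2 = camD[3];
  double x = pt_in.x, y = pt_in.y;  // initial guess: the distorted point
  for (int i = 0; i < 20; i++) {
    double r2 = x * x + y * y;
    double radial = 1.0 + k1 * r2 + k2 * r2 * r2;
    double dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
    double dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;
    x = (pt_in.x - dx) / radial;
    y = (pt_in.y - dy) / radial;
  }
  return {x, y};
}

// Forward model, used here only to check that the inversion round-trips.
Pt distort_radtan(Pt p, const double camD[4]) {
  const double k1 = camD[0], k2 = camD[1], p1 = camD[2], p2 = camD[3];
  double r2 = p.x * p.x + p.y * p.y;
  double radial = 1.0 + k1 * r2 + k2 * r2 * r2;
  return {p.x * radial + 2.0 * p1 * p.x * p.y + p2 * (r2 + 2.0 * p.x * p.x),
          p.y * radial + p1 * (r2 + 2.0 * p.y * p.y) + 2.0 * p2 * p.x * p.y};
}
```

For moderate distortion the iteration converges well within 20 steps; distorting a point and then undistorting it should recover the original to high precision.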
cv::Point2f ov_core:: TrackBase:: undistort_point_fisheye(cv::Point2f pt_in,
cv::Matx33d& camK,
cv::Vec4d& camD) protected
Undistort function FISHEYE/EQUIDISTANT.
Given a uv point, this will undistort it based on the camera matrices. To equate this to Kalibr's models, this is what you would use for pinhole-equi.
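In the equidistant model the distorted radius encodes the angle theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8); undistorting means solving this polynomial for theta and rescaling the point so its radius is tan(theta). The sketch below is a plain-std approximation, not the class's actual member: `Pt`, `undistort_equidistant`, and `distort_equidistant` are hypothetical names, the point is assumed already normalized (the real member also converts with `camK`), and `camD` is taken as `[k1, k2, k3, k4]`.

```cpp
#include <cassert>
#include <cmath>

struct Pt { double x, y; };  // stand-in for cv::Point2f

// Invert the equidistant ("pinhole-equi") fisheye model on a normalized point.
Pt undistort_equidistant(Pt pt_in, const double camD[4]) {
  double theta_d = std::sqrt(pt_in.x * pt_in.x + pt_in.y * pt_in.y);
  if (theta_d < 1e-12) return pt_in;  // at the center there is nothing to do
  double theta = theta_d;             // initial guess for the true angle
  for (int i = 0; i < 20; i++) {
    double t2 = theta * theta;
    double poly = 1.0 + t2 * (camD[0] + t2 * (camD[1] + t2 * (camD[2] + t2 * camD[3])));
    theta = theta_d / poly;           // fixed-point update
  }
  double scale = std::tan(theta) / theta_d;
  return {pt_in.x * scale, pt_in.y * scale};
}

// Forward model, used here only to verify the round trip.
Pt distort_equidistant(Pt p, const double camD[4]) {
  double r = std::sqrt(p.x * p.x + p.y * p.y);
  double theta = std::atan(r);
  double t2 = theta * theta;
  double theta_d = theta * (1.0 + t2 * (camD[0] + t2 * (camD[1] + t2 * (camD[2] + t2 * camD[3]))));
  return {p.x * theta_d / r, p.y * theta_d / r};
}
```

As with the radtan sketch, distorting a point and then undistorting it should recover the original to high precision for typical fisheye coefficients.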