klampt.model.calibrate module

Functions for extrinsic calibration of cameras.

New in version 0.9.

class klampt.model.calibrate.PointMarker(link=None, local_coordinates=None)[source]

Bases: object

A physical point marker that can be detected in an image.

link

link on which this is located. None means attached to the world frame.

Type

int or str, optional

size

used only for visual debugging, default 0.01 (1cm)

Type

float

local_coordinates

position relative to link

Type

3-vector, optional

variable

whether this should be optimized

Type

bool
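
For example, a point marker on a hypothetical end-effector link (the link name 'tool0' and the coordinates are placeholders):

marker = PointMarker(link='tool0', local_coordinates=[0,0,0.05])
marker.variable = False   #hold this marker fixed during optimization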

class klampt.model.calibrate.TransformMarker(link=None, local_coordinates=None, local_features=None)[source]

Bases: object

A physical marker with an orientation. Either the marker’s transform or the positions of feature points on the marker can be detected in an image.

link

link on which this is located. None means attached to the world frame.

Type

int or str, optional

size

used only for visual debugging, default 0.01 (1cm)

Type

float

local_coordinates

pose relative to link

Type

se3 element, optional

local_features

feature points relative to marker

Type

list of 3-vectors

variable

whether this should be optimized

Type

bool
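
For example, a square fiducial on an end-effector link, with its four corners given as feature points (the link name 'tool0' and the 15 cm side length are placeholders):

s = 0.15/2
corners = [[-s,-s,0],[s,-s,0],[s,s,0],[-s,s,0]]
marker = TransformMarker(link='tool0', local_features=corners)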

class klampt.model.calibrate.CameraInfo(link, intrinsics=None, local_coordinates=None)[source]

Bases: object

Stores info for a camera in a calibration.

link

link on which this is located. None means attached to the world frame.

Type

int or str, optional

intrinsics

dict with keys ‘fx’, ‘fy’, ‘cx’, ‘cy’ giving the camera intrinsics.

Type

dict

local_coordinates

pose relative to link

Type

se3 element, optional

variable

whether this should be optimized

Type

bool
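
For example, a statically mounted camera (link=None) with placeholder pinhole intrinsics:

intrinsics = {'fx':600.0,'fy':600.0,'cx':320.0,'cy':240.0}
camera = CameraInfo(link=None, intrinsics=intrinsics)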

class klampt.model.calibrate.PointObservation(value, camera_id, frame_id, marker_id, feature_id=None)[source]

Bases: object

A 3D point measurement in the frame of a camera.

X is to the right, Y is down, Z is forward. Units are in m.

class klampt.model.calibrate.TransformObservation(value, camera_id, frame_id, marker_id)[source]

Bases: object

A 3D SE(3) transform measurement in the frame of a camera.

X is to the right, Y is down, Z is forward. Translation units are in m.

class klampt.model.calibrate.PixelObservation(value, camera_id, frame_id, marker_id, feature_id=None)[source]

Bases: object

A 2D pixel measurement in the frame of a camera.

X is to the right, Y is down. Units are in pixels.
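
Observations are usually created via RobotExtrinsicCalibration.addDetection() rather than constructed directly, but direct construction looks like this (all ids and the pixel value are placeholders):

obs = PixelObservation([412.5,283.0], camera_id=0, frame_id=3, marker_id=0, feature_id=1)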

class klampt.model.calibrate.RobotExtrinsicCalibration[source]

Bases: object

Stores a calibration dataset and helps perform extrinsic (hand-eye) calibration.

Usage for a world-attached camera, Aruco marker on end effector:

calib = RobotExtrinsicCalibration()
calib.robot = robot   #TODO: set the robot model here
calib.cameras[0] = CameraInfo(link=None,intrinsics=intrinsics)
tag_id = 0            #TODO: set the Aruco tag ID used on the marker
ee_link = 'ee_link'   #TODO: set the EE link index or name here
calib.markers[tag_id] = TransformMarker(ee_link)

calib.visualize()         #this will show a visualization of the markers and camera

traj = calib.editTrajectory()    #this will pop up an editing window and return the trajectory

controller = MyRobotInterface()  #setup the robot controller here
controller.initialize()

def take_camera_picture():
    #TODO: take the picture from the camera here
    return image

#TODO: setup the Aruco detector... see the Aruco documentation for more details
camparam = aruco.CameraParameters()
camparam.readFromXMLFile('cam_params.yml')
detector = aruco.MarkerDetector()
detector.setDictionary("my_markers.dict")

def detect_aruco_marker(image):
    #return all detected markers as (id,transform) pairs
    markers = detector.detect(image)
    results = []
    for marker in markers:
        marker.calculateExtrinsics(0.15, camparam)   #0.15 = marker side length in meters
        mtx = marker.getTransformMatrix()
        Tmarker = se3.from_homogeneous(mtx)  #convert to klampt.se3
        results.append((marker.id,Tmarker))
    return results

#if you just want to run detection on the pictures offline, use
#take_camera_picture = None and detect_aruco_marker = None
calib.executeTrajectory(controller,traj,take_camera_picture,detect_aruco_marker)

calib.editCalibration()     #visual editing of initial guesses
calib.optimize()   #optimize the calibration
calib.save('my_calibration.json')

To use reprojection (pixel) error instead, the camera intrinsics must be provided:

intrinsics = {'fx':fx,'fy':fy,'cx':cx,'cy':cy}  #intrinsics are needed if you are using pixels
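
A sketch of a pixel-based detection pipeline, assuming a hypothetical detect_corners(image) helper that returns (marker_id, corner pixel list) pairs:

calib.cameras[0] = CameraInfo(link=None,intrinsics=intrinsics)

def detect_marker_pixels(image):
    results = []
    for marker_id,corner_pixels in detect_corners(image):   #hypothetical detector
        for feature_id,px in enumerate(corner_pixels):
            results.append(((marker_id,feature_id),px))     #((marker,feature),value) pairs
    return results

calib.executeTrajectory(controller,traj,take_camera_picture,detect_marker_pixels)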

If you want to load a set of configurations and marker transforms from disk manually:

... do calib robot and marker setup ...
configs = klampt.io.loader.load('calibration.configs')
transforms = []
with open('marker_transforms.txt','r') as f:
    for line in f.readlines():
        if len(line.strip()) > 0:
            T = klampt.io.loader.read(line,'RigidTransform')
            transforms.append(T)
for i,(q,T) in enumerate(zip(configs,transforms)):
    calib.frames.append(None)
    calib.configurations.append(q)
    calib.addDetection(T,frame_id=i,marker_id=0)

calib.editCalibration()     #visual editing of initial guesses
calib.optimize()  #optimize the calibration
calib.save('my_calibration.json')

To visualize the calibration:

... do calib robot and marker setup ...
calib.load('my_calibration.json')
calib.visualize()

To fix the marker transform during optimization, e.g. at local coordinates [x,y,z]:

calib.markers[0].local_coordinates = (so3.identity(),[x,y,z])
calib.markers[0].variable = False
calib.optimize()

A similar flag can be used for fixing the camera.
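
For example (R_cam and t_cam stand in for a known camera pose relative to its link):

calib.cameras[0].local_coordinates = (R_cam,t_cam)   #known camera pose
calib.cameras[0].variable = False
calib.optimize()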

cameraFromRobot(sensors=None)[source]

Sets up the camera(s) from the robot model.

Parameters

sensors (str, int, list of str, or list of int, optional) – specifies one or more sensors to use. If None, just uses the first camera. If ‘all’, uses all cameras.

Return type

None
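
For example, if the robot model already defines camera sensors:

calib.cameraFromRobot()        #use the first camera sensor
calib.cameraFromRobot('all')   #or register every camera sensor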

editTrajectory(world=None, name='calibration_trajectory')[source]

Returns a RobotTrajectory that passes through all calibration configurations.

Note: the executed trajectory stops at all milestones.

Return type

RobotTrajectory

executeTrajectory(controller, traj, camera_fn, detect_fn=None, speed=1.0, wait=0.0)[source]

Executes a trajectory on a controller, optionally taking images along the way.

TODO: specify times to actually stop.

Return type

None

visualize(world=None, edit=False)[source]

Pops up a visualization window showing the calibration setup.

Return type

None

editCalibration(world=None)[source]
Return type

None

addFrame(frame, q=None, camera_id=None, detect_fn=None)[source]

Add a frame and configuration, automatically using forward kinematics to determine camera / marker transforms. This is the easiest way to add frames and detections.

Parameters
  • frame (Image) – the image for the frame.

  • q (configuration, optional) – the robot’s configuration. If not given, the observations’ hand_eye_transform must be filled in manually.

  • camera_id (int or str, optional) – the camera ID, 0 by default.

  • detect_fn (callable, optional) – if given, a function detect(frame) which returns a list of detected observations. Each observation can be a PixelObservation, PointObservation, TransformObservation, or a (marker,value) pair. To indicate a detected feature on a marker, a ((marker,feature),value) pair is returned.

Return type

None
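
A minimal sketch, assuming image is a captured frame and detect_markers is a hypothetical function returning (marker,value) pairs:

q = calib.robot.getConfig()   #the configuration at which the image was captured
calib.addFrame(image, q=q, detect_fn=detect_markers)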

addDetection(value, frame_id, marker_id, feature_id=None, camera_id=None, error=None)[source]

Manually adds a detection for a given camera / frame / marker / feature. Automatically determines a hand_eye_transform for the observation if a robot and configuration are already provided for this frame.

Parameters
  • value (2-list, 3-list, or SE3 element) – the pixel, point, or transform of the detection, given in camera coordinates.

  • frame_id (int) – the index of the frame

  • marker_id (int or str) – the index of the marker

  • feature_id (int, optional) – the index of the feature on the marker.

  • camera_id (int or str, optional) – the camera that observed this feature (default 0)

  • error (float or list, optional) – the standard deviation of the observation error. If a float is given, noise is assumed to be isotropic.

Returns

The configured detection object.

Return type

Union[PixelObservation, PointObservation, TransformObservation]
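
For example, to record a transform detection with an assumed 5 mm isotropic error (the values are placeholders):

T = (so3.identity(),[0.0,0.0,0.5])   #marker pose in camera coordinates
calib.addDetection(T, frame_id=0, marker_id=0, error=0.005)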

cameraTransforms(world=False)[source]

Returns a dict of camera transforms, in either world or link-local coordinates. World coordinates use the robot’s current configuration.

Return type

Dict[Union[int, str], Tuple[Sequence[float], Sequence[float]]]

markerTransforms(world=False)[source]

Returns a dict of marker transforms, in either world or link-local coordinates. World coordinates use the robot’s current configuration.

Return type

Dict[int, Tuple[Sequence[float], Sequence[float]]]
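
For example, to inspect the estimated poses in world coordinates at the robot’s current configuration:

for cam_id,T in calib.cameraTransforms(world=True).items():
    print('camera',cam_id,'at position',T[1])
for marker_id,T in calib.markerTransforms(world=True).items():
    print('marker',marker_id,'at position',T[1])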

setCameraTransform(camera, T, world=False)[source]

Sets the camera transform for the specified camera. World coordinates use the robot’s current configuration.

setMarkerTransform(marker, T, world=False)[source]

Sets the transform / coordinates for the specified marker. World coordinates use the robot’s current configuration.

predictedObservations(camera_transforms=None, marker_transforms=None, noise=False)[source]

Returns a list of predicted observations for a given set of camera and marker transforms.

Parameters
  • camera_transforms (dict key->se3 elements, optional) – keys matching self.cameras. If None, then it is assumed that all cameras have transforms (local_coordinates) specified.

  • marker_transforms (dict key->se3 elements, optional) – keys matching self.markers. If None, then it is assumed that all markers have transforms (local_coordinates) specified.

  • noise (bool or float) – if not False, then noise is injected into the prediction. This is useful for generating synthetic data and understanding the observation error model.

Returns

A list of configured observation objects, corresponding to the values in self.observations.

Return type

List[Union[PointObservation, PixelObservation, TransformObservation]]
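
This is useful for generating synthetic datasets; a sketch, assuming all camera and marker transforms have been set and that each observation object stores its measurement in .value:

synthetic = calib.predictedObservations(noise=0.001)   #inject noise with std. dev. 0.001
for pred,obs in zip(synthetic,calib.observations):
    print(pred.value,'vs observed',obs.value)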

predictedResiduals(camera_transforms=None, marker_transforms=None)[source]

Returns a list of predicted residuals for a given set of camera and marker transforms.

Parameters
  • camera_transforms (list of se3 elements, optional) – length len(self.cameras). If None, then it is assumed that all cameras have transforms (local_coordinates) specified.

  • marker_transforms (list of se3 elements, optional) – length len(self.markers). If None, then it is assumed that all markers have transforms (local_coordinates) specified.

Returns

A list of residuals (prediction - observed value) corresponding to the values in self.observations.

Return type

List[Sequence[float]]
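
For example, a quick root-mean-square summary of the current guesses:

import math
residuals = calib.predictedResiduals()
sqerr = [sum(x**2 for x in r) for r in residuals]
print('RMSE:',math.sqrt(sum(sqerr)/len(sqerr)))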

predictedLogLikelihoods(camera_transforms=None, marker_transforms=None)[source]

Returns a list of predicted observation log-likelihoods for a given set of camera and marker transforms.

Parameters
  • camera_transforms (list of se3 elements, optional) – length len(self.cameras). If None, then it is assumed that all cameras have transforms (local_coordinates) specified.

  • marker_transforms (list of se3 elements, optional) – length len(self.markers). If None, then it is assumed that all markers have transforms (local_coordinates) specified.

Returns

A list of log likelihoods of residuals corresponding to the values in self.observations.

Return type

List[float]

optimize(maxIters=100, tol=1e-07, regularizationFactor=0.0, store=True)[source]

Optimizes the calibrated transforms.

Requires scipy.

Returns

A tuple of the RMSE, camera transform dictionary, and marker transform dictionary.

Return type

Tuple[float, Dict[Union[int, str], Tuple[Sequence[float], Sequence[float]]], Dict[Union[int, str], Tuple[Sequence[float], Sequence[float]]]]
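
For example:

rmse,camera_Ts,marker_Ts = calib.optimize(maxIters=200)
print('calibration RMSE:',rmse)
calib.updateRobotSensors()   #push the optimized camera transforms to the robot model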

updateRobotSensors()[source]

Uses the current camera transforms to update the robot model’s sensors.

Return type

None

save(fn)[source]

Saves to a JSON file on disk.

Warning

Images will not be saved.

Parameters

fn (str or file) – the file to save to.

Return type

None

load(fn)[source]

Loads from a JSON file on disk.

Parameters

fn (str or file) – the file to load from

Return type

None