klampt.model.calibrate module

Functions for extrinsic calibration of cameras.

Added in version 0.9.

klampt.model.calibrate.robot_sensors(robot, type=None)[source]

Returns all sensors on the robot of the given type.
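
For example, a minimal sketch that retrieves all camera sensors on a robot model ('CameraSensor' is the standard Klampt sensor type name; world is a WorldModel containing the robot):

from klampt.model.calibrate import robot_sensors

cameras = robot_sensors(world.robot(0), type='CameraSensor')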

class klampt.model.calibrate.PointMarker(link=None, local_coordinates=None)[source]

Bases: object

A physical point marker that can be detected in an image.

link

link on which this is located. None means attached to the world frame.

Type:

int or str, optional

size

used only for visual debugging, default 0.01 (1cm)

Type:

float

local_coordinates

position relative to link

Type:

3-vector, optional

variable

whether this should be optimized

Type:

bool
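
For example, a minimal sketch of a point marker attached to a robot link (link index and offset are illustrative):

from klampt.model.calibrate import PointMarker

marker = PointMarker(link=6, local_coordinates=[0,0,0.05])  #5 cm along the link's local Z axis
marker.variable = False    #position is known exactly, so don't optimize it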

class klampt.model.calibrate.TransformMarker(link=None, local_coordinates=None, local_features=None)[source]

Bases: object

A physical marker with an orientation. Either the marker's transform or the points of its features can be detected in an image.

link

link on which this is located. None means attached to the world frame.

Type:

int or str, optional

size

used only for visual debugging, default 0.01 (1cm)

Type:

float

local_coordinates

pose relative to link

Type:

se3 element, optional

local_features

feature points relative to marker

Type:

list of 3-vectors

variable

whether this should be optimized

Type:

bool

static chessboard(nrows, ncols, square_size, link=None)[source]

Creates a chessboard marker with the given number of rows and columns, and the given square size.

The origin is at (0,0) and the feature points go left to right, bottom to top (row-major). I.e., feature 0 is (0,0,0), feature 1 is (square_size,0,0), feature 2 is (2*square_size,0,0), etc.
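
A minimal sketch of the resulting feature layout, assuming a 7 x 5 board with 5 cm squares:

board = TransformMarker.chessboard(7, 5, 0.05)
#features advance along a row first:
#feature 0 -> (0,0,0), feature 1 -> (0.05,0,0), feature 2 -> (0.10,0,0), ...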

class klampt.model.calibrate.CameraInfo(link, intrinsics=None, local_coordinates=None)[source]

Bases: object

Stores info for a camera during calibration.

link

link on which this is located. None means attached to the world frame.

Type:

int or str, optional

intrinsics

dict with keys ‘fx’, ‘fy’, ‘cx’, ‘cy’ giving the camera intrinsics.

Type:

dict

local_coordinates

pose relative to link

Type:

se3 element, optional

variable

whether this should be optimized

Type:

bool
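
For example, a minimal sketch of a fixed, world-attached camera with known intrinsics (values illustrative):

cam = CameraInfo(link=None, intrinsics={'fx':600.0,'fy':600.0,'cx':320.0,'cy':240.0})
cam.variable = True    #its pose will be solved for during optimization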

class klampt.model.calibrate.PointObservation(value, camera_id, frame_id, marker_id, feature_id=None)[source]

Bases: object

A 3D point measurement in the frame of a camera.

X is to the right, Y is down, Z is forward. Units are in m.

class klampt.model.calibrate.TransformObservation(value, camera_id, frame_id, marker_id)[source]

Bases: object

A 3D SE(3) transform measurement in the frame of a camera.

X is to the right, Y is down, Z is forward. Translation units are in m.

class klampt.model.calibrate.PixelObservation(value, camera_id, frame_id, marker_id, feature_id=None)[source]

Bases: object

A 2D pixel measurement in the frame of a camera.

X is to the right, Y is down. Units are in pixels.

class klampt.model.calibrate.RobotExtrinsicCalibration[source]

Bases: object

Stores a calibration dataset and helps perform extrinsic (hand-eye) calibration. Supports multiple cameras, both eye-in-hand and fixed-camera setups, and multiple marker types (pixels, 3D points, QR / ArUco tags).

Examples

Usage for a world-attached camera, using OpenCV to read images and an ArUco marker on the end effector:

import cv2
import aruco    #ArUco Python bindings
from klampt import WorldModel
from klampt.math import se3
from klampt.model.calibrate import RobotExtrinsicCalibration, CameraInfo, TransformMarker, PixelObservation

world = WorldModel()
world.loadElement("myrobot.urdf")  #load the robot model here

calib = RobotExtrinsicCalibration()
calib.robot = world.robot(0)
ee_link = calib.robot.numLinks()-1
tag_id = 0
#setup the camera mounted link and intrinsics here. If you are using transform
#markers, then the intrinsics are not necessary
calib.cameras[0] = CameraInfo(link=None,intrinsics={})
calib.markers[tag_id] = TransformMarker(ee_link)

calib.visualize(world)         #this will show a visualization of the markers and camera

traj = calib.editTrajectory(world)    #this will pop up an editing window and return the trajectory

#setup the robot controller here. Should be a subclass of RobotInterfaceBase
controller = MyRobotInterface()
controller.initialize()

#set up the OpenCV capture.  For other sensors, you will use a different
#function to capture the image.
cam_port = 0
cam = cv2.VideoCapture(cam_port)

def take_camera_picture():
    #take the picture from the camera here
    result, image = cam.read()
    if not result:
        raise RuntimeError("Error reading from camera")
    return image

#TODO: set up the ArUco detector...you will need to set up intrinsic parameters
#in cam_params.yml and marker parameters in my_markers.dict.  See the ArUco
#documentation for more details.
camparam = aruco.CameraParameters()
camparam.readFromXMLFile('cam_params.yml')
detector = aruco.MarkerDetector()
detector.setDictionary("my_markers.dict")

def detect_aruco_marker(image):
    #return all detected markers
    markers = detector.detect(image)
    results = []
    for marker in markers:
        marker.calculateExtrinsics(0.15, camparam)   #0.15 m marker side length
        mtx = marker.getTransformMatrix()
        Tmarker = se3.from_homogeneous(mtx)  #convert to klampt.se3
        results.append((marker.id,Tmarker))
    return results

#this will run the calibration sequence
calib.executeTrajectory(controller,traj,take_camera_picture,detect_aruco_marker)

calib.editCalibration()     #visual editing of initial guesses
calib.optimize()            #optimize the calibration
calib.save('my_calibration.json')

#clean up OpenCV camera
cam.release()

To use PixelObservations and optimize reprojection error, you will need to provide intrinsics. For example, to use chessboard detection:

intrinsics = {'fx':fx,'fy':fy,'cx':cx,'cy':cy}  #intrinsics are needed if you are using pixels
calib.cameras[0] = CameraInfo(link=None,intrinsics=intrinsics)

...
nrows, ncols = 7, 5     #number of rows and columns on the chessboard
square_size = 0.05      #5cm
calib.markers[tag_id] = TransformMarker.chessboard(nrows, ncols, square_size, link=ee_link)
def detect_chessboard_marker(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  #assuming BGR images
    flags = cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE + cv2.CALIB_CB_FAST_CHECK
    ret, corners = cv2.findChessboardCorners(gray, (ncols,nrows), flags)
    # If found, refine and add object points
    if ret:
        #subpixel detection termination criteria
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
        corners2 = cv2.cornerSubPix(gray,corners, (11,11), (-1,-1), criteria)
        #camera_id and frame_id are filled in for you
        return [PixelObservation(c[0],None,None,tag_id,feature_id) for feature_id,c in enumerate(corners2)]
    else:
        return []

...
calib.executeTrajectory(controller,traj,take_camera_picture,detect_chessboard_marker)

To calibrate an eye-in-hand setup, just give the link of the camera and set the marker link to None:

calib.cameras[0] = CameraInfo(link=ee_link,intrinsics={})
calib.markers[tag_id] = TransformMarker(None)

You do not have to integrate everything all at once. If you just want to run detection on the pictures offline, pass None as detect_fn to executeTrajectory(). Once the images are taken, you can run the detector and add the observations manually using addDetection().

If you want to load a set of configurations and marker transforms from disk manually:

... do calib robot and marker setup ...
configs = klampt.io.loader.load('Configs','calibration.configs')
transforms = []
with open('marker_transforms.txt','r') as f:
    for line in f.readlines():
        if len(line.strip()) > 0:
            T = klampt.io.loader.read('RigidTransform',line)
            transforms.append(T)
for i,(q,T) in enumerate(zip(configs,transforms)):
    calib.frames.append(None)
    calib.configurations.append(q)
    calib.addDetection(T,frame_id=i,marker_id=0)

calib.editCalibration()     #visual editing of initial guesses
calib.optimize()            #optimize the calibration
calib.save('my_calibration.json')

To visualize the calibration:

... do calib robot and marker setup ...
calib.load('my_calibration.json')
calib.visualize()

To fix the marker transform during optimization, e.g. at local coordinates [x,y,z]:

calib.markers[0].local_coordinates = (so3.identity(),[x,y,z])
calib.markers[0].variable = False
calib.optimize()

A similar flag can be used for fixing the camera.
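
For example, to fix camera 0 at a known pose T_camera (an se3 element, relative to its link):

calib.cameras[0].local_coordinates = T_camera
calib.cameras[0].variable = False
calib.optimize()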

cameraFromRobot(sensors=None)[source]

Sets up the camera(s) to calibrate from the robot model.

Return type:

None

Parameters:

sensors (str, int, list of str, or list of int, optional) – specifies one or more sensors to use. If None, just uses the first camera. If ‘all’, uses all cameras.
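
For example (a sketch; assumes the robot model's sensors already define one or more cameras):

calib.cameraFromRobot()        #use the first camera sensor on the robot model
calib.cameraFromRobot('all')   #or set up every camera sensor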

editTrajectory(world=None, name='calibration_trajectory')[source]

Pops up a window for editing the calibration trajectory, and returns a RobotTrajectory that passes through all calibration configurations.

Note: the executed trajectory stops at all milestones.

Return type:

RobotTrajectory

executeTrajectory(controller, traj, camera_fn, detect_fn=None, speed=1.0, wait=0.0)[source]

Executes a trajectory on a controller, optionally taking images along the way.

Return type:

None

Parameters:
  • controller (RobotInterfaceBase) – the robot controller. Assumed initialized.

  • traj (RobotTrajectory) – the trajectory to execute. We will take a picture at each milestone.

  • camera_fn (callable or dict of callables) –

    a function camera_fn() which returns an image. The result will be stored in self.frames.

    Can also be a dictionary of camera_id -> camera_fn.

  • detect_fn (callable) – if given, a function detect_fn(frame) which returns a list of detected observations. See addFrame() for more details.

  • speed (float, optional) – the speed at which to execute the trajectory.

  • wait (float, optional) – the time to wait after each milestone. An image will be taken in the middle of the waiting period.
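
A sketch of a two-camera capture, reusing detect_aruco_marker from the example above and assuming capture_left and capture_right are user-defined image grabbers:

calib.executeTrajectory(controller, traj,
                        camera_fn={'left':capture_left, 'right':capture_right},
                        detect_fn=detect_aruco_marker,
                        speed=0.5, wait=1.0)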

visualize(world=None, edit=False)[source]

Pops up a visualization window showing the calibration setup.

Return type:

None

manualCapture(controller, camera_fn, detect_fn=None, world=None)[source]

Creates a visualization to help manually capture images. Assumes that the robot is driven by an external source, e.g., a joystick, teaching pendant, or klampt_control interface.

Return type:

None

Parameters:
  • controller (RobotInterfaceBase) – the robot controller. Assumed initialized.

  • camera_fn (callable or dict of callables) –

    a function camera_fn() which returns an image. The result will be stored in self.frames.

    Can also be a dictionary of camera_id -> camera_fn.

  • detect_fn (callable) – if given, a function detect_fn(frame) which returns a list of detected observations. See addFrame() for more details.

  • world (WorldModel) – the world model to visualize the movement.

editCalibration(world=None)[source]

Pops up a visualization window for hand-editing the initial guesses of the camera and marker transforms.

Return type:

None

addFrame(frame, q=None, camera_id=None, detect_fn=None)[source]

Adds a frame and configuration, automatically using forward kinematics to determine camera / marker transforms. This is the easiest way to add frames and detections.

Return type:

None

Parameters:
  • frame (Image) – the image for the frame.

  • q (configuration, optional) – the robot’s configuration. If not given, the observations’ hand_eye_transform must be filled in manually.

  • camera_id (int or str, optional) – the camera ID, 0 by default.

  • detect_fn (callable, optional) – if given, a function detect_fn(frame) which returns a list of detected observations. Each observation can be a PixelObservation, PointObservation, TransformObservation, or a (marker,value) pair. To indicate a detected feature on a marker, a ((marker,feature),value) pair is returned; see the sketch below.
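
A minimal sketch of adding a frame with a custom detector (run_my_detector is a hypothetical detection routine, and image is a captured frame):

def my_detector(frame):
    T = run_my_detector(frame)   #your detection code here (hypothetical)
    return [(0, T)]              #whole-marker transform for marker 0
    #to report pixel px of feature 3 on marker 0 instead: return [((0,3), px)]

calib.addFrame(image, q=calib.robot.getConfig(), detect_fn=my_detector)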

addDetection(value, frame_id, marker_id, feature_id=None, camera_id=None, error=None)[source]

Manually adds a detection for a given camera / frame / marker / feature. Automatically determines a hand_eye_transform for the observation if a robot and configuration are already provided for this frame.

Return type:

Union[PixelObservation, PointObservation, TransformObservation]

Parameters:
  • value (2-list, 3-list, or SE3 element) – the pixel, point, or transform of the detection, given in camera coordinates.

  • frame_id (int) – the index of the frame

  • marker_id (int or str) – the index of the marker

  • feature_id (int, optional) – the index of the feature on the marker.

  • camera_id (int or str, optional) – the camera that observed this feature (default 0)

  • error (float or list, optional) – the standard deviation of the observation error. If a float is given, noise is assumed to be isotropic.

Returns:

The configured detection object.
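
For example, to record a 3D point detection of feature 7 on marker 2, seen by camera 0 in frame 4, with 2 mm isotropic measurement noise (all numbers illustrative):

calib.addDetection([0.02,-0.11,0.63], frame_id=4, marker_id=2, feature_id=7,
                   camera_id=0, error=0.002)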

cameraTransforms(world=False)[source]

Returns a dict of camera transforms, in either world or link-local coordinates. World coordinates use the robot’s current configuration.

Return type:

Dict[Union[int, str], Tuple[Sequence[float], Sequence[float]]]

markerTransforms(world=False)[source]

Returns a dict of marker transforms, in either world or link-local coordinates. World coordinates use the robot’s current configuration.

Return type:

Dict[int, Tuple[Sequence[float], Sequence[float]]]
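
For example, after calibration:

Tcams = calib.cameraTransforms(world=True)   #camera poses in world coordinates
Tmarkers = calib.markerTransforms()          #marker poses in link-local coordinates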

setCameraTransform(camera, T, world=False)[source]

Sets the camera transform for the specified camera. World coordinates use the robot’s current configuration.

setMarkerTransform(marker, T, world=False)[source]

Sets the transform / coordinates for the specified marker. World coordinates use the robot’s current configuration.

predictedObservations(camera_transforms=None, marker_transforms=None, noise=False)[source]

Returns a list of predicted observations for a given set of camera and marker transforms.

Return type:

List[Union[PointObservation, PixelObservation, TransformObservation]]

Parameters:
  • camera_transforms (dict key->se3 elements, optional) – keys matching self.cameras. If None, then it is assumed that all cameras have transforms (local_coordinates) specified.

  • marker_transforms (dict key->se3 elements, optional) – keys matching self.markers. If None, then it is assumed that all markers have transforms (local_coordinates) specified.

  • noise (bool or float) – if not False, then noise is injected into the prediction. This is useful for generating synthetic data and understanding the observation error model.

Returns:

A list of configured observation objects, corresponding to the values in self.observations.

predictedResiduals(camera_transforms=None, marker_transforms=None)[source]

Returns a list of predicted residuals for a given set of camera and marker transforms.

Return type:

List[Sequence[float]]

Parameters:
  • camera_transforms (list of se3 elements, optional) – length len(self.cameras). If None, then it is assumed that all cameras have transforms (local_coordinates) specified.

  • marker_transforms (list of se3 elements, optional) – length len(self.markers). If None, then it is assumed that all markers have transforms (local_coordinates) specified.

Returns:

A list of residuals (prediction - observed value) corresponding to the values in self.observations.
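
For example, a sketch of collapsing the residuals into a single RMSE-style number (the exact normalization reported by optimize() may differ):

import math
from klampt.math import vectorops

residuals = calib.predictedResiduals()
n = sum(len(r) for r in residuals)
rmse = math.sqrt(sum(vectorops.normSquared(r) for r in residuals)/n)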

predictedLogLikelihoods(camera_transforms=None, marker_transforms=None)[source]

Returns a list of predicted observation log-likelihoods for a given set of camera and marker transforms.

Return type:

List[float]

Parameters:
  • camera_transforms (list of se3 elements, optional) – length len(self.cameras). If None, then it is assumed that all cameras have transforms (local_coordinates) specified.

  • marker_transforms (list of se3 elements, optional) – length len(self.markers). If None, then it is assumed that all markers have transforms (local_coordinates) specified.

Returns:

A list of log likelihoods of residuals corresponding to the values in self.observations.

optimize(maxIters=100, tol=1e-07, regularizationFactor=0.0, store=True)[source]

Optimizes the calibrated transforms.

Requires scipy.

Return type:

Tuple[float, Dict[Union[int, str], Tuple[Sequence[float], Sequence[float]]], Dict[Union[int, str], Tuple[Sequence[float], Sequence[float]]]]

Returns:

A tuple of the RMSE, camera transform dictionary, and marker transform dictionary.
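
For example:

rmse, camera_Ts, marker_Ts = calib.optimize()
print("Calibration RMSE:", rmse)
print("Camera 0 pose:", camera_Ts[0])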

updateRobotSensors()[source]

Uses the current camera transforms to update the robot model’s sensors.

Return type:

None

save(fn)[source]

Saves all calibration info (except images) to a JSON file on disk.

Return type:

None

Warning

Images will not be saved.

Parameters:

fn (str or file) – the file to save to.

load(fn)[source]

Loads from a JSON file on disk.

Return type:

None

Parameters:

fn (str or file) – the file to load from

saveFrames(dirname, pattern='frame{:04d}.png')[source]

Saves all frames to a directory of images. Requires the Python Imaging Library (Pillow).

Return type:

None

Parameters:
  • dirname (str) – the directory to save to.

  • pattern (str, optional) – the pattern to save frames to. Can include a single integer format specifier.
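
For example, to dump the captured frames as numbered JPEGs:

calib.saveFrames('calib_images', pattern='image{:03d}.jpg')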