klampt.model.sensing module

A collection of utility functions for dealing with sensors and sensor data.

More sophisticated operations call for the use of a full-fledged sensor package, such as Open3D or PCL.

Sensor transforms

The get_sensor_xform()/set_sensor_xform() functions interface cleanly with the klampt.math.se3 transform representation.

Getting images and point clouds

The camera_to_images(), camera_to_points(), and camera_to_points_world() functions convert raw CameraSensor outputs to Python objects that are more easily operated upon, e.g., images and point clouds. Use these to retrieve images as Numpy arrays.

The image_to_points() function converts a depth / color image to a point cloud, given camera intrinsic information.

Working with cameras

The camera_to_viewport() and viewport_to_camera() functions help with converting to and from the klampt.vis.glprogram.GLViewport class used in klampt.vis.

The camera_to_intrinsics() and intrinsics_to_camera() functions convert between intrinsics definitions.

camera_ray() and camera_project() convert to/from image points. visible() determines whether a point or object is visible from a camera.

projection_map_texture() maps a texture from a camera into an OpenGL appearance.

Functions:

get_sensor_xform(sensor[, robot])

Extracts the transform of a SimRobotSensor.

set_sensor_xform(sensor, T[, link])

Given a link-mounted sensor (e.g., CameraSensor or ForceSensor), sets its link-local transform to T.

camera_to_images(camera[, image_format, ...])

Given a SimRobotSensor that is a CameraSensor, returns either the RGB image, the depth image, or both.

image_to_points(depth, color, xfov[, yfov, ...])

Given a depth and optionally color image, returns a point cloud representing the depth or RGB-D scene.

camera_to_points(camera[, points_format, ...])

Given a SimRobotSensor that is a CameraSensor, returns a point cloud associated with the current measurements.

camera_to_points_world(camera, robot[, ...])

Same as camera_to_points(), but converts to the world coordinate system given the robot to which the camera is attached.

camera_to_viewport(camera, robot)

Returns a GLViewport instance corresponding to the camera's view.

viewport_to_camera(viewport, camera, robot)

Fills in a simulated camera's settings to match a GLViewport specifying the camera's view.

camera_to_intrinsics(camera[, format, fn])

Returns the camera's intrinsics and/or saves them to a file under the given format.

intrinsics_to_camera(data, camera[, format])

Fills in a simulated camera's settings to match given intrinsics.

camera_ray(camera, robot, x, y)

Returns the (source,direction) of a ray emanating from the SimRobotSensor at pixel coordinates (x,y).

camera_project(camera, robot, pt[, clip])

Given a point in world space, returns the (x,y,z) coordinates of the projected pixel.

visible(camera, object[, full, robot])

Tests whether the given object is visible in a SimRobotSensor or a GLViewport.

laser_to_points(sensor[, robot, scan])

Converts laser readings to a PointCloud.

projection_map_texture(vp, app[, robot])

Calculates a texture coordinate generator for a given appearance, used to project an image texture onto some geometry.

klampt.model.sensing.get_sensor_xform(sensor, robot=None)[source]

Extracts the transform of a SimRobotSensor. The sensor must be of a link-mounted type, e.g., a CameraSensor or ForceSensor.

Return type:

Tuple[Sequence[float], Sequence[float]]

Parameters:
  • sensor (SimRobotSensor) – the sensor to query.

  • robot (RobotModel, optional) – if provided, returns the world coordinates of the sensor. Otherwise, returns the local coordinates on the link to which it is mounted.

Returns:

The sensor transform (a klampt.math.se3 element)

klampt.model.sensing.set_sensor_xform(sensor, T, link=None)[source]

Given a link-mounted sensor (e.g., CameraSensor or ForceSensor), sets its link-local transform to T.

T may also be given as a coordinates.Frame object. In that case, the frame must either be associated with a RobotModelLink, or its parent must be associated with one.

(Use this function rather than setting the sensor’s Tsensor attribute directly, since Tsensor is stored in a particular string format written by the loader.write_se3 function.)
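For example, a minimal sketch of reading and adjusting a camera's mount transform (the world file name and the sensor name "rgbd_camera" are placeholders; substitute your own):

    import math
    from klampt import WorldModel, Simulator
    from klampt.math import so3
    from klampt.model import sensing

    world = WorldModel()
    if not world.readFile("my_world.xml"):      # hypothetical world file
        raise RuntimeError("couldn't load world")
    sim = Simulator(world)
    camera = sim.controller(0).sensor("rgbd_camera")   # hypothetical sensor name

    # link-local transform of the sensor
    T_local = sensing.get_sensor_xform(camera)
    # world transform, accounting for the robot's current configuration
    T_world = sensing.get_sensor_xform(camera, world.robot(0))

    # tilt the sensor 10 degrees about its local x axis
    R, t = T_local
    R = so3.mul(R, so3.rotation((1, 0, 0), math.radians(10)))
    sensing.set_sensor_xform(camera, (R, t))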

klampt.model.sensing.camera_to_images(camera, image_format='numpy', color_format='channels')[source]

Given a SimRobotSensor that is a CameraSensor, returns either the RGB image, the depth image, or both.

Return type:

Tuple[Any, Any]

Parameters:
  • camera (SimRobotSensor) – a sensor that is of ‘CameraSensor’ type

  • image_format (str) –

    governs the return type. Can be:

    • ’numpy’ (default): returns numpy arrays. Depending on the value of color_format, the RGB image either has shape (h,w,3) and dtype uint8 or shape (h,w) and dtype uint32. Depth images are returned as numpy arrays with shape (h,w).

  • color_format (str) –

    governs how pixels in the RGB result are packed. Can be:

    • ‘channels’ (default): returns a 3D array with 3 channels corresponding to R, G, B values in the range [0,255].

    • ‘rgb’: returns a 2D array with a 32-bit integer channel, with R,G,B channels packed in hex format 0xrrggbb.

    • ‘bgr’: similar to ‘rgb’ but with hex order 0xbbggrr.

Returns:

(rgb, depth), which are either numpy arrays or another image format, as specified by image_format.

  • rgb: the RGB result (packed as specified by color_format)

  • depth: the depth result (floats)
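A minimal usage sketch, continuing from the snippet above (camera and world are assumed to be set up already):

    from klampt.model import sensing

    # generate a measurement without running physics; the world should
    # reflect the desired robot configuration
    camera.kinematicSimulate(world, 0.01)
    rgb, depth = sensing.camera_to_images(camera)
    print("rgb:", rgb.shape, rgb.dtype)     # (h, w, 3), uint8 by default
    print("depth:", depth.shape)            # (h, w) floats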

klampt.model.sensing.image_to_points(depth, color, xfov, yfov=None, depth_scale=None, depth_range=None, color_format='auto', points_format='numpy', all_points=False)[source]

Given a depth and optionally color image, returns a point cloud representing the depth or RGB-D scene.

Optimal performance is obtained with points_format='PointCloud' or 'Geometry3D', with all_points=True.

Parameters:
  • depth (list of lists or numpy array) – the w x h depth image (rectified) given as a numpy array of shape (h,w).

  • color (list of lists or numpy array, optional) – the w x h color image given as a numpy uint8 array of shape (h,w,3) or a numpy uint32 array of shape (h,w) encoding RGB pixels in the format 0xrrggbb. It is assumed that color maps directly onto depth pixels. If color = None, an uncolored point cloud will be produced.

  • xfov (float) – horizontal field of view, in radians. Related to the intrinsics \(f_x\) via \(f_x = w/(2\tan(xfov/2))\), i.e., \(xfov = 2\arctan(w/(2 f_x))\).

  • yfov (float, optional) – vertical field of view, in radians. If not given, square pixels are assumed. Related to the intrinsics \(f_y\) via \(f_y = h/(2\tan(yfov/2))\), i.e., \(yfov = 2\arctan(h/(2 f_y))\).

  • depth_scale (float, optional) – a scaling from depth image values to absolute depth values.

  • depth_range (pair of floats, optional) – if given, only points within this depth range (non-inclusive) will be extracted. If all_points=False, points that fail the range test will be stripped from the output. E.g., (0.5,8.0) only extracts points with z > 0.5 and z < 8 units.

  • color_format (str) –

    governs how pixels in the RGB result are packed. Can be:

    • ’auto’ (default): if it’s a 3D array, it assumes elements are in ‘channels’ format, otherwise it assumes ‘rgb’.

    • ’channels’: a 3D array with 3 channels corresponding to R, G, B values in the range [0,255] if uint8 type, otherwise in the range [0,1].

    • ’rgb’: a 2D array with a 32-bit integer channel, with R,G,B channels packed in hex format 0xrrggbb.

  • points_format (str, optional) –

    configures the format of the return value. Can be:

    • ’numpy’ (default): either an Nx3, Nx4, or Nx6 numpy array, depending on whether color is requested (and its format).

    • ’PointCloud’: a Klampt PointCloud object

    • ’Geometry3D’: a Klampt Geometry3D point cloud object

    • ’TriangleMesh’: a Klampt TriangleMesh object showing a regular grid encoded with the depth image.

  • all_points (bool, optional) – configures whether bad points should be stripped out. If False (default), all pixels without a good depth reading (i.e., at the camera sensor’s maximum reading) are stripped from the output. If True, these pixels are instead set to (0,0,0).

Returns:

the point cloud. Represented as being local to the standard camera frame with +x to the right, +y down, +z forward.

Return type:

numpy ndarray or Geometry3D
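A small sketch using a synthetic depth image (the 60-degree field of view is an arbitrary assumption):

    import math
    import numpy as np
    from klampt.model import sensing

    h, w = 120, 160
    depth = np.full((h, w), 2.0)     # synthetic depth: a flat wall 2 units away
    xfov = math.radians(60)          # assumed horizontal field of view
    pts = sensing.image_to_points(depth, None, xfov)   # uncolored Nx3 array
    print(pts.shape)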

klampt.model.sensing.camera_to_points(camera, points_format='numpy', all_points=False, color_format='channels')[source]

Given a SimRobotSensor that is a CameraSensor, returns a point cloud associated with the current measurements.

Points are triangulated with respect to the camera’s intrinsic coordinates and are returned in the camera’s local frame (+z backward, +x to the right, +y up).

Return type:

Union[ndarray, PointCloud, Geometry3D]

Parameters:
  • camera (SimRobotSensor) – a sensor that is of ‘CameraSensor’ type.

  • points_format (str, optional) –

    configures the format of the return value. Can be:

    • ’numpy’ (default): either an Nx3, Nx4, or Nx6 numpy array, depending on whether color is requested (and its format).

    • ’PointCloud’: a Klampt PointCloud object

    • ’Geometry3D’: a Klampt Geometry3D point cloud object

    • ’TriangleMesh’: a Klampt TriangleMesh object showing a regular grid encoded with the depth image.

  • all_points (bool, optional) – configures whether bad points should be stripped out. If False (default), all pixels without a good depth reading (i.e., at the camera sensor’s maximum reading) are stripped from the output. If True, these pixels are instead set to (0,0,0).

  • color_format (str) –

    If the sensor has an RGB component, then color channels may be produced. This value configures the output format, and can take on the values:

    • ’channels’: produces individual R,G,B channels in the range [0,1]. (Note that this differs from camera_to_images(), which uses the range [0,255].)

    • ’rgb’: produces a single 32-bit integer channel packing the 8-bit color channels together in the format 0xrrggbb.

    • None: no color is produced.

Returns:

The point cloud in the requested format.

klampt.model.sensing.camera_to_points_world(camera, robot, points_format='numpy', color_format='channels')[source]

Same as camera_to_points(), but converts to the world coordinate system given the robot to which the camera is attached.

Points that have no reading are stripped out.

Return type:

Union[ndarray, PointCloud, Geometry3D]
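For instance, a sketch that grabs a world-frame point cloud as a Geometry3D and saves it to disk (camera and world as in the earlier snippets; the output filename is arbitrary):

    from klampt.model import sensing

    pc_world = sensing.camera_to_points_world(camera, world.robot(0),
                                              points_format='Geometry3D')
    pc_world.saveFile("scan.pcd")    # Geometry3D can save point cloud files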

klampt.model.sensing.camera_to_viewport(camera, robot)[source]

Returns a GLViewport instance corresponding to the camera’s view.

See klampt.vis.glprogram and klampt.vis.visualization for information about how to use the object with the visualization, e.g. vis.setViewport(vp).

Return type:

GLViewport

Parameters:
  • camera (SimRobotSensor) – the camera instance.

  • robot (RobotModel) – the robot on which the camera is located, which should be set to the robot’s current configuration. This could be set to None, in which case the camera’s transform is in its link’s local coordinates.

Returns:

A GLViewport matching the camera’s viewport.

klampt.model.sensing.viewport_to_camera(viewport, camera, robot)[source]

Fills in a simulated camera’s settings to match a GLViewport specifying the camera’s view.

Parameters:
  • viewport (GLViewport) – the viewport to match

  • camera (SimRobotSensor) – the viewport will be output to this sensor

  • robot (RobotModel) – the robot on which the camera is located, which should be set to the robot’s current configuration. This could be set to None, in which case the camera’s transform is in its link’s local coordinates.
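A sketch of round-tripping between the sensor and the visualization (camera and world as before):

    from klampt import vis
    from klampt.model import sensing

    # look through the simulated camera in the vis window
    vp = sensing.camera_to_viewport(camera, world.robot(0))
    vis.add("world", world)
    vis.setViewport(vp)
    vis.run()

    # ...then push the (possibly user-adjusted) view back into the sensor
    sensing.viewport_to_camera(vis.getViewport(), camera, world.robot(0))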

klampt.model.sensing.camera_to_intrinsics(camera, format='opencv', fn=None)[source]

Returns the camera’s intrinsics and/or saves them to a file under the given format.

Parameters:
  • camera (SimRobotSensor) – the camera instance.

  • format (str) – either ‘opencv’, ‘numpy’, ‘ros’, or ‘json’ describing the desired type

  • fn (str, optional) – the file to save to (must be .json, .xml, or .yml).

Returns:

If format=’opencv’, the (projection, distortion) matrix pair is returned.

If format=’numpy’, just the projection matrix is returned.

If format=’json’, a dict of the fx, fy, cx, cy values is returned.

klampt.model.sensing.intrinsics_to_camera(data, camera, format='opencv')[source]

Fills in a simulated camera’s settings to match given intrinsics. Note: all distortions are dropped.

Parameters:
  • data – the file or data to set. Interpretation varies depending on format.

  • camera (SimRobotSensor) – the viewport will be output to this sensor

  • format (str) – either ‘opencv’, ‘numpy’, ‘ros’, or ‘json’
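For example, a round-trip through the ‘json’ dict format (camera as before):

    from klampt.model import sensing

    K = sensing.camera_to_intrinsics(camera, format='json')
    print(K)     # dict with fx, fy, cx, cy entries
    sensing.intrinsics_to_camera(K, camera, format='json')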

klampt.model.sensing.camera_ray(camera, robot, x, y)[source]

Returns the (source,direction) of a ray emanating from the SimRobotSensor at pixel coordinates (x,y).

If you are doing this multiple times, it’s faster to convert the camera to GLViewport and use GLViewport.click_ray.

Return type:

Tuple[Sequence[float], Sequence[float]]

Parameters:
  • camera (SimRobotSensor) – the camera

  • robot (RobotModel) – the robot on which the camera is mounted.

  • x (int/float) – x pixel coordinates

  • y (int/float) – y pixel coordinates

Returns:

A pair (source,direction) giving the world-space ray source/direction.

klampt.model.sensing.camera_project(camera, robot, pt, clip=True)[source]

Given a point in world space, returns the (x,y,z) coordinates of the projected pixel. z is given in absolute coordinates, while x,y are given in pixel values.

If clip=True and the point is out of the viewing volume, then None is returned. Otherwise, if the point is exactly at the focal plane then the middle of the viewport is returned.

If you are doing this multiple times, it’s faster to convert the camera to GLViewport and use GLViewport.project.

Return type:

Sequence[float]

Parameters:
  • camera (SimRobotSensor) – the camera

  • robot (RobotModel) – the robot on which the camera is mounted.

  • pt (3-vector) – world coordinates of point

  • clip (bool, optional) – if true, None will be returned if the point is outside of the viewing volume.

Returns:

(x,y,z), where x,y are pixel value of image, z is depth.
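A sketch that projects a world point to pixels and casts a ray back through the same pixel (the test point is arbitrary; camera and world as before):

    from klampt.model import sensing

    robot = world.robot(0)
    pix = sensing.camera_project(camera, robot, [0.5, 0.0, 1.0])
    if pix is not None:              # None means outside the viewing volume
        x, y, z = pix
        source, direction = sensing.camera_ray(camera, robot, x, y)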

klampt.model.sensing.visible(camera, object, full=True, robot=None)[source]

Tests whether the given object is visible in a SimRobotSensor or a GLViewport.

If you are doing this multiple times, first convert to GLViewport.

Return type:

bool

Parameters:
  • camera (SimRobotSensor or GLViewport) – the camera.

  • object – a 3-vector, a (center,radius) pair indicating a sphere, an axis-aligned bounding box (bmin,bmax), a Geometry3D, or an object that has a geometry() method, e.g., RigidObjectModel, RobotModelLink.

  • full (bool, optional) – if True, the entire object must be in the viewing frustum for it to be considered visible. If False, any part of the object can be in the viewing frustum.

  • robot (RobotModel) – if camera is a SimRobotSensor, this will be used to derive the transform.
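For example, testing a hypothetical sphere against the frustum (camera and world as before):

    from klampt.model import sensing

    sphere = ([0.0, 0.0, 1.0], 0.1)    # (center, radius) pair
    if sensing.visible(camera, sphere, full=True, robot=world.robot(0)):
        print("sphere is fully in view")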

klampt.model.sensing.laser_to_points(sensor, robot=None, scan=None)[source]

Converts laser readings to a PointCloud.

Parameters:
  • sensor (SimRobotSensor) – the sensor

  • robot (RobotModel, optional) – the robot, placed in its current configuration. If given, the result is in world coordinates.

  • scan (list of float, optional) – the results from sensor.getMeasurements(). If not given, sensor.getMeasurements() is called.

Returns:

the point cloud. Given in world coordinates if robot is provided, or in link-local coordinates if robot=None.

Return type:

PointCloud

Note: only works with planar laser scanners, for now.
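A usage sketch, assuming the robot carries a planar laser sensor named "lidar" (a placeholder name; sim and world as in the earlier snippets):

    from klampt.model import sensing

    lidar = sim.controller(0).sensor("lidar")   # hypothetical sensor name
    lidar.kinematicSimulate(world, 0.01)
    pc = sensing.laser_to_points(lidar, robot=world.robot(0))
    print(pc.numPoints(), "points in world coordinates")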

klampt.model.sensing.projection_map_texture(vp, app, robot=None)[source]

Calculates a texture coordinate generator for a given appearance, used to project an image texture onto some geometry.

The appearance should already have been set up for the given object.

To complete the projection mapping, call app.setTexture2D(format,image).
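A sketch of the full projection-mapping flow, under the assumptions that camera and world are set up as in the earlier snippets, rgb is an (h,w,3) uint8 image (e.g., from camera_to_images()), and 'rgb8' is the texture format string in use:

    from klampt.model import sensing

    obj = world.rigidObject(0)       # the object receiving the projection
    app = obj.appearance()
    vp = sensing.camera_to_viewport(camera, world.robot(0))
    sensing.projection_map_texture(vp, app, robot=world.robot(0))
    # complete the mapping with the image data ('rgb8' is an assumed
    # format string)
    app.setTexture2D('rgb8', rgb)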