pymomentum.renderer

Functions for rendering momentum models.

class pymomentum.renderer.Camera

Bases: pybind11_object

Camera for rendering

property T_eye_from_world

Transform from world space to camera/eye space

property T_world_from_eye

Transform from camera/eye space to world space

__init__(self: pymomentum.renderer.Camera, intrinsics_model: pymomentum.renderer.IntrinsicsModel, eye_from_world: numpy.ndarray[numpy.float32[4, 4]] | None = None) None

Create a camera with specified intrinsics and pose.

Parameters:
  • intrinsics_model – Camera intrinsics model defining focal length, principal point, and image dimensions.

  • eye_from_world – Optional 4x4 transformation matrix from world space to camera/eye space. Defaults to identity matrix if not provided.

Returns:

A new Camera instance with the specified intrinsics and pose.

property center_of_projection

Position of the camera center in world space

crop(self: pymomentum.renderer.Camera, top: int, left: int, width: int, height: int) pymomentum.renderer.Camera

Create a new camera with cropped image region.

Parameters:
  • top – Top offset in pixels.

  • left – Left offset in pixels.

  • width – New width in pixels after cropping.

  • height – New height in pixels after cropping.

Returns:

A new Camera instance with cropped intrinsics and same pose.

frame(self: pymomentum.renderer.Camera, points: numpy.ndarray[numpy.float32], min_z: float = 0.10000000149011612, edge_padding: float = 0.05000000074505806) pymomentum.renderer.Camera

Adjust the camera position to ensure all specified points are in view.

Parameters:
  • points – (N x 3) array of 3D points that should be visible in the camera view.

  • min_z – Minimum distance from camera to maintain. Defaults to 0.1.

  • edge_padding – Padding factor to add around the points as a fraction of the image size. Defaults to 0.05.

Returns:

A new Camera instance positioned to frame all the specified points.
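
Example (a minimal sketch of framing a point cloud; the intrinsics and point values are arbitrary illustrations):

    import numpy as np
    import pymomentum.renderer as pmr

    intrinsics = pmr.PinholeIntrinsicsModel(image_width=640, image_height=480)
    camera = pmr.Camera(intrinsics)

    # frame() returns a new camera that keeps every point in view,
    # with 5% padding at the image edges.
    points = (np.random.rand(100, 3) * 10.0).astype(np.float32)
    framed = camera.frame(points, min_z=0.1, edge_padding=0.05)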

property fx

Focal length in x direction (pixels)

property fy

Focal length in y direction (pixels)

property image_height

Height of the image in pixels

property image_width

Width of the image in pixels

property intrinsics_model

The camera’s intrinsics model

look_at(self: pymomentum.renderer.Camera, position: numpy.ndarray[numpy.float32[3, 1]], target: numpy.ndarray[numpy.float32[3, 1]] | None = None, up: numpy.ndarray[numpy.float32[3, 1]] | None = None) pymomentum.renderer.Camera

Position the camera to look at a specific target point.

Parameters:
  • position – 3D position where the camera should be placed.

  • target – 3D point the camera should look at. Defaults to origin (0,0,0) if not provided.

  • up – Up vector for camera orientation. Defaults to (0,1,0) if not provided.

Returns:

A new Camera instance positioned to look at the target.
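
Example (a minimal sketch; the camera position is arbitrary):

    import numpy as np
    import pymomentum.renderer as pmr

    intrinsics = pmr.PinholeIntrinsicsModel(image_width=1280, image_height=720)

    # Place the camera slightly above and in front of the origin, looking at
    # the origin (default target) with the default (0, 1, 0) up vector.
    camera = pmr.Camera(intrinsics).look_at(
        position=np.array([0.0, 1.0, 3.0], dtype=np.float32)
    )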

project(self: pymomentum.renderer.Camera, world_points: numpy.ndarray[numpy.float32]) numpy.ndarray[numpy.float32]

Project 3D points from world space to 2D image coordinates.

Parameters:

world_points – (N x 3) array of 3D points in world coordinate space to project.

Returns:

(N x 3) array of projected points where columns are [x, y, depth] in image coordinates.

unproject(self: pymomentum.renderer.Camera, image_points: numpy.ndarray[numpy.float32]) numpy.ndarray[numpy.float32]

Unproject image coordinates [x, y, depth] to 3D points in world space.

Parameters:

image_points – (N x 3) array of 3D points in image coordinates [x, y, depth].

Returns:

(N x 3) array of 3D points in world coordinate space.
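
Example (a minimal project/unproject round trip; the camera and point values are arbitrary, and the default viewing direction is an assumption):

    import numpy as np
    import pymomentum.renderer as pmr

    camera = pmr.Camera(pmr.PinholeIntrinsicsModel(image_width=640, image_height=480))

    # The default camera pose is the identity; the points below assume the
    # camera looks along +z (OpenCV-style). Flip the sign of z if needed.
    world_points = np.array([[0.0, 0.0, 5.0], [0.5, -0.2, 4.0]], dtype=np.float32)
    image_points = camera.project(world_points)

    # For points in front of the camera, the round trip should reproduce the
    # originals up to floating-point error.
    recovered = camera.unproject(image_points)
    print(np.allclose(world_points, recovered, atol=1e-4))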

upsample(self: pymomentum.renderer.Camera, factor: float) pymomentum.renderer.Camera

Create a new camera with upsampled resolution by the given factor.

Parameters:

factor – Upsampling factor (e.g., 2.0 doubles the resolution).

Returns:

A new Camera instance with upsampled intrinsics and same pose.

property world_space_principle_axis

Camera world-space principal axis (direction the camera is looking)

class pymomentum.renderer.IntrinsicsModel

Bases: pybind11_object

Base class for camera intrinsics models

__init__(*args, **kwargs)
crop(self: pymomentum.renderer.IntrinsicsModel, top: int, left: int, width: int, height: int) pymomentum.renderer.IntrinsicsModel

Create a new intrinsics model cropped to a sub-region of the image.

Parameters:
  • top – Top offset in pixels.

  • left – Left offset in pixels.

  • width – New width in pixels after cropping.

  • height – New height in pixels after cropping.

Returns:

A new IntrinsicsModel instance with cropped parameters.

downsample(self: pymomentum.renderer.IntrinsicsModel, factor: float) pymomentum.renderer.IntrinsicsModel

Create a new intrinsics model downsampled by the given factor.

Parameters:

factor – Downsampling factor (e.g., 2.0 halves the resolution).

Returns:

A new IntrinsicsModel instance with downsampled parameters.

property fx

Focal length in x direction (pixels)

property fy

Focal length in y direction (pixels)

property image_height

Height of the image in pixels

property image_width

Width of the image in pixels

project(self: pymomentum.renderer.IntrinsicsModel, points: numpy.ndarray[numpy.float32]) numpy.ndarray[numpy.float32]

Project 3D points in camera space to 2D image coordinates.

Parameters:

points – (N x 3) array of 3D points in camera coordinate space to project.

Returns:

(N x 3) array of projected points where columns are [x, y, depth] in image coordinates.

resize(self: pymomentum.renderer.IntrinsicsModel, image_width: int, image_height: int) pymomentum.renderer.IntrinsicsModel

Create a new intrinsics model resized to new image dimensions.

Parameters:
  • image_width – New image width in pixels.

  • image_height – New image height in pixels.

Returns:

A new IntrinsicsModel instance with resized parameters.

upsample(self: pymomentum.renderer.IntrinsicsModel, factor: float) pymomentum.renderer.IntrinsicsModel

Create a new intrinsics model upsampled by the given factor.

Parameters:

factor – Upsampling factor (e.g., 2.0 doubles the resolution).

Returns:

A new IntrinsicsModel instance with upsampled parameters.
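
Example (a minimal sketch of deriving related intrinsics; the resolutions are arbitrary):

    import pymomentum.renderer as pmr

    intrinsics = pmr.PinholeIntrinsicsModel(image_width=1920, image_height=1080)

    half_res = intrinsics.downsample(2.0)                                      # 960 x 540
    center_square = intrinsics.crop(top=0, left=420, width=1080, height=1080)  # 1080 x 1080
    resized = intrinsics.resize(image_width=1280, image_height=720)
    print(half_res.image_width, center_square.image_width, resized.fx)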

class pymomentum.renderer.Light

Bases: pybind11_object

A light source for 3D rendering supporting point, directional, and ambient lighting. Point lights emit from a specific position, directional lights simulate distant sources like the sun, and ambient lights provide uniform illumination from all directions.

__init__(self: pymomentum.renderer.Light) None
property color
static create_ambient_light(color: numpy.ndarray[numpy.float32[3, 1]] | None = None) pymomentum.renderer.Light
static create_directional_light(direction: numpy.ndarray[numpy.float32[3, 1]], color: numpy.ndarray[numpy.float32[3, 1]] | None = None) pymomentum.renderer.Light
static create_point_light(position: numpy.ndarray[numpy.float32[3, 1]], color: numpy.ndarray[numpy.float32[3, 1]] | None = None) pymomentum.renderer.Light
property position
transform(self: pymomentum.renderer.Light, xf: numpy.ndarray[numpy.float32[4, 4]]) pymomentum.renderer.Light

Transform the light using the passed-in transform.

property type
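
Example (a minimal three-light setup using the factory methods; the values are arbitrary):

    import numpy as np
    import pymomentum.renderer as pmr

    lights = [
        pmr.Light.create_ambient_light(color=np.array([0.2, 0.2, 0.2], dtype=np.float32)),
        pmr.Light.create_directional_light(
            direction=np.array([0.0, -1.0, -1.0], dtype=np.float32),
            color=np.array([0.8, 0.8, 0.8], dtype=np.float32),
        ),
        pmr.Light.create_point_light(position=np.array([0.0, 2.0, 1.0], dtype=np.float32)),
    ]
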
class pymomentum.renderer.LightType

Bases: pybind11_object

Type of light to use in rendering.

Members:

Ambient

Directional

Point

Ambient = <LightType.Ambient: 2>
Directional = <LightType.Directional: 1>
Point = <LightType.Point: 0>
__init__(self: pymomentum.renderer.LightType, value: int) None
property name
property value
class pymomentum.renderer.OpenCVDistortionParameters

Bases: pybind11_object

OpenCV distortion parameters

__init__(self: pymomentum.renderer.OpenCVDistortionParameters) None

Initialize with default parameters (no distortion)

property k1

Radial distortion coefficient k1

property k2

Radial distortion coefficient k2

property k3

Radial distortion coefficient k3

property k4

Radial distortion coefficient k4

property k5

Radial distortion coefficient k5

property k6

Radial distortion coefficient k6

property p1

Tangential distortion coefficient p1

property p2

Tangential distortion coefficient p2

property p3

Tangential distortion coefficient p3

property p4

Tangential distortion coefficient p4

class pymomentum.renderer.OpenCVIntrinsicsModel

Bases: IntrinsicsModel

OpenCV camera intrinsics model with distortion

__init__(self: pymomentum.renderer.OpenCVIntrinsicsModel, image_width: int, image_height: int, fx: float | None, fy: float | None, cx: float | None, cy: float | None, distortion_params: pymomentum.renderer.OpenCVDistortionParameters | None = None) None

Create an OpenCV camera model with specified parameters and optional distortion.

Parameters:
  • image_width – Width of the image in pixels.

  • image_height – Height of the image in pixels.

  • fx – Focal length in x direction (pixels).

  • fy – Focal length in y direction (pixels).

  • cx – Principal point x-coordinate (pixels).

  • cy – Principal point y-coordinate (pixels).

  • distortion_params – Optional OpenCV distortion parameters. Defaults to no distortion if not provided.

Returns:

A new OpenCVIntrinsicsModel instance.
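
Example (a minimal sketch; the intrinsics values are arbitrary, and it is assumed here that the distortion coefficients are writable properties):

    import pymomentum.renderer as pmr

    # Distortion defaults to zero; set a couple of radial terms for
    # illustration (assumes k1/k2 are settable).
    distortion = pmr.OpenCVDistortionParameters()
    distortion.k1 = -0.1
    distortion.k2 = 0.02

    intrinsics = pmr.OpenCVIntrinsicsModel(
        image_width=640, image_height=480,
        fx=500.0, fy=500.0, cx=320.0, cy=240.0,
        distortion_params=distortion,
    )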

property cx

Principal point x-coordinate (pixels)

property cy

Principal point y-coordinate (pixels)

class pymomentum.renderer.PhongMaterial

Bases: pybind11_object

A Phong shading material model with diffuse, specular, and emissive components. Supports both solid colors and texture maps for realistic surface rendering. The Phong model provides smooth shading with controllable highlights and surface properties.

__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pymomentum.renderer.PhongMaterial) -> None

  2. __init__(self: pymomentum.renderer.PhongMaterial, diffuse_color: Optional[numpy.ndarray[numpy.float32[3, 1]]] = None, specular_color: Optional[numpy.ndarray[numpy.float32[3, 1]]] = None, specular_exponent: Optional[float] = None, emissive_color: Optional[numpy.ndarray[numpy.float32[3, 1]]] = None, diffuse_texture: Optional[numpy.ndarray[numpy.float32]] = None, emissive_texture: Optional[numpy.ndarray[numpy.float32]] = None) -> None

Create a Phong material with customizable properties.

Parameters:
  • diffuse_color – RGB diffuse color values (0-1 range).

  • specular_color – RGB specular color values (0-1 range).

  • specular_exponent – Specular highlight sharpness.

  • emissive_color – RGB emissive color values (0-1 range).

  • diffuse_texture – Optional diffuse texture as a numpy array.

  • emissive_texture – Optional emissive texture as a numpy array.

property diffuse_color
property emissive_color
property specular_color
property specular_exponent
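
Example (a minimal sketch of a mostly-diffuse material; the colors are arbitrary):

    import numpy as np
    import pymomentum.renderer as pmr

    material = pmr.PhongMaterial(
        diffuse_color=np.array([0.7, 0.6, 0.5], dtype=np.float32),
        specular_color=np.array([0.2, 0.2, 0.2], dtype=np.float32),
        specular_exponent=20.0,
    )
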
class pymomentum.renderer.PinholeIntrinsicsModel

Bases: IntrinsicsModel

Pinhole camera intrinsics model without distortion

__init__(self: pymomentum.renderer.PinholeIntrinsicsModel, image_width: int, image_height: int, fx: float | None = None, fy: float | None = None, cx: float | None = None, cy: float | None = None) None

Create a pinhole camera model with specified focal lengths and image dimensions.

Parameters:
  • image_width – Width of the image in pixels.

  • image_height – Height of the image in pixels.

  • fx – Focal length in x direction (pixels). Defaults to computed value based on 50mm equivalent lens.

  • fy – Focal length in y direction (pixels). Defaults to computed value based on 50mm equivalent lens.

  • cx – Principal point x-coordinate (pixels). Defaults to image center if not provided.

  • cy – Principal point y-coordinate (pixels). Defaults to image center if not provided.

Returns:

A new PinholeIntrinsicsModel instance.

property cx

Principal point x-coordinate (pixels)

property cy

Principal point y-coordinate (pixels)

class pymomentum.renderer.SkeletonStyle

Bases: pybind11_object

Rendering style options for skeleton visualization. Different styles are optimized for different use cases, from technical debugging to publication-quality figures.

Members:

Pipes : Render joints as spheres with fixed-radius pipes connecting them. Useful when rendering a skeleton under a mesh.

Octahedrons : Render joints as octahedrons. This gives much more sense of the joint orientations and is useful when rendering just the skeleton.

Lines : Render joints as lines. This would look nice in e.g. a paper figure. Note that all sizes are in pixels when this is used.

Lines = <SkeletonStyle.Lines: 2>
Octahedrons = <SkeletonStyle.Octahedrons: 0>
Pipes = <SkeletonStyle.Pipes: 1>
__init__(self: pymomentum.renderer.SkeletonStyle, value: int) None
property name
property value
pymomentum.renderer.alpha_matte(depth_buffer: torch.Tensor, rgb_buffer: torch.Tensor, target_image: numpy.ndarray, alpha: float = 1.0) None

Use alpha matting to overlay a rasterized image onto a background image.

This function includes a few features which simplify using it with the rasterizer:

  1. The depth buffer is automatically converted to an alpha matte.

  2. Supersampled images are handled correctly: if your rgb_buffer is an integer multiple of the target image size, it will automatically be smoothed and converted to fractional alpha.

  3. You can apply an additional global alpha on top of the per-pixel alpha.

Parameters:
  • depth_buffer – A z-buffer as generated by create_z_buffer().

  • rgb_buffer – An rgb_buffer as created by create_rgb_buffer().

  • target_image – A target RGB image to overlay the rendered image onto. Will be written in place.

  • alpha – A global alpha between 0 and 1 to multiply the source image by. Defaults to 1.
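
Example (a minimal sketch; camera, z_buffer, and rgb_buffer are assumed to come from an earlier rasterize_* call, and treating the target image as a float RGB array is an assumption):

    import numpy as np
    import pymomentum.renderer as pmr

    # Composite the render over a mid-gray background at 80% opacity.
    background = np.full(
        (camera.image_height, camera.image_width, 3), 0.5, dtype=np.float32
    )
    pmr.alpha_matte(z_buffer, rgb_buffer, background, alpha=0.8)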

pymomentum.renderer.build_cameras_for_body(character: pymomentum.geometry.Character, joint_parameters: torch.Tensor, image_height: int, image_width: int, focal_length_mm: float = 50.0, horizontal: bool = False, camera_angle: float = 0.0) list[pymomentum.renderer.Camera]

Build a batched vector of cameras that roughly face the body (default: face the front of the body). If you pass in multiple frames of animation, the camera will ensure all frames are visible.

Parameters:
  • character – Character to use.

  • joint_parameters – torch.Tensor of size (nBatch x [nFrames] x nJointParameters) or size (nJointParameters); can be computed from the model parameters using ParameterTransform.apply().

  • image_height – Height of the target image.

  • image_width – Width of the target image.

  • focal_length_mm – 35mm-equivalent focal length; e.g. focal_length_mm=50 corresponds to a “normal” lens.

  • horizontal – Whether the cameras are placed horizontally, assuming the Y axis is the world up direction.

  • camera_angle – Direction from which the camera looks at the body. Defaults to 0 (front of the body); pi/2 looks at the left side of the body.

Returns:

List of cameras, one for each element of the batch.
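
Example (a hedged sketch; character and model_parameters are assumed to be loaded elsewhere, and the parameter-transform call is written as it is typically exposed in pymomentum.geometry):

    import pymomentum.renderer as pmr

    # Joint parameters come from the model parameters via the parameter transform.
    joint_parameters = character.parameter_transform.apply(model_parameters)

    cameras = pmr.build_cameras_for_body(
        character,
        joint_parameters,
        image_height=720,
        image_width=1280,
        focal_length_mm=50.0,
    )
    camera = cameras[0]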

pymomentum.renderer.build_cameras_for_hand(wrist_transformation: torch.Tensor, image_height: int, image_width: int) list[pymomentum.renderer.Camera]

Build a vector of cameras that roughly face inward from the front of the hand.

Parameters:
  • wrist_transformation – Wrist transformation.

  • image_height – Height of the target image.

  • image_width – Width of the target image.

Returns:

List of cameras, one for each element of the batch.

pymomentum.renderer.build_cameras_for_hand_surface(wrist_transformation: torch.Tensor, image_height: int, image_width: int) list[pymomentum.renderer.Camera]

Build a vector of cameras that face over the plane of the hand surface.

Parameters:
  • wrist_transformation – Wrist transformation.

  • image_height – Height of the target image.

  • image_width – Width of the target image.

Returns:

List of cameras, one for each element of the batch.

pymomentum.renderer.create_index_buffer(camera: pymomentum.renderer.Camera) torch.Tensor

Creates a padded integer buffer suitable for storing triangle or vertex indices during rasterization.

Parameters:

camera – Camera to render from.

Returns:

An integer tensor (height, padded_width) suitable for passing in as an index buffer to the rasterize() function.

pymomentum.renderer.create_rgb_buffer(camera: pymomentum.renderer.Camera, background_color: numpy.ndarray[numpy.float32[3, 1]] | None = None) torch.Tensor

Creates a padded RGB buffer suitable for rasterization.

Parameters:
  • camera – Camera to render from.

  • background_color – Background color, defaults to all-black (0, 0, 0).

Returns:

A rgb_buffer torch.Tensor (height, padded_width, 3) suitable for use in the rasterize() function. After rasterization, use rgb_buffer[:, 0 : camera.image_width, :] to get the rendered image.

pymomentum.renderer.create_shadow_projection_matrix(light: pymomentum.renderer.Light, plane_normal: numpy.ndarray[numpy.float32[3, 1]] | None = None, plane_origin: numpy.ndarray[numpy.float32[3, 1]] | None = None) numpy.ndarray[numpy.float32[4, 4]]

Create a modelview matrix that when passed to rasterize_mesh will project all the vertices to the passed-in plane.

This is useful for rendering shadows using the classic projection shadows technique from OpenGL.

Parameters:
  • light – The light to use to cast shadows.

  • plane_normal – The normal vector of the plane (defaults to y-up, (0, 1, 0)).

  • plane_origin – A point on the plane, defaults to the origin (0, 0, 0).

Returns:

a 4x4 matrix that can be passed to the rasterizer function.
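
Example (a hedged sketch of classic projected shadows; light, camera, the buffers, and the mesh tensors are assumed to be set up as in the other examples on this page):

    import numpy as np
    import pymomentum.renderer as pmr

    # Project the geometry onto the ground plane (y = 0) and render it again
    # with a dark material to fake a shadow.
    shadow_matrix = pmr.create_shadow_projection_matrix(light)
    pmr.rasterize_mesh(
        vertex_positions=vertices,
        vertex_normals=None,
        triangles=triangles,
        camera=camera,
        z_buffer=z_buffer,
        rgb_buffer=rgb_buffer,
        material=pmr.PhongMaterial(
            diffuse_color=np.array([0.1, 0.1, 0.1], dtype=np.float32)
        ),
        model_matrix=shadow_matrix,
    )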

pymomentum.renderer.create_z_buffer(camera: pymomentum.renderer.Camera, far_clip: float = 3.4028234663852886e+38) torch.Tensor

Creates a padded z-buffer suitable for rasterization.

Parameters:
  • camera – Camera to render from.

  • far_clip – Far clip depth used to initialize the z-buffer. Defaults to the maximum float32 value.

Returns:

A z_buffer torch.Tensor (height, padded_width) suitable for use in the rasterize() function.

pymomentum.renderer.rasterize_camera_frustum(camera_frustum: pymomentum.renderer.Camera, camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, rgb_buffer: torch.Tensor | None = None, line_thickness: float = 1.0, distance: float = 10.0, num_samples: int = 20, color: numpy.ndarray[numpy.float32[3, 1]] | None = None, model_matrix: numpy.ndarray[numpy.float32[4, 4]] | None = None, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None) None

Rasterize the camera frustum.

Parameters:
  • camera_frustum – Camera frustum to render.

  • camera – Camera to render from.

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • line_thickness – Thickness of the lines.

  • distance – Distance to project the frustum out into space (defaults to 10cm).

  • num_samples – Number of samples to use for computing the boundaries of the frustum (defaults to 20).

  • color – Color to use for the frustum (defaults to white).

  • model_matrix – Additional matrix to apply to the frustum.

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space.

pymomentum.renderer.rasterize_capsules(transformation: torch.Tensor, radius: torch.Tensor, length: torch.Tensor, camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, rgb_buffer: torch.Tensor | None = None, surface_normals_buffer: torch.Tensor | None = None, material: pymomentum.renderer.PhongMaterial | None = None, *, lights: list[pymomentum.renderer.Light] | None = None, model_matrix: numpy.ndarray[numpy.float32[4, 4]] | None = None, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None, cylinder_length_subdivisions: int = 16, cylinder_radius_subdivisions: int = 16) None

Rasterize capsules using the passed-in camera onto a given RGB+depth buffer.

A capsule is defined as extending along the x axis in the local space defined by the transform. It has two radius values, one for the start and one for the end of the capsule, and the ends of the capsule are capped with spheres.

Parameters:
  • transformation – (nCapsules x 4 x 4) torch.Tensor of transformations from capsule-local space (oriented along the x axis) to world space.

  • radius – (nCapsules x 2) torch.Tensor of per-capsule start and end radius values.

  • length – (nCapsules) torch.Tensor of per-capsule length values.

  • camera – Camera to render from.

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • surface_normals_buffer – Buffer to render eye-space surface normals to; can be reused for multiple renders. Should have dimensions [height x width x 3].

  • material – Material to render with (assumes solid color for now).

  • lights – Lights to use in rendering, in world-space. If none are given, a default light setup is used.

  • model_matrix – Additional matrix to apply to the model. Unlike the camera transforms, it is allowed to have scaling and/or shearing.

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space. Can be used to render e.g. two characters next to each other for comparison without needing to create a special camera.

  • cylinder_length_subdivisions – How many subdivisions along cylinder length; longer cylinders may need more to avoid looking chunky.

  • cylinder_radius_subdivisions – How many subdivisions around cylinder radius; good values are between 16 and 64.

pymomentum.renderer.rasterize_character(character: pymomentum.geometry.Character, skeleton_state: torch.Tensor, camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, rgb_buffer: torch.Tensor | None = None, *, surface_normals_buffer: torch.Tensor | None = None, vertex_index_buffer: torch.Tensor | None = None, triangle_index_buffer: torch.Tensor | None = None, material: pymomentum.renderer.PhongMaterial | None = None, per_vertex_diffuse_color: torch.Tensor | None = None, lights: list[pymomentum.renderer.Light] | None = None, model_matrix: numpy.ndarray[numpy.float32[4, 4]] | None = None, back_face_culling: bool = True, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None, wireframe_color: numpy.ndarray[numpy.float32[3, 1]] | None = None) None

Rasterize the posed character using the passed-in camera onto a given RGB+depth buffer. Uses an optimized cross-platform SIMD implementation.

See detailed notes under rasterize_mesh(), above.

Parameters:
  • character – pymomentum.geometry.Character to rasterize.

  • skeleton_state – State of the skeleton.

  • camera – Camera to render from.

  • material – Material to render with (assumes solid color for now).

  • per_vertex_diffuse_color – A per-vertex diffuse color to use instead of the material’s diffuse color.

  • lights – Lights to use in rendering, in world-space. If none are given, a default light setup is used.

  • model_matrix – Additional matrix to apply to the model. Unlike the camera transforms, it is allowed to have scaling and/or shearing.

  • back_face_culling – Enable back-face culling (speeds up the render).

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • vertex_index_buffer – Optional buffer to rasterize the vertex indices to. Useful for e.g. computing parts.

  • triangle_index_buffer – Optional buffer to rasterize the triangle indices to. Useful for e.g. computing parts.

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space. Can be used to render e.g. two characters next to each other for comparison without needing to create a special camera.

  • wireframe_color – If provided, color to use for the wireframe (defaults to no wireframe).

pymomentum.renderer.rasterize_checkerboard(camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, rgb_buffer: torch.Tensor | None = None, *, surface_normals_buffer: torch.Tensor | None = None, material1: pymomentum.renderer.PhongMaterial | None = None, material2: pymomentum.renderer.PhongMaterial | None = None, lights: list[pymomentum.renderer.Light] | None = None, model_matrix: numpy.ndarray[numpy.float32[4, 4]] | None = None, back_face_culling: bool = True, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None, width: float = 50.0, num_checks: int = 10, subdivisions: int = 1) None

Rasterize a checkerboard floor in the x-z plane (with y up).

See detailed notes under rasterize_mesh(), above.

Parameters:
  • camera – Camera to render from.

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • material1 – Material to use for even checks.

  • material2 – Material to use for odd checks.

  • lights – Lights to use in rendering, in world-space. If none are given, a default light setup is used.

  • model_matrix – Matrix to use to transform the plane from the origin.

  • back_face_culling – Cull back-facing triangles.

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space. Can be used to render e.g. two characters next to each other for comparison without needing to create a special camera.

  • width – Width of the plane in x/z.

  • num_checks – Number of checks in each axis.

  • subdivisions – Number of subdivisions per check.

pymomentum.renderer.rasterize_circles(positions: torch.Tensor, camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, rgb_buffer: torch.Tensor | None = None, *, line_thickness: float = 1.0, radius: float = 3.0, line_color: numpy.ndarray[numpy.float32[3, 1]] | None = None, fill_color: numpy.ndarray[numpy.float32[3, 1]] | None = None, model_matrix: numpy.ndarray[numpy.float32[4, 4]] | None = None, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None) None

Rasterize circles to the provided RGB and z buffers.

The advantage of using rasterization to draw circles instead of just drawing them with e.g. opencv is that it will use the correct camera model and respect the z buffer.

See detailed notes under rasterize_mesh(), above.

Parameters:
  • positions – (nCircles x 3) torch.Tensor of circle centers.

  • camera – Camera to render from.

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • line_thickness – Thickness of the circle outline.

  • radius – Radius of the circle.

  • line_color – Color of the outline; transparent if not provided.

  • fill_color – Fill color; transparent if not provided.

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space. Can be used to render e.g. two characters next to each other for comparison without needing to create a special camera.

pymomentum.renderer.rasterize_circles_2d(positions: torch.Tensor, rgb_buffer: torch.Tensor, line_thickness: float = 1.0, radius: float = 3.0, line_color: numpy.ndarray[numpy.float32[3, 1]] | None = None, fill_color: numpy.ndarray[numpy.float32[3, 1]] | None = None, z_buffer: torch.Tensor | None = None, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None) None

Rasterize circles directly in 2D image space without camera projection or z-buffer.

Parameters:
  • positions – (nCircles x 2) torch.Tensor of circle centers in image space [x, y].

  • rgb_buffer – RGB-buffer to render geometry onto.

  • line_thickness – Thickness of the circle outline.

  • radius – Radius of the circle.

  • line_color – Color of the outline, is transparent if not provided.

  • fill_color – Fill color, is transparent if not provided.

  • z_buffer – Optional Z-buffer to write zeros to for alpha matting.

  • image_offset – Offset by (x, y) pixels in image space.
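
Example (a minimal sketch; rgb_buffer is assumed to come from create_rgb_buffer() and the keypoint positions are arbitrary):

    import numpy as np
    import torch
    import pymomentum.renderer as pmr

    # Overlay 2D keypoints (e.g. projected joints) directly in image space.
    keypoints_2d = torch.tensor([[100.0, 120.0], [250.0, 300.0]], dtype=torch.float32)
    pmr.rasterize_circles_2d(
        keypoints_2d,
        rgb_buffer,
        radius=4.0,
        fill_color=np.array([1.0, 0.0, 0.0], dtype=np.float32),
    )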

pymomentum.renderer.rasterize_cylinders(start_position: torch.Tensor, end_position: torch.Tensor, camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, rgb_buffer: torch.Tensor | None = None, surface_normals_buffer: torch.Tensor | None = None, radius: torch.Tensor | None = None, color: torch.Tensor | None = None, material: pymomentum.renderer.PhongMaterial | None = None, *, lights: list[pymomentum.renderer.Light] | None = None, model_matrix: numpy.ndarray[numpy.float32[4, 4]] | None = None, back_face_culling: bool = True, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None, length_subdivisions: int = 16, radius_subdivisions: int = 16) None

Rasterize cylinders using the passed-in camera onto a given RGB+depth buffer. Uses an optimized cross-platform SIMD implementation.

A cylinder is defined as extending from start_position to end_position with the radius provided by radius.

See detailed notes under rasterize_mesh(), above.

Parameters:
  • start_position – (nCylinders x 3) torch.Tensor of starting positions.

  • end_position – (nCylinders x 3) torch.Tensor of ending positions.

  • camera – Camera to render from.

  • radius – (nCylinders) Optional tensor of per-cylinder radius values (defaults to 1).

  • color – (nCylinders x 3) Optional Tensor of per-cylinder colors (defaults to using the material parameter).

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • lights – Lights to use in rendering, in world-space. If none are given, a default light setup is used.

  • model_matrix – Additional matrix to apply to the model. Unlike the camera transforms, it is allowed to have scaling and/or shearing.

  • back_face_culling – Enable back-face culling (speeds up the render).

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space. Can be used to render e.g. two characters next to each other for comparison without needing to create a special camera.

  • length_subdivisions – How many subdivisions along length; longer cylinders may need more to avoid looking chunky.

  • radius_subdivisions – How many subdivisions around cylinder radius; good values are between 16 and 64.

pymomentum.renderer.rasterize_lines(positions: torch.Tensor, camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, rgb_buffer: torch.Tensor | None = None, *, thickness: float = 1.0, color: numpy.ndarray[numpy.float32[3, 1]] | None = None, model_matrix: numpy.ndarray[numpy.float32[4, 4]] | None = None, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None) None

Rasterize lines to the provided RGB and z buffers.

The advantage of using rasterization to draw lines instead of just drawing them with e.g. opencv is that it will use the correct camera model and respect the z buffer.

See detailed notes under rasterize_mesh(), above.

Parameters:
  • positions – (nLines x 2 x 3) torch.Tensor of start/end positions.

  • camera – Camera to render from.

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • thickness – Thickness of the lines; currently shared across all lines.

  • color – Line color; currently shared across all lines.

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space. Can be used to render e.g. two characters next to each other for comparison without needing to create a special camera.

pymomentum.renderer.rasterize_lines_2d(positions: torch.Tensor, rgb_buffer: torch.Tensor, thickness: float = 1.0, color: numpy.ndarray[numpy.float32[3, 1]] | None = None, z_buffer: torch.Tensor | None = None, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None) None

Rasterize lines directly in 2D image space without camera projection or z-buffer.

Parameters:
  • positions – (nLines x 4) torch.Tensor of line start/end positions in image space [start_x, start_y, end_x, end_y].

  • rgb_buffer – RGB-buffer to render geometry onto.

  • thickness – Thickness of the lines.

  • color – Line color.

  • z_buffer – Optional Z-buffer to write zeros to for alpha matting.

  • image_offset – Offset by (x, y) pixels in image space.

pymomentum.renderer.rasterize_mesh(vertex_positions: torch.Tensor, vertex_normals: torch.Tensor | None, triangles: torch.Tensor, camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, rgb_buffer: torch.Tensor | None = None, *, surface_normals_buffer: torch.Tensor | None = None, vertex_index_buffer: torch.Tensor | None = None, triangle_index_buffer: torch.Tensor | None = None, material: pymomentum.renderer.PhongMaterial | None = None, texture_coordinates: torch.Tensor | None = None, texture_triangles: torch.Tensor | None = None, per_vertex_diffuse_color: torch.Tensor | None = None, lights: list[pymomentum.renderer.Light] | None = None, model_matrix: numpy.ndarray[numpy.float32[4, 4]] | None = None, back_face_culling: bool = True, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None) None

Rasterize the triangle mesh using the passed-in camera onto a given RGB+depth buffer. Uses an optimized cross-platform SIMD implementation.

Notes:

  • You can rasterize multiple meshes to the same depth buffer by calling this function multiple times.

  • To simplify the SIMD implementation, the width of the depth buffer must be a multiple of 8. If you want to render a resolution that is not a multiple of 8, allocate appropriately padded buffers (e.g. using create_z_buffer() and create_rgb_buffer()) and then extract the smaller image at the end.

Parameters:
  • vertex_positions – (nVert x 3) Tensor of vertex positions.

  • vertex_normals – (nVert x 3) Tensor of vertex normals, or None.

  • triangles – (nTriangles x 3) Tensor of triangle vertex indices.

  • camera – Camera to render from.

  • model_matrix – Additional matrix to apply to the model. Unlike the camera transforms, it is allowed to have scaling and/or shearing.

  • material – Material to render with (assumes solid color for now).

  • texture_coordinates – Texture coordinates used with the mesh, indexed by texture_triangles (if present) or triangles otherwise.

  • texture_triangles – Triangles in texture coordinate space. Must have the same number of triangles as the triangles input and is assumed to match the regular triangles input if not present. This allows discontinuities in the texture map without needing to break up the mesh.

  • per_vertex_diffuse_color – A per-vertex diffuse color to use instead of the material’s diffuse color.

  • lights – Lights to use in rendering, in world-space. If none are given, a default light setup is used.

  • back_face_culling – Enable back-face culling (speeds up the render).

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • surface_normals_buffer – Buffer to render eye-space surface normals to; can be reused for multiple renders. Should have dimensions [height x width x 3].

  • vertex_index_buffer – Optional buffer to rasterize the vertex indices to. Useful for e.g. computing parts.

  • triangle_index_buffer – Optional buffer to rasterize the triangle indices to. Useful for e.g. computing parts.

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space. Can be used to render e.g. two characters next to each other for comparison without needing to create a special camera.
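
Example (a minimal end-to-end sketch rasterizing a single triangle; the integer dtype of the triangle tensor and the default viewing conventions are assumptions):

    import numpy as np
    import torch
    import pymomentum.renderer as pmr

    camera = pmr.Camera(
        pmr.PinholeIntrinsicsModel(image_width=640, image_height=480)
    ).look_at(position=np.array([0.0, 0.5, 5.0], dtype=np.float32))

    # Padded buffers (width rounded up to a multiple of 8).
    z_buffer = pmr.create_z_buffer(camera)
    rgb_buffer = pmr.create_rgb_buffer(camera)

    vertices = torch.tensor(
        [[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.5, 0.0]], dtype=torch.float32
    )
    triangles = torch.tensor([[0, 1, 2]], dtype=torch.int32)  # dtype assumed

    pmr.rasterize_mesh(
        vertex_positions=vertices,
        vertex_normals=None,  # the signature allows omitting normals
        triangles=triangles,
        camera=camera,
        z_buffer=z_buffer,
        rgb_buffer=rgb_buffer,
    )

    # Strip the horizontal padding to recover the final image.
    image = rgb_buffer[:, : camera.image_width, :]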

pymomentum.renderer.rasterize_skeleton(character: pymomentum.geometry.Character, skeleton_state: torch.Tensor, camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, rgb_buffer: Optional[torch.Tensor] = None, *, surface_normals_buffer: Optional[torch.Tensor] = None, sphere_material: Optional[pymomentum.renderer.PhongMaterial] = None, cylinder_material: Optional[pymomentum.renderer.PhongMaterial] = None, lights: Optional[list[pymomentum.renderer.Light]] = None, model_matrix: Optional[numpy.ndarray[numpy.float32[4, 4]]] = None, back_face_culling: bool = True, active_joints: Optional[torch.Tensor] = None, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: Optional[numpy.ndarray[numpy.float32[2, 1]]] = None, sphere_radius: float = 2.0, cylinder_radius: float = 1.0, sphere_subdivision_level: int = 2, cylinder_length_subdivisions: int = 16, cylinder_radius_subdivisions: int = 16, style: pymomentum.renderer.SkeletonStyle = <SkeletonStyle.Pipes: 1>) None

Rasterize the skeleton onto a given RGB+depth buffer by placing spheres at joints and connecting joints with cylinders.

See detailed notes under rasterize_mesh(), above.

Parameters:
  • character – pymomentum.geometry.Character whose skeleton to rasterize.

  • skeleton_state – State of the skeleton.

  • camera – Camera to render from.

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • sphere_material – Material to use for spheres at joints.

  • cylinder_material – Material to use for the cylinders connecting joints.

  • lights – Lights to use in rendering, in world-space. If none are given, a default light setup is used.

  • model_matrix – Additional matrix to apply to the model. Unlike the camera transforms, it is allowed to have scaling and/or shearing.

  • back_face_culling – Enable back-face culling (speeds up the render).

  • active_joints – Bool tensor specifying which joints to render.

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space. Can be used to render e.g. two characters next to each other for comparison without needing to create a special camera.

  • sphere_radius – Radius for spheres at joints.

  • cylinder_radius – Radius for cylinders between joints.

  • sphere_subdivision_level – How many subdivision levels; more levels means more triangles, smoother spheres, but slower rendering. Good values are between 1 and 3.

  • cylinder_length_subdivisions – How many subdivisions along cylinder length; longer cylinders may need more to avoid looking chunky.

  • cylinder_radius_subdivisions – How many subdivisions around cylinder radius; good values are between 16 and 64.
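
Example (a hedged sketch; character, skeleton_state, camera, and the buffers are assumed to be set up as in the earlier examples):

    import pymomentum.renderer as pmr

    pmr.rasterize_skeleton(
        character,
        skeleton_state,
        camera,
        z_buffer,
        rgb_buffer,
        style=pmr.SkeletonStyle.Octahedrons,
    )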

pymomentum.renderer.rasterize_spheres(center: torch.Tensor, camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, rgb_buffer: torch.Tensor | None = None, surface_normals_buffer: torch.Tensor | None = None, radius: torch.Tensor | None = None, color: torch.Tensor | None = None, material: pymomentum.renderer.PhongMaterial | None = None, *, lights: list[pymomentum.renderer.Light] | None = None, model_matrix: numpy.ndarray[numpy.float32[4, 4]] | None = None, back_face_culling: bool = True, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None, subdivision_level: int = 2) None

Rasterize spheres using the passed-in camera onto a given RGB+depth buffer. Uses an optimized cross-platform SIMD implementation.

See detailed notes under rasterize_mesh(), above.

Parameters:
  • center – (nSpheres x 3) Tensor of sphere centers.

  • camera – Camera to render from.

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • radius – (nSpheres) Optional Tensor of per-sphere radius values (defaults to 1).

  • color – (nSpheres x 3) optional Tensor of per-sphere colors (defaults to using the passed-in material).

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • lights – Lights to use in rendering, in world-space. If none are given, a default light setup is used.

  • model_matrix – Additional matrix to apply to the model. Unlike the camera transforms, it is allowed to have scaling and/or shearing.

  • back_face_culling – Enable back-face culling (speeds up the render).

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space. Can be used to render e.g. two characters next to each other for comparison without needing to create a special camera.

  • subdivision_level – How many subdivision levels; more levels means more triangles, smoother spheres, but slower rendering.

pymomentum.renderer.rasterize_transforms(transforms: torch.Tensor, camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, *, rgb_buffer: torch.Tensor | None = None, surface_normals_buffer: torch.Tensor | None = None, scale: float = 1.0, material: pymomentum.renderer.PhongMaterial | None = None, lights: list[pymomentum.renderer.Light] | None = None, model_matrix: numpy.ndarray[numpy.float32[4, 4]] | None = None, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None, length_subdivisions: int = 16, radius_subdivisions: int = 16) None

Rasterize a set of transforms as little frames using arrows.

Parameters:
  • transforms – (n x 4 x 4) torch.Tensor of transforms to render.

  • camera – Camera to render from.

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • surface_normals_buffer – Buffer to render eye-space surface normals to; can be reused for multiple renders.

  • scale – Scale of the arrows.

  • material – Material to render with. If not specified, red/green/blue are used for the x/y/z axes.

  • lights – Lights to use in rendering, in world-space. If none are given, a default light setup is used.

  • model_matrix – Additional matrix to apply to the model. Unlike the camera transforms, it is allowed to have scaling and/or shearing.

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space.

  • length_subdivisions – How many subdivisions along length; longer cylinders may need more to avoid looking chunky.

  • radius_subdivisions – How many subdivisions around cylinder radius; good values are between 16 and 64.

pymomentum.renderer.rasterize_wireframe(vertex_positions: torch.Tensor, triangles: torch.Tensor, camera: pymomentum.renderer.Camera, z_buffer: torch.Tensor, rgb_buffer: torch.Tensor | None = None, *, thickness: float = 1.0, color: numpy.ndarray[numpy.float32[3, 1]] | None = None, model_matrix: numpy.ndarray[numpy.float32[4, 4]] | None = None, back_face_culling: bool = True, near_clip: float = 0.10000000149011612, depth_offset: float = 0.0, image_offset: numpy.ndarray[numpy.float32[2, 1]] | None = None) None

Rasterize the triangle mesh as a wireframe.

See detailed notes under rasterize_mesh(), above.

Parameters:
  • vertex_positions – (nVert x 3) Tensor of vertex positions.

  • triangles – (nTriangles x 3) Tensor of triangle vertex indices.

  • camera – Camera to render from.

  • z_buffer – Z-buffer to render geometry onto; can be reused for multiple renders.

  • rgb_buffer – RGB-buffer to render geometry onto; can be reused for multiple renders.

  • thickness – Thickness of the wireframe lines.

  • color – Wireframe color.

  • model_matrix – Additional matrix to apply to the model. Unlike the camera transforms, it is allowed to have scaling and/or shearing.

  • back_face_culling – Enable back-face culling (speeds up the render).

  • near_clip – Clip any triangles closer than this depth. Defaults to 0.1.

  • depth_offset – Offset the depth values. Nonzero values can be used to render something slightly in front of something else and avoid depth fighting. Defaults to 0.

  • image_offset – Offset by (x, y) pixels in image space. Can be used to render e.g. two characters next to each other for comparison without needing to create a special camera.

pymomentum.renderer.subdivide_mesh(vertices: numpy.ndarray[numpy.float32[m, n]], normals: numpy.ndarray[numpy.float32[m, n]], triangles: numpy.ndarray[numpy.int32[m, n]], texture_coordinates: numpy.ndarray[numpy.float32[m, n]] | None = None, texture_triangles: numpy.ndarray[numpy.int32[m, n]] | None = None, levels: int = 1, max_edge_length: float = 0) tuple[numpy.ndarray[numpy.float32[m, n]], numpy.ndarray[numpy.float32[m, n]], numpy.ndarray[numpy.int32[m, n]], numpy.ndarray[numpy.float32[m, n]], numpy.ndarray[numpy.int32[m, n]]]

Subdivide the triangle mesh.

Parameters:
  • vertices – n x 3 numpy.ndarray of vertex positions.

  • normals – n x 3 numpy.ndarray of vertex normals.

  • triangles – n x 3 numpy.ndarray of triangles.

  • texture_coordinates – n x 2 numpy.ndarray of texture coordinates.

  • texture_triangles – n x 3 numpy.ndarray of texture triangles (see rasterize_mesh() for more details).

  • levels – Maximum number of subdivision levels (default = 1).

  • max_edge_length – Stop subdividing when the longest edge is shorter than this length.

Returns:

A tuple [vertices, normals, triangles, texture_coordinates, texture_triangles].
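
Example (a minimal sketch subdividing a single triangle; the values are arbitrary):

    import numpy as np
    import pymomentum.renderer as pmr

    vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=np.float32)
    normals = np.tile(np.array([[0.0, 0.0, 1.0]], dtype=np.float32), (3, 1))
    triangles = np.array([[0, 1, 2]], dtype=np.int32)

    # Subdivide up to 3 levels, stopping once no edge is longer than 0.25.
    vertices, normals, triangles, tex_coords, tex_tris = pmr.subdivide_mesh(
        vertices, normals, triangles, levels=3, max_edge_length=0.25
    )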

pymomentum.renderer.triangulate(face_indices: numpy.ndarray[numpy.int32[m, 1]], face_offsets: numpy.ndarray[numpy.int32[m, 1]]) numpy.ndarray[numpy.int32[m, 3]]

Triangulate the polygon mesh.

Parameters:
  • face_indices – numpy.ndarray of per-face vertex indices, concatenated over all polygons.

  • face_offsets – numpy.ndarray defining the start and end of each polygon within face_indices.

Returns:

(nTriangles x 3) numpy.ndarray of triangles.
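
Example (a minimal sketch triangulating a single quad; the exact offset convention is an assumption based on the parameter description):

    import numpy as np
    import pymomentum.renderer as pmr

    # One quad made of vertices 0..3; face_offsets marks where each polygon
    # starts and ends within face_indices.
    face_indices = np.array([0, 1, 2, 3], dtype=np.int32)
    face_offsets = np.array([0, 4], dtype=np.int32)
    triangles = pmr.triangulate(face_indices, face_offsets)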