Client SDK Python API Reference
This page provides a comprehensive reference for the Aria Client SDK Python API, which lets you control Aria devices and stream data from them.
Core Classes
DeviceClient
The DeviceClient class manages connections to Aria devices and handles device discovery.
Constructor
DeviceClient()
Creates a new DeviceClient instance.
Methods
authenticate()
Authenticate with the device using credentials.
Returns: Certificate hash pushed to the device (str)
Raises: RuntimeError if authentication fails
connect()
Establish a connection to an Aria device.
Returns: Device object representing the connected device
Raises: RuntimeError if connection fails
is_connected(device: Device) -> bool
Check if a device is currently connected.
Parameters:
device(Device): The device to check connection status for
Returns: bool - True if connected, False otherwise
usb_network_devices()
Get a list of devices available via USB network connection.
Returns: List of available USB network devices
Raises: RuntimeError if unable to query devices
active_connections() -> List[Device]
Get a list of all currently active device connections.
Returns: List of Device objects
disconnect(device: Device) -> None
Disconnect from a specific device.
Parameters:
device(Device): The device to disconnect from
disconnect_all() -> None
Disconnect from all connected devices.
set_client_config(config: DeviceClientConfig) -> None
Configure the device client settings.
Parameters:
config(DeviceClientConfig): Configuration object for the client
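Example (a minimal connection sketch; the aria.sdk_gen2 import path and the call order are assumptions, not taken from this page):
import aria.sdk_gen2 as sdk_gen2

client = sdk_gen2.DeviceClient()
client.authenticate()                # pushes a certificate hash to the device
device = client.connect()            # returns a Device object
print(client.is_connected(device))   # True while the connection is active
client.disconnect(device)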
Device
The Device class represents a connected Aria device and provides methods to control it.
Methods
connection_id() -> str
Get the unique connection identifier for this device.
Returns: String connection ID
status()
Get current device status.
Returns: Device status object
Raises: RuntimeError if unable to retrieve device status
serial()
Get the device serial number.
Returns: String serial number
Raises: RuntimeError if unable to retrieve serial number
Streaming Methods
start_streaming()
Start streaming data from the device.
Returns: None
Raises: RuntimeError if unable to start streaming
stop_streaming()
Stop streaming data from the device.
Returns: None
Raises: RuntimeError if unable to stop streaming
set_streaming_config(streaming_config: HttpStreamingConfig) -> None
Configure streaming settings.
Parameters:
streaming_config(HttpStreamingConfig): Streaming configuration object
install_streaming_certs()
Install default streaming certificates on the device.
Returns: None
Raises: RuntimeError if certificate installation fails
install_streaming_certs(streaming_certificates: StreamingCertsPem)
Install custom streaming certificates on the device.
Parameters:
streaming_certificates(StreamingCertsPem): Custom certificates in PEM format
Returns: None
Raises: RuntimeError if certificate installation fails
uninstall_streaming_certs()
Remove streaming certificates from the device.
Returns: None
Raises: RuntimeError if certificate removal fails
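Example (a sketch of the streaming flow, assuming device is a connected Device, that HttpStreamingConfig — documented under Configuration Classes below — can be default-constructed and its attributes assigned afterwards, and that the profile name is illustrative):
import aria.sdk_gen2 as sdk_gen2

config = sdk_gen2.HttpStreamingConfig()
config.profile_name = "profile12"    # illustrative profile name
config.streaming_interface = sdk_gen2.StreamingInterface.USB_NCM

device.install_streaming_certs()     # install the default certificates
device.set_streaming_config(config)
device.start_streaming()
# ... receive data through a StreamDataInterface (see below) ...
device.stop_streaming()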
Recording Methods
set_recording_config(recording_config: RecordingConfig) -> None
Configure recording settings.
Parameters:
recording_config(RecordingConfig): Recording configuration object
start_recording()
Start recording on the device.
Returns: UUID of the recording (str)
Raises: RuntimeError if unable to start recording
stop_recording()
Stop the current recording.
Returns: None
Raises: RuntimeError if unable to stop recording
list_recordings()
Print a list of all recordings on the device.
Returns: None
Raises: RuntimeError if unable to list recordings
recording_info(uuid: str)
Print detailed information about a specific recording.
Parameters:
uuid(str): Recording UUID
Returns: None
Raises: RuntimeError if recording not found or unable to retrieve info
delete_recording(uuid: str)
Delete a specific recording from the device.
Parameters:
uuid(str): Recording UUID to delete
Returns: None
Raises: RuntimeError if unable to delete recording
download_recording(uuid: str, output_path: str = "")
Download a specific recording from the device.
Parameters:
uuid(str): Recording UUID to download
output_path(str, optional): Local path to save the recording. Defaults to current directory.
Returns: None
Raises: RuntimeError if download fails
download_all_recordings(output_path: str = "")
Download all recordings from the device.
Parameters:
output_path(str, optional): Local directory to save recordings. Defaults to current directory.
Returns: None
Raises: RuntimeError if download fails
delete_all_recordings()
Delete all recordings from the device.
Returns: None
Raises: RuntimeError if unable to delete recordings
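Example (a sketch of the recording flow, assuming device is a connected Device, that RecordingConfig — documented under Configuration Classes below — can be default-constructed and its attributes assigned afterwards, and that the profile name and output path are illustrative):
import aria.sdk_gen2 as sdk_gen2

config = sdk_gen2.RecordingConfig()
config.profile_name = "profile12"    # illustrative profile name
device.set_recording_config(config)

uuid = device.start_recording()      # returns the recording UUID
# ... record for a while ...
device.stop_recording()
device.download_recording(uuid, "./recordings")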
Text-to-Speech Methods
render_tts(text: str)
Render text-to-speech on the device.
Parameters:
text(str): Text to convert to speech
Returns: None
Raises: RuntimeError if TTS rendering fails
stop_tts()
Stop current text-to-speech playback.
Returns: None
Raises: RuntimeError if unable to stop TTS
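Example (assuming device is a connected Device; the text is illustrative):
device.render_tts("Recording complete")
device.stop_tts()   # interrupt playback if still speaking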
StreamDataInterface
The StreamDataInterface class handles receiving and processing streaming data from the device.
Constructor
StreamDataInterface(enable_image_decoding: bool, enable_raw_stream: bool)
Parameters:
enable_image_decoding(bool): Enable automatic image decoding
enable_raw_stream(bool): Enable raw stream data access
Methods
record_to_vrs(vrs_path: str) -> None
Save streaming data to a VRS file.
Parameters:
vrs_path(str): Path where the VRS file built from the received streaming data will be saved
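Example (assuming stream_interface is a StreamDataInterface instance created with the constructor above; the output path is illustrative):
stream_interface.record_to_vrs("/tmp/streaming_session.vrs")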
Callback Registration Methods
The following methods register callbacks for different data types:
register_imu_callback(callback: Callable)
Register callback for IMU (Inertial Measurement Unit) data.
Parameters:
callback(Callable): Function callback(motion_data: projectaria_tools.core.sensor_data.MotionData, sensor_label: str) called for each IMU sample
register_imu_batch_callback(callback: Callable)
Register callback for batched IMU data.
Parameters:
callback(Callable): Function callback(motion_data_batch: List[projectaria_tools.core.sensor_data.MotionData], sensor_label: str) called for IMU batches
register_eye_gaze_callback(callback: Callable)
Register callback for eye gaze tracking data.
Parameters:
callback(Callable): Function callback(eye_gaze_data: projectaria_tools.core.mps.EyeGaze) called for eye gaze samples
register_hand_pose_callback(callback: Callable)
Register callback for hand pose tracking data.
Parameters:
callback(Callable): Function callback(hand_pose_data: projectaria_tools.core.mps.hand_tracking) called for hand pose samples
register_audio_callback(callback: Callable)
Register callback for audio data.
Parameters:
callback(Callable): Function callback(audio_data: projectaria_tools.core.sensor_data.AudioData, audio_record: projectaria_tools.core.sensor_data.AudioDataRecord, num_channels: int) called for audio samples
register_rgb_callback(callback: Callable)
Register callback for RGB camera data.
Parameters:
callback(Callable): Function callback(image_data: projectaria_tools.core.sensor_data.ImageData, image_record: projectaria_tools.core.sensor_data.ImageDataRecord) called for RGB images
register_slam_callback(callback: Callable)
Register callback for SLAM camera data.
Parameters:
callback(Callable): Function callback(image_data: projectaria_tools.core.sensor_data.ImageData, image_record: projectaria_tools.core.sensor_data.ImageDataRecord) called for SLAM images
register_et_callback(callback: Callable)
Register callback for eye tracking camera data.
Parameters:
callback(Callable): Function callback(image_data: projectaria_tools.core.sensor_data.ImageData, image_record: projectaria_tools.core.sensor_data.ImageDataRecord) called for ET images
register_barometer_callback(callback: Callable)
Register callback for barometer data.
Parameters:
callback(Callable): Function callback(barometer_data: projectaria_tools.core.sensor_data.BarometerData) called for barometer samples
register_magnetometer_callback(callback: Callable)
Register callback for magnetometer data.
Parameters:
callback(Callable): Function callback(motion_data: projectaria_tools.core.sensor_data.MotionData, sensor_label: str) called for magnetometer samples
register_vio_callback(callback: Callable)
Register callback for Visual-Inertial Odometry (VIO) data.
Parameters:
callback(Callable): Function callback(frontend_output: projectaria_tools.core.sensor_data.FrontendOutput) called for VIO updates
register_vio_high_frequency_callback(callback: Callable)
Register callback for high-frequency VIO data.
Parameters:
callback(Callable): Function callback(vio_data: projectaria_tools.core.mps.OpenLoopTrajectoryPose) called for high-frequency VIO updates
register_device_calib_callback(callback: Callable)
Register callback for device calibration updates.
Parameters:
callback(Callable): Function callback(device_calibration: projectaria_tools.core.calibration.DeviceCalibration) called when calibration is received
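Example (a sketch of constructing a StreamDataInterface and registering callbacks; the aria.sdk_gen2 import path is assumed, constructor arguments are passed positionally, and the callback signatures follow the descriptions above):
import aria.sdk_gen2 as sdk_gen2

# enable_image_decoding=True, enable_raw_stream=False
stream_interface = sdk_gen2.StreamDataInterface(True, False)

def on_rgb(image_data, image_record):
    print(f"RGB frame at {image_record.capture_timestamp_ns} ns")

def on_imu(motion_data, sensor_label):
    print(f"{sensor_label}: accel={motion_data.accel_msec2} m/s^2")

stream_interface.register_rgb_callback(on_rgb)
stream_interface.register_imu_callback(on_imu)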
Queue Size Configuration Methods
The following methods configure queue sizes for different data streams:
set_rgb_queue_size(size: int) -> None
Set the queue size for RGB image data.
Parameters:
size(int): Queue size
set_slam_queue_size(size: int) -> None
Set the queue size for SLAM image data.
Parameters:
size(int): Queue size
set_et_queue_size(size: int) -> None
Set the queue size for eye tracking image data.
Parameters:
size(int): Queue size
set_imu_queue_size(size: int) -> None
Set the queue size for IMU data.
Parameters:
size(int): Queue size
set_imu_batch_queue_size(size: int) -> None
Set the queue size for batched IMU data.
Parameters:
size(int): Queue size
set_vio_high_freq_queue_size(size: int) -> None
Set the queue size for high-frequency VIO data.
Parameters:
size(int): Queue size
set_vio_high_freq_batch_queue_size(size: int) -> None
Set the queue size for batched high-frequency VIO data.
Parameters:
size(int): Queue size
set_eye_gaze_queue_size(size: int) -> None
Set the queue size for eye gaze data.
Parameters:
size(int): Queue size
set_hand_pose_queue_size(size: int) -> None
Set the queue size for hand pose data.
Parameters:
size(int): Queue size
set_vio_queue_size(size: int) -> None
Set the queue size for VIO data.
Parameters:
size(int): Queue size
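Example (illustrative sizes, assuming stream_interface is a StreamDataInterface instance):
stream_interface.set_rgb_queue_size(2)
stream_interface.set_imu_queue_size(100)
print(stream_interface.get_rgb_queue_size())   # 2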
Queue Size Getter Methods
The following methods retrieve queue sizes for different data streams:
get_rgb_queue_size() -> int
Get the queue size for RGB image data.
Returns: Queue size (int)
get_slam_queue_size() -> int
Get the queue size for SLAM image data.
Returns: Queue size (int)
get_imu_queue_size() -> int
Get the queue size for IMU data.
Returns: Queue size (int)
get_imu_batch_queue_size() -> int
Get the queue size for batched IMU data.
Returns: Queue size (int)
get_et_queue_size() -> int
Get the queue size for eye tracking image data.
Returns: Queue size (int)
get_vio_queue_size() -> int
Get the queue size for VIO data.
Returns: Queue size (int)
get_vio_high_freq_queue_size() -> int
Get the queue size for high-frequency VIO data.
Returns: Queue size (int)
get_vio_high_freq_batch_queue_size() -> int
Get the queue size for batched high-frequency VIO data.
Returns: Queue size (int)
get_eye_gaze_queue_size() -> int
Get the queue size for eye gaze data.
Returns: Queue size (int)
get_hand_pose_queue_size() -> int
Get the queue size for hand pose data.
Returns: Queue size (int)
Configuration Classes
HttpStreamingConfig
Configuration for HTTP-based streaming.
Attributes:
profile_name(str): Name of the streaming profile to use
streaming_cert_name(str): Name of the streaming certificate
streaming_interface(StreamingInterface): Network interface to use for streaming
security_options(StreamingSecurityOptions): Security settings for streaming
advanced_config(HttpStreamerConfig): Advanced streaming configuration
keep_streaming_on_disconnection(bool): Continue streaming if disconnected
RecordingConfig
Configuration for on-device recording.
Attributes:
profile_name(str): Name of the recording profile to use
custom_profile(str): Custom recording profile in JSON format
recording_name(str): Name for the recording
StreamingCertsPem
Streaming certificates in PEM format.
Attributes:
root_ca_cert(str): Root CA certificate in PEM format
publisher_cert(str): Publisher certificate in PEM format
publisher_key(str): Publisher private key in PEM format
key_password(str): Password for the private key
cert_name(str): Name for this certificate set
HttpStreamerConfig
Advanced HTTP streamer configuration.
Attributes:
endpoint(Endpoint): Streaming endpoint configuration
Endpoint
HTTP endpoint configuration.
Attributes:
url(str): Endpoint URL
verify_server_certificates(bool): Whether to verify server certificates
auth(SslAuthentication): SSL authentication credentials
HTTP Server Classes
AriaGen2HttpServer
HTTP server for receiving streaming data from devices.
Constructor
AriaGen2HttpServer(config: HttpServerConfig, handler_factory: AriaGen2HandlerFactory)
Parameters:
config(HttpServerConfig): Server configuration
handler_factory(AriaGen2HandlerFactory): Factory for creating stream handlers
Methods
stop() -> None
Stop the HTTP server.
join() -> None
Wait for the server to finish.
HttpServerConfig
Configuration for the HTTP server.
Attributes:
address(str): Server bind address
port(int): Server port
use_ssl(bool): Enable SSL/TLS
mdns_hostname(str): mDNS hostname for discovery
certificate(Certificate): Server SSL certificate
ca_root(str): CA root certificate
threads(int): Number of server threads
idle_timeout_sec(int): Idle timeout in seconds
Certificate
Server SSL certificate configuration.
Attributes:
cert(str): Certificate in PEM format
key(str): Private key in PEM format
key_password(str): Password for the private key
AriaGen2HandlerFactory
Factory for creating stream data handlers.
Methods
create_factory_handler(handler_object: StreamDataInterface) -> AriaGen2HandlerFactory
Create a handler factory from a StreamDataInterface object.
Parameters:
handler_object(StreamDataInterface): Stream data interface instance
Returns: AriaGen2HandlerFactory instance
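Example (a sketch of wiring a StreamDataInterface into the HTTP server; the aria.sdk_gen2 import path, default construction of HttpServerConfig with attribute assignment, the static call to create_factory_handler, and the bind address/port are all assumptions):
import aria.sdk_gen2 as sdk_gen2

server_config = sdk_gen2.HttpServerConfig()
server_config.address = "0.0.0.0"    # illustrative bind address
server_config.port = 8080            # illustrative port

stream_interface = sdk_gen2.StreamDataInterface(True, False)
factory = sdk_gen2.AriaGen2HandlerFactory.create_factory_handler(stream_interface)
server = sdk_gen2.AriaGen2HttpServer(server_config, factory)

# ... stream until done; only stop() and join() are documented here ...
server.stop()
server.join()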
Enumerations
StreamingInterface
Available network interfaces for streaming.
Values:
WIFI_STA - WiFi Station mode
USB_RNDIS - USB RNDIS (Windows)
USB_NCM - USB NCM (Linux/Mac)
WIFI_SAP - WiFi Soft Access Point mode
RecordingType
Types of recordings.
Values:
RECORDING_TYPE_PROTOTYPE- Prototype recording
Message Types
MessageType
The MessageType class provides constants and utility functions for working with message type identifiers.
Message Type Constants
Message IDs are organized by category:
Camera Frames (0x8010-0x8012):
SLAM_CAMERA_FRAME(0x8010) - SLAM camera image data
ET_CAMERA_FRAME(0x8011) - Eye tracking camera image data
POV_CAMERA_FRAME(0x8012) - RGB/POV camera image data
Sensor Events (0x8020-0x8028):
IMU_EVENT(0x8020) - IMU (accelerometer/gyroscope) data
BARO_EVENT(0x8021) - Barometer data
MAG_EVENT(0x8022) - Magnetometer data
TEMP_EVENT(0x8023) - Temperature sensor data
PPG_EVENT(0x8024) - Photoplethysmogram (heart rate) data
GNSS_EVENT(0x8025) - GPS/GNSS location data
SUBGHZ_EVENT(0x8026) - Sub-GHz radio data
ALS_EVENT(0x8027) - Ambient light sensor data
GNSS_VISIBLE_SATELLITES_EVENT(0x8028) - Visible GPS satellites
Machine Perception (0x8030-0x8033):
MP_ET_RESULT(0x8030) - Eye gaze tracking result
MP_HT_RESULT(0x8031) - Hand tracking result
MP_VIO_RESULT(0x8032) - Visual-Inertial Odometry result
MP_VIO_HIGH_FREQUENCY_POSE(0x8033) - High-frequency VIO pose updates
Data and Utilities (0x8000-0x800A):
ERROR(0x8000) - Error message
AUDIO_REC_DATA(0x8001) - Audio recording data
WIFI_BEACONS(0x8005) - WiFi beacon scan results
BLE_BEACONS(0x8006) - Bluetooth beacon scan results
PHONE_LOCATION_DATA(0x8007) - Companion phone location
BATTERY_STATUS_DATA(0x8008) - Device battery status
ASR_DATA(0x8009) - Automatic speech recognition data
FACTORY_CALIBRATION(0x800A) - Factory calibration data
Status Messages (0x8100-0x8105):
ACRO_INFO(0x8100) - ACRO system information
OK(0x8105) - Success/acknowledgment message
Methods
to_string(message_id: int) -> str
Convert a message ID to its string name.
Parameters:
message_id(int): Numeric message ID (e.g., 0x8010)
Returns: String name of the message type (e.g., "SLAM_CAMERA_FRAME")
Example:
import aria.sdk_gen2 as sdk_gen2
name = sdk_gen2.MessageType.to_string(0x8010)
print(name) # "SLAM_CAMERA_FRAME"
# Using constant
name = sdk_gen2.MessageType.to_string(sdk_gen2.MessageType.IMU_EVENT)
print(name) # "IMU_EVENT"
from_string(name: str) -> Optional[int]
Convert a message type name to its numeric ID.
Parameters:
name(str): String name of the message type (e.g., "SLAM_CAMERA_FRAME")
Returns: Numeric message ID (int) if found, None otherwise
Example:
import aria.sdk_gen2 as sdk_gen2
msg_id = sdk_gen2.MessageType.from_string("SLAM_CAMERA_FRAME")
print(hex(msg_id)) # "0x8010"
# Unknown type returns None
unknown = sdk_gen2.MessageType.from_string("INVALID_TYPE")
print(unknown) # None
SharedMessage
Low-level message data structure.
Attributes:
id(int, read-only): Message ID
payload(IBuffer, read-only): Message payload
Methods:
size() -> int: Get message size in bytes
timestamp() -> int: Get message timestamp in nanoseconds
arrival_timestamp_ns() -> int: Get arrival timestamp in nanoseconds (returns -1 if not set)
Constructor:
SharedMessage(id: int, payload: bytes, timestampNs: int = 0)
Parameters:
id(int): Message type ID
payload(bytes): Message payload data
timestampNs(int, optional): Timestamp in nanoseconds. Defaults to 0.
Example:
import aria.sdk_gen2 as sdk_gen2
# Create a message
msg = sdk_gen2.SharedMessage(
    id=sdk_gen2.MessageType.IMU_EVENT,
    payload=b'\x00\x01\x02\x03',
    timestampNs=1234567890,
)
print(f"Message ID: {hex(msg.id)}")
print(f"Payload size: {msg.size()}")
print(f"Timestamp: {msg.timestamp()} ns")
IBuffer
Buffer interface for binary data.
Methods:
data() -> bytes: Get buffer data as bytes
size() -> int: Get buffer size in bytes
as_memoryview() -> Optional[memoryview]: Get buffer as Python memoryview (returns None if buffer is empty or null)
Example:
# Access payload data
if message.payload is not None:
    # Get as memoryview for zero-copy access
    mem_view = message.payload.as_memoryview()
    if mem_view is not None:
        print(f"Payload size: {len(mem_view)} bytes")
Data Converter
OssDataConverter
The OssDataConverter class converts raw FlatBuffer messages to typed Python data structures. This is essential for processing sensor data from SharedMessage payloads.
Constructor
OssDataConverter(enable_image_decoding: bool = True, decoder_class = None)
Parameters:
enable_image_decoding(bool, optional): Enable automatic image decoding. Defaults to True.
decoder_class(optional): Custom decoder class (not currently supported). Defaults to None.
Example:
import aria.oss_data_converter as data_converter
# Create converter with image decoding enabled
converter = data_converter.OssDataConverter(enable_image_decoding=True)
Configuration Methods
set_calibration(calibration_json: str) -> None
Set the device calibration from a JSON string. Required for converting VIO, eye gaze, and hand tracking data.
Parameters:
calibration_json(str): Device calibration in JSON format
Example:
# Set calibration (required for VIO/eye gaze/hand tracking)
converter.set_calibration(calibration_json_string)
set_python_image_decoding(enable: bool) -> None
Enable or disable Python-based image decoding.
Parameters:
enable(bool): True to enable Python decoding, False otherwise
get_python_image_decoding() -> bool
Get current Python image decoding status.
Returns: bool - True if Python decoding is enabled
Image Conversion Methods
to_image_data_and_record(shared_message: SharedMessage) -> Tuple[ImageData, ImageDataRecord]
Convert camera frame message to image data and metadata.
Parameters:
shared_message(SharedMessage): Message with image payload
Returns: Tuple of (ImageData, ImageDataRecord) or (None, None) if conversion fails
Supported Message Types:
SLAM_CAMERA_FRAME - SLAM camera images
ET_CAMERA_FRAME - Eye tracking camera images
POV_CAMERA_FRAME - RGB camera images
Example:
if message.id == sdk_gen2.MessageType.SLAM_CAMERA_FRAME:
    image_data, image_record = converter.to_image_data_and_record(message)
    if image_data is not None:
        print(f"Image size: {image_data.get_width()}x{image_data.get_height()}")
        print(f"Timestamp: {image_record.capture_timestamp_ns} ns")
        print(f"Camera ID: {image_record.camera_id}")
Sensor Conversion Methods
to_imu(shared_message: SharedMessage) -> List[MotionData]
Convert IMU event message to motion data list.
Parameters:
shared_message(SharedMessage): Message with IMU payload
Returns: List of MotionData objects or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.IMU_EVENT:
    imu_data_list = converter.to_imu(message)
    if imu_data_list:
        for imu_data in imu_data_list:
            print(f"Accel: {imu_data.accel_msec2} m/s²")
            print(f"Gyro: {imu_data.gyro_radsec} rad/s")
to_magnetometer(shared_message: SharedMessage) -> List[MotionData]
Convert magnetometer event message to motion data list.
Parameters:
shared_message(SharedMessage): Message with magnetometer payload
Returns: List of MotionData objects or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.MAG_EVENT:
    mag_data_list = converter.to_magnetometer(message)
    if mag_data_list:
        mag_data = mag_data_list[0]
        print(f"Magnetic field: {mag_data.mag_tesla} T")
to_barometer(shared_message: SharedMessage) -> BarometerData
Convert barometer event message to barometer data.
Parameters:
shared_message(SharedMessage): Message with barometer payload
Returns: BarometerData object or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.BARO_EVENT:
    baro_data = converter.to_barometer(message)
    if baro_data:
        print(f"Pressure: {baro_data.pressure} Pa")
        print(f"Temperature: {baro_data.temperature} °C")
to_audio(shared_message: SharedMessage) -> Tuple[AudioData, AudioDataRecord]
Convert audio message to audio data and record.
Parameters:
shared_message(SharedMessage): Message with audio payload
Returns: Tuple of (AudioData, AudioDataRecord) or (None, None) if conversion fails
Example:
if message.id == sdk_gen2.MessageType.AUDIO_REC_DATA:
    audio_data, audio_record = converter.to_audio(message)
    if audio_data:
        print(f"Audio samples: {len(audio_data.data)}")
        print(f"Sample rate: {audio_record.sample_rate} Hz")
Location Conversion Methods
to_gnss(shared_message: SharedMessage) -> GpsData
Convert GNSS event message to GPS data.
Parameters:
shared_message(SharedMessage): Message with GNSS payload
Returns: GpsData object or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.GNSS_EVENT:
    gnss_data = converter.to_gnss(message)
    if gnss_data:
        print(f"Location: {gnss_data.latitude}°, {gnss_data.longitude}°")
        print(f"Altitude: {gnss_data.altitude} m")
        print(f"Accuracy: {gnss_data.accuracy} m")
to_phone_location(shared_message: SharedMessage) -> GpsData
Convert phone location message to GPS data.
Parameters:
shared_message(SharedMessage): Message with phone location payload
Returns: GpsData object or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.PHONE_LOCATION_DATA:
    phone_loc = converter.to_phone_location(message)
    if phone_loc:
        print(f"Phone location: {phone_loc.latitude}°, {phone_loc.longitude}°")
Machine Perception Conversion Methods
to_eye_gaze(shared_message: SharedMessage) -> EyeGaze
Convert eye gaze result message to eye gaze data. Requires calibration to be set.
Parameters:
shared_message(SharedMessage): Message with eye gaze payload
Returns: EyeGaze object or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.MP_ET_RESULT:
    eye_gaze = converter.to_eye_gaze(message)
    if eye_gaze:
        print(f"Gaze: yaw={eye_gaze.yaw} rad, pitch={eye_gaze.pitch} rad")
        print(f"Depth: {eye_gaze.depth} m")
to_hand_pose(shared_message: SharedMessage) -> HandTrackingResult
Convert hand tracking result message to hand pose data. Requires calibration to be set.
Parameters:
shared_message(SharedMessage): Message with hand tracking payload
Returns: HandTrackingResult object or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.MP_HT_RESULT:
    hand_pose = converter.to_hand_pose(message)
    if hand_pose:
        if hand_pose.left_hand:
            print("Left hand detected")
        if hand_pose.right_hand:
            print("Right hand detected")
to_vio_result(shared_message: SharedMessage) -> FrontendOutput
Convert VIO result message to VIO data. Requires calibration to be set.
Parameters:
shared_message(SharedMessage): Message with VIO payload
Returns: FrontendOutput object or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.MP_VIO_RESULT:
    vio_data = converter.to_vio_result(message)
    if vio_data:
        pose = vio_data.transform_odometry_bodyimu
        print(f"Position: {pose.translation()}")
        print(f"Rotation: {pose.rotation().log()}")
to_vio_high_freq_pose(shared_message: SharedMessage) -> List[OpenLoopTrajectoryPose]
Convert high-frequency VIO pose message to pose list. Requires calibration to be set.
Parameters:
shared_message(SharedMessage): Message with high-frequency VIO payload
Returns: List of OpenLoopTrajectoryPose objects or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.MP_VIO_HIGH_FREQUENCY_POSE:
    vio_poses = converter.to_vio_high_freq_pose(message)
    if vio_poses:
        for pose in vio_poses:
            translation = pose.transform_odometry_device.translation()
            print(f"Position: {translation}")
Wireless Beacon Conversion Methods
to_bluetooth_beacon(shared_message: SharedMessage) -> List[BluetoothBeaconData]
Convert Bluetooth beacon message to beacon data list.
Parameters:
shared_message(SharedMessage): Message with Bluetooth beacon payload
Returns: List of BluetoothBeaconData objects or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.BLE_BEACONS:
    ble_beacons = converter.to_bluetooth_beacon(message)
    if ble_beacons:
        for beacon in ble_beacons:
            print(f"ID: {beacon.unique_id}, RSSI: {beacon.rssi} dBm")
to_wifi_beacon(shared_message: SharedMessage) -> List[WifiBeaconData]
Convert WiFi beacon message to beacon data list.
Parameters:
shared_message(SharedMessage): Message with WiFi beacon payload
Returns: List of WifiBeaconData objects or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.WIFI_BEACONS:
    wifi_beacons = converter.to_wifi_beacon(message)
    if wifi_beacons:
        for beacon in wifi_beacons:
            print(f"SSID: {beacon.ssid}, RSSI: {beacon.rssi} dBm")
Other Sensor Conversion Methods
to_ppg(shared_message: SharedMessage) -> PpgData
Convert PPG (photoplethysmogram) event message to PPG data.
Parameters:
shared_message(SharedMessage): Message with PPG payload
Returns: PpgData object or None if conversion fails
Example:
if message.id == sdk_gen2.MessageType.PPG_EVENT:
    ppg_data = converter.to_ppg(message)
    if ppg_data:
        print(f"PPG value: {ppg_data.value}")