PAPRLE
Plug-And-Play Robotic Limb Environment
A Modular Ecosystem for Robotic Limbs

PAPRLE is a plug-and-play robotic limb environment
for flexible configuration and control of robotic limbs across applications.

Abstract

We introduce PAPRLE (Plug-And-Play Robotic Limb Environment), a modular ecosystem that enables flexible placement and control of robotic limbs. With PAPRLE, a user can change the arrangement of the robotic limbs, and control them using a variety of input devices, including puppeteers, gaming controllers, and VR-based interfaces. This versatility supports a wide range of teleoperation scenarios and promotes adaptability to different task requirements.

To further enhance configurability, we introduce a pluggable puppeteer device that can be easily mounted and adapted to match the target robot configurations. PAPRLE supports bilateral teleoperation through these puppeteer devices, agnostic to the type or configuration of the follower robot. By supporting both joint-space and task-space control, the system provides real-time force feedback, improving teleoperation fidelity and the user's awareness of physical interaction. The modular design of PAPRLE facilitates novel spatial arrangements of the limbs and enables scalable data collection, thereby advancing research in embodied AI and learning-based control. We validate PAPRLE in various real-world settings, demonstrating its versatility across diverse combinations of leader devices and follower robots.

Video

Plug-And-Play

PAPRLE (Plug-and-Play Robotic Limb Environment) is a modular platform designed to integrate a diverse set of robotic limbs ("followers") with control devices ("leaders").

PAPRAS: Pluggable Robotic Arm


At the core of the follower side is PAPRAS, a modular robotic limb system. The PAPRAS arm can be mounted in various places and configured in diverse shapes. PAPRAS supports switching between different limb configurations without modifying the control interface, enabling users to test diverse embodiments under a unified system. For more information about PAPRAS, please refer to this site.

Pluggable Puppeteer


On the leader side, PAPRLE supports diverse types of control devices. One example is a puppeteer device, typically a scaled-down version of the follower robot. We introduce a pluggable puppeteer device that shares the same mount interface as PAPRAS. This allows users to easily swap the puppeteer device depending on the task or the desired mapping between leader and follower.

PAPRLE

Mix and Match

With PAPRLE, the puppeteer device does not need to have the same joint configuration as the follower robot. The system is designed to accommodate differences in kinematic structure. Furthermore, PAPRLE can operate with leader devices that provide no joint information at all, such as VR or gaming-controller interfaces. Users just need to "plug in" the leader device of their choice and pair it with the desired follower robot.
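The pairing logic can be pictured as follows. This is a minimal sketch, not PAPRLE's actual API: the `Leader` protocol, `joint_positions`, `ee_pose`, and `select_control_mode` names are all illustrative. The key idea is that a leader which reports a matching joint state can drive joint-space control, while a leader that only exposes an end-effector pose (e.g. a VR or gamepad interface) falls back to task-space control.

```python
from typing import Optional, Protocol
import numpy as np

class Leader(Protocol):
    # Hypothetical leader interface; names are illustrative only.
    def joint_positions(self) -> Optional[np.ndarray]: ...
    def ee_pose(self) -> np.ndarray: ...

def select_control_mode(leader: Leader, follower_dof: int) -> str:
    """Use joint-space control only when the leader reports a matching joint state."""
    q = leader.joint_positions()
    if q is not None and len(q) == follower_dof:
        return "joint_space"
    # VR / gaming-controller leaders expose only an end-effector pose.
    return "task_space"
```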

This plug-and-play capability also extends to the follower side. With PAPRAS, diverse limb arrangements can be easily configured and controlled through PAPRLE. In addition, the system supports commercial humanoids and robotic arms, further broadening its applicability.

Teleoperation

Below is an overview of the teleoperation system. You can refer to our GitHub repository for more details.

Joint-Space Control or Task-Space Control

If the leader and follower have the same kinematics, we use joint-space control with one-to-one joint mapping. When they differ, we use task-space control by mapping the leader’s end-effector pose to the follower’s.
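The two mappings can be sketched as below. This is a simplified planar example, not PAPRLE's actual solver: the forward-kinematics model, the finite-difference Jacobian, and the damped iterative IK are all assumptions chosen to keep the sketch self-contained. Joint-space control copies the leader's joint angles directly; task-space control takes the leader's end-effector position and solves for follower joint angles that reach it.

```python
import numpy as np

def fk_planar(q, link_lengths):
    """Forward kinematics of a planar serial arm: end-effector (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(link_lengths * np.cos(angles)),
                     np.sum(link_lengths * np.sin(angles))])

def joint_space_command(q_leader):
    # Same kinematics: one-to-one joint mapping.
    return q_leader.copy()

def task_space_command(q_leader, leader_links, follower_links, q_init,
                       iters=200, alpha=0.5):
    # Different kinematics: track the leader's end-effector position
    # with an iterative Jacobian-pseudoinverse IK step (illustrative only).
    target = fk_planar(q_leader, leader_links)
    q = q_init.astype(float).copy()
    n, eps = len(q), 1e-6
    for _ in range(iters):
        err = target - fk_planar(q, follower_links)
        J = np.zeros((2, n))
        for i in range(n):  # finite-difference Jacobian
            dq = np.zeros(n); dq[i] = eps
            J[:, i] = (fk_planar(q + dq, follower_links) -
                       fk_planar(q, follower_links)) / eps
        q += alpha * np.linalg.pinv(J) @ err
    return q
```

Here a 2-DOF leader can drive a 3-DOF follower: only the end-effector target is shared, and the follower's extra joint is resolved by the IK step.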

Joint Space

Follower: PAPRAS, Leader: PAPRAS Puppeteer

Task Space

Follower: PAPRAS, Leader: UR5 Puppeteer


Force Feedback

For puppeteer devices, PAPRLE supports force feedback at each joint of the leader. Feedback can be applied whether the control is based on joint-space mapping or end-effector pose mapping, improving user awareness and interaction quality. Two types of feedback are supported:

Intrinsic Feedback

Without Intrinsic Feedback
With Intrinsic Feedback

The user can specify a base pose for the leader device, and force feedback is generated based on deviations from this pose. This acts as a bias term that helps maintain the leader in a stable configuration. It prevents issues such as a dropped elbow or excessive joint extension. The feedback also provides intuitive cues that guide the user to stay within a comfortable and effective workspace.
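This bias term can be sketched as a clipped spring-damper pulling the leader toward its base pose. The function name, gains, and torque limit below are illustrative assumptions, not PAPRLE's implementation.

```python
import numpy as np

def intrinsic_feedback(q, q_base, kp=2.0, kd=0.1, qdot=None, tau_max=1.0):
    """Per-joint torque pulling the leader toward a user-specified base pose.

    Acts as a bias that keeps the leader in a stable configuration
    (e.g. prevents a dropped elbow or excessive joint extension).
    """
    tau = -kp * (q - q_base)          # spring toward the base pose
    if qdot is not None:
        tau -= kd * qdot              # optional damping
    return np.clip(tau, -tau_max, tau_max)  # respect actuator limits
```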

Extrinsic Feedback

Extrinsic feedback is generated based on the difference between the follower's pose and the leader's intended pose. When the follower cannot fully track the leader due to obstacles, motion limits, or safety constraints, force feedback is applied to the leader to reflect this discrepancy. This provides the user with physical cues about the follower’s state, such as contact with the environment or limited mobility.

When the leader and the follower have the same kinematics, the extrinsic feedback is calculated in joint space.

When the leader and follower have different kinematics, the extrinsic feedback is first computed in task space and then mapped to joint space using the Jacobian.

When interacting with an object, the operator can also feel feedback on the gripper. This feedback is generated based on the discrepancy between the target pose and the actual gripper pose, allowing the operator to adjust accordingly.
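The three extrinsic-feedback cases above can be sketched as follows. This is a minimal illustration under assumed proportional gains; the function names and signatures are hypothetical. With matching kinematics the tracking error is used directly in joint space; with differing kinematics the task-space error is mapped to leader joint torques through the leader Jacobian transpose; and gripper feedback reflects the gap between commanded and actual gripper width.

```python
import numpy as np

def extrinsic_feedback_joint(q_leader, q_follower, kp=1.5):
    # Same kinematics: torque from the joint-space tracking error,
    # pulling the leader back toward the follower's actual configuration.
    return kp * (q_follower - q_leader)

def extrinsic_feedback_task(pose_err, J_leader, kp=1.5):
    # Different kinematics: task-space error (follower pose minus leader's
    # intended pose) mapped to leader joint torques via the Jacobian transpose.
    return J_leader.T @ (kp * pose_err)

def gripper_feedback(target_width, actual_width, kp=3.0):
    # Felt when the gripper cannot reach its target width,
    # e.g. because it is closing on a grasped object.
    return kp * (actual_width - target_width)
```

For example, if the follower is blocked by an obstacle while the leader keeps moving, the joint-space error grows and the resulting torque physically resists the operator's motion.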



Same Follower, Different Leader

Examples of controlling the same follower with different puppeteers, all with force feedback.

PAPRLE also supports non-puppeteer leader devices, such as gaming controllers, VR interfaces, or pre-recorded motions.

Gaming Controllers
Apple VisionPro
Pre-collected data from UMI interface

Data Collection

PAPRLE supports scalable data collection across different limb configurations and leader devices. The modular design allows for easy swapping of limbs and leaders, enabling rapid experimentation and diverse data collection setups.

Example of the collected dataset:

Supported Models

Below is the list of supported followers and leaders in PAPRLE.
☑️ indicates the hardware has been tested with PAPRLE and is included in the codebase.
🔲 indicates it has been tested in simulation. We plan to test PAPRLE with more hardware in the future.

Gallery

*All videos are played at 1x speed.