This folder contains some example scripts for running SenseAct in various environments.
Run `basic/sim_double_pendulum.py` to start a simple random agent on the simulated Double Inverted Pendulum environment for 1000 episodes. In addition to the standard SenseAct requirements, this environment requires MuJoCo 1.50. When you run the example, a mujoco_py window should pop up showing a 3D rendering of the pendulum.
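A random agent generally follows a standard Gym-style loop of reset, sample, step. The sketch below shows the general shape of such a loop; `StubEnv` is an illustrative stand-in, not SenseAct's actual environment class:

```python
import random

class StubEnv:
    """Illustrative stand-in for a Gym-style environment such as the
    simulated double inverted pendulum; not SenseAct's actual API."""
    def __init__(self, episode_len=5):
        self.episode_len = episode_len
        self._t = 0

    def reset(self):
        self._t = 0
        return [0.0, 0.0]  # dummy observation

    def step(self, action):
        self._t += 1
        done = self._t >= self.episode_len
        return [0.0, 0.0], random.random(), done, {}  # obs, reward, done, info

def run_random_agent(env, episodes):
    """Run the given number of episodes, sampling a random action each step."""
    returns = []
    for _ in range(episodes):
        env.reset()
        done, total = False, 0.0
        while not done:
            action = random.uniform(-1.0, 1.0)  # random action in [-1, 1]
            _, reward, done, _ = env.step(action)
            total += reward
        returns.append(total)
    return returns

returns = run_random_agent(StubEnv(), episodes=3)
print(len(returns))  # one return per episode
```

With a real SenseAct environment in place of the stub, the same loop structure applies, with the episode count raised to 1000 as in the example script.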
We also provide examples that use the OpenAI Baselines implementations of Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO) for actual learning. To run these examples, first install the Baselines prerequisites listed in the OpenAI Baselines README. At present, SenseAct depends on an older version of Baselines, so we maintain a specific fork for this dependency; this fork also removes the MuJoCo dependency. Note that the examples do not work with TensorFlow 2.0+:
```shell
pip install tensorflow==1.15
git clone https://github.com/csherstan/baselines.git
cd baselines
pip install -e .
```
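Because the examples break under TensorFlow 2.0+, it can help to fail fast on an unsupported version before training starts. A minimal sketch of such a guard, with an illustrative helper name (not part of SenseAct):

```python
def tf_version_supported(version):
    """Return True if a TensorFlow version string is pre-2.0.
    The Baselines-based examples here require TensorFlow 1.x."""
    major = int(version.split(".")[0])
    return major < 2

# Illustrative version strings; in practice pass tensorflow.__version__.
print(tf_version_supported("1.15.0"))  # True: supported
print(tf_version_supported("2.4.1"))   # False: too new for these examples
```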
Each example also requires its corresponding hardware, with the exception of `advanced/sim_double_pendulum.py`, which requires MuJoCo 1.50.
Below is a list of the examples provided with a learning agent. Each is marked by the difficulty of its hardware and physical environment setup.
- (Simple) Double Inverted Pendulum environment: `advanced/sim_double_pendulum.py`
- (Simple) DXL Reacher environment: `advanced/dxl_reacher.py`
- (Simple) DXL Tracker environment: `advanced/dxl_tracker.py`
- (Moderate) UR5 Reacher 2D environment: `advanced/ur5_reacher.py`
- (Moderate) UR5 Reacher 6D environment: `advanced/ur5_reacher_6D.py`
- (Complex) Create2 Mover environment: `advanced/create2_mover.py`
- (Complex) Create2 Docker environment: `advanced/create2_docker.py`