Tennis

In this project we'll implement agents that learn to play Unity Tennis using several Deep RL algorithms. Unity ML-Agents is a toolkit for developing and comparing reinforcement learning algorithms. We'll be using the PyTorch library for the implementation.

Libraries Used

  • Unity ML-Agents
  • PyTorch
  • NumPy
  • Matplotlib

About the Environment

  • Set-up: Two-player game where agents control rackets to hit a ball over the net.
  • Goal: The agents must hit the ball so that the opponent cannot hit a valid return.
  • Agents: The environment contains two agents with the same Behavior Parameters. After training, you can set the Behavior Type to Heuristic Only on one of the agents' Behavior Parameters to play against your trained model.
  • Agent Reward Function (independent):
    • +1.0 to the agent that wins the point. An agent wins a point by preventing the opponent from hitting a valid return.
    • -1.0 to the agent that loses the point.
  • Behavior Parameters (see the loading sketch after this list):
    • Vector Observation space: 9 variables corresponding to the position, velocity, and orientation of the ball and racket.
    • Vector Action space: (Continuous) Size 3, corresponding to movement toward or away from the net, jumping, and rotation.
    • Visual Observations: None
  • Float Properties: Three
    • gravity: Magnitude of gravity
      • Default: 9.81
      • Recommended Minimum: 6
      • Recommended Maximum: 20
    • scale: Specifies the scale of the ball in the three dimensions (equal across all three)
      • Default: 0.5
      • Recommended Minimum: 0.2
      • Recommended Maximum: 5
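
To make the setup above concrete, here is a minimal sketch of loading the environment from Python and checking the spaces and float properties described in this list. It assumes a recent mlagents_envs release (the Python API has changed across ML-Agents versions) and a hypothetical build path ./Tennis:

```python
import numpy as np
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.base_env import ActionTuple
from mlagents_envs.side_channel.environment_parameters_channel import EnvironmentParametersChannel

# Optional side channel for the float properties listed above (e.g. gravity).
params = EnvironmentParametersChannel()
env = UnityEnvironment(file_name="./Tennis", side_channels=[params])  # hypothetical build path
params.set_float_parameter("gravity", 9.81)  # default value from the list above

env.reset()
behavior_name = list(env.behavior_specs)[0]  # both agents share a single behavior
spec = env.behavior_specs[behavior_name]
print("Observation shapes:", [o.shape for o in spec.observation_specs])
print("Continuous action size:", spec.action_spec.continuous_size)  # expected: 3

# Play one episode with random actions for both agents to check the wiring.
decision_steps, terminal_steps = env.get_steps(behavior_name)
while len(terminal_steps) == 0:
    actions = np.random.uniform(-1.0, 1.0,
                                size=(len(decision_steps), spec.action_spec.continuous_size))
    env.set_actions(behavior_name, ActionTuple(continuous=actions))
    env.step()
    decision_steps, terminal_steps = env.get_steps(behavior_name)

env.close()
```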

Deep RL Agents
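
As a rough illustration of the kind of PyTorch model an agent for this task might use (a sketch only, not necessarily the exact architecture or hyperparameters used in the notebooks), an actor network can map the 9-variable observation to the 3 continuous actions, with a tanh output to keep each action in [-1, 1]:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: observation (9) -> continuous action (3) in [-1, 1]."""
    def __init__(self, obs_size=9, action_size=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_size),
            nn.Tanh(),  # bound every action dimension to [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

# Quick shape check with a dummy batch of observations.
actor = Actor()
print(actor(torch.randn(4, 9)).shape)  # torch.Size([4, 3])
```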

Any questions?

If you have any questions, feel free to ask me:

Don't forget to follow me on Twitter, GitHub, and Medium to be alerted to new articles as I publish them.

How to help

  • Clap on articles: Clapping on Medium means that you really like my articles. The more claps an article gets, the more it is shared, which helps make it more visible to the deep learning community.
  • Improve the notebooks: if you find a bug or a better implementation, you can send a pull request.