EitanTreiger/Ping-Pong
Our Goal

  1. Given video, track the ping pong ball across the screen.
  2. Build a robust algorithm capable of overcoming motion blur and adverse lighting conditions.
  3. From just one camera, estimate the ball's position in 3D space to allow for analysis of ball velocity and shot placement.
  4. Provide replay features and statistics.

(SwingVision provides similar functionality for tennis; we're building the equivalent for ping pong.)

Ball Tracking Pipeline

  1. Read a frame from the camera

  2. Convert the frame to the CIELAB color space

  3. Compute the frame difference between the current and previous frames

  4. Generate location proposals

  5. Filter the location proposals

Demo video: PingPongDemoVideo.mp4
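Steps 3–5 of the pipeline can be sketched as follows. This is a minimal NumPy sketch, not the repo's code: it differences raw color channels rather than performing the CIELAB conversion (which would typically use OpenCV's cv2.cvtColor), and the function name and threshold are illustrative assumptions.

```python
import numpy as np

def propose_ball_location(prev_frame, curr_frame, thresh=30):
    """Frame-difference location proposal (illustrative sketch).

    Frames are HxWx3 uint8 arrays. The real pipeline differences
    CIELAB images; here we difference raw channels to keep the
    sketch dependency-free.
    """
    # Per-pixel sum of absolute channel differences (int16 avoids uint8 wraparound)
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16)).sum(axis=2)
    mask = diff > thresh
    if not mask.any():
        return None  # no motion detected
    ys, xs = np.nonzero(mask)
    # Centroid of moving pixels as a single location proposal
    return float(xs.mean()), float(ys.mean())
```

A real implementation would extract multiple connected components as separate proposals and filter them by size and shape (step 5); the centroid here stands in for that.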

Distance Calculation

To determine the position of the ball in three-dimensional space, we measure the height of the ball in pixels, then use the known size of a ping pong ball along with the focal length of the camera in the standard pinhole-camera relation:

distance = (focal length × ball diameter) / ball height in pixels
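Under the usual pinhole-camera model, the calculation is a one-liner; this sketch assumes a regulation 40 mm ball and a focal length already expressed in pixel units:

```python
BALL_DIAMETER_MM = 40.0  # regulation ping pong ball diameter

def distance_mm(ball_height_px, focal_length_px):
    """Pinhole-camera distance estimate from the ball's apparent size.

    focal_length_px is the focal length in pixel units
    (focal length in mm divided by the sensor's pixel pitch).
    """
    return focal_length_px * BALL_DIAMETER_MM / ball_height_px
```

For example, a ball 20 px tall seen through a 1000 px focal length comes out at 2000 mm, i.e. 2 m.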

While this theoretically allows us to calculate the distance to the ball to within a couple of inches, it also introduces a lot of noise into the data as a result of motion blur and variation in tracking quality. As a result, we need to apply robust smoothing and filtering in order to use the data effectively.

Data Processing

First, points whose distances lie more than 1.5 standard deviations from the mean are thrown out, as these are generally erroneous detections. The remaining data is then filtered with a Hampel filter. Finally, the data is smoothed.
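A minimal sketch of the first two stages, assuming the distance series is a 1-D NumPy array (function names and the window size are illustrative, not the repo's actual code):

```python
import numpy as np

def remove_outliers(d, n_std=1.5):
    """Drop points more than n_std standard deviations from the mean."""
    d = np.asarray(d, dtype=float)
    keep = np.abs(d - d.mean()) <= n_std * d.std()
    return d[keep]

def hampel(d, window=5, n_sigmas=3.0):
    """Replace points far from the local median with that median."""
    d = np.asarray(d, dtype=float).copy()
    k = 1.4826  # scale factor: MAD -> std under Gaussian noise
    for i in range(len(d)):
        lo, hi = max(0, i - window), min(len(d), i + window + 1)
        med = np.median(d[lo:hi])
        mad = k * np.median(np.abs(d[lo:hi] - med))
        # mad == 0 means a constant neighborhood: any deviation is an outlier
        if abs(d[i] - med) > n_sigmas * mad:
            d[i] = med
    return d
```

The final smoothing pass could be anything from a moving average to a Savitzky–Golay filter; the Hampel filter handles isolated spikes that a plain moving average would smear into neighboring points.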


Action Segmentation

Looking at the 2D data, we can identify when the ball flips direction on the x-axis (representing a hit) or changes direction on the y-axis (representing a bounce). Red and green dots show where hits took place.

Curves are then fitted to the remaining data to extract speeds from the hits (not the bounces).
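The direction-flip test for hits can be sketched as follows; this hypothetical find_hits assumes a smoothed series of x-coordinates (one per frame) and is a simplified stand-in for the repo's segmentation:

```python
import numpy as np

def find_hits(xs):
    """Indices where the x-velocity flips sign (ball struck by a paddle)."""
    vx = np.diff(np.asarray(xs, dtype=float))  # per-frame x-velocity
    return [i + 1 for i in range(len(vx) - 1)
            if vx[i] != 0 and vx[i + 1] != 0
            and np.sign(vx[i]) != np.sign(vx[i + 1])]
```

Bounces would use the same test on the y-coordinate series; zero-velocity frames are skipped so that a briefly stationary ball is not double-counted.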
