This is a feature request I got from a colleague:

Problem description:
When selecting a window with rounded corners (for example the iOS Simulator), the recorded video currently includes parts of the desktop background or of any underlying window. This happens because the frames of the recorded video are not screenshots of the window but screenshots of the screen at the given coordinates. There is nothing we can do about that when the user selects a custom crop frame. However, when a window is selected and recorded without modifications to the crop frame, we know for sure that only the selected window should be recorded.
Proposed Solution
There is a method called CGWindowListCreateImage which we could use to create snapshots of the selected window. This would produce frames with transparent (alpha) pixels around the rounded corners, which addresses the problem stated above. We would create a timer that captures such an image every 1/fps seconds and adds it to a queue. When the recording is finished, this queue of images is exported to a GIF/movie. However, doing so introduces multiple implementation problems:
We need to implement stitching these frames into a video file ourselves, since the user might want a resulting video file next to the GIF. This is currently done by AVCaptureSession. I am not sure it would be possible to still use AVCaptureSession; however, this might just be me not finding the correct API to do so.
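One way to stitch captured frames into a movie without AVCaptureSession is AVFoundation's AVAssetWriter, which accepts individual pixel buffers with explicit presentation times. The sketch below shows this under stated assumptions: `frames`, `fps`, and the output `size` are hypothetical names standing in for whatever the capture code produces.

```swift
import AVFoundation
import CoreGraphics
import Foundation

/// Sketch: write a sequence of CGImages to a .mov via AVAssetWriter.
/// `frames`, `fps`, and `size` are assumed inputs, not existing Capture APIs.
func writeMovie(frames: [CGImage], fps: Int32, size: CGSize, to url: URL) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for (index, frame) in frames.enumerated() {
        // Presentation time: frame index over the timescale gives 1/fps spacing.
        let time = CMTime(value: CMTimeValue(index), timescale: fps)
        while !input.isReadyForMoreMediaData { usleep(10_000) }

        // Render the CGImage into a pixel buffer the adaptor can append.
        var buffer: CVPixelBuffer?
        CVPixelBufferCreate(nil, frame.width, frame.height,
                            kCVPixelFormatType_32ARGB, nil, &buffer)
        if let buffer = buffer {
            CVPixelBufferLockBaseAddress(buffer, [])
            let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                    width: frame.width, height: frame.height,
                                    bitsPerComponent: 8,
                                    bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                    space: CGColorSpaceCreateDeviceRGB(),
                                    bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
            context?.draw(frame, in: CGRect(x: 0, y: 0,
                                            width: frame.width, height: frame.height))
            CVPixelBufferUnlockBaseAddress(buffer, [])
            adaptor.append(buffer, withPresentationTime: time)
        }
    }
    input.markAsFinished()
    writer.finishWriting {} // completion handler is asynchronous
}
```

This trades AVCaptureSession's real-time pipeline for an offline encode after recording stops, so memory use for the queued frames would need watching.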
The separate frames are quite large, which leads to a very large GIF file (without making quality compromises). Again, this might be addressed by some high-level Apple API like AVCaptureSession, but currently I only see how to capture input from a CGDirectDisplayID, not from a CGWindowID.
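For the GIF side, ImageIO's CGImageDestination can assemble the queued frames directly, and downscaling frames before this step is the main lever on file size. A sketch, assuming the same hypothetical `frames` and `fps` names as above:

```swift
import CoreServices
import Foundation
import ImageIO

/// Sketch: export captured CGImages as an animated GIF via ImageIO.
/// `frames` and `fps` are assumed inputs from the capture code.
func writeGIF(frames: [CGImage], fps: Double, to url: URL) {
    // Per-frame delay controls playback speed.
    let frameProperties = [kCGImagePropertyGIFDictionary as String:
                               [kCGImagePropertyGIFDelayTime as String: 1.0 / fps]]
    // Loop count 0 means loop forever.
    let gifProperties = [kCGImagePropertyGIFDictionary as String:
                             [kCGImagePropertyGIFLoopCount as String: 0]]

    guard let destination = CGImageDestinationCreateWithURL(url as CFURL,
                                                            kUTTypeGIF,
                                                            frames.count,
                                                            nil) else { return }
    CGImageDestinationSetProperties(destination, gifProperties as CFDictionary)
    for frame in frames {
        CGImageDestinationAddImage(destination, frame, frameProperties as CFDictionary)
    }
    CGImageDestinationFinalize(destination)
}
```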
So technically this is possible, but we need to make sure that this enhancement will be worth the effort.
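For reference, the per-window capture loop described in the proposal could be sketched as follows; `windowID` is assumed to come from the existing window-selection code, and the frame queue is held in memory.

```swift
import CoreGraphics
import Foundation

/// Sketch: snapshot a single window every 1/fps seconds.
/// `windowID` is an assumed input from the existing selection code.
final class WindowFrameCapturer {
    private var frames: [CGImage] = []
    private var timer: Timer?

    func start(windowID: CGWindowID, fps: Double) {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0 / fps,
                                     repeats: true) { [weak self] _ in
            // .boundsIgnoreFraming crops to the window's content bounds;
            // pixels outside the rounded corners come back with alpha = 0,
            // which is exactly what avoids recording the desktop behind them.
            if let image = CGWindowListCreateImage(.null,
                                                   .optionIncludingWindow,
                                                   windowID,
                                                   [.boundsIgnoreFraming, .bestResolution]) {
                self?.frames.append(image)
            }
        }
    }

    func stop() -> [CGImage] {
        timer?.invalidate()
        return frames
    }
}
```

Note that a Timer gives no hard real-time guarantee, so actual frame timing may drift from the nominal 1/fps interval under load.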
Currently I have a working prototype implementing the solution stated above. However, before integrating this into Capture, the problems above would need to be addressed. Any thoughts on this are appreciated.