
Experiment

José Carlos edited this page Oct 22, 2024 · 3 revisions

Warning

The PLM can be configured to emit a pulse through the Trigger 2 (T2) output line at each hologram transition, which we use to trigger our camera. Because we use older firmware that sets the PLM mirror heights to zero for 90 µs between holograms, we've added a 100 µs trigger delay on the camera.
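For reference, a trigger delay like this is typically set through the camera's GenICam trigger-delay feature. A minimal sketch using pypylon, assuming a GigE ace camera that exposes `TriggerDelayAbs` in microseconds (feature names vary between camera families, so check yours in the Pylon Viewer):

```python
from pypylon import pylon

# Connect to the first camera found (requires pypylon and an attached camera).
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Delay the exposure start by 100 µs so it falls after the ~90 µs window in
# which the older firmware holds the mirror heights at zero.
camera.TriggerDelayAbs.SetValue(100.0)  # microseconds; feature name assumed for GigE ace

camera.Close()
```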

While precise frame pacing is crucial, it alone is not sufficient to run an experiment: it is equally important to identify when the first hologram appears on the PLM screen, so that camera images are saved in the correct sequence.

To achieve this, we use the camera output (triggered by the PLM) to determine the start of the hologram sequence. While monitoring the camera, we display a dedicated set of holograms within a single frame (comprising 24 holograms) that clearly marks the sequence's beginning. This initial frame employs a simple carrier tilt to direct light towards the camera, saturating the image; in another experiment, we used a lens phase that focuses and defocuses light at the camera.
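One simple way to detect such a synchronization frame is to test whether the image is (nearly) saturated. A minimal sketch in NumPy; the saturation value and pixel fraction are illustrative assumptions to be tuned to your camera's bit depth and beam profile:

```python
import numpy as np

def is_sync_frame(image, saturation_value=255, frac=0.9):
    """Return True if at least `frac` of the pixels are saturated.

    `saturation_value` and `frac` are illustrative defaults, not values
    from the actual experiment.
    """
    return np.mean(image >= saturation_value) >= frac

# A fully bright frame is flagged; a dark frame is not.
bright = np.full((480, 640), 255, dtype=np.uint8)
dark = np.zeros((480, 640), dtype=np.uint8)
```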

The StartSequence command resets the PLM's frame_index to 0 and initiates a display sequence that cycles through each frame in the predetermined order. Since the exact timing of the first hologram's display in the first frame cannot be guaranteed (despite correct frame pacing), we begin with a synchronization frame.
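The bookkeeping implied here can be modelled as a mapping from a global hologram index to a (frame, slot) pair, with 24 holograms packed per frame and the frame order wrapping around. This is an illustrative model of the cycling described above, not plmctrl's own code:

```python
HOLOGRAMS_PER_FRAME = 24  # each PLM frame packs 24 holograms

def locate_hologram(n, num_frames):
    """Map global hologram index `n` to (frame_index, slot_within_frame).

    frame_index starts at 0 (the synchronization frame, after a
    StartSequence-style reset) and wraps after `num_frames` frames.
    """
    frame_index = (n // HOLOGRAMS_PER_FRAME) % num_frames
    slot = n % HOLOGRAMS_PER_FRAME
    return frame_index, slot
```

For example, with a 4-frame sequence, hologram 0 lands in frame 0, slot 0; hologram 25 lands in frame 1, slot 1; and hologram 96 wraps back to frame 0, slot 0.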

Our setup uses a Basler acA640-300gm camera. We use the Pylon SDK to register a callback function that executes each time the camera is triggered. This callback monitors the camera image for the synchronization frame and only starts saving images once synchronization has been detected.
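The callback logic amounts to a small state machine: discard frames until the synchronization frame is seen, then save every subsequent frame in arrival order. A minimal model with hypothetical names (the real callback is registered through Pylon's image-event handler mechanism):

```python
class SyncTriggeredSaver:
    """Illustrative model of the camera-callback routine described above."""

    def __init__(self, is_sync_frame):
        self.is_sync_frame = is_sync_frame  # predicate: image -> bool
        self.synchronized = False
        self.saved = []

    def on_image_grabbed(self, image):
        if not self.synchronized:
            # Arm on the first synchronization frame, but don't save it.
            self.synchronized = self.is_sync_frame(image)
            return
        self.saved.append(image)  # stand-in for writing the image to disk

# Usage with placeholder "images": frames before the sync marker are dropped.
saver = SyncTriggeredSaver(lambda img: img == "SYNC")
for img in ["noise", "SYNC", "h0", "h1"]:
    saver.on_image_grabbed(img)
```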

In future versions of plmctrl, we aim to offer a software-only solution that tells the user when the first frame is being displayed, without relying on the camera output for this kind of synchronization. Frame pacing is tricky: it is very difficult to guarantee that a particular line of code commanding the GPU to perform an action (such as presenting a frame) actually executes at the intended time. Modern GPUs consume commands from a queue at their own pace; our calls only enqueue work. Lower-level graphics APIs such as Vulkan or DirectX offer fuller control of that queue. Our main branch already uses DirectX 11, but this software-only solution is not yet implemented.
