Package version: 3.1.0. I am using the code as outlined here: https://maennchen.dev/ZipStream-PHP/guide/StreamOutput.html

We have large files (1 GB+ each) in our S3 buckets that we want to zip. My understanding is that when using Stream Output, the archive is written straight to the output stream and no temporary files are created on the server. Here is what my code looks like:

```php
<?php

use Aws\S3\S3Client;
use ZipStream\CompressionMethod;
use ZipStream\ZipStream;

$bucket = 'bucket-name';
$s3keys = [
    'sample/file1.mp4',
    'sample/file2.mp4',
    'sample/file3.mp4',
    // .....
];

$client = new S3Client([/* ..... */]);
$client->registerStreamWrapper();

// Open the target zip directly on S3 through the registered s3:// stream wrapper.
$zipFile = fopen("s3://$bucket/example.zip", 'w');

$zipStream = new ZipStream(
    outputStream: $zipFile,
    defaultEnableZeroHeader: true,
    defaultCompressionMethod: CompressionMethod::STORE,
);

foreach ($s3keys as $key) {
    $fileName = basename($key);
    $s3path = "s3://" . $bucket . "/" . $key;

    if ($streamRead = fopen($s3path, 'r')) {
        $zipStream->addFileFromStream(
            fileName: $fileName,
            stream: $streamRead,
        );
        fclose($streamRead);
    }
}

$zipStream->finish();
fclose($zipFile);
```

When I run this code, contrary to my understanding above, it creates a temporary file on the server while the zip is being written.
So basically, what I am trying to do here is create a zip file from multiple remote files without downloading them to the server, writing the zip directly to the cloud with no temp files on the server. Please let me know, or if someone can point me in the right direction, that would be much appreciated. Thanks.
I believe that this has to do with the S3 lib, since ZipStream doesn't use any temp files. Can you try streaming the output somewhere else to make sure this is correct?
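For example, a minimal way to check this (a sketch that assumes the same `$bucket`, `$s3keys`, and `$client` setup as in the question) is to point the output stream at a local file, so that only the input side still goes through the S3 stream wrapper:

```php
<?php

use ZipStream\CompressionMethod;
use ZipStream\ZipStream;

// Hypothetical test target: a local file instead of the s3:// wrapper.
// If no extra temp files appear while this runs, the buffering seen earlier
// comes from the S3 write stream, not from ZipStream itself.
$zipFile = fopen(__DIR__ . '/example-test.zip', 'w');

$zipStream = new ZipStream(
    outputStream: $zipFile,
    defaultEnableZeroHeader: true,
    defaultCompressionMethod: CompressionMethod::STORE,
);

foreach ($s3keys as $key) {
    // Inputs are still read from S3 via the registered stream wrapper.
    if ($streamRead = fopen("s3://$bucket/$key", 'r')) {
        $zipStream->addFileFromStream(
            fileName: basename($key),
            stream: $streamRead,
        );
        fclose($streamRead);
    }
}

$zipStream->finish();
fclose($zipFile);
```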
Probably caused by this:
https://github.com/aws/aws-sdk-php/blob/596534c0627d8b38597061341e99b460437d1a16/src/S3/StreamWrapper.php#L725

Can you try opening the target file using append (`a`) mode instead of write (`w`) mode? I believe that should produce a different behavior in the Amazon library.
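A minimal sketch of that change, keeping the rest of the original snippet the same (whether the SDK actually avoids local buffering in append mode is something you would need to verify):

```php
// Open the target object in append ('a') mode instead of write ('w') mode;
// everything else from the original snippet stays unchanged.
$zipFile = fopen("s3://$bucket/example.zip", 'a');

$zipStream = new ZipStream(
    outputStream: $zipFile,
    defaultEnableZeroHeader: true,
    defaultCompressionMethod: CompressionMethod::STORE,
);
```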