Commit

Sync alphalib
kvz committed Jan 14, 2025
1 parent 8e1ff50 commit da7be78
Showing 11 changed files with 100 additions and 18 deletions.
4 changes: 2 additions & 2 deletions src/alphalib/types/robots/_instructions-primitives.ts
@@ -387,8 +387,8 @@ export const bitrateSchema = z.number().int().min(1)
export const sampleRateSchema = z.number().int().min(1)

export const optimize_priority = z
.enum(['compression-ratio', 'conversation-speed'])
.default('compression-ratio')
.enum(['compression-ratio', 'conversion-speed'])
.default('conversion-speed')

export const imagemagickStackVersionSchema = z.enum(['v2.0.10', 'v3.0.1']).default('v2.0.10')
export type ImagemagickStackVersion = z.infer<typeof imagemagickStackVersionSchema>
4 changes: 3 additions & 1 deletion src/alphalib/types/robots/file-hash.ts
@@ -34,11 +34,13 @@ export const meta: RobotMeta = {

export const robotFileHashInstructionsSchema = z
.object({
robot: z.literal('/file/hash').describe(`
This <dfn>Robot</dfn> allows you to hash any file as part of the <dfn>Assembly</dfn> execution process. This can be useful for verifying the integrity of a file for example.
`),
result: z
.boolean()
.optional()
.describe(`Whether the results of this Step should be present in the Assembly Status JSON`),
robot: z.literal('/file/hash'),
use: useParamSchema,
algorithm: z
.enum(['b2', 'md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512'])
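The `algorithm` enum above mostly mirrors standard digest names; a hedged Node.js sketch of what such hashing amounts to (assumption: the Robot's values map to Node's crypto identifiers, except `b2`, which would need a mapping such as `blake2b512` and is left out here):

```typescript
import { createHash } from 'node:crypto'

// Hypothetical helper mirroring the Robot's `algorithm` enum.
// 'b2' is omitted: it is not a Node digest name as-is.
type HashAlgorithm = 'md5' | 'sha1' | 'sha224' | 'sha256' | 'sha384' | 'sha512'

function hashFile(contents: string | Buffer, algorithm: HashAlgorithm): string {
  return createHash(algorithm).update(contents).digest('hex')
}

hashFile('', 'sha256')
// 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
```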
11 changes: 10 additions & 1 deletion src/alphalib/types/robots/file-preview.ts
@@ -46,11 +46,17 @@ export const meta: RobotMeta = {

export const robotFilePreviewInstructionsInterpolatedSchema = z
.object({
robot: z.literal('/file/preview').describe(`
This <dfn>Robot</dfn>'s purpose is to generate a meaningful preview image for any file, in such a way that the resulting thumbnail highlights the file's content. The goal is not to losslessly present the original media in a smaller way. Instead, it is to maximize the chance of a person recognizing the media at a glance, while being visually pleasing and consistent with other previews. The generation process depends on the file type. For example, the <dfn>Robot</dfn> can extract artwork from media files, frames from videos, generate a waveform for audio files, and preview the content of documents and images. The details of all available strategies are provided in the next section.
If no file-specific thumbnail can be generated because the file type is not supported, a generic icon containing the file extension will be generated.
The default parameters ensure that the <dfn>Robot</dfn> always generates a preview image with the predefined dimensions and formats, to allow an easy integration into your application's UI. In addition, the generated preview images are optimized by default to reduce their file size while keeping their quality.
`),
result: z
.boolean()
.optional()
.describe(`Whether the results of this Step should be present in the Assembly Status JSON`),
robot: z.literal('/file/preview'),
use: useParamSchema,
format: z.enum(['gif', 'jpg', 'png']).default('png').describe(`
The output format for the generated thumbnail image. If a short video clip is generated using the \`clip\` strategy, its format is defined by \`clip_format\`.
@@ -65,6 +71,9 @@ Height of the thumbnail, in pixels.
To achieve the desired dimensions of the preview thumbnail, the <dfn>Robot</dfn> might have to resize the generated image. This happens, for example, when the dimensions of a frame extracted from a video do not match the chosen \`width\` and \`height\` parameters.
See the list of available [resize strategies](/docs/transcoding/image-manipulation/image-resize/#resize-strategies) for more details.
`),
background: color_with_optional_alpha.default('#ffffffff').describe(`
The hexadecimal code of the color used to fill the background (only used for the pad resize strategy). The format is \`#rrggbb[aa]\` (red, green, blue, alpha). Use \`#00000000\` for a transparent padding.
`),
strategy: z
.object({
14 changes: 14 additions & 0 deletions src/alphalib/types/robots/image-generate.ts
@@ -9,14 +9,28 @@ export const robotImageGenerateInstructionsSchema = z
.optional()
.describe(`Whether the results of this Step should be present in the Assembly Status JSON`),
robot: z.literal('/image/generate'),
model: z.string(),
prompt: z.string().describe('The prompt describing the desired image content.'),
format: z
.enum(['jpeg', 'png', 'gif', 'webp'])
.optional()
.describe('Format of the generated image.'),
seed: z.number().optional().describe('Seed for the random number generator.'),
aspectRatio: z.string().optional().describe('Aspect ratio of the generated image.'),
height: z.number().optional().describe('Height of the generated image.'),
width: z.number().optional().describe('Width of the generated image.'),
style: z.string().optional().describe('Style of the generated image.'),
output_meta: outputMetaParamSchema.optional(),
use: useParamSchema,
})
.strict()

export const robotImageGenerateInstructionsWithHiddenFields =
robotImageGenerateInstructionsSchema.extend({
provider: z.string().optional().describe('Provider for generating the image.'),
})

export type RobotImageGenerateInstructions = z.infer<typeof robotImageGenerateInstructionsSchema>
export type RobotImageGenerateInstructionsWithHiddenFields = z.infer<
typeof robotImageGenerateInstructionsWithHiddenFields
>
8 changes: 4 additions & 4 deletions src/alphalib/types/robots/media-playlist.ts
@@ -5,7 +5,7 @@ import { useParamSchema } from './_instructions-primitives.ts'

export const meta: RobotMeta = {
allowed_for_url_transform: true,
bytescount: 20,
bytescount: 10,
discount_factor: 0.1,
discount_pct: 90,
minimum_charge: 0,
@@ -29,7 +29,7 @@ export const robotMediaPlaylistInstructionsSchema = z
.optional()
.describe(`Whether the results of this Step should be present in the Assembly Status JSON`),
robot: z.literal('/media/playlist').describe(`
🤖/media/playlist is deprecated and will be removed! Please use [🤖/video/adaptive](/docs/transcoding/video-encoding/video-adaptive/) for all your HLS and MPEG-Dash needs instead.
**Warning:** 🤖/media/playlist is deprecated and will be removed! Please use [🤖/video/adaptive](/docs/transcoding/video-encoding/video-adaptive/) for all your HLS and MPEG-Dash needs instead.
`),
use: useParamSchema,
name: z.string().default('playlist.m3u8').describe(`
@@ -41,10 +41,10 @@ URL prefixes to use in the playlist file. Example: \`"/234p/"\`
resolution: z.string().optional().describe(`
The resolution reported in the playlist file. Example: \`"416×234"\`. [More info](https://developer.apple.com/library/ios/technotes/tn2224/_index.html#//apple_ref/doc/uid/DTS40009745-CH1-DECIDEONYOURVARIANTS-DEVICE_CAPABILITIES).
`),
codes: z.string().optional().describe(`
codecs: z.string().optional().describe(`
The codecs reported in the playlist file. Example: \`"avc1.42001e,mp4a.40.34"\`. [More info](https://developer.apple.com/library/ios/technotes/tn2224/_index.html#//apple_ref/doc/uid/DTS40009745-CH1-DECIDEONYOURVARIANTS-DEVICE_CAPABILITIES).
`),
bandwidth: z.union([z.literal('auto'), z.number()]).default('auto').describe(`
bandwidth: z.union([z.literal('auto'), z.number().int()]).default('auto').describe(`
The bandwidth reported in the playlist file. Example: \`2560000\`. [More info](https://developer.apple.com/library/ios/technotes/tn2224/_index.html#//apple_ref/doc/uid/DTS40009745-CH1-DECIDEONYOURVARIANTS-DEVICE_CAPABILITIES). This value is expressed in bits per second.
`),
closed_captions: z.boolean().default(true).describe(`
11 changes: 9 additions & 2 deletions src/alphalib/types/robots/meta-write.ts
@@ -1,6 +1,10 @@
import { z } from 'zod'

import { ffmpegStackVersionSchema, useParamSchema } from './_instructions-primitives.ts'
import {
ffmpegAudioInstructions,
ffmpegStackVersionSchema,
useParamSchema,
} from './_instructions-primitives.ts'
import type { RobotMeta } from './_instructions-primitives.ts'

export const meta: RobotMeta = {
@@ -42,13 +46,16 @@ export const robotMetaWriteInstructionsSchema = z
.boolean()
.optional()
.describe(`Whether the results of this Step should be present in the Assembly Status JSON`),
robot: z.literal('/meta/write'),
robot: z.literal('/meta/write').describe(`
**Note:** This <dfn>Robot</dfn> currently accepts images, videos and audio files.
`),
use: useParamSchema,
data_to_write: z.object({}).passthrough().default({}).describe(`
A key/value map defining the metadata to write into the file.
Valid metadata keys can be found [here](https://exiftool.org/TagNames/EXIF.html). For example: \`ProcessingSoftware\`.
`),
ffmpeg: ffmpegAudioInstructions.optional(),
ffmpeg_stack: ffmpegStackVersionSchema.optional(),
})
.strict()
44 changes: 42 additions & 2 deletions src/alphalib/types/robots/video-adaptive.ts
@@ -6,7 +6,7 @@ import type { RobotMeta } from './_instructions-primitives.ts'
export const meta: RobotMeta = {
allowed_for_free_plans: true,
allowed_for_url_transform: false,
bytescount: 0,
bytescount: Infinity,
discount_factor: 1,
discount_pct: 0,
example_code: {
@@ -66,11 +66,51 @@ export const meta: RobotMeta = {

export const robotVideoAdaptiveInstructionsSchema = z
.object({
robot: z.literal('/video/adaptive').describe(`
This <dfn>Robot</dfn> accepts all types of video files and audio files. Do not forget to use <dfn>Step</dfn> bundling in your \`use\` parameter to make the <dfn>Robot</dfn> work on several input files at once.
This <dfn>Robot</dfn> is normally used in combination with [🤖/video/encode](/docs/transcoding/video-encoding/video-encode/). We have implemented video and audio encoding presets specifically for MPEG-Dash and HTTP Live Streaming support. These presets are prefixed with \`"dash/"\` and \`"hls/"\`. [View a HTTP Live Streaming demo here](/demos/video-encoding/implement-http-live-streaming/).
### Required CORS settings for MPEG-Dash and HTTP Live Streaming
Playing back MPEG-Dash Manifest or HLS playlist files requires a proper CORS setup on the server-side. The file-serving server should be configured to add the following header fields to responses:
\`\`\`
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET
Access-Control-Allow-Headers: *
\`\`\`
If the files are stored in an Amazon S3 Bucket, you can use the following [CORS definition](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html) to ensure the CORS header fields are set correctly:
\`\`\`json
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET"],
"AllowedOrigins": ["*"],
"ExposeHeaders": []
}
]
\`\`\`
To set up CORS for your S3 bucket:
1. Visit <https://s3.console.aws.amazon.com/s3/buckets/>
1. Click on your bucket
1. Click "Permissions"
1. Edit "Cross-origin resource sharing (CORS)"
### Storing Segments and Playlist files
The <dfn>Robot</dfn> gives its result files (segments, initialization segments, MPD manifest files and M3U8 playlist files) the right metadata property \`relative_path\`, so that you can store them easily using one of our storage <dfn>Robots</dfn>.
In the \`path\` parameter of the storage <dfn>Robot</dfn> of your choice, use the <dfn>Assembly Variable</dfn> \`\${file.meta.relative_path}\` to store files in the proper paths to make the playlist files work.
`),
result: z
.boolean()
.optional()
.describe(`Whether the results of this Step should be present in the Assembly Status JSON`),
robot: z.literal('/video/adaptive'),
use: useParamSchema,
technique: z.enum(['dash', 'hls']).default('dash').describe(`
Determines which streaming technique should be used. Currently supports \`"dash"\` for MPEG-Dash and \`"hls"\` for HTTP Live Streaming.
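The "Storing Segments and Playlist files" note above can be illustrated with a hedged Assembly Instructions sketch; the Step names, credentials name, and path prefix below are placeholders, not taken from this commit:

```typescript
// Hedged sketch: exporting /video/adaptive results with a storage Robot.
// The `${file.meta.relative_path}` Assembly Variable (interpolated
// server-side, hence the single quotes) keeps playlists and segments
// in the relative layout the playlist files reference.
const steps = {
  adaptive: {
    robot: '/video/adaptive',
    use: { steps: ['encoded'], bundle_steps: true },
    technique: 'hls',
  },
  store: {
    robot: '/s3/store',
    use: 'adaptive',
    credentials: 'MY_S3_CREDENTIALS', // placeholder credentials name
    path: 'videos/${file.meta.relative_path}',
  },
}
```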
8 changes: 6 additions & 2 deletions src/alphalib/types/robots/video-concat.ts
@@ -46,14 +46,18 @@ export const meta: RobotMeta = {

export const robotVideoConcatInstructionsSchema = z
.object({
robot: z.literal('/video/concat').describe(`
**Warning:** All videos you concatenate must have the same dimensions (width and height) and the same streams (audio and video streams), otherwise you will run into errors. If your videos donʼt have the desired dimensions when passing them to [🤖/video/concat](/docs/transcoding/video-encoding/video-concat/), encode them first with [🤖/video/encode](/docs/transcoding/video-encoding/video-encode/). [{.alert .alert-warning}]
Itʼs possible to concatenate a virtually infinite number of video files using [🤖/video/concat](/docs/transcoding/video-encoding/video-concat/).
`),
result: z
.boolean()
.optional()
.describe(`Whether the results of this Step should be present in the Assembly Status JSON`),
robot: z.literal('/video/concat'),
use: useParamSchema,
output_meta: outputMetaParamSchema,
preset: preset.optional().describe(`
preset: preset.default('flash').optional().describe(`
Performs conversion using pre-configured settings.
If you specify your own FFmpeg parameters using the <dfn>Robot</dfn>'s \`ffmpeg\` parameter and you have not specified a preset, then the default \`"flash"\` preset is not applied. This is to prevent you from having to override each of the flash preset's values manually.
2 changes: 1 addition & 1 deletion src/alphalib/types/robots/video-merge.ts
@@ -13,7 +13,7 @@ import type { RobotMeta } from './_instructions-primitives.ts'

export const meta: RobotMeta = {
allowed_for_url_transform: false,
bytescount: 0,
bytescount: 1,
discount_factor: 1,
discount_pct: 0,
minimum_charge: 0,
4 changes: 3 additions & 1 deletion src/alphalib/types/robots/video-subtitle.ts
@@ -52,11 +52,13 @@ export const meta: RobotMeta = {

export const robotVideoSubtitleInstructionsSchema = z
.object({
robot: z.literal('/video/subtitle').describe(`
This <dfn>Robot</dfn> supports both SRT and VTT subtitle files.
`),
result: z
.boolean()
.optional()
.describe(`Whether the results of this Step should be present in the Assembly Status JSON`),
robot: z.literal('/video/subtitle'),
use: useParamSchema,
output_meta: outputMetaParamSchema,
preset: preset.default('empty').describe(`
8 changes: 6 additions & 2 deletions src/alphalib/types/robots/video-thumbs.ts
@@ -46,11 +46,13 @@ export const meta: RobotMeta = {

export const robotVideoThumbsInstructionsSchema = z
.object({
robot: z.literal('/video/thumbs').describe(`
**Note:** Even though thumbnails are extracted from videos in parallel, we sort the thumbnails before adding them to the Assembly results. So the order in which they appear there reflects the order in which they appear in the video. You can also make sure by checking the <code>thumb_index</code> meta key. [{.alert .alert-note}]
`),
result: z
.boolean()
.optional()
.describe(`Whether the results of this Step should be present in the Assembly Status JSON`),
robot: z.literal('/video/thumbs'),
use: useParamSchema,
output_meta: outputMetaParamSchema,
count: z.number().int().min(1).max(999).default(8).describe(`
@@ -82,7 +84,9 @@ The background color of the resulting thumbnails in the \`"rrggbbaa"\` format (r
`),
rotate: z
.union([z.literal(0), z.literal(90), z.literal(180), z.literal(270), z.literal(360)])
.default(0),
.default(0).describe(`
Forces the video to be rotated by the specified degree integer. Currently, only multiples of 90 are supported. We automatically correct the orientation of many videos when the orientation is provided by the camera. This option is only useful for videos requiring rotation because it was not detected by the camera.
`),
ffmpeg_stack: ffmpegStackVersionSchema.describe(`
Selects the FFmpeg stack version to use for encoding. These versions reflect real FFmpeg versions.
`),
