EyalLavi edited this page Dec 14, 2018 · 12 revisions

Welcome to the AI benchmarking framework.

This is a collaborative project to create a library for benchmarking AI/ML applications. It evolved out of conversations among broadcasters and providers of Access Services to media organisations, but anyone is welcome to contribute.

## Use cases for benchmarking STT

| Use case | Main metric(s) |
| --- | --- |
| Prepared subtitles | WER, timing |
| Live subtitles | Latency, WER |
| Re-synching of subtitles captured from live | Timing |
| Keyword identification | Proper noun recognition |
| Archive metadata extraction | WER, noise tolerance, dialects |
| News monitoring | Proper noun recognition, voice identification |
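Several of the use cases above rely on Word Error Rate (WER). As a minimal sketch (a hypothetical helper, not part of this framework's API), WER is the word-level edit distance between a reference transcript and an STT hypothesis, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words (Levenshtein over words).
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # match or substitution
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER of 1/6
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Real benchmarking pipelines normalise the text first (casing, punctuation, number formatting) before scoring, since those choices can dominate the reported WER.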