
ruby : Add low-level methods to transcribe #2585

Merged — 18 commits merged into ggerganov:master from KitaitiMakoto:ruby-full on Nov 28, 2024

Conversation

KitaitiMakoto (Contributor)

Hello,

This pull request adds methods corresponding to whisper_full and whisper_full_parallel. They support Ruby's MemoryView, which allows C (multidimensional) array data to be shared with zero copying, similar to Python's buffer protocol.
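For context, a rough usage sketch of what these low-level calls look like from Ruby (illustrative only: the exact method signatures, the `Numo::SFloat` example, and the segment-retrieval calls are assumptions based on the description above, not verbatim code from this PR):

```ruby
require "whisper"
require "numo/narray" # Numo arrays can expose MemoryView, so sample data may be shared without copying

whisper = Whisper::Context.new("path/to/ggml-base.en.bin")
params  = Whisper::Params.new

# 16 kHz mono float32 PCM; assumes a Numo::SFloat is accepted via MemoryView
samples = Numo::SFloat.from_binary(File.binread("samples.f32"))

# Hypothetical low-level call corresponding to whisper_full
whisper.full(params, samples)

# Hypothetical parallel variant corresponding to whisper_full_parallel,
# here splitting the work across 4 processors (argument order assumed)
whisper.full_parallel(params, samples, samples.size, 4)

# Reading back results; segment iteration API assumed for illustration
whisper.each_segment do |segment|
  puts segment.text
end
```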

Thank you.

KitaitiMakoto changed the title from "ruby : Add low-lelve methods to transcribe" to "ruby : Add low-level methods to transcribe" on Nov 25, 2024
ggerganov merged commit 021eef1 into ggerganov:master on Nov 28, 2024
44 of 45 checks passed
KitaitiMakoto deleted the ruby-full branch on November 28, 2024 at 09:38
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request on Dec 3, 2024
* ggerganov/master: (447 commits)
  ruby : Add low-level methods to transcribe (ggerganov#2585)
  models : add `q8_0` models to `download-ggml-model.sh` (ggerganov#2589)
  ruby : Follow source tree change (ggerganov#2580)
  whisper : use backend registry (#0)
  ggml/sched : do not skip views in pre-assignments
  whisper : adapt to new ggml (wip)
  talk-llama : sync llama.cpp
  sync : ggml
  ggml : sync resolve (skip) (#0)
  Add required ggml-base and backend libs to cmake pkg (llama/10407)
  cuda : fix CUDA_FLAGS not being applied (llama/10403)
  sycl : Add option to set the SYCL architecture for all targets (llama/10266)
  vulkan: Optimize soft_max (llama/10301)
  sycl: Revert MUL_MAT_OP support changes (llama/10385)
  cuda : only use native when supported by cmake (llama/10389)
  vulkan: remove use of null initializer (llama/10372)
  metal : fox offset integer overflows in im2col (ggml/1015)
  Vulkan: Fix device info output format specifiers (llama/10366)
  metal : add `GGML_UNARY_OP_ELU` kernel (ggml/1018)
  CUDA: fix MMV kernel being used for FP16 src1 (llama/10357)
  ...