
Commit b4f3a2c

New version: ggml.llamacpp version b8184

1 parent b45ec82 commit b4f3a2c

3 files changed: 92 additions & 0 deletions

File 1 of 3: installer manifest (30 additions, 0 deletions)

@@ -0,0 +1,30 @@
+# Created with komac v2.15.0
+# yaml-language-server: $schema=https://aka.ms/winget-manifest.installer.1.12.0.schema.json
+
+PackageIdentifier: ggml.llamacpp
+PackageVersion: b8184
+InstallerType: zip
+NestedInstallerType: portable
+NestedInstallerFiles:
+- RelativeFilePath: llama-batched-bench.exe
+- RelativeFilePath: llama-bench.exe
+- RelativeFilePath: llama-cli.exe
+- RelativeFilePath: llama-gguf-split.exe
+- RelativeFilePath: llama-imatrix.exe
+- RelativeFilePath: llama-mtmd-cli.exe
+- RelativeFilePath: llama-perplexity.exe
+- RelativeFilePath: llama-quantize.exe
+- RelativeFilePath: llama-server.exe
+- RelativeFilePath: llama-tokenize.exe
+- RelativeFilePath: llama-tts.exe
+Dependencies:
+  PackageDependencies:
+  - PackageIdentifier: Microsoft.VCRedist.2015+.x64
+ReleaseDate: 2026-03-01
+ArchiveBinariesDependOnPath: true
+Installers:
+- Architecture: x64
+  InstallerUrl: https://github.com/ggml-org/llama.cpp/releases/download/b8184/llama-b8184-bin-win-vulkan-x64.zip
+  InstallerSha256: 2D60828F4B90BDD1E93698837C163B54F40E7D682E8018DC40F52EB444C3CCEB
+ManifestType: installer
+ManifestVersion: 1.12.0
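The InstallerSha256 field above pins the release archive to an exact digest; winget refuses to install a download that does not match. A minimal sketch of reproducing that check locally, assuming the release zip has already been downloaded (the local path in the comment is hypothetical):

```python
import hashlib

def sha256_hex(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the uppercase SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest().upper()

# Hypothetical local path; compare the result against the manifest's InstallerSha256.
# expected = "2D60828F4B90BDD1E93698837C163B54F40E7D682E8018DC40F52EB444C3CCEB"
# assert sha256_hex("llama-b8184-bin-win-vulkan-x64.zip") == expected
```

The digest is uppercased to match the convention used in winget manifests.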
File 2 of 3: default locale manifest (54 additions, 0 deletions)

@@ -0,0 +1,54 @@
+# Created with komac v2.15.0
+# yaml-language-server: $schema=https://aka.ms/winget-manifest.defaultLocale.1.12.0.schema.json
+
+PackageIdentifier: ggml.llamacpp
+PackageVersion: b8184
+PackageLocale: en-US
+Publisher: ggml
+PublisherUrl: https://github.com/ggml-org
+PublisherSupportUrl: https://github.com/ggml-org/llama.cpp/issues
+PackageName: llama.cpp
+PackageUrl: https://github.com/ggml-org/llama.cpp
+License: MIT
+LicenseUrl: https://github.com/ggml-org/llama.cpp/blob/HEAD/LICENSE
+ShortDescription: LLM inference in C/C++
+Tags:
+- ggml
+- llama
+ReleaseNotes: |-
+  vulkan: improve partial offloading performance on AMD (#19976)
+  - vulkan: fix and enable cpy_tensor_async function
+  - use transfer_queue for async transfers on AMD, synchronize with timeline semaphore
+  - update offload_op logic
+  - fix missing transfer submission
+  - disable async transfer queue on AMD GCN
+  - revert op batch size change
+  - fix cpy_tensor_async checks
+  macOS/iOS:
+  - macOS Apple Silicon (arm64)
+  - macOS Intel (x64)
+  - iOS XCFramework
+  Linux:
+  - Ubuntu x64 (CPU)
+  - Ubuntu x64 (Vulkan)
+  - Ubuntu x64 (ROCm 7.2)
+  - Ubuntu s390x (CPU)
+  Windows:
+  - Windows x64 (CPU)
+  - Windows arm64 (CPU)
+  - Windows x64 (CUDA 12) - CUDA 12.4 DLLs
+  - Windows x64 (CUDA 13) - CUDA 13.1 DLLs
+  - Windows x64 (Vulkan)
+  - Windows x64 (SYCL)
+  - Windows x64 (HIP)
+  openEuler:
+  - openEuler x86 (310p)
+  - openEuler x86 (910b, ACL Graph)
+  - openEuler aarch64 (310p)
+  - openEuler aarch64 (910b, ACL Graph)
+ReleaseNotesUrl: https://github.com/ggml-org/llama.cpp/releases/tag/b8184
+Documentations:
+- DocumentLabel: Wiki
+  DocumentUrl: https://github.com/ggml-org/llama.cpp/wiki
+ManifestType: defaultLocale
+ManifestVersion: 1.12.0
File 3 of 3: version manifest (8 additions, 0 deletions)

@@ -0,0 +1,8 @@
+# Created with komac v2.15.0
+# yaml-language-server: $schema=https://aka.ms/winget-manifest.version.1.12.0.schema.json
+
+PackageIdentifier: ggml.llamacpp
+PackageVersion: b8184
+DefaultLocale: en-US
+ManifestType: version
+ManifestVersion: 1.12.0
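All three manifests in this commit must name the same PackageIdentifier and PackageVersion for the set to validate. A minimal sketch of that cross-file consistency check, using a deliberately tiny hand-rolled parser for flat `Key: value` lines rather than a full YAML parser (indented lines, comments, and list items are skipped):

```python
def top_level_fields(text: str) -> dict:
    """Extract unindented 'Key: value' pairs from a manifest, skipping
    comments, blank lines, list items, and indented block content."""
    fields = {}
    for line in text.splitlines():
        if not line or line.startswith((" ", "#", "-")):
            continue
        key, sep, value = line.partition(": ")
        if sep and value:
            fields[key] = value.strip()
    return fields

def manifests_consistent(*manifests: str) -> bool:
    """True if every manifest agrees on PackageIdentifier and PackageVersion."""
    parsed = [top_level_fields(m) for m in manifests]
    return all(
        p.get("PackageIdentifier") == parsed[0].get("PackageIdentifier")
        and p.get("PackageVersion") == parsed[0].get("PackageVersion")
        for p in parsed
    )
```

This only mirrors one of the checks winget's real validation performs; the official tooling additionally validates each file against its JSON schema.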
