Commit cf2f1db

New version: ggml.llamacpp version b8171
1 parent 697427c commit cf2f1db

3 files changed

Lines changed: 86 additions & 0 deletions

Lines changed: 30 additions & 0 deletions
@@ -0,0 +1,30 @@
# Created with komac v2.15.0
# yaml-language-server: $schema=https://aka.ms/winget-manifest.installer.1.12.0.schema.json

PackageIdentifier: ggml.llamacpp
PackageVersion: b8171
InstallerType: zip
NestedInstallerType: portable
NestedInstallerFiles:
- RelativeFilePath: llama-batched-bench.exe
- RelativeFilePath: llama-bench.exe
- RelativeFilePath: llama-cli.exe
- RelativeFilePath: llama-gguf-split.exe
- RelativeFilePath: llama-imatrix.exe
- RelativeFilePath: llama-mtmd-cli.exe
- RelativeFilePath: llama-perplexity.exe
- RelativeFilePath: llama-quantize.exe
- RelativeFilePath: llama-server.exe
- RelativeFilePath: llama-tokenize.exe
- RelativeFilePath: llama-tts.exe
Dependencies:
  PackageDependencies:
  - PackageIdentifier: Microsoft.VCRedist.2015+.x64
ReleaseDate: 2026-02-27
ArchiveBinariesDependOnPath: true
Installers:
- Architecture: x64
  InstallerUrl: https://github.com/ggml-org/llama.cpp/releases/download/b8171/llama-b8171-bin-win-vulkan-x64.zip
  InstallerSha256: 8124ABB21F4CD27E261692DDF370A4F61D61BC1CD9E1314BB22D2A72B5763759
ManifestType: installer
ManifestVersion: 1.12.0
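The `InstallerSha256` field is what winget checks against the downloaded archive before extracting it. The same verification can be sketched in Python using only the standard library (the local file path is an assumption; adjust it to wherever the release zip was saved):

```python
import hashlib

# Expected digest, copied from the installer manifest above.
EXPECTED = "8124ABB21F4CD27E261692DDF370A4F61D61BC1CD9E1314BB22D2A72B5763759"

def sha256_upper(path: str) -> str:
    """Return the uppercase hex SHA-256 of a file, read in 1 MiB chunks
    so large archives do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest().upper()
```

Calling `sha256_upper("llama-b8171-bin-win-vulkan-x64.zip")` on the downloaded archive should return the `EXPECTED` string; any other value means the download is corrupt or tampered with.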
Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
# Created with komac v2.15.0
# yaml-language-server: $schema=https://aka.ms/winget-manifest.defaultLocale.1.12.0.schema.json

PackageIdentifier: ggml.llamacpp
PackageVersion: b8171
PackageLocale: en-US
Publisher: ggml
PublisherUrl: https://github.com/ggml-org
PublisherSupportUrl: https://github.com/ggml-org/llama.cpp/issues
PackageName: llama.cpp
PackageUrl: https://github.com/ggml-org/llama.cpp
License: MIT
LicenseUrl: https://github.com/ggml-org/llama.cpp/blob/HEAD/LICENSE
ShortDescription: LLM inference in C/C++
Tags:
- ggml
- llama
ReleaseNotes: |-
  replace the magic nunber 768 by max work group size to support iGPU (#19920)
  Co-authored-by: Neo Zhang Jianyu jianyu.zhang@intel.com
  macOS/iOS:
  - macOS Apple Silicon (arm64)
  - macOS Intel (x64)
  - iOS XCFramework
  Linux:
  - Ubuntu x64 (CPU)
  - Ubuntu x64 (Vulkan)
  - Ubuntu x64 (ROCm 7.2)
  - Ubuntu s390x (CPU)
  Windows:
  - Windows x64 (CPU)
  - Windows arm64 (CPU)
  - Windows x64 (CUDA 12) - CUDA 12.4 DLLs
  - Windows x64 (CUDA 13) - CUDA 13.1 DLLs
  - Windows x64 (Vulkan)
  - Windows x64 (SYCL)
  - Windows x64 (HIP)
  openEuler:
  - openEuler x86 (310p)
  - openEuler x86 (910b, ACL Graph)
  - openEuler aarch64 (310p)
  - openEuler aarch64 (910b, ACL Graph)
ReleaseNotesUrl: https://github.com/ggml-org/llama.cpp/releases/tag/b8171
Documentations:
- DocumentLabel: Wiki
  DocumentUrl: https://github.com/ggml-org/llama.cpp/wiki
ManifestType: defaultLocale
ManifestVersion: 1.12.0
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
# Created with komac v2.15.0
# yaml-language-server: $schema=https://aka.ms/winget-manifest.version.1.12.0.schema.json

PackageIdentifier: ggml.llamacpp
PackageVersion: b8171
DefaultLocale: en-US
ManifestType: version
ManifestVersion: 1.12.0
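A winget package version is split across these three manifests (installer, defaultLocale, version), and all of them must agree on `PackageIdentifier` and `PackageVersion`. A minimal consistency check, sketched in Python with abbreviated manifest texts inlined for illustration (a real check would read the three `.yaml` files from disk; the helper names are hypothetical):

```python
def manifest_fields(text: str) -> dict:
    """Extract top-level 'Key: value' pairs from a winget manifest.

    Naive line scan: skips comments, list items, and indented
    continuation lines. Good enough for flat scalar keys."""
    fields = {}
    for line in text.splitlines():
        if line.startswith(("#", " ", "-")) or ":" not in line:
            continue
        key, _, value = line.partition(":")
        if value.strip():
            fields[key.strip()] = value.strip()
    return fields

def check_consistent(*texts: str) -> bool:
    """True when every manifest shares one identifier and one version."""
    parsed = [manifest_fields(t) for t in texts]
    ids = {p.get("PackageIdentifier") for p in parsed}
    versions = {p.get("PackageVersion") for p in parsed}
    return len(ids) == 1 and len(versions) == 1

# Abbreviated stand-ins for the three manifest files above.
INSTALLER = "PackageIdentifier: ggml.llamacpp\nPackageVersion: b8171\nManifestType: installer\n"
LOCALE = "PackageIdentifier: ggml.llamacpp\nPackageVersion: b8171\nManifestType: defaultLocale\n"
VERSION = "PackageIdentifier: ggml.llamacpp\nPackageVersion: b8171\nManifestType: version\n"

# check_consistent(INSTALLER, LOCALE, VERSION) -> True
```

This mirrors part of what `winget validate <manifest-dir>` enforces before a manifest set is accepted.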
