[BUG]: Get GGML_ASSERT when running KernelMemorySaveAndLoad.cs #1151

Open
GianMeng opened this issue Apr 3, 2025 · 1 comment

GianMeng commented Apr 3, 2025

Description

When I run the embedding example KernelMemorySaveAndLoad.cs, after the weights and context are generated I get this error: LLamaSharp/LLamaSharp/ggml/src/ggml.c:2703: GGML_ASSERT(ggml_can_mul_mat(a,b)) failed

But if I run a chat session with the same model, it works perfectly. So something seems to be wrong with the embedding part of KernelMemory, perhaps WithLLamaSharpTextEmbeddingGeneration.
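To check whether the assert comes from the embedding path alone, independent of KernelMemory, a minimal sketch could drive the embedder directly. This assumes the LLamaSharp 0.23.0 API surface (`LLamaWeights.LoadFromFile`, `LLamaEmbedder`, `GetEmbeddings`, and the `Embeddings` flag on `ModelParams`); if the GGML_ASSERT fires here too, the bug is in the underlying llama.cpp embedding code rather than in the KernelMemory integration:

```csharp
using System;
using System.Linq;
using LLama;
using LLama.Common;

// Minimal repro sketch: run the embedder directly, bypassing KernelMemory.
var parameters = new ModelParams("qwen2.5-3b-instruct-q4_k_m.gguf")
{
    ContextSize = 2048,
    GpuLayerCount = 99,
    Embeddings = true // embedding mode must be enabled for LLamaEmbedder
};

using var weights = LLamaWeights.LoadFromFile(parameters);
using var embedder = new LLamaEmbedder(weights, parameters);

// If llama.cpp's ggml_can_mul_mat assert is going to fail, it should
// fail on this call, with no KernelMemory code involved.
var embeddings = await embedder.GetEmbeddings("Hello, world!");
Console.WriteLine($"Embedding length: {embeddings.First().Length}");
```

This is a diagnostic sketch under the stated API assumptions, not a fix; it just narrows down which layer produces the assert.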

Reproduction Steps

Here is my code:

using LLama;
using LLama.Common;
using LLamaSharp.KernelMemory;
using Microsoft.KernelMemory;
using Microsoft.KernelMemory.Configuration;
using Microsoft.KernelMemory.DocumentStorage.DevTools;
using Microsoft.KernelMemory.FileSystem.DevTools;
using Microsoft.KernelMemory.MemoryStorage.DevTools;
using System.Diagnostics;

namespace LSKMRAG
{
    class Program
    {
        static void Main(string[] args)
        {
            ChatQwen chat = new ChatQwen();
            chat.ChatQwenMain().GetAwaiter().GetResult();
        }
    }

    public class ChatQwen
    {
        static string StorageFolder => Path.GetFullPath($"./storage-{nameof(ChatQwen)}");
        static bool StorageExists => Directory.Exists(StorageFolder) && Directory.GetDirectories(StorageFolder).Length > 0;

        string modelPath = Path.Combine(Directory.GetCurrentDirectory(), "qwen2.5-3b-instruct-q4_k_m.gguf");

        public async Task ChatQwenMain()
        {
            Console.ForegroundColor = ConsoleColor.Yellow;
            Console.WriteLine(
                """

                This program uses the Microsoft.KernelMemory package to ingest documents
                and store the embeddings as local files so they can be quickly recalled
                when this application is launched again. 

                """);
            IKernelMemory memory = CreateMemoryWithLocalStorage(modelPath);

            Console.ForegroundColor = ConsoleColor.Yellow;
            if (StorageExists) {
                Console.WriteLine(
                    """
                    
                    Kernel memory files have been located!
                    Information about previously analyzed documents has been loaded.

                    """);
            }
            else
            {
                Console.WriteLine(
                    """
                    
                    Existing kernel memory was not found.
                    Documents will be analyzed (slow) and information saved to disk.
                    Analysis will not be required the next time this program is run.
                    Press ENTER to proceed...

                    """);
                Console.ReadLine();
                await IngestDocuments(memory);
            }
        }

        private static IKernelMemory CreateMemoryWithLocalStorage(string modelPath)
        {
            InferenceParams infParams = new() { AntiPrompts = ["\n\n"] };

            LLamaSharpConfig lsConfig = new(modelPath) { DefaultInferenceParams = infParams };

            var parameters = new ModelParams(modelPath)
            {
                ContextSize = 2048,
                GpuLayerCount = 99,
                MainGpu = lsConfig.MainGpu,
                SplitMode = lsConfig.SplitMode
            };

            SearchClientConfig searchClientConfig = new()
            {
                MaxMatchesCount = 1,
                AnswerTokens = 100,
            };

            TextPartitioningOptions parseOptions = new()
            {
                MaxTokensPerParagraph = 300,
                // MaxTokensPerLine = 100,
                OverlappingTokens = 30
            };

            SimpleFileStorageConfig storageConfig = new()
            {
                Directory = StorageFolder,
                StorageType = FileSystemTypes.Disk,
            };

            SimpleVectorDbConfig vectorDbConfig = new()
            {
                Directory = StorageFolder,
                StorageType = FileSystemTypes.Disk,
            };

            Console.ForegroundColor = ConsoleColor.Blue;
            Console.WriteLine($"Kernel memory folder: {StorageFolder}");
            Console.ForegroundColor = ConsoleColor.DarkGray;

            return new KernelMemoryBuilder()
                .WithSimpleFileStorage(storageConfig)
                .WithSimpleVectorDb(vectorDbConfig)
                .WithLLamaSharpDefaults(lsConfig)
                .WithSearchClientConfig(searchClientConfig)
                .With(parseOptions)
                .Build();
        }
        //... others same as KernelMemorySaveAndLoad.cs
    }
}

Environment & Configuration

  • Operating system: Win10 / Win11
  • .NET runtime version: .NET 8
  • LLamaSharp version: 0.23.0
  • CUDA version (if you are using cuda backend): 12.4 / 12.6
  • CPU & GPU device: Nvidia RTX 4060 Laptop / Nvidia RTX 4090 Laptop

Known Workarounds

This seems to be related to llama.cpp itself: I found a possibly related issue in llama.cpp (#12517), and that bug was only fixed last week in #12545.

@martindevans
Member

LLamaSharp is currently using this version of llama.cpp (three weeks old). Hopefully this should be resolved once we upgrade to a newer version (no set timeline, but usually around once a month).
