
Added MAUI usage example (Android) #1217


Open
wants to merge 5 commits into master

Conversation

@LucaMaccarini commented Jul 2, 2025

Added MAUI example (Android)

  

Disclaimer

I’m new to the project and I hope I have followed all the contribution guidelines and policies correctly. If not, please forgive me and kindly let me know what I should fix or improve.

Context

As suggested by @AmSmart in PR #1179, I extended the Mobile project by developing a chatbot as a basic working example app using LLamaSharp on MAUI.

Important note on functionality (ISSUE)

I noticed that the example works correctly on an Android emulator (running on a PC), but on a real Android device it crashes with the following error related to loading the CommunityToolkit.HighPerformance.dll dependency:

[monodroid-assembly] open_from_bundles: failed to load assembly CommunityToolkit.HighPerformance.dll
Loaded assembly: /data/data/com.llama.mobile/files/.__override__/CommunityToolkit.HighPerformance.dll [External]
[libc] Fatal signal 4 (SIGILL), code 0 (SI_USER) in tid 28509 (.NET TP Worker), pid 28397 (com.llama.mobile)
[Choreographer] Skipped 32 frames!  The application may be doing too much work on its main thread.

@AmSmart, could you please check what is going on here?

A simple idea from building the app

While developing the app, it occurred to me that it might be useful to provide an API like LLamaWeights.LoadFromStream to load the model directly from a stream. This could be handy in cases where a small model is bundled with the APK. Currently, since loading requires a file, the model must be extracted from the APK and saved to device storage, resulting in two copies: one compressed inside the APK and one extracted (the sketch below illustrates this extract-then-load flow). With a stream-based load, the app could read the model directly from the APK without extracting it. I understand that in a real-world scenario the model probably won't be shipped with the APK, but I thought it was an interesting possibility and wanted to hear your thoughts on this.
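A minimal sketch of that extract-then-load flow, assuming a GGUF model bundled as a MAUI raw asset (the asset name model.gguf and the ModelLoader helper are hypothetical; FileSystem.OpenAppPackageFileAsync, FileSystem.AppDataDirectory, ModelParams, and LLamaWeights.LoadFromFile are the real MAUI/LLamaSharp APIs):

```csharp
using System.IO;
using System.Threading.Tasks;
using LLama;
using LLama.Common;
using Microsoft.Maui.Storage;

// Hypothetical helper showing why two copies of the model exist today:
// the asset compressed inside the APK, plus the file extracted below.
public static class ModelLoader
{
    public static async Task<LLamaWeights> LoadBundledModelAsync(string assetName = "model.gguf")
    {
        // Destination in the app's private storage.
        var targetPath = Path.Combine(FileSystem.AppDataDirectory, assetName);

        // Extract once: LLamaWeights can only load from a file path today.
        if (!File.Exists(targetPath))
        {
            using var asset = await FileSystem.OpenAppPackageFileAsync(assetName);
            using var file = File.Create(targetPath);
            await asset.CopyToAsync(file);
        }

        var parameters = new ModelParams(targetPath);
        return LLamaWeights.LoadFromFile(parameters);
    }
}
```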

@martindevans (Member)

I'd love to support LLamaWeights.LoadFromStream, but unfortunately it's not possible. The underlying API that llama.cpp offers requires a file path:

private static extern SafeLlamaModelHandle llama_model_load_from_file(string path, LLamaModelParams @params);
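For anyone who needs stream input today, the closest user-side workaround is to spill the stream to a temporary file and load from that path. A rough sketch under that assumption (the helper name LoadFromStreamViaTempFile is hypothetical; only LLamaWeights.LoadFromFile and ModelParams are real LLamaSharp APIs):

```csharp
using System.IO;
using System.Threading.Tasks;
using LLama;
using LLama.Common;

// Hypothetical workaround: llama.cpp only accepts a path, so a stream must
// be written to a temporary file before LLamaWeights.LoadFromFile can see it.
public static class LLamaWeightsStreamExtensions
{
    public static async Task<LLamaWeights> LoadFromStreamViaTempFile(Stream modelStream)
    {
        var tempPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName() + ".gguf");

        using (var file = File.Create(tempPath))
            await modelStream.CopyToAsync(file);

        // Note: llama.cpp memory-maps the file by default, so the temp file
        // should not be deleted while the weights are in use.
        return LLamaWeights.LoadFromFile(new ModelParams(tempPath));
    }
}
```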

@LucaMaccarini (Author)

Oh ok, then unfortunately there's nothing to be done. Thanks anyway for considering it!
