Components
This document gives detailed documentation for every component and function Ollama App has to offer.
This is the main view of the app, simple and, most importantly, working.
Note
To start chatting, you first have to select a model in the Model Selector. Do that first, then come back here.
The chat mode is straightforward. Just write a message, wait a few moments, and the answer will get sent into the chat.
The chat view hides a few useful features already included by default. Messages can be deleted by double-tapping the message you desire to wipe from the earth; the message itself and all messages sent after it will get deleted.
Editing a message is almost as simple. After enabling message editing in Interface Settings, simply long press on a message and a popup will open and ask for the new content.
Tip
Messages (almost) fully support markdown syntax. That means the AI can receive your messages in markdown and send its answers back in markdown as well.
Ollama App supports multimodal models, that is, models that accept image input. Models supporting this are marked with an image icon next to their name in the Model Selector.
After selecting a multimodal model, a new icon appears at the bottom left of the message bar: a camera icon. Clicking on it reveals the following bottom sheet:
Select one of them, take or select your photo, and it'll get added to the chat. Adding multiple images is allowed, just repeat the steps.
Even though the images appear in the chat right after you add them, they won't be submitted to the AI until a new text message is sent. When you send a message, the AI will answer it with the image taken into consideration.
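For the curious, here's a rough Python sketch of what such a request boils down to on the server side, assuming the standard Ollama /api/chat endpoint; the host address, model name, and image file are placeholders, not values taken from the app.

```python
import base64

import requests

# Placeholder host and model; replace with your own Ollama server and a
# multimodal model you have installed (e.g. llava).
HOST = "http://192.168.1.100:11434"

# Images are attached to a chat message as base64-encoded strings.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = requests.post(
    f"{HOST}/api/chat",
    json={
        "model": "llava",
        "stream": False,
        "messages": [
            {
                "role": "user",
                "content": "What is in this picture?",
                "images": [image_b64],
            }
        ],
    },
)
print(response.json()["message"]["content"])
```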
You can access the model selector by tapping on the <selector> text in the top middle, or the name of the currently selected model in the same spot. Then you'll get the following popup dialog:
This will display all the models currently installed on your Ollama server instance.
The models with a star next to them are recommended models. They have been selected by me (hehe) to be listed as such. Read more under Custom Builds.
Models supporting Multimodal Input are marked with an image icon next to their name, like llava in the image above.
At the end of the model list, you can find a button with the text 'Add'. Pressing it will open an input field in which you have to put a model name. You can put any model here that you're able to find on the official list. Model name and tag have to be separated by a colon, like name:tag.
Ollama App will then check if the model exists. If it does, it'll ask you again if you want to install it. If you press 'Add', it'll download the model to your host. Keep the app open to continue the download. If you accidentally closed it, just re-enter the exact model name in the add dialog and confirm the action again; the download will be resumed.
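If you'd like to see roughly what the download step corresponds to, here's a hedged Python sketch against the standard Ollama /api/pull endpoint; the host and model name are examples, and the app's actual implementation may differ.

```python
import json

import requests

HOST = "http://192.168.1.100:11434"  # placeholder host, replace with yours

# Pull a model by "name:tag", just like in the app's add dialog.
# The endpoint streams progress updates as JSON lines.
with requests.post(
    f"{HOST}/api/pull", json={"model": "gemma2:2b"}, stream=True
) as resp:
    for line in resp.iter_lines():
        if line:
            status = json.loads(line)
            print(status.get("status"), status.get("completed"), status.get("total"))
```

Because the server reports progress and resumes interrupted pulls, re-running the same request continues the download, which matches the re-enter-and-confirm behavior described above.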
The button on the top left opens the menu. In it, you have two options: New Chat and Settings. The first option creates a new chat, and the second one opens the Settings, where you can change how everything works.
Below that are all the chats. To delete one, swipe it from left to right. To rename a chat, tap and hold it until a popup dialog appears. In it, you can change the title or tap the sparkle icon to let the AI find one for you. This is not affected by the "generate title" setting.
Note
The button in the top right corner deletes the current chat. It has the same effect as swiping the chat in the sidebar.
Ollama App offers a lot of configuration options. We'll go through every option one by one.
The host is the main address of your Ollama server. It may include a protocol, a hostname, and a port. Paths are not recommended.
The port is required unless the port number matches the protocol (443 for HTTPS or 80 for HTTP). After that, click the save icon right next to the text field to set the host. Redirects are not supported. If the host you entered returns a 301 or 302 status code, it won't be accepted.
The host address will be checked, so don't worry about entering a wrong one. If you've set the host once and your server goes offline, requests will fail, but the host will stay saved until you change it yourself. To do that, just go into the side menu and open the settings.
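As a quick sanity check outside the app, you can verify a host value from any machine on the same network. This is only an illustrative Python sketch with a placeholder address; a default Ollama install answers the root path with a short status text.

```python
import requests

# Example host values (placeholders): the scheme and, if non-standard, the port
# are part of the address, e.g.
#   http://192.168.1.100:11434   (default Ollama port)
#   https://ollama.example.com   (port 443 implied by https)
HOST = "http://192.168.1.100:11434"

# A reachable Ollama server replies to the root path with "Ollama is running".
# Redirects are left disabled, mirroring the app's behavior.
print(requests.get(HOST, timeout=5, allow_redirects=False).text)

# Listing the installed models is another quick check.
print(requests.get(f"{HOST}/api/tags", timeout=5).json()["models"])
```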
Ollama App supports adding custom headers. This can be useful if you want to secure your instance with authentication or something similar. Simply press the plus icon next to the host input and add them as a JSON object. This could be, for example:
{
"Authorization": "Bearer <token>"
}
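Conceptually, those headers are attached to every request the app sends to your host. The following Python sketch illustrates the idea with a placeholder host and token; it assumes something in front of Ollama, such as a reverse proxy, actually checks the Authorization header.

```python
import requests

HOST = "https://ollama.example.com"  # placeholder host behind an auth proxy

# The headers configured in the app are sent with every API call,
# e.g. a bearer token checked by a reverse proxy in front of Ollama.
headers = {"Authorization": "Bearer <token>"}

resp = requests.get(f"{HOST}/api/tags", headers=headers)
resp.raise_for_status()
print(resp.json())
```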
The behavior settings include settings connected to the system prompt.
The system prompt is sent to the assistant at the start of the conversation. It steers the assistant in a certain direction, and it'll talk the way you told it to in this message. To reset the system prompt to the default, empty its value, click the save icon, and close the screen.
The toggle to disable system messages is useful if your model has a system prompt added through its Modelfile. In that case, Ollama App won't send its own system prompt, and the result will be as expected.
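To make the mechanics concrete, here's a hedged Python sketch of how a system prompt travels with a conversation, assuming the standard /api/chat message format; the host and model name are placeholders.

```python
import requests

HOST = "http://192.168.1.100:11434"  # placeholder host

# The system prompt is simply the first message in the conversation with the
# role "system"; it steers how the assistant answers everything after it.
# If the model already defines a SYSTEM prompt in its Modelfile, omit the
# system message here and the built-in one applies instead.
messages = [
    {"role": "system", "content": "You are a concise assistant. Answer in one sentence."},
    {"role": "user", "content": "Why is the sky blue?"},
]

resp = requests.post(
    f"{HOST}/api/chat",
    json={"model": "llama3.2", "messages": messages, "stream": False},
)
print(resp.json()["message"]["content"])
```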
The option to disable markdown is not fail-safe; the assistant can still potentially add markdown to its response.
The interface settings are focused, as the name might imply, on the interface of Ollama App. The following list documents all options:
- Targeted at the Model Selector
  - Show model tags in the model selector. This can be useful if you have multiple versions of the same model installed
  - The model will be pinged on select, so it can be loaded into the host's memory and be used directly
  - Clear the chat if the model is changed. This is highly recommended; disabling this option could lead to unintended behavior
- Used in the Chat View
  - Set the request mode. Streaming is recommended, but sometimes it's not available; in that case, select "Request"
  - Whether to generate chat titles with the host's AI or not. Could incur higher quota costs
  - Whether long-pressing messages opens the edit dialog or not
  - Whether to ask before deleting chats or not. Useful if important data is potentially stored in chats
  - Whether to show tips in the main sidebar or not
- Backend loading options (see the example request after this list)
  - Keep model always loaded (keep_alive set to -1)
  - Never keep model (keep_alive set to 0)
  - Time to keep models alive; overwrites the above options
- Timeout multiplier
  - A multiplier applied to the timeout of every web request. This can be useful with a slow host or internet connection
- Appearance settings
  - Whether to enable haptic feedback or not
  - Whether to start the window maximized (desktop only)
  - Brightness of the app
  - Follow the device theme/color
- Temporary fixes
  - Solutions to problems that currently have no good way to be solved. These will be removed once proper fixes can be implemented
  - Issues might occur, use only if needed!
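To relate the request mode and the backend loading options above to the underlying API, here's a hedged Python sketch using the standard stream and keep_alive parameters of /api/chat; the host and model name are placeholders, and the exact values the app sends may differ.

```python
import json

import requests

HOST = "http://192.168.1.100:11434"  # placeholder host

payload = {
    "model": "llama3.2",  # example model name
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,       # "Streaming" request mode; set False for "Request"
    "keep_alive": -1,     # -1 = keep loaded forever, 0 = unload right away,
                          # or a duration string such as "10m"
}

# With streaming enabled, the answer arrives as one JSON object per line.
with requests.post(f"{HOST}/api/chat", json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        if line:
            chunk = json.loads(line)
            print(chunk["message"]["content"], end="", flush=True)
print()
```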
Warning
This is still an experimental feature! Some functions may not work as intended. If you come across any errors, please create a new issue report.
If the "Permission not granted" text appears, you have to allow the app a few crucial permissions for voice mode to work. To give them, simply tap on the information, it'll lead you through the needed steps.
After granting them, you can enable Voice Mode by pressing the big "Enable Voice Mode" toggle. Then, select a language in the language dialog and you're good to go.
To use voice mode, open a new chat (or an existing one) and press the headphone icon. If you have a multimodal model selected, you'll have to press the attachment button, and then select "Voice" in the media drawer.
This will launch Voice Mode. Just start chatting, have fun!
The export function allows you to export and save all chats to a file. This can be very useful if you want to back up your data or sync it between devices. Settings won't be included in the file.
The import functionality deletes all currently saved chats from disk and replaces them with the ones from the file. This cannot be undone.
The About screen holds a lot of useful information, like the current app version or all licenses associated with Ollama App. It also allows you to directly create an issue.
This screen is also home to the update tracker. You can check for the latest version by pressing the update text. If there's any update available, it'll lead you to the GitHub release page to download the latest release.