improved documentation (WIP) #14

Open · wants to merge 4 commits into main
129 changes: 129 additions & 0 deletions docs/configuration.md
@@ -0,0 +1,129 @@
---
title: Configuration
description: Setting up PearAI
keywords: [setup, start, install, vscode, jetbrains]
---

# Configuration

Want a quick and easy setup for PearAI? We've got you covered with some sample `config.json` snippets for different scenarios. Just copy and paste them into your `config.json`, which you can open by clicking the gear icon at the bottom right of the PearAI sidebar.

# Quick Setup Options

You can use PearAI in different ways. Here are some quick setups for common uses:

### Model

- Use PearAI with our free trial, your own API key, or local models.

### Custom Commands

- Define reusable prompts that you can invoke from the chat input.

### Context Providers

- Type `@` to pull files, docs, diffs, and more into the LLM's context.

### Slash Commands

- Run built-in commands such as `/edit`, `/comment`, and `/commit` from the chat input.

# Model Configuration

PearAI lets new users get started right away with our free trial, their own API key, or local models:

- Free requests out of the box, no credit card required.
- Use our free trial, your own API key, or local models.
- Join our Community Discord server.

### Configuration (`config.json`)

```json
{
  "models": [
    {
      "model": "pearai_model",
      "contextLength": 300000,
      "title": "PearAI Model",
      "systemMessage": "You are an expert software developer. You give helpful and concise responses.",
      "provider": "pearai_server",
      "isDefault": true
    },
    {
      "model": "gpt-4o",
      "contextLength": 300000,
      "title": "GPT-4o (PearAI)",
      "systemMessage": "You are an expert software developer. You give helpful and concise responses.",
      "provider": "pearai_server",
      "isDefault": true
    },
    {
      "model": "claude-3-5-sonnet-20240620",
      "contextLength": 300000,
      "title": "Claude 3.5 Sonnet (PearAI)",
      "systemMessage": "You are an expert software developer. You give helpful and concise responses.",
      "provider": "pearai_server",
      "isDefault": true
    },
    {
      "model": "pearai_model",
      "contextLength": 300000,
      "title": "PearAI Model (1)",
      "systemMessage": "You are an expert software developer. You give helpful and concise responses.",
      "provider": "pearai_server"
    }
  ]
}
```

# Custom Commands

Custom commands let you define reusable prompts that you can invoke from the chat input. The example below adds a `/test` command that writes unit tests for the highlighted code.

### Configuration (`config.json`)

```json
{
  "customCommands": [
    {
      "name": "test",
      "prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ]
}
```

# Context Providers

Learn more about using and modifying PearAI's [Context Providers](/codebase-context).
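
### Configuration (`config.json`)

A minimal sketch that enables a few of the built-in providers (the full list is shown in the linked Context Providers doc):

```json
{
  "contextProviders": [
    {
      "name": "code",
      "params": {}
    },
    {
      "name": "diff",
      "params": {}
    },
    {
      "name": "terminal",
      "params": {}
    }
  ]
}
```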


# Slash Commands

Slash commands are shortcuts you trigger by typing `/` in the chat input. The configuration below enables the built-in commands.
### Configuration (`config.json`)

```json
{
  "slashCommands": [
    {
      "name": "edit",
      "description": "Edit selected code"
    },
    {
      "name": "comment",
      "description": "Write comments for the selected code"
    },
    {
      "name": "share",
      "description": "Export the current chat session to markdown"
    },
    {
      "name": "cmd",
      "description": "Generate a shell command"
    },
    {
      "name": "commit",
      "description": "Generate a git commit message"
    }
  ]
}
```
56 changes: 56 additions & 0 deletions docs/context-providers.md
@@ -0,0 +1,56 @@
---
title: Context Providers
description: Context Providers for PearAI
keywords: [setup, install, context, providers, model]
---

# Context Providers

## How does it work

Context Providers allow you to type '@' and see a dropdown of content that can all be fed to the LLM as context. Every context provider is a plugin, which means if you want to reference some source of information that you don't see here, you can request (or build!) a new context provider.

As an example, suppose you are fixing a bug in a web development project where the navbar isn't responsive on mobile. Using PearAI's `@problems`, you pull up the relevant warnings and errors, then add `Navbar.tsx` and `styles.css` to your context with `@navbar` and `@style`. Reviewing the code with `@code`, you identify a problem with the CSS media queries, and use `@diff` to spot a recent change that caused the issue. After testing the fix with `@terminal`, you have everything you need in one place; PearAI streamlines the debugging process by keeping all of this context together.

![context](../static/img/context.png)

## Built-in Context Providers

To use any of the built-in context providers, open `~/.pearai/config.json` and add it to the `contextProviders` list.

### Configuration (`config.json`)

```json
{
  "contextProviders": [
    {
      "name": "code",
      "params": {}
    },
    {
      "name": "docs",
      "params": {}
    },
    {
      "name": "diff",
      "params": {}
    },
    {
      "name": "terminal",
      "params": {}
    },
    {
      "name": "problems",
      "params": {}
    },
    {
      "name": "folder",
      "params": {}
    },
    {
      "name": "codebase",
      "params": {}
    }
  ]
}
```
20 changes: 20 additions & 0 deletions docs/install.md
@@ -0,0 +1,20 @@
---
title: Install
description: Setting up PearAI
keywords: [setup, start, install, vscode, jetbrains]
---

# Install

# PearAI

1. Click **Install** on the PearAI site. This will download and open the IDE, where you will need to click **Install** again.

2. Select the PearAI server and log in from the right sidebar. Happy coding!

<video width="800" controls>
<source src="/docs/videos/pearai-onboard-login.webm" type="video/webm" />
Your browser does not support the video tag.
</video>
81 changes: 81 additions & 0 deletions docs/model-providers.md
@@ -0,0 +1,81 @@
---
title: Model Providers
description: Models for PearAI
keywords: [setup, install, autopilot, chatgpt, model]
---

# Model Providers

# Configure and Integrate Various LLM Providers

Configure and integrate various LLM (Large Language Model) providers for chat, autocomplete, and embedding models, whether self-hosted, remote, or via SaaS.

To select the ones you want to use, add them to your `config.json`.
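
For example, each entry in the `models` list names its `provider`. The sketch below pairs the built-in PearAI server entry used elsewhere in these docs with a locally hosted model; the `"ollama"` provider name and `"llama3"` model tag are illustrative assumptions, so check the provider's own documentation for the exact values.

```json
{
  "models": [
    {
      "title": "PearAI Model",
      "provider": "pearai_server",
      "model": "pearai_model"
    },
    {
      "title": "Llama 3 via Ollama (example)",
      "provider": "ollama",
      "model": "llama3"
    }
  ]
}
```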

## Self-hosted

### Local

You can run a model on your local computer using:

- **Ollama**
- **LM Studio**
- **Llama.cpp**
- **KoboldCpp** (OpenAI compatible server)
- **llamafile** (OpenAI compatible server)
- **LocalAI** (OpenAI compatible server)
- **Text generation web UI** (OpenAI compatible server)
- **FastChat** (OpenAI compatible server)
- **llama-cpp-python** (OpenAI compatible server)
- **TensorRT-LLM** (OpenAI compatible server)
- **IPEX-LLM** (Local LLM on Intel GPU)
- **Msty**
- **IBM watsonx**
- **Nvidia NIMS** (OpenAI compatible server)

### Remote

You can deploy a model in your AWS, GCP, Azure, Lambda, or other clouds using:

- **HuggingFace TGI**
- **vLLM**
- **SkyPilot**
- **Anyscale Private Endpoints** (OpenAI compatible API)
- **Lambda**

## SaaS

### Open-source and Commercial Models

You can access both open-source and commercial LLMs via:

- **OpenRouter**
- **Kindo**
- **Nvidia NIMS** (OpenAI compatible server)

### Open-source Models

You can run open-source LLMs with cloud services like:

- **Codestral API**
- **Together**
- **HuggingFace Inference Endpoints**
- **Anyscale Endpoints** (OpenAI compatible API)
- **Replicate**
- **Deepinfra**
- **Groq** (OpenAI compatible API)

### Commercial Models

You can use commercial LLMs via APIs using:

- **Anthropic API**
- **OpenAI API**
- **Azure OpenAI Service**
- **Amazon Bedrock**
- **Google Gemini API**
- **Mistral API**
- **Voyage AI API**
- **Cohere API**
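
As a rough sketch, a hosted commercial model is typically configured with the provider's name and an API key. The `"anthropic"` provider string and the `apiKey` field below are illustrative assumptions rather than confirmed PearAI settings; the model ID matches the Claude 3.5 Sonnet entry shown in the Configuration doc.

```json
{
  "models": [
    {
      "title": "Claude 3.5 Sonnet (example)",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20240620",
      "apiKey": "YOUR_ANTHROPIC_API_KEY"
    }
  ]
}
```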

In addition to selecting providers, you will need to figure out what models to use.
54 changes: 54 additions & 0 deletions docs/overview.md
@@ -0,0 +1,54 @@
---
title: Overview
description: Overview of PearAI
keywords: [setup, start, install, vscode, jetbrains, overview]
---

# Overview

PearAI is an open-source, AI-powered code editor that can be deeply customized. This is primarily accomplished by clicking the `Settings` icon and editing a local file located at `~/.pearai/config.json` (macOS / Linux) or `%USERPROFILE%\.pearai\config.json` (Windows). `config.json` is created the first time you use PearAI.

## Quick Start

- When you first install PearAI, open the app; the chat window should appear on the right side (it might take a moment to load the first time).
- Select the PearAI server and log in.

## Choosing Your Models

PearAI works with a variety of LLMs:

- **Commercial models** (e.g., Claude 3 Opus via Anthropic API)
- **Open-source models** (e.g., Llama 3 running locally with Ollama)
- And many options in between

You'll need to select models for three main functions:

1. **Chat**
2. **Autocomplete**
3. **Embeddings**
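
Below is a rough sketch of how these three choices can look in `config.json`. The `models` entry mirrors the Configuration doc, while the `tabAutocompleteModel` and `embeddingsProvider` keys, and the local models named in them, are illustrative assumptions that may differ in your build:

```json
{
  "models": [
    {
      "title": "PearAI Model",
      "provider": "pearai_server",
      "model": "pearai_model"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Autocomplete model (example assumption)",
    "provider": "ollama",
    "model": "starcoder2:3b"
  },
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```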

## Customization Options

PearAI offers deep customization through `config.json` and `config.ts` files. These are located in:

- **macOS**: `~/.pearai/`
- **Windows**: `%USERPROFILE%\.pearai\`

You can customize:

- **Basic Configuration**
- **Model Providers**
- **Model Selection**
- **Context Providers**
- **Slash Commands**
- **Advanced Options**

## Sharing Your Configuration

To share your PearAI setup with a team:

1. Create a `.PearAIrc.json` file in your project's root directory.
2. Use the same JSON Schema as `config.json`.
3. This file will automatically apply on top of the local `config.json`.
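
For instance, a team-shared file might pin a common set of context providers and slash commands. This is a minimal sketch that simply reuses keys from the `config.json` examples above:

```json
{
  "contextProviders": [
    { "name": "codebase", "params": {} },
    { "name": "diff", "params": {} }
  ],
  "slashCommands": [
    { "name": "commit", "description": "Generate a git commit message" }
  ]
}
```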

Get started with PearAI today and speed up your development by integrating AI the correct way 🚀