Merged
144 changes: 7 additions & 137 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1,139 +1,9 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# generated content
*.pyc
*.pyo
*.pptx

# Pipenv files
Pipfile
Pipfile.lock

# VSCode
.vscode/
.env
.venv/
backend/.venv/
frontend/node_modules/
frontend/dist/
29 changes: 29 additions & 0 deletions .pylintrc
@@ -0,0 +1,29 @@
[MASTER]
ignore-patterns=^frontend/.*,^backend/generated/.*

[MESSAGES CONTROL]
disable=
missing-module-docstring,
missing-class-docstring,
missing-function-docstring,
too-few-public-methods,
broad-exception-caught,
broad-exception-raised,
import-outside-toplevel,
line-too-long,
raise-missing-from,
wrong-import-order,
invalid-name,
possibly-used-before-assignment,
inconsistent-return-statements,
unreachable,
no-else-return,
consider-using-sys-exit,
consider-using-with,
consider-using-in

[TYPECHECK]
ignored-modules=openai,pptx,pptx.dml.color,pptx.enum.shapes,pptx.util,fastapi,fastapi.middleware.cors,fastapi.staticfiles,pydantic,streamlit,ollama

[FORMAT]
max-line-length=120
176 changes: 115 additions & 61 deletions README.md
@@ -1,84 +1,138 @@
# ChatPPT
# ChatPPT Studio (React + FastAPI)

ChatPPT is a tool powered by chatgpt/ollama that helps you generate PPT/slide. It supports output in English and Chinese.
ChatPPT Studio generates new PowerPoint decks, supports live text editing, and applies chat-driven revisions.

## Table of Contents
- Backend: FastAPI + `python-pptx`
- Frontend: React (Vite)
- Dependency manager (backend): `uv`
- LLM usage is optional. If `OPENAI_API_KEY` is missing, deterministic fallback generation is used.
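The optional-LLM behavior in the last bullet can be sketched as a small environment guard. This is a hypothetical helper, not the backend's actual code:

```python
import os

def llm_enabled() -> bool:
    # Hypothetical helper illustrating the documented behavior.
    # FAKE_LLM_RESPONSES=1 forces the deterministic fallback even when a
    # key is present (useful for online demo testing).
    if os.environ.get("FAKE_LLM_RESPONSES") == "1":
        return False
    # Real LLM calls happen only when OPENAI_API_KEY is configured.
    return bool(os.environ.get("OPENAI_API_KEY"))
```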

- [What's New](#whats-new)
- [What is ChatPPT](#what-is-chatppt)
- [Requirements](#requirements)
- [Installation](#installation)
- [Usage](#usage)
- [Contributing](#contributing)
- [License](#license)
## Features

## What's New
1. Generate a new PPT from topic input
   - `POST /api/generate`
   - Supports `preset`, `audience`, `tone`, `slide_count`, and `language`
   - Returns output path, outline, and theme metadata

ChatPPT now supports Ollama and includes a sample UI.
2. Parse and edit PPT text content
   - `GET /api/ppt`
   - `POST /api/ppt/update`

![UI demo 1](ui_demo_1.png)
![UI demo 2](ui_demo_2.png)
3. Chat-based edits (existing compatibility retained)
   - `POST /api/chat`

## What is ChatPPT
4. Apply inferred theme to an existing PPT (existing compatibility retained)
   - `POST /api/theme/apply`

ChatPPT is powered by chatgpt/ollama. It can help you generate PPT/slide in English and Chinese.
## Project Layout

![What is GPT | 600](demo1.png)
![什么是AWS | 400](demo2.png)
```text
chatppt/
├── backend/
│   ├── app/
│   │   ├── main.py
│   │   ├── models.py
│   │   ├── ppt_service.py
│   │   ├── chat_service.py
│   │   └── generator_service.py
│   ├── tests/test_services.py
│   └── requirements.txt
├── frontend/
│   ├── src/App.jsx
│   ├── src/App.test.jsx
│   └── src/styles.css
└── README.md
```

## Requirements
## Environment Variables

Python 3.8.10 or higher
Backend (optional):

## Installation

### Ollama

Follow the [guide](https://ollama.com/) to install ollama

### OpenAI
```bash
export OPENAI_API_KEY=your_key
export OPENAI_MODEL=gpt-3.5-turbo
# Optional for online demo testing without real LLM calls:
export FAKE_LLM_RESPONSES=1
```

Generate your OpenAI API key at <https://platform.openai.com/account/api-keys>
Frontend (optional):

## Usage
```bash
export VITE_API_BASE=http://127.0.0.1:8000
```

1. Install requirements
## Run Locally

```
pip install -r requirements.txt
```
### 1) Start backend

2. Start Streamlit
```bash
cd backend
uv venv
source .venv/bin/activate
uv pip install -r requirements.txt
uv run uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

```
streamlit run chatppt_ui.py
```
### 2) Start frontend

3. Open the Streamlit URL in your browser (<http://localhost:8501>)
```bash
cd frontend
npm install
npm run dev
```

Open `http://127.0.0.1:5173`.

## API Example

### `POST /api/generate`

Request:

```json
{
  "topic": "AI customer support product roadmap",
  "preset": "Tech",
  "audience": "Executive team",
  "tone": "Concise",
  "slide_count": 6,
  "language": "en-US",
  "output_path": "generated/ai-roadmap.pptx"
}
```

Response:

```json
{
  "output_path": "generated/ai-roadmap.pptx",
  "outline": [{ "title": "...", "bullets": ["..."] }],
  "theme": {
    "name": "preset-tech",
    "font_name": "Segoe UI",
    "title_size_pt": 38,
    "body_size_pt": 19
  }
}
```

Notes:
- `preset` is optional (`Business`, `Tech`, `Education`, `Marketing`).
- Explicit `audience`, `tone`, `language`, and `slide_count` still work as before and take precedence over preset defaults.
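The precedence rule above can be sketched as a plain dictionary merge. The preset names follow the list above, but the default values and the function name are illustrative assumptions, not the backend's actual implementation:

```python
# Illustrative preset defaults; the real backend values may differ.
PRESET_DEFAULTS = {
    "Business":  {"audience": "Executive team", "tone": "Formal"},
    "Tech":      {"audience": "Engineering team", "tone": "Concise"},
    "Education": {"audience": "Students", "tone": "Friendly"},
    "Marketing": {"audience": "Customers", "tone": "Persuasive"},
}

def resolve_request(req):
    # Start from the preset's defaults, then let explicit request
    # fields override them (explicit values take precedence).
    resolved = dict(PRESET_DEFAULTS.get(req.get("preset"), {}))
    resolved.update({k: v for k, v in req.items() if v is not None})
    return resolved
```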

## Quality Checks

Backend tests:

![UI](ui.png)
```bash
cd backend
PYTHONPATH=. uv run python -m unittest tests/test_services.py
```
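For orientation, a test in `tests/test_services.py` might follow the standard `unittest` shape below. The class and assertions are illustrative, not the repository's actual tests:

```python
import unittest

class TestOutlineShape(unittest.TestCase):
    # Illustrative only: checks the outline structure documented for
    # /api/generate (a list of {"title": ..., "bullets": [...]} entries).
    def test_outline_entries_have_title_and_bullets(self):
        outline = [{"title": "Intro", "bullets": ["one point"]}]
        for slide in outline:
            self.assertIn("title", slide)
            self.assertIsInstance(slide["bullets"], list)
```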

> You can also use ChatPPT in the command line:

```bash
python chatppt.py -h
usage: chatppt.py [-h] [-m {openai,ollama}] -t TOPIC [-k API_KEY] [-u OLLAMA_URL] [-o OLLAMA_MODEL] [-p PAGES] [-l {cn,en}]

I am your PPT assistant, I can help you generate PPT.

options:
  -h, --help            show this help message and exit
  -m {openai,ollama}, --ai_model {openai,ollama}
                        Select the AI model
  -t TOPIC, --topic TOPIC
                        Your topic name
  -k API_KEY, --api_key API_KEY
                        Your api key file path
  -u OLLAMA_URL, --ollama_url OLLAMA_URL
                        Your ollama url
  -o OLLAMA_MODEL, --ollama_model OLLAMA_MODEL
                        Specify the Ollama model to use
  -p PAGES, --pages PAGES
                        How many slides to generate
  -l {cn,en}, --language {cn,en}
                        Output language
```

Frontend tests and build:

```bash
cd frontend
npm test
npm run build
```
1 change: 1 addition & 0 deletions backend/app/__init__.py
@@ -0,0 +1 @@
# package