A command-line tool that leverages OpenAI's Chat Completion API to document code with the assistance of AI models.
Watch the demo video to see the features in action.
- Source Code Documentation: Automatically generate comments and documentation for your source code.
- Multiple File Processing: Handle one or multiple files in a single command.
- Model Selection: Choose which AI model to use with the `--model` flag.
- Custom Output: Write the results to a file with the `--output` flag, or display them in the console.
- Stream Output: Stream the LLM response to the command line with the `--stream` flag.
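To give a rough idea of what the tool sends to the Chat Completion API, the sketch below builds a request body from a source file and the CLI settings. The function name, prompt wording, and defaults here are illustrative assumptions, not code from this project:

```javascript
// Hypothetical sketch: assemble a Chat Completion request body from the
// source code to document and the user's CLI options. The system prompt
// and helper name are assumptions for illustration only.
function buildRequest(sourceCode, options = {}) {
  const {
    model = "google/gemma-2-9b-it:free", // matches the documented default
    temperature = 0.7,                   // matches the documented default
    stream = false,                      // set by the --stream flag
  } = options;

  return {
    model,
    temperature,
    stream,
    messages: [
      {
        role: "system",
        content: "Add helpful documentation comments to the user's source code.",
      },
      { role: "user", content: sourceCode },
    ],
  };
}
```

The `--model`, `--temperature`, and `--stream` flags each map onto one field of this request body.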
To run the tool, specify one or more source files or folders as input:

```bash
npm start ./examples/file.js
```

To process multiple files:

```bash
npm start ./examples/file.js ./examples/file.cpp
```

To process a folder:

```bash
npm start ./examples
```

Note: Use `npm start -- -option` to pass option flags to the program; without the `--`, npm captures the flag itself.
- `-m, --model <model-name>`: Choose the AI model to use (default: `google/gemma-2-9b-it:free` from OpenRouter).
  `npm start file.js -- -m "openai/gpt-4o-mini"`
- `-o, --output <output-file>`: Write the output to the specified file.
  `npm start file.js -- -o output.js`
- `-t, --temperature <value>`: Set the creativity level of the AI model (default: `0.7`).
  `npm start file.js -- -t 1.1`
- `-u, --token-usage`: Display token usage information.
  `npm start file.js -- -u`
- `-s, --stream`: Stream the response to the command line.
  `npm start file.js -- -s`
- Check Version: To check the current version of the tool, use:
  `npm start -- --version`
- Help: Display the help message listing all available options:
  `npm start -- --help`
- Document a JavaScript file and save the result:
  `npm start ./examples/file.js -- --output file-documented.js --model google/gemini-flash-8b-1.5-exp`
- Process multiple files and print output to the console:
  `npm start ./examples/file.js ./examples/file.py -- --model google/gemini-flash-8b-1.5-exp`
You can store your API key and base URL in a `.env` file created at the root of the project. Example configuration:

```
API_KEY=your_api_key
BASE_URL=https://api.openai.com/v1
```

This setup also lets you customize other settings, such as the temperature, directly in the `.env` file.
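As an illustration of how these variables could be consumed once a loader such as `dotenv` has populated the environment, the sketch below reads them with fallbacks. The helper name and fallback values are assumptions for illustration, not the project's actual implementation:

```javascript
// Hypothetical sketch: read tool settings from the environment (filled in
// from .env by a loader such as dotenv). Variable names match the example
// configuration above; the fallback values are illustrative assumptions.
function loadConfig(env = process.env) {
  return {
    apiKey: env.API_KEY, // required: requests fail without a valid key
    baseURL: env.BASE_URL || "https://api.openai.com/v1", // assumed fallback
    temperature: env.TEMPERATURE ? Number(env.TEMPERATURE) : 0.7, // assumed fallback
  };
}
```

CLI flags such as `-t, --temperature` would typically take precedence over values read this way.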
Contributions are welcome! If you find a bug or have an idea for an improvement, feel free to open an issue or submit a pull request. See the Contribution Guidelines for more details.
This project is licensed under the MIT License.