This repository contains Helm charts that are automatically released to GitHub Pages using GitHub Actions. The charts are tested, validated, and published automatically when changes are made via pull requests.
- Repository Purpose
- Automated Pipeline Overview
- Test Values Requirement
- Interpreting Test Results
- Troubleshooting Failed Checks
- Accessing Detailed Logs
- Chart Development Workflow
- Using Published Charts
- Best Practices
- Pipeline Configuration
This repository serves as a Helm chart registry that:
- Hosts Helm charts for Kubernetes applications
- Automatically validates and tests charts on every pull request
- Publishes charts to GitHub Pages using chart-releaser
- Provides automated version management and release notes
- Ensures chart quality through comprehensive testing
The published charts are available at: https://[your-username].github.io/[repository-name]/
The GitHub Actions pipeline (Release Charts) has two main triggers and workflows:
When you create or update a pull request:
- Change Detection - Scans the `charts/` directory to identify modified charts
- Chart Validation & Testing - For each modified chart (roughly the commands sketched below):
  - Template Validation: Runs `helm template` to ensure YAML templates are syntactically correct
  - Dry-run Installation: Performs `helm install --dry-run` to validate Kubernetes resource generation
- Results Reporting - Posts detailed test results as PR comments with downloadable logs
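In essence, the two validation steps correspond to commands like the following (a sketch with a placeholder chart name; the workflow's exact flags may differ):

```bash
# Step 1: render the templates with the chart's test values
helm template my-chart charts/my-chart -f charts/my-chart/test-values.yaml

# Step 2: simulate an install so the generated Kubernetes resources are validated
helm install my-chart charts/my-chart -f charts/my-chart/test-values.yaml --dry-run
```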
When a PR is merged to the main branch:
- Change Detection - Re-identifies which charts were changed in the merge
- Version Validation - Ensures chart versions are properly incremented compared to existing releases
- Automated Release - If version validation passes:
- Creates GitHub releases for validated charts
- Generates and updates the Helm repository index
- Publishes charts to GitHub Pages for public consumption
Important: The release process requires proper version increments. If version validation fails after merge, a GitHub issue will be created to track the problem.
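Before merging, you can check locally whether the version in Chart.yaml has already been released. A minimal sketch, assuming release tags follow the `<chart-name>-<version>` pattern shown in the versioning section below:

```bash
# Hypothetical check: has this chart version already been tagged as a release?
CHART=my-chart   # placeholder chart name
VERSION=$(grep '^version:' "charts/${CHART}/Chart.yaml" | awk '{print $2}')
if git tag -l "${CHART}-${VERSION}" | grep -q .; then
  echo "${CHART} ${VERSION} is already released; bump the version in Chart.yaml"
fi
```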
Each chart MUST include a test-values.yaml file in its root directory (charts/[chart-name]/test-values.yaml).
This file provides test-specific configuration that allows the pipeline to:
- Validate template rendering with realistic values
- Test dry-run installations without external dependencies
- Ensure charts work with various configuration scenarios
The test-values.yaml should contain:
```yaml
# Example test-values.yaml
# Use test/mock data instead of production values
image:
  repository: nginx  # Use public images for testing
  tag: "1.21-alpine"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false  # Disable complex ingress for testing

# Use test-specific names/namespaces
fullnameOverride: "test-app"
nameOverride: "test"

# Disable or mock external dependencies
database:
  enabled: false
  # Or use test database settings
  host: "test-db.example.com"
  port: 5432
  name: "testdb"

# Use minimal resource requirements for testing
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 50m
    memory: 64Mi

# Disable persistence for testing
persistence:
  enabled: false
```

To create a test-values.yaml for your chart:

- Start with your main `values.yaml` as a template
- Replace production values with test-friendly alternatives:
  - Use public Docker images instead of private ones
  - Use test database connections or disable external services
  - Set minimal resource requirements
  - Disable complex features like ingress, persistence, or monitoring
- Ensure the chart can render and install with these values in a clean Kubernetes cluster (a quick local check is sketched below)
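One way to verify that last point locally is to render the chart with the test values and validate the output with a server-side dry run. A minimal sketch, assuming a placeholder chart name and a running local cluster such as Minikube:

```bash
# Render with the test values and validate the result against the cluster;
# --dry-run=server checks the manifests without creating anything
helm template test-app charts/my-chart -f charts/my-chart/test-values.yaml \
  | kubectl apply --dry-run=server -f -
```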
The pipeline posts detailed results as PR comments. Here's how to interpret them:
The pipeline generates a results table showing:
| Chart | Template Validation | Dry-run Installation | Overall Status |
|---|---|---|---|
| my-chart | ✅ Passed | ✅ Passed | ✅ Passed |
| another-chart | ❌ Failed | - | ❌ Failed |
- Template Validation: Whether `helm template` succeeded
- Dry-run Installation: Whether `helm install --dry-run` succeeded
- Overall Status: Combined result of all tests
Symptoms:
- ❌ Template validation shows "Failed"
- Error messages about YAML syntax or missing values
Common Causes & Solutions:
- Missing required values in `test-values.yaml`

  ```yaml
  # Add missing required values
  database:
    host: "required-value"
    port: 5432
  ```

- Invalid YAML syntax in templates
  - Check for proper indentation
  - Ensure all quotes are properly closed
  - Validate template logic (`{{ if }}`, `{{ range }}`, etc.)
- Undefined template functions
  - Ensure all custom template functions are defined
  - Check for typos in template function names
Finding Logs:
- Check the PR comment for the "Download detailed test logs" link
- Download the `helm-test-logs-[run-number]` artifact
- Look at `[chart-name]-template.log` for detailed error output
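It is usually faster to reproduce the failure locally than to re-run the pipeline (placeholder chart name):

```bash
# Re-render the chart with its test values; --debug keeps the partially
# rendered output, which helps pinpoint the failing template
helm template my-chart charts/my-chart \
  -f charts/my-chart/test-values.yaml --debug
```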
Symptoms:
- ✅ Template validation passed
- ❌ Dry-run installation failed
Common Causes & Solutions:
- Kubernetes resource conflicts
  - Conflicting resource names
  - Invalid Kubernetes API versions
  - Missing required labels or annotations
- Resource dependencies
  - References to non-existent ConfigMaps/Secrets
  - Invalid service account references
  - Missing RBAC permissions
- Invalid resource specifications
  - Incorrect resource quotas
  - Invalid container specifications
  - Malformed environment variables
Finding Logs:
- Download the test logs artifact from the PR comment
- Check `[chart-name]-dryrun.log` for detailed Kubernetes validation errors
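If an invalid or deprecated API version is the suspect, compare the chart's manifests against what the cluster actually serves. A quick sketch, assuming access to a cluster comparable to the pipeline's Minikube:

```bash
# List the API versions the cluster serves
kubectl api-versions | sort

# Example: confirm which API group/version provides Ingress
kubectl api-resources | grep -i ingress
```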
Symptoms:
- ❌ Chart Version Validation Failed
- Error about version not being incremented
When this happens:
- On Pull Requests: You'll get a PR comment explaining the issue
- On Main Branch (after merge): A GitHub issue will be automatically created to track the problem
Solution:
- Check the current version in your `Chart.yaml`:

  ```yaml
  version: 1.0.0  # This needs to be incremented
  ```

- Find the latest released version:

  ```bash
  git tag -l "my-chart-*" --sort=-version:refname | head -n1
  ```

- Increment the version following semantic versioning:
  - Patch (1.0.0 → 1.0.1): Bug fixes
  - Minor (1.0.1 → 1.1.0): New features, backward compatible
  - Major (1.1.0 → 2.0.0): Breaking changes
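For example, a patch bump for a hypothetical chart looks like this; note that `version` (the chart) and `appVersion` (the packaged application) are versioned independently, and the increment check described above applies to `version`:

```yaml
# charts/my-chart/Chart.yaml (hypothetical chart)
apiVersion: v2
name: my-chart
description: An example application chart
version: 1.0.1        # chart version: bumped from 1.0.0 for a bug fix
appVersion: "1.2.3"   # application version, independent of the chart version
```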
When tests fail, the pipeline generates detailed logs that are essential for debugging:
From the PR comment:
- Look for the test results comment on your PR
- Find the "Download detailed test logs" link
- Click the link to download the artifact

From the workflow run page:
- Navigate to the failed workflow run
- Scroll to the "Artifacts" section at the bottom
- Download the `helm-test-logs-[run-number]` artifact
The artifact contains:
- `[chart-name]-template.log`: Full output from the `helm template` command
- `[chart-name]-dryrun.log`: Full output from the `helm install --dry-run` command
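Once downloaded, a quick scan of both logs usually surfaces the root cause (a sketch; the artifact file name and run number are placeholders):

```bash
# Unpack the downloaded artifact and scan both logs for errors
unzip helm-test-logs-123.zip -d helm-test-logs
grep -n -iE "error|failed" helm-test-logs/*-template.log helm-test-logs/*-dryrun.log
```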
```bash
# Create new chart
helm create charts/my-new-chart

# Or modify existing chart
cd charts/existing-chart
# Make your changes...

# Copy and modify values for testing
cp charts/my-chart/values.yaml charts/my-chart/test-values.yaml
# Edit test-values.yaml with test-appropriate values

# Validate template rendering
helm template my-chart charts/my-chart -f charts/my-chart/test-values.yaml

# Test dry-run installation
helm install my-chart charts/my-chart \
  -f charts/my-chart/test-values.yaml \
  --dry-run --debug

# Edit Chart.yaml
vim charts/my-chart/Chart.yaml
# Increment the version field
```

- Push changes to a feature branch
- Create a PR to the `main` branch
- Wait for automated tests to complete (template validation and dry-run testing)
- Address any failures using the troubleshooting guide above
- Note: Version validation happens during release, not during PR testing
- Once all tests pass, merge the PR
- The release process will automatically:
- Validate chart version increments
- Create GitHub releases for properly versioned charts
- Publish charts to GitHub Pages
- If version validation fails, a GitHub issue will be created
After charts are successfully released, you can use them:
```bash
# Add the repository
helm repo add my-charts https://[username].github.io/[repository-name]/

# Update repository
helm repo update

# Install a chart
helm install my-release my-charts/my-chart
```

- Always test locally before creating a PR (see the helper sketch after this list)
- Use meaningful commit messages that describe chart changes
- Follow semantic versioning for chart versions
- Keep test-values.yaml minimal but functional
- Document breaking changes in chart README files
- Test with different Kubernetes versions when possible
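To make local testing a habit, a small helper can run the same checks across every chart before you open a PR. A hypothetical sketch (the layout is an assumption; the dry-run step needs a reachable cluster):

```bash
#!/usr/bin/env bash
# Hypothetical pre-PR helper: lint, render, and dry-run every chart that
# provides test values.
set -euo pipefail

for chart in charts/*/; do
  name=$(basename "$chart")
  values="${chart}test-values.yaml"
  if [ ! -f "$values" ]; then
    echo "skipping ${name}: no test-values.yaml"
    continue
  fi
  helm lint "$chart" -f "$values"
  helm template "$name" "$chart" -f "$values" > /dev/null
  helm install "$name" "$chart" -f "$values" --dry-run > /dev/null
  echo "${name}: OK"
done
```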
The pipeline is configured in `.github/workflows/release-charts.yml` with two main jobs:
Test job:
- Runs on Ubuntu latest with Minikube
- Uses Helm 3.x
- Tests against stable Kubernetes version
- Retains test logs for 30 days
- Validates template rendering and dry-run installations
Release job:
- Triggers only on pushes to main branch (after PR merge)
- Performs version validation before release
- Uses chart-releaser to publish to GitHub Pages
- Creates GitHub issues if version validation fails
- Requires specific permissions for releases
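The trigger and permission wiring for such a workflow typically looks roughly like the outline below. This is an illustrative sketch only, not the actual contents of release-charts.yml:

```yaml
# Illustrative outline only
name: Release Charts
on:
  pull_request:
    paths:
      - "charts/**"        # test job: validate charts changed in the PR
  push:
    branches: [main]
    paths:
      - "charts/**"        # release job: publish after merge
permissions:
  contents: write          # needed to create releases and update the Pages index
```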
For pipeline modifications, ensure you understand the security implications and test thoroughly in a fork first.