
Commit 52b2e04

[fix]complete the Readme
1 parent 8343813 commit 52b2e04

8 files changed: +210 −203 lines changed

test/.gitignore

Lines changed: 1 addition & 0 deletions
```diff
@@ -1,6 +1,7 @@
 reports/
 dataset/
 logs/
+result_outputs/
 .cache/
 backup/
 $null
```

test/README.md

Lines changed: 106 additions & 99 deletions
Updated content of the changed sections:
# UCM Pytest Testing Framework

[English](README.md) | [简体中文](README_zh.md)

A unified cache management testing framework based on pytest, supporting multi-level testing, flexible tagging, performance data collection, and Allure report generation.

## Framework Features

- [x] 🏗️ **Multi-Level Testing**: UnitTest(0) → Smoke(1) → Feature(2) → E2E(3)
- [x] 🏷️ **Flexible Tagging**: Supports feature tags and custom tag addition
- [x] 📊 **Data Collection**: Integrated InfluxDB performance data push
- [x] 📋 **Beautiful Reports**: Allure test report integration, supporting static HTML and dynamic service modes
- [x] 🔧 **Configuration Management**: Flexible YAML-based configuration system
- [x] 🚀 **Automation**: Supports parallel test execution and automatic cleanup

## Test Level Definitions

| Level | Name     | Description      | Execution Timing            |
|-------|----------|------------------|-----------------------------|
| 0     | UnitTest | Unit tests       | On every code commit        |
| 1     | Smoke    | Smoke tests      | Build verification          |
| 2     | Feature  | Feature tests    | When features are completed |
| 3     | E2E      | End-to-end tests | Before version release      |

## Directory Structure

```
test/
├── config.yaml               # Test framework configuration file
├── conftest.py               # pytest configuration and fixtures, main entry point
├── pytest.ini                # pytest markers and basic configuration
├── requirements.txt          # Dependency package list
├── common/                   # Common utility library
│   ├── __init__.py
│   ├── config_utils.py       # Configuration file reading utilities
│   ├── influxdb_utils.py     # InfluxDB write utilities
│   └── allure_utils.py       # Allure report utilities
├── suites/                   # Test case directory
│   ├── UnitTest/             # Unit tests (stage 0)
│   ├── Smoke/                # Smoke tests (stage 1)
│   ├── Feature/              # Feature tests (stage 2)
│   ├── E2E/                  # End-to-end tests (stage 3)
│   └── test_demo_function.py # Example test case
├── reports/                  # Test report directory
└── logs/                     # Log directory
```

## Quick Start

### 1. Environment Preparation
```bash
# Install dependencies
pip install -r requirements.txt

# Ensure the Allure CLI is installed (for report generation)
# Download from: https://github.com/allure-framework/allure2/releases
```

### 2. Configuration File
The main configuration file is `config.yaml`; it contains the following configuration items:
- **reports**: Report generation configuration (HTML/Allure)
- **log**: Logging configuration
- **influxdb**: Performance data push configuration

### 3. Running Tests
```bash
# Run all tests
pytest

# Run tests of a specific level
pytest --stage=1   # Run smoke tests
pytest --stage=2+  # Run feature tests and end-to-end tests

# Run tests with specific tags
pytest --feature=performance  # Run performance-related tests
pytest --platform=gpu         # Run GPU platform tests

pytest --platform=all         # Run tests for all platforms ("all" selects every value of this tag)

# Combined filtering
pytest --stage=1 --feature=performance,accuracy  # Performance and accuracy tests within the smoke level
```

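The feature list above also mentions parallel execution; a minimal invocation sketch, assuming the pytest-xdist plugin is installed (this is not confirmed by the README itself):

```bash
# Run the smoke level on several workers in parallel (requires pytest-xdist)
pytest --stage=1 -n auto
```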

...

```python
        test_data = config_instance.get_config("test_data")

        # Act & Assert
        with allure.step("Perform GPU calculation"):
            result = perform_gpu_calculation(test_data)
            assert result.is_valid


# Example: importing parameters from the YAML configuration
from common.config_utils import config_utils as config_instance

@pytest.mark.feature("config")
def test_llm_config():
    # Read a parameter block by key
    llm_config = config_instance.get_config("llm_connection")
    assert llm_config["type"] == "openai"
    # Read nested parameters
    assert config_instance.get_nested_config("llm_connection.model") == "gpt-3.5-turbo"
    assert config_instance.get_nested_config("llm_connection.models", "gpt-3.5-turbo") == "gpt-3.5-turbo"
```
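
The assertions above read values from `config.yaml`; a plausible `llm_connection` snippet that would satisfy them, with the key layout inferred from the test rather than taken from the repository, and assuming the second argument of `get_nested_config` is a fallback default:

```yaml
llm_connection:
  type: "openai"
  model: "gpt-3.5-turbo"
```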

### Tag Usage Guidelines

#### 1. Level Tags (Required)
```python
@pytest.mark.stage(0)  # Unit test
@pytest.mark.stage(1)  # Smoke test
@pytest.mark.stage(2)  # Feature test
@pytest.mark.stage(3)  # End-to-end test
```

#### 2. Feature Tags (Recommended)
```python
@pytest.mark.feature("performance")  # Performance test
@pytest.mark.feature("accuracy")     # Accuracy test
@pytest.mark.feature("memory")       # Memory test
```

#### 3. Platform Tags (Optional)
```python
@pytest.mark.platform("gpu")  # GPU platform test
@pytest.mark.platform("npu")  # NPU platform test
@pytest.mark.platform("cpu")  # CPU platform test
```

#### 4. Reliability Tags (Optional)
```python
@pytest.mark.reliability("high")    # High reliability test
@pytest.mark.reliability("medium")  # Medium reliability test
@pytest.mark.reliability("low")     # Low reliability test
```
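
Markers can be stacked on a single test and then selected with the command-line options from the Running Tests section; a small sketch (the test name and body are illustrative):

```python
import pytest


@pytest.mark.stage(2)
@pytest.mark.feature("performance")
@pytest.mark.platform("gpu")
def test_cache_hit_latency():
    # Placeholder body; a real test would measure latency and assert on it
    assert True
```

Such a test would be selected by, for example, `pytest --stage=2 --feature=performance --platform=gpu`.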

## Allure Report Integration

### 1. Basic Usage
Refer to the official Allure documentation for the full set of markers; only the most commonly used ones are shown here. For more details, see test_demo_function.py.
```python
import allure
import pytest


@pytest.mark.feature("allure1")
@allure.feature('test_success')
def test_success():
    """this test succeeds"""
    assert True


@allure.feature('test_failure')
@pytest.mark.feature("allure1")
def test_failure():
    """this test fails"""
    assert False
```
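
Beyond feature labels, steps and attachments from the standard allure-pytest API help structure the report; a small sketch (the step texts and the attachment are illustrative):

```python
import allure
import pytest


@pytest.mark.feature("allure1")
@allure.feature('test_with_steps')
def test_with_steps():
    with allure.step("Prepare input data"):
        data = [1, 2, 3]
    with allure.step("Verify the result"):
        assert sum(data) == 6
    # Attach plain text to the Allure report for later inspection
    allure.attach("sum=6", name="computation result",
                  attachment_type=allure.attachment_type.TEXT)
```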

### 2. Report Configuration
Configure Allure reports in `config.yaml`:
```yaml
reports:
  base_dir: "reports"
  use_timestamp: true           # Whether to use a timestamp as the report folder name
  directory_prefix: "pytest"
  html:                         # pytest-html
    enabled: false
    filename: "report.html"
    title: "UCM Pytest Test Report"
  allure:
    enabled: false              # Whether to collect Allure results (JSON; requires the Allure pytest plugin)
    directory: "allure-results"
    html_enable: true           # Whether to generate HTML reports (requires the Allure CLI)
    serve_mode: true            # Whether to automatically start the Allure service
    serve_host: "localhost"
    serve_port: 8081
```
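
If `serve_mode` is disabled, the collected results can still be turned into a report manually with the Allure CLI; a sketch, where `<run-dir>` stands for the timestamped report directory created under `reports/`:

```bash
# Build a static HTML report from the collected results
allure generate reports/<run-dir>/allure-results -o reports/<run-dir>/allure-html --clean

# Or open an interactive report server
allure serve reports/<run-dir>/allure-results
```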

### 3. Report Viewing
- **Static HTML Mode**: Automatically generates a static HTML report after the test run
- **Dynamic Service Mode**: Starts an Allure server that provides an interactive report interface

## Performance Data Collection

### InfluxDB Configuration
```yaml
influxdb:
  url: ""
  token: ""    # Can also be read from the INFLUXDB_TOKEN environment variable
  org: "influxdb"
  bucket: "ucm"
  offline_file: ".cache/influxdb_offline.jsonl"
```
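
Since the `token` field is left empty here, supplying it through the environment keeps credentials out of version control; the value below is a placeholder:

```bash
export INFLUXDB_TOKEN="<your-influxdb-api-token>"
pytest --stage=1
```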

### InfluxDB Data Push
```python
import pytest

from common.influxdb_utils import influxdb_utils as influxdb_instance


@pytest.mark.feature("influxdb")
def test_influxdb_push():
    influxdb_instance.push_metric("cpu_usage", "value", 3.0, {"host": "server1"})
```
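
The same `push_metric` call can record timing data gathered inside a test; a sketch reusing the signature shown above (measurement, field, value, tags), with an illustrative workload:

```python
import time

import pytest

from common.influxdb_utils import influxdb_utils as influxdb_instance


@pytest.mark.feature("influxdb")
def test_operation_duration_metric():
    start = time.time()
    result = sum(range(1_000_000))  # stand-in for the operation under test
    assert result > 0
    # Push the elapsed time, tagged with the test name
    influxdb_instance.push_metric("operation_duration", "seconds",
                                  time.time() - start,
                                  {"test_name": "test_operation_duration_metric"})
```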

## Extension and Customization

### Adding New Tags
1. Add new tag definitions in the `markers` section of `pytest.ini` (see the sketch below)
2. Keep the lines `markers =` and `# end of markers` unchanged
3. Re-run the tests to use the new tags
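
A sketch of what the `markers` block in `pytest.ini` could look like after adding a custom `security` tag; the pre-existing entries are illustrative, and only the `markers =` and `# end of markers` lines are prescribed by the framework:

```ini
[pytest]
markers =
    stage(level): test level (0=UnitTest, 1=Smoke, 2=Feature, 3=E2E)
    feature(name): feature tag
    platform(name): platform tag
    reliability(level): reliability tag
    security(name): newly added custom tag for security-related tests
# end of markers
```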

### Custom Configuration
Customize by modifying `config.yaml`:
- Report format and storage location
- Log level and output format
- InfluxDB connection parameters
- Custom parameters (see the sketch below for reading them in tests)
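
A sketch of wiring in a custom parameter: add a section to `config.yaml` and read it with the `config_utils` helpers shown earlier. The `test_data.timeout` key and its value are illustrative, and the second argument of `get_nested_config` is assumed to be a fallback default:

```yaml
# Hypothetical addition to config.yaml
test_data:
  timeout: 30
```

```python
from common.config_utils import config_utils as config_instance

# Read the nested value, falling back to 30 if the key is absent
timeout = config_instance.get_nested_config("test_data.timeout", 30)
```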
