# UCM Pytest Testing Framework

A unified cache management (UCM) testing framework based on pytest, supporting multi-level testing, flexible marking, performance data collection, and Allure report generation.

## Framework Features

- [x] 🏗️ **Multi-level Testing**: UnitTest(0) → Smoke(1) → Feature(2) → E2E(3)
- [x] 🏷️ **Flexible Marking**: Feature tags, platform tags, and reliability tags
- [x] 📊 **Data Collection**: Integrated InfluxDB performance data push
- [x] 📋 **Beautiful Reports**: Allure report integration with static HTML and dynamic server modes
- [x] 🔧 **Configuration Management**: Flexible YAML-based configuration system
- [x] 🚀 **Automation**: Parallel test execution and automatic cleanup

## Test Level Definitions

| Level | Name     | Description      | Execution Timing            |
|-------|----------|------------------|-----------------------------|
| 0     | UnitTest | Unit tests       | Every code commit           |
| 1     | Smoke    | Smoke tests      | Build verification          |
| 2     | Feature  | Feature tests    | When a feature is completed |
| 3     | E2E      | End-to-end tests | Before version release      |

## Directory Structure

```
test/
├── config.yaml               # Test framework configuration file
├── conftest.py               # pytest configuration and fixtures, main program entry
├── pytest.ini                # pytest markers and basic configuration
├── requirements.txt          # Dependency list
├── common/                   # Common utility library
│   ├── __init__.py
│   ├── config_utils.py       # Configuration file reading utilities
│   ├── influxdb_utils.py     # InfluxDB writing utilities
│   └── allure_utils.py       # Allure reporting utilities
├── suites/                   # Test case directory
│   ├── UnitTest/             # Unit tests (stage 0)
│   ├── Smoke/                # Smoke tests (stage 1)
│   ├── Feature/              # Feature tests (stage 2)
│   ├── E2E/                  # End-to-end tests (stage 3)
│   └── test_demo_function.py # Example test cases
├── reports/                  # Test report directory
└── logs/                     # Test log directory
```

## Quick Start

### 1. Environment Setup
```bash
# Install dependencies
pip install -r requirements.txt

# Ensure the Allure CLI is installed (needed for report generation)
# Download from: https://github.com/allure-framework/allure2/releases
```

### 2. Configuration File
The main configuration file is `config.yaml`. It contains the following sections (sketched below):
- **reports**: Report generation configuration (HTML/Allure)
- **log**: Logging configuration
- **influxdb**: Performance data push configuration
- **llm_connection**: LLM connection configuration
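
A minimal sketch of what `config.yaml` might look like. The Allure keys match the report configuration shown later in this README; the other key names and values are placeholders, not the framework's actual schema:

```yaml
reports:
  allure:
    enabled: true
    directory: "allure-results"

log:
  level: "INFO"                 # placeholder; actual key names may differ

influxdb:
  url: "http://localhost:8086"  # placeholder connection parameters
  token: "<token>"
  org: "<org>"
  bucket: "<bucket>"

llm_connection:
  endpoint: "<llm-service-url>" # placeholder
```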

### 3. Running Tests
```bash
# Run all tests
pytest

# Run tests at a specific level
pytest --stage=1     # Run smoke tests only
pytest --stage=2+    # Run feature tests and above (E2E)

# Run tests with specific tags
pytest --feature=performance   # Run performance-related tests
pytest --platform=gpu          # Run GPU platform tests
pytest --reliability=high      # Run high-reliability tests

# Combined filtering
pytest --stage=1 --feature=performance,accuracy   # Smoke-level performance and accuracy tests
```
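
These options are custom CLI flags registered in `conftest.py`. The sketch below shows one way such stage filtering could be implemented with standard pytest hooks; it is a hypothetical illustration, not the framework's actual code:

```python
# conftest.py (illustrative sketch, not the framework's actual implementation)
import pytest

def pytest_addoption(parser):
    parser.addoption("--stage", action="store", default=None,
                     help="Test level filter, e.g. '1', '2,3', or '1+'")

def pytest_collection_modifyitems(config, items):
    spec = config.getoption("--stage")
    if spec is None:
        return
    # "1+" means stage 1 and above; "2,3" means exactly stages 2 and 3
    if spec.endswith("+"):
        allowed = set(range(int(spec[:-1]), 4))  # 3 is the highest level (E2E)
    else:
        allowed = {int(s) for s in spec.split(",")}
    skip = pytest.mark.skip(reason=f"stage not in {sorted(allowed)}")
    for item in items:
        marker = item.get_closest_marker("stage")
        stage = marker.args[0] if marker and marker.args else None
        if stage not in allowed:
            item.add_marker(skip)
```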

## Test Case Standards

### Basic Structure
```python
import pytest
import allure
from common.config_utils import config_utils as config_instance
from common.influxdb_utils import push_to_influx


class TestExample:
    """Test example class"""

    @pytest.mark.stage(2)
    @pytest.mark.feature("performance")
    @pytest.mark.platform("gpu")
    def test_gpu_performance(self):
        """Test GPU performance"""
        # Arrange
        test_data = config_instance.get_config("test_data")

        # Act & Assert (perform_gpu_calculation is a placeholder for the code under test)
        with allure.step("Execute GPU computation"):
            result = perform_gpu_calculation(test_data)
            assert result.is_valid

        # Collect performance data
        push_to_influx("gpu_compute_time", result.duration, {
            "test_name": "test_gpu_performance",
            "platform": "gpu"
        })
```

### Marking Usage Guidelines

#### 1. Level Markers (Required)
```python
@pytest.mark.stage(0)   # Unit tests
@pytest.mark.stage(1)   # Smoke tests
@pytest.mark.stage(2)   # Feature tests
@pytest.mark.stage(3)   # End-to-end tests
```

#### 2. Feature Markers (Recommended)
```python
@pytest.mark.feature("performance")   # Performance tests
@pytest.mark.feature("accuracy")      # Accuracy tests
@pytest.mark.feature("memory")        # Memory tests
```

#### 3. Platform Markers (Optional)
```python
@pytest.mark.platform("gpu")   # GPU platform tests
@pytest.mark.platform("npu")   # NPU platform tests
@pytest.mark.platform("cpu")   # CPU platform tests
# Unmarked tests run on all platforms
```

#### 4. Reliability Markers (Optional)
```python
@pytest.mark.reliability("high")     # High-reliability tests
@pytest.mark.reliability("medium")   # Medium-reliability tests
@pytest.mark.reliability("low")      # Low-reliability tests
```
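
Markers can be combined freely on a single test, which is what makes the combined CLI filtering shown in Quick Start work. For example (the test name here is illustrative):

```python
import pytest

@pytest.mark.stage(1)
@pytest.mark.feature("performance")
@pytest.mark.platform("gpu")
@pytest.mark.reliability("high")
def test_gpu_cache_throughput():
    """Selected by e.g. `pytest --stage=1 --feature=performance --platform=gpu`"""
    ...
```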

## Allure Report Integration

### 1. Basic Usage
```python
import allure

@allure.feature('User Authentication')
@allure.story('Login Function')
def test_user_login():
    """Test user login functionality"""
    # login_page and dashboard_page are placeholders for page objects
    with allure.step("Enter username and password"):
        login_page.enter_credentials("user", "pass")

    with allure.step("Click login button"):
        login_page.click_login()

    with allure.step("Verify successful login"):
        assert dashboard_page.is_displayed()

    # Add an attachment
    allure.attach("Screenshot data", name="Login Screenshot",
                  attachment_type=allure.attachment_type.PNG)
```

### 2. Report Configuration
Configure Allure reports in `config.yaml`:
```yaml
reports:
  allure:
    enabled: true
    html_enable: true
    serve_mode: true          # Use dynamic server mode
    serve_host: "localhost"
    serve_port: 8081
    directory: "allure-results"
```

### 3. Report Viewing
- **Static HTML Mode**: Automatically generates a static HTML report after the test run completes
- **Dynamic Server Mode**: Starts an Allure server that provides an interactive report interface (see the commands below)
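
If you want to work with the raw results directly instead of letting the framework generate the report, the standard Allure CLI commands cover both modes (assuming results are written to `allure-results`, as configured above):

```bash
# Static HTML mode: generate a report into ./allure-report
allure generate allure-results -o allure-report --clean

# Dynamic server mode: build and serve an interactive report
allure serve allure-results
```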

## Performance Data Collection

### InfluxDB Integration
```python
import time

from common.influxdb_utils import push_to_influx

# Collect performance data in tests
def test_performance_metrics():
    start_time = time.time()

    # Execute test logic (perform_operation is a placeholder for the code under test)
    result = perform_operation()

    # Push performance data to InfluxDB
    push_to_influx("operation_duration", time.time() - start_time, {
        "test_name": "test_performance_metrics",
        "operation_type": "calculation",
        "success": str(result.success)
    })
```
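
For reference, here is a minimal sketch of what `push_to_influx` could look like on top of the official `influxdb-client` package. The real helper in `common/influxdb_utils.py` may differ, and the config keys used below are assumptions:

```python
# Hypothetical sketch of common/influxdb_utils.py; the shipped helper may differ.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

from common.config_utils import config_utils as config_instance

def push_to_influx(measurement: str, value: float, tags: dict) -> None:
    """Write a single numeric data point, tagged with test metadata."""
    cfg = config_instance.get_config("influxdb")  # assumed config section and keys
    point = Point(measurement).field("value", value)
    for key, tag_value in tags.items():
        point = point.tag(key, tag_value)
    with InfluxDBClient(url=cfg["url"], token=cfg["token"], org=cfg["org"]) as client:
        client.write_api(write_options=SYNCHRONOUS).write(
            bucket=cfg["bucket"], record=point)
```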

## Extensions and Customization

### Adding New Markers
1. Add the new marker definition to the `markers` section of `pytest.ini` (see the sketch below)
2. Keep the `markers =` and `# end of markers` lines unchanged
3. Re-run the tests to use the new marker
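
A sketch of what that section might look like. The existing marker lines are inferred from the markers used in this README, and `stress` is a hypothetical new entry:

```ini
# pytest.ini (illustrative excerpt)
[pytest]
markers =
    stage(level): test level, 0=UnitTest 1=Smoke 2=Feature 3=E2E
    feature(name): feature tag, e.g. performance, accuracy, memory
    platform(name): platform tag, e.g. gpu, npu, cpu
    reliability(level): reliability tag, e.g. high, medium, low
    stress: hypothetical new marker added by this step
    # end of markers
```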

### Custom Configuration
The following can be customized through `config.yaml` (a read example follows):
- Report format and storage location
- Log level and output format
- InfluxDB connection parameters
- LLM service configuration
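
Tests and utilities read these sections through `config_utils`, as in the earlier examples. A small usage sketch, assuming `get_config` takes a top-level key as shown in the Basic Structure example:

```python
from common.config_utils import config_utils as config_instance

# Read whole sections of config.yaml by their top-level key
influxdb_cfg = config_instance.get_config("influxdb")   # e.g. connection parameters
report_cfg = config_instance.get_config("reports")      # e.g. Allure settings
```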