This document describes the testing approach, methodology, and instructions for the Ticketly ticketing microservice.
The service follows a comprehensive testing strategy to ensure reliable and robust code. Our approach is based on the following principles:
- Test-Driven Development (TDD): We encourage writing tests before implementation
- Comprehensive Coverage: Aim for high code coverage (target: >80%)
- Isolation: Services are tested in isolation using mocks for external dependencies
- Realistic Scenarios: Tests are designed to model real-world use cases
- Continuous Testing: Tests run automatically in our CI/CD pipeline
The project implements several testing categories:
- Unit Tests: Testing individual components in isolation
- Integration Tests: Testing interaction between components
- Mock Tests: Using mock implementations to simulate external dependencies
- Database Tests: Testing database operations with in-memory SQLite
- API Tests: Testing HTTP endpoints
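Concretely, a unit test in this style usually exercises a small pure function over a table of cases. The sketch below uses a hypothetical `validateOrder` helper; the function and its rules are illustrative, not taken from the codebase:

```go
package main

import (
	"errors"
	"fmt"
)

// validateOrder is a hypothetical pure helper of the kind unit tests target:
// no I/O, deterministic, and easy to cover at the boundaries.
func validateOrder(ticketID string, qty int) error {
	if ticketID == "" {
		return errors.New("missing ticket ID")
	}
	if qty <= 0 || qty > 10 {
		return errors.New("quantity must be between 1 and 10")
	}
	return nil
}

func main() {
	// In a real test these cases would be table-driven with t.Run.
	cases := []struct {
		id  string
		qty int
		ok  bool
	}{
		{"tkt-1", 2, true},
		{"", 2, false},
		{"tkt-1", 0, false},
		{"tkt-1", 11, false},
	}
	for _, c := range cases {
		got := validateOrder(c.id, c.qty) == nil
		if got != c.ok {
			panic(fmt.Sprintf("validateOrder(%q, %d): ok=%v, want %v", c.id, c.qty, got, c.ok))
		}
	}
	fmt.Println("all cases passed") // prints "all cases passed"
}
```

Table-driven cases like these keep each boundary condition visible at a glance and make new edge cases a one-line addition.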
To run the tests, you'll need:
- Go 1.18 or later
- Access to project dependencies (either internet access for downloading or vendor directory)
- PostgreSQL client (for integration tests)
- Redis client (for integration tests)
- Docker (optional, for containerized testing)
For convenience, use the provided test script:

```shell
# Make script executable if needed
chmod +x run_tests.sh

# Run all tests with coverage reporting
./run_tests.sh
```

The test script provides verbose output with real-time test status:
- ✓ PASSED: [Test Name] - For successful tests
- ✗ FAILED: [Test Name] - For failed tests
At the end, you'll see a summary showing:
- Total number of tests
- Number of passed tests
- Number of failed tests (if any)
To run all tests in the project:

```shell
go test ./...
```

To run tests for specific packages:

```shell
# For order service tests
go test ./internal/order/...

# For ticket service tests
go test ./internal/tickets/...
```

To run a specific test file (note that in file-list mode Go compiles only the files you name, so the package's other source files must be listed too; when in doubt, prefer `-run` with a test name pattern):

```shell
go test ./internal/order/service_test.go
```

To run tests and get coverage information:
```shell
go test ./... -coverprofile=coverage/coverage.out
go tool cover -html=coverage/coverage.out -o coverage/coverage.html
```

Then open coverage/coverage.html in a browser to see coverage details.
You can run tests in a containerized environment:

```shell
# Build and run tests in Docker
docker build -t ms-ticketing-tests -f Dockerfile.test .
docker run --rm ms-ticketing-tests
```

The test suite is organized as follows:
- `internal/order/service_test.go` - Tests for the order service functionality
- `internal/order/stripe_test.go` - Tests for the payment processing functionality
- `internal/order/db/db_test.go` - Tests for the order database layer
- `internal/tickets/service/service_test.go` - Tests for the ticket service functionality
- `internal/tickets/service/ticket_count_test.go` - Tests for the ticket counting service
- `internal/tickets/db/db_test.go` - Tests for the ticket database layer
- `internal/tickets/db/ticket_count_test.go` - Tests for the ticket count database layer
The project leverages the following testing libraries:
- Standard Go Testing: Using the built-in `testing` package
- Testify:
  - `github.com/stretchr/testify/assert` for assertions
  - `github.com/stretchr/testify/require` for fatal assertions
  - `github.com/stretchr/testify/mock` for mocking
- SQLite: For in-memory database testing
- UUID: `github.com/google/uuid` for generating test identifiers
- Custom Mocks: Handwritten mocks for various interfaces
The project uses interface-based design to facilitate mocking:
- Service Dependencies: All service dependencies are defined as interfaces
- Mock Implementations: Mock implementations are created using testify/mock
- Behavior Simulation: Mocks are configured to simulate specific behaviors
- Verification: Mock expectations verify that dependencies are called correctly
Example of a mock implementation:

```go
// MockDBLayer implements the OrderDBLayer interface for testing
type MockDBLayer struct {
	mock.Mock
}

func (m *MockDBLayer) GetOrderByID(id string) (*models.Order, error) {
	args := m.Called(id)
	if args.Get(0) == nil {
		return nil, args.Error(1)
	}
	return args.Get(0).(*models.Order), args.Error(1)
}
```

Tests can be configured using environment variables:

- `KAFKA_MOCK_MODE=true` - Use Kafka mocks instead of real connections
- `TEST_DB_DSN` - Database connection string for integration tests
- `TEST_REDIS_ADDR` - Redis address for integration tests
Tests run automatically in our CI/CD pipeline:
- Pull Request Checks: All tests run on PR creation and updates
- Main Branch Validation: Tests must pass before merging to main
- Coverage Reports: Code coverage is tracked and reported
- Performance Benchmarks: Critical paths are benchmarked for performance regression
When adding new functionality, please follow these guidelines:
- Write unit tests for all new functions
- Use mocks for external dependencies
- Follow the existing naming pattern: `TestFunctionName`
- For database tests, use the in-memory SQLite setup
- Test error cases, not just happy paths
- Add integration tests for API endpoints
- Include benchmarks for performance-critical code
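For the benchmark guideline, note that the standard library's `testing.Benchmark` lets you run a benchmark function outside `go test`, which is handy for quick experiments. `buildOrderKey` is a made-up stand-in for a performance-critical helper, not a function from the codebase:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// buildOrderKey is a hypothetical hot-path helper: it assembles a cache key
// without intermediate allocations by using strings.Builder.
func buildOrderKey(userID, ticketID string) string {
	var sb strings.Builder
	sb.WriteString("order:")
	sb.WriteString(userID)
	sb.WriteByte(':')
	sb.WriteString(ticketID)
	return sb.String()
}

func main() {
	// testing.Benchmark runs the closure with increasing b.N until the
	// timing is stable, just like `go test -bench` would.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			buildOrderKey("user-1", "tkt-9")
		}
	})
	fmt.Println(res)
}
```

In the test suite itself the same body would live in a `BenchmarkBuildOrderKey(b *testing.B)` function and run via `go test -bench=.`.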
Example test structure:
```go
func TestCreateOrder_Success(t *testing.T) {
	// Setup mocks
	mockDB := new(MockDBLayer)
	mockLock := new(MockRedisLock)

	// Create service with mocks
	svc := &order.OrderService{
		DB:   mockDB,
		Lock: mockLock,
	}

	// Set expectations
	mockDB.On("CreateOrder", mock.Anything).Return(nil)
	mockLock.On("Lock", mock.Anything).Return(true, nil)

	// Execute test
	result, err := svc.CreateOrder(testOrder)

	// Assert results
	assert.NoError(t, err)
	assert.Equal(t, expectedResult, result)
	mockDB.AssertExpectations(t)
	mockLock.AssertExpectations(t)
}
```
If you run into problems, check these common issues:

- Missing dependencies: if you see errors about missing packages, fetch them:

  ```shell
  go get -u github.com/stretchr/testify
  go get -u github.com/google/uuid
  ```

- Database setup issues: for database tests, make sure the required tables are created in the test setup function.
- Test timeouts: some tests may have implicit timeouts; use the `-timeout` flag to extend them:

  ```shell
  go test ./... -timeout 5m
  ```

- Redis connection errors: for tests involving Redis locking, make sure the mock is properly configured.
- Flaky tests: if you encounter intermittent test failures:
  - Check for goroutine race conditions
  - Look for time-dependent logic
  - Ensure proper cleanup between tests
The project aims for the following coverage:
- Unit Test Coverage: 80%+ for business logic
- Integration Test Coverage: Key API endpoints and workflows
- Edge Cases: Error handling, boundary conditions, invalid inputs
Planned improvements to the test suite:
- Implement property-based testing for complex algorithms
- Add end-to-end testing with real database instances
- Improve test parallelization for faster execution
- Add fuzzing tests for request handlers
For more assistance, please contact the development team.