## Full Prompt
You are a testing expert who writes tests that catch real bugs. Generate comprehensive unit tests for the following code. Each test should exist because it protects against a specific failure mode, not just to increase coverage metrics.
**Code to test:**
{code}
**Testing context:**
- Language/framework: {language}
- Test framework: {test_framework}
- Existing test patterns: {test_patterns}
- Dependencies to mock: {dependencies}
Generate tests covering ALL of the following categories:
### 1. Happy Path Tests
- Test each public function/method with valid, typical inputs
- Test the expected return value AND any side effects
- Test with representative real-world data
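For instance, happy-path tests for a hypothetical `slugify` helper (name and behavior assumed here purely for illustration) might look like:

```python
import re

def slugify(title):
    """Return a URL-safe slug: lowercase, hyphen-separated, no punctuation."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_typical_title_returns_lowercase_hyphenated():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_real_world_title_keeps_digits():
    # Representative real-world input, not a toy string
    assert slugify("Top 10 Python Tips (2024)") == "top-10-python-tips-2024"
```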
### 2. Edge Cases
- Empty inputs (empty string, empty array, empty object)
- Single element inputs
- Boundary and extreme numeric values (0, -1, 1, MAX_INT, MIN_INT)
- Unicode, special characters, very long strings
- Null/nil/undefined/None for each nullable parameter
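As a sketch, edge-case tests for a hypothetical `clamp` helper (function and range semantics assumed for illustration) could cover boundaries, extremes, and a degenerate range:

```python
import sys

def clamp(value, low, high):
    """Return value constrained to the inclusive range [low, high]."""
    return max(low, min(value, high))

def test_clamp_below_range_returns_low():
    assert clamp(-1, 0, 10) == 0, "values below the range should clamp to low"

def test_clamp_at_exact_boundary_is_unchanged():
    assert clamp(0, 0, 10) == 0, "a value equal to low should pass through"

def test_clamp_maxsize_input_clamps_to_high():
    # MAX_INT-style extreme input
    assert clamp(sys.maxsize, 0, 10) == 10, "huge input should clamp to high"

def test_clamp_single_point_range_returns_that_point():
    assert clamp(5, 3, 3) == 3, "a degenerate [3, 3] range always yields 3"
```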
### 3. Error Handling
- Each documented error condition is actually triggered
- Invalid input types (for dynamically typed languages)
- Thrown exceptions have the correct type and message
- Error responses have the correct format and status codes
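For example, error-handling tests for a hypothetical `parse_port` validator (name and message format assumed for illustration) can pin down both the exception type and its message:

```python
def parse_port(value):
    """Parse a TCP port string; invalid input raises ValueError."""
    port = int(value)          # non-numeric input raises ValueError here
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_parse_port_out_of_range_raises_with_exact_message():
    try:
        parse_port("70000")
        assert False, "expected ValueError for out-of-range port"
    except ValueError as exc:
        assert str(exc) == "port out of range: 70000", "message should name the bad value"

def test_parse_port_non_numeric_raises_value_error():
    try:
        parse_port("http")
        assert False, "expected ValueError for non-numeric input"
    except ValueError:
        pass  # correct type raised
```

With pytest, the same checks read more compactly as `with pytest.raises(ValueError, match=...)`.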
### 4. State Transitions
- Test state before and after operations
- Test idempotency (calling the same operation twice)
- Test operation ordering dependencies
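A sketch of what this looks like in practice, using a hypothetical `JobQueue` class (invented here to illustrate before/after state checks and idempotency):

```python
class JobQueue:
    def __init__(self):
        self.pending = []
        self.done = set()

    def push(self, job_id):
        if job_id not in self.pending and job_id not in self.done:
            self.pending.append(job_id)

    def ack(self, job_id):
        """Mark a job done; acking twice must not corrupt state."""
        if job_id in self.pending:
            self.pending.remove(job_id)
        self.done.add(job_id)

def test_ack_transitions_job_from_pending_to_done():
    q = JobQueue()
    q.push("a")
    assert q.pending == ["a"] and q.done == set()   # state before
    q.ack("a")
    assert q.pending == [] and q.done == {"a"}      # state after

def test_ack_twice_is_idempotent():
    q = JobQueue()
    q.push("a")
    q.ack("a")
    q.ack("a")  # second call must be a no-op
    assert q.pending == [] and q.done == {"a"}
```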
### 5. Integration Points
- Mock each dependency and verify correct interaction
- Test behavior when dependencies fail
- Verify correct arguments passed to dependencies
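For example, with a hypothetical `notify_user` service (invented for illustration) and Python's standard `unittest.mock`, dependency-interaction tests might look like:

```python
from unittest.mock import Mock

def notify_user(user_id, repo, mailer):
    """Fetch a user via the repository, then send mail via the mailer."""
    user = repo.get_user(user_id)
    if user is None:
        raise LookupError(f"unknown user: {user_id}")
    mailer.send(to=user["email"], subject="Hello")
    return True

def test_notify_user_passes_correct_arguments_to_dependencies():
    repo = Mock()
    repo.get_user.return_value = {"email": "a@example.com"}
    mailer = Mock()

    assert notify_user(42, repo, mailer) is True
    repo.get_user.assert_called_once_with(42)
    mailer.send.assert_called_once_with(to="a@example.com", subject="Hello")

def test_notify_user_raises_when_repository_returns_nothing():
    repo = Mock()
    repo.get_user.return_value = None  # simulate a failing dependency
    try:
        notify_user(99, repo, Mock())
        assert False, "expected LookupError when the user is missing"
    except LookupError:
        pass
```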
### 6. Security-Related Tests
- SQL injection payloads in string inputs
- XSS payloads in string inputs
- Path traversal attempts in file path inputs
- Oversized inputs (DoS protection)
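As an illustration (assuming a Python codebase on a POSIX filesystem; the `safe_join` helper is hypothetical), path-traversal and XSS checks might be sketched as:

```python
import html
import os

def safe_join(base, user_path):
    """Join a user-supplied path under base, rejecting traversal attempts."""
    full = os.path.normpath(os.path.join(base, user_path))
    if not full.startswith(base + os.sep):
        raise ValueError("path traversal detected")
    return full

def test_safe_join_blocks_dot_dot_traversal():
    try:
        safe_join("/srv/uploads", "../../etc/passwd")
        assert False, "expected ValueError for traversal payload"
    except ValueError:
        pass

def test_safe_join_allows_plain_filenames():
    assert safe_join("/srv/uploads", "img.png") == "/srv/uploads/img.png"

def test_escape_html_neutralizes_script_payload():
    payload = "<script>alert(1)</script>"
    assert "<script>" not in html.escape(payload), "raw tags must not survive escaping"
```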
For each test, provide:
- **Test name:** descriptive, following the `test_[function]_[scenario]_[expected]` convention
- **Arrange:** setup with clear variable names
- **Act:** the single operation being tested
- **Assert:** a specific assertion with a failure message
Output the complete test file, ready to run.
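Concretely, a single test in the requested shape might look like this (the `apply_discount` function is hypothetical, used only to show the naming convention and Arrange/Act/Assert structure):

```python
def apply_discount(price, percent):
    """Return price reduced by percent; percent must be within [0, 100]."""
    if not 0 <= percent <= 100:
        raise ValueError(f"percent out of range: {percent}")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_valid_percent_returns_reduced_price():
    # Arrange
    original_price = 200.0
    discount_percent = 25
    # Act
    discounted = apply_discount(original_price, discount_percent)
    # Assert
    assert discounted == 150.0, f"expected 150.0, got {discounted}"
```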
## Usage Tips
- Include type signatures and doc comments in your code for more accurate test generation
- List all external dependencies in {dependencies} so the AI creates proper mocks
- After generating, intentionally break the source code to verify the tests actually catch failures
- The most valuable tests are usually the edge-case and error-handling ones that developers tend to skip