# /tdd - Test-Driven Development Workflow

## Purpose

Start a strict test-driven development (TDD) workflow: write failing tests first, then implement code to make them pass. The command enforces the Red-Green-Refactor cycle with mandatory verification.
```
/tdd [feature or function description]
```

## Arguments

- `[feature description]` - What you want to build using TDD
- `[function name]` - A specific function to develop with TDD
## How It Works

The /tdd command enforces a strict 4-phase TDD workflow:
### Phase 1: Red - Write Failing Tests

1. Understand Requirements
   - What should the code do?
   - What are the inputs and outputs?
   - What edge cases exist?
   - What errors should be handled?

2. Write Tests First

   ```python
   import pytest

   def test_feature_does_expected_thing():
       """Test the main functionality."""
       result = feature("input")
       assert result == "expected"

   def test_feature_handles_edge_case():
       """Test edge case behavior."""
       result = feature("")
       assert result == "default"

   def test_feature_raises_on_invalid():
       """Test error handling."""
       with pytest.raises(ValueError):
           feature(None)
   ```

3. Run Tests (Expect Failure)

   ```sh
   pytest -v
   # Should FAIL - feature() doesn't exist yet
   ```

**Non-Negotiable Rule: NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST**
### Phase 2: Green - Make Tests Pass

1. Implement Minimal Code
   - Write just enough code to pass the tests
   - Don't add extra features
   - Don't optimize prematurely
   - Keep it simple

2. Run Tests (Expect Success)

   ```sh
   pytest -v
   # Should PASS - all tests green
   ```

3. Verify Every Claim
   - Don't say "tests should pass"
   - Run the actual command
   - Read the complete output
   - Verify it matches your claim
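For the Phase 1 tests above, "just enough code" might look like the sketch below. The hard-coded branches are deliberate: in strict TDD you write the simplest thing that makes the current tests pass, and generalize only when a new failing test forces you to (`feature` is the hypothetical function under test, not part of any library):

```python
def feature(value):
    """Hypothetical function under test - just enough to satisfy the Phase 1 tests."""
    if value is None:
        raise ValueError("value must not be None")  # satisfies the error-handling test
    if value == "":
        return "default"  # satisfies the edge-case test
    return "expected"  # satisfies the main-functionality test
```

When a later test demands real behavior (e.g. different outputs for different inputs), that failing test is the trigger to replace the hard-coded return.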
### Phase 3: Refactor

1. Improve Code Quality
   - Clean up the implementation
   - Remove duplication
   - Improve naming
   - Apply patterns

2. Run Tests (Ensure Still Passing)

   ```sh
   pytest -v
   # Should still PASS
   ```

3. Keep Tests Green
   - Never break tests during refactoring
   - If tests fail, revert and try again
   - Tests are your safety net
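A behavior-preserving refactor is one where the observable outputs never change, so existing tests keep passing. A tiny illustration (the `greet`/`farewell`/`normalize` names are hypothetical, not from this command's output):

```python
# Before: the cleanup logic is duplicated in both functions
def greet(name):
    return "Hello, " + name.strip().title()

def farewell(name):
    return "Goodbye, " + name.strip().title()

# After: duplication extracted into a helper - behavior is unchanged,
# so the same tests stay green
def normalize(name):
    return name.strip().title()

def greet(name):
    return "Hello, " + normalize(name)

def farewell(name):
    return "Goodbye, " + normalize(name)
```

Running the test suite after the extraction is what proves the refactor preserved behavior; skipping that run is how refactors silently break things.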
### Phase 4: Repeat

Add more test cases and repeat the cycle:
- Write next failing test
- Make it pass
- Refactor if needed
- Commit
## TDD Best Practices

- ✅ Write one test at a time
- ✅ Run tests after every change
- ✅ Keep red-green-refactor cycles short (minutes, not hours)
- ✅ Commit after each green phase
- ✅ Test behavior, not implementation
- ✅ Start with simplest test cases
- ❌ Write production code before tests
- ❌ Write multiple tests before implementing
- ❌ Skip running tests
- ❌ Keep code that makes tests pass without running them
- ❌ Use vague language like “should work” or “probably passes”
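"Test behavior, not implementation" means asserting on observable outputs rather than on how the function computes them. A sketch with a hypothetical `unique_words` function (not part of this command):

```python
def unique_words(text):
    """Return the distinct words in text, in first-seen order."""
    return list(dict.fromkeys(text.split()))

def test_returns_distinct_words_in_order():
    # Behavior test: asserts only on the output, so it survives any
    # internal rewrite (e.g. replacing dict.fromkeys with a manual loop)
    assert unique_words("a b a c") == ["a", "b", "c"]

# An implementation-coupled test would instead assert that dict.fromkeys
# was called - it breaks on every refactor even when behavior is correct.
```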
## Strict TDD Rules

Based on `.claude/skills/methodology/test-driven-development/SKILL.md`:
### Rule 1: No Production Code Without Failing Test

**WRONG:**

```
"Let me write the function first, then add tests"
"I'll implement it and test as I go"
```

**RIGHT:**

1. Write a test that calls `feature()`
2. Run the test - watch it FAIL
3. Write `feature()` to make the test PASS
4. Run the test - watch it PASS

### Rule 2: If You Already Wrote Code, Delete It

If you accidentally wrote production code before tests:

**WRONG:**

```
"I'll keep this code as reference while writing tests"
"Let me comment it out for now"
```

**RIGHT:**

1. Delete the code completely.
2. Write the test.
3. Rewrite the implementation.

### Rule 3: Verify Before Claiming
Based on `.claude/skills/methodology/verification-before-completion/SKILL.md`:
Before saying “tests pass”:
- Identify the exact command to run tests
- Execute it completely and freshly
- Read the complete output
- Verify output matches your claim
- Only then make the claim
Forbidden Language:
- ❌ “should work”
- ❌ “probably fixed”
- ❌ “seems to pass”
- ❌ “tests should pass”
Required Language:
- ✅ “Tests pass: [paste output]”
- ✅ “Verified by running: [command]”
## Testing Anti-Patterns to Avoid

Based on `.claude/skills/methodology/testing-anti-patterns/SKILL.md`:
### 1. Testing Mock Behavior

**WRONG:**

```python
def test_sends_email():
    mock_mailer = Mock()
    send_email(mock_mailer, "test@example.com")
    mock_mailer.send.assert_called_once()  # Testing the mock, not the code!
```

**RIGHT:**

```python
def test_sends_email():
    mailer = FakeMailer()  # Real (fake) implementation with observable state
    send_email(mailer, "test@example.com")
    assert mailer.sent_emails[0].to == "test@example.com"
```

### 2. Test-Only Methods in Production
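The `FakeMailer` referenced in the mock anti-pattern is assumed to be a small in-memory fake rather than a library class; a minimal sketch (the `send_email` production function is likewise hypothetical):

```python
from types import SimpleNamespace

class FakeMailer:
    """In-memory fake: records sent emails so tests can assert on real behavior."""
    def __init__(self):
        self.sent_emails = []

    def send(self, to, subject="", body=""):
        self.sent_emails.append(SimpleNamespace(to=to, subject=subject, body=body))

def send_email(mailer, to):
    """Hypothetical production function that delegates to the mailer."""
    mailer.send(to, subject="Welcome")
```

Because the fake implements the same interface as the real mailer and exposes its state, the test exercises production code end to end instead of asserting on mock bookkeeping.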
**WRONG:**

```python
class User:
    def __init__(self, name):
        self.name = name

    def set_id_for_testing(self, id):  # Only for tests!
        self.id = id
```

**RIGHT:**

```python
class User:
    def __init__(self, name, id=None):
        self.name = name
        self.id = id or generate_id()

# Test uses the real constructor
user = User("John", id="test-id-123")
```

### 3. Incomplete Mocks
**WRONG:**

```typescript
const mockDb = {
  findUser: vi.fn()
  // Missing other methods that will be called!
};
```

**RIGHT:**

```typescript
const mockDb: Database = {
  findUser: vi.fn(),
  saveUser: vi.fn(),
  deleteUser: vi.fn(),
  // All methods implemented
};
```

## Example TDD Session

```
/tdd "Create a password validator that checks strength"
```

## Output
## TDD Session: Password Validator
### Phase 1: RED - Write Failing Test
Created `tests/test_password_validator.py`:
```python
import pytest
from src.password_validator import validate_password

def test_validates_strong_password():
    """Test that strong password passes validation."""
    result = validate_password("StrongP@ss123")
    assert result is True

def test_rejects_short_password():
    """Test that short password fails."""
    with pytest.raises(ValueError, match="too short"):
        validate_password("weak")

def test_rejects_password_without_numbers():
    """Test that password without numbers fails."""
    with pytest.raises(ValueError, match="must contain number"):
        validate_password("NoNumbers!")
```

Running tests:
```sh
$ pytest tests/test_password_validator.py -v

test_password_validator.py::test_validates_strong_password FAILED
ModuleNotFoundError: No module named 'src.password_validator'
```

✅ Tests FAIL as expected (module doesn’t exist yet)
### Phase 2: GREEN - Make Tests Pass

Created `src/password_validator.py`:
```python
import re

MIN_LENGTH = 8

def validate_password(password: str) -> bool:
    """
    Validate password strength.

    Args:
        password: Password to validate

    Returns:
        True if password is strong

    Raises:
        ValueError: If password doesn't meet requirements
    """
    if len(password) < MIN_LENGTH:
        raise ValueError(f"Password too short (minimum {MIN_LENGTH})")

    if not re.search(r'\d', password):
        raise ValueError("Password must contain number")

    if not re.search(r'[A-Z]', password):
        raise ValueError("Password must contain uppercase letter")

    if not re.search(r'[!@#$%^&*]', password):
        raise ValueError("Password must contain special character")

    return True
```

Running tests:
```sh
$ pytest tests/test_password_validator.py -v

test_password_validator.py::test_validates_strong_password PASSED
test_password_validator.py::test_rejects_short_password PASSED
test_password_validator.py::test_rejects_password_without_numbers PASSED

====== 3 passed in 0.12s ======
```

✅ All tests PASS
### Phase 3: REFACTOR - Improve Code

Extracted validation rules into separate functions:
```python
import re

MIN_LENGTH = 8
REQUIRED_PATTERNS = [
    (r'\d', "number"),
    (r'[A-Z]', "uppercase letter"),
    (r'[!@#$%^&*]', "special character"),
]

def validate_password(password: str) -> bool:
    """Validate password strength."""
    check_length(password)
    check_required_patterns(password)
    return True

def check_length(password: str) -> None:
    """Check password meets minimum length."""
    if len(password) < MIN_LENGTH:
        raise ValueError(f"Password too short (minimum {MIN_LENGTH})")

def check_required_patterns(password: str) -> None:
    """Check password contains required character types."""
    for pattern, name in REQUIRED_PATTERNS:
        if not re.search(pattern, password):
            raise ValueError(f"Password must contain {name}")
```

Running tests:
```sh
$ pytest tests/test_password_validator.py -v

====== 3 passed in 0.11s ======
```

✅ Tests still PASS after refactoring
### Phase 4: REPEAT - Add More Tests

Added edge case test:

```python
def test_rejects_empty_password():
    """Test that empty password fails."""
    with pytest.raises(ValueError, match="too short"):
        validate_password("")
```

```sh
$ pytest tests/test_password_validator.py -v

====== 4 passed in 0.13s ======
```

✅ New test passes immediately (covered by existing implementation)
### Cycle Summary

- Red: 3 tests written, all failed initially ✅
- Green: Implementation created, all tests pass ✅
- Refactor: Extracted helper functions, tests still pass ✅
- Repeat: Added edge case test ✅
### Final Stats

- Tests: 4 (all passing)
- Coverage: 100%
- Lines of code: 28
- Complexity: Low (simple functions)
## Flags

| Flag | Description | Example |
|------|-------------|---------|
| `--strict` | Extra strict TDD enforcement | `--strict` |
| `--framework=[fw]` | Specify test framework | `--framework=vitest` |
| `--format=[fmt]` | Output format (concise/detailed) | `--format=concise` |
### Flag Examples

```bash
# Strict TDD with verbose output
/tdd --strict --format=detailed "user authentication"

# TDD with specific framework
/tdd --framework=jest "form validation"
```

## Deliverables

After a TDD session, you receive:
- Test File - Complete test suite written first
- Implementation - Minimal code that passes tests
- Verification - Proof that tests failed then passed
- Refactored Code - Clean, well-structured implementation
- Coverage Report - Typically 100% for TDD code