Pester Unit Testing for PowerShell
Pester is PowerShell's ubiquitous test and mock framework. Pester 5+ uses a two-phase execution model (Discovery → Run) that requires specific patterns for reliable tests.
TDD Cycle
- Red – Write a failing test describing expected behavior
- Green – Implement minimal code to pass
- Refactor – Clean up while keeping tests green
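The cycle can be sketched with a hypothetical `Get-Greeting` function (names are illustrative, not from any real module):

```powershell
# Red: write the failing test first (Get-Greeting does not exist yet)
Describe 'Get-Greeting' {
    It 'greets the given name' {
        Get-Greeting -Name 'Ada' | Should -Be 'Hello, Ada!'
    }
}

# Green: minimal implementation that makes the test pass
function Get-Greeting {
    param([string]$Name)
    "Hello, $Name!"
}

# Refactor: e.g. add parameter validation, re-run tests, keep them green
```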
Test File Structure
Test files follow the *.Tests.ps1 naming convention. Place them alongside their source files:
src/
├── Get-Widget.ps1
└── Get-Widget.Tests.ps1
Basic Template
BeforeAll {
. $PSCommandPath.Replace('.Tests.ps1', '.ps1')
}
Describe 'Get-Widget' {
Context 'when called with valid ID' {
It 'returns widget object' {
$result = Get-Widget -Id 42
$result.Id | Should -Be 42
}
}
Context 'when widget does not exist' {
It 'throws not found error' {
{ Get-Widget -Id 9999 } | Should -Throw -ErrorId 'WidgetNotFound'
}
}
}
Block Hierarchy
| Block | Purpose | Scope |
|---|---|---|
| Describe | Top-level grouping (1 per function/feature) | Container |
| Context | Scenario grouping ("when X", "with Y") | Sub-container |
| It | Single test case with assertions | Test |
| BeforeAll | Run once before all tests in block | Setup |
| BeforeEach | Run before each It | Per-test setup |
| AfterEach | Run after each It (guaranteed) | Per-test cleanup |
| AfterAll | Run once after all tests (guaranteed) | Final cleanup |
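The nesting and run order of these blocks, as a skeleton:

```powershell
Describe 'Get-Widget' {
    BeforeAll  { <# runs once, before any test in this Describe #> }

    Context 'when cache is warm' {
        BeforeEach { <# runs before every It in this Context #> }
        It 'returns cached value' { <# assertions go here #> }
        AfterEach  { <# runs after every It, even when it fails #> }
    }

    AfterAll   { <# runs once, after all tests, even on failure #> }
}
```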
Discovery vs Run Phase (Critical)
Pester 5 executes in two phases:
- Discovery – scans the file to find all tests (does NOT run It blocks)
- Run – executes the tests with setup/teardown
Rule: Put ALL code inside It, BeforeAll, BeforeEach, AfterEach, AfterAll, or BeforeDiscovery.
# ❌ WRONG - runs during Discovery, $data is null in Run phase
$data = Get-ExpensiveData
Describe 'Tests' {
It 'works' { $data | Should -Not -BeNull } # FAILS!
}
# ✅ CORRECT - use BeforeAll
Describe 'Tests' {
BeforeAll { $script:data = Get-ExpensiveData }
It 'works' { $script:data | Should -Not -BeNull }
}
For dynamic test generation, use BeforeDiscovery:
BeforeDiscovery {
$testCases = @('file1.ps1', 'file2.ps1')
}
Describe 'Validate <_>' -ForEach $testCases {
BeforeAll { $file = $_ }
It 'has valid syntax' { ... }
}
Mocking
Mock any PowerShell command within test scope:
Describe 'Send-Report' {
BeforeAll {
Mock Send-MailMessage {}
Mock Get-Date { return [DateTime]'2024-01-15' }
}
It 'sends email with correct subject' {
Send-Report -Title 'Summary'
Should -Invoke Send-MailMessage -Times 1 -ParameterFilter {
$Subject -like '*Summary*'
}
}
}
Parameter Filters
Create conditional mocks for different inputs:
Mock Get-Service { @{ Status = 'Running' } } -ParameterFilter { $Name -eq 'BITS' }
Mock Get-Service { @{ Status = 'Stopped' } } -ParameterFilter { $Name -eq 'Spooler' }
Mock Get-Service { @{ Status = 'Unknown' } } # Default fallback
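With the three mocks above in place, each call is answered by the behavior whose filter matches; the unfiltered mock serves as the fallback. A sketch:

```powershell
It 'dispatches mocks by service name' {
    (Get-Service -Name 'BITS').Status    | Should -Be 'Running'
    (Get-Service -Name 'Spooler').Status | Should -Be 'Stopped'
    (Get-Service -Name 'WinRM').Status   | Should -Be 'Unknown'  # no filter matched, default used
}
```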
Mocking Native Commands (bash, git, curl)
Native commands can be mocked too; inside the mock body and parameter filters their arguments are available via $args:
Describe 'Git Operations' {
BeforeAll { Mock git { 'mocked-output' } }
It 'calls git with correct args' {
Invoke-GitPush -Branch 'main'
Should -Invoke git -ParameterFilter {
$args[0] -eq 'push' -and $args[1] -eq 'origin'
}
}
}
Module Internals
Use -ModuleName for functions inside modules:
Mock Get-InternalData { 'mocked' } -ModuleName MyModule
Use InModuleScope for private/non-exported functions:
InModuleScope MyModule {
Mock Write-Log {}
Invoke-PrivateFunction
Should -Invoke Write-Log
}
Test Isolation
TestDrive (Filesystem)
Temporary PSDrive auto-cleaned per block:
Describe 'File Processing' {
BeforeAll {
Set-Content 'TestDrive:\config.json' -Value '{"key":"value"}'
}
It 'reads config' {
$cfg = Get-Content 'TestDrive:\config.json' | ConvertFrom-Json
$cfg.key | Should -Be 'value'
}
}
Use $TestDrive for .NET APIs requiring full paths:
$path = Join-Path $TestDrive 'file.txt'
[System.IO.File]::WriteAllText($path, 'content')
TestRegistry (Windows)
Temporary registry hive:
BeforeAll {
New-Item -Path 'TestRegistry:\MyApp'
New-ItemProperty -Path 'TestRegistry:\MyApp' -Name 'Setting' -Value 'Test'
}
Environment Variables
Save and restore manually:
BeforeEach {
$script:oldEnv = $env:MY_VAR
$env:MY_VAR = 'test-value'
}
AfterEach {
$env:MY_VAR = $script:oldEnv
}
Output Capture
Stream Redirection
| Stream | Command | Capture |
|---|---|---|
| 1 (Success) | Write-Output | Direct assignment |
| 2 (Error) | Write-Error | 2>&1 or -ErrorVariable |
| 3 (Warning) | Write-Warning | 3>&1 |
| 4 (Verbose) | Write-Verbose | 4>&1 with -Verbose |
| 6 (Information) | Write-Host | 6>&1 |
It 'captures Write-Host' {
$result = MyFunction 6>&1
$result | Should -Contain 'expected message'
}
ANSI Color Stripping
function Remove-AnsiCodes {
param([string]$Text)
$Text -replace '\x1b\[[0-9;]*[a-zA-Z]', ''
}
$clean = Remove-AnsiCodes $coloredOutput
Or configure Pester: $config.Output.RenderMode = 'Plaintext'
Parameterized Tests
Use -ForEach or -TestCases:
Describe 'Add-Numbers' {
It 'adds <a> + <b> = <expected>' -TestCases @(
@{ a = 2; b = 3; expected = 5 }
@{ a = -1; b = 1; expected = 0 }
) {
Add-Numbers $a $b | Should -Be $expected
}
}
Running Specific Tests
Tags
It 'slow test' -Tag 'Integration', 'Slow' { ... }
# Run only tagged tests
Invoke-Pester -TagFilter 'Unit' -ExcludeTagFilter 'Slow'
Name Filters
Invoke-Pester -FullNameFilter '*Get-Widget*returns*'
Skip
It 'admin only' -Skip:(-not (Test-IsAdmin)) { ... }
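Tests can also be skipped from inside the test body at run time with Set-ItResult (a sketch; the network probe is illustrative):

```powershell
It 'requires network access' {
    if (-not (Test-Connection example.com -Count 1 -Quiet -ErrorAction SilentlyContinue)) {
        # Marks this test as skipped instead of failed
        Set-ItResult -Skipped -Because 'no network access in this environment'
    }
    Invoke-WebRequest 'https://example.com' | Should -Not -BeNullOrEmpty
}
```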
Code Coverage
$config = New-PesterConfiguration
$config.CodeCoverage.Enabled = $true
$config.CodeCoverage.Path = './src'
$config.CodeCoverage.OutputFormat = 'JaCoCo'
$config.CodeCoverage.OutputPath = 'coverage.xml'
$config.CodeCoverage.CoveragePercentTarget = 80
Invoke-Pester -Configuration $config
CI Reports (JUnit/NUnit)
$config = New-PesterConfiguration
$config.TestResult.Enabled = $true
$config.TestResult.OutputFormat = 'JUnitXml' # or NUnitXml
$config.TestResult.OutputPath = 'test-results.xml'
$config.Run.Exit = $true # Exit code for CI
Invoke-Pester -Configuration $config
Additional Resources
- references/anti-patterns.md - Common mistakes and pitfalls with solutions
- references/mocking-patterns.md - Advanced mocking scenarios (APIs, databases, native commands)
- references/ci-integration.md - GitHub Actions, Azure DevOps, GitLab CI, Jenkins examples
Common Anti-Patterns
See references/anti-patterns.md for detailed examples.
Quick checklist:
- ❌ Code outside Pester blocks
- ❌ Tests depending on each other
- ❌ Using foreach instead of -ForEach
- ❌ Mocking the function under test
- ❌ Over-specifying mock interactions
- ❌ Global variables in tests
Assertion Quick Reference
| Assertion | Description |
|---|---|
| Should -Be | Case-insensitive equality |
| Should -BeExactly | Case-sensitive equality |
| Should -BeTrue / -BeFalse | Boolean |
| Should -BeNullOrEmpty | Null/empty check |
| Should -BeOfType | Type checking |
| Should -Contain | Collection contains |
| Should -Match | Regex (case-insensitive) |
| Should -BeLike | Wildcard match |
| Should -Throw | Exception expected |
| Should -Exist | Path exists |
| Should -HaveCount | Collection count |
| Should -Invoke | Mock was called |
Full assertion list: Get-ShouldOperator
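Several of these operators in one test, as a quick illustration (data is made up):

```powershell
It 'demonstrates common operators' {
    $widgets = @('alpha', 'beta')

    $widgets         | Should -HaveCount 2
    $widgets[0]      | Should -BeOfType [string]
    $widgets         | Should -Contain 'beta'
    $widgets[0]      | Should -Match '^al'
    $widgets[0]      | Should -BeExactly 'alpha'   # case-sensitive
    $widgets[0]      | Should -BeLike 'al*'
    { throw 'boom' } | Should -Throw 'boom'
}
```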