secure-development-lifecycle
# 🛡️ Secure Development Lifecycle (SDLC) Skill

## 🎯 Purpose

Comprehensive security practices for the entire Software Development Lifecycle (SDLC), ensuring security is built in from inception through maintenance. Integrates classification-driven requirements, AI-augmented development controls, and systematic testing frameworks aligned with the Hack23 Secure Development Policy.
## Core Security Principles

### Security by Design

- 🏷️ Project Classification: CIA triad, RTO/RPO, business impact analysis
- 🛡️ Secure Coding Standards: OWASP Top 10 alignment with classification controls
- Architecture Documentation: SECURITY_ARCHITECTURE.md + FUTURE_SECURITY_ARCHITECTURE.md

### Transparency Through Documentation

- Living Security Architecture: Real-time documentation with classification controls
- Public Security Badges: OpenSSF Scorecard, SLSA, Quality Gate validation
- Open Development: Demonstrating expertise while maintaining classification

### Continuous Security Improvement

- 🏷️ Classification-Driven Testing: SAST/SCA/DAST per classification levels
- Performance Monitoring: Security metrics with availability SLAs
- Regular Reviews: Classification-based risk management and ROI
## 5-Phase SDLC Security Framework

### Phase 1: Planning & Design

#### 🏷️ Project Classification (REQUIRED)

Apply the Classification Framework:
- CIA Triad Analysis (Confidentiality, Integrity, Availability)
- Business Impact Classification (Revenue, Trust, Compliance)
- RTO/RPO Definition (Recovery Time/Point Objectives)
- Risk Assessment Integration with Risk Register
- Cost-Benefit Analysis (Security ROI)
Classification Levels:
| Level | Confidentiality | Integrity | Availability | Security Investment |
|---|---|---|---|---|
| Critical | State secrets | Financial | <1 hour RTO | Maximum controls |
| High | Proprietary | Legal | 4 hour RTO | Strong controls |
| Medium | Internal | Operational | 24 hour RTO | Standard controls |
| Low | Public | Informational | 72 hour RTO | Baseline controls |
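The classification table above can also be encoded as a lookup so tooling can enforce recovery objectives programmatically. The following is a minimal sketch, assuming the four level names in the table; the `RTO_HOURS` mapping and `rto_target` function are hypothetical names, not part of any Hack23 tool.

```python
# Hypothetical sketch: encode the classification table so CI tooling
# can look up RTO targets per project. Values mirror the table above.
RTO_HOURS = {
    "Critical": 1,   # <1 hour RTO, maximum controls
    "High": 4,       # 4 hour RTO, strong controls
    "Medium": 24,    # 24 hour RTO, standard controls
    "Low": 72,       # 72 hour RTO, baseline controls
}

def rto_target(classification: str) -> int:
    """Return the recovery-time objective (hours) for a classification level."""
    try:
        return RTO_HOURS[classification]
    except KeyError:
        raise ValueError(f"Unknown classification: {classification!r}")

print(rto_target("High"))  # 4
```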
#### Security Architecture Design (REQUIRED)
Maintain comprehensive architecture documentation:
- SECURITY_ARCHITECTURE.md: Current implemented security design
- FUTURE_SECURITY_ARCHITECTURE.md: Planned security improvements
- ARCHITECTURE.md: Complete C4 models (Context, Container, Component, Code)
- DATA_MODEL.md: Data structures and classifications
- FLOWCHART.md: Business process flows with security controls
#### 🎯 Threat Modeling (MANDATORY)
- STRIDE Framework: Spoofing, Tampering, Repudiation, Information Disclosure, DoS, Elevation of Privilege
- MITRE ATT&CK Integration: 14 tactics mapped with techniques
- Attack Tree Analysis: Graphical attack path decomposition
- Threat Agent Classification: 7 categories (Accidental Insiders → Nation-State APTs)
- THREAT_MODEL.md: Comprehensive 9-section threat documentation
### 💻 Phase 2: Development

#### 🛡️ Secure Coding Guidelines
OWASP Top 10 (2021) Alignment:
- A01 - Broken Access Control: Proper authentication/authorization
- A02 - Cryptographic Failures: TLS 1.3, AES-256 encryption
- A03 - Injection: Parameterized queries, input validation
- A04 - Insecure Design: Apply threat modeling, secure patterns
- A05 - Security Misconfiguration: Secure defaults, hardened configs
- A06 - Vulnerable Components: SCA scanning, SBOM generation
- A07 - Authentication Failures: MFA, secure session management
- A08 - Software/Data Integrity: Code signing, integrity checks
- A09 - Logging Failures: Comprehensive security event logging
- A10 - SSRF: Validate external resource requests
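To make the A03 (Injection) guidance concrete, here is a minimal sketch using Python's standard-library `sqlite3`: the driver binds user input as data, so it can never alter the query structure. The table and function names are illustrative only.

```python
import sqlite3

# A03 (Injection): use parameterized queries, never string concatenation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

def find_user(name: str):
    # `name` is bound as a parameter, so input like "' OR '1'='1"
    # cannot change the WHERE clause.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))        # the legitimate row
print(find_user("' OR '1'='1"))  # [] -- the injection attempt matches nothing
```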
#### Code Review Requirements
Classification-Based Review:
| Classification | Review Type | Required Approvals | Security Focus |
|---|---|---|---|
| Critical | Formal security review | 2+ reviewers + security architect | All OWASP Top 10 |
| High | Security-focused PR review | 2+ reviewers | Critical vulnerabilities |
| Medium | Standard PR review | 1+ reviewer | Input validation, auth |
| Low | Standard PR review | 1 reviewer | Basic security checks |
#### Secret Management (MANDATORY)
- Zero Hard-Coded Credentials: No secrets in source code
- GitHub Secrets: All credentials in encrypted secrets
- Rotation Policy: Critical: 90 days, High: 180 days, Medium/Low: 365 days
- Access Logging: All secret access logged and monitored
- Least Privilege: Secrets scoped to minimum required access
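In practice, "zero hard-coded credentials" usually means code reads secrets from the environment, which CI populates from an encrypted store such as GitHub Secrets. A minimal sketch, assuming a hypothetical secret name `API_TOKEN` and helper `get_secret`:

```python
import os

# Sketch: fail fast if a required secret is absent instead of falling
# back to a hard-coded default. In CI, the variable would be injected
# from GitHub Secrets; the name here is hypothetical.
def get_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required secret: {name}")
    return value

os.environ["API_TOKEN"] = "example-token"  # simulate CI injection for the demo
print(get_secret("API_TOKEN"))
```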
### 🧪 Phase 3: Security Testing

#### 🔬 Static Application Security Testing (SAST)
Implementation:
- Tool: SonarCloud integration on every commit
- Quality Gates: Classification-based failure thresholds
- Coverage: All code analyzed for security vulnerabilities
- Reporting: Public quality/security dashboards
Classification-Based Quality Gates:
| Classification | Security Hotspots | Code Coverage | Duplications | Maintainability |
|---|---|---|---|---|
| Critical | 0 (block) | ≥90% | <3% | A rating |
| High | ≤2 (review) | ≥80% | <5% | A or B rating |
| Medium | ≤5 (track) | ≥70% | <10% | B or C rating |
| Low | ≤10 (monitor) | ≥60% | <15% | C rating |
#### 📦 Software Composition Analysis (SCA)
Dependency Security:
- Automated Scanning: Dependabot, Snyk, or equivalent
- SBOM Generation: Software Bill of Materials for all releases
- Vulnerability Database: CVE, NVD, GitHub Advisory integration
- Update Policy: Classification-based patching SLAs
- License Compliance: OSS license validation
Remediation SLAs:
| Severity | Critical Project | High Project | Medium Project | Low Project |
|---|---|---|---|---|
| Critical | 24 hours | 72 hours | 1 week | 2 weeks |
| High | 1 week | 2 weeks | 1 month | 2 months |
| Medium | 1 month | 2 months | 3 months | 6 months |
| Low | Next release | Next release | Next release | Next release |
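The SLA matrix above lends itself to automation, for example to compute a remediation deadline when a finding is opened. A hedged sketch, with "Next release" rows represented as `None` (no fixed calendar deadline); the names are hypothetical:

```python
from datetime import datetime, timedelta

# Sketch of the remediation-SLA table as a lookup:
# (vulnerability severity, project classification) -> allowed days.
SLA_DAYS = {
    "Critical": {"Critical": 1,    "High": 3,    "Medium": 7,    "Low": 14},
    "High":     {"Critical": 7,    "High": 14,   "Medium": 30,   "Low": 60},
    "Medium":   {"Critical": 30,   "High": 60,   "Medium": 90,   "Low": 180},
    "Low":      {"Critical": None, "High": None, "Medium": None, "Low": None},
}

def remediation_deadline(severity, classification, found=None):
    """Return the latest acceptable fix date, or None for 'next release'."""
    days = SLA_DAYS[severity][classification]
    if days is None:
        return None  # ship the fix with the next release
    found = found or datetime.utcnow()
    return found + timedelta(days=days)

print(remediation_deadline("Critical", "High", datetime(2026, 1, 1)))
```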
#### ⚡ Dynamic Application Security Testing (DAST)
Runtime Security Testing:
- Tool: OWASP ZAP, Burp Suite, or equivalent
- Scope: Staging environments (classification-appropriate)
- Frequency: Per sprint (Critical/High), quarterly (Medium/Low)
- Coverage: All authentication, authorization, input handling paths
#### Secret Scanning (CONTINUOUS)
- GitHub Secret Scanning: Enabled on all repositories
- Pre-commit Hooks: Detect secrets before commit
- Historical Scanning: Scan entire git history
- Alert Integration: Immediate notifications to security team
- Remediation SLA: Critical secrets rotated within 1 hour
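A pre-commit hook of the kind described above typically pattern-matches staged content against known credential formats. The sketch below is illustrative only; production setups should use a dedicated scanner (e.g. GitHub secret scanning or gitleaks), and these regexes cover just three common token shapes.

```python
import re

# Illustrative secret patterns -- not an exhaustive or authoritative list.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                       # GitHub PAT (classic)
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key
]

def find_secrets(text: str):
    """Return all substrings that look like hard-coded credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# Built via concatenation so this example itself would scan clean.
sample = "aws_key = 'AKIA" + "ABCDEFGHIJKLMNOP'"
print(find_secrets(sample))  # ['AKIAABCDEFGHIJKLMNOP']
```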
#### Test Data Protection (MANDATORY)
- Zero Production Data: Never use real data in dev/test
- Data Anonymization: Pseudonymize test data
- Secure Deletion: Wipe test data after use
- Access Control: Least privilege for test environments
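One common way to meet the anonymization requirement is deterministic pseudonymization: a keyed hash preserves referential integrity (the same input always maps to the same token) without exposing the original value. A minimal sketch; the salt value and `pseudonymize` helper are hypothetical, and a real salt must live outside source control.

```python
import hashlib

SALT = b"test-env-salt"  # hypothetical; keep the real salt out of the repo

def pseudonymize(value: str) -> str:
    """Map an identifier to a stable, non-reversible test token."""
    digest = hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()
    return f"user_{digest[:12]}"

a = pseudonymize("alice@example.com")
b = pseudonymize("alice@example.com")
print(a == b)  # deterministic: same input, same token
print(a)       # e.g. user_<12 hex chars>, unlinkable without the salt
```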
### 🎯 Unit Test Coverage & Quality

#### Testing Standards

Minimum Thresholds:
- Line Coverage: ≥80% (Critical/High), ≥70% (Medium/Low)
- Branch Coverage: ≥70% (Critical/High), ≥60% (Medium/Low)
- Mutation Testing: ≥60% mutation score (Critical only)
- Test Execution: Every commit and PR
- Trend Analysis: Historical tracking, regression prevention
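A CI job can enforce these thresholds with a small gate function. The sketch below mirrors the bullets above (Critical/High vs. Medium/Low tiers); the `coverage_gate` name and failure-message format are illustrative assumptions.

```python
# Sketch: enforce the line/branch coverage thresholds above in CI.
THRESHOLDS = {
    "Critical": {"line": 80.0, "branch": 70.0},
    "High":     {"line": 80.0, "branch": 70.0},
    "Medium":   {"line": 70.0, "branch": 60.0},
    "Low":      {"line": 70.0, "branch": 60.0},
}

def coverage_gate(classification, line_pct, branch_pct):
    """Return a list of failures; an empty list means the gate passes."""
    required = THRESHOLDS[classification]
    failures = []
    if line_pct < required["line"]:
        failures.append(f"line coverage {line_pct}% < {required['line']}%")
    if branch_pct < required["branch"]:
        failures.append(f"branch coverage {branch_pct}% < {required['branch']}%")
    return failures

print(coverage_gate("High", 85.0, 72.0))  # []
print(coverage_gate("High", 75.0, 72.0))  # one line-coverage failure
```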
#### Required Documentation
Every repository MUST have:
- UnitTestPlan.md: Comprehensive unit test strategy
- Test Results: Public HTML reports (GitHub Pages)
- Coverage Dashboards: Accessible coverage metrics
- Quality Badges: Status badges in README.md
#### Reference Implementation Examples

- Citizen Intelligence Agency (Java/Spring)
- 🎮 Black Trigram (TypeScript/Phaser)
- CIA Compliance Manager (TypeScript/Vite)

### End-to-End Testing Strategy

#### 🎯 E2E Testing Requirements
Coverage Areas:
- Critical User Journeys: All primary workflows tested
- Authentication Flows: Login, logout, session management
- Authorization Checks: Role-based access validation
- Data Integrity: CRUD operations validation
- Performance: Response time within SLA thresholds
#### Required Documentation
Every repository MUST have:
- E2ETestPlan.md: Comprehensive E2E test strategy
- Mochawesome Reports: Public HTML test results
- Browser Matrix: Cross-browser validation (Chrome, Firefox, Safari, Edge)
- Performance Assertions: Response time validation
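The performance-assertion requirement boils down to timing a user-facing operation and failing the test when it exceeds the SLA. A framework-agnostic sketch, with a stubbed request standing in for a real HTTP round trip; the 500 ms SLA and all names here are illustrative assumptions, not project values.

```python
import time

SLA_MS = 500  # hypothetical response-time SLA

def timed_call(fn, *args):
    """Run fn and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

def fake_request():
    time.sleep(0.01)  # stand-in for a real HTTP round trip
    return 200

status, elapsed = timed_call(fake_request)
assert status == 200, "unexpected status"
assert elapsed < SLA_MS, f"response took {elapsed:.0f} ms, SLA is {SLA_MS} ms"
print("within SLA")
```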
#### Reference Implementation Examples

- Citizen Intelligence Agency

### 🤖 AI-Augmented Development Controls

#### AI as Proposal Generator, Not Authority
Core Principles:
- All AI outputs are proposals: Require human review and approval
- No autonomous deployment: AI cannot bypass CI/CD pipelines or security gates
- Human accountability: Responsibility remains with human developers
- Transparent attribution: Document AI assistance in PR descriptions
#### PR Review Requirements
Mandatory Controls:
- Human Review: All AI-assisted changes pass through standard PR workflows
- Security Gate Enforcement: CI pipelines unchanged or only tightened
- Change Attribution: PR descriptions MUST document AI tools used
- Code Ownership: Human developers remain code owners
#### 🔧 Curator-Agent Configuration Management

Change Control:
- Scope: `.github/agents/*.md`, `.github/copilot-mcp*.json`, `.github/workflows/copilot-setup-steps.yml`
- Classification: Normal Change per Change Management
- Approval: CEO or designated security owner required
- Risk Assessment: Documented evaluation for capability expansion
#### 🛡️ Security Requirements
Tool Governance:
- Least Privilege: Agents operate with minimal required tool access
- MCP Configuration Control: Model Context Protocol changes require security review
- Audit Trail: All agent activities logged for compliance analysis
- Capability Expansion: New integrations require documented risk assessment
### Phase 4: Deployment

#### 🤖 Automated CI/CD Pipelines
Security Gates:
- SAST Scanning: Code quality gates (classification-based thresholds)
- SCA Scanning: Dependency vulnerability checks with auto-block
- Secret Scanning: Zero tolerance for exposed credentials
- Container Scanning: Image vulnerability assessment (if applicable)
- Infrastructure as Code: Terraform/CloudFormation security validation
#### ✅ Manual Approval Gates
Classification-Based Approvals:
| Classification | Approval Required | Approvers | Change Window |
|---|---|---|---|
| Critical | Production deploy | CEO + Security Architect | Scheduled only |
| High | Production deploy | Tech Lead + Reviewer | Standard window |
| Medium | Production deploy | Automated + monitoring | Anytime |
| Low | Production deploy | Automated | Anytime |
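The approval matrix above can be checked mechanically before a production deploy. A hedged sketch; the approver strings follow the table, while the function name and the `in_window` flag (modelling the "scheduled only" change window for Critical) are hypothetical.

```python
# Sketch of the approval-gate table as a decision helper.
APPROVAL_RULES = {
    "Critical": {"approvers": ["CEO", "Security Architect"], "window": "scheduled"},
    "High":     {"approvers": ["Tech Lead", "Reviewer"],     "window": "standard"},
    "Medium":   {"approvers": [],                            "window": "anytime"},
    "Low":      {"approvers": [],                            "window": "anytime"},
}

def production_deploy_allowed(classification, approvals, in_window=True):
    """True when every required approver signed off and the window permits it."""
    rule = APPROVAL_RULES[classification]
    if rule["window"] == "scheduled" and not in_window:
        return False  # Critical deploys only inside a scheduled change window
    return all(a in approvals for a in rule["approvers"])

print(production_deploy_allowed("High", {"Tech Lead", "Reviewer"}))  # True
print(production_deploy_allowed("Critical", {"CEO"}))                # False
```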
#### Deployment Checklists
Pre-Deployment Verification:
- All security tests passing
- Classification-appropriate controls validated
- Rollback plan documented
- Monitoring alerts configured
- Incident response procedures ready
#### Security Metrics
Real-Time Monitoring:
- OpenSSF Scorecard: Public security posture metrics
- SLSA Level: Supply chain security attestation
- Quality Gates: SonarCloud quality/security dashboards
- Uptime Metrics: Availability aligned with classification SLAs
### 🔧 Phase 5: Maintenance & Operations

#### Vulnerability Management

Classification-based remediation timelines, per the Vulnerability Management policy:
| Severity | Critical Project | High Project | Medium Project | Low Project |
|---|---|---|---|---|
| Critical | 24 hours | 72 hours | 1 week | 2 weeks |
| High | 1 week | 2 weeks | 1 month | 2 months |
| Medium | 1 month | 2 months | 3 months | 6 months |
| Low | Next release | Next release | Next release | Next release |
#### Performance Monitoring

Security metrics integration, per the Security Metrics policy:
- Availability Tracking: Uptime per classification requirements
- Response Time: Performance within SLA thresholds
- Error Rates: Security-relevant errors logged and analyzed
- Incident Metrics: MTTR, MTTD aligned with classification
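MTTD (mean time to detect) and MTTR (mean time to resolve) are simple averages over incident timestamps. A sketch, assuming hypothetical field names; real data would come from the incident tracker.

```python
from datetime import datetime

# Illustrative incident records (field names are hypothetical).
incidents = [
    {"occurred": datetime(2026, 1, 1, 10, 0),
     "detected": datetime(2026, 1, 1, 10, 30),
     "resolved": datetime(2026, 1, 1, 14, 30)},
    {"occurred": datetime(2026, 1, 5, 9, 0),
     "detected": datetime(2026, 1, 5, 9, 30),
     "resolved": datetime(2026, 1, 5, 11, 30)},
]

def mean_hours(pairs):
    """Average the (start, end) gaps in hours."""
    deltas = [(end - start).total_seconds() / 3600 for start, end in pairs]
    return sum(deltas) / len(deltas)

mttd = mean_hours([(i["occurred"], i["detected"]) for i in incidents])
mttr = mean_hours([(i["detected"], i["resolved"]) for i in incidents])
print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")  # MTTD: 0.5 h, MTTR: 3.0 h
```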
#### Regular Updates
Patch Management:
- Security Patches: Classification-based deployment schedules
- Dependency Updates: Automated PRs with security review
- Framework Updates: Major version upgrades with testing
- Business Continuity: Updates aligned with BCP
#### Incident Response

Integration per the Incident Response Plan:
- Classification-Driven Escalation: Incident severity based on project classification
- Communication Procedures: Stakeholder notifications per classification
- Recovery Objectives: RTO/RPO aligned with classification
- Post-Incident Review: Lessons learned and improvement actions
## SDLC Security Maturity Levels

### Level 1: Basic (Minimum Viable Security)

- ✅ Basic security controls implemented
- ✅ Dependabot enabled
- ✅ Secret scanning active
- ✅ Basic threat model documented

### Level 2: Intermediate (Standard Security)

- ✅ Level 1 + Classification implemented
- ✅ SAST/SCA integrated in CI/CD
- ✅ Unit test coverage ≥70%
- ✅ SECURITY_ARCHITECTURE.md maintained
- ✅ Regular vulnerability scanning

### Level 3: Advanced (Enhanced Security)

- ✅ Level 2 + DAST implementation
- ✅ Comprehensive threat modeling (STRIDE + MITRE ATT&CK)
- ✅ Unit test coverage ≥80%
- ✅ E2E testing framework
- ✅ Public security dashboards

### Level 4: Mature (Security Excellence)

- ✅ Level 3 + AI-augmented development controls
- ✅ Mutation testing (≥60% score)
- ✅ Full C4 architecture documentation
- ✅ Continuous security monitoring
- ✅ Evidence-based compliance (badges, reports)
- ✅ External security validation (pentesting, audits)
## ✅ SDLC Security Checklist

### Planning & Design Phase
- Project classification completed (CIA triad, RTO/RPO, business impact)
- Threat model documented (STRIDE + MITRE ATT&CK)
- Security architecture designed (C4 models, data flows)
- Risk assessment integrated with Risk Register
- Cost-benefit analysis for security investments
### Development Phase
- Secure coding standards applied (OWASP Top 10)
- Code review requirements met (classification-based)
- Asset classification implemented
- Secret management controls enforced
- AI-augmented development controls active
### Testing Phase
- SAST scanning integrated (SonarCloud)
- SCA scanning enabled (Dependabot)
- DAST testing implemented (OWASP ZAP)
- Secret scanning active (GitHub)
- Unit test coverage thresholds met (≥80% line, ≥70% branch)
- E2E testing framework operational
- Test data protection controls enforced
### Deployment Phase
- CI/CD security gates configured
- Manual approval gates per classification
- Deployment checklists completed
- Security metrics monitoring active
- Rollback procedures documented
### Maintenance Phase
- Vulnerability management process active
- Performance monitoring with security metrics
- Regular update schedule defined
- Incident response procedures integrated
- Continuous improvement process operational
## References

### Hack23 ISMS Core Policies

- 🛠️ Secure Development Policy - Comprehensive SDLC framework
- 🏷️ Classification Framework - Business impact analysis
- 🎯 Threat Modeling Policy - Systematic threat analysis
- Risk Register - Enterprise risk management
- Vulnerability Management - Remediation procedures
- Security Metrics - KPI tracking
- 🚨 Incident Response Plan - Security incident procedures
- Business Continuity Plan - BCP/DR processes
- Change Management - Change control procedures
- 🏷️ Data Classification Policy - Data handling requirements
### Example Implementations

- CIA Security Architecture - Full authentication stack (Java/Spring)
- CIA Threat Model - Comprehensive threat analysis
- CIA Compliance Manager Security - Frontend security (TypeScript/Vite)
- 🎮 Black Trigram Security - Gaming security (TypeScript/Phaser)
- 🗳️ Riksdagsmonitor Security - Static site security (HTML/CSS)

### External Frameworks
- OWASP Top 10 - Critical web application security risks
- OWASP ASVS - Application security verification
- NIST SP 800-218 - Secure Software Development Framework
- Microsoft SDL - Security Development Lifecycle
- MITRE ATT&CK - Adversary tactics and techniques
## 🎯 Remember
- Classification Drives Security: All requirements aligned with business impact
- Transparency is Competitive Advantage: Public security demonstrates expertise
- AI Augments, Humans Decide: AI proposals require human approval
- Evidence-Based Security: Badges, dashboards, reports validate claims
- Continuous Improvement: Measure, analyze, improve security posture
- Documentation is Mandatory: SECURITY_ARCHITECTURE.md, THREAT_MODEL.md required
- Testing is Not Optional: Unit + E2E coverage proves quality
- Security is Everyone's Responsibility: DevSecOps culture required
Last Updated: 2026-02-10 (Continuous)
Version: Based on Hack23 Secure Development Policy v2.1 & STYLE_GUIDE v2.3