
๐Ÿ›ก๏ธ Secure Development Lifecycle (SDLC) Skill

🎯 Purpose

Comprehensive security practices for the entire Software Development Lifecycle (SDLC), ensuring security is built in from inception through maintenance. Integrates classification-driven requirements, AI-augmented development controls, and systematic testing frameworks aligned with Hack23 Secure Development Policy.

๐Ÿ” Core Security Principles

๐Ÿ” Security by Design

  • ๐Ÿท๏ธ Project Classification: CIA triad, RTO/RPO, business impact analysis
  • ๐Ÿ›ก๏ธ Secure Coding Standards: OWASP Top 10 alignment with classification controls
  • ๐Ÿ—๏ธ Architecture Documentation: SECURITY_ARCHITECTURE.md + FUTURE_SECURITY_ARCHITECTURE.md

๐ŸŒŸ Transparency Through Documentation

  • ๐Ÿ“‹ Living Security Architecture: Real-time documentation with classification controls
  • ๐ŸŽ–๏ธ Public Security Badges: OpenSSF Scorecard, SLSA, Quality Gate validation
  • ๐Ÿ”“ Open Development: Demonstrating expertise while maintaining classification

๐Ÿ”„ Continuous Security Improvement

  • ๐Ÿท๏ธ Classification-Driven Testing: SAST/SCA/DAST per classification levels
  • ๐Ÿ“ˆ Performance Monitoring: Security metrics with availability SLAs
  • ๐Ÿ” Regular Reviews: Classification-based risk management and ROI

🔄 5-Phase SDLC Security Framework

📋 Phase 1: Planning & Design

🏷️ Project Classification (REQUIRED)

Apply Classification Framework:

  • CIA Triad Analysis (Confidentiality, Integrity, Availability)
  • Business Impact Classification (Revenue, Trust, Compliance)
  • RTO/RPO Definition (Recovery Time/Point Objectives)
  • Risk Assessment Integration with Risk Register
  • Cost-Benefit Analysis (Security ROI)

Classification Levels:

| Level | Confidentiality | Integrity | Availability | Security Investment |
|----------|-----------------|---------------|-------------|-------------------|
| Critical | State secrets | Financial | <1 hour RTO | Maximum controls |
| High | Proprietary | Legal | 4 hour RTO | Strong controls |
| Medium | Internal | Operational | 24 hour RTO | Standard controls |
| Low | Public | Informational | 72 hour RTO | Baseline controls |
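
The classification table above can be captured in code for tooling that enforces per-level controls. A minimal sketch, assuming hypothetical field names (the Critical "<1 hour" RTO is rounded to 1):

```python
# Illustrative sketch only: level names and values mirror the table above;
# the dictionary keys and helper name are assumptions, not a Hack23 API.
CLASSIFICATION_LEVELS = {
    "Critical": {"rto_hours": 1,  "controls": "Maximum"},
    "High":     {"rto_hours": 4,  "controls": "Strong"},
    "Medium":   {"rto_hours": 24, "controls": "Standard"},
    "Low":      {"rto_hours": 72, "controls": "Baseline"},
}

def required_rto_hours(level: str) -> int:
    """Maximum tolerable recovery time, in hours, for a classification level."""
    return CLASSIFICATION_LEVELS[level]["rto_hours"]
```

Encoding the table once lets CI jobs, dashboards, and deployment gates all consume the same source of truth instead of re-reading prose.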

๐Ÿ—๏ธ Security Architecture Design (REQUIRED)

Maintain comprehensive architecture documentation:

  • SECURITY_ARCHITECTURE.md: Current implemented security design
  • FUTURE_SECURITY_ARCHITECTURE.md: Planned security improvements
  • ARCHITECTURE.md: Complete C4 models (Context, Container, Component, Code)
  • DATA_MODEL.md: Data structures and classifications
  • FLOWCHART.md: Business process flows with security controls

🎯 Threat Modeling (MANDATORY)

Per Threat Modeling Policy:

  • STRIDE Framework: Spoofing, Tampering, Repudiation, Information Disclosure, DoS, Elevation of Privilege
  • MITRE ATT&CK Integration: 14 tactics mapped with techniques
  • Attack Tree Analysis: Graphical attack path decomposition
  • Threat Agent Classification: 7 categories (Accidental Insiders → Nation-State APTs)
  • THREAT_MODEL.md: Comprehensive 9-section threat documentation

💻 Phase 2: Development

🛡️ Secure Coding Guidelines

OWASP Top 10 (2021) Alignment:

  1. A01 - Broken Access Control: Proper authentication/authorization
  2. A02 - Cryptographic Failures: TLS 1.3, AES-256 encryption
  3. A03 - Injection: Parameterized queries, input validation
  4. A04 - Insecure Design: Apply threat modeling, secure patterns
  5. A05 - Security Misconfiguration: Secure defaults, hardened configs
  6. A06 - Vulnerable Components: SCA scanning, SBOM generation
  7. A07 - Authentication Failures: MFA, secure session management
  8. A08 - Software/Data Integrity: Code signing, integrity checks
  9. A09 - Logging Failures: Comprehensive security event logging
  10. A10 - SSRF: Validate external resource requests
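
As a concrete illustration of A03 (Injection), a parameterized query keeps user input out of the SQL text entirely. A minimal sketch with an illustrative schema, not from any Hack23 codebase:

```python
import sqlite3

# Minimal A03 mitigation sketch: the table and query are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # The ? placeholder binds user input as data; it can never alter the SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With placeholder binding, a classic payload such as `' OR '1'='1` is matched as a literal string and returns no rows.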

๐Ÿ” Code Review Requirements

Classification-Based Review:

| Classification | Review Type | Required Approvals | Security Focus |
|----------|----------------------------|-----------------------------------|--------------------------|
| Critical | Formal security review | 2+ reviewers + security architect | All OWASP Top 10 |
| High | Security-focused PR review | 2+ reviewers | Critical vulnerabilities |
| Medium | Standard PR review | 1+ reviewer | Input validation, auth |
| Low | Standard PR review | 1 reviewer | Basic security checks |

๐Ÿ” Secret Management (MANDATORY)

  • Zero Hard-Coded Credentials: No secrets in source code
  • GitHub Secrets: All credentials in encrypted secrets
  • Rotation Policy: Critical: 90 days, High: 180 days, Medium/Low: 365 days
  • Access Logging: All secret access logged and monitored
  • Least Privilege: Secrets scoped to minimum required access
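
The zero-hard-coded-credentials rule can be enforced by loading secrets from the environment (populated from GitHub Secrets in CI) and failing fast when one is absent. A sketch with a hypothetical helper and variable name:

```python
import os

# Sketch of the zero-hard-coded-credentials rule; the helper name is
# hypothetical. Secrets arrive via the environment, never source code.
def load_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        # Fail fast so a missing secret surfaces at startup, not mid-request.
        raise RuntimeError(f"required secret {name!r} is not set")
    return value
```

Failing at startup also gives deployment pipelines a clean signal that a secret rotation or scope change broke the environment before traffic is served.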

🧪 Phase 3: Security Testing

🔬 Static Application Security Testing (SAST)

Implementation:

  • Tool: SonarCloud integration on every commit
  • Quality Gates: Classification-based failure thresholds
  • Coverage: All code analyzed for security vulnerabilities
  • Reporting: Public quality/security dashboards

Classification-Based Quality Gates:

| Classification | Security Hotspots | Code Coverage | Duplications | Maintainability |
|----------|-------------------|------|------|---------------|
| Critical | 0 (block) | ≥90% | <3% | A rating |
| High | ≤2 (review) | ≥80% | <5% | A or B rating |
| Medium | ≤5 (track) | ≥70% | <10% | B or C rating |
| Low | ≤10 (monitor) | ≥60% | <15% | C rating |
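
A CI step can evaluate these gates mechanically. An illustrative sketch; the function and metric names are assumptions, not SonarCloud's API:

```python
# Thresholds mirror the quality-gate table above (hotspots and coverage only).
GATES = {
    "Critical": {"max_hotspots": 0,  "min_coverage": 90.0},
    "High":     {"max_hotspots": 2,  "min_coverage": 80.0},
    "Medium":   {"max_hotspots": 5,  "min_coverage": 70.0},
    "Low":      {"max_hotspots": 10, "min_coverage": 60.0},
}

def gate_passes(classification: str, hotspots: int, coverage: float) -> bool:
    """Return True when the scan metrics satisfy the classification's gate."""
    gate = GATES[classification]
    return hotspots <= gate["max_hotspots"] and coverage >= gate["min_coverage"]
```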

📦 Software Composition Analysis (SCA)

Dependency Security:

  • Automated Scanning: Dependabot, Snyk, or equivalent
  • SBOM Generation: Software Bill of Materials for all releases
  • Vulnerability Database: CVE, NVD, GitHub Advisory integration
  • Update Policy: Classification-based patching SLAs
  • License Compliance: OSS license validation

Remediation SLAs:

| Severity | Critical Project | High Project | Medium Project | Low Project |
|----------|------------------|--------------|----------------|--------------|
| Critical | 24 hours | 72 hours | 1 week | 2 weeks |
| High | 1 week | 2 weeks | 1 month | 2 months |
| Medium | 1 month | 2 months | 3 months | 6 months |
| Low | Next release | Next release | Next release | Next release |
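
Tooling can turn these SLAs into concrete deadlines per finding. A partial sketch covering the Critical and High severity rows, with "1 month" approximated as 30 days:

```python
from datetime import timedelta

# Illustrative lookup of the remediation SLAs above; calendar durations
# like "1 month" are approximated with fixed timedeltas for tooling.
REMEDIATION_SLA = {
    # (finding severity, project classification) -> deadline after discovery
    ("Critical", "Critical"): timedelta(hours=24),
    ("Critical", "High"):     timedelta(hours=72),
    ("Critical", "Medium"):   timedelta(weeks=1),
    ("Critical", "Low"):      timedelta(weeks=2),
    ("High", "Critical"):     timedelta(weeks=1),
    ("High", "High"):         timedelta(weeks=2),
    ("High", "Medium"):       timedelta(days=30),
    ("High", "Low"):          timedelta(days=60),
}

def remediation_deadline(severity: str, classification: str) -> timedelta:
    """Time allowed between discovery and remediation for this pairing."""
    return REMEDIATION_SLA[(severity, classification)]
```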

⚡ Dynamic Application Security Testing (DAST)

Runtime Security Testing:

  • Tool: OWASP ZAP, Burp Suite, or equivalent
  • Scope: Staging environments (classification-appropriate)
  • Frequency: Per sprint (Critical/High), quarterly (Medium/Low)
  • Coverage: All authentication, authorization, input handling paths

๐Ÿ” Secret Scanning (CONTINUOUS)

  • GitHub Secret Scanning: Enabled on all repositories
  • Pre-commit Hooks: Detect secrets before commit
  • Historical Scanning: Scan entire git history
  • Alert Integration: Immediate notifications to security team
  • Remediation SLA: Critical secrets rotated within 1 hour
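
A pre-commit hook can reject obvious secrets before they ever reach history. A deliberately naive sketch; the patterns are illustrative, and production use should rely on GitHub secret scanning or a dedicated tool such as gitleaks:

```python
import re

# Naive pre-commit secret detector sketch; patterns are illustrative only.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def contains_secret(text: str) -> bool:
    """Return True when any candidate-secret pattern matches the text."""
    return any(p.search(text) for p in SECRET_PATTERNS)
```

A hook built on this would scan each staged file and exit non-zero on a match, blocking the commit locally while server-side scanning remains the backstop.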

📋 Test Data Protection (MANDATORY)

  • Zero Production Data: Never use real data in dev/test
  • Data Anonymization: Pseudonymize test data
  • Secure Deletion: Wipe test data after use
  • Access Control: Least privilege for test environments
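
Deterministic pseudonymization keeps test data joinable across tables without exposing real values. A minimal sketch using an HMAC; the key would itself be a managed secret, never committed:

```python
import hashlib
import hmac

# Sketch of deterministic pseudonymization: the same input always maps to
# the same token, preserving referential integrity across test datasets.
def pseudonymize(value: str, key: bytes) -> str:
    """Return a 16-hex-char token derived from value via HMAC-SHA-256."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]
```

Because the mapping is keyed, discarding the key after test-data generation makes re-identification infeasible while the anonymized rows stay internally consistent.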

🎯 Unit Test Coverage & Quality

📊 Testing Standards

Minimum Thresholds:

  • Line Coverage: ≥80% (Critical/High), ≥70% (Medium/Low)
  • Branch Coverage: ≥70% (Critical/High), ≥60% (Medium/Low)
  • Mutation Testing: ≥60% mutation score (Critical only)
  • Test Execution: Every commit and PR
  • Trend Analysis: Historical tracking, regression prevention

📚 Required Documentation

Every repository MUST have:

  • UnitTestPlan.md: Comprehensive unit test strategy
  • Test Results: Public HTML reports (GitHub Pages)
  • Coverage Dashboards: Accessible coverage metrics
  • Quality Badges: Status badges in README.md

📊 Reference Implementation Examples

🏛️ Citizen Intelligence Agency (Java/Spring): Unit Test Coverage Unit Tests Test Plan

🎮 Black Trigram (TypeScript/Phaser): Coverage Unit Tests Test Plan

📊 CIA Compliance Manager (TypeScript/Vite): Coverage Unit Tests Test Plan

๐ŸŒ End-to-End Testing Strategy

๐ŸŽฏ E2E Testing Requirements

Coverage Areas:

  • Critical User Journeys: All primary workflows tested
  • Authentication Flows: Login, logout, session management
  • Authorization Checks: Role-based access validation
  • Data Integrity: CRUD operations validation
  • Performance: Response time within SLA thresholds

📚 Required Documentation

Every repository MUST have:

  • E2ETestPlan.md: Comprehensive E2E test strategy
  • Mochawesome Reports: Public HTML test results
  • Browser Matrix: Cross-browser validation (Chrome, Firefox, Safari, Edge)
  • Performance Assertions: Response time validation

📊 Reference Implementation Examples

🏛️ Citizen Intelligence Agency: E2E Tests E2E Plan

🎮 Black Trigram: E2E Tests E2E Plan

📊 CIA Compliance Manager: E2E Tests E2E Plan

🤖 AI-Augmented Development Controls

🔍 AI as Proposal Generator, Not Authority

Core Principles:

  • All AI outputs are proposals: Require human review and approval
  • No autonomous deployment: AI cannot bypass CI/CD pipelines or security gates
  • Human accountability: Responsibility remains with human developers
  • Transparent attribution: Document AI assistance in PR descriptions

📋 PR Review Requirements

Mandatory Controls:

  • Human Review: All AI-assisted changes pass through standard PR workflows
  • Security Gate Enforcement: CI pipelines unchanged or only tightened
  • Change Attribution: PR descriptions MUST document AI tools used
  • Code Ownership: Human developers remain code owners

🔧 Curator-Agent Configuration Management

Change Control:

  • Scope: .github/agents/*.md, .github/copilot-mcp*.json, .github/workflows/copilot-setup-steps.yml
  • Classification: Normal Change per Change Management
  • Approval: CEO or designated security owner required
  • Risk Assessment: Documented evaluation for capability expansion

๐Ÿ›ก๏ธ Security Requirements

Tool Governance:

  • Least Privilege: Agents operate with minimal required tool access
  • MCP Configuration Control: Model Context Protocol changes require security review
  • Audit Trail: All agent activities logged for compliance analysis
  • Capability Expansion: New integrations require documented risk assessment

🚀 Phase 4: Deployment

🤖 Automated CI/CD Pipelines

Security Gates:

  • SAST Scanning: Code quality gates (classification-based thresholds)
  • SCA Scanning: Dependency vulnerability checks with auto-block
  • Secret Scanning: Zero tolerance for exposed credentials
  • Container Scanning: Image vulnerability assessment (if applicable)
  • Infrastructure as Code: Terraform/CloudFormation security validation

✅ Manual Approval Gates

Classification-Based Approvals:

| Classification | Approval Required | Approvers | Change Window |
|----------|-------------------|--------------------------|-----------------|
| Critical | Production deploy | CEO + Security Architect | Scheduled only |
| High | Production deploy | Tech Lead + Reviewer | Standard window |
| Medium | Production deploy | Automated + monitoring | Anytime |
| Low | Production deploy | Automated | Anytime |

📋 Deployment Checklists

Pre-Deployment Verification:

  • All security tests passing
  • Classification-appropriate controls validated
  • Rollback plan documented
  • Monitoring alerts configured
  • Incident response procedures ready

📊 Security Metrics

Real-Time Monitoring:

  • OpenSSF Scorecard: Public security posture metrics
  • SLSA Level: Supply chain security attestation
  • Quality Gates: SonarCloud quality/security dashboards
  • Uptime Metrics: Availability aligned with classification SLAs

🔧 Phase 5: Maintenance & Operations

🆘 Vulnerability Management

Classification-based remediation per the Vulnerability Management policy:

| Severity | Critical Project | High Project | Medium Project | Low Project |
|----------|------------------|--------------|----------------|--------------|
| Critical | 24 hours | 72 hours | 1 week | 2 weeks |
| High | 1 week | 2 weeks | 1 month | 2 months |
| Medium | 1 month | 2 months | 3 months | 6 months |
| Low | Next release | Next release | Next release | Next release |

📈 Performance Monitoring

Security metrics integration per the Security Metrics framework:

  • Availability Tracking: Uptime per classification requirements
  • Response Time: Performance within SLA thresholds
  • Error Rates: Security-relevant errors logged and analyzed
  • Incident Metrics: MTTR, MTTD aligned with classification
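
MTTR can be computed directly from incident timestamps. An illustrative sketch assuming hypothetical record fields, not a Hack23 schema:

```python
from datetime import timedelta

# Illustrative MTTR computation; the "detected_at"/"resolved_at" field
# names are assumptions for this sketch.
def mean_time_to_recover(incidents: list[dict]) -> timedelta:
    """Average time from detection to resolution across incident records."""
    deltas = [i["resolved_at"] - i["detected_at"] for i in incidents]
    return sum(deltas, timedelta()) / len(deltas)
```

The analogous MTTD calculation would subtract the incident start time from the detection time, letting both metrics be trended against the classification's RTO targets.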

🔄 Regular Updates

Patch Management:

  • Security Patches: Classification-based deployment schedules
  • Dependency Updates: Automated PRs with security review
  • Framework Updates: Major version upgrades with testing
  • Business Continuity: Updates aligned with BCP

📋 Incident Response

Integration per the Incident Response Plan:

  • Classification-Driven Escalation: Incident severity based on project classification
  • Communication Procedures: Stakeholder notifications per classification
  • Recovery Objectives: RTO/RPO aligned with classification
  • Post-Incident Review: Lessons learned and improvement actions

📊 SDLC Security Maturity Levels

Level 1: Basic (Minimum Viable Security)

  • ✅ Basic security controls implemented
  • ✅ Dependabot enabled
  • ✅ Secret scanning active
  • ✅ Basic threat model documented

Level 2: Intermediate (Standard Security)

  • ✅ Level 1 + Classification implemented
  • ✅ SAST/SCA integrated in CI/CD
  • ✅ Unit test coverage ≥70%
  • ✅ SECURITY_ARCHITECTURE.md maintained
  • ✅ Regular vulnerability scanning

Level 3: Advanced (Enhanced Security)

  • ✅ Level 2 + DAST implementation
  • ✅ Comprehensive threat modeling (STRIDE + MITRE ATT&CK)
  • ✅ Unit test coverage ≥80%
  • ✅ E2E testing framework
  • ✅ Public security dashboards

Level 4: Mature (Security Excellence)

  • ✅ Level 3 + AI-augmented development controls
  • ✅ Mutation testing (≥60% score)
  • ✅ Full C4 architecture documentation
  • ✅ Continuous security monitoring
  • ✅ Evidence-based compliance (badges, reports)
  • ✅ External security validation (pentesting, audits)

✅ SDLC Security Checklist

Planning & Design Phase

  • Project classification completed (CIA triad, RTO/RPO, business impact)
  • Threat model documented (STRIDE + MITRE ATT&CK)
  • Security architecture designed (C4 models, data flows)
  • Risk assessment integrated with Risk Register
  • Cost-benefit analysis for security investments

Development Phase

  • Secure coding standards applied (OWASP Top 10)
  • Code review requirements met (classification-based)
  • Asset classification implemented
  • Secret management controls enforced
  • AI-augmented development controls active

Testing Phase

  • SAST scanning integrated (SonarCloud)
  • SCA scanning enabled (Dependabot)
  • DAST testing implemented (OWASP ZAP)
  • Secret scanning active (GitHub)
  • Unit test coverage thresholds met (≥80% line, ≥70% branch)
  • E2E testing framework operational
  • Test data protection controls enforced

Deployment Phase

  • CI/CD security gates configured
  • Manual approval gates per classification
  • Deployment checklists completed
  • Security metrics monitoring active
  • Rollback procedures documented

Maintenance Phase

  • Vulnerability management process active
  • Performance monitoring with security metrics
  • Regular update schedule defined
  • Incident response procedures integrated
  • Continuous improvement process operational

📚 References

Hack23 ISMS Core Policies

Example Implementations

External Frameworks

🎯 Remember

  • Classification Drives Security: All requirements aligned with business impact
  • Transparency is Competitive Advantage: Public security demonstrates expertise
  • AI Augments, Humans Decide: AI proposals require human approval
  • Evidence-Based Security: Badges, dashboards, reports validate claims
  • Continuous Improvement: Measure, analyze, improve security posture
  • Documentation is Mandatory: SECURITY_ARCHITECTURE.md, THREAT_MODEL.md required
  • Testing is Not Optional: Unit + E2E coverage proves quality
  • Security is Everyone's Responsibility: DevSecOps culture required

Last Updated: 2026-02-10 (Continuous)
Version: Based on Hack23 Secure Development Policy v2.1 & STYLE_GUIDE v2.3
