# Lighthouse Audit

## Quick Start
When running Lighthouse audits:
- Mobile audit: `npm run lighthouse`
- Desktop audit: `npm run lighthouse:desktop`
- JSON output: `npm run lighthouse:json`
- Analysis: `npm run lighthouse:analyze`
- Diagnostic report: `npm run lighthouse:diagnostic`
## Commands

### Mobile Audit (Default)

```bash
# Interactive mobile audit
npm run lighthouse
```
Configuration:
- Throttling: 4x CPU slowdown, 150ms RTT, 1600 Kbps
- Screen: 412x915 (mobile)
- Opens interactive report in browser
### Desktop Audit

```bash
# Interactive desktop audit
npm run lighthouse:desktop
```
Configuration:
- Throttling: 1x CPU slowdown, 40ms RTT, 10240 Kbps
- No screen emulation
- Opens interactive report in browser
### JSON Output

```bash
# Generate JSON report
npm run lighthouse:json
```
Output: `lighthouse-report.json`
Use: automated analysis and CI/CD pipelines
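For CI/CD use, the JSON report can be consumed directly. A minimal Node sketch of extracting the four category scores, assuming the standard Lighthouse report schema (`categories[id].score` as a 0-1 value); the file read is stubbed here with a small sample object:

```javascript
// Extract 0-100 category scores from a Lighthouse JSON report.
// In a real run you would load the file first:
//   const report = JSON.parse(fs.readFileSync('lighthouse-report.json', 'utf8'));
// Here a minimal sample object stands in for the real report.
const report = {
  categories: {
    performance: { score: 0.62 },
    accessibility: { score: 0.82 },
    'best-practices': { score: 0.93 },
    seo: { score: 1 },
  },
};

function categoryScores(report) {
  const out = {};
  for (const [id, cat] of Object.entries(report.categories)) {
    out[id] = Math.round(cat.score * 100); // Lighthouse stores scores as 0-1
  }
  return out;
}

console.log(categoryScores(report)); // { performance: 62, accessibility: 82, ... }
```

A CI job can compare these numbers against budget thresholds and fail the build on regressions.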
### Automated Analysis

```bash
# Analyze Lighthouse results
npm run lighthouse:analyze
```
Output: Console analysis of metrics
Shows: Scores, Core Web Vitals, opportunities
### Diagnostic Report

```bash
# Generate diagnostic report
npm run lighthouse:diagnostic

# Local diagnostic (localhost:3000)
npm run lighthouse:diagnostic:local
```
Output: Markdown report in `lighthouse-reports/`
Format: detailed analysis with recommendations
## Core Web Vitals

### Target Metrics
| Metric | Target | Good | Needs Improvement | Poor |
|---|---|---|---|---|
| LCP | < 2.5s | < 2.5s | 2.5s - 4.0s | > 4.0s |
| FID | < 100ms | < 100ms | 100ms - 300ms | > 300ms |
| CLS | < 0.1 | < 0.1 | 0.1 - 0.25 | > 0.25 |
| FCP | < 1.8s | < 1.8s | 1.8s - 3.0s | > 3.0s |
| TBT | < 300ms | < 300ms | 300ms - 600ms | > 600ms |
| SI | < 3.4s | < 3.4s | 3.4s - 5.8s | > 5.8s |

Note: in the field, FID has been superseded by INP (Interaction to Next Paint; good < 200ms) as a Core Web Vital. Lighthouse lab runs cannot measure either directly and report TBT as the responsiveness proxy.
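The thresholds above can be encoded in a small helper for automated checks. A sketch, with cutoffs taken from the table (ms for all metrics except the unitless CLS):

```javascript
// Classify a metric value against the Core Web Vitals thresholds above.
// Each entry is [good upper bound, needs-improvement upper bound].
const THRESHOLDS = {
  LCP: [2500, 4000], // ms
  FID: [100, 300],   // ms
  CLS: [0.1, 0.25],  // unitless
  FCP: [1800, 3000], // ms
  TBT: [300, 600],   // ms
  SI:  [3400, 5800], // ms
};

function rate(metric, value) {
  const [good, needsImprovement] = THRESHOLDS[metric];
  if (value < good) return 'good';
  if (value <= needsImprovement) return 'needs-improvement';
  return 'poor';
}

console.log(rate('LCP', 7580)); // 'poor' (the current mobile LCP of 7.58s)
console.log(rate('CLS', 0));    // 'good'
```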
### Current Performance

Mobile (11/03/2026, after deploying lazy Recharts + HeroSlideCarousel). URL: https://www.pintemas.com
- Performance: 62/100 🟡
- LCP: 7.58s 🔴
- FCP: 1.89s 🟡
- TBT: 331ms 🟡
- SI: 5.84s 🔴
- CLS: 0 ✅
- Accessibility: 82/100 | Best Practices: 93/100 🟢 | SEO: 100/100 🟢
Mobile baseline (23/01/2026): Performance 38, LCP 17.3s, FCP 3.2s, TBT 1,210ms, SI 7.9s. Improvements since then: lazy-loaded Swiper, IntersectionObserver, server-rendered hero, then lazy Recharts/Hero; LCP and TBT have improved versus the unoptimized builds.
Desktop:
- Performance: 93/100 🟢
- LCP: 3.2s 🟡
- FCP: 0.7s 🟢
- TBT: 60ms 🟢
- SI: 2.0s 🟢
- CLS: 0 ✅
## Analysis Workflow

### 1. Run Audit

```bash
# Mobile audit
npm run lighthouse:json
```
### 2. Analyze Results

```bash
# Automated analysis
npm run lighthouse:analyze
```
### 3. Review Opportunities
Check the report for:
- Reduce unused JavaScript: Largest opportunity
- Defer offscreen images: Image optimization
- Reduce unused CSS: CSS optimization
- Avoid legacy JavaScript: Modern browser support
- Server response time: Backend optimization
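Programmatically, these opportunities live in the report's `audits` section with `details.type === 'opportunity'`. A sketch that lists them sorted by estimated savings, assuming the standard report schema (the real report would be loaded from `lighthouse-report.json`; a sample object stands in here):

```javascript
// List optimization opportunities from a Lighthouse report, biggest savings first.
// Sample data mimics the shape of lighthouse-report.json's "audits" section.
const report = {
  audits: {
    'unused-javascript': {
      title: 'Reduce unused JavaScript',
      details: { type: 'opportunity', overallSavingsMs: 900 },
    },
    'unused-css-rules': {
      title: 'Reduce unused CSS',
      details: { type: 'opportunity', overallSavingsMs: 200 },
    },
    'first-contentful-paint': {
      title: 'First Contentful Paint',
      details: { type: 'table' }, // not an opportunity; filtered out below
    },
  },
};

function topOpportunities(report) {
  return Object.values(report.audits)
    .filter((a) => a.details && a.details.type === 'opportunity')
    .sort((a, b) => b.details.overallSavingsMs - a.details.overallSavingsMs)
    .map((a) => `${a.title}: ~${a.details.overallSavingsMs}ms`);
}

console.log(topOpportunities(report));
```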
### 4. Generate Diagnostic Report

```bash
npm run lighthouse:diagnostic
```

Output: `lighthouse-reports/diagnostic-report-*.md`
## Optimization Opportunities

Latest audit: 11/03/2026 (mobile, production).
### High Priority

- LCP (7.58s, target < 2.5s): biggest impact on the Performance score
  - Optimize the hero image (preload, size, format, quality)
  - Reduce main-thread work and JS execution time
  - Defer offscreen images: `loading="lazy"`, `fetchPriority="low"`
- Reduce unused CSS (~200ms potential savings)
  - Verify the Tailwind purge configuration
  - Remove unused CSS rules
  - Use CSS chunking
- Reduce unused JavaScript
  - Lazy load heavy components (Recharts/Hero are already lazy)
  - Dynamic imports and code splitting
  - Remove unused dependencies
### Medium Priority

- Minimize main-thread work / reduce JavaScript execution time
  - Review the report for the heaviest scripts
  - Code split by route and by component
- Server response time (~44ms potential savings)
  - Optimize database queries
  - Add database indexes
  - Implement caching
- Console errors and ARIA issues
  - Review `lighthouse-report.json` for "Browser errors" and "[aria-*] attributes do not match their roles"
### Recommended Next Steps

- LCP: Review the "Largest Contentful Paint element" and "LCP breakdown" nodes in `lighthouse-report.json` to confirm whether the element is the hero image; if so, consider an earlier preload, format/size tuning (e.g. `w=414` on mobile), or CDN/caching for the image.
- Reduce unused CSS: Run `npm run lighthouse:diagnostic` and review the report in `lighthouse-reports/` for the list of unused CSS.
## Performance Score Breakdown

### Scoring Categories
- Performance: 0-100 (weighted by Core Web Vitals)
- Accessibility: 0-100 (WCAG compliance)
- Best Practices: 0-100 (security, modern APIs)
- SEO: 0-100 (meta tags, structured data)
### Performance Score Calculation

As of Lighthouse v10, the performance score is a weighted average of five lab metrics:
- FCP: 10% weight
- SI: 10% weight
- LCP: 25% weight
- TBT: 30% weight
- CLS: 25% weight

FID is a field-only metric and is not part of the lab score; TTI was removed from the score in Lighthouse v10.
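The weighted average can be sketched directly. A minimal example, assuming the Lighthouse v10 weights (FCP 10%, SI 10%, LCP 25%, TBT 30%, CLS 25%) applied to per-metric scores that Lighthouse has already normalized to the 0-1 range:

```javascript
// Weighted performance score from per-metric scores (each already 0-1 in the
// report, under audits[id].score). Weights assume Lighthouse v10.
const WEIGHTS = { fcp: 0.10, si: 0.10, lcp: 0.25, tbt: 0.30, cls: 0.25 };

function performanceScore(metricScores) {
  let total = 0;
  for (const [id, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[id]; // each term contributes weight * score
  }
  return Math.round(total * 100);
}

// Hypothetical run: strong FCP/CLS, weak LCP, middling SI/TBT.
console.log(performanceScore({ fcp: 1, si: 0.5, lcp: 0.2, tbt: 0.6, cls: 1 }));
```

Note how TBT's 30% weight means main-thread work moves the score more than any other single metric.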
## Report Analysis

### Reading Lighthouse Reports
- Scores: Overall performance rating
- Metrics: Core Web Vitals values
- Opportunities: Optimization suggestions with savings
- Diagnostics: Additional information
- Passed Audits: What's working well
### Interpreting Opportunities
Each opportunity shows:
- Savings: Potential time saved (ms)
- Impact: High/Medium/Low priority
- Description: What needs to be done
- Learn More: Link to documentation
## Continuous Monitoring

### CI/CD Integration

Lighthouse CI is configured in:
- `.github/workflows/performance-budgets.yml`
- `lighthouserc.js`
### Automated Reports

Reports are saved to:
- `lighthouse-reports/` directory
- Timestamped filenames
- JSON and Markdown formats
## Troubleshooting

### Audit Fails or Times Out

- Check the server is running: `npm run start`
- Verify the URL is accessible
- Increase the timeout in the Lighthouse config
- Check network connectivity
### Inconsistent Results
- Run multiple audits and average
- Clear browser cache
- Use consistent throttling settings
- Check for external dependencies
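When averaging multiple runs, the median is more robust than the mean against a single outlier run (e.g. a background process skewing throttling). A small sketch:

```javascript
// Median of performance scores across several Lighthouse runs;
// more outlier-resistant than the arithmetic mean.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b); // copy, then numeric sort
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Five hypothetical runs: one badly throttled (41), one unusually fast (75).
console.log(median([62, 60, 41, 64, 75])); // 62
```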
### High LCP
- Optimize hero image
- Preload critical resources
- Reduce server response time
- Use CDN for static assets
### High TBT
- Reduce JavaScript execution time
- Code split heavy components
- Defer non-critical JavaScript
- Optimize third-party scripts
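Deferring non-critical JavaScript usually means loading a heavy module lazily and caching the result, the pattern already applied to Recharts and the hero here. A sketch with the heavy module stubbed by a factory (in the app it would be a dynamic `import()` behind a user interaction):

```javascript
// Defer a heavy module until first use instead of loading it up front,
// keeping startup main-thread work (and TBT) low.
// The module is stubbed here; in an app this would be e.g. import('recharts').
let heavyModulePromise = null;

function loadHeavyModule() {
  // Created on first call only, then the same promise is reused (cached).
  if (!heavyModulePromise) {
    heavyModulePromise = Promise.resolve({ render: () => 'chart rendered' });
  }
  return heavyModulePromise;
}

// Nothing is loaded at startup; the module cost is paid on first use.
loadHeavyModule().then((mod) => {
  console.log(mod.render());
});
```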
## Key Files

- `lighthouserc.js` - Lighthouse CI configuration
- `scripts/performance/analyze-lighthouse-results.js` - Analysis script
- `scripts/performance/lighthouse-diagnostic.js` - Diagnostic script
- `lighthouse-reports/` - Generated reports directory
## Best Practices

### When to Run Audits
- Before releases: Verify performance hasn't regressed
- After optimizations: Measure improvements
- Regular monitoring: Weekly or monthly audits
- After major changes: New features or dependencies
### Audit Environment
- Production: Best for accurate results
- Staging: Good for pre-release verification
- Local: Fast iteration, less accurate
### Comparing Results
- Use same throttling settings
- Run at similar times
- Average multiple runs
- Document changes between audits