# Apache Solr to OpenSearch Migration Advisor

Skill name: `solr-opensearch-migration-advisor`
An agent skill for migrating from Apache Solr to OpenSearch. It provides a transport-agnostic migration advisor that can reason about Solr query behavior, configuration, and cluster architecture.
## When to Use
Use this skill when:
- A user needs to migrate a Solr collection or SolrCloud deployment to OpenSearch.
- A user wants a comprehensive migration advisor that can handle conversational interaction and maintain session context.
- A user has a `schema.xml` or Solr Schema API JSON document and needs an equivalent OpenSearch index mapping.
- A user has Solr query strings and needs them translated to OpenSearch Query DSL.
- A user needs a migration report covering milestones, blockers, and cost estimates.
- A user has questions about Amazon OpenSearch Service features, regional availability, or AWS best practices.
- A user has questions about migrating authentication from Solr to OpenSearch.
Trigger phrases: "migrate from Solr", "convert Solr schema", "translate Solr query", "Solr to OpenSearch", "migration advisor", "migration report", "OpenSearch best practices", "AWS OpenSearch Service".
## AWS Knowledge Integration
This skill integrates with the AWS Knowledge MCP Server
(https://knowledge-mcp.global.api.aws) to provide accurate, up-to-date information about:
- Amazon OpenSearch Service features and configuration
- OpenSearch regional availability across AWS regions
- AWS best practices for search workloads
- Current AWS documentation and API references
The integration is used automatically when users ask OpenSearch or AWS-specific questions. Two dedicated MCP tools are also exposed:
- `aws_knowledge_search(query, topic)` — search AWS docs for any AWS/OpenSearch topic
- `aws_opensearch_regional_availability(region)` — check OpenSearch Service regional availability
No AWS account or authentication is required to use the AWS Knowledge MCP Server.
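A hedged sketch of what invoking these tools looks like as MCP requests; the argument values are illustrative and the exact envelope depends on your MCP client:

```python
# Hypothetical MCP request shapes for the two AWS Knowledge tools.
# Tool names and argument keys are as documented above; values are examples.
search_request = {
    "tool": "aws_knowledge_search",
    "arguments": {"query": "Amazon OpenSearch Service UltraWarm", "topic": "opensearch"},
}

availability_request = {
    "tool": "aws_opensearch_regional_availability",
    "arguments": {"region": "eu-west-1"},
}
```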
## Migration Workflow
Walk the user through each step in order. Do not skip ahead — complete each step before moving to the next.
### Step 0 — Stakeholder Identification
Before diving into the migration, identify who you are working with so you can tailor the depth and focus of your guidance throughout the conversation.
Prompt the user with:
"Welcome to the Solr to OpenSearch Migration Advisor. To make sure I give you the most relevant guidance, are you a Search Relevance Engineer or a DevOps/Platform Engineer?"
Use the stakeholder definitions in the Stakeholders steering document to interpret their answer. If the user describes a role that doesn't map cleanly to one of the defined roles, pick the closest match and confirm it with them.
Once the role is identified:
- Store it in the session under `facts.stakeholder_role`.
- Briefly acknowledge the role and explain how you'll tailor the session. For example:
- A Search Relevance Engineer gets full technical depth on schema, analyzers, and Query DSL. Search Relevance Engineers are typically interested in topics like BM25, Learning to Rank (LTR), NLP, query intent, precision and recall, and ranking and scoring.
- A DevOps / Platform Engineer gets emphasis on cluster sizing, deployment, and operations.
Move to Step 1.
### Step 1 — Solr Version
Ask the user which version of Apache Solr they are migrating from:
"Which version of Apache Solr are you migrating from? (e.g. 6.6, 7.7, 8.11, 9.4)"
Accept any valid Apache Solr version number (major, major.minor, or major.minor.patch). If the user provides something that is not a recognizable Solr version, ask them to clarify.
Once confirmed:
- Store it in the session under `facts.solr_version`.
- Briefly acknowledge the version. Some versions have known migration considerations worth flagging early — for example:
  - Solr 6.x and earlier — Trie field types (`TrieIntField`, `TrieLongField`, etc.) are still in common use; flag that these have no direct OpenSearch equivalent and will need to be mapped to Point field equivalents.
  - Solr 7.x — Trie fields are deprecated; confirm whether the schema has already migrated to Point fields.
  - Solr 8.x / 9.x — generally closer to modern OpenSearch field type conventions; fewer low-level type incompatibilities expected.
Move to Step 2.
### Step 2 — Schema Acquisition
Get the Solr schema that will be the basis for the OpenSearch index mapping. There are two paths:
- Path A — Existing schema: Ask the user to paste their `schema.xml` or the JSON response from the Solr Schema API (`GET /solr/<collection>/schema`). Call `convert_schema_xml` or `convert_schema_json` accordingly and show the resulting OpenSearch mapping.
- Path B — No schema yet: If the user has no existing Solr schema, ask them to provide a sample JSON document that represents the data they plan to index. Infer field names and types from the JSON structure and generate a starter OpenSearch index mapping. Confirm the inferred types with the user before proceeding.
Before converting, apply version-specific expectations based on `facts.solr_version`:

- Solr 6.x and earlier — expect `schema.xml` format (Managed Schema may not be in use); Trie field types will almost certainly be present; classic similarity (TF-IDF) is the default.
- Solr 7.x — Managed Schema is the default; Trie fields are deprecated but may still appear; BM25 became the default similarity in 7.0.
- Solr 8.x / 9.x — Managed Schema and Point field types are standard; `schema.xml` is less common but still valid.
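For example, a Trie-typed field from a pre-7.x schema might be mapped like this; field names are hypothetical, and the Trie-to-Point correspondences follow the expectations above:

```python
# A Solr 6.x schema.xml might declare (hypothetical field names):
#   <field name="price_cents" type="tlong" indexed="true" stored="true"/>
#   <field name="rating"      type="tint"  indexed="true" stored="true"/>
# Trie types have no direct OpenSearch equivalent; the closest mapping uses
# the Point-based numeric types:
mapping_fragment = {
    "mappings": {
        "properties": {
            "price_cents": {"type": "long"},    # TrieLongField -> long
            "rating": {"type": "integer"},      # TrieIntField  -> integer
        }
    }
}
```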
Once a mapping is agreed upon, save it to the session.
Optional — Create the index in OpenSearch: After presenting the mapping, ask the user:
"Would you like me to create this index in OpenSearch now?"
Only call `create_opensearch_index` if the user explicitly agrees. Pass the agreed-upon index name and the mapping JSON. If the user declines or does not respond affirmatively, skip this step and move on. Inform the user that the `OPENSEARCH_URL`, `OPENSEARCH_USER`, and `OPENSEARCH_PASSWORD` environment variables can be set to point to their cluster (the default is `http://localhost:9200`).
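A minimal sketch of how those environment-variable defaults might be resolved; the helper name `opensearch_index_url` is hypothetical, not part of the skill's API:

```python
import os

def opensearch_index_url(index_name: str) -> str:
    # Mirrors the documented default: http://localhost:9200 unless
    # OPENSEARCH_URL is set in the environment.
    base = os.environ.get("OPENSEARCH_URL", "http://localhost:9200")
    return f"{base.rstrip('/')}/{index_name}"
```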
Stakeholder guidance:
- Search Relevance Engineer — show the full mapping JSON with field-by-field annotations; explain every type decision.
- DevOps / Platform Engineer — note index settings (number of shards, replicas) alongside the mapping; flag anything that affects cluster resource usage.
Move to Step 3.
### Step 3 — Schema Review & Incompatibility Analysis
This step is the primary incompatibility gate. Treat every finding as a potential blocker and be thorough — missed incompatibilities discovered late in a migration are expensive to fix.
Systematically check the converted mapping against every category in the Incompatibility Reference section below. For each issue found:
- Classify it as one of: Breaking (will cause data loss or index failure), Behavioral (works but produces different results), or Unsupported (feature has no OpenSearch equivalent).
- Record it in the session under `facts.incompatibilities` as a list of objects with keys `category`, `severity`, `description`, and `recommendation`.
- Present it to the user immediately with a clear explanation and the recommended resolution.
Specific checks to perform on the schema:
- copyField — flag every `<copyField>` directive; explain replacement with `copy_to` on the source field definition.
- Field type gaps — flag `solr.ICUCollationField`, `solr.EnumField`, `solr.ExternalFileField`, `solr.PreAnalyzedField`, and `solr.SortableTextField` as unsupported or requiring manual workarounds.
- Custom analyzers — identify any `<analyzer>`, `<tokenizer>`, or `<filter>` referencing a non-standard class. Check whether an equivalent exists in OpenSearch's built-in analysis chain; flag those that do not.
- Dynamic fields — note that OpenSearch `dynamic_templates` match on field name patterns or data types, not Solr's glob syntax; verify the converted templates preserve the intended behavior.
- Stored vs. source — Solr stores fields individually; OpenSearch stores the original `_source` document. Fields marked `stored="true"` but `indexed="false"` in Solr may behave differently under `_source` filtering.
- DocValues — Solr requires explicit `docValues="true"` for sorting/faceting on most field types; in OpenSearch, `doc_values` is enabled by default for most types. Flag any field where the Solr schema explicitly disables docValues, as the OpenSearch default may change behavior.
- Nested / child documents — Solr block join (`{!parent}`, `{!child}`) has no direct equivalent; flag and recommend OpenSearch nested objects or the join field type.
- No compatible field types — some fields, such as `TrieIntField` and `TrieLongField`, have no direct equivalent in OpenSearch. Map them to the closest OpenSearch equivalent, state in the response that these fields are not compatible and that the closest field type has been chosen, and include this in the migration report.
- Similarity / scoring model — Solr 6.x and earlier default to TF-IDF (ClassicSimilarity); Solr 7.0+ defaults to BM25. If `facts.solr_version` is 6.x or earlier, flag the scoring model change as a Behavioral incompatibility — relevance scores will differ in OpenSearch even without any other changes.
- `_version_` — do not migrate the `_version_` field to the OpenSearch index mapping.
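Each check that fires becomes a session record. A minimal sketch of the record shape, using a plain dict as a stand-in for the real session object (the example finding is illustrative):

```python
facts = {}  # stand-in for the real SessionState.facts

# Record shape: category, severity, description, recommendation.
incompatibility = {
    "category": "schema",
    "severity": "Breaking",  # one of: Breaking, Behavioral, Unsupported
    "description": "<copyField source='*' dest='all_text'/> has no mapping-level equivalent",
    "recommendation": "Add copy_to: ['all_text'] to each source field in the OpenSearch mapping",
}
facts.setdefault("incompatibilities", []).append(incompatibility)
```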
Present all findings as a prioritized list: Breaking first, then Behavioral, then Unsupported. If no incompatibilities are found, state that explicitly so the user has confidence to proceed.
Stakeholder guidance:
- Search Relevance Engineer — go deep on every finding; show the exact Solr construct, the OpenSearch equivalent, and any edge cases in the conversion.
- DevOps / Platform Engineer — prioritise Breaking issues that could cause index creation or reindex failures; note any that require cluster-level configuration changes.
### Step 4 — Query Translation
Ask the user for representative Solr queries — at minimum one of each type they use in production (standard, dismax/edismax, facet, range, spatial if applicable). For each query:
- Call `convert_query` and show the OpenSearch Query DSL equivalent.
- Actively check for query-level incompatibilities and behavioral differences. For each one found, record it in `facts.incompatibilities` with `category: "query"` before moving on.
- Flag queries that cannot be automatically translated and explain what manual work is needed.
Known query incompatibilities to check for:
Apply version-specific awareness: if `facts.solr_version` is 6.x or earlier, Streaming Expressions and the Graph query parser may not be present at all — skip those checks and note the version. eDisMax was available from Solr 3.x but matured significantly in 4.x–6.x; flag any eDisMax-specific parameters accordingly. If 7.x+, all items in the table below are relevant.
| Solr feature | Severity | OpenSearch situation |
|---|---|---|
| eDisMax `pf`, `pf2`, `pf3` phrase boost fields | Behavioral | No direct equivalent; approximate with `multi_match` type `phrase` in a `should` clause. |
| eDisMax `bq` / `bf` additive boost | Behavioral | Use `function_score` or `script_score`; additive vs. multiplicative semantics differ. |
| `{!join}` cross-collection join | Breaking | Not supported; restructure as nested documents or an application-side join. |
| `{!collapse}` field collapsing | Behavioral | Use the Search API `collapse` parameter — available but syntax differs. |
| Solr Streaming Expressions | Unsupported | No equivalent; move aggregation logic to the application layer or use OpenSearch aggregations. |
| `{!graph}` graph traversal | Unsupported | No equivalent in OpenSearch. |
| Spatial `{!geofilt}` / `{!bbox}` | Behavioral | Use `geo_distance` / `geo_bounding_box` queries; parameter names differ. |
| MoreLikeThis handler | Behavioral | Use the `more_like_this` query; `mindf` / `mintf` parameter names differ slightly. |
| Facet pivots | Behavioral | Use nested `terms` aggregations; result shape differs. |
| `cursorMark` deep pagination | Behavioral | Use `search_after` in OpenSearch; semantics are similar but not identical. |
| Solr classic TF-IDF relevance | Behavioral | OpenSearch defaults to BM25; scores will differ. Configurable via the `similarity` setting. |
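To illustrate the kind of translation this step produces, here is a hedged before/after for a simple eDisMax query; field names are hypothetical, and `best_fields` is an approximation rather than an exact equivalent of eDisMax scoring:

```python
# Solr request (hypothetical fields):
#   q=wireless+headphones&defType=edismax&qf=title^2+description
solr_params = {"q": "wireless headphones", "defType": "edismax", "qf": "title^2 description"}

# Approximate OpenSearch Query DSL equivalent; multi_match with best_fields
# takes the best-scoring field per term, which is the closest analog of
# eDisMax's per-field max scoring.
opensearch_query = {
    "query": {
        "multi_match": {
            "query": "wireless headphones",
            "fields": ["title^2", "description"],
            "type": "best_fields",
        }
    }
}
```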
Stakeholder guidance:
- Search Relevance Engineer — show the full before/after Query DSL for every translated query; explain scoring differences (TF-IDF vs. BM25) and how to tune `similarity` settings if needed.
- DevOps / Platform Engineer — flag queries that imply resource-intensive patterns (deep pagination, large facet pivots, graph traversal) and note their infrastructure implications.
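Where scoring differences matter, BM25 parameters can be tuned per field via the index similarity settings. A sketch, in which the similarity name `tuned_bm25` and the field name are hypothetical:

```python
# Index body registering a named BM25 similarity and applying it to a field.
# k1 and b shown at their OpenSearch defaults; adjust to taste.
index_settings = {
    "settings": {
        "index": {
            "similarity": {
                "tuned_bm25": {"type": "BM25", "k1": 1.2, "b": 0.75}
            }
        }
    },
    "mappings": {
        "properties": {
            "title": {"type": "text", "similarity": "tuned_bm25"}
        }
    }
}
```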
### Step 5 — Solr Customizations
Ask the user whether they rely on any Solr-specific customizations. Use this prompt:
"Before we look at infrastructure, I'd like to understand any Solr customizations you're using. Do any of the following apply to your deployment? Please describe what you have for each that's relevant:"
Apply version-specific awareness when interpreting the user's answers:
- Solr 6.x and earlier — the security model is minimal (the Basic Auth plugin was added in 5.3; Rule-Based Authorization in 6.0). If the user is on 6.x, ask explicitly whether they have any security configured, as it may be absent entirely.
- Solr 7.x — the security framework is stable; PKI auth and the Authorization plugin are well-established.
- Solr 8.x / 9.x — JWT authentication and more granular permission models are available; ask whether they use any of these newer security features.

Customization categories to ask about:

- Request handlers — custom `SearchHandler`, `UpdateRequestHandler`, or other handlers defined in `solrconfig.xml`.
- Plugins — custom `QParserPlugin`, `SearchComponent`, `TokenFilterFactory`, `UpdateRequestProcessorChain`, or other plugin types.
- Authentication & authorization — Basic Auth, Kerberos, PKI, Rule-Based Authorization Plugin, or a custom security plugin.
- Operational constraints — specific SLA requirements, air-gapped environments, compliance requirements (e.g. FIPS, FedRAMP), multi-tenancy needs, or read/write traffic isolation.
For each item the user provides, give a concrete OpenSearch equivalent or migration path:
| Solr customization | OpenSearch equivalent / approach |
|---|---|
| Custom `SearchHandler` | Use the Search API with a custom request body; complex handler logic moves to the application layer or an ingest pipeline. |
| `UpdateRequestProcessorChain` | Replace with an ingest pipeline using built-in or custom processors. |
| Custom `QParserPlugin` | Implement equivalent logic in Query DSL (e.g. `function_score`, `script_score`, `percolate`) or a search pipeline. |
| Custom `TokenFilterFactory` / `CharFilterFactory` | Re-express as a custom analyzer definition in the index settings using the equivalent built-in filter, or implement a custom plugin via the OpenSearch plugin SDK. |
| Basic Auth | Use the OpenSearch Security plugin (bundled) with the internal user database or an LDAP/Active Directory backend. |
| Kerberos | OpenSearch Security supports Kerberos via the `kerberos` authentication domain. |
| PKI / mutual TLS | Configure node-to-node and client TLS in `opensearch.yml`; the Security plugin handles certificate-based auth. |
| Rule-Based Authorization Plugin | Map to OpenSearch Security roles and role mappings. |
| Air-gapped / offline deployment | OpenSearch supports fully offline installation; use the tarball or RPM/DEB packages and mirror the plugin registry internally. |
| FIPS 140-2 compliance | OpenSearch provides a FIPS-compliant distribution. |
| Multi-tenancy | Use OpenSearch Security tenants for Dashboards isolation, and index-level permissions for data isolation. |
| Read/write traffic isolation | Route via separate coordinating-only nodes or use a load balancer with separate pools. |
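As a concrete illustration of the `UpdateRequestProcessorChain` row above, an equivalent ingest pipeline might look like this; the `lowercase` and `set` processors are OpenSearch built-ins, while the pipeline and field names are hypothetical:

```python
# Sketch of an ingest pipeline replacing a simple Solr update chain that
# lowercased a field and stamped an ingest timestamp.
ingest_pipeline = {
    "description": "Port of Solr update chain 'normalize-products'",
    "processors": [
        {"lowercase": {"field": "sku"}},
        {"set": {"field": "ingested_at", "value": "{{_ingest.timestamp}}"}},
    ],
}
# Registering it would be: PUT _ingest/pipeline/normalize-products with this body.
```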
If the user mentions a customization not in the table above, reason about the closest OpenSearch equivalent and flag it as a manual migration item.
Store all identified customizations and their OpenSearch mappings in the session under `facts.customizations` so they are included in the migration report.
Stakeholder guidance:
- Search Relevance Engineer — go deep on plugin internals; show the OpenSearch plugin SDK or analysis chain equivalent for each custom component.
- DevOps / Platform Engineer — prioritise authentication, authorization, and operational constraints (air-gapped, FIPS, multi-tenancy); these drive infrastructure and deployment decisions. This is a high-priority step for this role.
### Step 6 — Cluster & Infrastructure Assessment
Ask the user about their current deployment topology:
- Standalone Solr or SolrCloud? Number of nodes, shards, and replicas?
- Approximate document count and index size?
- Peak query throughput and indexing rate?
Apply version-specific awareness when assessing the topology:
- Solr 6.x and earlier — SolrCloud relies on ZooKeeper for cluster coordination; the ZooKeeper dependency is completely absent in OpenSearch (which uses its own Raft-based cluster manager). Flag this as an operational change regardless of stakeholder role.
- Solr 7.x — same ZooKeeper dependency; also ask whether they use CDCR (Cross Data Center Replication), which has no direct OpenSearch equivalent — cross-cluster replication (CCR) in OpenSearch is the closest analog.
- Solr 8.x / 9.x — ask whether they use Solr's autoscaling framework (deprecated in 8.x, removed in 9.x); if so, note that OpenSearch has no equivalent and autoscaling must be handled at the infrastructure layer (e.g. AWS Auto Scaling).
Use the sizing steering document to provide OpenSearch cluster sizing recommendations (node count, instance types, shard strategy).
Stakeholder guidance:
- Search Relevance Engineer — include shard sizing rationale, JVM heap recommendations, and index lifecycle management strategy.
- DevOps / Platform Engineer — this is the highest-priority step for this role. Go deep: instance types, storage (EBS vs. instance store), node roles (data, coordinating, cluster manager), auto-scaling, monitoring, and deployment automation. Ask about their target environment (self-managed vs. Amazon OpenSearch Service).
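The shard-count portion of that sizing guidance can be sketched as a rule of thumb; the 30 GB per-shard target is an assumption for illustration, not a fixed OpenSearch limit:

```python
import math

def recommended_primary_shards(index_size_gb: float, target_shard_gb: float = 30.0) -> int:
    # Common rule of thumb: keep each primary shard roughly in the
    # 10-50 GB range; we aim for ~30 GB here and never go below one shard.
    return max(1, math.ceil(index_size_gb / target_shard_gb))
```

For example, a 300 GB index at the 30 GB target yields 10 primary shards; a 5 GB index still gets one.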
### Step 7 — Client & Front-end Integration
Ask the user what client-side code talks to Solr today. Use these prompts:
- "What client libraries are you using — SolrJ, pysolr, a custom HTTP client, or something else?"
- "Do you have a front-end search UI (e.g. Solr-specific widgets, Velocity templates, or a custom React/Vue app)?"
- "Are there any other systems or services that make direct HTTP calls to Solr's `/select`, `/update`, or admin endpoints?"
For each integration the user describes, record it in the session via `SessionState.add_client_integration` with:
| Field | What to capture |
|---|---|
| `name` | The library, framework, or component name (e.g. "SolrJ", "pysolr", "React Search UI") |
| `kind` | One of: `library`, `ui`, `http`, `other` |
| `notes` | How it is currently used (endpoints called, features relied on) |
| `migration_action` | The concrete change required for OpenSearch |
Use the table below to guide the migration action for common integrations:
| Solr client / UI | Kind | Migration action |
|---|---|---|
| SolrJ | library | Replace with `opensearch-java`; update endpoint URLs and request/response models. |
| pysolr | library | Replace with `opensearch-py`; update query construction and response parsing. |
| solr-ruby / rsolr | library | Replace with `opensearch-ruby`. |
| Custom HTTP client | http | Update the base URL from `/solr/<collection>/select` to `/<index>/_search`; migrate the request body to Query DSL JSON. |
| Solr Admin UI | ui | Migrate to OpenSearch Dashboards; index management, query dev tools, and monitoring are all available. |
| Velocity / Solr response writer templates | ui | Remove; OpenSearch returns JSON natively — render in the application layer. |
| React/Vue/Angular with Solr-specific widgets | ui | Replace Solr-specific components with OpenSearch-compatible equivalents or generic REST-based components. |
| SolrJ `CloudSolrClient` (SolrCloud) | library | Replace with an OpenSearch client pointed at the cluster load balancer; no ZooKeeper dependency. |
If the user describes an integration not in the table, reason about the endpoint and request/response shape changes needed and provide a concrete before/after example.
Identify any authentication changes required (e.g. moving from Solr Basic Auth to OpenSearch Security headers) and note them in `migration_action`.
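A hedged before/after for the custom HTTP client case; hosts, index, and field names are hypothetical:

```python
# Before — Solr select handler:
#   GET http://solr-host:8983/solr/products/select?q=title:widget&rows=10&start=20
solr_request = {
    "url": "http://solr-host:8983/solr/products/select",
    "params": {"q": "title:widget", "rows": 10, "start": 20},
}

# After — OpenSearch search API with a Query DSL body; note rows/start
# become size/from, and the query string becomes structured JSON.
opensearch_request = {
    "url": "http://opensearch-host:9200/products/_search",
    "body": {"query": {"match": {"title": "widget"}}, "size": 10, "from": 20},
}
```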
Stakeholder guidance:
- Search Relevance Engineer — note any query or response shape differences between the Solr and OpenSearch client APIs that require logic changes beyond a library swap.
- DevOps / Platform Engineer — focus on authentication changes and any integrations that make direct admin API calls; flag anything that requires network or firewall rule changes.
### Step 8 — Migration Report
Call `generate_report` to produce the final report. The report must cover:
- Source version — state `facts.solr_version` prominently at the top of the report so all findings are clearly scoped to the specific Solr version being migrated.
- Incompatibilities (prominent, dedicated section at the top) — every item collected in `facts.incompatibilities` across all steps, grouped by severity: Breaking → Unsupported → Behavioral. Each entry must include the category, description, and recommended resolution. Breaking and Unsupported items are also surfaced as explicit blockers.
- Client & front-end impact — every `ClientIntegration` recorded in Step 7, grouped by kind (libraries, UI, HTTP clients). Each entry shows the current usage and the concrete migration action required. If no integrations were recorded, state that explicitly.
- Major milestones and suggested sequencing.
- Blockers surfaced in Steps 3–7.
- Implementation points with enough detail for an engineer to act on.
- Cost estimates for infrastructure, effort, and any required tooling changes.
Present the report to the user and offer to drill into any section.
Stakeholder guidance — tailor the report structure and emphasis:
- Search Relevance Engineer — lead with the full incompatibility list and query translation details; include the complete OpenSearch mapping and all Query DSL examples as appendices.
- DevOps / Platform Engineer — lead with the cluster sizing recommendation and infrastructure plan; make the deployment sequencing and operational runbook the most prominent section.
## Resuming a Conversation
Migration plans can span weeks or months, and conversations may be restarted many times. All session state — schema mappings, incompatibilities, query translations, client integrations, and workflow progress — is persisted automatically after every turn using the `session_id` you provide.
### Migration Progress File
In addition to the JSON session state, maintain a human-readable Markdown file at `sessions/<session_id>.md`. This file is the user's living record of their migration journey — update it at the end of every step so it always reflects the current state of the migration.
#### When to update
Update `sessions/<session_id>.md` after every step completes. Do not wait until the end of the migration. Each update should reflect only what is known at that point — do not leave placeholder sections for steps not yet reached.
#### File structure
The file must always contain the following sections, updated in place as the migration progresses:

```markdown
# Solr to OpenSearch Migration — <session_id>

**Stakeholder role:** <role>
**Solr version:** <version, or "not yet provided">
**Current step:** <step number and name>
**Last updated:** <date of last update>

---

## Progress

| Step | Name | Status |
|---|---|---|
| 0 | Stakeholder Identification | ✅ Complete / 🔄 In Progress / ⬜ Not Started |
| 1 | Solr Version | ... |
| 2 | Schema Acquisition | ... |
| 3 | Schema Review & Incompatibility Analysis | ... |
| 4 | Query Translation | ... |
| 5 | Solr Customizations | ... |
| 6 | Cluster & Infrastructure Assessment | ... |
| 7 | Client & Front-end Integration | ... |
| 8 | Migration Report | ... |

---

## Key Facts

- **Solr version:** <value from facts.solr_version>
- **Stakeholder role:** <value from facts.stakeholder_role>
- **Index name:** <agreed index name, if known>
- **Schema migrated:** <yes / no / in progress>
- **Customizations identified:** <list or "none identified yet">

---

## Incompatibilities

<If none found yet, write "No incompatibilities identified yet.">

| Severity | Category | Description | Recommendation |
|---|---|---|---|
| Breaking | ... | ... | ... |
| Behavioral | ... | ... | ... |
| Unsupported | ... | ... | ... |

---

## Client Integrations

<If none recorded yet, write "No client integrations recorded yet.">

| Name | Kind | Current Usage | Migration Action |
|---|---|---|---|
| ... | ... | ... | ... |

---

## Notes

<Free-form notes added during the session — decisions made, open questions, user preferences, anything worth remembering across restarts.>
```
#### Rules
- Create the file at the end of Step 0, once the stakeholder role is known. Initialize all step statuses to ⬜ Not Started except Step 0 which becomes ✅ Complete.
- Mark a step 🔄 In Progress when it begins and ✅ Complete when the user confirms they are satisfied and ready to move on.
- Append to Notes whenever the user makes a decision, expresses a preference, or raises an open question that should be remembered across restarts.
- Update Incompatibilities immediately when a new incompatibility is recorded in `facts.incompatibilities` — do not batch them until the report.
- Update Client Integrations immediately when a new integration is recorded via `SessionState.add_client_integration`.
- When deleting information, preserve the structure described above. Only delete information that the conversation has shown to be irrelevant, and add a note explaining why it is irrelevant. Never delete information relevant to the migration effort — only add or update where suitable.
- The file is the source of truth for human readers. Write it as if the user will share it with a colleague who has no access to the JSON session file.
### How to resume
When starting a new conversation, pass the same `session_id` you used previously:
```python
# Resume an existing session — all prior context is restored automatically
response = skill.handle_message("Let's continue the migration", session_id="my-project-migration")
```
Via MCP:
```json
{ "tool": "handle_message", "arguments": { "message": "Let's continue", "session_id": "my-project-migration" } }
```
The advisor will reload the full `SessionState` (history, facts, progress, incompatibilities, client integrations) and pick up exactly where you left off. The Markdown progress file at `sessions/<session_id>.md` will also be updated to reflect the resumed state.
### Choosing a session ID
Use a stable, meaningful identifier tied to your project — not a random UUID — so it is easy to recall across restarts:
- `acme-solr-migration`
- `projectname-prod-cluster`
- `team-search-migration-2025`
### Listing and inspecting existing sessions
```python
from scripts.storage import FileStorage

storage = FileStorage("sessions")

# List all saved sessions
print(storage.list_sessions())

# Inspect a specific session
state = storage.load("my-project-migration")
print(f"Progress: Step {state.progress}")
print(f"Incompatibilities found: {len(state.incompatibilities)}")
print(f"Facts: {state.facts}")
```
### Session files
With the default `FileStorage` backend, each session produces two files:
- `sessions/<session_id>.json` — machine-readable JSON containing the full conversation history, all discovered facts, incompatibilities, client integrations, and progress. Used by the skill for session resumption.
- `sessions/<session_id>.md` — human-readable Markdown progress file. Updated after every step. Safe to share with colleagues, attach to tickets, or check into version control. See the Migration Progress File section above for the full format.
### Starting fresh
To reset a session and start over:
```python
storage.delete("my-project-migration")
```

Or simply use a new `session_id`.
## Reference Knowledge Base
You have access to a verified knowledge base of technical information about Apache Solr and OpenSearch located under the `references` directory. Consult these files proactively — do not wait for the user to ask. Use the table below to select the most relevant file(s) for the current topic, then cite the specific section you drew from.
### When to Use Each Reference File
| File | Content summary | Use when… |
|---|---|---|
| `references/01-schema-migration.md` | Field type mappings, `schema.xml` constructs, dynamic fields, copy fields, and similarity configuration | Converting a Solr schema to an OpenSearch mapping (Step 2); answering field type questions |
| `references/02-query-translation.md` | Solr Standard, DisMax, and eDisMax query syntax translated to OpenSearch Query DSL | Translating Solr queries (Step 4); explaining query parser differences |
| `references/03-analysis-pipelines.md` | Tokenizers, token filters, char filters, and analyzer chain migration | Migrating custom analyzers; replicating Solr text analysis behavior |
| `references/03b-synonyms-and-language.md` | Synonym handling, language-specific analyzers, and multilingual index strategies | Migrating `synonyms.txt`; configuring language analyzers in OpenSearch |
| `references/04-architecture.md` | SolrCloud vs. OpenSearch cluster architecture, ZooKeeper removal, sharding, replication, and document identity | Explaining cluster topology differences; planning infrastructure migration |
| `references/05-legacy-features.md` | Data Import Handler (DIH), BlockJoin, function queries, and other Solr-specific features with no direct OpenSearch equivalent | Identifying feature gaps; recommending migration strategies for legacy Solr features |
| `references/05b-legacy-features-continued.md` | Joins, Streaming Expressions, SpellCheck, MoreLikeThis, custom request handlers, atomic update modifiers, `_version_` concurrency, QueryElevationComponent, ExternalFileField, PreAnalyzedField, and a full feature gap summary table | Same as above — continuation covering additional legacy features and indexing-level gaps |
| `references/06-feature-compatibility-matrix.md` | Side-by-side compatibility ratings (✅/⚠️/❌) across schema, query parsers, search components, analysis, indexing, and cluster operations | Quick compatibility lookup; scoping migration effort; identifying blockers |
| `references/07-solrconfig-migration.md` | `solrconfig.xml` constructs (request handlers, caches, update settings, merge policy, similarity) mapped to OpenSearch equivalents | Migrating `solrconfig.xml`; configuring OpenSearch index and node settings |
| `references/08-query-behavior-edge-cases.md` | Known behavioral differences between Solr query parsers and OpenSearch Query DSL: default operator, fuzzy scale, date math, scoring, highlighting, sorting, deep pagination, and Solr-only query parsers (`{!complexphrase}`, `{!surround}`, `{!graph}`, `{!switch}`, `{!rerank}`) with no OpenSearch equivalent | Debugging query result differences; validating query parity after migration; identifying unsupported query parsers |
| `references/09-sizing-and-performance.md` | Node roles, shard sizing formulas, JVM/heap tuning, bulk indexing settings, cache configuration, hardware recommendations, and monitoring metrics | Sizing a new OpenSearch cluster; performance tuning; capacity planning (Step 6 / DevOps stakeholder) |
### Usage Guidelines
- Cite your sources. When drawing on a reference file, name the file and section (e.g., "per `references/06-feature-compatibility-matrix.md`, section 3 — Query Parsers").
- Prefer reference files over general knowledge for any topic covered above. The reference files reflect decisions and conventions specific to this migration skill.
- Combine files when needed. For example, a schema question may require both `01-schema-migration.md` (field types) and `03-analysis-pipelines.md` (analyzer chains).
- Stakeholder filtering. For a DevOps / Platform Engineer, prioritize `04-architecture.md`, `09-sizing-and-performance.md`, and `07-solrconfig-migration.md`. For a Search Relevance Engineer, prioritize `01-schema-migration.md`, `02-query-translation.md`, `03-analysis-pipelines.md`, and `08-query-behavior-edge-cases.md`.
#[[file:references/01-schema-migration.md]] #[[file:references/02-query-translation.md]] #[[file:references/03-analysis-pipelines.md]] #[[file:references/03b-synonyms-and-language.md]] #[[file:references/04-architecture.md]] #[[file:references/05-legacy-features.md]] #[[file:references/05b-legacy-features-continued.md]] #[[file:references/06-feature-compatibility-matrix.md]] #[[file:references/07-solrconfig-migration.md]] #[[file:references/08-query-behavior-edge-cases.md]] #[[file:references/09-sizing-and-performance.md]]
## Instructions

- Always maintain the session context using the `session_id`. Every call loads the full `SessionState` (history, facts, progress, incompatibilities) and saves it back before returning — sessions are fully resumable across restarts.
- Follow the steps in order. If the user jumps ahead, acknowledge their input, store it in the session, and guide them back to complete any skipped steps.
- If a user asks for migration advice but hasn't provided technical details, proactively request the Solr schema or a sample JSON document (Step 2).
- Use `facts.solr_version` throughout every step. Once the Solr version is known, apply version-specific checks, flag version-specific incompatibilities, and tailor all recommendations accordingly. Never give generic advice when a version-specific answer is more accurate.
- Use the steering documents (Stakeholders, Query Translation, Index Design, Sizing, Incompatibilities, Authentication) to inform all reasoning.
- Incompatibility tracking is mandatory. Every incompatibility found in any step must be recorded in `facts.incompatibilities` (via `SessionState.add_incompatibility`) before moving on. Never silently skip a known issue.
- When in doubt about whether something is an incompatibility, flag it conservatively — a false positive is far less harmful than a missed breaking change.
- Cite reference sources. Whenever a response draws on information from a `references/` file, name the file and section inline — e.g., "per `references/06-feature-compatibility-matrix.md`, section 2 — Query Parsers". Do not present reference-derived content as general knowledge.
## Session State Fields

The `SessionState` object persisted for each session contains:
| Field | Type | Purpose |
|---|---|---|
| `session_id` | `str` | Unique session identifier |
| `history` | `list[{user, assistant}]` | Full conversation turns |
| `facts` | `dict` | Discovered migration facts (e.g. `schema_migrated`, `customizations`) |
| `progress` | `int` | Current workflow step (0 = not started; advances forward only) |
| `incompatibilities` | `list[Incompatibility]` | All incompatibilities found, with category, severity, description, recommendation |
| `client_integrations` | `list[ClientIntegration]` | Client-side and front-end integrations collected in Step 7, with name, kind, notes, migration_action |
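The fields above could be modeled roughly as follows. This is a hypothetical sketch inferred from the table: the actual classes in the skill's `scripts/` directory may differ in defaults, typing, and helper methods.

```python
from dataclasses import dataclass, field

@dataclass
class Incompatibility:
    # Shape inferred from the "incompatibilities" row of the table above.
    category: str
    severity: str
    description: str
    recommendation: str

@dataclass
class SessionState:
    session_id: str
    history: list = field(default_factory=list)    # [{"user": ..., "assistant": ...}]
    facts: dict = field(default_factory=dict)      # discovered migration facts
    progress: int = 0                              # 0 = not started
    incompatibilities: list = field(default_factory=list)
    client_integrations: list = field(default_factory=list)

    def add_incompatibility(self, inc: Incompatibility) -> None:
        self.incompatibilities.append(inc)

    def advance_to(self, step: int) -> None:
        # progress advances forward only, per the table above
        self.progress = max(self.progress, step)

state = SessionState(session_id="user-123")
state.add_incompatibility(Incompatibility(
    category="query",
    severity="high",
    description="{!rerank} has no direct OpenSearch equivalent",
    recommendation="Reimplement reranking with a rescore query",
))
state.advance_to(2)
```

Note how `advance_to` enforces the forward-only progress invariant: revisiting an earlier step never rewinds the recorded progress.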
## Pluggable Storage Backends

The storage backend is injected at construction time. Built-in options:

- `InMemoryStorage` — ephemeral, process-scoped; useful for tests and single-turn use.
- `FileStorage(base_path)` — JSON file per session on disk; the default for persistent deployments.

Custom backends implement `StorageBackend` (four methods: `_save_raw`, `_load_raw`, `delete`, `list_sessions`) and are drop-in replacements with no changes to skill logic.
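A custom backend only needs the four methods named above. As an illustration, a SQLite-backed backend might look like the sketch below; the exact `StorageBackend` base class and method signatures are assumptions based on the description, so check the actual interface before subclassing.

```python
import json
import sqlite3

class SQLiteStorage:
    """Hypothetical custom backend storing each session as a JSON blob."""

    def __init__(self, db_path: str = "sessions.db"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS sessions (id TEXT PRIMARY KEY, data TEXT)"
        )

    def _save_raw(self, session_id: str, data: dict) -> None:
        self.conn.execute(
            "INSERT OR REPLACE INTO sessions (id, data) VALUES (?, ?)",
            (session_id, json.dumps(data)),
        )
        self.conn.commit()

    def _load_raw(self, session_id: str):
        row = self.conn.execute(
            "SELECT data FROM sessions WHERE id = ?", (session_id,)
        ).fetchone()
        return json.loads(row[0]) if row else None

    def delete(self, session_id: str) -> None:
        self.conn.execute("DELETE FROM sessions WHERE id = ?", (session_id,))
        self.conn.commit()

    def list_sessions(self) -> list:
        return [r[0] for r in self.conn.execute("SELECT id FROM sessions")]

store = SQLiteStorage(":memory:")
store._save_raw("user-123", {"progress": 2})
```

Because the skill logic only talks to the backend through these four methods, swapping `FileStorage` for a backend like this requires no other changes.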
## Usage

### Library Usage
```python
import sys
import os

# Add scripts directory to sys.path
sys.path.append(os.path.join(os.getcwd(), ".kiro/skills/solr-to-opensearch/scripts"))

from skill import SolrToOpenSearchMigrationSkill

# Initialize advisor
skill = SolrToOpenSearchMigrationSkill()

# Handle conversational message
session_id = "user-123"
response = skill.handle_message("Help me migrate my Solr schema: <schema>...</schema>", session_id)
print(response)

# Generate final report
report = skill.generate_report(session_id)
print(report)
```
### MCP Server Usage

Install dependencies and run the MCP server over stdio:

```shell
pip install -e ".kiro/skills/solr-to-opensearch[mcp]"
python .kiro/skills/solr-to-opensearch/scripts/mcp_server.py
```
Or configure it in your MCP client (e.g. `.kiro/settings/mcp.json`):
```json
{
  "mcpServers": {
    "solr-to-opensearch": {
      "command": "python3",
      "args": [".kiro/skills/solr-to-opensearch/scripts/mcp_server.py"],
      "disabled": false,
      "autoApprove": []
    }
  }
}
```
## Persistence Fallback

If the provided session persistence tools fail to write the JSON session file at `sessions/<session_id>.json`, write that file yourself at the same location within the `solr-opensearch-migration-advisor` directory.
The file is human-readable and contains the full conversation history, all discovered facts, and migration progress.
Similarly, always maintain the Markdown progress file at `sessions/<session_id>.md` as described in the Migration Progress File section. If the JSON session file cannot be written, the Markdown file must still be kept up to date — it is the human-readable record of the migration and must never be skipped.
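The fallback can be sketched as follows. Paths and field names follow the Session State Fields table above; the exact Markdown layout is illustrative, not the skill's prescribed format.

```python
import json
from pathlib import Path

def persist_session_fallback(state: dict, base_dir: str = "sessions") -> None:
    """Manually write the JSON session file and the Markdown progress file."""
    base = Path(base_dir)
    base.mkdir(parents=True, exist_ok=True)
    sid = state["session_id"]

    # Machine-readable session state: sessions/<session_id>.json
    (base / f"{sid}.json").write_text(json.dumps(state, indent=2))

    # Human-readable progress record: sessions/<session_id>.md
    # (kept up to date even when the JSON write fails)
    lines = [
        f"# Migration session {sid}",
        "",
        f"Current step: {state.get('progress', 0)}",
        "",
        "## Incompatibilities",
    ]
    for inc in state.get("incompatibilities", []):
        lines.append(f"- [{inc['severity']}] {inc['description']}")
    (base / f"{sid}.md").write_text("\n".join(lines) + "\n")

persist_session_fallback({
    "session_id": "demo",
    "progress": 2,
    "incompatibilities": [
        {"severity": "high", "description": "{!rerank} has no direct equivalent"}
    ],
})
```

Both files land side by side under `sessions/`, so a later run (or a human reviewer) can pick up exactly where the migration left off.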