This tool migrates your data from Cosmic API v2 to Cosmic API v3, transferring all object types and objects.
- Install dependencies:

  ```bash
  bun install
  ```

- Create a `.env` file in the root directory with the following variables:

  ```bash
  # Cosmic API v2 credentials
  COSMIC_V2_BUCKET_SLUG=your-v2-bucket-slug
  COSMIC_V2_READ_KEY=your-v2-read-key

  # Cosmic API v3 credentials
  COSMIC_V3_BUCKET_SLUG=your-v3-bucket-slug
  COSMIC_V3_READ_KEY=your-v3-read-key
  COSMIC_V3_WRITE_KEY=your-v3-write-key
  ```
You can find these credentials in your Cosmic dashboard:
- For v2: Go to Bucket Settings > API Access
- For v3: Go to Project Settings > API Access
If you want to migrate everything at once:

```bash
bun run migrate
```
For more control, you can run the migration in two steps:

- First, migrate only the object types:

  ```bash
  bun run migrate-object-types
  ```

- After verifying and, if needed, adjusting the v3 object types, migrate the objects:

  ```bash
  bun run migrate-objects
  ```
This two-step approach allows you to:
- Migrate the object types first
- Verify the object types were created correctly in v3 (see the sketch below)
- Make any necessary adjustments to the v3 object types
- Then migrate the actual objects with confidence that they'll match the object type schemas
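For the verification step, a minimal sketch, assuming the `@cosmicjs/sdk` v3 client (the client setup in your own scripts may differ):

```ts
// List the object types that now exist in v3 so they can be compared
// against the v2 bucket before objects are migrated.
import { createBucketClient } from "@cosmicjs/sdk";

const cosmic = createBucketClient({
  bucketSlug: process.env.COSMIC_V3_BUCKET_SLUG!,
  readKey: process.env.COSMIC_V3_READ_KEY!,
});

const { object_types } = await cosmic.objectTypes.find();
for (const type of object_types) {
  console.log(`${type.title} (${type.slug})`);
}
```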
If you need to start fresh and remove data from your v3 bucket:

```bash
bun run delete-object-types
bun run delete-objects
bun run truncate-bucket
```
All scripts will:
- List what will be deleted
- Prompt for confirmation before proceeding
- Execute the deletion in parallel with reasonable batching (sketched below)
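As an illustration of that flow, a hedged sketch; the function and parameter names are illustrative, not the scripts' actual identifiers:

```ts
import readline from "node:readline/promises";

// List the targets, ask for confirmation, then delete in parallel batches.
async function confirmAndDelete(
  ids: string[],
  deleteOne: (id: string) => Promise<void>,
  batchSize = 10 // assumed batch size; keeps parallelism reasonable
): Promise<void> {
  console.log(`About to delete ${ids.length} items:`);
  for (const id of ids) console.log(`  - ${id}`);

  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const answer = await rl.question("Proceed? (y/N) ");
  rl.close();
  if (answer.trim().toLowerCase() !== "y") return;

  // Process batchSize deletions at a time so requests run in parallel
  // without overwhelming the API.
  for (let i = 0; i < ids.length; i += batchSize) {
    await Promise.all(ids.slice(i, i + batchSize).map(deleteOne));
  }
}
```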
The migration tool provides the following features:

- Migrates all object types with their configuration
- Migrates all objects with their metadata and content
- Transforms image URLs in metadata to filenames (extracts filename from URLs)
- Removes 'value' property from repeater and parent metafields in object types
- Detailed error logging with complete object information for debugging
- Console logging for tracking migration progress
- Error handling for individual objects
- Pagination support for handling large datasets (see the sketch after this list)
- Optional skipping of existing items
- Split migration process for better control
- Schema mismatch detection and tracking
- Enhanced reporting system with recommended actions
- Support for custom field transformations
- Rate limit handling with exponential backoff
- Batch processing for large datasets
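As an example of the pagination pattern, a minimal sketch of the limit/skip loop; the `fetchPage` callback stands in for the actual v2 API call:

```ts
// Page through a bucket until a short page signals the end of the data.
async function fetchAllObjects<T>(
  fetchPage: (limit: number, skip: number) => Promise<T[]>,
  pageSize = 100
): Promise<T[]> {
  const all: T[] = [];
  let skip = 0;
  while (true) {
    const page = await fetchPage(pageSize, skip);
    all.push(...page);
    if (page.length < pageSize) break; // last page reached
    skip += pageSize;
  }
  return all;
}
```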
In the `migrate-object-types.ts` file, you can adjust:

```ts
const config = {
  // ... API credentials
  settings: {
    skipExistingTypes: false, // Set to true to skip existing object types in v3
  },
};
```
In the `migrate-objects.ts` file, you can adjust:

```ts
const config = {
  // ... API credentials
  settings: {
    pageSize: 100, // Number of objects to fetch per page
    skipExistingObjects: false, // Set to true to skip existing objects in v3
    filterObjectTypes: [], // Array of object type slugs to filter (empty = all types)
  },
};
```
The `filterObjectTypes` option allows you to migrate only specific object types. For example:

```ts
settings: {
  filterObjectTypes: ["posts", "authors"], // Only migrate objects of these types
}
```
During object migration, the tool checks for mismatches between your v2 object data and the v3 object type schemas. When mismatches are found, they are tracked and a detailed report is generated in `schema-mismatch-report.json` (the file name is configurable).
Example report:

```json
{
  "summary": {
    "totalMismatches": 2,
    "objectTypesWithMismatches": 2,
    "timestamp": "2025-05-07T19:30:03.427Z"
  },
  "mismatchesByType": {
    "homepage-client": [
      {
        "typeSlug": "homepage-client",
        "fieldKey": "button",
        "expectedType": "text",
        "actualType": "object",
        "occurrences": 3,
        "objectIds": [
          "661e51ce19e3627d5a0a0302",
          "661e51ce19e3627d5a0a0303",
          "661e51ce19e3627d5a0a0304"
        ],
        "recommendedAction": "Field 'button' contains object values but should be text - Check data or update schema"
      }
    ]
  }
}
```
Use this report to:
- Identify fields that don't match between v2 and v3
- Find which objects have the issues using the objectIds (see the sketch below)
- Follow the recommended actions to fix schemas or data
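Since the report is plain JSON, it is also easy to scan programmatically. A minimal sketch, assuming the report shape shown above:

```ts
// Print one line per mismatched field, with the affected object IDs.
import { readFileSync } from "node:fs";

const report = JSON.parse(readFileSync("schema-mismatch-report.json", "utf8"));
for (const [typeSlug, mismatches] of Object.entries(report.mismatchesByType)) {
  for (const m of mismatches as any[]) {
    console.log(
      `${typeSlug}.${m.fieldKey}: expected ${m.expectedType}, got ${m.actualType}` +
        ` (${m.occurrences} occurrences: ${m.objectIds.join(", ")})`
    );
  }
}
```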
To change the output file or disable file output entirely:

```ts
// In utils.ts config section
config.settings.reportOutput = "my-custom-report.json"; // or null to disable
```
When a migration fails for specific objects or object types, detailed error information is logged, including:
- The error message
- The complete v2 object that failed to migrate
- The v3 object type schema it was trying to match
This information helps identify issues like:
- Missing required fields
- Type mismatches (e.g., text vs. object)
- Schema incompatibilities
For example, you might see something like:

```
❌ Error migrating object: Sample Post (sample-post)
Error: Validation failed: Field 'tags' expects an array but received a string

Full v2 object that caused error: {
  // Full object JSON
}

v3 object type model: {
  // Full object type schema
}
```
You can define custom transformations for specific fields when their data structure needs special handling:

```ts
// In utils.ts
config.settings.fieldTransformations = {
  // Format: 'typeSlug.fieldKey': (value) => transformedValue

  // Convert string to array (for fields that need to be arrays in v3)
  "blog.tags": (value) => (Array.isArray(value) ? value : value ? [value] : []),

  // Convert string prices to numbers
  "products.price": (value) =>
    typeof value === "string" ? parseFloat(value) : value,

  // Transform complex nested structures
  "page.seo": (value) => ({
    title: value?.title || "",
    description: value?.description || "",
    image: value?.image ? extractFilenameFromUrl(value.image.url) : "",
  }),
};
```
These transformations are applied during object migration (see the sketch after this list) and can help:
- Fix type mismatches automatically
- Restructure complex nested data
- Normalize data formats
- Handle edge cases specific to your content model
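A minimal sketch of how such a `typeSlug.fieldKey` map can be applied to an object's metadata (illustrative, not the tool's exact implementation):

```ts
type Transform = (value: unknown) => unknown;

// Apply any transformation registered for `${typeSlug}.${fieldKey}`.
function applyTransformations(
  typeSlug: string,
  metadata: Record<string, unknown>,
  transformations: Record<string, Transform>
): Record<string, unknown> {
  const out: Record<string, unknown> = { ...metadata };
  for (const key of Object.keys(out)) {
    const transform = transformations[`${typeSlug}.${key}`];
    if (transform) out[key] = transform(out[key]);
  }
  return out;
}
```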
Credentials and several optional settings can be controlled via environment variables:

```bash
# Required credentials (as shown in Setup section)
COSMIC_V2_BUCKET_SLUG=your-v2-bucket-slug
COSMIC_V2_READ_KEY=your-v2-read-key
COSMIC_V3_BUCKET_SLUG=your-v3-bucket-slug
COSMIC_V3_READ_KEY=your-v3-read-key
COSMIC_V3_WRITE_KEY=your-v3-write-key

# Optional settings
BATCH_PROCESS=true   # Enable batch processing mode
BATCH_DELAY=2000     # Delay between processing different object types (ms)
```
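For reference, a short sketch of how these optional variables might map onto runtime settings (the scripts' actual parsing may differ):

```ts
const batchProcess = process.env.BATCH_PROCESS === "true"; // assumed truthy convention
const batchDelayMs = Number(process.env.BATCH_DELAY ?? "2000"); // milliseconds
```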
For large migrations, you can use batch processing to migrate objects in smaller chunks, which helps avoid API rate limits and makes the process more manageable. Enable batch processing by setting environment variables:

```bash
BATCH_PROCESS=true
BATCH_DELAY=2000 # Delay in milliseconds between processing different object types
```
When batch processing is enabled, you can also specify which object types to migrate in a single run:

```bash
# Migrate a specific object type by index (1-based)
bun run migrate-objects 3

# Migrate a range of object types by index
bun run migrate-objects 1-5

# Migrate specific types by slug
bun run migrate-objects posts,pages,authors
```
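A hedged sketch of how such selector arguments could be interpreted (a 1-based index, an index range, or a comma-separated slug list); `selectTypes` is an illustrative name:

```ts
// Resolve a CLI selector like "3", "1-5", or "posts,pages,authors"
// against the full list of object type slugs.
function selectTypes(arg: string, allTypes: string[]): string[] {
  const range = arg.match(/^(\d+)-(\d+)$/);
  if (range) return allTypes.slice(Number(range[1]) - 1, Number(range[2]));
  if (/^\d+$/.test(arg)) return [allTypes[Number(arg) - 1]];
  return arg.split(",").map((s) => s.trim());
}
```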
This selective migration is especially useful when:
- You have a large number of object types
- You need to retry migration for specific types after fixing schema issues
- You want to implement a staged migration process
The migration tool now includes intelligent handling of API rate limits, using exponential backoff with jitter to automatically retry operations when rate limits are hit. This provides:
- Automatic detection of rate limit errors (HTTP 429 or error messages)
- Smart retry logic with increasing delays between attempts
- Detailed logging of retry attempts with countdown timers
- Configurable retry parameters (max retries, initial delay, max delay)
All API operations that might encounter rate limits are wrapped with retry logic that handles them automatically, with no manual intervention required:

- Fetching objects from the v2 API
- Checking whether objects exist in v3
- Creating/updating objects in v3
- Fetching object type models
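The retry pattern looks roughly like the following sketch (the parameter names and error-detection details are illustrative; the script's actual values may differ):

```ts
// Retry an operation with exponential backoff plus jitter when rate limited.
async function withRetry<T>(
  op: () => Promise<T>,
  { maxRetries = 5, initialDelayMs = 1000, maxDelayMs = 30000 } = {}
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err: any) {
      const rateLimited =
        err?.status === 429 || /rate limit/i.test(String(err?.message));
      if (!rateLimited || attempt >= maxRetries) throw err;
      // Double the delay each attempt, cap it, and add jitter (x0.5-1.5).
      const delay =
        Math.min(initialDelayMs * 2 ** attempt, maxDelayMs) * (0.5 + Math.random());
      console.log(`Rate limited; retrying in ${Math.round(delay)}ms (attempt ${attempt + 1}/${maxRetries})`);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```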
The migration script now generates a detailed report of all schema mismatches encountered during migration, including:
- Field-by-field analysis of schema mismatches
- Specific recommendations for how to fix each issue
- Sample object IDs where the issue occurs
- Option to save detailed reports to a JSON file for further analysis
The report provides actionable insights to help you fix your object type schemas in v3 to better match your v2 data structure.
The script automatically transforms image metadata. For example:

```js
// Original metadata with image URL
{
  "hero_image": {
    "url": "https://cdn.cosmicjs.com/bucket/path/image.jpg",
    "imgix_url": "https://imgix.cosmicjs.com/bucket/path/image.jpg"
  }
}

// Transformed to just the filename
{
  "hero_image": "image.jpg"
}
```
This transformation is applied recursively to all nested objects in the metadata.
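A minimal sketch of that recursive transformation (the real script's detection of image objects may differ; here anything with `url` and `imgix_url` is treated as an image):

```ts
// Replace image objects with their filename, recursing through metadata.
function transformImages(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(transformImages);
  if (value && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    if (typeof obj.url === "string" && "imgix_url" in obj) {
      return obj.url.split("/").pop(); // e.g. ".../path/image.jpg" -> "image.jpg"
    }
    return Object.fromEntries(
      Object.entries(obj).map(([k, v]) => [k, transformImages(v)])
    );
  }
  return value;
}
```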
For object types, the script removes the `value` property from repeater and parent type metafields:

```js
// Original metafield with value
{
  "type": "repeater",
  "title": "Items",
  "key": "items",
  "value": [...], // This will be removed
  "children": [...]
}

// Processed metafield
{
  "type": "repeater",
  "title": "Items",
  "key": "items",
  "children": [...]
}
```
This ensures proper compatibility with the v3 API format.
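A minimal sketch of that cleanup step, recursing into `children` (illustrative, not the tool's exact code):

```ts
// Drop the 'value' property from repeater and parent metafields.
function stripValues(metafields: any[]): any[] {
  return metafields.map((field) => {
    const copy = { ...field };
    if (copy.type === "repeater" || copy.type === "parent") delete copy.value;
    if (Array.isArray(copy.children)) copy.children = stripValues(copy.children);
    return copy;
  });
}
```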
When an error occurs during migration, the script logs:
- The error message
- The v3 object type model (schema) for reference when migrating objects
- The complete v2 object or object type that caused the error
This makes it easier to diagnose and fix issues with specific content. When an object fails to migrate, seeing both the v3 model (what was expected) and the v2 object (what was being migrated) side by side helps quickly identify schema mismatches or field discrepancies.
For example, you might see that the v3 model expects a required field that is missing in the v2 object, or that the data types don't match between versions.
If you encounter rate limits or other API restrictions, try reducing the `pageSize` setting or adding delays between operations (for example, by increasing `BATCH_DELAY`).
This project uses Bun as the JavaScript runtime.