
Storage Operations

This guide explains the core storage concepts and provides examples of how to use the Synapse SDK to store, retrieve, and manage data on Filecoin On-Chain Cloud.

Data Set: A logical container of pieces stored with one provider. When a data set is created, a payment rail is established with that provider. All pieces in the data set share this single payment rail and are verified together via PDP proofs.

PieceCID: Content-addressed identifier for your data (format: bafkzcib...). Automatically calculated during upload and used to retrieve data from any provider.

Metadata: Optional key-value pairs for organization:

  • Data Set Metadata: Max 10 keys (e.g., project, environment)
  • Piece Metadata: Max 5 keys per piece (e.g., filename, contentType)
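
These limits can be checked client-side before an upload. The helper below is not an SDK API, just a sketch of local validation under the stated limits:

```typescript
// Not an SDK API: a local guard for the metadata limits described above
// (max 10 keys per data set, max 5 keys per piece).
function withinMetadataLimit(
  metadata: Record<string, string>,
  maxKeys: number,
): boolean {
  return Object.keys(metadata).length <= maxKeys;
}

const dataSetMetadata = { project: "my-dapp", environment: "production" };
const pieceMetadata = { filename: "report.bin", contentType: "application/octet-stream" };

console.log(withinMetadataLimit(dataSetMetadata, 10)); // checked against the data set limit
console.log(withinMetadataLimit(pieceMetadata, 5)); // checked against the piece limit
```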

Storage Manager: The main entry point for storage operations. It handles provider selection and data set management, and can download a piece from any provider that has it (provider-agnostic), using a StorageContext under the hood.

Storage Context: A connection to a specific storage provider and data set. Created explicitly for fine-grained control or automatically by StorageManager. Enables uploads and downloads with the specific storage provider.

The SDK offers two ways to work with storage operations:

| Approach | Who It’s For | What SDK Handles | When to Use |
| --- | --- | --- | --- |
| Auto-Managed | Most developers | Provider selection, data set creation, management | Getting started, simple apps, quick prototypes |
| Explicit Control | Advanced users | Nothing; you control everything | Batch operations, specific providers, cost optimization |

Recommendation: Start with auto-managed, then explore explicit control only if needed.

Upload and download data with zero configuration: the SDK automatically selects a provider and manages the data set.

```typescript
const data = new Uint8Array([1, 2, 3, 4, 5]);
const result = await synapse.storage.upload(data);
const downloaded = await synapse.storage.download(result.pieceCid);

console.log("Uploaded:", result.pieceCid);
console.log("Downloaded:", downloaded.length, "bytes");
```

Add metadata to organize uploads and speed up data set reuse: the SDK reuses any existing data set whose metadata matches.

```typescript
const context = await synapse.storage.createContext({
  metadata: {
    Application: "My DApp",
    Version: "1.0.0",
    Category: "Documents",
  },
});
const result = await synapse.storage.upload(data, { contexts: [context] });

console.log("Uploaded:", result.pieceCid);
```

When you call upload(), the SDK selects storage providers, uploads your data, and commits it on-chain. By default it stores 2 copies on separate providers for redundancy. The result tells you exactly what happened:

```typescript
const result = await synapse.storage.upload(data)

// The PieceCID identifies your data across all providers
console.log("PieceCID:", result.pieceCid)

// Each copy is committed on-chain with its own provider and data set
for (const copy of result.copies) {
  console.log(`Copy on provider ${copy.providerId}, dataset ${copy.dataSetId}`)
}

// If any provider failed, it appears here (the upload still succeeded on others)
for (const failure of result.failures) {
  console.warn(`Provider ${failure.providerId} failed: ${failure.error}`)
}
```

Key points:

  • result.copies — each entry is a confirmed on-chain copy of your data. If you requested 2 copies and both succeeded, you get 2 entries.
  • result.failures — providers that failed during the upload. Empty when everything worked. The upload still returns a result as long as at least one copy succeeded.
  • If the upload can’t store your data at all, it throws a StoreError. If data was stored but couldn’t be committed on any provider, it throws a CommitError. Both are safe to retry.
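
Since both error types are safe to retry, a small retry wrapper is all that’s needed. The helper below is not part of the SDK, just a generic sketch that assumes nothing beyond “the call may throw”:

```typescript
// Generic retry helper (not an SDK API): re-runs an async operation a fixed
// number of times, rethrowing the last error once all attempts are exhausted.
async function withRetry<T>(op: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}
```

Used as `await withRetry(() => synapse.storage.upload(data))`, it surfaces an error only after every attempt has failed.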

You can request more copies with { count: 3 } for additional redundancy.

Data sets are automatically created during your first upload to a provider. For explicit management of data sets, use these operations:

When You Need Explicit Data Sets:

  • Uploading many files to same provider
  • Want consistent provider for your application
  • Need to track costs per data set
  • Building batch upload workflows
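
A batch workflow built from the calls shown in this guide might look like the sketch below. It reuses one explicit context so every piece lands in the same data set with the same provider; the client parameter is typed loosely because this is illustrative, not a definitive implementation:

```typescript
// Sketch: upload many files through one explicit StorageContext so they all
// share a provider, data set, and payment rail. `client` stands in for a
// configured Synapse instance; only calls shown in this guide are used.
async function uploadBatch(
  client: {
    storage: {
      createContext(opts: { metadata: Record<string, string> }): Promise<unknown>;
      upload(data: Uint8Array, opts: { contexts: unknown[] }): Promise<{ pieceCid: string }>;
    };
  },
  files: Uint8Array[],
): Promise<string[]> {
  // One context => one data set, reused for every upload below
  const context = await client.storage.createContext({
    metadata: { project: "batch-demo" },
  });
  const pieceCids: string[] = [];
  for (const file of files) {
    const result = await client.storage.upload(file, { contexts: [context] });
    pieceCids.push(result.pieceCid);
  }
  return pieceCids;
}
```

Called as `await uploadBatch(synapse, files)`, this returns one PieceCID per file while creating the data set only once.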

Retrieve all data sets owned by your account to inspect piece counts, CDN status, and metadata:

```typescript
const dataSets = await synapse.storage.findDataSets();
for (const ds of dataSets) {
  console.log(`Dataset ${ds.pdpVerifierDataSetId}:`, {
    live: ds.isLive,
    cdn: ds.withCDN,
    pieces: ds.activePieceCount,
    metadata: ds.metadata,
  });
}
```

List all pieces stored in a specific data set by iterating through the context:

```typescript
const context = await synapse.storage.createContext({ dataSetId });
const pieces = [];
for await (const piece of context.getPieces()) {
  pieces.push(piece);
}
console.log(`Found ${pieces.length} pieces`);
```

Calculate a data set’s total on-chain size from its PDP leaf count, where each leaf covers 32 bytes:

```typescript
const pdpVerifier = PDPVerifier.create();
const leafCount = await pdpVerifier.getDataSetLeafCount(dataSetId);
const sizeInBytes = leafCount * 32n; // Each leaf is 32 bytes
console.log(`Data set size: ${sizeInBytes} bytes`);
```

Access custom metadata attached to individual pieces for organization and filtering:

```typescript
const warmStorage = WarmStorageService.create();
const metadata = await warmStorage.getPieceMetadata(dataSetId, piece.pieceId);
console.log("Piece metadata:", metadata);
```

Calculate the size of a specific piece by extracting it from the PieceCID:

```typescript
import { getSizeFromPieceCID } from "@filoz/synapse-sdk/piece";

const size = getSizeFromPieceCID(pieceCid);
console.log(`Piece size: ${size} bytes`);
```

Query service-wide pricing, available providers, and network parameters:

```typescript
const info = await synapse.getStorageInfo();
console.log("Price/TiB/month:", info.pricing.noCDN.perTiBPerMonth);
console.log("Providers:", info.providers.length);

const providerInfo = await synapse.getProviderInfo("0x...");
console.log("PDP URL:", providerInfo.pdp.serviceURL);
```

Ready to explore more? Here’s your learning path:

  • Advanced Operations → Learn about batch uploads, lifecycle management, and download strategies. For developers building production applications with specific provider requirements.

  • Plan Storage Costs → Calculate your monthly costs and understand funding requirements. Use the quick calculator to estimate costs in under 5 minutes.

  • Payment Management → Manage deposits, approvals, and payment rails. Required before your first upload.