
Custom Indexing Framework

The custom indexing framework exposes specific interfaces that you implement to define your data processing logic. The core APIs are:

  • process(): Transform raw checkpoint data (transactions, events, object changes) into your desired database rows. This is where you extract meaningful information, filter relevant data, and format it for storage.

  • commit(): Store your processed data to the database with proper transaction handling. The framework calls this with batches of processed data for efficient bulk operations.

  • prune(): Optionally remove old data according to your retention policies. Useful for managing database size by deleting outdated records while preserving recent data.
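The three hooks above can be sketched as a trait. This is an illustrative Rust sketch, not the framework's actual API: the names `Pipeline`, `Checkpoint`, `Row`, and `TxIndexer` are hypothetical, and the in-memory `Vec` stands in for a real database.

```rust
/// Minimal stand-in for raw checkpoint data (hypothetical type).
struct Checkpoint {
    sequence: u64,
    transactions: Vec<String>,
}

/// A processed database row (hypothetical type).
#[derive(Debug, Clone)]
struct Row {
    checkpoint: u64,
    digest: String,
}

/// The three hooks a custom pipeline implements (illustrative trait,
/// mirroring the process/commit/prune APIs described above).
trait Pipeline {
    /// Transform one checkpoint into zero or more rows.
    fn process(&self, checkpoint: &Checkpoint) -> Vec<Row>;
    /// Persist a batch of rows; returns how many were written.
    fn commit(&mut self, rows: &[Row]) -> usize;
    /// Drop rows older than `keep_from`; returns how many were removed.
    fn prune(&mut self, keep_from: u64) -> usize;
}

/// Toy pipeline that "stores" rows in memory instead of a database.
struct TxIndexer {
    store: Vec<Row>,
}

impl Pipeline for TxIndexer {
    fn process(&self, cp: &Checkpoint) -> Vec<Row> {
        // Extract one row per transaction digest in the checkpoint.
        cp.transactions
            .iter()
            .map(|d| Row { checkpoint: cp.sequence, digest: d.clone() })
            .collect()
    }

    fn commit(&mut self, rows: &[Row]) -> usize {
        // A real implementation would bulk-insert inside a DB transaction.
        self.store.extend_from_slice(rows);
        rows.len()
    }

    fn prune(&mut self, keep_from: u64) -> usize {
        // Retention policy: keep only checkpoints >= keep_from.
        let before = self.store.len();
        self.store.retain(|r| r.checkpoint >= keep_from);
        before - self.store.len()
    }
}

fn main() {
    let mut indexer = TxIndexer { store: Vec::new() };
    for seq in 0..3 {
        let cp = Checkpoint {
            sequence: seq,
            transactions: vec![format!("tx-{seq}")],
        };
        let rows = indexer.process(&cp);
        indexer.commit(&rows);
    }
    let pruned = indexer.prune(2); // drop checkpoints 0 and 1
    println!("stored={} pruned={}", indexer.store.len(), pruned);
}
```

In a real pipeline the framework drives this loop for you: it feeds checkpoints to `process()`, batches the results into `commit()`, and calls `prune()` on your retention schedule.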

Sequential and concurrent pipeline types and their trade-offs are detailed in Pipeline Architecture.