Python
from chonkie.cloud import TokenChunker

chunker = TokenChunker(api_key="{api_key}")

chunks = chunker(text="YOUR_TEXT")
Example response (200):

[
  {
    "text": "<string>",
    "start_index": 123,
    "end_index": 123,
    "token_count": 123
  }
]

Authorizations

Authorization
string
header
required

Your API Key from the Chonkie Cloud dashboard

Body

multipart/form-data
file
file

The file to chunk.

tokenizer
string
default:gpt2

Tokenizer to use. Can be a string identifier or a tokenizer instance.

chunk_size
integer
default:2048

Maximum number of tokens per chunk.

chunk_overlap
integer
default:0

Number or percentage of overlapping tokens between chunks.

return_type
enum<string>
default:chunks

Whether to return chunks as Chunk objects or plain text strings.

Available options:
texts,
chunks
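The interplay of chunk_size and chunk_overlap above can be sketched with a plain Python loop. This is a minimal illustration only, using a naive whitespace split as a stand-in tokenizer; the actual service tokenizes with the tokenizer named in the request (gpt2 by default), so real token counts will differ.

```python
def chunk_tokens(text, chunk_size=2048, chunk_overlap=0):
    """Sketch of fixed-size token chunking with overlap.

    Uses whitespace splitting as a stand-in for a real tokenizer.
    """
    tokens = text.split()
    # Each new chunk starts chunk_size - chunk_overlap tokens after
    # the previous one, so consecutive chunks share chunk_overlap tokens.
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + chunk_size]
        chunks.append({"text": " ".join(window), "token_count": len(window)})
        if start + chunk_size >= len(tokens):
            break
    return chunks

chunks = chunk_tokens("one two three four five six", chunk_size=3, chunk_overlap=1)
# Windows: "one two three", "three four five", "five six" --
# each chunk repeats the last token of the previous chunk.
```

With chunk_overlap=0 the windows tile the text with no repetition; a larger overlap trades extra tokens for more context continuity between chunks.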

Response

200 - application/json

Successful Response: A list of Chunk objects.

A list containing Chunk objects, each detailing a segment of the original text based on token count.

text
string

The actual text content of the chunk.

start_index
integer

The starting character index of the chunk within the original input text.

end_index
integer

The ending character index (exclusive) of the chunk within the original input text.

token_count
integer

The number of tokens in this specific chunk, according to the tokenizer used.