Python

from chonkie.cloud import NeuralChunker

chunker = NeuralChunker(api_key="{api_key}")

chunks = chunker(text="YOUR_TEXT")

Example response:

[
  {
    "text": "<string>",
    "start_index": 123,
    "end_index": 123,
    "token_count": 123
  }
]

Authorizations

Authorization
string
header
required

Your API Key from the Chonkie Cloud dashboard

Body

multipart/form-data
file
file

The file to chunk.

model
string
default:mirth/chonky_modernbert_large_1

The identifier of the fine-tuned BERT model to use.

min_characters_per_chunk
integer
default:10

Minimum number of characters required for a valid chunk.

return_type
enum<string>
default:chunks

Whether to return the results as Chunk objects ("chunks") or as plain text strings ("texts").

Available options:
texts,
chunks
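For reference, the multipart/form-data body above can also be assembled by hand. The following is a minimal sketch using only the Python standard library; the endpoint URL is an assumption (check the Chonkie Cloud dashboard for the actual one), while the field names and defaults mirror the parameters documented above. The request is built but not sent.

```python
import os
import urllib.request
import uuid

# Assumed endpoint URL -- verify against your Chonkie Cloud dashboard.
API_URL = "https://api.chonkie.ai/v1/chunk/neural"


def build_request(api_key: str, file_path: str,
                  model: str = "mirth/chonky_modernbert_large_1",
                  min_characters_per_chunk: int = 10,
                  return_type: str = "chunks") -> urllib.request.Request:
    """Assemble (but do not send) the multipart/form-data request."""
    if return_type not in ("texts", "chunks"):
        raise ValueError("return_type must be 'texts' or 'chunks'")

    boundary = uuid.uuid4().hex
    parts = []

    # Plain form fields, named exactly as in the Body section above.
    fields = {
        "model": model,
        "min_characters_per_chunk": str(min_characters_per_chunk),
        "return_type": return_type,
    }
    for name, value in fields.items():
        parts.append(
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f"{value}\r\n"
        )

    # The file to chunk, attached as the "file" field.
    with open(file_path, "rb") as f:
        file_bytes = f.read()
    parts.append(
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; '
        f'filename="{os.path.basename(file_path)}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    )

    body = "".join(parts).encode() + file_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": api_key,
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
        method="POST",
    )
```

Sending the prepared request is then a single `urllib.request.urlopen(req)` call; in practice the SDK shown above handles all of this for you.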

Response

200 - application/json

Successful Response: A list of Chunk objects.

A list containing Chunk objects, representing segments split based on semantic shifts detected by the neural model.

text
string

The actual text content of the chunk.

start_index
integer

The starting character index of the chunk within the original input text.

end_index
integer

The ending character index (exclusive) of the chunk within the original input text.

token_count
integer

The number of tokens in this specific chunk, according to the tokenizer used.
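The index fields make each chunk verifiable against the original input: because end_index is exclusive, slicing the source string with them reproduces the chunk's text. A small sketch, assuming the chunks cover the input contiguously and in order (so joining the spans reconstructs it):

```python
def reassemble(original: str, chunks: list[dict]) -> str:
    """Check each chunk against its span in the source, then stitch them back.

    Relies on end_index being exclusive, so original[start:end] is the
    chunk's text. Assumes chunks are contiguous and ordered.
    """
    pieces = []
    for chunk in chunks:
        span = original[chunk["start_index"]:chunk["end_index"]]
        # Each chunk's text should match the slice its indices describe.
        assert span == chunk["text"], "chunk text should match its span"
        pieces.append(span)
    return "".join(pieces)
```

This is a handy sanity check when post-processing responses, e.g. when mapping chunks back to highlights in the source document.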