DOC: What is the maximum chunk size returned from SemanticChunker.split_documents() #28250

Open
@abhipandey09

Description

URL

No response

Checklist

  • I added a very descriptive title to this issue.
  • I included a link to the documentation page I am referring to (if applicable).

Issue with current documentation:

I need to know the maximum chunk size that can be returned from SemanticChunker.split_documents() for large documents.
Can it be more than 8k tokens? I need to send each chunk for embedding, and a chunk with more than 8k tokens will fail with the Azure embedding model.
Need help!!
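
For anyone hitting this in the meantime: as far as I can tell, SemanticChunker splits at semantic breakpoints and does not enforce any hard token ceiling, so on a large document a single chunk can exceed 8k tokens. Below is a minimal sketch of one workaround: re-splitting any oversized chunk with a token-aware splitter before embedding. The deployment name, the cl100k_base encoding choice, and the raw_documents variable are placeholders for illustration, not something confirmed by this issue.

```python
import tiktoken
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import AzureOpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

MAX_TOKENS = 8000  # keep a margin under the Azure embedding model's 8k-token limit
enc = tiktoken.get_encoding("cl100k_base")  # assumption: the deployed model uses cl100k_base

embeddings = AzureOpenAIEmbeddings(azure_deployment="my-embedding-deployment")  # placeholder
semantic_splitter = SemanticChunker(embeddings)

# Token-aware fallback splitter that does guarantee a hard size cap.
fallback = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base",
    chunk_size=MAX_TOKENS,
    chunk_overlap=200,
)

docs = semantic_splitter.split_documents(raw_documents)  # raw_documents: your loaded Documents

safe_docs = []
for doc in docs:
    if len(enc.encode(doc.page_content)) <= MAX_TOKENS:
        safe_docs.append(doc)
    else:
        # Re-split any semantic chunk that would exceed the embedding limit.
        safe_docs.extend(fallback.split_documents([doc]))
```

With this, every chunk handed to the embedding call is guaranteed to fit, at the cost of occasionally cutting a semantic chunk mid-topic.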

Idea or request for content:

The documentation should clearly explain this.


Labels

stale: Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed.
🤖:docs: Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder.
