
Investigate memory overflow possibility when reading large collections #108


Open
gagik opened this issue Apr 24, 2025 · 1 comment
Labels
investigation required Requires investigation, might open more issues depending on investigation.

Comments

@gagik
Collaborator

gagik commented Apr 24, 2025

Some of the read tools simply run a query (e.g. an aggregation or a find) and load the entire result set into memory. This can be quite problematic for a collection with millions of documents.

We should test this against a very large collection and come up with mitigations (perhaps impose a hard limit, or run a count first and prompt the user to supply a limit if the collection is too large). A rough sketch of the hard-limit idea is included below.
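A minimal sketch of what a hard limit could look like, assuming the tool calls the MongoDB Node.js driver directly; the `DOCUMENT_LIMIT` constant and `findWithLimit` helper are hypothetical names for illustration, not part of the current codebase:

```typescript
import { Collection, Document, Filter } from "mongodb";

// Hypothetical cap on how many documents a read tool returns in one call.
const DOCUMENT_LIMIT = 1000;

interface LimitedResult {
  documents: Document[];
  // True when more matching documents exist than the cap allows, so the
  // caller can prompt the user for an explicit limit or a narrower filter.
  truncated: boolean;
}

async function findWithLimit(
  collection: Collection,
  filter: Filter<Document> = {}
): Promise<LimitedResult> {
  // Fetch one extra document to detect truncation instead of issuing a
  // separate (potentially expensive) countDocuments call.
  const documents = await collection
    .find(filter)
    .limit(DOCUMENT_LIMIT + 1)
    .toArray();

  const truncated = documents.length > DOCUMENT_LIMIT;
  return {
    documents: truncated ? documents.slice(0, DOCUMENT_LIMIT) : documents,
    truncated,
  };
}
```

The tool response could then include the `truncated` flag so the model knows to ask the user before re-running the query with a higher limit.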

@nirinchev
Collaborator

We should also look into streaming responses so that we don't load everything in memory: https://github.com/cyanheads/model-context-protocol-resources/blob/main/guides/mcp-server-development-guide.md#streaming-responses-transport-feature
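A sketch of what cursor-based streaming could look like on the driver side, assuming the response layer can emit partial results; the `EmitChunk` callback is a hypothetical placeholder for whatever the MCP transport exposes:

```typescript
import { Collection, Document } from "mongodb";

// Hypothetical callback standing in for however the transport streams partial output.
type EmitChunk = (documents: Document[]) => Promise<void>;

async function streamFind(
  collection: Collection,
  emitChunk: EmitChunk,
  batchSize = 100
): Promise<void> {
  // The driver's cursor fetches documents from the server in batches, so
  // only one batch is held in memory at a time instead of the full result set.
  const cursor = collection.find({}).batchSize(batchSize);

  let batch: Document[] = [];
  for await (const doc of cursor) {
    batch.push(doc);
    if (batch.length >= batchSize) {
      await emitChunk(batch);
      batch = [];
    }
  }
  if (batch.length > 0) {
    await emitChunk(batch);
  }
}
```

Whether this helps in practice depends on the transport supporting streamed responses, as described in the linked guide.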

@fmenezes added the "investigation required" label Apr 29, 2025