Added support for Invalid LLM API KEYS #638
base: main
Conversation
Invalid LLM API keys will now be reported with a clean, structured message.
🦋 Changeset detected
Latest commit: 8c7c375
The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
PR Summary
This PR introduces a dedicated error handling approach for invalid API keys across multiple LLM client modules, providing clearer, structured error messages via a new custom error class.
- Updated /lib/llm/AnthropicClient.ts to catch authentication errors and throw InvalidLLMKeyError.
- Modified /lib/llm/CerebrasClient.ts and /lib/llm/GroqClient.ts to handle OpenAI.AuthenticationError with detailed logging and custom error propagation.
- Revised /lib/llm/OpenAIClient.ts to wrap core calls in try/catch blocks and implement the new invalid API key handling.
- Introduced InvalidLLMKeyError in /types/stagehandErrors.ts for consistent error feedback.
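To make the described pattern concrete, here is a minimal sketch of the catch-and-rethrow approach. The InvalidLLMKeyError name and the file paths come from the summary above; the constructor signature, message text, and surrounding client code are assumptions rather than the PR's actual diff.

```typescript
// /types/stagehandErrors.ts (sketch; fields and message text are assumptions)
export class InvalidLLMKeyError extends Error {
  constructor(provider: string) {
    super(
      `Invalid or missing API key for ${provider}. ` +
        `Please verify the key passed to the client.`,
    );
    this.name = "InvalidLLMKeyError";
  }
}

// /lib/llm/OpenAIClient.ts (sketch of the pattern the summary describes)
import OpenAI from "openai";

async function createChatCompletionSketch(client: OpenAI) {
  try {
    return await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "ping" }],
    });
  } catch (err) {
    // The OpenAI SDK throws a typed AuthenticationError on 401 responses;
    // Groq and Cerebras surface the same class via their OpenAI-compatible clients.
    if (err instanceof OpenAI.AuthenticationError) {
      throw new InvalidLLMKeyError("OpenAI");
    }
    throw err; // anything else is not an auth failure, so propagate it unchanged
  }
}
```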
5 file(s) reviewed, no comment(s)
thanks so much for contributing to stagehand! instead of modifying each client individually, can we instead change … also, please add a changeset - just run …
Yeah, that sounds like a good idea. However, even with that, we'll still need to check which client is being used: OpenAI, Groq, and Cerebras are similar (they go through the OpenAI client), while Anthropic has a different API and error pattern, so we have to handle both error shapes. If we add a new client in the future, we'll need to update this part again to support its error format. Do you want me to proceed this way? A sketch of that trade-off is below.
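Even a centralized helper still needs one branch per error shape, so a new provider means a new branch. The helper name, the provider union, and the Anthropic status check below are hypothetical illustrations, not code from this PR.

```typescript
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
// Path per the PR summary; the export itself is what this PR introduces.
import { InvalidLLMKeyError } from "../../types/stagehandErrors";

type Provider = "openai" | "groq" | "cerebras" | "anthropic";

// Hypothetical shared helper: collapse each provider's auth failure into
// the single InvalidLLMKeyError. Adding a client later means adding a branch.
function rethrowIfInvalidKey(err: unknown, provider: Provider): never {
  if (err instanceof OpenAI.AuthenticationError) {
    // Covers OpenAI, Groq, and Cerebras, since the latter two reuse the
    // OpenAI-compatible client and its error hierarchy.
    throw new InvalidLLMKeyError(provider);
  }
  if (err instanceof Anthropic.APIError && err.status === 401) {
    // Anthropic's SDK has its own error classes and status handling.
    throw new InvalidLLMKeyError(provider);
  }
  throw err; // not an auth problem; let the original error propagate
}
```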
why
Previously, a generic StagehandDefaultError was displayed when LLM API keys were not present, which did not provide concise details about the error.
what changed
Invalid LLM API keys are now reported with a clean, structured message.
Added error handling in the createChatCompletion method for the various LLM clients (OpenAI, Groq, Anthropic, Cerebras).
Created a new InvalidLLMKeyError class for the custom invalid API key message.
test plan
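A minimal smoke test of the new behavior might look like the following; the harness and assertions here are illustrative assumptions, not the PR's actual tests.

```typescript
import { InvalidLLMKeyError } from "./types/stagehandErrors";

// Hypothetical check: running any client call with a deliberately bogus key
// should now raise InvalidLLMKeyError instead of a generic StagehandDefaultError.
async function expectInvalidKey(run: () => Promise<unknown>): Promise<void> {
  try {
    await run();
    console.error("FAIL: expected InvalidLLMKeyError, but the call succeeded");
  } catch (err) {
    if (err instanceof InvalidLLMKeyError) {
      console.log("OK: invalid key reported with a structured message");
    } else {
      console.error("FAIL: unexpected error type", err);
    }
  }
}
```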