Replies: 5 comments
-
There should be no need to manually manage this. Changing the number of retries should not have much effect; do you want the request to fail faster? If you need additional configurability, you can use the StaticJsonRpcProvider directly, with:

```js
const connection = {
  url: YOUR_ALCHEMY_URL,
  throttleLimit: RETRIES
};

// For a backend whose network cannot change, the StaticJsonRpcProvider
// is substantially more performant.
const provider = new ethers.providers.StaticJsonRpcProvider(connection, network);
```

There are plenty of low-level things you can then override; see https://github.com/ethers-io/ethers.js/blob/master/packages/web/src.ts/index.ts#L38

Let me know if that helps. :)
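If it is useful, here is a rough sketch of a few of the other ConnectionInfo fields that, as far as I know, @ethersproject/web honours in v5; the URL and the exact numbers below are placeholders, not recommendations:

```js
// Sketch only: illustrative ConnectionInfo options for ethers v5.
const { ethers } = require("ethers");

const connection = {
  url: process.env.ALCHEMY_URL,   // placeholder endpoint
  timeout: 30000,                 // fail a single request after 30s instead of the default two minutes
  throttleLimit: 3,               // how many times a throttled request is re-attempted
  throttleSlotInterval: 100,      // back-off slot (ms) used between re-attempts
  throttleCallback: (attempt, url) => {
    console.log(`throttled; attempt ${attempt} against ${url}`);
    return Promise.resolve(true); // true => keep retrying
  },
};

const provider = new ethers.providers.StaticJsonRpcProvider(connection, "homestead");
```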
-
It's not this; my main problem is that I find myself writing a lot of code like:

```js
while (true) {
  try {
    return await Contract.operation();
  } catch (e) {
    if (e.code === "TIMEOUT") {
      continue;
    } else {
      throw e;
    }
  }
}
```

And I would appreciate it if there were some way to have that automatically bundled into the requests.
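A rough sketch of how that pattern could be factored into a reusable helper; the function name and the attempt/delay parameters are just illustrative, not anything from ethers.js:

```js
// Retry a call when ethers raises a TIMEOUT error; rethrow anything else.
async function withTimeoutRetry(call, maxAttempts = 5, delayMs = 1000) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await call();
    } catch (e) {
      if (e.code !== "TIMEOUT" || attempt >= maxAttempts) {
        throw e;
      }
      // Small pause before re-issuing the request.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: wrap the contract call in a closure so it is re-issued on each attempt.
// const balance = await withTimeoutRetry(() => contract.balanceOf(owner));
```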
-
You definitely should not need to do that. That error means the server has failed to respond within two minutes, which is probably indicative of a larger issue. I have never had to do that; I usually use the default provider though, which gracefully falls back to other backends and swallows timeouts that another backend picked up...

What percentage of requests succeed versus time out? Do you have something configured in your Alchemy account for whitelisting/blacklisting that might be causing the backend to drop your request? Is there a proxy or link configuration that might be messing with it (e.g. an entry in the ...)?
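For reference, a minimal sketch of what I mean by the default provider (ethers v5); the API keys are placeholders, and as I understand it a timeout from a single backend is absorbed as long as the remaining backends still reach quorum:

```js
// Sketch only: spread requests across several backends instead of one.
const { ethers } = require("ethers");

const provider = ethers.getDefaultProvider("homestead", {
  alchemy: process.env.ALCHEMY_KEY,     // placeholder API keys
  infura: process.env.INFURA_KEY,
  etherscan: process.env.ETHERSCAN_KEY,
});
```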
-
Around 0.3% of all requests time out.
No, it's a stock account that hasn't had any configuration changes. The requests have also not been rate-limited.
Not that I am aware of. Furthermore, the pattern in which I'm seeing these errors suggests it's probably not related to that either, as I just see timeout errors randomly on requests that later return properly. I've also talked with a person from Alchemy and so far we have not been able to diagnose the root cause. Personally, I think this is related to the fact that I'm querying rarely-accessed data at blocks that are ~8 months old; since that's quite uncommon, the startup time required to serve some of those requests may be what triggers the timeouts. This is wild speculation, though.
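To give a concrete (made-up) example of the kind of historical query I mean: reading contract state at an old block via the blockTag override, which presumably forces the backend onto the slower archival path. The addresses and block number here are placeholders:

```js
// Sketch only: a read against ~8-month-old state with ethers v5.
const { ethers } = require("ethers");

const TOKEN_ADDRESS = process.env.TOKEN_ADDRESS;   // placeholder ERC-20 address
const HOLDER = process.env.HOLDER_ADDRESS;         // placeholder account

const provider = new ethers.providers.AlchemyProvider("homestead", process.env.ALCHEMY_KEY);
const token = new ethers.Contract(
  TOKEN_ADDRESS,
  ["function balanceOf(address) view returns (uint256)"],
  provider
);

async function main() {
  // The blockTag override asks the backend to serve state as of that block.
  const balance = await token.balanceOf(HOLDER, { blockTag: 13000000 });
  console.log(balance.toString());
}

main();
```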
-
Oh! I have seen old data on Alchemy being slow to respond... especially for [...]. Have you tried using [...]?
-
According to #1162, retries are already enabled; however, when using AlchemyProvider I've been hitting a lot of timeouts that result in an error being raised, some examples:
Would it be possible to make the retries in the provider configurable in some way, so I don't have to build that on top of ethers.js (by catching errors and re-sending the request)?
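To illustrate what I mean by building that on top of ethers.js, this is a rough sketch of retrying TIMEOUT errors inside a provider subclass; the class name, retry count and back-off values are just illustrative:

```js
// Sketch only: retry TIMEOUT errors at the JSON-RPC send() layer (ethers v5)
// so application code does not need its own retry loops.
const { ethers } = require("ethers");

class RetryingAlchemyProvider extends ethers.providers.AlchemyProvider {
  async send(method, params) {
    let lastError;
    for (let attempt = 0; attempt < 3; attempt++) {
      try {
        return await super.send(method, params);
      } catch (e) {
        if (e.code !== "TIMEOUT") { throw e; }
        lastError = e;
        // Linear back-off before re-sending the request.
        await new Promise((resolve) => setTimeout(resolve, 1000 * (attempt + 1)));
      }
    }
    throw lastError;
  }
}

const provider = new RetryingAlchemyProvider("homestead", process.env.ALCHEMY_KEY);
```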