Fuel Testnet Node problem

I ran into a problem when starting the node. As far as I understand, the node begins synchronizing with the network on startup, and at that point I get the errors shown in the screenshot. After that I get some normal-looking logs, and then errors similar to the previous ones continue:


P.S. Sorry about the image, but as a new member I can only post one. You can open it and select “original size”.

After that I get a few more normal logs, but then the errors appear again.

I also tried replacing 0.0.0.0 with the public IP address reported by “curl ifconfig.me”, and I tried running on port 5000, but all attempts were in vain.
If you need additional information, I am ready to provide it.
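To rule out a simple binding or firewall problem while changing the IP and port flags, a quick connectivity check can tell you whether anything is actually listening on the node's GraphQL port. This is my own diagnostic sketch, not a Fuel tool; the host and port are assumptions matching the --ip/--port flags used below.

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

# 4000 matches the --port flag passed to `fuel-core run` below.
print(port_open("127.0.0.1", 4000))
```

If this prints False while the node is running, the problem is the bind address or a firewall rather than the sync errors themselves.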

My “fuelup show” output:

Default host: x86_64-unknown-linux-gnu
fuelup home: /root/.fuelup

Installed toolchains
--------------------
latest-x86_64-unknown-linux-gnu (default)

active toolchain
----------------
latest-x86_64-unknown-linux-gnu (default)
  forc : 0.62.0
    - forc-client
      - forc-deploy : 0.62.0
      - forc-run : 0.62.0
    - forc-crypto : 0.62.0
    - forc-debug : 0.62.0
    - forc-doc : 0.62.0
    - forc-fmt : 0.62.0
    - forc-lsp : 0.62.0
    - forc-tx : 0.62.0
    - forc-wallet : 0.8.2
  fuel-core : 0.31.0
  fuel-core-keygen : 0.31.0

fuels versions
--------------
forc : 0.65.1
forc-wallet : 0.65.0

My command to start the Fuel Node:

fuel-core run \
--service-name=HZ-sepolia-testnet-node \
--keypair <MY_SECRET_FROM_FUEL_CORE_KEYGEN> \
--relayer https://eth-sepolia.g.alchemy.com/v2/<API_FROM_ALCHEMY> \
--ip=0.0.0.0 --port=4000 --peering-port=30333 \
--db-path ~/.fuel-sepolia-testnet \
--snapshot ~/.fuel-sepolia-testnet \
--utxo-validation --poa-instant false --enable-p2p \
--reserved-nodes /dns4/p2p-testnet.fuel.network/tcp/30333/p2p/16Uiu2HAmDxoChB7AheKNvCVpD4PHJwuDGn8rifMBEHmEynGHvHrf \
--sync-header-batch-size 100 \
--enable-relayer \
--relayer-v2-listening-contracts=0x01855B78C1f8868DE70e84507ec735983bf262dA \
--relayer-da-deploy-height=5827607 \
--relayer-log-page-size=500 \
--sync-block-stream-buffer-size 30

I used these commands (run from the /root directory) to download the chain configuration:

git clone https://github.com/FuelLabs/chain-configuration chain-configuration
mkdir .fuel-sepolia-testnet 
cp -r chain-configuration/ignition/* ~/.fuel-sepolia-testnet/
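After copying, a quick sanity check can confirm that the directory passed to --snapshot actually contains the snapshot files. This is my own sketch, not an official tool; the file names are assumptions based on the ignition folder of the FuelLabs/chain-configuration repo, so adjust the list if your copy differs.

```python
import os

# Assumed snapshot file names from chain-configuration/ignition; adjust if needed.
EXPECTED = ["chain_config.json", "state_config.json"]

def missing_snapshot_files(snapshot_dir):
    """Return the expected snapshot files that are absent from snapshot_dir."""
    return [name for name in EXPECTED
            if not os.path.isfile(os.path.join(snapshot_dir, name))]

# The path matches the --snapshot flag used in the `fuel-core run` command above.
print(missing_snapshot_files(os.path.expanduser("~/.fuel-sepolia-testnet")))
```

An empty list means the copy succeeded; any names printed point to a missing or misplaced file.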

Thanks for the quick response.

Thanks for bringing this up. I will make sure that the team looks at this. Though, can you try updating to our nightly toolchain and see if that works for you? :slight_smile:

I tried nightly, but nothing changed, the same errors.

Installed toolchains
--------------------
latest-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)

active toolchain
----------------
nightly-x86_64-unknown-linux-gnu (default)
  forc : 0.62.0+nightly.20240730.ec01af49c8
    - forc-client
      - forc-deploy : 0.62.0+nightly.20240730.ec01af49c8
      - forc-run : 0.62.0+nightly.20240730.ec01af49c8
    - forc-crypto : 0.62.0+nightly.20240730.ec01af49c8
    - forc-debug : 0.62.0+nightly.20240730.ec01af49c8
    - forc-doc : 0.62.0+nightly.20240730.ec01af49c8
    - forc-fmt : 0.62.0+nightly.20240730.ec01af49c8
    - forc-lsp : 0.62.0+nightly.20240730.ec01af49c8
    - forc-tx : 0.62.0+nightly.20240730.ec01af49c8
    - forc-wallet : 0.8.2+nightly.20240730.76e9796cbe
  fuel-core : 0.31.0+nightly.20240730.1cfbb05932
  fuel-core-keygen : not found

In addition, I want to add that the playground is up and working. Here are examples of the queries I executed there:

query {chain{latestBlock{id, height}}}

Response:
{
  "data": {
    "chain": {
      "latestBlock": {
        "id": "0xb653623287ffe0fa0642a90a66c45a67951565aeee0a7684b3c6116f175bf72d",
        "height": "171900"
      }
    }
  }
}

query {health}

Response:
{
  "data": {
    "health": true
  }
}
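The playground responses above can also be checked programmatically. This small sketch parses the latestBlock reply quoted in this thread (the id and height are the values from the post, and note that the API returns the height as a string):

```python
import json

# The latestBlock response exactly as quoted above.
response = """
{
  "data": {
    "chain": {
      "latestBlock": {
        "id": "0xb653623287ffe0fa0642a90a66c45a67951565aeee0a7684b3c6116f175bf72d",
        "height": "171900"
      }
    }
  }
}
"""

def latest_height(raw):
    """Extract the block height (a string in the reply) as an integer."""
    return int(json.loads(raw)["data"]["chain"]["latestBlock"]["height"])

print(latest_height(response))  # 171900
```

Running this query periodically and checking that the height keeps increasing is one way to tell whether the node is still syncing despite the errors in the logs.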

I am seeing the same error in my logs. I hope this helps us solve the problem as soon as possible.

Thanks for reporting it. I will share it with the team and get back to you with a solution soon.

This is an expected and known issue; a fix is available, but our cluster setup doesn’t currently support it.
Unfortunately, we don’t have a workaround. The priority of this is low, so you can expect the fix in roughly 2-6 weeks.