Hello, I have been following the project for a long time, and I think now is the right moment to start using it.
The topology I want to use involves four servers. I will not use relays, automatic discovery, etc. Everything will be static: IP addresses, ports, and so on. The only significant change I made is `fsWatcherDelayS="2"` on every server. The synced folder is 300 GB and growing, mostly small files of around 1-20 MB each.
The folder is mounted via NFS on both "Server 1" and "Server 2" from "Fileserver 1". This will be the case 99% of the time. Both servers will read from and write to this folder.
If "Fileserver 1" goes offline, the NFS share will be remounted automatically on "Server 1" and/or "Server 2" from "Fileserver 2". If "Fileserver 2" goes down too, then "Server 1" and "Server 2" will start reading and writing to their local synced folders. When one of the fileservers comes back online, the folder will be remounted from it.
Both "Server 1" and "Server 2" will run monitoring scripts that query the Syncthing API for the current status of every server. The remount scripts will use the results of these checks to make the final decision. That's the theory, at least.
So far I have not had any problems in the test environment, but I would appreciate it if someone shared thoughts or advice on this kind of topology.
My questions are: What potential problems might I run into with this topology and logic? Are there any best practices to follow for this kind of sync? As for the API checks, I currently think the `state` field from `/rest/db/status` and `needBytes` from `/rest/db/completion` will give enough info about the current state of the cluster. Is there anything else I can check to be sure the folder is in sync?
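For context, here is a minimal sketch of the kind of check I have in mind, using only the Python standard library. The address, API key, and folder ID are placeholders for my setup, and the "in sync" rule (state is `idle` and `needBytes` is zero) is my own assumption about what "synced" should mean:

```python
import json
import urllib.request

# Placeholder values -- replace with your own instance address, API key, and folder ID.
API_URL = "http://127.0.0.1:8384"
API_KEY = "your-api-key"
FOLDER_ID = "default"

def get(endpoint: str) -> dict:
    """Fetch a Syncthing REST endpoint and decode the JSON response."""
    req = urllib.request.Request(
        f"{API_URL}{endpoint}", headers={"X-API-Key": API_KEY}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def folder_in_sync(status: dict, completion: dict) -> bool:
    """My assumed definition of 'synced': the folder is idle
    (no scan or pull in progress) and no bytes are outstanding."""
    return status.get("state") == "idle" and completion.get("needBytes", 0) == 0

def check_folder(folder_id: str) -> bool:
    """Combine both endpoints into a single yes/no answer for the remount script."""
    status = get(f"/rest/db/status?folder={folder_id}")
    completion = get(f"/rest/db/completion?folder={folder_id}")
    return folder_in_sync(status, completion)
```

The remount script would call `check_folder(FOLDER_ID)` on each server and only proceed when all of them report `True`.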
Best regards