One of the open problems we have is how peers find and choose Cwtch servers and detect bad servers. Here is the bare bones of an idea to solve part of that problem:
Watchdogs could also determine stats like server latency.
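To make that concrete, here's a minimal Go sketch of how a watchdog might sample round-trip latency: post a tagged test message and time how long it takes to come back on a fetch. The `Server` interface (`Post`/`Fetch`) is purely hypothetical shorthand for whatever primitives a Cwtch server actually exposes, not the real API:

```go
package watchdog

import (
	"fmt"
	"time"
)

// Server is a stand-in for a real Cwtch server connection.
type Server interface {
	Post(payload []byte) error // post a message to the server
	Fetch() ([][]byte, error)  // fetch pending messages
}

// measureLatency posts a tagged test message and times how long it takes
// to show up again on fetch, giving a crude round-trip latency. If the
// message never comes back, that's a signal the server failed to relay it.
func measureLatency(s Server, tag []byte) (time.Duration, error) {
	start := time.Now()
	if err := s.Post(tag); err != nil {
		return 0, err
	}
	for {
		msgs, err := s.Fetch()
		if err != nil {
			return 0, err
		}
		for _, m := range msgs {
			if string(m) == string(tag) {
				return time.Since(start), nil
			}
		}
		if time.Since(start) > 30*time.Second {
			return 0, fmt.Errorf("test message was never relayed back")
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```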
How peers find new servers is still an open question. I think an open directory might be a way to go, but it has obvious issues, some of which are partially mitigated by having a watchdog in place.
Also, what's kinda cool about this approach is that we can build both parts as Cwtch applications.
How would you define "bad servers"? Based on the paper, I assume servers that selectively or frequently fail to relay messages, or that modify relayed messages. Also servers with high latency, maybe?
The watchdog idea sounds quite reasonable and is similar to protocols that do computation verification through sampling. I think one of the biggest challenges would be how to limit the request overhead to servers, which in the worst case could amount to a kind of DoS. The overhead is not that large, but maybe it's something that should be considered if the servers are publicly available and everyone can verify.
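One way to keep that overhead bounded might be for each watchdog to probe any given server only with some small probability per interval, with random jitter so probes from many watchdogs don't synchronize. A rough Go sketch of that idea, where `Prober`, `sampleRate`, and the server address strings are illustrative names rather than anything in Cwtch:

```go
package watchdog

import (
	"math/rand"
	"time"
)

// Prober issues a single verification probe against one server; how the
// probe works (post-and-fetch, latency check, etc.) is left abstract.
type Prober func(serverAddr string) error

// SampledVerifier probes each server only with probability sampleRate per
// interval, and adds random jitter before each probe, so that even with
// many watchdogs running the per-server load stays bounded.
func SampledVerifier(servers []string, probe Prober, sampleRate float64, interval time.Duration, results chan<- error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		for _, addr := range servers {
			if rand.Float64() > sampleRate {
				continue // skip this server this round
			}
			// jitter so probes from independent watchdogs don't line up
			time.Sleep(time.Duration(rand.Int63n(int64(interval/10) + 1)))
			results <- probe(addr)
		}
	}
}
```

Lowering sampleRate trades detection speed for less server load, so it could be tuned based on how many watchdogs are expected to be running.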