Winterflaw wrote:Of course, if the node(s) performing the mean calculation are compromised, then you're in trouble. In general, this is like ECC; you have a certain tolerance for error - you can detect it, given that the system is working properly to a sufficient extent.
Mean-checking wouldn't be sufficient. Otherwise you'd just generate, say, a 0.25 if that's what you need to make Illidan drop his twin blades, and a 0.75 at whatever trash mob you're fighting next. Your mean will be just right, but "somehow" you'd end up getting all these cool legendaries and epics.

Winterflaw wrote:And how do you pick the clients performing the checks and e.g. RNG?
Randomly. Or GeoIP and make sure they're a long way from the peer they check.
GeoIP probably isn't helpful: just log on through a VPN and your rigged client's IP can appear to be wherever you want.
Randomly would indeed be the only way to go, but that's exactly the question: how do you ensure it is actually random? (see next point)
Winterflaw wrote:Obviously the client requiring this input shouldn't chose the providers, otherwise you can just have two clients collaborate on the whole cheating thing. So you need some neutral authority to assign these - which probably comes back to a central master server though?
One way is that the peer issues a request (just like someone issuing a chat message in game) and it propagates through the network. Each peer receiving the message has a fractional chance of responding.
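A minimal sketch of that propagation scheme as described - the function name, the probability value, and the `rng` hook are all made up for illustration:

```python
import random

RESPONSE_PROBABILITY = 0.01  # each peer answers ~1% of requests it sees

def maybe_respond(request, response_probability=RESPONSE_PROBABILITY,
                  rng=random.random):
    """Each peer that sees a propagated request independently decides,
    with a small fractional chance, whether to answer it.
    """
    if rng() < response_probability:
        return f"roll-for:{request}"  # this peer performs the RNG
    return None                       # this peer stays silent
```

The weakness is exactly that `response_probability` lives on the responding peer, where nothing enforces its value.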
Then a collaborating rigged client could just set its chance of responding to 100% when it's time for legendary loot generation. Also, who checks that the other clients were actually asked / actually received the message?
Maybe you could try finding some algorithm that deterministically maps a request to some other client handling it. I.e. when a player needs to determine loot for a boss, the system hashes a combination of that player's ID, the mob's ID, and some ID for the needed request ("Get Loot"); that hash is then deterministically transformed into a number from 1 to N, where N is the number of connected clients, and whichever client number comes out handles the request?
I guess all clients would need perfectly synced information about who is online, though - but I assume even in a P2P network there has to be at least one actual static server that maintains the list of all connected IPs?
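The hash-based assignment described above might look something like this. All identifiers here are hypothetical, and it assumes exactly the problematic bit - that every client shares a consistent view of who is online:

```python
import hashlib

def assign_handler(player_id, mob_id, request_type, clients):
    """Deterministically map a request to one of the connected clients.

    Every honest client computes the same answer from the same inputs,
    so the requesting peer cannot choose its own checker. Sorting the
    client list makes the result independent of each peer's local
    ordering - but all peers must agree on *which* clients are online.
    """
    key = f"{player_id}:{mob_id}:{request_type}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:8], "big") % len(clients)
    return sorted(clients)[index]

clients = ["10.0.0.5", "10.0.0.9", "10.0.0.12"]
handler = assign_handler("player42", "illidan", "GetLoot", clients)
```

Note that whenever a client joins or leaves, N changes and almost every request maps to a different handler; something like consistent hashing would reduce that churn, but the synchronization problem remains.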
Winterflaw wrote:Moreover, a missing object does not break the database, because the database is written to tolerate this. By tolerate I mean that the database does not break, or fail; the game of course cannot continue with regard to whatever happens with that object - so the object will, for example, temporarily disappear from a user's inventory, or a mob will vanish. But if we have 10,000 players, and a sufficient number of copies, then we might be looking at this happening say once per century. I suspect more mobs vanish due to OTHER bugs than this...
Most databases have constraints in their tables to ensure data doesn't get corrupted (e.g. foreign keys must actually point at existing rows, and such). You could deactivate these constraints, of course, but then you run the risk of corrupting your data for good.
Simple example, but if the data fragment containing your character's structural information is just missing, who's to stop someone from creating a character with exactly the same name as yours - and what happens when your information is back online then?
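For illustration, here's what those constraints buy you, using SQLite and a made-up mini-schema. It's exactly these checks that a "tolerate missing fragments" design would have to relax:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
con.execute("CREATE TABLE characters (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
con.execute("CREATE TABLE inventory "
            "(item TEXT, owner INTEGER REFERENCES characters(id))")

con.execute("INSERT INTO characters VALUES (1, 'Arthas')")
con.execute("INSERT INTO inventory VALUES ('Frostmourne', 1)")  # owner exists: fine

# An item pointing at a character that isn't there is rejected outright:
try:
    con.execute("INSERT INTO inventory VALUES ('Epic Helm', 99)")
except sqlite3.IntegrityError:
    print("foreign key violation: owner 99 does not exist")

# And the UNIQUE constraint is what prevents a name collision - but only
# while the original row is actually present:
try:
    con.execute("INSERT INTO characters VALUES (2, 'Arthas')")
except sqlite3.IntegrityError:
    print("unique constraint: name already taken")
```

If the `characters` row is temporarily missing from every online peer, neither check can fire, which is precisely the duplicate-name scenario above.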
Winterflaw wrote:But generally, yes agree, you could maybe cut it down a little further that way (though I'd expect realistic numbers to be closer to 50% than to 0.03% or so).
If 50% of 10,000 players have the whole database, then it's the same as 5,000 having 100% - i.e. you're arguing half of all players need a full copy of the database for the system to consistently be able to reach all objects. This is crazy. If you have say 1,000 players, and each holds a 100th of the database, you already have 10x over-duplication. If they each held 50% of the database, you'd have 500x over-duplication. Amazon S3 runs on *3x*.
Correct me if wrong as I haven't read up on S3 much.
But I assume they replicate data 3 times on servers all of which are intended to be up and running 24/7, i.e. each individual server has an uptime of 99.99% or so? And replication just provides failure safety in case one of the servers crashes, so you can push availability up from 99.99% to 99.99999%?
That would of course be very much unlike peer clients, who are in fact offline most of the day and arbitrarily choose to log in and out all the time.
A proper calculation would be much more complex of course, but just as a very back of the envelope thing:
If you say each player is online 10% of the day, and you have 10x duplication, then each group holding a given data fragment has a 0.9^10 ≈ 34.87% chance of being unavailable (i.e. none of its holders is online). So the chance of all 100 data fragments being available at once would be (1-0.9^10)^100 ≈ 0.000000000000000024%.
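The back-of-the-envelope numbers above, spelled out (the 10% uptime, 10 replicas, and 100 fragments are the hypothetical figures from this example, not measurements):

```python
# Probability that every data fragment has at least one holder online,
# assuming each peer is independently online 10% of the time.
p_online = 0.10
replicas = 10    # copies of each fragment
fragments = 100  # fragments making up the whole database

p_fragment_missing = (1 - p_online) ** replicas        # 0.9^10 ≈ 0.3487
p_all_available = (1 - p_fragment_missing) ** fragments
print(p_all_available)  # ≈ 2.4e-19, i.e. about 2.4e-17 %
```

Which is to say: with those parameters, at essentially no point in time is the full database reachable.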