tepples wrote:
By the time you've implemented reliable delivery of input packets over UDP, with error detection and retransmission and the like, you'll end up having reimplemented most of TCP. Or what am I missing?
Nothing. You're absolutely 100% correct. And for the sake of example, WoW and many other modern games use TCP. The "use UDP because it's fast" mentality is really only applicable in one scenario -- avoiding the overhead of TCP's handshaking/acknowledgement model -- and for large payloads that overhead isn't even a factor any more given how the TCP sliding window algorithm works (RFC 1323): you no longer need to send an ACK for every single packet. TCP selective acknowledgement (SACK) per RFC 2018 further extends this in the case of packet loss.
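(For the curious: on Linux you can actually check whether those features got negotiated on a given connection via TCP_INFO. Rough sketch, Linux-specific, assuming fd is an already-connected TCP socket:)

Code:
/* Linux-specific sketch: confirm whether window scaling (RFC 1323) and
 * SACK (RFC 2018) were actually negotiated on an established connection.
 * Assumes fd is an already-connected TCP socket. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

static void show_tcp_options(int fd)
{
    struct tcp_info info;
    socklen_t len = sizeof(info);

    if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) == 0) {
        printf("window scaling: %s\n",
               (info.tcpi_options & TCPI_OPT_WSCALE) ? "yes" : "no");
        printf("SACK:           %s\n",
               (info.tcpi_options & TCPI_OPT_SACK)   ? "yes" : "no");
    }
}

Both will almost always come back "yes" on a modern stack, which is part of why the ACK-per-packet argument against TCP is stale.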
Use UDP if you want, but you're going to end up reimplementing pieces/parts of TCP like Tepples said. Those reimplementations may be better for your application -- it's up to you to decide and do the analysis -- but it's time not well-spent, IMO.
The biggest problem I find is that the programmers using the underlying syscalls/code for sockets 1) do not actually understand how individual descriptor flags affect what goes across the wire, and 2) have no real familiarity with packet analysis, so they can't determine what is actually happening on the wire and correlate that behaviour with the code in their application. (The latter is somewhat difficult, which is why using a language like C makes it easier -- anything that is abstracted is going to make that task even harder.) A common example is packets that arrive out of order due to intermediary routers and load balancing on the Internet (and to some degree even NAT routers); there's a common misconception that if machine A sends packets to machine B in the order 1, 2, 3, then machine B will receive them in the order 1, 2, 3. TCP hides that reordering from the application; with UDP, your protocol has to deal with it itself. I won't pontificate on all this past this point; my statements (especially about the code aspect) sound anecdotal, but honestly I'm preaching fact.
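To make that misconception concrete: any UDP-based game protocol ends up carrying its own sequence numbers just to notice the reordering. A hypothetical sketch -- the struct and function names are mine, not from any particular game:

Code:
/* Hypothetical sketch: the bare minimum a UDP protocol needs to cope with
 * reordering -- a per-datagram sequence number and a "newer than what I've
 * seen?" check using serial-number arithmetic so wraparound at 2^32 works. */
#include <stdint.h>
#include <stdbool.h>

struct input_packet {
    uint32_t seq;        /* sender increments this for every datagram sent */
    uint8_t  pad_state;  /* whatever your game's input payload is */
};

static bool accept_packet(const struct input_packet *pkt, uint32_t *last_seq)
{
    if ((int32_t)(pkt->seq - *last_seq) <= 0)
        return false;    /* stale or duplicate: arrived out of order, drop it */
    *last_seq = pkt->seq;
    return true;
}

And that's only the drop-stale-packets policy; the moment you decide you can't afford to drop them, you're buffering and retransmitting, i.e. rebuilding TCP.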
All that said: if you find something online somewhere that actually documents the design and architecture of a realtime gameplay protocol (TCP or UDP, I don't care which), I'd love to read it. I have yet to see anyone publish anything like this. All I've seen are commercial companies writing their own proprietary methodologies (which makes sense to some degree), and open-source nutbags saying "look at some crap I put on github; the code is the documentation" (wrong). Possibly this type of thing is discussed in an actual published game development book. I have a few, but they're all dedicated to graphics, level design, algorithms, and things of this nature.
The buffering methodology Tepples describes (re: the emulator working on input 4 frames behind what's active) is commonly used, but the problem is that 4 frames is sometimes too little and in other cases too much. It's going to vary based on several variables -- network latency and jitter chief among them -- absolutely none of which your emulator/game/whatever has control over.
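For reference, that delayed-input scheme usually boils down to something like the following. This is a hypothetical sketch -- the 4-frame constant, the array sizes, and the function names are illustrative, not from any real emulator:

Code:
/* Hypothetical sketch of the delayed-input scheme: local input sampled on
 * frame N is sent immediately but not applied until frame N + INPUT_DELAY,
 * giving the peer's packet time to arrive. */
#include <stdint.h>

#define INPUT_DELAY 4        /* frames of latency traded for smoothness */
#define BUFFER_SIZE 64       /* ring buffer; must exceed INPUT_DELAY */

void send_input_to_peer(uint32_t frame, uint8_t pad);  /* hypothetical, elsewhere */
void emulate_one_frame(uint8_t p1, uint8_t p2);        /* hypothetical, elsewhere */

static uint8_t local_inputs[BUFFER_SIZE];
static uint8_t remote_inputs[BUFFER_SIZE];  /* filled in by the receive path */

/* Called once per emulated frame. */
void on_frame(uint32_t frame, uint8_t pad_now)
{
    /* schedule + transmit this frame's input for use INPUT_DELAY frames later */
    local_inputs[(frame + INPUT_DELAY) % BUFFER_SIZE] = pad_now;
    send_input_to_peer(frame + INPUT_DELAY, pad_now);

    /* run the frame with inputs both sides should already have; the first
     * INPUT_DELAY frames simply run with neutral (zero) input */
    emulate_one_frame(local_inputs[frame % BUFFER_SIZE],
                      remote_inputs[frame % BUFFER_SIZE]);
}

In practice you also have to decide what to do when the remote input for the current frame hasn't arrived yet (stall, predict, or roll back), which is exactly where the "too little / too much" problem above bites.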
My general suggestion to people is: don't bother implementing netplay in emulators. You're entering a completely different world of completely different pain. And I have seen people implement things (including in commercial games) so utterly wrong that it's baffling.