
Prevent dropped ticcmds due to interp timing jitter

AJ Martinez requested to merge Tyron/SRB2:net-timing into next

Port from Ring Racers; also see #981 (closed).

what

SRB2 runs game logic and network behavior whenever a new rendered frame crosses a tic boundary; when interp is enabled, the last rendered frame of the previous tic can "steal" time from the current tic, shifting the timing of network behavior.

From a visual perspective, it's good for interp to work this way, since it provides smooth motion and frame pacing without stutters at tic boundaries. However, this means that for both clients and listen servers, the duration of a tic will change based on the timing of the last rendered frame, behaving differently based on your performance characteristics and fpscap. In the least fortunate cases, this can cause up to ~30% GAMEMISS as the server and client desync their ticcmd dispatch/ingest rates, causing the server to process 0, 1, or 2 ticcmds per tic, effectively at random. This can happen even on LAN or localhost!
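To make the failure mode concrete, the relevant loop looks roughly like the sketch below. This is a simplified illustration, not the actual SRB2 code; I_GetTime(), G_Ticker(), and NetUpdate() are real SRB2 names, but the loop body and D_Render() are stand-ins:

```c
/* Simplified sketch of the timing problem; not the actual SRB2 main loop. */
typedef unsigned int tic_t;

extern tic_t I_GetTime(void);  /* monotonic clock, counted in tics */
extern void NetUpdate(void);   /* sends/receives ticcmds */
extern void G_Ticker(void);    /* one tic of game logic */
extern void D_Render(void);    /* interpolated frame between tics (stand-in name) */

void MainLoopSketch(void)
{
    tic_t oldtics = I_GetTime();

    for (;;)
    {
        tic_t realtics = I_GetTime() - oldtics;
        oldtics += realtics;

        /* Game logic and ticcmd traffic only run when a rendered frame
         * lands past a tic boundary... */
        while (realtics--)
        {
            NetUpdate();
            G_Ticker();
        }

        /* ...so when interp spends the tail end of a tic rendering one
         * more smooth frame, the boundary is noticed late and the next
         * ticcmd goes out late; a short final frame has the opposite
         * effect. Frame timing therefore jitters ticcmd timing. */
        D_Render();
    }
}
```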

This is the simplest way I could think of to clean up this behavior: when a server receives two ticcmds for the same tic, instead of dropping the earlier one and processing the later one, it processes the earlier one and queues the later one, acting as a floating 1-tic "buffer" for when server and client timers inevitably fall out of sync in the other direction. Clients are still generating 35 ticcmds per second, and servers are still processing 35 ticcmds per second, just with wonky timing, so this seems to be enough lenience to almost completely prevent dropped ticcmds.
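In code terms, the server-side change looks something like the sketch below. ticcmd_t and MAXPLAYERS are real SRB2 identifiers, but the buffers and function names here are hypothetical illustrations of the idea, not the actual patch:

```c
#include <stdbool.h>

#define MAXPLAYERS 32                        /* real SRB2 constant */
typedef struct { char data[16]; } ticcmd_t; /* stand-in for the real struct */

/* Hypothetical per-player slots: the cmd to run this tic, plus a floating
 * 1-tic overflow buffer. */
static ticcmd_t pendingcmd[MAXPLAYERS];
static ticcmd_t queuedcmd[MAXPLAYERS];
static bool havepending[MAXPLAYERS];
static bool havequeued[MAXPLAYERS];

/* Called when a ticcmd arrives from player p. */
void SV_ReceiveTiccmd(int p, const ticcmd_t *cmd)
{
    if (!havepending[p])
    {
        /* Normal case: first cmd seen for this tic. */
        pendingcmd[p] = *cmd;
        havepending[p] = true;
    }
    else if (!havequeued[p])
    {
        /* Two cmds landed in the same tic. Old behavior: overwrite the
         * pending cmd, dropping the earlier one. New behavior: run the
         * earlier one this tic and queue the later one for the next. */
        queuedcmd[p] = *cmd;
        havequeued[p] = true;
    }
    /* Both slots full means the client is more than a tic ahead; drop. */
}

/* Called once per server tic, after pendingcmd[p] has been processed. */
void SV_AdvanceTiccmd(int p)
{
    if (havequeued[p])
    {
        /* The queued cmd covers the tic the "late" client would
         * otherwise miss, so nothing is dropped. */
        pendingcmd[p] = queuedcmd[p];
        havequeued[p] = false;
    }
    else
        havepending[p] = false;
}
```

The key point is that a duplicate arrival gets reinterpreted as an early arrival for the next tic, so a double-delivery and the matching gap one tic later cancel out instead of producing a GAMEMISS.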

notes

This can be difficult to test on localhost, since things like vsync can often artificially sync up client/server timing, and if your client happens to tick exactly in the middle of serverside tics, timing variance usually won't be enough to cross a tic boundary. This goes double if your device is fast and consistent, since interp will be stealing very little time.

However, during my testing for Ring Racers, I saw notable GAMEMISS (netstat 1) when connecting to just about every netgame server in the ecosystem, in both Kart v1 and vanilla SRB2, and it would immediately improve if I set fpscap 35. Other devs have tested this on my behalf and found largely similar results.

While researching this, I discovered NetPlus, which (among other things) tries to solve a less severe version of the same problem; that build predates interp, and only tries to address network variance rather than tic-timing variance. I think the "timefudge" approach used there is probably better for input responsiveness, but my gut-feeling guess is that the timing variance introduced by interp is simply too high for it to be a meaningful improvement on its own.

AFAIK this is not net-compatible, and I don't have a great intuition for why; there are lots of $sav errors that indicate Stupid Shit. Apologies, this targets next. It probably works fine if you port it onto master. I am not a programmer, I just stayed at a Holiday Inn Express last night.

