ETTV 2.60 Chaining Madness

Forum for discussing ET TV

Moderators: Forum moderators, developers

deej
Posts: 743
Joined: Fri Mar 19, 2004 12:44 am
Location: Belgium!

ETTV 2.60 Chaining Madness

Post by deej »

Please note this was done with experimental binaries and does not represent any official release from the ETPro team.

[Image: diagram of the chaining setup]

1) r0f.vanilla ice is the game server
2) RED: the recorder is connected to it and recording the demos
3) BLUE: the hub is playing back those demos with approx 90 seconds delay
4) GREEN: the three ettv broadcasters are connected to the hub and relaying what the hub plays

This was all done with the experimental ETTV binaries Zinx posted, on a system running Fedora Core 3... To top it off, the recorder, the hub & reT!reD.ettv #3 run from the same install ;-) Stable as hell!

Vanilla Ice has been running for 380 hours straight now 8)!

Working this way puts a lot less strain on the game server, since an ETTV slave connects with rate 90000. You can select 2 or 4 master hubs and have the rest of the slaves connect to those hubs (or even more hubs to the hubs to the hubs :lol: it's madness I tellz ya :lol:)

This is looking goooooooooooooooood for the future!

If you want to see yourself in action: go play on et.qsgames.com:28044 and then connect to fragland.org:27960. Narcissism guaranteed.
Last edited by deej on Sat Jun 04, 2005 8:06 am, edited 2 times in total.
Our servers now run on 64 bit steroids. Point your ET to:
- Forgotten Ground StopWatch Server with occasional wolfrof 1
- Fraggle Rock ETPub Server - Mix up ET/UT & Duke Nukem
bani
Site Admin
Posts: 2780
Joined: Sun Jul 21, 2002 3:58 am

Post by bani »

a slave says rate 90000 but in reality it is unlimited. a slave uses 'however much bandwidth it has to'.
deej
Posts: 743
Joined: Fri Mar 19, 2004 12:44 am
Location: Belgium!

Post by deej »

Unlimited? Wow. No wonder the game server that hosted the EC final last Thursday went down a few times.

Anyway, I was wondering the following: assuming we use the term "hub" for a delayed broadcaster that only ETTVs can connect to (not players/viewers), and "slave" for the ETTVs that viewers connect to, would a model such as this be possible in order to increase resiliency (numbers chosen at random)?

Code:

   Game Server         -> Runs Match Live
        |
   x Core Hubs         -> Broadcast Match with 60 seconds delay
        |       
y Distribution Hubs    -> Record what Core hubs send out and delay that with 30 seconds
      | | |
  z ETTV slaves        -> Broadcast live what Distribution Hubs send to them
Kinda like designing a network. x < y < z of course.

This way you would have x (a small number) of powerful core hubs that buffer the match data. Connected to them are y distribution hubs (less powerful, but since they also buffer, they increase resiliency), and then z (probably a lot of) ETTV slaves for the viewers to connect to.
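To put rough numbers on that fan-out (a toy model; the function and the example figures are mine, not anything from ETTV):

```python
# Rough capacity model for a chained ETTV tree (illustrative numbers only).
# Each layer multiplies the number of downstream connections the game
# server is shielded from; only the x core hubs ever touch the game server.

def tree_capacity(x_core, y_dist_per_core, z_slaves_per_dist, viewers_per_slave):
    """Total viewer capacity of a core -> distribution -> slave tree."""
    dist_hubs = x_core * y_dist_per_core
    slaves = dist_hubs * z_slaves_per_dist
    return slaves * viewers_per_slave

# e.g. 2 core hubs, each feeding 4 distribution hubs, each feeding
# 6 slaves with 100 spectator slots:
print(tree_capacity(2, 4, 6, 100))  # -> 4800 viewers, with only
                                    #    2 connections on the game server
```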

Guess what's important to know is: can an ETTV hub record from another ETTV hub?
bani
Site Admin
Posts: 2780
Joined: Sun Jul 21, 2002 3:58 am

Post by bani »

in theory a slave connecting to another slave should be the same as a slave connecting to a master. so all the functionality should be the same.

a master shouldnt ever go down, on a typical matchserver there shouldnt really be that much traffic to a slave. only when you get to broadcasting eg 64 player pubs would you run into bandwidth issues.
Lekdevil.NL
Posts: 89
Joined: Fri Sep 12, 2003 8:59 am

Post by Lekdevil.NL »

deej wrote:

Code:

   Game Server         -> Runs Match Live
        |
   x Core Hubs         -> Broadcast Match with 60 seconds delay
        |       
y Distribution Hubs    -> Record what Core hubs send out and delay that with 30 seconds
      | | |
  z ETTV slaves        -> Broadcast live what Distribution Hubs send to them
Why would you want to add another delay in the distribution hubs?
deej
Posts: 743
Joined: Fri Mar 19, 2004 12:44 am
Location: Belgium!

Post by deej »

Lekdevil.NL wrote:Why would you want to add another delay in the distribution hubs?
For buffering purposes. Instead of delaying 90 secs in one layer, you delay 60 secs in the first and 30 secs in the second. This way, if one link of the chain encounters difficulties, you have a small margin on all sides.
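That two-stage buffering can be pictured as each hop holding back a fixed backlog of snapshots before forwarding them. A toy sketch (purely illustrative; nothing like ETTV's actual demo-to-disk mechanism):

```python
from collections import deque

class DelayBuffer:
    """Forward items only once `delay` items have accumulated, so a
    stalled upstream can be ridden out from the backlog."""
    def __init__(self, delay):
        self.delay = delay
        self.queue = deque()

    def push(self, snapshot):
        self.queue.append(snapshot)
        if len(self.queue) > self.delay:
            return self.queue.popleft()  # oldest snapshot goes downstream
        return None                      # still filling the buffer

# Two chained hops: 60 "ticks" of delay in the core hub, 30 in the
# distribution hub -- 90 total, but with margin at both stages.
core, dist = DelayBuffer(60), DelayBuffer(30)
out = []
for tick in range(200):
    s = core.push(tick)
    if s is not None:
        s = dist.push(s)
        if s is not None:
            out.append(s)
print(out[0])  # -> 0 (the first snapshot emerges after 90 ticks)
```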
deej
Posts: 743
Joined: Fri Mar 19, 2004 12:44 am
Location: Belgium!

Post by deej »

bani wrote:a master shouldnt ever go down, on a typical matchserver there shouldnt really be that much traffic to a slave. only when you get to broadcasting eg 64 player pubs would you run into bandwidth issues.
Must have been a bad master then ;-). It was brought to its knees when 18 100-slot ETTVs connected.
deej
Posts: 743
Joined: Fri Mar 19, 2004 12:44 am
Location: Belgium!

Post by deej »

To answer my own question: chaining hubs works (just tested it), but the console mentioned frame skipping & I must admit the broadcast didn't go smoothly (lots of map reloading).

So I guess one big delay at the "master hubs" is the way to go.
bani
Site Admin
Posts: 2780
Joined: Sun Jul 21, 2002 3:58 am

Post by bani »

deej wrote:
bani wrote:a master shouldnt ever go down, on a typical matchserver there shouldnt really be that much traffic to a slave. only when you get to broadcasting eg 64 player pubs would you run into bandwidth issues.
Must have been a bad master then ;-). It was brought to its knees when 18 100-slot ETTVs connected.
well 18 slaves could probably do that. but one slave shouldnt.

fwiw with 18 slaves on a master thats 18*(players-on-master) total traffic. for 7vs7 that would be equivalent to 252+14 players on the master. or about 1596kbyte/sec (12.7 mbit). though i would expect the master to meltdown from cpu overload before it hit that bandwidth limit :lol:
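bani's arithmetic above can be reproduced if you assume roughly 6 kbyte/sec per player stream (that per-player figure is implied by his totals, not a measured ETTV value):

```python
# Each ETTV slave pulls roughly the traffic of every player on the
# master. The ~6 kbyte/sec per-player stream below is an assumption
# implied by the 1596 kbyte/sec total, not a measured value.

slaves = 18
players = 14              # 7vs7
kb_per_player = 6         # assumed average snapshot stream per player

equivalent_players = slaves * players + players   # 252 + 14 = 266
kbyte_per_sec = equivalent_players * kb_per_player
mbit_per_sec = kbyte_per_sec * 8 / 1000

print(equivalent_players, kbyte_per_sec, round(mbit_per_sec, 1))
# -> 266 1596 12.8  (roughly the 12.7 mbit quoted above)
```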

number of spec slots on the ettv slaves shouldnt affect the master. only the number of ettv slaves connected to the master will matter.
DG
Posts: 513
Joined: Thu Jul 24, 2003 4:16 am

Post by DG »

certainly seems a good idea to have "core hubs", to balance the load both on the matchserver and the hubs. E.g. for Quakecon maybe a hub per continent (two or three for EU and NA?). I'd certainly expect Quakecon to draw more than the EC final, and every single slot of those 18 ETTV servers was taken for that, with quite a few people queuing.

any way to utilise multiple procs? even if just to effectively run as a core hub with a distribution hub per cpu drawing off it, so you balance the load out on the cpus while only using the bandwidth of one on the matchserver.

Am I right in thinking you can't turn off tvchat when you're spectating on an ETTV server that is a slave to another one? If so, maybe cutting tvchat from being sent to another ETTV slave could save some bandwidth? does /tv_chat off stop it from being displayed, or stop it from being sent? Have ETTV viewers treated as if on snaps 10, to cut the load to an extent that might not be nicely playable, but is perfectly adequate for spectating?
deej
Posts: 743
Joined: Fri Mar 19, 2004 12:44 am
Location: Belgium!

Post by deej »

DG wrote:any way to utilise multiple procs? even if just to effectively run as a core hub with a distribution hub per cpu drawing off it, so you balance the load out on the cpus while only using the bandwidth of one on the matchserver.
I think that if you use a linux box with a 2.6 SMP kernel, the kernel will be in charge of dividing the CPU cycles. At least that's how I figure it after having read a bit on how linux does the SMP thing. But I don't know if it's possible to assign one process to one CPU.
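For what it's worth, Linux 2.6 does let you pin a process to a CPU, via the sched_setaffinity syscall or the `taskset` utility from util-linux. A small Python illustration (Linux-only, and just a sketch):

```python
import os

# Linux-only sketch: pin the current process to CPU 0 so the scheduler
# keeps it there. The shell equivalent for a hub would be something
# like `taskset -c 0 ./ettv ...` (path and arguments hypothetical).
if hasattr(os, "sched_setaffinity"):   # not available on every OS
    os.sched_setaffinity(0, {0})       # pid 0 = this process
    print(os.sched_getaffinity(0))     # -> {0}
```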

So I guess the "core & distribution" model would be even more beneficial in alleviating the strain on the CPUs than on the bandwidth.

Anyway if you have 4 powerful master hubs which can each serve 6 to 8 distribution hubs, I think the load can be distributed quite evenly. All in theory of course.
DG wrote:I'd certainly expect Quakecon to draw more than the EC final, and every single slot of those 18 ETTV servers was taken for that, with quite a few people queuing.
Yes, we saw that. At 20:00 aNgel fired up our ETTV with g_password enabled, and removed g_password at 20:55. One minute later, 97 slots were full...
bani
Site Admin
Posts: 2780
Joined: Sun Jul 21, 2002 3:58 am

Post by bani »

lower snaps wouldnt really lower bandwidth, it would only really lower cpu. snaps 10 would make the game pretty choppy, i don't think many people would like spectating that.
Lekdevil.NL
Posts: 89
Joined: Fri Sep 12, 2003 8:59 am

Post by Lekdevil.NL »

DG wrote:certainly seems a good idea to have "core hubs", to balance the load both on the matchserver and the hubs. E.g. for Quakecon maybe a hub per continent (two or three for EU and NA?). I'd certainly expect Quakecon to draw more than the EC final, and every single slot of those 18 ETTV servers was taken for that, with quite a few people queuing.
I very much like the idea of building an ETTV broadcast web, but I fear that the current implementation of ETTV has some shortcomings that will make such a configuration very hard to set up and maintain.

At the moment, ETTV does not have much in the way of resilience, partly because of the ugly hack (sorry guys) of streaming demos to disk and then playing them back, partly because there is no auto-reconnect feature that has hubs or slaves reconnect to their upstream server in case of a connection failure.

As a result, once you've carefully constructed your elaborate broadcast web, multiple levels of hubs and slaves and all, and one of the top-level servers gets reset, the whole web comes crashing down, disconnecting all servers down the chain, domino-fashion. The same thing would happen when the need arises to change game (master) servers, requiring the first-tier hubs to disconnect and reconnect to the new master. After such an event, admins would have to scramble to kill and re-run their scripts along the web, obviously disconnecting all the viewers that were connected at the time.

I think to make such an extended broadcast web a feasible option, ETTV would need to offer a number of features:
  • The ETTV server should always send out a broadcast signal, even when it isn't currently connected to an upstream master (like a test image or a waiting-room type of thing). This would allow for upstream connection changes without disconnecting all downstream servers and viewers.
  • The "playback to disk" hack should obviously be replaced with in-daemon buffering code.
  • An auto-reconnect feature that tries to re-establish an upstream connection. Whilst reconnecting, downstream servers and viewers should not be disconnected, but shown the test image/waiting room I mentioned above.
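The auto-reconnect point could look roughly like this exponential-backoff loop (a sketch with made-up function names; none of this exists in ETTV):

```python
import time

def serve_waiting_room():
    """Hypothetical placeholder: keep downstream clients fed with a
    'please wait' scene instead of dropping them (not in ETTV today)."""
    pass

def broadcast_with_reconnect(connect, max_backoff=60, sleep=time.sleep):
    """Yield snapshots from an upstream feed; on connection failure,
    retry with exponential backoff instead of tearing the chain down."""
    backoff = 1
    while True:
        try:
            for snapshot in connect():   # connect() raises on failure
                yield snapshot
                backoff = 1              # feed is healthy again: reset
        except ConnectionError:
            serve_waiting_room()         # downstream stays up meanwhile
            sleep(backoff)
            backoff = min(backoff * 2, max_backoff)
```

A consumer just iterates the generator and never sees the upstream flapping, only a gap filled by the waiting room.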
Thoughts?
bani
Site Admin
Posts: 2780
Joined: Sun Jul 21, 2002 3:58 am

Post by bani »

We know about all the shortcomings already :P

All your points are already (and have long been) in the plans to be fixed, though they might not all be fixed by qcon. In-daemon buffering will take the most work. :moo:

chaining is a huge step up in functionality from the previous release though. it allows broadcasts to scale much better.
Lekdevil.NL
Posts: 89
Joined: Fri Sep 12, 2003 8:59 am

Post by Lekdevil.NL »

Ah, yes. I know you know. I've just got a habit of stating the obvious, that's all. :lol: And I agree that the fixed chaining is a huge step forward.

As for QuakeCon: we'll probably just run with what we've got at that point in time. Any suggestions would be more than welcome!