The call to set the priority is still there. I've verified that it works on an S2, but I don't have an S1 to test with.
Originally Posted by jdiner
To verify the scheduling priority, find the tserver PID with ps or top, then run "getpri PID", where PID is the tserver PID. It should show "fifo 1". getpri is in AlphaWolf's all-in-one package, so many of you will already have it.
The tserver code sets the policy to "FIFO" and the priority to 1. This is lower than the realtime tivo processes, though it is still a realtime priority and is higher than normal timesharing processes (shells, etc).
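For reference, there's no magic to it; it's just the standard Linux scheduler call. A minimal sketch (not the actual tserver source, just the usual way it's done):

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param sp;

        sp.sched_priority = 1;  /* 1 = bottom of the 1-99 realtime range */

        /* pid 0 means "this process" */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ... the rest of the server runs at realtime priority ... */
        return 0;
    }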
The NowShowing list generation is fairly disk intensive and probably causes a lot of head seeking, since the data is scattered about in MFS. If the skipping only occurs when you're refreshing the show list, then I suppose that could be the cause. If the skipping occurs during stream transfers, then I don't think this is the issue.
The low-level stream export code has a rate throttle option to sleep a small time interval between "chunks" to reduce the load on the tivo. tserver doesn't currently use it, but it would be easy to add a command line option to tserver to allow it to be set. Here's the description from the mfs_uberexport usage string:
    -r <ms>    Rate control (throttle)
               -'ve : no delay (default)
                  0 : sched_yield() between chunks
               +'ve : # of ms to delay between chunks

I could add the same command line option to tserver, if the extra control on the "throttle" would help. Right now tserver runs at full throttle with no rate limiting.
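The guts of it would be roughly this. A sketch only: CHUNK_SIZE and throttled_copy are made-up names for illustration, not the real mfs_uberexport symbols:

    #include <sched.h>
    #include <unistd.h>

    #define CHUNK_SIZE (128 * 1024)  /* made-up size, for illustration */

    /* Copy in_fd to out_fd chunk by chunk, pausing per the -r value. */
    static void throttled_copy(int in_fd, int out_fd, int rate_ms)
    {
        char buf[CHUNK_SIZE];
        ssize_t n;

        while ((n = read(in_fd, buf, sizeof buf)) > 0) {
            if (write(out_fd, buf, n) != n)
                break;                   /* short write; bail out */
            if (rate_ms == 0)
                sched_yield();           /* 0: just give up the CPU briefly */
            else if (rate_ms > 0)
                usleep(rate_ms * 1000);  /* +ve: sleep that many milliseconds */
            /* -ve: no delay at all (the default) */
        }
    }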
I'm wondering if the disk just doesn't have enough bandwidth to support recording two HD live buffers, playing back, and streaming an HD stream to the network all at once. Somebody who can recall the typical HD stream bit rates can do the math and compare against typical disk specs.
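As a rough sanity check, assuming ~19 Mbit/s per stream (the ATSC MPEG-2 ceiling; actual rates vary):

    2 live buffers recording + 1 playback + 1 network export
    = 4 streams x ~19 Mbit/s = ~76 Mbit/s = ~9.5 MB/s

That's comfortably under a drive's sequential rating, but four streams scattered around the platter keep the heads seeking constantly, and seek overhead can knock effective throughput down to a fraction of the sequential spec.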