[hatari-devel] Higher precision when generating samples -> use nanosleep

Nicolas Pomarède npomarede at corp.free.fr
Wed Jan 26 21:39:21 CET 2011


On 26/01/2011 20:56, Eero Tamminen wrote:

>> nanosleep is defined in POSIX.1-2001, so it should be OK to use under
>> Linux and OSX. It seems Cygwin also provides a nanosleep function
>> that uses Windows' internal calls, so it should be good for Windows too.
>>
>> With nanosleep, we could get a much better granularity.
>
> I think the reason why SDL isn't offering better granularity is that
> even though some API may have higher accuracy, it doesn't mean that
> your program would get scheduled at such intervals.
>
> On my Debian testing i3 machine with 2.6.32 kernel, the scheduler tick is:
> 	$ grep '^CONFIG_HZ=' /boot/config-$(uname -r)
> 	CONFIG_HZ=250
>
> I.e. on my machine the process CPU time slice is 4 ms.
>
> That means that any program/thread using the CPU for a few ms (on the
> same CPU) can add 4 ms "random" delays to the program's timings. If there
> are multiple such programs/threads (on the same CPU), it will be some
> multiple of that CPU slice size.  Even a 1000 Hz scheduler "tick" could
> still give n*1ms "random" delays.
>
> The only way to get guaranteed timings is to use real-time scheduling,
> and that you really want to use only on programs/threads that have been
> _designed_ to be real-time, i.e. the code guarantees it always uses less
> than a certain low amount of CPU (so that it doesn't completely freeze
> the system), unlike Hatari does...

Regarding SDL, I think some parts of it are just not up to date with 
what recent OSes/kernels can do nowadays. The fact that SDL doesn't 
provide a granularity finer than 10 ms doesn't mean finer timing isn't 
safe/achievable. IMHO, SDL doesn't cover all possible functions an OS 
can provide; it tries to expose a common core that works the same under 
different OSes.

So, I think we can try to use finer-grained functions when an OS 
provides them, and default to SDL when we can't do better.
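
For illustration, something like the following wrapper could do it (a 
rough sketch, not actual Hatari code: Host_Sleep and the HAVE_NANOSLEEP 
define are made-up names, assuming nanosleep is detected at build time):

	#include <errno.h>
	#include <time.h>      /* nanosleep, struct timespec */
	#include <SDL.h>       /* SDL_Delay fallback */

	/* Sleep for 'us' microseconds: use nanosleep where the OS
	 * provides it, fall back to SDL's coarser delay otherwise. */
	static void Host_Sleep(long us)
	{
	#ifdef HAVE_NANOSLEEP
	    struct timespec req = { us / 1000000L, (us % 1000000L) * 1000L };
	    struct timespec rem;
	    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
	        req = rem;   /* interrupted by a signal: sleep the remainder */
	#else
	    SDL_Delay((Uint32)(us / 1000L));   /* millisecond units at best */
	#endif
	}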

I agree the program might not be scheduled at the expected time, but 
that's already the case at 10 ms: nothing guarantees you won't be 
scheduled at 11 or 12 ms instead.

So, the finer the delay we can get, the better. That said, I also agree 
with your next point:


> I think it's more robust to have code that adapts to imprecise timings.
>
> Hatari's sound behaving erratically when any program runs in the
> background, or on platforms which don't have submillisecond accuracy,
> is not really good.

Internally, Hatari already does what you describe about adjusting: if a 
timer B interrupt is delayed by 4 or 8 cycles (because the current 
68000 instruction can't be interrupted yet), the next timer B interrupt 
will be scheduled to happen 4 or 8 cycles earlier (relative to when the 
delayed timer B actually happened).

Here in the VBL wait, I think we can do the same (see the sketch after 
this list):
  - wait 16.67 ms using nanosleep
  - read the current nanotime
  - if 16.67 ms have elapsed, we're good.
  - else, say we have a 4 ms granularity and we are at 20 ms instead of 
16.67, i.e. 3.33 ms too late: then the next nanosleep should be 
16.67 - 3.33 = 13.34 ms
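
As a sketch of that loop (hypothetical names, assuming POSIX 
clock_gettime(CLOCK_MONOTONIC) and nanosleep are available; this is not 
Hatari's actual Main_WaitOnVbl):

	#include <stdint.h>
	#include <time.h>

	#define VBL_PERIOD_NS 16666667LL   /* ~60 Hz, i.e. ~16.67 ms */

	static int64_t now_ns(void)
	{
	    struct timespec ts;
	    clock_gettime(CLOCK_MONOTONIC, &ts);
	    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
	}

	/* Wait until the next VBL deadline; a late wakeup automatically
	 * shortens the next sleep because the target is absolute. */
	static void WaitOnVbl_ns(void)
	{
	    static int64_t target = 0;
	    if (target == 0)               /* first call: anchor the grid */
	        target = now_ns();
	    target += VBL_PERIOD_NS;

	    int64_t delay = target - now_ns();
	    if (delay > 0) {
	        struct timespec req = { (time_t)(delay / 1000000000LL),
	                                (long)(delay % 1000000000LL) };
	        nanosleep(&req, NULL);
	    }
	    /* if we woke at 20 ms instead of 16.67, 'target' is now behind
	     * the clock, so the next delay shrinks by the 3.33 ms overshoot */
	}

Keeping the target absolute means an oversleep is paid back on the very 
next frame, the same way the timer B rescheduling above works.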


So, I don't think we should blindly do "nanosleep (16.67 ms)" and 
consider we're good. We should nanosleep as close to the target as 
possible, then measure the difference at the end of the nanosleep and 
take it into account for the next nanosleep.

If nanosleep works well on a machine, it will be better to alternate 
between 16.66 ms and 16.67 ms to emulate a 16.66666... ms delay 
(60 Hz) than to alternate between 16 ms and 17 ms to emulate the same 
16.6666 ms delay.
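
Keeping the period as an exact fraction makes that alternation fall out 
automatically; a standalone toy example of the bookkeeping 
(Bresenham-style remainder carrying, hypothetical code):

	#include <stdio.h>

	/* 60 Hz period = 1e9/60 ns = 16666666 + 2/3 ns: track thirds of
	 * a ns so the integer sleeps alternate 16666666/16666667 ns
	 * without drifting over time. */
	int main(void)
	{
	    const long long period_thirds = 3000000000LL / 60;  /* 50000000 */
	    long long acc_thirds = 0;

	    for (int frame = 0; frame < 6; frame++) {
	        acc_thirds += period_thirds;
	        long long sleep_ns = acc_thirds / 3;  /* integer part to sleep */
	        acc_thirds -= sleep_ns * 3;           /* carry the remainder */
	        printf("frame %d: sleep %lld ns\n", frame, sleep_ns);
	    }
	    return 0;   /* prints 16666666, 16666667, 16666667, repeating */
	}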

We could have a calibration routine at startup that roughly tries to 
see whether nanosleep can be used instead of SDL_Delay / millisleep, 
and then self-adapting code in Main_WaitOnVbl (as we already have 
today, except ticks would be nanoseconds, not milliseconds).
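
Such a calibration could be as simple as the following sketch (the 
function name and the 4 ms threshold are assumptions, not Hatari code):

	#include <stdbool.h>
	#include <time.h>

	/* Request a 1 ms nanosleep a few times and record the worst actual
	 * delay; if it stays well under SDL_Delay's granularity,
	 * fine-grained sleeping looks usable on this machine. */
	static bool CanUseNanoSleep(void)
	{
	    const struct timespec req = { 0, 1000000L };   /* ask for 1 ms */
	    long long worst_ns = 0;

	    for (int i = 0; i < 10; i++) {
	        struct timespec t0, t1;
	        clock_gettime(CLOCK_MONOTONIC, &t0);
	        nanosleep(&req, NULL);
	        clock_gettime(CLOCK_MONOTONIC, &t1);
	        long long elapsed =
	            (long long)(t1.tv_sec - t0.tv_sec) * 1000000000LL
	            + (t1.tv_nsec - t0.tv_nsec);
	        if (elapsed > worst_ns)
	            worst_ns = elapsed;
	    }
	    return worst_ns < 4000000LL;   /* arbitrary 4 ms threshold */
	}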

Nicolas


