[hatari-devel] STE sound breakage with lower sound frequencies
David Savinkoff
dsavnkff at telus.net
Wed Feb 9 23:07:26 CET 2011
On Feb 9, 2011, Nicolas Pomarède <npomarede at corp.free.fr> wrote:
> On 09/02/2011 00:35, David Savinkoff wrote:
>
> > All of these ntp and time changes are dirty enough to force a rethink.
> > Maybe we should use SDL exclusively, as it is cleaner and will 'not' be
> > less accurate averaged over time. Furthermore, main.c would not depend
> > on #include <time.h>.
>
> Hello,
>
> SDL uses either gettimeofday or clock_gettime on unix-like systems
> (changing the system time while SDL is running will confuse it if
> clock_gettime was not available at compile time), so I don't see the
> difference, except that SDL "truncates" all times to milliseconds
> instead of keeping the micro/nanosecond value.
> What do you mean by "less accurate averaged over time"?
I was referring to the distinction between accuracy and precision: the
average frequency is accurate, but the individual intervals jitter.
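The truncation is easy to see by reading both clocks side by side; a
minimal sketch, assuming SDL 1.2 on a POSIX system (not Hatari code):

  #include <stdio.h>
  #include <sys/time.h>
  #include <SDL/SDL.h>

  int main(void)
  {
      struct timeval tv;

      SDL_Init(SDL_INIT_TIMER);

      /* microsecond resolution straight from the OS */
      gettimeofday(&tv, NULL);
      printf("gettimeofday: %ld.%06ld s\n",
             (long)tv.tv_sec, (long)tv.tv_usec);

      /* SDL reports whole milliseconds only */
      printf("SDL_GetTicks: %u ms since SDL_Init\n",
             (unsigned)SDL_GetTicks());

      SDL_Quit();
      return 0;
  }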
>
> > SDL at 10 ms is just as accurate as usleep in microseconds; it is the
> > precision that is 10 ms. To time-average 60 Hz VBLs, one only needs to
> > have a 10 ms delay 1/3 of the time and a 20 ms delay 2/3 of the time,
> > e.g.:
> > VBL(1) = 10 ms
> > VBL(2) = 20 ms
> > VBL(3) = 20 ms
> > Time-average the above over 3 VBLs and you have 16.66... ms.
> > This time averaging has been taking place since Thomas added the code.
> > In light of this, SDL at 1 ms is a luxury.
>
> 10 ms is really a very rough averaging step to get close to 16.666 ms.
> In that regard, I prefer sleeping 17 ms + 17 ms + 16 ms, which also
> gives 16.66 ms over 3 VBLs.
>
> So yes, 10 ms precision can be used to average to 16.66 ms, but it will
> do so with much more jitter (= standard deviation) than 1 ms precision.
>
A 10 ms step doesn't have to add jitter, because it ticks more
frequently than the 16.66 ms period. It also happens that each ideal
16.66 ms instant lands within one of the two following 20 ms
intervals:
  sleep schedule                      ideal 60 Hz VBL
  VBL(1) starts at         0 ms        0 ms    (simultaneous)
  VBL(1) = 10 ms, ends at 10 ms
  VBL(2) starts at        10 ms
                                      16.66 ms (within VBL(2))
  VBL(2) = 20 ms, ends at 30 ms
  VBL(3) starts at        30 ms
                                      33.33 ms (within VBL(3))
  VBL(3) = 20 ms, ends at 50 ms
  VBL(1) starts at        50 ms       50 ms    (repeating)
  ...
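The 10+20+20 schedule falls out naturally if one carries the rounding
error from frame to frame. A minimal sketch of the idea (illustrative
only, not Hatari's actual scheduler):

  /* Pace 60 Hz VBLs (16.666 ms) with only 10 ms timer granularity by
   * carrying the rounding error from frame to frame.  Averaged over
   * 3 VBLs this gives 10 + 20 + 20 = 50 ms, i.e. 16.66 ms per VBL,
   * though each single frame is off by up to 6.67 ms (the jitter). */
  #include <stdio.h>

  #define VBL_PERIOD_US 16667   /* ideal 60 Hz frame, in microseconds */
  #define TICK_US       10000   /* coarse 10 ms timer granularity     */

  int main(void)
  {
      long error_us = 0;        /* how far we lag the ideal schedule  */
      int vbl;

      for (vbl = 1; vbl <= 6; vbl++) {
          long sleep_us;

          error_us += VBL_PERIOD_US;                  /* owe one frame */
          sleep_us  = (error_us / TICK_US) * TICK_US; /* whole ticks   */
          error_us -= sleep_us;                       /* keep the rest */

          printf("VBL(%d): sleep %2ld ms, carry %5ld us\n",
                 vbl, sleep_us / 1000, error_us);
      }
      return 0;
  }

This prints the 10/20/20 pattern repeating; change TICK_US to 1000 and
the same loop yields 16+17+17, with the carry never exceeding 1 ms.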
> My point is: just use as much precision as the OS provides (with a
> reasonable amount of code); it won't hurt anyone, and it will benefit
> some cases.
>
I'm happy with this. I was getting concerned about too much precision.
>
> > I don't believe your eyes will be bothered, especially if xorg
> > (or equivalent) is doing its job (buffering). Video is the
> > only thing that is affected; sound is not.
>
> Yes, of course a lot of things happen behind SDL to copy the buffer to
> the video screen, waiting or not for a vsync, ...
>
> Today, you don't see the difference because most LCD monitors will run
> at 70/80 Hz, not 50 or 60, so Hatari emulating 50/60 Hz video will not
> look smooth anyway.
>
> But if you connect your PC to an old CRT capable of doing exactly 60 Hz,
> then doing sleeps of 10+20+20 ms instead of 17+17+16 will be really
> noticeable. If using nanosleep, you will even get a 16.667 ms sleep,
> which means Hatari's video should be really synchronized with the CRT
> monitor at 60 Hz (this is what hardcore emulation fans do with MAME;
> they prefer a CRT because it can output at the same video frequency as
> the original arcade machine).
Good points here. If hardcore emulation is possible with 10 ms
precision, then the rest is extra.
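For the nanosleep case, the usual pattern is to sleep to an absolute
deadline, so oversleeping one frame does not push the following ones
later. A minimal sketch, assuming POSIX clock_nanosleep (again, not
Hatari's actual code):

  #define _POSIX_C_SOURCE 200112L   /* for clock_nanosleep */
  #include <stdio.h>
  #include <time.h>

  #define NSEC_PER_SEC 1000000000L
  #define VBL_PERIOD_NS  16666667L  /* ~16.667 ms, 60 Hz */

  int main(void)
  {
      struct timespec deadline;
      int vbl;

      clock_gettime(CLOCK_MONOTONIC, &deadline);

      for (vbl = 1; vbl <= 3; vbl++) {
          /* advance the absolute deadline by exactly one ideal frame */
          deadline.tv_nsec += VBL_PERIOD_NS;
          if (deadline.tv_nsec >= NSEC_PER_SEC) {
              deadline.tv_nsec -= NSEC_PER_SEC;
              deadline.tv_sec++;
          }
          /* sleep until that absolute time (may need -lrt on older
           * glibc); no per-frame drift accumulates */
          clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
          printf("VBL(%d) at the 16.667 ms mark\n", vbl);
      }
      return 0;
  }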
Note that the ST has its sound, video, and CPU synchronized by the
same crystal, a feature that modern computers lack. I believe that
changing the emulation rate to keep the sound synchronized is
reasonable, given that even an old TV can handle small vertical
frequency variations. The picture size may vary slightly, but an old
TV's position and size change depending on what is being displayed
anyway.
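As a rough illustration of what I mean (the names and numbers here are
hypothetical, not Hatari code): nudge the emulated VBL period by a
fraction of a percent according to the audio output buffer's fill
level, so the sound never underruns or overruns:

  #include <stdio.h>

  #define NOMINAL_VBL_US 16667   /* ideal 60 Hz frame                */
  #define MAX_TRIM_US       83   /* ~0.5% of a frame, within what an
                                    old TV tolerates vertically      */

  /* fill: audio output buffer fill level, 0.0 = empty .. 1.0 = full */
  static long vbl_period_us(double fill)
  {
      double error = fill - 0.5;  /* aim to keep the buffer half full */
      long   trim  = (long)(error * 2.0 * MAX_TRIM_US);
      /* fuller buffer => longer frame => emulation slows down until
       * the sound hardware catches up, and vice versa */
      return NOMINAL_VBL_US + trim;
  }

  int main(void)
  {
      printf("draining: %ld us/frame\n", vbl_period_us(0.25));
      printf("balanced: %ld us/frame\n", vbl_period_us(0.50));
      printf("filling : %ld us/frame\n", vbl_period_us(0.80));
      return 0;
  }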
David