[hatari-devel] Hatari patches for bitplane conversion

Kåre Andersen kareandersen at gmail.com
Tue Jul 21 21:20:54 CEST 2009


On Tue, Jul 21, 2009 at 8:46 PM, Eero Tamminen <eerot at users.berlios.de> wrote:
> Hi,
>
> On Tuesday 21 July 2009, Kåre Andersen wrote:
>> Indeed. Which brings up another point: Why are the converters
>> #include'd into one source file rather than being compiled as separate
>> units as would make so much more sense? It really messes up the whole
>> structure of the code, as they are neither inline functions nor simple
>> headers...
>
> As the functions are declared static, they're in practice the same and you
> can consider them as inline.  With GCC, specifying "inline" is pretty much
> redundant; if a function is static, then with optimizations enabled GCC
> either inlines it or jumps directly to it (depending on how many places
> it's called from in the code).  You cannot help noticing this when you
> debug GCC-optimized code with GDB... :-)

> If you put them into separate object files, they would need to be global
> functions and would have the normal function call overhead.  If I remember
> correctly, a more important reason was that the helper functions in
> screen.c (which are called from the converters) would then need to be
> global and have the normal function call overhead too.
>
> (GCC/GAS don't yet support inter-object optimizations.)

Ah yes, I remember now - that table declaring them static effectively
gives them file scope. But uhm, is this overhead really noticeable?
You clearly know GCC a lot better than I do, so I won't argue; I just
find it hard to maintain, and it leads to all that code duplication. I
keep thinking these things should be dynamically linked and the
functors could then be wrestled out from there... I am probably wrong
:) What I actually know best is M68k :D I do like readable, manageable
C code though :)
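
For the archives, here is a minimal, self-contained sketch of the argument
as I understand it - the file, names and numbers are made up, this is not
the actual Hatari code:

/* static_inline_sketch.c - illustrative only, not Hatari code.
 * Shows why keeping helpers and converters static in one translation
 * unit matters: with -O2, GCC can inline the static helper into the
 * converter, which separate object files would not allow (no
 * inter-object optimization in GCC/GAS as of 2009). */
#include <stdint.h>
#include <stdio.h>

/* Helper with file scope: GCC is free to inline it into every caller
 * in this translation unit. Made extern in its own object file, every
 * use would be a real function call. */
static uint16_t st_to_rgb16(uint8_t st_colour)
{
    return (uint16_t)(st_colour * 0x0842);   /* dummy palette expansion */
}

/* "Converter": also static, so calling the helper per pixel is free. */
static void convert_line(const uint8_t *src, uint16_t *dst, int pixels)
{
    int i;
    for (i = 0; i < pixels; i++)
        dst[i] = st_to_rgb16(src[i]);
}

int main(void)
{
    uint8_t  src[320] = { 1, 2, 3 };
    uint16_t dst[320];

    convert_line(src, dst, 320);
    printf("%04x %04x %04x\n", dst[0], dst[1], dst[2]);
    return 0;
}

Compile it with and without -O2 and compare the disassembly; the optimized
build typically folds st_to_rgb16() straight into convert_line().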


>> > How do you want to detect that SDL feature? Hard-coding it with #ifdefs
>> > is a bad idea, IMHO.
>>
>> I guess there are several ways to do this, including the check for
>> hardware surfaces. The safest way should nevertheless be to do a bit
>> of profiling on buffer creation (that is, at program start _and_ on
>> screen mode changes). You can wait for vsync and see how much time
>> passes in between. If it's shorter than a given threshold - say the
>> frame time at 50Hz - then you don't have any vsync...
>
> I like run-time detection, but isn't it possible to do it only at Hatari
> startup - why would it need to be done on each screen mode change?
>
> (Some programs, especially for Falcon do frequent mode changes.)

Host screen mode changes, like say going to full screen, could change
the whole situation - on OS X at least, where it seems you bypass
compositing by rendering SDL full screen...  And sure, in Behn I mess
with the VIDEL until it bleeds out the DSP port. Nearly, anyway. Well,
I was young and naive ;)
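
To make the probe I described above concrete, something like this rough
sketch against the SDL 1.2 API is what I have in mind (hypothetical code,
not taken from Hatari, and the threshold is just a guess) - run once at
startup and again after each host mode change:

/* vsync_probe.c - illustrative sketch, not Hatari code (SDL 1.2 API). */
#include <SDL.h>
#include <stdio.h>

/* Flip the screen a few times and measure the average interval.
 * If flips complete much faster than one frame at 50Hz (20ms),
 * assume SDL_Flip() is not waiting for vertical blank. */
static int probe_vsync(SDL_Surface *screen)
{
    const int flips = 8;
    Uint32 start, elapsed;
    int i;

    start = SDL_GetTicks();
    for (i = 0; i < flips; i++)
        SDL_Flip(screen);
    elapsed = SDL_GetTicks() - start;

    return (elapsed / flips) >= 15;   /* crude ~15ms threshold */
}

int main(void)
{
    SDL_Surface *screen;

    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;
    screen = SDL_SetVideoMode(640, 400, 16, SDL_HWSURFACE | SDL_DOUBLEBUF);
    if (screen == NULL) {
        SDL_Quit();
        return 1;
    }
    printf("vsync: %s\n", probe_vsync(screen) ? "probably yes" : "probably no");
    SDL_Quit();
    return 0;
}

When a compositor (or a windowed mode that never blocks on retrace) is in
the way, the flips come back almost immediately, which is exactly the case
the threshold is meant to catch.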

>> A similar test is already done to get fine-granularity cycle timing
>> (and the comments about OS X in that part of the code are wrong, mind
>> you - we have HPET just as much as Linux does).
>
> On PowerPC?  (I think Thomas' Mac is PowerPC :-))

Ah, I haven't a clue about PPC, but then the #ifdefs are wrong anyway -
and yes, you are right - it is... :)
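
And to be clear about what I mean by the #ifdefs being wrong: both
platforms expose a fine-grained monotonic clock at run time, roughly like
this throwaway sketch (again illustrative, not the Hatari code in
question; how it behaves on PPC I honestly don't know):

/* hires_clock.c - illustrative only; not the Hatari timing code. */
#include <stdint.h>
#include <stdio.h>

#ifdef __APPLE__
#include <mach/mach_time.h>

/* OS X: mach_absolute_time() ticks, scaled to nanoseconds. */
static uint64_t now_ns(void)
{
    static mach_timebase_info_data_t tb;
    if (tb.denom == 0)
        mach_timebase_info(&tb);
    return mach_absolute_time() * tb.numer / tb.denom;
}
#else
#include <time.h>

/* Linux and friends: CLOCK_MONOTONIC (link with -lrt on older glibc). */
static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}
#endif

int main(void)
{
    uint64_t t0 = now_ns();
    uint64_t t1 = now_ns();
    printf("two back-to-back reads: %llu ns apart\n",
           (unsigned long long)(t1 - t0));
    return 0;
}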

-Kåre


