Re: bool vs BOOL

The reason "bool" requires more overhead than the "BOOL" type is the
size difference (which Carl's reply already addressed). The performance
difference comes from a machine-architecture point of view.

A CPU is best at handling its native data size. On most desktops
today, that tends to be a 32-bit value. That is why most memory
copying/moving algorithms copy memory one byte at a time until they
hit a 32-bit boundary, and then copy the memory in as many 32-bit
chunks as possible.
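As a rough illustration of that alignment strategy, here is a minimal sketch (my own hypothetical `copy32`, not code from this thread): copy single bytes until the destination reaches a 4-byte boundary, then move 32-bit chunks, then finish the tail byte-by-byte.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of an alignment-aware copy routine. */
void copy32(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;

    /* Lead-in: single bytes until dst hits a 32-bit (4-byte) boundary. */
    while (n > 0 && ((uintptr_t)d & 3u) != 0) {
        *d++ = *s++;
        n--;
    }

    /* Bulk: as many 32-bit chunks as possible. The small memcpy calls
     * keep this within strict-aliasing rules; compilers lower them to
     * plain 32-bit loads and stores. */
    while (n >= 4) {
        uint32_t w;
        memcpy(&w, s, 4);
        memcpy(d, &w, 4);
        d += 4;
        s += 4;
        n -= 4;
    }

    /* Tail: remaining bytes one at a time. */
    while (n > 0) {
        *d++ = *s++;
        n--;
    }
}
```

Real library implementations of memcpy are far more elaborate (SIMD, cache-line handling), but the byte/word/byte shape is the same.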

When dealing with values like "bool", that value must be "masked" at
some point (or even at multiple points) when loaded into or saved from
a register, in order to "hide" the unused bits of the value. The
masking is just another step that the CPU would not normally have to
take. (Also note that this performance difference may be *VERY*
CPU-specific, varying with both the CPU vendor and the CPU's
microarchitecture.)
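You can see that extra normalization step in C itself (a small sketch of mine, with a Win32-style `typedef int BOOL` assumed for illustration): any nonzero value converted to `bool` must be collapsed to exactly 1, whereas a plain-int BOOL stores the value unchanged.

```c
#include <stdbool.h>

typedef int BOOL;  /* Win32-style BOOL: just a 32-bit int, assumed here */

/* Converting an int to C99/C++ bool forces a normalization step:
 * the compiler must emit a compare (or similar masking) so that any
 * nonzero value becomes exactly 1. */
bool to_bool(int v) { return v; }

/* A plain-int BOOL needs no such step: it is a straight 32-bit move. */
BOOL to_BOOL(int v) { return v; }
```

On x86, `to_bool` typically compiles to a `test`/`setne` pair while `to_BOOL` is a single register move; that extra instruction is the "masking" overhead described above.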

My opinion, take it or leave it, is that there are many more problems
in software development that need to be addressed (like the unnecessary
abuse of dynamically allocated memory, and other poor practices) that
have much more of a performance impact than using a "bool" over a
"BOOL". Personally, I minimize my use of heap memory whenever possible,
but I still use "bool" instead of "BOOL" in my objects, as parameters,
and as variables. If you are experiencing a performance problem, I would
look for lower-hanging fruit before changing all my "bool"s to "BOOL"s.


-=- James.

