Re: [CodeGallery] MFC MD5 Calculator

See below...
On Fri, 26 Sep 2008 23:32:49 +0200, "Giovanni Dicanio"
<giovanniDOTdicanio@xxxxxxxxxxxxxxxxx> wrote:

"Xavier" <xavier@xxxxxxx> wrote in the message

With MD5 apis : no Crypto API required...

CryptoAPI is a part of the Win32 API, and it seems to me that it is present by
default on Windows XP and Windows Vista.
Using CryptoAPI to calculate MD5 makes your code independent of 3rd-party libraries.

I found an open source MD5 library here:

but it was pure C code, with a style that seems very strange to me:


void MD5Update(ctx, buf, len)
struct MD5Context *ctx;
unsigned char *buf;
unsigned len;
In the original K&R C, all values were integers, so you didn't need to specify a type.
When types were later added, the compiler internally still treated parameters as int
values, but new syntax was introduced: the type declarations were placed after the
parameter list.

This is generally considered crap code beneath contempt today.

This would be rewritten as
void MD5Update(struct MD5Context * ctx, unsigned char * buf, unsigned len)

Note that the compiler did no cross-checking of types; you did not need to have header
files; you could just write
MD5Update(3, 22, 17);
and it would compile without error. Of course, it would crash horribly with memory
exceptions if you ever RAN the code, but you didn't have to type anything clumsy, like
argument types (who needs those, anyway?). So your programs could be smaller. And
because you didn't need a header file, your compilations could run faster.

The lint program was written to take a collection of files and deduce what we now call
function prototypes and cross-check all calls against the deduced prototype. This was why
Hungarian Notation was invented: so programmers could visually check the types of the
arguments without having to actually read any headers.

By the way, the header file said
extern void MD5Update();
because there was no way to specify parameter types in a header file. You could write
MD5Update(2, 22, 17);
MD5Update("abc", 22, &length, 9);

and they would all compile and try to execute (and all would fail horribly). Now try to
debug this on a PDP-11 or Vax with nothing we would call a "debugger" (I had to rewrite
the debugger to be usable; I went from environments with real debuggers and even
source-level debuggers to the PDP-11 and Vax and C, and it was like going through a
time-warp to the mid-1960s. All that was missing was the punched cards to make the
experience complete).

Who needs type checking anyway? It just gets in the way of writing code as quickly as
possible! Programmers don't make mistakes, anyway (and does anyone remember the infamous
nationwide shutdown of the AT&T long-distance system in January 1990? It was a coding
error in C that a decent language and compiler would have caught...)

And void wasn't a data type; it was an annotation:

#define void int

I programmed in this tenth-rate piece of crap for about five years, hating every minute of
it. It ranks among the worst languages ever designed. It wasn't until the idea of an
ANSI standard (and later an ISO standard) began to emerge that the language design
community got enough power to replace this piece of crap with real function prototypes
that had compile-time type-checking.

What is this way of defining function parameters??
Is it C 1.0?
Well, it is official AT&T Bell Labs C of the 1970s. And there were vendors producing crap
compilers like this into the late 1980s.

In those days, struct fields were global names, so if you had

struct Point2 { int x; int y; };
you could not have another
struct Point3 { int z; int y; int x; };

because x was a global name bound to an offset of 0 into a structure. So you could write

int A;
A.y = 2;

and it would actually compile code to store the value 2 (a 16-bit value) offset 2 bytes
from the address of A. This is why we have the stupidity of having "." and "->" as
distinct operators; A.y meant "take the address of A, add 2 to it, and use the resulting
location as the source or destination value" but A->y meant "take the contents of A, add 2
to it, and use the resulting location as the source or destination value". That is, you
would write

int A;
A = ...some value...;
A->y = 2;

and it would store the value 2 bytes beyond the address stored in A (yes, I'm serious! It
really worked this way!)

This is why we get stupid naming conventions where the fields all start with some prefix
based on the structure type. The need is long gone, but someone looks at the old C header
files that haven't changed much since 1974 or so, sees the naming convention, and says "I
guess I have to use this technique, it is Blessed By The Designers Of C". But in fact, it
was a horrible kludge to compensate for a crappy language design. As is the distinction
between "->" and ".". These are holdovers from an era of bad language design and poor
compiler design that characterized many languages of the late 1960s and early 1970s
(remember, the original C was a language whose design would have been considered bad
even in 1960).

However, in general, if something is already available in Win32, I prefer
using that instead of 3rd party stuff, IMHO.

Joseph M. Newcomer [MVP]
email: newcomer@xxxxxxxxxxxx
MVP Tips: