This file was made for all the programmers and fairly experienced
folks out there who want to do something a little more elaborate
than just building and running.

  CONTENTS OF THIS FILE:

  0. INTRODUCTION

  1. BUILD INSTRUCTIONS FOR EACH TYPE OF CHANGE YOU MIGHT MAKE

  2. GNUT DEBUGGING TECHNIQUES

  3. HISTORY OF THE BUILD PROCESS AND AUTO*TOOLS

  4. MAILING LIST

  5. TRACKING CHANGES

  6. REFERENCES AND FURTHER READING

-------------------------------------------------------------------------------

0. INTRODUCTION

gnut was written mainly by Josh Pieper and Jon Arney, with many
contributions by others, during the spring of 2000. Around June of
2000, Josh was too busy with other things and turned over the ongoing
maintenance and bug-fixing to me, Robert Munafo.

I keep gnut in plain text source-code form and edit the files the
normal way; when it seems to be working well enough to release, I
use the auto*tools (notably "make distcheck") to create the new
release as a tarfile.

There is a mailing list for gnut developer discussion at:

  http://groups.yahoo.com/group/gnut_dev/

Old releases are available as tarfiles at the main gnut web site:

  http://www.gnutelliums.com/linux_unix/gnut/

Many have requested that it be put into a CVS repository at a
centralized site like SourceForge. That wouldn't give anyone access
to anything they don't already have, because I don't have the time to
commit changes every few hours and check that the stuff I'm
committing actually works. If it doesn't work, I get flooded with
email.

Also, CVS has a lot of bugs that get in the way of several things I
need to be able to do. The most important problem is that if I go
through and change all the tabs to spaces or make other equally
trivial changes it confuses the diff algorithms and makes it hard to
discover any real (non-whitespace) changes.

Also important, CVS prevents old deltas from being added before
current deltas, and prevents editing old deltas (for example, to
re-indent all the old versions to have a consistent indenting style).
This is important because I want to eventually go back and reformat
the old versions so they look right, but I don't want to do that right
now.

Since gnut is GPL'ed, you should feel free to post your own
modifications to gnut through your own web site or any other means.
The copyright messages at the beginning of each source file are there
simply to keep it out of the "public domain" (read the GPL and the GNU
Manifesto for some background on the issues there).

If you have changes you want to submit for me to include in my
version, send them as plain text diffs in a mail message (not an
attached file, please!) so I can read them and check for bugs.

Here are the main reasons I don't accept contributed changes:

  1. I prioritize my work on gnut as shown in the list in the TODO
     file, which is included with every gnut release. The reasons for
     the priorities are obvious if you read the list (hint: segfaults
     first, bells and whistles last). I get about ten times as many
     submissions as any one person can handle without being paid for
     it. If a change is low priority, I put it off until later, perhaps
     as long as a few years (-:

     I also get requests from people who haven't even bothered to
     implement the change themselves. Usually these changes are of a
     nature that, if implemented, would change gnut's interface
     and force others to adjust to the new behavior. There are
     thousands of users who like gnut exactly the way it is, and
     only 10 or 20 who are asking me to make changes for them. I do
     not have time to do free programming work for people who are
     unable to do it themselves. I expect you to try a change
     yourself and get it working, then come to me only if you still
     think it was a good idea. I will then prioritize your contribution
     as per the TODO file, and I will look over the code carefully
     for errors and make changes where I feel it is necessary. That
     is why gnut is so stable.

  2. The changes are specific to your operating system and you haven't
     told me how to make the changes in a way that works on all
     operating systems. This includes broken system header files that
     don't work unless you #include some other system header file
     first. I can't just blindly include something like <sysstruct.h>
     unless you tell me how to check for whether it exists and
     whether it is required. (See also section 2.5, 'make headerstats')

  3. As described in the next chapter of this file, some files are
     auto-generated and therefore cannot be modified directly. If it
     doesn't end in ".c" or ".h" you should read the next chapter to
     find out what it is and how to change it.

  4. Gnut is a console (command-line-oriented) program. It is
     interactive, and takes commands from a user who is actively
     typing and reading its output. Gnut is not supposed to have a
     graphical interface, or run in the background without a console
     window, or perform complex automatic operations, and it is not
     designed to be easy for a robot (script) to control. The HTTP
     interface, daemon mode and scripting system are all very
     rudimentary and designed to satisfy only the simplest and least
     demanding purposes. If you want to make it do more, you're on
     your own.

- Robert Munafo
  gnut maintainer


-------------------------------------------------------------------------------

1. BUILD INSTRUCTIONS FOR EACH TYPE OF CHANGE YOU MIGHT MAKE

This chapter is needed mainly because there are special things you
need to do to rebuild after changing any of the "special" source
files. For example, if you want to change Makefile you can skip down
to the paragraph under "To update 'Makefile'".

It used to be that all the files used to make a program were
controlled by a single set of rules in a "Makefile" and a single
command called "make". No matter what you changed, you could type
"make" and it would do the right thing. Unfortunately, this isn't true
anymore. Now "Makefile" is generated from "Makefile.in" and that is
generated from "Makefile.am". There are different steps to perform
based on what file you have changed or want to change. There is no
single command that will properly figure out what to do to rebuild.

This mess was made possible by something called the "GNU auto tools".
Further down in this file I have provided a history of building so that
you can easily get up-to-date on the auto*tools and learn why it is so
bloody complicated.

If you are new to the auto*tools, please remember that this file is
here and consult it before sending me any changes to the special
files.



The First Build (if you haven't edited any files)

  Before you can do anything you have to untar the tarfile, which
  creates a directory "gnut-0.X.XX", and then go into that directory.
  All the build instructions assume you are in that directory.

  Here's how to do your first build:

    mv configure conf-warning
    mv conf-real configure
    ./configure         (or 'sh ./configure' if in csh on SysV)
    make
    make install        (only if you want to put it in /usr/local/bin)


Source code changes:

  If you change code, but aren't adding any #include statements,
  you just have to recompile. This includes any other ".c" file
  and any ".h" file EXCEPT config.h or acconfig.h. To make the
  changes take effect, just repeat these steps:


    make
    make install        (only if you want to put it in /usr/local/bin)


*All other changes*

  All other changes, including adding a "#include" statement to a .c or
  .h file, require that you have automake and autoconf on your
  system. Please don't send me patches to the Makefile, or any of the
  other special files, unless you know it isn't an auto-generated
  file. Instead read below and see which file you *really* need to change.

  Also note, the auto-tools are still changing; for the latest updates
  on how to use them, see the URLs listed at the end of this file
  under "References".


After editing "acconfig.h"

  "acconfig.h" and "configure.in" are co-dependent. That means, if you change
  one, you might have to (or want to) change the other. Specifically,
  any features tested for by configure.in might need to have their #define
  symbol added to acconfig.h -- and anytime you add a #define symbol to
  acconfig.h you need to add a test for that feature to configure.in.
  After making such changes, you should type one command:

    autoheader

  followed by all the commands listed under "aclocal.m4".


After editing "aclocal.m4"

  After changing "aclocal.m4", type:

    aclocal -I macros

  followed by all the commands listed under "configure.in".


To update "config.cache"

  This file depends on your installed system and on "configure.in". It
  is auto-generated, so you shouldn't edit it directly. "config.cache"
  only changes if you change something installed in your system (like
  updating libraries or drivers) or if you edit "configure.in". If
  either of these happens, follow the instructions under
  "configure.in" to update "config.cache".


To update "config.h"

  This file depends on "config.h.in". It is auto-generated, so you
  shouldn't edit it directly. To change something in "config.h", you
  have to edit "config.h.in" and then follow the instructions under
  "config.h.in".


To update "config.log"

  This file depends on your installed system and on "configure.in". It
  is auto-generated, so you shouldn't edit it directly. "config.log"
  only changes if you change something installed in your system (like
  updating libraries or drivers) or if you edit "configure.in". If
  either of these happens, follow the instructions under
  "configure.in" to update "config.log".


To update "config.status"

  This file depends on your installed system and on "configure.in". It
  is auto-generated, so you shouldn't edit it directly.
  "config.status" only changes if you change something installed in
  your system (like updating libraries or drivers) or if you edit
  "configure.in". If either of these happens, follow the instructions
  under "configure.in" to update "config.status".


To update "configure"

  This file depends on "configure.in". It is auto-generated, so you
  shouldn't edit it directly. To change something in "configure", you
  have to edit "configure.in" and then follow the instructions under
  "configure.in".


After editing "configure.in"

    autoconf
    automake -a
    rm config.cache
    ./configure
    make
    make install        (only if you want to put it in /usr/local/bin)


To update "install-sh"

  This file is automatically generated by "automake -a", but it has no
  source file (it comes from the system automake directory). You
  shouldn't ever have to change "install-sh". If you have just
  installed a new version of "automake", you should update your
  "install-sh" and the two other files "missing" and "mkinstalldirs"
  by doing:

    rm install-sh
    rm missing
    rm mkinstalldirs
    automake -a


To update "Makefile"

  Makefile depends on Makefile.in and ./configure, and they depend
  on Makefile.am and configure.in. To get Makefile to change, you
  should change Makefile.am and then follow the instructions for
  "Makefile.am".


After changing "Makefile.am"

  After changing "Makefile.am", do these commands:

    autoconf
    automake -a
    rm config.cache
    ./configure
    make
    make install        (only if you want to put it in /usr/local/bin)


To update "Makefile.in"

  Makefile.in depends on Makefile.am. To get Makefile.in to change, you
  should change Makefile.am and then follow the instructions for
  "Makefile.am".


To update "missing"

  This file is treated the same way as "install-sh" -- see the
  instructions for "install-sh".


To update "mkinstalldirs"

  This file is treated the same way as "install-sh" -- see the
  instructions for "install-sh".

To change the gnut version number:

  The version number is set by configure.in and src/lib.h. Find the
  version number in both of those files and change it, then follow the
  instructions for "configure.in".


-------------------------------------------------------------------------------

2. GNUT DEBUGGING TECHNIQUES

2.1. GDB

Because gnut is multi-threaded, I have not even bothered to try GDB. I
don't think it would be very useful. Because so much of what gnut does
depends on things that affect each other in real time, I consider it
better to record everything as it happens and analyze the output after
the fact.

2.2. gnut's Built-in Debugging Facilities

gnut already has quite a bit of debugging built in. There are two
#defines, two config variables and the gnut-segfault-info file.

At the beginning of lib.h you will find two important #defines, YTAGS
and GDEBUG_ENABLE. Normally both of these are set to 0. After changing
either of these you have to do a make to recompile gnut.

If YTAGS is set to 1, the leak detection described in section 2.4 is
enabled. If GDEBUG_ENABLE is set to 1, the *log_level* debugging
output described here is enabled.

The variable *log_level* is normally zero. It can be set to a higher
number to create debugging output. If you are going to use log_level,
start at level 1 and go up one level at a time. Level 2 includes all
the output of level 1 plus more, and level 3 includes all the output
of level 2 plus more, and so on. You will probably want to record the
output by running gnut inside a script session (type "man script" if
you don't know about the script command).

The variable *debug_opts* also turns on debugging output, but it is a
bitfield mask. In other words, you can turn on and off different parts
of the debug_opts output independently. The bits and their meanings
change from time to time. In version 0.4.26 they were:

   1 failed connection attempts
   2 GUIDs
   4 HTTP handshaking and headers
   8 user-cancelled connections and retry disables
  16 download retry overlap handling
  32 dropped packets

so for example, "set debug_opts 17" would select the messages for
failed connection attempts and for download retry overlap handling.

2.3. gnut-segfault file

If gnut segfaults, it will write output into a file called
"gnut-segfault-info", which will be placed in the directory you were
in when you launched gnut. This is a text-only file which is designed
to be emailed to me by users who get segfaults. It consists of a bunch
of hexadecimal numbers written by the program every time it encounters
a call to the macro "dqi()". The numbers are written into a block of
memory and dumped only if a segfault occurs. Writing into a block of
memory allows information to be collected without the performance
overhead of doing printf's to a log file.

gnut can segfault multiple times without crashing. When a segfault
occurs, it only kills one thread, which is usually a GnutellaNet
connection, upload or download. gnut will write up to four segfault
dumps to the gnut-segfault-info file and then stop. If you quit and
restart gnut and it segfaults again, the old gnut-segfault-info file
will get overwritten by the new one.

The gnut-segfault-info file is a plain text file with 79 characters
(or fewer) per line to make it easy for people to send me via email.
It consists of up to four "dumps"; each dump has 16 lines of "global"
dump data and 8 lines of "thread-local" dump data. A typical line of
dump data might look like this:

    43   44   47   48   4a   4b   4c    4   17   1d   1e   1f   2b   2c

These numbers are hexadecimal and correspond to the parameters of
|dqi()| calls in the gnut code. For example, |dqi(0x001e)| will put a
"1e" in the dump. In the example shown here, #|gnut|# executed a
|dqi(0x0043)|, then a little later a |dqi(0x0044)|, then a
|dqi(0x0047)|, and so on.

The numbers in the dump appear in chronological order, and the 8 lines
(or 16 lines) of numbers are always the *most recent* 128 (or 256)
calls to |dqi()| that happened before the crash.

To find out what each number means, search all the source code using a
command like this:

  |fgrep 0x0045 src/*.c|

I will not provide a list of the numbers and where they are called
because they change way too often.

The thread-local dump shows *only* the |dqi()| calls that happened
inside the thread that just SEGFAULTed. The last number in the
thread-local dump is the most useful, because it shows the last
|dqi()| that happened before the crash.

If the *auto_remove_segv_info* setting is set (which it is by
default), |gnut| will delete the gnut-segfault-info file, if any, when it
starts up. Since it only checks in the directory it started in, you
can get leftover gnut-segfault-info files if you run |gnut| from
different directories.

Feel free to modify your version of gnut by adding more debugging
messages using the same three methods. The log_level debugging
messages are all done with calls to the routines gd_s(), gd_02x(),
gd_i(), gd_x(), gd_li(), gd_lu(), and gd_p(). The debug_opts messages
can be found by looking for tests of the variable gc_debug_opts. The
gnut-segfault-info info is generated by calls to the macro dqi().

2.4. Memory Leak and Memory Overwrite Detection

All of the things in this section require YTAGS to be set to 1 in
lib.h. To do this, edit lib.h, find this:

  #define YTAGS 0

and change to this:

  #define YTAGS 1

All of the memory allocate and free operations in gnut are done
through a special set of routines that keep track of where the memory
was allocated and whether a pointer is being freed twice. This allows
you to detect memory leaks (places where memory is allocated but never
freed, even after it is no longer needed).

Also, when a block is deallocated, gnut checks to see if the bytes
right before the beginning or right after the end of the block were
overwritten.

Every allocate call is done through the routines ymaloc(), ystdup(),
and ycaloc() which are used in place of the standard routines malloc,
strdup and calloc. Every call to one of those routines passes a unique
three-digit number, and the three-digit number gets stored in the
beginning of the memory block. The blocks are freed by calls to
various routines like fre_str(), fre_v(), fre_gq(). These also have
   unique 3-digit numbers, and they check to see if the pointer is already
0 before trying to free it.

To find out if there is a leak, first turn on leak detection and
recompile. Then start gnut, let it run for as long as you can until it
completely fills your computer's memory (as of version 0.4.27 this
took at least a week). While gnut is running, type the "debug" command
at intervals (every hour for the first several hours, then twice a day
after that). It will print something like the following:

  ymaloc(..., 296) == 2152, delta 105 

This means there are currently 2152 blocks of type 296 allocated in
memory, and that's 105 more than the last time you typed "debug". If
you suspect this is a memory leak, then you should find the number "296"
in the source code:

  bash# cd src
  bash# fgrep 296 *.c
  route.c:  re = (route_entry *) ymaloc(sizeof(route_entry), 296);
  bash#

This shows that the suspected leak is from allocating route_entry
structures in route.c, which are maybe not being deallocated
elsewhere.

Note that gnut allocates lots of things (packet GUIDs, query replies,
host addresses, etc.) and enforces maximums for all of these. Gnut
does not continue allocating memory for those items after the maximum
is reached. For example, the route_entry allocation shown above is for
GUID matching, and will keep growing until the number of allocated
blocks is equal to the value of ROUTE_MAX, which is defined in
route.h. So, you should make sure you aren't looking at a "leak" that
is really just a case of this type of behavior. The best way to avoid
this is to just ignore what "debug" says during the first few hours.

2.5. 'make headerstats'

Starting with version 0.4.28, there is a new make target that you can
invoke either from the main directory (the same place you normally
type 'make') or from within the 'src' directory. If you type:

  make headerstats

it will run a Perl script called 'hsrun' which automatically tests for
most system header-file problems. It places the results of its tests
in the file src/headerstats.out, and a detailed log of all of its
tests and the errors encountered in src/headerstats.log.

headerstats is a useful tool for any cross-platform development
project, and in fact, 'hsrun' is designed to be used in any auto-tools
based project with no modification. Read the comments at the beginning
of hsrun for instructions.

If users of versions 0.4.28 and later report compile errors relating
to the system header files, I ask them to run 'make headerstats' and
email me the output.

-------------------------------------------------------------------------------

3. HISTORY OF THE BUILD PROCESS AND AUTO*TOOLS

You might have noticed above that there are SIX STEPS required to do a
rebuild after editing "configure.in". Why is it so complicated?

You might remember the days when all the dependencies and rules were
encapsulated in one file (called "Makefile") and no matter what you
changed (including the Makefile itself) the "make" command would
figure out what to do to rebuild everything. That's not true anymore.

The original Makefile model worked well before the proliferation of
many different types of UNIX and the advent of cross-platform
compatibility.

To handle all the different types of Unix, the Makefile has to be
complex -- so complex that it is no longer practical to edit by hand.
Furthermore, many things have to be figured out by the machine where
the program is going to be built, rather than the machine where the
programmer developed it. So, the "meta-rules" for creating Makefiles
got too complicated to edit by hand. Eventually there got to be a lot
of different types of source files, and a lot of different rules for
what to do if you want to change something. The best way to explain
this is by going through the history in chronological order.

Originally, all changes were made just by changing C source code (in
the ".c" and ".h" files) and typing "cc" to compile it into a binary:

                  sources -.
                            \
                            cc
                             \
                              `->  binary

Then came compiling and linking as a separate step. This created the
situation that if you change just one ".c" file, you only have to
re-compile that one file and then link, but if you change a ".h" file
you probably had to compile everything. To save time, people figured
out how to make a list of rules for which ".c" files depend on which
".h" files. The "make" tool was developed, a program that would
automatically figure out what needed to be recompiled. "make" takes a
new source file, called "Makefile", and also uses all the program
source files. Rebuilding still consisted of just one step:

               Makefile
                    and   -.
                sources     \
                           make
                             \
                              `->  binary

"make" wasn't quite as smart as it should have been. For example,
there's no way to get it to check if the Makefile itself was changed.
If you change the "Makefile" (like when you add new source files) you
have to type a special command like "make clean" to force it to
recompile everything. Commands like "make clean" are still used today.

After a few years, different versions of Unix started to exist (like
BSD vs. AT&T System III), and people noticed that you had to change
the ".c" and ".h" files and the Makefile in different ways depending
on what type of Unix you were compiling for. Those changes were called
"configuration", and were kind of tedious, because it took a lot of
knowledge and diligence to make all the correct changes for your own
particular type of Unix. Eventually it was decided all such changes
should be controlled by #ifdef tests (like "#ifdef BSD_4_1"), and the
#defines could be specified in the Makefile or a header file called
"config.h" (or something similar). Building then required two steps:

             generic
           Makefile and  -.
             config.h      \
   STEP 1.             manually-edit
                             \
                              `->  custom Makefile
                                    and config.h

               Makefile
                    and   -.
                sources     \
   STEP 2.                 make
                             \
                              `->  binary

Next, standard "configuration systems" were created. Usually a
configuration system was a set of shell scripts that made all the
tests and modifications automatically. The effect was to replace the
first manual step with something more automatic. For example, there
might be a script called "configure" that did the work. Because the
"Makefile" was now auto-generated by "configure", the "configure"
script was what you edited when you wanted to change the Makefile, and
the Makefile became an uneditable, automatically-generated file just
like the program binary. The two build steps became:

    configuration-files  -.
                           \
   STEP 1.              configure
                             \
                              `->  Makefile

               Makefile
                    and   -.
                sources     \
   STEP 2.                 make
                             \
                              `->  binary

Several different types of configuration systems were in place by
1992. Some consisted of a script called "configure" that did all the
tests to see what type of Unix you're running on, then generated the
Makefiles. The configure script had to know a lot about the syntax of
makefiles, as well as knowing a lot about how to test for different
features of operating systems.

Eventually, the job of doing the operating-system tests and the job of
creating the Makefiles from "Makefile templates" was split up into two
different tools.

By 1994 it was generally agreed that the best tool for the
operating-system tests was "autoconf". It took one new source file:
"configure.in" and generated a script called "configure" as output.
The "configure" script, in turn, took one new source file called
"Makefile.in", and generated "Makefile" as an output file. At this
point the build had three steps that worked like this:

             configure.in -.
                            \
   STEP 0.               autoconf
                             \
                              `->  configure

   - - - - tarfile is distributed in this form - - - -

             Makefile.in -.
                           \
   STEP 1.              configure
                             \
                              `->  Makefile

               Makefile
                    and   -.
                sources     \
   STEP 2.                 make
                             \
                              `->  binary

Note that Step 0 only had to be done if you changed the configuration
requirements, like if you added a major new feature that depended on
something that is different on different systems (an example would be
adding a graphical user interface to a program that was previously
text-only). Therefore, the build process was now split into the "user
installation" steps (steps 1 and 2) and the "complete rebuild from
scratch" (steps 0 1 and 2). Typically, the programmer would perform
step 0 and distribute the result to the users, who perform steps 1 and
2. This is indicated above where it says "tarfile is distributed in
this form".

The weak point in this system was "Makefile.in". This had to be a very
large and complex file, because it contained all the rules for how to
generate a "Makefile", and Makefiles were pretty complex and varied a
lot from one OS to another, and since Makefile.in was a source file it
had to be edited manually. Most of "Makefile.in" was the same
regardless of what program you were building, and programmers found it
cumbersome.

The solution to that was "automake". It automatically creates
"Makefile.in" from another new source file, called "Makefile.am". By
1996, the standard build process had four steps (two for users doing
an install and two more for people adding new features) and the steps
were:

             configure.in -.
                            \
   STEP 0-A.             autoconf
                             \
                              `->  configure

              Makefile.am -.
                            \
   STEP 0-B.             automake
                             \
                              `->  Makefile.in

   - - - - tarfile is distributed in this form - - - -

             Makefile.in -.
                           \
   STEP 1.              configure
                             \
                              `->  Makefile

               Makefile
                    and   -.
                sources     \
   STEP 2.                 make
                             \
                              `->  binary

Over the next couple years, "configure.in" got bigger and included
lots of code to test for lots of different types of libraries,
drivers, operating systems, etc. Eventually "configure.in" became the
biggest and hardest-to-maintain file, just like "Makefile.in" had
been. More recent versions of "autoconf" have solved this by allowing
for the use of a "macros" file called "aclocal.m4". The "macros" are
written in a language called "m4", and they contain the rules for
performing all sorts of different operating-system tests. As far as
the build process is concerned, these can be treated as part of step
0-A, except that you don't ever have to worry about changing the
contents of "aclocal.m4":

             configure.in 
               aclocal.m4   -.
                              \
   STEP 0-A.               autoconf
                               \
                                `->  configure

   STEP 0-B.  (automake step, same as above)

   - - - - tarfile is distributed in this form - - - -

   STEP 1.    (configure step, same as above)

   STEP 2.    (make step, same as above)


Around the same time it also became common to use a tool called
"aclocal" to generate "aclocal.m4", from a directory of macros files
called "macros". This added a fifth step to the full build process:

             configure.in 
              macros/*.m4   -.
                              \
   STEP 0-A.            aclocal -I macros
                                \
                                 `->  aclocal.m4

             configure.in 
               aclocal.m4   -.
                              \
   STEP 0-B.               autoconf
                               \
                                `->  configure

   STEP 0-C.  (automake step, same as above)

   - - - - tarfile is distributed in this form - - - -

   STEP 1.    (configure step, same as above)

   STEP 2.    (make step, same as above)




Complete list of files and the order in which they are built:

 ORIGINAL FILES

        the file: configure.in
 is created from: typed in by hand

        the file: Makefile.am
 is created from: typed in by hand

        the file: src/gnut.c
 is created from: typed in by hand

        the file: src/gnut.h
 is created from: typed in by hand

        the file: src/anything.c      (any ".c" not listed below)
 is created from: typed in by hand

        the file: src/anything.h      (any ".h" not listed below)
 is created from: typed in by hand



 AUTO_GENERATED FILES

        the file: config.h.in 
 is created from: acconfig.h configure.in
              by: autoheader
  
        the file: config.h
 is created from: config.h.in
              by: ./configure

        the file: Makefile
 is created from: Makefile.in
              by: ./configure

        the file: configure
 is created from: configure.in aclocal.m4
              by: autoconf

        the file: aclocal.m4
 is created from: configure.in macros/*.m4
              by: aclocal -I macros

        the file: Makefile.in
 is created from: Makefile.am
              by: automake

-------------------------------------------------------------------------------

4. MAILING LIST

There is a mailing list for programmers interested in improving and
customizing gnut, which you can read (and if interested, join) by
going to:

  http://groups.yahoo.com/group/gnut_dev

-------------------------------------------------------------------------------

5. TRACKING CHANGES

After you customize your gnut, you might want to add some of the
changes I've done in my version in the meantime, particularly if they
are major bug fixes.

Because of limitations in CVS (see section 1) I just distribute
tarfiles. But you can still tell where the changes were made in a
recent version by looking for tags. Tags are comments containing
something like "0.4.27.c05". The tags are identified in the ChangeLog
file (in the same directory as this file). To see how each change was
implemented, find all occurrences of its tag in the source and compare
those places in the source to the corresponding place in the previous
version's source.

-------------------------------------------------------------------------------

6. REFERENCES AND FURTHER READING

For more about the auto-tools, check:

 http://sources.redhat.com/autoconf/autoconf.html
 http://sources.redhat.com/autobook/autobook/autobook_toc.html
 http://sources.redhat.com/autobook/autobook/autobook_15.html

If those links are all dead, do a search-engine search for something like:

    "automake autoconf HOWTO"
    "autoconf tutorial"
    "Makefile configure build aclocal"

Those are just suggestions. You need to try different combinations of
jargon-words like those shown here until you find a good page that
contains the type of info you need.

For more about Gnutella, check:

 http://gnutelladev.wego.com   (developer-oriented)

 http://gnutella.wego.com
 http://gnutella.co.uk
 http://www.gnutellanews.com
 http://www.gnutelliums.com
