[GURUZ: guruz.dyn.bawue.de:6346]

* upload_add -- only first line read, need to handle headers before doing
  the upload. [DONE]

[test this -- we shall accept incoming 0.6, but still connect at 0.4]
[make sure uploads work]

* parse handshake ack/replies for codes other than 2xx.
* flag HEAD/GET correctly in uploads.
* add header parsing in download, for their replies (needs errcode parsing).
* Send 10 fresh pongs to old 0.4 servents to whom we deny the connection.
* Push-reply handling (reception of GIV, and sending)
* if "push" flag is on, start immediately with a PUSH.
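The push items above hinge on the GIV line a pushed servent sends when it connects back ("GIV <index>:<hex GUID>/<filename>"). A minimal parsing sketch, with illustrative function and buffer names (not the servent's actual API), assuming the line terminator has already been stripped:

```c
#include <stdlib.h>
#include <string.h>

/* Parse a "GIV <index>:<hex GUID>/<filename>" line, as sent by the
 * pushed servent when it connects back.  Returns 0 on success, -1 on
 * a malformed line.  Sketch only: the hex digits are not validated. */
static int giv_parse(const char *line, unsigned *index,
                     char guid_hex[33], char *name, size_t namelen)
{
	const char *p;
	char *end;

	if (strncmp(line, "GIV ", 4) != 0)
		return -1;
	p = line + 4;
	*index = (unsigned) strtoul(p, &end, 10);
	if (end == p)
		return -1;
	p = end;
	if (*p++ != ':')
		return -1;
	if (strlen(p) < 32 + 1)		/* 32 hex digits + '/' */
		return -1;
	memcpy(guid_hex, p, 32);
	guid_hex[32] = '\0';
	p += 32;
	if (*p++ != '/')
		return -1;
	strncpy(name, p, namelen - 1);
	name[namelen - 1] = '\0';
	return 0;
}
```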

[make sure we can download]

* test ping/pong reduction
* remove all routing information for pongs
  (will need to rethink/rewrite node_parse() so it does not route_message()
   before handling the ping/pong -- we could also separate the handle_it/drop_it
   function from the route-to-whom function, and defer the actual routing
   until after the processing: this would allow us to drop invalid search
   requests with no trailing NUL in them).
* change 10 fresh pong sending: wait for first incoming PING message, and
  reuse the same GUID for the reply.  Kill "Ponging" connection after timeout
  or if first message is not a PING.
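Reusing the ping's GUID amounts to building a pong the remote host will route as a reply to its own ping. A sketch of the 0.4 wire layout (the header and pong payload layouts follow the protocol; the function name and constants' names are illustrative):

```c
#include <string.h>
#include <stdint.h>

#define GTA_MSG_INIT          0x00	/* Ping */
#define GTA_MSG_INIT_RESPONSE 0x01	/* Pong */

/* Gnutella 0.4 header: GUID(16) + function(1) + TTL(1) + hops(1) +
 * payload size(4, little-endian).  Build a pong reusing the GUID of
 * the ping we are replying to, so the remote host routes it back as
 * an answer to its own ping. */
static void build_pong(uint8_t pong[23 + 14], const uint8_t ping_guid[16],
                       uint16_t port, uint32_t ip_be,
                       uint32_t files, uint32_t kbytes)
{
	uint8_t *p = pong;
	int i;

	memcpy(p, ping_guid, 16); p += 16;	/* reuse the ping's GUID */
	*p++ = GTA_MSG_INIT_RESPONSE;
	*p++ = 1;				/* TTL: direct neighbour */
	*p++ = 0;				/* hops */
	*p++ = 14; *p++ = 0; *p++ = 0; *p++ = 0;	/* payload length, LE */

	*p++ = port & 0xff; *p++ = port >> 8;	/* port, little-endian */
	memcpy(p, &ip_be, 4); p += 4;		/* IP, already network order */
	/* shared file count and library size in KiB, little-endian */
	for (i = 0; i < 4; i++) *p++ = (files  >> (8 * i)) & 0xff;
	for (i = 0; i < 4; i++) *p++ = (kbytes >> (8 * i)) & 0xff;
}
```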

* add connection at the 0.6 level, falling back to 0.4 if they close.

[test we can still connect to the network]

* Generate X-Try header, and parse the ones we may receive.
* Add parameter and logic to avoid sending a push if too old a record.
  Note, when re-issuing, update the "last search" like we do for the indices,
  meaning the route is "fresh".  [Implemented as "Route Lost"]

* apply the "double-click" patch
* apply the filtering patch
* URL (un)escaping (GIV strings as well? no need)
* Node: header emission and parsing, along with X-My-Address
* Fix queue_remove_all_named() to remove all those in "Retry in 10s" state.
* Deal with GIV in unexpected state or unexpected GIV more gracefully.
  Those are really precious.

* implement Bye! (will need to read data on write error, not close at once)
. for nodes, set SO_SNDBUF to 8K (for instance), and on bye, raise it back
  to the size of the last unsent packet + sizeof bye to make sure we can
  write the whole thing to the kernel without blocking.
. add node_timer(), download_timer() routines... to remove all this crap
  from main.c's main_timer().
* Support upper end on Range requests, including Range: bytes=-10.
* Fix BUG: rx_dropped can be incremented twice for same message if TTL reached
  0 and it's a ping and we throttle it, for instance.
  -- I've seen RX=1 and Drop(RX=3456) [when FC on]
* collect user-agent info on neighbouring node, parsing query hits if
  necessary to collect vendor code when nothing is in the handshaking headers.
* Add setlocale(LC_TIME, "C") to ensure dates are shown uniformly.
* Bandwidth management
* Add Date and Last-Modified headers to HTTP upload replies.
* Add TCP_CORK on uploads?
* Use sendfile(2) (not very portable)?
* Investigate routing bug on my own messages (core in ~/dbg) -- Not a bug!
* Raise receive buffer for downloads?
* Handle "alive pings" with high priority: enqueue the message at pri=1.
  (first modify the message comparison routine to take priority into account)
* mbuf->data bug: verify that the node is present in the loop, by testing
  n->membuf.
* Patches from guruz...
* Sticky GUID for servents behind a firewall.
* Add lockfile. (no multiple runs on same machine)
* When killing all downloads named..., only do so with those whose size is
  lower than the finished one.  If a bigger exists, need to continue download,
  so move back file to working directory.
* Investigate download_push_insert() bug. -- found it, I think (13/03/2002)
* Fix Route Lost for downloads that have only fallback to push.
* Generate a config.new, renamed as config, and move config as config.orig
* Ensure minimum value for node_sendqueue_size (1.5 * max packet size).
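For the Range item above, a sketch of a parser handling the suffix form `bytes=-10` as well as `a-b` and `a-`, given the part after "bytes=". Illustrative only: a single range, no ',' handling, and not the servent's actual code:

```c
#include <stdlib.h>

/* Parse a byte-range spec into [start, end] (inclusive), given the
 * file size.  Handles "a-b", "a-" and the suffix form "-n" (last n
 * bytes).  Returns 0 on success, -1 on a malformed or unsatisfiable
 * range. */
static int range_parse(const char *spec, unsigned long filesize,
                       unsigned long *start, unsigned long *end)
{
	char *p;

	if (*spec == '-') {			/* suffix: last n bytes */
		unsigned long n = strtoul(spec + 1, &p, 10);
		if (p == spec + 1 || *p != '\0' || n == 0)
			return -1;
		if (n > filesize)
			n = filesize;
		*start = filesize - n;
		*end = filesize - 1;
		return 0;
	}

	*start = strtoul(spec, &p, 10);
	if (*p != '-')
		return -1;
	if (p[1] == '\0')			/* "a-": to end of file */
		*end = filesize - 1;
	else {
		const char *q = p + 1;
		*end = strtoul(q, &p, 10);
		if (*p != '\0')
			return -1;
	}
	if (*start >= filesize || *end < *start)
		return -1;
	if (*end >= filesize)
		*end = filesize - 1;
	return 0;
}
```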

>>> for 0.85 release

* Try to return "414 Request-URI Too Large" for large HTTP requests. [Check!]
* Re-write README
* Review man page, sort options alphabetically for reference.
* Ensure no option is missing.
* Document screen interface, explain columns.
* Write a FEATURES section/file.

>>> for 0.90

* force_ip logic change: don't auto-update if forced.  But change
  the local_ip on Remote-IP if no force_ip.
* download_remove_file() and download_file_exists().
* Check for new "download_delete_aborted" condition in download_abort() to
  remove file when it is set.
* Fix SIGXFSZ by ignoring and stopping downloads on EFBIG (see =soft/gtkg).
* Implement a search queue and persistent searches.
* Mark downloads moved out of queue: do a link() instead of renaming.
  Then remove the links (if > 1) when finishing the download because
  there's no more to get, or when completing it (moving to target done dir
  with same inode and linknum > 1).
  [Fixed by moving out only if the file is smaller than what we download]
* Use link() instead of rename() when moving from done->tmp.  When moving
  back to done, if nlinks > 1, then check for the file with same name in done,
  and if same inode and same filesystem, then simply unlink. [NO, fixed]
* BUG fix: local_ip is persistent, ok for local default, but must determine
  it at least once on new launches, in case it changed...
* Ignore size mismatch if greater than what we thought, and rely on range
  checking to resume safely. [REJECTED, see comment in downloads.c]
* Content-Encoding: deflate [DONE]
* apply SHA1 patch
* apply __attribute__ patch. / SOCKS message change patch.
* Ensure SHA1 computed during library rebuild is dumped out to the cache
  ASAP in case of crash later on, or quit!  Maybe add boolean/flag "on_disk"
  when loaded from disk or saved, and force append to disk cache when we
  have a new computation not on disk already. [FIXED differently: was a BUG]
* Make sure qrp_finalize_computation() is done as a coroutine, one step
  at a time every second.
* String atoms in atoms.c.  -> streq() can become == between atoms. [DONE]
* Change library: atomize the full path (shared with HUGE anyway) and
  have the filename point right into this full string. [DONE]
* Add 301 Location redirect on non-matching SHA1 for /get which we have. [DONE]
* Avoid 409 if we have the filename, and it's unique, for old servents. [DONE]
* Factorize SHA1 validation!
* Extract SHA1 in downloads and searches, and make them persistent.
* Make persistent results filters. [DONE, bluefire]
* Also detect non-support of /uri-res if EOF during headers? [DONE]
* Propagate stamp to downloads, enter in mesh what we download, remove it
  when we get a fatal error from host (404, etc..) or can't connect to it.
  Don't insert if PUSH, of course! [DONE]
* Ensure we don't insert private IPs in mesh.
* Fix qrt_compact(), which starts from the end, and puts the LAST slot
  in bit 7!  This seems to contradict the logic of the patching routines. [DONE]
* Memory leak on "clear searches": does not free all the rs attached, only
  frees the clist.  (in search.c and gui.c) [DONE]
* Probe for max #fd, and reserve only MIN(maxfds/5, 100)
* urn:sha1: searches. [DONE]
* When removing done downloads, remove by name identical and by SHA1 of the
  completed download. [DONE]
* Add configure check for zlib. [DONE]
* When closing a search, clear the search queue. [DONE]
* If query sent on connection with TTL=1 and we get hops>0 for reply, it's
  probably an invalid hopcount. [NO: there could be re-routing]
* Create alive.c, handling alive pings and replies.  Maintains a list of
  MUID/gettimeofday, appended.  If maxsize reached, timeout.  On ACK, trim.
  Keep stats of min_rt, max_rt, avg_rt (EMA), last_rt (in ms). [DONE]
* Separate version stable / dev.  Show date in messages. [DONE]
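The alive-ping round-trip statistics above can be sketched as follows; the 1/4 EMA smoothing factor is an arbitrary choice for the sketch:

```c
/* Running round-trip statistics for alive pings: min, max, last, and
 * an exponential moving average (EMA), all in milliseconds. */
struct alive_stats {
	unsigned min_rt, max_rt, avg_rt, last_rt;
	int samples;
};

static void alive_stats_update(struct alive_stats *s, unsigned rt)
{
	s->last_rt = rt;
	if (s->samples == 0) {
		s->min_rt = s->max_rt = s->avg_rt = rt;
	} else {
		if (rt < s->min_rt) s->min_rt = rt;
		if (rt > s->max_rt) s->max_rt = rt;
		/* EMA: avg += (rt - avg) / 4, i.e. smoothing factor 0.25 */
		s->avg_rt += ((int) rt - (int) s->avg_rt) / 4;
	}
	s->samples++;
}
```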

>>> NEXT STEP...

* BUG: we have:
	http://62.128.212.57:6980/uri-res/N2R?File With Space 2002-07-06T15:12:08Z
  in the download mesh, with an N2R??
* Implement Gnet ban for BYE'ed clients.  This will need restructuring of
  ban.c and a slightly different interface, since here we're explicitly
  banning and do not wait for an inordinate number of requests.
* On /uri-res request to gtk-gnutella, or any servent known to support /uri-res
  incrementally and persistently, honour 404, 403, etc...
* Validate query hits:
	hops=0 && !firewalled => IP address valid and matches.
	hops>0 && !firewalled => IP address != servent's one
	hops=0 && vendor => always same vendor code for hops=0 hits
	hops=0 => always same servent GUID in hits
	hops>0 => GUID different from servent's own (if known).
	always: GUID different from ours.
* On 401, ban IP for servent type, forever.
* Write hostcache bootstrap servents in a file, so they may add/remove entries.
* Don't send back to host the Alt-Locs it just sent to us!
* Compare downloads on IP address first, then on GUID if equal.
* Break down query hits into 4 KB entities.  Will require changing the
  buffering mechanism so that it can flush when ready and start a new
  packet.
* Add the following to downloads: timestamp (start), timestamp (last connect),
  user-agent, flags (attributes, status) 
* Make sure huge_init() does not load the SHA1 cache.  Do that incrementally,
  while the GUI is starting up but before scanning the library.
* Change Push broadcasting: send them all at once, even if multiple files.
  Then, when we get a hit with a 200/206, cancel the others.
* Regulate HTTP uploads: keep track of who, when, and which file, and send
  the full headers only once, then send minimal ones.  If too aggressive,
  ban the user.
* Likewise, when making outgoing connections and waiting, if we get an
  incoming one, randomly drop an outgoing to avoid replying Busy to the
  incoming yet ending up with failed/timed-out outgoing ones.
* Count replies we get from each TTL=2 probe (that we don't send today), and
  display replies.
* Look at node inactivity timeout.  Must be on last RX.
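The query-hit validation rules listed above translate almost directly into code. A sketch with illustrative structure and field names (the vendor-code and per-servent GUID consistency checks would need per-node state and are omitted):

```c
#include <stdbool.h>
#include <string.h>
#include <stdint.h>

/* Stateless sanity checks on a received query hit. */
struct qhit_info {
	unsigned hops;
	bool firewalled;
	uint32_t ip;		/* IP advertised in the hit */
	uint32_t node_ip;	/* IP of the neighbour that relayed it */
	uint8_t guid[16];	/* servent GUID in the hit */
	uint8_t our_guid[16];
};

static bool qhit_is_valid(const struct qhit_info *h)
{
	/* hops=0 && !firewalled => advertised IP must match the connection's */
	if (h->hops == 0 && !h->firewalled && h->ip != h->node_ip)
		return false;
	/* hops>0 && !firewalled => a relayed hit cannot bear the relay's IP */
	if (h->hops > 0 && !h->firewalled && h->ip == h->node_ip)
		return false;
	/* always: hit GUID must differ from our own */
	if (memcmp(h->guid, h->our_guid, 16) == 0)
		return false;
	return true;
}
```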

* GWebCache: http://www.gnucleus.net/gcache/gcache.php
* See message about HTTP status: reject Range requests that have ",".
* Fix the hops++ in route_message.  This must be done when resending packet!
  (remember to change all tests for e.g. hops==1 in code to hops==0)
* Broadcast Pushes to ALL known routes, to maximize chances.  Introduce a new
  ROUTE_LIST with a dyn-allocated list of nodes to send the message to.
  (will need to change interface of forward_message())
* Keep indices persistent (assuming basename is unique, keep a table),
  then allocate indices sequentially (keeping a list of "free slots"?) to
  ensure consecutive rebuilds and re-runs don't perturb indices.
* Allocate routing table dynamically (list of fixed chunks) based on minimal
  amount of cycling time.  Use this opportunity to separate queries and Qhits
  into two separate tables with possibly different lifetimes.  Keep list of
  messages seen separately to avoid dups the longest time possible at little
  memory cost?
  Keep 2 buffers, and start the first, expand it, then after the time limit,
  switch to a new circular buffer, but keep old data.  When the time limit
  expires, go back to the first and sequentially replace entries -> typical
  lifetime will be around 2*limit.
* Clean traffic: remove bad queries, query hits with 0 files, etc...
* Randomize search results when limit is reached?
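The two-buffer table described above could look like this stripped-down sketch: fixed-size buffers instead of a growing first one, a linear lookup instead of hashing, entries reduced to bare GUIDs, and all names illustrative. The point is the generational recycling, which gives entries a lifetime of roughly two time limits:

```c
#include <string.h>
#include <stdbool.h>
#include <stdint.h>

#define TABLE_SLOTS 4096

struct msg_table {
	uint8_t guids[2][TABLE_SLOTS][16];
	int used[2];	/* entries filled in each buffer */
	int active;	/* buffer currently written to */
	int pos;	/* next slot to (re)use in the active buffer */
};

/* On each time-limit expiry, start overwriting the other buffer
 * sequentially; the previous generation stays searchable, so a GUID
 * lives roughly 2*limit before being recycled. */
static void msg_table_switch(struct msg_table *t)
{
	t->active = !t->active;
	t->pos = 0;
}

static bool msg_table_lookup(const struct msg_table *t,
                             const uint8_t guid[16])
{
	int b, i;

	for (b = 0; b < 2; b++)
		for (i = 0; i < t->used[b]; i++)
			if (memcmp(t->guids[b][i], guid, 16) == 0)
				return true;
	return false;
}

static void msg_table_insert(struct msg_table *t, const uint8_t guid[16])
{
	memcpy(t->guids[t->active][t->pos], guid, 16);
	if (t->pos >= t->used[t->active])
		t->used[t->active] = t->pos + 1;
	t->pos = (t->pos + 1) % TABLE_SLOTS;
}
```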

[test we can still connect to the network]
[test for memory leaks]
[release]

* Implement upload queue (mike's version, with variations).
* Generate a prepended comment with date in all files generated by autogen.sh
  so they can be committed back (without changing timestamps).
* Use getrusage() to monitor CPU usage on query processing?  Flow control
  the link between Gnetd and the query processors.
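For the getrusage() item above, a small helper returning the process CPU time in milliseconds, which could be sampled before and after query processing to decide when to flow-control the link:

```c
#include <sys/resource.h>
#include <sys/time.h>

/* CPU time (user + system) consumed by the process, in milliseconds,
 * via getrusage(2).  Returns 0 if the call fails. */
static unsigned long cpu_time_ms(void)
{
	struct rusage ru;

	if (getrusage(RUSAGE_SELF, &ru) != 0)
		return 0;
	return (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec) * 1000UL
	    + (ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1000UL;
}
```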

* Embed Perl, add a Perl->C gateway for filtering.  Or allow a PERL call
  rule, giving it record/result-set information.  NB: adding a Perl call for
  every record that comes in is going to be costly?
* Use Perl module Video::Info to collect meta-data information from library
  files.

[rest a little]

