• A copycat recipe for suck-feeding from Tildeverse NNTP newspool

    From xwindows@xwindows@tilde.club to tilde.club on Fri Sep 25 15:58:58 2020
    Due to ~cat of baud.baby having an unsuccessful attempt with the `suck`
    program to feed a newspool to a refurbished NNTP server at baud.baby,
    I have decided to make a write-up about my oneshot suck-feed attempt,
    together with rationale and detailed explanation, should any need come up
    for people to replicate my attempt on other hosts.

    TL;DR: Skip to the next cut (`%<-----`) for the recipe.

    I originally meant to post this to the #netnews channel of Tildeverse IRC,
    but it got more and more lengthy; so I decided to post this as a Netnews
    article instead, so that other people can use it as a "reference" later
    as well.

    And I said "copycat" recipe, as it is intended for the relevant commands
    to be copy-pasted; it is the reader's responsibility to make the
    preconditions match, or close enough, for the commands to run.

    This writing is based on my actual suck-feed run within tilde.club
    on 2020-09-18 to fix its newspool memory lapse [1], together with
    a subsequent run on 2020-09-20 to fix its newsgroup omissions [2].

    This recipe should be suitable for any tilde server that wishes
    to participate in the Tildeverse Netnews network, and has already
    configured its local NNTP server peering to accept new messages.

    Following is the structure of Tildeverse Netnews network
    at the time of this writing:

    Public gateway (*A)       Distribution point (*B)
    +-----------------+       +---------------+
    | news.tilde.club |< . . >| cosmic.voyage |
    +-----------------+       +---------------+
                                 ^   ^
                                 .   .        Other participating servers (*C)
                                 .   .        +-----------+
                                 .   . . . . >| baud.baby |
                                 .            +-----------+
                                 .
                                 .            +---------------+
                                 . . . . . . >| yourtilde.com |
                                              +---------------+

    - (*A) Open to posts from non-participating tilde servers and the public;
    this host is also known as news.tildeverse.org
    - (*B) Read-only to the public
    - (*C) Usually not open to the public

    - Only newsgroups with the `tilde.` prefix are sent across the servers
    - Current article expiration windows on news.tilde.club and cosmic.voyage
      are:

    - 1095 days [3 years] by default
    (in most cases, where the article does not request
    any specific expiration date)
    - 365 days [1 year] minimum
    (in case the article requests an expiration date earlier than 365 days,
    it will be kept for 365 days anyway)
    - Maximum being `never` expire
    (in case the article requests an expiration date at any point later
    than 365 days, that requested date will be used,
    no matter how far away it is)

    - The network is text-only, low-traffic, and very light on storage
    (all articles from tilde.* newsgroups as existed on cosmic.voyage
    at 2020-09-18 [with its 1-year history back then] totaled to only 293 KiB;
    yes, *kilobyte*, you read that correctly)
    - Any other tilde server that wishes to participate should set up
    NNTP peering with cosmic.voyage
    (contact ~tomasino of cosmic.voyage for arrangement)

    I have chosen cosmic.voyage as a newspool source in this instruction,
    as it was the same source I used when I fixed tilde.club newspool,
    and also because cosmic.voyage is the main news distribution point
    in this network.

    Anyway, I'm a self-proclaimed Usenet newbie; don't assume that I know
    jack about NNTP Usenet; I'm open to suggestions and corrections.
    And like any instruction you got from the Internet,
    take it with a grain of salt...


    %<-----

    ## 1. The NNTP preconditions of suck-feed at the time I ran it were: ##

    - I ran `suck` from inside tilde.club, and instructed it to feed
    the newspool to `localhost`.
    - I pulled the NNTP feed from cosmic.voyage:119.
    - tilde.club accepts NNTP `IHAVE` push command from `localhost`
    without authentication.
    - All the newsgroups were created *before* the date of the oldest article
    I was trying to feed in. (*important*, but with a workaround,
    see the last heading of this article)

    ## 2. The software configuration I used at the time was as follows: ##

    - I compiled `suck` from source code on the tilde.club system
    (Fedora 32 x86_64), and installed it in my home directory
    without root privilege.
    - I have `$HOME/bin` in my `$PATH` environment variable.
    - I used `suck` 4.3.4 Git snapshot tarball from
    <https://github.com/lazarus-pkgs/suck/releases/tag/4.3.4>.

    ## 3. The `suck` configuration steps I used were: ##

    - I created a new temporary directory for this job.
    - I created two subdirectories inside that directory: `info`, and `feed`.
    - I created a file named `sucknewsrc` within the subdirectory `info`.
    (Details in the next heading)
    - The main temporary directory is the location I ran `suck` in.
    (Full command line is in the second-next heading,
    and explanation in the third-next heading)

    ## 4. The content of `sucknewsrc` applicable in this case is: ##

    - In each line, there are two columns, separated by one space:

    - First column in each line has to be a newsgroup to pull articles from
    (e.g. `tilde.services`).
    - Second column in each line has to be `0`.

    - You can add as many newsgroups as you'd like, one newsgroup per line.
    - Reminder: only newsgroups with `tilde.` prefix are propagated through
    Tildeverse Netnews network.
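
    To make the copy-pasting concrete, here is a minimal sketch of steps 3
    and 4 above (the directory name `suckjob` and the newsgroup list are my
    own example choices, not anything mandated by `suck`):

```shell
# Create the temporary job directory with its two subdirectories,
# then write a minimal sucknewsrc into `info`.
mkdir -p suckjob/info suckjob/feed
cd suckjob

# One newsgroup per line; the second column `0` is what this recipe uses.
cat > info/sucknewsrc <<'EOF'
tilde.club 0
tilde.services 0
tilde.meta 0
EOF
```

    Running the `suck` command from the next heading inside `suckjob`
    would then pick this file up from the `info` subdirectory.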

    ## 5. The exact `suck` command line I used is: ##

    suck cosmic.voyage -dm feed -dd info -dt . -M -i 0 -lr -hl localhost -bp

    Note: `suck` removes each downloaded article as soon as it has
    successfully pushed it to the local server, *or* when the local server
    rejects it for whatever reason. Fortunately, as the Tildeverse Netnews
    article repository is very small, you could probably do this as many
    times as you like without putting undue load on the origin server.

    ## 6. And the explanation of each `suck` parameter (AFAIK) is: ##

    cosmic.voyage # Download newspool from NNTP server `cosmic.voyage`.
    -dm feed # Use `feed` subdirectory to store downloaded articles.
    -dd info # Use `info` subdirectory for configuration files
    -dt . # Use current directory (`.`) for temporary files.
    -M # Issue `MODE READER` NNTP command on the newspool-source
    # server before attempting to download.
    -i 0 # Set limit of number of articles to download
    # per newsgroup to 0 (unlimited).
    -lr # Start downloading/count limit from the oldest article
    # in each newsgroup.
    -hl localhost # Use NNTP server `localhost` as a target for pushing
    # downloaded articles into.
    -bp # After downloading, push all downloaded articles to
    # specified NNTP server by the means of
    # NNTP `IHAVE` command.

    ## 7. Debugging caveats: ##

    - You should not take INN2's returned NNTP status message literally,
    as the *actual* cause which led to that status may be completely
    different:

    - "435 Duplicate" does not always mean the article you are
    trying to push already exists on server; it may also be returned
    when...

    - You tried to retroactively push an article into a newsgroup which was
    created on the server _after_ the article's timestamp.
    (See the second-next heading for workaround)
    - You tried to push to a newsgroup that doesn't currently exist
    on the server.

    - "502 You have no permission to talk. Goodbye!" does not always mean
    you have no permission to post (in fact, you might);
    this status message is returned when you attempt
    to _read_ news from an NNTP server that does not allow reading.

    ## 8. A Netcat test: ##

    - Netcat is quite useful for debugging a local NNTP server,
    as NNTP is a really simple text protocol.
    - INN2 isn't really picky about line endings in NNTP commands.
    (You can use Unix-style LF and it will still work;
    but note that I have yet to test whether using LF line endings within
    the submitted article content makes any difference or not)
    - To check if your server accepts `IHAVE` push from localhost,
    run `nc localhost 119`...

    1. If the first line returned starts with number "200",
    you're good to try the next step.
    2. Type `IHAVE <slrnqo72fc.18n.tomasino@cosmic.voyage>`
    and press Enter.
    3. If the server returns a line "335 Send it",
    then your server is likely ready for suck-feeding articles.
    (But also read the next heading for issue about timestamp reject)
    4. Don't actually send the article; press Ctrl+C to exit.

    Side note: `<slrnqo72fc.18n.tomasino@cosmic.voyage>` is the Message-ID
    of the oldest article available on cosmic.voyage NNTP server
    at the time of this writing. (The article is dated 2019-09-19T14:00:12Z)

    - If you don't have Netcat (or compatibles), you can use Telnet instead.

    ## 9. Issues with pushing in articles predating local newsgroup creation time: ##

    If you created a newsgroup _after_ the timestamp of an article
    you are trying to suck-feed into it, the server will outright reject
    that article. (With a misleading status message like "435 Duplicate")

    There is a workaround for this, which ~deepend discovered when
    I subsequently attempted to suck-feed the tilde.your newsgroup
    into tilde.club on 2020-09-20; it has to be applied to the target
    local INN2 server *before* starting the suck-feeding...

    - Remove all lines containing the newsgroups you are trying to suck-feed
    from the `active.times` file inside INN2's $pathdb directory
    (`/var/lib/news` under Fedora 32).
    - Restart INNd.
    - Then you can suck-feed the newspool, and the local server wouldn't
    reject them anymore.
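
    As a sketch of the removal step, assuming INN2's $pathdb is
    `/var/lib/news` (here replaced by a scratch directory with made-up
    `active.times` content, so nothing real is touched):

```shell
# Scratch stand-in for INN2's $pathdb; on a real server this would be
# /var/lib/news, and you would stop or pause innd first.
pathdb=$(mktemp -d)

# Made-up active.times lines (format: newsgroup, creation time, creator):
cat > "$pathdb/active.times" <<'EOF'
tilde.club 1600000000 usenet
tilde.your 1600500000 usenet
tilde.meta 1600100000 usenet
EOF

# Drop the lines for the newsgroups about to be suck-fed:
grep -v -E '^(tilde\.club|tilde\.your) ' "$pathdb/active.times" \
    > "$pathdb/active.times.new"
mv "$pathdb/active.times.new" "$pathdb/active.times"
```

    After this, you would restart innd and run the suck-feed as described.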


    Wish you a good old netnews time,
    ~xwindows


    %<-----

    P.S. Further reading...

    - RFC 3977: Network News Transfer Protocol specification (2006 edition)
    <https://tools.ietf.org/html/rfc3977>
    - RFC 1036: USENET message format specification (1987 version)
    <https://tools.ietf.org/html/rfc1036>
    - RFC 5536: USENET message format specification (2009 modern variant)
    <https://tools.ietf.org/html/rfc5536>
    - INN FAQ number 6.4, push-feeding old article to other server;
    initiated by origin server (as opposed to pull-based suck-feeding)
    <https://www.eyrie.org/~eagle/faqs/inn.html#S6.4>

    P.P.S. Footnotes...

    [1] "Tilde.club newspool now restored" [2020-09-18T11:59:59Z]
    in tilde.club newsgroup
    <nntp://news.tilde.club/tilde.club/9>
    <news:alpine.LFD.2.23.451.2009181832080.2046475@tilde.club>

    [2] "Re: Tilde.club newspool now restored; +tilde.institute +tilde.your"
    [2020-09-20T10:39:48Z] in tilde.club newsgroup
    <nntp://news.tilde.club/tilde.club/17>
    <news:alpine.LFD.2.23.451.2009201723170.2765424@tilde.club>
    --- Synchronet 3.18b-Linux NewsLink 1.113
  • From James Tomasino@tomasino@cosmic.voyage to tilde.club on Fri Sep 25 11:50:24 2020
    On 2020-09-25, xwindows <xwindows@tilde.club> wrote:
    Due to ~cat of baud.baby having unsuccessful attempt with `suck` program
    to feed a newspool to a refurbished NNTP server at baud.baby,
    I have decided to make a write-up about my oneshot suck-feed attempt,
    together with rationale and detailed explanation, should any need came up
    for people to replicate my attempt on other hosts.
    for people to replicate my attempt on other hosts.

    This was incredibly helpful and detailed. I'll be sure to reference it
    to any new peers getting set up. Well done!

    cat, how did it work out for you? Do you have the historic messages now?
    --- Synchronet 3.18b-Linux NewsLink 1.113
  • From f6k@f6k@huld.re to tilde.club on Fri Sep 25 20:15:58 2020
    hi!

    i'm sorry my post is quite off subject, but i've got to ask.

    On 2020-09-25, xwindows <xwindows@tilde.club> wrote:
    - The network is text-only, low-traffic, and very light on storage
    (all articles from tilde.* newsgroups as existed on cosmic.voyage
    at 2020-09-18 [with its 1-year history back then] totaled to only 293 KiB;
    yes, *kilobyte*, you read that correctly)

    when i read 293KiB, i checked my own local copy of tildeverse. for
    information i'm using slrnpull to make a local copy and read it
    offline. i only keep articles within 60 days and i retrieve all
    tilde.* from news.tildeverse.org. this is what i've got:

    $ pwd
    /home/f6k/x/news/tildeverse/news/tilde
    $ du -shc *
    112K art
    28K black
    124K club
    88K cosmic
    32K food+drink
    52K gopher
    12K javascript
    12K meetups
    172K meta
    44K nsfw
    12K php
    12K pink
    32K poetry
    208K projects
    12K python
    24K radiofreqs
    184K services
    12K team
    124K text
    1.3M total

    this is only text, and only the articles (meaning, not config files,
    no logs, well no slrnpull(1) specific files).

    i'm really curious; how is it possible that a year in cosmic.voyage
    makes 293KiB when 60days in my local copy of tildeverse makes 1.3M!
    i'm certainly missing something in my configuration, right? do you
    (or anyone else) know how cosmic.voyage achieves that? maybe some
    compression at some point?

    thank you very much :)

    -f6k

    --
    ~{,_,"> indignus LabRat - ftp://shl.huld.re
    --- Synchronet 3.18b-Linux NewsLink 1.113
  • From James Tomasino@tomasino@cosmic.voyage to tilde.club on Sat Sep 26 12:19:37 2020
    On 2020-09-25, f6k <f6k@huld.re> wrote:
    hi!

    i'm sorry my post is quite off subject, but i've got to ask.

    On 2020-09-25, xwindows <xwindows@tilde.club> wrote:
    - The network is text-only, low-traffic, and very light on storage
    (all articles from tilde.* newsgroups as existed on cosmic.voyage
    at 2020-09-18 [with its 1-year history back then] totaled to only 293 KiB;
    yes, *kilobyte*, you read that correctly)

    when i read 293KiB, i checked my own local copy of tildeverse. for
    information i'm using slrnpull to make a local copy and read it
    offline. i only keep articles within 60 days and i retrieve all
    tilde.* from news.tildeverse.org. this is what i've got:

    1.3M total

    I was curious too. On cosmic /var/spool/news/articles has:
    27 directories, 255 files, and weighs in at 1.2M

    That includes control and local newsgroups, not just the tilde ones
    that federate.
    --- Synchronet 3.18b-Linux NewsLink 1.113
  • From xwindows@xwindows@tilde.club to tilde.club on Sat Sep 26 20:54:12 2020
    On Fri, 25 Sep 2020, f6k wrote:

    i'm certainly missing something in my configuration, right?
    do you know (or anyone else) cosmic.voyage achieves that?
    maybe some compression at some point?

    My figure was counted from data that `suck` downloaded from cosmic.voyage
    and saved into my directory on tilde.club. And no, there is no compression
    used at any point of my workflow; all of the articles are `cat`-readable.

    On Fri, 25 Sep 2020, f6k wrote:

    i'm really curious; how is it possible that a year in cosmic.voyage
    makes 293KiB when 60days in my local copy of tildeverse makes 1.3M!

    The key is your use of `du`:

    $du -shc *

    combined with your directory structure, this introduces a whole lot
    of local bias (whose magnitude is larger than the actual newspool size
    itself). This is a measurement error. The explanation starts from
    heading 1; an extreme example there should give an idea of how subjective
    this method is.

    See heading 5 for the measurement strategy, if you would like to
    use my methodology to measure your newspool.

    Another possible factor is your continued use of `slrnpull`, which
    I suspect didn't "expire" the articles the way you might think;
    thus your "i only keep articles within 60 days" might or might not be true.
    I cannot confirm or cross off this factor on my own, but there is a test
    in heading 6 that you could try.

    And there appears to be an oversight on my end about line endings
    which caused my original total to be a bit off, but not significantly.
    (Off by about 2.4%; see heading 4 for the side note)

    A big wall-of-text follows...


    %<-----

    ## 1. File-size V.S. size-on-disk ##

    On Fri, 25 Sep 2020, xwindows wrote:

    (all articles from tilde.* newsgroups as existed on cosmic.voyage
    at 2020-09-18 [with its 1-year history back then] totaled to only 293 KiB;
    yes, *kilobyte*, you read that correctly)

    Emphasize "all articles (...) totaled to only 293 KiB".

    I measured the content size (in bytes) of the articles (including their
    USENET headers) totaled together, without including local storage
    overhead. This is intentional, to avoid introducing local bias.

    Your measurement method, however, measures "storage size
    as occupied on disk"; useful for inspecting actual disk utilization,
    as it includes local overhead such as block padding, indirect block maps,
    and other filesystem-specific baggage; which varies from system to system,
    thus constituting local bias.

    My original measurement was done by running `du -bc *` in `suck`'s
    article feed directory, where all downloaded articles are stored,
    one article per file, all in a single directory...

    299626 total

    This number is the "293 KiB" I was talking about, in the original article.

    Then compare and contrast: the following is the last line of
    `du -shc *` output (your method) in the same directory...

    780K total

    You'd see that this alone made the size go up to 267%+ of the actual value,
    and this does not even include the space occupied by directories,
    which will come up in the next heading.

    This measurement was done on tilde.club, which appears to use a 4 KiB
    filesystem block size; if your filesystem block size is bigger,
    the difference will be even *wilder* than this.

    To demonstrate a more extreme case, I even tried tar'ing said directory
    over to my local machine and extracting it to my local FAT32 volume
    (which was formatted with a 32 KiB block size).
    Guess what `du -shc *` in that folder reported?

    5.9M total

    Outrageous, right?

    (FYI: Running `du -bc *` there still reported the same-as-ever
    299626-byte size, which is the actual size of the newspool content,
    excluding storage overhead)
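
    The file-size vs. size-on-disk gap is easy to reproduce on any GNU
    system with a single tiny file (the exact on-disk figure depends on
    your filesystem's block size, so only the byte count is predictable):

```shell
# A 10-byte file...
printf '0123456789' > tiny.txt

# ...has an apparent (content) size of exactly 10 bytes:
du -b tiny.txt

# ...but still occupies at least one whole block on disk
# (typically reported as 4 on a 4 KiB-block filesystem):
du -k tiny.txt
```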


    ## 2. Directory overhead ##

    This is another local bias, as it is not really storage space occupied
    by articles, but rather space occupied by directory areas which point to
    those articles- which, again, varies from system to system. `du` also
    counts these if it finds any directory in the specified tree.

    And by including directory overhead, it means you even counted
    *empty* newsgroups; which, in essence, contain 0 bytes of articles.

    12K javascript
    12K meetups
    (...)
    12K php
    12K pink
    (...)
    12K python
    (...)
    12K team

    ^ This means additional 72 KiB of emptiness added into your total.

    And for directories of non-empty newsgroups, this is my estimate of your
    newspool directory tree, with the subgroup information from
    currently-listed newsgroups in news.tilde.club...

    12K art
    12K art/ascii
    12K art/music
    12K club
    12K cosmic
    12K food+drink
    12K gopher
    12K meta
    12K nsfw
    12K poetry
    12K projects
    12K radiofreqs
    12K services
    12K services/uucp
    12K text

    ^ This means 180 KiB-minimum of directory weight added to your total.
    It is not exact, as it is calculated from an assumption of 12 KiB
    per directory, which is the *lower bound* estimate.
    (An empty directory consumes 12 KiB; a directory which itself lists
    many files is bound to consume more than that)

    Note that I omitted the `black` directory, as it is saved for the next point...


    ## 3. There are some newsgroups that cosmic.voyage did not carry ##

    cosmic.voyage no longer carries the `tilde.black` newsgroup,
    not since ~tomasino shut it down; and this has been the case since
    before the time I did the suck-feeding. So, the following:

    28K black

    does not count.


    ## 4. Line endings differences ##

    It happens that `suck` stores the downloaded articles with Unix (LF)
    line endings, as opposed to articles in their NNTP transfer format,
    which uses Internet line endings (CR-LF). INN2, however, stores
    each article verbatim in its transfer format; so I'm going to treat this
    as the canonical format.

    This could be counted as an error on my part, which resulted in
    my original 293 KiB figure being a slight under-measurement.
    Let's see what is the actual size, and check if this error
    significantly changes the statistics...

    I ran `cat * | wc -c` in the same article feed directory again,
    for a sanity check...

    299626

    You'd see that the size matches exactly with my `du -bc *` output
    shown in heading 1 (293 KiB).

    Then, on every LF byte encountered, add a CR byte in front of it,
    and measure again (`cat * | sed -e 's/$/\r/' | wc -c`)...

    306896

    This means 300 KiB worth of articles in its transfer format
    is the correct total size:

    - My original measurement had a 7270-byte difference from this correct value.
    - This means my original measurement's error was about 2.4%.

    This is pretty much insignificant, but I think it's worth writing
    about anyway, just for the record.
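
    The LF-to-CR-LF arithmetic can be checked on a toy input: each line
    gains exactly one byte, so the size grows by the line count.

```shell
# Two 2-byte lines ("a\n", "b\n") = 4 bytes with Unix line endings:
printf 'a\nb\n' | wc -c

# ...and 6 bytes once a CR is inserted before every LF (CR-LF form):
printf 'a\nb\n' | sed -e 's/$/\r/' | wc -c
```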


    ## 5. Methodology-matching of measurement ##

    To exclude blocksize-induced bias and directory-induced bias,
    you ought to total up only the file size of each article.

    Running following command inside your newspool directory
    should have you covered...

    find . -type f | xargs -d '\n' du -bc | tail -n 1

    This will print out the total bytes of files (news articles)
    in the directory, excluding any local storage overhead.
    (Note that the directory tree you're going to run this in must not
    contain files that are not news articles, not even hidden dotfiles)
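
    As a quick check of the command above on a made-up mini-newspool
    (the file names and sizes are arbitrary):

```shell
# Two fake "articles" of 100 and 150 bytes in a fake group directory:
mkdir -p demo-spool/tilde.club
head -c 100 /dev/zero > demo-spool/tilde.club/1
head -c 150 /dev/zero > demo-spool/tilde.club/2

# The total line reports exactly 100 + 150 = 250 bytes:
cd demo-spool
find . -type f | xargs -d '\n' du -bc | tail -n 1
```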


    ## 6. Finding out the oldest article date in the tree ##

    I don't currently use SLRNpull myself, but from your description
    of your usage, it seems that you instructed SLRNpull to download
    articles in the latest 60-day window into some directory,
    repeated every time you'd like to fetch updates.

    As I don't really have insight into how SLRNpull actually operates
    regarding article expiration, the date of the oldest article actually
    existing in your newspool should give an indication.

    To get the date of the oldest article, run this scary-looking command [1]
    in your newspool folder...

    find . -type f -exec sed -e \
    's/^.*[^[:space:]].*$/\0/
    t CKDATE
    d
    q
    :CKDATE
    s/^Date:[[:space:]]*\(.*[^[:space:]]\)[[:space:]]*$/\1/
    t
    d' '{}' ';' | \
    xargs -d '\n' -I '{}' date -d '{}' '+%Y-%m-%d %H:%M:%S' | sort | head -n 1

    (Note that the directory tree you're going to run this in should not
    contain files that are not news articles, not even hidden dotfiles)

    This will return the timestamp of the oldest article from your newspool,
    using your timezone, in ISO-8601 format (YYYY-MM-DD HH:mm:ss). If the date
    is older than 60 days counted from today into the past, then you do
    not really "keep articles within 60 days", but rather longer than that.
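
    The heart of that pipeline is just "extract the `Date:` header, then
    normalize it with GNU `date`"; on a single made-up article it looks
    like this (using plain `grep` instead of the full Sed program, and
    `TZ=UTC` to pin the output for the example):

```shell
# A fake minimal article with a Date: header:
printf 'From: someone@example.org\nDate: Thu, 19 Sep 2019 14:00:12 GMT\n\nbody\n' \
    > article.txt

# Extract the header value and convert it to sortable ISO-8601 form:
grep -m 1 '^Date:' article.txt | sed -e 's/^Date:[[:space:]]*//' | \
    TZ=UTC xargs -d '\n' -I '{}' date -d '{}' '+%Y-%m-%d %H:%M:%S'
# → 2019-09-19 14:00:12
```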


    Regards,
    ~xwindows

    P.S. Note that the commands I listed here are tested on the GNU
    implementation; your mileage may vary if your system is not GNU-based.

    [1] This scary-looking command basically...

    1. `find . -type f -exec [...] '{}' ';'`
    Find every file in the current directory tree, and for each file...

    1.1. `sed -e '[...]'`
    Run a specified Sed inline program on it, which extracts its
    newspost date. (I take the liberty of not discussing the
    cryptic-looking Sed program in the ellipsis here; this article
    is too lengthy already)

    2. `xargs -d '\n' -I '{}' [...]`
    For each extracted newspost date...

    2.1. `date -d '{}' '+%Y-%m-%d %H:%M:%S'`
    Convert it into a sortable ISO-8601 format.

    3. `sort`
    Sort the dates in ascending order (i.e. oldest first).

    4. `head -n 1`
    And display only the first date (i.e. oldest date).
    --- Synchronet 3.18b-Linux NewsLink 1.113
  • From xwindows@xwindows@tilde.club to tilde.club on Sat Sep 26 21:03:58 2020
    On Sat, 26 Sep 2020, James Tomasino wrote:

    I was curious too. On cosmic /var/spool/news/articles has:
    27 directories, 255 files, and weighs in at 1.2M

    That includes control and local newsgroups, not just the tilde ones
    that federate.

    It is at that *on-disk* size because of the block-padding overhead
    (which dwarfs the news content in each file, due to most news articles
    being sub-KiB sized), and it also includes directory sizes.

    I just ran this on the tilde.club newspool today (covering only tilde.*
    newsgroups)...

    /var/spool/news/articles/tilde$ du -shc * | tail -n -1
    980K total
    /var/spool/news/articles/tilde$ find . -type f | xargs -d '\n' du -bc | tail -n 1
    378220 total

    You'd see that there is just 369 KiB of actual articles
    (as opposed to 980 KiB of size-on-disk with all the overhead).

    Regards,
    ~xwindows
    --- Synchronet 3.18b-Linux NewsLink 1.113
  • From James Tomasino@tomasino@cosmic.voyage to tilde.club on Sat Sep 26 15:41:37 2020
    On 2020-09-26, xwindows <xwindows@tilde.club> wrote:
    On Sat, 26 Sep 2020, James Tomasino wrote:

    I was curious too. On cosmic /var/spool/news/articles has:
    27 directories, 255 files, and weighs in at 1.2M

    That includes control and local newsgroups, not just the tilde ones that
    federate.

    It is at that *on-disk* size, because the block-padding overhead
    (which dwarfs the news content in each file, due to most news
    being sub-KiB sized), and this also includes directory size.

    That was a du -sh in /var/spool/news/articles/

    You'd see that there is just 369 KiB of actual articles
    (as opposed to 980 KiB of size-on-disk with all the overhead).

    Yep, when I do the fancier calcs and just on the tilde folder I get 1.1M
    and 400k respectively. Still quite small.
    --- Synchronet 3.18b-Linux NewsLink 1.113
  • From f6k@f6k@huld.re to tilde.club on Mon Sep 28 11:26:02 2020
    i'll start saying: WOOW, thank you for that answer! i've got to say
    that i saved this post and will definitely keep it in my archive since
    it's so valuable!

    On 2020-09-26, xwindows <xwindows@tilde.club> wrote:
    [snip]
    ## 1. File-size V.S. size-on-disk ##
    [snip]

    oh yes, i've read discussions about this but this is the first time i
    can check myself.

    Outrageous, right?

    yes it is! i'll change my way of using du from now on.

    ## 2. Directory overhead ##
    [snip]

    ah yes, true, i didn't think of that!

    ## 4. Line endings differences ##
    [snip]

    This is pretty much insignificant, but I think it's worth writing
    about anyway, just for the record.

    and i've read it religiously.

    ## 5. Methodology-matching of measurement ##
    [snip]

    and now the big final.

    well on my local copy of tildeverse i've got 2019-09-19 16:00:12. but
    since news.tildeverse.org was wiped a while ago, and slrnpull
    took note of it, it's not relevant.

    more interesting: my local copy of usenet (via eternal-september):

    the result is 2017-06-10 12:39:01

    about my window of 60 days, i've read more closely the manual. it says
    that the field used for the days argument "indicates the number of days
    after an article is retrieved before it will be eligible for deletion."
    so it's not a 60 days old articles according to its post date but an
    article retrieved in that 60 days window, no matter what its post date
    is. maybe that explains why the find | xargs gives me 2017-06-10
    12:39:01 on my eternal-september news folder.

    i'll go read again the manual and will try to fix all that.

    in any case, thank you again and very much for your answer. it was
    particularly enriching intellectually and very interesting!


    P.S. Note that the commands I listed here are tested on GNU
    implementation; your mileage may vary if your system is not GNU-based.

    [1] This scary-looking command basically...
    [snip]

    i almost understood everything on that command! just the sed stuff are
    still quite mysterious for me, but i never learned how to properly use
    it, so it's my bad :)

    kind regards,

    -f6k

    --
    ~{,_,"> indignus LabRat - ftp://shl.huld.re
    --- Synchronet 3.18b-Linux NewsLink 1.113