free collaborative tools for low BW and lossy connections

Hello,

I was watching SDNOG and saw the below conversation recently.

Here is the relevant part:

"I think this is the concern of all of us, how to work from home and to

keep same productivity level, we need collaborative tools to engaging

the team. I am still searching for tool and apps that are free, tolerance

the poor internet speed."

I know of some free tools and all, but am not aware of how tolerant
they may be of slow speeds and (likely) poor internet connections.

I was wondering if anyone here has experience with tools that’d work,
so I could suggest something to them.

I don’t know if everyone’s aware of what they have been going through
in Sudan (both of them), but it has been a rough life there recently.

Thanks!

scott

It would be a lot MORE relevant if there were some actual tools listed & discussed!

Miles Fidelman

One of the tools that we've had for a very long time but which is
often overlooked is NNTP. It's an excellent way to move information
around under exactly these circumstances: low bandwidth, lossy
connections -- and intermittent connectivity, limited resources, etc.

Nearly any laptop/desktop has enough computing capacity to run an
NNTP server and depending on the quantity of information being moved
around, it's not at all out of the question to do exactly that, so that
every laptop/desktop (and thus every person) has their own copy right
there, thus enabling them to continue using it in the absence of any
connectivity.
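To make the "local copy on every laptop" idea concrete, here is a minimal sketch, assuming Python's stdlib nntplib module (shipped through Python 3.12) and an NNTP server already running somewhere reachable; the server, group name, and paths are invented for illustration:

    # Minimal sketch: keep a local copy of a private newsgroup by polling
    # an NNTP server (here localhost, or whichever peer is reachable).
    # Assumes Python's stdlib nntplib (present through Python 3.12); the
    # group name and file paths are placeholders.
    import json
    import nntplib
    import pathlib

    SERVER = "localhost"
    GROUP = "local.team.ops"                # hypothetical private group
    STATE = pathlib.Path("state.json")      # remembers the last article seen
    ARCHIVE = pathlib.Path("articles")
    ARCHIVE.mkdir(exist_ok=True)

    last_seen = json.loads(STATE.read_text())["last"] if STATE.exists() else 0

    with nntplib.NNTP(SERVER) as srv:
        _, count, first, last, _ = srv.group(GROUP)
        for num in range(max(first, last_seen + 1), last + 1):
            try:
                _, info = srv.article(num)          # (response, ArticleInfo)
            except nntplib.NNTPTemporaryError:
                continue                            # expired or missing
            (ARCHIVE / f"{num}.eml").write_bytes(b"\r\n".join(info.lines))
            last_seen = num

    STATE.write_text(json.dumps({"last": last_seen}))

Run it from cron whenever connectivity happens to be up, and the local archive keeps working when it isn't.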

Also note that bi- or unidirectional NNTP/SMTP gateways are useful.

It's not fancy, but anybody who demands fancy at a time like this
is an idiot. It *works*, it gets the basics done, and thanks to
decades of development/experience, it holds up well under duress.

---rsk

On 3/25/20 5:39 AM, Rich Kulawiec wrote:

One of the tools that we've had for a very long time but which is
often overlooked is NNTP. It's an excellent way to move information
around under exactly these circumstances: low bandwidth, lossy
connections -- and intermittent connectivity, limited resources, etc.

I largely agree, though NNTP does depend on system-to-system TCP/IP connectivity. I say system-to-system instead of end-to-end because there can be intermediate systems between the end systems. NNTP's store-and-forward networking is quite capable.

Something that might make you groan even more than NNTP is UUCP. UUCP doesn't even have the system-to-system (real-time) requirement that NNTP has. It's quite possible to copy UUCP "bag" files to removable media and use sneakernet to transfer things. I've heard tell of people configuring UUCP on systems at the office, their notebook that they take with them, and systems at home. The notebook (push or poll) connects to the systems that it can currently communicate with and transfers files.

UUCP can also be used to transfer files, news (public (Usenet) and / or private), and email, and to execute commands remotely.
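The bag-file idea is easy to approximate even without UUCP itself. Here is a toy sketch in Python: not real UUCP, just the same store-and-forward shape (queue files into a spool, bundle the spool onto removable media, unpack it on the other side); all paths and names are invented.

    # Toy illustration of the "bag file" / sneakernet idea (NOT real UUCP):
    # queue files into a spool directory, bundle the spool into a single
    # archive that can be carried on removable media, and unpack it on the
    # receiving system.
    import pathlib
    import shutil
    import sys
    import tarfile

    SPOOL = pathlib.Path("spool/outbound")

    def queue(path: str) -> None:
        """Copy a file into the outbound spool (like queuing a UUCP job)."""
        SPOOL.mkdir(parents=True, exist_ok=True)
        shutil.copy(path, SPOOL)

    def bundle(bag: str) -> None:
        """Roll the spool into one 'bag' file, e.g. onto a USB stick."""
        with tarfile.open(bag, "w:gz") as tf:
            tf.add(SPOOL, arcname="inbound")
        for f in SPOOL.iterdir():
            f.unlink()                      # the spool is now empty

    def unbundle(bag: str) -> None:
        """On the receiving system: unpack the bag into its inbound spool."""
        with tarfile.open(bag, "r:gz") as tf:
            tf.extractall("spool")

    if __name__ == "__main__":
        # e.g.  python bag.py queue report.txt
        #       python bag.py bundle /media/usb/bag.tar.gz
        cmd, arg = sys.argv[1], sys.argv[2]
        {"queue": queue, "bundle": bundle, "unbundle": unbundle}[cmd](arg)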

Nearly any laptop/desktop has enough computing capacity to run an NNTP server

Agreed. I dare say that anything that has a TCP/IP stack is probably capable of running an NNTP server (and / or UUCP).

depending on the quantity of information being moved around, it's not at all out of the question to do exactly that, so that every laptop/desktop (and thus every person) has their own copy right there, thus enabling them to continue using it in the absence of any connectivity.

I hadn't considered having a per system NNTP server. I sort of like the idea. I think it could emulate the functionality that I used to get out of Lotus Notes & Domino with local database replication. I rarely needed the offline functionality, but having it was nice. I also found that the local database made searches a lot faster than waiting on them to traverse the network.

Also note that bi- or unidirectional NNTP/SMTP gateways are useful.

Not only that, but given the inherent one-to-many nature of NNTP, you can probably get away with transmitting that message once instead of (potentially) once per recipient. (Yes, I know that SMTP is supposed to optimize this, but I've seen times when it doesn't work properly.)

It's not fancy, but anybody who demands fancy at a time like this is an idiot. It *works*, it gets the basics done, and thanks to decades of development/experience, it holds up well under duress.

I completely agree with your statement about NNTP. I do think that UUCP probably holds up even better. UUCP bag files make it easy to bridge communications across TCP/IP gaps. You could probably even get NNTP and / or UUCP to work across packet radio. }:-)

In article <9f22cde2-d0a2-1ea1-89e9-ae65c4d47171@tnetconsulting.net> you write:

I hadn't considered having a per system NNTP server. I sort of like the
idea. I think it could emulate the functionality that I used to get out
of Lotus Notes & Domino with local database replication. I rarely
needed the offline functionality, but having it was nice. I also found
that the local database made searches a lot faster than waiting on them
to traverse the network.

Also note that bi- or unidirectional NNTP/SMTP gateways are useful.

I've been reading nanog and many other lists on my own NNTP server via
a straightforward mail gateway for about a decade. Works great. I'm
sending this message as a mail reply to a news article.
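The mail-to-news direction of such a gateway can be surprisingly small. A rough sketch, assuming Python's stdlib nntplib (present through 3.12) and a local server; the group name is a placeholder, and a real gateway would also de-duplicate and sanitize headers:

    # Read one RFC 5322 message on stdin (e.g. piped from an alias or
    # .forward) and post it to a local NNTP server.
    import email
    import email.policy
    import io
    import nntplib
    import sys

    GROUP = "local.lists.nanog"             # hypothetical private group

    msg = email.message_from_binary_file(sys.stdin.buffer,
                                          policy=email.policy.default)
    del msg["Newsgroups"]                   # in case the mail already has one
    msg["Newsgroups"] = GROUP               # NNTP wants a Newsgroups header

    with nntplib.NNTP("localhost") as srv:
        srv.post(io.BytesIO(msg.as_bytes()))    # 240 = article posted OK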

Brian Buhrow and I replaced a completely failing database-synchronization-over-Microsoft-Exchange system with UUCP across American President Lines and Neptune Orient Lines fleets, back in the mid-90s. UUCP worked perfectly (Exchange connections were failing ~90% of the time), was much faster (average sync time on each change reduced from about three minutes to a few seconds), and saved them several million dollars a year in satellite bandwidth costs.

UUCP kicks ass.

                                -Bill

UUCP kicks ass.

And scary as it sounds, UUCP over SLIP/PPP worked remarkably
robustly. When system/network resources are skinny or scarce, you get
really good at keeping things working.

:-)

uucp is a batch-oriented protocol, so it's pretty decent for situations where there's no permanent connectivity, but uncompelling otherwise.

nntp is a non-scalable protocol which broke under its own weight. Threaded news-readers are a great way of catching up with large mailing lists if you're prepared to put in the effort to create a bidirectional gateway. But that's really a statement that mail readers are usually terrible at handling large threads rather than a statement about nntp as a useful media delivery protocol.

Nick

I was remiss not to mention this as well. *Absolutely* UUCP still has
its use cases, sneakernetting data among them. It's been a long time
since "Never underestimate the bandwidth of a station wagon full of tapes"
(Dr. Warren Jackson, Director, UTCS) but it still holds true for certain
values of (transport container, storage medium).

---rsk

some of us still do uucp, over tcp and over pots. archaic, but still
the right tool for some tasks.

randy

some of us still do uucp, over tcp and over pots.

My preference is to do UUCP over SSH (STDIO) over TCP/IP. IMHO, SSH adds security (encryption and friendlier authentication via keys / certs / Kerberos) and reduces the number of ports that need to be exposed to the world or allowed through the network.
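For anyone who hasn't wired this up, the transport boils down to "shove a batch over ssh's stdio". A stripped-down illustration in Python (a generic file push, not the actual uucico invocation; the host name and paths are placeholders):

    import subprocess

    def push(bundle: str, host: str = "hub.example.net") -> None:
        """Stream a local bundle to the remote inbound spool over ssh stdin."""
        with open(bundle, "rb") as f:
            subprocess.run(
                ["ssh", host, "cat > spool/inbound/bundle.tar.gz"],
                stdin=f,
                check=True,             # raise if ssh or the remote cat fails
            )

    push("bag.tar.gz")

Only the ssh port needs to be reachable; everything else rides inside that one authenticated, encrypted channel.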

nntp is a non-scalable protocol which broke under its own weight.

That statement surprises me. But I'm WAY late to the NNTP / Usenet game.

Threaded news-readers are a great way of catching up with large mailing lists if you're prepared to put in the effort to create a bidirectional gateway. But that's really a statement that mail readers are usually terrible at handling large threads rather than a statement about nntp as a useful media delivery protocol.

Especially when most of the news readers that I use or hear others talk about using are primarily email clients that also happen to be news clients. As such, it's the same threading code.

Some mail readers are terrible at that: mutt isn't.

And one of the nice things about trn (and I believe slrn, although
that's an educated guess, I haven't checked) is that it can save
Usenet news articles in Unix mbox format, which means that you can
read them with mutt as well. I have trn set up to run via a cron job
that executes a script that grabs the appropriate set of newsgroups,
spam-filters them, saves what's left to a per-newsgroup mbox file that
I can read just like I read this list.

Similarly, rss2email saves RSS feeds in Unix mbox format. And one of
the *very* nice things about coercing everything into mbox format is
that myriad tools exist for sorting, searching, indexing, etc.
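A rough Python equivalent of that pipeline, for anyone without trn handy: pull recent articles from a few groups into per-group mbox files that mutt can read. It assumes the stdlib nntplib (through 3.12) and mailbox modules; the server and group names are placeholders, and it skips the spam-filtering and de-duplication a real setup would want.

    import email
    import email.policy
    import mailbox
    import nntplib

    SERVER = "news.example.net"
    GROUPS = ["comp.misc", "local.team.ops"]
    FETCH = 200                             # how many recent articles to pull

    with nntplib.NNTP(SERVER) as srv:
        for group in GROUPS:
            box = mailbox.mbox(group + ".mbox")
            _, count, first, last, _ = srv.group(group)
            for num in range(max(first, last - FETCH + 1), last + 1):
                try:
                    _, info = srv.article(num)
                except nntplib.NNTPTemporaryError:
                    continue
                msg = email.message_from_bytes(b"\r\n".join(info.lines),
                                               policy=email.policy.default)
                box.add(msg)
            box.close()                     # flush the mbox to disk

Point mutt at the resulting .mbox files, and grep, formail, and the rest of the mbox toolchain work on them too.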

---rsk

Nick Hilliard <nick@foobar.org> writes:

nntp is a non-scalable protocol which broke under its own
weight.

How is nntp non-scalable? It allows an infinite number of servers
connected in a tiered network, where you only have to connect to a few
other peers and carry whatever part of the traffic you want.

Binaries broke USENET. That has little to do with nntp.

nntp is still working just fine and still carrying a few discussion
groups here and there. And you have a really nice mailing list gateway
at news.gmane.io (which recently replaced gmane.org - see

for full story)

Bjørn

How is nntp non-scalable?

because it uses flooding and can't guarantee reliable message distribution, particularly at higher traffic levels.

The fact that it ended up having to implement TAKETHIS is only one indication of what a truly awful protocol it is.

Once again in simpler terms:

> How is nntp non-scalable?
[...]
> Binaries broke USENET. That has little to do with nntp.

If it had been scalable, it could have scaled to handling the binary groups.

Nick

>How is nntp non-scalable?

because it uses flooding and can't guarantee reliable message
distribution, particularly at higher traffic levels.

That's so hideously wrong. It's like claiming web forums don't
work because IP packet delivery isn't reliable.

Usenet message delivery at higher levels works just fine, except that
on the public backbone, it is generally implemented as "best effort"
rather than a concerted effort to deliver reliably.

The concept of flooding isn't problematic by itself. If you wanted to
implement a collaborative system, you could easily run a private
hierarchy and run a separate feed for it, which you could then monitor
for backlogs or issues. You do not need to dump your local traffic on
the public Usenet. This can happily coexist alongside public traffic
on your server. It is easy to make it 100% reliable if that is a goal.

The fact that it ended up having to implement TAKETHIS is only one
indication of what a truly awful protocol it is.

No, the fact that it ended up having to implement TAKETHIS is a nod to
the problem of RTT.

Once again in simpler terms:

> How is nntp non-scalable?
[...]
> Binaries broke USENET. That has little to do with nntp.

If it had been scalable, it could have scaled to handling the binary groups.

It did and has. The large scale binaries sites are still doing a
great job of propagating binaries with very close to 100% reliability.

I was there. I'm the maintainer of Diablo. It's fair to say I had a
large influence on this issue as it was Diablo's distributed backend
capability that really instigated retention competition, and a number
of optimizations that I made helped make it practical.

The problem for smaller sites is simply the immense traffic volume.
If you want to carry binaries, you need double digits Gbps. If you
filter them out, the load is actually quite trivial.

... JG

because it uses flooding and can't guarantee reliable message
distribution, particularly at higher traffic levels.

That's so hideously wrong. It's like claiming web forums don't
work because IP packet delivery isn't reliable.

Really, it's nothing like that.

Usenet message delivery at higher levels works just fine, except that
on the public backbone, it is generally implemented as "best effort"
rather than a concerted effort to deliver reliably.

If you can explain the bit of the protocol that guarantees that all nodes have received all postings, then let's discuss it.

The concept of flooding isn't problematic by itself.

Flooding often works fine until you attempt to scale it. Then it breaks, just like Bjørn admitted. Flooding is inherently problematic at scale.

If you wanted to
implement a collaborative system, you could easily run a private
hierarchy and run a separate feed for it, which you could then monitor
for backlogs or issues. You do not need to dump your local traffic on
the public Usenet. This can happily coexist alongside public traffic
on your server. It is easy to make it 100% reliable if that is a goal.

For sure, you can operate mostly reliable self-contained systems with limited distribution. We're all in agreement about this.

The fact that it ended up having to implement TAKETHIS is only one
indication of what a truly awful protocol it is.

No, the fact that it ended up having to implement TAKETHIS is a nod to
the problem of RTT.

TAKETHIS was necessary to keep things running because of the dual problem of RTT and lack of pipelining. Taken together, these two problems made it impossible to optimise incoming feeds because of ... well, flooding: even if you attempted an IHAVE, by the time you delivered the article, some other feed might already have delivered it. TAKETHIS managed to sweep these problems under the carpet, but it's a horrible, awful protocol hack.
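For readers who never had to implement it, here is a very rough sketch of the streaming exchange that TAKETHIS enables (response codes per RFC 4644; the peer name and article store are invented). The point of contention is that the offers are pipelined, so the feed isn't paying a full round trip per article the way plain IHAVE does:

    import socket

    def stream_feed(peer: str, articles: dict[str, bytes]) -> None:
        """articles maps message-id -> raw, dot-stuffed article text."""
        with socket.create_connection((peer, 119)) as sock:
            f = sock.makefile("rwb")
            f.readline()                                # 200/201 greeting
            f.write(b"MODE STREAM\r\n")
            f.flush()
            f.readline()                                # expect 203

            # Pipeline every offer at once; no per-article round trip.
            for msgid in articles:
                f.write(b"CHECK " + msgid.encode() + b"\r\n")
            f.flush()

            wanted = set()
            for _ in articles:
                code, msgid = f.readline().split()[:2]
                if code == b"238":                      # peer wants it
                    wanted.add(msgid.decode())          # 438 = already has it

            for msgid in wanted:
                f.write(b"TAKETHIS " + msgid.encode() + b"\r\n")
                f.write(articles[msgid] + b"\r\n.\r\n")
            f.flush()
            for _ in wanted:
                f.readline()                            # 239 ok / 439 refused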

It did and has. The large scale binaries sites are still doing a
great job of propagating binaries with very close to 100% reliability.

which is mostly because there are so few large binary sites these days, i.e. limited distribution model.

I was there.

So was I, and probably so were lots of other people on nanog-l. We all played our part trying to keep the thing hanging together.

I'm the maintainer of Diablo. It's fair to say I had a
large influence on this issue as it was Diablo's distributed backend
capability that really instigated retention competition, and a number
of optimizations that I made helped make it practical.

Diablo was great - I used it for years after INN-related head-bleeding. Afterwards, Typhoon improved things even more.

The problem for smaller sites is simply the immense traffic volume.
If you want to carry binaries, you need double digits Gbps. If you
filter them out, the load is actually quite trivial.

Right, so you've put your finger on the other major problem relating to flooding which isn't the distribution synchronisation / optimisation problem: all sites get all posts for all groups which they're configured for. This is a profound waste of resources + it doesn't scale in any meaningful way.

Nick

>>because it uses flooding and can't guarantee reliable message
>>distribution, particularly at higher traffic levels.
>
>That's so hideously wrong. It's like claiming web forums don't
>work because IP packet delivery isn't reliable.

Really, it's nothing like that.

Sure it is. At a certain point you can get web forums to stop working
by DDoS. You can't guarantee reliable interaction with a web site if
that happens.

>Usenet message delivery at higher levels works just fine, except that
>on the public backbone, it is generally implemented as "best effort"
>rather than a concerted effort to deliver reliably.

If you can explain the bit of the protocol that guarantees that all
nodes have received all postings, then let's discuss it.

There isn't, just like there isn't a bit of the protocol that guarantees
that an IP packet is received by its intended recipient. No magic.

It's perfectly possible to make sure that you are not backlogging to a
peer and to contact them to remediate if there is a problem. When done
at scale, this does actually work. And unlike IP packet delivery, news
will happily backlog and recover from a server being down or whatever.

>The concept of flooding isn't problematic by itself.

Flooding often works fine until you attempt to scale it. Then it breaks,
just like Bjørn admitted. Flooding is inherently problematic at scale.

For... what, exactly? General Usenet? Perhaps, but mainly because you
do not have a mutual agreement on traffic levels and a bunch of other
factors. Flooding works just fine within private hierarchies, and since
I thought this was a discussion of "free collaborative tools" rather than
"random newbie trying to masochistically keep up with a full backbone
Usenet feed", it definitely should work fine for a private hierarchy and
collaborative use.

> If you wanted to
>implement a collaborative system, you could easily run a private
>hierarchy and run a separate feed for it, which you could then monitor
>for backlogs or issues. You do not need to dump your local traffic on
>the public Usenet. This can happily coexist alongside public traffic
>on your server. It is easy to make it 100% reliable if that is a goal.

For sure, you can operate mostly reliable self-contained systems with
limited distribution. We're all in agreement about this.

Okay, good.

>>The fact that it ended up having to implement TAKETHIS is only one
>>indication of what a truly awful protocol it is.
>
>No, the fact that it ended up having to implement TAKETHIS is a nod to
>the problem of RTT.

TAKETHIS was necessary to keep things running because of the dual
problem of RTT and lack of pipelining. Taken together, these two
problems made it impossible to optimise incoming feeds, because of ...
well, flooding, which meant that even if you attempted an IHAVE, by the
time you delivered the article, some other feed might already have
delivered it. TAKETHIS managed to sweep these problems under the
carpet, but it's a horrible, awful protocol hack.

It's basically cheap pipelining. If you want to call pipelining in
general a horrible, awful protocol hack, then that's probably got
some validity.

>It did and has. The large scale binaries sites are still doing a
>great job of propagating binaries with very close to 100% reliability.

which is mostly because there are so few large binary sites these days,
i.e. limited distribution model.

No, there are so few large binary sites these days because of consolidation
and buyouts.

>I was there.

So was I, and probably so were lots of other people on nanog-l. We all
played our part trying to keep the thing hanging together.

I'd say most of the folks here were out of this fifteen to twenty years
ago, well before the explosion of binaries in the early 2000's.

>I'm the maintainer of Diablo. It's fair to say I had a
>large influence on this issue as it was Diablo's distributed backend
>capability that really instigated retention competition, and a number
>of optimizations that I made helped make it practical.

Diablo was great - I used it for years after INN-related head-bleeding.
Afterwards, Typhoon improved things even more.

>The problem for smaller sites is simply the immense traffic volume.
>If you want to carry binaries, you need double digits Gbps. If you
>filter them out, the load is actually quite trivial.

Right, so you've put your finger on the other major problem relating to
flooding which isn't the distribution synchronisation / optimisation
problem: all sites get all posts for all groups which they're configured
for. This is a profound waste of resources + it doesn't scale in any
meaningful way.

So if you don't like that everyone gets everything they are configured to
get, you are suggesting that they... what, exactly? Shouldn't get everything
they want?

None of this changes that it's a robust, mature protocol that was originally
designed for handling non-binaries and is actually pretty good in that role.
Having the content delivered to each site means that there is no dependence
on long-distance interactive IP connections and that each participating site
can keep the content for however long they deem useful. Usenet keeps hummin'
along under conditions that would break more modern things like web forums.

... JG

That's so hideously wrong. It's like claiming web forums don't
work because IP packet delivery isn't reliable.

Really, it's nothing like that.

Sure it is. At a certain point you can get web forums to stop working
by DDoS. You can't guarantee reliable interaction with a web site if
that happens.

this is failure caused by external agency, not failure caused by inherent protocol limitations.

Usenet message delivery at higher levels works just fine, except that
on the public backbone, it is generally implemented as "best effort"
rather than a concerted effort to deliver reliably.

If you can explain the bit of the protocol that guarantees that all
nodes have received all postings, then let's discuss it.

There isn't, just like there isn't a bit of the protocol that guarantees
that an IP packet is received by its intended recipient. No magic.

tcp vs udp.

Flooding often works fine until you attempt to scale it. Then it breaks,
just like Bjørn admitted. Flooding is inherently problematic at scale.

For... what, exactly? General Usenet?

yes, this is what we're talking about. It couldn't scale to general usenet levels.

Perhaps, but mainly because you
do not have a mutual agreement on traffic levels and a bunch of other
factors. Flooding works just fine within private hierarchies and since
I thought this was a discussion of "free collaborative tools" rather than
"random newbie trying to masochistically keep up with a full backbone
Usenet feed", it definitely should work fine for a private hierarchy and
collaborative use.

Then we're in violent agreement on this point. Great!

delivered it. TAKETHIS managed to sweep these problems under the
carpet, but it's a horrible, awful protocol hack.

It's basically cheap pipelining.

no, TAKETHIS is unrestrained flooding, not cheap pipelining.

If you want to call pipelining in
general a horrible, awful protocol hack, then that's probably got
some validity.

you could characterise pipelining as a necessary reaction to the fact that the speed of light is so damned slow.

which is mostly because there are so few large binary sites these days,
i.e. limited distribution model.

No, there are so few large binary sites these days because of consolidation
and buyouts.

20 years ago, lots of places hosted binaries. They stopped because it was pointless and wasteful, not because of consolidation.

Right, so you've put your finger on the other major problem relating to
flooding which isn't the distribution synchronisation / optimisation
problem: all sites get all posts for all groups which they're configured
for. This is a profound waste of resources + it doesn't scale in any
meaningful way.

So if you don't like that everyone gets everything they are configured to
get, you are suggesting that they... what, exactly? Shouldn't get everything
they want?

The default distribution model of the 1990s was *. These days, only a tiny handful of sites handle everything, because the overheads of flooding are so awful. To be clear, that awfulness is a matter of resources: the knock-on effect is that the cost of carrying everything becomes untenable.

Usenet, like other systems, can be reduced to an engineering / economics problem. If the cost of making it operate correctly can't be justified, then it's non-viable.

None of this changes that it's a robust, mature protocol that was originally
designed for handling non-binaries and is actually pretty good in that role.
Having the content delivered to each site means that there is no dependence
on long-distance interactive IP connections and that each participating site
can keep the content for however long they deem useful. Usenet keeps hummin'
along under conditions that would break more modern things like web forums.

It's a complete crock of a protocol with robust and mature implementations. Diablo is one and for that, we have people like Matt and you to thank.

Nick

>>>That's so hideously wrong. It's like claiming web forums don't
>>>work because IP packet delivery isn't reliable.
>>
>>Really, it's nothing like that.
>
>Sure it is. At a certain point you can get web forums to stop working
>by DDoS. You can't guarantee reliable interaction with a web site if
>that happens.

this is failure caused by external agency, not failure caused by
inherent protocol limitations.

Yet we're discussing "low BW and lossy connections", which would be
failure of IP to be magically available with zero packet loss and at high
speeds. There are lots of people for whom low speed DSL, dialup, WISP,
4G, GPRS, satellite, or actually nothing at all are available as the
Internet options.

>>>Usenet message delivery at higher levels works just fine, except that
>>>on the public backbone, it is generally implemented as "best effort"
>>>rather than a concerted effort to deliver reliably.
>>
>>If you can explain the bit of the protocol that guarantees that all
>>nodes have received all postings, then let's discuss it.
>
>There isn't, just like there isn't a bit of the protocol that guarantees
>that an IP packet is received by its intended recipient. No magic.

tcp vs udp.

IP vs ... what exactly?

>>Flooding often works fine until you attempt to scale it. Then it breaks,
>>just like Bjørn admitted. Flooding is inherently problematic at scale.
>
>For... what, exactly? General Usenet?

yes, this is what we're talking about. It couldn't scale to general
usenet levels.

The scale issue wasn't flooding, it was bandwidth and storage. It's
actually not problematic to do history lookups (the key mechanism in
what you're calling "flooding") because even at a hundred thousand per
second, that's well within the speed of CPU and RAM. Oh, well, yes,
if you're trying to do it on HDD, that won't work anymore, and quite
possibly SSD will reach limits. But that's a design issue, not a scale
problem.

Most of Usenet's so-called "scale" problems had to do with disk I/O and
network speeds, not flood fill.
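The history lookup in question, reduced to its essence, is just a persistent set of message-IDs consulted before accepting (or offering) an article. A toy sketch with Python's dbm module; real servers (INN, Diablo) use far more tuned stores, but the check itself really is this cheap:

    import dbm
    import time

    history = dbm.open("history.db", "c")       # persistent message-id store

    def seen(message_id: str) -> bool:
        """True if the article is already known; otherwise record it."""
        key = message_id.encode()
        if key in history:
            return True
        history[key] = str(time.time()).encode()    # remember arrival time
        return False

    # During a feed: drop duplicates arriving from multiple peers.
    if not seen("<example-1@news.example.net>"):
        pass    # accept / store / relay the article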

>Perhaps, but mainly because you
>do not have a mutual agreement on traffic levels and a bunch of other
>factors. Flooding works just fine within private hierarchies and since
>I thought this was a discussion of "free collaborative tools" rather than
>"random newbie trying to masochistically keep up with a full backbone
>Usenet feed", it definitely should work fine for a private hierarchy and
>collaborative use.

Then we're in violent agreement on this point. Great!

Okay, fine, but it's kinda the same thing as "last week some noob got a
1990's era book on setting up a webhost, bought a T1, and was flummoxed
at why his service sucked."

The Usenet "backbone" with binaries isn't going to be viable without a
real large capex investment and significant ongoing opex. This isn't a
failure in the technology.

>>delivered it. TAKETHIS managed to sweep these problems under the
>>carpet, but it's a horrible, awful protocol hack.
>
>It's basically cheap pipelining.

no, TAKETHIS is unrestrained flooding, not cheap pipelining.

It is definitely not unrestrained. Sorry, been there inside the code.
There's a limited window out of necessity, because you get interesting
behaviours if a peer is held off too long.

>If you want to call pipelining in
>general a horrible, awful protocol hack, then that's probably got
>some validity.

you could characterise pipelining as a necessary reaction to the fact
that the speed of light is so damned slow.

Sure.

>>which is mostly because there are so few large binary sites these days,
>>i.e. limited distribution model.
>
>No, there are so few large binary sites these days because of consolidation
>and buyouts.

20 years ago, lots of places hosted binaries. They stopped because it
was pointless and wasteful, not because of consolidation.

I thought they stopped it because some of us offered them a better model
that reduced their expenses and eliminated the need to have someone who was
an expert in an esoteric '80's era service, while also investing in all the
capex/opex.

Lots of companies sold wholesale Usenet, usually just by offering access to
a remote service. As the amount of Usenet content exploded, the increasing
cost of storage for a feature that a declining number of users were using didn't
make sense.

One of my companies specialized in shipping dreader boxes to ISP's and
letting them backend off remote spools, usually for a fairly modest cost
(high three, low four figures?). This let them have control over the service
that was unlike anything other service providers were doing: custom
groups, custom integration with their auth/billing, etc. It required
about a megabit of bandwidth to maintain the header database; anything
else was user traffic.
Immense flexibility without all the expense.

It was totally possible to keep things going. The reason text Usenet tended
to lose popularity has more to do with it representing a technology that had
difficulty evolving. As HTTP grew more powerful, easy signups and user
avatars became attractive, and competing forumwares developed powerful
features that Usenet could never really have.

Consolidation and buyouts is something that happened more in the last ten
years. Sorry for any confusion there.

I spent enough years on this stuff that it matters to me that we're honest
about these things. I don't mind that Usenet is a declining technology and
that people don't use it because it is difficult in comparison to a webforum.

Usenet is a great technology for doing collaboration on low bandwidth and
lossy connections. It will get the traffic through in situations where
packet loss would make interactive traffic slow-to-impossible. I don't
expect to see any great upswing in Usenet development at this point, though.

... JG