North American Network Operators Group


Re: Reducing Usenet Bandwidth

  • From: Eliot Lear
  • Date: Sun Feb 17 12:34:06 2002


This is the art of content delivery and caching. And the nice thing is that, depending on which technology you use, the party who wants the material closer to the end user pays. If that's the end user, then use a cache with WCCP. If that's the content owner, use a cache with either an HTTP redirect or (Paul, forgive me) a DNS hack, either of which can be tied to the routing system. In either case there is, perhaps, a more explicit economic model than netnews has. That's not to say there *isn't* an economic model for netnews; it's just that it doesn't make as much sense as it once did (see smb's earlier comment).
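The HTTP-redirect variant mentioned above can be sketched as an origin server that answers every request with a 302 pointing at a cache near the client. This is only an illustration of the idea; the client-to-cache mapping below is a made-up table, not a real tie-in to the routing system.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy mapping from client address prefix to a nearby cache.
# All hostnames and prefixes here are invented for illustration.
CACHE_BY_PREFIX = {
    "10.1.": "http://cache-east.example.net",
    "10.2.": "http://cache-west.example.net",
}
DEFAULT_CACHE = "http://cache-central.example.net"

def nearest_cache(client_ip: str) -> str:
    """Pick the cache 'closest' to the client (toy longest-prefix idea)."""
    for prefix, cache in CACHE_BY_PREFIX.items():
        if client_ip.startswith(prefix):
            return cache
    return DEFAULT_CACHE

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        # Redirect the client to the same path on its nearest cache.
        target = nearest_cache(self.client_address[0]) + self.path
        self.send_response(302)
        self.send_header("Location", target)
        self.end_headers()
```

In a real deployment the mapping would come from the routing system (or, in the DNS variant, the same decision would be made when resolving the name), which is what ties the cache choice to topology rather than to a static table.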

Eliot

At 01:44 PM 2/17/2002 +0100, Iljitsch van Beijnum wrote:

On Fri, 8 Feb 2002, Stephen Stuart wrote:

> The topic being discussed is to try to reduce USENET bandwidth. One
> way to do that is to pass pointers around instead of complete
> articles. If the USENET distribution system passed pointers to
> articles around instead of the actual articles themselves, sites could
> then "self-tune" their spools to the content that their readers (the
> "users") found interesting (fetch articles from a source that offered
> to actually spool them), either by pre-fetching or fetching on-demand,
> but still have access to the "total accumulated wisdom" of USENET -
> and maybe it wouldn't need to be reposted every week, because sites
> could also offer value to their "users" on the publishing side by
> offering to publish their content longer.
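The pointer-passing scheme Stephen describes can be sketched in a few lines: pointers (cheap, flooded everywhere) carry a message-ID plus the spools that offered to hold the body, and a reading site fetches bodies on demand, caching what its readers actually ask for. Everything below is my own illustration of that idea, not an existing protocol; the fetch callable stands in for whatever transfer mechanism would really be used.

```python
class PointerSpool:
    """Toy model of a site that receives flooded pointers and
    self-tunes its spool to what local readers request."""

    def __init__(self, fetch):
        self.pointers = {}   # message-id -> list of offering spool URLs
        self.bodies = {}     # message-id -> article body (local cache)
        self.fetch = fetch   # callable(url, message_id) -> body or None

    def receive_pointer(self, message_id, spool_urls):
        # Pointers are flooded like overview data; tiny next to a body.
        self.pointers.setdefault(message_id, []).extend(spool_urls)

    def read(self, message_id):
        # Serve from the local cache, else fetch on demand from the
        # first spool that still offers the article, then cache it.
        if message_id not in self.bodies:
            for url in self.pointers.get(message_id, []):
                body = self.fetch(url, message_id)
                if body is not None:
                    self.bodies[message_id] = body
                    break
        return self.bodies.get(message_id)
```

The "self-tuning" falls out naturally: only articles someone actually reads ever consume transfer bandwidth or spool space, while the flooded pointers still give every site a view of the whole of USENET.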

I'm a bit behind on reading the NANOG list, so excuse the late reply.

If we can really build such a beast, it would be extremely cool. The
method of choice for publishing free information on the Net these days is
the WWW. But it doesn't work very well, since there is no direct
relationship between a URL and the published text or file. So people will
use a "far away" URL because they don't know the same file can be found
much closer, and URLs tend to break after a while.

I've thought about this for quite a while, and have even written a good
deal of my ideas down. If you're interested:

http://www.muada.com/projects/usenet.txt

Iljitsch van Beijnum