North American Network Operators Group
Re: number of unaggregated class C's in swamp?
From: Dennis Ferguson <firstname.lastname@example.org>

> I also wouldn't bet the farm on some amazing new routing architecture
> saving us

Actually, I think you can safely bet the farm in the other direction: there
is no such thing possible. Yakov and I have some substantial non-agreements
about routing architectures, but we are fully agreed that
aggregation/abstraction is the only way to make the routing overhead scale,
and that means connectivity-sensitive assignment of routing-names, aka
renumbering.

Sure, at a fixed point on the cost/performance curve, over time you can buy
more powerful machines at the same price, and that will allow you to run
larger routing tables over time (imagine trying to stuff 28K routes into an
IMP :-). There are other things going on too, e.g. some routing designs
inherently have more or less overhead than others. However, the bottom line
will always be the same: free-form allocation of routing-names in a
global-scale network will not be possible. To keep routing overhead within
a workable (although probably slowly growing) bound, routing-names will
*always* have to be assigned based primarily on network connectivity.

> I think not dealing with pre-existing allocations is going to mean
> putting an ever-tighter squeeze on future allocations in a way that is
> counter-productive

Yes. This also spreads the burden to everyone.

> what I think we should be doing is trying to pick a routing efficiency
> which gives us a number of routes ... which seems tractable

Exactly.

> the IPv4 end state at a maximum of about 250,000 routes, a number which
> I think is not an unreasonable target for new high-end router designs

Hmmm. I think this is probably a bit aggressive for the next generation,
but then again we aren't likely to arrive at the end state for at least
several product cycles anyway.
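The aggregation point above can be illustrated with a short sketch using
only the Python standard library (the prefixes are hypothetical
documentation addresses, not from the original thread): allocations made
along connectivity lines collapse into one aggregate route, while the same
number of free-form allocations cannot be summarized at all.

```python
# Sketch: why connectivity-sensitive assignment keeps routing tables small.
# Prefixes below are illustrative documentation/benchmark blocks.
import ipaddress

# Four contiguous "class C" sized allocations under one provider block.
contiguous = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(4)]

# Four scattered, free-form allocations of the same total size.
scattered = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.18.5.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.200.0/24"),
]

# collapse_addresses merges adjacent/overlapping networks where possible.
print(list(ipaddress.collapse_addresses(contiguous)))   # a single /22
print(len(list(ipaddress.collapse_addresses(scattered))))  # still 4 routes
```

The contiguous allocations need one table entry (198.51.0.0/22); the
scattered ones still need four, which is the overhead the renumbering
argument is about.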
Also, the limit I'm most worried about is the stabilization time, and
looking at individual router performance is unlikely to tell you whether
you can meet it.

> I'd much rather see each space filled .. as appropriate ... rather than
> picking an arbitrary, one-size-fits-all filter limit. The latter is a
> sign of failure.

I agree, but the latter does have the advantage of being easier to
"police". Getting a whole address block under a limit requires the
cooperation of everyone in the block, whereas filters (as we have seen)
are easy to impose...

	Noel
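The policing contrast in the exchange above can be sketched in a few lines
of standard-library Python (the announcements and the /24 limit are
illustrative assumptions, not from the thread): a prefix-length filter is
a unilateral per-route test any operator can apply alone, while a
per-block route budget has to count every announcement inside the block.

```python
# Sketch: prefix-length filtering vs. holding a block under a route budget.
import ipaddress

# Hypothetical announcements seen by a router.
announcements = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("192.0.2.128/25"),   # longer than the filter limit
    ipaddress.ip_network("198.51.100.0/24"),
]

# One-size-fits-all filter: drop anything longer than /24.
# Easy to impose unilaterally; no cooperation needed.
FILTER_LIMIT = 24
accepted = [n for n in announcements if n.prefixlen <= FILTER_LIMIT]

# Per-block budget: count every route falling inside the block.
# Staying under a budget requires cooperation from all announcers in it.
block = ipaddress.ip_network("192.0.0.0/8")
in_block = [n for n in announcements if n.subnet_of(block)]

print(len(accepted), len(in_block))  # 2 pass the filter; 2 inside the block
```

The filter check looks at each route in isolation, which is why it is
"easy to impose"; the budget check is a property of everyone announcing
inside the block at once.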