consistent policy != consistent announcements

[BGP Metrics]

Tony Li wrote:

First, any such proposal should have a reasonable architecture. Not just a
description of the mechanism. Motivational explanations are most welcome,
preferably sprinkled with real world examples.

Agreed, though I was coming at it from a different angle -- out of a
sense of general disgust with having several different metrics, each of them
rather limited. I.e. the idea was to introduce a single generic mechanism
that includes all existing metrics as particular cases.

Second, there's the issue of the consistency of the values used. As I
recall your proposal, each domain in the path would propose a metric for
its contribution for a prefix. A receiving domain then weighted each
domain in whichever way it chose to arrive at a final, composite metric.
Thus, the semantics of the metric are hardly clear.

There's a clear statement that the default value of a single AS hop is 1.0.
Since there's no central administration, there's no way to coordinate the
scale of metrics globally (i.e. what constitutes 1/2 of a default hop?) -- so
the weights give the other parties the ability to compensate for
variations in interpretation.

The reason for carrying a vector is to make it possible to establish local
policies on values originated by the remote side, and to let operators do
things like giving closer hops higher costs (i.e. the more remote an
AS-path component is, the less important its metric is).
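As a sketch of how a receiving AS might combine such a per-hop metric vector with locally chosen, position-dependent weights: the 1.0 default and the "remoter hops matter less" idea come from the proposal above, while the names and the particular geometric decay are illustrative assumptions, not anything in the wire format.

```python
# Hypothetical sketch of a composite metric. Only the 1.0 per-hop default
# and the distance-based down-weighting are taken from the proposal; the
# decay factor and function names are made up for illustration.

DEFAULT_HOP_METRIC = 1.0  # stated default cost of a single AS hop

def composite_metric(metrics, decay=0.5):
    """Weight each AS-path component by its distance from the local AS:
    position 0 is the nearest hop, and each further hop counts for
    `decay` times as much as the previous one. A hop carrying no metric
    (e.g. originated by an old box) is treated as a static 1.0."""
    total = 0.0
    for position, metric in enumerate(metrics):
        if metric is None:
            metric = DEFAULT_HOP_METRIC
        total += (decay ** position) * metric
    return total

# Two paths of equal AS-hop length can now rank differently depending on
# where the expensive hop sits:
near_heavy = composite_metric([3.0, 1.0, 1.0])  # costly hop close to us
far_heavy = composite_metric([1.0, 1.0, 3.0])   # costly hop far away
assert near_heavy > far_heavy  # remote costs are discounted
```

With decay = 1.0 this degenerates to a plain sum of per-hop costs, and with all metrics left at the default it reduces to ordinary AS-path length, which is the backward-compatible case.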

Third, there's the pragmatic issue of implementation cost. Yes, the cost
of an integer per AS in an AS path is tolerable, tho not "cheap". This
cost becomes painful if most domains are not using the metric.

Then it takes no space in the path attributes. I.e. if you have a box
implementing the proposed metrics, and use only old-style paths, you end up
with identical updates and identical results of their comparison. If old
boxes implement transitive attributes properly, the effect is simply to set
a static metric of 1.0 for each such hop.

And it becomes more painful if two prefixes with otherwise identical attributes
have different metrics. This results in them not landing in the same
update, thereby increasing overhead. Are we willing to take a significant
step forward in overhead for this flexibility?

Exactly the same happens when you have different MEDs or different communities.
Since the proposal obsoletes MEDs and LOCAL_PREFs (and makes the practice of
ASN replication unnecessary), the overall overhead is likely to be close to
zero (provided that the same policies are implemented).