CAR or Committed Access Rate is like CB Policing in many respects. As a technology it allows us to apply traffic policing inbound and outbound on an interface. Unlike CB Policing, however, it does allow us to nest conditions, so we can set, say, a global interface rate and then make the access rates more specific for certain protocols...pretty neat. CAR is a two-action policer though: there is a conform and an exceed action, but no violate action.

Finally, CAR is considered legacy these days, so make sure you have a look at MQC!

Here are the obligatory 'scary' acronyms to get the blood rushing.

CIR = Committed Information Rate
Bc = Committed Burst (in bytes)
Be = Excess Burst (in bytes)
Tc = Time Interval (the committed time slice)
AR = Access Rate (the maximum bandwidth of the wire)

And the even 'scarier' formula

CIR = Bc/Tc

So, CAR, what is it then? Well, it's the network traffic police! CAR is a rate limiter, very much like the traffic police or crowd control security can reduce the 'flow' of traffic into a road or the 'people' into a Bon Jovi concert. Reducing the amount of product X going into road/gig/wire Y can help an overcrowding situation. See, it's a good thing when the peelers pull over the road hog? Well, yes, unless it's YOU! Right, let's get back to bits. Imagine we've got a 64k wire and we don't want to allow the router to grab all available bandwidth...maybe we want to allocate a quarter of the bandwidth outbound. Well that's good, we've got CAR!

OK, so normally a router will push data out onto the wire at the maximum rate it can; this is the Access Rate and it is expressed in terms of one second (kbps, bps, Mbps, they are all 'per second', right?). Now, when we start applying rate limiting to the interface (policing, not shaping), we start talking about fractions of a second. These 'time slices' are termed the 'T sub c' phonetically, or just Tc to us. The maximum Tc we are allowed is 1/8th of a second, or 125ms, so we can only work down from there. The point of slicing the second is that we can send fractions of the normal rate, e.g. 64kbps / 8 = 8kb per Tc at 125ms. If we send data at the maximum rate defined per time interval then we'll be managing ourselves for maximum throughput. Crucially, however, we can stop sending data for one time period and send more in the next interval; so long as we don't send more data than allowed in that one second, we're good to go.

Policing is often brought back to the 'buckets' analogy, but in today's speak we tend to move away from buckets...there are so many different sizes and colours, and some have holes! For the sake of sanity, however, let's go down that bucket road again. Imagine at the start of the time interval you have a bucket full of tokens. Those tokens are equal to the Bc value and you have a finite amount of them. During the time interval, as data needs to be sent, packets pay to get onto the wire by removing tokens from the bucket. When all the tokens are gone from the bucket within that time period, no more data can get onto the wire and the data is dropped (unless we configure another bucket for those 'exceeding' packets).
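
The bucket mechanics above can be sketched in a few lines of code. This is a toy model, not how IOS implements it internally: the bucket holds Bc bytes' worth of tokens, refills at the start of each Tc, and each packet either pays in full or is dropped.

```python
class TokenBucketPolicer:
    """Toy single-rate policer: a bucket of Bc bytes, refilled each Tc."""

    def __init__(self, bc_bytes):
        self.bc = bc_bytes
        self.tokens = bc_bytes  # bucket starts full

    def new_interval(self):
        """Start of a new Tc: refill the bucket to Bc."""
        self.tokens = self.bc

    def send(self, packet_bytes):
        """A packet pays in tokens to get onto the wire, or it is dropped."""
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return "transmit"
        return "drop"


policer = TokenBucketPolicer(bc_bytes=1000)
results = [policer.send(400) for _ in range(3)]
print(results)  # ['transmit', 'transmit', 'drop'] - the third packet finds only 200 tokens left
```

Once `new_interval()` refills the bucket, the dropped conversation can start paying its way again, which is exactly the "stop for one period, send more in the next" behaviour described above.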

Right, I think we've had enough background, let's get down to some configuration!

We've got a small circuit here at 64kbps. We're going to rate limit ICMP traffic down the wire to no more than half (32kbps) of the available bandwidth. The remaining traffic is free to use what it can; remember we are policing, not shaping.

We need to match the traffic first, so let's write out the access-list

access-list 100 permit icmp any any echo
access-list 100 permit icmp any any echo-reply

OK, this matches ICMP echo (ping) and echo-reply packets from any source to any destination (remember we're rate limiting OUTBOUND traffic but not inbound, so replies coming back at us are untouched). We'll stick with it and work out the numbers we need to restrict the traffic matched by access-list 100 to 32kbps.

We want to police a 32kbps rate on the wire outbound, so here are some numbers we know.

AR = 64kbps
CIR = 32kbps
Tc = 125ms or 1/8th of a second (if you need to work out the 'th' value, take the 1000ms in one second and divide by the number of ms: 1000/125 = 8)

The rate-limit command allows us to specify the bits per second (our target rate) and the bytes per time interval for normal (Bc) and excess (Be) burst, but not the time interval itself. Having said that, there is nothing to stop us using the Tc to work out how many bytes per Tc we need to send. The rate-limit command also allows us to say what we are going to do with the traffic that meets our commitments, or doesn't, as the case may be. OK, so here is our statement.

rate-limit output access-group 100 32000 1500 2000 conform-action transmit exceed-action drop

So what are we saying here? The first part after the access group is our limited rate, allowing us to send the matched traffic at 32kbps (this figure is in bits per second). Next we have the normal burst value, but this (unusually and annoyingly) is in bytes. Then we have the excess burst, again in bytes (if the excess burst and normal burst values match then excess bursting is turned off). Finally, we say what we want to do with the traffic which meets these targets: conforming traffic is allowed to get on the wire (transmit), exceeding traffic is dropped.

Cisco actually help us out with our maths here and suggest that your Bc value should be CIR * 0.125 (converting bits into bytes) * 1.5 (a typical RTT in seconds), which for us means 32000 * 0.125 * 1.5 = 6000 bytes. More than that, they say the Be value should be 2 * Bc, so 2 x 6000 = 12000 bytes. So our configuration is going to be:

rate-limit output access-group 100 32000 6000 12000 conform-action transmit exceed-action drop
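
Those rule-of-thumb numbers are easy to sanity-check with a quick script (the 1.5 RTT factor is Cisco's suggested starting point, not a law):

```python
def suggested_bursts(cir_bps, rtt_factor=1.5):
    """Cisco's rule of thumb: Bc = CIR * 0.125 * RTT factor, Be = 2 * Bc (both in bytes)."""
    bc = cir_bps * 0.125 * rtt_factor  # the 0.125 (1/8) converts bits to bytes
    return int(bc), int(2 * bc)

print(suggested_bursts(32000))  # (6000, 12000) - the values used in the rate-limit line
```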

So hang on, why have we configured 6000 bytes for normal? That makes 48000 bits per second, right?! But I wanted 32000 bits? Well, yeah, that's how it is by the Cisco numbers. Remember this is just a starting point for you. Your average RTT (round trip time) may not be 1.5 seconds, so the numbers will change. All I can say is that Cisco suggest making the Bc and Be numbers like that...and that's a good place to start. If you want to go by the book and make the Bc bytes match the 32000 bits you have, then you need to adjust them down (i.e. 4000 bytes). So we're going to push this into our router now and then test the throughput. We are expecting 32kbps to be policed, so we'll throw 100-byte ICMP ping packets outbound and work out when it will stop working and run out of headroom.

Here's some more maths:

CIR = 32000 bits per second = 4000 bytes per second = 4 bytes per ms
Bc = 6000 bytes, refilled at the CIR rate of 4 bytes per ms
So how quickly will the bucket get emptied?

First, work out how much the bucket refills per ping:

What is the average RTT for ICMP to the destination?

ping output showing average RTT

So it's around 4ms, but it can be anywhere between 1 and 32! OK, we'll go with the average.

The average RTT is 4ms and the CIR fill rate is 4 bytes per ms, so 4 * 4 = 16 bytes refilled per ping.

Second, we need the offset:

We are going to ping out with 100-byte ICMP packets, and we know the bucket refills 16 bytes per ping, so the net drain is 100 - 16 = 84 bytes per ping.

So now we can work out when it'll start dropping packets.

The number of pings until we run out is Bc / offset = 6000 / 84 = 71.42 packets.

So this is all averages, right? We can't have 71.42 packets, so we should see on average 71 ping packets whiz out and number 72 be dropped. When the bucket has filled up again, we're off, yeehar!
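
The whole back-of-an-envelope calculation can be replayed in a short script, using the same assumptions as above (4 bytes/ms refill, 4ms average RTT, 100-byte pings):

```python
def pings_before_drop(bc_bytes, ping_bytes, fill_bytes_per_ms, avg_rtt_ms):
    """Net bucket drain per ping, then how many pings until Bc is exhausted."""
    refill_per_ping = fill_bytes_per_ms * avg_rtt_ms  # 4 * 4 = 16 bytes back per ping
    net_drain = ping_bytes - refill_per_ping          # 100 - 16 = 84 bytes gone per ping
    return bc_bytes / net_drain

print(round(pings_before_drop(6000, 100, 4, 4), 2))  # 71.43
```

Change `avg_rtt_ms` and watch the answer move; that is exactly why the live test below won't land on 71 precisely.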

Let's give it a whirl:

First we'll create the access list to match the ICMP traffic outbound

access-list 100 permit icmp any any echo

Now under the interface we add the CAR rate-limit statement

rate-limit output access-group 100 32000 6000 12000 conform-action transmit exceed-action drop

OK, now that's all set up, we'll test it with ping. We can use the 'size' and 'repeat' keywords to make this work for us. We are expecting pings to fail on or around number 71, and the pings are going to be 100 bytes big, which is the default anyway. So let's throw 80 pings of 100 bytes...

ping repeat 80

Hmm, I got 74 packets...not quite the 71 I was expecting, but you know what...that's close enough. It all comes down to the average RTT; the numbers all work around that.

So are we rate limiting at all?

Here is a ping result with rate-limiting on, sending 1000-byte ICMP packets with a count of 10.

ping with rate-limit on

70% success rate. Now we'll remove the rate-limit line from the interface and ping again.

ping with rate-limit off

100% success rate, no drops.

Good luck with your studies


Cisco actually suggest that, due to the 'averaging' effect of TCP traffic over a period, you over-egg the amount of data to rate-limit by a factor of 1.5. Here is a screenshot from the page and a link back to Cisco for the whole story.

Cisco recommended burst values
Random Early Detection and Weighted Random Early Detection are great tools to help you out when your queues are getting full - they are termed 'Congestion Avoidance' mechanisms. If, for instance, you've got a slower outbound link, maybe a WAN link, and from time to time you are bursting the heck out of it, then consider RED. RED, or more commonly WRED, is a fantastic testing topic and something we should know about, even though the world has mostly moved on in terms of using WRED as a QoS or Quality of Service mechanism.

RED works by randomly (the clue is in the name) dropping a pre-determined number of packets out of the filling or full outbound interface queue every now and then. You don't know which packets are going to be dropped, you just know that within a period of time a certain number of them are going to be dropped - this is called the 'drop probability'.

So how does this work for different conversations? Well, it kinda depends. If we're talking about TCP traffic then we should be good. TCP will realise in its awesomeness that one of the packets has gone missing and we'll get a TCP retransmission - remember TCP is connection-based. For connection-less protocols like UDP, however, we're going to start seeing some problems, and we don't really want to be depending on RED saving our bacon for voice traffic ;-)...we'll talk about TCP in a little while; just understand for now that turning on RED for traffic which is not mostly TCP is not going to help you out much.

So what is it with RED anyway? It drops packets, you get that bit, right...we just went through it. Well, here's the thing: RED is used to drop packets in a 'nice' way. The quotes are there because you never want a packet to be dropped, so it's never really 'nice', is it? But let's consider this.

You've got a WAN link and it's slow. The buffers of the interface connected to the WAN link are filling up because there is way too much traffic ingress and not enough egress. At this point we get something called 'tail drop'. Now this is one of those keywords people hear about and don't really think about too hard, so it goes in one ear and out of the other. I had the same problem in school when my French teacher told me about 'vowel clash'...don't know why...I just never understood why he said it...I know now...but it's about 20 years too late...sorry Mr Hughes.

'Tail Drop', so it's all about understanding a packet flow, like a queue at a nightclub winding its way along the path to the door. It could be a really long queue with big people or skinny people, it doesn't matter. The point here is that at some point, when those interface buffers get full, the tail of your 'queue' is going to be cut off. It's like the bouncer at the nightclub just put his big hand up to you when your friend just went in front of you and said 'No-one is getting in so go home!'. So at this point in the network world the sender and receiver (you and your friend) are both waiting around, ready to get back together again. It gets worse, because it's not just one conversation; this is affecting all the other conversations going on at that time too. Back at the nightclub, everyone in that queue is waiting to have another go at getting past the bouncer. At first there is a bit of shouting at the bouncer but he just tells them to wait. Then the crowd gets a little lively and the conversation starts to get a bit more repetitive, then they start to chant all at once: 'Let us in, let us in!'.

The point is that the bouncer can't let them in until some people come out, so all those people in that queue, or indeed the packets back at your network interface queue, are being dropped and told to wait. Just like the crowd in the nightclub queue, your network traffic is also backing off, and the more they back off the more they come back, slowly at first and then growing. This is akin in the network world to TCP backoff, and when all of the conversations restart this is called TCP Slow Start. TCP slow start occurs when two clients begin a conversation. Each party agrees with the other that they will send a number of packets before they need an acknowledgement of receipt (it's an efficiency thing). Initially the number of packets sent before an acknowledgement is returned is small, but as the relationship grows (trust), the number of acknowledgements per number of packets sent drops off. Now go back to our full queue: during a tail drop situation that TCP slow start process causes a real headache. At some point everyone on the network behind the full queue is going to be told to stop and restart. This creates a state called TCP synchronisation (or global synchronisation) where most or all of the hosts behind the router are all trying to talk and grow at the same time, and this is really bad for an already bad situation.

RED helps us to get rid of tail drop by adding randomness to the dropping of packets in the queue, to try to avoid the queue getting full. By doing this we hope to stop tail drop and, in turn, TCP restarts and that dreaded 'Global Synchronisation'. Take our first example again with the nightclub. If the queue is maybe 100 people deep we could ask 10 of those people, or 1 in 10, to come back later and leave the line. This helps to reduce our queue and makes the situation less bad for our bouncer to manage. Now the queue gets to 1000 people deep and that's a lot for him to deal with, so maybe we get a little more aggressive here, and instead of asking every 1 in 10 people to go away we ask every 1 in 5 or 1 in 2; now that's going to shorten the queue much more effectively and keep the management of the queue down.

So RED is great, but we're not being too sensible about which packets to drop. Maybe some of those people in the queue we just asked to leave were VIPs - maybe you just bumped the latest 'A' list celebrity and kept Joe Bloggs! So that's where WRED or Weighted RED comes in. By implementing WRED we have a way of choosing more appropriate packets to drop, and at the same time allowing different ratios of drops for each of these priorities. So we can lose fewer 'A' list and more 'D' list at the same time! This is exactly how WRED works: it uses precedence values within the IP packet to determine the importance of the traffic and can make decisions on drop probability based on those numbers.

The drop probability should be the last part of our story. As you know, we've got a filling or full queue, and as you now know, we've got RED or WRED helping us out by dropping packets out of the queue randomly, trying to keep it from filling completely and causing tail drop. The drop probability is the only thing you've got to think about, and indeed some of the guesswork is done for you.

Getting RED to work is a simple case of enabling RED on the interface with the 'random-detect' keyword. RED can also be enabled inside a policy map as part of MQC. There are a lot of things we can do with RED including thresholds, drop determinators, exponential drop growth (how quickly it ramps up the dropping) and many other 'tweaks' which come into play when we go into WRED.

R1(config)# interface fa0/0
R1(config-if)# fair-queue
R1(config-if)# random-detect

Enabling WRED requires us to set the deterministic part of the 'weighting', and we have a couple of choices: IP Precedence or DSCP. Default behaviour is already set for each of these DSCP- or precedence-based min and max thresholds, but you do have the power to change these if you wish.

Either RED or WRED requires the setting of a minimum and maximum threshold plus a drop probability. What we mean here is that as a queue gets 'x' packets deep we will have a drop probability of 'y'. If the traffic continues to grow and we reach another number higher than the first (usually the maximum queue size or just before), we'll be dropping traffic at the highest rate to keep the queue from filling.
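
As a sketch, that threshold behaviour is a simple piecewise function. The `mark_prob_denominator` below mirrors the IOS idea of a mark probability denominator (drop roughly 1 in N at the max threshold); the exact defaults vary by platform, so treat the numbers as illustrative:

```python
def wred_drop_probability(avg_queue_depth, min_th, max_th, mark_prob_denominator=10):
    """Piecewise (W)RED drop probability for a given average queue depth."""
    if avg_queue_depth < min_th:
        return 0.0                       # below the min threshold: never drop
    if avg_queue_depth >= max_th:
        return 1.0                       # at/above the max threshold: drop everything (tail drop)
    max_p = 1.0 / mark_prob_denominator  # drop chance reached just below the max threshold
    return max_p * (avg_queue_depth - min_th) / (max_th - min_th)

# Halfway between thresholds of 20 and 40 packets, 1 in 20 packets gets dropped on average
print(wred_drop_probability(30, 20, 40))  # 0.05
```

WRED is then just this function repeated per precedence or DSCP value, each with its own thresholds, so the 'D' list traffic hits its drop ramp sooner than the 'A' list.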

For further information please see the Cisco DocCD on RED and WRED here.

Good luck with your studies.
This is going to be a more general 'chat' around pushing packets about the network, avoiding the usual IGP routes you may or may not have messing you up. I've prepared a small topology of 6 routers. Each router is connected via multiple routes using Ethernet and serial interfaces. Some interfaces have OSPF running and they are all in the backbone area 0 with no costs changed, so it's all vanilla. This is going to look WAY too much for such a simple demo - it's part of a bigger picture.

network topology diagram

So here is the plan:

  • Let's build this lot up and just switch some costs to prove that we can manipulate the flow of traffic using the IGP.
  • Let's change some of the links to use a different OSPF area and show how that affects the flow of traffic.

We're only really interested in the 'journey' of the packet, so let's take a look at the IP addresses on R1 and R6 first

R1's 'show ip interface brief'

show ip interface brief R1

R6's 'show ip interface brief'

show ip interface brief R6

So we'll just have a look at the routing table on R1

show ip route R1

See how the route to R6's Lo0 interface is via two next hops, which are R3 and R2 respectively? That's because it costs the same for packets to get from R1 to R6 via both of these directions (oh, and CEF is on). Let's do a traceroute. Now, before you look at the output, remember that a traceroute sends three packets out for each hop. With us load balancing across R2 and R3, we'll see ICMP responses from both R2 and R3 as the packets are load balanced...fingers crossed.

trace on R1

So we're going to change the cost of the FastEthernet1/0 interface on R2, between R2 and R4, to make it more expensive for traffic to go that way. OSPF should send an LSA update to the DR on the R1/R2/R3 segment to tell everyone else that R2 is definitely NOT the way to go. We'll see this on R1 when we look at its routing table and see that the equal-cost route to the Lo0 interface on R6 is no longer shared between R2 and R3. Let's have a quick look at the cost on the interface right now using the 'show ip ospf interface fa0/0' command...useful, that one.

show ip ospf interface cost

OK, so it's 1. If we double that then it'll remove the 'load balanced' route from R1's table. We'll increase it from '1' to '2'.

change interface ospf cost
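
For reference, the change behind that screenshot is just two lines under the interface (interface name taken from the text above):

```
R2(config)# interface FastEthernet1/0
R2(config-if)# ip ospf cost 2
```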

Right, let's look at the routing table on R1. This is how it looked before the change...see the two routes out?

show ip route with two routes out.

So now this is how it looks after the cost change.

show ip route with one route out now

Right...just the one now. We could do a trace just to show it's only one way out.

trace to from

OK, cost, see how that works? Why did we change the cost of the interface facing away from R1 and not the one closest to it? Well, OSPF uses the 'local' cost value to build its view of the world. As far as R1 is concerned, it's a cost of 1 between it and R2, so even if R2 had a locally configured cost of 2 on its near-side interface, R1 wouldn't listen to that. That cost of 2 would, however, be useful to R4 when it was learning about routes from R2 via that interface.

Right, let's put the cost back and change the area between R2 and R4 to area 1 rather than area 0, and see if that affects the routing table at all. This WAS the routing table BEFORE the area change...two routes...all good. Now let's put the point-to-point link into area 1.

show ip route from R1

OK, so we'll just show the screenshot from R2 here, but believe me when I say we moved from area 0 to area 1 on that point-to-point link. The adjacency comes up anyway...all good. Let's see the OSPF topology on R1.

network between R2 and R4 area change

OK so now the link between R2 and R4 is in OSPF area 1 but we've put all the other costs back...there is nothing wrong with the links...they are equal and the same.

show ip route from R1

So check that out - OSPF really loves the same area, and no matter how poor the cost, a route learned as IA (inter-area) will be less preferred. It will choose a route via the same or backbone area over any other, NO MATTER HOW AWFULLY costed it is. If you ran the same-area link over barbed wire through a wet field, it would win over an IA route learnt over a 10GE DWDM ;-)

Good luck with your studies.
Before you start this article you should get your time sorted - read all about setting up NTP in IOS with our NTP article. Remember logs are useful when taken in context. If the time is wrong on your router then you can forget using them in any meaningful way when it comes to diagnosis.

Just so we are clear, logs and debugging are useless without context - you need to have your networked devices synchronised with each other and with a common time source. If your time is out then your logs have little or no value for things like event analysis, trending, capacity planning and forensic analysis.


Time right now? Then we’ll begin...

Logging the messages created by your device is very useful for troubleshooting a variety of issues we've already discussed, like event analysis (i.e. a fault on your device caused x, y and z to occur at this time) and forensics (e.g. during or after a security incident). So what happens when your router reboots and you want to see those logs? Well, it's likely you've only set up logging to the buffer (in volatile RAM) on the device, so when it reboots you've lost all of that useful data. What if your device is being hammered by bad traffic or a flapping interface which fills up the available buffer space (the amount of RAM allocated to log messages)? Wouldn't it be great to keep those logs for historical purposes?

Damn right - here are some logging options.

First of all we need to enable logging with the global configuration mode command ‘logging on’. You should perhaps note that this is the default...hopefully nothing to worry about unless some unscrupulous or crap engineer has disabled logging.

In this screenshot the router has booted with a clean configuration. Basic things like hostname, interface IP and VTY access have been set up but, rest assured, no logging configuration has been what do we get enabled by default?

show logging

Syslog, enabled; Console, enabled; Monitor, enabled...what does all this mean?

Syslog logging is a double-edged sword. In this regard we could (if on a PIX/ASA) be talking about SNMP traps, but in this case we're saying what logging information should be sent over the network to a syslog server.

Monitor logging is all about remote sessions to the device. When you telnet or SSH onto the device you won't see any logging information, even if this is turned on. To see this logging data the operator needs to issue the 'terminal monitor' command.

Console logging is all about local access to the device. By local we mean access from devices connected via the 'console' port. This is probably the best place to have logging turned on, but you don't want too much of it, since a serial port can easily become swamped with logging information.

Buffer logging is a historical dump of logging data. Let's say an event occurred a few minutes ago and you want to see what happened. You can telnet to the device, issue a 'show logging' command under privileged EXEC mode and you'll see a list of historical logging data.

So what can and should we tune for logging data? Here are the headlines:

  • Syslog - send too much traffic to your syslog server and this means more network traffic, which could cause a network capacity issue, not to mention you are going to fill up your syslog server disk more quickly.
  • Monitor - remember this is logging being sent to a telnet or SSH client. Again, as with syslog, you will be increasing your network utilisation, which could be bad, particularly so if you are seeing the logging increase as a result of service issues related to network capacity. Log too much over the monitor session and you will lock up that session, causing you to dump the session and restart your connection.
  • Console - unlike the monitor session, the console is way slower, normally running at 9600 baud, not 100Mbps! If you set logging levels which are too high you are almost certainly going to lock yourself out of the console session. If you are already experiencing network issues then the console is your only hope of getting meaningful telemetry out of your failing device. Setting logging levels too high on the console WILL lock out your serial connection, period! Avoid if you like your job. The only way to bring a router back from a lost console due to high logging levels is usually to reboot it.
  • Buffers - this is a memory risk. The fact is that you want to go back in time to look at historical logs. Well, that's great, but logging for too long runs the risk of consuming more RAM which could be used for other things. It's a fine line between keeping logs for as long as you can and not consuming precious RAM, when you could actually store historical logs on an external device like a syslog server.

What about those logging levels? What does 'logging debugging' mean? Well, we can tell the device to log more or less verbosely, to include more or fewer functions. There are 8 different logging levels available and each one logs more and more data as you pass from a lower to a higher number, i.e. 0-7, with 7 being the highest or most verbose level.

  • Emergency (severity 0)—The system is unusable
  • Alert (severity 1)—Immediate action is needed
  • Critical (severity 2)—Critical condition
  • Error (severity 3)—Error condition
  • Warning (severity 4)—Warning condition
  • Notification (severity 5)—Normal but significant condition
  • Informational (severity 6)—Informational message
  • Debugging (severity 7)—Debugging message

So, now we know the defaults, we know the types and we know about logging levels. Let's consider the main logging types in more detail.


We’ll consider buffering first since it’s the simplest to start with.

Take a look at the following output from a ‘show logging’ on a Cisco wireless AP.

show logging

This output is using something called the logging buffer. Notice the logging buffer is 1048576 bytes? Well, we've increased the buffer here from the default of 4096 bytes or 4KB. This is one of the more obvious tweaks you would want to make on your IOS device. To explain the point, imagine a very busy router being hit hard by a flapping port. The state of the interface goes up and down every few seconds or so and each of those logs is sent to the history buffer. Pretty soon the log is going to fill up, and when it gets full we move to FIFO mode (first in, first out) and start to drop the oldest messages. If you, as an engineer, are being asked to find out when this started happening and you are depending on the log'd better hope your log buffer is big enough.

Now, what you shouldn’t do is change the buffer size without looking at your available RAM. Remember the logging buffer sits in RAM so the more you reserve for logging, the less you have for any other processes. Right, lets take our vanilla configuration and turn on the buffering of log files to RAM.

enable logging to buffer

What's the default buffer size again?

show logging buffer

To change the buffer size we simply issue the following command; we've changed this to 4 times the default, or 16384 bytes.

changing cisco ios logging buffer size
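
For reference, the command behind that screenshot is a single global-config line, with the size in bytes (the router prompt here is just an example):

```
Router(config)# logging buffered 16384
```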

That's great, we've got a bigger buffer, but what if the buffer still fills up? What if we choose to log more data by increasing our logging levels? The best way to make sure we catch all data, and can keep an infinitely longer history, is to use syslog.

Let's configure our 2600 router for syslog. We'll tell the router to log at the highest level and send its logging data to our syslog host.

First of all, let's configure the logging server. We use the command 'logging host' followed by the IP address of the syslog server.

logging host configuration

Now, before we move on, I'm going to bring back my first statement in this article regarding time and logs. Logs need context - SORT YOUR TIME OUT ;-) Seriously, without time-stamping, the log data has very little use or meaning. So how can we fix that?

OK, first let's make sure we are using NTP and we are synchronised (if you have not done this then follow our NTP 'how-to' guide here).

Here is a screenshot from our NTP. We are synchronised, all is ready to go.

show ntp status

So now we need to make sure that our logs get time-stamped. Let's first look at a log entry which has been written to the buffer. To demonstrate the issue we're just going to perform a 'shutdown' then a 'no shutdown' on FastEthernet0/1.

log buffer showing interface up and down

Notice that the log in the buffer shows the state change to down and back again...but we have no way of knowing when this happened. Enter time-stamping!

enable cisco ios timestamp

The 'service timestamps' command allows us to prepend any logs with the current time and date. We have told the router to use the actual date for the log (rather than the number of seconds since the box was brought into service) and to include information like the timezone (in case we are going to export these logs to another country working with UTC when the logs were sourced in a country working with summer time, where they could be out by an hour). We've also told it to include the year, which is off by default. Remember, of course, the more you add, the more buffer space is used's all about compromise.
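
For reference, a 'service timestamps' line matching the options described above would look something like this (the prompt is just an example):

```
Router(config)# service timestamps log datetime localtime show-timezone year
```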

Lets take a look at the same ‘shut/no shut’ with the timestamp on.

ios logging buffer output with timestamps

Awesome, now we have prepended the time and date, which is perfect for troubleshooting. Notice the first two log entries are still in there without the time and date.


When you connect to a Cisco device, be it a router, switch or firewall, you may wish to turn on debugging to more closely watch what is going on. Now, normally the debugs are not sent to the telnet or SSH session, so we need to tell the device to override the normal behaviour and send those debugs and other messages to the VTY session, and not just the serial port, buffer and syslog.

This type of redirect to screen is called 'monitoring' and is akin to the unix 'tee' command or the shell redirect '2>', where stderr can be sent somewhere else.

Let's have a look at a normal session. We'll telnet onto the 2600 router we already set up with buffering. We'll telnet on and tell the router to debug HSRP. Then we'll build an HSRP instance on the interface Fa0/0 and look for the console messages to scroll on the screen.

Firstly lets turn on debugging

hsrp debug

OK, great, now let's turn on HSRP on interface Fa0/0

hsrp interface configuration

I waited for a minute but there were no messages on the screen...that's what we expected, right?

OK, so now we'll turn on monitoring. Remember, the level of information we will receive on the screen is related to the level of debugging stated in the 'show debug' output we saw earlier, so currently we are at level 'debugging', which is more than enough.

terminal monitor configuration

OK, we enabled the log redirect to screen with the 'terminal monitor' command. Then, when we disabled the HSRP on the interface, we got the debug information sent to the screen. Awesome.


OK, the last subject to cover off in our brief stop off is syslog. Syslog is used by Unix systems as a method of “System Logging” for services or daemons running on the box. It evolved to include remote syslog, where devices could send information about their services, authentication and so on across to one central unix box to manage the lot.

In our world, network engineers depend on syslog heavily to troubleshoot issues we see on the network. Syslog is probably one of the best tools in your box and you ignore it at your peril - you will need it one day.

Right, lets setup some basic logging. I’ve got a syslog server running on a host on the network; I’ve enabled the fantastic SolarWinds syslog server tool for the test. We’ll begin by telling the Cisco router where exactly to send its logging information:

ios syslog host configuration

The ‘logging host’ command simply tells the router where to send its syslog traffic. Right, now lets bring down that FastEthernet interface again.

logging monitor output

We see that the interface Down and Up events are shown. Notice that the interface goes DOWN before the router has set up the logging to the host - it took that first log message to start up the syslog service. Once it is started though, we’re golden.

Now on the syslog server itself we don’t see the interface going DOWN because the logging wasn’t running at the time.

solarwinds syslog collector

Notice one of the benefits of the remote syslog server: the time and date that each event was received is recorded on the far left. Having the syslog server’s time correct is clearly one of the more important things, since ALL of your logs will be stored on this box and having a fixed time reference for all of your logging is invaluable.
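Given how important consistent time is here, a minimal logging setup might look like this (a sketch; the addresses are placeholders, and ‘logging trap’ sets the maximum severity level shipped to the syslog host):

R1(config)#ntp server 192.0.2.10
R1(config)#logging host 192.0.2.50
R1(config)#logging trap informational

Syncing the router to NTP as well means the on-box timestamps and the collector’s received-at times agree.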

Facilities and Levels

I use facilities for all the wrong reasons I’m sure. Back in the days when I was a unix administrator I used to have syslog daemons pointing at one master syslog collector. The syslog collector was configured using facilities where each of the facilities available was used for different hosts. I’d have my SUN boxes pointing at local1 and SGI pointing at local2. Each facility would then have the different logging levels inside it. I used facilities like a pigeon hole for internal mail.

Now I’m not sure if that was right or wrong and I never bothered to find out, it just worked for me. There are more than a dozen facilities available for different uses including local, system wide, UUCP (unix to unix copy), mail, clock and more besides but I’d suggest you don’t move away from the default.
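On IOS you can steer a device’s syslog traffic into one of those pigeon holes with a single command (a sketch; the IOS default facility is local7):

R1(config)#logging facility local1

The collector can then filter or file messages by facility, exactly the way the unix boxes above were doing it.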

I guess this concludes our brief run through logging today.

Thank you for reading, please leave us some feedback.
In this scenario what we did was to put a router behind each bridge then configure GLBP on the routers. By using GLBP and setting the load balancing mode to ‘round-robin’ we allowed for a kind of ‘L3 ether-channel’ across the wireless. The dual-band radios allowed us to run both the 802.11a and 802.11g channels at 54Mbps and get a ‘by the book’ 108Mbps per bridge, which means the customer gets 216Mbps. We met the requirements, they ratified the design and the investment was made. As it turned out we got about 150Mbps because of loss but, you know what, they only ever used about 10Mbps at most anyway, which made for a good conversation I assure you.

So what about GLBP, what is it that makes it a better FHRP than, say, HSRP or VRRP? Well, unlike these other two, GLBP (or Gateway Load Balancing Protocol) is capable of distributing load between the members of the GLBP group. Take for example three routers connected to one network segment. Each router is sitting in the same network, with R1 being .11, R2 being .12 and R3 being .13. Now if this were HSRP or indeed VRRP those routers could operate as one HSRP or VRRP group too, but only one of that group is active. Only one can forward traffic; the others are just sat there doing nothing except waiting for something bad to happen to the active node.

With GLBP there is still a master of sorts and that is called the AVG or Active Virtual Gateway. The AVG is elected in much the same way as the active node in HSRP and VRRP. Each node has a priority of 100 by default and the highest priority node in the group is elected as the AVG. Again, just like HSRP and VRRP, the AVG once elected will not be de-selected in favour of any other member of the group unless it is lost (e.g. the box running the AVG fails). You can however, just as with HSRP and VRRP, choose to allow any member to take over as the AVG if we set a higher priority and configure preemption.
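As a sketch, making a node the preferred AVG looks much like the HSRP/VRRP equivalents (interface and group number here match the lab used later in this article; the priority value is illustrative):

R3(config)#interface FastEthernet0/0
R3(config-if)#glbp 1 priority 120
R3(config-if)#glbp 1 preempt

Without ‘preempt’, a higher priority only matters at election time; with it, a recovering high-priority node will take the AVG role back.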

So, how does GLBP work? Well, GLBP is a great way of load balancing traffic between multiple gateways while at the same time allowing for automatic failover in case one of the members fails. We can set the load balancing to round robin, where the source traffic is balanced equally across the nodes on an ‘R1, then R2, then back to R1’ basis. We can also set a preference known as weight to send, say, 75% of traffic to R1 and the remaining 25% to R2.

How does it manage the load balancing? Well, it uses a cool technique of virtual MAC allocation. Basically the AVG manages the GLBP group members. It knows the MAC addresses of the neighbors and it knows the virtual IP address (VIP) associated with the cluster. When a client sat behind the GLBP group wishes to send network traffic outside of the local network it ARPs out for the MAC address of its gateway. This request comes into the GLBP AVG, and the AVG passes back a virtual MAC address (associated with one of the members of the group) to the client. The client then has that MAC address in its ARP table associated with the gateway VIP. Now when the client sends its data it sends it to the VIP. The AVG manages this whole process to ensure a distribution of traffic across the nodes in the group based on the load balancing configuration (e.g. round-robin, none, weighted etc).

Lets take an example, here we are going to have three nodes in the GLBP group. Each node is going to be sharing the virtual IP address and the clients will be using that IP address as their default gateway. Our clients will be R1 and R2. Nodes R3, R4 and R5 are members of the GLBP group which we will configure as group 1.

Screen shot 2011-06-18 at 11.42.50

Each of the nodes have been configured with the following IP addressing:

R1 Fa0/0 =
R2 Fa0/0 =
R3 Fa0/0 =
R3 Fa1/0 =
R4 Fa0/0 =
R4 Fa1/0 =
R5 Fa0/0 =
R5 Fa1/0 =
R6 Fa0/0 =
R7 Fa0/0 =

Lets start out by setting up a basic GLBP configuration on R3. Lets make R3 the AVG of the group; it will become the AVG by default because there are no other members. We’re going to turn on GLBP debugging with the commands:

R3#debug glbp events
GLBP Events debugging is on

R3#debug glbp packets
GLBP Packets debugging is on

Lets turn on GLBP for the FastEthernet0/0 port. We’ll define the virtual IP address to bind to that interface (remember that there is already an IP address assigned to this interface).

R3(config)#int f0/0
R3(config-if)#glbp 1 ip

Lets walk through the process:

*Mar 1 13:25:22.768: GLBP: Fa0/0 API is not a GLBP address in table 0
*Mar 1 13:25:22.772: GLBP: Fa0/0 1 Disabled: a/GLBP IP address configured
*Mar 1 13:25:22.772: GLBP: Fa0/0 1 Disabled -> Init
*Mar 1 13:25:32.780: GLBP: Fa0/0 Interface up
*Mar 1 13:25:32.780: GLBP: Fa0/0 1 Init: d/GLBP enabled
*Mar 1 13:25:32.780: GLBP: Fa0/0 1 Init -> Listen

See that first line - this is GLBP doing a ‘double take’: before I start this, do I already know anything about GLBP on this interface? Clearly it’s a no, and the GLBP process goes from being disabled on the interface to active by moving to the Initial or Init phase. Right, GLBP is now ‘up’ on the interface (NOTE: this is not saying the interface moved from down to up). Next, into the Listen phase. Now this is all good stuff; it looks exactly like almost every other protocol we ever saw... what’s the best part of a conversation? The answer is listening before you speak, that way you won’t put your foot in it.

This is a ‘listen’ packet. See, I’m asking specifically for ‘Grp 1’ - this is our group. Our priority is 100, that’s the default. We’re going to send one of these every 3 seconds (3000 milliseconds).

*Mar 1 13:25:41.780: GLBP: Fa0/0 Grp 1 Hello out VG Listen pri 100 vIP hello 3000, hold 10000

Right, I’ve done enough listening now. I heard nothing, so I’m going to shout out my intention to be the GLBP ‘man’ for group 1.

*Mar 1 13:25:42.780: GLBP: Fa0/0 1 Listen: g/Active timer expired (unknown)
*Mar 1 13:25:42.780: GLBP: Fa0/0 1 Listen -> Speak

We’re going to keep sending these Hellos every 3 seconds, with a hold time of 10 seconds. See how we’re now in ‘Speak’ mode, not ‘Listen’.

*Mar 1 13:25:42.780: GLBP: Fa0/0 Grp 1 Hello out VG Speak pri 100 vIP hello 3000, hold 10000

OK, I’m not hearing anything coming back at all so I don’t need to assert my awesomeness anymore. I’m going to move into the ‘Standby’ phase.

*Mar 1 13:25:52.780: GLBP: Fa0/0 1 Standby router is local
*Mar 1 13:25:52.780: GLBP: Fa0/0 1 Speak -> Standby

One more Hello before I hit my 10 second timer.

*Mar 1 13:25:52.780: GLBP: Fa0/0 Grp 1 Hello out VG Standby pri 100 vIP hello 3000, hold 10000

Right lets transition into an ‘Active’ state now. I’m the AVG!

*Mar 1 13:25:52.784: GLBP: Fa0/0 1 Standby: g/Active timer expired (unknown)
*Mar 1 13:25:52.784: GLBP: Fa0/0 1 Active router IP is local
*Mar 1 13:25:52.784: GLBP: Fa0/0 1 Standby router is unknown, was local
*Mar 1 13:25:52.784: GLBP: Fa0/0 1 Standby -> Active
*Mar 1 13:25:52.784: %GLBP-6-STATECHANGE: FastEthernet0/0 Grp 1 state Standby -> Active

OK, so now that’s sorted I’m in control of assigning the virtual MAC addresses. That means I need to think of one. So I’m going to call out, again, to see if anyone else is using the MAC address I want to use.

*Mar 1 13:25:52.788: GLBP: Fa0/0 1.1 Disabled: a/Forwarder MAC address acquired
*Mar 1 13:25:52.788: GLBP: Fa0/0 1.1 Disabled -> Listen

Right - lets whiz out a few ‘Hellos’ to see if anyone responds to my MAC address.

*Mar 1 13:25:52.788: GLBP: Fa0/0 Grp 1 Hello out VG Active pri 100 vIP hello 3000, hold 10000 VF 1 Listen pri 167 vMAC 0007.b400.0101

I’m asking about virtual MAC 0007.b400.0101. Lets have a look at this MAC. We build it up using a common prefix of 0007.b400. The next 16 bits are then up for grabs. The ‘01’ closest to the left is derived from the group number, i.e. group 1 is 01, group 2 would be 02 etc. You can have a maximum of 1024 groups, so up to 10 bits can be used for the group (and we can have 4 forwarders per group). The rightmost two digits of ‘01’ relate to the member ID: the first member to come up gets 01, the next is 02 and so on.

OK so, we’ve run out of time and not heard anyone complaining about the vMAC so we’ll assert that on our interface now and go active.

*Mar 1 13:26:02.788: GLBP: Fa0/0 1.1 Listen: g/Active timer expired
*Mar 1 13:26:02.788: GLBP: Fa0/0 1.1 Listen -> Active

Check out the VF keyword here in this debug - VF = Virtual Forwarder.

*Mar 1 13:26:04.788: GLBP: Fa0/0 Grp 1 Hello out VG Active pri 100 vIP hello 3000, hold 10000 VF 1 Active pri 167 vMAC 0007.b400.0101

OK, so we’re up and running. I’m going to ping from R1 to R6 now, then lets have a look at the ARP table on R1.

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:
Success rate is 80 percent (4/5), round-trip min/avg/max = 20/29/40 ms

Here is the ARP table. We’ve got the MAC address for R3’s Fa0/0 interface and the vMAC address for the GLBP group.

R1#sh arp
Protocol Address Age (min) Hardware Addr Type Interface
Internet - cc00.05a1.0000 ARPA FastEthernet0/0
Internet 1 0007.b400.0101 ARPA FastEthernet0/0
<-- this is the Virtual MAC
Internet 67 cc02.05a1.0000 ARPA FastEthernet0/0
<-- this is the MAC for R3
Internet 75 cc02.05a1.0000 ARPA FastEthernet0/0

All good stuff. At the moment, then, you are looking at a virtual MAC and a virtual IP address running on the physical Fa0/0 interface on R3. But this is a First Hop Redundancy Protocol, right, so we need to add in some redundancy. Lets load up GLBP on R4 and add it to the group running on R3. First we go into the interface, then add the GLBP statement. Notice it’s using the same GLBP group number as R3 and the same virtual IP address.

R4(config)#int f0/0
R4(config-if)#glbp 1 ip

On R4, with no debugs running, this is the output on the console. We see the GLBP forwarder going through a state change from Listen to Active for GLBP group 1.

*Mar 1 13:53:13.140: %GLBP-6-FWDSTATECHANGE: FastEthernet0/0 Grp 1 Fwd 2 state Listen -> Active

This is the debug running on R3. Notice it has seen the Hellos from R4, and the AVG running on R3 has agreed with R4 that R4 is to become the standby router for that group.

*Mar 1 13:53:06.412: GLBP: Fa0/0 1.2 Disabled: a/Forwarder MAC address acquired
*Mar 1 13:53:06.412: GLBP: Fa0/0 1.2 Disabled -> Listen
*Mar 1 13:53:20.412: GLBP: Fa0/0 1 Standby router is

Looking back on R3 we can examine the status of the nodes in the current GLBP cluster.

R3#sh glbp brief
Interface Grp Fwd Pri State Address Active router Standby router
Fa0/0 1 - 100 Active local
Fa0/0 1 1 - Active 0007.b400.0101 local -
Fa0/0 1 2 - Listen 0007.b400.0102 -

Now then, we have resilience to failure. If we lose R3 then R4 should take over as the AVG. The client (R1) should see no difference in its ARP table during the failure process. So lets do that now; we’ll knock out R3 by shutting down its Fa0/0 interface.

This is the console debug on R4 after we shutdown R3.

*Mar 1 14:00:32.504: %GLBP-6-STATECHANGE: FastEthernet0/0 Grp 1 state Standby -> Active
*Mar 1 14:00:32.504: %GLBP-6-FWDSTATECHANGE: FastEthernet0/0 Grp 1 Fwd 1 state Listen -> Active

R4 is now the active AVG for GLBP group 1. Notice that the Active router is now ‘local’ and this command is being run on R4, not R3. There is now also no standby router.

R4#show glbp brief
Interface Grp Fwd Pri State Address Active router Standby router
Fa0/0 1 - 100 Active local unknown
Fa0/0 1 1 - Active 0007.b400.0101 local -
Fa0/0 1 2 - Active 0007.b400.0102 local -

Lets run a ping again from R1 to the remote host and look at the MAC addresses.

R1#sh arp
Protocol Address Age (min) Hardware Addr Type Interface
Internet 19 0007.b400.0101 ARPA FastEthernet0/0

Looks the same, right? That’s because the AVG manages the AVFs; the AVG knows what it’s doing here. BUT one of the great things which happens next is ageing. When the client’s ARP cache entry is lost it has to re-learn the MAC address for the gateway, so it ARPs out again and guess what? Well, this time the AVG fires off the new vMAC associated with R4.

R1#clear arp
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:!!!!!
R1#sh arp
Protocol Address Age (min) Hardware Addr Type Interface
Internet 0 0007.b400.0102 ARPA FastEthernet0/0

So now the gateway VIP is associated with R4’s vMAC - remember, the last two digits are 02, indicating the second member in the group?

OK, lets bring R3 back into service and then we’ll bring R5 into the GLBP group so we have three members in the group, and then we will be ready to start tweaking, tuning and generally messing about with the cluster members to show how it all fits together.

Ok so here are some of the things we can mess with:

Priority - this determines who is going to be in charge of the cluster group. The one who manages the AVF’s and dishes out the vMACs. The highest priority is preferred.
Preempt - if we have this set then the node will become the active AVG if its priority is higher than all of the other nodes in the group.
Weight - changing the weight of a node affects how much ‘load/traffic’ it is given by the AVG.
Load balancing - we can change the way the group operates to include things like source MAC, round-robin and weight.

Of these, the most useful in terms of GLBP and its benefit over other FHRPs like HSRP and VRRP are the weight and load balancing. Changing the way traffic is passed through the cluster can be of great benefit if you are trying to better utilise over/under utilised links. Source MAC load balancing is a pretty neat way of ensuring traffic *always* goes the way you need it to (a bit like affinity or ‘stickiness’ for session based traffic). A given source will be handed the MAC address of one AVF and will always be given that same MAC, while other sources may be handed a different one.
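Pulled together, those knobs might be applied on one member like this (a sketch; interface, group number and values are illustrative rather than taken from the lab):

R3(config)#interface FastEthernet0/0
R3(config-if)#glbp 1 priority 120
R3(config-if)#glbp 1 preempt
R3(config-if)#glbp 1 weighting 75 lower 30 upper 60
R3(config-if)#glbp 1 load-balancing weighted

Priority and preempt govern who owns the AVG role; weighting and load-balancing govern how the AVG spreads traffic across the forwarders.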

Here is the output from R3 for ‘show glbp’. Notice we have the shared virtual IP. The main points: you can see we have 3 group members and we list their REAL physical MAC addresses. You can see there are three AVFs or Active Virtual Forwarders; R3’s is Active and the others are in the Listen state... he rules. We’ve set up R3 to have a priority of 120, R4 to be 100 and R5 to be 80. The AVG role goes to the one with the highest priority (remember the preempt rules).

We’ve also set R3 to have an AVF weight of 75, R4 to have 50 and R5 to have 25. The AVF weight is used when we are distributing load using the ‘weighted’ load balancing algorithm. We can also set limits or ‘thresholds’ for the weighting: if we set a lower threshold and the weight is decremented below it, we can in effect STOP the AVF forwarding traffic.

R3#sh glbp
FastEthernet0/0 - Group 1
State is Active
6 state changes, last state change 00:19:41
Virtual IP address is
Hello time 3 sec, hold time 10 sec
Next hello sent in 0.324 secs
Redirect time 600 sec, forwarder time-out 14400 sec
Preemption enabled, min delay 0 sec
Active is local
Standby is, priority 100 (expires in 7.940 sec)
Priority 120 (configured)
Weighting 75 (configured 75), thresholds: lower 1, upper 75
Track object 1 state Up decrement 51
Load balancing: round-robin
Group members:
cc02.05a1.0000 ( local
cc03.05a1.0000 (
cc04.05a3.0000 (

There are 3 forwarders (1 active)
Forwarder 1
State is Active
3 state changes, last state change 00:28:41
MAC address is 0007.b400.0101 (default)
Owner ID is cc02.05a1.0000
Redirection enabled
Preemption enabled, min delay 30 sec
Active is local,
weighting 75
Arp replies sent: 3
Forwarder 2
State is Listen
MAC address is 0007.b400.0102 (learnt)
Owner ID is cc03.05a1.0000
Redirection enabled, 599.596 sec remaining (maximum 600 sec)
Time to live: 14399.596 sec (maximum 14400 sec)
Preemption enabled, min delay 30 sec
Active is (primary),
weighting 50 (expires in 9.596 sec)
Forwarder 3
State is Listen
MAC address is 0007.b400.0103 (learnt)
Owner ID is cc04.05a3.0000
Redirection enabled, 598.580 sec remaining (maximum 600 sec)
Time to live: 14398.580 sec (maximum 14400 sec)
Preemption enabled, min delay 30 sec
Active is (primary),
weighting 25 (expires in 8.580 sec)

We’ll demonstrate the ‘load balancing’ part of GLBP now - this is what we are after, isn’t it? So lets have a look at the interface; the default load balancing is ‘round robin’. For round robin in our example we’re going to talk to R3, then R4, then R5, then back to R3 again and so on, in that order.

Remember the default for GLBP is round robin so there is no configuration change at this time.

Load balancing: round-robin

OK lets ping R6 first from R1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 24/35/40 ms
R1#sh arp
Protocol Address Age (min) Hardware Addr Type Interface
Internet - cc00.05a1.0000 ARPA FastEthernet0/0
Internet 0 0007.b400.0101 ARPA FastEthernet0/0
Internet 0 cc03.05a1.0000 ARPA FastEthernet0/0

Notice the MAC address given to R1 for the VIP of ends in ’0101’ - GLBP group 1, first node.

OK, lets clear the ARP table so we have to re-learn the MAC address for the gateway. The AVG should now allocate the vMAC for R4, which ends in 0102.

R1#clear arp
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 20/30/40 ms
R1#sh arp
Protocol Address Age (min) Hardware Addr Type Interface
Internet - cc00.05a1.0000 ARPA FastEthernet0/0
Internet 0 0007.b400.0102 ARPA FastEthernet0/0
Internet 0 cc03.05a1.0000 ARPA FastEthernet0/0

OK, lets clear the ARP table again so we have to re-learn the MAC address for the gateway. The AVG should now allocate the vMAC for R5, which ends in 0103.

R1#clear arp
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 20/32/40 ms
R1#sh arp
Protocol Address Age (min) Hardware Addr Type Interface
Internet - cc00.05a1.0000 ARPA FastEthernet0/0
Internet 0 0007.b400.0103 ARPA FastEthernet0/0
Internet 0 cc03.05a1.0000 ARPA FastEthernet0/0

Right, that’s the default of round robin. We’ve also got weighted and MAC based. Lets setup MAC based load balancing and ping from R1 and R2 to R6. We should see R1 get a vMAC and R2 get a different vMAC (hopefully), and after that, no matter how many times we clear the ARP table on either R1 or R2, they should each get the same vMAC back.

First we configure the load balancing based on source MAC (you would do this on all members, but R3 is the AVG right now).

R3(config-if)#glbp 1 load-balancing host-dependent

Right lets ping from R1 and look at the learned vMAC

Internet 0 0007.b400.0103 ARPA FastEthernet0/0

Lets clear the ARP table and ping again

Internet 0 0007.b400.0103 ARPA FastEthernet0/0

It never changes...sticky.

Right, weighted load balancing. Lets set up our topology so that R3 has the best weighting with 100, R4 will get 50 and R5 will get 10. So, when we ping R6 from R1, over many ARP refreshes we would expect R3’s vMAC to be handed out roughly 6 times in 10, R4’s about 3 in 10 and R5’s about 1 in 10.

Lets setup the weighting method and weights (change the 100 to 50 and 10 respectively on the other two routers):

R3(config-if)#glbp 1 load-balancing weighted
R3(config-if)#glbp 1 weighting 100

Here is the output from show glbp showing the weights are applied

Forwarder 1
State is Active
7 state changes, last state change 00:13:42
MAC address is 0007.b400.0101 (default)
Owner ID is cc02.05a1.0000
Redirection enabled
Preemption enabled, min delay 30 sec
Active is local,
weighting 100
Arp replies sent: 6
Forwarder 2
State is Listen
MAC address is 0007.b400.0102 (learnt)
Owner ID is cc03.05a1.0000
Redirection enabled, 599.564 sec remaining (maximum 600 sec)
Time to live: 14399.560 sec (maximum 14400 sec)
Preemption enabled, min delay 30 sec
Active is (primary),
weighting 50 (expires in 9.560 sec)
Arp replies sent: 2
Forwarder 3
State is Listen
MAC address is 0007.b400.0103 (learnt)
Owner ID is cc04.05a3.0000
Redirection enabled, 598.536 sec remaining (maximum 600 sec)
Time to live: 14398.532 sec (maximum 14400 sec)
Preemption enabled, min delay 30 sec
Active is (primary),
weighting 10 (expires in 8.528 sec)
Arp replies sent: 21

Right, lets ping R6 from R1...this is 100 pings and clears so I’m not going to show them all...believe me when I tell you it works out ;-) Here are the best bits from 10 pings.

Internet 0 0007.b400.0101 ARPA FastEthernet0/0
Internet 0 0007.b400.0101 ARPA FastEthernet0/0
Internet 0 0007.b400.0102 ARPA FastEthernet0/0
Internet 0 0007.b400.0101 ARPA FastEthernet0/0
Internet 0 0007.b400.0103 ARPA FastEthernet0/0
Internet 0 0007.b400.0101 ARPA FastEthernet0/0
Internet 0 0007.b400.0102 ARPA FastEthernet0/0
Internet 0 0007.b400.0101 ARPA FastEthernet0/0
Internet 0 0007.b400.0101 ARPA FastEthernet0/0
Internet 0 0007.b400.0102 ARPA FastEthernet0/0

So for 10 pings we got 6 using R3, 3 using R4 and 1 using R5... I’m not doing this for 100 pings guys ;-) It’s close enough for me to expect it’ll work out in the long run. The point here is we would use this sort of weighted approach for different line capacities or if we have poor circuits we don’t want to use as much. As an example, maybe we’d weight a 100Mbps and a 1Gbps uplink in a 1:10 ratio, since that matches the bandwidth ratio.

OK, lets move on a little now. We’ve seen how we fail over from R3 to R4 when we shut down the local GLBP interface, but what if we lose connectivity between R3 and R6? There is no point in traffic coming to R3 if there is no way for R3 to send that traffic on to the destination. We’ll set up a basic interface track to manage that. If the tracked interface goes down we’ll decrement the weight by 99, taking it down below the lower threshold, which means the AVF should stop forwarding traffic.

Lets setup the object tracking, then we’ll assign a weight decrement value for the AVF, then we’ll shutdown interface F1/0 and watch the results.

Object tracking - we’ve seen this before and it’s in other articles on here so no big depth. We’re going to watch for line protocol changes on interface F1/0.

R3(config)#track 1 interface fa1/0 line-protocol

Right, now we’ll assign a weight decrement to that tracked object for GLBP group 1. We’re dropping the weight by 99 if we lose interface F1/0.

R3(config-if)#glbp 1 weighting track 1 decrement 99

Right, now we’ll set up the weighting on the interface so that if the value falls below 1 the AVF stops forwarding traffic (remember this is just a test; you can use multiple tracked objects to manage the weight value, e.g. drop by 20 if this route is out of the table etc).

R3(config-if)#glbp 1 weighting 100 lower 1

This is the output from R3 after we shutdown Fa1/0. Notice the tracked object state change and the weight dropping from 100 to 0.

*Mar 1 14:53:10.728: %HSRP-5-STATECHANGE: FastEthernet1/0 Grp 1 state Standby -> Init
*Mar 1 14:53:10.732: %TRACKING-5-STATE: 1 interface Fa1/0 line-protocol Up->Down
*Mar 1 14:53:10.732: GLBP: Fa0/0 1 Track 1 object changed, state Up -> Down
*Mar 1 14:53:10.732: GLBP: Fa0/0 1 Weighting 100 -> 0
*Mar 1 14:53:12.728: %LINK-5-CHANGED: Interface FastEthernet1/0, changed state to administratively down
*Mar 1 14:53:13.728: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet1/0, changed state to down

Then we get a GLBP response from the active router R5

*Mar 1 16:16:09.055: GLBP: Fa0/0 1.1 Active: i/Hello rcvd from higher pri Active router (135/
*Mar 1 16:16:09.055: GLBP: Fa0/0 1.1 Active -> Listen
*Mar 1 16:16:09.059: %GLBP-6-FWDSTATECHANGE: FastEthernet0/0 Grp 1 Fwd 1 state Active -> Listen
*Mar 1 16:16:09.059: GLBP: Fa0/0 API MAC address update
*Mar 1 16:16:09.063: GLBP: Fa0/0 1.1 Ignoring Hello (135/ < 135/

OK, lets try that weighted ping again. This time we should *never* use R3 since its weighting is now below the threshold and it should not be forwarding traffic.

Internet 0 0007.b400.0102 ARPA FastEthernet0/0
Internet 0 0007.b400.0103 ARPA FastEthernet0/0
Internet 0 0007.b400.0102 ARPA FastEthernet0/0
Internet 0 0007.b400.0102 ARPA FastEthernet0/0
Internet 0 0007.b400.0102 ARPA FastEthernet0/0
Internet 0 0007.b400.0102 ARPA FastEthernet0/0
Internet 0 0007.b400.0102 ARPA FastEthernet0/0
Internet 0 0007.b400.0103 ARPA FastEthernet0/0

Right, well that took way longer than I expected to put down here, but I really hope it helps someone out.

Good luck with your studies

So lets just cover off one of the great things about distance vector routing protocols like RIP and EIGRP. To prevent loops in routing tables they use something called ‘split horizon’. The basic rule is this: “never pass on a route learned from one interface back through that same interface”. Scott Morris states it this way: “never tell a joke back to the person you just heard it from, it never has the same effect”. Split horizon works great for broadcast networks, but the behaviour is not quite so useful in non-broadcast networks like frame relay, so lets have a look at a few examples.

Broadcast network

Here is our broadcast network with three routers, R1, R2 and R3 (see diagram below). Each router is connected to a switch using interface Fa0/0, sharing one subnet. Each router is also configured with a loopback interface (Lo0) with an IP address significant to its hostname.

Screen shot 2011-06-15 at 00.06.37

Now, each of the routers advertise their connected networks using RIP.

Here is the configuration for R1’s RIP process. You can see we’ve enabled version 2 and disabled auto-summary, but that doesn’t matter for the purpose of this demonstration. TIP: the ‘sh run | s rip’ command asks the IOS parser to return just the RIP section (‘s rip’) of the configuration.

Screen shot 2011-06-15 at 00.06.07

R1, R2 and R3 each have a copy of the RIP database containing all of the routes. Crucially, however, none of the routers advertises learnt networks back out of the interface connected to the switch; they only send their directly connected networks. This is because, as the split horizon rule dictates, we learned those networks via that interface, so it is unnecessary to send the same learned routes back out. If we did, we could end up advertising routes to neighbours which we didn’t actually have connectivity to.

For example, what if we disabled split horizon on R2? It would advertise connectivity to R1’s and R3’s loopback interfaces. Of course those RIP adverts would have a metric (hop count) one more than the best route, but what if the best route was lost? In this case the route advertised from R2 would be the new ‘best route’, and since R2 doesn’t actually have any connectivity to the R1 or R3 loopback interfaces, the traffic would get to R2 and be dropped. All of this is possible because of the painfully slow convergence timers in distance vector protocols like RIP; indeed the invalid timer is 3 minutes and the flush timer 4 minutes! Now then, it’s not all bad news. RIP does also have another very clever way of telling all connected RIP hosts on the same broadcast network that something bad has happened and they must flush that route immediately (usually because an interface has gone down). This type of ‘triggered update’ is called ‘Poison Reverse’. This ‘poisoning’ of a lost route basically advertises an immediate route update for the lost network with a metric (hop count) of 16, which means ‘inaccessible’ - job done.
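Those timers can be seen and tuned per RIP process; the IOS defaults are 30 seconds between updates, 180 seconds before a route is declared invalid, a 180 second holddown and 240 seconds before the route is flushed. As a hedged sketch (the values here are illustrative, and tightening timers must be done consistently on all routers):

R2(config)#router rip
R2(config-router)#timers basic 10 60 60 80

The arguments to ‘timers basic’ are update, invalid, holddown and flush, in that order.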

Lets see the poison reverse in action here. We’ll shut down the loopback0 interface on R1. Our debugs are running on R2. Watch as R1 sends out the update for the lost route.

Firstly, here is a normal update from R1 - notice it was learned from R1’s interface closest to R2.

Screen shot 2011-06-15 at 00.28.56

Now we’ve just shutdown the lo0 interface on R1. It immediately sends out an update telling us that route is dead (hop count 16).

Screen shot 2011-06-15 at 00.29.38

Now we (R2) send an update ourselves, basically passing on the bad news to all other RIPv2 routers (we send it to the multicast address used by RIPv2).

Screen shot 2011-06-15 at 00.29.54

We also ‘hear’ the bad news from R3 who, like us, passed it on... hey, bad news does travel fast, even in RIP.

Screen shot 2011-06-15 at 00.30.13

Right, this is all good fun, but what about split horizon? Well, you know, the best place to see split horizon NOT working well for us is on a frame relay network, so lets do that instead.

Split Horizon on Frame Relay multipoint networks.

Here is the topology. In this example, R1 is the frame relay hub with R2 and R3 as spokes.


The network has been configured with a single multipoint subnet between the two spokes and the hub. We’ve configured RIP on each of the routers and split horizon is enabled on each interface. Lets take a look at the routing table on R1.

Screen shot 2011-06-15 at 00.42.40

OK, this is perfect. We see learned routes for the loopback interfaces of both R2 and R3. Right, so now we’ve enabled a routing protocol and we’re injecting the loopback interfaces into RIP, R2 should have routes to R1’s lo0 and R3’s lo0, and R3 should have routes to R1’s lo0 and R2’s lo0, right?

Well lets see. First on R2.

Screen shot 2011-06-15 at 00.46.17

Hmm - well I see R1’s lo0 network but I don’t see R3’s lo0.

What about R3?

Screen shot 2011-06-15 at 00.46.51

OK, so just like R2 I only see R1’s lo0 interface.

Well, unless I fix this I won’t be able to ping all of the loopback interfaces. The answer to the problem is easy though. In our example R1 is the hub of a frame relay network. The RIP routes are being learned from R2 and R3 via one interface on R1 (S2/0.123). So, go back to what we know about split horizon: routes learned from R2 and R3 on that interface are not sent back out of that same interface!

This behaviour is not ideal, right? Luckily, however, we can override split horizon. We need to disable split horizon on that serial interface on the hub, R1. Let’s do that now.

Screen shot 2011-06-15 at 00.50.10
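The command in that screenshot is presumably along these lines (S2/0.123 is the hub subinterface named earlier in this article):

```
interface Serial2/0.123
 no ip split-horizon
```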

Now let’s take a look at the RIP database on R2 (show ip rip database).

Screen shot 2011-06-15 at 00.50.42

Cool, now I see the lo0 network from R3. Notice that it was learned from R1 though - R1 has passed it on. Of course, we’re on a non-broadcast network and R3 and R2 are not directly connected with their own DLCI, so R1 is the man in the middle here.

Let’s take a look at R3’s routing table.

Screen shot 2011-06-15 at 00.52.36

Nice one, we have a route to R2’s lo0 network.

That’s it for this article on split horizon.

Thanks for reading

RIP is a classful routing protocol; it doesn’t do CIDR (Classless Inter-Domain Routing). So is a default route a classless or classful entity? In this article we’ll get one RIP router to advertise the default route to another RIP neighbor using static routes, an IGP, redistribution, and then the ‘default-information originate’ approach. Let’s move on; here is the network topology. It’s a little overkill, but we used this same design for a video which we’ll be posting up soon.


We’ll begin by configuring a basic RIP setup on R1 and R5. The RIP default will be sourced from R1 and sent to R5.

Here is the basic configuration to enable the RIP process and enable it on the network segment between R1 and R5. Notice the new(ish) pipe command ’s’, which is short for ‘section’. We’ve put the loopback0 interfaces for R1 and R5 into RIP - notice the network masks on both of these? The loopback interfaces are and for R1 and R5 respectively, and yet in the RIP process the network statements are and. There is no netmask statement to support the 24 bits of network, so how does RIP know what to advertise? Well, in fact version 1 of RIP would not be able to help here, but version 2 (which we have enabled) does support VLSM and can send the netmask associated with the ‘network’ of the interface configuration along with the advertisement of the route...phew. RIP is CLASSFUL; it’s not something you want to run in a modern network necessarily. We’ve also disabled auto-summarisation to stop RIP sending the classful summary - RIP will do it automatically, so we need to disable it if we have shared networks advertised (for this example it is unnecessary).

Screen shot 2011-06-09 at 02.04.31

Screen shot 2011-06-09 at 01.57.34
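The configuration in those screenshots is presumably something like this hedged sketch - the network statements shown here are hypothetical stand-ins, since the real addresses aren’t preserved in this text:

```
router rip
 version 2
 no auto-summary
 ! hypothetical classful network statements for illustration
 network
 network
```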

Right, let’s have a look at the routing table on R1 and then on R5.

Screen shot 2011-06-09 at 02.05.35

Screen shot 2011-06-09 at 02.05.45

See how RIP, because of the CLASSFUL network issue, has advertised the networks between R1 and R2, R1 and R4, and R5 and R4? On a production network you can see how it might get out of control, sending more routes than you actually wanted. To stop this happening we’d normally apply the ‘passive-interface’ command to stop the router sending routing updates to its neighbors on that shared interface. If you wanted to stop the router receiving the routes you would use a distribute-list, or an access-list denying UDP port 520 inbound.
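As a hedged sketch of those two options (the interface name and ACL number are hypothetical; access-list 1 would need to be defined to deny the unwanted routes):

```
router rip
 ! stop sending RIP updates out of a shared interface
 passive-interface FastEthernet0/1
 ! stop receiving unwanted routes on that interface
 distribute-list 1 in FastEthernet0/1
```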

Right, so we want a default route sent from R1 to R5 (and other neighbors). First we need R1 to have a default route in its routing table; without it RIP will not send a default route - it HAS to already be in the routing table.

Firstly, let’s send the default using static routing. We’re going to put a static default route into R1’s routing table and point it at the null0 interface - you wouldn’t want to do this unless you wished to blackhole traffic to networks which are not in your routing table. Remember, packets trying to get to unknown networks are dropped...this may be desirable...up to you.

Screen shot 2011-06-09 at 02.14.19
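The command in that screenshot is presumably the classic static default pointed at Null0:

```
ip route Null0
```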

Let’s take a look at R1’s routing table.

Screen shot 2011-06-09 at 02.14.29

Let’s take a look at R5’s RIP routes in the routing table, to show you that static routes don’t just jump into the RIP process - we have to redistribute between routing protocols, don’t we ;-)

Screen shot 2011-06-09 at 02.15.26

Right, so let’s do that redistribution step now. We want to take that static route to null and pop it into RIP. Remember, RIP routes are sent as full updates every 30 seconds, so we’ll just wait...OK, here is the ‘debug ip rip’ output.

Screen shot 2011-06-09 at 02.24.18

Great news - let’s take a look at the routing table and see if the default route is in there now.

Screen shot 2011-06-09 at 02.25.04

Nice BA.

OK, so I just thought of something we can do here. The redistribution line allows us to do things like add hop count (metric) into the updates...let’s take a quick look at that. Remember, the metric right now is 1 (we’re one hop away).

Screen shot 2011-06-09 at 02.26.37
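The configuration behind that screenshot is presumably along these lines:

```
router rip
 ! pull the static default into RIP, stamping a hop count of 5
 redistribute static metric 5
```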

Here is the debug on R5

Screen shot 2011-06-09 at 02.27.29

Excellent - you know what though...I told you RIP full updates are sent every 30 secs, but this arrived a second after I told R1 that the redistribution should have a metric of 5. Well, this is a great feature of later versions of RIP called ‘triggered updates’. This type of update is sent immediately when there is a change rather than waiting for the next full update.

NOTE: there is a Cisco proprietary version of triggered updates which can be used between peers across point-to-point interfaces. This type of update differs in that no updates are ever sent unless there has been a change, at which point only the routes which have changed are sent. It reduces update traffic across what could be a slow circuit.

So is the metric reflected in the routing table? Yes indeed: [120/5] shows administrative distance 120 (the default) and a metric of 5.

Screen shot 2011-06-09 at 02.32.26

So that was redistribution, but there is another way to get the default sent, and that’s using ‘default-information originate’. Using this method we get a little more control through route-maps and conditional advertising. Let’s move on with this.

First we’ll tear down the static redistribution and check the route is gone from R5:

Screen shot 2011-06-09 at 02.34.28

Trust me - the route is gone from R5 - screenshot overload ;-)

Right, let’s configure the default originate. Make a note - we still need the default route to be alive on the router: no default in the table = no advertised default.

Screen shot 2011-06-09 at 02.36.02
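The configuration in that screenshot is presumably simply:

```
router rip
 default-information originate
```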

Here is the debug on R5...see the coming in

Screen shot 2011-06-09 at 02.37.27

Let’s do a ‘show ip rip database’ on R1 to see that it is redistributing the default route (default originate is still a redistribution).

Screen shot 2011-06-09 at 02.38.35

So we have the route in the table, but before we close down here let’s think back to what I was saying earlier about blackholing the traffic. Remember, if we don’t have a more specific (longer prefix) route than the default, then we’re going to drop those packets right here. Now that may be what you want, but it’s more likely that you’re advertising a default route because you want traffic to come to R1 because R1 is a gateway for your network, e.g. a router in front of the internet. So let’s think about it: if R1 is connected to the internet and we are advertising a default route to our internal routers so they send their packets to us...that’s great. What if R1 loses connectivity to the internet though? All of that internal traffic is still going to be racing to R1, which is wasting bandwidth and ultimately unnecessary.

RIP default-information originate can help us out here with a technique called conditional advertising. To do this we use a route-map.

The route-map will ‘look’ for the existence of a route, and if that route drops out then we’ll stop advertising the default...sound right to you? Well, in our example we are going to say that the interface connected between R1 and R2 is the uplink to the internet. I’ve configured a basic OSPF neighbour relationship between R1 and R2, and R2 is sending its loopback0 network ( to R1. So, if we see the OSPF route for go away, then we will take it that our link to the internet has gone away and we will want to stop advertising the default route. OK, let’s get on with it then. Here is the OSPF route in R1’s routing table.

Screen shot 2011-06-09 at 02.59.18

First we set up the route-map. We want to match the OSPF route learned from R2 - a standard access-list will match the IP address for the network, and the route-map matches the existence of that route.

Screen shot 2011-06-09 at 02.59.12
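A hedged sketch of that screenshot - the ACL number, route-map name and network here are hypothetical, since the real values aren’t preserved in this text:

```
! match R2’s loopback0 network (hypothetical address)
access-list 10 permit
!
route-map TRACK-UPLINK permit 10
 match ip address 10
```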

Let’s add that route-map to the conditional default-route advertisement.

Screen shot 2011-06-09 at 03.01.53

Right - is the default route in R5’s routing table?

Screen shot 2011-06-09 at 03.03.01

OK, let’s shut down the interface to R2. We see the OSPF neighbor go down, and the route is gone.

Screen shot 2011-06-09 at 03.03.44

The route has gone from R5...job done

Screen shot 2011-06-09 at 03.04.07

Ok, that wraps it up for this tech article. We’re doing these all the time, be sure to keep checking back.

Thanks for reading

Using the OSPF default-information originate command in its raw form allows us to send a route into the OSPF database for our neighbors to put into their active routing tables. One problem with default routes, however, is they can soon become traffic black holes.

Imagine a router connected to the internet (R1). To ‘help out’ all the other internal routers, it advertises a default route in, saying “Hey you guys, I know how to get to the internet, so don’t worry about all the routes I know about; here is one route to rule them all - send all your traffic to me.” The command that router might use would look like this:

Screen shot 2011-06-01 at 00.13.32
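That command is presumably along these lines (the OSPF process number is a hypothetical stand-in):

```
router ospf 1
 ! ‘always’ = advertise the default even with no default in the local table
 default-information originate always
```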

The statement ‘default-information originate’ is extended with the ‘always’ keyword to signify that OSPF should ‘always’ advertise the default route, no matter whether the router has an active default route in its routing table.

Let’s have a look at the active routing table on the neighbor connected to R1 for the new default route advertisement:

Screen shot 2011-06-01 at 00.15.20

We see the default route has an OSPF type of E2 as it is a ‘redistributed’ route brought into OSPF from an external process.

Screen shot 2011-06-01 at 00.15.53

So that’s great, but what if we wanted the default route to be conditionally advertised, based on the advertising router already having a default route in its active routing table? Well, that’s easy enough: we can remove the ‘always’ keyword. Let’s go back to R1 and take that out:

Screen shot 2011-06-01 at 00.18.16

Now let’s put the originate statement back in - notice no ‘always’:

Screen shot 2011-06-01 at 00.18.28

Now on the neighbor we have no route to

Screen shot 2011-06-01 at 00.19.20

As we explained before, this is because R1 (the originator) didn’t have a default route in its routing table, as shown here:

Screen shot 2011-06-01 at 00.20.16

Right, so without the ‘always’ keyword we need a default route to be present, therefore making the presence of the default route a condition of the further advertisement of that route. Let’s pop in a static route on R1 to meet this condition and hopefully then advertise the route to R2.

First R1: the first command adds a static route to via interface null0 (effectively blackholing data destined for any route not known by R1); the second command simply shows the routing table entry.

Screen shot 2011-06-01 at 00.21.54

Now R2: we see the route is now in the table, due to R1 meeting the pre-condition of having a default route in its active routing table.

So that’s great, of course. But what if our internet router has a full internet routing table BUT, crucially, no default route? Well, we could add a static route as we’ve seen, but if those internet routes go down (neighbor failure etc.) then the static default will never fail and traffic will keep routing toward R1. Wouldn’t it be better if we matched a BGP route in the table and, based on the presence or ‘non-presence’ of that route, stopped advertising the default route?

To do this we need a route-map. A route-map can be configured in this case to match either an access-list or a prefix-list for the route in the table. Let’s advertise based on an access-list entry first of all. We’re going to look for an active route for network If it exists then we advertise the default route; if not, we don’t.

First lets make sure the route is in our routing table on R1:

Screen shot 2011-06-01 at 00.26.44

Great, now let’s set up the default-originate statement and route-map to match the existence of this route.

Screen shot 2011-06-01 at 00.27.51

We need to create the route-map called BGPEXIST now to match the route entry.

Screen shot 2011-06-01 at 00.29.14

Now we need to create access-list ‘1’ as we have described it. Notice we use a wildcard mask to match exactly the first three octets, so as to catch the /24 mask in the routing entry.

Screen shot 2011-06-01 at 00.29.56
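Putting the last three screenshots together, the whole conditional chain presumably looks something like this hedged sketch - BGPEXIST is the route-map name from the article, while the network and OSPF process number are hypothetical stand-ins:

```
! match the watched BGP-learned /24 (hypothetical network)
access-list 1 permit
!
route-map BGPEXIST permit 10
 match ip address 1
!
router ospf 1
 default-information originate route-map BGPEXIST
```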

Now let’s check R2 to make sure we are getting the route:

Screen shot 2011-06-01 at 00.31.11

All good. Now let’s prove the conditional configuration works by taking the network out of the R1 routing table - we’re going to do this by denying the route on the internet peer. Let’s check that the route is no longer in the R1 active routing table:

Screen shot 2011-06-01 at 00.32.28

OK, so without the route we should no longer be advertising the default to R2 - let’s check R2’s routing table:

Screen shot 2011-06-01 at 00.33.01

Perfect - conditional advertising works.

To cover the full story, we’ll do the same thing using a prefix-list now. We’ll choose a different network this time; will be fine. Let’s make sure this network is in the R1 routing table:

Screen shot 2011-06-01 at 00.37.12

So we only need to change the route-map, since the default-information command still stands. First we access the route-map sub-command, then we take out the access-list match and add in the new prefix-list line. NOTE that you cannot have both access-list and prefix-list matches at the same time.

Screen shot 2011-06-01 at 00.34.30

So now we need to setup the prefix list ‘1’ (the sequence number is personal preference and can be ignored)

Screen shot 2011-06-01 at 00.35.49
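A hedged sketch of the prefix-list version - again, the prefix itself is a hypothetical stand-in since the real network isn’t preserved in this text:

```
ip prefix-list 1 seq 5 permit
!
route-map BGPEXIST permit 10
 no match ip address 1
 match ip address prefix-list 1
```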

Now let’s have a look at R2’s routing table to make sure that, now we are matching a prefix-list looking for the active route, we are advertising the default route again...

Screen shot 2011-06-01 at 00.37.59


This topic is covered in more detail in our technical video on YouTube.

Thanks for reading.
GRE (Generic Routing Encapsulation) is an industry standard for encapsulating data within an IP packet. Unlike IP-in-IP (IP protocol 4), GRE runs as IP protocol 47. It is often used to manipulate routing over non-broadcast networks or for sending multicast over IPSEC tunnels. This tech note was set up between a Juniper EX 4200 switch cluster and a Cisco 2621XM router. One of the issues with GRE traffic is its extra header, which means payloads are reduced by an extra 4 bytes (minimum) when you use it.

So here is our basic topology. Remember, we’re just using this to prove out the connectivity, NOT to delve deeply into GRE itself or how we could use this to fix a ‘situation’ (we’ll be bringing more of these technical guides as soon as we can write them). We’re using another 24-bit subnet From here we’ll have two loopback interfaces (one on each device) and we’ll set up the routing to divert traffic between these loopback interfaces across the tunnel.

Screen shot 2011-05-28 at 22.19.00

First, let’s configure the EX switch ge-0/0/0 interface, which is connected directly to the Cisco 2600. Notice the bit-wise mask at the end. Cisco’s recent Nexus platform running NX-OS also now uses the bit-wise pattern for the netmask...interesting ;-)

Screen shot 2011-05-28 at 22.31.29

Let’s configure the loopback interface.

Screen shot 2011-05-28 at 22.31.37

Right, now we’ll configure the GRE interface itself. The order of the next three configuration lines doesn’t matter, but you DO need them all ;-)

The source is the beginning of the tunnel from ‘this router’s point of view’. As an analogy, think of you in your car. You are driving toward a tunnel going under a river from Coolville to Duddsberg. As far as you are concerned, the start (source) of the tunnel is in Coolville. When you return, however, the start of the tunnel is in Duddsberg. Same thing for traffic going into and out of your tunnel here.

Screen shot 2011-05-28 at 22.31.44

Now the destination. Remember, this is all relative and the other side will look the opposite.

Screen shot 2011-05-28 at 22.32.04

OK, now that’s the tunnel built, we need to ‘load it up’ with lovely IPv4 traffic. So, just like a normal interface, we’ll give it an IP address and a mask.

Screen shot 2011-05-28 at 22.42.19

Right, JunOS side done; now let’s nip over to the Cisco box and do the same.

Let’s configure the FastEthernet 0/0 interface, which is connected to the Juniper switch.

Screen shot 2011-05-28 at 22.47.14

Now we’ll configure the tunnel interface. For brevity I’ve pumped all of the configuration in here, but it follows EXACTLY the same sort of configuration as JunOS. Source IP of the tunnel, destination IP, IP address of the tunnel...done.

Screen shot 2011-05-28 at 23.03.44
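The Cisco configuration in that screenshot presumably follows this pattern - all addresses here are hypothetical stand-ins, since the real ones aren’t preserved in this text:

```
interface Tunnel0
 ip address
 tunnel source FastEthernet0/0
 tunnel destination
 ! GRE is the default tunnel mode on IOS, so no ‘tunnel mode’ line is needed
```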

Right, so now that’s up, let’s see if we can ping either side of the tunnel.

Cisco side first...

Screen shot 2011-05-28 at 22.38.30

Cool, now the Juniper side

Screen shot 2011-05-28 at 22.49.52

Awesome. Right, but we’re pinging the two sides of a point-to-point interface here, which isn’t exactly a thorough test, is it? If we’re going to be ‘routing’ traffic through this tunnel, and not just having a secondary route (what’s the point in our topology anyway), we’ll need to give each side routes to one another. We’re going to route traffic for each side’s loopback interface through the tunnel.

Juniper side first

Screen shot 2011-05-28 at 22.53.06

...don’t forget the commit in JunOS. You know it never fails to impress me how Juniper got JunOS so right for administrators. If we screwed this up and the router happened to be 1000 miles away we’ve got options. Auto rollback is the best thing ever.

Now the Cisco side

Screen shot 2011-05-28 at 22.53.41
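The Cisco static route in that screenshot is presumably of this shape (the remote loopback network is a hypothetical stand-in):

```
! send traffic for the Juniper side’s loopback through the tunnel
ip route Tunnel0
```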

...if we screwed up IOS we’d be gone ;-) Of course we could always issue that great ‘reload in 10’ shortcut to save our ass.

OK, let’s ping out over the tunnel interface.

Screen shot 2011-05-28 at 23.04.19

Now from the Cisco side...just because we can.

Screen shot 2011-05-28 at 23.05.23

What about some statistics to back that up man?! OK, here we go.

Right, Juniper side first again. We got 6 packets in and out here...

Screen shot 2011-05-28 at 23.07.03

Cisco side...

Screen shot 2011-05-28 at 23.05.59

Job Done.

Thanks for Reading
BGP has a number of metric-affecting values; one of the most often ignored is origin, although it can have a serious effect on your routing table. Imagine getting two routes from two eBGP neighbors. ISP_A is connected to CUSTOMER_Z using OSPF, redistributes those learned networks into BGP, then advertises them to its other upstream peers. ISP_B, on the other hand, is also connected to the same CUSTOMER_Z; however, ISP_B decides to bring those OSPF networks into the BGP RIB using the 'network' statement. The upstream BGP peers will receive two routes into their RIB: one with an origin of i (the one from ISP_B) and one with origin ? (the one from ISP_A).

BGP uses a number of 'tiebreakers' to decide the best route to a destination, and here are the first 5: Weight (Cisco proprietary), Local Preference, AS-PATH length, Origin, MED. So origin comes right after AS-PATH length, making it a higher decision maker for your outbound traffic than most people give it credit really matters.

So in this tech guide we're going to build a small three-node network. The diagram below shows two BGP autonomous systems. 65001 contains R1 and R2, and they are connected with the network R3 is in another AS, number 65002. The network between R2 and R3 is


Here is the output from R1 showing first the ip routing table and then a 'show run' for the interface connected to R2


Here is the same output for R2. Note that R2 is connected to R1 via Fa0/0 and R3 via Fa0/1


Finally the same output for R3


OK so we have IP comms between R1, R2 and R3. Now we're going to configure the BGP relationships.

First we'll add the neighbor statement on R1 for R2 (remember, we already enabled the BGP process by issuing the 'router bgp 65001' global configuration command).
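That neighbor statement is presumably of this shape (the peer address is a hypothetical stand-in, since the real addresses aren't preserved in this text):

```
router bgp 65001
 ! iBGP: the remote AS matches our own
 neighbor remote-as 65001
```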


Now we do the same for the other side of this neighborship on R2 pointing to R1


Note that both have the same AS number because they are iBGP neighbors. We can now see that the neighbor relationship is 'Established'.


OK, so now let's set up the eBGP relationship between R2 and R3. Firstly on R2.


Now on R3


Note the difference in AS number between that configured under 'router bgp' and that in the 'remote-as' statement - this is an eBGP relationship. We get this console message on R2 - the neighborship is Established.


So let's create our first 'network' to advertise via BGP from R3. We'll just use a loopback interface; here it is, loopback0.


We bring this into BGP using the 'network' statement. 
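That statement presumably looks like this hedged sketch (the loopback0 prefix is a hypothetical stand-in):

```
router bgp 65002
 network mask
```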


Now on R2 we can see that R3 has indeed sent us a route for Notice the origin for the route is 'i', or IGP (often read as 'internal'). That's because we brought the route into the BGP RIB (Routing Information Base) using the 'network' keyword.


So what happens if we bring the network in via redistribution? We'll create another loopback interface on R3, called loopback1, and redistribute that connected interface into the BGP RIB.

Firstly the interface configuration


Now we add the necessary configuration to the BGP routing process. We need to be careful here: simply redistributing all connected interfaces would also bring in our loopback0 network, so let's use a route-map. We'll match the network for loopback1 using an access-list.

Here is the access list (we could use the 'host' keyword instead of


Now the route-map. We're matching access-list 1, which is the network we configured for interface loopback1. Any other match is denied and dropped, i.e. it won't be redistributed.
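Pulling the pieces together, the redistribution presumably looks something like this hedged sketch (the ACL number is from the article; the route-map name and prefix are hypothetical stand-ins):

```
access-list 1 permit
!
route-map CONN-TO-BGP permit 10
 match ip address 1
!
router bgp 65002
 redistribute connected route-map CONN-TO-BGP
```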


So let's have a look at the routing table on R2 to see if we now have two routes: the first for and now a second for


We do! That's great. Now look at the origin. The first was an 'i', which you remember was brought in using the 'network' keyword. The new network has an origin of '?', which means incomplete. The reason for this is that we don't know the source of the route, so our knowledge of its origin is...well, incomplete. All we know is that someone brought it in from somewhere.

So there is one more origin type, and that is 'e', or EGP. Now, EGP is a legacy protocol and I've never come across it. To create the 'e' origin type, therefore, we'll have to 'fudge' it. We're going to use a route-map again to set this.

First, let's create another loopback interface on R3 for this EGP candidate route.


OK, so now let's edit the existing route-map to add in our EGP configuration. What we'll do is again match the interface lo2 using an access-list. If it matches, we change the origin to 'e' and set the source AS number for the EGP origin to 65003.
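That edit presumably looks like this hedged sketch - AS 65003 is from the article, while the ACL number, route-map name and prefix are hypothetical stand-ins:

```
access-list 2 permit
!
route-map CONN-TO-BGP permit 20
 match ip address 2
 ! fudge the origin code to 'e' (EGP), sourced from AS 65003
 set origin egp 65003
```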


OK, now that's done, we'll wait a while until the BGP routes settle (or we can force it with a 'clear ip bgp *' at either side of the peering).


There we go...all three origin types. If these were duplicate routes learned from different sources with the same AS_PATH length, we'd choose i first, then e, then ?.

So what about R1? You're right, we didn't even use it yet...let's take a look at its routing table...


Nothing? Of course - is the next hop in my routing table? No, of course not, so the route goes into the BGP RIB but is inaccessible. So we need to set 'next-hop-self' on R2 to change the next hop to R2's fa0/0 interface.
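On R2 that presumably looks like this (R1's peering address is a hypothetical stand-in):

```
router bgp 65001
 ! advertise ourselves as the next hop for routes sent to R1
 neighbor next-hop-self
```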


OK, let's see the routing table now.


Looks good - of course R3 has no way of knowing how to get to yet, but you know how to do that now, right? Of course you do...

Thanks for reading.

Well, being as we all know lab time is precious, we're all looking for CCIE shortcuts. Certainly in my labs (passed third time) I found the last hour was usually spent pinging everything to make sure I had that special full routing table. So, you've seen those practice labs, right? You've probably gotten hold of a training package from one of the big Cisco training vendors like InternetworkExpert or IPExpert, right? Well, those full labs they throw at you are pretty extensive and contain a lot of IP addresses. If we assume that the real lab has just as many IP addresses, then how could you possibly remember all of those? Not only that, but could you really be up to the task of pinging each and every one from each and every layer 3 device on the network to test for full reachability?

Well of course not, that would be nuts. So here is what I did for my lab (frankly I only had enough time to exploit this in my third attempt) and I hope it can help you out. Of course you could always use this on your production network as part of some alarm script or reachability script but I digress.

So, tclsh, what is it and how do we use it? Well, it's a very powerful application in its own right; read about it here. On IOS it runs under its own binary and is called from privileged EXEC mode. The power of tclsh is not in the scope of this document, and honestly I've not even scratched the surface myself. We're going to concentrate on running the application, writing a basic script, executing the script and using it to debug issues in your lab before the time runs out.

So, first off, how do we call tclsh? From privileged EXEC mode type 'tclsh' then press return.


You see you get dropped into a sub-prompt - actually the tclsh command shell. It is inside here that you can create your script. For our example here I've built up a small three-node network. R1, R2 and R3 are connected together using subnets (R1<->R2) and (R2<->R3) and we are running EIGRP across all three. Each has a local loopback0 interface. Lo0 on R1 is, Lo0 on R2 is and Lo0 on R3 is

Here is a diagram so we can see the topology:


EIGRP process ID 100 is running on each router. Here is the routing table on R1:


OK, so in the lab you're going to be busy - too busy to remember IP addresses and *way* too busy to type these all in. Here is a tip: as you go through the lab, have notepad open and type in the IP addresses for all of your nodes. I'd use notepad a lot if I were you. There was a study done (can't recall the publisher, sorry) which stated that more than 60% of exam error (not just CCIE) was down to mistyping or human error (the technical term for this of course is "fat fingers"). So why would you set yourself up for this sort of thing - put it in notepad and minimise your risk.

Here is Fred who could press QWASEDZXC with one finger easily.


So anyway you've got a list of IP addresses - this is perfect. At the end of your lab you're going to be ahead of the game when it comes to troubleshooting with tclsh!

The basic setup for our tclsh script is that we are going to run an iterated 'for' loop. Any programmers out there will be aware of 'for' loops. The idea behind a loop is to reduce the number of lines of code for a repetitive process, e.g. "check A looks like B", or "doesn't look like B", or maybe "is smaller than B", or "bigger than B" etc. The point is that whatever the differentiator is between A and B, do it, and do it again until the end...whatever the end is. In our ping script we're not doing any comparison; we're just iterating through a list of addresses. Hmm, this is getting hard to put down in words, I guess, so here is our example:

We've got a number of IP addresses in our topology and these are in our notepad list:

Cool. Right, now the syntax for our for loop starts with 'foreach VARIABLE_NAME'. Now, the VARIABLE_NAME can be anything, but it's usual to pick something small like a letter, e.g. 'a' or 'b' - most guides I've seen use 'i' - but it could easily be 'ipaddress' or even 'ccie'; you choose. The VARIABLE_NAME is just something you are associating with your IP addresses, i.e. the first IP address in your list, for the first run through the loop, will associate with the letter 'a', or in other words a=

The tclsh script ends by telling IOS what command it is supposed to run against the variable in the script. Our script is written to 'ping' so clearly we'll need to have ping in there somewhere ;-)

Here is the command based on us choosing a VARIABLE_NAME of 'a'. If you chose something else here then you should substitute it.

"ping $a repeat 1"

Let's walk through this. The tclsh parser will send the string "ping $a repeat 1" into privileged EXEC mode just as if you were typing it. Now remember, $a takes on the IP address for each iteration through the loop, so first time round $a= and now the line makes more sense: "ping repeat 1". We are saying 'repeat 1' because we don't need IOS to do the usual 5 pings...we're running low on time, right?! So we run one ping and we look for a "!" which means all good. Now, some people tell me "Hey, sometimes the first ping is lost because the router needs to look up the MAC address and it times out". Well, I say use whatever works best for you. In the lab it's likely your routers will already have the MAC addresses they need due to routing protocols and other troubleshooting you already did, so one ping should be enough.
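Putting it all together, the whole thing is a one-liner at the tclsh prompt. This is a hedged sketch - the addresses inside the braces are hypothetical stand-ins for your own notepad list, not the article's real topology:

```tcl
foreach a {
} { puts [ exec "ping $a repeat 1" ] }
```

Paste it in, and each "!" in the output means that address answered.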

Right, we've been through the logic behind the script. Here it is being used.


That's it - all pinged - I guess I passed ;-)

The tclsh application is way more powerful than this, but as an exam tool to save you some time this is probably as deep as you need to go into it. For a more in-depth study of tclsh in IOS and how powerful it can be for operational use, I can recommend this Cisco Press book.


Cisco IOS access-lists are well understood, well documented and well, just plain necessary. We use access-lists to secure devices and services as well as to manage routing protocols and much more in routers and switches.

Access-lists come in a number of flavours, so let's mention some of them here:

Standard access-lists
Extended access-lists
VLAN access-lists
Context-Based access-lists
NBAR enhanced access-lists (not really an access-list but can be used within ACL's to augment the matching process)
Zone Based access-lists

In this techguide we're just concentrating on the standard ACL.

So how do we say it? Ay'see'el? Or ACKLE? It doesn't matter how you pronounce the name, but the majority of the discussions I attend use the phonetic Ay'See'El way (like you were reciting the letters individually: A, C, L). Hey, what do I know - I'm a Brit and we pronounce route like Route as in Route66 (see Chuck Berry). So let's move onto something more interesting.

In this first encounter we'll go through standard ACLs. Access-lists are lists of lines which are read by the parser one by one, from top to bottom. The lines are sorted using line numbers which by default increment in 10s (e.g. the first permit/deny access-list statement will be prefaced with a line number of 10, the next line by 20, the next by 30 and so on), although this can be overridden. By function, a standard ACL can only describe the source, and it has no idea about protocol.

This is the format as written out:

access-list {1-99 | 1300-1999} {permit | deny} source-address [wildcard mask]

Here is a screenshot using the Cisco IOS context sensitive help to show the format of the access-list. We see here that the keyword to begin defining the access-list is simply 'access-list' in configuration mode. The '?' after 'access-list' simply engages the help output to be displayed.

Screen shot 2011-05-15 at 20.46.42

So we see looking down the list that standard ACL's can be defined using values between 1-99 and 1300-1999. We'll choose 1; it doesn't matter so long as it fits within those ranges described earlier. Using the help again we see the next choice is the action to take. Here we see we can 'deny', 'permit' or simply remark (comment) on the access-list. Let's remark on the access-list as a first step to describe its function: "Test ACL for demo". It should perhaps be stated that any traffic which is NOT matched in the access-list receives a 'deny' by default. You will not see this default action in any of your configuration but it is there.

Screen shot 2011-05-15 at 20.48.53

Screen shot 2011-05-15 at 20.50.52

Now we'll add to this ACL by permitting access from remote hosts on network

First let's see the options available to us to do this. OK so we use the permit keyword, that's required since we want to permit access. The ? reveals a few options now: hostname, any and host. What are we saying here? The easiest one to take on first is 'any'. By stating any we're saying basically permit (allow) any source.

Screen shot 2011-05-15 at 20.49.02

Using the keyword 'host' is very powerful and allows us to permit or deny access from a single IP address. The IOS parser simply reads the 'host' keyword and replaces it with a /32 mask at runtime. Remember, an all 1's subnet mask ( denotes a host.

So what if we want to permit all hosts on network Well of course a /24 netmask is written as (first 24 bits are 1's). IOS however expects the netmask to be written as a wildcard mask. A wildcard mask is the inverse of a network mask: every '1' bit becomes a '0' and every '0' bit becomes a '1'. The netmask therefore becomes the wildcard mask So let's add this new line in:

Screen shot 2011-05-15 at 21.00.42

Again we see we can add to this statement by using the 'log' keyword. Using 'log' makes the router log matches against this line, so we can see when the access-list is being hit.

So we now have an access-list in the configuration, we can see this here:

Screen shot 2011-05-15 at 21.03.28

The access-list is useless however without applying it to something. The access-list is powerful because it can be applied to interfaces, protocols, services and probably more. We'll test out our access-list by applying it to our VTY service, which is used to access the router across the network using protocols like telnet and SSH.

Using this topology we have created a loopback interface Lo2 on R2 with IP address


Lets assign the access-list 1 to the running configuration for the VTY interfaces on R1.

Screen shot 2011-05-15 at 21.09.48

Note that the access-list command changes to the 'access-class' command when it is being applied to the VTY lines. Notice also that we are now adding a 'direction' of 'in' for the access-list. This means we are asking the router to look at packets arriving 'into' the router. The other option would be 'out' or 'outbound'. OK so we'll drop out of configuration mode back into privileged EXEC mode and run a show command to describe the status of the access-list. Notice it is given the default line of '10'. When access-lists are created each new line is incremented by a value of '10'. If we were to add a new permit or deny statement to our access-list it would start with a '20'. Remember once more that there is a default 'deny' at the end of any access-list which will be matched if no other lines within the access-list are matched.

Screen shot 2011-05-15 at 21.11.51

So lets telnet to the VTY interface running on IP address (refer to diagram) and see what happens.

First we'll try to connect from the directly connected interface on R2 (from

Screen shot 2011-05-15 at 21.19.15

Ah, we're denied access. If you recall we permitted access from nodes in network and remember too that anything not specifically matched in the ACL will hit the implicit 'deny' statement at the end...well we're denied because our source IP address is So lets override the default IP address used for the telnet on R2 and make it use the loopback Lo2 interface...

Screen shot 2011-05-15 at 21.22.36

OK, well that worked! So we should now see a hit in our access-list...

Screen shot 2011-05-15 at 21.23.17

We can see the matches at the end of the access-list there - 2 ;-)

Thanks for reading, see you next time with Extended Access-lists
We're going to continue our last post on EIGRP discovery with an EGP protocol discovery (eBGP).


To present this in a demonstration we've done as before and created a two node network and will set up an eBGP adjacency between the two nodes. The link between the nodes will be using the network. So we'll start by setting the addressing on the interfaces as per the diagram. First R1:


Now R2:


OK, so maybe we do a ping to confirm connectivity:


So we're going to need a network to advertise into BGP so lets have a loopback on R2 with IP address
Great, all done for interfaces and networks; let's crack on and configure BGP on router R2. We're going to configure the BGP session with AS#123. We'll set the BGP router-id as the loopback address (this is shown for completeness but is not an essential configuration step). We're going to add the loopback interface in as a network to advertise. Remember, the BGP 'network' statement doesn't work in the same way as most IGPs. The 'network' statement in BGP is saying "Here is a network I want to advertise to my neighbors".

So the 'neighbor' statement is saying: here is my BGP neighbor's IP address (in our case R1 with and its remote-as number is 100.

Right, let's move on to R1 now and configure the BGP session there. We'll want to use AS#100 (remember R2 pointed to remote-as 100) and then add in the neighbor statement. In this example we're not really interested in sending any networks from R1 to R2 so there are no network statements or redistributions etc.
OK! Well, we've put the configuration in, and almost immediately after we'd committed the neighbor statement to the running config the router went away and tried to establish the BGP session. The router has dumped a log to the console which includes a lot of HEX formatted 'guff' (that's a technical term).

To troubleshoot the issue let's dissect the debug/log. We see that we have a BGP notification saying 'peer in wrong AS'. So R1 has been told here by R2 that the AS number it is using for R2 doesn't match R2's configured AS#. The router tells you that the AS# field is 2 bytes long ;-) AS#'s are valid between 0 and 64511 (64512 to 65534 are reserved for private use).

Anyway back to the story, the remote side AS# is contained in the log dump as the first 2 bytes after the word '2 bytes' ;-)


The HEX following the message is 007B and the BGP AS# is DECIMAL so we'll need to convert it...where is that scientific calculator?
So we'll use the Mac calculator. Click the '16' button because HEX is base 16...why can't they just say HEX? So let's pop in the HEX we got in the log message: 007B. Now click the '10' button because DECIMAL is base 10...of course ;-) and we get...drum roll please
Right, so the remote AS# is 123...of course...why didn't I just click on R2 and do a show running-config ;-) Well, this is a scenario of course and it's all about the 'win'.
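If the calculator is missing, the same HEX-to-DECIMAL conversion is a couple of lines of Python (a sketch of the arithmetic only):

```python
# The notification dump carried the remote AS# as the two bytes 0x007B.
remote_as = int("007B", 16)
print(remote_as)  # → 123

# And the reverse, to see what AS 123 looks like on the wire:
print(format(remote_as, "04X"))  # → 007B
```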

So lets go onto R1 and change that neighbor line to reflect the correct AS# of 123 not 1 as we had originally set it.
Great, and the BGP session has come up. Tell you what lets just have a look in the routing table to see if that loopback0 interface from R2 is in there?

So that's it, BGP is up and routing updates are being received.
Job well done.

Sometimes you find yourself in a bad place, you've forgotten the AS number you configured on your router in LA, you locked out VTY access to the directly connected interface and you've lost your documentation. So you're in Massapequa, NJ and you've got your MPLS VLAN up to the LA office but you can't get onto the loopback interface. You need to bring the EIGRP neighborship up and get that good stuff flowing ;-)

This techguide will show you how to discover the remote side AS number for EIGRP. Let's talk a little about EIGRP so we get some background. Well, it's got its own IP protocol number (88) and uses RTP (Reliable Transport Protocol) for delivery. The EIGRP traffic is distributed using IP multicast on address for discovery, with updates as unicast and queries as multicast. The initial setup 'are you there' messages are sent as multicast.

So knowing this information we can see that by capturing the traffic between two EIGRP neighbors we should be able to discover the AS number since the AS number will be sent in the discovery as part of the setup process (subsequent updates and replies will also contain the AS#).

To prove it out we've got a basic topology:


R1 is directly connected to R2 using a fast ethernet connection.

We've configured the interfaces already using the network with for R1 (see below)


and for R2


Right - lets put the EIGRP configuration in:

First R1


Then R2


So, we've basically built the basic configuration, nothing fancy. We've added the interface Fa0/0 in for both routers so EIGRP will be enabled on those. We've also added in the Loopback0 interface of each (just because we did - it has no part to play for this post and can be ignored). You should however give some attention to the chosen EIGRP AS#. Notice that the router R2 has AS# 12345 whereas R1 is AS# 1. We've done this to illustrate the point that we don't know the remote AS. We need to use an AS number to make EIGRP work in the first can choose any AS number here so long as it isn't 12345 ;-) Think back to the original example where we discussed LA - we are not supposed to know the AS#, otherwise what is the point of debugging the problem?

Right, how do we catch the traffic? Well the router clearly sees all the traffic we need so it makes sense to use the router to collect and show the packets right? OK we all probably know and use 'debug ip packet' but hopefully not on production systems unless you know what you are doing. Here is a screenshot for the command.


Notice the access-list on the end? We could use this to tune down the traffic which the router will be examining. Indeed, using 'debug ip packet' on a production system would be suicidal without some sort of filtering. For our demo here however we're not going to need this filtering because we don't have any other traffic. You *should* definitely note however that there is a missing or 'hidden' keyword here called 'dump'. IOS has a few of these little easter eggs hiding in the code. Normally it's because they are either really dangerous in the wrong hands or else they are not very well tested and therefore shouldn't be used by mortals. In this case I feel the 'dump' keyword is missed off the help because if you thought debug ip packet created too much console information to be healthy then adding dumps of data to the console is never going to be the best plan.

So we turn on debugging and ask it to also crack open and dump the data inside the packet like a typical pcap...HEX and all that good stuff.


OK so here is a dump of data. We see the sources are our local Fa0/0 (s= and Loopback0 (s= interfaces. OK so where is the R2 traffic?


Here's one, s= great. So where is the AS number, man...I bet the big red circle gives it away for you. The first few bytes are header, so things like the destination MAC for the EIGRP multicast group (0100.5e00.000a) followed by the source MAC of R2 (c201.0cab.0000)...etc.

So we have the HEX value of 3039. Remember, EIGRP AS numbers run from 0 to 65535, which is a 16 bit (2 byte) number. So let's pop 3039 into a HEX to DEC calculator and we get 12345...that's IT!
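The same extraction can be sketched in Python; only the 3039 value comes from our dump, the rest is illustrative:

```python
# The two bytes lifted from the packet dump at the AS# offset.
as_bytes = bytes.fromhex("3039")
print(int.from_bytes(as_bytes, byteorder="big"))  # → 12345
```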

Job done - all we need to do now is change the AS number we used on R1 (AS#1) to the one we just 'discovered' (AS#12345). Now, because we can't simply rename the AS number we need to tear down the original process and rebuild it. If you had a lot of config below the 'router eigrp' line you are best to do a 'show run | b router eigrp' and copy the config; then, once the new process is created, all you need to do is paste it back in.


Job done - enjoy

Hi all,

I know you're all desperate to find out how we configure Q-in-Q between IOS and JunOS and here we have it. You know, when we did this the effort required to get the Juniper EX part done was worrying. The Juniper documentation was pretty good and the Cisco Q-in-Q part is very well documented of course. This techguide will hopefully help some of you out there desperate to join these two great platforms as one.

First I guess we should provide a little background to Q-in-Q services. The idea is pretty simple: Q-in-Q is a process whereby a normal 802.1q frame is wrapped in another 802.1q header, hence the name. You will most likely use this if you have a requirement to, say, connect two layer 3 networks together which are separated by a layer 2 network. The layer 3 networks at each end may be in the same subnet, and each of these geographically disparate networks can act as one.
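The frame-size arithmetic behind the jumbo MTU requirements later on can be sketched in Python (tag size per 802.1Q):

```python
ETH_PAYLOAD = 1500  # standard Ethernet payload / MTU in bytes
DOT1Q_TAG = 4       # bytes added by each 802.1Q tag

single_tagged = ETH_PAYLOAD + DOT1Q_TAG      # a normal trunked frame's payload
double_tagged = ETH_PAYLOAD + 2 * DOT1Q_TAG  # a Q-in-Q frame's payload
print(single_tagged, double_tagged)  # → 1504 1508
```

This is why the switches in the Q-in-Q path need their MTU raised: each frame carries one tag more than a standard trunk frame.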

Here is our challenge:


The laptop at the top of the diagram is connected to a Cisco 6509E and is in VLAN 186 with layer 3 network We have another laptop at the bottom of the diagram connected to a Juniper EX cluster again in VLAN 186 with network We need to get both of these laptops to operate within the same network despite being separated by a number of switches and hops. That is to say when I ping from one side to the other I want one hop to the destination.

Lets begin.

First, we're hopefully all up to speed on VLANs and what they mean. If you need to read up on VLANs then take a look at our other techguides for a reminder. Let's create VLAN 186 on the first 6500 connected to the laptop and then place the port connected to the laptop into that vlan:

6500_FRONT(config)# vlan 186
6500_FRONT(config-vlan)# name X_SITE_INTERNET
6500_FRONT(config-vlan)# exit
6500_FRONT(config)# int g0/0
6500_FRONT(config-if)# switchport mode access
6500_FRONT(config-if)# switchport access vlan 186
6500_FRONT(config-if)# no shutdown
6500_FRONT(config-if)# end

So now lets take a look at the port connected between the FRONT and BACK 6500's. We need to configure this as an 802.1q port. Remember the Q-in-Q takes our 802.1q frames first.


So we take VLAN 186 and allow only it on this port. Untagged traffic is also placed into VLAN 186, which is necessary here. We've forced the port not to use DTP (switchport nonegotiate) and we're manually configuring it as a trunk port. Forcing the port to be a trunk is not required but is good practice.

Lets have a look at the 6500_BACK configuration for the port connected to the 6500_FRONT G4/11.


OK, looks like a lot of configuration here so let's go through the main lines. Now, on a trunk port, if you have a configuration for an access vlan (switchport access vlan) but configure 'switchport mode trunk', then the port is a trunk port and the access vlan configuration is ignored. In our case we're configuring a dot1q tunnel port and therefore the access vlan IS IMPORTANT. So what are we saying here? Basically the 802.1q tagged traffic ingressing the port is left tagged (as if it were an access port with no tagging). The next line, switchport mode dot1q-tunnel, now performs the double tagging - we're going to be ramming tagged VLAN 186 traffic into VLAN 3043. The MTU is set to 9216 to accommodate the double 802.1Q tag and should not be missed.

We're disabling CDP so that we can tunnel CDP. What?! Well, to help explain this, here is an example. When you have two directly connected Cisco devices and CDP is running and enabled on these ports then you can 'see' the other device by issuing a 'show cdp neighbors'. This process is true also for Q-in-Q neighbors, but we must turn off CDP between the two endpoints of the tunnel or else it would stomp all over the tunnel start points (in our example CDP would be tunnelled from 6500_FRONT, but since the other start point is a Juniper device which doesn't understand CDP this step is unnecessary).

The l2protocol-tunnel lines allow us to send spanning-tree BPDU's and CDP down the Q-in-Q tunnel...we don't need them in our example but maybe you are creating a spanning-tree loop in your deployment, so I think these lines are useful...just in case.

So now we are going to configure a normal run of the mill ordinary 802.1Q trunk link between the 6500_BACK and 3560 switches.

6500_BACK# configure terminal
6500_BACK(config)# interface g4/12
6500_BACK(config-if)# switchport trunk encapsulation dot1q
6500_BACK(config-if)# switchport mode trunk
6500_BACK(config-if)# switchport trunk allowed vlan 1559,2127,3043,3738
6500_BACK(config-if)# mtu 9216

So here we're just creating a normal trunk as we said. We're going to be trunking the Q-in-Q vlan 3043 we already created. Don't forget the jumbo frame MTU requirement. The other VLANs allowed on the trunk are for other things.

Right, at the other end of the link between the 6500 and 3560 we're going to again configure a normal bog standard trunk. 


On the 3560 there is one requirement to enable jumbo frames before this will work. In global configuration mode we issue:

3560(config)# system mtu jumbo

Then we need a reboot. Without this line the switch will not be able to support the minimum 1504 bytes required to transport the Q-in-Q frames. So when we're back we can move on - all should be fine so far...it's just trunking.

Here is the configuration for the port connected to the Juniper EX stack.


So we're still trunking 3043...and some others but ignore those.

Now onto the Juniper EX stack dot1q-tunnel configuration.

First enable the protocol frame support.


Now let's create the VLAN - this took some time to get right from the Juniper documentation.


So we have created two vlans here. The first called X_SITE_INTERNET0 is carrying the Q-in-Q trunk traffic. Vlan XSITE_BACKBONE is carrying the untagged vlan 186 traffic.

Lets configure the port connected to the Cisco 3560. Again we need a dot1q trunk port and we'll restrict it to carrying only VLAN 3043.


So the last port configuration part is for the laptop port...


A quick 'show vlans XSITE_INTERNET_0 extensive'?


We see here that the 'Customer VLANs' are 186-186...or just 186 ;-) the ports ge-1/1/0.0 and ge-2/1/1.0 are the trunk ports connected to upstream Cisco switches (we've only looked at one of them...why do you think I had the STP stuff listed earlier ;-) ). The access ports are connected to host devices and are each in VLAN 186.

So there we go, IOS to JunOS using dot1q tunnelling. It works, it's great tech, have fun.

We all know the pain of static IP address assignment on large networks. The majority of us would probably configure DHCP services onto a server or host like Windows or a *nix platform. This guide is going to show you how to configure DHCP in IOS. You have to be clear on this however: Cisco devices, whether switches or routers, are NOT really meant to be DHCP servers. The function of your network device is pushing traffic around ;-) If you really really really must do this then take it in the context it is meant. DHCP on a Cisco device is only ever going to be a best effort service. It'll work and it'll work well but don't expect too much in terms of lease visibility.

So let's begin with a little overview of the DHCP protocol. Way, way back in the early to mid 90's we had BOOTP (RFC 951). This protocol was often used for disk-less hosts using a BOOTPROM. The BOOTPROM would broadcast out its MAC address onto the LAN and the BOOTP server would send back an IP address plus the BOOT server and BOOT image required by the disk-less host to boot its operating system. BOOTP worked well for single networks but didn't include things like an IP gateway, so the host didn't know how to get out.

Along comes DHCP. With this reworking of BOOTP we now have included options like default gateway, DNS servers, WINS servers and all manner of extra IP host configuration parameters to aid the modern client.

The DHCP 'lease' process whereby the client is provided with a unique IP address follows a 4 way process:

i) Client comes online and will broadcast a DISCOVER message onto the network
ii) DHCP server hears the DISCOVER and sends OFFER message in response
iii) The client sends a REQUEST message back to the server saying 'sounds good to me man, I'll take it'
iv) The SERVER sends an ACKNOWLEDGE message to the client saying 'You're welcome'

So that's the process; how do we set up the Cisco router or layer 3 switch to do DHCP leasing? We'll configure our router to lease IP addresses from the subnet, and set the netmask, default gateway, dns-servers and domain name. The only REQUIRED part is the 'network' statement; all of the others are optional.

R1# configure term
R1(config)# ip dhcp pool TEST
R1(dhcp-config)# network /16
R1(dhcp-config)# default-router
R1(dhcp-config)# dns-server
R1(dhcp-config)# domain-name

So what if we've got some addresses in the network which have been set statically? For example, above we have two DNS servers and What about the default gateway which was We don't want the DHCP server leasing those addresses. You should note that most modern DHCP servers, including the Windows DHCP server, perform an initial ping on the LEASE address before the OFFER is sent. This way any already-live addresses will be noticed and made invalid for lease, avoiding duplicate addresses.

Anyway - lets ask the router to NOT lease addresses between and so we can use those addresses statically and be assured it won't OFFER them.

R1(config)# ip dhcp excluded-address
R1(config)# end

So that's it! Pop a client onto the LAN to check it out.

Don't forget to enable the dhcp service, and you can check leasing activity using:

R1(config)# service dhcp

R1# show ip dhcp server statistics
BOOTREQUEST          0
DHCPREQUEST          203
DHCPDECLINE          1
DHCPRELEASE          27
DHCPINFORM           19
We’re going to be talking about 802.1D, this is the classic spanning-tree protocol and NOT as my good colleague suggested the model number for the Enterprise in Star Trek 'Next Generation' which we all know was the NCC 1701D.


We all like failover, fallback and redundancy in our networks. Resilience to failure of any one (or more) components in the network is always a good thing. In the switching world we use spanning tree to offer us the ability to create a dynamic loop free layer 2 bridge topology. How does it do this? Well by using specific frames on the network called BPDU's (Bridge Protocol Data Units) the spanning-tree protocol dynamically 'blocks' one of the links in the redundant loop.

Here is the basic topology


Here we have a physical loop for traffic moving from Switch A to Switch B to Switch C, or Switch A to Switch C to Switch get the idea. The point is that traffic flow like that can easily and very quickly get out of hand and cause what is commonly misquoted as a spanning-tree loop but which is in fact a bridging or switching loop.

In a bridging loop example, Switch C receives a frame from a node connected to one of its ports (Client PC). The destination for the frame is an unknown host (Web Server) connected to Switch A. Now Switch C (who doesn't know Web Server is connected to Switch A) floods the frame out of all ports (remember, this is the default behaviour of a switch for an unknown destination mac address). The frame is picked up by the host connected to Switch A, who politely replies back toward Client PC ("Hey, I'm Web Server and here is my MAC address so populate your mac table!"). Now, crucially, Switch B also forwards the initial lookup sent from Client PC out of all of its ports (remember, it also doesn't have a layer 2 mapping for the frame). The frame therefore reaches Switch A from Switch B as well, and recall that Switch A is going to flood it out of all ports except the one it was received on. This means the copy from Switch B is sent back out toward Switch C! So you can see how a switch could get confused about which way to send traffic, and this is just for a unicast lookup. Now if we think about broadcast traffic, where the frame is always sent to all ports, this loop design is going to cause some headaches. Indeed I've seen this sort of thing bring down pretty big switches very very easily and they are difficult to find except for asking who just did something ;-)

So spanning-tree is pretty essential for workable layer 2 resilience. In the diagram you can see an etherchannel there though...and that's a loop isn't it? Two links between switches...feels a bit 'loopy' to me. Well, you know, etherchannel in IOS and LAG in JunOS are a great way of adding resilience between switches as well as capacity, and the good news is that to spanning tree they look like one link.

Thanks for reading and the colleague who suggested 802.1D was the model number for the Enterprise wishes you all a good night.



A basic starter showing a few of the simpler ways to manipulate your traffic

The topology has been set as shown here. The customer's preferred path between R1 and R5 is via R2 and R3, NOT R4. The customer has no control over R5 or the links connected to it and you are asked to influence the traffic going from R1's loopback interface ( to R5's loopback interface The customer is clear that you are to use no static routes to meet the requirement.

Screen shot 2011-05-08 at 23.45.30

R1 f0/0 <-> f0/0 R2 (
R2 f1/0 <-> f1/0 R3 (
R1 f1/0 <-> f1/0 R4 (
R4 f0/0 <-> f0/0 R5 (
R5 f1/0 <-> f0/0 R3 (

Starting with a clean sheet, this is the routing table for R1. You can see that OSPF has been configured throughout the topology and that each router advertises its Loopback0 interface.

Screen shot 2011-05-08 at 23.51.09

The routing table here shows that the route to R5's loopback is via the next hop of or R4's FastEthernet1/0 interface. So what are the options? (This list is not exhaustive; it shows some of the more obvious ones.)

  • Influence the OSPF calculation by manipulating bandwidth
  • Influence the OSPF calculation by adding a ‘cost’
  • Change the next-hop using a route-map
  • Use default route and filter out the network route
  • Introduce a lower metric routing protocol to override the OSPF metric
  • Create a tunnel to bypass the normal routing table

Influence the OSPF calculation by manipulating bandwidth

Now, normally OSPF uses the SPF (Dijkstra) algorithm to calculate the best path through a network. To include some sort of capacity indicator into this, OSPF uses a 'reference bandwidth'. The default reference bandwidth is 100Mbps, so a Fast Ethernet interface running at the default speed of 100Mbps produces an OSPF cost of 1 (100/100=1). Our screenshot above clearly has a metric of 3 for via This is because the traffic has hopped from R1 to R4, then R4 to R5, then R5 to R5's loopback interface (yes, that's a hop too). So if we want the traffic to go via R2 instead then we need to make R4's interface bandwidth lower. Let's choose a bandwidth of 10000, or 10Mbps. By the way, you can change the reference bandwidth used as the numerator in this equation! It is changed under the OSPF process using the 'auto-cost reference-bandwidth' command (remember the default is 100).
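The cost arithmetic can be sketched in a few lines of Python (a standalone illustration: bandwidths in kbps as IOS stores them, and the loopback hop simplified to a cost of 1):

```python
def ospf_cost(bw_kbps, ref_bw_mbps=100):
    """IOS-style OSPF cost: reference bandwidth / interface bandwidth, minimum 1."""
    return max((ref_bw_mbps * 1000) // bw_kbps, 1)

# Path costs are additive. Via R4: the R1-R4 link, the R4-R5 link we
# slowed to 10 Mbps with 'bandwidth 10000', and the loopback hop.
via_r4 = ospf_cost(100_000) + ospf_cost(10_000) + ospf_cost(100_000)
# Via R2: R1-R2, R2-R3, R3-R5 plus the loopback hop, all FastEthernet.
via_r2 = 4 * ospf_cost(100_000)
print(via_r4, via_r2)  # → 12 4
```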

First take a look at the default bandwidth figure - denoted here as BW

Screen shot 2011-05-09 at 00.01.52

Now we’ll change the interface closest to R5, remember the cost is additive for the path.

Screen shot 2011-05-09 at 00.03.30

Now lets take a look back at R1’s routing table for the R5 loopback interface.

Screen shot 2011-05-09 at 00.03.41

Now the path is via R2's Fa0/0 interface. Why? Because we made OSPF add a cost of 10 for the link between R4 and R5, so now the route via R2 and R3 has a lower metric of just 4.

Influence the OSPF calculation by adding a ‘cost’

We’ve reset the bandwidth on the R4 interface and R1 now has a preference back to R4 with a cost of 3. One of the quicker and neater ways to influence OSPF is to use the interface level ‘ip ospf cost’ command. This command overrides the reference bandwidth and interface bandwidth calculation. In this case, whichever is the lowest cost wins.

Lets change the cost via R4 on R1. First lets just check we’ve got the right metric via R4...

Screen shot 2011-05-09 at 00.08.44

Now lets apply the interface cost command to make R4 a really bad choice...

Screen shot 2011-05-09 at 00.10.14

Confirm the routing table once more to be sure we’re preferring R2 again...

Screen shot 2011-05-09 at 00.10.49

Great...all going well so far.

Change the next-hop using a route-map

Route-maps are really, really powerful tools and you are going to use more and more of them, especially if you are keen on the CCIE lab exam. For our little lab we're going to use a route-map to match some traffic and force it to bypass the normal routing table. Now, normally when packets are routed they look up the next hop in the table etc etc. With a route-map we can force the next-hop to be different from the norm. So long as the next-hop is in the routing table we can send that traffic anywhere.

Lets do it. First of all we blow away the config from the previous lab.

Let's create the access-list we are going to use to catch the traffic we need for the next hop. We just want traffic from our loopback0 interface going to R5's loopback; everything else should take the normal route.

Screen shot 2011-05-09 at 00.15.24

Next we create the route-map statement which uses our access-list to match the traffic then the ‘next-hop’ statement to force the next hop for that traffic.

Screen shot 2011-05-09 at 00.17.30

One final thing. Because we are trying to influence traffic sourced on our router we need one other command. For traffic passing through our router this is not necessary and instead we use the interface level 'ip policy route-map' command.

Screen shot 2011-05-09 at 00.19.13

So lets have a look at the routing table, it should be normal with the route to R5’s loopback still going via R4...

Screen shot 2011-05-09 at 00.20.00

Yep, looks what if we do a traceroute to R5?

Screen shot 2011-05-09 at 00.21.19

Yeah, that's it what YOU expected? Hopefully you've been following along. Recall that we set up the route-map ONLY to match traffic from our loopback going to R5's loopback? Well, that traceroute just picked up the closest route to R5's loopback. We'll run the command again but this time source it from R1's loopback...

Screen shot 2011-05-09 at 00.20.39

Awesome, you see it went via R2! Lets check the access-list to make sure we are hitting that...

Screen shot 2011-05-09 at 00.23.14

9 matches? How come? Well, traceroute sends 3 probes per can see that in the output where got 4 millisec, 12 millisec and 4 millisec responses. Three probes for each of the three hops gives us 9. worked for us ;-) Let's move on.

Use default route and filter out the network route

I’m really proud of myself for thinking up this crazy ‘fix’. So let's have a quick time out to think this through. In the routing table, the route with the best match will be active. That is to say, if you have a more specific route to somewhere then that route will take precedence. So what am I talking about here? Well, we want to go via R2 right? But the specific route for is being advertised from R4. So here is what I propose. Let's filter out the network from coming into the routing table of R1 (remember we can't filter the route from the OSPF database but we can stop it being placed into the active routing table). At the same time let's inject a default route from R2 so that R1 gets a default route for all networks it doesn't know about pointing at R2. All being well, because R1 won't have a route to anymore it should use the default? Clear? Let's go!

First off, let's create the default route on R2. Now remember, we don't already have a default route in the table, so we use the 'always' keyword here to advertise the default route even though we don't already have one.

Screen shot 2011-05-09 at 00.31.51

Lets just check we have that route in R1’s table.

Screen shot 2011-05-09 at 00.33.07

Yeah, you can see we have it right at the bottom. We also still have the route via R4, so let's remove that now with a ‘distribute-list’. The distribute list works on matching values from an access-list. We create a standard access-list 2 and tell it to DENY the network but permit everything else. The second line is important because without it the distribute list would block all routes coming in (remember the default action at the end of an access-list is an implicit DENY).
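The configuration will have looked roughly like this - a sketch only, since the filtered network was redacted above; 5.5.5.0/24 stands in for R5's loopback network:

```
R1(config)# access-list 2 deny 5.5.5.0 0.0.0.255
R1(config)# access-list 2 permit any
R1(config)# router ospf 1
R1(config-router)# distribute-list 2 in
```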
Screen shot 2011-05-09 at 00.34.33

So we're denying that one network and permitting everything else...did it work?

Screen shot 2011-05-09 at 00.36.25

Well, we've got rid of the route and we only have the default. Let's see what path we take to get to R5's loopback now.

Screen shot 2011-05-09 at 00.37.14

Perfect, lets move on.

  • Introduce a routing protocol with a lower administrative distance to override the OSPF route

OK, so OSPF has a distance of 110 by default on the router. That means it takes precedence over any routing protocol with a higher distance. So who is lower? Well, in the IGP world the most obvious choice here is EIGRP with an AD of 90. So let's crack on, put EIGRP between R2 and R1 and try to influence the traffic.

First lets enable EIGRP on R1

Screen shot 2011-05-09 at 00.38.50

Now R2

Screen shot 2011-05-09 at 00.39.35

Great, we see the adjacency come up. There won't be any routes in the table yet, however, because we haven't added any network statements except their shared subnet. So let's now redistribute the OSPF route into EIGRP. Remember, the plan here is to advertise a route using EIGRP between R2 and R1 which will override the OSPF route being advertised from R4 to R1 (and R2 to R1).

So let's create an access-list to match the route we want to redistribute from OSPF into EIGRP. Then we'll create a route-map, then we'll go into the EIGRP process and redistribute. The numbers after the redistribute command are the seed metric components (bandwidth, delay, reliability, load and MTU - the inputs to the EIGRP metric formula). We need to seed them or else the route will NOT go into EIGRP.
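Something along these lines - a sketch, with the EIGRP AS number, ACL number and matched network all assumed:

```
R2(config)# access-list 10 permit 5.5.5.0 0.0.0.255
R2(config)# route-map OSPF2EIGRP permit 10
R2(config-route-map)# match ip address 10
R2(config-route-map)# exit
R2(config)# router eigrp 100
R2(config-router)# redistribute ospf 1 metric 10000 100 255 1 1500 route-map OSPF2EIGRP
```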

Screen shot 2011-05-09 at 00.44.12

So let's make sure we're redistributing now with a ‘show ip route’ on R2.

Screen shot 2011-05-09 at 00.44.42

Perfect! Now R1 should have an EIGRP route in its table with a distance of 90.

Screen shot 2011-05-09 at 00.45.47

Hmm, no routes from EIGRP?! What's gone wrong here?! Well, actually, nothing went wrong. The thing about EIGRP is it has two distances: one for internal routes (90) and one for external routes (170). Anything redistributed into EIGRP gets the external distance and guess what, 170 is 60 more than OSPF's 110, so it's not in the table. What can we do? Distance is only relevant to the local router, so let's be kind to ourselves and just change the distance for EIGRP external routes on R1. The first number is the distance for internal routes and the second for external routes, so what we are saying here is that every EIGRP route has a distance of 90 on this router.
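The command is simply this (the AS number is an assumption); the first value is the internal distance, the second the external:

```
R1(config)# router eigrp 100
R1(config-router)# distance eigrp 90 90
```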

Screen shot 2011-05-09 at 00.48.19

So lets check those EIGRP routes once more.

Screen shot 2011-05-09 at 00.49.32

Perfect...and a traceroute just to be sure...

Screen shot 2011-05-09 at 00.50.14

Excellent, lets move on.

  • Create a tunnel to bypass the normal routing table

We love tunnels; tunnels are good, tunnels save time and are cool (unless you get recursive routing, but let's not worry too much about that yet). Let's recap on what we are trying to do here: get traffic from R1's loopback to R5's loopback to bypass R4. So how about we create a tunnel from R1 to R5? Well, that would be good because it's a tunnel, but unless we do something about the next-hop like we did earlier, the tunnel itself will go across R4. We could do some great service provider stuff using VRFs and MPLS but again, maybe it's overkill. Besides, we don't have control over R5, so we couldn't create a tunnel to it even if we wanted to.

So the best we can do here is create a tunnel from R1 to R3; then we can set the cost of the link to the same as the path via R4 and maybe get some awesome equal cost load balancing going on...hey, 50% is better than none. Let's give it a shot.

First, on R1, let's create the tunnel interface. You could choose the loopback interfaces as the source and destination - I've chosen the physical interface for brevity and OSPF cost reasons.
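The tunnel configuration will have been something like this sketch - the IP addressing and source interface are stand-ins:

```
R1(config)# interface tunnel0
R1(config-if)# ip address 10.13.13.1 255.255.255.252
R1(config-if)# tunnel source FastEthernet1/0
R1(config-if)# tunnel destination 10.1.3.3
```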

Screen shot 2011-05-09 at 00.59.43

Now R3

Screen shot 2011-05-09 at 01.00.07

So a ping from R3 to the ip address of R1’s tunnel0 interface?

Screen shot 2011-05-09 at 01.01.16

Great, now we'll put the tunnel network into OSPF - ouch, recursive! We get this because the router is learning the route to the tunnel destination through the tunnel interface itself, but it needs that route to build the tunnel in the first place - it can't do both. It's the whole chicken and egg thing! So we set up an access list like before to stop the tunnel destination network being learnt via the tunnel interface.

Screen shot 2011-05-09 at 01.03.54

OK, all stable now and the tunnel is up, so let's set the OSPF cost to match the route via R4.
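A sketch of the cost change; the value needed will depend on the existing path cost via R4:

```
R1(config)# interface tunnel0
R1(config-if)# ip ospf cost 3
```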

Screen shot 2011-05-09 at 01.12.11

Now lets have a look at the routing table.

Screen shot 2011-05-09 at 01.11.22

Great, two equal cost routes as expected, one via R4 and one via Tu0...what about a ping then?

Screen shot 2011-05-09 at 01.11.58

Lovely, and on that note it’s goodbye from me.
First you need to find an IPv6 broker. There are a few out there and we've chosen Hurricane Electric as our transit host of choice (they also offer a nifty certification at the end of a small test to make sure you are learning as you go).

So, go ahead and register for an account with your IPv6 broker of choice. They will probably offer you one /64 network, which forms the 'link' between them and you, and a /48 network, which is for your LAN. Now, in the IPv4 world we hate waste, and normally you would subnet your point-to-point links with say a /30 subnet to give you a network address, broadcast address plus 2 host addresses. The IPv6 world is at this point not encouraging the use of /126 or /127 subnet masks for point-to-point links but instead is suggesting a /64. Of course this means that the point-to-point link now has around 1.8 x 10^19 addresses available...ah, no worries.

So now you have your allocations, let's crack on configuring your router.

First things first: you are running IPv4 and your ISP probably doesn't support your desire to run IPv6 over their network. So we pick up that old Cisco trick of encapsulating the unsupported IPv6 traffic inside IPv4 packets. A tunnel is perfect for this, and luckily we can perform this 6-to-4 transformation by carrying IPv6 inside either GRE or plain IP packets.

In the example below we configure our tunnel interface, Tu0. We enable IPv6 routing globally, then set the encapsulation type to IPv6IP (IPv6 inside an IP packet). Optionally we set a description on the interface. We set the local and remote IPv4 addresses for the tunnel, then we enable IPv6 on the interface and finally we give it its IPv6 address.

router> enable
router# configure terminal
router(config)# ipv6 unicast-routing
router(config)# interface tunnel0
router(config-if)# description My First IPv6 Tunnel
router(config-if)# tunnel source
router(config-if)# tunnel destination
router(config-if)# tunnel mode ipv6ip
router(config-if)# ipv6 enable
router(config-if)# ipv6 address

Here is a working example.


Thats it. So now lets have a look at the tunnel and make sure it's up and running.


Well, that looks good, but we still need to set up the routing table just like IPv4. We'll create a single static default route pointing all IPv6 traffic into the new tunnel interface. You could also use the IPv6 address of the broker's side of the tunnel to avoid the recursion, but the interface works fine.

router> enable
router# configure terminal
router(config)# ipv6 route ::0/0 tunnel0

Now we can ping an IPv6 address to make sure all is well (please understand that our router has been configured to perform DNS lookups against the Google DNS servers using the 'ip domain-lookup' and 'ip name-server' commands in global configuration mode).
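For reference, the DNS setup mentioned looks like this (8.8.8.8 and 8.8.4.4 are Google's public resolvers):

```
router(config)# ip domain-lookup
router(config)# ip name-server 8.8.8.8 8.8.4.4
```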


So that worked a treat and we're able to get out to the internet. There are two steps left. First we need to set up your local LAN addressing using the new /48 your broker allocated to you, so let's do that now.

Now there is a question around which address you use on the LAN from your /48 allocation. Well, let's consider the IPv6 address for a moment. The first 64 bits are considered the network. As an example, our IPv6 /48 allocation looks like this:


The first 48 bits are 2001:0470:93FE, and so we can subnet this into 65,536 /64 subnetworks, from 2001:470:93FE:0::/64 through to 2001:470:93FE:FFFF::/64. Do you see we are just changing the 16 bits between the 93FE and the double colon ::? If you need to understand IPv6 addressing in depth then that's for another time, so for now let's move on. Choose a /64 network to work with...let's take 2001:470:93FE:1::/64.

router> enable
router# configure terminal
router(config)# interface G0/0
router(config-if)# description My Inside Interface
router(config-if)# ipv6 enable
router(config-if)# ipv6 address 2001:470:93FE:1::/64 eui-64

What's the eui-64 all about? Well, basically it creates a unique IPv6 address by taking the 48-bit MAC address of the interface, splitting it in half, jamming 16 bits of FFFE into the middle, flipping the 7th bit (the universal/local bit) of the first byte, and appending the result to your existing /64 prefix. For example, MAC 00:12:34:56:78:9A becomes the interface ID 0212:34FF:FE56:789A. Scared? Don't be, it's really nice. You can of course choose to fix your own IP address by doing this instead:

router(config-if)# ipv6 address 2001:470:93FE:1::1/64

We've chosen ::1 as the router address, and why not. So now that's done, pick a host behind this interface on your router, enable IPv6 on its stack and away you go. There are some other things in IPv6 like neighbor discovery (ICMPv6), which advertises neighbor capabilities like DHCP servers and routing etc., but that's for another time. This should be all you need to get routing IPv6.

So what about security? I'm guessing you're already using access lists on your router, right? Maybe it's a standard or extended ACL, maybe it's reflexive or CBAC or even maybe ZBF? The point is that you have some sort of access control. IPv6 is no different, so let's take a look at a quick and dirty access list for IPv6 using a reflexive ACL. We'll create two ACLs called TU0-INBOUND (tunnel inbound) and TU0-OUTBOUND (tunnel outbound). We'll basically just allow ping; any outbound TCP and UDP traffic will be permitted and reflected. Anything inbound which is neither a ping nor return traffic for a session initiated from inside will be denied and logged.

ipv6 access-list TU0-INBOUND
permit icmp any any echo-request
permit icmp any any echo-reply
evaluate REFLECTOUT
deny ipv6 any any log-input

ipv6 access-list TU0-OUTBOUND
permit icmp any any echo-reply
permit icmp any any echo-request
permit tcp any any reflect REFLECTOUT
permit udp any any reflect REFLECTOUT
deny ipv6 any any log-input

Then we should apply these to the tunnel0 interface.

router(config)# int tu0
router(config-if)# ipv6 traffic-filter TU0-INBOUND in
router(config-if)# ipv6 traffic-filter TU0-OUTBOUND out

So lets fire up a web browser and see if we can get to an IPv6 enabled website...this is


Now we'll escape back to priv exec mode and do a quick 'show access-lists' to see if we are matching rules.


Hey we have traffic matches.

Thats it. Enjoy

What's the big deal about a default route? The default route is the path of last resort your IP traffic takes to get out of your local network when there is no more specific route in the routing table.

Well, you know, the internet routing table is hitting around 370,000 entries or so today. Having all of those routes sat on your PC is not going to be a good thing for your RAM or CPU when you have to store and look up those routes. So what do we do? We have a default route pointing at a gateway device on our network. Now, that gateway might be a layer 3 switch where the gateway address is a switched virtual interface (SVI). Do you think your switch was built to hold 370k+ routes? No sir, you are correct. Most likely, therefore, your first hop gateway device will also have a default route configured pointing to an upstream device like your firewall. So, that firewall, is it likely to be fantastic enough to hold all those internet routes? You guessed right, no way. So your firewall has a default route to another upstream gateway like your CPE (Customer Premises Equipment) router. Now, finally, this is the most likely place you'll not find a default route. If you are taking a full BGP feed from your ISP then you don't need a default route because your router learns all of the networks out there. So do you need a full routing table here? If you are connected to one ISP with one link...nope, you do NOT need the full routing table, and a default route pointing at the next hop (the ISP neighbor router) is enough.

Now we've seen the default route and its importance, it's also worth understanding why it can be a bad thing for network security. If your devices sit in a network which contains a default route, then that's something an attacker can use to 'get around'. Without a route to a destination, packets are dropped - just think about that - network security without a firewall - awesome.

So if we want a default route how can we have one?

We're going to start our ramble into default routes with the humble and powerful, but ultimately high maintenance, static route.

  • Static route - easiest to configure but least scalable. In this example we'll walk through using a default route to get traffic from the source interface Loopback0 on R1 to the destination interface Loopback0 on R4.

  • Here is the topology:

  • Screen shot 2011-05-03 at 22.48.25

  • Some topology information:
  • R1 Fa1/0 <-> R3 Fa1/0
  • R1 Fa0/0 <-> R2 Fa0/0
  • R3 Fa0/0 <-> R4 Fa0/0
  • R2 Fa1/0 <-> R4 Fa1/0
  • R1 Loopback0
  • R2 Loopback0
  • R3 Loopback0
  • R4 Loopback0
Screen shot 2011-05-03 at 23.02.10

Screen shot 2011-05-03 at 23.02.22

Screen shot 2011-05-03 at 23.02.34

Screen shot 2011-05-03 at 23.02.45

I've configured R2, R3 and R4 to ‘know’ how to reach the loopback interface of R1. This is key to the topology, and you should try to think of R2 and R3 as intermediate systems rather like those of the internet. On the internet there are many hops in the network between you and your destination...just look at this traceroute for an example. I've removed the IP information from the first 4 hops to anonymise the result, but you can see that my packets cross 9 nodes before they get to the server. In our example we're going to hop twice to get to R4.

Screen shot 2011-05-03 at 23.11.21

So let's start by showing the routing table on R1.

Screen shot 2011-05-03 at 23.03.55

So we see that there are 3 networks listed: one is the Loopback0 interface and the other two are the FastEthernet ports. Each is listed with code ‘C’, which means connected (Juniper calls this ‘direct’). If we try to ping the remote loopback interface on R4, we get a fail because R1 doesn't know how to get to R4's loopback network.

Screen shot 2011-05-03 at 23.18.29

Simple enough so far and hopefully no worries.

So we want to get to R4's loopback, and in our wisdom we've put in two links to the 'internet'. We're going to use the link between R1 and R2 as the ‘way out’ and we're going to trust that R2 knows how to get there. Let's put in our default route now and point it at R2.

Screen shot 2011-05-03 at 23.20.49

The first command we put in adds the default route. An IP address of all zeros with a netmask of all zeros basically means ‘any IP address’. It can sometimes be shown as 0.0.0.0/0. The last IP address is the address of the next hop from us. R1 has a connected link between its Fa0/0 interface and R2, so we point our default route at R2's IP address on that link.
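As a sketch (the addresses were redacted above, so 10.1.2.2 stands in for R2's Fa0/0 address):

```
R1(config)# ip route 0.0.0.0 0.0.0.0 10.1.2.2
```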

The result of ‘show ip route’ now has a new route at the bottom prefaced with the code ‘S*’, which means a static (candidate default) route. So now we have a default route pointing at the next hop, R2. R2 knows how to get to the Lo0 interface of R4...let's try a ping now.

Screen shot 2011-05-03 at 23.28.35
Great stuff, we did it - we can now ping - let's see the path we took.
Screen shot 2011-05-03 at 23.30.13

So, as we know, our packets went via R2. But hey, you've bought two uplinks to the internet, one via R2 and one via R3...can we have two default routes? Well, if you have one ISP and two uplinks to that ISP then the answer is yes. If you have two uplinks but are using a different ISP on each link then the answer is a vague maybe, and we're not getting into the reasons why here.

So let's add another default route into R1, via R3.
Screen shot 2011-05-03 at 23.32.23

This time the ‘show ip route’ output has two lines under the default route. This is IOS telling you there are two equal cost routes to the destination, which means your packets can take either path to get there. Let's do a traceroute...

Screen shot 2011-05-03 at 23.34.17

Check out how the ICMP responses show the first hop alternating between R2 and R3. This is your packets being ‘load balanced’ in a round robin way. The second hop is either the network between R3 and R4 or the network between R2 and R4. Perfect, just what we wanted...right? So what happens now if the link between R1 and R3 fails? We'll administratively shutdown the interface for the demo.

Screen shot 2011-05-03 at 23.37.12
Screen shot 2011-05-03 at 23.37.34

Now let's ping the R4 Loopback0 interface. What has happened is that every other packet fails to reach the address. That's because we were effectively load balancing the traffic and now one of those links is down...hence the loss.

Screen shot 2011-05-03 at 23.42.56

So what can we do? Well, in most cases you wouldn't use static routes; you'd be using dynamic routing protocols, some sort of first hop redundancy protocol, and you'd certainly be tracking upstream connectivity to fail over and load balance properly, but we'll leave that for another time. For our brief demo we'll concentrate on weighted static routes, sometimes called ‘floating static routes’.

Floating static routes allow you to put a ‘primary route’ into the table but, in situations where that route is no longer available, have a second route with a higher administrative distance brought into service. Here we're testing everything is good...and we've brought the link between R1 and R3 back up for this.

Screen shot 2011-05-03 at 23.47.38

The second configuration command takes out, with a ‘no’, the second load balanced default route to R4 via R3. So now let's see the routing table...

Screen shot 2011-05-03 at 23.48.57

OK, you can see we've got one route now, via R2. Let's add in our floating static route. Now, by default a static route gets an administrative distance of 1. You can see this in the above ‘show ip route’ as the first number in the [1/0] after the route (the format is [administrative distance/metric]). We'll add a ‘floating’ route with a distance of 200, but it can be anything between 2 and 255.
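The command is the normal static default route with the administrative distance tacked on the end - the next-hop address here is a stand-in for R3's:

```
R1(config)# ip route 0.0.0.0 0.0.0.0 10.1.3.3 200
```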

Screen shot 2011-05-03 at 23.50.42

So the route is in but, crucially, you WON'T see it in the routing table. It will still look like it did before we added the floating route...see...

Screen shot 2011-05-03 at 23.51.12

The floating route is only used when the lower distance route is lost from the table. To demonstrate, we'll shutdown the interface on R1 connected to the link between R1 and R2.

Screen shot 2011-05-03 at 23.52.47

Check out the ‘show ip route’ now. You see that the default route now points via R3, which before was not shown. R1 has realised that the next hop of the previous route is no longer valid, so it drops that route from the table. The new default route has the administrative distance of 200 which we configured earlier. Job done.

Thanks for reading.
Here is the basic topology.

R1 and R3 are connected using port FastEthernet0/0
R1 and R2 are connected using port FastEthernet0/6
R2 and R3 are connected using port FastEthernet0/12

Screen shot 2011-04-20 at 22.14.15

Lets create a new VLAN 100 on each ‘switch’

Screen shot 2011-04-20 at 22.19.53

So now let's run through all the switches, setting up each connected port as a trunk. We don't need to set the encapsulation since these only support 802.1q, but we'll do it anyway...

Screen shot 2011-04-20 at 22.22.33

When all the ports are up lets wait for spanning-tree to converge.

Screen shot 2011-04-20 at 22.25.37

On R3 the port FastEthernet0/12 is ‘blocking’.

Screen shot 2011-04-20 at 22.26.03

So let's move the blocked port from R3's to R2's FastEthernet0/12 interface. To do that we need to make R3's cost of ‘getting’ to the root lower than R2's. So let's lower the port cost on R3's interface closest to the root switch.
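The change is a single interface command - a sketch, with the interface chosen on the assumption that R1 is the root bridge:

```
R3(config)# interface FastEthernet0/0
R3(config-if)# spanning-tree vlan 100 cost 1
```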

Screen shot 2011-04-20 at 22.59.02

The default port cost for FastEthernet is 19, so a cost of 1 is preferred. Now, looking at R2, we've got the result we wanted.

Screen shot 2011-04-20 at 22.59.30
A simple yet effective way to manage remote service dependencies is with IP SLA.

Imagine you have two routers connected to one ISP, and behind those routers you have a shared ethernet LAN in which you have a firewall. The ISP offers you eBGP services and you receive a full routing table from them into each of your CPE routers. Your CPE facing LAN is configured over ethernet and you are running VRRP between your two routers and the firewall in this LAN. Devices in the CPE LAN all point to the VRRP virtual IP address for service to the Internet.

Screen shot 2011-04-13 at 20.59.09

So, everything is great: VRRP is up and running with RouterA as master. Now, you've been extra sensible and set up VRRP to ‘track’ an object on each router to control which device is the VRRP master. In our case we are tracking object 1, and the object is looking for a particular route in the routing table.

Screen shot 2011-04-13 at 21.08.53

If the route exists then our priority is the default VRRP priority of 100; however, if the route is lost we decrement by 10 and RouterB (configured with a priority of 95 and to preempt) takes over.
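Pieced together, the tracking configuration sketched - the tracked route, interface and VRRP group number are assumptions:

```
RouterA(config)# track 1 ip route 192.168.1.0 255.255.255.0 reachability
RouterA(config)# interface FastEthernet0/0
RouterA(config-if)# vrrp 1 priority 100
RouterA(config-if)# vrrp 1 track 1 decrement 10
```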

Screen shot 2011-04-13 at 21.08.22

You figure that if the eBGP neighbor goes down you'll lose the routing table, so you won't be able to get out to the internet, and so you should fail over the VRRP address to the other node (hopefully he's got a full routing table).

Now, what happens if there is a problem on the ISP network such that you are still receiving a full routing table into both of your routers and yet, behind one of the PE routers, there is no upstream service and your data is being blackholed? This is bad. Your VRRP service does NOT fail over, because it was tracking a local route and the provider eBGP neighbor is still sending you a full table. Without manual intervention you are out of service.

So this is where ‘ip sla’ takes over. Cisco IP SLA stands for IP Service Level Agreements and is used to provide telemetry information about service availability and performance for a number of network streams such as FTP, HTTP and voice jitter.

First let's set up a simple SLA object. For this example we'll do an HTTP GET request against a website, but there are a number of different types of SLA (jitter, ping, FTP etc). The response time from the site will give us a deterministic value for GOOD or BAD status. Clearly a DOWN website might be down for reasons other than a lost route, but it's just an example.
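A sketch of such a probe (this is the newer IOS syntax - on older releases it's ‘ip sla monitor’ - and the URL is a stand-in):

```
RouterA(config)# ip sla 1
RouterA(config-ip-sla)# http get http://www.example.com/
RouterA(config-ip-sla-http)# frequency 60
RouterA(config-ip-sla-http)# exit
RouterA(config)# ip sla schedule 1 life forever start-time now
```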

Screen shot 2011-04-13 at 21.13.44

Lets check to see if it’s running

Screen shot 2011-04-13 at 21.16.35

Looks good. We're getting a round trip time (RTT) of 50-60ms, and the DNS lookup is around 52ms. The return code is OK, which is what we are looking for. So now we'll bind this SLA monitor to a track object so we can add it to the VRRP tracking.

Screen shot 2011-04-13 at 21.13.35

Lets confirm thats in

Screen shot 2011-04-13 at 21.13.13

So now we'll add the new track object to the VRRP configuration, again with a decrement of 10. The outcome of losing both the tracked route AND connectivity to the web server will be a priority of 80. Crucially though, losing EITHER the tracked route OR connectivity to the monitored web server takes the priority to 90, below RouterB's 95, so either failure alone will cause the VRRP service to fail over.
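The binding and VRRP changes sketched (object numbering, interface and group assumed; older IOS uses ‘track 2 rtr 1 reachability’):

```
RouterA(config)# track 2 ip sla 1 reachability
RouterA(config)# interface FastEthernet0/0
RouterA(config-if)# vrrp 1 track 2 decrement 10
```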

Screen shot 2011-04-13 at 21.13.24

Job done - experiment and have fun
When there is a technical error with a device, or you need to troubleshoot a failure, the most important thing is time. Recently we were onsite troubleshooting a denial of service (DoS) attack on a web server for a client. They informed us that it had taken place sometime between 22:00 and 22:10 on a certain date. We asked for the web logs and then for the router/firewall logs for the period concerned. We spent time looking through the logs and noticed no commonality between the customer's experience and the logs we were looking at. Indeed, there appeared to be discrepancies even between the log files we had been given. For example, a very clear HTTP GET request on the web server had no matching event on the firewall...

You guessed it: not only was the time wrong on the web server but it was also wrong on the firewall. Worse still, the timezone was wrong, so even if the time setting had been correct we were at best an hour out, as we were operating in BST.

The first stage was pointing the web server at the Cisco switch (a Catalyst 4500) to obtain its time via the Windows time service using NTP, with the timezone set up correctly. Here are the steps we took on the Cisco switch to set its timezone, configure the correct British Summer Time skew, and enable the NTP function so that the switch could collect the correct time from the internet and, in turn, allow the Windows server to collect time from the switch.

First we setup the timezone so that the switch knew where in the world it was:

Screen shot 2011-04-10 at 21.24.37

Now we need to set up NTP itself. We need to find an NTP source out there to sync with. There are different tiers of NTP service, called ‘strata’. We don't really need to be hugely accurate, so we chose stratum 2 NTP sources. Here is a list of the UK NTP servers we used.
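For reference, the timezone and NTP configuration amounts to something like this - the server names and the BST changeover rules shown are examples:

```
Switch(config)# clock timezone GMT 0
Switch(config)# clock summer-time BST recurring last Sun Mar 1:00 last Sun Oct 2:00
Switch(config)# ntp server 0.uk.pool.ntp.org prefer
Switch(config)# ntp server 1.uk.pool.ntp.org
```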

Screen shot 2011-04-10 at 21.28.18

The ‘prefer’ keyword just says to IOS: “if this one is available then we'd like you to trust it most”. We also want to make sure our log data on the switch is given a timestamp.

Screen shot 2011-04-10 at 21.24.47

So let's just make sure it's all working. Type “show ntp associations”.

Screen shot 2011-04-10 at 21.25.16
© 2011

Cisco, IOS, CCNA, CCNP, CCIE are trademarks of Cisco Systems Inc.
JunOS, JNCIA, JNCIP, JNCIE are registered trademarks of Juniper Networks Inc.