WAN – Dynamic Host Configuration Protocol (DHCP)

In this article I will talk about a well-known service that can be configured on Cisco devices: DHCP, or Dynamic Host Configuration Protocol. This is a network service that automatically configures any device that uses the Internet Protocol with all the elements needed to communicate inside a network. DHCP is a process transparent to the user that has greatly helped network administrators with managing IP addresses. Before this service became available, every device required a static entry binding its physical address to a unique IP address. Imagine how hard it was for network administrators to keep track of every change of location and IP for all devices. DHCP offers an easy way to manage and troubleshoot IP allocation and it scales well, meaning it is not affected by network growth. Besides the workstation’s IP address, DHCP can automatically assign the network mask, default gateway, DNS servers and much more.

   As an IP allocation mechanism, DHCP provides three methods of assigning an IP to a physical host:
manual – network administrators manually assign one IP address to a single device. The DHCP service is used only to maintain that particular binding.
automatic – a single IP address is allocated permanently to a host and the DHCP server will always allocate the same IP to that particular host.
dynamic – DHCP will allocate IPs from a pool of usable addresses. IP addresses are leased for a defined period of time and if a device does not renew its lease, the IP is automatically returned to the address pool and can be used again by another device.
   DHCP works in a client/server model. The host requests an IP address from the nearest DHCP server and the server responds with a new IP allocation or a renewal of the existing lease. The host must contact the DHCP server periodically to renew its lease; if it fails to do so for a period of time, the IP address is returned to the address pool. When a device wants to obtain an IP allocation from a DHCP server, the following messages are exchanged:
1. the host sends a DHCPDISCOVER message, which is a broadcast message sent to all devices in the network. It uses the layer 2 and layer 3 broadcast addresses (FF-FF-FF-FF-FF-FF and 255.255.255.255).
2. the DHCP server will receive the message and will create an entry that binds the host’s MAC address to the leased IP address. This information will then be sent in a DHCPOFFER message. The response is sent unicast, directly to the host, by using its MAC address.
3. the host will check the received DHCPOFFER message and will then reply with a broadcast DHCPREQUEST message, informing all devices (including the DHCP server) that it has accepted this configuration.
4. the server will finally reply with a DHCPACK message and the host will have access to network resources.
The following image displays the DHCP allocation mechanism:
[Image: DHCP process]
   DHCP had a predecessor, BOOTP (the Bootstrap Protocol), which was used primarily for configuring devices that had neither an operating system nor a hard drive. The two protocols are somewhat similar because both use the client/server model. The main difference between DHCP and BOOTP is that BOOTP uses manually configured tables in which bindings between IP and MAC addresses are stored, while DHCP builds its entries automatically as the network changes. Another aspect of BOOTP is that it uses permanently assigned IP addresses (the same IP address is allocated to one device permanently). BOOTP supports only four configuration parameters (IP address, subnet mask, gateway address and the DNS server’s IP address) while DHCP supports over 20 parameters. Check the following link from IANA for more information: http://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xml.
   Next, we will talk about configuring DHCP on Cisco devices. Remember that you have to be careful when doing this in a production environment, so I suggest you test your configuration first. When configuring DHCP, we first have to set aside the IPs that are reserved for special purposes and must not be included in the DHCP pool. Remember that servers, routers and printers require static IP addresses, so it’s best to include all these devices in the excluded range. To configure these excluded addresses, use the ip dhcp excluded-address [first IP] [last IP] command. The following image displays an example of such a range:

[Image: ip dhcp excluded-address command]
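Based on the range described below, the command from that screenshot presumably looked like this:

```
Router(config)#ip dhcp excluded-address 172.16.1.0 172.16.1.100
```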
In this case, the IPs from 172.16.1.0 to 172.16.1.100 will not be leased to clients.
Next, we will need to configure the DHCP pool and the default-router address. To achieve this, use the commands displayed in the following image:
[Image: DHCP pool configuration]
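A minimal sketch of such a pool; the pool name and the exact addresses are assumptions for illustration:

```
Router(config)#ip dhcp pool LAN_POOL
Router(dhcp-config)#network 172.16.1.0 255.255.255.0
Router(dhcp-config)#default-router 172.16.1.1
```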
Optionally, you can configure the domain name, DNS servers, the duration of the DHCP lease, etc. To view these options, type ? from the DHCP configuration mode:
[Image: DHCP pool options]
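A few of the optional parameters could be set like this (the domain name, DNS server address and lease duration are hypothetical values):

```
Router(dhcp-config)#domain-name example.com
Router(dhcp-config)#dns-server 172.16.1.2
Router(dhcp-config)#lease 7
```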
These are the options emulated in my version of Cisco Packet Tracer. If you use the GNS3 tool, all DHCP commands will be available.
   To verify your configuration, use the show running-config, show ip dhcp binding, show ip dhcp server statistics and show ip dhcp pool commands. Run these commands one by one to view their output. The following image displays the show running-config command:
[Image: show running-config output]
To troubleshoot your DHCP configuration, use the show ip dhcp conflict command (used when an IP allocation conflict exists).
   Remember that DHCP clients use broadcast messages when first trying to obtain IP configuration. We already know that broadcasts are not forwarded by routers by default, so what would happen if your DHCP server is a couple of routers away from your client computer? Well, on the closest router to the client’s workstation, you’ll have to specify the DHCP server IP address by using the ip helper-address [IP address] command from the interface configuration mode. By issuing this command, the router will accept DHCP broadcasts and will then forward the request unicast to the specified IP address:
[Image: ip helper-address command]
This mechanism is also known as DHCP Relay.
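A sketch of the relay configuration; the interface and the DHCP server address are assumptions. The ip helper-address command goes on the router interface facing the clients:

```
Router(config)#interface fastEthernet 0/0
Router(config-if)#ip helper-address 10.0.0.5
```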
   On the client side, if you use Cisco devices that will obtain their IP configuration from the DHCP server, use the ip address dhcp command from the interface configuration mode:
[Image: ip address dhcp command]
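For example, a minimal sketch on a client router (the interface is an assumption):

```
Router(config)#interface fastEthernet 0/1
Router(config-if)#ip address dhcp
```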
I think that’s about it for this post folks. Remember to rate/share/comment if you’ve enjoyed this article. Have a wonderful day and stay tuned for more articles to come.

WAN – Frame Relay

Hello dear readers,

   In the last networking article we talked about the role and functionality of ACLs. In this post I will focus on explaining another widely used WAN technology, Frame Relay. This is a WAN protocol that operates at the two lowest layers of the OSI stack, the Data Link and Physical layers. It uses the concept of Virtual Circuits (VCs), identifiers of the path that frames will travel from source to destination. Frame Relay is often used to interconnect LANs over a WAN link. We will see how Frame Relay implements VCs, how to configure a router for Frame Relay and how a Frame Relay switch works, so that by the end you will know the main concepts of this WAN protocol.
   Frame Relay introduced the concept of Virtual Circuits which, unlike permanent leased lines, offer flexibility and cost-effective implementations. This WAN protocol is pretty simple to configure and also uses less equipment than other WAN implementations. It is cost effective because clients pay only for the local loop lines and the dedicated bandwidth received from the service provider. In Frame Relay, each virtual circuit is uniquely identified by a Data Link Connection Identifier (DLCI). DLCIs are used to set the virtual path that a packet will travel to reach its destination. A disadvantage of Frame Relay is that it does not provide error recovery: when a corrupted frame is detected, Frame Relay simply drops it without notifying the sender or the receiver. We will see later that Frame Relay maps DLCI numbers to IP addresses. In every WAN transmission, DTE and DCE equipment must be installed between the two transmitting nodes. Frame Relay specifies how information is sent between these nodes (DTE-DCE) but does not specify how frames are moved between DCEs.
Frame Relay operates by using two main elements: the Physical layer and the link layer. The Physical layer determines the electrical and mechanical specifications that must be used during transmission. The link layer specifies the protocol that establishes the WAN connection between the DTE and DCE. Let’s say information is sent from one node (a DTE device such as a router) to another node in a remote location. The DTE will send the packets to the closest DCE device, in this case a WAN edge equipment such as a Frame Relay switch. Once the packet reaches the edge switch through the local loop, the client’s responsibility ends. How packets are sent between switches in the Frame Relay network is the ISP’s responsibility. In the end, the switch closest to the destination network delivers the packets to the client’s DTE device.
As I’ve told you earlier, Frame Relay uses the concept of Virtual Circuits to identify the logical path used to forward packets between two nodes. There are two types of VC that can be established:
Switched Virtual Circuits (SVC) – are established dynamically, on demand, and are torn down when the transmission is complete.
   Permanent Virtual Circuits (PVC) – are configured by the carrier before any transmission can be made.
DLCI values are set by the Frame Relay provider and have local significance only. This means that two DTE devices can use different DLCI numbers when sending data between them. DLCI numbers can be configured from 16 to 1007, while 0 to 15 and 1008 to 1023 are reserved. Another feature of Frame Relay is that a client’s DTE device can use multiple DLCIs when sending data to different destinations. Let’s take the following example: suppose we have three DTE devices (routers) A, B and C. A uses DLCI 100 for sending data to router B and DLCI 101 when sending data to router C. B uses DLCI 105 for sending packets to router A and 106 for sending data to router C. Remember that these numbers have local significance only. By using multiple DLCIs the cost is significantly reduced, since the same physical devices are used.
Frame Relay receives packets from the network layer and encapsulates them into frames by adding a DLCI number and a checksum (CRC). Each frame is delimited by the 01111110 flag and is then sent to the Physical layer for final delivery. Frame Relay topologies can usually be full mesh, partial mesh, star or hub and spoke; we have talked about these kinds of topologies in the networking fundamentals articles. Frame Relay DLCIs are mapped to remote IP addresses; a DLCI is used to forward packets to a certain network. In Frame Relay networks, Inverse ARP is used to obtain the IP address (layer 3) of a remote network from the DLCI number (layer 2). Inverse ARP is enabled by default on all Cisco devices. Remember that Frame Relay can support multiple protocols like IP, AppleTalk or IPX. The address mapping can be done in two ways:
dynamic mapping – a router will send Inverse ARP requests over the PVC to obtain the IP address of each remote hop. The router will then use the responses received to populate a local address table (also known as a mapping table) that will be used for sending and receiving data.
static mapping – as a Network Administrator, you can configure static mappings between DLCI numbers and IP addresses. If you choose to assign a static mapping to an IP address, the dynamic mapping obtained by the inverse ARP protocol will be ignored.
Another aspect that you will need to remember about Frame Relay is that LMI (Local Management Interface) messages are exchanged between the DTE and the DCE equipment to check the status of the Frame Relay connection. You can view this status by typing the show frame-relay lmi command from the privileged EXEC mode. By default, the interval at which LMI messages are exchanged is 10 seconds; this interval can be modified using the keepalive command. There are many other aspects of the LMI mechanism, but they are not needed for the CCNA exam. Feel free to add anything you know about LMI or Frame Relay in general in the comments section.
We will continue with Frame Relay configuration commands. Given the following topology, we will configure Frame-Relay on these Cisco routers:

[Image: Frame-Relay topology]

First, we’ll have to enable Frame-Relay on an interface. I will enable it on the Serial 0/1/0 interface of router R1:

[Image: Frame-Relay configuration on R1]
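A sketch of that configuration; the subnet mask and bandwidth value are assumptions:

```
R1(config)#interface serial 0/1/0
R1(config-if)#ip address 192.168.0.1 255.255.255.0
R1(config-if)#encapsulation frame-relay
R1(config-if)#bandwidth 64
R1(config-if)#no shutdown
```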
As you can see, I’ve added the 192.168.0.1 IP address to the Serial 0/1/0 interface. I’ve then set the encapsulation type to frame-relay and finally configured the bandwidth used by the link (kb/s). Now I will do the same for router R2:
[Image: Frame-Relay configuration on R2]
To verify your configuration use the show running-config or show interfaces serial [number] commands:
[Image: show interfaces serial output]
We have just enabled Frame-Relay on these interfaces; our configuration will use dynamic mapping (with the help of Inverse ARP). To configure static mappings on a Cisco device, use the frame-relay map ip [ip address] [DLCI number] [broadcast] command. The broadcast parameter is optional and is used to enable broadcasts in a Frame-Relay topology. Remember that Frame Relay is by default a nonbroadcast multiaccess (NBMA) network and will not forward broadcast or multicast traffic. Here is what a static Frame-Relay mapping would look like:
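A sketch of such a mapping on R1; the remote IP address and the DLCI are assumptions for illustration:

```
R1(config)#interface serial 0/1/0
R1(config-if)#frame-relay map ip 192.168.0.2 102 broadcast
```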
To verify our configuration, use the show frame-relay map command:
[Image: show frame-relay map output]
   One big problem in NBMA networks is caused by split horizon. Remember that this is a mechanism used to prevent routing loops by not sending routing updates back out the interface from which the original routing information came. This is a useful technique in routing protocols, but in Frame Relay configurations it can cause problems, since some routing updates could be blocked. The best way to resolve this is by using subinterfaces. A subinterface is a logical interface assigned to a physical one (a physical interface can support multiple subinterfaces). By using this technique you reduce the overall cost, and broadcast traffic can be forwarded between subinterfaces. These logical interfaces can be configured in two ways:
point-to-point – a Virtual Circuit is established between two subinterfaces (or between a subinterface and a physical interface). Each subinterface uses a different IP subnet so that split horizon can be avoided. Each PVC has its own DLCI number and packets are forwarded between subinterfaces using that particular DLCI.
multipoint – a subinterface establishes multiple Virtual Circuits with one or more physical/logical interfaces. Interfaces that use the multipoint mechanism must be part of the same subnet.
When configuring subinterfaces, the physical interface must have the frame-relay encapsulation type configured first. Subinterfaces must be configured independently (with an IP address and mask that are part of a different subnet). I will show you how to configure Frame-Relay subinterfaces later.
   For your exam, you will need to know the elements that must be considered in a Frame-Relay implementation from the client’s side. As you already know, in this WAN protocol the client leases only the connection between his DTE device and the ISP’s DCE. The aspects that the client must consider are: link speed (also known as access rate or port speed) and the CIR (Committed Information Rate). CIR refers to the guaranteed rate at which the client can transfer information over the PVC. A cool feature of Frame-Relay is that it offers support for speed bursting, meaning it can take advantage of the unused capacity of a PVC: because sometimes one PVC has higher usage than another, Frame-Relay can transfer the unused capacity to the virtual circuit that needs it. An element called the CBIR (Committed Burst Information Rate) identifies the maximum rate that a link can support over the CIR. If a link has a CIR of 64 kb/s and a CBIR of 48 kb/s, the PVC can use a maximum of 112 kb/s. Frames sent above the CIR (up to that 112 kb/s) are marked as Discard Eligible (DE), meaning that if there is congestion, those frames will be dropped first. Another element, the BE (Excess Burst), indicates the remaining bandwidth of the access port. In our example, if the link supports a maximum of 128 kb/s, the BE is 128-112=16 kb/s.
   Two elements are used by Frame-Relay to notify devices about network congestion:
FECN (Forward Explicit Congestion Notification) – a flag that signals the receiving DTE device that the link encountered congestion during transmission. Remember that frames with the FECN flag set to 1 are sent only to the upstream devices (devices through which frames travel to reach their destination).
BECN (Backward Explicit Congestion Notification) – a notification mechanism that informs the devices from which frames originated that the PVC suffers from congestion. Frames with the BECN flag set to 1 are sent only to the downstream devices.
   Finally, we will talk about configuring Frame-Relay subinterfaces on a Cisco device. First, we will need to enable the Frame-Relay encapsulation on the physical interface. To do this, type the following:
Router(config-if)#encapsulation frame-relay
OK folks, now let’s configure one subinterface for the DLCI 100 and another one for DLCI 101:
[Image: Frame-Relay subinterface configuration for DLCI 100]
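A sketch of that subinterface configuration; the subnet mask is an assumption:

```
R1(config)#interface serial 0/1/0.100 point-to-point
R1(config-subif)#ip address 192.168.1.1 255.255.255.252
R1(config-subif)#frame-relay interface-dlci 100
```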
The subinterface number can be chosen from 1 to 4294967293; I use the same number as the DLCI for easier identification. The frame-relay interface-dlci [number] statement assigns the desired DLCI number to the subinterface. In this configuration, DLCI 100 will be mapped to the 192.168.1.1 IP. Two modes can be selected: point-to-point or multipoint. Remember that the multipoint option can be used when the same subnet is used by all routers in the Frame-Relay network. Usually, point-to-point connections use a small subnet such as a /30; what I showed you is just an example. Now let’s do the same thing for DLCI 101:
[Image: Frame-Relay subinterface configuration for DLCI 101]
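Similarly, a sketch for the second subinterface (the addresses are again assumptions):

```
R1(config)#interface serial 0/1/0.101 point-to-point
R1(config-subif)#ip address 192.168.2.1 255.255.255.252
R1(config-subif)#frame-relay interface-dlci 101
```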
It is important to remove any IP address configured on the physical interface! If an IP was configured previously, frames will not be forwarded to the subinterfaces. You can verify/troubleshoot your Frame-Relay configuration by using the following commands:
show frame-relay lmi – view the LMI status
You can even enable the LMI debugging mode to view the LMI exchanges in real time:
[Image: debug frame-relay lmi output]
show interfaces serial [number] – check interface configuration
show frame-relay map – verify Inverse ARP operation (use the clear frame-relay inarp command to clear the mapping table)
show running-config – view the running configuration
show frame-relay pvc [number] – check the status of a PVC
I think that’s about it for this post, almost every part of the Frame-Relay protocol was described (at least I hope so). Please share to others and also post any comment/question. I hope you will enjoy this, stay tuned for the following WAN article. Have a wonderful day folks!

WAN – Access Control List (ACL)

In this article I will talk about one technology used especially for restricting and securing access throughout a network: ACLs (Access Control Lists). This is one of the most important lessons that you need to learn in order to pass the CCNA exam. As a network administrator you’ll have to know how to create and modify ACLs because you’ll probably use them on a daily basis. You’ve probably used ACLs in different technologies without knowing it, to secure access to a file, computer, application, etc. Firewalls are devices that use ACLs to restrict network access based on source and destination IPs, port numbers, protocol and so on. Even permissions on Windows shared folders can be seen as layer 7 ACLs, because users are restricted or granted access to that resource. I will talk only about ACLs used to restrict network traffic, because you will need to know them very well for your exam.
We will talk about different types of ACLs, how each one works and how you can use them to make your network more secure. At the base of the network layer sits the IP address, the element which provides the means of communication between devices. Before two devices (remember the client/server model) can start forwarding data between them, a network connection must be established. This means that these devices must first determine the source/destination MAC address, the source/destination IP address and the ports that will provide the communication mechanisms. If you can’t remember, or you haven’t studied my networking fundamentals tutorials, take a look again at the TCP connection establishment and at the TCP/IP network layer.
I’ve written earlier that network traffic can be filtered using ACLs; these are nothing more than lists of rules that dictate what traffic is allowed or denied to enter or exit a network. Packet filtering can be done based on source and destination IP address, protocol, or source and destination ports. Upon receiving a packet, the router simply checks the ACL from top to bottom and, based on the information gathered there, grants or denies access. As you can see, the logic behind this technology is pretty simple but effective (remember that packet filtering is performed at the network layer). ACLs can be configured on the inbound or outbound direction of an interface, and by default routers have no ACLs configured. You will have to remember that you can apply one ACL per protocol (IP, TCP, UDP), per direction (an ACL filters traffic in only one direction, inbound or outbound) and per interface (FastEthernet 0/1, Serial 0/0/0). But how do ACLs work? Each rule or statement from an access-list is tested against the received packet. ACLs are read from top to bottom, line by line, and if a match is made (the packet is denied or permitted by a rule) the rest of the lines are skipped.
Remember that every access-list has an implicit deny all at the end of all statements. This means that if no permit rule is present, all traffic is denied by default (a deny any statement – you will understand this later in the article). For this reason, an ACL must have at least one permit rule. An inbound ACL processes packets before they are routed to the exit interface, while an outbound ACL processes packets after they are routed to the exit interface. Now let’s talk a little bit about the types of ACLs that can be configured on Cisco routers:
standard ACL – this type of access-list will filter traffic based on source IP address. A standard ACL is composed of the access-list statement, number, permit or deny flag, source IP address and wildcard mask. An example of a standard ACL is access-list 20 deny 172.16.0.0 0.0.255.255.
extended ACLs – can filter traffic based on source and destination IP address, source and destination port (TCP or UDP) and protocol. This is what an extended ACL looks like:
access-list 103 deny ip 172.16.1.0 0.0.0.255 172.16.2.0 0.0.0.255.
These are the two main ACL types used today; there are also special ACL types, but we will talk about them later (reflexive, dynamic and time-based ACLs). The number of an ACL is simply used for identifying each access-list; newer versions of IOS also offer support for named ACLs (you can assign a name/description to an access-list). To see the available numbers that you can assign to an ACL, type access-list ? from the global configuration mode of a router:

[Image: access-list number ranges]

Normally, this command will display many options, but only these are implemented in my version of Packet Tracer. Named ACLs can use letters and numbers, and each entry can be deleted or modified. It is recommended that you place ACLs where they have the biggest effect: standard ACLs as close as possible to the destination (because they filter only on source address), and extended ACLs as close as possible to the source of the traffic being denied.
First, I will show you how to configure a standard ACL on a Cisco router. As I’ve told you earlier, standard ACLs make decisions based on the source IP address; no port, protocol or destination address can be used in a standard ACL. As a best practice, always put the most frequently matched statement at the top of the ACL. This reduces the time the router needs to check each entry in the ACL.
To configure a standard ACL on a Cisco router, use the access-list [number] [deny/permit] [source IP] [wildcard] command. To add another statement to the same ACL, use the same number when configuring the new entry. The following image displays a standard ACL configured with two entries:

[Image: standard access-list with two entries]
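A sketch of those two entries; the wildcard masks are assumptions matching /16 networks:

```
Router(config)#access-list 20 permit 192.168.0.0 0.0.255.255
Router(config)#access-list 20 deny 172.16.0.0 0.0.255.255
```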
In this access-list, I’ve granted access to packets with a source IP address from the 192.168.0.0 network and denied access to those originating from the 172.16.0.0 network. Remember that at the end of each ACL there is a deny all entry. You could configure an explicit access-list 20 deny any statement, but it is already there by default. We can add the remark parameter to describe the functionality of the ACL:
[Image: access-list remark command]
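For example (the remark text is arbitrary):

```
Router(config)#access-list 20 remark Deny traffic originating from the 172.16.0.0 network
```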
To view the currently configured ACL, use the show running-config command:
[Image: ACL in the running configuration]
I don’t know if I’ve ever talked about the wildcard mask before (I think it was at OSPF). This element is used by ACLs to identify which portion of the IP address stated in the ACL must be tested. A wildcard mask is similar to a network mask in that it is composed of 32 bits (4 octets) of 0s and 1s, with the following rules:
0 – the corresponding bit of the address must match exactly
1 – the corresponding bit is ignored
Let’s take the following example:
To match all IPs that are part of the 192.168.0.0 network we use the 0.0.255.255 wildcard mask. If we want to match a single IP from this network, we use the 0.0.0.0 wildcard mask (for example 192.168.1.6 0.0.0.0). To check the result of applying a wildcard mask, compare the address bits with the ACL entry wherever the wildcard mask has a 0 bit. You will have to know how to use wildcard masks in ACL statements, so you should practice a little; you can find a lot of examples over the Internet. As I’ve written earlier, you can apply an access-list to an interface in only one direction, in or out. To apply an access-list to an interface, use the ip access-group [ACL number] in/out command. Now let’s apply our access-list to a FastEthernet interface in the in direction:
[Image: ip access-group command]
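A sketch of applying ACL 20; the interface is an assumption:

```
Router(config)#interface fastEthernet 0/0
Router(config-if)#ip access-group 20 in
```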
To remove our access-list from the router, type no access-list 20 from the global configuration mode. Access to the VTY lines can also be restricted using access-lists. To achieve this, use the access-class [ACL number] in/out command:
[Image: access-class applied to the VTY lines]
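A sketch of restricting remote access with our ACL:

```
Router(config)#line vty 0 4
Router(config-line)#access-class 20 in
```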
You’ve probably guessed the outcome of this configuration, the 192.168.0.0 network will be able to establish remote connections with the router while the 172.16.0.0 network will not have permissions to do this.
   Named ACLs use the ip access-list standard/extended [ACL name] command. After typing this command, you will enter the ACL configuration mode, as shown in the following image:
[Image: named ACL configuration]
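A sketch of a named standard ACL; the name and the entry are hypothetical:

```
Router(config)#ip access-list standard MGMT
Router(config-std-nacl)#permit 192.168.0.0 0.0.255.255
```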
To apply a named ACL to an interface, use the ip access-group [ACL name] in/out as follows:
[Image: applying a named ACL to an interface]
To troubleshoot your access-list configuration, use the show access-lists command or show access-lists [ACL name or number].
   Extended ACLs offer better control over traffic filtering. These ACLs use numbers from 100 to 199 and from 2000 to 2699. They enhance standard ACL functionality because filtering can occur based on both source and destination IP address, source and destination port numbers, and protocol. When building an extended access-list that uses port numbers to filter traffic, you can choose between TCP and UDP ports. The statement of an extended access-list is a little more complex, as follows:
access-list [number] [deny/permit/remark] [protocol] [source IP] [source wildcard] [operator] [port] [destination IP] [destination wildcard] [operator] [port] [established]

These are the options that can be used when configuring an extended access-list. The established option can be used only with the TCP protocol and matches return traffic belonging to an already established connection. The host parameter indicates that the access-list must match the exact IP (similar to using the 0.0.0.0 wildcard mask).
The following image displays an example of an extended ACL configuration:
[Image: extended ACL configuration]
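A sketch of what those three entries might look like; the exact addresses in the screenshot are unknown, so the networks in the second line are assumptions:

```
Router(config)#access-list 100 permit ip host 10.0.0.1 host 10.0.0.2
Router(config)#access-list 100 deny tcp 10.0.0.0 0.0.0.255 10.0.1.0 0.0.0.255 eq 80
Router(config)#access-list 100 deny tcp host 10.0.0.1 host 10.0.0.2 eq 23
```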
The first line permits all traffic from host 10.0.0.1 to host 10.0.0.2, the second line denies web traffic between the two networks, and the third one denies Telnet connections from one host to the other. To apply this access-list to an interface, use the same ip access-group 100 in command from the interface configuration mode. Named extended access-lists are configured in a similar way to standard named ACLs:
ip access-list extended [name].
   For the CCNA exam you will need to learn ACLs very well, which is why I suggest you practice them a lot. I remember that my CCNA exam had a lot of access-list questions.
  More complex ACLs can be configured on Cisco devices: Dynamic access-lists, Reflexive access-lists and Time-based access-lists. We will talk a little about each of these types of ACLs. Remember that for the CCNA exam you will not need to know all the aspects of complex ACLs.
   Dynamic ACLs (also known as lock-and-key ACLs) are used to control IP traffic by using Telnet connections to authenticate users. Dynamic ACLs work only in combination with extended ACLs. Users are denied access by the extended access-list until they establish a Telnet connection with the router and authenticate. This type of ACL can be used to allow a user to forward traffic through a firewall or to authenticate against a TACACS+ server. You will have to remember that dynamic ACLs are used to authenticate users before allowing them to forward traffic; this authentication mechanism enhances network security. To configure a dynamic access-list, take the following steps:
1. Configure the username and password used for authentication:
Router(config)#username admin password test
2. Configure an ACL with an entry allowing users to establish Telnet connections to the router:
Router(config)#access-list 110 permit tcp any host 192.168.0.1 eq 23
3. Another entry will be added to the access-list to allow traffic from one point to another. Let’s say we have 172.16.1.0 and 172.16.2.0 networks and we want to allow traffic to flow from one network to another:
Router(config)#access-list 110 dynamic networks timeout 10 permit ip 172.16.1.0 0.0.0.255 172.16.2.0 0.0.0.255
the timeout statement will close the session after the specified time (in minutes)
4. Configure the virtual lines to allow Telnet connections. After the user is authenticated, the Telnet session closes; if there is no activity for the specified time (in this case 10 minutes), the temporary entry is removed:
Router(config)#line vty 0 15
Router(config-line)#login local
Router(config-line)#autocommand access-enable host timeout 10

5. Apply the access-list to an interface:

Router(config)#interface fastEthernet 0/1
Router(config-if)#ip access-group 110 in
 
   Reflexive ACLs – this type of access-list changes its behavior based on evaluation statements. Reflexive ACLs evaluate traffic based on its origin: they allow return traffic for sessions initiated from the inside while denying traffic originated from the outside. These ACLs are defined inside extended named access-lists and cannot be used with standard ones. They provide a higher level of security because they can help counter certain attacks, such as some forms of DoS. To configure a reflexive ACL, you will have to take the following steps:
1. Create the rules which will allow traffic originated from the inside. Remember that reflexive ACLs can be used with TCP, UDP and even ICMP traffic:
Router(config)#ip access-list extended InsideTraffic
Router(config-ext-nacl)# permit icmp 172.16.1.0 0.0.0.255 any reflect ICMP
2. Create an access-list that will check to see if traffic was originated from the inside and based on the evaluation rules, it will allow or deny traffic:
Router(config)#ip access-list extended OutsideTraffic
Router(config-ext-nacl)#evaluate ICMP
3. Apply the ACLs:
Router(config)#interface fastEthernet 0/1
Router(config-if)#ip access-group InsideTraffic out
Router(config-if)#ip access-group OutsideTraffic in
This is a classic example of how you can use reflexive ACLs. Imagine you want to be able to ping an outside host and receive an answer, but you don’t want outside hosts to be able to ping your devices. A correctly configured reflexive ACL allows the replies to your pings back in while blocking pings initiated from the outside.
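One variation worth knowing: the reflect keyword accepts an optional timeout (in seconds) after which idle reflexive entries are removed. A sketch of the ping scenario with an explicit timeout (the value is an example):

```
Router(config)#ip access-list extended InsideTraffic
! Outbound pings create a temporary reflexive entry, removed after 60 idle seconds
Router(config-ext-nacl)#permit icmp 172.16.1.0 0.0.0.255 any reflect ICMP timeout 60
Router(config-ext-nacl)#exit
Router(config)#ip access-list extended OutsideTraffic
! Inbound traffic is allowed only if it matches a reflexive entry
Router(config-ext-nacl)#evaluate ICMP
```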
   Time-based ACLs – extended access-lists that allow access to network resources only in a specified interval. You can specify the time of the day or the week in which traffic will be allowed. These are best used when you want to log traffic only in certain moments of the day or the week. Imagine you want to monitor traffic only on Mondays and Fridays, it would not be appropriate to log all the traffic from the entire week. The following steps must be taken when configuring a time-based ACL:
1. Configure a time range in which traffic will be allowed:
Router(config)#time-range TIMEBASEDACL
Router(config-time-range)# periodic Monday Friday 0:00 to 23:59
2. Configure an extended ACL that will use the configured time interval to allow traffic:
Router(config)#access-list 110 permit ip 172.16.1.0 0.0.0.255 any time-range TIMEBASEDACL
3. Apply the ACL to an interface:
Router(config)#interface fastEthernet 0/1
Router(config-if)#ip access-group 110 out
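Putting the three steps together, and noting that IOS time ranges also support an absolute window in addition to periodic entries, the configuration might be sketched like this (the dates in the absolute example are placeholders):

```
Router(config)#time-range TIMEBASEDACL
! Allow traffic only on Mondays and Fridays, all day
Router(config-time-range)#periodic Monday Friday 0:00 to 23:59
! Alternatively, a one-off absolute window could be used, e.g.:
! Router(config-time-range)#absolute start 08:00 1 January 2015 end 17:00 31 January 2015
Router(config-time-range)#exit
Router(config)#access-list 110 permit ip 172.16.1.0 0.0.0.255 any time-range TIMEBASEDACL
Router(config)#interface fastEthernet 0/1
Router(config-if)#ip access-group 110 out
```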
   Remember the differences between standard and extended ACLs and when to apply reflexive, dynamic or time-based access-lists. Be careful when configuring access-lists: watch out for the wildcard mask and always place the access-list in the right spot. I recommend you configure all access-lists in a testing environment before applying them in production, because a single mistake can block your entire traffic.
   That’s it for this article, I hope I’ve included and described all the aspects of access-lists. Please leave a comment, share and rate it. I wish you all the best and enjoy your day.

WAN – Point-to-Point Protocol (PPP)

In this article I will talk about the Point-to-Point Protocol (PPP) used in point-to-point communications. PPP is one of the most widely used WAN technologies in data networks all around the world. This type of serial connection is mostly used to connect LANs to each other or to connect an enterprise network to a Service Provider. A point-to-point connection between a company and an ISP (Internet Service Provider) is also known as a leased line. PPP can be carried over many WAN technologies like Frame Relay or ATM, and it provides a multi-protocol architecture for TCP/IP, AppleTalk or IPX. I will show you how to configure point-to-point connections, how to troubleshoot them and also how to configure the PAP and CHAP authentication modes.

   I’ve told you that point-to-point connections use serial communications, but what exactly are those, and what is the difference between serial and parallel communication? In serial communications, bits are sent one after the other, while in parallel communications multiple bits are sent together over different lines. You might expect parallel communications to be preferred because more bits travel at once, but parallel links are susceptible to clock skew and interference. Clock skew means that bits sent together from one end do not arrive at the same time at the other end, because the lines must remain synchronized when transmitting over the medium. Parallel lines can also interfere with each other, and bits can be dropped because of this. Serial links, by contrast, can be clocked at much higher rates. Simply put, serial communications are preferred on point-to-point links because they require fewer physical resources (wires and cables), can achieve higher speeds over distance than parallel communications, support longer cable runs and can be better isolated so that data transfers do not suffer from interference. There are many point-to-point WAN standards used today; among the well known ones (I will not talk about each of them, but I’ll add some interesting links if someone is interested):

HSSI (High-Speed Serial Interface) – http://en.wikipedia.org/wiki/High-Speed_Serial_Interface

   The main concept used in point-to-point connections is TDM, or Time Division Multiplexing. In this layer 1 technique, every node that wants to transfer data over the medium receives a timeslot in which it can transfer bits over the physical connection. A multiplexer is responsible for allocating timeslots to the users, and this device also reassembles each data stream at the other end. Remember that these timeslots are interleaved in the physical channel (we’ve talked about the interleaving process in an earlier article from the networking fundamentals section). In the first TDM implementations, timeslots were 8 bits long, but this scheme had a problem: when a user had nothing to send over the channel, the TDM mechanism would still allocate a timeslot to that user, wasting capacity. To address this issue, statistical time-division multiplexing (STDM) was invented. STDM allocates timeslots on demand and uses a buffer to hold data during periods of high traffic. By using this method, STDM ensures that the physical channel doesn’t sit idle while users still have information to transmit.
   You know from earlier articles that point-to-point connections use two devices, a DTE and a DCE. The DTE is part of the CPE (customer premises equipment), while the DCE sits on the ISP side of the local loop. The DCE, which can be a modem or a CSU/DSU device, provides the clock signal for the serial communication. Unfortunately I don’t have pictures of WAN connectors to show you, and I cannot use pictures taken from other places, but if you are interested you can look up some of the most used serial connectors like DB-60, Smart Serial, V.35, X.21, EIA-530 etc. I don’t think it is important to know these connectors or their roles for the CCNA exam, but you can get a general idea by searching for them on www.google.com.

   There are many WAN encapsulation protocols used in serial connections. We will study some of them later, but for now I just want to point out the most used encapsulation protocols today:
HDLC (High-Level Data Link Control) – the default encapsulation protocol used in point-to-point communications (it is enabled by default on all Cisco serial interfaces). It can provide both connection-oriented and connectionless service. The protocol uses ACK messages when sending and receiving frames and runs over synchronous serial links. HDLC adds a special flag that marks the beginning and the end of a frame; the flag is 8 bits long and its value is 01111110. Whenever five consecutive 1s appear in the data stream, HDLC inserts a 0 bit after them (bit stuffing) so that the flag pattern can never appear inside the frame data. It is pretty simple to change the encapsulation protocol used on a serial connection. This is done by typing encapsulation hdlc from the interface configuration mode:

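A minimal example of the command described above (the interface number is an assumption):

```
Router(config)#interface serial 0/0
! Revert to the default Cisco serial encapsulation
Router(config-if)#encapsulation hdlc
```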

Remember that HDLC is the default encapsulation mode used by Cisco routers. To verify the encapsulation protocol, type show interfaces serial [number]:

PPP –  “is a data link protocol commonly used in establishing a direct connection between two networking nodes” from Wikipedia http://en.wikipedia.org/wiki/Point-to-point_protocol. We will talk about the PPP protocol later in this article.
Frame Relay – a data link protocol that uses virtual circuits (VCs) to send and receive data. It is a streamlined successor of the X.25 protocol; I will talk about Frame Relay in a future article.
SLIP (Serial Line Internet Protocol) – an older point-to-point protocol for carrying IP over serial lines, largely replaced by PPP.
The Point-to-Point Protocol is typically used when you need to connect Cisco and non-Cisco devices to each other. It is a standardized serial protocol supported by practically all networking devices, and it has features that cannot be found in HDLC: it can monitor link quality and it supports authentication using the PAP or CHAP protocols (we will talk about these two authentication protocols later in this article). PPP uses HDLC-style framing to encapsulate IP datagrams on point-to-point connections. PPP includes the LCP (Link Control Protocol), used for establishing and configuring connections and for checking the state of point-to-point links. Another component of PPP is the NCP, or Network Control Protocol. NCPs are used to configure network layer protocols such as IP, IPX or AppleTalk over the serial link.
PPP spans the two lowest OSI layers, the physical and data link layers, and interfaces with the network layer through its NCPs. At the physical layer, PPP can be configured on many serial interface types: synchronous, asynchronous or HSSI. The LCP is used to establish, configure, test and terminate connections; you’ll have to know for the CCNA exam that the LCP layer of PPP negotiates the error detection, compression and authentication mechanisms. The NCP layer is used by PPP to encapsulate different network protocols. A PPP connection goes through three phases: link establishment, an optional link-quality determination, and network layer protocol negotiation. There is much more to say about LCP and NCP operation; check the following link from tcpipguide for further details: http://www.tcpipguide.com/free/t_PPPLinkControlProtocolLCP.htm.
The Point-to-Point protocol offers the following options:
authentication – provides two authentication mechanisms, PAP and CHAP.
error detection – using magic numbers, PPP can detect looped-back links, and with link-quality monitoring it keeps the link within an acceptable error rate.
multilink support – a mechanism used to load balance traffic over multiple physical PPP links.
compression – using the Stacker or Predictor algorithms, PPP can reduce the size of frames.
PPP callback – a security mechanism in which one side calls the other back after the initial connection, and only then is the PPP link established.
To configure PPP on a Cisco device, first set the encapsulation type to PPP from the interface configuration mode:

Now, if we want to monitor the quality of the link, we just have to type ppp quality [percentage]. The quality is calculated from the number of packets sent and received; if the link quality drops below the configured percentage, PPP will shut down the link. To set the multilink option, simply type ppp multilink from the interface configuration mode. Compression is enabled with the compress [predictor | stac] command. To verify your PPP configuration use the show interfaces, show running-config and show interfaces serial commands. To troubleshoot PPP you can use the debug ppp commands.
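Put together, the PPP commands described above might look like this on a serial interface (the interface number and the quality percentage are example values):

```
Router(config)#interface serial 0/0
Router(config-if)#encapsulation ppp
! Shut the link down if quality falls below 80%
Router(config-if)#ppp quality 80
! Allow bundling of multiple physical links into one logical link
Router(config-if)#ppp multilink
! Enable Predictor compression
Router(config-if)#compress predictor
```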
   I’ve written earlier that PPP supports two authentication mechanisms, PAP (password authentication protocol) and CHAP (Challenge-Handshake Authentication Protocol):
PAP is a simple authentication mechanism: the two devices participating in PPP link establishment authenticate each other using a username and a password. This is a two-way handshake. The first router sends its credentials to the second router, which grants or denies the connection. PAP sends the credentials in plain text, which is why it’s not a secure method of authentication: the credentials are susceptible to interception. After the link is established, PAP does not ask for the credentials again. Read more about the PAP protocol in this article from Wikipedia: http://en.wikipedia.org/wiki/Password_authentication_protocol.
CHAP, on the other hand, never sends the password itself over the link; it uses the MD5 hash algorithm and repeats the authentication periodically. CHAP is a three-way handshake: the second router first sends a challenge message, then the first router responds with a hash value computed from the challenge and the shared secret. In the third step of the CHAP authentication process, the router checks the received hash against its own calculation and accepts or denies the connection. You can configure the usernames/passwords locally on the routers or use an AAA/TACACS+ server (a server used to authenticate users).
   We will configure a PPP connection between two Cisco routers, R1 and R2, using the PAP authentication method. We will first configure PAP on router R1:
we will first need to configure a username on R1: R1(config)#username R2 password test
Next, PPP encapsulation and PAP authentication must be enabled on R1’s serial interface.
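A sketch of the R1 side (the interface number is an assumption; the username/password pair must match what R2 expects):

```
R1(config)#username R2 password test
R1(config)#interface serial 0/0
R1(config-if)#encapsulation ppp
R1(config-if)#ppp authentication pap
! Credentials that R1 sends to R2 during PAP
R1(config-if)#ppp pap sent-username R1 password test
```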
The same commands must be entered on router R2: R2(config)#username R1 password test
After the username has been configured, PPP encapsulation and PAP authentication must be enabled on R2’s serial interface as well.
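The mirror configuration on R2 could look like this (same assumptions as on R1):

```
R2(config)#username R1 password test
R2(config)#interface serial 0/0
R2(config-if)#encapsulation ppp
R2(config-if)#ppp authentication pap
! Credentials that R2 sends to R1 during PAP
R2(config-if)#ppp pap sent-username R2 password test
```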
To configure CHAP on these two routers, simply create the usernames/passwords and then type ppp authentication chap from the interface configuration mode. If for some reason the PPP authentication fails, you can troubleshoot it with the debug ppp authentication command.
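For CHAP, a minimal sketch on both routers might look like this. By default each router uses its hostname as its CHAP name, so the usernames cross-reference each other and the passwords must match on both sides (interface numbers are assumptions):

```
R1(config)#username R2 password test
R1(config)#interface serial 0/0
R1(config-if)#encapsulation ppp
R1(config-if)#ppp authentication chap

R2(config)#username R1 password test
R2(config)#interface serial 0/0
R2(config-if)#encapsulation ppp
R2(config-if)#ppp authentication chap
```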
   I hope I’ve covered all the main components that make up the PPP protocol. If you think there is something more to add here, don’t hesitate to leave a comment or post any question that you have. I wish you all the best, please share this article with others and stay tuned for more to come.

WAN – Introduction to Wide Area Networks

Nowadays, large corporations have multiple branches across a continent or even around the world. You can imagine that a company spanning multiple territories has a large data network. To interconnect multiple branches, WAN connections are used because they offer a good balance of cost and speed. You can think of an enterprise network as multiple LANs interconnected. In this article I will talk about the main elements that are part of WAN connections; it will serve as an introduction to the following articles. As you already know, a Local Area Network interconnects devices like computers, servers and printers, and is usually located in a single geographical area. Wide Area Networks span large geographical areas and are basically a collection of interconnected LANs. By leasing connections from ISPs, companies can connect networks situated in different territories and countries around the globe. WANs typically use serial connections to interconnect smaller networks, because serial links are cost-effective over long distances. The Internet is actually a very large WAN: multiple networks (ISPs and large enterprises) are interconnected to form one huge network in which devices can communicate with each other. VPN connections and branch or regional offices all rely on WAN connections. Enterprises grow from small companies into large ones over time, so the WAN usually grows incrementally rather than being designed as a complete hierarchical model from the beginning.

   We will talk about WAN connections and the technologies used in the following articles, but for now you’ll have to know that WANs operate at the two lowest OSI layers, the Physical and Data Link layers. These layers are responsible for frame encapsulation, flow control and the physical delivery of data over the medium. We talked about these layers a while ago in the networking fundamentals articles. The Data Link layer receives packets from the Network layer, encapsulates them into frames and then hands these frames to the Physical layer for transmission. The lowest layer of the OSI model provides all the physical connections, electrical standards and transmission elements needed to transfer data from one point to another. To talk about WAN connections, we first need to understand all the elements and devices involved in this technology:
DCE device (Data Communication Equipment) – hardware device that transfers data from the service provider to the local network. You probably know by now that DCE is responsible for setting the clock signal in a serial connection.
DTE device (Data Terminal Equipment) – device used to forward data from the local network to the local loop using the DCE. The local loop is the physical connection between the DCE device and the ISP’s network.
Demarcation Point – the point where the local network is separated from the ISP’s network.
CPE (Customer Premises Equipment) – all devices that are located in the local network (routers, switches, modems, storage devices, etc).
These are some of the terms used when talking about WAN connections. Some of the hardware devices most common in WAN connections are:
Modem – hardware device that converts digital signals into analog signals for transmission over telephone lines, and converts the analog signals back into digital at the receiving end.
Router – the most common hardware device used in WAN technologies. Routers use CSU/DSU devices or modems to connect to the Provider’s Wide Area Network. Core routers make up the backbone of every computer network. A CSU/DSU device is used to receive frames from the Service Provider and to forward them to the local network.
WAN switch – hardware device that works at the data link layer. It is used to receive and forward frames that are part of WAN technologies such as Frame Relay, HDLC or ATM.
   Some of the well known WAN technologies used today include Circuit Switched, Packet Switched and Point-to-Point. I will talk about point-to-point connections (PPP) and Packet Switched connections (Frame Relay) in a later article because these two are needed for the CCNA exam. In Circuit Switched networks such as ISDN or PSTN, devices must first establish a circuit, from source to destination, before sending data over the medium. Circuit Switched networks use TDM (Time Division Multiplexing) technology in which every node receives a time interval in which it can transmit information over the network. In Packet Switched networks, devices do not have to establish dedicated lines before transmitting. Data is encapsulated into packets which can be routed using different paths. A router can send a piece of data using one path and another piece of the same data using another path. There are two types of Packet switched technologies used today:
Connectionless – packets sent over networks include all the information needed to route them (source and destination addresses).
Connection-oriented – each packet follows a predefined route and carries an identifier for that particular path. In Frame Relay technology, these identifiers are called DLCIs (Data Link Connection Identifiers). Multiple DLCIs are used to forward packets from one point to another, and together they form a virtual circuit (VC).
   As a conclusion, there are many technologies that can be used in WAN implementations today. You can use either a public or a private infrastructure to connect multiple LANs. When using the public Internet, VPN connections must be used to secure, authenticate and forward packets that belong to the same company. In private WANs, switched or dedicated lines can be used. As I’ve told you earlier, switched networks can be either circuit-switched (ISDN or PSTN) or packet-switched (Frame Relay, ATM or X.25). With leased lines, dedicated connections (E1, E3, T1, T3 etc.) are used to forward packets from branch offices to the main office.
That’s it for this article folks, I hope you’ve gotten a general idea of WANs. In the following articles we will continue talking about the different WAN technologies used today. Have a wonderful day and enjoy your IT training.

Switching – Wireless concepts

Hello dear readers,

   This article will focus on explaining the elements, roles and functionality of wireless connections. I always had trouble remembering all the encryption methods and algorithms and all the wireless technologies with their names and numbers, so I will try to put them all together to help anyone in the same situation. I hope you will enjoy this post; feel free to leave a comment or post any question that you have. I really appreciate it when you are straightforward and say whatever is on your mind. I know there are probably a lot more things to say about wireless technology, which is why I encourage you to add whatever there is to say.
   Usually, businesses today use both wired and wireless connections. Wireless local area networks (WLANs) are more and more in demand in companies around the world. The main qualities of wireless networks are their flexibility, ease of management and easy implementation. I will explain in this article how to configure, secure and implement a wireless network. Another aspect you will need to consider is the coverage area of wireless networks. People have moved steadily from fixed workstations to laptops, PDAs, mobile phones etc., and along with this migration, wireless networks became crucial. Everywhere from workplaces to home networks, people prefer wireless networking over physical connections. There are four main wireless network categories used today:
PAN (Personal Area Network)
LAN (Local Area Network)
MAN (Metropolitan Area Network)
WAN (Wide Area Network)
I will not talk much about these because they are not studied for the CCNA exam. You will have to know that they differ from each other in terms of area coverage, speed and applications. You can read more on this article from Wikipedia: http://en.wikipedia.org/wiki/Wireless_network.
WLAN connections use the RF (radio frequency) spectrum and are standardized as the 802.11 wireless LAN family. RF uses the air as the medium to carry signals from one point to another. These waves can travel in any direction and through almost all materials. One big problem with wireless networks is the possibility of interference with other RF signals. Two devices that exchange RF signals must use the same transmission channel. Devices that use wireless connections must have a WNIC (Wireless Network Interface Card) installed. Unlike Ethernet networks, wireless networks communicate using Access Points (APs), which are physical wireless devices. Another particularity of wireless technologies is the use of a collision avoidance mechanism instead of collision detection. I don’t have a table with all the wireless standards, but here is a link from Wikipedia listing the 802.11 standards used today: http://en.wikipedia.org/wiki/IEEE_802.11. What you will have to remember is that 802.11a and 802.11g have higher data rates (54 Mbps) than 802.11b (11 Mbps) and use a different modulation technology: 802.11b uses DSSS (Direct Sequence Spread Spectrum), while 802.11a and 802.11g use OFDM (Orthogonal Frequency Division Multiplexing). Remember that the coverage area and the channels used differ from one standard to another. The 802.11n standard is a newer and much faster technology that uses a different transmission technique and can operate in more than one band.
But what about the components needed in wireless communications? As I’ve written earlier, wireless NICs are the main hardware component used to transmit RF waves from one point to another; they encode data into RF signals through modulation. From mobile phones to laptops and even desktop computers, all use Wireless Network Interface Cards to communicate across the radio frequency spectrum. Usually, devices communicate with each other through Access Points (APs). These are physical devices, used in wireless communications, that convert Ethernet frames (802.3) into wireless frames (802.11). Users must associate their devices with an AP in order to communicate with other wireless devices. Unlike the Ethernet standard, wireless connections use CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance). The mechanism is pretty simple: devices must first check the RF spectrum before transmitting frames, and if a signal is detected they must wait until the medium is free. Once frames are received by the AP, ACK (acknowledgement) messages are exchanged between the two transmitting nodes. This technology has a problem: wireless signals are affected by attenuation, which means that the farther wireless devices are from the AP, the weaker the signal is. This issue introduced the hidden node problem (from Wikipedia: “In wireless networking, the hidden node problem or hidden terminal problem occurs when a node is visible from a wireless access point (AP), but not from other nodes communicating with said AP” http://en.wikipedia.org/wiki/Hidden_node_problem). To connect one wireless network to another, you can use a classic network router or a wireless router, which is simply a wireless device that acts as a gateway for the devices behind it.
When configuring a wireless Access Point, you will have to specify the SSID (Service Set Identifier) which is an element that identifies wireless connections. Also, always check to see if the mode (it can be mixed-mode to support multiple standards or single-mode to support only one) that the AP uses is the one that you need (802.11a,b,g or n) and that the channels are different from one AP to another in order to avoid interference. Check this Wikipedia link to see an image with the available channels: en.wikipedia.org/wiki/List_of_WLAN_channels.
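On a Cisco autonomous (IOS-based) access point, the SSID and channel settings described above can be sketched roughly like this; the SSID name and channel number are example values, and the exact syntax varies between platforms and IOS releases:

```
! Define the SSID and allow open authentication
ap(config)#dot11 ssid OFFICE-WLAN
ap(config-ssid)#authentication open
! Broadcast the SSID in beacons
ap(config-ssid)#guest-mode
ap(config-ssid)#exit
! Bind the SSID to the radio and pick a channel
ap(config)#interface Dot11Radio0
ap(config-if)#ssid OFFICE-WLAN
ap(config-if)#channel 6
ap(config-if)#no shutdown
```

Neighbouring APs should be placed on non-overlapping channels (for example 1, 6 and 11 in the 2.4 GHz band) to avoid interference.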
In terms of wireless topology, there are three main topologies used today:
Ad hoc – wireless networks that do not use APs. Devices communicate directly with each other (low coverage or BSA-Basic Service Area)
BSS (Basic Service Set) – a wireless network that uses an AP to provide the wireless communication channel between devices.
ESS (Extended Service Set) – multiple BSSs connected to each other to provide an extended area of wireless coverage (multiple APs are connected to form a larger wireless network).
The association between a wireless device and an AP is made by using the following steps:
APs send beacons which are wireless messages that contain the SSID, speed rates and the authentication method. The beacons are received by wireless devices which will try to establish a connection with the AP. If wireless hosts are already configured for one wireless network, they will send probe messages to establish a wireless connection with an already known SSID. After this step is complete, the authentication is made between the client and the AP using the configured method. We will talk in a moment about authentication methods. If the authentication is successful then the two devices establish an association.
Probably the most important aspect you need to consider when implementing wireless networks is security. Because RF signals travel through the air, they can be intercepted by malicious users. There are different security methods and technologies used today, so you’ll have to make sure you choose the right one for your needs. Attacks like man-in-the-middle, DoS or DDoS (Denial of Service or Distributed Denial of Service) are a real threat to wireless networks. In the first implementations of wireless networks, two authentication methods were introduced: open and WEP (Wired Equivalent Privacy). Open means that there is no security involved: a device simply asks an AP to authenticate, and the AP grants access to the network. The WEP authentication method uses a key shared between the AP and the client. The client sends an authentication request to the AP; the AP receives the request and sends a challenge text to the client; the client encrypts the text using the shared key and sends the encrypted message back to the AP. The AP decrypts the message using its own key, and if the text matches the one that was sent, the client is authenticated. Even though this mechanism introduced a certain level of security, it could be cracked easily because the shared key could be recovered by an attacker. With the key in hand, the attacker could authenticate with the AP and gain access to the network resources. The Wi-Fi Protected Access (WPA) protocol was introduced as a better solution than WEP. It uses a preshared key (PSK) together with a new encryption algorithm, TKIP or Temporal Key Integrity Protocol, which changes the keys over time (read more about this protocol in this article from Wikipedia: http://en.wikipedia.org/wiki/Temporal_Key_Integrity_Protocol).
The most used standard in today’s enterprise networks is 802.11i/WPA2. It uses AES encryption with dynamic key management and can authenticate clients against a Remote Authentication Dial In User Service (RADIUS) database. Enterprise networks, besides the normal authentication mechanism, often use a login mechanism backed by an authentication server. The following link from Cisco’s website shows how EAP authentication works: http://www.cisco.com/en/US/i/000001-100000/65001-70000/65001-66000/65583.jpg . Other known security mechanisms are MAC address filtering and disabling SSID broadcasting. These methods are weak on their own because they can easily be bypassed.
   As a conclusion, always remember to take the necessary steps: from installing the AP and configuring the SSID, band, mode and channels, to implementing wireless security using the WPA or WPA2 standards (authentication and encryption). Ensure that the APs will not suffer from interference by placing them in the right locations and by selecting the appropriate channels. Choose the right hardware devices to set up your wireless network and design a wireless coverage map.
I think that’s it for this article folks, I hope I’ve covered all the elements of wireless networks, please share it to others and rate it. Have a wonderful day and stay tuned because more will come.

Switching – Inter-vlan routing

This article will focus on explaining the basic principles of inter-VLAN routing, a mechanism that provides communication between different VLANs. Because each VLAN is its own broadcast domain, devices from separate VLANs cannot communicate with each other directly. As the name suggests, inter-VLAN routing is done by connecting a router to a switched network. The router acts as the point of contact between two or more VLANs. I will try to explain all the elements that make up inter-VLAN routing, and I will also show you how to configure it. What you have to remember so far is that inter-VLAN routing is a mechanism used to forward traffic from one VLAN to another.
Older implementations of inter-VLAN routing required the router to have one physical interface for each VLAN. Newer implementations like “router-on-a-stick” can use one physical interface for all VLANs. “Router-on-a-stick” added a new feature: a router can have multiple subinterfaces on a single physical interface. A router configured with subinterfaces can receive tagged traffic coming over a trunk link, so the router must be connected to a switch port set to trunk mode. Subinterfaces are configured in software and act like real interfaces (each one must have an IP address and subnet mask configured). Basically, traffic is sent and received through one physical interface, and the router makes its decisions based on the subinterface configuration and the VLAN tags arriving over the trunk link. The router acts somewhat like a switch between subinterfaces. As I’ve told you previously, each subinterface must have an IP address that is part of the corresponding VLAN’s subnet; that subinterface IP acts as the default gateway for the devices in that particular VLAN.
If you’ve read all my networking articles, you know by now how to configure interfaces on a router. The limitation of the older implementation of inter-VLAN routing was that for each new VLAN added, the router had to provide a dedicated physical interface. Using the newer design, one physical interface can serve several VLANs, with a subinterface assigned separately to each VLAN. A subinterface configuration looks similar to a physical interface configuration: you have to specify an IP address and subnet mask. Because the physical interface is connected to a trunk port, when configuring subinterfaces you also have to specify the encapsulation type and VLAN ID for each one. I will show you in a moment how to configure subinterfaces. The benefit of using subinterfaces is visible from the start: cost is reduced because you use only one physical interface for many VLANs. Of course, subinterface configuration is more complex than physical interface configuration, and throughput is reduced since all subinterfaces share the bandwidth of one physical interface.
I will now show you how to configure inter-VLAN routing without using subinterfaces, so you can see the difference between the two designs. Assuming that you’ve already configured VLANs on the switches connected to the router, I will jump directly to the router configuration (if you haven’t configured VLANs yet, check out an earlier networking post). Let’s take the following topology:

Vlan topology
There are three VLANs created here: VLAN 10, 20 and 30. On the switch side you’d have to create the VLANs and then assign switch ports to the appropriate VLAN. On the router side, you would have to assign an IP configuration (address and mask) to each physical interface according to the VLAN subnet it serves. Because these subnets are directly connected, no further configuration is required. Verify your configuration using the show running-config and show ip route commands. Remember from this example that each switch is connected to a different physical port on the router.
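To make the legacy approach concrete, here is a rough sketch of the per-interface router configuration described above. The interface names and IP addressing are examples only; adjust them to match your own topology:

```
! Legacy inter-VLAN routing: one physical interface per VLAN
! (interface names and subnets below are examples)
Router(config)# interface FastEthernet0/0
Router(config-if)# ip address 192.168.10.1 255.255.255.0
Router(config-if)# no shutdown
Router(config-if)# interface FastEthernet0/1
Router(config-if)# ip address 192.168.20.1 255.255.255.0
Router(config-if)# no shutdown
Router(config-if)# interface FastEthernet0/2
Router(config-if)# ip address 192.168.30.1 255.255.255.0
Router(config-if)# no shutdown
```

Each interface connects to an access port in one VLAN, and each interface IP becomes the default gateway for the hosts in that VLAN.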
   The Router-on-a-stick design would look like this:
Router on a stick
Port F0/1 (on the middle switch) will be configured as a trunk link, and it will carry tagged frames from all VLANs (10, 20 and 30). The router will receive frames on its physical port for one VLAN using a subinterface and forward traffic out the same physical port for another VLAN using a different subinterface. After configuring the VLANs and setting port F0/1 as a trunk, on the router side you will have to take the following steps:
1. enter the global configuration mode.
2. select the desired physical interface, for example interface FastEthernet0/1, and type no shutdown.
3. enter each subinterface, configure the encapsulation type for each VLAN and set the IP configuration.
This is how the inter-VLAN routing configuration would look:
Router on a stick configuration
To enter a subinterface, type interface [physical interface id].[subinterface number]. I recommend using the same subinterface number as the VLAN ID, for example interface FastEthernet 0/1.10 for VLAN 10. Next, specify the encapsulation type and VLAN for each subinterface (encapsulation dot1Q 10, where 10 is the VLAN ID). Finally, add the IP configuration (ip address 192.168.1.1 255.255.255.0). After this step is complete, verify your configuration and use the ping and traceroute commands to test communication between devices in different VLANs.
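Putting the steps above together, a router-on-a-stick configuration for the three VLANs could look roughly like this. The subnets chosen here are examples, and the switch-side commands assume the router hangs off port F0/1 of the middle switch:

```
! Router side: one subinterface per VLAN on the trunked interface
Router(config)# interface FastEthernet0/1
Router(config-if)# no shutdown
Router(config-if)# interface FastEthernet0/1.10
Router(config-subif)# encapsulation dot1Q 10
Router(config-subif)# ip address 192.168.10.1 255.255.255.0
Router(config-subif)# interface FastEthernet0/1.20
Router(config-subif)# encapsulation dot1Q 20
Router(config-subif)# ip address 192.168.20.1 255.255.255.0
Router(config-subif)# interface FastEthernet0/1.30
Router(config-subif)# encapsulation dot1Q 30
Router(config-subif)# ip address 192.168.30.1 255.255.255.0

! Switch side: the port facing the router must be a trunk
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode trunk
```

Note that encapsulation dot1Q must be entered before the ip address command on a subinterface, otherwise the router will reject the address.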
   I think I’ve covered all the main aspects of inter-VLAN routing. If you think there is more to add, please leave a comment or post a question. I hope you find this article interesting, and stay tuned because more will come. I wish you all the best, folks.