Wednesday, November 21, 2012

LAN SWITCHING


LAN Switching

A LAN switch is a device that provides much higher port density at a lower cost than traditional bridges.  For this reason, LAN switches can accommodate network designs featuring fewer users per segment, thereby increasing the average available bandwidth per user.  LAN switches are being used to replace hubs in the wiring closet because user applications are demanding greater bandwidth.
The trend toward fewer users per segment is known as microsegmentation.  Microsegmentation allows the creation of private or dedicated segments, that is, one user per segment.  Each user receives instant access to the full bandwidth and does not have to contend for available bandwidth with other users.  As a result, collisions do not occur.  A LAN switch forwards frames based on either the frame's Layer 2 address (Layer 2 LAN switch), or in some cases, the frame's Layer 3 address (multi-layer LAN switch).  A LAN switch is also called a frame switch because it forwards Layer 2 frames, whereas an ATM switch forwards cells.
LAN Switch Operation
LAN switches are similar to transparent bridges in functions such as learning the topology, forwarding, and filtering. These switches also support several new and unique features, such as dedicated communication between devices, multiple simultaneous conversations, full-duplex communication, and media-rate adaptation.
Dedicated collision-free communication between network devices increases file-transfer throughput.  Multiple simultaneous conversations can occur by forwarding, or switching, several packets at the same time, thereby increasing network capacity by the number of conversations supported.  Full-duplex communication effectively doubles the throughput, while with media-rate adaptation, the LAN switch can translate between 10 and 100 Mbps, allowing bandwidth to be allocated as needed.  Deploying LAN switches requires no change to existing hubs, network interface cards (NICs), or cabling.
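The learning, forwarding, and filtering behavior described above can be sketched in a few lines of Python. This is a hypothetical model for illustration only (the class and method names are invented, not any vendor's API): the switch records the port on which each source MAC address arrives, then forwards, filters, or floods based on a destination lookup.

class LanSwitch:
    def __init__(self):
        # MAC address table: source MAC -> port on which it was learned
        self.mac_table = {}

    def receive(self, in_port, src_mac, dst_mac):
        # Learning: remember which port the source address lives on
        self.mac_table[src_mac] = in_port
        out_port = self.mac_table.get(dst_mac)
        if out_port is None:
            return "flood out all ports except port %d" % in_port   # unknown destination
        if out_port == in_port:
            return "filter (destination is on the same segment)"
        return "forward out port %d" % out_port                     # dedicated path to one segment

switch = LanSwitch()
print(switch.receive(1, "00-0c-41-53-40-d3", "ff-ff-ff-ff-ff-ff"))   # flooded: destination unknown
print(switch.receive(2, "00-0c-29-aa-bb-cc", "00-0c-41-53-40-d3"))   # forwarded out port 1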
LAN Switching Forwarding
LAN switches can be characterized by the forwarding method they support. In the store-and-forward switching method, error checking is performed and erroneous frames are discarded. With the cut-through switching method, latency is reduced by eliminating error checking.
With the store-and-forward switching method, the LAN switch copies the entire frame into its onboard buffers and computes the cyclic redundancy check (CRC).  The frame is discarded if it contains a CRC error or if it is a runt (less than 64 bytes including the CRC) or a giant (more than 1518 bytes including the CRC).  If the frame does not contain any errors, the LAN switch looks up the destination address in its forwarding, or switching, table and determines the outgoing interface.  It then forwards the frame toward its destination.
With the cut-through switching method, the LAN switch copies only the destination address (the first 6 bytes following the preamble) into its onboard buffers.  It then looks up the destination address in its switching table, determines the outgoing interface, and forwards the frame toward its destination.  A cut-through switch provides reduced latency because it begins to forward the frame as soon as it reads the destination address and determines the outgoing interface.
LAN Switching Bandwidth
LAN switches also can be characterized according to the proportion of bandwidth allocated to each port.  Symmetric switching provides evenly distributed bandwidth to each port, while asymmetric switching provides unlike, or unequal, bandwidth between some ports.
An asymmetric LAN switch provides switched connections between ports of unlike bandwidths, such as a combination of 10BaseT and 100BaseT.  This type of switching is also called 10/100 switching.  Asymmetric switching is optimized for client-server traffic flows where multiple clients simultaneously communicate with a server, requiring more bandwidth dedicated to the server port to prevent a bottleneck at that port.
A symmetric switch provides switched connections between ports with the same bandwidth, such as all 10BaseT or all 100BaseT.  Symmetric switching is optimized for a reasonably distributed traffic load, such as in a peer-to-peer desktop environment.
LAN Switching and the OSI Model
LAN switches can be categorized according to the OSI layer at which they filter and forward, or switch, frames.  These categories are: Layer 2, Layer 2 with Layer 3 features, or multi-layer.
A Layer 2 LAN switch is operationally similar to a multiport bridge but has a much higher capacity and supports many new features, such as full-duplex operations.  A Layer 2 LAN switch performs switching and filtering based on the OSI Data Link layer MAC address.  As with bridges, it is completely transparent to network protocols and user applications.
A Layer 2 LAN switch with Layer 3 features can make switching decisions based on more information than just the Layer 2 MAC address.  Such a switch might incorporate some Layer 3 traffic-control features, such as broadcast and multicast traffic management, security through access lists, and IP fragmentation.
A multi-layer switch makes switching and filtering decisions on the basis of OSI data link layer (Layer 2) and OSI network-layer (Layer 3) addresses.  This type of switch dynamically decides whether to switch (Layer 2) or route (Layer 3) incoming traffic.  A multi-layer LAN switch switches within a workgroup and routes between different workgroups.

LAN Switching Summary
LAN switching technology improves the performance of traditional Ethernet, FDDI, and Token Ring technologies without requiring costly wiring upgrades or time-consuming host reconfiguration. The low price per port allows the deployment of LAN switches so that they decrease segment size and increase available bandwidth. VLANs make it possible to extend the benefit of switching over a network of LAN switches and other switching devices.




Layer 2 Switching Methods

LAN switches are characterized by the forwarding method that they support, such as a store-and-forward switch, cut-through switch, or fragment-free switch. In the store-and-forward switching method, error checking is performed against the frame, and any frame with errors is discarded. With the cut-through switching method, no error checking is performed against the frame, which makes forwarding through the switch faster than with store-and-forward switching.

Store-and-Forward Switching

Store-and-forward switching means that the LAN switch copies each complete frame into the switch memory buffers and computes a cyclic redundancy check (CRC) for errors. CRC is an error-checking method that uses a mathematical formula, based on the number of bits (1s) in the frame, to determine whether the received frame is errored. If a CRC error is found, the frame is discarded. If the frame is error free, the switch forwards the frame out the appropriate interface port, as illustrated in Figure 6-7.
Figure 6-7 Store-and-Forward Switch Discarding a Frame with a Bad CRC
An Ethernet frame is discarded if it is smaller than 64 bytes in length (a runt) or larger than 1518 bytes in length (a giant), as illustrated in Figure 6-8.
NOTE
Some switches can be configured to carry giant, or jumbo, frames.
If the frame does not contain any errors, and is not a runt or a giant, the LAN switch looks up the destination address in its forwarding, or switching, table and determines the outgoing interface. It then forwards the frame toward its intended destination.

Store-and-Forward Switching Operation

Store-and-forward switches store the entire frame in internal memory and check the frame for errors before forwarding the frame to its destination. Store-and-forward switch operation ensures a high level of error-free network traffic, because bad data frames are discarded rather than forwarded across the network, as illustrated in Figure 6-9.
Figure 6-8 Runts and Giants in the Switch
Figure 6-9 Store-and-Forward Switch Examining Each Frame for Errors Before Forwarding to Destination Network Segment
The store-and-forward switch shown in Figure 6-9 inspects each received frame for errors before forwarding it on to the frame's destination network segment. If a frame fails this inspection, the switch drops the frame from its buffers, and the frame is thrown into the proverbial bit bucket.
A drawback to the store-and-forward switching method is performance: the switch has to store the entire data frame before checking for errors and forwarding it. This error checking results in high switch latency (delay). If multiple switches are connected, with the data being checked at each switch point, total network performance can suffer as a result. Another drawback to store-and-forward switching is that the switch requires more memory and processor (central processing unit, CPU) cycles to perform the detailed inspection of each frame than cut-through or fragment-free switching does.
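The decision process just described can be summarized in a short, hypothetical Python sketch. It assumes a frame that has already been stripped of its preamble and a simple MAC-to-port dictionary; zlib.crc32 merely stands in for the Ethernet FCS calculation, whose exact bit ordering differs.

import zlib

RUNT_MIN = 64      # minimum legal frame size in bytes, including the 4-byte CRC
GIANT_MAX = 1518   # maximum legal frame size in bytes, including the 4-byte CRC

def store_and_forward(frame, mac_table):
    # Buffer the entire frame, then validate size and checksum before forwarding
    if len(frame) < RUNT_MIN:
        return "discard: runt"
    if len(frame) > GIANT_MAX:
        return "discard: giant"
    body, received_crc = frame[:-4], frame[-4:]
    if zlib.crc32(body).to_bytes(4, "little") != received_crc:
        return "discard: CRC error"
    dst_mac = frame[0:6]                       # destination address is the first 6 bytes
    out_port = mac_table.get(dst_mac)
    return "forward out port %s" % out_port if out_port else "flood"

Only after every check passes does the lookup and forwarding step run, which is exactly where the extra latency of this method comes from.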

Cut-Through Switching

With cut-through switching, the LAN switch copies into its memory only the destination MAC address, which is located in the first 6 bytes of the frame following the preamble. The switch looks up the destination MAC address in its switching table, determines the outgoing interface port, and forwards the frame on to its destination through the designated switch port. A cut-through switch reduces delay because the switch begins to forward the frame as soon as it reads the destination MAC address and determines the outgoing switch port, as illustrated in Figure 6-10.
The cut-through switch shown in Figure 6-10 inspects each received frame's header to determine the destination before forwarding on to the frame's destination network segment. Frames with and without errors are forwarded in cut-through switching operations, leaving the error detection of the frame to the intended recipient. If the recipient determines that the frame is errored, the frame is thrown into the bit bucket and discarded from the network.
Figure 6-10 Cut-Through Switch Examining Each Frame Header Before Forwarding to Destination Network Segment
Cut-through switching was developed to reduce the delay in the switch processing frames as they arrive at the switch and are forwarded on to the destination switch port. The switch pulls the frame header into its port buffer. When the destination MAC address is determined by the switch, the switch forwards the frame out the correct interface port to the frame's intended destination.
Cut-through switching reduces latency inside the switch. If the frame was corrupted in transit, however, the switch still forwards the bad frame. The destination receives this bad frame, checks the frame's CRC, and discards it, forcing the source to resend the frame. This process wastes bandwidth and, if it occurs too often, network users experience a significant slowdown on the network. In contrast, store-and-forward switching prevents errored frames from being forwarded across the network and provides for quality of service (QoS) in managing network traffic flow.
NOTE
Today's switches don't suffer the network latency that older (legacy) switches labored under, which minimizes the effect switch latency has on your traffic and makes them well suited for a store-and-forward environment.

Cut-Through Switching Operation

Cut-through switches do not perform any error checking of the frame because the switch looks only for the frame's destination MAC address and forwards the frame out the appropriate switch port. Cut-through switching results in low switch latency. The drawback, however, is that bad data frames, as well as good frames, are sent to their destinations. At first blush, this might not sound bad because most network cards do their own frame checking by default to ensure good data is received. You might find that if your network is broken down into workgroups, the likelihood of bad frames or collisions might be minimized, in turn making cut-through switching a good choice for your network.
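For contrast with the store-and-forward sketch earlier, here is a minimal, hypothetical cut-through decision in Python. It reads only the 6-byte destination address from an incoming byte stream and begins forwarding immediately; no size or CRC check is performed, so errored frames are relayed along with good ones.

import io

def cut_through(frame_stream, mac_table):
    dst_mac = frame_stream.read(6)            # destination MAC: the first 6 bytes after the preamble
    out_port = mac_table.get(dst_mac, "all ports (flood)")
    # The remaining bytes are relayed as they arrive rather than buffered and validated
    return "begin forwarding out %s while the rest of the frame is still arriving" % out_port

mac_table = {b"\x00\x0c\x41\x53\x40\xd3": "port 1"}
frame = io.BytesIO(b"\x00\x0c\x41\x53\x40\xd3" + b"\x00" * 100)   # invented example frame
print(cut_through(frame, mac_table))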

Fragment-Free Switching

Fragment-free switching is also known as runtless switching and is a hybrid of cut-through and store-and-forward switching. Fragment-free switching was developed to solve the late-collision problem.
NOTE
Recall that when two systems' transmissions occur at the same time, the result is a collision. Collisions are a part of Ethernet communications and do not imply any error condition. A late collision is similar to an Ethernet collision, except that it occurs after all hosts on the network should have been able to notice that a host was already transmitting.
A late collision indicates that another system attempted to transmit after a host has transmitted at least the first 60 bytes of its frame. Late collisions are often caused by an Ethernet LAN being too large and therefore needing to be segmented. Late collisions can also be caused by faulty network devices on the segment and duplex (for example, half-duplex/full-duplex) mismatches between connected devices.

Fragment-Free Switching Operation

Fragment-free switching works like cut-through switching with the exception that a switch in fragment-free mode stores the first 64 bytes of the frame before forwarding. Fragment-free switching can be viewed as a compromise between store-and-forward switching and cut-through switching. The reason fragment-free switching stores only the first 64 bytes of the frame is that most network errors and collisions occur during the first 64 bytes of a frame.
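A fragment-free switch can be sketched the same way, under the simplifying assumption that the whole frame is already in hand; a real switch makes this check as the first 64 bytes arrive. Anything shorter than 64 bytes is treated as a collision fragment and dropped, and everything else is forwarded without a CRC check, just as in cut-through mode.

def fragment_free(frame, mac_table):
    if len(frame) < 64:
        return "discard: collision fragment"   # errors and collisions usually show up in the first 64 bytes
    dst_mac = frame[0:6]
    out_port = mac_table.get(dst_mac, "all ports (flood)")
    return "forward out %s without computing the CRC" % out_port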
NOTE
Different methods work better at different points in the network. For example, cut-through switching is best for the network core where errors are fewer, and speed is of utmost importance. Store-and-forward is best at the network access layer where most network problems and users are located.

Layer 3 Switching

Layer 3 switching takes switching beyond the data link layer. Up to now, this discussion has concentrated on switching and bridging at the data link layer (Layer 2) of the Open System Interconnection (OSI) model. When bridge technology was first developed, it was not practical to build wire-speed bridges with large numbers of high-speed ports because of the manufacturing cost involved. With improved technology, many functions previously implemented in software were moved into the hardware, increasing performance and enabling manufacturers to build reasonably priced wire-speed switches.
Whereas bridges and switches work at the data link layer (OSI Layer 2), routers work at the network layer (OSI Layer 3). Routers provide functionality beyond that offered by bridges or switches. As a result, however, routers entail greater complexity. Like early bridges, routers were often implemented in software, running on a special-purpose processing platform, such as a personal computer (PC) with two network interface cards (NICs) and software to route data between each NIC, as illustrated in Figure 6-11.
Figure 6-11 PC Routing with Two NICs
The early days of routing involved a computer and two NIC cards, not unlike two people having a conversation, but having to go through a third person to do so. The workstation would send its traffic across the wire, and the routing computer would receive it on one NIC, determine that the traffic would have to be sent out the other NIC, and then resend the traffic out this other NIC.
NOTE
In the same way that a Layer 2 switch is another name for a bridge, a Layer 3 switch is another name for a router. This is not to say that a Layer 3 switch and a router operate the same way. Layer 3 switches make decisions based on the port-level Internet Protocol (IP) addresses, whereas routers make decisions based on a map of the Layer 3 network (maintained in a routing table).
Multilayer switching is a switching technique that switches at both the data link (OSI Layer 2) and network (OSI Layer 3) layers. To enable multilayer switching, LAN switches must use store-and-forward techniques because the switch must receive the entire frame before it performs any protocol layer operations, as illustrated in Figure 6-12.
Figure 6-12 Layer 3 (Multilayer) Switch Examining Each Frame for Errors Before Determining the Destination Network Segment (Based on the Network Address)
Similar to a store-and-forward switch, with multilayer switching the switch pulls the entire received frame into its memory and calculates its CRC. It then determines whether the frame is good or bad. If the CRC carried in the frame matches the CRC calculated by the switch, the destination address is read and the frame is forwarded out the correct switch port. If the CRC does not match, the frame is discarded. Because this type of switching waits for the entire frame to be received before forwarding, port latency times can become high, which can delay network traffic.

Layer 3 Switching Operation

You might be asking yourself, "What's the difference between a Layer 3 switch and a router?" The fundamental difference is that Layer 3 switches have optimized hardware to pass data traffic as fast as Layer 2 switches. However, Layer 3 switches make decisions regarding how to transmit traffic at Layer 3, just as a router does.
NOTE
Within the LAN environment, a Layer 3 switch is usually faster than a router because it is built on switching hardware. Bear in mind that the Layer 3 switch is not as versatile as a router, so do not discount the use of a router in your LAN without first examining your LAN requirements, such as the use of network address translation (NAT).
Before going forward with this discussion, recall the following points:
·         A switch is a Layer 2 (data link) device with physical ports; it communicates via frames that are placed onto the wire at Layer 1 (physical).
·         A router is a Layer 3 (network) device that communicates with other routers with the use of packets, which in turn are encapsulated inside frames.
Routers have interfaces for connection into the network medium. For a router to route data over the Ethernet, for instance, the router requires an Ethernet interface, as illustrated in Figure 6-13.
A serial interface is required for the router connecting to a wide-area network (WAN), and a Token Ring interface is required for the router connecting to a Token Ring network.
A simple network made up of two network segments and an internetworking device (in this case, a router) is shown in Figure 6-14.
Figure 6-13 Router Interfaces
The router in Figure 6-14 has two Ethernet interfaces, labeled E0 and E1. The primary function of the router is determining the best network path in a complex network. A router has three ways to learn about networks and determine the best path: through locally connected ports, static route entries, and dynamic routing protocols. The router uses this learned information to choose the best path to a destination. Some of the more common routing protocols used include Routing Information Protocol (RIP), Open Shortest Path First (OSPF), Interior Gateway Routing Protocol (IGRP), and Border Gateway Protocol (BGP).
Figure 6-14 Two-Segment Network with a Layer 3 Router
NOTE
Routing protocols are used by routers to share information about the network. Routers receive and use the routing protocol information from other routers to learn about the state of the network. Routers can modify information received from one router by adding their own information along with the original information, and then forward that on to other routers. In this way, each router can share its version of the network.

Packet Switching

Layer 3 information is carried through the network in packets, and the transport method of carrying these packets is called packet switching, as illustrated in Figure 6-15.
Figure 6-15 Packet Switching Between Ethernet and Token Ring Network Segments
Figure 6-15 shows how a packet is delivered across multiple networks. Host A is on an Ethernet segment, and Host B on a Token Ring segment. Host A places an Ethernet frame, encapsulating an Internet Protocol (IP) packet, on to the wire for transmission across the network.
The Ethernet frame contains a source data link layer MAC address and a destination data link layer MAC address. The IP packet within the frame contains a source network layer IP address (TCP/IP network layer address) and a destination network layer IP address. The router maintains a routing table of network paths it has learned, and the router examines the network layer destination IP address of the packet. When the router has determined the destination network from the destination IP address, the router examines the routing table and determines whether a path exists to that network.
In the case illustrated in Figure 6-15, Host B is on a Token Ring network segment directly connected to the router. The router peels off the Layer 2 Ethernet encapsulation, forwards the Layer 3 data packet, and then re-encapsulates the packet inside a new Token Ring frame. The router sends this frame out its Token Ring interface on to the segment where Host B will see a Token Ring frame containing its MAC address and process it.
Note the original frame was Ethernet, and the final frame is Token Ring encapsulating an IP packet. This is called media transition and is one of the features of a network router. When the packet arrives on one interface and is forwarded to another, it is called Layer 3 switching or routing.
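The media transition just described, in which the Layer 2 header is replaced while the Layer 3 packet rides through unchanged, can be modeled with a small, hypothetical Python sketch (the Frame class and field names are invented for illustration).

from dataclasses import dataclass

@dataclass
class Frame:
    media: str        # "Ethernet" or "Token Ring"
    src_mac: str
    dst_mac: str
    payload: bytes    # the encapsulated IP packet, carried through unchanged

def route_across_media(in_frame, out_media, router_mac, next_hop_mac):
    ip_packet = in_frame.payload                  # peel off the old Layer 2 encapsulation
    return Frame(media=out_media,                 # re-encapsulate for the outgoing medium
                 src_mac=router_mac,
                 dst_mac=next_hop_mac,            # Host B's MAC on the Token Ring side
                 payload=ip_packet)

ethernet_frame = Frame("Ethernet", "aa-aa-aa-aa-aa-aa", "bb-bb-bb-bb-bb-bb", b"IP packet bytes")
print(route_across_media(ethernet_frame, "Token Ring", "bb-bb-bb-bb-bb-bb", "cc-cc-cc-cc-cc-cc"))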

Routing Table Lookup

Routers (and Layer 3 switches) perform table lookups to determine the next hop (the next router or Layer 3 switch) along the route, which in turn determines the output port over which to forward the packet or frame. The router or Layer 3 switch makes this decision based on the network portion of the destination address in the received packet.
This lookup results in one of three actions (a short code sketch of the decision follows the list):
·         The destination network is not reachable—There is no path to the destination network and no default network. In this case, the packet is discarded.
·         The destination network is reachable by forwarding the packet to another router—There is a match of the destination network against a known table entry, or against a default route if no other method for reaching the destination network is known. The first lookup identifies the next hop, a second lookup determines how to reach that next hop, and a final determination yields the exit port. The first lookup can return multiple paths, so the port is not known until the router determines how to get there. In either case, the lookup returns the network (Layer 3) address of the next-hop router and the port through which that router can be reached.
·         The destination network is known to be directly attached to the router—The port is directly attached to the network and reachable. For directly attached networks, the next step maps the host portion of the destination address to the data link (MAC) address of the next hop or end node by using the ARP table (for IP). The router does not map the destination network address to its own interface; it needs the MAC address of the final end node so that the node picks up the frame from the medium. Note that using the ARP table assumes IP; other Layer 3 protocols, such as Internetwork Packet Exchange (IPX), do not use ARP to map their addresses to MAC addresses.
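Here is a minimal sketch of that three-way decision, assuming a routing table of invented (network, next hop, exit port) entries and collapsing the recursive next-hop resolution into a single step for brevity.

def route_lookup(dest_network, routing_table, default_route=None):
    entry = routing_table.get(dest_network, default_route)
    if entry is None:
        return "destination unreachable: discard the packet"       # no match and no default
    next_hop, out_port = entry
    if next_hop == "directly attached":
        return "deliver locally out %s (resolve the host via ARP)" % out_port
    return "forward to next hop %s out %s" % (next_hop, out_port)

routing_table = {
    "10.1.1.0/24": ("directly attached", "E0"),
    "10.2.2.0/24": ("10.1.1.2", "E0"),
}
print(route_lookup("10.2.2.0/24", routing_table))   # forwarded toward the next-hop router
print(route_lookup("10.9.9.0/24", routing_table))   # discarded: no route and no default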
Routing table lookup in an IP router might be considered more complex than a MAC address lookup for a bridge, because data link layer addresses are 48 bits in length with fixed-length fields (the OUI and the device ID), and the data-link address space is flat, meaning there is no hierarchy or dividing of addresses into smaller and distinct segments. MAC address lookup in a bridge entails searching for an exact match on a fixed-length field, whereas address lookup in a router looks for variable-length fields identifying the destination network.
IP addresses are 32 bits in length and are made up of two fields: the network identifier and the host identifier, as illustrated in Figure 6-16.
Both the network and host portions of the IP address can be of a variable or fixed length, depending on the hierarchical network address scheme used. Discussion of this hierarchical, or subnetting, scheme is beyond the scope of this book; suffice it to say that each IP address has a network identifier and a host identifier.
The routing table lookup in an IP router determines the next hop by examining the network portion of the IP address. After it determines the best match for the next hop, the router looks up the interface port to forward the packets across, as illustrated in Figure 6-17.
Figure 6-16 IP Address Space
Figure 6-17 shows that the router receives the traffic from Serial Port 1 (S1) and performs a routing table lookup determining from which port to forward out the traffic. Traffic destined for Network 1 is forwarded out the Ethernet 0 (E0) port. Traffic destined for Network 2 is forwarded out the Token Ring 0 (T0) port, and traffic destined for Network 3 is forwarded out Serial Port 0 (S0).
NOTE
In terms of the Cisco Internet Operating System (IOS) interface, port numbers begin with zero (0), such as serial port 0 (S0). Not all vendors, including Cisco, use ports; some use slots or modules, which might begin with zero or one.
Figure 6-17 Routing Table Lookup Operation
The host identifier portion of the network address is examined only if the network lookup indicates that the destination is on a locally attached network. Unlike data-link addresses, the dividing line between the network identifier and the host identifier is not in a fixed position throughout the network. Routing table entries can exist for network identifiers of various lengths, from 0 bits in length, specifying a default route, to 32 bits in length for host-specific routes. According to IP routing procedures, the lookup result returned should be the one corresponding to the entry that matches the maximum number of bits in the network identifier. Therefore, unlike a bridge, where the lookup is for an exact match against a fixed-length field, IP routing lookups imply a search for the longest match against a variable-length field.
For example, a network host might have both the IP address of 68.98.134.209 and a MAC address of 00-0c-41-53-40-d3. The router makes decisions based on the IP address (68.98.134.209), whereas the switch makes decisions based on the MAC address (00-0c-41-53-40-d3). Both addresses identify the same host on the network, but are used by different network devices when forwarding traffic to this host.
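The contrast between the bridge's exact-match lookup and the router's longest-match lookup can be shown concretely. The prefixes, interface names, and table entries below are invented for illustration, and 68.98.134.209 is the address from the example above.

import ipaddress

# Bridge: exact match on a fixed-length 48-bit MAC address
mac_table = {"00-0c-41-53-40-d3": "port 3"}
print(mac_table.get("00-0c-41-53-40-d3"))          # an exact hit or a miss, nothing in between

# Router: longest match on a variable-length network prefix
routing_table = {
    ipaddress.ip_network("0.0.0.0/0"): "default route",
    ipaddress.ip_network("68.0.0.0/8"): "Serial0",
    ipaddress.ip_network("68.98.134.0/24"): "Ethernet0",
}

def longest_match(dest):
    addr = ipaddress.ip_address(dest)
    candidates = [net for net in routing_table if addr in net]
    best = max(candidates, key=lambda net: net.prefixlen)   # the most specific prefix wins
    return routing_table[best]

print(longest_match("68.98.134.209"))   # Ethernet0: the /24 beats the /8 and the default route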

ARP Mapping

Address Resolution Protocol (ARP) is a network layer protocol used in IP to convert IP addresses into MAC addresses. A network device looking to learn a MAC address broadcasts an ARP request onto the network. The host on the network that has the IP address in the request replies with its MAC (hardware) address. This is called ARP mapping, the mapping of a Layer 3 (network) address to a Layer 2 (data link) address.
NOTE
Some Layer 3 addresses use the MAC address as part of their addressing scheme, such as IPX.
The network layer address structure in IP does not provide for a simple mapping to data-link addresses: IP addresses use 32 bits, whereas data-link addresses use 48 bits, so it is not possible to determine the 48-bit data-link address for a host from the host portion of the IP address. For packets destined for a host not on a locally attached network, the router performs a lookup for the next-hop router's MAC address. For packets destined for hosts on a locally attached network, the router performs a second lookup operation to find the destination address to use in the data-link header of the forwarded packet's frame, as illustrated in Figure 6-18.
After determining for which directly attached network the packet is destined, the router looks up the destination MAC address in its ARP cache. Recall that ARP enables the router to determine the corresponding MAC address when it knows the network (IP) address. The router then forwards the packet across the local network in a frame with the MAC address of the local host, or next-hop router.
Figure 6-18 Router ARP Cache Lookup
NOTE
Note in Figure 6-18 that Net 3, Host: 31 is not part of the ARP cache, because during the routing table lookup, the router determined that this packet is to be forwarded to another, remote (nonlocally attached) network.
The result of this final lookup falls into one of the three following categories:
·         The packet is destined for the router itself—The IP destination address (network and station portion combined) corresponds to one of the IP addresses of the router. In this case, the packet must be passed to the appropriate higher-layer entity within the router and not forwarded to any external port.
·         The packet is destined for a known host on the directly attached network—This is the most common situation encountered by a network router. The router determines the mapping from the ARP table and forwards the packet out the appropriate interface port to the local network.
·         The ARP mapping for the specified host is unknown—The router initiates a discovery procedure by sending an ARP request to determine the mapping of the network address to the hardware address. Because this discovery procedure takes time, albeit measured in milliseconds, the router might drop the packet that resulted in the discovery procedure in the first place. Under steady-state conditions, the router already has ARP mappings available for all communicating hosts. The address discovery procedure is necessary when a previously unheard-from host establishes a new communication session.
NOTE
The current version of Cisco IOS (12.0) Software drops the first packet for a destination without an ARP entry. The IOS does this to handle denial of service (DoS) attacks against incomplete ARPs. In other words, it drops the frame immediately instead of awaiting a reply.
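The final lookup step can be sketched as follows, assuming IP. The cache entries, addresses, and function names are invented; the behavior of dropping the packet while the ARP request is outstanding mirrors the NOTE above.

def send_arp_request(ip):
    print("ARP request broadcast: who has %s?" % ip)

arp_cache = {"10.1.1.31": "00-0c-29-aa-bb-cc"}    # IP address -> MAC address
router_addresses = {"10.1.1.1"}                   # the router's own interface addresses

def resolve_destination(dest_ip):
    if dest_ip in router_addresses:
        return "packet is for the router itself: hand it to the appropriate higher-layer entity"
    mac = arp_cache.get(dest_ip)
    if mac is not None:
        return "encapsulate with destination MAC %s and forward out the local interface" % mac
    send_arp_request(dest_ip)                     # start the discovery procedure
    return "drop this packet; the mapping is cached when the ARP reply arrives"

print(resolve_destination("10.1.1.31"))
print(resolve_destination("10.1.1.99"))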

Fragmentation

Each output port on a network device has an associated maximum transmission unit (MTU). Recall from earlier in this chapter that the MTU indicates the largest frame size (measured in bytes) that can be carried on the interface. The MTU is often a function of the networking technology in use, such as Ethernet, Token Ring, or Point-to-Point Protocol (PPP). PPP is used with Internet connections. If the packet being forwarded is larger than the available space, as indicated by the MTU, the packet is fragmented into smaller pieces for transmission on the particular network.
Bridges cannot fragment frames when forwarding between LANs of differing MTU sizes because data-link connections rarely have a mechanism for fragment reassembly at the receiver. That mechanism exists at the network layer; a network layer implementation such as IP can overcome this limitation. Network layer packets can be broken down into smaller pieces if necessary so that these packets can travel across a link with a smaller MTU.
Fragmentation is similar to taking a picture and cutting it into pieces so that each piece will fit into differently sized envelopes for mailing. It is up to the sender to determine the size of the largest piece that can be sent, and it is up to the receiver to reassemble these pieces. Fragmentation is a mixed blessing: although it provides the means of communication across different link technologies, the processing required to fragment and reassemble the data is significant and can burden each device that must do it. Further, pieces for reassembly can be received out of order and may be dropped by the switch or router.
As a rule, it is best to avoid fragmentation in your network if at all possible. It is more efficient for the sending station to send packets that do not require fragmentation anywhere along the path to the destination than to send large packets that force intermediate routers to perform fragmentation.
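The basic idea can be sketched as splitting a payload into pieces no larger than the outgoing link's MTU. Real IP fragmentation also copies and adjusts header fields such as offsets and flags, and offsets must fall on 8-byte boundaries; those details are omitted here.

def fragment(payload, mtu):
    # Split a packet payload into pieces that each fit within the link MTU
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

pieces = fragment(b"x" * 4000, 1500)        # e.g., a 4000-byte payload onto an Ethernet-sized link
print([len(p) for p in pieces])             # [1500, 1500, 1000]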
NOTE
Hosts and routers can learn the largest MTU available along a network path through the use of MTU discovery (often called path MTU discovery), a process by which each device learns the MTU size that the network path can support.


ISDN

Integrated Services Digital Network (ISDN) provides switched (dialed) digital WAN services in increments of 64 kbps. Before ISDN, most dial services used the same analog lines that were connected to phones, and data rates using modems over those analog lines typically did not exceed 9600 bits per second.
One key reason to use dialed connections of any kind, including ISDN, might be to send and receive data for only short periods of time. “Occasional” connections might be used by a site for which instant access to data is not needed, but for which access is needed a few times per day.
Routers frequently use ISDN to create a backup link when their primary leased line or Frame Relay connection is lost. Although the leased line or Frame Relay access link might seldom fail, when it does, a remote site might be completely cut off from the rest of the network. Depending on the network’s business goals, long outages might not be acceptable, so ISDN could be used to dial back to the main site.
Fig. 38 shows some typical network topologies when you’re using ISDN.
Fig. 38 ISDN Usage
The above scenarios can be described as follows:
  • Case 1 shows dial-on-demand routing. Logic is configured in the routers to trigger the dial when the user sends traffic that needs to get to another site.
  • Case 2 shows a typical telecommuting environment.
  • Case 3 shows a typical dial-backup topology. The leased line fails, so an ISDN call is established between the same two routers.
  • Case 4 shows where an ISDN BRI can be used to dial directly to another router to replace a Frame Relay access link or a failed virtual circuit (VC).
Channels and Protocols
ISDN includes two types of interfaces: Basic Rate Interface (BRI) and Primary Rate Interface (PRI). Both BRI and PRI provide multiple digital bearer channels (B channels), over which temporary connections can be made and data can be sent. Because both BRI and PRI have multiple B channels, a single BRI or PRI line can have concurrent digital dial circuits to multiple sites, or multiple circuits to the same remote router to increase available bandwidth to that site.
B channels are used to transport data. B channels are called bearer channels because they bear or carry the data. B channels operate at speeds of up to 64 kbps, although the speed might be lower depending on the service provider. ISDN signals new data calls using the D channel. When a router creates a B channel call to another device using a BRI or PRI, it sends the phone number it wants to connect to inside a message sent across the D channel. The phone company’s switch receives the message and sets up the circuit. Signaling a new call over the D channel is effectively the same thing as picking up the phone and dialing a number to create a voice call.
The different types of ISDN lines are often described with a phrase that implies the number of each type of channel. For instance, BRIs are referred to as 2B+D, meaning two B channels and one D channel. PRIs based on T1 framing, as in the U.S., are referred to as 23B+D, and PRIs based on E1 framing are referred to as 30B+D. The following table lists the number of channels for each type of ISDN line and the terminology used to describe them.
Type of Interface   Number of B Channels   Number of D Channels   Descriptive Term
BRI                 2 (64 kbps)            1 (16 kbps)            2B+D
PRI (T1)            23 (64 kbps)           1 (64 kbps)            23B+D
PRI (E1)            30 (64 kbps)           1 (64 kbps)            30B+D
The following table characterizes several key protocol series. Be sure to learn the information in the Issue column; knowing what each series of specifications is about is useful.
Issue                                    Protocol   Key Examples
Telephone network and ISDN               E-series   E.163 (international telephone numbering plan); E.164 (international ISDN addressing)
ISDN concepts, aspects, and interfaces   I-series   I.100 series (concepts, structures, and terminology); I.400 series (User-Network Interface, UNI)
Switching and signaling                  Q-series   Q.921 (Link Access Procedure on the D channel, LAPD); Q.931 (ISDN network layer)
It’s also useful to memorize the specifications listed in the following table, as well as which OSI layer each specification matches.
OSI Layer   I-Series Specification     Equivalent Q-Series Specification   Description
1           ITU-T I.430, ITU-T I.431   -                                   Defines connectors, encoding, framing, and reference points.
2           ITU-T I.440, ITU-T I.441   ITU-T Q.920, ITU-T Q.921            Defines the LAPD protocol used on the D channel to encapsulate signaling requests.
3           ITU-T I.450, ITU-T I.451   ITU-T Q.930, ITU-T Q.931            Defines signaling messages, such as call setup and teardown messages.
Now that you have at least seen the names and numbers behind some of the ISDN protocols, you can concentrate on the more important protocols. The first of these is LAPD, defined in Q.921, which is used as a data-link protocol across an ISDN D channel. Essentially, a router with an ISDN interface needs to send and receive signaling messages to and from the local ISDN switch to which it is connected. LAPD provides the data-link protocol that allows delivery of messages across that D channel to the local switch. Note that LAPD does not define the signaling messages. It just provides a data-link protocol that can be used to send the signaling messages.
The call setup and teardown messages themselves are defined by the Q.931 protocol. So, the local switch can receive a Q.931 call setup request from a router over the LAPD-controlled D channel, and it should react to that Q.931 message by setting up a circuit over the public network, as shown in Fig. 39.
Fig. 39 LAPD and PPP
The service provider can use anything it wants to set up the call inside its network, but between each local switch and the routers, ISDN Q.931 messages are used for signaling. Typically, Signaling System 7 (SS7) is used between the two switches—the same protocol used inside phone company networks to set up circuits for phone calls.
As soon as the call is established, a 64-kbps circuit exists between a B channel on each of the two routers shown in Fig. 39. The routers can use HDLC, but they typically use PPP as the data-link protocol on the B channel from end to end. As on leased lines, the switches in the phone company do not interpret the bits sent inside this circuit.
The D channel remains up all the time so that new signaling messages can be sent and received. Because the signals are sent outside the channel used for data, this is called out-of-band signaling.
An ISDN switch often requires some form of authentication with the device connecting to it. Switches use a free-form decimal value, called the service profile identifier (SPID), to perform authentication. In short, before any Q.931 call setup messages are accepted, the switch asks for the configured SPID values. If the values match what is configured in the switch, call setup flows are accepted. When you order new ISDN lines, the provider gives you some paperwork. If the paperwork includes SPIDs, you simply need to configure that number in the ISDN configuration for that line.
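The sequence described in this section, SPID verification, Q.931 call setup carried over the LAPD-framed D channel, and then data over a 64-kbps B channel, can be summarized in a small Python sketch. The states, SPID values, and phone numbers are invented, and real Q.931 exchanges involve more messages than the SETUP and CONNECT shown here.

def isdn_call(configured_spid, switch_spid, dialed_number):
    if configured_spid != switch_spid:
        return ["switch rejects signaling: SPID mismatch"]
    return [
        "D channel: LAPD data link established to the local ISDN switch",
        "D channel: Q.931 SETUP sent for %s" % dialed_number,
        "D channel: Q.931 CONNECT received; the provider has built the circuit",
        "B channel: 64-kbps circuit up; PPP (or HDLC) runs end to end between the routers",
    ]

for step in isdn_call("0835866201", "0835866201", "5551234"):
    print(step)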

ISDN BRI and ISDN PRI Function Groups and Reference Points

The ISDN specifications identify the various functions that must be performed to support customer premises equipment (CPE). ISDN uses the term function group to refer to a set of functions that a piece of hardware or software must perform. Because the ITU wanted several options for the customer, it defined several different function groups. Because the function groups might be implemented by separate products, possibly even from different vendors, the ITU needed to explicitly define the interfaces between the devices that perform each function. Therefore, ISDN uses the term reference point to refer to this interface between two function groups.
  • Function group — A set of functions implemented by a device and software.
  • Reference point — The interface between two function groups, including cabling details.
Most people understand concepts better if they can visualize or actually implement a network. A cabling diagram is helpful for examining the reference points and function groups. Fig. 40 shows the cabling diagram for several of the most commonly used examples.
Fig. 40 ISDN Function Groups and Reference Points
Router A is ordered with an ISDN BRI U interface; the U implies that it uses the U reference point, referring to the I.430 reference point for the interface between the customer premises and the telco in North America. No other device needs to be installed; the line supplied by the telco is simply plugged into the router’s BRI interface.
Router B uses a BRI card with an S/T interface, implying that it must be cabled to a function group NT1 device in North America. An NT1 function group device must be connected to the telco line through a U reference point in North America. When using a router BRI card with an S/T reference point, the router must be cabled to an external NT1, which in turn is plugged into the line from the telco (the U interface).
A router can connect to an ISDN service with a simple serial interface, as shown with Router C in Fig. 40. Router C must implement an ISDN function group called TE2 (Terminal Equipment 2) and connect directly to a device called a terminal adapter using the R reference point.
The following tables summarize the types shown in Fig. 40.
Function Group   Full Name                    Description
TE1              Terminal Equipment 1         ISDN-capable device that connects over a four-wire cable. Understands signaling and 2B+D. Uses an S reference point.
TE2              Terminal Equipment 2         Equipment that does not understand ISDN protocols and specifications (no ISDN awareness). Uses an R reference point, typically an RS-232 or V.35 cable, to connect to a TA.
TA               Terminal adapter             Equipment that uses the R and S reference points. Can be thought of as performing the TE1 function group on behalf of a TE2.
NT1              Network Termination Type 1   CPE equipment in North America. Connects with a U reference point (two-wire) to the telco and with T or S reference points to other CPE.
NT2              Network Termination Type 2   Equipment that uses a T reference point to the telco outside North America, or to an NT1 inside North America. Uses an S reference point to connect to other CPE.
NT1/NT2          (combined)                   A combined NT1 and NT2 in the same device. This is relatively common in North America.
And here are the reference points:
Reference Point   What It Connects
R                 TE2 and TA
S                 TE1 or TA, and NT2
T                 NT2 and NT1
U                 NT1 and the telco
S/T               TE1 or TA connected to an NT1 when no NT2 is used; alternatively, a TE1 or TA connected to a combined NT1/NT2
The ITU planned for multiple implementation options with BRI because BRI would typically be installed when connecting to consumers. PRI was seen as a service for businesses, mainly because of the larger anticipated costs and a PRI’s larger number of B channels. So the ITU did not define function groups and reference points for ISDN PRI!
Encoding and Framing
For any physical layer specification, the line encoding defines which energy levels sent over the cable mean a 1 and which energy levels mean a 0. For instance, an early and simple encoding scheme used a +5 volt signal to mean a binary 1 and a –5 volt signal to mean a binary 0. Today, encoding schemes vary greatly from one Layer 1 technology to another. Some consider a signal of a different frequency to mean a 1 or 0. Others examine amplitude (signal strength), look for phase shifts in the signal, or look for more than one of these differences in electrical signals.
ISDN PRI in North America is based on a digital T1 circuit. T1 circuits use two different encoding schemes—Alternate Mark Inversion (AMI) and Binary 8 with Zero Substitution (B8ZS). You will configure one or the other for a PRI; all you need to do is make the router configuration match what the telco is using. For PRI circuits in Europe, Australia, and other parts of the world that use E1s, the only choice for line coding is High-Density Bipolar 3 (HDB3).
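As a concrete illustration of line encoding, here is a short sketch of Alternate Mark Inversion (AMI): a binary 0 is sent with no pulse, and successive binary 1s alternate between positive and negative pulses. B8ZS and HDB3 build on AMI by substituting special bipolar-violation patterns for long runs of zeros; those substitution rules are omitted here.

def ami_encode(bits):
    level = +1                  # polarity to use for the next binary 1
    signal = []
    for b in bits:
        if b == "0":
            signal.append(0)    # zeros carry no pulse
        else:
            signal.append(level)
            level = -level      # marks (ones) alternate polarity
    return signal

print(ami_encode("101100"))     # [1, 0, -1, 1, 0, 0]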
PRI lines send and receive a serial stream of bits. So how does a PRI interface know which bits are part of the D channel, or the first B channel, or the second, or the third, and so on? In a word—framing.
Framing, at ISDN’s physical layer, defines how a device can decide which bits are part of each channel. As is true of encoding, PRI framing is based on the underlying T1 or E1 specifications. The two T1 framing options define 24 different 64-kbps DS0 channels, plus an 8-kbps management channel used by the telco, which gives you a total speed of 1.544 Mbps. That’s true regardless of which of the two framing methods are used on the T1. With E1s, framing defines 32 64-kbps channels, for a total of 2.048 Mbps, regardless of the type of framing used.
The two options for framing on T1s are to use either Extended Super Frame (ESF) or the older option—Super Frame (SF). In most cases today, new T1s use ESF. For PRIs in Europe and Australia, based on E1s, the line uses CRC-4 framing or the original line framing defined for E1s. You simply need to tell the router whether to enable CRC-4 or not.
As soon as the framing details are known, the PRI can assign some channels as B channels and one channel as the D channel. For PRIs based on T1s, the first 23 DS0 channels are the B channels, and the last DS0 channel is the D channel, giving you 23B+D. With PRIs based on E1 circuits, the D channel is channel 15. The channels are counted from 0 to 31. Channel 31 is unavailable for use because it is used for framing overhead. That leaves channels 0 through 14 and 16 through 30 as the B channels, which results in a total of 30B+1D.
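The channel arithmetic behind these figures can be checked quickly; the BRI line is included for comparison, using the rates from the earlier channel table.

bri_kbps = 2 * 64 + 16            # 2 B channels plus the 16-kbps D channel = 144 kbps
pri_t1_kbps = 23 * 64 + 64 + 8    # 23 B channels, one 64-kbps D channel, 8 kbps of framing = 1544 kbps
pri_e1_kbps = 30 * 64 + 64 + 64   # 30 B channels, one D channel, one framing channel = 2048 kbps
print(bri_kbps, pri_t1_kbps, pri_e1_kbps)   # 144 1544 2048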
ISDN BRI uses a single encoding scheme and a single option for framing. Because of this, there are no configuration options for either framing or encoding in a router.
