Published via Inbox: 2013-06-01 18:51:03
June 1, 2013 11:50
One of the biggest changes in the world of MPLS in recent years has been the advent of Next-Generation Multicast VPN (NG-MVPN). This technology allows Layer 3 VPN multicast traffic to be handled in a much more scalable manner than with the legacy draft-Rosen scheme.
It allows point-to-multipoint LSPs to be used to transport multicast traffic between PEs, so that multicast traffic gains the same benefits of MPLS transport as unicast traffic, such as traffic engineering and fast reroute. This makes the technology ideal for video transport as well as for offering a multicast service to customers of a Layer 3 VPN.
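To illustrate what a P2MP LSP changes in the forwarding plane, here is a minimal Python sketch of packet replication at a branch-point LSR (the label values and next-hop names are invented for illustration; a real router would program this state from mLDP or RSVP-TE P2MP signaling rather than by hand):

```python
# Sketch of packet replication at a P2MP LSP branch node.
# Incoming label -> list of (outgoing label, next hop) branches.
# All labels and next-hop names below are made-up illustration values.
p2mp_lfib = {
    24: [(17, "PE-2"), (31, "PE-3")],   # branch point: replicate towards two leaves
    25: [(16, "PE-4")],                 # single branch: ordinary label swap
}

def forward(in_label, payload):
    """Replicate the packet once per downstream branch, swapping the label each time."""
    for out_label, next_hop in p2mp_lfib.get(in_label, []):
        print(f"label {in_label} -> {out_label}, copy sent towards {next_hop}")
        # transmit(payload, out_label, next_hop)  # actual transmission omitted

forward(24, b"multicast frame")
```

The point of the sketch is that replication happens inside the LSP itself: the ingress PE sends one copy, and branch LSRs fan it out, rather than the ingress sending one unicast copy per receiver.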
The purpose of this blog post is to give you a brief overview of M-LSPs and the motivation behind this technology, and to demonstrate some practical examples. We start with Multicast VPNs and then give an overview of the M-LSP implementations based on M-LDP and the RSVP-TE extensions. The practical example is based on the recently added RSVP-TE signaling for establishing P2MP LSPs. All demonstrations could be replicated using the widely available Dynamips emulator. The reader is assumed to have a solid understanding of MPLS technologies, including LDP, RSVP-TE, MPLS/BGP VPNs and Multicast VPNs.
Multiprotocol Label Switching (MPLS) is a mechanism in high-performance telecommunications networks that directs data from one network node to the next based on short path labels rather than long network addresses, avoiding complex lookups in a routing table. The labels identify virtual links (paths) between distant nodes rather than endpoints. MPLS can encapsulate packets of various network protocols. MPLS supports a range of access technologies, including T1/E1, ATM, Frame Relay, and DSL.
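To make the label-versus-address lookup contrast concrete, here is a small Python sketch (the prefixes, labels and interface names are invented for illustration): an IP router must find the longest matching prefix for a destination address, whereas an MPLS LSR performs a single exact-match swap on a short label.

```python
import ipaddress

# Invented example state: an IP routing table (longest-prefix match)
# and an MPLS label forwarding table (exact-match label swap).
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("10.1.2.0/24"): "eth2",
}

lfib = {
    100: (200, "eth1"),   # incoming label -> (outgoing label, interface)
    101: (300, "eth2"),
}

def ip_lookup(dst):
    """Longest-prefix match: check every prefix, keep the most specific hit."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

def mpls_lookup(label):
    """Exact-match label swap: one direct lookup on the short label."""
    return lfib[label]

print(ip_lookup("10.1.2.3"))   # -> eth2 (the most specific /24 wins)
print(mpls_lookup(100))        # -> (200, 'eth1')
```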
Packet switching is a digital networking communications method that groups all transmitted data – regardless of content, type, or structure – into suitably sized blocks, called packets. First proposed for military uses in the early 1960s and implemented on small networks in 1968, this method of data transmission became one of the fundamental networking technologies behind the Internet and most local area networks.

Packet switching features delivery of variable-bit-rate data streams (sequences of packets) over a shared network. When traversing network adapters, switches, routers and other network nodes, packets are buffered and queued, resulting in variable delay and throughput depending on the traffic load in the network.
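As a toy illustration of packetization, the following Python sketch (the 1000-byte payload size and the sequence-number "header" are arbitrary choices, not taken from any particular protocol) chops a byte stream into numbered packets and reassembles them even if they arrive out of order:

```python
# Sketch: split a byte stream into fixed-size packets with a sequence number,
# then reassemble. Payload size and header layout are arbitrary examples.

PAYLOAD_SIZE = 1000

def packetize(data: bytes):
    """Split data into (sequence number, payload) packets."""
    return [
        (seq, data[offset:offset + PAYLOAD_SIZE])
        for seq, offset in enumerate(range(0, len(data), PAYLOAD_SIZE))
    ]

def reassemble(packets):
    """Packets may arrive out of order; sort by sequence number and rejoin."""
    return b"".join(payload for _, payload in sorted(packets))

stream = b"x" * 2500                       # any variable-length data stream
packets = packetize(stream)                # -> 3 packets: 1000 + 1000 + 500 bytes
assert reassemble(reversed(packets)) == stream
```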
Packet switching contrasts with another principal networking paradigm, circuit switching, a method which sets up a limited number of dedicated connections of constant bit rate and constant delay between nodes for exclusive use during the communication session. In case of traffic fees (as opposed to flat rate), for example in cellular communication services, circuit switching is characterized by a fee per time unit of connection time, even when no data is transferred, while packet switching is characterized by a fee per unit of information.
Packet mode communication may be utilized with or without intermediate forwarding nodes (packet switches or routers). In all packet mode communication, network resources are managed by statistical multiplexing or dynamic bandwidth allocation, in which a communication channel is effectively divided into an arbitrary number of logical variable-bit-rate channels or data streams. Statistical multiplexing, packet switching and other store-and-forward buffering introduce varying latency and throughput into the transmission. Each logical stream consists of a sequence of packets, which normally are forwarded by the multiplexers and intermediate network nodes asynchronously using first-in, first-out buffering. Alternatively, the packets may be forwarded according to some scheduling discipline for fair queuing, traffic shaping or for differentiated or guaranteed quality of service, such as weighted fair queuing or leaky bucket. In case of a shared physical medium, the packets may be delivered according to some packet-mode multiple access scheme.
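To ground one of the scheduling disciplines mentioned above, here is a minimal leaky-bucket shaper sketch in Python (the rate, bucket depth and packet sizes are arbitrary illustration values): traffic drains from the bucket at a constant rate, and arrivals that would overflow it are dropped.

```python
# Minimal leaky-bucket traffic policer sketch. Rate and capacity values
# are arbitrary illustration numbers, not tied to any real interface.

class LeakyBucket:
    def __init__(self, rate_bps, capacity_bytes):
        self.rate = rate_bps / 8.0        # drain rate in bytes per second
        self.capacity = capacity_bytes    # maximum backlog the bucket holds
        self.level = 0.0                  # current backlog in bytes
        self.last_time = 0.0

    def offer(self, packet_bytes, now):
        """Return True if the packet is accepted, False if it is dropped."""
        # Drain the bucket for the time elapsed since the last arrival.
        self.level = max(0.0, self.level - (now - self.last_time) * self.rate)
        self.last_time = now
        if self.level + packet_bytes > self.capacity:
            return False                  # overflow: drop (or mark) the packet
        self.level += packet_bytes
        return True

bucket = LeakyBucket(rate_bps=8000, capacity_bytes=3000)   # drains 1000 B/s
for t, size in [(0.0, 1500), (0.1, 1500), (0.2, 1500), (2.0, 1500)]:
    print(t, "accepted" if bucket.offer(size, t) else "dropped")
```

In this run the third packet arrives before the bucket has drained enough and is dropped, while the fourth, arriving later, is accepted again; that is the shaping/policing behaviour in miniature.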