BGP Problem Statement
o Global BGP table is huge, and growing
o over 500,000 IPv4 prefixes
o IPv6 table is growing too, but is still small by comparison
o See http://bgp.potaroo.net/ for table growth statistics
o Why is this a problem ?
o IP Routing is destination based
o all devices in the transit path must know the destination
o e.g. all transit routers must carry full BGP feeds
o Routing Through vs To the Core
o Transit providers sell transit, not applications
o e.g. an ISP is not the same as an ASP
o Traffic routes through the SP, not to the SP
o e.g. the end client needs to ping the end application, not a core link
o How does this affect core routing ?
o To the SP core, only the ingress point and egress point matter
o the original source & final destination are arbitrary
Tunnels - The ultimate band-aid
o Simple transit solution is to tunnel traffic over core from ingress to egress
o only the ingress & egress devices need full end-to-end information
o core only needs info about ingress & egress devices
How can we tunnel ?
o QinQ, GRE, IPinIP, MPLS, etc.
o MPLS is the de facto standard
Example case - BGP over GRE over core
o Form a GRE tunnel from ingress to egress
o Tunnel subnet is link-local & arbitrary
o Peer BGP from ingress to egress
o Recurse BGP next-hop to the tunnel
o Either peer through the tunnel, or modify the next-hop to point at the tunnel
o What is the core's data plane result ?
o Core routes ingress PE to egress PE
o Core does not need end-to-end information
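A minimal sketch of the ingress-PE side of this example, assuming Cisco IOS, PE loopbacks 10.0.0.1 (ingress) and 10.0.0.2 (egress) already reachable via the core IGP, and iBGP in AS 100 - all addresses, AS numbers & interface names here are illustrative only:

! GRE tunnel from this (ingress) PE to the egress PE, sourced from the loopbacks;
! the core only ever needs to route between the two loopbacks,
! and the tunnel subnet itself is link-local & arbitrary
interface Tunnel0
 ip address 192.168.100.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 10.0.0.2
!
! iBGP peered across the tunnel; next-hop-self makes routes advertised to the
! remote PE carry our tunnel address as next-hop, so it recurses them onto the tunnel
router bgp 100
 neighbor 192.168.100.2 remote-as 100
 neighbor 192.168.100.2 next-hop-self

The egress PE mirrors this config; the P routers in between carry no BGP at all and only route 10.0.0.1 <-> 10.0.0.2.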
o Where MPLS fits in
o MPLS is the core's tunnel encapsulation
o exactly the same logic as GRE
MPLS is more flexible
o Arbitrary transport
o Arbitrary payload
o Extensible application
BGP over MPLS over core
o Form an MPLS tunnel from ingress to egress
o Typically IGP + LDP is used for this
o Could also be BGP or RSVP (MPLS TE)
o Peer BGP from ingress to egress
o Recurse BGP next-hop to MPLS label
o What is the core's data plane result ?
o Core label switches ingress PE to egress PE
o Core does not need end-to-end information
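A minimal sketch of the same design with MPLS in the core, assuming Cisco IOS, OSPF as the IGP, LDP for label distribution, local loopback 10.0.0.1, remote PE loopback 10.0.0.2 and AS 100 - all values are illustrative and CE-facing config is omitted:

! IGP advertises the PE loopbacks; LDP then binds a label to each IGP prefix
router ospf 1
 network 10.0.0.0 0.0.255.255 area 0
!
! use LDP as the label distribution protocol and enable it on the core-facing link
mpls label protocol ldp
interface GigabitEthernet0/0
 mpls ip
!
! iBGP between loopbacks; the BGP next-hop 10.0.0.2 recurses onto the LDP label
! for 10.0.0.2/32, so P routers label-switch and never need the BGP prefixes
router bgp 100
 neighbor 10.0.0.2 remote-as 100
 neighbor 10.0.0.2 update-source Loopback0

A P router needs only the IGP + LDP part of this; the resulting bindings & forwarding entries can be checked with the show commands listed further down.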
++++++++++++++++++++++++++++++++++++++++++++++++++++
MPLS
o Multiprotocol Label Switching
o Originally Cisco proprietary
o Previously called "tag switching"
o Now an open standard (RFC 3031)
-- Can transport different payloads
Layer 2 payloads - Ethernet, FR, ATM, PPP, HDLC, etc.
Layer 3 payloads - IPv4, IPv6, etc.
Extensible for further new payloads
Why use MPLS ?
o Transparent tunneling over SP n/w
o BGP free core
o saves routing table space on provider (P) routers
o offers L2/L3 VPN services to customers
o No need for overlay VPN model
-- Traffic Engineering
-- Distribute load over underutilized links
-- Give bandwidth guarantees
-- Route based on service type
-- Detect & repair failures quickly, i.e. Fast Reroute (FRR)
MPLS label format:
RFC 3032 - MPLS Label Stack Encoding
4-byte header used to "switch" the packet
20-bit Label = locally significant to the router
3-bit EXP = class of service
1-bit S = marks the last label in the label stack
8-bit TTL = time to live
# show mpls ldp bindings ( LIB, similar to show ip route )
# show mpls forwarding-table ( LFIB, similar to show ip cef )
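As a worked example of how these fields pack into the 4 bytes (the values are arbitrary, chosen only to show the arithmetic): a stack entry with Label 16, EXP 0, S = 1 (bottom of stack) and TTL 255 encodes as

$\text{entry} = \text{Label}\cdot 2^{12} + \text{EXP}\cdot 2^{9} + \text{S}\cdot 2^{8} + \text{TTL} = 16\cdot 4096 + 0 + 256 + 255 = 66047 = \texttt{0x000101FF}$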
How Labels work:
o MPLS labels are bound to FECs
o Forwarding Equivalence Class
o an IPv4 prefix for our purposes
o Router uses the MPLS LFIB instead of the IP routing table to switch traffic
o Switch logic: if traffic comes in on interface 1 with label X, send it out interface 2 with label Y
MPLS Device Roles:
o An MPLS n/w consists of 3 types of devices
oo Customer Edge (CE)
oo Provider Edge (PE)
oo Provider (P)
CE Devices
-- Last-hop device in the customer n/w
-- connects to the provider's n/w
-- can be L2 only or L3 aware
-- typically not MPLS aware
PE Devices
-- Provider Edge (PE)
-- previously called label edge router (LER)
-- Last-hop device in the provider's n/w
-- connects to CE and Provider (P) core devices
o PE performs both IP routing & MPLS lookups
-- For traffic from customer to core...
-- Receives unlabeled packets (e.g. IPv4)
-- Adds (pushes) one or more MPLS labels
-- Forwards the labeled packet to the core
-- For traffic from core to customer...
-- Receives MPLS-labeled packets
-- Removes (pops) any remaining label(s) & forwards the packet to the customer
P Devices
o Provider (P)
-- Previously called label switch routers (LSR)
-- Core devices in the provider's n/w
-- connects to PEs and / or other P routers
o Switches traffic based only on MPLS labels
MPLS Device Operations
PE & P routers perform 3 major MPLS operations
oo Label Push
-- add a label to an incoming pkt
-- AKA label imposition
oo Label Swap
-- Replace the label on an incoming pkt
oo Label Pop
-- Remove the outermost label from the pkt
-- AKA label disposition
Label Distribution
-- Labels are advertised via a label distribution protocol
-- Label Distribution Protocol (LDP)
-- Advertises labels for IGP routes
-- RFC 5036
MP-BGP
oo Advertises labels for BGP-learned routes
oo RFC 3107
RSVP
oo Used for MPLS traffic engineering (MPLS TE)
oo RFC 3209
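As a rough sketch of where each of these is switched on in Cisco IOS - the interface names, neighbor address, AS number and TE tunnel below are hypothetical, and the supporting IGP/RSVP configuration is omitted:

! LDP - advertises labels for IGP routes, enabled per core-facing interface
interface GigabitEthernet0/0
 mpls ip
!
! MP-BGP (RFC 3107) - labels carried inside BGP updates
router bgp 100
 neighbor 10.0.0.2 remote-as 100
 address-family ipv4
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 send-label
!
! RSVP-TE (RFC 3209) - labels signalled along an explicit TE tunnel
mpls traffic-eng tunnels
interface Tunnel1
 tunnel mode mpls traffic-eng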
------------------oooooooooooooooooooo---------------------------