Evolution of IXP Architectures in an Era of Open Networking Innovation
Internet exchange points (IXPs) play a key role in the internet ecosystem. Worldwide, there are more than 400 IXPs in over 100 countries, the largest of which carry peak data rates of almost 10 Tbps and connect hundreds of networks. IXPs offer a neutral shared switching fabric where clients can exchange traffic with one another once they have established peering connections. This means that the client value of an IXP increases with the number of clients connected to it.
Simply speaking, an internet exchange point can be regarded as a big Layer 2 (L2) switch. Each client network connecting to the IXP connects one or more of its routers to this switch via Ethernet interfaces. Routers from different networks can establish peering sessions by exchanging routing information via Border Gateway Protocol (BGP) and then send traffic across the Ethernet switch, which is transparent to this process. Please refer to Figure 1 for different peering methods.
IXPs allow operators to interconnect n client networks locally across their switch fabrics. Connectivity then scales with n (e.g., one 100-Gbps connection from each network to the switch fabric) rather than scaling with n² connections, as is the case when independent direct peering is used (e.g., one 10-Gbps connection to each of n peering partners). This leads to a flatter internet, improves bandwidth utilization, and reduces the cost and latency of interconnections, including in data center interconnect (DCI) applications. To avoid the cumbersome setup of bilateral peering sessions, most IXPs today operate route servers, which simplify peering by allowing IXP clients to peer with other networks via a single (multilateral) BGP session to a route server.
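The scaling argument above can be made concrete with a quick back-of-the-envelope sketch (illustrative only, not from the article): a full mesh of bilateral links among n networks needs n(n-1)/2 links, which grows on the order of n², while a star topology through an IXP needs just one link per network.

```python
# Illustrative arithmetic for the n vs. n-squared scaling argument.
def full_mesh_links(n: int) -> int:
    """Independent direct peering: every pair of networks needs its own link."""
    return n * (n - 1) // 2

def ixp_links(n: int) -> int:
    """IXP peering: each network needs a single link to the shared switch fabric."""
    return n

# With 100 connected networks, a full mesh would need 4,950 separate links,
# while the IXP fabric needs only 100 client connections.
for n in (10, 100, 500):
    print(f"n={n}: full mesh={full_mesh_links(n)}, IXP={ixp_links(n)}")
```

The same reasoning applies to BGP sessions: a route server collapses up to n-1 bilateral sessions per client into a single multilateral one.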
IXPs can be grouped into not-for-profit (e.g., industry associations, academic institutions, government agencies) and for-profit organizations. Their business models depend on regulation and other factors. Many European IXPs are not-for-profit organizations that rely, for example, on membership fees. In the U.S., most IXPs are for-profit organizations. It is important to understand that all IXP operators, while still providing public neutral peering services, may also provide commercial value-added services (VAS), such as security, access to cloud services, transport services, synchronization, caching, etc.
Over the past few years, content delivery networks (CDNs) have been major contributors to the traffic growth of IXPs. IXPs are critical infrastructure for CDNs to keep their transport costs under control. This is facilitated by putting content caches into the same locations as IXPs put their access switches. Often these locations are neutral colocation (colo) data centers (DCs).
Current IXP infrastructure
While early IXPs in the 1990s were based on Fiber Distributed Data Interface (FDDI) or Asynchronous Transfer Mode (ATM), today the standard interconnectivity service is based on Ethernet, as mentioned above. The L2 IXP switch fabric itself has also evolved from simple Ethernet switches in just one location, connected via a standard local area network, to Internet Protocol/Multiprotocol Label Switching (IP/MPLS) switches distributed over multiple sites, which require wide area network (WAN) connectivity over optical fiber. Utilizing an IP/MPLS switch fabric for the distributed L2 switching function provides better scalability and is more suitable for WAN connectivity. The distribution of the IXP switch fabric over several locations facilitates access for clients and improves resiliency. In most cases, these locations are in metro or regional areas, but extending the IXP fabric to a national or global scale is also possible.
Consequently, with more locations and increasing bandwidth, a flexible and scalable high-performance connectivity network becomes an important strategic asset for IXP operators. For larger IXPs, today's locations are connected via high-capacity DWDM WAN links, typically n x 100 Gigabit Ethernet (GbE) today, with higher data rates like n x 400GbE in preparation. Client routers connect to the IXP switch fabric with Ethernet interfaces of 1/10/100GbE today and potentially 25GbE and 50GbE in the future.
Figure 2 shows a high-level standard IP/MPLS architecture of a distributed IXP. Client routers connect to IXP provider edge (PE) routers at different sites (e.g., in colo DCs) via standard 1/10/100GbE interfaces. (Note: Depending on the business model, the colocation services may be provided by the IXP as VAS or may be provided by an independent colo DC provider.) The PE routers are connected to core provider (P) routers with high-capacity links, often n x 100GbE DWDM. Detailed architectures are highly customer-specific and depend on many factors such as availability and ownership of optical fiber, topology, bandwidth, resiliency and latency requirements, etc.
It should be noted that although IP/MPLS-based L2 switch fabrics are mainly used today, alternative approaches such as Virtual Extensible Local Area Network (VXLAN) are available that are based on more recent DC connectivity methods. It may well be that these methods, which do not change the basic architecture topology, will be deployed more often in the future.
It may also be worth mentioning that to provide better resiliency of the IXP infrastructure, especially for high-capacity interfaces such as 100GbE, photonic cross-connects (PXCs) are increasingly being used between client and PE routers. In case of failure or scheduled maintenance, the PXC can switch the client connection over to a backup PE router.
Innovation at IXPs: Disaggregation, SDN, NFV, network automation
Disaggregation, software-defined networking (SDN), network function virtualization (NFV), and network automation, as already applied in the big internet content providers' (ICPs') DC-centric networks, are now increasingly also being used in telco networks and IXPs. As IXP networks are typically more localized than telco networks and must cope with less legacy infrastructure and services, they may be an ideal place to introduce new networking concepts.
Disaggregation and openness speed innovation. Disaggregation, when applied to the IXP router and transport infrastructure, provides horizontal scalability, ensuring that even unexpected growth can be easily handled without the need for pre-planning of large chassis-based system capacity or forklift upgrades.
On top of that, in a disaggregated network, innovation can be driven very efficiently, as network functions are decoupled from each other and can evolve at their own speeds. This enables IXPs to introduce additional steps in interface capacity (e.g., 400GbE or 1 Terabit Ethernet) as well as single-chip switching capacities (12.8T, 25T, 50T per chip) and functionalities (e.g., programming protocol-independent packet processors [P4]) seamlessly. At the same time, the underlying electronic and photonic integration will drastically reduce power consumption and space requirements as well as the number of cables to be installed.
Openness breaks the dependency on a single vendor and enables network operators to leverage innovation from the whole industry, not just from a single supplier.
Disaggregating the DWDM layer: Open line system
The underlying optical layer combines the latest optical innovation and end-to-end physical layer automation with an open networking approach that seamlessly ties into a Transport SDN control layer. A number of industry forums and associations are driving the vision of open application programming interfaces (APIs) and interworking further, including the Telecom Infra Project's (TIP's) Open Optical & Packet Transport project group, which is leading the alignment on information models, and the Open ROADM (reconfigurable optical add/drop multiplexer) Multi-Source Agreement project, as well as other standards development organizations such as the International Telecommunication Union's Telecommunication Standardization Sector (ITU-T), which is working to ensure physical layer interworking.
In addition, advances in open optical transport system architectures are creating ultra-dense, ultra-efficient IXP applications, including innovative 1 rack unit (1RU) modular open transport platforms for cloud and data center networks that can be equipped as muxponder terminal systems and as open line system (OLS) optical layer platforms. Purpose-built for interconnectivity applications, these disaggregated platforms offer high density, flexibility, and low power consumption. Designed to meet the scalability requirements of network operators now and into the future, innovations in OLSs include a pay-as-you-grow disaggregated approach that enables the lowest startup costs, reduced equipment sparing costs, and cost-effective scalability.
Many IXPs are deploying open optical transport technology to scale capacity while reducing cost, floor space, and power consumption. Recent examples include the Moscow Internet Exchange (MS-IX), France-IX, Swiss-IX, ESpanix, the Berlin Commercial Internet Exchange (BCIX), and the Stockholm Internet eXchange (STHIX).
Disaggregating the Router/Switch: White Boxes, Hardware-independent NOS, SDN, and VNFs
Router disaggregation is well established inside DCs. Instead of using large chassis-based routers, highly scalable leaf-spine switch fabrics are being built with white box L2/L3 switches and controlled by SDN. Using white boxes together with a hardware-independent and configurable network operating system (NOS) provides greater flexibility and enables IXP operators to select only those features that they really need.
Carrier-class disaggregated router/switch white boxes are distinguished by capabilities that include environmental hardening, enhanced synchronization, and high-availability features, with carrier-class NOSs that are hardware independent. To ensure the required resiliency, these platforms rely on proven and scalable IP/MPLS software capabilities and support for IP/MPLS and segment routing as well as data center-oriented protocols such as VXLAN and Ethernet Virtual Private Network (EVPN). Additional services such as security services that in the past may have required dedicated modules in the router chassis or standalone devices can be supported with third-party virtual network functions (VNFs) hosted on the white boxes or on standard x86 servers.
Packet switching functionality can be increased with P4. P4 can be used to program switches to determine how they process packets (i.e., define the headers and fields of the protocols that will need to be processed). This brings flexibility to hardware, enabling additional support for new protocols without waiting for new chips to be released or new versions of protocols to be specified, as was the case with OpenFlow.
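The core idea of protocol-independent processing can be illustrated with a small conceptual sketch (in Python, not actual P4): header layouts become data that the parser consumes, so supporting a new protocol means adding a table entry rather than waiting for new silicon. The `demo_hdr` protocol and its fields below are made up purely for illustration.

```python
# Conceptual illustration of protocol-independent parsing, in the spirit of P4:
# header formats are described as data (field name, width in bytes), and a
# generic parser interprets packet bytes against that description.
HEADER_DEFS = {
    # Hypothetical protocol invented for this sketch.
    "demo_hdr": [("version", 1), ("flags", 1), ("length", 2)],
}

def parse(name: str, packet: bytes) -> dict:
    """Parse the leading bytes of `packet` according to a header definition."""
    fields, offset = {}, 0
    for field, width in HEADER_DEFS[name]:
        fields[field] = int.from_bytes(packet[offset:offset + width], "big")
        offset += width
    return fields

# A 4-byte demo_hdr: version=2, flags=1, 16-bit length=64.
print(parse("demo_hdr", bytes([2, 1, 0, 64])))
```

In real P4 the parser and match-action pipeline are compiled onto the switch ASIC; the point of the sketch is only that the protocol definition is programmable rather than fixed in hardware.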
The comprehensive SDN/NFV management functionality enables IXP operators to introduce advanced features such as pay-per-use, sophisticated traffic engineering, or advanced blackholing for distributed denial of service (DDoS) mitigation.
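The blackholing mechanism mentioned above is commonly signaled via the well-known BGP BLACKHOLE community 65535:666 (RFC 7999): a client tags the attacked prefix, and the route server's peers discard traffic toward it. The following is a minimal sketch of that signaling logic; the `announce`/`should_drop` helpers are illustrative stand-ins, not a real BGP implementation.

```python
# Sketch of remotely triggered blackholing (RTBH) signaling, assuming the
# route server honors the well-known BLACKHOLE community (RFC 7999).
BLACKHOLE = (65535, 666)  # well-known community value defined in RFC 7999

def announce(prefix: str, communities=()):
    """Model a route announcement as a plain dict (illustrative only)."""
    return {"prefix": prefix, "communities": set(communities)}

def should_drop(route) -> bool:
    """A peer honoring RFC 7999 discards traffic toward BLACKHOLE-tagged prefixes."""
    return BLACKHOLE in route["communities"]

victim = announce("203.0.113.7/32", [BLACKHOLE])  # host route under DDoS attack
normal = announce("203.0.113.0/24")               # ordinary announcement

print(should_drop(victim), should_drop(normal))
```

In practice the victim network announces a host route for the attacked address with the community attached, so only that address is sacrificed while the rest of its prefix stays reachable.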
IXPs are an integral and important part of the internet ecosystem. They provide a way for various networks to exchange traffic locally, resulting in a flatter and faster internet. To stay competitive, IXPs are also undergoing a transition from standard Ethernet/IP networks to cloud technologies, such as leaf-spine switching fabrics and DCI-style optical connectivity, to reduce total cost of ownership, increase automation, and facilitate the offering of VAS in addition to basic peering services.
Harald Bock is vice president, network and technology strategy,at Infinera.