Trial translation content:

  1. The entire Preface on this page;
  2. Any three consecutive paragraphs of your choice from Chapter 1 on this page (please present these three paragraphs in the Word document as alternating English and Chinese paragraphs, for side-by-side comparison).

Trial translation requirements:

  1. The Chinese should be accurate, fluent, and concise;
  2. For personal and place names, translate only those that are widely known; the rest may be left in English;
  3. Terminology should be consistent and standard; if no standard translation of a term can be found, give it in the form "Chinese translation (terminology)" (website of the China National Committee for Terms in Sciences and Technologies: http://www.cnctst.gov.cn/);
  4. Please complete the trial translation within 3 days, save the full translation as a single Word file named "SDN试译-您的姓名.docx" ("SDN trial translation - your name"), and send it to lisf@turingbook.com, with a copy to yuexx@turingbook.com.

Preface

The first question most readers of an O’Reilly book might ask is about the choice of the cover animal. In this case, “why a duck?” Well, for the record, our first choice was a unicorn decked out in glitter and a rainbow sash.

That response always gets a laugh (we are sure you just giggled a little), but it also brings to the surface a common perception of software-defined networks among many experienced network professionals. Although we think there is some truth to this perception, there is certainly more meat than myth to this unicorn.

So, starting over, the better answer to that first question is that the movement of a duck[1] is not just what one sees on the water; most of the action is under the water, which you can’t easily see. Under the waterline, some very muscular feet are paddling away to move that duck along. In many ways, this is analogous to the progress of software-defined networks.

The surface view of SDN might lead the casual observer to conclude a few things. First, that defining what SDN is, or might be, is something many organizations are frantically trying to do in order to resuscitate their business plans or revive their standards-developing organizations (SDOs). Second, that SDN is all about the active rebranding of existing products to be this mythical thing that they are not. Many have claimed that products they built four or five years ago were the origins of SDN, and therefore everything they have done since is SDN, too.

Along these lines, the branding of seemingly everything anew as SDN and the expected hyperbole of the startup community that SDN has been spawning for the past three or four years have also contributed negatively toward this end.

If observers are predisposed by their respective network religions and politics to dismiss SDN, it may seem like SDN is an idea adrift.

Now go ahead and arm yourself with a quick pointer to the Gartner hype cycle.[2] We understand that perspective and can see where that cycle predicts things stand.

Some of these same aspects of the present SDN movement made us lobby hard for the glitter-horned unicorn just to make a point—that we see things differently.

For more than two years, our involvement in various customer meetings, forums, consortia, and SDOs discussing the topic, as well as our work with many of the startups, converts, and early adopters in the SDN space, leads us to believe that something worth noting is going on under the waterline. This is where much of the real work is going on to push the SDN effort forward toward a goal of what we think is optimal operational efficiency and flexibility for networks and applications that utilize those networks.

There is real evidence that SDN has finally started a new dialogue about network programmability, control models, the modernization of application interfaces to the network, and true openness around these things. In that light, SDN is not constrained to a single network domain such as the data center—although it is true that the tidal wave of manageable network endpoints hatched via virtualization is a prime mover of SDN at present. SDN is also not constrained to a single customer type (e.g., research/education), a single application (e.g., data center orchestration), or even a single protocol/architecture (e.g., OpenFlow). Nor is SDN constrained to a single architectural model (e.g., the canonical model of a centralized controller and a group of droid switches). We hope you see that in this book.

At the time of writing the first edition of this book, both Thomas Nadeau and Ken Gray work at Juniper Networks in the Platform Systems Division Chief Technologist’s Office. We both also have extensive experience that spans roles both with other vendors, such as Cisco Systems, and with service providers, such as BT and Bell Atlantic (now Verizon). We have tried our best to be inclusive of everyone that is relevant in the SDN space without being encyclopedic on the topic, while still providing enough breadth of material to cover the space. In some cases, we have relied on references or examples in the text that came from our experiences with our most recent employer (Juniper Networks), only because they are either part of a larger survey or because alternative examples on the topic are not yet freely available for us to divulge. We hope the reader finds any bias to be accidental and not distracting or overwhelming. If this can be corrected or enhanced in a subsequent revision, we will do so. We both agree that there are likely to be many updates to this text going forward, given how young SDN still is and how rapidly it continues to evolve.

Finally, we hope the reader finds the depth and breadth of information presented herein to be interesting and informative, while at the same time evocative. We give our opinions about topics, but only after presenting the material and its pros and cons in as unbiased a manner as possible.

We do hope you find unicorns, fairy dust, and especially lots of paddling feet in this book.

Assumptions

SDN is a new approach to the current world of networking, but it is still networking. As you get into this book, we’re assuming a certain level of networking knowledge. You don’t have to be an engineer, but knowing how networking principles work—and frankly, don’t work—will aid your comprehension of the text. You should be familiar with the following terms/concepts:

  • OSI model
    The Open Systems Interconnection (OSI) model defines seven different layers of technology: physical, data link, network, transport, session, presentation, and application. This model allows network engineers and network vendors to easily discuss and apply technology to a specific OSI level. This segmentation lets engineers divide the overall problem of getting one application to talk to another into discrete parts and more manageable sections. Each level has certain attributes that describe it and each level interacts with its neighboring levels in a very well-defined manner. Knowledge of the layers above layer 7 is not mandatory, but understanding that interoperability is not always about electrons and photons will help.

  • Switches
    These devices operate at layer 2 of the OSI model and use logical local addressing to move frames across a network. Technologies in this category include Ethernet in all its variations, VLANs, aggregates, and redundancies.

  • Routers
    These devices operate at layer 3 of the OSI model and connect IP subnets to each other. Routers move packets across a network in a hop-by-hop fashion.

  • Ethernet
    These broadcast domains connect multiple hosts together on a common infrastructure. Hosts communicate with each other using layer 2 media access control (MAC) addresses.

  • IP addressing and subnetting
    Hosts using IP to communicate with each other use 32-bit addresses. Humans often use a dotted decimal format to represent this address. This address notation includes a network portion and a host portion, which are normally displayed together in a form such as 192.168.1.1/24 (see the IP addressing sketch just after this list).

  • TCP and UDP
    These layer 4 protocols define methods for communicating between hosts. The Transmission Control Protocol (TCP) provides for connection-oriented communications, whereas the User Datagram Protocol (UDP) uses a connectionless paradigm. Other benefits of using TCP include flow control, windowing/buffering, and explicit acknowledgments.

  • ICMP
    Network engineers use this protocol to troubleshoot and operate a network, as it is the core protocol used (on some platforms) by the ping and traceroute programs. In addition, the Internet Control Message Protocol (ICMP) is used to signal error and other messages between hosts in an IP-based network.

  • Data center
    A facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning and fire suppression), and security devices. Large data centers are industrial-scale operations that use as much electricity as a small town.

  • MPLS
    Multiprotocol Label Switching (MPLS) is a mechanism in high-performance networks that directs data from one network node to the next based on short path labels rather than long network addresses, avoiding complex lookups in a routing table. The labels identify virtual links (paths) between distant nodes rather than endpoints. MPLS can encapsulate packets of various network protocols. MPLS supports a range of access technologies.

  • Northbound interface
    An interface that conceptualizes the lower-level details (e.g., data or functions) used by, or in, the component. It is used to interface with higher-level layers using the southbound interface of the higher-level component(s). In an architectural overview, the northbound interface is normally drawn at the top of the component it is defined in, hence the name northbound interface. Examples of a northbound interface are JSON or Thrift (see the JSON sketch just after this list).

  • Southbound interface
    An interface that conceptualizes the opposite of a northbound interface. The southbound interface is normally drawn at the bottom of an architectural diagram. Examples of southbound interfaces include I2RS, NETCONF, or a command-line interface.

  • Network topology
    The arrangement of the various elements (links, nodes, interfaces, hosts, etc.) of a computer network. Essentially, it is the topological structure of a network and may be depicted physically or logically. Physical topology refers to the placement of the network’s various components, including device location and cable installation, while logical topology shows how data flows within a network, regardless of its physical design. Distances between nodes, physical interconnections, transmission rates, and/or signal types may differ between two networks, yet their topologies may be identical.

  • Application programming interfaces
    A specification of how some software components should interact with each other. In practice, an API is usually a library that includes specification for variables, routines, object classes, and data structures. An API specification can take many forms, including an international standard (e.g., POSIX), vendor documentation (e.g., the JunOS SDK), or the libraries of a programming language.
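
Since the network/host split described under "IP addressing and subnetting" is easy to get wrong by hand, here is a minimal sketch using Python's standard ipaddress module to pull apart the 192.168.1.1/24 example given above; the address itself is just the illustrative one from the list.

    # A minimal sketch using Python's standard "ipaddress" module to split
    # the example address 192.168.1.1/24 into its network and host parts.
    import ipaddress

    iface = ipaddress.ip_interface("192.168.1.1/24")

    print(iface.ip)                         # host address: 192.168.1.1
    print(iface.network)                    # network: 192.168.1.0/24
    print(iface.netmask)                    # 255.255.255.0
    print(iface.network.broadcast_address)  # 192.168.1.255
    print(iface.network.num_addresses - 2)  # 254 usable host addresses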
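
To make the northbound interface idea a bit more concrete, the hypothetical sketch below builds a JSON request body that an application might hand to a controller's northbound interface. The URL, field names, and payload are invented purely for illustration and do not correspond to any particular controller's actual API.

    # Hypothetical northbound request an application might send to an SDN
    # controller (invented fields and URL; not any real controller's API).
    import json

    request = {
        "path-request": {
            "name": "backup-transfer",
            "src": "10.0.1.0/24",
            "dst": "10.0.2.0/24",
            "bandwidth-mbps": 500,
            "start": "2013-06-01T02:00:00Z",   # calendared start time
            "duration-minutes": 120,
        }
    }

    body = json.dumps(request, indent=2)
    print(body)
    # An application would typically POST this body to a controller endpoint,
    # for example https://controller.example.net/api/paths (placeholder URL).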

What's in This Book?

  • Chapter 1, Introduction
    This chapter introduces and frames the conversation this book engages in around the concepts of SDN, where they came from, and why they are important to discuss.

  • Chapter 2, Centralized and Distributed Control and Data Planes
    SDN is often framed as a decision between a distributed/consensus or centralized network control-plane model for future network architectures. In this chapter, we visit the fundamentals of distributed and central control, how the data plane is generated in both, past history with both models,[3] some assumed functionality in the present distributed/consensus model that we may expect to translate into any substitute, and the merits of these models.

  • Chapter 3, OpenFlow
    OpenFlow has been marketed either as equivalent to SDN (i.e., OpenFlow is SDN) or a critical component of SDN, depending on the whim of the marketing of the Open Networking Foundation. It can certainly be credited with sparking the discussion of the centralized control model. In this chapter, we visit the current state of the OpenFlow model.

  • Chapter 4, SDN Controllers
    For some, the discussion of SDN technology is all about the management of network state, and that is the role of the SDN controller. In this chapter, we survey the controllers available (both open source and commercial), their structure and capabilities, and then compare them to an idealized model (that is developed in Chapter 9).

  • Chapter 5, Network Programmability
    This chapter introduces network programmability as one of the key tenets of SDN. It first describes the problem of the network divide that essentially boils down to older management interfaces and paradigms keeping applications at arm’s length from the network. In the chapter, we show why this is a bad thing and how it can be rectified using modern programmatic interfaces. This chapter firmly sets the tone for what concrete changes are happening in the real world of applications and network devices that are following the SDN paradigm shift.

  • Chapter 6, Data Center Concepts and Constructs
    This chapter introduces the reader to the notion of the modern data center through an initial exploration of the historical evolution of the desktop-centric world of the late 1990s to the highly distributed world we live in today, in which applications—as well as the actual pieces that make up applications—are distributed across multiple data centers. Multitenancy is introduced as a key driver for virtualization in the data center, as well as other techniques around virtualization. Finally, we explain why these things form some of the keys to the SDN approach and why they are driving much of the SDN movement.

  • Chapter 7, Network Function Virtualization
    In this chapter, we build on some of the SDN concepts that were introduced earlier, such as programmability, controllers, virtualization, and data center concepts. The chapter explores one of the cutting-edge areas for SDN, which takes key concepts and components and puts them together in such a way that not only allows one to virtualize services, but also to connect those instances together in new and interesting ways.

  • Chapter 8, Network Topology and Topological Information Abstraction
    This chapter introduces the reader to the notion of network topology, not only as it exists today but also how it has evolved over time. We discuss why network topology—its discovery, ongoing maintenance, as well as an application’s interaction with it—is critical to many of the SDN concepts, including NFV. We discuss a number of ways in which this nut has been partially cracked and how more recently, the IETF’s I2RS effort may have finally cracked it for good.

  • Chapter 9, Building an SDN Framework
    This chapter describes an idealized SDN framework for SDN controllers, applications, and ecosystems. This concept is quite important in that it forms the architectural basis for all of the SDN controller offerings available today and also shows a glimpse of where they can or are going in terms of their evolution. In the chapter, we present the various incarnations and evolutions of such a framework over time and ultimately land on the one that now forms the Open Daylight Consortium’s approach. This approach to an idealized framework is the best that we reckon exists today both because it is technically sound and pragmatic, and also because it very closely resembles the one that we embarked on ourselves after quite a lot of trial and error.

  • Chapter 10, Use Cases for Bandwidth Scheduling, Manipulation, and Calendaring
    This chapter presents the reader with a number of use cases that fall under the areas of bandwidth scheduling, manipulation, and bandwidth calendaring. We demonstrate use cases that we have actually constructed in the lab as proof-of-concept trials, as well as those that others have instrumented in their own lab environments. These proof-of-concept approaches have funneled their way into some production applications, so while they may be toy examples, they do have real-world applicability.

  • Chapter 11, Use Cases for Data Center Overlays, Big Data, and Network Function Virtualization
    This chapter shows some use cases that fall within the data center arena. Specifically, we show some interesting use cases around data center overlays and network function virtualization. We also show how big data can play a role in driving some SDN concepts.

  • Chapter 12, Use Cases for Input Traffic Monitoring, Classification, and Triggered Actions
    This chapter presents the reader with some use cases in the input traffic/triggered actions category. These use cases concern themselves with the general action of receiving some traffic at the edge of the network and then taking some action. The action might be preprogrammed via a centralized controller, or a device might need to ask a controller what to do once certain traffic is encountered. Here we present two use cases to demonstrate these concepts. First, we show how we built a proof of concept that effectively replaced the Network Access Control (NAC) protocol and its moving parts with an OpenFlow controller and some real routers. This solved a real problem at a large enterprise that could not have been easily solved otherwise. We also show a case of how a virtual firewall can be used to detect and trigger certain actions based on controller interaction.

  • Chapter 13, Final Thoughts and Conclusions
    This chapter brings the book into the present tense—re-emphasizing some of our fundamental opinions on the current state of SDN (as of this writing) and providing a few final observations on the topic.


Chapter 1. Introduction

Up until a few years ago, storage, computing, and network resources were intentionally kept physically and operationally separate from one another. Even the systems used to manage those resources were separated—often physically. Applications that interacted with any of these resources, such as an operational monitoring system, were also kept at arm’s length, behind significant access policies, systems, and procedures, all in the name of security. This is the way IT departments liked it. It was really only after the introduction of (and demand for) inexpensive computing power, storage, and networking in data center environments that organizations were forced to bring these different elements together. It was a paradigm shift that also brought applications that manage and operate these resources much, much closer than ever before.

Data centers were originally designed to physically separate traditional computing elements (e.g., PC servers), their associated storage, and the networks that interconnected them with client users. The computing power that existed in these types of data centers became focused on specific server functionality—running applications such as mail servers, database servers, or other such widely used functionality in order to serve desktop clients. Previously, those functions—which were executed on the often thousands (or more) of desktops within an enterprise organization—were handled by departmental servers that provided services dedicated only to local use. As time went on, the departmental servers migrated into the data center for a variety of reasons—first and foremost, to facilitate ease of management, and second, to enable sharing among the enterprise’s users.

It was around 10 years ago that an interesting transformation took place. A company called VMware had invented an interesting technology that allowed a host operating system, such as one of the popular Linux distributions, to execute one or more client operating systems (e.g., Windows). What VMware did was create a small program that established a virtual environment synthesizing a real computing environment (e.g., virtual NIC, BIOS, sound adapter, and video). It then marshaled real resources between the virtual machines. This supervisory program was called a hypervisor.

Originally, VMware was designed for engineers who wanted to run Linux for most of their computing needs and Windows (which was the corporate norm at the time) only for those situations that required that specific OS environment to execute. When they were finished, they would simply close Windows as if it were another program, and continue on with Linux. This had the interesting effect of allowing a user to treat the client operating system as if it were just a program consisting of a file (albeit large) that existed on her hard disk. That file could be manipulated as any other file could be (i.e., it could be moved or copied to other machines and executed there as if it were running on the machine on which it was originally installed). Even more interestingly, the operating system could be paused without it knowing, essentially causing it to enter into a state of suspended animation.

With the advent of operating system virtualization, the servers that typically ran a single, dedicated operating system, such as Microsoft Windows Server, and the applications specifically tailored for that operating system could now be viewed as a ubiquitous computing and storage platform. With further advances and increases in memory, computing, and storage, data center compute servers were increasingly capable of executing a variety of operating systems simultaneously in a virtual environment. VMware expanded its single-host version to a more data-center-friendly environment that was capable of executing and controlling many hundreds or thousands of virtual machines from a single console. Operating systems such as Windows Server that previously occupied an entire “bare metal” machine were now executed as virtual machines, each running whatever applications client users demanded. The only difference was that each was executing in its own self-contained environment that could be paused, relocated, cloned, or copied (i.e., as a backup). Thus began the age of elastic computing.

Within the elastic computing environment, operations departments were able to move servers to any physical data center location simply by pausing a virtual machine and copying a file. They could even spin up new virtual machines simply by cloning the same file and telling the hypervisor to execute it as a new instance. This flexibility allowed network operators to start optimizing the data center resource location and thus utilization based on metrics such as power and cooling. By packing together all active machines, an operator could turn down cooling in another part of a data center by sleeping or idling entire banks or rows of physical machines, thus optimizing the cooling load on a data center. Similarly, an operator could move or dynamically expand computing, storage, or network resources by geographical demand.

As with all advances in technology, this newly discovered flexibility in the operational deployment of computing, storage, and networking resources brought about a new problem: one of operational efficiency, not only in terms of maximizing the utilization of storage and computing power, but also in terms of power and cooling. As mentioned earlier, network operators began to realize that computing power demand in general increased over time. To keep up with this demand, IT departments (which typically budget on a yearly basis) would order all the equipment they predicted would be needed for the following year. However, once this equipment arrived and was placed in racks, it would consume power, cooling, and space resources—even if it was not yet used! This was the dilemma discovered first at Amazon. At the time, Amazon’s business was growing at the rate of a “hockey stick” graph—doubling every six to nine months. As a result, capacity had to stay ahead of demand for its computing services, which served its retail ordering, stock, and warehouse management systems, as well as internal IT systems. Amazon’s IT department was therefore forced to order large quantities of storage, network, and computing resources in advance, but it faced the dilemma of having that equipment sit idle until demand caught up with those resources. Amazon Web Services (AWS) was invented as a way to commercialize this unused resource pool so that it would be utilized at a rate closer to 100%. When internal systems needed more resources, AWS would simply push off retail users; when they did not, retail compute users could use up the unused resources. Some call this elastic computing services, but this book calls it hyper virtualization.

It was only then that companies like Amazon and Rackspace, which were buying storage and computing in huge quantities for pricing efficiency, realized they were not efficiently utilizing all of their computing and storage and could resell their spare computing power and storage to external users in an effort to recoup some of their capital investments. This gave rise to a multitenant data center. This of course created a new problem, which was how to separate thousands of potential tenants, whose resources needed to be spread arbitrarily across different physical data centers’ virtual machines.

Another way to understand this dilemma is to note that during the move to hyper virtualized environments, execution environments were generally run by a single enterprise or organization. That is, they typically owned and operated all of the computing and storage (although some rented co-location space) as if they were a single, flat local area network (LAN) interconnecting a large number of virtual or physical machines and network attached storage. (The exception was in financial institutions where regulatory requirements mandated separation.) However, the number of departments in these cases was relatively small—fewer than 100—and so this was easily solved using existing tools such as layer 2 or layer 3 MPLS VPNs. In both cases, though, the network components that linked all of the computing and storage resources up until that point were rather simplistic; it was generally a flat Ethernet LAN that connected all of the physical and virtual machines. Most of these environments assigned IP addresses to all of the devices (virtual or physical) in the network from a single network (perhaps with IP subnets), as a single enterprise owned the machines and needed access to them. This also meant that it was generally not a problem moving virtual machines between different data centers located within that enterprise because, again, they all fell within the same routed domain and could reach one another regardless of physical location.

In a multitenant data center, computing, storage, and network resources can be offered in slices that are independent or isolated from one another. It is, in fact, critical that they are kept separate. This posed some interesting challenges that were not present in the single-tenant data center environment of the past. Keep in mind that such an environment allowed for the execution of any number of operating systems and applications on top of those operating systems, but each needed a unique network address if it was to be accessed by its owner or other external users such as customers. In the past, addresses could be assigned from a single, internal block of possibly private addresses and easily routed internally. Now, however, you needed to assign unique addresses that are externally routable and accessible. Furthermore, consider that each virtual machine in question had a unique layer 2 address as well. When a router delivers a packet, it ultimately has to deliver that packet using Ethernet (not just IP). This is generally not an issue until you consider virtual machine mobility (VM mobility). In these cases, virtual machines are relocated for power, cooling, or compute-consolidation reasons. Herein lies the rub: physical relocation means physical address relocation. It also possibly means changes to layer 3 routing in order to ensure that packets previously destined for that machine in its original location can now be delivered to its new location.

At the same time data centers were evolving, network equipment seemed to stand still in terms of innovations beyond feeds and speeds. That is, beyond the steady increase in switch fabric capacities and interface speeds, data communications had not evolved much since the advent of IP, MPLS, and mobile technologies. IP and MPLS allowed a network operator to create networks and virtual network overlays on top of those base networks, much in the way that data center operators were able to create virtual machines to run over physical ones with the advent of computing virtualization. Network virtualization was generally referred to as virtual private networks (VPNs) and came in a number of flavors, including point-to-point (e.g., a personal VPN as you might run on your laptop to connect to your corporate network); layer 3 (virtualizing an IP or routed network, for example to allow a network operator to securely host enterprises in a manner that isolates their traffic from that of other enterprises); and layer 2 VPNs (switched network virtualization that isolates traffic similarly to a layer 3 VPN, except that the addresses used are Ethernet MAC addresses).

Commercial routers and switches typically come with management interfaces that allow a network operator to configure and otherwise manage these devices. Some examples of management interfaces include command-line interfaces, XML/Netconf, graphical user interfaces (GUIs), and the Simple Network Management Protocol (SNMP). These options provide an interface that allows an operator suitable access to a device’s capabilities, but they still often hide the lowest levels of detail from the operator. For example, network operators can program static routes or other static forwarding entries, but those ultimately are requests that are passed through the device’s operating system. This is generally not a problem until one wants to program using syntax or semantics for functionality that does not exist in the device. If someone wishes to experiment with some new routing protocol, they cannot do so on a device whose firmware has not been written to support that protocol. In such cases, it was common for a customer to make a feature enhancement request of a device vendor, and then typically wait some amount of time (several years was not out of the ordinary).
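
As an illustration of one such management interface, the short sketch below uses the open source ncclient Python library to talk NETCONF to a device and retrieve its running configuration. This is only a minimal sketch under assumptions: the address and credentials are placeholders, and what the device returns depends entirely on its own capabilities.

    # A minimal sketch, assuming the open source "ncclient" library and a
    # reachable NETCONF-capable device at a placeholder address.
    from ncclient import manager

    with manager.connect(host="192.0.2.1", port=830,
                         username="admin", password="admin",
                         hostkey_verify=False) as m:
        # Capabilities the device advertised in its NETCONF hello.
        for capability in m.server_capabilities:
            print(capability)

        # Retrieve the running configuration as XML.
        reply = m.get_config(source="running")
        print(reply.data_xml)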

At the same time, the concept of a logically centralized control plane came back onto the scene. A network device is composed of a data plane, which is often a switch fabric connecting the various network ports on the device, and a control plane, which is the brains of the device. For example, routing protocols that are used to construct loop-free paths within a network are most often implemented in a distributed manner. That is, each device in the network has a control plane that implements the protocol. These communicate with each other to coordinate network path construction. However, in a centralized control plane paradigm, a single (or at least logically single) control plane would exist. This über brain would push commands to each device, thus commanding it to manipulate its physical switching and routing hardware. It is important to note that although the hardware that executed the data planes of devices remained quite specialized, and thus expensive, the control plane continued to gravitate toward less and less expensive, general-purpose computing, such as the central processing units produced by Intel.

All of these aforementioned concepts are important, as they created the nucleus of motivation for what has evolved into what today is called software-defined networking (SDN). Early proponents of SDN saw that network device vendors were not meeting their needs, particularly in the feature development and innovation spaces. High-end routing and switching equipment was also viewed as being highly overpriced, at least for the control plane components of the devices. At the same time, they saw the cost of raw, elastic computing power diminishing rapidly to the point where having thousands of processors at one’s disposal was a reality. It was then that they realized that this processing power could possibly be harnessed to run a logically centralized control plane and potentially even use inexpensive, commodity-priced switching hardware. A few engineers from Stanford University created a protocol called OpenFlow that could be implemented in just such a configuration. OpenFlow was architected for a number of devices containing only data planes to respond to commands sent to them from a (logically) centralized controller that housed the single control plane for that network. The controller was responsible for maintaining all of the network paths, as well as programming each of the network devices it controlled. The commands and responses to those commands are described in the OpenFlow protocol. It is worth noting that the Open Networking Foundation (ONF) commercially supported the SDN effort and today remains its central standardization authority and marketing organization. Based on the basic architecture just described, one can now imagine how quick and easy it was to devise a new networking protocol by simply implementing it within a data center on commodity-priced hardware. Even better, one could implement it in an elastic computing environment in a virtual machine.
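
To make that division of labor concrete, here is a deliberately simplified Python sketch of the centralized model just described: the switches hold nothing but a match/action table (the data plane), while the controller owns all path logic and pushes flow entries down to them. The class and field names are invented for illustration and do not represent the actual OpenFlow message format.

    # A conceptual sketch of the logically centralized control model
    # (illustrative only; not the real OpenFlow wire protocol).
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple


    @dataclass
    class FlowEntry:
        match_dst: str      # e.g., destination IP prefix to match
        out_port: int       # action: forward out this port


    @dataclass
    class Switch:
        name: str
        flow_table: List[FlowEntry] = field(default_factory=list)

        def install(self, entry: FlowEntry) -> None:
            # The switch has no routing logic of its own; it simply stores
            # whatever the controller tells it to.
            self.flow_table.append(entry)

        def forward(self, dst: str) -> int:
            for entry in self.flow_table:
                if dst.startswith(entry.match_dst):
                    return entry.out_port
            return -1  # unknown traffic; a real switch would punt to the controller


    class Controller:
        """Holds the network-wide view and programs every switch."""

        def __init__(self) -> None:
            self.switches: Dict[str, Switch] = {}

        def register(self, switch: Switch) -> None:
            self.switches[switch.name] = switch

        def program_path(self, prefix: str, hops: List[Tuple[str, int]]) -> None:
            # hops: (switch name, egress port) pairs along the computed path.
            for switch_name, port in hops:
                self.switches[switch_name].install(FlowEntry(prefix, port))


    # Usage: the controller computes a path and pushes it to the switches.
    controller = Controller()
    s1, s2 = Switch("s1"), Switch("s2")
    controller.register(s1)
    controller.register(s2)
    controller.program_path("10.0.1.", [("s1", 2), ("s2", 1)])
    print(s1.forward("10.0.1.5"))   # -> 2
    print(s2.forward("10.0.1.5"))   # -> 1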

A slightly different view of SDN is what some in the industry refer to as software-driven networks, as opposed to software-defined networks. This play on words is not meant to completely confuse the reader, but instead to highlight a difference in philosophy between the approaches. In the software-driven approach, one views OpenFlow and that architecture as a distinct subset of the functionality that is possible. Rather than viewing the network as being composed of logically centralized control planes with brainless network devices, one views the world as more of a hybrid of the old and the new. More to the point, the reality is that it is unrealistic to think that existing networks are going to be dismantled wholesale to make way for the new world proposed by the ONF and software-defined networks. It is also unrealistic to discard all of the advances in network technology that exist today and are responsible for things like the Internet. Instead, there is more likely to be a hybrid approach whereby some portions of networks are operated by a logically centralized controller, while other parts would be run by the more traditional distributed control plane. This would also imply that those two worlds would need to interwork with each other.

It is interesting to observe that at least one of the major parts of what SDN and OpenFlow proponents are trying to achieve is greater and more flexible network device programmability. This does not necessarily have anything to do with the location of the network control and data planes; however, it is concerned with how they are programmed. Do not forget that one of the motivations for creating SDN and OpenFlow was the flexibility of how one could program a network device, not just where it is programmed. If one observes what is happening in the SDN architecture just described, both of those questions are solved. The question is whether or not the programmability aspect is the most optimal choice.

To address this, individuals representing Juniper, Cisco, Level3, and other vendors and service providers have recently spearheaded an effort around network programmability called the Interface to the Routing System (I2RS). A number of folks from these sources have contributed to several IETF drafts, including the primary requirements and framework drafts, to which Alia Atlas, David Ward, and Tom have been primary contributors. In the near future, at least a dozen drafts around this topic should appear online. Clearly there is great interest in this effort. The basic idea behind I2RS is to create a protocol and components that act as a means of programming a network device’s routing information base (RIB) using a fast-path protocol that allows a quick cut-through of provisioning operations, enabling real-time interaction with the RIB and the RIB manager that controls it. Previously, the only access one had to the RIB was via the device’s configuration system (in Juniper’s case, Netconf or SNMP).

The key to understanding I2RS is that it is most definitely not just another provisioning protocol; that’s because there are a number of other key concepts that comprise an entire solution to the overarching problem of speeding up the feedback loop between network elements, network programming, state and statistical gathering, and post-processing analytics. Today, this loop is painfully slow. Those involved in I2RS believe the key to the future of programmable networks lies within optimizing this loop.

To this end, I2RS provides varying levels of abstraction in terms of programmability of network paths, policies, and port configuration, but in all cases it has the advantage of allowing for adult supervision of said programming as a means of checking the commands prior to committing them. For example, some protocols exist today for programming at the hardware abstraction layer (HAL), which is far too granular or detailed for the network’s efficiency and in fact places undue burden on its operational systems. Another example is providing operational support systems (OSS) applications quick and optimal access to the RIB in order to quickly program changes, witness the results, and then quickly reprogram in order to optimize the network’s behavior. One key aspect of all of these examples is that the discourse between the applications and the RIB occurs via the RIB manager. This is important, as many operators would like to preserve their operational and workflow investment in the routing protocol intelligence that exists in device operating systems such as Junos or IOS-XR, while leveraging this new and useful programmability paradigm to allow additional levels of optimization in their networks.
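
To give a feel for what programming the RIB under such supervision might look like, the hypothetical sketch below models a RIB manager that accepts install/withdraw requests from an external application and checks them before committing. The names and structure are invented for illustration; this is not the actual I2RS protocol or any vendor's API.

    # Hypothetical sketch of an I2RS-style interaction with a RIB manager
    # (invented names; not the real I2RS protocol or any vendor API).
    from dataclasses import dataclass
    from typing import Dict


    @dataclass
    class RibEntry:
        prefix: str        # e.g., "203.0.113.0/24"
        next_hop: str      # e.g., "198.51.100.1"
        preference: int    # lower wins, mirroring typical RIB behavior


    class RibManager:
        """Owns the RIB and supervises external programming requests."""

        def __init__(self) -> None:
            self.rib: Dict[str, RibEntry] = {}

        def install(self, entry: RibEntry) -> bool:
            # "Adult supervision": reject a request rather than blindly
            # overwrite state owned by a more-preferred source.
            current = self.rib.get(entry.prefix)
            if current is not None and current.preference < entry.preference:
                return False
            self.rib[entry.prefix] = entry
            return True

        def withdraw(self, prefix: str) -> None:
            self.rib.pop(prefix, None)


    # An external application programs a route and immediately sees the result,
    # without going through the slower configuration system.
    rib = RibManager()
    ok = rib.install(RibEntry("203.0.113.0/24", "198.51.100.1", preference=10))
    print(ok, rib.rib["203.0.113.0/24"].next_hop)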

I2RS also lends itself well to a growing desire to logically centralize routing and path decisions and programmability. The protocol has requirements to run either on a device or outside of it. In this way, centralized controller functionality is embraced in cases where it is desired; however, in cases where more classic distributed control is desired, that is supported as well.

Finally, another key subcomponent of I2RS is a normalized and abstracted topology, represented by a common and extensible object model. The service also allows multiple abstractions of the topological representation to be exposed. A key aspect of this model is that nonrouters (or nonrouting-protocol speakers) can more easily manipulate and change the RIB state going forward. Today, nonrouters have, at best, major difficulty getting at this information. Going forward, components of a network management/OSS, analytics, or other applications that we cannot yet envision will be able to interact quickly and efficiently with routing state and network topology.
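
As a rough illustration of what such a common, extensible topology object model might look like, the hypothetical Python sketch below defines nodes and links and derives a more abstract view from the detailed one. The classes are invented for illustration and are not the actual I2RS topology model.

    # Hypothetical sketch of a normalized topology object model with one
    # level of abstraction (invented classes; not the actual I2RS model).
    from dataclasses import dataclass, field
    from typing import Dict, List, Set


    @dataclass
    class Link:
        src: str
        dst: str
        metric: int = 1


    @dataclass
    class Topology:
        nodes: Set[str] = field(default_factory=set)
        links: List[Link] = field(default_factory=list)

        def add_link(self, src: str, dst: str, metric: int = 1) -> None:
            self.nodes.update({src, dst})
            self.links.append(Link(src, dst, metric))

        def abstract_by_site(self, site_of: Dict[str, str]) -> "Topology":
            """Collapse routers into sites, exposing only inter-site links."""
            abstract = Topology()
            for link in self.links:
                a, b = site_of[link.src], site_of[link.dst]
                if a != b:
                    abstract.add_link(a, b, link.metric)
            return abstract


    # Detailed view: four routers in two sites.
    topo = Topology()
    topo.add_link("r1", "r2")             # inside site A
    topo.add_link("r3", "r4")             # inside site B
    topo.add_link("r2", "r3", metric=10)  # the inter-site link
    sites = {"r1": "A", "r2": "A", "r3": "B", "r4": "B"}
    print([(l.src, l.dst) for l in topo.abstract_by_site(sites).links])  # [('A', 'B')]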

So, to culminate these thoughts, it is appropriate that we define SDN for what we think it is and will become:

Software-defined networks (SDN): an architectural approach that optimizes and simplifies network operations by more closely binding the interaction (i.e., provisioning, messaging, and alarming) among applications and network services and devices, whether they be real or virtualized. It is often achieved by employing a point of logically centralized network control—often realized as an SDN controller—which then orchestrates, mediates, and facilitates communication between applications wishing to interact with network elements and network elements wishing to convey information to those applications. The controller then exposes and abstracts network functions and operations via modern, application-friendly, and bidirectional programmatic interfaces.

So, as you can see, software-defined, software-driven, and programmable networks come with a rich and complex historical lineage, a set of challenges, and a variety of solutions to those problems. It is the success of the technologies that preceded software-defined, software-driven, and programmable networks that makes advancing technology based on those things possible. The fact of the matter is that most of the world’s networks—including the Internet—operate on the basis of IP, BGP, MPLS, and Ethernet. Virtualization technology today is based on the technologies pioneered by VMware years ago, and those continue to be the basis on which it and other products are built. Network attached storage enjoys a similarly rich history.

I2RS has a similar future ahead of it insofar as solving the problems of network, compute, and storage virtualization as well as those of the programmability, accessibility, location, and relocation of the applications that execute within these hyper virtualized environments.

Although SDN controllers continue to rule the roost when it comes to press, many other advances have taken place just in the time we have been writing this book. One very interesting and bright one is the Open Daylight Project. Open Daylight’s mission is to facilitate a community-led, industry-supported open source framework, including code and architecture, to accelerate and advance a common, robust software-defined networking platform. To this end, Open Daylight is hosted under the Linux Foundation’s umbrella and will facilitate a truly game-changing, and potentially field-leveling, effort around SDN controllers. This effort will also spur innovation where we think it matters most in this space: applications. While we have seen many advances in controllers over the past few years, controllers really represent the foundational infrastructure for SDN-enabled applications. In that vein, the industry has struggled to design and develop controllers over the past few years while mostly ignoring applications. We think that SDN is really about operational optimization and efficiency at the end of the day, and the best way to achieve this is to quickly check off that infrastructure and allow the industry to focus on innovating in the application and device layers of the SDN architecture.

This book focuses on the network aspects of software-defined, software-driven, and programmable networks, while giving sufficient coverage to the virtualization, location, and programming of the storage, network, and compute aspects of the equation. It is the goal of this book to explore the details and motivations around the advances in network technology that gave rise to, and support, the hyper virtualization of network, storage, and computing resources that is now considered to be part of SDN.