Are they really the future of the Internet?
The Internet continues to evolve in ways that its founders had not initially anticipated. Yet we take for granted that its evolution—and the resources that drive it—were all part of some master plan of efficiency and transparency. Unfortunately, this is not the case.
For instance, consider how the page you are currently reading arrived on your computer screen, tablet, or phone. It was assembled on the fly from a variety of elements referenced from multiple servers at numerous physical locations.
Yet the architects of the Internet didn't really anticipate this kind of composite information flow. Indeed, the popular vision of an online news service was captured in a 1981 television news broadcast about reading the newspaper on a home computer.
That vision of how the Internet would work was based on the client/server model, in which a single server located in one place delivered a stream of information to multiple client machines at multiple locations.
Client/server computing is still with us, of course, but the services have fragmented and the architectures by which information is communicated have radically changed. Instead of a preponderance of north-south, client/server traffic, today's Internet environments typically consume more resources communicating east-to-west, server-to-server, as web pages are knitted together from hundreds or thousands of elements that exist on multiple servers and are ultimately delivered to your browser as a single page.
According to analysts at WebsiteOptimization.com, the size and complexity of web pages have been increasing at an alarming rate. They tell us that web page size has increased by between 71% and 76% over the past two years alone. At the same time, the number of web objects has increased by 16.2% to 28.2%. Consequently, despite increases in bandwidth, the load time of web pages has increased by 48% over the past two years. In fact, the actual size of the average web page has tripled since 2008. But more alarming from a bandwidth perspective is the fact that the average number of objects referenced by a web page has recently topped 100 per page. This means that for every web page delivered north-south from a web server node to a web browser, there are on average 100 resource calls to other servers on other network nodes, east-west (accessing data, interpreting code, assembling graphics, and so on), all to deliver the web page as HTML.
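To make that concrete, here is a minimal sketch, using only Python's standard library, that fetches a single page and counts the tags referencing external objects such as images, scripts, and stylesheets. The URL is a placeholder, and a simple HTML scan like this will undercount anything loaded later by CSS or JavaScript.

```python
# A rough sketch of counting the objects a single page pulls in.
# The URL is a placeholder; real pages also load resources via CSS and
# JavaScript that a plain HTML scan will not see.
from html.parser import HTMLParser
from urllib.request import urlopen


class ObjectCounter(HTMLParser):
    """Counts tags that reference external resources (images, scripts, etc.)."""

    RESOURCE_TAGS = {"img", "script", "link", "iframe", "source", "embed", "video", "audio"}

    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        if tag in self.RESOURCE_TAGS:
            for name, value in attrs:
                if name in ("src", "href") and value:
                    self.resources.append(value)


if __name__ == "__main__":
    url = "https://www.example.com/"  # placeholder page
    html = urlopen(url).read().decode("utf-8", errors="replace")
    counter = ObjectCounter()
    counter.feed(html)
    print(f"{url} references {len(counter.resources)} external objects")
```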
This trend has been intensified by the revolution in server virtualization and storage virtualization, and becomes exceptionally pronounced in Software as a Service (SaaS) and cloud computing environments. Instead of all the activity occurring on a single server, it's spread across a spectrum of virtual devices, all connected through the IP and Ethernet backbones of the network's nodes. As a result, network administrators are witnessing a radical new pattern in east-west network traffic and wondering how the resulting bottlenecks can be opened.
The bottom line is that this virtualization revolution now requires greater bandwidth between networked nodes, more resilient routing technologies, and a substantially more streamlined approach to the configuration and the maintenance of the network nodes that support our information systems.
But how can we streamline the architecture? To understand the problem, we need to go back to one of the earliest structures of Ethernet networking: something called Spanning Tree Protocol (STP).
Spanning Tree Protocol
STP is a protocol that operates at Layer 2, the data link layer of the OSI model. It runs primarily on Ethernet network bridges and switches and was formally defined in the IEEE 802.1D specification. The main purpose of STP is to ensure that network nodes don't forward information in a loop when redundant pathways exist.
A network of switches needs redundant pathways between nodes in the event that the primary path is disconnected. However, without some method to manage the flow of traffic, there's a real potential that information will essentially "echo" as it travels through the redundant pathways.
STP solves the echoing problem with a relatively simple set of rules based on the physical connections between nodes. Essentially, STP keeps a single active (primary) path between any two points on the network and places the redundant pathways in a blocking state (defined as "disabled" by STP). If a network transaction fails to reach its partnered device through the primary pathway, a secondary pathway is activated in its place. Any disruption to the pathways therefore creates a cascade of new connections that dynamically routes traffic around the disruption, much like a cascade of marbles through a hierarchical maze.
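As a rough illustration of the idea (not the actual protocol, which elects a root bridge and exchanges BPDUs between switches), the Python sketch below picks a root switch for a small invented topology, keeps the shortest-path links back to that root as active, marks every other link as blocked, and then recomputes the tree after a link failure.

```python
# A toy model of the idea behind STP: pick a root switch, keep only the links
# on each switch's shortest path to the root, and mark every other link as
# blocked. The topology below is invented for illustration.
from collections import deque


def spanning_tree(links, root):
    """Return (active_links, blocked_links) for a set of undirected links."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)

    active, visited, queue = set(), {root}, deque([root])
    while queue:                      # breadth-first walk out from the root
        switch = queue.popleft()
        for peer in sorted(neighbors[switch]):
            if peer not in visited:
                visited.add(peer)
                active.add(frozenset((switch, peer)))
                queue.append(peer)
    blocked = {frozenset(l) for l in links} - active
    return active, blocked


links = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")]
active, blocked = spanning_tree(links, root="A")
print("active: ", sorted(tuple(sorted(l)) for l in active))
print("blocked:", sorted(tuple(sorted(l)) for l in blocked))

# If an active link fails, the tree is recomputed and a blocked link takes over.
links.remove(("A", "B"))
active, blocked = spanning_tree(links, root="A")
print("after failure of A-B, active:", sorted(tuple(sorted(l)) for l in active))
```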
STP works well when there's a limited number of network node switches and when the traffic is essentially north-south in a client/server environment. After all, the purpose of a network in a client/server scenario is to reliably connect one server with multiple clients.
Limitations of STP
However, STP bottlenecks communication when the number of networked node devices that need to be communicating with one another increases in a non-hierarchical manner. For instance, bottlenecks occur when one web server needs to link to other servers to access the database, interpret PHP, assemble content, parse parameters, access image files, and then assemble and spew out the desired web page. In that scenario, the east-west traffic is far more intense than the final page delivered as HTML to the browser. Why? Because each traffic element has to wend its way east-west through the maze of nodes defined by the physical connections of the spanning tree.
Of course, STP has evolved significantly, with many variants, both standardized and proprietary, that define different network topologies. These include Rapid Spanning Tree Protocol (RSTP), Per-VLAN Spanning Tree (PVST) and Per-VLAN Spanning Tree Plus (PVST+), and Multiple Spanning Tree Protocol (MSTP). Each has particular strengths, but all have shortcomings: limited backward compatibility with other STP implementations, added management complexity, and a failure to provide resiliency as network traffic changes in response to user requests.
Fabric Network Topologies
It's against this limitation of STP that fabric network topologies were engineered. By comparison to an STP topology, a fabric network is a non-hierarchical topology in which nodes connect with one another via one or more crossbar switches.
A crossbar switch is a switching device that has multiple input and output points arranged in a matrix that conceptually appears as a sort of woven fabric-like structure. The value of crossbar switching is that it permits addressing any input point with any output point touched by the switch. So, instead of having just two network nodes physically connected to a switching device, a crossbar switch might have up to 100 devices virtually connected.
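As a rough sketch of that any-to-any idea, the snippet below models a crossbar as a matrix of crosspoints: closing the crosspoint at a given input and output connects those two ports directly. The port count and names are arbitrary.

```python
# A minimal model of a crossbar: an N x N matrix of crosspoints, where closing
# the crosspoint at (input, output) connects those two ports directly.
class Crossbar:
    def __init__(self, ports):
        self.ports = list(ports)
        # One boolean crosspoint per (input, output) pair.
        self.crosspoints = {(i, o): False for i in self.ports for o in self.ports}

    def connect(self, inp, out):
        self.crosspoints[(inp, out)] = True

    def reachable(self, inp, out):
        return self.crosspoints[(inp, out)]


xbar = Crossbar(ports=[f"node{i}" for i in range(8)])
xbar.connect("node0", "node5")            # any input can reach any output directly
xbar.connect("node3", "node1")
print(xbar.reachable("node0", "node5"))   # True
print(xbar.reachable("node0", "node1"))   # False until that crosspoint is closed
```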
The theoretical advantage of a fabric network then is how it "flattens" the network topology, removing the hierarchy, reducing potential bottlenecks in throughput, enabling faster east-west traffic, and permitting easier maintenance and configuration. In a fabric network, the address of every network node is known to the switching mechanism, and the switch is able to directly address and route frames to every other node within the network. This any-to-any capability increases bandwidth by reducing latency and complexity. Fabric topologies can co-exist with STP topologies and act as a superset to control those nodes. This makes a fabric topology a good management tool for companies that wish to expand their network without disrupting the existing STP topology.
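A back-of-the-envelope comparison helps show why flattening matters. The sketch below (with invented topology sizes) counts the switch-to-switch hops needed for every east-west pair of access switches, first through a two-tier tree and then through a flattened, any-to-any fabric.

```python
# A rough comparison of east-west hop counts. The tree has one core switch,
# two aggregation switches, and six access switches; the fabric connects the
# same six access switches any-to-any. Sizes are invented for illustration.
from collections import deque
from itertools import combinations


def hops(adjacency, src, dst):
    """Shortest hop count between two switches via breadth-first search."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for peer in adjacency[node]:
            if peer not in seen:
                seen.add(peer)
                queue.append((peer, dist + 1))
    return None


access = [f"acc{i}" for i in range(6)]

# Two-tier tree: acc0-acc2 hang off agg0, acc3-acc5 off agg1, aggs meet at core.
tree = {sw: set() for sw in access + ["agg0", "agg1", "core"]}
for i, sw in enumerate(access):
    agg = "agg0" if i < 3 else "agg1"
    tree[sw].add(agg)
    tree[agg].add(sw)
for agg in ("agg0", "agg1"):
    tree[agg].add("core")
    tree["core"].add(agg)

# Flattened fabric: every access switch reaches every other directly.
fabric = {sw: set(access) - {sw} for sw in access}

pairs = list(combinations(access, 2))
for name, topo in (("tree", tree), ("fabric", fabric)):
    total = sum(hops(topo, a, b) for a, b in pairs)
    print(f"{name}: {total} total switch hops across {len(pairs)} east-west pairs")
```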
Fabric Network Switches, then, are physical devices that act in a combined manner as both a router and a bridge between nodes, often technically described as RBridges. They are engineered devices that most commonly use a protocol called Transparent Interconnection of Lots of Links, or TRILL, which is an Internet Engineering Task Force (IETF) standard.
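TRILL's control plane actually runs the IS-IS link-state routing protocol between RBridges; the sketch below is not TRILL itself but only illustrates the equal-cost multipath idea on an invented four-RBridge topology, enumerating the shortest paths between two RBridges and pinning a flow to one of them by hashing its addresses.

```python
# Illustration of equal-cost multipath between RBridges: enumerate all
# shortest paths between two RBridges, then pin a flow to one path by hashing
# its addresses so its packets stay in order. Topology and flow are invented.
from collections import deque


def equal_cost_paths(adjacency, src, dst):
    """Enumerate all shortest (equal-cost) paths from src to dst."""
    best, paths = None, []
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue                  # longer than the shortest path found so far
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for peer in adjacency[node]:
            if peer not in path:
                queue.append(path + [peer])
    return [p for p in paths if len(p) == best]


# Four RBridges wired so two equal-cost paths exist between RB1 and RB4.
adjacency = {
    "RB1": ["RB2", "RB3"],
    "RB2": ["RB1", "RB4"],
    "RB3": ["RB1", "RB4"],
    "RB4": ["RB2", "RB3"],
}
paths = equal_cost_paths(adjacency, "RB1", "RB4")
print("equal-cost paths:", paths)

flow = ("00:11:22:33:44:55", "66:77:88:99:aa:bb")   # source and destination MACs
chosen = paths[hash(flow) % len(paths)]
print("flow", flow, "takes", chosen)
```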
The Current Protocol Wars and the Future of the Network
Network engineers are now engaged in a great debate about fabric switches that use the TRILL protocol versus a newer multipath protocol called Shortest Path Bridging, which became an IEEE standard in May of 2012. It's the nature of the network switch technology market that each vendor has a preferred technology, founded on one engineering standard or the other. That's what's happening now among the various vendors, and considering what network switches cost, it's easy to understand how engineers at different technology companies are pitted against one another.
It's not within the purview of this article to express an opinion about either multipath protocol or the technologies that implement them, except to comment that it's important to understand that fabric network technology is evolving quickly. The arena of engineered protocols is expanding, and there are both proprietary and open standards specifications utilized by different vendors' devices. What your network administrator or cloud service provider chooses for your environment will be determined by cost, reliability, vendor support, and the way the preexisting network is configured and managed. But fabric isn't the only burgeoning technology on the horizon.
Software-Defined Networks
Finally, there is a third technological approach that some network administrators believe will provide a still more resilient network. This technology, called Software-Defined Networking (SDN), is so new that today it seems to represent more buzz than substance. Indeed, some vendors are latching onto the SDN nomenclature as a way to package their own proprietary networking technologies into SDN's loosely defined abstractions. How this approach evolves may shape the ultimate future of how network topologies based on the OSI model continue to develop. Or it may prove to be another side street that leads networks to a technological dead end.
What's to Come
Regardless of how the technologies arrive or disappear—like the virtual newspapers described in the news broadcast back in 1981—there's no doubt that we're experiencing incredible change in how our networks are functioning. The problems our network administrators are witnessing probably won't be solved by any single standard protocol or by the selection of an individual vendor's proprietary solution. What we're seeing is an incredibly rapid evolution of engineered devices to meet the demands of our networks and overcome the problems the networks experience. It's a fantastic flood of new ideas, inventions, protocols, and devices.
This is the real nature of the virtual environments that are supporting our networks, and it's the talents of our network engineers and the dynamics of the technology marketplace that will determine the real future of the Internet.