Exploding data volumes are putting corporate networks and data centres under immense strain: so can they cope? Jason Stamper investigates.
The network is still the computer. It was John Gage, Sun Microsystems' chief researcher and vice-president of the Science Office, who is credited with coining the phrase "The network is the computer" back in 1984. Sun adopted it as its rallying cry, and it resonates even more strongly in 2010.
Gage coined the phrase to describe the emerging world of distributed computing. But at the time the stand-alone, non-networked PC had only been in the workplace for about three years, and most people dismissed Gage's idea as nuts. The power appeared to be in the box on or under their desk running Windows, and the only cabling was to connect the mouse, monitor and keyboard and plug the thing into the power socket.
Of course, what Gage managed to explain so succinctly would soon become accepted wisdom. Then when book publisher Tim O'Reilly coined the phrase 'Web 2.0' in around 2004, the emphasis was on the collaborative power of networks (especially social networks) and more feature-rich and powerful 'front ends'.
Despite the fact that Tim Berners-Lee described the term Web 2.0 as nothing more than a "piece of jargon", the phrase did at least focus minds on the fact that the way people were using the Web was changing. Simply browsing the Web was giving way to greater collaboration, a sense of user empowerment and a decentralisation of power away from the traditional media companies: users were increasingly writing rather than just reading.
From Aloha to Metcalfe's Law
Yet what Sun's Gage said in 1984 and O'Reilly said about the Web in 2004 was the logical evolution of what computer scientists and network engineers had been saying for some time. It's a logic that can be traced back at least as far as 1970, when Robert Metcalfe, a student at Harvard, was reading a paper by Norman Abramson of the University of Hawaii that described a simple network called Aloha.
AlohaNet was a packet radio system used for data communications among the Hawaiian Islands. Packets are collections of bits led by a header, which is a smaller collection of bits, bearing an address; they proceed through a communications system rather like envelopes through a postal system. The key feature of AlohaNet was that anyone could send packets to anyone else at any time: you just began transmitting.
If you didn't get an acknowledgment back, you knew the message had failed to get through, perhaps because your packets had collided with someone else's. As Metcalfe described it, they had been "lost in the ether".
Abramson knew that AlohaNet was only about 17% efficient due to such collisions. Metcalfe was able to apply an advanced form of maths called queuing theory to drastically reduce collisions, and get efficiency up to 90%. Metcalfe had invented Ethernet, and he would go on to co-found 3Com in 1979. But Metcalfe had made another key discovery: that the value of a network grows as the square of the number of end-points, or users. This became known as Metcalfe's Law.
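The roughly 17% figure lines up with the classical analysis of pure (unslotted) Aloha: if stations transmit at random with offered load G, throughput is S = G·e^(−2G), which peaks at 1/(2e), about 18.4%. A minimal sketch of that formula (this is the textbook model, not Metcalfe's own queuing analysis, which the article doesn't detail):

```python
import math

def pure_aloha_throughput(g: float) -> float:
    """Throughput S of pure (unslotted) Aloha at offered load G.

    A frame survives only if no other frame starts within one
    frame-time either side of it, which for Poisson arrivals
    happens with probability e^(-2G).
    """
    return g * math.exp(-2 * g)

# Throughput peaks at G = 0.5, giving S = 1/(2e) ~ 18.4% -
# close to the "about 17%" efficiency quoted for AlohaNet.
peak = pure_aloha_throughput(0.5)
print(f"Peak pure-Aloha throughput: {peak:.1%}")
```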
As George Gilder wrote in his 1993 essay, 'Metcalfe's Law and Legacy': "In this era of networking, he is the author of what I will call Metcalfe's law of the telecosm, showing the magic of interconnections: connect any number, "n," of machines - whether computers, phones or even cars - and you get "n" squared potential value. Think of phones without networks or cars without roads.
Conversely, imagine the benefits of linking up tens of millions of computers and sense the exponential power of the telecosm." Which is exactly what happened with the World Wide Web.
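Gilder's n-squared point is easy to make concrete: with n end-points, the number of distinct point-to-point connections is n(n−1)/2, which grows in proportion to n². A toy illustration (the pairwise-links formula is a standard reading of the law, assumed here rather than taken from the article):

```python
def potential_links(n: int) -> int:
    """Number of distinct point-to-point connections among n end-points."""
    return n * (n - 1) // 2

# Doubling the network roughly quadruples its potential value:
for n in (10, 20, 40):
    print(f"{n} end-points -> {potential_links(n)} potential links")
# 10 -> 45, 20 -> 190, 40 -> 780
```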
Web 2.0, then, wasn't particularly new - Berners-Lee had always envisioned that the Web would be, as he put it, "a collaborative medium, a place where we [could] all meet and read and write". He even called it the 'Read/Write Web'. But as Dr. Mike Wesch, an anthropology professor at Kansas State University, explained in a 2007 YouTube video called 'Web 2.0 ... The Machine is Us/ing Us', the network really had become the computer, and the computer was us.
From physical to virtual
In the enterprise, meanwhile, there has also been an evolution going on in and around the network, albeit one that should probably not have been a surprise to anyone. Corporate network traffic is growing just as data volumes are exploding, and networks have had to adapt to keep up with the latest demands. Corporate networks must now handle voice thanks to Voice over IP, video thanks to YouTube, pictures thanks to Facebook and Flickr and far more.
In the data centre networking space the pace of change has been particularly dramatic. With data volumes doubling around every two years in most organisations, by analyst estimates, and information more critical than ever if companies are to compete effectively, data centre network fabrics are changing, and fast. And just as companies face a huge challenge in architecting their networks for the next-generation data centre, network specialists face a big opportunity.
To give an idea of the size of this opportunity, Brocade's CEO Mike Klayko told CBR in New York recently that, "If we do this properly we have enough market to go after to be a $10 billion company. I won't predict when that is, but we actually have enough on our own to grow into that kind of space." Acquiring Foundry Networks for $3bn was just a step in this direction.
Mike Klayko, Brocade CEO.
One of the key trends that all of the networking players talk about is 'convergence'. There has been convergence of voice and data traffic on corporate networks. There is a convergence underway between physical computer kit and virtualised IT infrastructures. There is even a convergence in data centre networking between Fibre Channel and good old Ethernet.
Cisco's UK and Ireland CTO, Ian Foddering, explains that, "There has been a long history of convergence and not just in the data centre space. It's been driven by feedback from customers and partners. An example might be something as simple as VLAN, which allows you to segment off different elements on the network [like data and voice]. In the data centre we think there is a prime opportunity to reduce both Capex and Opex as virtualisation and convergence happen together: recent Cisco solutions like the Unified Computing System [offer] a common IP platform that unites computing, virtualisation and storage access, with huge benefits."
Back in June, Brocade introduced Brocade One, which it says is a unifying network architecture that enables customers to simplify the complexity of virtualising their applications: "By removing network layers, simplifying management and protecting existing technology investments, Brocade One helps customers migrate to a world where information and services are available anywhere in the cloud."
While public cloud providers such as Amazon, Google and Microsoft (with Azure) offer computing on demand delivered over the Internet, analysts also claim that companies will use virtualisation and other optimisation techniques to turn their own data centres into mini clouds - what most people are calling private clouds. Here at CBR we have our doubts: this may be just a new term for a trajectory that IT has been on for some time - virtualisation and consolidation are not especially new - but most analysts seem to think it's a real enough trend to throw their weight behind it.
With Brocade talking more and more about how its networking gear can help customers going down the private cloud route, we asked Klayko if he believes customers are actively moving to this private cloud idea today. "I think they want to prepare for it," he told us. "There is a lot of discussion. Every customer I talk to talks about cloud, cloud, cloud. No one debates that there are clouds out there. There are public clouds right now. People are trying to figure out what actually is inside of a private cloud. Things that service providers do today, like metering and bill-backs, is what enterprises want to go ahead and do. We can actually do that by taking the expertise we have in that space along with the technology know-how we have in the data centre, give them the tools to go ahead and build out their own private cloud. I think the first step along the way is really what they want to do is virtualise the data centre. And then from that they will build out their different clouds."
Meanwhile another area of convergence is in data centre network standards. Fibre Channel, the most prevalent storage networking protocol, is giving way to trusty Ethernet. Of course, the massive volumes of data being shunted around a data centre need Ethernet on steroids, hence the development of the 10 Gigabit Ethernet (10GbE) standard. Ten times faster than standard Gigabit Ethernet, 10GbE is still fairly nascent, and it's now also possible to run Fibre Channel over Ethernet (FCoE) on top of 10GbE networks. 40 Gigabit Ethernet and 100 Gigabit Ethernet are coming too, and it won't stop there.
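To put those line rates in perspective, a back-of-the-envelope calculation of how long it takes to move a terabyte at each speed - an idealised figure, assuming full link utilisation and ignoring protocol overhead:

```python
def transfer_seconds(terabytes: float, gigabits_per_sec: float) -> float:
    """Idealised time to move `terabytes` of data over a link of the
    given line rate, assuming 100% utilisation and no protocol overhead."""
    bits = terabytes * 8 * 10**12          # decimal TB -> bits
    return bits / (gigabits_per_sec * 10**9)

for rate in (1, 10, 40, 100):              # GbE line rates from the article
    print(f"{rate:>3} GbE: {transfer_seconds(1, rate):6.0f} s per TB")
```

At 1 Gb/s a terabyte takes over two hours; at 100 Gb/s it takes 80 seconds - which is why the data-doubling trend keeps dragging the standards upward.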
A question of focus
As far as Klayko is concerned, it's not worth getting hung up on the standards themselves: "I think Fibre Channel will be around when my grandchildren finish college," he says. "It is rock solid, it works great, it is in the largest... I mean, think about it... 92% of the largest Fortune 1,000 data centres [have Fibre Channel] as the standard. When you look at convergence, where you will see it first show up, is in the first six feet of the data centre, you know, in the rack. Then after that you have to virtualise the data centre before convergence really becomes effective.
"We have been shipping FCoE products for two years - it doesn't pay the light bill," says Klayko. "We have them. FCoE is Fibre Channel, it just happens to run over an Ethernet wire. That is really all it is. When it happens, that is fine, it is just moving from one medium to another. I look at it independent of protocol. iSCSI, NAS, Fibre Channel, Ethernet, whatever you want. It is not really going to matter. It is the underlying applications that are going to determine how you then go and virtualise the data centre."
Another data centre networking specialist, Force10 Networks, argues that the standards are actually pretty important. "It is an exciting time as the investment made by the industry over the past four years in 40 GbE and 100 GbE is set to culminate... with the ratification of the IEEE P802.3ba standard," said John D'Ambrosia, chair of the IEEE P802.3ba Task Force and director of standards for Force10 Networks. "This standard will provide the tools needed to add bandwidth and reduce complexity in the data centre."
Juniper Networks' founder, chairman and CTO Pradeep Sindhu, meanwhile, recently told CBR that the firm wants to flatten data centre networks to reduce cost, latency and complexity. Project Stratus, due to launch in 2011, will see it go from the three tiers common today to just one - it can already go from two layers down to one.
One thing is certain. While data centre managers are looking to reduce complexity, Metcalfe's Law means that there is no sign of the data explosion - and the resultant bandwidth saturation - abating any time soon. The standards are keeping pace but they are really not where attention should be focused: the key is keeping up with bandwidth and, dare we say it, private cloud demands, while reducing complexity and total cost of ownership. Because without a capable network, the organisation will surely struggle. The network is still every inch the computer.