Like power and cooling, connectivity is essential for providing all the services in data centres. Without connections to the outside world, a data centre is as good as useless. The connections must be good ones, though – but what do we mean by good? What are the latest developments? And what are the snags when choosing connectivity in the cloud? Connectivity is the ability to make and maintain connections between two or more points by way of telecom or datacom systems.
Apart from the technology selected, the choice of connectivity is determined by four parameters: price, bandwidth, latency (delay) and availability. Each situation demands its own assessment of the optimal mix, and every choice made influences the selection of both the connectivity technology and the providers. One business needs a lot of low-cost bandwidth, while another demands the lowest latency or the highest availability and is prepared to increase its budget to get it.
In order to link in with this Yearbook, I’ll stick to connectivity between data centres and enterprise environments. There are many good reasons for a business to get involved with several data centres. The most common ones include: meeting all the rules and regulations, load-balancing requirements, globalisation and being able to guarantee continuity. All these reasons lead towards a high redundancy requirement: duplication of all the components in a system in order to prevent a single point of failure and also minimising the possibility of downtime. Connectivity therefore plays a central role here, although it is frequently ignored altogether.
In the development of connectivity in data-centric computing, we can identify three maturity phases. These run in parallel with the developments seen in IT in general: doing it yourself, outsourcing and ready-to-use services (cloud).
Phase 1. Setting up and connecting own data centres (doing it yourself)
Phase 2. Connecting up own data centres to commercial data centre providers (outsourcing)
Phase 3. Connecting up own and outsourced data centre infrastructure to public cloud providers (cloud)
A specific level of connectivity is always needed, whatever the maturity phase of the business. This level differs per business and therefore per application, and should be thoroughly analysed. It is therefore always advisable to carry out a zero assessment to find out the extent to which the existing connectivity is in line with the business strategy and business processes of the company; once the differences have been established, you can decide what must be done to match connectivity with current and future requirements. It is also important to identify any requirements imposed on the business, for example by regulators. In the financial sector, for example, regulators can demand synchronous replication, and the connectivity must be able to accommodate this. The wishes of the business itself must also be ascertained.
The second criterion concerns the design of the applications that you want to deploy at several locations. The type of connectivity needed is determined by the risk management and cost perspectives per application category. Storage replication, for example, needs different connectivity from file sharing, simply because Office applications are less business critical than primary business applications such as CRM, ERP and process control. The average latency of transatlantic connections is 60–80 ms, which is excellent for office applications but not suitable for the synchronous replication of data.
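A rough calculation shows why that latency rules out synchronous replication: every synchronous write must wait for an acknowledgement from the remote site before the application can continue. The sketch below uses assumed figures (a 1 ms metro link, and 70 ms as the midpoint of the 60–80 ms transatlantic range) to illustrate the ceiling this puts on serial write throughput.

```python
# Illustration with assumed numbers: how round-trip latency caps the rate
# of serial synchronous writes, ignoring disk and CPU time entirely.

def max_sync_writes_per_second(round_trip_ms: float) -> float:
    """Upper bound on serial synchronous writes per second."""
    return 1000.0 / round_trip_ms

metro_rtt_ms = 1.0   # assumed metro dark-fibre link between nearby sites
wan_rtt_ms = 70.0    # midpoint of the 60-80 ms transatlantic figure

print(f"Metro link:     {max_sync_writes_per_second(metro_rtt_ms):.0f} writes/s")
print(f"Transatlantic:  {max_sync_writes_per_second(wan_rtt_ms):.0f} writes/s")
```

At 70 ms per round trip, a strictly serial workload manages only about 14 synchronous writes per second – fine for office traffic, crippling for a transactional database.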
The scope of zero assessments
- Business requirements, whether or not imposed by legislation or regulators
- Basic infrastructure and applications (e.g. Office)
- Business-specific applications (e.g. ERP)
- Relationships between data, servers, storage and networks
- Data volume per unit of time (actual or anticipated)
- Required bandwidth (dimensioned or peak load)
- Assignment of bandwidth per application and per location
Zero assessments indicate the minimum and maximum bandwidth requirements and also the latency effects from the perspective of the users of the applications.
With zero assessments it is possible to select the actual bandwidth that has to be deployed. Very often organisations take a one-size-fits-all approach to this. It is simple and clear, although I am of the firm conviction that, from a cost point of view and also taking risk management into account, it is much better to opt for the bandwidth required for each application. It is simply not necessary to match the entire business connectivity to the peak loads of just a few business-critical applications.
The necessary level of availability of the connections depends largely on the level of operational security required. Connections between data centres typically need ‘five 9s’ availability (99.999% uptime). Selecting such a level of availability puts demands on vendor management. There are plenty of connectivity providers who will happily sign an SLA based on this figure. From the point of view of many providers it is principally an administrative figure: the basis on which a fine would be paid were the availability not achieved. This invariably leads to heated discussions: the customer must prove that the promised availability has not been achieved, the provider will fall back on force majeure where possible (‘the undersea cable was damaged by a ship’s anchor – our apologies’), and the fine eventually paid comes nowhere near the losses suffered by the customer. A minority of providers do it differently, because they have built their infrastructure in such a way that it can materially deliver 99.999% availability. These providers take the five 9s as the main guideline when building the network, which means that each component in the network is duplicated.
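It helps to see what these figures mean in minutes, and why duplication is the route to them. The sketch below uses an assumed ‘three 9s’ figure for a single circuit and the standard series/parallel availability arithmetic, which assumes the two paths fail independently – which is precisely why physically separate routes matter.

```python
# Back-of-envelope availability arithmetic: downtime per year, and the
# effect of two fully independent parallel paths (an idealised assumption).

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(availability: float) -> float:
    """Expected unavailable minutes per year for a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

single_path = 0.999                       # assumed 'three 9s' circuit
duplicated = 1 - (1 - single_path) ** 2   # both paths must fail at once

print(f"99.999% target:    {downtime_minutes(0.99999):.1f} min/year")
print(f"Single 99.9% path: {downtime_minutes(single_path):.0f} min/year")
print(f"Two parallel paths: {downtime_minutes(duplicated):.2f} min/year")
```

Five 9s allows only about five minutes of downtime a year; duplicating an ordinary circuit closes most of the gap, provided the paths share no common failure point.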
To find out which camp a provider belongs to (administrative or material SLA), just ask to see the network topology. Many providers will be unable to deliver this because of poor administration or the complexity of the system. If they cannot see what their own network looks like, how can they possibly guarantee end-to-end redundancy? If a provider can show you how the network is set up, you must also ensure that they are expressly committed to the topology presented to you.
Phase 1: doing it yourself
Back to the data centre. The diagram shows what the connection between two data centres should look like in order to physically offer the five 9’s. Everything must be duplicated: redundant and separate power supplies per device (A & B feed), duplicate paths, separate paths, redundant cooling. All this plus the costs for acquisition, maintenance and renewals. This is repeated per location, depending on the number of data centres that have to be connected.
Connectivity between the data centres must be set up with 99.999% availability. Virtually no single provider can offer international end-to-end connections, so building a network always involves a mix of providers with their own connections and SLAs, all of which must be controlled and managed. For many CIOs this is unknown territory that requires a high level of specialism.
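Mixing providers has a hidden cost: with independent segments in series, availabilities multiply, so the end-to-end figure is always worse than any individual SLA. The segment figures below are assumptions for illustration.

```python
# Sketch: end-to-end availability of a chain of provider segments.
# Assuming independent failures, availabilities in series multiply.
from math import prod

segments = [0.9999, 0.9995, 0.9999]   # assumed SLA figures of three providers
end_to_end = prod(segments)

print(f"End-to-end availability: {end_to_end:.6f}")
```

Even with every segment at three or four 9s, the chain as a whole falls short of each individual promise – which is what the CIO is actually being asked to manage.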
It is probably clear that doing it yourself is simply a step too far for many businesses: building your own connectivity with 99.999% availability is virtually impossible. Naturally there are organisations, particularly in the financial and government sectors, where for strategic or legal reasons everything must be done in-house, but for the majority of businesses this is an inefficient strategy.
Phase 2: outsourcing
The combination of own data centres and commercial data centres is currently the most common configuration. Of course this is only logical because of the substantial investment required in energy, security, cooling, etc, for a data centre. Outsourcing to groups who have made this their core business speaks for itself.
The data centre business is booming, and in Amsterdam specifically there is a wide choice of providers of all types and sizes, from international players right through to local specialists. What should you watch out for? Regarding connectivity, a dividing line can be drawn between carrier-neutral providers and the so-called ‘carrier hotels’. Carrier-neutral data centres offer connections with all the large carriers; carrier hotels, on the other hand, are run by carriers who only offer connectivity via their own infrastructure. Both approaches have advantages and disadvantages. Starting with the carrier-neutral data centres: the customer has more flexibility and freedom in the choice of systems available. You can change over to another carrier in the same data centre without having to move to another location. The competition between the various carriers is substantial, resulting in lower pricing and higher levels of innovation. The disadvantage is that the customer must stay alert in order to keep up with the latest developments.
Carrier hotels offer a one-stop-shop approach: one contact point and one invoice covering both the data centre and the connectivity. Quite often they also offer end-to-end connectivity via their own infrastructure, with or without partners. This is convenient. There are disadvantages too, of course: running a data centre is not a carrier’s core business, and vendor lock-in plays a substantial role in the relationship.
An important trend is the pay-per-use model: the customer only pays for the bandwidth used, not the capacity. We see this making a breakthrough in the US; Europe has not yet come this far. Carriers from the internet world are leading the way. Most of the traditional carriers are having difficulty setting up billing systems, as it is a technically complex subject. Moreover, it is easier – and more lucrative – to invoice on the basis of the capacity taken up than on actual use. Where pay-per-use is possible, you often see the following mixture emerging: 50% of the peak-load-dimensioned bandwidth is billed as a fixed fee, with the rest invoiced on the basis of use. Bear in mind that carriers overbook, so the prices offered can be rather dubious.
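The mixed model described above is easy to sketch. All prices and volumes below are invented for the example; the shape of the calculation (a fixed fee for half the peak capacity, plus a usage charge for anything above that) is the point.

```python
# Illustration of the mixed billing model: 50% of the peak-dimensioned
# bandwidth is a fixed fee, usage above that half is billed separately.
# Prices and volumes are invented example figures.

def monthly_invoice(peak_mbps: float, used_mbps: float,
                    price_per_mbps: float) -> float:
    fixed_part = 0.5 * peak_mbps * price_per_mbps
    usage_part = max(0.0, used_mbps - 0.5 * peak_mbps) * price_per_mbps
    return fixed_part + usage_part

# 1 Gbit/s peak dimensioning, EUR 2 per Mbit/s per month (assumptions)
print(f"Quiet month (300 Mbit/s used): EUR {monthly_invoice(1000, 300, 2.0):.2f}")
print(f"Busy month  (700 Mbit/s used): EUR {monthly_invoice(1000, 700, 2.0):.2f}")
```

Note that in the quiet month the customer still pays the full fixed half – which is exactly why it pays to check how much of the ‘pay-per-use’ offer is actually fixed.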
Phase 3: the cloud
Connectivity with cloud providers is the next phase in the development of enterprise computing. The preference for converting Capex (investments) into Opex (costs) is on the rise within IT: it is smarter to pay for usage than for ownership. As explained above, we can also see a general development towards pay-per-use in connectivity.
Within this framework I will keep to connectivity and IaaS (Infrastructure-as-a-Service). IaaS is a powerful development in enterprise computing; in fact it is possible to take complete data centres from the cloud. The same advice applies here: carry out a zero assessment, and decide on the basis of business requirements, risk management and costs what should and should not be taken from the cloud.
It is always possible to exercise influence on an outsourcing partner, but this is not the case with cloud computing: here you have to accept the provider’s offer as-is. You are therefore also dependent on the connectivity on their side of the connection. Furthermore, in a cloud environment the chain is only as strong as the weakest link, which often turns out to be the connection. Experience tells us that cloud providers are strongly focused on their internal systems and their own technology – this is why they can issue attractive SLAs. Connectivity remains either under-emphasised or is left out of the SLA altogether. In other words, you get a guarantee ‘up to the front door’ and no further. But we have also seen some innovative cloud providers taking the importance of connectivity into account in the quality of the service they provide. Apart from the internet, they also offer alternative connection methods with the required redundancy, complete with guarantees.
When evaluating an IaaS proposition it is essential to get detailed answers about the connectivity of the service. What shape does the SLA take? How is connectivity dealt with? Does the provider take contractual responsibility for connectivity? What are the performance guarantees? Are there other connections available apart from the internet? What is the failover policy regarding the connections – are they redundant? And just as importantly: how is Quality of Service (QoS) arranged, and how is the traffic prioritised? And what do support and escalation look like?
The internet is mainly used for these connections. The advantage of using the internet is operational reliability: the internet’s architecture is such that communication remains possible under all kinds of circumstances. The disadvantage is latency: data packets arrive at their destination with a delay that can be high and unpredictable. This makes the internet unsuitable on its own for a five 9s environment, although the provider may offer additional forms of connectivity.
No matter what stage your data centric computing is at, connectivity is – although understated – of crucial significance. Well-considered connectivity policies should match the business strategy and contribute towards good risk management. A lot of money can be saved with smart strategies, as well as raising the general availability of the IT facilities and increasing the flexibility of the business.
- There is always more availability needed than you think, therefore carry out a zero assessment.
- Set the bandwidth individually for each application; the one-size-fits-all approach is needlessly expensive.
- Retain flexibility by avoiding long-term contracts (max 2 years). Connectivity continues to drop in price, latency is falling and innovation is occurring rapidly. Today’s systems are already out-of-date the day after tomorrow.
- Ask for pay-per-use bandwidth.
- With the cloud, find out exactly who is responsible for connectivity and Quality of Service in the chain: who determines the prioritising in the traffic?