
Sunday, 1 May 2011

Bad Network Designs That Still Serve Their Purpose

Kudos to IT giants such as Cisco, Juniper & Microsoft. Despite some disagreeable network designs out in the field, their equipment continues to work. In many cases it works so well that the designer is never even aware of the abomination that is the network architecture. Here are our top picks for network designs that can make your eyes water.

1. Dodgy Net - This design consists of many IP subnets all residing on a single VLAN. For the uninitiated, the general rule is one IP subnet per VLAN. This helps to segment layer 2 & layer 3 traffic consistently across the network.

Technically, however, it is feasible to run all IP subnets on a single VLAN. Of course, you get the worst of both worlds with this approach. IP broadcasts are encapsulated in layer 2 frames that have no subnet boundaries & are in turn seen by every IP device on the network. Devices outside the IP subnet of the originating host promptly discard the packet, but by that stage both performance & security have been compromised.
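You can see the receive-then-discard behaviour in a few lines of Python using the standard `ipaddress` module. This is only a toy model, with hypothetical host addresses, of what happens on a shared VLAN:

```python
import ipaddress

# Three hosts on the same VLAN (one layer-2 broadcast domain)
# but split across two IP subnets -- hypothetical addresses.
hosts = {
    "A": ipaddress.ip_interface("10.0.1.10/24"),
    "B": ipaddress.ip_interface("10.0.1.20/24"),
    "C": ipaddress.ip_interface("10.0.2.30/24"),
}

sender = hosts["A"]
for name, iface in hosts.items():
    if name == "A":
        continue
    # Shared VLAN: every NIC receives the layer-2 broadcast frame...
    receives_frame = True
    # ...but only hosts in the sender's subnet process the IP packet;
    # the rest discard it after the damage (CPU, exposure) is done.
    processes_packet = iface.network == sender.network
    print(f"{name}: receives frame={receives_frame}, processes packet={processes_packet}")
```

Host B shares 10.0.1.0/24 with the sender & processes the packet; host C in 10.0.2.0/24 still receives the frame but drops it, which is exactly the wasted work & exposure described above.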

Correcting dodgy net designs does require a lot of planning & management, because every access port VLAN & trunk port has to be identified, labeled & configured.

Configuring Dodgy Net is akin to slipping on a warm sweater in winter, then leaping into a chilled pool. It doesn't make sense.

2. Static City - Most network engineers first learn about routing using static routes. Learning to propagate routes with routing protocols comes later, but for some lost souls the penny never drops & their network designs inevitably become static cities.

Consider that modern networks can host thousands of subnets & hundreds or thousands of routing devices. Now imagine having to write down each subnet from the point of view of each device & tell it, by hand, which direction to send the packet. That is a lot of work, & it becomes an administrative nightmare in large networks where changes occur on a daily basis.

Here is a simple example of how the workload involved in adding manual routes can multiply. A network with 800 subnets hosted on 50 devices requires 40,000 static entries. Of course, this doesn't take into account summarization, but even if you can reduce that number to 10% of the original entries, that is still 4,000 routes to manage & update. Every time a network is added, removed or modified, 50 devices must be reconfigured to reflect the change. Even on a comparatively stable network, one change per month adds up to 600 device changes per year. One change per week & that number grows to a staggering 2,600 changes per year.
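The arithmetic above is easy to reproduce. A quick back-of-the-envelope sketch, using the figures from the example:

```python
# Workload for a fully static design -- figures from the text.
subnets = 800
devices = 50

# Every device needs an entry for every subnet.
static_entries = subnets * devices           # 40,000 entries
after_summarization = static_entries // 10   # ~10% -> 4,000 routes

# Each network change must be pushed to every device.
changes_monthly = devices * 12               # 600 device changes / year
changes_weekly = devices * 52                # 2,600 device changes / year

print(static_entries, after_summarization, changes_monthly, changes_weekly)
```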

The good news is that this issue is comparatively simple to fix, because the administrative distance feature on routers means you can configure & implement dynamic routing protocols alongside static routes, allowing you to deploy the whole dynamic routing solution & verify it without having to remove a single static route. But many organizations that have experienced this type of growth are reluctant to change for fear of breaking something unexpected, & many of them have used default gateways as a way to reduce an otherwise unmanageable problem.
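The reason this migration is safe comes down to how administrative distance breaks ties. A toy model of the route selection: the AD values are Cisco's defaults (static 1, OSPF 110), & the prefix next-hops are hypothetical:

```python
# Toy RIB: for a given prefix, the candidate route with the lowest
# administrative distance is installed. Cisco default ADs shown.
AD = {"static": 1, "ebgp": 20, "ospf": 110}

candidates = [
    ("static", "192.0.2.1"),   # existing hand-configured route
    ("ospf",   "192.0.2.2"),   # newly enabled dynamic route
]

best = min(candidates, key=lambda r: AD[r[0]])
print(best)  # the static (AD 1) still wins while it exists
```

Because the static's AD of 1 beats OSPF's 110, the dynamic protocol can be brought up & verified in the background; only when you finally delete each static does the corresponding dynamic route take over.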

3. Physically redundant but logically dependent network designs - Redundancy on networks will continue to become more important as more services develop critical dependencies on data communications. Under some circumstances, however, redundant links are useless due to the logical design of the network. In other words, even if physically redundant links exist, the network will still go down due to the failure of a logical single point of failure.

Some examples of networks that may include logical single points of failure (as opposed to physical single points of failure) are:

- Logical endpoints for tunnels
- Static routing & redistributed statics in a single point
- Hub & spoke designs
- Non-redundant authentication services - for example for VPNs or an 802.1x framework.
- Non-redundant network services - DNS, DHCP & other services necessary to make computers work.
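The hub & spoke case is easy to demonstrate with a simple reachability check. In this sketch the topology is entirely hypothetical: the branch has two physically diverse carrier paths to HQ, but every tunnel terminates on a single hub router:

```python
from collections import defaultdict, deque

def reachable(edges, src, dst, failed=None):
    """BFS reachability over an undirected graph, ignoring a failed node."""
    graph = defaultdict(set)
    for u, v in edges:
        graph[u].add(v)
        graph[v].add(u)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph[node]:
            if nxt not in seen and nxt != failed:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Physical layer: two independent carrier paths -- genuinely redundant.
physical = [("branch", "carrier1"), ("carrier1", "hq"),
            ("branch", "carrier2"), ("carrier2", "hq")]

# Logical layer: every tunnel lands on one hub -- a logical SPOF.
logical = [("branch", "hub"), ("hub", "hq")]

print(reachable(physical, "branch", "hq", failed="carrier1"))  # survives a carrier failure
print(reachable(logical, "branch", "hq", failed="hub"))        # hub failure cuts the branch off
```

Losing one carrier leaves the physical topology connected, but losing the single hub severs the logical topology, so the branch goes dark despite its redundant circuits.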

4. Spaghetti Routing - Whenever networks run multiple routing protocols in an inconsistent manner (e.g. one router runs protocols A & B while another router runs protocols B & C), the effect of different administrative distances between protocols, as well as different metrics across protocols, can lead to packets taking wacky paths to their destination. Not only is this effect inefficient, the wrong combination of routing protocols without correct filtering can be dicey & can actually bring the whole network down.

Plenty of cases of chronic spaghetti routing involve BGP. Not because it is a bad protocol; quite the opposite, it is an excellent protocol. But the path selection algorithm, the loop prevention mechanisms & the differing administrative distances between iBGP & eBGP tend to make this protocol dicey in the wrong hands, & less experienced network engineers can find their packets taking the long road to their destinations.
