Geoff Huston has written an article, "The Unreliable Internet" (PDF), about how ISP networks becoming less resilient while customers make themselves more resilient (by connecting to two or more ISPs, in other words: multihoming) creates a vicious circle.
At the same time, there is concern about the growth of the number of networks in the global routing table, which is in part caused by the increase in multihoming. The IETF Operations and Management Area has the multi6 working group looking into the issue from an IPv6 angle. While there is much improvement possible in current protocols, there is no be-all, end-all solution in sight that would scale to unlimited numbers of multihomers.
Permalink - posted 2001-09-28
An article posted on the Network World Fusion site caused a heated debate on the NANOG list about the future of multihoming.
Permalink - posted 2001-09-29
The terrorist attack on the World Trade Center in New York City resulted in outages for a number of ISPs. Of the destroyed buildings, WTC 1 and 7 housed colocation facilities. The Telehouse America facility on 25 Broadway in Manhattan, not far from the WTC, lost power. The facility was not damaged, but commercial power was lost, and after running on generator power for two days, the generators overheated and had to be turned off for several hours. Affected ISPs received many offers for temporary connectivity and assistance rebuilding their networks.
The phone network experienced congestion in many places on the day of the attack. Although individual (news) sites were slow or hard to reach, general Internet connectivity held up very well. While phone traffic was much higher than usual, traffic over the Internet rose shortly after the attack, but then it declined and stayed somewhat lower than normal the rest of the day, with some unusual traffic patterns.
It seems obvious that packet switched networks degrade more gracefully than circuit switched networks. A phone call always uses the same amount of bandwidth, so either you are lucky and it works, or you are unlucky and you get nothing. Packet networks, on the other hand, slow down but generally don't cut off users completely until things get really, really bad. And while the current Internet holds its own in many-to-many communication, it can't really cope with massive one-to-many traffic.
Permalink - posted 2001-09-29
Between August 20 and 26 an interesting subject came up on the NANOG list: when using Gigabit Ethernet for exchange points, there can be a nice performance gain if an MTU larger than the standard 1500 bytes is used. However, this only works if all attached layer 3 devices agree on the MTU. If a single MTU can't be agreed upon, the exchange can run several VLANs and set the MTU for each subinterface.
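To make the VLAN trick concrete, here is a hypothetical IOS-style sketch: one physical port into the exchange, a jumbo VLAN for peers that agree on a 9000-byte MTU and a standard VLAN for everyone else. Interface names, VLAN numbers and addresses are made up for illustration, and exact commands vary by platform.

```
! Physical port must accept jumbo frames for the big-MTU VLAN to work
interface GigabitEthernet0/0
 mtu 9216
! Standard VLAN: everyone can peer here at the usual 1500 bytes
interface GigabitEthernet0/0.100
 encapsulation dot1Q 100
 ip address 192.0.2.2 255.255.255.0
 ip mtu 1500
! Jumbo VLAN: only routers that agree on 9000 bytes attach here
interface GigabitEthernet0/0.200
 encapsulation dot1Q 200
 ip address 198.51.100.2 255.255.255.0
 ip mtu 9000
```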
Permalink - posted 2001-09-30
During the week from September 17 to 23, the main topic on NANOG was the new worm called Nimda. There was some discussion about whether it is useful to try to slow worms down using "tar pits" such as LaBrea.
Permalink - posted 2001-09-30
Internet Still Growing Dramatically, says Lawrence Roberts, one of the pioneers of the ARPANET.
Permalink - posted 2001-12-24
The Renesys Corporation has published a preliminary report indicating that the Code Red II and Nimda worms caused a somewhat alarming instability in global routing. Remarkably, this instability lasted much longer than that caused by (even quite large) outages. When important links go down, BGP converges within minutes and remains stable after that. The worms, on the other hand, made the interdomain routing system less stable for many hours.
Global Routing Instabilities during Code Red II and Nimda Worm Propagation (the original link is broken, so go through archive.org)
Permalink - posted 2001-12-25
Jaap Akkerhuis from the .nl TLD registry made an analysis of the impact of the events of September 11th on the net which he presented at the ICANN general meeting mid-November.
Permalink - posted 2001-12-27
On December 17th, Yahoo News published an article about hackers attacking the router infrastructure of the Net. The story is pretty much completely without merit. First of all, it cites no incidents or specific threats of hackers actually attacking routers, nor any realistic ways in which they might accomplish this. The bit about using the default password sounds especially implausible, if only because Cisco routers don't come with a default password: if you don't set a password yourself, it is impossible to telnet to the router. I've never heard of a BGP-running router without adequate password protection.
The idea that routers might be vulnerable to denial of service attacks is not completely out in left field, but adequate access control filters and enough CPU power easily neutralize this threat.
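As an illustration of such access control, a filter on the incoming interface can make sure only the configured peer ever reaches the router's BGP port. This is a hypothetical IOS-style sketch with made-up addresses and interface names, not a tested configuration.

```
! Only the known peer (192.0.2.1) may talk BGP to this router (192.0.2.2);
! BGP packets from anyone else are dropped before they cost any real CPU.
access-list 120 permit tcp host 192.0.2.1 host 192.0.2.2 eq bgp
access-list 120 permit tcp host 192.0.2.1 eq bgp host 192.0.2.2
access-list 120 deny   tcp any any eq bgp
access-list 120 permit ip any any
!
interface Serial0
 ip access-group 120 in
```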
The stuff about MD5 protection of BGP sessions is plain and simple wrong. Have a look at some remarks about BGP passwords and MD5 in the old news (Q3 2001) section for better information. (Or, better yet, read RFC 2385. It's just six pages.)
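For reference, the RFC 2385 protection is simply an MD5 digest carried in a TCP option, computed over the TCP pseudo-header, the TCP header (checksum zeroed, options excluded), the segment data, and finally the shared key. A minimal sketch of that computation; the field layout follows the RFC, but the function name and calling convention are my own:

```python
import hashlib
import socket
import struct

def rfc2385_digest(src_ip, dst_ip, tcp_header, payload, key):
    """Compute the TCP MD5 option digest per RFC 2385.

    The caller passes the TCP header without options and with the
    checksum field zeroed; the hash covers pseudo-header, header,
    data and key, in that order.
    """
    seg_len = len(tcp_header) + len(payload)
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip),   # source address
                         socket.inet_aton(dst_ip),   # destination address
                         0,                          # zero padding byte
                         socket.IPPROTO_TCP,         # protocol number (6)
                         seg_len)                    # TCP segment length
    return hashlib.md5(pseudo + tcp_header + payload + key).digest()
```

The resulting 16-byte digest travels in TCP option kind 19, so a receiver that doesn't know the key can't forge or blindly reset the BGP session.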
Secure BGP (S-BGP) might sound like a good idea, but I'm far from sure that making the routing system depend on something as complex and (at least potentially) fragile as a public key infrastructure is wise. "We're very sorry, but the root CA certificates expired, so there won't be any internet today." Besides, in the current situation each network can build all the filters it deems necessary. This way, routes are only used when they are announced by the neighboring network and allowed through the manually created filters. The chances of both the announcement and the filters going wrong in exactly the same way are very small.
Also, a PKI system might open up additional ways in which a router could be the victim of a denial of service attack. The required RSA computations are extremely CPU intensive, so an attacker would only have to deliver a small number of falsified routing updates to keep a router very busy rejecting them.
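To illustrate the kind of work involved: checking an RSA signature means a modular exponentiation, and the router has to do one for every update, bogus or not. A toy sketch with textbook-sized numbers (this is not S-BGP itself; real keys would be 1024 bits or more, making each exponentiation vastly more expensive):

```python
# Toy RSA, illustration only. With the tiny textbook parameters below
# the math is instant; with real key sizes each pow() is heavy work.
p, q = 61, 53
n = p * q        # public modulus (3233)
e = 17           # public exponent
d = 413          # private exponent: e*d = 1 (mod lcm(p-1, q-1))

def sign(m):
    """Originator signs digest m: one modular exponentiation."""
    return pow(m, d, n)

def verify(m, s):
    """Receiver must also exponentiate -- even for garbage, which is
    exactly what makes flooding a router with bogus updates attractive."""
    return pow(s, e, n) == m

sig = sign(42)
print(verify(42, sig))      # genuine update: True
print(verify(42, sig + 1))  # forged update: False, but CPU time is spent anyway
```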
Permalink - posted 2001-12-30
The weeks from September 23 to October 7 saw two heated discussions on the NANOG list. The first one was on filtering BGP routes. It all started with some remarks about Verio's peering filter policy, but after some days the discussion became more general, taking in related topics such as "sub-basement multihoming".
The second discussion was about (ab)use of the Domain Name System for failover (in combination with a NAT box) and load balancing. This discussion seems to be somewhat religious: some people think there is no problem, others nearly start to foam at the mouth just thinking about it.
Permalink - posted 2001-12-31