Geoff Huston, a member of the Internet Architecture Board, wrote "Waiting for IP version 6" as one of his monthly columns for the Internet Society. In this column Geoff argues that the only real advantage IPv6 has over IPv4 is its larger addresses, but we are not very close to running out of IPv4 address space yet, with 1.5 billion addresses still unallocated. So he expects that demand for IPv6 will not be driven by anything or anyone that's online now, but rather by the tons of new devices that will need connectivity in the future.
The IPv6 Forum doesn't quite agree and, in the persons of Latif Ladid and Jim Bound, sent in a response. They argue IPv6 does have additional features that make it better than IPv4, such as the flow label (for which no use is currently defined, but which could help quality of service efforts), stateless autoconfiguration, and better routing scalability. They do have a point here.
Sure, in IPv4 you can use DHCP and you're online without having to configure an address and additional parameters. However, there must still be a DHCP server somewhere. These days, DHCP servers are built into pretty much anything that can route IP or perform NAT. This works quite well for numbering hosts that act as clients, but it doesn't really help with servers, because in order to have a DHCP server give out stable addresses, it must be configured to do so. This is only slightly better than having to configure the host itself. In IPv6, this is only a problem in theory: stateless autoconfiguration makes a host select the same address for itself each time, without the need to configure this address either on the host or on a DHCP server. The theoretical problem is that some other host may select the same address because of MAC address problems (as the lower part of the address is derived from the MAC address), but this should be extremely rare.
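To illustrate how the lower part of the address follows from the MAC address, here's a small sketch of the EUI-64 derivation that stateless autoconfiguration uses (RFC 2464). The MAC address and prefix below are made-up example values:

```python
# Sketch: derive an IPv6 EUI-64 interface identifier from a 48-bit MAC
# address, the way stateless autoconfiguration does. Because the result
# depends only on the MAC, the host picks the same address every time.

def eui64_interface_id(mac: str) -> str:
    """Turn a MAC address like '00:0c:29:3e:12:34' into the lower
    64 bits of an autoconfigured IPv6 address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                  # flip the universal/local bit
    octets[3:3] = [0xFF, 0xFE]         # insert ff:fe in the middle
    groups = [f"{octets[i] << 8 | octets[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# Example host on the documentation prefix 2001:db8::/64:
print("2001:db8::" + eui64_interface_id("00:0c:29:3e:12:34"))
# prints 2001:db8::20c:29ff:fe3e:1234
```

Since no server is involved, two hosts only collide if they somehow have the same MAC address, which is the rare failure case mentioned above.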
Routing is the same for IPv6 as for IPv4. Both use provider aggregation to limit the number of routes in the global routing table. However, address conservation in IPv4 makes it necessary to make these provider blocks much smaller than is desirable from a routing aggregation standpoint. In IPv4, ISPs get /20 blocks and only the ones using up address space really fast get /16 blocks. In IPv6, every ISP gets at least a /32 and the next seven /32s remain unallocated so this can grow to a /29 (at least in the RIPE region) so ISPs need far fewer individual blocks to assign address space to their customers. This should help with route scaling. Unfortunately, it doesn't help with customers who are multihomed. But that's another story.
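The aggregation arithmetic is easy to check with Python's ipaddress module; the /29 below is an example prefix from the documentation range, not a real allocation:

```python
import ipaddress

# A single IPv6 /32 allocation already holds 65,536 /48 customer blocks,
# so one route announcement covers a lot of customers.
blocks_per_32 = 2 ** (48 - 32)

# A /29 is eight consecutive /32s, which is why reserving the next
# seven /32s lets an allocation grow without adding routes.
slash32s = list(ipaddress.ip_network("2001:db8::/29").subnets(new_prefix=32))

print(blocks_per_32, len(slash32s))  # prints 65536 8
```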
Permalink - posted 2003-05-23
If your network has a link with an MTU smaller than 1500 bytes somewhere in the middle, you're in trouble. It's not the first time this came up on the NANOG list, and it won't be the last.
In order to avoid wasting resources by either sending packets that are smaller than the maximum supported by the network or sending packets so large they must be fragmented, hosts implement Path MTU Discovery (PMTUD). By assuming a large packet size, transmitting packets with the don't fragment (DF) bit set (in IPv4; in IPv6, DF is implied) and listening for ICMP messages that say the packet is too big, hosts can quickly determine the lowest Maximum Transmission Unit (MTU) in effect along a certain path.
Most of the time, that is. RFC 1191 suggests that hosts quickly react to a changing path MTU, so implementors decided to simply set the DF bit on ALL packets. At the same time, many people are very suspicious of ICMP packets, since they can be used in denial of service attacks or to uncover information about a network. So to be on the safe side, a significant number of people filter all ICMP messages. Or routers are configured in such a way that the ICMP "packet too big" messages aren't generated or can't make it back to the source host. NAT really doesn't help in this regard either.
So what happens when all packets have DF set and there are no ICMP "packet too big" messages? Right: nothing. Since the first few packets in a session are typically small, sessions get set up without problems, but as soon as the data transfer starts, the session times out. So what can we do?
RFC 2923 TCP Problems with Path MTU Discovery
The MSS Initiative
Cisco - Why Can't I Browse the Internet when Using a GRE Tunnel?
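For the curious, here's a minimal, Linux-specific sketch of what "sending with DF set" looks like from a host; the socket option numbers come from the Linux kernel headers, and the port number is just an arbitrary probe target:

```python
# Sketch: send a UDP datagram with the DF bit forced on, the way PMTUD
# does. Once an ICMP "packet too big" has come back for this path, the
# kernel rejects oversized sends with EMSGSIZE instead of fragmenting.
import socket

IP_MTU_DISCOVER = 10   # from <linux/in.h>; Linux-specific values
IP_PMTUDISC_DO = 2     # always set DF, never fragment locally

def probe(dest: str, size: int) -> bool:
    """Return True if a datagram of `size` payload bytes could be sent
    with DF set; False if the known path MTU is smaller than that."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    try:
        s.sendto(b"\0" * size, (dest, 33434))
        return True
    except OSError:    # EMSGSIZE: packet too big for the path
        return False
    finally:
        s.close()
```

If the ICMP messages are filtered somewhere along the way, this probe keeps "succeeding" while the large packets silently disappear, which is exactly the black hole described above.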
Permalink - posted 2003-05-19
(Distributed) Denial of Service attacks continue to be a serious problem. (Well, for the victims, at least.) RIPE suffered an attack on February 27th, 2003 that almost wiped them off the net for two and a half hours.
Permalink - posted 2003-04-10
On August 7th, 2002, Rob Thomas announced on his bogon list and a number of mailing lists that IANA had delegated the 69.0.0.0/8 address block to APNIC for further distribution to ISPs and end-user organizations in the Asia-Pacific region. The next day, APNIC made an announcement of their own.
Exactly seven months later, someone who had gotten addresses in the new range posted a message to the NANOG list under the title "69/8...this sucks". He had encountered problems reaching certain destinations from the new addresses and wrote a script to test this. It turned out a significant number of sites ("dozens") still filtered this range.
Further investigation uncovered that this wasn't so much a routing problem: many firewall administrators also use the bogon list to create filters, and then subsequently fail to keep those filters up to date.
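The original script isn't reproduced here, but a test of this kind doesn't need to be fancy. A sketch of the idea, run from a host numbered out of 69/8 against a list of well-known destinations (the host names below are placeholders):

```python
# Sketch: check which destinations never answer a TCP connect, which
# from a 69/8 source address suggests a stale bogon filter in the way.
import socket

def reachable(host: str, port: int = 80, timeout: float = 5.0) -> bool:
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# sites = ["www.example.com", "www.example.net"]   # destinations to test
# broken = [s for s in sites if not reachable(s)]
# print("filtered by:", broken)
```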
This sucks indeed.
Permalink - posted 2003-04-10
RIPE has another analysis of the MS SQL worm. RIPE monitors performance between 49 locations on the net. 40% of the monitored pairs of hosts suffered from congestion between them; in the other 60% of cases there weren't any problems. The problems cleared up after about 8 hours.
RIPE also monitors root server performance and BGP activity. Two of the root servers suffered a good deal of packet loss. The BGP stuff is the most spectacular: there were about 30 to 60 times more updates of different kinds.
This clearly shows the need for control and data plane separation: congestion in the actual traffic shouldn't be able to take down the routing protocols. On the other hand, having BGP and other routing protocols run "in-band" over the same circuits as the actual data ensures there is a functioning path between two routers. There's also something to be said for that.
On NANOG there was some talk about UDP/1434 filters. I argued that they shouldn't be necessary any more by now, but the rate of reinfection (people bringing in new vulnerable boxes) remains significant. So places with Windows machines on the network will probably need to have these filters in place for the foreseeable future. But this is annoying, because these filters also block legitimate UDP traffic, such as DNS, once in a while, and many routers take a performance hit when filters like these are enabled.
Permalink - posted 2003-02-27
A company called Packet Design has developed BGP Scalable Transport (BST), a transport protocol for BGP that is intended to replace TCP as the carrier protocol for BGP routing information. Packet Design isn't afraid to make bold claims; their press release about the protocol heads "Packet Design solves security, reliability problems of major internet routing protocol, BGP."
But BST really only addresses the problem that each BGP router in an organization is required to talk to every other BGP router. This gets out of hand very fast in larger networks. But two solutions have been around for years: route reflectors and confederations. When a BGP router is configured to be a route reflector, it checks for routing loops so it can safely "reflect" routing information from one router to another, eliminating the need to have every two routers communicate directly. Confederations break up large Autonomous Systems (ASes) into smaller ones and accomplish the same thing in a different way. Packet Design solves the problem by flooding BGP updates throughout the internal network, much the same way protocols such as OSPF and IS-IS do.
Packet Design claims that this will help with convergence speed and reliability and even security. Their reasoning is that using IPsec on lots of individual TCP sessions uses too much CPU, so protecting BGP over TCP with IPsec is unfeasible. But even a Pentium II at 450 MHz achieves 17 Mbps with SHA-1 authentication (see performance tests). Exchanging a full routing table takes about 8 MB in about a minute, which works out to around 1 Mbps, so with fewer than 17 peers receiving a full table the crypto can keep up without trouble. And that's even assuming internal BGP sessions need this level of protection. External BGP sessions are a more natural candidate for this, but eBGP doesn't have the same scalability problem as iBGP. It also assumes the security problems BGP has can be fixed at the TCP level. However, the most important BGP vulnerability is that there is no scalable and reliable way to check whether the origin of a BGP route is allowed to send out the route in question to begin with, and whether any information was changed en route (by an otherwise legitimate intermediate router). These issues are addressed by soBGP and S-BGP.
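For those who want to check the arithmetic, the back-of-the-envelope calculation above goes like this (the 8 MB, 1 minute and 17 Mbps figures are the estimates from the text):

```python
# Rough check: a ~8 MB full BGP table transferred in ~1 minute is about
# 1 Mbps per peer, so ~17 Mbps of SHA-1 crypto throughput can serve
# roughly 16 full-table peers.
table_bits = 8 * 8 * 10**6             # ~8 megabytes, expressed in bits
per_peer_mbps = table_bits / 60 / 10**6
peers = 17 / per_peer_mbps
print(round(per_peer_mbps, 2), round(peers))  # prints 1.07 16
```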
Permalink - posted 2003-02-19