How the Internet Works: 12 Myths Debunked

Written on 10:09 AM by Right Click IT - Technology Services

– Carolyn Duffy Marsan, Network World November 21, 2008

Internet Protocols (IP) keep evolving: What incorrect assumptions do we make when we send an e-mail or download a video?

Thirty years have passed since the Internet Protocol was first described in a series of technical documents written by early experimenters. Since then, countless engineers have created systems and applications that rely on IP as the communications link between people and their computers.

Here's the rub: IP has continued to evolve, but no one has been carefully documenting all of the changes.

"The IP model is not this static thing," explains Dave Thaler, a member of the Internet Architecture Board and a software architect for Microsoft. "It's something that has changed over the years, and it continues to change."

Thaler gave the plenary address Wednesday at a meeting of the Internet Engineering Task Force, the Internet's premier standards body. Thaler's talk was adapted from a document the IAB has drafted entitled "Evolution of the IP Model."

"Since 1978, many applications and upper layer protocols have evolved around various assumptions that are not listed in one place, not necessarily well known, not thought about when making changes, and increasingly not even true," Thaler said. "The goal of the IAB's work is to collect the assumptions—or increasingly myths—in one place, to document to what extent they are true, and to provide some guidance to the community."

The following list of myths about how the Internet works is adapted from Thaler's talk:

1. If I can reach you, you can reach me.
Thaler dubs this myth, "reachability is symmetric," and says many Internet applications assume that if Host A can contact Host B, then the opposite must be true. Applications use this assumption when they have request-response or callback functions. This assumption isn't always true because middleboxes such as network address translators (NAT) and firewalls get in the way of IP communications, and it doesn't always work with 802.11 wireless LANs or satellite links.
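To see why reachability is one-directional, consider a minimal sketch (not from the article) of a TCP reachability probe. A successful probe from Host A to Host B says nothing about whether B can connect back to A; a NAT or firewall in front of A may silently drop the inbound attempt, so the peer must run the same probe in the other direction.

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if *we* can open a TCP connection to host:port.

    Note: a True result says nothing about the reverse direction.
    A NAT or firewall in front of us may still block inbound connections,
    so symmetric reachability must be verified by a probe from the peer.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

An application with a callback function would need the remote side to call `can_connect` back toward it before assuming the callback will succeed.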

2. If I can reach you, and you can reach her, then I can reach her.
Thaler calls this theory "reachability is transitive," and says it is applied when applications do referrals. Like the first myth, this assumption isn't always true today because of middleboxes such as NATs and firewalls as well as with 802.11 wireless and satellite transmissions.

3. Multicast always works.
Multicast allows you to send communications out to many systems simultaneously as long as the receivers indicate they can accept the communication. Many applications assume that multicast works within all types of links. But that isn't always true with 802.11 wireless LANs or across tunneling mechanisms such as Teredo or 6to4.

4. The time it takes to initiate communications between two systems is what you'll see throughout the communication.
Thaler says many applications assume that the end-to-end delay of the first packet sent to a destination is typical of what will be experienced afterwards. For example, many applications ping servers and select the one that responds first. However, the first packet may carry extra latency because of the look-ups (such as name or address resolution) it triggers, so applications relying on this assumption may choose longer paths and see slower response times. Increasingly, protocols such as Mobile IPv6 and Protocol Independent Multicast send packets on one path and then switch to a shorter, faster path.
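The pitfall can be illustrated with a small sketch (hypothetical servers and round-trip times, not from the article): picking a server by its first response rewards whichever host happened to answer the initial probe fastest, while comparing typical latency over several probes gives a very different answer.

```python
import statistics

# Hypothetical RTT samples in milliseconds. The first probe to each server
# often includes one-time costs (look-ups, path setup), so it can mislead.
rtt_samples = {
    "server-a": [80, 12, 11, 13, 12],   # slow first packet, fast afterwards
    "server-b": [40, 45, 44, 46, 43],   # consistent but slower overall
}

def pick_by_first_packet(samples):
    """Naive choice: whichever server answered the first probe fastest."""
    return min(samples, key=lambda s: samples[s][0])

def pick_by_median(samples):
    """Better choice: compare typical (median) latency over several probes."""
    return min(samples, key=lambda s: statistics.median(samples[s]))

print(pick_by_first_packet(rtt_samples))  # server-b (misled by first packet)
print(pick_by_median(rtt_samples))        # server-a (faster in steady state)
```

The naive strategy locks the application onto the slower path for the life of the connection.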

5. IP addresses rarely change.
Many applications assume that IP addresses are stable over long periods of time. These applications resolve names to addresses and then cache them without any notion of the lifetime of the name/address connection, Thaler says. This assumption isn't always true today because of the popularity of the Dynamic Host Configuration Protocol as well as roaming mechanisms and wireless communications.
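One remedy for the caching problem Thaler describes is to attach a lifetime to every cached name/address binding, the way DNS resolvers honor a record's TTL. The sketch below (an illustrative design, not from the article) shows a cache that refuses to return stale entries; the clock is injectable so the behavior can be tested.

```python
import time

class TTLCache:
    """Name-to-address cache that honors a lifetime instead of caching forever."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable for testing
        self._entries = {}           # name -> (address, expiry_time)

    def put(self, name, address, ttl_seconds):
        """Cache an address along with the time at which it must be discarded."""
        self._entries[name] = (address, self._clock() + ttl_seconds)

    def get(self, name):
        """Return the cached address, or None if absent or expired."""
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if self._clock() >= expiry:
            del self._entries[name]  # stale: force a fresh resolution
            return None
        return address
```

A `None` result tells the application to resolve the name again rather than reuse an address that DHCP or a roaming host may have abandoned.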

6. A computer has only one IP address and one interface to the network.
This is an example of an assumption that was never true to begin with, Thaler says. From the onset of the Internet, hosts could have several physical interfaces to the network, and each of those could have several logical Internet addresses. Today, computers are dealing with wired and wireless access, dual IPv4/IPv6 nodes and multiple IPv6 addresses on the same interface, making this assumption truly a myth.
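You can observe this directly: even "localhost" commonly resolves to more than one address (127.0.0.1 and ::1), and a machine's own hostname may map to wired, wireless and IPv6 addresses at once. A minimal sketch using the standard resolver:

```python
import socket

def addresses_for(name: str):
    """Return the distinct (address family, address) pairs a name resolves to."""
    pairs = set()
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(name, None):
        pairs.add((family, sockaddr[0]))
    return sorted(pairs)

# An application that stores "the" IP address of a host is already wrong
# whenever this list has more than one entry.
for family, addr in addresses_for("localhost"):
    print(family, addr)
```

The exact output depends on the host's configuration, which is precisely the point: applications should expect a set of addresses, not a single one.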

7. If you and I have addresses in a subnet, we must be near each other.
Some applications assume that the IP address used by an application is the same as the address used for routing. An application might therefore assume that two systems on the same subnet are near each other and are better peers for each other than a distant system. This assumption doesn't hold up because of tunneling and mobility. Increasingly, new applications are adopting a scheme known as an identifier/locator split, which separates the IP addresses used to identify a system from the IP addresses used to locate it.

8. New transport-layer protocols will work across the Internet.
IP was designed to support new transport protocols running on top of it, but increasingly this isn't true, Thaler says. Most NATs and firewalls only allow Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) for transporting packets. Newer Web-based applications only operate over Hypertext Transfer Protocol (HTTP).

9. If one stream between you and me can get through, so can another one.
Some applications open multiple connections -- one for data and another for control -- between two systems for communications. The problem is that middleboxes such as NATs and firewalls block certain ports and may not allow more than one connection. That's why applications such as the File Transfer Protocol (FTP) and the Real-time Transport Protocol (RTP) don't always work, Thaler says.

10. Internet communications are not changed in transit.
Thaler cites several assumptions about Internet security that are no longer true. One of them is that packets are unmodified in transit. While it may have been true at the dawn of the Internet, this assumption is no longer true because of NATs, firewalls, intrusion-detection systems and many other middleboxes. IPsec solves this problem by encrypting IP packets, but this security scheme isn't widely used across the Internet.

11. Internet communications are private.
Another security-related assumption Internet developers and users often make is that packets are private. Thaler says this was never true. The only way for Internet users to be sure that their communications are private is to deploy IPsec, which is a suite of protocols for securing IP communications by authenticating and encrypting IP packets.

12. Source addresses are not forged.
Many Internet applications assume that a packet is coming from the IP source address that it uses. However, IP address spoofing has become common as a way of concealing the identity of the sender in denial of service and other attacks. Applications built on this assumption are vulnerable to attack, Thaler says.
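The practical consequence is that an application must authenticate messages by something other than their claimed source address. As an illustrative sketch (the key and messages are hypothetical, and the article itself only names the problem), a sender can attach a message authentication code computed with a shared secret, so the receiver verifies the content rather than trusting the IP header:

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret"  # hypothetical key, provisioned out of band

def sign(message: bytes) -> str:
    """Sender attaches a MAC so the receiver need not trust the source address."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def is_authentic(message: bytes, tag: str) -> bool:
    """Verify the MAC in constant time; the claimed source IP plays no role."""
    return hmac.compare_digest(sign(message), tag)
```

A spoofed packet fails verification no matter what source address it carries, which is the same principle IPsec applies at the network layer.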


10 Tips for Technology Management

Written on 10:33 AM by Right Click IT - Technology Services

– Julie Bort, Network World May 07, 2007

No. 1: Fine-tune your IPS.
"There's a lot of set-it-and-forget-it mentality in intrusion-prevention system marketing, and it's dangerous," says David Newman, president of testing facility Network Test and a Network World Lab Alliance member.

Fuzzing, in which the exploit is changed just enough for the security mechanism to miss it, trips up many IPSs, Network World's recent IPS test showed.

Network managers need to understand how each exploit works and how their IPS detects them, and then upgrade that protection routinely.
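Why fuzzing defeats a signature that matches one exact byte pattern can be shown in a few lines (a deliberately simplified sketch; real IPSs and exploits are far more complex than this hypothetical check):

```python
# A naive signature-based check matches one exact byte pattern.
SIGNATURE = b"/bin/sh"

def naive_ips_blocks(payload: bytes) -> bool:
    """Flag the payload only if the literal signature appears in it."""
    return SIGNATURE in payload

original = b"GET /cgi?cmd=/bin/sh HTTP/1.0"
# "Fuzzed" variant: the same attack, trivially re-encoded so the
# literal pattern no longer appears on the wire.
fuzzed = b"GET /cgi?cmd=/bin/%73h HTTP/1.0"   # the 's' is percent-encoded

print(naive_ips_blocks(original))  # True  -- caught
print(naive_ips_blocks(fuzzed))    # False -- slips past the signature
```

An IPS that normalizes or decodes traffic before matching, and that models how the exploit actually works, is much harder to evade this way, which is the point of understanding each protection rather than setting and forgetting it.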

No. 2: Sell security by its benefits.
Start selling security to the purse-holders the way you do all other technology investments -- in measurable terms that relate to the business, recommends Mandy Andress, president of testing facility ArcSec Technologies and a Network World Lab Alliance member. For example, rather than citing how dangerous viruses are to win budget for a reputation-services antispam defense, illustrate how much productivity another layer of antispam control would recover.

No. 3: Automate desktop and network access.
Wireless badges can come in handy for automated access control to desktop PCs, particularly those shared by multiple users in medical exam rooms, warehouses, call centers and the like.

For example, Northwestern Memorial Physicians Group implemented Ensure Technologies' XyLoc MD, which uses 900MHz radio-frequency technology encoded on staff ID badges for authentication, says Guy Fuller, IT manager at the Chicago healthcare organization. This saves the staff time while ensuring that network access and sensitive information are not available to other users.

No. 4: Link physical access to enterprise applications.
IP-based building-access systems built on industry-standard servers and using the existing data network are more affordable than ever because of open architecture products. Advances in server-management technology mean these systems not only are deployable by network (rather than the physical security) staff but are centrally manageable. Plus, they can integrate with ERP applications and network access-control systems.

Georgia-Pacific, a US$20 billion paper manufacturer in Atlanta, is rolling out Automated Management Technologies' WebBrix, an IP-based building-access system, to the majority of its 400 locations. IT used WebBrix's open application interface to write a custom application called Mysecurity that integrates the system with SAP, among other duties. When employees swipe their badges to gain access to the building, they also are sending data to SAP for time and attendance tracking, says Steven Mobley, senior systems analyst at Georgia-Pacific.

No. 5: Delegate an operating systems guru.
"Operating systems configuration can seem to some like a black art," says Tom Henderson, principal researcher for testing facility ExtremeLabs and a Network World Lab Alliance member. Setting the wrong combination is bad news. For example, large memory-block move options can affect the amount of dirty cache with which the operating system must deal, he says. If memory/caching options are balanced incorrectly, the machine could freeze. Assign a staffer to master the voluminous documentation published by mainstream operating system vendors, and servers can be safely fine-tuned to optimal performance for every application. The guru also should master Web server and BIOS setting options.

No. 6: Use VMware server memory smartly.
Without spending a dime, you may be able to boost the amount of memory available on virtualized Windows 2003 physical servers, thereby improving performance of the virtual machines. If all the virtual machines on the same physical box need the same memory-resident code, such as a dynamic link library (DLL), you can load the DLL once into the physical server's main memory and share that DLL with all virtual machines, says Wendy Cebula, COO at VistaPrint, an international online printer with U.S. operations headquartered in Lexington, Mass. "We've gotten big memory usage benefits by caching once per physical box rather than once per usage," she says.

No. 7: Move applications to a Linux grid.
If you have compute-intensive mainframe applications, don't shy away from lower-cost alternatives such as grid computing because the applications were written in COBOL, says Brian Cucci, manager of the Advanced Technology Group at Atlanta-based UPS, which has such a grid. The application will likely have to be redesigned somewhat for the new hardware platform. But vendors can be counted on to help, as they'll be eager to win converts to the new technology.

No. 8: Recognize WAN links may degrade VoIP QoS.
This is particularly true in areas of the country where the public infrastructure is aging, says Bruce Bartolf, principal and CTO of architecture firm Gensler, in San Francisco. Having completed VoIP installation at seven of 35 sites, Bartolf found unexpectedly high error rates or complete failure on many links. To provide the kind of uptime and quality demanded of phone service, you need to design with alternative failover paths on the WAN. Cable may not be much better, but Metro Ethernet, if available, could work well, he says.

No. 9: Ease IP management with an appliance.
Although the tasks that appliances perform can be done with each vendor's gear, "with something as important as IP management, if you don't do it well, you can really hurt your five-nines," Gensler's Bartolf says. He chose Infoblox appliances, which manage numerous tasks, including Trivial File Transfer Protocol (TFTP) firmware upgrades. "Rather than dealing with Microsoft distributed file system, loading a TFTP server on a Microsoft server, running DHCP on a Microsoft server, running SMS on top of that, and managing it all, I have an appliance," he says. "I put it in, and it works."

No. 10: Shelve the fancy visuals.
"We found it highly impractical to make our monitoring visual," VistaPrint's Cebula says. VistaPrint relies on remote monitoring to manage its data centers, including one in Bermuda. It uses homegrown tools to track everything from CPU usage to event correlation. Visual graphing of events slowed down detection and analysis, taking network operations staff an average of five to seven minutes per event to use, Cebula says. When the tools used simple red, yellow and green lights, detection and correlation dropped to one or two minutes per event, she says.
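The idea of collapsing raw metrics into a red/yellow/green light can be sketched in a few lines (the thresholds here are hypothetical; VistaPrint's actual tooling is homegrown and not described in detail):

```python
def status_light(cpu_percent: float) -> str:
    """Collapse a raw metric into a red/yellow/green light.

    The thresholds are illustrative only -- tune them to your own baselines.
    """
    if cpu_percent >= 90:
        return "red"     # immediate attention
    if cpu_percent >= 70:
        return "yellow"  # watch closely
    return "green"       # normal operation

for load in (30, 75, 95):
    print(load, status_light(load))
```

The point is that an operator can triage a wall of such lights in seconds, whereas interpreting a graph per event took the VistaPrint staff minutes.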

And don't forget to keep your monitoring tools on at all times and run spot checks, advises Barry Nance, independent consultant and Network Lab Alliance member. The most common mistake is not to turn them on until an event occurs.