How the Internet Works: 12 Myths Debunked

Written on 10:09 AM by Right Click IT - Technology Services

– Carolyn Duffy Marsan, Network World November 21, 2008

The Internet Protocol (IP) keeps evolving: What incorrect assumptions do we make when we send an e-mail or download a video?

Thirty years have passed since the Internet Protocol was first described in a series of technical documents written by early experimenters. Since then, countless engineers have created systems and applications that rely on IP as the communications link between people and their computers.

Here's the rub: IP has continued to evolve, but no one has been carefully documenting all of the changes.

"The IP model is not this static thing," explains Dave Thaler, a member of the Internet Architecture Board and a software architect for Microsoft. "It's something that has changed over the years, and it continues to change."

Thaler gave the plenary address Wednesday at a meeting of the Internet Engineering Task Force, the Internet's premier standards body. Thaler's talk was adapted from a document the IAB has drafted entitled "Evolution of the IP Model."

"Since 1978, many applications and upper layer protocols have evolved around various assumptions that are not listed in one place, not necessarily well known, not thought about when making changes, and increasingly not even true," Thaler said. "The goal of the IAB's work is to collect the assumptions—or increasingly myths—in one place, to document to what extent they are true, and to provide some guidance to the community."

The following list of myths about how the Internet works is adapted from Thaler's talk:

1. If I can reach you, you can reach me.
Thaler dubs this myth, "reachability is symmetric," and says many Internet applications assume that if Host A can contact Host B, then the opposite must be true. Applications use this assumption when they have request-response or callback functions. This assumption isn't always true because middleboxes such as network address translators (NAT) and firewalls get in the way of IP communications, and it doesn't always work with 802.11 wireless LANs or satellite links.

2. If I can reach you, and you can reach her, then I can reach her.
Thaler calls this theory "reachability is transitive," and says it is applied when applications do referrals. Like the first myth, this assumption isn't always true today because of middleboxes such as NATs and firewalls as well as with 802.11 wireless and satellite transmissions.

3. Multicast always works.
Multicast allows you to send communications out to many systems simultaneously as long as the receivers indicate they can accept the communication. Many applications assume that multicast works within all types of links. But that isn't always true with 802.11 wireless LANs or across tunneling mechanisms such as Teredo or 6to4.

4. The time it takes to initiate communications between two systems is what you'll see throughout the communication.
Thaler says many applications assume that the end-to-end delay of the first packet sent to a destination is typical of what will be experienced afterwards. For example, many applications ping servers and select the one that responds first. However, the first packet may have additional latency because of the look-ups it triggers, so applications relying on this assumption may choose longer paths and see slower response times. Increasingly, protocols such as Mobile IPv6 and Protocol Independent Multicast send packets on one path and then switch to a shorter, faster path.
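As a rough illustration of the point (the hostnames below are placeholders, not anything Thaler cited), the following Python sketch compares a server's first-attempt latency with its steady-state latency instead of trusting the very first response alone:

```python
# Minimal sketch with hypothetical hosts: profile each candidate server a few
# times, because the first attempt often includes extra look-up latency.
import socket
import statistics
import time

CANDIDATES = ["server-a.example.com", "server-b.example.com"]  # placeholders
PORT = 443
SAMPLES = 5

def connect_time(host, port, timeout=2.0):
    """Return the time taken to open (and close) one TCP connection."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.monotonic() - start

def profile(host):
    times = [connect_time(host, PORT) for _ in range(SAMPLES)]
    # The first sample often carries one-time look-up costs; the median is a
    # better estimate of what later packets will actually see.
    return {"first": times[0], "median": statistics.median(times)}

if __name__ == "__main__":
    for host in CANDIDATES:
        try:
            print(host, profile(host))
        except OSError as exc:
            print(host, "unreachable:", exc)
```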

5. IP addresses rarely change.
Many applications assume that IP addresses are stable over long periods of time. These applications resolve names to addresses and then cache them without any notion of the lifetime of the name/address connection, Thaler says. This assumption isn't always true today because of the popularity of the Dynamic Host Configuration Protocol as well as roaming mechanisms and wireless communications.
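A minimal sketch of the safer pattern, in Python: re-resolve a name after a short, fixed lifetime instead of caching one address for the life of the process. (The standard library does not expose the DNS record's real TTL, so the lifetime here is an assumed, conservative value.)

```python
# Minimal sketch: cache a resolved address only briefly, then resolve again.
import socket
import time

CACHE_LIFETIME = 60.0  # seconds; an assumption, not the record's actual TTL
_cache = {}            # name -> (address, expiry_timestamp)

def resolve(name):
    now = time.monotonic()
    entry = _cache.get(name)
    if entry and entry[1] > now:
        return entry[0]
    address = socket.getaddrinfo(name, None)[0][4][0]
    _cache[name] = (address, now + CACHE_LIFETIME)
    return address

# Usage: call resolve() before each new connection rather than once at startup.
```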

6. A computer has only one IP address and one interface to the network.
This is an example of an assumption that was never true to begin with, Thaler says. From the outset of the Internet, hosts could have several physical interfaces to the network, and each of those could have several logical Internet addresses. Today, computers deal with wired and wireless access, dual IPv4/IPv6 nodes and multiple IPv6 addresses on the same interface, making this assumption truly a myth.
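A minimal Python sketch of the corresponding defensive pattern, offered only as an illustration of the idea (it is not code from Thaler's talk): treat a host as a set of addresses, IPv6 and IPv4 alike, and try each one until a connection succeeds.

```python
# Minimal sketch: never assume one host equals one address.
import socket

def connect_any(host, port, timeout=2.0):
    """Try every address getaddrinfo returns; return the first open socket."""
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            return sock
        except OSError as exc:
            last_error = exc
            sock.close()
    raise last_error or OSError("no addresses returned")
```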

7. If you and I have addresses in a subnet, we must be near each other.
Some applications assume that the IP address used by an application is the same as the address used for routing. This means an application might assume two systems on the same subnet are near each other and would be better off talking to each other than to a system far away. This assumption doesn't hold up because of tunneling and mobility. Increasingly, new applications are adopting a scheme known as an identifier/locator split, which separates the IP addresses used to identify a system from those used to locate it.

8. New transport-layer protocols will work across the Internet.
IP was designed to support new transport protocols on top of it, but increasingly this isn't true, Thaler says. Most NATs and firewalls only allow Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) for transporting packets, and newer Web-based applications only operate over Hypertext Transfer Protocol (HTTP).

9. If one stream between you and me can get through, so can another one.
Some applications open multiple connections—one for data and another for control—between two systems for communications. The problem is that middleboxes such as NATs and firewalls block certain ports and may not allow more than one connection. That's why applications such as the File Transfer Protocol (FTP) and the Real-time Transport Protocol (RTP) don't always work, Thaler says.

10. Internet communications are not changed in transit.
Thaler cites several assumptions about Internet security that are no longer true. One of them is that packets are unmodified in transit. While it may have been true at the dawn of the Internet, this assumption no longer holds because of NATs, firewalls, intrusion-detection systems and many other middleboxes. IPsec addresses this problem by authenticating and encrypting IP packets, but this security scheme isn't widely used across the Internet.

11. Internet communications are private.
Another security-related assumption Internet developers and users often make is that packets are private. Thaler says this was never true. The only way for Internet users to be sure that their communications are private is to deploy IPsec, which is a suite of protocols for securing IP communications by authenticating and encrypting IP packets.

12. Source addresses are not forged.
Many Internet applications assume that a packet really comes from the IP source address it carries. However, IP address spoofing has become common as a way of concealing the identity of the sender in denial-of-service and other attacks. Applications built on this assumption are vulnerable to attack, Thaler says.
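The article does not prescribe a fix, but one common mitigation is to authenticate messages at the application layer instead of trusting the source address. A minimal Python sketch, using a purely hypothetical shared key:

```python
# Illustrative sketch (not from the article): authenticate a datagram with an
# HMAC over a shared secret instead of trusting its source IP address.
import hashlib
import hmac

SHARED_KEY = b"example-shared-secret"  # hypothetical key for illustration

def sign(payload):
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest() + payload

def verify(message):
    tag, payload = message[:32], message[32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    # Constant-time comparison; the packet's source address is never consulted.
    return payload if hmac.compare_digest(tag, expected) else None
```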


Five Tips: Make Virtualization Work Better Across the WAN

Written on 10:27 AM by Right Click IT - Technology Services

– Jeff Aaron, VP Marketing, Silver Peak Systems, CIO November 18, 2008

IT departments can reap enormous benefits from virtualizing applications and implementing Virtual Desktop Infrastructures (VDI). However, the management and cost savings of virtualization can be lost if performance is so bad that it hampers productivity, as can happen when virtual applications and desktops are delivered across a Wide Area Network (WAN).

For an in-depth look at a WAN revamp, see CIO.com's related article, "How to Make Your WAN a Fast Lane: One Company's Story."

How can enterprises overcome poor performance to reap the rewards of virtualization?

Jeff Aaron, VP of marketing at Silver Peak Systems, suggests these five tips.

1. Understand The Network Issues
For starters, it makes sense to understand why your virtualized applications and virtual desktops perform poorly across the WAN. It's typically not due to the application or VDI components, but due to the network. More specifically, virtualized environments are sensitive to the following WAN characteristics:

Latency: the time it takes for data to travel from one location to another.
Packet loss: when packets get dropped or delivered out of order due to network congestion, they must be re-transmitted across the WAN. This can turn a 200 millisecond round trip into one second (a rough calculation follows after this list). To end users, the virtual application or desktop seems unresponsive while packets are being re-transmitted, so they start hitting keys again on their client machines, which compounds the problem.

Bandwidth: WAN bandwidth may or may not be an issue depending on the type of traffic being sent. While most virtualized applications are fairly efficient when it comes to bandwidth consumption, some activities (such as file transfers and print jobs) consume significant bandwidth, which can present a performance challenge.
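A back-of-the-envelope sketch of the packet-loss effect described above, using assumed but typical values (a 200 ms round trip, a one-second initial retransmission timeout and 2 percent loss):

```python
# Illustrative numbers only: how one retransmission stretches the delay an
# end user experiences on a lossy WAN link.
RTT = 0.200          # round-trip time in seconds
INITIAL_RTO = 1.0    # assumed initial TCP retransmission timeout
LOSS_RATE = 0.02     # assumed 2% packet loss

delay_no_loss = RTT
delay_one_loss = INITIAL_RTO + RTT          # wait for the timeout, then resend
average_delay = (1 - LOSS_RATE) * delay_no_loss + LOSS_RATE * delay_one_loss

print(f"clean round trip:        {delay_no_loss * 1000:.0f} ms")
print(f"round trip after 1 loss: {delay_one_loss * 1000:.0f} ms")
print(f"average with 2% loss:    {average_delay * 1000:.0f} ms")
```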

2. Examine WAN Optimization Techniques
WAN optimization devices can be deployed on both ends of a WAN link to improve the performance of all enterprise applications. The following WAN optimization techniques are used by these devices to improve the performance of virtual applications and desktops:

Latency can be overcome by mitigating the "chattiness" of TCP, the transport protocol used by virtual applications for communication across the WAN. More specifically, WAN optimization devices can be configured to send more data within specific windows and to minimize the number of back-and-forth acknowledgements required prior to sending data. This improves the responsiveness of keystrokes in a virtual environment.
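WAN optimization appliances apply this kind of tuning transparently; as a rough endpoint-level illustration of the same levers (not a vendor implementation), the Python sketch below enlarges socket buffers so more data can be in flight per round trip and disables Nagle's algorithm so small, keystroke-sized writes are not held back:

```python
# Rough illustration: endpoint-level knobs that echo what WAN optimizers do.
import socket

def tune_for_wan(sock, buffer_bytes=4 * 1024 * 1024):
    # Larger send/receive buffers let more data be in flight across a
    # high-latency link before an acknowledgement is required.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buffer_bytes)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buffer_bytes)
    # Disabling Nagle's algorithm stops small interactive writes (keystrokes)
    # from being held back while earlier data awaits acknowledgement.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock

# Usage (hypothetical host):
# sock = tune_for_wan(socket.create_connection(("vdi-broker.example.com", 443)))
```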

Loss can be mitigated by rebuilding dropped packets on the far end of a WAN link, and re-sequencing packets that are delivered out of order in real-time. This eliminates the need to re-transmit packets every time they are dropped or delivered out-of-order. By avoiding re-transmissions, virtual applications and desktops appear much more responsive across the WAN.

Bandwidth can be reduced using WAN deduplication. By monitoring all data sent across the WAN, repetitive information can be detected and delivered locally rather than resent across the network. This significantly improves bandwidth utilization in some (but not all) virtualized environments.
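A minimal Python sketch of the idea behind deduplication (real appliances use far more sophisticated, variable-size chunking): send a short fingerprint instead of the full chunk whenever both ends have already seen that chunk.

```python
# Minimal sketch of WAN deduplication: fingerprints replace repeated chunks.
import hashlib

CHUNK_SIZE = 8 * 1024  # assumed fixed-size chunks for illustration

def deduplicate(stream, seen):
    """Yield ('ref', digest) for repeated chunks, ('data', chunk) otherwise."""
    for offset in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            yield ("ref", digest)          # far end replays the cached chunk
        else:
            seen[digest] = chunk           # both ends remember it for next time
            yield ("data", chunk)

# Usage: the sending appliance keeps `seen`; the receiving appliance keeps a
# mirror of it, so a 32-byte reference replaces each repeated 8 KB chunk.
```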

3. Set Application Priorities
The average enterprise has more than 80 applications that are accessed across the WAN. That means that critical applications, including terminal services and VDI, are vying for the same resources as less important traffic, such as Internet browsing. Because virtual applications and desktops are sensitive to latency, it often makes sense to prioritize this traffic over other applications using Quality of Service (QoS) techniques. In addition, QoS can guarantee bandwidth for VDI and virtual applications.
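The QoS policy itself lives on routers and WAN optimization devices, but traffic has to be marked so that policy can recognize it. As a small illustration (the DSCP class chosen is an assumption; a real deployment sets this by policy rather than in application code), here is how a socket's traffic can be tagged in Python:

```python
# Small illustration: mark a socket's traffic with a DSCP value so routers
# configured for QoS can prioritize it. DSCP 46 ("Expedited Forwarding") is a
# common choice for latency-sensitive traffic; the class used for VDI is a
# policy decision, not a given.
import socket

def mark_expedited(sock, dscp=46):
    tos = dscp << 2                      # DSCP occupies the upper 6 bits of TOS
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return sock

# Usage (Linux/macOS; Windows generally ignores per-socket TOS settings):
# sock = mark_expedited(socket.create_connection(("vdi-broker.example.com", 443)))
```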

4. Compress and Encrypt in the Right Place
Oftentimes host machines compress information prior to transmission. This is meant to improve bandwidth utilization in a virtual environment. However, compression obscures visibility into the actual data, which makes it difficult for downstream WAN optimization devices to provide their full value. Therefore, it may be a better choice to turn off compression in the virtual host (if possible) and instead enable it in the WAN optimization device.

Moving compression into the WAN optimization device has another added benefit: it frees up CPU cycles within the host machine. This can lead to better performance and scalability throughout a virtual environment.

IT staff should also consider where encryption takes place in a virtual infrastructure, since encryption also consumes CPU cycles in the host.


5. Go With the Flows
Network scalability can have an important impact on the performance of virtual applications and VDI. The average thin client machine has 10 to 15 TCP flows open at any given time. If thousands of clients are accessing host machines in the same centralized facility, that location must be equipped to handle tens of thousands of simultaneous sessions.
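Quick arithmetic on that sizing claim, with an assumed client count:

```python
# Assumed example values, used only to show the scale of the problem.
flows_per_client = 12          # midpoint of the 10-15 flow range
clients = 3000                 # assumed number of thin clients at remote sites

concurrent_flows = flows_per_client * clients
print(f"{concurrent_flows:,} simultaneous TCP flows at the data center")
# -> 36,000 flows: the WAN acceleration device must be sized for this load,
#    or it becomes the very bottleneck it was deployed to remove.
```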

When it comes to supporting large numbers of flows, there are two "best practice" recommendations. First, as discussed above, it is recommended that compression and encryption be moved off the host machine to free up CPU cycles. Second, make sure your WAN acceleration device supports the right amount of flows for your environment. The last thing you want to do is create an artificial bottleneck within the very devices deployed to remove your WAN's bottlenecks.

8 Reasons Tech Will Survive the Economic Recession

Written on 10:05 AM by Right Click IT - Technology Services

– Carolyn Duffy Marsan, Network World November 13, 2008

The global economy is in as bad shape as we've ever seen. In the last two months, U.S. consumers have stopped spending money on discretionary items, including electronic gear, prompting this week's bankruptcy filing by Circuit City. Retailers are worried that Black Friday will indeed be black, as holiday shoppers cut back on spending and choose lower-priced cell phones and notebook computers.

Yet despite all of the bailouts and layoffs, most IT industry experts are predicting that sales of computer hardware, software and services will be growing at a healthy clip again within 18 months.

Here's a synopsis of what experts are saying about the short- and long-term prognosis for the tech industry:

1. The global IT market is still growing, although barely.

IDC this week recast its projections for global IT spending in 2009, forecasting that the market will grow 2.6 percent next year instead of the 5.9 percent predicted prior to the financial crisis. In the United States, IT spending will eke out 0.9 percent growth.

IDC predicts the slowest IT markets will be the United States, Japan and Western Europe, which all will experience around 1 percent growth. The healthiest economies will be in Central and Eastern Europe, the Middle East, Africa and Latin America.

Similarly, Gartner's worst-case scenario for 2009 is that IT spending will increase 2.3 percent, according to a report released in mid-October. Gartner said the U.S. tech industry will be flat. Hardest hit will be Europe, where IT expenditures are expected to shrink in 2009.

Overall, Gartner said global IT spending will reach $3.8 trillion in 2008, up from $3.15 trillion in 2007.

"We expect a gradual recovery throughout 2010, and by 2011 we should be back into a more normal kind of environment," said IDC Analyst Stephen Minton. If the recession turns out to be deeper or last longer than four quarters as most economics expect, "it could turn into a contraction in IT spending," Minton added. "In that case, the IT market would still be weak in 2010 but we'd see a gradual recovery in 2011, and we'd be back to normal by 2012."

2. It's not as bad as 2001.

Even the grimmest predictions for global IT spending during the next two years aren't as severe as the declines the tech industry experienced between 2001 and 2003.

"Global economic problems are impacting IT budgets, however the IT industry will not see the dramatic reductions that were seen during the dot.com bust. . . . At that time, budgets were slashed from mid-double-digit growth to low-single-digit growth," Gartner said in a statement.

Gartner said the reason IT won't suffer as badly in 2009 as it did during the 2001 recession is that "operations now view IT as a way to transform their businesses and adopt operating models that are leaner. . . . IT is embedded in running all aspects of the business."

IDC's Minton said that in 2001 many companies had unused data center capacity, excess network bandwidth and software applications that weren't integrated in a way that could drive productivity.

"This time around, none of that is true," Minton said. "Today, there isn't a glut of bandwidth. There is high utilization of software applications, which are purchased in a more modular way and integrated much faster into business operations. Unlike in 2001, companies aren't waking up to find that they should be cutting back on IT spending. They're only cutting back on new initiatives because of economic conditions."

"We're anxious about whether the economy will resemble what the most pessimistic economists are saying or the more mainstream economists," Minton said. "But we don't see any reason that it will turn into a disaster like 2001. It shouldn’t get anywhere near that bad."

3. Consumers won't give up their cell phones.

They may lose their jobs and even their homes, but consumers seem unwilling to disconnect their cell phones.

"I would sleep in my car before I would give up my mobile phone," says Yankee Group Analyst Carl Howe. "Consumers buy services like broadband and mobile phones, and even if they lose their jobs they need these services more than ever."

Yankee Group says the demand for network-based services—what it dubs "The Anywhere Economy"—will overcome the short-term obstacles posed by the global financial crisis and will be back on track for significant growth by 2012.

Yankee Group predicts continued strong sales for basic mobile phone services at the low end, as well as high-end services such as Apple iPhones and BlackBerry Storms. Where the mobile market will get squeezed is in the middle, where many vendors have similar feature sets. One advantage for mobile carriers: they have two-year contracts locked in.

Telecom services "are not quite on the level of food, shelter and clothing, but increasingly it satisfies a deep personal need," Howe says. "When bad things happen to us, we want to talk about it. And in today's world, that's increasingly done electronically."

4. Notebook computers are still hot.

Worldwide demand for notebooks—particularly the sub-$500 models—has been strong all year. But that may change in the fourth quarter given Intel's latest warnings about flagging demand for its processors.

Both IDC and Gartner reported that PC shipments grew 15 percent in the third quarter of 2008, driven primarily by sales of low-cost notebook computers. Altogether, more than 80 million PCs were shipped during the third quarter of 2008, which was down from estimates earlier in the year but still represents healthy growth.

IDC said notebook sales topped desktop sales—55 percent to 45 percent—for the first time ever during the third quarter of 2008. This is a trend that will help prop up popular notebook vendors such as Hewlett-Packard, Dell and Apple. Apple, for example, saw its Mac shipments rise 32 percent in the third quarter of 2008, powered primarily by its notebooks.

The big unknown is what will happen to notebook sales during the holiday season. Analysts have noted sluggishness in U.S. corporate PC sales this fall as well as home sales, where most demand is for ultra-low-priced notebooks.

"The impact will come this quarter. People will be looking for cheaper products. . . . They will not be spending as much as they did a year ago," IDC's Minton said.

Intel said yesterday that it was seeing significantly weaker demand across its entire product line and dropped its revenue forecast for the fourth quarter by $1 billion.

The brunt of the slowdown in IT spending will hit servers and PCs, predicts Forrester Research analyst Andrew Bartels. Forrester is adjusting its IT spending forecast for 2009 downward, and plans to release new numbers after Thanksgiving, he adds.

"PCs and servers may see declines similar to 2001, but we're not going to be seeing that across the whole tech industry," Bartels says. "Software is a bright spot. Much of software spending comes in the form of maintenance and subscriptions. The licensing part may go down, but that's only a quarter of total software revenues."

5. Telecom carriers are better positioned than they were in 2001.

The biggest U.S. carriers—including AT&T and Verizon—are in much better shape going into this recession than they were during the dot-com bust. So while consumer spending will fall in 2009, it is expected to have less of an impact on the telecom sector than it did after 2001.

Yankee Group says the financial crisis will not significantly impact network build-outs by carriers because most of the financing for 3G, Fios, WiMAX and other next-generation networks is already in place.

"These are multibillion-dollar build-outs, and most of the financing has been arranged months if not years in advance," Yankee Group's Howe says. "We were projecting that in 2009 carriers would spend over $70 billion on these network build-outs in the U.S. Now we're saying that there will be $2 billion or $3 billion less in spending. . . . We're talking single-digit percentage declines, not wholesale cuts."

This doesn't mean that the network industry will emerge from the chaos unscathed. Carriers will squeeze their equipment providers, and companies like Cisco are already feeling the pinch. When Cisco announced its latest earnings last week, CEO John Chambers reported the company had seen its sales shift from solid-single-digit growth in August to a 9 percent decline in October.

Forrester says computer and communications equipment vendors will bear the brunt of IT cost-cutting from enterprise customers.

6. Corporate data storage needs keep rising during recessions.

Every segment of the IT market is weaker today than it was six months ago. But some segments are less weak than others, and one of the healthiest is storage.

“Storage is relatively stable because of the fact that companies are using a lot more of their storage capacity and they are still dealing with an increasing amount of data that requires storage on a weekly basis. That’s not going to change,” IDC’s Minton said. “It’s not just the hardware, but the storage software that will be relative bright spots in the years ahead.”

One storage industry bellwether is EMC, which continued to demonstrate strong demand for storage hardware and software in its recent quarterly results. EMC’s revenue grew 13 percent in the third quarter of 2008 compared to a year ago. Unlike many other network industry leaders, EMC is projecting continued revenue gains in the fourth quarter of 2008.

Similarly, this week Brocade issued a preliminary release indicating strong sales for the quarter ending in October. CEO Michael Klayko said the company will outperform its sales projections from August 2008.

“Storage needs are on the rise, and storage investments will continue,” Forrester’s Bartels says. “We don’t see cloud storage as having a meaningful impact yet.”

7. New IT markets will continue to emerge, although more slowly.

Emerging economies such as China and Latin America are slowing down, but they are still expected to have IT sales increases in 2009. The Latin American market, in particular, is a solid one for IT companies such as IBM, HP and Microsoft, which have a strong foothold south of the border.

“In the past two to three years, Latin America has had some of the fastest growth rates in IT spending,” IDC’s Minton said. “Brazil is the biggest market, and it has been growing at double digits. But all of the markets in Latin America have been growing by more than 10 percent a year. With some exceptions, the economies there are relatively stable and have had less political turmoil than in the past. . . . This is one of the regions that we think will bounce back pretty quickly.”

Other emerging markets that will continue to post growth in IT spending in 2009 are Central and Eastern Europe, the Middle East and Africa, IDC predicts. While these markets won’t experience double-digit gains next year, they will help offset sharp declines in IT purchasing in the United States, Japan and Western Europe.

Forrester warns that IT vendors shouldn’t count on so-called BRIC countries—Brazil, Russia, India and China—to bail them out of the financial crisis altogether.

“The BRIC markets are performing better than the industrial markets, but they are also slowing down,” Forrester’s Bartels says. “Among those markets, China looks to be the strongest, then Brazil and Mexico. Russia is weakening, and India is weakening. They’re not going to go into a contraction, but the growth rates could slow to the point that they feel like a contraction.”

One issue for IT vendors is the rising strength of the U.S. dollar, which means U.S. tech vendors will bring home fewer dollars from their foreign sales when they convert currencies.

“The dollar has been strengthening against every currency except the Chinese currency,” Bartels says. “Even if a vendor is successful in sales in Brazil or Russia, they will bring back fewer dollars, which was not the case six months ago.”

8. Outsourcing helps companies stretch their IT budgets.

Many companies will freeze new IT initiatives for the next three to six months as they absorb the Wall Street crash. But one segment that’s likely to continue is IT outsourcing because it provides near-term cost reductions.

“While IT outsourcing will benefit from an economic slowdown in 2008 as companies turn to IT outsourcing vendors to help cut costs, trends toward use of lower-cost offshore resources and smaller-scale outsourcing deals will keep growth modest,” says Forrester Research.

Forrester predicts IT outsourcing will grow around 5 percent in 2009 and 2010.

“When you sign an outsourcing agreement, you’re locked into it barring going out of business,” Forrester’s Bartels says. “Outsourcing revenues are not going to be variable.”

On the horizon is cloud computing, which also holds the promise of reducing corporate IT overhead but requires more up-front spending than outsourcing.

“Over the longer term, we’re pretty bullish about cloud computing,” IDC’s Minton said. “But there will be a lot of hurdles for a bigger corporation. It’s difficult for them psychologically to give up control, and there are quite a lot of up-front costs to engage consultants, to roll out applications to a large number of employees, and there’s training involved. But ultimately these projects save money.”

What is Server Virtualization?

Written on 11:21 AM by Right Click IT - Technology Services

Server Virtualization

Have you been reading about server virtualization? Heard it is the next big thing, one that will save you a lot of money, make IT management easier, and decrease the amount of space you need to dedicate to your servers?

At Right Click, we are well equipped to teach you about the pros and cons of virtualizing and to recommend whether it is the right way to go.

What is it? Server virtualization is running multiple copies of server software on one physical box. For instance, with a virtualized server you can have a domain controller, an Exchange server and a terminal server on one physical box, with three separate copies of Windows Server running. It is as though you have three machines, but only one physical box.

Why does it work? If you look at your CPU utilization, it hardly ever goes above 10%. With virtualization you can get more out of your existing servers and offer additional services to your users without significant hardware spending.
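A back-of-the-envelope calculation, using assumed utilization and overhead figures, shows why consolidation ratios of five or more servers per box are plausible:

```python
# Rough consolidation arithmetic with assumed figures.
average_utilization = 0.10      # each server rarely exceeds 10% CPU
target_utilization = 0.60       # leave headroom on the consolidated host
virtualization_overhead = 0.10  # assumed hypervisor overhead per guest

per_guest_load = average_utilization * (1 + virtualization_overhead)
guests_per_host = int(target_utilization / per_guest_load)
print(f"about {guests_per_host} guests per physical host")   # about 5
```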

How does it work? Virtualization begins with base software, which can be Microsoft Windows Server 2008 or a small program like VMware's hypervisor. Once you have your base software, you begin to set up new virtual machines on your one box. You can install different operating systems, even different platforms, on one box.

Why do you want it? There are a number of reasons to virtualize:

    • Decrease the number of servers to maintain. You can put 5 to 7 servers onto one box.
    • Spend less money on hardware. If you need a new server, just install a new one on your virtualized machine.
    • Decrease the need for cooling and space. With the price of energy increasing, virtualization is going to become mandatory in the future.
    • Easier to manage. No longer do you need to be familiar with 10 different types of machines; as long as you know your one machine you will be OK. Only one set of drivers and BIOSes to worry about.
    • Easier to test with. Imagine having a test environment you can set up in minutes. If you break something and want to start from scratch, just restore the image you started with and you are good to go.

Who makes the software? The leader today is VMware (http://www.vmware.com). They have a two-year head start on their closest competitor. Microsoft and Citrix are making a hard charge to claim some of this space. We believe the battle among these companies is just beginning.

Why Select Right Click? Jim Harrington and Avi Lall are both VMware certified. After working with many customers, we know what products and add-ons you are going to need. Our installations go smoothly, and customers enjoy the benefits of virtualization immediately. Please call or email us for a free consultation on how virtualization can help your firm.

Network Management: Tips for Managing Costs

Written on 10:41 AM by Right Click IT - Technology Services

These tips, including virtualization, consolidation and measuring bandwidth consumption, will help you reduce costs as network management becomes more complex and expensive.

– Karen D. Schwartz, CIO August 25, 2008

Of all of the ongoing expenses needed to keep corporate IT running, network-related costs are perhaps the most unwieldy. New technologies, changing requirements and ongoing equipment maintenance and upgrades keep IT staff on their toes and money flowing out the door. But there are ways to manage network costs.

The Problem
According to Aberdeen Group, network costs continue to rise steadily. In 2008, for example, network spending is expected to increase slightly more than 5 percent over 2007. Telecom management industry association AOTMP of Indianapolis, Ind., backs that up, estimating that spending for voice and data services alone averages $2,000 to $3,000 per employee.

The biggest area for steady cost growth is the ever-expanding network, either as a result of physical expansion or a general thirst for connectivity. In the first case, a new branch office could require replication of the security infrastructure through technology like a point-to-point VPN connection. The network may need to add a Multiprotocol Label Switching (MPLS) service to provide that branch office with a wide-area, high-speed connection. And those expenses are in addition to the cost of routers, switches and network appliances that the branch office may need.

Internally, the "need for speed" is driving the increase of network costs. More and more devices, either in terms of number of ports for network access or the number of network-connected devices per employee, is increasing.

One growing trend is the shift from standard PCs to mobile PCs in the corporate world. Over the next five years Forrester Research believes corporate America will reach an inflection point where traditional PCs are eclipsed by mobile PCs.

"Now you have a device that perhaps needs a port or wired drop at the desk and may also need to be supported on a wireless network, so the number of means by which employees can connect to the network drives the size of the network in terms of end points of connectivity," explains Chris Silva, an analyst with Forrester Research of Cambridge, Mass.

Other factors also are contributing to spiraling network costs. Aberdeen Group, for example, found that companies expect to increase their bandwidth by 108 percent on average over the next 12 months and expect to increase the number of business-critical applications running on their networks by 67 percent.

The growth of wireless networking is also increasing IT costs. As companies begin to replace all or part of their networks with Wi-Fi networks to take advantage of newer technologies like 802.11n, they are spending liberally.

And don't forget the hidden costs: As new devices enter the network and new network end points are developed, network management becomes more complex and expensive. For example, you might have your core wired network infrastructure from Vendor A but overlay a wireless network from Vendor B, which creates two separate management consoles. And as more employees connect to the network via devices like BlackBerrys and phones, the IT staff must manage and secure these network-connected devices as well.

Clearly, companies must do what they can to manage network costs. AOTMP, a telecom consultancy based in Indianapolis, found that developing a strategy to manage network expenses was the top telecom network initiative for companies in 2008, with reducing spending for telecom services and improved asset and inventory management services rounding out the top three.

Reducing Network Costs
The first step in controlling network costs, says Aberdeen analyst Bojan Simic, is to take the network's pulse. That means understanding exactly where the network's performance bottlenecks are and how efficiently the network is performing.

"Throwing more bandwidth and money at the problem even though you don't understand the bandwidth consumption per application or network location can be expensive," he says.

There are automated network monitoring tools available to measure these metrics. Both sophisticated products from vendors like Cisco Systems and NetQoS and free tools like PRTG Network Monitor and pier can provide a lot of value, such as reducing bandwidth and server performance bottlenecks and avoiding system downtime.
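As a minimal, free illustration of "taking the network's pulse" (Linux-only, and far simpler than the products named above), the following Python sketch samples /proc/net/dev twice and reports per-interface throughput:

```python
# Linux-only sketch: sample interface counters twice and report throughput.
# Real monitoring tools add per-application and per-site breakdowns on top.
import time

def read_counters():
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:           # skip the two header lines
            name, data = line.split(":", 1)
            fields = data.split()
            counters[name.strip()] = (int(fields[0]), int(fields[8]))  # rx, tx bytes
    return counters

def throughput(interval=5.0):
    before = read_counters()
    time.sleep(interval)
    after = read_counters()
    for iface, (rx1, tx1) in after.items():
        rx0, tx0 = before.get(iface, (rx1, tx1))
        print(f"{iface}: {(rx1 - rx0) * 8 / interval / 1e6:.2f} Mbit/s in, "
              f"{(tx1 - tx0) * 8 / interval / 1e6:.2f} Mbit/s out")

if __name__ == "__main__":
    throughput()
```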

Once you understand what's going on in your network, there are many methods companies can use to reduce costs or prevent them from rising further.

One method is to consolidate the physical network infrastructure by finding ways to make the switch that's at the core of the network perform more functions; by doing so, you can reduce the number of appliances and bolt-on solutions your network uses. Many networking vendors like HP and Cisco are making inroads in this area.

Virtualization is a key part of network consolidation. By setting up the network infrastructure to be delivered from a pool of shared resources, those resources can be used more efficiently across a network fabric, explains Peter Fetterolf, a partner at Network Strategy Partners, a Boston consultancy. Virtualization can improve network resource utilization, efficiency and agility, helping lower the total cost of ownership.

What's more, virtualization leads to reduced overhead in areas like power and cooling; real estate; supervision, maintenance and personnel; and telecom services, he adds. And consolidation of service capacity in a single location creates more predictable demand patterns that permit better utilization, while overhead costs are spread over more productive assets such as systems administrators per server and network managers per network element.

Another part of consolidation is adopting technology that allows the IT staff to manage both the wired and wireless network from a single platform via APIs or other types of application integration tools. Most of the major network vendors are battling to provide functions like these, but third-party vendors also can help.

"That means taking one network management console and managing not only just the flow of data bits and bytes, but managing the VPN service, the WAN optimization tool and other things in the network," Silva says. "You want to consolidate your different management interfaces and consoles into one virtual single pane of glass management, where everything is on one screen."

And don't forget about what you already have in place. It doesn't make sense to invest in more technology if you're not maximizing the value of the investments you have already made, Silva says. For example, you may have spent a lot on a wireless network and mobility technology, but if the network hasn't been configured properly to use the technology, you're wasting money. If built correctly, the network can probably support technologies like voice over wireless LAN or VoIP, for example.

"Most often, you can squeeze more value from what you already have by using the same infrastructure with different overlay technologies to get more return on the investment that's already been made," he says. "So in addition to serving data, that $200,000 investment in a wireless LAN can also work toward cutting down the monthly cellular bills of an organization because that network can also support voice. And the same template can be applied for supporting things like video, using the WLAN for asset or employee tracking and presence-enabling unified communications systems."

And examine the vendors and technologies you are using for best value. If, for example, you have relied on Cisco Systems to develop your entire network, expenses could get very high very quickly. "There are a lot of different ways to build a network, and there are a lot of different options. They are all worth exploring," Fetterolf says. And once you have done that, don't be shy about pitting vendors against each other, he adds.

Finally, it can also make sense to look beyond the four walls of your organization for cost savings. Outsourcing network management, for example, can save significant money in some cases. In a recent study, Aberdeen Group found that organizations that outsourced network management reported an average savings of 26 percent as compared with previous spending.

Networks Concern IT Managers

Written on 10:49 AM by Right Click IT - Technology Services

Total user spend will almost double in the next five years, according to the analysts

By Len Rust

July 07, 2008 — Computerworld Australia — In a recent survey of more than 1100 IT decision makers in the Asia/Pacific region, IDC measured the importance of 10 key solution areas touted by IT services vendors globally. Network infrastructure solutions came up as being most important, with more than 70 per cent of respondents in markets such as Australia, China, and India indicating that solutions pertaining to the network were either important or very important.

Business continuity and disaster recovery was a close second in importance among survey respondents.

IDC estimated that total spending in network services (which includes network consulting and integration services—NCIS—and network management—NM) will grow from $US4.7 billion in 2007 to $US9.1 billion in 2012 at a compound annual growth rate of 13.7 per cent from 2007-2012. This bodes well for companies such as IBM, Hewlett-Packard, and Dimension Data (including Datacraft), ranked by IDC as the top players (in terms of revenue) in Asia/Pacific, excluding Japan, for 2007.

Business continuity and disaster recovery, which include a variety of activities aimed at protecting and safeguarding critical corporate information against unpredictable events, was another key area of importance to IT decision makers.

According to the survey, end-users have stated that overall security concerns (51.9 per cent of responses) and past experience with security threats (44 per cent of responses) were the two key issues that have prompted increased focus on business continuity and disaster recovery.

Eugene Wee, research manager of IT services at IDC Asia/Pacific, said he is concerned with the nonchalance that still exists in the marketplace. "Currently, most of the needs assessments and process improvement around business continuity and disaster recovery occurs as an afterthought to threats arising."

How to Monitor Workers' Use of IT Without Becoming Big Brother

Written on 10:30 AM by Right Click IT - Technology Services

– Thomas Wailgum, CIO April 17, 2007

Arthur Riel says he was just doing his job.

When he was hired by Morgan Stanley in 2000 and put in charge of the $52 billion financial company’s e-mail archiving system, gaining access to its most sensitive corporate communications, the company was already involved in litigation that involved its e-mail retention policies. That suit would end in a landmark 2005 judgment against the bank, which awarded $1.57 billion in damages to financier Ronald Perelman. (In March 2007, Morgan Stanley won an appeal to Florida’s District Court of Appeal.)

It was part of Riel’s $500,000 a year job, he says, to make sure that would never happen again.

To do that, Riel had what he calls “carte blanche to go through e-mail.” What he says he discovered reading company e-mails throughout 2003 were what he construed as dubious business ethics, potential conflicts of interest and sexual banter within Morgan Stanley’s executive ranks that, he says, ran contrary to the bank’s code of conduct.

Based on his reading of executive e-mails, most notably CTO Guy Chiarello’s, Riel alleged that the e-mails showed the improper influence of Morgan Stanley’s Investment Banking division in how the IT department, with its multimillion-dollar budget, purchased technology products; the improper solicitation of tickets to New York Yankees–Boston Red Sox baseball games and other high-profile sporting events from vendors such as EMC; and the influencing, through one of Chiarello’s direct reports, of the outcome of Computerworld magazine’s Smithsonian Leadership Award process, of which Morgan Stanley was a sponsor. (Computerworld is a CIO sister publication.) “I reported what was basically a kickback scheme going on in IT,” Riel says.

E-mail exchanges that contained sexual banter and involved Riel’s boss, CIO Moira Kilcoyne, added to Riel’s conviction that something was wrong at the top. Believing, he says, that he was doing his duty, Riel claims to have sent hard copies of the offending e-mails to Stephen Crawford, Morgan Stanley’s then-CFO, on Jan. 15, 2004, anonymously via interoffice mail.

Riel’s superiors vigorously dispute his story.

First, according to a Morgan Stanley spokesperson, the company asserts that Riel was never authorized to monitor, read or disseminate other employees’ e-mails “as he saw fit.” Second, the spokesperson denies that a package of e-mails was either sent to or received by Crawford. And third, after conducting an internal investigation, the company maintains that it found no evidence warranting disciplinary action against anyone identified by Riel.

On Aug. 18, 2004, moments after Riel’s BlackBerry service was shut off, Kilcoyne, along with a vice president of HR, called Riel into her office. She told him that he was being placed on administrative leave with full pay. Morgan Stanley security searched his office and eventually found more than 350 e-mails on his PC, e-mails of which Riel was neither the writer nor the intended recipient.

On Sept. 27, 2005, 13 months after being placed on leave, Riel was “terminated for gross misconduct,” says the Morgan Stanley spokesperson.

Riel filed a $10 million whistle-blower Sarbanes-Oxley suit and a $10 million federal defamation suit against Morgan Stanley. In June 2006, the Department of Labor dismissed the whistle-blower suit and said it had found no cause to believe that Morgan Stanley had violated any part of the Sarbanes-Oxley act. It also found that Morgan Stanley had “terminated other employees in the past for similar misconduct.”

In February 2007, a federal judge dismissed seven of the eight complaints Riel had filed in his suit. (A small issue concerning compensation was uncontested.) In a statement, Morgan Stanley said that the dismissal of the seven complaints and the whistle-blower suit “further confirms that Arthur Riel’s allegations are without any legal or factual merit.”

Today, in light of everything that transpired, Riel says he learned a lesson that all CIOs should heed: “It’s critical that IT departments determine a policy for who should have access to what.” During his time at Morgan Stanley, he claims, “there was no policy.”

With Power Comes Responsibility
As the need to broaden access to systems and applications increases due to business and regulatory demands, so does the potential for malfeasance, whether it’s your network admin testing the corporate firewall on his own time and inadvertently leaving it open, a salesperson accessing a customer’s credit card information or a rogue help desk staffer hell-bent on sabotaging your CEO by reading his e-mail.

Like good governments, IT departments need checks and balances, and they need to marry access with accountability. A December 2006 Computer Emergency Readiness Team (CERT) study on insider threats found that a lack of physical and electronic access controls facilitates insider IT sabotage. The situation is even more critical now because new, widely deployed applications for identifying and monitoring employee behavior have thrust IT into what was formerly the domain of HR and legal departments. Tom Sanzone, CIO of Credit Suisse, says he works “hand in glove” with HR, legal, compliance and corporate auditors, and has formalized an IT risk function to ensure that all access policies are consistent and repeatable on a global scale. “Those relationships are very important,” he says. (For more on building those relationships, see “CIOs Need Business Partners To Achieve Security Goals.”)

Many CIOs have discovered that their new policing role presents the same challenges faced by the men and women who wear blue uniforms: If people can’t trust the police—or if something happens that damages that trust—then whom can they trust? (For how to repair trust once it’s compromised, see "Maurice Schweitzer Addresses the Importance of Truth and Deception in Business.")

“If IT does something that they shouldn’t, then the general employee thinks, I’m going to find a way to get around the monitoring because we can’t even trust the people in IT,” says David Zweig, an associate professor of organizational behavior at the University of Toronto at Scarborough. “It’s a cycle of increasing deviance, which, unfortunately, could create more monitoring.”

At Network Services Company (NSC), a distributor in the paper and janitorial supply industry, CIO Paul Roche asserted control over how and when his IT department can access employee systems and, working with HR and legal, he has developed a policy for dealing with suspected employee infractions. For example, the IT policy states that IT personnel can’t start snooping around employees’ PCs without prior HR approval. “Employees know we’re not going to look the other way,” says Roche.

Any CIO’s mettle—no matter how rock-solid his policy or relationships—will be tested when one of his own crosses the line and breaks the trust between users and the IT department. “The expectation has to be that if you’re going to give someone authority, at some point it will be misused,” says Khalid Kark, a senior security analyst at Forrester Research. “And who will guard the guards?”

Bad Guys and Do-Gooders
Despite Riel’s assertion that Morgan Stanley had no policy for which systems and e-mail accounts he could access, Morgan Stanley says Riel was never authorized to do what he did. (No one from Morgan Stanley’s IT department was made available for this article.)

Morgan Stanley isn’t alone in having to deal publicly with renegade IT employees. Wal-Mart disclosed last March that over a four-month period one of its systems technicians, Bruce Gabbard, had monitored and recorded telephone conversations between Wal-Mart public relations staffers and a New York Times reporter. “These recordings were not authorized by the company and were in direct violation of the established operational policy that forbids such activity without prior written approval from the legal department,” Wal-Mart said in a statement. In addition, Wal-Mart revealed that Gabbard had “intercepted text messages and pages, including communications that did not involve Wal-Mart associates,” which the company maintains “is not authorized by company policies under any circumstances.” Gabbard, who was fired, claimed in an April Wall Street Journal article that his “spying activities were sanctioned by superiors.” Wal-Mart says that it has removed the recording equipment and related hardware from the system. “Any future use of this equipment will be under the direct supervision of the legal department,” Wal-Mart stated.

In February, the Massachusetts Department of Industrial Accidents (DIA) disclosed that Francis Osborn, an IT contractor, had accessed and retrieved workers’ compensation claimants’ Social Security numbers from a DIA database. According to court documents, Osborn accessed 1,200 files and opened credit card accounts using three claimants’ information, charging thousands of dollars to those fraudulent accounts. In a statement, the DIA commissioner said the department was “conducting a thorough review of all security procedures.” Osborn was fired, arrested and charged with identity fraud.

Other incidents, however, are less egregiously criminal and therefore harder for CIOs to evaluate and handle. In February 2006, New Hampshire officials announced that they had discovered password-cracking software (a program called Cain & Abel) planted on a state server. Cain & Abel potentially could have given hackers visibility into the state’s cache of credit card numbers used to conduct transactions with the division of motor vehicles, state liquor stores and the veterans home. Douglas Oliver, an IT employee who in one news report referred to himself as the state’s “chief technical hacker,” admitted to media outlets that he had installed the program, saying he was using it to test system security. He said he did so with state CIO Richard Bailey’s knowledge. (Bailey did not respond to repeated requests for an interview.) Oliver was placed on paid leave during an investigation that involved the FBI and the U.S. Department of Justice.