Green IT at Washington Mutual saves lots of green

Written on 2:04 PM by Right Click IT - Technology Services

by Thomas Wailgum (CIO)

Like her CIO peers, Washington Mutual's Deborah Horvath has that unfiltered, end-to-end view of her company's operations that all IT executives possess.

To her credit (and perhaps as a result of her accumulated experience), Horvath is drawing on every bit of that enterprise-wide visibility to help her in another critical role she plays at Washington Mutual: chairing the bank's burgeoning environmental council's efforts to cut its carbon emissions, reduce power costs and become a greener business.

While some in IT may be inclined to overlook what's happening in their own backyards, Horvath is not. For starters, she notes that 50% of most companies' carbon emissions are electricity related. "And data centers are electricity hogs," she says. (For more, see "Six Ways to a More Efficient Data Center.") Add to that the PCs, copiers, printers and phones, which, Horvath says, drive the lion's share of the electricity demands coming from the employee base. Basically, she reasons, IT has got everything but HVAC, lighting and other building systems under its umbrella.

"So the smarter you get about specifying and determining your standards around energy consumption," Horvath says, "the more progress you can make for the company."

These days, the hype surrounding green initiatives is also an opportunity for IT to step up and aid interested but uninformed executive peers. IT-driven environmental and sustainability initiatives can "be a way for IT leaders and organizations to gain strategic influence by acting as an enabler for improving the environmental footprint of other business operations, beyond the IT infrastructure," notes Christopher Mines, an analyst at Forrester Research, in a May 2008 report "More Green Progress In Enterprise IT." (For more, see "Can IT Make Your Company Green?")

Deborah Horvath, CIO at Washington Mutual, uses green policies to cut costs.

"Because of the universal nature of IT infrastructure touching every location and almost every employee," he writes, "IT has a unique capability to substitute (low-carbon) infrastructure for (carbon-intensive) processes."

Washington Mutual's green successes

Since taking on the chairmanship of the brand-new environmental council in 2007, Horvath and her business counterparts at Washington Mutual, or WaMu, have made significant progress. The Seattle-based consumer and small-business bank -- with nearly $320 billion (U.S.) in assets -- has cut its PC-related greenhouse gas emissions by 60%, saved millions on IT-related electricity costs for its PC fleet, and in one recently completed pilot reduced the legal department's paper consumption by 15%.

In the desktop computing space alone, WaMu has seen huge savings. For the environment's sake, Horvath reports that WaMu went from emitting 24.5 metric tons of CO2 down to 8.6 metric tons this year. That decrease in electricity usage is chiefly derived from energy-saving power-management software installed on 44,000 PCs, which powers them down when not in use. (Horvath declines to name the vendor, citing WaMu policy.)

IT first rolled out the software in a spring 2007 pilot to 100 PCs. Horvath was concerned that the software might introduce latency issues or bugs that could cause WaMu's systems to crash. It did not. The software, she says, is highly customizable, with many options for IT and users to configure. For instance, the application can lower a PC's power settings when it's not in use for a certain amount of time (if, say, an employee goes to a meeting). Or the PC can be powered down at a specified time, say, at 6 p.m. every night. (A warning message pops up before a PC powers down, and an employee who is working can delay the shutdown.)

Horvath and her team worked out the specifics of the program, incorporating rules for all of those situations, and applied them enterprise-wide. For example, during business hours (8 a.m. to 6 p.m.), PCs and monitors in WaMu's retail branches remain on at all times. At WaMu's back-office locations, however, monitors turn off after 20 minutes of inactivity, and PCs go into standby mode after 30 minutes of inactivity. And at 6 p.m. every night, if there is no activity, the PCs go into standby and the monitors turn off.
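The schedule rules above can be sketched as a small policy function. This is purely illustrative: the article does not name the vendor or its configuration format, so the location names, thresholds, and decision logic here are assumptions modeled on the description.

```python
from datetime import time

# Hypothetical sketch of WaMu's power-management rules as described above.
# The vendor product is unnamed; this only mirrors the stated policy.

BUSINESS_START, BUSINESS_END = time(8, 0), time(18, 0)

def desired_state(location: str, now: time, idle_minutes: int) -> dict:
    """Return the target power state for a PC and its monitor."""
    in_business_hours = BUSINESS_START <= now < BUSINESS_END

    if location == "retail_branch" and in_business_hours:
        # Branch machines stay fully on during business hours.
        return {"pc": "on", "monitor": "on"}

    if location == "back_office" and in_business_hours:
        # Monitors off after 20 idle minutes, PCs to standby after 30.
        return {
            "pc": "standby" if idle_minutes >= 30 else "on",
            "monitor": "off" if idle_minutes >= 20 else "on",
        }

    # After 6 p.m., any idle machine goes to standby with the monitor off.
    if idle_minutes > 0:
        return {"pc": "standby", "monitor": "off"}
    return {"pc": "on", "monitor": "on"}
```

A scheduler running every few minutes could apply `desired_state` to each machine's inventory record; the point is that the whole enterprise policy reduces to a handful of rules.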

From the savings on the pilot and enterprise-wide rollout thus far, Horvath projects that the program will deliver $3 million in savings this year. WaMu has also received $230,000 in rebates from its electric utility.

In addition, WaMu is enabling its customers to become greener through WaMu's products. One notable example is WaMu's "Make a Statement, Plant a Tree" campaign. Born out of the marketing and e-commerce departments, the program aimed to help the Arbor Day Foundation's efforts to plant a million trees by getting WaMu customers to switch from paper to online banking statements. For every consumer that switched, WaMu donated a dollar to the Arbor Day Foundation. In spring 2008, WaMu presented a $1 million check to the foundation for 1 million trees to be planted.

So not only did WaMu's efforts do well for the environment, but WaMu now has a total of 2.7 million customers using its e-statements. The savings on paper itself as well as postage and "handling the paper," Horvath says, is approximately $18 million a year.

"We're going to continue to drive this hard," Horvath adds. "If we can take it from 2.7 million, currently, and add another 3 million this year, then our savings will be closer to $36 million in 2009."

Does going green actually help business?

Why are companies embarking on green IT initiatives? The Forrester report describes the results from an April 2008 survey of more than 1,000 IT personnel: green motivations range from corporate strategy and sustainability, to improving the brand, to complying with regulations, to reducing IT operational expenses. (For more on this, see "Power Costs Drive Moves to Virtual Servers" and "UPS's New Telematics System Cuts Fuel Costs and Makes Drivers More Efficient.")

In particular, Forrester's survey results showed that U.S.-based companies were more likely to cite cost-cutting as a prime motivator for green IT rather than more environmental or brand-related motivations.

WaMu and many other companies in financial services are hurting right now and can use any reductions in operational costs to offset losses related to downturns in credit, mortgage and other markets. (For more, see "Does Your Work in Information Technology Matter to Wall Street?") WaMu reported a net loss of $1.14 billion in the first quarter of 2008, due to deterioration in WaMu's home-equity and home-loan portfolios, and increases in delinquent loans.

Unfortunately, Horvath says, green initiatives tend to be viewed as adding expense. "It's very difficult to suggest that you are going to increase the company's overall expenses at this time," she says. "There is a concern by people on the periphery that trying to become environmentally friendly is just going to increase costs, because environmentally friendly products come with a price premium, initiatives take project teams [away from what they're doing], and they cost money. That's the pushback you get early on."

From the get-go, Horvath's environmental council set out to "foster and support green initiatives that could be self-funding," she says. As an example, Horvath mentions office paper. "If we could reduce the amount of paper that we utilize, we would save enough money to then consider buying paper that was more environmentally friendly," Horvath says. "And if the paper is more expensive, it still might be a net reduction [in cost] because we've reduced our volume" of paper consumption.

WaMu's legal department, for instance, had been the company's biggest consumer of paper. IT already had reduced the legal department's use of dedicated printers and moved the group to multifunction printer-copier-fax machines, which saved on paper-related costs. "We then asked them to go to duplex printing," she says, which is printing on both sides of a piece of paper. "And then we asked them to think twice about all the things they're printing."

In a short time, Horvath reports that the legal department had reduced its paper usage by 15%. From the legal department pilot, plans are now underway to replicate the program throughout the rest of WaMu's operations. Even just a 5% reduction in paper usage spread out across the rest of the bank's operations would make moving to more environmentally friendly paper, which can be more expensive, "quite easy to justify," Horvath notes.

"How we get everybody to think about green is that it doesn't have to be more expensive or increase our costs," Horvath says.

Research showed that, to its customers, a more environmentally friendly WaMu is a big selling point. In 2007, WaMu's marketing department conducted an online survey of 500 customers in areas where WaMu has retail branches. The goal was to find out how much green-related corporate initiatives and services mattered to customers. The findings showed that 38% of those surveyed thought it was important for their bank to be passionate about environmental causes, and 45% said their bank should operate as a green company, Horvath reports.

"We then went deeper into finding out just how many of those customers would actually be more prone to buy products or do more business with a company that had green products and a green agenda," Horvath says. "We found that most of them are looking for companies that are not 'greenwashing' but actually have it embedded in their company."

How to get green going

Green was not always top of mind, or well thought out, in the executive ranks at WaMu.

When Horvath "stepped up" to chair the committee in early 2007, as she puts it, she was hoping to "get us better organized around being green, because we already had a lot of employees with a lot of good ideas incorporating it into their jobs," Horvath recalls. "But we weren't well-organized around it. We weren't reporting on it. We didn't know the baseline of our carbon emissions. We didn't have metrics or measurements around reducing our carbon emissions. And we didn't have strategies or visions for what we were going to do, relative to green."

Her self-imposed agenda was to "build a framework that could not only increase our velocity of change relative to environmental concerns," she says, "but to increase the overall awareness of customers, employees and other vendors and stakeholders. That's what I set out to do."

Forrester's Mines recommends that companies just starting out should create a comprehensive document or action plan "that details the goals, priorities and activities that the company will undertake." (For more on this, see "Can You Build a Carbon-Efficient Supply Chain?") According to Forrester's survey results, 45% of respondents said they are either implementing or creating such a green IT plan.

In just over a year, Horvath and the team on the environmental council have realized many successes. To other CIOs and IT staffers, she offers three points of advice for getting started.

1. Be inclusive.
"The best way to approach this is not with the traditional approach," Horvath says. That would be, "let's create an organization and put a few full-time dedicated people in it. And it's their responsibility to be environmentally friendly and the rest of us are all off the hook."

WaMu purposefully structured the green initiatives as something that every employee could participate in. "It could be part and parcel of all their jobs," she says.

2. Tap into employees' ideas and passion.
Horvath says that one of the smartest things her group did at the outset was to open an online discussion board on the intranet where WaMu's 50,000 employees could offer up green-related ideas. Launched in 2007 and called "Go Green," the site was also used as an opportunity to increase awareness among all employees of the overall topic, important issues and corporate possibilities.

WaMu then ran a contest that awarded environmentally friendly prizes for the top three personal and the top three work-related green ideas. The ideas are still flowing strong today. "There were more ideas than you could implement in a year, and I don't think we could turn [Go Green] off if we wanted to," Horvath says. "For CIOs who say, 'I don't know where to start,' that is a great place to start."

3. Think green when making IT purchases.
To reduce electricity costs and, ultimately, carbon emissions, IT executives need to examine their utility bills, Horvath points out. In addition, there are statistics from PC manufacturers on the electrical usage of their equipment.

"When you go to make an equipment decision, either for PCs and especially for storage devices because some consume very high energy," Horvath notes, "you should be asking your vendors what are the electricity requirements of the equipment and comparing them across the board with others."

Tough Times and Three Unequivocal Standards of IT Agility

Written on 10:48 AM by Right Click IT - Technology Services

Michael Hugos/Blog: Doing Business in Real Time

So the CEO and the CFO are telling you to cut IT expenses - tell them for the good of the company you can’t do that. Tell them you already run a lean operation and saving another 10 percent on the IT budget is small potatoes compared to using IT to save 10 percent on the operating expenses of the whole company or using IT to grow company revenue by 10 percent.

As all eyes around the table turn your way to see how you are going to recover from that jaw-dropping bit of unexpected impertinence, in the stunned silence that follows, drive home your point. Propose that instead of cutting IT, you’ll work with the CEO and the COO and the VP of Sales to create strategies to deliver those savings in company operating expenses and attain those increases in revenue. Seal your offer by publicly committing to power the resulting business strategies with systems infrastructure that meets three unequivocal standards of IT agility: 1) No cap ex; 2) Variable cost; and 3) Scalable.

Commit to the standard of no cap ex (no capital expense) because it’s the order of the day in business. Revenue and profits are under pressure and credit is harder to get, so there is less money for capital investments. Also, because we’re in a period of rapid technological change, making big investments in technology is risky because it might result in your company investing in technology that becomes obsolete a lot faster than expected. So smart IT execs learn to get systems in place without a lot of up front cost. That means using SOA and SaaS and mashups and cloud computing to deliver new systems.

Committing to the standard of a variable cost operating model is very smart because it’s a great way to protect company cash flow. Pay-as-you-go operating models (like what the SaaS and cloud computing vendors are offering) mean operating expenses will rise if business volumes rise, but just as important, operating expenses will drop or stay small if business volumes contract or don’t grow as big or as fast as expected (you only pay more if you're making more and you pay less if you're making less). In this economy where it is so hard to predict what will happen next, and where companies need to keep trying new things to find out where new opportunities lie, variable cost business models are best for managing financial risk.
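The cash-flow argument can be illustrated with a toy cost model. The fixed annual cost and per-transaction rate below are invented numbers, not figures from the article; the point is only how the two models respond to changing business volume.

```python
# Toy comparison of a fixed (cap-ex) cost model versus a pay-as-you-go
# model under different business volumes. All figures are hypothetical.

FIXED_ANNUAL_COST = 500_000      # owned infrastructure, paid regardless of volume
PAY_PER_TRANSACTION = 0.05       # assumed SaaS/cloud rate per transaction

def annual_cost(model: str, transactions: int) -> float:
    if model == "fixed":
        return FIXED_ANNUAL_COST
    return transactions * PAY_PER_TRANSACTION

# If volume collapses, the variable model's cost collapses with it;
# the fixed model keeps charging the same amount.
boom, bust = 12_000_000, 2_000_000
```

In a boom year the variable model may cost more than the fixed one, but in a bust year its cost drops with revenue, which is exactly the financial-risk point being made.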

Committing to scalable systems infrastructure enables companies to enjoy the benefits of the first two standards. A scalable systems infrastructure enables a company to “think big, start small, and deliver quickly”. The CEO and COO and VP Sales can create strategies with big potential and try them out quickly on a small scale to see if they justify further investment. Start with targeted 80% solutions to the most important needs and then build further features and add more capacity as business needs dictate. Companies don’t outgrow scalable systems; they don’t have to rip out and replace scalable systems.

Making such an offer to your CEO might sound pretty bold and risky, but then consider this: If your plan is just to cut your IT budget and try to keep your head down, chances are excellent you won’t survive anyway. That's because if you dumb down your IT operations and IT is seen as a cost center instead of part of your company’s value proposition, then your CEO and your CFO are going to quickly see that a great way to save an additional six-figure sum will be to fire you. Who needs a highly paid person like you to run a mere cost center?

SLAs: How to Show IT's Value

Written on 10:13 AM by Right Click IT - Technology Services

From: www.cio.com – Bob Anderson, Computerworld December 02, 2008

Over a career in information technology spanning multiple decades, I have observed that many IT organizations have focused process improvement and measurement almost exclusively on software development projects.

This is understandable, given the business-critical nature and costs of large software development projects. But in reality, IT support services consume most of the IT budget, and they also require the most direct and continuous interaction with business customers.

IT organizations must demonstrate the value of IT support services to business customers, and a primary way of doing this is through service-level agreements. SLAs help IT show value by clearly defining the service responsibilities of the IT organization that is delivering the services and the performance expectations of the business customer receiving the service.

One of the most difficult tasks in developing an SLA is deciding what to include. The following sample SLA structure provides a good starting point.

Introduction: This identifies the service, the IT organization delivering that service and the business customer receiving it.

Examples:

  • Infrastructure support for a shipping warehouse.
  • Software application support for the payroll staff.

Description of services: This characterizes the services to be provided, the types of work to be performed and the parameters of service delivery, including the following:

  • The types of work that are part of the service (maintenance, enhancement, repair, mechanical support).
  • The time required for different types and levels of service.
  • The service contact process and detailed information for reaching the help desk or any single point of contact for support services.

Description of responsibilities: This delineates responsibilities of both the IT service provider and the customer, including shared responsibilities.

Operational parameters: These may affect service performance and therefore must be defined and monitored.

Examples:

  • Maximum number of concurrent online users.
  • Peak number of transactions per hour.
  • Maximum number of concurrent user requests.

If operational parameters expand beyond the control of the service provider, or if users of the service exceed the limits of specified operational parameters, then the SLA may need to be renegotiated.

Service-level goals: These are the performance metrics that the customer expects for specific services being delivered. SLGs are useless unless actual performance data is collected. The service being delivered will dictate the type and method of data collection.

It is important to differentiate between goals that are equipment-related and service-level goals that are people- and work-related.

Examples:

  • Equipment SLG: 99% network availability 24/7.
  • People and work SLG: critical incidents resolved within two hours.

Service-improvement goals: These establish the required degree and rate of improvement for a specific SLG over time. An SIG requires that a performance trend be calculated over a specified period of time in addition to specific SLG data getting captured. This trend indicates the rate of improvement and whether the improvement goal has been achieved.
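A service-improvement goal of this kind amounts to fitting a trend line to the collected SLG data and comparing its slope to a target rate. The sketch below uses made-up monthly resolution times and a hypothetical improvement target, purely to show the calculation.

```python
# Illustrative sketch of tracking a service-improvement goal (SIG):
# given periodic SLG measurements (e.g. average critical-incident
# resolution time in hours), compute a least-squares linear trend and
# check it against a target rate of improvement. Data is invented.

def linear_trend(values):
    """Least-squares slope per period (negative = improving, for time-based SLGs)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

monthly_resolution_hours = [3.0, 2.8, 2.5, 2.4, 2.1]  # hypothetical SLG data
slope = linear_trend(monthly_resolution_hours)
goal_met = slope <= -0.2  # SIG: improve by at least 0.2 hours per month
```

The same slope calculation works for any periodically sampled SLG; only the sign convention changes for metrics where higher is better.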

Service-performance reporting: This states IT's commitment to delivering reports to the business customer on a scheduled basis. The reports detail actual services delivered and actual levels of performance compared to the commitments stated within the SLA.

Sign-off: Signature lines and dates for authorized representatives of the IT organization delivering the service and the business customer receiving the service.

The hardest part of developing an SLA may be getting started. I hope this framework will help you begin to demonstrate IT's value to your customers.

Anderson is director of process development and quality assurance at Computer Aid Inc. Contact him at bob_anderson@compaid.com.

When "IT Alignment with the Business" Isn't a Buzzword

Written on 10:10 AM by Right Click IT - Technology Services

December 01, 2008 – Matt Heusser, CIO

IT leaders were told to "do more with less" even before economic woes exacerbated the issue. Savvy managers have always kept their eye on the goal: demonstrating what IT can do for the business, so that it's not always viewed as a cost center. Last week, one IT manager explained her strategy.

At a meeting of the Grand Rapids Association of IT Professionals (AITP), Krischa Winright, associate VP of Priority Health, a health insurance products provider, demonstrated her IT team's accomplishments over the past year. Among the lessons learned: talented development organizations can gain advantages from frugality (including developing applications using internal resources and open-source technologies); you can ferociously negotiate costs with vendors; and virtualization can save the company money and team effort. End result: an estimated 12 percent reduction in expense spending (actual dollars spent) in 2008.

I asked Krischa about what her team had done at Priority Health, and how other organizations might benefit from her approach.

CIO: First, could you describe your IT organization: its size and role?

Winright: Priority Health is a nationally recognized health insurance company based in Michigan. Our IT department has approximately 90 full time staff, whose sole objective is to support Priority Health's mission: to provide all people access to excellent and affordable health care. The implications of this mission for IT are to support cutting edge informatics strategies in the most efficient way possible. We staff all IT services and infrastructure functions, in addition to software development capability.

CIO: In your AITP talk, you mentioned basic prerequisites to transparency and alignment. Can you talk about those for a moment?

Winright: Prior to 2008, we put in place a Project Management Office with governance at the executive level. Our executive steering committee prioritized all resources in IT dedicated to large projects, which meant that we already were tightly, strategically aligned with the business. ROI for all new initiatives is calculated, and expenditures (IT and non-IT) are tracked.

CIO: So you put a good PMO in place to improve the organization's ability to trace costs. Then what?

Winright: Well, let's be careful. First, project costs associated with large business initiatives are only one portion of IT spending. Additionally, cutting costs is easy; you just decrease the services you offer the business.

Instead, we wanted to cut costs in ways that would enhance our business alignment, and increase (rather than decrease) the services we offer. To do that, we had to expose all of the costs in IT (PMO and non-PMO) in terms that the business could understand. In other words: business applications.

We enumerated all IT budgetary costs by application, and then bucketed them based upon whether they were (1) existing services (i.e. keeping the "true" IT lights on) or (2) new services being installed in 2008.
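The bucketing exercise Winright describes can be sketched in a few lines. The application names and dollar figures below are invented for illustration; the structure is the point.

```python
# Minimal sketch of the bucketing described above: enumerate IT costs
# by business application, then split them into "keeping the lights on"
# (existing services) versus new services installed in 2008.
# All applications and figures are hypothetical.

costs = [
    {"application": "claims processing", "cost": 1_200_000, "new_in_2008": False},
    {"application": "member portal",     "cost":   400_000, "new_in_2008": True},
    {"application": "payroll",           "cost":   250_000, "new_in_2008": False},
]

buckets = {"existing_services": 0, "new_services": 0}
for item in costs:
    key = "new_services" if item["new_in_2008"] else "existing_services"
    buckets[key] += item["cost"]
```

Once every budget line is attributed to an application and a bucket, the ratio of new-service spending to lights-on spending becomes a single number the business can track year over year.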

We then launched a theme of "convergence" in IT, which would allow us to converge to fewer technologies/applications that offer the business the same functionality, while increasing the level of service for each offering.

CIO: So you defined the cost of keeping the "true" IT lights on. What about new projects and development?

Winright: We adopted Forrester's MOOSE model. We established the goals of reducing the overall cost of MOOSE ("true" IT lights on) and increasing the amount of funding of items of strategic business importance.

Using the MOOSE framework, we finally understood the true, total cost of our business applications and our complete IT portfolio. This allowed us to quickly see opportunities for convergence and execute those plans. By establishing five work queues which spanned all of IT—Operations, Support, IT Improvement, PMO, Small Projects—we learned how all 90 of our staff were spending their time. That let us make adjustments to the project list to "converge" their time to items of most imminent strategic return.

CIO: In your talk, you said an economic downturn can be a time of significant opportunity for your internal development staff.

Winright: Businesses in Michigan are acutely aware of the economic downturn. Our health plan directly supports those businesses, so we are optimizing our spending just like everyone else.

Maximum benefit must be gained for every dollar spent. Every area of the company is competing for expenditures in ways they weren't before.

Yet when budgets are cut, businesses' core values dictate keeping talented people. In IT, a talented development organization can seize the opportunity of frugality and provide help across a plethora of business opportunities in an extremely cost-effective way. Developing applications using internal resources and open-source technologies has a more favorable cost profile than third-party vendor applications, with their extensive implementation costs and recurring, escalating maintenance expense. Additionally, the decline of major third-party software implementations allows IT more bandwidth to partner side by side with the business.

CIO: What other steps have you taken to win trust?

Winright: We converted costly contracted labor associated with MOOSE to internal staff. Given exposure to the true cost of our business applications, we ferociously negotiated costs with our vendors. We took advantage of virtualization and other convergence technologies to maximize the benefit from spending, and in this first year we eliminated more than 10 items from our environment (for example, by consolidating environments, consolidating hosts through virtualization, and converging to one scheduler), all by embracing the theme of convergence.

The fruit of our labor is an estimated 12 percent reduction in expense spending (actual dollars spent) in 2008. More importantly, we have proven a 6 percent shift of spending from existing service costs to new services. This is a powerful message to share with business partners. They will ultimately benefit when 6 percent more IT spending is directed to new initiatives rather than to existing services costs.

CIO: What's been the most painful part of this process for you?

Winright: Two things. First, it was difficult and time consuming to gather all actual budgetary expenses and tie them to a specific service. For most organizations our size, this information is held across several cost centers and managers, and the technical infrastructure itself is complex.

Second, it is always difficult to take 90 technologists and get them aligned around common themes. We continue to strive for internal alignment and eventual embodiment of these themes.

CIO: Pretend for a moment you are speaking to a peer at an organization the size of Priority Health or a little larger. What advice would you have on quick wins, and things to do tomorrow?

Winright: Although painful and time consuming, it is imperative that you and your business peers understand the complete picture of IT spending in terms of business strategy. Then, and only then, will transparency into IT spending be an effective tool to increase business alignment.

Get your internal resources aligned around common themes, because an aligned group of highly intelligent people on a singular mission can yield incredible results.

How the Internet Works: 12 Myths Debunked

Written on 10:09 AM by Right Click IT - Technology Services

– Carolyn Duffy Marsan, Network World November 21, 2008

The Internet Protocol (IP) keeps evolving: What incorrect assumptions do we make when we send an e-mail or download a video?

Thirty years have passed since the Internet Protocol was first described in a series of technical documents written by early experimenters. Since then, countless engineers have created systems and applications that rely on IP as the communications link between people and their computers.

Here's the rub: IP has continued to evolve, but no one has been carefully documenting all of the changes.

"The IP model is not this static thing," explains Dave Thaler, a member of the Internet Architecture Board and a software architect for Microsoft. "It's something that has changed over the years, and it continues to change."

Thaler gave the plenary address Wednesday at a meeting of the Internet Engineering Task Force, the Internet's premier standards body. Thaler's talk was adapted from a document the IAB has drafted entitled "Evolution of the IP Model."

"Since 1978, many applications and upper layer protocols have evolved around various assumptions that are not listed in one place, not necessarily well known, not thought about when making changes, and increasingly not even true," Thaler said. "The goal of the IAB's work is to collect the assumptions—or increasingly myths—in one place, to document to what extent they are true, and to provide some guidance to the community."

The following list of myths about how the Internet works is adapted from Thaler's talk:

1. If I can reach you, you can reach me.
Thaler dubs this myth, "reachability is symmetric," and says many Internet applications assume that if Host A can contact Host B, then the opposite must be true. Applications use this assumption when they have request-response or callback functions. This assumption isn't always true because middleboxes such as network address translators (NAT) and firewalls get in the way of IP communications, and it doesn't always work with 802.11 wireless LANs or satellite links.

2. If I can reach you, and you can reach her, then I can reach her.
Thaler calls this theory "reachability is transitive," and says it is applied when applications do referrals. Like the first myth, this assumption isn't always true today because of middleboxes such as NATs and firewalls as well as with 802.11 wireless and satellite transmissions.

3. Multicast always works.
Multicast allows you to send communications out to many systems simultaneously as long as the receivers indicate they can accept the communication. Many applications assume that multicast works within all types of links. But that isn't always true with 802.11 wireless LANs or across tunneling mechanisms such as Teredo or 6to4.

4. The time it takes to initiate communications between two systems is what you'll see throughout the communication.
Thaler says many applications assume that the end-to-end delay of the first packet sent to a destination is typical of what will be experienced afterwards. For example, many applications ping servers and select the one that responds first. However, the first packet may have additional latency because of the look-ups it does. So applications may choose longer paths and have slower response times using this assumption. Increasingly, applications such as Mobile IPv6 and Protocol Independent Multicast send packets on one path and then switch to a shorter, faster path.

5. IP addresses rarely change.
Many applications assume that IP addresses are stable over long periods of time. These applications resolve names to addresses and then cache them without any notion of the lifetime of the name/address connection, Thaler says. This assumption isn't always true today because of the popularity of the Dynamic Host Configuration Protocol as well as roaming mechanisms and wireless communications.
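A common fix for this myth is to honor the DNS record's time to live (TTL) instead of caching an address forever. The sketch below shows the idea; `resolve` is a hypothetical stand-in for a real DNS lookup that returns both an address and its TTL, not an actual resolver API.

```python
import time

# Sketch of the fix for myth 5: cache a resolved address only for the
# record's TTL instead of indefinitely. `resolve` is a placeholder for
# a real DNS lookup returning (address, ttl_seconds).

class ExpiringResolverCache:
    def __init__(self, resolve):
        self.resolve = resolve          # callable: name -> (address, ttl_seconds)
        self._cache = {}                # name -> (address, expires_at)

    def lookup(self, name, now=None):
        now = time.monotonic() if now is None else now
        entry = self._cache.get(name)
        if entry and now < entry[1]:
            return entry[0]             # still within TTL: reuse cached address
        address, ttl = self.resolve(name)
        self._cache[name] = (address, now + ttl)
        return address
```

An application using a cache like this automatically picks up a new address after DHCP renewals or host moves, once the old record's TTL runs out.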

6. A computer has only one IP address and one interface to the network.
This is an example of an assumption that was never true to begin with, Thaler says. From the onset of the Internet, hosts could have several physical interfaces to the network, and each of those could have several logical Internet addresses. Today, computers are dealing with wired and wireless access, dual IPv4/IPv6 nodes and multiple IPv6 addresses on the same interface, making this assumption truly a myth.
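You can see this on your own machine with nothing but the Python standard library; `socket.if_nameindex` is available on POSIX systems and recent Windows:

```python
import socket

# List every network interface the host exposes. Even a laptop typically
# shows loopback plus one or more wired/wireless interfaces, each of which
# can carry several IPv4/IPv6 addresses.
interfaces = socket.if_nameindex()   # [(index, name), ...]
for index, name in interfaces:
    print(index, name)
```

Any code that asks for "the" IP address of such a host is already making a choice it may not realize it's making.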

7. If you and I have addresses in a subnet, we must be near each other.
Some applications assume that the IP address used by an application is the same as the address used for routing. As a result, an application might assume that two systems on the same subnet are near each other and better off talking to each other than to a distant system. This assumption doesn't hold up because of tunneling and mobility. Increasingly, new applications adopt a scheme known as an identifier/locator split, which separates the IP addresses used to identify a system from those used to locate it.

8. New transport-layer protocols will work across the Internet.
IP was designed to support new transport protocols running on top of it, but increasingly this isn't true, Thaler says. Most NATs and firewalls allow only the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) to transport packets, and newer Web-based applications operate only over the Hypertext Transfer Protocol (HTTP).

9. If one stream between you and me can get through, so can another one.
Some applications open multiple connections—one for data and another for control—between two systems. The problem is that middleboxes such as NATs and firewalls block certain ports and may not allow more than one connection. That's why applications such as the File Transfer Protocol (FTP) and the Real-time Transport Protocol (RTP) don't always work, Thaler says.

10. Internet communications are not changed in transit.
Thaler cites several assumptions about Internet security that are no longer true. One of them is that packets are unmodified in transit. While it may have been true at the dawn of the Internet, this assumption no longer holds because of NATs, firewalls, intrusion-detection systems and many other middleboxes. IPsec addresses this problem by authenticating and encrypting IP packets, but this security scheme isn't widely used across the Internet.

11. Internet communications are private.
Another security-related assumption Internet developers and users often make is that packets are private. Thaler says this was never true. The only way for Internet users to be sure that their communications are private is to deploy IPsec, which is a suite of protocols for securing IP communications by authenticating and encrypting IP packets.

12. Source addresses are not forged.
Many Internet applications assume that a packet really comes from the source address in its IP header. However, IP address spoofing has become a common way of concealing the sender's identity in denial-of-service and other attacks. Applications built on this assumption are vulnerable, Thaler says.

Learn Google Search Tips From the Pros

Written on 6:38 PM by Right Click IT - Technology Services

A Techie Holiday Wish List

Written on 8:10 AM by Right Click IT - Technology Services

CIO — By Kristin Burnham

Gadget makers hope you have some money to spend this holiday season

Look, No Hands!
If hands-free legislation has crimped your cell phone use, Funkwerk Americas' Ego Flash—a Bluetooth-enabled, hands-free car kit—is the solution. Its OLED display lets you view phone contacts (it stores up to 10,000), call logs and caller ID; make phone calls via voice recognition; and even have incoming text messages read aloud. The console also integrates with your car's stereo system and can play MP3 music downloaded to your mobile phone or any other Bluetooth-enabled player. $240 www.egohandsfree.com

Hunt and Peck With Style
Spice up your workspace with this hand-crafted, retro-inspired keyboard, The Aviator. This custom-made keyboard is constructed with a brushed-aluminum frame, a black felt faceplate and jewel-style LEDs similar to those on an airplane's instrument panel. $1,200-$1,500 www.datamancer.net

Watch While You Work
Late nights at the office don't mean you need to miss The Office. Sling Media's Slingbox PRO-HD streams HD content from a home television source, such as a cable box or satellite dish, to a laptop, desktop or smartphone. System requirements include a high-speed network connection with upload speeds of 1.5 megabits per second and an HD-compatible laptop or desktop computer. $300 www.slingmedia.com

No Outlet? No Problem!
A BlackBerry or iPod battery that is dying—especially when there's no outlet or charger in sight—is the ultimate inconvenience. But Solio has developed what it boasts is "the world's most advanced hybrid charger," the Magnesium Edition. Solar panels collect and store power—one hour of sun will power your iPod for an hour—and its adapter tips plug in to a variety of mobile devices, limiting the need to lug multiple chargers around. $170 www.solio.com

Can You Hear Me Now?
Crying baby on your six-hour flight? Get some shut-eye with Sennheiser's PXC 450 NoiseGard travel headphones, which reduce ambient noise by up to 90 percent. They also include a talk-through function to help distinguish between sounds such as those of a plane's engines versus the voice of a person—enabling you to communicate while wearing them. The headphones collapse for easy transport and come with adapters for in-flight entertainment systems. $400 www.sennheiserusa.com


How to Recession-Proof Yourself

Written on 8:08 AM by Right Click IT - Technology Services

By Meridith Levinson

November 17, 2008 — CIO — Layoff fears are sending a shiver through the workforce as the U.S. economy lurches toward a full-blown recession. And no one is safe as corporate cost-cutters sharpen their axes. Though senior executives are less vulnerable to losing their jobs than the employees below them, they, too, can be casualties of restructurings.

Whether you're a CIO or a help desk technician, career coaches say you can take measures to prevent the hatchet from falling on your neck. Here's a list of actions they say you can take to help safeguard your job.

1. Know your value and communicate it. "If you're flying under the radar, you're going to be the first to be eliminated," says Kirsten Dixson, author of Career Distinction: Stand Out by Building Your Brand. This goes for CIOs, too.

Dixson recommends compiling a weekly status report that outlines the project or projects you're working on, your progress on those projects and your key performance indicators, and sending that report to your boss each week.

If you're known as a "growth and innovation CIO," now is also the time to prove that you're as adept at cost cutting as you are at generating ideas, says Joanne Dustin, a 25-year IT veteran who's now a career coach and an organizational development consultant.

Dustin says CIOs need to talk up the efficiencies and cost savings that their innovations have achieved as well as the revenue they've generated. Your company may still decide that it needs someone with a different skill set in the CIO role, but at least you've given it your best shot.

2. Be a team player. Getting along with others—in the boardroom or elsewhere—is critical when downsizing is on the table, especially for IT professionals who tend to be independent, says Dustin, who's worked as a programmer, project manager and systems manager. "These times require cooperation, flexibility and a willingness to go the extra mile," she says.

IT professionals who "just sit at their desk or in the server room and do their eight-to-five" are at risk, says Ed Longanacre, senior vice president of IT at Amerisafe, a provider of workers' compensation insurance. The problem with hunkering down, he says, is that it gives the impression that you're not interested in the organization.

3. Keep your ear to the ground. Staying attuned to what's going on inside your company, including gossip, can help you anticipate changes, says Patricia Stepanski Plouffe, president of Career Management Consultants. "If there's a rumor that your department is going to fold or downsize, you can identify other areas of the company where you could transfer your skills," she says. Just remember that you can't trust everything you hear, whether it comes from the water cooler or the CFO.

4. Adapt to change quickly. "If you can develop an attitude that nothing is going to stay the same and that your organization and your job will always be in flux, that will help you cope," says Stepanski Plouffe. "Be ready for whatever change may come up."

5. Get out and lead. "Executives are expected to set the vision and reassure people of the path the company is on," says Dixson. "This is not the time to go in your office and shut the door. Show decisiveness, strength and integrity. Show that you're combating the rumor mill."

ABC: An Introduction to Business Continuity and Disaster Recovery Planning

Written on 11:22 AM by Right Click IT - Technology Services

Disaster recovery and business continuity planning are processes that help organizations prepare for disruptive events—whether a hurricane or simply a power outage caused by a backhoe in the parking lot. Management's involvement in this process can range from overseeing the plan, to providing input and support, to putting the plan into action during an emergency. This primer (compiled from articles in CSO magazine) explains the basic concepts of business continuity planning and also directs you to more CSO magazine resources on the topic.

  • What’s the difference between disaster recovery and business continuity planning?
  • What does a disaster recovery and business continuity plan include?
  • How do I get started?
  • Is it really necessary to disrupt business by testing the plan?
  • What kinds of things have companies discovered when testing a plan?
  • What are the top mistakes that companies make in disaster recovery?
  • I still have a binder with our Y2K plan. Will that work?
  • Can we outsource our contingency measures?
  • How can I sell this business continuity planning to other executives?
  • How do I make sure the plans aren’t overkill for my company?

Q: "Disaster recovery" seems pretty self-explanatory. Is there any difference between that and "business continuity planning"?

A: Disaster recovery is the process by which you resume business after a disruptive event. The event might be something huge, like an earthquake or the terrorist attacks on the World Trade Center, or something small, like malfunctioning software caused by a computer virus.

Given the human tendency to look on the bright side, many business executives are prone to ignoring "disaster recovery" because disaster seems an unlikely event. "Business continuity planning" suggests a more comprehensive approach to making sure you can keep making money. Often, the two terms are married under the acronym BC/DR. At any rate, DR and/or BC determines how a company will keep functioning after a disruptive event until its normal facilities are restored.

What do these plans include?

All BC/DR plans need to encompass how employees will communicate, where they will go and how they will keep doing their jobs. The details can vary greatly, depending on the size and scope of a company and the way it does business. For some businesses, issues such as supply chain logistics are most crucial and are the focus of the plan. For others, information technology may play a more pivotal role, and the BC/DR plan may have more of a focus on systems recovery. For example, the plan at one global manufacturing company would restore critical mainframes with vital data at a backup site within four to six days of a disruptive event, obtain a mobile PBX unit with 3,000 telephones within two days, recover the company's 1,000-plus LANs in order of business need, and set up a temporary call center for 100 agents at a nearby training facility.

But the critical point is that neither element can be ignored, and physical, IT and human resources plans cannot be developed in isolation from each other. At its heart, BC/DR is about constant communication. Business leaders and IT leaders should work together to determine what kind of plan is necessary and which systems and business units are most crucial to the company. Together, they should decide which people are responsible for declaring a disruptive event and mitigating its effects. Most importantly, the plan should establish a process for locating and communicating with employees after such an event. In a catastrophic event (Hurricane Katrina being a recent example), the plan will also need to take into account that many of those employees will have more pressing concerns than getting back to work.

Where do I start?

A good first step is a business impact analysis (BIA). This will identify the business's most crucial systems and processes and the effect an outage would have on the business. The greater the potential impact, the more money a company should spend to restore a system or process quickly. For instance, a stock trading company may decide to pay for completely redundant IT systems that would allow it to immediately start processing trades at another location. On the other hand, a manufacturing company may decide that it can wait 24 hours to resume shipping. A BIA will help companies set a restoration sequence to determine which parts of the business should be restored first.
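As a toy illustration of how a BIA can drive a restoration sequence (all names and figures here are invented), rank systems by how little outage they can tolerate and how much an hour of downtime costs:

```python
# A toy business impact analysis: rank systems so the restoration sequence
# (and the spending) follows business impact. Figures are invented.
systems = [
    {"name": "trading platform", "cost_per_hour": 250_000, "max_outage_hours": 0.25},
    {"name": "email",            "cost_per_hour": 5_000,   "max_outage_hours": 8},
    {"name": "shipping",         "cost_per_hour": 20_000,  "max_outage_hours": 24},
]

# Tighter outage tolerance and higher hourly cost both push a system
# toward the front of the restoration queue.
restore_order = sorted(systems, key=lambda s: (s["max_outage_hours"], -s["cost_per_hour"]))
for rank, s in enumerate(restore_order, start=1):
    print(rank, s["name"], f"${s['cost_per_hour']:,}/hour")
```

The stock-trading versus manufacturing example above falls out naturally: the trading platform justifies fully redundant systems, while shipping can wait a day.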

Here are 10 absolute basics your plan should cover:

1. Develop and practice a contingency plan that includes a succession plan for your CEO.
2. Train backup employees to perform emergency tasks. The employees you count on to lead in an emergency will not always be available.
3. Determine offsite crisis meeting places for top executives.
4. Make sure that all employees, as well as executives, are involved in the exercises so that they get practice in responding to an emergency.
5. Make exercises realistic enough to tap into employees' emotions so that you can see how they'll react when the situation gets stressful.
6. Practice crisis communication with employees, customers and the outside world.
7. Invest in an alternate means of communication in case the phone networks go down.
8. Form partnerships with local emergency response groups (firefighters, police and EMTs) to establish a good working relationship. Let them become familiar with your company and site.
9. Evaluate your company's performance during each test, and work toward constant improvement. Continuity exercises should reveal weaknesses.
10. Test your continuity plan regularly to reveal and accommodate changes. Technology, personnel and facilities are in a constant state of flux at any company.

Hold it. Actual live-action tests would themselves be the "disruptive events." If I get enough people involved in writing and examining our plans, won't that be sufficient?

Let us give you an example of a company that decided tabletops and paper simulations aren't enough, and whose experience suggests it was right.

When CIO Steve Yates joined USAA, a financial services company, business continuity exercises existed only on paper. Every year or so, top-level staffers would gather in a conference room to role-play; they would spend a day examining different scenarios, talking them out and discussing how they thought the procedures should be defined and how they thought people would respond to them.

Live exercises were confined to the company's technology assets. USAA would conduct periodic data recovery tests of different business units, like taking a piece of the life insurance department and recovering it from backup data.

Yates wondered if such passive exercises reflected reality. He also wondered if USAA's employees would really know how to follow such a plan in a real emergency. When Sept. 11 came along, Yates realized that the company had to do more. "Sept. 11 forced us to raise the bar on ourselves," says Yates.

Yates engaged outside consultants who suggested that the company build a second data center in the area as a backup. After weighing the costs and benefits of such a project, USAA initially concluded that it would be more efficient to rent space on the East Coast. But after the attack on the World Trade Center and Pentagon, when air traffic came to a halt, Yates knew it was foolhardy to have a data center so far away. Ironically, USAA was set to sign the lease contract the week of Sept. 11.

Instead, USAA built a center in Texas, only 200 miles from its offices: close enough to drive to, but far enough away to pull power from a different grid and water from a different source. The company has also made plans to deploy critical employees to other office locations around the country.

Yates made site visits to companies such as FedEx, First Union, Merrill Lynch and Wachovia to hear about their approach to contingency planning. USAA also consulted with PR firm Fleishman-Hillard about how USAA, in a crisis situation, could communicate most effectively with its customers and employees.

Finally, Yates put together a series of large-scale business continuity exercises designed to test the performance of individual business units and the company at large in the event of wide-scale business disruption. When the company simulated a loss of the primary data center for its federal savings bank unit, Yates found that it was able to recover the systems, applications and all 19 of the third-party vendor connections. USAA also ran similar exercises with other business units.

For the main event, however, Yates wanted to test more than the company's technology procedures; he wanted to incorporate the most unpredictable element in any contingency planning exercise: the people.

USAA ultimately found that employees who walked through the simulation were in a position to observe flaws in the plans and offer suggestions. Furthermore, those who practice for emergency situations are less likely to panic and more likely to remember the plan.

Can you give me some examples of things companies have discovered through testing?

Some companies have discovered that while they back up their servers or data centers, they've overlooked backup plans for laptops. Many businesses fail to realize the importance of data stored locally on laptops. Because of their mobile nature, laptops can easily be lost or damaged. It doesn't take a catastrophic event to disrupt business if employees are carting critical or irreplaceable data around on laptops.

One company reports that it is looking into buying MREs (meals ready-to-eat) from the company that sells them to the military. MREs have a long shelf life, and they don't take up much space. If employees are stuck at your facility for a long time, this could prove a worthwhile investment.

Mike Hager, former head of information security and disaster recovery for OppenheimerFunds, says 9/11 brought issues like these to light. Many companies, he said, were able to recover data but had no plans for alternative workplaces. The World Trade Center had provided more than 20 million square feet of office space, and after Sept. 11 only 10 million square feet of office space was available in Manhattan. The issue of where employees go immediately after a disaster and where they will be housed during recovery should be addressed before something happens, not after.

USAA discovered that while it had designated a nearby relocation area, the setup process for computers and phones took nearly two hours. During that time, employees were left standing outside in the hot Texas sun. Seeing the plan in action raised several questions that hadn't been fully addressed before: Was there a safer place to put those employees in the interim? How should USAA determine if or when employees could be allowed back in the building? How would thousands of people access their vehicles if their car keys were still sitting on their desks? And was there an alternate transportation plan if the company needed to send employees home?

What are the top mistakes that companies make in disaster recovery?

Hager and other experts note the following pitfalls:

1. Inadequate planning: Have you identified all critical systems, and do you have detailed plans to recover them to the current day? Everybody thinks they know what they have on their networks, but most people don't really know how many servers they have, how they're configured, or what applications reside on them: what services were running, and what versions of software or operating systems they were using. Asset management tools claim to do the trick here, but they often fail to capture important details about software revisions and so on.

2. Failure to bring the business into the planning and testing of your recovery efforts.

3. Failure to gain support from senior-level managers. The largest problems here are:

1. Not demonstrating the level of effort required for full recovery.
2. Not conducting a business impact analysis and addressing all gaps in your recovery model.
3. Not building adequate recovery plans that outline your recovery time objectives, critical systems and applications, vital documents needed by the business, and the operational activities that must continue after a disaster.
4. Not having proper funding that will allow for a minimum of semiannual testing.

I still have a binder with our Y2K contingency plan. Will that work?

Absolutely not (unless your computers, employees and business priorities are exactly the same as they were in 1999). Plus, most Y2K plans cover only computer system-based failure. Potential physical calamities like blackouts, natural disasters or terrorist events bring additional issues to the table.

Can we outsource our contingency measures?

Disaster recovery services (offsite data storage, mobile phone units, remote workstations and the like) are often outsourced, simply because it makes more sense than purchasing extra equipment or space that may never be used. In the days after the Sept. 11 attacks, disaster recovery vendors restored systems and provided temporary office space, complete with telephones and Internet access, for dozens of displaced companies.

What advice would you give to security executives who need to convince their CEO or board of the need for disaster recovery plans and capabilities? What arguments are most effective with an executive audience?

Hager advises chief security officers to address the need for disaster recovery through analysis and documentation of the potential financial losses. Work with your legal and financial departments to document the total losses per day that your company would face if you were not capable of quick recovery. By thoroughly reviewing your business continuance and disaster recovery plans, you can identify the gaps that would prevent a successful recovery. Remember: Disaster recovery and business continuance are nothing more than risk avoidance. Senior managers understand more clearly when you can demonstrate how much risk they are taking.

Hager also says that smaller companies have more (and cheaper) options for disaster recovery than bigger ones. For example, the data can be taken home at night. That's certainly a low-cost way to do offsite backup.

Some of this sounds like overkill for my company. Isn't it a bit much?

The elaborate machinations that USAA goes through in developing and testing its contingency plans might strike the average CSO (or CEO, anyway) as being over the top. And for some businesses, that's absolutely true. After all, HazMat training and an evacuation plan for 20,000 employees is not a necessity for every company.

Like many security issues, continuity planning comes down to basic risk management: How much risk can your company tolerate, and how much is it willing to spend to mitigate various risks?

In planning for the unexpected, companies have to weigh the risk versus the cost of creating such a contingency plan. That's a trade-off that Pete Hugdahl, USAA's assistant vice president of security, frequently confronts. "It gets really difficult when the cost factor comes into play," he says. "Are we going to spend $100,000 to fence in the property? How do we know if it's worth it?"

And, make no mistake, there is no absolute answer. Whether you spend the money or accept the risk is an executive decision, and it should be an informed decision. Half-hearted disaster recovery planning (in light of the 2005 hurricane season, 9/11, the Northeast blackout of 2003, and so on) is a failure to perform due diligence.

This document was compiled from articles published in CSO and CIO magazines. Contributing writers include Scott Berinato, Kathleen Carr, Daintry Duffy, Michael Goldberg, and Sarah Scalet. Send feedback to CSO Executive Editor Derek Slater at dslater@cxo.com.

Five Tips: Make Virtualization Work Better Across the WAN

Written on 10:27 AM by Right Click IT - Technology Services

– Jeff Aaron, VP Marketing, Silver Peak Systems, CIO November 18, 2008

IT departments can reap enormous benefits from virtualizing applications and implementing Virtual Desktop Infrastructures (VDI). However, the management and cost savings of virtualization can be lost if performance is so bad that it hampers productivity, as can happen when virtual applications and desktops are delivered across a Wide Area Network (WAN).

For an in-depth look at a WAN revamp, see CIO.com's related article, "How to Make Your WAN a Fast Lane: One Company's Story."

How can enterprises overcome poor performance to reap the rewards of virtualization?

Jeff Aaron, VP of marketing at Silver Peak Systems, suggests these five tips.

1. Understand The Network Issues
For starters, it makes sense to understand why your virtualized applications and virtual desktops perform poorly across the WAN. It's typically not due to the application or VDI components, but due to the network. More specifically, virtualized environments are sensitive to the following WAN characteristics:

Latency: the time it takes for data to travel from one location to another.

Packet loss: when packets get dropped or delivered out of order due to network congestion, they must be re-transmitted across the WAN. This can turn a 200-millisecond roundtrip into one second. To end users, the virtual application or desktop seems unresponsive while packets are being re-transmitted, so they start re-hitting keys on their client machines, which compounds the problem.

Bandwidth: WAN bandwidth may or may not be an issue depending on the type of traffic being sent. While most virtualized applications are fairly efficient when it comes to bandwidth consumption, some activities (such as file transfers and print jobs) consume significant bandwidth, which can present a performance challenge.
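A back-of-envelope way to see why latency and loss, not just raw link bandwidth, cap performance is the well-known Mathis approximation for sustained TCP throughput, roughly MSS / (RTT · sqrt(loss)). A quick sketch, with a typical 1460-byte segment size assumed:

```python
import math

def tcp_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    # Mathis et al. approximation: throughput <= MSS / (RTT * sqrt(p)).
    bytes_per_sec = mss_bytes / ((rtt_ms / 1000.0) * math.sqrt(loss_rate))
    return bytes_per_sec * 8 / 1_000_000

# Same 1% loss, growing latency: the achievable rate collapses with distance,
# no matter how fat the pipe is.
for rtt in (10, 50, 100):
    print(rtt, "ms ->", round(tcp_throughput_mbps(1460, rtt, 0.01), 1), "Mbit/s")
```

At 100 ms and 1% loss a single TCP flow tops out near 1 Mbit/s, which is why the techniques in the next section attack latency and loss directly.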

2. Examine WAN Optimization Techniques
WAN optimization devices can be deployed on both ends of a WAN link to improve the performance of all enterprise applications. The following WAN optimization techniques are used by these devices to improve the performance of virtual applications and desktops:

Latency can be overcome by mitigating the "chattiness" of TCP, the transport protocol used by virtual applications for communication across the WAN. More specifically, WAN optimization devices can be configured to send more data within specific windows and to minimize the number of back-and-forth acknowledgements required prior to sending data. This improves the responsiveness of keystrokes in a virtual environment.

Loss can be mitigated by rebuilding dropped packets on the far end of a WAN link, and re-sequencing packets that are delivered out of order in real-time. This eliminates the need to re-transmit packets every time they are dropped or delivered out-of-order. By avoiding re-transmissions, virtual applications and desktops appear much more responsive across the WAN.

Bandwidth can be reduced using WAN deduplication. By monitoring all data sent across the WAN, repetitive information can be detected and delivered locally rather than resent across the network. This significantly improves bandwidth utilization in some (but not all) virtualized environments.
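To make the idea concrete, here's a toy deduplication sketch (not any vendor's actual implementation): the stream is split into fixed-size chunks, each chunk is fingerprinted, and any chunk seen before is replaced on the wire with a short reference:

```python
import hashlib

CHUNK = 64  # fixed-size chunking; real products use smarter boundaries

def dedup(stream, seen):
    """Replace previously seen chunks with short fingerprints."""
    wire = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()[:8]
        if digest in seen:
            wire.append(("ref", digest))      # tiny reference instead of data
        else:
            seen[digest] = chunk
            wire.append(("raw", chunk))
    return wire

seen = {}
payload = b"monday report " * 40              # highly repetitive traffic
first = dedup(payload, seen)                  # first transfer: mostly raw chunks
second = dedup(payload, seen)                 # resend: every chunk is a reference
print(sum(1 for kind, _ in second if kind == "ref"), "of", len(second), "chunks deduplicated")
```

The "some (but not all)" caveat above shows up here too: already-compressed or encrypted traffic produces few repeated chunks, so dedup buys little.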

3. Set Application Priorities
The average enterprise has more than 80 applications that are accessed across the WAN. That means that critical applications, including terminal services and VDI, are vying for the same resources as less important traffic, such as Internet browsing. Because virtual applications and desktops are sensitive to latency, it often makes sense to prioritize this traffic over other applications using Quality of Service (QoS) techniques. In addition, QoS can guarantee bandwidth for VDI and virtual applications.
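A strict-priority queue captures the spirit of the QoS scheme described here. In this toy sketch (the class names, traffic labels and priority values are invented), latency-sensitive VDI and terminal traffic always drains before bulk web traffic, regardless of arrival order:

```python
import heapq

PRIORITY = {"vdi": 0, "terminal": 0, "web": 2}   # lower number = higher priority

class QosQueue:
    def __init__(self):
        self.heap = []
        self.seq = 0                  # tie-breaker keeps FIFO order per class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self.heap, (PRIORITY[traffic_class], self.seq, packet))
        self.seq += 1

    def dequeue(self):
        return heapq.heappop(self.heap)[2]

q = QosQueue()
q.enqueue("web", "cat-video segment")
q.enqueue("vdi", "desktop keystroke")
q.enqueue("web", "banner ad")
q.enqueue("terminal", "screen update")
print([q.dequeue() for _ in range(4)])
# latency-sensitive traffic first: keystroke and screen update, then the web traffic
```

Real QoS also adds bandwidth guarantees so low-priority classes aren't starved outright, which pure strict priority (as above) does not provide.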

4. Compress and Encrypt in the Right Place
Oftentimes, host machines compress information prior to transmission. This is meant to improve bandwidth utilization in a virtual environment. However, compression obscures the actual data, which makes it difficult for downstream WAN optimization devices to provide their full value. Therefore, it may be better to turn off compression in the virtual host (if possible) and instead enable it in the WAN optimization device.

Moving compression into the WAN optimization device has another added benefit: it frees up CPU cycles within the host machine. This can lead to better performance and scalability throughout a virtual environment.

IT staff should also consider where encryption takes place in a virtual infrastructure, since encryption also consumes CPU cycles in the host.

5. Go With the Flows
Network scalability can have an important impact on the performance of virtual applications and VDI. The average thin client machine has 10 to 15 TCP flows open at any given time. If thousands of clients are accessing host machines in the same centralized facility, that location must be equipped to handle tens of thousands of simultaneous sessions.

When it comes to supporting large numbers of flows, there are two "best practice" recommendations. First, as discussed above, it is recommended that compression and encryption be moved off the host machine to free up CPU cycles. Second, make sure your WAN acceleration device supports the right amount of flows for your environment. The last thing you want to do is create an artificial bottleneck within the very devices deployed to remove your WAN's bottlenecks.

8 Reasons Tech Will Survive the Economic Recession

Written on 10:05 AM by Right Click IT - Technology Services

– Carolyn Duffy Marsan, Network World November 13, 2008

The global economy is in as bad shape as we've ever seen. In the last two months, U.S. consumers have stopped spending money on discretionary items, including electronic gear, prompting this week's bankruptcy filing by Circuit City. Retailers are worried that Black Friday will indeed be black, as holiday shoppers cut back on spending and choose lower-priced cell phones and notebook computers.

Yet despite all of the bailouts and layoffs, most IT industry experts are predicting that sales of computer hardware, software and services will be growing at a healthy clip again within 18 months.

Here's a synopsis of what experts are saying about the short- and long-term prognosis for the tech industry:

1. The global IT market is still growing, although barely.

IDC this week recast its projections for global IT spending in 2009, forecasting that the market will grow 2.6 percent next year instead of the 5.9 percent predicted prior to the financial crisis. In the United States, IT spending will eke out 0.9 percent growth.

IDC predicts the slowest IT markets will be the United States, Japan and Western Europe, which all will experience around 1 percent growth. The healthiest economies will be in Central and Eastern Europe, the Middle East, Africa and Latin America.

Similarly, Gartner's worst-case scenario for 2009 is that IT spending will increase 2.3 percent, according to a report released in mid-October. Gartner said the U.S. tech industry will be flat. Hardest hit will be Europe, where IT expenditures are expected to shrink in 2009.

Overall, Gartner said global IT spending will reach $3.8 trillion in 2008, up from $3.15 trillion in 2007.

"We expect a gradual recovery throughout 2010, and by 2011 we should be back into a more normal kind of environment," said IDC Analyst Stephen Minton. If the recession turns out to be deeper or last longer than four quarters as most economists expect, "it could turn into a contraction in IT spending," Minton added. "In that case, the IT market would still be weak in 2010 but we'd see a gradual recovery in 2011, and we'd be back to normal by 2012."

2. It's not as bad as 2001.

Even the grimmest predictions for global IT spending during the next two years aren't as severe as the declines the tech industry experienced between 2001 and 2003.

"Global economic problems are impacting IT budgets, however the IT industry will not see the dramatic reductions that were seen during the dot.com bust. . . . At that time, budgets were slashed from mid-double-digit growth to low-single-digit growth," Gartner said in a statement.

Gartner said the reason IT won't suffer as badly in 2009 as it did during the 2001 recession is that "organizations now view IT as a way to transform their businesses and adopt operating models that are leaner. . . . IT is embedded in running all aspects of the business."

IDC's Minton said that in 2001 many companies had unused data center capacity, excess network bandwidth and software applications that weren't integrated in a way that could drive productivity.

"This time around, none of that is true," Minton said. "Today, there isn't a glut of bandwidth. There is high utilization of software applications, which are purchased in a more modular way and integrated much faster into business operations. Unlike in 2001, companies aren't waking up to find that they should be cutting back on IT spending. They're only cutting back on new initiatives because of economic conditions."

"We're anxious about whether the economy will resemble what the most pessimistic economists are saying or the more mainstream economists," Minton said. "But we don't see any reason that it will turn into a disaster like 2001. It shouldn’t get anywhere near that bad."

3. Consumers won't give up their cell phones.

They may lose their jobs and even their homes, but consumers seem unwilling to disconnect their cell phones.

"I would sleep in my car before I would give up my mobile phone," says Yankee Group Analyst Carl Howe. "Consumers buy services like broadband and mobile phones, and even if they lose their jobs they need these services more than ever."

Yankee Group says the demand for network-based services—what it dubs "The Anywhere Economy"—will overcome the short-term obstacles posed by the global financial crisis and will be back on track for significant growth by 2012.

Yankee Group predicts continued strong sales for basic mobile phone services at the low end, as well as high-end services such as Apple iPhones and BlackBerry Storms. Where the mobile market will get squeezed is in the middle, where many vendors have similar feature sets. One advantage for mobile carriers: they have two-year contracts locked in.

Telecom services "are not quite on the level of food, shelter and clothing, but increasingly they satisfy a deep personal need," Howe says. "When bad things happen to us, we want to talk about it. And in today's world, that's increasingly done electronically."

4. Notebook computers are still hot.

Worldwide demand for notebooks—particularly the sub-$500 models—has been strong all year. But that may change in the fourth quarter given Intel's latest warnings about flagging demand for its processors.

Both IDC and Gartner reported that PC shipments grew 15 percent in the third quarter of 2008, driven primarily by sales of low-cost notebook computers. Altogether, more than 80 million PCs were shipped during the third quarter of 2008, which was down from estimates earlier in the year but still represents healthy growth.

IDC said notebook sales topped desktop sales—55 percent to 45 percent—for the first time ever during the third quarter of 2008. This is a trend that will help prop up popular notebook vendors such as Hewlett-Packard, Dell and Apple. Apple, for example, saw its Mac shipments rise 32 percent in the third quarter of 2008, powered primarily by its notebooks.

The big unknown is what will happen to notebook sales during the holiday season. Analysts have noted sluggishness in U.S. corporate PC sales this fall as well as home sales, where most demand is for ultra-low-priced notebooks.

"The impact will come this quarter. People will be looking for cheaper products. . . . They will not be spending as much as they did a year ago," IDC's Minton said.

Intel said yesterday that it was seeing significantly weaker demand across its entire product line and dropped its revenue forecast for the fourth quarter by $1 billion.

The brunt of the slowdown in IT spending will hit servers and PCs, predicts Forrester Research analyst Andrew Bartels. Forrester is adjusting its IT spending forecast for 2009 downward, and plans to release new numbers after Thanksgiving, he adds.

"PCs and servers may see declines similar to 2001, but we're not going to be seeing that across the whole tech industry," Bartels says. "Software is a bright spot. Much of software spending comes in the form of maintenance and subscriptions. The licensing part may go down, but that's only a quarter of total software revenues."

5. Telecom carriers are in better shape than they were in 2001.

The biggest U.S. carriers, including AT&T and Verizon, are in much better shape going into this recession than they were during the dot-com bust. So while consumer spending will fall in 2009, it is expected to have less of an impact on the telecom sector than it did after 2001.

Yankee Group says the financial crisis will not significantly impact network build-outs by carriers because most of the financing for 3G, FiOS, WiMAX and other next-generation networks is already in place.

"These are multibillion-dollar build-outs, and most of the financing has been arranged months if not years in advance," Yankee Group's Howe says. "We were projecting that in 2009 carriers would spend over $70 billion on these network build-outs in the U.S. Now we're saying that there will be $2 billion or $3 billion less in spending. . . . We're talking single-digit percentage declines, not wholesale cuts."

This doesn't mean that the network industry will emerge from the chaos unscathed. Carriers will squeeze their equipment providers, and companies like Cisco are already feeling the pinch. When Cisco announced its latest earnings last week, CEO John Chambers reported the company had seen its sales shift from solid single-digit growth in August to a 9 percent decline in October.

Forrester says computer and communications equipment vendors will bear the brunt of IT cost-cutting from enterprise customers.

6. Corporate data storage needs keep rising during recessions.

Every segment of the IT market is weaker today than it was six months ago. But some segments are less weak than others, and one of the healthiest is storage.

“Storage is relatively stable because of the fact that companies are using a lot more of their storage capacity and they are still dealing with an increasing amount of data that requires storage on a weekly basis. That’s not going to change,” IDC’s Minton said. “It’s not just the hardware, but the storage software that will be relative bright spots in the years ahead.”

One storage industry bellwether is EMC, which continued to demonstrate strong demand for storage hardware and software in its recent quarterly results. EMC’s revenue grew 13 percent in the third quarter of 2008 compared to a year ago. Unlike many other network industry leaders, EMC is projecting continued revenue gains in the fourth quarter of 2008.

Similarly, this week Brocade issued a preliminary release indicating strong sales for the quarter ending in October. CEO Michael Klayko said the company will outperform its sales projections from August 2008.

“Storage needs are on the rise, and storage investments will continue,” Forrester’s Bartels says. “We don’t see cloud storage as having a meaningful impact yet.”

7. New IT markets will continue to emerge, although more slowly.

Emerging markets such as China and Latin America are slowing down, but they are still expected to post IT sales increases in 2009. The Latin American market, in particular, is a solid one for IT companies such as IBM, HP and Microsoft, which have a strong foothold south of the border.

“In the past two to three years, Latin America has had some of the fastest growth rates in IT spending,” IDC’s Minton said. “Brazil is the biggest market, and it has been growing at double digits. But all of the markets in Latin America have been growing by more than 10 percent a year. With some exceptions, the economies there are relatively stable and have had less political turmoil than in the past. . . . This is one of the regions that we think will bounce back pretty quickly.”

Other emerging markets that will continue to post growth in IT spending in 2009 are Central and Eastern Europe, the Middle East and Africa, IDC predicts. While these markets won’t experience double-digit gains next year, they will help offset sharp declines in IT purchasing in the United States, Japan and Western Europe.

Forrester warns that IT vendors shouldn’t count on so-called BRIC countries—Brazil, Russia, India and China— to bail them out of the financial crisis altogether.

“The BRIC markets are performing better than the industrial markets, but they are also slowing down,” Forrester’s Bartels says. “Among those markets, China looks to be the strongest, then Brazil and Mexico. Russia is weakening, and India is weakening. They’re not going to go into a contraction, but the growth rates could slow to the point that they feel like a contraction.”

One issue for IT vendors is the rising strength of the U.S. dollar, which means U.S. tech vendors will bring home fewer dollars from their foreign sales when they convert currencies.

“The dollar has been strengthening against every currency except the Chinese currency,” Bartels says. “Even if a vendor is successful in sales in Brazil or Russia, they will bring back fewer dollars, which was not the case six months ago.”

8. Outsourcing helps companies stretch their IT budgets.

Many companies will freeze new IT initiatives for the next three to six months as they absorb the Wall Street crash. But one segment that’s likely to continue is IT outsourcing because it provides near-term cost reductions.

“While IT outsourcing will benefit from an economic slowdown in 2008 as companies turn to IT outsourcing vendors to help cut costs, trends toward use of lower-cost offshore resources and smaller-scale outsourcing deals will keep growth modest,” says Forrester Research.

Forrester predicts IT outsourcing will grow around 5 percent in 2009 and 2010.

“When you sign an outsourcing agreement, you’re locked into it barring going out of business,” Forrester’s Bartels says. “Outsourcing revenues are not going to be variable.”

On the horizon is cloud computing, which also holds the promise of reducing corporate IT overhead but requires more up-front spending than outsourcing.

“Over the longer term, we’re pretty bullish about cloud computing,” IDC’s Minton said. “But there will be a lot of hurdles for a bigger corporation. It’s difficult for them psychologically to give up control, and there are quite a lot of up-front costs to engage consultants, to roll out applications to a large number of employees, and there’s training involved. But ultimately these projects save money.”

Working Virtually: You can easily provide secure remote access to employees

Written on 1:26 PM by Right Click IT - Technology Services

Definition from Whatis.com

Remote access is the ability to get access to a computer or a network from a remote distance. In corporations, people at branch offices, telecommuters, and people who are traveling may need access to the corporation's network. Access can be provided through an ISP connection, ISDN, DSL, or wireless mobile methods.

Right Click Can Set up Remote Access for Your Team!

The playing field is beginning to level: the technology that links remote offices to headquarters is no longer just for big companies. Small and medium-sized businesses need their remote sites dialed in just like their main office. At Right Click we can work with your existing Internet connections and set up hardware VPNs for remote offices and power home users. This allows remote users to securely access files, email and programs when they are outside the main office, and it minimizes the need for emailing files and trying to keep track of versions.

If you need to give your remote users access to a large application, Right Click's experts can set up cost-effective Terminal Services and Citrix servers for this task. We will work with you to determine the best solution for your team, architect the system to fit your budget and make sure that it performs to your standards.

How can you make sure your Network Security is compliant and meets today's requirements?

Written on 11:59 AM by Right Click IT - Technology Services

Definition provided by Wikipedia

Network security consists of the provisions made in an underlying computer network infrastructure, the policies adopted by the network administrator to protect the network and the network-accessible resources from unauthorized access, and the consistent and continuous monitoring and measurement of its effectiveness (or lack thereof).

In today’s world, where your most valuable data can be carried away on a device the size of a key, ensuring you have proper network security is critical.

With a USB key drive holding an average of 8 gigabytes and an iPod holding 80 GB of data, nearly all proprietary or sensitive information can be copied from your network in a matter of minutes. Additionally, there is the ever-present threat of viruses, spyware and hackers to further impact the network.
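The arithmetic behind "a matter of minutes" is easy to sketch. The ~30 MB/s sustained write speed below is an assumed, typical USB 2.0 figure, not a measured one:

```python
# Rough estimate of how quickly a full 8 GB USB key could be filled.
# The sustained write speed is an assumed, typical USB 2.0 figure.

drive_capacity_mb = 8 * 1024   # 8 GB key, in megabytes
write_speed_mbps = 30          # assumed sustained write speed, MB/s

seconds = drive_capacity_mb / write_speed_mbps
print(f"Time to fill the key: {seconds / 60:.1f} minutes")  # prints about 4.6 minutes
```

Even at half that speed, a complete copy of a sensitive file share still finishes in under ten minutes, which is the point the paragraph above is making.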

At Right Click we look at your network and come up with a plan to ensure network security cost-effectively. Our goal is to give your company effective security without onerous management overhead, while allowing your employees to work with ease.

Many of our clients operate in regulated industries such as health care and financial services, and Right Click is positioned to ensure that their networks comply with all HIPAA and SOX regulations.

Let us come out and do a network security survey to proactively ensure that all network security requirements are met or exceeded. Give us a call or email us at Right Click.

The Benefits of Outsourcing for Small Businesses

Written on 5:50 PM by Right Click IT - Technology Services

From AllBusiness.com

Outsourcing, the practice of using outside firms to handle work normally performed within a company, is a familiar concept to many entrepreneurs. Small companies routinely outsource their payroll processing, accounting, distribution, and many other important functions — often because they have no other choice. Many large companies turn to outsourcing to cut costs. In response, entire industries have evolved to serve companies' outsourcing needs.

Wise outsourcing, however, can provide a number of long-term benefits:

Control capital costs. Cost-cutting may not be the only reason to outsource, but it's certainly a major factor. Outsourcing converts fixed costs into variable costs, releases capital for investment elsewhere in your business, and allows you to avoid large expenditures in the early stages of your business. Outsourcing can also make your firm more attractive to investors, since you're able to pump more capital directly into revenue-producing activities.

Increase efficiency. Companies that do everything themselves have much higher research, development, marketing, and distribution expenses, all of which must be passed on to customers. An outside provider's cost structure and economy of scale can give your firm an important competitive advantage.

Reduce labor costs. Hiring and training staff for short-term or peripheral projects can be very expensive, and temporary employees don't always live up to your expectations. Outsourcing lets you focus your human resources where you need them most.

Start new projects quickly. A good outsourcing firm has the resources to start a project right away. Handling the same project in-house might involve taking weeks or months to hire the right people, train them, and provide the support they need. And if a project requires major capital investments (such as building a series of distribution centers), the startup process can be even more difficult.

Focus on your core business. Every business has limited resources, and every manager has limited time and attention. Outsourcing can help your business to shift its focus from peripheral activities toward work that serves the customer, and it can help managers set their priorities more clearly.

Level the playing field. Most small firms simply can't afford to match the in-house support services that larger companies maintain. Outsourcing can help small firms act "big" by giving them access to the same economies of scale, efficiency, and expertise that large companies enjoy.

Reduce risk. Every business investment carries a certain amount of risk. Markets, competition, government regulations, financial conditions, and technologies all change very quickly. Outsourcing providers assume and manage this risk for you, and they generally are much better at deciding how to avoid risk in their areas of expertise.

Let Right Click help augment your IT services needs with the "right" size IT staff, available to fit your project needs.

What is Data Recovery and Computer Forensics?

Written on 11:43 AM by Right Click IT - Technology Services

Definitions from Wikipedia

Data recovery is the process of salvaging data from damaged, failed, corrupted, or inaccessible secondary storage media when it cannot be accessed normally.

Often the data are being salvaged from storage media formats such as hard disk drives, storage tapes, CDs, DVDs, RAID, and other electronics. Recovery may be required due to physical damage to the storage device or logical damage to the file system that prevents it from being mounted by the host operating system.

If you are missing information on your server or desktop, or your hard drive crashed and you did not have a backup, Right Click’s data recovery services can help you get your data back quickly and efficiently. We have performed a number of data recoveries, saving clients thousands of dollars in the lost time and productivity that recreating the files would have required.

Data recovery can also be the process of retrieving and securing deleted information from a storage media for forensic purposes or spying.

Do you think an employee or partner is doing something that may not be in accordance with company policies? Right Click is expert at examining computers and ensuring that you get an answer that has been thoroughly researched and documented.

Right Click's Jim Harrington is an EnCase Certified Examiner who has testified in court and has extensive experience handling forensics jobs large and small.

Computer forensics is a branch of forensic science pertaining to legal evidence found in computers and digital storage mediums. Computer forensics is also known as digital forensics.

The goal of computer forensics is to explain the current state of a digital artifact. The term digital artifact can include a computer system, a storage media (such as a hard disk or CD-ROM), an electronic document (e.g. an email message or JPEG image) or even a sequence of packets moving over a computer network. The explanation can be as straightforward as "what information is here?" and as detailed as "what is the sequence of events responsible for the present situation?"

The field of computer forensics also has sub-branches, such as firewall forensics, database forensics and mobile device forensics.

There are many reasons to employ the techniques of computer forensics:

  • In legal cases, computer forensic techniques are frequently used to analyze computer systems belonging to defendants (in criminal cases) or litigants (in civil cases).
  • To recover data in the event of a hardware or software failure.
  • To analyze a computer system after a break-in, for example, to determine how the attacker gained access and what the attacker did.
  • To gather evidence against an employee that an organization wishes to terminate.
  • To gain information about how computer systems work for the purpose of debugging, performance optimization, or reverse-engineering.

Special measures should be taken when conducting a forensic investigation if the results are to be used in a court of law. One of the most important measures is to assure that the evidence has been accurately collected and that there is a clear chain of custody from the scene of the crime to the investigator, and ultimately to the court.
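One standard way investigators demonstrate that evidence has not changed while in custody is to hash the acquired media image at acquisition time and re-verify that hash before presenting it. A minimal illustrative sketch using Python's standard `hashlib` module follows; the file name and contents are hypothetical stand-ins for a real bit-for-bit disk image:

```python
import hashlib

def evidence_hash(path, algorithm="sha256", chunk_size=1 << 20):
    """Hash a disk image in fixed-size chunks so large images
    don't have to be loaded into memory all at once."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# A tiny stand-in for an acquired disk image (a real image is a
# bit-for-bit copy of the original media).
with open("evidence.img", "wb") as f:
    f.write(b"raw sector data...")

acquired = evidence_hash("evidence.img")          # recorded at acquisition
assert evidence_hash("evidence.img") == acquired  # re-verified later
```

If the later hash differs from the recorded one, the image has been altered and its evidentiary value is compromised; matching hashes at each hand-off are what make the chain of custody demonstrable rather than merely asserted.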

Right Click has the field expertise to provide you with the right solution to your Data Recovery and Computer Forensics needs. Give us a call or email us to find out how we can help!