Tough Times and Three Unequivocal Standards of IT Agility

Written on 10:48 AM by Right Click IT - Technology Services

Michael Hugos/Blog: Doing Business in Real Time

So the CEO and the CFO are telling you to cut IT expenses - tell them for the good of the company you can’t do that. Tell them you already run a lean operation and saving another 10 percent on the IT budget is small potatoes compared to using IT to save 10 percent on the operating expenses of the whole company or using IT to grow company revenue by 10 percent.

In the stunned silence that follows, as all eyes around the table turn your way to see how you are going to recover from that jaw-dropping bit of impertinence, drive home your point. Propose that instead of cutting IT, you’ll work with the CEO and the COO and the VP of Sales to create strategies to deliver those savings in company operating expenses and attain those increases in revenue. Seal your offer by publicly committing to power the resulting business strategies with systems infrastructure that meets three unequivocal standards of IT agility: 1) No cap ex; 2) Variable cost; and 3) Scalable.

Commit to the standard of no cap ex (no capital expense) because it’s the order of the day in business. Revenue and profits are under pressure and credit is harder to get, so there is less money for capital investments. And because we’re in a period of rapid technological change, big up-front investments are risky; they can leave your company holding technology that becomes obsolete a lot faster than expected. So smart IT execs learn to get systems in place without a lot of up-front cost. That means using SOA, SaaS, mashups and cloud computing to deliver new systems.

Committing to the standard of a variable cost operating model is very smart because it’s a great way to protect company cash flow. Pay-as-you-go operating models (like those the SaaS and cloud computing vendors offer) mean operating expenses rise if business volumes rise, but just as important, they drop or stay small if business volumes contract or don’t grow as big or as fast as expected: you only pay more when you’re making more, and you pay less when you’re making less. In an economy where it is so hard to predict what will happen next, and where companies need to keep trying new things to find out where new opportunities lie, variable cost business models are the best way to manage financial risk.
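
To make the cash-flow argument concrete, here is a minimal back-of-the-envelope sketch in Python comparing a fixed monthly cost (an amortized capital purchase plus support) against a pay-as-you-go service priced per unit of business volume. Every figure is a hypothetical assumption for illustration, not a number from the article.

    # Illustrative cash-flow comparison; all figures are hypothetical.
    FIXED_MONTHLY = 25_000        # assumed amortized cap-ex + support, per month
    PAYGO_PER_UNIT = 200          # assumed fee per 1,000 transactions

    scenarios = {
        "growth": [100, 120, 150, 190, 240, 300],   # monthly volume, in thousands
        "slump":  [100,  90,  70,  60,  55,  50],
    }

    for name, volumes in scenarios.items():
        fixed = FIXED_MONTHLY * len(volumes)
        paygo = sum(PAYGO_PER_UNIT * v for v in volumes)
        print(f"{name}: fixed model ${fixed:,}, pay-as-you-go ${paygo:,}")
    # In the slump scenario the variable-cost model spends far less cash,
    # which is exactly the financial-risk point being made here.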

Committing to scalable systems infrastructure enables companies to enjoy the benefits of the first two standards. A scalable systems infrastructure lets a company “think big, start small, and deliver quickly.” The CEO and COO and VP of Sales can create strategies with big potential and try them out quickly on a small scale to see if they justify further investment. Start with targeted 80 percent solutions to the most important needs, then build further features and add more capacity as business needs dictate. Companies don’t outgrow scalable systems, and they never have to rip them out and replace them.

Making such an offer to your CEO might sound pretty bold and risky, but consider this: if your plan is just to cut your IT budget and try to keep your head down, chances are excellent you won’t survive anyway. If you dumb down your IT operations and IT comes to be seen as a cost center instead of part of your company’s value proposition, your CEO and your CFO are going to quickly see that a great way to save an additional six-figure sum is to fire you. Who needs a highly paid person like you to run a mere cost center?

SLAs: How to Show IT's Value

Written on 10:13 AM by Right Click IT - Technology Services

From: www.cio.com – Bob Anderson, Computerworld December 02, 2008

Over a career in information technology spanning multiple decades, I have observed that many IT organizations have focused process improvement and measurement almost exclusively on software development projects.

This is understandable, given the business-critical nature and costs of large software development projects. But in reality, IT support services consume most of the IT budget, and they also require the most direct and continuous interaction with business customers.

IT organizations must demonstrate the value of IT support services to business customers, and a primary way of doing this is through service-level agreements. SLAs help IT show value by clearly defining the service responsibilities of the IT organization that is delivering the services and the performance expectations of the business customer receiving the service.

One of the most difficult tasks in developing an SLA is deciding what to include. The following sample SLA structure provides a good starting point.

Introduction: This identifies the service, the IT organization delivering that service and the business customer receiving it.

Examples:

  • Infrastructure support for a shipping warehouse.
  • Software application support for the payroll staff.

Description of services: This characterizes the services to be provided, the types of work to be performed and the parameters of service delivery, including the following:

  • The types of work that are part of the service (maintenance, enhancement, repair, mechanical support).
  • The time required for different types and levels of service.
  • The service contact process and detailed information for reaching the help desk or any single point of contact for support services.

Description of responsibilities: This delineates responsibilities of both the IT service provider and the customer, including shared responsibilities.

Operational parameters: These may affect service performance and therefore must be defined and monitored.

Examples:

  • Maximum number of concurrent online users.
  • Peak number of transactions per hour.
  • Maximum number of concurrent user requests.

If operational parameters expand beyond the control of the service provider, or if users of the service exceed the limits of specified operational parameters, then the SLA may need to be renegotiated.

Service-level goals: These are the performance metrics that the customer expects for specific services being delivered. SLGs are useless unless actual performance data is collected. The service being delivered will dictate the type and method of data collection.

It is important to differentiate between goals that are equipment-related and service-level goals that are people- and work-related.

Examples:

  • Equipment SLG: 99% network availability 24/7.
  • People and work SLG: critical incidents resolved within two hours.

Service-improvement goals: These establish the required degree and rate of improvement for a specific SLG over time. An SIG requires capturing the specific SLG data and calculating a performance trend over a specified period of time. This trend indicates the rate of improvement and whether the improvement goal has been achieved.
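
As a hypothetical illustration of the data an SIG depends on, the short Python sketch below computes a monthly "critical incidents resolved within two hours" percentage and a simple trend across the period. The incident records and the two-hour target are assumptions for the example, not figures from the article.

    from collections import defaultdict

    # Hypothetical incident records: (month, hours_to_resolve) for critical incidents.
    incidents = [
        ("2008-09", 1.5), ("2008-09", 3.0), ("2008-09", 1.0),
        ("2008-10", 1.8), ("2008-10", 2.5), ("2008-10", 1.2), ("2008-10", 1.9),
        ("2008-11", 1.1), ("2008-11", 1.7), ("2008-11", 1.4),
    ]
    SLG_HOURS = 2.0   # assumed service-level goal: resolve within two hours

    totals, met = defaultdict(int), defaultdict(int)
    for month, hours in incidents:
        totals[month] += 1
        met[month] += hours <= SLG_HOURS

    monthly = {m: round(100.0 * met[m] / totals[m], 1) for m in sorted(totals)}
    print(monthly)   # percent of critical incidents meeting the SLG, by month

    months = sorted(monthly)
    trend = monthly[months[-1]] - monthly[months[0]]   # crude rate-of-improvement measure
    print(f"Improvement over the period: {trend:+.1f} percentage points")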

Service-performance reporting: This states IT's commitment to delivering reports to the business customer on a scheduled basis. The reports detail actual services delivered and actual levels of performance compared to the commitments stated within the SLA.

Sign-off: Signature lines and dates for authorized representatives of the IT organization delivering the service and the business customer receiving the service.
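
One practical way to keep these sections consistent across agreements is to treat the SLA as structured data rather than free-form prose. The Python sketch below mirrors the sections described above using hypothetical field names and sample values; it is an illustration of the structure, not a template prescribed by the author.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ServiceLevelGoal:
        description: str      # e.g. "critical incidents resolved"
        target: float         # numeric target (percent, hours, etc.)
        unit: str             # how the target is measured

    @dataclass
    class ServiceLevelAgreement:
        service: str                        # Introduction: service being covered
        provider: str                       # IT organization delivering it
        customer: str                       # business customer receiving it
        services_description: List[str]     # types of work, response times, contact process
        responsibilities: List[str]         # provider, customer and shared duties
        operational_parameters: List[str]   # limits that trigger renegotiation
        goals: List[ServiceLevelGoal] = field(default_factory=list)
        improvement_goals: List[str] = field(default_factory=list)
        reporting_schedule: str = "monthly" # service-performance reporting commitment
        signed_off: bool = False            # sign-off by both parties

    sla = ServiceLevelAgreement(
        service="Software application support for the payroll staff",
        provider="IT application support team",
        customer="Payroll department",
        services_description=["maintenance", "enhancement", "repair"],
        responsibilities=["provider: staff the help desk", "customer: report incidents promptly"],
        operational_parameters=["maximum 200 concurrent online users"],
        goals=[ServiceLevelGoal("critical incidents resolved", 2.0, "hours")],
    )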

The hardest part of developing an SLA may be getting started. I hope this framework will help you begin to demonstrate IT's value to your customers.

Anderson is director of process development and quality assurance at Computer Aid Inc. Contact him at bob_anderson@compaid.com.

When "IT Alignment with the Business" Isn't a Buzzword

Written on 10:10 AM by Right Click IT - Technology Services

December 01, 2008 – Matt Heusser, CIO

IT leaders were told to "do more with less" even before economic woes exacerbated the issue. Savvy managers have always kept their eye on the goal: demonstrating what IT can do for the business, so that it's not always viewed as a cost center. Last week, one IT manager explained her strategy.

At a meeting of the Grand Rapids Association of IT Professionals (AITP), Krischa Winright, associate VP of Priority Health, a health insurance products provider, demonstrated her IT team's accomplishments over the past year. Among the lessons learned: talented development organizations can gain advantages from frugality (including developing applications using internal resources and open-source technologies); you can ferociously negotiate costs with vendors; and virtualization can save the company money and team effort. End result: an estimated 12 percent reduction in expense spending (actual dollars spent) in 2008.

I asked Krischa about what her team had done at Priority Health, and how other organizations might benefit from her approach.

CIO: First, could you describe your IT organization: its size and role?

Winright: Priority Health is a nationally recognized health insurance company based in Michigan. Our IT department has approximately 90 full-time staff, whose sole objective is to support Priority Health's mission: to provide all people access to excellent and affordable health care. What this mission implies for IT is supporting cutting-edge informatics strategies in the most efficient way possible. We staff all IT services and infrastructure functions, in addition to software development capability.

CIO: In your AITP talk, you mentioned basic prerequisites to transparency and alignment. Can you talk about those for a moment?

Winright: Prior to 2008, we put in place a Project Management Office with governance at the executive level. Our executive steering committee prioritized all resources in IT dedicated to large projects, which meant that we already were tightly, strategically aligned with the business. ROI for all new initiatives is calculated, and expenditures (IT and non-IT) are tracked.

CIO: So you put a good PMO in place to improve the organization's ability to trace costs. Then what?

Winright: Well, let's be careful. First, project costs associated with large business initiatives are only one portion of IT spending. Additionally, cutting costs is easy; you just decrease the services you offer the business.

Instead, we wanted to cut costs in ways that would enhance our business alignment, and increase (rather than decrease) the services we offer. To do that, we had to expose all of the costs in IT (PMO and non-PMO) in terms that the business could understand. In other words: business applications.

We enumerated all IT budgetary costs by application, and then bucketed them based upon whether they were (1) existing services (i.e. keeping the "true" IT lights on) or (2) new services being installed in 2008.
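
A minimal sketch of that bucketing exercise, with hypothetical applications and dollar figures (Python). The classification rule here, treating anything not installed in 2008 as "keeping the lights on," is an assumption for illustration, not Priority Health's actual method.

    # Hypothetical budget lines: (application, annual_cost, year_installed)
    budget_lines = [
        ("claims processing system", 1_200_000, 2002),
        ("member web portal",          450_000, 2008),   # new service in 2008
        ("e-mail and collaboration",   300_000, 2005),
        ("provider analytics",         600_000, 2008),   # new service in 2008
    ]

    buckets = {"existing services (lights on)": 0, "new services (2008)": 0}
    for app, cost, year in budget_lines:
        key = "new services (2008)" if year == 2008 else "existing services (lights on)"
        buckets[key] += cost

    total = sum(buckets.values())
    for bucket, cost in buckets.items():
        print(f"{bucket}: ${cost:,} ({100 * cost / total:.0f}% of IT spend)")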

We then launched a theme of "convergence" in IT, which would allow us to converge to fewer technologies/applications that offer the business the same functionality, while increasing the level of service for each offering.

CIO: So you defined the cost of keeping the "true" IT lights on. What about new projects and development?

Winright: We adopted Forrester's MOOSE model (the money spent to maintain and operate the organization, systems and equipment). We established the goals of reducing the overall cost of MOOSE ("true" IT lights on) and increasing the amount of funding for items of strategic business importance.

Using the MOOSE framework, we finally understood the true, total cost of our business applications and our complete IT portfolio. This allowed us to quickly see opportunities for convergence and execute those plans. By establishing five work queues which spanned all of IT—Operations, Support, IT Improvement, PMO, Small Projects—we learned how all 90 of our staff were spending their time. That let us make adjustments to the project list to "converge" their time to items of most imminent strategic return.

CIO: In your talk, you said an economic downturn can be a time of significant opportunity for your internal development staff.

Winright: Businesses in Michigan are acutely aware of the economic downturn. Our health plan directly supports those businesses, so we are optimizing our spending just like everyone else.

Maximum benefit must be gained for every dollar spent. Every area of the company is competing for expenditures in ways they weren't before.

Yet when budgets are cut, a business's core values dictate keeping talented people. In IT, a talented development organization can seize the opportunity of frugality and provide help across a plethora of business opportunities in an extremely cost-effective way. Developing applications using internal resources and open-source technologies has a more favorable cost profile than third-party vendor applications, with their extensive implementation costs and recurring, escalating maintenance expense. Additionally, the decline of major third-party software implementations gives IT more bandwidth to partner side by side with the business.

CIO: What other steps have you taken to win trust?

Winright: We converted costly contracted labor associated with MOOSE to internal staff. With the true cost of our business applications exposed, we ferociously negotiated costs with our vendors. We took advantage of virtualization and other convergence technologies to maximize the benefit from our spending, and by embracing the theme of convergence we eliminated more than 10 items from our environment in this first year (for example, consolidating environments, consolidating hosts through virtualization, and converging on a single scheduler).

The fruit of our labor is an estimated 12 percent reduction in expense spending (actual dollars spent) in 2008. More importantly, we have proven a 6 percent shift of spending from existing service costs to new services. This is a powerful message to share with business partners. They will ultimately benefit when 6 percent more IT spending is directed to new initiatives rather than to existing services costs.

CIO: What's been the most painful part of this process for you?

Winright: Two things. First, it was difficult and time consuming to gather all actual budgetary expenses and tie them to a specific service. For most organizations our size, this information is held across several cost centers and managers, and the technical infrastructure itself is complex.

Second, it is always difficult to take 90 technologists and get them aligned around common themes. We continue to strive for internal alignment and eventual embodiment of these themes.

CIO: Pretend for a moment you are speaking to a peer at an organization the size of Priority Health or a little larger. What advice would you have on quick wins and things to do tomorrow?

Winright: Although painful and time consuming, it is imperative that you and your business peers understand the complete picture of IT spending in terms of business strategy. Then, and only then, will transparency into IT spending be an effective tool to increase business alignment.

Get your internal resources aligned around common themes, because an aligned group of highly intelligent people on a singular mission can yield incredible results.

How the Internet Works: 12 Myths Debunked

Written on 10:09 AM by Right Click IT - Technology Services

– Carolyn Duffy Marsan, Network World November 21, 2008

The Internet Protocol (IP) keeps evolving: What incorrect assumptions do we make when we send an e-mail or download a video?

Thirty years have passed since the Internet Protocol was first described in a series of technical documents written by early experimenters. Since then, countless engineers have created systems and applications that rely on IP as the communications link between people and their computers.

Here's the rub: IP has continued to evolve, but no one has been carefully documenting all of the changes.

"The IP model is not this static thing," explains Dave Thaler, a member of the Internet Architecture Board and a software architect for Microsoft. "It's something that has changed over the years, and it continues to change."

Thaler gave the plenary address Wednesday at a meeting of the Internet Engineering Task Force, the Internet's premier standards body. Thaler's talk was adapted from a document the IAB has drafted entitled "Evolution of the IP Model."

"Since 1978, many applications and upper layer protocols have evolved around various assumptions that are not listed in one place, not necessarily well known, not thought about when making changes, and increasingly not even true," Thaler said. "The goal of the IAB's work is to collect the assumptions—or increasingly myths—in one place, to document to what extent they are true, and to provide some guidance to the community."

The following list of myths about how the Internet works is adapted from Thaler's talk:

1. If I can reach you, you can reach me.
Thaler dubs this myth, "reachability is symmetric," and says many Internet applications assume that if Host A can contact Host B, then the opposite must be true. Applications use this assumption when they have request-response or callback functions. This assumption isn't always true because middleboxes such as network address translators (NAT) and firewalls get in the way of IP communications, and it doesn't always work with 802.11 wireless LANs or satellite links.

2. If I can reach you, and you can reach her, then I can reach her.
Thaler calls this theory "reachability is transitive," and says it is applied when applications do referrals. Like the first myth, this assumption isn't always true today because of middleboxes such as NATs and firewalls as well as with 802.11 wireless and satellite transmissions.

3. Multicast always works.
Multicast allows you to send communications out to many systems simultaneously as long as the receivers indicate they can accept the communication. Many applications assume that multicast works within all types of links. But that isn't always true with 802.11 wireless LANs or across tunneling mechanisms such as Teredo or 6to4.

4. The time it takes to initiate communications between two systems is what you'll see throughout the communication.
Thaler says many applications assume that the end-to-end delay of the first packet sent to a destination is typical of what will be experienced afterwards. For example, many applications ping servers and select the one that responds first. However, the first packet may have additional latency because of the look-ups it does. So applications may choose longer paths and have slower response times using this assumption. Increasingly, applications such as Mobile IPv6 and Protocol Independent Multicast send packets on one path and then switch to a shorter, faster path.
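
One way to observe this first-packet effect is to time repeated connections to the same host: the first attempt usually pays for a name lookup that later attempts do not. A rough standard-library sketch in Python; www.example.com is just a placeholder target, and the exact timings will depend on your resolver and network.

    import socket, time

    HOST, PORT = "www.example.com", 80   # example target; any reachable host works

    for attempt in range(3):
        start = time.perf_counter()
        # create_connection resolves the name on every call, but the resolver
        # typically caches the answer after the first lookup.
        with socket.create_connection((HOST, PORT), timeout=5):
            pass
        elapsed = (time.perf_counter() - start) * 1000
        print(f"attempt {attempt + 1}: {elapsed:.1f} ms")
    # The first attempt is usually noticeably slower than the rest, which is why
    # "ping once and pick the fastest responder" can mislead an application.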

5. IP addresses rarely change.
Many applications assume that IP addresses are stable over long periods of time. These applications resolve names to addresses and then cache them without any notion of the lifetime of the name/address connection, Thaler says. This assumption isn't always true today because of the popularity of the Dynamic Host Configuration Protocol as well as roaming mechanisms and wireless communications.
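
Applications can avoid holding stale addresses by honoring the DNS record's TTL instead of caching a resolution forever. A sketch of that idea, assuming the third-party dnspython package is available; with only the standard library you would have to re-resolve periodically instead, since socket.getaddrinfo does not expose the TTL.

    import time
    import dns.resolver   # third-party package: dnspython (assumed installed)

    _cache = {}   # name -> (address, expires_at)

    def resolve(name):
        """Return an IPv4 address for name, caching it only for the record's TTL."""
        addr, expires = _cache.get(name, (None, 0.0))
        if time.time() < expires:
            return addr
        answer = dns.resolver.resolve(name, "A")
        addr = answer[0].address
        _cache[name] = (addr, time.time() + answer.rrset.ttl)
        return addr

    print(resolve("www.example.com"))   # example name; re-resolves once the TTL lapses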

6. A computer has only one IP address and one interface to the network.
This is an example of an assumption that was never true to begin with, Thaler says. From the onset of the Internet, hosts could have several physical interfaces to the network, and each of those could have several logical Internet addresses. Today, computers are dealing with wired and wireless access, dual IPv4/IPv6 nodes and multiple IPv6 addresses on the same interface, making this assumption truly a myth.
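
You can watch the "one host, one address" myth fall apart on most modern machines by listing every address the local host name resolves to. A minimal standard-library sketch; a laptop with wired, wireless and IPv6 connectivity will typically print several entries.

    import socket

    host = socket.gethostname()
    addresses = set()
    # getaddrinfo returns one entry per (family, socktype, protocol, address)
    # combination, covering IPv4 and IPv6 for every address the resolver knows.
    for family, _, _, _, sockaddr in socket.getaddrinfo(host, None):
        addresses.add((socket.AddressFamily(family).name, sockaddr[0]))

    for family, addr in sorted(addresses):
        print(family, addr)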

7. If you and I have addresses in a subnet, we must be near each other.
Some applications assume that the IP address used by an application is the same as the address used for routing. This means an application might assume two systems on the same subnet are near each other and would be better off talking to each other than to a system far away. This assumption doesn't hold up because of tunneling and mobility. Increasingly, new applications are adopting a scheme known as an identifier/locator split, which separates the IP addresses used to identify a system from the IP addresses used to locate it.

8. New transport-layer protocols will work across the Internet.
IP was designed to support new transport protocols running on top of it, but increasingly this isn't true, Thaler says. Most NATs and firewalls allow only the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) for transporting packets, and newer Web-based applications operate only over the Hypertext Transfer Protocol (HTTP).

9. If one stream between you and me can get through, so can another one.
Some applications open multiple connections—one for data and another for control—between two systems for communications. The problem is that middleboxes such as NATs and firewalls block certain ports and may not allow more than one connection. That's why applications such as the File Transfer Protocol (FTP) and the Real-time Transport Protocol (RTP) don't always work, Thaler says.

10. Internet communications are not changed in transit.
Thaler cites several assumptions about Internet security that are no longer true. One of them is that packets are unmodified in transit. While it may have been true at the dawn of the Internet, this assumption is no longer true because of NATs, firewalls, intrusion-detection systems and many other middleboxes. IPsec solves this problem by encrypting IP packets, but this security scheme isn't widely used across the Internet.

11. Internet communications are private.
Another security-related assumption Internet developers and users often make is that packets are private. Thaler says this was never true. The only way for Internet users to be sure that their communications are private is to deploy IPsec, which is a suite of protocols for securing IP communications by authenticating and encrypting IP packets.

12. Source addresses are not forged.
Many Internet applications assume that a packet is coming from the IP source address that it uses. However, IP address spoofing has become common as a way of concealing the identity of the sender in denial of service and other attacks. Applications built on this assumption are vulnerable to attack, Thaler says.


A Techie Holiday Wish List

Written on 8:10 AM by Right Click IT - Technology Services

CIO — By Kristin Burnham

Gadget makers hope you have some money to spend this holiday season

Look, No Hands!
If hands-free legislation has crimped your cell phone usage, Funkwerk Americas' Ego Flash—a Bluetooth-enabled, hands-free car kit—is the solution. Its OLED display allows you to view phone contacts (it stores up to 10,000), call logs and caller ID; make phone calls via voice recognition; and it can even read aloud incoming text messages. The console also integrates with your car's stereo system and can play MP3 music downloaded to your mobile phone or any other Bluetooth-enabled player. $240 www.egohandsfree.com

Hunt and Peck With Style
Spice up your workspace with this hand-crafted, retro-inspired keyboard, The Aviator. This custom-made keyboard is constructed with a brushed aluminum frame, a black, felt faceplate and jewel-style LEDs similar to those on an airplane's instrument panel. $1,200-$1,500 www.datamancer.net

Watch While You Work
Late nights at the office don't mean you need to miss The Office. Sling Media's Slingbox PRO-HD streams HD content from a home television source, such as a cable box or satellite dish, to a laptop, desktop or smartphone. System requirements include a high-speed network connection with upload speeds of 1.5 megabits per second and an HD-compatible laptop or desktop computer. $300 www.slingmedia.com

No Outlet? No Problem!
A BlackBerry or iPod battery that is dying—especially when there's no outlet or charger in sight—is the ultimate inconvenience. But Solio has developed what it boasts is "the world's most advanced hybrid charger," the Magnesium Edition. Solar panels collect and store power—one hour of sun will power your iPod for an hour—and its adapter tips plug in to a variety of mobile devices, limiting the need to lug multiple chargers around. $170 www.solio.com

Can You Hear Me Now?
Crying baby on your six-hour flight? Get some shut-eye with Sennheiser's PXC 450 NoiseGard travel headphones, which reduce ambient noise by up to 90 percent. They also include a talk-through function to help distinguish between sounds such as those of a plane's engines versus the voice of a person—enabling you to communicate while wearing them. The headphones collapse for easy transport and come with adapters for in-flight entertainment systems. $400 www.sennheiserusa.com

How to Recession-Proof Yourself

Written on 8:08 AM by Right Click IT - Technology Services

By Meridith Levinson

November 17, 2008 — CIO — Layoff fears are sending a shiver through the workforce as the U.S. economy lurches toward a full-blown recession. And no one is safe as corporate cost-cutters sharpen their axes. Though senior executives are less vulnerable to losing their jobs than the employees below them, they, too, can be casualties of restructurings.

Whether you're a CIO or a help desk technician, career coaches say you can take measures to prevent the hatchet from falling on your neck. Here's a list of actions they say you can take to help safeguard your job.

1. Know your value and communicate it. "If you're flying under the radar, you're going to be the first to be eliminated," says Kirsten Dixson, author of Career Distinction: Stand Out by Building Your Brand. This goes for CIOs, too.

Dixson recommends compiling a weekly status report that outlines the project or projects you're working on, your progress on those projects and your key performance indicators, and sending that report to your boss each week.

If you're known as a "growth and innovation CIO," now is also the time to prove that you're as adept at cost cutting as you are at generating ideas, says Joanne Dustin, a 25-year IT veteran who's now a career coach and an organizational development consultant.

Dustin says CIOs need to talk up the efficiencies and cost savings that their innovations have achieved as well as the revenue they've generated. Your company may still decide that it needs someone with a different skill set in the CIO role, but at least you've given it your best shot.

2. Be a team player. Getting along with others—in the boardroom or elsewhere—is critical when downsizing is on the table, especially for IT professionals who tend to be independent, says Dustin, who's worked as a programmer, project manager and systems manager. "These times require cooperation, flexibility and a willingness to go the extra mile," she says.

IT professionals who "just sit at their desk or in the server room and do their eight-to-five" are at risk, says Ed Longanacre, senior vice president of IT at Amerisafe, a provider of workers' compensation insurance. The problem with hunkering down, he says, is that it gives the impression that you're not interested in the organization.

3. Keep your ear to the ground. Staying attuned to what's going on inside your company, including gossip, can help you anticipate changes, says Patricia Stepanski Plouffe, president of Career Management Consultants. "If there's a rumor that your department is going to fold or downsize, you can identify other areas of the company where you could transfer your skills," she says. Just remember that you can't trust everything you hear, whether it comes from the water cooler or the CFO.

4. Adapt to change quickly. "If you can develop an attitude that nothing is going to stay the same and that your organization and your job will always be in flux, that will help you cope," says Stepanski Plouffe. "Be ready for whatever change may come up."

5. Get out and lead. "Executives are expected to set the vision and reassure people of the path the company is on," says Dixson. "This is not the time to go in your office and shut the door. Show decisiveness, strength and integrity. Show that you're combating the rumor mill."

ABC: An Introduction to Business Continuity and Disaster Recovery Planning

Written on 11:22 AM by Right Click IT - Technology Services

Disaster recovery and business continuity planning are processes that help organizations prepare for disruptive events—whether an event might be a hurricane or simply a power outage caused by a backhoe in the parking lot. Management's involvement in this process can range from overseeing the plan, to providing input and support, to putting the plan into action during an emergency. This primer (compiled from articles in CSO magazine) explains the basic concepts of business continuity planning and also directs you to more CSO magazine resources on the topic.

  • What’s the difference between disaster recovery and business continuity planning?
  • What does a disaster recovery and business continuity plan include?
  • How do I get started?
  • Is it really necessary to disrupt business by testing the plan?
  • What kinds of things have companies discovered when testing a plan?
  • What are the top mistakes that companies make in disaster recovery?
  • I still have a binder with our Y2K plan. Will that work?
  • Can we outsource our contingency measures?
  • How can I sell this business continuity planning to other executives?
  • How do I make sure the plans aren’t overkill for my company?

Q: "Disaster recovery" seems pretty self-explanatory. Is there any difference between that and "business continuity planning"?

A: Disaster recovery is the process by which you resume business after a disruptive event. The event might be something huge, like an earthquake or the terrorist attacks on the World Trade Center, or something small, like malfunctioning software caused by a computer virus.

Given the human tendency to look on the bright side, many business executives are prone to ignoring "disaster recovery" because disaster seems an unlikely event. "Business continuity planning" suggests a more comprehensive approach to making sure you can keep making money. Often, the two terms are married under the acronym BC/DR. At any rate, DR and/or BC determines how a company will keep functioning after a disruptive event until its normal facilities are restored.

What do these plans include?

All BC/DR plans need to encompass how employees will communicate, where they will go and how they will keep doing their jobs. The details can vary greatly, depending on the size and scope of a company and the way it does business. For some businesses, issues such as supply chain logistics are most crucial and are the focus of the plan. For others, information technology may play a more pivotal role, and the BC/DR plan may have more of a focus on systems recovery. For example, the plan at one global manufacturing company would restore critical mainframes with vital data at a backup site within four to six days of a disruptive event, obtain a mobile PBX unit with 3,000 telephones within two days, recover the company's 1,000-plus LANs in order of business need, and set up a temporary call center for 100 agents at a nearby training facility.

But the critical point is that neither element can be ignored, and physical, IT and human resources plans cannot be developed in isolation from each other. At its heart, BC/DR is about constant communication. Business leaders and IT leaders should work together to determine what kind of plan is necessary and which systems and business units are most crucial to the company. Together, they should decide which people are responsible for declaring a disruptive event and mitigating its effects. Most importantly, the plan should establish a process for locating and communicating with employees after such an event. In a catastrophic event (Hurricane Katrina being a recent example), the plan will also need to take into account that many of those employees will have more pressing concerns than getting back to work.

Where do I start?

A good first step is a business impact analysis (BIA). This will identify the business's most crucial systems and processes and the effect an outage would have on the business. The greater the potential impact, the more money a company should spend to restore a system or process quickly. For instance, a stock trading company may decide to pay for completely redundant IT systems that would allow it to immediately start processing trades at another location. On the other hand, a manufacturing company may decide that it can wait 24 hours to resume shipping. A BIA will help companies set a restoration sequence to determine which parts of the business should be restored first.
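
To make the idea of a restoration sequence concrete, here is a small Python sketch that ranks systems by how long an outage can be tolerated and how much each hour of downtime costs. The systems and figures are invented for illustration and are not drawn from the primer.

    # Hypothetical BIA inputs: (system, outage_cost_per_hour, max_tolerable_downtime_hours)
    systems = [
        ("order entry / trading",  50_000,  1),
        ("e-mail",                  2_000, 24),
        ("shipping & logistics",   10_000,  8),
        ("intranet wiki",             200, 72),
    ]

    # Restore first the systems whose outages are tolerable for the shortest
    # time and hurt the most per hour.
    restoration_sequence = sorted(systems, key=lambda s: (s[2], -s[1]))

    for rank, (name, cost, rto) in enumerate(restoration_sequence, start=1):
        print(f"{rank}. {name}: ~${cost:,}/hour of outage, restore within {rto}h")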

Here are 10 absolute basics your plan should cover:

1. Develop and practice a contingency plan that includes a succession plan for your CEO.
2. Train backup employees to perform emergency tasks. The employees you count on to lead in an emergency will not always be available.
3. Determine offsite crisis meeting places for top executives.
4. Make sure that all employees, as well as executives, are involved in the exercises so that they get practice in responding to an emergency.
5. Make exercises realistic enough to tap into employees' emotions so that you can see how they'll react when the situation gets stressful.
6. Practice crisis communication with employees, customers and the outside world.
7. Invest in an alternate means of communication in case the phone networks go down.
8. Form partnerships with local emergency response groups (firefighters, police and EMTs) to establish a good working relationship. Let them become familiar with your company and site.
9. Evaluate your company's performance during each test, and work toward constant improvement. Continuity exercises should reveal weaknesses.
10. Test your continuity plan regularly to reveal and accommodate changes. Technology, personnel and facilities are in a constant state of flux at any company.

Hold it. Actual live-action tests would, themselves, be the "disruptive events." If I get enough people involved in writing and examining our plans, won't that be sufficient?

Let us give you an example of a company that thinks tabletops and paper simulations aren't enough. And why their experience suggests they're right.

When CIO Steve Yates joined USAA, a financial services company, business continuity exercises existed only on paper. Every year or so, top-level staffers would gather in a conference room to role-play; they would spend a day examining different scenarios and talking them out, discussing how they thought the procedures should be defined and how they thought people would respond to them.

Live exercises were confined to the company's technology assets. USAA would conduct periodic data recovery tests of different business units, like taking a piece of the life insurance department and recovering it from backup data.

Yates wondered if such passive exercises reflected reality. He also wondered if USAA's employees would really know how to follow such a plan in a real emergency. When Sept. 11 came along, Yates realized that the company had to do more. "Sept. 11 forced us to raise the bar on ourselves," says Yates.

Yates engaged outside consultants who suggested that the company build a second data center in the area as a backup. After weighing the costs and benefits of such a project, USAA initially concluded that it would be more efficient to rent space on the East Coast. But after the attack on the World Trade Center and Pentagon, when air traffic came to a halt, Yates knew it was foolhardy to have a data center so far away. Ironically, USAA was set to sign the lease contract the week of Sept. 11.

Instead, USAA built a center in Texas, only 200 miles away from its offices: close enough to drive to, but far enough away to pull power from a different grid and water from a different source. The company has also made plans to deploy critical employees to other office locations around the country.

Yates made site visits to companies such as FedEx, First Union, Merrill Lynch and Wachovia to hear about their approach to contingency planning. USAA also consulted with PR firm Fleishman-Hillard about how USAA, in a crisis situation, could communicate most effectively with its customers and employees.

Finally, Yates put together a series of large-scale business continuity exercises designed to test the performance of individual business units and the company at large in the event of wide-scale business disruption. When the company simulated a loss of the primary data center for its federal savings bank unit, Yates found that it was able to recover the systems, applications and all 19 of the third-party vendor connections. USAA also ran similar exercises with other business units.

For the main event, however, Yates wanted to test more than the company's technology procedures; he wanted to incorporate the most unpredictable element in any contingency planning exercise: the people.

USAA ultimately found that employees who walked through the simulation were in a position to observe flaws in the plans and offer suggestions. Furthermore, those who practice for emergency situations are less likely to panic and more likely to remember the plan.

Can you give me some examples of things companies have discovered through testing?

Some companies have discovered that while they back up their servers or data centers, they've overlooked backup plans for laptops. Many businesses fail to realize the importance of data stored locally on laptops. Because of their mobile nature, laptops can easily be lost or damaged. It doesn't take a catastrophic event to disrupt business if employees are carting critical or irreplaceable data around on laptops.

One company reports that it is looking into buying MREs (meals ready-to-eat) from the company that sells them to the military. MREs have a long shelf life, and they don't take up much space. If employees are stuck at your facility for a long time, this could prove a worthwhile investment.

Mike Hager, former head of information security and disaster recovery for OppenheimerFunds, says 9/11 brought issues like these to light. Many companies, he said, were able to recover data but had no plans for alternative workplaces. The World Trade Center had provided more than 20 million square feet of office space, and after Sept. 11 there was only 10 million square feet of office space available in Manhattan. The issue of where employees go immediately after a disaster and where they will be housed during recovery should be addressed before something happens, not after.

USAA discovered that while it had designated a nearby relocation area, the setup process for computers and phones took nearly two hours. During that time, employees were left standing outside in the hot Texas sun. Seeing the plan in action raised several questions that hadn't been fully addressed before: Was there a safer place to put those employees in the interim? How should USAA determine if or when employees could be allowed back in the building? How would thousands of people access their vehicles if their car keys were still sitting on their desks? And was there an alternate transportation plan if the company needed to send employees home?

What are the top mistakes that companies make in disaster recovery?

Hager and other experts note the following pitfalls:

1. Inadequate planning: Have you identified all critical systems, and do you have detailed plans to recover them to the current day? (Everybody thinks they know what they have on their networks, but most people don't really know how many servers they have, how they're configured, or what applications reside on them: what services are running, and what versions of software or operating systems they are using. Asset management tools claim to do the trick here, but they often fail to capture important details about software revisions and so on.)

2. Failure to bring the business into the planning and testing of your recovery efforts.

3. Failure to gain support from senior-level managers. The largest problems here are:

1. Not demonstrating the level of effort required for full recovery.
2. Not conducting a business impact analysis and addressing all gaps in your recovery model.
3. Not building adequate recovery plans that outline your recovery time objective, the critical systems and applications, the vital documents needed by the business, and the business functions whose operational activities must continue after a disaster.
4. Not having proper funding that will allow for a minimum of semiannual testing.

I still have a binder with our Y2K contingency plan. Will that work?

Absolutely not (unless your computers, employees and business priorities are exactly the same as they were in 1999). Plus, most Y2K plans cover only computer system-based failure. Potential physical calamities like blackouts, natural disasters or terrorist events bring additional issues to the table.

Can we outsource our contingency measures?

Disaster recovery services (offsite data storage, mobile phone units, remote workstations and the like) are often outsourced, simply because it makes more sense than purchasing extra equipment or space that may never be used. In the days after the Sept. 11 attacks, disaster recovery vendors restored systems and provided temporary office space, complete with telephones and Internet access, for dozens of displaced companies.

What advice would you give to security executives who need to convince their CEO or board of the need for disaster recovery plans and capabilities? What arguments are most effective with an executive audience?

Hager advises chief security officers to address the need for disaster recovery through analysis and documentation of the potential financial losses. Work with your legal and financial departments to document the total losses per day that your company would face if you were not capable of quick recovery. By thoroughly reviewing your business continuance and disaster recovery plans, you can identify the gaps that could prevent a successful recovery. Remember: disaster recovery and business continuance are nothing more than risk avoidance. Senior managers understand more clearly when you can demonstrate how much risk they are taking.

Hager also says that smaller companies have more (and cheaper) options for disaster recovery than bigger ones. For example, the data can be taken home at night. That's certainly a low-cost way to do offsite backup.

Some of this sounds like overkill for my company. Isn't it a bit much?

The elaborate machinations that USAA goes through in developing and testing its contingency plans might strike the average CSO (or CEO, anyway) as being over the top. And for some businesses, that's absolutely true. After all, HazMat training and an evacuation plan for 20,000 employees is not a necessity for every company.

Like many security issues, continuity planning comes down to basic risk management: How much risk can your company tolerate, and how much is it willing to spend to mitigate various risks?

In planning for the unexpected, companies have to weigh the risk versus the cost of creating such a contingency plan. That's a trade-off that Pete Hugdahl, USAA's assistant vice president of security, frequently confronts. "It gets really difficult when the cost factor comes into play," he says. "Are we going to spend $100,000 to fence in the property? How do we know if it's worth it?"

And, make no mistake, there is no absolute answer. Whether you spend the money or accept the risk is an executive decision, and it should be an informed decision. Half-hearted disaster recovery planning (in light of the 2005 hurricane season, 9/11, the Northeast blackout of 2003, and so on) is a failure to perform due diligence.

This document was compiled from articles published in CSO and CIO magazines. Contributing writers include Scott Berinato, Kathleen Carr, Daintry Duffy, Michael Goldberg, and Sarah Scalet. Send feedback to CSO Executive Editor Derek Slater at dslater@cxo.com.

Five Tips: Make Virtualization Work Better Across the WAN

Written on 10:27 AM by Right Click IT - Technology Services

– Jeff Aaron, VP Marketing, Silver Peak Systems, CIO November 18, 2008

IT departments can reap enormous benefits from virtualizing applications and implementing Virtual Desktop Infrastructures (VDI). However, the management and cost savings of virtualization can be lost if performance is so bad that it hampers productivity, as can happen when virtual applications and desktops are delivered across a Wide Area Network (WAN).

For an in-depth look at a WAN revamp, see CIO.com's related article, "How to Make Your WAN a Fast Lane: One Company's Story."

How can enterprises overcome poor performance to reap the rewards of virtualization?

Jeff Aaron, VP of marketing at Silver Peak Systems, suggests these five tips.

1. Understand The Network Issues
For starters, it makes sense to understand why your virtualized applications and virtual desktops perform poorly across the WAN. It's typically not due to the application or VDI components, but due to the network. More specifically, virtualized environments are sensitive to the following WAN characteristics:

Latency: the time it takes for data to travel from one location to another.

Packet loss: when packets get dropped or delivered out of order because of network congestion, they must be re-transmitted across the WAN. This can turn a 200-millisecond round trip into one second. To end users, the virtual application or desktop seems unresponsive while packets are being re-transmitted, so they start hitting keys on their client machines again, which compounds the problem. (A rough estimate of how loss and latency limit throughput follows below.)

Bandwidth: WAN bandwidth may or may not be an issue depending on the type of traffic being sent. While most virtualized applications are fairly efficient when it comes to bandwidth consumption, some activities (such as file transfers and print jobs) consume significant bandwidth, which can present a performance challenge.
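
To see how much latency and loss matter together, a common back-of-the-envelope estimate is the Mathis approximation for steady-state TCP throughput, roughly (MSS / RTT) x (1.22 / sqrt(loss rate)). The Python sketch below plugs in illustrative numbers; it estimates generic TCP behavior under loss, not the behavior of any particular virtualization product.

    from math import sqrt

    def tcp_throughput_estimate(mss_bytes, rtt_s, loss_rate):
        """Mathis et al. approximation of steady-state TCP throughput (bits/sec)."""
        return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss_rate))

    # Illustrative WAN conditions: 1460-byte segments, 100 ms round trip.
    for loss in (0.0001, 0.001, 0.01):
        bps = tcp_throughput_estimate(1460, 0.100, loss)
        print(f"loss {loss:.2%}: ~{bps / 1e6:.1f} Mbit/s")
    # Going from 0.01% to 1% loss cuts the achievable rate by a factor of ten,
    # which is why a lossy WAN makes a virtual desktop feel unresponsive.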

2. Examine WAN Optimization Techniques
WAN optimization devices can be deployed on both ends of a WAN link to improve the performance of all enterprise applications. The following WAN optimization techniques are used by these devices to improve the performance of virtual applications and desktops:

Latency can be overcome by mitigating the "chattiness" of TCP, the transport protocol used by virtual applications for communication across the WAN. More specifically, WAN optimization devices can be configured to send more data within specific windows and to minimize the number of back-and-forth acknowledgements required before data is sent. This improves the responsiveness of keystrokes in a virtual environment.

Loss can be mitigated by rebuilding dropped packets on the far end of a WAN link, and re-sequencing packets that are delivered out of order in real-time. This eliminates the need to re-transmit packets every time they are dropped or delivered out-of-order. By avoiding re-transmissions, virtual applications and desktops appear much more responsive across the WAN.

Bandwidth can be reduced using WAN deduplication. By monitoring all data sent across the WAN, repetitive information can be detected and delivered locally rather than resent across the network. This significantly improves bandwidth utilization in some (but not all) virtualized environments.
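
The idea behind WAN deduplication can be sketched in a few lines: split the byte stream into chunks, fingerprint each chunk, and send only a short reference for chunks the far side has already seen. This toy Python illustration (fixed-size chunks, SHA-256 fingerprints, a shared fingerprint store assumed on both ends) shows the principle, not how any particular WAN optimization appliance implements it.

    import hashlib

    CHUNK = 4096          # fixed-size chunks keep the toy example simple
    seen = set()          # fingerprints both ends are assumed to have stored

    def dedup_send(data: bytes):
        """Return (bytes_on_wire, total_bytes) for a naive dedup pass over data."""
        on_wire = 0
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).digest()
            if digest in seen:
                on_wire += len(digest)        # already known: send the 32-byte reference
            else:
                seen.add(digest)
                on_wire += len(chunk)         # first time: send the full chunk
        return on_wire, len(data)

    payload = b"virtual desktop screen update " * 10_000   # highly repetitive sample data
    wire, total = dedup_send(payload)
    print(f"sent {wire:,} of {total:,} bytes ({100 * wire / total:.0f}%)")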

3. Set Application Priorities
The average enterprise has more than 80 applications that are accessed across the WAN. That means that critical applications, including terminal services and VDI, are vying for the same resources as less important traffic, such as Internet browsing. Because virtual applications and desktops are sensitive to latency, it often makes sense to prioritize this traffic over other applications using Quality of Service (QoS) techniques. In addition, QoS can guarantee bandwidth for VDI and virtual applications.
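
At the host level, one mechanism that feeds into such QoS policies is marking a socket's traffic with a DSCP value that network devices can prioritize. A minimal Python sketch; whether the IP_TOS option is available and honored depends on the operating system and the network, so treat this as an illustration of the marking mechanism rather than a drop-in configuration.

    import socket

    EF = 0xB8   # DSCP "Expedited Forwarding" (46) shifted into the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Ask the OS to mark outgoing packets on this socket so routers and WAN
    # devices applying QoS can prioritize them over bulk traffic.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)
    print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))   # echoes 184 where supported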

4. Compress and Encrypt in the Right Place
Oftentimes, host machines compress information prior to transmission. This is meant to improve bandwidth utilization in a virtual environment. However, compression obscures visibility into the actual data, which makes it difficult for downstream WAN optimization devices to provide their full value. Therefore, it may be better to turn off compression in the virtual host (if possible) and instead enable it in the WAN optimization device.

Moving compression into the WAN optimization device has another added benefit: it frees up CPU cycles within the host machine. This can lead to better performance and scalability throughout a virtual environment.

IT staff should also consider where encryption takes place in a virtual infrastructure, since encryption also consumes CPU cycles in the host.


5. Go With the Flows
Network scalability can have an important impact on the performance of virtual applications and VDI. The average thin client machine has 10 to 15 TCP flows open at any given time. If thousands of clients are accessing host machines in the same centralized facility, that location must be equipped to handle tens of thousands of simultaneous sessions.

When it comes to supporting large numbers of flows, there are two "best practice" recommendations. First, as discussed above, it is recommended that compression and encryption be moved off the host machine to free up CPU cycles. Second, make sure your WAN acceleration device supports the right amount of flows for your environment. The last thing you want to do is create an artificial bottleneck within the very devices deployed to remove your WAN's bottlenecks.

8 Reasons Tech Will Survive the Economic Recession

Written on 10:05 AM by Right Click IT - Technology Services

– Carolyn Duffy Marsan, Network World November 13, 2008

The global economy is in as bad shape as we've ever seen. In the last two months, U.S. consumers have stopped spending money on discretionary items, including electronic gear, prompting this week's bankruptcy filing by Circuit City. Retailers are worried that Black Friday will indeed be black, as holiday shoppers cut back on spending and choose lower-priced cell phones and notebook computers.

Yet despite all of the bailouts and layoffs, most IT industry experts are predicting that sales of computer hardware, software and services will be growing at a healthy clip again within 18 months.

Here's a synopsis of what experts are saying about the short- and long-term prognosis for the tech industry:

1. The global IT market is still growing, although barely.

IDC this week recast its projections for global IT spending in 2009, forecasting that the market will grow 2.6 percent next year instead of the 5.9 percent predicted prior to the financial crisis. In the United States, IT spending will eke out 0.9 percent growth.

IDC predicts the slowest IT markets will be the United States, Japan and Western Europe, which all will experience around 1 percent growth. The healthiest economies will be in Central and Eastern Europe, the Middle East, Africa and Latin America.

Similarly, Gartner's worst-case scenario for 2009 is that IT spending will increase 2.3 percent, according to a report released in mid-October. Gartner said the U.S. tech industry will be flat. Hardest hit will be Europe, where IT expenditures are expected to shrink in 2009.

Overall, Gartner said global IT spending will reach $3.8 trillion in 2008, up from $3.15 trillion in 2007.

"We expect a gradual recovery throughout 2010, and by 2011 we should be back into a more normal kind of environment," said IDC Analyst Stephen Minton. If the recession turns out to be deeper or last longer than four quarters as most economics expect, "it could turn into a contraction in IT spending," Minton added. "In that case, the IT market would still be weak in 2010 but we'd see a gradual recovery in 2011, and we'd be back to normal by 2012."

2. It's not as bad as 2001.

Even the grimmest predictions for global IT spending during the next two years aren't as severe as the declines the tech industry experienced between 2001 and 2003.

"Global economic problems are impacting IT budgets, however the IT industry will not see the dramatic reductions that were seen during the dot.com bust. . . . At that time, budgets were slashed from mid-double-digit growth to low-single-digit growth," Gartner said in a statement.

Gartner said the reason IT won't suffer as badly in 2009 as it did during the 2001 recession is that "organizations now view IT as a way to transform their businesses and adopt operating models that are leaner. . . . IT is embedded in running all aspects of the business."

IDC's Minton said that in 2001 many companies had unused data center capacity, excess network bandwidth and software applications that weren't integrated in a way that could drive productivity.

"This time around, none of that is true," Minton said. "Today, there isn't a glut of bandwidth. There is high utilization of software applications, which are purchased in a more modular way and integrated much faster into business operations. Unlike in 2001, companies aren't waking up to find that they should be cutting back on IT spending. They're only cutting back on new initiatives because of economic conditions."

"We're anxious about whether the economy will resemble what the most pessimistic economists are saying or the more mainstream economists," Minton said. "But we don't see any reason that it will turn into a disaster like 2001. It shouldn’t get anywhere near that bad."

3. Consumers won't give up their cell phones.

They may lose their jobs and even their homes, but consumers seem unwilling to disconnect their cell phones.

"I would sleep in my car before I would give up my mobile phone," says Yankee Group Analyst Carl Howe. "Consumers buy services like broadband and mobile phones, and even if they lose their jobs they need these services more than ever."

Yankee Group says the demand for network-based services—what it dubs "The Anywhere Economy"—will overcome the short-term obstacles posed by the global financial crisis and will be back on track for significant growth by 2012.

Yankee Group predicts continued strong sales for basic mobile phone services at the low end, as well as high-end services such as Apple iPhones and Blackberry Storms. Where the mobile market will get squeezed is in the middle, where many vendors have similar feature sets. One advantage for mobile carriers: they have two-year contracts locked in.

Telecom services "are not quite on the level of food, shelter and clothing, but increasingly it satisfies a deep personal need," Howe says. "When bad things happen to us, we want to talk about it. And in today's world, that's increasingly done electronically."

4. Notebook computers are still hot.

Worldwide demand for notebooks—particularly the sub-$500 models—has been strong all year. But that may change in the fourth quarter given Intel's latest warnings about flagging demand for its processors.

Both IDC and Gartner reported that PC shipments grew 15 percent in the third quarter of 2008, driven primarily by sales of low-cost notebook computers. Altogether, more than 80 million PCs were shipped during the third quarter of 2008, which was down from estimates earlier in the year but still represents healthy growth.

IDC said notebook sales topped desktop sales—55 percent to 45 percent—for the first time ever during the third quarter of 2008. This is a trend that will help prop up popular notebook vendors such as Hewlett-Packard, Dell and Apple. Apple, for example, saw its Mac shipments rise 32 percent in the third quarter of 2008, powered primarily by its notebooks.

The big unknown is what will happen to notebook sales during the holiday season. Analysts have noted sluggishness in U.S. corporate PC sales this fall as well as home sales, where most demand is for ultra-low-priced notebooks.

"The impact will come this quarter. People will be looking for cheaper products. . . . They will not be spending as much as they did a year ago," IDC's Minton said.

Intel said yesterday that it was seeing significantly weaker demand across its entire product line and dropped its revenue forecast for the fourth quarter by $1 billion.

The brunt of the slowdown in IT spending will hit servers and PCs, predicts Forrester Research analyst Andrew Bartels. Forrester is adjusting its IT spending forecast for 2009 downward, and plans to release new numbers after Thanksgiving, he adds.

"PCs and servers may see declines similar to 2001, but we're not going to be seeing that across the whole tech industry," Bartels says. "Software is a bright spot. Much of software spending comes in the form of maintenance and subscriptions. The licensing part may go down, but that's only a quarter of total software revenues."

5. Telecom carriers will keep building out their networks.

The biggest U.S. carriers—including AT&T and Verizon—are in much better shape going into this recession than they were during the dot-com bust. So while consumer spending will fall in 2009, it is expected to have less of an impact on the telecom sector than it did after 2001.

Yankee Group says the financial crisis will not significantly impact network build-outs by carriers because most of the financing for 3G, Fios, WiMAX and other next-generation networks is already in place.

"These are multibillion-dollar build-outs, and most of the financing has been arranged months if not years in advance," Yankee Group's Howe says. "We were projecting that in 2009 carriers would spend over $70 billion on these network build-outs in the U.S. Now we're saying that there will be $2 billion or $3 billion less in spending. . . . We're talking single-digit percentage declines, not wholesale cuts."

This doesn't mean that the network industry will emerge from the chaos unscathed. Carriers will squeeze their equipment providers, and companies like Cisco are already feeling the pinch. When Cisco announced its latest earnings last week, CEO John Chambers reported the company had seen its sales shift from solid single-digit growth in August to a 9 percent decline in October.

Forrester says computer and communications equipment vendors will bear the brunt of IT cost-cutting from enterprise customers.

6. Corporate data storage needs keep rising during recessions.

Every segment of the IT market is weaker today than it was six months ago. But some segments are less weak than others, and one of the healthiest is storage.

“Storage is relatively stable because of the fact that companies are using a lot more of their storage capacity and they are still dealing with an increasing amount of data that requires storage on a weekly basis. That’s not going to change,” IDC’s Minton said. “It’s not just the hardware, but the storage software that will be relative bright spots in the years ahead.”

One storage industry bellwether is EMC, which continued to demonstrate strong demand for storage hardware and software in its recent quarterly results. EMC’s revenue grew 13 percent in the third quarter of 2008 compared to a year ago. Unlike many other network industry leaders, EMC is projecting continued revenue gains in the fourth quarter of 2008.

Similarly, this week Brocade issued a preliminary release indicating strong sales for the quarter ending in October. CEO Michael Klayko said the company will outperform its sales projections from August 2008.

“Storage needs are on the rise, and storage investments will continue,” Forrester’s Bartels says. “We don’t see cloud storage as having a meaningful impact yet.”

7. New IT markets will continue to emerge, although more slowly.

Emerging economies such as China and Latin America are slowing down, but they are still expected to have IT sales increases in 2009. The Latin American market, in particular, is a solid one for IT companies such as IBM, HP and Microsoft, which have a strong foothold south of the border.

“In the past two to three years, Latin America has had some of the fastest growth rates in IT spending,” IDC’s Minton said. “Brazil is the biggest market, and it has been growing at double digits. But all of the markets in Latin America have been growing by more than 10 percent a year. With some exceptions, the economies there are relatively stable and have had less political turmoil than in the past. . . . This is one of the regions that we think will bounce back pretty quickly.”

Other emerging markets that will continue to post growth in IT spending in 2009 are Central and Eastern Europe, the Middle East and Africa, IDC predicts. While these markets won’t experience double-digit gains next year, they will help offset sharp declines in IT purchasing in the United States, Japan and Western Europe.

Forrester warns that IT vendors shouldn’t count on the so-called BRIC countries—Brazil, Russia, India and China—to bail them out of the financial crisis altogether.

“The BRIC markets are performing better than the industrial markets, but they are also slowing down,” Forrester’s Bartels says. “Among those markets, China looks to be the strongest, then Brazil and Mexico. Russia is weakening, and India is weakening. They’re not going to go into a contraction, but the growth rates could slow to the point that they feel like a contraction.”

One issue for IT vendors is the rising strength of the U.S. dollar, which means U.S. tech vendors will bring home fewer dollars from their foreign sales when they convert currencies.

“The dollar has been strengthening against every currency except the Chinese currency,” Bartels says. “Even if a vendor is successful in sales in Brazil or Russia, they will bring back fewer dollars, which was not the case six months ago.”

8. Outsourcing helps companies stretch their IT budgets.

Many companies will freeze new IT initiatives for the next three to six months as they absorb the Wall Street crash. But one segment that’s likely to continue is IT outsourcing because it provides near-term cost reductions.

“While IT outsourcing will benefit from an economic slowdown in 2008 as companies turn to IT outsourcing vendors to help cut costs, trends toward use of lower-cost offshore resources and smaller-scale outsourcing deals will keep growth modest,” says Forrester Research.

Forrester predicts IT outsourcing will grow around 5 percent in 2009 and 2010.

“When you sign an outsourcing agreement, you’re locked into it barring going out of business,” Forrester’s Bartels says. “Outsourcing revenues are not going to be variable.”

On the horizon is cloud computing, which also holds the promise of reducing corporate IT overhead but requires more up-front spending than outsourcing.

“Over the longer term, we’re pretty bullish about cloud computing,” IDC’s Minton said. “But there will be a lot of hurdles for a bigger corporation. It’s difficult for them psychologically to give up control, and there are quite a lot of up-front costs to engage consultants, to roll out applications to a large number of employees, and there’s training involved. But ultimately these projects save money.”

Working Virtually: You can easily provide secured remote access to employees

Written on 1:26 PM by Right Click IT - Technology Services

Definition from Whatis.com

Remote access is the ability to get access to a computer or a network from a remote distance. In corporations, people at branch offices, telecommuters and people who are traveling may need access to the corporation's network. Access can go through an ISP connection, ISDN, DSL or a wireless mobile service.

Right Click Can Set up Remote Access for Your Team!

The playing field is beginning to level: the technology to link remote offices to headquarters is no longer just for big companies. Small and medium-sized businesses need their remote sites dialed in just like the main office. At Right Click, we can work with your existing Internet connections and set up hardware VPNs for remote offices and power home users. This allows remote users to securely access files, email and programs when they are outside the main office, and it minimizes the need to email files around and keep track of versions.

If you need to give your remote users access to a large application, Right Click's experts can set up cost-effective Terminal Services and Citrix servers for the task. We will work with you to determine the best solution for your team, architect the system to fit your budget and make sure it performs to your standards.
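
Right Click typically delivers this with hardware VPNs and Terminal/Citrix servers, but if you are curious what secure remote file access looks like under the hood, here is a minimal sketch using SSH/SFTP from Python (with the paramiko library). The hostname, username, key path and file paths are hypothetical examples, not a description of any specific client setup.

```python
# Minimal sketch: pull a file from an office server over an encrypted SSH/SFTP
# channel. Hostname, username, key and paths below are hypothetical examples.
import os
import paramiko

def fetch_remote_file(host: str, user: str, remote_path: str, local_path: str) -> None:
    client = paramiko.SSHClient()
    client.load_system_host_keys()                      # trust only known hosts
    client.connect(host, username=user,
                   key_filename=os.path.expanduser("~/.ssh/id_rsa"))
    try:
        sftp = client.open_sftp()
        sftp.get(remote_path, local_path)               # encrypted file transfer
        sftp.close()
    finally:
        client.close()

if __name__ == "__main__":
    fetch_remote_file("files.example.com", "jsmith",
                      "/shares/proposals/q3-proposal.docx", "q3-proposal.docx")
```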

How can you make sure your Network Security is compliant and meets today's requirements?

Written on 11:59 AM by Right Click IT - Technology Services

Definition provided by Wikipedia

Network security consists of the provisions made in an underlying computer network infrastructure, the policies adopted by the network administrator to protect the network and its network-accessible resources from unauthorized access, and the consistent and continuous monitoring and measurement of their effectiveness (or lack thereof).

In today’s world, where your most valuable data can be taken from you on a device the size of a key, ensuring you have proper network security is critical.

With a USB key drive holding an average of 8 gigabytes and an iPod holding 80 GB of data, nearly all of your proprietary or sensitive information can be copied from your network in a matter of minutes. Additionally, there is the ever-present threat of viruses, spyware and hackers further impacting the network.

At Right Click, we look at your network and come up with a plan to cost-effectively ensure network security. Our goal is to give your company effective security without onerous management, while still allowing your employees to work with ease.

Many of our clients operate in regulated industries such as health care and financial services, and Right Click is positioned to ensure that their networks comply with HIPAA and SOX regulations.

Let us come out and do a network security survey to proactively ensure that all network security requirements are met or exceeded. Give us a call or email us at Right Click.
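
As a rough illustration of one small piece of such a survey (not Right Click's actual methodology), the sketch below checks which common TCP ports answer on a host you are authorized to test; the address and port list are assumptions for the example.

```python
# Minimal sketch: a TCP "connect" check of a few common ports on a host you own.
# The host address and port list are illustrative assumptions only.
import socket

COMMON_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 80: "HTTP",
                443: "HTTPS", 445: "SMB", 3389: "RDP"}

def scan_host(host: str, timeout: float = 0.5) -> dict:
    """Return {port: service} for ports that accept a TCP connection."""
    open_ports = {}
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports[port] = service
    return open_ports

if __name__ == "__main__":
    print(scan_host("192.168.1.10"))   # only scan machines you are authorized to test
```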

The Benefits of Outsourcing for Small Businesses

Written on 5:50 PM by Right Click IT - Technology Services

From AllBusiness.com

Outsourcing, the practice of using outside firms to handle work normally performed within a company, is a familiar concept to many entrepreneurs. Small companies routinely outsource their payroll processing, accounting, distribution, and many other important functions — often because they have no other choice. Many large companies turn to outsourcing to cut costs. In response, entire industries have evolved to serve companies' outsourcing needs.

Wise outsourcing, however, can provide a number of long-term benefits:

Control capital costs. Cost-cutting may not be the only reason to outsource, but it's certainly a major factor. Outsourcing converts fixed costs into variable costs, releases capital for investment elsewhere in your business, and allows you to avoid large expenditures in the early stages of your business. Outsourcing can also make your firm more attractive to investors, since you're able to pump more capital directly into revenue-producing activities.

Increase efficiency. Companies that do everything themselves have much higher research, development, marketing, and distribution expenses, all of which must be passed on to customers. An outside provider's cost structure and economy of scale can give your firm an important competitive advantage.

Reduce labor costs. Hiring and training staff for short-term or peripheral projects can be very expensive, and temporary employees don't always live up to your expectations. Outsourcing lets you focus your human resources where you need them most.

Start new projects quickly. A good outsourcing firm has the resources to start a project right away. Handling the same project in-house might involve taking weeks or months to hire the right people, train them, and provide the support they need. And if a project requires major capital investments (such as building a series of distribution centers), the startup process can be even more difficult.

Focus on your core business. Every business has limited resources, and every manager has limited time and attention. Outsourcing can help your business to shift its focus from peripheral activities toward work that serves the customer, and it can help managers set their priorities more clearly.

Level the playing field. Most small firms simply can't afford to match the in-house support services that larger companies maintain. Outsourcing can help small firms act "big" by giving them access to the same economies of scale, efficiency, and expertise that large companies enjoy.

Reduce risk. Every business investment carries a certain amount of risk. Markets, competition, government regulations, financial conditions, and technologies all change very quickly. Outsourcing providers assume and manage this risk for you, and they generally are much better at deciding how to avoid risk in their areas of expertise

Let Right Click help augment your IT services needs with the "right" size IT staff, available to fit your project needs.

What is Data Recovery and Computer Forensics?

Written on 11:43 AM by Right Click IT - Technology Services

Definition from Wikipedia

Data recovery is the process of salvaging data from damaged, failed, corrupted, or inaccessible secondary storage media when it cannot be accessed normally.

Often the data are being salvaged from storage media formats such as hard disk drives, storage tapes, CDs, DVDs, RAID, and other electronics. Recovery may be required due to physical damage to the storage device or logical damage to the file system that prevents it from being mounted by the host operating system.

If you are missing information on your server or desktop, or your hard drive crashed and you did not have a backup, Right Click’s data recovery services can help you get your data back quickly and efficiently. We have performed a number of data recoveries, saving clients thousands of dollars in the lost time and productivity it would have cost to recreate the files.

Data recovery can also be the process of retrieving and securing deleted information from a storage media for forensic purposes or spying.

Do you think an employee or partner is doing something that may not be in accordance with company policies? Right Click is expert at examining computers and making sure you get an answer that has been thoroughly researched and examined.

Right Click's Jim Harrington is an EnCase Certified Engineer who has testified in court and has extensive experience handling forensics jobs large and small.

Computer forensics is a branch of forensic science pertaining to legal evidence found in computers and digital storage mediums. Computer forensics is also known as digital forensics.

The goal of computer forensics is to explain the current state of a digital artifact. The term digital artifact can include a computer system, a storage media (such as a hard disk or CD-ROM), an electronic document (e.g. an email message or JPEG image) or even a sequence of packets moving over a computer network. The explanation can be as straightforward as "what information is here?" and as detailed as "what is the sequence of events responsible for the present situation?"

The field of computer forensics also has sub-branches within it, such as firewall forensics, database forensics and mobile device forensics.

There are many reasons to employ the techniques of computer forensics:

  • In legal cases, computer forensic techniques are frequently used to analyze computer systems belonging to defendants (in criminal cases) or litigants (in civil cases).
  • To recover data in the event of a hardware or software failure.
  • To analyze a computer system after a break-in, for example, to determine how the attacker gained access and what the attacker did.
  • To gather evidence against an employee that an organization wishes to terminate.
  • To gain information about how computer systems work for the purpose of debugging, performance optimization, or reverse-engineering.

Special measures should be taken when conducting a forensic investigation if it is desired for the results to be used in a court of law. One of the most important measures is to assure that the evidence has been accurately collected and that there is a clear chain of custody from the scene of the crime to the investigator, and ultimately to the court.
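
One concrete, widely used step in preserving that chain of custody is hashing the evidence as soon as it is collected, so anyone can later verify it has not changed. Here is a minimal sketch in Python; the image path is a hypothetical example.

```python
# Minimal sketch: compute a SHA-256 fingerprint of a disk image so its
# integrity can be re-verified later. The path below is a hypothetical example.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256_of_file("evidence/disk-image.dd"))
```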

Right Click has the field expertise to provide you with the right solution to your Data Recovery and Computer Forensics needs. Give us a call or email us to find out how we can help!


Reduce Energy Costs and Go Green with VMware Virtualization

Written on 11:42 AM by Right Click IT - Technology Services

Server virtualization is being rapidly adopted by companies of all sizes. Server virtualization with VMware delivers immediate benefits including cost savings, easier application deployment, and simplified management.

Reduce the energy demands of your datacenter by right-sizing your IT infrastructure through consolidation and dynamic management of computer capacity across a pool of servers. Our virtualization solution powered by VMware Infrastructure delivers the resources your infrastructure needs and enables you to:

  • Reduce energy costs by 80%.
  • Power down servers without affecting applications or users.
  • Green your datacenter while decreasing costs and improving service levels.

Learn more about Virtualization for your organization by contacting Right Click today!

What is Server Virtualization?

Written on 11:21 AM by Right Click IT - Technology Services

Server Virtualization

Have you been reading about server virtualization? Heard that it is the next big thing, that it is going to save you a lot of money, make IT management easier and decrease the amount of space you need to dedicate to your servers?

At Right Click, we are well equipped to teach you about the pros and cons of virtualizing and to recommend whether it is the right move for your firm.

What is it? Server virtualization is running multiple copies of server software on one physical box. For instance, with a virtualized server you can have a domain controller, an Exchange server and a terminal server on one physical box, with three separate copies of Windows Server running. It is as though you have three machines, but only one physical box.

Why does it work? If you look at your CPU utilization, it hardly ever goes above 10%. With virtualization you can get more out of your existing servers and offer additional services to your users without significant hardware spending.
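
If you want to sanity-check that claim on your own hardware before consolidating, a quick utilization sample is enough. Below is a minimal sketch (not a sizing tool) using the psutil Python package; the sample count, interval and 50% cutoff are arbitrary assumptions.

```python
# Minimal sketch: sample CPU utilization to see whether a server is a
# reasonable candidate to host virtual machines. Thresholds are arbitrary.
import psutil

def sample_cpu(samples: int = 60, interval: float = 1.0) -> None:
    readings = [psutil.cpu_percent(interval=interval) for _ in range(samples)]
    avg, peak = sum(readings) / len(readings), max(readings)
    print(f"average CPU: {avg:.1f}%   peak CPU: {peak:.1f}%")
    if peak < 50:
        print("This box looks like a reasonable consolidation candidate.")

if __name__ == "__main__":
    sample_cpu()
```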

How does it work? Virtualization begins with base software, which can be Microsoft Windows Server 2008 (with its Hyper-V role) or a thin program like VMware's hypervisor. Once you have your base software, you begin to set up new machines on your one box. You can install different operating systems, even different platforms, on that one box.

Why do you want it? There are a number of reasons to virtualize:

    • Decrease the number of servers to maintain. You can put 5 to 7 servers onto one box.
    • Spend less money on hardware. If you need a new server, just install a new one on your virtualized machine.
    • Decrease the need for cooling and space. With the price of energy increasing, virtualization is going to become mandatory in the future.
    • Easier to manage. No longer do you need to be familiar with 10 different types of machines; as long as you know your one machine you will be fine, with only one set of drivers and BIOS settings to worry about.
    • Easier to test with. Imagine having a test environment you can set up in minutes. If you break something and want to start from scratch, just restore the image you started with and you are good to go.

Who makes the software? The leader today is VMware (http://www.vmware.com), which has a two-year head start on its closest competitor. Microsoft and Citrix are making a hard charge to claim some of this space. We believe the battle among these companies is just beginning.

Why Select Right Click? Jim Harrington and Avi Lall are both VMware certified. After working with many customers, we know what products and add-ons you are going to need. Our installations go smoothly, and customers enjoy the benefits of virtualization immediately. Please call or email us for a free consultation on how virtualization can help your firm.

Network Management: Tips for Managing Costs

Written on 10:41 AM by Right Click IT - Technology Services

These tips, including virtualization, consolidation and measuring bandwidth consumption, will help you reduce costs as network management becomes more complex and expensive.

– Karen D. Schwartz, CIO August 25, 2008

Of all of the ongoing expenses needed to keep corporate IT running, network-related costs are perhaps the most unwieldy. New technologies, changing requirements and ongoing equipment maintenance and upgrades keep IT staff on their toes and money flowing out the door. But there are ways to manage network costs.

The Problem
According to Aberdeen Group, network costs continue to rise steadily. In 2008, for example, network spending is expected to increase slightly more than 5 percent over 2007. Telecom management industry association AOTMP of Indianapolis, Ind., backs that up, estimating that spending for voice and data services alone averages $2,000 to $3,000 per employee.

The biggest area for steady cost growth is the ever-expanding network, whether through physical expansion or a general thirst for connectivity. In the first case, a new branch office could require replication of the security infrastructure through technology like a point-to-point VPN connection. The network may need to add Multiprotocol Label Switching (MPLS) to give that branch office a wide-area, high-speed connection. And those expenses are in addition to the cost of routers, switches and network appliances that the branch office may need.

Internally, the "need for speed" is driving up network costs. The number of connection points keeps growing, whether measured in ports for network access or in network-connected devices per employee.

One growing trend is the shift from standard PCs to mobile PCs in the corporate world. Over the next five years Forrester Research believes corporate America will reach an inflection point where traditional PCs are eclipsed by mobile PCs.

"Now you have a device that perhaps needs a port or wired drop at the desk and may also need to be supported on a wireless network, so the number of means by which employees can connect to the network drives the size of the network in terms of end points of connectivity," explains Chris Silva, an analyst with Forrester Research of Cambridge, Mass.

Other factors also are contributing to spiraling network costs. Aberdeen Group, for example, found that companies expect to increase their bandwidth by 108 percent on average over the next 12 months and expect to increase the number of business-critical applications running on their networks by 67 percent.

The growth of wireless networking is also increasing IT costs. As companies begin to replace all or part of their networks with Wi-Fi networks to take advantage of newer technologies like 802.11n, they are spending liberally.

And don't forget the hidden costs: As new devices enter the network and new network end points are developed, network management becomes more complex and expensive. For example, you might have your core wired network infrastructure from Vendor A but overlay a wireless network from Vendor B, which creates two separate management consoles. And as more employees connect to the network via devices like BlackBerrys and phones, the IT staff must manage and secure these network-connected devices as well.

Clearly, companies must do what they can to manage network costs. AOTMP, a telecom consultancy based in Indianapolis, found that developing a strategy to manage network expenses was the top telecom network initiative for companies in 2008, with reducing spending for telecom services and improved asset and inventory management services rounding out the top three.

Reducing Network Costs
The first step in controlling network costs, says Aberdeen analyst Began Simi, is to take the network's pulse. That means understanding exactly where the network's performance bottlenecks are and how efficiently the network is performing.

"Throwing more bandwidth and money at the problem even though you don't understand the bandwidth consumption per application or network location can be expensive," he says.

There are automated network monitoring tools available to measure these metrics. Both sophisticated products from vendors like Cisco Systems and NetQoS and free tools like PRTG Network Monitor and pier can provide a lot of value, such as reducing bandwidth and server performance bottlenecks and avoiding system downtime.
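
To give a flavor of what even the simplest "pulse-taking" looks like, here is a minimal sketch that measures traffic per network interface on a single server using the psutil Python package. Real monitoring products poll switches, routers and applications via SNMP or NetFlow; this example only illustrates the idea, and the 10-second window is an arbitrary assumption.

```python
# Minimal sketch: measure bytes sent/received per network interface over a
# short window on one server. The measurement window is arbitrary.
import time
import psutil

def bandwidth_snapshot(window: float = 10.0) -> None:
    before = psutil.net_io_counters(pernic=True)
    time.sleep(window)
    after = psutil.net_io_counters(pernic=True)
    for nic, stats in after.items():
        if nic not in before:
            continue
        sent = (stats.bytes_sent - before[nic].bytes_sent) / window
        recv = (stats.bytes_recv - before[nic].bytes_recv) / window
        print(f"{nic}: {sent / 1024:.1f} KB/s out, {recv / 1024:.1f} KB/s in")

if __name__ == "__main__":
    bandwidth_snapshot()
```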

Once you understand what's going on in your network, there are many methods companies can use to reduce costs or prevent them from rising further.

One method is to consolidate the physical network infrastructure by finding ways to make the switch that's at the core of the network perform more functions; by doing so, you can reduce the number of appliances and bolt-on solutions your network uses. Many networking vendors like HP and Cisco are making inroads in this area.

Virtualization is a key part of network consolidation. By setting up the network infrastructure to be delivered from a pool of shared resources, those resources can be used more efficiently across a network fabric, explains Peter Fetterolf, a partner at Network Strategy Partners, a Boston consultancy. Virtualization can improve network resource utilization, efficiency and agility, helping lower the total cost of ownership.

What's more, virtualization leads to reduced overhead in areas like power and cooling; real estate; supervision, maintenance and personnel; and telecom services, he adds. And consolidation of service capacity in a single location creates more predictable demand patterns that permit better utilization, while overhead costs are spread over more productive assets such as systems administrators per server and network managers per network element.
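
A back-of-the-envelope calculation shows why those overhead savings add up. The sketch below compares annual power-and-cooling cost before and after a hypothetical 5:1 consolidation; every number in it is an assumption for illustration, not vendor or analyst data.

```python
# Hypothetical consolidation math: annual power + cooling cost before and
# after consolidating 40 lightly loaded servers onto 8 virtualization hosts.
# All inputs (wattage, $/kWh, PUE, server counts) are illustrative assumptions.
def annual_power_cost(servers: int, watts_per_server: float = 400,
                      cost_per_kwh: float = 0.12, pue: float = 1.8) -> float:
    kwh_per_year = servers * watts_per_server / 1000 * 24 * 365 * pue
    return kwh_per_year * cost_per_kwh

before = annual_power_cost(servers=40)   # standalone physical servers
after = annual_power_cost(servers=8)     # consolidated virtualization hosts
print(f"before: ${before:,.0f}/yr   after: ${after:,.0f}/yr   "
      f"savings: ${before - after:,.0f}/yr")
```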

Another part of consolidation is adopting technology that allows the IT staff to manage both the wired and wireless network from a single platform via APIs or other types of application integration tools. Most of the major network vendors are battling to provide functions like these, but third-party vendors also can help.

"That means taking one network management console and managing not only just the flow of data bits and bytes, but managing the VPN service, the WAN optimization tool and other things in the network," Silva says. "You want to consolidate your different management interfaces and consoles into one virtual single pane of glass management, where everything is on one screen."

And don't forget about what you already have in place. It doesn't make sense to invest in more technology if you're not maximizing the value of the investments you have already made, Silva says. For example, you may have spent a lot on a wireless network and mobility technology, but if the network hasn't been configured properly to use the technology, you're wasting money. If built correctly, the network can probably support technologies like voice over wireless LAN or VoIP, for example.

"Most often, you can squeeze more value from what you already have by using the same infrastructure with different overlay technologies to get more return on the investment that's already been made," he says. "So in addition to serving data, that $200,000 investment in a wireless LAN can also work toward cutting down the monthly cellular bills of an organization because that network can also support voice. And the same template can be applied for supporting things like video, using the WLAN for asset or employee tracking and presence-enabling unified communications systems."

And examine the vendors and technologies you are using for best value. If, for example, you have relied on Cisco Systems to develop your entire network, expenses could get very high very quickly. "There are a lot of different ways to build a network, and there are a lot of different options. They are all worth exploring," Fetterolf says. And once you have done that, don't be shy about pitting vendors against each other, he adds.

Finally, it can also make sense to look beyond the four walls of your organization for cost savings. Outsourcing network management, for example, can save significant money in some cases. In a recent study, Aberdeen Group found that organizations that outsourced network management reported an average savings of 26 percent as compared with previous spending.

Networks Concern IT Managers

Written on 10:49 AM by Right Click IT - Technology Services

Total user spend will almost double in the next five years, according to the analysts

By Len Rust

July 07, 2008 — Computerworld Australia — In a recent survey of more than 1100 IT decision makers in the Asia/Pacific region, IDC measured the importance of 10 key solution areas touted by IT services vendors globally. Network infrastructure solutions came up as being most important, with more than 70 per cent of respondents in markets such as Australia, China, and India indicating that solutions pertaining to the network were either important or very important.

Business continuity and disaster recovery was a close second in importance among survey respondents.

IDC estimated that total spending in network services (which includes network consulting and integration services—NCIS—and network management—NM) will grow from $US4.7 billion in 2007 to $US9.1 billion in 2012 at a compound annual growth rate of 13.7 per cent from 2007-2012. This bodes well for companies such as IBM, Hewlett-Packard, and Dimension Data (including Datacraft), ranked by IDC as the top players (in terms of revenue) in Asia/Pacific, excluding Japan, for 2007.

Business continuity and disaster recovery, which include a variety of activities aimed at protecting and safeguarding critical corporate information against unpredictable events, was another key area of importance to IT decision makers.

According to the survey, end-users have stated that overall security concerns (51.9 per cent of responses) and past experience with security threats (44 per cent of responses) were the two key issues that have prompted increased focus on business continuity and disaster recovery.

Eugene Wee, research manager of IT services at IDC Asia/Pacific, said he is concerned with the nonchalance that still exists in the marketplace. "Currently, most of the needs assessments and process improvement around business continuity and disaster recovery occurs as an afterthought to threats arising."

10 Tips for Technology Management

Written on 10:33 AM by Right Click IT - Technology Services

– Julie Bort, Network World May 07, 2007

No. 1: Fine-tune your IPS.
"There's a lot of set-it-and-forget-it mentality in intrusion-prevention system marketing, and it's dangerous," says David Newman, president of testing facility Network Test and a Network World Lab Alliance member.

Fuzzing, in which the exploit is changed just enough for the security mechanism to miss it, trips up many IPSs, Network World's recent IPS test showed.

Network managers need to understand how each exploit works and how their IPS detects them, and then upgrade that protection routinely.

No. 2: Sell security by its benefits.
Start selling security to the purse-holders the way you do all other technology investments -- in measurable terms that relate to the business, recommends Mandy Andress, president of testing facility ArcSec Technologies and a Network World Lab Alliance member. Rather than saying how dangerous viruses are as a method to gain the budget for a reputation services antispam defense, for example, illustrate how much productivity could be gained by adding another layer of antispam control.

No. 3: Automate desktop and network access.
Wireless badges can come in handy for automated access control to desktop PCs, particularly those shared by multiple users in medical exam rooms, warehouses, call centers and the like.

For example, Northwestern Memorial Physicians Group implemented Ensure Technologies' XyLoc MD, which uses 900MHz radio-frequency technology encoded on staff ID badges for authentication, says Guy Fuller, IT manager at the Chicago healthcare organization. This saves the staff time while ensuring that network access and sensitive information are not available to other users.

No. 4: Link physical access to enterprise applications.
IP-based building-access systems built on industry-standard servers and using the existing data network are more affordable than ever because of open architecture products. Advances in server-management technology mean these systems not only are deployable by network (rather than the physical security) staff but are centrally manageable. Plus, they can integrate with ERP applications and network access-control systems.

Georgia-Pacific, a US$20 billion paper manufacturer in Atlanta, is rolling out Automated Management Technologies' WebBrix, an IP-based building-access system, to the majority of its 400 locations. IT used WebBrix's open application interface to write a custom application called Mysecurity that integrates the system with SAP, among other duties. When employees swipe their badges to gain access to the building, they also are sending data to SAP for time and attendance tracking, says Steven Mobley, senior systems analyst at Georgia-Pacific.
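
The integration pattern here is straightforward: the access-control system raises a badge-swipe event, and a small custom application forwards it to the ERP system's time-and-attendance interface. The sketch below shows that pattern in generic form; the endpoint URL and payload fields are hypothetical, and it does not reproduce the real WebBrix or SAP interfaces.

```python
# Hedged sketch of the badge-swipe-to-ERP pattern described above.
# The endpoint, payload fields and security handling are hypothetical.
import datetime
import requests

ERP_ENDPOINT = "https://erp.example.com/time-attendance/events"  # hypothetical URL

def forward_badge_swipe(badge_id: str, door_id: str) -> None:
    event = {
        "badge_id": badge_id,
        "door_id": door_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event_type": "entry",
    }
    response = requests.post(ERP_ENDPOINT, json=event, timeout=5)
    response.raise_for_status()   # surface integration failures immediately

if __name__ == "__main__":
    forward_badge_swipe(badge_id="00123456", door_id="HQ-LOBBY-1")
```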

No. 5: Delegate an operating systems guru.
"Operating systems configuration can seem to some like a black art," says Tom Henderson, principle researcher for testing facility ExtremeLabs and a Network World Lab Alliance member. Setting the wrong combination is bad news. For example, large memory-block move options can affect the amount of dirty cache with which the operating system must deal, he says. If memory/caching options are balanced incorrectly, the machine could freeze. By assigning a staffer to master the voluminous documentation published by mainstream operating system vendors, servers can be safely fine-tuned to optimal performance for every application. The guru also should master Web server and BIOS setting options.

No. 6: Use VMware server memory smartly.
Without spending a dime, you may be able to boost the amount of memory available on virtualized Windows 2003 physical servers, thereby improving performance of the virtual machines. If all the virtual machines on the same physical box need the same memory-resident code, such as a dynamic link library (DLL), you can load the DLL once into the physical server's main memory and share that DLL with all virtual machines, says Wendy Cebula, COO at VistaPrint, an international online printer with U.S. operations headquartered in Lexington, Mass. "We've gotten big memory usage benefits by caching once per physical box rather than once per usage," she says.

No. 7: Move applications to a Linux grid.
If you have compute-intensive mainframe applications, don't shy away from lower-cost alternatives such as grid computing just because the applications were written in COBOL, says Brian Cucci, manager of the Advanced Technology Group at Atlanta-based UPS, which has such a grid. The application will likely have to be redesigned somewhat for the new hardware platform. But vendors can be counted on to help, as they'll be eager to partner with you on the new technology.

No. 8: Recognize WAN links may degrade VoIP QoS.
This is particularly true in areas of the country where the public infrastructure is aging, says Bruce Bartolf, principal and CTO of architecture firm Gensler, in San Francisco. Having completed VoIP installation at seven of 35 sites, Bartolf found unexpectedly high error rates or complete failure on many links. To provide the kind of uptime and quality demanded of phone service, you need to design with alternative failover paths on the WAN. Cable may not be much better, but Metro Ethernet, if available, could work well, he says.

No. 9: Ease IP management with an appliance.
Although the tasks that appliances perform can be done with each vendor's gear, "with something as important as IP management, if you don't do it well, you can really hurt your five-nines," Gensler's Bartolf says. He chose Infoblox appliances, which manage numerous tasks, including Trivial File Transfer Protocol (TFTP) firmware upgrades. "Rather than dealing with Microsoft distributed file system, loading a TFTP server on a Microsoft server, running DHCP on a Microsoft server, running SMS on top of that, and managing it all, I have an appliance," he says. "I put it in, and it works."

No. 10: Shelve the fancy visuals.
"We found it highly impractical to make our monitoring visual," VistaPrint's Cebula says. VistaPrint relies on remote monitoring to manage its data centers, including one in Bermuda. It uses homegrown tools to track everything from CPU usage to event correlation. Visual graphing of events slowed down detection and analysis, taking network operations staff an average of five to seven minutes per event to use, Cebula says. When the tools used simple red, yellow and green lights, detection and correlation dropped to one or two minutes per event, she says.

And don't forget to keep your monitoring tools on at all times and to run spot checks, advises Barry Nance, independent consultant and Network World Lab Alliance member. The most common mistake is waiting until an event occurs to turn them on.