Tough Times and Three Unequivocal Standards of IT Agility

Written on 10:48 AM by Right Click IT - Technology Services

Michael Hugos/Blog: Doing Business in Real Time

So the CEO and the CFO are telling you to cut IT expenses. Tell them that, for the good of the company, you can't do that. Tell them you already run a lean operation, and that saving another 10 percent on the IT budget is small potatoes compared to using IT to save 10 percent on the operating expenses of the whole company, or to grow company revenue by 10 percent.

In the stunned silence that follows, as all eyes around the table turn your way to see how you will recover from that jaw-dropping bit of impertinence, drive home your point. Propose that instead of cutting IT, you'll work with the CEO, the COO and the VP of Sales to create strategies that deliver those savings in company operating expenses and attain those increases in revenue. Seal your offer by publicly committing to power the resulting business strategies with a systems infrastructure that meets three unequivocal standards of IT agility: 1) No cap ex; 2) Variable cost; and 3) Scalable.

Commit to the standard of no cap ex (no capital expense) because it's the order of the day in business. Revenue and profits are under pressure and credit is harder to get, so there is less money for capital investments. And because we're in a period of rapid technological change, making big up-front investments is risky: the technology you buy may become obsolete far faster than expected. So smart IT execs learn to get systems in place without a lot of up-front cost. That means using SOA and SaaS and mashups and cloud computing to deliver new systems.

Committing to the standard of a variable cost operating model is very smart because it's a great way to protect company cash flow. Pay-as-you-go operating models (like those the SaaS and cloud computing vendors offer) mean operating expenses will rise if business volumes rise, but, just as important, they will drop or stay small if business volumes contract or don't grow as big or as fast as expected: you pay more only when you're making more, and you pay less when you're making less. In this economy, where it is so hard to predict what will happen next and where companies need to keep trying new things to find out where new opportunities lie, variable cost business models are best for managing financial risk.
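
To make the cash-flow argument concrete, here is a minimal sketch in Python, using purely hypothetical figures, of how a pay-as-you-go model compares with an up-front capital purchase when business volumes come in below forecast:

```python
# Hypothetical figures for illustration only.
capex_system_cost = 500_000          # one-time capital purchase, paid regardless of volume
per_transaction_fee = 0.50           # pay-as-you-go rate from a SaaS/cloud vendor

forecast_volume = 1_000_000          # transactions per year we planned for
actual_volume = 400_000              # what the downturn actually delivered

capex_cost = capex_system_cost                        # fixed: the money is already spent
variable_cost = per_transaction_fee * actual_volume   # scales down with the business

print(f"fixed-cost model:    ${capex_cost:,.0f}")
print(f"variable-cost model: ${variable_cost:,.0f}")
```

Under the fixed-cost model the money is spent whether or not the volume materializes; under the variable-cost model the expense shrinks along with the business.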

Committing to a scalable systems infrastructure enables companies to enjoy the benefits of the first two standards. A scalable systems infrastructure lets a company "think big, start small, and deliver quickly." The CEO and COO and VP of Sales can create strategies with big potential and try them out quickly on a small scale to see if they justify further investment. Start with targeted 80 percent solutions to the most important needs, then build further features and add more capacity as business needs dictate. Companies don't outgrow scalable systems, and they don't have to rip them out and replace them.

Making such an offer to your CEO might sound pretty bold and risky, but consider this: if your plan is just to cut your IT budget and try to keep your head down, chances are excellent you won't survive anyway. That's because if you dumb down your IT operations and IT is seen as a cost center instead of part of your company's value proposition, your CEO and your CFO are going to quickly see that a great way to save an additional six-figure sum is to fire you. Who needs a highly paid person like you to run a cost center?

SLAs: How to Show IT's Value

Written on 10:13 AM by Right Click IT - Technology Services

From: www.cio.com – Bob Anderson, Computerworld December 02, 2008

Over a career in information technology spanning multiple decades, I have observed that many IT organizations have focused process improvement and measurement almost exclusively on software development projects.

This is understandable, given the business-critical nature and costs of large software development projects. But in reality, IT support services consume most of the IT budget, and they also require the most direct and continuous interaction with business customers.

IT organizations must demonstrate the value of IT support services to business customers, and a primary way of doing this is through service-level agreements. SLAs help IT show value by clearly defining the service responsibilities of the IT organization that is delivering the services and the performance expectations of the business customer receiving the service.

One of the most difficult tasks in developing an SLA is deciding what to include. The following sample SLA structure provides a good starting point.

Introduction: This identifies the service, the IT organization delivering that service and the business customer receiving it.

Examples:

  • Infrastructure support for a shipping warehouse.
  • Software application support for the payroll staff.

Description of services: This characterizes the services to be provided, the types of work to be performed and the parameters of service delivery, including the following:

  • The types of work that are part of the service (maintenance, enhancement, repair, mechanical support).
  • The time required for different types and levels of service.
  • The service contact process and detailed information for reaching the help desk or any single point of contact for support services.

Description of responsibilities: This delineates responsibilities of both the IT service provider and the customer, including shared responsibilities.

Operational parameters: These may affect service performance and therefore must be defined and monitored.

Examples:

  • Maximum number of concurrent online users.
  • Peak number of transactions per hour.
  • Maximum number of concurrent user requests.

If operational parameters expand beyond the control of the service provider, or if users of the service exceed the limits of specified operational parameters, then the SLA may need to be renegotiated.

Service-level goals: These are the performance metrics that the customer expects for specific services being delivered. SLGs are useless unless actual performance data is collected. The service being delivered will dictate the type and method of data collection.

It is important to differentiate between goals that are equipment-related and service-level goals that are people- and work-related.

Examples:

  • Equipment SLG: 99% network availability 24/7.
  • People and work SLG: critical incidents resolved within two hours.
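
As a sketch of the kind of data collection this implies, the following Python fragment (with hypothetical incident data) computes attainment of the people-and-work SLG above, critical incidents resolved within two hours:

```python
from datetime import timedelta

# Hypothetical resolution times for critical incidents over one reporting period.
resolution_times = [timedelta(minutes=m) for m in (45, 130, 95, 180, 60)]
target = timedelta(hours=2)   # the SLG: critical incidents resolved within two hours

met = sum(1 for t in resolution_times if t <= target)
attainment = met / len(resolution_times)
print(f"{met} of {len(resolution_times)} critical incidents resolved within "
      f"{target} ({attainment:.0%} attainment)")
```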

Service-improvement goals: These establish the required degree and rate of improvement for a specific SLG over time. An SIG requires that a performance trend be calculated over a specified period, in addition to capturing the specific SLG data. This trend indicates the rate of improvement and whether the improvement goal has been achieved.
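
A minimal illustration of such a trend calculation, again with hypothetical monthly data, is a least-squares slope over the period being measured:

```python
# Monthly attainment of a single SLG (hypothetical data: percent of critical
# incidents resolved on time). The least-squares slope through these points is
# the performance trend: it shows the rate of improvement per month.
monthly_attainment = [88.0, 90.5, 91.0, 93.5, 95.0]

n = len(monthly_attainment)
xs = list(range(n))
mean_x = sum(xs) / n
mean_y = sum(monthly_attainment) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_attainment)) \
        / sum((x - mean_x) ** 2 for x in xs)
print(f"improvement trend: {slope:+.2f} percentage points per month")
```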

Service-performance reporting: This states IT's commitment to delivering reports to the business customer on a scheduled basis. The reports detail actual services delivered and actual levels of performance compared to the commitments stated within the SLA.

Sign-off: This provides signature lines and dates for authorized representatives of the IT organization delivering the service and of the business customer receiving it.

The hardest part of developing an SLA may be getting started. I hope this framework will help you begin to demonstrate IT's value to your customers.

Anderson is director of process development and quality assurance at Computer Aid Inc. Contact him at bob_anderson@compaid.com.

When "IT Alignment with the Business" Isn't a Buzzword

Written on 10:10 AM by Right Click IT - Technology Services

December 01, 2008 – Matt Heusser, CIO

IT leaders were told to "do more with less" even before economic woes exacerbated the issue. Savvy managers have always kept their eye on the goal: demonstrating what IT can do for the business, so that it's not always viewed as a cost center. Last week, one IT manager explained her strategy.

At a meeting of the Grand Rapids Association of IT Professionals (AITP), Krischa Winright, associate VP of Priority Health, a health insurance products provider, demonstrated her IT team's accomplishments over the past year. Among the lessons learned: talented development organizations can gain advantages from frugality (including developing applications using internal resources and open-source technologies); you can ferociously negotiate costs with vendors; and virtualization can save the company money and team effort. End result: an estimated 12 percent reduction in expense spending (actual dollars spent) in 2008.

I asked Krischa about what her team had done at Priority Health, and how other organizations might benefit from her approach.

CIO: First, could you describe your IT organization: its size and role?

Winright: Priority Health is a nationally recognized health insurance company based in Michigan. Our IT department has approximately 90 full-time staff, whose sole objective is to support Priority Health's mission: to provide all people access to excellent and affordable health care. The implications of this mission for IT are to support cutting-edge informatics strategies in the most efficient way possible. We staff all IT services and infrastructure functions, in addition to software development capability.

CIO: In your AITP talk, you mentioned basic prerequisites to transparency and alignment. Can you talk about those for a moment?

Winright: Prior to 2008, we put in place a Project Management Office with governance at the executive level. Our executive steering committee prioritized all resources in IT dedicated to large projects, which meant that we already were tightly, strategically aligned with the business. ROI for all new initiatives is calculated, and expenditures (IT and non-IT) are tracked.

CIO: So you put a good PMO in place to improve the organization's ability to trace costs. Then what?

Winright: Well, let's be careful. First, project costs associated with large business initiatives are only one portion of IT spending. Additionally, cutting costs is easy; you just decrease the services you offer the business.

Instead, we wanted to cut costs in ways that would enhance our business alignment, and increase (rather than decrease) the services we offer. To do that, we had to expose all of the costs in IT (PMO and non-PMO) in terms that the business could understand. In other words: business applications.

We enumerated all IT budgetary costs by application, and then bucketed them based upon whether they were (1) existing services (i.e. keeping the "true" IT lights on) or (2) new services being installed in 2008.
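
As an illustration only, a sketch of that bucketing might look like the following, with invented application names and dollar figures standing in for the real budget data:

```python
from collections import defaultdict

# Hypothetical per-application cost records; category is either
# "existing" (keeping the "true" IT lights on) or "new" (services installed this year).
costs = [
    {"application": "claims system",  "category": "existing", "dollars": 400_000},
    {"application": "member portal",  "category": "new",      "dollars": 150_000},
    {"application": "data warehouse", "category": "existing", "dollars": 250_000},
]

buckets = defaultdict(float)
for item in costs:
    buckets[item["category"]] += item["dollars"]

total = sum(buckets.values())
for category, dollars in buckets.items():
    print(f"{category}: ${dollars:,.0f} ({dollars / total:.0%} of IT spend)")
```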

We then launched a theme of "convergence" in IT, which would allow us to converge to fewer technologies/applications that offer the business the same functionality, while increasing the level of service for each offering.

CIO: So you defined the cost of keeping the "true" IT lights on. What about new projects and development?

Winright: We adopted Forrester's MOOSE model. We established the goals of reducing the overall cost of MOOSE ("true" IT lights on) and increasing the amount of funding of items of strategic business importance.

Using the MOOSE framework, we finally understood the true, total cost of our business applications and our complete IT portfolio. This allowed us to quickly see opportunities for convergence and execute those plans. By establishing five work queues which spanned all of IT—Operations, Support, IT Improvement, PMO, Small Projects—we learned how all 90 of our staff were spending their time. That let us make adjustments to the project list to "converge" their time to items of most imminent strategic return.

CIO: In your talk, you said an economic downturn can be a time of significant opportunity for your internal development staff.

Winright: Businesses in Michigan are acutely aware of the economic downturn. Our health plan directly supports those businesses, so we are optimizing our spending just like everyone else.

Maximum benefit must be gained for every dollar spent. Every area of the company is competing for expenditures in ways they weren't before.

Yet when budgets are cut, businesses' core values dictate keeping talented people. In IT, a talented development organization can seize the opportunity of frugality and provide help across a plethora of business opportunities in an extremely cost-effective way. Developing applications using internal resources and open-source technologies has a more favorable cost profile than third-party vendor applications, with their extensive implementation costs and recurring, escalating maintenance expense. Additionally, the decline of major third-party software implementations gives IT more bandwidth to partner side by side with the business.

CIO: What other steps have you taken to win trust?

Winright: We converted costly contracted labor associated with MOOSE to internal staff. Given exposure to the true cost of our business applications, we ferociously negotiated costs with our vendors. We took advantage of virtualization and other convergence technologies to maximize the benefit from spending, and in this first year of embracing the theme of convergence we eliminated more than 10 items from our environment (consolidating environments, consolidating hosts through virtualization, and converging to one scheduler).

The fruit of our labor is an estimated 12 percent reduction in expense spending (actual dollars spent) in 2008. More importantly, we have proven a 6 percent shift of spending from existing service costs to new services. This is a powerful message to share with business partners. They will ultimately benefit when 6 percent more IT spending is directed to new initiatives rather than to existing services costs.

CIO: What's been the most painful part of this process for you?

Winright: Two things. First, it was difficult and time consuming to gather all actual budgetary expenses and tie them to a specific service. For most organizations our size, this information is held across several cost centers and managers, and the technical infrastructure itself is complex.

Second, it is always difficult to take 90 technologists and get them aligned around common themes. We continue to strive for internal alignment and eventual embodiment of these themes.

CIO: Pretend for a moment you are speaking to a peer at an organization the size of Priority Health or a little larger. What advice would you have on quick wins and things to do tomorrow?

Winright: Although painful and time consuming, it is imperative that you and your business peers understand the complete picture of IT spending in terms of business strategy. Then, and only then, will transparency into IT spending be an effective tool to increase business alignment.

Get your internal resources aligned around common themes, because an aligned group of highly intelligent people on a singular mission can yield incredible results.

How the Internet Works: 12 Myths Debunked

Written on 10:09 AM by Right Click IT - Technology Services

– Carolyn Duffy Marsan, Network World November 21, 2008

The Internet Protocol (IP) keeps evolving: What incorrect assumptions do we make when we send an e-mail or download a video?

Thirty years have passed since the Internet Protocol was first described in a series of technical documents written by early experimenters. Since then, countless engineers have created systems and applications that rely on IP as the communications link between people and their computers.

Here's the rub: IP has continued to evolve, but no one has been carefully documenting all of the changes.

"The IP model is not this static thing," explains Dave Thaler, a member of the Internet Architecture Board and a software architect for Microsoft. "It's something that has changed over the years, and it continues to change."

Thaler gave the plenary address Wednesday at a meeting of the Internet Engineering Task Force, the Internet's premier standards body. Thaler's talk was adapted from a document the IAB has drafted entitled "Evolution of the IP Model."

"Since 1978, many applications and upper layer protocols have evolved around various assumptions that are not listed in one place, not necessarily well known, not thought about when making changes, and increasingly not even true," Thaler said. "The goal of the IAB's work is to collect the assumptions—or increasingly myths—in one place, to document to what extent they are true, and to provide some guidance to the community."

The following list of myths about how the Internet works is adapted from Thaler's talk:

1. If I can reach you, you can reach me.
Thaler dubs this myth, "reachability is symmetric," and says many Internet applications assume that if Host A can contact Host B, then the opposite must be true. Applications use this assumption when they have request-response or callback functions. This assumption isn't always true because middleboxes such as network address translators (NAT) and firewalls get in the way of IP communications, and it doesn't always work with 802.11 wireless LANs or satellite links.
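
As a small illustration (in Python, with a hypothetical peer address), a forward connectivity test like the one below proves only one direction; a NAT or firewall in front of the caller can still block the reverse path that a callback-style design would need:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Forward reachability test: can this host open a TCP connection to (host, port)?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical peer; success here says nothing about the reverse direction.
print("we can reach the peer:", can_connect("example.com", 80))
# A NAT or firewall in front of *us* may still stop the peer from connecting back,
# so a callback-style design must not assume the peer can reach our listening port.
```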

2. If I can reach you, and you can reach her, then I can reach her.
Thaler calls this theory "reachability is transitive," and says it is applied when applications do referrals. Like the first myth, this assumption isn't always true today because of middleboxes such as NATs and firewalls, as well as on 802.11 wireless and satellite links.

3. Multicast always works.
Multicast allows you to send communications out to many systems simultaneously as long as the receivers indicate they can accept the communication. Many applications assume that multicast works within all types of links. But that isn't always true with 802.11 wireless LANs or across tunneling mechanisms such as Teredo or 6to4.
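
Here is a minimal, self-contained Python sketch of the send-and-receive pattern such applications rely on, using a hypothetical multicast group; on links that don't support multicast (for example some 802.11 or tunneled paths), the receive step simply times out:

```python
import socket, struct

GROUP, PORT = "239.1.2.3", 5007   # hypothetical multicast group and port

# Receiver: join the group and wait for one datagram.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Sender: one datagram to the group; TTL 1 keeps it on the local link.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
tx.sendto(b"hello", (GROUP, PORT))

rx.settimeout(2.0)
try:
    data, addr = rx.recvfrom(1024)
    print("multicast delivered:", data, "from", addr)
except socket.timeout:
    print("no datagram received; the link may not support multicast")
```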

4. The time it takes to initiate communications between two systems is what you'll see throughout the communication.
Thaler says many applications assume that the end-to-end delay of the first packet sent to a destination is typical of what will be experienced afterwards. For example, many applications ping servers and select the one that responds first. However, the first packet may carry additional latency because of the look-ups performed along the way, so applications relying on this assumption may choose longer paths and see slower response times. Increasingly, applications such as Mobile IPv6 and Protocol Independent Multicast send packets on one path and then switch to a shorter, faster path.
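
One hedged way to avoid the first-packet trap is to time several probes and use the median rather than the first response; the sketch below (with hypothetical host names) uses TCP connection setup time as a rough proxy for path delay:

```python
import socket, statistics, time

def tcp_rtt(host, port=80, samples=3):
    """Time several TCP handshakes; the first often bears one-time costs such as
    name resolution, so later samples are a better guide to typical delay."""
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=2):
            pass
        rtts.append(time.monotonic() - start)
    return rtts

# Hypothetical candidate servers; pick by median of several probes, not the first reply.
for host in ("example.com", "example.org"):
    samples = tcp_rtt(host)
    print(host, "first:", round(samples[0] * 1000, 1), "ms",
          "median:", round(statistics.median(samples) * 1000, 1), "ms")
```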

5. IP addresses rarely change.
Many applications assume that IP addresses are stable over long periods of time. These applications resolve names to addresses and then cache them without any notion of the lifetime of the name/address connection, Thaler says. This assumption isn't always true today because of the popularity of the Dynamic Host Configuration Protocol as well as roaming mechanisms and wireless communications.
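
A small sketch of the safer behavior, assuming the third-party dnspython package is available, caches a resolved address only for the TTL the DNS record actually advertises:

```python
# A minimal sketch; assumes the third-party dnspython package is installed.
import time
import dns.resolver

_cache = {}  # name -> (addresses, expiry time)

def resolve(name):
    """Resolve a name but honour the record's TTL instead of caching forever."""
    hit = _cache.get(name)
    if hit and hit[1] > time.monotonic():
        return hit[0]
    answer = dns.resolver.resolve(name, "A")
    addresses = [rr.address for rr in answer]
    _cache[name] = (addresses, time.monotonic() + answer.rrset.ttl)
    return addresses

print(resolve("example.com"))
```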

6. A computer has only one IP address and one interface to the network.
This is an example of an assumption that was never true to begin with, Thaler says. From the onset of the Internet, hosts could have several physical interfaces to the network, and each of those could have several logical Internet addresses. Today, computers are dealing with wired and wireless access, dual IPv4/IPv6 nodes and multiple IPv6 addresses on the same interface, making this assumption truly a myth.
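
A few lines of Python show how many addresses a single name can present today; an application should iterate over all of them rather than assuming one address per host:

```python
import socket

# One name can map to several addresses across IPv4 and IPv6;
# iterate over all of them rather than assuming a single address.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443, 0, socket.SOCK_STREAM):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```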

7. If you and I have addresses in a subnet, we must be near each other.
Some applications assume that the IP address used by an application is the same as the address used for routing. This means an application might assume that two systems on the same subnet are near each other and are better peers to talk to than a system far away. This assumption doesn't hold up because of tunneling and mobility. Increasingly, new applications are adopting a scheme known as an identifier/locator split, which separates the IP addresses used to identify a system from the IP addresses used to locate it.

8. New transport-layer protocols will work across the Internet.
IP was designed to support new transport protocols on top of it, but increasingly this isn't true in practice, Thaler says. Most NATs and firewalls only allow the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) for transporting packets, and newer Web-based applications only operate over the Hypertext Transfer Protocol (HTTP).

9. If one stream between you and me can get through, so can another one.
Some applications open multiple connections between two systems, one for data and another for control. The problem is that middleboxes such as NATs and firewalls block certain ports and may not allow more than one connection. That's why applications such as the File Transfer Protocol (FTP) and the Real-time Transport Protocol (RTP) don't always work, Thaler says.

10. Internet communications are not changed in transit.
Thaler cites several assumptions about Internet security that are no longer true. One of them is that packets are unmodified in transit. While it may have been true at the dawn of the Internet, this assumption is no longer true because of NATs, firewalls, intrusion-detection systems and many other middleboxes. IPsec addresses this problem by authenticating and encrypting IP packets, but this security scheme isn't widely used across the Internet.
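
Where IPsec is not available end to end, applications sometimes add their own integrity check at the application layer; the sketch below uses a keyed HMAC over the payload (with a hypothetical pre-shared key) so the receiver can detect modification in transit. This is an illustration of the general idea, not a substitute for IPsec:

```python
import hashlib, hmac, os

# Hypothetical pre-shared key; in practice keys would be provisioned securely.
key = os.urandom(32)
message = b"order=42&amount=100"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    """Receiver recomputes the tag; a mismatch means the payload was altered in transit."""
    return hmac.compare_digest(hmac.new(key, message, hashlib.sha256).hexdigest(), tag)

print(verify(key, message, tag))          # True
print(verify(key, message + b"x", tag))   # False: tampering detected
```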

11. Internet communications are private.
Another security-related assumption Internet developers and users often make is that packets are private. Thaler says this was never true. The only way for Internet users to be sure that their communications are private is to deploy IPsec, which is a suite of protocols for securing IP communications by authenticating and encrypting IP packets.

12. Source addresses are not forged.
Many Internet applications assume that a packet actually comes from the IP source address it carries. However, IP address spoofing has become common as a way of concealing the sender's identity in denial-of-service and other attacks. Applications built on this assumption are vulnerable to attack, Thaler says.