At my current client, a large health care organization, I needed to dispose of some old equipment that had personal health information about patients on it. I got directed to a front-line employee who operates a machine to degauss disk drives.
Knowing the organization, I was sure that wasn't all I needed to do. Fortunately, I knew how to track down the financial, inventory, and other people who would be interested in reselling the machine if possible, and otherwise getting rid of it, all the way to the dump and off the financial books. In total, I'll have to manage the disposal myself through three or four departments, and at least that many individuals.
What I really wanted was a single phone number I could call and say, "In April, get rid of this thing for me," and be done with it.
I think that's why we hear so much about "aligning IT with the business" these days. It's not just the big picture, find-a-way-to-put-your-business-on-the-web-and-make-yourself-rich alignment. It's also because we confuse an IT task with a business service. To the business there's value in an internal 1-800-got-junk number for information assets. There's very marginal value having someone in a room who can degauss disk drives (and who only gets called if someone is technologically savvy enough to know to call them anyway).
How can you tell if something is an IT task or a business service? Start by really getting into the head of the person who would use your service. Better yet, ask them. If you can't sell the service, or at least get someone excited about it in five minutes, then you'd better rethink your service.
By the way, my remarks about tasks shouldn't be taken as disparaging the people who do the real work. The internal 1-800-got-junk model needs someone to run the degausser, and their work is critical to making the whole model work. IT is sufficiently complicated that in medium to large organizations almost any business service will require multiple tasks carried out by multiple individuals. The shift to service thinking has to happen at the management level. The people doing the tasks are usually doing the right thing.
Saturday, 5 January 2008
Saturday, 10 November 2007
A Tangled Web We Weave
I'm currently dealing with people who want to buy a technology that's flexible and can do all sorts of things that they don't currently need it to do. This is slowing down the project, and causing them a lot of stress.
The technology we buy today will be obsolete in five years (if it lasts that long). So by the time you want to do something more with the existing technology, you'll be able to do it better and cheaper with whatever is on the market at that time, not what you bought three years ago. Why should you pay extra for features that you're not going to use until the equipment is half way or more through its useful life?
If your organization is of any reasonable size, the real cost of doing something new with technology is the training and organizational change, not the capital cost of the equipment. You may want to take advantage of those features you paid for two years ago, but you still can't afford to because of the training costs for staff.
The idea that we have to buy for future unknown needs, not just what we need now, is pervasive in IT. Perhaps it's because we're always overpromising what we can deliver. When we deliver and the customer is unhappy, we beat ourselves up for not buying (overspending on) all the extra features so we get the one the user really wants. It's time to stop overpromising.
Saturday, 27 October 2007
An Observation on Blogs and Podcasts
I've come quite late to the Web 2.0 world. About a year and a half ago I started to listen to podcasts while walking my dog. One of the things I discovered is how differently I react to people when I hear their voices in podcasts.
When I read what Steve McConnell and Joel Spolsky write, I have trouble getting to the end of their articles because they seem to be just so wrong about too much. (I'll explain why below.) However, when I heard podcasts by them, they made a lot more sense. I don't know why, but there's something about a verbal communication, even when the person isn't present, that seems to somehow help me hear the whole message in the right context.
For example, when I read Joel's stuff about how to manage software teams, I think he's out to lunch because what he recommends would be impossible to implement for 99.9 percent of working software development managers out there. I'm sure he's said as much in his writing, but it wasn't until I heard a podcast by him that I really heard how much he admits that his is a special case. As soon as I heard that, my opinion of him as a person changed and I was able to read and listen to him in a whole different way.
With McConnell, I've always felt that his experience, research and analysis of software development was staggeringly good, which made the fact that he draws absolutely the wrong conclusions from his knowledge all the more maddening. I forget what it was about the podcast that softened my opinion of him, but I do remember quite well finishing the podcast and thinking that, while his conclusions are still wrong, I have much more respect for him than I did from his writings.
Sunday, 21 October 2007
Mac Joke
I'm part-way through a podcast by Guy Kawasaki where he recounts a joke the Apple II team had in the early days of the Macintosh: How many Mac team members does it take to screw in a light bulb? One: He just holds the light bulb and waits for the universe to revolve around him.
The podcast is good, too. At least so far.
Saturday, 20 October 2007
How Long Will People Put Up With Us?
Vancouver Coastal Health is gearing up to make sure that no one has problems with meetings scheduled in Outlook when we switch back to standard time from daylight saving time. In the Pharmacy leadership group, about eight senior managers, the executive assistants have spent at least a full person-day, if not more, changing the subject line of meetings to include the intended meeting time, as Microsoft recommends.
This is an office productivity tool?
I know DST changes and calendaring applications aren't easy. You can find lots of discussion on the web about the challenges. In this case, we seem to have put the responsibility for dealing with the complexity on the users, rather than figuring it out and giving the users a solution. But do we really think we can expect our users to put up with this twice a year forever?
I believe if you handled the DST rule change in March 2007, you shouldn't have to do anything else. However, IT organizations seem to think otherwise. Are they just covering their butts?
In one sense I don't blame the IT staff at an organization for being a bit reluctant to try to optimize the process. Take a look at the Microsoft knowledge base topic on the DST change and Outlook. The table of contents fills my entire screen top to bottom, and I use a small font.
So what should IT departments do? One thing you can do is be brave and not tell the users to do anything special. Then, when the complaints come in about meetings being wrong, go out and fix the computers that didn't get the timezone update, or that didn't run the Outlook timezone fix tool. Sure, the affected users will think you're a jerk because their calendars were wrong once. But you know what? All your users already think you're a jerk twice a year because you expect them to do all sorts of manual workarounds.
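The underlying mechanics are worth seeing concretely. A calendar entry is stored as an instant in time (effectively UTC) and rendered in local time using the machine's timezone rules, so a machine with stale DST rules shows the wrong wall-clock hour. A minimal sketch in Python, using the 2007 North American rule change as the example (the specific date and timezone here are illustrative):

```python
from datetime import datetime, timezone, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+, uses the system tz database

# A meeting stored as an instant: 17:00 UTC on 15 March 2007.
meeting_utc = datetime(2007, 3, 15, 17, 0, tzinfo=timezone.utc)

# With the post-2007 rules, DST began 11 March, so this renders as
# 10:00 Pacific Daylight Time.
local = meeting_utc.astimezone(ZoneInfo("America/Vancouver"))
print(local.strftime("%H:%M %Z"))  # 10:00 PDT

# A machine that never got the 2007 rule update still thinks mid-March
# is standard time (UTC-8), so the same meeting shows up an hour early.
stale = meeting_utc.astimezone(timezone(timedelta(hours=-8), "PST"))
print(stale.strftime("%H:%M %Z"))  # 09:00 PST
```

The fix is to update the timezone rules on the machine, not to have users annotate every meeting subject by hand.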
Friday, 31 August 2007
This is What I Was Afraid Of
Part of the reason I started blogging was because I was "between contracts," as we say. I never seemed to have time to write when I was working full time and trying to have a life. Sure enough, I've posted only three times, including this one, since I got my current contract.
Who cares? Well, one of the reasons we get things so wrong in IT is that technology doesn't do what we were told it does. One of the great things about the Internet is that it's given us access to people who are actually using technology, so we can solve problems faster. Blogging, however, demands a certain amount of time, and that time comes out of the time you spend doing.
The bottom line: There's useful stuff in blogs, but you have to filter out the rantings from the useful information yourself.
iSCSI vs. Fibre Channel - You're Both Right
Reading another article from an expert who provides less than useful information has finally prompted me to try to offer practical guidance for IT managers responsible for 50 to 1,000 diverse servers running a variety of applications.
iSCSI vs. fibre channel (FC) is a classic technology debate with two camps bombarding each other mercilessly with claims that one or the other is right. The reason the debate is so heated and long lived is because there isn't a right answer: there are different situations in which each one is better than the other. Here's how to figure out what's best for you:
Start with the assumption that you'll use iSCSI. It's less expensive, so if it does what you need, it should be your choice. It's less expensive at all levels: the switches and cables enjoy the economy of scale of the massive market for IP networking. You already have staff who know how to manage IP networks. You already have a stock of Cat 6 cables hanging in your server rooms or network closets.
If you have mostly commodity servers, they transfer data to and from direct-attached storage at less than gigabit speeds. Gigabit iSCSI is fine. If you have a lot of servers, you have to size the switches correctly, but you have to do that with FC as well, and the FC switch will be more expensive. Implement jumbo frames so backups go quickly.
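The backup-window arithmetic is worth doing explicitly before paying for more bandwidth. A rough sketch (the 70 percent sustained-utilisation figure is my assumption for a well-tuned gigabit link with jumbo frames; your mileage will vary):

```python
def backup_hours(data_tb: float, link_gbps: float = 1.0,
                 efficiency: float = 0.70) -> float:
    """Hours to move data_tb terabytes over a link_gbps link.

    efficiency is the assumed fraction of line rate actually sustained;
    without jumbo frames, per-packet overhead often drops it well lower.
    """
    bits = data_tb * 1e12 * 8                       # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)  # bits / (bits per second)
    return seconds / 3600

# About 2 TB of nightly backup over a single gigabit iSCSI link:
print(round(backup_hours(2.0), 1))  # roughly 6.3 hours
```

If that number fits comfortably inside your backup window, gigabit iSCSI is enough; if it doesn't, that's a concrete argument for faster links, not a vague one.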
Just because you're using iSCSI doesn't mean you're running your storage network over the same cables and switches as your data centre LAN. In fact, you probably aren't. The cost saving doesn't come from sharing the existing LAN, it comes from the lower cost per port and the reduced people cost (skill sets, training, availability of administrators in the labour market) of using the same technology. As long as your storage and general-purpose networks are not sharing the same physical network, a lot of the criticisms of iSCSI evaporate.
If you have large, specialized servers that can and do need to sustain high data transfer rates, then definitely look at FC. Be sure you measure (not just guess) that you need those data transfer rates.
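Measuring doesn't have to mean buying a monitoring suite. On Linux, the kernel already counts every sector moved per block device, so a few lines get you sustained throughput numbers. A quick-and-dirty sampler (Linux-only; the device name you pass in is whatever your environment uses, e.g. `sda`):

```python
import time

def disk_sectors(device: str) -> tuple[int, int]:
    """Return cumulative (sectors_read, sectors_written) for a block device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # Fields: major minor name reads ... sectors_read ... sectors_written
            if fields[2] == device:
                return int(fields[5]), int(fields[9])
    raise ValueError(f"device {device!r} not found")

def throughput_mbps(device: str, interval: float = 5.0) -> float:
    """Average combined read+write throughput in MB/s over the interval."""
    r0, w0 = disk_sectors(device)
    time.sleep(interval)
    r1, w1 = disk_sectors(device)
    bytes_moved = ((r1 - r0) + (w1 - w0)) * 512  # sectors are 512 bytes here
    return bytes_moved / interval / 1e6

# Gigabit iSCSI tops out around 110-120 MB/s in practice. Sample your
# servers during their busiest hour; if none of them approach that,
# FC's extra bandwidth buys you nothing.
```

Tools like `iostat` or `sar` give you the same counters with less effort; the point is to collect real numbers over real workloads before deciding.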
If you have a large farm of physical servers running a huge number of virtual machines (VMs), look at FC. My experience is that virtual machine infrastructures tend to be limited by RAM on the physical servers, but your environment may be different. You may especially want to think about how you back up your VMs. You may not need the FC performance during the day, but when backups start, watch out. It's often the only time of day when your IT infrastructure actually breaks a sweat.
You might look at an FC network between your backup media servers and backup devices, especially if you already have an FC network for one of the reasons above.
Yes, FC will give you higher data transfer rates, but only if your servers and storage devices can handle it, and few today go much beyond one gigabit. FC will guarantee low latency so your servers won't do their equivalent of "Device not ready, Abort, Retry, Ignore?"
The challenge for an IT manager, even (or especially) those like me who have a strong technical background, is that it's easy to get talked into spending too much money because you might need the performance or low latency. The problem with that thinking is that you spend too much money on your storage network, and you don't have the money left over to, for example, mirror your storage, which may be far more valuable to your business.
A final warning: neither technology is as easy to deal with as the vendor would have you believe (no really?). Both will give you headaches for some reason along the way. If it wasn't hard, we wouldn't get the big bucks, would we?