Sunday, 19 December 2010

Managing Music on an SD Card for Nintendo DS

I wanted to manage music on an SD card, as that's the way a Nintendo DS gets its music. I wanted my son to have his music on his DS so he didn't have to keep track of and charge an MP3 player and a DS.

First, Rhythmbox on Ubuntu 10.04 wouldn't recognize the SD card. Fortunately, I'd stumbled across the solution to that a few weeks ago: put a file called ".is_audio_player" in the root of the SD card.

However, it still wouldn't copy m4a files over. The DS only plays m4a files, not mp3. I discovered that you can actually put content in the .is_audio_player file to control the behaviour of Rhythmbox. It was still hard to figure out what to put in, but finally I looked up what Rhythmbox uses to manage an iPod, and put this in the .is_audio_player file:

folder_depth=99
output_formats=audio/mpeg,audio/aac,audio/x-wav,audio/x-aiff

Worked a charm after that.
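If you'd rather create the file from a terminal, a heredoc does it in one step. This is a sketch: /media/SDCARD is an assumed mount point, so adjust it to wherever your card actually mounts (check with "mount"):

# write the Rhythmbox hints to the root of the SD card
# NOTE: /media/SDCARD is an assumption -- substitute your card's mount point
cat > /media/SDCARD/.is_audio_player <<EOF
folder_depth=99
output_formats=audio/mpeg,audio/aac,audio/x-wav,audio/x-aiff
EOF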

P.S. My son's doing some serious scratch with the Police on his DS right now.

Sunday, 14 November 2010

Configuring Bacula Tray Monitor on Ubuntu

I use Bacula to back up my servers and desktop/laptop computers. It's always bugged me that I didn't have a little icon on my Ubuntu desktop showing the status of the backup: whether it was running or not and some indication of progress. Most backup systems have this. In Bacula it's called the tray monitor. The configuration file documentation seemed straightforward, but it took a lot of fiddling to get it right.

I think I have a fairly typical situation:
  • A backup server with a direct attached backup storage device (in my case, two: a USB-connected 1 TB hard drive, and a DAT-72 tape drive)
  • Several clients being backed up on a regular schedule
  • One client is the laptop I use as my normal workstation. This is the one I want to put the tray monitor on
  • I'm already successfully backing up this configuration, so all my passwords in my Bacula configuration files are correct, and all my firewalls are configured to allow the backup to work
  • The laptop and the backup server are both running Ubuntu 10.04
Here's what I did to get the tray monitor to work (read my notes below before you start cutting and pasting the following into your configuration):
  1. I installed the tray monitor software on my laptop:

     sudo apt-get install bacula-traymonitor

  2. On my laptop I changed the tray monitor configuration file (/etc/bacula/tray-monitor.conf) to look like this:

     Monitor {
       Name = backup02-mon
       Password = "Monitor-Password"
       RefreshInterval = 5 seconds
     }

     Client {
       Name = laptop-mon
       Address = laptop.example.com
       FDPort = 9102
       Password = "Monitor-Password"
     }

  3. Still on the laptop, I added the following to the file daemon, aka backup client, configuration file (/etc/bacula/bacula-fd.conf):

     # Restricted Director, used by tray-monitor to get the
     #   status of the file daemon

     Director {
       Name = backup02-mon
       Password = "Monitor-Password"
       Monitor = yes
     }

  4. I restarted the file daemon on the laptop (don't forget this or you'll confuse yourself horribly):

     sudo service bacula-fd restart

  5. On the backup server, I added the following to the director configuration file (/etc/bacula/bacula-dir.conf):

     # Restricted console used by tray-monitor to get the status of the director
     Console {
       Name = backup02-mon
       Password = "Monitor-Password"
       CommandACL = status, .status
     }

  6. Finally, I reloaded the configuration file on the backup server:

     sudo bconsole
     reload
     exit

  7. Now all I had to do was start the tray monitor. The command line is:

     bacula-tray-monitor -c /etc/bacula/tray-monitor.conf
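Before wiring up a menu item, it's worth a quick check that the file daemon is listening where the tray monitor will look. A one-liner if you have netcat installed (substitute your own hostname):

# confirm the Bacula file daemon answers on the monitor port
nc -zv laptop.example.com 9102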
Then I made a menu item for it. I put it in Applications-> System Tools.
  1. Select System-> Preferences-> Main Menu
  2. Select "System Tools" on the left side of the window
  3. Click on the "New Item" button on the right side of the window
  4. Fill in the "Name:" box with "Bacula Tray Monitor" and the "Command:" box with the command line above
  5. Click "OK"
  6. Click "Close" in the "Main Menu" window
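If you'd rather skip the clicking, the menu item is just a .desktop file. A minimal sketch (the filename is arbitrary):

# create the menu entry by hand instead of using the Main Menu tool
cat > ~/.local/share/applications/bacula-tray-monitor.desktop <<EOF
[Desktop Entry]
Type=Application
Name=Bacula Tray Monitor
Exec=bacula-tray-monitor -c /etc/bacula/tray-monitor.conf
Categories=System;
EOF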
Notes:
  • I used a separate password specifically for the monitor. The tray monitor's configuration file has to be readable by an ordinary user without special privileges. So anyone can see the password. Don't use the same password for the monitor as you use for the director or the file daemons, or you'll be making it easy for anyone who gets access to your computer to read all the files on your network.
  • You have to change the above bits of configuration file to match your particular configuration. Change "laptop.example.com" to the fully qualified domain name of the computer on which you're installing the tray monitor. Change "Monitor-Password" to something more secure that everyone who reads this blog doesn't know about. 
  • "backup02-mon" and "laptop-mon" are both names you can change to be anything you want them to be. In my case, "backup02-mon" means the monitor on the backup server (hostname: backup02), and "laptop-mon" means the monitor on the laptop (hostname: laptop)
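An easy way to come up with a dedicated monitor password, if you have openssl handy:

# generate a random password to paste into the monitor-related resources
openssl rand -base64 18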

Tuesday, 12 October 2010

Google Chrome, Ubuntu, and Cisco AnyConnect

I need to use Cisco's AnyConnect VPN client. It's worked quite well with FireFox on Ubuntu, although I had to forgo the upgrade to 9.10 because the VPN client wouldn't work with the kernels that came with 9.10. (That wasn't the only reason I didn't go to 9.10, so I wasn't really bothered by it.)

I've been using Google Chrome for the last few weeks instead of FireFox. It is noticeably faster on my Lenovo x300. Going back to FireFox seems excruciatingly slow. I decided to try Chrome with the Cisco VPN client. It's not officially supported, but both FireFox and Chrome are supposed to support standards, so what could be the problem?

It worked on my Lenovo with Ubuntu 10.04, but when I tried it on my netbook with Ubuntu Netbook Remix 10.04, it didn't work. It would get to the point where the client is supposed to actually start, and then nothing would happen.

I finally noticed that on the Lenovo, I had the IcedTea plugin installed, whereas on the netbook I was trying to do exactly what was supported by Cisco (Sun Java and some fiddling to get the plugin working). So I installed IcedTea on the netbook, and it worked just fine.

To install IcedTea, start System-> Administration-> Synaptic Package Manager, enter your password, then put "icedtea" in the "Search" field. Right click on "icedtea6-plugin", select "Mark for installation" and then click on the "Apply" button. Or, if you like the Terminal, type "sudo apt-get install icedtea6-plugin" in a terminal.

(Update for Ubuntu 11.04: the package to install is called "icedtea-plugin" now. No version number.)

It's always fun when you try to do something exactly by the book and it doesn't work, and then you do it the way you think should work, and it does.

Unfortunately, Exchange 2010 Outlook Web Access doesn't support Chrome, so I'm forced to use the crippled "Lite" interface. So I'll probably end up using FireFox anyway.

Saturday, 9 October 2010

Web Trap Pages

I'm looking for an open source friendly accounting company to do my taxes and give me advice. The only reason I need a Windows box anymore is to do my accounting, since most accountants want their clients to use Windows-based software.

I typed something into Google to see if I could find an accounting firm that was open source friendly. I got one of those bogus pages that's just scraped together from bits of the internet by a computer program. Sleaze-balls put up sites like this to try to get you to land there from a search and then click on a link, generating ad revenue for the sleaze-ball.

The thing was, it took me a few moments to recognize the page for what it was. It almost looked like a real site dedicated to open source accounting software. I thought, "wow, this sleaze-ball software is getting pretty good."

Then I realized that it could also be that so many sites on the Internet are still so bad that a computer can do as good a job as a person.

(Is this a variant of the Turing Test?)

Tuesday, 5 October 2010

CFOs: Use the Cloud Now

It occurred to me that there's an easy way for CFOs and CEOs to use the cloud right now, without waiting for the IT department to touch a single piece of equipment. Here's how:

Ask your IT department how many servers and how much data you have. (Ask how much data is actually being used, not how much capacity you have.) Then, go to Amazon's site for cloud services and calculate how much it would cost to host that on Amazon. Finally, call in the CIO and ask her why your IT infrastructure budget is a lot higher than what it would cost to host on Amazon. It will be. You're asking for the whole infrastructure budget, not just the cost of the equipment.

For example, suppose you have 460 Windows servers and 200 TBs of data. Amazon has different prices for different size servers, but start by assuming all your servers are what Amazon calls "large". Your annual cost for that (October, 2010) is $2.5M. That includes 400 Mbps of network traffic into and out of the data centre 24 hours per day.
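If you want to sanity-check a number like that yourself, the arithmetic fits in a terminal. The rates below are made-up placeholders, not Amazon's prices -- pull the real ones from Amazon's price list:

# back-of-envelope annual cost; every rate here is a placeholder, not a quote
SERVERS=460; RATE_PER_HOUR=0.50           # assumed hourly rate for a "large" instance
STORAGE_GB=204800; RATE_PER_GB_MONTH=0.10 # assumed storage rate, 200 TB total
echo "$SERVERS * $RATE_PER_HOUR * 8760 + $STORAGE_GB * $RATE_PER_GB_MONTH * 12" | bc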

Ask your CIO what services you're getting that justify the premium you pay for having an in-house IT infrastructure department.

In reality, your CIO's no dummy. She'll be able to give you a pretty good story about why the IT infrastructure budget is so much higher. That's when you can use an independent IT consultant who's not owned by a company selling the infrastructure that drives up your costs. The real value comes when you start to use the benchmark cost of Amazon to identify and drive improvements in the value provided by your infrastructure department.

For example, when your CIO is talking about the services she provides, ask her when she's going to offer servers that can be spun up by a user, through a web site, with no intervention at all from the IT infrastructure group, like on Amazon? Or when the business will be able to downsize how much it's paying if it discovers that it doesn't need a large server, like on Amazon? Or when you'll start paying only for the data storage you're using, and not for a bunch of empty disk that you had to buy for "future growth", like on Amazon?

And that's how to use the cloud without changing one piece of technology.

Sunday, 26 September 2010

Terry Fox Run

I like the web site the Terry Fox Foundation has put together for their annual school run to raise funds for cancer research. It lets people donate on-line, of course. Much more interesting is that it lets kids collect and create their own content -- photos and videos -- and post them on their own page, along with a graph showing how close they are to reaching their fund-raising goal.

My son Marc got right into making videos for it. For the Foundation, it gets kids thinking and talking about Terry Fox and the importance of cancer research. For the kids, it gets them producing content for the web. The future belongs to those who produce content. (Those of us who produce the technology will be like the guys today who keep the mainframes running.)

Shameless commercial: You can contribute to cancer research by supporting Marc's run here.

Friday, 10 September 2010

The Cost of Storage: Reality Check

A friend pointed me at this awesome blog post from Backblaze, who sell cloud storage: Petabytes on a budget: How to build cheap cloud storage | Backblaze Blog. They build their own storage boxes based on a commodity motherboard running Linux, and standard open source software.

Backblaze gets a cost per gigabyte of under $0.12. Yes, 12 cents per GB. And that's per GB of RAID 6 storage. It's easy to find storage costing $12 or more per GB from the mainstream storage vendors -- two orders of magnitude more. The blog post also compares a range of storage offerings, with price differences of up to 2,000 times!

I think there are a lot of areas of IT that are fundamentally broken. Storage is one of the most obviously broken, and these price differences prove it.

What I find really interesting is Backblaze's approach. They published their hardware design in the blog post. They've open-sourced their hardware. The supplier of their cabinet is already offering the cabinet as a product because they've had so much demand. People are buying and building these boxes, and I'm sure it won't be long before lots of open source software becomes available that provides storage solutions based on this hardware.

This gives hope. In ten years, perhaps, open source will do to storage what it's doing to CPU cycles and the operating system business -- get rid of the artificial cost imposed by proprietary vendors who hoard technology.

Sunday, 18 July 2010

Can't Run VMWare Server 2 Management Interface

I filled the disk on my VMWare Server 2 host, which caused all sorts of grief. Part of the grief was that I couldn't get to the management interface at https://vmhost:8333/ui. I solved that problem by killing the VMWare hostd process (after freeing up some space on the disk):
  1. Look up the process ID: ps -ea | grep hostd
  2. Kill the process: sudo kill pid 
  3. Remove the old lock file: sudo rm /var/run/vmware/vmware-hostd.PID
  4. Restart VMWare management: sudo /etc/init.d/vmware-mgmt restart
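Steps 1 and 2 collapse into one line if you're comfortable with pgrep. This assumes the only process with "hostd" in its command line is the VMWare one, so eyeball the pgrep output first:

# find and kill vmware-hostd in one step
sudo kill $(pgrep -f hostd)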

Saturday, 17 July 2010

Why Can't We All Be Nice to Each Other

Wouldn't technical forums, technical blogs, etc. be nicer to read if people restricted their value judgements and opinions to their own behaviour, rather than other people's behaviour? Doing so might eliminate a lot of unnecessary flaming.

To be really clear, I'm not saying there's no place for opinions on the Internet. I'm just saying technical forums aren't the place for me to express my opinions about what someone else is doing. If someone wants help with an old version of the Linux kernel, I should either help or shut up. There's no need for me to give my own opinions on the wisdom of using old kernels.

Wednesday, 7 July 2010

Scanning with Ubuntu 10.04

I have an HP CM1312nfi MFP multi-function colour printer, fax and scanner. xsane worked fine in 9.04. The first time I tried to scan after upgrading to Ubuntu 10.04 it didn't work anymore. (Breaking it may have been something I did, rather than the upgrade itself.)

First xsane told me that it couldn't find any devices. I reinstalled all the hplip and xsane packages, and that got me a message that xsane couldn't open the device -- with a device name that made it obvious xsane knew about my scanner.

I found a message in /var/log/syslog that xsane couldn't find the file "/usr/share/hplip/scan/plugins/bb_soapht.so". So I ran:
sudo hp-plugin
and answered the license question. Then xsane (and the new Simple Scan) worked.
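By the way, a quick way to check whether SANE can see the scanner at all, before and after installing the plugin, is scanimage from the sane-utils package:

# list every scanner SANE can find; the HP should show up once the plugin is in
scanimage -L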

Friday, 7 May 2010

Privacy and the Cloud

A friend pointed me at articles from the Privacy Commissioners of Canada and Ontario about cloud computing. By and large they're good articles, and they raise points that you should consider.

I want to put a bit of context around them. I don't think the cloud should be dismissed because of privacy concerns, but I wouldn't blindly jump onto the cloud, either.

The article from the Privacy Commissioner of Canada had quite a few comments that weren't directly related to privacy, and I think some of them need to be looked at.

First, the Privacy Commissioner of Canada states that cloud computing can mean an on-going cost instead of a one-time fee. But there is no such thing as a one-time fee in computing. Your computing gear lasts three to five years. You need to replace it, and you need to service it while you own it. It's much better in computing to convert your costs to a monthly cost, either by using the lease price, or by using the depreciation that your accountant would use.

Consumer lack of control refers to the challenge of moving from one cloud provider to another. For example, you want to take your blog from Blogger to Wordpress. It's an important point to consider with cloud computing. It's an equally important point to consider when you use proprietary software (e.g. Microsoft) on your own equipment. Switching to a different platform takes roughly the same technical effort in either case.

In fact, technically you always have a way to get your data from a web site. The terms of service of the web site may prevent it, but technically you can do it. That's not always the case with a proprietary, in-house solution.


Compromising meaningful consent refers to the fact that the cloud tends towards a single provider of most services: Facebook, Google (for search), Twitter are all dominant in their sphere. However, twenty-five years of Microsoft wasn't exactly a world of diversity, either. Again, it's the monoculture that's undesirable, not the means by which we arrive at a monoculture.

Most of the Ontario Privacy Commissioner's paper is actually about identity. I am not by any means an expert on identity, and I learned some interesting things from the paper.

One point I'd like to draw your attention to: Identity is impossible without the cloud, or at least the Internet. Most of the effective, practical identity mechanisms rely on a trusted third party. I believe the experts can demonstrate that this is required. You need the Internet to get to the trusted third party, and that third party is effectively a cloud service.

(What I mean by "practical" in the previous sentence is to rule out the public/private key approaches that work, but are too much of a pain for even most geeks to use.)


Finally, I want to step away from the privacy commissioners and talk about one aspect of the cloud debate: Many IT people are reluctant to embrace the cloud. Here is an example of IT backlash against the cloud. It's important to remember that IT jobs will disappear as users migrate to the cloud. If you work in a 4,000 person organization you probably have a couple of people working full-time to support Exchange (the back end of your e-mail system). If your organization used gmail, they wouldn't be needed.

What's that got to do with privacy? Well, it affects the cases that the IT experts bring forward. For example, you'll hear about the Chinese infiltration of gmail (an attack on a cloud service), but you won't be reminded about the Chinese attacks on Tibetan nationalists and their supporters, which were primarily about compromising people's personal computers.

I know that Google has way smarter people than me working on security, and they do it full time. I think I have a reasonably secure network, but I don't even have time to monitor it to see if I'm being compromised. Security and privacy will be a differentiating factor in the evolution of cloud providers. The market advantage will go to those who provide the level of privacy their customers desire.

In the proprietary, self-hosted world, security and privacy are usually the last thing that gets any resources, because the competitive pressures are always something else.

Monday, 3 May 2010

On-Line Presence

My friend Elena Murzello just got her web site going. Elena is an actor who's appeared as Anna in The L Word, Tenant #1 in Da Vinci's Inquest, and a Nurse Educator in the Vancouver Coastal Health's Unit Dose Project, among other roles. (One of these things is not like the other.)

She asked me for hints about how to generate traffic to her web site. I realized that I really should blog my thoughts, because other people could comment and correct what I say. So the rest of this post is written to Elena, but you can read it as if it's written to you.

First, your site looks great. I'm really glad you have a blog on it, and that you're writing new posts frequently. New posts generate traffic to your site, and traffic to your site improves your ranking on search engines. Write blog posts about what's important to you, and about your experiences in the dramatic arts. Mention names: If you had a great moment with Nicholas Campbell while filming for Da Vinci, blog that. The more names the better.

(Let's face it, you'll get more traffic blogging about The L Word. Who am I kidding. And a note to the regular technical readers of this blog: The L Word is not a TV series about Linux.)

Tumblr looks like a great blog service. As usual, I learn something from you. Thanks!

Also, put links in your blog post, like I'm doing with this blog post. It's a service to your readers, but it also generates traffic back to your site. Think of it as if you get points when someone goes somewhere popular because you pointed them there.

Of course, make sure that all search engines are indexing your site, but especially Google. Your web site designer should have done this for you already, but don't assume it was done. Ask.

You're also absolutely doing the right thing by using Google Analytics (or any kind of traffic measuring tool). If you decide to pay someone to do some search engine optimization on your site, you need to have baseline data on how well your traffic was growing just from your own efforts. No point in paying someone for growth that you generated by yourself.

Next get on Twitter (Update: Elena's at @ElenaMurzello) and put Echofon, a free app from the App Store, on your iPhone. Echofon makes tweeting from your iPhone easy, so that you'll tweet a lot.

Then, use TwitterFeed or something like it to feed your blog posts to Twitter and Facebook. Your web site designer will have to put an RSS or Atom feed on your blog. Once that's done, you can set up TwitterFeed yourself. If you don't want to, your web site designer should be able to help you. And if they can't, I'll help you.

Add "Follow me on Twitter" and "Tweet this" icons and text on your web site (meaning your blog). Your web site designer will have to do that. The first makes it easy for people to follow you. The second makes it easy for people to publicize your site -- free publicity!

Get yourself a YouTube account. Post clips of yourself with some commentary about why you like the clip. Put links back to your site in the YouTube comment, and blog about the video link and embed the video in your blog (as you've done already with other people's work). If you want some help editing videos, let me know.

Finally, start reading Seth Godin's blog. Seth is the original marketing brain of the Internet age and he's an incredible generator of ideas. I find reading his blog overwhelming, but if you take even one of his ideas and implement it you're way ahead of everyone else.

Now, about search engine optimization. I'm no expert, but I've heard from reliable sources that Google and the other big search engines put a lot of effort into preventing people from "gaming" search results. It makes sense. People won't use a search engine if they don't get the answers they really want. So Google does a lot to make sure that your rank is based on people who found your site useful. That's why real traffic from real people is the best way to rise up in the search engine ranks.

Most search engine optimization techniques already don't work. What I mean is that every time someone comes up with a new trick, Google and the others find a way to filter it out. No search engine optimization "expert" that you and I can afford to hire is likely to know how to outsmart Google. And even if he does today, you'll find that next month Google has neutralized the trick.

If you really want to get someone to do search engine optimization, ask if they'll agree to be paid based on the sustained additional growth in traffic they provide to your site. It will take some work to come up with a fair formula, but you have the raw data you need since you're using Google Analytics. Really, if someone isn't confident they can produce results for you, why should you be confident they can produce results?

I hope this helps. Let me know what you think.

Sunday, 4 April 2010

Looking for IP Addresses in Files

I've moved a couple of data centres. And I've virtualized a lot of servers. In all cases, the subnets in which the servers were installed changed. If anything depends on hard-coded IP addresses, it's going to break when the server moves.

The next data centre I move, I'm going to search all the servers for files that contain hard-coded IP addresses. The simplest thing to do for Linux and Unix is this:
egrep -R "\b([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}\b" root_of_code
The regular expression matches one to three digits followed by a "." exactly three times, then matches one to three digits, with word boundaries at either end.

That's not the most exact match of an IP address, because valid IP addresses won't have anything higher than 255 in each component. This is more correct:
egrep -R "\b((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b" /!(tmp|proc|dev|lib|sys) >/tmp/ips.out
It yields about two percent fewer lines when scanning a Linux server (no GUI installed). (Thanks to this awesome site for the regular expression.)

When I ran the above egrep command from "/", I had problems. There are a few directories I had to exclude: /tmp, /proc, /dev, /lib and /sys. I used this file pattern match to get all the files in root except those directories:
/!(tmp|proc|dev|lib|sys)
The reason I wanted to exclude /tmp is that I wanted to put the output somewhere. /tmp is a good place, and excluding it meant egrep didn't scan its own output file while I was writing to it. /sys on a Linux server has recursive directories in it. /proc and /dev have special files in them that cause egrep to simply wait forever. /lib also caused egrep to stop, but I'm not sure why (apparently certain combinations of regular expressions and files cause egrep to take a very long time -- perhaps that's what happened in /lib). Note that the "!( )" pattern is a bash extglob feature; if the shell complains about it, turn it on first with "shopt -s extglob".
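Newer versions of GNU grep can also skip the directories themselves with --exclude-dir, which avoids the glob trick entirely (assuming your grep is recent enough to have the option):

# let grep do the excluding; brace expansion emits one --exclude-dir per name
# note: this skips those directory names anywhere in the tree, not just under /
egrep -R --exclude-dir={tmp,proc,dev,lib,sys} \
  "\b((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b" \
  / >/tmp/ips.out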

I'll write about how to do this for Windows in another post. I'll also write about how to do it across a large number of servers.

Friday, 2 April 2010

The Cost of Storage

Over the years I've seen SAN storage cost between C$10 and C$20 per GB (C$ is approximately equal to US$ right now). This is the cost of the frame, a mix of disks, redundant director-class fibre channel switches with a number of 48 port cards in each switch, management computer, and a variety of management and replication software. The cost doesn't include the HBAs in the servers, or cabling.

The price above is for a very raw GB, before you take the loss for whatever class of RAID you apply.

The management and replication software in all the above cases was the basic stuff you need to manage a SAN and replicate it. There was no fancy de-duplication or information lifecycle management going on.

The costs above also didn't include the cost of training dedicated storage staff to set up and manage a SAN, or the higher salary you'll have to pay to keep them after you train them.

Compare that to direct attached storage: Right now I can get a 1 TB drive for less than C$300, or about 30 cents per GB. Put two of those in a RAID 1 configuration with a RAID controller (less than $400, easily) and you're still paying only about $1 per GB.
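The arithmetic, for the record:

# two mirrored 1 TB drives plus a RAID controller, per usable GB
echo "scale=2; (2 * 300 + 400) / 1000" | bc    # prints 1.00 (C$/GB)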

I get RAID 1 storage for an order of magnitude cheaper than raw storage on a SAN. No need for special management software. You've got rsync for replication. You can use standard tools that everyone knows how to use.

No wonder Google uses direct-attached storage in commodity servers for their index. It's just way more cost-effective. What's your business case for SAN-attached storage?

Sunday, 31 January 2010

Ubuntu Netbook Remix Desktop Disappears

I have a netbook running Ubuntu Netbook Remix (UNR) 9.04. I switched it to the regular Ubuntu desktop just to try. Before I switched back, I rebooted (the battery ran all the way down). Apparently, this is known to be a bad thing. When you restart, all you get is a blank desktop -- no panels at the top and bottom to allow you to get at any commands.

The fix is described in Launchpad here, but I'm going to summarize it because it's a little spread out in the comments to the bug.
  1. Right click on the desktop and select "Create Folder..."
  2. Double click on the folder you just created
  3. Navigate to "/usr/bin/desktop-switcher" and run it
  4. Switch back to the UNR desktop
  5. Now navigate to your home directory
  6. Show hidden files (View-> Show Hidden Files, or Ctrl-h)
  7. Delete the .gconf, .gconfd, and .config folders
  8. Log out and log back in
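For what it's worth, steps 5 through 7 are equivalent to this from a terminal (log out afterwards, per step 8):

# same cleanup as steps 5-7: remove the saved GNOME/UNR settings
rm -rf ~/.gconf ~/.gconfd ~/.config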
This should fix the problem. Now, with respect to the classical desktop, don't do that :-)

Monday, 25 January 2010

Ubuntu Support Saturday in Vancouver

The ever-fantastic Ubuntu Vancouver Local Committee is organizing a Support Saturday. Come on down and learn about the world's most popular free operating system. If you already use Ubuntu, get some help to make your experience even better.

The details:

Saturday January 30th, 2010 11am - 2pm
Vancouver Community College
1155 East Broadway (Broadway Campus)
Building B, Room G219

Click here for the poster.

Here's the best door to use:


(Embedded Google Map marking the door.)