Friday, 30 December 2011

Manual Two-Sided Printing

In my home office, I have a multi-function printer that does pretty much everything I typically need, except print on both sides of the page. Here's how I get two-sided printing when I need it.

The printer is an HP CM1312nfi. It prints on the side of the paper facing up in the paper tray. The "far end" of the paper in the paper tray is the top of the page.

I print the even-numbered pages first. These are the "back side" or "left pages".


Print in reverse order.


I take the paper from the output tray and turn it so that the blank side is up and the top of the page goes toward the far end of the paper tray.

Then I print the odd-numbered pages. These are the "right side pages".


Print in forward order.


If the document has an odd number of pages, this only works for one copy at a time. That's because the second pass prints one more page than the first, so the sheets from multiple copies wouldn't line up.

The screen shots are from LibreOffice 3.4.4.
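If you're printing from the command line instead of LibreOffice, CUPS has options that map onto the same steps. A minimal sketch, assuming your driver honours the outputorder option (mydocument.pdf is just a placeholder):

lp -o page-set=even -o outputorder=reverse mydocument.pdf
# flip the stack, put it back in the tray blank side up, then:
lp -o page-set=odd mydocument.pdf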

Saturday, 10 December 2011

Relocating Data Centres in Waves

I've never had to relocate a data centre in one big bang. You hear stories about organizations that shut down all the computers at 5:00 PM, unplug them, move them, and have them up by 8:00 AM the next morning, but I've never done that.

The big bang approach may still be necessary sometimes, but you can mitigate a lot of risk by taking a staged approach, moving a few systems at a time.

Conventional wisdom on the staged data centre relocation is to move simpler systems, and test and development systems, first. This lets you tune your relocation processes and, particularly if you're moving into a brand new data centre, work the kinks out of the new data centre.

It sounds great in theory. In practice, we ran into a few wrinkles.

I'd say the root of the wrinkles was our environment: we had a lot of applications integrated through various tools, and a large J2EE platform running a lot of custom applications. Also, even though we had some months to do the relocation in waves, we didn't have an infinite amount of time. On top of that, business cycles meant that some systems had to be moved at certain times within the overall relocation period.

The net result is that we ended up moving some of the most complicated systems first. At least we were only moving the development and test environments. Even so, it turned out to be quite a challenge. We were slammed with a large workload when people were just learning the processes for shipping and installing equipment in the new data centre. The team pulled it off quite well, but it certainly increased the stress level.

I don't think there's much you can do about this. If your time lines force you to move complicated systems first, so be it. The lesson I take away is to identify early in planning if I have to move any complicated environments. On this project, I heard people right from the start talking about certain environments, and they turned out to be the challenging ones. We focused on them early, and everything worked out well.

Karma and Data Centre Relocations

We're pretty much done with the current project: relocation of 600 servers to a new data centre 400 km from the old one. By accident more than by design, we left the move of most of the significant Windows file shares to the last month of the relocation period.

Windows file shares are known to be a potential performance issue when you move your data centre away from a group of users who are used to having the file shares close to them. We're no exception: A few applications have been pulled back to the old data centre temporarily while we try to find a solution to the performance issues, and we have complaints from people using some desktop tools that don't work nicely with latency.

The lucky part is that we've developed lots of good karma by making the rest of the relocation go well. Now that we have issues, people are quite tolerant of the situation and are at least willing to let us try to fix the problems. I won't say they're all happy that we've slowed their work, but at least we don't have anyone screaming at us.

I'd go so far as to say this should be a rule: All other things equal, move file shares near the end of a relocation project.

Sunday, 27 November 2011

The Java Gotcha for Data Centre Relocations

Way back in time, someone thought it would be a good idea for the Java run-time to cache DNS look-ups itself. Once it has an IP address for a name, it doesn't look up the name again for the duration of the Java run-time process.

Fast forward a decade, and the Java run-time is the foundation of many web sites. It sits there running, and caches DNS lookups as long as the web site is up.

On my current project, we're changing the IP address of every device we move, which is typical for a data centre relocation. We have a number of Java-based platforms that are well integrated (read: interconnected) with the rest of our environment, and we're finding we have to take an outage to restart the Java-based platforms far too often.

In hindsight, it would have been far simpler to change the Java property that controls DNS caching before the relocation: run that way for a while in the old environment to be sure there are no issues (highly unlikely, but better safe than sorry), and then start moving and changing IPs of other devices knowing your Java-based applications will automatically pick up the changes you make in DNS.

In case the link above goes stale, the four properties you want to look at are:

networkaddress.cache.ttl
networkaddress.cache.negative.ttl
sun.net.inetaddr.ttl
sun.net.inetaddr.negative.ttl

Look them up in your Java documentation and decide which caching option works best for you. (Normally I'd say how to set the parameters, but I've never done Java and I fear I'd say something wrong.)
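For what it's worth, here's a rough sketch of how these typically get set. I haven't verified it myself, so treat the locations as assumptions to check against your own JVM's documentation:

java -Dsun.net.inetaddr.ttl=30 -Dsun.net.inetaddr.negative.ttl=10 -jar your-app.jar    # your-app.jar is a placeholder

The networkaddress.cache.ttl and networkaddress.cache.negative.ttl values are security properties, normally set as lines like "networkaddress.cache.ttl=30" in the JVM's java.security file (for example, $JAVA_HOME/jre/lib/security/java.security).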

Sunday, 20 November 2011

Data Centre Relocation Gotchas

Here are a couple of gotchas we ran into while relocating a medium-size data centre:

  • When we restarted one server in its new location, it decided to do a chkdsk. Unfortunately, the volume was a 10 TB SAN LUN. Fortunately, we had a long weekend to move that particular server, so we could wait the almost two days it took for the chkdsk to run. (I don't know why the server decided to do chkdsk. Rumour has it we didn't shut down the server cleanly because a service wouldn't stop.)
  • A website tells me to run "fsutil dirty query c:" to see if chkdsk is going to run on the C: drive the next time the system boots
  • On Linux, there are a couple of ways to make sure you won't have an fsck when you restart the server; a sketch for ext file systems follows this list
  • We were frequently burned by the Windows "feature" to automatically add a server to DNS when the server starts up. Either we'd get DNS changes when we weren't ready for them, or we'd get the wrong changes put into DNS. For example, servers that have multiple IPs on one NIC, where only one of the IPs should have been in DNS
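For the ext2/3/4 case, something like this should do it (a minimal sketch; /dev/sda1 is just an example device, and these commands only apply to ext file systems):

sudo tune2fs -l /dev/sda1 | grep -Ei 'mount count|check'    # see how close the file system is to a forced fsck
sudo tune2fs -c 0 -i 0 /dev/sda1    # turn off mount-count and interval-based checks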
Here's a short checklist for turning off and moving a server:

  • Check to see if the server is going to check file system consistency on the next startup (chkdsk or fsck)
  • Shut the server down cleanly
  • If it's a physical server, shut it down and then restart it. Rumour has it that the hard drive can freeze up if the server hasn't been stopped in a long while. Better to find that out before you move it than after. This has never happened to me
  • Do a host or nslookup after starting the server to make sure your DNS entries are correct and that you have the right number of them (usually one); an example check follows this list
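For example (the host name and address are placeholders):

host newserver.example.com    # expect the addresses you intend, and usually only one
host 203.0.113.25    # the reverse lookup should return the right host name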

Friday, 11 November 2011

Running Over the WAN After Relocating a Data Centre

My current data centre relocation has us moving the data centre about 400 kms away from its current location. This has resulted in a total round-trip change in latency of 6 ms. We implemented WAN acceleration in certain locations to address the issue, and we've learned some lessons in the process. Lessons learned is what this post is about.

We have offices all over the province, so not everyone sees the 6 ms change in latency as a negative. Many users are now closer to the data centre than they were before, and we always had users who had worse than 6 ms latency to our data centre. That gave us a lot of confidence that everything would be fine after the relocation.

However, the old data centre location was the head office, so a large number of users are now experiencing latency where they never did before, including senior management. Most of the remote sites were much smaller than head office.

The one or two issues we'd had until recently were due to our phased approach to moving. In one case we had to move a shared database server without moving all the application servers that used it. After the move, we had to do a quick move of one application server, because we discovered it just couldn't live far from its database server.

That changed recently. Like many organizations, we have shared folders on Windows file shares. Windows file shares are generally considered a performance risk for data centre relocations when latency changes. In preparation, we implemented WAN acceleration technology.

We moved the main file share, and by about 10 AM we were getting lots of calls to the help desk about slow performance. After an hour or two of measuring and testing, we decided to turn off WAN acceleration to improve the performance. Indeed, the calls to the help desk stopped after we turned off the WAN acceleration.

Analysis showed that the Windows file share was using SMB signing. SMB signing not only prevents the WAN accelerator from doing its job, but the number of log messages being written by the WAN accelerator may have actually been degrading performance to worse than an un-accelerated state.
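If you want to check ahead of time whether a Windows file server insists on signing, the usual place it's managed is the Group Policy setting "Microsoft network server: Digitally sign communications (always)". My guess at an equivalent registry check (treat the path as an assumption to verify, and check Group Policy first):

reg query "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v RequireSecuritySignature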

So we turned off SMB signing, and tried again a few days later. No luck. Around 9:30 AM we started to get lots of calls, and again we turned off the WAN acceleration. We're lucky that performance is acceptable even without WAN acceleration (for the time being -- we'll need it soon).

We're still working this issue, so I don't know what the final solution is. I'll update this post when I know.

A non-technical lesson learned: If I were to implement WAN acceleration again, I'd get all the silos in a room in the planning stages, before I even bought anything. I'd make the network people, Windows administrators, and storage administrators understand each other's issues. I would have the WAN accelerator vendor and the storage device vendor at the table as well. And I'd make everyone research the topic using Google so we could find out what issues other people ran into.

Oh, and one final lesson learned: Bandwidth hasn't been an issue at all. In this day and age, 1 Gbps WAN connections are within the reach of a medium-sized organization's budget. We're finding 1 Gbps is more than enough bandwidth, even with the large data replication demands of our project. And those demands will go away once the data centre is fully relocated.

Living with Virtualization

In 2006, I was project manager on a VMware implementation for a health care organization. We virtualized 200 servers in six weeks, after a planning phase of about 2 months. Out of that experience I wondered, "Did virtualization have anything to offer a smaller business?" So I set up a box at home and converted my home "data centre" into a virtualized data centre using VMware's Server product, which was the free product at the time.

After five years it's been an interesting experience and I've learned a lot. At the end of the day, I'm pretty convinced that the small business that has a few servers running in a closet in their office doesn't have a lot to gain from virtualizing within the "closet". (I'm still a big fan of virtualization in a medium or large organization.) I'm going to switch back to running all the basic services I need (backup, file share, DNS, DHCP, NTP) on a single server image.

I had one experience where the VM approach benefited me: As newer desktops and laptops came into the house, the version of the backup client installed on them by default was newer than the backup master on my backup server (I use Bacula). Rather than play around with installing and updating different versions of the backup client or master, I simply upgraded the backup master VM to a new version of Ubuntu and got the newer version of Bacula. I didn't have to worry about what other parts of my infrastructure I was going to affect by doing the upgrade.

The down side was that I spent a lot of time fooling around with VMware to make it work. Most kernel upgrades require a recompile of the VMware tools on each VM, which was a pain. I spent a fair bit of time working through an issue about timekeeping on the guests versus the VMware host that periodically caused my VMs to slow to a crawl.

Connecting to the web management interface and console plug-in always seemed to be a bit of a black art, and it got worse over time. At the moment, I still don't think modern versions of Firefox can connect to a running VM's console, so I have to keep an old version around when I need to do something with a VM's console (before ssh comes up).

My set-up wasn't very robust in the face of power failures. When the power went off, the VMs would leave their lock files behind. Then, when the power came back, the physical machine would restart but the VMs wouldn't. I would have to go in by hand and clean up the lock files. And often I wouldn't even know there'd been a power failure, so I'd waste a bit of time trying to figure out what was wrong. I should have had a UPS, but that wouldn't solve all the instances where something would crash leaving a lock file behind.
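For what it's worth, the manual cleanup can at least be scripted. A sketch, assuming the VMs live under /var/lib/vmware and that the leftovers show up as .lck or .WRITELOCK entries (check what your version actually leaves behind):

find /var/lib/vmware -name '*.lck' -o -name '*.WRITELOCK'    # list suspected stale locks
find /var/lib/vmware \( -name '*.lck' -o -name '*.WRITELOCK' \) -exec rm -rf {} +    # remove them, but only after confirming no VMs are running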

All in all, and even if I had automated some of that, the extra level of complexity didn't buy me anything. In fact, it cost me a lot of time.

Some of these problems would have been solved by using the ESX family of VMware products, but the license fees guarantee that the economics don't work for a small business.

I originally started out planning to give Xen a try, but it turned out not to work with the current (at the time) version of Ubuntu. Today I would try KVM. I played around with it a bit last year and it looked fine for a server VM platform. I needed better USB support, so I switched to VirtualBox. VirtualBox worked fine for me to run the Windows XP VM I used to need to run my accounting program, but it has the free version/enterprise version split that makes me uncomfortable for business use.

So my next home IT project will be to move everything back to a simpler, non-virtualized platform. I'll still keep virtualization around for my sandbox. It's been great to be able to spin up a VM to run, say, an instance of Drupal to test upgrades before rolling them out to my web site, or to try out Wordpress, or anything else I need to try.

My blog posts about interesting steps along the virtualization road are here.


Wednesday, 9 November 2011

A New Computer -- Wireshark

I'm not a network expert by any stretch of the imagination, but I've occasionally solved problems by poking around a bit with Wireshark.

Of course, if my network is down I'm not going to be able to download Wireshark. Fortunately, I remembered to re-install Wireshark on my new computer before I needed it. I installed it using the Ubuntu Software Centre.

A new feature of Wireshark that I didn't know about: If you add yourself to the "wireshark" group, you can do live captures without running Wireshark as root.

sudo adduser --disabled-login --disabled-password wireshark    # creates a "wireshark" user and group (the group is what matters here)
sudo chgrp wireshark /usr/bin/dumpcap                          # hand ownership of the capture helper to that group
sudo chmod 754 /usr/bin/dumpcap                                # only root and members of the wireshark group can run it
sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap    # let dumpcap capture packets without being root


Now add yourself to the wireshark group and log out. When you log back in you should be able to do live captures without root privileges. To add yourself to the wireshark group in a terminal, type:

sudo adduser your-user-name wireshark

The Wireshark documentation for this is here (scroll down a bit).

Tuesday, 1 November 2011

A New Computer -- Video

My son is fascinated with videos. I dream that one day he'll get fascinated by making them, not just watching them. So I've been trying to learn about making videos. Here's what I had to reinstall on my new computer.

First, playing video (and audio, for that matter) has worked out of the box much better with 11.04 than with previous versions of Ubuntu. I play my Guatemalan radio station and CBC audio and video without having to fool around with any setup.

To make videos, I loaded up OpenShot. 

sudo apt-get install openshot

That didn't install ffmpeg, which has been my main fallback tool. It seems to be the tool that does everything, although as a command line tool, I find I usually just cut and paste an example command line from the Internet. It's not that I'm afraid of the Linux command line. It's that I don't know anything about video. So:

sudo apt-get install ffmpeg

That seems to be all that was needed.
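One ffmpeg invocation that's handy when you know nothing about a video file is to give it only an input. It complains that no output file was specified, but first it prints the container, codec and stream details (myvideo.mp4 is a placeholder):

ffmpeg -i myvideo.mp4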

Thursday, 27 October 2011

A New Computer -- Backups

I'd love to find a new backup solution, but the reality is I have Bacula working reasonably consistently right now, and it's the easiest thing to get set up quickly. So I:
  1. Installed bacula-client and bacula-traymonitor packages (sudo apt-get install bacula-client bacula-traymonitor)
  2. Copied /etc/bacula/bacula-fd.conf and /etc/bacula/tray-monitor.conf from the old laptop
  3. Changed the host name in both the above files
  4. Added my new laptop to /etc/bacula/bacula-dir.conf on the bacula director host by copying the job definition of the old laptop and renaming it (a rough sketch of the result follows)
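A rough sketch of what step 4 produces in bacula-dir.conf. The names here are made-up examples rather than defaults, and the password has to match the Director password in the new laptop's bacula-fd.conf:

Client {
  Name = newlaptop-fd
  Address = newlaptop.example.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "same-secret-as-in-the-new-bacula-fd.conf"
}

Job {
  Name = "BackupNewLaptop"
  JobDefs = "DefaultJob"
  Client = newlaptop-fd
}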

Wednesday, 19 October 2011

New Computer -- Fixing the Too-Sensitive Touchpad

My new laptop had a way-too-sensitive touchpad. So much so that I installed Touchpad Indicator so I could turn it off. Interestingly, I couldn't use its "turn off touchpad when mouse plugged in" feature, because it seemed to always think the mouse was plugged in.

That led me to realize that I also didn't have the touchpad tab in the mouse control panel. Googling, I found that this was a common problem with ALPS touchpads, like the one I had.

The fix is here: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/550625/comments/492. It's an updated driver that lets you get at the touchpad control in the mouse control panel. Download the .deb file, then double-click it and wait a bit for the Software Centre to run. Click install, enter your password, wait, then restart, and you'll have the touchpad tab in the mouse control panel. On the touchpad tab, you can turn off mouse clicks while typing, and suddenly typing isn't a pain.

I have to resist the urge to rant a bit. I bought an Ubuntu-certified laptop. This is the sort of pissing around fixing that I was hoping to avoid. Sigh!

Friday, 14 October 2011

A New Computer -- Printing

Setting up my multi-function printer on Ubuntu has always been interesting. When I first got my printer, it was so new I had to download and build hplip, the printing subsystem.

It looks like installation is a lot easier now, but to get started you still have to go into a terminal and type:

hp-setup

That starts a GUI that easily discovered my printer on the network. Unfortunately, when I tried to install the driver, it failed with "The download of the digital signature file failed." That sucks. But wait! The server that holds the drivers is actually run by the Linux Foundation, and it was off the air because of the security breach almost a month ago.


Finally, about a week and a half later, hp-setup worked. It now brings up a GUI window and walks you through a few steps: You have to tell it whether your printer is USB, parallel or network-connected. It's not much closer to being, "It just works."
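One more note: hp-setup comes from the hplip package, so if it's not on the machine at all, this should pull it in (my guess is that some GUI bits may also want hplip-gui, depending on the release):

sudo apt-get install hplip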

A New Computer -- Touchpad Too Sensitive

The touchpads on the last two laptops I've had have been way too sensitive. There should be a better solution, but for now I'm installing the touchpad indicator:

sudo add-apt-repository ppa:atareao/atareao
sudo apt-get update
sudo apt-get install touchpad-indicator


Update: I found that clicking on the touchpad indicator icon didn't work reliably until I rebooted.

Wednesday, 5 October 2011

Microsoft Considering Again Buying Yahoo

Microsoft is again considering buying Yahoo. According to one MS executive, the acquisition would allow them to "obliterate AOL." AOL still exists?

As a Yahoo shareholder, I hope MS buys. If I were an MS shareholder, I'd be afraid, very afraid.


Sunday, 2 October 2011

A New Computer -- Installing VirtualBox

I installed VirtualBox from the Ubuntu Software Centre on my new computer. I had already copied over my VM -- it came with all the other files when I copied everything under /home.

When I figured out how to run VirtualBox under Unity (it's under Accessories), it came up and knew about all the VMs I had on the old machine. When I started my Windows VM, it complained that it couldn't find "/usr/share/virtualbox/VBoxGuestAdditions.iso". Sure enough, the iso wasn't anywhere on the new machine.

However, for my purposes I didn't need the iso. I just unloaded the iso from the virtual CD device for the virtual machine, and tried restarting. It worked.

I would need the guest additions iso sometime, so, with the virtual machine running, I went to the Devices-> Install Guest Additions menu. It asked me if I wanted to download the file from the Internet, and I said "yes".
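If you'd rather do the unloading from the command line, VBoxManage can do the same thing. A sketch, where the VM name and controller name are examples (VBoxManage showvminfo will tell you the real ones):

VBoxManage storageattach "WindowsXP" --storagectl "IDE Controller" --port 1 --device 0 --medium emptydrive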

Saturday, 1 October 2011

A New Computer

The backlight died on my Lenovo x300 a couple of weeks ago, so I bought a Dell Vostro with Ubuntu pre-installed. Here's how I got from one to the other:

The Dell website said I was getting Ubuntu 11.04, but out of the box the computer had 10.10 on it. My first try upgrading to 11.04 failed. When it tried to reboot, it said the device for / wasn't ready. Fortunately, I somehow got to the old grub screen that let me choose which image to boot, and at the bottom there was an option to return the box to factory state.

I tried that, and it worked. I had about 10 minutes of the spinning "wait" cursor before it started doing something, but by being patient I got the box back to factory state.

So I went through the process of updating 10.10, rebooting, then upgrading to 11.04 again. This time I saved the sudoers file; as part of the installation, I was asked whether to keep the old one or use the new one.

I'm going to force myself to use Unity for a while. That's causing me some grief, but I'm already finding some of the tricks that make Unity more productive (try right-clicking the "Applications" or "Files & Folders" icons in the Launcher bar).

To get my old files over from the old computer, I connected a 1 TB USB drive to the old laptop, shut down Evolution, virtual machines, and any other applications that might be updating files while I copied, and did:

sudo cp -a /home /media/wd1tbb/home


I have a lot of software installed on the old machine. I found excellent instructions here about how to move Ubuntu from one machine to another, preserving your environment and installed software as much as possible. The instructions are for when both machines are running exactly the same version of Ubuntu.

I was moving from 10.04 to 11.04, so I didn't meet the criteria. I generated the list of installed packages anyway, and took a look at the list using a text editor. The vast majority of the packages are what I'd call supporting or system packages. So I think I'll skip that step.

But wait. The Ubuntu Software Centre has a better idea of what's an application package. I'll have to look into where it gets its list.

Anyway, back to copying my files. I did:

sudo cp -aiv /media/wd1tbb/home/reid /home


Then I used judgement to decide whether to overwrite or not.

Argh! For some reason, the new computer has the original user with uid=1001, not uid=1000 like every other Ubuntu I've installed. I had to run commands like this to fix up the files I copied over:

sudo find /home -gid 1000 -uid 1000 -exec chown -h 1001:1001 \{} \;

Then, probably because I've upgraded versions of Evolution, just copying the files across wasn't enough. I had to go back to the old machine and make an Evolution backup, copy it to the new machine, and set up Evolution from the backup. This was made more interesting because the Evolution setup wizard would show the text in each window for only a few seconds, then it would disappear. By fiddling and trying a few times I was able to get Evolution going. (A lot of work considering I may well switch back to Thunderbird soon.)

That's the basics. I think I'll post this now, and add new posts for all the other work I'll have to do (like lower the sensitivity of the touch pad).

Friday, 30 September 2011

Network Team for Data Centre Relocations

I had a real "Doh" moment this week. We're about 80 percent of the way through relocating a 500-server data centre. Things have been going pretty well, but right from the start we've found we were under-staffed on the network side. We have a pretty good process in place, with what I think is just the right amount of documentation. The individuals working on the network team are excellent. We even brought in one more person than we had planned for, but we're still struggling with burnout.

The light bulb went on for me a few days ago: Here, like other IT organizations of similar size and purpose, we have about ten times as many server admins as we do network admins. We have a bigger pool of people on the server side to draw from when we organize the relocations, which typically happen over night or on the weekend. While we rotate the network guys for the nights and weekends, it's a smaller pool. They do shifts more often, and they more often have to come in the next day, because they have to plan for the next relocation on their list and it's coming soon.

Another factor is that, in our case, the network team started intense work earlier than everyone else. We're occupying a brand new cage in a data centre, so the network team had to build the whole data centre network first. They did three weeks of intense work before we moved any servers at all.

As we hit six months of intense work for the network team, the strain is showing. We're going to try to rearrange some work and delay what we can. Other than that, we'll probably have to suck up the remaining 20 percent of the project somehow.

In the future, I'm not sure what I'd do. One approach, if I had enough budget, would be to hire a couple of contract network admins well in advance. Tell them they're going to work Wednesday to Sunday, and often at night. Train them up ahead of time so they're as effective as your in-house people. Then give most of the nasty shifts to the contractors.

What would you do?

(If you're looking for numbers, we have three full-time network engineers, and we're drawing on the operational pool from time to time.)

Saturday, 24 September 2011

The Data Centre Relocation Calendar

I'm past the half-way point in relocating a 500-server data centre. The servers are a real variety -- a typical medium-scale business data centre. We're using mostly internal resources, supplemented by some contractors like myself with data centre relocation experience.

I chose not to come in and impose a relocation methodology on everyone. There are a lot of reasons for that, some of which were out of my control. Rather than using a methodology out of the can, I tried to foster an environment where the team members would build the methodology that worked for them.

This turned out to be quite successful. One of the items that emerged fairly late for us, but was very successful, was a shared calendar in Microsoft Outlook/Exchange. (The tool wasn't important. It could have been done in Google Calendar. Use whatever your organization is already using.)

The shared calendar contained an event for every relocation of some unit of work -- typically some service or application visible to some part of the business or the public. Within the calendar event we put a high-level description of what was moving, the name of the coordinator for that unit of work, and hyperlinks to the detailed planning documents in our project shared folders. The calendar was readable by anyone in the corporation, but only my team members could modify it.

What struck me about the calendar was how it organically became a focal point for all sorts of meetings, including our weekly status meeting. Without having to make any pronouncements, people just began to put the calendar up on the big screen at the start of most of our meetings. We could see at a glance when we might be stretching our resources too thin. The chatter within the corporation that we weren't communicating enough diminished noticeably.

Based on my experience, I'd push for a calendar like this much earlier in the project. We built it late in our project because I had a lot of people on my team who were reluctant to talk about dates until they had all the information possible. We got so much value in having the calendar that I think it's worth it to make a calendar early in the planning stage, even if the dates are going to change.

Tuesday, 30 August 2011

Linuxcon 2011 Part II

I went to a lot of cloud computing-related talks at Linuxcon 2011. One of the better ones was by Mark Hinkle of cloud.com.

One of his slides showed what he considers the five characteristics of cloud computing. Two important ones for him are self-service and measured service. I think those are two useful criteria for distinguishing a true cloud from a plain VMware cluster.

One thing that was clear listening to all the talks, including Mark's, is the role of open source in the large clouds. Basically, anyone big is building their service on the open source cloud stacks. Of course, there are a number of open source cloud stacks. One of the challenges is to pick which one to use.

Fortunately, there are serious supporters behind the three main stacks. Eucalyptus has a company called Eucalyptus Systems backing it now, headed up by Mårten Mickos of MySQL fame. CloudStack has cloud.com, which is part of Citrix. And the OpenStack project is backed by Rackspace and NASA.

One factor that seems to be important is the hypervisors supported by the cloud stack. OpenStack supports the most right now.

Something that struck me listening to the talks is that the cloud, like so much in IT, isn't a slam dunk solution by itself. You need to know what problem you want to solve, and then figure out how to use the cloud to solve it, if indeed the cloud is a solution to your problem.

Related to that insight, it's clear that unless you solve the problem of monitoring your infrastructure with Zenoss or Nagios, and of provisioning it with Puppet or the like, you're not going to see much benefit from the cloud.

Saturday, 20 August 2011

Linuxcon 2011 Part I

Linux is 20 years old this year, and Linuxcon was in Vancouver, so I had to sign up. The conference ended yesterday. There were a lot of good speakers. As a bonus, we also got to hear some poor guy from HP give a keynote about HP's great WebOS play, at almost exactly the same time as his company was killing the product line.

What I was looking for, frankly, was a business opportunity for a small consultant/system integrator like Jade Systems to use Linux to help businesses with 1,000 servers, give or take a zero at the end. The most obvious opportunity I came away with is storage.

I've written before about the cost of enterprise storage. There are tremendous opportunities with hardware solutions like Backblaze's storage bricks, and the software that will make it all work is Gluster. Install Gluster on a couple of boxes with storage, and you have synchronous replication (locally) or asynchronous replication (over the WAN). It provides what you need to store your virtual machines and move them around your data centre as load or availability needs dictate. It can be your large, reliable, network attached storage device for all your spreadsheets and documents.

Gluster grew out of the needs of a supercomputing project at Lawrence Livermore Labs in 2004, and it has an impressive list of users today. They're working to integrate with the OpenStack cloud computing stack to provide a complete cloud storage solution for OpenStack.

This is certainly a solution that could support a business case.

Monday, 23 May 2011

Making DVDs

My son's class at school raised and released some salmon this spring, and he had a project to produce a video about it. I offered to edit the raw video together. That spun into a project where I ended up putting together four short videos. Then I put them all on one DVD with a menu. Of course, it wasn't easy. It turned into another episode of spending all my spare time for two weeks trying to do something useful with a computer.

First I had to find the right tools for Ubuntu (10.04 in my case). It turns out that Kino, the video editor I had used before, is no longer under active development. It looks like everyone is using OpenShot now. It's in the Ubuntu repository, but I found there were sufficient new features with the more recent version that I followed the instructions to get it from their PPA repository.

I found OpenShot to be quite intuitive. There's also pretty good documentation. I think I observed a few random crashes, so it's worthwhile to remember to save your work frequently. There is an autosave feature as well, but you have to find it and turn it on. It isn't on by default.

OpenShot will take your completed video project and turn it into a DVD image ready to burn. That's very slick. My earlier attempts at producing DVDs led to a lot of command line fiddling, and I found it very easy to burn DVDs that didn't actually work. OpenShot made it easy.

The next part of my challenge was to put four videos on the DVD and stick a menu on the front. I found documentation here and here and elsewhere on how to do it (the colour scheme is a killer). However, it turns out that making DVD menus is very picky and error prone, at least for me. I never got anything working consistently.

Finally I found DVD Styler. Again, it's in the Ubuntu repository so it's easy to install. It has a GUI and lets you set up a menu, including automatically doing the typical "Play all or episode selection" scenario if you have multiple videos.

There were a couple of tricks I discovered along the way:

  • Some of the original videos were shot in HD. OpenShot can't deal with HD. I had to use ffmpeg to convert the format to ordinary MPEG-2 video ("ffmpeg -i input.mp4 -target ntsc-dvd -y output.mpg" if you're making a DVD for North America)
  • OpenShot by default makes a DVD. You don't want that when you're planning to use DVD Styler to make a menu. I had to go back and re-export my videos from OpenShot as "mpeg2" videos
  • DVD Styler sets up the DVD menu buttons to use "auto" navigation. That didn't work on my cheap DVD player. I had to go into the properties window for each menu button, and under "focus", change the target of each button to the explicit value I needed (this description will make sense when you look at the properties window for the buttons)
DVD Styler lets you simply preview the menu, create an ISO file, or go straight to burning the DVD. I created the ISO so I could test it with video players (I used VLC and mplayer). Using the players didn't expose the menu button problem I mention in the second bullet above, but otherwise was a worthwhile step.

I didn't try to do anything but the default menu background in DVD Styler. If you've done it, please comment here with your experience.

Unfortunately, I can't post the final product. Since it was done at school, they're very careful not to release anything publicly when they don't have all the parents' permission. Makes sense, of course.

Monday, 25 April 2011

Rhythmbox Wouldn't Rip to .m4a

I had a problem yesterday where Rhythmbox wouldn't rip a CD to .m4a format. I went to the Preferences window to set the format, and it wouldn't appear in the drop-down. Curiously, if I clicked the Add button it would show the existing formats, and .m4a was in that list.

After much flailing, I discovered that you need to have both gstreamer0.10-plugins-bad and gstreamer0.10-plugins-bad-multiverse installed. I had removed the multiverse one a few weeks ago when I thought I was cleaning up after a previous round of flailing to get something around music working.
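In case it saves someone else some flailing, this should put both packages back (assuming the multiverse repository is enabled):

sudo apt-get install gstreamer0.10-plugins-bad gstreamer0.10-plugins-bad-multiverse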

Sunday, 27 March 2011

Debugging Windows Shares and Samba

I wrote here about a problem I had connecting to a Windows share that was using DFS from my Ubuntu 10.04 laptop. Turns out I was wrong about the issue being related to authentication.

The issue was that my server was sending back a string without the Windows file separator at the end (backslash "\"). I simply patched the code at the relevant place to check if the file separator was missing and put one in if needed. Not the most elegant patch, but probably less likely to impact any other code.

I'm being careful not to call this a bug in Samba. The file server was an EMC NAS implementation. Fairly common, but not a standard Windows server sharing files. Perhaps the EMC is out of spec on this point, or perhaps the spec is ambiguous on trailing file separators. Given the number of devices out there, I think it's reasonable to make the Samba client code handle the case, regardless of what the spec says.

I fed the patch to Ubuntu, because that's where I originally logged the issue. Building Samba was remarkably hassle free, as was getting the source from their git repository, thanks to good documentation for both. My thanks to the Samba team for a great product.

Sunday, 13 March 2011

Connecting to DFS Shares with Ubuntu

See an important update here. The information that follows is still relevant to mount.cifs.

With my current client, when I'm working at home, I have to connect to a CIFS share that uses Microsoft's DFS through their Cisco AnyConnect VPN. It seemed like the mount would work, but I couldn't see any of the files or folders below the share. (This is with Ubuntu 10.04 Lucid Lynx.)

Fortunately, some other consultants working on the same project had the same problem, and they found a work-around. The work-around was to connect to the underlying server and folder, rather than through the DFS root.

As my position with this client has evolved, I've needed to get to other folders on their file server, and I've occasionally had problems because I didn't know the underlying server and folder I needed to get to. So I continued to work on the problem. I think I've found the solution.

I already had the keyutils package installed. You'll need it:

sudo apt-get install keyutils

Then I added the following two lines to /etc/request-key.conf:

create cifs.spnego    * * /usr/sbin/cifs.upcall %k %d
create dns_resolver   * * /usr/sbin/cifs.upcall %k

Now I can connect to the DFS root if I use a mount command in a Terminal window.


sudo mount -t cifs --verbose -o user=my_domain/my_user_id //my_server/my_share /mnt


It still doesn't work if I try to connect to the share with Nautilus. 

(A quick check of a VM of 11.04 alpha 2 that I had lying around shows the above two lines are already in /etc/request-key.conf.)
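Since the terminal mount is what works, an /etc/fstab entry would save some typing. A sketch I haven't tested, using the same placeholder names as the mount command above:

//my_server/my_share  /mnt/my_share  cifs  username=my_user_id,domain=my_domain,noauto  0  0

Then a plain "sudo mount /mnt/my_share" (after creating the mount point) should prompt for the password and mount the share.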

I haven't been using the fix for long yet, so I don't know if it's the complete solution. I've noticed so far that sometimes Nautilus times out and doesn't get the file and folder list from the share. When I refresh the view in Nautilus it works fine.

One of the key hints to find the solution was this text in my dmesg log:

CIFS VFS: cifs_compose_mount_options: Failed to resolve server part of...

Note that it turned out to have nothing to do with the VPN. Also, it leaves open the mystery as to why the other consultants, who were using Windows Vista, I think, also had to use the same work-around.