Monday, 30 May 2016

WebEx on Ubuntu 16.04

Java

You need Java installed. I used the OpenJDK JRE. Some places on the web say you need the Oracle version, but it works for me with the OpenJDK JRE and the IcedTea plugin:

sudo apt-get install openjdk-8-jre icedtea-8-plugin

That’s all you need to get the meeting to work, but…

Missing i386 Libraries

You won’t be able to share screens, though, because the WebEx plugin is 32-bit and depends on a number of i386 libraries that aren’t installed by default.

Check to see if you’re missing libraries by going into ~/.webex/ and then into a sub-directory whose name is all digits and underscores. Once there, run:

ldd *.so | grep "not found" | cut -f1 -d' ' | tr -d '\t' | uniq
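To see what that pipeline is doing, here’s the same filter run over a fabricated fragment of ldd-style output (the library names and paths are just illustrative):

```shell
# Fabricated ldd-style output: one resolved library, two missing ones.
sample="$(printf '\tlibjawt.so => not found\n\tlibXmu.so.6 => /usr/lib/i386-linux-gnu/libXmu.so.6 (0xf7000000)\n\tlibpangox-1.0.so.0 => not found\n')"

# Same pipeline as above: keep the unresolved entries, take the first
# space-separated field (the library name), strip the leading tab, and
# collapse adjacent duplicates.
printf '%s\n' "$sample" | grep "not found" | cut -f1 -d' ' | tr -d '\t' | uniq
# → prints libjawt.so and libpangox-1.0.so.0
```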

I got about a dozen missing libraries on a relatively new install of Ubuntu 16.04. You may get different results, depending on what’s been installed on your system since you initially installed Ubuntu 16.04.

I installed the following packages (fewer than a dozen, because some packages pull in multiple libraries as dependencies):

sudo apt-get install libxmu6:i386
sudo apt-get install libgtk2.0-0:i386
sudo apt-get install libpangox-1.0-0:i386
sudo apt-get install libpangoxft-1.0:i386
sudo apt-get install libxtst6:i386
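Since apt-get accepts several packages at once, the five commands above can equally be run as a single invocation (same package names, just combined):

```shell
# Equivalent single invocation of the five installs above
sudo apt-get install libxmu6:i386 libgtk2.0-0:i386 \
    libpangox-1.0-0:i386 libpangoxft-1.0:i386 libxtst6:i386
```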

If you check again with the above ldd command, the only library you should still be missing is libjawt.so. This library doesn’t seem to be needed.

Sunday, 17 April 2016

Android Phone Not Connecting via DHCP

I had a weird problem where suddenly my phone stopped connecting to my home WiFi. I was getting the WiFi icon with the exclamation mark, meaning that the phone was connecting to the router but wasn't getting the information (an IP address, via DHCP) needed to participate in the network.

(The solution further down doesn't require you to understand the next couple of paragraphs, so don't despair if there's too much tech talk in what follows.)

Many posts on-line suggested configuring a static IP address. I was able to do that at home, because I knew the range of addresses that my router would never hand out via DHCP. But I wasn't satisfied with that solution. I hate it when problems mysteriously arise, and I couldn't identify any reason why my network connection at home should have suddenly started failing.

About the third time I looked for a solution, I came across this document from Princeton. It mentions that there's a bug in some Broadcom chips that breaks DHCP when Wi-Fi is kept on while the device is asleep.

Well, I remember noting that I had my network configured to stay up when the device was asleep. I noticed it because I didn't think I had configured it that way. (I sometimes find my phone on the settings screen when I pull it out of my pocket, and settings are accidentally changed.)

So (here's the solution): I went to Settings -> Wi-Fi, touched the three dots near the top right of the screen, chose Advanced, and turned off "Keep Wi-Fi on during sleep", which makes Wi-Fi turn off when the device sleeps. After that, my phone connected to my home network just fine.

My phone is a Nexus 4, running Android 5.1.1, but this might well affect other models, since the cause appears to be the hardware.

Saturday, 9 April 2016

Installing a Brother MFC9340CDW on Ubuntu 14.04

The printer install was easy. Just follow Brother’s instructions, which at the time were at: http://support.brother.com/g/b/downloadtop.aspx?c=ca&lang=en&prod=mfc9340cdw_all. Brother seems to change the location of their documents. A lot of the links on the net were broken.

The trick came when I installed on the second computer: the installer couldn’t find the printer. Once the printer was woken up, the install went fine.

(I think I saw some references to wake-on-lan being an issue. I haven’t had a chance to look into it.)

As usual, installing the scanner was a bit of an adventure. I followed the instructions here: http://support.brother.com/g/s/id/linux/en/instruction_scn1c.html?c=us_ot&lang=en&comple=on&redirect=on#u13.04

and this:

sudo usermod -a -G lp <username>
sudo usermod -a -G lp saned
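To confirm the group changes took (group membership is only re-read at login, hence the reboot), something like the following works; saned is the scanner daemon's user, as above:

```shell
# List the groups a user belongs to; 'lp' should appear after re-login
id -nG saned

# Show the lp group entry, including its member list
getent group lp
```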

and rebooted, and it still didn’t work. But then I just ignored it for about eight hours and did some yard work and cooking, and scanning worked. Go figure.

It’s sure nice to have double-sided scanning and printing. One trick with xsane and double-sided scanning is that you enter the number of page sides you’re going to scan, not the number of physical sheets of paper. In other words, when you’re scanning three pieces of paper double-sided, tell xsane that you’re scanning six pages.

Thursday, 24 September 2015

Why On-Line Voting Won't Work

There are a lot of reasons why on-line voting for anything important is a really bad idea. Some of them are technological, and technological problems, given time, can often be solved. But some of the problems are based in human nature. These kinds of problems are much harder to solve.

If you're sitting at home while you vote, someone, say from the ruling party, can stand over you and force you to vote the way they want you to. If you don't think that could happen in your country, fine. Maybe you trust your political parties more than I trust mine. But I bet you can think of places where it could happen.

That's not the only situation where people could be forced to vote for certain candidates. A woman in an abusive relationship could be prevented by her husband from voting for the candidate who wants to crack down on spousal abuse. Or imagine you're a candidate running to protect people from slum-lords, when the slum-lords can make sure that anyone who votes for you gets kicked out of their apartment. What about a single mother who can only find accommodation in a "faith" based shelter, and the shelter requires her to vote for candidates who want to take away women's reproductive rights?

And then there's the simple bribe. Did you ever wonder why parties spend money on advertising, instead of simply offering you $20 for your vote? It would probably be a lot cheaper. You don't get offered a bribe for your vote because the briber has no way of knowing you actually voted the way they paid you to. Once someone can watch you vote, they will be a lot more willing to pay you to vote their way.

None of the above happens in most modern democracies because you vote in a public place. People -- other voters and election officials -- can see that no one else could see how you voted. So there's a reasonable chance that you really did vote for the candidate you preferred, and not for the one that either coerced you or paid you to vote for them.

We're taught that the secret vote is the key to legitimacy for an electoral system. But the real key is the secret vote in a public place. Once the safety of the crowd is taken away from the act of voting, bribery and coercion become effective options. And once people believe that votes can be bought or obtained by coercion, the legitimacy of the whole electoral system evaporates.

There may be answers to these problems. I don't pretend to be smart enough to come up with them. But until we have answers to these issues, the technological problems of on-line voting pale in comparison.

Friday, 21 August 2015

Doing Something About Security -- Linuxcon 2015 #3

The Let's Encrypt people are wonderful. They're doing something about the state of security on the Internet. They're providing an easy and free way to get the certificates you need to publish a secure web site (one using HTTPS instead of plain HTTP), like your bank does.

If you've run a secure web site, you'll know that it's expensive, inflexible, takes time to set up, and requires you to remember to renew the certificate. Let's Encrypt solves most of those problems for you, at least in a common use case.

If you run a server with a dedicated IP, have privileges to install software on that server (i.e. you can run `apt-get` or `yum`), and you use Apache or Nginx as your HTTP server, then it's brain-dead simple to switch to HTTPS.

Let's Encrypt is planning on going live sometime in the last quarter of 2015. Right now they're in a restricted beta, and users will see browser warnings about your site if you use their certificates. When they go live, they'll be backed by IdenTrust, so users will have the same warning-free experience that any other secure site would have.

But even if it's not ready to use for the general public, you can help them test. (At the moment, you can't use apt or yum to install the Let's Encrypt client. Read these installation instructions instead.)
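For reference, the beta-era workflow looked roughly like this. This is a sketch only: the repository location, script name, and flags may well have changed since, and example.com is a placeholder for your own domain.

```shell
# Sketch of the beta-era Let's Encrypt client workflow; details may differ.
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

# Obtain and install a certificate for a domain served by Apache
./letsencrypt-auto --apache -d example.com
```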

There are lots of use cases that aren't helped by Let's Encrypt yet. Probably the most glaring are for the legions of us that use $3/month hosting services that don't give us a fixed IP and a way to install the Let's Encrypt client. Still, it's a big step forward for a secure web.

Mood and Swag -- Linuxcon 2015 #2

Four years ago I went to Linuxcon NA 2011. The unspoken mood of the conference seemed to be, "Linux has won the OS wars, but the rest of the world hasn't noticed." At Linuxcon NA 2015, the unspoken mood of the conference was, "It doesn't matter who won the OS wars."

Some big markers of that:

  • IBM announcing LinuxOne, an offering for people who want to buy Z series mainframes to run Linux
  • Microsoft giving out soft squishy Tux penguins with a Microsoft URL on them, and stickers that said, "Microsoft ♥ Linux"
  • Only one joke about 2015 being the year of the Linux desktop

Container Land -- Linuxcon 2015 #1

I went to Linuxcon 2015 NA with a friend this year. It wasn't hard to figure out what the flavour of the year was -- containers. And Docker was the overwhelming favourite. As usual, I found it way easier to see the negative in the hype, rather than the positive.

But I'll try to see the positive first. There's a lot of value in having a thinner layer, thinner than a full virtual machine, between an application's context and the bare metal. There's also value in packaging an application and distributing it in a way that's thinner than shipping a whole virtual machine. Containers have the potential to provide these features.

The container has many historical roots, but from what I saw, we're mostly excited today because this is what Google has been using for 10 years to run their vast server farms. They have a very particular use case: Huge numbers of users accessing a small set of relatively homogeneous functionality. Perfect for a light-weight way of deploying a huge number of instances of applications across the smallest number of physical resources possible.

There were a number of presentations where the engineering challenges around containers were discussed. And there are significant ones, primarily around networking and privileges (all processes in containers run as root on the containing physical machine). These challenges will be solved, but not for another 18-24 months, I'd guess. Only then can we start to talk about adoption in the enterprise world.

In the enterprise world, the one I get paid to play in, we're mostly still dealing with servers as pets. Even at my current client, who have drunk the DevOps kool-aid and have Puppetized a lot of their deploys, we're talking about very few duplicate instances of a server in production. (They get value from Puppet by cleverly factoring the Puppet configuration across development, test, UAT, and production environments.)

Given the engineering effort that was evident in the containers model, I think there's going to be another significant adoption hill, like there was for virtualization. Perhaps even more so, as I'm not convinced that the math will be quite as compelling for containers as it was for VMs. The problem is that the definitions of containers have to be hand-crafted. Once the container is defined, you can spin up thousands, quickly and efficiently. But as I just said, most enterprises just need a few instances of any particular application.

Some of the speakers talked about containers being just another stop on the continuum from physical machines to virtual machines and other models (Amazon Lambdas, for example). SaaS (not PaaS) providers can use containers to realize savings on hardware, because they can amortize the container definition cost over all their customers. Enterprises that use SaaS will use containers, without even knowing it, as it should be.

Compounding the problem of in-house enterprise adoption of containers is the fact that the orchestration tools (tools for spinning up, shutting down, and monitoring a large number of instances) are largely split along the underlying model: You use VMware or Openstack to manage virtual machines, and Kubernetes (or any one of hundreds of other offerings) to manage containers. Most enterprises won't have the personnel or the volume of applications to justify developing two different skill sets and platforms to manage their VMs and containers. There needs to be a unified orchestration platform that covers the spectrum of deployment models.

In summary, I think that containers will be a significant deployment option in the near future, but the way they will be used in practice is still to be determined, and they may never end up being adopted for in-house enterprise deployments.