Thursday, 24 September 2015

Why On-Line Voting Won't Work

There are a lot of reasons why on-line voting for anything important is a really bad idea. Some of them are technological, and technological problems, given time, can often be solved. But some of the problems are based in human nature. These kinds of problems are much harder to solve.

If you're sitting at home while you vote, someone, say from the ruling party, can stand over you and force you to vote the way they want you to. If you don't think that could happen in your country, fine. Maybe you trust your political parties more than I trust mine. But I bet you can think of places where it could happen.

That's not the only situation where people could be forced to vote for certain candidates. A woman in an abusive relationship could be prevented by her husband from voting for the candidate who wants to crack down on spousal abuse. Or imagine you're a candidate running to protect people from slum-lords, but the slum-lords can make sure that anyone who votes for you gets kicked out of their apartment. What about a single mother who can only find accommodation in a "faith"-based shelter, and the shelter requires her to vote for candidates who want to take away women's reproductive rights?

And then there's the simple bribe. Did you ever wonder why parties spend money on advertising, instead of simply offering you $20 for your vote? It would probably be a lot cheaper. You don't get offered a bribe for your vote because the briber has no way of knowing you actually voted the way they paid you to. Once someone can watch you vote, they will be a lot more willing to pay you to vote their way.

None of the above happens in most modern democracies because you vote in a public place. People -- other voters and election officials -- can see that no one else could see how you voted. So there's a reasonable chance that you really did vote for the candidate you preferred, and not for the one that either coerced you or paid you to vote for them.

We're taught that the secret vote is the key to legitimacy for an electoral system. But the real key is the secret vote in a public place. Once the safety of the crowd is taken away from the act of voting, bribery and coercion become effective options. And once people believe that votes can be bought or obtained by coercion, the legitimacy of the whole electoral system evaporates.

There may be answers to these problems. I don't pretend to be smart enough to come up with them. But until we have answers to these issues, the technological problems of on-line voting pale in comparison.

Friday, 21 August 2015

Doing Something About Security -- Linuxcon 2015 #3

The Let's Encrypt people are wonderful. They're doing something about the state of security on the Internet. They're providing an easy and free way to get the certificates you need to publish a secure web site (one using HTTPS instead of plain HTTP), like your bank does.

If you've run a secure web site, you'll know that it's expensive, inflexible, takes time to set up, and requires you to remember to renew the certificate. Let's Encrypt solves most of those problems for you, at least in a common use case.

If you run a server with a dedicated IP, have privileges to install software on that server (i.e. you can run `apt-get` or `yum`), and you use Apache or Nginx as your HTTP server, then it's brain-dead simple to switch to HTTPS.

Let's Encrypt is planning to go live sometime in the last quarter of 2015. Right now they're in a restricted beta, and visitors will see browser warnings if you use their certificates on your site. When they go live, they'll be backed by IdenTrust, so visitors will have the same warning-free experience they get at any other secure site.

But even if it's not ready to use for the general public, you can help them test. (At the moment, you can't use apt or yum to install the Let's Encrypt client. Read these installation instructions instead.)
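
If you do help test, the beta workflow is roughly this (a sketch based on the current beta client; example.com is a placeholder, and the commands may well change before launch):

# Fetch the beta client; it bootstraps its own dependencies on first run.
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
# Request a certificate and let the client configure Apache for you.
./letsencrypt-auto --apache -d example.com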

There are lots of use cases that aren't helped by Let's Encrypt yet. Probably the most glaring is that of the legions of us who use $3/month hosting services that don't give us a fixed IP or a way to install the Let's Encrypt client. Still, it's a big step forward for a secure web.

Mood and Swag -- Linuxcon 2015 #2

Four years ago I went to Linuxcon NA 2011. The unspoken mood of the conference seemed to be, "Linux has won the OS wars, but the rest of the world hasn't noticed." At Linuxcon NA 2015, the unspoken mood of the conference was, "It doesn't matter who won the OS wars."

Some big markers of that:

  • IBM announcing LinuxONE, an offering for people who want to buy Z series mainframes to run Linux
  • Microsoft giving out soft squishy Tux penguins with a Microsoft URL on them, and stickers that said, "Microsoft ♥ Linux"
  • Only one joke about 2015 being the year of the Linux desktop

Container Land -- Linuxcon 2015 #1

I went to Linuxcon NA 2015 with a friend this year. It wasn't hard to figure out what the flavour of the year was -- containers. And Docker was the overwhelming favourite. As usual, I found it way easier to see the negative in the hype, rather than the positive.

But I'll try to see the positive first. There's a lot of value in having a thinner layer, thinner than a full virtual machine, between an application's context and the bare metal. There's also value in packaging an application and distributing it in a way that's thinner than shipping a whole virtual machine. Containers have the potential to provide these features.
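
For instance, with Docker the packaging step can be as small as a few lines. This is a hypothetical sketch; the application name, port, and base image are made up:

# Describe the image: a base OS layer plus our application binary.
cat > Dockerfile <<'EOF'
FROM debian:8
COPY myapp /usr/local/bin/myapp
EXPOSE 8080
CMD ["/usr/local/bin/myapp"]
EOF

# Build the image once, then spin up as many instances as you like.
docker build -t myapp:1.0 .
docker run -d -p 8080:8080 myapp:1.0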

The container has many historical roots, but from what I saw, we're mostly excited today because this is what Google has been using for 10 years to run their vast server farms. They have a very particular use case: huge numbers of users accessing a small set of relatively homogeneous functionality. That's perfect for a light-weight way of deploying a huge number of instances of applications across the smallest possible number of physical resources.

There were a number of presentations where the engineering challenges around containers were discussed. And there are significant ones, primarily around networking and privileges (by default, a process running as root in a container is also root on the containing physical machine). These challenges will be solved, but not for another 18-24 months, I'd guess. Only then can we start to talk about adoption in the enterprise world.
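
You can see the privilege problem for yourself with a stock Docker install (a sketch; it assumes user namespaces are not in play, which is the default today):

# Start a throw-away container that just sleeps.
docker run -d --name demo debian:8 sleep 300
# Inside the container, the process runs as uid 0 (root)...
docker exec demo id -u
# ...and the host sees that very same process as root too.
ps -o user= -p "$(docker inspect -f '{{.State.Pid}}' demo)"
docker rm -f demo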

In the enterprise world, the one I get paid to play in, we're mostly still dealing with servers as pets. Even at my current client, who have drunk the DevOps kool-aid and have Puppetized a lot of their deploys, we're talking about very few duplicate instances of a server in production. (They get value from Puppet by cleverly factoring the Puppet configuration across development, test, UAT, and production environments.)

Given the engineering effort that was evident in the containers model, I think there's going to be another significant adoption hill, like there was for virtualization. Perhaps even more so, as I'm not convinced that the math will be quite as compelling for containers as it was for VMs. The problem is that the definitions of containers have to be hand-crafted. Once the container is defined, you can spin up thousands, quickly and efficiently. But as I just said, most enterprises just need a few instances of any particular application.

Some of the speakers talked about containers being just another stop on the continuum from physical machines to virtual machines and other models (Amazon Lambdas, for example). SaaS (not PaaS) providers can use containers to realize savings on hardware, because they can amortize the container definition cost over all their customers. Enterprises that use SaaS will use containers, without even knowing it, as it should be.

Compounding the problem of enterprise adoption of containers in-house is the fact that the orchestration tools (tools for spinning up, shutting down, and monitoring a large number of instances) are largely split along the underlying model: you use VMware or OpenStack to manage virtual machines, and Kubernetes (or any one of hundreds of other offerings) to manage containers. Most enterprises won't have the personnel or the volume of applications to justify developing two different skill sets and platforms to manage their VMs and containers. There needs to be a unified orchestration platform that covers the spectrum of deployment models.
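
To give a flavour of the container side of that split, deploying and scaling with Kubernetes looks something like this (the image name is hypothetical; the VM side would be a completely different toolchain):

# Deploy three instances of a (hypothetical) application image.
kubectl run myapp --image=myapp:1.0 --replicas=3
# Scaling up is one command -- but none of this speaks VMware or OpenStack.
kubectl scale rc myapp --replicas=10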

In summary, I think that containers will be a significant deployment option in the near future, but the way they will be used in practice is still to be determined, and they may never end up being adopted for in-house enterprise deployments.

Sunday, 26 April 2015

DNS with DD-WRT and dnsmasq

I recently switched ISPs. I had been a broadband customer for 15 years with the same ISP. I had a fairly complicated home network with a few routers, one of which provided DHCP and DNS. Between my network and the ISP’s, I had a simple ADSL modem with no additional functionality (i.e. no built-in wireless or router).

My new cable modem (an Arris/Motorola SBG6782) came with wireless, four wired LAN ports, and a router. Basic connectivity was up very quickly, and we were able to start connecting wirelessly to the new network.

Unfortunately, the new modem/router is also “idiot-proofed” by the manufacturer and/or the ISP. Among other features, I’m forced to use the “192.168.0.0/24” subnet for my LAN. Way back, for reasons that I have mostly forgotten, I set up my LAN on “10.3.3.0/24”. This meant that all my network infrastructure, including file servers, storage boxes, printers, backup destinations, etc., was broken by the new router.

After a couple of unsuccessful attempts to make the router act only as a modem, I decided that reconfiguring my network was probably the quickest solution. (The ISP’s support forums suggested it was possible, but it didn’t work for me, and their customer support denied all knowledge of how to do it.)

In my old network, one DD-WRT-based router was connected to the ADSL modem (Internet) via the router’s WAN port. That router provided DHCP and DNS for my network. The DNS service resolved names both for devices with static IPs and for devices that got their addresses from DHCP.

For the new network, I disconnected the WAN port and hooked the LAN side of the router into an 8-port switch that was also connected to the cable modem. I changed all the relevant IP addresses in the router to “192.168.0.x” addresses. Tedious, but it mostly worked.

The part that didn’t work was DNS. If I had DHCP on my router hand out my router’s IP as the DNS server, I could look up hosts on my LAN, but not on the Internet. If I hard coded the ISP’s DNS servers into my router, I could look up hosts on the Internet, but not on my LAN.

Since my router was no longer using DHCP to get its Internet address from the ISP, the magic done by DD-WRT and/or dnsmasq to configure the DNS service wasn’t working any more.

After a bit of Googling, and reading the dnsmasq documentation, I decided that what was missing was the ISP’s DNS servers in the router’s /tmp/resolv.dnsmasq file. So on the Administration -> Commands page of the router’s web interface, I added these lines to the start-up script:

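# Point dnsmasq at the ISP's upstream DNS servers (these addresses are
# my ISP's; substitute your own), then tell dnsmasq to re-read the file: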
echo 'nameserver 64.59.144.93' >> /tmp/resolv.dnsmasq
echo 'nameserver 64.59.150.139' >> /tmp/resolv.dnsmasq
killall -HUP dnsmasq

After rebooting the router, and disconnecting and reconnecting my computer to force it to get the new settings from the router, DNS works.
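
A quick sanity check from any client, once it has the new settings (the LAN host name below is a placeholder for one of your own machines):

# A LAN host should resolve from dnsmasq's local records...
nslookup fileserver
# ...and an Internet host should resolve via the ISP's upstream servers.
nslookup www.example.com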

I’m not sure if this is the “right” way to do it, but it works.

Sunday, 19 April 2015

Single Sign-Off

One of the things I find amusing about the IT business is how often we create unintended consequences for ourselves.

Last week at work we ran into an interesting dilemma: We have a nice set-up to enable some level of single sign-on for our external users (business partners), across a suite of applications they use. We're preparing to deploy some browser-based COTS (commercial off-the-shelf) software into that suite of applications. Like most applications, the new one has a "log out" button.

When the user logs out, we'd like to take them back to a page that says, "You have logged out. To log on again click here." But we can't, because once they click on the log-out link, our "single sign-on" becomes "single sign-off". Before they can see any page on our partner network, we have to send them to our corporate log-in page.

We have options, so it's not like this is a huge problem. But no one thought of it before, so we're going through a bit of churn while people get their heads around the problem and decide how they want to deal with it.

So don't forget, "single sign-on" also means "single sign-off".

Sunday, 18 January 2015

Finding More Women for IT

Martin Fowler recently published a great blog post on how to get more gender diversity in IT. You need to read his post to understand this one, but in a nutshell he makes an analogy to a bag of marbles. 80% are blue and 20% are pink, and 10% of each colour are sparkly. As long as you have 100 marbles, you can find at least two sparkly ones of either colour. You just have to look for them.

When I read his post, I thought, "What about the marbles outside the bag?" In the universe of marbles, 50% are blue and 50% are pink, and 10% of each colour are sparkly. So if you step outside the bag (e.g. the resumes you received for a job posting), a sparkly pink marble is just as easy to find as a sparkly blue one, and much easier to find than inside the bag.

Another thought would be to get a bag of pink marbles from the factory. Then it's really easy to get pink sparkly marbles. And you'll probably get to choose amongst all the sparkly ones yourself, at least until other people clue in that this is a good way to get sparkly marbles. This is the equivalent of recruiting from women-in-IT meetups and suchlike. And that's not so hard.

Sunday, 4 January 2015

Using Plantronics M165 Marque 2 Bluetooth Headset with Linux

The Plantronics M165 Marque 2 Bluetooth headset paired very nicely with my Android phone. To pair it to my computer running Linux Mint 17 I:
  1. Clicked on the Bluetooth icon
  2. Turned on Bluetooth
  3. Clicked “Set up a new device…”
  4. Pressed and held the Call button on the headset for five or six seconds, until the computer found the headset (the Plantronics documentation is here)
The sound test in the Sound Settings dialogue didn’t sound right, but I could play music through the headset and it sounded recognizable.

Using it with Skype gave super-sucky sound quality. Lowering the PCM level in alsamixer to about 70 made the sound quality a lot better, but still not great (lowering the PCM level was suggested here).

alsamixer is a command-line application. Open a Terminal and type alsamixer, then use the left and right arrow keys to find “PCM”, and use the up and down arrow keys to set the level.
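
If you’d rather not navigate the alsamixer interface, the same change can be made with a single command (assuming the default ALSA card; 70% matches the level that worked above):

# Set the PCM playback level to about 70% on the default card.
amixer set PCM 70%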

Still trying to improve the sound quality, I noticed that the computer’s built-in microphone stays on while the headset is connected. On a Skype test call, manually turning the built-in microphone off didn’t seem to make a lot of difference to the sound quality.

Using Audacity to record sound, the quality of the built-in microphone was even worse than the headset’s.

[Edit] Using the headset, I made a Skype call to my son, and he said the quality of my voice was okay. I could also hear him okay.

In case you need to know, the Bluetooth config files are in /etc/bluetooth.