Thursday, 24 September 2015
If you're sitting at home while you vote, someone, say from the ruling party, can stand over you and force you to vote the way they want you to. If you don't think that could happen in your country, fine. Maybe you trust your political parties more than I trust mine. But I bet you can think of places where it could happen.
That's not the only situation where people could be forced to vote for certain candidates. A woman in an abusive relationship could be prevented by her husband from voting for the candidate who wants to crack down on spousal abuse. Or imagine you're a candidate running to protect people from slum-lords, when the slum-lords can make sure that anyone who votes for you gets kicked out of their apartment. What about a single mother who can only find accommodation in a "faith" based shelter, and the shelter requires her to vote for candidates who want to take away women's reproductive rights?
And then there's the simple bribe. Did you ever wonder why parties spend money on advertising, instead of simply offering you $20 for your vote? It would probably be a lot cheaper. You don't get offered a bribe for your vote because the briber has no way of knowing you actually voted the way they paid you to. Once someone can watch you vote, they will be a lot more willing to pay you to vote their way.
None of the above happens in most modern democracies because you vote in a public place. People -- other voters and election officials -- can see that no one else could see how you voted. So there's a reasonable chance that you really did vote for the candidate you preferred, and not for the one that either coerced you or paid you to vote for them.
We're taught that the secret vote is the key to legitimacy for an electoral system. But the real key is the secret vote in a public place. Once the safety of the crowd is taken away from the act of voting, bribery and coercion become effective options. And once people believe that votes can be bought or obtained by coercion, the legitimacy of the whole electoral system evaporates.
There may be answers to these problems. I don't pretend to be smart enough to come up with them. But until we have answers to these issues, the technological problems of on-line voting pale in comparison.
Friday, 21 August 2015
If you've run a secure web site, you'll know that it's expensive, inflexible, takes time to set up, and requires you to remember to renew the certificate. Let's Encrypt solves most of those problems for you, at least in a common use case.
If you run a server with a dedicated IP, have privileges to install software on that server (i.e. you can run `apt-get` or `yum`), and you use Apache or Nginx as your HTTP server, then it's brain-dead simple to switch to HTTPS.
But even if it's not ready to use for the general public, you can help them test. (At the moment, you can't use apt or yum to install the Let's Encrypt client. Read these installation instructions instead.)
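To give a sense of how simple it is, this is roughly what the installation instructions amounted to at the time of writing (the exact script name and flags may have changed, so treat this as a sketch and check the project's own instructions; `example.com` is a placeholder for your domain):

```shell
# Fetch the Let's Encrypt client from its Git repository
# (not yet packaged for apt or yum at the time of writing)
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

# The bootstrap script installs its own dependencies, then
# obtains a certificate; with the Apache plugin it can also
# edit the server configuration and enable HTTPS for you
./letsencrypt-auto --apache -d example.com
```

After that, the remaining chore is renewal, which the client can also handle from the command line.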
There are lots of use cases that aren't helped by Let's Encrypt yet. Probably the most glaring are for the legions of us that use $3/month hosting services that don't give us a fixed IP and a way to install the Let's Encrypt client. Still, it's a big step forward for a secure web.
Some big markers of that:
- IBM announcing LinuxOne, an offering for people who want to buy Z series mainframes to run Linux
- Microsoft giving out soft squishy Tux penguins with a Microsoft URL on them, and stickers that said, "Microsoft Linux"
- Only one joke about 2015 being the year of the Linux desktop
But I'll try to see the positive first. There's a lot of value in having a thinner layer, thinner than a full virtual machine, between an application's context and the bare metal. There's also value in packaging an application and distributing it in a way that's thinner than shipping a whole virtual machine. Containers have the potential to provide these features.
The container has many historical roots, but from what I saw, we're mostly excited today because this is what Google has been using for 10 years to run their vast server farms. They have a very particular use case: Huge numbers of users accessing a small set of relatively homogeneous functionality. Perfect for a light-weight way of deploying a huge number of instances of applications across the smallest number of physical resources possible.
There were a number of presentations where the engineering challenges around containers were discussed. And there are significant ones, primarily around networking and privileges (processes in containers run as root on the underlying physical machine). These challenges will be solved, but not for another 18-24 months, I'd guess. Only then can we start to talk about adoption in the enterprise world.
In the enterprise world, the one I get paid to play in, we're mostly still dealing with servers as pets. Even at my current client, who have drunk the DevOps kool-aid and have Puppetized a lot of their deploys, we're talking about very few duplicate instances of a server in production. (They get value from Puppet by cleverly factoring the Puppet configuration across development, test, UAT, and production environments.)
Given the engineering effort that was evident in the containers model, I think there's going to be another significant adoption hill, like there was for virtualization. Perhaps even more so, as I'm not convinced that the math will be quite as compelling for containers as it was for VMs. The problem is that the definitions of containers have to be hand-crafted. Once the container is defined, you can spin up thousands, quickly and efficiently. But as I just said, most enterprises just need a few instances of any particular application.
Some of the speakers talked about containers being just another stop on the continuum from physical machines to virtual machines and other models (Amazon Lambdas, for example). SaaS (not PaaS) providers can use containers to realize savings on hardware, because they can amortize the container definition cost over all their customers. Enterprises that use SaaS will use containers, without even knowing it, as it should be.
Compounding the problem of enterprise adoption of containers in-house is the fact that the orchestration tools (tools for spinning up, shutting down, and monitoring a large number of instances) are largely split along the underlying model: you use VMware or OpenStack to manage virtual machines, and Kubernetes (or any one of hundreds of other offerings) to manage containers. Most enterprises won't have the personnel or the volume of applications to justify developing two different skill sets and platforms to manage their VMs and containers. There needs to be a unified orchestration platform that covers the spectrum of deployment models.
In summary, I think that containers will be a significant deployment option in the near future, but the way they will be used in practice is still to be determined, and they may never end up being adopted for in-house enterprise deployments.
Sunday, 26 April 2015
I recently switched ISPs. I had been a broadband customer for 15 years with the same ISP. I had a fairly complicated home network with a few routers, one of which provided DHCP and DNS. Between my network and the ISP’s, I had a simple ADSL modem with no additional functionality (i.e. no built-in wireless or router).
My new cable modem (an Arris/Motorola SBG6782) came with wireless, four wired LAN ports, and a router. Basic connectivity was up very quickly, and we were able to start connecting wirelessly to the new network.
Unfortunately, the new modem/router is also “idiot-proofed” by the manufacturer and/or the ISP. Among other features, I’m forced to use the “192.168.0.0/24” subnet for my LAN. Way back, for reasons that I have mostly forgotten, I set up my LAN on “10.3.3.0/24”. This meant that all my network infrastructure, including file servers, storage boxes, printers, backup destinations, etc. were all broken by the new router.
After a couple of unsuccessful attempts to make the router act only as a modem, I decided that reconfiguring my network was probably the quickest solution. (The ISP’s support forums suggested it was possible, but it didn’t work for me, and their customer support denied all knowledge of how to do it.)
In my old network, one DD-WRT-based router was connected to the ADSL modem (Internet) via the router’s WAN port. That router provided DHCP and DNS for my network. The DNS provided addresses for devices in my network with static IPs and those that got addresses from DHCP.
For the new network, I disconnected the WAN port and hooked the LAN side of the router into an 8-port switch that was also connected to the cable modem. I changed all the relevant IP addresses in the router to “192.168.0.x” addresses. Tedious, but it mostly worked.
The part that didn’t work was DNS. If I had DHCP on my router hand out my router’s IP as the DNS server, I could look up hosts on my LAN, but not on the Internet. If I hard coded the ISP’s DNS servers into my router, I could look up hosts on the Internet, but not on my LAN.
Since my router was no longer using DHCP to get its Internet address from the ISP, the magic done by DD-WRT and/or dnsmasq to configure the DNS service wasn’t working any more.
After a bit of Googling, and reading dnsmasq documentation, I decided that what was missing was to put the ISP’s DNS servers into the router’s /tmp/resolv.dnsmasq file. So in the Administration -> Commands page of the router’s web interface, I added these lines to the start-up script:
echo 'nameserver 184.108.40.206' >> /tmp/resolv.dnsmasq
echo 'nameserver 220.127.116.11' >> /tmp/resolv.dnsmasq
killall -HUP dnsmasq
After rebooting the router, and disconnecting and reconnecting my computer to force it to get the new settings from the router, DNS works.
I’m not sure if this is the “right” way to do it, but it works.
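An alternative I didn’t try, so treat this as an untested sketch: dnsmasq can be told about upstream servers directly with its `server=` option, which DD-WRT exposes under Services -> Additional DNSMasq Options. The same addresses from the start-up script would look like:

```
# Forward queries for names outside the LAN to the ISP's DNS servers,
# while dnsmasq keeps answering local names from DHCP leases
server=184.108.40.206
server=220.127.116.11
```

That would avoid writing to /tmp/resolv.dnsmasq on every boot, but I can’t vouch for how it interacts with the rest of DD-WRT’s DNS plumbing.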
Sunday, 19 April 2015
Last week at work we ran into an interesting dilemma: We have a nice set-up to enable some level of single sign-on for our external users (business partners), across a suite of applications they use. We're preparing to deploy some browser-based COTS software into that suite of applications. Like most applications, the new one has a "log out" button.
When the user logs out, we'd like to take them back to a page that says, "You have logged out. To log on again click here." But we can't, because once they click on the log-out link, our "single sign-on" becomes "single sign-off". Before they can see any page on our partner network, we have to send them to our corporate log-in page.
We have options, so it's not like this is a huge problem. But no one thought of it before, so we're going through a bit of churn while people get their heads around the problem and decide how they want to deal with it.
So don't forget, "single sign-on" also means "single sign-off".
Sunday, 18 January 2015
When I read his post, I thought, "What about the marbles outside the bag?" In the universe of marbles, 50% are blue and 50% are pink. 10% of each colour are sparkly. So if you step outside the bag (e.g. the resumes you received for a job posting), the probability of finding a sparkly pink marble is actually greater than that of finding a blue one.
Another thought would be to get a bag of pink marbles from the factory. Then it's really easy to get pink sparkly marbles. And you'll probably get to choose amongst all the sparkly ones yourself, at least until other people clue in that this is a good way to get sparkly marbles. This is the equivalent of recruiting from women in IT meetups and suchlike. And that's not so hard.
Sunday, 4 January 2015
- Clicked on the Bluetooth icon
- Turned on Bluetooth
- Clicked “Set up a new device…”
- Pressed and held the Call button on the headset for five or six seconds, until the computer found the headset (the Plantronics documentation is here)
Using it with Skype gave super-sucky sound quality. Lowering the PCM level in alsamixer to about 70 made the sound quality a lot better, but still not great (lowering the PCM level was suggested here).
alsamixer is a command-line application. Open a Terminal and type alsamixer, then use the left and right arrow keys to find “PCM”, and use the up and down arrow keys to set the level.
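If you’d rather script the change than drive alsamixer interactively, amixer (from the same alsa-utils package) can set the level from the command line. The control name may differ on other sound cards, so list the controls first; this is a sketch, not something I’ve verified on this particular machine:

```shell
# List the mixer's simple controls to find the right name
amixer scontrols

# Set the PCM playback level to roughly 70%, as described above
amixer sset 'PCM' 70%
```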
Still trying to improve the sound quality, I noticed that the built-in microphone is on when the headset is on. On a Skype test call, manually turning off the microphone didn’t seem to make a lot of difference to the sound quality.
Using Audacity to record sound, the quality of the built-in microphone was even worse than the headset’s.
[Edit] Using the headset, I made a Skype call to my son, and he said the quality of my voice was okay. I could also hear him okay.
In case you need to know, the Bluetooth config files are in