Missing 5GHz Wi-Fi Networks

Holiday is a time for rest, relaxation and trips to exciting new places with family and friends. It is also a great time to rebuild your home network and spend long nights mucking about with firmware, conf files and cables. As I come to the end of my summer vacation, I have achieved at least some of the above.

In particular though, I have managed to rebuild my home network to make use of many of the bits and pieces I have had lying around waiting for something to do. As usual, 80% was straightforward (or answers to problems were quickly found) but the remaining 20% was full of proper head-scratchers.

This post is about one of the 20%. Without worrying about why, for now, I switched one of the Wi-Fi networks in my apartment to the 5GHz band. This worked just fine for the two laptops I run in my house (both of which use Intel Wi-Fi adapters). The Linksys WMP600N PCI adapter that I added to my desktop could see nothing whatsoever on the 5GHz band.

Using inSSIDer, I was able to see a number of 2.4GHz networks (including one of my own at the other end of the apartment) but nothing on the 5GHz band. This misdirected my investigations for a time since it led me to assume that the card (or Windows) was not enabling the 5GHz radio.

After quite a bit of Googling and reading of unhelpful forum posts (the quantity of which suggests that laptop manufacturers need to be clearer about what the adapters in their laptops really support), I came across this post from ReginaldPerrin (post 8) on the Linksys forums. To summarize, he reminds readers that the regulatory situation for 5GHz channels is much more complicated than for the 2.4GHz band.

Depending on the country you are in, your Wi-Fi equipment will be configured to support a particular range of channels in a particular way. Provided all your equipment was bought in the same country and you are not using custom firmware, you will probably not experience this issue.
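As an aside, and purely as a sketch since this post is about Windows kit: if you happen to have a Linux box nearby, the standard iw tool will show you which regulatory domain it thinks it is in and which channels the adapter will actually allow:

# show the regulatory domain currently in force
iw reg get
# dump the adapter's capabilities; in the "Frequencies:" sections,
# channels marked "disabled" or "no IR" are off-limits in this domain
iw list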

The relevant part of my network consists of a Linksys E4200 running DD-WRT to allow it to act as a wireless bridge and get me coverage across my apartment. This was configured with an N-only, 5GHz network with automatic channel selection and a 40MHz width. Using inSSIDer on one of my laptops, I could see that the auto-selected base channel was 148.

A quick check against the Wikipedia table of Wi-Fi channels showed that channel 148 and everything above it is not legal in Europe. Setting the E4200 to use channel 48 as the base channel fixed the problem immediately. I have no idea whether this was down to Windows hiding ‘illegal’ channels or the Linksys WMP600N itself disallowing access.
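For anyone wondering why the channel number matters so much: in the 5GHz band the centre frequency is simply 5000 + 5 × channel (in MHz), so those two channels sit in quite different bits of spectrum. A quick sanity check from any shell:

# channel 48 -> 5240MHz, inside the range Europe (ETSI) permits
echo $((5000 + 5 * 48))
# channel 148 -> 5740MHz, above the 5725MHz ceiling for licence-exempt use in Europe
echo $((5000 + 5 * 148))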

A couple of observations

The forum post talks about a configuration page for the NIC that exists in neither the current driver, nor the driver that ships with Windows. This should allow you to configure the country and hence the allowable channels. I guess the driver now works off the country locale of the host machine. I am using the ‘-EU’ suffixed card, so it is possible that the allowed channels have been set in the card’s firmware. According to this, the card is probably a Linksys OEM’d version of the Ralink RT3562. I might try the Ralink driver to see what more I can do with that.

It is interesting that there are still very few 5GHz networks out there. The 2.4GHz band around my apartment is packed and it is difficult, if not impossible, to find clear air. 5GHz is a ghost town for now. If you want to improve your Wi-Fi reception and you live in a busy area, it’s probably worth investing in 5GHz-capable gear. Use inSSIDer to find out just how congested your local airwaves really are.

Wi-Fi routers using certain 5GHz channels have to monitor for radar pulses and switch to a different channel if they detect them (this is Dynamic Frequency Selection, or DFS)!

Setting up OpenSSH/PuTTY and key-based authentication

I wrote this back in 2005 on my old blog (since deleted). As I was setting this up again tonight, I’ve re-posted it here as it’s a useful reference.

Sick of trying to remember root passwords for my *nix boxes, I’ve finally got round to configuring key-based authentication using OpenSSH and PuTTY. This is a quick description of the setup and configuration required to get this going. There are some useful links at the end for background and understanding.

Download putty.exe, pageant and puttygen. Next, fire up puttygen and create an ssh key-pair (private to keep on the workstation, public to dole out to the hosts you’ll be authenticating to). Generate lots of lovely entropy by waggling the mouse furiously and pick a decent pass-phrase.

Save your newly created public and private keys on the workstation that you’ll be making connections from.
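(As an aside, and not part of the original write-up: if you prefer to generate the pair on a *nix box instead, ssh-keygen produces an equivalent key-pair, and puttygen can import the private key via its ‘Conversions’ menu to give you a .ppk for Pageant.)

# the -f path and the comment are just examples; pick your own
ssh-keygen -t rsa -b 4096 -C "workstation key" -f ~/.ssh/workstation_key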

Next up, copy the public key text from the ‘puttygen’ window and connect over SSH to the host you want to configure. Log in as the user you want to authenticate with the key and paste the public key text into ~/.ssh/authorized_keys. Save the file, then run “chmod 700 ~/.ssh” and “chmod 600 ~/.ssh/authorized_keys”. Note that this process can be automated somewhat using Plink and cat; a sketch follows.
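A rough sketch of that automation, assuming you have saved the one-line key text to a file called mykey-openssh.pub on the workstation and the account still accepts password logins at this point:

rem user@examplehost and mykey-openssh.pub are placeholders - substitute your own
plink user@examplehost "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys" < mykey-openssh.pub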

Note that it is important that the text be in the format shown in the puttygen window. I always forget this and spend ages fannying around trying to figure out the right format for authorized_keys. If you just load your keys into the puttygen application, it will provide the appropriate key-text for you (just hit the ‘load’ button and select your private key).
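For reference, each key in authorized_keys wants to be a single line of roughly this shape (the key body is truncated here, and the trailing comment is whatever you set in puttygen):

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAA...rest-of-key-text... rsa-key-20050101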

Before logging out, check that the /etc/ssh/sshd_config has the following lines uncommented:

PubkeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys

If not, make the change and reload the sshd config.
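Exactly how you reload it depends on the distribution and its vintage; one of these will usually do the trick (both assume root or sudo rights):

# SysV-style init script (may be called ssh rather than sshd, depending on distro)
/etc/init.d/sshd reload
# or, on a systemd-based system
systemctl reload sshd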

Back on the Windows box, run pageant.exe with the path to your private key as an argument. Pageant will prompt you for your pass-phrase (stick this command in your startup directory to make life easier).
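For example, a shortcut or batch file along these lines (the paths are purely illustrative; adjust them to wherever PuTTY and your .ppk actually live):

rem load the key into Pageant at logon; it will prompt for the pass-phrase
"C:\Program Files\PuTTY\pageant.exe" "C:\keys\workstation_key.ppk"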

Now, when PuTTY is run, it will detect the presence of Pageant and attempt to authenticate using the key you’ve provided. Bear in mind that if you’ve not configured a host correctly, login will fail silently. Thus, it’s worth checking the ‘Attempt “keyboard-interactive” (SSH2)’ option in the ‘Auth’ section of PuTTY’s options so that you’ll get a password prompt if key authentication fails.

Now just look after your private key and pass-phrase and all will be well with the world.

The following are pretty useful to get you up and running:

Governance in the cloud

One of the IT department’s less official roles has been as gatekeeper to an organisation’s infrastructure. The cost and time-to-market constraints that are sometimes imposed by internal IT can lead to applications being cancelled, or even never being proposed in the first place. By allowing the business to side-step the IT department, though, cloud computing enables departments and individuals within organisations to get new applications up and running quickly and with investment largely focussed on development.

Where internal IT is imposing unreasonable delays and costs, this is going to be great for businesses. There are some major caveats to add, though. In particular, a lot of the governance and ‘red tape’ that internal IT seems to impose is actually about protecting an organisation’s data. By checking that things like backup and recovery have been considered and planned for, IT ensures that an organisation’s data, reputation and ultimately its business are protected. Where those checks are bypassed, it is fair to expect that the ‘boring’ aspects of application development and deployment will not get the attention they really require. The litany of data loss horror stories never seems to abate. Cloud computing service providers may provide the tools to implement effective backups, but that won’t guarantee that developers will use them.

To be clear, the threat here is not that organisations will use cloud computing, which will be a great addition to the IT tool-box. The threat is the same as that posed by applications running on servers sitting under people’s desks; it is the same threat as that posed by data that leaks out on portable drives. The threat is that broken governance can lead to no governance, and that organisations will be compromised as a result.

The solution is for internal IT and their management to build cloud computing into their governance and release management models. In much the same way as for suppliers of physical infrastructure, organisations need to choose their suppliers and build standards for development and deployment. By doing this, they can ensure that all applications, whether hosted internally or in the cloud, are checked to ensure compliance with data protection, availability and security requirements.

There’s something else to say here, though, and that’s to remember quite how much due diligence vendors of physical infrastructure are put through before purchase decisions are made. Ultimately, even an SLA isn’t really enough unless you are convinced that the organisation to which you are trusting your data is able to follow through on their promises. I wonder what the cloud services RFP equivalent of a double disk pull will be?

Could the cloud drown in FUD?

Mike Kavis has written a great post following the Forrester EA forum, suggesting that cloud computing faces the risk of heading down the same road of death-by-over-definition that SOA has recently travelled. I couldn’t agree more with what he says – especially his lessons for getting the pitch right. Still, I wouldn’t lose too much sleep over cloud computing going away any time soon.

As a concept (even one which is woefully misunderstood and misrepresented), cloud computing is orders of magnitude simpler to explain than SOA ever was. SOA is the only industry buzz-word I’ve ended up buying books about just to get my head around the general concept (that may say more about me than about SOA, mind).

Cloud computing, I feel, has much more of a self-fulfilling dynamic about it than SOA could have had. The economics of it are mind-blowingly simple – even if a solution ends up more expensive than in-house, it is at least cost-transparent. The benefits to the business are clear and are very often the things that in-house IT has been failing to deliver for years (think agility and effective communication of costs in particular).

Ultimately, while some of the FUD is important, it’s in the process of being answered. Most of the issues with cloud computing have been solved somewhere already. What we’re going to see (sorry, are seeing!) now is the emergence of services able to tick many boxes simultaneously. These services will take off, FUD or no FUD.

Five years from now, we may well all have forgotten the phrase ‘cloud computing’ but it will be there one way or another, and the enterprise IT department and the data-centre will have been changed forever regardless.

(Re)fragmenting the IT department

When I started in IT in the mid-1990s, many medium and even large organisations had highly fragmented IT delivery functions. At Ernst and Young in 1994, I worked in a small team delivering IT to the Yorkshire office in the North of England. At the start of the year, we were largely autonomous and able to deliver new services and applications quickly and with only local change control. By the end of the year, we were (along with everyone else in E&Y IT) being outsourced to Sema and amalgamated into a single IT department.

Likewise at the BBC, I started out in the IT team of the ‘Youth and Entertainment Features’ department in BBC Manchester (one of three support and delivery organisations in one building!). By the time I left Auntie in 2004, I had been through three sets of organisational consolidation before finally being outsourced (again!) to Siemens.

The last twenty years have seen a steady process of consolidation of IT delivery in organisations. The relentless trend has been for the centralisation of development, infrastructure delivery and support into the corporate IT department. Outside of business units with very high margins and esoteric IT needs, it has become increasingly difficult for business units to develop and deploy applications without the cooperation of the corporate IT department.

I wonder, though, whether cloud service providers open up the risk (or is it an opportunity?) that business units will once again be able to develop, deploy and support new applications independently of the corporate IT department. Where once deploying an unauthorised app meant running servers under desks or stacking LaCie drives off the back of a desktop to create a private file server, business units can now employ any one of thousands of boutique consultancies or developers to knock up the apps of their dreams, using the cloud to obviate the need to involve corporate IT.

Of course, corporate security and finance policy may well stand in the way but history suggests that these won’t be too great an impediment. Once more than a few apps have been deployed, we might see the re-growth of the parallel support organisations that the corporate IT department thought they’d seen the back of (or more likely – absorbed) over the last twenty years.

If that happens, there are all sorts of consequences, many of them nasty. Business units, and even businesses as a whole, might be willing to accept those costs to gain agility and bypass what many see (if unfairly) as bureaucratic impediments to business thrown up by corporate IT.

The answer for corporate IT is to make sure that developing and deploying new applications through it is nearly (or just) as easy as it is through the cloud suppliers. Maybe that means using the public cloud as a support for internal systems, or maybe it means developing private clouds (though some are already pouring cold water on that idea). Either way (or some other way…), it’s an interesting time to be in IT.