Monday, 14 March 2011

Consent to be required for cookies? (updated 2011-03-15 and 2011-05-11 and 2011-05-17)

An amendment to the EU Privacy and Electronic Communications Directive comes into effect on 25 May 2011. This makes it necessary to obtain consent before storing or retrieving usage information on users’ computers. Details are unclear, mainly because the UK government has apparently yet to publicise any information on how this requirement will be implemented in the UK, but in the limit it could require every web site to obtain informed user consent before setting any cookies, which is going to be interesting...

Here are some external references to the topic:
Updated 2011-03-15: More links
Updated 2011-05-11:
Updated 2011-05-17:

Wednesday, 2 March 2011

Other people's content shown to be dangerous

In Promiscuous JavaScript considered dangerous I said that including content from elsewhere on your pages was dangerous, not only because the people supplying the content might be malicious but also because they might fail to prevent third parties from injecting malicious content.

Judging by this BBC News article this is exactly what happened recently to the web sites of the London Stock Exchange, Autotrader, the Vue cinema chain and a number of other organisations as a result of displaying adverts provided by the advertising firm Unanimis. This will have caused problems for these various organisations' clients, and reputational damage and hassle for the organisations themselves.

Ideally you'd carefully filter other people's content before including it in your pages. But you may not be able to do this if, for example, the supplier requires you to let everything through untouched or if you are using the promiscuous JavaScript approach. In such cases you are entirely dependent on the competence of the supplier and, as demonstrated here, some are more competent than others.

Thursday, 17 February 2011

Google Apps: SSO and IdM at #GUUG11

Here are some slides from a presentation I gave on Google Apps SSO and Identity Management at the inaugural 'Google Apps for Education UK User Group' meeting at Loughborough University on 15th February 2011:

Wednesday, 9 February 2011

Loops in PKI certificate hierarchies and Firefox bugs

In a previous posting, I mentioned being surprised to discover that PKI certificate hierarchies were more complicated than the strict trees that I had always assumed them to be. At the time I rather assumed that they must be directed acyclic graphs.

I've subsequently realised that there is nothing to prevent them from being cyclic graphs, and I have actually found a loop in an existing commercial hierarchy. Unfortunately it looks as if I wasn't the only person making false assumptions, since it seems that certificate loops trigger an old-but-not-yet-fixed bug in Firefox that prevents certificate chain verification.

Certificates contain within them the 'Distinguished Name' (DN) of a further certificate, one that contains the public half of the key used to sign the first certificate. Certificates are only identified by name, and there is nothing to stop multiple certificates sharing the same name (though all the certificates with the same name had better contain the same key, or new and exciting bad things will probably happen). All this is what I worked out last time.
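To make the consequence of name-only identification concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the DNs, hashes and store structure are not real PKI data or a real library API); the point is just that looking up a certificate's issuer by name can return more than one candidate parent.

```python
# Sketch only: certificates are looked up by issuer name, and several
# distinct certificates can legitimately share the same subject name.
from collections import defaultdict

# Hypothetical store: subject DN -> list of certificates bearing that name.
cert_store = defaultdict(list)

def add_cert(subject_dn, issuer_dn, sha1):
    cert_store[subject_dn].append(
        {"subject": subject_dn, "issuer": issuer_dn, "sha1": sha1})

def candidate_parents(cert):
    """All certificates whose subject matches this cert's issuer DN.
    There may be more than one -- the name alone doesn't pin down a
    unique parent certificate."""
    return cert_store[cert["issuer"]]

# Two distinct certificates sharing one subject name:
add_cert("CN=Example Root", "CN=Example Root", "AA:BB")  # self-signed root
add_cert("CN=Example Root", "CN=Other Root", "CC:DD")    # cross-signed copy
add_cert("CN=Server", "CN=Example Root", "EE:FF")        # server certificate

server = cert_store["CN=Server"][0]
print(len(candidate_parents(server)))  # 2 -- two certs share the issuer's name
```

A verifier therefore has to cope with several possible chains from the same server certificate, which is where the trouble described below begins.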

What I've discovered that's new is illustrated in the following diagram:

This represents part of the hierarchy under which certificates are issued to UK HE institutions by JANET (and I think by similar organisations in other countries) under a contract with Comodo. In this diagram:
  1. The blocks with grey backgrounds represent key pairs. The number at the top of each box is the first half of the key's 'Key Identifier'.
  2. The smaller blocks represent certificates containing the corresponding public keys.
  3. The arrows link certificates to the keys that signed them (and so which can validate them).
  4. Certificates with red backgrounds represent self-signed certificates that are trusted roots (at least on my copy of Firefox). The certificate with a blue background represents an example server certificate.
  5. The number in each certificate is the first half of the certificate's SHA-1 hash.
  6. Certificates with a green border represent the recommended verification chain for JANET-issued certificates.
Copies of the certificates involved (and others) can be found here.

The problem here is the pair of certificates "31:93:..." and "9E:99:...", since each represents a potential verification route for the other. Neither is part of the 'official' verification chain for JANET-issued certificates. "9E:99:..." is distributed by Comodo in support of others of their certificate products. I don't know where "31:93:..." comes from, but I assume it appears in someone else's 'official' certificate chain. Both these 'intermediate' certificates will presumably be included in certificate bundles by particular web servers and, once served, tend to be cached by web browsers.

The problem is that, once a web browser has a copy of both of these, there's a danger of it going into a spin, since each is an apparently acceptable parent for the other. It turns out that Firefox has exactly this problem, as described in bug 479508. Unfortunately this bug last saw any action in March 2008, so it's not clear when, if ever, it's going to be fixed. There are some other reports of what I suspect is the same problem here and here.
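The failure mode is easy to demonstrate in miniature. The sketch below is not Firefox's actual chain-building code (and the certificate names are invented stand-ins for the real pair); it just shows that a naive issuer-following walker never terminates on a cross-certified pair, while tracking already-visited certificates turns the loop into a clean error.

```python
# Sketch: building a verification chain by following issuer links.
# The two entries below mimic the cross-certified pair: each names the
# other as its issuer, and neither is self-signed.
certs = {
    "31:93": {"issuer": "9E:99", "self_signed": False},
    "9E:99": {"issuer": "31:93", "self_signed": False},
}

def build_chain(start):
    """Walk issuer links from 'start'. Without the 'seen' set this
    would loop forever on the pair above."""
    chain, seen = [], set()
    current = start
    while True:
        if current in seen:          # loop detected -- fail cleanly
            raise ValueError("certificate loop involving %s" % current)
        seen.add(current)
        chain.append(current)
        if certs[current]["self_signed"]:  # reached a trusted root
            return chain
        current = certs[current]["issuer"]

try:
    build_chain("31:93")
except ValueError as err:
    print(err)  # certificate loop involving 31:93
```

A real verifier also has to backtrack and try alternative parents rather than give up, but even the minimal visited-set check above is enough to avoid the infinite spin.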

So whose problem is it? Clearly Firefox could and should be more careful in how it constructs certificate chains. It's possible that other SSL software is vulnerable to similar problems, though I've only seen this manifest in Firefox (and only then occasionally). But I also wonder what the Certification Authorities thought they were doing when they issued these certificates. As far as I can see they were both issued as 'cross-certification' certificates, intended to allow server certificates issued under one root certificate to be validated by reference to another. Issuing one of these isn't a problem. Issuing a pair clearly is.

A workaround, should this problem bite you, is to delete "31:93:..." and "9E:99:..." from Firefox's certificate store. Neither is a root, and any server that needs them to get its certificate verified should be providing them itself, so deleting them should be entirely safe. The workaround will last until you next pick up copies of both of these, at which point you'll need to delete them again.


Tuesday, 1 February 2011

Root certificates for MacOS OpenSSL

In an earlier post I mentioned that, while MacOS includes OpenSSL, it isn't preconfigured with any trusted root certificates. So before you can use it to do SSL properly you need to provide a set.

My previous post suggested extracting them from the bundle that comes with Firefox, but I've recently come across a useful article about Alpine on MacOS by Paul Heinlein in which he points out that the MacOS operating system already has a set of preconfigured roots and that these can be extracted using the Keychain Access utility for use by OpenSSL. See his posting for details, but to quote from it:
  1. Open the Keychain Access application and choose the System Roots keychain. Select the Certificates category and you should see 100 or more certificates listed in the main panel of the window.
  2. Click your mouse on any of those certificate entries and then select them all with Edit → Select All (Cmd+A).
  3. Once the certificates are all highlighted, export them to a file: File → Export Items…. Use cert as the filename and make sure Privacy Enhanced Mail (.pem) has been chosen as the file format.
  4. Copy the newly created cert.pem into the /System/Library/OpenSSL directory.
Now, I wonder why Apple didn't do this for us?
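Once you've exported the bundle it's worth sanity-checking it before pointing OpenSSL at it. A PEM bundle is just concatenated certificate blocks, so a quick count of the `BEGIN CERTIFICATE` markers confirms the export worked (a real cert.pem from Keychain Access should show 100 or more, matching what you saw in the Certificates category). This little Python check makes no assumptions beyond the PEM format itself; the sample data is a stand-in, not real certificates.

```python
# Count certificate blocks in a PEM-format bundle such as the exported
# cert.pem. Each certificate is delimited by BEGIN/END CERTIFICATE lines.

def count_pem_certs(pem_text):
    """Return the number of certificate blocks in PEM-format text."""
    return pem_text.count("-----BEGIN CERTIFICATE-----")

# Stand-in bundle with two truncated dummy certificates:
sample = ("-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
          "-----BEGIN CERTIFICATE-----\nMIIC...\n-----END CERTIFICATE-----\n")
print(count_pem_certs(sample))  # 2
```

Against the real file you'd do `count_pem_certs(open("/System/Library/OpenSSL/cert.pem").read())` and expect a three-digit answer.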

Saturday, 29 January 2011

Thoughts on "Initial authentication"

"If it was easy we'd have already done it!"

I've recently contributed to some work on 'Password management'. Here are some of the thoughts that I've managed to condense into words. Don't be disappointed - they raise at least as many questions as they answer.

1) Looking, say, five years ahead, I think we need to talk about 'authentication' and not just 'passwords'. Given their vulnerability to keyboard sniffing, it seems to me that within five years it will be necessary at least to support (though perhaps not require) some sort(s) of non-password-based authentication for some systems.

2) While there will be pressure to do so, it might be better not to try to solve everyone's problems. For example, in some cases it might still be best to create a shared password system for use only on a limited set of systems and not allow anyone else to use it.

3) The understandable enthusiasm for SSO by some is at variance with an equal enthusiasm, in some cases promoted by the same people, for aggressive inactivity timeouts on individual systems. Meeting everyone's individual security requirements may result in an unusable service.

4) The group developing HTML5 have adopted a policy that says "In case of conflict, consider users over authors over implementers over specifiers over theoretical purity". It might be sensible to adopt something similar in this area (suitably adapted). Or perhaps not... Discuss.

5) A critical feature of any authentication system is who is able to reset its authentication credentials (i.e. reset passwords or equivalent), because they (all of them) can subvert the security of the systems (all of them) that use it. It looks to me to be difficult to simultaneously meet the expectations of people who would like easy local reset of passwords and the operators of some 'high risk' systems who want tight control of access.

6) Given the existence of a central password verification service overloading existing protocols (LDAP, RADIUS, ...), I don't see any technical way to restrict the clients that can use it, since clients don't identify themselves. Such use could be restricted by rules, but these would be hard to enforce, and by imperfect technical restrictions (e.g. client IP address filtering). So anyone providing such a service will have to accept that in practice it will be open to anyone to use. You could implement a central password verification service using something like SAML, where clients are strongly identified, but then there wouldn't be any clients to use it.

7) Accepting that we'll at least have to accept passwords for the foreseeable future (even if we accept other things in parallel), the not-unreasonable idea that people will only willingly accept using two passwords restricts us to a maximum of two authentication systems. So how about:
  • a) A 'low trust' service verifying a single password over one or more commonly used protocols (so LDAP, RADIUS, TACACS), intended for use in situations where we can't do better (a 3rd-party service that can only do LDAP auth, a WebDAV service that has to work with common clients that can only do username/password, etc.). Document that this is a low trust service, that server operators can intercept passwords in transit, etc. Require good practice as a condition of using the service - don't ship passwords over unsecured networks, don't write them to disk, etc. Perhaps make token attempts to restrict client access (e.g. by IP address) but accept and document that this won't be perfect. This violates all my prerequisites for secure use of passwords, but perhaps on balance doing so is necessary to support what needs to be supported.
  • b) A 'higher trust' service where credential disclosure is limited to local workstations and a logically single central server. Web redirection-based protocols (i.e. Ucam_WebAuth, Shibboleth) and Kerberos (and so Windows AD) meet this requirement and provide at least some single sign-on. Web redirect, and perhaps Kerberos, could both use things other than a password for initial authentication. SPNEGO holds out the possibility of transparently transferring a pre-existing Kerberos session into a web redirection-based system, thus widening SSO for existing Kerberos users but leaving open password or other authentication for access to web systems in situations where Kerberos isn't available.
Item (b) violates my third prerequisite for secure use of passwords ("It must be possible for password holders to decide when it is safe to divulge their password"), but I'm coming to the conclusion that the benefit of adhering to this principle does not warrant the cost.

8) If you can't stomach the single shared password idea, an alternative might be a 'Managed Password Service' that extended the 'token' (in the current University Computing Service sense) idea by centrally managing multiple password sets (one for each 'system'). So the administrator of a new system somewhere could mint a new password set for their system and configure their system to do password verification against it using LDAP, RADIUS or anything else supported. Users could set and reset these passwords under the authority of their web redirect/Kerberos credentials. The end system would have to do its own authorisation, since in principle anyone could create a password for any system. This doesn't give 'two passwords', but it does at least allow one password to manage all the others.
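To pin down the shape of that 'Managed Password Service' idea, here is a toy Python sketch. All the names (the class, the system and user identifiers) are invented, the primary-credential check is deliberately not modelled, and a real service would sit behind LDAP/RADIUS rather than a Python API; it just shows the one-password-set-per-system structure.

```python
# Toy model of a Managed Password Service: one central store holding a
# separate password set per registered system.
import hashlib
import os

class ManagedPasswordService:
    def __init__(self):
        self._sets = {}   # system name -> {username: (salt, hash)}

    def mint_password_set(self, system):
        """A system administrator registers a new, empty password set."""
        self._sets.setdefault(system, {})

    def set_password(self, system, user, password):
        """Called after the user has authenticated with their primary
        (web redirect / Kerberos) credentials -- not modelled here."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._sets[system][user] = (salt, digest)

    def verify(self, system, user, password):
        """What the end system would call over LDAP, RADIUS, etc."""
        entry = self._sets.get(system, {}).get(user)
        if entry is None:
            return False
        salt, digest = entry
        check = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return check == digest

mps = ManagedPasswordService()
mps.mint_password_set("webdav")
mps.set_password("webdav", "jw35", "correct horse")
print(mps.verify("webdav", "jw35", "correct horse"))  # True
print(mps.verify("webdav", "jw35", "wrong"))          # False
```

Note that `verify` says nothing about what the user may do on the end system, which is exactly why that system would still need its own authorisation layer.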

Wednesday, 12 January 2011

Microformats (lots of Microformats)

I've wanted to play with microformats for some time. The need to rework my (still somewhat minimal) work home page provided an ideal opportunity. To see the effect you'll need a browser plug-in such as Operator for Firefox, or to use something like the Google Rich Snippets testing tool.

Essentially microformats (and their friends - see below) provide a way of marking up HTML content with additional semantics to allow automatic parsing that wouldn't otherwise be possible. For example, a human would know that this supplies my telephone number:

<p>
Jon Warbrick<br />
Tel: +44 1223 337733
</p>

but if I mark it up like this

<p class="vcard">
<span class="fn">Jon Warbrick</span><br />
Tel: <span class="tel">+44 1223 337733</span>
</p>

then microformat-aware processors should be able to reliably extract the number and associate it with my name - and then perhaps use this to create a contact list entry or put a call through to me. Similar microformats exist for events, reviews, licences, etc.
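To show what such a processor actually does with the markup above, here is a bare-bones extractor using Python's standard html.parser module. Real microformat consumers use proper hCard parsers and handle nesting, multiple values and fallbacks; this sketch only pulls out the `fn` and `tel` class values from a simple fragment like mine.

```python
# Minimal hCard field extraction using only the standard library.
from html.parser import HTMLParser

class HCardParser(HTMLParser):
    """Collect the text content of elements classed 'fn' or 'tel'."""
    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None   # which hCard field the next text belongs to

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        for cls in ("fn", "tel"):
            if cls in classes:
                self._current = cls

    def handle_data(self, data):
        if self._current:
            self.fields[self._current] = data.strip()
            self._current = None

markup = ('<p class="vcard"><span class="fn">Jon Warbrick</span><br />'
          'Tel: <span class="tel">+44 1223 337733</span></p>')
parser = HCardParser()
parser.feed(markup)
print(parser.fields)  # {'fn': 'Jon Warbrick', 'tel': '+44 1223 337733'}
```

The unmarked version of the paragraph gives a parser nothing to latch onto; with the class names in place the association between name and number falls out mechanically.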

It turns out that there are (at least) three different, competing microformat-like systems out there:

Microformats
The original offering in this area. Aims to add semantic markup for various classes of 'thing' to standards-conforming HTML 4.01/XHTML 1.0. It largely does this using HTML structure and a range of pre-defined class names.

RDFa
("Resource Description Framework in attributes") defines a set of attribute-level extensions to XHTML which make it possible to add semantic markup using RDF syntax.

Microdata
This is a (proposed) feature of HTML5 that adds semantic markup in a similar way to microformats, but using new attributes itemscope, itemprop, itemtype and itemref rather than overloading class. As an experiment I've also tried marking up my contact details using microdata.