Thursday, 9 July 2015

Mitigating recent TLS vulnerabilities

Recently discovered vulnerabilities in TLS (also known as SSL -- the protocol implementing secure web browsing and securing other activities such as email) can and should be mitigated by appropriate server configurations. Existing defaults and previously-recommended configurations may require attention to address these issues. While these vulnerabilities have been addressed in recent versions of major clients, not everyone runs up-to-date versions and not all access is from major clients.

Vulnerabilities addressed by this advice include 'POODLE' (CVE-2014-3566), 'FREAK' (CVE-2015-0204), and 'Logjam' (CVE-2015-4000).

What represents a 'best' configuration depends on the capabilities of the servers involved, and of their expected clients. The best security can only be obtained on up-to-date software and only with configurations that may exclude some older clients. The following advice will provide a reasonable level of security but should be reviewed in the light of specific requirements.

The following advice is intended to be generic; specific configuration advice for some platforms appears below.

Suggestions (in order of importance):

1) Ensure that the SSLv2 and SSLv3 versions of the protocol are disabled.

Note that IE6 on Windows XP will be unable to communicate with servers that don't support SSLv3, but given its age this should be acceptable -- many major services already disable SSLv2 and SSLv3.

2) Adjust the cryptographic suites supported to exclude the following:

  • all 'export' suites
  • any using symmetric encryption with keys less than 128 bits
  • any using signatures based on the MD5 hash algorithm
  • any using symmetric encryption based on the RC4 algorithm
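
One way to see what a given OpenSSL-style cipher string actually enables (the string below is the one used in the platform-specific configurations later in this post) is to ask the local OpenSSL library. This is not part of the original advice, just a minimal sketch assuming Python 3.6 or later and its standard-library ssl module:

    # List the cipher suites the local OpenSSL build will offer for a given
    # cipher string, so the effect of the exclusions above can be checked.
    import ssl

    context = ssl.create_default_context()
    context.set_ciphers("ALL:!aNULL:!eNULL:!LOW:!EXP:!MD5:!RC4")

    for cipher in context.get_ciphers():
        print(cipher["name"], cipher["protocol"])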

3) Configure Diffie-Hellman (DH) key exchange to use at least 2048-bit groups. Additionally generate a unique 2048-bit group for use in Diffie-Hellman key exchange on each server. As an alternative, it may be appropriate to disable all cryptographic suites that rely on Diffie-Hellman key exchange. Plan to upgrade systems that can't be appropriately configured.

Note that this advice does not apply to Elliptic-Curve Diffie-Hellman key exchange (ECDH) which does not currently have any known vulnerabilities. Note also that Java 1.6 and 1.7 clients may be unable to communicate with servers offering Diffie-Hellman key exchange using groups more than 1024 bits long.

4) Support TLSv1.2 - plan to upgrade any systems that can't do so.

One way to test your configuration is to use the SSL Labs server test page. Aim to eliminate any issues flagged 'VULNERABLE' or shown in red, and to reduce or eliminate any marked 'WEAK' or shown in orange. It should be possible to achieve an overall rating of at least 'B' and preferably 'A', but don't be guided entirely by the overall rating shown. The 'Handshake Simulation' section of the report can be helpful when evaluating the impact of any configuration change on clients.
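
As a quick supplementary check (not a substitute for the SSL Labs report), the following sketch uses only Python's standard library to connect to a server and report the protocol version and cipher suite that are actually negotiated; the hostname is just a placeholder:

    # Report the TLS protocol version and cipher suite negotiated with a server.
    import socket
    import ssl

    host = "www.example.org"   # placeholder - substitute the server to test

    context = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Protocol:", tls.version())   # e.g. 'TLSv1.2'
            print("Cipher:  ", tls.cipher())    # (name, protocol, secret bits)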



Apache

Add the following directives to httpd.conf or equivalent and ensure that they are not being overridden elsewhere:

    SSLProtocol all -SSLv2 -SSLv3
    SSLCipherSuite ALL:!aNULL:!eNULL:!LOW:!EXP:!MD5:!RC4

TLSv1.2 is automatically available on systems running OpenSSL 1.0.1 or above, but not otherwise - plan to upgrade systems that only include lower versions.

By default Apache 2.2 supports only a fixed 1024-bit Diffie-Hellman group - plan to upgrade it. Apache 2.4 and later, and patched versions of 2.2 in some Linux distributions, support longer groups. Unique groups can be created with the command

    openssl dhparam -out dhparams.pem 2048

and loaded into patched versions of Apache 2.2, and into Apache 2.4 with the directive

    SSLOpenSSLConfCmd DHParameters dhparams.pem

Restart Apache after making these changes.


Microsoft IIS

See elsewhere for instructions on disabling SSLv2 and SSLv3.

To set cryptographic suites:
  • Open the Group Policy Object Editor (i.e. run gpedit.msc in the command prompt).
  • Expand Computer Configuration --> Administrative Templates --> Network --> SSL Configuration Settings.
  • In the right pane, open the SSL Cipher Suite Order setting.
  • A reasonable cipher suite list (from Bulletproof SSL and TLS, Ch 15) would be:


    If this excludes some old but necessary clients then consider adding:


Reboot after making these changes.


Nginx

The Nginx project have published instructions on how to disable SSLv3 on Nginx.

To configure cipher suites, place the following in the website configuration server block in /etc/nginx/sites-enabled/default (see the Logjam pages):

      ssl_ciphers 'ALL:!aNULL:!eNULL:!LOW:!EXP:!MD5:!RC4';

Custom Diffie-Hellman groups can be created with the command:

      openssl dhparam -out dhparams.pem 2048

and loaded into nginx with the following configuration:

      ssl_dhparam dhparams.pem;


Tomcat

See elsewhere for instructions on disabling SSLv2 and SSLv3.

Configuring cipher suites (see Bulletproof SSL and TLS):

* With the APR/Native connector
  •  Set the 'SSLCipherSuite' attribute of the 'Connector' XML element in your $TOMCAT_HOME/conf/server.xml file:

    SSLCipherSuite = "ALL:!aNULL:!eNULL:!LOW:!EXP:!MD5:!RC4
* With the JSSE connector
  • Set the 'ciphers' attribute of the 'Connector' XML element in your $TOMCAT_HOME/conf/server.xml file:

    ciphers = "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,

Saturday, 31 January 2015

Information Systems - what we do

Building on yesterday's list of technologies used by the Information Systems team here at the University, here are some of the services that we run. Much of this will probably only make sense to people inside the University.
In addition, we contribute to a number of services run by other parts of UIS:
So now you know.

Friday, 30 January 2015

An Information Systems colophon

I've followed the work of Government Digital Service (GDS - the people behind GOV.UK) for a while. It seems to me that an organisation dedicated to “leading the digital transformation of government” probably knows a thing or two that's relevant to the digital transformation of a university.

GDS have a blog, and their list of technologies that they use is interesting. Work I've been doing recently means that I've created a similar list of the technologies used within my Information Systems team here at the University. For what it's worth, this is what it looks like:

Core servers
Base software products
  • Some services are based on existing software, including phpBB and Mediawiki
  • The Web Search service is provided by Funnelback

We rely on, and in some cases also support, a range of technologies:
Applications (incl. Frameworks, etc)
Database and other storage
Monitoring, managing and alerting
Supporting Tools
  • Central RT system for support tickets
  • Trac and Mantis for bug tracking
  • We've been experimenting with LeanKit and Asana for programme and project management

Saturday, 27 April 2013

Why I think php is a bad idea

Update: a friend reminds me of which covers the same topic from a different angle.

Thursday, 31 January 2013

Getting your fonts from the cloud

The University of Cambridge's latest web style, due for deployment RSN, uses Adobe Myriad Pro for some of its headings. This is loaded as a web font from Adobe's TypeKit service. As I understand it this is the only legal way to use Adobe Myriad Pro since Adobe don't allow self-hosting.

Typekit is apparently implemented on a high-availability Content Delivery Network (though even that isn't perfect - see for example here), but the question remains of what the effect will be if it can't be reached. Obviously the font won't be available, but we have web-safe fall-backs available. The real question is what sort of delay might we see under these circumstances.  Ironically, one group who are particularly exposed to this risk are University users since at the moment we only have one connection to JANET, and so to the Internet and all the TypeKit servers.

TypeKit fonts are loaded by loading a JavaScript library in the head of each document and then calling an initialisation function:

<script type="text/javascript" src="//<licence token>.js"></script>
<script type="text/javascript">try{Typekit.load();}catch(e){}</script>
Web browsers block while loading JavaScript like this, so if the TypeKit server can't be reached then page loading will be delayed until the attempt times out. How long will this be?

Some experiments suggest it's very varied, and varies between operating systems, browsers, and types of network connection. At best, loss of access to TypeKit results in an additional 3 or 4 second delay in page loading (this is actually too small, see correction below). At worst this delay can be a minute or two. iOS devices, for example, seem to consistently see an additional 75 second delay. These delays apply to every page load since browsers don't seem to cache the failure. 

Users are going to interpret this as somewhere between the web site hosting the pages going slowly and the web site being down. It does mean that for many local users, loss of access to TypeKit will cause them to lose usable access to any local pages in the new style.

Of course similar considerations apply to any 'externally' hosted JavaScript. One common example is the code to implement Google Analytics. However in this case it's typically loaded at the bottom of each page and so shouldn't delay page rendering. This isn't an option for a font unless you can cope with the page initially rendering in the wrong font and then re-rendering subsequently.

I also have a minor concern about loading third-party JavaScript. Such JavaScript can in effect do whatever it wants with your page. In particular it can monitor form entry and steal authentication tokens such as cookies. I'm not for one moment suggesting that Adobe would deliberately do such things, but we don't know much about how this JavaScript is managed and delivered to us, so it's hard to evaluate the risk we might be exposed to. In view of this it's likely that at least the login pages for our central authentication system (Raven) may not be able to use Myriad Pro.

Update: colleagues have noticed a problem with my testing methodology which means that some of my tests will have been overly optimistic about the delays imposed. It now appears that at best, loss of access to TypeKit results in an additional 20-30 second delay in page loading. That's a long time waiting for a page.

Further update: another colleague has pointed out that TypeKit's suggested solution to this problem is to load the JavaScript asynchronously. This has the advantage of allowing you to control the time-out process and decide when to give up and use fall-back fonts, but has the drawback that it requires custom CSS to hide the flash of unstyled text that can occur while fonts are loading.

Sunday, 27 January 2013

Restricting web access based on physical location

Occasionally people want to restrict access to a web-based resource based not on who is accessing it but on where they are located when they do so. This is normally to comply with some sort of copyright licence. In UK education this is, more often than not, something to do with the educational recording licences offered by ERA (but see update below).

Unfortunately this is difficult to do, and close to impossible to do reliably. This often puzzles people, given that the ERA licences expect it and that things like BBC iPlayer are well known to be already doing it. It's a long story...

Because of the way the Internet works it's currently impossible to know, reliably, where the person making a request is physically located. It is however possible to guess, but you need to understand the limitations of this guessing process before relying on it. Whether this guessing process is good enough for any particular purpose is something only people using it can decide.

A common approach is based on Internet Protocol (IP) addresses. When someone requests something from a web server, one of the bits of information that the server sees is the IP address of the computer from which the request came (much as your telephone can tell you the number of the person calling you). In many cases this will be the address assigned to the computer the person making the request is sitting at. IP addresses are generally assigned on a geographic basis and lists exist of what addresses are used where, so it is in principle possible to ask the question 'Did my server receive this request from a machine in the UK', or even 'in my institution'.
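
As an illustration of the institution-level check (this example is not from the original post, and the address blocks are documentation placeholders, so substitute the ranges your own network actually uses), Python's standard-library ipaddress module makes the test straightforward:

    # Does a client address fall inside one of our published network ranges?
    import ipaddress

    INSTITUTION_NETWORKS = [
        ipaddress.ip_network("192.0.2.0/24"),    # placeholder IPv4 range
        ipaddress.ip_network("2001:db8::/32"),   # placeholder IPv6 range
    ]

    def is_on_site(client_ip: str) -> bool:
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in INSTITUTION_NETWORKS)

    print(is_on_site("192.0.2.17"))    # True with the placeholder ranges above
    print(is_on_site("203.0.113.5"))   # False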

But there are catches:
  • It's possible to route requests through multiple computers, in which case the server only sees the address of the last one. This often happens without the user knowing about it (for example most home broadband set-ups route all connections through the house's broadband router, mobile networks route requests through central proxies, etc.), but it can also be done deliberately. Like many organisations, the University provides a Virtual Private Network service explicitly so that requests made from anywhere in the world can appear to be coming from a computer inside the University.
  • The lists saying which addresses are used where are inevitably inaccurate. For example a multi-national company might have a block of addresses allocated to its US headquarters but, unknown to anyone outside the company, actually use some of them for its UK offices. Connections from people in the UK office would then appear to be from the US.
So, the bottom line is that you can come close to knowing where connections are coming from, but it's nothing like 100% reliable. People will, by accident or design, be able to access content when they shouldn't, and some people won't be able to gain access when they should. Organisations (such as MaxMind) provide or sell lists which can, for example, provide a best-guess of which country an IP address is allocated to. Organisations will know what addresses their networks use - the network addresses used on the University network (and so by the majority of computing devices in the University) are described here. Though beware that increasingly people are using mobile devices connected by mobile data services such as 3G that may well appear to be 'outside' their institution even when they are physically inside it.
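
For the country-level guess, here is a minimal sketch assuming the third-party geoip2 Python library and a downloaded MaxMind GeoLite2 country database; the filename and the client address are placeholders:

    # Best-guess country lookup for an IP address using a MaxMind database.
    import geoip2.database
    import geoip2.errors

    with geoip2.database.Reader("GeoLite2-Country.mmdb") as reader:
        try:
            response = reader.country("203.0.113.5")   # placeholder client address
            print(response.country.iso_code)           # e.g. 'GB'
        except geoip2.errors.AddressNotFoundError:
            print("No country recorded for this address")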

Another tempting approach is that modern web browsers, especially those on devices with GPSs such as mobile phones, can be asked to supply the user's location. This is used, for example, to put 'you are here' markers on maps. You might think that this information could be used to implement geographic restrictions. However the fundamental problem with this is that it's under the user's control, so in the end they can simply make their browser lie. Further it's often inaccurate or may not be available (for example in a desktop browser) so all in all this probably isn't a usable solution.

If you can set up authentication such that you can identify all your users then it seems to me that one approach would simply be to impose terms and conditions that prohibit them from accessing content when not physically in the UK, or wherever. You could back this up by warning them if IP address recognition or geo-location suggests that they are outside the relevant area. It seems to me (but IANAL) that this might be sufficient to meet contractual obligations (or at least to provide a defence after failing), but obviously I can't advise on any particular case.

Update July 2014: it appears that the ERA licence has changed recently in line with changes to UK copyright legislation to better support distance learning. This probably reduces the relevance of ERA to the whole geolocation question, but obviously doesn't affect the underlying technical issues.

Monday, 5 November 2012

Doing RSS right (3) - character encoding

OK, I promise I'll shut up about RSS after this posting (and my previous two).

This posting is about one final problem in including text from RSS feeds, or Atom feeds, or almost anything else, into web pages. The problem is that text is made up of characters and that 'characters' are an abstraction that computers don't understand. What computers ship around (across the Internet, in files on disk, etc.) while we are thinking about characters are really numbers. To convert between a sequence of numbers and a sequence of characters you need some sort of encoding, and the problem is that there are lots of these and they are all different. In theory if you don't know the encoding you can't do anything with number-encoded text. However most of the common encodings use the numbers from the ASCII encoding for common letters and other symbols. So in practice a lot of English and European text will come out right-ish even if it's being decoded based on the wrong encoding.

But once you move away from the characters in ASCII (A-Z, a-z, 0-9, and a selection of other common ones) to the slightly more 'esoteric' ones -- pound sign, curly open and close quotation marks, long typographic dashes, almost any common character with an accent, and any character from a non-European alphabet -- then all bets are off. We've all seen web pages with strange question mark characters (like this �) or boxes where quotation marks should be, or with funny sequences of characters (often starting Â) all over them. These are both classic symptoms of character encoding confusion. It turns out there's a word to describe this effect: 'Mojibake'.
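
A small Python illustration of how that 'Â' debris arises (the sample characters are just examples): text encoded as UTF-8 but then decoded as if it were Windows-1252.

    # A pound sign and an em dash, encoded as UTF-8 and mis-decoded as cp1252.
    pound = "£"
    print(pound.encode("utf-8").decode("cp1252"))   # prints 'Â£'

    dash = "—"
    print(dash.encode("utf-8").decode("cp1252"))    # prints 'â€”'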

Now I'm not going to go into detail here about what the various encodings look like, how you work with them, how you can convert from one to another, etc. That's a huge topic, and in any case the details will vary depending on which platform you are using. There's what I think is a good description of some of this at the start of chapter 4 of 'Dive into Python3' (and this applies even if you are not using Python). But if you don't like this there are lots of other similar resources out there. What I do want to get across is that if you take a sequence of numbers representing characters from one document and insert those numbers unchanged into another document then that's only going to work reliably if the encodings of the two documents are identical. There's a good chance that doing this wrong may appear to work as long as you restrict yourself to the ASCII characters, but sooner or later you will hit something that doesn't work.

What you need to do to get this right is to convert the numbers from the source document into characters according to the encoding of your source document, and then convert those characters back into numbers based on the encoding of your target. Actually doing this is left as an exercise for the reader.
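
In Python, for example, that decode-then-re-encode step might look like the following sketch; the encodings here are purely illustrative, since the real ones have to come from the source and target documents themselves:

    # '£100 prize' arriving as Latin-1 bytes, re-encoded for a UTF-8 document.
    source_bytes = b"\xa3100 prize"

    characters = source_bytes.decode("iso-8859-1")   # numbers -> characters
    target_bytes = characters.encode("utf-8")        # characters -> numbers

    print(target_bytes)   # b'\xc2\xa3100 prize'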

If your target document is an HTML one then there's an alternative approach. In HTML (and XML come to that) you can represent almost any character using a numeric character entity based on the Universal Character Set from Unicode. If you always represent anything not in ASCII this way then the representation of your document will only contain ASCII characters, and these come out the same in most common encodings. So if someone ends up interpreting your text using the wrong encoding (and that someone could be you if, for example, you edit your document with an editor that gets character encoding wrong) there's a good chance it won't get corrupted. You should still clearly label such documents with a suitable character encoding. This is partly because (as explained above) it is, at least in theory, impossible to decode a text document without this information, but also because doing so helps to defend against some other problems that I might describe in a future posting.
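
Python can produce exactly these numeric character references via the 'xmlcharrefreplace' error handler; a small illustration (the sample text is made up):

    # Encode to ASCII, turning anything else into &#NNNN; character references.
    text = "Café — £5"
    print(text.encode("ascii", errors="xmlcharrefreplace"))
    # b'Caf&#233; &#8212; &#163;5'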