Thursday 31 January 2013

Getting your fonts from the cloud

The University of Cambridge's latest web style, due for deployment RSN, uses Adobe Myriad Pro for some of its headings. This is loaded as a web font from Adobe's TypeKit service. As I understand it, this is the only legal way to use Adobe Myriad Pro, since Adobe don't allow self-hosting.

TypeKit is apparently implemented on a high-availability Content Delivery Network (though even that isn't perfect - see for example here), but the question remains of what the effect will be if it can't be reached. Obviously the font won't be available, but we have web-safe fall-backs for that. The real question is what sort of delay we might see under these circumstances. Ironically, one group particularly exposed to this risk are University users, since at the moment we have only one connection to JANET, and so to the Internet and all the TypeKit servers.

TypeKit fonts are loaded by including a JavaScript library in the head of each document and then calling an initialisation function:

<script type="text/javascript" src="//use.typekit.com/<licence token>.js"></script>
<script type="text/javascript">try{Typekit.load();}catch(e){}</script>

Web browsers block while loading JavaScript like this, so if use.typekit.com can't be reached then page loading will be delayed until the attempt times out. How long will this be?

Some experiments suggest the delay is highly variable, and differs between operating systems, browsers, and types of network connection. At best, loss of access to TypeKit results in an additional 3 or 4 second delay in page loading (this is actually too small - see the correction below). At worst the delay can be a minute or two. iOS devices, for example, seem to consistently see an additional 75-second delay. These delays apply to every page load, since browsers don't seem to cache the failure.

Users are going to interpret this as something between the web site hosting the pages running slowly and the web site being down. It does mean that for many local users, loss of access to TypeKit will cause them to lose usable access to any local pages in the new style.

Of course similar considerations apply to any 'externally' hosted JavaScript. One common example is the code that implements Google Analytics. However, in that case it's typically loaded at the bottom of each page and so shouldn't delay page rendering. This isn't an option for a font unless you can cope with the page initially rendering in the wrong font and then re-rendering later.

I also have a minor concern about loading third-party JavaScript. Such JavaScript can in effect do whatever it wants with your page. In particular it can monitor form entry and steal authentication tokens
such as cookies. I'm not for one moment suggesting that Adobe would deliberately do such things, but we don't know much about how this JavaScript is managed and delivered to us, so it's hard to evaluate the risk we might be exposed to. In view of this it's likely that the login pages for our central authentication system (Raven), at least, won't be able to use Myriad Pro.

Update: colleagues have noticed a problem with my testing methodology which means that some of my tests were overly optimistic about the delays involved. It now appears that, at best, loss of access to TypeKit results in an additional 20-30 second delay in page loading. That's a long time to wait for a page.

Further update: another colleague has pointed out that TypeKit's suggested solution to this problem is to load the JavaScript asynchronously. This has the advantage of letting you control the time-out process and decide when to give up and use the fall-back fonts, but has the drawback that it requires custom CSS to hide the 'flash of unstyled text' that can occur while fonts are loading.
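
For illustration, the asynchronous approach looks roughly like the following. This is only a minimal sketch: the three-second time-out and the 'wf-loading'/'wf-inactive' class names are placeholders of my own, and TypeKit's own recommended embed code differs in detail.

<style type="text/css">
  /* Hide the headings briefly while the web font is loading. */
  .wf-loading h1, .wf-loading h2 { visibility: hidden; }
</style>
<script type="text/javascript">
  // Mark the page as 'loading' so the CSS above can hide the flash of unstyled text.
  var html = document.documentElement;
  html.className += " wf-loading";

  // Decide for ourselves when to give up: after 3 seconds, fall back to web-safe fonts.
  var giveUp = setTimeout(function () {
    html.className = html.className.replace(" wf-loading", " wf-inactive");
  }, 3000);

  // Load the TypeKit library asynchronously so it can't block page rendering.
  var tk = document.createElement("script");
  tk.src = "//use.typekit.com/<licence token>.js";
  tk.async = true;
  tk.onload = function () {
    clearTimeout(giveUp);
    try { Typekit.load(); } catch (e) {}
    html.className = html.className.replace(" wf-loading", "");
  };
  document.getElementsByTagName("head")[0].appendChild(tk);
</script>

Whether three seconds is the right cut-off, and exactly which elements to hide, are judgement calls; the point is simply that the time-out is now under the page author's control rather than the browser's.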

Sunday 27 January 2013

Restricting web access based on physical location

Occasionally people want to restrict access to a web-based resource based not on who is accessing it but on where they are located when they do so. This is normally to comply with some sort of copyright licence. In UK education this is, more often than not, something to do with the educational recording licences offered by ERA (but see the update below).

Unfortunately this is difficult to do, and close to impossible to do reliably. This often puzzles people, given that the ERA licences expect it and that things like BBC iPlayer are well known to be already doing it. It's a long story...

Because of the way the Internet works it's currently impossible to know, reliably, where the person making a request is physically located. It is, however, possible to guess, but you need to understand the limitations of this guessing process before relying on it. Whether the guess is good enough for any particular purpose is something only the people using it can decide.

A common approach is based on Internet Protocol (IP) addresses. When someone requests something from a web server, one of the pieces of information the server sees is the IP address of the computer from which the request came (much as your telephone can tell you the number of the person calling you). In many cases this will be the address assigned to the computer the person making the request is sitting at. IP addresses are generally assigned on a geographic basis, and lists exist of which addresses are used where, so it is in principle possible to ask the question 'Did my server receive this request from a machine in the UK?', or even '...in my institution?'.

But there are catches:
  • It's possible to route requests through multiple computers, in which case the server only sees the address of the last one. This often happens without the user knowing about it (for example, most home broadband set-ups route all connections through the house's broadband router, mobile networks route requests through central proxies, and so on), but it can also be done deliberately. Like many organisations, the University provides a Virtual Private Network service explicitly so that requests made from anywhere in the world can appear to come from a computer inside the University.
  • The lists saying which addresses are used where are inevitably inaccurate. For example, a multi-national company might have a block of addresses allocated to its US headquarters but, unknown to anyone outside the company, actually use some of them for its UK offices. Connections from people in the UK office would then appear to come from the US.
So, the bottom line is that you can come close to knowing where connections are coming from, but it's nothing like 100% reliable. People will, by accident or design, be able to access content when they shouldn't, and some people won't be able to gain access when they should. Organisations (such as MaxMind) provide or sell lists which can, for example, give a best guess of which country an IP address is allocated to. Organisations will know what addresses their own networks use - the network addresses used on the University network (and so by the majority of computing devices in the University) are described here. Beware, though, that increasingly people are using mobile devices connected by mobile data services such as 3G that may well appear to be 'outside' their institution even when they are physically inside it.
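
To make the address-range idea concrete, here is a minimal sketch, in JavaScript, of a server-side check of whether a request's IPv4 address falls inside a set of 'institutional' ranges. The ranges shown are documentation placeholders rather than real allocations, and the sketch assumes the server sees the client's real address - no intervening proxies or VPNs, which is exactly the catch described above.

// Placeholder 'institutional' IPv4 ranges in CIDR notation (not real allocations).
var allowedRanges = ["192.0.2.0/24", "198.51.100.0/22"];

// Convert a dotted-quad IPv4 address into a 32-bit integer.
function ipToInt(ip) {
  return ip.split(".").reduce(function (n, octet) {
    return (n * 256) + parseInt(octet, 10);
  }, 0);
}

// Does the address fall inside the given CIDR range?
function inRange(ip, cidr) {
  var parts = cidr.split("/");
  var bits = parseInt(parts[1], 10);
  var mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(parts[0]) & mask) >>> 0);
}

// True if the address falls inside any of the allowed ranges.
function isLocal(ip) {
  return allowedRanges.some(function (range) { return inRange(ip, range); });
}

// A request handler might call isLocal() on the connection's remote address
// and refuse to serve the licensed content when it returns false.

A country-level check works in exactly the same way, just with a much longer list of ranges bought or downloaded from somewhere like MaxMind.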

Another tempting approach is that modern web browsers, especially those on devices with GPSs such as mobile phones, can be asked to supply the user's location. This is used, for example, to put 'you are here' markers on maps. You might think that this information could be used to implement geographic restrictions. However, the fundamental problem with this is that it's under the user's control, so in the end they can simply make their browser lie. Further, it's often inaccurate or may not be available at all (for example in a desktop browser), so all in all this probably isn't a usable solution.
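
For completeness, asking the browser for a position looks roughly like this (a sketch only: the user has to grant permission, and the browser is free to report nothing, or anything at all):

<script type="text/javascript">
  // Ask the browser where it thinks it is. The answer is entirely under the
  // user's control, so it can't be relied on to enforce licence restrictions.
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(
      function (position) {
        console.log("Reported position: " +
                    position.coords.latitude + ", " + position.coords.longitude);
      },
      function (error) {
        console.log("No position available: " + error.message);
      }
    );
  }
</script>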

If you can set up authentication such that you can identify all your users, then it seems to me that one approach would simply be to impose terms and conditions that prohibit them from accessing content when not physically in the UK, or wherever. You could back this up by warning them if IP address recognition or geo-location suggests that they are outside the relevant area. It seems to me (but IANAL) that this might be sufficient to meet contractual obligations (or at least to provide a defence after failing to), but obviously I can't advise on any particular case.

Update July 2014: it appears that the ERA licence has changed recently in line with changes to UK copyright legislation to better support distance learning. This probably reduces the relevance of ERA to the whole geolocation question, but obviously doesn't affect the underlying technical issues.