Saturday, 27 April 2013
Why I think php is a bad idea
Update: a friend reminds me of http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/ which covers the same topic from a different angle.
Thursday, 31 January 2013
Getting your fonts from the cloud
The University of Cambridge's latest web style, due for deployment RSN, uses Adobe Myriad Pro for some of its headings. This is loaded as a web font from Adobe's TypeKit service; as I understand it, this is the only legal way to use Adobe Myriad Pro, since Adobe don't allow self-hosting.
Typekit is apparently implemented on a high-availability Content Delivery Network (though even that isn't perfect - see for example here), but the question remains of what the effect will be if it can't be reached. Obviously the font won't be available, but we have web-safe fall-backs available. The real question is what sort of delay might we see under these circumstances. Ironically, one group who are particularly exposed to this risk are University users since at the moment we only have one connection to JANET, and so to the Internet and all the TypeKit servers.
TypeKit fonts are loaded by including a JavaScript library in the head of each document and then calling an initialisation function:
<script type="text/javascript" src="//use.typekit.com/<licence token>.js"></script>
<script type="text/javascript">try{Typekit.load();}catch(e){}</script>
Web browsers block while loading JavaScript like this, so if use.typekit.com can't be reached then page loading will be delayed until the attempt times out. How long will this be?
Some experiments suggest that the delay varies widely between operating systems, browsers, and types of network connection. At best, loss of access to TypeKit results in an additional 3 or 4 second delay in page loading (this is actually too small - see the correction below). At worst the delay can be a minute or two. iOS devices, for example, seem consistently to see an additional 75 second delay. These delays apply to every page load, since browsers don't seem to cache the failure.
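For anyone wanting to repeat this kind of measurement, one simple approach (purely illustrative, and not how the figures above were obtained) is to compare page-load times with use.typekit.com reachable and with it blocked, for example using the Navigation Timing API from the browser's JavaScript console:

  // Rough illustration only: report how long the page took to load, using the
  // Navigation Timing API. Compare the figure with use.typekit.com reachable
  // and with it blocked (e.g. by pointing it at an unroutable address in
  // /etc/hosts) to estimate the extra delay caused by the failed script load.
  var t = window.performance && window.performance.timing;
  if (t) {
    console.log('Page load took ' + (t.loadEventStart - t.navigationStart) + ' ms');
  }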
Users are going to interpret this as somewhere between the web site hosting the pages being slow and the web site being down. It does mean that for many local users, loss of access to TypeKit will cause them to lose usable access to any local pages in the new style.
Of course similar considerations apply to any 'externally' hosted JavaScript. One common example is the code that implements Google Analytics. However, in that case it's typically loaded at the bottom of each page and so shouldn't delay page rendering. This isn't an option for a font unless you can cope with the page initially rendering in the wrong font and then re-rendering subsequently.
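As a rough illustration of the difference placement makes (the script URL here is a made-up placeholder, not anyone's real analytics code), a script referenced just before the closing body tag, or marked async, doesn't block the parsing and rendering of the content above it, so an unreachable host delays only the script itself:

  <!-- Hypothetical third-party script placed at the end of the body and
       marked async: if stats.example.com is unreachable, only this script
       is delayed, not the rendering of the page content above it. -->
  <script type="text/javascript" src="//stats.example.com/tracker.js" async></script>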
I also have a minor concern about loading third-party JavaScript. Such JavaScript can in effect do whatever it wants with your page. In particular it can monitor form entry and steal authentication tokens such as cookies. I'm not for one moment suggesting that Adobe would deliberately do such things, but we don't know much about how this JavaScript is managed and delivered to us, so it's hard to evaluate the risk we might be exposed to. In view of this it's likely that at least the login pages for our central authentication system (Raven) may not be able to use Myriad Pro.
Update: colleagues have noticed a problem with my testing methodology which means that some of my tests will have been overly-optimistic about the delays imposed. It now appears that at best, loss of access to TypeKit results in an additional 20-30 seconds delay in page loading. That's a long time waiting for a page.
Further update: another colleague has pointed out that TypeKit's suggested solution to this problem is to load the JavaScript asynchronously. This has the advantage of allowing you to control the time-out process and decide when to give up and use fall-back fonts, but has the drawback that it requires custom CSS to hide the flash of unstyled text that can occur while fonts are loading.
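For what it's worth, here is a sketch of the sort of asynchronous loader involved (this is my paraphrase of the general pattern, not TypeKit's actual code: the kit ID is a placeholder standing in for the licence token, and the real snippet and class names should be taken from TypeKit's own documentation). The accompanying CSS is assumed to hide the affected text while the html element carries the wf-loading class:

  <script type="text/javascript">
    // Sketch of an asynchronous TypeKit-style loader with an explicit timeout.
    // 'xxxxxxx' is a placeholder licence token, and the wf-loading/wf-inactive
    // class names are assumed to be handled by CSS elsewhere, e.g.
    //   .wf-loading h1 { visibility: hidden; }
    (function () {
      var config = { kitId: 'xxxxxxx', scriptTimeout: 3000 };
      var h = document.documentElement;
      h.className += ' wf-loading';
      // If the kit hasn't loaded within the timeout, give up and let the
      // web-safe fall-back fonts show.
      var t = setTimeout(function () {
        h.className = h.className.replace(/\bwf-loading\b/g, '') + ' wf-inactive';
      }, config.scriptTimeout);
      var tk = document.createElement('script');
      tk.src = '//use.typekit.com/' + config.kitId + '.js';
      tk.async = true;
      tk.onload = function () {
        clearTimeout(t);
        // The hosted kit is expected to swap wf-loading for wf-active
        // once the fonts actually arrive.
        try { Typekit.load(config); } catch (e) {}
      };
      document.getElementsByTagName('head')[0].appendChild(tk);
    })();
  </script>

The important point is that the timeout is now under the page author's control: a few seconds of waiting, rather than however long the browser's own connection timeout happens to be.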
Sunday, 27 January 2013
Restricting web access based on physical location
Occasionally people want to restrict access to a web-based resource based not on who is accessing it but on where they are located when they do so. This is normally to comply with some sort of copyright licence. In UK education it is, more often than not, something to do with the educational recording licences offered by ERA (but see the update below).
Unfortunately this is difficult to do, and close to impossible to do reliably. This often puzzles people, given that the ERA licences expect it and that things like BBC iPlayer are well known to be already doing it. It's a long story...
Because of the way the Internet works it's currently impossible to know, reliably, where the person making a request is physically located. It is however possible to guess, but you need to understand the limitations of this guessing process before relying on it. Whether this guessing process is good enough for any particular purpose is something only people using it can decide.
A common approach is based on Internet Protocol (IP) addresses. When someone requests something from a web server, one of the pieces of information the server sees is the IP address of the computer from which the request came (much as your telephone can tell you the number of the person calling you). In many cases this will be the address assigned to the computer the person making the request is sitting at. IP addresses are generally assigned on a geographic basis, and lists exist of which addresses are used where, so it is in principle possible to ask the question 'Did my server receive this request from a machine in the UK?', or even '...in my institution?'.
But there are catches:
- It's possible to route requests through multiple computers, in which case the server only sees the address of the last one. This often happens without the user knowing about it (for example, most home broadband set-ups route all connections through the house's broadband router, and mobile networks route requests through central proxies), but it can also be done deliberately. Like many organisations, the University provides a Virtual Private Network service explicitly so that requests made from anywhere in the world can appear to come from a computer inside the University.
- The lists saying which addresses are used where are inevitably inaccurate. For example, a multi-national company might have a block of addresses allocated to its US headquarters but, unknown to anyone outside the company, actually use some of them for its UK offices. Connections from people in the UK office would then appear to be from the US.
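To make the IP-based approach concrete, here is a minimal sketch of the sort of check involved (the address ranges below are documentation examples, not real University allocations, and a real deployment would more likely rely on the web server's own access-control machinery or a commercial geolocation database):

  // Minimal sketch: does an IPv4 address, as seen by the server, fall inside
  // one of a list of known ranges? The ranges here are example values only.
  function ipToInt(ip) {
    return ip.split('.').reduce(function (n, octet) {
      return n * 256 + parseInt(octet, 10);
    }, 0);
  }

  function inRange(ip, cidr) {
    var parts = cidr.split('/');
    var bits = parseInt(parts[1], 10);
    var mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
    return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(parts[0]) & mask) >>> 0);
  }

  var localRanges = ['192.0.2.0/24', '198.51.100.0/24'];  // example ranges only

  function looksLocal(clientIp) {
    return localRanges.some(function (range) { return inRange(clientIp, range); });
  }

  // looksLocal('192.0.2.17')  -> true (for the example ranges above)
  // looksLocal('203.0.113.5') -> false

Both of the catches above still apply: the address the server sees may not belong to the user's own machine, and the ranges themselves may not mean what the published lists say they mean.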
Another tempting approach relies on the fact that modern web browsers, especially those on devices with GPS receivers such as mobile phones, can be asked to supply the user's location. This is used, for example, to put 'you are here' markers on maps. You might think this information could be used to implement geographic restrictions. However, the fundamental problem is that it's under the user's control, so in the end they can simply make their browser lie. Further, it's often inaccurate and may not be available at all (for example in a desktop browser), so all in all this probably isn't a usable solution.
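For completeness, this is roughly what asking the browser for a position looks like (a sketch using the standard geolocation API; note that the browser asks the user for permission, and the user can decline or feed it a fake position):

  // Sketch of the browser geolocation API. The user can refuse permission or
  // spoof the result, and desktop machines may have no useful position source,
  // which is exactly why this can't be relied on for licence enforcement.
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(
      function (position) {
        console.log('Reported position: ' + position.coords.latitude + ', ' +
                    position.coords.longitude);
      },
      function (error) {
        console.log('No position available: ' + error.message);
      },
      { timeout: 10000 }  // give up after ten seconds
    );
  }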
If you can set up authentication such that you can identify all your users, then it seems to me that one approach would simply be to impose terms and conditions that prohibit them from accessing content when not physically in the UK, or wherever. You could back this up by warning them if IP address recognition or geolocation suggests that they are outside the relevant area. It seems to me (but IANAL) that this might be sufficient to meet contractual obligations (or at least to provide a defence after failing to do so), but obviously I can't advise on any particular case.
Update July 2014: it appears that the ERA licence has changed recently in line with changes to UK copyright legislation to better support distance learning. This probably reduces the relevance of ERA to the whole geolocation question, but obviously doesn't affect the underlying technical issues.