Monday 5 November 2012

Doing RSS right (3) - character encoding

OK, I promise I'll shut up about RSS after this posting (and my previous two).

This posting is about one final problem with including text from RSS feeds, or Atom feeds, or almost anything else, in web pages. The problem is that text is made up of characters, and 'characters' are an abstraction that computers don't understand. What computers actually ship around (across the Internet, in files on disk, etc.) while we are thinking about characters are really numbers. To convert between a sequence of numbers and a sequence of characters you need some sort of encoding, and the problem is that there are lots of these and they are all different. In theory, if you don't know the encoding you can't do anything with number-encoded text. However, most of the common encodings use the numbers from the ASCII encoding for common letters and other symbols, so in practice a lot of English and European text will come out right-ish even if it's being decoded using the wrong encoding.

But once you move away from the characters in ASCII (A-Z, a-z, 0-9, and a selection of other common ones) to the slightly more 'esoteric' ones -- pound sign, curly open and close quotation marks, long typographic dashes, almost any common character with an accent, and any character from a non-European alphabet -- then all bets are off. We've all seen web pages with strange question mark characters (like this �) or boxes where quotation marks should be, or with funny sequences of characters (often starting Â) all over them. These are all classic symptoms of character encoding confusion. It turns out there's a word to describe this effect: 'Mojibake'.
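To see where the 'Â' variety comes from, here's a tiny Python illustration: encode a pound sign as UTF-8 and then (wrongly) decode the resulting bytes as Latin-1:

  # Mojibake in miniature: UTF-8 bytes decoded with the wrong encoding.
  text = '£100'
  data = text.encode('utf-8')        # b'\xc2\xa3100' - the pound sign becomes two bytes
  wrong = data.decode('iso-8859-1')  # each byte wrongly treated as one character
  print(wrong)                       # Â£100 - the classic stray 'Â'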

Now I'm not going to go into detail here about what the various encodings look like, how you work with them, how you can convert from one to another, etc. That's a huge topic, and in any case the details will vary depending on which platform you are using. There's what I think is a good description of some of this at the start of chapter 4 of 'Dive Into Python 3' (and this applies even if you are not using Python). But if you don't like this there are lots of other similar resources out there. What I do want to get across is that if you take a sequence of numbers representing characters from one document and insert those numbers unchanged into another document then that's only going to work reliably if the encodings of the two documents are identical. There's a good chance that doing this wrong may appear to work as long as you restrict yourself to the ASCII characters, but sooner or later you will hit something that doesn't work.

What you need to do to get this right is to convert the numbers from the source document into characters according to the encoding of your source document, and then convert those characters back into numbers based on the encoding of your target. Actually doing this is left as an exercise for the reader.
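As a starting point for that exercise, here's what the two-step conversion looks like in Python (a sketch only; the encodings named here are just assumptions about the two documents involved):

  # Sketch: numbers -> characters using the source encoding, then
  # characters -> numbers using the target encoding.
  source_bytes = b'\xa3100 \x93quoted\x94'          # example input in Windows-1252
  characters = source_bytes.decode('windows-1252')  # decode: numbers to characters
  target_bytes = characters.encode('utf-8')         # encode: characters to numbers
  print(target_bytes)  # b'\xc2\xa3100 \xe2\x80\x9cquoted\xe2\x80\x9d'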

If your target document is an HTML one then there's an alternative approach. In HTML (and XML come to that) you can represent almost any character using a numeric character reference based on the Universal Character Set from Unicode. If you always represent anything not in ASCII this way then the representation of your document will only contain ASCII characters, and these come out the same in most common encodings. So if someone ends up interpreting your text using the wrong encoding (and that someone could be you if, for example, you edit your document with an editor that gets character encoding wrong) there's a good chance it won't get corrupted. You should still clearly label such documents with a suitable character encoding. This is partly because (as explained above) it is, at least in theory, impossible to decode a text document without this information, but also because doing so helps to defend against some other problems that I might describe in a future posting.
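In Python, for instance, the standard 'xmlcharrefreplace' error handler will generate those numeric character references for you. A minimal sketch:

  # Sketch: keep only ASCII in the output by turning everything else
  # into numeric character references.
  text = '£50 – “bargain”'
  ascii_only = text.encode('ascii', errors='xmlcharrefreplace')
  print(ascii_only)  # b'&#163;50 &#8211; &#8220;bargain&#8221;'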

Friday 26 October 2012

Doing RSS right (2) - including content

In addition to the issues I described in 'Doing RSS right', there's another problem with RSS feeds, though at least this one doesn't apply to Atom.

The problem is that there's nothing in RSS to say if the various blocks of text are allowed to contain markup, and if so which. Apparently (see here):
"Userland's RSS reader—generally considered as the reference implementation—did not originally filter out HTML markup from feeds. As a result, publishers began placing HTML markup into the titles and descriptions of items in their RSS feeds. This behavior has become expected of readers, to the point of becoming a de facto standard"
This isn't just difficult, it's unresolvable. If you find

<strong>Boo!</strong>

in feed data you simply can't know whether the author intended it as an example of HTML markup, in which case you should escape the angle brackets before including them in your page, or as a bold 'Boo!', in which case you are probably expected to include the data as it stands.
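For what it's worth, the two interpretations look like this in Python (html.escape does the bracket-escaping):

  import html

  snippet = '<strong>Boo!</strong>'
  # If the author meant to show the markup itself, escape it first...
  print(html.escape(snippet))  # &lt;strong&gt;Boo!&lt;/strong&gt;
  # ...but if they meant a bold 'Boo!', it has to go in as it stands.
  print(snippet)               # <strong>Boo!</strong>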

And if you are expected to include the data as it stands you have the added problem that including HTML authored by third parties in your pages is dangerous. If they get their HTML wrong they could wreck the layout of your page (think missing close tag) and, worse, they could inject JavaScript into your pages or open you up to cross-site scripting attacks by others. As I wrote here and here, if you let other people add any content to your pages then you are essentially giving them editing rights to the entire page, and perhaps the entire site.

However, given how things are, unless you know from agreements or documentation that a feed will only ever contain plain text, you are going to have to assume that the content includes HTML. Stripping out all the tags would be fairly easy, but probably isn't going to be useful because it will turn the text into nonsense - think of a post that includes a list.

The only safe way to deal with this is to parse the content and then only allow through the subset of HTML tags and/or attributes that you believe to be safe. Don't fall into the trap of trying to filter out only what you consider to be dangerous, because that's almost impossible to get right, and don't let all attributes through because they can be dangerous too - consider <a href="javascript:...">.
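By way of illustration only, here's a minimal sketch of the allowlist approach using Python's standard HTML parser. The tag and attribute lists are my own guesses, and as I say below you should really use a well-maintained sanitising library rather than anything like this:

  # A minimal allowlist sanitiser sketch - NOT production code.
  # Tags and attributes not in the allowlists below are dropped.
  from html.parser import HTMLParser
  import html

  ALLOWED_TAGS = {'p', 'div', 'ul', 'ol', 'li', 'b', 'strong', 'em', 'a'}
  ALLOWED_ATTRS = {'a': {'href'}}
  SAFE_SCHEMES = ('http://', 'https://', 'mailto:')

  class Sanitiser(HTMLParser):
      def __init__(self):
          super().__init__(convert_charrefs=True)
          self.out = []

      def handle_starttag(self, tag, attrs):
          if tag not in ALLOWED_TAGS:
              return
          kept = []
          for name, value in attrs:
              if name not in ALLOWED_ATTRS.get(tag, set()):
                  continue
              # Reject javascript: and any other unexpected URL scheme.
              if name == 'href' and not (value or '').lower().startswith(SAFE_SCHEMES):
                  continue
              kept.append(' %s="%s"' % (name, html.escape(value or '', quote=True)))
          self.out.append('<%s%s>' % (tag, ''.join(kept)))

      def handle_endtag(self, tag):
          if tag in ALLOWED_TAGS:
              self.out.append('</%s>' % tag)

      def handle_data(self, data):
          self.out.append(html.escape(data))

  def sanitise(markup):
      parser = Sanitiser()
      parser.feed(markup)
      parser.close()
      return ''.join(parser.out)

  print(sanitise('<p onclick="evil()">Hi <a href="javascript:alert(1)">x</a></p>'))
  # -> <p>Hi <a>x</a></p>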

What should you let through? Well, that's hard to say. Most of the in-line elements, like <b>, <strong>, <a> (carefully), etc., will probably be needed. Also at least some block-level stuff - <p>, <div>, <ul>, <ol>, etc. And note that you will have to think carefully about the character encoding both of the RSS feed and of the page you are substituting it into, otherwise you might not realise that +ADw-script+AD4- could be dangerous (hint: take a look at UTF-7).
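If that last example sounds far-fetched, here's the whole trick in one line of Python:

  # This innocuous-looking run of ASCII is '<script>' when decoded as UTF-7.
  print(b'+ADw-script+AD4-'.decode('utf-7'))  # <script>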

If at all possible I'd try to avoid doing this yourself and use a reputable library for the purpose. Selecting such a library is left as an exercise for the reader.

See also Doing RSS right (3) - character encoding.

Doing RSS right - retrieving content

Feeds, usually RSS but sometimes Atom or other formats, are a convenient way of including syndicated content in web pages - indeed the last 'S' of 'RSS' stands for 'syndication' in one of the common ways of expanding the acronym.

The obvious way to include the content of a feed in a dynamically-generated web page (such as the 'News' box on the University's current home page) is to include, in the code that generates the page, something that retrieves the feed data, parses it, and then marks it up and includes it in the page.
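In Python that 'obvious' approach might look something like this sketch (it uses the third-party feedparser module, and the feed URL is made up):

  # Sketch of the obvious approach: fetch and parse the feed on every
  # page render, then mark the entries up for inclusion in the page.
  import feedparser  # third party: pip install feedparser

  def render_news_box():
      feed = feedparser.parse('http://www.example.com/news/feed.rss')
      # Note: entry titles may themselves contain markup - see
      # 'Doing RSS right (2) - including content'.
      items = ['<li>%s</li>' % entry.title for entry in feed.entries[:5]]
      return '<ul>\n%s\n</ul>' % '\n'.join(items)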

But this obvious approach comes with some drawbacks. Firstly, the process of retrieving and parsing the feed may be slow and resource-intensive. Doing this on every page load may slow down page rendering and will increase the load on the web server doing the work - it's easy to forget that multiple page renderings can easily end up running in parallel if several people look at the same page at about the same time.

Secondly, fetching the feed on every page load could also throw an excessive load on the server providing the feed - this is at least impolite and could trigger some sort of throttling or blacklisting behaviour.

And thirdly there's the problem of what happens if the source of the feed becomes unreachable. Unless it's very carefully written, the retrieval code will probably hang waiting for the feed to arrive, probably preventing the entire page from rendering and giving the impression that your site is down, or at least very slow. And even if the fetching code can quickly detect that the feed really isn't going to be available (and doing that is harder than it sounds), what do you then display in your news box (or equivalent)?

A better solution is to separate the fetching part of the process from the page rendering part. Get a background process (a cron job, say, or a long-running background thread) to periodically fetch the feed and cache it somewhere local, say in a file, in a database, or in memory for real speed. While it's doing this it might as well check the feed for validity and only replace the cached copy if it passes. This process can use standard HTTP mechanisms to check for changes in the feed and so only transfer it when actually needed - it's likely to need to remember the feed's last modification timestamp from each fetch to make this work.
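Here's a sketch of such a fetcher in Python. The URL, cache file name and polling interval are made up, and a real version would want proper logging, but it shows the conditional-fetch and validity-check ideas:

  # Sketch: periodically fetch a feed, transferring it only when it has
  # changed (If-Modified-Since) and replacing the cached copy only if
  # the new content at least parses as XML.
  import time
  import urllib.request, urllib.error
  import xml.etree.ElementTree as ET

  FEED_URL = 'http://www.example.com/news/feed.rss'   # made up
  CACHE_FILE = '/var/cache/feeds/news.rss'            # made up
  last_modified = None

  def fetch_once():
      global last_modified
      request = urllib.request.Request(FEED_URL)
      if last_modified:
          request.add_header('If-Modified-Since', last_modified)
      try:
          with urllib.request.urlopen(request, timeout=10) as response:
              body = response.read()
              ET.fromstring(body)          # crude validity check: well-formed XML?
              with open(CACHE_FILE, 'wb') as cache:
                  cache.write(body)
              last_modified = response.headers.get('Last-Modified')
      except urllib.error.HTTPError as error:
          if error.code != 304:            # 304 Not Modified: cached copy still good
              print('feed fetch failed:', error)   # a real version would log and alert
      except (urllib.error.URLError, ET.ParseError):
          pass                             # unreachable or invalid: keep the old copy

  while True:
      fetch_once()
      time.sleep(30 * 60)                  # poll every half hour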

That way, once you've retrieved the feed once you'll always have something to display even if it becomes unavailable or the content you retrieve is corrupt. It would be a good idea to alert someone if this situation persists, otherwise the failure might go unnoticed, but don't do so immediately or on every failure since it seems common for some feeds to be at least temporarily unavailable. Since the fetching job is parsing the feed it could store the parsed result in some easily digestible format to further reduce the cost of rendering the content into the relevant pages.

Of course this, like most caching strategies, has the drawback that there will now be a delay between the feed updating and the change appearing on your pages - in some circumstances the originators of feeds seem very keen that any changes are visible immediately. In practice, as long as they know what's going on they seem happy to accept a short delay. There's also the danger that you will be fetching (or at least checking) a feed that is no longer used or only very rarely viewed. Automatically keeping statistics on how often a particular feed is actually included in a page would allow you to tune the fetching process (automatically or manually) to do the right thing.

If you can't do this, perhaps because you are stuck with a content management system that insists on doing things its way, then one option might be to arrange to fetch all feeds via a local caching proxy. That way the network connections being made for each page view will be local and should succeed. Suitable configuration of the cache should let you avoid hitting the origin server too often, and you may even be able to get it to continue to serve stale content if the origin server becomes unavailable for a period of time.

See also Doing RSS right (2) - including content and Doing RSS right (3) - character encoding.

Thursday 12 July 2012

Cookies and Google Analytics

Recent changes to the law as it relates to the use of web site cookies have focused attention on Google Analytics. If by some freak chance you haven't met Analytics, it's a free tool provided by Google that lets web site managers analyse in depth how their site is being used. It can do lots that simple log file analysis can't, and many web site managers swear by it.

Analytics uses lots of cookies, and there's quite a lot of confusion about how the new rules apply to them. In the UK, the Information Commissioner has been quite clear that cookies used for this sort of purpose don't fall under any of the exemptions in the new rules (see the final question in 'Your questions answered'):
"The Regulations do not distinguish between cookies used for analytical activities and those used for other purposes. We do not consider analytical cookies fall within the ‘strictly necessary’ exception criteria. This means in theory websites need to tell people about analytical cookies and gain their consent."
However he goes on to say:
"In practice we would expect you to provide clear information to users about analytical cookies and take what steps you can to seek their agreement. This is likely to involve making the argument to show users why these cookies are useful."
and then says:
"Although the Information Commissioner cannot completely exclude the possibility of formal action in any area, it is highly unlikely that priority for any formal action would be given to focusing on uses of cookies where there is a low level of intrusiveness and risk of harm to individuals. Provided clear information is given about their activities we are highly unlikely to prioritise first party cookies used only for analytical purposes in any consideration of regulatory action."
which looks a bit like a 'Get out of jail free' card (or 'Stay out of jail' card) for the use of at least some analytics cookies. The recent Article 29 Data Protection Working Party Opinion on Cookie Consent Exemption seems to have come to much the same conclusion (see section 4.3). They even suggest:
"...should article 5.3 of the Directive 2002/58/EC be re-visited in the future, the European legislator might appropriately add a third exemption criterion to consent for cookies that are strictly limited to first party anonymized and aggregated statistical purposes."
Which is fine, but there's that reference to 'first party cookies' in both sets of guidance, and the reference to "a low level of intrusiveness and risk of harm to individuals".

Now that should be OK, because Google Analytics really does use first party cookies - they are set by JavaScript that you include in your own pages with a scope that means their data is only returned to your own web site (or perhaps sites, but still yours).

But there's a catch. The information from those cookies still gets sent to Google - it rather has to be, because otherwise there's no way Google can create all the useful reports that web managers like so much. But if they are first party cookies, how does that happen?

Well, if you watch carefully you'll notice that when you load a page that includes Google Analytics your browser requests a file called __utm.gif from a server at www.google-analytics.com. And attached to this request are a whole load of parameters that, as far as I can tell, largely contain information from those Google Analytics cookies. __utm.gif is a one-pixel image, of the sort typically used to implement web bugs. And the ICO is clear that:
"The Regulations apply to cookies and also to similar technologies for storing information. This could include, for example, Local Shared Objects (commonly referred to as “Flash Cookies”), web beacons or bugs (including transparent or clear gifs)." (emphases mine).
So while the cookies themselves may be first party, the system as a whole seems to me to be more like something that's third party. And one using persistent cookies into the bargain (some of the Analytics cookies have a two-year lifetime), and one that gets my IP address on every request.

But it's not all bad. There's some suggestion that Google do understand this and are committing not to be all that evil. For example here they explain that they use IP addresses for geolocation, and that "Google Analytics does not report the actual IP address information to Google Analytics customers" (though I note they don't mention what they might do with it themselves). They also say that "Website owners who use Google Analytics have control over what data they allow Google to use. They can decide if they want Google to use this data or not by using the Google Analytics Data Sharing Options." (though the subsequent link seems to be broken - this looks like a possible replacement).

Further, the Google Analytics Terms of Service have a section on 'Privacy' that requires (section 8.1) anyone using Analytics to tell their visitors that:
"Google will use [cookie and IP address] information for the purpose of evaluating your use of the website, compiling reports on website activity for website operators and providing other services relating to website activity and internet usage.  Google may also transfer this information to third parties where required to do so by law, or where such third parties process the information on Google's behalf. Google will not associate your IP address with any other data held by Google."
which seems fairly clear (or as clear as anything you ever find in this area).

So what do I think? My current, entirely personal view is that Google Analytics is probably OK at the moment, providing you are very clear that you are using it. It might also be a good idea to make sure you've disabled as much data sharing as possible. But I do wonder if the ICO's view might change in the future if he ever looks too closely at what's going on (or if someone foolishly describes it in a blog post...), so it might be an idea to have a plan 'B'. This might involve a locally-hosted analytics solution, or falling back to 'old fashioned' log file analysis. Both of these could probably still be supplemented by cookies, but those wouldn't be exempt either, so you'd still need to get consent somehow. That should be easier, though, if they were truly 'first party' cookies and the data in them wasn't being shipped off to someone else. Trouble is, most good solutions in this area cost significant money. There is, as they say, no free lunch.

Wednesday 27 June 2012

Cookies - what the EU actually did

In an earlier posting I managed to work out what had changed in the relevant UK law to implement the changes to cookie use that we all know and love. At the time I didn't know how to track down the changes to the relevant EU directives that precipitated all this.

Well, now I think I do - thanks mainly to the references at the beginning of a recent Article 29 Data Protection Working Party Opinion on Cookie Consent Exemption, which is itself well worth a read (here's Andrew Cormack's summary). For your delight and delectation, here's what I think the changes are - all in Article 5.3 of Directive 2002/58/EC as amended by Directive 2009/136/EC, with removed text marked [deleted: ...] and new text marked [inserted: ...]:

3. Member States shall ensure that the [deleted: use of electronic communications networks to store information or to gain access to information stored] [inserted: storing of information, or the gaining of access to information already stored] in the terminal equipment of a subscriber or user is only allowed on condition that the subscriber or user concerned [deleted: is provided] [inserted: has given his or her consent, having been provided] with clear and comprehensive information in accordance with Directive 95/46/EC, inter alia about the purposes of the processing[deleted: , and is offered the right to refuse such processing by the data controller]. This shall not prevent any technical storage or access for the sole purpose of carrying out [deleted: or facilitating] the transmission of a communication over an electronic communications network, or as strictly necessary in order [deleted: to provide] [inserted: for the provider of] an information society service explicitly requested by the subscriber or user [inserted: to provide the service].
So there you have it. 

Thursday 1 March 2012

SSL/TLS Deployment Best Practices

What looks to me like a useful collection of SSL/TLS Deployment Best Practices from Qualys SSL Labs:

  https://www.ssllabs.com/projects/best-practices/

Thursday 2 February 2012

Two sorts of authentication

It occurs to me that, from a user's perspective, every authentication sits somewhere between the following two extremes:

  1. Authentications where it's strongly in the user's interest not to disclose their authentication credentials, but where doing so has little impact on the corresponding service provider. For example, I'm probably going to be careful about my credentials for electronic banking (because I don't want you to get my money) and for Facebook (because I don't want you to start saying things to my friends that appear to come from me).
  2. Authentications where it's mainly in the service provider's interest that the user doesn't disclose their authentication credentials, but it's of little importance to the user. For example, authentication to gain access to institution-subscribed electronic journals, or credentials giving access to personal subscription services such as Spotify. In neither case is giving away my credentials to third parties likely to have much immediate impact on me.
This is obviously a problem for service providers in case two, because it significantly undermines any confidence they can have in any authentication, and may undermine their business model if it's based on the number of unique users. There's not much you can do technically to address this, other than using non-copyable, non-forgeable credentials (which are few and far between and typically expensive). It is of course traditional to address it with contracts, rules and regulations, but none of these works well when the chance of being found out is low and the consequences are small.

More interesting is what happens when you use the same credentials (SSO or a single password, for example) for a range of services that sit in different places on this continuum. I suspect that there is a strong possibility, human nature being what it is, that people will make credential-sharing decisions based on the properties of an individual service, without really considering that they are actually sharing access to everything.

[I'd note in passing a New York Times article (Young, in Love and Sharing Everything, Including a Password) that suggests that young people will sometimes share passwords as a way of demonstrating devotion. I expect this is true too.]

Wednesday 25 January 2012

O2 changing web page content on the fly?


We recently noticed an oddity in the way some of our web pages were appearing when viewed via some 3G providers, including at least O2. The pages in question include something like this in the head section:

  <!--[if IE 6]>
    <style type="text/css" media="screen">@import
       url(http://www.example.com/ie6.css);</style>
  <![endif]-->

which should have the effect of including the ie6.css stylesheet when the document is viewed in IE6, but not otherwise.

When accessed over 3G on some phones, something expands the @import by replacing it with the content of the ie6.css style sheet before the browser sees it:

  <!--[if IE 6]>
    <style type="text/css" media="screen">
       ...literal CSS Statements...
    </style>
  <![endif]-->

which, while a bit braindead, would be OK were it not for the fact that the CSS file happens to contain (within an entirely legal CSS comment) an example including an HTML comment-close sequence:

  <!--[if IE 6]>
    <style type="text/css" media="screen">
      /* Use like this
         <!--[if IE 6]>
           ...
         <![endif]-->
      */
      ...more literal CSS Statements...
    </style>
  <![endif]-->

When parsed, this causes chaos. The first <!-- starts a comment and so hides the <style> tag. But the --> inside the CSS then closes that comment, leaving WebKit's HTML parser in the middle of a stack of CSS definitions. It does the only thing it can do and renders the CSS as part of the document.

I don't know what is messing with the @import statements. I suspect some sort of proxy inside O2's network, perhaps trying to optimise things and save my phone from having to make an extra TCP connection to retrieve the @imported file. If so, it's failing spectacularly, since it's inlining a large pile of CSS that my phone would never actually retrieve.

You can see this effect in action by browsing to http://mnementh.csi.cam.ac.uk/atimport/. You should just see the word 'Test' on a red background, but for me over O2 I get some extra stuff that is the result of their messing around with my HTML.

[There's also the issue that O2 seem to have recently been silently supplying the mobile phone number of each device in the HTTP headers of each request it makes, but that's a separate issue: http://conversation.which.co.uk/technology/o2-sharing-mobile-phone-number-network-provider/]