Is it too much to ask for a broadband supplier to get forward and reverse DNS registrations for their own addresses right?
$ who
jw35 pts/0 2010-08-09 19:32 (81.98.240.47)
$ dig +short -x 81.98.240.47
cpc2-cmbg4-0-0-cust814.know.cable.virginmedia.com.
$ dig +short cpc2-cmbg4-0-0-cust814.know.cable.virginmedia.com
81.98.243.47
Result: an OpenSSH restriction based on the client's hostname fails because the hostname can't be established, and I waste an hour trying to debug the problem.
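To make the failure mode concrete, here's a hedged sketch of the kind of hostname-based restriction involved (the key material, user name and file contents are illustrative only). sshd will only treat the client hostname as established if the reverse lookup of the client address and the forward lookup of the resulting name agree, which is exactly what breaks here:
# In ~/.ssh/authorized_keys: accept this key only from hosts whose
# reverse and forward DNS consistently match the pattern
from="*.cmbg.cable.virginmedia.com" ssh-rsa AAAA... jw35@home
# Or in /etc/ssh/sshd_config: restrict logins for an account by hostname
AllowUsers jw35@*.cmbg.cable.virginmedia.com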
Actually it's worse than that:
$ dig +short -x 81.98.243.47
cpc2-cmbg4-0-0-cust814.cmbg.cable.virginmedia.com.
$ dig +short cpc2-cmbg4-0-0-cust814.cmbg.cable.virginmedia.com
81.98.243.47
Argh!
Update 2010-08-10: It looks as if the problem may be resolving. The authoritative name servers for 240.98.81.in-addr.arpa (ns[1,2,3,4].virginmedia.net) seem to be serving consistent results:
$ dig +short +norecurse @ns1.virginmedia.net -x 81.98.240.47
cpc2-cmbg4-0-0-cust46.cmbg.cable.virginmedia.com
$ dig +short +norecurse @ns1.virginmedia.net cpc2-cmbg4-0-0-cust46.cmbg.cable.virginmedia.com
81.98.240.47
Unfortunately they serve this information with 7-day TTLs, so it's going to be several more days before the bogus information is finally purged from DNS server caches.
Monday, 9 August 2010
Tuesday, 18 May 2010
Splits and joins in PKI certificate hierarchies
I've always visualised PKI certificate hierarchies as strict trees. In this view, CA root certificates either directly authenticate end-user server and client certificates, or they authenticate multiple intermediate certificates, which may in their turn authenticate multiple further intermediate certificates, which eventually authenticate end-user certificates:
What hadn't occurred to me was that these hierarchies can also branch upward, with a certificate being authenticated by more than one certificate above it:
But as it happens I've recently come across two real-life examples of this in commercial CA certificate hierarchies.
The first is in the one operated by Comodo implementing the JANET Certificate Service for UK HE sites. According to the documentation, the 'O=TERENA, CN=TERENA SSL CA' certificate chains to one ultimately authenticated by 'O=AddTrust AB, CN=AddTrust External CA Root'. But it can just as easily be verified by a commonly installed root 'O=The USERTRUST Network, CN=UTN-USERFirst-Hardware'. I've no idea why it's like this.
The second is in a hierarchy operated by Thawte. Here they are introducing a new 2048-bit root certificate, but as a precaution have also created an intermediate certificate that chains back to the old root:
This all has interesting implications for certificate verification since there are now multiple possible paths from an end user certificate to a potential root. From a little experimentation it appears that Firefox and Safari manage to find the shortest path to a configured root, but CryptoAPI (and so Internet Explorer and most of the rest of Windows) and OpenSSL take the certificate chain as provided by the server and then try verification from the end of that without ever trying to backtrack (but see note below).
This makes it impossible to take advantage of having both roots available: taking the Thawte case, if you include the 'O=thawte, Inc., CN=thawte Primary Root CA' intermediate in the chain then your Windows/OpenSSL clients are bound to attempt verification against 'O=Thawte Consulting cc, CN=Thawte Premium Server CA' (and fail if they don't have it), and if you don't include it they will verify against the 'O=thawte, Inc., CN=thawte Primary Root CA' root (and fail if they don't have that).
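One way to see which path a given toolkit builds is to try the verification by hand with the OpenSSL command-line tools. This is only a hedged sketch: server.pem, intermediates.pem, cross-signed.pem, old-root.pem and new-root.pem are placeholders for the relevant certificates, not real file names.
# supplying the cross-signed intermediate as well steers verification to the old root
$ cat intermediates.pem cross-signed.pem > chain-with-cross.pem
$ openssl verify -CAfile old-root.pem -untrusted chain-with-cross.pem server.pem
# with only the ordinary intermediates the shorter path to the new root is used
$ openssl verify -CAfile new-root.pem -untrusted intermediates.pem server.pem
# the chain a server actually sends can be inspected with
$ openssl s_client -connect www.example.org:443 -showcerts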
The situation isn't helped by the fact that (if I'm reading it right) the relevant RFC describes verification from root to leaf, even though in practice you'll always be doing it from leaf to root.
Note: further experimentation suggests that it's more complicated than this. Firefox does seem to find the shorter path in the Thawte case, but finds the longer path in the Comodo case and fails validation if the 'O=AddTrust AB, CN=AddTrust External CA Root' root is disabled. It's possible that the behaviour is influenced by other data in the various certificates.
Monday, 17 May 2010
Doing certificate verification in OpenSSL clients (properly)
Many SSL-capable applications, particularly those that started life on a Unix/Linux platform, use OpenSSL to implement the SSL protocol. Amongst other checks, SSL clients are expected to verify that certificates that they receive from servers have been correctly signed by a Certification Authority (CA) that the client has been configured to trust, but doing this correctly (or at all) with OpenSSL turns out to be harder than you might think.
For a start, OpenSSL can be instructed not to bother with verification. This can seem like an easy way to get rid of annoying error messages and to make things work, but doing so makes clients vulnerable to server impersonation and man-in-the-middle attacks. Most clients do verification by default, but things like curl's -k and --insecure command line options, and Pine's /novalidate-cert option in mailbox and SMTP server definitions will suppress this. The first step towards doing certificate verification properly is to make sure you have verification turned on.
The next problem is that to verify a server certificate a client must have access to the root certificates of the CAs it chooses to trust. OpenSSL can access these in two ways: either from a single file containing a concatenation of root certificates, or from a directory containing the certificates in separate files. In the latter case, the directory must also contain a set of symlinks pointing to the certificate files, each named using a hash of the corresponding certificate's subject's Distinguished Name. OpenSSL comes with a program, c_rehash, that generates or regenerates these symlinks and it should be run whenever the set of certificates in a directory changes. All the certificates should be in PEM format (base64 encoded certificate data, enclosed between "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" lines).
[Actually it's worse than this, because the client also needs access to any intermediate certificates that are needed to construct a chain linking the server's certificate to the corresponding root. It's the server's responsibility to provide these intermediates along with its own certificate, but sometimes they don't, making verification difficult or impossible. See below for how to detect this problem.]
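As a hedged illustration of the hashed-directory layout described above (the directory and file names are made up):
$ mkdir -p /etc/ssl/my-cas
$ cp some-root-ca.pem /etc/ssl/my-cas/
$ c_rehash /etc/ssl/my-cas
# to see the hash value a single certificate would get
$ openssl x509 -noout -hash -in some-root-ca.pem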
The OpenSSL library has compiled-in default locations for root certificates. You can find out where they are by running openssl version -d to display OpenSSL's configuration directory. The default certificate file is called certs.pem and the default certificate directory is called certs, both within this configuration directory. However be careful: it's not unusual to have multiple copies of the OpenSSL library installed on a single system, and different versions of the library may have different ideas of where the configuration directory is. You need to be running a copy of 'version' that's linked against the same copy of the OpenSSL library as the client you are trying to configure.
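For example (the directory shown is just a typical value and will vary between builds and platforms):
$ openssl version -d
OPENSSLDIR: "/usr/lib/ssl"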
The base OpenSSL distribution no longer puts anything in these locations. Debian (and so Ubuntu) and OpenSUSE/SLES 11 have separate packages that install an extensive collection of roots, SLES 10's OpenSSL package installs a small and idiosyncratic set, and OpenSSL under Mac OS X installs none at all. Worse, while the library knows about these default locations, applications have to make a conscious decision to use them, and some don't - for example, wget seems to do so but ldapsearch doesn't.
OpenSSL applications generally have configuration options for selecting a certificate file and/or directory. Sometimes these are command line options (the OpenSSL utilities use -CAfile and -CApath; curl uses --cacert and --capath), sometimes they appear in configuration files (the OpenLDAP utilities look for TLS_CACERT and TLS_CACERTDIR in ldap.conf or ~/.ldaprc), and client libraries will have their own syntax (the Perl Net::LDAP module supports 'cafile' and 'capath' options in calls to both the Net::LDAPS->new() and $ldap->start_tls() methods).
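For instance, a hedged sketch (the paths and host name are placeholders) of pointing two different clients at the same store:
$ curl --capath /etc/ssl/certs https://www.example.org/
# and, for the OpenLDAP tools, in ldap.conf or ~/.ldaprc
TLS_CACERTDIR /etc/ssl/certs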
However the locations are established, you'll need an appropriate collection of root certificates - at least one for each CA that issued the certificates on the servers you want to talk to. It's often easiest if these are in the default locations, but you can put them anywhere as long as you tell your clients where to find them. You should of course be a little careful about this - by installing root certificates you are choosing to trust the corresponding CAs with at least part of your system's security. One approach is only to install roots as and when you need them to contact particular servers, but beware that this may lead to unexpected problems in the future if a server gets a new certificate from a new CA that you don't yet trust. In practice the easiest and probably best approach may be to use either a distribution-supplied collection of roots or to extract the root certificates from something like the Mozilla certificate bundle (see for example the mk-ca-bundle.pl utility distributed with curl). [Update 2011-02-01: see this new post for a better solution under Mac OS X]
So, putting this together, you need to do the following to use OpenSSL properly:
- Make sure you have enabled, or at least haven't suppressed, certificate verification.
- Get yourself an appropriate set of root certificates. If you add these to a certificate directory, remember to run c_rehash afterwards to recreate the hash symlinks.
- If you install these certificates in one of OpenSSL's default locations and your application uses those locations then everything should work immediately. Otherwise, add appropriate configuration to tell the application where to look.
- Check that verification works by running openssl s_client -connect <host name>:<port> -CApath <certificate directory>, replacing <host name>, <port>, and <certificate directory> appropriately (<port> needs to be 443 for an HTTPS server, 636 for LDAPS, etc.). Or replace -CApath with -CAfile to select a file containing root certificates. This actually establishes a connection to the server - you can terminate it by typing ctrl-c or similar.
If you see "Verify return code: 0 (ok)" then everything worked and the server's certificate was successfully validated. If you see "Verify return code: 20 (unable to get local issuer certificate)" then OpenSSL was unable to verify the certificate, either because it doesn't have access to the necessary root certificate or because the server failed to include a necessary intermediate certificate. Yiou can check for the latter by adding the -showcerts option to the command line - this will display all the certificates provided by the server and you should expect to see everything necessary to link the server certificate up to but not including one of the roots you've installed. If you see "Verify return code: 19 (self signed certificate in certificate chain)" then either the servers is really trying to use a self-signed certificate (which a client is never going to be able to verify), or OpenSSL hasn't got access to the necessary root but the server is trying to provide it itself (which it shouldn't do becasue it's pointless - a client can never trust a server to supply the root coresponding to the server's own certificate). Again, adding -showcerts will help you diagnose which. Once you've got OpenSSL itself to work, move on to your actual client and hopefully it will work too.
Um, and that's about all, though there is one more wrinkle. Several applications that can use OpenSSL can also use GnuTLS and/or NSS instead. This can change many of the details above. In particular, GnuTLS only supports certificate files, not certificate directories but clients using it don't always report this. As a result you can waste (i.e. I have wasted) lots of time trying to work out why something like ldapsearch is continuing to reject server certificates despite being passed an entirely legitimate directory of CA root certificates...
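If you hit this, one workaround is to flatten the directory into a single bundle file and point the client at that instead; a hedged sketch with made-up paths:
$ cat /etc/ssl/certs/*.pem > /etc/ssl/ca-bundle.pem
# then, for example, in ldap.conf or ~/.ldaprc
TLS_CACERT /etc/ssl/ca-bundle.pem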
Friday, 30 April 2010
Webapps and HTTP Error codes
Someone recently asked me:
HTTP servers can return HTTP error codes. Should an application (e.g. a grails application) send back HTTP errors/codes, or in case of error, send back an HTTP code 200, and then send back some form of application error code/message?

IMHO you should use appropriate HTTP error codes where possible, it's just not always possible.
For example, a request for a non-existent URL really should return 404, otherwise you will find search engines repeatedly indexing your human-readable error page. Likewise, our local directory returns a real 404 if you try to look at the details for someone who doesn't exist.
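Incidentally, a quick way to check what status your application actually returns for a missing resource is to ask for the code directly; a hedged sketch, with the URL obviously a placeholder:
$ curl -s -o /dev/null -w '%{http_code}\n' https://www.example.org/no/such/page
404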
I'd suggest that 'server exploded horribly' should also return 500, and that broken requests could be reported as 400, but beyond this there are few response codes that are really relevant to a web application (apart from 'functional' ones like the 300 series). By all means use anything else that makes sense, but check the standard carefully - many codes don't mean what they superficially appear to mean. How you should report an 'application' error that doesn't map onto an existing code is beyond me (200 with an explanation? 400 with an explanation?). There are also 'errors' which are not really errors - like not finding any results in a search.
Remember that it's perfectly possible to include an application error code or message in the text of a page returned with a non-200 HTTP status code. But you do have to accept that some browsers (thank you Microsoft, but also modern Firefoxes) may suppress your useful, hand-crafted message in favour of their own generic, unhelpful one. Sometimes making your error message long enough will encourage browsers to display it after all.
Friday, 19 February 2010
The problem with breadcrumb trails
You won't find me objecting to anything in this:
http://derivadow.com/2010/02/18/the-problem-with-breadcrumb-trails/
Thursday, 11 February 2010
Getting nagged by Nagios
Nagios is an entirely usable service monitoring system - I'm aware of at least three implementations within the University Computing Service alone. There are some aspects of its design (or, I suspect, its evolution) that I don't particularly like, but all in all it's much, much better than nothing.
An important feature is its powerful and capable configuration system. As usual this is a two-edged sword because you have to understand the resulting complexity to take advantage of the power. I have two things to offer that might help: some general configuration guidelines, and a diagram.
Guidelines
- Keep the number and size of edits to the distribution-supplied config files to a minimum.
- Arrange that related names and aliases share a common prefix (e.g. foo in what follows) so that they sort together (host names, which are most usefully based on DNS names, can be an exception to this)
- Keep to a minimum the number of distinct names created (e.g. use foo as both a contact and a contact group name and so don't create foo-contactgroup)
- Note that, unlike most other 'names', service_description isn't a global name and only needs to be unique within the set of hosts that provide it. It doesn't need and shouldn't have a foo prefix, and should be a short, human-readable, description of the service
- Keep names (especially those commonly displayed in the web apps) as short as reasonably possible
- Express group membership by listing the group name in the individual objects, NOT by listing the individual objects in the group definition (or if you want, the other way around, but be consistent!)
- Use inheritance wherever possible to avoid replicating common settings (see the sketch after this list)
- Use a naming convention so that separate groups of people can create global names without clashing
- Store information belonging to each group of people in a predictable location (e.g. always put host information in files or directories starting "host") to make navigation easier
- Optimise the web-based display
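To make a couple of these points concrete - inheritance from templates, and declaring group membership in the member objects - here is a hedged sketch of some object definitions. The host, contact group and template names are invented (generic-host and generic-service are the templates shipped in the sample configuration):
define contactgroup {
    contactgroup_name  foo
    alias              foo service admins
}
define host {
    use            generic-host        ; inherit common settings from a template
    host_name      foo1.example.org
    alias          foo application server
    address        192.0.2.10
    contact_groups foo                 ; membership declared here, not in the group
}
define service {
    use                 generic-service
    host_name           foo1.example.org
    service_description HTTP           ; short, host-local description
    check_command       check_http
    contact_groups      foo
}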
Object relationships
Nagios's Template-Based Object Configuration is one of its most powerful features but it's difficult to get your head around it when starting out. Here's a diagram that might help - it shows most of the relationships between the various objects:
Wednesday, 10 February 2010
Authentication: Think before you proxy
There are three obvious ways to add authentication such as Ucam Webauth or Shibboleth or OpenID to a web application:
- Add the authentication code to the application itself (what the Java world calls 'Application Managed Security')
- Implement the authentication in the web server or container hosting the application (Java's 'Container Managed Security')
- Implement the authentication in an HTTP proxy in front of the application
It's the third of these, proxying, that needs the most careful thought, because it comes with a number of drawbacks:
- Security: to be useful, the proxy has to pass on information (such as the user name at the very least) to its downstream client, and it obviously has to do this in a way that can't be influenced by an attacker (see the sketch after this list). About the only option (if you are sticking to HTTP for communication between the proxy and its downstream client) is to add extra HTTP request headers. It's not easy to be absolutely sure that these headers can't somehow be injected by an attacker (by talking directly to the downstream client, by supplying them in a request and convincing the proxy to pass them on, etc., etc.).
- Availability of attributes: Shib and other similar systems potentially make lots of information about the authenticating user available as attributes. Unless you arrange for your proxy to pass all of these on (and there could be lots, and you might not know in advance what they all are) then your clients will lose out. There's also the problem (in theory, if not in practice) that users are able to control what information about them is disclosed to which services. With multiple services behind a single proxy it's going to be a case of one size having to fit all.
- Confused logs: clients behind proxies see all connections as coming from the proxy. This makes their logs less useful than they might be and frustrates any attempt to limit access by IP address (which is actually a bad thing anyway IMHO, but that's another story). There are ways around this, though most then suffer from the security problems mentioned above.
- One more system: the proxy is one more system to build, maintain and in particular to understand. I have a theory that authentication services in some big web applications are a bit 'hit and miss' (and 'electronic journals' are big offenders here) precisely because they are big, layered applications that almost no one (or perhaps actually no one in a few cases) really understands.
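As a sketch of the header-passing point in the first item above: the proxy has to strip any client-supplied copy of the header before adding its own. This is a hedged example only - the header name X-Remote-User and the backend URL are invented, and it assumes an Apache proxy with mod_headers, mod_rewrite and mod_proxy loaded, using the well-known look-ahead recipe for exposing REMOTE_USER:
# never trust an X-Remote-User header supplied by the client
RequestHeader unset X-Remote-User
# copy the authenticated user into an environment variable...
RewriteEngine On
RewriteCond %{LA-U:REMOTE_USER} (.+)
RewriteRule .* - [E=RU:%1]
# ...and pass it to the backend as a request header
RequestHeader set X-Remote-User %{RU}e
ProxyPass / http://backend.example.org/
ProxyPassReverse / http://backend.example.org/
The unset line is the important one: without it a malicious client could supply its own X-Remote-User header and have the proxy pass it straight through.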
Friday, 5 February 2010
Core data stores in a University context
I've just come across what I think is a really interesting post about core data stores in a University context:
http://alexbilbie.blogs.lincoln.ac.uk/2010/02/04/core-dot-lincoln/
I haven't thought this through fully, but on first glance it seems to me that much of what's identified as needed at Lincoln would be directly relevant in Cambridge. The idea that University systems should be making interfaces to raw data available, for use by other University systems and potentially by others, has been bubbling around in the back of my mind for a while (and is something that I know is already happening in systems such as dSpace and the Streaming Media Service) and this post seems to be helping to crystallise some of those thoughts. Perhaps I'll come back to this topic in a later post.
Tuesday, 19 January 2010
Promiscuous JavaScript considered dangerous
A trick commonly used to incorporate content from one web site into pages provided by another is for the provider site to make available a piece of JavaScript which inserts the required content into any page when loaded and executed. There are lots of examples - see for example Talks.cam 'Custom views', or the Met Office's Weather widget.
This is really neat, and has the huge advantage that it doesn't need any server-side support to implement (typically RSS/Atom would need work on the server to collect and format the feeds) and can be deployed by just about anyone with editing access to the pages in question and the ability to cut and paste.
But it comes at a huge price. JavaScript that you include in your pages (wherever it comes from) has largely unfettered access to those pages. So it can rewrite your content, read cookies (and so steal authentication sessions), post content back to your server, etc., etc. So consider, before doing so, that adding JavaScript to your pages that is controlled by someone else amounts to giving that 'someone else' editing rights to your page. Are you happy to do that?
Of course reputable providers of this sort of JavaScript won't do anything like this deliberately. But you do need to consider that they might do it accidentally - it's particularly easy for them to mess up other JavaScript on your pages (for example the Met Office widget above sets the JavaScript global variable moDays without any check of whether it's being used elsewhere). This trick is often used to provide feed information that may have been originally supplied to the provider site by yet more third parties. There's the danger that the provider's cross-site scripting protection may be incomplete, opening the possibility of unknown third parties being able to inject JavaScript into your pages, which is bad. And if you are relying on someone else's cross-site scripting protection you need to know what character encoding they are doing it for and to be sure that you are setting a compatible one (this is discussed in a very helpful and, to their credit, very honest page supplied by Talks.cam that describes the risks of using their 'Promiscuous JavaScript' offering).
Note that I'm NOT saying that you shouldn't use this trick (as a consumer or as a provider). I am suggesting that you should be aware of the risks and make an appropriate assessment of how they affect any particular deployment before proceeding.