Category Archives: Griping

My Disordered Rebuttal to “AngularJs Performance in Large Applications”

After reading Abraham Polishchuk’s article AngularJS Performance in Large Applications, I wanted to dig a bit deeper into his suggestions before I took any of them on board. My basic concern was that many of his suggestions seemed focused on technology-specific performance micro-optimisations. In today’s browser world, optimising around specific technologies (i.e. browsers and browser versions) is an expensive task with a limited lifespan and usefulness: both improvements in Javascript engines and differences in browser implementations mean that these fine-grained optimisations should really be reserved for very specific performance-critical situations.

Conversely, I don’t think we can expect Angular’s performance to improve in the same way as browser engines have. While Angular has undergone significant iterative performance improvements in the 1.x series, it is clear from the massive redesign of version 2.0 that most future application performance gains will only be won by rewriting significant chunks of your own code. Be aware!

Understanding computational complexity is crucial for web developers, even if with Angular it can all feel like Javascript and browser and cloud and magic. You still need that software engineering education.

Onto the article.

1-2 Introduction, Tools of the Trade

No arguments here. Helpful.

3 Software Performance

Ok.

4.1 Loops

This advice, to move function calls out of loops, really depends on what the function you are calling does. Now, clearly, it’s sensible to move calls out of a loop if you can – that’s a normal O(n) to O(1) optimisation. And Object.keys, as shown in the original article, is a very heavy function, so that’d be a good one to avoid calling inside a loop.

But don’t take this advice too far. The actual cost of the function call itself is minimal (http://jsperf.com/for-loop-perf-demo-basic/2). In general, inlining code to avoid a function call is not advisable: it results in less maintainable code for a minimal performance gain.
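To make the O(n) to O(1) point concrete, here is a minimal sketch of the hoisting I have in mind; obj and doSomething are just placeholders.

  // Unhoisted: Object.keys(obj) is recomputed on every iteration
  for (var i = 0; i < Object.keys(obj).length; i++) {
    doSomething(Object.keys(obj)[i]);
  }

  // Hoisted: Object.keys(obj) is computed once, outside the loop
  var keys = Object.keys(obj);
  for (var j = 0; j < keys.length; j++) {
    doSomething(keys[j]);
  }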

4.2 DOM access

While modifications to the DOM should be approached carefully, the reality is that a web application is all about modifying the DOM, whether directly or indirectly. It cannot be avoided! The cost of applying inline styles (as opposed to using CSS selectors predefined in stylesheets?) varies significantly depending on both the styles being set and the browser. It’s certainly not a hard-and-fast rule, and not really a useful one. In my example, I compare background color (which should not force a reflow) and font-size (which would): http://jsperf.com/inline-style-vs-class. The results vary dramatically across browsers.
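For reference, the sort of thing being compared in that jsperf, as a rough sketch; the element id, class name and styles here are invented.

  var el = document.getElementById('example');

  // Inline styles: background-color alone should not force a reflow,
  // but font-size changes the element's geometry and generally will.
  el.style.backgroundColor = '#eee';
  el.style.fontSize = '14px';

  // Versus toggling a class predefined in a stylesheet.
  el.classList.add('highlighted');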

4.3 Variable Scope and Garbage Collection

No arguments here. In fact I would go further and say that this is the single biggest performance concern, both in terms of CPU and resource usage, in Angular. In my limited experience, anyway. It’s very easy to leak closures without realising.
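A sketch of how easily it happens, with invented names throughout (the module, controller, event and helper functions are all placeholders): a listener registered on $rootScope keeps its closure, and everything the closure captures, alive long after the view is gone, unless it is deregistered.

  var app = angular.module('demo', []);

  app.controller('ReportCtrl', function($scope, $rootScope) {
    var bigData = loadLotsOfRows();   // placeholder: imagine a large dataset

    // Registered on $rootScope, so it outlives this controller's scope...
    var unbind = $rootScope.$on('data:refresh', function() {
      render(bigData);                // placeholder render function
    });

    // ...unless it is deregistered when the scope is destroyed.
    $scope.$on('$destroy', unbind);
  });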

4.4 Arrays and Objects

Yes. Well, somewhat. As Gregory Jacobs commented, there is no significant difference between single-type and multi-type arrays. The differences come in what you are doing with the members of that array, and again browsers vary. http://jsperf.com/object-tostring-vs-number-tostring

I would again urge you to avoid implementation-specific optimisations. Designing an application to minimize the number of properties in an object just because today’s implementation of V8 has a special internal representation of small objects is not good advice. Design your objects according to your application’s requirements! This is a good example of a premature optimization antipattern.

Finally, as Wills Bithrey commented, Array.delete isn’t really slow.

5.1 Scopes and the Digest Cycle

This is a clear description of Angular scopes and the digest cycle. Good stuff. Abraham does a good job of unpacking the ‘magic’ of Angular.

6.1. Large Objects and Server Calls

Ok, so in my app I return full rows of data, and there are a few columns I don’t *currently* use in that data. I measured the performance impact on my application, and the cost of keeping those columns was insignificant: I was actually unable to measure any difference. Again, this is a premature optimization antipattern. Your wins here will ultimately come not from excluding columns from your data but from making sure your data is normalized appropriately. This is about good design, rather than performance-constrained design.

Of course, there are plenty of other reasons to minimize your data across the wire. But remember that a custom serializer for each database object is another function to test, and another failure point. Measure the cost and the benefit.

6.2 Watching Functions

Yes. ‘Never’ is a little too strong for me, but yeah, in general, avoid binding to functions.
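To illustrate the difference, a template sketch with invented names (computeTotal and total): a function binding is re-evaluated on every digest, whether or not anything changed, while a property binding is only a dirty check.

  <!-- Re-runs computeTotal() on every single digest cycle -->
  <td>{{ computeTotal() }}</td>

  <!-- Bind to a plain property, and recompute total only in the code
       paths that actually change its inputs -->
  <td>{{ total }}</td>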

6.3 Watching Objects

Yes, in general, deep watching of objects does introduce performance concerns. But I do not agree that this is a terrible idea. The alternative he gives (“relying on services and object references to propagate object changes between scopes”) gets pretty complicated and introduces maintenance and reliability concerns. I would approach this point as potential low hanging fruit for optimisation.
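As a sketch of that low hanging fruit, with invented names (order, onOrderChanged, onItemsChanged): a deep watch compares every nested property on every digest, while $watchCollection only watches the collection’s top level and is often enough.

  // Deep watch: Angular copies the object and compares every nested
  // property on every digest (angular.copy + angular.equals).
  $scope.$watch('order', onOrderChanged, true);

  // Often sufficient, and much cheaper: only additions, removals and
  // reordering of the collection are detected, not nested property changes.
  $scope.$watchCollection('order.items', onItemsChanged);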

7.1 Long Lists

Yes.

7.2 Filters

As DJ commented, filters don’t work by hiding elements with CSS. Filters, as used in ng-repeat, return a transform of the array, which may be what Abraham meant?
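As a hedged sketch of what “return a transform of the array” means in practice, with invented names (items, query, filteredItems), and assuming $filter is injected into the controller:

  // In the template, the filter re-runs on every digest:
  //   <li ng-repeat="item in items | filter:query">{{ item.name }}</li>

  // The same transform by hand; if the filter shows up as a hot spot, one
  // option is to apply it in the controller only when the query changes:
  $scope.$watch('query', function(q) {
    $scope.filteredItems = $filter('filter')($scope.items, q);
  });
  //   <li ng-repeat="item in filteredItems">{{ item.name }}</li>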

7.3 Updating an ng-repeat

Pretty much. I would add that for custom syncing to work, you still need a unique key, just as you do for track by. If you are using Angular 1.3, there’s no benefit to the custom sync approach. Abraham has detailed the maintenance cost of custom syncing.
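For reference, the track by form, assuming your rows carry a stable unique id (row and rows are invented names):

  <!-- Without track by, a fresh array from the server is keyed by object
       identity, so every row's DOM is torn down and rebuilt -->
  <tr ng-repeat="row in rows"><td>{{ row.name }}</td></tr>

  <!-- With a stable unique key, Angular reuses the existing DOM nodes -->
  <tr ng-repeat="row in rows track by row.id"><td>{{ row.name }}</td></tr>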

8. Rendering Problems

ng-if and ng-show are both useful tools. Choose your weapons wisely.
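A quick sketch of the trade-off, with invented names (showDetails, details):

  <!-- ng-if removes the element, its child scope and all its watchers when
       false, and recreates them when it becomes true again -->
  <div ng-if="showDetails">{{ details }}</div>

  <!-- ng-show keeps the element and its watchers alive and just toggles
       display: none -->
  <div ng-show="showDetails">{{ details }}</div>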

9.1 Bindings

Yes.

9.2 $digest() and $apply()

Yes, if you must. This is a great way to introduce inconsistencies in your application which can be very hard to debug!
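The one place I do reach for $apply is when a change arrives from outside Angular’s world; thirdPartyWidget here is imaginary.

  // A callback from a non-Angular library: Angular has no idea the model
  // changed, so wrap the change in $apply to trigger a digest.
  thirdPartyWidget.onChange(function(value) {
    $scope.$apply(function() {
      $scope.value = value;
    });
  });

  // Calling $apply() or $digest() from code that is already running inside
  // a digest is where the hard-to-debug errors and inconsistencies come from.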

9.3 $watch()

I really disagree with this advice. scope.$watch is a very practical way of separating concerns. I don’t think it’s indicative of bad architecture, quite the opposite in fact! While trying to find the discussions that Abraham refers to (but sadly doesn’t link to), it seemed that most of the alternatives given were in fact in themselves bad architecture. For example, Ben Lesh suggested (although he has since relaxed his opinion) using an ng-change on an input field instead of a watcher on the target variable. Why is that bad? From an architectural point of view, that means each place in your code that you change that variable, you have to remember to make that function call there as well. This is antithetical to the MVC pattern and seriously WET.
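To make the WET point concrete, a sketch with invented names (filter.query, reloadResults, resetFilters):

  // One watcher covers every way the value can change, user-driven or
  // programmatic:
  $scope.$watch('filter.query', reloadResults);

  // With ng-change on the input instead, every programmatic change has to
  // remember to make the call itself, at every call site:
  $scope.resetFilters = function() {
    $scope.filter.query = '';
    reloadResults();   // forget this once and the view and model drift apart
  };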

9.4 $on, $broadcast, and $emit

Apart from the advice to unbind $on(‘$destroy’), which I don’t think is correct: yes, being aware of the performance impact of events is important in an event-driven system. But that doesn’t mean don’t use them: events are a weapon to use wisely. Seeing a pattern here?
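On the performance side, the asymmetry worth keeping in mind (assuming $scope and $rootScope are in hand; the event name and payload are invented):

  // $broadcast walks down the scope tree from wherever it is called, so a
  // frequent $rootScope.$broadcast can touch a large part of a big scope tree.
  $rootScope.$broadcast('prices:updated', prices);

  // $emit only walks up from the current scope to $rootScope.
  $scope.$emit('prices:updated', prices);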

9.5 $destroy

Yes. Apart from the advice to “explicitly call your $on(‘$destroy’)” — I’m not sure what this means.

10.1 Isolate Scope and Transclusion

Only one quibble here: isolate scopes or transclusions can be faster than occupying the parent scope, because a tidy directive can call $digest() on its own isolated scope and avoid a digest of the entire scope tree (which is what $apply() would trigger). Choose your tools wisely.
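A sketch of what I mean, with invented names throughout (module, directive, template): a directive with an isolate scope that updates itself from outside Angular and digests only its own scope.

  var app = angular.module('demo', []);

  app.directive('tickingClock', function() {
    return {
      restrict: 'E',
      scope: {},                          // isolate scope
      template: '<span>{{ time }}</span>',
      link: function(scope) {
        // Update from outside Angular, then digest only this scope and its
        // children, rather than the whole tree via $apply().
        var timer = setInterval(function() {
          scope.time = new Date().toTimeString();
          scope.$digest();
        }, 1000);
        scope.$on('$destroy', function() {
          clearInterval(timer);
        });
      }
    };
  });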

10.2 The compile cycle

Basically, yes.

11 DOM Event Problems

Yes, addEventListener is more efficient, but so is not using Angular in the first place! Going this way results in a significant code increase, and you again lose the benefit of Angular in doing so. Going back to raw DOM is always a trade-off. In the majority of cases, the raw DOM event model is a whole lot of additional work for little gain.
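To show what “additional work” looks like, a hedged sketch with invented names (the module, the directive and addItem): the raw equivalent of a one-attribute ng-click means owning the scope integration and the cleanup yourself.

  var app = angular.module('demo', []);

  // The ng-click version is a single attribute: <button ng-click="addItem()">
  app.directive('rawClick', function() {
    return function(scope, element) {
      var handler = function() {
        scope.$apply(function() { scope.addItem(); });
      };
      element[0].addEventListener('click', handler);
      scope.$on('$destroy', function() {
        element[0].removeEventListener('click', handler);
      });
    };
  });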

12 Summary

In summary, I really think this post is about micro-optimisations that in many cases will not bring you the performance benefits you are looking for. Abraham’s summary suggests avoiding many of the most useful tools in Angular. If you have to roll your own framework code to avoid ng-repeat, or ng-click, you have already lost.

Have I got the wrong end of the stick?

How many TLDs is too many?

So I got my usual daily email from $registrar today, telling me that my world is going to change and I just have to, have to, buy a new domain name from them for today’s brave new TLD. Today’s must-have TLD is .website, and yesterday’s was probably .ninja. I’m offended on behalf of Samurai, because I wasn’t offered a .samurai domain.

I enclose below a snippet from the email. I think it speaks for itself:

[Screenshot: snippet from the registrar’s email]

A morbid fascination caused me to click the link provided. I was given a list of 236 different TLDs, ranging in price from $8.98 for .uk (which, by the way, I can’t actually buy), all the way through to a princely $3059.99 for .healthcare.  I was surprised that .dental was so affordable, at a mere $59.99.

Here’s the list I was offered (click for the gory detail):

[Screenshot: the full list of keyman.* domains on offer]

I have a few keyman.* domains already, purchased over the years before we managed to acquire our blue label keyman.com.  It’d be nice to fill out the collection. So how much for the full set? Well, the full set of “New Domains” offered by one particular registrar, anyway. Today only, the full set will cost just $26,761.45. That doesn’t include most of those two-letter country domains, like .la and .io and .tv.

My view is that every TLD issued now increases the value of keyman.com.  I can’t see any way that the average small company is going to be blowing $25K a year on maintaining a set of worthless domains, just to prevent squatters or even to maintain a registered trademark. Some very large companies might find it worthwhile to register, e.g. microsoft.tax. But me? Nup. I’m not renewing some of my unused lower value domains, and you’re welcome to them.

The gold rush is over, people.

Cursor Keys Have Not Improved

I’m a keyboard guy. And I think keyboards suck. In fact, I wrote about this before.

I found two new ugly specimens for today’s little rant, and your perusal. Both these keyboards have a reasonably traditional layout, but both fail, for different reasons. These two keyboards were in our conference room.

Microsoft Wireless 800

What’s wrong with this?

  1. It has no gaps between the different parts of the keyboard. Muscle memory fail.
  2. It has a bizarre scooped out shape, not really visible in the photo, which seems to encourage pressing the wrong row of keys.
  3. It has no gaps. This is so bad that it bears repeating. Without gaps, you have to look for the key because you can’t feel for it. Every time.

I thought I was the only one who really hated this keyboard with a passion, but enough other people complained about it that we replaced it with a Dell keyboard. I don’t know what has happened to the Microsoft keyboard. It’s entirely possible someone burned it.

Dell Latest

So this one, at first glance, improves on the Microsoft keyboard by reintroducing that classic design feature: white space, or black space. Just space. Y’know, gaps between different parts of the keyboard. Space is not entirely at a premium on our conference room table. But:

  1. The keys are modern funky flat keys with an unsatisfying deadness to them.
  2. The wrong size! Little tiny navigation keys for big fingers.
  3. And the media keys are encroaching on the navigation key space.
  4. What is the Clear key for? And what have you done with Num Lock? And Scroll Lock? And Pause/Break?
  5. I have nothing against the moon, but why do we need a moon key on our keyboard?

These things cost us time and productivity. It may seem minor, but moving between keyboards has become a constant frustration. I wish we as an industry could do better.

The farce of security challenge questions (yes, ANZ, I’m talking about you!)

My bank has decided that I have to have some security challenge questions, and gave me a fixed set of questions to add answers to.

They had some simple instructions: “Keep them secret and don’t disclose them to anyone.  Don’t write down or record them anywhere.”  And added a little threat as icing on the cake: “If you don’t follow these instructions, you may be liable for any loss arising from an unauthorised transaction.”

[Screenshots: Security Questions 1, 2 and 3]

If I actually attempt to give honest answers to the questions, any determined and reasonably intelligent hacker could find the answers to every question I actually know the answer to within a minute or two online, tops.

So what if I opt to use 1-Password or another password management tool to generate secure and random “password” style answers to these questions?  These would not be readily memorisable and so I’d have to save them in the tool.  But according to their little threat, I can’t do that!  That’s called recording the answers to the questions and I could be liable if an unauthorised transfer occurs.

The real problem with questions like this is that too much of this information is recorded online, already.  It adds a layer of complexity to the security model, without actually improving security much, if at all.

Then another question arises.  If an acquaintance does happen to ask me where I got married, am I now liable to ANZ if I tell them?  It sounds ridiculous but lawyers be lawyers.  Mind you, given that I have no way of not agreeing to the terms, perhaps it’s unenforceable.  The whole thing is really badly thought out.

Update 9:46am: “Blizzard and insecurity questions: My father’s middle name is vR2Ut1VNj” is a really good read for more detail!

Strava have decided to abandon their developer community

I received an email from Strava today which was very disappointing.

Hello Marc,

As of July 1, 2013, V1/V2 API endpoints have been retired. Libraries, sites, and applications using these endpoints will cease to work.

In previous blog updates, we’ve discussed status and access to V3 of our API.  As mentioned then, we had to make difficult decisions this year about where to invest time and resources – feature development or a full-fledged API program.  We have chosen to focus on feature development at this time and so access to V3 of our API is extremely limited.

Any developer who has been granted access to V3 of the API has been contacted. We will revisit our API program and applications from time to time, but for the time being, we have no plans to grant further access in 2013.

If you have questions or comments, please send an email to [email protected]. Given our limited resources, you should not expect an immediate response.

Thanks for your understanding,

Your Friends at Strava

The highlights (or lowlights) from this email:

  • Strava have no plans to allow anyone else to access their new API for at least six months
  • It seems Strava aren’t interested in dialogue with their community about this decision.

I’ve been using the Strava V1 and V2 APIs now for several years, and have written quite a few blog posts about how to use it.  The V1 and V2 APIs were always fairly experimental, but that was okay.  We all understood that and it was part of the fun of working with Strava.

Discontinuing the V1 and V2 APIs was in the pipeline, and again we Strava API users could understand limiting the V3 API beta program to a small number of developers. However, today’s abrupt announcement that not only are the V1 and V2 APIs no longer available, but that Strava won’t be making the V3 API available to anyone else for an Internet Eternity (that is, at least six months) either, was completely unexpected.

The only Windows Phone app that integrates with Strava no longer works.  My apps no longer work.  And there appears to be no future for any of these apps.

Now, I’ve been a Strava Ambassador since before they started their official Ambassador program, and have had a lot of fun promoting Strava.  I’ve developed popular tools that integrate with the Strava API.  It is a very big disappointment to me that they’ve decided to throw away all that good-will with their developer community.  This is not the way to treat the most dedicated and enthusiastic portion of your user base, Strava!

Cursing over cursor keys

A follow-up post, 29 Sep 2014: Cursor Keys Have Not Improved.

As you may know, I do a fair bit of work with keyboards.  This means I have collected quite a number of hardware keyboards over the last few years.  I do a lot of testing with various keyboards and use a number of computers with different hardware keyboards for both testing and development purposes.

One of the more irritating issues I experience moving between these keyboards is in the way manufacturers keep rearranging the cursor key block.  As a touch typist, I am continually pressing the wrong cursor key, or Delete instead of Home, and so on.  And that’s just the desktop keyboards.  When it comes to notebooks I have lost all hope.

The ISO/IEC 9995-5 standard does require keyboard manufacturers to keep those cursor keys in one of two shapes:

That’s it.  Not very prescriptive.  One almost wonders why they bothered!  So let’s have a look at the problem with the keyboards in my office.  Starting with desktop keyboards, I found 12 keyboards on and around my desk (and one in a second office), with no less than 6 different arrangements of this cursed cursor zone. I have included 8 of the keyboards in the image below:

Eight keyboards.  Can you spot the ones with identical cursor layouts?

I drew up the six different layouts I found, grouping the keys by colour (sorry if you are colour blind).  The top two layouts seem to be where most keyboards are going today.  The middle two are more traditional, and the final two layouts were invented, I am convinced, purely to cause me grief.

I aligned the layout diagrams with the right hand side of the main alphanumeric keyboard block: this reflects where my hands sit normally.  I hardly need to explain the difficulties with the constant rearrangement of the Home, End, PgUp, PgDn, Ins and Del keys, but I will note that the inconsistency of the cursor block position is almost as much of a problem: I get used to the position of the down arrow key on one keyboard, switch to another keyboard and find myself hitting left arrow instead from muscle memory.

Notebook manufacturers have somewhat more of an excuse: they are trying to fit a bunch of keys into a much smaller space.  Even so, the variety is pretty amazing.  I didn’t even include the couple of dead laptops that have not yet made their way into the rubbish.  Apple and Acer take top marks for consistency.  Toshiba, not so much…  I threw in a Microsoft Surface RT tablet keyboard for good measure!  Some of the keyboards use the Fn key to access PgUp, PgDn, Home or End (or even Del).  These are of course very inconsistently mapped, if you can even find them…

Surface RT, MacBook, Acer netbook, Acer ultrabook,
Toshiba notebook, MacBook, Toshiba notebook

Even soft keyboards suffer.  The following 5 images are all from the same company, Microsoft.  Of note is that the position of the Fn key changes between Windows 7 and Windows 8. In 3 of the images, the position of the up arrow key is just slightly misaligned with the down arrow; this may not be a problem in use but it is visually irritating!

Windows 8 Accessibility, Windows 7 Accessibility,
Windows XP, Win 8 Touch Optimised, Win 8 Touch Full.