String concatenation vs string buffers in JavaScript

I was reading this book – Professional JavaScript for Web Developers – where the author mentions that string concatenation is an expensive operation compared to storing the strings in an array and then using the join method to create the final string. Curious, I ran a couple of tests to see how much time it would save, and this is what I got –
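The original test code isn't reproduced here, but a minimal sketch of such a benchmark might look like the following (the chunk text and the 10,000 iteration count are my own choices, not the book's):

```javascript
// Time repeated string concatenation.
function concatTest(n) {
  var start = Date.now();
  var str = "";
  for (var i = 0; i < n; i++) {
    str += "some fifty-character-ish chunk of text goes here. ";
  }
  return Date.now() - start;
}

// Time the array-buffer approach: collect parts, join once at the end.
function joinTest(n) {
  var start = Date.now();
  var parts = [];
  for (var i = 0; i < n; i++) {
    parts.push("some fifty-character-ish chunk of text goes here. ");
  }
  var str = parts.join("");
  return Date.now() - start;
}

console.log("concat:", concatTest(10000), "ms");
console.log("join:  ", joinTest(10000), "ms");
```

On a modern engine the two numbers tend to be close, which is exactly the question being asked below.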

Somehow, Firefox usually produces similar times for both approaches, but in IE, string concatenation is much, much faster. So, can this idea now be considered outdated (browsers have probably improved since)?

Even if it were true and join() were faster than concatenation, it wouldn't matter. We are talking about tiny amounts of milliseconds here, which are completely negligible.

I would always prefer well-structured and easy-to-read code over a microscopic performance boost, and I think that concatenation looks better and is easier to read.

Just my two cents.

On my system (IE 8 on Windows 7), the times for StringBuilder in that test vary from about 70% to 100% of the time for normal appending — that is, it is not stable — although the mean is about 95% of that of normal appending.

While it’s easy now to just say “premature optimization” (and I suspect that in almost every case it is), there are things worth considering:

The problem with repeated string concatenation comes from repeated memory allocations and repeated data copies (advanced string data types can reduce or eliminate much of this, but let’s keep assuming a simplistic model for now). From this, let’s raise some questions:

  • What memory allocator is used? In the naive case, each str += x requires str.length + x.length new memory to be allocated. The standard C malloc, for instance, is a rather poor memory allocator. JS implementations have undergone changes over the years including, among other things, better memory subsystems. Of course these changes don’t stop there and touch practically all aspects of modern JS code. The fact that ancient implementations may have been incredibly slow at certain tasks does not necessarily imply that the same issues still exist, or to the same extent.

  • As above, the implementation of Array.join is very important. If it does NOT pre-allocate memory for the final string before building it, then it only saves on data-copy costs — and how many GB/s is main memory these days? 10,000 × 50 is hardly pushing a limit. A smart Array.join operation with a POOR MEMORY ALLOCATOR would be expected to perform a good bit better, simply because the number of re-allocations is reduced. This difference would be expected to shrink as allocation cost decreases.

  • The micro-benchmark code may be flawed depending on whether the JS engine creates a new object per UNIQUE string literal or not. (This would bias it towards the Array.join method, but it needs to be considered in general.)

  • The benchmark is indeed a micro-benchmark 🙂
    Increasing the growing size should have an impact on performance based on any or all of the above conditions (and then some). It is generally easy to show extreme cases favoring one method or another — the expected use case is generally of more importance.
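To make the allocation argument above concrete, here is a rough cost model (my own illustration, assuming the simplistic "copy everything on every append" allocator described, not measured from any real engine):

```javascript
// Naive model: each str += x allocates fresh memory and copies the
// entire existing string plus the new chunk, so total copies grow
// quadratically in the number of appends.
function naiveCopyCost(chunks, chunkLen) {
  var copied = 0, len = 0;
  for (var i = 0; i < chunks; i++) {
    len += chunkLen;
    copied += len; // old contents + new chunk copied into fresh memory
  }
  return copied;
}

// Join model with pre-allocation: each chunk is copied once into the
// array slot and once into the final pre-allocated result string.
function joinCopyCost(chunks, chunkLen) {
  return 2 * chunks * chunkLen;
}

console.log(naiveCopyCost(10000, 50)); // 2,500,250,000 characters copied
console.log(joinCopyCost(10000, 50));  // 1,000,000 characters copied
```

Under this model the 10,000 × 50 case copies roughly 2.5 billion characters naively versus a million with join — yet, as the answer notes, at multi-GB/s memory bandwidth even the naive figure is not necessarily a bottleneck.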

Although, quite honestly, for any sane form of string building, I would just use normal string concatenation until such a time as it was determined to be a bottleneck, if ever.

I would re-read the above statement from the book and see if there are perhaps other implicit considerations the author meant to invoke, such as “for very large strings”, “for insane amounts of string operations”, or “in JScript/IE6”, etc… If not, then such a statement is about as useful as “insertion sort is O(n*n)” [the realized costs depend upon the state of the data and the size of n, of course].

And the disclaimer: the speed of the code depends upon the browser, operating system, the underlying hardware, moon gravitational forces and, of course, how your computer feels about you.

In principle the book is right. Joining an array should be much faster than repeatedly concatenating to the same string. As a simple algorithm on immutable strings it is demonstrably faster.

The trick is: JavaScript authors, being largely non-expert dabblers, have written a load of code out in the wild that uses concatenation, and relatively little ‘good’ code that uses methods like array-join. The upshot is that browser authors can get a better improvement in speed on the average web page by catering for and optimising the ‘bad’, more common option of concatenation.

So that’s what happened. The newer browser versions have some fairly hairy optimisation stuff that detects when you’re doing a load of concatenations and hacks it about so that internally it works more like an array-join, at more or less the same speed.

I actually have some experience in this area, since my primary product is a big, IE-only webapp that does a LOT of string concatenation in order to build up XML docs to send to the server. For example, in the worst case a page might have 5-10 iframes, each with a few hundred text boxes that each have 5-10 expando properties.

For something like our save function, we iterate through every tab (iframe) and every entity on that tab, pull out all the expando properties on each entity and stuff them all into a giant XML document.

When profiling and improving our save method, we found that using string concatenation in IE7 was a lot slower than using the array-of-strings method. Another point of interest was that accessing DOM object expando properties is really slow, so we put them all into JavaScript arrays instead. Finally, generating the JavaScript arrays themselves is actually best done on the server; you then write them onto the page as a literal to be executed when the page loads.
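A hedged sketch of the array-of-strings XML building described above — the entity shape, property names, and `buildXml` function are invented for illustration, not taken from the actual webapp:

```javascript
// Collect XML fragments in an array and join once at the end,
// instead of concatenating onto a growing string.
function buildXml(entities) {
  var parts = [];
  parts.push("<entities>");
  for (var i = 0; i < entities.length; i++) {
    var e = entities[i];
    // push accepts multiple arguments, so several fragments
    // can be appended in one call.
    parts.push('<entity id="', e.id, '">', e.value, "</entity>");
  }
  parts.push("</entities>");
  return parts.join("");
}
```

In a save routine like the one described, each tab and entity would contribute its fragments to the same array before a single final join.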

As we know, not all browsers are created equal. Because of this, performance in different areas is guaranteed to differ from browser to browser.

That aside, I noticed the same results as you did; however, after removing the unnecessary buffer class and just using an array directly with a 10,000-character string, the results were even tighter/more consistent (in FF 3.0.12):
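The revised test itself isn't shown, but a rough sketch of the stripped-down version described — the array used directly with no wrapper class, and a 10,000-character string (the iteration count here is my own assumption) — could look like:

```javascript
// Build a 10,000-character string: joining 10,001 empty array slots
// with "x" yields 10,000 x's.
var chunk = new Array(10001).join("x");

var parts = [];
for (var i = 0; i < 100; i++) {
  parts[parts.length] = chunk; // direct index assignment, no buffer class
}
var result = parts.join("");

console.log(result.length); // 1000000
```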

Unless you’re doing a great deal of string concatenation, I would say this type of optimization is a micro-optimization. Your time might be better spent limiting DOM reflows and queries (generally the use of document.getElementById/getElementsByTagName), implementing caching of AJAX results (where applicable), and exploiting event bubbling (there’s a link somewhere, I just can’t find it now).

Okay, regarding this here is a related module:

This is an effective means of creating String buffers, by using

var buffer = new String.Buffer();
buffer.append("foo", "bar");

This is the fastest implementation of string buffers I know of. First of all, if you are implementing string buffers, don’t use push: it is a built-in method and it is slow — for one thing, push iterates over the entire arguments array rather than just adding one element.
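The module itself isn't reproduced in this post, so here is an assumed minimal sketch of a `String.Buffer` matching the usage shown above, avoiding `Array.prototype.push` in favor of direct index assignment as the answer recommends:

```javascript
// Hypothetical String.Buffer implementation (not the actual module).
String.Buffer = function () {
  this._parts = [];
};

String.Buffer.prototype.append = function () {
  // Assign each argument directly by index instead of calling push,
  // per the advice above about push's per-call overhead.
  for (var i = 0, n = arguments.length; i < n; i++) {
    this._parts[this._parts.length] = arguments[i];
  }
  return this;
};

String.Buffer.prototype.toString = function () {
  return this._parts.join("");
};

var buffer = new String.Buffer();
buffer.append("foo", "bar");
console.log(buffer.toString()); // "foobar"
```

Whether direct assignment actually beats push depends on the engine; on modern JITs the difference is likely negligible.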

It all really depends upon the implementation of the join method: some implementations of join are really slow, and some are reasonably fast.

The answers/resolutions are collected from Stack Overflow and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.