
Hacker Public Radio

Your ideas, projects, opinions - podcasted.

New episodes Monday through Friday.


Correspondent

Rho`n

Host ID: 293

email: roan.horning.nospam@nospam.gmail.com
episodes: 2

hpr2102 :: AngularJS's ng-repeat, and the browser that shall not be named

Released on 2016-08-23 under a CC-BY-SA license.

Introduction

At my work, we are in the process of revamping our internal call logging system, moving from .NET with Microsoft's ASPX pages for both the client side and the back end to an HTML5-based Single Page Application (SPA) that uses AngularJS for the client-side interface and a .NET WebAPI service for back-end processing. The main page of both versions contains the current day's calls laid out in a table with 9 columns. Users can switch to a specific day's calls by selecting a date via a calendar widget, or move one day at a time via previous and next day buttons. By the end of a typical day, the page contains between 40 and 50 calls.

During recent testing of the SPA client on the proprietary browser we all love to hate (or at least have a love/hate relationship with, if you have to support it), I noticed that rendering a whole day's worth of calls would take seconds, freezing the UI completely. This made changing dates painful. Since we reload the data every time you re-enter that page (a manual way to poll for new data until we implement either timer-based polling or a push service over WebSockets), the page was almost unusable. It rendered fine in both Mozilla's and WebKit-based JavaScript JIT engines, but Microsoft's engine would choke on it.

After a bit of searching on "AngularJS slow rendering" and "AngularJS optimize", I found many references to Angular's ng-repeat directive and its behaviour when rendering long lists of data (see the references below for the main pages I read). I tried a couple of the suggested optimizations. First, I used ng-repeat's "track by" feature to use the call's id as the internal id of the row, so ng-repeat didn't have to generate a hashed id for each row. I then used Angular's one-time binding feature to reduce the number of watches being created (cutting the test day's watch count from 1120 to 596), but even these two optimizations combined didn't render the page in an acceptable amount of time. The next optimization I tried was ng-repeat with the limitTo filter, which caps the number of items ng-repeat renders from the list it is looping through, and is particularly useful combined with paging of the data. I set the limitTo value to different amounts to see how it affected rendering time, and found that rendering 5 rows was fast and consistent for every day's worth of data I viewed. From my reading, I knew that if I increased the limitTo amount while keeping the array of items the same, ng-repeat would only render the not-yet-rendered items, and not redo the whole limited list.
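
As a sketch, those two template-level optimizations combined with limitTo might look like the following (the field names c.id, c.caller, and c.subject are made up for illustration; the real table has 9 columns):

<!-- "track by c.id" lets ng-repeat reuse rows keyed by the call's id
     instead of generating a hashed id per row; the "::" one-time
     bindings drop their watches after the first render. -->
<tr ng-repeat="c in results | limitTo:displayRenderSize track by c.id">
    <td>{{::c.caller}}</td>
    <td>{{::c.subject}}</td>
</tr>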

The Code

<tr ng-repeat="c in results | limitTo:displayRenderSize">

Inside your directive's link function, set a $watch on the list of items to be rendered by ng-repeat. In this example the list is stored in the scope property results.

// Directive sketch: the names "app", "callList", and call-list.html are
// hypothetical. $timeout must be injected for the chunked rendering below.
angular.module('app').directive('callList', function ($timeout) {
    return {
        scope: {
            results: "="
        },
        // Hypothetical template holding the table whose row is the
        // ng-repeat shown above.
        templateUrl: 'call-list.html',
        link: function (scope, element, attrs) {
            // Render the table in chunks of 5 rows.
            scope.renderSizeIncrement = 5;
            scope.displayRenderSize = scope.renderSizeIncrement;

            // Grow the limitTo bound one chunk at a time, yielding to the
            // renderer between chunks via $timeout.
            scope.updateDisplayRenderSize = function () {
                if (scope.displayRenderSize < scope.results.length) {
                    scope.displayRenderSize += scope.renderSizeIncrement;
                    $timeout(scope.updateDisplayRenderSize, 0);
                }
            };

            // Whenever a new day's results arrive, start over from the
            // first chunk.
            scope.$watch('results', function () {
                if (scope.results) {
                    scope.displayRenderSize = scope.renderSizeIncrement;
                    scope.updateDisplayRenderSize();
                }
            });
        }
    };
});
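
In the page's markup, using the directive might then look like this (again, the element name follows from the hypothetical directive name above):

<call-list results="calls"></call-list>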

Any time the results are updated, the displayRenderSize variable is reset to render the default number of items, and the updateDisplayRenderSize function is called. This function calls itself repeatedly via Angular's $timeout service ($timeout is a wrapper around JavaScript's setTimeout function). Each call increments the displayRenderSize variable, which is watched by the limitTo filter of the main ng-repeat, so each increment causes ng-repeat to render the next set of items. This repeats until all the items in the list are rendered.

The magic happens because ng-repeat blocks any other JavaScript that does not affect Angular's digest cycle until it is finished rendering. By calling updateDisplayRenderSize through a timeout, the function doesn't get called again until after the next set of items has rendered. Making the $timeout delay 0 schedules the function to run as soon as possible after the ng-repeat digest cycle stops blocking. In this instance, the sum of the rendering times for the parts of the list is shorter than the rendering time for the whole list at once.

Conclusion

There are a couple of small glitches with this solution. Scrolling can be a bit jerky, as the chunk-sized renders cause a series of micro UI freezes instead of one long one. Also, if you don't have a fixed or 100%-wide table layout with fixed column sizes, the table will dance a little on the screen until the columns have been filled with their largest pieces of data, because the table layout is recalculated as more data fills it. That said, overall this solution works great: it cut the pause from several seconds to half a second or less, taking the page from unbearable to usable on Microsoft's latest browser offerings.

References

[1] AngularJS Performance Tuning for Long Lists; Small Improvements; blog; viewed: 2016-08-09

[2] Optimizing ng-repeat in AngularJS; Fundoo Solutions; blog; viewed: 2016-08-09

[3] AngularJS: My solution to the ng-repeat performance problem; Thierry Nicola; blog; published: 2013-07-24; viewed: 2016-08-09


hpr1688 :: Some useful tools when compiling software

Released on 2015-01-21 under a CC-BY-SA license.

Introduction

Hi, this is Rho`n, and welcome to my first submission to Hacker Public Radio. I have been working on an application in the Python programming language that uses the Enlightenment Foundation Libraries (EFL) for its GUI. After acquiring a new laptop and installing a fresh copy of Ubuntu on it, I decided to set up the build environment I need to work on my project. I build the EFL libraries, along with the Python-EFL wrapper libraries, from source. On the last couple of machines where I built this software, I used the standard configure, make, and make install procedure. This time around I decided to create a Debian package to install the libraries with. It had been a few years since I had created a .deb, so I googled for some tutorials and found mention of the checkinstall program. After reading a couple of blog posts about it, I decided to try it out. checkinstall is run in place of "make install"; it creates a .deb file and then installs the newly created package.
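
In other words, the usual three-step dance becomes something like this (a sketch; checkinstall asks its metadata questions before building):

./configure
make
sudo checkinstall    # in place of "sudo make install"; builds a .deb, then installs it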

cut and tr commands

To help speed up the configure process, I had previously created a file from my other builds: a grep of my shell history for all the various "apt-get install" commands for the libraries EFL needs to compile. Since my current operating system was a freshly installed distribution of Ubuntu, I needed to install the build-essential package first. After looking through my install file, I decided to create a single apt-get install line with all the packages listed, instead of running each install separately. I knew I could grep the file and pass the result to awk or sed, but my skill with either isn't that great. A little searching for other tools turned up the cut command and the tr command. cut lets you print part of a line: you can set a field delimiter with the -d option and list a range of fields to print with the -f option. tr can replace one character with another; I used it to replace the newline character printed by cut, generating a single line of package names which I redirected to a file. A quick edit of the file to add "sudo apt-get install" at the beginning, execute permissions added, and I had a nice, easy way to install all the needed libraries.
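
A hypothetical reconstruction of that pipeline, assuming the saved history lines look like "sudo apt-get install libfoo-dev libbar-dev" (the file names here are made up):

# fields 1-3 are "sudo apt-get install", so keep field 4 onward
grep 'apt-get install' install-history.txt \
    | cut -d ' ' -f 4- \
    | tr '\n' ' ' > install-deps.sh
# then prepend "sudo apt-get install " by hand and make it executable
chmod +x install-deps.sh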

apt-file and checkinstall

At least that was the idea. After installing the libraries and running configure, I still received errors that libraries were missing. The machines from which my list of libraries was generated had all been used for various development purposes, so some needed libraries were already installed on them, and their installation had passed out of my history. Besides echoing the file it can't find to standard output, configure also creates a log file, config.log. Between the two it is relatively easy to figure out which library is needed. Often the needed library's name appears in the .deb that has to be installed, and finding it is easy with an apt-cache search and a grep of the library name. The hardest ones to find were often the X11-based references. In this case, I needed the scrnsaver.h header file. After googling, I found a reference to the needed package (libxss-dev) on Stack Exchange. The answer also showed how to use the apt-file command to determine which package a file is included in. I wish I had run into this before; there were a few times when it took a number of internet searches to figure out which package I needed to install, and "apt-file find" would have saved time and frustration. It is a very handy tool for anyone developing on a Debian-based distribution.

As it turns out, that was the last dependency that needed to be resolved. After a successful configure and a successful compile using the make command, I was ready to try out checkinstall. Running sudo checkinstall brings up a series of questions about your package, helping you fill out the needed .deb metadata. I filled out my name and email, a name for the package, and a short description, and let everything else go to the suggested defaults. After that, hit enter and checkinstall will create a Debian package and install it for you. If you run "apt-cache search <name of package>" you will see it listed, and "apt-cache show <name of package>" will give you the details you created for the package. There are warnings on the Ubuntu wiki not to use this method for packages to be included in an archive or in a PPA, but it works great for a local install, and I would use it to install on machines on my local network.
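
For example, tracking down that missing X11 header with apt-file looks something like this (a sketch; the libxss-dev answer matches the Stack Exchange post mentioned above):

sudo apt-get install apt-file
sudo apt-file update              # download the file-to-package index
apt-file find scrnsaver.h         # lists libxss-dev among the matches
sudo apt-get install libxss-dev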

Conclusion

After a short side trip into development setup, I'm back to writing my application on my new laptop. While I am a big fan of binary packages, Debian being the first GNU/Linux distribution I ever used, sometimes you need to dive in and compile software from source. For me, running configure, make, make install has been the easiest way to do this, and these days it usually isn't too difficult to get even moderately complex applications and libraries to build. The most tedious part can be resolving all the dependencies; now, with apt-file in my tool belt, even that will be faster and easier. I will also be using checkinstall for future compiles, since I like being able to use package management tools to install and uninstall software.

I hope others find these tools useful. I have posted links in the show notes to the pages about cut, tr, apt-file, and checkinstall that led me to them. If you've made it this far, thanks for listening to my first episode for HPR. As Ken Fallon points out, it's not an HPR episode until you have uploaded it to the server. So let those episode ideas flow from your brain, into your favorite recording device, and up to the HPR server. Let's keep HPR active, vibrant, and a part of our lives for years to come.

