Yesterday was a fantastic day. I spent the afternoon helping out at the Wintellect booth. There is a rumor I may have been seen challenging attendees to a game of 9-ball in the game room once or twice. I also attended two great sessions, one about dependency injection and the other a panel discussion.

The sessions from day one are now posted online.

The Future of Dependency Injection

This talk by Vojta Jina discussed dependency injection within Angular and what the future will look like. First, he talked a bit about why DI is so important. I think this is something many server-side OO developers take for granted, but the nature of JavaScript makes it a little less obvious on the client. He demonstrated the difference between creating instances from within a module versus taking them as constructor parameters, but pointed out that the latter leads to a monolithic and complex “main method” that has to wire everything up. The logical solution is an injector that can keep track of components and provide them as needed when other components require them.
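To make the idea concrete, here is a minimal sketch of such an injector in plain JavaScript: components register factories by name, and the injector resolves the dependency graph on demand. The names and API here are illustrative, not Angular's actual injector.

```javascript
// A minimal DI injector sketch: register factories with their dependency
// names, then resolve instances (and their dependencies) lazily on demand.
function Injector() {
  this.factories = {}; // name -> { deps, factory }
  this.instances = {}; // resolved singletons
}

Injector.prototype.register = function (name, deps, factory) {
  this.factories[name] = { deps: deps, factory: factory };
};

Injector.prototype.get = function (name) {
  if (this.instances[name]) return this.instances[name];
  var entry = this.factories[name];
  var args = entry.deps.map(this.get, this); // resolve dependencies first
  var instance = entry.factory.apply(null, args);
  this.instances[name] = instance; // cache so everyone shares one instance
  return instance;
};

// Usage: the injector wires up the graph instead of a monolithic main method.
var injector = new Injector();
injector.register('config', [], function () { return { url: '/api' }; });
injector.register('http', ['config'], function (config) {
  return { baseUrl: config.url };
});
```

The key point is that no component ever calls `new` on its dependencies; the injector owns construction order and sharing.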

The existing Angular 1.x versions use a module-based approach. Each module is essentially a separate DI container, and modules can depend on other modules so the injector knows how to reach across containers to wire up the dependency graph. This approach works but is also a bit complex to understand and master. The Angular team decided they wanted to focus on simplifying this as much as possible for the 2.x versions, so a new approach is being taken.

The new approach is quite elegant. The concept of containers goes away. This is possible because of the way you reference dependencies. There are two parts. The first is a special statement to import dependencies, that looks something like this:

import {MyComponent} from './myComponent';

This will look in the myComponent.js file for the definition (either via constructor function or factory) of MyComponent. Anything you wish to expose for dependencies gets exported in a format similar to TypeScript, like this:

export class MyComponent

Of course, this new format is not compatible with older JavaScript implementations. For this, Google provides a compiler that turns this “future JavaScript” into JavaScript that runs today, or you can use the classic define convention:

define(['./myComponent'], function (MyComponent) { /* ... */ });

Then, to declare which dependencies a class needs, you annotate the constructor parameters with their types:

class MyDependentComponent {
  constructor(myComponent: MyComponent) { /* ... */ }
}

or the classic:

MyDependentComponent.annotations = [new Inject(MyComponent)];

This is very exciting. It allows for asynchronous dependency resolution and makes asynchronous module loading a first-class citizen of the framework. As a veteran of many server-side DI frameworks over the years, I’m a huge fan of the changes they are making and look forward to the new version.

Team Panel

The afternoon concluded with a team panel. This included almost a dozen team members answering questions that had been voted on via a website. The majority of questions had been previously answered in other talks. A few stood out, including a concern over Dart support. There had been rumors that the adoption of Dart would de-prioritize JavaScript, but the team assured everyone that JavaScript will continue to be first class.

One interesting question was about memory leaks. The team was adamant that Angular is so thoroughly tested that it simply doesn’t leak, but that there are side effects from certain practices that can cause leaks. For example, compiling a DOM element and not attaching it to the DOM will cause a leak. We’ve encountered issues with third-party controls that create or track DOM elements as well, but in each case we’ve simply had to track down the integration point and either call $destroy on the related $scope or call a method on the third-party control to detach the DOM. Angular automatically fires an event when a compiled DOM node is detached that will destroy the associated $scope.
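The cleanup contract behind that pattern can be sketched in plain JavaScript: whatever attaches to a node must register a teardown that the host fires on detach (in Angular 1.x that teardown corresponds to handling the scope's $destroy event). The node and widget classes here are illustrative stand-ins, not Angular APIs.

```javascript
// Illustrative sketch of the cleanup contract: a widget that attaches to a
// node registers a teardown callback, and the host fires it on detach so
// nothing keeps a reference to the dead DOM (the source of the leaks above).
function Widget(node) {
  this.node = node;
  this.destroyed = false;
  var self = this;
  node.onDetach(function () {
    self.node = null;      // drop the DOM reference so it can be collected
    self.destroyed = true;
  });
}

// A minimal stand-in for a DOM node with a detach lifecycle.
function FakeNode() {
  this.detachHandlers = [];
}
FakeNode.prototype.onDetach = function (fn) { this.detachHandlers.push(fn); };
FakeNode.prototype.detach = function () {
  this.detachHandlers.forEach(function (fn) { fn(); });
};

var node = new FakeNode();
var widget = new Widget(node);
node.detach(); // without this call, widget.node would keep the node alive
```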


Igor Minar started day two by talking about the Angular community. He explained the vision of changing the way developers feel about the web, and how users experience the web. He shared stories about the early days of Angular when there were few defects and fewer docs. He covered many of the ups and downs, like the 1.2 release breaking a lot of apps until they reverted a “minor” change, the reaction when they pulled support for IE8, and the announcements around Angular for Dart. He also shared how the team uses the leading edge version of Angular internally … literally within hours of a release, hundreds of apps within Google are updated with that version. This has taught them to focus on the stability of changes and “not to ship bugs.” The pre-release builds are pushed to Bower as well.

Pro tip: when building features like progress bars, date controls, etc., instead of building them as an Angular directive, consider building them as a traditional reusable JavaScript entity and then use a directive as the glue into Angular. This makes for more reusable code.
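A quick sketch of that split, using a hypothetical progress bar: the widget is framework-agnostic and testable on its own, and the directive (shown only as a comment, since it needs Angular to run) does nothing but forward scope changes to it.

```javascript
// Framework-agnostic widget: no Angular dependency, fully testable alone.
function ProgressBar(element) {
  this.element = element;
  this.value = 0;
}
ProgressBar.prototype.setValue = function (value) {
  // Clamp to the 0-100 range so callers can't over/underflow the bar.
  this.value = Math.max(0, Math.min(100, value));
  this.element.width = this.value + '%';
};

// The thin Angular 1.x glue would look roughly like this (illustrative):
//   app.directive('progressBar', function () {
//     return {
//       scope: { value: '=' },
//       link: function (scope, element) {
//         var bar = new ProgressBar(element[0]);
//         scope.$watch('value', function (v) { bar.setValue(v); });
//       }
//     };
//   });

var bar = new ProgressBar({ width: '0%' });
bar.setValue(150); // clamped to 100
```

The same `ProgressBar` could be glued into any other framework with a similarly thin adapter.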

At the end of his talk he shared that the team is creating an Angular working group to help steer the direction of future releases. Do you want to contribute to Angular and help it grow? The call to action is available right here.


The next talk covered TypeScript. This is a language Wintellect has had great success using. It is not a separate language to learn, but is a superset of existing JavaScript that tries to conform with upcoming ECMAScript 6 standards. Sean Hess walked through some examples and features of the language. We have experience with several projects that use TypeScript, including projects that switched midstream so we have a great before/after snapshot of the benefits it provided (and surprisingly, even the “JavaScript purists” on the team were won over). Contact me or swing by the Wintellect booth if you want to learn more.

Realtime Apps with Firebase

Anant Narayanan covered Firebase. He started with a discussion of the evolution of web apps and the notion that we are finally at a place with tools like Angular to build real web apps and not just try to paste the app paradigm onto the web page paradigm. Firebase is an API (on top of an application platform) to store and synchronize data in real time. It provides a simple login service that supports several major login providers for authentication and provides a declarative security system for data. Firebase provides “3-way data-binding” by synchronizing data with remote databases for you.

He demonstrated the functionality by building a live “video commenting” app. The concept is that while you are on the page, the comments update real-time as other users are adding comments. It was very impressive to see how easily he was able to drop in the $firebase dependency and create a real-time app on the fly. Firebase provides a hosting service for SPA apps that holds all static assets, JavaScript, etc. Learn more on their site. You can also interact with a sample he built and deployed live.
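The real-time flow of that demo can be modeled in a few lines of plain JavaScript: one shared store, many subscribed clients, and every write pushed to all of them. This is a toy model of the idea behind Firebase's synchronization, not the Firebase API.

```javascript
// Toy model of real-time sync: any write to the shared store is pushed to
// every subscribed client, the way comments appeared live in the demo.
function SharedStore() {
  this.comments = [];
  this.subscribers = [];
}
SharedStore.prototype.subscribe = function (fn) {
  this.subscribers.push(fn);
  fn(this.comments); // push current state to the new subscriber immediately
};
SharedStore.prototype.addComment = function (text) {
  this.comments.push(text);
  var comments = this.comments;
  this.subscribers.forEach(function (fn) { fn(comments); });
};

var store = new SharedStore();
var clientA = [], clientB = [];
store.subscribe(function (c) { clientA = c.slice(); });
store.subscribe(function (c) { clientB = c.slice(); });
store.addComment('Great video!'); // both clients see the update
```

Firebase's value is doing exactly this across the network, with persistence, auth, and conflict handling included.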

Scaling Directives

Burke Holland talked about scaling directives. He first covered the history of KendoUI and how once they released it, they went through a phase of various requests that centered around Knockout, leading them to build a Knockout integration. Suddenly in 2013 these requests started shifting to Angular. Burke was given the task of integrating KendoUI to Angular. It was interesting to learn about his approach to this integration and what he learned along the way.

Their approach is interesting. They have a “master directive” that spins through all of the available widgets, then creates a new directive per widget. It binds to the various attributes, and most importantly wires changes to the $scope through the “Change” and “Value” events that exist on every widget. For the more complex widgets they create special one-off directives to handle the complexity (such as their Scheduler). 
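The “master directive” idea can be sketched like this: loop over the available widget names and generate one directive definition per widget. The names and structure below are illustrative, not KendoUI's actual integration code.

```javascript
// Toy sketch of a "master directive": one loop generates a directive
// definition per available widget instead of hand-writing each one.
var widgetNames = ['DatePicker', 'Slider', 'Grid'];

function makeDirective(widgetName) {
  return {
    name: 'kendo' + widgetName, // e.g. kendoDatePicker
    // In the real integration this link function would instantiate the
    // widget and wire its Change/Value events back to the $scope.
    link: function (scopeValue) {
      return widgetName + ':' + scopeValue;
    }
  };
}

var directives = {};
widgetNames.forEach(function (name) {
  var d = makeDirective(name);
  directives[d.name] = d;
});
```

The per-widget closure is what lets one generic loop still produce directives that each know their own widget.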


I was not able to sit in on the Zone talk as I was invited to a panel interview with Ward Bell, John Papa, and Dan Wahlin to discuss Angular as it relates to the .NET community. The ZoneJS library is “thread-local storage for JavaScript VMs” that provides “execution contexts” to persist across asynchronous calls. I probably am not doing it justice so be sure to check out the video that is already posted online.

Using Angular with Require

Thomas Burleson took the stage to discuss using Angular with Require. He summarized at a high level that Angular injects instances and Require injects classes.

There are 4 general dependency types: load, construction, runtime, and module. Require provides a package dependency manager, a class injector, a JavaScript file loader, and a process to concatenate JavaScript.

Require has three methods: define (define your dependencies and register a factory) using the AMD (Asynchronous Module Definitions) standard, require (callback function invoked when dependencies are resolved), and config to configure source paths and aliases. He did a great job of showing not only how Angular and Require solve different problems, but how they can work together right now.
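The define/require contract he described can be illustrated with a deliberately tiny toy implementation (synchronous, no file loading, no config; real RequireJS loads scripts asynchronously):

```javascript
// Toy implementation of the AMD define/require contract, vastly simplified.
var registry = {};  // module id -> { deps, factory }
var resolved = {};  // module id -> exported value

function define(id, deps, factory) {
  registry[id] = { deps: deps, factory: factory };
}

function resolve(id) {
  if (!(id in resolved)) {
    var mod = registry[id];
    // Resolve dependencies first, then invoke the factory with them.
    resolved[id] = mod.factory.apply(null, mod.deps.map(resolve));
  }
  return resolved[id];
}

function require(deps, callback) {
  // Invoke the callback once all dependencies are resolved.
  callback.apply(null, deps.map(resolve));
}

define('greeter', [], function () {
  return { greet: function (name) { return 'Hello, ' + name; } };
});

var result;
require(['greeter'], function (greeter) {
  result = greeter.greet('ng-conf');
});
```

This also shows the distinction Thomas drew: what `define` registers here is closer to a class/factory, while Angular's injector hands out resolved instances.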

The talk is posted online here.

He did not have time for the Angular decorator talk, but posted his slides here.

End to End Testing Using Protractor

Julie Ralph, a software engineer in test at Google, discussed end-to-end (E2E) testing with Protractor, which replaces the Angular Scenario Runner. Testing is about confidence in code. Julie says if you give her a suite of unit tests, she will not have confidence your system is going to run; they are a good foundation, but you need a good set of E2E tests as well. Protractor uses WebDriver (also known as Selenium). The framework is specific to Angular, because it can use its knowledge of Angular to make tests easier to write.

Common complaints about E2E testing: tests are hard to write, they are tough to keep up to date, they are slow and flaky, and it’s tough to debug them (not to mention authentication and cleaning up after tests). Protractor helps by providing a manager to keep WebDriver running and up to date. Protractor uses Jasmine to scaffold tests and a configuration similar to Karma’s. It also provides some special helpers for interacting with the browser and DOM.

A nice “wait for Angular” function lets injection and asynchronous code complete before your test continues. In the future, the framework will formalize its contract with Angular and migrate internal tests away from the Scenario Runner.


Jason Aden then discussed ng-model. It is core to great UI components and is responsible for two-way data-binding. It can be used to port jQuery components and enables declarative components. To use it, you add ng-model in your markup and require it as part of the directive definition.

It interfaces with ng-form to transport input into a model representation and takes care of alerting listeners. It also watches for model changes and converts the model representation back into a view representation. In short, it is the key “glue” for data-binding.
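The shape of that glue can be simulated in plain JavaScript: view changes flow in through one method, and model changes flow back out through a render callback. This mimics the contract of Angular 1.x's NgModelController (`$setViewValue`/`$render`) as a teaching sketch; the `setModelValue` helper is made up for the demo, and the real controller also runs parsers, formatters, and validators along the way.

```javascript
// Stripped-down simulation of the ngModel glue: view -> model via
// $setViewValue, model -> view via $render. Teaching sketch only.
function ModelController(render) {
  this.$viewValue = undefined;
  this.$modelValue = undefined;
  this.$render = render;   // the directive supplies how to update its view
  this.listeners = [];
}
ModelController.prototype.$setViewValue = function (value) {
  // Called by the directive when the user changes the input (view -> model).
  this.$viewValue = value;
  this.$modelValue = value; // real ngModel runs parsers/validators here
  this.listeners.forEach(function (fn) { fn(value); });
};
ModelController.prototype.setModelValue = function (value) {
  // Hypothetical helper: the model changed in code (model -> view).
  this.$modelValue = value;
  this.$viewValue = value;  // real ngModel runs formatters here
  this.$render();
};

var viewText = '';
var ctrl = new ModelController(function () { viewText = ctrl.$viewValue; });
ctrl.$setViewValue('typed by user'); // view -> model
ctrl.setModelValue('set in code');   // model -> view, re-renders
```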


There is more to come, but I probably won’t have time to wrap up before I post, so I figured I would close out here. This conference has made it clear that Angular is a force to be reckoned with and is not only gaining attention, but is being used in real-world applications with success. The community is excited and supportive and the releases are coming quickly. The conference was incredibly well-organized and an absolute pleasure to attend. I’m looking forward to watching this community grow over the next year and to future events.

Thanks to all of the organizers and sponsors, and of course to the community for helping make this a great set of tools (I know people feel the favor is returned because of what it saves them when building JavaScript apps). Don’t forget the videos are all available online.