.NET Developer Days 2016


On 19–21 October I had the great pleasure of attending the .NET Developer Days conference in Warsaw. This was my second time at this conference, so I had an opportunity to compare, and I thought I would share some thoughts about it.

Following the website of the event:

.NET DeveloperDays is the biggest event in Central and Eastern Europe dedicated exclusively to application development on the .NET platform. It is designed for architects, developers, testers and project managers using .NET in their work and to those who want to improve their knowledge and skills in this field. The conference content is 100% English, making it easy for the international audience to attend.

What is certain is that it really is big and the speakers are not only locals. It is surely the top .NET conference in Poland. There are 3 parallel sessions during the two days of the conference (excluding keynotes and sponsor sessions, where there is only one at a time), so there are quite a lot of choices to make, but also full freedom about when and what you would like to listen to.

Conference last year (2015)

I could mention at least two main issues with the conference last year:

  • Registration problems, where one needed to wait an hour or so to register. This of course led to postponing the keynote, and still some people didn’t get in for it. At the conference closing the organizers said they were aware of the issue and that it would not happen again
  • An issue with the conference party, which took place in a small pub, so it was way too crowded and one again needed to queue to get a beer. This was also promised to be fixed this year

Conference this year (2016)

Of those two problems, the party problem has really been addressed and the improvement was clearly visible.

As for the registration topic, it did not go as well as the party. Personally, I didn’t experience this problem, because I was attending a workshop the day before the main conference, so I registered there without having to wait an hour in a registration queue.

There were only a few workshop sessions and around 20 people per session, so registration was quite fast. So for me, no problem with registration.

When I arrived at EXPO on Thursday for the first conference day, I already knew that the registration problem had not been solved for everybody. The people standing there may have had a somewhat worse experience than I did. If any of them attended the conference last year, then after hearing those promises about solving the registration problems, they might feel disappointed.

This year they said again that this problem will be addressed next year, but I think people may not believe them 🙂

Apart from this, I didn’t notice any failures in the organization of this event. So let’s skip to the content part.

Day 1: Workshops (Wednesday 19.10.2016)

This year, they introduced a pre-conference workshop day. One could choose one of three options:

  • Ted Neward: Busy .NET Developer’s Workshop on the CLR
  • Dino Esposito: Modern Web Development with the ASP.NET MVC Stack
  • Adam Granicz: Functional Programming on .NET with F# – Become a Programming Jedi Now!

I went for the first option, hoping for some deep dive into CLR details.

As much as I liked the workshop, I think it looked more like a few-hour-long presentation than a workshop. Ted said that supposedly the organizers didn’t forward some message about bringing laptops for the workshop, or something. In any case, I had mine, though I didn’t use it much.

As for the level of detail Ted drilled down into, it was… neither bad nor good. If the conference hadn’t been paid for by my company, I think I would have been a bit disappointed. Not that it was bad, but I had high expectations, so being a ‘good’ session was not enough for me.

Expectations aside, I spent a whole day with Ted listening to .NET and CLR basics and theory, with some brief history of Microsoft’s approach as a company. Ted Neward is really a great speaker, and even though I knew many of those facts, I wasn’t bored at all. I think people who are somewhere at the beginning of their .NET experience would benefit a lot from such workshops.

Day 2: Conference (Thursday 20.10.2016)

As I already mentioned, one could choose from 3 sessions running in parallel (except keynotes and sponsored sessions). I will describe the sessions I chose.

  1. Keynote by Jon Skeet – C#: Open, Evolving, Everywhere
    This keynote didn’t grab me as much as I thought it would. I like Jon for being such an active person on Stack Overflow. I have always admired people who spend their time helping others, and that’s precisely what Jon has been doing for a long time already. Being the top Stack Overflow user of course also means that this guy just knows a lot. The presentation was a bit too technical for a keynote, and a bit too general to be a good technical session. Compared to last year’s keynote by Scott Hanselman, it was just… so-so.

  2. Dino Esposito: Hands-on Experience: What it Means to Design a Domain Model
    Not much to say about this session. As for the content, it was fine. I felt a bit bored, however, so I didn’t go to any other of Dino’s sessions after that. I remember Dino as one of the first .NET community guys back in the days of my studies, years ago. I still admire him for his knowledge and practical approach to the technology, but he is not a presentation rock star for me. I’ve heard a lot of good things about the other sessions, so I wish I had made a different choice.
    Alternative sessions:
    Alex Mang: SQL Database From a Developer’s Perspective
    Valdis Iljuconoks: Tackling Complexities and Mediating Hexagonal Challenges

  3. Kuba Waliński: Angular: Back to the Future
    I didn’t like the other sessions, so I picked this Angular session even though I already knew a bit about Angular 1 and 2. This session described the process of transitioning from the first to the second version of the framework from Google. Technically speaking, the presentation was fine. Not so many new things for me, though. Kuba Waliński seems to be a real expert in his domain, which made this presentation good even given the fact that I knew most of the topics he covered.
    Alternative sessions:
    Tomasz Kopacz: Deep dive into Service Fabric after 2 years
    Sean Farmar: Why Service Oriented Architecture?

  4. Sponsor Session. Piotr Spikowski, Marcin Nowak: Deadly Sins of .NET Developers
    A VOLVO sponsor session… I was really surprised by this presentation. I was prepared for some very boring content filled with the sponsor’s products and ads. Instead, the session was well prepared, and the speakers were quite precise and consistent in their reasoning. I wasn’t bored, so a plus for them.

  5. Ted Neward: Busy .NET Developer’s Guide to Task Parallel Library (TPL)
    I like Ted and how he speaks to the crowd. But this was a bit too much for me. This was more a 100-level session than the 300 listed in the agenda. For anybody who knows anything about Tasks in .NET, this session was just a brief introduction to the topic. So it was nice to listen to the speaker, but looking at the content, it was rather average, or maybe even less.
    Alternative sessions:
    Adam Granicz: Functional, reactive web abstractions for .NET
    Don Wibier: Using secure WebAPI services from a JavaScript SPA

  6. Maciej Aniserowicz: CQRS for… everyone!
    It is just a pleasure to say that one of the best sessions that day was given by a Polish dev. No misunderstandings, no workarounds. Clear content on something that maybe was not the most difficult topic discussed that day, but delivered in a clean, technical way. I would lie if I said I was not waiting for this presentation, but even though I had high expectations, I was not disappointed. Maciek rocks.
    Alternative sessions:
    Dino Esposito: Migrating to ASP.NET Core: Challenges and Opportunities
    Valdis Iljuconoks: Dependency Injection: Revisit

  7. Q&A Session with the speakers
    I think that everybody who has heard about the Wroc# speakers panel was waiting for this part of the conference. For me, it was rather a poor comparison. Most of the time it was a conversation between Jon Skeet and Ted Neward, but honestly, it was not that good. A charismatic moderator could have brought this discussion to a really good level, but apparently there was no one to take that role.

  8. DevTalk Live! Maciej Aniserowicz interviews Jon Skeet and Dino Esposito on stage!
    What can I say. Nicely moderated, perfect speakers, a clear pleasure to listen to this part of the conference. It was maybe worth more than most of the sessions. Maciek rocks again!

Some selfies after the show with people I have always admired made my day for good. I won’t forget that part of the conference.

Day 3: Conference (Friday 21.10.2016)

  1. Jon Skeet: Abusing C#
    A strong beginning to the second day. This was THE presentation that could make people who don’t know Jon (are there any like that?) realize that this guy is simply awesome. Lots of C# tricks that would make you think this isn’t for real. Small, maybe even tiny, hacks that probably most C# developers don’t know and that change one’s point of view regarding C#/.NET perfection. Examples from his presentation can be found on his blog, like in this blog post.
    Alternative sessions:
    Bartłomiej Zass: The Cloud was made for APIs
    Don Wibier: Breaking Bad: you CAN make Fast Web Pages

  2. Tomas Herceg: Entity Framework Core
    This session again was somewhat forced on me by the parallel sessions’ topics. I once had problems porting my project to EF Core, so I decided to give it a shot. I think it won’t be a big surprise if I say that this presentation was not anything one would remember for years.
    A simple yet precise introduction to EF Core topics. For somebody totally new to the area and interested in starting some projects with it, it would be nice, I guess.
    Alternative sessions:
    Don Wibier: Enabling Plugins in your web application with MEF
    Dino Esposito: DDD: Where’s the Value and What’s in It for Me?

  3. Ted Neward: Busy Developer’s Guide to Garbage Collection
    Ted just did it again. The comment is exactly the same as for Ted’s previous presentations. Cool because he’s cool. Technically speaking, rather nothing new, and everybody knows it’s not because Ted doesn’t know more – I think he made a wrong assumption about the level of the attendees’ expectations. I wish I could go to some really deep-dive, hardcore session on .NET topics from Ted. I think it would be a top-rated session, knowing how he speaks and how much experience he has in the area.
    Alternative sessions:
    Michał Dudak: How To .NET All The Things
    Sean Farmar: Building (micro) services in .NET

  4. Sponsor Session. Raimondas Tijūnaitis: Complexity game – from big balls of mud to shiny bullets
    The kind of session one would expect from a sponsor. I left the room in the middle of the presentation and played some Xbox games that were available in the conference hall 🙂

  5. Jon Skeet: Immutability in C#
    A very good presentation, which was welcome as I had already gotten bored during the previous sessions. Jon showed different types of immutability and gave some advice on the implementation details of immutable types. Even people who generally know what immutability is would still benefit a lot from this presentation. I think that Jon’s presentations (Immutability in C# and Abusing C#) were the best technical sessions of the conference. No doubt this guy is awesome.
    Alternative sessions:
    Alex Mang: Everyone Loves Docker Containers Before They Understand Docker Containers
    Tomas Herceg: DotVVM: “Web Forms” on .NET Core? Yes!

  6. Closing Keynote: Ted Neward – Rethinking Enterprise
    The show that Ted gave during the closing keynote was something I will surely remember for a long time. This was a perfect example of a session where Ted could show all his skills as a speaker. Even though this presentation wasn’t technical, the experience and point of view that Ted presented to the audience made it one of the best sessions of the conference. It discussed a topic that always matters (Enterprise), with his sense of humor (EJB = Enterprise Jesus Beans…), and showed the distance he keeps from the technical topics that are discussed worldwide. Even though it was the last session after three days of conference, I think nobody was bored at that point.


All in all, it was definitely worth going to Warsaw for this conference. Many popular guys from the .NET world, a nice venue, interesting sessions.

Main things to remember:

  • Scott Hanselman’s sessions from the previous year were not merely good. Compared to many other speakers, looking back, his sessions were simply outstanding.
  • Ted Neward is a showman, not only an experienced geek 🙂
  • Jon Skeet is a very nice guy, apart from his extraordinary technical skills. We talked a bit about children and approaches to teaching them technical stuff. I think we share roughly the same approach. Definitely recommended to meet him and listen to his presentations if you only have the chance.
  • Workshops should be better organized and more interactive.
  • Having the party at the conference venue while the Q&A session was running was a very good idea.
  • The DevTalk session was even better; Maciej Aniserowicz is a professional and I recommend attending his trainings and presentations.
  • Don’t forget to book a ticket for next year’s conference early enough!

Windows Installers directory size


This problem is very likely known to people who bought small SSDs for their system partition. Just like me.

After around 2 years, my C:\Windows folder was huge, taking up over 60% of my C drive (~60GB), largely due to the C:\Windows\Installer folder (30GB). As you can imagine, that hurts when you simply don’t have anything left to remove and you need some free space to install something on your computer.

I faced this problem on a machine with Windows 7, but the solutions also apply to newer versions.

What is this crap?

You may find a lot of content on the internet that will convince you not to delete this folder, as it is crucial for keeping the applications installed on your computer working fine. Like on this MS guy’s blog:

C:\windows\Installer is not a temporary folder and files in it should not be deleted. If you do it on machines on which you have SQL Server installed you may have to rebuild the operating system and reinstall SQL Server.

The Windows Installer Cache, located in c:\windows\installer folder, is used to store important files for applications installed using the Windows Installer technology including SQL Server and should not be deleted.

Okay, so you are saying that Windows should be eating 60GB and that’s fine?
Looks like it.



PatchCleaner

I’ve found a nice cleanup tool called PatchCleaner.
The application is quite straightforward in what it does; according to the author:

The windows operating system holds a list of current installers and patches, that can be accessed via WMI calls, (Windows Management Instrumentation ).

PatchCleaner obtains this list of the known msi/msp files and compares that against all the msi/msp files that are found in the “c:\Windows\Installer” directory. Anything that is in the folder but not on the windows provided list is considered an orphaned file and is tagged to be moved or deleted.

It has a very simple interface:


When you run it, it looks for orphaned files in the Installer directory and allows you to either delete them or move them to another location.

I recommend moving the files to another drive first and then checking whether you experience any problems. If not, the folder may be deleted later.

It saved me around 20GB of space, totally for free!


Compression

This one is easy, but it’s really worth trying: just right-click on the directory -> Properties -> Advanced -> tick the compress contents checkbox -> apply it to subdirectories as well.

Since we don’t use these files on a regular basis, it’s a good idea to compress the content to save some space. There are actually more places worth looking at, found just by sorting the Windows subdirectories by size:


Directory link/junction

Another solution that has been proposed is to cheat Windows by moving the directories to another path and creating either a symbolic link or a junction for the directory. You can read more about this approach in this superuser topic:

I will quote the junction solution so that you don’t need to browse back and forth:

Start a command prompt as administrator.
Take ownership of the Installer directory and all its files:

takeown /f "C:\Windows\Installer"
takeown /f "C:\Windows\Installer\*"

Move C:\Windows\Installer to a new, spacious drive, let’s say E:. For convenience, it’s better to create a subfolder to gather all future junctions in one place, e.g. E:\Win7-Junctions, so the new path will be E:\Win7-Junctions\Installer. Cut-paste from Windows Explorer should be enough to move the Installer folder.

Make sure that C:\Windows\Installer is really gone and that all files have been moved to E:\Win7-Junctions\Installer.

Create the junction:

mklink /j "C:\Windows\Installer" "E:\Win7-Junctions\Installer"

The syntax is:

mklink /j [link] [target]

Verify that the junction works by creating a small text file in E:\Win7-Junctions\Installer and seeing it materialize in C:\Windows\Installer as well.

Done. Check within “Add or remove programs” that installers still work (Office is a good candidate to start with).


It’s insane that Microsoft forces users to proceed this way, but since it’s only software, for each problem there are usually at least a few solutions.

In my case, after cleaning with PatchCleaner and compression, I gained over 20GB of space, which saved my Windows installation for months (at least). Hope it will save some GB of yours too!

Stay tuned.

Invoking methods dynamically in C#: Examples and benchmarks


Imagine that you have a service which receives requests to execute specific methods, possibly from a number of different underlying libraries that you don’t want to expose directly. Web services are a possible example. The request comes in as a number of string values (method name, parameters, etc.) and you need to respond with results.

So you have inherited a project with an API that starts with this method:

object Invoke(string methodName, object[] parameters);

You now have to call a variety of methods by decoding the method name and the parameters provided. In my particular case the signatures were a bit different and more complicated, but for the sake of simplicity let’s keep it that easy.

Let’s assume now that you have two methods you want to call using this API:

public static string TestString1(string a)
{
    return "Received: " + a;
}

public static int TestInt2(int a, int b)
{
    return a + b;
}

You get the point: different method names, different number of arguments of different types and names.


How do we dynamically invoke the appropriate method with the supplied arguments in the most efficient way? How do we avoid hard-coding the method names and arguments in a series of if-else statements that would grow and become unreadable very quickly?
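For illustration, here is a sketch of the naive hand-written dispatch we want to avoid (the class name `NaiveDispatcher` is mine; the two methods are the samples from above):

```csharp
using System;

static class NaiveDispatcher
{
    public static string TestString1(string a) => "Received: " + a;
    public static int TestInt2(int a, int b) => a + b;

    // Hand-written dispatch: every new method means another branch here.
    public static object Invoke(string methodName, object[] parameters)
    {
        if (methodName == "TestString1")
            return TestString1((string)parameters[0]);
        if (methodName == "TestInt2")
            return TestInt2((int)parameters[0], (int)parameters[1]);
        throw new ArgumentException("Unknown method: " + methodName);
    }
}
```

It works, but the if chain grows linearly with the API surface, and every new method requires touching the dispatcher.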

Possible approaches

There are a couple of approaches I will discuss here. All assume that you have already obtained the type of the class that defines the target method. This can be achieved in many ways, and in order to make this post a bit shorter and easier to read, I will skip it. If you would find a post about it useful, let me know in the comments.


MethodInfo.Invoke

One of the first things you will likely find on the web is the MethodInfo.Invoke method from the System.Reflection namespace.

The usage is very easy:

//get the method from a given type definition
MethodInfo method = typeof(Program).GetMethod("TestString1");

//build the parameters array
object[] _stringMethodParams = new object[] { "lalamido" };

//invoke the method
object result = method.Invoke(null, _stringMethodParams);

The first parameter of the Invoke method defines the instance of the object we want to execute the method against, but since our methods are static, we don’t need any instance, hence the null value. As you can see, it’s relatively easy to use. The only thing wrong with this approach is performance, but I think the best way to grasp the impact is to see a comparison, so we’ll get back to this later in the post.


Delegate.DynamicInvoke

Another approach you can find on the Internet is DynamicInvoke of the Delegate class. The usage looks like this:

MethodInfo info = typeof(Program).GetMethod("TestString1");
Delegate _mi1 = Delegate.CreateDelegate(
    Expression.GetDelegateType(
        (from parameter in info.GetParameters()
         select parameter.ParameterType)
        .Concat(new[] { info.ReturnType })
        .ToArray()), info);

object[] _stringMethodParams = new object[] { "lalamido" };

object result = _mi1.DynamicInvoke(_stringMethodParams);


I think the code is also quite easy, except (maybe) for the delegate creation, which expects a delegate type. Here we have used the Expression class to build it. The method Expression.GetDelegateType accepts an array of types and builds a delegate type out of it. The last type provided in the array defines the return value type.

Expression trees

Since we already used the Expression class in our previous example, we could go one step further. Expression trees are how expressions are represented in memory as objects. Quoting MSDN:

The System.Linq.Expressions namespace contains classes, interfaces and enumerations that enable language-level code expressions to be represented as objects in the form of expression trees.

What does this actually bring into the discussion?

Our C# code is built from expressions and other syntactic constructs, such as clauses or statements.

After you write code in your favorite IDE, it needs to be compiled and JIT-ted (speaking about the .NET platform only here, of course). Now, the easy thing about it is that at compile time the compiler knows the signatures of your methods and the types of the arguments, so you can perform direct calls in your code and the compiler is happy.

But what if you don’t know them at that moment? Well, the compiler still needs to be happy, so usually people go through reflection, which doesn’t assume much at compile time, so it may result in exceptions at runtime. Another thing is that reflection is slow, because it involves a lot of type-checking and matching at runtime, where we would like the code to simply perform well.

Expression trees are a way to make both worlds happy. You can write C# code that represents code expressions as objects. Then, at runtime, you can compile the expression and use it almost as if you were using direct invocation. Just for the sake of completeness, I will also mention two ways to call the methods when the type is known at compile time, because apart from the compilation phase, expression trees add little or no execution overhead.
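As a tiny, self-contained illustration of that compile-then-invoke cycle (not taken from the benchmark code), here is x => x + 1 built and compiled by hand:

```csharp
using System;
using System.Linq.Expressions;

// Build the expression tree for x => x + 1 manually...
ParameterExpression x = Expression.Parameter(typeof(int), "x");
Expression<Func<int, int>> addOne =
    Expression.Lambda<Func<int, int>>(
        Expression.Add(x, Expression.Constant(1)), x);

// ...then compile it at runtime and call it like any delegate.
Func<int, int> compiled = addOne.Compile();
Console.WriteLine(compiled(41)); // prints 42
```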

So how do we represent our methods as expression trees? Let’s try to analyze it in the opposite direction, starting from the method invocation:

public delegate object LateBoundMethod(object target, object[] arguments);

LateBoundMethod _mi1 = DelegateFactory.Create(typeof(Program).GetMethod("TestString1"));
object result = _mi1.Invoke(null, _stringMethodParams);

Looking at this example and the MethodInfo one we started with, they look quite similar. The details are hidden in the Create method of the DelegateFactory class. It returns a delegate which accepts the target object and an array of objects as arguments, and returns an object-typed result.

The DelegateFactory class implementation is based on Nate Kohari’s blog post, which is not available for reading today (it returns 404), but I will explain the implementation a little bit more. You can check the full source code >here<.

Let’s examine it a step further. We have defined a delegate which (surprisingly :)) accepts a target object and an array of arguments, and returns an object. Now we need to construct such a delegate. Late-bound methods are, generally speaking, bound to their types at runtime, contrary to early binding, which occurs at compile time.

So we call the Create method, which accepts a MethodInfo argument. Here’s the code:

ParameterExpression instanceParameter = Expression.Parameter(typeof(object), "target");
ParameterExpression argumentsParameter = Expression.Parameter(typeof(object[]), "arguments");

MethodCallExpression call = Expression.Call(method,
    CreateParameterExpressions(method, argumentsParameter));

Expression<LateBoundMethod> lambda = Expression.Lambda<LateBoundMethod>(
    Expression.Convert(call, typeof(object)),
    instanceParameter,
    argumentsParameter);

return lambda.Compile();

The first thing we have to do is construct our two parameters, which will be represented as expressions. We do this by calling Expression.Parameter with two arguments: the type and the name of the parameter.

Then we have to get our method call expression; there’s also a type for that, whose name is self-explanatory: MethodCallExpression. We get it via the Expression.Call method, which accepts a MethodInfo object and an array of expressions that will be used as the arguments collection. So far so good. Now we have to build the array of argument expressions. We use another method for that:

private static Expression[] CreateParameterExpressions(MethodInfo method, Expression argumentsParameter)
{
    return method.GetParameters().Select((parameter, index) =>
        (Expression)Expression.Convert(
            Expression.ArrayIndex(argumentsParameter, Expression.Constant(index)),
            parameter.ParameterType))
        .ToArray();
}

The GetParameters method returns an array of ParameterInfo objects from the System.Reflection namespace.

Then, using LINQ, we enumerate the parameters we got from the GetParameters method and appropriately convert each element of our arguments expression (the one we declared at the beginning of the Create method) to an expression of the corresponding parameter type.

Next we need to build the expression that will represent our LateBoundMethod delegate. There’s a method for that as well; we need to provide the expression body for the lambda and the expression arguments that must correspond to the type we provided as the TDelegate generic parameter. Obviously, this type also has to be a delegate.

Expression<LateBoundMethod> lambda = Expression.Lambda<LateBoundMethod>(
    Expression.Convert(call, typeof(object)),
    instanceParameter,
    argumentsParameter);

Last but not least, once we have built the expression that represents our lambda (delegate), we need to compile it before using it. Once the expression is compiled, we get a delegate that is ready to be invoked.

return lambda.Compile();
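Putting the fragments together, the whole DelegateFactory is roughly the following (a sketch reconstructed from the snippets above; it handles static methods only, as in our examples):

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

public delegate object LateBoundMethod(object target, object[] arguments);

public static class DelegateFactory
{
    public static LateBoundMethod Create(MethodInfo method)
    {
        // The two parameters of the LateBoundMethod delegate, as expressions.
        ParameterExpression instanceParameter = Expression.Parameter(typeof(object), "target");
        ParameterExpression argumentsParameter = Expression.Parameter(typeof(object[]), "arguments");

        // Call the target (static) method, feeding it the converted array elements.
        MethodCallExpression call = Expression.Call(method,
            CreateParameterExpressions(method, argumentsParameter));

        // Wrap the call in a lambda matching LateBoundMethod and compile it.
        Expression<LateBoundMethod> lambda = Expression.Lambda<LateBoundMethod>(
            Expression.Convert(call, typeof(object)),
            instanceParameter,
            argumentsParameter);

        return lambda.Compile();
    }

    private static Expression[] CreateParameterExpressions(
        MethodInfo method, Expression argumentsParameter)
    {
        // arguments[i], cast to the type the method actually expects.
        return method.GetParameters().Select((parameter, index) =>
            (Expression)Expression.Convert(
                Expression.ArrayIndex(argumentsParameter, Expression.Constant(index)),
                parameter.ParameterType))
            .ToArray();
    }
}
```

Create compiles the delegate once; the returned LateBoundMethod can then be cached and invoked repeatedly at near-delegate speed.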

It might look a bit hard at the beginning, but I advise you to read a bit more about expression trees, as they are, generally speaking, a very nice feature of .NET that you shouldn’t miss. I can recommend chapter 11 (Inside Expression Trees) and chapter 12 (Extending LINQ) of the Programming Microsoft LINQ book.

How does it perform? Well, apparently it does quite well. The overhead we can’t ignore is added by the compilation, which is quite reasonable: every piece of C# code we use needs to be compiled. The only difference is that when you compile it in your IDE, you don’t even think about it, as it doesn’t impact your application’s performance.

In order to visualize the impact, I have split the code execution into two parts. This way we can see how long it takes to prepare the execution and how long the execution itself really takes. We’ll get back to this point after presenting the test results.

Compile time typed methods

Just to make this comparison more valuable, I will also compare times for calling those methods using early binding. Of course, the first way to call a method you know everything about at compile time is to actually call it directly. Like that 🙂
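With the sample method from the beginning of the post, the direct call is simply:

```csharp
using System;

// The sample method from the beginning of the post.
string TestString1(string a) => "Received: " + a;

// Direct, early-bound call: the compiler checks everything at compile time.
string result = TestString1("lalamido");
Console.WriteLine(result); // prints "Received: lalamido"
```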


Another way is to have a delegate with the types defined.

static Func<string, string> _testString;
_testString = Program.TestString1;

These two ways, I think, don’t need any explanation, and most probably represent 99% of your standard ways of invoking methods. Since everything is known at compile time, we also expect these two to be the quickest ones. Let’s see how it goes.



In the setup part, we prepare everything for execution. This includes getting the MethodInfo object or compiling the expression tree, so that in the execution phase we can focus only on the execution time.


The execution part executes the two methods I mentioned at the beginning, in a for loop with a parameterized iteration count.


Setup was executed only once for each approach. The objects were then used iteratively in the tests.
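The measurement harness itself is not shown here; a minimal hypothetical version (the real code is in the solution linked at the end) could look like this:

```csharp
using System;
using System.Diagnostics;

// Hypothetical mini-harness: time N invocations of a supplied action.
long Measure(int iterations, Action action)
{
    var watch = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
        action();
    watch.Stop();
    return watch.ElapsedMilliseconds;
}

// Example: time 100 000 direct delegate calls.
Func<int, int, int> direct = (a, b) => a + b;
Console.WriteLine($"direct: {Measure(100_000, () => direct(1, 2))} ms");
```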

Dynamic Invocation Setup

As you can see, expression trees proportionally took the longest to prepare, head-to-head with DynamicInvoke. That doesn’t look good.

On second thought, this may (probably) be done only once and then reused, so using some kind of in-memory container, a cache or something, you could pre-compile things before they are really needed and benefit from the execution times later on. This may also not be the case, so one has to be careful when using each of these in different scenarios.

Execution tests have been run with 1 000, 10 000, 100 000 and 1 000 000 iterations. The average results are as follows.

1 000 iterations

Dynamic Invocation 1000 runs

10 000 iterations

Dynamic Invocation 10 000 runs

100 000 iterations

Dynamic Invocation 100 000 runs

1 000 000 iterations

Dynamic Invocation 1 000 000 runs


The results of the first tests may look surprising. Direct invocation slower than expression tree invocation? Even the delegate was faster. Maybe we should use delegates all the time instead of directly invoking methods?

Later, we can see that the results normalized, and finally the picture is quite clear.

Methods of invocation whose types we know at compile time are the fastest ones.

Expression trees are last on the podium, but I think everybody will agree that their result is awesome compared to MethodInfo.Invoke or Delegate.DynamicInvoke.

It is important to note that since the setup for expression trees is expensive, if your method doesn’t get called very often, you may not benefit enough from using expression trees. If you know which types’ methods will be called, you can apply some kind of pre-compilation of methods on application startup, in order to reuse the compiled delegates later.

Source code

You can try running the examples yourself; the whole solution is available on GitHub.

Stay tuned.

Last, but not least – Daj się poznać

OK, so this is my last blog post within the scope of the Daj się poznać contest. I wanted to share some thoughts on how it went.

What I can easily say about the period from late February till today is that I have learned a lot more than I expected, whereas I produced a lot less code than I thought I would. This brings some very interesting conclusions for the summary.


One of the easiest conclusions is that while working on a side project after work is possible, it is not as easy as I thought. It is very good to be able to estimate such things, and sometimes it’s hard to gain such experience, because if one is not pushed to the limits by something (like the rules of the contest), it’s very easy to slow down, or even give up. Looking at the number of people who signed up for the contest and the number of people who actually completed it, I think it’s clearly visible. I am not surprised at all.

For me, the main time-eaters are my kids, who consume most of my and my wife’s energy after work, and even at night. I have always read and coded late at home, so when the contest was announced, I thought it would be easy to cope with semi-weekly posts and a website project.

What I was missing at that time is that when you work on random stuff in a technology you are familiar with, the time you need to start coding is quite short. So even if you have half an hour, you should be able to ‘do something’.

When a new technology appears and there's time pressure because of the post-writing requirements, things get complicated: before you actually start coding, you need to get to know a few new libraries, a new language, a new IDE, or maybe new everything. And then, after reading this and that, you 'npm install' and... it doesn't work. So you're browsing the web to fix all these tiny problems, and sometimes it may take even a few evenings to start coding. What is worse, you need to write posts all the time, and it's not easy to build reasonable content if all you do is struggle with problems of that kind.


Another conclusion, following the first one, is that generally speaking, in the context of Daj się poznać I found it a lot more useful to learn than to code. I find writing code a rather simple process; usually it's the few underestimated steps you need to take to wire things up and make them play together that cause most of the headaches. You can't learn that from a screencast or a blog post, you have to face it yourself.

I feel like I learned something totally new, as if I had never coded before. That's totally awesome: I now have technical conclusions about the whole new JavaScript world in relation to my .NET experience, and I can compare the two, which I think is the most valuable thing to mention.

Reference Point

Another thing is being aware of the new technologies' pros and cons, which is hard to spot when watching from the sidelines.

Before the contest, I had just watched various talks about the new JavaScript trend and, to be honest, I was even starting to think that maybe .NET didn't stand a chance against all these new shiny JS frameworks, combined with TypeScript, NoSQL databases and a whole open-source community of people totally on fire with all this stuff.

This whole JS world has really good marketing around it, which probably comes from the fact that it's open source; you may find a lot of people loving it, the community is huge and excited about it, and this is so far the most important and interesting thing I learned about it. I wish the .NET community was as involved.

On the flip side, I learned that there are a lot of things I don't really like. I found myself in many situations where I needed to choose an ugly solution just because 'it has to be like that'. I wrote about it in some earlier posts, like the mongoose TypeScript classes/interfaces design, where I had to repeat my code a few times to have it both working and designed according to OOP principles. And I know it violates DRY if I repeat myself :). Even TypeScript doesn't help here, although I cannot imagine how hard it would be for me to accept the pure JavaScript approach. The only hope is that new specifications will introduce all the nice stuff TypeScript already has, the language will evolve, and libraries will follow this evolution smoothly.

A good example of the JavaScript world and the techniques used by different people is the Angular 2 framework. If you still haven't tried it, I think it's the first thing you should do after reading this. It's a totally awesome framework to use and, even if it can bring some hard moments, most of the time it does just great. It also has great community support and a lot of useful documentation and tutorials all over the web; this was the case even when everything was in beta. It's even better because it's written in TypeScript, so its concepts are easily understandable for a .NET/object-oriented guy. Of course you have to learn some new things, but it's relatively easy compared to the NodeJS stuff and the server-side libraries available out there. The only thing I would like to know now is how Angular 2 compares to some other popular JS frameworks like Aurelia or ReactJS; maybe in the next edition of Daj się poznać? 🙂

Social Media

To tell you the truth, the fact that I've created a Facebook fan page is for me the best sign that I went through a big change and have opened myself up to social media in a way that I wouldn't have before Daj się poznać. I didn't even have a Facebook profile before; I needed to create one to run the fan page. I was already on Twitter, so that isn't new for me, but generally speaking I can say that Daj się poznać pushed me to open myself up to a wider audience, which I'm happy about.


Blogging itself, as I wrote in my first post here, was always something I wanted to do. I even made an attempt a long time ago, but I think I didn't have any real content to share, so it ended very quickly. I still feel that I need to work on the style I use to transfer knowledge, especially when it comes to detailed coding information, as it's always hard to explain it easily in a blog post (at least for me).

A lot of people have visited my blog from all around the world, including India, Canada, the US, and many, many other countries. A lot from Poland as well. This makes me feel it is worth continuing.

I will keep blogging after the contest is over, so I hope somebody will visit this blog from time to time. The topics will very likely focus more on .NET than JavaScript, but I will not restrict myself that way; I will blog about everything I think people should know, though I will try to stick to technical stuff linked with software development.

I will announce new posts on Twitter, the FB fan page and dotnetomaniak.pl, as I have been doing recently.

.NET Videos

I couldn't get a working version up for the end of the contest, which I am not happy about, but when I started to work on authentication, I couldn't just leave it the way it was. I needed to try approaches I had never used before, which left my website unready, and, more importantly, it kept me from fixing the Azure deployment issues (which I have already blogged about).

Of course I plan to continue working on the project, just not at the same pace. I can share the plans for the next phases:

  • JWT-based authentication with the Auth0 library for Angular 2 on the client side and passport-jwt on the NodeJS side
  • An oEmbed approach to video embedding on my website, in order to give users a good experience
  • Unit/E2E tests
  • Automated video suggestions based on some predefined sources and filters
  • User involvement in content creation (to be defined), including profiles, voting for movies, moderation, video ratings etc.
  • Styling the application

I was thinking about moving the server side to .NET after the contest is done, but I haven't made up my mind yet. It is very likely I will keep it as is, in order to stay in the JavaScript world and keep my knowledge and experience up to date.

Last, but not least

I wanted to say a big THANK YOU to Maciej Aniserowicz for the trigger that made all this happen. The same goes to all the people participating in the contest. There's a very positive energy around the contest, which motivated me a lot when I was thinking of giving up. The contest was definitely one of the best things that happened to me in terms of personal development since the beginning of this year. Even if it was not easy to follow, I don't regret a single minute spent on it and I think it was 100% worth it. I am nearly sure I will participate in the next round, if it takes place.

JWT vs Session Authentication


The topic looks obvious and, generally speaking, a standard user doesn't care much about the details. He wants to register, log in, and be sure that his password is safe. But the more you read and learn, the more sophisticated your requirements for protection and for state management on the server and client side become, and the more complicated things get.

The problem with authentication is basically: how does one prove that he is the person he's pretending to be? In the world of web development, the general use cases are:

  • register with a login and password
  • log in by providing the login and password and validating that they match what was provided during registration
  • once logged in, authorize access to specific resources for the whole period of website usage, so that one does not need to navigate to the login screen all the time

When working with websites, you have to clearly separate the client and the server. The client, the browser, handles user input and sends it to the server. On the server side, we have to persist the authentication data so that we can verify it each time a user wants to log in.

To avoid keeping passwords in plain text, passwords are very often hashed upon registration, so the server (the database, supposedly) only keeps the hash. The password itself isn't really necessary: when the user tries to log in, the password he enters gets hashed and only the hashes are compared. This prevents anybody from learning the passwords just by looking at the stored values. Hash functions are meant to be one-way functions (although nobody has proven that such functions exist), so there is no easy way to recover the password from the hash value alone.

An additional layer of security comes from using a SALT: some random data that is added to the password before hashing. Adding random content to passwords is meant to prevent dictionary-based attacks, and it is used very often in password hashing.

In my project, I am using the bcrypt-nodejs library, which lets you hash passwords easily, handles the salt, and lets us focus on other things. Simple as that:

var bcrypt = require('bcrypt-nodejs');
//hash password using 8 rounds to generate Salt
bcrypt.hashSync(password, bcrypt.genSaltSync(8), null);

Session & Cookies

Session State, which at a very general level is a kind of dictionary container keeping data in memory on the server side, is very often used together with cookies: very small files persisted on the client (browser) side. The idea is that when you connect to the server, it generates an ID for your session, which your browser stores in a cookie so that it is sent with each request to the server. This way, the server can recognize whether a request has already been authenticated or not. I think that's the most popular approach so far, though it now has a good alternative, which I will describe shortly. If you open a connection to my .NET Videos website, you can see such an entry in your cookies:


This is because my application uses session state on the Express side, as I mentioned in one of my previous posts.

Tokens based approach (JWT – JSON Web Tokens)


A different approach than session state is to use JWT, which stands for JSON Web Token; the approach is often referred to as token-based authentication. The main difference is that instead of keeping state on the server, the information about the user is sent in a token with each request, and the server verifies the user on the basis of the data included in the token. Sounds like it adds performance penalties and is vulnerable to attacks? Not really.

JSON Web Tokens are an open, industry-standard method (RFC 7519) for representing claims securely between two parties. Reading the abstract of the specification:

JSON Web Token (JWT) is a compact, URL-safe means of representing claims to be transferred between two parties.

The claims in a JWT are encoded as a JSON object that is used as the payload of a JSON Web Signature (JWS) structure or as the plaintext of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed or integrity protected with a Message Authentication Code (MAC) and/or encrypted.

Sounds better already: you can sign a JWT using a secret (with the HMAC algorithm) or with a public/private key pair using RSA. This already brings some safety to the process, as even if the token is modified on the client side, it won't validate because the signature won't match. This also means it's not very risky to send the token in a URL, which you can obviously do since JWTs are meant to be compact. And if you want to put it in an HTTP header instead, there's nothing preventing you from doing that either.


What does a JWT look like? It consists of three elements concatenated into one string using a dot ('.'). The three parts are:

  1. Header
    The header defines what kind of token is being sent and how it is signed, so it will most probably look like


    The header is then Base64Url-encoded, resulting in:

  2. Payload
    The payload is where the content of the token is stored. The content is a set of key/value pairs describing the entity (your application user), possibly with some metadata. These key/value pairs are called claims. Obviously there are some reserved claims that you shouldn't override, like
    – exp for the expiration time
    – iss for the issuer
    The specification also defines public claims, which can be registered in a public registry.
    Private claims are to be agreed between the parties exchanging the messages. If you are the one communicating from the client and you're the one reading the values on the server side, you can of course use whatever you like, but adhering to the standards allows you to open your API to the web in the future and make sure nothing breaks or causes problems. The claims are defined in section 4.1 of the official specification. An example payload could look like:


    The payload is also Base64Url-encoded, so it results in:

  3. Signature
    The signature is based on the two previous parts: it takes the encoded header and encoded payload and signs them with the algorithm declared in the header. Computing the MAC of the encoded header and encoded payload with the HMAC SHA-256 algorithm and base64url-encoding the HMAC value yields the encoded JWS signature:


    Concatenating these encoded parts in this order with period (‘.’) characters between the parts yields this complete JWT (with line breaks for display purposes only):



  • One of the first, obvious benefits is that it doesn't require session state on the server side, which for many people may be a very interesting option as it adds to server-side scalability. Stateless applications are generally more performant and more scalable; this topic has already been described a lot on the internet, but just to bring up a few benefits:
    1. Reduced memory usage. Imagine if Google stored session information about every one of their users.
    2. Easier support for server farms. If you need session data and you have more than one server, you need a way to sync that session data across servers. Normally this is done using a database.
    3. Fewer session-expiration problems. Sometimes expiring sessions cause issues that are hard to find and test for. Sessionless applications don't suffer from these.
    4. URL linkability. Some sites store the ID of what the user is looking at in the session. This makes it impossible for users to simply copy and paste the URL or send it to friends.
  • Since the token's payload includes all the information about the user, and the server knows it's the right information because it can verify the signature, there is no need to query the database for the user data; it's all there.
  • CSRF brings another point to the discussion about JWTs, as not relying on cookies makes things much easier. You could pass your CSRF token in an xsrfToken JWT claim.
  • From a developer perspective, a nice thing is that there is a library to handle JWT for most of the languages you would use, along with a kind of online debugger; you can check it out at https://jwt.io/
  • Many more benefits, too many to list here, can be found on the web; one of the good resources is this Auth0 blog post

.NET Videos

On my website, I started with session state, but now I am convinced to change the approach to JWT. There's even an open-source library by Auth0 to handle JWT in Angular 2; you can check it out in their GitHub repo. Since this is work in progress that takes me a lot of time to understand and dig into, it is not going to see daylight before the end of Daj się poznać, but I will do it afterwards, as I plan to continue the project. Stay tuned.

Embedding .NET videos with oEmbed


This is definitely something you should be aware of. oEmbed lets you easily embed content from different sources/providers into your website. It's an open standard for embedding content. Following the official website:

oEmbed is a format for allowing an embedded representation of a URL on third party sites. The simple API allows a website to display embedded content (such as photos or videos) when a user posts a link to that resource, without having to parse the resource directly.

So, let's assume you want to embed a YouTube video on your website. Previously, you would need to browse the YouTube API, or check under the movie for the iframe code to embed it.

With oEmbed the situation is trivial. You call a given endpoint/URL that provides you with the video's details in a predefined format (the oEmbed format). For instance, if you want to display this movie (which I strongly encourage you to check out if you haven't seen it yet!): https://vimeo.com/110554082, you need to call this address:


and you will be presented with all the specific information about it

{
  "title": "One Hacker Way - Erik Meijer",
  "html": "<iframe src=\"https:\/\/player.vimeo.com\/video\/110554082\" width=\"1920\" height=\"1080\" frameborder=\"0\" title=\"One Hacker Way - Erik Meijer\" webkitallowfullscreen mozallowfullscreen allowfullscreen><\/iframe>",
  "description": "One Hacker Way, a Rational Alternative to Agile\nPresented at Reaktor Dev Day 2014\nhttp:\/\/reaktor.fi\/blog\/erik-meijer-software-eating-world\/\nhttp:\/\/reaktordevday.fi",
  "upload_date": "2014-10-31 04:35:08",
  ...
}

As you can see, it gives you a lot of very interesting information which you can use to embed the video, even the HTML code! You have no idea how useful this is when you are beginning to think about details like how to get a video's thumbnail, its duration, upload date, or any other property linked with the video. Here you have it all, in a clear form, without any confusion.
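Building the request URL programmatically is a one-liner. A sketch (the endpoint path follows Vimeo's documented oEmbed API; the helper name is mine):

```javascript
// Build the oEmbed request URL for a video; format can be 'json' or 'xml'
function buildOembedUrl(videoUrl, format = 'json') {
  const endpoint = new URL(`https://vimeo.com/api/oembed.${format}`);
  endpoint.searchParams.set('url', videoUrl); // url parameter is percent-encoded for us
  return endpoint.toString();
}

console.log(buildOembedUrl('https://vimeo.com/110554082'));
// https://vimeo.com/api/oembed.json?url=https%3A%2F%2Fvimeo.com%2F110554082
```

From there it's a plain HTTP GET, e.g. `fetch(buildOembedUrl(videoUrl)).then(r => r.json())`, and the response's `html` property can be dropped straight into the page.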

That's not all. Once the format of the object is defined (which properties are exposed etc.), you can change the output format as well (e.g. from JSON to XML), in case JSON doesn't suit you. So in the URL provided above, you can just change the json part to xml:


and you’re ready to go:

<title>One Hacker Way - Erik Meijer</title>
<iframe src="https://player.vimeo.com/video/110554082" width="1920" height="1080" frameborder="0" title="One Hacker Way - Erik Meijer" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
One Hacker Way, a Rational Alternative to Agile Presented at Reaktor Dev Day 2014 http://reaktor.fi/blog/erik-meijer-software-eating-world/ http://reaktordevday.fi
<upload_date>2014-10-31 04:35:08</upload_date>

Once you know it, you can't even think of a reason why it wasn't done like that from the very beginning. If I didn't put the link in preformatted content:


WordPress automatically converts it like that:

I didn't need to add any code or HTML. WordPress parsed the link, understood what was going on and displayed the movie embedded in my post.

There is one more thing that makes this even better. The list of providers offering their content in this format is just huge, including Twitter, Flickr, YouTube, Instagram and many, many others. On the official oEmbed website I could even find the provider data in JSON, to check programmatically:


But it doesn't include all providers, unfortunately. For instance, I found that Twitter offers this functionality in its API, but you won't find it in the JSON provided by oEmbed. So you can do:

{
  "cache_age": "3153600000",
  "url": "https://twitter.com/Interior/status/507185938620219395",
  "provider_url": "https://twitter.com",
  "provider_name": "Twitter",
  "author_name": "US Dept of Interior",
  "version": "1.0",
  "author_url": "https://twitter.com/Interior",
  "type": "rich",
  "html": "<blockquote class=\"twitter-tweet\"><p>Happy 50th anniversary to the Wilderness Act! Here's a great wilderness photo from <a href=\"https://twitter.com/YosemiteNPS\">@YosemiteNPS</a>. <a href=\"https://twitter.com/hashtag/Wilderness50?src=hash\">#Wilderness50</a> <a href=\"http://t.co/HMhbyTg18X\">pic.twitter.com/HMhbyTg18X</a></p>— US Dept of Interior (@Interior) <a href=\"https://twitter.com/Interior/status/507185938620219395\">September 3, 2014</a></blockquote>\n<script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>",
  "height": null,
  "width": 550
}

Generally speaking, it just feels like it’s a time-saver.

The website I'm working on is a perfect fit for such a thing. Since I will not download the movies, nor do I plan to host my own videos, what is most interesting for me is how to display the video properly so that the user gets the best experience possible. I have already created a component for that. Stay tuned.

Users management

As I wrote in my last post, users can now register on my website. There was a tiny issue with password handling, but as it turned out, it was due to a stupid error of mine. I got rid of it, so users are now saved to the database correctly.

The thing is, having the user in the DB is not enough.

First, you have to be able to validate the user based on his login and password, but you also need to authenticate all incoming requests (for the restricted website areas, of course). That's a very interesting topic when you work on website development in .NET: very often you don't know, or simply don't care, about the basics of the communication and the mechanisms involved, because .NET just does it for you. When you communicate with the server, you get the SessionID with your requests and based on that you can easily authenticate requests. When a request is authenticated, you just call `Request.IsAuthenticated` and you're ready to go. It's .NET that generates this SessionID on the server side and does all the nice things you are used to working with.

But here, in the JavaScript world, you have to take care of every single thing yourself. First you have to force your server to generate some kind of SessionID and make sure the client remembers it when sending requests. Then you have to make sure your server remembers your SessionID as well and can tell whether your request is authenticated or not. There are other very useful things you'd like in your app, like a session expiration time so that your session doesn't last forever, or, more importantly, preventing people from reading (or maybe 'understanding' is a better word) the cookies, as that's probably where you will keep that kind of data on the client side. As you may know, there's not much to prevent others from reading your cookies: they are small text files, and anybody who can reach your storage can read them. So encrypting their contents is a nice idea.

Looking at it as a MEAN stack developer, you have to take care of several steps to make it work:

  1. Enable session state on the server, which is Express in this case. It will already send the session ID down to the client's cookies. One can do this like in the code below. Of course there are some more things you will need to take care of (like a cookie-parsing mechanism), but for the sake of simplicity, we'll just visualize the process:
    import express from 'express';
    import session from 'express-session';
    // Create our app with Express
    let app = express();
    app.use(session({ secret : "here goes your key for encryption" }));
  2. Our authentication middleware, which we will use for authenticating the incoming requests and the session, is PassportJS, so some more configuration is needed:
    // PassportJS
    import passport from 'passport';
    app.use(passport.initialize());
    // Persistent login sessions
    app.use(passport.session());
  3. Now, in your Express routes configuration, you have to configure the routes you want secured to authenticate the request when it arrives; next() passes control on to the protected handler.
    let auth = (req, res, next) => {
        if (!req.isAuthenticated()) {
            res.sendStatus(401);
        } else {
            next();
        }
    };

With this in place, a user won't get the data if he's not authenticated. So how do we authenticate the user? We'll get to that in the next post. For now, my website is finally able to authenticate the user upon login-form submit, so we have both sign-up and sign-in working fine.
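To see the control flow of that guard without spinning up Express, it can be exercised with stub req/res/next objects (the stubs are mine, just for illustration):

```javascript
// The auth guard from step 3: reject unauthenticated requests, pass the rest on
const auth = (req, res, next) => {
  if (!req.isAuthenticated()) {
    res.sendStatus(401);   // unauthenticated: short-circuit with 401
  } else {
    next();                // authenticated: hand over to the route handler
  }
};

// Stub request/response/next to record what the guard does
const calls = [];
const res = { sendStatus: (code) => calls.push(code) };

auth({ isAuthenticated: () => false }, res, () => calls.push('next'));
auth({ isAuthenticated: () => true },  res, () => calls.push('next'));
console.log(calls); // [ 401, 'next' ]
```

In the real app this guard is simply placed before the handler in the route definition, e.g. `app.get('/api/videos', auth, handler)`.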

Since Passport authenticates requests on the server side, we still need something on the client side to control the authentication process when navigating the website. As I mentioned in my previous post, I think HTTP interceptors will come in handy here at the Angular 2 level, but I haven't even started to work on that, so that's also content for one of the next posts. Stay tuned.


Azure comeback & Users registering


It appears that there's a subscription for Visual Studio Online users that allows using €25 per month for development purposes. What a nice surprise. I have just subscribed and will try to get my website up this week (as far as I remember, due to some Python packages I needed to transfer my node_modules to the server, because I can't install things like Python or VC++ dependencies there). It's always good to have something up and running in public.

I have already configured my deployment source as my GitHub repo, so when I push to GitHub, the Azure website should get updated automatically. And that is precisely the case, but the deployment script fails on the problem mentioned above. Been there, done that.

Users management

I've just saved the first user using my register form. There's some issue with the password on saving, but the entity was saved to the DB. There is almost no form validation and the login/register buttons are aligned... ugly, but I managed to make it work, so I am very happy. This means that the user-register form is good, and the PassportJS integration works as it should. What still needs to be worked out is how the authentication token should be managed at the website level. I have already searched a bit on this topic and it seems I am going to dive into Angular 2 HTTP interceptors, which allow you to intercept an HTTP request and do some additional work there, like, for instance, request authentication.

NPM stuff

I mentioned in my last post that I get loads of duplicated-typings error messages during the webpack build process. I got rid of that by updating some of the packages I am using; I think the important ones here were typings and typescript. I then recreated my local typings and the problem disappeared, which makes me happy 🙂

I still have the Angular 2 upgrade to RC on my todo list, but I think that is not going to happen earlier than June, when Daj Się Poznać will already be over and there will be no time constraints for fighting npm dependency issues, which I am almost sure will come up, as they have for all my dependency updates so far. I'll focus now on users management and video creation and display. For today, I just need to commit, merge, push and... go to sleep. Stay tuned.


Project status update


I have spent this week connecting things so as to have a properly working website at the end of the month, as that's when 'Daj się poznać' is going to end.

First, I linked the models and the forms to have videos properly saved in my database. Then I added a simple display for YouTube videos and some stubs for the next video streaming portals (Vimeo comes next). I added some tests and started to work on authentication; this part is in progress, so I will have more concrete stuff later, but I will already write a bit about my plans.


Passport is authentication middleware for Node.js. Extremely flexible and modular…

Sounds good. So it's a JavaScript library that takes a modular approach: each type of authentication is realized in a different module, which PassportJS calls strategies. I will let you read the strategies list on their website, but just to mention a few out of over 300 strategies: Facebook, Google, Twitter, LinkedIn and actually whatever you could think of.

What I need for my website is user/password authentication with some password hashing etc.

The strategy I need to use is called the local strategy. I have already started to create the necessary components and adapt my menu to allow sign-in/sign-up, as that's what needs to happen before I actually use Passport.

Facebook, Twitter and Google also sound good, so I might be interested in doing them later as well, of course depending on how hard it will be. Believing the PassportJS website, it's a piece of cake... we'll see 🙂

Express routes in typescript

I have quietly skipped this fact before, but I still have pending work at the Express routes level, which is still in pure JavaScript, not TypeScript. I have already started to modify it, as I would like to use some TypeScript features to work with User entities, which will be needed during the authentication phase. This also means I will need to learn about using PassportJS with TypeScript. That's likely to be a topic of one of my next posts.

Problems, problems…

A few days ago, after updating some dependencies and typings, I noticed I get a lot of warning messages about duplicated classes in the typings definitions for many libraries I am using. After checking the content of those typings definitions, it seems the content is correct; the classes are indeed repeated, but they are defined in separate modules, so that shouldn't raise any error. I might need to do some cleanup to resolve it, but since it doesn't block my work, I will try to do this as late as possible.

Updating npm packages is still a very painful process for me. There's always something that goes wrong, which generates a huge waste of time, so I will spare myself such surprises before the end of the contest and try to focus on finalizing things. Stay tuned.


First videos displayed on .NET Videos

Video DisplayMode

Once I finally got my forms working fine with mongoose and the content was saved in my MongoDB, I needed to add a display mode for my videos. So far you could only edit/add videos, so in terms of results, having a 'watch' mode finally delivers some content to the website, which makes me happy!

Inside my VideoDetailsComponent I have created a new one (VideoWatchComponent), which is responsible for displaying the video. I will also have to extract the 'Edit/New' mode template, which is linked with the forms and editing video content. The idea for now is that I will display either of those depending on some 'mode' parameter that I keep in the model and pass/receive using routing (via the URL). To show/hide the appropriate component I will use the *ngIf directive from Angular 2. I don't like this approach; it looks ugly.

I am not sure how one does this in Angular 2 'the cool way'; maybe you, dear reader, have some idea to share with me? I didn't find anything interesting on the web.

Displaying videos

So far only YouTube videos are displayed. The displaying itself is rather simple: it's solved by embedding the content in an iframe. Next will be Vimeo and Channel9. I would appreciate, dear reader, any feedback about other websites hosting videos that may be related to .NET and that expose an API to display movies via iframe or in any other way (the HTML video tag?). I need to make sure this can be done easily and will work fine. Then I will switch to the next tasks, of which there's a full list.
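For YouTube, the iframe approach really is that simple; a tiny helper can build the markup from a video ID (a sketch; the helper name and dimensions are mine, the embed URL format is YouTube's standard `https://www.youtube.com/embed/<id>`):

```javascript
// Build the iframe markup that embeds a YouTube video by its id
function youtubeEmbedHtml(videoId, width = 560, height = 315) {
  const src = `https://www.youtube.com/embed/${encodeURIComponent(videoId)}`;
  return `<iframe width="${width}" height="${height}" src="${src}" frameborder="0" allowfullscreen></iframe>`;
}

console.log(youtubeEmbedHtml('dQw4w9WgXcQ'));
```

The oEmbed endpoints described earlier return ready-made `html` of exactly this shape, which is why I plan to lean on them instead of hand-rolling per-provider templates.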

Unit tests

I started to create some unit/end-to-end tests for my components, but I totally don't know how to run them with the configuration I have. It comes from some MEAN starter and, after all these days spent on this project, I can clearly say it's too complex; too many useless things have been added there, which made the complexity much higher and made me waste a lot of time debugging this config stuff. If I start my next application, it will be a clean start with no dependencies I am unaware of.

The list of #todos is very long, which is good 🙂 The time is always very short, which is bad. Stay tuned.