Blog Posts

Microsoft exam AZ-900 vs AZ-204, which one is right for me?

So you’re interested in taking either or both of the AZ-900 and AZ-204 exams in order to become Microsoft certified as Azure Fundamentals or Azure Developer Associate. That’s great! Perhaps you’re starting your journey towards a cloud-oriented role, want to make sure that you don’t have gaps in your knowledge, or want to demonstrate your level of knowledge.

I recently did them back to back and am hoping that I can bring some insights that will help you decide your own path.

Quick intro to Microsoft exams and certifications

You might be confused as to why there’s both a concept of exams and certifications. Well, in the case of AZ-900 and AZ-204 there’s a one-to-one relationship between them: pass the exam AZ-900 and become “Microsoft Certified: Azure Fundamentals”, or pass the exam AZ-204 and become “Microsoft Certified: Azure Developer Associate”. But in some cases, such as for becoming “Microsoft Certified: Azure Solutions Architect Expert”, you need to pass two exams: AZ-303 and AZ-304.

There also used to be a program called “Microsoft Certified Professional”, which contained the Microsoft Certified Solutions Developer (MCSD), Microsoft Certified Solutions Expert (MCSE) and Microsoft Certified Solutions Associate (MCSA) certifications, which you got for passing multiple exams (where each exam might also give you a separate certification). This has since been retired, but you might find references to it in study material or when contacting training/exam providers.

Who are AZ-900 and AZ-204 for?

Let’s have a look at the two certifications and who they’re geared towards.

In Microsoft’s own words

Microsoft has this to say about who AZ-900 is for and why:

Candidates for the Azure Fundamentals certification should have foundational knowledge of cloud services and how those services are provided with Microsoft Azure. This certification is intended for candidates who are just beginning to work with cloud-based solutions and services or are new to Azure.

And that it’s suited for the following roles: Administrator, Business User, Developer, Student, Technology Manager.

And the following for AZ-204:

Candidates for the Azure Developer Associate certification should have subject matter expertise in designing, building, testing, and maintaining cloud applications and services on Microsoft Azure.

And that it’s suited for the following roles: Developer.

My view on this

Microsoft’s page for AZ-900 mentions that “This certification is intended for candidates who are just beginning to work with cloud-based solutions”, but I have worked with Azure for about 8 years and I’d say that I still found value in preparing for this exam. There’s always the possibility that you have gaps in your knowledge depending on what you do. For instance, 95% of my projects contain an App Service (either web or Function) and a storage solution of some sort, but I’ve never worked with Virtual Machines or the IoT offerings. Taking this exam has given me a deeper understanding of what some services might be used for, increasing my ability to pick the right service for a project.

The roles that Microsoft deems AZ-900 suited for are Administrator, Business User, Developer, Student, and Technology Manager. Sales might arguably be covered under Business User, but as a consultant I’d definitely add it explicitly to this list.

What’s the actual difference?

In short: AZ-900 is about when you would use a specific service, whereas AZ-204 is about how you would use it. To pass AZ-900 you would be expected to know how to choose between a Service Bus and an Event Hub for a specific scenario, while for AZ-204 you would be expected to know which Cosmos DB consistency level to choose for a scenario.

Which one is right for me and my role?

Should I as an experienced developer do AZ-900?

I’d argue yes, for the following reasons:

  • I’ve mentioned above that even if you’re used to Azure, you’re probably not used to all the services. This exam will expose you to most of the core services, giving you a better ability to make decisions.
  • A lot of the services could technically be used to accomplish the same goal. On the surface, most of the Event, Messaging, and even IoT services could be used to build an event-based system, and if you’ve used Service Bus (Messaging), then you might simply grab that from your toolbox even when Event Hub is the appropriate choice.
  • AZ-900 covers quite a bit of Azure identity services and governance, which is something that you might need a crash course in if you purely do app development. Not knowing this could come back to bite you quite hard in the future (insecure apps or infrastructure etc). Studying for AZ-900 might give you the insight that you don’t know as much as you thought about these topics.

Should I as a beginner/junior/student do AZ-900?

Most definitely! I’d say that studying for AZ-900 is a really good crash course that will prepare you for working as a developer, or any other role that’s mentioned above.

Should I who work in an infra-focused role do AZ-204?

I’d say that AZ-204 is focused even more on the infrastructure parts than on what I would call “pure” dev. There’s quite a lot more focus on SKUs and configuring services than there is on code examples and SDK usage.

Should I as an experienced developer do AZ-204?

I’d say that for me personally, AZ-900 was more rewarding than AZ-204. The reason for this is that AZ-900 “opened my eyes” to some of the services that I’d never used, whereas AZ-204 taught me in more depth how to configure them. But to me the most important part was knowing when to use them, not how to configure them, as that’s something I could figure out once I’d decided to use a specific service.

This is not to say that AZ-204 isn’t valuable; it forced me to sit down and actually study the minutiae of the services, which in some cases surfaced things I didn’t know. But I don’t think it provided the same value for my day-to-day job as AZ-900 did.

Will passing these help me get a job?

Perhaps.

  • If you’re applying for a position with a Microsoft partner, then having one or both of these certifications could be the edge that pushes you ahead if all else is equal between the candidates, as having certified employees is an important part of maintaining a Microsoft partner level.
  • Having one or both of these on your CV might steer your interview(s), telling the interviewer something about your level of expertise, which could help them make better use of the perhaps limited time for the interview.

How hard are they to pass?

I’m preparing separate blog posts on how to study for each of the exams. But I’d say that AZ-900 is something that you might be able to pass after studying each night for a week, even if you started from almost zero. The Pluralsight course for AZ-900 is 9 hours 33 minutes at 1x speed and I’d say that it covers most of what you need to know (at least judging from the parts that I viewed).

AZ-204 was a bit different for me, as a lot of the exam actually revolves around using Az PowerShell and the az CLI, which I mostly only use for things that aren’t possible through ARM templates, or things that I do often enough that the portal becomes a hassle. I’d say that I spent about 15-20 hours preparing for this.

Could I as an experienced developer book a session for the AZ-900 exam and pass it without studying?

Perhaps, but I’m fairly certain that I wouldn’t have. And besides, for me the AZ-900 was about filling the gaps, not getting a badge (a fundamentals badge isn’t that cool after years of experience anyway).

Generate Swagger file to disk with Swashbuckle in DevOps

Actual real life problem

We’ve got a .NET Core project with a web API, and we’re using Swashbuckle to generate Swagger/Open API documentation for it.

Now we want to integrate this into our API Management instance, using a custom DevOps task (not important for this post) which requires us to have a Swagger/Open API file on disk.

Solution

First things first, how do we generate a swagger.json file? (The one you configure using SwaggerEndpoint("route", "description") in your Startup.Configure is dynamically generated and not stored anywhere.) It turns out that Swashbuckle has a tool for precisely this that we can use. More info about using dotnet tool can be found here.
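
To make this concrete, here’s a minimal local sketch (assuming a netcoreapp3.1 project called My.Project with a Swagger doc named v1, matching the pipeline below):

# install the Swashbuckle CLI as a local dotnet tool
dotnet new tool-manifest
dotnet tool install --version 6.1.4 Swashbuckle.AspNetCore.Cli

# build the project, then write swagger.json for the "v1" document to disk
dotnet build -c Release
dotnet swagger tofile --output swagger.json bin/Release/netcoreapp3.1/My.Project.dll v1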

DevOps

What would this look like when we want to integrate it into our DevOps build pipeline? Let’s assume that, as above, you’ve generated a tool manifest file by running dotnet new tool-manifest in your project folder, and added the Swagger tool to it by running dotnet tool install --version 6.1.4 Swashbuckle.AspNetCore.Cli.
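
For reference, the resulting .config/dotnet-tools.json would look roughly like this (a sketch; the exact contents are whatever the commands above generated for you):

{
  "version": 1,
  "isRoot": true,
  "tools": {
    "swashbuckle.aspnetcore.cli": {
      "version": "6.1.4",
      "commands": [
        "swagger"
      ]
    }
  }
}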

If you aren’t using any dotnet tools in your pipeline right now, then you would probably need to add a task that restores the tools found in your manifest file. That would look something like this:

- task: DotNetCoreCLI@2
  displayName: 'Restore tools'
  inputs:
    command: custom
    custom: tool
    arguments: restore --tool-manifest My.Project/.config/dotnet-tools.json

In my example our project is found in a subfolder of the repository called My.Project, and our tool manifest file is called dotnet-tools.json and lives inside the folder .config (the default if you run dotnet new tool-manifest).

This installs the tool itself as a local (as opposed to global) tool. The next step is to generate the file, which would look something like this:

- task: DotNetCoreCLI@2
  displayName: 'Build Swagger v1'
  inputs:
    command: custom
    custom: swagger
    arguments:  tofile --output $(Build.ArtifactStagingDirectory)/swagger.json $(System.DefaultWorkingDirectory)/My.Project/bin/Release/netcoreapp3.1/My.Project.dll v1
    workingDirectory: My.Project

Notice:

  • We’re outputting the build file as swagger.json, and we’re dropping it straight into our $(Build.ArtifactStagingDirectory), which means that we will need to have a publish artifact task after this in the pipeline.
  • We’re using the assembly $(System.DefaultWorkingDirectory)/My.Project/bin/Release/netcoreapp3.1/My.Project.dll which means that we need to have a build task before this (release configuration).
  • The last part of the arguments is v1. This is because that’s the name of my Swagger doc as configured in Startup.ConfigureServices (see the sketch after this list); it’s also the default document name when you call services.AddSwaggerGen without any arguments. This would be different if you’ve set a custom document name.
  • We’re setting the workingDirectory to My.Project as this is where the Swagger tool is installed.
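
For reference, a minimal Startup.ConfigureServices registration matching the pipeline above could look like this (the title is just a placeholder for illustration):

using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddSwaggerGen(c =>
    {
        // "v1" is the document name passed as the last argument to "swagger tofile"
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "My.Project", Version = "v1" });
    });
}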

Result

This is what a complete DevOps pipeline file could look like for accomplishing this:

steps:

- task: UseDotNet@2
  displayName: 'Install .NET Core SDK'
  inputs:
    packageType: 'sdk'
    version: '3.1.x'

- task: DotNetCoreCLI@2
  displayName: 'Restore tools'
  inputs:
    command: custom
    custom: tool
    arguments: restore --tool-manifest My.Project/.config/dotnet-tools.json

- task: DotNetCoreCLI@2
  displayName: 'dotnet build'
  inputs:
    command: build
    projects: My.Project/*.csproj
    arguments: -c Release

- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: publish
    projects: My.Project/*.csproj
    publishWebProjects: false
    arguments: -c Release -o $(Build.ArtifactStagingDirectory)/My.Project/
    zipAfterPublish: True

- task: DotNetCoreCLI@2
  displayName: 'Build Swagger v1'
  inputs:
    command: custom
    custom: swagger
    arguments:  tofile --output $(Build.ArtifactStagingDirectory)/swagger.json $(System.DefaultWorkingDirectory)/My.Project/bin/Release/netcoreapp3.1/My.Project.dll v1
    workingDirectory: My.Project

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifacts'
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)/
    artifactName: My.Artifact

What this does is:

  1. Use .NET Core 3.1 (always good to be specific about which version you’re using).
  2. Restore our tool.
  3. Build our project.
  4. Publish our project using the release configuration to the artifact staging directory.
  5. Build our Swagger file to the artifact staging directory.
  6. Publish everything in the artifact staging directory as an artifact.

Versions used

  • .NET Core 3.1
  • Swashbuckle.AspNetCore v6.1.4
  • Swashbuckle.AspNetCore.Cli v6.1.4

Structured logging with Durable Functions and Application Insights

There’s a big difference between logging for debugging/development purposes and logging for monitoring your application. When developing, it’s quite easy to write a line containing something similar to log.LogInformation($"The code reached this function with id {id}"). This is valuable when debugging the code, but when you’ve deployed it and something behaves unexpectedly, and all you have to go by is thousands of lines of “The code reached this function with id X”, then it’s not that much fun anymore. In this case you would probably want a way of correlating different log entries to see the actual flow of the application, which could potentially show you why it’s behaving incorrectly. You would probably want to query the logs by the affected id, and filter on specific types of log entries.
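
As a minimal sketch of the difference (the property name Id is just an example):

// interpolated: the id is baked into the message string, so Application Insights
// only sees thousands of unique strings that are hard to filter on
log.LogInformation($"The code reached this function with id {id}");

// structured: the message template stays constant and Id is captured as a
// separate, queryable property (a custom dimension in Application Insights)
log.LogInformation("The code reached this function with id {Id}", id);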

Easy GitHub Pages index.html

Preamble

This came about when I asked myself the question “What would be the easiest/quickest way for me to get a company webpage up in order to satisfy Apple’s requirement for registering as a developer?”. I already have this blog hosted on GitHub Pages (source code), and although this setup is a bit convoluted (old installation of Jekyll, CSS build etc.), I still liked the idea of hosting the page as a simple repository, and thought that there had to be a way of doing that with a simple index.html file (there is).

Define security schemes for Swagger UI to try out authenticated endpoints

You’ve got an API, it’s secured by OAuth using the Client Credentials flow (typically used for server-to-server communication), and now you want to enable the consumers of your API to try it out in an authenticated way, directly from Swagger. This post is about an API that uses Client Credentials, but it could also be used as a starting point if you want to do the same while authenticating end users with the OIDC Authorization Code flow with PKCE.
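
As a rough sketch of the kind of Swashbuckle registration this involves (placed inside AddSwaggerGen; the token URL and scope name are placeholders, and the post itself goes into the details):

c.AddSecurityDefinition("oauth2", new OpenApiSecurityScheme
{
    Type = SecuritySchemeType.OAuth2,
    Flows = new OpenApiOAuthFlows
    {
        ClientCredentials = new OpenApiOAuthFlow
        {
            // placeholder identity provider endpoint and scope
            TokenUrl = new Uri("https://idp.example.com/connect/token"),
            Scopes = new Dictionary<string, string> { ["api"] = "Access to the API" }
        }
    }
});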

Secure your Swagger endpoints using basic authentication

What are we trying to do? We’re trying to lock down our Swagger endpoints (index.html, swagger.json) in order to prevent unauthenticated users from reading our documentation.
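
One common shape for this is a small middleware in front of the Swagger endpoints; a sketch with hardcoded placeholder credentials purely for illustration (assumes using System and System.Text):

app.Use(async (context, next) =>
{
    if (context.Request.Path.StartsWithSegments("/swagger"))
    {
        var header = context.Request.Headers["Authorization"].ToString();
        var expected = "Basic " + Convert.ToBase64String(
            Encoding.UTF8.GetBytes("user:pass")); // placeholder credentials

        if (header != expected)
        {
            // ask the client for credentials and stop the pipeline here
            context.Response.Headers["WWW-Authenticate"] = "Basic realm=\"Swagger\"";
            context.Response.StatusCode = 401;
            return;
        }
    }

    await next();
});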

Shared variables in docker-compose.yml

This post started out as me trying to solve a fairly straightforward problem (stated below), but I instead fell down the rabbit hole of YAML structure and concepts.
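
To give a taste of where it ended up, YAML anchors and compose extension fields let you define values once and merge them in (service and image names here are placeholders):

# the x- prefix makes this an extension field that compose itself ignores
x-shared-env: &shared-env
  DB_HOST: db
  DB_PORT: "1433"

services:
  api:
    image: my-api
    environment:
      <<: *shared-env    # merge the shared variables into this mapping
  worker:
    image: my-worker
    environment:
      <<: *shared-env
      WORKER_QUEUE: jobs # and add service-specific ones alongside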

Using MS SQL Server with Python and docker: MS SQL default database in docker

Problem

This is a continuation of this blog post. We’ve now come to the step where I want to add a default database when the container starts, for my application to connect to. When doing this with .NET Core there’s an option for creating a database if it doesn’t exist, but no such luck for us here. I also want this to be the responsibility of the database container, and not of the application layer.

Using MS SQL Server with Python and docker: MS SQL in docker compose

Problem

I’m currently in a team that’s developing an app with a Python backend that’s deployed to Azure with an MS SQL Server as its database, but uses an SQLite database for local development. This creates issues, since the two have different capabilities and slightly different syntax/rules/keywords.

C# and JS to Python: Scoping in Python

Problem

I have a function that should trigger a callback, lambda, Func, Action (or whatever name your language of choice uses for this concept). I also want to write a test to make sure that my function triggers this callback under certain conditions. The least invasive way I could think of to test this is to declare a variable in my test, and supply a callback that sets this variable when triggered.

Variance with generic classes and interfaces in C# (type matching, type guarding)

Actual real life problem

I’ve got a .NET (Framework) MVC application (using EPiServer) where most (but not all) views are typed to a model of type PageViewModel<T> where T : PageData. The feature to be implemented is to render a property of PageData whenever we’re rendering a view that’s typed to PageViewModel<T> where T : PageData, but do nothing for other views. An instance of T is available on the model. The property should be rendered in the “root” of the page, and not alongside the rest of the view-specific rendering.
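
The crux is that C# only supports variance on interfaces (and delegates), not on classes, so a sketch of the shape of a solution could look like this (PageData is stubbed here, standing in for the EPiServer type, and the helper names are hypothetical):

public class PageData { } // stand-in for EPiServer's PageData

// covariance ("out T") lets an IPageViewModel<ArticlePage> be treated
// as an IPageViewModel<PageData>, which a class PageViewModel<T> cannot be
public interface IPageViewModel<out T> where T : PageData
{
    T CurrentPage { get; }
}

public class PageViewModel<T> : IPageViewModel<T> where T : PageData
{
    public PageViewModel(T page) { CurrentPage = page; }
    public T CurrentPage { get; }
}

// in the root layout, roughly: render only when the model is some page view model
if (Model is IPageViewModel<PageData> vm)
{
    RenderTheProperty(vm.CurrentPage); // hypothetical rendering helper
}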

Deploying to Azure using GitHub and Visual Studio Team Services

Currently I have a decently small project that consists of a React SPA that’s built using webpack; the SPA communicates with an API that’s built on .NET Core (2). The code for these two units lives in the same repo on GitHub, and they are built on, and deployed to, two separate App Services on Azure when code is pushed to specific branches. The SPA is built with Kudu, which runs npm install and a build script.

Xamarin UserControl instantiation and binding context

Having worked with React for the last 3 years has made Components my favourite way of structuring code. It might have been 5-7 years since I worked with WPF and Silverlight, but I do remember them having UserControls as a concept, which is fairly similar to Components. This term doesn’t seem to be used as much in the Xamarin camp; instead they seem to talk about Views, which makes finding information that much harder as the term View could mean any kind of view.

IdentityServer4 with Azure KeyVault

So you’re using IdentityServer4 in your .NET Core application, which you’d also like to deploy to Azure. You’ve been using .AddDeveloperSigningCredential() to create keys for signing your tokens, and you’ve figured out that this is no good in a production environment. Maybe you’ve been thinking about generating a certificate yourself and deploying it with your app, but that doesn’t seem like a good solution since it limits your ability to do key rollover (you’d have to redeploy your application), and it might also seem like a bad idea to read a file from disk in a cloud application. Perhaps you could use Azure Blob Storage to store the file and make sure that only your application can read it? You’d still have to manually generate the certificate and upload it, though. You might have seen a few blog posts suggesting that you use the App Service built-in certificate store, but you’ve also seen posts about how it’s deprecated and we don’t know how long it’ll be around.

Conclusions of a React Native hackalong, abridged

I was recently asked to hold a React Native hackalong with the not so secret agenda of finding out if React Native was a good alternative for quickly prototyping native applications, if it was viable for production applications, and how fast a web/React developer could get up to speed with it.

Validate authorization policy in MVC 6 vNext, ASP.NET 5

Background

ASP.NET vNext now supports creating authorization policies through code. What this means is that you no longer have to copy-paste your role/claim names all over your application (or build your own system around this). You can instead declare policies in a central place and authorize based on these, allowing for easier/safer refactoring.

Custom cultures and multi-tenant localizations in .NET

I’m currently converting a web application into a multi-tenant solution for a client at work. Handling translations in the app is currently done using resource (.resx) files to build a master resource collection. The client can then translate any resource into a specific language using a separate tool that creates satellite assemblies, which are then dropped into the bin folder, just like Visual Studio does if you create a culture-specific .resx file. The application then sets the culture based on the Accept-Language header when it receives a request (unless it’s already been set). The user can override this inside the application.

JavaScript objects are hashmaps: how to clone and/or merge them

I love JavaScript, I really do. One of the things I really enjoy about the language is its dynamic nature, where objects are actually hashmaps, associative arrays, dictionaries, or whatever you want to call them. This gives us the ability to get and assign properties using indexing and property names instead of the dot syntax used in most languages (though we can use that too), and we can also enumerate over an object’s properties, which is key to what this post is about.

Augmentation factory pattern

The augmentation factory pattern is an extension of the normal factory pattern for working with dynamic languages. It can be thought of as a way to get interfaces into JavaScript, or a way to deal with inheritance without using any actual inheritance. I do not find it helpful when working with class-based languages, but I do use it extensively when writing JavaScript.

Things you might forget how to do in SQL (Server)

My goal has always been to be a “full stack developer”, and this is also something that is required of me in my current position. But there used to be a time when I spent most of my energy on the top layers of the application, i.e. UX, APIs and business logic, and less time with SQL. The aim of this post is to document some of the things in SQL Server that I’ve had to learn more than once due to using them too seldom; they should hopefully be stuck in my head by now though. Remembering things I have forgotten isn’t the easiest task, meaning that I’ll be updating this post whenever I remember something new.

Perfect height-width ratio in fluid css layouts

I wrote Tily, a CSS framework for creating fluid, perfect-ratio Windows 8/Windows Phone style tiles in web applications, about a year and a half ago. It’s taken this long for me to write this post explaining how it works, but hey, here it is.

Compact datatransfer experiment

I create a lot of different APIs in my work, and several of them send out fairly large arrays of objects where the property names of these objects are the bulk of the data being transferred. This is mostly mitigated by compressing the data, but I figured, why not try and see if I can shrink it down a bit more. Most of the APIs are consumed on mobile devices and every second counts. This means that any byte we can save over the line has to be weighed against the cost of reconstructing the objects on the client.

Whitespace in HTML matters

Inline elements in HTML are elements that flow in the document’s direction (ltr/rtl) until they hit the edge, and then they line break, just as text in a book or a newspaper. When rendering such elements (spans, or display: inline, inline-block etc.) the browser will treat the whitespace between them as having a width of 0.25em of the parent element’s font-size, assuming that they aren’t floated or taken out of the normal flow in any other way. Elements without any whitespace between them will be treated as a single continuous element (each keeping its individually applied styling) until hitting the edge of the parent.

Json.NET is awesome

99% of the work that I do involves ASP.NET Web API and ASP.NET MVC in some way; I normally build applications using KnockoutJS on the frontend and ASP.NET Web API as, well, a web API. Json.NET is the default JavaScript/JSON handling library in ASP.NET MVC and Web API. It normally performs its magic under the hood without us having to worry about it. For instance, it’s responsible for serializing your C# object to JSON when you do return Request.CreateResponse(object); in Web API and the request’s Accept header is set to application/json.
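
For reference, the same serialization can be done explicitly with Json.NET; a minimal sketch:

using Newtonsoft.Json;

var json = JsonConvert.SerializeObject(new { Id = 1, Name = "example" });
// produces {"Id":1,"Name":"example"}, the same JSON Web API returns under the hood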

Repurpose, rename ASP.NET solution and projects

Exciting times at Stratiteq: I’ve gotten my first “new” application where I will be tech lead. It’s an application built for a specific part of the customer’s service offerings, but it turns out that it would be perfect for other services that the customer offers. The team’s job will be to adapt the application to allow for several services.

Automate and livereload a Jekyll site with LESS using Grunt

I’ll be the first to admit it, my “workflow” is/was really ancient. My professional work is 99% done in Visual Studio, which has made me a bit spoiled since it handles almost anything you want it to do. Work that I do for fun, on the other hand, is mostly written with Sublime Text. This site for instance is built with Jekyll, and I use LESS as a preprocessor for its stylesheets.

Relationship of jQuery data and data- html attributes

I have noticed a pretty common confusion, amongst developers that are new to building frontend bits, about what the jQuery data function and HTML data- attributes are meant to do. This can also be confusing for developers that got into jQuery before 1.4.3 (like me), as the data function was extended in this version.

Transactional repository pattern

Why a new pattern?

I should probably begin by saying that I have no idea if there is any prior art for this; it’s something that I just came up with while building the back-end for a new application. It is meant to simplify making a sequence of repository calls from your business logic while still keeping it transactional. It is yet to be fully battle tested, but initial tests look like it’s working.

Extend Dapper Dynamic Parameters to send Table Valued Parameter to SQL Server

So you want to pass a list of something to an SQL stored procedure/query using Dapper? And you want to receive it as a Table Valued Parameter? Sounds great. You might feel a little bummed that this isn’t supported out of the box in Dapper, meaning that you can’t simply put an IEnumerable as a param in the dynamic param object that a Dapper query accepts. But fear not, it’s pretty easy to do yourself, depending on how thorough you want to be.
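
For a sense of where this lands, newer Dapper versions ship an AsTableValuedParameter extension over a DataTable, which is roughly the mechanism involved (the type and column names are placeholders matching a user-defined table type):

// SQL side, created once: CREATE TYPE dbo.IntList AS TABLE (Id INT)
var table = new DataTable();
table.Columns.Add("Id", typeof(int));
foreach (var id in ids) table.Rows.Add(id);

var parameters = new DynamicParameters();
parameters.Add("@Ids", table.AsTableValuedParameter("dbo.IntList"));

var rows = connection.Query<MyRow>(
    "SELECT * FROM SomeTable WHERE Id IN (SELECT Id FROM @Ids)",
    parameters);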

Structuring LESS for multi-site projects

We sometimes find ourselves in the position of having a single project that exposes several sites, most commonly a large brand with several smaller brands in the same corporation. This is yet another area where LESS shines in comparison to writing plain old CSS. What we can do here is modify the pattern of having a “main” or “master” LESS file that includes other LESS “modules”: the easiest thing to do is to have several “main/master” files, one for each brand. These in turn will look the same but include different brand-specific LESS files, among other LESS files that they share. For instance brand(a)(b)(c)colors.less files that each define the @color-primary variable, with different colors.

Fiddler doesn't show WebClient request inside of MVC action on IIS

I pretty much always run my .NET applications on a local IIS server instead of using Visual Studio’s built-in server, and I also use the local IIS for debugging. The reasons for doing this are part performance, part not racking up VS debug server instances, and part wanting to run the application from my devices over the network.

Anatomy of a web UI framework, Part 1: the grid

After a year of thoughts and a few projects regarding modern UI in web applications, I now feel that I’ve got enough material to assemble a new web framework. There might be a few other ones out there trying to accomplish the same task, but I feel that most of them are a bit too static and not responsive enough to fit all devices, which is what I’m trying to create. I created Tily a while back, trying to build an incredibly flexible/fluid tile system. And I feel that I did just that, but a tile system alone does not a modern UI make. This time around I’ll take my time trying to get it right, starting with the basic grid.

Installing Jekyll on Windows
  • Use RailsInstaller with Ruby 1.9 for a painless installation. http://railsinstaller.org/en
  • Python 2.7.5 for Pygments.
  • setx path "%path%;c:\Python27". Restart the console.
  • Don’t forget to set the HOME path if it’s currently set to a network drive: SET HOME=%USERPROFILE%
  • gem install jekyll
  • gem install pygments.rb. Only use 0.5.0 as of writing: http://stackoverflow.com/questions/17364028/jekyll-on-windows-pygments-not-working

SQL Server BETWEEN is not zero based

A small gotcha that happened to a colleague today: a simple search for paginated content went wrong.

Dension GW 500S BT in a Porsche 997

What’s the first thing you do when you buy a new car? Upgrade the stereo to support A2DP phones, of course.

Don’t add duplicate MIME types to an IIS site

It is for some reason possible to add duplicate MIME types through inheritance to a site running on IIS. Doing this will cause 500 server errors for statically served files.

Limitation in Sitecore Multilist field

There seems to be a limitation on visible items in a Multilist field when setting the source to a Sitecore node such as /sitecore/content/mysite. I didn’t count, but it seems to “only” show the first 50 subitems.

RedirectToAction and the ViewBag

This should have been fairly obvious, but we cannot expect the ViewBag to persist, since a RedirectToAction only serves the client with a 302 redirect HTTP response to the new route. A solution for this is to use TempData["key"], a dictionary that is persisted in the user’s session, meaning that we can retrieve it in the new action or controller or wherever the receiving end is.
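
A minimal sketch of the pattern (action names are just examples):

public ActionResult Save()
{
    TempData["message"] = "Saved!";        // survives the 302 redirect
    return RedirectToAction("Confirmation");
}

public ActionResult Confirmation()
{
    ViewBag.Message = TempData["message"]; // read (and removed) on the next request
    return View();
}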

Less !IMPORTANT mixins

I thought I should share this even though I’m generally against using !important in CSS rules.

Manually removing ContentTypes from Orchard can be tricky.

I was experimenting with a new content part widget tonight, and when I was done I had a couple of UpdateFromX methods in my migrations.cs file. So I thought to myself, this doesn’t look very good, let’s delete everything from the DB and then combine this into a pretty Create method instead. At first I thought, hey, maybe I’ll only have to remove the table that stores the records themselves. This was of course not enough. I realized that I also had to remove it from the Orchard_Framework_DataMigrationRecord and Orchard_Framework_ContentTypeRecord tables. And that would have been it, had I not been clever enough to actually create a couple of widgets and place them on the layer that displays on my homepage.

Removing zone div wrapper in Orchard

If you have ever tried building a new Orchard theme from scratch and, like me, you like full control of your markup, then perhaps you have noticed the extra “zone-name” div that Orchard adds around every zone.

XSLT rendering with sc:image in for-each

It was a really easy task: display the ImageField and GeneralLinkField for every item selected in a MultiListField.