Wednesday, October 31, 2007

PowerShell Script for AES Key Generation

I have to constantly generate AES keys for the numerous SSO requests that we receive from our clients.  The keys are used for message level security, and they're really the biggest headache we have when it comes to setting up SSO for a new client.  Everything after that is a breeze (a simple database entry).

I used to use one of the unit tests that exercises our cryptography code for this task.  I would set a breakpoint where the AES algorithm was instantiated and then inspect the value of its Key property.  However, a short PowerShell function has now made this much easier.

   function GenerateAesKey {
      # Instantiating the algorithm automatically generates a random key
      $algorithm = [System.Security.Cryptography.SymmetricAlgorithm]::Create("Rijndael")
      $key = [System.Convert]::ToBase64String($algorithm.Key)
      Write-Output $key
   }
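For anyone outside the .NET world, here's a rough Python sketch of the same idea.  The 32-byte size is an assumption that matches the 256-bit key .NET's Rijndael implementation produces by default; Python's `os.urandom` stands in for the framework's built-in key generator.

```python
import base64
import os

def generate_aes_key(key_size_bytes=32):
    """Generate a random AES key and return it Base64-encoded.

    32 bytes matches the 256-bit default key size of .NET's
    Rijndael implementation.
    """
    # os.urandom returns cryptographically secure random bytes
    key_bytes = os.urandom(key_size_bytes)
    return base64.b64encode(key_bytes).decode("ascii")

print(generate_aes_key())
```

The Base64 step is the same trick as in the PowerShell version: it turns raw key bytes into something you can paste into a config entry or an email.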

Friday, October 19, 2007

The Functional Programming Renaissance

At this point, I don't think it's a stretch on my part to say that most "experts" in the computing industry accept that we have just about reached the speed limit enforced by the inherent physical limitations of modern processor architecture.  I could link to any number of news and magazine articles that say as much, but corroboration of factual statements found in blogs is an exercise for the reader.  In any event, the design modification du jour in the chip industry is clearly increasing the number of processor cores, rather than the old school method of advancing clock frequency.

What all of this means for developers is that there are no more free lunches when it comes to the performance of their applications.  The automatic performance gains that came with processor upgrades are a thing of the past since processor speed will remain largely static.  Therefore, in order to make our applications scream, we will have to consciously work at making them take advantage of the multiple computing cores available.  However, parallel computing is an area of computer science that many programmers have no experience in.

Thankfully, the computing industry is already hard at work trying to make sure that the transition to parallel computing won't feel like a step backwards.  On the .NET side of the house, Parallel LINQ (PLINQ) and the Task Parallel Library (TPL) are currently under development to help make our lives easier.  However, while these frameworks are not necessarily hard to integrate into existing code and coding habits, they still require extra effort on the part of the developer, who has to be aware of the issues involved (e.g. exception handling and list ordering, to name a couple).  In short, they feel more like a bit of duct tape applied to existing technologies in order to make developers feel more comfortable.  While I can certainly appreciate the sentiment, I think the long-term solution is going to be much more dramatic.
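To make the list-ordering concern concrete, here's a small Python sketch (PLINQ itself targets .NET, so this is only an analogy): a thread pool's `map` hands work to multiple workers but still yields results in input order.  That's exactly the kind of guarantee a developer now has to check for in a parallel framework rather than take for granted.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

numbers = list(range(10))

with ThreadPoolExecutor(max_workers=4) as pool:
    # map() distributes the calls across workers but yields results
    # in the order of the inputs, regardless of which worker
    # finishes first.
    results = list(pool.map(square, numbers))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Swap `pool.map` for a submit-and-collect-as-completed pattern and the ordering guarantee disappears, which is precisely why these details demand the developer's attention.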

(Re-)Enter functional programming.

Functional programs already have the inherent ability to be broken into discrete units of work that can be shuffled from processor to processor.  This creates a much-needed abstraction layer around the machinery of parallel programming, and leaves developers free to worry about the details of their designs instead.  Since functional programming is already a part of most developers' lives (via SQL and, very soon, LINQ), it won't be entirely foreign.  And of course, developers always love learning new technologies anyhow.
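That claim can be sketched in Python (standing in, under obvious assumptions, for what a functional runtime could do automatically): because a pure function carries no shared state, a list can be split into chunks, each chunk handed to an independent worker, and the partial results combined at the end.

```python
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(chunk):
    # Pure: reads only its argument and touches no shared state,
    # so chunks can be processed in any order, on any core.
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]

with ThreadPoolExecutor() as pool:
    partials = pool.map(sum_of_squares, chunks)

total = sum(partials)
assert total == sum(x * x for x in data)
```

Here the chunking and combining are written out by hand; the promise of a functional language is that, since purity is guaranteed, the runtime could make this partitioning decision for you.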

I know that I'm not the only person to recognize functional programming as the potential wave of the future.  Microsoft is expected to integrate F# into a future version of Visual Studio (not Orcas).