Thursday, March 30, 2006

Multiply Loaded .Net Assemblies

I have a .Net web application that seems to use an inordinate* (*Outrageously high) amount of server memory. So, using Process Explorer (downloaded from SysInternals), I examined the ASP.Net worker process (aspnet_wp.exe), displayed its loaded DLLs, and noticed that most of the assemblies in my application are loaded multiple times, which probably explains the memory usage.


As usual I have no cure, just a frightening diagnosis. So I did some exploring on the Inter-Web* (*My name for the WWW) and found Shawn Farkas' blog on .Net security, including an article on how security evidence will cause an assembly to load once per evidence set. Of course I am left confused, as my .Net security knowledge is weak.
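
As I currently (mis)understand the article, the scenario is something like the sketch below: the same assembly loaded with two different evidence sets counts as two loaded assemblies. The assembly name here is hypothetical, and Assembly.Load(string, Evidence) is the 1.x/2.0-era overload:

using System.Reflection;
using System.Security;
using System.Security.Policy;

class EvidenceLoadDemo
{
    static void Main()
    {
        // Two different evidence sets for the same assembly name.
        Evidence intranet = new Evidence();
        intranet.AddHost(new Zone(SecurityZone.Intranet));

        Evidence internet = new Evidence();
        internet.AddHost(new Zone(SecurityZone.Internet));

        // If I read the article right, these produce two separate loads
        // of MyLibrary in the process, not one.
        Assembly first = Assembly.Load("MyLibrary", intranet);
        Assembly second = Assembly.Load("MyLibrary", internet);
    }
}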


I have also heard that thread culture settings can cause assemblies to load multiple times, but I have not confirmed this behavior. (A sketch of how I might test it is below.)
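
The test would go something like this: switch the thread's UI culture, pull a resource, and watch in Process Explorer whether each culture adds another satellite assembly to the process. The resource names and cultures here are made up:

using System.Globalization;
using System.Resources;
using System.Threading;

class CultureLoadDemo
{
    static void Main()
    {
        // ResourceManager resolves strings against the current UI culture.
        ResourceManager rm = new ResourceManager(
            "MyApp.Strings", typeof(CultureLoadDemo).Assembly);

        // Each culture used should pull in its own satellite assembly
        // (e.g. fr-FR\MyApp.resources.dll), one more loaded module each.
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("fr-FR");
        string french = rm.GetString("Greeting");

        Thread.CurrentThread.CurrentUICulture = new CultureInfo("de-DE");
        string german = rm.GetString("Greeting");
    }
}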


My quest is now to understand this behavior, and hopefully rectify it.

Wednesday, March 29, 2006

Upgrading NUnitAsp to .Net Framework 2.0

I want to use NUnitAsp with my VS 2005 web projects. So I added a reference to the 1.1-framework-compiled version in my test suite project. The code compiles, but the NUnit GUI doesn't find my tests. I suspect it may be possible to mix code from different framework versions, but perhaps I need to look at my .Net architecture books again. Anyhow, on the theory that mixing versions doesn't work, I proceeded to convert NUnitAsp to version 2.0. Of course, NUnitAsp has references to 1.1 libraries (NUnit.Framework, for example), so I also updated those references to point to 2.0 libraries. But still my tests don't show up. I could always create a 2003 GUI test project to test the 2005 pages, but I would really like everything together. Since it's only test cases, I wouldn't mind the requirement of a mixed environment.


I am not giving up on this one; I really like using NUnitAsp, even if I have to run a separate GUI-only test project.


Some further investigation revealed some interesting clues. Browsing the properties of my referenced libraries showed me that NUnitAsp is compiled against 1.0.3705, framework version 1.0! I don't even have that version installed! So... I guess it's not compiled against a specific framework version? How does this work? 2+ years of .Net and I don't even grasp the basics. Man!


After a couple of hours of messing with the NUnitAsp files and trying to get the supplied tests working, I discovered that NUnit was not seeing my GUI tests because the test fixture class was not set to public. That is twice now that I have been burned by the new default class visibility: Visual Studio 2005 generates new classes without an access modifier, leaving them internal (I hope there is a way I can change this code-gen setting), and I have obviously become too used to the VS 2003 defaults. In any case, I managed to get NUnitAsp working against my 2005 code, and I am happy* (*temporary condition only). I suspect that I could have used the 1.0 compiled version, and it would have also worked. For posterity, the gotcha looks like the example below.
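
A minimal illustration (the fixture and test names are made up). NUnit's test discovery only sees public classes, and the one missing word is easy to overlook:

using NUnit.Framework;

// VS 2005's class template generates "class GuiTests" with no access
// modifier, which makes the class internal, and NUnit silently skips it.
// Adding "public" is all it takes for the fixture to appear in the GUI.
[TestFixture]
public class GuiTests
{
    [Test]
    public void HomePage_Loads()
    {
        Assert.IsTrue(true); // placeholder; real tests drive the page
    }
}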

Tuesday, March 28, 2006

Red-Green-Refactor-Profile-Tune

When should code be tuned for performance? I have seen far too many applications built with little or no consideration for performance, or at best with performance left as an afterthought. Even the phrase 'performance tune' is problematic, because it assumes that performance problems can simply be 'tuned' away, and that they are trivial (which is far from the reality). Tuning code is rarely enough, as the fundamental design is usually the source of the problem. Should you design for performance up front, then? I've seen that case too, and it usually means a complicated design and poor usability (sorry, we can't do that, it would be too slow). So when do we worry about performance?


If "Premature optimization is the root of all evil", then what does premature mean? and when does it make sense to optimize. Most performance problems stem from a naive approach to design (it works for 10 records so it's good), and ignoring performance until it becomes a problem. I've seen applications that used XML to pass data between every object in the system. I've seen applications where every object is a COM object (I built an app like this once). What I also see is developers making excuses for these poor design decisions and I hear sentences like "It's too late to fix that now, that would require a full rewrite". I've been involved in far too many last minute performance 'blitzes', hacking away at code and patching up anything that can easily be fixed. I propose that optimization should be as constant as any other activity in an Agile development cycle.


Enter RGRPT. I propose an addition to Kent Beck's famous Red-Green-Refactor mantra: make optimization part of the refactoring process, so that the enhanced mantra goes something like Red-Green-Refactor-Profile-Tune. So, what does this mean? Here's how it works. Follow the same rules as TDD: write a test, define the desired interface, and write code to satisfy the test. Create a naive initial design; just make the test pass. Now define a performance goal and write a test for that too. The performance tests should probably run against a well-sized database with indicative data (a real customer database is ideal here). Run your performance test against the large database and, if it fails, profile it. I highly recommend the JetBrains profiler. It is important to use a profiler, as assumptions about where code is slow are very often incorrect. Tune the code, fix the design, do whatever is needed to get that test green. A performance test might look something like the sketch below.
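
Here is the shape I have in mind (the repository class, connection name, and the 250 ms budget are all invented for illustration); the point is that the performance goal is asserted just like any functional result:

using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class CustomerSearchPerformanceTests
{
    [Test]
    public void SearchByName_MeetsBudget_AgainstFullSizedDatabase()
    {
        // Hypothetical repository pointed at a realistically sized database.
        CustomerRepository repository = new CustomerRepository("fullSizedDb");

        Stopwatch timer = Stopwatch.StartNew();
        repository.SearchByName("Smith");
        timer.Stop();

        // The performance goal is a first-class assertion.
        Assert.IsTrue(timer.ElapsedMilliseconds < 250,
            "Search took " + timer.ElapsedMilliseconds + " ms; budget is 250 ms");
    }
}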


Using TDD (Test Driven Development) in the context of the Agile 'attitude' and my proposed RGRPT pattern ensures that the entire development process is followed in small, rapid cycles. Leaving performance tuning to the end is like leaving testing to the end, or writing the installer at the end. Ken Schwaber compares Agile development to 'sashimi', an 'all at once' software engineering methodology. Performance tuning is as much a part of that as testing, documentation, and any other product development activity.

Monday, March 27, 2006

Dude, where's my namespace?

So, lots has changed in Web Project land between VS 2003 and 2005 (see Scott's excellent summary explaining the changes). No more project files, no FrontPage Server Extensions (FPSE), no compiled binaries - and no namespaces! Yarrrgh!


For example, I created a UserControls folder and put a control in there. My new user control is named UserControls_ControlName. Underscore? What happened to my nice namespaces, like Web.UI.UserControls? I want my namespaces back, waahh. This isn't good; how do I organize stuff? How do I not get lost in a sea of files with a**x extensions? Sometimes there is a need to create non-visual 'helper' classes in a web project: classes that assist the UI and know all about the presentation layer. I don't want to create another project for these classes, and I don't want their names preceded with underscores. Well, maybe I can configure VS 2005 to give me back my namespaces, and my dlls too.


In my infinite wisdom I decided to take matters into my own hands: wrap my control class in a namespace and change the class name to remove the 'UserControls_' prefix. This was unwise. There is stuff happening that I don't yet understand. I suspect that classes in new VS 2005 web projects are not expected to be organized into folders, as tacking the folder name onto the front of the class name is just so absolutely ridiculous. I am afraid I am at a loss, and must succumb to the drag-and-drop development model for the time being. For the record, what I attempted looked roughly like the sketch below.
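
Here is roughly the shape of what I tried (all names hypothetical). My understanding is that the class name and the .ascx Inherits attribute have to agree exactly, which may be part of what bit me:

// Code-behind for a hypothetical control, wrapped in the namespace I wanted.
// The matching .ascx directive would need its Inherits attribute updated too:
// <%@ Control Language="C#" CodeFile="ControlName.ascx.cs"
//     Inherits="Web.UI.UserControls.ControlName" %>
namespace Web.UI.UserControls
{
    public partial class ControlName : System.Web.UI.UserControl
    {
        protected void Page_Load(object sender, System.EventArgs e)
        {
            // Control logic goes here.
        }
    }
}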

Sunday, March 26, 2006

Auto-numbering with MS Access

Well, here's my challenge, which I'm sure has been run into umpteen times by every developer who's used MS Access. I have auto-number primary keys in all of my tables, and when I add a record, I want to get that id back. In my research I found this solution (from the ADO.NET Cookbook by Bill Hamilton, O'Reilly).
First, create the data adapter,
da = new OleDbDataAdapter(sqlSelect, connectionString);
where sqlSelect is the select statement that retrieves rows from your table. Next, add your insert statement and parameters. Then attach an event handler to the RowUpdated event,
da.RowUpdated += new OleDbRowUpdatedEventHandler(OnRowUpdated);

which will fire for each insert. Lastly, add the event handler:

private void OnRowUpdated(object sender, OleDbRowUpdatedEventArgs args)
{
    if (args.StatementType == StatementType.Insert)
    {
        // Ask Jet for the autonumber generated by the insert that just
        // ran, on the same connection the adapter used
        OleDbCommand cmd = new OleDbCommand("SELECT @@IDENTITY", da.SelectCommand.Connection);
        // Store the id back into the DataRow
        args.Row[ID_FIELD_NAME] = (int)cmd.ExecuteScalar();
    }
}
So, this looks like a lot of work to me (and I cut a lot of code out of the example above, too). I looked at it and thought, "why don't I just call that SELECT @@IDENTITY statement after my insert?" So I cut out the event stuff (I felt like I was losing some automatic protection and handling, but the feeling soon left) and the code seems to work. Something like the sketch below. But I'm still hitting the database twice for an insert!
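
A minimal sketch of the simplified approach (the table and column names are invented). The one thing that matters is that SELECT @@IDENTITY runs on the same open connection as the insert:

// Assumes: using System.Data.OleDb; connectionString points at the .mdb.
private int InsertCar(string connectionString, string description)
{
    using (OleDbConnection conn = new OleDbConnection(connectionString))
    {
        conn.Open();

        OleDbCommand insert = new OleDbCommand(
            "INSERT INTO Cars (Description) VALUES (?)", conn);
        insert.Parameters.Add("@Description", OleDbType.VarWChar).Value = description;
        insert.ExecuteNonQuery();

        // Second round trip: Jet returns the autonumber generated by the
        // last insert on this same connection.
        OleDbCommand identity = new OleDbCommand("SELECT @@IDENTITY", conn);
        return (int)identity.ExecuteScalar();
    }
}
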
In SQL Server I can pass the id back as an output parameter and hit the DB once. My suspicion is that I can't use auto-number ids and have one-hit inserts. My next option is to use the Identity Field pattern (Patterns of Enterprise Application Architecture, Martin Fowler et al., Addison-Wesley).
I haven't implemented this solution yet, and I am nervous about taking identity control out of the database and implementing it manually, as I will probably do it wrong and spend my remaining years debugging Ids.
Another option is to use GUIDs as my ids; however, I suspect GUIDs cause slow retrievals from Access.
Here are some results from some preliminary tests. I created an MS Access 2000 database with two tables: one using a text(64) field to store GUIDs as the primary key plus a text(50) data field, the other with an autonumber primary key plus a text(50) data field. I created a test to add 1000 records to each table and then retrieve a record using the primary key. The timings came out as follows:

Key technique    1000 inserts    Select by key
GUID             46922 msec      62 msec
Autonumber       38469 msec      16 msec

So, the autonumber technique was faster for both inserts and retrieves. I was surprised to see the insert numbers were better, since every autonumber insert also comes with an identity retrieve using "SELECT @@IDENTITY"; I suppose sorting the text strings is much less efficient, probably due to conversions and compares. The retrieve was almost 4 times faster! So, while the GUID technique is very convenient for development, it is very slow. I suspect I can improve even on the autonumber technique by using the Identity Field pattern.


Well, I did a test with an Identity Singleton and got about the same result as the GUID tests: 46906 msec for the 1000 inserts, and 47 msec for the retrieve. I can't explain these results, but it looks like the autonumber technique is the way to go. For reference, the general shape of an Identity Singleton is sketched below.
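
A rough sketch of the Identity Singleton idea (all names are hypothetical, and this single-threaded version skips the locking real code would need). It seeds itself from the table once, then hands out ids from memory:

using System;
using System.Data.OleDb;

public sealed class IdentityGenerator
{
    private static IdentityGenerator instance;
    private int nextId;

    private IdentityGenerator(OleDbConnection conn)
    {
        // One query at startup to find the highest id already in the table.
        OleDbCommand cmd = new OleDbCommand("SELECT MAX(Id) FROM Cars", conn);
        object max = cmd.ExecuteScalar();
        nextId = (max == DBNull.Value) ? 1 : (int)max + 1;
    }

    public static IdentityGenerator GetInstance(OleDbConnection conn)
    {
        if (instance == null)
        {
            instance = new IdentityGenerator(conn);
        }
        return instance;
    }

    // Hand out the next id from memory; no database hit per insert.
    public int NextId()
    {
        return nextId++;
    }
}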

Friday, March 24, 2006

Car Care Calendar

Well, my first post is going to be a pitch for my new web site, the Car Care Calendar hosted web application. It is currently being developed (.Net 1.1), and of course it's taking a lot longer than I was hoping. Naturally I'm trying to build it 'right', keeping on top of security issues and performance.
So, just getting started was pain indeed. I wish creating projects in Visual Studio 2003 were easier. Since I have such a limited short-term (and long-term) memory, I always forget what "automatic" stuff VS creates when you start a new project. I invariably create a new (duplicate) directory one level lower than I wanted, and have to move the project file etc. up a level. The solution file ends up in some Documents and Settings user folder, completely disassociated from the project files. Now whenever I open the solution, VS complains that it was unable to refresh some folder or whatever. It always takes me far longer than expected to move stuff, edit the .sln and .csproj files, verify, and reopen before I ever start coding. It would probably be worthwhile creating a VS macro to automate this... but of course, once I get set up I quickly forget about the setup pain as I begin to feel the onset of new pain.
Tools I'm using include NCover. It's an open-source project, so I cut it a lot of slack. Right now I've set up a batch file to instrument my files, compile them, and produce a test coverage report. NCover is not great at dealing with errors, like invalid file paths, so it took me a fair amount of trial and error to get it working. As I'm writing this, I'm looking at the NCover site and it looks like they have a new version, so maybe it's friendlier. NCover has NAnt support, and I suspect that works much better than batch files.
I've also set up Subversion to help me back out of my usual plethora of bone-headed mistakes. As a one-man development operation it seems strange to use a source control system, but with my history of wackhackery, a source control system is a minimum must-have. I'm still new to Subversion. I like it so far... (sort of; why are my folders always red, and why do I have to update so much?). I suspect I will be blogging more on SVN soon. It is not yet set up to run as a service, which I would also like to do.
Of course I'm using NUnit. Love this tool.
And NUnitAsp, which I really like and think has great potential, but which seems to have lost momentum? If I had more time* (*If I was smarter, and knew how browsers worked), I would join this SourceForge project and get it back on track. I need NUnitAsp for .Net 2.0 and all of the fancy new ASP.Net 2.0 controls.
I would also like to introduce a profiler, maybe NProf? And probably CruiseControl.Net, which I am essentially afraid of, basically due to what I remember reading about configuring it. But I will conquer my fear over time, get CCNet up and running and cruising and notifying, and blog extensively about all of my pain with that too.