Monday, June 27, 2011

Microsoft Moles!

This is a very cool project from Microsoft Research:

http://research.microsoft.com/en-us/projects/moles/

I'm able to use this to "fake" or "mock" sealed classes inside of the ASP.NET runtime.

For example:

        [TestMethod]
        [HostType("Moles")]
        public void WhenCannotInterpretSdnUserKeyAsIntegerThenMustRedirectToGlobalErrorPageWithProperMessage()
        {
            // Arrange
            var cookies = new HttpCookieCollection { new HttpCookie(SdnUserKeyCookieName, "Gibberish") };
            var context = new MHttpContext();
            var request = new MHttpRequest();
            var response = new MHttpResponse();
            var redirectWasCalled = false;
            var redirectedToLocation = string.Empty;
            var responseEnded = false;
            response.RedirectStringBoolean = (string location, bool endResponse) =>
            {
                redirectWasCalled = true;
                redirectedToLocation = location;
                responseEnded = endResponse;
            };
            MHttpContext.CurrentGet = () => context;
            context.RequestGet = () => request;
            context.ResponseGet = () => response;
            request.CookiesGet = () => cookies;

            // Act
            _sdnAuthenticator.Process(context);

            // Assert
            Assert.IsTrue(redirectWasCalled);
            Assert.AreEqual("/Error.aspx", redirectedToLocation, true);
            Assert.IsTrue(responseEnded);
        }
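
For context, here is a rough sketch of what the method under test might look like. The cookie name, error page path, and class shape are assumptions inferred from the test above, not the real implementation:

    using System.Web;

    public class SdnAuthenticator
    {
        // Assumed values, inferred from the test; the real implementation may differ.
        private const string SdnUserKeyCookieName = "SdnUserKey";
        private const string GlobalErrorPage = "/Error.aspx";

        public void Process(HttpContext context)
        {
            var cookie = context.Request.Cookies[SdnUserKeyCookieName];
            int userKey;
            if (cookie == null || !int.TryParse(cookie.Value, out userKey))
            {
                // The key cannot be interpreted as an integer: redirect to the global
                // error page (the real code presumably also attaches a proper message).
                context.Response.Redirect(GlobalErrorPage, endResponse: true);
                return;
            }

            // ... continue authenticating with userKey ...
        }
    }

The MHttpContext, MHttpRequest, and MHttpResponse types used in the test are the generated mole types that let the test detour HttpContext.Current, the cookie collection, and the Redirect call without ever touching the real ASP.NET runtime.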

How cool is that? The moles are implemented as "detours" and replace components of the runtime when configured to do so.

This is great stuff.

Thursday, June 23, 2011

From 15-Minute Stand-Ups to Standing Workstations: How to Start a Trend

As many people involved with the various agile development practices know, one of the common practices is a brief "Daily Stand Up" meeting to discuss project progress, priority changes, and impediments.

When I joined my most recent team, the Epi-X program at CDC, I wanted to try a different kind of Stand Up. This time, I raised my monitors and my keyboard and I began standing for a large portion of the day to do my work. I don't claim any medical expertise, despite working at CDC, so don't take my counsel on this as anything scientific.

All I can say is that some studies, written about in popular articles, seem to indicate that constant sitting is detrimental to long-term health, including increased risks of obesity and heart disease. I should also say that other studies suggest there are health risks with constant standing as well!

So, for me, it is not that I stand at my workstation all day long. I am in various sit-down meetings and discussions, and have to walk to different areas to speak with people. And, I do lower my monitor and sit from time to time as well.

I'll continue this post later, but in the past week two people in my office have followed suit! So far, we just use boxes to prop up our equipment, but I'm strongly considering investing in a genuine table-top adjustable desk from http://www.ergodesktop.com.

One coworker who adopted this practice also bought himself a foot mat. I have been wearing sandals or occasionally kneeling on my chair.

Remember: Stay agile, not fragile.


Friday, May 20, 2011

MIX 2011 Presentations Reviews

Glenn Block's presentation on WCF and URIs from MIX 2011 is very good. Mike Simpson also forwarded me a recent DotNetRocks episode in which Block discusses the WCF HTTP Web API.

MIX 2011 presentation: "There's a URI for That": http://channel9.msdn.com/events/MIX/MIX11/FRM14
DNR episode: "Glenn Block Simplifies WCF with WebAPI": http://www.dotnetrocks.com/default.aspx?showNum=661

Some highlights of what he demonstrates in the MIX 2011 presentation:
  • Microsoft is committed to delivering a first-class HTTP programming model for WCF
  • The strongly typed HttpResponseMessage<T> response type, which has full support for HTTP status codes, demonstrated with response.StatusCode = HttpStatusCode.Created to indicate a successful "resource created" status from the server to the client (see the sketch after this list).
  • The use of media types and the registration of any number of media type processors to handle incoming requests based on "extensions" supplied on the URI.
    • Gone are the days when an extension like ".aspx" or ".txt" was tightly coupled to the file system and to physical files on disk. Instead, these are now fully interceptable and processable by YOUR handler code in the way YOU WANT.
  • Using an OData producer resource, he showed Google's GMail contacts import dialog pulling down contacts from his WCF service in vCard format, as specified by an extension, and used an OData filter expression to limit the output to the top three results.
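
To give a flavor of that programming model, here is a rough sketch in the spirit of the Web API preview bits he demos. Treat it as illustrative only: the Contact type is made up, service registration and configuration are omitted, and the exact namespaces and constructors shifted between the preview releases.

    using System.Net;
    using System.ServiceModel;
    using System.ServiceModel.Web;
    // plus the WCF Web API preview assemblies, which supply HttpResponseMessage<T>
    // (the exact namespace varied across the preview drops)

    [ServiceContract]
    public class ContactsResource
    {
        [WebInvoke(UriTemplate = "", Method = "POST")]
        public HttpResponseMessage<Contact> Post(Contact contact)
        {
            // ... persist the new contact ...

            var response = new HttpResponseMessage<Contact>(contact);
            response.StatusCode = HttpStatusCode.Created; // "resource created"
            return response;
        }
    }

    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

The media type processors mentioned above plug into the same pipeline, which is how an "extension" like .vcard on the URI gets routed to your own formatting code rather than to a physical file.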
All in all, I am feeling increasingly confident that Microsoft's direction regarding HTTP and REST is on a solid path. While it is of course possible to build REST/HTTP-style Web APIs without WCF and without explicit support from Microsoft, the fact that Microsoft is supporting this provides two advantages:
  1. It increases the mind-share and desire amongst .NET developers to understand the web and to leverage these technologies,
  2. It provides an easier path toward enterprise-wide adoption, because management will begin to understand the benefits and see the backing of MS within their flagship WCF offering.
Perhaps "hard-core" ALT.NETters and "Restafarians" will say they've been doing things like this without MS support for years, and I would not argue with that. However, most of us work in a heterogeneous world that involves a lot of layers of management and risk-mitigation. So, the extra support at an official level from Microsoft's platform can only help the general adoption curve of web technology.
And, regarding developers and architects, these groups of stakeholders have a lot riding on their technical choices and often face an uphill battle when the major vendor of their company's tools isn't yet "on board".

A few of my colleagues at various companies and I have been building REST-style services (though I'd hesitate to call them full REST, given how little attention we paid to links and hypermedia constraints) for a few years, and we have often faced skepticism in the form of "That's not what WCF does." Those responses were well-founded at the time, but it's very nice to see how far MS has come in modernizing WCF to fully support the web and the HTTP specifications for all they offer and are worth.

For me this is a great step in the right direction, and I look forward to evaluating further the use of WCF HTTP WebAPI for backend resources / services.

Saturday, April 16, 2011

MIX 2011: WCF, OData, MVC, and MEF Highlighted Presentations

MIX 11 has finished. Here are the presentations that I will be "diving into" more quickly than others.


My Summary of Summaries:

The topics below caught my eye as priority because:
  • They build upon OData, and thus upon Atom and REST. REST is the foundational architecture of the WWW, yet it is still not widely understood throughout the development industry.
    • I'm likely to be joining a large project that will provide enterprise-wide alerting, notification, and reporting capabilities to a number of organizational sub-units. It's critical that interfaces to such services be simple and document-based, and that output data be consumable by end-user tools like Excel. OData support is "baked in" to the latest Office products and to SharePoint 2010 (see the short example after this list).
  • Regarding MVC and MEF: these two technologies are critical for modular, extensible web applications on the .NET platform. Being able to deploy sub-units independently within an overall application architecture is critical for ease of maintenance and extensibility, and it is also important in highly secure environments that require rigorous application scanning for security threats.
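
As a concrete, hypothetical illustration of how simple such an interface can be for consumers (the service URL, entity set, and property names below are made up), an OData query is just an HTTP GET that tools like Excel or PowerPivot, or a few lines of code, can issue:

    using System;
    using System.Net;

    class ODataQueryExample
    {
        static void Main()
        {
            // Hypothetical alerting service exposing a Notifications entity set via OData.
            var uri = "http://alerts.example.org/AlertService.svc/Notifications" +
                      "?$filter=Severity eq 'High'&$top=3&$orderby=IssuedDate desc";

            using (var client = new WebClient())
            {
                // The default OData response is an Atom feed that data-aware clients can consume directly.
                string atomFeed = client.DownloadString(uri);
                Console.WriteLine(atomFeed);
            }
        }
    }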
That's more than enough from me. I'll let the experts Castro and Block elaborate :)

Scott Guthrie's key-note, naturally:

Why: because it's The Gu. Period. Full-stop.

Pablo Castro on OData Roadmap: Powering the Next Generation of Services:

Summary:

At home and work, the way we experience the web (share, search and interact with data) is undergoing an industry-changing paradigm shift from "the web of documents" to the "web of data" which enables new data-driven experiences to be easily created for any platform or device. Come to this session to see how OData is helping to enable this shift through a hands-on look at the near term roadmap for the Open Data Protocol and see how it will enable a new set of user experiences. From support for offline applications, to hypermedia-driven UI and much more, join us in this session to see how OData is evolving based on your feedback to enable creating immersive user experiences for any device.

OData Roadmap: Exposing any Data Source as an OData Service: http://channel9.msdn.com/Events/MIX/MIX11/FRM16

Summary:

Many of the popular OData services, including Netflix, Twitpic and Facebook Insights were built by reusing their existing web API with an OData service. Implementing this type of OData service is not simple but it's also not as hard as you might think. In this session, you'll learn how to build similar services that wrap different types of data sources using the WCF Data Services Toolkit. We'll take a look at the implementations for several of the popular services as examples of how to use the toolkit to create new OData services.


Summary (ASP.NET MVC + MEF session):

So you have a team of developers… And a nice architecture to build on… How about making that architecture easy for everyone and getting developers up to speed quickly? Learn all about integrating the managed extensibility framework and ASP.NET MVC for creating loosely coupled, easy to use architectures that anyone can grasp.

OData in Action: Connecting Any Data Source to Any Device

http://channel9.msdn.com/Events/MIX/MIX11/FRM10


Summary:

We are collecting more diverse data than ever before and at the same time undergoing a proliferation of connected devices ranging from phone to the desktop, each with its own requirements. This can pose a significant barrier to developers looking to create great end-to-end user experiences across devices. The OData protocol (http://odata.org) was created to provide a common way to expose and interact with data on any platform (DB, No SQL stores, web services, etc). In this code heavy session we'll show you how Netflix, EBay and others have used OData and Azure to quickly build secure, internet-scale services that power immersive client experiences from rich cross platform mobile applications to insightful BI reports.

Glenn Block on WCF Web APIs: There's a URI for That: http://channel9.msdn.com/Events/MIX/MIX11/FRM14

Summary:

Web application developers today are facing new challenges around how to expose data and services. The cloud, move to devices, and shift toward browsers are all placing increasing demands on surfacing such functionality in a web-friendly manner. WCF's Web API makes it easy for developers to expose their services and data to a broad set of clients and to take advantage of rich emerging web standards like Web Sockets.



Tuesday, April 5, 2011

Delivery and Simplicity : Don't Leave Home Without These Agile Principles

Bootstrapping Agile from the Trenches

In February of 2006, I was offered the position of Lead Architect for the redevelopment of Epi-X, CDC's flagship secure communications platform for emergent disease outbreak notification and bi-directional collaboration between multi-jurisdictional public health authorities. However, on the same day, I was offered a position at a private .NET consulting company, Abel Solutions. Realizing that actual redevelopment of Epi-X was months, if not years, away due to the then very disruptive agency-wide reorganization, I decided to leave so that I could gain more experience in a variety of private-sector industries.

During the five years since I left Epi-X, I've worked as a senior software engineer, architect, lead application architect, and as an independent consultant. My first assignment with Abel Solutions was to re-architect and re-develop a very popular web-based electronic commerce & auction system to support more than 1 million registered users and the processing of more than 300 million dollars in annual sales. For a different company,  I re-engineered the security, object-relational, and querying architecture of a complicated human resources & payroll processing system used by thousands of companies. Most recently, I helped lead the design and development of both a modular user-interface architecture and the core service-oriented architecture for a new correspondence banking & ACH settlement platform to be used by hundreds of local and regional banks to conduct business more easily with the Federal Reserve and each other.

For the companies sponsoring the first two projects mentioned above, I introduced and led the successful adoption of Agile management and development practices. For the third, I was recruited specifically to consult both on their adoption of Agile and on the design of the new system's user-interface and service-oriented architecture.

I've also consulted with many other private entrepreneurial businesses about technology strategy, and in 2008 founded both the Atlanta Science Tavern and the ATL ALT.NET community groups.

Aligning the Agile Approach to the Business Domain

Let me be the first to state that adopting Agile in the "real world" is not easy. To be successful, you must internalize the values of Agile, especially the very first one which reads:

Our highest priority is to satisfy the customer

through early and continuous delivery

of valuable software.

Did you notice that this says nothing whatsoever about writing code? Nothing at all. It specifies delivery of valuable software.

Later on in the principles document, it says:

Simplicity--the art of maximizing the amount

of work not done--is essential.

It says simplicity is essential, not optional, but essential. How many projects have you seen that feature unnecessary complexity? That is the exact opposite of this Agile principle. For more about this problem, see my post that reviews a Skype architect's presentation.

You can read the rest of the Agile principles here: http://agilemanifesto.org/principles.html

I highlight this because a lot of practitioners think that Agile is some kind of magic bullet that will solve all the problems that sequential "waterfall" style development has. This is absolutely not the case.  Agile has its own pitfalls that must be addressed as well, and one of them is plainly that development teams don't even understand or truly believe in these two core principles!

The Core of Agile: Communication and Collaboration

As the principles in the Agile Manifesto explain, collaboration and communication are the two most critical underlying themes of agile development. What if, by communicating with your clients successfully you could help them avoid spending millions of dollars custom-developing a solution to a problem that you could solve using low-cost or open-source software?

Would that not be the ultimate fulfillment of the first principle of Agile? I think it most certainly would. And it would certainly fulfill the simplicity principle I highlighted as well!

Unfortunately, many people, even managers, fail to think this way when they adopt Agile. This is not to say that they don't mean well. It's often just the case that they recognize Agile, and associated development practices like XP and TDD, as a better way of building software, but can lose sight of principle number one: delivery of valuable software.

Internalized Agility = Flexibility

True internalization of Agile values should cause architects, developers, testers, and all manner of managers to adopt an attitude of true collaboration with their stakeholders.

So, keep in mind that being agile doesn't always mean building software. First and foremost, it means delivering valuable software.

Tuesday, March 29, 2011

New Reading List: Acceptance Testing, Specification by Example

As usual, there are far, far more topics that interest me than I will ever be able to fully comprehend.


I'm extremely interested in recent presentations and work from Gojko Adzic.

I'm going to buy his books Bridging the Communication Gap and Specification by Example:



You can see Gojko present about these topics in numerous places, including:




Why do these topics interest me? The easy answer is that I find it very painful, both mentally and physically, to endure lapses of communication on projects that lead to lost time, money, or functionality, when I know in my heart such problems can be avoided with proper communication.

Scott Ambler has also written extensively about Executable Specifications at http://www.agilemodeling.com/essays/executableSpecifications.htm

These ideas bring together the two aspects of system development that matter most to me:
  • Achieving the correct result for my customer / user
  • Seeing something valuable running
Nothing is more crucial to my sense of accomplishment when building a system than seeing the correct result in action. I know that many teams treat "documentation" as the key communication artifact mediating between "the business" and "the development team", but I find that to be a source of constant frustration for the very reasons Gojko lays out. What is more valuable, and far more satisfying, is to execute the documentation, the specification, the requirements in a testable, verifiable way that itself represents real, tangible value, not just words on a dead sheet of paper.
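
To make that concrete, here is a trivial, hypothetical example of what an executable specification can look like in C#. The business rule, types, and numbers are all made up; the point is that the test is the documentation, and it either passes or it doesn't:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical domain types, defined inline so the specification reads on its own.
    public class Order
    {
        public decimal TotalAmount { get; set; }
        public bool PlacedByRegisteredUser { get; set; }
    }

    public class ShippingCalculator
    {
        public decimal CalculateCost(Order order)
        {
            // The rule under specification: orders over $100 from registered users ship free.
            if (order.PlacedByRegisteredUser && order.TotalAmount > 100m)
                return 0m;
            return 9.95m; // flat rate otherwise (made up)
        }
    }

    [TestClass]
    public class FreeShippingSpecification
    {
        [TestMethod]
        public void RegisteredUserOrderOver100DollarsShipsForFree()
        {
            var order = new Order { TotalAmount = 120m, PlacedByRegisteredUser = true };

            decimal shippingCost = new ShippingCalculator().CalculateCost(order);

            Assert.AreEqual(0m, shippingCost);
        }
    }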

Gojko's presentations are excellent and his books look like they will really fill the "gap" in my own arsenal. I am looking forward to devoting significant time to studying these works.

As he explains on his web site, here are the "Key Ideas" in the Bridging the Communication Gap book:







Monday, March 28, 2011

Strangulation: The Pattern of Choice for Risk-Mitigating, ROI-Maximizing Agilists When Rewriting Legacy Systems


"The most important reason to consider a strangler application over a cut-over rewrite is reduced risk. A strangler can give value steadily and the frequent releases allow you to monitor its progress more carefully. Many people still don't consider a strangler since they think it will cost more - I'm not convinced about that. Since you can use shorter release cycles with a strangler you can avoid a lot of the unnecessary features that cut over rewrites often generate." -- Martin Fowler, Chief Scientist, ThoughtWorks, on The Strangler Pattern

When rewriting a system in a new technology, it's tempting to think that the task will be easier and quicker than the first time around. Because of this, business sponsors sometimes believe that a "waterfall" or "big bang all at once" approach will work out, but this is rarely the case for any project large enough and important enough to warrant rewriting. It's always important to practice iterative and incremental development to provide feedback loops, and it's even more important in the case of a large application rewrite. This article will explain why. Having learned these lessons from experience, I believe there are a few bedrock development principles that project sponsors and team members should put into practice to ensure the success of large-scale migrations:
  1. Involve business sponsors and end-users directly (or a user-experience specialist) and the entire support and operations teams during the entire rewrite
  2. Involve permanent quality-assurance professionals from the beginning and during the entire rewrite
  3. Design, code, and test one complete feature rewritten from the existing system as quickly as possible
  4. Thereafter, design, code, test, and pilot user-valued, return-on-investment-generating (ROI) features in small increments
  5. Most importantly, continuously build team member skills, knowledge, and leadership abilities

Lessons Learned in Rewriting Large Legacy Systems 

In February of 2006 I joined a small .NET consulting company. Shortly thereafter I was assigned to a brand-new project for one of their clients: to analyze, design, and develop a new version of an existing electronic commerce platform. The system was a highly successful, niche-market-leading auction site with nearly 700,000 registered users at the time. In operation for more than seven years by then, the system was built on classic ASP, C++/COM, and SQL Server 2000. It consisted of about 330 ASP pages. Our client wanted to do two primary things. First, he wanted to add new, value-added features to the system to provide a much better user experience, one similar to eBay's. These features would be called "My Auctions". This new set of features would take the place of roughly 30 pages from the existing web site. Second, he wanted to migrate the other 300 pages, without introducing any functionality or usability improvements, to ASP.NET WebForms. Having already personally designed and developed the back-end COM business objects, he wanted the new web site to reuse that investment via COM Interop.

My Recommendation: Perform a Phased, Vertical Migration One Piece at a Time

My first assignment was to analyze the existing ASP and C++ code and produce a migration strategy recommendation. This strategy document would lay out our company's professional opinion on migrating the system to the .NET platform and the C# language. My recommendation was for our client to perform a vertical migration: one that takes an entire functional slice of the system (My Auctions) and cuts across all architectural layers, top to bottom. In their book The Pragmatic Programmer, Dave Thomas and Andrew Hunt call this a "tracer bullet". This was, in fact, what Microsoft recommended in the best-practices guidance documents I researched on performing large-scale system architecture migrations. I recommended that our client hire us to build a new core platform on ASP.NET with C# and get the new, value-added features to market as soon as possible on top of that core platform. Only after these value-added features were in production would we move on to replacing the rest of the 300 pages with ASP.NET replacements.

My Reasoning: Place Customer Satisfaction, ROI, and Risk Mitigation First

My reasoning was that by creating a new core platform and building the brand new, usability-focused, value-added My Auctions features on top of it, our client would see a return on investment (ROI) much sooner, by generating more sales volume with the user-friendly features, and would simultaneously mitigate significant risk by testing the viability of the COM Interop strategy. Because the features were purely value-added, there would be no risk whatsoever in deploying them to a parallel web server and getting his users to begin pilot testing the system and providing valuable feedback early on, when he could still make significant changes before committing to replacing the entire system with the new technology.


I've since learned from Dan North at QCon 2010 in San Francisco that this is called the Strangler Pattern, per Martin Fowler, hence the title of this post!

Client Decision: Let's Do It All At Once

Our client considered my recommendation very carefully, but wanted to take a different approach. Rather than deploy the new My Auctions features independently, side-by-side with the existing system, he wanted to have his in-house staff work on the other 300 pages while our company worked on the value-added features. With more than 330 pages to migrate, I estimated that the project would take no less than a year, and more likely two years or more. Our client and my manager thought things could be done much faster if we had three or four people working on the system. This was certainly the case early on, when I worked side-by-side with another developer from our company. Within four months, he and I had completed the new C# application foundation and the value-added features to the point that they were ready for beta testing.

And that's when all the fun began!

Planning is Essential; Plans are Useless

As anyone who has worked in the software industry for a number of years knows, the best-laid plans never go as planned. Our client's lead C# developer left his company. Soon after that, my manager at my company was let go, but several months later he was hired by our client to take over development management of the project. This made sense, as he had a strong background on the project going back to its inception. Shortly after this, our client's HTML, graphics, and CSS developer quit when asked to shift his focus to becoming an ASP.NET developer. They hired a lead C# developer, and he got to work on a large slice of the application while I continued to work on another large slice. Five months later, they hired a second C# developer, and he began working on several other slices of the application.

Wanting to see the project through to success, I joined the client as a direct employee to continue being the lead architect for the project.

Naturally, There's a Big Trade Show In The Story

What would any development story be without a "Big Trade Show" lurking around the corner? As luck and fate would have it, in early 2008 there was a huge industry trade show, and it was critical that we be able to demonstrate the new version of the system to the roughly 25,000 customers who would be passing through our booth. And it would be very important that these customers be able to see their own real items, either ones they were selling or ones they were buying. The problem was, of course, that the system was not ready to replace the production system! Due to security requirements brought about by a changing legal environment, we had to repartition the back-end database for the new system from 2 SQL databases into 6 separate databases just before the trade show. It was deemed too risky to perform this radical "surgery" on the live, production system just two months before the trade show. The new system's schema was about 95% the same as the old system's, but there were corrections to long-standing column-name problems and foreign-key reference inconsistencies. This, however, was a complicating factor for migration.

We tossed around various ideas, such as:

  • Perform the "surgery" on the production database to upgrade it to the new system schema, then use views and synonyms to create a "pass-through" database that looked like the old schema, but mapped across to the new DBs and structures.
  • Do the reverse: create several "pass-through" new databases with views and synonyms that actually resolved to the single existing production database's objects.
We felt that we could mitigate risk entirely by following the second option. What this also allowed us to do was to "override" some of the production system's tables with configuration data specific to the new system. The approach of using synonyms and views ensured that all writes and reads against the pass-through objects would actually resolve into the production database, thus enabling the beta version of the new system to live side-by-side with the legacy system.

The War Room

After some proof-of-concept prototyping, we realized this would be a winning strategy. Over the next couple of weeks, the four of us on the development team gathered daily in our "war room", and worked together to create all the necessary SQL scripts and shell databases, synonyms, views, etc that would be the magic glue. We ensured that we could re-run the scripts at will and automated our quality-control checks and sanity checks to be certain that all mappings would have proper permissions and configurations. After enough practice runs, we felt confident that it was ready to go. We created a single zip file which contained 5 BAK files, and a T-SQL script. We handed them off to our lead database administrator and he ran the scripts. Everything worked just as planned!

Cha-Ching!

At the trade show, everything went off flawlessly! Customers attended our booth and we, the development team, aided them directly in logging into the system and showcasing the new features we had worked so hard to develop. It was a very gratifying feeling to see how our improvised plan came together so well. Most importantly, we had succeeded in mitigating all risks to the money-generating production system, while also achieving the benefit of showcasing the new system to customers with real data. This was very exciting to them because they felt that the new features would greatly help them run their own businesses atop our platform.

Phased Transition From Legacy to New

We had now successfully demonstrated and validated the new, value-added features directly with customers in person. This was a great success. Yet there was still much to do after the trade show. Features of lesser prominence, those among the other 300 pages, still needed to be developed and tested. This ended up taking a very long time, but we ultimately cycled back to my original recommendation by adopting an incremental replacement strategy.

It worked like this:
  • We deployed the new system to a new web server, named v2.
  • The existing, v1 site, remained at www.
  • We provided a link from v1 to v2 in the header of the v1 site, including advertising the benefits of the new system, but also including disclaimers and calls for assistance in testing and validating the usability of the new system.
  • This garnered a lot of early-adopters who helped find bugs and inconsistencies, all for free to us!
  • We monitored the usage patterns of v2 versus v1, to help estimate the load capacity under real-world conditions.
    • Michael Nygard's book "Release It!" proved prophetic here. In his book he says that "feature complete" is not the same as "production ready."
    • We learned this because the COM code had to be completely replaced with pure C# code since it could not stand up under load using COM Interop.
      • This result bore out my original advice to get the new features into production as soon as possible to monitor under real world conditions.
  • We formally adopted Scrum and Agile practices by identifying business-driven priorities and working through them in sprints.
    • We did this by closely monitoring the real-world usage of both the existing v1 system and the v2 system and focusing our effort first on the highest traffic pages, such as Viewing, Browsing, and Searching. Of course, Bidding and Payment, while producing less volume, were also mission-critical.
    • This focus allowed us to prioritize properly. We did not place inordinate emphasis on automating the testing of all areas of the system.
      • For example: we did not write Selenium test suites for things like Help Pages or Support Pages. Why? They are seldom used! And, they generate no revenue.
        • Instead, we built comprehensive Selenium test suites for the Big Four: Viewing, Browsing & Searching, Bidding, and Payment (a sketch of one such test follows this list).
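
Here is roughly what one of those suites looked like in spirit, sketched with the Selenium WebDriver C# bindings. The site URL, element names, and selectors are hypothetical, and the Selenium API we actually used at the time differed in its details:

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;

    [TestClass]
    public class BrowsingAndSearchingTests
    {
        private IWebDriver _driver;

        [TestInitialize]
        public void StartBrowser()
        {
            _driver = new FirefoxDriver();
        }

        [TestCleanup]
        public void StopBrowser()
        {
            _driver.Quit();
        }

        [TestMethod]
        public void SearchingForAnItemShowsResults()
        {
            // Hypothetical URL and element names for the auction site.
            _driver.Navigate().GoToUrl("http://v2.example-auctions.com/");
            var searchBox = _driver.FindElement(By.Name("searchTerms"));
            searchBox.SendKeys("vintage widget");
            searchBox.Submit();

            // The results page should list at least one matching item.
            var results = _driver.FindElements(By.CssSelector(".search-result"));
            Assert.IsTrue(results.Count > 0, "Expected at least one search result.");
        }
    }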

A Pleasant Surprise!

With the site now operating both in legacy, classic mode at www and in "beta" mode at v2, the team began to actively monitor the new system's health and encourage more and more users to jump into using v2. And, because we had focused on developing the value-added My Auctions features at the very beginning of the project, those features sat ready and willing to go into production! Our newest team member, who joined about two years after those features were ready and "shelved", took it upon himself, to our delight, to start building a mobile version of the core My Auctions features using ASP.NET MVC and the business objects that supported them. He was reluctant to show this prototype to the "higher ups", but the rest of the team encouraged him to do so. Within a few months, his mobile application was released into production, before the global "switchover" to the new system described below. A job well done!

Switching Over Right on Time for a Cool Billion Dollars

Over the course of more than a year, the team monitored the usage of v1 and v2, and began to more aggressively push the late adopters and stragglers into the new system. Eventually a "switchover" was made, and the v2 system took over the place of www. At that point, there was now a link back to v1, which ran from a virtual machine. Several months after this, the VM was retired, and the v1 system, and all of its legacy COM, was no more.

Just after the legacy system was retired for good, the company celebrated its 10th anniversary and 1 billion dollars in sales volume!

Retrospective

In retrospect, I spent nearly three years working on this project and learned a great deal! While I wish that the original plan of seeing the entire migration take place "all at once" could have been successful, I also am pleased that my original recommendation to take a phased, incremental, risk-mitigating, ROI-maximizing approach was very sound. Ultimately, that very approach became necessary due to the "expected" unexpected bumps along the road!

Application to Domains Seeking Non-Financial Returns

I understand that not all projects involve financial reward goals. Before I began working on the project just described, I worked for four years at the US Centers for Disease Control and Prevention. While working there, we were not seeking to generate financial return-on-investment. However, we did seek returns in the form of utility and value to the users and stakeholders of our systems. To assess this properly, it was critical to either observe the real users working with the system or to sit down with them and experience their pain, frustration, and sometimes: delight! Our team did this regularly by conducting evaluations, performing proficiency testing, and through coordinated multi-agency and stakeholder exercises under simulated public health emergency "war games".

Tying This All Back to Agile

While I've written more extensively on Agile in other posts on this blog, this post has not been about the "mechanics" of agile so much as it has been about the why. But, I want to look at just the first principle of the Agile Manifesto and make a brief comment:

Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.

One might look at this principle and ask how it can be upheld in the situation I originally faced, where my own client asked not for a continuous delivery model but for a "big bang" model. That's a very good question, and it's not one that has any quick-fix answer. My best advice here is that you need to learn the language and goals of both your client and your client's ultimate customers. If your client values financial returns, then ask him or her exactly what it is that generates financial returns.

In my client's case, returns come in when more people purchase items through his system. The next question should be: what is the shortest path we can take to increase that rate? If the client answers that the shortest path is to rewrite the entire system and deploy a big-bang upgrade, then you're going to have to keep breaking that down into smaller and smaller value-added chunks. You might have to suggest straw-men in terms of business-value if your client will not prioritize by business value naturally. Ultimately, like in this story, reality may bear down on the situation and if you have done your best to incrementally develop the system in terms of business value, then you can deliver value, upholding your end of the deal to the utmost of your ability within your realm of control. Sometimes that's the best you can do, until you run your own show!

Wednesday, March 16, 2011

Thoughts on www.MVCConf.com : Excellent

I started listening to some of the videos at http://www.MVCConf.com today, including Scott Guthrie's keynote and Glenn Block's lecture about WCF's super-enhanced REST support. Glenn now calls himself a "REST head". Great!


The development of NuGet and OpenWrap is really welcome to me. It must be 14 or 15 years ago now that I started using Perl, and I was very impressed by how well-oiled the machinery of CPAN was for installing packages and modules from the command line, even under Windows. When I started developing C# applications in 2001, I was very disappointed by the lack of a cohesive community of open-source packages for Microsoft .NET. There have long been open-source projects, but a package is different from a project: a package is shippable, deployable, consumable. I'm really happy to see this kind of thing coming into the .NET world.

I don't know where all the credit goes; I'm sure it goes to many people. But I definitely think a lot goes to Scott Guthrie for the enhanced openness that Microsoft has demonstrated in recent years. I know from reading that this story also ties to Shaun Walker, the creator of DotNetNuke, for his early forays into open source built on .NET.

Yes, I know the "entire stack" is not open-source, and that doesn't bother me that much.