Wednesday, September 30, 2009

Agile ROI

Last Friday I listened to the CMIT presentation by David F. Rico on the Business Value of Agile Methods. Overall it was a good presentation, but I left with some concerns and questions. Slide 13 of his presentation captures the approach that has proven successful for me on development projects. The reason SCRUM and Extreme Programming, in my opinion, can't be successful in the government environment is that the government has an imperative to 'know' that we are on the right track and 'monitor' progress. SCRUM and XP don't give government Project Managers and Contract Officers the visibility they need to know that things are going well. Slide 13 demonstrates the overall model and the features to be delivered in each iteration. The CO or COR can easily check the box to report that performance is in alignment with the Quality Assurance Surveillance Plan (QASP). With the other Agile methodologies I do not know how I could construct a QASP to effectively monitor progress. So kudos to Mr. Rico for including this slide that I was in agreement with, even though he said that most development projects he has been involved with use SCRUM and XP.

The part that concerned me begins at slide 21, where he starts to use data to build his case. The issue I have with his data is that he uses Lines of Code (LOC) as a common denominator across Agile and non-Agile projects. The problem is that older projects not using Agile methods are very likely using older development languages. I strongly suspect that the data for the non-Agile projects relies heavily on COBOL and C projects, and that the Agile projects rely more on C++, Java and .NET languages. Likely differences in the development toolset increase the risk in the findings derived from the studies. In this instance, I see LOC as a high-risk measure because I would expect more lines of code from the COBOL and C projects than from the object-oriented projects, since the OO languages are far more concise. For instance, a function that takes 10 lines of Java might require 100 lines of COBOL. As such, I have concerns about the findings presented here.

Finally, I ended with some questions. If someone asked me, "What is the return on investment of switching from a waterfall or code-and-test methodology to an Agile methodology?", I would probably not start with this type of formula. The first place I would look is project success. I would begin by digging into overall project success and failure rates for Agile methods versus non-Agile methods. I'm not a researcher and I'm not writing a book on this subject, but I suspect that projects using Agile methods are more likely to be launched or released than projects using other methods. I would also argue that this factor is likely to dwarf the other measures.

But if you insist on other measures, I would argue that Agile provides a significant benefit in the scope dimension when compared to the Waterfall method. In waterfall you exhaustively capture requirements and carve them into stone tablets, and the complete software is delivered some amount of time later. If business requirements really lent themselves to stone tablets, the world would be a much happier place, but that is unrealistic. As such, the software delivered in a waterfall project rarely meets the scope of the business needs, because the business needs have evolved during the time the team was working to develop the software. Agile allows closer interaction with the business personnel during the entire development process, and this helps the final product stay in much closer alignment with the business needs.

An additional measure for comparing Agile practices against the client-server code-and-test practice is cost. Good Enterprise Architecture practices are very difficult to implement in a code-and-test environment. This leads to a lot of redundant development, more difficult integration, and, most significantly, an increased cost to maintain the finished product.

So overall I agree with the conclusion that Agile methods have an increased return when compared to traditional methods of development. But the details of that analysis leave me a little uncomfortable.

Tuesday, September 29, 2009

Historical Lessons Learned

I was lucky to attend the Center for the Management of Information Technology (CMIT) conference last week because I got to listen to Mark Kozak-Holland. He was speaking about his new book, Agile Leadership and the Management of Change: Project Lessons from Winston Churchill and the Battle of Britain, which just came out. As an undergrad History and Political Science major I enjoyed his presentation both for the historical perspective and for the relationship he posited to Agile leadership. It's important to keep the idea in perspective, though. It wasn't as if Churchill, Dowding and Beaverbrook were scientifically trying to do something unique or different.

I probably disagree with the speaker on the foundation of his position, but it was an interesting exercise to walk through it. In an Agile project you start with the key requirements that can be built in the iteration or sprint schedule. You implement them, deploy, and consider the evolving, adjusted or new requirements for the next iteration or sprint. Sometimes this type of project might have a throw-away iteration, especially early on. I doubt that anyone in England would have agreed that they could throw away two or three weeks. Additionally, I don't see historical breaks in which something was completed and they went back to re-prioritize the changes or requirements. As such, I think the argument that this is Agile leadership is thin, but as I said, it was nonetheless fun.

Thursday, September 24, 2009

Necessary Deliverables

With all contracts there are small administrative nuances that can take a lot of time and energy if they are not managed effectively. In my current environment the administrative elements in my Statements of Work include Background Investigations, which I mentioned previously, Computer Security Awareness Training, and Separation Forms. It is a million times easier to identify each of these things as deliverables and get the contractor to treat them as such than it is to handle them as administrative items.

I also include the notifications of incurred costs as deliverables. The problem I am trying to address here is the lag on invoicing. If I wait to receive an invoice that tells me we have expended 80% of the funds available, then I am likely to already be at 90% because of the invoice lag. To counter this, on T&M contracts I make it a contract deliverable that the contractor send notification within 2 or 3 business days of incurring costs at the 80% and 90% thresholds of the funding obligated and awarded to the contract. This avoids the lag and gives me the visibility I need to take action. Here is the set of deliverables I typically include (a quick sketch of the threshold check follows the list):

  • List of Key Personnel - Any change
  • Status Meetings - Agenda 2 days prior, Minutes 1 day after
  • Project Schedule - Updated with each status meeting
  • Risk Register - Updated with each status meeting
  • Change Control Register - Updated with each status meeting
  • BI Forms - Submitted before the resource begins work
  • Separation Forms - Submitted before the resource's last day
  • Computer Security Awareness Training - Specified by the COR
  • 80% Cost Incurred - In writing within 3 business days
  • 90% Cost Incurred - In writing within 3 business days
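To make the 80% and 90% triggers concrete, here is a minimal sketch of the kind of check I have in mind. The dollar figures and names are hypothetical; the real numbers come from the contract award and the contractor's own cost tracking.

    # Hypothetical sketch of the 80%/90% incurred-cost notification trigger.
    # The obligated amount and costs below are made up for illustration.
    OBLIGATED = 500_000.00          # funds obligated and awarded to the contract
    THRESHOLDS = (0.80, 0.90)       # notification points from the deliverables list

    def thresholds_crossed(incurred):
        """Return the notification thresholds reached by costs incurred to date."""
        fraction = incurred / OBLIGATED
        return [t for t in THRESHOLDS if fraction >= t]

    incurred_to_date = 410_000.00   # 82% of the obligated amount
    for t in thresholds_crossed(incurred_to_date):
        print(f"Notify the COR in writing: {t:.0%} of obligated funds incurred.")

The point is simply that the trigger is tied to costs as they are incurred, not to the date an invoice happens to arrive.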

Monday, September 21, 2009

BI

While it would be nice to talk about Business Intelligence, I'm going to touch on a different BI today; the Background Investigation. We have 5 different levels of investigation that we can initiate for people supporting the agency. They are:
  1. Finger Print
  2. NACI
  3. MBI
  4. BI
  5. SSBI

The Finger Print check is quick, easy, and used for very low-risk positions. In fact, I don't think that anyone working on a contract for which I am the COR has just a Finger Print check. The most common is the National Agency Check with Inquiries (NACI), which typically costs $100. The Minimum Background Investigation (MBI) costs about $525, and this is the check that I request for anyone who is an administrator or who has access to the production environment. As the opportunity to do harm increases, the level of investigation should also increase, so that we don't give people with a history of doing bad things the opportunity to repeat that deed. The standard Background Investigation (BI) costs $2,825 and is a significant investigation. The highest level here is the SSBI, or Single Scope Background Investigation, which is only used for specialized positions.

All of the contractors working for me undergo some level of BI. Sometimes a vendor chooses not to submit a BI form for someone working on the project. That is a mistake, in my opinion. Everyone whose time is billed had better have a BI in place, or at least in process, and I check all the time. I don't mean to be a jerk about it, but this is a hard and fast rule, and there is no gray area.

Wednesday, September 16, 2009

It's a Cloudy Day

I can't believe I didn't write about this already. I was sure I had, but as I look back, I didn't. I know I put it on my LinkedIn status. Back in April I was excited to try something unique and innovative for a development project. I wanted to develop backup capabilities that would allow me to back up to the cloud. In this case, I had researched backing up to the Amazon cloud, EC2. I had priced it out and was eager to get started. The contractors supporting me were eager to try this as well, because this has to be the direction the government is heading.

Unfortunately, I was a little ahead of my time. I was told that I would be required to back up to a different office and use other internal resources. I wasn't excited about this news, but I rolled with it. Getting services from internal resources is not always a good situation, and this case bears that out. We met, ran through the schedule, and agreed that it would be set up and ready before August 1. At the time, I asked, "Is there any risk that would prevent us from meeting the August 1 date?" The response was "No." Here we are on September 16 and it still isn't ready. That is a six-week schedule variance.

Then today, what do I read? Apps.gov is now available for use. This is GSA's cloud, competing with Amazon and Google but geared for the federal sector. It is about 4 months too late for my project, but for backup of data and contingency operations, I will try to use it on my next project. I bet I can get a Service Level Agreement that will help me avoid six-week schedule variances.

Monday, September 14, 2009

Drill Baby Drill (Not what you think)

I recently participated in my first contingency exercise for an application. It was what we call a "Tabletop Test," and while I would rather have had a physical exercise, it was nonetheless informative. This is something that I have been pushing for a very long time. The problem is that we are often so consumed with day-to-day operations that we never make time to actually run through a simulation of what to do when bad things happen.

I picked the very unlikely scenario of a hurricane knocking out operations of a data center in the upper Midwest. I know it is not reasonable that a hurricane would do that, and a tornado is much more likely, but a tornado wasn't on my scenario list and the hurricane still allowed me to run through what people should do in the other, more likely situation. We actually identified a couple of areas that need improvement. For example, several applications run out of this particular data center, so we have to take some time to prioritize the order in which these applications will be restored and assume that we won't have the resources to bring them all up at once. Also, you already know that I am a Green IT fan, which means that if I don't have to print something, I won't. But Contingency Plans and Disaster Recovery Plans must be available in hard copy, and I didn't have them on paper before.

Overall though, it was a worthwhile exercise and we found some things that can be improved for next time. And the next time will be in 6 months. I think that the more frequently you run through these types of scenarios, the better you become at them. Practice makes perfect (well, at least better than before), so drill, drill, drill. We'll be doing it again 6 months from now, only then I want a physical exercise: bring down the servers, restore them at the alternate location, and bring the application up. I know that we'll find other useful information that will help us perform more efficiently if we ever have to do it for real.

Friday, September 11, 2009

Worker Shortage and Hiring Process

I'm really sorry that I keep picking on Federal Computer Week, but they published an article today about a severe worker shortage in the cybersecurity segment of the government space (Do federal hiring processes discourage...). The article went on about how difficult it is to get a job and the pain of the hiring process. But it really misfired on a couple of levels. First, while there is an emerging need in the cybersecurity segment, the immediate concern is in the acquisition segment. The real problem is the shortage of Contract Specialists, Contract Officers, and people qualified and capable of awarding contracts.

Second, and I can't believe this was not mentioned, is the fact that we have been talking about the risk of a brain drain in the federal government for almost 10 years now. There has been a bubble of retirement-eligible people for several years. The issue that I'm surprised not to find discussed is how the current economic climate is affecting the retirement bubble. Though I have no scientific evidence, I think that people are holding off on retirement for 2 reasons:
  1. They lost a chunk of their nest egg in the recent devaluation of the stock market and
  2. There is too much risk and uncertainty in the current economy to begin a retirement now.

On the first point, the market lost about half its value and has been slowly inching up ever since. I think that people will begin to cash out when they feel like their portfolios are back to pre-recession levels. Once they feel like they are even, we will see a big wave of people cashing out. Unfortunately, that wave will likely cause a ripple recession all by itself.

About the same time people's portfolios get back to pre-recession levels, the economic outlook will look a lot rosier and seem less risky. It will feel like a good time, and a less risky time, to begin a retirement.

No matter what though, I think we are looking at a significant opportunity for the next generation to step up and step into real positions of leadership. As soon as the economy turns the corner, the retirement bubble will begin to burst: the pace of retirements will quicken and the vacancy rate will increase. Over the short term this will be painful, because we will be forced to do the same amount of work with fewer people, but, as they say, necessity is the mother of invention. We will be forced to become more efficient with our hiring process.

In this, I speak from experience. I recently (2 weeks ago) participated in a panel reviewing applicants for a position. We reviewed 6 resumes, met for 2 hours, discussed the strengths and weaknesses of each applicant, and tabulated the scores. This was on a Wednesday. The offer to the candidate was made on Thursday. The candidate accepted the offer on Friday and started work on Monday. Sure, 2 weeks had passed from the closing of the announcement until we met to consider candidates, but when the need is urgent, the government can move at the same pace as industry, or even faster.

Thursday, September 10, 2009

Really?!? Really!?!

I read a recent article in Federal Computer Week, 7 Federal IT Bloggers Worth Reading, and clicked through all of the links to get a sense of the content. I don't mean any offense to anyone, but I would die of boredom waiting for these people to post content. I'm sure they are all good people; they just aren't exactly prolific bloggers. I know I went through a 3-week lull there, sorry about that, but I'm getting back into the swing of things. I'm very disappointed that FCW would point to these barely-alive blogs and say that these are the ones to watch. Were they just looking for any old blog?

I know my last post will be a resource for a bunch of people, because I have talked about it with them, so that is the kind of content I really want to deliver to this blog.

Tuesday, September 8, 2009

Best Value Analysis

I have participated in the process of awarding many contracts. Most of the time it is fairly straightforward and easy to identify the winner. Sometimes, though, it can be tough. I remember several years ago I performed a very complicated Best Value Analysis (BVA) in which I used the labor hours proposed and the cost to identify the average rates for each person on the team. One bidder for that project proposed only a couple of people and another proposed an army, so it was a complicated affair.

I recently finished another BVA and I am very happy with the process and formula I used to identify the best value. First, in the solicitation we were careful to identify that the award would be made based on Best Value and that the Technical review would be 65% of the score while cost would be 35%.

Then we completed the technical review, and let's just say that we hypothetically had:
  • Offeror A - Technical 90 points - Cost $600K
  • Offeror B - Technical 85 points - Cost $500K
  • Offeror C - Technical 80 points - Cost $450K
  • Offeror D - Technical 75 points - Cost $400K
Just for fun, take a second here and pick who you think the Best Value offeror will be.

In this hypothetical, let's set the Independent Government Cost Estimate (IGCE) to $350K, so everyone is over the IGCE. To integrate cost with the technical score, I needed to convert it to a two-digit number that rewarded the offerors closer to the IGCE. I first thought a percentage of the IGCE, (proposed cost / IGCE), would work, but it went the wrong way: as costs got further from the IGCE, the score increased.

But if I took the inverse of that ratio, it worked well. As such I used the formula Cost Score = 1 / (proposed cost / IGCE), which is just IGCE / proposed cost, multiplied by 100 to get a two-digit score.

Using my examples above, I have:
  • Offeror A - Technical 90 points - Cost Score 58
  • Offeror B - Technical 85 points - Cost Score 70
  • Offeror C - Technical 80 points - Cost Score 78
  • Offeror D - Technical 75 points - Cost Score 88
With this I have all the information I need to combine my cost analysis and technical analysis into a best value analysis. The formula looks like:
(Technical x .65) + (Cost Score x .35) = Combined Score

When I do this I find that the offerors' final scores are:
  • Offeror A - Technical 90 points - Cost Score 58 - Combined Score 78.8
  • Offeror B - Technical 85 points - Cost Score 70 - Combined Score 79.75
  • Offeror C - Technical 80 points - Cost Score 78 - Combined Score 79.3
  • Offeror D - Technical 75 points - Cost Score 88 - Combined Score 79.55
Offeror B had the highest combined score, and is the Best Value.
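
For anyone who wants to reproduce the arithmetic, here is a minimal sketch of the calculation using the hypothetical offers above. The weights and IGCE are from this post; rounding the cost score to the nearest whole point before weighting is my assumption, chosen to match the figures shown.

    # Sketch of the Best Value calculation above, using the hypothetical offers.
    IGCE = 350_000                  # Independent Government Cost Estimate
    TECH_WEIGHT, COST_WEIGHT = 0.65, 0.35

    offers = {                      # offeror: (technical score, proposed cost)
        "A": (90, 600_000),
        "B": (85, 500_000),
        "C": (80, 450_000),
        "D": (75, 400_000),
    }

    for name, (technical, cost) in offers.items():
        cost_score = round(100 * IGCE / cost)   # 1 / (cost / IGCE), scaled to points
        combined = technical * TECH_WEIGHT + cost_score * COST_WEIGHT
        print(f"Offeror {name}: cost score {cost_score}, combined {combined:.2f}")

    # Offeror B comes out highest at 79.75, matching the result above.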