Rays Development Blog - Saturday, November 08, 2008
A look into the mind of a VB Developer
 
# Thursday, October 30, 2008

Been doing a lot of thinking recently about traceability and how far it should really be taken. I have talked to a wide range of people over the years, from project managers and development managers to team leaders and guy-at-the-desk implementers, and I get a wide range of answers.

 

Typically, requirements traceability is critical to the success of a software project simply because it helps you ensure that you are doing what’s needed to satisfy the customer’s needs and no more. But, as with many ‘processes’ in the SW realm, I think it can be taken a bit farther than it should be. I have been told by some project and development managers that having a concrete way to trace requirements all the way down to the code that implements them is critical. The ability to look at the code and know exactly why something was put into the system, and more importantly what will be impacted by making a code change, is a ‘must have’ in any good development system. In a traceability graph this usually ends up looking like this:

While I can start to see the benefit of that I also start to see where it breaks down a bit.

 

1)     Code is often shared heavily between functional areas, so this leads to a very large traceability tree. In my opinion, once you get past a certain number of branches (a number I have not really quantified yet, but I will know it when I see it) the code simply gets qualified as ‘important’ and traceability at that point really loses some of its value.

2)     The current state of tools really offers no way to store this metadata in the source in a simple, automated manner. This leaves it up to the developer to perform the task (usually in the comments), and that means the developer gets more work to do. As we all know, the more time something takes that does not give the person doing it much (if any) direct value, the more likely it is that the task does not get done. This means the traceability data immediately becomes suspect, no one believes it, and it again loses its value.

3)     Why do we really care that FunctionX was written to explicitly fulfill functional requirement F-101 and thus Business requirement B-203?

 

 

I personally think that this deep traceability exists only to fulfill management’s need to see neat charts (ok, maybe I could have worked on the color scheme a bit) and graphs. I also think that this is a way for managers to feel that they are ensuring value from their developers, by making sure that the developers are writing only what is needed to satisfy the requirements and not a line of code more. In fact, many developers seem to be from my side of the camp, but some of them take it way too far in the other direction. Their opinion is that unless the whole system can be guaranteed as ‘good’, why track any of it at all? They know what the requirements are; they should be left on their own to implement the code in a way that satisfies the requirements and that’s it. Why should they need to justify their work at all, as long as the end product works well and satisfies the stated requirements?

 

What you end up with here is this:

Who wins from this? No one does. Most of the time when you have an all-or-nothing strategy the outcome is completely non-productive. Is it a good idea to have requirements traceability? Sure it is. I think most sensible developers and managers alike will agree that knowing why you are doing something, what the impact of changes will be, and how things get tested are all good (great) ideas. The frustration comes in trying to come up with a solution that satisfies both camps, something that gives both the managers and the developers what they want.

 

I think that something is a very tight level of traceability between all levels of requirements, both upward and downward, and then completing the traceability down to the test cases and stopping there, leaving the code itself out of the chain. With this you get something that looks like this:

Notice that you now have traceability from business requirements all the way down to the test cases just like you did before, but you have left the code out of it. Some folks might say that this misses the need (want) to trace requirements to the code that implements them, but take a closer look and you will see that it really does not. The code traceability has not been skipped over; it has been preserved through the physical connection to the test cases.
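To make that connection concrete, here is a rough sketch of what tagging test cases with requirement IDs could look like in VB.NET. The RequirementAttribute class and the test names are just illustrations I made up for this post (the attribute is not part of any test framework); the point is that the annotation lives on the test, not on the production code.

```vb
' A hypothetical attribute for tagging a test case with the requirement it verifies.
Imports System

<AttributeUsage(AttributeTargets.Method, AllowMultiple:=True)> _
Public Class RequirementAttribute
    Inherits Attribute

    Private ReadOnly _id As String

    Public Sub New(ByVal id As String)
        _id = id
    End Sub

    Public ReadOnly Property Id() As String
        Get
            Return _id
        End Get
    End Property
End Class

Public Class OrderTests
    ' The test carries the requirement ID; the production code it calls does not.
    <Requirement("F-101")> _
    Public Sub PlaceOrder_WithValidCart_CreatesOrder()
        ' Arrange, act, and assert against the production code under test here.
    End Sub
End Class
```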

 

Consider this: every test case should be there to explicitly support a use case, or at least one part of a use case, which means that every test should be traceable back to some code that it is testing. This ‘traceability’ can be checked in one of two ways. First, a test case that exercises no code at all is easy to spot by inspection, and second, you can easily run an automated tool over the test sources to flag any test case that fails to reference the code under test. Clean, simple, and it leaves the developer out of it, which is good.
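As a sketch of what that automated check could look like, something as blunt as the following would catch test files that never touch the production code. The folder path and the MyProduct namespace below are made-up placeholders; substitute whatever your solution actually uses.

```vb
' A minimal sketch: flag any test source file that never references the
' production root namespace. The path and namespace are placeholders.
Imports System
Imports System.IO

Module TestReferenceCheck
    Sub Main()
        Dim testFolder As String = "C:\src\MyProduct\Tests"
        Dim productionNamespace As String = "MyProduct."

        For Each testFile As String In Directory.GetFiles(testFolder, "*.vb", SearchOption.AllDirectories)
            Dim source As String = File.ReadAllText(testFile)
            If source.IndexOf(productionNamespace, StringComparison.Ordinal) < 0 Then
                Console.WriteLine("No production code referenced: " & testFile)
            End If
        Next
    End Sub
End Module
```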

 

Now consider the other use of full traceability down to the code level: the ability to spot dead code, or code that does not specifically trace back to any requirement. You have not lost anything here either, since you can again use an automated tool to run a call tree backwards from all the test cases and ensure that you have no code that is not reachable by a test. Actually, this should be part of a normal test regime anyway; it is part of what is called code coverage analysis, making sure that as much of your code as possible is tested.
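The same idea works in reverse. Assuming your coverage tool can produce an XML report with per-method coverage flags (the element and attribute names below are invented for this sketch; adapt them to whatever your tool actually emits), pulling the untested methods out of it is trivial:

```vb
' A rough sketch: list every method the test run never reached. The report
' format (method elements with name/covered attributes) is hypothetical.
Imports System
Imports System.Xml

Module DeadCodeReport
    Sub Main()
        Dim report As New XmlDocument()
        report.Load("coverage.xml")

        For Each methodNode As XmlNode In report.SelectNodes("//method[@covered='false']")
            ' Anything listed here is either dead code or a hole in the test cases.
            Console.WriteLine("Untested: " & methodNode.Attributes("name").Value)
        Next
    End Sub
End Module
```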

 

Have you lost anything? No. Well, maybe some work. In fact, if you take a look back at your test practices, you are probably already doing almost 100% of this if you are using code coverage analysis. If you are not doing code coverage, start. Look at what it gives you. Management gets what they want, development gets what they want, and everyone is happy. This is a classic win-win scenario that I think everyone can live with.

Thursday, October 30, 2008 6:13:03 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Design | Requirements  | 
# Saturday, October 11, 2008

Recently, in one of my many quests for knowledge about the good old NNTP protocol (be on the lookout for a really cool Usenet news reader to be released by Enterprocity within the next few months) I was pointed towards something called Postel’s Law, also referred to as the robustness principle.

 

In a nutshell the law is simple. It states:

 

“Be conservative in what you do, be liberal in what you accept from others.” – Jon Postel

 

You can see it for yourself right here at the bottom of page 12 in RFC 793 (TCP).

 

Since I am embarking on my new role as a Senior Software Engineer next week, I thought that being pointed to this quotation from Jon Postel was quite apropos.

 

This is something that I saw so much of over the last few years in my old role as a Senior Applications Engineer, both in the products that I supported and in the products that I helped others build. Many times companies get involved in a finger-pointing match over who owns a bug (us or them, it’s not OUR fault) or over whether something is even a bug at all. Many times engineering would point to a message we got from another component in the user’s solution (we did VoIP gateways talking SIP, so in these cases it was SIP messages) and say that the message was malformed in some way, and that this was why our stack threw it on the garbage heap, or leaked memory, or threw an exception, or dropped a call, or exhibited some other undesirable behavior that caused someone to pick up their land line and call me.

 

It all boiled down to Postel’s Law. The third-party SIP stack that we used (no names here please) was not very robust at all in its ability to take in things that were not 100% to the RFC. It was a good stack that did its job and had a good team behind it, but when it came to handling SIP messages it was very picky, to say the least. If a message was not a complete verbatim match to the ABNF used in the RFC, that message was ‘wrong’ and the behavior was indeterminate. Add to that some really nebulous areas in the RFC that did not help, and at times the product looked like it had some serious issues; in my opinion, from a user’s perspective, it did. Taking this to another level, many of these malformed pieces were in message headers that our product did not even care about, which just added insult to injury.

 

In user land, people don’t care about all the stuff behind the scenes; they just want the things that they paid for to work. Add to that the fact that other products, which may not have been better in any other respect, did not have a problem dealing with these errant messages, and our product became even more suspect in the eyes of the customers. All engineers need to understand that a customer’s perception is reality. Even if YOU, as an engineer, know that the problem is really NOT with your product but with the other one, or with a bug in a third-party component that you use in your system, the customer sees an exception thrown in YOUR product or poor behavior in YOUR product and not the others; your product is the one with the problem.

 

So, this is just a gentle reminder to all engineers out there (myself included) that not only do you need to validate all input to your systems (a good thing that some of us may take way too far) but you also need to decide HOW you are going to act when you detect that bad input. Throwing an exception when you are the upper layer, right next to a human user, may not be the best choice (be on the lookout for a posting on the use of exceptions :) ).
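To put a face on “deciding HOW to act”, here is a small sketch in VB.NET. It is a hypothetical header parser, not the stack I worked with: instead of throwing on a malformed Max-Forwards value, it logs the problem and carries on with a sane default, while anything we send back out is still built strictly to the RFC.

```vb
' A minimal sketch of being liberal in what you accept: tolerate a malformed
' header value on input instead of throwing, but never emit one yourself.
Imports System

Module HeaderParsing
    Function ParseMaxForwards(ByVal headerValue As String) As Integer
        Const DefaultMaxForwards As Integer = 70 ' RFC 3261 recommended initial value

        Dim parsed As Integer
        If headerValue IsNot Nothing AndAlso _
           Integer.TryParse(headerValue.Trim(), parsed) AndAlso parsed >= 0 Then
            Return parsed
        End If

        ' Bad input: log it and continue with a safe default rather than
        ' dropping the call or surfacing an exception to the user.
        Console.Error.WriteLine("Malformed Max-Forwards '" & headerValue & "'; using " & DefaultMaxForwards & ".")
        Return DefaultMaxForwards
    End Function
End Module
```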

Saturday, October 11, 2008 12:54:58 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Design | Error Handling  | 
# Tuesday, August 05, 2008

Well, I decided that I REALLY wanted to run Vista after all, so, since my old system had a problem with the video cards (they were only PCI), I decided to build a new system that WOULD be able to kick Vista's butt.

I think this one certainly qualifies. As you can see from the photo below, the heat sink on the darn CPU is the biggest I have ever seen.

Specs:

  • iStarUSA S-10000 ATX Full-Tower Server Case
  • Crucial Ballistix Dual Channel 4096MB PC6400 DDR2 800MHz EPP
  • Intel Pentium D 945 Processor HH80553PG0964MN - 3.40GHz, 4MB Cache, 800MHz FSB, Presler, Dual-Core
  • EVGA nForce 680i SLI Motherboard - T1 Version, NVIDIA nForce 680i SLI, Socket 775, ATX, Audio, PCI Express, SLI, Dual Gigabit LAN, S/PDIF, USB 2.0 & Firewire, Serial ATA, RAID
  • 2 - EVGA GeForce 8800 GT Video Cards - 512MB DDR3, PCI Express 2.0, SLI Ready, (Dual Link) Dual DVI, HDTV, Video Card
  • Thermaltake CPU Cooler / Big Typhoon VX / 4 in 1 / 6 Heatpipes / 120mm Fan
  • Ultra X3 ULT40064 1000-Watt Power Supply - ATX, SATA-Ready, PCI-E Ready, Modular

Damn! This thing is FAST! and runs Vista like a champ. The modular Power Supply is cooooooollll. No wires in the case but the ones you need. Rocks sweeeet!

So, now the question is: what do I do with my old system? A dual-CPU, dual-core Xeon system with Hyper-Threading running at 3 GHz.

I can't let the secret out right now but around the end of the month I might spill it... I do have plans for it though...

Tuesday, August 05, 2008 12:01:44 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Hardware  | 
# Wednesday, July 16, 2008
Opening post!
Wednesday, July 16, 2008 10:49:06 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Site Admin  | 
Copyright © 2019 Raymond Cassick. All rights reserved.