So, I have experienced what I feel was the failure of a project I was recently a part of in my personal life, and I have been thinking about it a lot lately. Partly because, as a systems architect, it is my job to always try to understand where I can improve and make sure I do not repeat mistakes, but also because, well darn it, I hate failing.
Who the heck doesn’t hate failing?
Really, I am not counting this as a ‘failure’ per se, because I did raise it as an issue at the onset of the project and even noted my personal objections in the review notes taken during our meetings. I am noting it as a moment of shame, in that I allowed my PERSONAL standard of professional conduct to be driven by an outside group instead of recusing myself and just walking away. In short, I let go of my principles and am now paying for it.
Not a mistake I will be making again.
How did I come up with the title of this entry? What does a QA analyst have to do with the legal system? Just so you know, I am a huge fan of the TV series Law & Order. Not so much the recent offshoots, but the old shows with Jerry Orbach (Lennie Briscoe), Sam Waterston (Jack McCoy), and one of my personal favorites, Chris Noth (Mike Logan), but I digress…
I have always been fascinated by the law. I almost decided to become a lawyer at one point, but decided that I was not hard enough (or perhaps too hard) for the role. I looked at it for a while and decided that there were potentially too many ethical gray areas to deal with, so I took the IT route instead. Hehehehehe, yeah, who knew?
So, the relation here is this.
In the legal system you have several facets of a legal issue, each one represented by a specific area of expertise looking at the case in a different way. The accused is, by the same legal system that is currently citing them as a ‘bad guy’, provided a way to prove their innocence before a panel of impartial people, and is offered representation to help them. There are people on both sides who defend their positions and present their cases, and in the end the judge and jury decide, based upon a preponderance of the evidence, whether the accused is guilty or innocent, and what the method of punishment should be. Remember, the legal system is represented by the scales held in Lady Justice’s left hand, with a sword in her right and her eyes covered with a blindfold, indicating that she cannot be influenced by any outside party and is driven only by the written matter of law currently established.
In the project system you have several facets of a project issue, each one represented by a specific area of expertise looking at the problem in a different way. The project is what it is, defined by the specifications approved by all the parties involved at its initiation. There are people on both sides who defend their positions and present their cases, and in the end someone decides whether the delivered system met the requirements or not, and how to correct what needs to be corrected moving forward.
In a business environment, the business owner comes to IT with a need. They understand (probably very well) what needs to be accomplished and can usually state those goals clearly in what are referred to as High Level Requirements. These requirements are used to establish a baseline timeframe and budget, which are then checked against the business plan, the established mission, and cash flow for the year to determine if the project can (or even should) be pursued. Once it gets the green light, it moves on.
In the IT environment, an architect is assigned the project, provided the business requirements, a basic timeline, and a budget framework, and told to go off and design, then come back with more specifics to move forward. Once the design is done and passed back to the company for final approval (timeline and budget), the project gets assigned to developers to complete according to the specification.
The developers do the work based upon the architect’s design, perform some base-level tests to make sure that what they release meets the stated objectives, and then release a build for testing.
Here is where the problem ALWAYS happens.
The business will sometimes NOT want to include a QA test resource.
WHY? I am not sure. Usually the business says they are too busy to be bothered with anything else. They are, after all, the ones making the money for the company; why should they want to do anything else? But I have heard more than a few times that THEY want to be the test people on the project because THEY know the DATA better than ANYONE and can judge the quality of the system’s processing better than a QA person ever could.
It was HERE where I was bitten.
I fought hard and lost my battle. I was made to allow this abomination into my project. I was provided with the business requirements, created the low-level design, and handed that off to developers who created their individual designs, had them reviewed by other developers, and then implemented them along with a series of basic test cases they deemed necessary. We then handed the ‘completed’ project over to the business for THEM to test. The business ran their TESTS (I have yet to see an established, i.e. written, test plan or results document) and signed off on the completed work. The total time for QA testing ended up being about 4-6 hours.
My right eyebrow rose a bit, but it was apparently not my place to say anything. The project went into production, was run for the first time, and the resulting data set was sent off to the next step of the process (something I have no control over at all). Within hours, THEY saw issues in the data presented to them as a result of this project’s processing and kicked it back to us. The business took a look at the data (data they had already seen, by the way; remember, they ‘QA Tested’ this system just hours before and had ‘signed off’, approval via email, on its viability and correctness).
The reaction was shocking, to say the least. The business came back and questioned the system’s correctness. I was shocked, not at all surprised, but still a bit ticked off. I am not a person who enjoys assigning blame, but when I am asked to explicitly locate a problem, that job gets done for me: I find the error, and the fault is assigned by the simple act of doing so. Whoever did that work gets the ‘blame’. In my opinion, though, the blame should be shared by the developer, the person who reviewed the code, and ALSO the QA analyst who either missed a test case or did not execute one correctly. In this case we had NO QA analyst; in reality, I was being asked BY THE QA analyst (the business unit in this case) what the problem was. Again, I was a little miffed, but took it.
The problem ended up being something I knew was a potential issue, and that we had even discussed in meetings as part of the design and implementation. The PM and I decided on a direction: the business (err… QA) would manually process through this data list and perform some further cleanup that would have taken a significant effort in dollars, time, and specialized software to accomplish in an automated manner, and we would look at more automated solutions in the next round, before this process needed to be used again next year. Being the diligent architect that I am, I kept this all documented in the project’s documentation, partially because I am just a thorough person, but also to provide some CYA to both myself and the next unlucky architect who got the revisions the next time this project needed changes.
The manual processing was done, requiring the business to manually look through every record and try to remediate possible duplicates. I figured this would FORCE them to look at each and every record, and if there were any OTHER errors they would see them. They were, after all, ‘the best people to judge the correctness of the data’, hence the reason they mandated themselves as the QA team in the first place. I again shook my head, scratched a bit, and let it go. They completed their manual processing, removed about 1000 or so records that they felt were dupes, and handed the file back to me to be converted and sent to the vendor for processing. That done, the project was run, my involvement was closed out, and I was assigned to other work.
Ding dong, the alarm bell rings again as a new problem is found, and then another.
Once again I am asked to look at the data. Amazingly enough, I am asked by the same team that certified this exact same data, and even read through it all manually, record by record, in their last cleanup effort, to find the ‘problem’. I found it: a common mistake in this type of processing (the order that records are placed in when a lookup is performed) that was not caught by the developer, the reviewer of their code, nor the QA team that had now certified the data TWICE before it was allowed out the door.
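This class of bug is common enough to be worth sketching. What follows is a hypothetical illustration in Python, not the actual project code: a lookup that takes the first matching record is silently order-dependent, and only a deliberate, documented sort (plus a test that pins it down) makes the result stable.

```python
# Hypothetical illustration (not the actual project code): a lookup that
# takes the first matching record is sensitive to the order of its input.
def best_match(name, reference):
    """Return the first record whose key matches -- whatever comes first wins."""
    for rec in reference:
        if rec["key"] == name:
            return rec
    return None

reference_unordered = [
    {"key": "SMITH", "id": 202, "status": "inactive"},
    {"key": "SMITH", "id": 101, "status": "active"},
]

# Without a defined sort order, the stale record wins:
assert best_match("SMITH", reference_unordered)["id"] == 202

# Sorting on a deterministic tiebreaker (here: active records first, then
# lowest id) makes the result stable and testable -- exactly the kind of
# case a written QA test plan should have pinned down.
reference_ordered = sorted(
    reference_unordered,
    key=lambda r: (r["status"] != "active", r["id"]),
)
assert best_match("SMITH", reference_ordered)["id"] == 101
```

A QA analyst does not need to know the data better than the business to catch this; they just need a test case that runs the lookup against the same records in two different input orders and demands the same answer.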
So, what’s the result here? I am going to spend my weekend looking over the data between what we HAD then and what we HAVE now as the result of a change made to address the issue and try to determine what to do next.
Being a process-oriented guy, and always one to learn from my mistakes, I have taken a hard look at this and determined that I was right at the start: I will never again accept a project that does not have QA resources assigned. Could I be signing my own walking papers? Perhaps, but at this point it is a case based upon principles, not just me being a whiny architect unwilling to take blame. In fact, all I have been asking all along is that someone who is impartial to the business process, the design of the solution, and its development look at the data going in, the processing, and the data coming out, and TELL ME if there are problems.
I welcome being told there is a problem so it can be addressed BEFORE we ship. That’s the idea of testing, to catch problems before they make it to production. I just fail to see how people cannot understand that. Just as Lady Justice stands outside of every courthouse to ensure fair and impartial judgment on the application of the rules of law, so should QA be allowed to stand and judge the usability of a system before it is relied upon to perform its tasks.
Now I ask you, how many people ASK to be judged like this?
Am I wrong?
Oddly enough I just noticed today how annoying this IE dialog box is:
The example above shows an attempt made by a web page I visited to reach out on my behalf and open a web page that I happen to have on my ‘Trusted sites’ list within IE8. Yeah, I put Facebook on my trusted sites list because I got tired of having to allow certain things every time I went there, and I do trust it enough, because I closely regulate what features I have enabled and what I use FB for.
I imagine more and more of us are seeing this nowadays, as we become entrenched in the draw of Facebook and other socially oriented sites, and as other web sites leverage them as ways to get noticed and voted for, etc. I imagine it will keep happening as the line between sites with links like these gets ever more blurred. Rank this, rate that, yadda, yadda, yadda…
To be honest, I am not 100% clear on the VALUE of this type of cross linking yet, or if it is really more of a passing fad that will soon fizzle out in favor of the next cool ‘thing’ that comes along. But I digress.
The point I want to make is for all those UI-centered development folks out there (myself included, I am afraid) who oftentimes maintain a somewhat shortsighted focus on the task at hand and perhaps don’t look a little further forward and ask the next question:
“What else would make sense to include here as part of the design?”
So, I ask you, what else do YOU think would make sense here as part of this design?
Theme to Jeopardy playing quietly in the background…
How about this as a suggestion?
How about offering the user (me) the ability to ADD the currently ‘Un-trusted site’ to the ‘Trusted sites’ list from here?
To me, this is a HUGE miss in this design. Why? Because had that simple question been asked, there are so many easy ‘quick hitter’ options that could have enhanced the user experience here with very little effort.
The current state
As it sits right now, the user can click the ‘Yes’ button and tell IE to trust this link request. The problem is that if the currently un-trusted site has multiple links to trusted sites on your list, even if the URL is the same, you get asked each and every time whether you want to allow it.
This can cause two problems.
First - if the site address does not change, the user can think that they didn’t click properly, or that they moved the mouse as they clicked (something people with physical impairments often struggle with) and the click didn’t register, so they get frustrated at themselves and at the user experience as a whole.
Second - they get stuck in a cycle of clicking through so many boxes that they accidentally allow a site they really didn’t want to.
In addition to this really poor user experience, it is frustrating to think that the only way to avoid having to do this again is to write down or remember the address of each site that pops up and then add them to my trusted sites list later as a manual effort.
NOT a great UX to say the least.
What could we do here?
So, being a proper engineer, I always keep in mind that before I go to someone and say ‘you did this wrong’, I should take the responsibility to bring along my own ideas on how to make it right. After all, it is easy to point a finger and laugh; it is harder to think about possible ways to solve the problem. Pointing and giggling just makes you an annoyance; offering viable solutions makes you part of the process of solving the problem.
UI Option #1
Provide the user with a button in this window that jumps right over to ‘Internet Options’ and the ‘Trusted sites’ dialog box with the URL already filled in, offering them the chance to add the site to their list if they want to.
UI Option #2
The second option is very simple. Just provide the user with the ability to add the site to the ‘Trusted sites’ list using a simple check box on this dialog box as I have shown here:
I am sure given a bit more time we could come up with a few more ways to make this work, but the point is that it appears as if the effort was not made at all, and even a small step would have provided some fantastic user level value with a minimal amount of design, code and testing efforts.
You could even go one step further and have the OS keep track of how many times you have allowed a specific URL, and maybe once a week show the user a pop-up notification in the system tray area letting them know that, hey, they trusted this site x number of times over the last weeks or months; maybe they want to consider adding it as trusted.
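To be clear, nothing below is a real IE or Windows API; every name is made up. But the bookkeeping behind a "nudge the user after N allows" feature is almost trivially small, which is kind of my point about how little effort these quick hitters would take. A rough sketch:

```python
# Sketch of the "suggest trusting after N allows" idea. All names here are
# hypothetical -- this is not an actual IE/Windows API.
from collections import Counter

ALLOW_THRESHOLD = 5  # suggest trusting a site after this many manual allows

allow_counts = Counter()

def record_allow(url):
    """Record that the user clicked 'Yes' for this URL; return a suggestion
    message once the threshold is crossed, otherwise None."""
    allow_counts[url] += 1
    if allow_counts[url] == ALLOW_THRESHOLD:
        return (f"You have allowed {url} {ALLOW_THRESHOLD} times. "
                "Add it to Trusted sites?")
    return None

# The first few allows stay silent...
for _ in range(4):
    assert record_allow("http://www.facebook.com") is None
# ...and the fifth one produces the nudge.
msg = record_allow("http://www.facebook.com")
assert msg is not None and "Trusted sites" in msg
```

A counter, a threshold, and one dialog string: the kind of enhancement that costs an afternoon of design, code, and testing, not a release cycle.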
There are so many options that would be simple, add some real value, and enhance the UX in this case, and through so many releases of the OS and IE I have yet to see this addressed once.
If there is someone out there from MS reading my blog (yeah, I am sure there are - NOT!) then let me know if you think what I am saying makes sense. Actually, if there is ANYONE out there reading my blog (I know there are a FEW; I watch my daily logs) then reach out and comment here.
Do you agree with me or not? If not, then let me know why.
I am always open to others’ opinions in cases like this, and since I design as well as write code, I ALWAYS welcome user feedback.
Let me KNOW what YOU think would be the best way to address this.
Touch touch touch…
To be honest I don’t get it.
I touch my computer every day already. I use a mouse and a keyboard to do it, but to be honest I see very little sense in using my fingers to manipulate objects on my computer. My fingertip is large, and my monitors (all 4 of them) are at a 90-degree angle to my desk. Why would I want to reach out (and up) to manipulate objects on my screen when I can use the mouse to do it?
Now, other devices like game tables, interactive kiosks, digital book readers, maybe PDAs and such, that’s fine, but I have yet to see value in a touch-screen PC that is not at the very least stylus oriented. And on that subject, what is the big deal about handwriting recognition? I specifically use a computer (and previously a typewriter) because my handwriting sucks :) Why on earth would I want to write on my PC screen? Sign a digital document? Sure, but now get someone to trust that ‘I’ signed it and we will be all set. That technology is still not proven and most people don’t really trust it. A fingerprint is a better option, and far more trusted, but still not entirely mainstream yet.
Yes, the touch demos I have seen show fancy things like dragging and throwing photos around a table top, playing games, or ordering off a virtual menu, and those are all good examples of the use of touch technology, but at a very narrow focus and scope. The demos of interactive touch counters in stores that let you compare multiple products side by side are cool too, but they rely not JUST on touch but also on RFID technology, which is not really related to touch; you could do one without the other. Games like chess, checkers, and solitaire (every computer HAS to come with a copy of that, right?) are fine for touch, but would you really want to play WoW or DOOM using touch?
I have YET to see one ultra compelling demonstration of using touch in an office environment that wows me more than a mouse does. Can you imagine trying to do photo-retouching using your finger? Editing code or creating an application form in Visual Studio using your hands? How about highlighting text and dragging it around or changing fonts using your hands? Now picture doing all that on a 17 or even a 21 inch screen.
I am not saying that touch does not have its uses; it does, but on a somewhat narrow scope, I think. My prediction is that touch WILL finally take hold at some point, but more along the lines of interface technology that we are already familiar with today. Give me a keyboard that I can reconfigure on the fly based upon the application that is active on my screen, and do it that way. Give me a touchpad to replace my mouse, or maybe two touchpads (one on each side of my virtual keyboard) so I can do multi-touch stuff. Maybe I will reach out to my screen a bit for larger-granularity things, like flipping pages in a large document or opening an application by tapping on an icon, but touch is not the generic answer to every problem.
It looks cool in movies and sounds cool in high-level technical talk, but in reality, where I live, I need what works, and I just don’t see touch having the kind of PC-level impact that most people do.
FORCE me into a touch-only interface and you lose me as a customer. I WOULD use a stylus more instead of a mouse on a laptop, but don’t make me write what I can type MUCH faster, or you lose me as a customer.
My prediction is that the next big wave will be multi-modal interfaces. Provide me the ability to use touch where it makes sense, and at the same time allow me to use a mouse or stylus or keyboard where that makes sense, all at MY whim. I want to flip a PDF down a few pages by grabbing it with my left hand on the screen, then, as the pages scroll by, grab the one I want with the mouse in my right hand, stop it, select a few words, and reach up to press the bold button on the screen with my left hand. That’s great.
And before all you naysayers out there bring up the cool ‘things’ from movies like Minority Report, keep in mind that was a ‘gesture-based interface’, NOT touch-based, and I think that is far closer to being useful than pure touch. But that’s a subject for another blog entry.
Let’s be clear, to innovate you need to reach.
There are many companies I have run into over the years that have continuous innovation as one of their core values, but also a buy-instead-of-build mandate. They want to reach for the stars, but they feel they need to (or even can) do it using existing technology.
Why are people so build averse?
One thing I have noticed is that even in a ‘buy’ environment you end up building; the building is simply different. Instead of building UIs, databases, or business rules, you end up building glue. Glue code that connects disparate systems. Glue code that moves data between stores. Glue code that provides services to secondary consumers. Glue code to allow enterprise-level reporting where reporting was not available in the purchased system.
So explain to me again why people are so build averse?
Innovation starts with the ability to take a risk and move in a different direction. It is difficult to consider moving an industry in an entirely different direction when you are building on top of existing applications that fit into a different paradigm. After all, are you not looking to do something different? Are you not looking to accomplish something that the industry is not yet fully ready for in order to get a jump on the competition?
If your answer to these questions is yes, then how do you expect to innovate efficiently in a different direction using only what already exists?
I know it is simpler to buy something off the shelf and place the responsibility for making it work on a vendor’s shoulders. I also know it may seem cheaper to buy a bunch of COTS products and spend time integrating their data using tools like Informatica and other data-integration methodologies. But once you stray from opening a shrink-wrapped box and simply installing and using it, you have strayed into a build situation, like it or not. It is similar to putting a ton of effort into deciding what car to buy, then, once you take ownership, driving it right over to the custom shop to have the engine replaced with one that has more power, the interior redone to what you really wanted, and the exterior modified. If the car you bought was underpowered and the interior and exterior were not to your liking, why did you buy it?
Consider also what happens when you spend your money to glue stuff together and the industry changes. It sounds like you are insulated in cases like this, because you feel the vendor is responsible for bringing the application you purchased into regulatory compliance, and they are. But what about all that glue you built? The vendor’s responsibility ends at their borders, and whatever you have done to augment your systems over the years is not their responsibility. When push comes to shove, they are not responsible for how you use the system and are only bound to deliver a system that fulfills the legal and regulatory requirements of the line of business, as well as the stated requirements and features of what you purchased. They can’t be held responsible for what you glued onto their product, nor should they be.
Additionally, you cannot predict how they are going to make changes as time progresses, so you are stuck working your changes around their timelines and schedules. You will find yourself having to wait for their release cycles, and then your own install, evaluate, and test cycles, before you can even start any decent planning for changes to your internal glue code, let alone move a new version into production. If your processes are not fast enough, or your vendor’s release schedule is very aggressive, you can find yourself stuck in an endless cycle of install, test, modify, and move to production: a process that can place very high stress on people as well as hardware and software budgets, not to mention the potential for harm to your business if things do not go right.
I am not saying it always makes sense to build. No one can say that. Buy Microsoft Office and be happy that you did. Buy an accounting package and be happy that you did. But if your business is unique, or you need to make it unique as a differentiator, then consider the build path, even if you need to live with a cobbled-together bought system in parallel as you do it.
Holy cow, if I get asked this one more time I think I am going to..... well, I am not sure what I am going to do, but be assured that it may not be pretty :)
I get asked this all the time and I am not sure why people ask it.
"What is the best choice, implementing an interface or using inheritance?"
"What language is the best choice?"
"What is a better thing to use, an array or an array list?"
To me these all sound like the same question.... "How long is a piece of string?"
The problem is that they never seem to be satisfied with the answer "it depends". They seem to get frustrated and think that I am holding back on them; that I am hiding some great secret all to myself that is preventing them from becoming the next great developer.
In all honesty that is the best answer I can give simply because it's true. It REALLY does depend. It depends on your situation, your project, your intent, what you want to do and a ton of other factors that only YOU know about your project.
I also get asked a ton "what is the difference between a programmer and a developer?" To put it simply, the answer is that programmers ask the questions above while developers know that the answer is 'it depends' and are satisfied with it.
I don't mind being asked these questions, just take the answer and learn from it. Use it as a learning tool to become a developer.
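If an example helps: here is the "array or array list?" question shown in Python rather than .NET, since the tradeoff is the same in any language. Neither answer is right in the abstract; the access pattern of YOUR project decides.

```python
# 'It depends', illustrated with Python's list vs. set. The quoted
# questions are .NET-flavored, but the tradeoff is universal.
ids = list(range(10_000))
id_set = set(ids)

# If the dominant operation is 'is X in the collection?', the set wins:
# hashing answers in roughly constant time, while the list scans.
assert 9_999 in id_set
assert 9_999 in ids          # same answer, linear cost

# If you need order, positions, or duplicates, the list is the only choice.
assert ids[42] == 42
assert ids[:3] == [0, 1, 2]

# Neither structure is 'best'. Your access pattern is the missing context
# behind every honest 'it depends'.
```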
Being a developer is cool and fun and you get to ask a whole slew of more cool questions like "how does one go about calculating the air speed velocity of an unladen swallow?"
I have been doing a lot of thinking recently about traceability and how far it should really be taken. I have talked to a wide range of people over the years, from project managers and development managers to team leaders and guy-at-the-desk implementers, and am getting a wide range of answers.
Typically, requirements traceability is critical to the success of a software project, simply because it helps you ensure that you are doing what’s needed to satisfy the customer’s need and no more. But, as with many ‘processes’ in the SW realm, I think it can be taken a bit farther than it should be. I have been told by some project and development managers that having a concrete way to trace requirements all the way down to the code that implements them is critical. The ability to look at the code and know exactly why something was put into the system, and more importantly what will be impacted by a code change, is a ‘must have’ in any good development system. In a traceability graph this usually ends up looking like this:
While I can start to see the benefit of that I also start to see where it breaks down a bit.
1) Code is often shared heavily between functional areas, which leads to a very large traceability tree. In my opinion, once you get past a certain number of branches (a number I have not really quantified yet, but I will know it when I see it) the code simply gets qualified as ‘important’, and traceability at that point loses some of its value.
2) The current state of tooling offers no way to store this metadata in the source in a simple, automated manner. This leaves the task up to the developer (usually in the comments), which means the developer gets more work to do. As we all know, the more time something takes without giving the person doing it much (if any) direct value, the less likely it is to get done. This means the traceability data immediately becomes suspect, causing no one to believe it, and thus again it loses its value.
3) Why do we really care that FunctionX was written to explicitly fulfill functional requirement F-101 and thus Business requirement B-203?
I personally think this deep traceability is only there to fulfill management’s need to see neat charts (ok, maybe I could have worked on the color scheme a bit) and graphs. I also think it is a way for managers to feel they are ensuring value from their developers, by making sure the developers are writing only what is needed to satisfy the requirements and not a line of code more. In fact, many developers seem to be from my side of the camp, but some of them take it way too far in the other direction. Their opinion is that unless the system can be ensured as ‘good’, why track any of it at all? They know what the requirements are; they should be left on their own to implement the code in a way that satisfies the requirements, and that’s it. Why should they need to justify their work at all, as long as the end product works well and satisfies the stated requirements?
What you end up with here is this:
Who wins from this? No one does. Most of the time, an all-or-nothing strategy produces a completely non-productive outcome. Is it a good idea to have requirements traceability? Sure it is. I think most sensible developers and managers alike will agree that knowing why you are doing something, what the impact of changes will be, and how things get tested are all good (great) ideas. The frustration comes in trying to come up with a solution that satisfies both camps; something that gives both the managers and the developers what they want.
I think that something is a very tight level of traceability between all levels of requirements, both upward and downward, augmented by carrying the traceability down to the test cases and stopping there. With this you get something that looks like this:
Notice that you now have traceability from business requirements all the way down to the test cases, just like you did before, but you have left the code out of it. Some folks might say this misses the need (want) to trace requirements to the code that implements them, but take a closer look and you will see that it really does not. The code traceability has not been skipped over; it has been preserved through the physical connection to the test cases.
Consider this: every test case should exist to explicitly support a use case, or at least one part of a use case. This means that every test should be traceable back to some code that it is testing. This ‘traceability’ can be checked in one of two ways. First, a test case that references no code is easy to spot by inspection, and second, you can run an automated tool to flag any test case whose source fails to reference any production code. Clean, simple, and it leaves the developer out of it, which is good.
Now consider the other use of full traceability down to the code level: the ability to spot dead code, or code that does not specifically trace back to any requirement. You have not lost anything here either, since you can again use an automated tool to run a call tree backwards from all the test cases and ensure that you have no code that is unreachable by a test. Actually, this should be part of a normal test regime anyway; it is part of what is called code coverage analysis, making sure that as much of your code is tested as possible.
Have you lost anything? No. Well, maybe some work. In fact, if you look back at your test practices, you are probably already doing almost 100% of this if you use code coverage analysis. If you are not doing code coverage, start; look at what it gives you. Management gets what they want, development gets what they want, and everyone is happy. This is a classic win-win scenario that I think everyone can live with.
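For the curious, the kind of automated check I have in mind is not exotic. Here is a minimal Python sketch, assuming (and this is an assumption; pick whatever convention your shop likes) that requirement IDs such as F-101 are embedded in test names or descriptions. It reports any requirement no test claims to cover:

```python
# Minimal requirement-to-test traceability check, assuming a naming
# convention in which tests mention the requirement IDs they cover.
# The requirement IDs and test names below are invented for illustration.
import re

requirements = {"B-203", "F-101", "F-102"}

test_cases = {
    "test_F101_lookup_order":    "covers F-101 (and thus B-203)",
    "test_F101_B203_end_to_end": "covers F-101, B-203",
}

def untraced(requirements, test_cases):
    """Return requirement IDs not mentioned by any test name or description."""
    blob = " ".join(list(test_cases) + list(test_cases.values()))
    found = set(re.findall(r"[BF]-?\d+", blob))
    # Normalize 'F101' and 'F-101' to the same form before comparing.
    norm = {re.sub(r"([BF])-?", r"\1-", rid) for rid in found}
    return requirements - norm

# F-102 has no test claiming it, so it surfaces as a traceability gap.
assert untraced(requirements, test_cases) == {"F-102"}
```

Run something like this in the build, pair it with code coverage, and the developer never has to annotate a single line of production code.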
Recently, in one of my many quests for knowledge about the good old NNTP protocol (be on the lookout for a really cool Usenet news reader to be released by Enterprocity within the next few months), I was pointed towards something called Postel’s Law, also referred to as the robustness principle.
In a nutshell the law is simple. It states:
“Be conservative in what you do, be liberal in what you accept from others.” – Jon Postel
You can see it for yourself right here at the bottom of page 12 in RFC 793 (TCP).
Since I am embarking on my new role as a Senior Software Engineer next week, I thought that being pointed to this quotation from Jon Postel was quite apropos.
This is something I saw a lot of over the last few years in my old role as a Senior Applications Engineer, both in the products I supported and in the products I helped others build. Many times companies get into a finger-pointing match over who owns a bug (us or them; it’s not OUR fault), or over whether something is even a bug at all. Many times engineering would point to a message we got from another component in the user’s solution (we did VoIP gateways talking SIP, so in these cases it was SIP messages) and say that the message was malformed in some way, and that this was why our stack threw it on the garbage heap, or leaked memory, or threw an exception, or dropped a call, or exhibited some other undesirable behavior that caused someone to pick up their land line and call me.
It all boiled down to Postel’s Law. The third-party SIP stack we used (no names here, please) was not very robust in its ability to take in things that were not 100% to the RFC. It was a good stack that did its job and had a good team behind it, but when it came to handling SIP messages it was very picky, to say the least. One message that did not match the ABNF in the RFC verbatim was ‘wrong’, and the behavior was indeterminate. That, plus the fact that there are some really nebulous areas in the RFC, made the product look at times like it had some serious issues; and in my opinion it did, from a user’s perspective. Taking this to another level, many of these malformed messages were in message headers that our product did not even care about, which just added insult to injury.
In user land, people don’t care about all the stuff behind the scenes; they just want the things they paid for to work. Add the fact that other products, which may not have been better in any other respect, had no problem dealing with these errant messages, and our product became even more suspect in the eyes of the customers. All engineers need to understand that a customer’s perception is reality. Even if YOU, as an engineer, know that the problem is really NOT with your product but with the other one, or with a bug in a third-party component you use in your system, the customer sees an exception thrown in YOUR product, or poor behavior in YOUR product and not the other; your product is the one with the problem.
So, this is just a gentle reminder to all engineers out there (myself included) that not only do you need to validate all input to your systems (a good thing that some of us may take way too far), but you also need to decide HOW you are going to act when you detect bad input. Throwing an exception when you are the upper layer, right next to a human user, may not be the best choice (be on the lookout for a posting on the use of exceptions :) ).
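To make the principle concrete, here is a small Python sketch of a header parser written in the Postel spirit. The header format and function names are made up for illustration; this is not a real SIP stack. Be liberal on the way in, reject (rather than crash on) the truly broken, and be strict on the way out:

```python
# A small sketch of Postel's Law applied to header parsing.
# The header format here is invented for illustration -- not a real SIP stack.
def parse_header(line):
    """Be liberal in what you accept: tolerate odd case and stray
    whitespace around the ':' separator."""
    name, sep, value = line.partition(":")
    if not sep:
        return None            # no colon at all: reject gracefully, don't crash
    return name.strip().lower(), value.strip()

def emit_header(name, value):
    """Be conservative in what you do: always emit one canonical,
    well-formed shape."""
    return f"{name.title()}: {value}"

# Sloppy but salvageable input is accepted...
assert parse_header("  content-LENGTH :  42 ") == ("content-length", "42")
# ...truly broken input is rejected without blowing up...
assert parse_header("no colon here") is None
# ...and output is always strict, regardless of how the input looked.
assert emit_header("content-length", "42") == "Content-Length: 42"
```

Notice that the liberal half never silently passes garbage along; it normalizes what it can and refuses the rest, which is exactly the decision (throw, drop, repair, or log) that each layer of your system needs to make on purpose rather than by accident.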