Rays Development Blog
A look into the mind of a VB Developer
 
# Friday, 05 November 2010

So, I have experienced what I feel is a failure of a project that I was recently a part of, and I have been thinking about it a lot lately in my personal life. Partly because, as a systems architect, it is my job to always be trying to understand where I can improve myself and ensure that I do not repeat mistakes, but also just because, well darn it, I hate failing.

 

Who the heck doesn’t hate failing?

 

Really, I am not counting this as a ‘failure’ per se, because I did bring it up as an issue at the onset of the project and even noted my personal objections to it in the review notes that were taken in the meetings I had. I am noting it as a time of shame in that I allowed my PERSONAL level of professional conduct to be driven by an outside group instead of recusing myself and just walking away. In short, I let go of my principles and am now paying for it.

 

Not a mistake I will be making again.

 

How did I come up with the title of this entry? What does a QA analyst have to do with the legal system? Just so you know, I am a huge fan of the TV series Law & Order. Not so much the recent offshoots, but the old shows with Jerry Orbach (Lennie Briscoe), Sam Waterston (Jack McCoy), and one of my personal favorites, Chris Noth (Mike Logan), but I digress…

 

I have always been fascinated by the law. I almost decided to become a lawyer at one point but decided that I was not hard enough (or perhaps too hard) to take the role. I looked at it for a while and decided that there were potentially too many gray areas to have to deal with ethically, so I took the IT route instead. Hehehehehe, yeah, who knew?

 

So, the relation here is this.

 

In the legal system you have several areas of a legal issue, each one represented by a specific area of expertise looking at the case in a different way. The accused is, by matter of the same legal system that is currently citing them as a ‘bad guy’, provided a way to prove their innocence before a panel of impartial people, and is offered representation to help them. There are people on both sides that defend their position and present their case, and in the end the judge and jury decide, based upon a preponderance of the evidence, whether the accused is guilty or innocent, and what the method/mode of punishment should be. Remember, the legal system is represented by the scales held in Lady Justice’s left hand, with a sword in her right and her eyes covered with a blindfold, indicating that she cannot be influenced by any outside party and is driven only by the written matter of law currently established.

 

In the project system you have several areas of a project issue, each one represented by a specific area of expertise looking at the problem in a different way. The project is what it is, being defined by the specifications that were approved by all the parties involved at its initiation. There are people on both sides that defend their position and present their case, and in the end someone decides whether the delivered system met the requirements or not, and how to correct what needs to be corrected moving forward.

In a business environment, the business owner comes to IT with a need. They understand (probably very well) what needs to be accomplished and can usually state those goals very well in what are referred to as High Level Requirements. These requirements are used to establish a baseline timeframe and budget that is then referenced against the business plan, to check validity against the established mission and cash flow for the year, to determine if it can (or even should) be pursued. Once they get the green light it moves on.

 

In the IT environment an architect is assigned the project, provided the business requirements, a basic timeline and a budget framework, and told to go off and design, then come back with more specifics to move forward. Once they do the design and pass it back to the company for final approval (timeline and budget), the project then gets assigned to developers to complete according to the specification.

 

The developers do the work based upon the design of the architect, perform some base-level tests to make sure that what they release meets the stated objectives, and then release a build for testing.
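
For clarity, the ‘base level tests’ I am talking about are developer-written unit checks like the sketch below (VB.NET, with purely hypothetical names; this is not code from the actual project). They prove a single routine meets its stated objective, which is exactly why they are no substitute for an impartial QA pass over the real data:

```vb
' A minimal developer-level unit test (MSTest). It verifies one routine's
' contract and nothing else; end-to-end data quality is QA's territory.
Imports Microsoft.VisualStudio.TestTools.UnitTesting

Public Class AddressFormatter
    ' Hypothetical routine under test.
    Public Shared Function FormatZip(raw As String) As String
        Return raw.Trim().PadLeft(5, "0"c)
    End Function
End Class

<TestClass>
Public Class AddressFormatterTests
    <TestMethod>
    Public Sub FormatZip_PadsToFiveDigits()
        Assert.AreEqual("00501", AddressFormatter.FormatZip("501"))
    End Sub
End Class
```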

 

Here is where the problem ALWAYS happens.

 

The business will sometimes NOT want to include a QA test resource.

 

WHY? I am not sure. Usually the business says that they are too busy to be bothered with anything. They are, after all, the ones making the money for the company; why should they want to do anything else? But I have heard more than a few times that THEY want to be the test people on the project because THEY know the DATA better than ANYONE and can be a better judge of the system’s processing quality than a QA person can ever be.

 

It was HERE where I was bitten.

 

I fought hard and lost my battle. I was made to allow this abomination into my project. I was provided with the business requirements, I created the low-level design, handed that off to developers who created their individual designs and had them reviewed by other developers, then implemented them along with a series of basic test cases that they deemed were required, and then handed the ‘completed’ project over to the business for THEM to test. The business ran their TESTS (I have yet to see an established – i.e., written – test plan or results document) and signed off on the completed work. The total time for QA testing ended up being about 4-6 hours.

 

My right eyebrow rose a bit, but it was apparently not for me to say anything, and the project went into production. It was run for the first time and the resulting data set was sent off to the next step of the process (not something that I have any control over at all), and within hours THEY saw issues in the data that they were presented with as a result of this project’s processing and kicked it back to us. The business took a look at the data (data they had already seen, by the way; remember, they ‘QA Tested’ this system just hours before and had ‘signed off’ – approval via email – on its viability and correctness).

 

The reaction was shocking to say the least. The business came back and questioned the system’s correctness. I was shocked, not at all that surprised, and still a bit ticked off. I am not a person that enjoys assigning blame, but when I am asked to explicitly locate a problem, that job gets done for me. I find the error, and the fault is assigned by the simple act of doing that. Whoever did that work gets the ‘blame’. In my opinion, though, the blame should be shared by the developer, the person that did the review of the code, and ALSO the QA analyst that either missed a test case or did not execute one correctly. In this case we had NO QA analyst; in reality, I was being asked BY THE QA analyst (the business unit in this case) what the problem was. Again, I was a little miffed, but took it.

The problem ended up being something that I knew was going to be a potential issue, and that we had even discussed in meetings as part of the implementation and design. A direction was decided upon between me and the PM: the business (err… QA) would manually process through this data list and perform some further cleanup that would take a significant effort in dollars, time and specialized software to accomplish in an automated manner, and we would look at other, more automated solutions in the next round, prior to this process needing to be used again next year. Being the diligent architect that I am, I kept this all documented in the project’s documentation, partially because I am just a thorough person, but also as a way to provide some CYA to both myself and the next unlucky architect that got any revisions the next time this project needed to have changes made to it.
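
As a side note, even a naive first pass at that deferred automated cleanup is not much code; something along these lines (a VB.NET sketch with hypothetical field names; the genuinely expensive part is fuzzy matching of near-duplicates, which this does not attempt):

```vb
' Naive duplicate screen: group records on a normalized key and flag any
' group with more than one member for human review. Field names are hypothetical.
Imports System.Collections.Generic
Imports System.Linq

Public Class Rec
    Public Property Name As String
    Public Property Zip As String
End Class

Module DupeScreen
    Public Function FindLikelyDupes(records As List(Of Rec)) As List(Of List(Of Rec))
        Return records.
            GroupBy(Function(r) r.Name.Trim().ToUpperInvariant() & "|" & r.Zip.Trim()).
            Where(Function(g) g.Count() > 1).
            Select(Function(g) g.ToList()).
            ToList()
    End Function
End Module
```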

 

The manual processing was done, requiring the business to manually look through every record and try to remediate possible duplicates. I figured this would FORCE them to look at each and every record, and if there were any OTHER errors they would see them. They were, after all, ‘the best people to judge the correctness of the data’, hence the reason that they mandated themselves as the QA team in the first place. I again shook my head, scratched a bit, and let it go. They completed their manual processing, removed about 1000 or so records that they felt were dupes, and handed the file back to me to get converted over and sent back to the vendor for processing. That being done, the project was run, my involvement was closed out, and I was assigned on to other work.

 

Ding dong, the alarm bell rings again as a new problem is found, and then another.

 

Once again I am asked to look at the data. Amazingly enough, I am asked by the same team that certified this exact same data, and even had to read through it all manually, record by record, in their last cleanup effort, to find the ‘problem’. I found the problem: a common mistake in this type of processing (the order that records are placed in when a lookup is performed) that was not caught by the developer, the reviewer of their code, nor the QA team that certified the data TWICE now before it was allowed out the door.
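
To illustrate the class of bug (a hypothetical reconstruction in VB.NET, not the actual project code): when a lookup simply takes the ‘first’ match, the answer silently depends on how the source records happen to be ordered.

```vb
Imports System.Collections.Generic
Imports System.Linq

Module LookupOrderDemo
    Public Class Address
        Public Property PersonId As Integer
        Public Property Street As String
        Public Property EffectiveDate As Date
    End Class

    Public Function CurrentStreet(addresses As List(Of Address), personId As Integer) As String
        ' BUG pattern: "first match wins" means the result depends entirely on
        ' how the list happened to be loaded:
        '   Return addresses.First(Function(a) a.PersonId = personId).Street

        ' Fix: make the intended ordering explicit before taking the first record.
        Return addresses.
            Where(Function(a) a.PersonId = personId).
            OrderByDescending(Function(a) a.EffectiveDate).
            First().Street
    End Function
End Module
```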

 

So, what’s the result here? I am going to spend my weekend comparing the data we HAD then against what we HAVE now as the result of a change made to address the issue, and trying to determine what to do next.

 

Being a process-oriented guy, and always one to try to learn from my mistakes, I have taken a hard look at this and made a determination that I was right at the start: I am not going to ever accept a project that does not have QA resources assigned. Could I be potentially signing my own walking papers? Perhaps, but at this point it is a case based upon principles and not just me being a whiny architect not willing to take blame. In fact, all I have been asking all along is that someone who is impartial to the business process, the design of the solution, and the development of the solution look at the data going in, the processing, and the data coming out, and TELL ME if there are problems.

 

I welcome being told there is a problem so it can be addressed BEFORE we ship. That’s the idea of testing, to catch problems before they make it to production. I just fail to see how people cannot understand that. Just as Lady Justice stands outside of every courthouse to ensure fair and impartial judgment on the application of the rules of law, so should QA be allowed to stand and judge the usability of a system before it is relied upon to perform its tasks.

 

Now I ask you, how many people ASK to be judged like this?

 

Am I wrong?

Friday, 05 November 2010 11:48:11 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Business | Design | Expectations | Planning | Requirements | Roles | Testing  | 
# Thursday, 14 October 2010

I just found this so funny I had to share it. Things like this have happened to me in code before, so I probably should not be casting stones, but I do still admit that I find it funny, and pointing an occasional finger and giggling allows me to vent sometimes :)

I received an email the other day from a company (nameless - unless you happen to recognize the email :) ) that provides a daily supply of white-papers and other technically oriented and marketing-type documents for us geeky folk to read in our volumes of spare time.

I just found it sooo funny when I got to the bottom of the page and read the last item on the list...

(website link and company name clipped to protect the ~~innocent~~ goofy)

I'm really sorry, but I have to admit that I spent a good 10 minutes laughing at this and finding great joy for some reason.

So, let this be a lesson to everyone out there, and me too. People WILL laugh at you for making a goofy mistake. How do you want your marketing efforts to be remembered?

Hmmmm...

Maybe they did this on purpose?

Nah... I doubt it...

Note: Yes, the actual link to the document DID work, and it was actually pretty good.

Thursday, 14 October 2010 21:07:56 (Eastern Standard Time, UTC-05:00)  #    Comments [1]   Customer Interaction | Error Handling | Expectations | Requirements | Testing  | 
# Saturday, 02 October 2010

Oddly enough I just noticed today how annoying this IE dialog box is:


The example above shows an attempt made by a web page that I visited to reach out on my behalf and open a web page that I happen to have on my ‘Trusted sites’ list within IE8. Yeah, I put Facebook on my trusted sites list because I got tired of having to allow certain things every time I went there, and I do trust it enough, because I regulate very closely, on my own, what features I have enabled and what I use FB for.

I imagine more and more of us are seeing this nowadays as we become entrenched in the draw of sites like Facebook and other socially oriented sites, and as other web sites leverage them as ways to get their own sites noticed and voted for, etc… I imagine that it is going to happen more and more as the line between sites with links such as these gets blurred. Rank this, rate that, yadda, yadda, yadda…

To be honest, I am not 100% clear on the VALUE of this type of cross linking yet, or if it is really more of a passing fad that will soon fizzle out in favor of the next cool ‘thing’ that comes along. But I digress.

The point I want to make is for all those UI-centered development folks out there (myself included, I am afraid) that oftentimes maintain a somewhat shortsighted focus on the task at hand and perhaps don’t look forward a little bit further and ask the next question:

“What else would make sense to include here as part of the design?”

So, I ask you, what else do YOU think would make sense here as part of this design?

Theme to Jeopardy playing quietly in the background…

BUZZ!

How about this as a suggestion?

How about offering the user (me) the ability to ADD the currently ‘Un-trusted site’ to the ‘Trusted sites’ list from here?

To me, this is a HUGE miss in this design. Why? Because had the simple question been asked, there are so many easy ‘quick hitter’ options that could have been implemented to enhance the user experience here with very little effort.

The current state

As it sits right now, the user has the ability to click the ‘Yes’ button and tell IE to trust this link request. The problem is that if there are multiple areas of the currently un-trusted site linking to trusted sites you have on your list, you get asked each and every time if you want to allow it, even if the URL is the same.

This can cause two problems.

First - if the site address does not change, the user can think that they didn’t click properly, or that they moved the mouse as they clicked and the click didn’t register (something that people with physical issues often have problems with), so they get frustrated at themselves and the user experience as a whole.

Second - they get stuck in a cycle of having to click on so many boxes that they accidentally allow a site that perhaps they really didn’t want to.

In addition to this really poor user experience, it is frustrating to think that the only way to avoid having to do this again is to write down or remember the address of each of the sites that pop up (probably write them down manually?) and then add them to my trusted sites list later as a manual effort.

NOT a great UX to say the least.

What could we do here?

So, being the proper engineer, I always have in mind the idea that before I go to someone and say ‘you did this wrong’ I should take the responsibility to bring along my ideas on how to make it right. After all, it is easy to point a finger and laugh; it is harder to think about possible ways to solve the problem. Pointing and giggling just makes you an annoyance; offering viable solutions makes you part of the process of solving the problem.

UI Option #1

Provide the user with a button in this window that lets them jump right over to the ‘Internet Options’ and then the ‘Trusted sites’ dialog box with the URL filled in, and offer the user the chance to add the site to their list if they want to.


UI Option #2

The second option is very simple. Just provide the user with the ability to add the site to the ‘Trusted sites’ list using a simple check box on this dialog box as I have shown here:


I am sure that given a bit more time we could come up with a few more ways to make this work, but the point is that it appears as if the effort was not made at all, and even a small step would have provided some fantastic user-level value with a minimal amount of design, code and testing effort.
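
To put ‘minimal effort’ in perspective: IE keeps its zone assignments under a documented registry key, so the guts of option #2 amount to a handful of lines. This is a sketch under that assumption, not Microsoft’s actual code; a real implementation would validate the host, handle subdomain keys, and respect group policy:

```vb
' Sketch: add a domain to the 'Trusted sites' zone (zone 2) via the ZoneMap key.
Imports Microsoft.Win32

Module TrustedSites
    Public Sub AddToTrustedSites(domain As String)
        ' e.g. domain = "facebook.com"; the value name is the URL scheme.
        Using key As RegistryKey = Registry.CurrentUser.CreateSubKey(
                "Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\" & domain)
            key.SetValue("https", 2, RegistryValueKind.DWord) ' 2 = Trusted sites
        End Using
    End Sub
End Module
```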

You could even go one step further and have the OS keep track of how many times you have allowed a specific URL access, then provide the user with a pop-up notification in the system tray area, maybe once a week or so, letting them know that hey, they have trusted this site x number of times over the last weeks or months; maybe they want to consider adding it as trusted.
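
The bookkeeping for that suggestion is about as simple as counting; a purely hypothetical sketch (no such feature exists in IE):

```vb
' Hypothetical: count per-host 'Yes' clicks and suggest trusting frequent ones.
Imports System.Collections.Generic

Module AllowTracker
    Private ReadOnly AllowCounts As New Dictionary(Of String, Integer)
    Private Const SuggestThreshold As Integer = 5

    ' Call each time the user allows a zone-crossing request for a host.
    Public Sub RecordAllow(host As String)
        If Not AllowCounts.ContainsKey(host) Then AllowCounts(host) = 0
        AllowCounts(host) += 1
        If AllowCounts(host) = SuggestThreshold Then
            ' A real feature would raise a tray notification here instead.
            Console.WriteLine("You have trusted " & host & " " &
                              SuggestThreshold & " times; add it to Trusted sites?")
        End If
    End Sub
End Module
```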

There are so many options that would be simple, add some real value, and enhance the UX in this case, and through so many releases of the OS and IE I have yet to see this addressed once.

If there is someone out there from MS reading my blog (yeah, I am sure there are - NOT!) then let me know if you think what I am saying makes sense. Actually, if there is ANYONE out there reading my blog (I know there are a FEW – I watch my daily logs) then reach out and comment here.

Do you agree with me or not? If not, then let me know why.

I am always open to others’ opinions in cases like this, and since I do design as well as write code, I ALWAYS welcome user feedback.

Let me KNOW what YOU think would be the best way to address this.

 

Saturday, 02 October 2010 14:52:33 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Design | Interfaces | Requirements  | 
# Sunday, 26 September 2010

Well, it has happened again…

I found yet another inconsistency in the Microsoft Office suite that, for some reason, annoys me to no end now that I have found it. The issue lies in the ability to select colors.

Within Outlook you can create message categories and assign them to messages in the mail list. This is pretty useful when you need to quickly identify groups of messages visually. I actually make use of this myself to help me separate personal emails from work-related emails, and then, a bit more granularly, to rate them by importance via color (i.e., lighter colors are lower priority, darker are higher priority). It seems to work pretty well so far and helps keep me organized. However, lately something has just been itching at me about the way it works and looks, and suddenly last night it hit me. The color picker dropdown box that Outlook’s category colorization uses is not a standard color selection dropdown. Here is what it looks like:

Problem #1 - It's just wrong! Windows has a standard color picker dialog box, why not use it? (Invoking it takes only a few lines, as sketched after this list.) Instead, someone had to:

  • Take the time to design the control interface.
  • Take the time to create a drop down control JUST for this use and include it in the code.
  • Test it.
  • Release it.
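
For contrast, here is roughly everything it takes to surface the stock Windows color picker from .NET (a sketch; the WinForms ColorDialog wraps the same common dialog native apps get):

```vb
' The standard color picker: one dialog, a few lines of real code.
Imports System.Drawing
Imports System.Windows.Forms

Module StandardPicker
    Public Function PickColor(fallback As Color) As Color
        Using dlg As New ColorDialog()
            dlg.FullOpen = True ' open with the custom-color mixer expanded
            If dlg.ShowDialog() = DialogResult.OK Then Return dlg.Color
            Return fallback
        End Using
    End Function
End Module
```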

Problem #2 - It's been done wrong! The UI of this control is awful and it is non-standard.

  • It mixes the look and feel of a combo box and a menu control. Notice that the control uses a dropdown arrow to indicate the available action, but then the option area is not a list-style control; instead it is a menu-style area, and it even contains the light shading on the left where a menu would normally place icons showing the option’s equivalent location on a toolbar.
  • The available colors are all mixed up. They are not in any specific order (i.e., light to dark, grouped by tone value, etc.). Very bad design indeed.

It's clear that someone at Microsoft does know how to design a color selection control, because they offer one. I see it used all the time, even in the same application (Outlook), when I want to change the color of my text. It looks like this in case you have forgotten:

See? Now THAT'S how it is supposed to look. It gives you access to the standard colors most often used, allows you to select from a nice wide range of other colors arranged in (some) hue order to create a theme style, and then gives you the option to jump over to a more advanced selection dialog (using the ‘More Colors…’ option), where you can mix your own colors using not just the RGB or HSL scale but also brightness.

THAT is how it's done.

Now I am not going to argue about the effectiveness of the color selection dropdown I am showing above; I am simply showing that it exists, is considered a standard within the Microsoft community (both by internal developers and external customers), and seems to be effective enough for general use. I would not settle for this style in a high-end art-centric application, but for what it needs to do as part of the Microsoft Office package I think this design is clear, concise and effective.

I do have to say however that even this design seems to have been perverted a bit. Taking a look into the same type of area but within the Microsoft Publisher application reveals this color option:

Bluch! Boring!

 

BUT at least it gives you the ability to jump to the more advanced option just like the others do, using the ‘More Colors…’ option, and there you get the same dialog as shown above. In fact, because this is a (semi-)professional desktop publishing application, this dialog box offers you the added option of using the Pantone color scale as well as the other RGB and HSL styles (whoever thought that selecting a color would be so darn complicated?).

 

Since I was poking around, I decided to crank open my copy of Microsoft Expression Blend 3 and see what that did for me. I figured that as I went upwards in application complexity and relative use in the graphics industry this one would be a bit more advanced (i.e., complicated), and I was pleasantly surprised at the available features AND the obvious usability.

 

There is surprisingly very little labeling within the control itself, but I think most of you can get the idea of what you can do with it pretty quickly. As I have often cited to various groups in the past that have had to listen to me drone through usability discussions, quite often it is a simple UI and the ability to play that lead toward an efficient design. You do not always have to GIVE the answers explicitly as long as you allow people the latitude to try and undo, leading to the adage that experience often leads to the best learning.

So, I just HAD to do one more thing and revisit my old buddy Photoshop.

Wow, it is actually not too bad. Funny, but it was kind of better than I had expected it to be. I do however notice that they may need a usability person to review this, because the various types of color selections should really (IMHO at least) have a box around them showing that they are grouped together.

So, what have we learned here?

Yeah, I know, beyond the fact that I can get overly picky?

I think that the message is clear. Consistency is key.

As a developer, there is not really a need to go off reinventing the wheel. What would have been wrong with a dropdown solution similar to this?

At the very least I think this conveys the idea I am trying to get across. And it would give me more darn color options, instead of just the static 25 that I am apparently stuck with for my categories.
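
(Incidentally, the category list itself is at least scriptable. If memory serves, the Outlook 2007 object model lets you add a category with a chosen color in one call, along these lines; treat it as a sketch and verify the OlCategoryColor names against the interop documentation:)

```vb
' Sketch: creating colored mail categories through the Outlook object model.
' Assumes a project reference to Microsoft.Office.Interop.Outlook.
Imports Outlook = Microsoft.Office.Interop.Outlook

Module CategorySetup
    Public Sub AddPriorityCategories(app As Outlook.Application)
        Dim cats As Outlook.Categories = app.Session.Categories
        cats.Add("Work - Low", Outlook.OlCategoryColor.olCategoryColorYellow)
        cats.Add("Work - High", Outlook.OlCategoryColor.olCategoryColorDarkRed)
    End Sub
End Module
```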

Oh, and DO NOT GET ME STARTED on the fact that this listing of categories is the SAME listing that I have available between my CALENDAR and my CONTACTS!

UGH!

That is the subject for another blog post!

<Shudder>

Sunday, 26 September 2010 20:55:02 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Customer Interaction | Expectations | Interfaces | Planning  | 
# Saturday, 31 July 2010

I have to say that I am NOT surprised in the least.

Avast, the (IMHO) WORLD’S BEST anti-virus package available (aside from their apparent lack of support for the Windows Mobile 6 platform :) ), has come through and delivered.

I was contacted by Adam Riley (a member of Avast support with whom I had previously been working), and at this very point my refund through the processing center @ Element5 is in motion. I have been assured that the refund will be processed in the next week or so, and I have already seen the paperwork notifications come through.

Again, I would like to say that I am not surprised. Avast has always done their best in the past to take care of me, has provided service when needed, and to top it all off, unlike a majority of the other anti-virus packages and companies out there, has provided a VERY consistent and high-quality product. I will continue to use them myself, but more importantly, I will continue to recommend them to others.

Although it did take some rather drastic steps, what matters MOST is that the company came through in the end, admitted there was a design problem and that it was taking longer than first anticipated to address (something that as a developer I can both understand and feel frustrated about), and promptly followed up by processing a refund.

Bravo!

So, to all those that I dealt with @ Avast, as well as all those working behind the scenes on a product that has consistently blown my socks off with effectiveness and performance, all I can say is ‘keep it up!’ Don’t stop, and teach those other AV vendors how it’s done.

 

Saturday, 31 July 2010 12:10:01 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Business | Customer Interaction | Service  | 
# Sunday, 04 July 2010

Honestly, I would never have thought that I would be here using my technical blog to vent about a vendor, but I really feel the time has come. This affair has gone on far too long to remain ‘un-published’ any longer.

The offender is Avast Antivirus. http://www.avast.com/index

I have been using their free home version for years and have been promoting it all over the place to everyone. It works flawlessly and has caught so many things that other AV vendors have missed that I decided to NEVER use anything but Avast on any system that I set up for anyone. Businesses, of course, have to pay for the product, because those are the rules at Avast. I am FINE with that. In fact, I think it is GREAT that they have thought enough to understand that home users are in need of a great AV app (which Avast provides) and that businesses are in a better position to pay for the support and all that. I have NO problem with them charging businesses and not home users at all.

So... the McAfee install that came as part of my Motorola Q phone had run out, and I figured that I would go to my friends at Avast, purchase the mobile version of the app, install it on my phone, and get rid of the junk that came on it for free. On 11/19/2009 I ordered and received my copy of the Avast 4 PDA version and installed it. An email confirmation assured me that I would get my license code via email later, and I did. The instructions told me how to enter the Avast license code. The PROBLEM was that the UI on the phone display did not include the ‘About’ button that they referenced, so I had no way to enter the code. Hmmmm

I opened a support case on 11/28/2009 explaining the issue I was having, and was told by Tomas the very next day that my phone’s 320x240 screen resolution was not yet supported and that they would try to implement this feature in a future release. I asked him when they anticipated having this, since I love using Avast and wanted to keep it going. I was told that they should have support for that resolution sometime in January 2010. I felt OK with that date and told them that I would wait for the next release. I figured I would work out the license cost issues with them later, since I was going to be a few months off between having the license and being able to actually use it. I figured they would be good for it and we would work it out.

So, January comes and goes... Other things in my life had happened... I decided to follow up on April 4th, 2010. I noticed that I could no longer find any listing of my original support case (AFO-665966), so I opened a new one (RSB-517949) asking for a status on my old case, and saying that if no help was going to be forthcoming then I would be expecting a refund of my $20. On 4/9/2010 I got an email back from Petr telling me that I could download Avast 5, get my license file resent to me if I forgot it, yadda, yadda, yadda.... I replied to him and explained that I thought he was a bit confused, that I was referring to Windows Mobile, etc..., provided him again with the original support case #, and asked for follow-up. As of 6/6/2010 I have yet to hear back from Petr at all, and that case is still open.

In the meantime I posted about this on the public Avast message area, thinking that maybe if I started asking whether any other people had the same issue I would get a response from at least the public. So far, no response from anyone (public or Avast) there. OK, whatever...

So, I decide enough is enough. I contact Avast sales and am at this point simply asking for my $20 back. Shown below is the opening interaction I posted to get this started:

"I would like to request a refund of my purchase of Avast 4 PDA edition. REF#305093845. I have been trying to get support now since my purchase on 11/19/2009 under 2 ticket numbers (AFO-665966 and RSB-517949) and have gotten no where. I hate to move away from using Avast on my mobile phone (I use it at home on all my PCs there and love it and just recently just purchased a package of license for there) but I am not getting any help with the version I am running on my mobile phone at all. If someone can help me get a version that works on my phone I will keep running it (I love Avast) but I do not want to go longer without a functioning AV on the device and I am not getting any responses or help."

Thus opens another ticket (XAP-831717) with a person named Adam. He asks me for details again and promises to ‘look into it’ for me. I give him the same technical details I had given originally:

Motorola Q 9h global running Windows Mobile 6.1 - 2.4 Inch display 320x240 - 120 MB RAM - Avast 4.1.19 PDA

That post was on 6/23/2010. Here we are on 7/5/2010: no response, no refund, and no functional AV on my phone.

I don't get it.

When did support stop being important to people that pay?

I have worked in support most of my technical life. I worked as a phone jockey for Sony desktop and laptop support (I was agent C02E way back in the day, if you remember me :) ) and was held to EXTREMELY high standards (phone monitoring, customer surveys, technical information and documentation audits, etc...). I left there to do engineering-level support and technical training for a local company called Voice Technologies Group (VTG) that built interface hardware to allow systems like IVRs, voice mail, and later on unified messaging systems to interface between servers and PBX systems. They were bought by Dialogic, so I ended up working for them; they got bought by Intel, and then Intel sold them off to a company called Icon Networks from Europe that ended up putting the name back to Dialogic again. Every step of the way I was involved in the servicing processes in some way, whether through support (phone or on-site), acting as a technical evangelist at all the various trade shows, writing and presenting training classes, or even stepping in and helping customers design solutions and then test them using our hardware/software. At every step of the way I was held to the highest standards of customer interaction. If I just let a support case lie dead for a week, my butt would be in a sling and I would be getting emails from customers, CEOs, you name it.

Now, I will admit that perhaps the support expectations for a $20 bit of software are slightly lower than for a hardware card or IP gateway that runs between $2,000 and $10,000, but the idea is the same. You have a paying customer that plopped down some cold hard cash and has some expectations with respect to how they are treated and how they are interacted with. I am at this point not even expecting to get it working. I KNOW that they are simply NOT going to address my issue, only because of the fact that my phone (as I have been told so many times now) is outdated and has too small a screen. The software is built for an actual ‘smart phone’ with a larger screen, and that is that.

Simply have the personal wherewithal to just email me back and TELL me that fact, and then GIVE ME A REFUND.

At this point I am not sure if I would ever buy Avast again even if I did upgrade my phone to something newer. Why? Because the level of support I have received until this point has been abysmal. Why would I expect that to change?

Avast support: Suggestion - give me a version of the software that works, or give me back my $20. I am not going away, and I can have a tendency to become very persistent. I have tons of experience knowing what gets attention from the customer side of a support case.

 

Sunday, 04 July 2010 15:20:36 (Eastern Standard Time, UTC-05:00)  #    Comments [1]   Business | Customer Interaction | Expectations | Hardware | Roles  | 
# Thursday, 07 January 2010

Touch touch touch…

To be honest I don’t get it.

I touch my computer every day already. I use a mouse and a keyboard to do it, but to be honest I see very little sense in using my finger to manipulate objects on my computer. My fingertip is large, and my monitors (all 4 of them) are at a 90-degree angle to my desk. Why would I want to use my hand to reach out (and up) to manipulate objects on my computer screen when I can use the mouse to do it?

Now, other devices like game tables, interactive kiosks, digital book readers, maybe PDAs and such, that’s fine, but I have yet to see value in a touch-screen PC that is not at the very least stylus oriented. And on that subject, what is the hot thing about handwriting recognition? I specifically use a computer (and previously a typewriter) because my handwriting sucks :) Why on earth would I want to write on my PC screen? Sign a digital document? Sure, but now get someone to trust that ‘I’ signed it and we will be all set. That technology is still not proven, and most people don’t really trust it. Using a fingerprint is a better option, and far more trusted, but still not entirely mainstream yet.

Yes, the touch demos that I have seen show fancy things like dragging and throwing photos around a table top, or playing games, or ordering off of a virtual menu, and those are all good examples of the use of touch technology, but at a very narrow focus and scope. The demos of interactive touch counters in stores that allow you to compare multiple products side by side are cool too, but they rely not JUST on touch but also on RFID technology that is not really related to touch. You could do one without the other. Games like chess, checkers, and solitaire (every computer HAS to come with a copy of that, right?) are fine for touch, but would you really want to play WOW or DOOM using touch?

I have YET to see one ultra-compelling demonstration of using touch in an office environment that wows me more than a mouse does. Can you imagine trying to do photo retouching using your finger? Editing code or creating an application form in Visual Studio using your hands? How about highlighting text and dragging it around, or changing fonts, using your hands? Now picture doing all that on a 17- or even a 21-inch screen.

I am not saying that touch does not have its use; it does, but in a somewhat narrow scope, I think. I think you will see (my prediction) that touch WILL finally take hold at some point, but more along the lines of interface technology that we are already familiar with today. Give me a keyboard that I can reconfigure on the fly based upon the application that is active on my screen, and do it that way. Give me a touch pad to replace my mouse, or maybe two touch pads (one on each side of my virtual keyboard) so I can do multi-touch stuff. Maybe I will reach out to my screen a bit and do larger-granularity things like flip pages on a large document, or open an application by tapping on an icon, but touch is not the generic answer to every problem.

It looks cool in movies, and sounds cool in high-level technical talk, but in reality, where I live, I need what works, and I just don’t see touch being a PC-related thing with a ton of impact like most people do.

FORCE me into a touch-only interface and lose me as a customer. I WOULD use a stylus more instead of a mouse on a laptop, but don’t make me write what I can type MUCH faster, or you lose me as a customer.

My prediction is that the next big wave will be multi-modal interfaces. Provide me the ability to use touch where it makes sense, and at the same time allow me to use a mouse or stylus or keyboard where they make sense, simultaneously and at MY whim. I want to scroll through an online book by using my hand to grab and flip a PDF down a few pages; then, as they scroll by, use my right hand with my mouse to grab a page as I see it, stop it, and select a few words on the screen, so I can reach up and press the bold button on the screen with my left hand. That would be great.

And before all you naysayers out there bring up all the cool ‘things’ from movies like Minority Report, keep in mind that was a ‘gesture-based interface’, NOT touch-based, and I think that is closer to being far more useful than pure touch, but that is a subject for another blog entry.

Thursday, 07 January 2010 11:40:10 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Business | Design | Hardware | Touch | Interfaces  | 
# Thursday, 17 December 2009


Perspective

 

Life, as in business, is all about having perspective.

 

I have recently been given a very large dose of personal perspective that, after a lengthy period of internal debate, I have decided to share with a larger community of people here, because I think it is highly relevant to everyone’s life, both personal and business related.

 

I just recently found out that I have a brain tumor. In the grand scheme of the ones that you could ever have, this one is a bad one, and it is almost guaranteed to shorten my life by some unknown factor of time. The word ‘inoperable’ was used, and for a time I have to admit that I fixated on that word alone, and it drove me to many cascading thoughts that, if left unchecked, could have put me in a very bad place. But at one point I made the conscious decision to examine that word for what it really meant.

 

It DID NOT mean ‘untreatable’.

 

It simply meant that ‘surgery’ was not a viable option.

 

Why am I announcing this on a business, even more so an IT, related blog?

 

Because it really relates to the exact conditions that we run into in IT and in business in general.

 

Like most business and IT leaders, we plod through life knowing that there are unknowns that we will have to handle, but for the most part we simply plan for what we know is going to happen, and then handle the issues that arise by adjusting the plan as we go. And in reality that is fine, because that is life. There is always the possibility that we don’t see something coming, or that something outside our sphere of influence or vision will come crashing into our timeline and affect us. For those things we TRY to be prepared by having contingency plans, disaster recovery plans, etc., but in reality we are still flying by the seat of our pants and simply reacting.

 

We pat ourselves on the back as managers because we have plans in place to handle the unknowns that can come flying at us (as if we can REALLY plan for the unknown), but in reality, just as with me not knowing until recently that I had brain cancer, we simply move from our point of origin to tomorrow and beyond, taking things day by day, following a plan that we all know could completely fall apart tomorrow. And for some reason we are happy, maybe even proud, to be doing that.

 

Wake up like I did.

 

In life, any time you are given a piece of information that you did not have before, no matter how bad it is, you need to be happy about it.

 

Why?

 

Because it now changes your perspective, that’s why. It GIVES you a piece of solid foundational information which you can review and analyze, make solid adjustments to your actual plan with, and then take steps to alter your direction without guessing anymore.

 

I found out that I had brain cancer. Maybe you will find out tomorrow that one of your largest customers has been secretly interviewing other service providers that could potentially replace YOU as a vendor, or maybe you suddenly start to see alarms on a server that indicate an impending massive failure.

 

Are you going to be shocked? Yes.

 

Are you going to be worried? Sure.

 

Are you going to be upset? Probably.

 

Get over it.

 

You have been given a gift, the gift of information. Everyone needs to understand that INFORMATION is critical in life and in business. Those that have it rule the world, because it gives them a perspective, and thus the ability to plan for alternatives and make judgments, that those without it can’t have or make accurately at all.

 

Get over the bad news.

 

Bad news is really only bad when it comes after the condition has occurred. In my case, REALLY BAD news would have been along the lines of having brain cancer that was so advanced that it was not only inoperable but ALSO untreatable. Mine was NOT both. It IS in a VERY bad location (making it inoperable), but because of that location the effects were noticed very early, while it was SMALLER and thus TREATABLE. Compare this to getting an alarm on a server console that says you have a DEAD hard disk that needs to be replaced vs. one that is starting to fail, where you now have time to act on it before the really bad stuff starts to happen. Or compare getting the call that your major customer has already signed a contract with a new vendor and will not be renewing with YOU, and that negotiation is not an option because they already inked a deal with the other vendor and you are now out of the running completely.

 

Again, it is all a matter of perspective.

 

There is a set of lines in the latest Star Trek film between Kirk and Spock that I find highly pertinent in this case:

 

= = = = =

 

Kirk: You say he’s from the future, and knows what’s going to happen; then the logical thing is to be unpredictable.

 

Spock: You are assuming that Nero knows how events are predicted to unfold. To the contrary, Nero’s very presence has altered the flow of history, beginning with the attack on the USS Kelvin and culminating in the events of today, thereby creating an entirely new chain of incidents that cannot be anticipated by either party.

 

= = = = =

 

The gist of this exchange is that once you know something, your perspective changes.

 

You are given an opportunity to plan for a once-unknown condition, and your ability to plan is now balanced out on either side, because both of you (in my case, me and my cancer, but this could just as well be you and the biggest customer you found out is in negotiations with other vendors) are now on a more equal playing field. You know something that you did not before, and can therefore start to plan in advance to change the conditions of tomorrow’s results, and the other side now has to also re-plan.

 

In actuality, one major shift in perspective when you get bad news, in both life and business, can also be this.

 

YOU have just been given a gift that has turned the tables and given YOU the advantage.

 

YOU now know something that the other side does not know. YOU know their plans, and they may not know that you do. This actually switches the advantage over to your side, simply because it now allows you to start planning alternative strategies to account for the variance in the relationship. You can now go on the offensive before the other side has a chance to develop their own strategies to react to what is now going to be your plan of attack, be that a very well established, focused, and thought-out plan of chemotherapy, targeted radiation, and advanced imaging to monitor progress, or your sales department’s ability to prepare a revised contract adjusting the terms to meet the current needs of your largest customer, or your IT department’s ability to purchase, stage and implement a new NAS server to replace the one that is currently starting to fail.

 

Once again, it is all a matter of perspective, and in business, perspective is KEY, because it means that you understand the conditions of the world and have the ability to thoughtfully respond instead of just reacting to events as they pop up.

 

Remember, I don’t think that there is ever really bad news. There is news that delivers a bad message, but the fact that you got the bad message at all can be an opportunity.

 

Keeping that opportunity in perspective is the key.

 

Thursday, 17 December 2009 05:31:53 (Eastern Standard Time, UTC-05:00)  #    Comments [2]   Business | Planning  | 
# Sunday, 21 June 2009

Let’s be clear, to innovate you need to reach.

There are many companies that I have run into over the years that have continuous innovation as one of their core values, but also have a buy-instead-of-build mandate. They want to reach for the stars, but they feel they need to (or even can) do it using existing technology.

Why are people so build averse?

One thing that I have noticed is that even when you are in a ‘buy’ environment you end up building; the building is simply different. Instead of building UI, databases or business rules, you end up building glue. Glue code that connects disparate systems. Glue code that moves data between stores. Glue code that provides services to secondary consumers. Glue code that allows enterprise-level reporting where reporting was not available in the purchased system.
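
To make ‘glue’ concrete, most of it looks something like this sketch: code whose only job is to ferry data from one purchased system’s store into another’s (the connection strings, table and column names here are all hypothetical):

```vb
' Typical glue: read from system A's database, reshape, write to system B's.
Imports System.Data.SqlClient

Module GlueJob
    Public Sub CopyNewCustomers(sourceCs As String, targetCs As String)
        Using src As New SqlConnection(sourceCs), dst As New SqlConnection(targetCs)
            src.Open()
            dst.Open()
            Dim read As New SqlCommand(
                "SELECT Id, FullName FROM CrmCustomers WHERE ExportedOn IS NULL", src)
            Using rdr = read.ExecuteReader()
                While rdr.Read()
                    Dim ins As New SqlCommand(
                        "INSERT INTO ErpParties (ExternalId, Name) VALUES (@id, @name)", dst)
                    ins.Parameters.AddWithValue("@id", rdr.GetInt32(0))
                    ins.Parameters.AddWithValue("@name", rdr.GetString(1))
                    ins.ExecuteNonQuery()
                End While
            End Using
        End Using
    End Sub
End Module
```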

So explain to me again why people are so build averse?

Innovation starts with the ability to take a risk and move in a different direction.  It is difficult to consider moving an industry in an entirely different direction when you are building on top of existing applications that fit into a different paradigm.  After all, are you not looking to do something different? Are you not looking to accomplish something that the industry is not yet fully ready for in order to get a jump on the competition?

If your answer to these questions is yes, then how do you expect to be efficiently innovative using what already exists to move forward in a different direction?

I know that it is simpler to buy something off the shelf and place the responsibility for making it work on the shoulders of a vendor. I also know that it may seem cheaper to buy a bunch of COTS products and spend time integrating their data using tools like Informatica and other data integration methodologies. But once you stray from being able to open a shrink-wrapped box and simply install and use, you have strayed into a build situation, like it or not. It is similar to putting a ton of effort into deciding what car you want to buy, and then, once you take ownership, driving it right over to the custom shop to have the engine replaced with one that has more power, the interior redone to what you really wanted, and the exterior modified. If the car you bought was underpowered and the interior was not what you wanted and the exterior was also not to your liking, then why did you buy it?

Consider also what happens when you spend your money to glue stuff together and the industry changes. It sounds like you are insulated in cases like this, because you feel that the vendor is responsible for bringing the application you purchased into regulatory compliance, and they are, but what about all that glue that you built? The vendor’s responsibility ends at their borders, and whatever you have done to augment your systems over the years is not their responsibility. When push comes to shove, they are not responsible for how you use the system and are only bound to deliver to you a system that fulfills the legal and regulatory requirements of the line of business as well as the stated requirements and features of what you purchased. They can’t be held responsible for what you glued onto their product, and nor should they be.

Additionally, you cannot predict how they are going to make changes as time progresses, so you are stuck working your changes around their timelines and schedules. You will find yourself having to wait for their release cycles, and then for your install, evaluate and test cycles to complete, before you can even start any decent planning to make changes to your internal systems of glue code and move a new version into production. If your processes are not fast enough, or your vendor’s release schedule is very aggressive, you can find yourself stuck in an endless cycle of install, test, modify, and move to production, a process that can place some very high stress on people resources as well as hardware and software costs, not to mention the potential for harm to your business if things do not go right.

I am not saying that it always makes sense to build. No one can say that. Buy Microsoft Office and be happy that you did. Buy an accounting package and be happy that you did. But if your business is unique, or you need to make it unique as a differentiator, then consider the build path, even if you need to live with a cobbled-together bought system in parallel as you do it.

 

Saturday, 20 June 2009 23:31:35 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Design | Requirements  | 
# Thursday, 30 April 2009

I catch myself correcting people all the time when these terms are used, because not many seem to use them correctly, at least correctly by my judgment. I am going to speak my mind here and put out into the public eye what I think the difference is between each of these roles.

I say roles because they are not people. They can be people, and they most certainly are almost always jobs within a company, but at the lowest level they are roles. Each of these is a pattern that a person has to fit into to serve that particular purpose. Multiple people can fill each of them at once, just like one person can fill several, but at any one given time a person fits into only one of them. Because of a person’s experience and knowledge levels, as well as their underlying personality, they may be qualified to fill one of these roles or they may not. They may be good at one or not.

I am going to start by putting out a very crude diagram to show my personal view (perhaps a rash generalization based upon my experience) of how these roles fit within a development hierarchy.

 

The first thing you will notice is that architects are on top and programmers are on the bottom, with developers nicely placed in between. This is not out of any disrespect for either developers or programmers, but we must be honest with ourselves: there is a certain level of expectation between these roles that places them within a very specific hierarchy. Like it or not, professionally speaking, each step up is ‘better’ than the one below. I use ‘better’ as a relative term here to mean more experienced, more accepting of responsibility, and shouldering more expectations. I know that sometimes programmers can feel the entire weight of the project on their shoulders, but in reality, if they do, then someone above them in the hierarchy is not performing their role properly.

So, how do I place these roles within this hierarchy? What criteria do I use? How do I measure the expectations?

Architect

This person (or persons) is responsible for the technical footprint of the solution. When it comes down to understanding how all the various piece-parts talk to each other, this person knows. When it comes down to understanding the difference between a clustered and a load-balanced set of servers, this person knows. When it comes down to understanding why clustering is better than load-balancing within the context of the enterprise’s architecture, this person knows. When it comes to understanding how a specific messaging architecture fits in the system, this person gets it. When it comes to understanding why it may be better to use a server with multiple physical CPUs vs. one with multiple processing cores, this is the guy to ask.

Can they do the work of everyone below them? About 80-90% of it, yes. Should they be responsible for doing low-level work within their project? I don’t think so. Why? Because for a really technical person that has to work at the implementation level, it is very difficult to shift gears to a high-level technical view and stay objective, to select one method over another strictly on the merits of its contribution to the overall business need instead of what may be simpler, or cooler, to implement. If an architect is going to be required to actually do work on a project at a lower level, then I don’t think it should be on their own project. If they are going to switch gears, then I think it should be a clean switch.

Architects not only have to be able to work at this high level, they need to be happy working there. I have seen many cases where developers were promoted to architect simply on merits such as length of service or a great ability to lead a team of developers and programmers, but became miserable wrecks when they reached the level of an architect because they missed the thrill of the compile. They need to be able to feel personal fulfillment in the act of a project coming together more than the rush of seeing a passing unit test. They need to be at peace with the fact that they made a good decision on what message transport they selected, rather than feeling the high of spending all day working with WSDL and message versioning. They need to feel comfortable sitting in an ivory tower once in a while, even if those below them feel a bit put off by the view.

Developer

Developers are at the top of the ‘do-er’ list. These people do the work. They build the systems designed by the architects and understand the low-level implementation details of HOW to build the stuff that was designed. You want to know the various methods available on an object? This is who you ask. You want to know how large an XML message is as it goes across the wire between servers? This person can answer that. Do you want to know how two objects connect and what the ripple effect of a change is going to be? Ask these folks.

Developers know it all within their areas of expertise. And to be honest, developers need to maintain a specific area of expertise, because software development changes so darn fast that you cannot possibly know it all to a high degree of efficiency and depth. You can be very knowledgeable in a ton of areas, but when it comes down to knowing how the bits move in a specific way, you need to have a core set of technologies that you are great in. These folks need to understand how tools like UML help them and how they can hinder. They need to know the difference between book theory and implementation reality. They need to know that ‘pattern’ is not a magic word unless it can really solve your problem, and that OOP is not a mandatory way of life, but you had better think at least a little before you decide that it isn’t. This role also understands why you should need a note from your mother to use a global variable in development, but also understands that doing so does not make you an evil Satan worshiper. Developers understand the reason that code comments are useful and that not every line needs to be commented.

Some people can feel confused and worried living here because they think that they need to know it all at a very low level. I think these people are best off living one level lower, as a programmer, until they get a level enough head to move up, and they may actually end up being very good architects given enough experience.

Programmer
 
Beginner, script kiddie, copy-paste developer: these are the first words that come to mind when I think of this moniker. Don’t get me wrong, being a programmer is part of the natural progression of becoming a developer, and then an architect. Most of us learned to crawl before we could walk, and learning to write software is no different. Programmers understand the syntax, but probably not the reasons behind using different patterns. They understand the idea behind separation of concerns and multi-tier development, but are probably not completely clear on the subtle nuances that can make it work well or bring a system down around their knees. They can debug most of the code they write, but get itchy when they have to read others’ code, or work on code that was written years ago by someone else. They also may not view the process of design, review, and code as having much worth, and feel more comfortable just sitting down with their beverage of choice and writing code to hit a mark. These folks may be great at writing glue, the code that binds the ‘stuff’ of a project together, but they have not yet had enough experience to be responsible for all the low-level details of an object’s overall implementation. The good ones are hungry for knowledge and want to learn as much as they can, but until they get closer to being a developer they are in an endless search for the silver bullet, the best way, the one true method that allows them to work efficiently and write the next killer bit of code. These folks comment their code because they are told that comments are good, but for the most part it is feast or famine. They either comment everything or nothing.

So there we go. If I made it sound like one role is better than another, as in architects are just better people than programmers, then please accept my apologies, as that was not my intention. I think every one of these roles is very important for a well-balanced development team. Like I have always said, the world needs both planners and doers if it wants to get anything done. If there is no one to put their head down and code, then it does not matter how good the design is; nothing gets done. So, if you are a programmer that is learning and growing, and understands their role and plays well there, then I say congratulations to you for being a necessary cog in the system. If you are an architect and feel that I am giving programmers or developers too much credit for their jobs, then shame on you; get out of the industry, because your attitude is getting in the way. Everyone has to start somewhere; it’s a natural progression that everyone should go through.

Thursday, 30 April 2009 12:05:59 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Roles  | 
# Thursday, 12 February 2009

Since I have begun my deep dive into Windows Presentation Foundation (WPF) I have also started to take a long hard look at usability and all the various factors that can have an impact on the user experience. After all, WPF allows you to do all kinds of shiny and cool things, and every one of them can have an effect, either positive or negative, on the user’s ability to understand the interface of an application.

I say understand because that is really what we refer to when we talk about the User Experience (UX) of an application. We have all kinds of interesting terms that hide the concept, like discoverability, transfer of skills, etc., but what it comes right down to is the user’s ability to ‘get it’ when they look at the application. One other thing that started me thinking about this more is my recent attempt to get my mother used to using a computer. This process alone has opened my eyes a great deal to usability and to what a person who has no existing experience with computers ‘sees’ when they look at a program for the first time. The concepts of a button or a slider or a scroll bar all have a very simple context to someone that is used to current GUI-based applications, but to someone that has never used one before, the term button can have a completely different connotation and can really be confusing. It used to be simpler…

A button was always a square ‘thing’ with a defined border around it and text that told you what it did, but at some point we started to change it. Buttons started to light up when the mouse moved over them in an attempt to show that the mouse can ‘do something there’. Then someone decided that you could replace a button with a picture, and then they even decided that you can remove the border around the button. What we have started to see now is a blurring between buttons and icons. Not a large problem, you may think at first, until you dig just below the surface and look at what I call the ‘action context’, or rather what you can do with the ‘thing’. These two concepts, button and icon, are really very different (see the little code sketch after this list):

  • Buttons usually require a single click while icons traditionally require a double click.
  • Icons typically represent something that you can take an action on while buttons typically indicate an action that you can perform.
  • Icons usually allow a right click for a context menu of options while buttons typically do not.
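
To put the difference in developer terms, here is a small WPF sketch (my own hypothetical example, with made-up names and a made-up doc.png) of how differently the two action contexts get wired up in code:

```vb
' Hypothetical example: a button IS an action, an icon REPRESENTS a thing.
Imports System
Imports System.Windows
Imports System.Windows.Controls
Imports System.Windows.Media.Imaging

Public Class ActionContextDemo
    Public Shared Sub Wire(panel As StackPanel)
        ' A button performs its action on a single click.
        Dim saveButton As New Button With {.Content = "Save"}
        AddHandler saveButton.Click, Sub(s, e) MessageBox.Show("Saved.")
        panel.Children.Add(saveButton)

        ' An icon traditionally needs a double click to act on the thing
        ' it represents, and usually offers a right-click context menu.
        Dim docIcon As New Image With {
            .Source = New BitmapImage(New Uri("doc.png", UriKind.Relative)),
            .Width = 32, .Height = 32
        }
        AddHandler docIcon.MouseLeftButtonDown,
            Sub(s, e)
                If e.ClickCount = 2 Then MessageBox.Show("Opening document...")
            End Sub
        Dim menu As New ContextMenu()
        menu.Items.Add(New MenuItem With {.Header = "Open"})
        docIcon.ContextMenu = menu
        panel.Children.Add(docIcon)
    End Sub
End Class
```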

It’s funny, but because I have been brought up with the GUI concept for a very long time (ok, not that long, I am not that old) this progression somehow slipped past me and I ‘just understood it’, but now that I am teaching someone new to it I have seen how difficult it can be to ‘get it’. Using a GUI is almost as bad as learning the English language (remember the dreaded i-before-e rule?), and given the fact that GUIs were supposed to make life simpler, that should not be the case.

So why is it this way now? Why are we where we are?

Anyone?

Bueller?

It’s because in our rush to help we lost sight of the fact that GUIs are supposed to be a standard based upon a deep intellectual understanding of the basics of what people can understand and interpret visually. It’s also because we just can.

Remember way back, I think it was around the 80’s, when signs started to get less wordy and more visual? Remember when people used to explain that a sign of a person walking with a big red line through it was supposed to be more universal and language-agnostic than the words ‘don’t walk’? That made sense to most people. As computers became more visual, the paradigm (I really hate that word) started to migrate to computer use also, and these pictograms (which is what they really are, just like cave wall paintings) started to be known as icons, and the GUI industry was off to a boom.

Fast forward to present day. We seem to be stuck in a new paradigm: that of fluffy and likable user interfaces. When the heck did that happen? When did it become better (or even part of the standard) to use animated buttons with drop shadows and all that golly-gee-whiz stuff? I think it is really more because we can than because we need to. Were people asking us (by us I mean developers) for really cool user interfaces that look like they have been dipped in liquid plastic and that spin and fly around the screen? I don’t ever remember getting that memo on my desk. Users really just want something that works well and is easy to use.

I have listened to UI ‘experts’ try to convince me that discoverability is a major reason for the change, like we saw with the Microsoft Ribbon, and that we need to start thinking differently when we design our UIs. I have even been told that a good example of UX and discoverability would be to make buttons grow in size as they are clicked, so that the most-used buttons end up larger than the least-used ones. Does anyone here remember the debacle that Microsoft shoved on us in Office (2000 I think it was) when the menus started to ‘hide’ the functions used less often? They called it personalized menus, and from what I can see most user-centric web sites carried more articles detailing how to shut that ‘feature’ off than articles about how it worked and how it was supposed to benefit users. It’s gone now for the most part, thank goodness.

What’s my point of all this?

One simple concept. Not everyone is a visual person. Just because you make a ‘thing’ that acts like a button, don’t assume that people are going to ‘get it’ and ‘know’ that they can click it. Also, don’t assume that making it glow when the mouse moves over it will ‘mean’ to them that they can click it. Don’t assume that placing a drop shadow under something will give everyone the impression of layers, that the ‘thing’ is higher up so you can push it down with the mouse. Remember that not everyone thinks in 3-D. Half the users I deal with for some reason end up with all their windows set to full screen and just do not get the concept of overlapping windows and how to interact with them. That was ‘supposed’ to be a universal understanding, remember? Everyone was supposed to think of their computer desktop as a desk with stacks of papers on it, where you can bring specific papers to the front to work on them, but in reality that concept is lost on many people, and they resort to working in full-screen mode, maybe using the task bar to switch between application windows.

Now, does this mean that we should abandon all the new UI concepts and stick to the boring gray screens of yesteryear? No. It does mean that all of us, yes, even the UI designers, need to understand who uses computers and temper our ‘best practices’ with some humility and understanding. Just because you think it is a cool idea does not make it a good idea (although it may still be cool). Just because you make a picture clickable does not mean that all users will ‘get it’ and just know what to do. Also, don’t think that you can fix all this by writing better documentation that says ‘hey, you can click on anything with a drop shadow’, because most people don’t read the docs.

So, what are you supposed to do? I have some words of what I think is wisdom for all involved.

Developers\designers, I think one of the biggest things that will help is to keep things consistent. After all, that is what the GUI was supposed to do. Remember that the GUI (really with roots way back in the common user interface, or CUI, idea) was supposed to breed the idea that the framework (the OS in the case of Windows) provides a common set of UI elements that keeps the UX between applications looking and behaving in a consistent manner.

I know that people innovate and there are tons of great new UI ideas out there, and I am not saying don’t innovate and bring these into new technology, but I am saying that you need to understand that just because your new whiz-bang UI element makes sense to you and solves a problem in your eyes does not mean that it will for all your users. Innovate all you can, but temper it with the lens of a new user that may be using your stuff for the first time. Be ready to take support questions on the new idea and maybe have a few videos or other training materials available focused on just that new concept. Maybe even provide a small application that gets installed that allows a user to ‘play’ with the new control completely outside the application, free from the worry of messing up their work.

Users, remember that you are new to this, and that things are going to look different to you, but most of all, remember that those helping you have been through this and are probably completely numb to the fact that you may not ‘know’ what they are saying. The whole UI premise is that once you start to learn a little, the rest comes easier, and that curve can happen quickly; once you are over the hump the entire thing becomes second nature. That was the idea in the first place. Also, don’t fall back on the thought that you are dumb for not ‘getting it’ and give up. Once you do it enough you will ‘get it’. It simply is new and takes some practice to get good at. Now, that does not mean that you can expect to be spoon-fed all the time either. You have a responsibility to learn. If you want to use a computer you have to learn a little. Those around you will be (should be) understanding to a point, but after they have had to remind you that the square ‘thing’ on the same screen you have seen 100 times before is a button, and that you click it with the left mouse button, you can expect some tension in the air.

Trainers\helpers\support, you have to have patience and understanding, but most of all you have to KNOW what the system you are trying to help with looks like and be able to spot potential trouble points. If you ask someone to click the button on the screen that has a specific picture on it, and the user tells you that they do not see a button like that on their screen, trust them and adjust your thinking. Maybe they are not ‘seeing’ a button. Maybe to them it is a picture and they are just not ‘getting it’. Remember, what you have spent years looking at and understanding may be brand new to them.

Thursday, 12 February 2009 14:40:40 (Eastern Standard Time, UTC-05:00)  #    Comments [0]    | 
# Saturday, 10 January 2009

Ok, I have to admit that I am sick and tired of being treated like a second-class citizen simply because I own a kick-ass computer and decided to run the 64-bit version of Windows XP Professional.

Today I had to try to do a Remote Assistance session to my mother’s new computer (don’t ask) and after some searching (because it would not work) I came across this little tidbit of information on the Microsoft web site.

Remote Assistance Is Not Available in Windows XP 64-Bit Edition
http://support.microsoft.com/kb/304727

Symptoms:
Windows XP 64-Bit Edition does not include the Remote Assistance feature.

Status:
This behavior is by design.

Holy freaking hell! Do they not call it the ‘Professional’ version? What’s with not including a support feature in there?

Oh wait a second. I think I understand it... Just like the theme engine, the 64-bit version of something is so completely different that it would have been too hard to make it work in x64, so they just left it out, right?

You know, I am usually pretty liberal in my love of MS stuff. Their software has helped me make a decent living over the years and I think that they generally do a pretty good job, but it's these little annoying things that keep getting under my skin like a tick.

Somebody there better wake up.

Oh, by the way, it works just fine on the 64-bit version of VISTA running the exact same copy of Windows Live Messenger, so they COULD do it if they wanted to.

Saturday, 10 January 2009 22:00:43 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   OS  | 
# Sunday, 04 January 2009

I have been running Windows Vista (Business x64 Edition) since August 5th. In fact I upgraded my entire system just so I could run it. For those of you who know me, I had a kick-butt desktop system a while ago.

  • Super Micro Motherboard
  • Dual 3GHz dual-core 64-bit Xeon processors with HT (4 cores, 8 logical)
  • 4 GB RAM
  • 800GB SATA3 HD
  • 2 dual head Nvidia 512 MB PCI video cards (4 total video heads)
  • 800 Watt PS

I was running Windows XP Professional x64 Edition on this rig for about 2 years and it ran great, but the geek in me decided that he wanted to run Windows Vista. Yes, I was blinded by the new ‘cool’ looking stuff and I loved the sidebar aspect of it. I had been running either Desktop Sidebar or Yahoo Widgets to get a similar experience, but had been plagued by a series of poorly written plug-ins that left me with a bit of a bad taste (like I thought Vista gadgets would be better?). I purchased a copy of Vista Business x64 and made the leap. I actually purchased an additional HD to install it on so I could leave my XP setup alone for a while in case I had to revert quickly. Good thing I did that.

Vista looked great but, even on a system with the backbone of two 64-bit 3GHz Xeons, the performance was abysmal. In fact the system ended up with an experience rating of 2.0! After a bit of investigation the problem was found to be the PCI video cards; they were the components dragging the system down. All other aspects of the system had a 4.5 or better rating. I was stuck though, because the motherboard I had selected was server-class and did not contain any speedy x16 PCIe slots. It did have two x1 slots, but there was no way I was going to locate a decent video card to sit in there. So, it was off to Tiger Direct.

I ended up putting together a kick butt system that I was convinced was going to run Vista very well.

  • iStarUSA S-10000 ATX Full-Tower Server Case
  • Crucial Ballistix Dual Channel 4096MB PC6400 DDR2 800MHz EPP
  • Intel Pentium D 945 Processor HH80553PG0964MN - 3.40GHz, 4MB Cache, 800MHz FSB, Presler, Dual-Core
  • EVGA nForce 680i SLI Motherboard - T1 Version, NVIDIA nForce 680i SLI, Socket 775, ATX, Audio, PCI Express, SLI, Dual Gigabit LAN, S/PDIF, USB 2.0 & Fire-wire, Serial ATA, RAID
  • 2 - EVGA GeForce 8800 GT Video Cards - 512MB DDR3, PCI Express 2.0, SLI Ready, (Dual Link) Dual DVI, HDTV, Video Card
  • Thermaltake CPU Cooler / Big Typhoon VX / 4 in 1 / 6 Heat Pipes / 120mm Fan
  • Ultra X3 ULT40064 1000-Watt Power Supply - ATX, SATA-Ready, PCI-E Ready, Modular

As I already stated in my August 5th posting, it rocked. Vista went right in and ran great without issues this time (no duh right?).

Well, I learned another thing from this experience. The grass always seems greener on the other OS. The real core learning here is this:

"When Vista is good, it’s great, but when it starts to suck, it really starts to suck."

Stability

XP just seemed tighter to me, like a well-built car. Sure it had its moments and crashed, but it seemed to recover from crashes much faster and more simply than Vista did. XP would blue screen once in a great while, and when it did it wrote its dump file and then would run a scan disk as expected. In fact I could always predict when it would run one: if I had a file open at the time of the crash it would run one, every time, like clockwork. Vista never ran one on its own, ever. But I could tell that it was suffering from troubles after the reboot, and when I set up a scan disk manually and ran it, sure enough, corrupted files, presumably because of the blue screen. Why did I have to take this step on my own? It seemed odd to me that Vista could not detect the junked files when I knew they were there and XP used to detect them.

Now I have to admit that not all the BSODs were Vista’s fault. It turns out that I did have one bad stick of RAM that was playing havoc on the system after about the first month, but the system never felt right after the first 2 blue screens it took for me to figure that out. I am convinced that had it not been for that bad stick of RAM I might still be running a stable Vista system today. But, what does that say about an OS that can be killed by one bad stick of RAM? Hmmm.

Gadgets

They are really handy but, as with the others, I also found that the quality of the code was not great. The standard Windows gadgets seemed OK, but they were slim on functionality and not all that I needed. I wanted one that included system stats (like available HD space) so I had to download one of those (and there were several available), but I also needed one that gave me status on BitTorrent downloads, and I have to say that after a lengthy test effort I could not locate a single one that did not have a memory leak lurking around causing a ton of crashes. It seems that one bad gadget can really take the system down hard, which tells me they do not have a great process isolation story there if that can happen.

Aero

What can I say? It looks awesome, but in the grand scheme of things it adds zero value to the actual usability of the system. I have a feeling that MS was relying on the slick glass interface to lure folks in with the ‘aw, cool’ factor, and it worked :) but the novelty soon wears off. It’s kind of like when you think you want one of those tall lanky blond babes and realize that they have zero personality, no brains, and all they want is for you to buy them stuff. Sure, other guys walk by and ogle her and wish they had one, but soon enough you really feel like tossing her to the curb and getting a good woman like I ended up with :)

UAC

What more can I say about this that has not already been said by hundreds in the press or even other users? It’s an interesting concept with what I think is a flawed implementation. To be honest I am not sure what you COULD do here really. Let’s face it: what we really need is simply smarter users, and UAC is not going to fix that. I think the idea was perhaps to help educate people as to how often things happen behind the scenes that they were never aware of before or never gave a second thought about, but come on. I had to ‘allow’ files to be moved from one drive to another even though it was clear that it was ME doing the dragging and dropping. I tried, I really did, to live with UAC enabled, but in the end, after about a month, it got shut off. Let’s face it, I am a tinkerer, and a pretty good one at that, so I am all over the place at times and really grew to hate that UAC dialog box after a while.

I do give MS credit for allowing it to be turned off though. I think maybe it should be off by default on the business versions and on by default on the home versions. UAC should do two things. First, it needs to know when the act being monitored is being performed by the user or by a process and act accordingly to stay the heck out of the way, and second, it needs to learn a bit and stay out of the way if it gets dismissed at the same spot all the time. Maybe allow a person to turn off notifications on file copy\move with a check box or something.

Application compatibility

I know this is a big one, but come on. The reason I waited as long as I did to run Vista was because I had to wait for Visual Studio 2005 (an MS application) to work on their own OS without causing issues :) I was really annoyed at the issues I had with a few apps. VMware Server was a major annoyance. I was a heavy user of virtual machines for software testing and there was no reliable way to get it installed on Vista, simply because the folks there seemed to refuse to sign their damn drivers. Now you may think that this is all the fault of the folks over at VMware, but in reality I think it’s not ALL their fault. Vista does allow you to turn off signed-driver checking (under the advanced start-up options in the F8 menu) but you are required to do this every time you start up! UGH!!! It just felt nasty doing that, kind of like I was forced to run in safe mode all the time. It just felt dirty. Visual Studio 2003 was another major problem. I know it’s old, and that there were major issues with the debugger that were causing problems, and I understand that it would have taken significant effort, on the order of man-months, to get 2003 working well on Vista, but my only option was to run VS2003 in a VM to maintain my old code base. Oops! Guess what? All my VMs were rendered useless because VMware would not run well without a major hack :) Now I have to install the MS VM product (Virtual PC) just to get VS2003 working? No thanks. I just kept an old dual-proc PIII XP machine alive for that.

I do owe it to the folks at MS, though, to say that Vista did seem to handle most of my other apps quite well. These were really the only applications, although major to me, that I had problems\issues with.

Performance

Man, nothing feels better to me speed-wise than good old Windows XP Professional. Vista was nice and flashy, but unlike buying a Ferrari, where you expect it to be a bit high-maintenance but are willing to put up with it because of the growling performance you are getting, I always felt Vista was slower than it should have been.

Start-up was always fast. Power-up to desktop in less than 2 minutes was great, but in all honesty XP is about the same here, maybe 3 minutes. Start-up speed is not where I spend my day though; in fact I hardly ever turn my system off, so unless I am recovering from a crash I care little about start-up speed, and then I am expecting a scan disk to be run anyway.

File copy\move speed was awful. Look, I really don’t care if you calculate the time it will take for the files to copy or not, but if you do, do NOT make me wait while you add up all the file sizes to do it. Running a few timings showed that about one third of my time was wasted waiting for that ‘calculating’ junk to finish. This showed me one of two things: either the UI was designed by an engineer, or it was designed by a marketing person. Either way, the next time someone other than a UI expert gets into the chair, push them out and do the job right. XP may be a bit off on its estimates, but it is FAST, so more often than not the time is irrelevant.

Network speed was terrible. One of the things that really ticked me off lately was the fact that I could not get my new Verizon FiOS working properly with Vista. Windows XP required that I run the TCP optimizer from SpeedGuide.net, but once I did this simple task it flew (20/5 service is cool). This tool does nothing for Vista. In fact the IP stack in Vista is supposedly ‘tuned’ so this is not needed. BUNK! I was lucky to get 5 Mb\sec downstream on Vista while the XP box right next to it was getting 22. After doing some digging I found that Vista DID have a known issue, and there was a fix released in SP1 (which I already had installed) that allowed you to tweak a bit using a registry hack (still not the optimizer tool) that DID allow my speed to get BETTER, but I was still not getting 20. Speed tests done every day over the course of a week showed that I was getting no more than 16. I also ran a few tests on my local network just doing simple file copies across my LAN. Although the tests were very non-scientific, the results were interesting: simply copying a 1GB file to a file server running Windows 2003, over a 100Mb LAN connection, took 4 minutes longer on my Vista machine than on Windows XP.
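
As an aside, and I am going from memory here, so treat the exact knob as an assumption on my part: the tweak most articles pointed people to on Vista was the new TCP auto-tuning setting, flipped from an elevated command prompt rather than the registry:

```
netsh interface tcp set global autotuninglevel=disabled
```

Setting it back to normal undoes it.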

Conclusion

So, after all that, I am sad (happy) to say that I am once again back on good old comfy Windows XP. It’s fast, clean and very much uncluttered. I actually feel relaxed using it. I had not really felt it before, but Vista seemed to make me feel like I was always moving. XP lets me work and lets me feel calm while I do it. I get my VS2003 back for when I need it. I have my VMware images back (a few of which will be running Vista for testing) and I think I may just keep it this way for a long time.

All I can say is really, honestly, truly I hope Windows 7 is better.

Sunday, 04 January 2009 10:37:37 (Eastern Standard Time, UTC-05:00)  #    Comments [2]   Vista | OS  | 
# Monday, 01 December 2008

Holy cow, if I get asked this one more time I think I am going to..... well, I am not sure what I am going to do, but be assured that it may not be pretty :)

I get asked this all the time and I am not sure why people ask it.

"What is the best choice, implementing an interface or using inheritance?"

"What language is the best choice?"

"What is a better thing to use, an array or an array list?"

To me these all sound like the same question.... "How long is a piece of string?"

The problem is that they never seem to be satisfied with the answer “it depends”. They seem to get frustrated and think that I am holding back on them, that I am hiding some great secret all to myself that is preventing them from becoming the next great developer.

In all honesty that is the best answer I can give simply because it's true. It REALLY does depend. It depends on your situation, your project, your intent, what you want to do and a ton of other factors that only YOU know about your project.
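
If an example helps, here is a tiny sketch (my own hypothetical one) of why ‘it depends’ is the honest answer to the array question:

```vb
' Hypothetical example: the same job done three ways, because 'it depends'.
Imports System
Imports System.Collections
Imports System.Collections.Generic

Module ItDepends
    Sub Main()
        ' Fixed, known size and a single type: a plain array is lean and fast.
        Dim days() As String = New String() {"Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"}

        ' Unknown, growing size: ArrayList grows for you, but it stores Object,
        ' so value types get boxed and every read needs a cast.
        Dim grab As New ArrayList()
        grab.Add(42)          ' the Integer gets boxed
        grab.Add("hello")     ' nothing stops mixed types sneaking in
        Dim n As Integer = CInt(grab(0))

        ' Growing size AND a single type: List(Of T) usually wins today,
        ' which is the point: the 'best' answer changed when the framework did.
        Dim scores As New List(Of Integer)()
        scores.Add(42)
        Dim first As Integer = scores(0) ' no cast, no boxing

        Console.WriteLine("{0} {1} {2}", days(0), n, first)
    End Sub
End Module
```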

I also get asked a ton "what is the difference between a programmer and a developer?" To put it simply, the answer is that programmers ask the questions above while developers know that the answer is 'it depends' and are satisfied with it.

I don't mind being asked these questions, just take the answer and learn from it. Use it as a learning tool to become a developer.

Being a developer is cool and fun and you get to ask a whole slew of more cool questions like "how does one go about calculating the air speed velocity of an unladen swallow?"

Monday, 01 December 2008 00:16:28 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Design  | 
# Saturday, 08 November 2008

Well I did it :)

I now have my monitor array complete. I am sitting in front of 4 Acer 22 inch flat screens running 1680 x 1050. They are sweet! Programming is fantastic. Working on school work is fantastic. The massive screen real estate is great.

Sorry, I have to show off the geek setup here:

One thing I hope you notice is the lack of paper. Is it always this way? No, I do have paper on the desk sometimes, but only when I get it from someone else. It is my goal to produce no paper at all. I figure that I have an awesome system, and I do most of my work on my computer, so why do I need paper at all?

The wife on the other hand sees fit to print everything :) I will let her own up to that on her own. I have to admit that I am an enabler there... I do provide 2 printers in the house (one color ink-jet and a B&W laser) but I hardly ever use them at all. If I see something I want I print it to PDF and then it is always searchable. The extra screen real estate does help me here, but the wife has 2 monitors (the wide screen on her Acer laptop and another Acer 22 inch monitor) so I am not sure what her problem is. I think she just feels ‘better’ holding paper in her hand to read...

On to finish the last week of the Software Engineering class, then it is on to an OOP class.

Saturday, 08 November 2008 22:45:50 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Hardware | Site Admin  | 
# Thursday, 30 October 2008

I have been doing a lot of thinking recently about traceability and how far it should really be taken. I have talked to a wide range of people over the years, from project managers and development managers to team leaders and guy-at-the-desk implementers, and I am getting a wide range of answers.

 

Typically, requirements traceability is critical to the success of a software project simply because it helps you ensure that you are doing what’s needed to satisfy the customer’s needs and no more. But, as with many ‘processes’ in the SW realm, I think it can be taken a bit farther than it should be. I have been told by some project and development managers that having a concrete way to trace requirements all the way down to the code that implements them is critical. The ability to look at the code and know exactly why something was put into the system, and more importantly what will be impacted by making a code change, is a ‘must have’ in any good development system. In a traceability graph this usually ends up looking like this:

While I can start to see the benefit of that I also start to see where it breaks down a bit.

 

1)     Code is often shared massively between functional areas, so it leads to a very large traceability tree. In my opinion, once you get past a certain number of branches (a number I have not really quantified yet, but I will know it when I see it) the code simply gets qualified as ‘important’ and traceability at that point really loses some value.

2)     The current state of tools offers no way to store this metadata in the source in a simple, automated manner. This leaves it up to the developer to perform the task (usually in the comments), and that means the developer gets more work to do. As we all know, the more time something takes without giving the person doing it much (if any) direct value, the more likely it is that the task does not get done. That means the traceability data can immediately become suspect, causing no one to believe it, and thus again it loses its value.

3)     Why do we really care that FunctionX was written to explicitly fulfill functional requirement F-101 and thus Business requirement B-203?

I personally think that this deep traceability is only there to fulfill management’s need to see neat charts (ok, maybe I could have worked on the color scheme a bit) and graphs. I also think that this is a way for managers to feel that they are ensuring value from their developers by making sure that the developers are only writing what is needed to satisfy the requirements and not a line of code more. In fact many developers seem to be from my side of the camp, but some of them take it way too far in the other direction. Their opinion is that unless the system can be ensured as ‘good’, why track any of it at all? They know what the requirements are; they should be left on their own to implement the code in a way that satisfies the requirements and that’s it. Why do they need to justify their work at all as long as the end product works well and satisfies the stated requirements?

 

What you end up with here is this:

Who wins from this? No one does. Most of the time when you have an all-or-nothing strategy the outcome is completely non-productive. Is it a good idea to have requirements traceability? Sure it is. I think most sensible developers and managers alike will agree that knowing why you are doing something, what the impact of changes is, and how things get tested are all good (great) ideas. The frustration comes in trying to come up with a solution that satisfies both camps. Something that gives both the managers and developers what they want.

 

I think that something is a very tight level of traceability between all levels of requirements, both upward and downward, augmented by completing the traceability down to the test cases and stopping there. With this you get something that looks like this:

Notice that you now have traceability from business requirements all the way down to the test cases just like you did before, but you have left the code out of it. Some folks might say that this misses the need (want) to trace requirements to the code that implements them, but take a closer look and you will see that it really does not. The code traceability has not been skipped over; it has been preserved through the physical connection to the test cases.
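
For the curious, here is a sketch of one way to make that requirement-to-test-case link machine-readable. This is my own hypothetical convention, not a feature of any particular test framework: tag each test with the requirement IDs it verifies (reusing F-101 and B-203 from point 3 above), and let an automated build step harvest the tags with reflection.

```vb
' Hypothetical convention: requirement tags on test methods.
Imports System
Imports System.Reflection

<AttributeUsage(AttributeTargets.Method, AllowMultiple:=True)>
Public Class VerifiesRequirementAttribute
    Inherits Attribute
    Public ReadOnly Id As String
    Public Sub New(id As String)
        Me.Id = id
    End Sub
End Class

Public Class OrderTests
    <VerifiesRequirement("F-101"), VerifiesRequirement("B-203")>
    Public Sub OrderTotalIncludesTax()
        ' ... test body ...
    End Sub
End Class

Module TraceReport
    ' The 'automated tool' side of the bargain: harvest the tags.
    Sub Main()
        For Each m In GetType(OrderTests).GetMethods()
            For Each a In m.GetCustomAttributes(GetType(VerifiesRequirementAttribute), False)
                Console.WriteLine("{0} -> {1}", DirectCast(a, VerifiesRequirementAttribute).Id, m.Name)
            Next
        Next
    End Sub
End Module
```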

 

Consider this: every test case should be there to explicitly support a use case, or at least one part of a use case. This means that every test should be traceable back to some code that it is testing. This ‘traceability’ can be verified in one of two ways. First, test cases that reference no code are easy to spot, since they have no code inside them, and second, you can easily run an automated tool over the test case source to flag any test that fails to reference any code. Clean, simple, and it leaves the developer out of it, which is good.

 

Now consider the other use of full traceability down to the code level: the ability to spot dead code, or code that does not specifically trace back to any requirement. You have not lost anything here either, since you can again use an automated tool to walk a call tree backwards from all the test cases and ensure that you have no code written that is not reachable by a test. Actually this should be part of a normal test regime anyway; it is part of what is called code coverage analysis, making sure that as much of your code is tested as possible.

 

Have you lost anything? No. Well, maybe some work. In fact if you take a look back at your test practices, you are probably already doing this almost 100% if you are using code coverage analysis. If you are not doing code coverage, start. Look at what it gives you. Management gets what they want, development gets what they want, and everyone is happy. This is a classic win-win scenario that I think everyone can live with.

Thursday, 30 October 2008 18:13:03 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Design | Requirements  | 
# Saturday, 11 October 2008

Recently, in one of my many quests for knowledge about the good old NNTP protocol (be on the lookout for a really cool Usenet news reader to be released by Enterprocity within the next few months) I was pointed towards something called Postel’s Law, also referred to as the robustness principle.

 

In a nutshell the law is simple. It states:

 

“Be conservative in what you do, be liberal in what you accept from others.” – Jon Postel

 

You can see it for yourself right here at the bottom of page 12 in RFC 793 (TCP).

 

Since I am embarking on my new role as a Senior Software Engineer next week, I thought that getting pointed to this quotation from Jon Postel was quite apropos.

 

This is something that I saw so much of over the last few years in my old role as a Senior Applications Engineer, both in the products that I supported as well as in the products that I helped others build. Many times companies can get involved in a finger-pointing match over who owns a bug (us or them, it’s not OUR fault) or whether something is even a bug at all. Many times engineering would point to a message we got from another component in the user’s solution (we did VoIP gateways talking SIP, so in these cases it was SIP messages) and say that the message was malformed in some way, and that this was why our stack threw it on the garbage heap, or leaked memory, or threw an exception, or dropped a call, or exhibited some other undesirable behavior that caused someone to pick up their land line and call me.

 

It all boiled down to Postel’s Law. The third-party SIP stack that we used (no names here please) was not very robust at all in its ability to take in things that were not 100% to the RFC. It was a good stack that did its job and had a good team behind it, but when it came to handling SIP messages it was very picky, to say the least. One message that was not completely verbatim to the ABNF used in the RFC, and that message was ‘wrong’ and the behavior was indeterminate. That, plus the fact that there are some really nebulous areas in the RFC, did not help; it made it look at times like the product had some serious issues, and in my opinion, from a user’s perspective, it did. Taking this to another level, many of these malformed messages were in message headers that our product did not even care about, which just added insult to injury.

 

In user land, people don’t care about all the stuff behind the scenes; they just want the things they paid for to work. Add to that the fact that other products, which may not have been better in all other respects, did not have a problem dealing with these errant messages, and our product became even more suspect in the eyes of the customers. All engineers need to understand that a customer’s perception is reality. Even if YOU, as an engineer, know that the problem is really NOT with your product but with the other one, or with a bug in a third-party component that you use in your system, the customer sees an exception thrown in YOUR product or poor behavior in YOUR product and not the other; your product is the one with the problem.

 

So, this is just a gentle reminder to all engineers out there (myself included) that not only do you need to validate all input to your systems (a good thing that some of us may take way too far) but you also need to decide HOW you are going to act when you detect that bad input. Throwing an exception when you are the upper layer, right next to a human user, may not be the best choice (be on the lookout for a posting on the use of exceptions :) ).
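
To make that concrete, here is a minimal sketch (a hypothetical example of my own, not our actual SIP stack) of Postel’s Law applied to header parsing: be liberal on the way in, tolerating case and whitespace noise, and be conservative on the way out by emitting only one canonical form.

```vb
' Hypothetical example: liberal parsing in, strict formatting out.
Imports System

Module RobustHeaders
    ' Liberal in what we accept: tolerate case and whitespace variations.
    Function TryParseHeader(line As String, ByRef name As String,
                            ByRef value As String) As Boolean
        If line Is Nothing Then Return False
        Dim colon As Integer = line.IndexOf(":"c)
        If colon <= 0 Then Return False ' truly unparseable: reject, don't guess
        name = line.Substring(0, colon).Trim().ToLowerInvariant()
        value = line.Substring(colon + 1).Trim()
        Return name.Length > 0
    End Function

    ' Conservative in what we send: one canonical form only.
    Function FormatHeader(name As String, value As String) As String
        Return name.Substring(0, 1).ToUpperInvariant() &
               name.Substring(1) & ": " & value
    End Function

    Sub Main()
        Dim n As String = Nothing, v As String = Nothing
        If TryParseHeader("  content-LENGTH :  42", n, v) Then
            Console.WriteLine(FormatHeader(n, v)) ' prints "Content-length: 42"
        End If
    End Sub
End Module
```

Note that truly ambiguous input still gets rejected; being liberal does not mean guessing.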

Saturday, 11 October 2008 00:54:58 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Design | Error Handling  | 
# Tuesday, 05 August 2008

Well, I decided that I REALLY wanted to run Vista after all, so, since my old system had a problem with the video cards (they were only PCI) I decided to build a new system that WOULD be able to kick Vista’s butt.

I think this one certainly qualifies. As you can see from the photo below, the heat sink on the darn CPU is the biggest I have ever seen.

Specs:

  • iStarUSA S-10000 ATX Full-Tower Server Case
  • Crucial Ballistix Dual Channel 4096MB PC6400 DDR2 800MHz EPP
  • Intel Pentium D 945 Processor HH80553PG0964MN - 3.40GHz, 4MB Cache, 800MHz FSB, Presler, Dual-Core
  • EVGA nForce 680i SLI Motherboard - T1 Version, NVIDIA nForce 680i SLI, Socket 775, ATX, Audio, PCI Express, SLI, Dual Gigabit LAN, S/PDIF, USB 2.0 & Firewire, Serial ATA, RAID
  • 2 - EVGA GeForce 8800 GT Video Cards - 512MB DDR3, PCI Express 2.0, SLI Ready, (Dual Link) Dual DVI, HDTV, Video Card
  • Thermaltake CPU Cooler / Big Typhoon VX / 4 in 1 / 6 Heatpipes / 120mm Fan
  • Ultra X3 ULT40064 1000-Watt Power Supply - ATX, SATA-Ready, PCI-E Ready, Modular

Damn! This thing is FAST, and it runs Vista like a champ. The modular power supply is cooooooollll: no wires in the case but the ones you need. Rocks sweeeet!

So, now the question is what do I do with my old system? A dual-processor, dual-core Xeon 3GHz system with Hyper-Threading.

I can't let the secret out right now but around the end of the month I might spill it... I do have plans for it though...

Tuesday, 05 August 2008 00:01:44 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Hardware  | 
# Wednesday, 16 July 2008
Opening post!
Wednesday, 16 July 2008 22:49:06 (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Site Admin  | 
Copyright © 2017 Raymond Cassick. All rights reserved.
DasBlog 'Portal' theme by Johnny Hughes.