Rays Development Blog
A look into the mind of a VB Developer
 
# Sunday, January 04, 2009

I have been running Windows Vista (Business x64 Edition) since August 5th. In fact, I upgraded my entire system just so I could run it. For those of you who know me, you'll remember the kick-butt desktop system I had a while ago:

  • Super Micro Motherboard
  • Dual 3GHz dual-core 64-bit Xeon processors with HT (8 logical cores total)
  • 4 GB RAM
  • 800GB SATA3 HD
  • 2 dual head Nvidia 512 MB PCI video cards (4 total video heads)
  • 800 Watt PS

I had been running Windows XP Professional x64 Edition on this rig for about two years and it ran great, but the geek in me decided that he wanted to run Windows Vista. Yes, I was blinded by the new 'cool' looking stuff, and I loved the Sidebar aspect of it. I had been running either Desktop Sidebar or Yahoo Widgets to get a similar experience, but had been plagued by a series of poorly written plug-ins that left me with a bit of a bad taste (like I thought Vista gadgets would be better?). I purchased a copy of Vista Business x64 and made the leap. I actually purchased an additional HD to install it on so I could leave my XP setup alone for a while in case I had to revert back quickly. Good thing I did that.

Vista looked great but, even on a system with the backbone of two 64-bit 3GHz Xeons, the performance was abysmal. In fact, the system ended up with an Experience Index rating of 2.0! After a bit of investigation, the problem was found to be the PCI video cards; they were the components dragging the system down. All other aspects of the system had a rating of 4.5 or better. I was stuck, though, because the motherboard I had selected was server class and did not have any speedy x16 PCIe slots. It did have two x1 slots, but there was no way I was going to locate a decent video card to sit in there. So, it was off to Tiger Direct.

I ended up putting together a kick butt system that I was convinced was going to run Vista very well.

  • iStarUSA S-10000 ATX Full-Tower Server Case
  • Crucial Ballistix Dual Channel 4096MB PC6400 DDR2 800MHz EPP
  • Intel Pentium D 945 Processor HH80553PG0964MN - 3.40GHz, 4MB Cache, 800MHz FSB, Presler, Dual-Core
  • EVGA nForce 680i SLI Motherboard - T1 Version, NVIDIA nForce 680i SLI, Socket 775, ATX, Audio, PCI Express, SLI, Dual Gigabit LAN, S/PDIF, USB 2.0 & Fire-wire, Serial ATA, RAID
  • 2 - EVGA GeForce 8800 GT Video Cards - 512MB DDR3, PCI Express 2.0, SLI Ready, (Dual Link) Dual DVI, HDTV, Video Card
  • Thermaltake CPU Cooler / Big Typhoon VX / 4 in 1 / 6 Heat Pipes / 120mm Fan
  • Ultra X3 ULT40064 1000-Watt Power Supply - ATX, SATA-Ready, PCI-E Ready, Modular

As I already stated in my August 5th posting, it rocked. Vista went right in and ran great without issues this time (no duh right?).

Well, I learned another thing from this experience: the grass always seems greener on the other OS. The real core lesson here is this:

"When Vista is good, it’s great, but when it starts to suck, it really starts to suck."

Stability

XP just seemed tighter to me, like a well-built car. Sure, it had its moments and crashed, but it seemed to recover from crashes much faster and more simply than Vista did. XP would blue screen once in a great while, and when it did it wrote its file and then would run a scan disk as expected. In fact I could always predict when it would run one: if I had a file open at the time of the crash it would run one, every time, like clockwork. Vista never ran one on its own, ever. But I could tell that it was suffering from trouble after the reboot, and when I set up a scan disk manually and ran it, sure enough, there were corrupted files, presumably because of the blue screen. Why did I have to take this step on my own? It seemed odd to me that Vista could not detect the junked files when I knew they were there and XP used to detect them.
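If you ever want to check whether Windows thinks a volume needs that scan, you can query the volume's dirty bit yourself. The little sketch below is just an illustration; it shells out to the built-in fsutil tool (run it from an elevated prompt), which is exactly the kind of step I should not have had to take on my own:

```python
import subprocess

# Ask Windows whether the C: volume's "dirty bit" is set. When it is,
# autochk/chkdsk is supposed to run at the next boot; if it doesn't,
# you can schedule a check yourself with: chkdsk C: /f
# (fsutil requires an elevated command prompt.)
result = subprocess.run(
    ["fsutil", "dirty", "query", "C:"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())   # e.g. "Volume - C: is NOT Dirty"
```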

Now I have to admit that not all the BSODs were Vista's fault. It turns out that I did have one bad stick of RAM, and that was playing havoc with the system after about the first month, but the system never felt right after the first two blue screens it took for me to figure that out. I am convinced that had it not been for that bad stick of RAM I might still be running a stable Vista system today. But what does that say about an OS that can be killed by one bad stick of RAM? Hmmm.

Gadgets

They are really handy, but, as with the other sidebar products, I found that the quality of the code was not great. The standard Windows gadgets seemed OK, but they were slim on functionality and not all that I needed. I wanted one that included system stats (like available HD space), so I had to download one of those (there were several available). I also needed one that gave me status on BitTorrent downloads, and I have to say that, after a lengthy test effort, I could not locate a single one that did not have a memory leak lurking around causing a ton of crashes. It seems that one bad gadget can really take the system down hard, which tells me there is not a great system of process isolation there if that can happen.
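My 'test effort' was nothing fancy; the whole idea is just to sample the gadget host's memory every minute and watch whether it only ever goes up while the gadgets sit idle. A hypothetical sketch of that (it assumes the psutil package and sidebar.exe as the gadget host, neither of which you are tied to) looks something like this:

```python
import time
import psutil  # third-party package: pip install psutil

PROCESS_NAME = "sidebar.exe"   # the Vista gadget host

def find_sidebar():
    """Return the running gadget host process, or None if it isn't running."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == PROCESS_NAME:
            return proc
    return None

proc = find_sidebar()
if proc is None:
    print(f"{PROCESS_NAME} is not running")
else:
    # Print the working set once a minute for an hour. A number that climbs
    # steadily and never comes back down is a pretty good sign of a leak.
    for _ in range(60):
        rss_mb = proc.memory_info().rss / (1024 * 1024)
        print(f"{time.strftime('%H:%M:%S')}  {rss_mb:.1f} MB")
        time.sleep(60)
```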

Aero

What can I say? It looks awesome, but in the grand scheme of things it adds zero value to the actual usability of the system. I have a feeling that MS was relying on the slick glass interface to lure folks in with the 'aw, cool' factor, and it worked :) but the novelty soon wears off. It's kind of like when you think you want one of those tall lanky blond babes and then realize that they have zero personality, no brains, and all they want is for you to buy them stuff. Sure, other guys walk by and ogle her and wish they had one, but soon enough you really feel like tossing her to the curb and getting a good woman like I ended up with :)

UAC

What more can I say about this that has not already been said by hundreds in the press or by other users? It's an interesting concept, but what I think is a flawed implementation. To be honest I am not sure what you COULD do here, really. Let's face it: what we really need is simply smarter users, and UAC is not going to fix that. I think the idea was perhaps to help educate people as to how often things happen behind the scenes that they were never aware of before or never gave a second thought about, but come on. I had to 'allow' files to be moved from one drive to another even though it was clear that it was ME doing the dragging and dropping. I tried, I really did, to live with UAC enabled, but in the end, after about a month, it got shut off. Let's face it: I am a tinkerer, and a pretty good one at that, so I am all over the place at times and really grew to hate that UAC dialog box after a while.

I do give MS credit for allowing it to be turned off, though. I think maybe it should be off by default on the business versions and on by default on the home versions. UAC should do two things. First, it needs to know whether the act being monitored is being performed by the user or by a process, and act accordingly to stay the heck out of the way. Second, it needs to learn a bit and stay out of the way if it gets dismissed at the same spot all the time. Maybe allow a person to turn off notifications on file copy\move with a check box or something.

Application compatibility

I know this is a big one, but come on. The reason I waited as long as I did to run Vista was that I had to wait for Visual Studio 2005 (an MS application) to work on their own OS without causing issues :) I was really annoyed at the issues I had with a few apps. VMware Server was a major annoyance. I was a heavy user of virtual machines for software testing, and there was no reliable way to get it installed under Vista simply because the folks there seemed to refuse to sign their damn drivers. Now you may think that this is all the fault of the folks over at VMware, but in reality I think it's not ALL their fault. Vista does allow you to turn off signed driver checking (under the advanced start-up options in the F8 menu), but you are required to do this every time you start up! UGH!!! It just felt nasty doing that, kind of like I was forced to run in safe mode all the time. It just felt dirty. Visual Studio 2003 was another major problem. I know it's old, that there were major issues with the debugger causing problems, and that it would have taken significant effort, on the order of man-months, to get 2003 working well on Vista, but my only option was to run VS2003 in a VM to maintain my old code base. Oops! Guess what? All my VMs were rendered useless because VMware would not run well without a major hack :) Now I have to install the MS VM product (Virtual PC) just to get VS2003 working? No thanks. I just kept an old dual-proc PIII XP machine alive for that.

I do think I owe it to the folks at MS, though, to say that Vista did seem to handle most of my other apps quite well. These were really the only applications, although major ones to me, that I had problems or issues with.

Performance

Man, nothing feels better to me speed-wise than good old Windows XP Professional. Vista was nice and flashy, but unlike a Ferrari, where you expect it to be a bit high-maintenance and are willing to put up with it because of the growling performance you get in return, Vista always felt slower than it should have been.

Start-up was always fast. Power-up to desktop in less than 2 minutes was great, but in all honesty XP is about the same here, maybe 3 minutes, and start-up speed is not where I spend most of my day anyway. In fact, I hardly ever turn my system off, so unless I am recovering from a crash I care little about start-up speed, and in that case I am expecting a scan disk to be run.

File copy\move speed was awful. Look, I really don't care if you calculate the time it will take for the files to copy or not, but if you do, do NOT make me wait while you add up all the file sizes to do it. Running a few timings showed that about one third of my time was wasted waiting for that 'calculating' junk to happen. This shows one of two things: either the UI was designed by an engineer, or it was designed by a marketing person. Either way, the next time someone other than a UI expert gets into the chair, push them out and do the job right. XP may be a bit off on its estimates, but it is FAST, so more often than not the estimate is irrelevant.

Network speed was terrible. One of the things that really ticked me off lately was the fact that I could not get my new Verizon FiOS working properly with Vista. Windows XP required that I run the TCP Optimizer from SpeedGuide.net, but once I did that simple task it flew (20/5 service is cool). This tool does nothing for Vista. In fact, the IP stack in Vista is apparently 'tuned' so this is not needed. BUNK! I was lucky to get 5 Mb\sec downstream on Vista while the XP box right next to it was getting 22. After doing some digging I found that Vista DID have a known issue, and there was a fix released in SP1 (which I already had installed) that let you tweak things a bit using a registry hack, still not the optimizer tool. That DID make my speed better, but I was still not getting 20. Speed tests run every day over the course of a week showed that I was getting no more than 16. I also ran a few tests on my local network just doing simple file copies across my LAN. Although the tests were very non-scientific, the results were interesting: simply copying a 1GB file to a file server running Windows 2003 over a 100Mb LAN connection took 4 minutes longer from my Vista machine than from my XP machine.
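If you want to repeat my very non-scientific LAN test, a throwaway script like the one below is all it takes (the paths are placeholders, so point them at a real file and a real share). For reference, 1GB over a 100Mb link has a theoretical floor of roughly 80 to 90 seconds, so anything measured in extra minutes is pure overhead, not the wire:

```python
import os
import shutil
import time

# Throwaway throughput check: copy one big file to a network share and
# report the elapsed time and speed. Both paths below are placeholders.
SOURCE = r"C:\temp\bigfile.bin"             # ~1 GB local test file
DEST = r"\\fileserver\share\bigfile.bin"    # share on the Windows 2003 box

size_mb = os.path.getsize(SOURCE) / (1024 * 1024)

start = time.perf_counter()
shutil.copyfile(SOURCE, DEST)
elapsed = time.perf_counter() - start

print(f"Copied {size_mb:.0f} MB in {elapsed:.1f} s: "
      f"{size_mb / elapsed:.1f} MB/s ({size_mb * 8 / elapsed:.0f} Mb/s)")
```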

Conclusion

So, after all that, I am sad (happy) to say that I am once again back on good old comfy Windows XP. It's fast, clean and very much uncluttered. I actually feel relaxed using it. I had not really noticed it before, but Vista always seemed to make me feel like I was rushing. XP lets me work and lets me feel calm while I do it. I get my VS2003 back for when I need it. I have my VMware images back (a few of which will be running Vista for testing) and I think I may just keep this setup for a long time.

All I can say is really, honestly, truly I hope Windows 7 is better.

 

 

Sunday, January 04, 2009 10:37:37 AM (Eastern Standard Time, UTC-05:00)  #    Comments [2]   Vista | OS  | 
# Monday, December 01, 2008

Holy cow, if I get asked this one more time I think I am going to..... well, I am not sure what I am going to do, but be assured that it may not be pretty :)

I get asked this all the time and I am not sure why people ask it.

"What is the best choice, implementing an interface or using inheritance?"

"What language is the best choice?"

"What is a better thing to use, an array or an array list?"

To me these all sound like the same question.... "How long is a piece of string?"

The problem is that they never seem to be satisfied with the answer "it depends". They seem to get frustrated and think that I am holding back on them, that I am hiding some great secret all to myself that is preventing them from becoming the next great developer.

In all honesty that is the best answer I can give simply because it's true. It REALLY does depend. It depends on your situation, your project, your intent, what you want to do and a ton of other factors that only YOU know about your project.
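Just to show why 'it depends' is the honest answer, here is a tiny sketch of the interface-versus-inheritance question (in Python rather than VB, and the classes are made up purely for illustration). Neither version is 'best'; which one fits depends entirely on what your project needs:

```python
from abc import ABC, abstractmethod
from datetime import datetime


# Option 1: inheritance. A good fit when the new class really IS a more
# specialized version of the base class and wants to reuse its behavior.
class FileLogger:
    def log(self, msg: str) -> None:
        print(f"[file] {msg}")


class TimestampedFileLogger(FileLogger):
    def log(self, msg: str) -> None:
        super().log(f"{datetime.now():%H:%M:%S} {msg}")


# Option 2: an interface (abstract base class). A good fit when unrelated
# classes only need to promise the same capability, not share any code.
class Logger(ABC):
    @abstractmethod
    def log(self, msg: str) -> None: ...


class ConsoleLogger(Logger):
    def log(self, msg: str) -> None:
        print(f"[console] {msg}")


class NullLogger(Logger):
    def log(self, msg: str) -> None:
        pass  # deliberately does nothing (handy in tests)


if __name__ == "__main__":
    for logger in (TimestampedFileLogger(), ConsoleLogger(), NullLogger()):
        logger.log("it depends")
```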

I also get asked a ton, "What is the difference between a programmer and a developer?" To put it simply, the answer is that programmers ask the questions above, while developers know that the answer is 'it depends' and are satisfied with it.

I don't mind being asked these questions, just take the answer and learn from it. Use it as a learning tool to become a developer.

Being a developer is cool and fun and you get to ask a whole slew of more cool questions like "how does one go about calculating the air speed velocity of an unladen swallow?"

Monday, December 01, 2008 12:16:28 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Design  | 
# Saturday, November 08, 2008

Well I did it :)

I now have my monitor array complete. I am sitting in front of 4 Acer 22 inch flat screens running 1680 x 1050. They are sweet! Programming is fantastic. Working on school work is fantastic. The massive screen real estate is great.

Sorry, I have to show off the geek setup here:

One thing I hope you notice is the lack of paper. Is it always this way? No, I do have paper on the desk sometimes, but only when I get it from someone else. It is my goal to produce no paper at all. I figure that since I have an awesome system and do most of my work on my computer, why do I need paper at all?

The wife, on the other hand, sees fit to print everything :) I will let her own up to that on her own. I have to admit that I am an enabler there... I do provide 2 printers in the house (one color ink-jet and a B&W laser), but I hardly ever use them at all. If I see something I want to keep, I print it to PDF and then it is always searchable. The extra screen real estate does help me here, but the wife has 2 monitors (the wide screen on her Acer laptop and another Acer 22 inch monitor), so I am not sure what her problem is. I think she just feels 'better' holding paper in her hand to read...

Now to finish up the last week of the Software Engineering class, then it is on to an OOP class.

Saturday, November 08, 2008 10:45:50 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Hardware | Site Admin  | 
# Thursday, October 30, 2008

Been doing a lot of thinking recently about traceability and how far it should really be taken. I have talked to a wide range of people over the years, from project managers and development managers to team leaders and guy-at-the-desk implementers, and I am getting a wide range of answers.

 

Typically, requirements traceability is critical to the success of a software project simply because it helps you ensure that you are doing what's needed to satisfy the customer's needs and no more. But, as with many 'processes' in the SW realm, I think it can be taken a bit further than it should be. I have been told by some project and development managers that having a concrete way to trace requirements all the way down to the code that implements them is critical. The ability to look at the code and know exactly why something was put into the system, and more importantly what will be impacted by making a code change, is a 'must have' in any good development system. In a traceability graph this usually ends up looking like this:

While I can start to see the benefit of that I also start to see where it breaks down a bit.

 

1)     Code is often reused heavily between functional areas, so this leads to a very large traceability tree. In my opinion, once you get past a certain number of branches (a number I have not really quantified yet, but I will know it when I see it) the code simply gets qualified as 'important' and traceability at that point really loses some value.

2)     The current state of tools offers no way to store this metadata in the source in a simple, automated manner. This leaves it up to the developer to perform the task (usually in the comments), and that means the developer gets more work to do. As we all know, the more time something takes that does not give the person doing it much (if any) direct value, the more likely it is that the task does not get done. This means that the traceability data can immediately become suspect, causing no one to believe it, and thus again it loses its value.

3)     Why do we really care that FunctionX was written to explicitly fulfill functional requirement F-101 and thus Business requirement B-203?

 

 

I personally think that this deep traceability is only there to fulfill management's need to see neat charts (OK, maybe I could have worked on the color scheme a bit) and graphs. I also think that this is a way for managers to feel that they are ensuring value from their developers by making sure that the developers are only writing what is needed to satisfy the requirements and not a line of code more. In fact, many developers seem to be from my side of the camp, but some of them take it way too far in the other direction. Their opinion is that unless the system can be guaranteed to be 'good', why track any of it at all? They know what the requirements are; they should be left on their own to implement the code in a way that satisfies the requirements, and that's it. Why do they need to justify their work at all as long as the end product works well and satisfies the stated requirements?

 

What you end up with here is this:

Who wins from this? No one does. Most of the time, when you have an all-or-nothing strategy the outcome is completely non-productive. Is it a good idea to have requirements traceability? Sure it is. I think most sensible developers and managers alike will agree that knowing why you are doing something, what the impact of changes will be, and how things get tested are all good (great) ideas. The frustration comes in trying to come up with a solution that satisfies both camps, something that gives both the managers and the developers what they want.

 

I think that something is a very tight level of traceability between all levels of requirements, both upwards and downwards, augmented by carrying the traceability down to the test cases and stopping there. With this you get something that looks like this:

Notice that you now have traceability from business requirements all the way down to the test cases, just like you did before, but you have left the code out of it. Some folks might say that this misses the need (want) to trace requirements to the code that implements them, but take a closer look and you will see that it really does not. The code traceability has not been skipped over; it has been preserved through the physical connection to the test cases.

 

Consider this: every test case should be there to explicitly support a use case, or at least one part of a use case. This means that every test should be traceable back to some code that it is testing. This 'traceability' can be verified in one of two ways. First, a test case that references no code at all is easy to spot by eye, and second, you can easily run an automated tool over the test sources to flag any test case that fails to reference any production code. Clean, simple, and it leaves the developer out of it, which is good.
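As a rough sketch of what that automated tool could look like (the directory layout, module names and the use of Python here are purely hypothetical), something this small would flag test cases that never reference any production module:

```python
import os
import re

# Hypothetical layout: production modules live in src/, test cases in tests/.
SRC_DIR = "src"
TEST_DIR = "tests"

# Collect the production module names (e.g. billing.py -> "billing").
modules = {
    os.path.splitext(name)[0]
    for name in os.listdir(SRC_DIR)
    if name.endswith(".py")
}

# Flag any test file whose source never mentions a production module.
for name in os.listdir(TEST_DIR):
    if not name.endswith(".py"):
        continue
    with open(os.path.join(TEST_DIR, name), encoding="utf-8") as handle:
        source = handle.read()
    if not any(re.search(rf"\b{re.escape(mod)}\b", source) for mod in modules):
        print(f"{name}: references no production code -- suspect test case")
```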

 

Now consider the other use of full traceability down to the code level: the ability to spot dead code, or code that does not specifically trace back to any requirement. You have not lost anything here either, since you can again use an automated tool to walk the call tree backwards from all the test cases and ensure that no code has been written that is not reachable by a test. Actually, this should be part of a normal test regime anyway; it is part of what is called code coverage analysis, making sure that as much of your code as possible is tested.
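Most coverage tools already hand you this list almost for free. As a hedged sketch (it assumes Python, pytest and the coverage package, plus placeholder src/tests directories, none of which you are tied to), running the whole suite under coverage and looking at what never executed gives you the 'not reachable by any test' candidates:

```python
import coverage
import pytest

# Run the whole test suite under coverage. Any line the report flags as
# never executed is code that no test case reaches, which means it does
# not trace back to any requirement through the test layer.
cov = coverage.Coverage(source=["src"])   # "src" is a placeholder package name
cov.start()
pytest.main(["tests"])                    # "tests" is a placeholder directory
cov.stop()
cov.save()
cov.report(show_missing=True)             # the "Missing" column = dead-code candidates
```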

 

Have you lost anything? No. Well, maybe some work. In fact, if you take a look back at your test practices, you are probably already doing this almost 100% if you are using code coverage analysis. If you are not doing code coverage, start. Look at what it gives you. Management gets what they want, development gets what they want, and everyone is happy. This is a classic win-win scenario that I think everyone can live with.

Thursday, October 30, 2008 6:13:03 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Design | Requirements  | 
# Saturday, October 11, 2008

Recently, in one of my many quests for knowledge about the good old NNTP protocol (be on the lookout for a really cool Usenet news reader to be released by Enterprocity within the next few months), I was pointed towards something called Postel's Law, also referred to as the robustness principle.

 

In a nutshell the law is simple. It states:

 

“Be conservative in what you do, be liberal in what you accept from others.” – Jon Postel

 

You can see it for yourself right here at the bottom of page 12 in RFC 793 (TCP).

 

Since I am embarking on my new role as a Senior Software Engineer next week, I thought that being pointed to this quotation from Jon Postel was quite apropos.

 

This is something that I saw so much of over the last few years in my old role as a Senior Applications Engineer, both in the products that I supported and in the products that I helped others build. Many times companies can get involved in a finger-pointing match over who owns a bug (us or them, it's not OUR fault) or whether something is even a bug at all. Many times engineering would point to a message we got from another component in the user's solution (we did VoIP gateways talking SIP, so in these cases it was SIP messages) and say that the message was malformed in some way, and that this was why our stack threw it on the garbage heap, or leaked memory, or threw an exception, or dropped a call, or exhibited some other undesirable behavior that caused someone to pick up their land line and call me.

 

It all boiled down to Postel's Law. The third-party SIP stack that we used (no names here, please) was not very robust at all in its ability to take in things that were not 100% to the RFC. It was a good stack that did its job and had a good team behind it, but when it came to handling SIP messages it was very picky, to say the least. If a message was not a complete verbatim match to the ABNF used in the RFC, that message was 'wrong' and the behavior was indeterminate. That, plus the fact that there are some really nebulous areas in the RFC, made it look at times like the product had some serious issues, and in my opinion it did, from a user's perspective. Taking this to another level, many of these messages were malformed only in headers that our product did not even care about, which just added insult to injury.
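Postel's Law is easy to picture in code. The sketch below is nowhere near a real SIP stack (the header list is trimmed and the parsing is drastically simplified on purpose), but it shows the difference in attitude: be strict about the headers you actually depend on, and quietly ignore garbage in the ones you don't:

```python
# A deliberately tiny illustration of the robustness principle. Real SIP
# parsing is far more involved; this only shows the attitude: be strict
# about the headers you rely on, tolerant about the ones you do not.

REQUIRED_HEADERS = {"via", "from", "to", "call-id", "cseq"}

def parse_headers(raw_message: str) -> dict:
    """Parse 'Name: value' lines, skipping anything we cannot make sense of."""
    headers = {}
    for line in raw_message.splitlines():
        if ":" not in line:
            continue                    # malformed optional junk: ignore it
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return headers

def accept(raw_message: str) -> bool:
    """Reject only when a header we actually depend on is missing."""
    headers = parse_headers(raw_message)
    missing = REQUIRED_HEADERS - headers.keys()
    if missing:
        print(f"rejecting message, missing required headers: {sorted(missing)}")
        return False
    return True                         # sloppy optional headers never kill the call

# A message with one garbled optional header still gets through.
message = ("Via: SIP/2.0/UDP gateway.example.com\n"
           "From: alice\nTo: bob\nCall-ID: 42\nCSeq: 1 INVITE\n"
           "X-Oddball-header-with-no-colon")
print(accept(message))                  # True
```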

 

In user land, people don't care about all the stuff behind the scenes; they just want the things that they paid for to work. Add to that the fact that other products, which may not have been better in all other respects, did not have a problem dealing with these errant messages, and our product became even more suspect in the eyes of the customers. All engineers need to understand that a customer's perception is reality. Even if YOU, as an engineer, know that the problem is really NOT with your product but with the other one, or with a bug in a third-party component that you use in your system, the customer sees an exception thrown in YOUR product, or poor behavior in YOUR product and not the other one, so your product is the one with the problem.

 

So, this is just a gentle reminder to all engineers out there (myself included) that not only do you need to validate all input to your systems (a good thing that some of us may take way too far), but you also need to decide HOW you are going to act when you detect that bad input. Throwing an exception when you are the upper layer, right next to a human user, may not be the best choice (be on the lookout for a posting on the use of exceptions :) ).

Saturday, October 11, 2008 12:54:58 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Design | Error Handling  | 
# Tuesday, August 05, 2008

Well, I decided that I REALLY wanted to run Vista after all, so, since my old system had a problem with the video cards (they were only PCI), I decided to build a new system that WOULD be able to kick Vista's butt.

I think this one certainly qualifies. As you can see from the photo below, the heat sink on the darn CPU is the biggest I have ever seen.

Specs:

  • iStarUSA S-10000 ATX Full-Tower Server Case
  • Crucial Ballistix Dual Channel 4096MB PC6400 DDR2 800MHz EPP
  • Intel Pentium D 945 Processor HH80553PG0964MN - 3.40GHz, 4MB Cache, 800MHz FSB, Presler, Dual-Core
  • EVGA nForce 680i SLI Motherboard - T1 Version, NVIDIA nForce 680i SLI, Socket 775, ATX, Audio, PCI Express, SLI, Dual Gigabit LAN, S/PDIF, USB 2.0 & Firewire, Serial ATA, RAID
  • 2 - EVGA GeForce 8800 GT Video Cards - 512MB DDR3, PCI Express 2.0, SLI Ready, (Dual Link) Dual DVI, HDTV, Video Card
  • Thermaltake CPU Cooler / Big Typhoon VX / 4 in 1 / 6 Heatpipes / 120mm Fan
  • Ultra X3 ULT40064 1000-Watt Power Supply - ATX, SATA-Ready, PCI-E Ready, Modular

Damn! This thing is FAST and runs Vista like a champ. The modular power supply is cooooooollll. No wires in the case but the ones you need. Rocks. Sweeeet!

So, now the question is: what do I do with my old system, a dual-CPU, dual-core 3GHz Xeon system with Hyper-Threading?

I can't let the secret out right now but around the end of the month I might spill it... I do have plans for it though...

Tuesday, August 05, 2008 12:01:44 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Hardware  | 
# Wednesday, July 16, 2008
Opening post!
Wednesday, July 16, 2008 10:49:06 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0]   Site Admin  | 
Copyright © 2019 Raymond Cassick. All rights reserved.
DasBlog 'Portal' theme by Johnny Hughes.