Ray's Development Blog
A look into the mind of a VB Developer
 
# Sunday, 21 June 2009

Let’s be clear, to innovate you need to reach.

There are many companies I have run into over the years that list continuous innovation as one of their core values but have a buy-instead-of-build mandate. They want to reach for the stars, but they feel they must (or even that they can) do it using existing technology.

Why are people so build averse?

One thing that I have noticed is that even when you are in a 'buy' environment you still end up building; the building is simply different. Instead of building UI, databases, or business rules you end up building glue. Glue code that connects disparate systems. Glue code that moves data between stores. Glue code that provides services to secondary consumers. Glue code to allow enterprise-level reporting where reporting was not available in the purchased system.
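To make 'glue' concrete, here is a minimal sketch of the kind of code I mean, in VB.NET since that is my world. Every server name, connection string, table, and column below is a hypothetical stand-in for two purchased systems; only the shape of the work matters.

```vb
Imports System
Imports System.Data.SqlClient

' A caricature of glue code: nobody sells this and nobody budgets for it,
' but someone has to write it, schedule it, and maintain it forever.
Module NightlyCustomerSync
    Sub Main()
        ' Hypothetical connection strings to two purchased systems.
        Using crm As New SqlConnection("Server=CRMDB;Database=VendorA;Integrated Security=true"),
              billing As New SqlConnection("Server=BILLDB;Database=VendorB;Integrated Security=true")
            crm.Open()
            billing.Open()

            ' Pull the day's changes out of the system we bought for sales...
            Dim pull As New SqlCommand("SELECT CustomerId, Email FROM Customers WHERE ModifiedToday = 1", crm)
            Using reader As SqlDataReader = pull.ExecuteReader()
                While reader.Read()
                    ' ...and push each one into the system we bought for billing.
                    Dim push As New SqlCommand("UPDATE Accounts SET ContactEmail = @email WHERE ExternalRef = @id", billing)
                    push.Parameters.AddWithValue("@email", reader.GetString(1))
                    push.Parameters.AddWithValue("@id", reader.GetInt32(0))
                    push.ExecuteNonQuery()
                End While
            End Using
        End Using
    End Sub
End Module
```

Nothing in that sketch is a UI, a database, or a business rule, yet it is still software you own and have to keep working.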

So explain to me again why people are so build averse?

Innovation starts with the ability to take a risk and move in a different direction.  It is difficult to consider moving an industry in an entirely different direction when you are building on top of existing applications that fit into a different paradigm.  After all, are you not looking to do something different? Are you not looking to accomplish something that the industry is not yet fully ready for in order to get a jump on the competition?

If your answer to these questions is yes, then how do you expect to innovate efficiently using what already exists to move forward in a different direction?

I know that it is simpler to buy something off the shelf and place the responsibility for making it work on the shoulders of a vendor. I also know that it may seem cheaper to buy a bunch of COTS products and spend time integrating their data using tools like Informatica and other data-integration methodologies. But once you stray from being able to open a shrink-wrapped box, install it, and use it, you have strayed into a build situation, like it or not. It is similar to putting a ton of effort into deciding what car to buy, then driving it straight to the custom shop the day you take ownership to have the engine replaced with one that has more power, the interior redone to what you really wanted, and the exterior modified. If the car you bought was underpowered, the interior was not what you wanted, and the exterior was not to your liking either, then why did you buy it?

Consider also what happens when you spend your money to glue stuff together and the industry changes. It sounds like you are insulated in cases like this because the vendor is responsible for bringing the application you purchased into regulatory compliance, and they are, but what about all that glue you built? The vendor's responsibility ends at their borders, and whatever you have done to augment your systems over the years is not their responsibility. When push comes to shove they are not responsible for how you use the system; they are only bound to deliver a system that fulfils the legal and regulatory requirements of the line of business as well as the stated requirements and features of what you purchased. They can't be held responsible for what you glued onto their product, nor should they be.

Additionally, you cannot predict how they are going to make changes as time progresses, so you are stuck working your changes into their timelines and schedules. You will find yourself waiting for their release cycles, and then for your own install, evaluate, and test cycles, before you can even start any decent planning for changes to your internal systems of glue code, let alone move a new version into production. If your processes are not fast enough, or your vendor's release schedule is very aggressive, you can find yourself stuck in an endless cycle of install, test, modify, and move to production, a process that can place very high stress on people as well as on hardware and software budgets, not to mention the potential for harm to your business if things do not go right.

I am not saying that it always makes sense to build. No one can say that. Buy Microsoft Office and be happy that you did. Buy an accounting package and be happy that you did. But if your business is unique, or you need to make it unique as a differentiator, then consider the build, even if you need to live with a cobbled-together bought system in parallel while you do it.

 

Saturday, 20 June 2009 23:31:35 (Eastern Standard Time, UTC-05:00) | Comments [0] | Design | Requirements
# Thursday, 30 April 2009

I catch myself correcting people all the time when these terms are used, because not many seem to use them correctly, at least correctly by my judgment. I am going to speak my mind here and put out into the public eye what I think the difference is between each of these roles.

I say roles because they are not people. They can be people, and they most certainly are almost always jobs within a company, but at the lowest level they are roles. Each of these is a pattern that a person has to fit into to serve that particular purpose. Multiple people can fill each of them at once, just as one person can fill several, but at any one given time a person fits into only one of them. Because of a person's experience and knowledge, as well as their underlying personality, they may be qualified to fill one of these roles or they may not. They may be good at one or not.

I am going to start by putting out a very crude diagram to show my personal view (perhaps a rash generalization based upon my experience) of how these roles fit within a development hierarchy.

 

The first thing you will notice is that architects are on top and programmers are on the bottom, with developers nicely placed in between. This is not out of any disrespect for either developers or programmers, but we must be honest with ourselves: there is a certain level of expectation between these roles that places them within a very specific hierarchy. Like it or not, professionally speaking, each step up is 'better' than the one below. I use 'better' as a relative term here to mean more experienced, more accepting of responsibility, and shouldering more expectations. I know that sometimes programmers can feel the entire weight of the project on their shoulders, but in reality, if they do, then someone above them in the hierarchy is not performing their role properly.

So, how do I place these roles within this hierarchy? What criteria do I use? How do I measure the expectations?

Architect

This person (or persons) is responsible for the technical footprint of the solution. When it comes down to understanding how all the various piece-parts talk to each other, this person knows. When it comes down to understanding the difference between a clustered and a load-balanced set of servers, this person knows. When it comes down to understanding why clustering is better than load-balancing within the context of the enterprise's architecture, this person knows. When it comes to understanding how a specific messaging architecture fits in the system, this person gets it. When it comes to understanding why it may be better to use a server with multiple physical CPUs vs. one with multiple processing cores, this is the guy to ask.

Can they do the work of everyone below them? About 80-90% of it, yes. Should they be responsible for doing low-level work within their project? I don't think so. Why? Because for a really technical person who has to work at the implementation level, it is very difficult to shift gears to a high-level technical view and stay objective, to select one method over another strictly on the merits of its contribution to the overall business need rather than on what may be simpler, or cooler, to implement. If an architect is going to be required to actually do work at a lower level, then I don't think it should be on their own project. If they are going to switch gears, it should be a clean switch.

Architects not only have to be able to work at this high level, they need to be happy working there. I have seen many cases where developers have been promoted to architect on merits such as length of service or their great ability to lead a team of developers and programmers, only to be miserable wrecks when they reach the level of an architect because they miss the thrill of the compile. They need to be able to feel personal fulfillment from the act of a project coming together more than the rush of seeing a passing unit test. They need to be at peace with the fact that they made a good decision on which message transport to select rather than feeling the high of spending all day working with WSDL and message versioning. They need to feel comfortable sitting in an ivory tower once in a while, even if those below them feel a bit put off by the view.

Developer

Developers are at the top of the 'do-er' list. These people do the work. They build the systems designed by the architects and understand the low-level implementation details of HOW to build the stuff that was designed. You want to know the various methods available on an object? This is who you ask. You want to know how large an XML message is as it goes across the wire between servers? This person can answer that. Do you want to know how two objects connect and what the ripple effect of a change is going to be? Ask these folks.

Developers know it all within their areas of expertise. And to be honest, developers need to maintain a specific area of expertise because software development changes so darn fast that you cannot possibly know it all to a high degree of efficiency and knowledge. You can be very knowledgeable in a ton of areas, but when it comes down to knowing how the bits move in a specific way, you need a core set of technologies that you are great in. These folks need to understand how tools like UML help them and how they can hinder. They need to know the difference between book theory and implementation reality. They need to know that 'pattern' is not a magic word unless it can really solve your problem, and that OOP is not a mandatory way of life, but you had better think at least a little before you decide that it isn't. This role also understands why you should need a note from your mother to use a global variable, but also that doing so does not make you an evil Satan worshiper. Developers understand why code comments are useful and that not every line needs to be commented.

Some people can feel confused and worried living here because they think they need to know it all at a very low level. I think these people are best living one level lower, as programmers, until they get a level enough head to move back up, and given enough experience they may actually end up being very good architects.

Programmer
 
Beginner, script kiddie, copy-paste developer: these are the first words that come to mind when I think of this moniker. Don't get me wrong, being a programmer is part of the natural progression of becoming a developer, and then an architect. Most of us learned to crawl before we could walk, and learning to write software is no different. Programmers understand the syntax, but probably not the reasons behind using different patterns. They understand the idea behind separation of concerns and multi-tier development, but are probably not completely clear on the subtle nuances that can make it work well or bring a system down around their knees. They can debug most of the code they write, but get itchy when they have to read others' code, or work on code that was written years ago by someone else. They also may not view the process of design, review, and code as having much worth, and feel more comfortable just sitting down with their beverage of choice and writing code to hit a mark. These folks may be great at writing glue, the code that binds the 'stuff' of a project together, but they have not yet had enough experience to be responsible for all the low-level details of an object's overall implementation. The good ones are hungry for knowledge and want to learn as much as they can, but until they get closer to being a developer they are in an endless search for the silver bullet, the best way, the one true method that allows them to work efficiently and write the next killer bit of code. These guys comment their code because they are told that comments are good, but for the most part it is feast or famine. They either comment everything or nothing.

So there we go. If I make it sound like one role is better than another, as in architects are just better people than programmers, then please accept my apologies, as that was not my intention. I think every one of these roles is very important for a well-balanced development team. Like I have always said, the world needs both planners and doers if it wants to get anything done. If there is no one to put their head down and code, then it does not matter how good the design is, nothing gets done. So, if you are a programmer who is learning and growing, and who understands their role and plays well there, then I say congratulations to you for being a necessary cog in the system. If you are an architect and feel that I am giving programmers or developers too much credit for their jobs, then shame on you; get out of the industry, because your attitude is getting in the way. Everyone has to start somewhere; it's a natural progression that everyone should go through.

Thursday, 30 April 2009 12:05:59 (Eastern Standard Time, UTC-05:00) | Comments [0] | Roles
# Thursday, 12 February 2009

Since I began my deep dive into Windows Presentation Foundation (WPF) I have started to take a long hard look at usability and all the various factors that can have an impact on the user experience. After all, WPF allows you to do all kinds of shiny and cool things, and every one of them can have an effect, either positive or negative, on the user's ability to understand the interface of an application.

I say understand because that is really what we refer to when we talk about the User Experience (UX) of an application. You have all kinds of interesting terms that hide the concept, like discoverability, transfer of skills, etc., but what it comes right down to is the user's ability to 'get it' when they look at the application. One other thing that started me thinking about this more is my recent attempt to get my mother used to using a computer. This process alone has opened my eyes a great deal to usability and to what a person who has no existing experience with computers 'sees' when they look at a program for the first time. The concepts of a button, a slider, or a scroll bar all have a very simple context to someone who is used to current GUI-based applications, but to someone who has never used one before, the term button can have a completely different connotation and can really be confusing. It used to be simpler…

A button was always a square 'thing' with a defined border around it and text that told you what it did, but at some point we started to change it. Buttons started to light up when the mouse moved over them in an attempt to show that the mouse can 'do something there', then someone decided that you could replace a button with a picture, and then they even decided that you could remove the border around the button. What we have started to see now is a blurring between buttons and icons. Not a large problem, you may think at first, until you dig just below the surface and look at what I call the 'action context', or rather what you can do with the 'thing'. These two concepts, button and icon, are really very different (see the sketch after this list):

  • Buttons usually require a single click while icons traditionally require a double click.
  • Icons typically represent something that you can take an action on while buttons typically indicate an action that you can perform.
  • Icons usually allow a right click for a context menu of options while buttons typically do not.
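To make the distinction concrete, here is a tiny sketch in VB.NET. I used WinForms rather than WPF purely to keep it short, and every control name and caption is made up; the point is only that the button answers a single click, while the 'icon' answers a double click and offers a right-click context menu.

```vb
Imports System
Imports System.Windows.Forms

' A minimal contrast of the two "action contexts" described above.
Public Class ActionContextDemo
    Inherits Form

    Public Sub New()
        Text = "Button vs. icon"

        ' A button: one left click performs an action.
        Dim saveButton As New Button() With {.Text = "Save", .Left = 10, .Top = 10}
        AddHandler saveButton.Click, Sub() MessageBox.Show("Action performed: Save")

        ' An "icon": double click acts on the thing it represents,
        ' and a right click offers a context menu of other actions.
        Dim documentIcon As New Label() With {.Text = "Report.doc", .Left = 10, .Top = 50, .AutoSize = True}
        AddHandler documentIcon.DoubleClick, Sub() MessageBox.Show("Opening Report.doc")

        Dim iconMenu As New ContextMenuStrip()
        iconMenu.Items.Add("Open")
        iconMenu.Items.Add("Rename")
        iconMenu.Items.Add("Delete")
        documentIcon.ContextMenuStrip = iconMenu

        Controls.Add(saveButton)
        Controls.Add(documentIcon)
    End Sub

    <STAThread>
    Public Shared Sub Main()
        Application.EnableVisualStyles()
        Application.Run(New ActionContextDemo())
    End Sub
End Class
```

Same mouse, same screen, two completely different interaction contracts, and nothing on the screen tells a brand new user which contract applies to which 'thing'.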

It's funny, but because I have been brought up with the GUI concept for a very long time (ok, not that long, I am not that old), this progression somehow slipped past me and I 'just understood it', but now that I am teaching someone new to this I have seen that it can be really difficult to 'get it'. Using a GUI is almost as bad as learning the English language (remember the dreaded i-before-e rule?), and given that GUIs were supposed to make life simpler, that should not be the case.

So why is it this way now? Why are we where we are?

Anyone?

Bueller?

It’s because in our rush to help we lost sight of the fact that GUIs are supposed to be a standard based upon a deep intellectual understanding of what people can understand and interpret visually. It’s also because we simply can.

Remember way back, I think it was around the '80s, when signs started to get less wordy and more visual? Remember when people used to explain that a sign of a person walking with a big red line through it was supposed to be more universal and language-agnostic than the words 'don't walk'? That made sense to most people. As computers became more visual, the paradigm (I really hate that word) migrated to computer use as well, and these pictograms (which is what they really are – just like cave wall paintings) started to be known as icons, and the GUI industry was off to a boom.

Fast forward to present day. We seem to be stuck in a new paradigm: that of fluffy and likable user interfaces. When the heck did that happen? When did it become better (or even part of the standard) to use animated buttons with drop shadows and all that golly-gee-whiz stuff? I think it is really more because we can than because we needed to. Were people asking us, and by us I mean developers, for really cool user interfaces that look like they have been dipped in liquid plastic and that spin and fly around the screen? I don't ever remember getting that memo on my desk. Users really just want something that works well and is easy to use.

I have listened to UI 'experts' try to convince me that discoverability is a major reason for the change, like we saw with the Microsoft Ribbon, and that we need to start thinking differently when we design our UIs. I have even been told that a good example of UX and discoverability would be to make buttons grow in size as they are clicked, so that the most used buttons become larger than the ones used the least. Does anyone here remember the debacle that Microsoft shoved on us in Office (2000 I think it was) when the menus started to 'hide' the functions used less often? They called it personalized menus, and from what I can see most user-centric web sites carried more articles detailing how to shut that 'feature' off than details about how it worked and how it was supposed to benefit users. It's gone now for the most part, thank goodness.

What’s my point of all this?

One simple concept. Not everyone is a visual person. Just because you make a 'thing' that acts like a button, don't assume that people are going to 'get it' and 'know' that they can click it. Also, don't assume that causing it to glow when the mouse moves over it will 'mean' to them that they can click it. Don't assume that placing a drop shadow under something will give everyone the impression of layers, that the 'thing' is higher up so you can push it down with the mouse. Remember that not everyone thinks in 3-D. Half the users I deal with end up, for some reason, with all their windows set to full screen and just do not get the concept of overlapping windows and how to interact with them. That was 'supposed' to be a universal understanding, remember? Everyone was supposed to think of their computer desktop as a desk with stacks of papers on it, where you can bring specific papers to the front to work on them, but in reality that concept is lost on many people, and they resort to working in full-screen mode, maybe using the task bar to switch between application windows.

Now, does this mean that we should abandon all the new UI concepts and stick to the boring gray screens of yesteryear? No. It does mean that all of us, yes, even the UI designers, need to understand who uses computers and temper our 'best practices' with some humility and understanding. Just because you think it is a cool idea does not make it a good idea (although it may still be cool). Just because you make a picture clickable does not mean that all users will 'get it' and just know what to do. Also, don't think that you can fix all this with better documentation that says 'hey, you can click on anything with a drop shadow', because most people don't read the docs.

So, what are you supposed to do? I have some words of what I think is wisdom for all involved.

Developers\designers, I think one of the biggest things that will help is to keep things consistent. After all, that is what the GUI was supposed to do. Remember that the GUI (with roots way back in the common user interface idea) was built on the premise that the framework (the OS, in the case of Windows) provides a common set of UI elements that keep the UX between applications looking and behaving in a consistent manner.

I know that people innovate and there are tons of great ideas out there for new UI elements, and I am not saying don't innovate or don't bring these into new technology, but I am saying that you need to understand that just because your new whiz-bang UI element makes sense to you and solves a problem in your eyes does not mean that it will for all your users. Innovate where you can, but temper it with the lens of a new user who may be using your stuff for the first time. Be ready to take support questions on the new idea, and maybe have a few videos or other training materials available focused on just that new concept. Maybe even provide a small application that gets installed and allows a user to 'play' with the new control completely outside the application, free from the worries of messing up their work.

Users, remember that you are new to this, and that things are going to look different to you, but most of all, remember that those helping you have been through this and are probably completely numb to the fact that you may not 'know' what they are saying. The whole UI premise is that once you start to learn a little, the rest starts to come easier, that curve can happen quickly, and once you are over the hump the entire thing will become second nature to you. That was the idea in the first place. Also, don't fall back on the thought that you are dumb for not 'getting it' and give up. Once you do it enough you will 'get it'. It simply is new and takes some practice to get good at. Now, that does not mean you can expect to be spoon-fed all the time either. You have a responsibility to learn. If you want to use a computer you have to learn a little. Those around you will be (should be) understanding to a point, but after having to remind you that the square 'thing' on the same screen you have seen 100 times before is a button, and that you click it with the left mouse button, you can expect some tension in the air.

Trainers\helpers\support, you have to have patience and understanding, but most of all you have to KNOW what the system you are trying to help with looks like and be able to spot potential trouble points. If you ask someone to click the button on the screen that has a specific picture on it and the user tells you that they do not see a button like that on their screen, trust them and change your approach. Maybe they are not 'seeing' a button. Maybe to them it is a picture and they are just not 'getting it'. Remember, what you have spent years looking at and understanding may be completely new to them.

Thursday, 12 February 2009 14:40:40 (Eastern Standard Time, UTC-05:00) | Comments [0]
# Saturday, 10 January 2009

Ok, I have to admit that I am sick and tired of being treated like a second-class citizen simply because I own a kick-ass computer and decided to run the 64-bit version of Windows XP Professional.

Today I had to try to do a Remote Assistance session to my mother's new computer (don't ask), and after some searching (because it would not work) I came across this little tidbit of information on the Microsoft web site.

Remote Assistance Is Not Available in Windows XP 64-Bit Edition
http://support.microsoft.com/kb/304727

Symptoms:
Windows XP 64-Bit Edition does not include the Remote Assistance feature.

Status:
This behavior is by design.

Holy freaking hell! Do they not call it the 'Professional' version? What's with not including a support feature in there?

Oh wait a second. I think I understand it... Just like the theme engine, the 64-bit version of something is so completely different that it would have been too hard to make it work in x64, so they just left it out, right?

You know, I am usually pretty liberal in my love of MS stuff. Their software has helped me make a decent living over the years and I think that they generally do a pretty good job, but it's these little annoying things that keep getting under my skin like a tick.

Somebody there better wake up.

Oh, by the way, it works just fine on the 64-bit version of Vista running the exact same copy of Windows Live Messenger, so they CAN do it if they want to.



 

Saturday, 10 January 2009 22:00:43 (Eastern Standard Time, UTC-05:00) | Comments [0] | OS
# Sunday, 04 January 2009

I have been running Windows Vista (Business x64 Edition) since August 5th. In fact I upgraded my entire system just so I could run it. For those of you who know me I had a kick butt desktop system a while ago.

  • Super Micro Motherboard
  • Dual 3GHz dual-core 64-bit Xeon processors with HT (8 logical cores total)
  • 4 GB RAM
  • 800GB SATA3 HD
  • 2 dual head Nvidia 512 MB PCI video cards (4 total video heads)
  • 800 Watt PS

I was running Windows XP Professional x64 Edition on this rig for about 2 years and it ran great, but the geek in me decided that he wanted to run Windows Vista. Yes, I was blinded by the new 'cool' looking stuff, and I loved the sidebar aspect of it. I had been running either Desktop Sidebar or Yahoo Widgets to get a similar experience, but had been plagued by a series of poorly written plug-ins that left me with a bit of a bad taste (like I thought Vista gadgets would be better?). I purchased a copy of Vista Business x64 and made the leap. I actually purchased an additional HD to install it on so I could leave my XP setup alone for a while in case I had to revert quickly. Good thing I did that.

Vista looked great, but even on a system with the backbone of two 64-bit 3GHz Xeons the performance was abysmal. In fact the system ended up with an experience rating of 2.0! After a bit of investigation, the problem was found to be the PCI video cards; they were the components dragging the system down. All other aspects of the system had a rating of 4.5 or better. I was stuck, though, because the motherboard I had selected was server class and did not contain any speedy x16 PCIe slots. It did have two x1 slots, but there was no way I was going to locate a decent video card to sit in there. So, it was off to Tiger Direct.

I ended up putting together a kick butt system that I was convinced was going to run Vista very well.

  • iStarUSA S-10000 ATX Full-Tower Server Case
  • Crucial Ballistix Dual Channel 4096MB PC6400 DDR2 800MHz EPP
  • Intel Pentium D 945 Processor HH80553PG0964MN - 3.40GHz, 4MB Cache, 800MHz FSB, Presler, Dual-Core
  • EVGA nForce 680i SLI Motherboard - T1 Version, NVIDIA nForce 680i SLI, Socket 775, ATX, Audio, PCI Express, SLI, Dual Gigabit LAN, S/PDIF, USB 2.0 & Fire-wire, Serial ATA, RAID
  • 2 - EVGA GeForce 8800 GT Video Cards - 512MB DDR3, PCI Express 2.0, SLI Ready, (Dual Link) Dual DVI, HDTV, Video Card
  • Thermaltake CPU Cooler / Big Typhoon VX / 4 in 1 / 6 Heat Pipes / 120mm Fan
  • Ultra X3 ULT40064 1000-Watt Power Supply - ATX, SATA-Ready, PCI-E Ready, Modular

As I already stated in my August 5th posting, it rocked. Vista went right in and ran great without issues this time (no duh right?).

Well, I learned another thing from this experience. The grass always seems greener on the other OS. The real core lesson here is this:

"When Vista is good, it’s great, but when it starts to suck, it really starts to suck."

Stability

XP just seemed tighter to me, like a well-built car. Sure it had its moments and crashed, but it seemed to recover from crashes much faster and more simply than Vista did. XP would blue screen once in a great while, and when it did it wrote its dump file and then would run a disk scan as expected. In fact I could always predict when it would run one: if I had a file open at the time of the crash it would run one, every time, like clockwork. Vista never ran one on its own, ever. But I could tell that it was suffering from troubles after the reboot, and when I set up a disk scan manually and ran it, sure enough, corrupted files, presumably because of the blue screen. Why did I have to take this step on my own? It seemed odd to me that Vista could not detect the junked files, but I knew they were there, and XP used to detect them.

Now I have to admit that not all the BSODs were Vista's fault. It turns out that I did have one bad stick of RAM, and that was playing havoc with the system after about the first month, but the system never felt right after the first two blue screens it took for me to figure that out. I am convinced that had it not been for that bad stick of RAM I might still be running a stable Vista system today. But what does that say about an OS that can be killed by one bad stick of RAM? Hmmm.

Gadgets

They are really handy, but, as with the others, I found that the quality of the code was not great. The standard Windows gadgets seemed OK, but they were slim on functionality and not all that I needed. I wanted one that included system stats (like available HD space), so I had to download one of those (and there were several available), but I also needed one that gave me status on BitTorrent downloads, and I have to say that, after a lengthy test effort, I could not locate a single one that did not seem to have a memory leak lurking around, which caused a ton of crashes. It seems that one bad gadget can really take the system down hard. It seems to me that they do not have a great system of process isolation there if that can happen.

Aero

What can I say? It looks awesome, but in the grand scheme of things it adds zero value to the actual usability of the system. I have a feeling that MS was relying on the slick glass interface to lure folks in with the 'aw, cool' factor, and it worked :) but the novelty soon wears off. It's kind of like when you think you want one of those tall lanky blonde babes and then realize that they have zero personality, no brains, and all they want is for you to buy them stuff. Sure, other guys walk by and ogle her and wish they had one, but soon enough you really feel like tossing her to the curb and getting a good woman like I ended up with :)

UAC

What more can I say about this that has not already been said by hundreds in the press or by other users? It's an interesting concept, but what I think is a flawed implementation. To be honest I am not sure what you COULD do here really. Let's face it: what we really need is simply smarter users, and UAC is not going to fix that. I think the idea was perhaps to help educate people as to how often things happen behind the scenes that they were never aware of before, or never gave a second thought to, but come on. I had to 'allow' files to be moved from one drive to another even though it was clear that it was ME doing the dragging and dropping. I tried, I really did, to live with UAC enabled, but in the end, after about a month, it got shut off. Let's face it: I am a tinkerer, and a pretty good one at that, so I am all over the place at times and really grew to hate that UAC dialog box after a while.

I do give MS credit for allowing it to be turned off, though. I think maybe it should be off by default on the business versions and on by default on the home versions. UAC should do two things. First, it needs to know when the act being monitored is being performed by the user rather than by a process, and act accordingly to stay the heck out of the way. Second, it needs to learn a bit and stay out of the way if it gets dismissed at the same spot all the time. Maybe allow a person to turn off notifications on file copy\move with a check box or something.

Application compatibility

I know this is a big one, but come on. The reason I waited as long as I did to run Vista was because I had to wait for Visual Studio 2005 (an MS application) to work on their own OS without causing issues :) I was really annoyed at the issues I had with a few apps. VMware Server was a major annoyance. I was a heavy user of virtual machines for software testing and there was no reliable way to get it installed on Vista, simply because the folks there seemed to refuse to sign their damn drivers. Now you may think that this is all the fault of the folks over at VMware, but in reality I think it's not ALL their fault. Vista does allow you to turn off signed-driver checking (under the advanced start-up options in the F8 menu), but you are required to do this every time you start up! UGH!!! It just felt nasty doing that, kind of like being forced to run in safe mode all the time. It just felt dirty. Visual Studio 2003 was another major problem. I know it's old, and that there were major issues with the debugger that were causing problems, and I understand that it would have taken significant effort on the order of man-months to get 2003 working well on Vista, but my only option was to run VS2003 in a VM to maintain my old code base. Oops! Guess what? All my VMs were rendered useless because VMware would not run well without a major hack :) Now I have to install the MS VM product (Virtual PC) just to get VS2003 working? No thanks. I just kept an old dual-proc PIII XP machine alive for that.

I do think I owe it to the folks at MS to say that Vista did seem to handle most of my other apps quite well. These were really the only applications, although major ones to me, that I had problems\issues with.

Performance

Man, nothing feels better to me speed-wise than good old Windows XP Professional. Vista was nice and flashy, but unlike buying a Ferrari, where you expect it to be a bit high-maintenance but are willing to put up with it because of the growling performance you are getting, I always felt Vista was slower than it should have been.

Start-up was always fast. Power-up to desktop in less than 2 minutes was great, but in all honesty XP is about the same here, maybe 3 minutes, and start-up speed is not where I spend most of my day. In fact I hardly ever turn my system off, so unless I am recovering from a crash I care little about start-up speed, and then I am expecting a disk scan to be run anyway.

File copy\move speed was awful. Look, I really don't care whether you calculate the time it will take for the files to copy or not, but if you do, do NOT make me wait while you add up all the file sizes to do it. Running a few timings showed that about one third of my time was wasted by that 'calculating' junk. This definitely shows one of two things: either the UI was designed by an engineer, or it was designed by a marketing person. Either way, the next time someone other than a UI expert gets into the chair, push them out and do the job right. XP may be a bit off on its estimates, but it is FAST, so more often than not the time is irrelevant.
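As an illustration of the design choice I am grumbling about, here is a hedged little VB.NET sketch: report progress as work actually completed instead of scanning everything up front just to predict a total. The folder paths are obviously made up.

```vb
Imports System
Imports System.IO

Module CopyWithoutPreScan
    Sub Main()
        ' Hypothetical source and destination folders.
        Dim source As String = "C:\Temp\Source"
        Dim dest As String = "C:\Temp\Dest"

        Dim filesCopied As Integer = 0
        Dim bytesCopied As Long = 0

        ' Start copying immediately and report what has been done so far,
        ' rather than walking the whole tree first just to estimate a total time.
        For Each sourceFile As String In Directory.GetFiles(source)
            Dim target As String = Path.Combine(dest, Path.GetFileName(sourceFile))
            File.Copy(sourceFile, target, True)

            filesCopied += 1
            bytesCopied += New FileInfo(sourceFile).Length
            Console.WriteLine("Copied {0} files ({1:N0} bytes) so far...", filesCopied, bytesCopied)
        Next
    End Sub
End Module
```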

Network speed was terrible. One of the things that really ticked me off lately was the fact that I could not get my new Verizon FiOS working properly with Vista. Windows XP required that I run the TCP Optimizer from SpeedGuide.net, but once I did this simple task it flew (20/5 service is cool). This tool does nothing for Vista. In fact the IP stack in Vista is apparently 'tuned' so this is not needed. BUNK! I was lucky to get 5 Mb/sec downstream on Vista while the XP box right next to it was getting 22. After doing some digging I found that Vista DID have a known issue, and there was a fix released in SP1 (which I already had installed) that allowed you to tweak a bit by using a registry hack, still not by using the optimizer tool. That DID allow my speed to get BETTER, but I was still not getting 20. Speed tests run every day over the course of a week showed that I was getting no more than 16. I also ran a few tests on my local network just doing simple file copies across my LAN. Although the tests were very non-scientific, the results were interesting. Simply copying a 1GB file to a file server running Windows 2003, over a 100Mb LAN connection, took 4 minutes longer on my Vista machine than on Windows XP.

Conclusion

So, after all that, I am sad (happy) to say that I am once again back on good old comfy Windows XP. It's fast, clean, and very much uncluttered. I actually feel relaxed using it. I had not really felt it before, but Vista seemed to make me feel like I was always moving. XP lets me work and lets me feel calm while I do it. I get my VS2003 back for when I need it. I have my VMware images back (a few of which will be running Vista for testing) and I think I may just keep it this way for a long time.

All I can say is really, honestly, truly I hope Windows 7 is better.

 

 

Sunday, 04 January 2009 10:37:37 (Eastern Standard Time, UTC-05:00) | Comments [2] | Vista | OS
# Monday, 01 December 2008

Holy cow, if I get asked this one more time I think I am going to..... well, I am not sure what I am going to do, but be assured that it may not be pretty :)

I get asked this all the time and I am not sure why people ask it.

"What is the best choice, implementing an interface or using inheritance?"

"What language is the best choice?"

"What is a better thing to use, an array or an array list?"

To me these all sound like the same question.... "How long is a piece of string?"

The problem is that they never seem to be satisfied with the answer "it depends". They get frustrated and think that I am holding back on them, that I am hiding some great secret all to myself that is preventing them from becoming the next great developer.

In all honesty that is the best answer I can give, simply because it's true. It REALLY does depend. It depends on your situation, your project, your intent, what you want to do, and a ton of other factors that only YOU know about your project.
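Take the array-versus-ArrayList question above. A quick VB.NET sketch shows why the honest answer is 'it depends' rather than a single winner (the numbers are just filler):

```vb
Imports System
Imports System.Collections
Imports System.Collections.Generic

Module ItDependsDemo
    Sub Main()
        ' A plain array: fixed size, strongly typed, cheapest to index.
        ' It depends: great when the count is known up front and never changes.
        Dim scores(2) As Integer
        scores(0) = 90 : scores(1) = 75 : scores(2) = 88

        ' An ArrayList: grows as needed, but everything is stored as Object,
        ' so value types get boxed and you lose compile-time type checking.
        ' It depends: handy for loose, mixed bags of stuff.
        Dim grabBag As New ArrayList()
        grabBag.Add(90)
        grabBag.Add("ninety")                   ' compiles fine, surprises you later
        Dim first As Integer = CInt(grabBag(0)) ' cast required on the way back out

        ' A List(Of T): grows as needed AND stays strongly typed.
        ' It depends: usually the middle ground in .NET 2.0 and later.
        Dim typedList As New List(Of Integer)()
        typedList.Add(90)
        typedList.Add(75)

        Console.WriteLine("Array: {0}  ArrayList: {1}  List(Of T): {2}", scores.Length, grabBag.Count, typedList.Count)
    End Sub
End Module
```

None of the three is 'best'; which one you reach for depends on exactly the factors I listed above.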

I also get asked a ton, "what is the difference between a programmer and a developer?" To put it simply, the answer is that programmers ask the questions above, while developers know that the answer is 'it depends' and are satisfied with it.

I don't mind being asked these questions, just take the answer and learn from it. Use it as a learning tool to become a developer.

Being a developer is cool and fun and you get to ask a whole slew of more cool questions like "how does one go about calculating the air speed velocity of an unladen swallow?"

Monday, 01 December 2008 00:16:28 (Eastern Standard Time, UTC-05:00) | Comments [0] | Design
# Saturday, 08 November 2008

Well I did it :)

I now have my monitor array complete. I am sitting in front of 4 Acer 22 inch flat screens running 1680 x 1050. They are sweet! Programming is fantastic. Working on school work is fantastic. The massive screen real estate is great.

Sorry, I have to show off the geek setup here:

One thing I hope you notice is the lack of paper. Is it always this way? No, I do have paper on the desk sometimes, but only when I get it from someone else. It is my goal to produce no paper at all. I figure that I have an awesome system and I do most of my work on my computer, so why do I need paper at all?

The wife, on the other hand, sees fit to print everything :) I will let her own up to that on her own. I have to admit that I am an enabler there. I do provide 2 printers in the house (one color ink-jet and a B&W laser) but I hardly ever use them at all. If I see something I want, I print it to PDF and then it is always searchable. The extra screen real estate does help me here, but the wife has 2 monitors (her Acer laptop's wide screen and another Acer 22 inch monitor) so I am not sure what her problem is. I think she just feels 'better' holding paper in her hand to read...

On to finishing the last week of the Software Engineering class, then it is on to an OOP class.

Saturday, 08 November 2008 22:45:50 (Eastern Standard Time, UTC-05:00) | Comments [0] | Hardware | Site Admin
# Thursday, 30 October 2008

I have been doing a lot of thinking recently about traceability and how far it should really be taken. I have talked to a wide range of people over the years, from project managers, development managers, and team leaders to guy-at-the-desk implementers, and I am getting a wide range of answers.

 

Typically, requirements traceability is critical to the success of a software project simply because it helps you ensure that you are doing what's needed to satisfy the customer's needs and no more. But, as with many 'processes' in the SW realm, I think it can be taken a bit farther than it should be. I have been told by some project and development managers that having a concrete way to trace requirements all the way down to the code that implements them is critical. The ability to look at the code and know exactly why something was put into the system, and more importantly what will be impacted by making a code change, is a 'must have' in any good development system. In a traceability graph this usually ends up looking like this:

While I can start to see the benefit of that I also start to see where it breaks down a bit.

 

1)     Code is often shared heavily between functional areas, so it leads to a very large traceability tree. In my opinion, once you get past a certain number of branches (a number I have not really quantified yet, but I will know it when I see it) the code simply gets qualified as 'important' and traceability at that point really loses some value.

2)     The current state of tools offers no way to store this metadata in the source in a simple, automated manner. This leaves it up to the developer to perform the task (usually in the comments), and that means the developer gets more work to do (see the sketch after this list for one hand-rolled way to do it). As we all know, the more time something takes that does not give the person doing it much (if any) direct value, the more likely it is that the task does not get done. This means that the traceability data can immediately become suspect, causing no one to believe it, and thus again it loses its value.

3)     Why do we really care that FunctionX was written to explicitly fulfill functional requirement F-101 and thus Business requirement B-203?
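On point 2 specifically, here is the sketch I promised: if you did want the metadata machine-readable instead of buried in comments, about the best you can do by hand today is something like a custom attribute. This is purely hypothetical; the attribute, the class, and the method are made up, and the F-101/B-203 IDs are just the examples from point 3.

```vb
Imports System

' A hypothetical attribute for tagging code with the requirement IDs it fulfills.
' Nothing enforces it, which is exactly the problem point 2 describes, but at
' least a tool could harvest it instead of parsing free-form comments.
<AttributeUsage(AttributeTargets.Class Or AttributeTargets.Method, AllowMultiple:=True)>
Public Class TracesToAttribute
    Inherits Attribute

    Private ReadOnly _requirementId As String

    Public Sub New(requirementId As String)
        _requirementId = requirementId
    End Sub

    Public ReadOnly Property RequirementId As String
        Get
            Return _requirementId
        End Get
    End Property
End Class

Public Class InvoiceService
    ' Standing in for the FunctionX of point 3: tagged back to functional
    ' requirement F-101 and, through it, business requirement B-203.
    <TracesTo("F-101"), TracesTo("B-203")>
    Public Function CalculateTotal(lineItems As Decimal()) As Decimal
        Dim total As Decimal = 0D
        For Each item As Decimal In lineItems
            total += item
        Next
        Return total
    End Function
End Class
```

Notice that the developer still has to remember to add and maintain every tag by hand, which is exactly why I do not trust this kind of data for long.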

 

 

I personally think that this deep traceability is only there to fulfill management's need to see neat charts (ok, maybe I could have worked on the color scheme a bit) and graphs. I also think that this is a way for managers to feel that they are ensuring value from their developers by making sure that the developers are only writing what is needed to satisfy the requirements and not a line of code more. In fact many developers seem to be from my side of the camp, but some of them take it way too far in the other direction. Their opinion is that unless the system can be ensured as 'good', why track any of it at all? They know what the requirements are; they should be left on their own to implement the code in a way that satisfies the requirements and that's it. Why do they need to justify their work at all as long as the end product works well and satisfies the stated requirements?

 

What you end up with here is this:

Who wins from this? No one does. Most of the time when you have an all-or-nothing strategy the outcome is completely non-productive. Is it a good idea to have requirements traceability? Sure it is. I think most sensible developers and managers alike will agree that knowing why you are doing something, what the impact of changes are, and how things get tested are all good (great) ideas. The frustration comes in trying to come up with a solution that satisfies both camps, something that gives both the managers and the developers what they want.

 

I think that something is a very tight level of traceability between all levels of requirements, both upwards and downwards, but then, rather than extending it into the code, completing the traceability down to the test cases and stopping there. With this you get something that looks like this:

Notice that you now have traceability from business requirements all the way down to the test cases, just like you did before, but you have left the code out of it. Some folks might say that this misses the need (want) to trace requirements to the code that implements them, but take a closer look and you will see that it really does not. The code traceability has not been skipped over; it has been preserved through the physical connection to the test cases.

 

Consider this. Every test case should be there to explicitly support a use case, or at least one part of a use case. This means that every test should be traceable back to some code that it is testing. This 'traceability' can be checked in one of two ways: first, test cases that reference no code are easy to spot by inspection, since they have no code inside, and second, you can easily run an automated tool over the source to flag any test case that fails to reference any production code. Clean, simple, and it leaves the developer out of it, which is good.
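Here is a hedged example of what that physical connection looks like in practice, using the same hypothetical InvoiceService and requirement IDs as the earlier sketch. The test is written MSTest-style, and the use case ID UC-17 is made up. The requirement linkage lives in the test's name and description; the code linkage falls out of the fact that the test body cannot help but reference the code it exercises.

```vb
Imports Microsoft.VisualStudio.TestTools.UnitTesting

<TestClass()>
Public Class CalculateInvoiceTotalTests

    ' The attribute text carries the requirement linkage; the call into
    ' InvoiceService carries the code linkage. No extra bookkeeping needed.
    <TestMethod(), Description("UC-17: Calculate invoice total (F-101 / B-203)")>
    Public Sub CalculateTotal_SumsAllLineItems()
        Dim service As New InvoiceService()

        Dim total As Decimal = service.CalculateTotal(New Decimal() {10D, 20D, 5.5D})

        Assert.AreEqual(35.5D, total)
    End Sub
End Class
```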

 

Now consider the other use of full traceability down to the code level: the ability to spot dead code, or code that does not specifically trace back to any requirement. You have not lost anything here either, since you can again use an automated tool to run a call tree backwards from all the test cases and ensure that you have no code that is not reachable by a test. Actually, this should be part of a normal test regime anyway; it is part of what is called code coverage analysis, making sure that as much of your code as possible is tested.

 

Have you lost anything? No. Well, maybe some work. In fact, if you look back at your test practices, you are probably already doing almost 100% of this if you are using code coverage analysis. If you are not doing code coverage, start. Look at what it gives you. Management gets what they want, development gets what they want, and everyone is happy. This is a classic win-win scenario that I think everyone can live with.

Thursday, 30 October 2008 18:13:03 (Eastern Standard Time, UTC-05:00) | Comments [0] | Design | Requirements
Copyright © 2019 Raymond Cassick. All rights reserved.