How Your DevTest Practices Are Holding You Back

You know you’re doing it. You know your development, test, and deployment practices are in serious need of repair, but you think you’re getting away with it because no one is calling you out. Well, my apologies in advance, because that’s what I’m doing. Your DevTest and DevOps practices are bad. Don’t believe me? Here are four DevTest/DevOps practices that are keeping you from being the best you can be, as well as some things you can do to break these bad habits.

1. You check in code that breaks the app (no unit tests).

Every bad practice has a reasonable explanation. One of the most common bad practices is checking in code that breaks the app. I suspect it’s rare to find a developer who proactively chooses to check in code that will break the app, but without unit tests the odds go up, no matter how much testing you do on your local machine. You can reduce this likelihood if you have highly decoupled code and a suite of unit tests that validate functionality.

So, why would someone not have unit tests? The most common answers I hear are “there’s not enough time,” “it’s expensive to maintain,” and “our application is too complex.” What folks need to understand, however, is that if you have well-architected, highly decoupled code that is covered by unit tests, you’ll save time on bug repairs and regression testing. Plus, it’s much easier and more cost effective to maintain the unit tests when your code is highly decoupled.

Adding unit tests is not an easy proposition, and neither is re-architecting your code. But until you take the time to do it, you will continue to have the same problems, and you will continue to bear the cost that a tightly-coupled architecture brings you.

We have found that making a mindful choice to include unit tests has forced us to be proactive about writing decoupled code. This type of architecture gave us the side benefit of letting us redesign portions of our application when we discovered we’d gone down the wrong path, and we could do so without having to rewrite the entire application.
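
To make that concrete, here is a minimal sketch in C# using MSTest, the unit test framework built into Visual Studio. Every name in it is hypothetical, invented purely for illustration; the point is that because the calculator depends only on an interface, a simple test double can stand in for the real data access code, and the business rule can be verified without touching a database or a web server.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// The discount rule depends only on an interface, not on the database
// or web layer, so it can be tested in complete isolation.
public interface ICustomerHistory
{
    int OrdersPlacedBy(string customerId);
}

public class DiscountCalculator
{
    private readonly ICustomerHistory _history;

    public DiscountCalculator(ICustomerHistory history)
    {
        _history = history;
    }

    // 5% off for customers with ten or more orders; otherwise no discount.
    public decimal DiscountFor(string customerId)
    {
        return _history.OrdersPlacedBy(customerId) >= 10 ? 0.05m : 0m;
    }
}

// A hand-rolled test double stands in for the real data access code.
internal class FakeHistory : ICustomerHistory
{
    public int Orders { get; set; }

    public int OrdersPlacedBy(string customerId)
    {
        return Orders;
    }
}

[TestClass]
public class DiscountCalculatorTests
{
    [TestMethod]
    public void LoyalCustomerGetsFivePercent()
    {
        var calculator = new DiscountCalculator(new FakeHistory { Orders = 12 });
        Assert.AreEqual(0.05m, calculator.DiscountFor("any-customer"));
    }

    [TestMethod]
    public void NewCustomerGetsNoDiscount()
    {
        var calculator = new DiscountCalculator(new FakeHistory { Orders = 1 });
        Assert.AreEqual(0m, calculator.DiscountFor("any-customer"));
    }
}
```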

2. You use branches to manage deployments.

This is my own rookie mistake. Many years back I chose to map my branches to the environments they deploy to. In essence, I had a dev branch, a QA branch, a UAT branch, and a prod branch. I even created an Archive branch where the master releases would go. The problem with this model is that I would build off each of these branches and push the build to the corresponding environment. This essentially meant that I rebuilt the code for each environment, resulting in a separate code base for each environment, none of which was necessarily a clone of the others.

I suspect you may be thinking, “if each branch comes from the previous branch (Dev to QA to UAT to Prod), then wouldn’t the code be the same?” The answer is deceptive. Yes, the source code would be the same, but building from two different code bases does not necessarily produce the same bits in the final build. At least not with managed code such as C# or VB.NET. Each time you run a build, the compiler processes the files in an order that is determined at build time and may not be consistent from build to build; it depends on the file system and the order in which the files are handed to the compiler. In other words, a single code base can compile differently from build to build, resulting in different binaries, even if the differences are limited to a timestamp in the metadata.
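
If you want to see this for yourself, here is a small sketch (the file paths are hypothetical) that reads the Module Version ID, a GUID the compiler stamps into a managed assembly’s metadata on every compilation. Build the exact same source twice and the two MVIDs will differ, even though not a single line of code changed.

```csharp
using System;
using System.Reflection;

class CompareBuilds
{
    static void Main()
    {
        // Hypothetical paths to the "same" assembly produced by two separate
        // builds of identical source code (e.g., a QA build and a UAT build).
        Assembly qaBuild  = Assembly.LoadFile(@"C:\Builds\QA\MyApp.dll");
        Assembly uatBuild = Assembly.LoadFile(@"C:\Builds\UAT\MyApp.dll");

        // The compiler generates a fresh Module Version ID for every compile,
        // so two rebuilds of the same source are never byte-for-byte identical.
        Console.WriteLine("QA  MVID: {0}", qaBuild.ManifestModule.ModuleVersionId);
        Console.WriteLine("UAT MVID: {0}", uatBuild.ManifestModule.ModuleVersionId);
    }
}
```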

With this in mind, rebuilding your code for each environment could result in differing binaries, and you don’t want to do this. Instead, you should be deploying a single package into an environment, evaluating whether it passes its acceptance criteria for that environment, and then promoting that same package into the next environment. That’s a far better practice and will help contribute to a higher-quality QA process.
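
As a toy illustration of that idea (the drop locations are hypothetical), the sketch below promotes the exact artifact that already passed QA by copying it forward rather than rebuilding it, and verifies with a hash that the promoted package is byte-for-byte the one that was tested.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class PromotePackage
{
    static void Main()
    {
        // Hypothetical drop locations for a single, already-built package.
        const string qaDrop  = @"\\deploy\QA\MyApp_1.4.2.zip";
        const string uatDrop = @"\\deploy\UAT\MyApp_1.4.2.zip";

        // Promote the same artifact forward; do not rebuild it.
        File.Copy(qaDrop, uatDrop, overwrite: true);

        // Verify the promoted package is identical to the one that passed QA.
        Console.WriteLine("QA  hash: " + HashOf(qaDrop));
        Console.WriteLine("UAT hash: " + HashOf(uatDrop));
    }

    static string HashOf(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            return BitConverter.ToString(sha.ComputeHash(stream));
        }
    }
}
```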

3. You need to manually configure your environment.

Are you manually configuring the individual components for an environment? If so, you’re wasting time. Are you manually deploying or configuring your databases each time you are about to deploy to an environment? If so, you are wasting time. Are you waiting on operations for days at a time to get a new environment provisioned? I won’t say you’re wasting time, but I will say that there are more efficient ways to do it.

If you want to improve your DevTest practices you should start with automation. Manual practices result in errors. You know this, so why are you still doing it? The most frequent response I hear is, “we don’t have time”, followed by “we don’t know how”. If you put time into automating, you will be saving time in the long run. (You know this as well.) It’s time to start automating!

Start by using your MSDN subscription and taking advantage of Azure VMs. It’s really easy, and there are some fantastic guides to help you get started. Then start using PowerShell to automate the provisioning of new environments. After you figure that out, start using Desired State Configuration (DSC) to make sure that the environments are configured with the right components. Configuration as code is your key to happiness. Whether it’s PowerShell, DSC, Chef, Puppet, or some other tool, automated provisioning and configuration of environments will make your life easier.

4. You need to manually deploy your app.

This one may seem obvious, but the fact is, most companies we work with do not use an automated release management tool. Most release management and deployments use at least some manual intervention. Manually deploying an app has the same issues that manually configuring an environment has. If you manually deploy something, you’re more likely to introduce errors. And the more complex the deployment, the more likely the error. It’s time to automate the deployment.

Microsoft offers Release Management, and there are other tools out there to support your deployments. First, start with using an automated build practice. Team Foundation Server offers a great tool for automated builds, and there are others too! Then use a release management tool to deploy the application to the right environment.

Configuration of a release management tool may seem daunting at first, but if you have a build team, it’s likely that they’d love to learn a new tool that will make their lives easier!

Conclusion

If any of these practices are familiar to you, then it’s time you take a closer look at your business priorities. Addressing even one of these practices will result in a noticeable improvement in the quality of your deliveries, whether by reducing the potential for errors or by reducing the potential for introducing bugs. Now is the time to take a serious look at your DevTest practices!

 

  • Curt Zarger

    Rennie,

    Great post. Lots to think about. I am intrigued by the best practice described in section #2, summarized by “…you should be deploying a single package into an environment, evaluating whether it passes its acceptance criteria for that environment, and then promoting that same package into the next environment.”
    I have two questions:
    1. You mention that ‘rebuilding’ can result in different binaries, even if it is only metadata. … Is there a possibility of functional differences resulting?
    2. How do you move the package through the environments and have it configured correctly? I understand web.config management through XML transforms or tokenization, but how do you remove the debug information that would be part of the first ‘dev’ package? The *.pdb files can simply be removed, but the *.dll files still have debug info in them.
    thx

  • Hey, Curt!

    First off, thanks for taking time out from your day to read our article!
    Now, let’s get to your questions.

    “Is there a possibility of functional differences resulting?” Is there a possibility? Yes; however, “yes” can be misleading. Maybe the better answer is “it depends.” When you compile code, the resulting binaries will be influenced by the internal state of the compiler at the time of the build, the order in which files are handed to the compiler while it’s generating the metadata tables, and the state of any libraries that may be statically linked to your application. There are many different reasons why your binaries may change during a recompile. Just as an example, consider this: if your build server is patched with the latest security fix or OS update, that could impact specific binaries that your application relies on. If a linked binary changed, unbeknownst to you, it could affect functionality, even though your own source hasn’t changed. I’m certainly describing a worst-case scenario, but in truth, if you recompile the same source, there is a chance it could affect functionality, though the odds may be slim.

    “How do you move the package through the environments and have it configured correctly?” I like this question because the answer can address the issue listed above.
    At Northwest Cadence we have two branches: a Dev branch and a Main branch. We do our work in Dev. We test it manually and run unit tests against it. We don’t include a debug build, but you could. Once we are happy with it, we merge to Main. Main recompiles it into a release build and then releases it into a Dev environment where we test it. If it’s good, we promote that same build to QA. If that passes, then it promotes to Production.

    We build the Main branch with no debug information. As a matter of fact, we choose to use IntelliTrace instead, because it provides better information without the bloat. And we don’t use debug for our Dev branch either. It just isn’t necessary for us, but if you need it, do it in the Dev branch. Then merge into Main, recompile a release build, push to the Dev environment, and take it through its paces!

  • Curt Zarger

    THX Rennie, that was very helpful info about the compiled binaries.

    On my second question, I’ve been doing more reading, and I think my question regarding moving the binary package through the pipeline resulted from not understanding that aspect of Continuous Delivery. So, now understanding that better, let me ask another question.

    I have always worked with at least a ‘first’ environment where all of the parts of the application are integrated and tested, but it is built in debug mode, allowing developers to remotely debug issues arising in that environment. Later environments were built in release mode. … In a CD architecture, where is the integrated debug environment?
    thx

  • I will explain how we do it at NWC. That may help you sort out your process.

    We have three environments: Dev, QA, and Prod.
    We have two ‘branches’: a Dev branch and a Main branch. (In truth, we use Git and don’t use real branches; instead, we use pull requests, but that’s neither here nor there.)

    We do our development, test it locally and then check it in. It then deploys to the Dev environment. Unit tests are run, integration tests are run and hopefully all works fine. We continue doing more integrations with new code as necessary. Rinse, repeat.

    Eventually we promote to Main. (It’s a Git pull request.) The new code is integrated into the Main branch. The Main branch is set up with a CI build that deploys to Dev. (It’s at this point that you’d compile a ‘release’ build.) If the package in Dev is approved, then we promote it to QA. If all seems good in QA, then we promote to Production.

    Does that answer your question? In summary, we use 2 branches. One for development and a Main branch. You would create a debug build for the development branch and you would test the package in Dev. Then, if all is good, you would either wait for more changes to be integrated or you would merge the code into the main branch and rebuild a release build. Then have it redeploy to Dev, and promote it up as you see fit.

  • Curt Zarger

    Thanks for the response.

    This post still has me thinking, or better yet, re-thinking my branching and release architecture.

    1.
    I’m not clear that you are even using an automated release pipeline, but I’m trying to understand your architecture in light of the Release Management (RM) concept that the whole release pipeline can be automated.

    If you ‘rinse, repeat’ on your Dev branch/environment, my understanding is that that process is not under RM control, but rather someone manually assesses test results and determines that ‘all works fine.’ At that point, code is integrated into Main, which triggers a CI build and RM. … True?

    2.
    You have the Dev environment receiving builds from two sources: the Dev branch (in debug mode), and also the Main branch (in release mode). I’m curious as to how you avoid collisions in the Dev environment, or in general how you know whether you have the debug or release version installed?
    thx

  • Curt,

    1) In TFS we would have separate build definitions. When I check code into the Dev branch, the Dev branch build definition builds the code, then takes it through an RM cycle, which goes only to the Dev environment. When I then merge the code into the Main branch, a different CI build definition kicks off a different RM workflow that is built to promote from the Dev environment to QA to Prod. Hope that better explains it.

    2) There are a couple of different approaches to question #2. You could have two Dev environments that are clones of each other. Or you could have a single Dev environment and either 1) blow away the website before reinstalling, or 2) install over it. As for knowing which version is installed, the RM server would be tracking that…
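
    If you ever need to check a deployed binary directly, one option is to look for the DebuggableAttribute that a Debug-configuration build stamps into a managed assembly. The snippet below is just a sketch, and the file path is hypothetical.

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

class BuildFlavorCheck
{
    static void Main()
    {
        // Hypothetical path to an assembly deployed in the Dev environment.
        Assembly asm = Assembly.LoadFile(@"C:\inetpub\wwwroot\bin\MyApp.dll");

        // Debug-configuration builds carry a DebuggableAttribute with JIT
        // optimizations disabled; typical Release builds do not.
        var debuggable = (DebuggableAttribute)Attribute.GetCustomAttribute(
            asm, typeof(DebuggableAttribute));

        bool looksLikeDebug = debuggable != null && debuggable.IsJITOptimizerDisabled;
        Console.WriteLine(looksLikeDebug ? "Debug build" : "Release build");
    }
}
```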

  • Curt Zarger

    Thx for your patient responses!
    #1 – that was it! The use of TWO RM workflows, each starting with its own build definition, was what I was missing.
    #2 – ok, the RM server would know which version is installed. I guess my question is how the development team knows at any given time?
    My perspective is that we do CI, running multiple builds per day, but production releases could be as infrequent as quarterly. So the developers always know the state of the application.
    Given the ‘objective’ of very fast production releases (multiple/day), the architectures, and therefore the documentation/conversations can span a very wide breadth. I suppose the slower release frequencies would opt for the two-Dev-environments option you describe.

    … It’s a new day! Lots to consider.
    Thanks again for your great info
    Curt

  • Arie H

    A bit of a late reply, but I was wondering about your ‘worst-case scenario’: how would you know whether that security fix to the OS wasn’t deployed to the Prod servers as well, and thus that the build you originally did from Dev would only break when you deployed to Prod?
    Environments not being equal isn’t exactly a good reason not to do a rebuild. This is where RM and other configuration management solutions come in handy.
    As I’m building my organization’s CI+CD with RM, I’m curious whether you have more ideas or reasons why rebuilding from the QA branch and deploying to the QA environment would not be good or desirable.
    Thanks for the interesting read!

  • Thanks, Arie.

    I may not have been clear in the post.
    When you rebuild managed code, the final product will not necessarily be the same product that was built yesterday, even if your build is based on the exact same lines of code. The problem with rebuilding is that the newer, rebuilt package you are testing is not the same as the package you tested previously.

    Let’s say your Dev, QA and Prod environments are exactly the same. If you rebuild your code for each environment then you are essentially not testing the same package, even if the lines of code are the same. This is because of how the build compiler processes the code. (see my previous responses).

    Now, I’ve worked with organizations that map their branching structures to their environments in order to deal with scenarios where QA is working with a build that is older than the one the Devs are working with… What do the Devs do when a bug is filed against that older build? In this case, we still suggest a branching model that does not mirror the environments. Instead, the Devs would do a “get by tag” using the build # that QA is using. That’s cleaner.

    Now, in a CI/CD model, you would create a single build, and that build would be managed by RM, which publishes that unique build to each environment. Once approved in Environment 1, RM would move the package to the next environment. RM inherently presumes it’s moving a single package through the environments. By default it isn’t going to rebuild the application for each environment.

    By keeping a single build going through all the various environments, you are isolating any environment-related issues by ensuring that the package you’re deploying has not changed since its deployment into Dev.

    Does that answer your question?