So here's my problem.
We don't have a deployment process. We all work from the same copy on one test server over FTP, and then upload to the live server over FTP.
We have some small projects and some big collaborative projects.
We host all these projects on our local shared computer which we call test server.
Everyone takes code from it and returns it there. We show our work to clients on that machine, and then upload that work to the live FTP.
Do you think this is a good scenario or do we make this machine a dev server and introduce a staging server for some projects as well?
I wrote him a reply with some suggestions (and my consulting rate) attached, and we had a little email exchange about some improvements that could fit in with the existing setup, both the hardware and the team's skills. Then I started to think ... he probably isn't the only person wondering if there's a better way. So here's my advice, now with pictures!
You can put the settings in a .htaccess file, or place them in the vhost (virtual host) configuration. Which one you choose depends largely on your project setup; let's look at each in turn:
The .htaccess File
The biggest point in favour of a .htaccess file is that it belongs in your webroot, and can be checked in to your version control tool as part of your project. Particularly if your project is going to be installed by multiple people on multiple platforms, this can be a very easy way to get development copies of the code set up quickly, and it makes it easy for developers to see what should be in theirs.
With version control, you can also send new .htaccess configuration through by updating your copy of the file - but whether this is a strength or a weakness is up to you to judge! If everyone needs different path settings, for example, and is constantly overwriting your .htaccess file, that's not a particularly excellent setup! Previously I've distributed a sample .htaccess file with a different file name, and added .htaccess itself to the ignore list of the version control tool.
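That sample-file approach only takes a couple of commands. A minimal sketch, assuming git and an invented file name (`htaccess.sample`) and contents:

```shell
# Commit a sample file, and keep each developer's real .htaccess out of
# version control (git and the file contents here are illustrative assumptions).
printf 'SetEnv APP_ENV development\n' > htaccess.sample  # the committed sample
cp htaccess.sample .htaccess        # each developer copies and edits locally
echo ".htaccess" >> .gitignore      # so local copies are never committed
```

Each developer then edits their own copy freely, without stepping on anyone else's settings.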
The Virtual Host
Putting settings in the virtual host gives you an easy way to configure the environment on a per-server basis, and not to accidentally deploy an incorrect setup. You could still distribute a sample vhost setup for people to use as their basis, exactly as you could for the .htaccess file.
The biggest reason for using the virtual host, especially on a production server, is that using .htaccess incurs a performance penalty. When you use .htaccess, Apache looks in the current directory for an .htaccess file to use. It also searches in the parent directory ... and that parent directory's parent directory ... and so on, up to the root of the file system. Imagine doing that on every request to a system under load!
Which To Choose?
It completely depends. At one end of the scale - an open source project that will be set up on a relatively large number of systems by potentially inexperienced people - you'd probably choose .htaccess. For a large-scale, live platform deployment, use the Apache settings themselves (a virtual host for a server which runs multiple sites; Apache's own settings for a server which only hosts a single site). Where are you on the scale, and which will you choose?
Once upon a time, a long time ago, I went onto a conference stage for the very first time and said that I thought I might be the world's ditsiest PHP developer. I actually still think that is pretty true, and if you work with me then you will know that I mostly break and fix things in approximately equal measure. With this in mind, when I launched my own product recently (BiteStats, a thing to automatically email you a summary of your analytics stats every month), I knew that I would need a really robust way of deploying code. I've been doing a few different things for a few years, and I've often implemented these tools with or for other organisations, but I don't have much code in production in my own right, weirdly. I decided Phing was the way to go, got it installed, and worked out what to do next.
My current project (BiteStats, a simple report of your Google Analytics data) uses a basic system where there are numbered patches, and a patch_history table with a row for every patch that was run, showing the version number and a timestamp. When I deploy the code to production, I have a script that runs automatically to apply the patches.
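The idea can be sketched as a small shell function. Everything here is illustrative rather than my actual script: the `patches/NNN.sql` layout is invented, and a plain text file stands in for the patch_history table so the example is self-contained:

```shell
# Apply numbered patches in order, skipping any that were already recorded.
# patches/NNN.sql and patch_history.txt are illustrative stand-ins for the
# real patch files and the patch_history database table.
apply_patches() {
    history=patch_history.txt
    touch "$history"
    for patch in patches/*.sql; do
        [ -e "$patch" ] || continue          # no patches present yet
        num=$(basename "$patch" .sql)
        if ! grep -q "^$num " "$history"; then
            # mysql "$DBNAME" < "$patch"     # the real script runs the SQL here
            echo "$num $(date +%s)" >> "$history"  # record version + timestamp
        fi
    done
}
```

Because each applied patch is recorded, running the script twice applies every patch exactly once.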
When I deploy an application, which is almost invariably a PHP application, I like to put a whole new version of the code alongside the existing one that is in use, and when everything is in place, simply switch between the two. As an added bonus, if the sky falls in when the new version goes live, the previous version is still on the server, ready to be put back into service. To be able to do this, I have my document root pointing at a symlink; let's say it is called "current". (Disclaimer: I have no knowledge of non-Linux operating systems; this post is Linux-specific.)
When it is time to deploy, I place the new code onto the server, and create two new symlinks, one called "previous" which points to the same location as the "current" symlink does (bear with me) and one called "next" which points to the location of the new code. To deploy, all I need is this:
mv -fT next current
The -f forces mv to overwrite the target if need be, and the -T directs mv to treat the second argument as a normal file, rather than as a directory to move into. The neat thing about doing it this way is that it happens in a single operation: no weird results for people who manage to hit your site while you are typing the new symlink command or while the code is updating. It is also just as simple to roll back, since you have a symlink pointing to the previously used code version.
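The whole sequence can be tried out safely in a scratch directory. Here `releases/v1` and `releases/v2` are placeholder paths, not real release directories - symlinks can point at targets that don't exist yet, which keeps the demo self-contained:

```shell
# Demonstrate the atomic symlink switch with placeholder release paths.
mkdir -p demo && cd demo
ln -s releases/v1 current             # the version currently in service
ln -s "$(readlink current)" previous  # rollback pointer to the old code
ln -s releases/v2 next                # stage the new release
mv -fT next current                   # one atomic rename: current now serves v2
cd ..
```

Rolling back is the same trick in reverse: `mv -fT previous current`.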
I thought I'd share this snippet as it is a handy inclusion in deployment scripts/strategies. What are your tips for managing code deployment?