A very simple way to achieve this is with Apache’s mod_expires. For instance, if you add the following directive, taken from the manual, to your .htaccess file – assuming of course that mod_expires is properly installed and enabled – it will tell the browser to cache all the files for a month:
ExpiresDefault "access plus 4 weeks"
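mod_expires can also be tuned per content type. A slightly fuller .htaccess sketch – the directives are from the mod_expires manual, but the specific lifetimes here are just illustrative choices:

```apache
# Only apply these rules if mod_expires is actually loaded
<IfModule mod_expires.c>
    ExpiresActive On
    # Default: cache everything for four weeks
    ExpiresDefault "access plus 4 weeks"
    # Example overrides: longer for images, shorter for stylesheets
    ExpiresByType image/png "access plus 2 months"
    ExpiresByType text/css "access plus 1 week"
</IfModule>
```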
So, every time the client returns to the site within the following month, the browser won’t download all the static content again, but will load it locally from the cache, thus minimising the loading time. Of course, there are some issues with this approach: after an update, visitors may keep seeing the old, cached versions of the files instead of the new ones.
But there’s another, much more elegant way. First of all, place all the static, cacheable files in a separate folder. The browser caches files based on their URL, so if you want the browser to re-fetch all the static data after an update, you need to change the URLs with every new release.
Let’s say the site resides at www.example.com and that all the static content will be served from www.example.com/static/. Now, a good idea is to make the links look like this:

www.example.com/static/release-number/css/style.css

Where release-number is a number that is incremented with every new release. This way, the URLs will be different after each release, thus forcing the browsers to fetch the new files. You don’t need to go through lots of files and increment the release number by hand; you can just use the following Python script:
Run it from the command line, passing the directory to scan and the static URL prefix:

python increment_release.py start_directory static_prefix

Read more about what this script actually does on Tudor Barbu’s blog, http://blog.motane.lu.
import sys
import os
import re

REGEX = None
CACHED_DIR = ''

def parse_files(directory):
    # Recurse through the tree: rewrite every file, then descend into subfolders.
    subdirectories = []
    for item in os.listdir(directory):
        path = os.path.join(directory, item)
        if os.path.isfile(path):
            perform_replace(path)
        elif os.path.isdir(path):
            subdirectories.append(path)
    for subdir in subdirectories:
        parse_files(subdir)

def perform_replace(filename):
    # Rewrite the file in place, but only if it references the static prefix.
    f = open(filename, 'r')
    contents = f.read()
    f.close()
    if REGEX.search(contents):
        f = open(filename, 'w')
        f.write(REGEX.sub(handle_match, contents))
        f.close()

def handle_match(matches):
    # Bump an existing release number, or insert "1" if there is none yet.
    if matches.group(3) is not None:
        revision_number = int(matches.group(3)) + 1
    else:
        revision_number = 1
    return '=%s%s/%s/%s%s' % (matches.group(1), CACHED_DIR, revision_number,
                              matches.group(4), matches.group(5))

if __name__ == '__main__':
    if len(sys.argv) < 3:
        print('Usage: python increment_release.py start_directory static_prefix')
        sys.exit(1)
    start_folder = sys.argv[1]
    CACHED_DIR = sys.argv[2]
    # Matches ="/static/css/style.css" as well as ="/static/12/css/style.css"
    REGEX = re.compile('=(\'|")' + re.escape(CACHED_DIR) +
                       r'/((\d+)/|)([^\'"]+)(\'|")')
    parse_files(start_folder)
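To see what the substitution actually does, here is a standalone sketch of the same regex and callback; the '/static' prefix and the sample HTML snippets are just illustrations, not part of the script itself:

```python
import re

CACHED_DIR = '/static'  # stand-in for the static_prefix argument
REGEX = re.compile('=(\'|")' + re.escape(CACHED_DIR) +
                   r'/((\d+)/|)([^\'"]+)(\'|")')

def handle_match(matches):
    # Bump an existing release number, or insert "1" on the first run.
    if matches.group(3) is not None:
        revision_number = int(matches.group(3)) + 1
    else:
        revision_number = 1
    return '=%s%s/%s/%s%s' % (matches.group(1), CACHED_DIR, revision_number,
                              matches.group(4), matches.group(5))

# First run: a release number gets inserted
html = '<link rel="stylesheet" href="/static/css/style.css">'
print(REGEX.sub(handle_match, html))
# -> <link rel="stylesheet" href="/static/1/css/style.css">

# Subsequent runs: the existing number is incremented
html = '<link rel="stylesheet" href="/static/7/css/style.css">'
print(REGEX.sub(handle_match, html))
# -> <link rel="stylesheet" href="/static/8/css/style.css">
```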
…and, of course, there’s no need to create lots of directories either. A simple mod_rewrite rule will do: just map all the URLs like /static/(number)/css/style.css back to /static/css/style.css, by adding these 2 lines to the /static/.htaccess file:
RewriteEngine On
RewriteRule ^(\d+)/(.*)$ $2 [NC,L]
This should solve all your caching-related problems. If you want to look savvy, instead of a simple incremented number you can use the head revision number from Subversion or whatever versioning system you might be using.
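If your pages are generated dynamically, you can even skip rewriting files altogether and stamp the revision into the URL at render time. A minimal sketch – the helper name `versioned` and the revision value are hypothetical; note that the mod_rewrite rule in the .htaccess expects a numeric segment, which a Subversion revision conveniently is:

```python
def versioned(path, revision, prefix='/static'):
    # /static/css/style.css -> /static/1234/css/style.css
    # `revision` should be numeric so the rewrite rule still matches it.
    if not path.startswith(prefix + '/'):
        return path  # leave non-static URLs untouched
    return '%s/%s/%s' % (prefix, revision, path[len(prefix) + 1:])

print(versioned('/static/css/style.css', 1234))
# -> /static/1234/css/style.css
```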
Yes, I know that the script is a little bit buggy but it works for me. If you have an improved version, post a comment below. Credits will be given.