
We all know that AWS S3 is a great service for hosting your files, but it does present some issues during the development/testing phases of the product lifecycle. You could just give all of your developers your AWS Access Key and AWS Secret Key and, during development, allow them to upload to the real bucket. That gets you moving quickly, but it brings real problems:

  • All developers will have to have real AWS credentials, and they will all be uploading to the same bucket.
  • Your AWS bucket will quickly fill with useless test/fixture data, especially when you have a CI tool like Jenkins continuously running E2E (End to End) tests on your features.
  • In order to separate your production/test data you’ll need to create different buckets, so you’d have to be able to insert different credentials depending on the environment.
  • If you lose network connection you’re out of luck; your application must be connected to the internet or you can’t develop.

In addition to this, when E2E testing your application it’s always a good idea to stub out external API dependencies. Usually this is pretty straightforward: for our frontend applications, for example, we often use robohydra to run a light-weight node server which responds to requests in simple ways.

For our Symfony2 PHP applications we use a cURL wrapper called Guzzle, which is also used by the official AWS PHP SDK. When it comes to writing unit tests Guzzle is great because we can simply stub out the external dependency; indeed, for E2E tests Guzzle comes packaged with a similar light-weight node server. Unfortunately, stubbing out the AWS authentication process would be very complicated, as it involves handshaking with a plethora of unfathomable XML data.

Cue fakeS3. fakeS3 is a lightweight ruby implementation of AWS S3 designed to run locally. It stores uploads locally and responds just as S3 would. We’re primarily a PHP shop with applications running on the Symfony2 framework, so introducing another dependency (in the form of ruby/gems) that my colleagues would have to install to get some of the application’s functionality working was not a decision I made lightly. When I installed the gem and saw the benefits, though, I decided it was worth it.

Here’s how I made installation simple for my fellow devs:

1. I constructed a Gemfile in the project root with the fakes3 gem in it:
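A minimal Gemfile along those lines might look like this (the source URL is the standard rubygems one; no version pin is strictly needed):

```ruby
# Gemfile
source 'https://rubygems.org'

gem 'fakes3'
```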

2. Added the gem install task to the build process we define in a Rakefile (with a helpful message that they may need to install ‘bundler’, the ruby dependency manager, if this command fails):
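A sketch of that task, assuming bundler drives the install (the task name and the wording of the message are illustrative):

```ruby
# Rakefile
desc 'Install the ruby gems needed for local development (fakes3)'
task :install_gems do
  # sh yields success/failure to the block instead of raising,
  # so we can print a friendlier hint when bundler is missing
  sh 'bundle install' do |ok, _res|
    unless ok
      abort "bundle install failed - you may need to install bundler first: gem install bundler"
    end
  end
end
```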

3. Added another rake task to start the fakes3 server:
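A sketch of that task, matching the port and storage directory described below (prefix the command with `bundle exec` if the fakes3 binary isn’t on your PATH):

```ruby
# Rakefile
desc 'Run a local fakeS3 server for development and E2E tests'
task :start_fakes3 do
  # -r sets the directory fakeS3 stores uploads in, -p the port to listen on
  sh 'fakes3 -r .tmp -p 4567'
end
```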

The rake task in step three starts the fakeS3 server on port 4567 and stores all of the files in a .tmp directory in the project root. Now we can run the S3 server with rake start_fakes3. Our S3 client is configured in YAML for Symfony2 dependency injection like this:
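A sketch of what that definition could look like with the AWS PHP SDK v2 factory (the service id acme.s3_client and the file name are illustrative; this uses the factory_class/factory_method syntax from the Symfony 2.x service configuration):

```yaml
# api_services.yml - the real S3 client definition used in production
services:
    acme.s3_client:
        class: Aws\S3\S3Client
        factory_class: Aws\S3\S3Client
        factory_method: factory
        arguments:
            -
                key: "%aws.access_key%"
                secret: "%aws.secret_key%"
```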

The S3Client’s host is set by default to the real AWS endpoint, which is what we want in production. During development/testing, however, we need to tell the S3Client to connect to localhost instead, which we can do quite easily with the AWS PHP SDK. We create another YAML file to be used in those environments (note that aws.access_key and aws.secret_key are defined in parameters.yml; we keep dummy values in parameters.yml.dist):

mock_api_services.yml
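A sketch of that file, assuming the SDK v2 base_url option is used to point the client at the local fakeS3 server (same illustrative service id as above; the port matches the rake task):

```yaml
# mock_api_services.yml - same service id, but pointed at fakeS3 on localhost
services:
    acme.s3_client:
        class: Aws\S3\S3Client
        factory_class: Aws\S3\S3Client
        factory_method: factory
        arguments:
            -
                key: "%aws.access_key%"
                secret: "%aws.secret_key%"
                base_url: "http://localhost:4567"
```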

And then, depending on the environment, we switch which file to load:
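For example, the test (and dev) configuration can import the mocked definitions, while config_prod.yml imports the real api_services.yml instead (file names here follow the standard Symfony2 layout and are illustrative):

```yaml
# app/config/config_test.yml - load the stubbed S3 client for E2E tests
imports:
    - { resource: config.yml }
    - { resource: mock_api_services.yml }
```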

And there we have it, a simple way to stub out your external AWS dependencies when E2E testing your Symfony2 applications.
