Dynamic On-Demand Image Resizing Using Firebase Hosting and Google Cloud Functions to Make a Cheap Cloudinary Alternative

Max Barry
9 min read · Oct 25, 2020

I’m desperately cheap, so my appetite for a $90 / month Cloudinary account is low. Cloudinary’s best features are…

  1. Passing parameters through the URL to dynamically resize imagery
  2. A fast CDN to serve our images from
  3. Dynamic image formats based on a browser’s capabilities (e.g. to serve webp)

… and fortunately, we can recreate all of this using low cost tools from the Google Cloud Platform. Yes, a do-it-yourself option will be frustrating to maintain and denser to explain to others, but it does satisfy our primary goal:

I’m desperately cheap

What we’re going to build

Our Cloud Function resizing imagery on request

On our client website we will have an img tag that contains information in the URL indicating the dimensions we need our image in.

<img src="//project.web.app/path/to/image.jpeg?w=400&h=300" />

Our original source image (a giant jpeg) is sitting on cheap remote storage. This might be your own server, Amazon S3, Google Cloud Storage (Firebase Storage), or basically anywhere we can download an image from.

To connect the two, a Google Cloud Function (or Firebase Function; they're the same thing) sits in the middle, taking the parameters from our request URL and returning a resized, optimized image that is Brotli encoded and webp formatted where the browser accepts it.

We then put a layer of caching over the top, so we’re not constantly accessing and resizing imagery — a process that will likely be slow and expensive.
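To make that shape concrete, here's a minimal sketch of the function we'll flesh out below, assuming the first-generation Firebase Functions HTTPS syntax (the dynamicImages name is one we'll reuse when configuring Hosting later):

const functions = require("firebase-functions");

// A bare outline of the steps the rest of this article fills in
exports.dynamicImages = functions.https.onRequest(async (request, response) => {
  // 1. Parse w / h / q out of request.query
  // 2. Stream the source image out of storage
  // 3. Resize with Sharp and pipe the result to the response
});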

Why can’t you just pre-size your imagery to your needs, and upload that to a cloud storage solution?

Well… you could. But from a practical developer-experience standpoint, setting up image resizing in a build step (let alone trying to resize manually!) is cumbersome, and you always feel a step behind the game. What happens when you need new sizes? Do you have to go back and resize all the imagery that already exists on the server?

There are some cool examples that will do this for you. Firebase has an extension that will resize when you upload an image to Cloud Storage, and it uses most of the technologies we’re going to use.

But again, it's not an immediately responsive option when you're just messing with client code trying to get a design together. Also consider a more demanding real-life example:

<picture>
  <source srcset=" 2X for hi-dpi desktop screens " />
  <source srcset=" 1X for lo-dpi desktop screens " />
  <source srcset=" 2X for hi-dpi mobile screens " />
  <source srcset=" 1X for lo-dpi mobile screens " />
  <img src=" low quality placeholder " />
</picture>

The number of images you will need to be resizing (and storing!) will grow faster than your configurations can keep up with. Hence my preference for an on-demand solution that resizes imagery right when you need it.
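With on-demand resizing, each of those variants becomes just a different query string against the same source image. For example (the sizes values here are illustrative):

<img
  srcset="//project.web.app/path/to/image.jpeg?w=400 400w,
          //project.web.app/path/to/image.jpeg?w=800 800w,
          //project.web.app/path/to/image.jpeg?w=1600 1600w"
  sizes="(max-width: 600px) 400px, 800px"
  src="//project.web.app/path/to/image.jpeg?w=800"
/>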

1. Dynamically resizing imagery through the URL via a Serverless Function

I’m going to use Firebase Functions for the purpose of this example, but the same principles apply to Cloud Functions (they’re the same thing as Firebase Functions) or AWS Lambda or probably Heroku Whatevers.

First, let’s break down the requesting URL in our <img /> tag:

//project.web.app

The host will be the location of our Firebase Function or, as we'll get to later, the caching CDN layer we're putting in front of our function.

/path/to/image.jpeg

We need to tell the Firebase Function where this specific image lives. This will be a path the function can follow on our storage solution. In this case, the file image.jpeg sitting in the directories path > to on the default bucket of our Firebase app on Google Cloud Storage.

?w=400&h=300&q=80

For now, we’ll just handle three parameters, but this could easily be expanded with more. Here we’re specifying that we want a 400px by 300px image at 80% of the original quality. Note that all of these parameters should be optional. If you simply want to serve the image as it was originally, use no parameters. If you want to smartly scale the image based only on a width, add only the w parameter.

Writing our function

The function can be split into thirds.

Processing the URL params

// `query` and `urlparam` come from the Express-style request object:
// query is request.query, urlparam is request.path
// Parse these params to integers
const width = query.w && parseInt(query.w, 10);
const height = query.h && parseInt(query.h, 10);
const quality = query.q && parseInt(query.q, 10);
// We need to strip the leading "/" in the URL parameter
const filepath = urlparam.replace(/^\/+/, "");

We’re just taking the URL params out of the request and processing them to integers, as well as cleaning up our filepath.

Again, this is written in JavaScript for a Firebase Function, but something similar could be done in a Python Flask app, or in any other language and infrastructure. It’s just messing with HTTP requests.

Retrieving the file from storage

// `admin` is the initialized firebase-admin SDK
const bucket = admin.storage().bucket();
const ref = bucket.file(filepath);
// We need to check if the file exists
const [exists] = await ref.exists();
if (!exists) {
  response.sendStatus(404);
  return;
}

Next we’re preparing to read our file. This is about accessing the original source file (our large JPEG) using the URL parameter, and checking that it exists. If not, we 404, since we still need to behave as if this were a direct request to a file in storage.

This file path could be to a local file or a cloud storage solution (e.g. Firebase Storage in this case, or Cloud Storage, or AWS S3). We just need somewhere we can get a buffer or readable stream from.
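For instance, here's a sketch of the same existence check against a local filesystem instead (the /srv/images root is hypothetical):

const fs = require("fs");
const path = require("path");

// Hypothetical directory where source images live on disk
const localPath = path.join("/srv/images", filepath);
if (!fs.existsSync(localPath)) {
  response.sendStatus(404);
  return;
}
// fs.createReadStream(localPath) now gives us the readable stream we need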

Resizing and streaming back our file

// We're going to use streams to do the following:
// read from our source image > pipe to Sharp > pipe to the HTTP response
// (`sharp` is imported at the top of the file: const sharp = require("sharp");)
// Let's create a Sharp pipeline
const pipeline = sharp();
// Read the remote file into that pipeline
ref.createReadStream().pipe(pipeline);
// Build the Sharp options from our parsed URL params
const resizeOpts = { width, height };
const format = "jpeg"; // stick with the source format for now
const formatOpts = quality ? { quality } : {};
// Now run the Sharp pipeline and pipe the output to the response
pipeline.resize(resizeOpts).toFormat(format, formatOpts).pipe(response);

Finally, we stream our file in from storage, pipe it to Sharp, then pipe the output to our response.

Sharp is a fast Node image resizing library that uses libvips under the hood to do our actual resizes and optimizations.

We’re using streams here, but we don’t really need to. If you wanted a more synchronous solution, something like the following would achieve the same result:

// download() resolves to a one-element array containing the file's Buffer
const [inputBuffer] = await ref.download();
const outputBuffer = await sharp(inputBuffer)
  .resize(resizeOpts)
  .toFormat(format, formatOpts)
  .toBuffer();
response.send(outputBuffer);

The Sharp documentation lists lots of parameters we could be processing in the URL (for example, the cropping strategy when images are resized to odd widths and heights). Pulling more parameters from the URL and passing them to Sharp is straightforward, and similar to how we parse the width, height, and quality.
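As a sketch, supporting that cropping strategy through a hypothetical ?fit= parameter, mapped onto Sharp's documented fit option, could look like this:

// Sharp's valid fit values for resize()
const allowedFits = ["cover", "contain", "fill", "inside", "outside"];
// Fall back to Sharp's default ("cover") if the param is missing or invalid
const fit = allowedFits.includes(query.fit) ? query.fit : "cover";
const resizeOpts = { width, height, fit };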

What does this give us? By now we have a Firebase Function we can call to get a resized image back, but you’ll notice that it’s really slow. There are a few reasons for this:

  1. The serverless function has a cold-start issue. There’s always a little lag when a function spools up, and whilst there are strategies to mitigate this, it’s always going to be there.
  2. We’re downloading the image twice: once to our serverless function (when we read it to a stream) and again when our client app downloads the response from the server.

This is why people use CDNs: they’re designed to serve static files at volume. So let’s implement a caching CDN in front of our function, to overcome this speed issue.

2. Caching our outcome in a CDN

We know the slow parts of our function:

  1. The function cold-start
  2. The downloading of the source image
  3. (and to a lesser extent) the time it takes Sharp to process the image

Even Cloudinary experiences some slowdown the first time you request an image.

We need to make sure we only perform the above steps once, and then rarely again when that image and those dimensions are re-requested. To do that, we can introduce a caching layer that sits in front of our Firebase Function. The first request to a URL hits the function and resizes the image. Subsequent hits receive the cached resize, instead of returning to the function and asking it to do all that work again.

Note that we never save the result of our resize to disk. We could save it back to our cloud storage to avoid doing the resize again in the future, but that wouldn’t save us from the cold-start or the double download of the image.

Caching strategies in Firebase

Turns out Firebase Hosting will let you rewrite a URL to point at a Firebase Function (or Cloud Run instance), and will then cache the result of that function for you.

I don’t want to muddy things and get too far into how to set up Firebase Hosting, but to cover it for the unfamiliar: it’s a very cheap hosting service for static apps and, now with these dynamic rewrites, for more server-driven applications.

You could achieve a similar caching layer with any other caching service, too. Maybe Fastly, Varnish, or your own NGINX setup. The premise is just:

1. First request hits the serverless function
2. Subsequent requests are served the result of step #1 from the cache

Let’s extend our Firebase Hosting configuration so that all requests to /images are sent straight to our Firebase Function.

"hosting": {
"public": "build",
"ignore": [
"firebase.json",
"**/.*",
"**/node_modules/**"
],
"rewrites": [
{ "source": "/images/**", "function": "dynamicImages" },
{ "source": "**", "destination": "/index.html" }
]
}

I’ve included the rest of my firebase.json configuration file for reference, but the important line is the first rewrite. All requests to /images/** will be sent to the dynamicImages function we wrote earlier (and presumably deployed to Firebase). Let’s update the img tag to make use of this:

<img src="/images/path/to/image.jpeg?w=400&h=300" />

Requests to that URL will be rewritten by Firebase Hosting to hit our function instead.

You need to be a bit careful about what the URL looks like when it arrives at your function. It will include the /images in the URL path, and you may want to strip that, depending on how you access the filesystem of your source image storage.
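A sketch of that cleanup, assuming the /images/** rewrite above:

// request.path arrives as "/images/path/to/image.jpeg";
// strip the rewrite prefix before looking the file up in storage
const filepath = request.path.replace(/^\/images\//, "");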

Setting our Cache-Control headers

Firebase Hosting (or any other caching layer) isn’t going to cache files without proper Cache-Control headers. For the sake of this example, I’m going to take a really straightforward approach and just cache all our responses for 1 year.

Remember that if you’re using a cloud storage solution, you may have set Cache-Control headers on your source files, and those will have been stripped out by this method. You could retrieve those original values and set them as the Cache-Control headers for the response in your serverless function. In the final Gist in this article, I use the source file’s Cache-Control metadata value and fall back to a default of 1 year.

Let’s add headers to our Firebase Function response:

// contentType comes from the source file's metadata (e.g. "image/jpeg")
response.contentType(contentType);
response.set("Cache-Control", "public, max-age=31536000, s-maxage=31536000");
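And here's a sketch of the metadata-driven variant described above, reusing the Cloud Storage file's own Cache-Control value with a 1 year fallback:

// getMetadata() resolves to a [metadata, apiResponse] pair
const [metadata] = await ref.getMetadata();
const cacheControl =
  metadata.cacheControl || "public, max-age=31536000, s-maxage=31536000";
response.set("Cache-Control", cacheControl);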

How do we know this is working? Let’s take a look in Devtools.

Our first request to a resized image

Our first request has an x-cache value of MISS meaning the request was served by our “upstream” (in the parlance of caching layers) — the Firebase Function. This request took ~1.2s as the function downloaded and resized the source image.

Our second request to a resized image

The second request is a cache HIT. It took 35ms to return the image at the same dimensions that our first resize requested.

The added bonus is that the client’s browser will also respect these Cache-Control headers, meaning on subsequent visits they may not even request the image from our CDN.

You will probably need to tweak and play with the Vary headers to dial in the caching and ensure you only hit the serverless function when necessary.
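A reasonable starting point, assuming our output varies on format and encoding negotiation:

// Accept decides webp vs jpeg; Accept-Encoding decides Brotli / GZIP
response.set("Vary", "Accept, Accept-Encoding");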

The final function

This is the final version of our function. It takes the principles of the Gist from earlier and adds the boring bits back in. For example…

  1. It checks to see if any transformation is required, and only runs Sharp if needed
  2. It has more error checking and logs itself
  3. It accepts a dpr parameter to easily create 2x and 3x versions of imagery (see the sketch below)
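Here's a sketch of how that dpr multiplier could be applied to the parsed dimensions:

// e.g. ?w=400&dpr=2 produces an 800px-wide image for hi-dpi screens
const dpr = query.dpr ? parseFloat(query.dpr) : 1;
const resizeOpts = {
  width: width ? Math.round(width * dpr) : undefined,
  height: height ? Math.round(height * dpr) : undefined,
};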

Another difference is that we introduce dynamic .webp and Brotli / GZIP content encoding based on the request’s Accept and Accept-Encoding headers. That’s more to bring our function in line with the features that Cloudinary also offers. I won’t dive into that in this article because it’s a bonus extra, but I will try to write something explaining it in more detail.
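For a taste of the format half, here's a sketch of picking webp off the Accept header:

// Browsers that can decode webp advertise it in the Accept header
const accept = request.headers.accept || "";
const format = accept.includes("image/webp") ? "webp" : "jpeg";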

I would like to refine this approach and turn it into a Firebase Extension. If someone wants to help work on that, then please get in touch.

This was written by Max. Max is always looking for work building products or fun projects, either doing general problem solving or hard engineering if needed. You can contact: max (at) mxbry.com
