Hugo and Static Sites in GCP

Published: Apr 21, 2026 by Isaac Johnson

In our last post we explored Hugo static sites with Azure, focusing on Azure Front Door and Storage Accounts. Today, we will look at doing a similar thing in GCP with Hugo and Cloud CDN and Application Load Balancers.

The goal here is to compare process and costs and see how they differ. Some of the findings may surprise you.

Let’s start with Cloud DNS…

Cloud DNS

I have an unused domain in Ionos left over from a hackathon, dbeelogs.me

/content/images/2026/04/clouddns-01.png

Since I’m not 100% sure what the end costs might be, let’s do a trial run with this domain before I register anything new.

Since I’m trying to do a GCP setup, let’s move that into Cloud DNS for management.

/content/images/2026/04/clouddns-02.png

I can then copy the Google DNS server entries over to the Name Servers area in IONOS

/content/images/2026/04/clouddns-03.png

I’m warned this can take 48 hours, but often I find it goes much quicker

/content/images/2026/04/clouddns-04.png
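Once the name servers are swapped, you can watch the delegation from the command line. This is just a sketch - “dbeelogs-me” is a hypothetical zone name, so substitute whatever the Cloud DNS zone is actually called:

```shell
# What Google expects the NS records to be (zone name is hypothetical)
gcloud dns managed-zones describe dbeelogs-me --format="value(nameServers)"

# What resolvers actually see; this will lag until the delegation propagates
dig NS dbeelogs.me +short
```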

Buckets

Let’s create a bucket for our site

/content/images/2026/04/hugoingcp-05.png

There isn’t much cost savings between zonal and regional storage (both are about 2¢/GB/month), so I’ll pick a region near me.

/content/images/2026/04/hugoingcp-06.png

While I am going with Standard for now, for a real blog meant to last, “Autoclass” would save money over the long term.

/content/images/2026/04/hugoingcp-07.png

You need to disable/uncheck “enforce public access prevention”. I think the wording there could be better, but having this box selected means you can’t use the bucket for website hosting.

/content/images/2026/04/hugoingcp-08.png

I tend to like Soft delete for data protection. I only use object versioning when storing immutable artifacts

/content/images/2026/04/hugoingcp-09.png

We now have a bucket created we can use with the blog

/content/images/2026/04/hugoingcp-10.png
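For what it’s worth, the same bucket can be stood up from the CLI. This is a sketch: the region is an assumption (pick one near you), and flag availability can vary by gcloud version:

```shell
# Regional Standard-class bucket; us-central1 is an assumed choice
gcloud storage buckets create gs://dbeelogsme \
    --location=us-central1 \
    --default-storage-class=STANDARD \
    --uniform-bucket-level-access

# Website hosting needs public access prevention NOT enforced
gcloud storage buckets update gs://dbeelogsme --no-public-access-prevention
```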

I also made a “test” bucket we’ll use later

/content/images/2026/04/hugoingcp-14.png

Forgejo CICD

At the conclusion of our last post, I had the Workflow set to upload to Azure using an SP

name: Gitea Actions Test
run-name: $ is testing out Gitea Actions 🚀
on: [push]

jobs:
  Explore-Gitea-Actions:
    runs-on: my_custom_label
    container: node:22
    steps:
      - run: |
          DEBIAN_FRONTEND=noninteractive apt update -y
          umask 0002
          DEBIAN_FRONTEND=noninteractive apt install -y ca-certificates curl apt-transport-https lsb-release gnupg build-essential sudo zip
          # Install MS Key
          
          # Use the official Microsoft script to handle repo mapping automatically
          curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
      - run: |
          echo "🔍 Checking Azure CLI version..."
          az --version
      - name: Check out repository code
        uses: actions/checkout@v3
        with:
          submodules: recursive
      - run: |
          # DEBIAN_FRONTEND=noninteractive sudo apt install -y hugo zip
          wget https://github.com/gohugoio/hugo/releases/download/v0.160.0/hugo_0.160.0_linux-amd64.tar.gz
          tar -xzvf hugo_0.160.0_linux-amd64.tar.gz
      - run: |
          echo "🔍 Checking Hugo version..."
          pwd
          ./hugo version
      - run: |
          export
          ls
          ls -ltra themes/hugo-theme-stack
      - run: |
          ./hugo
          zip -r public.zip public
      - name: Branch check and upload
        shell: bash
        run: |
          if [[ "$GITHUB_REF_NAME" == "main" && "$GITHUB_REF_TYPE" == "branch" ]]; then
            echo "✅ On main branch, proceeding with Azure Blob upload..."
            az storage blob upload-batch --account-name $AZSTORAGE_ACCOUNT --account-key $AZSTORAGE_KEY -d '$web' -s ./public --overwrite
          else
            echo "⚠️ Not on main branch, uploading to testing container."
            az storage blob upload --account-name $AZSTORAGE_ACCOUNT --account-key $AZSTORAGE_KEY --container-name testing --name public-$GITHUB_RUN_NUMBER.zip --file ./public.zip --overwrite
          fi  
        env:
          AZSTORAGE_ACCOUNT: $
          AZSTORAGE_KEY: $

      - name: Front Door cache purge
        shell: bash
        run: |
          if [[ "$GITHUB_REF_NAME" == "main" && "$GITHUB_REF_TYPE" == "branch" ]]; then
            az login --service-principal -u $ -p $ --tenant $
            az afd endpoint purge \
                --subscription $ \
                --resource-group bloggingTestRG \
                --profile-name ttpklat \
                --endpoint-name tpklat \
                --domains tpk.lat \
                --content-paths '/*'
          else
            echo "⚠️ Not on main branch, skipping Azure Front Door purge."
          fi

We need to create a service account in GCP for this work.

$ gcloud iam service-accounts create forgejo-publisher \
    --description="CI/CD publisher for Forgejo" \
    --display-name="Forgejo Publisher"

Created service account [forgejo-publisher]

By default, the SA has no powers, so I need to grant it bucket access

$ gcloud storage buckets add-iam-policy-binding gs://dbeelogsme \
    --member="serviceAccount:forgejo-publisher@myanthosproject2.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"
    
bindings:
- members:
  - projectEditor:myanthosproject2
  - projectOwner:myanthosproject2
  role: roles/storage.legacyBucketOwner
- members:
  - projectViewer:myanthosproject2
  role: roles/storage.legacyBucketReader
- members:
  - projectEditor:myanthosproject2
  - projectOwner:myanthosproject2
  role: roles/storage.legacyObjectOwner
- members:
  - projectViewer:myanthosproject2
  role: roles/storage.legacyObjectReader
- members:
  - serviceAccount:forgejo-publisher@myanthosproject2.iam.gserviceaccount.com
  role: roles/storage.objectAdmin
etag: CAI=
kind: storage#policy
resourceId: projects/_/buckets/dbeelogsme
version: 1

Also grant access to the test bucket

$ gcloud storage buckets add-iam-policy-binding gs://dbeelogsme-test \
    --member="serviceAccount:forgejo-publisher@myanthosproject2.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"
bindings:
- members:
  - projectEditor:myanthosproject2
  - projectOwner:myanthosproject2
  role: roles/storage.legacyBucketOwner
- members:
  - projectViewer:myanthosproject2
  role: roles/storage.legacyBucketReader
- members:
  - projectEditor:myanthosproject2
  - projectOwner:myanthosproject2
  role: roles/storage.legacyObjectOwner
- members:
  - projectViewer:myanthosproject2
  role: roles/storage.legacyObjectReader
- members:
  - serviceAccount:forgejo-publisher@myanthosproject2.iam.gserviceaccount.com
  role: roles/storage.objectAdmin
etag: CAI=
kind: storage#policy
resourceId: projects/_/buckets/dbeelogsme-test
version: 1

When I was testing, I found the ‘objectAdmin’ missed some permissions so I went back and added ‘storage.admin’:

$ gcloud storage buckets add-iam-policy-binding gs://dbeelogsme-test \
    --member="serviceAccount:forgejo-publisher@myanthosproject2.iam.gserviceaccount.com" \
    --role="roles/storage.admin"
$ gcloud storage buckets add-iam-policy-binding gs://dbeelogsme \
    --member="serviceAccount:forgejo-publisher@myanthosproject2.iam.gserviceaccount.com" \
    --role="roles/storage.admin"

If you forgot or don’t know the email address of that account, you can look it up in Service Accounts in “IAM & Admin” in the GCP console

/content/images/2026/04/hugoingcp-11.png

With Azure, we needed a Client ID, Client Secret and Tenant. With GCP, we need a larger SA JSON file that includes a bit more (including a private key)

We’ll create that SA JSON with

$ gcloud iam service-accounts keys create sa-key.json \
    --iam-account=forgejo-publisher@myanthosproject2.iam.gserviceaccount.com

created key [5f40xxxxxxxxxxxxxxxxxxxxxb2b] of type [json] as [sa-key.json] for [forgejo-publisher@myanthosproject2.iam.gserviceaccount.com]

which is now saved locally

/content/images/2026/04/hugoingcp-12.png

I’ll add that to the Hugo Blog Actions Secrets as “GCP_SAJSON”

/content/images/2026/04/hugoingcp-13.png

You might be able to use some pre-baked actions like

      - id: 'auth'
        name: 'Authenticate to Google Cloud'
        uses: google-github-actions/auth@v2
        with:
          credentials_json: '$'

      - name: 'Set up Cloud SDK'
        uses: google-github-actions/setup-gcloud@v2

But I always prefer command line first.

I have a CDN step defined, but it won’t work for now as we haven’t set up Cloud CDN yet:

name: Gitea Actions Test
run-name: $ is testing out Gitea Actions 🚀
on: [push]

jobs:
  Explore-Gitea-Actions:
    runs-on: my_custom_label
    container: node:22
    steps:
      - run: |
          DEBIAN_FRONTEND=noninteractive apt update -y
          umask 0002
          DEBIAN_FRONTEND=noninteractive apt install -y ca-certificates curl apt-transport-https lsb-release gnupg build-essential sudo zip
      - name: setup gcloud
        run: |
          curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
          echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
          DEBIAN_FRONTEND=noninteractive apt-get update
          DEBIAN_FRONTEND=noninteractive apt-get install -y google-cloud-cli
      - name: test gcloud
        run: |
          gcloud version
      - name: Check out repository code
        uses: actions/checkout@v3
        with:
          submodules: recursive
      - run: |
          # DEBIAN_FRONTEND=noninteractive sudo apt install -y hugo zip
          wget https://github.com/gohugoio/hugo/releases/download/v0.160.0/hugo_0.160.0_linux-amd64.tar.gz
          tar -xzvf hugo_0.160.0_linux-amd64.tar.gz
      - run: |
          echo "🔍 Checking Hugo version..."
          pwd
          ./hugo version
      - run: |
          export
          ls
          ls -ltra themes/hugo-theme-stack
      - name: hugo build
        run: |
          ./hugo
      - name: create sa and auth
        run: |
          cat <<EOF > /tmp/gcp-key.json
          $GCP_SAJSON
          EOF
          gcloud auth activate-service-account --key-file=/tmp/gcp-key.json
          gcloud config set project myanthosproject2
          # export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-file.json"
        env:
          GCP_SAJSON: $
      - name: test bucket
        run: |
          # test
          gcloud storage buckets list gs://dbeelogsme
      - name: Branch check and upload to GCS
        shell: bash
        run: |
            if [[ "$GITHUB_REF_NAME" == "main" && "$GITHUB_REF_TYPE" == "branch" ]]; then
                echo "✅ On main branch, proceeding with GCS sync..."
                # --recursive descends into subdirectories; --delete-unmatched-destination-objects would also prune files missing from the source (optional)
                gcloud storage rsync ./public gs://dbeelogsme --recursive
            else
                echo "⚠️ Not on main branch, uploading to testing path."
                gcloud storage cp ./public gs://dbeelogsme-test --recursive
            fi

      - name: Cloud CDN Cache Invalidation
        shell: bash
        run: |
            if [[ "$GITHUB_REF_NAME" == "main" && "$GITHUB_REF_TYPE" == "branch" ]]; then
                echo "🧹 Invalidating Cloud CDN cache..."
                # Replace [URL_MAP_NAME] with your Load Balancer's URL map name
                gcloud compute url-maps invalidate-cdn-cache [URL_MAP_NAME] --path "/*" --async
            else
                echo "⚠️ Skipping CDN invalidation."
            fi

It worked as far as I expected

/content/images/2026/04/hugoingcp-15.png

Since I was pushing directly to main, indeed I saw the files in the “production” bucket

/content/images/2026/04/hugoingcp-16.png

Cloud CDN

I went back and forth between starting with Cloud CDN and starting with Application Load Balancers (ALBs). If you just want to see the approach I landed on, skip ahead to “Challenges”

Let’s create a new Cloud CDN with a backend bucket

/content/images/2026/04/hugoingcp-30.png

I’ll create a new LB

/content/images/2026/04/hugoingcp-31.png

I’ll leave the default Cache settings

/content/images/2026/04/hugoingcp-32.png

Soon I saw the CDN Created.

/content/images/2026/04/hugoingcp-33.png

Modifying the LB for HTTPS/TLS

We now see the created prod load balancer

/content/images/2026/04/hugoingcp-34.png

I’ll now edit and add a frontend IP

/content/images/2026/04/hugoingcp-35.png

I’ll now set it to be HTTPS, use classic certs and click “Create new certificate”

/content/images/2026/04/hugoingcp-36.png

I’ll add the domain

/content/images/2026/04/hugoingcp-37.png

The SSL Cert was satisfied right away

/content/images/2026/04/hugoingcp-38.png

I think this was satisfied immediately because the first time through I did DNS auth with an ALB

Prior Steps …

…I click “Create DNS Authorization”

/content/images/2026/04/hugoingcp-26.png

…I then see

/content/images/2026/04/hugoingcp-27.png

…It doesn’t realize my DNS is _in_ GCP, so I’ll need to pop open another window and add the requested CNAME so LE (ACME) can satisfy the DNS challenge

/content/images/2026/04/hugoingcp-28.png

Challenges…

I fought this for a while - starting with the CDN and then modifying the LB, or starting with the LB and trying to add a CDN.

I registered new domains. Nothing seemed to get the cert side to work. I always got a “FAILED_NOT_VISIBLE” error in certs

/content/images/2026/04/hugoingcp-41.png

And always a “DNS_PROBE_FINISHED_NXDOMAIN” with “This site can’t be reached”

/content/images/2026/04/hugoingcp-42.png

I decided to clean everything up - reserved IPs, CDNs and LBs to start over

/content/images/2026/04/hugoingcp-43.png

That meant removing my CDNs, Certs, and LoadBalancers, but notably not the buckets

/content/images/2026/04/hugoingcp-44.png

And lastly any remaining static IP addresses not in use:

/content/images/2026/04/hugoingcp-47.png

do it again..

First, I went to the “Permissions” tab on my bucket and, in the “Permissions” pane, added “allUsers” with the Storage Object Viewer role - without this, the bucket cannot serve content to unauthenticated users

/content/images/2026/04/hugoingcp-45.png

which was made clear in the confirmation dialogue

/content/images/2026/04/hugoingcp-46.png

Next, I’ll get an IP address. I believe this was my problem all the other times - not reserving an IPv4 first and then setting up an A or CNAME record. Here I named it dbeelogsmelbstaticip:

/content/images/2026/04/hugoingcp-48.png

We can now use that address

/content/images/2026/04/hugoingcp-49.png

To create an A record. Here I am setting “www” to point to 34.111.157.82

/content/images/2026/04/hugoingcp-50.png
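For the record, the static IP and A record steps can also be scripted. A sketch - the Cloud DNS zone name below is hypothetical:

```shell
# Reserve a global IPv4 for the load balancer frontend
gcloud compute addresses create dbeelogsmelbstaticip --global --ip-version=IPV4

# Read back the address we were assigned
IP=$(gcloud compute addresses describe dbeelogsmelbstaticip --global \
    --format="value(address)")

# Point www at it ("dbeelogs-me" is a hypothetical zone name)
gcloud dns record-sets create www.dbeelogs.me. \
    --zone=dbeelogs-me --type=A --ttl=300 --rrdatas="$IP"
```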

… Back to our LB edit

Lastly, I just click “Update” to finish this work

/content/images/2026/04/hugoingcp-39.png

If you want to have HTTP to HTTPS redirect, then you’ll need to reserve a static IP

/content/images/2026/04/hugoingcp-40.png

I’ll start by creating a new ALB

/content/images/2026/04/hugoingcp-17.png

It will be public facing

/content/images/2026/04/hugoingcp-18.png

I’ll make it global

/content/images/2026/04/hugoingcp-51.png

And the default global type (not classic)

/content/images/2026/04/hugoingcp-52.png

LB Setup - Frontend

This time we’ll pick HTTPS and the IP address we already created

/content/images/2026/04/hugoingcp-53.png

Then create new certificate

/content/images/2026/04/hugoingcp-54.png

and give it the same DNS name we used

/content/images/2026/04/hugoingcp-55.png

I enabled HTTP to HTTPS redirect

/content/images/2026/04/hugoingcp-56.png

Because I chose a global Application Load Balancer, I can now pick a bucket for the backend (earlier, when using regional, I could only see VMs, Cloud Run, GKE, etc., but not buckets)

/content/images/2026/04/hugoingcp-57.png

Lastly, I just click create to complete the LB setup

/content/images/2026/04/hugoingcp-58.png

I now have a primary and redirect ALB

/content/images/2026/04/hugoingcp-59.png

The root seems to be serving index.xml instead of HTML

/content/images/2026/04/hugoingcp-60.png

However, if I type index.html the blog is served up

/content/images/2026/04/hugoingcp-61.png

The menu for that is kind of hidden (IMHO). Go to the main buckets list page and find the “more actions” ellipsis menu

/content/images/2026/04/hugoingcp-62.png

Choose “Edit Website Configuration”

/content/images/2026/04/hugoingcp-63.png

Now we can set the index and 404 pages (you want to at least set the index/home page)

/content/images/2026/04/hugoingcp-64.png
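The same website configuration can be applied from the CLI (404.html here assumes the theme actually generates one):

```shell
# Set the MainPageSuffix and NotFoundPage on the bucket's website config
gcloud storage buckets update gs://dbeelogsme \
    --web-main-page-suffix=index.html \
    --web-error-page=404.html
```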

I can now use the URL without adding “index.html” and see the blog

/content/images/2026/04/hugoingcp-65.png

Adding the Test site

I’ll now make a branch and a local post. I can use hugo serve to test

/content/images/2026/04/hugoingcp-66.png

One thing I did differently this time was give the post a name with a date. This will be handy for finding things over time

/content/images/2026/04/hugoingcp-67.png

We can then see that pushed to the test bucket

/content/images/2026/04/hugoingcp-19.png

However, when I checked the bucket, I realized I had made a mistake. It had copied the “public” folder in there instead of putting the files at the root

/content/images/2026/04/hugoingcp-20.png

I deleted it

/content/images/2026/04/hugoingcp-21.png

My goof was using “cp” instead of “rsync” in the branch upload


            if [[ "$GITHUB_REF_NAME" == "main" && "$GITHUB_REF_TYPE" == "branch" ]]; then
                echo "✅ On main branch, proceeding with GCS sync..."
                # -r is recursive, -d deletes files in destination not in source (optional)
                gcloud storage rsync ./public gs://dbeelogsme --recursive
            else
                echo "⚠️ Not on main branch, uploading to testing path."
                gcloud storage cp ./public gs://dbeelogsme-test --recursive
            fi

I fixed it

            if [[ "$GITHUB_REF_NAME" == "main" && "$GITHUB_REF_TYPE" == "branch" ]]; then
                echo "✅ On main branch, proceeding with GCS sync..."
                # --recursive descends into subdirectories; --delete-unmatched-destination-objects would also prune files missing from the source (optional)
                gcloud storage rsync ./public gs://dbeelogsme --recursive
            else
                echo "⚠️ Not on main branch, uploading to testing path."
                gcloud storage rsync ./public gs://dbeelogsme-test --recursive
            fi

Test site

Let’s run through that flow again for the “test” site

I’ll make a new static IP

/content/images/2026/04/hugoingcp-22.png

Then take the new static IPv4 (34.54.122.70)

/content/images/2026/04/hugoingcp-23.png

And use it in a new A record for test.dbeelogs.me

/content/images/2026/04/hugoingcp-24.png

I can now create a new global ALB

/content/images/2026/04/hugoingcp-25.png

And create the certificate during the front-end configuration as before

/content/images/2026/04/hugoingcp-29.png

I’ll pick the test bucket (which I had set up some time ago)

/content/images/2026/04/hugoingcp-68.png

Then leave default routing rules and click create

/content/images/2026/04/hugoingcp-69.png

With the ALB created, I checked the “test” site, but did not see my new post

/content/images/2026/04/hugoingcp-70.png

Perhaps the bucket has the wrong contents. I checked a direct storage URL:

/content/images/2026/04/hugoingcp-71.png

I went back to edit the backend configuration of the load balancer and instead of picking an existing bucket, I clicked “Create a backend bucket”

/content/images/2026/04/hugoingcp-72.png

An important step here is to uncheck the “Enable Cloud CDN” checkbox. For a test site (which is just a preview of in-flux articles), we would not want any of them cached (as we are actively changing them)

/content/images/2026/04/hugoingcp-73.png
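A sketch of the CLI equivalent - Cloud CDN stays off on a backend bucket unless you explicitly pass --enable-cdn:

```shell
# Backend bucket for the test site, deliberately without Cloud CDN
# so in-flux posts are never served from cache
gcloud compute backend-buckets create dbeelogsmetest \
    --gcs-bucket-name=dbeelogsme-test
```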

Lastly, uncheck the errant bucket (dbeelogsme), check the new endpoint bucket (dbeelogsmetest) and click update

/content/images/2026/04/hugoingcp-74.png

Oddly, I hit a quota issue when I did this

/content/images/2026/04/hugoingcp-75.png

This is an easy fix in the limits and quotas page

/content/images/2026/04/hugoingcp-76.png

setting a more reasonable limit

/content/images/2026/04/hugoingcp-77.png

Once I swapped backend buckets, I saw the site load as I hoped. Moreover, it was a smidge slower loading the larger images, which was a good clue it was not using a CDN (the desired behaviour)

/content/images/2026/04/hugoingcp-78.png

Costs

I saw some spikes but waited a day or so to see their origins.

/content/images/2026/04/hugoingcp-79.png

It would seem I was spending roughly 2.5c/hour on ALBs. That means in a month I likely would be spending $20 for the main and test websites

/content/images/2026/04/hugoingcp-80.png

I ran that by the Cost calculator for GCP

/content/images/2026/04/hugoingcp-81.png

Indeed it would be just under US$20/mo for the ALBs.

However, what I found interesting is that I can run up to 5 of them for that price.
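The back-of-the-envelope math checks out, assuming the published ~$0.025/hour forwarding-rule charge (which covers the first five rules) and a ~730-hour month:

```shell
# 2.5 cents/hour across a ~730 hour month
monthly=$(awk 'BEGIN { printf "%.2f", 0.025 * 730 }')
echo "~\$${monthly}/mo, covering up to 5 forwarding rules"  # → ~$18.25/mo, covering up to 5 forwarding rules
```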

Summary

Coming from Azure, I believe this will ultimately be much, much cheaper for hosting. In the Azure model, we were required to have an Azure Front Door if we wanted proper certs and HTTPS, but with Google, using Cloud CDN is entirely optional.

I spent entirely too much time debugging the flow. The fact is, GCP does not make the order of operations clear. Never, when I was trying to get certs working, did it suggest creating a missing A record or setting up the static IPs first. At one point it made an unbound cert for www.dbeelogs.me that I couldn’t figure out how to use (as it wasn’t a ‘classic’ cert).

I did, in the end, go to the thinking mode of Gemini to help figure out the flow. Once it suggested a specific order of operations (static IP, A record, ALB, then cert) I found my issues entirely went away.

Sometimes in the blog I will detail my bad flows, but I wiped all those out since it would just waste y’all’s time.

Let’s talk about cost.

In Azure, each Front Door (standard) is US$35/mo + traffic. So every ‘web site’ would basically cost $35 and then some. For little blogs that would just be a non-starter (as getting something like a basic Wordpress account is $2.75 or using Github pages is basically free).

In GCP, having a persistent site is not free. It is essentially US$20/mo for the global Application Load Balancer (which can use backend buckets). However, the CDN aspect is very cheap (if you add it). Additionally, as we saw, it’s not $20 per site; rather, we get up to 5 endpoints for US$20 before the price starts to go up.

I did not do AWS (yet), but mostly because I know AWS. I use it today for this blog. And the reason is because it is just stoopid cheap for hosting a static blog.

I’m only now getting upwards of $15/mo in AWS due to the large amount of traffic (I shouldn’t be too braggy - likely most of it is bots). But for years I spent $3-5:

/content/images/2026/04/hugoingcp-82.png

And unlike Azure, the invalidation step with AWS is fast.

However, to lower costs over time, I’m trimming the rsync command to only handle the last couple of months when I upload, which takes some complicated logic

      - name: create sync command
        run: |
            #!/bin/bash

            # Get the current month (numerical representation)
            current_month=$(date +%m)
            current_year=$(date +%Y)

            last_month=$(( $current_month - 1 ))
            last_year=$(( $current_year - 1 ))

            if (( $current_month == 1 )); then
              current_year=$last_year
              current_month=12
              last_month=$(( $current_month - 1 ))
            fi

            printf "aws s3 sync ./_site s3://freshbrewed.science --size-only" > /tmp/synccmd.sh

            for (( year=2019; year<$current_year; year++ )); do
                printf " --exclude 'content/images/%04d/*'" "$year" >> /tmp/synccmd.sh
            done

            # Loop through the months and print them up to the current month
            for (( month_num=1; month_num<last_month; month_num++ )); do 
                printf " --exclude 'content/images/%04d/%02d/*'" "$current_year" "$month_num" >> /tmp/synccmd.sh
            done
            printf " --acl public-read\n" >> /tmp/synccmd.sh
            chmod 755 /tmp/synccmd.sh
      - name: copy files to final s3 fb
        run: |
            /tmp/synccmd.sh
        env: # Or as an environment variable
          AWS_ACCESS_KEY_ID: $
          AWS_SECRET_ACCESS_KEY: $
          AWS_DEFAULT_REGION: $
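The month-window arithmetic above is easy to get wrong around the January rollover, so here is the same logic as a parameterized bash function (a sketch - the function name and start year are mine) that can be checked in isolation:

```shell
# Same month-window logic as the workflow step: everything older than
# the previous month gets an --exclude; the previous and current months
# are the only image folders that still sync.
build_excludes() {
  local year=$1 month=$2 start_year=2019 excludes=""
  local prev_year=$year prev_month=$(( month - 1 ))
  if (( prev_month == 0 )); then prev_year=$(( year - 1 )); prev_month=12; fi
  local y m
  for (( y = start_year; y < prev_year; y++ )); do
    excludes+=" --exclude 'content/images/$(printf '%04d' "$y")/*'"
  done
  for (( m = 1; m < prev_month; m++ )); do
    excludes+=" --exclude 'content/images/$(printf '%04d/%02d' "$prev_year" "$m")/*'"
  done
  printf '%s\n' "$excludes"
}

build_excludes 2026 4   # excludes 2019-2025 plus 2026/01 and 2026/02
```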

I also had to invalidate not just the main index.html but every sub-page one (or the ordered pages would lose posts)


      - name: cloudfront invalidations
        run: |
            aws cloudfront create-invalidation --distribution-id E3U2HCN2ZRTBZN --paths "/index.html"
            # Invalidate all index.html files, main and pages (.e.g page2/index.html)
            cd _site
            mapfile -t paths < <(find . -type f -name index.html -printf '/%P\n')
            aws cloudfront create-invalidation --distribution-id E3U2HCN2ZRTBZN --paths "${paths[@]}"
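That find/mapfile expansion can be sanity-checked locally against a throwaway tree (assumes GNU find for -printf):

```shell
# Build a throwaway site tree and confirm what the find/mapfile pair produces
site=$(mktemp -d)
mkdir -p "$site/page2" "$site/page3"
touch "$site/index.html" "$site/page2/index.html" "$site/page3/index.html"

cd "$site"
mapfile -t paths < <(find . -type f -name index.html -printf '/%P\n' | sort)
printf '%s\n' "${paths[@]}"
```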

This makes me wonder: if our goal is “small blog hosting”, perhaps serverless is the way to go. We pay for load balancers by the hour and, if we are honest, a huge amount of that time no one is looking. Could a GCP Cloud Function, Azure Function, or AWS Lambda do a better job? And would that be a Hugo server, or a rendered static site served by an nginx container? A lot of serverless cost has to do with memory requests and startup times (nginx is really fast and has low memory demands).

But we’ll save all that for another day.

In summary, I think the Google solution, at least when compared to Azure, is a better value. However, at this point, not enough to get me to migrate this site over (but we are getting close).

blog staticwebsite GCP Hugo markdown forgejo gitea cicd cloudcdn

Have something to add? Feedback? You can use the feedback form

Isaac Johnson


Cloud Solutions Architect

Isaac is a CSA and DevOps engineer who focuses on cloud migrations and devops processes. He also is a dad to three wonderful daughters (hence the references to Princess King sprinkled throughout the blog).

Theme built by C.S. Rhymes