Engineering
We migrated from Hostinger to Vercel. Here's what actually broke.
The migration itself took an evening. The aftermath took two weeks. Six specific things broke that the standard migration blog posts never mention. Here they are, with the fix for each.
By Mr. Gill
In February this year I moved pixipace.com from Hostinger to Vercel. The decision wasn't hard. The migration was full of small, infuriating details nobody tells you about.
I'm writing this down now because roughly every other week I talk to another small studio thinking of doing the same move, and they'd benefit from knowing exactly what to expect. This is the receipt, not the marketing.
The migration itself is a Tuesday evening
If the site is static, or a JAMstack setup, or a React SPA, moving to Vercel is actually simple. Push your repo to GitHub. Connect Vercel to the repo. Vercel detects the framework, builds it, and serves it. Twenty minutes if your DNS is already somewhere sane.
The domain switch is a couple of DNS records, and Vercel's dashboard tells you exactly what to add: point an A record for the apex at their IP, add a CNAME for www, wait for DNS to propagate, and you're live.
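Concretely, the records look something like this. The values below are the ones Vercel documented when I migrated; treat them as illustrative and use exactly what your own dashboard shows, since the IPs can change:

```
; Illustrative zone records for a Vercel cutover (TTL already lowered to 300).
; Always copy the values from your Vercel dashboard, not from a blog post.
pixipace.com.      300  IN  A      76.76.21.21
www.pixipace.com.  300  IN  CNAME  cname.vercel-dns.com.
```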
None of that is hard. The hard parts start the next morning.
1. The canonical URL bug that hid six weeks of content
For six weeks after the migration, roughly half our articles were effectively orphaned from Google's index. Our server-rendered HTML comes from a Cloud Function, and after the move it was stamping every page with a canonical URL on the wrong host. The canonical looked fine when I loaded a page in the browser, because React Helmet overwrote it with the correct value. But Googlebot reads the HTML that's served before hydration, and that HTML carried the Cloud Function's wrong canonical.
The fix was a single line: hardcode the canonical host in the function instead of reading it from the request. I know. Painful.
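A sketch of what that one-line change amounts to. The names here are illustrative, not our actual function; the point is which side of the comparison emits the canonical host:

```javascript
// Hardcode the canonical host instead of deriving it from the request.
// Behind Vercel's rewrite proxy, the incoming Host header belongs to the
// Cloud Function's own domain, which is exactly the wrong value to serve
// in the pre-hydration HTML that Googlebot reads.
const CANONICAL_HOST = "https://pixipace.com"; // hardcoded, not read from the request

function canonicalFor(reqPath) {
  // Before (the bug): `https://${req.headers.host}${reqPath}`
  return `${CANONICAL_HOST}${reqPath}`;
}

// The function then injects this into the HTML it serves:
//   `<link rel="canonical" href="${canonicalFor(req.path)}"/>`
console.log(canonicalFor("/insights/example-article"));
```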
2. The sitemap route that appeared to work but didn't
Vercel's rewrite syntax uses :path+ and :path*, and they behave differently from the path-to-regexp you might remember. I'd set up a rewrite like this:
{
  "source": "/insights/:slug*",
  "destination": "https://...cloudfunctions.net/slugHandler"
}
That looks right. It isn't. The destination doesn't include the path segments, so the function received request.path = "/" and returned 404 for every slug. Fix:
{
  "source": "/insights/:slug+",
  "destination": "https://...cloudfunctions.net/slugHandler/insights/:slug+"
}
Note both differences: :slug+ requires at least one segment (so the bare /insights listing still works), and the destination explicitly includes the path.
3. Email deliverability went sideways
Contact form emails were sent from a Firebase Function via Gmail SMTP. That worked fine on Hostinger and stopped working the week after the Vercel cutover. It turned out the SPF and DKIM TXT records from the old hosting hadn't come across when the DNS moved, so emails were quietly landing in spam.
You can run nslookup -type=txt pixipace.com to check your SPF. You'll want a single SPF record that includes include:_spf.google.com (or whoever's sending). Multiple SPF records or missing DKIM means your form submissions land in spam.
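As a sketch, here's the shape of that check against a captured nslookup answer. The records below are made-up examples standing in for real output; exactly one line should start with v=spf1:

```shell
# Count SPF records in a domain's TXT output. In practice, feed this the
# output of `nslookup -type=txt yourdomain.com`; the sample stands in here.
txt_records='v=spf1 include:_spf.google.com ~all
google-site-verification=abc123'

spf_count=$(printf '%s\n' "$txt_records" | grep -c '^v=spf1')
echo "SPF records: $spf_count"
```

Anything other than exactly one is a problem: zero means no SPF at all, and two or more is treated as a permanent error by receiving servers.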
4. DNS propagation is slower than you think
I'd read "DNS propagation takes up to 48 hours" and assumed the real number was closer to 10 minutes. For most records, sure. For the apex record (the bare pixipace.com, no www), it took almost a full day. Users on cached DNS saw the old site for hours, and at least one contact form submission went to the old Hostinger site and silently vanished.
Moral: a day before you switch, drop the TTL on your existing records to something tiny, like 300 seconds. Then when you flip, propagation actually happens within minutes.
5. Preview envs need their own secrets
Vercel's preview deployments (one per pull request) get their own environment. If you set your Firebase API keys only under the Production tab, preview builds succeed but the deployed preview fails at runtime with auth/invalid-api-key. The fix is to set env vars for all three environments: Production, Preview, and Development. Each can have different values, or you can duplicate the same values across all three.
6. Google remembered a WordPress site I didn't
I assumed the ending would be simple: old site goes offline, Google re-crawls, indexes the new site, everyone moves on. Not quite.
Google had old WordPress URLs in its index from well before I even owned the domain: /tag/, /category/, /wp-content/, plus a handful of historical blog URLs. Six months after the migration, those were still showing up as "Discovered – currently not indexed" in Search Console, and the steady stream of 404s was hurting the site's quality signals.
The fix was a batch of regex-based 301 redirects in vercel.json. I added one for each old pattern. Within a few weeks Google stopped trying to crawl them and the quality signal recovered.
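For reference, here's the shape those redirects take in vercel.json. The patterns mirror the ones above; the destinations are illustrative (point each at whatever replaced it, or at the homepage), and statusCode: 301 matters because Vercel's "permanent": true issues a 308 instead:

```json
{
  "redirects": [
    { "source": "/tag/:slug*", "destination": "/", "statusCode": 301 },
    { "source": "/category/:slug*", "destination": "/", "statusCode": 301 },
    { "source": "/wp-content/:path*", "destination": "/", "statusCode": 301 }
  ]
}
```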
What I'd do differently
I'd spend the day before the migration doing three things. First, lower all DNS TTLs to 300 seconds. Second, do a full curl+grep of the canonical URLs on every page type (article, service, home) to confirm they match the final domain. Third, set up a monitoring ping on five key URLs so I notice within minutes if a route breaks.
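The canonical check from the second step is a one-liner per URL. Here's the grep itself, run against a sample of server-rendered HTML so the pattern is clear; in practice you'd pipe `curl -s "$url"` into it for each page type:

```shell
# Pull the canonical tag out of raw HTML, the way Googlebot sees it.
# Real usage: curl -s "https://pixipace.com/some-page" | grep -o '...'
html='<head><link rel="canonical" href="https://pixipace.com/about"/></head>'
printf '%s' "$html" | grep -o 'rel="canonical" href="[^"]*"'
```

If the host in the output isn't your final domain on every page type, fix that before you touch DNS.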
None of this is hard. It's just the boring checklist stuff that, when skipped, turns a Tuesday-evening migration into two weeks of small fires.
- The migration itself is easy. The post-migration tail is where the pain lives.
- Check canonical URLs on your SSR output specifically. Don't trust what you see in the browser.
- Lower DNS TTLs a day before the switch. The "48 hour propagation" warning is real for apex records.
- Set env vars across Production, Preview, and Development on Vercel. Not just Production.
- Expect Google to remember historical URLs from the domain. Plan your 301 redirects before you go live.