For years, our trade customers—the landscapers, builders, and architects at the core of our wholesale business—accessed pricing through what I quietly called the spreadsheet era: a private, embedded document. It worked in the sense that nobody complained loudly, but it was not mobile-friendly, it offered nothing in the way of UX for someone standing on a job site, and it gave us zero visibility into buying intent until the moment a customer emailed in an order.
When we finally committed to building a proper Trade Portal, the first real decision was not which framework to use. It was whether to build inside the existing e-commerce environment at all.
The case for building outside #
The temptation with any new internal tool is to reach for a plugin. Add the feature to what you already have, avoid the overhead of a new system, keep everything in one place. We went the other way, and the reasoning came down to three requirements that were genuinely in tension with that keep-it-in-one-place instinct.
The CMS carries overhead that is invisible on desktop and brutal on mobile. Every page load drags the full stack with it—template engine, plugin chain, the works. For a user on a phone in direct sunlight, trying to pull up a SKU before a meeting, that latency is not a minor inconvenience. It ends the session. Beyond speed, we needed a clean separation between trade-specific data—quote history, year-to-date totals, login frequency—and our retail sales data. Mixing those inside a single WooCommerce install was going to create reporting noise we would never fully untangle. And on security, the trade portal needed stricter session controls than anything a general-purpose CMS plugin was going to give us by default: forced session expiry, rate limiting tuned for professional users, not casual browsers.
So we built a lightweight PHP application with its own MySQL database, treating the e-commerce platform purely as the source of truth for product data, pulled over a shared-key REST API. The trade portal does not write back to the catalog. It only reads from it, and it stores everything specific to trade activity locally.
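To make that boundary concrete, the read path looks something like the sketch below. The endpoint path, header name, and response shape are illustrative assumptions, not the actual integration; the point is that the portal only ever issues authenticated GETs against the catalog.

```php
<?php
// Minimal sketch of the read-only catalog pull over the shared-key REST API.
// The URL, header name, and response shape are assumptions for illustration.
function fetchCatalogProduct(string $sku): ?array
{
    $ch = curl_init('https://shop.example.com/wp-json/trade/v1/products/' . rawurlencode($sku));
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => ['X-Trade-Key: ' . getenv('TRADE_SHARED_KEY')],
        CURLOPT_TIMEOUT        => 5,
    ]);
    $body   = curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($status !== 200 || !is_string($body)) {
        return null;
    }

    // Read-only by design: the portal never POSTs or PUTs back to the catalog.
    $data = json_decode($body, true);
    return is_array($data) ? $data : null;
}
```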
That boundary ended up being the right call, though it created its own problems—more on that shortly.
Designing for the muddy boot user #
Our primary users are not at desks. They are on job sites, checking stock availability on a phone, often with one hand, often in the sun. The conventional category-browse model—drill down through a tree, land on a product page, add to cart, navigate back—falls apart in that context.
We moved to a Family → Variant → Format model with a single horizontally scrollable tab row across the top. Natural stone types, for example, each get a tab. The header auto-collapses on scroll so the product list takes up as much of the screen as possible. Small decisions, but they came directly from watching how the users actually held their phones.
The feature that worked best, and the one I would prioritize in any similar build, is the Sticky Quote Drawer. Rather than a cart page that pulls you out of browsing, the drawer sits pinned at the bottom of the screen. Users build a proforma quote, estimate shipping, and see ex-tax versus inclusive totals without ever losing their place in the catalog. It sounds like a minor UX detail. In practice it changed the rhythm of how people use the tool—they browse longer and commit earlier because the cost summary is always visible.
The data entropy problem #
Anyone who has managed an e-commerce catalog with thousands of SKUs over several years knows that the category structure quietly deteriorates. Inconsistent naming, orphaned SKUs, products filed under the wrong parent, variations that were created for a one-off job and never cleaned up. Our catalog had accumulated years of exactly this.
We could not run a clean B2B interface directly off that structure. So we built a custom mapping plugin inside the legacy system that acts as a curation layer. Staff maintain a Trade Product List, currently 219 approved items, each mapped explicitly to a legacy SKU. The front-end renders only what is on that list. Live pricing and stock still pull from the main inventory system in real time, but the shape of what the portal presents is controlled entirely through the mapping layer.
This kept the portal clean without requiring a full catalog audit first. It also gave the operations team direct control over what trade customers see, which removed a category of developer requests that used to come through as one-line tickets every few weeks.
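In practice the curation layer does not need to be elaborate. A sketch of what it amounts to, with illustrative table and column names rather than the plugin's actual schema:

```php
<?php
// Hypothetical shape of the mapping table behind the Trade Product List:
//
//   CREATE TABLE trade_product_map (
//       trade_sku  VARCHAR(64) PRIMARY KEY,   -- what the portal displays
//       legacy_sku VARCHAR(64) NOT NULL,      -- the catalog SKU it maps to
//       family     VARCHAR(64) NOT NULL,      -- Family → Variant → Format grouping
//       is_active  TINYINT(1)  NOT NULL DEFAULT 1
//   );

// The front end renders only approved rows; price and stock still come
// live from the legacy API, keyed by legacy_sku.
function tradeProductsForFamily(PDO $db, string $family): array
{
    $stmt = $db->prepare(
        'SELECT trade_sku, legacy_sku
           FROM trade_product_map
          WHERE family = :family AND is_active = 1
          ORDER BY trade_sku'
    );
    $stmt->execute(['family' => $family]);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}
```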
Two things that broke in ways worth documenting #
The implicit commit bug. During the rollout of our transactional email system, quote requests were saving to the database, but the associated line items were not: the transaction appeared to commit, yet the items were gone.
What was happening: a lazy initialization check was firing a CREATE TABLE statement inside an open transaction. In MySQL, DDL statements—CREATE, ALTER, DROP—trigger an implicit commit. The table creation committed the transaction immediately, before the line items had been written, and the rest of the insert ran outside any transaction context. The partial data appeared to succeed, which is why it took a while to spot.
Once we understood the mechanism, the fix was straightforward: pre-warm all services and run any DDL before opening transactional blocks, not inside them. The broader lesson I took from this is to treat service initialization as a deployment-time concern, not a runtime one. If a service checks for its own table on first use, that check does not belong inside a code path that also writes business data.
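Here is the mechanism reduced to a minimal reconstruction, not our actual service code. Table names are illustrative and `$db` is a plain PDO connection; what matters is where the CREATE TABLE sits relative to the transaction.

```php
<?php
// Minimal reconstruction of the implicit commit failure. In MySQL, DDL
// statements (CREATE, ALTER, DROP) trigger an implicit commit.
$db = new PDO('mysql:host=localhost;dbname=trade', 'portal', 'secret');

// Buggy shape: lazy initialization fires DDL inside an open transaction.
$db->beginTransaction();
$db->exec("INSERT INTO quotes (customer_id) VALUES (42)");

// First use of the mailer checks for its own table...
$db->exec("CREATE TABLE IF NOT EXISTS email_log (id INT AUTO_INCREMENT PRIMARY KEY)");
// ...and the DDL implicitly commits: the quote header is now permanent and
// the server-side transaction is closed, even though PDO still thinks one
// is open. Everything below runs in autocommit, no longer atomic with the
// header.
$db->exec("INSERT INTO quote_items (quote_id, sku, qty) VALUES (1, 'PAV-001', 10)");
$db->rollBack(); // executes, but undoes nothing: the writes are already committed

// Fixed shape: run all DDL at bootstrap/deploy time, then open transactions.
$db->exec("CREATE TABLE IF NOT EXISTS email_log (id INT AUTO_INCREMENT PRIMARY KEY)");

$db->beginTransaction();
$db->exec("INSERT INTO quotes (customer_id) VALUES (42)");
$quoteId = (int) $db->lastInsertId();
$db->exec("INSERT INTO quote_items (quote_id, sku, qty) VALUES ($quoteId, 'PAV-001', 10)");
$db->commit();
```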
The API fan-out problem. Early load times were poor. Profiling showed the application was making ten sequential calls to the legacy system’s API just to build the navigation menu—one call per product family to check availability and fetch counts. Each call waited for the previous one to finish.
The fix was a Bootstrap Manifest: a single API endpoint that returns the full catalog structure in one response. The portal caches that on load and navigates from it. Individual product detail calls still go back to the API, but the menu and browsing structure render instantly. The trade-off is that the manifest response is larger than any individual call, so first load is slightly heavier. In practice, on the networks our users are actually on, that first-load cost was far less painful than ten serial round-trips per session.
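A sketch of the pattern, not the production endpoint; the names and response shape here are mine. One aggregated response replaces the ten per-family round-trips, and the portal navigates from its cached copy:

```php
<?php
// Legacy-system side: aggregate everything the navigation needs in one pass.
// trade_catalog_view is a hypothetical denormalized view for illustration.
function buildManifest(PDO $db): string
{
    $rows = $db->query(
        'SELECT family, variant, format, sku, in_stock
           FROM trade_catalog_view'
    )->fetchAll(PDO::FETCH_ASSOC);

    $catalog = [];
    foreach ($rows as $r) {
        $catalog[$r['family']][$r['variant']][$r['format']] = [
            'sku'      => $r['sku'],
            'in_stock' => (bool) $r['in_stock'],
        ];
    }
    return json_encode(['generated_at' => time(), 'catalog' => $catalog]);
}

// Portal side: fetch once, cache for the session, build the menu from the
// copy. Assumes session_start() has already run.
function bootstrapManifest(): array
{
    if (!isset($_SESSION['manifest'])) {
        $_SESSION['manifest'] = json_decode(
            file_get_contents('https://shop.example.com/wp-json/trade/v1/manifest'),
            true
        );
    }
    return $_SESSION['manifest'];
}
```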
Turning quotes into sales intelligence #
The admin analytics dashboard was the last piece, and in retrospect it might be the one with the longest-term value.
Two things changed how the sales team operates. First, tracking top users by year-to-date quote value made previously invisible buying patterns visible and changed how account conversations started. Second, and this was not obvious until someone on the team asked for it, we started storing a price snapshot at the moment each quote is generated.
Prices change. A salesperson following up three weeks after a quote was sent needs to see the exact numbers the customer saw, not today’s pricing. Before the snapshot, that conversation required digging through emails. Now it is a single lookup. The snapshot is not technically complicated—it is just a JSON blob stored alongside the quote record—but it resolved a recurring friction point that had been invisible to us because nobody had thought to name it.
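The write side is about as simple as it sounds. A sketch under assumed table and column names, since the article only commits to "a JSON blob alongside the quote record":

```php
<?php
// Sketch of the snapshot-at-quote-time write. Table and column names are
// illustrative; the essential move is serializing the prices the customer
// actually saw and storing them with the quote.
function saveQuote(PDO $db, int $customerId, array $lines): void
{
    $snapshot = json_encode([
        'captured_at' => date('c'),
        'lines'       => array_map(fn ($l) => [
            'sku'      => $l['sku'],
            'qty'      => $l['qty'],
            'unit_ex'  => $l['unit_ex_tax'],   // ex-tax unit price as quoted
            'unit_inc' => $l['unit_inc_tax'],  // inclusive unit price as quoted
        ], $lines),
    ]);

    $stmt = $db->prepare(
        'INSERT INTO quotes (customer_id, price_snapshot, created_at)
         VALUES (:cid, :snap, NOW())'
    );
    $stmt->execute(['cid' => $customerId, 'snap' => $snapshot]);
}
```

Follow-up conversations then read from the snapshot, never from the live price tables.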
Where things stand #
As of April 2026, the portal is in production. The architecture runs on lightweight PHP with no framework, a dual-database setup (a dedicated MySQL database for trade data, the WooCommerce install for the catalog), Docker-based local development mirroring production, and a security configuration that includes CSP hardening, session rotation on login, and split-key rate limiting.
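Of those, the session handling is the piece worth sketching, since it comes up again below. Only the rotation-on-login and the 16-hour expiry are from this write-up; the surrounding shape is an assumption:

```php
<?php
// Sketch of session rotation on login plus forced expiry. Assumes
// session_start() has already run on every request.
function onLoginSuccess(int $userId): void
{
    // Rotate the session ID so a pre-login (possibly fixated) ID never
    // carries over into an authenticated session.
    session_regenerate_id(true);

    $_SESSION['user_id']    = $userId;
    $_SESSION['expires_at'] = time() + 16 * 3600; // forced 16-hour expiry
}

function requireActiveSession(): void
{
    if (empty($_SESSION['expires_at']) || time() > $_SESSION['expires_at']) {
        session_destroy();
        header('Location: /login');
        exit;
    }
}
```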
I still would not call this project finished. The catalog mapping process is manual and does not scale cleanly beyond a few hundred SKUs. The analytics dashboard covers the questions we thought to ask; I am fairly confident there are patterns in the quote data that we have not yet learned to look for. And the session management, which felt strict when we set it, is already prompting questions about whether a 16-hour expiry is the right number for users who come back to the same quote over several days.
For anyone facing a similar decision—whether to extend the CMS or build alongside it—the honest answer is that decoupling solved the problems we knew we had and introduced complexity we had not fully priced in. The integration boundary requires maintenance. The mapping layer requires curation. The two databases require careful thinking about which system owns what. None of that is unmanageable, but it is not free either, and I would go in with clearer eyes about the ongoing cost than I did.