What Skills Should a MarTech Consultant Have? A Practitioner's Vetting Checklist
A MarTech consultant's job is the same shape regardless of stack: read systems, decide what matters, connect things that don't talk, and hand the team a runnable system. The skills that produce those outcomes split into four buckets — strategy, platforms, data, and judgment. Each one carries a question you can ask in a 30-minute call to verify it.
If you're vetting a candidate, this article doubles as a checklist. If you're earlier in the buying process — figuring out what a MarTech consultant actually does before deciding to hire one — start there and come back.
The four skill buckets
- Strategy — translating ambiguous requests into measurable systems. The skill of moving from "we want better attribution" to "here are the six events and three user properties that matter."
- Platforms — depth on the actual tools the work runs on. CDP, marketing automation, product analytics, tag management, consent management. Depth on a few, not breadth on twenty.
- Data — the bridge between marketing and engineering. SQL fluency, event taxonomies, identity resolution, server-side tracking. Enough to spec the work the data engineering team will ship.
- Judgment — the skills that don't fit on a resume. Reading a room. Saying no. Writing things down. Designing the engagement to end.
The first three can be assessed directly. The fourth is where senior practitioners separate from the rest.
Strategy skills — translating ambiguous requests into measurable systems
This is the work most people mean when they say "MarTech strategy" — and it's the skill most easily faked, because strategy decks are easy to make and hard to evaluate.
Translating "we need better attribution" into 6 events and 3 user properties
The senior consultant will not start with attribution methodology. They will start by asking what business decisions the attribution is for. Three or four discovery questions later, the conversation has moved from "we want better attribution" to "we need to know whether email-led customers have higher LTV than paid-social-led customers, by acquisition cohort, for the next quarterly review." That sentence specifies the events, the user properties, and the time horizon — and it is the only kind of sentence the data layer can actually answer.
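A sentence like that can be pinned down as a concrete tracking spec. The sketch below is illustrative only: the event and property names are hypothetical, not a standard, but the shape — a short closed list of events, a short closed list of user properties, and a validation rule that rejects anything else — is the point.

```python
# Hypothetical tracking spec for the attribution question above.
# Event and property names are illustrative, not a standard.
TRACKING_SPEC = {
    "events": [
        "signup_completed",
        "email_campaign_clicked",
        "paid_social_ad_clicked",
        "trial_started",
        "subscription_purchased",
        "subscription_renewed",
    ],
    "user_properties": [
        "acquisition_channel",  # first-touch channel: "email" | "paid_social" | ...
        "acquisition_cohort",   # e.g. "2026-Q1"
        "ltv_to_date",          # running revenue total, owned by the warehouse
    ],
}

def validate_event(name: str) -> bool:
    """Reject events outside the spec -- the governance rule that
    keeps the taxonomy from growing accidentally."""
    return name in TRACKING_SPEC["events"]
```

Six events, three user properties: the spec is the sentence, made enforceable.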
Vetting question. "How would you scope this if we said 'attribution is broken' on a discovery call?" The answer should not be a methodology. It should be a sequence of questions that turn the symptom into a decision.
Mapping a buyer journey to data + tools + team
A MarTech consultant should be able to walk a whiteboard from a marketing surface (a paid social ad, an organic blog post, an email send) all the way through to revenue, naming the data pipe, the platform, and the team responsible at every step. Most stacks have three or four breaks in that chain that nobody has named. Diagnosing them in 90 minutes is a senior skill.
Vetting question. "Walk me through how you'd diagnose a stuck conversion funnel without seeing our data yet." The answer reveals whether the consultant works from systems-thinking principles or from a generic checklist.
Vendor selection without category traps
Most vendor categories are 30% noise. A senior consultant has clear opinions about which categories are mostly marketing — "customer engagement platform," "AI-powered conversion suite," and similar — and is willing to push back on a brief that contains them. A vendor recommendation that defaults to the largest player in every category is not a recommendation; it's an abdication.
Vetting question. "What categories are mostly noise? Where would you push back on the brief?" The answer should be specific and a little bit uncomfortable.
A real example of where this matters: most MarTech architecture engagements end up cutting more from the planned stack than adding to it. That outcome only happens if the consultant has the conviction to remove things.
Building a measurement framework that survives a re-org
Measurement debt is the biggest hidden cost in MarTech. A measurement framework that lists every event a marketer might want will collapse the first time the marketing org reshuffles. A framework that lists only the events the business will act on, with named owners and a clear retention strategy, survives.
Vetting question. "What does measurement debt look like, and how do you avoid creating more of it?" The senior answer involves event taxonomy governance, named owners per event, and explicit retirement criteria — not "we measure everything that might be useful."
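The governance pattern in that answer — every event carries a named owner and explicit retirement criteria — can be sketched as a data structure. The field names and example values below are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical governance record: every event carries a named owner,
# the decision it serves, and an explicit retirement deadline.
@dataclass
class EventDefinition:
    name: str
    owner: str               # a named person or team, never just "marketing"
    decision_it_serves: str  # the business decision that justifies the event
    review_by: date          # if nobody defends it by this date, retire it

def events_due_for_retirement(events: list[EventDefinition], today: date) -> list[str]:
    """Return events past their review date -- candidates for removal."""
    return [e.name for e in events if e.review_by <= today]
```

An event that can't be given an owner and a decision is, by this rule, an event the framework shouldn't contain.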
Platform skills — depth over breadth
The most over-claimed area on consultant resumes. Twenty platforms listed, no story behind any of them. The right shape is the opposite: deep mastery of a primary platform in each major category, and informed working knowledge of a few alternatives.
A primary CDP at expert level
Tealium, Segment, mParticle, or RudderStack. One of those, mastered. Mastery means the consultant can:
- Diagnose an identity resolution edge case from a debug log.
- Write the integration spec for a non-trivial source or destination by hand.
- Explain the differences between the platform's identity model and a warehouse-native pattern, with conviction about when each one wins.
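The identity-model point can be made concrete with a minimal sketch of deterministic identity stitching: anonymous device IDs get mapped onto a canonical user ID once a login links them. Real CDPs handle merge conflicts, graph cycles, and retroactive event replay; this shows only the core union step, and the class is hypothetical.

```python
# Minimal sketch of deterministic identity stitching. Real CDP
# identity graphs handle merge conflicts, cycles, and retroactive
# replay; this shows only the core link-and-resolve step.
class IdentityGraph:
    def __init__(self) -> None:
        self._canonical: dict[str, str] = {}  # any known id -> canonical user id

    def link(self, anonymous_id: str, user_id: str) -> None:
        """A login event links a device's anonymous id to a user id."""
        canonical = self._canonical.get(user_id, user_id)
        self._canonical[user_id] = canonical
        self._canonical[anonymous_id] = canonical

    def resolve(self, any_id: str) -> str:
        """Unlinked ids resolve to themselves -- they stay anonymous."""
        return self._canonical.get(any_id, any_id)
```

The edge cases a consultant debugs from a log are exactly the places where this naive version breaks: two user IDs claiming the same anonymous ID, or a link arriving after events were already activated.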
Vetting question. "Walk me through a non-trivial integration you debugged. What was the failure mode?" Senior platform knowledge surfaces in war stories, not in feature lists.
The work of a CDP implementation engagement is mostly this skill applied at the architectural level. A consultant who has never debugged a CDP at 11pm because activation broke is a consultant who hasn't earned the depth yet.
A primary lifecycle and marketing automation platform
Iterable, Braze, Customer.io, HubSpot, or Klaviyo. Mastery of the segmentation model, the sending architecture, the deliverability constraints, and the integration with the data layer above it.
A primary product analytics tool
Amplitude, Mixpanel, or PostHog. Comfort with the event-property data model, the cohort/funnel/journey reports, and the integration patterns into the rest of the stack.
Tag management and server-side tracking
Tealium iQ or GTM (client and server). Server-side tracking is non-optional in 2026: a consultant still defaulting to fully client-side measurement is two years behind the field. The right depth includes server-side container architecture; conversion API integrations with Meta CAPI, Google Enhanced Conversions, and TikTok Events API; and the cookie, consent, and first-party-data tradeoffs each of them implies. This is the depth area for a tracking implementation engagement.
Consent management
OneTrust, Sourcepoint, Iubenda for Italian and EU work. Mastery includes TCF v2.2 implementation, GPP migration, lawful-basis mapping for downstream activations, and the interaction between the CMP and the rest of the stack.
Anti-pattern. A resume listing twenty platforms with no story behind any of them. Twenty platforms is not "broad expertise"; it is the mark of a consultant who has collected certifications rather than shipped work.
Data skills — the bridge between marketing and engineering
A MarTech consultant is not a data engineer. But a consultant who can't read SQL, can't reason about warehouse architecture, and doesn't understand identity resolution will not be able to spec the engineering work the team needs. The right level of fluency:
- SQL at intermediate level. Joins, window functions, CTEs, and the discipline to write a query that makes sense to read six months later. Not "I know SQL exists." Not "I can write production-grade pipelines." Somewhere in between.
- Event taxonomies. Understanding of Segment-style spec design, Iteratively/Avo-style governance, and the tradeoff between flexibility and discipline. The single biggest data debt in most stacks is an event taxonomy that grew accidentally and was never rationalized.
- Warehouse architecture, as a consumer. Working knowledge of Snowflake, BigQuery, or Redshift — enough to understand what your data team is asking for, sequence the marketing-side work that depends on it, and write specifications that don't break warehouse cost models.
- Identity resolution. Probabilistic vs deterministic, the tradeoffs, the failure modes. When to use a cross-device graph from a CDP vendor vs build identity resolution in the warehouse vs both.
- Server-side tracking. Why, when, and what breaks if you skip it. The trend lines on browser-side tracking decay (ITP, third-party cookies, CNAME cloaking detection) make this non-optional.
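The "intermediate SQL" bar from the list above is easy to make concrete: a CTE plus a window function over an events table, the kind of query a consultant should write and read cold. The table and data here are hypothetical; the query computes first-touch channel per user.

```python
import sqlite3

# Hypothetical events table; the query below finds each user's
# first-touch channel with a CTE and a window function.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id TEXT, channel TEXT, ts INTEGER);
    INSERT INTO events VALUES
        ('u1', 'email', 1), ('u1', 'paid_social', 2),
        ('u2', 'paid_social', 1), ('u2', 'email', 3);
""")

FIRST_TOUCH = """
    WITH ranked AS (
        SELECT user_id, channel,
               ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY ts) AS rn
        FROM events
    )
    SELECT user_id, channel FROM ranked
    WHERE rn = 1
    ORDER BY user_id;
"""
rows = conn.execute(FIRST_TOUCH).fetchall()
```

A candidate who reaches for `GROUP BY` with a correlated subquery here isn't wrong, but the window-function version is the one that reads cleanly six months later.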
Vetting question. "Show me an event taxonomy you wrote — what did you cut, and why?" The senior answer involves explicit cuts, governance principles, and a story about a debate the consultant had with a marketing stakeholder about what to remove.
This is also the area where the data and stack strategy work lives — measurement frameworks, governance models, taxonomy design.
Judgment skills — the ones that don't fit on a resume
This section is the most important and the hardest to fake. None of these show up on a vendor certification.
Reading a room
Not every decision presented as "this needs to be made" is actually being made in the room. Senior consultants notice which decisions are real, which are theater, and which were already made and the engagement is being run for political cover. Acting on the wrong layer of decision wastes the engagement.
Saying no
Turning down work where success is impossible inside the proposed scope. Pushing back on a brief that has the wrong shape. Naming the decision the buyer is actually trying to avoid. A consultant who has never said no in a sales call is a consultant who hasn't yet figured out what they're for.
Writing things down
Decisions logged in writing, not in meeting recordings. Briefs over slides. Runbooks over wiki dumps. The single best signal of a senior consultant is an existing portfolio of written deliverables — even anonymized, even partial — that you can read.
Handing it back
The engagement is designed to end. The handover is real, not theatrical. The consultant will not still be on a retainer six months after the system shipped, unless the relationship has explicitly transitioned into a fractional advisor shape and both sides agreed to it. This is a design decision, made in week one, not a thing that happens at the end.
Vetting question. "When was the last time you turned down work, and why?" The answer reveals whether the consultant has the conviction to define the shape of the work, or whether the consultant takes whatever work is on the table.
Soft signals that something is off
A few patterns that should make you slow down:
- "I do everything." Specialists outperform generalists in this work. A consultant who covers paid media and CDP architecture and lifecycle email and attribution modelling at expert level is unlikely to be senior in any of them.
- "I don't share my deliverables in advance." A senior consultant has a writing portfolio. If the consultant can't show you a single anonymized brief or audit they've written, the writing portfolio doesn't exist.
- "I'll learn it on your engagement." Acceptable for one secondary platform; not acceptable for the primary platform on a six-figure CDP rebuild.
- "I work alone." A consultant who can't collaborate with an in-house data team will produce shelf decoration. The work is fundamentally collaborative.
Credentials that matter (and the ones that don't)
- Vendor certifications. Useful as a filter, not as proof. Tealium Certified, Segment Certified, Iterable Certified — they prove the consultant has been exposed to the platform; they do not prove judgment.
- Conference talks. Stronger proof. A consultant who has spoken at MeasureCamp, MarTech World, or CDP Summit has been pressure-tested in front of a senior audience.
- Open-source contributions or written work. The strongest proof. Code and writing are both portable, public, and impossible to fake post-hoc.
- Wikidata entity, Crunchbase profile, industry directory inclusion. Tertiary signals. They confirm the consultant is publicly recognizable, which matters less than the writing but more than nothing.
How to test these in a 30-minute discovery call
Three questions surface 80% of the signal:
- "Walk me through the messiest engagement you've worked on. What did you cut, what did you keep, what did you regret?" The answer reveals whether the consultant has the introspection to learn from real work, or whether every engagement was a triumph.
- "If we hired you for four weeks, what would the written deliverable look like?" The answer reveals whether the consultant has a clear template for written work, or whether deliverables are improvised at the end of an engagement.
- "When was the last time you turned down work, and why?" The answer reveals whether the consultant has a defined shape for the engagements they accept.
If a consultant gives clean, specific answers to all three, the rest of the vetting can run on platform-specific questions and reference checks.
FAQ
What's the most important skill for a MarTech consultant?
Judgment, by a margin. A consultant with senior judgment and intermediate platform skills is more useful than a consultant with senior platform skills and weak judgment, because the platform skills can be sourced — from documentation, from vendor support, from in-house engineers — while judgment cannot.
Do MarTech consultants need to code?
Not at the level of a software engineer, but yes, at intermediate level. SQL fluency is non-negotiable. Comfort reading and modifying JavaScript in tag manager configurations is required. Understanding the difference between a webhook and a job-queue-based integration matters. The honest framing: a MarTech consultant should be able to spec engineering work and review its outputs, even if the engineering team ships it.
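The webhook-versus-queue distinction mentioned above fits in a few lines. A webhook-style handler does the work synchronously, inside the request, so a failure fails the request; a queue-based integration only enqueues and lets a worker drain later. The names here are illustrative.

```python
from collections import deque

def webhook_handler(event, process) -> None:
    """Webhook style: the work happens inside the request.
    If process() raises, the caller sees the failure immediately."""
    process(event)

JOB_QUEUE: deque = deque()

def enqueue_handler(event) -> None:
    """Queue style: the request only records the event and returns."""
    JOB_QUEUE.append(event)

def drain_queue(process) -> None:
    """A worker processes queued events later, decoupled from the caller."""
    while JOB_QUEUE:
        process(JOB_QUEUE.popleft())
```

Knowing which shape an integration has tells you where retries, ordering, and backpressure live — which is exactly the level a consultant needs to spec the work.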
What technical certifications matter most?
For platform proof, the major vendor certifications: Tealium, Segment, mParticle, Braze, Iterable, OneTrust, GTM. None is sufficient on its own; they're filters, not proof. The cluster of certifications, conference talks, and written work together is what proves seniority.
How can I tell if a consultant has real platform depth?
Ask for a war story. A senior consultant has at least one specific anecdote per major platform — a failed integration, a debugged identity edge case, a vendor support ticket that revealed a real product limitation. Generic platform praise is the inverse signal: a consultant who only has good things to say about every platform has not pushed any of them hard enough.
If you're vetting a MarTech consultant — me or anyone else — those are the questions worth asking. If you'd like to test them on a real engagement, a short scoping call is the fastest way to find out whether the work fits.