Weekly intelligence for portfolio executives closing the AI Wage Gap.
Action taking — vs. taking action.
Why almost nobody who's "inspired" by AI ever ships anything — and what David did differently.
There's action taking — and there's taking action. Allow me to explain.

If I had a dime for every time I heard how someone "uses AI," but in practice uses it as a search engine or conversation partner… I'd be a very rich man, indeed. If I had a dime for every time I heard from the same exact people, in so many words, that they're afraid to actually build anything with AI (you know, actual workflows, agents, web apps)… You get the idea. Fear of the unknown is the great mojo killer for those who love what AI does but have no clue how to use it to even 10% of its capability.

It's with this in mind that I took a meeting from a LinkedIn DM at Israel Tech Week last week in Miami. Lovely town, by the way. David and I met at a café in Bal Harbor and the conversation started flying. I wasn't sure at first what exactly this would be about, or why he wanted to speak with me. After the intros and pleasantries, he peppered me with questions: about my AI builds, how I learned, what my suggestions were for his particular situation. He's a fractional CRO (Chief Revenue Officer) for various companies and runs a great organization that works with Israeli startups.

I showed him one of my 2-hour website builds for a PE fund — he was inspired. Ok, everyone's inspired on some level by what AI does when you use it to even a little of its potential. That's usually where the story ends. David actually asked me about newsletters I read, other resources I could recommend, etc. Ok, that's already unusual. I put together a list and shared it after we spoke.

Next thing I know… David messages me and says he was inspired… AND totally rebuilt his site. This was later the same day.

Which got me thinking… How often do we all feel inspired… and do absolutely nothing? No follow-up, no playing around, no experimentation. More than 95% of the people I meet talk a big game or shrink away, but almost nobody follows up, asks about resources, asks for best practices.
I'm beyond happy to share — and not just in this newsletter, either — but practically nobody asks. Maybe it's fear of the unknown. Maybe it's the 4-second attention span most of us have devolved to. Maybe it's that people are waiting for permission.

Everyone's busy, with a million obligations and projects. That's no different for me or David or most of the other grown, adult professionals I meet. Most talk, and hold on for dear life to their jobs, even as the rules and conventions of work and life are already changing quickly, in front of our eyes. This isn't about being alarmist, or overly optimistic, or playing prophet. These are facts on the ground.

And yet, only David and one or two other people — out of many tens, even hundreds — are actually acting on what they learn. They're actually building — without permission, without being afraid — and yes, without knowing how to code, how MCPs work, or what GitHub and Vercel are. And that's just fine. They are moving, they're creating, they're playing like kids again. This is the essence of what AI does (for me, at least).

Yeah, scary data breaches and governance. Yeah, I'm not a coder. Who cares? Shoot. Don't talk. Eventually, you'll hit something — maybe even something big. And if you're not sure where to start — DM me. I'll happily share whatever I know.
The Gig Intelligence Report.
Three platforms. Three real projects. One I just signed.
Most newsletters in this space stop at "go apply to Mercor." That's a hyperlink, not intelligence. Here's what's actually live this week — what I just signed, and what you need to know before you sign anything.
I just signed onto a Mercor Generalist project at $140/hr. Generalist roles are designed for operators who can move across domains — strategy, ops, business judgment, written communication — and produce structured evaluations of frontier model output. Not data labeling. Not transcription. The highest-leverage non-specialist work on the platform. Mercor's median Generalist rate sits in the $40–$85/hr band. A $140/hr rate signals senior-operator pricing — what Mercor pays when the project requires McKinsey/Bain/BCG-caliber pattern matching, prior CHRO/CLO experience, and high-quality written output in the screening interview.
micro1 has two tracks. Skip the one you'll see promoted in lifestyle blogs (general AI training, $15–$30/hr). The track Leverage Brief readers want: domain expert AI training. Doctors, lawyers, financial analysts, conservation scientists, linguists — brought in specifically to train frontier AI systems in their areas of expertise. Engineering and STEM contracts run $70–$200+/hr. Onboarding is the Zara AI interview — 20 minutes, retakable up to three times, deliberately rigorous. Top 1% of applicants get certified. Once you're in, you're matched to projects via the dashboard. Bi-monthly payouts via Deel. And there's a sleeper-hit referral program: up to $3,000 per successful referral after they complete 10 paid hours, with the payout listed on every job before you refer.
Meridial (built by Invisible Technologies) positions differently from Mercor and micro1. Not an open marketplace — invitation-only matching after domain verification (legal, STEM, finance, linguistics, coding, safety). Project fees disclosed up-front before you accept. Operates inside Invisible's SOC 2, HIPAA, and GDPR perimeter, which matters more this month than it did last month. If you have a verifiable credential — JD, MD, PhD, CFA, licensed engineer — and you don't want to compete on an open marketplace post-Mercor breach, Meridial is the cleanest fit. Lower volume. Higher signal. Most invitations come within 48 hours of profile activation.
Portfolio move of the week: if you have an hour, complete the Mercor screening interview. If you have an afternoon, do all three — roughly 90 minutes of actual interview time invested. Expected value at the senior-operator tier: a $50–200/hr income node that compounds existing expertise instead of competing with it. This is Phase 4 of the Portfolio OS, applied literally — or, in this week's vocabulary, taking action.
The Q1 2026 Report.
34 pages. 15 headline numbers. Free.
The Q1 2026 AI Wage Gap Report is the most-cited resource in the field on the income divide. Live now at aiwagegap.com.
Plus function-level breakdowns for HR (+66% YoY), Consulting (+58%), Marketing (+50%), Finance (+40%), Legal (+34%), and Operations (+29%). The three-archetype framework. The full 5-phase Portfolio OS.
Portfolio Stack.
BookToCourse.AI — the Phase 4 to Phase 5 conversion engine.
Most executives stall at Phase 4 of the Portfolio OS because they ship one income stream and run out of conversion infrastructure. They have the book but never make the course. They have the course outline but never build the cohort. The IP exists. The packaging doesn't.

BookToCourse.AI is the bridge. Upload your manuscript or outline → the platform generates a structured course (modules, lessons, exercises, assessments, learning objectives) mapped to your content → you review, edit, publish → the output is platform-ready for Teachable, Thinkific, Kajabi, or Gumroad.

The Portfolio Engineering math: a $20 book validates expertise. A $297 course built from the same IP produces roughly 15× the unit economics with no new content creation. A $2,500 cohort built on top of the course produces another 8× on top of that. One piece of IP. Three layers. Each compounds the credibility of the one below it.
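The ladder math above is just ratios of the three price points named in the paragraph; a minimal sketch, using only those numbers, confirms where the "15×" and "8×" come from:

```python
# Unit-economics ladder from the paragraph above: one piece of IP, three layers.
book_price = 20       # the $20 book that validates expertise
course_price = 297    # the $297 course built from the same IP
cohort_price = 2_500  # the $2,500 cohort built on top of the course

course_multiple = course_price / book_price    # ~14.85, the "roughly 15x"
cohort_multiple = cohort_price / course_price  # ~8.4, the "another 8x"

print(f"course vs book:   {course_multiple:.1f}x")
print(f"cohort vs course: {cohort_multiple:.1f}x")
```

The multiples compound: the cohort sits at roughly 125× the book's price point, built from the same underlying IP.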
The Cohort Protocol — № 01.
Calendar re-architecture for the multiplier.
Most executives think of their calendar as a record of commitments. That's wrong. The calendar is the operating system. Whatever shows up there, repeated, becomes the architecture of the next twelve months. If your calendar still looks the way it did in 2022 — back-to-back internal meetings, status updates, defensive 1:1s — you are running 2022 software in a 2026 economy. The Wage Gap is the natural output. The re-architecture has three moves. All three are below in full.

Move 1: The 4-Block Week.
Replace the open calendar with four named blocks, repeated every week:

BUILD (12–15 hrs/wk). Where new IP, products, income streams, and client work get created. Heads-down, AI-leveraged, output-producing. The calendar shows it as one or two unbroken multi-hour blocks, ideally before noon. Notifications off. Nobody invited.

LEVERAGE (8–10 hrs/wk). Meetings that produce a multiplier — selling, partnerships, hiring, client delivery, strategic alignment with a direct report who can ship without you. The test: if the meeting were canceled, would something measurable get worse within 30 days? If not, it's not Leverage. It's coordination theater.

MAINTAIN (5–8 hrs/wk). Email, admin, internal status, recurring 1:1s that exist for relationship reasons but don't generate output. Cap it. Box it into one 90-minute window twice a day. Never let it bleed into BUILD.

RECOVERY (everything else). Real recovery, not "kind-of-working-on-the-couch." Family, sleep, exercise, learning, Shabbat. The block most executives lie to themselves about. Also the block that best predicts whether the other three compound or collapse.

The Diagnostic.
Before redesigning anything, measure. Pull last week's calendar. Color-code every block into the four categories. Most mid-career execs — including the ones I coach who think they're already AI-leveraged — come back with this distribution: BUILD 4–6 hours, LEVERAGE 6–8 hours, MAINTAIN 18–24 hours, RECOVERY fragmented and mostly fake. That's the Adaptor calendar. It produces an Adaptor income. The Multiplier calendar inverts MAINTAIN and BUILD — not by 10%, but by 3–4×. That single inversion is responsible for most of the 14.2× output multiplier McKinsey measured. The tools didn't change. The architecture did.

Move 2: The AI-Augmented BUILD Block.
Renaming a calendar block does nothing. The block has to produce output that wouldn't otherwise exist. Here's the protocol I run inside the cohort, in four steps:

1. Pre-load the block, the night before. The cost of a BUILD block is not the time. It's the activation energy required to start. Most BUILD blocks fail in the first 12 minutes — opened laptop, opened Slack, "quick" check, gone. The fix: at the end of the previous workday, write three lines in a plain text file: (a) what specifically I'm building tomorrow, (b) the first concrete artifact that needs to exist by 11am, (c) the AI prompt or workflow I'll start with. Three lines, two minutes, every night. The block now starts with momentum, not a blank page.

2. AI as the first collaborator, not the last. Most executives use AI as a polish layer — they write a draft, then ask Claude to "make it better." This is the lowest-leverage way to use AI ever invented. The Multiplier inverts it: the AI generates the first three versions; the executive selects, edits, and combines. You do the work the AI can't do (taste, judgment, strategic context) and skip the work it can (first drafts, structural scaffolding, iteration on tone). A 4-hour BUILD block under this protocol produces 3–5× the output of the same block without AI. Not because AI is faster — because the cognitive load shifts from "creating" to "selecting and shaping," which is genuinely less depleting and can be sustained longer.

3. One artifact, no exceptions. Every BUILD block ships one concrete artifact: a published draft, a working prototype, a sent proposal, a recorded video, a finished section of a deliverable. Not "I worked on the strategy doc." A finished, named, dated artifact. The reason isn't productivity theater — it's that artifacts are the only true measure of whether AI-augmented BUILD time is actually compounding. If three weeks of BUILD blocks produce three artifacts, you're a Multiplier. If they produce zero, you've just renamed your calendar.

4. End the block at the assigned time, even when you're rolling. Counterintuitive but critical. Stopping mid-flow at the boundary of the block is what makes the next block possible. Executives who let BUILD blocks "just keep going because the work is good" destroy the LEVERAGE and RECOVERY blocks downstream. The McKinsey 14.2× multiplier shows up in operators who run the protocol consistently for 90+ days, not in operators who have one heroic 14-hour Tuesday and then collapse for the rest of the week.

Move 3: The Maintain Compression Script.
MAINTAIN is the silent killer. It's the block that swallows the entire week if you let it. It looks like work. It feels like work. It produces almost no measurable output. The Maintain Compression Script is how you contain it.

The 90/90 rule. Two 90-minute MAINTAIN windows per day. One mid-morning (after BUILD), one mid-afternoon. All email, Slack, internal status updates, recurring check-ins, admin, and "quick favors" live inside those two windows. Outside the windows, those tools are closed. Not minimized — closed. The 90/90 split is enough to handle real-world correspondence without bleeding into BUILD or RECOVERY.

The 4D triage. Inside the MAINTAIN window, every incoming item gets one of four labels in under 10 seconds: Delete (you don't need it, it doesn't need a response; archive and move on), Delegate (someone else should handle this; forward with one sentence of context and unsubscribe yourself from the thread), Defer (it's a real task but not for this window; capture it in a single inbox-zero list with a date), Do (under 5 minutes; finish it now). Roughly 60–70% of typical executive inbox volume falls in Delete or Delegate. Most operators give all of it Do treatment, which is why MAINTAIN swallows the day.

The AI compression layer. Give Claude (or your AI of choice) context from your last 90 days of email, Slack, and meeting notes. Once it knows your voice, your stakeholders, your recurring contexts, your standard responses — you stop writing replies from scratch. You triage in 4D, and for everything in the Do bucket, you draft three sentences of intent and the AI produces the response in your voice. You review, edit if needed, send. A typical 90-minute MAINTAIN window under this protocol handles 2–3× the volume of the same window without AI compression. The block doesn't grow. The throughput grows.

The recurring meeting cull, monthly. Once a month, on the same day every month, audit every recurring meeting on your calendar. For each one, ask three questions: (a) what specific decision or output did this meeting produce in the last 30 days? (b) if I canceled it permanently, what would actually break? (c) is there an async substitute (Loom, written update, dashboard) that captures 80% of the value? Cancel anything that fails the test. Most executives run the cull once and recover 4–7 hours per week immediately. The discipline is doing it monthly, because the meeting load grows back if you don't.

The "no" template. The single most leveraged sentence in the Multiplier vocabulary: "I'd love to help, but I'm in a build cycle until [date]. Can we revisit then?" Honest. Specific. Defers without rejecting. Kills 80% of the inbound asks that would otherwise eat your week. If you can't say this, you don't have BUILD blocks — you have hopes.

Putting it together.
Move 1 redesigns the architecture. Move 2 makes the BUILD block actually produce. Move 3 keeps MAINTAIN from swallowing everything else. The three are designed as a system — running one without the others produces the most common failure mode I see in coaching: executives who block BUILD time on the calendar but never ship, because they didn't fix the MAINTAIN side, so BUILD blocks get hijacked by "urgent" asks within the first week.

Run the full system for 30 days. Track BUILD hours, artifacts shipped, and the ratio of MAINTAIN to BUILD. The numbers move fast. By week 3, most operators report they've recovered 8–12 hours per week of usable BUILD time, and the artifact count goes from near-zero to 3–5 finished pieces of work per month. By month 3, the McKinsey 14.2× multiplier starts to show up in actual output.
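The diagnostic in Move 1 (color-code last week, total the hours per block) can be sketched in a few lines of Python. Everything here is hypothetical scaffolding: the keyword-to-category rules and the example week are made up for illustration; in practice you'd export real events from your calendar and tune the keywords to your own meeting vocabulary.

```python
# A minimal sketch of the weekly calendar diagnostic described above.
# Keyword rules and the example week are hypothetical; replace with your
# own calendar export and your own vocabulary.
from collections import defaultdict

CATEGORIES = ("BUILD", "LEVERAGE", "MAINTAIN", "RECOVERY")

# Hypothetical auto-tagging rules, checked in order; untagged time
# defaults to RECOVERY.
KEYWORDS = {
    "BUILD": ("build", "draft", "prototype", "write"),
    "LEVERAGE": ("sales", "partner", "client", "hiring"),
    "MAINTAIN": ("status", "1:1", "email", "admin"),
}

def tag(title: str) -> str:
    """Assign a calendar event to one of the four blocks by title."""
    t = title.lower()
    for category, words in KEYWORDS.items():
        if any(w in t for w in words):
            return category
    return "RECOVERY"

def diagnose(events):
    """events: iterable of (title, hours). Returns total hours per block."""
    totals = defaultdict(float)
    for title, hours in events:
        totals[tag(title)] += hours
    return {c: totals[c] for c in CATEGORIES}

# Example week (hypothetical), shaped like the "Adaptor calendar" above:
# MAINTAIN dwarfs BUILD.
week = [("Status update", 6), ("1:1 with team", 8), ("Email/admin", 8),
        ("Client delivery", 7), ("Write course draft", 5)]
totals = diagnose(week)
print(totals)  # MAINTAIN dominates BUILD: the Adaptor distribution
```

The output makes the inversion concrete: if MAINTAIN comes back at 3–4× BUILD, you're looking at the Adaptor calendar the diagnostic describes, and Moves 2 and 3 are where the hours get reclaimed.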
Deeper reads.
From the PortLev library, related to this week's signal.
$105K Saved — the Ravenbridge case study · 12 min read. 87.5% cost reduction, 20× speed, three institutional fund websites in three days.
What is the AI Build Gap? · 16 min read. Why 78% of enterprise AI programs fail — and what the 14.2× organizations do differently.
What is Portfolio Engineering? · 15 min read. The full 5-phase Portfolio OS explained.
Custom AI app development for organizations · 14 min read. What custom AI really costs, the 6 apps we've shipped, and how to evaluate a builder.
AI due diligence tools for VC/PE: 2026 comparison · 13 min read. DueDrill vs PitchBook vs the field.
One move this week.
Pull last week's calendar. Color-code every block into BUILD / LEVERAGE / MAINTAIN / RECOVERY. Don't redesign anything yet — just measure. Then reply to this email with one number: how many BUILD hours did you have last week? I read every reply. Or, in the spirit of David: don't reply — build something. Then send me what you built. Either one moves you forward.
Custom AI builds — $5K–$50K, 2–3 week delivery. Portfolio Engineering inside your org.
Fractional CHRO / CLO — $15K/mo retainer.
Founding cohort (90 days) — the full protocol above, run live with 14 other senior operators. Apply →
Pre-order the book — Closing the AI Wage Gap. Pre-order →
© 2026 Portfolio Leverage Company. You're receiving this because you subscribed to The Leverage Brief. Reply to this email — I read every one. Unsubscribe in one click.