Market Strategy Opinion
4-15-2026
Author: ChatGPT Extended Pro

I could inspect the public surface but not reliably use the authenticated product in this environment. Based on the public site and your description, SHEP's strongest category is not "AI tutor." The public site frames it as legal reasoning infrastructure for law students, emphasizes a portfolio, and describes structured reasoning visualization.

That is the correct category because the school market is filling with subsidized AI access: Reuters reported on April 7, 2026 that Harvey and Legora are giving students free access through multiple law schools, while Lexis, Westlaw, Clio, and Spellbook are also expanding school channels. Reuters has also reported that more than half of law schools offer AI classes, that at least eight law schools had first-year AI training requirements by late 2025, and that California is considering mandatory AI instruction for some schools. ([Shep Legal][1])

That means the product's value is not the model output. The value is the structure around the output: syllabus ingestion, course-specific drills, longitudinal reasoning data, writing-feedback loops, and a portfolio that makes student thinking visible.

Market comps suggest that value band is already real. Quimbee's study aids start at $23/month on an annual plan. Write.law sells student plans at $15, $29, and $49 per month, its Ace.law study assistant at $15/month, and even a single writing review at $499. BARBRI's 2026 bar review offerings are listed at $1,999, $4,199, and $5,999. A product that combines study planning, targeted practice, answer feedback, and portfolio creation is materially underpriced at $9 if it works as described. ([Quimbee][2])

If the goal is maximum users, the correct wedge is 1L, not "all law students." The ABA says fall 2025 JD enrollment was 120,039, with 42,817 first-year students across 196 ABA-approved schools.
1Ls have the sharpest pain, the least stable study habits, and the longest lifetime value. I would run a freemium model: free syllabus import, one course map, limited drills, limited written feedback, and one collaborative or competitive exercise per week. Premium would be $19/month. I would not lead at $29 for broad top-of-funnel acquisition, and I would not sit at $9 unless the product were intentionally stripped down. Reuters reported in February 2026 that new federal loan caps taking effect July 1, 2026 are expected to make students more price-sensitive, which argues for low-friction entry and strong perceived ROI. ([American Bar Association][3])

If the goal is maximum money per user, do not squeeze the student first. Sell the institution, then upsell serious students. The highest-value buyer is the school or professor who needs AI training, accountable workflows, and visible reasoning. The student tier should be simple: Core at $19/month and Pro at $29/month, with Pro including unlimited writing feedback, reasoning analytics, an exportable portfolio, and finals mode. The real margin expansion sits above that: faculty tools, cohort analytics, rubric tuning, assignment creation, audit trails, and school dashboards. I would price school pilots around $15k-$30k and full-school deployments around $40k-$60k depending on seats and support. The market direction supports that move because schools are increasingly expected to teach AI responsibly while larger vendors are already normalizing school-level distribution. ([Reuters][4])

The product priorities are straightforward. First, make syllabus upload the activation event and have it produce an immediately useful semester map. Second, turn every practice answer into a visible scorecard for issue spotting, rule recall, analysis quality, and writing clarity. Third, make the portfolio cumulative across semesters so the student does not disengage and churn after finals.
Fourth, keep the social layer tightly tied to academic artifacts: brief battles, collaborative practice exams, school leaderboards, journal or mock-trial cohorts. Generic DMs are not the story. Competitive and collaborative work product is.

The most profitable model is B2B2C. Your current economics imply the marginal cost of another user is very low relative to price, so the constraint is distribution, not compute. That means the best strategy is cheap or free student entry to build adoption, faculty embedding to lower CAC, and institutional contracts for durable revenue. Student-only SaaS in legal education is seasonal and churn-prone. Curriculum infrastructure with persistent student data is not.

For expansion, I would not start by fighting a full bar-prep war. I would launch a narrow 3L and post-finals "exam sprint" or "NextGen readiness" module instead. NCBE says the NextGen UBE will first be administered in a limited number of jurisdictions in July 2026. That creates a timely opening for skills-based practice without forcing SHEP into a direct head-on battle with incumbents that already spend heavily on bar-prep distribution. ([NCBE][5])

The rough math is good enough to matter. Using the ABA's 120,039 JD-student count, 10% penetration at $19/month for a 9-month school year is about $2.05 million in annual student revenue. At 20%, it is about $4.11 million. Ten percent of 1Ls alone is about $0.73 million. That is before faculty and school contracts.

So the business is real. The main strategic error would be positioning SHEP as another chatbot instead of the system that makes legal reasoning visible, trainable, social, and institutionally legible.
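The rough math above can be reproduced with a short sketch. The enrollment counts, $19 price, and 9-month school year come from the figures already cited; the function name and the specific penetration rates passed to it are illustrative, not a forecast.

```python
# Back-of-envelope revenue model using the memo's cited figures.
# Enrollment counts are ABA fall 2025 numbers quoted above;
# penetration rates are illustrative assumptions.

JD_STUDENTS = 120_039   # total JD enrollment, fall 2025 (ABA)
FIRST_YEARS = 42_817    # 1L enrollment, fall 2025 (ABA)
PRICE_PER_MONTH = 19    # premium tier, USD
MONTHS_PER_YEAR = 9     # academic (not calendar) year

def annual_revenue(population: int, penetration: float) -> float:
    """Annual student revenue at a given penetration rate."""
    return population * penetration * PRICE_PER_MONTH * MONTHS_PER_YEAR

print(f"10% of all JDs: ${annual_revenue(JD_STUDENTS, 0.10):,.0f}")  # ~$2.05M
print(f"20% of all JDs: ${annual_revenue(JD_STUDENTS, 0.20):,.0f}")  # ~$4.11M
print(f"10% of 1Ls:     ${annual_revenue(FIRST_YEARS, 0.10):,.0f}")  # ~$0.73M
```

The same function can price any segment-penetration scenario, which makes it easy to sanity-check the 1L-wedge argument against the all-students case.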
([American Bar Association][3])

[1]: https://sheplegal.com/
[2]: https://www.quimbee.com/pricing/study-aids
[3]: https://www.americanbar.org/news/abanews/aba-news-archives/2025/12/council-of-legal-ed-law-school-data/
[4]: https://www.reuters.com/legal/legalindustry/ai-training-becomes-mandatory-more-us-law-schools-2025-09-22/
[5]: https://www.ncbex.org/