Well, the thing is, the only AI is really the Shep Associates, who operate as agent-native associates on the platform and can do what humans do. They analyze student submissions using the same rubric (Professors Desk) that student users use to grade each other's work.

Outside of that, the platform presents scenarios (fact-situation hypotheticals) to students, and the students respond. They break their work down into "chips", highlighting issues, rules, arguments, and cases, which are then parsed and added to our Neon database to keep track of all the arguments, rules, issues, and cases being applied to the fact hypotheticals (we call them scenarios); there's a rough sketch of a chip record at the end of this message. The purpose is to train students to think carefully through the law they are applying; the theory is that if you have to apply current law to novel facts, you have to think harder and more discretely. Their reasoning (chips, submissions) is also tagged in the data as their own (provenance), and we use it to track where their reasoning on hypotheticals converges with other students' reasoning. We are connecting dots.

The data is collected for a later application we'll call PASTURE, which will be like a smart search or LLM that law firms, regulators, and future lawyers can use to find arguments and cases for novel facts that happen in real life. The app also tracks users' drafts from start to finish through version tracking, so we collect the "reasoning" that happens over time to help train LLMs, which in the legal field are currently trained only on final work product. The website is live at sheplegal.com.

The scenario types fall under various domains; you can quickly query Neon to find what exists. Right now we have an app/University route where students can upload their syllabus: we parse it, create a workspace for them to track their reading assignments, match 5 scenarios from the case bank, and let them generate two custom scenarios via an LLM call based on their syllabus topics. It's for exam preparation. One project I want you to add to Linear in the University team: we need to be able to ingest any syllabus and create relevant scenarios, not be limited by the current domains we have for matching (see the matching sketch at the end of this message).

I currently have a $29/mo price with a 9-day trial, but no users yet. For the pilot I'm thinking of offering the SHEPALPHA1 promo code so a user can have a month free. I also want to create a referral system for users; need to plan that out at some point. We eventually want to do institutional licensing. NextGen is something we want to eventually plug in as a module or as part of the content; need to do ongoing research on how we can do that.

-----

- The chip taxonomy really is focused on issue, fact, rule, argument, and case. We have other chip types (evidence, etc.); feel free to look at Neon to see them all. We also have something called the Provenance Ecosystem, a gamification engine, and the POINTS team in Shep. The repo is airobal/provenance-economy or something like that; it needs to be updated, tbh, but it's a good reference for you.
- Professors Desk is a "hat" students wear. When they go to it, it shows a list of submissions by other users. They pick one and have a rubric pane on the right and the submission on the left. They then validate the chips and the work of the submission using a rubric. Evaluations are anonymous both ways to prevent gaming.
- The associates create submissions, grade submissions, and operate in the litigation flow. That is called the "judge route".
Basically, a user can choose to respond to a scenario as a "litigation", where they pick a side, and then another user (or Shep Associate) can respond for the other side. Once both are in, a third user (or associate) can write an opinion evaluating both sides. All submissions from the judge route (both briefs and the opinion) are saved to Shep as submissions all the same, from a data perspective; there's a sketch of that shape at the end too.

---

I don't currently know how we are going to fill out the reasoning chains, or the steps. Right now, users simply mark chips in their submissions and submit. The edges and advanced stuff are too burdensome on the user, so I figure we can have another layer clean up the data or connect the dots later (rough sketch at the end). Same with proposition tables.

3. I don't know how they were characterized as medium; I don't remember what system we used to set them up. What I do know is that we have lengths, right? Not sure. Add that as a problem to the POINTS team.
4. No users yet, so social doesn't go live right now.
5. I think NextGen uses general principles of law, right? So federal is probably fine even though it's state-administered; that just means each state administers it, not necessarily the content.
6. Great.
7. Great.

As to your connections: how do you think we can easily create chains from a user perspective? This seems hard.

PASTURE: I don't know; this is a problem for down the line. As technology gets better, I think an LLM can draw the lines. What do you think? Strategically, we don't have a PASTURE problem until we have a great product that students actually use right now, so PASTURE issues are pushed back a little. Not totally, but a little.
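Here are the sketches I promised. First, roughly what I picture a chip record looking like and how it lands in Neon. This is a minimal sketch; the table and column names are guesses, so check the actual schema in Neon before trusting any of it:

```typescript
// Minimal sketch of a chip record written to Neon (serverless Postgres).
// Table and column names are placeholders; the real schema lives in Neon.
import { neon } from "@neondatabase/serverless";

type ChipType = "issue" | "fact" | "rule" | "argument" | "case" | "evidence";

interface Chip {
  submissionId: string; // provenance: which submission the chip came from
  userId: string;       // provenance: who made it
  type: ChipType;
  text: string;         // the highlighted span from the submission
  scenarioId: string;   // the fact hypothetical it was applied to
}

const sql = neon(process.env.DATABASE_URL!);

async function saveChip(chip: Chip) {
  await sql`
    INSERT INTO chips (submission_id, user_id, chip_type, chip_text, scenario_id)
    VALUES (${chip.submissionId}, ${chip.userId}, ${chip.type}, ${chip.text}, ${chip.scenarioId})
  `;
}
```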
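Second, the domain-agnostic syllabus matching from the Linear item. One way to stop depending on our fixed domain list is to embed the syllabus topics and rank the whole case bank by similarity. Everything here is hypothetical, including the injected embed function:

```typescript
// Sketch of domain-agnostic matching: embed the syllabus topics and rank
// the case bank by cosine similarity instead of filtering by domain tags.
// The Scenario shape and the embed wrapper are assumptions.
interface Scenario {
  id: string;
  summary: string;
}

async function matchScenarios(
  syllabusTopics: string[],
  caseBank: Scenario[],
  embed: (text: string) => Promise<number[]>, // any embedding model wrapper
): Promise<Scenario[]> {
  const topicVec = await embed(syllabusTopics.join("; "));
  const scored = await Promise.all(
    caseBank.map(async (s) => ({
      scenario: s,
      score: cosine(topicVec, await embed(s.summary)),
    })),
  );
  return scored
    .sort((a, b) => b.score - a.score)
    .slice(0, 5) // keeps the current "match 5 scenarios" behavior
    .map((x) => x.scenario);
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

The two custom LLM-generated scenarios could then take the same topic list as prompt context, which is basically what the University route does today, just without the domain ceiling.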
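Third, the judge route from a data perspective: briefs and opinions are all just submissions with different metadata. Something like this (field names are placeholders, not the real schema):

```typescript
// Sketch of judge-route work products all living in one submissions table.
type SubmissionMeta =
  | { kind: "standard" }                                     // normal scenario response
  | { kind: "brief"; side: "A" | "B" }                       // litigation brief for one side
  | { kind: "opinion"; briefAId: string; briefBId: string }; // opinion evaluating both briefs

interface Submission {
  id: string;
  scenarioId: string;
  authorId: string;          // a human user or a Shep Associate
  authorIsAssociate: boolean;
  body: string;
  meta: SubmissionMeta;      // briefs and opinions are saved the same way
}
```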
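And last, the "connect the dots later" layer. What I have in mind is a batch job where an LLM proposes edges between chips on the same scenario, so users never have to build chains by hand. The prompt wording, data shapes, and callLlm wrapper are all placeholders, and a real version would validate the model output instead of trusting JSON.parse:

```typescript
// Sketch of a batch cleanup layer that proposes edges between chips on
// one scenario using an LLM, instead of asking users to draw edges.
interface ChipRow {
  id: string;
  type: string; // "issue" | "rule" | "argument" | "case" | ...
  text: string;
}

interface Edge {
  fromChipId: string;
  toChipId: string;
  relation: string; // e.g. "rule-supports-argument"
}

async function proposeEdges(
  chips: ChipRow[],
  callLlm: (prompt: string) => Promise<string>, // any chat-completion wrapper
): Promise<Edge[]> {
  const prompt = [
    "These chips were extracted from submissions on one scenario.",
    "Propose directed edges linking issues -> rules -> arguments -> cases",
    "where the chip texts actually support the connection.",
    'Answer only with a JSON array of {"fromChipId","toChipId","relation"}.',
    JSON.stringify(chips),
  ].join("\n");
  // Naive parse for the sketch; validate before writing edges anywhere.
  return JSON.parse(await callLlm(prompt)) as Edge[];
}
```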