Prototype View: ⚠️ Prototype – Not an official UC platform
Competition Overview
University of California Systemwide Entrepreneurship Challenge · Applications Open
⚠️ This is a prototype platform prepared for concept presentation. Competition terms, legal review, privacy policy, SAFE documentation, and eligibility enforcement require institutional approval. Not an official UC offering.
📣 Applications Now Open – Deadline: [Admin Configurable]
The UC Venture Challenge AI
A proposed systemwide entrepreneurship competition open to all UC students, faculty, and staff. Pitch your idea. Win funding. Build the future.
20
Winners Selected
$125K
Per Winning Team
10
UC Campuses Eligible
3 mo
Submission Window
Competition Timeline
Key dates – all dates configurable by program administrators
✓
Applications Open
✓
Submission Window
3
AI Evaluation
4
50 Semifinalists
5
Judge Review
6
20 Winners Announced
🏛️
Open to All UC Affiliates
Students, faculty, and staff across all University of California campuses and programs are eligible to apply individually or as a team.
🤖
AI-Assisted Evaluation
Submissions are analyzed across three dimensions: Hardcore VC Metrics, Subconscious Signals, and Resonance. AI identifies 50 semifinalists for human review.
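The scoring-and-ranking flow this card describes can be sketched as follows. The 50/30/20 weights and the top-50 cutoff come from this page; every function and field name below is illustrative, not the platform's actual API.

```javascript
// Illustrative sketch of the AI-assisted ranking step.
// Weights mirror the page copy: VC Metrics 50%, Signals 30%, Resonance 20%.
const WEIGHTS = { vc: 0.5, signals: 0.3, resonance: 0.2 };

function overallScore({ vc, signals, resonance }) {
  // Weighted average of the three dimension scores, rounded to an integer.
  return Math.round(vc * WEIGHTS.vc + signals * WEIGHTS.signals + resonance * WEIGHTS.resonance);
}

function pickSemifinalists(submissions, limit = 50) {
  // Rank by overall score, descending, and take the top `limit`.
  // The output is advisory only; human judges make final selections.
  return [...submissions]
    .map((s) => ({ ...s, overall: overallScore(s.scores) }))
    .sort((a, b) => b.overall - a.overall)
    .slice(0, limit);
}
```

With the demo scores shown later in this document (e.g. 74/71/82 for the AgriSense AI example), this weighting reproduces the published overall score of 75.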
⚖️
Expert Human Judges
A panel of Silicon Valley investors and entrepreneurs makes the final selection of 20 winners from the AI-identified semifinalist pool.
💰
$125,000 Per Winner
Each of the 20 winning teams receives $125,000 in funding structured as a 7% post-money SAFE note. Full terms subject to legal review and institutional approval.
📝
Structured Feedback
Every applicant – including non-winners – receives an AI-generated evaluation with strengths, gaps, and recommendations for improvement.
🎯
Video + Text Pitches
Submit a written application and optionally a video pitch of up to 5 minutes. Audio is transcribed and analyzed alongside your written materials.
Frequently Asked Questions
Program details subject to final institutional approval
Who can apply?
The competition is proposed to be open to all current UC students, faculty, and staff across the UC system. Eligibility criteria and verification processes are subject to final institutional approval.
What does the funding look like?
Winners are proposed to receive $125,000 each, structured using a 7% post-money SAFE model. This is a program descriptor – specific legal terms, documentation, and funding structures require formal review and approval.
How does AI evaluation work?
AI analyzes all submissions across three dimensions (VC Metrics, Subconscious Signals, Resonance) and generates a ranked list of recommended semifinalists. This is a decision-support tool – human judges make all final determinations.
Can I submit as an individual or must I have a team?
Individual and team applications are accepted. Team composition details are captured in the application. Minimum and maximum team sizes are admin-configurable.
⚠️ Prototype – No data is stored or transmitted. This form is for demonstration purposes only.
1. Account
2. Project Info
3. Team
4. Upload & Submit
Project Information
Provide details about your venture concept
This will appear in your summary card. Be specific and compelling.
Optional Details
Not required, but strengthen your application
UC Affiliation
Required for eligibility verification
Must be an active UC institutional email for verification
File Uploads
Upload supporting materials. All uploads are optional unless starred.
📄
Drop your deck here or click to browse
PDF, PPTX · Max 25MB
🎬
Upload a video pitch (under 5 minutes)
MP4, MOV, WEBM · Max 500MB · Duration checked on upload
⚠️ Duration validation is attempted on upload. If the video length cannot be verified automatically, the submission will be flagged for admin review.
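One common browser-side way to attempt this check is to load the file's metadata into a detached video element, falling back to an admin-review flag when the duration cannot be read. This is a hypothetical sketch, not necessarily the prototype's implementation; all names are illustrative.

```javascript
const MAX_PITCH_SECONDS = 5 * 60; // 5-minute limit from the upload copy

// Resolve with the duration in seconds, or null if metadata can't be read
// (corrupt file, unsupported codec, or timeout).
function probeVideoDuration(file, timeoutMs = 5000) {
  return new Promise((resolve) => {
    const url = URL.createObjectURL(file);
    const video = document.createElement("video");
    const done = (value) => {
      URL.revokeObjectURL(url);
      resolve(value);
    };
    const timer = setTimeout(() => done(null), timeoutMs);
    video.preload = "metadata";
    video.onloadedmetadata = () => {
      clearTimeout(timer);
      done(Number.isFinite(video.duration) ? video.duration : null);
    };
    video.onerror = () => {
      clearTimeout(timer);
      done(null);
    };
    video.src = url;
  });
}

// Unverifiable durations are flagged for admin review rather than rejected,
// matching the behavior described above.
function classifyPitchVideo(durationSeconds) {
  if (durationSeconds === null) return "flag-for-admin-review";
  return durationSeconds <= MAX_PITCH_SECONDS ? "accepted" : "too-long";
}
```

The flag-not-reject fallback keeps applicants from being blocked by codec quirks while still enforcing the limit when the duration is readable.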
📎
Research papers, financials, letters of support…
PDF, DOCX, TXT · Max 10MB each
🤖
Your submission will be analyzed by Claude AI across three evaluation dimensions. You will receive structured feedback regardless of outcome. AI scores are decision support only – human judges make all final determinations.
My Application – AgriSense AI
UC Davis · Student (Graduate) · Submitted Jan 14, 2025
✓ Submitted
🤖 AI Evaluation Complete – Your submission has been analyzed. Results visible below. This is a decision-support output; human judges make all final determinations.
VC Metrics
74
Weight: 50%
Signals
71
Weight: 30%
Resonance
82
Weight: 20%
75
Overall Score
81%
AI Confidence
✓ Top 50 Candidate
Strengths Identified
✓
Strong problem clarity – smallholder agriculture is a large, underserved global market with documented disease-loss data.
✓
Compelling resonance: "crop disease detection AI" is immediately legible with strong emotional pull for food security.
✓
Founder credibility is supported by UC Davis affiliation and cited academic research on plant pathology.
Areas for Development
⏳
The business model is not fully articulated. The revenue path (SaaS vs. per-scan vs. partnership) needs clarity.
⏳
The go-to-market strategy for reaching smallholder farmers in target geographies is underdeveloped.
✗
Missing: competitive differentiation vs. existing plant AI tools (e.g., PlantVillage). Address this directly.
⚠️ Admin View – Prototype Mode · All data is simulated for demonstration · Not connected to live systems
Total Applications
3
Demo seed data
AI Evaluated
3
✓ 100% complete
Top 50 Candidates
2
AI recommended
Award Pool
$2.5M
20 × $125,000
Submissions Overview
Click any row to open the AI evaluation report
#
Project
Campus
Category
VC Score
Signals
Resonance
Overall
Confidence
AI Recommendation
Status
2
NeuralPath – Drug target identification AI
UCSF
Faculty
86
79
68
80
88%
✓ Top 50 Candidate
Submitted
1
AgriSense AI – AI crop disease detection
UC Davis
Student
74
71
82
75
81%
✓ Top 50 Candidate
Submitted
3
CampusFlow – Campus space optimization AI
UC Berkeley
Staff
61
68
72
65
74%
⚠ Promising / Underdeveloped
Submitted
Campus Distribution
UCSF
1
UC Davis
1
UC Berkeley
1
Applicant Type
Student
1
Faculty
1
Staff
1
Score Distribution
80β100 (Strong)
1
70β79 (Good)
1
60β69 (Developing)
1
🤖 AI Recommendation – Human Review Required. The ranking below reflects AI-generated scores and a suggested semifinalist list. This is a decision-support output. Human administrators and judges determine all final outcomes. AI recommendations are not final decisions.
1
NeuralPath · ✓ Top 50 Candidate
Faculty Research · UCSF · Drug target identification using protein structure AI
80
Overall
88%
Confidence
VC Metrics (50%)
86
Signals (30%)
79
Resonance (20%)
68
2
AgriSense AI · ✓ Top 50 Candidate
Student Team Β· UC Davis Β· AI-powered crop disease detection for smallholder farmers
75
Overall
81%
Confidence
VC Metrics (50%)
74
Signals (30%)
71
Resonance (20%)
82
3
CampusFlow · ⚠ Promising / Underdeveloped
Staff Β· UC Berkeley Β· AI-driven campus space utilization optimization
65
Overall
74%
Confidence
VC Metrics (50%)
61
Signals (30%)
68
Resonance (20%)
72
⚖️ Judge Panel View – Jagdeep Singh Bachher · You are reviewing 2 AI-recommended semifinalists. Score independently. Human scores are combined with AI scores in the final panel summary. All selections are final judge determinations.
NeuralPath · Semifinalist
Faculty Β· UCSF Β· Drug target identification AI
80
AI Overall
🤖
Judge Memo: Strong scientific foundation in protein structure AI with clear IP potential. The Resonance score is lower – pitch clarity for a non-technical audience needs work. Recommend probing the team's commercialization experience and go-to-market clarity.
Your Scoring
Innovation
1
2
3
4
5
Feasibility
1
2
3
4
5
Market Impact
1
2
3
4
5
Team Confidence
1
2
3
4
5
AgriSense AI · Semifinalist
Student Β· UC Davis Β· Crop disease detection AI
75
AI Overall
🤖
Judge Memo: Excellent resonance – the mission is clear and emotionally compelling. The student team shows genuine domain knowledge. Key question for judges: is the business model sufficiently articulated, and can a student team execute at scale?
Your Scoring
Innovation
1
2
3
4
5
Feasibility
1
2
3
4
5
Market Impact
1
2
3
4
5
Team Confidence
1
2
3
4
5
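The panel summary that follows combines these rubric ratings with the AI score. One plausible way to put a judge's four 1–5 ratings on the same 0–100 scale as the AI overall is a linear mapping (1 maps to 0, 5 maps to 100) over their mean. This scaling is an assumption, since the prototype does not specify the actual formula, and all names are illustrative.

```javascript
// Hypothetical conversion of four 1-5 rubric ratings to a 0-100 score.
// The linear scaling is an assumption; the real panel formula is unspecified.
function judgeScoreTo100({ innovation, feasibility, marketImpact, teamConfidence }) {
  const ratings = [innovation, feasibility, marketImpact, teamConfidence];
  const mean = ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
  return Math.round(((mean - 1) / 4) * 100);
}
```

Under this mapping, a judge giving straight 3s lands at 50, and a 5/4/4/4 card lands at 81, i.e. in the same range as the demo judge averages shown below.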
AI vs. Human Score Comparison – Panel View
AI scores are decision support only. Divergence between AI and human scores is flagged for panel discussion.
ℹ️ Legend: ● Blue = AI Score | ● Gold = Judge Average | Large delta (>15 pts) triggers a panel discussion flag.
NeuralPath – Delta: 3 pts · ✓ Agreement
AI
80
Human
77
AgriSense AI – Delta: 5 pts · ✓ Agreement
AI
75
Human
80
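The agreement/flag rule stated in the legend (a gap above 15 points triggers panel discussion) reduces to a small comparison; identifiers here are illustrative.

```javascript
const DISCUSSION_THRESHOLD = 15; // from the legend: delta > 15 pts flags discussion

// Compare the AI overall score with the judges' average and decide whether
// the pair needs a panel discussion flag.
function compareScores(aiScore, judgeAverage) {
  const delta = Math.abs(aiScore - judgeAverage);
  return {
    delta,
    status: delta > DISCUSSION_THRESHOLD ? "flag-for-panel-discussion" : "agreement",
  };
}
```

Both demo pairs (80 vs. 77 and 75 vs. 80) land well under the threshold, hence the Agreement labels.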
⚠️
Final Selection – Admin Decision Page. This page supports selection of 20 winners from the 50 semifinalists. All selections represent human judgment. The $125,000 award and SAFE note terms are subject to final legal review and institutional approval prior to communication to applicants.
Semifinalists
50
Target (2 in demo)
Winners Target
20
$125K each · SAFE
Confirmed Selected
0
Pending selections
Remaining Slots
20
Admin configurable
Selection Dashboard
Sorted by combined AI + Judge ranking · Campus diversity view enabled
Select
Rank
Project
Campus
Type
AI Score
Judge Score
Consensus
Action
☐
1
NeuralPath – Drug target AI
UCSF
Faculty
80
77
✓ Strong Agreement
☐
2
AgriSense AI – Crop disease detection
UC Davis
Student
75
80
✓ Judge Favored
3
CampusFlow – Space optimization
UC Berkeley
Staff
65
–
○ Not Yet Reviewed
📋
Once 20 winners are confirmed, the system can generate: (1) Winner notification drafts, (2) Waitlist management, (3) Anonymized results summary for public release, and (4) Applicant feedback packages. All communications require admin review before sending. SAFE documentation is external to this system and requires legal review.
Click any row in Admin Dashboard to open report
🤖 AI-Generated Evaluation Report · This output is inference-based and cites evidence from submitted materials. It is decision support – not a final determination. Human judges determine all final outcomes.
NeuralPath
Faculty Research Commercialization · University of California, San Francisco
VC Metrics
86
Wt: 50%
Signals
79
Wt: 30%
Resonance
68
Wt: 20%
80
Investability Score
88%
AI Confidence
✓ Top 50 Candidate
Executive Summary
NeuralPath proposes a drug target identification platform leveraging transformer-based models applied to protein structure prediction data. The UCSF-affiliated research team demonstrates strong domain credibility, with the submission citing published lab findings on protein folding accuracy.
The core technical differentiation claim β applying structure-aware AI to identify novel binding sites β is plausible given recent advances in the field (AlphaFold derivatives). However, the submission does not fully distinguish NeuralPath from existing commercial tools, which is a notable gap.
ℹ️
AI Inference Note: The following analysis draws on submitted materials. Where evidence is inferred rather than stated, it is flagged. The AI has not independently verified scientific claims.
Dimension Breakdown β VC Metrics (Score: 86)
Scored against rigorous venture and accelerator criteria · Weight: 50% of overall
Problem Clarity
90
Market Size
85
Team Credibility
88
Differentiation
72
Business Model
74
Scalability
88
Go-to-Market
68
Defensibility
84
Capital Efficiency
82
Technical Merit
92
✓ Strengths
✓
Scientific foundation is exceptional. Published UCSF research cited demonstrates credibility beyond student-level conceptual work.
✓
Market is vast. Global drug discovery market estimated at $69B+ with structural demand for faster target identification.
✓
Technical differentiation using protein structure data is a plausible wedge with defensibility if patents filed.
✓
Faculty team composition suggests execution maturity versus purely student submissions.
⏳ Weaknesses
⏳
Resonance is the lowest-scoring dimension (68). The pitch is technically strong but difficult to articulate simply – "AI for drug targets" needs a cleaner hook for non-technical judges.
⏳
Go-to-market is underdeveloped. Who is the first paying pharma partner? Is this tool sold, licensed, or used in a CRO model?
⏳
Differentiation from AlphaFold-adjacent commercial tools (Schrödinger, Relay Therapeutics) is asserted but not fully evidenced.
✗ Red Flags & Missing Info
✗
Missing: IP status. Is there a patent filed or pending, or is the IP owned by UCSF? This is critical for venture structure.
✗
Missing: Customer validation. No mention of pharma conversations, LOIs, or pilot interest. Science is strong; market pull is unconfirmed.
⏳
Inference: Funding history is absent. The AI infers this is pre-revenue; grant funding status is unknown and should be confirmed.
AI Recommendations
Immediate
→ Develop a one-sentence non-technical pitch that conveys the core value proposition clearly.
→ Clarify UCSF IP ownership and commercialization rights before the pitch.
→ Name one pharma company this platform could help and explain why they'd pay for it.
Strategic
→ Map the competitive landscape explicitly. Show where NeuralPath outperforms or is adjacent to existing tools.
→ Define the business model: SaaS API, per-project licensing, or CRO partnership.
→ Initiate customer discovery conversations with 2–3 biotech companies before final judge presentations.
Investor Readiness
→ This team is close to Series Seed-ready with the right packaging and IP clarity.
→ Consider the UCSF QB3 accelerator or SPARK program for pre-commercialization support.
→ With traction evidence, this submission would score 88–92 overall in the next evaluation cycle.
Evidence Snippets
Quoted or paraphrased directly from submission materials · AI inference noted where applicable
From: Written Submission · Problem Statement
"Current drug target identification relies on expensive, time-consuming wet lab screening processes. Our lab's research on structure-aware attention models suggests a 40% reduction in false-positive hit rates is achievable."
AI Interpretation: Claim of 40% improvement is cited from lab research. Reproducibility not confirmed by AI. Recommend asking judges to probe this in Q&A.
From: Team Description · Inferred
Team lists three UCSF faculty researchers with PhD-level credentials in computational biology and machine learning. No business, commercialization, or go-to-market professional listed on team.
AI Flag: Team completeness score reduced due to absence of commercial or market-facing expertise. This is a moderate risk factor.
⚙️
API keys are stored in your browser only (localStorage). They are never sent to any third party – only directly to Anthropic and OpenAI servers. Use the PHP proxy option for a hosted deployment where you don't want keys in the browser at all.
Anthropic – Not set
OpenAI – Not set (optional)
Keys stored locally in your browser
🤖 Anthropic API – Claude Analysis
Required for all scoring, report generation, and feedback
Use this if hosting on Hostinger – keys stay server-side, never in the browser
If set, all API calls route through your PHP proxy. Upload proxy.php to the same folder as index.html and enter your keys inside it.
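The routing behavior can be sketched as a small request builder: with a proxy URL configured, the request carries no key (proxy.php attaches it server-side); otherwise the key kept in localStorage is sent directly to Anthropic. The endpoint and the `x-api-key`/`anthropic-version` headers are Anthropic's documented defaults; the helper itself and the settings shape are illustrative.

```javascript
// Illustrative request builder for the two deployment modes described above.
function buildClaudeRequest({ proxyUrl = null, apiKey = null } = {}) {
  if (proxyUrl) {
    // Hosted mode: the browser never sees the key; proxy.php injects it.
    return { url: proxyUrl, headers: { "Content-Type": "application/json" } };
  }
  // Browser mode: the key (kept only in localStorage) goes straight to Anthropic.
  return {
    url: "https://api.anthropic.com/v1/messages",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01",
    },
  };
}
```

Keeping the branch in one place means the rest of the app calls one helper and never needs to know which deployment mode is active.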
Competition Settings
All values configurable – these do not constitute legal terms
⚠️ Config descriptor only – not a legal term. Requires separate institutional legal review.
AI Scoring Weights
Must total 100%
Total: 100% ✓
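The must-total-100% constraint on the scoring weights is a simple sum check; a minimal sketch with illustrative field names:

```javascript
// Minimal validation for the admin scoring-weight settings: the three
// dimension weights (in percent) must sum to exactly 100.
function validateWeights({ vcMetrics, signals, resonance }) {
  const total = vcMetrics + signals + resonance;
  return { total, valid: total === 100 };
}
```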
💬 Applicant Feedback Generator · AI-generated feedback for all applicants, including non-winners. Tone is constructive and respectful. Feedback is generated from evaluation data and reviewed before release.
CampusFlow – Applicant Feedback Package
Staff · UC Berkeley · AI Overall Score: 65 · Status: Promising / Underdeveloped
Not Selected – Current Round
✓ What We Found Compelling
✓
Your framing of the campus space utilization problem is clear and relatable – the pain point of underutilized classroom and lab space is real and well documented.
✓
The Resonance score of 72 indicates your concept is accessible and clearly communicated. Non-technical reviewers understood it quickly.
✓
Your staff perspective brings valuable operational insight that student teams often lack – this is a genuine differentiator worth amplifying.
⏳ Key Areas to Strengthen
⏳
The market opportunity section needs expansion. While the UC system is a compelling initial customer, evaluators noted the addressable market beyond UC was not defined.
⏳
Business model clarity was the most significant gap. How would CampusFlow be priced and licensed? What is the revenue model beyond the UC pilot?
⏳
The competitive landscape was not addressed. Similar solutions exist in the higher-ed facility management space – your differentiation needs explicit articulation.
Recommended Next Steps
STRENGTHEN THE SUBMISSION
Research existing higher-ed space optimization tools and explicitly position CampusFlow against them. Define your go-to-market outside the UC system – which university systems would adopt this next?
BUILD TRACTION
A pilot with one UC department – even an informal one – would dramatically strengthen your next submission. Documented outcome data (room utilization improvement, cost savings) would move your score significantly.
REAPPLY OR SEEK OTHER PATHS
This concept has real merit. We encourage you to continue developing it. Consider UC Innovation, SBIR grants, or campus innovation programs as parallel paths. Your operational insights are valuable – keep going.
Running AI Analysis…
This takes 15–45 seconds depending on submission length