Our research
Groundbreaking research on legal LLMs
We know what we're doing, and we can back it up. Read about how our leading engineers are contributing to the field.
LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models
LegalBench: your essential tool for exploring how large language models tackle legal reasoning. Developed collaboratively with legal experts, it offers 162 tasks spanning six types of legal reasoning, and the accompanying paper uses it to evaluate 20 open-source and commercial LLMs. By bridging the gap between legal professionals and AI developers, LegalBench drives innovation at the intersection of law and artificial intelligence.
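For readers who want to try the benchmark themselves, here is a minimal sketch of loading a single LegalBench task with the Hugging Face datasets library, assuming the benchmark is published on the Hub under nguha/legalbench and that abercrombie is one of its per-task configurations:

# Minimal sketch: load one LegalBench task for evaluation.
# Assumes the benchmark lives on the Hugging Face Hub as
# "nguha/legalbench" with one configuration per task (e.g. "abercrombie").
from datasets import load_dataset

# Recent versions of datasets may require trust_remote_code=True
# because the dataset repository ships a loading script.
task = load_dataset("nguha/legalbench", "abercrombie", trust_remote_code=True)
print(task["test"][0])  # one example: input text plus its gold label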
Chain of Reference prompting helps an LLM think like a lawyer
Discover Chain of Reference (CoR), a prompting technique for legal tasks. By prefixing questions with an established legal analysis framework such as IRREAC, CoR breaks a complex question into a sequence of simpler reasoning steps. Our research shows that CoR improves the zero-shot performance of large language models such as GPT-3 by up to 12%.
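As an illustration, a CoR-style prompt can be assembled by prepending the framework's steps to the question. The helper below is a hypothetical sketch: the step wording and the expansion of IRREAC are illustrative assumptions, not the exact prompt from our paper.

# Hypothetical Chain-of-Reference prompt builder; the step wording and
# the expansion of IRREAC below are illustrative assumptions, not the
# exact prompt used in the paper.
IRREAC_STEPS = [
    "Issue: state the legal question raised by the facts.",
    "Rule: identify the governing rule or statute.",
    "Rule Explanation: explain how courts have applied the rule.",
    "Application: apply the rule to the facts at hand.",
    "Conclusion: answer the question.",
]

def build_cor_prompt(question: str) -> str:
    """Prefix the question with the IRREAC framework so the model
    works through each reasoning step in order."""
    steps = "\n".join(IRREAC_STEPS)
    return (
        "Answer the legal question below by reasoning through the "
        f"IRREAC framework, one step at a time:\n{steps}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_cor_prompt("Is an oral agreement to sell land enforceable?"))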