# 🚀 Quickstart
Welcome to OpenGenomeLLM — the open-source platform for genomic AI. This guide will get you up and running in under 5 minutes.
New: OpenGenomeLLM-70B achieves 78.9% on the GeneTuring benchmark, surpassing GPT-4 on genomic tasks. See models →
- **Installation**: Install the Python SDK and get set up in minutes.
- **REST API**: Use the HTTP API from any language or tool.
- **VCF Interpretation**: Analyze VCF files and generate clinical reports.
- **Fine-tuning**: Adapt models to your institution's data and needs.
## Installation

Install the OpenGenomeLLM Python SDK using pip:

```bash
pip install opengenomellm

# Or install from source for the latest features
git clone https://github.com/deepcog-ai/opengenomellm
cd opengenomellm && pip install -e .
```
## Python SDK — Basic Usage

Load and run the OpenGenomeLLM-70B model for variant interpretation:

```python
from opengenomellm import GenomicLLM

# Initialize the model
model = GenomicLLM("deepcog-ai/OpenGenomeLLM-70B")

# Interpret a variant
result = model.interpret_variant(
    gene="BRCA1",
    variant="c.5266dupC",
    population="South Asian"
)

print(result.pathogenicity)    # "Pathogenic"
print(result.acmg_class)       # "Class V"
print(result.clinical_report)  # Full narrative report
```
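The `acmg_class` strings above follow the standard five-tier ACMG/AMP classification scheme (Class I through Class V). If you need the human-readable label alongside the class, a small lookup works; the helper below is a hypothetical convenience, not part of the SDK:

```python
# Hypothetical helper (not part of the SDK): map the five-tier
# ACMG/AMP class string to its standard classification label.
ACMG_LABELS = {
    "Class I": "Benign",
    "Class II": "Likely benign",
    "Class III": "Uncertain significance",
    "Class IV": "Likely pathogenic",
    "Class V": "Pathogenic",
}

def acmg_label(acmg_class: str) -> str:
    """Return the standard ACMG/AMP label for a class string."""
    return ACMG_LABELS.get(acmg_class, "Unknown")

print(acmg_label("Class V"))  # Pathogenic
```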
## VCF File Interpretation

Process an entire VCF file and generate a comprehensive clinical report:

```python
from opengenomellm import GenomicLLM, VCFProcessor

model = GenomicLLM("deepcog-ai/OpenGenomeLLM-70B")
processor = VCFProcessor()

# Load and process the VCF
variants = processor.load("patient_001.vcf")
report = model.generate_clinical_report(
    variants=variants,
    patient_info={"age": 45, "sex": "F", "ethnicity": "South Asian"},
    format="pdf"
)
report.save("clinical_report_001.pdf")
```
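For context on what a loader like `VCFProcessor.load` works with: VCF 4.x data lines are tab-separated, with `#`-prefixed header lines preceding them. The sketch below is a standalone illustration of extracting the fixed columns from such lines, not the SDK's actual parser:

```python
# Standalone sketch (not the SDK's parser): pull the fixed columns
# from the data lines of a VCF 4.x file, skipping '#' header lines.
def parse_vcf_lines(lines):
    variants = []
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue  # skip meta-information and header lines
        chrom, pos, vid, ref, alt = line.rstrip("\n").split("\t")[:5]
        variants.append(
            {"chrom": chrom, "pos": int(pos), "id": vid, "ref": ref, "alt": alt}
        )
    return variants

sample = [
    "##fileformat=VCFv4.2",
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
    "1\t12345\t.\tA\tT\t50\tPASS\t.",
]
print(parse_vcf_lines(sample))
# [{'chrom': '1', 'pos': 12345, 'id': '.', 'ref': 'A', 'alt': 'T'}]
```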
## REST API

Use the OpenGenomeLLM REST API from any programming language. All endpoints require an API key from your account dashboard.

```bash
curl -X POST https://api.opengenomellm.org/v1/interpret \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "opengenomellm-70b",
    "gene": "BRCA1",
    "variant": "c.5266dupC",
    "population": "South Asian"
  }'
```
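The same `/v1/interpret` call can be made from Python. The sketch below assembles the URL, headers, and JSON body shown in the curl command; `build_interpret_request` is a hypothetical helper for illustration, and actually sending the request (e.g. with `requests.post`) requires a valid API key:

```python
import json

# Endpoint from the curl example above
API_URL = "https://api.opengenomellm.org/v1/interpret"

def build_interpret_request(api_key, gene, variant, population,
                            model="opengenomellm-70b"):
    """Hypothetical helper: assemble the /v1/interpret request parts."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "gene": gene,
        "variant": variant,
        "population": population,
    })
    return API_URL, headers, body

url, headers, body = build_interpret_request(
    "YOUR_API_KEY", gene="BRCA1", variant="c.5266dupC",
    population="South Asian",
)
print(body)
# To send: requests.post(url, headers=headers, data=body)
```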
## Docker Deployment

Deploy OpenGenomeLLM-70B in your own infrastructure using our official Docker image:

```bash
# Pull and run with GPU support
docker pull deepcogai/opengenomellm:70b-latest
docker run --gpus all -p 8000:8000 \
  -e MODEL=opengenomellm-70b \
  -e QUANTIZATION=4bit \
  deepcogai/opengenomellm:70b-latest

# The API will be available at http://localhost:8000
```