Model Details
This model is a fine-tuned version of Llama-3.2-3B-Instruct designed specifically for Text-to-SQL tasks. It was trained to accept a database schema and a natural language question, and output a valid SQL query along with a brief explanation of the logic. It is lightweight (3B parameters), making it suitable for local deployment on consumer GPUs using 4-bit quantization.
Model Description
- Base Model: unsloth/Llama-3.2-3B-Instruct
- Fine-tuning Framework: Unsloth (QLoRA)
- Dataset: gretelai/synthetic_text_to_sql
Uses
The model was trained using the Alpaca prompt format. For best results, structure your input at inference time exactly the same way (see the quickstart example below).
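The training template can be reproduced with a small helper. This is an illustrative sketch (the name `build_prompt` is not part of the model's API); the string itself mirrors the format shown in the quickstart example:

```python
def build_prompt(schema: str, question: str) -> str:
    """Build the Alpaca-style prompt this model was fine-tuned on."""
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n"
        "### Instruction:\n"
        f"Company Database : {schema}\n"
        "### Input:\n"
        f"SQL Prompt :{question}\n"
        "### Response:\n"
    )
```

Keeping the section markers and spacing identical to training (including the space before the colons) matters: instruction-tuned models are sensitive to deviations from the template they were trained on.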
How to Get Started with the Model
```python
import torch
from transformers import pipeline

model_id = "Ary-007/Text-to-sql-llama-3.2"

# Load the text-generation pipeline
pipe = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto",
)

# Define the schema (context)
schema = """
CREATE TABLE employees (
    id INT,
    name TEXT,
    department TEXT,
    salary INT,
    hire_date DATE
);
"""

# Define the user question
question = "Find the name and salary of employees in the 'Engineering' department who earn more than 80000."

# Format the prompt exactly as trained
prompt = f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Company Database : {schema}
### Input:
SQL Prompt :{question}
### Response:
"""

outputs = pipe(
    prompt,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
)
print(outputs[0]["generated_text"])
```
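Note that the `text-generation` pipeline returns the prompt concatenated with the completion, so the printout above includes the full input. A minimal helper to keep only the model's answer (the name `extract_completion` is illustrative post-processing, not part of the model's API):

```python
def extract_completion(generated_text: str, prompt: str) -> str:
    """Strip the echoed prompt from a text-generation pipeline output."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].strip()
    return generated_text.strip()
```

Used as `extract_completion(outputs[0]["generated_text"], prompt)`, this yields just the SQL query and explanation.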
Training Details
The model was fine-tuned using Unsloth on a Tesla T4 GPU (Google Colab).
Hyperparameters
- Rank (r): 16
- LoRA Alpha: 16
- Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Quantization: 4-bit NF4 (NormalFloat)
- Max Sequence Length: 2048
- Learning Rate: 2e-4
- Optimizer: adamw_8bit
- Max Steps: 60
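For readers more familiar with the Hugging Face `peft` API, the adapter settings above correspond roughly to the following configuration (a sketch; Unsloth applies these internally via its own `get_peft_model` wrapper, and defaults such as `lora_dropout` and `bias` are assumed here):

```python
from peft import LoraConfig

# Approximate peft equivalent of the Unsloth QLoRA settings listed above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.0,   # assumed default
    bias="none",        # assumed default
    task_type="CAUSAL_LM",
)
```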
Dataset Info
The model was trained on the gretelai/synthetic_text_to_sql dataset, utilizing the following fields:
- sql_context: Used as the database schema context.
- sql_prompt: The natural language question.
- sql: The target SQL query.
- sql_explanation: The explanation of the query logic.
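One plausible way these fields map into a single training string, assuming the training template matches the inference prompt (`format_example` is an illustrative name, not code from the actual training script):

```python
def format_example(row: dict) -> str:
    """Combine Gretel dataset fields into one Alpaca-style training string."""
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n"
        "### Instruction:\n"
        f"Company Database : {row['sql_context']}\n"
        "### Input:\n"
        f"SQL Prompt :{row['sql_prompt']}\n"
        "### Response:\n"
        f"{row['sql']}\n\n{row['sql_explanation']}"
    )
```

The response section pairs the target query with its explanation, which is why the model emits both at inference time.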
Limitations
- Training Steps: This model was trained for a limited number of steps (60) as a proof of concept. It may not generalize well to extremely complex or unseen database schemas.
- Hallucination: Like all LLMs, it may generate syntactically correct but logically incorrect SQL. Always validate the output before running it on a production database.
- Scope: It is optimized for standard SQL (similar to SQLite/PostgreSQL) as presented in the GretelAI dataset.
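Before executing generated SQL anywhere important, you can sanity-check it against the schema in a disposable in-memory SQLite database. This is an illustrative sketch (`is_valid_sql` is not part of the model's API, and SQLite's dialect will reject some PostgreSQL-specific syntax), but it catches syntax errors and references to missing tables or columns without touching real data:

```python
import sqlite3

def is_valid_sql(query: str, schema: str) -> bool:
    """Check a generated query against a schema in a throwaway in-memory DB.

    EXPLAIN compiles the statement without running it, so malformed SQL
    and references to nonexistent columns raise errors safely.
    """
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema)
        conn.execute(f"EXPLAIN {query}")
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()
```

Passing this check only proves the query is well-formed against the schema; it says nothing about whether the logic matches the user's intent, so human review is still advised.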
License
This model is derived from Llama-3.2 and is subject to the Llama 3.2 Community License.