A service that detects whether a given image of a palm fruit bunch is ripe. An R&D project initiated for Swopt.


Palm Oil Ripeness Agent (n8n + YOLOv8)

This project uses a custom-trained YOLOv8 model to detect the ripeness of Palm Oil Fresh Fruit Bunches (FFB). It features a local Python FastAPI server that integrates into an agentic n8n workflow, storing results and embeddings in MongoDB Atlas.

🚀 Project Overview

  1. Model: YOLOv8 Nano (custom-trained on a Roboflow dataset).
  2. Server: FastAPI (Python) hosting the model for inference.
  3. Database: MongoDB Atlas (Vector Search for historical similarity).
  4. Orchestration: n8n (Agentic workflow for decision making).

🛠 Prerequisites

  • Python 3.10+
  • n8n (Desktop or Self-hosted)
  • MongoDB Atlas Account (with Vector Search index enabled)
  • Optional: NVIDIA GPU with CUDA for faster training.

📦 Setup Instructions

1. Clone & Environment

git clone <your-repo-url>
cd palm-oil-ai
python -m venv venv
# Windows: venv\Scripts\activate | Mac: source venv/bin/activate
pip install -r requirements.txt
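The repository's requirements.txt is not shown here; a plausible minimal set for this stack (the exact package list is an assumption) would be:

```
ultralytics
fastapi
uvicorn
python-multipart
pymongo
```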

2. Dataset Preparation

  1. Download the dataset from Roboflow Universe.
  2. Unzip into the /datasets folder.
  3. Ensure your data.yaml matches the local paths:

    train: ../datasets/train/images
    val: ../datasets/valid/images
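A complete data.yaml also declares the class count and names; the labels below are placeholders, since your Roboflow export defines the actual class names:

```
train: ../datasets/train/images
val: ../datasets/valid/images
nc: 2                      # number of classes (assumption)
names: ["ripe", "unripe"]  # placeholder labels; use your Roboflow classes
```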
    
    

3. Training the Model

To train locally without hanging your PC, use the throttled script:

python train_script.py
  • Outputs: The best model will be saved at runs/detect/train/weights/best.pt.
  • Move it: Copy best.pt to the root directory for the server to use.
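train_script.py likely wraps the Ultralytics training API roughly as sketched below; the epochs, batch, and workers values are illustrative throttling assumptions, not the script's actual settings:

```python
from ultralytics import YOLO

# Start from the pretrained nano checkpoint shipped in the repo.
model = YOLO("yolov8n.pt")

# Low batch/workers values keep CPU and RAM usage modest on a desktop machine.
model.train(
    data="datasets/data.yaml",  # path assumption; match your unzipped dataset
    epochs=50,                  # illustrative value
    imgsz=640,
    batch=8,
    workers=2,
    device=0,                   # set to "cpu" if no CUDA GPU is available
)
```

After training, the best weights land under runs/detect/train/weights/ as described above.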

4. Running the Inference Server

python main.py

The server will start at http://localhost:8000.

  • Endpoint: POST /detect
  • Payload: Multipart Form-data (Key: file, Value: image.jpg)
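For reference, a /detect response might look like the payload below and could be consumed as follows; the field names ("detections", "class", "confidence", "embedding") are illustrative assumptions, not the server's confirmed schema:

```python
import json

# Illustrative /detect response; actual field names depend on main.py (assumption).
raw = '{"detections": [{"class": "ripe", "confidence": 0.91}], "embedding": [0.12, 0.05]}'

result = json.loads(raw)
# Pick the highest-confidence detection as the ripeness verdict.
best = max(result["detections"], key=lambda d: d["confidence"])
print(best["class"], best["confidence"])
```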

🤖 n8n Integration

The n8n workflow follows this logic:

  1. Trigger: Receives image (Telegram/Webhook).
  2. HTTP Request: Sends image to localhost:8000/detect.
  3. MongoDB Node: Performs Vector Search using the returned embedding.
  4. Agent Logic: Final ripeness determination based on model confidence + DB similarity.
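The Vector Search step could use an Atlas `$vectorSearch` aggregation stage along these lines; the index name, field names, and numeric parameters are assumptions for illustration:

```python
# Sketch of a MongoDB Atlas Vector Search aggregation pipeline.
# "ripeness_index" and the "embedding" field name are assumed, not confirmed.
query_embedding = [0.12, 0.05, 0.33]  # embedding returned by /detect

pipeline = [
    {
        "$vectorSearch": {
            "index": "ripeness_index",   # Atlas Search index name (assumption)
            "path": "embedding",         # document field holding stored vectors
            "queryVector": query_embedding,
            "numCandidates": 100,        # candidates scanned before ranking
            "limit": 5,                  # top-k similar historical results
        }
    },
    {"$project": {"label": 1, "score": {"$meta": "vectorSearchScore"}}},
]
# Pass `pipeline` to collection.aggregate(...) via pymongo, or configure the
# equivalent aggregation in the n8n MongoDB node.
```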

📂 Repository Structure

├── datasets/           # Labeled images from Roboflow
├── runs/               # YOLO training logs and weights
├── main.py             # FastAPI Inference Server
├── train_script.py     # Local training configuration
├── best.pt             # The "Brain" (Trained Model)
├── requirements.txt    # Python dependencies
└── README.md           # You are here


📝 Future Improvements

  • Implement CLIP embeddings for higher-accuracy vector similarity.
  • Add a Streamlit dashboard for manual batch verification.