A service that detects whether a given image of a palm fruit bunch is ripe. An R&D project initiated for Swopt.


# 🌴 Palm Oil Ripeness Agent (n8n + YOLOv8)

This project uses a custom-trained YOLOv8 model to detect the ripeness of Palm Oil Fresh Fruit Bunches (FFB). It features a local Python FastAPI server and a Streamlit Dashboard, both integrated into an agentic n8n workflow.

## 🚀 Project Overview

- **Vision Engine:** YOLOv8 Nano (custom-trained on MPOB-standard datasets).
- **Inference Server:** FastAPI (Python) for n8n integration.
- **Demo Dashboard:** Streamlit UI for drag-and-drop batch testing.
- **Intelligence:** MongoDB Atlas Vector Search for similarity-based reasoning.

## 🛠 Prerequisites

- Python 3.10+
- n8n (Desktop or Self-hosted)
- MongoDB Atlas account (with a Vector Search index enabled)

## 📦 Setup Instructions

### 1. Environment Setup

```bash
git clone <your-repo-url>
cd palm-oil-ai
python -m venv venv
# Windows: venv\Scripts\activate | macOS/Linux: source venv/bin/activate
pip install ultralytics fastapi uvicorn streamlit python-multipart pillow
```
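To confirm the dependencies installed correctly, a quick stdlib-only check like the following can help. The package list mirrors the `pip install` line above; note that `PIL` is the import name for `pillow` and `multipart` for `python-multipart`:

```python
import importlib.util

def missing_packages(packages):
    """Return the subset of package names that cannot be imported."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

# Import names for the dependencies installed above
required = ["ultralytics", "fastapi", "uvicorn", "streamlit", "multipart", "PIL"]
print("Missing:", missing_packages(required))
```

An empty list means the environment is ready; any names printed need to be reinstalled.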

### 2. Dataset & Training

1. Download the dataset from Roboflow, or use your own source. Make sure the file/folder structure is consistent, especially the `.yaml` file.
2. Extract it into `/datasets`.
3. Train the model:

   ```bash
   python train_script.py
   ```

4. Copy the resulting `best.pt` from `runs/detect/train/weights/` to the project root.
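The `.yaml` file mentioned in step 1 tells YOLOv8 where the dataset splits live and what the classes are. A hypothetical `datasets/data.yaml` is sketched below; the paths follow the usual Roboflow export layout and the class names are placeholders, so adjust both to match your actual dataset:

```yaml
# Hypothetical data.yaml — adjust paths and class names to your dataset
path: datasets
train: train/images
val: valid/images
test: test/images
names:
  0: ripe
  1: unripe
```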

### 3. Running the API Server (DDD Structure)

The new architecture decouples the vision logic from the API entry point. You can run it either via the root wrapper or directly as a module:

```bash
# Option 1: Using the root wrapper
python main.py

# Option 2: Running as a module
python -m src.api.main
```

By default, the server runs on http://localhost:8000.

#### Available Endpoints

| Endpoint          | Method | Description                                              |
| ----------------- | ------ | -------------------------------------------------------- |
| `/detect`         | POST   | Simple YOLO detection (returns JSON)                     |
| `/analyze`        | POST   | Detection + Vertex AI vectorization + MongoDB archival   |
| `/get_confidence` | GET    | Returns the current model confidence setting             |
| `/set_confidence` | POST   | Updates the global model confidence setting              |
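Outside of n8n, the `/detect` endpoint can be exercised from Python with a small stdlib-only client. This is a sketch under assumptions: the form field name `file` and the multipart shape follow the usual FastAPI `UploadFile` convention, which this README does not confirm, so adjust to the actual endpoint signature:

```python
import io
import json
import urllib.request
import uuid

def build_multipart(field, filename, data, content_type="image/jpeg"):
    """Encode a single file as a multipart/form-data body.

    Returns (body_bytes, content_type_header_value).
    """
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    buf.write(f"--{boundary}\r\n".encode())
    buf.write(
        f'Content-Disposition: form-data; name="{field}"; '
        f'filename="{filename}"\r\n'.encode()
    )
    buf.write(f"Content-Type: {content_type}\r\n\r\n".encode())
    buf.write(data)
    buf.write(f"\r\n--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data; boundary={boundary}"

def detect(image_path, url="http://localhost:8000/detect"):
    """POST an image to the /detect endpoint and return the parsed JSON."""
    with open(image_path, "rb") as f:
        body, ctype = build_multipart("file", image_path, f.read())
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": ctype}, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())
```

The same client shape works for `/analyze`; only the URL path changes.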

### 4. Running the Streamlit Dashboard

The Streamlit app still provides the user interface for manual testing.

```bash
# Run the Streamlit app
streamlit run demo_app.py
```

---

## 📂 Repository Structure

```text
├── datasets/           # Labeled images (Train/Valid/Test)
├── runs/               # YOLO training logs and output weights
├── src/
│   ├── api                 # FastAPI entry points
│   ├── application         # Use Cases (Orchestration)
│   ├── domain              # Business Logic & Entities
│   └── infrastructure      # External Services (MongoDB, VertexAI)
├── best.pt                 # YOLOv8 trained weights
├── yolov8n.pt              # Base YOLOv8 Nano weights
├── main.py                 # Root wrapper (redirects to src.api.main)
├── demo_app.py             # Streamlit dashboard
├── train_script.py         # Training script
├── test_model.py           # Model test script
├── requirements.txt        # Python dependencies
├── .env                    # Configuration (Mongo, Vertex)
├── LICENSE                 # MIT License
└── README.md
```