A service to detect whether a given image of an oil palm fruit bunch is ripe. An R&D project initiated for Swopt.



# 🌴 Palm Oil Ripeness Agent (n8n + YOLOv8)

This project uses a custom-trained YOLOv8 model to detect the ripeness of Palm Oil Fresh Fruit Bunches (FFB). It features a local Python FastAPI server and a Streamlit Dashboard, both integrated into an agentic n8n workflow.

## 🚀 Project Overview

  1. Vision Engine: YOLOv8 Nano (Custom-trained on MPOB-standard datasets).
  2. Inference Server: FastAPI (Python) for n8n integration.
  3. Demo Dashboard: Streamlit UI for drag-and-drop batch testing.
  4. Intelligence: MongoDB Atlas Vector Search for similarity-based reasoning.
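The detect → vectorize → archive pipeline implied by the components above can be sketched as a small orchestration function. This is an illustrative sketch only: `detect`, `embed`, and `archive` are hypothetical stand-ins injected as dependencies, not the project's actual application-layer API.

```python
# Illustrative sketch of the analysis pipeline; `detect`, `embed`, and
# `archive` are hypothetical callables, injected so each layer stays decoupled.
def analyze(image_bytes, detect, embed, archive):
    detections = detect(image_bytes)   # YOLOv8 inference on the image
    vector = embed(detections)         # Vertex AI embedding of the result
    archive(detections, vector)        # MongoDB Atlas Vector Search archival
    return {"detections": detections, "vector": vector}
```

Injecting the three steps keeps the use case testable without a model, an embedding service, or a database.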

## 🛠 Prerequisites

  • Python 3.10+
  • n8n (Desktop or Self-hosted)
  • MongoDB Atlas Account (with Vector Search index enabled)

## 📦 Setup Instructions

### 1. Environment Setup

```bash
git clone <your-repo-url>
cd palm-oil-ai
python -m venv venv
# Windows: venv\Scripts\activate
# macOS/Linux: source venv/bin/activate
pip install ultralytics fastapi uvicorn streamlit python-multipart pillow
# or install the pinned dependencies:
# pip install -r requirements.txt
```

### 2. Dataset & Training

  1. Download the dataset from Roboflow, or use your own source of datasets. Make sure the file/folder structure is consistent, especially the `.yaml` file.
  2. Extract it into `/datasets`.
  3. Train the model:

     ```bash
     python train_script.py
     ```

  4. Copy the resulting `best.pt` from `runs/detect/train/weights/` to the project root.
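For reference, the `.yaml` file the training script points at typically follows the Ultralytics data-config format. The paths and class names below are illustrative assumptions; adjust them to match your actual dataset.

```yaml
# Illustrative datasets/data.yaml — paths and class names are assumptions
train: datasets/train/images
val: datasets/valid/images
test: datasets/test/images

nc: 2                       # number of classes
names: ["ripe", "unripe"]   # example labels; use your dataset's actual classes
```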

## Running the API Server (DDD Structure)

The new architecture decouples the vision logic from the API entry point. You can run it either via the root wrapper or directly as a module:

```bash
# Option 1: Using the root wrapper
python main.py

# Option 2: Running as a module
python -m src.api.main
```

By default, the server runs on http://localhost:8000.

### Available Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| `/detect` | POST | Simple YOLO detection (returns JSON) |
| `/analyze` | POST | Detection + Vertex vectorization + MongoDB archival |
| `/get_confidence` | GET | Returns the current model confidence setting |
| `/set_confidence` | POST | Updates the global model confidence setting |
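As a quick smoke test, `/detect` can be called from Python. Note that the multipart field name (`file`) and the response shape used here are assumptions, not the documented schema; adjust them to match the actual API.

```python
# Hypothetical client sketch: the multipart field name ("file") and the
# response shape ({"detections": [{"class_name", "confidence"}, ...]})
# are assumptions, not the server's documented schema.
def pick_ripeness(detections):
    """Return the label of the highest-confidence detection, or None."""
    if not detections:
        return None
    best = max(detections, key=lambda d: d["confidence"])
    return best["class_name"]

if __name__ == "__main__":
    import requests  # third-party: pip install requests
    with open("sample.jpg", "rb") as f:
        resp = requests.post("http://localhost:8000/detect", files={"file": f})
    print(pick_ripeness(resp.json().get("detections", [])))
```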

## Running the Streamlit Dashboard

The Streamlit app still provides the user interface for manual testing.

```bash
# Run the Streamlit app
streamlit run demo_app.py
```

---

## 📂 Repository Structure

```text
├── datasets/               # Labeled images (Train/Valid/Test)
├── runs/                   # YOLO training logs and output weights
├── src/
│   ├── api/                # FastAPI entry points
│   ├── application/        # Use Cases (Orchestration)
│   ├── domain/             # Business Logic & Entities
│   └── infrastructure/     # External Services (MongoDB, VertexAI)
├── best.pt                 # YOLOv8 trained weights
├── requirements.txt        # Python dependencies
├── .env                    # Configuration (Mongo, Vertex)
├── LICENSE                 # MIT License
└── README.md
```