# 🌴 Palm Oil Ripeness Agent (n8n + YOLOv8)

This project uses a custom-trained **YOLOv8** model to detect the ripeness of Palm Oil Fresh Fruit Bunches (FFB). It features a local Python FastAPI server and a Streamlit dashboard, both integrated into an **agentic n8n workflow**.

## 🚀 Project Overview

1. **Vision Engine:** YOLOv8 Nano (custom-trained on MPOB-standard datasets).
2. **Inference Server:** FastAPI (Python) for n8n integration.
3. **Demo Dashboard:** Streamlit UI for drag-and-drop batch testing.
4. **Intelligence:** MongoDB Atlas Vector Search for similarity-based reasoning.

---

## 🛠 Prerequisites

* Python 3.10+
* n8n (Desktop or Self-hosted)
* MongoDB Atlas account (with a Vector Search index enabled)

---

## 📦 Setup Instructions

### 1. Environment Setup

```bash
git clone <repository-url>
cd palm-oil-ai
python -m venv venv
# Windows: venv\Scripts\activate | Mac/Linux: source venv/bin/activate
pip install ultralytics fastapi uvicorn streamlit python-multipart pillow
```

### 2. Dataset & Training

1. Download the dataset from [Roboflow](https://universe.roboflow.com/assignment-vvtq7/oil-palm-ripeness/dataset/5/download/yolov8), or use a dataset from another source. If you do, make sure the file/folder structure is consistent, especially the `.yaml` configuration file.
2. Extract it into `/datasets`.
3. **Train the model:**
   ```bash
   python train_script.py
   ```
4. Copy the resulting `best.pt` from `runs/detect/train/weights/` to the project root.

### 3. Running the API Server (DDD Structure)

The architecture decouples the vision logic from the API entry point. You can run the server either via the root wrapper or directly as a module:

```powershell
# Option 1: Using the root wrapper
python main.py

# Option 2: Running as a module
python -m src.api.main
```

By default, the server runs on `http://localhost:8000`.
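As a rough sketch of how an n8n node (or any other client) might consume the server's detection output, the snippet below tallies detections per ripeness class from a JSON payload. The response schema here (`detections`, `class_name`, `confidence` keys) is an assumption for illustration, not taken from this project's source; check the actual FastAPI response model before relying on it.

```python
import json
from collections import Counter

# Hypothetical /detect response; the real schema depends on the FastAPI code.
raw = json.dumps({
    "detections": [
        {"class_name": "ripe", "confidence": 0.91},
        {"class_name": "unripe", "confidence": 0.84},
        {"class_name": "ripe", "confidence": 0.77},
    ]
})

def summarize(payload: str) -> dict:
    """Count detected FFB per ripeness class from a detection JSON payload."""
    detections = json.loads(payload).get("detections", [])
    return dict(Counter(d["class_name"] for d in detections))

print(summarize(raw))  # → {'ripe': 2, 'unripe': 1}
```

A summary like this is convenient in an agentic workflow, since downstream nodes can branch on counts ("mostly ripe" vs. "mostly unripe") instead of iterating over raw bounding boxes.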
### Available Endpoints

| Endpoint | Method | Description |
| :--- | :--- | :--- |
| `/detect` | `POST` | Simple YOLO detection (returns JSON) |
| `/analyze` | `POST` | Detection + Vertex vectorization + MongoDB archival |
| `/get_confidence` | `GET` | Returns the current model confidence setting |
| `/set_confidence` | `POST` | Updates the global model confidence setting |

### Running the Streamlit Dashboard

The Streamlit app still provides the user interface for manual testing.

```powershell
# Run the Streamlit app
streamlit run demo_app.py
```

---

## 📂 Repository Structure

```text
├── datasets/        # Labeled images (Train/Valid/Test)
├── runs/            # YOLO training logs and output weights
├── main.py          # FastAPI Inference Server (for n8n)
├── demo_app.py      # Streamlit Dashboard (for demos)
├── train_script.py  # Throttled training configuration
├── best.pt          # THE BRAIN: the trained model weights
└── README.md        # Documentation
```
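When wiring `/set_confidence` into a workflow, it can help to validate the threshold client-side before posting it, since out-of-range values would silently hide or flood detections. The helper below is a minimal sketch under two assumptions not confirmed by this project's source: that the endpoint expects a float in the usual YOLO confidence range, and that the request field is named `confidence`.

```python
def clamp_confidence(value: float, lo: float = 0.05, hi: float = 0.95) -> float:
    """Clamp a requested confidence threshold to a safe range before
    sending it to the server (bounds and field name are assumptions)."""
    return max(lo, min(hi, float(value)))

# Payload a client might POST to http://localhost:8000/set_confidence
payload = {"confidence": clamp_confidence(1.4)}
print(payload)  # → {'confidence': 0.95}
```

Clamping rather than rejecting keeps an automated n8n workflow running even if an upstream node produces an implausible value.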