@@ -1,15 +1,15 @@
-## README.md
+### ✅ Updated README.md (Copy/Paste this)

-# Palm Oil Ripeness Agent (n8n + YOLOv8)
+# 🌴 Palm Oil Ripeness Agent (n8n + YOLOv8)

-This project uses a custom-trained **YOLOv8** model to detect the ripeness of Palm Oil Fresh Fruit Bunches (FFB). It features a local Python FastAPI server that integrates into an **agentic n8n workflow**, storing results and embeddings in **MongoDB Atlas**.
+This project uses a custom-trained **YOLOv8** model to detect the ripeness of Palm Oil Fresh Fruit Bunches (FFB). It features a local Python FastAPI server and a Streamlit Dashboard, both integrated into an **agentic n8n workflow**.

## 🚀 Project Overview
|
|
|
-1. **Model:** YOLOv8 Nano (custom-trained on Roboflow dataset).
-2. **Server:** FastAPI (Python) hosting the model for inference.
-3. **Database:** MongoDB Atlas (Vector Search for historical similarity).
-4. **Orchestration:** n8n (Agentic workflow for decision making).
+1. **Vision Engine:** YOLOv8 Nano (custom-trained on MPOB-standard datasets).
+2. **Inference Server:** FastAPI (Python) for n8n integration.
+3. **Demo Dashboard:** Streamlit UI for drag-and-drop batch testing.
+4. **Intelligence:** MongoDB Atlas Vector Search for similarity-based reasoning.

---
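The "Intelligence" step compares the embedding of a new detection against historical ones stored in MongoDB Atlas. As a rough standalone illustration (not project code — Atlas computes this server-side during Vector Search, and cosine is only one of its supported similarity functions), the underlying score can be plain cosine similarity:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Parallel vectors score ~1.0; orthogonal vectors score 0.0.
print(round(cosine_similarity([1.0, 2.0], [2.0, 4.0]), 6))  # → 1.0
```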
@@ -18,89 +18,71 @@ This project uses a custom-trained **YOLOv8** model to detect the ripeness of Pa

* Python 3.10+
* n8n (Desktop or Self-hosted)
* MongoDB Atlas Account (with Vector Search index enabled)
-* *Optional:* NVIDIA GPU with CUDA for faster training.

---

## 📦 Setup Instructions
|
|
|
-### 1. Clone & Environment
+### 1. Environment Setup

```bash
git clone <your-repo-url>
cd palm-oil-ai
python -m venv venv
# Windows: venv\Scripts\activate | Mac: source venv/bin/activate
-pip install -r requirements.txt
+pip install ultralytics fastapi uvicorn streamlit python-multipart pillow
```
|
|
|
-### 2. Dataset Preparation
+### 2. Dataset & Training

-1. Download the dataset from [Roboflow Universe](https://www.google.com/search?q=https://universe.roboflow.com/assignment-vvtq7/oil-palm-ripeness/).
-2. Unzip into the `/datasets` folder.
-3. Ensure your `data.yaml` matches the local paths:
-```yaml
-train: ../datasets/train/images
-val: ../datasets/valid/images
+1. Download the dataset from [Roboflow](https://universe.roboflow.com/assignment-vvtq7/oil-palm-ripeness/dataset/5/download/yolov8).
+   *Alternatively, source your own dataset; keep the file/folder structure consistent, especially the `data.yaml` file.*
+2. Extract into `/datasets`.
+3. **Train the model:**
+```bash
+python train_script.py
```
+4. Copy the resulting `best.pt` from `runs/detect/train/weights/` to the project root.
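On keeping the `.yaml` file consistent (step 1): a typical YOLOv8 `data.yaml` follows the sketch below. The class count and names here are placeholders — use the ones that ship with your dataset.

```yaml
# datasets/data.yaml -- paths are resolved relative to this file
train: train/images
val: valid/images
test: test/images                      # optional

nc: 3                                  # number of classes; must match your labels
names: ["unripe", "ripe", "overripe"]  # placeholder names; copy from your dataset
```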
-### 3. Training the Model
+### 3. Running the Demo (Streamlit)

-To train locally without hanging your PC, use the throttled script:
+To show the interactive dashboard to colleagues:

```bash
-python train_script.py
+streamlit run demo_app.py
```

-* **Outputs:** The best model will be saved at `runs/detect/train/weights/best.pt`.
-* **Move it:** Copy `best.pt` to the root directory for the server to use.
+* **Local URL:** `http://localhost:8501`
+
+### 4. Running the API for n8n

-### 4. Running the Inference Server
+To connect your AI to n8n workflows:

```bash
python main.py
```
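The exact JSON that `/detect` returns is defined in `main.py` and not shown in this README. Purely as an illustration of how a downstream n8n node might branch on it, here is a hypothetical payload and the kind of parsing involved (all field names are assumptions):

```python
import json

# Hypothetical /detect response -- the real schema is whatever main.py returns.
raw = """
{
  "detections": [{"class": "ripe", "confidence": 0.91}],
  "embedding": [0.12, -0.57, 0.33]
}
"""
result = json.loads(raw)

# An agent step might act on the highest-confidence detection.
best = max(result["detections"], key=lambda d: d["confidence"])
print(best["class"])  # → ripe
```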
|
|
|
-The server will start at `http://localhost:8000`.
-
-* **Endpoint:** `POST /detect`
-* **Payload:** Multipart Form-data (Key: `file`, Value: `image.jpg`)
-
----
-
-## 🤖 n8n Integration
-
-The n8n workflow follows this logic:
-
-1. **Trigger:** Receives image (Telegram/Webhook).
-2. **HTTP Request:** Sends image to `localhost:8000/detect`.
-3. **MongoDB Node:** Performs Vector Search using the returned embedding.
-4. **Agent Logic:** Final ripeness determination based on model confidence + DB similarity.
+* **Endpoint:** `POST http://localhost:8000/detect`
+* **Payload:** Form-data with key `file`.
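For testing outside n8n, the same request can be reproduced with Python's standard library. The sketch below only builds the multipart body — the field name `file` comes from this README, while everything else (filename, image bytes, JPEG content type) is an assumption for illustration:

```python
import uuid

def build_multipart(field: str, filename: str, payload: bytes) -> tuple[bytes, str]:
    """Build a multipart/form-data body plus its Content-Type header value."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        "Content-Type: image/jpeg\r\n"
        "\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + payload + tail, f"multipart/form-data; boundary={boundary}"

body, content_type = build_multipart("file", "bunch.jpg", b"<jpeg bytes here>")

# To actually send it (the server from `python main.py` must be running):
# import urllib.request
# req = urllib.request.Request("http://localhost:8000/detect", data=body,
#                              headers={"Content-Type": content_type})
# print(urllib.request.urlopen(req).read())
```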
---

## 📂 Repository Structure

```text
-├── datasets/          # Labeled images from Roboflow
-├── runs/              # YOLO training logs and weights
-├── main.py            # FastAPI Inference Server
-├── train_script.py    # Local training configuration
-├── best.pt            # The "Brain" (Trained Model)
-├── requirements.txt   # dependencies
-└── README.md          # You are here
+├── datasets/          # Labeled images (Train/Valid/Test)
+├── runs/              # YOLO training logs and output weights
+├── main.py            # FastAPI Inference Server (for n8n)
+├── demo_app.py        # Streamlit Dashboard (for demos)
+├── train_script.py    # Throttled training configuration
+├── best.pt            # THE BRAIN: the trained model weights
+└── README.md          # Documentation
```

----
-
-## 📝 Future Improvements
-
-* Implement **CLIP** embeddings for higher-accuracy vector similarity.
-* Add a **Streamlit** dashboard for manual batch verification.