# 🌴 Palm Oil Ripeness Agent (n8n + YOLOv8)

This project uses a custom-trained **YOLOv8** model to detect the ripeness of Palm Oil Fresh Fruit Bunches (FFB). It features a local Python FastAPI server and a Streamlit Dashboard, both architected with **Domain-Driven Design (DDD)** for flexibility and scalability in an **agentic n8n workflow**.
## 🚀 Project Overview

1. **Vision Engine:** YOLOv8 Nano (custom-trained on MPOB-standard datasets).
2. **Inference Server:** FastAPI (Python) for n8n integration.
3. **Visual Fingerprinting:** Vertex AI Multimodal Embedding (`multimodalembedding@001`).
4. **Archival & Reasoning:** MongoDB Atlas Vector Search for similarity-based reasoning.
5. **Demo Dashboard:** Streamlit UI for drag-and-drop batch testing.
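Steps 3 and 4 hinge on comparing embedding vectors: Vertex AI turns each image into a vector, and Atlas Vector Search ranks archived vectors by similarity. As an illustrative sketch (not this project's code), the metric underneath is typically cosine similarity:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```

Atlas computes this server-side; the function is here only to make the "similarity-based reasoning" concrete.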
---
## 🛠 Prerequisites

- Python 3.10+
- n8n (Desktop or Self-hosted)
- MongoDB Atlas account (with Vector Search enabled)
- Google Cloud Platform project (Vertex AI API enabled)
---
### 1. Environment Setup

```powershell
# Clone and enter the repository
git clone <your-repo-url>
cd palm-oil-ai

# Create and activate the virtual environment
python -m venv venv
.\venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```
### 2. Configuration (`.env`)

Ensure your `.env` file is populated with the following keys:

- `MONGO_URI`: Your MongoDB Atlas connection string.
- `PROJECT_ID`: Your Google Cloud project ID.
- `LOCATION`: Vertex AI location (e.g., `us-central1`).
- `DB_NAME`: MongoDB database name.
- `COLLECTION_NAME`: MongoDB collection name.
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to your GCP service account JSON key.
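To catch a missing key before the server boots, a startup check can mirror this list (a minimal sketch, assuming the keys are read via `os.environ`; the helper name is hypothetical):

```python
import os

# The keys the server expects -- mirrors the .env list above.
REQUIRED_KEYS = [
    "MONGO_URI",
    "PROJECT_ID",
    "LOCATION",
    "DB_NAME",
    "COLLECTION_NAME",
    "GOOGLE_APPLICATION_CREDENTIALS",
]

def load_settings() -> dict:
    """Read each expected key from the environment; None marks a missing value."""
    settings = {key: os.environ.get(key) for key in REQUIRED_KEYS}
    missing = sorted(key for key, value in settings.items() if not value)
    if missing:
        print("Missing .env keys: " + ", ".join(missing))
    return settings
```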
---

## 🚦 How to Run
### Running the API Server

The API acts as the bridge for n8n or mobile integrations. You can start it using the root wrapper:
```powershell
# Start the FastAPI server
python main.py
|

By default, the server runs on `http://localhost:8000`.

*Alternatively, run it as a module: `python -m src.api.main`*
### Running the Streamlit Dashboard

For manual testing and visual analysis:
```powershell
# Start the Streamlit app
streamlit run demo_app.py
```

---

## 🔌 API Endpoints

| Endpoint | Method | Description |
| :--- | :--- | :--- |
| `/detect` | `POST` | **Fast inference**: returns YOLO detection results as JSON. |
| `/analyze` | `POST` | **Full pipeline**: detection + Vertex AI vectorization + MongoDB archival. |
| `/get_confidence` | `GET` | Retrieve the current confidence threshold. |
| `/set_confidence` | `POST` | Update the confidence threshold globally. |
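From scripts outside n8n, `/detect` can be exercised with a plain multipart POST. The sketch below uses only the Python standard library; the upload field name `"file"` is an assumption about the FastAPI handler, not something this README specifies:

```python
import json
import mimetypes
import urllib.request
import uuid

API_URL = "http://localhost:8000"  # default server address

def build_multipart(field: str, filename: str, payload: bytes) -> tuple[bytes, str]:
    """Encode one file as a multipart/form-data body; returns (body, content_type)."""
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {ctype}\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + payload + tail, f"multipart/form-data; boundary={boundary}"

def detect(image_path: str) -> dict:
    """POST one image to /detect and return the parsed JSON detections."""
    with open(image_path, "rb") as fh:
        body, content_type = build_multipart("file", image_path, fh.read())
    request = urllib.request.Request(
        f"{API_URL}/detect",
        data=body,
        headers={"Content-Type": content_type},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```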
---

## 📂 Repository Structure (DDD)
```text
├── src/
│   ├── api/              # FastAPI entry points & route handlers
│   ├── application/      # Use cases & orchestration logic
│   ├── domain/           # Business logic, entities & core models
│   └── infrastructure/   # External integrations (MongoDB, Vertex AI)
├── datasets/             # Labeled images (train/valid/test)
├── runs/                 # YOLO training logs and output weights
├── best.pt               # Trained YOLOv8 model weights
├── requirements.txt      # Python dependencies
├── .env                  # Environment configuration (Mongo, Vertex AI)
├── LICENSE               # MIT License
└── README.md             # You are here
```
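The `infrastructure/` layer is where the Atlas similarity lookups would live. As an illustrative sketch (the index name `ripeness_index` and vector field `embedding` are assumptions, not taken from this repo), an Atlas `$vectorSearch` aggregation stage can be built like this:

```python
def build_vector_search_stage(query_vector: list[float], limit: int = 5) -> dict:
    """Build an Atlas $vectorSearch stage for approximate nearest-neighbour lookup."""
    return {
        "$vectorSearch": {
            "index": "ripeness_index",    # assumed index name
            "path": "embedding",          # assumed vector field
            "queryVector": query_vector,
            "numCandidates": limit * 20,  # oversample candidates for better recall
            "limit": limit,
        }
    }
```

In `pymongo`, this dict would be the first stage passed to `collection.aggregate([...])`.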

## 📜 License

This project is licensed under the MIT License; see the [LICENSE](LICENSE) file for details.