
feat: implement FastAPI service with multi-engine inference and local SQLite history tracking

Dr-Swopt 6 hours ago
parent
commit
5096aab978
4 changed files with 319 additions and 68 deletions
  1. README.md (+186, -51)
  2. demo_app.py (+109, -9)
  3. palm_history.db (BIN)
  4. src/api/main.py (+24, -8)

+ 186 - 51
README.md

@@ -1,23 +1,32 @@
-# 🌴 Palm Oil Ripeness AI (YOLO26)
+# 🌴 Palm Oil FFB Management System (YOLO26)
 
-This project uses a custom-trained **YOLOv8** model to detect the ripeness of Palm Oil Fresh Fruit Bunches (FFB). It features a local Python FastAPI server and a Streamlit Dashboard, both architected with **Domain-Driven Design (DDD)** for maximum flexibility and scalability in an **agentic n8n workflow**.
+A production-ready AI system for detecting the ripeness of Palm Oil Fresh Fruit Bunches (FFB). Built on a custom-trained **YOLO26** model (YOLOv8 architecture fork) with a **dual-engine inference backend** (ONNX + PyTorch), a FastAPI server, and a full-featured Streamlit dashboard. The entire backend is architected with **Domain-Driven Design (DDD)** for maximum scalability and n8n workflow integration.
+
+---
 
 ## 🚀 Project Overview
 
-1.  **Vision Engine:** YOLOv8 Nano (Custom-trained on MPOB-standard datasets).
-2.  **Inference Server:** FastAPI (Python) for n8n integration.
-3.  **Visual Fingerprinting:** Vertex AI Multimodal Embedding (`multimodalembedding@001`).
-4.  **Archival & Reasoning:** MongoDB Atlas Vector Search for similarity-based reasoning.
-5.  **Demo Dashboard:** Streamlit UI for drag-and-drop batch testing.
+| Component | Technology | Purpose |
+| :--- | :--- | :--- |
+| **Vision Engine** | YOLO26 (Custom-trained on MPOB-standard datasets) | FFB Ripeness Detection |
+| **ONNX Runtime** | `onnxruntime` + `best.onnx` | Low-latency, NMS-free edge inference (~39 ms) |
+| **PyTorch Runtime** | `ultralytics` + `best.pt` | High-resolution auditing inference |
+| **Benchmark Engine** | YOLOv8-Sawit (`sawit_tbs.pt`) | Third-party model comparison |
+| **Inference Server** | FastAPI (Python) | REST API for n8n & mobile integration |
+| **Visual Fingerprinting** | Vertex AI Multimodal Embedding (`multimodalembedding@001`) | 1408-D vector generation |
+| **Cloud Archival** | MongoDB Atlas Vector Search | Similarity-based semantic recall |
+| **Local History** | SQLite (`palm_history.db`) | Offline audit log, zero cloud dependency |
+| **Demo Dashboard** | Streamlit (`demo_app.py`) | 5-tab production operations UI |
 
 ---
 
 ## 🛠 Prerequisites
 
--   Python 3.10+
--   n8n (Desktop or Self-hosted)
--   MongoDB Atlas Account
--   Google Cloud Platform (Vertex AI API enabled)
+- Python 3.10+
+- An NVIDIA GPU (recommended, but not required — CPU inference is supported)
+- n8n (Desktop or Self-hosted) for workflow automation
+- MongoDB Atlas Account *(optional — required only for cloud archival & semantic search)*
+- Google Cloud Platform with Vertex AI API enabled *(optional — required only for vectorization)*
 
 ---
 
@@ -38,83 +47,209 @@ python -m venv venv
 pip install -r requirements.txt
 ```
 
+> **Note:** `onnxruntime` and `fpdf2` are required but not yet in `requirements.txt`. Install manually if needed:
+> ```powershell
+> pip install onnxruntime fpdf2
+> ```
+
 ### 2. Dataset & Training
 
-1. Download the dataset from [Roboflow](https://universe.roboflow.com/assignment-vvtq7/oil-palm-ripeness/dataset/5/download/yolov8)
-*Or you can also find your own separate source of datasets. Make sure the file/folder format structure is consistent, especially the .yaml file.
+1. Download the dataset from [Roboflow](https://universe.roboflow.com/assignment-vvtq7/oil-palm-ripeness/dataset/5/download/yolov8) or source your own (ensure consistent YOLO `.yaml` structure).
 2. Extract into `/datasets`.
 3. **Train the model:**
-```bash
-python train_p.py
+    ```bash
+    python train_palm.py
+    ```
+4. Copy the resulting `best.pt` from `runs/detect/train/weights/` to the project root.
+5. **Export to ONNX** for high-speed inference:
+    ```bash
+    python export_raw_tflite.py  # or use yolo export
+    ```
+    Copy the resulting `best.onnx` to the project root.
 
-```
+### 3. Configuration (`.env`)
 
-3. Copy the resulting `best.pt` from `runs/detect/train/weights/` to the project root.
+Populate your `.env` file. Cloud services (Vertex AI, MongoDB) are **optional** — the system gracefully degrades to local-only mode if they are unavailable.
 
-### 3. Configuration (`.env`)
+```env
+# Required for Cloud Archival & Semantic Search
+MONGO_URI=mongodb+srv://<user>:<password>@<cluster>.mongodb.net/
+PROJECT_ID=your-gcp-project-id
+LOCATION=us-central1
+DB_NAME=palm_oil_db
+COLLECTION_NAME=ffb_records
 
-Ensure your `.env` file is populated with the following keys:
-- `MONGO_URI`: Your MongoDB Atlas connection string.
-- `PROJECT_ID`: Your Google Cloud Project ID.
-- `LOCATION`: Vertex AI location (e.g., `us-central1`).
-- `DB_NAME`: MongoDB database name.
-- `COLLECTION_NAME`: MongoDB collection name.
-- `GOOGLE_APPLICATION_CREDENTIALS`: Path to your GCP service account JSON key.
+# Path to your GCP Service Account key JSON
+GOOGLE_APPLICATION_CREDENTIALS=gemini-embedding-service-key.json
+```
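Since every cloud key is optional, the startup decision can be sketched as below (variable and function names are illustrative, not the actual module):

```python
import os

# Hypothetical sketch: cloud settings are optional. Missing keys switch
# the service into local-only mode instead of crashing at startup.
MONGO_URI = os.environ.get("MONGO_URI")    # None when unset
PROJECT_ID = os.environ.get("PROJECT_ID")

CLOUD_ARCHIVAL_ENABLED = bool(MONGO_URI)   # MongoDB Atlas configured
VECTORIZE_ENABLED = bool(PROJECT_ID)       # Vertex AI embeddings configured

def operating_mode() -> str:
    """Report which feature set the server can offer."""
    if CLOUD_ARCHIVAL_ENABLED and VECTORIZE_ENABLED:
        return "full-cloud"
    return "local-only"
```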
 
 ---
 
 ## 🚦 How to Run
-### Running the API Server
 
-The API acts as the bridge for n8n or mobile integrations. You can start it using the root wrapper:
+### Start the FastAPI Backend
+
+The API server is the **required** component. The Streamlit dashboard will not function without it.
 
 ```powershell
-# Start the FastAPI server
+# Start the FastAPI server (root-level wrapper)
 python main.py
 ```
+
+The server will be available at `http://localhost:8000`. Interactive API docs are at `http://localhost:8000/docs`.
+
 *Alternatively, run as a module: `python -m src.api.main`*
 
-### Running the Streamlit Dashboard
+### Start the Streamlit Dashboard
 
-For manual testing and visual analysis:
+Open a **second terminal** and run:
 
 ```powershell
-# Start the Streamlit app
 streamlit run demo_app.py
 ```
 
+The dashboard automatically connects to the backend and will display an error with a retry button if the API is offline.
+
 ---
 
 ## 🔌 API Endpoints
 
 | Endpoint | Method | Description |
 | :--- | :--- | :--- |
-| `/analyze` | `POST` | **Local Detection**: Returns YOLO results only. Guaranteed to work without Cloud Billing. |
-| `/vectorize_and_store` | `POST` | **Cloud Archival**: Vectorizes a detection and saves to MongoDB Atlas. Requires GCP Billing. |
-| `/process_batch` | `POST` | **Bulk Processor**: Handles multiple images. Detects locally; archives to cloud if available. |
-| `/search_hybrid` | `POST` | **Semantic Search**: Visual similarity or natural language search via Vertex AI. |
-| `/get_confidence` | `GET` | Retrieve the current AI confidence threshold. |
-| `/set_confidence` | `POST` | Update the AI confidence threshold globally. |
+| `/analyze` | `POST` | **Single Analysis**: Runs inference on one image; auto-archives to local SQLite vault. Accepts `model_type` form field (`onnx`, `pytorch`, `yolov8_sawit`). |
+| `/process_batch` | `POST` | **Batch Processor**: Processes multiple images; generates a `manifest.json` data contract in `batch_outputs/`. Accepts `model_type` and `metadata` (JSON string). |
+| `/vectorize_and_store` | `POST` | **Cloud Archival**: Vectorizes a single detection and saves to MongoDB Atlas. Requires active GCP billing. |
+| `/search_hybrid` | `POST` | **Semantic Search**: Visual similarity (upload image) or natural language query via Vertex AI embeddings. |
+| `/get_history` | `GET` | **History Vault**: Returns all records from the local SQLite audit log, ordered by most recent. |
+| `/get_image/{record_id}` | `GET` | **Image Retrieval**: Returns the Base64-encoded image for a specific MongoDB record. |
+| `/get_model_info` | `GET` | Returns the available detection categories and description for the specified `model_type`. |
+| `/get_confidence` | `GET` | Retrieves the current global AI confidence threshold. |
+| `/set_confidence` | `POST` | Updates the AI confidence threshold globally (live, no restart required). |
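As a rough illustration of the form fields these endpoints expect (field names taken from the table above; the helper functions themselves are hypothetical):

```python
import json

def build_analyze_form(model_type: str = "onnx") -> dict:
    """Form payload for /analyze; model_type picks the inference engine."""
    assert model_type in ("onnx", "pytorch", "yolov8_sawit")
    return {"model_type": model_type}

def build_batch_form(estate: str, block: str, model_type: str = "onnx") -> dict:
    """Form payload for /process_batch; metadata travels as a JSON string."""
    return {
        "model_type": model_type,
        "metadata": json.dumps({"estate": estate, "block": block}),
    }
```

Either payload would be posted as multipart form data alongside the image file(s), e.g. with `requests.post("http://localhost:8000/analyze", data=..., files=...)`.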
+
+---
+
+## 🖥️ Streamlit Dashboard Tabs
+
+The dashboard (`demo_app.py`) features a **5-tab production operations UI**:
+
+| Tab | Feature | Description |
+| :--- | :--- | :--- |
+| **Single Analysis** | Live Detection | Drag-and-drop a single image for auto-detection. Includes an interactive Plotly overlay viewer, a Manager's Dashboard (metrics), raw tensor inspector, harvest quality pie chart, OER yield-loss insights, cloud archival button, and misclassification flagging. |
+| **Batch Processing** | Bulk Analysis | Upload multiple images and configure production metadata (Estate, Block ID, Harvester ID, Priority) via a modal dialog. Displays a batch quality dashboard (bar chart), annotated evidence gallery, performance timeline (start/end/duration), and generates a **downloadable PDF executive report**. |
+| **Similarity Search** | Semantic Search | Search the MongoDB Atlas vector index by uploading a reference image (visual similarity) or typing a natural language query (text-to-vector). |
+| **History Vault** | Local Audit Log | SQLite-backed audit log of every `/analyze` call. Supports a list view (filterable dataframe) and a "Deep Dive" detail view with interactive Plotly + static annotated image views and the raw mathematical tensor. |
+| **Batch Reviewer** | Manifest Auditor | Browses batches saved in the `batch_outputs/` directory. Loads `manifest.json` data contracts, displays the full batch metadata audit (Job ID, source context, engine, threshold, performance timeline), a quality overview chart, and a per-image inventory with interactive detection overlays and Subscriber Payloads (clean ERP-ready JSON). |
+
+### Sidebar Controls
+
+- **Confidence Threshold**: Live slider (0.1–1.0) that updates the backend globally in real-time.
+- **Model Engine Selector**: Switch between YOLO26 (ONNX), YOLO26 (PyTorch), and YOLOv8-Sawit (Benchmark). Switching engines automatically clears the current analysis canvas.
+- **Model Capabilities Panel**: Dynamically shows the detection categories for the selected engine.
+- **AI Interpretation Guide**: A built-in dialog explaining the raw tensor format, coordinate systems (normalized vs. absolute pixels), and the confidence scoring mechanism.
 
 ---
 
-## 📂 Repository Structure (DDD)
+## 📦 Batch Output Contract (`manifest.json`)
 
-```text
+Each batch job produces a portable data bundle under `batch_outputs/<BATCH_ID>/`:
+
+```
+batch_outputs/
+└── BATCH_<ID>/
+    ├── manifest.json   # The Data Contract
+    └── raw/            # Original uploaded images
+        ├── <uid>_image1.jpg
+        └── <uid>_image2.jpg
+```
+
+The `manifest.json` schema:
+
+```json
+{
+  "job_id": "BATCH_XXXXXXXX",
+  "timestamp": "2026-03-30T...",
+  "source_context": { "estate": "...", "block": "...", "harvester": "...", "priority": "..." },
+  "engine": { "name": "YOLO26", "type": "onnx", "threshold": 0.25 },
+  "performance": { "start_time": "...", "end_time": "...", "duration_seconds": 1.23 },
+  "industrial_summary": { "Ripe": 5, "Unripe": 1, "Underripe": 2, "Abnormal": 0, "Empty_Bunch": 0, "Overripe": 0 },
+  "inventory": [
+    {
+      "image_id": "abc123",
+      "filename": "abc123_image.jpg",
+      "inference_ms": 38.5,
+      "raw_tensor": [...],
+      "detections": [
+        {
+          "bunch_id": 1, "class": "Ripe", "confidence": 0.92,
+          "is_health_alert": false,
+          "box": [x1, y1, x2, y2],
+          "norm_box": [0.1, 0.2, 0.5, 0.8]
+        }
+      ]
+    }
+  ]
+}
+```
+
+> **Note:** `norm_box` stores resolution-agnostic normalized coordinates (0.0–1.0), enabling the Batch Reviewer to re-render detections on any image resolution without data loss.
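The conversion is a plain ratio against the source resolution; a sketch of the round trip (helper names are illustrative):

```python
def to_norm_box(box, width, height):
    """Convert absolute pixel [x1, y1, x2, y2] to 0.0-1.0 ratios."""
    x1, y1, x2, y2 = box
    return [x1 / width, y1 / height, x2 / width, y2 / height]

def to_abs_box(norm_box, width, height):
    """Re-project normalized coordinates onto any target resolution."""
    nx1, ny1, nx2, ny2 = norm_box
    return [nx1 * width, ny1 * height, nx2 * width, ny2 * height]
```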
+
+---
+
+## 🏗️ Architecture (DDD)
+
+```
+palm-oil-ai/
 ├── src/
-│   ├── api/                # FastAPI entry points & route handlers
-│   ├── application/        # Use Cases & Orchestration logic
-│   ├── domain/             # Business Logic, Entities, & Core models
-│   └── infrastructure/     # External integrations (MongoDB, VertexAI)
-├── datasets/               # Labeled images (Train/Valid/Test)
-├── runs/                   # YOLO training logs and output weights
-├── best.pt                 # THE BRAIN: Trained model weights
-├── requirements.txt        # Python dependencies
-├── .env                    # Configuration state
-├── LICENSE                 # MIT License
-└── README.md               # You are here
+│   ├── api/
+│   │   └── main.py             # FastAPI routes, ModelManager (ONNX + PyTorch), SQLite auto-archival
+│   ├── application/
+│   │   └── analyze_bunch.py    # Use Cases: AnalyzeBunchUseCase, AnalyzeBatchUseCase, SearchSimilarUseCase
+│   ├── domain/
+│   │   └── models.py           # PalmOilBunch dataclass (core business entity)
+│   └── infrastructure/
+│       ├── repository.py       # MongoPalmOilRepository (Atlas Vector Search, CRUD)
+│       └── vision_service.py   # VertexVisionService (1408-D embeddings, Base64 encoding)
+├── demo_app.py                 # Streamlit 5-tab dashboard
+├── main.py                     # Root-level uvicorn launcher (DDD wrapper)
+├── train_palm.py               # YOLO training script
+├── export_raw_tflite.py        # ONNX/TFLite export utility
+├── best.onnx                   # YOLO26 ONNX weights (primary engine)
+├── best.pt                     # YOLO26 PyTorch weights
+├── sawit_tbs.pt                # YOLOv8-Sawit benchmark weights
+├── palm_history.db             # Local SQLite audit log
+├── batch_outputs/              # Batch job data bundles (manifest + raw images)
+├── history_archive/            # Archived images for History Vault
+├── feedback/                   # Misclassification feedback data (Human-in-the-Loop)
+├── datasets/                   # Labeled training images (Train/Valid/Test)
+├── runs/                       # YOLO training logs and output weights
+├── requirements.txt            # Python dependencies
+├── .env                        # Configuration (secrets, GCP, MongoDB)
+└── README.md                   # You are here
 ```
 
+### Detection Classes (MPOB Standard)
+
+| Class | Description | Health Alert |
+| :--- | :--- | :--- |
+| `Ripe` | Prime harvest condition — maximum OER | ❌ |
+| `Underripe` | Harvested before peak — reduces OER | ❌ |
+| `Unripe` | Harvested too early — significant yield loss | ❌ |
+| `Overripe` | Past peak — potential quality degradation | ❌ |
+| `Abnormal` | Disease or structural defect detected | ✅ **CRITICAL** |
+| `Empty_Bunch` | No fruit present — waste indicator | ✅ Warning |
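The alert column above reduces to a simple lookup; a hypothetical sketch:

```python
# Only the two non-harvestable classes raise a health alert (per the table).
HEALTH_ALERTS = {
    "Abnormal": "CRITICAL",
    "Empty_Bunch": "WARNING",
}

def health_alert(grade: str):
    """Return the alert level for a detection class, or None if harvestable."""
    return HEALTH_ALERTS.get(grade)
```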
+
+---
+
+## 🔑 Key Design Decisions
+
+- **Dual-Engine Inference**: ONNX runtime is the primary engine for its ~39ms NMS-free speed. PyTorch (`.pt`) is retained for high-resolution auditing where standard NMS post-processing is preferred.
+- **Coordinate Normalization**: The batch pipeline stores `norm_box` (0.0–1.0 ratios) alongside absolute pixel `box` coordinates. This makes the data contract resolution-agnostic for downstream ERP or vectorization subscribers.
+- **Graceful Degradation**: MongoDB Atlas and Vertex AI connections are established at startup. If they fail (e.g., no billing, no network), the system logs a warning and continues operating in local-only mode. Only cloud-dependent endpoints return errors.
+- **Human-in-the-Loop**: The "Flag Misclassification" feature in the Single Analysis tab saves flagged images and their detection metadata to a local `feedback/` folder for future model retraining data collection.
+- **SQLite Auto-Archival**: Every call to `/analyze` is automatically logged to `palm_history.db` with the image, detections, engine used, inference/processing latency, and the raw mathematical tensor — enabling a full offline audit trail.
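A minimal sketch of the auto-archival step (table and column names here are illustrative; the real schema lives in `src/api/main.py`):

```python
import json
import sqlite3
from datetime import datetime, timezone

def archive_analysis(conn, engine, detections, inference_ms):
    """Log one /analyze result to the local SQLite audit trail."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS history (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               timestamp TEXT, engine TEXT,
               detections TEXT, inference_ms REAL)"""
    )
    conn.execute(
        "INSERT INTO history (timestamp, engine, detections, inference_ms) "
        "VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), engine,
         json.dumps(detections), inference_ms),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")  # the real app opens palm_history.db
archive_analysis(conn, "onnx", [{"class": "Ripe", "confidence": 0.92}], 38.5)
```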
+
+---
+
 ## 📜 License
 
-This project is licensed under the MIT License - see the [LICENSE](file:///LICENSE) file for details.
+This project is licensed under the MIT License — see the [LICENSE](LICENSE) file for details.

+ 109 - 9
demo_app.py

@@ -461,7 +461,7 @@ def generate_batch_report(data, uploaded_files_map=None):
     pdf.set_font("Arial", "I", 8)
     pdf.cell(190, 10, "Generated by Palm Oil AI Desktop PoC - YOLO26 Engine", align="C")
     
-    return pdf.output(dest='S')
+    return bytes(pdf.output(dest='S'))
 
 
 
@@ -710,6 +710,18 @@ with tab2:
         with st.container(border=True):
             st.success(f"✅ Successfully processed {res_data['processed_count']} images.")
             
+            # Performance Timeline (New)
+            perf = res_data.get('performance', {})
+            if perf:
+                p_col1, p_col2, p_col3 = st.columns(3)
+                with p_col1:
+                    st.metric("🕒 Start Time", datetime.fromisoformat(perf['start_time']).strftime("%H:%M:%S"))
+                with p_col2:
+                    st.metric("🏁 End Time", datetime.fromisoformat(perf['end_time']).strftime("%H:%M:%S"))
+                with p_col3:
+                    st.metric("⚡ Duration", f"{perf['duration_seconds']}s")
+            st.divider()
+            
             # Batch Summary Dashboard
             st.write("### 📈 Batch Quality Overview")
             batch_summary = res_data.get('industrial_summary', {})
@@ -959,24 +971,53 @@ with tab5:
     st.subheader("📦 Local Batch Reviewer")
     st.caption("Provide a local directory path to review the AI Data Contract and evidence.")
 
-    # 1. Path Input
-    batch_path = st.text_input(
-        "Enter Batch Folder Path:", 
-        placeholder="e.g., batch_outputs/BATCH_2646CB27",
-        help="The folder should contain 'manifest.json' and a 'raw' subfolder."
-    )
+    # 1. Initialize Reviewer State
+    if "reviewer_path_key" not in st.session_state:
+        st.session_state.reviewer_path_key = 0
+
+    # 2. List Existing Batches for Suggestion
+    existing_batches = []
+    if os.path.exists("batch_outputs"):
+        # We only want directories within batch_outputs
+        existing_batches = [f for f in os.listdir("batch_outputs") if os.path.isdir(os.path.join("batch_outputs", f))]
+    
+    # Sort to show most recent first (if using timestamp-based naming)
+    existing_batches.sort(reverse=True)
+
+    # 3. Selection UI
+    col_rev1, col_rev2 = st.columns([2,1])
+    with col_rev1:
+        selected_batch = st.selectbox(
+            "Select from Production Output:", 
+            options=["-- Manual Entry --"] + existing_batches,
+            key=f"reviewer_select_{st.session_state.reviewer_path_key}",
+            help="Choose an existing batch folder from the 'batch_outputs/' directory."
+        )
+    
+    # 4. Path Input logic
+    if selected_batch != "-- Manual Entry --":
+        batch_path = os.path.join("batch_outputs", selected_batch)
+        # Display the resolved path for awareness
+        st.caption(f"📍 Reviewing: `{batch_path}`")
+    else:
+        batch_path = st.text_input(
+            "Enter Custom Batch Folder Path:", 
+            placeholder="e.g., path/to/your/batch",
+            help="Provide the path to any folder containing 'manifest.json' and 'raw/' subfolder.",
+            key=f"reviewer_path_{st.session_state.reviewer_path_key}"
+        )
 
     if batch_path:
         manifest_path = os.path.join(batch_path, "manifest.json")
         raw_dir = os.path.join(batch_path, "raw")
 
-        # 2. Validation
+        # 3. Validation
         if not os.path.exists(manifest_path):
             st.error(f"❌ Could not find `manifest.json` at: `{manifest_path}`")
         elif not os.path.exists(raw_dir):
             st.error(f"❌ Could not find `raw` folder at: `{raw_dir}`")
         else:
-            # 3. Load the Contract
+            # 4. Load the Contract
             try:
                 with open(manifest_path, 'r') as f:
                     manifest = json.load(f)
@@ -994,6 +1035,49 @@ with tab5:
                         eng = manifest.get('engine', {})
                         st.write(f"**AI Engine:** {eng.get('name')} ({eng.get('type')})")
                         st.write(f"**Threshold:** {eng.get('threshold')}")
+                    
+                    # Performance Audit (New)
+                    perf = manifest.get('performance')
+                    if perf:
+                        st.divider()
+                        p1, p2, p3 = st.columns(3)
+                        p1.write(f"🕒 **Started:** {datetime.fromisoformat(perf['start_time']).strftime('%H:%M:%S')}")
+                        p2.write(f"🏁 **Finished:** {datetime.fromisoformat(perf['end_time']).strftime('%H:%M:%S')}")
+                        p3.write(f"⚡ **Duration:** {perf['duration_seconds']}s")
+
+                st.divider()
+
+                # --- 4. Batch Quality Overview (Dashboard) ---
+                st.write("### 📈 Batch Quality Overview")
+                industrial_summary = manifest.get('industrial_summary')
+                
+                # Backward compatibility: calculate if missing
+                if not industrial_summary:
+                    industrial_summary = {}
+                    for item in manifest.get('inventory', []):
+                        for det in item.get('detections', []):
+                            grade = det.get('class')
+                            industrial_summary[grade] = industrial_summary.get(grade, 0) + 1
+                
+                if industrial_summary:
+                    sum_df = pd.DataFrame(list(industrial_summary.items()), columns=['Grade', 'Count'])
+                    sum_df = sum_df[sum_df['Count'] > 0]
+                    
+                    b_col1, b_col2 = st.columns([1, 1])
+                    with b_col1:
+                        st.dataframe(sum_df, hide_index=True, width='stretch')
+                    with b_col2:
+                        if not sum_df.empty:
+                            fig_batch = px.bar(sum_df, x='Grade', y='Count', color='Grade',
+                                              color_discrete_map={
+                                                 'Ripe': '#22c55e',
+                                                 'Underripe': '#fbbf24',
+                                                 'Unripe': '#3b82f6',
+                                                 'Abnormal': '#dc2626',
+                                                 'Empty_Bunch': '#64748b'
+                                              })
+                            fig_batch.update_layout(margin=dict(t=0, b=0, l=0, r=0), height=200, showlegend=False)
+                            st.plotly_chart(fig_batch, width='stretch', key="rev_batch_bar")
 
                 st.divider()
 
@@ -1039,7 +1123,23 @@ with tab5:
                                 
                                 if st.button(f"🚀 Vectorize Image {item['image_id']}", key=f"btn_{item['image_id']}"):
                                     st.toast(f"Broadcasting data for {fname} to remote subscribers...")
+                                
+                                # 4. Technical Audit Trail (Subscriber's Perspective)
+                                st.divider()
+                                with st.expander("🔬 Raw Mathematical Tensor", expanded=False):
+                                    st.info("Technical Evidence: Sample of the unprocessed output from the inference contract.")
+                                    raw_data = item.get('raw_tensor')
+                                    if raw_data:
+                                        st.json(raw_data)
+                                    else:
+                                        st.warning("No raw tensor found in manifest for this record.")
                     else:
                         st.warning(f"⚠️ Image missing from /raw folder: `{fname}`")
+
+                st.divider()
+                if st.button("🗑️ Clear Results & Start New Review", width='stretch'):
+                    st.session_state.reviewer_path_key += 1
+                    st.rerun()
+
             except Exception as e:
                 st.error(f"Failed to load batch: {e}")

BIN
palm_history.db


+ 24 - 8
src/api/main.py

@@ -302,6 +302,7 @@ async def process_batch(
     output_dir = os.path.join("batch_outputs", batch_id)
     os.makedirs(os.path.join(output_dir, "raw"), exist_ok=True)
     
+    start_time = datetime.now()
     meta_dict = json.loads(metadata)
     batch_records = []
     
@@ -341,19 +342,38 @@ async def process_batch(
             "image_id": unique_id,
             "filename": filename,
             "detections": normalized_dets,
-            "inference_ms": inf_ms
+            "inference_ms": inf_ms,
+            "raw_tensor": raw_sample # Added for technical evidence/contract
         })
 
-    # 4. Generate the Manifest (The Contract)
+    end_time = datetime.now()
+    duration = (end_time - start_time).total_seconds()
+
+    # 4. Generate the Summary (For Manifest and immediate UI feedback)
+    active_names = model_manager.class_names if model_type != "yolov8_sawit" else model_manager.benchmark_class_names
+    total_summary = {name: 0 for name in active_names.values()}
+    for record in batch_records:
+        for det in record['detections']:
+            total_summary[det['class']] += 1
+
+    # 5. Generate the Manifest (The Contract)
+    performance_metrics = {
+        "start_time": start_time.isoformat(),
+        "end_time": end_time.isoformat(),
+        "duration_seconds": round(duration, 2)
+    }
+
     manifest = {
         "job_id": batch_id,
-        "timestamp": datetime.now().isoformat(),
+        "timestamp": end_time.isoformat(),
         "source_context": meta_dict,
         "engine": {
             "name": "YOLO26",
             "type": model_type,
             "threshold": current_conf
         },
+        "performance": performance_metrics, # Added performance metrics
+        "industrial_summary": total_summary, # Added for subscribers
         "inventory": batch_records
     }
     
@@ -362,11 +382,6 @@ async def process_batch(
 
     # Note: Maintaining compatibility with the frontend's expectation of 'industrial_summary'
     # and 'processed_count' for immediate UI feedback.
-    active_names = model_manager.class_names if model_type != "yolov8_sawit" else model_manager.benchmark_class_names
-    total_summary = {name: 0 for name in active_names.values()}
-    for record in batch_records:
-        for det in record['detections']:
-            total_summary[det['class']] += 1
 
     return {
         "status": "success",
@@ -375,6 +390,7 @@ async def process_batch(
         "processed_count": len(files),
         "total_count": sum(total_summary.values()),
         "industrial_summary": total_summary,
+        "performance": performance_metrics, 
         "record_ids": [r['image_id'] for r in batch_records], # Backward compatibility
         "manifest_preview": manifest,
         "detailed_results": [{"filename": r['filename'], "detection": d} for r in batch_records for d in r['detections']] # Backward compatibility
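For reference, the `industrial_summary` aggregation moved in the hunk above reduces to a small pure function (a sketch, not the exact server code):

```python
def industrial_summary(class_names, batch_records):
    """Aggregate per-class detection counts across a batch of records."""
    total = {name: 0 for name in class_names}
    for record in batch_records:
        for det in record["detections"]:
            total[det["class"]] += 1
    return total
```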