
feat: initialize frontend project with Angular, ONNX Runtime, and TFLite inference services

Dr-Swopt, 1 week ago
parent commit 9bb1ac592a
3 changed files with 206 additions and 38 deletions

1. frontend/README.md (+184 −24)
2. frontend/angular.json (+6 −1)
3. frontend/src/app/services/local-inference.service.ts (+16 −13)

+ 184 - 24
frontend/README.md

@@ -1,59 +1,219 @@
-# Frontend
+# PalmOilAI Frontend
 
-This project was generated using [Angular CLI](https://github.com/angular/angular-cli) version 20.0.5.
+**Version 2.6 Industrial** — A fully client-side Angular web application for palm oil fruit bunch ripeness detection, running AI inference 100% in the browser with no backend dependency.
 
-## Development server
+---
 
-To start a local development server, run:
+## Overview
+
+The PalmOilAI frontend is a standalone Angular application that loads YOLOv8-based detection models directly in the browser using **ONNX Runtime Web** (for the industrial engine) and **TensorFlow.js TFLite** (for the standard PoC engine). All inference, image preprocessing, and result history are handled client-side with no network round-trips.
+
+### Detection Classes
+
+The models detect six ripeness categories:
+
+| Class | Description | Color |
+|---|---|---|
+| `Ripe` | Ready for harvest | Green |
+| `Unripe` | Not ready | Olive |
+| `Underripe` | Almost ready | Amber |
+| `Overripe` | Past optimal harvest | Brown |
+| `Abnormal` | Health alert — disease or damage | Red |
+| `Empty_Bunch` | No fruit present | Grey |
+
+---
+
+## Technology Stack
+
+| Category | Technology | Version |
+|---|---|---|
+| Framework | Angular | ^20.0.0 |
+| Bundler | Vite/esbuild (via `@angular/build`) | ^20.0.5 |
+| ONNX Inference | `onnxruntime-web` | ^1.24.3 |
+| TFLite Inference | `@tensorflow/tfjs-tflite` | ^0.0.1-alpha.10 |
+| TF Core | `@tensorflow/tfjs` | ^4.22.0 |
+| Styling | SCSS | — |
+| Language | TypeScript | ~5.8.2 |
+
+---
+
+## Project Structure
 
-```bash
-ng serve
 ```
+src/
+├── app/
+│   ├── components/
+│   │   ├── analyzer/       # Main scanner/inference UI
+│   │   ├── header/         # Navigation & theme toggle
+│   │   ├── history/        # Inference history (Vault)
+│   │   └── settings/       # Confidence threshold config (backend)
+│   ├── services/
+│   │   ├── local-inference.service.ts   # Core AI engine (ONNX + TFLite)
+│   │   ├── image-processor.service.ts   # Image resize + CHW preprocessing
+│   │   ├── local-history.service.ts     # LocalStorage-based result vault
+│   │   ├── theme.service.ts             # Dark/light mode persistence
+│   │   ├── api.service.ts               # Backend API client (optional)
+│   │   └── tflite.d.ts                  # TypeScript declarations for TFLite globals
+│   ├── app.routes.ts       # Client-side routing
+│   └── app.ts              # Root application component
+└── assets/
+    ├── models/
+    │   ├── onnx/best.onnx              # YOLOv8 Industrial model (~9.4 MB)
+    │   └── tflite/
+    │       ├── best_float32.tflite    # Standard PoC model, full precision (~9.4 MB)
+    │       └── best_float16.tflite    # Reduced precision variant (~4.8 MB)
+    ├── wasm/                           # ONNX Runtime WASM binaries
+    └── tflite-wasm/                    # TFLite WASM runtime glue files
+```
+
+---
+
+## Application Routes
+
+| Path | Component | Description |
+|---|---|---|
+| `/` | → `/analyzer` | Redirect to Scanner |
+| `/analyzer` | `AnalyzerComponent` | Main image upload + inference UI |
+| `/history` | `HistoryComponent` | Saved inference "Vault" records |
+| `/settings` | `SettingsComponent` | Backend confidence threshold (requires backend) |
+
+---
+
+## Key Services
+
+### `LocalInferenceService`
+The core AI engine. Selects between **ONNX** and **TFLite** backends based on the model file extension.
+
+- **ONNX path** (`best.onnx`): Uses `onnxruntime-web` with WASM execution provider. Input tensor shape: `[1, 3, 640, 640]` (CHW).
+- **TFLite path** (`best_float32.tflite`): Accesses the globally-loaded `tflite` object (injected via `angular.json` scripts). Input is transposed from CHW to HWC (`[1, 640, 640, 3]`) before prediction.
+
+> **Bundler Note:** Because `@tensorflow/tfjs-tflite` is a legacy CommonJS/UMD hybrid that is incompatible with the modern Vite/esbuild ESM bundler, **both TF and TFLite are loaded as global scripts** in `angular.json`. This is intentional — it ensures they populate `window.tflite` and `window.tf` before Angular bootstraps, bypassing all module resolution issues.
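
The CHW→HWC transpose on the TFLite path is delegated to `tf.transpose` in the service; as a minimal sketch, the index math it performs looks like this (function name and loop structure are illustrative, not the service's code):

```typescript
// Transpose a flat [1, C, H, W] Float32Array into [1, H, W, C] layout.
function chwToHwc(chw: Float32Array, c: number, h: number, w: number): Float32Array {
  const hwc = new Float32Array(chw.length);
  for (let ch = 0; ch < c; ch++) {
    for (let y = 0; y < h; y++) {
      for (let x = 0; x < w; x++) {
        // CHW index: ch*H*W + y*W + x  →  HWC index: y*W*C + x*C + ch
        hwc[y * w * c + x * c + ch] = chw[ch * h * w + y * w + x];
      }
    }
  }
  return hwc;
}
```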
+
+### `ImageProcessorService`
+Resizes any input image to `640×640` using an offscreen Canvas and converts the pixel data from RGBA to a normalized `Float32Array` in CHW format (`[1, 3, 640, 640]`).
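
The RGBA→CHW conversion can be sketched as pure array math (a simplified sketch assuming 0–1 normalization by dividing by 255; the Canvas resize step is omitted since it requires the DOM, and the service's actual normalization may differ):

```typescript
// Convert canvas RGBA pixel data into a normalized CHW Float32Array.
// Output is the flat [3, H, W] data; the batch dimension is implicit.
function rgbaToChw(rgba: Uint8ClampedArray, width: number, height: number): Float32Array {
  const plane = width * height;
  const chw = new Float32Array(3 * plane);
  for (let i = 0; i < plane; i++) {
    chw[i] = rgba[i * 4] / 255;                 // R plane
    chw[plane + i] = rgba[i * 4 + 1] / 255;     // G plane
    chw[2 * plane + i] = rgba[i * 4 + 2] / 255; // B plane (alpha is dropped)
  }
  return chw;
}
```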
 
-Once the server is running, open your browser and navigate to `http://localhost:4200/`. The application will automatically reload whenever you modify any of the source files.
+### `LocalHistoryService`
+Persists inference results to `localStorage` under the key `palm_oil_vault`. Stores up to **20 records** (FIFO), each containing the detection summary, inference latency, engine type, thumbnail image data, and bounding box coordinates.
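
The FIFO eviction logic can be sketched as follows (the record fields and function name are illustrative, not the service's actual shape; the store is injected so the sketch works outside the browser, where the real service uses `localStorage` directly):

```typescript
// Minimal sketch of the 20-record FIFO vault under the `palm_oil_vault` key.
interface VaultRecord { timestamp: number; engine: string; summary: string; }
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const VAULT_KEY = 'palm_oil_vault';
const MAX_RECORDS = 20;

function saveRecord(store: KeyValueStore, record: VaultRecord): void {
  const vault: VaultRecord[] = JSON.parse(store.getItem(VAULT_KEY) ?? '[]');
  vault.unshift(record);     // newest record first
  vault.splice(MAX_RECORDS); // evict the oldest records beyond the cap
  store.setItem(VAULT_KEY, JSON.stringify(vault));
}
```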
 
-## Code scaffolding
+### `ThemeService`
+Manages dark/light mode by toggling `theme-dark` / `theme-light` CSS classes on `<body>`. Persists the user's preference to `localStorage` under `palm-ai-theme`.
 
-Angular CLI includes powerful code scaffolding tools. To generate a new component, run:
+---
+
+## Required Asset Files
+
+The following binary files must be manually placed — they are not installed by `npm install`:
+
+### ONNX Models (`src/assets/models/onnx/`)
+| File | Description |
+|---|---|
+| `best.onnx` | YOLOv8 industrial detection model |
+
+### TFLite Models (`src/assets/models/tflite/`)
+| File | Description |
+|---|---|
+| `best_float32.tflite` | Full-precision TFLite model |
+| `best_float16.tflite` | Half-precision TFLite model (smaller, faster) |
+
+### ONNX WASM Runtime (`src/assets/wasm/`)
+Copy from `node_modules/onnxruntime-web/dist/` after `npm install`.
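
For example (exact file names vary by `onnxruntime-web` version, so a glob over the binaries is used here):

```shell
# Copy the ONNX Runtime WASM binaries into the assets folder.
mkdir -p src/assets/wasm
cp node_modules/onnxruntime-web/dist/*.wasm src/assets/wasm/
```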
+
+### TFLite WASM Runtime (`src/assets/tflite-wasm/`)
+Copy from `node_modules/@tensorflow/tfjs-tflite/dist/` after `npm install`. Required files:
+
+| File | Purpose |
+|---|---|
+| `tflite_web_api_cc.js` | TFLite glue (non-SIMD) |
+| `tflite_web_api_cc.wasm` | TFLite engine (non-SIMD) |
+| `tflite_web_api_cc_simd.js` | TFLite glue (SIMD-accelerated) |
+| `tflite_web_api_cc_simd.wasm` | TFLite engine (SIMD-accelerated) |
+| `tflite_web_api_cc_simd_threaded.js` | TFLite glue (SIMD + multi-threaded) |
+| `tflite_web_api_cc_simd_threaded.wasm` | TFLite engine (SIMD + multi-threaded) |
+| `tflite_web_api_client.js` | TFLite high-level client API |
+
+---
+
+## Getting Started
+
+### Prerequisites
+- Node.js ≥ 18
+- Angular CLI (`npm install -g @angular/cli`)
+
+### Installation
 
 ```bash
-ng generate component component-name
+npm install
 ```
 
-For a complete list of available schematics (such as `components`, `directives`, or `pipes`), run:
+### Development Server
 
 ```bash
-ng generate --help
+# Standard local serve
+ng serve
+
+# Serve on all network interfaces (for device testing)
+ng serve --host 0.0.0.0 --port 4200
 ```
 
-## Building
+Open your browser and navigate to `http://localhost:4200/`. The app redirects to `/analyzer` by default.
 
-To build the project run:
+### Build for Production
 
 ```bash
 ng build
 ```
 
-This will compile your project and store the build artifacts in the `dist/` directory. By default, the production build optimizes your application for performance and speed.
+Build artifacts are placed in the `dist/` directory.
 
-## Running unit tests
+> **Important:** The production build will warn about initial bundle size due to the TF.js global scripts (~4MB). This is expected — both `tf.min.js` and `tf-tflite.min.js` are loaded as external scripts, not tree-shakeable modules.
 
-To execute unit tests with the [Karma](https://karma-runner.github.io) test runner, use the following command:
+---
 
-```bash
-ng test
+## Inference Pipeline
+
+The `AnalyzerComponent` orchestrates the following pipeline when **Run Inference** is clicked:
+
+```
+User uploads image
+       │
+       ▼
+ImageProcessorService.processImage()
+  └─ Resize to 640×640 via Canvas
+  └─ Convert RGBA → CHW Float32Array ([1, 3, 640, 640])
+       │
+       ▼
+LocalInferenceService.loadModel(modelPath)
+  ├─ ONNX: Set WASM path → Create InferenceSession
+  └─ TFLite: setWasmPath → loadTFLiteModel
+       │
+       ▼
+LocalInferenceService.runInference(input)
+  ├─ ONNX: Create tensor → session.run() → extract output
+  └─ TFLite: Transpose CHW→HWC → model.predict() → extract data
+       │
+       ▼
+LocalInferenceService.parseDetections(rawData, threshold)
+  └─ Filter by confidence → map to class labels + bounding boxes
+       │
+       ▼
+AnalyzerComponent draws bounding boxes on Canvas
+LocalHistoryService saves result to localStorage
 ```
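
The confidence-filtering step at the end of the pipeline can be sketched as follows, assuming the standard YOLOv8 output layout `[1, 4 + numClasses, numBoxes]` (rows 0–3 are `cx, cy, w, h`, followed by one score row per class); the names and exact parsing in the service may differ:

```typescript
// Minimal sketch of parseDetections-style filtering over a raw YOLOv8 output.
interface Detection { classId: number; score: number; box: [number, number, number, number]; }

function parseDetections(
  raw: Float32Array, numClasses: number, numBoxes: number, threshold: number,
): Detection[] {
  const detections: Detection[] = [];
  for (let i = 0; i < numBoxes; i++) {
    // Find the best-scoring class for this candidate box.
    let bestClass = 0, bestScore = 0;
    for (let c = 0; c < numClasses; c++) {
      const score = raw[(4 + c) * numBoxes + i];
      if (score > bestScore) { bestScore = score; bestClass = c; }
    }
    if (bestScore >= threshold) {
      detections.push({
        classId: bestClass,
        score: bestScore,
        // cx, cy, w, h for candidate i, gathered across the first four rows
        box: [raw[i], raw[numBoxes + i], raw[2 * numBoxes + i], raw[3 * numBoxes + i]],
      });
    }
  }
  return detections;
}
```

A real pipeline would typically also apply non-maximum suppression after this filtering to drop overlapping boxes.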
 
-## Running end-to-end tests
+---
 
-For end-to-end (e2e) testing, run:
+## Running Tests
 
 ```bash
-ng e2e
+ng test
 ```
 
-Angular CLI does not come with an end-to-end testing framework by default. You can choose one that suits your needs.
+Uses Karma + Jasmine. No end-to-end test framework is configured by default.
+
+---
 
 ## Additional Resources
 
-For more information on using the Angular CLI, including detailed command references, visit the [Angular CLI Overview and Command Reference](https://angular.dev/tools/cli) page.
+- [Angular CLI Reference](https://angular.dev/tools/cli)
+- [ONNX Runtime Web](https://onnxruntime.ai/docs/get-started/with-javascript/web.html)
+- [TensorFlow.js TFLite](https://www.tensorflow.org/js/guide/tflite)

+ 6 - 1
frontend/angular.json

@@ -40,6 +40,10 @@
             ],
             "styles": [
               "src/styles.scss"
+            ],
+            "scripts": [
+              "node_modules/@tensorflow/tfjs/dist/tf.min.js",
+              "node_modules/@tensorflow/tfjs-tflite/dist/tf-tflite.min.js"
             ]
           },
           "configurations": {
@@ -103,7 +107,8 @@
             ],
             "styles": [
               "src/styles.scss"
-            ]
+            ],
+            "scripts": []
           }
         }
       }

+ 16 - 13
frontend/src/app/services/local-inference.service.ts

@@ -1,6 +1,9 @@
 import { Injectable } from '@angular/core';
 import * as ort from 'onnxruntime-web';
-import * as tf from '@tensorflow/tfjs';
+
+// Use 'any' to bypass TypeScript checking for global libraries
+declare var tflite: any;
+declare var tf: any;
 
 @Injectable({ providedIn: 'root' })
 export class LocalInferenceService {
@@ -13,24 +16,20 @@ export class LocalInferenceService {
   };
 
   async loadModel(modelPath: string) {
-    await tf.ready(); // Ensure TFJS core is initialized
-
     if (modelPath.endsWith('.onnx')) {
       ort.env.wasm.wasmPaths = '/assets/wasm/';
       this.onnxSession = await ort.InferenceSession.create(modelPath, { executionProviders: ['wasm'] });
       this.tfliteModel = null;
-      console.log('ONNX Engine Ready');
+      console.log('ONNX Engine Ready');
     } else {
-      // DYNAMIC IMPORT: This fixes the "setWasmPath undefined" error
-      const tflite = await import('@tensorflow/tfjs-tflite');
-      
-      if (tflite && tflite.setWasmPath) {
+      // Accessing global variables from angular.json scripts
+      if (typeof tflite !== 'undefined') {
         tflite.setWasmPath('/assets/tflite-wasm/');
         this.tfliteModel = await tflite.loadTFLiteModel(modelPath);
         this.onnxSession = null;
-        console.log('TFLite Engine Ready');
+        console.log('TFLite Engine Ready');
       } else {
-        throw new Error('TFLite module resolution failed');
+        throw new Error('TFLite library not loaded. Check angular.json scripts.');
       }
     }
   }
@@ -42,11 +41,15 @@ export class LocalInferenceService {
         const output = await this.onnxSession.run({ [this.onnxSession.inputNames[0]]: tensor });
         return output[this.onnxSession.outputNames[0]].data as Float32Array;
       } else if (this.tfliteModel) {
-        // Create tensor and predict
+        // Use Global 'tf' for tensor conversion
         const inputTensor = tf.tensor(input, [1, 3, 640, 640]);
-        const result = await this.tfliteModel.predict(inputTensor);
+        // YOLOv8 TFLite often requires HWC shape [1, 640, 640, 3]
+        const hwcTensor = tf.transpose(inputTensor, [0, 2, 3, 1]);
+        
+        const result = await this.tfliteModel.predict(hwcTensor);
         const data = await result.data();
-        tf.dispose([inputTensor, result]); // Cleanup memory
+        
+        tf.dispose([inputTensor, hwcTensor, result]); 
         return data as Float32Array;
       }
     } catch (err) {