Phase 7 focuses on turning FlightEdge from a working prototype into a polished portfolio project. The objective of this phase is to improve repository quality, complete documentation, clean up code structure, and prepare the project for sharing on GitHub, discussing in interviews, and referencing on a resume. During this phase we will refine the README, improve diagrams and explanations, clean up code, and summarize project outcomes.

Key goals:
- Finalize and polish the README
- Add architecture diagrams and workflow explanations
- Clean up repository structure and code comments
- Ensure setup instructions work from a clean clone
- Summarize benchmark and optimization results
- Write strong resume bullets and project summary language
- Prepare screenshots or demo visuals for portfolio use

Deliverables:
- polished README and docs
- cleaned codebase
- reproducible setup instructions
- architecture visuals
- finalized benchmark summary
- resume-ready project description

Completion of this phase will produce a polished, shareable project suitable for GitHub, interviews, and resume use.
Due by May 6, 2026

Phase 6 focuses on improving inference efficiency and studying deployment tradeoffs for edge environments. The objective of this phase is to benchmark baseline inference performance, export the model into deployment-friendly formats, and experiment with optimization techniques such as ONNX export and quantization. During this phase we will compare standard inference against optimized inference paths and evaluate speed, memory use, and accuracy tradeoffs.

Key goals:
- Benchmark baseline inference latency and throughput
- Export the model to ONNX
- Experiment with FP16 and/or INT8 quantization where appropriate
- Compare baseline and optimized inference performance
- Measure latency, throughput, CPU usage, and memory usage
- Evaluate any accuracy degradation caused by optimization
- Document findings and deployment tradeoffs

Deliverables:
- `export_onnx.py` for model export
- `quantize.py` for quantization experiments
- benchmark scripts for baseline and optimized inference
- performance comparison results in `benchmarks/results/`
- documentation summarizing optimization results

Completion of this phase will produce a performance-focused evaluation of how the anomaly detection pipeline can be adapted for edge-style deployment constraints.
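The baseline latency and throughput numbers above can be collected with a small timing harness. A minimal sketch, assuming nothing about the real model: the `predict_fn` stand-in, batch shape, and run counts here are illustrative placeholders, not FlightEdge's actual inference path.

```python
import time
import statistics


def benchmark(predict_fn, batch, n_warmup=10, n_runs=100):
    """Time repeated calls to predict_fn and report simple latency stats."""
    for _ in range(n_warmup):  # warm-up runs to avoid cold-start skew
        predict_fn(batch)
    latencies = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        predict_fn(batch)
        latencies.append((time.perf_counter() - t0) * 1000.0)  # milliseconds
    latencies.sort()
    return {
        "p50_ms": statistics.median(latencies),
        "p95_ms": latencies[int(0.95 * (n_runs - 1))],
        "throughput_per_s": 1000.0 * n_runs / sum(latencies),
    }


if __name__ == "__main__":
    # Stand-in "model": a cheap sum-of-squares over a fake feature vector.
    fake_batch = [float(i) for i in range(64)]
    print(benchmark(lambda x: sum(v * v for v in x), fake_batch))
```

Running the same harness against the baseline model and the ONNX/quantized variants gives directly comparable p50/p95 figures for `benchmarks/results/`.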
Due by April 29, 2026

Phase 5 focuses on making FlightEdge observable and easier to debug. The objective of this phase is to build a lightweight dashboard and monitoring layer that makes telemetry streams, anomaly detections, and system behavior visible in real time. During this phase we will create a simple dashboard, surface anomaly alerts visually, and add operational visibility into the system.

Key goals:
- Build a lightweight dashboard using Streamlit or a similar framework
- Display live telemetry values and recent telemetry history
- Surface anomaly detections clearly in the UI
- Show useful operational metrics such as event rate, anomaly count, and inference latency
- Improve logging and service-level visibility
- Document how to use the dashboard for debugging and demonstration

Deliverables:
- `dashboard/app.py` for real-time visualization
- visual anomaly alert display
- operational logging improvements
- documentation for dashboard usage and observability features

Completion of this phase will produce a demo-friendly and engineer-friendly observability layer for understanding system behavior in real time.
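The operational metrics named above (event rate, anomaly count, inference latency) can be aggregated independently of any UI framework, so the dashboard only has to render a snapshot dict. A minimal sketch of such a tracker; the class name, window size, and field names are assumptions for illustration, not FlightEdge's actual API:

```python
import time
from collections import deque


class OpsMetrics:
    """Rolling-window operational metrics: event rate, anomaly count, latency."""

    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self.events = deque()  # entries of (timestamp, latency_ms, is_anomaly)

    def record(self, latency_ms, is_anomaly, now=None):
        """Record one processed telemetry event and drop expired entries."""
        now = time.monotonic() if now is None else now
        self.events.append((now, latency_ms, is_anomaly))
        cutoff = now - self.window_s
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()

    def snapshot(self):
        """Summarize the current window for display or logging."""
        n = len(self.events)
        return {
            "event_rate_per_s": n / self.window_s,
            "anomaly_count": sum(1 for _, _, a in self.events if a),
            "mean_latency_ms": (sum(l for _, l, _ in self.events) / n) if n else 0.0,
        }
```

A Streamlit page (or a plain log line) can then poll `snapshot()` on a timer, keeping the metrics logic testable without the UI.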
Due by April 22, 2026

Phase 4 focuses on training and integrating the anomaly detection layer for FlightEdge. The objective of this phase is to build a baseline anomaly detection model, integrate it into the live pipeline, and surface anomaly scores or alerts from streaming telemetry data. During this phase we will choose a baseline model, train it on simulated telemetry, run inference in real time, and validate that anomalies are being detected correctly.

Key goals:
- Select a baseline anomaly detection approach such as Isolation Forest, One-Class SVM, or a lightweight autoencoder
- Train the model using synthetic telemetry data
- Save and version model artifacts
- Integrate inference into the streaming consumer pipeline
- Emit anomaly scores, labels, or alerts from live telemetry
- Log anomalous events for debugging and review
- Document model choice, assumptions, and limitations

Deliverables:
- `train.py` for baseline anomaly model training
- `infer.py` for live inference
- saved model artifacts in `model/artifacts/`
- anomaly scoring integrated into the telemetry pipeline
- documentation describing model behavior and anomaly criteria

Completion of this phase will produce a working real-time anomaly detection system operating on streaming telemetry data.
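Of the candidate approaches above, Isolation Forest is the simplest to stand up as a baseline. A hedged sketch of the train-then-score loop using scikit-learn; the feature layout, `contamination` value, and synthetic data here are illustrative assumptions, not FlightEdge's actual training setup:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for "normal" telemetry feature vectors.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 2))

# contamination sets the expected anomaly fraction; tune to the telemetry.
model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
model.fit(X_train)

# Live inference: one nominal point and one obvious outlier.
X_live = np.array([[0.1, -0.2],
                   [10.0, 10.0]])
scores = model.decision_function(X_live)  # lower score = more anomalous
labels = model.predict(X_live)            # -1 = anomaly, 1 = normal
```

The fitted model can be persisted (e.g. with `joblib.dump`) as the versioned artifact under `model/artifacts/`, and the streaming consumer then calls `decision_function` per feature vector to emit scores and alerts.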
Due by April 15, 2026

Phase 3 focuses on transforming raw telemetry streams into model-ready features. The objective of this phase is to build a preprocessing and feature extraction layer that converts live telemetry data into useful rolling-window statistics and derived signals for anomaly detection. During this phase we will implement feature windowing, derived metrics, normalization logic, and validation for model inputs.

Key goals:
- Build a preprocessing pipeline for incoming telemetry events
- Implement rolling windows over time-series telemetry
- Compute derived features such as moving averages, rates of change, variance, and z-scores
- Normalize or scale features where appropriate
- Ensure the feature pipeline can operate continuously in real time
- Document feature definitions and preprocessing decisions

Deliverables:
- `preprocess.py` for raw telemetry transformation
- `feature_windows.py` for rolling feature computation
- validated model-ready feature vectors
- documentation describing derived features and processing flow

Completion of this phase will produce a live feature engineering layer that turns telemetry streams into structured inputs for anomaly detection models.
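The rolling-window features listed above (moving average, rate of change, variance, z-score) can be computed incrementally per telemetry channel with a bounded deque, which keeps the pipeline suitable for continuous streaming. A minimal sketch; the class name, window size, and feature keys are assumptions for illustration, not the actual `feature_windows.py` interface:

```python
import math
from collections import deque


class RollingFeatures:
    """Fixed-size rolling window over a single telemetry channel."""

    def __init__(self, size=32):
        self.window = deque(maxlen=size)  # oldest samples drop automatically

    def update(self, value):
        """Append one sample and return the derived feature vector."""
        prev = self.window[-1] if self.window else value
        self.window.append(value)
        n = len(self.window)
        mean = sum(self.window) / n
        var = sum((v - mean) ** 2 for v in self.window) / n
        std = math.sqrt(var)
        return {
            "mean": mean,
            "variance": var,
            "rate_of_change": value - prev,
            "zscore": (value - mean) / std if std > 0 else 0.0,
        }
```

Each call is O(window size); if that ever matters, the mean and variance can be maintained incrementally instead of recomputed per sample.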
Due by April 8, 2026 (3/3 issues closed)