diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml new file mode 100644 index 00000000..085ac051 --- /dev/null +++ b/.github/FUNDING.yml @@ -0,0 +1,3 @@ +# These are supported funding model platforms + +github: MAMware diff --git a/.gitignore b/.gitignore index 21c92d68..3e60b28d 100644 --- a/.gitignore +++ b/.gitignore @@ -1,5 +1,5 @@ node_modules/ -future/test + # Byte-compiled / optimized / DLL files __pycache__/ *.py[cod] @@ -174,3 +174,4 @@ cython_debug/ # PyPI configuration file .pypirc +future/project-files.txt diff --git a/README.md b/README.md index bf78a09b..2880676e 100644 --- a/README.md +++ b/README.md @@ -1,22 +1,31 @@ -# AcoustSee - **a photon to phonon code** - ## [Introduction](#introduction) -The content in this repository aims to transform a visual environment into a intuitive soundscape, in a synesthesic transform, empowering the user to experience the visual world by synthetic audio cues in real time. +The content in this repository provides the code and docs for an accessibility web app that aims to help visually impaired users by transforming visual environments into soundscapes in real time. -> **Why?** We believe in solving real problems with open-source software in a fast, accessible, and impactful way. You are invited to join us to improve and make a difference! +> We believe in enhancing humanity with open-source software. You are invited to join us in improving this mission and making a difference! ### Project Vision -The synesthesia at project is the translation from a photon to a phonon, with a tech stack that should be easy to get for a regular user, in this case a visually challenged one. -This repo holds code that when loaded into a mobile phone browser translates its camera imput into a stereo soundscape, where a sidewalk that the user is waling on could have a distintive spectrum signature that is being hear by both hears, a wall at the left with its should make its distintive sound signature, a car, a hole... a light... and so on... 
+- Synesthetic Translation: Converting visual data such as a live camera feed into stereo audio cues, mapping colors, shapes, and motion to distinct sound signatures. +- Dynamic Soundscapes for Location-Aware Audio: Adjusts audio in real time based on object distance and motion; e.g., an approaching object shifts in tone, volume, and complexity as it moves. + + +### Tech stack needed + +Development: Pure Vanilla JS with no external dependencies ->Imagine a person that is unable to see, sitting at a park with headphones on and paired to a mobile phone. This phone is being weared like a necklage with the camera facing a quiet swing, as the seat of the swing gets back/further the sound generator makes a sound spectra that has a less broad harmonic content and a lower volume and wen it swings closer its spectra complexity raises, broader and louder. +Software: Runs in a web browser from 2020 and up (ES6+) +Hardware: The design is tested with low settings on a mobile phone from 2020. +Input: Video camera for real-time visual data capture. +Audio Output: Stereo headphones or speakers for spatial audio effects. -This project aims to make this imagination into a reality, with its first milestones coded entirely by xAI Grok is now ready to welcome contributors from open source community to enhace in a afordable way, perception. +### BRIEF + +Entirely coded by xAI Grok 3 up to Milestone 4, as per @MAMware prompts +Milestone 5 got help from OpenAI ChatGPT 4.1, o4-mini, and Anthropic Claude 4 via @github copilot at Codespaces, and also Grok 4, which is in charge of the restructuring from v0.5.12 +Currently at Milestone 9 (26.02.09) the project is developed in private and nearing public announcement. 
## Table of Contents @@ -33,141 +42,150 @@ This project aims to make this imagination into a reality, with its first milest ### [Usage](docs/USAGE.md) -The latest stable proof of concept can be run from - -- https://mamware.github.io/acoustsee/present - -Previous versions and other approachs can be found at - -- https://mamware.github.io/acoustsee/past +The webapp runs in Internet browsers on mobile hardware from 2021 and up. -Unstable versions currently being developed adn tested can be found at +- Current version [RUN](https://mamware.github.io/acoustsee/present/) +- Previous versions [RUN](https://mamware.github.io/acoustsee/past/old_versions/preview) +- Testing developments [RUN](https://mamware.github.io/acoustsee/future/web) -- https://mamware.github.io/acoustsee/future +### Check [Usage](docs/USAGE.md) for further details -To use it, having the most up to date version of mobile web browsers is diserable yet most mobile internet browsers from 2021 should work. +### [Current Status](#status) **OUTDATED** -For a complete mobile browser compability list check the doc [Usage](docs/USAGE.md) there you also find instruccions to run the first command line PoC made with Python - -### Hardware needed: - -A mobile phone/cellphone from 2021 and up, with a front facing camera and stereo headphones with mic. - -### Steps to initialize - -- The webapp is designed to be used with a mobile phone where its front camera (and screen) are facing the desired objetive to be transformed in to sound, wearing the mobile phone like a necklage is its first use case in mind. - -- Enter https://mamware.github.io/acoustsee/present (or your version of preference from [Usage](docs/USAGE.md)) - -- The User Interface of the webapp is split into five regions, - - Center rectangle: Audio enabler, a touchplace holder that enables the webpage to produce sound. 
- - Top border rectangle: Settings SHIFTer button - - Bottom rectangle: Start and Stop button - - Left rectangle: Day and night switch for light logic inversion - - Right rectangle: Languaje switcher - - SHIFTed left rectangle (settings enabled): Grid selector, changes how the camera frames or "grids" the environment - - SHIFTed right rectangle (settings enabled): Audio engine selector, changes how the sound synthetizer reacts to the selected grid. +Working at **Milestone 5 (Current)** +- Haptic feedback via Vibration API **Development in Progress: 85%** +- Console log on device screen and mail-to feature for debugging. **Development in Progress: 85%** +- New language-agnostic architecture ready to provide multilingual support for the speech synthesizer and UI **Development in Progress: 95%** +- Mermaid diagrams to reflect the current Modular Single Responsibility Principle **To do** + +### [Changelog](docs/CHANGELOG.md) -IMPORTANT: The processing of the camera is done privately on your device and not a single frame is sent outside your device processor. A permision to access the camera by the browser will be requested in order to do this local processing and thus generate the audio for the navigation. +- Current "stable" version from "present" is v0.4.7; the link above logs the history and details of past milestones achieved. +- Current "future" version in development starts from v0.5 -### [Status](#status) +### [Project structure](#project_structure) +WARN: this is alpha stage and is only meant for testing purposes using the Sinewave synth; some synths, like Strings, cause VERY HIGH NOISE that could damage hearing and speakers. 
-**Milestone 4 (Current)**: **Developing in Progress** at /future folder from developing branch +- Current version [RUN](https://mamware.github.io/acoustsee/present/) +- Previous versions [RUN](https://mamware.github.io/acoustsee/past/old_versions/preview) +- Test version in development [RUN](https://mamware.github.io/acoustsee/future/web) -- Current effort is at setting the repository with the most confortable structure for developers, with niche experts in mind, to have a fast way to understand how we do what we do and be able to contribute in a fast and simple way. -- We should refactor dependencies, isolate the audio pipeline and decouple UI and logic. -- Make WCAG contrast UI. -- Code should be educational purpose ready (JSDoc) - -### [Changelog](docs/CHANGELOG.md) +### [Current Status](#status) -- Current version is v0.4.7, follow link above for a the history change log, details and past milestones achieved. +- Milestone 0 to 4: reached by vibecoding with xAI Grok 3 +- Milestone 5: reached by vibecoding with SuperGrok 4, with some assistance from Gemini 2.5 Pro (Preview), ChatGPT 4.1 & o4-mini agents + small reviews from Claude 4. +- Milestone 6: restructured with Gemini 2.5 Pro and ChatGPT 4.1 & o4-mini agents +- Milestone 6.5: (WIP) robust architectural improvements and integration work by GPT-5 mini (Preview) +- Milestone 7 to 9: major redesign with a foundational Command pattern and Hexagonal architecture while still in plain vanilla JS, not merged to the developing branch because this is actually a complete rebase. 
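The Command-pattern redesign mentioned for Milestones 7 to 9 is not published in this diff, so the following is only a rough, hypothetical sketch of the idea in plain vanilla JS; none of the names below are the project's actual API.

```javascript
// Hypothetical Command-pattern sketch: UI regions emit named commands, and a
// small registry decouples them from the code that runs them. All names here
// are illustrative assumptions, not AcoustSee's real implementation.
const commands = new Map();

function registerCommand(name, handler) {
  commands.set(name, handler);
}

function executeCommand(name, payload = {}) {
  const handler = commands.get(name);
  if (!handler) throw new Error(`Unknown command: ${name}`);
  return handler(payload);
}

// Example wiring: a settings toggle and a start/stop command mutating state.
const state = { settingsMode: false, running: false };
registerCommand('toggleSettings', () => (state.settingsMode = !state.settingsMode));
registerCommand('startStop', () => (state.running = !state.running));

executeCommand('toggleSettings'); // state.settingsMode flips to true
executeCommand('startStop');      // state.running flips to true
```

Keeping the registry as the only coupling point is what makes a later move to a hexagonal layout (adapters around a core) possible without rewriting the handlers themselves.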
-### [Project structure](#project_structure) +### [v0.6 Project structure, (in construction)](#project_structure) ``` -acoustsee/ - -├── present/ # Current Stable Modular Webapp -│ ├── index.html -│ ├── styles.css -│ ├── main.js -│ ├── state.js -│ ├── audio-processor.js -│ ├── grid-selector.js -│ ├── ui/ -│ │ ├── rectangle-handlers.js # Handles settingsToggle, modeBtn, languageBtn, startStopBtn -│ │ ├── settings-handlers.js # Manages gridSelect, synthesisSelect, languageSelect, fpsSelect -│ │ ├── frame-processor.js # Processes video frames (processFrame) -│ │ └── event-dispatcher.js # Routes events to handlers -│ └── synthesis-methods/ -│ ├── grids/ -│ │ ├── hex-tonnetz.js -│ │ └── circle-of-fifths.js -│ └── engines/ -│ ├── sine-wave.js -│ └── fm-synthesis.js -│ -├── tests/ # Unit tests (TO_DO) -│ ├── ui-handlers.test.js -│ ├── trapezoid-handlers.test.js -│ ├── settings-handlers.test.js -│ └── frame-processor.test.js -├── docs/ # Documentation -│ ├── USAGE.md -│ ├── CHANGELOG.md -│ ├── CONTRIBUTING.md -│ ├── TO_DO.md -│ ├── DIAGRAMS.md -│ ├── LICENSE.md -│ └── FAQ.md -├── past/ # Historic folder for older versions. 
-├── future/ # Meant to be used for fast, live testing of new features and improvements -└── README.md +web/ +├── audio/ # Audio synthesis/processing (notes-to-sound, HRTF, mic) +│ ├── audio-controls.js # PowerOn/AudioContext init +│ ├── audio-manager.js # AudioContext management +│ ├── audio-processor.js # Core audio (oscillators, playAudio, cleanup; integrates HRTF/ML depth) +│ ├── hrtf-processor.js # HRTF logic (PannerNode, positional filtering) +│ └── synths/ # Synth methods (extend with HRTF) +│ ├── sine-wave.js +│ ├── fm-synthesis.js +│ └── available-engines.json +├── video/ # Video capture/mapping (camera-to-notes/positions; includes ML depth) +│ ├── video-capture.js # Stream setup/cleanup +│ ├── frame-processor.js # Frame analysis (emits notes/positions; calls ML if enabled) +│ ├── ml-depth-processor.js # New: Monocular depth estimation +│ └── grids/ # Visual mappings +│ ├── hex-tonnetz.js +│ ├── circle-of-fifths.js +│ └── available-grids.json +├── core/ # Orchestration (events, state) +│ ├── dispatcher.js # Event handling +│ ├── state.js # Settings/configs +│ └── context.js # Shared refs +├── ui/ # Presentation (buttons, DOM; optional ML/HRTF toggles) +│ ├── ui-controller.js # UI setup +│ ├── ui-settings.js # Button bindings +│ ├── cleanup-manager.js # Teardown listeners +│ └── dom.js # DOM init +├── utils/ # Cross-cutting tools (TTS, haptics, logs) +│ ├── async.js # Error wrappers +│ ├── idb-logger.js # Persistent logs +│ ├── logging.js # Structured logs +│ └── utils.js # Helpers (getText, ...) 
+├── languages/ # Localization (add ML/HRTF strings) +│ ├── es-ES.json +│ ├── en-US.json +│ └── available-languages.json +├── test/ # Tests (grouped by category) +│ ├── audio/ # Audio/HRTF tests +│ │ ├── audio-processor.test.js +│ │ └── hrtf-processor.test.js +│ ├── video/ # Video/grid/ML tests +│ │ ├── frame-processor.test.js +│ │ └── ml-depth-processor.test.js # New: Test depth estimation +│ ├── core/ # Dispatcher/state tests (if added) +│ ├── ui/ # UI tests +│ │ ├── ui-settings.test.js +│ │ └── video-capture.test.js +│ └── utils/ # Utils tests (if added) +├── .eslintrc.json # Linting +├── index.html # HTML entry +├── main.js # Bootstrap (update imports for moves/ML init) +├── README.md # Docs (update structure/ML/HRTF) +└── styles.css # Styles ``` ### [Contributing](docs/CONTRIBUTING.md) -- Please follow the link above for the detailed contributing guidelines, branching strategy and examples. - -### [To-Do List](docs/TO_DO.md) - -- At this document linked above, you will find the list for current TO TO list, we are now at milestone 4 (v0.4.X) +>We welcome contributors! -Resume of TO_DO: - -- Haptic feedback via Vibration API -- Console log on device screen and mail to feature for debuggin. -- New languajes for the speech sinthetizer -- Audio imput from camera into the headphones among the synthetized sound from camera. -- Further Modularity: e.g., modularize audio-processor.js -- Optimizations aiming the use less resources and achieve better performance, ie: implementing Web Workers and using WebAssembly. -- Reintroducing Hilbert curves. -- Gabor filters for motion detection. -- New grid types and synth engines -- Voting system for grid and synth engines. -- Consider making User selectable synth engine version. -- Consider adding support for VST like plugins. -- Testing true HRTF, loading CIPIC HRIR data. -- New capabilities like screen/video capture to sound engine. -- Android/iOS app developtment if considerable performance gain can be achieved. 
-- Mermaid diagrams to reflect current Modular Single Responsability Principle ### [Code flow diagrams](docs/DIAGRAMS.md) -Diagrams covering the Turnk Based Development approach. +Diagrams covering the Trunk Based Development approach (v0.2). -Reflecting: - Process Frame Flow - Audio Generation Flow - Motion Detection such as oscillator logic. +```mermaid + +graph TD + A[dispatcher.js] -->|routes| B[core/handlers/] + B --> C[video-handlers.js] + B --> D[audio-handlers.js] + B --> E[ui-handlers.js] + B --> F[settings-handlers.js] + B --> G[grid-handlers.js] + B --> H[debug-handlers.js] + C -->|calls| I[video/frame-processor.js] + D -->|calls| J[audio/audio-processor.js] + E -->|updates| K[ui/ui-settings.js] + F -->|uses| L[utils/utils.js] + A -->|state| M[state.js] + A -->|logs| N[utils/logging.js] + B -->|future| O[ml-handlers.js] +``` + +### [Changelog](docs/CHANGELOG.md) + +- Current "stable" version from "present" is v0.4.7; the link above logs the history and details of past milestones achieved. +- Current "future" version in development starts from v0.6 + +### [FAQ](docs/FAQ.md) + +- Follow the link for a list of Frequently Asked Questions. ### [License](docs/LICENSE.md) - GPL-3.0 license details -### [FAQ](docs/FAQ.md) +Peace +Love +Union +Respect + -- Follow the link for list of the Frecuently Asqued Questions. diff --git a/docs/CONTRIBUTING.md b/docs/CONTRIBUTING.md index 94f3f361..5361f398 100644 --- a/docs/CONTRIBUTING.md +++ b/docs/CONTRIBUTING.md @@ -3,17 +3,20 @@ Welcome to `acoustsee`! We’re building spatial audio navigation for the visual ## How to contribute -1. The `developing` branch is our zone(branch). -2. There, you can work new artifacts in `future` folder, meant as placeholder to play around new ideas and radical changes. -4. Try this artificts among the consolidated files from the `past` folder. +Hi there! 
Just a quick update: on branch v0.5 we are introducing an automatic file loader for grids, engines, and languages. +If you contribute one or several of these, you can run `file-indexer.js` from `/scripts`, which automatically generates a file listing the affected folders' files. + +1. The `developing` branch is our zone; it rapid-deploys to GitHub Pages for easy and broad testing. +2. On the `developing` branch, you can create new artifacts in the `future` folder, meant as a placeholder to play around with new ideas and radical changes; create a folder named to your liking and do as you like inside it. +4. You can compare your new artifacts against the consolidated files from the `past` or `present` folder. 5. You could add unit tests in `tests/` and run `npm test`. 6. Submit a PR to `developing` with a clear description. Q: Wich is the content and purpuse of the other folders? A: - `past` #Consolidated files from the `main` branch, usefull to have them at hand for vibe developing technics. - `future` #The playground, a place holder for your new features or to make radical changes. + `past` #Historical files, useful to have at hand for easy comparison when vibing a new development. + `future` #Our playground, a placeholder for your new features or radical changes. `present` #Here you can PR the integration resulted from your `future`. // (Considering removing it for simplicity and moving this folder as the staging branch) @@ -50,6 +53,9 @@ The `main` branch is then deployed to production. ## Contribution Types +When debugLogging is `false` (default) in `state.js`, the browser console only shows errors (critical stuff), and internal logs only collect errors. +Toggle to `true` for full verbosity during dev. + ### Adding a New Language - Create a new file in `web/languages/` (e.g., `fr-FR.json`) based on `en-US.json`. - Update `web/ui/rectangle-handlers.js` to include the language in the `languages` array. 
@@ -103,7 +109,7 @@ The `main` branch is then deployed to production. ## Code Style - Use JSDoc comments for functions (see `web/ui/event-dispatcher.js`). - Follow ESLint rules (run `npm run lint`). -- Keep code modular, placing UI logic in `web/ui/`. +- Keep code modular. ## Testing - Add tests in `tests/` for new features (see `tests/rectangle-handlers.test.js`). @@ -112,210 +118,6 @@ The `main` branch is then deployed to production. npm test ``` -Below is a curated to-do list to onboard collaborators, aligning with SWEBOK 4 (Software Engineering Body of Knowledge, 4th Edition), adopting naming conventions, ensuring correct modularity, and sets a solid ground for AcoustSee. - -Each item includes a rationale tied to open-source success and a Mermaid diagram where relevant to visualize structure or process. - -Adopt SWEBOK 4 Practices for Maintainability and Quality - -Objective: Align AcoustSee with SWEBOK 4 to ensure robust software engineering practices, making it easier for contributors to maintain and extend the codebase. - -To-Do: - -- Software Design (SWEBOK 4, Chapter 3): - - Document the architecture using a modular design, separating concerns (e.g., UI, audio processing, state management). - - Use context.js for dependency injection to decouple modules, as seen in your codebase. -- Software Testing (SWEBOK 4, Chapter 5): - - Create unit tests for critical modules (e.g., rectangle-handlers.js, audio-processor.js) using Jest or Mocha. - - Add integration tests for the frame-to-audio pipeline (e.g., frame-processor.js → grid-dispatcher.js → audio-processor.js). -- Software Maintenance (SWEBOK 4, Chapter 7): - - Set up a CONTRIBUTING.md file with guidelines for code reviews, testing, and issue reporting. - - Use GitHub Actions for CI/CD to automate linting, testing, and deployment. - -Rationale: SWEBOK 4 ensures a standardized approach, attracting skilled contributors familiar with industry best practices, testing and CI/CD. 
- -**Mermaid Diagram**: High-Level Architecture - -```mermaid -classDiagram - class Main { - +init() void - } - class Context { - +getDOM() Object - +getDispatchEvent() Function - } - class State { - +settings Object - +setStream(stream) void - } - class EventDispatcher { - +dispatchEvent(eventName, payload) void - } - class DOM { - +initDOM() Promise~Object~ - } - class RectangleHandlers { - +setupRectangleHandlers() void - } - class AudioProcessor { - +initializeAudio(context) Promise~boolean~ - +playAudio(frameData, width, height) Object - } - class FrameProcessor { - +processFrame() void - } - class Utils { - +speak(elementId, state) Promise~void~ - } - Main --> Context - Main --> DOM - Main --> EventDispatcher - Main --> RectangleHandlers - RectangleHandlers --> Context - RectangleHandlers --> State - RectangleHandlers --> AudioProcessor - RectangleHandlers --> Utils - AudioProcessor --> State - AudioProcessor --> FrameProcessor - EventDispatcher --> FrameProcessor - EventDispatcher --> Utils - FrameProcessor --> DOM -``` -Modular architecture separating UI (DOM, Utils), state (State), events (EventDispatcher), and audio processing (AudioProcessor, FrameProcessor). - - - - -Set up Jest for unit tests and GitHub Actions for CI/CD. - -Standard Naming Conventions - -Objective: Adopt consistent naming conventions to improve code readability and maintainability, aligning with open-source standards. - -File and Module naming: - Use kebab-case for files (e.g., rectangle-handlers.js, audio-processor.js), - Refactor service files to PascalCase (e.g., Context.js, EventDispatcher.js) to distinguish them from utilities. - Keep camelCase for functions and variables (e.g., setupRectangleHandlers, settings.language) - -Translation Files: - Ensure languages/en-US.json, languages/es-ES.json use consistent locale codes (e.g., en-US, not en-us). - -Documentation: - Add JSDoc comments to exports (e.g., speak, initializeAudio) for clarity. 
- -Enhance Modularity for Scalability -Objective: Restructure AcoustSee for correct modularity, reducing coupling and enabling easier contributions. - -To-Do: - -Refactor Dependencies: -Centralize dependency injection in Context.js (e.g., getDOM, getDispatchEvent) to avoid direct imports of DOM or dispatchEvent. -Move shared constants (e.g., updateInterval, audioInterval) to a config.js module. - -Isolate Audio Pipeline: -Create a dedicated audio/ folder for audio-processor.js, grid-dispatcher.js, sine-wave.js, fm-synthesis.js, hex-tonnetz.js, circle-of-fifths.js. -Export a single AudioService from audio/index.js to simplify imports. - -Decouple UI and Logic: -Move settings-handlers.js, utils.js, frame-processor.js to ui/handlers/ for clarity. -Use event-driven communication via EventDispatcher.js for all UI-logic interactions. - -Build Tooling: -Use Vite or Webpack to bundle modules, ensuring correct path resolution (e.g., ../languages/${lang}.json). -Configure a base path (e.g., /acoustsee/future/web/) to avoid hardcoded paths. - -**Audio pipeline** - -```mermaid -graph LR - A[RectangleHandlers] -->|dispatchEvent('processFrame')| B[EventDispatcher] - B --> C[FrameProcessor] - C --> D[AudioService] - D --> E[AudioProcessor] - D --> F[GridDispatcher] - F --> G[HexTonnetz] - F --> H[CircleOfFifths] - E --> I[SineWave] - E --> J[FMSynthesis] - D -->|playAudio()| K[Audio Output] -``` -Modular audio pipeline with AudioService as the entry point. - -TO-DO: - -Create audio/ folder and AudioService module. -Refactor imports to use Context.js exclusively. -Set up Vite with a base path in vite.config.js. 
- - -## Current Dependency Map - -main.js: -Imports: setupRectangleHandlers (./ui/rectangle-handlers.js), setupSettingsHandlers (./ui/settings-handlers.js), createEventDispatcher (./ui/event-dispatcher.js), initDOM (./ui/dom.js) - -Exports: None - -Dependencies: Passes DOM to setupRectangleHandlers, setupSettingsHandlers, createEventDispatcher - -dom.js: -Imports: None - -Exports: initDOM (returns DOM object) - -Dependencies: None - -rectangle-handlers.js: -Imports: processFrame (./frame-processor.js), initializeAudio, isAudioInitialized, setAudioContext (../audio-processor.js), settings, setStream, setAudioInterval, setSkipFrame (../state.js), speak (./utils.js) - -Exports: setupRectangleHandlers - -Dependencies: Receives DOM and dispatchEvent, passes DOM to processFrame - -settings-handlers.js: -Imports: settings (../state.js), speak (./utils.js) - -Exports: setupSettingsHandlers - -Dependencies: Receives DOM and dispatchEvent - -event-dispatcher.js: -Imports: setAudioInterval, settings (../state.js), processFrame (./frame-processor.js), speak (./utils.js) - -Exports: dispatchEvent, createEventDispatcher - -Dependencies: Receives DOM, passes DOM to processFrame - -frame-processor.js: -Imports: playAudio (../audio-processor.js), skipFrame, setSkipFrame, prevFrameDataLeft, prevFrameDataRight, setPrevFrameDataLeft, setPrevFrameDataRight, frameCount, lastTime, settings (../state.js) - -Exports: processFrame - -Dependencies: Receives DOM as a parameter - -state.js: -Imports: None - -Exports: settings, skipFrame, prevFrameDataLeft, prevFrameDataRight, frameCount, lastTime, setStream, setAudioInterval, setSkipFrame, setPrevFrameDataLeft, setPrevFrameDataRight - -Dependencies: None - -audio-processor.js (assumed): -Imports: Unknown (likely settings from ../state.js) - -Exports: playAudio, initializeAudio, isAudioInitialized, setAudioContext - -Dependencies: Unknown - -utils.js: -Imports: Unknown - -Exports: speak - -Dependencies: Unknown - - - ## Code of Conduct Please 
be kind, inclusive, and collaborative. Let’s make accessibility tech awesome! diff --git a/docs/FAQ.md b/docs/FAQ.md index cf7c448d..e56fee20 100644 --- a/docs/FAQ.md +++ b/docs/FAQ.md @@ -1 +1,3 @@ -gather questions, answer and fill this KB, sort by most asked (keep count) +## Frequently asked questions + +In this space, we aim to gather questions and their answers. diff --git a/docs/TO_DO.md b/docs/TO_DO.md index ce63565b..ae4c33ba 100644 --- a/docs/TO_DO.md +++ b/docs/TO_DO.md @@ -18,91 +18,3 @@ Sorted by priority - New capabilities like screen/video capture to sound engine. - Android/iOS app development if considerable performance gain can be achieved. -**6/16/2025** -## Considering adding OpenCV support - Adapting OpenCV for AcoustSee -To meet the 66ms constraint and aid a blind user via audio cues, here’s how OpenCV can be tailored for AcoustSee: -Pipeline Design -Camera Input: -Capture frames at 15–30 FPS (66–33.3 ms) at 480p or 720p to reduce processing load. - -Use browser-based APIs (e.g., getUserMedia) for WebAssembly compatibility. - -Object Detection: -Use a lightweight model like MobileNet-SSD or YOLO-Tiny (pre-trained for common objects: sidewalk, wall, car, swing). - -Processing time: ~15–30 ms on mid-range devices for 720p. - -Output: Bounding boxes and labels for objects (e.g., “sidewalk: center, wall: left”). - -Feature Extraction: -Analyze color and texture within bounding boxes using OpenCV’s image processing (e.g., HSV color histograms, edge detection). - -Example: Sidewalk = smooth, gray (low-frequency hum); wall = flat, textured (mid-frequency tone). - -Processing time: ~5–10 ms. - -Depth Estimation: -Use a lightweight monocular depth model (e.g., MiDaS Small, optimized for TFLite) to estimate object distances. - -Example: Swing at 2m = loud, broad sound; at 5m = quiet, narrow sound. - -Processing time: ~20–30 ms on flagships, ~30–50 ms on mid-range. - -Alternative: Use motion cues (e.g., optical flow) for faster processing (~10–20 ms). 
- -Audio Mapping: -Map visual features to audio cues using Web Audio API: -Position: Stereo panning (left-right based on bounding box x-coordinate). - -Depth: Volume (louder for closer) and spectral complexity (broader for closer). - -Object type: Unique spectral signatures (e.g., hum for sidewalk, tone for wall, broadband for swing). - -Generate 3–6 cues per 66ms frame, each ~5–10 ms, to align with auditory resolution. - -Total Latency: -Example: MobileNet-SSD (20 ms) + feature extraction (10 ms) + depth estimation (~30 ms) = ~60 ms on a flagship device for 720p. - -Optimizations (e.g., 480p, quantized models, GPU) can reduce this to ~40–50 ms, fitting within 66ms. - -Optimizations for Real-Time -Lower resolution: Use 480p (640×480) instead of 720p/1080p to cut processing time by ~30–50%. - -Lightweight models: Use quantized TFLite models (e.g., MobileNet-SSD, MiDaS Small) for 2–3x speedup. - -Frame skipping: Process every other frame (effective 15 FPS) if needed, while interpolating audio cues. - -GPU/Neural acceleration: Leverage OpenCV’s DNN module with OpenCL or mobile neural engines. - -Asynchronous processing: Run image processing in parallel with audio synthesis to reduce perceived latency. - -Audio Cue Design -Number of cues: Limit to 3–6 per 66ms frame (e.g., sidewalk, wall, swing) to ensure auditory clarity, based on the 5–10 ms per cue limit from our previous discussion. - -Spectral signatures: -Sidewalk: Low-pass noise (100–200 Hz), center-panned, steady. - -Wall: Sine wave (500–1000 Hz), left-panned, constant. - -Swing: Sawtooth wave (200–2000 Hz), center-panned, dynamic volume/filter. - -Car: Bandpass noise (500–5000 Hz), panned based on position. - -Dynamic updates: Adjust volume and filter cutoff every 66ms based on depth/motion (e.g., swing closer = louder, broader spectrum). 
- - - - - -Example of UI Templates: - -- Bezel type template: The top trapezoid is (should be) where the setting toggle is, this toggle shifts the function of the lateral trapezoid a the left (dayNight toggle without shift) and right (languaje selectror for speech synthesis) for a cursor for options navigation such as grid and synth engine both versioned selector. - -The confirmation is done by pressing the center vertical rectangular square, that also works as webcam feed preview/canvas - -The start and stop of the navigation is done by pressing the buttom trapezoid. - -- A reintroduction of a frames per seconds (FPS) toggle that is usefull if your device stutters or generates artifacts due to processing issues, likely by a cpu processor limitation will be reconsidered as a configuration option, among the grid and synth engine selector. - -A console log live view and a copy feature is being considered too. diff --git a/docs/USAGE.md b/docs/USAGE.md index 04fbff18..94b08f2e 100644 --- a/docs/USAGE.md +++ b/docs/USAGE.md @@ -1,10 +1,32 @@ ## USAGE -Please note that the current best performer can be run without installation directly from a internet browser, the latest stable version is hosted at: +# Web Application Interface +The web application features a user-friendly interface divided into five interactive regions, designed to facilitate seamless control and customization of the synesthetic audio experience. All camera processing is performed locally on your device, ensuring privacy. No frames are transmitted externally, though the browser will request camera access permission to enable this local processing for audio generation. + +## Interface Regions +- **Center Rectangle**: Audio Enabler + A touch-sensitive area that activates the webpage’s audio output, allowing sound generation to begin. +- **Top Border Rectangle**: Settings SHIFTer Button + Toggles the settings mode to reveal advanced configuration options. 
+- **Bottom Rectangle**: Start/Stop Button + Initiates or pauses the audio generation and camera processing. +- **Left Rectangle**: Day/Night Switch + Inverts light logic to optimize visibility and processing for different lighting conditions. +- **Right Rectangle**: Language Switcher + Changes the interface language for improved accessibility. + +## Settings Mode (SHIFTed Interface) +When settings are enabled via the SHIFTer button: +- **SHIFTed Left Rectangle**: Grid Selector + Adjusts the camera’s framing or "gridding" of the environment, allowing users to customize how the visual input is segmented for audio mapping. +- **SHIFTed Right Rectangle**: Audio Engine Selector + Modifies the sound synthesizer’s response to the selected grid, enabling users to tailor the audio output to their preferences. + +The latest stable version is hosted at: https://mamware.github.io/acoustsee/present -Browser compability list: +## Browser compatibility list: | Browser | Minimum Version for Full Support | Notes | @@ -16,43 +38,10 @@ Browser compability list: | Opera Mobile | Opera 36 (2016) | Based on Chromium, full support for all APIs. | | Edge for Android | Edge 79 (January 2020) | Based on Chromium, full support for all APIs. | -Privacy Note: All of the video processing is done at your device, not a single frame is sent to anyone or anywhere than that the ones that takes places at your own device processing logic. +### To test our first commit, which is a Python script, either out of curiosity or for educational purposes, follow the instructions below - - - -### Project structure for TBD version - -``` -acoustsee/ -├── src/ # Contains the Python PoC code for still image processing and audio generation. -├── web/ # Contains HTML, CSS, and JavaScript files for the web interface folder for different approaches at the core logic -│ ├── fft/ # Experimenting with Fourier, fast. 
-│ │ ├── index.html
-│ │ ├── main.js
-│ │ ├── styles.css
-│ ├── hrft/ # Experimenting the Head Related Transfer Function
-│ │ ├── index.html
-│ │ ├── main.js
-│ │ ├── styles.css
-│ ├── tonnetz/ # Experimenting with Euler, Tonnetz.
-│ │ ├── index.html
-│ │ ├── main.js
-│ │ ├── styles.css
-│ ├── index.html # The current chosen version as a better performer (Tonnetz, 5/18/2025).
-│ ├── main.js
-│ ├── styles.css
-├── examples/ # Still image and output container for the Python PoC
-├── tests/ # Should contain unit tests (currently missing)
-├── docs/ # Contains technical documentation (working)
-│ ├── DIAGRAMS.ms # Wireframes the logic at main.js
-└── README.md # This file, providing an overview of the project
-```
-
-## To test our first commit wich is a Python script, either out of curiosit or educational purposes, follow the instrucctions below
-
-Our first iteration, a simple proof-of-concept: process a static image file and output basic left/right panned audio file.
+How to run the first iteration: a simple proof-of-concept that processes a static image file and outputs a basic left/right panned audio file.
 
 ## Setup
@@ -139,4 +128,9 @@ Try it with examples/wall_left.jpg to hear a basic left/right audio split!
 - `pyo` may warn about missing WxPython, falling back to Tkinter. This is harmless for WAV generation.
 - **SetuptoolsDeprecationWarning**:
 - A warning about `License :: OSI Approved :: GNU General Public License` is harmless (it’s a `pyo` packaging issue).
+
+> **Privacy and Processing**
+> The application processes all camera data locally on your device, ensuring no visual information leaves your device. Upon launching, the browser will request camera access to perform this private processing, which is essential for generating the real-time audio cues used for navigation.
+
+
 - **Still stuck?** Open an issue on GitHub or ping us on [X](https://x.com/MAMware).
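The Python PoC above (a still image turned into left/right panned audio) boils down to comparing the brightness of the two image halves. A minimal sketch of that idea, written in JavaScript to match the web port's stack — `panFromFrame` and its gain model are illustrative assumptions, not an API from this repository:

```javascript
// Illustrative sketch only: hypothetical helper, not code shipped in this repo.
// Mean brightness of each half of a grayscale frame (values 0-255, row-major,
// `width` columns wide) drives the gain of the matching stereo channel.
function panFromFrame(pixels, width) {
  let left = 0;
  let right = 0;
  const half = width / 2;
  for (let i = 0; i < pixels.length; i++) {
    if (i % width < half) left += pixels[i];
    else right += pixels[i];
  }
  const total = left + right || 1; // guard against an all-black frame
  return { leftGain: left / total, rightGain: right / total };
}

// A frame lit only on its left half pans fully left:
// panFromFrame([255, 0, 255, 0], 2) -> { leftGain: 1, rightGain: 0 }
```

The web app generalizes this: the grid selector segments the frame into many such regions, each driving its own audio cue instead of a single gain pair.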
diff --git a/future/project-files.txt b/future/project-files.txt new file mode 100644 index 00000000..2e6f9da0 --- /dev/null +++ b/future/project-files.txt @@ -0,0 +1,2634 @@ +// Generated on: 2025-07-30 18:17:33 +0000 + +// File: web/utils/logging.js +// web/utils/logging.js +// Centralized logging utilities for structured, level-based outputs with async emission and sampling. +// Supports async to avoid blocking high-throughput paths (e.g., frame processing). +// Sampling reduces log volume for DEBUG level in performance-critical scenarios. + +import { addIdbLog } from './idb-logger.js'; // Updated to use IndexedDB. + +const LOG_LEVELS = { + DEBUG: 0, + INFO: 1, + WARN: 2, + ERROR: 3, +}; + +let currentLogLevel = LOG_LEVELS.DEBUG; // Default; can be set from settings.debugLogging. +const isMobile = /Mobile|Android|iPhone|iPad/.test(navigator.userAgent); +let sampleRate = isMobile ? 0.1 : 1.0; // 10% DEBUG logs on mobile. + +// Helper to set global log level (e.g., from settings.isSettingsMode or debugLogging). +export function setLogLevel(level) { + const upperLevel = level.toUpperCase(); + if (Object.keys(LOG_LEVELS).includes(upperLevel)) { + currentLogLevel = LOG_LEVELS[upperLevel]; + } else { + structuredLog('WARN', 'Invalid log level attempted', { level }); + } +} + +// Helper to set sampling rate (0.0 to 1.0; from settings or dynamically). +export function setSampleRate(rate) { + if (rate >= 0 && rate <= 1) { + sampleRate = rate; + } else { + structuredLog('WARN', 'Invalid sample rate attempted', { rate }); + } +} + +/** + * Logs a structured message with level, timestamp, and data payload. + * Emits asynchronously to prevent blocking. + * @param {string} level - One of 'DEBUG', 'INFO', 'WARN', 'ERROR'. + * @param {string} message - Descriptive message (e.g., 'setAudioInterval'). + * @param {Object} [data={}] - Additional context (e.g., { timerId: 42, ms: 50 }). + * @param {boolean} [persist=true] - If true, also calls addLog with serialized form. 
+ * @param {boolean} [sample=true] - If false, bypass sampling (for critical logs).
+ */
+
+let inStructuredLog = false;
+/**
+ * Logs a structured message asynchronously, with a recursion guard so that
+ * logging failures cannot trigger further logging.
+ */
+export async function structuredLog(level, message, data = {}, persist = true, sample = true) {
+  // Use ?? rather than ||: DEBUG maps to 0, which is falsy and would
+  // otherwise silently be promoted to INFO.
+  const numericLevel = LOG_LEVELS[level.toUpperCase()] ?? LOG_LEVELS.INFO;
+  if (numericLevel < currentLogLevel) return;
+  if (sample && level.toUpperCase() === 'DEBUG' && Math.random() > sampleRate) return;
+
+  if (inStructuredLog) return;
+  inStructuredLog = true;
+  try {
+    const timestamp = new Date().toISOString();
+    const logEntry = { timestamp, level: level.toUpperCase(), message, data };
+    // Use the global console to avoid a circular import.
+    const fn = (console[level.toLowerCase()] || console.log).bind(console);
+    fn(`[${timestamp}] ${logEntry.level}: ${message}`, data);
+    if (persist) {
+      addIdbLog(logEntry).catch(err => {
+        console.warn('Failed to persist log to IndexedDB:', err.message);
+      });
+    }
+  } finally {
+    inStructuredLog = false;
+  }
+}
+
+// File: web/utils/async.js
+/**
+ * Wraps any async function in a standardized try/catch boundary.
+ * @param {Function} fn - The async function to execute.
+ * @param {...any} args - Arguments to pass to the function.
+ * @returns {Promise<{data: any, error: Error|null}>}
+ */
+export async function withErrorBoundary(fn, ...args) {
+  try {
+    const data = await fn(...args);
+    return { data, error: null };
+  } catch (error) {
+    console.error(`${fn.name} error:`, error);
+    return { data: null, error };
+  }
+}
+
+// File: web/utils/idb-logger.js
+// web/utils/idb-logger.js
+// IndexedDB wrapper for persistent logging: Append JSON logs, retrieve all, cap size, export.
+// Asynchronous, transaction-based for non-blocking ops in high-throughput scenarios.
+// Fallback if IndexedDB not supported (e.g., logs to console only).
+ +const DB_NAME = 'AcoustSeeLogsDB'; +const DB_VERSION = 1; +const STORE_NAME = 'logs'; +const MAX_ENTRIES = 1000; // Cap to prevent unbounded growth. +let dbPromise = null; + +// Check IndexedDB support (technical: Feature detection to avoid errors in non-supporting envs like some iframes or old browsers). +const isIndexedDBSupported = 'indexedDB' in window; + +// Open (or create) DB asynchronously. +function openDB() { + if (!isIndexedDBSupported) { + return Promise.reject(new Error('IndexedDB not supported in this environment')); + } + return new Promise((resolve, reject) => { + const request = indexedDB.open(DB_NAME, DB_VERSION); + + request.onerror = () => reject(request.error); + request.onsuccess = () => resolve(request.result); + + request.onupgradeneeded = (event) => { + const db = event.target.result; + if (!db.objectStoreNames.contains(STORE_NAME)) { + db.createObjectStore(STORE_NAME, { autoIncrement: true }); + } + }; + }); +} + +// Lazy-init DB promise with error handling. +async function getDB() { + if (!dbPromise) { + dbPromise = openDB().catch(err => { + console.warn('IndexedDB init failed; falling back to console-only logging:', err.message); + return null; // Null signals fallback. + }); + } + return dbPromise; +} + +// Append a log entry (JSON object). Fallback to console if DB unavailable. +export async function addIdbLog(logEntry) { + const db = await getDB(); + if (!db) { + console.warn('DB unavailable; logging to console:', logEntry); + return; // Fallback: No persistence. + } + return new Promise((resolve, reject) => { + const transaction = db.transaction([STORE_NAME], 'readwrite'); + const store = transaction.objectStore(STORE_NAME); + const addRequest = store.add(logEntry); + + addRequest.onsuccess = () => { + // Cap size: If over max, delete oldest (cursor for efficiency). 
+ capLogSize(store).then(resolve).catch(reject); + }; + addRequest.onerror = () => reject(addRequest.error); + + transaction.onerror = () => reject(transaction.error); + }); +} + +// Helper to cap entries: Delete oldest if > MAX_ENTRIES. +async function capLogSize(store) { + return new Promise((resolve, reject) => { + const countRequest = store.count(); + countRequest.onsuccess = () => { + if (countRequest.result <= MAX_ENTRIES) return resolve(); + + // Delete excess oldest entries via cursor. + let deleted = 0; + const excess = countRequest.result - MAX_ENTRIES; + const cursorRequest = store.openCursor(); + + cursorRequest.onsuccess = (event) => { + const cursor = event.target.result; + if (cursor && deleted < excess) { + cursor.delete(); + deleted++; + cursor.continue(); + } else { + resolve(); + } + }; + cursorRequest.onerror = () => reject(cursorRequest.error); + }; + countRequest.onerror = () => reject(countRequest.error); + }); +} + +// Retrieve all logs for export. Fallback to empty if DB unavailable. +export async function getAllIdbLogs() { + const db = await getDB(); + if (!db) return []; // Fallback: Empty array. + return new Promise((resolve, reject) => { + const transaction = db.transaction([STORE_NAME], 'readonly'); + const store = transaction.objectStore(STORE_NAME); + const request = store.getAll(); + + request.onsuccess = () => resolve(request.result); + request.onerror = () => reject(request.error); + }); +} + +// Clear all logs (optional, e.g., after send). Fallback no-op if DB unavailable. 
+export async function clearIdbLogs() { + const db = await getDB(); + if (!db) return; + return new Promise((resolve, reject) => { + const transaction = db.transaction([STORE_NAME], 'readwrite'); + const store = transaction.objectStore(STORE_NAME); + const request = store.clear(); + + request.onsuccess = resolve; + request.onerror = () => reject(request.error); + }); +} + +// File: web/utils/utils.js +import { settings, availableLanguages } from '../core/state.js'; +import { structuredLog } from './logging.js'; + +export function tryVibrate(event) { + if (event.cancelable && navigator.vibrate) { + try { + navigator.vibrate(50); + } catch (err) { + console.warn('Vibration blocked:', err.message); + } + } +} + +export function hapticCount(count) { + if (navigator.vibrate) { + const pattern = Array(count * 2 - 1).fill(30).map((v, i) => i % 2 === 0 ? 30 : 50); + navigator.vibrate(pattern); + } +} + +const translationsCache = {}; + +export async function getText(key, params = {}, type = 'tts') { + try { + const language = availableLanguages.find(l => l.id === settings.language); + if (!language) throw new Error(`Language not found: ${settings.language}`); + + let translations = translationsCache[language.id]; + if (!translations) { + const response = await fetch(`./languages/${language.id}.json`); + if (!response.ok) throw new Error(`Failed to load language file: ${response.status}`); + translations = await response.json(); + translationsCache[language.id] = translations; + } + + let finalMessage = translations; + for (const part of key.split('.')) { + finalMessage = finalMessage[part] || key; + } + if (typeof finalMessage === 'object') { + finalMessage = finalMessage[params.state || params.fps || params.lang] || key; + } + for (const [paramKey, paramValue] of Object.entries(params)) { + const placeholderRegex = new RegExp(`\\{${paramKey}\\}`, 'g'); + finalMessage = finalMessage.replace(placeholderRegex, paramValue); + } + if (type === 'tts' && settings.ttsEnabled) { + 
const utterance = new SpeechSynthesisUtterance(finalMessage);
+      utterance.lang = settings.language;
+      window.speechSynthesis.speak(utterance);
+    }
+    const announcements = document.getElementById('announcements');
+    if (announcements) {
+      announcements.textContent = finalMessage;
+    }
+    return finalMessage;
+  } catch (err) {
+    console.error(`${type} error:`, err.message);
+    const announcements = document.getElementById('announcements');
+    if (announcements) {
+      announcements.textContent = `${type} error: Unable to process message`;
+    }
+    return key;
+  }
+}
+
+export function parseBrowserVersion(userAgent) {
+  // Order matters: Edge UAs also contain "Chrome/", and Chrome UAs contain
+  // "Safari/", so test the most specific tokens first. For Safari itself,
+  // "Version/" carries the marketing version ("Safari/" is the WebKit build).
+  const rx = /Edg\/([0-9.]+)|Chrome\/([0-9.]+)|Firefox\/([0-9.]+)|Version\/([0-9.]+).*Safari/;
+  const m = userAgent.match(rx);
+  return (m && (m[1] || m[2] || m[3] || m[4])) || 'Unknown';
+}
+
+export function setTextAndAriaLabel(element, text, ariaLabel) {
+  if (element) {
+    element.textContent = text;
+    element.setAttribute('aria-label', ariaLabel);
+  } else {
+    structuredLog('WARN', 'Element not found for text update', { text });
+  }
+}
+
+// File: web/ui/ui-controller.js
+import { setupAudioControls } from '../audio/audio-controls.js';
+import { setupUISettings } from './ui-settings.js';
+import { setupCleanupManager } from './cleanup-manager.js';
+import { setupVideoCapture } from './video-capture.js';
+// Import the settings-manager modules once they exist:
+// import { setupSaveSettings, setupLoadSettings } from './settings-manager.js';
+
+export function setupUIController({ dispatchEvent, DOM }) {
+  console.log('setupUIController: Starting setup');
+  setupAudioControls({ dispatchEvent, DOM });
+  setupUISettings({ dispatchEvent, DOM });
+  setupCleanupManager();
+
+  // Future initialization for saving and loading settings:
+  // setupSaveSettings({ dispatchEvent, DOM });
+  // setupLoadSettings({ dispatchEvent, DOM });
+
+  console.log('setupUIController: Setup complete');
+}
+
+// File: web/ui/ui-settings.js
+import { settings } from '../core/state.js';
+import {
getText, tryVibrate, hapticCount } from '../utils/utils.js'; + +export function setupUISettings({ dispatchEvent, DOM }) { + if (!DOM || !DOM.button1 || !DOM.button2 || !DOM.button3 || + !DOM.button4 || !DOM.button5 || !DOM.button6) { + console.error('Missing DOM elements in ui-settings'); + dispatchEvent('logError', { message: 'Missing DOM elements in ui-settings' }); + return; + } + + // Helper: wire a single pointer event for both touch & click + function wireButton(el, id, { normal, settings: settingsAction }, { + normalError, settingsError, params = () => ({}) + }) { + el.addEventListener('pointerdown', async (event) => { + if (event.cancelable) event.preventDefault(); + console.log(`${id} event`, { settingsMode: settings.isSettingsMode }); + tryVibrate(event); + hapticCount(Number(id.replace('button', ''))); + try { + if (!settings.isSettingsMode) { + await normal(); + } else { + await settingsAction(); + } + dispatchEvent('updateUI', { + settingsMode: settings.isSettingsMode, + streamActive: !!settings.stream, + micActive: !!settings.micStream, + }); + } catch (err) { + console.error(`${id} error:`, err.message); + dispatchEvent('logError', { message: `${id} error: ${err.message}` }); + const key = !settings.isSettingsMode ? 
normalError : settingsError; + await getText(key, params()); + } + }); + } + + // Button 1 + wireButton(DOM.button1, 'button1', + { + normal: () => dispatchEvent('startStop', { settingsMode: settings.isSettingsMode }), + settings: () => dispatchEvent('startStop', { settingsMode: settings.isSettingsMode }) + }, + { + normalError: 'button1.tts.startStop', + settingsError: 'button1.tts.startStop', + params: () => ({ state: 'error' }) + } + ); + + // Button 2 + wireButton(DOM.button2, 'button2', + { + normal: () => dispatchEvent('toggleAudio', { settingsMode: settings.isSettingsMode }), + settings: () => dispatchEvent('toggleAudio', { settingsMode: settings.isSettingsMode }) + }, + { + normalError: 'button2.tts.micError', + settingsError: 'button2.tts.micError' + } + ); + + // Button 3 + wireButton(DOM.button3, 'button3', + { + normal: () => dispatchEvent('toggleLanguage'), + settings: () => dispatchEvent('toggleVideoSource') + }, + { + normalError: 'button3.tts.languageError', + settingsError: 'button3.tts.videoSourceError' + } + ); + + // Button 4 + wireButton(DOM.button4, 'button4', + { + normal: async () => { + if (settings.autoFPS) { + settings.autoFPS = false; + settings.updateInterval = 1000 / 20; + } else { + const fpsOptions = [20, 30, 60]; + const currentFps = 1000 / settings.updateInterval; + const idx = fpsOptions.indexOf(currentFps); + settings.autoFPS = idx === fpsOptions.length - 1; + if (!settings.autoFPS) { + settings.updateInterval = 1000 / fpsOptions[idx + 1]; + } + } + dispatchEvent('updateFrameInterval', { interval: settings.updateInterval }); + await getText('button4.tts.fpsBtn', { + fps: settings.autoFPS ? 
'auto' : Math.round(1000 / settings.updateInterval) + }); + }, + settings: () => dispatchEvent('saveSettings', { settingsMode: true }) + }, + { + normalError: 'button4.tts.fpsError', + settingsError: 'button4.tts.saveError' + } + ); + + // Button 5 + wireButton(DOM.button5, 'button5', + { + normal: async () => { + dispatchEvent('emailDebug'); + await getText('button5.tts.emailDebug'); + }, + settings: () => dispatchEvent('loadSettings', { settingsMode: true }) + }, + { + normalError: 'button5.tts.emailDebug', + settingsError: 'button5.tts.loadError', + params: () => ({ state: 'error' }) + } + ); + + // Button 6 + wireButton(DOM.button6, 'button6', + { + normal: async () => { + settings.isSettingsMode = !settings.isSettingsMode; + dispatchEvent('toggleDebug', { show: settings.isSettingsMode }); + await getText('button6.tts.settingsToggle', { + state: settings.isSettingsMode ? 'on' : 'off' + }); + }, + settings: async () => { + settings.isSettingsMode = !settings.isSettingsMode; + dispatchEvent('toggleDebug', { show: settings.isSettingsMode }); + await getText('button6.tts.settingsToggle', { + state: settings.isSettingsMode ? 
'on' : 'off' + }); + } + }, + { + normalError: 'button6.tts.settingsError', + settingsError: 'button6.tts.settingsError' + } + ); + + console.log('setupUISettings: Setup complete'); +} + + +// File: web/ui/cleanup-manager.js +import { settings, setStream, setAudioInterval } from '../core/state.js'; +import { cleanupAudio } from '../audio/audio-processor.js'; + +let isAudioInitialized = false; +let audioContext = null; + +export function setupCleanupManager() { + window.addEventListener("beforeunload", async () => { + if (settings.stream) { + settings.stream.getTracks().forEach((track) => track.stop()); + setStream(null); + } + if (settings.micStream) { + settings.micStream.getTracks().forEach((track) => track.stop()); + settings.micStream = null; + } + if (settings.audioInterval) { + clearInterval(settings.audioInterval); + setAudioInterval(null); + } + if (isAudioInitialized && audioContext) { + await cleanupAudio(); + await audioContext.close(); + isAudioInitialized = false; + audioContext = null; + } + console.log("cleanupManager: Cleanup completed"); + }); + + console.log("setupCleanupManager: Setup complete"); +} + +// Expose for audio-controls.js to update audioContext state +export function setAudioContextState(context, initialized) { + audioContext = context; + isAudioInitialized = initialized; +} + +// File: web/ui/dom.js +import { getText } from '../utils/utils.js'; + +function assignDOMElements() { + DOM.splashScreen = document.getElementById('splashScreen'); + DOM.powerOn = document.getElementById('powerOn'); + DOM.mainContainer = document.getElementById('mainContainer'); + DOM.button1 = document.getElementById('button1'); + DOM.button2 = document.getElementById('button2'); + DOM.button3 = document.getElementById('button3'); + DOM.button4 = document.getElementById('button4'); + DOM.button5 = document.getElementById('button5'); + DOM.button6 = document.getElementById('button6'); + DOM.emailDebug = document.getElementById('emailDebug'); + DOM.videoFeed = 
document.getElementById('videoFeed'); +} + +let DOM = { + splashScreen: null, + powerOn: null, + mainContainer: null, + button1: null, + button2: null, + button3: null, + button4: null, + button5: null, + button6: null, + videoFeed: null, + emailDebug: null +}; + +export function initDOM() { + return new Promise((resolve, reject) => { + const checkDOMReady = () => { + if (document.readyState === 'complete' || document.readyState === 'interactive') { + assignDOMElements(); + const missingElements = Object.entries(DOM).filter(([_, value]) => !value); + if (missingElements.length > 0) { + const missingKeys = missingElements.map(([key]) => key).join(', '); + console.error(`Critical DOM elements missing: ${missingKeys}. Check index.html IDs.`); + reject(new Error(`Missing DOM elements: ${missingKeys}`)); + } else { + resolve(DOM); + } + } + }; + + if (document.readyState === 'complete' || document.readyState === 'interactive') { + checkDOMReady(); + } else { + document.addEventListener('DOMContentLoaded', checkDOMReady, { once: true }); + } + }); +} + +// File: web/ui/settings-handlers.js +// future/web/ui/settings-handlers.js +import { settings } from "../state.js"; +import { speak } from "./utils.js"; + +export function setupSettingsHandlers({ dispatchEvent, DOM }) { + console.log("setupSettingsHandlers: Starting setup"); + + if (!DOM) { + console.error("DOM is undefined in setupSettingsHandlers"); + return; + } + + function tryVibrate(event) { + if (event.cancelable && navigator.vibrate) { + try { + navigator.vibrate(50); + } catch (err) { + console.warn("Vibration blocked:", err.message); + } + } + } + + // Button 1: Start/Stop + if (DOM.button1) { + DOM.button1.addEventListener("touchstart", async (event) => { + if (event.cancelable) event.preventDefault(); + console.log("button1 touched"); + tryVibrate(event); + try { + dispatchEvent("startStop", { settingsMode: settings.isSettingsMode }); + } catch (err) { + console.error("button1 error:", err.message); + 
dispatchEvent("logError", { message: `button1 error: ${err.message}` }); + await speak("startStop", { state: "error" }); + } + }); + console.log("button1 event listener attached"); + } + + // Button 2: Audio On/Off (Mic) + if (DOM.button2) { + DOM.button2.addEventListener("touchstart", async (event) => { + if (event.cancelable) event.preventDefault(); + console.log("button2 touched"); + tryVibrate(event); + try { + dispatchEvent("toggleAudio", { settingsMode: settings.isSettingsMode }); + } catch (err) { + console.error("button2 error:", err.message); + dispatchEvent("logError", { message: `button2 error: ${err.message}` }); + await speak("audioError"); + } + }); + console.log("button2 event listener attached"); + } + + // Button 3: FPS + if (DOM.button3) { + DOM.button3.addEventListener("touchstart", async (event) => { + if (event.cancelable) event.preventDefault(); + console.log("button3 touched"); + tryVibrate(event); + try { + if (settings.isSettingsMode) { + dispatchEvent("toggleInput"); + } else { + if (settings.autoFPS) { + settings.autoFPS = false; + settings.updateInterval = 1000 / 20; + } else { + const fpsOptions = [20, 30, 60]; + const currentFps = 1000 / settings.updateInterval; + const currentIndex = fpsOptions.indexOf(currentFps); + if (currentIndex === fpsOptions.length - 1) { + settings.autoFPS = true; + } else { + const nextFps = fpsOptions[currentIndex + 1]; + settings.updateInterval = 1000 / nextFps; + } + } + dispatchEvent("updateFrameInterval", { + interval: settings.updateInterval, + }); + await speak("fpsBtn", { + fps: settings.autoFPS + ? 
"auto" + : Math.round(1000 / settings.updateInterval), + }); + } + dispatchEvent("updateUI", { + settingsMode: settings.isSettingsMode, + streamActive: !!settings.stream, + }); + } catch (err) { + console.error("button3 error:", err.message); + dispatchEvent("logError", { message: `button3 error: ${err.message}` }); + await speak("fpsError"); + } + }); + console.log("button3 event listener attached"); + } + + // Button 4: Save Settings + if (DOM.button4) { + DOM.button4.addEventListener("touchstart", async (event) => { + if (event.cancelable) event.preventDefault(); + console.log("button4 touched"); + tryVibrate(event); + try { + dispatchEvent("saveSettings", { + settingsMode: settings.isSettingsMode, + }); + } catch (err) { + console.error("button4 error:", err.message); + dispatchEvent("logError", { message: `button4 error: ${err.message}` }); + await speak("saveError"); + } + }); + console.log("button4 event listener attached"); + } + + // Button 5: Load Settings + if (DOM.button5) { + DOM.button5.addEventListener("touchstart", async (event) => { + if (event.cancelable) event.preventDefault(); + console.log("button5 touched"); + tryVibrate(event); + try { + dispatchEvent("loadSettings", { + settingsMode: settings.isSettingsMode, + }); + } catch (err) { + console.error("button5 error:", err.message); + dispatchEvent("logError", { message: `button5 error: ${err.message}` }); + await speak("loadError"); + } + }); + console.log("button5 event listener attached"); + } + + // Button 6: Settings Toggle + if (DOM.button6) { + DOM.button6.addEventListener("touchstart", async (event) => { + if (event.cancelable) event.preventDefault(); + console.log("button6 touched"); + tryVibrate(event); + try { + settings.isSettingsMode = !settings.isSettingsMode; + dispatchEvent("updateUI", { + settingsMode: settings.isSettingsMode, + streamActive: !!settings.stream, + }); + dispatchEvent("toggleDebug", { show: settings.isSettingsMode }); + } catch (err) { + console.error("button6 
error:", err.message); + dispatchEvent("logError", { message: `button6 error: ${err.message}` }); + await speak("settingsError"); + } + }); + console.log("button6 event listener attached"); + } + + console.log("setupSettingsHandlers: Setup complete"); +} + + +// File: web/ui/video-capture.js +import { settings } from '../core/state.js'; +import { structuredLog } from '../utils/logging.js'; +import { getText } from '../utils/utils.js'; +import { dispatchEvent } from '../core/dispatcher.js'; +import { getDOM } from '../core/context.js'; + +export async function setupVideoCapture(DOM) { + try { + if (!DOM.videoFeed || !DOM.frameCanvas) { + const msg = 'Missing videoFeed or frameCanvas in setupVideoCapture'; + structuredLog('ERROR', msg); + dispatchEvent('logError', { message: msg }); + return false; + } + + DOM.videoFeed.setAttribute('autoplay', ''); + DOM.videoFeed.setAttribute('muted', ''); + DOM.videoFeed.setAttribute('playsinline', ''); + DOM.frameCanvas.style.display = 'none'; + DOM.frameCanvas.setAttribute('aria-hidden', 'true'); + + structuredLog('INFO', 'setupVideoCapture: Video feed and canvas initialized'); + return true; + } catch (err) { + structuredLog('ERROR', 'setupVideoCapture error', { message: err.message }); + dispatchEvent('logError', { message: `Video capture setup error: ${err.message}` }); + return false; + } +} + +export async function cleanupVideoCapture() { + const DOM = getDOM(); + if (DOM.videoFeed?.srcObject) { + DOM.videoFeed.srcObject.getTracks().forEach(track => track.stop()); + DOM.videoFeed.srcObject = null; + } + DOM.frameCanvas.width = 0; + DOM.frameCanvas.height = 0; + structuredLog('INFO', 'cleanupVideoCapture: Video capture cleaned up'); +} + +// File: web/core/dispatcher.js +/* @ts-nocheck */ +import { settings, setAudioInterval, setStream, setMicStream, getLogs } from './state.js'; +import { getText, parseBrowserVersion, setTextAndAriaLabel } from '../utils/utils.js'; +import { withErrorBoundary } from '../utils/async.js'; 
+import { initializeMicAudio } from '../audio/audio-processor.js'; +import { processFrame } from './frame-processor.js'; +import { cleanupFrameProcessor } from './frame-processor.js'; +import { dispatchEvent, setDispatcher } from './dispatcher.js'; +import { structuredLog } from '../utils/logging.js'; + +let lastTTSTime = 0; +const ttsCooldown = 3000; +let fpsSamplerInterval = null; +let frameCount = 0; + +export async function createEventDispatcher(DOM) { + structuredLog('INFO', 'createEventDispatcher: Initializing event dispatcher', { domExists: !!DOM }); + if (!DOM) { + structuredLog('ERROR', 'DOM is undefined in createEventDispatcher'); + return { dispatchEvent: () => structuredLog('ERROR', 'dispatchEvent not initialized due to undefined DOM') }; + } + + structuredLog('DEBUG', 'DOM elements received', { + hasButton1: !!DOM.button1, + hasButton2: !!DOM.button2, + hasButton3: !!DOM.button3, + hasButton4: !!DOM.button4, + hasButton5: !!DOM.button5, + hasButton6: !!DOM.button6, + hasVideoFeed: !!DOM.videoFeed, + }); + + const [availableGrids, availableEngines, availableLanguages] = await Promise.all([ + fetch('./synthesis-grids/available-grids.json').then(res => res.json()), + fetch('./audio/synthesis-engines/available-engines.json').then(res => res.json()), + fetch('./languages/available-languages.json').then(res => res.json()) + ]); + + const browserInfo = { + userAgent: navigator.userAgent, + platform: navigator.platform, + parsedBrowserVersion: parseBrowserVersion(navigator.userAgent), + hardwareConcurrency: navigator.hardwareConcurrency || 'N/A', + deviceMemory: navigator.deviceMemory ? `${navigator.deviceMemory} GB` : 'N/A', + screen: `${screen.width}x${screen.height}`, + audioContextState: typeof audioContext !== 'undefined' ? 
audioContext.state : 'Not initialized', + streamActive: !!settings.stream, + micActive: !!settings.micStream, + currentFPSInterval: settings.updateInterval + }; + structuredLog('INFO', 'Enhanced browser and app debug info', browserInfo); + + if (settings.debugLogging) { + fpsSamplerInterval = setInterval(() => { + if (settings.stream) { + const avgFPS = frameCount / 10; + structuredLog('DEBUG', 'Average FPS sample', { avgFPS, overSeconds: 10 }); + frameCount = 0; + } + }, 10000); + } + + const handlers = { + updateUI: async ({ settingsMode, streamActive, micActive }) => { + try { + if (!DOM.button1 || !DOM.button2 || !DOM.button3 || !DOM.button4 || !DOM.button5 || !DOM.button6) { + const missing = [ + !DOM.button1 && 'button1', + !DOM.button2 && 'button2', + !DOM.button3 && 'button3', + !DOM.button4 && 'button4', + !DOM.button5 && 'button5', + !DOM.button6 && 'button6' + ].filter(Boolean); + structuredLog('ERROR', 'Missing critical DOM elements for UI update', { missing }); + dispatchEvent('logError', { message: 'Missing critical DOM elements for UI update' }); + return; + } + + const currentTime = performance.now(); + const grid = availableGrids.find(g => g.id === settings.gridType); + const engine = availableEngines.find(e => e.id === settings.synthesisEngine); + const language = availableLanguages.find(l => l.id === settings.language); + + const button1Text = settingsMode + ? await getText('button1.settings.text', { gridName: grid?.id || 'Grid' }, 'text') + : await getText(`button1.normal.${streamActive ? 'stop' : 'start'}.text`, {}, 'text'); + const button1Aria = settingsMode + ? await getText('button1.settings.aria', { gridType: settings.gridType }, 'aria') + : await getText(`button1.normal.${streamActive ? 'stop' : 'start'}.aria`, {}, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText(`button1.tts.${settingsMode ? 'gridSelect' : 'startStop'}`, { + state: settingsMode ? settings.gridType : (streamActive ? 
'stopping' : 'starting') + }); + } + setTextAndAriaLabel(DOM.button1, button1Text, button1Aria); + + const button2Text = settingsMode + ? await getText('button2.settings.text', { engineName: engine?.id || 'Engine' }, 'text') + : await getText(`button2.normal.${micActive ? 'off' : 'on'}.text`, {}, 'text'); + const button2Aria = settingsMode + ? await getText('button2.settings.aria', { synthesisEngine: settings.synthesisEngine }, 'aria') + : await getText(`button2.normal.${micActive ? 'off' : 'on'}.aria`, {}, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText(`button2.tts.${settingsMode ? 'synthesisSelect' : 'micToggle'}`, { + state: settingsMode ? settings.synthesisEngine : (micActive ? 'turningOff' : 'turningOn') + }); + } + setTextAndAriaLabel(DOM.button2, button2Text, button2Aria); + + const button3Text = settingsMode + ? await getText('button3.settings.text', { languageName: language?.id || 'Language' }, 'text') + : await getText('button3.normal.text', { languageName: language?.id || 'Language' }, 'text'); + const button3Aria = settingsMode + ? await getText('button3.settings.aria', { language: settings.language }, 'aria') + : await getText('button3.normal.aria', { language: settings.language }, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText(`button3.tts.${settingsMode ? 'videoSourceSelect' : 'languageSelect'}`, { + state: settingsMode ? (DOM.videoFeed?.srcObject?.getVideoTracks()[0]?.getSettings().facingMode || 'unknown') : settings.language + }); + } + setTextAndAriaLabel(DOM.button3, button3Text, button3Aria); + + const button4Text = settingsMode + ? await getText('button4.settings.text', {}, 'text') + : await getText(`button4.normal.${settings.autoFPS ? 'auto' : 'manual'}.text`, { fps: Math.round(1000 / settings.updateInterval) }, 'text'); + const button4Aria = settingsMode + ? 
await getText('button4.settings.aria', {}, 'aria') + : await getText('button4.normal.aria', {}, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText(`button4.tts.${settingsMode ? 'saveSettings' : 'fpsBtn'}`, { + state: settingsMode ? 'save' : (settings.autoFPS ? 'auto' : Math.round(1000 / settings.updateInterval)) + }); + } + setTextAndAriaLabel(DOM.button4, button4Text, button4Aria); + + const button5Text = settingsMode + ? await getText('button5.settings.text', {}, 'text') + : await getText('button5.normal.text', {}, 'text'); + const button5Aria = settingsMode + ? await getText('button5.settings.aria', {}, 'aria') + : await getText('button5.normal.aria', {}, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText(`button5.tts.${settingsMode ? 'loadSettings' : 'emailDebug'}`, { + state: settingsMode ? 'load' : 'email' + }); + } + setTextAndAriaLabel(DOM.button5, button5Text, button5Aria); + + const button6Text = await getText(`button6.${settingsMode ? 'settings' : 'normal'}.text`, {}, 'text'); + const button6Aria = await getText(`button6.${settingsMode ? 'settings' : 'normal'}.aria`, {}, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText('button6.tts.settingsToggle', { state: settingsMode ? 
'off' : 'on' }); + } + setTextAndAriaLabel(DOM.button6, button6Text, button6Aria); + + lastTTSTime = currentTime; + structuredLog('DEBUG', 'updateUI: UI updated', { settingsMode, streamActive, micActive }); + } catch (err) { + structuredLog('ERROR', 'updateUI error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `UI update error: ${err.message}` }); + } + }, + + processFrame: async () => { + const { data: result, error } = await withErrorBoundary(processFrame, DOM.videoFeed.videoWidth, DOM.videoFeed.videoHeight); + if (error) { + structuredLog('ERROR', 'processFrame handler error', { message: error.message, stack: error.stack }); + handlers.logError({ message: `Frame processing handler error: ${error.message}` }); + return; + } + if (!result) { + structuredLog('WARN', 'processFrame: No result returned', { width: DOM.videoFeed?.videoWidth, height: DOM.videoFeed?.videoHeight }); + return; + } + structuredLog('DEBUG', 'processFrame result', { notesCount: result.notes?.length || 0, avgIntensity: result.avgIntensity }); + frameCount++; + }, + + startStop: async ({ settingsMode }) => { + try { + if (settingsMode) { + const currentIndex = availableGrids.findIndex(g => g.id === settings.gridType); + const nextIndex = (currentIndex + 1) % availableGrids.length; + settings.gridType = availableGrids[nextIndex].id; + await getText('button1.tts.gridSelect', { state: settings.gridType }); + } else { + if (!settings.stream) { + const stream = await navigator.mediaDevices.getUserMedia({ video: true }); + DOM.videoFeed.srcObject = stream; + await new Promise((resolve, reject) => { + DOM.videoFeed.addEventListener('loadedmetadata', () => { + if (DOM.videoFeed.videoWidth <= 0 || DOM.videoFeed.videoHeight <= 0) { + return reject(new Error('Invalid video dimensions after metadata')); + } + structuredLog('INFO', 'Video metadata loaded', { width: DOM.videoFeed.videoWidth, height: DOM.videoFeed.videoHeight }); + resolve(); + }, { once: true }); + 
DOM.videoFeed.addEventListener('error', reject, { once: true }); + }); + setStream(stream); + setAudioInterval(setInterval(() => { + dispatchEvent('processFrame'); + }, settings.updateInterval)); + await getText('button1.tts.startStop', { state: 'starting' }); + } else { + settings.stream.getVideoTracks().forEach(track => track.stop()); + setStream(null); + await cleanupFrameProcessor(); + if (settings.micStream) { + settings.micStream.getTracks().forEach(track => track.stop()); + setMicStream(null); + initializeMicAudio(null); + } + clearInterval(settings.audioTimerId); + setAudioInterval(null); + if (fpsSamplerInterval) { + clearInterval(fpsSamplerInterval); + fpsSamplerInterval = null; + structuredLog('INFO', 'FPS sampler cleared on stream stop'); + } + await getText('button1.tts.startStop', { state: 'stopping' }); + } + dispatchEvent('updateUI', { settingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + } + } catch (err) { + structuredLog('ERROR', 'startStop error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Stream toggle error: ${err.message}` }); + await getText('button1.tts.cameraError'); + } + }, + + toggleAudio: async ({ settingsMode }) => { + try { + structuredLog('INFO', 'toggleAudio: Current mic state', { micActive: !!settings.micStream }); + if (settingsMode) { + const currentIndex = availableEngines.findIndex(e => e.id === settings.synthesisEngine); + const nextIndex = (currentIndex + 1) % availableEngines.length; + settings.synthesisEngine = availableEngines[nextIndex].id; + await getText('button2.tts.synthesisSelect', { state: settings.synthesisEngine }); + } else { + if (!settings.micStream) { + const micStream = await navigator.mediaDevices.getUserMedia({ audio: true }); + setMicStream(micStream); + initializeMicAudio(micStream); + await getText('button2.tts.micToggle', { state: 'turningOn' }); + } else { + settings.micStream.getTracks().forEach(track => track.stop()); + 
setMicStream(null); + initializeMicAudio(null); + await getText('button2.tts.micToggle', { state: 'turningOff' }); + } + dispatchEvent('updateUI', { settingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + } + } catch (err) { + structuredLog('ERROR', 'toggleAudio error', { message: err.message }); + handlers.logError({ message: `Mic toggle error: ${err.message}` }); + await getText('button2.tts.micError'); + } + }, + + toggleLanguage: async () => { + try { + const currentIndex = availableLanguages.findIndex(l => l.id === settings.language); + const nextIndex = (currentIndex + 1) % availableLanguages.length; + settings.language = availableLanguages[nextIndex].id; + await getText('button3.tts.languageSelect', { state: settings.language }); + dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + } catch (err) { + structuredLog('ERROR', 'toggleLanguage error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Language toggle error: ${err.message}` }); + await getText('button3.tts.languageError'); + } + }, + + toggleVideoSource: async () => { + try { + const oldStream = DOM.videoFeed?.srcObject; + if (oldStream) { + const currentVideoTrack = oldStream.getVideoTracks()[0]; + const currentFacingMode = currentVideoTrack.getSettings().facingMode || 'user'; + const newFacingMode = currentFacingMode === 'user' ? 
'environment' : 'user'; + + oldStream.getTracks().forEach(track => track.stop()); + await cleanupFrameProcessor(); + + const newStream = await navigator.mediaDevices.getUserMedia({ + video: { facingMode: newFacingMode }, + audio: !!settings.micStream + }); + DOM.videoFeed.srcObject = newStream; + await new Promise((resolve, reject) => { + DOM.videoFeed.addEventListener('loadedmetadata', () => { + if (DOM.videoFeed.videoWidth <= 0 || DOM.videoFeed.videoHeight <= 0) { + return reject(new Error('Invalid video dimensions after metadata')); + } + structuredLog('INFO', 'Video metadata loaded', { width: DOM.videoFeed.videoWidth, height: DOM.videoFeed.videoHeight }); + resolve(); + }, { once: true }); + DOM.videoFeed.addEventListener('error', reject, { once: true }); + }); + setStream(newStream); + + if (settings.micStream) { + setMicStream(newStream); + initializeMicAudio(newStream); + } + + await getText('button3.tts.videoSourceSelect', { state: newFacingMode }); + } else { + structuredLog('WARN', 'toggleVideoSource: No video track available'); + await getText('button3.tts.videoSourceError'); + } + } catch (err) { + structuredLog('ERROR', 'toggleVideoSource error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Video source toggle error: ${err.message}` }); + await getText('button3.tts.videoSourceError'); + } + }, + + updateFrameInterval: async ({ interval }) => { + try { + settings.updateInterval = interval; + if (settings.stream) { + clearInterval(settings.audioTimerId); + setAudioInterval(setInterval(() => { + dispatchEvent('processFrame'); + }, settings.updateInterval)); + } + await getText('button4.tts.fpsBtn', { + fps: settings.autoFPS ? 
'auto' : Math.round(1000 / settings.updateInterval) + }); + dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + } catch (err) { + structuredLog('ERROR', 'updateFrameInterval error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Frame interval update error: ${err.message}` }); + await getText('button4.tts.fpsError'); + } + }, + + toggleGrid: async () => { + try { + const currentIndex = availableGrids.findIndex(g => g.id === settings.gridType); + const nextIndex = (currentIndex + 1) % availableGrids.length; + settings.gridType = availableGrids[nextIndex].id; + await getText('button1.tts.gridSelect', { state: settings.gridType }); + dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + } catch (err) { + structuredLog('ERROR', 'toggleGrid error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Grid toggle error: ${err.message}` }); + await getText('button1.tts.startStop', { state: 'error' }); + } + }, + + toggleDebug: async ({ show }) => { + try { + if (DOM.debug) { + DOM.debug.style.display = show ? 'block' : 'none'; + } + await getText('button6.tts.settingsToggle', { state: show ? 
'on' : 'off' }); + } catch (err) { + structuredLog('ERROR', 'toggleDebug error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Debug toggle error: ${err.message}` }); + } + }, + + saveSettings: async () => { + try { + const settingsToSave = { + gridType: settings.gridType, + synthesisEngine: settings.synthesisEngine, + language: settings.language, + autoFPS: settings.autoFPS, + updateInterval: settings.updateInterval, + dayNightMode: settings.dayNightMode, + ttsEnabled: settings.ttsEnabled + }; + localStorage.setItem('acoustsee-settings', JSON.stringify(settingsToSave)); + await getText('button4.tts.saveSettings'); + } catch (err) { + structuredLog('ERROR', 'saveSettings error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Save settings error: ${err.message}` }); + await getText('button4.tts.saveError'); + } + dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + }, + + loadSettings: async () => { + try { + const savedSettings = localStorage.getItem('acoustsee-settings'); + if (savedSettings) { + let parsedSettings; + try { + parsedSettings = JSON.parse(savedSettings); + } catch (parseErr) { + throw new Error(`Invalid JSON in localStorage: ${parseErr.message}`); + } + + const expectedKeys = ['gridType', 'synthesisEngine', 'language', 'autoFPS', 'updateInterval', 'dayNightMode', 'ttsEnabled']; + const expectedTypes = { + gridType: 'string', + synthesisEngine: 'string', + language: 'string', + autoFPS: 'boolean', + updateInterval: 'number', + dayNightMode: 'string', + ttsEnabled: 'boolean' + }; + + expectedKeys.forEach(key => { + if (Object.hasOwn(parsedSettings, key) && typeof parsedSettings[key] === expectedTypes[key]) { + settings[key] = parsedSettings[key]; + } else if (Object.hasOwn(parsedSettings, key)) { + structuredLog('WARN', 'Invalid type for setting during load', { key, receivedType: typeof parsedSettings[key] 
}); + } + }); + + const extraKeys = Object.keys(parsedSettings).filter(key => !expectedKeys.includes(key)); + if (extraKeys.length > 0) { + structuredLog('WARN', 'Extra keys ignored in loaded settings (potential pollution)', { extraKeys }); + } + + await getText('button5.tts.loadSettings.loaded'); + } else { + await getText('button5.tts.loadSettings.none'); + } + } catch (err) { + structuredLog('ERROR', 'Load settings error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Load settings error: ${err.message}` }); + await getText('button5.tts.loadError'); + } + dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + }, + + emailDebug: async () => { + try { + const logsText = await getLogs(); + if (!logsText || logsText.trim() === '') { + structuredLog('WARN', 'emailDebug: No logs retrieved or empty from IndexedDB'); + alert('No logs available to download. Try generating some actions first.'); + await getText('button5.tts.emailDebug', { state: 'error' }); + return; + } + const blob = new Blob([logsText], { type: 'text/plain' }); + const url = URL.createObjectURL(blob); + const a = document.createElement('a'); + a.href = url; + a.download = 'acoustsee-debug-log.txt'; + document.body.appendChild(a); + a.click(); + document.body.removeChild(a); + URL.revokeObjectURL(url); + await getText('button5.tts.emailDebug'); + } catch (err) { + structuredLog('ERROR', 'emailDebug error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Email debug error: ${err.message}` }); + alert('Failed to download logs: ' + err.message); + await getText('button5.tts.emailDebug', { state: 'error' }); + } + }, + + logError: ({ message }) => { + structuredLog('ERROR', 'Error logged', { message }); + } + }; + + setDispatcher((eventName, payload = {}) => { + if (handlers[eventName]) { + try { + structuredLog('DEBUG', `Dispatching event: ${eventName}`, { payload 
}); + handlers[eventName](payload); + } catch (err) { + structuredLog('ERROR', `Error in handler ${eventName}`, { message: err.message, stack: err.stack }); + handlers.logError({ message: `Handler ${eventName} error: ${err.message}` }); + } + } else { + structuredLog('ERROR', `No handler found for event: ${eventName}`); + handlers.logError({ message: `No handler for event: ${eventName}` }); + } + }); + + structuredLog('INFO', 'createEventDispatcher: Dispatcher initialized'); + return { dispatchEvent }; +} + +// File: web/core/frame-processor.js +import { settings } from "./state.js"; +import { dispatchEvent } from "./dispatcher.js"; + +export async function mapFrameToNotes(frameData, width, height, prevFrameDataLeft, prevFrameDataRight) { + try { + // Use cached grids loaded at startup + const availableGrids = settings.availableGrids; + const grid = availableGrids.find((g) => g.id === settings.gridType); + if (!grid) { + console.error(`Grid not found: ${settings.gridType}`); + dispatchEvent("logError", { message: `Grid not found: ${settings.gridType}` }); + return { notes: [], prevFrameDataLeft, prevFrameDataRight }; + } + const gridModule = await import(`./synthesis-methods/grids/${grid.id}.js`); + const mapFunction = gridModule[`mapFrameTo${grid.id.split('-').map(word => word.charAt(0).toUpperCase() + word.slice(1)).join('')}`]; + if (!mapFunction) { + console.error(`Map function for ${grid.id} not found`); + dispatchEvent("logError", { message: `Map function for ${grid.id} not found` }); + return { notes: [], prevFrameDataLeft, prevFrameDataRight }; + } + // Determine split buffers and copy full RGBA pixels + const halfWidth = Math.floor(width / 2); + const frameSize = halfWidth * height * 4; + const leftFrameData = new Uint8ClampedArray(frameSize); + const rightFrameData = new Uint8ClampedArray(frameSize); + + for (let y = 0; y < height; y++) { + for (let x = 0; x < halfWidth; x++) { + const fullIdx = (y * width + x) * 4; + const halfIdx = (y * halfWidth + x) * 
4; + // Copy left RGBA + leftFrameData.set(frameData.subarray(fullIdx, fullIdx + 4), halfIdx); + // Copy right RGBA + const fullIdxR = (y * width + x + halfWidth) * 4; + rightFrameData.set(frameData.subarray(fullIdxR, fullIdxR + 4), halfIdx); + } + } + const leftResult = mapFunction(leftFrameData, halfWidth, height, prevFrameDataLeft, -1); + const rightResult = mapFunction(rightFrameData, halfWidth, height, prevFrameDataRight, 1); + const allNotes = [...(leftResult.notes || []), ...(rightResult.notes || [])]; + return { + notes: allNotes, + prevFrameDataLeft: leftResult.newFrameData, + prevFrameDataRight: rightResult.newFrameData, + }; + } catch (err) { + console.error("mapFrameToNotes error:", err.message); + dispatchEvent("logError", { message: `Frame mapping error: ${err.message}` }); + return { notes: [], prevFrameDataLeft, prevFrameDataRight }; + } +} + +// File: web/core/state.js +import { structuredLog } from '../utils/logging.js'; // Top import. +import { addIdbLog, getAllIdbLogs } from '../utils/idb-logger.js'; // New import for DB logging. + +export let settings = { + debugLogging: true, + stream: null, + availableGrids: [], // Loaded once at startup + availableEngines: [], // Loaded once at startup + availableLanguages: [], // Loaded once at startup + audioTimerId: null, // Renamed from audioInterval: timer ID from setInterval, or null when cleared. 
+ updateInterval: 30, + autoFPS: true, + gridType: null, + synthesisEngine: null, + language: null, + isSettingsMode: false, + micStream: null, + ttsEnabled: false, + dayNightMode: 'day' +}; + +export const loadConfigs = (async () => { + try { + const [grids, engines, languages, intervals] = await Promise.all([ + fetch('./synthesis-methods/grids/availableGrids.json').then(res => res.json()), + fetch('./synthesis-methods/engines/availableEngines.json').then(res => res.json()), + fetch('./languages/availableLanguages.json').then(res => res.json()), + Promise.resolve([50, 33, 16]) + ]); + settings.availableGrids = grids; + settings.availableEngines = engines; + settings.availableLanguages = languages; + settings.gridType = grids[0]?.id || settings.gridType; + settings.synthesisEngine = engines[0]?.id || settings.synthesisEngine; + settings.language = languages[0]?.id || settings.language; + settings.updateInterval = intervals[0] || settings.updateInterval; + } catch (err) { + structuredLog('ERROR', 'Failed to load configurations', { message: err.message }); + } +})(); + +export async function getLogs() { + // Fetch from IndexedDB and pretty-print for readability. + const allLogs = await getAllIdbLogs(); + return allLogs.map(log => { + try { + return `Timestamp: ${log.timestamp}\nLevel: ${log.level}\nMessage: ${log.message}\nData: ${JSON.stringify(log.data, null, 2)}\n---\n`; + } catch (err) { + return `Invalid log entry: ${JSON.stringify(log)}\n---\n`; // Fallback for malformed logs. 
+ } + }).join(''); +} + +export function setStream(stream) { + settings.stream = stream; + if (settings.debugLogging) { + structuredLog('INFO', 'setStream', { streamSet: !!stream }); + } +} + +export function setAudioInterval(timerId) { + settings.audioTimerId = timerId; + if (settings.debugLogging) { + const ms = settings.updateInterval; + structuredLog('INFO', 'setAudioInterval', { timerId, updateIntervalMs: ms }); + } +} + + +// File: web/languages/es-ES.json +{ + "button1": { + "normal": { + "start": { + "text": "Iniciar Procesamiento", + "aria": "Iniciar procesamiento de video" + }, + "stop": { + "text": "Detener Procesamiento", + "aria": "Detener procesamiento de video" + } + }, + "settings": { + "text": "Seleccionar Cuadrícula: {gridName}", + "aria": "Seleccionar tipo de cuadrícula {gridType}" + }, + "tts": { + "startStop": { + "starting": "Iniciando procesamiento", + "stopping": "Deteniendo procesamiento", + "error": "Error al iniciar o detener el procesamiento" + }, + "cameraError": "Error de acceso a la cámara", + "gridSelect": "Cuadrícula establecida en {state}" + } + }, + "button2": { + "normal": { + "on": { + "text": "Encender Micrófono", + "aria": "Encender micrófono" + }, + "off": { + "text": "Apagar Micrófono", + "aria": "Apagar micrófono" + } + }, + "settings": { + "text": "Seleccionar Motor: {engineName}", + "aria": "Seleccionar motor de síntesis {synthesisEngine}" + }, + "tts": { + "micToggle": { + "turningOn": "Encendiendo micrófono", + "turningOff": "Apagando micrófono" + }, + "micError": "Error de acceso al micrófono", + "synthesisSelect": "Síntesis establecida en {state}" + } + }, + "button3": { + "normal": { + "text": "Idioma: {languageName}", + "aria": "Seleccionar idioma {language}" + }, + "settings": { + "text": "Cambiar Cámara", + "aria": "Cambiar entre cámara frontal y trasera" + }, + "tts": { + "languageSelect": "Idioma establecido en {state}", + "videoSourceSelect": "Cámara establecida en {state}", + "videoSourceError": "Error al 
cambiar de cámara", + "languageError": "Error al cambiar de idioma" + } + }, + "button4": { + "normal": { + "auto": { + "text": "FPS Automático", + "aria": "Seleccionar velocidad de fotogramas" + }, + "manual": { + "text": "{fps} FPS", + "aria": "Seleccionar velocidad de fotogramas" + }, + "aria": "Seleccionar velocidad de fotogramas" + }, + "settings": { + "text": "Guardar Configuración", + "aria": "Guardar configuración" + }, + "tts": { + "fpsBtn": "Velocidad de fotogramas establecida en {fps}", + "fpsError": "Error en velocidad de fotogramas", + "saveSettings": "Configuración guardada", + "saveError": "Error al guardar configuración" + } + }, + "button5": { + "normal": { + "text": "Enviar Registro de Consola", + "aria": "Enviar registro de consola" + }, + "settings": { + "text": "Cargar Configuración", + "aria": "Cargar configuración" + }, + "tts": { + "emailDebug": { + "email": "Enviando registro de consola", + "error": "Error al enviar registro de consola" + }, + "loadSettings": { + "loaded": "Configuración cargada", + "none": "No se encontró configuración" + }, + "loadError": "Error al cargar configuración" + } + }, + "button6": { + "normal": { + "text": "Configuración", + "aria": "Entrar en modo configuración" + }, + "settings": { + "text": "Salir de Configuración", + "aria": "Salir del modo configuración" + }, + "tts": { + "settingsToggle": { + "on": "Entrando en modo configuración", + "off": "Saliendo del modo configuración" + }, + "settingsError": "Error al alternar configuración" + } + } +} + +// File: web/languages/availableLanguages.json +[ + { + "id": "es-ES", + "createdAt": 1751622668665.7266 + }, + { + "id": "en-US", + "createdAt": 1751622636604.726 + } +] + +// File: web/languages/en-US.json +{ +"powerOn": { + "text": "Power On", + "aria": "Power On to enable audio", + "failed": { + "text": "Audio Failed - Retry", + "aria": "Retry audio initialization" + } + }, + "videoFeed": { + "aria": "Video Feed" + }, + "frameCanvas": { + "aria": "Hidden Frame 
Processing Canvas" + }, + "debugPanel": { + "aria": "Debug Panel" + }, + "audioOn": "Audio on", + "audioOff": "Audio off", + "audioError": "Audio initialization error", + + "button1": { + "normal": { + "start": { + "text": "Start", + "aria": "Start video processing" + }, + "stop": { + "text": "Stop", + "aria": "Stop video processing" + } + }, + "settings": { + "text": "Kernel: {gridType}", + "aria": "Kernel selection {gridType}" + }, + "tts": { + "startStop": { + "starting": "Starting synesthesia", + "stopping": "Stopping synesthesia", + "error": "Error starting or stopping processing" + }, + "cameraError": "Camera access error", + "gridSelect": "Kernel set to {state}" + } + }, + "button2": { + "normal": { + "on": { + "text": "Mic On", + "aria": "Turn on microphone" + }, + "off": { + "text": "Mic Off", + "aria": "Turn off microphone" + } + }, + "settings": { + "text": "Sound synthesizer: {synthesisEngine}", + "aria": "Select synthesis engine {synthesisEngine}" + }, + "tts": { + "micToggle": { + "turningOn": "Turning on microphone", + "turningOff": "Turning off microphone" + }, + "micError": "Microphone access error", + "synthesisSelect": "Synthesis set to {state}" + } + }, + "button3": { + "normal": { + "text": "Language: {languageName}", + "aria": "Select language {languageName}" + }, + "settings": { + "text": "Input: {inputType}", + "aria": "Input selector: {inputType}" + }, + "tts": { + "languageSelect": "Language set to {state}", + "languageError": "Language toggle error", + "videoSourceSelect": "Camera set to {state}", + "videoSourceError": "Camera switch error" + } + }, + "button4": { + "normal": { + "auto": { + "text": "Auto FPS", + "aria": "Select frame rate" + }, + "manual": { + "text": "{fps} FPS", + "aria": "Select frame rate" + }, + "aria": "Select frame rate" + }, + "settings": { + "text": "Save Settings", + "aria": "Save settings" + }, + "tts": { + "fpsBtn": "Frame rate set to {fps}", + "fpsError": "Frame rate error", + "saveSettings": "Settings saved", + "saveError": "Error saving settings" + } + }, + "button5": { + "normal": { + "text": "Email Console Log", + "aria": "Email console log" + }, + "settings": { + "text": "Load 
Settings", + "aria": "Load settings" + }, + "tts": { + "emailDebug": { + "email": "Emailing console log for debugging", + "error": "Error emailing console log" + }, + "loadSettings": { + "loaded": "Settings loaded", + "none": "No settings found" + }, + "loadError": "Error loading settings" + } + }, + "button6": { + "normal": { + "text": "Settings", + "aria": "Enter settings mode" + }, + "settings": { + "text": "Exit Settings", + "aria": "Exit settings mode" + }, + "tts": { + "settingsToggle": { + "on": "Entering settings mode", + "off": "Exiting settings mode" + }, + "settingsError": "Settings toggle error" + } + } +} + +// File: web/styles.css +body { + font-family: Arial, sans-serif; + margin: 0; + padding: 0; + height: 100vh; + width: 100vw; + display: flex; + justify-content: center; + align-items: center; + overflow: hidden; + background-color: #f0f0f0; +} + +.splash-screen { + position: fixed; + top: 0; + left: 0; + width: 100%; + height: 100%; + background-color: #000; + display: flex; + justify-content: center; + align-items: center; + z-index: 30; +} + +.power-on-button { + font-size: 5vw; + padding: 2vw 4vw; + background-color: #4CAF50; + color: white; + border: none; + border-radius: 1vw; + cursor: pointer; +} + +.instructions-button { + font-size: 5vw; + padding: 2vw 4vw; + background-color: #4CAF50; + color: white; + border: none; + border-radius: 1vw; + cursor: pointer; +} + +.main-container { + width: 100%; + height: 100%; + display: grid; + grid-template-columns: repeat(2, 50%); + grid-template-rows: repeat(3, 33.33%); + gap: 1vw; + padding: 1vw; + box-sizing: border-box; +} + +.grid-button { + font-size: 3vw; + background-color: #4CAF50; + color: white; + border: none; + border-radius: 1vw; + cursor: pointer; + display: flex; + justify-content: center; + align-items: center; + position: relative; +} + +.video-container { + position: relative; + overflow: hidden; +} + +.video-container video { + width: 100%; + height: 100%; + object-fit: cover; + 
z-index: 1; +} + +.video-container .button-text { + position: absolute; + bottom: 5%; + left: 50%; + transform: translateX(-50%); + z-index: 2; + background: rgba(0, 0, 0, 0.5); + padding: 0.5vw 1vw; + border-radius: 0.5vw; + color: white; + font-size: 3vw; +} + +// File: web/.eslintrc.json +// future/web/.eslintrc.json +{ + "env": { "browser": true, "es2020": true }, + "parserOptions": { "ecmaVersion": 2020, "sourceType": "module" } +} + + +// File: web/audio/synthesis-engines/sine-wave.js +import { audioContext, oscillators } from "../../audio-processor.js"; + +export function playSineWave(notes) { + let oscIndex = 0; + const allNotes = notes.sort((a, b) => b.intensity - a.intensity); + for (let i = 0; i < oscillators.length; i++) { + const oscData = oscillators[i]; + if (oscIndex < allNotes.length && i < oscillators.length) { + const { pitch, intensity, harmonics, pan } = allNotes[oscIndex]; + oscData.osc.type = "sine"; + oscData.osc.frequency.setTargetAtTime( + pitch, + audioContext.currentTime, + 0.015, + ); + oscData.gain.gain.setTargetAtTime( + intensity, + audioContext.currentTime, + 0.015, + ); + oscData.panner.pan.setTargetAtTime(pan, audioContext.currentTime, 0.015); + oscData.active = true; + if ( + harmonics.length && + oscIndex + harmonics.length < oscillators.length + ) { + for ( + let h = 0; + h < harmonics.length && oscIndex + h < oscillators.length; + h++ + ) { + oscIndex++; + const harmonicOsc = oscillators[oscIndex]; + harmonicOsc.osc.type = "sine"; + harmonicOsc.osc.frequency.setTargetAtTime( + harmonics[h], + audioContext.currentTime, + 0.015, + ); + harmonicOsc.gain.gain.setTargetAtTime( + intensity * 0.5, + audioContext.currentTime, + 0.015, + ); + harmonicOsc.panner.pan.setTargetAtTime( + pan, + audioContext.currentTime, + 0.015, + ); + harmonicOsc.active = true; + } + } + oscIndex++; + } else { + oscData.gain.gain.setTargetAtTime(0, audioContext.currentTime, 0.015); + oscData.active = false; + } + } +} + + +// File: 
web/audio/synthesis-engines/fm-synthesis.js +import { audioContext, oscillators, modulators } from "../../audio-processor.js"; + +export function playFmSynthesis(notes) { + let oscIndex = 0; + let modIndex = 0; + const allNotes = notes.sort((a, b) => b.intensity - a.intensity); + for (let i = 0; i < oscillators.length; i++) { + const oscData = oscillators[i]; + if (oscIndex < allNotes.length) { + const { pitch, intensity, harmonics, pan } = allNotes[oscIndex]; + oscData.osc.type = "sine"; + oscData.osc.frequency.setTargetAtTime( + pitch, + audioContext.currentTime, + 0.015, + ); + oscData.gain.gain.setTargetAtTime( + intensity, + audioContext.currentTime, + 0.015, + ); + oscData.panner.pan.setTargetAtTime(pan, audioContext.currentTime, 0.015); + oscData.active = true; + if (harmonics.length) { + // handle one modulator per note, reuse or create + let modData; + if (modIndex < modulators.length) { + modData = modulators[modIndex]; + } else { + const mOsc = audioContext.createOscillator(); + const mGain = audioContext.createGain(); + modulators.push({ osc: mOsc, gain: mGain, started: false }); + modData = modulators[modulators.length - 1]; + } + // configure modulator + modData.osc.type = "sine"; + modData.osc.frequency.setTargetAtTime( + pitch * 2, + audioContext.currentTime, + 0.015, + ); + modData.gain.gain.setTargetAtTime( + intensity * 100, + audioContext.currentTime, + 0.015, + ); + // connect and start only once + modData.osc.connect(modData.gain).connect(oscData.osc.frequency); + if (!modData.started) { + modData.osc.start(); + modData.started = true; + } + modIndex++; + // Use next oscillator for main harmonic + if (oscIndex + 1 < oscillators.length) { + const harmonicOsc = oscillators[oscIndex + 1]; + harmonicOsc.osc.type = "sine"; + harmonicOsc.osc.frequency.setTargetAtTime( + harmonics[0], + audioContext.currentTime, + 0.015, + ); + harmonicOsc.gain.gain.setTargetAtTime( + intensity * 0.5, + audioContext.currentTime, + 0.015, + ); + 
harmonicOsc.panner.pan.setTargetAtTime( + pan, + audioContext.currentTime, + 0.015, + ); + harmonicOsc.active = true; + } + } + oscIndex++; + } else { + oscData.gain.gain.setTargetAtTime(0, audioContext.currentTime, 0.015); + oscData.active = false; + } + } + // silence any unused modulators + for (let i = modIndex; i < modulators.length; i++) { + modulators[i].gain.gain.setTargetAtTime(0, audioContext.currentTime, 0.015); + } +} + + +// File: web/audio/synthesis-engines/availableEngines.json +[ + { + "id": "sine-wave", + "createdAt": 1750899236911.1191 + }, + { + "id": "fm-synthesis", + "createdAt": 1750899236897.1191 + } +] + +// File: web/audio/synthesis-engines/available-engines.json + + + +// File: web/audio/audio-controls.js +// Update web/ui/audio-controls.js: Remove { passive: true } from touchstart listener to ensure it counts as a user gesture for AudioContext + +import { getText } from "./utils.js"; +import { initializeAudio, cleanupAudio } from "../audio-processor.js"; +import { structuredLog } from "../utils/logging.js"; + +let isAudioContextInitialized = false; +let audioContext = null; + +export function setupAudioControls({ dispatchEvent: dispatch, DOM }) { + if (!DOM || !DOM.powerOn) { + console.error("setupAudioControls: Missing DOM elements"); + dispatch("logError", { message: "Missing DOM elements in audio-controls" }); + return; + } + + const initializeAudioContext = async (event) => { + console.log(`powerOn: ${event.type} event`); + const maxRetries = 3; + for (let i = 0; i <= maxRetries; i++) { + try { + audioContext = new (window.AudioContext || window.webkitAudioContext)({ sampleRate: 44100 }); + if (!audioContext) throw new Error("AudioContext creation failed"); + if (audioContext.state === "suspended") { + console.log("AudioContext is suspended, attempting to resume"); + await audioContext.resume(); + } + if (audioContext.state !== "running") { + throw new Error(`AudioContext failed to start, state: ${audioContext.state}`); + } + await 
initializeAudio(audioContext); + isAudioContextInitialized = true; + DOM.splashScreen.style.display = "none"; + DOM.mainContainer.style.display = "grid"; + await getText("audioOn"); + dispatch("updateUI", { settingsMode: false, streamActive: false, micActive: false }); + console.log("powerOn: AudioContext initialized, UI updated"); + return; + } catch (err) { + if (err.message.includes("Permission denied")) { + structuredLog('ERROR', 'Audio init permission denied', { message: err.message }); + await getText('button2.tts.micError'); + } + console.error(`Attempt ${i + 1} failed: ${err.message}`); + dispatch("logError", { message: `Audio init attempt ${i + 1} failed: ${err.message}` }); + } + } + await getText("audioError"); + DOM.powerOn.textContent = await getText("powerOn.failed.text", {}, 'text'); + DOM.powerOn.setAttribute("aria-label", await getText("powerOn.failed.aria", {}, 'aria')); + }; + + const handlePowerOn = async (event) => { + if (!isAudioContextInitialized) { + await initializeAudioContext(event); + } else { + console.log("powerOn: Audio already initialized, cleaning up"); + await cleanupAudio(); + isAudioContextInitialized = false; + DOM.splashScreen.style.display = "flex"; + DOM.mainContainer.style.display = "none"; + await getText("audioOff"); + dispatch("updateUI", { settingsMode: false, streamActive: false, micActive: false }); + } + }; + + DOM.powerOn.addEventListener("click", handlePowerOn); + DOM.powerOn.addEventListener("touchstart", handlePowerOn); // Removed { passive: true } + + console.log("setupAudioControls: Audio controls initialized"); +} + + +// File: web/audio/audio-processor.js +// future/web/audio-processor.js +import { settings } from "../core/state.js"; +import { dispatchEvent } from "../core/dispatcher.js"; +import { structuredLog } from "../utils/logging.js"; // Add for detailed logging. 
+ +let audioContext = null; +let isAudioInitialized = false; +let oscillators = []; +let modulators = []; +let micSource = null; +let micGainNode = null; + +export function setAudioContext(newContext) { + audioContext = newContext; + isAudioInitialized = false; +} + +export async function initializeAudio(context) { + if (isAudioInitialized || !context) { + structuredLog('WARN', 'initializeAudio: Already initialized or no context'); + return false; + } + try { + audioContext = context; + if (audioContext.state === "suspended") { + structuredLog('INFO', 'initializeAudio: Resuming AudioContext'); + await audioContext.resume(); + } + if (audioContext.state !== "running") { + throw new Error(`AudioContext not running, state: ${audioContext.state}`); + } + oscillators = Array(24) + .fill() + .map(() => { + const osc = audioContext.createOscillator(); + const gain = audioContext.createGain(); + const panner = audioContext.createStereoPanner(); + osc.type = "sine"; + osc.frequency.setValueAtTime(0, audioContext.currentTime); + gain.gain.setValueAtTime(0, audioContext.currentTime); + panner.pan.setValueAtTime(0, audioContext.currentTime); + osc.connect(gain).connect(panner).connect(audioContext.destination); + osc.start(); + return { osc, gain, panner, active: false }; + }); + isAudioInitialized = true; + structuredLog('INFO', 'initializeAudio: Audio initialized with 24 oscillators'); + return true; + } catch (error) { + structuredLog('ERROR', 'initializeAudio error', { message: error.message }); + dispatchEvent('logError', { message: `Audio init error: ${error.message}` }); + isAudioInitialized = false; + audioContext = null; + return false; + } +} + +export async function playAudio(notes) { + if (!isAudioInitialized || !audioContext || audioContext.state !== "running") { + structuredLog('WARN', 'playAudio: Audio not initialized or context not running', { + isAudioInitialized, + audioContext: !!audioContext, + state: audioContext?.state, + }); + // Attempt to resume 
AudioContext on mobile (requires user gesture). + if (audioContext && audioContext.state === "suspended") { + try { + await audioContext.resume(); + structuredLog('INFO', 'playAudio: Resumed AudioContext'); + } catch (err) { + structuredLog('ERROR', 'playAudio: Failed to resume AudioContext', { message: err.message }); + } + } + return; + } + try { + // Use cached engines loaded at startup + const availableEngines = settings.availableEngines; + const engine = availableEngines.find((e) => e.id === settings.synthesisEngine); + if (!engine) { + structuredLog('ERROR', `playAudio: Engine not found`, { synthesisEngine: settings.synthesisEngine }); + dispatchEvent('logError', { message: `Engine not found: ${settings.synthesisEngine}` }); + return; + } + const engineModule = await import(`./synthesis-methods/engines/${engine.id}.js`); + // Fix DEF-001: Normalize to camelCase (e.g., fm-synthesis -> playFmSynthesis). + const engineName = engine.id.split('-').map(word => word.charAt(0).toUpperCase() + word.slice(1)).join(''); + const playFunction = engineModule[`play${engineName}`]; + if (playFunction) { + playFunction(notes); + structuredLog('INFO', 'playAudio: Played notes', { engine: engine.id, noteCount: notes.length }); + } else { + structuredLog('ERROR', `playAudio: Play function not found`, { engine: engine.id }); + dispatchEvent('logError', { message: `Play function for ${engine.id} not found` }); + } + } catch (err) { + structuredLog('ERROR', 'playAudio error', { message: err.message }); + dispatchEvent('logError', { message: `Play audio error: ${err.message}` }); + } +} + +export async function cleanupAudio() { + if (isAudioInitialized && audioContext) { + try { + oscillators.forEach(({ osc, gain, panner }) => { + osc.stop(); + osc.disconnect(); + gain.disconnect(); + panner.disconnect(); + }); + if (micSource && micGainNode) { + micSource.disconnect(); + micGainNode.disconnect(); + micSource = null; + micGainNode = null; + } + oscillators = []; + // cleanup 
modulators + modulators.forEach(({ osc, gain }) => { + osc.stop(); + osc.disconnect(); + gain.disconnect(); + }); + modulators = []; + // Fully close AudioContext to release system resources + await audioContext.close(); + audioContext = null; + isAudioInitialized = false; + structuredLog('INFO', 'cleanupAudio: Audio resources cleaned up and context closed'); + } catch (err) { + structuredLog('ERROR', 'cleanupAudio error', { message: err.message }); + dispatchEvent('logError', { message: `Cleanup audio error: ${err.message}` }); + } + } +} + +export async function stopAudio() { + await cleanupAudio(); +} + +export function initializeMicAudio(micStream) { + if (!audioContext || !isAudioInitialized) { + structuredLog('WARN', 'initializeMicAudio: Audio context not initialized'); + dispatchEvent('logError', { message: 'Audio context not initialized for microphone' }); + return null; + } + try { + if (micSource && micGainNode) { + micSource.disconnect(); + micGainNode.disconnect(); + micSource = null; + micGainNode = null; + } + if (micStream) { + micSource = audioContext.createMediaStreamSource(micStream); + micGainNode = audioContext.createGain(); + micGainNode.gain.setValueAtTime(0.7, audioContext.currentTime); + micSource.connect(micGainNode).connect(audioContext.destination); + structuredLog('INFO', 'initializeMicAudio: Microphone stream connected', { gain: 0.7 }); + return micSource; + } + structuredLog('INFO', 'initializeMicAudio: Microphone stream disconnected'); + return null; + } catch (error) { + structuredLog('ERROR', 'initializeMicAudio error', { message: error.message }); + dispatchEvent('logError', { message: `Microphone init error: ${error.message}` }); + return null; + } +} + +export { audioContext, isAudioInitialized, oscillators, modulators }; + +// File: web/main.js +import { setupUIController } from './ui/ui-controller.js'; +import { createEventDispatcher } from './core/dispatcher.js'; +import { loadConfigs, settings } from './core/state.js'; 
+import { structuredLog } from './utils/logging.js'; +import { setDOM } from './core/context.js'; +import { getText } from './utils/utils.js'; + +const DOM = { + videoFeed: document.getElementById('videoFeed'), + frameCanvas: document.getElementById('frameCanvas'), + button1: document.getElementById('button1'), + button2: document.getElementById('button2'), + button3: document.getElementById('button3'), + button4: document.getElementById('button4'), + button5: document.getElementById('button5'), + button6: document.getElementById('button6'), + powerOn: document.getElementById('powerOn'), + splashScreen: document.getElementById('splashScreen'), + mainContainer: document.getElementById('mainContainer'), + debugPanel: document.getElementById('debugPanel'), +}; + +// Initialize shared DOM context for modules that need it +setDOM(DOM); + +async function init() { + try { + await loadConfigs; + let getText; + try { + ({ getText } = await import('./ui/utils.js')); + console.log('utils.js imported successfully'); // Confirm import worked + } catch (importErr) { + console.error('Failed to import utils.js:', importErr.message); + getText = async (key) => { // Make async for consistency with await calls + console.warn('TTS fallback for key:', key); + return key; // Return key as fallback string (better than '') + }; + } + // Set aria and text for all relevant elements deriving from ID + const staticElements = [ + DOM.splashScreen, + DOM.mainContainer, + DOM.powerOn, + DOM.videoFeed, + DOM.frameCanvas, + DOM.debugPanel, + DOM.button1, + DOM.button2, + DOM.button3, + DOM.button4, + DOM.button5, + DOM.button6, + ]; + for (const el of staticElements) { + if (!el) { + console.warn(`Skipping null element in staticElements`); + continue; + } + const baseKey = el.id; + el.setAttribute('aria-label', await getText(`${baseKey}.aria`, {}, 'aria')); + // Set text content only for elements that need it (e.g., powerOn, buttons) + if (['powerOn'].includes(el.id) || el.tagName === 'BUTTON') { 
+ el.textContent = await getText(`${baseKey}.text`, {}, 'text'); + } + } + if (!DOM.videoFeed || !DOM.button1 || !DOM.button2 || !DOM.button3 || + !DOM.button4 || !DOM.button5 || !DOM.button6 || !DOM.powerOn || + !DOM.splashScreen || !DOM.mainContainer || !DOM.debugPanel || !DOM.frameCanvas) { + throw new Error('Missing DOM elements in main.js'); + } + const { dispatchEvent } = await createEventDispatcher(DOM); + setupUIController({ dispatchEvent, DOM }); + // Console overrides moved here to break circular dependency + const originalConsole = { + log: console.log, + warn: console.warn, + error: console.error + }; + const oldStructuredLog = structuredLog; + window.structuredLog = async (level, message, data = {}, persist = true, sample = true) => { + const backup = { log: console.log, warn: console.warn, error: console.error }; + console.log = originalConsole.log; + console.warn = originalConsole.warn; + console.error = originalConsole.error; + await oldStructuredLog(level, message, data, persist, sample); + console.log = backup.log; + console.warn = backup.warn; + console.error = backup.error; + }; + console.log = (...args) => { + originalConsole.log.apply(console, args); + if (settings.debugLogging) window.structuredLog('INFO', 'Console log', { args }, false); + }; + console.warn = (...args) => { + originalConsole.warn.apply(console, args); + if (settings.debugLogging) window.structuredLog('WARN', 'Console warn', { args }, false); + }; + console.error = (...args) => { + originalConsole.error.apply(console, args); + window.structuredLog('ERROR', 'Console error', { args }, false); + }; + // Force initial UI update for dynamic content + dispatchEvent('updateUI', { settingsMode: false, streamActive: false, micActive: false }); + console.log('init: UI setup complete'); + } catch (err) { + console.error('init error:', err.message); + try { + await getText('init.tts.error'); + } catch (ttsErr) { + console.error('TTS error:', ttsErr.message); + } + } +} + +// Adds 
uncaught error handler for global contexts (e.g., hangs/OOM). +window.onerror = function (message, source, lineno, colno, error) { + structuredLog('ERROR', 'Uncaught global error', { message, source, lineno, colno, stack: error ? error.stack : 'N/A' }); + return true; // Prevent default browser error logging. +}; + +init(); + + +// File: web/index.html + + + + + + + + + AcoustSee + + + +
+ + + + + +// File: web/context.js +let DOM = null; +let dispatchEvent = null; + +export function setDOM(dom) { + DOM = dom; +} + +export function getDOM() { + if (!DOM) { + console.error("DOM not initialized"); + throw new Error("DOM not initialized"); + } + return DOM; +} + +export function setDispatchEvent(dispatcher) { + dispatchEvent = dispatcher; +} + +export function getDispatchEvent() { + if (!dispatchEvent) { + console.error("dispatchEvent not initialized"); + throw new Error("dispatchEvent not initialized"); + } + return dispatchEvent; +} + +// File: web/synthesis-grids/availableGrids.json +[ + { + "id": "hex-tonnetz", + "createdAt": 1750899236982.1191 + }, + { + "id": "circle-of-fifths", + "createdAt": 1750899236950.1191 + } +] + +// File: web/synthesis-grids/available-grids.json + + + +// File: web/synthesis-grids/hex-tonnetz.js +import { settings } from "../../state.js"; + +const gridSize = 32; +const notesPerOctave = 12; +const octaves = 5; +const minFreq = 100; +const maxFreq = 3200; +const frequencies = []; +for (let octave = 0; octave < octaves; octave++) { + for (let note = 0; note < notesPerOctave; note++) { + const freq = minFreq * Math.pow(2, octave + note / notesPerOctave); + if (freq <= maxFreq) frequencies.push(freq); + } +} +const tonnetzGrid = Array(gridSize) + .fill() + .map(() => Array(gridSize).fill(0)); +for (let y = 0; y < gridSize; y++) { + for (let x = 0; x < gridSize; x++) { + const octave = Math.floor((y / gridSize) * octaves); + const noteOffset = (x + (y % 2) * 6) % notesPerOctave; + const freqIndex = octave * notesPerOctave + noteOffset; + tonnetzGrid[y][x] = + frequencies[freqIndex % frequencies.length] || + frequencies[frequencies.length - 1]; + } +} + +export function mapFrameToHexTonnetz( + frameData, + width, + height, + prevFrameData, + panValue, +) { + const gridWidth = width / gridSize; + const gridHeight = height / gridSize; + const movingRegions = []; + const newFrameData = new Uint8ClampedArray(frameData); + + // 
Correct avgIntensity over pixels (skip alpha) + let avgIntensity = 0; + for (let i = 0; i < frameData.length; i += 4) { + const r = frameData[i]; + const g = frameData[i + 1]; + const b = frameData[i + 2]; + avgIntensity += (r + g + b) / 3; + } + avgIntensity /= (frameData.length / 4); + + if (prevFrameData) { + for (let y = 0; y < height; y++) { + for (let x = 0; x < width; x++) { + const idx = (y * width + x) * 4; + const r = frameData[idx]; + const g = frameData[idx + 1]; + const b = frameData[idx + 2]; + const intensity = (r + g + b) / 3; + + const pr = prevFrameData[idx]; + const pg = prevFrameData[idx + 1]; + const pb = prevFrameData[idx + 2]; + const prevIntensity = (pr + pg + pb) / 3; + + const delta = Math.abs(intensity - prevIntensity); + if (delta > 20) { + const gridX = Math.floor(x / gridWidth); + const gridY = Math.floor(y / gridHeight); + movingRegions.push({ gridX, gridY, intensity, delta }); + } + } + } + } + + movingRegions.sort((a, b) => b.delta - a.delta); + const notes = []; + const usedCells = new Set(); + for (let i = 0; i < Math.min(16, movingRegions.length); i++) { + const { gridX, gridY, intensity } = movingRegions[i]; + const cellKey = `${gridX},${gridY}`; + if (usedCells.has(cellKey)) continue; + usedCells.add(cellKey); + for (let dy = -1; dy <= 1; dy++) { + for (let dx = -1; dx <= 1; dx++) { + if (dx === 0 && dy === 0) continue; + usedCells.add(`${gridX + dx},${gridY + dy}`); + } + } + const freq = tonnetzGrid[gridY][gridX]; + const amplitude = + settings.dayNightMode === "day" + ? 
0.02 + (intensity / 255) * 0.06 + : 0.08 - (intensity / 255) * 0.06; + const harmonics = [freq * Math.pow(2, 7 / 12), freq * Math.pow(2, 4 / 12)]; + notes.push({ pitch: freq, intensity: amplitude, harmonics, pan: panValue }); + } + + return { notes, newFrameData, avgIntensity }; +} + + +// File: web/synthesis-grids/circle-of-fifths.js +import { settings } from "../../state.js"; + +const notesPerOctave = 12; +const octaves = 5; +const minFreq = 100; +const maxFreq = 3200; +const frequencies = []; +for (let octave = 0; octave < octaves; octave++) { + for (let note = 0; note < notesPerOctave; note++) { + const freq = minFreq * Math.pow(2, octave + note / notesPerOctave); + if (freq <= maxFreq) frequencies.push(freq); + } +} + +export function mapFrameToCircleOfFifths( + frameData, + width, + height, + prevFrameData, + panValue, +) { + const gridWidth = width / 12; + const gridHeight = height / 12; + const movingRegions = []; + const newFrameData = new Uint8ClampedArray(frameData); + // Correct avgIntensity over pixels (skip alpha) + let avgIntensity = 0; + for (let i = 0; i < frameData.length; i += 4) { + const r = frameData[i]; + const g = frameData[i + 1]; + const b = frameData[i + 2]; + avgIntensity += (r + g + b) / 3; + } + avgIntensity /= (frameData.length / 4); + + if (prevFrameData) { + for (let y = 0; y < height; y++) { + for (let x = 0; x < width; x++) { + const idx = (y * width + x) * 4; + const r = frameData[idx]; + const g = frameData[idx + 1]; + const b = frameData[idx + 2]; + const intensity = (r + g + b) / 3; + + const pr = prevFrameData[idx]; + const pg = prevFrameData[idx + 1]; + const pb = prevFrameData[idx + 2]; + const prevIntensity = (pr + pg + pb) / 3; + + const delta = Math.abs(intensity - prevIntensity); + if (delta > 20) { + const gridX = Math.floor(x / gridWidth); + const gridY = Math.floor(y / gridHeight); + movingRegions.push({ gridX, gridY, intensity, delta }); + } + } + } + } + + movingRegions.sort((a, b) => b.delta - a.delta); + const 
notes = []; + const usedCells = new Set(); + for (let i = 0; i < Math.min(8, movingRegions.length); i++) { + const { gridX, gridY, intensity } = movingRegions[i]; + const cellKey = `${gridX},${gridY}`; + if (usedCells.has(cellKey)) continue; + usedCells.add(cellKey); + const noteIndex = (gridX + gridY) % notesPerOctave; + const freq = frequencies[noteIndex] || frequencies[frequencies.length - 1]; + const amplitude = + settings.dayNightMode === "day" + ? 0.02 + (intensity / 255) * 0.06 + : 0.08 - (intensity / 255) * 0.06; + const harmonics = [freq * Math.pow(2, 7 / 12), freq * Math.pow(2, 4 / 12)]; + notes.push({ pitch: freq, intensity: amplitude, harmonics, pan: panValue }); + } + + return { notes, newFrameData, avgIntensity }; +} + + +// File: web/test/ui-settings.test.js +// test/ui-settings.test.js +import { setupUISettings } from '../ui/ui-settings.js'; +import { settings } from '../state.js'; + +jest.mock('../state.js', () => ({ + settings: { isSettingsMode: false, stream: null, micStream: null }, +})); + +describe('ui-settings', () => { + beforeEach(() => { + document.body.innerHTML = ` +
+ + + + + + +
+ `; + }); + + test('binds button events', () => { + const dispatchEvent = jest.fn(); + setupUISettings({ dispatchEvent, DOM: document }); + expect(document.getElementById('button1').ontouchstart).toBeDefined(); + expect(document.getElementById('button2').ontouchstart).toBeDefined(); + }); + + test('toggles settings mode on button6', async () => { + const dispatchEvent = jest.fn(); + setupUISettings({ dispatchEvent, DOM: document }); + await document.getElementById('button6').dispatchEvent(new Event('touchstart')); + expect(settings.isSettingsMode).toBe(true); + expect(dispatchEvent).toHaveBeenCalledWith('updateUI', expect.any(Object)); + }); +}); + + +// File: web/test/video-capture.test.js +describe('processFrame', () => { + test('handles non-finite dimensions', async () => { + jest.spyOn(global.console, 'error').mockImplementation(() => {}); + jest.spyOn(structuredLog, 'call').mockImplementation(() => {}); + const DOM = { frameCanvas: { getContext: () => null }, videoFeed: {} }; + jest.spyOn(getDOM, 'call').mockReturnValue(DOM); + const result = await processFrame(NaN, 240); + expect(result).toEqual({ notes: [], newFrameData: null, avgIntensity: 0 }); + expect(structuredLog).toHaveBeenCalledWith('DEBUG', 'processFrame dimensions', { rawWidth: NaN, rawHeight: 240 }); + expect(console.error).toHaveBeenCalledWith("Canvas context not found"); + }); +}); + + diff --git a/future/scripts/file-indexer.js b/future/scripts/file-indexer.js new file mode 100644 index 00000000..eebe3896 --- /dev/null +++ b/future/scripts/file-indexer.js @@ -0,0 +1,32 @@ +import fs from 'fs'; +import path from 'path'; + +const targets = [ + { dir: '../web/synthesis-grids', output: '../web/synthesis-grids/available-grids.json', ext: '.js' }, + { dir: '../web/audio/synthesis-engines', output: '../web/audio/synthesis-engines/available-engines.json', ext: '.js' }, + { dir: '../web/languages', output: '../web/languages/availableLanguages.json', ext: '.json' } +]; + +for (const { dir, output, ext } 
of targets) {
+  const absDir = path.resolve(dir);
+  if (!fs.existsSync(absDir)) {
+    console.warn(`DIR NOT FOUND: ${absDir}`);
+    continue;
+  }
+  const files = fs.readdirSync(absDir)
+    .filter(f => f.endsWith(ext) && !f.startsWith('available'));
+
+  const items = files.map(file => ({
+    id: path.basename(file, ext),
+    createdAt: fs.statSync(path.join(absDir, file)).ctimeMs
+    // More metadata can be added here if desired
+  }));
+  if (items.length === 0) {
+    console.warn(`NO FILES FOUND: ${absDir}`);
+    continue;
+  }
+
+  items.sort((a, b) => b.createdAt - a.createdAt);
+  fs.writeFileSync(path.resolve(output), JSON.stringify(items, null, 2));
+  console.log(`File generated: ${output}`);
+}
\ No newline at end of file
diff --git a/future/test/async.test.js b/future/test/async.test.js
new file mode 100644
index 00000000..85680f08
--- /dev/null
+++ b/future/test/async.test.js
@@ -0,0 +1,18 @@
+// File: web/test/async.test.js
+import { withErrorBoundary } from '../utils/async.js';
+
+describe('async', () => {
+  test('withErrorBoundary handles success', async () => {
+    const mockFn = async () => 'success';
+    const { data, error } = await withErrorBoundary(mockFn);
+    expect(data).toBe('success');
+    expect(error).toBeNull();
+  });
+
+  test('withErrorBoundary handles error', async () => {
+    const mockFn = async () => { throw new Error('test error'); };
+    const { data, error } = await withErrorBoundary(mockFn);
+    expect(data).toBeNull();
+    expect(error.message).toBe('test error');
+  });
+});
\ No newline at end of file
diff --git a/future/test/eslint.config.mjs b/future/test/eslint.config.mjs
new file mode 100644
index 00000000..0c7e8c23
--- /dev/null
+++ b/future/test/eslint.config.mjs
@@ -0,0 +1,16 @@
+import js from "@eslint/js";
+import globals from "globals";
+import { defineConfig } from "eslint/config";
+
+export default defineConfig([
+  {
+    files: ["**/*.{js,mjs,cjs}"],
+    plugins: { js },
+    extends: ["js/recommended"],
+    languageOptions: { globals: globals.browser },
+
rules: { + semi: ["error", "always"], + quotes: ["error", "single"] + } + } +]); \ No newline at end of file diff --git a/future/tools/gpf.sh b/future/tools/gpf.sh new file mode 100755 index 00000000..21a1d08e --- /dev/null +++ b/future/tools/gpf.sh @@ -0,0 +1,16 @@ +#!/bin/bash +# filepath: /workspaces/acoustsee/future/tools/gpf.sh + +OUTPUT_FILE="../project-files.txt" +TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S %z') + +echo "// Generated on: $TIMESTAMP" > "$OUTPUT_FILE" +echo "" >> "$OUTPUT_FILE" + +find ../web -type f ! -path "../web/dist/*" | while read -r file; do + echo "// File: ${file#../}" >> "$OUTPUT_FILE" + cat "$file" >> "$OUTPUT_FILE" + echo -e "\n" >> "$OUTPUT_FILE" +done + +echo "Generated $OUTPUT_FILE" \ No newline at end of file diff --git a/future/trashcan/generate-project-files.sh b/future/trashcan/generate-project-files.sh new file mode 100755 index 00000000..c358203e --- /dev/null +++ b/future/trashcan/generate-project-files.sh @@ -0,0 +1,131 @@ +#!/bin/bash + +# Script to generate project-outline.md and project-files.txt for Grok Workspaces +# Run from future/scripts/ folder, outputs files to future/ (development root) +# Usage: ./scripts/generate-project-files.sh [project_name] [extensions] [exclude_dirs] + +# Configuration with defaults +SCRIPT_DIR="$(realpath "$(dirname "$0")")" # Absolute path to future/scripts/ +PROJECT_ROOT="$(realpath "$SCRIPT_DIR/..")" # Navigate to future/ +PROJECT_NAME="${1:-$(basename "$(dirname "$PROJECT_ROOT")")}" # Use arg or parent directory (acoustsee) +INCLUDE_EXTENSIONS="${2:-*.js|*.md|*.json|*.html|*.css}" # Default extensions +EXCLUDE_DIRS="${3:-node_modules|dist|scripts|past|present}" # Exclude scripts, past, present +OUTPUT_DIR="$PROJECT_ROOT" +PROJECT_OUTLINE="$OUTPUT_DIR/project-outline.md" +PROJECT_FILES="$OUTPUT_DIR/project-files.txt" +TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S %z') # e.g., 2025-07-11 13:00:00 -0300 + +# Debug: Print configuration +echo "Debug: SCRIPT_DIR=$SCRIPT_DIR" +echo "Debug: 
PROJECT_ROOT=$PROJECT_ROOT" +echo "Debug: PROJECT_NAME=$PROJECT_NAME" +echo "Debug: INCLUDE_EXTENSIONS=$INCLUDE_EXTENSIONS" +echo "Debug: EXCLUDE_DIRS=$EXCLUDE_DIRS" +echo "Debug: OUTPUT_DIR=$OUTPUT_DIR" +echo "Debug: TIMESTAMP=$TIMESTAMP" + +# Verify PROJECT_ROOT exists +if [ ! -d "$PROJECT_ROOT" ]; then + echo "Error: PROJECT_ROOT ($PROJECT_ROOT) is not a valid directory." + exit 1 +fi + +# Check if tree and jq are installed +command -v tree >/dev/null 2>&1 || { echo "Error: tree is not installed. Run 'sudo apt-get install tree'"; exit 1; } +command -v jq >/dev/null 2>&1 || { echo "Warning: jq is not installed. Using directory name for PROJECT_NAME"; } + +# Try to get project name from package.json in PROJECT_ROOT or parent +if [ -f "$PROJECT_ROOT/package.json" ] && command -v jq >/dev/null 2>&1; then + PROJECT_NAME=$(jq -r '.name // "'"$PROJECT_NAME"'"' "$PROJECT_ROOT/package.json") +elif [ -f "$PROJECT_ROOT/../package.json" ] && command -v jq >/dev/null 2>&1; then + PROJECT_NAME=$(jq -r '.name // "'"$PROJECT_NAME"'"' "$PROJECT_ROOT/../package.json") +fi +echo "Debug: Final PROJECT_NAME=$PROJECT_NAME" + +# Step 1: Generate project-outline.md with tree structure +cd "$PROJECT_ROOT" || { echo "Error: Cannot change to $PROJECT_ROOT"; exit 1; } +cat << EOF > "$PROJECT_OUTLINE" + +# $PROJECT_NAME Project Structure + +Due to Grok Workspaces' 10-file limit, project files are consolidated in \`$PROJECT_FILES\`. This covers only the \`future/\` directory (development area). Refer to the tree below for the logical organization and use prefixes (e.g., \`web-\`, \`ui-\`) when referencing files. + +## Project Tree +\`\`\` +EOF + +# Debug: Print tree command +echo "Debug: Running tree -f -a -I '$EXCLUDE_DIRS' --noreport" +# Generate tree, excluding unwanted directories +tree -f -a -I "$EXCLUDE_DIRS" --noreport >> "$PROJECT_OUTLINE" 2>/dev/null || echo "Warning: tree command failed, tree may be empty." 
+ +cat << EOF >> "$PROJECT_OUTLINE" +\`\`\` + + +## Reusable Modules +- \`web/utils.js\`: General utilities. +- \`web/processing-utils.js\`: Shared processing logic. +- \`web/frame-processor.js\`: Grid features for computer vision. +- \`web/audio-processor.js\`: Synthesis engine logic for audio processing. +- \`web/context.js\`: Shared context utilities. +- Check \`$PROJECT_FILES\` for these modules to avoid redundancy. + +## File Access +- All files are in \`$PROJECT_FILES\`. Use paths like \`web/frame-processor.js\` when prompting Grok. +- Example: "Extract \`web/frame-processor.js\` from \`$PROJECT_FILES\`." + +## Dynamic Loading +- Add grid types to \`frame-processor.js\` (e.g., \`import('./synthesis-methods/grids/circle-of-fifths.js')\`). +- Add synthesis engines to \`audio-processor.js\` (e.g., \`import('./synthesis-methods/engines/fm-synthesis.js')\`). +- Configuration in \`web/synthesis-methods/grids/availableGrids.json\` and \`web/synthesis-methods/engines/availableEngines.json\`. + +## Contribution Guidelines +See the workspace’s custom instructions for ES modules, top-down flow, avoiding redundancy, and proposing improvements. + +## Notes +- Run \`future/scripts/generate-project-files.sh\` in Codespaces to update this file and \`$PROJECT_FILES\`. +- Only the \`future/\` directory is included, as it’s the development focus. +EOF + +# Step 2: Generate project-files.txt by concatenating source files +> "$PROJECT_FILES" # Clear the output file +echo "// Generated on: $TIMESTAMP" >> "$PROJECT_FILES" +echo "" >> "$PROJECT_FILES" +# Convert INCLUDE_EXTENSIONS to find-compatible syntax +FIND_NAMES="" +IFS='|' read -ra EXT_ARRAY <<< "$INCLUDE_EXTENSIONS" +for ext in "${EXT_ARRAY[@]}"; do + if [ -n "$FIND_NAMES" ]; then + FIND_NAMES="$FIND_NAMES -o -name '$ext'" + else + FIND_NAMES="-name '$ext'" + fi +done + +# Debug: Print find command +echo "Debug: Running find . -type f \( $FIND_NAMES \) -not -path './$EXCLUDE_DIRS/*'" +# Run find command +eval "find . 
-type f \( $FIND_NAMES \) -not -path './$EXCLUDE_DIRS/*'" | while read -r file; do
+  # Get relative path and remove leading ./
+  relative_path="${file#./}"
+  # Add delimiter and file content
+  echo "// File: $relative_path" >> "$PROJECT_FILES"
+  cat "$file" >> "$PROJECT_FILES" 2>/dev/null || echo "Warning: Could not read $file"
+  echo -e "\n" >> "$PROJECT_FILES"
+done
+
+# Step 3: Check if files are empty
+if [ ! -s "$PROJECT_OUTLINE" ]; then
+  echo "Warning: $PROJECT_OUTLINE is empty. Check if tree command found files."
+fi
+if [ ! -s "$PROJECT_FILES" ] || [ "$(wc -l < "$PROJECT_FILES")" -le 2 ]; then
+  echo "Warning: $PROJECT_FILES is empty or contains only timestamp. Check if files match extensions: $INCLUDE_EXTENSIONS"
+  echo "Debug: List files in $PROJECT_ROOT:"
+  ls -R "$PROJECT_ROOT"
+fi
+
+# Step 4: Print completion message
+echo "Generated $PROJECT_OUTLINE and $PROJECT_FILES"
+echo "Please review $PROJECT_OUTLINE and verify the tree and module details."
+echo "Upload both files to Grok Workspaces."
\ No newline at end of file
diff --git a/future/web/.eslintrc.json b/future/web/.eslintrc.json
new file mode 100644
index 00000000..045137ea
--- /dev/null
+++ b/future/web/.eslintrc.json
@@ -0,0 +1,5 @@
+// future/web/.eslintrc.json
+{
+  "env": { "browser": true, "es2020": true },
+  "parserOptions": { "ecmaVersion": 2020, "sourceType": "module" }
+}
diff --git a/future/web/README.md b/future/web/README.md
new file mode 100644
index 00000000..b4d9aea0
--- /dev/null
+++ b/future/web/README.md
@@ -0,0 +1,144 @@
+**a photon to phonon code**
+
+## [Introduction](#introduction)
+
+The content in this repository provides the code for a public-infrastructure web app that aims to transform visual environments into soundscapes, empowering users to experience the visual world through synthetic audio cues in real time.
+
+> **Why?** We believe in enhancing humanity with open-source software in a fast, accessible and impactful way. 
You are invited to join us in improving its mission and making a difference!
+
+### Project Vision
+
+- Synesthetic Translation: Converting visual data into stereo audio cues, mapping colors and motion to distinct sound signatures.
+- Dynamic Soundscapes: Adjusts audio in real time based on object distance and motion, e.g., a swing's sound shifts in volume and complexity as it moves.
+- Location-Aware Audio: Enhances spatial awareness by producing sounds in the corresponding ear, such as a wall on the left sounding in the left ear.
+
+### Tech stack needed
+
+Run the version of your choice in any internet browser from 2020 onward.
+The design is tested on a mobile phone and its front camera.
+Input: Mobile camera for real-time visual data capture.
+Audio Output: Stereo headphones for spatial audio effects.
+
+### Hypothetical Use Case
+
+Launch the app on a mobile device to translate live camera input into a dynamic stereo soundscape. For a visually impaired user in a park, a mobile phone worn as a necklace captures surrounding visuals, such as a swing in motion: as the swing moves away, the app produces a softer, simpler sound; as it approaches, the sound grows louder and more complex. Similarly, a sidewalk might emit a steady, textured tone, a car in the distance a low hum, and a wall to the left a localized sound in the left ear. This enables users to perceive and interact with their surroundings through an innovative auditory interface, fostering greater independence and environmental awareness.
+
+### Development
+
+Entirely coded by xAI Grok 3 up to Milestone 4, as per @MAMware prompts.
+Milestone 5, which is a work in progress, is getting help from OpenAI ChatGPT 4.1, o4-mini, and Anthropic Claude 4 via @github copilot at Codespaces, and also from Grok 4, which is in charge of the restructuring from v0.5.12.
+
+>We welcome contributors!
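As a rough sketch of the location-aware mapping described in the Project Vision, the horizontal position of a detected object can be turned into a stereo pan value and its apparent size into a gain. The helper names below are purely illustrative (they are not part of the codebase); the pan value is in the [-1, 1] range consumed by the Web Audio `StereoPannerNode` instances that `audio-processor.js` creates.

```javascript
// Illustrative only: these helpers are not part of the codebase.
// Map an object's horizontal pixel position to a stereo pan in [-1, 1],
// the range accepted by a Web Audio StereoPannerNode.
function positionToPan(x, frameWidth) {
  return (x / frameWidth) * 2 - 1; // 0 -> -1 (left), frameWidth -> +1 (right)
}

// Map an object's apparent area to a gain in [0, maxGain]: a larger
// area suggests a closer object, which should sound louder.
function sizeToGain(objectArea, frameArea, maxGain = 0.8) {
  return Math.min(maxGain, (objectArea / frameArea) * 10 * maxGain);
}

console.log(positionToPan(0, 640));   // -1: hard left
console.log(positionToPan(320, 640)); // 0: center
console.log(positionToPan(640, 640)); // 1: hard right
```

Values like these would then be fed to the audio graph with smoothing, e.g. via `panner.pan.setTargetAtTime(...)` as the synthesis engines already do.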
+ +## Table of Contents + +- [Introduction](#introduction) +- [Usage](docs/USAGE.md) +- [Status](#status) +- [Project structure](#project_structure) +- [Changelog](docs/CHANGELOG.md) +- [Contributing](docs/CONTRIBUTING.md) +- [To-Do List](docs/TO_DO.md) +- [Diagrams](docs/DIAGRAMS.md) +- [License](docs/LICENSE.md) +- [FAQ](docs/FAQ.md) + +### [Usage](docs/USAGE.md) + +The web app runs in Internet browsers and on mobile hardware from 2021 onward. + +- Current version [RUN](https://mamware.github.io/acoustsee/present/) +- Previous versions [RUN](https://mamware.github.io/acoustsee/past/old_versions/preview) +- Testing developments [RUN](https://mamware.github.io/acoustsee/future/web) + +### Check [Usage](docs/USAGE.md) for further details + +### [Current Status](#status) + +Working on **Milestone 5 (Current)** + +- Haptic feedback via Vibration API **Development in progress, 85%** +- Console log on device screen and mail-to feature for debugging **Development in progress, 85%** +- New language-agnostic architecture ready to provide multilingual support for the speech synthesizer and UI **Development in progress, 95%** +- Mermaid diagrams to reflect the current modular Single Responsibility Principle **To do** + +### [Changelog](docs/CHANGELOG.md) + +- The current "stable" version from "present" is v0.4.7; the link above logs the history and details past milestones achieved.
+- Current "future" version in development starts from v0.5 + +### ["future" Project structure](#project_structure) + +``` + +web/ +├── audio/ # Audio processing and synthesis +│ ├── audio-processor.js # AudioContext, oscillators, mic handling +│ ├── synthesis-engines/ # Synthesis methods (sine-wave.js, fm-synthesis.js) +│ │ ├── sine-wave.js +│ │ ├── fm-synthesis.js +│ │ └── available-engines.json +│ └── audio-controls.js # PowerOn button and AudioContext initialization (moved from ui) +├── core/ # Core application logic and state +│ ├── dispatcher.js # Event dispatching (renamed from event-dispatcher.js) +│ ├── frame-processor.js # Frame-to-notes mapping (moved from ui) +│ ├── state.js # Global settings and config loading +│ └── context.js # Shared DOM and dispatcher context +├── ui/ # Strictly UI-related code (DOM, buttons, rendering) +│ ├── ui-controller.js # UI setup and orchestration +│ ├── ui-settings.js # Button event bindings +│ ├── video-capture.js # Video feed rendering and canvas setup (refocused from processing) +│ └── dom.js # DOM element initialization +├── utils/ # General-purpose utilities +│ ├── logging.js # Structured logging +│ ├── idb-logger.js # IndexedDB logging +│ ├── utils.js # General utilities (tryVibrate, hapticCount, getText, etc.) +│ └── async.js # Async utilities (withErrorBoundary) +├── synthesis-grids/ # Grid-based synthesis methods +│ ├── hex-tonnetz.js +│ ├── circle-of-fifths.js +│ └── available-grids.json +├── languages/ # Language and translation files +│ ├── es-ES.json +│ ├── en-US.json +│ └── available-languages.json +├── styles.css # Global styles +├── index.html # Main HTML +├── main.js # Application entry point +└── test/ # Tests + ├── ui-settings.test.js + └── video-capture.test.js + +``` + +### [Contributing](docs/CONTRIBUTING.md) + +- Please follow the link above for the detailed contributing guidelines, branching strategy and examples. 
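The `synthesis-engines/` directory shown in the tree above is pluggable: `available-engines.json` lists engine ids, and the audio processor resolves each id to the module's exported play function by camel-casing it (e.g., `fm-synthesis` → `playFmSynthesis`, per the DEF-001 fix in `audio-processor.js`). A standalone sketch of that naming rule, with the helper name being ours:

```javascript
// Sketch of the engine-id-to-export naming rule used when loading engines:
// "fm-synthesis" -> "playFmSynthesis". The helper name is illustrative.
function engineExportName(engineId) {
  const camel = engineId
    .split("-")
    .map((word) => word.charAt(0).toUpperCase() + word.slice(1))
    .join("");
  return `play${camel}`;
}

console.log(engineExportName("sine-wave"));    // playSineWave
console.log(engineExportName("fm-synthesis")); // playFmSynthesis
```

A new engine therefore only needs to export a function whose name follows this rule and register its id in `available-engines.json`.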
+ +### [To-Do List](docs/TO_DO.md) + +- The document linked above holds our current to-do list, now at Milestone 5 (v0.5.2). + +### [Code flow diagrams](docs/DIAGRAMS.md) + +Diagrams covering the Trunk-Based Development approach (v0.2). + +Reflecting: + - Process Frame Flow + - Audio Generation Flow + - Motion detection, including oscillator logic. + +### [FAQ](docs/FAQ.md) + +- Follow the link for a list of the Frequently Asked Questions. + +### [License](docs/LICENSE.md) + +- GPL-3.0 license details + +Peace +Love +Union +Respect + + diff --git a/future/web/audio-processor.js b/future/web/audio-processor.js deleted file mode 100644 index 0e5cab62..00000000 --- a/future/web/audio-processor.js +++ /dev/null @@ -1,128 +0,0 @@ -import { settings } from './state.js'; -import { mapFrame } from './grid-dispatcher.js'; -import { playSineWave } from './synthesis-methods/engines/sine-wave.js'; -import { playFMSynthesis } from './synthesis-methods/engines/fm-synthesis.js'; - -export let audioContext = null; -export let isAudioInitialized = false; -export let oscillators = []; - -export function setAudioContext(newContext) { - audioContext = newContext; - isAudioInitialized = false; // Reset state when setting a new context -} - -export async function initializeAudio(context) { - if (isAudioInitialized || !context) { - console.warn('initializeAudio: Already initialized or no context provided'); - return false; - } - try { - audioContext = context; - if (audioContext.state === 'suspended') { - console.log('initializeAudio: Resuming AudioContext'); - await audioContext.resume(); - } - if (audioContext.state !== 'running') { - throw new Error(`AudioContext is not running, state: ${audioContext.state}`); - } - oscillators = Array(24).fill().map(() => { - const osc = audioContext.createOscillator(); - const gain = audioContext.createGain(); - const panner = audioContext.createStereoPanner(); - osc.type = 'sine'; - osc.frequency.setValueAtTime(0,
audioContext.currentTime); - gain.gain.setValueAtTime(0, audioContext.currentTime); - panner.pan.setValueAtTime(0, audioContext.currentTime); - osc.connect(gain).connect(panner).connect(audioContext.destination); - osc.start(); - return { osc, gain, panner, active: false }; - }); - isAudioInitialized = true; - if (window.speechSynthesis) { - const utterance = new SpeechSynthesisUtterance('Audio initialized'); - utterance.lang = settings.language || 'en-US'; - window.speechSynthesis.speak(utterance); - } - console.log('initializeAudio: Audio initialized successfully'); - return true; - } catch (error) { - console.error('Audio Initialization Error:', error.message); - if (window.dispatchEvent) { - window.dispatchEvent('logError', { message: `Audio init error: ${error.message}` }); - } - isAudioInitialized = false; - audioContext = null; - if (window.speechSynthesis) { - const utterance = new SpeechSynthesisUtterance('Failed to initialize audio'); - utterance.lang = settings.language || 'en-US'; - window.speechSynthesis.speak(utterance); - } - return false; - } -} - -export function playAudio(frameData, width, height, prevFrameDataLeft, prevFrameDataRight) { - if (!isAudioInitialized || !audioContext || audioContext.state !== 'running') { - console.warn('playAudio: Audio not initialized or context not running', { - isAudioInitialized, - audioContext: !!audioContext, - state: audioContext?.state, - }); - return { prevFrameDataLeft, prevFrameDataRight }; - } - - const halfWidth = width / 2; - const leftFrame = new Uint8ClampedArray(halfWidth * height); - const rightFrame = new Uint8ClampedArray(halfWidth * height); - for (let y = 0; y < height; y++) { - for (let x = 0; x < halfWidth; x++) { - leftFrame[y * halfWidth + x] = frameData[y * width + x]; - rightFrame[y * halfWidth + x] = frameData[y * width + x + halfWidth]; - } - } - - const leftResult = mapFrame(leftFrame, halfWidth, height, prevFrameDataLeft, -1); - const rightResult = mapFrame(rightFrame, halfWidth, 
height, prevFrameDataRight, 1); - - const allNotes = [...(leftResult.notes || []), ...(rightResult.notes || [])]; - switch (settings.synthesisEngine) { - case 'fm-synthesis': - playFMSynthesis(allNotes); - break; - case 'sine-wave': - default: - playSineWave(allNotes); - break; - } - - return { - prevFrameDataLeft: leftResult.newFrameData, - prevFrameDataRight: rightResult.newFrameData, - }; -} - -export async function cleanupAudio() { - if (!isAudioInitialized || !audioContext) { - console.warn('cleanupAudio: No audio context to clean up'); - return; - } - try { - oscillators.forEach(({ osc, gain, panner }) => { - gain.gain.setValueAtTime(0, audioContext.currentTime); - osc.stop(audioContext.currentTime + 0.1); - osc.disconnect(); - gain.disconnect(); - panner.disconnect(); - }); - oscillators = []; - isAudioInitialized = false; - audioContext = null; - console.log('cleanupAudio: Audio resources cleaned up successfully'); - } catch (error) { - console.error('Audio Cleanup Error:', error.message); - if (window.dispatchEvent) { - window.dispatchEvent('logError', { message: `Audio cleanup error: ${error.message}` }); - } - } -} diff --git a/future/web/audio/audio-controls.js b/future/web/audio/audio-controls.js new file mode 100644 index 00000000..1dbe3b64 --- /dev/null +++ b/future/web/audio/audio-controls.js @@ -0,0 +1,63 @@ +// Update web/ui/audio-controls.js: Remove { passive: true } from touchstart listener to ensure it counts as a user gesture for AudioContext + +import { getText } from "../utils/utils.js"; +import { initializeAudio } from "./audio-processor.js"; +import { structuredLog } from "../utils/logging.js"; +import { AudioManager } from "./audio-manager.js"; + +const audioManager = new AudioManager(); +let isAudioContextInitialized = false; + +export function setupAudioControls({ dispatchEvent: dispatch, DOM }) { + if (!DOM || !DOM.powerOn) { + console.error("setupAudioControls: Missing DOM elements"); + dispatch("logError", { message: "Missing DOM 
elements in audio-controls" }); + return; + } + + const initializeAudioContext = async (event) => { + console.log(`powerOn: ${event.type} event`); + try { + const success = await audioManager.initialize(); + if (success) { + await initializeAudio(audioManager.context); + isAudioContextInitialized = true; + DOM.splashScreen.style.display = "none"; + DOM.mainContainer.style.display = "grid"; + await getText("audioOn"); + dispatch("updateUI", { settingsMode: false, streamActive: false, micActive: false }); + console.log("powerOn: AudioContext initialized, UI updated"); + return; + } + } catch (err) { + if (err.message.includes("Permission denied")) { + structuredLog('ERROR', 'Audio init permission denied', { message: err.message }); + await getText('button2.tts.micError'); + } + console.error(`Audio init failed: ${err.message}`); + dispatch("logError", { message: `Audio init failed: ${err.message}` }); + } + await getText("audioError"); + DOM.powerOn.textContent = await getText("powerOn.failed.text", {}, 'text'); + DOM.powerOn.setAttribute("aria-label", await getText("powerOn.failed.aria", {}, 'aria')); + }; + + const handlePowerOn = async (event) => { + // Prevent the synthetic click that follows touchstart from toggling audio twice + if (event.type === "touchstart") event.preventDefault(); + if (!isAudioContextInitialized) { + await initializeAudioContext(event); + } else { + console.log("powerOn: Audio already initialized, cleaning up"); + await audioManager.cleanup(); + isAudioContextInitialized = false; + DOM.splashScreen.style.display = "flex"; + DOM.mainContainer.style.display = "none"; + await getText("audioOff"); + dispatch("updateUI", { settingsMode: false, streamActive: false, micActive: false }); + } + }; + + DOM.powerOn.addEventListener("click", handlePowerOn); + DOM.powerOn.addEventListener("touchstart", handlePowerOn); // Non-passive so it counts as a user gesture and preventDefault works + + console.log("setupAudioControls: Audio controls initialized"); +} diff --git a/future/web/audio/audio-manager.js b/future/web/audio/audio-manager.js new file mode 100644 index 00000000..c46dd7b2 --- 
b/future/web/audio/audio-manager.js @@ -0,0 +1,52 @@ +import { structuredLog } from '../utils/logging.js'; +import { dispatchEvent } from '../core/dispatcher.js'; + +export class AudioManager { + constructor() { + this.context = null; + this.state = 'uninitialized'; + } + + async initialize() { + if (this.state !== 'uninitialized') { + structuredLog('WARN', 'AudioManager: Already initialized', { currentState: this.state }); + return this.context?.state === 'running'; + } + + try { + this.state = 'initializing'; + this.context = new (window.AudioContext || window.webkitAudioContext)(); + + if (this.context.state === 'suspended') { + structuredLog('INFO', 'AudioManager: Resuming suspended context'); + await this.context.resume(); + } + + if (this.context.state !== 'running') { + throw new Error(`AudioContext failed to reach running state: ${this.context.state}`); + } + + this.state = 'ready'; + structuredLog('INFO', 'AudioManager: Initialized', { sampleRate: this.context.sampleRate, state: this.state }); + return true; + } catch (error) { + this.state = 'error'; + structuredLog('ERROR', 'AudioManager init error', { message: error.message }); + dispatchEvent('logError', { message: `Audio init error: ${error.message}` }); + throw error; + } + } + + async cleanup() { + if (this.context) { + await this.context.close(); + this.context = null; + this.state = 'uninitialized'; + structuredLog('INFO', 'AudioManager: Cleaned up'); + } + } + + getState() { + return { state: this.state, contextState: this.context?.state }; + } +} \ No newline at end of file diff --git a/future/web/audio/audio-processor.js b/future/web/audio/audio-processor.js new file mode 100644 index 00000000..571fab60 --- /dev/null +++ b/future/web/audio/audio-processor.js @@ -0,0 +1,180 @@ +import { settings } from "../core/state.js"; +import { dispatchEvent } from "../core/dispatcher.js"; +import { structuredLog } from "../utils/logging.js"; // Add for detailed logging. 
+ +let audioContext = null; +let isAudioInitialized = false; +let oscillators = []; +let modulators = []; +let micSource = null; +let micGainNode = null; + +export function setAudioContext(newContext) { + audioContext = newContext; + isAudioInitialized = false; +} + +export async function initializeAudio(context) { + if (isAudioInitialized || !context) { + structuredLog('WARN', 'initializeAudio: Already initialized or no context'); + return false; + } + try { + audioContext = context; + if (audioContext.state === "suspended") { + structuredLog('INFO', 'initializeAudio: Resuming AudioContext'); + await audioContext.resume(); + } + if (audioContext.state !== "running") { + throw new Error(`AudioContext not running, state: ${audioContext.state}`); + } + oscillators = Array(24) + .fill() + .map(() => { + const osc = audioContext.createOscillator(); + const gain = audioContext.createGain(); + const panner = audioContext.createStereoPanner(); + osc.type = "sine"; + osc.frequency.setValueAtTime(0, audioContext.currentTime); + gain.gain.setValueAtTime(0, audioContext.currentTime); + panner.pan.setValueAtTime(0, audioContext.currentTime); + osc.connect(gain).connect(panner).connect(audioContext.destination); + osc.start(); + return { osc, gain, panner, active: false }; + }); + isAudioInitialized = true; + structuredLog('INFO', 'initializeAudio: Audio initialized with 24 oscillators'); + return true; + } catch (error) { + structuredLog('ERROR', 'initializeAudio error', { message: error.message }); + dispatchEvent('logError', { message: `Audio init error: ${error.message}` }); + isAudioInitialized = false; + audioContext = null; + return false; + } +} + +export async function playAudio(notes) { + if (!isAudioInitialized || !audioContext || audioContext.state !== "running") { + structuredLog('WARN', 'playAudio: Audio not initialized or context not running', { + isAudioInitialized, + audioContext: !!audioContext, + state: audioContext?.state, + }); + // Attempt to resume 
AudioContext on mobile (requires user gesture). + if (audioContext && audioContext.state === "suspended") { + try { + await audioContext.resume(); + structuredLog('INFO', 'playAudio: Resumed AudioContext'); + } catch (err) { + structuredLog('ERROR', 'playAudio: Failed to resume AudioContext', { message: err.message }); + } + } + return; + } + try { + // Use cached engines loaded at startup + const availableEngines = settings.availableEngines; + const engine = availableEngines.find((e) => e.id === settings.synthesisEngine); + if (!engine) { + structuredLog('ERROR', `playAudio: Engine not found`, { synthesisEngine: settings.synthesisEngine }); + dispatchEvent('logError', { message: `Engine not found: ${settings.synthesisEngine}` }); + return; + } + const engineModule = await import(`./synthesis-engines/${engine.id}.js`); + // Fix DEF-001: Normalize to camelCase (e.g., fm-synthesis -> playFmSynthesis). + const engineName = engine.id.split('-').map(word => word.charAt(0).toUpperCase() + word.slice(1)).join(''); + const playFunction = engineModule[`play${engineName}`]; + if (playFunction) { + playFunction(notes); + structuredLog('INFO', 'playAudio: Played notes', { engine: engine.id, noteCount: notes.length }); + } else { + structuredLog('ERROR', `playAudio: Play function not found`, { engine: engine.id }); + dispatchEvent('logError', { message: `Play function for ${engine.id} not found` }); + } + } catch (err) { + structuredLog('ERROR', 'playAudio error', { message: err.message }); + dispatchEvent('logError', { message: `Play audio error: ${err.message}` }); + } +} + +export async function cleanupAudio() { + if (isAudioInitialized && audioContext) { + try { + oscillators.forEach(({ osc, gain, panner, modulator, modGain }) => { + // Stop and disconnect carrier + osc.stop(); + osc.disconnect(); + gain.disconnect(); + panner.disconnect(); + // Stop and disconnect FM modulator if present + if (modulator) { + modulator.stop(); + modulator.disconnect(); + } + // Disconnect 
modGain if present + if (modGain) { + modGain.disconnect(); + } + }); + if (micSource && micGainNode) { + micSource.disconnect(); + micGainNode.disconnect(); + micSource = null; + micGainNode = null; + } + oscillators = []; + // cleanup modulators + modulators.forEach(({ osc, gain }) => { + osc.stop(); + osc.disconnect(); + gain.disconnect(); + }); + modulators = []; + // Fully close AudioContext to release system resources + await audioContext.close(); + audioContext = null; + isAudioInitialized = false; + structuredLog('INFO', 'cleanupAudio: Audio resources cleaned up and context closed'); + } catch (err) { + structuredLog('ERROR', 'cleanupAudio error', { message: err.message }); + dispatchEvent('logError', { message: `Cleanup audio error: ${err.message}` }); + } + } +} + +export async function stopAudio() { + await cleanupAudio(); +} + +export function initializeMicAudio(micStream) { + if (!audioContext || !isAudioInitialized) { + structuredLog('WARN', 'initializeMicAudio: Audio context not initialized'); + dispatchEvent('logError', { message: 'Audio context not initialized for microphone' }); + return null; + } + try { + if (micSource && micGainNode) { + micSource.disconnect(); + micGainNode.disconnect(); + micSource = null; + micGainNode = null; + } + if (micStream) { + micSource = audioContext.createMediaStreamSource(micStream); + micGainNode = audioContext.createGain(); + micGainNode.gain.setValueAtTime(0.7, audioContext.currentTime); + micSource.connect(micGainNode).connect(audioContext.destination); + structuredLog('INFO', 'initializeMicAudio: Microphone stream connected', { gain: 0.7 }); + return micSource; + } + structuredLog('INFO', 'initializeMicAudio: Microphone stream disconnected'); + return null; + } catch (error) { + structuredLog('ERROR', 'initializeMicAudio error', { message: error.message }); + dispatchEvent('logError', { message: `Microphone init error: ${error.message}` }); + return null; + } +} + +export { audioContext, isAudioInitialized, 
oscillators, modulators }; \ No newline at end of file diff --git a/future/web/audio/synthesis-engines/available-engines.json b/future/web/audio/synthesis-engines/available-engines.json new file mode 100644 index 00000000..00521b48 --- /dev/null +++ b/future/web/audio/synthesis-engines/available-engines.json @@ -0,0 +1,10 @@ +[ + { + "id": "sine-wave", + "createdAt": 1750899236911.1191 + }, + { + "id": "fm-synthesis", + "createdAt": 1750899236897.1191 + } +] diff --git a/future/web/audio/synthesis-engines/fm-synthesis.js b/future/web/audio/synthesis-engines/fm-synthesis.js new file mode 100644 index 00000000..5c69f460 --- /dev/null +++ b/future/web/audio/synthesis-engines/fm-synthesis.js @@ -0,0 +1,96 @@ +import { audioContext, oscillators, modulators } from "../audio-processor.js"; + +export function playFmSynthesis(notes) { + let oscIndex = 0; + let modIndex = 0; + // Copy before sorting to avoid mutating the caller's notes array + const allNotes = [...notes].sort((a, b) => b.intensity - a.intensity); + // Clean up FM modulator routing from previous frames; modulators are pooled for reuse, so disconnect without stopping them + oscillators.forEach(oscData => { + if (oscData.modulator) { + oscData.modulator.disconnect(); + oscData.modulator = null; + } + if (oscData.modGain) { + oscData.modGain.disconnect(); + oscData.modGain = null; + } + }); + for (let i = 0; i < oscillators.length; i++) { + const oscData = oscillators[i]; + if (oscIndex < allNotes.length) { + const { pitch, intensity, harmonics, pan } = allNotes[oscIndex]; + oscData.osc.type = "sine"; + oscData.osc.frequency.setTargetAtTime( + pitch, + audioContext.currentTime, + 0.015, + ); + oscData.gain.gain.setTargetAtTime( + intensity, + audioContext.currentTime, + 0.015, + ); + oscData.panner.pan.setTargetAtTime(pan, audioContext.currentTime, 0.015); + oscData.active = true; + if (harmonics.length) { + // handle one modulator per note, reuse or create + let modData; + if (modIndex < modulators.length) { + modData = modulators[modIndex]; + } else { + const mOsc = audioContext.createOscillator(); + const mGain = 
audioContext.createGain(); + modulators.push({ osc: mOsc, gain: mGain, started: false }); + modData = modulators[modulators.length - 1]; + } + // configure modulator + modData.osc.type = "sine"; + modData.osc.frequency.setTargetAtTime( + pitch * 2, + audioContext.currentTime, + 0.015, + ); + modData.gain.gain.setTargetAtTime( + intensity * 100, + audioContext.currentTime, + 0.015, + ); + modData.osc.connect(modData.gain).connect(oscData.osc.frequency); + if (!modData.started) { + modData.osc.start(); + modData.started = true; + } + // Store references for cleanup + oscData.modulator = modData.osc; + oscData.modGain = modData.gain; + modIndex++; + // Use the next oscillator for the primary harmonic + if (oscIndex < oscillators.length) { + const harmonicOsc = oscillators[oscIndex]; + harmonicOsc.osc.type = "sine"; + harmonicOsc.osc.frequency.setTargetAtTime( + harmonics[0], + audioContext.currentTime, + 0.015, + ); + harmonicOsc.gain.gain.setTargetAtTime( + intensity * 0.5, + audioContext.currentTime, + 0.015, + ); + harmonicOsc.panner.pan.setTargetAtTime( + pan, + audioContext.currentTime, + 0.015, + ); + harmonicOsc.active = true; + } + } + oscIndex++; + } else { + oscData.gain.gain.setTargetAtTime(0, audioContext.currentTime, 0.015); + oscData.active = false; + } + } + // silence any unused modulators + for (let i = modIndex; i < modulators.length; i++) { + modulators[i].gain.gain.setTargetAtTime(0, audioContext.currentTime, 0.015); + } +} diff --git a/future/web/audio/synthesis-engines/sine-wave.js b/future/web/audio/synthesis-engines/sine-wave.js new file mode 100644 index 00000000..94de7eec --- /dev/null +++ b/future/web/audio/synthesis-engines/sine-wave.js @@ -0,0 +1,59 @@ +import { audioContext, oscillators } from "../audio-processor.js"; + +export function playSineWave(notes) { + let oscIndex = 0; + // Copy before sorting to avoid mutating the caller's notes array + const allNotes = [...notes].sort((a, b) => b.intensity - a.intensity); + for (let i = 0; i < oscillators.length; i++) { + const oscData = oscillators[i]; + if (oscIndex < allNotes.length && i < oscillators.length) { + const { pitch, intensity,
harmonics, pan } = allNotes[oscIndex]; + oscData.osc.type = "sine"; + oscData.osc.frequency.setTargetAtTime( + pitch, + audioContext.currentTime, + 0.015, + ); + oscData.gain.gain.setTargetAtTime( + intensity, + audioContext.currentTime, + 0.015, + ); + oscData.panner.pan.setTargetAtTime(pan, audioContext.currentTime, 0.015); + oscData.active = true; + if ( + harmonics.length && + oscIndex + harmonics.length < oscillators.length + ) { + for ( + let h = 0; + h < harmonics.length && oscIndex + h < oscillators.length; + h++ + ) { + oscIndex++; + const harmonicOsc = oscillators[oscIndex]; + harmonicOsc.osc.type = "sine"; + harmonicOsc.osc.frequency.setTargetAtTime( + harmonics[h], + audioContext.currentTime, + 0.015, + ); + harmonicOsc.gain.gain.setTargetAtTime( + intensity * 0.5, + audioContext.currentTime, + 0.015, + ); + harmonicOsc.panner.pan.setTargetAtTime( + pan, + audioContext.currentTime, + 0.015, + ); + harmonicOsc.active = true; + } + } + oscIndex++; + } else { + oscData.gain.gain.setTargetAtTime(0, audioContext.currentTime, 0.015); + oscData.active = false; + } + } +} diff --git a/future/web/context.js b/future/web/core/context.js similarity index 53% rename from future/web/context.js rename to future/web/core/context.js index 636ce1eb..f455451b 100644 --- a/future/web/context.js +++ b/future/web/core/context.js @@ -1,4 +1,3 @@ -// context.js let DOM = null; let dispatchEvent = null; @@ -7,7 +6,10 @@ export function setDOM(dom) { } export function getDOM() { - if (!DOM) console.error('DOM not initialized'); + if (!DOM) { + console.error("DOM not initialized"); + throw new Error("DOM not initialized"); + } return DOM; } @@ -16,6 +18,9 @@ export function setDispatchEvent(dispatcher) { } export function getDispatchEvent() { - if (!dispatchEvent) console.error('dispatchEvent not initialized'); + if (!dispatchEvent) { + console.error("dispatchEvent not initialized"); + throw new Error("dispatchEvent not initialized"); + } return dispatchEvent; -} +} \ No newline 
at end of file diff --git a/future/web/core/dispatcher.js b/future/web/core/dispatcher.js new file mode 100644 index 00000000..f6db1821 --- /dev/null +++ b/future/web/core/dispatcher.js @@ -0,0 +1,514 @@ +// File: web/core/dispatcher.js +/* @ts-nocheck */ +import { settings, setAudioInterval, setStream, setMicStream, getLogs } from './state.js'; +import { getText, parseBrowserVersion, setTextAndAriaLabel } from '../utils/utils.js'; +import { withErrorBoundary } from '../utils/async.js'; +import { initializeMicAudio } from '../audio/audio-processor.js'; +import { processFrameWithState, cleanupFrameProcessor } from './frame-processor.js'; +import { structuredLog } from '../utils/logging.js'; + +let _dispatcherFn = null; + +export function setDispatcher(fn) { + _dispatcherFn = fn; +} + +export function dispatchEvent(eventName, payload) { + if (_dispatcherFn) { + structuredLog('DEBUG', `dispatchEvent: ${eventName}`, { payload }); + return _dispatcherFn(eventName, payload); + } else { + structuredLog('ERROR', 'dispatchEvent called before initialization', { eventName, payload }); + } +} + +let lastTTSTime = 0; +const ttsCooldown = 3000; +let fpsSamplerInterval = null; +let frameCount = 0; + +export async function createEventDispatcher(DOM) { + structuredLog('INFO', 'createEventDispatcher: Initializing event dispatcher', { domExists: !!DOM }); + if (!DOM) { + structuredLog('ERROR', 'DOM is undefined in createEventDispatcher'); + return { dispatchEvent: () => structuredLog('ERROR', 'dispatchEvent not initialized due to undefined DOM') }; + } + + structuredLog('DEBUG', 'DOM elements received', { + hasButton1: !!DOM.button1, + hasButton2: !!DOM.button2, + hasButton3: !!DOM.button3, + hasButton4: !!DOM.button4, + hasButton5: !!DOM.button5, + hasButton6: !!DOM.button6, + hasVideoFeed: !!DOM.videoFeed, + }); + + // Use the centrally loaded configurations from the settings object. 
+ const { availableGrids, availableEngines, availableLanguages } = settings; + + const browserInfo = { + userAgent: navigator.userAgent, + platform: navigator.platform, + parsedBrowserVersion: parseBrowserVersion(navigator.userAgent), + hardwareConcurrency: navigator.hardwareConcurrency || 'N/A', + deviceMemory: navigator.deviceMemory ? `${navigator.deviceMemory} GB` : 'N/A', + screen: `${screen.width}x${screen.height}`, + audioContextState: typeof audioContext !== 'undefined' ? audioContext.state : 'Not initialized', + streamActive: !!settings.stream, + micActive: !!settings.micStream, + currentFPSInterval: settings.updateInterval + }; + structuredLog('INFO', 'Enhanced browser and app debug info', browserInfo); + + if (settings.debugLogging) { + fpsSamplerInterval = setInterval(() => { + if (settings.stream) { + const avgFPS = frameCount / 10; + structuredLog('DEBUG', 'Average FPS sample', { avgFPS, overSeconds: 10 }); + frameCount = 0; + } + }, 10000); + } + + const handlers = { + updateUI: async ({ settingsMode, streamActive, micActive }) => { + try { + if (!DOM.button1 || !DOM.button2 || !DOM.button3 || !DOM.button4 || !DOM.button5 || !DOM.button6) { + const missing = [ + !DOM.button1 && 'button1', + !DOM.button2 && 'button2', + !DOM.button3 && 'button3', + !DOM.button4 && 'button4', + !DOM.button5 && 'button5', + !DOM.button6 && 'button6' + ].filter(Boolean); + structuredLog('ERROR', 'Missing critical DOM elements for UI update', { missing }); + dispatchEvent('logError', { message: 'Missing critical DOM elements for UI update' }); + return; + } + + const currentTime = performance.now(); + const grid = availableGrids.find(g => g.id === settings.gridType); + const engine = availableEngines.find(e => e.id === settings.synthesisEngine); + const language = availableLanguages.find(l => l.id === settings.language); + + const button1Text = settingsMode + ? 
await getText('button1.settings.text', { gridName: grid?.id || 'Grid' }, 'text') + : await getText(`button1.normal.${streamActive ? 'stop' : 'start'}.text`, {}, 'text'); + const button1Aria = settingsMode + ? await getText('button1.settings.aria', { gridType: settings.gridType }, 'aria') + : await getText(`button1.normal.${streamActive ? 'stop' : 'start'}.aria`, {}, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText(`button1.tts.${settingsMode ? 'gridSelect' : 'startStop'}`, { + state: settingsMode ? settings.gridType : (streamActive ? 'stopping' : 'starting') + }); + } + setTextAndAriaLabel(DOM.button1, button1Text, button1Aria); + + const button2Text = settingsMode + ? await getText('button2.settings.text', { engineName: engine?.id || 'Engine' }, 'text') + : await getText(`button2.normal.${micActive ? 'off' : 'on'}.text`, {}, 'text'); + const button2Aria = settingsMode + ? await getText('button2.settings.aria', { synthesisEngine: settings.synthesisEngine }, 'aria') + : await getText(`button2.normal.${micActive ? 'off' : 'on'}.aria`, {}, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText(`button2.tts.${settingsMode ? 'synthesisSelect' : 'micToggle'}`, { + state: settingsMode ? settings.synthesisEngine : (micActive ? 'turningOff' : 'turningOn') + }); + } + setTextAndAriaLabel(DOM.button2, button2Text, button2Aria); + + const button3Text = settingsMode + ? await getText('button3.settings.text', { languageName: language?.id || 'Language' }, 'text') + : await getText('button3.normal.text', { languageName: language?.id || 'Language' }, 'text'); + const button3Aria = settingsMode + ? await getText('button3.settings.aria', { language: settings.language }, 'aria') + : await getText('button3.normal.aria', { language: settings.language }, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText(`button3.tts.${settingsMode ? 'videoSourceSelect' : 'languageSelect'}`, { + state: settingsMode ? 
(DOM.videoFeed?.srcObject?.getVideoTracks()[0]?.getSettings().facingMode || 'unknown') : settings.language + }); + } + setTextAndAriaLabel(DOM.button3, button3Text, button3Aria); + + const button4Text = settingsMode + ? await getText('button4.settings.text', {}, 'text') + : await getText(`button4.normal.${settings.autoFPS ? 'auto' : 'manual'}.text`, { fps: Math.round(1000 / settings.updateInterval) }, 'text'); + const button4Aria = settingsMode + ? await getText('button4.settings.aria', {}, 'aria') + : await getText('button4.normal.aria', {}, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText(`button4.tts.${settingsMode ? 'saveSettings' : 'fpsBtn'}`, { + state: settingsMode ? 'save' : (settings.autoFPS ? 'auto' : Math.round(1000 / settings.updateInterval)) + }); + } + setTextAndAriaLabel(DOM.button4, button4Text, button4Aria); + + const button5Text = settingsMode + ? await getText('button5.settings.text', {}, 'text') + : await getText('button5.normal.text', {}, 'text'); + const button5Aria = settingsMode + ? await getText('button5.settings.aria', {}, 'aria') + : await getText('button5.normal.aria', {}, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText(`button5.tts.${settingsMode ? 'loadSettings' : 'emailDebug'}`, { + state: settingsMode ? 'load' : 'email' + }); + } + setTextAndAriaLabel(DOM.button5, button5Text, button5Aria); + + const button6Text = await getText(`button6.${settingsMode ? 'settings' : 'normal'}.text`, {}, 'text'); + const button6Aria = await getText(`button6.${settingsMode ? 'settings' : 'normal'}.aria`, {}, 'aria'); + if (currentTime - lastTTSTime >= ttsCooldown) { + await getText('button6.tts.settingsToggle', { state: settingsMode ? 
'off' : 'on' }); + } + setTextAndAriaLabel(DOM.button6, button6Text, button6Aria); + + lastTTSTime = currentTime; + structuredLog('DEBUG', 'updateUI: UI updated', { settingsMode, streamActive, micActive }); + } catch (err) { + structuredLog('ERROR', 'updateUI error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `UI update error: ${err.message}` }); + } + }, + + processFrame: async () => { + try { + const canvas = document.createElement('canvas'); + const ctx = canvas.getContext('2d'); + canvas.width = DOM.videoFeed.videoWidth; + canvas.height = DOM.videoFeed.videoHeight; + ctx.drawImage(DOM.videoFeed, 0, 0, canvas.width, canvas.height); + const frameData = ctx.getImageData(0, 0, canvas.width, canvas.height).data; + const { data: result, error } = await withErrorBoundary(processFrameWithState, frameData, DOM.videoFeed.videoWidth, DOM.videoFeed.videoHeight); + if (error) { + structuredLog('ERROR', 'processFrame handler error', { message: error.message, stack: error.stack }); + handlers.logError({ message: `Frame processing handler error: ${error.message}` }); + return; + } + if (!result) { + structuredLog('WARN', 'processFrame: No result returned', { width: DOM.videoFeed?.videoWidth, height: DOM.videoFeed?.videoHeight }); + return; + } + structuredLog('DEBUG', 'processFrame result', { notesCount: result.notes?.length || 0, avgIntensity: result.avgIntensity }); + frameCount++; + } catch (err) { + structuredLog('ERROR', 'processFrame error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Frame processing error: ${err.message}` }); + } + }, + + startStop: async ({ settingsMode }) => { + try { + if (settingsMode) { + const currentIndex = availableGrids.findIndex(g => g.id === settings.gridType); + const nextIndex = (currentIndex + 1) % availableGrids.length; + settings.gridType = availableGrids[nextIndex].id; + await getText('button1.tts.gridSelect', { state: settings.gridType }); + } else { + if 
(!settings.stream) { + // first try user-facing video + no audio (audio toggled separately) + let constraints = { video: { facingMode: 'user' }, audio: false }; + let stream; + try { + stream = await navigator.mediaDevices.getUserMedia(constraints); + } catch (err) { + structuredLog('WARN', 'getUserMedia(user) failed, retrying default video', { message: err.message }); + stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false }); + } + DOM.videoFeed.srcObject = stream; + await new Promise((resolve, reject) => { + DOM.videoFeed.addEventListener('loadedmetadata', () => { + if (DOM.videoFeed.videoWidth <= 0 || DOM.videoFeed.videoHeight <= 0) { + return reject(new Error('Invalid video dimensions after metadata')); + } + structuredLog('INFO', 'Video metadata loaded', { width: DOM.videoFeed.videoWidth, height: DOM.videoFeed.videoHeight }); + resolve(); + }, { once: true }); + DOM.videoFeed.addEventListener('error', reject, { once: true }); + }); + setStream(stream); + // schedule frame processing + const timerId = setInterval(() => dispatchEvent('processFrame'), settings.updateInterval); + setAudioInterval(timerId); + await getText('button1.tts.startStop', { state: 'starting' }); + } else { + settings.stream.getVideoTracks().forEach(track => track.stop()); + setStream(null); + await cleanupFrameProcessor(); + if (settings.micStream) { + settings.micStream.getTracks().forEach(track => track.stop()); + setMicStream(null); + initializeMicAudio(null); + } + clearInterval(settings.audioTimerId); + setAudioInterval(null); + if (fpsSamplerInterval) { + clearInterval(fpsSamplerInterval); + fpsSamplerInterval = null; + structuredLog('INFO', 'FPS sampler cleared on stream stop'); + } + await getText('button1.tts.startStop', { state: 'stopping' }); + } + dispatchEvent('updateUI', { settingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + } + } catch (err) { + structuredLog('ERROR', 'startStop error', { message: err.message, stack: 
err.stack }); + handlers.logError({ message: `Stream toggle error: ${err.message}` }); + await getText('button1.tts.cameraError'); + } + }, + + toggleAudio: async ({ settingsMode }) => { + try { + structuredLog('INFO', 'toggleAudio: Current mic state', { micActive: !!settings.micStream }); + if (settingsMode) { + const currentIndex = availableEngines.findIndex(e => e.id === settings.synthesisEngine); + const nextIndex = (currentIndex + 1) % availableEngines.length; + settings.synthesisEngine = availableEngines[nextIndex].id; + await getText('button2.tts.synthesisSelect', { state: settings.synthesisEngine }); + } else { + if (!settings.micStream) { + const micStream = await navigator.mediaDevices.getUserMedia({ audio: true }); + setMicStream(micStream); + initializeMicAudio(micStream); + await getText('button2.tts.micToggle', { state: 'turningOn' }); + } else { + settings.micStream.getTracks().forEach(track => track.stop()); + setMicStream(null); + initializeMicAudio(null); + await getText('button2.tts.micToggle', { state: 'turningOff' }); + } + dispatchEvent('updateUI', { settingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + } + } catch (err) { + structuredLog('ERROR', 'toggleAudio error', { message: err.message }); + handlers.logError({ message: `Mic toggle error: ${err.message}` }); + await getText('button2.tts.micError'); + } + }, + + toggleLanguage: async () => { + try { + const currentIndex = availableLanguages.findIndex(l => l.id === settings.language); + const nextIndex = (currentIndex + 1) % availableLanguages.length; + settings.language = availableLanguages[nextIndex].id; + await getText('button3.tts.languageSelect', { state: settings.language }); + dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + } catch (err) { + structuredLog('ERROR', 'toggleLanguage error', { message: err.message, stack: err.stack }); + handlers.logError({ message: 
`Language toggle error: ${err.message}` }); + await getText('button3.tts.languageError'); + } + }, + + toggleVideoSource: async () => { + try { + const oldStream = DOM.videoFeed?.srcObject; + if (oldStream) { + const currentVideoTrack = oldStream.getVideoTracks()[0]; + const currentFacingMode = currentVideoTrack.getSettings().facingMode || 'user'; + const newFacingMode = currentFacingMode === 'user' ? 'environment' : 'user'; + + oldStream.getTracks().forEach(track => track.stop()); + await cleanupFrameProcessor(); + + const newStream = await navigator.mediaDevices.getUserMedia({ + video: { facingMode: newFacingMode }, + audio: !!settings.micStream + }); + DOM.videoFeed.srcObject = newStream; + await new Promise((resolve, reject) => { + DOM.videoFeed.addEventListener('loadedmetadata', () => { + if (DOM.videoFeed.videoWidth <= 0 || DOM.videoFeed.videoHeight <= 0) { + return reject(new Error('Invalid video dimensions after metadata')); + } + structuredLog('INFO', 'Video metadata loaded', { width: DOM.videoFeed.videoWidth, height: DOM.videoFeed.videoHeight }); + resolve(); + }, { once: true }); + DOM.videoFeed.addEventListener('error', reject, { once: true }); + }); + setStream(newStream); + + if (settings.micStream) { + setMicStream(newStream); + initializeMicAudio(newStream); + } + + await getText('button3.tts.videoSourceSelect', { state: newFacingMode }); + } else { + structuredLog('WARN', 'toggleVideoSource: No video track available'); + await getText('button3.tts.videoSourceError'); + } + } catch (err) { + structuredLog('ERROR', 'toggleVideoSource error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Video source toggle error: ${err.message}` }); + await getText('button3.tts.videoSourceError'); + } + }, + + updateFrameInterval: async ({ interval }) => { + try { + settings.updateInterval = interval; + if (settings.stream) { + clearInterval(settings.audioTimerId); + setAudioInterval(setInterval(() => { + dispatchEvent('processFrame'); 
+ }, settings.updateInterval)); + } + await getText('button4.tts.fpsBtn', { + fps: settings.autoFPS ? 'auto' : Math.round(1000 / settings.updateInterval) + }); + dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + } catch (err) { + structuredLog('ERROR', 'updateFrameInterval error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Frame interval update error: ${err.message}` }); + await getText('button4.tts.fpsError'); + } + }, + + toggleGrid: async () => { + try { + const currentIndex = availableGrids.findIndex(g => g.id === settings.gridType); + const nextIndex = (currentIndex + 1) % availableGrids.length; + settings.gridType = availableGrids[nextIndex].id; + await getText('button1.tts.gridSelect', { state: settings.gridType }); + dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + } catch (err) { + structuredLog('ERROR', 'toggleGrid error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Grid toggle error: ${err.message}` }); + await getText('button1.tts.startStop', { state: 'error' }); + } + }, + + toggleDebug: async ({ show }) => { + try { + if (DOM.debug) { + DOM.debug.style.display = show ? 'block' : 'none'; + } + await getText('button6.tts.settingsToggle', { state: show ? 
'on' : 'off' }); + } catch (err) { + structuredLog('ERROR', 'toggleDebug error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Debug toggle error: ${err.message}` }); + } + }, + + saveSettings: async () => { + try { + const settingsToSave = { + gridType: settings.gridType, + synthesisEngine: settings.synthesisEngine, + language: settings.language, + autoFPS: settings.autoFPS, + updateInterval: settings.updateInterval, + dayNightMode: settings.dayNightMode, + ttsEnabled: settings.ttsEnabled + }; + localStorage.setItem('acoustsee-settings', JSON.stringify(settingsToSave)); + await getText('button4.tts.saveSettings'); + } catch (err) { + structuredLog('ERROR', 'saveSettings error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Save settings error: ${err.message}` }); + await getText('button4.tts.saveError'); + } + dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + }, + + loadSettings: async () => { + try { + const savedSettings = localStorage.getItem('acoustsee-settings'); + if (savedSettings) { + let parsedSettings; + try { + parsedSettings = JSON.parse(savedSettings); + } catch (parseErr) { + throw new Error(`Invalid JSON in localStorage: ${parseErr.message}`); + } + + const expectedKeys = ['gridType', 'synthesisEngine', 'language', 'autoFPS', 'updateInterval', 'dayNightMode', 'ttsEnabled']; + const expectedTypes = { + gridType: 'string', + synthesisEngine: 'string', + language: 'string', + autoFPS: 'boolean', + updateInterval: 'number', + dayNightMode: 'string', + ttsEnabled: 'boolean' + }; + + expectedKeys.forEach(key => { + if (Object.hasOwn(parsedSettings, key) && typeof parsedSettings[key] === expectedTypes[key]) { + settings[key] = parsedSettings[key]; + } else if (Object.hasOwn(parsedSettings, key)) { + structuredLog('WARN', 'Invalid type for setting during load', { key, receivedType: typeof parsedSettings[key] 
}); + } + }); + + const extraKeys = Object.keys(parsedSettings).filter(key => !expectedKeys.includes(key)); + if (extraKeys.length > 0) { + structuredLog('WARN', 'Extra keys ignored in loaded settings (potential pollution)', { extraKeys }); + } + + await getText('button5.tts.loadSettings.loaded'); + } else { + await getText('button5.tts.loadSettings.none'); + } + } catch (err) { + structuredLog('ERROR', 'Load settings error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Load settings error: ${err.message}` }); + await getText('button5.tts.loadError'); + } + dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream, micActive: !!settings.micStream }); + }, + + emailDebug: async () => { + try { + const logsText = await getLogs(); + if (!logsText || logsText.trim() === '') { + structuredLog('WARN', 'emailDebug: No logs retrieved or empty from IndexedDB'); + alert('No logs available to download. Try generating some actions first.'); + await getText('button5.tts.emailDebug', { state: 'error' }); + return; + } + const blob = new Blob([logsText], { type: 'text/plain' }); + const url = URL.createObjectURL(blob); + const a = document.createElement('a'); + a.href = url; + a.download = 'acoustsee-debug-log.txt'; + document.body.appendChild(a); + a.click(); + document.body.removeChild(a); + URL.revokeObjectURL(url); + await getText('button5.tts.emailDebug'); + } catch (err) { + structuredLog('ERROR', 'emailDebug error', { message: err.message, stack: err.stack }); + handlers.logError({ message: `Email debug error: ${err.message}` }); + alert('Failed to download logs: ' + err.message); + await getText('button5.tts.emailDebug', { state: 'error' }); + } + }, + + logError: ({ message }) => { + structuredLog('ERROR', 'Error logged', { message }); + } + }; + + setDispatcher((eventName, payload = {}) => { + if (handlers[eventName]) { + try { + structuredLog('DEBUG', `Dispatching event: ${eventName}`, { payload 
}); + // Handlers are async; resolve the call so rejections are logged instead of escaping the sync try/catch + Promise.resolve(handlers[eventName](payload)).catch(err => { + structuredLog('ERROR', `Async error in handler ${eventName}`, { message: err.message, stack: err.stack }); + handlers.logError({ message: `Handler ${eventName} error: ${err.message}` }); + }); + } catch (err) { + structuredLog('ERROR', `Error in handler ${eventName}`, { message: err.message, stack: err.stack }); + handlers.logError({ message: `Handler ${eventName} error: ${err.message}` }); + } + } else { + structuredLog('ERROR', `No handler found for event: ${eventName}`); + handlers.logError({ message: `No handler for event: ${eventName}` }); + } + }); + + structuredLog('INFO', 'createEventDispatcher: Dispatcher initialized'); + return { dispatchEvent }; +} \ No newline at end of file diff --git a/future/web/core/frame-processor.js b/future/web/core/frame-processor.js new file mode 100644 index 00000000..703c4671 --- /dev/null +++ b/future/web/core/frame-processor.js @@ -0,0 +1,132 @@ +import { settings } from "./state.js"; +import { dispatchEvent } from "./dispatcher.js"; +import { structuredLog } from "../utils/logging.js"; + +// Module-level state for stateful wrapper +let prevFrameDataLeft = null; +let prevFrameDataRight = null; + +export async function mapFrameToNotes(frameData, width, height, prevLeft, prevRight) { + try { + // Guard against invalid dimensions + if (!width || !height || width <= 0 || height <= 0) { + structuredLog('ERROR', 'Invalid dimensions for frame processing', { width, height }); + dispatchEvent("logError", { message: `Invalid dimensions for frame processing: ${width}x${height}` }); + // Reset state on dimension error if configured + if (settings.resetStateOnError) { + return { notes: [], prevFrameDataLeft: null, prevFrameDataRight: null, avgIntensity: 0 }; + } + return { notes: [], prevFrameDataLeft: prevLeft, prevFrameDataRight: prevRight, avgIntensity: 0 }; + } + + // Validate frameData + if (!frameData || !(frameData instanceof Uint8ClampedArray) || frameData.length < width * height * 4) { + structuredLog('ERROR', 'Invalid frameData for processing', { frameDataLength: frameData?.length || 0 }); + dispatchEvent("logError", { message: `Invalid frameData: length ${frameData?.length || 0}` 
}); + if (settings.resetStateOnError) { + return { notes: [], prevFrameDataLeft: null, prevFrameDataRight: null, avgIntensity: 0 }; + } + return { notes: [], prevFrameDataLeft: prevLeft, prevFrameDataRight: prevRight, avgIntensity: 0 }; + } + // New: Initial frame prev data check + if (!prevLeft || !prevRight) { + structuredLog('INFO', 'mapFrameToNotes: Initial frame, no prev data', { width, height }); + } + + // Use cached grids loaded at startup + const availableGrids = settings.availableGrids; + const grid = availableGrids.find((g) => g.id === settings.gridType); + if (!grid) { + console.error(`Grid not found: ${settings.gridType}`); + dispatchEvent("logError", { message: `Grid not found: ${settings.gridType}` }); + return { notes: [], prevFrameDataLeft: prevLeft, prevFrameDataRight: prevRight, avgIntensity: 0 }; + } + const gridModule = await import(`../synthesis-grids/${grid.id}.js`); + const mapFunction = gridModule[`mapFrameTo${grid.id.split('-').map(word => word.charAt(0).toUpperCase() + word.slice(1)).join('')}`]; + if (!mapFunction) { + console.error(`Map function for ${grid.id} not found`); + dispatchEvent("logError", { message: `Map function for ${grid.id} not found` }); + return { notes: [], prevFrameDataLeft: prevLeft, prevFrameDataRight: prevRight, avgIntensity: 0 }; + } + + // Determine split buffers and copy full RGBA pixels + const halfWidth = Math.floor(width / 2); + const frameSize = halfWidth * height * 4; + const leftFrameData = new Uint8ClampedArray(frameSize); + const rightFrameData = new Uint8ClampedArray(frameSize); + + // TODO: Optimize with buffer pooling or single-pass copy if performance becomes an issue + for (let y = 0; y < height; y++) { + for (let x = 0; x < halfWidth; x++) { + const fullIdx = (y * width + x) * 4; + const halfIdx = (y * halfWidth + x) * 4; + // Copy left RGBA + leftFrameData.set(frameData.subarray(fullIdx, fullIdx + 4), halfIdx); + // Copy right RGBA + const fullIdxR = (y * width + x + halfWidth) * 4; + 
rightFrameData.set(frameData.subarray(fullIdxR, fullIdxR + 4), halfIdx); + } + } + + const leftResult = mapFunction(leftFrameData, halfWidth, height, prevLeft, -1); + const rightResult = mapFunction(rightFrameData, halfWidth, height, prevRight, 1); + const allNotes = [...(leftResult.notes || []), ...(rightResult.notes || [])]; + + // Compute average intensity across both frames + const avgIntensity = ((leftResult.avgIntensity || 0) + (rightResult.avgIntensity || 0)) / 2; + + return { + notes: allNotes, + prevFrameDataLeft: leftResult.newFrameData, + prevFrameDataRight: rightResult.newFrameData, + avgIntensity + }; + } catch (err) { + console.error("mapFrameToNotes error:", err.message); + dispatchEvent("logError", { message: `Frame mapping error: ${err.message}` }); + if (settings.resetStateOnError) { + return { notes: [], prevFrameDataLeft: null, prevFrameDataRight: null, avgIntensity: 0 }; + } + return { notes: [], prevFrameDataLeft: prevLeft, prevFrameDataRight: prevRight, avgIntensity: 0 }; + } +} + +// Stateful wrapper for dispatcher integration +export async function processFrameWithState(frameData, width, height) { + // Guard here as well: the sampling loop below dereferences frameData before mapFrameToNotes validates it + if (!frameData || frameData.length === 0) { + structuredLog('WARN', 'processFrame: Missing or empty frame data'); + return { notes: [], avgIntensity: 0 }; + } + // Skip all-black frames: sample up to 250 pixels and bail out if every sampled intensity is zero + let hasVariance = false; + let sampleSum = 0; + let sampleCount = 0; + for (let i = 0; i < Math.min(1000, frameData.length); i += 4) { + const intensity = (frameData[i] + frameData[i + 1] + frameData[i + 2]) / 3; + sampleSum += intensity; + sampleCount++; + if (intensity > 0) hasVariance = true; + } + if (!hasVariance) { + structuredLog('WARN', 'processFrame: No variance in frame data', { sampleAvg: sampleSum / sampleCount }); + return { notes: [], avgIntensity: 0 }; + } + + const result = await mapFrameToNotes(frameData, width, height, prevFrameDataLeft, prevFrameDataRight); + prevFrameDataLeft = result.prevFrameDataLeft; + prevFrameDataRight = result.prevFrameDataRight; + return result; +} + +// Expose mapFrameToNotes as processFrame for backward compatibility +export { mapFrameToNotes as processFrame }; + +/** Cleanup function for frame processor */ 
+export async function cleanupFrameProcessor() { + try { + structuredLog('INFO', 'cleanupFrameProcessor: Resetting frame processor state'); + prevFrameDataLeft = null; + prevFrameDataRight = null; + return { prevFrameDataLeft: null, prevFrameDataRight: null }; + } catch (err) { + structuredLog('ERROR', 'cleanupFrameProcessor error', { message: err.message }); + dispatchEvent('logError', { message: `Frame processor cleanup error: ${err.message}` }); + prevFrameDataLeft = null; + prevFrameDataRight = null; + return { prevFrameDataLeft: null, prevFrameDataRight: null }; + } +} \ No newline at end of file diff --git a/future/web/core/state.js b/future/web/core/state.js new file mode 100644 index 00000000..3bb8b63b --- /dev/null +++ b/future/web/core/state.js @@ -0,0 +1,142 @@ +// File: web/core/state.js +import { structuredLog } from '../utils/logging.js'; // Top import. +import { addIdbLog, getAllIdbLogs } from '../utils/idb-logger.js'; // New import for DB logging. + +export let settings = { + debugLogging: true, + stream: null, + availableGrids: [], // Loaded once at startup + availableEngines: [], // Loaded once at startup + availableLanguages: [], // Loaded once at startup + audioTimerId: null, // Renamed from audioInterval: timer ID from setInterval, or null when cleared. + updateInterval: 30, + autoFPS: true, + gridType: null, + synthesisEngine: null, + language: null, + isSettingsMode: false, + micStream: null, + ttsEnabled: false, + dayNightMode: 'day', + resetStateOnError: true, // New flag to control state reset on errors + motionThreshold: 20 // Default threshold for motion detection +}; + +/** + * Initializes default settings from the loaded configuration files. + * This runs after the config files have been fetched and parsed. 
+ */ +function initializeDefaults() { + structuredLog('INFO', 'Initializing settings from loaded configs.'); + + if (settings.availableGrids.length > 0 && !settings.gridType) { + settings.gridType = settings.availableGrids[0].id; + } + + if (settings.availableEngines.length > 0 && !settings.synthesisEngine) { + settings.synthesisEngine = settings.availableEngines[0].id; + } + + if (settings.availableLanguages.length > 0) { + if (!settings.language || !settings.availableLanguages.some(l => l.id === settings.language)) { + settings.language = settings.availableLanguages[0].id; + } + } + + structuredLog('INFO', 'Settings initialized', { settings }); +} + +export const loadConfigs = Promise.all([ + fetch('./synthesis-grids/available-grids.json') + .then(async res => { + if (!res.ok) throw new Error(`Failed to fetch available-grids.json: ${res.status}`); + const clone = res.clone(); + const data = await res.json(); + settings.availableGrids = data; + console.log('Debug: availableGrids raw JSON', await clone.text()); + if (settings.availableGrids.length === 0) console.warn('Debug: availableGrids is empty array'); + return data; + }) + .catch(err => { + console.error('available-grids load error:', err.message); + structuredLog('ERROR', 'available-grids load error', { message: err.message }); + settings.availableGrids = []; + return []; + }), + + fetch('./audio/synthesis-engines/available-engines.json') + .then(async res => { + if (!res.ok) throw new Error(`Failed to fetch available-engines.json: ${res.status}`); + const clone = res.clone(); + const data = await res.json(); + settings.availableEngines = data; + console.log('Debug: availableEngines raw JSON', await clone.text()); + if (settings.availableEngines.length === 0) console.warn('Debug: availableEngines is empty array'); + return data; + }) + .catch(err => { + console.error('available-engines load error:', err.message); + structuredLog('ERROR', 'available-engines load error', { message: err.message }); + 
settings.availableEngines = []; + return []; + }), + + fetch('./languages/available-languages.json') + .then(async res => { + if (!res.ok) throw new Error(`Failed to fetch available-languages.json: ${res.status}`); + const clone = res.clone(); + const data = await res.json(); + settings.availableLanguages = data; + console.log('Debug: availableLanguages raw JSON', await clone.text()); + if (settings.availableLanguages.length === 0) console.warn('Debug: availableLanguages is empty array'); + return data; + }) + .catch(err => { + console.error('available-languages load error:', err.message); + structuredLog('ERROR', 'available-languages load error', { message: err.message }); + settings.availableLanguages = []; + return []; + }), +]) + .then(() => { + initializeDefaults(); // Derive defaults from loaded (or empty) arrays + }) + .catch(err => { + console.error('Configs load aggregate error:', err.message); + structuredLog('ERROR', 'Configs load aggregate error', { message: err.message }); + initializeDefaults(); // Ensure defaults even if failed + }); + +export async function getLogs() { + // Fetch from IndexedDB and pretty-print for readability. + const allLogs = await getAllIdbLogs(); + return allLogs.map(log => { + try { + return `Timestamp: ${log.timestamp}\nLevel: ${log.level}\nMessage: ${log.message}\nData: ${JSON.stringify(log.data, null, 2)}\n---\n`; + } catch (err) { + return `Invalid log entry: ${JSON.stringify(log)}\n---\n`; // Fallback for malformed logs. 
+ } + }).join(''); +} + +export function setStream(stream) { + settings.stream = stream; + if (settings.debugLogging) { + structuredLog('INFO', 'setStream', { streamSet: !!stream }); + } +} + +export function setAudioInterval(timerId) { + settings.audioTimerId = timerId; + if (settings.debugLogging) { + const ms = settings.updateInterval; + structuredLog('INFO', 'setAudioInterval', { timerId, updateIntervalMs: ms }); + } +} + +export function setMicStream(micStream) { + settings.micStream = micStream; + if (settings.debugLogging) { + structuredLog('INFO', 'setMicStream', { micStreamSet: !!micStream }); + } +} \ No newline at end of file diff --git a/future/web/grid-dispatcher.js b/future/web/grid-dispatcher.js deleted file mode 100644 index 682ec5b3..00000000 --- a/future/web/grid-dispatcher.js +++ /dev/null @@ -1,13 +0,0 @@ -import { mapFrameToTonnetz } from './synthesis-methods/grids/hex-tonnetz.js'; -import { mapFrameToCircleOfFifths } from './synthesis-methods/grids/circle-of-fifths.js'; -import { settings } from './state.js'; - -export function mapFrame(frameData, width, height, prevFrameData, panValue) { - switch (settings.gridType) { - case 'circle-of-fifths': - return mapFrameToCircleOfFifths(frameData, width, height, prevFrameData, panValue); - case 'hex-tonnetz': - default: - return mapFrameToTonnetz(frameData, width, height, prevFrameData, panValue); - } -} diff --git a/future/web/index.html b/future/web/index.html index 2246151e..cbefd0b0 100644 --- a/future/web/index.html +++ b/future/web/index.html @@ -1,43 +1,33 @@ - - - AcoustSee - + + + + + + AcoustSee + -
- - - - - - - - - - - - - -
- - -
Loading...
-
- - - -
- - -
-

-        
-        
-    
- - +
+ +
+ + + + +
+ diff --git a/future/web/languages/available-languages.json b/future/web/languages/available-languages.json new file mode 100644 index 00000000..7a84ad64 --- /dev/null +++ b/future/web/languages/available-languages.json @@ -0,0 +1,10 @@ +[ + { + "id": "es-ES", + "createdAt": 1751622668665.7266 + }, + { + "id": "en-US", + "createdAt": 1751622636604.726 + } +] \ No newline at end of file diff --git a/future/web/languages/en-US.json b/future/web/languages/en-US.json index 34a6c084..38e06754 100644 --- a/future/web/languages/en-US.json +++ b/future/web/languages/en-US.json @@ -1,20 +1,144 @@ { - "settingsToggle": "Settings {state}", - "modeBtn": "Mode set to {mode}", - "gridSelect": "Grid set to {grid}", - "synthesisSelect": "Synthesis set to {engine}", - "languageSelect": "Language set to {lang}", - "startStop": "Navigation {state}", - "cameraError": "Failed to access camera", - "audioOn": "Audio enabled", - "audioError": "Audio initialization failed", - "audioNotEnabled": "Audio not enabled, please enable audio first", - "cameraPermissionDenied": "Camera permission denied, please enable in settings", - "cameraInUse": "Camera is already in use by another application", - "cameraConstraintsError": "Camera settings not supported by this device", - "frameProcessingError": "Error processing video frame", - "fpsBtn": "Frame rate set to {fps} FPS", - "emailDebug": "Error log email {state}", - "audioError": "Audio error, please tap again to enable audio", - "emailError": "Failed to generate error log email" -} +"powerOn": { + "text": "Power On", + "aria": "Power On to enable audio", + "failed": { + "text": "Audio Failed - Retry", + "aria": "Retry audio initialization" + } + }, + "videoFeed": { + "aria": "Video Feed" + }, + "frameCanvas": { + "aria": "Hidden Frame Processing Canvas" + }, + "debugPanel": { + "aria": "Debug Panel" + }, + + "button1": { + "normal": { + "start": { + "text": "Start", + "aria": "Processing video started" + }, + "stop": { + "text": "Stop", + "aria": 
"Processing video stopped" + } + }, + "settings": { + "text": "Kernel: {gridName}", + "aria": "Kernel selection {gridType}" + }, + "tts": { + "startStop": { + "starting": "Starting synesthesia", + "stopping": "Stopping synesthesia", + "error": "Error starting or stopping processing" + }, + "cameraError": "Camera access error", + "gridSelect": "Kernel set to {state}" + } + }, + "button2": { + "normal": { + "on": { + "text": "Mic On", + "aria": "Turn on microphone" + }, + "off": { + "text": "Mic Off", + "aria": "Turn off microphone" + } + }, + "settings": { + "text": "Sound synthesizer: {engineName}", + "aria": "Select synthesis engine {synthesisEngine}" + }, + "tts": { + "micToggle": { + "turningOn": "Turning on microphone", + "turningOff": "Turning off microphone" + }, + "micError": "Microphone access error", + "synthesisSelect": "Synthesis set to {state}" + } + }, + "button3": { + "normal": { + "text": "Language: {languageName}", + "aria": "Select language {languageName}" + }, + "settings": { + "text": "Input: {inputType}", + "aria": "Input selector: {inputType}" + }, + "tts": { + "languageSelect": "Language set to {state}", + "videoSourceSelect": "Camera set to {state}", + "videoSourceError": "Error switching camera", + "languageError": "Language toggle error" + } + }, + "button4": { + "normal": { + "auto": { + "text": "Auto FPS", + "aria": "Select frame rate" + }, + "manual": { + "text": "{fps} FPS", + "aria": "Select frame rate" + }, + "aria": "Select frame rate" + }, + "settings": { + "text": "Save Settings", + "aria": "Save settings" + }, + "tts": { + "fpsBtn": "Frame rate set to {fps}", + "fpsError": "Frame rate error", + "saveSettings": "Settings saved", + "saveError": "Error saving settings" + } + }, + "button5": { + "normal": { + "text": "Email Console Log", + "aria": "Email console log" + }, + "settings": { + "text": "Load Settings", + "aria": "Load settings" + }, + "tts": { + "emailDebug": { + "email": "Emailing console log for debugging", + "error": "Error emailing console log" + }, + "loadSettings": { + "loaded": "Settings loaded", + 
"none": "No settings found" + }, + "loadError": "Error loading settings" + } + }, + "button6": { + "normal": { + "text": "Settings", + "aria": "Enter settings mode" + }, + "settings": { + "text": "Exit Settings", + "aria": "Exit settings mode" + }, + "tts": { + "settingsToggle": { + "on": "Entering settings mode", + "off": "Exiting settings mode" + }, + "settingsError": "Settings toggle error" + } + } +} \ No newline at end of file diff --git a/future/web/languages/es-ES.json b/future/web/languages/es-ES.json index 6f0c0c35..30664f25 100644 --- a/future/web/languages/es-ES.json +++ b/future/web/languages/es-ES.json @@ -1,20 +1,128 @@ { - "settingsToggle": "Configuraciones {state}", - "modeBtn": "Modo establecido en {mode}", - "gridSelect": "Cuadrícula establecida en {grid}", - "synthesisSelect": "Síntesis establecida en {engine}", - "languageSelect": "Idioma establecido en {lang}", - "startStop": "Navegación {state}", - "cameraError": "No se pudo acceder a la cámara", - "audioOn": "Audio activado", - "audioError": "Fallo en la inicialización del audio", - "audioNotEnabled": "Audio no activado, por favor active el audio primero", - "cameraPermissionDenied": "Permiso de cámara denegado, por favor habilita en ajustes", - "cameraInUse": "La cámara ya está en uso por otra aplicación", - "cameraConstraintsError": "Configuración de cámara no soportada por este dispositivo", - "frameProcessingError": "Error al procesar el cuadro de video", - "fpsBtn": "Velocidad de fotogramas establecida en {fps} FPS", - "emailDebug": "Correo de registro de errores {state}", - "audioError": "Error de audio, por favor toca de nuevo para activar el audio", - "emailError": "No se pudo generar el correo de registro de errores" -} + "button1": { + "normal": { + "start": { + "text": "Iniciar Procesamiento", + "aria": "Iniciar procesamiento de video" + }, + "stop": { + "text": "Detener Procesamiento", + "aria": "Detener procesamiento de video" + } + }, + "settings": { + "text": "Seleccionar 
Cuadrícula: {gridName}", + "aria": "Seleccionar tipo de cuadrícula {gridType}" + }, + "tts": { + "startStop": { + "starting": "Iniciando procesamiento", + "stopping": "Deteniendo procesamiento", + "error": "Error al iniciar o detener el procesamiento" + }, + "cameraError": "Error de acceso a la cámara", + "gridSelect": "Cuadrícula establecida en {state}" + } + }, + "button2": { + "normal": { + "on": { + "text": "Encender Micrófono", + "aria": "Encender micrófono" + }, + "off": { + "text": "Apagar Micrófono", + "aria": "Apagar micrófono" + } + }, + "settings": { + "text": "Seleccionar Motor: {engineName}", + "aria": "Seleccionar motor de síntesis {synthesisEngine}" + }, + "tts": { + "micToggle": { + "turningOn": "Encendiendo micrófono", + "turningOff": "Apagando micrófono" + }, + "micError": "Error de acceso al micrófono", + "synthesisSelect": "Síntesis establecida en {state}" + } + }, + "button3": { + "normal": { + "text": "Idioma: {languageName}", + "aria": "Seleccionar idioma {language}" + }, + "settings": { + "text": "Cambiar Cámara", + "aria": "Cambiar entre cámara frontal y trasera" + }, + "tts": { + "languageSelect": "Idioma establecido en {state}", + "videoSourceSelect": "Cámara establecida en {state}", + "videoSourceError": "Error al cambiar de cámara", + "languageError": "Error al cambiar de idioma" + } + }, + "button4": { + "normal": { + "auto": { + "text": "FPS Automático", + "aria": "Seleccionar velocidad de fotogramas" + }, + "manual": { + "text": "{fps} FPS", + "aria": "Seleccionar velocidad de fotogramas" + }, + "aria": "Seleccionar velocidad de fotogramas" + }, + "settings": { + "text": "Guardar Configuración", + "aria": "Guardar configuración" + }, + "tts": { + "fpsBtn": "Velocidad de fotogramas establecida en {fps}", + "fpsError": "Error en velocidad de fotogramas", + "saveSettings": "Configuración guardada", + "saveError": "Error al guardar configuración" + } + }, + "button5": { + "normal": { + "text": "Enviar Registro de Consola", + "aria": 
"Enviar registro de consola" + }, + "settings": { + "text": "Cargar Configuración", + "aria": "Cargar configuración" + }, + "tts": { + "emailDebug": { + "email": "Enviando registro de consola", + "error": "Error al enviar registro de consola" + }, + "loadSettings": { + "loaded": "Configuración cargada", + "none": "No se encontró configuración" + }, + "loadError": "Error al cargar configuración" + } + }, + "button6": { + "normal": { + "text": "Configuración", + "aria": "Entrar en modo configuración" + }, + "settings": { + "text": "Salir de Configuración", + "aria": "Salir del modo configuración" + }, + "tts": { + "settingsToggle": { + "on": "Entrando en modo configuración", + "off": "Saliendo del modo configuración" + }, + "settingsError": "Error al alternar configuración" + } + } +} \ No newline at end of file diff --git a/future/web/main.js b/future/web/main.js index 15db3a32..fbaefe20 100644 --- a/future/web/main.js +++ b/future/web/main.js @@ -1,42 +1,215 @@ -/** - * Entry point for AcoustSee, initializing UI handlers and event dispatcher. 
- */ -import { setupRectangleHandlers } from './ui/rectangle-handlers.js'; -import { setupSettingsHandlers } from './ui/settings-handlers.js'; -import { createEventDispatcher } from './ui/event-dispatcher.js'; -import { initDOM } from './ui/dom.js'; -import { setDOM, setDispatchEvent } from './context.js'; - -console.log('main.js: Starting initialization'); - -document.addEventListener('DOMContentLoaded', async () => { - const DOM = await initDOM(); - setDOM(DOM); - const { dispatchEvent } = createEventDispatcher(DOM); - setDispatchEvent(dispatchEvent); - console.log('DOM loaded, initializing AcoustSee'); +// File: web/main.js +import { setupUIController } from './ui/ui-controller.js'; +import { createEventDispatcher } from './core/dispatcher.js'; +import { loadConfigs, settings } from './core/state.js'; +import { structuredLog } from './utils/logging.js'; +import { setDOM } from './core/context.js'; + +let getText, initializeLanguageIfNeeded, speakText, announceMessage; +try { + ({ getText, initializeLanguageIfNeeded, speakText, announceMessage } = await import('./utils/utils.js')); + console.log('utils.js imported successfully'); // Confirm import worked +} catch (importErr) { + console.error('Failed to import utils.js:', importErr.message); + getText = async (key) => { + console.warn('TTS fallback for key:', key); + return key; + }; + initializeLanguageIfNeeded = () => { + structuredLog('WARN', 'Language init skipped due to import failure'); + return 'en-US'; // Fallback return + }; + speakText = () => { + structuredLog('WARN', 'TTS skipped due to import failure'); + }; + announceMessage = (msg) => { + structuredLog('WARN', 'Announcement skipped due to import failure', { msg }); + }; +} + +const DOM = { + videoFeed: document.getElementById('videoFeed'), + frameCanvas: document.getElementById('frameCanvas'), + button1: document.getElementById('button1'), + button2: document.getElementById('button2'), + button3: document.getElementById('button3'), + button4: 
document.getElementById('button4'), + button5: document.getElementById('button5'), + button6: document.getElementById('button6'), + powerOn: document.getElementById('powerOn'), + splashScreen: document.getElementById('splashScreen'), + mainContainer: document.getElementById('mainContainer'), + debugPanel: document.getElementById('debugPanel'), +}; + +// Initialize shared DOM context for modules that need it +setDOM(DOM); + +// Custom Error class to attach metadata +class CustomError extends Error { + constructor(message, data = {}) { + super(message); + this.data = data; + } +} + +// Helper to validate DOM elements +function validateDOM() { + const requiredIds = ['videoFeed', 'button1', 'button2', 'button3', 'button4', 'button5', 'button6', 'powerOn', 'splashScreen', 'mainContainer', 'debugPanel', 'frameCanvas']; + const missing = requiredIds.filter(id => !DOM[id]); + if (missing.length > 0) { + throw new CustomError('Missing DOM elements', { missing }); + } +} + +async function init() { + const originalConsole = { + log: console.log, + warn: console.warn, + error: console.error + }; try { - // Wait for DOM elements to be assigned and get the DOM object - const DOM = await initDOM(); - console.log('DOM initialized:', DOM); - - // Create event dispatcher with DOM - const { dispatchEvent } = createEventDispatcher(DOM); - window.dispatchEvent = dispatchEvent; // For mailto: feature - console.log('Dispatcher created:', dispatchEvent); - - // Setup handlers with DOM explicitly passed - setupRectangleHandlers({ dispatchEvent, DOM }); - console.log('Rectangle handlers set up'); - setupSettingsHandlers({ dispatchEvent, DOM }); - console.log('Settings handlers set up'); - - // Initial UI update - dispatchEvent('updateUI', { settingsMode: false, streamActive: false }); - console.log('Initial UI update dispatched'); + // Validate DOM early + validateDOM(); + + // Wait for configs to fully load and defaults to be set + await loadConfigs; + structuredLog('INFO', 'init: 
Configurations loaded', { + gridType: settings.gridType, + synthesisEngine: settings.synthesisEngine, + language: settings.language + }); + + // Handle missing configuration gracefully + if (!settings.gridType || !settings.synthesisEngine || !settings.language) { + const missing = []; + if (!settings.gridType) missing.push('grids'); + if (!settings.synthesisEngine) missing.push('engines'); + if (!settings.language) missing.push('languages'); + const msg = await getText('initMissingConfigs', { missing: missing.join(', ') }); + announceMessage(msg); + if (settings.ttsEnabled) speakText(msg); + structuredLog('WARN', 'Partial configs; proceeding with limitations', { missing }); + } + + // Ensure language is initialized before translating + initializeLanguageIfNeeded(); + + // Set aria and text for all relevant elements deriving from ID + const staticElements = [ + { el: DOM.splashScreen, baseKey: 'splashScreen', setText: false, setAria: false }, // Non-interactive, no aria/text + { el: DOM.mainContainer, baseKey: 'mainContainer', setText: false, setAria: false }, + { el: DOM.powerOn, baseKey: 'powerOn', setText: true, setAria: true }, + { el: DOM.videoFeed, baseKey: 'videoFeed', setText: false, setAria: true }, + { el: DOM.frameCanvas, baseKey: 'frameCanvas', setText: false, setAria: false }, // Hidden, no aria + { el: DOM.debugPanel, baseKey: 'debugPanel', setText: false, setAria: true }, + { el: DOM.button1, baseKey: 'button1', setText: true, setAria: true }, + { el: DOM.button2, baseKey: 'button2', setText: true, setAria: true }, + { el: DOM.button3, baseKey: 'button3', setText: true, setAria: true }, + { el: DOM.button4, baseKey: 'button4', setText: true, setAria: true }, + { el: DOM.button5, baseKey: 'button5', setText: true, setAria: true }, + { el: DOM.button6, baseKey: 'button6', setText: true, setAria: true }, + ]; + const setupErrors = []; + for (const { el, baseKey, setText: shouldSetText, setAria } of staticElements) { + if (!el) continue; // Validation 
already threw; no need for warn here + try { + if (setAria) { + const ariaText = await getText(`${baseKey}.aria`, {}); + el.setAttribute('aria-label', ariaText); + announceMessage(ariaText); // Announce if needed + } + if (shouldSetText) { + const text = await getText(`${baseKey}.text`, {}); + el.textContent = text; + announceMessage(text); + speakText(text); // Speak if TTS enabled + } + } catch (textErr) { + setupErrors.push({ baseKey, message: textErr.message }); + // Continue with best-effort: set fallback + if (setAria) { + el.setAttribute('aria-label', baseKey); + announceMessage(baseKey); + } + if (shouldSetText) { + el.textContent = baseKey; + announceMessage(baseKey); + speakText(baseKey); + } + } + } + if (setupErrors.length > 0) { + structuredLog('WARN', 'UI setup had partial failures', { errors: setupErrors }); + } + + const { dispatchEvent } = await createEventDispatcher(DOM); + setupUIController({ dispatchEvent, DOM }); + + // Console overrides moved here to break circular dependency + function safeStructuredLog(level, message, data = {}, persist = true, sample = true) { + const tempLog = console.log; + const tempWarn = console.warn; + const tempError = console.error; + try { + console.log = originalConsole.log; + console.warn = originalConsole.warn; + console.error = originalConsole.error; + + structuredLog(level, message, data, persist, sample); + } finally { + console.log = tempLog; + console.warn = tempWarn; + console.error = tempError; + } + } + + console.log = (...args) => { + originalConsole.log.apply(console, args); + if (settings.debugLogging) safeStructuredLog('INFO', 'Console log', { args }, false); + }; + console.warn = (...args) => { + originalConsole.warn.apply(console, args); + if (settings.debugLogging) safeStructuredLog('WARN', 'Console warn', { args }, false); + }; + console.error = (...args) => { + originalConsole.error.apply(console, args); + safeStructuredLog('ERROR', 'Console error', { args }, false); + }; + + // Force initial UI 
update for dynamic content + dispatchEvent('updateUI', { settingsMode: false, streamActive: false, micActive: false }); + structuredLog('INFO', 'init: UI setup complete'); } catch (err) { - console.error('Initialization failed:', err.message); + let errorMessage = err.message; + let errorData = err instanceof CustomError ? err.data : {}; + let specificMessage = errorMessage; + if (err.data?.missing) { + specificMessage = `Missing DOM elements: ${err.data.missing.join(', ')}`; + } else if (err.data?.language === null) { + specificMessage = 'Language configuration failed to initialize'; + } // Add more categories as needed + structuredLog('ERROR', 'init error', { message: specificMessage, data: errorData, stack: err.stack }); + originalConsole.error('init error:', err.message); + try { + const errorText = await getText('init.tts.error'); + speakText(errorText); + announceMessage(`Initialization failed: ${specificMessage}. Check console for details.`); + } catch (ttsErr) { + originalConsole.error('TTS error:', ttsErr.message); + announceMessage(`Initialization failed: ${specificMessage}. Check console for details.`); + } + } +} + +// Adds uncaught error handler for global contexts +window.onerror = function (message, source, lineno, colno, error) { + structuredLog('ERROR', 'Uncaught global error', { message, source, lineno, colno, stack: error ? error.stack : 'N/A' }); + if (settings?.debugLogging ?? 
true) { // Safe check; default to true if settings null (pre-init) + console.error(message); // Allow bubbling in debug mode + return false; // Let browser handle } -}); + return true; // Suppress in production +}; -console.log('main.js: Initialization script loaded'); +init(); \ No newline at end of file diff --git a/future/web/state.js b/future/web/state.js index 5ceb2059..0eeec1d9 100644 --- a/future/web/state.js +++ b/future/web/state.js @@ -1,36 +1,74 @@ -export const settings = { - audioInterval: null, - updateInterval: 50, - language: 'en-US', - synthesisEngine: 'sine-wave', - stream: null, - gridType: 'circle-of-fifths', - dayNightMode: 'day', - isSettingsMode: false -}; +import { structuredLog } from './utils/logging.js'; // Top import. +import { addIdbLog, getAllIdbLogs } from './utils/idb-logger.js'; // New import for DB logging. -export let skipFrame = false; -export let prevFrameDataLeft = null; -export let prevFrameDataRight = null; -export let frameCount = 0; -export let lastTime = performance.now(); +// Capture original console methods before overrides +const originalConsole = { + log: console.log, + warn: console.warn, + error: console.error +}; -export function setStream(stream) { - settings.stream = stream; -} +export let settings = { + debugLogging: true, + stream: null, + availableGrids: [], // Loaded once at startup + availableEngines: [], // Loaded once at startup + availableLanguages: [], // Loaded once at startup + audioTimerId: null, // Renamed from audioInterval: timer ID from setInterval, or null when cleared. 
+ updateInterval: 30, + autoFPS: true, + gridType: null, + synthesisEngine: null, + language: null, + isSettingsMode: false, + micStream: null, + ttsEnabled: false, + dayNightMode: 'day' +}; -export function setAudioInterval(interval) { - settings.audioInterval = interval; -} +export const loadConfigs = (async () => { + try { + const [grids, engines, languages, intervals] = await Promise.all([ + fetch('./synthesis-methods/grids/availableGrids.json').then(res => res.json()), + fetch('./synthesis-methods/engines/availableEngines.json').then(res => res.json()), + fetch('./languages/availableLanguages.json').then(res => res.json()), + Promise.resolve([50, 33, 16]) + ]); + settings.availableGrids = grids; + settings.availableEngines = engines; + settings.availableLanguages = languages; + settings.gridType = grids[0]?.id || settings.gridType; + settings.synthesisEngine = engines[0]?.id || settings.synthesisEngine; + settings.language = languages[0]?.id || settings.language; + settings.updateInterval = intervals[0] || settings.updateInterval; + } catch (err) { + structuredLog('ERROR', 'Failed to load configurations', { message: err.message }); + } +})(); -export function setSkipFrame(skip) { - skipFrame = skip; +export async function getLogs() { + // Fetch from IndexedDB and pretty-print for readability. + const allLogs = await getAllIdbLogs(); + return allLogs.map(log => { + try { + return `Timestamp: ${log.timestamp}\nLevel: ${log.level}\nMessage: ${log.message}\nData: ${JSON.stringify(log.data, null, 2)}\n---\n`; + } catch (err) { + return `Invalid log entry: ${JSON.stringify(log)}\n---\n`; // Fallback for malformed logs. 
+ } + }).join(''); } -export function setPrevFrameDataLeft(data) { - prevFrameDataLeft = data; +export function setStream(stream) { + settings.stream = stream; + if (settings.debugLogging) { + structuredLog('INFO', 'setStream', { streamSet: !!stream }); + } } -export function setPrevFrameDataRight(data) { - prevFrameDataRight = data; +export function setAudioInterval(timerId) { + settings.audioTimerId = timerId; + if (settings.debugLogging) { + const ms = settings.updateInterval; + structuredLog('INFO', 'setAudioInterval', { timerId, updateIntervalMs: ms }); + } } diff --git a/future/web/styles.css b/future/web/styles.css index 631b16dc..e759b5c1 100644 --- a/future/web/styles.css +++ b/future/web/styles.css @@ -1,89 +1,94 @@ body { - font-family: Arial, sans-serif; - margin: 0; - padding: 0; - height: 100vh; - display: flex; - justify-content: center; - align-items: center; - overflow: hidden; + font-family: Arial, sans-serif; + margin: 0; + padding: 0; + height: 100vh; + width: 100vw; + display: flex; + justify-content: center; + align-items: center; + overflow: hidden; + background-color: #f0f0f0; } -#main-container { - position: relative; - width: 90%; - height: 90%; - display: flex; - flex-wrap: wrap; - justify-content: space-between; - align-content: space-between; +.splash-screen { + position: fixed; + top: 0; + left: 0; + width: 100%; + height: 100%; + background-color: #000; + display: flex; + justify-content: center; + align-items: center; + z-index: 30; } -.edge-button { - position: absolute; - width: 20%; - height: 10%; - font-size: 18px; - padding: 10px; - border: none; - background-color: #4CAF50; - color: white; - cursor: pointer; - z-index: 10; +.power-on-button { + font-size: 5vw; + padding: 2vw 4vw; + background-color: #4CAF50; + color: white; + border: none; + border-radius: 1vw; + cursor: pointer; } -.top-rectangle { top: 0; left: 40%; } -.left-rectangle { top: 20%; left: 0; } -.right-rectangle { top: 20%; right: 0; } -.bottom-rectangle { 
bottom: 0; left: 40%; } -.audio-rectangle { bottom: 10%; right: 10%; width: 15%; height: 8%; } +.instructions-button { + font-size: 5vw; + padding: 2vw 4vw; + background-color: #4CAF50; + color: white; + border: none; + border-radius: 1vw; + cursor: pointer; +} -.center-rectangle { - position: relative; - width: 60%; - height: 60%; - display: flex; - flex-direction: column; - justify-content: center; - align-items: center; - background-color: #f0f0f0; - border: 2px solid #333; +.main-container { + width: 100%; + height: 100%; + display: grid; + grid-template-columns: repeat(2, 50%); + grid-template-rows: repeat(3, 33.33%); + gap: 1vw; + padding: 1vw; + box-sizing: border-box; } -#videoFeed, #imageCanvas { - width: 100%; - height: 100%; - object-fit: cover; +.grid-button { + font-size: 3vw; + background-color: #4CAF50; + color: white; + border: none; + border-radius: 1vw; + cursor: pointer; + display: flex; + justify-content: center; + align-items: center; + position: relative; } -.loading-indicator { - position: absolute; - font-size: 20px; - color: #333; - display: none; +.video-container { + position: relative; + overflow: hidden; } -.debug-panel { - position: fixed; - top: 10px; - right: 10px; - width: 300px; - height: 400px; - background: rgba(0, 0, 0, 0.8); - color: white; - overflow-y: auto; - padding: 10px; - display: none; - z-index: 20; +.video-container video { + width: 100%; + height: 100%; + object-fit: cover; + z-index: 1; } -.debug-button { - width: 100%; - margin: 5px 0; - padding: 5px; - font-size: 14px; - background-color: #ff4444; - color: white; - border: none; - cursor: pointer; +.video-container .button-text { + position: absolute; + bottom: 5%; + left: 50%; + transform: translateX(-50%); + z-index: 2; + background: rgba(0, 0, 0, 0.5); + padding: 0.5vw 1vw; + border-radius: 0.5vw; + color: white; + font-size: 3vw; } diff --git a/future/web/synthesis-grids/available-grids.json b/future/web/synthesis-grids/available-grids.json new file mode 
100644 index 00000000..c397dadc --- /dev/null +++ b/future/web/synthesis-grids/available-grids.json @@ -0,0 +1,11 @@ + +[ + { + "id": "hex-tonnetz", + "createdAt": 1750899236982.1191 + }, + { + "id": "circle-of-fifths", + "createdAt": 1750899236950.1191 + } +] \ No newline at end of file diff --git a/future/web/synthesis-grids/circle-of-fifths.js b/future/web/synthesis-grids/circle-of-fifths.js new file mode 100644 index 00000000..2485b363 --- /dev/null +++ b/future/web/synthesis-grids/circle-of-fifths.js @@ -0,0 +1,82 @@ +import { settings } from "../core/state.js"; +import { structuredLog } from "../utils/logging.js"; + +const notesPerOctave = 12; +const octaves = 5; +const minFreq = 100; +const maxFreq = 3200; +const frequencies = []; +for (let octave = 0; octave < octaves; octave++) { + for (let note = 0; note < notesPerOctave; note++) { + const freq = minFreq * Math.pow(2, octave + note / notesPerOctave); + if (freq <= maxFreq) frequencies.push(freq); + } +} + +export function mapFrameToCircleOfFifths( + frameData, + width, + height, + prevFrameData, + panValue, +) { + const gridWidth = width / 12; + const gridHeight = height / 12; + const movingRegions = []; + const newFrameData = new Uint8ClampedArray(frameData); + const motionThreshold = settings.motionThreshold || 20; + // Correct avgIntensity over pixels (skip alpha) + let avgIntensity = 0; + for (let i = 0; i < frameData.length; i += 4) { + const r = frameData[i]; + const g = frameData[i + 1]; + const b = frameData[i + 2]; + avgIntensity += (r + g + b) / 3; + } + avgIntensity /= (frameData.length / 4); + + if (prevFrameData) { + for (let y = 0; y < height; y++) { + for (let x = 0; x < width; x++) { + const idx = (y * width + x) * 4; + const r = frameData[idx]; + const g = frameData[idx + 1]; + const b = frameData[idx + 2]; + const intensity = (r + g + b) / 3; + + const pr = prevFrameData[idx]; + const pg = prevFrameData[idx + 1]; + const pb = prevFrameData[idx + 2]; + const prevIntensity = (pr + pg + pb) 
/ 3; + + const delta = Math.abs(intensity - prevIntensity); + if (delta > motionThreshold) { + const gridX = Math.floor(x / gridWidth); + const gridY = Math.floor(y / gridHeight); + movingRegions.push({ gridX, gridY, intensity, delta }); + } + } + } + structuredLog('DEBUG', 'Motion regions detected', { count: movingRegions.length, threshold: motionThreshold }); + } + + movingRegions.sort((a, b) => b.delta - a.delta); + const notes = []; + const usedCells = new Set(); + for (let i = 0; i < Math.min(8, movingRegions.length); i++) { + const { gridX, gridY, intensity } = movingRegions[i]; + const cellKey = `${gridX},${gridY}`; + if (usedCells.has(cellKey)) continue; + usedCells.add(cellKey); + const noteIndex = (gridX + gridY) % notesPerOctave; + const freq = frequencies[noteIndex] || frequencies[frequencies.length - 1]; + const amplitude = + settings.dayNightMode === "day" + ? 0.02 + (intensity / 255) * 0.06 + : 0.08 - (intensity / 255) * 0.06; + const harmonics = [freq * Math.pow(2, 7 / 12), freq * Math.pow(2, 4 / 12)]; + notes.push({ pitch: freq, intensity: amplitude, harmonics, pan: panValue }); + } + + return { notes, newFrameData, avgIntensity }; +} diff --git a/future/web/synthesis-grids/hex-tonnetz.js b/future/web/synthesis-grids/hex-tonnetz.js new file mode 100644 index 00000000..dc43d359 --- /dev/null +++ b/future/web/synthesis-grids/hex-tonnetz.js @@ -0,0 +1,101 @@ +import { settings } from "../core/state.js"; +import { structuredLog } from "../utils/logging.js"; + +const gridSize = 32; +const notesPerOctave = 12; +const octaves = 5; +const minFreq = 100; +const maxFreq = 3200; +const frequencies = []; +for (let octave = 0; octave < octaves; octave++) { + for (let note = 0; note < notesPerOctave; note++) { + const freq = minFreq * Math.pow(2, octave + note / notesPerOctave); + if (freq <= maxFreq) frequencies.push(freq); + } +} +const tonnetzGrid = Array(gridSize) + .fill() + .map(() => Array(gridSize).fill(0)); +for (let y = 0; y < gridSize; y++) { + for 
(let x = 0; x < gridSize; x++) { + const octave = Math.floor((y / gridSize) * octaves); + const noteOffset = (x + (y % 2) * 6) % notesPerOctave; + const freqIndex = octave * notesPerOctave + noteOffset; + tonnetzGrid[y][x] = + frequencies[freqIndex % frequencies.length] || + frequencies[frequencies.length - 1]; + } +} + +export function mapFrameToHexTonnetz( + frameData, + width, + height, + prevFrameData, + panValue, +) { + const gridWidth = width / gridSize; + const gridHeight = height / gridSize; + const movingRegions = []; + const newFrameData = new Uint8ClampedArray(frameData); + + // Correct avgIntensity over pixels (skip alpha) + let avgIntensity = 0; + for (let i = 0; i < frameData.length; i += 4) { + const r = frameData[i]; + const g = frameData[i + 1]; + const b = frameData[i + 2]; + avgIntensity += (r + g + b) / 3; + } + avgIntensity /= (frameData.length / 4); + + if (prevFrameData) { + for (let y = 0; y < height; y++) { + for (let x = 0; x < width; x++) { + const idx = (y * width + x) * 4; + const r = frameData[idx]; + const g = frameData[idx + 1]; + const b = frameData[idx + 2]; + const intensity = (r + g + b) / 3; + + const pr = prevFrameData[idx]; + const pg = prevFrameData[idx + 1]; + const pb = prevFrameData[idx + 2]; + const prevIntensity = (pr + pg + pb) / 3; + + const delta = Math.abs(intensity - prevIntensity); + if (delta > (settings.motionThreshold || 20)) { + const gridX = Math.floor(x / gridWidth); + const gridY = Math.floor(y / gridHeight); + movingRegions.push({ gridX, gridY, intensity, delta }); + } + } + } + structuredLog('DEBUG', 'Motion regions detected', { count: movingRegions.length, threshold: settings.motionThreshold || 20 }); + } + + movingRegions.sort((a, b) => b.delta - a.delta); + const notes = []; + const usedCells = new Set(); + for (let i = 0; i < Math.min(16, movingRegions.length); i++) { + const { gridX, gridY, intensity } = movingRegions[i]; + const cellKey = `${gridX},${gridY}`; + if (usedCells.has(cellKey)) continue; + 
usedCells.add(cellKey); + for (let dy = -1; dy <= 1; dy++) { + for (let dx = -1; dx <= 1; dx++) { + if (dx === 0 && dy === 0) continue; + usedCells.add(`${gridX + dx},${gridY + dy}`); + } + } + const freq = tonnetzGrid[gridY][gridX]; + const amplitude = + settings.dayNightMode === "day" + ? 0.02 + (intensity / 255) * 0.06 + : 0.08 - (intensity / 255) * 0.06; + const harmonics = [freq * Math.pow(2, 7 / 12), freq * Math.pow(2, 4 / 12)]; + notes.push({ pitch: freq, intensity: amplitude, harmonics, pan: panValue }); + } + + return { notes, newFrameData, avgIntensity }; +} diff --git a/future/web/synthesis-methods/engines/fm-synthesis.js b/future/web/synthesis-methods/engines/fm-synthesis.js deleted file mode 100644 index cd76c7d3..00000000 --- a/future/web/synthesis-methods/engines/fm-synthesis.js +++ /dev/null @@ -1,40 +0,0 @@ -import { audioContext, oscillators } from '../../audio-processor.js'; - -export function playFMSynthesis(notes) { - let oscIndex = 0; - const allNotes = notes.sort((a, b) => b.intensity - a.intensity); - for (let i = 0; i < oscillators.length; i++) { - const oscData = oscillators[i]; - if (oscIndex < allNotes.length && i < oscillators.length) { - const { pitch, intensity, harmonics, pan } = allNotes[oscIndex]; - oscData.osc.type = 'sine'; - oscData.osc.frequency.setTargetAtTime(pitch, audioContext.currentTime, 0.015); - oscData.gain.gain.setTargetAtTime(intensity, audioContext.currentTime, 0.015); - oscData.panner.pan.setTargetAtTime(pan, audioContext.currentTime, 0.015); - oscData.active = true; - if (harmonics.length && oscIndex + 1 < oscillators.length) { // Limitar a 1 modulador por nota - oscIndex++; - const modulator = audioContext.createOscillator(); - modulator.type = 'sine'; - modulator.frequency.setTargetAtTime(pitch * 2, audioContext.currentTime, 0.015); - const modGain = audioContext.createGain(); - modGain.gain.setTargetAtTime(intensity * 100, audioContext.currentTime, 0.015); - 
modulator.connect(modGain).connect(oscData.osc.frequency); - modulator.start(); - // Usar el siguiente oscilador para el armónico principal - if (oscIndex < oscillators.length) { - const harmonicOsc = oscillators[oscIndex]; - harmonicOsc.osc.type = 'sine'; - harmonicOsc.osc.frequency.setTargetAtTime(harmonics[0], audioContext.currentTime, 0.015); - harmonicOsc.gain.gain.setTargetAtTime(intensity * 0.5, audioContext.currentTime, 0.015); - harmonicOsc.panner.pan.setTargetAtTime(pan, audioContext.currentTime, 0.015); - harmonicOsc.active = true; - } - } - oscIndex++; - } else { - oscData.gain.gain.setTargetAtTime(0, audioContext.currentTime, 0.015); - oscData.active = false; - } - } -} diff --git a/future/web/synthesis-methods/engines/sine-wave.js b/future/web/synthesis-methods/engines/sine-wave.js deleted file mode 100644 index a54e9251..00000000 --- a/future/web/synthesis-methods/engines/sine-wave.js +++ /dev/null @@ -1,32 +0,0 @@ -import { audioContext, oscillators } from '../../audio-processor.js'; - -export function playSineWave(notes) { - let oscIndex = 0; - const allNotes = notes.sort((a, b) => b.intensity - a.intensity); - for (let i = 0; i < oscillators.length; i++) { - const oscData = oscillators[i]; - if (oscIndex < allNotes.length && i < oscillators.length) { - const { pitch, intensity, harmonics, pan } = allNotes[oscIndex]; - oscData.osc.type = 'sine'; - oscData.osc.frequency.setTargetAtTime(pitch, audioContext.currentTime, 0.015); - oscData.gain.gain.setTargetAtTime(intensity, audioContext.currentTime, 0.015); - oscData.panner.pan.setTargetAtTime(pan, audioContext.currentTime, 0.015); - oscData.active = true; - if (harmonics.length && oscIndex + harmonics.length < oscillators.length) { - for (let h = 0; h < harmonics.length && oscIndex + h < oscillators.length; h++) { - oscIndex++; - const harmonicOsc = oscillators[oscIndex]; - harmonicOsc.osc.type = 'sine'; - harmonicOsc.osc.frequency.setTargetAtTime(harmonics[h], audioContext.currentTime, 0.015); - 
harmonicOsc.gain.gain.setTargetAtTime(intensity * 0.5, audioContext.currentTime, 0.015); - harmonicOsc.panner.pan.setTargetAtTime(pan, audioContext.currentTime, 0.015); - harmonicOsc.active = true; - } - } - oscIndex++; - } else { - oscData.gain.gain.setTargetAtTime(0, audioContext.currentTime, 0.015); - oscData.active = false; - } - } -} \ No newline at end of file diff --git a/future/web/synthesis-methods/grids/circle-of-fifths.js b/future/web/synthesis-methods/grids/circle-of-fifths.js deleted file mode 100644 index 8ea6bb02..00000000 --- a/future/web/synthesis-methods/grids/circle-of-fifths.js +++ /dev/null @@ -1,54 +0,0 @@ -import { settings } from '../../state.js'; - -const notesPerOctave = 12; -const octaves = 5; -const minFreq = 100; -const maxFreq = 3200; -const frequencies = []; -for (let octave = 0; octave < octaves; octave++) { - for (let note = 0; note < notesPerOctave; note++) { - const freq = minFreq * Math.pow(2, octave + note / notesPerOctave); - if (freq <= maxFreq) frequencies.push(freq); - } -} - -export function mapFrameToCircleOfFifths(frameData, width, height, prevFrameData, panValue) { - const gridWidth = width / 12; - const gridHeight = height / 12; - const movingRegions = []; - const newFrameData = new Uint8ClampedArray(frameData); - let avgIntensity = 0; - for (let i = 0; i < frameData.length; i++) avgIntensity += frameData[i]; - avgIntensity /= frameData.length; - - if (prevFrameData) { - for (let y = 0; y < height; y++) { - for (let x = 0; x < width; x++) { - const idx = y * width + x; - const delta = Math.abs(frameData[idx] - prevFrameData[idx]); - if (delta > 50) { - const gridX = Math.floor(x / gridWidth); - const gridY = Math.floor(y / gridHeight); - movingRegions.push({ gridX, gridY, intensity: frameData[idx], delta }); - } - } - } - } - - movingRegions.sort((a, b) => b.delta - a.delta); - const notes = []; - const usedCells = new Set(); - for (let i = 0; i < Math.min(8, movingRegions.length); i++) { - const { gridX, gridY, 
intensity } = movingRegions[i]; - const cellKey = `${gridX},${gridY}`; - if (usedCells.has(cellKey)) continue; - usedCells.add(cellKey); - const noteIndex = (gridX + gridY) % notesPerOctave; - const freq = frequencies[noteIndex] || frequencies[frequencies.length - 1]; - const amplitude = settings.dayNightMode === 'day' ? 0.02 + (intensity / 255) * 0.06 : 0.08 - (intensity / 255) * 0.06; - const harmonics = [freq * Math.pow(2, 7/12), freq * Math.pow(2, 4/12)]; - notes.push({ pitch: freq, intensity: amplitude, harmonics, pan: panValue }); - } - - return { notes, newFrameData, avgIntensity }; -} diff --git a/future/web/synthesis-methods/grids/hex-tonnetz.js b/future/web/synthesis-methods/grids/hex-tonnetz.js deleted file mode 100644 index 492fe5da..00000000 --- a/future/web/synthesis-methods/grids/hex-tonnetz.js +++ /dev/null @@ -1,69 +0,0 @@ -import { settings } from '../../state.js'; - -const gridSize = 32; -const notesPerOctave = 12; -const octaves = 5; -const minFreq = 100; -const maxFreq = 3200; -const frequencies = []; -for (let octave = 0; octave < octaves; octave++) { - for (let note = 0; note < notesPerOctave; note++) { - const freq = minFreq * Math.pow(2, octave + note / notesPerOctave); - if (freq <= maxFreq) frequencies.push(freq); - } -} -const tonnetzGrid = Array(gridSize).fill().map(() => Array(gridSize).fill(0)); -for (let y = 0; y < gridSize; y++) { - for (let x = 0; x < gridSize; x++) { - const octave = Math.floor((y / gridSize) * octaves); - const noteOffset = (x + (y % 2) * 6) % notesPerOctave; - const freqIndex = octave * notesPerOctave + noteOffset; - tonnetzGrid[y][x] = frequencies[freqIndex % frequencies.length] || frequencies[frequencies.length - 1]; - } -} - -export function mapFrameToTonnetz(frameData, width, height, prevFrameData, panValue) { - const gridWidth = width / gridSize; - const gridHeight = height / gridSize; - const movingRegions = []; - const newFrameData = new Uint8ClampedArray(frameData); - let avgIntensity = 0; - for (let i = 
0; i < frameData.length; i++) avgIntensity += frameData[i]; - avgIntensity /= frameData.length; - - if (prevFrameData) { - for (let y = 0; y < height; y++) { - for (let x = 0; x < width; x++) { - const idx = y * width + x; - const delta = Math.abs(frameData[idx] - prevFrameData[idx]); - if (delta > 50) { - const gridX = Math.floor(x / gridWidth); - const gridY = Math.floor(y / gridHeight); - movingRegions.push({ gridX, gridY, intensity: frameData[idx], delta }); - } - } - } - } - - movingRegions.sort((a, b) => b.delta - a.delta); - const notes = []; - const usedCells = new Set(); - for (let i = 0; i < Math.min(16, movingRegions.length); i++) { - const { gridX, gridY, intensity } = movingRegions[i]; - const cellKey = `${gridX},${gridY}`; - if (usedCells.has(cellKey)) continue; - usedCells.add(cellKey); - for (let dy = -1; dy <= 1; dy++) { - for (let dx = -1; dx <= 1; dx++) { - if (dx === 0 && dy === 0) continue; - usedCells.add(`${gridX + dx},${gridY + dy}`); - } - } - const freq = tonnetzGrid[gridY][gridX]; - const amplitude = settings.dayNightMode === 'day' ? 
0.02 + (intensity / 255) * 0.06 : 0.08 - (intensity / 255) * 0.06; - const harmonics = [freq * Math.pow(2, 7/12), freq * Math.pow(2, 4/12)]; - notes.push({ pitch: freq, intensity: amplitude, harmonics, pan: panValue }); - } - - return { notes, newFrameData, avgIntensity }; -} diff --git a/future/web/test/frame-processor.test.js b/future/web/test/frame-processor.test.js new file mode 100644 index 00000000..7297d95c --- /dev/null +++ b/future/web/test/frame-processor.test.js @@ -0,0 +1,103 @@ +import { mapFrameToNotes, processFrameWithState, cleanupFrameProcessor } from '../core/frame-processor.js'; +import { structuredLog } from '../utils/logging.js'; +import { dispatchEvent } from '../core/dispatcher.js'; +import { settings } from '../core/state.js'; + +jest.mock('../utils/logging.js', () => ({ + structuredLog: jest.fn(), +})); +jest.mock('../core/dispatcher.js', () => ({ + dispatchEvent: jest.fn(), +})); +jest.mock('../core/state.js', () => ({ + settings: { + availableGrids: [{ id: 'hex-tonnetz' }], + gridType: 'hex-tonnetz', + dayNightMode: 'day', + resetStateOnError: true + } +})); +jest.mock('../synthesis-grids/hex-tonnetz.js', () => ({ + mapFrameToHexTonnetz: jest.fn(() => ({ + notes: [{ pitch: 440, intensity: 0.05, harmonics: [], pan: -1 }], + newFrameData: new Uint8ClampedArray(1000), + avgIntensity: 50 + })) +})); + +describe('frame-processor', () => { + beforeEach(() => { + jest.clearAllMocks(); + settings.resetStateOnError = true; + }); + + test('mapFrameToNotes handles invalid dimensions', async () => { + const result = await mapFrameToNotes(new Uint8ClampedArray(1000), 0, 0, null, null); + expect(result).toEqual({ + notes: [], + prevFrameDataLeft: null, + prevFrameDataRight: null, + avgIntensity: 0 + }); + expect(structuredLog).toHaveBeenCalledWith('ERROR', 'Invalid dimensions for frame processing', { width: 0, height: 0 }); + expect(dispatchEvent).toHaveBeenCalledWith('logError', { message: 'Invalid dimensions for frame processing: 0x0' }); + }); + + 
test('mapFrameToNotes handles invalid frameData', async () => { + const result = await mapFrameToNotes(null, 100, 100, null, null); + expect(result).toEqual({ + notes: [], + prevFrameDataLeft: null, + prevFrameDataRight: null, + avgIntensity: 0 + }); + expect(structuredLog).toHaveBeenCalledWith('ERROR', 'Invalid frameData for processing', { frameDataLength: 0 }); + }); + + test('mapFrameToNotes preserves state when resetStateOnError is false', async () => { + settings.resetStateOnError = false; + const prevLeft = new Uint8ClampedArray(1000); + const prevRight = new Uint8ClampedArray(1000); + const result = await mapFrameToNotes(null, 100, 100, prevLeft, prevRight); + expect(result).toEqual({ + notes: [], + prevFrameDataLeft: prevLeft, + prevFrameDataRight: prevRight, + avgIntensity: 0 + }); + }); + + test('mapFrameToNotes processes valid data', async () => { + const frameData = new Uint8ClampedArray(100 * 100 * 4); + const result = await mapFrameToNotes(frameData, 100, 100, null, null); + expect(result.notes).toHaveLength(2); // One from each side + expect(result.avgIntensity).toBe(50); // (50 + 50) / 2 + expect(result.prevFrameDataLeft).toBeInstanceOf(Uint8ClampedArray); + expect(result.prevFrameDataRight).toBeInstanceOf(Uint8ClampedArray); + }); + + test('processFrameWithState updates module state', async () => { + const frameData = new Uint8ClampedArray(100 * 100 * 4); + const result = await processFrameWithState(frameData, 100, 100); + expect(result.notes).toHaveLength(2); + expect(result.avgIntensity).toBe(50); + expect(result.prevFrameDataLeft).toBeInstanceOf(Uint8ClampedArray); + expect(result.prevFrameDataRight).toBeInstanceOf(Uint8ClampedArray); + }); + + test('cleanupFrameProcessor resets module state', async () => { + const result = await cleanupFrameProcessor(); + expect(result).toEqual({ prevFrameDataLeft: null, prevFrameDataRight: null }); + expect(structuredLog).toHaveBeenCalledWith('INFO', 'cleanupFrameProcessor: Resetting frame processor state'); + 
}); + + test('cleanupFrameProcessor handles errors', async () => { + structuredLog.mockImplementationOnce(() => { + throw new Error('Test error'); + }); + const result = await cleanupFrameProcessor(); + expect(result).toEqual({ prevFrameDataLeft: null, prevFrameDataRight: null }); + expect(structuredLog).toHaveBeenCalledWith('ERROR', 'cleanupFrameProcessor error', expect.any(Object)); + expect(dispatchEvent).toHaveBeenCalledWith('logError', expect.any(Object)); + }); +}); \ No newline at end of file diff --git a/future/web/test/rectangle-handlers.test.js b/future/web/test/rectangle-handlers.test.js deleted file mode 100644 index ea3d459a..00000000 --- a/future/web/test/rectangle-handlers.test.js +++ /dev/null @@ -1,44 +0,0 @@ -import { setupRectangleHandlers } from '../web/ui/rectangle-handlers.js'; -import { stopAudio } from '../web/audio-processor.js'; - -jest.mock('../web/audio-processor.js', () => ({ - initializeAudio: jest.fn(), - audioContext: { state: 'suspended', resume: jest.fn() }, - stopAudio: jest.fn() -})); - -describe('rectangle-handlers', () => { - beforeEach(() => { - document.body.innerHTML = ` - -
-
Daylight
-
- - -
-
Language
-
- - `; - }); - - test('binds rectangle button events', () => { - const dispatchEvent = jest.fn(); - setupRectangleHandlers({ dispatchEvent }); - expect(document.getElementById('settingsToggle').ontouchstart).toBeDefined(); - expect(document.getElementById('modeBtn').ontouchstart).toBeDefined(); - expect(document.getElementById('languageBtn').ontouchstart).toBeDefined(); - expect(document.getElementById('startStopBtn').ontouchstart).toBeDefined(); - }); - - test('stops audio when stopping stream', () => { - const dispatchEvent = jest.fn(); - setupRectangleHandlers({ dispatchEvent }); - const startStopBtn = document.getElementById('startStopBtn'); - const mockStream = { getTracks: () => [{ stop: jest.fn() }] }; - require('../web/state.js').settings.stream = mockStream; - startStopBtn.dispatchEvent(new Event('touchstart')); - expect(stopAudio).toHaveBeenCalled(); - }); -}); diff --git a/future/web/test/ui-settings.test.js b/future/web/test/ui-settings.test.js new file mode 100644 index 00000000..24a49117 --- /dev/null +++ b/future/web/test/ui-settings.test.js @@ -0,0 +1,37 @@ +// test/ui-settings.test.js +import { setupUISettings } from '../ui/ui-settings.js'; +import { settings } from '../state.js'; + +jest.mock('../state.js', () => ({ + settings: { isSettingsMode: false, stream: null, micStream: null }, +})); + +describe('ui-settings', () => { + beforeEach(() => { + document.body.innerHTML = ` +
+ + + + + + +
+ `; + }); + + test('binds button events', () => { + const dispatchEvent = jest.fn(); + setupUISettings({ dispatchEvent, DOM: document }); + expect(document.getElementById('button1').ontouchstart).toBeDefined(); + expect(document.getElementById('button2').ontouchstart).toBeDefined(); + }); + + test('toggles settings mode on button6', async () => { + const dispatchEvent = jest.fn(); + setupUISettings({ dispatchEvent, DOM: document }); + await document.getElementById('button6').dispatchEvent(new Event('touchstart')); + expect(settings.isSettingsMode).toBe(true); + expect(dispatchEvent).toHaveBeenCalledWith('updateUI', expect.any(Object)); + }); +}); diff --git a/future/web/test/utils.test.js b/future/web/test/utils.test.js deleted file mode 100644 index e88de46b..00000000 --- a/future/web/test/utils.test.js +++ /dev/null @@ -1,25 +0,0 @@ -import { speak } from '../ui/utils.js'; -import { settings } from '../state.js'; - -jest.mock('../state.js', () => ({ - settings: { language: 'en-US' } -})); - -global.fetch = jest.fn(() => - Promise.resolve({ - ok: true, - json: () => Promise.resolve({ testMessage: 'Test message' }) - }) -); - -global.SpeechSynthesisUtterance = jest.fn(); -global.speechSynthesis = { speak: jest.fn() }; - -describe('speak', () => { - it('speaks a message with correct language', async () => { - await speak('testMessage'); - expect(global.fetch).toHaveBeenCalledWith('./languages/en-US.json'); - expect(SpeechSynthesisUtterance).toHaveBeenCalledWith('Test message'); - expect(speechSynthesis.speak).toHaveBeenCalled(); - }); -}); diff --git a/future/web/test/video-capture.test.js b/future/web/test/video-capture.test.js new file mode 100644 index 00000000..8b4e130c --- /dev/null +++ b/future/web/test/video-capture.test.js @@ -0,0 +1,54 @@ +// File: web/test/video-capture.test.js +import { setupVideoCapture, cleanupVideoCapture } from '../ui/video-capture.js'; +import { structuredLog } from '../utils/logging.js'; +import { getDOM } from 
'../core/context.js'; +import { dispatchEvent } from '../core/dispatcher.js'; + +jest.mock('../utils/logging.js', () => ({ + structuredLog: jest.fn() +})); +jest.mock('../core/context.js', () => ({ + getDOM: jest.fn() +})); +jest.mock('../core/dispatcher.js', () => ({ + dispatchEvent: jest.fn() +})); + +describe('video-capture', () => { + test('setupVideoCapture handles missing DOM elements', async () => { + const DOM = { videoFeed: null, frameCanvas: null }; + const result = await setupVideoCapture(DOM); + expect(result).toBe(false); + expect(structuredLog).toHaveBeenCalledWith('ERROR', 'Missing videoFeed or frameCanvas in setupVideoCapture'); + expect(dispatchEvent).toHaveBeenCalledWith('logError', { message: 'Missing videoFeed or frameCanvas in setupVideoCapture' }); + }); + + test('setupVideoCapture initializes video feed and canvas', async () => { + const DOM = { + videoFeed: { setAttribute: jest.fn() }, + frameCanvas: { style: { display: '' }, setAttribute: jest.fn() } + }; + const result = await setupVideoCapture(DOM); + expect(result).toBe(true); + expect(DOM.videoFeed.setAttribute).toHaveBeenCalledWith('autoplay', ''); + expect(DOM.videoFeed.setAttribute).toHaveBeenCalledWith('muted', ''); + expect(DOM.videoFeed.setAttribute).toHaveBeenCalledWith('playsinline', ''); + expect(DOM.frameCanvas.style.display).toBe('none'); + expect(DOM.frameCanvas.setAttribute).toHaveBeenCalledWith('aria-hidden', 'true'); + expect(structuredLog).toHaveBeenCalledWith('INFO', 'setupVideoCapture: Video feed and canvas initialized'); + }); + + test('cleanupVideoCapture clears video feed and canvas', async () => { + const stopTrack = jest.fn(); // keep a handle on the mock; srcObject is nulled by cleanup + const DOM = { + videoFeed: { srcObject: { getTracks: () => [{ stop: stopTrack }] } }, + frameCanvas: { width: 0, height: 0 } + }; + getDOM.mockReturnValue(DOM); + await cleanupVideoCapture(); + expect(stopTrack).toHaveBeenCalled(); + expect(DOM.videoFeed.srcObject).toBe(null); + expect(DOM.frameCanvas.width).toBe(0); + 
expect(DOM.frameCanvas.height).toBe(0); + expect(structuredLog).toHaveBeenCalledWith('INFO', 'cleanupVideoCapture: Video capture cleaned up'); + }); +}); \ No newline at end of file diff --git a/future/web/ui/cleanup-manager.js b/future/web/ui/cleanup-manager.js new file mode 100644 index 00000000..d45f1501 --- /dev/null +++ b/future/web/ui/cleanup-manager.js @@ -0,0 +1,37 @@ +import { settings, setStream, setAudioInterval } from '../core/state.js'; +import { cleanupAudio } from '../audio/audio-processor.js'; + +let isAudioInitialized = false; +let audioContext = null; + +export function setupCleanupManager() { + window.addEventListener("beforeunload", async () => { + if (settings.stream) { + settings.stream.getTracks().forEach((track) => track.stop()); + setStream(null); + } + if (settings.micStream) { + settings.micStream.getTracks().forEach((track) => track.stop()); + settings.micStream = null; + } + if (settings.audioTimerId) { + clearInterval(settings.audioTimerId); + setAudioInterval(null); + } + if (isAudioInitialized && audioContext) { + await cleanupAudio(); + await audioContext.close(); + isAudioInitialized = false; + audioContext = null; + } + console.log("cleanupManager: Cleanup completed"); + }); + + console.log("setupCleanupManager: Setup complete"); +} + +// Expose for audio-controls.js to update audioContext state +export function setAudioContextState(context, initialized) { + audioContext = context; + isAudioInitialized = initialized; +} \ No newline at end of file diff --git a/future/web/ui/dom.diagram.md b/future/web/ui/dom.diagram.md deleted file mode 100644 index 7c9d11c0..00000000 --- a/future/web/ui/dom.diagram.md +++ /dev/null @@ -1,11 +0,0 @@ -```mermaid -graph TD - A[initDOM] --> B[Check document.readyState] - B -->|complete/interactive| C[assignDOMElements] - B -->|loading| D[Wait for DOMContentLoaded] - D --> C - C --> E[Map IDs to DOM object] - E --> F[Check for missing elements] - F -->|All present| G[Resolve DOM] - F -->|Missing 
elements| H[Reject with error] -``` diff --git a/future/web/ui/dom.js b/future/web/ui/dom.js index 55bc9118..c4a920d3 100644 --- a/future/web/ui/dom.js +++ b/future/web/ui/dom.js @@ -1,14 +1,31 @@ +import { getText } from '../utils/utils.js'; +import { structuredLog } from '../utils/logging.js'; + +function assignDOMElements() { + DOM.splashScreen = document.getElementById('splashScreen'); + DOM.powerOn = document.getElementById('powerOn'); + DOM.mainContainer = document.getElementById('mainContainer'); + DOM.button1 = document.getElementById('button1'); + DOM.button2 = document.getElementById('button2'); + DOM.button3 = document.getElementById('button3'); + DOM.button4 = document.getElementById('button4'); + DOM.button5 = document.getElementById('button5'); + DOM.button6 = document.getElementById('button6'); + DOM.emailDebug = document.getElementById('emailDebug'); + DOM.videoFeed = document.getElementById('videoFeed'); +} + let DOM = { - settingsToggle: null, - audioToggle: null, - modeBtn: null, - languageBtn: null, - startStopBtn: null, + splashScreen: null, + powerOn: null, + mainContainer: null, + button1: null, + button2: null, + button3: null, + button4: null, + button5: null, + button6: null, videoFeed: null, - imageCanvas: null, - loadingIndicator: null, - debug: null, - closeDebug: null, emailDebug: null }; @@ -17,11 +33,22 @@ export function initDOM() { const checkDOMReady = () => { if (document.readyState === 'complete' || document.readyState === 'interactive') { assignDOMElements(); - const missingElements = Object.entries(DOM).filter(([_, value]) => !value); - if (missingElements.length > 0) { - const missingKeys = missingElements.map(([key]) => key).join(', '); - console.error(`Critical DOM elements missing: ${missingKeys}. 
Check index.html IDs.`); - reject(new Error(`Missing DOM elements: ${missingKeys}`)); + // Enhanced validation + const missing = []; + const available = []; + Object.entries(DOM).forEach(([key, value]) => { + if (!value) { + missing.push(key); + } else { + available.push(key); + } + }); + + if (missing.length > 0) { + const errorMsg = `Missing DOM elements: ${missing.join(', ')}. Available: ${available.join(', ')}`; + console.error(errorMsg); + structuredLog('ERROR', 'DOM validation failed', { missing, available }); + reject(new Error(errorMsg)); } else { resolve(DOM); } @@ -34,18 +61,4 @@ export function initDOM() { document.addEventListener('DOMContentLoaded', checkDOMReady, { once: true }); } }); -} - -function assignDOMElements() { - DOM.settingsToggle = document.getElementById('settingsToggle'); - DOM.audioToggle = document.getElementById('audioToggle'); - DOM.modeBtn = document.getElementById('modeBtn'); - DOM.languageBtn = document.getElementById('languageBtn'); - DOM.startStopBtn = document.getElementById('startStopBtn'); - DOM.videoFeed = document.getElementById('videoFeed'); - DOM.imageCanvas = document.getElementById('imageCanvas'); - DOM.loadingIndicator = document.getElementById('loadingIndicator'); - DOM.debug = document.getElementById('debug'); - DOM.closeDebug = document.getElementById('closeDebug'); - DOM.emailDebug = document.getElementById('emailDebug'); -} +} \ No newline at end of file diff --git a/future/web/ui/event-dispatcher.js b/future/web/ui/event-dispatcher.js deleted file mode 100644 index c2d8a367..00000000 --- a/future/web/ui/event-dispatcher.js +++ /dev/null @@ -1,121 +0,0 @@ -export let dispatchEvent = null; - -import { setAudioInterval, settings } from '../state.js'; -import { processFrame } from './frame-processor.js'; -import { speak } from './utils.js'; - -export function createEventDispatcher(DOM) { - console.log('createEventDispatcher: Initializing event dispatcher'); - if (!DOM) { - console.error('DOM is undefined in 
createEventDispatcher'); - return { dispatchEvent: () => console.error('dispatchEvent not initialized due to undefined DOM') }; - } - - const handlers = { - updateUI: async ({ settingsMode, streamActive }) => { - if (!DOM.settingsToggle || !DOM.modeBtn || !DOM.languageBtn || !DOM.startStopBtn) { - console.error('Missing critical DOM elements for UI update'); - dispatchEvent('logError', { message: 'Missing critical DOM elements for UI update' }); - return; - } - - const state = { state: settingsMode ? 'on' : 'off' }; - await speak('settingsToggle', state); - setTextAndAriaLabel( - DOM.settingsToggle, - settingsMode ? 'Exit Settings' : 'Toggle Settings', - settingsMode ? 'Exit settings mode' : 'Toggle settings mode' - ); - - state.state = settingsMode ? settings.gridType : settings.dayNightMode; - await speak('modeBtn', { mode: state.state }); - setTextAndAriaLabel( - DOM.modeBtn, - settingsMode ? (state.state === 'hex-tonnetz' ? 'Hex Tonnetz' : 'Circle of Fifths') : (state.state === 'day' ? 'Daylight' : 'Night'), - settingsMode ? `Select grid: ${state.state}` : `Toggle ${state.state} mode` - ); - - state.state = settingsMode ? settings.synthesisEngine : settings.language || 'en-US'; - await speak('languageSelect', { lang: state.state }); - setTextAndAriaLabel( - DOM.languageBtn, - settingsMode ? (state.state === 'sine-wave' ? 'Sine Wave' : 'FM Synthesis') : (state.state === 'en-US' ? 'English' : 'Spanish'), - settingsMode ? `Select synthesis: ${state.state}` : `Cycle to ${state.state}` - ); - - if (DOM.startStopBtn) { - const startStopState = streamActive ? 'stopped' : 'started'; - await speak('startStop', { state: startStopState }); - DOM.startStopBtn.textContent = startStopState === 'started' ? 
'Start' : 'Stop'; - } - }, - processFrame: () => { - try { - processFrame(); - } catch (err) { - console.error('Process frame error:', err.message); - dispatchEvent('logError', { message: `Process frame error: ${err.message}` }); - } - }, - updateFrameInterval: ({ interval }) => { - if (settings.audioInterval) { - clearInterval(settings.audioInterval); - setAudioInterval(setInterval(() => { - try { - processFrame(); - } catch (err) { - console.error('Process frame error:', err.message); - dispatchEvent('logError', { message: `Process frame error: ${err.message}` }); - } - }, interval)); - } - }, - toggleDebug: ({ show, message }) => { - console.log('toggleDebug handler called:', { show, message }); - if (DOM.debug) { - DOM.debug.style.display = show ? 'block' : 'none'; - if (message && show) { - const pre = DOM.debug.querySelector('pre'); - if (pre) { - pre.textContent += `${new Date().toISOString()} - ${message}\n`; - pre.scrollTop = pre.scrollHeight; - } else { - console.error('Debug pre element not found'); - } - } - } else { - console.error('Debug element not found'); - } - }, - logError: ({ message }) => { - console.error('Logging error:', message); - handlers.toggleDebug({ show: true, message }); - }, - }; - - dispatchEvent = (eventName, payload = {}) => { - if (handlers[eventName]) { - try { - handlers[eventName](payload); - } catch (err) { - console.error(`Error in handler ${eventName}:`, err.message); - handlers.logError({ message: `Handler ${eventName} error: ${err.message}` }); - } - } else { - console.error(`No handler found for event: ${eventName}`); - handlers.logError({ message: `No handler for event: ${eventName}` }); - } - }; - - console.log('createEventDispatcher: Dispatcher initialized'); - return { dispatchEvent }; -} - -function setTextAndAriaLabel(element, text, ariaLabel) { - if (element) { - element.textContent = text; - element.setAttribute('aria-label', ariaLabel); - } else { - console.warn(`Element not found for text update: ${text}`); - 
} -} diff --git a/future/web/ui/frame-processor.js b/future/web/ui/frame-processor.js deleted file mode 100644 index e5cc8ebf..00000000 --- a/future/web/ui/frame-processor.js +++ /dev/null @@ -1,55 +0,0 @@ - /** - * Processes video frames for audio synthesis, converting video input to grayscale data. - * Placed in ui/ due to interaction with videoFeed and imageCanvas elements. - */ -import { playAudio } from '../audio-processor.js'; -import { skipFrame, setSkipFrame, prevFrameDataLeft, prevFrameDataRight, setPrevFrameDataLeft, setPrevFrameDataRight, frameCount, lastTime, settings } from '../state.js'; - - -/** - * Processes a single video frame, converting it to grayscale and passing to audio synthesis. - * @param {HTMLVideoElement} videoFeed - The video element providing frames. - * @param {HTMLCanvasElement} canvas - The canvas for drawing frames. - */ -export function processFrame(videoFeed, canvas, DOM) { - if (skipFrame) { - setSkipFrame(false); - return; - } - if (!videoFeed || !canvas || !DOM) { - console.error('Missing required parameters in processFrame:', { videoFeed, canvas, DOM }); - if (DOM && window.dispatchEvent) { - window.dispatchEvent('logError', { message: 'Missing videoFeed, canvas, or DOM in processFrame' }); - } - return; - } - const currentTime = performance.now(); - const deltaTime = currentTime - lastTime; - if (deltaTime < settings.updateInterval) { - setSkipFrame(true); - return; - } - try { - const ctx = canvas.getContext('2d', { willReadFrequently: true }); - ctx.drawImage(videoFeed, 0, 0, canvas.width, canvas.height); - const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height); - const grayData = new Uint8ClampedArray(canvas.width * canvas.height); - for (let i = 0; i < imageData.data.length; i += 4) { - grayData[i / 4] = (imageData.data[i] + imageData.data[i + 1] + imageData.data[i + 2]) / 3; - } - const { prevFrameDataLeft: newLeft, prevFrameDataRight: newRight } = playAudio( - grayData, canvas.width, canvas.height, 
prevFrameDataLeft, prevFrameDataRight - ); - setPrevFrameDataLeft(newLeft); - setPrevFrameDataRight(newRight); - frameCount++; - lastTime = currentTime; - } catch (err) { - console.error('Frame processing error:', err); - // Use dispatchEvent from rectangle-handlers.js if available - if (window.dispatchEvent) { - window.dispatchEvent('logError', { message: `Frame processing error: ${err.message}` }); - } - setSkipFrame(true); - } -} diff --git a/future/web/ui/rectangle-handlers.diagram.md b/future/web/ui/rectangle-handlers.diagram.md deleted file mode 100644 index c6c2db67..00000000 --- a/future/web/ui/rectangle-handlers.diagram.md +++ /dev/null @@ -1,74 +0,0 @@ -v0.9.8.9 Diagram as per Copilop chat -```mermaid -flowchart TD - A[setupRectangleHandlers] --> B{Check DOM elements} - B -- missing --> C[Log error & dispatchEvent'logError'] - B -- present --> D[Bind audioToggle.addEventListener'click'] - D --> E[audioToggleHandler] - - E --> F{isAudioContextInitialized} - F -- No --> G[Create AudioContext] - G --> H[initializeAudioaudioContext] - H --> I{settings.stream exists?} - I -- Yes --> J[setStreamsettings.stream\nsetAudioInterval\nprocessFrame] - I -- No --> K[Skip stream init] - J --> L[Update audioToggle UI to Off] - L --> M[speak audioToggle, state: on] - M --> N[dispatchEvent'updateUI'] - K --> L - - F -- Yes --> O{audioContext.state} - O -- running --> P[audioContext.suspend] - P --> Q[Update audioToggle UI to On] - Q --> R[speak audioToggle, state: off] - R --> S[dispatchEvent'updateUI'] - O -- suspended --> T[audioContext.resume] - T --> U[Update audioToggle UI to Off] - U --> V[speak audioToggle, state: on] - V --> S - - E -->|catch| W[Log error, dispatchEvent'logError', speakerror] - W --> X{!isAudioContextInitialized && audioContext} - X -- Yes --> Y[audioContext.close, audioContext = null] - X -- No --> Z[Retry up to 3 times] - Z --> AA[If success: Update UI to Off, speak on] - Z --> AB[If fail: Update to On, speakerror] - - D --> AC{!settings.stream} 
- AC -- Yes --> AD[getUserMedia video: true, audio: false] - AD --> AE[DOM.videoFeed.srcObject = stream, setStream stream] - AE --> E - AC -- No --> E - - subgraph Cleanup [On window.beforeunload] - AF[Stop stream tracks, setStream null] - AG[clearInterval settings.audioInterval] - AH[cleanupAudio & audioContext.close] - AF --> AG --> AH - end -``` - -v0.9.8.8 Diagram as per chat https://x.com/i/grok?conversation=1929542935810314370 Jun 7, 2025 - -```mermaid -graph TD - A[setupRectangleHandlers] --> B[DOM Event Listeners] - B --> C[touchstart: audioToggle] - B --> D[touchstart: startStopBtn] - B --> E[touchstart: languageBtn] - B --> F[touchstart: closeDebug] - B --> G[touchstart: emailDebug] - C --> H[create AudioContext] - H --> I[resume AudioContext] - I --> J[setAudioContext] - J --> K[ensureAudioContext] - K --> L[initializeAudio] - D --> M[start/stop Stream] - M --> N[processFrameLoop] - E --> O[switch Language] - F --> P[toggleDebug] - G --> Q[email Log] - subgraph Visibility - R[visibilitychange] --> S[suspend/resume] - end -``` diff --git a/future/web/ui/rectangle-handlers.js b/future/web/ui/rectangle-handlers.js deleted file mode 100644 index 6a07c523..00000000 --- a/future/web/ui/rectangle-handlers.js +++ /dev/null @@ -1,134 +0,0 @@ -import { getDOM, getDispatchEvent } from '../context.js'; -import { settings, setStream, setAudioInterval } from '../state.js'; -import { initializeAudio, cleanupAudio } from '../audio-processor.js'; -import { speak } from './utils.js'; - -let isAudioContextInitialized = false; -let audioContext = null; - -export function setupRectangleHandlers() { - const DOM = getDOM(); - const dispatchEvent = getDispatchEvent(); - - if (!DOM || !DOM.audioToggle || !DOM.videoFeed) { - console.error('Critical DOM elements missing in rectangle-handlers'); - dispatchEvent('logError', { message: 'Critical DOM elements missing in rectangle-handlers' }); - return; - } - - const audioToggle = DOM.audioToggle; - - async function 
audioToggleHandler() { - try { - if (!isAudioContextInitialized) { - audioContext = new (window.AudioContext || window.webkitAudioContext)(); - if (!audioContext) { - throw new Error('AudioContext creation failed'); - } - - await initializeAudio(audioContext); - isAudioContextInitialized = true; - - if (settings.stream) { - await setStream(settings.stream); - setAudioInterval(setInterval(() => { - dispatchEvent('processFrame'); - }, settings.updateInterval)); - } - - audioToggle.textContent = 'Turn Audio Off'; - audioToggle.setAttribute('aria-label', 'Turn audio off'); - await speak('audioToggle', { state: 'on' }); - dispatchEvent('updateUI', { settingsMode: false, streamActive: true }); - } else { - if (audioContext.state === 'running') { - await audioContext.suspend(); - audioToggle.textContent = 'Turn Audio On'; - audioToggle.setAttribute('aria-label', 'Turn audio on'); - await speak('audioToggle', { state: 'off' }); - } else if (audioContext.state === 'suspended') { - await audioContext.resume(); - audioToggle.textContent = 'Turn Audio Off'; - audioToggle.setAttribute('aria-label', 'Turn audio off'); - await speak('audioToggle', { state: 'on' }); - } - dispatchEvent('updateUI', { settingsMode: false, streamActive: audioContext.state === 'running' }); - } - } catch (err) { - console.error('Audio toggle error:', err.message); - dispatchEvent('logError', { message: `Audio toggle error: ${err.message}` }); - await speak('audioToggle', { state: 'error', message: 'Failed to toggle audio' }); - - if (!isAudioContextInitialized && audioContext) { - try { - await audioContext.close(); - } catch (closeErr) { - console.error('Error closing AudioContext:', closeErr.message); - } - audioContext = null; - } - - for (let i = 0; i < 3; i++) { - await new Promise(resolve => setTimeout(resolve, 1000)); - try { - audioContext = new (window.AudioContext || window.webkitAudioContext)(); - await initializeAudio(audioContext); - isAudioContextInitialized = true; - 
audioToggle.textContent = 'Turn Audio Off'; - audioToggle.setAttribute('aria-label', 'Turn audio off'); - await speak('audioToggle', { state: 'on' }); - dispatchEvent('updateUI', { settingsMode: false, streamActive: true }); - break; - } catch (retryErr) { - console.error(`Retry ${i + 1} failed:`, retryErr.message); - dispatchEvent('logError', { message: `Audio retry ${i + 1} failed: ${retryErr.message}` }); - } - } - - if (!isAudioContextInitialized) { - audioToggle.textContent = 'Turn Audio On'; - audioToggle.setAttribute('aria-label', 'Turn audio on'); - await speak('audioToggle', { state: 'error', message: 'Audio initialization failed' }); - } - } - } - - audioToggle.addEventListener('click', async () => { - if (!isAudioContextInitialized && !navigator.mediaDevices.getUserMedia) { - console.error('getUserMedia not supported'); - dispatchEvent('logError', { message: 'getUserMedia not supported' }); - await speak('audioToggle', { state: 'error', message: 'Media devices not supported' }); - return; - } - - try { - if (!settings.stream) { - const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false }); - DOM.videoFeed.srcObject = stream; - setStream(stream); - } - await audioToggleHandler(); - } catch (err) { - console.error('Media access error:', err.message); - dispatchEvent('logError', { message: `Media access error: ${err.message}` }); - await speak('audioToggle', { state: 'error', message: 'Failed to access media' }); - } - }); - - window.addEventListener('beforeunload', async () => { - if (settings.stream) { - settings.stream.getTracks().forEach(track => track.stop()); - setStream(null); - } - if (settings.audioInterval) { - clearInterval(settings.audioInterval); - setAudioInterval(null); - } - if (isAudioContextInitialized && audioContext) { - await cleanupAudio(); - await audioContext.close(); - isAudioContextInitialized = false; - audioContext = null; - } - }); -} diff --git a/future/web/ui/settings-handlers.js 
b/future/web/ui/settings-handlers.js deleted file mode 100644 index 85f74457..00000000 --- a/future/web/ui/settings-handlers.js +++ /dev/null @@ -1,74 +0,0 @@ -import { settings } from '../state.js'; -import { speak } from './utils.js'; - -/** - * Sets up handlers for settings-related actions (e.g., FPS selection). - * Currently minimal, as settings are handled by rectangle buttons. - * @param {Object} options - Configuration options. - * @param {Function} options.dispatchEvent - Event dispatcher function. - * @param {Object} options.DOM - DOM elements object. - */ -export function setupSettingsHandlers({ dispatchEvent, DOM }) { - console.log('setupSettingsHandlers: Starting setup'); - - if (!DOM) { - console.error('DOM is undefined in setupSettingsHandlers'); - return; - } - - function tryVibrate(event) { - if (event.cancelable && navigator.vibrate) { - try { - navigator.vibrate(50); // Line 43: Handle vibration safely - } catch (err) { - console.warn('Vibration blocked:', err.message); - } - } - } - - if (DOM.settingsToggle) { - DOM.settingsToggle.addEventListener('touchstart', async (event) => { - if (event.cancelable) event.preventDefault(); - console.log('settingsToggle touched'); - tryVibrate(event); - try { - settings.isSettingsMode = !settings.isSettingsMode; - dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream }); - dispatchEvent('toggleDebug', { show: settings.isSettingsMode }); - } catch (err) { - console.error('settingsToggle error:', err.message); - dispatchEvent('logError', { message: `settingsToggle error: ${err.message}` }); - await speak('settingsError'); - } - }); - console.log('settingsToggle event listener attached'); - } else { - console.error('settingsToggle element not found'); - } - - if (DOM.modeBtn) { - DOM.modeBtn.addEventListener('touchstart', async (event) => { - if (event.cancelable) event.preventDefault(); - console.log('modeBtn touched'); - tryVibrate(event); - try { - if 
(settings.isSettingsMode) { - settings.gridType = settings.gridType === 'circle-of-fifths' ? 'hex-tonnetz' : 'circle-of-fifths'; - } else { - settings.dayNightMode = settings.dayNightMode === 'day' ? 'night' : 'day'; - document.body.className = settings.dayNightMode; - } - dispatchEvent('updateUI', { settingsMode: settings.isSettingsMode, streamActive: !!settings.stream }); - } catch (err) { - console.error('modeBtn error:', err.message); - dispatchEvent('logError', { message: `modeBtn error: ${err.message}` }); - await speak('modeError'); - } - }); - console.log('modeBtn event listener attached'); - } else { - console.error('modeBtn element not found'); - } - - console.log('setupSettingsHandlers: Setup complete'); -} diff --git a/future/web/ui/ui-controller.js b/future/web/ui/ui-controller.js new file mode 100644 index 00000000..3cae5234 --- /dev/null +++ b/future/web/ui/ui-controller.js @@ -0,0 +1,19 @@ +import { setupAudioControls } from '../audio/audio-controls.js'; +import { setupUISettings } from './ui-settings.js'; +import { setupCleanupManager } from './cleanup-manager.js'; +import { setupVideoCapture } from './video-capture.js'; +// Import the settings-manager modules once they exist +// import { setupSaveSettings, setupLoadSettings } from './settings-manager.js'; + +export function setupUIController({ dispatchEvent, DOM }) { + console.log('setupUIController: Starting setup'); + setupAudioControls({ dispatchEvent, DOM }); + setupUISettings({ dispatchEvent, DOM }); + setupCleanupManager(); + + // Future initialization for saving and loading settings + // setupSaveSettings({ dispatchEvent, DOM }); + // setupLoadSettings({ dispatchEvent, DOM }); + + console.log('setupUIController: Setup complete'); +} \ No newline at end of file diff --git a/future/web/ui/ui-settings.js b/future/web/ui/ui-settings.js new file mode 100644 index 00000000..0025bdb5 --- /dev/null +++ b/future/web/ui/ui-settings.js @@ -0,0 +1,195 @@ +// File: web/ui/ui-settings.js +import { 
settings } from '../core/state.js'; +import { getText, tryVibrate, hapticCount } from '../utils/utils.js'; +import { structuredLog } from '../utils/logging.js'; + +export function setupUISettings({ dispatchEvent, DOM }) { + + // Helper: wires pointerdown (with a touchstart fallback) to a button's normal/settings actions + function wireButton(el, id, { normal, settings: settingsAction }, { + normalError, settingsError, params = () => ({}) + }) { + el.addEventListener('pointerdown', async (event) => { + if (event.cancelable) event.preventDefault(); + console.log(`${id} event`, { settingsMode: settings.isSettingsMode }); + tryVibrate(event); + hapticCount(Number(id.replace('button', ''))); + try { + if (!settings.isSettingsMode) { + await normal(); + } else { + await settingsAction(); + } + dispatchEvent('updateUI', { + settingsMode: settings.isSettingsMode, + streamActive: !!settings.stream, + micActive: !!settings.micStream, + }); + } catch (err) { + console.error(`${id} error:`, err.message); + dispatchEvent('logError', { message: `${id} error: ${err.message}` }); + const key = !settings.isSettingsMode ? normalError : settingsError; + await getText(key, params()); + } + }); + // Fallback only for browsers without Pointer Events; otherwise pointerdown above + // already handled the press and running both would double-fire the action. + el.addEventListener('touchstart', async (event) => { + if (window.PointerEvent) return; + if (event.cancelable) event.preventDefault(); + console.log(`${id} touched`); + tryVibrate(event); + try { + if (!settings.isSettingsMode) { + await normal(); + } else { + await settingsAction(); + } + dispatchEvent('updateUI', { + settingsMode: settings.isSettingsMode, + streamActive: !!settings.stream, + micActive: !!settings.micStream, + }); + } catch (err) { + console.error(`${id} error:`, err.message); + dispatchEvent('logError', { message: `${id} error: ${err.message}` }); + await getText(`${id}.tts.${!settings.isSettingsMode ? 
normalError.split('.').pop() : settingsError.split('.').pop()}`, params()); + } + }); + console.log(`${id} event listeners attached`); + } + + // Button 1 + wireButton(DOM.button1, 'button1', + { + normal: () => dispatchEvent('startStop', { settingsMode: settings.isSettingsMode }), + settings: () => dispatchEvent('startStop', { settingsMode: settings.isSettingsMode }) + }, + { + normalError: 'button1.tts.startStop', + settingsError: 'button1.tts.startStop', + params: () => ({ state: 'error' }) + } + ); + + // Button 2 + wireButton(DOM.button2, 'button2', + { + normal: () => dispatchEvent('toggleAudio', { settingsMode: settings.isSettingsMode }), + settings: () => dispatchEvent('toggleAudio', { settingsMode: settings.isSettingsMode }) + }, + { + normalError: 'button2.tts.micError', + settingsError: 'button2.tts.micError' + } + ); + + // Button 3: Language Toggle (Normal) or Input Selector (Settings) + wireButton(DOM.button3, 'button3', + { + normal: () => dispatchEvent('toggleLanguage'), + settings: () => dispatchEvent('toggleVideoSource') + }, + { + normalError: 'button3.tts.languageError', + settingsError: 'button3.tts.videoSourceError' + } + ); + + // Button 4 + wireButton(DOM.button4, 'button4', + { + normal: async () => { + if (settings.autoFPS) { + settings.autoFPS = false; + settings.updateInterval = 1000 / 20; + } else { + const fpsOptions = [20, 30, 60]; + // Round: 1000/interval is floating-point-imprecise and indexOf needs an exact match + const currentFps = Math.round(1000 / settings.updateInterval); + const idx = fpsOptions.indexOf(currentFps); + settings.autoFPS = idx === fpsOptions.length - 1; + if (!settings.autoFPS) { + settings.updateInterval = 1000 / fpsOptions[idx + 1]; + } + } + dispatchEvent('updateFrameInterval', { interval: settings.updateInterval }); + await getText('button4.tts.fpsBtn', { + fps: settings.autoFPS ? 'auto' : Math.round(1000 / settings.updateInterval) + }); + }, + settings: () => dispatchEvent('saveSettings', { settingsMode: true }) + }, + { + normalError: 'button4.tts.fpsError', + settingsError: 'button4.tts.saveError' + } + ); + + // Button 5 + wireButton(DOM.button5, 'button5', + { + normal: async () => { + dispatchEvent('emailDebug'); + await getText('button5.tts.emailDebug'); + }, + settings: () => dispatchEvent('loadSettings', { settingsMode: true }) + }, + { + normalError: 'button5.tts.emailDebug', + settingsError: 'button5.tts.loadError', + params: () => ({ state: 'error' }) + } + ); + + // Button 6 + wireButton(DOM.button6, 'button6', + { + normal: async () => { + settings.isSettingsMode = !settings.isSettingsMode; + dispatchEvent('toggleDebug', { show: settings.isSettingsMode }); + await getText('button6.tts.settingsToggle', { + state: settings.isSettingsMode ? 
'on' : 'off' + }); + }, + settings: async () => { + settings.isSettingsMode = !settings.isSettingsMode; + dispatchEvent('toggleDebug', { show: settings.isSettingsMode }); + await getText('button6.tts.settingsToggle', { + state: settings.isSettingsMode ? 'on' : 'off' + }); + } + }, + { + normalError: 'button6.tts.settingsError', + settingsError: 'button6.tts.settingsError' + } + ); + + console.log('setupUISettings: Setup complete'); +} \ No newline at end of file diff --git a/future/web/ui/utils.js b/future/web/ui/utils.js deleted file mode 100644 index c04b032c..00000000 --- a/future/web/ui/utils.js +++ /dev/null @@ -1,84 +0,0 @@ -import { settings } from '../state.js'; -import { getDispatchEvent } from '../context.js'; - -let translationCache = {}; - -async function loadTranslations(lang) { - if (translationCache[lang]) { - console.log(`Using cached translations for ${lang}`); - return translationCache[lang]; - } - - try { - console.log(`Fetching translations for ${lang}`); - const response = await fetch(`../languages/${lang}.json`); - if (!response.ok) { - throw new Error(`Failed to load ${lang} translations: ${response.status}`); - } - const translations = await response.json(); - translationCache[lang] = translations; - console.log(`Translations loaded for ${lang}`); - return translations; - } catch (error) { - console.error('Translation load failed:', error.message); - const dispatchEvent = getDispatchEvent(); - dispatchEvent('logError', { message: `Translation load failed for ${lang}: ${error.message}` }); - return {}; - } -} - -export async function speak(elementId, state = {}) { - const lang = settings.language || 'en-US'; - const translations = await loadTranslations(lang); - let message = translations[elementId] || elementId; - - if (!message) { - message = elementId; - console.warn(`No translation found for ${elementId} in ${lang}`); - } - - if (message) { - let finalMessage = message; - for (const [key, value] of Object.entries(state)) { - if (key === 
'state') { - if (elementId === 'emailDebug') { - finalMessage = finalMessage.replaceAll(`{${key}}`, value === 'sent' ? (lang === 'en-US' ? 'sent' : 'enviado') : value); - } else if (elementId === 'settingsToggle') { - finalMessage = finalMessage.replaceAll(`{${key}}`, value === 'on' ? (lang === 'en-US' ? 'enabled' : 'activadas') : (lang === 'en-US' ? 'disabled' : 'desactivadas')); - } else if (elementId === 'startStop') { - finalMessage = finalMessage.replaceAll(`{${key}}`, value === 'started' ? (lang === 'en-US' ? 'started' : 'iniciada') : (lang === 'en-US' ? 'stopped' : 'detenida')); - } else { - finalMessage = finalMessage.replaceAll(`{${key}}`, value); - } - } else if (key === 'mode') { - finalMessage = finalMessage.replaceAll(`{${key}}`, value === 'day' ? (lang === 'en-US' ? 'day' : 'día') : (lang === 'en-US' ? 'night' : 'noche')); - } else if (key === 'grid') { - finalMessage = finalMessage.replaceAll(`{${key}}`, value === 'hex-tonnetz' ? (lang === 'en-US' ? 'hexagonal tonnetz' : 'tonnetz hexagonal') : (lang === 'en-US' ? 'circle of fifths' : 'círculo de quintas')); - } else if (key === 'engine') { - finalMessage = finalMessage.replaceAll(`{${key}}`, value === 'sine-wave' ? (lang === 'en-US' ? 'sine wave' : 'onda sinusoidal') : (lang === 'en-US' ? 'FM synthesis' : 'síntesis FM')); - } else if (key === 'lang') { - finalMessage = finalMessage.replaceAll(`{${key}}`, value === 'en-US' ? (lang === 'en-US' ? 'English' : 'inglés') : (lang === 'en-US' ? 
'Spanish' : 'español')); - } else { - finalMessage = finalMessage.replaceAll(`{${key}}`, value); - } - } - console.log(`Speaking: ${finalMessage} in ${lang}`); - try { - if ('speechSynthesis' in window) { - const utterance = new SpeechSynthesisUtterance(finalMessage); - utterance.lang = lang; - utterance.volume = 1.0; - utterance.rate = 1.0; - window.speechSynthesis.speak(utterance); - } else { - console.warn('Speech synthesis not supported'); - const dispatchEvent = getDispatchEvent(); - dispatchEvent('logError', { message: 'Speech synthesis not supported' }); - } - } catch (error) { - console.error('Speech synthesis error:', error.message); - const dispatchEvent = getDispatchEvent(); - dispatchEvent('logError', { message: `Speech synthesis error: ${error.message}` }); - } - } -} diff --git a/future/web/ui/video-capture.js b/future/web/ui/video-capture.js new file mode 100644 index 00000000..d3f3630b --- /dev/null +++ b/future/web/ui/video-capture.js @@ -0,0 +1,40 @@ +import { settings } from '../core/state.js'; +import { structuredLog } from '../utils/logging.js'; +import { getText } from '../utils/utils.js'; +import { dispatchEvent } from '../core/dispatcher.js'; +import { getDOM } from '../core/context.js'; + +export async function setupVideoCapture(DOM) { + try { + if (!DOM.videoFeed || !DOM.frameCanvas) { + const msg = 'Missing videoFeed or frameCanvas in setupVideoCapture'; + structuredLog('ERROR', msg); + dispatchEvent('logError', { message: msg }); + return false; + } + + DOM.videoFeed.setAttribute('autoplay', ''); + DOM.videoFeed.setAttribute('muted', ''); + DOM.videoFeed.setAttribute('playsinline', ''); + DOM.frameCanvas.style.display = 'none'; + DOM.frameCanvas.setAttribute('aria-hidden', 'true'); + + structuredLog('INFO', 'setupVideoCapture: Video feed and canvas initialized'); + return true; + } catch (err) { + structuredLog('ERROR', 'setupVideoCapture error', { message: err.message }); + dispatchEvent('logError', { message: `Video capture setup 
error: ${err.message}` }); + return false; + } +} + +export async function cleanupVideoCapture() { + const DOM = getDOM(); + if (DOM.videoFeed?.srcObject) { + DOM.videoFeed.srcObject.getTracks().forEach(track => track.stop()); + DOM.videoFeed.srcObject = null; + } + DOM.frameCanvas.width = 0; + DOM.frameCanvas.height = 0; + structuredLog('INFO', 'cleanupVideoCapture: Video capture cleaned up'); +} \ No newline at end of file diff --git a/future/web/utils/async.js b/future/web/utils/async.js new file mode 100644 index 00000000..0072bbb3 --- /dev/null +++ b/future/web/utils/async.js @@ -0,0 +1,15 @@ +/** + * Wraps any async function in a standardized try/catch boundary. + * @param {Function} fn - The async function to execute. + * @param {...any} args - Arguments to pass to the function. + * @returns {Promise<{data: any, error: Error|null}>} + */ +export async function withErrorBoundary(fn, ...args) { + try { + const data = await fn(...args); + return { data, error: null }; + } catch (error) { + console.error(`${fn.name} error:`, error); + return { data: null, error }; + } +} \ No newline at end of file diff --git a/future/web/utils/idb-logger.js b/future/web/utils/idb-logger.js new file mode 100644 index 00000000..cff26c17 --- /dev/null +++ b/future/web/utils/idb-logger.js @@ -0,0 +1,131 @@ +// web/utils/idb-logger.js +// IndexedDB wrapper for persistent logging: Append JSON logs, retrieve all, cap size, export. +// Asynchronous, transaction-based for non-blocking ops in high-throughput scenarios. +// Fallback if IndexedDB not supported (e.g., logs to console only). + +const DB_NAME = 'AcoustSeeLogsDB'; +const DB_VERSION = 1; +const STORE_NAME = 'logs'; +const MAX_ENTRIES = 1000; // Cap to prevent unbounded growth. +let dbPromise = null; + +// Check IndexedDB support (technical: Feature detection to avoid errors in non-supporting envs like some iframes or old browsers). 
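The `withErrorBoundary` helper added in `future/web/utils/async.js` above standardizes async error handling into a `{ data, error }` result object. A self-contained usage sketch (the helper is copied verbatim; `parseJson` is a made-up stand-in task, not part of the codebase):

```javascript
// Copy of withErrorBoundary as added in future/web/utils/async.js.
async function withErrorBoundary(fn, ...args) {
  try {
    const data = await fn(...args);
    return { data, error: null };
  } catch (error) {
    console.error(`${fn.name} error:`, error);
    return { data: null, error };
  }
}

// Hypothetical async task that can throw.
async function parseJson(text) {
  return JSON.parse(text);
}

// Callers branch on the result instead of wrapping every call site in try/catch.
withErrorBoundary(parseJson, '{"fps":30}').then(({ data, error }) => {
  console.log(data.fps, error); // 30 null
});
withErrorBoundary(parseJson, 'not json').then(({ data, error }) => {
  console.log(data, error instanceof SyntaxError); // null true
});
```

The value of the pattern is that the error never escapes: failures are logged once inside the boundary and surfaced as data, which fits the module's goal of non-blocking, centralized error reporting.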
+const isIndexedDBSupported = 'indexedDB' in window; + +import { structuredLog } from './logging.js'; +// Open (or create) DB asynchronously with retry on transient errors. +function openDB(retries = 3) { + if (!isIndexedDBSupported) { + return Promise.reject(new Error('IndexedDB not supported in this environment')); + } + return new Promise((resolve, reject) => { + const attempt = (count) => { + const request = indexedDB.open(DB_NAME, DB_VERSION); + request.onerror = () => { + if (count > 0) { + setTimeout(() => attempt(count - 1), 500); + } else { + reject(request.error); + } + }; + request.onsuccess = () => resolve(request.result); + request.onupgradeneeded = (event) => { + const db = event.target.result; + if (!db.objectStoreNames.contains(STORE_NAME)) { + db.createObjectStore(STORE_NAME, { autoIncrement: true }); + } + }; + }; + attempt(retries); + }); +} + +// Lazy-init DB promise with error handling. +async function getDB() { + if (!dbPromise) { + dbPromise = openDB().catch(err => { + console.warn('IndexedDB init failed; falling back to console-only logging:', err.message); + return null; // Null signals fallback. + }); + } + return dbPromise; +} + +// Append a log entry (JSON object). Fallback to console if DB unavailable. +export async function addIdbLog(logEntry) { + const db = await getDB(); + if (!db) { + console.warn('DB unavailable; logging to console:', logEntry); + structuredLog('WARN', 'IDB fallback to console', { entry: logEntry }, false, false); + return; // Fallback: No persistence. + } + return new Promise((resolve, reject) => { + const transaction = db.transaction([STORE_NAME], 'readwrite'); + const store = transaction.objectStore(STORE_NAME); + const addRequest = store.add(logEntry); + + addRequest.onsuccess = () => { + // Cap size: If over max, delete oldest (cursor for efficiency). 
+ capLogSize(store).then(resolve).catch(reject); + }; + addRequest.onerror = () => reject(addRequest.error); + + transaction.onerror = () => reject(transaction.error); + }); +} + +// Helper to cap entries: Delete oldest if > MAX_ENTRIES. +async function capLogSize(store) { + return new Promise((resolve, reject) => { + const countRequest = store.count(); + countRequest.onsuccess = () => { + if (countRequest.result <= MAX_ENTRIES) return resolve(); + + // Delete excess oldest entries via cursor. + let deleted = 0; + const excess = countRequest.result - MAX_ENTRIES; + const cursorRequest = store.openCursor(); + + cursorRequest.onsuccess = (event) => { + const cursor = event.target.result; + if (cursor && deleted < excess) { + cursor.delete(); + deleted++; + cursor.continue(); + } else { + resolve(); + } + }; + cursorRequest.onerror = () => reject(cursorRequest.error); + }; + countRequest.onerror = () => reject(countRequest.error); + }); +} + +// Retrieve all logs for export. Fallback to empty if DB unavailable. +export async function getAllIdbLogs() { + const db = await getDB(); + if (!db) return []; // Fallback: Empty array. + return new Promise((resolve, reject) => { + const transaction = db.transaction([STORE_NAME], 'readonly'); + const store = transaction.objectStore(STORE_NAME); + const request = store.getAll(); + + request.onsuccess = () => resolve(request.result); + request.onerror = () => reject(request.error); + }); +} + +// Clear all logs (optional, e.g., after send). Fallback no-op if DB unavailable. 
+export async function clearIdbLogs() { + const db = await getDB(); + if (!db) return; + return new Promise((resolve, reject) => { + const transaction = db.transaction([STORE_NAME], 'readwrite'); + const store = transaction.objectStore(STORE_NAME); + const request = store.clear(); + + request.onsuccess = resolve; + request.onerror = () => reject(request.error); + }); +} \ No newline at end of file diff --git a/future/web/utils/logging.js b/future/web/utils/logging.js new file mode 100644 index 00000000..113715d5 --- /dev/null +++ b/future/web/utils/logging.js @@ -0,0 +1,104 @@ +// web/utils/logging.js +// Centralized logging utilities for structured, level-based outputs with async emission and sampling. +// Supports async to avoid blocking high-throughput paths (e.g., frame processing). +// Sampling reduces log volume for DEBUG level in performance-critical scenarios. + +import { addIdbLog } from './idb-logger.js'; // Updated to use IndexedDB. + +// Safely stringify objects, handling circular refs and Error instances +function safeStringify(obj) { + const seen = new WeakSet(); + return JSON.stringify(obj, (key, val) => { + if (typeof val === 'object' && val !== null) { + if (seen.has(val)) return '[Circular]'; + seen.add(val); + } + if (val instanceof Error) { + return { message: val.message, stack: val.stack }; + } + return val; + }); +} + +// Capture original console methods before any overrides +const originalConsoleRef = { + log: console.log, + warn: console.warn, + error: console.error, +}; + +const LOG_LEVELS = { + DEBUG: 0, + INFO: 1, + WARN: 2, + ERROR: 3, +}; + +let currentLogLevel = LOG_LEVELS.DEBUG; // Default; can be set from settings.debugLogging. +const isMobile = /Mobile|Android|iPhone|iPad/.test(navigator.userAgent); +let sampleRate = isMobile ? 0.1 : 1.0; // 10% DEBUG logs on mobile. + +// Helper to set global log level (e.g., from settings.isSettingsMode or debugLogging). 
+export function setLogLevel(level) { + const upperLevel = level.toUpperCase(); + if (Object.keys(LOG_LEVELS).includes(upperLevel)) { + currentLogLevel = LOG_LEVELS[upperLevel]; + } else { + structuredLog('WARN', 'Invalid log level attempted', { level }); + } +} + +// Helper to set sampling rate (0.0 to 1.0; from settings or dynamically). +export function setSampleRate(rate) { + if (rate >= 0 && rate <= 1) { + sampleRate = rate; + } else { + structuredLog('WARN', 'Invalid sample rate attempted', { rate }); + } +} + +/** + * Logs a structured message with level, timestamp, and data payload. + * Emits asynchronously to prevent blocking. + * @param {string} level - One of 'DEBUG', 'INFO', 'WARN', 'ERROR'. + * @param {string} message - Descriptive message (e.g., 'setAudioInterval'). + * @param {Object} [data={}] - Additional context (e.g., { timerId: 42, ms: 50 }). + * @param {boolean} [persist=true] - If true, also persists the entry via addIdbLog. + * @param {boolean} [sample=true] - If false, bypass sampling (for critical logs). + */ + +let inStructuredLog = false; +/** + * Logs a structured message asynchronously with a recursion guard. 
+ */ +export async function structuredLog(level, message, data = {}, persist = true, sample = true) { + const numericLevel = LOG_LEVELS[level.toUpperCase()] ?? LOG_LEVELS.INFO; // '??' so DEBUG (0) is not coerced to INFO + if (numericLevel < currentLogLevel) return; + if (sample && level.toUpperCase() === 'DEBUG' && Math.random() > sampleRate) return; + + if (inStructuredLog) return; + inStructuredLog = true; + try { + const timestamp = new Date().toISOString(); + const logEntry = { timestamp, level: level.toUpperCase(), message, data }; + // Use global console to avoid circular import + const fn = (console[level.toLowerCase()] || console.log).bind(console); + // Serialize only own properties to a JSON payload string to prevent endless prototype expansion + let payload = ''; + if (Object.keys(data).length) { + try { + payload = ' ' + safeStringify(data); + } catch (e) { + payload = ' [Unserializable data]'; + } + } + fn(`[${timestamp}] ${logEntry.level}: ${message}${payload}`); + if (persist) { + addIdbLog(logEntry).catch(err => { + console.warn('Failed to persist log to IndexedDB:', err.message); + }); + } + } finally { + inStructuredLog = false; + } +} \ No newline at end of file diff --git a/future/web/utils/utils.js b/future/web/utils/utils.js new file mode 100644 index 00000000..536a5018 --- /dev/null +++ b/future/web/utils/utils.js @@ -0,0 +1,136 @@ +import { settings } from '../core/state.js'; +import { structuredLog } from './logging.js'; + +/** + * Initializes language if not set, using available configs. + * Call this once upfront (e.g., after loadConfigs in main.js) to avoid races. + * @returns {string} The selected language ID. 
+ */ +export function initializeLanguageIfNeeded() { + if (!settings.language) { + structuredLog('WARN', 'Language not initialized; setting default'); + if (settings.availableLanguages.length === 0) { + // Configs likely not loaded; use ultimate fallback (assumes loadConfigs awaited upstream) + settings.language = 'en-US'; + structuredLog('INFO', 'Using ultimate fallback language', { language: settings.language }); + } else { + settings.language = settings.availableLanguages[0].id; + structuredLog('INFO', 'Auto-set language to first available', { language: settings.language }); + } + } + return settings.language; +} + +export function tryVibrate(event) { + if (event.cancelable && navigator.vibrate) { + try { + navigator.vibrate(50); + } catch (err) { + console.warn('Vibration blocked:', err.message); + } + } +} + +export function hapticCount(count) { + if (navigator.vibrate) { + const pattern = Array(count * 2 - 1).fill(30).map((v, i) => i % 2 === 0 ? 30 : 50); + navigator.vibrate(pattern); + } +} + +const translationsCache = {}; + +/** + * Fetches and formats a translated message. No DOM/TTS side-effects—callers handle those. + * @param {string} key - Translation key (dot-notated). + * @param {Object} [params={}] - Params for placeholder replacement. + * @returns {Promise} The formatted message, or key on failure. 
+ */ +export async function getText(key, params = {}) { + try { + const languageId = settings.language; + if (!languageId) { + throw new Error('Language not set; call initializeLanguageIfNeeded first'); + } + + const language = settings.availableLanguages.find(l => l.id === languageId); + if (!language) { + structuredLog('ERROR', 'Language not found', { + requestedLanguage: languageId, + availableLanguages: settings.availableLanguages.map(l => l.id), + key + }); + return key; // No fallback mutation—caller decides + } + + let translations = translationsCache[language.id]; + if (!translations) { + try { + const response = await fetch(`./languages/${language.id}.json`); + if (!response.ok) throw new Error(`Failed to load language file: ${response.status}`); + translations = await response.json(); + translationsCache[language.id] = translations; + } catch (fetchErr) { + structuredLog('ERROR', 'Language file fetch error', { message: fetchErr.message, key }); + return key; // Fallback on network/parse error + } + } + + let finalMessage = translations; + for (const part of key.split('.')) { + finalMessage = finalMessage[part] || key; + } + if (typeof finalMessage === 'object') { + finalMessage = finalMessage[params.state || params.fps || params.lang] || key; + } + + // Safer placeholder replacement (exact match to avoid partial brace issues) + for (const [paramKey, paramValue] of Object.entries(params)) { + finalMessage = finalMessage.replaceAll(`{${paramKey}}`, paramValue); + } + + return finalMessage; + } catch (err) { + structuredLog('ERROR', 'getText error', { message: err.message, key, params }); + throw err; // Rethrow for callers to handle (e.g., fallback or announce) + } +} + +/** + * Speaks the message via TTS if enabled. + * @param {string} message - Message to speak. + * @param {string} [type='tts'] - Type (for logging). 
+ */ +export function speakText(message, type = 'tts') { + if (type === 'tts' && settings.ttsEnabled) { + const utterance = new SpeechSynthesisUtterance(message); + utterance.lang = settings.language; + window.speechSynthesis.speak(utterance); + } +} + +/** + * Updates the announcements element with a message. + * @param {string} message - Message to announce. + */ +export function announceMessage(message) { + const announcements = document.getElementById('announcements'); + if (announcements) { + announcements.textContent = message; + } +} + +export function parseBrowserVersion(userAgent) { + const rx = /Chrome\/([0-9.]+)|Firefox\/([0-9.]+)|Safari\/([0-9.]+)|Edg\/([0-9.]+)/; + const m = userAgent.match(rx); + return (m && (m[1] || m[2] || m[3] || m[4])) || 'Unknown'; +} + +export function setTextAndAriaLabel(element, text, ariaLabel) { + if (element) { + element.textContent = text; + element.setAttribute('aria-label', ariaLabel); + } else { + structuredLog('WARN', 'Element not found for text update', { text }); + } +} \ No newline at end of file diff --git a/present/audio-processor.js b/present/audio-processor.js index 875ac9b2..084896a8 100644 --- a/present/audio-processor.js +++ b/present/audio-processor.js @@ -7,6 +7,19 @@ export let audioContext = null; export let isAudioInitialized = false; export let oscillators = []; +// WeakMap for tracking audio resources +const audioResourceMap = new WeakMap(); +// Interval ID for memory logging +let memoryInterval; + +// Function to log memory usage (Chrome only) +function logMemoryUsage() { + if (performance && performance.memory) { + const { usedJSHeapSize, totalJSHeapSize } = performance.memory; + console.log(`Memory Usage: ${(usedJSHeapSize/1e6).toFixed(2)}MB / ${(totalJSHeapSize/1e6).toFixed(2)}MB`); + } +} + export async function initializeAudio(context) { if (isAudioInitialized || !context) return; try { @@ -25,8 +38,14 @@ export async function initializeAudio(context) { 
osc.connect(gain).connect(panner).connect(audioContext.destination); osc.start(); oscillators.push({ osc, gain, panner, active: false }); + // Track audio resources in WeakMap + audioResourceMap.set(osc, { gain, panner }); } isAudioInitialized = true; + // Start memory monitoring + if (!memoryInterval) { + memoryInterval = setInterval(logMemoryUsage, 60000); + } if (window.speechSynthesis) { const utterance = new SpeechSynthesisUtterance('Audio initialized'); utterance.lang = settings.language || 'en-US'; @@ -73,3 +92,19 @@ export function playAudio(frameData, width, height, prevFrameDataLeft, prevFrame return { prevFrameDataLeft, prevFrameDataRight }; } + +// Cleanup audio resources on unload +window.addEventListener('beforeunload', () => { + oscillators.forEach(({ osc, gain, panner }) => { + try { + osc.stop(); + osc.disconnect(); + gain.disconnect && gain.disconnect(); + panner.disconnect && panner.disconnect(); + } catch (e) { + console.warn('Error during audio cleanup', e); + } + audioResourceMap.delete(osc); + }); + if (memoryInterval) clearInterval(memoryInterval); +});
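The `audioResourceMap` WeakMap added to `present/audio-processor.js` keys each oscillator to its dependent gain/panner nodes, so entries disappear automatically once an oscillator is garbage-collected. A minimal standalone sketch of that pattern, with plain objects standing in for Web Audio nodes (the `trackResources`/`releaseResources` names are illustrative, not from the codebase):

```javascript
// WeakMap keyed by the primary node; values hold the resources torn down with it.
const audioResourceMap = new WeakMap();

function trackResources(osc, gain, panner) {
  audioResourceMap.set(osc, { gain, panner });
}

function releaseResources(osc) {
  const res = audioResourceMap.get(osc);
  if (!res) return false; // already released or never tracked
  // The real code calls osc.stop()/osc.disconnect() and disconnects gain/panner here.
  audioResourceMap.delete(osc);
  return true;
}

const osc = { id: 'osc-1' }; // stand-in for an OscillatorNode
trackResources(osc, { id: 'gain-1' }, { id: 'pan-1' });
console.log(audioResourceMap.has(osc)); // true
console.log(releaseResources(osc));     // true
console.log(audioResourceMap.has(osc)); // false
```

Because a WeakMap holds its keys weakly, dropping the last reference to an oscillator also drops its tracked resources, which is why the diff uses it rather than a plain `Map` that would pin nodes in memory.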