From 4b06931086f6ebde57197cdf7715ad5c78c1ac4c Mon Sep 17 00:00:00 2001 From: Matt Cossins Date: Wed, 11 Feb 2026 19:06:12 +0000 Subject: [PATCH 1/5] C2PA updates --- ...deo-&-Audio-Provenance-In-The-Age-of-AI.md | 71 ++++++++++--------- 1 file changed, 38 insertions(+), 33 deletions(-) diff --git a/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md b/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md index 6709fb84..f720a9e9 100644 --- a/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md +++ b/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md @@ -1,6 +1,6 @@ --- title: "Video & Audio Provenance on Arm in the Age of AI" -description: "Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and also what other AI processing has occurred on any media - is necessary for accountability in domains like journalism, security, and regulated industries. This project uses C2PA 2.3 to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. Additionally it tracks authorised and potential unauthorised use of models to analyse the asset." +description: "Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and what other AI processing has occurred on any media - is fundamental for accountability in domains like journalism, security, and regulated industries. This project uses C2PA specification (www.c2pa.org) revision 2.3 to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. Additionally, it tracks authorized and potential unauthorized use of models to analyze the asset, by means of Model Provenance." 
subjects: - "ML" - "Security" - "Edge AI" platform: - "AI" - "Laptops and Desktops" sw-hw: - "Software" support-level: - "Self-Service" - "Arm Ambassador Support" publication-date: 2026-02-06 license: status: - "Hidden" badges: - trending - recently_added donation: --- +## Description -## Description +### Why is this important? -### Why is this important? +Generative AI has created a trust gap: +- Images, videos, and audio may be AI-generated or heavily manipulated. +- This can lead to deepfakes and fraud. +- This AI-enhancement or editing can occur in real-time - i.e. a person can live stream themselves but use ML to change their appearance and voice. +- Moreover, the use of AI models to scan images, video, and audio (e.g. facial or voice recognition) is often opaque, with no actual changes to the underlying data. +- Viewers rarely know whether AI analysis was performed, or what conclusion was reached. Such seamless content transformation requires little specialized skill, putting it within reach of almost anyone. -Generative AI has created a trust gap: -- Images, videos, and audio may be AI-generated or heavily manipulated - this can lead to deepfakes, and fraud. -- This AI-enhancement or editing can occur in real-time - i.e. a person can livestream themeself, but use ML to change their appearance and voice. -- Moreover, the use of AI models to scan images, video, and audio (e.g. facial or voice recognition) is often opaque, with no actual changes to the underlying data. -- Viewers rarely know whether AI analysis was performed, or what conclusion was reached. - -Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and also what other AI processing has occurred on any media - is necessary for accountability in domains like journalism, security, and regulated industries. This project uses C2PA 2.3 to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. Additionally it tracks authorised and potential unauthorised use of models to analyse the asset.
+Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and what other AI processing has occurred on any media - is fundamental for accountability in domains like journalism, security, and regulated industries. This project uses the C2PA specification (www.c2pa.org), revision 2.3, to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. Additionally, it tracks authorized and potential unauthorized use of models to analyze the asset, by means of Model Provenance. [Arm is a founder member of the C2PA Standards Group.](https://newsroom.arm.com/blog/c2pa-fights-disinformation) The Coalition for Content Provenance and Authenticity (C2PA) specification lets creators and platforms attach cryptographically signed metadata to content like: - images 📷 @@ -44,41 +44,46 @@ Integrating transparent provenance - disclosing whether media is AI-generated or - audio 🎧 - documents 📄 -That metadata can include: -- who created it -- what tool was used (camera, Photoshop, generative AI, etc.) -- what edits were made and when -- whether AI was involved +The metadata can include: +- who created it +- what tool was used (camera, Photoshop, generative AI, etc.) +- what edits were made and when +- whether AI was involved -Anyone can later verify this info to see if the content is authentic or has been tampered with. +Anyone can later verify this info to see if the content is authentic or has been tampered with. -The best place to start would be with processing on static audio or video files, but we encourage you to ultimately target a streamed data format - i.e. streamed audio or video, and to perform the inference and record the actions in real-time. +The best place to start would be with processing on static audio or video files, but we encourage you to ultimately target a streamed data format - i.e.
streamed audio or video, and to perform the inference and record the actions in real-time, attaching provenance information to the streamed content as it is produced. -You should leverage an Arm-powered device, such as a Windows-on-Arm laptop, Apple Silicon MacBook, Arm-powered mobile phone, or Raspberry Pi. +You should leverage an Arm-powered device, such as a Windows-on-Arm laptop, Apple Silicon MacBook, Arm-powered mobile phone, or Raspberry Pi. +### Project Summary -### Project Summary +Build and evaluate a comprehensive AI-augmented audio/video capture and provenance system on an Arm-powered device (e.g. a Windows-on-Arm laptop) that: +- captures streamed media with a camera or microphone, +- runs AI models on-device (e.g. face/object/keyword/sentiment detection, upscaling/filters/enhancements), +- generates C2PA Content Credentials that transparently disclose: + 1. which models were run, + 2. their effect/impact on the image or video -Build and evaluate a comprehensive AI-augmented audio/video capture and provenance system on an Arm-powered device (e.g. Windows-on-Arm laptop) -- captures streamed media with a camera or microphone, -- runs AI models on-device (i.e. face/object/keyword/sentiment detection, upscaling/filters/enhancements), -- generates C2PA Content Credentials that transparently disclose: - 1. which models were run, - 2. their effect/impact on the image or video +and demonstrates how this provenance enables trust and auditability in real-world use cases such as content integrity validation and responsible media pipelines. -and demonstrates how this provenance enables trust and auditability in real-world use cases such as content integrity validation and responsible media pipelines. +You should be able to show your source code, along with documentation/instructions/comments, a short demo video or images to show the project in action, and a short document describing your decisions and how you implemented the project.
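The flow the summary describes - run a model on-device, then record what it did as a signed, verifiable assertion - can be sketched in miniature. This is only an illustration: the `c2pa.actions` label and the IPTC `digitalSourceType` URI follow the C2PA vocabulary but should be checked against the 2.3 specification, the model name is hypothetical, and the HMAC signature is a stdlib stand-in for the COSE/X.509 signing a real implementation (e.g. c2pa-rs) performs.

```python
import hashlib
import hmac
import json


def make_actions_assertion(model_name, action, source_type):
    """Build a C2PA-style 'c2pa.actions' assertion recording one AI action.

    Field names follow the C2PA actions assertion; consult the 2.3 spec
    for the authoritative schema before relying on them.
    """
    return {
        "label": "c2pa.actions",
        "data": {
            "actions": [{
                "action": action,                  # e.g. "c2pa.edited"
                "softwareAgent": model_name,       # which model ran
                "digitalSourceType": source_type,  # IPTC digital source type URI
            }]
        },
    }


def sign_manifest(manifest, key):
    # Stand-in signature: real C2PA claims are COSE-signed with X.509 certs.
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def verify_manifest(manifest, key, signature):
    return hmac.compare_digest(sign_manifest(manifest, key), signature)


# Record a hypothetical on-device enhancement model in a minimal manifest.
assertion = make_actions_assertion(
    "face-enhancer-v2 (hypothetical)",
    "c2pa.edited",
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
)
manifest = {"claim_generator": "provenance-demo/0.1", "assertions": [assertion]}
key = b"demo-signing-key"
sig = sign_manifest(manifest, key)
print(verify_manifest(manifest, key, sig))   # True: untampered

# Rewriting the recorded action invalidates the signature.
manifest["assertions"][0]["data"]["actions"][0]["softwareAgent"] = "tampered"
print(verify_manifest(manifest, key, sig))   # False: edit detected
```

In a real pipeline a C2PA SDK would embed the signed manifest in the asset itself; the sketch only shows why tampering with a recorded action invalidates the credential.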
-You should be able to show your source code, along with documentation/instructions/comments, a short demo video or images to show the project in action, and a short document describing your decisions and how you implemented the project. +## What will you use? +You should be willing to learn about, or already familiar with: +- Programming and building a basic application on your chosen platform/OS +- C2PA and the concepts behind content provenance and authentication +- Deploying optimized inference models on Arm-powered CPUs via frameworks with KleidiAI integration (e.g. PyTorch). - ## What will you use? -You should be willing to learn about, or already familiar with: -- Programming and building a basic application on your chosen platform/OS -- C2PA and the concepts behind content provenance and authentication -- Deploying optimised inference models on Arm-powered CPU via frameworks with KleidiAI integration (e.g. PyTorch) + +## A few helpful links to relevant items: +- [Live Video Streaming](https://spec.c2pa.org/specifications/specifications/2.3/specs/C2PA_Specification.html#live-video) +- [C2PA](https://github.com/contentauth/c2pa-rs) +- [C-Wrapper for Rust Library](https://gitlab.com/guardianproject/proofmode/simple-c2pa) -## Resources from Arm and our partners +## Other potentially useful resources from Arm and our partners - Repository: [AI on Arm course](https://github.com/arm-university/AI-on-Arm) - Arm / Cambridge University edX course: [AI at the Edge on Arm (Mobile)](https://www.edx.org/learn/computer-science/arm-education-ai-at-the-edge-on-arm) - Learning Path: [Vision LLM Inference on Android with KleidiAI](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vision-llm-inference-on-android-with-kleidiai-and-mnn/) From 3b79a1a7ececf72a59043a3ad56c6bddb8974289 Mon Sep 17 00:00:00 2001 From: Matt Cossins Date: Thu, 12 Feb 2026 14:32:30 +0000 Subject: [PATCH 2/5] Changed to published ---
Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md b/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md index f720a9e9..14b5f813 100644 --- a/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md +++ b/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md @@ -18,7 +18,7 @@ support-level: publication-date: 2026-02-06 license: status: - - "Hidden" + - "Published" badges: - trending - recently_added From 381b3e01ad8c6dd14d553df8a3ec6931c6e1c407 Mon Sep 17 00:00:00 2001 From: ci-bot Date: Thu, 12 Feb 2026 14:33:13 +0000 Subject: [PATCH 3/5] docs: auto-update --- docs/_data/navigation.yml | 24 +++ ...deo-&-Audio-Provenance-In-The-Age-of-AI.md | 140 ++++++++++-------- 2 files changed, 100 insertions(+), 64 deletions(-) diff --git a/docs/_data/navigation.yml b/docs/_data/navigation.yml index 3e6d6544..501af80a 100644 --- a/docs/_data/navigation.yml +++ b/docs/_data/navigation.yml @@ -45,6 +45,30 @@ projects: - Arm Ambassador Support status: - Published + - title: Video-&-Audio-Provenance-In-The-Age-of-AI + description: Integrating transparent provenance - disclosing whether media is + AI-generated or AI-edited, and what other AI processing has occurred on any + media - is fundamental for accountability in domains like journalism, security, + and regulated industries. This project uses C2PA specification (www.c2pa.org) + revision 2.3 to record such actions as signed, machine-verifiable assertions + attached to the asset. This reduces deepfake proliferation and fraud by providing + a method to verify provenance. Additionally, it tracks authorized and potential + unauthorized use of models to analyze the asset, by means of Model Provenance. 
+ url: /2026/02/06/Video-&-Audio-Provenance-In-The-Age-of-AI.html + subjects: + - ML + - Security + - Edge AI + platform: + - AI + - Laptops and Desktops + sw-hw: + - Software + support-level: + - Self-Service + - Arm Ambassador Support + status: + - Published - title: AI-Agents description: "This self-service project builds a sandboxed AI agent on Arm hardware\ \ that harnesses appropriately sized LLMs to safely automate complex workflows\u2014\ diff --git a/docs/_posts/2026-02-06-Video-&-Audio-Provenance-In-The-Age-of-AI.md b/docs/_posts/2026-02-06-Video-&-Audio-Provenance-In-The-Age-of-AI.md index 7925bca9..16201f69 100644 --- a/docs/_posts/2026-02-06-Video-&-Audio-Provenance-In-The-Age-of-AI.md +++ b/docs/_posts/2026-02-06-Video-&-Audio-Provenance-In-The-Age-of-AI.md @@ -1,6 +1,6 @@ --- title: Video & Audio Provenance on Arm in the Age of AI -description: Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and also what other AI processing has occurred on any media - is necessary for accountability in domains like journalism, security, and regulated industries. This project uses C2PA 2.3 to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. Additionally it tracks authorised and potential unauthorised use of models to analyse the asset. +description: Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and what other AI processing has occurred on any media - is fundamental for accountability in domains like journalism, security, and regulated industries. This project uses C2PA specification (www.c2pa.org) revision 2.3 to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. 
Additionally, it tracks authorized and potential unauthorized use of models to analyze the asset, by means of Model Provenance. subjects: - ML - Security @@ -18,7 +18,7 @@ support-level: publication-date: 2026-02-06 license: status: -- Hidden +- Published badges: - trending - recently_added @@ -27,17 +27,18 @@ layout: article sidebar: nav: projects full_description: |- - ## Description + ## Description - ### Why is this important? + ### Why is this important? - Generative AI has created a trust gap: - - Images, videos, and audio may be AI-generated or heavily manipulated - this can lead to deepfakes, and fraud. - - This AI-enhancement or editing can occur in real-time - i.e. a person can livestream themeself, but use ML to change their appearance and voice. - - Moreover, the use of AI models to scan images, video, and audio (e.g. facial or voice recognition) is often opaque, with no actual changes to the underlying data. - - Viewers rarely know whether AI analysis was performed, or what conclusion was reached. + Generative AI has created a trust gap: + - Images, videos, and audio may be AI-generated or heavily manipulated. + - This can lead to deepfakes and fraud. + - This AI-enhancement or editing can occur in real-time - i.e. a person can live stream themselves but use ML to change their appearance and voice. + - Moreover, the use of AI models to scan images, video, and audio (e.g. facial or voice recognition) is often opaque, with no actual changes to the underlying data. + - Viewers rarely know whether AI analysis was performed, or what conclusion was reached. Such seamless content transformation requires little specialized skill, putting it within reach of almost anyone. - Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and also what other AI processing has occurred on any media - is necessary for accountability in domains like journalism, security, and regulated industries.
This project uses C2PA 2.3 to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. Additionally it tracks authorised and potential unauthorised use of models to analyse the asset. + Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and what other AI processing has occurred on any media - is fundamental for accountability in domains like journalism, security, and regulated industries. This project uses the C2PA specification (www.c2pa.org), revision 2.3, to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. Additionally, it tracks authorized and potential unauthorized use of models to analyze the asset, by means of Model Provenance. [Arm is a founder member of the C2PA Standards Group.](https://newsroom.arm.com/blog/c2pa-fights-disinformation) The Coalition for Content Provenance and Authenticity (C2PA) specification lets creators and platforms attach cryptographically signed metadata to content like: - images 📷 @@ -45,41 +46,46 @@ full_description: |- - audio 🎧 - documents 📄 - That metadata can include: - - who created it - - what tool was used (camera, Photoshop, generative AI, etc.) - - what edits were made and when - - whether AI was involved + The metadata can include: + - who created it + - what tool was used (camera, Photoshop, generative AI, etc.) + - what edits were made and when + - whether AI was involved - Anyone can later verify this info to see if the content is authentic or has been tampered with. + Anyone can later verify this info to see if the content is authentic or has been tampered with. - The best place to start would be with processing on static audio or video files, but we encourage you to ultimately target a streamed data format - i.e.
streamed audio or video, and to perform the inference and record the actions in real-time. + The best place to start would be with processing on static audio or video files, but we encourage you to ultimately target a streamed data format - i.e. streamed audio or video, and to perform the inference and record the actions in real-time, attaching provenance information to the streamed content as it is produced. - You should leverage an Arm-powered device, such as a Windows-on-Arm laptop, Apple Silicon MacBook, Arm-powered mobile phone, or Raspberry Pi. + You should leverage an Arm-powered device, such as a Windows-on-Arm laptop, Apple Silicon MacBook, Arm-powered mobile phone, or Raspberry Pi. + ### Project Summary - ### Project Summary + Build and evaluate a comprehensive AI-augmented audio/video capture and provenance system on an Arm-powered device (e.g. a Windows-on-Arm laptop) that: + - captures streamed media with a camera or microphone, + - runs AI models on-device (e.g. face/object/keyword/sentiment detection, upscaling/filters/enhancements), + - generates C2PA Content Credentials that transparently disclose: + 1. which models were run, + 2. their effect/impact on the image or video - Build and evaluate a comprehensive AI-augmented audio/video capture and provenance system on an Arm-powered device (e.g. Windows-on-Arm laptop) - - captures streamed media with a camera or microphone, - - runs AI models on-device (i.e. face/object/keyword/sentiment detection, upscaling/filters/enhancements), - - generates C2PA Content Credentials that transparently disclose: - 1. which models were run, - 2. their effect/impact on the image or video + and demonstrates how this provenance enables trust and auditability in real-world use cases such as content integrity validation and responsible media pipelines. - and demonstrates how this provenance enables trust and auditability in real-world use cases such as content integrity validation and responsible media pipelines.
+ You should be able to show your source code, along with documentation/instructions/comments, a short demo video or images to show the project in action, and a short document describing your decisions and how you implemented the project. - You should be able to show your source code, along with documentation/instructions/comments, a short demo video or images to show the project in action, and a short document describing your decisions and how you implemented the project. + ## What will you use? + You should be willing to learn about, or already familiar with: + - Programming and building a basic application on your chosen platform/OS + - C2PA and the concepts behind content provenance and authentication + - Deploying optimized inference models on Arm-powered CPU via frameworks with KleidiAI integration (e.g. PyTorch). - ## What will you use? - You should be willing to learn about, or already familiar with: - - Programming and building a basic application on your chosen platform/OS - - C2PA and the concepts behind content provenance and authentication - - Deploying optimised inference models on Arm-powered CPU via frameworks with KleidiAI integration (e.g. 
PyTorch) + + ## A few helpful links to relevant items: + - [Live Video Streaming](https://spec.c2pa.org/specifications/specifications/2.3/specs/C2PA_Specification.html#live-video) + - [C2PA](https://github.com/contentauth/c2pa-rs) + - [C-Wrapper for Rust Library](https://gitlab.com/guardianproject/proofmode/simple-c2pa) - ## Resources from Arm and our partners + ## Other potentially useful resources from Arm and our partners - Repository: [AI on Arm course](https://github.com/arm-university/AI-on-Arm) - Arm / Cambridge University edX course: [AI at the Edge on Arm (Mobile)](https://www.edx.org/learn/computer-science/arm-education-ai-at-the-edge-on-arm) - Learning Path: [Vision LLM Inference on Android with KleidiAI](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vision-llm-inference-on-android-with-kleidiai-and-mnn/) @@ -100,17 +106,18 @@ full_description: |- To receive the benefits, you must show us your project through our [online form](https://forms.office.com/e/VZnJQLeRhD). Please do not include any confidential information in your contribution. Additionally if you are affiliated with an academic institution, please ensure you have the right to share your material. --- -## Description +## Description -### Why is this important? +### Why is this important? -Generative AI has created a trust gap: -- Images, videos, and audio may be AI-generated or heavily manipulated - this can lead to deepfakes, and fraud. -- This AI-enhancement or editing can occur in real-time - i.e. a person can livestream themeself, but use ML to change their appearance and voice. -- Moreover, the use of AI models to scan images, video, and audio (e.g. facial or voice recognition) is often opaque, with no actual changes to the underlying data. -- Viewers rarely know whether AI analysis was performed, or what conclusion was reached. 
+Generative AI has created a trust gap: +- Images, videos, and audio may be AI-generated or heavily manipulated. +- This can lead to deepfakes and fraud. +- This AI-enhancement or editing can occur in real-time - i.e. a person can live stream themselves but use ML to change their appearance and voice. +- Moreover, the use of AI models to scan images, video, and audio (e.g. facial or voice recognition) is often opaque, with no actual changes to the underlying data. +- Viewers rarely know whether AI analysis was performed, or what conclusion was reached. Such seamless content transformation requires little specialized skill, putting it within reach of almost anyone. -Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and also what other AI processing has occurred on any media - is necessary for accountability in domains like journalism, security, and regulated industries. This project uses C2PA 2.3 to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. Additionally it tracks authorised and potential unauthorised use of models to analyse the asset. +Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and what other AI processing has occurred on any media - is fundamental for accountability in domains like journalism, security, and regulated industries. This project uses the C2PA specification (www.c2pa.org), revision 2.3, to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. Additionally, it tracks authorized and potential unauthorized use of models to analyze the asset, by means of Model Provenance.
[Arm is a founder member of the C2PA Standards Group.](https://newsroom.arm.com/blog/c2pa-fights-disinformation) The Coalition for Content Provenance and Authenticity (C2PA) specification lets creators and platforms attach cryptographically signed metadata to content like: - images 📷 @@ -118,41 +125,46 @@ Integrating transparent provenance - disclosing whether media is AI-generated or - audio 🎧 - documents 📄 -That metadata can include: -- who created it -- what tool was used (camera, Photoshop, generative AI, etc.) -- what edits were made and when -- whether AI was involved +The metadata can include: +- who created it +- what tool was used (camera, Photoshop, generative AI, etc.) +- what edits were made and when +- whether AI was involved -Anyone can later verify this info to see if the content is authentic or has been tampered with. +Anyone can later verify this info to see if the content is authentic or has been tampered with. -The best place to start would be with processing on static audio or video files, but we encourage you to ultimately target a streamed data format - i.e. streamed audio or video, and to perform the inference and record the actions in real-time. +The best place to start would be with processing on static audio or video files, but we encourage you to ultimately target a streamed data format - i.e. streamed audio or video, and to perform the inference and record the actions in real-time, attaching provenance information to the streamed content as it is produced. -You should leverage an Arm-powered device, such as a Windows-on-Arm laptop, Apple Silicon MacBook, Arm-powered mobile phone, or Raspberry Pi. +You should leverage an Arm-powered device, such as a Windows-on-Arm laptop, Apple Silicon MacBook, Arm-powered mobile phone, or Raspberry Pi. +### Project Summary -### Project Summary +Build and evaluate a comprehensive AI-augmented audio/video capture and provenance system on an Arm-powered device (e.g.
a Windows-on-Arm laptop) that: +- captures streamed media with a camera or microphone, +- runs AI models on-device (e.g. face/object/keyword/sentiment detection, upscaling/filters/enhancements), +- generates C2PA Content Credentials that transparently disclose: + 1. which models were run, + 2. their effect/impact on the image or video -Build and evaluate a comprehensive AI-augmented audio/video capture and provenance system on an Arm-powered device (e.g. Windows-on-Arm laptop) -- captures streamed media with a camera or microphone, -- runs AI models on-device (i.e. face/object/keyword/sentiment detection, upscaling/filters/enhancements), -- generates C2PA Content Credentials that transparently disclose: - 1. which models were run, - 2. their effect/impact on the image or video +and demonstrates how this provenance enables trust and auditability in real-world use cases such as content integrity validation and responsible media pipelines. -and demonstrates how this provenance enables trust and auditability in real-world use cases such as content integrity validation and responsible media pipelines. +You should be able to show your source code, along with documentation/instructions/comments, a short demo video or images to show the project in action, and a short document describing your decisions and how you implemented the project. -You should be able to show your source code, along with documentation/instructions/comments, a short demo video or images to show the project in action, and a short document describing your decisions and how you implemented the project. +## What will you use? +You should be willing to learn about, or already familiar with: +- Programming and building a basic application on your chosen platform/OS +- C2PA and the concepts behind content provenance and authentication +- Deploying optimized inference models on Arm-powered CPUs via frameworks with KleidiAI integration (e.g. PyTorch). - ## What will you use?
-You should be willing to learn about, or already familiar with: -- Programming and building a basic application on your chosen platform/OS -- C2PA and the concepts behind content provenance and authentication -- Deploying optimised inference models on Arm-powered CPU via frameworks with KleidiAI integration (e.g. PyTorch) + +## A few helpful links to relevant items: +- [Live Video Streaming](https://spec.c2pa.org/specifications/specifications/2.3/specs/C2PA_Specification.html#live-video) +- [C2PA](https://github.com/contentauth/c2pa-rs) +- [C-Wrapper for Rust Library](https://gitlab.com/guardianproject/proofmode/simple-c2pa) -## Resources from Arm and our partners +## Other potentially useful resources from Arm and our partners - Repository: [AI on Arm course](https://github.com/arm-university/AI-on-Arm) - Arm / Cambridge University edX course: [AI at the Edge on Arm (Mobile)](https://www.edx.org/learn/computer-science/arm-education-ai-at-the-edge-on-arm) - Learning Path: [Vision LLM Inference on Android with KleidiAI](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vision-llm-inference-on-android-with-kleidiai-and-mnn/) From f72ebcf39c6053375365f2e717f8144bb1575ec1 Mon Sep 17 00:00:00 2001 From: Matt Cossins Date: Thu, 12 Feb 2026 15:43:02 +0000 Subject: [PATCH 4/5] c2pa final update --- .../Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md b/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md index 14b5f813..b122bdfa 100644 --- a/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md +++ b/Projects/Projects/Video-&-Audio-Provenance-In-The-Age-of-AI.md @@ -1,6 +1,6 @@ --- title: "Video & Audio Provenance on Arm in the Age of AI" -description: "Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and what other AI processing has occurred on any media - is 
fundamental for accountability in domains like journalism, security, and regulated industries. This project uses C2PA specification (www.c2pa.org) revision 2.3 to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. Additionally, it tracks authorized and potential unauthorized use of models to analyze the asset, by means of Model Provenance." +description: "Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and what other AI processing has occurred on any media - is fundamental for accountability in domains like journalism, security, and regulated industries. This project uses C2PA specification (www.c2pa.org) revision 2.3 to record such actions as signed, machine-verifiable assertions attached to the asset." subjects: - "ML" - "Security" @@ -15,12 +15,11 @@ sw-hw: support-level: - "Self-Service" - "Arm Ambassador Support" -publication-date: 2026-02-06 +publication-date: 2026-02-12 license: status: - "Published" badges: - - trending - recently_added donation: --- From cd91dc2b19c04af9126a45ed453e6e9f5c32682e Mon Sep 17 00:00:00 2001 From: ci-bot Date: Thu, 12 Feb 2026 15:43:57 +0000 Subject: [PATCH 5/5] docs: auto-update --- docs/_data/navigation.yml | 6 ++---- ...2026-02-12-Video-&-Audio-Provenance-In-The-Age-of-AI.md} | 5 ++--- 2 files changed, 4 insertions(+), 7 deletions(-) rename docs/_posts/{2026-02-06-Video-&-Audio-Provenance-In-The-Age-of-AI.md => 2026-02-12-Video-&-Audio-Provenance-In-The-Age-of-AI.md} (97%) diff --git a/docs/_data/navigation.yml b/docs/_data/navigation.yml index 501af80a..8184e5f4 100644 --- a/docs/_data/navigation.yml +++ b/docs/_data/navigation.yml @@ -51,10 +51,8 @@ projects: media - is fundamental for accountability in domains like journalism, security, and regulated industries. 
This project uses C2PA specification (www.c2pa.org) revision 2.3 to record such actions as signed, machine-verifiable assertions - attached to the asset. This reduces deepfake proliferation and fraud by providing - a method to verify provenance. Additionally, it tracks authorized and potential - unauthorized use of models to analyze the asset, by means of Model Provenance. - url: /2026/02/06/Video-&-Audio-Provenance-In-The-Age-of-AI.html + attached to the asset. + url: /2026/02/12/Video-&-Audio-Provenance-In-The-Age-of-AI.html subjects: - ML - Security diff --git a/docs/_posts/2026-02-06-Video-&-Audio-Provenance-In-The-Age-of-AI.md b/docs/_posts/2026-02-12-Video-&-Audio-Provenance-In-The-Age-of-AI.md similarity index 97% rename from docs/_posts/2026-02-06-Video-&-Audio-Provenance-In-The-Age-of-AI.md rename to docs/_posts/2026-02-12-Video-&-Audio-Provenance-In-The-Age-of-AI.md index 16201f69..2d1d3fd9 100644 --- a/docs/_posts/2026-02-06-Video-&-Audio-Provenance-In-The-Age-of-AI.md +++ b/docs/_posts/2026-02-12-Video-&-Audio-Provenance-In-The-Age-of-AI.md @@ -1,6 +1,6 @@ --- title: Video & Audio Provenance on Arm in the Age of AI -description: Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and what other AI processing has occurred on any media - is fundamental for accountability in domains like journalism, security, and regulated industries. This project uses C2PA specification (www.c2pa.org) revision 2.3 to record such actions as signed, machine-verifiable assertions attached to the asset. This reduces deepfake proliferation and fraud by providing a method to verify provenance. Additionally, it tracks authorized and potential unauthorized use of models to analyze the asset, by means of Model Provenance. 
+description: Integrating transparent provenance - disclosing whether media is AI-generated or AI-edited, and what other AI processing has occurred on any media - is fundamental for accountability in domains like journalism, security, and regulated industries. This project uses C2PA specification (www.c2pa.org) revision 2.3 to record such actions as signed, machine-verifiable assertions attached to the asset. subjects: - ML - Security @@ -15,12 +15,11 @@ sw-hw: support-level: - Self-Service - Arm Ambassador Support -publication-date: 2026-02-06 +publication-date: 2026-02-12 license: status: - Published badges: -- trending - recently_added donation: layout: article
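The project brief encourages moving from static files to streamed media (see the Live Video Streaming link in the patched document). One way to reason about per-segment provenance for a live stream is a hash chain over segments, where each link commits to the segment bytes and the previous link. This is an illustrative stand-in, not the C2PA 2.3 live-video binding itself; the specification defines the actual mechanism.

```python
import hashlib


def chain_segments(segments, prev=b"\x00" * 32):
    """Hash-link a sequence of streamed media segments.

    Each link commits to the segment bytes AND the previous link, so a
    verifier replaying the stream detects any dropped, reordered, or
    altered segment from that point onward.
    """
    links = []
    for seg in segments:
        prev = hashlib.sha256(prev + seg).digest()
        links.append(prev.hex())
    return links


stream = [b"segment-0", b"segment-1", b"segment-2"]
links = chain_segments(stream)

# An unmodified replay reproduces the same chain...
print(chain_segments(stream) == links)  # True

# ...while tampering with one segment breaks every later link.
tampered = chain_segments([b"segment-0", b"SPOOFED", b"segment-2"])
print(tampered[1:] == links[1:])        # False
```

In a full system each link (or a signed claim over it) would itself be carried in the stream's provenance metadata, so verification can proceed segment by segment rather than only after capture ends.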