
Conversation

@changjian-wang
Member

Purpose

Update samples to use the SDK

Does this introduce a breaking change?

[ ] Yes
[x] No

Pull Request Type

What kind of change does this Pull Request introduce?

[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] Documentation content changes
[x] Other... Please describe: Use the SDK to refactor the code.

- Updated the `content_extraction.ipynb` notebook to use the new Azure AI Content Understanding SDK.
- Replaced deprecated methods and adjusted the code for asynchronous operations.
- Improved the structure of the notebook for better readability and organization.
- Added a new `sample_helper.py` file containing utility functions for handling analysis results, saving images, and extracting operation IDs.
- Enhanced error handling and logging throughout the notebook.
- Updated `.gitignore` to exclude `test_output/` directory.
- Added new face images for enrollment and testing.
- Refactored `build_person_directory.ipynb` to use async methods and improved logging.
- Updated person and face management logic to handle Azure SDK changes.
- Improved error handling and logging for face and person operations.
- Enhanced `content_extraction.ipynb` with audio analysis capabilities and cleanup logic.
- Updated `analyzer_training.ipynb` to enhance client initialization and error handling.
- Modified training data path handling and SAS URL generation for better clarity.
- Improved analyzer creation process with unique ID generation and logging.
- Enhanced document analysis with operation ID extraction and result retrieval (a sketch of both patterns follows this list).
- Updated `build_person_directory.ipynb` to streamline face addition and identification processes.
- Refined face association and disassociation logic for better clarity and functionality.
- Improved person directory updates with clearer resource handling.
- Updated `classifier.ipynb` to enhance classifier ID generation and result handling.
- Improved operation ID extraction and result retrieval for classification tasks.
- Refactored `content_extraction.ipynb` to standardize analyzer ID usage and improve file handling.
- Enhanced audio and video analysis processes with clearer logging and data handling.
- Updated kernel specifications across notebooks for consistency.
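
As a rough sketch of the unique analyzer ID and operation ID handling mentioned above: the helper names and the operation URL shape below are illustrative assumptions, not necessarily what `sample_helper.py` implements.

```python
import re
import uuid


def generate_analyzer_id(prefix: str = "sample_analyzer") -> str:
    """Append a short random suffix so re-running the creation cell yields a new analyzer ID."""
    return f"{prefix}_{uuid.uuid4().hex[:8]}"


def extract_operation_id(operation_location: str) -> str:
    """Pull the operation ID out of an Operation-Location style URL.

    The URL shape assumed here is illustrative, e.g.
    https://<endpoint>/.../analyzers/<analyzer-id>/operations/<operation-id>?api-version=<version>
    """
    match = re.search(r"/operations/([^/?]+)", operation_location)
    if not match:
        raise ValueError(f"No operation ID found in: {operation_location}")
    return match.group(1)


if __name__ == "__main__":
    analyzer_id = generate_analyzer_id()
    operation_id = extract_operation_id(
        "https://example.cognitiveservices.azure.com/contentunderstanding/"
        f"analyzers/{analyzer_id}/operations/0f1e2d3c?api-version=<version>"
    )
    print(analyzer_id, operation_id)
```

Generating a fresh suffix per run is what makes the analyzer-creation cell re-runnable, as the notebook text quoted below also notes.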
"Before creating the analyzer, fill in the constant `ANALYZER_ID` with a relevant name for your task. In this example, we generate a unique suffix so that this cell can be run multiple times to create different analyzers.\n",
"\n",
"We use **training_data_sas_url** and **training_data_path** as set in the [.env](./.env) file and used in the previous step."
"We use **TRAINING_DATA_SAS_URL** and **TRAINING_DATA_PATH** as set in the [.env](./.env) file and used in the previous step."
Contributor

We changed to lower-case in #60 (comment).

Contributor

@changjian-wang I saw your new commits. Sorry for the confusion; I was referring to this specific line. I meant that we're using the lower-case training_data_sas_url and training_data_path as variables within the notebook, so I think we may not need to mention the .env file here, e.g. "We use training_data_sas_url and training_data_path set in the previous step."

"from azure.storage.blob import ContainerSasPermissions\n",
"from azure.core.credentials import AzureKeyCredential\n",
"from azure.identity import DefaultAzureCredential\n",
"from azure.ai.contentunderstanding.aio import ContentUnderstandingClient\n",
Contributor

We had some module errors in the automated check; we will need to add the new dependency to requirements.txt.
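
For instance, requirements.txt could declare the packages imported above; the package names here are assumptions (notably the Content Understanding SDK), so verify the exact names and versions on PyPI before pinning.

```text
# packages imported by the updated notebooks (names assumed; verify on PyPI)
azure-ai-contentunderstanding
azure-identity
azure-storage-blob
```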

