- High-performance computing (HPC)
- Low-latency Remote Direct Memory Access (RDMA) networking
- Low-latency network connection between processes running on two servers or virtual machines in Azure.
- It copies data from the network adapter directly to memory and avoids wasting CPU cycles.
- To developers, it looks like a single machine with shared memory.
- Usage: Installed on a head node (must be Windows)
- The head node can then manage compute nodes (Linux or Windows)
- How it works:
- Uses the Intel MPI library.
- Some VM sizes (A8, A9, or those ending with 'r') have a network interface for RDMA
- Use-cases:
- Burst to Azure from on-prem
- molecular modeling and computational fluid dynamics
- In Azure:
- H-series VMs
- Azure Batch for HPC PaaS
- Manages VMs/compute nodes with job scheduling (uses an internal queue), autoscale, and data persistence.
- The Batch API lets you wrap an existing (on-prem or cloud) application so it runs as a service on a compute node pool that Batch manages.
- Good for intrinsically parallel (sometimes called embarrassingly parallel or perfectly parallel) workloads
- Little to no effort is needed to make the application parallel because there are little to no dependencies between the tasks
- Image processing
- Integrates with Azure Data Factory (pipeline solution in Azure)
- Usage
- Create Batch Account
- In the Batch account, create pools of compute nodes (VMs)
- In the Batch account, create a job that runs basic tasks on the pool (a minimal sketch follows)
- A task is a command-line (CMD) script
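- A minimal sketch of that flow using the Batch .NET SDK (Microsoft.Azure.Batch); the account URL, key, and pool/job/task IDs are placeholder values:

```csharp
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

// Placeholder credentials: use your Batch account's URL, name, and key.
var credentials = new BatchSharedKeyCredentials(
    "https://myaccount.westeurope.batch.azure.com", "myaccount", "<key>");

using (BatchClient batchClient = BatchClient.Open(credentials))
{
    // Create a job bound to an existing pool of compute nodes.
    CloudJob job = batchClient.JobOperations.CreateJob(
        "myjob", new PoolInformation { PoolId = "mypool" });
    job.Commit();

    // A task is just a command line to run on a node.
    var task = new CloudTask("task1", "cmd /c echo Hello from Batch");
    batchClient.JobOperations.AddTask("myjob", task);
}
```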
- Queues
- Split application into smaller modules with asynchronous messaging
- Easy-to-manage in long term: Modules can be swapped/modified/updated without having to update the entire application.
- Message queues
- Queue = buffer that supports send & receive operations.
- Supports the following cases:
- Sender can send message to the queue
- Receiver receives message from the queue
- Message is removed.
- Receiver examines (peeks at) the next available message
- The message is not removed from the queue.
- Many queues also support returning the queue length, or whether any messages are left, to avoid blocking the receiver.
- Many support transactions, leasing, and visibility on top of that.
- Leasing = concurrency lock
- The message is unavailable to other receivers while one receiver handles it.
- Concurrent queue consumers
- Does not block the business logic while requests are processed.
- Too many requests => the consumer service is overwhelmed
- Solution:
- Scale the consumers out horizontally.
- Only one instance should get each message
- A load balancer is needed to avoid one receiver becoming a bottleneck
- In Azure: Azure Storage queues, Azure Service Bus queues.
- Implementation in code
- Azure storage queues
- Send message:

```csharp
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("myqueue");
queue.CreateIfNotExists();
CloudQueueMessage message = new CloudQueueMessage("Hello, World");
queue.AddMessage(message);
```
- Receive message
- Peek (the message is not removed):

```csharp
CloudQueueMessage peekedMessage = queue.PeekMessage();
```

- Read & dequeue: get the message (it becomes invisible for 30 seconds), then delete it to dequeue:

```csharp
CloudQueueMessage retrievedMessage = queue.GetMessage();
queue.DeleteMessage(retrievedMessage);
```
- Azure Service Bus
- Extends Azure Storage queues with middleware, publish/subscribe messaging, load balancing, and FIFO.
- Send a batch to a queue:

```csharp
QueueClient queueClient = new QueueClient(ServiceBusConnectionString, QueueName);

string messageBody = "First Message";
Message message = new Message(Encoding.UTF8.GetBytes(messageBody));
await queueClient.SendAsync(message);

var messages = new List<Message>();
for (int i = 0; i < 10; i++)
{
    messages.Add(new Message(Encoding.UTF8.GetBytes($"Message {i:00}")));
}
await queueClient.SendBatchAsync(messages);
```
- Handle a message
- In the client SDK, you register a handler callback in which you call either `CompleteAsync` or `AbandonAsync` (after abandoning, you receive the message again).
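- A minimal sketch of such a handler using the Microsoft.Azure.ServiceBus client; the connection string and queue name are placeholders:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class Receiver
{
    static async Task Main()
    {
        var queueClient = new QueueClient("<connection-string>", "myqueue");

        var options = new MessageHandlerOptions(args =>
        {
            Console.WriteLine($"Handler error: {args.Exception.Message}");
            return Task.CompletedTask;
        })
        {
            MaxConcurrentCalls = 1,   // competing consumers: raise to process in parallel
            AutoComplete = false      // we complete/abandon explicitly below
        };

        queueClient.RegisterMessageHandler(async (message, token) =>
        {
            Console.WriteLine($"Received: {Encoding.UTF8.GetString(message.Body)}");
            // Complete on success; AbandonAsync would make the message visible again.
            await queueClient.CompleteAsync(message.SystemProperties.LockToken);
        }, options);

        Console.ReadKey();            // keep the process alive while the pump runs
        await queueClient.CloseAsync();
    }
}
```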
- Webhooks
- Polling => over-allocating resources just to check for new data.
- Use user-defined HTTP callbacks instead
- In Azure, you implement them with Azure Functions (HTTP trigger).
- `function.json` defines when the function is triggered.
- Example for webhooks:

```json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "methods": [ "post" ]
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}
```

- Example for a GitHub trigger:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "webHookType": "github",
      "name": "req"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ],
  "disabled": false
}
```
- Configure to send emails
- Use SendGrid
- Flexible API, scalable, real-time analytics for sent messages, transactional.
- SDKs are available for C#.
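- A minimal send sketch, assuming the SendGrid C# SDK (SendGrid NuGet package); the API key and addresses are placeholders:

```csharp
using System.Threading.Tasks;
using SendGrid;
using SendGrid.Helpers.Mail;

static class WelcomeMailer
{
    public static async Task SendWelcomeEmailAsync()
    {
        var client = new SendGridClient("<sendgrid-api-key>");
        var msg = MailHelper.CreateSingleEmail(
            new EmailAddress("noreply@contoso.com", "Contoso"),
            new EmailAddress("user@example.com"),
            "Welcome!",
            plainTextContent: "Hello from SendGrid",
            htmlContent: "<strong>Hello from SendGrid</strong>");

        // Response carries the status code and headers from the SendGrid API.
        Response response = await client.SendEmailAsync(msg);
    }
}
```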
- Configure an event and publish model
- Stipulation: Requirement that applications will handle high volume and velocity.
- Stipulate => make an agreement or to demand
- Applications that process work in a serial manner => difficult to meet the stipulation
- Solution: event-driven architecture.
- An event-driven architecture consists of *event producers* that generate a stream of events and *event consumers* that listen for the events.
- Decoupling => producers from consumers, consumers from each other.
- Common implementations
- Simple event processing
- Event immediately triggers an action.
- E.g. Azure Functions with Azure Service Bus trigger.
- Complex event processing
- Looks at a series of events to find patterns.
- E.g. Azure Stream Analytics or Apache Storm.
- Event stream processing
- A streaming platform serves as a pipeline to ingest events and feed them to processors.
- E.g. Azure IoT Hub or Apache Kafka.
- Azure Event Grid
- Built-in support for incoming events from Azure services; also supports custom application & third-party events via webhooks/custom topics.
- You can filter & route events to different endpoints, or use multicasting to send events to multiple endpoints.
- Concepts:
- Event sources send events to topics.
- Event handlers listen to topics via event subscriptions.
- Subscribe to Blob Storage events to send them to an endpoint
- Supported only for Blob Storage or General Purpose v2 storage accounts
- Using the Azure CLI:

```
az group create --name DemoGroup --location eastus
az storage account create --name demostor --location eastus --resource-group DemoGroup --sku Standard_LRS --kind BlobStorage --access-tier Hot
az eventgrid event-subscription create --resource-id $storageid --name contosostoragesub --endpoint https://contoso.com/api/update
```
- Security and authentication
- Three types:
- WebHook event delivery
- `ValidationCode` handshake (programmatic): the application gets a validation code & echoes it back.
- `ValidationURL` handshake (manual): send a GET request to the validation URL.
- Event subscriptions
- You need the Microsoft.EventGrid/EventSubscriptions/Write permission on the resource.
- Custom topic publishing
- SAS (recommended) or key authentication: include the resource, an expiration time, and a signature.
- Configure the Azure Relay service
- Exposes on-prem services to the cloud for different communication patterns (request/response, P2P, publish/subscribe)
- Especially useful for bidirectional communication via WebSockets.
- It only uses outbound ports from on-prem, for extra security.
- Flow
- Client => Azure Relay service => Gateway node (each has its own relay/input)
- If it's a listening request (bidirectional):
- Creates a new relay
- If it's a request to a specific relay:
- The gateway forwards it to the gateway node that owns the relay.
- The relay-owning gateway sends a rendezvous request to the client.
- Rendezvous => temporary channel to exchange messages.
- In Node.js
- `ws` package => WebSocket protocol client library
- `hyco-ws` package => extends `ws` with Azure Relay support
- Use `hycows.Server` instead of `ws.Server`; mostly contract compatible
- Applications can authenticate to Azure Relay using Shared Access Signature (SAS) authentication
- You set permissions as authorization rules in a Relay namespace.
- Create & configure a Notification hub
- Azure Notification Hub provides server to client mobile app communication through push notifications.
- From any back-end (cloud or on-prem) to any platform (iOS, Android, Windows..)
- Delivered through Platform Notification Systems (PNSs) => Platform specific infrastructures
- Provides a handle used to push notifications
- No common interface; differs for iOS, Android, Windows.
- Notification Hubs: a multi-platform, scaled-out abstraction over PNSs; no handle management needed, you send notifications to the hub and it routes them on to users/interest groups.
- Use cases: location-based coupons to segments, breaking news to all, codes for MFA.
- Flow: the client app contacts the PNS to retrieve a push handle; the back end stores the handle and uses it to push notifications.
- SDKs exist for iOS, Xamarin, and C# (a send sketch follows).
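- A minimal back-end send sketch, assuming the Microsoft.Azure.NotificationHubs SDK; the connection string, hub name, and toast payload are placeholders:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;

static class NewsNotifier
{
    public static async Task SendBreakingNewsAsync()
    {
        NotificationHubClient hub = NotificationHubClient
            .CreateClientFromConnectionString("<connection-string>", "<hub-name>");

        // Windows (WNS) toast payload; other platforms have analogous methods,
        // e.g. SendAppleNativeNotificationAsync for APNS.
        string toast = "<toast><visual><binding template=\"ToastText01\">"
                     + "<text id=\"1\">Breaking news!</text></binding></visual></toast>";
        await hub.SendWindowsNativeNotificationAsync(toast);
    }
}
```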
- Create & configure an Event Hub
- Event hubs = big data streaming platform and event ingestion service.
- Event ingestion = "front door" for an event pipeline
- Sits between publishers and consumers.
- Event ingestion = "front door" for an event pipeline
- Seamless integration with data + analytics services inside/outside Azure.
- Key components:
- Event producers: can publish using HTTPS, AMQP 1.0, or Apache Kafka (a send sketch follows this list).
- Partitions: per-consumer views of the stream (state + position + offsets)
- Throughput units: pre-purchased units that control the capacity of Event Hubs
- Event receivers: connect via AMQP 1.0.
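- A minimal producer sketch, assuming the Microsoft.Azure.EventHubs SDK; the connection string, hub name, and payload are placeholders:

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

static class TelemetrySender
{
    public static async Task SendTelemetryAsync()
    {
        var builder = new EventHubsConnectionStringBuilder("<connection-string>")
        {
            EntityPath = "<event-hub-name>"
        };
        EventHubClient client = EventHubClient.CreateFromConnectionString(builder.ToString());

        // Events are just byte payloads; the service assigns them to partitions.
        await client.SendAsync(new EventData(Encoding.UTF8.GetBytes("{\"temp\": 21.5}")));
        await client.CloseAsync();
    }
}
```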
- Can be integrated with Azure Stream Analytics
- Azure Stream Analytics: PaaS for parallel Complex Event Processing (CEP) pipelines for BI.
- Azure IoT Hub leverages Event Hubs for its telemetry flow path
- Both support the ingestion of data with low latency and high reliability
- Azure IoT Hub
- Central message hub for bidirectional communication between your IoT application and the devices it manages.
- Use cases: device=>cloud telemetry, file upload from devices, request-reply.
- You can run Azure services (Azure Functions, Azure Stream Analytics, Azure Machine Learning, ..), or your own code on-prem on devices with remote cloud monitoring & management
- Create & configure a Service Bus
- Service Bus extends Storage Queues.
- Communication mechanisms:
- Queues (simple queues)
- Each queue acts as a broker (intermediary) that stores messages until they are received.
- One-directional broker: each message is delivered to a single recipient.
- Topics (publish-and-subscribe)
- One-directional broker: single topic can have multiple subscriptions.
- Can use filters
- Relays (connects with direct communication)
- Bidirectional communications
- No storage or broker: just passes messages to the destination.
- Namespaces:
- Application scoped container for all messaging components.
- Can have multiple queues & topics.
- Can have different communication mechanisms
- Messages
- Enables 1:n relationships with topics & subscriptions
- Message sessions enable message ordering or deferral.
- Decouples applications: improves scalability and reliability.
- Queues
- Offers FIFO with ordering + timestamps, and one or more competing consumers.
- Messages are delivered in pull mode, held in redundant storage.
- Scenarios : connects on-prem with cloud or cloud to cloud.
- Configure with Microsoft Graph
- Unified API for Azure AD, Office 365, Windows 10 (activities and devices), Microsoft Enterprise Mobility (Microsoft Identity Manager, Microsoft Intune, Microsoft Advanced Threat Analytics, and Azure Advanced Threat Protection)
- Uses relationships such as `memberOf`
- Asynchronous Processing
- Multiple CPU cores enable multiple threads.
- In the past this was complex, with locks that took years to master.
- Today it's simplified by e.g. the TPL (Task Parallel Library).
- Task Parallel Library (TPL)
- Extends System.Threading to simplify parallelism and concurrency.
- Dynamically scales the degree of concurrency for maximum efficiency.
- Handles partitioning of work, scheduling threads in thread pool, cancellation, state management and more.
- A task holds metadata about the delegate: whether the delegate has started/completed executing and its resulting value.
- Use async-await to refactor TPL code (a sketch follows).
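- A minimal sketch of such simplified concurrency with async-await; the URLs are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Downloader
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // Each Task tracks its delegate's state (running/completed) and result.
        Task<string> first = http.GetStringAsync("https://example.com/");
        Task<string> second = http.GetStringAsync("https://example.org/");

        // await suspends without blocking the thread; both downloads overlap.
        string[] pages = await Task.WhenAll(first, second);
        Console.WriteLine($"Got {pages[0].Length} and {pages[1].Length} chars");
    }
}
```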
- Serverless computing (Azure Functions and Azure Logic Apps)
- Useful for "gluing" together disparate systems (e.g. integration solutions)
- They can all define input, actions, conditions, and output.
- You can run each of them on a schedule or via a trigger.
- Azure Functions (FaaS: Functions as a Service)
- Support many languages such as C#, F#, Node.js, Java or PHP.
- Excellent for: data processing, integrating systems, working with IoT, simple APIs/microservices.
- Integrates with Azure and 3rd party services
- Integrations can trigger your function or serve as input/output.
- You can edit & compile & test (with request manipulations) & save in Portal.
- One function app can have many functions.
- In .NET
- Provided by the Azure WebJobs SDK.
- `function.json` specifies inputs/outputs:

```json
{
  "disabled": false,
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "message",
      "queueName": "announcementqueue",
      "connection": "StorageConnectionString"
    }
  ]
}
```

- Required fields: direction, name, type
- The C# script binds to the "in" property as a method parameter:

```csharp
public static void Run(string message, System.TraceWriter log)
{
    log.Info($"New message: {message}");
}
```
- Logic Apps
- Build & schedule & automate processes as workflows.
- Scenarios: B2B, on-prem or cloud only or both, data/system integration, enterprise application integration (EAI).
- You can integrate with on-premises systems using an On-premises Data Gateway.
- Download and install gateway
- Create Azure resource for gateway (on-premises data gateway)
- Connect to on-premises data
- Add a connector that supports on-premises connections, for example, SQL Server.
- Select Connect via on-premises data gateway.
- Build visually in the Logic Apps designer, available in the Azure portal & Visual Studio. You can also use PowerShell & ARM templates for select tasks.
- Describe in a JSON template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2016-06-01/Microsoft.Logic.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "uri": { "type": "string" }
  },
  "triggers": {
    "request": { "type": "request", "kind": "http" }
  },
  "actions": {
    "readData": {
      "type": "Http",
      "inputs": { "method": "GET", "uri": "@parameters('uri')" }
    }
  },
  "outputs": {}
}
```
Implement interfaces for storage or data access
- Blocking the calling thread during I/O
- The thread enters a wait state and other operations can't use it.
- An anti-pattern: it reduces performance & hurts vertical scalability.
- Asynchronous code
- Don't wait on I/O; move long-running work to other CPU cores.
- Allows servers to handle more requests & UIs to be more responsive.
- Use asynchronous interfaces for all data-tier code, and pair those interfaces with asynchronous implementations.
- E.g. if you use Entity Framework directly in a controller, you'll need to rewrite code if you switch to another mapper or database in the future => use interfaces with async implementations (a sketch follows).
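- A minimal sketch of such an abstraction, assuming EF Core; `Product`, `AppDbContext`, and `IProductRepository` are hypothetical names:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Product
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical EF Core context with a Products DbSet.
public class AppDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

// Controllers depend only on this interface, so the EF-based
// implementation can be swapped for another store without rewriting callers.
public interface IProductRepository
{
    Task<Product> GetAsync(Guid id);
    Task AddAsync(Product product);
}

public class EfProductRepository : IProductRepository
{
    private readonly AppDbContext _context;

    public EfProductRepository(AppDbContext context) => _context = context;

    public Task<Product> GetAsync(Guid id) =>
        _context.Products.FirstOrDefaultAsync(p => p.Id == id);

    public async Task AddAsync(Product product)
    {
        _context.Products.Add(product);
        await _context.SaveChangesAsync();   // async I/O keeps the thread free
    }
}
```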
- Application workloads are unpredictable
- Overestimate => Pay for unnecessary compute resources
- Underestimate => Poor user experience
- Ideally => Use extra instance only when it's needed and shut down when it's not.
- Four common computing patterns you'll see for web applications in the cloud
- On and Off
- |||||....|||||......
- E.g.: batch processing.
- Growing fast
- |.||.|||.||||.|||||.||||||
- Often growing start-ups.
- Unpredictable bursting
- |..|..|||||||||..|.|
- Predictable bursting
- |.|.||||.|.|.||||.|.|
- E.g. during Black Friday for an e-commerce site.
- Distribute applications across multiple instances to provide redundancy + performance.
- A load balancer is needed to distribute.
- Auto scale
- Primary advantage of the cloud is elastic scaling.
- Ability to use as much capacity as you need, scale out if load increases, scale in when the extra capacity is not needed.
- Supported in many Azure Services
- IaaS: Azure Virtual Machine Scale Sets (identical VMs in the same set)
- PaaS: Azure App Service
- Or even database services such as Cosmos DB
- Auto-scale metrics
- Supported in Standard and higher App Service pricing plans.
- Autoscale can be triggered based on metrics or at scheduled date and time.
- Metrics are aggregated over all instances of the plan
- E.g. CpuPercentage, MemoryPercentage, BytesReceived, BytesSent, HttpQueueLength, DiskQueueLength (reads + writes queued on storage)
- ❗ Basic plan does not include AutoScaling
- Durable Functions
- An extension of Azure Functions that lets you write stateful functions
- The extension manages state, checkpoints, and restarts for you.
- Logic
- You get a starter object injected in JS & C# (`DurableOrchestrationClient`). Pseudocode:

```
if an instance with instanceId already exists:
    return HttpStatusCode.Conflict
else:
    use starter to start a new instance
    use starter to create a response from instanceId
    return that response
```
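- A minimal C# sketch of that starter logic, assuming the Durable Functions 1.x extension; the function name, orchestrator name, and singleton instance ID are placeholders:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HttpStart
{
    [FunctionName("HttpStart")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [OrchestrationClient] DurableOrchestrationClient starter)
    {
        const string instanceId = "singleton";   // placeholder instance id

        // Conflict if an instance with this id already exists.
        var existing = await starter.GetStatusAsync(instanceId);
        if (existing != null)
            return req.CreateResponse(HttpStatusCode.Conflict);

        await starter.StartNewAsync("MyOrchestrator", instanceId, null);

        // Response contains status-query URLs for the new instance.
        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}
```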
- An application that communicates with elements running in the cloud has to be sensitive to the transient faults that can occur in this environment.
- Faults e.g. momentary loss of network connectivity to components and services, the temporary unavailability of a service, or timeouts that occur when a service is busy.
- These faults are self-correcting; if the action is retried after a delay, it's likely to succeed.
- E.g. `ConnectionClosed`, `TimeOut`, `RequestCanceled`
- Strategies
- Cancel: report the exception & cancel the operation. E.g. invalid credentials.
- Retry: if the specific fault reported is unusual or rare, retry immediately. E.g. a corrupted network packet.
- Retry after delay: fault caused by e.g. busy/connectivity failures. Try again after a short period of time.
- For more common transient failures, period between retries should be chosen to spread requests from multiple instances of the application as evenly as possible
- Reduces chance of being overloaded.
- Too many services retrying => takes longer to recover
- If the service fails again, wait & make another attempt; if necessary, increase delays between retry attempts until a maximum is reached.
- Delay can be increased incrementally or exponentially depending on the type of failure & probability that it'll be corrected during this time.
- Many SDKs implement retry policies, where some parameters can be set: maximum number of retries, amount of time between retries, etc.
- An application should log the details of faults & failing operations.
- Scaling out can lower frequency of faults caused by being overloaded etc.
- Partition the database & spread the load across multiple servers.
- In code (a minimal sketch follows)
- Try/catch the exception
- Set a delay (`Delay = TimeSpan.FromSeconds(5)`) and wait for it (`Task.Delay`)
- Log the exception
- `throw` if the retry count is at the maximum
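- A minimal retry-with-delay sketch; `operationAsync` and `isTransient` are hypothetical placeholders for the fault-prone call and the fault classifier:

```csharp
using System;
using System.Threading.Tasks;

static class TransientRetry
{
    // operationAsync: the fault-prone call; isTransient: which exceptions to retry.
    public static async Task ExecuteWithRetriesAsync(
        Func<Task> operationAsync, Func<Exception, bool> isTransient)
    {
        const int MaxRetries = 3;
        TimeSpan delay = TimeSpan.FromSeconds(5);

        for (int attempt = 1; ; attempt++)
        {
            try
            {
                await operationAsync();
                return;   // success
            }
            catch (Exception ex) when (isTransient(ex))
            {
                Console.WriteLine($"Transient failure #{attempt}: {ex.Message}");
                if (attempt >= MaxRetries)
                    throw;   // retry count is at the maximum

                await Task.Delay(delay);
                delay = TimeSpan.FromTicks(delay.Ticks * 2);   // back off exponentially
            }
        }
    }
}
```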
- Querying Azure resources
- Using the Azure CLI
- The Azure CLI uses the `--query` argument to execute a JMESPath query.
- JMESPath => a JSON query language.
- The `--query` argument is supported by all commands in the Azure CLI.
- Return type
- JSON array, no order guarantee.
- Projection (like `select` in LINQ):

```
az vm list --query '[].{name:name, image:storageProfile.imageReference.offer}'
```

- Filtering (like `where` in LINQ):

```
az vm list --query "[?starts_with(storageProfile.imageReference.offer, 'WindowsServer')]"
```

- Combine projection + filtering:

```
az vm list --query "[?starts_with(storageProfile.imageReference.offer, 'Ubuntu')].{name:name, id:vmId}"
```
- Using the fluent Azure SDK
- A better option if you intend to write code to find connection information for a specific application instance.
- Flow:
- Connect
- You need an `azure.auth` file (a JSON file describing the secret, key URLs, etc.)
- You can create it like this: `az ad sp create-for-rbac --sdk-auth > azure.auth`
- Then: `Azure azure = Azure.Authenticate("azure.auth").WithDefaultSubscription();`
- List VMs:

```csharp
var vms = await azure.VirtualMachines.ListAsync();
foreach (var vm in vms)
{
    Console.WriteLine(vm.Name);
}
```

- Gather virtual machine metadata to determine the IP address:

```csharp
INetworkInterface targetnic = targetvm.GetPrimaryNetworkInterface();
INicIPConfiguration targetipconfig = targetnic.PrimaryIPConfiguration;
IPublicIPAddress targetipaddress = targetipconfig.GetPublicIPAddress();
Console.WriteLine($"IP Address:\t{targetipaddress.IPAddress}");
```
- Develop solutions that use Cosmos DB storage
- Create, read, update, and delete data by using appropriate APIs
- Use async, use interfaces!
- SQL API: the standard API. Key `DocumentClient` methods: `CreateDocumentAsync`, `ReadDocumentAsync`, `ReadDocumentFeedAsync` (read all documents), `CreateDocumentQuery`, `ReplaceDocumentAsync`, `UpsertDocumentAsync`, `DeleteDocumentAsync` (a CRUD sketch follows).
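- A minimal CRUD sketch with `DocumentClient`; the endpoint, key, database, and collection names are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

static class CosmosCrudDemo
{
    public static async Task RunAsync()
    {
        var client = new DocumentClient(
            new Uri("https://myaccount.documents.azure.com:443/"), "<key>");
        Uri collectionUri = UriFactory.CreateDocumentCollectionUri("mydb", "mycoll");
        Uri documentUri = UriFactory.CreateDocumentUri("mydb", "mycoll", "1");

        // Create
        await client.CreateDocumentAsync(collectionUri, new { id = "1", name = "demo" });

        // Read (partitioned collections also need a PartitionKey in RequestOptions)
        Document doc = await client.ReadDocumentAsync(documentUri);
        Console.WriteLine(doc.Id);

        // Update (upsert = replace-or-create)
        await client.UpsertDocumentAsync(collectionUri, new { id = "1", name = "updated" });

        // Delete
        await client.DeleteDocumentAsync(documentUri);
    }
}
```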
- Azure Cosmos DB's API for MongoDB
- Gremlin API: used to store and operate on graph data; supports modeling graph data and provides APIs to traverse it.
- Cassandra API: you can switch from Apache Cassandra to Azure Cosmos DB's Cassandra API by just changing a connection string.
- Table API: applications written for Azure Table storage can migrate to Azure Cosmos DB via the Table API with no code changes and take advantage of premium capabilities.
- Implement partitioning schemes
- Cosmos DB uses hash-based partitioning to spread logical partitions across physical partitions. The partition key value of an item is hashed by Cosmos DB, and the hashed result determines the physical partition.
- ❗ Limitations
- A single logical partition is allowed an upper limit of 10 GB of storage.
- Partitioned containers are configured with minimum throughput of 400 RU/s. Requests to the same partition key can't exceed the throughput allocated to a partition.
- It's important to pick a partition key that doesn't result in "hot spots" within your application.
- Synthetic partition keys
- It's a best practice to have a partition key with many distinct values, such as hundreds or thousands.
- Concatenate multiple properties of an item, e.g.:

```json
{
  "deviceId": "abc-123",
  "date": 2018,
  "partitionKey": "abc-123-2018"
}
```
- Use a partition key with a random suffix
- Distributes the workload more evenly by appending a random number to the end of the partition key value.
- You can perform parallel write operations across partitions.
- E.g. you might choose a random number between 1 and 400 and concatenate it as a suffix to the date.
- like 2018-08-09.1, 2018-08-09.2, and so on, through 2018-08-09.400
- This method results in better parallelism and overall higher throughput
- The randomizing strategy can greatly improve write throughput, but it's difficult to read a specific item
- Use a partition key with precalculated suffixes (a sketch follows this list)
- Easier to read than randomizing.
- E.g. you have an ID; create the partition key from the date + a hash of the ID.
- The writes are evenly spread across the partition key values, and across the partitions.
- You can easily read a particular item and date because you can calculate the partition key value for a specific ID.
- The benefit of this method is that you can avoid creating a single hot partition key.
- A hot partition key is a partition key that takes all the workload.
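- A minimal sketch of computing such a key; `GetPartitionKey` is a hypothetical helper, and the suffix count of 400 mirrors the example above:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class PartitionKeys
{
    public static string GetPartitionKey(string id, DateTime date, int suffixCount = 400)
    {
        // Use a stable hash: string.GetHashCode() is randomized per process in .NET Core.
        using (var md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(id));
            int suffix = (BitConverter.ToInt32(hash, 0) & int.MaxValue) % suffixCount + 1;
            return $"{date:yyyy-MM-dd}.{suffix}";   // e.g. "2018-08-09.173"
        }
    }
}
```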
- You can provision throughput for a Cosmos DB database or container during creation, e.g. `OfferThroughput = 100000`
- Set the appropriate consistency level for operations
- Make the fundamental tradeoff between read consistency and availability, latency, and throughput.
- Strong consistency model (or linearization)
- Adds a steep price of higher latency (in steady state) and reduced availability (during failures).
- Easy to program applications
- Eventual consistency
- Higher availability and better performance
- Hard to program applications.
- Consistency levels in Cosmos DB (from strongest to weakest)
- Strong, Bounded staleness, Session, Consistent prefix, Eventual
- Strong: Always guaranteed to read the latest committed write.
- Bounded staleness
- Staleness can be configured in two ways:
- The number of versions (K) of the item
- The time interval (t) by which the reads might lag behind the writes
- 💡 Recommended for high consistency
- Staleness can be configured in two ways:
- Session
- Scoped to client session.
- honors the consistent-prefix (assuming a single “writer” session), monotonic reads, monotonic writes, read-your-writes, and write-follows-reads guarantees
- Monotonic reads: If a process reads the value of a data item x, any successive read operation on x by that process will always return that same value or a more recent value.
- Monotonic writes: A write operation by a process on a data item X is completed before any successive write operation on X by the same process.
- 💡 Recommended option for most scenarios.
- Consistent prefix
- Updates that are returned contain some prefix of all the updates, with no gaps. Consistent prefix guarantees that reads never see out-of-order writes.
- Eventual: There's no ordering guarantee for reads. In the absence of any further writes, the replicas eventually converge.
- Consistent prefix, and eventual consistency provide about two times the read throughput when compared with strong and bounded staleness.
- For a given type of write operation, the write throughput for request units is identical for all consistency levels.
- Strong, Bounded staleness, Session, Consistent prefix, Eventual
- You can configure the default consistency level on your Azure Cosmos account at any time
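- In code, a client can also request a consistency level weaker than the account default; a minimal sketch with `DocumentClient`, where the endpoint and key are placeholders:

```csharp
using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

static class SessionClientFactory
{
    public static DocumentClient Create() =>
        // Request Session consistency for this client; a client can only weaken,
        // never strengthen, the account's default consistency level.
        new DocumentClient(
            new Uri("https://myaccount.documents.azure.com:443/"),
            "<key>",
            connectionPolicy: null,
            desiredConsistencyLevel: ConsistencyLevel.Session);
}
```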
- Develop solutions that use a relational database
- Provision and configure relational databases
- Create single database
- Create a resource -> Databases -> SQL Database
- Type database name, select subscription, resource group
- Select source: Blank, Sample, Back-up
- Server -> Create new
- Type server name, admin login, password, location
- Select pricing tier
- Standard/Basic/Premium (sets minimum DTUs)
- Select DTUs & maximum storage
- You can query in SQL Database -> Query Editor
- Create managed instance
- Azure SQL Database Managed Instance is a fully managed SQL Server Database Engine Instance hosted in Azure cloud. This is the best PaaS option for migrating your SQL Server database to the cloud.
- The managed instance deployment option provides high compatibility with on-premises SQL Server Database Engine.
- Cross-database query support is one of the main reasons to use managed instance over Azure SQL Database.
- ❗ Elastic Transactions / The Microsoft Distributed Transaction Coordinator is not supported.
- Flow: Create a resource -> Managed Instance -> Azure SQL Managed Instance
- Set e.g. collation, VNet
- Features:
- Availability: Always-on, backup
- Security: Auditing, certificates, credentials through Azure Key Vault or Shared Access Signature
- Set cryptographic providers
- Configuration:
- Collation
- Defines the collation of a database or table column, or a collation cast operation when applied to a character string expression
- Sets the alphabetical order based on culture, e.g. `Latin1_General_CS_AS_KS_WS` or `Traditional_Spanish_ci_ai`
- SQL Server Agent: Server options, e.g. replication options such as snapshot, transaction-log reader, notifications.
- Functionalities
- Bulk insert / openrowset
- Distributed transactions: ❗ Neither MSDTC (The Microsoft Distributed Transaction Coordinator) nor Elastic Transactions are currently supported in managed instances.
- Service broker, stored procedures/functions/triggers
- Behavior changes: e.g. in the values returned by `ServerProperty('InstanceName')`
- Configure elastic pools for Azure SQL Database
- SQL Database elastic pools
- Simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands.
- The databases are on a single Azure SQL Database server.
- Share a set number of resources at a set price
- Problem & solution
- Problem:
- A common application pattern is to provision a single database for each customer.
- But different customers often have varying and unpredictable usage patterns, and it is difficult to predict the resource requirements of each individual database user.
- You over-provision or under-provision
- Solution
- Elastic pools solve this problem by ensuring that databases get the performance resources they need when they need it. They provide a simple resource allocation mechanism within a predictable budget.
- eDTUs are shared between many databases.
- Costs 1.5x more than DTUs, but pool eDTUs can be shared by many databases, so fewer total eDTUs are needed.
- Choose the correct pool size
- Determine
- Maximum resources utilized by all databases in the pool.
- eDTUs: `MAX(<total number of DBs × average DTU utilization per DB>)`
- Maximum storage bytes utilized by all databases in the pool.
- Sum of the storage needed per DB
- Elastic jobs
- Management tasks are simplified by running scripts in elastic jobs.
- Flow:
- Create pool: SQL elastic pool -> +Add -> Create pool on new server or existing SQL server.
- In pool -> Configure pool
- Select a service tier, add databases to the pool, and configure the resource limits for the pool and its databases.
- DTU-based resource limits
- Max eDTUs per database, Min eDTUs per database, Max storage per database
- vCore-based resource limits
- Max vCores per database, Min vCores per database, Max storage per database
- Rescaling
- When rescaling a pool, database connections are briefly dropped
- The time needed to rescale a pool can depend on the total amount of storage space used by all databases in the pool
- Elastic transactions
- Allows you to run transactions across multiple databases.
- If both SQL Database servers support elastic transactions, you can create a server communication link.
- `New-AzureRmSqlServerCommunicationLink`: creates a new communication relationship between two SQL Database servers in Azure SQL Database. The relationship is symmetric, which means both servers can initiate transactions with the other server.
- In C#
- Multi-database applications
- The `TransactionScope` class establishes an ambient transaction.
- Ambient transaction = transaction that lives in the current thread.
- All connections opened within the TransactionScope participate in the transaction. If different databases participate, the transaction is automatically elevated to a distributed transaction.
- Ex:

```csharp
using (var scope = new TransactionScope())
{
    using (var conn1 = new SqlConnection(connStrDb1))
    {
        conn1.Open();
        SqlCommand cmd1 = conn1.CreateCommand();
        cmd1.CommandText = "insert into T1 values(1)";
        cmd1.ExecuteNonQuery();
    }

    using (var conn2 = new SqlConnection(connStrDb2))
    {
        conn2.Open();
        var cmd2 = conn2.CreateCommand();
        cmd2.CommandText = "insert into T2 values(2)";
        cmd2.ExecuteNonQuery();
    }

    scope.Complete();
}
```
- Sharded database applications
- Use the `OpenConnectionForKey` method of the elastic database client library to open connections for a scaled-out data tier.
- C#:

```csharp
using (var scope = new TransactionScope())
{
    using (var conn1 = shardmap.OpenConnectionForKey(tenantId1, credentialsStr))
    {
        conn1.Open();
        SqlCommand cmd1 = conn1.CreateCommand();
        cmd1.CommandText = "insert into T1 values(1)";
        cmd1.ExecuteNonQuery();
    }

    using (var conn2 = shardmap.OpenConnectionForKey(tenantId2, credentialsStr))
    {
        conn2.Open();
        var cmd2 = conn2.CreateCommand();
        cmd2.CommandText = "insert into T1 values(2)";
        cmd2.ExecuteNonQuery();
    }

    scope.Complete();
}
```
- Monitoring
- Use Dynamic Management Views (DMVs) in SQL DB.
- E.g. `sys.dm_tran_active_transactions`, `sys.dm_tran_database_transactions`, `sys.dm_tran_locks`
- Create, read, update, and delete data tables by using code (C# ADO.NET)
- First ensure you can reach the database: Database -> Set server firewall -> Add client IP
- You create/delete tables, populate tables, or update/delete and select data with T-SQL
- Example:

```csharp
string query = "UPDATE [guitarBrands] SET type = @type, name = @name, image = @image WHERE id = @id";

using (SqlConnection connection = new SqlConnection(
    ConfigurationManager.ConnectionStrings["brandsConnection"].ToString()))
using (SqlCommand command = new SqlCommand(query, connection))
{
    try
    {
        command.Parameters.Add(new SqlParameter("@type", newType.Text));
        command.Parameters.Add(new SqlParameter("@name", newName.Text));
        command.Parameters.Add(new SqlParameter("@image", newImage.Text));
        command.Parameters.Add(new SqlParameter("@id", id));
        connection.Open();

        // Submit the query; a non-query does not return results.
        command.ExecuteNonQuery();

        /* Read instead:
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine("{0} , {1} , {2} , {3} , {4}",
                    reader.GetGuid(0),
                    reader.GetString(1),
                    reader.GetInt32(2),
                    reader.IsDBNull(3) ? "NULL" : reader.GetString(3),
                    reader.IsDBNull(4) ? "NULL" : reader.GetString(4));
            }
        }
        */
    }
    catch
    {
        // log and handle exception(s)
    }
}
```