AZ-204 Developer Associate: Exploring Azure Storage Solutions for Developers
The content here is under the Attribution 4.0 International (CC BY 4.0) license
Azure is a cloud platform that offers a wide range of storage services to support modern application development and deployment. As you prepare for the AZ-204 exam, understanding Azure storage is essential: it's a foundational component you'll interact with in nearly every Azure solution.
Storage in Azure is integrated with computing, networking, and database services, enabling you to build scalable and reliable applications that handle large volumes of data efficiently. This guide covers the core storage services you need to master for AZ-204: Azure Storage Accounts, Blob Storage, File Shares, Table Storage, Storage Queues, and Azure Cosmos DB.
Azure Storage Account fundamentals
An Azure Storage Account is the top-level container that provides storage in the cloud and acts as a gateway to Azure's core storage services. When you create a storage account, you're establishing:
- A unique namespace (storage account name)
- A location (region)
- A replication strategy (redundancy level)
- Performance tier (standard or premium)
Types of data you can store
Blob Storage - For unstructured data
- Images, videos, documents, backups
- Binary large objects (blobs) stored in containers
- Three blob types: Block blobs (text/binary), Page blobs (virtual machine disks), Append blobs (log files)
- Requires a container (like a folder) to organize blobs
- Access levels: Private, Blob-level read access, Container-level read access
File Shares - For shared file access
- Network file shares similar to Dropbox
- Accessible from Windows, Linux, and macOS
- Multiple machines can mount the same share
- Useful for legacy applications needing SMB protocol
Table Storage - For structured, non-relational data
- Key-value pairs with attributes (NoSQL table storage)
- Uses partition keys and row keys for organization
- Supports CRUD operations but not relationships (no foreign keys)
- Cost-effective for semi-structured data
Queue Storage - For asynchronous messaging
- Point-to-point messaging for decoupling applications (not a pub/sub system)
- Simple queue-based communication between services
- Each message can be up to 64 KB
- Simpler alternative to Service Bus for basic scenarios
Access control and security
Three approaches to securing storage
1. Access Keys
- Direct connection string with account name and key
- Full access to all services in the account
- Can be regenerated for security rotation
- Not recommended for most scenarios due to security risk
2. Shared Access Signatures (SAS)
Can define:
- Expiration date/time (when the SAS becomes invalid)
- Specific services accessible (blob, queue, table, file)
- Permissions granted (read, write, delete, list)
- IP address restrictions
- Protocol (HTTPS only recommended)
Advantages:
- Limited scope and duration
- More secure than access keys for delegated access
Limitation: an ad-hoc SAS can't be revoked once issued, so use short expiration times or tie it to a stored access policy (covered below)
3. Azure Active Directory (Azure AD)
- Role-based access control (RBAC)
- Fine-grained permissions tied to identities
- Can restrict to specific containers
- Access can be revoked immediately
- Recommended approach for production applications
Storage Access Policies
For container-level access control, storage access policies allow you to:
- Define reusable authorization rules
- Revoke access without changing SAS tokens
- Set more granular permissions than direct SAS
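A minimal sketch of defining a stored access policy and issuing a SAS against it with the .NET SDK (assumes an existing BlobContainerClient named container authorized with a shared key credential; the policy name is illustrative, and the SDK classes are covered under "Working with Blob Storage" below):
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;
// Define a stored access policy on the container
var policy = new BlobSignedIdentifier
{
    Id = "read-only-policy",
    AccessPolicy = new BlobAccessPolicy
    {
        PolicyExpiresOn = DateTimeOffset.UtcNow.AddDays(7),
        Permissions = "r"
    }
};
await container.SetAccessPolicyAsync(permissions: new[] { policy });
// A SAS that references the policy inherits its permissions and expiry;
// editing or deleting the policy revokes every token issued against it
var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = container.Name,
    Resource = "c", // container-level SAS
    Identifier = "read-only-policy"
};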
Access tiers and storage performance
Access tiers (cost vs. availability)
Hot Access Tier (Default)
- For frequently accessed data
- Highest retrieval speed
- Highest storage cost, lowest access cost
- Suitable for: Active applications, recent backups
Cool Access Tier
- For infrequently accessed data
- Lower storage cost than hot, higher access cost
- 30-day minimum retention required for billing
- Suitable for: Archived files, old backups
Archive Access Tier
- For rarely accessed data (accessed < once per year)
- Lowest storage cost, highest retrieval latency
- Data must be rehydrated before reading (1-15 hours depending on priority)
- Retrieval process: change the blob's tier back to Hot or Cool, then wait for rehydration (see the SDK sketch after this list)
- Suitable for: Compliance archives, disaster recovery backups
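Changing tiers can also be done from the SDK; a minimal sketch, assuming an existing BlobClient named blob (the SDK classes are covered later in this guide):
using Azure.Storage.Blobs.Models;
// Move a blob to the Cool tier
await blob.SetAccessTierAsync(AccessTier.Cool);
// Start rehydrating an archived blob back to Hot with high priority
await blob.SetAccessTierAsync(AccessTier.Hot, rehydratePriority: RehydratePriority.High);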
Performance tiers
Standard Performance (Default)
- Good for general-purpose workloads
- HDD-backed storage
- Lower cost
Premium Performance
- For high-performance workloads
- SSD-backed storage
- Available for Block Blobs, File Shares, and Page Blobs
- Higher cost but guaranteed IOPS
Redundancy strategies
Azure provides multiple redundancy options to protect your data:
LRS (Locally Redundant Storage)
- Three copies within a single data center
- Lowest cost, least resilient
- Protects against disk/server failures only
ZRS (Zone-Redundant Storage)
- Three copies across three availability zones (different physical locations)
- Higher availability than LRS
- Slightly higher cost
- Recommended for critical data in same region
GRS (Geo-Redundant Storage)
- Three copies in the primary region + three copies in a secondary region hundreds of miles away
- Data replicates asynchronously to secondary region
- Can failover to secondary region if primary fails
- Higher cost but protects against regional outages
GZRS (Geo-Zone-Redundant Storage)
- Combines ZRS (primary) + GRS (secondary)
- Three copies across zones in primary region + three in secondary region
- Highest redundancy and cost
- Recommended for mission-critical data
Lifecycle management and data protection
Blob Lifecycle Rules
Automatically transition blobs between access tiers based on conditions:
Example rule:
- IF blob not modified for 30 days THEN move to Cool tier
- IF blob not modified for 90 days THEN move to Archive tier
- IF blob not modified for 365 days THEN delete
Rules apply every 24 hours (plan accordingly)
Can filter by blob prefix or tags
Blob Versioning
- Automatically maintains previous versions of blobs
- Each modification creates a new version
- Enable in Data Protection settings
- Useful for accidental deletion recovery
- Increases storage cost (you're storing multiple versions)
Blob Snapshots
- Read-only point-in-time copy of a blob
- Snapshot shares blocks with original blob (efficient storage)
- Deleting original blob doesnβt delete snapshots
- Snapshots can be deleted independently
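A minimal sketch of creating and addressing a snapshot with the SDK (assuming an existing BlobClient named blob):
// Create a point-in-time snapshot and get a client scoped to it
BlobSnapshotInfo snapshot = await blob.CreateSnapshotAsync();
var snapshotClient = blob.WithSnapshot(snapshot.Snapshot);
Console.WriteLine(snapshotClient.Uri); // the snapshot is addressed via a ?snapshot=<timestamp> query string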
Soft Delete
- Retains deleted blobs for a specified retention period (configurable, 1-365 days)
- Deleted blobs are kept in a soft-deleted state and can be recovered until the retention period expires
- Enable for both blobs and blob snapshots
- Useful for accidental deletion recovery, compliance requirements
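A minimal sketch of recovering a soft-deleted blob with the SDK (assumes soft delete is enabled and that container and blob are existing BlobContainerClient/BlobClient instances):
using Azure.Storage.Blobs.Models;
// List deleted blobs to find recovery candidates
await foreach (BlobItem deletedItem in container.GetBlobsAsync(states: BlobStates.Deleted))
{
    Console.WriteLine(deletedItem.Name);
}
// Restore a soft-deleted blob
await blob.UndeleteAsync();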
Working with Blob Storage
Blob Storage SDK for .NET
Required NuGet packages: Azure.Storage.Blobs (plus Azure.Identity for DefaultAzureCredential)
For AZ-204, youβll work with these key classes:
BlobServiceClient
// Connect to storage account and manage containers
var client = new BlobServiceClient(new Uri(blobUri), new DefaultAzureCredential()); // blobUri is the account endpoint, e.g. https://<account>.blob.core.windows.net
// Create a container
BlobContainerClient container = client.CreateBlobContainer("my-container");
// Get existing container
container = client.GetBlobContainerClient("my-container");
// List all containers
await foreach (BlobContainerItem item in client.GetBlobContainersAsync())
{
Console.WriteLine(item.Name);
}
BlobContainerClient
// Work with containers and upload/download blobs
BlobContainerClient container = client.GetBlobContainerClient("my-container");
// Upload a blob
BlobClient blob = container.GetBlobClient("myfile.txt");
using FileStream uploadFileStream = File.OpenRead("localfile.txt");
await blob.UploadAsync(uploadFileStream, overwrite: true); // overwrite if exists
// List all blobs in container
await foreach (BlobItem blobItem in container.GetBlobsAsync())
{
Console.WriteLine(blobItem.Name);
}
BlobClient
// Download a specific blob
BlobClient blob = container.GetBlobClient("myfile.txt");
BlobDownloadInfo download = await blob.DownloadAsync();
using (StreamReader reader = new StreamReader(download.Content))
{
string contents = await reader.ReadToEndAsync();
Console.WriteLine(contents);
}
// Get blob metadata
BlobProperties properties = await blob.GetPropertiesAsync();
foreach (var metadata in properties.Metadata)
{
Console.WriteLine($"{metadata.Key}: {metadata.Value}");
}
// Set metadata
var metadata = new Dictionary<string, string> { { "key", "value" } };
await blob.SetMetadataAsync(metadata);
Blob Leasing (Exclusive Locking)
Exclusive locks prevent concurrent modifications:
// Requires: using Azure.Storage.Blobs.Specialized;
BlobLeaseClient lease = blob.GetBlobLeaseClient();
// Acquire a 30-second lease
BlobLease leaseResult = await lease.AcquireAsync(TimeSpan.FromSeconds(30));
string leaseId = leaseResult.LeaseId;
// Perform operations while holding the lease (pass leaseId via BlobRequestConditions)...
// Release the lease (otherwise it expires after the lease duration)
await lease.ReleaseAsync();
// Renew the lease before it expires
await lease.RenewAsync();
// Break a lease immediately (useful for cleanup)
await lease.BreakAsync();
Shared Access Signatures (SAS) with BlobSasBuilder
using Azure.Storage.Sas;
// Create SAS token programmatically
BlobClient blob = container.GetBlobClient("myfile.txt");
var sasBuilder = new BlobSasBuilder()
{
BlobContainerName = "my-container",
BlobName = "myfile.txt",
Resource = "b", // "b" for blob, "c" for container
ExpiresOn = DateTimeOffset.UtcNow.AddHours(1),
};
// Grant specific permissions
sasBuilder.SetPermissions(BlobSasPermissions.Read); // read-only
// Generate SAS URI
Uri sasUri = blob.GenerateSasUri(sasBuilder); // requires a client authorized with a StorageSharedKeyCredential
Console.WriteLine(sasUri); // Share this URI for limited access
Change Feed
Tracks all changes to blobs in your storage account:
- Enables audit trails and change-driven workflows
- Stored as blobs in the $blobchangefeed container (Apache Avro format)
- Supported for: General Purpose v2 and Blob Storage accounts
- Enable in Data Protection settings
Use case: Trigger Azure Functions when blobs are modified
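A minimal sketch of reading the blob change feed with the Azure.Storage.Blobs.ChangeFeed package (assumes the change feed is enabled and that client is the BlobServiceClient shown earlier):
using Azure.Storage.Blobs.ChangeFeed;
BlobChangeFeedClient changeFeed = client.GetChangeFeedClient();
await foreach (BlobChangeFeedEvent changeEvent in changeFeed.GetChangesAsync())
{
    Console.WriteLine($"{changeEvent.EventTime}: {changeEvent.EventType} on {changeEvent.Subject}");
}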
File Shares
Azure File Shares provide SMB protocol access to shared folders:
File Share Tiers
Hot Tier
- Frequent access to file shares
- Optimized for interactive workloads
- Default tier
Cool Tier
- For infrequently accessed shares (used only a few days a week or less)
- Lower share-level fees, higher transaction costs
- Good for backup/archive scenarios
Premium Tier
- Guaranteed IOPS and throughput
- SSD-backed storage
- Only available with FileStorage storage accounts
Accessing File Shares
File shares mount like network drives on Windows, Linux, and macOS:
# Windows: Mount as network drive
net use Z: \\storageaccount.file.core.windows.net\sharename /user:Azure\storageaccount <storage-account-key>
# Linux: Mount with CIFS
sudo mount -t cifs //storageaccount.file.core.windows.net/sharename /mnt \
-o username=storageaccount,password=key,vers=3.0
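File shares can also be used from code via the Azure.Storage.Files.Shares SDK instead of mounting; a minimal upload sketch (share, directory, and file names are illustrative):
using Azure;
using Azure.Storage.Files.Shares;
var share = new ShareClient("<connection-string>", "sharename");
ShareDirectoryClient directory = share.GetDirectoryClient("logs");
await directory.CreateIfNotExistsAsync();
ShareFileClient file = directory.GetFileClient("app.log");
using FileStream stream = File.OpenRead("app.log");
await file.CreateAsync(stream.Length); // allocate the file with its final size
await file.UploadRangeAsync(new HttpRange(0, stream.Length), stream); // write the content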
Table Storage
Table Storage stores semi-structured data as key-value entities:
Core concepts
Partition Key: Groups related entities for efficient querying
Row Key: Unique identifier within a partition
Entity: A record with properties (like a row in a table)
using Azure;
using Azure.Data.Tables;
using Azure.Identity;
// Connect to table
var tableClient = new TableClient(
new Uri("https://storageaccount.table.core.windows.net"),
"MyTable",
new DefaultAzureCredential()
);
// Define an entity
public class Person : ITableEntity
{
public string PartitionKey { get; set; } // e.g., "Sales"
public string RowKey { get; set; } // e.g., "001"
public string Name { get; set; }
public int Age { get; set; }
public DateTimeOffset? Timestamp { get; set; }
public ETag ETag { get; set; }
}
// Create/insert entity
var person = new Person
{
PartitionKey = "Sales",
RowKey = "001",
Name = "John",
Age = 30
};
await tableClient.AddEntityAsync(person);
// Retrieve specific entity
Person entity = await tableClient.GetEntityAsync<Person>("Sales", "001");
// Query all entities in partition
await foreach (Person p in tableClient.QueryAsync<Person>(x => x.PartitionKey == "Sales"))
{
Console.WriteLine(p.Name);
}
// Update entity
person.Age = 31;
await tableClient.UpdateEntityAsync(person, ETag.All);
// Delete entity
await tableClient.DeleteEntityAsync("Sales", "001");
Storage Queues
Queue Storage decouples applications with simple message passing:
Queue basics
using Azure.Identity;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;
// Connect to queue
var queueClient = new QueueClient(
new Uri("https://storageaccount.queue.core.windows.net/myqueue"),
new DefaultAzureCredential()
);
// Send a message
await queueClient.SendMessageAsync("Hello, Queue!");
// Peek at message without removing (read-only)
PeekedMessage[] messages = await queueClient.PeekMessagesAsync(maxMessages: 10);
foreach (PeekedMessage msg in messages)
{
Console.WriteLine(msg.MessageText);
}
// Receive message (removes from queue)
QueueMessage message = await queueClient.ReceiveMessageAsync();
Console.WriteLine(message.MessageText);
// Must explicitly delete after processing
await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
// Get queue properties (message count, etc.)
QueueProperties properties = await queueClient.GetPropertiesAsync();
Console.WriteLine($"Approximate message count: {properties.ApproximateMessagesCount}");
Queue + Azure Functions integration
Azure Functions can be triggered by queue messages:
[Function("ProcessQueueMessage")]
public static void Run(
[QueueTrigger("myqueue")] string message,
ILogger log)
{
log.LogInformation($"C# Queue trigger function processed: {message}");
// Messages must be base64 encoded by Azure Functions
// Automatic deserialization happens
}
Important: Azure Functions retries failed messages 5 times. Failed messages after 5 retries go to a poison queue (myqueue-poison).
AzCopy tool
AzCopy is a command-line utility for transferring data between storage accounts:
Upload to Azure Storage
# Upload single file
azcopy copy "C:\path\to\file.txt" "https://storageaccount.blob.core.windows.net/container/SAS_TOKEN"
# Upload directory (non-recursive)
azcopy copy "C:\path\to\folder\*" "https://storageaccount.blob.core.windows.net/container/?SAS_TOKEN"
# Upload directory recursively (includes subdirectories)
azcopy copy "C:\path\to\folder" "https://storageaccount.blob.core.windows.net/container/?SAS_TOKEN" --recursive
# Create container first (if doesn't exist)
azcopy make "https://storageaccount.blob.core.windows.net/newcontainer/?SAS_TOKEN"
Download from Azure Storage
# Download single blob
azcopy copy "https://storageaccount.blob.core.windows.net/container/file.txt?SAS_TOKEN" "C:\local\file.txt"
# Download entire container
azcopy copy "https://storageaccount.blob.core.windows.net/container?SAS_TOKEN" "C:\local\" --recursive
Copy between storage accounts
# Copy from one account to another
azcopy copy "https://source.blob.core.windows.net/container?SAS_TOKEN" "https://dest.blob.core.windows.net/container?SAS_TOKEN" --recursive
# Sync (mirror one source to destination)
azcopy sync "https://source.blob.core.windows.net/container?SAS_TOKEN" "https://dest.blob.core.windows.net/container?SAS_TOKEN"
Azure CLI tool for blob operations
Azure CLI provides convenient commands for blob management:
# Copy blob
az storage blob copy start --source-uri <source-uri> --destination-blob <blob-name> --destination-container <container-name>
# Delete blob
az storage blob delete --name <blob-name> --container-name <container-name>
# Download blob
az storage blob download --name <blob-name> --container-name <container-name> --file <local-path>
# Upload blob
az storage blob upload --name <blob-name> --container-name <container-name> --file <local-file-path>
# Sync a local directory to a container
az storage blob sync --source <local-path> --container <container-name>
# Generate a SAS token for a blob (for delegated access)
az storage blob generate-sas --name <blob-name> --container-name <container-name> --permissions r --expiry <expiry-datetime>
Learn more: Azure CLI storage commands documentation
ARM Templates for Storage
Automate storage account provisioning with Azure Resource Manager (ARM) templates:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"environment": {
"type": "string",
"defaultValue": "dev",
"allowedValues": ["dev", "staging", "prod"]
},
"storageAccountCount": {
"type": "int",
"defaultValue": 1,
"minValue": 1,
"maxValue": 10
},
"createdDate": {
"type": "string",
"defaultValue": "[utcNow('yyyy-MM-dd')]"
}
},
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2023-01-01",
"name": "[concat('storage', copyIndex(), uniqueString(resourceGroup().id))]",
"location": "[resourceGroup().location]",
"sku": {
"name": "Standard_LRS"
},
"kind": "StorageV2",
"properties": {
"accessTier": "Hot",
"minimumTlsVersion": "TLS1_2",
"supportsHttpsTrafficOnly": true
},
"copy": {
"name": "storageAccountsCopy",
"count": "[parameters('storageAccountCount')]"
},
"tags": {
"environment": "[parameters('environment')]",
"createdDate": "[utcNow()]"
}
}
],
"outputs": {
"storageAccountIds": {
"type": "array",
"copy": {
"count": "[parameters('storageAccountCount')]",
"input": "[resourceId('Microsoft.Storage/storageAccounts', concat('storage', copyIndex(), uniqueString(resourceGroup().id)))]"
}
}
}
}
Azure Cosmos DB
Azure Cosmos DB is a globally distributed, multi-model NoSQL database:
Key characteristics
- Fully managed: No server/infrastructure management
- Multi-API: SQL, Table, MongoDB, Gremlin, Cassandra
- Global distribution: Replicate data across regions with single-digit millisecond latency
- Guaranteed throughput: Predictable performance with SLAs
- NoSQL: No relationships (use embedded documents instead)
Pricing model
Cosmos DB charges for:
- Request Units (RUs) - throughput cost for read/write operations
  - Provisioned throughput: pay per hour for reserved RUs
  - Serverless: pay per consumed RU
- Storage - data stored in the database
Free tier: 400 RU/s + 5 GB storage
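Because you pay per RU, it's worth inspecting the charge of individual operations; a minimal sketch, assuming a Container client and Product type like the ones created in "Working with Cosmos DB" below:
// Every SDK response reports the RUs it consumed
ItemResponse<Product> readResponse = await container.ReadItemAsync<Product>("1", new PartitionKey("Electronics"));
Console.WriteLine($"Read consumed {readResponse.RequestCharge} RUs");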
API choice
SQL API: Recommended (most flexible, JSON documents)
Table API: Query existing Table Storage with Cosmos features
MongoDB API: If you have MongoDB code/skills
Gremlin API: Graph queries and relationships
Cassandra API: Cassandra-compatible workloads
Core concepts
Database: Container for multiple collections/containers
Container: Stores individual items (documents)
Item: JSON document with properties
Partition Key: Divides data across physical partitions for scale
- Choose a property with high cardinality and even distribution
- Can't be changed after creation
TTL (Time to Live): Auto-delete items after time period
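A minimal sketch of creating a database and container with a partition key and a default TTL (assuming a CosmosClient named client, as created in the next section; names and values are illustrative):
Database database = await client.CreateDatabaseIfNotExistsAsync("mydb");
Container container = await database.CreateContainerIfNotExistsAsync(
    new ContainerProperties(id: "items", partitionKeyPath: "/category")
    {
        DefaultTimeToLive = 60 * 60 * 24 * 30 // auto-delete items after 30 days unless overridden per item
    },
    throughput: 400);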
Working with Cosmos DB
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json;
// Connect
var client = new CosmosClient("https://account.documents.azure.com:443/", "<key>");
Database database = client.GetDatabase("mydb");
Container container = database.GetContainer("items");
// Define item
public class Product
{
[JsonProperty("id")] // Cosmos items require a lowercase "id"; the SDK's default serializer is Newtonsoft.Json
public string Id { get; set; }
public string Name { get; set; }
public decimal Price { get; set; }
public string Category { get; set; } // Good partition key
}
// Create item
var item = new Product { Id = "1", Name = "Laptop", Price = 999, Category = "Electronics" }; // Category must match the partition key value below
ItemResponse<Product> created = await container.CreateItemAsync(
item,
new PartitionKey("Electronics")
);
// Query items
var query = new QueryDefinition("SELECT * FROM products p WHERE p.price > @price")
    .WithParameter("@price", 500);
FeedIterator<Product> iterator = container.GetItemQueryIterator<Product>(query);
List<Product> results = new();
while (iterator.HasMoreResults)
{
FeedResponse<Product> response = await iterator.ReadNextAsync();
results.AddRange(response);
}
// Update item
item.Price = 899;
await container.UpsertItemAsync(item, new PartitionKey(item.Category));
// Delete item
await container.DeleteItemAsync<Product>("1", new PartitionKey("Electronics"));
Consistency levels
Cosmos DB offers five levels of consistency (consistency vs. performance trade-off):
Strong → Bounded Staleness → Session → Consistent Prefix → Eventual
Strong = strictest consistency, highest latency; Eventual = most relaxed, highest throughput
Strong: All replicas immediately in sync (highest latency)
Bounded Staleness: Data replicates asynchronously but within bounds
- Configurable maximum lag (e.g., 5 seconds or 100,000 versions)
- Good balance for most applications
Session: Within a client session, reads see your own writes
- Different sessions may see stale data
- Default choice for web apps
Consistent Prefix: Never see out-of-order writes (temporal consistency)
Eventual: Highest throughput, data eventually consistent
Indexing and Composite Indexes
All properties indexed by default (impacts write cost).
Composite Indexes required for multi-property ORDER BY:
// Without composite index, this query fails:
SELECT * FROM products
WHERE category = 'Electronics'
ORDER BY price DESC, rating DESC
// Fix: Add composite index in portal
// Container β Settings β Indexing Policy β Add composite index
// Properties: (category, price DESC, rating DESC)
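The same composite index can also be declared in code when the container is created; a minimal sketch mirroring the properties listed above (assumes the Database client from earlier):
using System.Collections.ObjectModel;
var props = new ContainerProperties("products", "/category");
props.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
{
    new CompositePath { Path = "/category", Order = CompositePathSortOrder.Ascending },
    new CompositePath { Path = "/price", Order = CompositePathSortOrder.Descending },
    new CompositePath { Path = "/rating", Order = CompositePathSortOrder.Descending }
});
await database.CreateContainerIfNotExistsAsync(props);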
Partition key design
Partition key determines how data distributes across servers:
Good partition key:
- High cardinality (many distinct values): Category, User ID, Timestamp
- Evenly distributed: avoid "hot" partitions that receive most of the data
Bad partition key:
- Low cardinality: Boolean fields, Gender, Status
- Skewed distribution: 95% of data in one partition
Synthetic partition keys
When your main partition key isn't ideal, concatenate values:
// If partition key "country" has uneven distribution, derive a suffix from the item id:
public string id { get; set; } = Guid.NewGuid().ToString();
public string Country { get; set; }
public string PartitionKey => $"{Country}#{id.Substring(0, 1)}";
// Distributes uneven "country" data across 16 buckets (0-F, the id's first hex character)
Triggers and stored procedures
Cosmos DB supports server-side logic:
Triggers - Execute on CREATE, UPDATE, DELETE, REPLACE, ALL operations
// Pre-trigger: stamp a createdAt field on the document before insert
function PreInsertTrigger() {
var context = getContext();
var request = context.getRequest(); // pre-triggers read the incoming document from the request
var doc = request.getBody();
doc.createdAt = new Date().toISOString();
request.setBody(doc);
}
Stored Procedures - Execute server-side transactions
function BulkInsert(items) {
var context = getContext();
var collection = context.getCollection();
var response = context.getResponse();
var count = 0;
items.forEach(function (item) {
var accepted = collection.createDocument(
collection.getSelfLink(),
item,
function (err, doc) {
if (err) throw new Error(err.message);
count++;
response.setBody(count); // report how many creates have completed
}
);
if (!accepted) throw new Error("Request not accepted; not all items were inserted.");
});
}
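A minimal sketch of executing the stored procedure from the .NET SDK (all documents created in a single call must share the partition key value passed in; types come from Microsoft.Azure.Cosmos.Scripts):
// Execute BulkInsert, passing one array argument
dynamic[] newItems = { new { id = "10", category = "Electronics", name = "Mouse" } };
StoredProcedureExecuteResponse<int> result = await container.Scripts.ExecuteStoredProcedureAsync<int>(
    "BulkInsert",
    new PartitionKey("Electronics"),
    new dynamic[] { newItems });
Console.WriteLine($"Inserted {result.Resource} items");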
Change feed
Track all changes to items in a container:
// Process change feed events
var feedIterator = container.GetChangeFeedIterator<Product>(
ChangeFeedStartFrom.Beginning(),
ChangeFeedMode.LatestVersion
);
while (feedIterator.HasMoreResults)
{
FeedResponse<Product> response = await feedIterator.ReadNextAsync();
if (response.StatusCode == System.Net.HttpStatusCode.NotModified)
{
break; // caught up; no new changes right now
}
foreach (Product item in response)
{
Console.WriteLine($"Item {item.Id} changed");
}
}
Use case: Sync Cosmos DB with other systems, audit trail, real-time analytics
Cosmos DB with CLI
Create a Cosmos DB account with Azure CLI:
resourceGroup=my-resource-group
accountName=my-cosmos-account
databaseName=my-database
consistencyLevel=BoundedStaleness
az cosmosdb create \
--name $accountName \
--resource-group $resourceGroup \
--default-consistency-level $consistencyLevel \
--locations regionName=southcentralus failoverPriority=0 isZoneRedundant=false \
--locations regionName=northcentralus failoverPriority=1 isZoneRedundant=true \
--max-interval 5 \
--enable-automatic-failover true
Learn more: az cosmosdb create documentation
Cosmos DB RBAC
Azure Cosmos DB supports role-based access control (RBAC) for fine-grained permission management:
# Assign Cosmos DB Operator role to a user
az role assignment create \
--assignee <user-object-id> \
--role "Cosmos DB Operator" \
--scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>
Learn more: Azure Cosmos DB Operator role for RBAC
Data Factory
Azure Data Factory is an ETL (Extract, Transform, Load) service for data integration:
Main concepts
Pipeline: Workflow that orchestrates data movement/transformation
Activity: Unit of work (copy, transform, etc.)
Dataset: Data structure (source or destination)
Linked Service: Connection to data store/compute
Basic workflow
Create Linked Services → Define Datasets → Build Pipeline → Execute → Monitor
Example: Copy CSV from Blob to SQL Database
- Create Linked Service (connection to Blob Storage)
- Create Dataset (CSV file in blob container)
- Create Linked Service (connection to SQL Database)
- Create Dataset (SQL table)
- Create Pipeline with Copy Activity
- Run pipeline to transfer data
Blob Trigger Azure Function
Connect blob changes directly to Azure Functions with output binding to Cosmos DB:
[Function("ProcessBlobImage")]
public async Task Run(
[BlobTrigger("images/{name}", Connection = "AzureWebJobsStorage")] Stream myBlob,
string name,
[CosmosDB(
databaseName: "mydb",
containerName: "metadata",
Connection = "CosmosDBConnection")] IAsyncCollector<dynamic> documentsOut,
ILogger log)
{
log.LogInformation($"Processing blob: {name}");
// Simulate image processing
var metadata = new
{
id = Guid.NewGuid().ToString(),
blobName = name,
processedAt = DateTime.UtcNow,
size = myBlob.Length
};
// Write results to Cosmos DB
await documentsOut.AddAsync(metadata);
}
Changelog
- Feb 15, 2026 - Comprehensive refresh: reorganized structure, updated SDK versions, expanded code examples, deeper explanations
- Jan 19, 2025 - Updated content with quiz