Hey there, fellow Go developer! Ready to dive into the world of Azure Blob Storage? You're in for a treat. Azure Blob Storage is a powerhouse for storing massive amounts of unstructured data, and with Go's concurrency features, we can build some seriously efficient integrations. We'll be using the `azblob` package, which is like a Swiss Army knife for all things Azure Blob Storage. Let's get cracking!
Before we jump in, make sure you've got:

- A recent Go installation (we'll use Go modules)
- An Azure account with a storage account set up, plus its access key
- The `azblob` package (`go get -u github.com/Azure/azure-storage-blob-go/azblob`)

Got all that? Great! Let's move on to the fun stuff.
First things first, let's create a new Go project:
```bash
mkdir azure-blob-go && cd azure-blob-go
go mod init azure-blob-go
```
Now, let's import the packages we'll need:
import ( "context" "log" "os" "github.com/Azure/azure-storage-blob-go/azblob" )
Alright, time to get our hands dirty with some authentication. We'll keep it simple and use a shared key credential, pulling the account name and key from environment variables:
```go
accountName, accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT"), os.Getenv("AZURE_STORAGE_ACCESS_KEY")
credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
if err != nil {
	log.Fatal("Invalid credentials:", err)
}
p := azblob.NewPipeline(credential, azblob.PipelineOptions{})
```
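By the way, if you only need to read from a container that allows public access, you can skip the account key entirely; the package also ships an anonymous credential. A minimal sketch:

```go
// Anonymous pipelines can only reach containers configured for public read access.
anonPipeline := azblob.NewPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{})
```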
Now for the good stuff. Let's run through some basic operations:
Creating a container:

```go
ctx := context.Background()
URL, _ := url.Parse(fmt.Sprintf("https://%s.blob.core.windows.net/mycontainer", accountName))
containerURL := azblob.NewContainerURL(*URL, p)
_, err = containerURL.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
if err != nil {
	log.Fatal("Error creating container:", err)
}
```
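One gotcha: Create fails if the container already exists. If that's fine for your app, you can detect that specific case through the package's typed errors. Here's a sketch:

```go
_, err = containerURL.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
if err != nil {
	// StorageError exposes the service's error code for typed handling.
	if serr, ok := err.(azblob.StorageError); ok && serr.ServiceCode() == azblob.ServiceCodeContainerAlreadyExists {
		log.Println("Container already exists, carrying on")
	} else {
		log.Fatal("Error creating container:", err)
	}
}
```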
Uploading a blob:

```go
blobURL := containerURL.NewBlockBlobURL("myblob")
data := []byte("Hello, Azure Blob Storage!")
_, err = azblob.UploadBufferToBlockBlob(ctx, data, blobURL, azblob.UploadToBlockBlobOptions{})
if err != nil {
	log.Fatal("Error uploading blob:", err)
}
```
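For bigger buffers, UploadBufferToBlockBlob splits the data into blocks behind the scenes, and the options let you tune that. The numbers below are just illustrative:

```go
_, err = azblob.UploadBufferToBlockBlob(ctx, data, blobURL, azblob.UploadToBlockBlobOptions{
	BlockSize:   4 * 1024 * 1024, // illustrative: split into 4 MiB blocks
	Parallelism: 8,               // illustrative: upload up to 8 blocks concurrently
})
```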
Downloading it back:

```go
get, err := blobURL.Download(ctx, 0, azblob.CountToEnd, azblob.BlobAccessConditions{}, false, azblob.ClientProvidedKeyOptions{})
if err != nil {
	log.Fatal("Error downloading blob:", err)
}
downloadedData := &bytes.Buffer{}
reader := get.Body(azblob.RetryReaderOptions{})
downloadedData.ReadFrom(reader)
reader.Close()
```
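The reader returned by Body can transparently resume a dropped connection if you give it a retry budget, which is handy on flaky networks:

```go
// Allow up to 3 retries while reading the body before giving up.
reader := get.Body(azblob.RetryReaderOptions{MaxRetryRequests: 3})
```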
Listing the blobs in the container:

```go
for marker := (azblob.Marker{}); marker.NotDone(); {
	listBlob, err := containerURL.ListBlobsFlatSegment(ctx, marker, azblob.ListBlobsSegmentOptions{})
	if err != nil {
		log.Fatal("Error listing blobs:", err)
	}
	marker = listBlob.NextMarker
	for _, blobInfo := range listBlob.Segment.BlobItems {
		fmt.Println(blobInfo.Name)
	}
}
```
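If you only want a subset, the listing options can filter server-side; the prefix below is made up for illustration:

```go
opts := azblob.ListBlobsSegmentOptions{Prefix: "reports/"} // illustrative prefix
listBlob, err := containerURL.ListBlobsFlatSegment(ctx, azblob.Marker{}, opts)
```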
And deleting a blob when you're done with it:

```go
_, err = blobURL.Delete(ctx, azblob.DeleteSnapshotsOptionNone, azblob.BlobAccessConditions{})
if err != nil {
	log.Fatal("Error deleting blob:", err)
}
```
Let's kick it up a notch with some advanced features:
Setting custom metadata on a blob:

```go
metadata := azblob.Metadata{"key1": "value1", "key2": "value2"}
_, err = blobURL.SetMetadata(ctx, metadata, azblob.BlobAccessConditions{}, azblob.ClientProvidedKeyOptions{})
```
Leases let you lock a blob for exclusive access. Note that AcquireLease returns a response object, and the lease ID comes from its LeaseID method:

```go
resp, err := blobURL.AcquireLease(ctx, "", 60, azblob.ModifiedAccessConditions{})
if err != nil {
	log.Fatal("Error acquiring lease:", err)
}
leaseID := resp.LeaseID()
// Use the leaseID for operations, then release it
_, err = blobURL.ReleaseLease(ctx, leaseID, azblob.ModifiedAccessConditions{})
```
Generating a shared access signature (SAS) lets clients hit the blob without your account key:

```go
sasQueryParams, err := azblob.BlobSASSignatureValues{
	Protocol:      azblob.SASProtocolHTTPS,
	ExpiryTime:    time.Now().UTC().Add(48 * time.Hour),
	ContainerName: "mycontainer",
	BlobName:      "myblob",
	Permissions:   azblob.BlobSASPermissions{Read: true, Write: true}.String(),
}.NewSASQueryParameters(credential)
if err != nil {
	log.Fatal("Error generating SAS:", err)
}
sasURL := blobURL.URL()
sasURL.RawQuery = sasQueryParams.Encode()
```
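sasURL is a plain url.URL, so its string form is a shareable link that works with any HTTP client until the expiry time. A quick sketch (this one also needs net/http from the standard library):

```go
fmt.Println("Shareable link:", sasURL.String())

// Anyone with the link can read the blob until the SAS expires.
resp, err := http.Get(sasURL.String())
if err != nil {
	log.Fatal("Error fetching via SAS:", err)
}
defer resp.Body.Close()
```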
Always check for errors and close your resources:
```go
if err != nil {
	log.Fatal("Error:", err)
}
defer reader.Close()
```
For concurrent operations, use goroutines and channels wisely:
```go
results := make(chan error, 10)
for i := 0; i < 10; i++ {
	go func(n int) {
		// Perform a blob operation, e.g. upload one blob per goroutine
		blob := containerURL.NewBlockBlobURL(fmt.Sprintf("blob-%d", n))
		_, err := azblob.UploadBufferToBlockBlob(ctx, data, blob, azblob.UploadToBlockBlobOptions{})
		results <- err
	}(i)
}
// Collect results
for i := 0; i < 10; i++ {
	if err := <-results; err != nil {
		log.Println("Blob operation failed:", err)
	}
}
```
For unit tests, use mocks:
```go
type mockBlobURL struct {
	azblob.BlobURL
}

func (m *mockBlobURL) Download(ctx context.Context, offset int64, count int64, ac azblob.BlobAccessConditions, rangeGetContentMD5 bool, cpk azblob.ClientProvidedKeyOptions) (*azblob.DownloadResponse, error) {
	// Return canned responses or errors here to drive your test cases.
	return nil, nil
}
```
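Since the azblob URL types are concrete structs, the usual trick is to have your own code depend on a narrow interface that both the real type and the mock satisfy. The interface name here is ours, not the package's:

```go
// blobDownloader is a hypothetical interface for code under test;
// azblob.BlobURL satisfies it, and so does the *mockBlobURL above.
type blobDownloader interface {
	Download(ctx context.Context, offset int64, count int64, ac azblob.BlobAccessConditions, rangeGetContentMD5 bool, cpk azblob.ClientProvidedKeyOptions) (*azblob.DownloadResponse, error)
}

func fetchWholeBlob(ctx context.Context, d blobDownloader) (*azblob.DownloadResponse, error) {
	return d.Download(ctx, 0, azblob.CountToEnd, azblob.BlobAccessConditions{}, false, azblob.ClientProvidedKeyOptions{})
}
```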
For integration tests, use Azurite, Microsoft's Azure Storage emulator.
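Azurite ships with a well-known development account, so pointing the same code at the emulator is mostly a matter of swapping the credential and endpoint. A minimal sketch using Azurite's published defaults (these are public test values, not secrets):

```go
// Azurite's documented development-account defaults.
const (
	devAccount = "devstoreaccount1"
	devKey     = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
)

credential, err := azblob.NewSharedKeyCredential(devAccount, devKey)
if err != nil {
	log.Fatal(err)
}
p := azblob.NewPipeline(credential, azblob.PipelineOptions{})
// Note the account name in the URL path, unlike the production scheme.
testURL, _ := url.Parse("http://127.0.0.1:10000/devstoreaccount1/test-container")
containerURL := azblob.NewContainerURL(*testURL, p)
```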
Use streaming uploads for large files so you never hold the whole thing in memory (the file name here is just illustrative):

```go
file, err := os.Open("largefile.dat") // illustrative file name
if err != nil {
	log.Fatal("Error opening file:", err)
}
defer file.Close()

reader := bufio.NewReader(file)
_, err = azblob.UploadStreamToBlockBlob(ctx, reader, blobURL, azblob.UploadStreamToBlockBlobOptions{})
```
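The stream options control the chunking. If the defaults don't suit your workload, something like this tunes them (the numbers are illustrative):

```go
_, err = azblob.UploadStreamToBlockBlob(ctx, reader, blobURL, azblob.UploadStreamToBlockBlobOptions{
	BufferSize: 4 * 1024 * 1024, // illustrative: stream in 4 MiB chunks
	MaxBuffers: 4,               // illustrative: keep up to 4 chunks in flight
})
```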
Implement retries for better reliability:
```go
retryOptions := azblob.RetryOptions{
	MaxTries:   3,
	TryTimeout: 30 * time.Second,
}
p := azblob.NewPipeline(credential, azblob.PipelineOptions{Retry: retryOptions})
```
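RetryOptions also lets you pick the backoff strategy and delays; here's a sketch with the exponential policy (the delay values are illustrative):

```go
retryOptions := azblob.RetryOptions{
	Policy:        azblob.RetryPolicyExponential, // or azblob.RetryPolicyFixed
	MaxTries:      3,
	TryTimeout:    30 * time.Second,
	RetryDelay:    2 * time.Second,  // illustrative initial delay
	MaxRetryDelay: 60 * time.Second, // illustrative cap on backoff
}
```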
And there you have it! You're now equipped to build robust Azure Blob Storage integrations in Go. Remember, the `azblob` package is your friend - it's got a ton of features we didn't even touch on here. So go forth, experiment, and build something awesome!
For more in-depth info, check out the Azure Blob Storage documentation and the azblob package docs.
Happy coding, Gophers!