
We have been working with saving virtual tape images to Azure using HTTPAPIR4.

After a lot of testing we have finally identified that Azure does not accept files this large in a single upload; their approach is to split the file into 100MB blocks of a block blob, which subsequently get committed into one file. Now while this is not a primary backup, I shudder at the thought of breaking an 80GB virtual tape image into 100MB pieces and then joining them back up.
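For a sense of scale, here is a rough count of the pieces involved (a sketch; the 80GB image and 100MB block size come from the message above, and the 50,000-committed-blocks-per-blob figure is Azure's documented limit):

```python
import math

GiB = 1024 ** 3
MiB = 1024 ** 2

image_size = 80 * GiB    # the virtual tape image size mentioned above
block_size = 100 * MiB   # the 100MB block size used in the sample script

# Each block is a separate PUT, followed by one commit request
blocks_needed = math.ceil(image_size / block_size)
print(blocks_needed)  # 820

# Azure documents a maximum of 50,000 committed blocks per blob,
# so 100MB blocks cap a single blob at roughly 4.7 TiB
max_blob = 50_000 * block_size
print(max_blob // GiB)  # 4882
```

So an 80GB image means 820 individual uploads before the final join, which is the overhead being objected to here.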

So, plan B would be to look for other cloud storage options that don't have this restriction.

Just looking for any examples that others may be using that would suit this requirement.

Below is the sample Python script supplied to do the split and join.

Thanks
Don

E.g., here is a Python script which shows how it works:

import base64
import requests

# Configuration
FILE_PATH = "path/to/your/60GB-file.bin"  # Change this to your actual file path
STORAGE_URL = "https://<storage-account>.blob.core.windows.net/<container>/<blob-name>"  # Change <blob-name> accordingly
SAS_TOKEN = "<your-sas-token>"  # Your SAS token (without the leading "?")
BLOCK_SIZE = 100 * 1024 * 1024  # 100MB per block

# Read the file and upload it block by block (Put Block)
block_ids = []
with open(FILE_PATH, "rb") as file:
    block_index = 0
    while True:
        block_data = file.read(BLOCK_SIZE)
        if not block_data:
            break  # End of file

        # Generate a base64-encoded block ID (must be unique, and all
        # IDs within a blob must be the same length)
        block_id = base64.b64encode(f"block-{block_index:05}".encode()).decode()

        # Upload the block
        block_url = f"{STORAGE_URL}?comp=block&blockid={block_id}&{SAS_TOKEN}"
        response = requests.put(block_url, data=block_data)

        if response.status_code != 201:
            print(f"Failed to upload block {block_index}: {response.text}")
            exit(1)

        block_ids.append(f"<Latest>{block_id}</Latest>")
        print(f"Uploaded block {block_index}")

        block_index += 1

# Commit all blocks into a single blob (Put Block List)
block_list_xml = (
    "<?xml version='1.0' encoding='utf-8'?>"
    f"<BlockList>{''.join(block_ids)}</BlockList>"
)
commit_url = f"{STORAGE_URL}?comp=blocklist&{SAS_TOKEN}"
commit_response = requests.put(
    commit_url,
    data=block_list_xml,
    headers={"Content-Type": "application/xml"},
)

if commit_response.status_code == 201:
    print("Upload successful!")
else:
    print(f"Failed to commit block list: {commit_response.text}")




Don Brown
Senior Consultant
P: 1300 088 400
Brisbane - Sydney - Melbourne




