SCORM Azure Storage
- Before starting, confirm that Azure Blob Storage is configured as the Django default storage backend (see the settings sketch after this list)
- Confirm that the SCORM XBlock is installed and working. Testing was done against the master branch up to commit b98887564e95fcb63c8cbf40eb0e2b1175f41696
- The following is the raw git diff for the changes made to the Raccoon Gang SCORM XBlock to enable Azure storage. Changes were made starting from the commit above
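A minimal sketch of the kind of Django settings involved is shown below. The setting names follow the django-storages Azure backend; the exact names, and how they are injected through lms.env.json/ENV_TOKENS, depend on the django-storages version and the deployment, so treat them as assumptions to verify against your own configuration. The get_context_student() change in the diff below builds the student launch URL from MEDIA_ROOT, so this sketch also assumes MEDIA_ROOT resolves to the public base URL of the blob container.
# Hypothetical settings sketch -- verify names against your django-storages release
DEFAULT_FILE_STORAGE = 'storages.backends.azure_storage.AzureStorage'
AZURE_ACCOUNT_NAME = 'mystorageaccount'   # assumption: your storage account name
AZURE_ACCOUNT_KEY = '<account key>'       # assumption: kept in auth.json in practice
AZURE_CONTAINER = 'uploads'               # assumption: the blob container name
# Assumption inferred from the get_context_student() diff below: MEDIA_ROOT is used
# as the URL prefix for the unzipped package, so it should point at the public
# container URL rather than a local filesystem path.
MEDIA_ROOT = 'https://mystorageaccount.blob.core.windows.net/uploads'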
Configure Azure storage in Scorm Xblock
diff --git a/scormxblock/scormxblock.py b/scormxblock/scormxblock.py
index c23aa84..c5a90f1 100644
--- a/scormxblock/scormxblock.py
+++ b/scormxblock/scormxblock.py
@@ -106,6 +104,8 @@ class ScormXBlock(XBlock):
def student_view(self, context=None):
context_html = self.get_context_student()
+ log.info("=============context============")
+ log.info(context_html)
template = self.render_template('static/html/scormxblock.html', context_html)
frag = Fragment(template)
frag.add_css(self.resource_string("static/css/scormxblock.css"))
@@ -176,6 +176,10 @@ class ScormXBlock(XBlock):
os.system('unzip {} -d {}'.format(temporary_path, path_to_file))
os.remove(temporary_path)
+ # Upload unzip files to Azure blob container
+ azure_path = os.path.dirname(path)
+ self.azure_upload(azure_path, path_to_file)
+
self.set_fields_xblock(path_to_file)
return Response(json.dumps({'result': 'success'}), content_type='application/json')
@@ -256,13 +260,14 @@ class ScormXBlock(XBlock):
}
def get_context_student(self):
+ folder_path = os.path.dirname(self.scorm_file_meta.get('path', ''))
scorm_file_path = ''
if self.scorm_file:
scheme = 'https' if settings.HTTPS == 'on' else 'http'
- scorm_file_path = '{}://{}{}'.format(
- scheme,
- configuration_helpers.get_value('site_domain', settings.ENV_TOKENS.get('LMS_BASE')),
- self.scorm_file
+ scorm_file_path = '{}{}/{}'.format(
+ settings.ENV_TOKENS.get('MEDIA_ROOT'),
+ folder_path,
+ self.path_index_page
)
return {
@@ -337,6 +342,20 @@ class ScormXBlock(XBlock):
file_descriptor.seek(0)
return sha1.hexdigest()
+ def azure_upload(self, azure_path, path_to_file):
+
+ for r,d,f in os.walk(path_to_file):
+ if f:
+ for file in f:
+ file_path_on_azure = os.path.join(r,file).replace(path_to_file,"")
+ file_path_on_local = os.path.join(r,file)
+ #block_blob_service.create_blob_from_path(container_name,azure_path+file_path_on_azure,file_path_on_local)
+ ffile = open(file_path_on_local, 'rb')
+ default_storage.save(azure_path+file_path_on_azure, ffile)
+ ffile.close()
+ log.info('Azure blob uploaded "{}"'.format(azure_path))
+ return True
+
def student_view_data(self):
"""
Inform REST api clients about original file location and it's "freshness".
@@ -360,4 +379,4 @@ class ScormXBlock(XBlock):
<scormxblock/>
</vertical_demo>
"""),
- ]
\ No newline at end of file
+ ]
- In addition, the XBlock unzips .zip SCORM packages on the server. After a successful upload to Azure, we want to delete the now-redundant files from the application server. Here is the commit.
diff --git a/scormxblock/scormxblock.py b/scormxblock/scormxblock.py
index c5a90f1..f3ec736 100644
--- a/scormxblock/scormxblock.py
+++ b/scormxblock/scormxblock.py
@@ -162,9 +162,6 @@ class ScormXBlock(XBlock):
# Now unpack it into SCORM_ROOT to serve to students later
path_to_file = os.path.join(SCORM_ROOT, self.location.block_id)
- if os.path.exists(path_to_file):
- shutil.rmtree(path_to_file)
-
if hasattr(scorm_file, 'temporary_file_path'):
os.system('unzip {} -d {}'.format(scorm_file.temporary_file_path(), path_to_file))
else:
@@ -182,6 +179,9 @@ class ScormXBlock(XBlock):
self.set_fields_xblock(path_to_file)
+ # Clear the server storage in scorm directory
+ shutil.rmtree(path_to_file)
+
return Response(json.dumps({'result': 'success'}), content_type='application/json')
@XBlock.json_handler
- At this point, SCORM packages are uploaded to Azure and then deleted from the application server. However, uploading a replacement package leaves the old files in place on Azure storage, and those stale files are still what gets referenced. The following commit lets us actually replace the files in Azure.
diff --git a/scormxblock/scormxblock.py b/scormxblock/scormxblock.py
index f3ec736..24f534b 100644
--- a/scormxblock/scormxblock.py
+++ b/scormxblock/scormxblock.py
@@ -351,6 +351,8 @@ class ScormXBlock(XBlock):
file_path_on_local = os.path.join(r,file)
#block_blob_service.create_blob_from_path(container_name,azure_path+file_path_on_azure,file_path_on_local)
ffile = open(file_path_on_local, 'rb')
+ if default_storage.exists(azure_path+file_path_on_azure):
+ default_storage.delete(azure_path+file_path_on_azure)
default_storage.save(azure_path+file_path_on_azure, ffile)
ffile.close()
log.info('Azure blob uploaded "{}"'.format(azure_path))
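For reference, after all three commits the azure_upload helper looks roughly like the sketch below. This is a readability sketch assembled from the diff hunks above, not a verbatim copy: the loop variables are renamed and the file handle is managed with a with block, but the behaviour (walk the unzipped package, delete any stale blob, then save) is the same.
    def azure_upload(self, azure_path, path_to_file):
        """Push every file in the unzipped SCORM package to Azure via Django's
        default_storage, which is configured to use the Azure backend."""
        for root, dirs, files in os.walk(path_to_file):
            for name in files:
                local_path = os.path.join(root, name)
                # Blob name: the package prefix plus the path relative to the unzip dir
                blob_name = azure_path + local_path.replace(path_to_file, "")
                # Delete any stale copy first so re-uploads actually replace the blob
                if default_storage.exists(blob_name):
                    default_storage.delete(blob_name)
                with open(local_path, 'rb') as handle:
                    default_storage.save(blob_name, handle)
        log.info('Azure blob uploaded "%s"', azure_path)
        return True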
Known Issues
This is intended as a training exercise for Azure SCORM storage: it proves the concept and explains the major steps. However, there are some known weaknesses in the approach that would need to be addressed with further development.
- The Azure library we are using does not allow uploads larger than 64 MB (likely the single-request block blob limit in the older Azure SDK). We believe upgrading the Azure storage library may get past this issue, but a newer library might not be compatible with our django-storages version.
- Better storage logic. Because the old blobs are deleted before the new ones are saved, a failed replacement upload could leave us with neither the old files nor the new ones.
- There is no status indicator or progress bar on SCORM uploads. For large files, a user may think the site has frozen when it is actually still uploading.
- Better error handling. Right now, if Azure returns an error it only reaches the logs, so a Studio user might never learn why an upload failed (see the sketch after this list).
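As a starting point for the error handling item above, here is a hypothetical sketch of how the save_scorm handler could surface failures to Studio instead of only logging them. The JSON shape and the 500 status are assumptions, not part of the existing XBlock, and the Studio front end would also need to display the returned message.
    # Hypothetical error handling inside the save_scorm handler (sketch only)
    try:
        self.azure_upload(azure_path, path_to_file)
    except Exception as exc:  # the Azure SDK raises its own exception types
        log.exception('SCORM upload to Azure failed')
        return Response(
            json.dumps({'result': 'error', 'message': str(exc)}),
            content_type='application/json',
            status=500,
        )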