Issue #1542
To do¶
- Fix upload: uploading a 14 MB image fails with 413 Request Entity Too Large although the limit is set to 20 MB (in the OpenAtlas admin)
- Fix disappearing database: the interface broke because JavaScript and CSS could no longer be loaded (500). Redeploying on Rancher didn't help. A new build with the GitLab pipeline wiped the database clean, although the code in start.sh should have prevented that.
- Remove the need for an extra branch for each instance
- Implement daily, weekly and quarterly backups and test them (database dumps only, or including files?)
- Implement and test a presentation site with OpenAtlas discovery. For the ACDH-CH demo we will use https://discovery-demo-acdh-ch.openatlas.eu
- Implement using Debian packages (instead of pip), with a script for daily updates, for security, reliability and stability reasons
Automate upgrades¶
Debian package updates¶
- Implement a mechanism for software updates with new/different packages
- Update SQL scripts are already provided in install/upgrade, named after the specific version, e.g. 6.0.0.sql. Upgrade scripts for former major versions are stored in install/upgrade/archive.
- We would need a mechanism that works outside of/before the main application, because changes may interfere with the initialization process in openatlas/__init__.py, e.g. all types are loaded in the before_request() function. -> Since every update creates a new setup, I don't think this will be a problem.
- In case the database update, which runs in a transaction, fails, the software shouldn't be updated; but we need the software update to get the upgrade file in the first place. Maybe Kubernetes can help there, e.g. abort the update altogether. -> If the transaction fails, the pipeline should fail and therefore not be deployed.
- The version update process should be "aware" of whether an update SQL is needed. The application code "knows" its version; it's tracked as VERSION in config/default.py. To make the database aware of its version we could add a value to the web/settings table, but we would still need functionality that checks if, which and in what order upgrade scripts are needed and that can deal with failed update SQLs. -> We need to make the database aware of its version in order to properly check if a database exists and needs updates.
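The check described above — which upgrade scripts are pending, and in what order — could be sketched as follows. This is a minimal sketch, not actual OpenAtlas code: the function names and the assumption that the database's version is stored as a "6.0.0"-style string are hypothetical.

```python
# Sketch: decide which upgrade SQL scripts still need to run, given the
# version recorded in the database and the files in install/upgrade/.
# Function names and the version format are assumptions for illustration.

def parse_version(name: str) -> tuple[int, int, int]:
    """Turn '6.0.0' or '6.0.0.sql' into a sortable (6, 0, 0) tuple."""
    stem = name.removesuffix('.sql')
    major, minor, patch = stem.split('.')
    return int(major), int(minor), int(patch)

def pending_upgrades(db_version: str, scripts: list[str]) -> list[str]:
    """Return upgrade scripts newer than db_version, oldest first."""
    current = parse_version(db_version)
    todo = [s for s in scripts if parse_version(s) > current]
    return sorted(todo, key=parse_version)
```

Each returned script would then be applied in its own transaction (e.g. via psql --single-transaction), updating the stored version value only on success, so a failed script leaves the database at a known version.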
Documentation¶
Packages¶
We use a Pipfile for now with Python and Heroku. Heroku will take care of security updates. To add a new package:
pipenv install <package==version>
To update pipfile.lock:
pipenv update
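The resulting entry in the Pipfile then looks like the fragment below (the package name and version are only an illustration, not a package this project actually pins):

```toml
[packages]
flask = "==2.0.1"
```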
- More on Pipfile: https://docs.pipenv.org/ https://github.com/pypa/pipfile
- How Heroku works: https://devcenter.heroku.com/articles/how-heroku-works
- What is Heroku: https://stackoverflow.com/tags/heroku/info
- Python Dependencies via Pip: https://devcenter.heroku.com/articles/python-pip
File folders¶
The folders openatlas/uploads, openatlas/export and openatlas/image_processing contain files that users upload or generate. Therefore, additional volumes have to be created with the mount point /app/openatlas/<folder> (e.g. /app/openatlas/uploads) and read-only set to false.
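In a Kubernetes deployment manifest such a mount could look like the sketch below; only the mountPath follows from the text above, while the container, volume and claim names are assumptions:

```yaml
# Sketch of one writable volume mount for user files; repeat per folder.
containers:
  - name: openatlas
    volumeMounts:
      - name: uploads
        mountPath: /app/openatlas/uploads
        readOnly: false
volumes:
  - name: uploads
    persistentVolumeClaim:
      claimName: openatlas-uploads
```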
Update with new OpenAtlas releases¶
- For Python packages Heroku/Kubernetes will use the Pipfile
- Update of npm packages is done via start.sh
- Database updates are done manually
Backup¶
Backups are made every day at 03:20 to a separate volume, which can be mounted to the container. At the moment only the last 10 backups are kept.
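A nightly job with this retention rule could be sketched as below. This is only an illustration of the described behaviour, not the deployed script: the backup path, database name and file naming are assumptions, and the dump step is guarded so the sketch is safe to run anywhere.

```shell
#!/bin/sh
# Sketch of a nightly backup job (scheduled at 03:20, e.g. via cron).
BACKUP_DIR="${BACKUP_DIR:-$HOME/backups}"
mkdir -p "$BACKUP_DIR"

# Dump the database with a timestamped file name (guarded so the
# sketch doesn't fail on machines without PostgreSQL).
if command -v pg_dump >/dev/null 2>&1; then
    pg_dump openatlas | gzip > "$BACKUP_DIR/openatlas_$(date +%Y%m%d).sql.gz"
fi

# Retention: keep only the 10 newest dumps, delete the rest.
ls -1t "$BACKUP_DIR"/openatlas_*.sql.gz 2>/dev/null | tail -n +11 | xargs -r rm --
```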
Updated by Alexander Watzinger about 3 years ago · 38 revisions