Snippets
- Table of contents
- Replace the database with a backup
- Set files with licenses to public
- Export
- Reset Demo
- Add update trigger for modified field
- Data clean up
Below are some database-related code snippets used in development.
Replace the database with a backup¶
It is possible (for users with the role admin or manager) to make SQL dumps via the backend anytime: admin -> data -> SQL export. A backup is also created every time the database_upgrade.py script, which is used when upgrading to a new OpenAtlas version, is called, before any database changes are made. These backups are located at files/export and, because the date of creation is included in the filename, it is easy to see when they were made. Of course it is good practice to also store these SQL backups on another machine, e.g. in case of a hard disk failure, but this is outside the scope of OpenAtlas.
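Copying the backups to another machine is not covered by OpenAtlas itself, but a minimal sketch using rsync over SSH could look like the following. The source path, user and host are only examples and have to be adjusted to the actual setup.
# Example only: copy the export directory to a backup host, adjust path, user and host
rsync -av /var/www/openatlas/files/export/ backup@backup-host:/backups/openatlas/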
In case one of these backups is needed, let's say because an import went wrong, the workflow on a Debian machine would be as follows (a combined sketch is shown after the single steps). The commands are executed as the postgres user and it is assumed that both the database and the user are called openatlas.
- Delete the current database
dropdb openatlas
- Create a new, empty database with the owner openatlas
createdb openatlas -O openatlas
- Import the (unpacked) backup into the new database
psql openatlas < path/to/the/unpacked/backup.sql
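For convenience the three steps above can be combined into a small script. This is only a sketch, assuming the same database and user names as above and an already unpacked backup; make sure nothing is connected to the database anymore, otherwise dropdb will fail.
#!/bin/bash
# Sketch: recreate the openatlas database from an unpacked SQL backup, run as the postgres user
set -e
BACKUP=path/to/the/unpacked/backup.sql
dropdb openatlas
createdb openatlas -O openatlas
psql openatlas < "$BACKUP"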
Set files with licenses to public¶
-- Define the list of IDs of license types
WITH ids AS (
    SELECT unnest(array[20158, 12939, 211353, 18135, 128538, 12940, 116287, 12937, 196373, 12938]) AS id
),
-- Select the entity IDs that are connected through the model.link table
connected_entities AS (
    SELECT DISTINCT e.id AS entity_id
    FROM model.entity e
    JOIN model.link l ON e.id = l.domain_id OR e.id = l.range_id
    WHERE l.domain_id IN (SELECT id FROM ids)
        OR l.range_id IN (SELECT id FROM ids)
)
-- Upsert into the model.file_info table
INSERT INTO model.file_info (entity_id, public)
SELECT ce.entity_id, true
FROM connected_entities ce
ON CONFLICT (entity_id)
DO UPDATE SET public = EXCLUDED.public;
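Afterwards the result can be checked with a simple count, e.g. run as the postgres user (just a suggested check, not part of the original snippet):
psql openatlas -c "SELECT count(*) FROM model.file_info WHERE public;"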
Export¶
Database Structure¶
Export the database structure from the test database (used to avoid peculiarities of production databases) into install/1_structure.sql
pg_dump -sc --if-exists -n model -n gis -n log -n web -n import openatlas_test > install/1_structure.sql
Model Data¶
pg_dump openatlas --rows-per-insert 10000 -a -t model.cidoc_class -t model.cidoc_class_i18n -t model.cidoc_class_inheritance -t model.property -t model.property_i18n -t model.property_inheritance > install/2_data_model.sql
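To load the resulting file into a database that already has the structure applied, it can simply be fed to psql, e.g. (assuming the database is called openatlas):
psql openatlas < install/2_data_model.sql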
Single Schema¶
pg_dump -n web openatlas > /tmp/openatlas_web.sql
pg_dump -n model openatlas > /tmp/openatlas_model.sql
Windows import¶
There are mainly two ways to restore a PostgreSQL database on Windows. For details on the commands and options, please consult the pg_dump and pg_restore documentation.
Custom Format¶
Export on the Debian server as the postgres user with the following command:
pg_dump -Fc openatlas > openatlas.dump
This dump file can be used with pgAdmin4 to restore the database or with the following command in PowerShell (Git Bash doesn't work). Please adjust the variables to your needs:
pg_restore.exe --host "localhost" --port "5432" --username "openatlas" --dbname "openatlas" --clean --verbose "path to dump file"
Plain text file (.sql)¶
Export on the Debian server as the postgres user with the following command:
pg_dump --attribute-inserts openatlas > openatlas.sql
In the pgAdmin4 tool, select the corresponding database, open the query tool and choose the open file option.
Load the provided dump file and execute it (F5).
This method can be quite slow, and for larger databases (over ~10 MB) loading the file in pgAdmin4 will be a challenge.
Alternatively, the psql tool can be used, for example:
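This is only a sketch with psql.exe in PowerShell, assuming the same user and database names as above; adjust the path to the dump file.
psql.exe --host "localhost" --port "5432" --username "openatlas" --dbname "openatlas" --file "path to dump file"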
Reset Demo¶
/var/lib/postgresql/reset_demo.sh
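The content of the script is not shown here. A minimal sketch of what such a reset might do, assuming the demo database is called openatlas and a default dump is available (names and paths are assumptions, the actual script may differ), could look like this:
#!/bin/bash
# Sketch only, not the actual script: recreate the demo database from a default dump
set -e
dropdb openatlas
createdb openatlas -O openatlas
psql openatlas < /path/to/default_dump.sql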
Add update trigger for modified field¶
Replace schema.table below accordingly and execute:
CREATE TRIGGER update_modified BEFORE UPDATE ON schema.table FOR EACH ROW EXECUTE PROCEDURE model.update_modified();
Data clean up¶
Find used but missing places¶
For example after a case study separation (it is not clear whether this statement now shows really missing locations or just needed ones).
SELECT e.id, e.name, e.system_type FROM model.entity e
WHERE e.class_code = 'E53' AND e.system_type = 'place location'
    AND e.id IN (
        SELECT r.id FROM model.link lr
        JOIN model.entity r ON lr.range_id = r.id AND lr.property_code IN ('P74', 'OA8', 'OA9', 'P7'))
    AND e.id NOT IN (SELECT range_id FROM model.link WHERE property_code = 'P53');
Delete orphaned locations¶
DELETE FROM model.entity WHERE id IN (
    SELECT id FROM model.entity WHERE system_type = 'place location' AND id NOT IN (
        SELECT e.id FROM model.entity e
        JOIN model.link l ON e.id = l.range_id AND l.property_code = 'P53'
        WHERE e.class_code = 'E53' AND e.system_type = 'place location'));
Recursive Events¶
To get all child events of a given event, use the SQL below (replace ROOT_EVENT_ID at the bottom). It works but is slow and could be improved.
WITH RECURSIVE tree AS (
    SELECT e.id, ARRAY[]::INTEGER[] AS ancestors
    FROM model.entity e
    WHERE (
        SELECT s.id FROM model.entity s
        JOIN model.link l ON s.id = l.range_id AND l.domain_id = e.id
            AND l.property_id = (SELECT id FROM model.property WHERE code = 'P117')) IS NULL
    UNION ALL
    SELECT e.id, tree.ancestors || (
        SELECT s.id FROM model.entity s
        JOIN model.link l ON s.id = l.range_id AND l.domain_id = e.id
            AND l.property_id = (SELECT id FROM model.property WHERE code = 'P117'))
    FROM model.entity e, tree
    WHERE (
        SELECT s.id FROM model.entity s
        JOIN model.link l ON s.id = l.range_id AND l.domain_id = e.id
            AND l.property_id = (SELECT id FROM model.property WHERE code = 'P117')) = tree.id
)
SELECT * FROM tree WHERE ROOT_EVENT_ID = ANY(tree.ancestors);